Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action

  • About
    • History
    • Photo
    • Admission
  • Members
  • Publications
    • Patents
  • X-Review
  • X-Diary
  • Peer Review

Profile

이 승현

About Posts
[arXiv 2024] EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model
  • Posted on: 06/02/2025
  • Comments: 6 Comments
[ICLR 2025] Weakly-Supervised Affordance Grounding Guided by Part-Level Semantic Priors
  • Posted on: 05/26/2025
  • Comments: 8 Comments
[ICRA 2022] Affordance Learning from Play for Sample-Efficient Policy Learning
  • Posted on: 05/19/2025
  • Comments: 6 Comments
[arXiv 2025] AffordanceSAM: Segment Anything Once More in Affordance Grounding
  • Posted on: 05/12/2025
  • Comments: 4 Comments
[CVPR 2025] VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation
  • Posted on: 05/05/2025
  • Comments: 4 Comments
[CVPR 2025] Grounding 3D Object Affordance with Language Instructions, Visual Observations and Interactions
  • Posted on: 04/28/2025
  • Comments: 4 Comments
[CVPR 2024] Continual Segmentation with Disentangled Objectness Learning and Class Recognition
  • Posted on: 04/13/2025
  • Comments: 2 Comments
[ICLR 2024 (Oral)] ASID: Active Exploration for System Identification in Robotic Manipulation
  • Posted on: 04/07/2025
  • Comments: 4 Comments
[CVPR 2024] Grounding Image Matching in 3D with MASt3R
  • Posted on: 03/24/2025
  • Comments: 8 Comments
[ICRA 2024] Language-Conditioned Affordance-Pose Detection in 3D Point Clouds
  • Posted on: 03/02/2025
  • Comments: 10 Comments

New Posts

  • [ACL Findings 2025] Detecting and Mitigating Challenges in Zero-Shot Video Summarization with Video LLMs
  • [arXiv 2023] ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities
  • SIM-COT: Supervised Implicit Chain-of-Thought
  • [CVPR 2025] Token Cropr: Faster ViTs for Quite a Few Tasks
  • VIRAL: Visual Representation Alignment for Multimodal Large Language Models

New Comments

  1. 김 영규 on Human to Robot (H2R): Workshop on Sensorizing, Modeling, and Learning from Humans (10/01/2025)

    Hello 우현, thank you for reading the post. First, human video can exist in many forms, but from what I saw while attending conferences, nearly all human…

  2. 이 재찬 on [CoRL 2025] Planning from Point Clouds over Continuous Actions for Multi-object Rearrangement (10/01/2025)

    Hello 인택, thank you for reading the review. 1. I don't think that high-confidence motion == easy motion necessarily holds. Also, always…

  3. 황 찬미 on Improving Language Understanding by Generative Pre-Training (09/29/2025)

    Hello 인택, thanks for the question~!! 1. Masked self-attention means masking future positions so that the current token attends only to the current and preceding tokens; the −∞ is applied before the softmax to the future…

  4. 신 인택 on [arXiv 2023] ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities (09/29/2025)

    Hello 의철, thank you for the reply. As an answer to question 1, the paper does report performance for that part. I didn't include it in the review, but in Figure 5…

  5. 신 인택 on [arXiv 2023] ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities (09/29/2025)

    Hello 성준, thank you for the reply. To answer your question, explicitly aligning audio-vision pairs could give better performance. The paper does not directly mention…
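The masked self-attention mechanism described in comment 3 (−∞ applied to future positions before the softmax) can be sketched as follows — a minimal NumPy illustration of my own, not code from the post:

```python
import numpy as np

def causal_attention_weights(scores):
    """scores: (T, T) raw attention logits; returns row-wise softmax
    with future positions masked out (causal attention)."""
    T = scores.shape[0]
    # True above the diagonal = future positions, which must be hidden.
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    # -inf before the softmax makes exp() of those entries exactly 0.
    scores = np.where(mask, -np.inf, scores)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# With uniform logits, token 0 attends only to itself,
# while token 2 spreads its attention over tokens 0..2.
w = causal_attention_weights(np.zeros((3, 3)))
```

Each row of `w` sums to 1, and all entries above the diagonal are zero — each token attends only to itself and earlier tokens, exactly as the comment describes.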

  • RCV-Calendar
  • RCV-Github
  • Paper R/W
    • Arxiv
    • Deadline
    • Overleaf
  • Coding
    • OnlineJudge
    • Kaggle

Only strong tenacity that never gives up makes the small difference.

Design by SejongRCV