
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action

  • About
    • History
    • Photo
    • Admission
  • Members
  • Publications
    • Patents
  • X-Review
  • X-Diary
  • Peer Review

Profile

이 승현

[RA-L 2024] Uncertainty-Aware Suction Grasping for Cluttered Scenes
  • Posted on: 11/04/2024
  • Comments: 6 Comments
[CoRL 2024 Oral] Retrieval-Based Affordance Transfer for Generalizable Zero-Shot Robotic Manipulation
  • Posted on: 09/30/2024
  • Comments: 5 Comments
[IROS 2024] OVGNet: A Unified Visual-Linguistic Framework for Open-Vocabulary Robotic Grasping
  • Posted on: 09/02/2024
  • Comments: 3 Comments
[arXiv 2024] WorldAfford: Affordance Grounding based on Natural Language Instructions
  • Posted on: 08/25/2024
  • Comments: 6 Comments
[CVPR 2024] AffordanceLLM: Grounding Affordance from Vision Language Models
  • Posted on: 08/18/2024
  • Comments: 4 Comments
[CVPR 2023] LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding
  • Posted on: 08/04/2024
  • Comments: 4 Comments
[CVPR 2024] Open-vocabulary object 6D pose estimation
  • Posted on: 07/21/2024
  • Comments: 4 Comments
[CVPR 2024] FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects
  • Posted on: 07/15/2024
  • Comments: 2 Comments
[NeurIPS 2020] Object-Centric Learning with Slot Attention
  • Posted on: 07/07/2024
  • Comments: 8 Comments
CVPR Trip Report @ 이승현
  • Posted on: 07/01/2024
  • Comments: No Comments


NEW POSTS

  • [ACL Findings 2025] Detecting and Mitigating Challenges in Zero-Shot Video Summarization with Video LLMs
  • [arXiv 2023] ONE-PEACE: EXPLORING ONE GENERAL REPRESENTATION MODEL TOWARD UNLIMITED MODALITIES
  • SIM-CoT: Supervised Implicit Chain-of-Thought
  • [CVPR 2025] Token Cropr: Faster ViTs for Quite a Few Tasks
  • VIRAL: Visual Representation Alignment for Multimodal Large Language Models

New Comments

  1. 김 영규 on Human to Robot (H2R): Workshop on Sensorizing, Modeling, and Learning from Humans – 10/01/2025

    Hello 우현, thank you for reading the post. First of all, human video can exist in several forms, but nearly all of the human… that I saw while attending the conference…

  2. 이 재찬 on [CoRL 2025] Planning from Point Clouds over Continuous Actions for Multi-object Rearrangement – 10/01/2025

    Hello 인택, thank you for reading the review. 1. First, I do not think that high-confidence motion == easy motion necessarily holds. Also, always…

  3. 황 찬미 on Improving Language Understanding by Generative Pre-Training – 09/29/2025

    Hello 인택, thank you for the question~!! 1. Masked self-attention means masking the future positions so that the current token attends only to the current and previous tokens; the −∞ is applied before the softmax to the future…

  4. 신 인택 on [arXiv 2023] ONE-PEACE: EXPLORING ONE GENERAL REPRESENTATION MODEL TOWARD UNLIMITED MODALITIES – 09/29/2025

    Hello 의철, thank you for the reply. In answer to question 1, the performance for that part is reported in the paper. I did not include it in the review, but in Figure 5…

  5. 신 인택 on [arXiv 2023] ONE-PEACE: EXPLORING ONE GENERAL REPRESENTATION MODEL TOWARD UNLIMITED MODALITIES – 09/29/2025

    Hello 성준, thank you for the reply. To answer the question, explicitly aligning audio-vision pairs may give better performance. A direct mention in the paper…

  • RCV-Calendar
  • RCV-Github
  • Paper R/W
    • Arxiv
    • Deadline
    • Overleaf
  • Coding
    • OnlineJudge
    • Kaggle

Only strong, unyielding tenacity makes the small difference.

Design by SejongRCV