
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action


Profile

이 승현

[ICLR 2025] Weakly-Supervised Affordance Grounding Guided by Part-Level Semantic Priors
  • Posted on: 05/26/2025 –
  • Comments: 8 Comments
[ICRA 2022] Affordance Learning from Play for Sample-Efficient Policy Learning
  • Posted on: 05/19/2025 –
  • Comments: 6 Comments
[arXiv 2025] AffordanceSAM: Segment Anything Once More in Affordance Grounding
  • Posted on: 05/12/2025 –
  • Comments: 4 Comments
[CVPR 2025] VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation
  • Posted on: 05/05/2025 –
  • Comments: 4 Comments
[CVPR 2025] Grounding 3D Object Affordance with Language Instructions, Visual Observations and Interactions
  • Posted on: 04/28/2025 –
  • Comments: 4 Comments
[CVPR 2024] Continual Segmentation with Disentangled Objectness Learning and Class Recognition
  • Posted on: 04/13/2025 –
  • Comments: 2 Comments
[ICLR 2024 (Oral)] ASID: Active Exploration for System Identification in Robotic Manipulation
  • Posted on: 04/07/2025 –
  • Comments: 4 Comments
[CVPR 2024] Grounding Image Matching in 3D with MASt3R
  • Posted on: 03/24/2025 –
  • Comments: 8 Comments
[ICRA 2024]Language-Conditioned Affordance-Pose Detection in 3D Point Clouds
  • Posted on: 03/02/2025 –
  • Comments: 10 Comments
[arXiv 2024] GAPartManip: A Large-scale Part-centric Dataset for Material-Agnostic Articulated Object Manipulation
  • Posted on: 02/24/2025 –
  • Comments: 4 Comments

Conference Deadline

NEW POST

  • [AAAI 2025] Does VLM Classification Benefit from LLM Description Semantics?
  • [ICLR 2026] VisionTrim: Unified Vision Token Compression for Training-Free MLLM Acceleration
  • [RSS 2025] DEXOP: A Device for Robotic Transfer of Dexterous Human Manipulation
  • [ICLR 2024] Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition
  • [CVPR 2026] Thinking Beyond Labels: Vocabulary-Free Fine-Grained Recognition using Reasoning-Augmented LMMs

New Comment

  1. 이 재윤 on [CVPR 2026] SARMAE: Masked Autoencoder for SAR Representation Learning – 05/11/2026

    Hello Woojin, thanks for the good question. The reason I got into this field is that our team's industry project targets SAR object detection, and as a follow-up to that project…

  2. 이 재윤 on [CVPR 2026] SARMAE: Masked Autoencoder for SAR Representation Learning – 05/11/2026

    Hello Jungwoo, thanks for the good question. DINOv3 is kept frozen and used only to extract image patch features in the optical branch, while the SAR branch uses a standard ViT…

  3. 이 재윤 on [CVPR 2026] SARMAE: Masked Autoencoder for SAR Representation Learning – 05/11/2026

    Hello Intaek, thanks for the good question. As you noted, in the SAR-1M dataset some SAR images come with matched optical image pairs while others do not…

  4. 이 재윤 on [AAAI 2025] Does VLM Classification Benefit from LLM Description Semantics? – 05/11/2026

    Hello Yeeun, thanks for the good review. In the description selection step, rather than simply picking the text with the highest similarity to images of the target class…

  5. 최 인하 on [RSS 2025] DEXOP: A Device for Robotic Transfer of Dexterous Human Manipulation – 05/11/2026

    Hello Seunghyun, thanks for the good question. The project page includes separate qualitative video results of tasks performed with the fingertip nail. For example, on the floor…


Only a strong, unyielding determination makes the small difference.

Design by SejongRCV