
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action

  • About
    • History
    • Photo
    • Admission
  • Members
  • Publications
    • Patents
  • X-Review
  • X-Diary
  • Peer Review

Profile

이 승현

[CVPR 2025 (Highlight)] OmniManip: Towards General Robotic Manipulation via Object-Centric Interaction Primitives as Spatial Constraints
  • Posted on: 06/09/2025 – 11 Comments
[arXiv 2024] EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model
  • Posted on: 06/02/2025 – 6 Comments
[ICLR 2025] Weakly-Supervised Affordance Grounding Guided by Part-Level Semantic Priors
  • Posted on: 05/26/2025 – 8 Comments
[ICRA 2022] Affordance Learning from Play for Sample-Efficient Policy Learning
  • Posted on: 05/19/2025 – 6 Comments
[arXiv 2025] AffordanceSAM: Segment Anything Once More in Affordance Grounding
  • Posted on: 05/12/2025 – 4 Comments
[CVPR 2025] VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation
  • Posted on: 05/05/2025 – 4 Comments
[CVPR 2025] Grounding 3D Object Affordance with Language Instructions, Visual Observations and Interactions
  • Posted on: 04/28/2025 – 4 Comments
[CVPR 2024] Continual Segmentation with Disentangled Objectness Learning and Class Recognition
  • Posted on: 04/13/2025 – 2 Comments
[ICLR 2024 (Oral)] ASID: Active Exploration for System Identification in Robotic Manipulation
  • Posted on: 04/07/2025 – 4 Comments
[CVPR 2024] Grounding Image Matching in 3D with MASt3R
  • Posted on: 03/24/2025 – 8 Comments


New Posts

  • [CVPR 2025] Masking meets Supervision: A Strong Learning Alliance
  • [CVPR 2024] PromptAD: Learning Prompts with only Normal Samples for Few-Shot Anomaly Detection
  • [ICRA 2024] Universal Visual Decomposer: Long-Horizon Manipulation Made Easy
  • [CVPR 2025] DiscoVLA: Discrepancy Reduction in Vision, Language, and Alignment for Parameter-Efficient Video-Text Retrieval
  • [WACV 2024] DTrOCR: Decoder-only Transformer for Optical Character Recognition

New Comments

  1. 신 인택 on [CVPR 2025] Masking meets Supervision: A Strong Learning Alliance – 07/01/2025

    Hello Jeongmin, thank you for the clean review. As you mentioned, the slight move back toward a supervised-learning-based approach can be seen as both a strength and a weakness. I…

  2. 이 상인 on [arXiv 2025] [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster – 06/30/2025

    Hello, thank you for reading the review. Ah, yes, having covered this topic in several reviews and seminars so far, unless a paper offers a new pruning strategy I tend to point out…

  3. 이 상인 on [arXiv 2025] [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster – 06/30/2025

    Hello, thank you for reading the review. In my view, dataset-based analysis can certainly contain errors such as bias. Also, as with Jooyoung's question below, certain…

  4. 이 상인 on [arXiv 2025] [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster – 06/30/2025

    Hello, thank you for reading the review. First of all, I fully agree with your question. I also think that addressing task-specific relevance requires text relevance. For now, the current benchmarking…

  5. 신 정민 on [AAAI 2025] Zero-shot Depth Completion via Test-time Alignment with Affine-invariant Depth Prior – 06/30/2025

    Hi. I heard this is the go-to spot for comments, so I'm leaving one. Early in the review you wrote, "This is because each sensor has its own limitations. For example, LiDAR…


Only a strong tenacity that never gives up makes the small difference.

Design by SejongRCV