Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action

  • About
    • History
    • Photo
    • Admission
  • Members
  • Publications
    • Patents
  • X-Review
  • X-Diary
  • Peer Review

Profile

이 승현

About Posts
[ICCV 2025] A0: An Affordance-Aware Hierarchical Model for General Robotic Manipulation
  • Posted on: 08/11/2025
  • Comments: No Comments
[CVPRW 2024] Strategies to Leverage Foundation Model Knowledge in Object Affordance Grounding
  • Posted on: 07/28/2025
  • Comments: 4 Comments
2025 First-Half Retrospective
  • Posted on: 07/21/2025
  • Comments: 5 Comments
[ICRA 2025 (Best Paper Finalist)] UAD: Unsupervised Affordance Distillation for Generalization in Robotic Manipulation
  • Posted on: 07/14/2025
  • Comments: 6 Comments
[CVPR 2025 (Highlight)] OmniManip: Towards General Robotic Manipulation via Object-Centric Interaction Primitives as Spatial Constraints
  • Posted on: 06/09/2025
  • Comments: 12 Comments
[arXiv 2024] EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model
  • Posted on: 06/02/2025
  • Comments: 6 Comments
[ICLR 2025] Weakly-Supervised Affordance Grounding Guided by Part-Level Semantic Priors
  • Posted on: 05/26/2025
  • Comments: 8 Comments
[ICRA 2022] Affordance Learning from Play for Sample-Efficient Policy Learning
  • Posted on: 05/19/2025
  • Comments: 6 Comments
[arXiv 2025] AffordanceSAM: Segment Anything Once More in Affordance Grounding
  • Posted on: 05/12/2025
  • Comments: 4 Comments
[CVPR 2025] VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation
  • Posted on: 05/05/2025
  • Comments: 4 Comments

New Posts

  • [NeurIPS 2025] VideoLucy: Deep Memory Backtracking for Long Video Understanding
  • [arXiv 2025] SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics
  • [NeurIPS 2025] Vgent: Graph-based Retrieval-Reasoning-Augmented Generation for Long Video Understanding
  • [AAAI 2025] Motion-aware Contrastive Learning for Temporal Panoptic Scene Graph Generation
  • [arXiv 2025] IGen: Scalable Data Generation for Robot Learning from Open-World Images

New Comments

  1. 김기현 on [arXiv 2025] IGen: Scalable Data Generation for Robot Learning from Open-World Images (01/12/2026)

    Hello Yeonggyu, thank you for the great review. What struck me most while reading it was that, from a single image, a sequence containing not only the robot's actions but also its visual observations…

  2. 허 재연 on [NeurIPS 2025] Don't Just Chase "Highlighted Tokens" in MLLMs: Revisiting Visual Holistic Context Retention (01/12/2026)

    Thank you for the great review. It is interesting that pruning by a single criterion such as importance can end up keeping only tokens that carry similar information. I have one question…

  3. 허 재연 on [NeurIPS 2025] VideoLucy: Deep Memory Backtracking for Long Video Understanding (01/12/2026)

    Thank you for the great review. So the key point is to perform video summarization with an agent-based approach, but unlike existing frameworks that produce answers that are too local at the frame level, clips…

  4. 이 예은 on [NIPS 2017] Attention Is All You Need (01/11/2026)

    Hello Inha, thank you for the great review. Your clear explanation really helped me study. I have a question about the positional encoding part: why, of all choices, a sinusoid-shaped…

  5. 이 승현 on [CVPR 2025] Compositional Caching for Training-free Open-vocabulary Attribute Detection (01/08/2026)

    Thank you for the question. The paper does not discuss other combinations of φ_db and φ_llm, such as adding them instead of multiplying. (There is nothing about it in the Supplementary Material either.)…

  • RCV-Calendar
  • RCV-Github
  • Paper R/W
    • Arxiv
    • Deadline
    • Overleaf
  • Coding
    • OnlineJudge
    • Kaggle

Only the strong tenacity that never gives up makes the small difference.

Design by SejongRCV