
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action


Profile

이 승현

[CoRL 2024 (Oral)] D3Fields: Dynamic 3D Descriptor Field for Zero-Shot Generalizable Rearrangement
  • Posted on: 09/08/2025 – Comments: 1 Comment
[ICCV 2025] Selective Contrastive Learning for Weakly Supervised Affordance Grounding
  • Posted on: 09/01/2025 – Comments: 5 Comments
[arXiv 2025] Affordance-R1: Reinforcement Learning for Generalizable Affordance Reasoning in Multimodal Large Language Model
  • Posted on: 08/18/2025 – Comments: 2 Comments
[ICCV 2025] A0: An Affordance-Aware Hierarchical Model for General Robotic Manipulation
  • Posted on: 08/11/2025 – Comments: No Comments
[CVPRw 2024] Strategies to Leverage Foundation Model Knowledge in Object Affordance Grounding
  • Posted on: 07/28/2025 – Comments: 4 Comments
Retrospective on the First Half of 2025
  • Posted on: 07/21/2025 – Comments: 5 Comments
[ICRA 2025 (Best Paper Finalist)] UAD: Unsupervised Affordance Distillation for Generalization in Robotic Manipulation
  • Posted on: 07/14/2025 – Comments: 6 Comments
[CVPR 2025 (Highlight)] OmniManip: Towards General Robotic Manipulation via Object-Centric Interaction Primitives as Spatial Constraints
  • Posted on: 06/09/2025 – Comments: 12 Comments
[arXiv 2024] EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model
  • Posted on: 06/02/2025 – Comments: 6 Comments
[ICLR 2025] Weakly-Supervised Affordance Grounding Guided by Part-Level Semantic Priors
  • Posted on: 05/26/2025 – Comments: 8 Comments


NEW POSTS

  • [arXiv 2025.02] SOFAR: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation
  • [arXiv 2024] Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG
  • [arXiv 2025] Accurate and Efficient Zero-Shot 6D Pose Estimation with Frozen Foundation Models
  • [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering
  • [CVPR 2023] Feature Aggregated Queries for Transformer-based Video Object Detectors

New Comments

  1. 재윤 이 on [CVPR 2016] Deep Residual Learning for Image Recognition (09/09/2025)

    Hello, researcher 신인택. I read this x-review while looking to study ResNet ahead of time, and it helped me a great deal in grasping the overall flow. From a beginner's perspective…

  2. 정우 김 on [ICCV 2019] Rethinking ImageNet Pre-Training (09/09/2025)

    Hello 재연, thanks to your detailed review I understood the paper well. Thank you for the good review. During the URP program I once ran training without properly loading the pretrained weights, and none of the results…

  3. 정 의철 on [CVPR 2025] Narrating the Video: Boosting Text-Video Retrieval via Comprehensive Utilization of Frame-Level Captions (09/08/2025)

    Hello 성준, thank you for the question. First, even when different modalities enter the co-attention, their dimensions can be matched through projection. Query-aware adaptive filtering merely…

  4. 정 의철 on [CVPR 2025] Narrating the Video: Boosting Text-Video Retrieval via Comprehensive Utilization of Frame-Level Captions (09/08/2025)

    Hello 유진, thank you for the question. A video-level caption carries the global information of the video, so it can be said to capture the video's overall content.…

  5. 이상인 on [ECCV 2018] CBAM: Convolutional Block Attention Module (09/08/2025)

    Hello. I wrote this review quite a while ago, so I don't remember the paper 100%, but I think I can explain it to some extent with my current knowledge.…


Only a strong tenacity that never gives up makes the small difference.

Design by SejongRCV