
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action


Profile

이 승현

Posts
[ICLR 2024 (Oral)] ASID: Active Exploration for System Identification in Robotic Manipulation
  • Posted on: 04/07/2025 –
  • Comments: 4 Comments
[CVPR 2024] Grounding Image Matching in 3D with MASt3R
  • Posted on: 03/24/2025 –
  • Comments: 8 Comments
[ICRA 2024] Language-Conditioned Affordance-Pose Detection in 3D Point Clouds
  • Posted on: 03/02/2025 –
  • Comments: 10 Comments
[arXiv 2024] GAPartManip: A Large-scale Part-centric Dataset for Material-Agnostic Articulated Object Manipulation
  • Posted on: 02/24/2025 –
  • Comments: 4 Comments
[arXiv 2024] GEAL: Generalizable 3D Affordance Learning with Cross-Modal Consistency
  • Posted on: 02/17/2025 –
  • Comments: 8 Comments
[CVPR Workshop 2024] What does CLIP know about peeling a banana?
  • Posted on: 02/10/2025 –
  • Comments: 8 Comments
[IROS 2024 (Oral)] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models
  • Posted on: 01/20/2025 –
  • Comments: 4 Comments
[arXiv 2025] SeqAfford: Sequential 3D Affordance Reasoning via Multimodal Large Language Model
  • Posted on: 01/13/2025 –
  • Comments: 6 Comments
[arXiv 2024] UniAff: A Unified Representation of Affordances for Tool Usage and Articulation with Vision-Language Models
  • Posted on: 01/06/2025 –
  • Comments: 4 Comments
[이승현] Looking Back on 2024
  • Posted on: 12/28/2024 –
  • Comments: No Comments

Conference Deadline

New Posts

  • [CoRL 2025] Steering Your Diffusion Policy with Latent Space Reinforcement Learning
  • [CVPR 2025] Scale Efficient Training for Large Datasets
  • [AAAI 2026] SM3Det: A Unified Model for Multi-Modal Remote Sensing Object Detection
  • [ICLR 2026] HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model
  • [RSS 2025] Robot Data Curation with Mutual Information Estimators

New Comments

  1. 안 우현 on [arXiv 2026] Less Is More: Scalable Visual Navigation from Limited Data (03/24/2026)

    Hello Juyoung, thank you for the thoughtful comment. First, the traversability map that MPPI uses is not information estimated directly from the video; rather, a CNN takes the elevation map corresponding to each frame and…

  2. 안 우현 on [ICRA 2026] NavDP: Learning Sim-to-Real Navigation Diffusion Policy with Privileged Information Guidance (03/24/2026)

    Hello Woojin, thank you for the thoughtful comment. Your understanding is correct. And as mentioned above, the ESDF is best understood as being used only as privileged information during the training stage in simulation (on a static map)…

  3. 안 우현 on [CVPR 2025] CityWalker: Learning Embodied Urban Navigation from Web-Scale Videos (03/24/2026)

    Hello Jungwoo, thank you for the thoughtful comment. I also expected a large performance gap compared to real data, but if you look at the experimental results above, in terms of zero-shot performance…

  4. 안 우현 on [CVPR 2025] CityWalker: Learning Embodied Urban Navigation from Web-Scale Videos (03/24/2026)

    Hello Woojin, thank you for the thoughtful comment. I described the MAOE evaluation metric above, so please refer to that. My answer to your second question is also covered above…

  5. 안 우현 on [CVPR 2025] CityWalker: Learning Embodied Urban Navigation from Web-Scale Videos (03/24/2026)

    Hello Kihyun, thank you for the thoughtful comment. The scenario used in this method may look different from a reader's standpoint, but the authors argue that it is not at all simple and is in fact challenging.…


Only strong, unyielding determination that never gives up makes the small difference.

Design by SejongRCV