
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action

Profile

이 재찬

Posts
[EMNLP 2024] LUQ: Long-text Uncertainty Quantification for LLMs
  • Posted on: 06/09/2025
  • Comments: 8
[ICLR 2024] Online Continual Learning For Interactive Instruction Following Agents
  • Posted on: 05/26/2025
  • Comments: 4
[CoRL 2023 Oral] Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance
  • Posted on: 05/19/2025
  • Comments: 6
[ICLR 2025] PhysBench: Benchmarking and Enhancing Vision-Language Models for Physical World Understanding
  • Posted on: 05/12/2025
  • Comments: 10
[ICRA 2017] Learning Modular Neural Network Policies for Multi-Task and Multi-Robot Transfer
  • Posted on: 04/14/2025
  • Comments: 4
[RA-L 2022] Q-attention: Enabling Efficient Learning for Vision-based Robotic Manipulation
  • Posted on: 04/07/2025
  • Comments: 2
[IROS 2024] CoPa: General Robotic Manipulation through Spatial Constraints of Parts with Foundational Model
  • Posted on: 03/24/2025
  • Comments: 2
[CoRL 2024] ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation
  • Posted on: 03/04/2025
  • Comments: 2
KRoC 2025 Attendance Report
  • Posted on: 02/16/2025
  • Comments: 0
[RSS 2024] MOKA: Open-World Robotic Manipulation through Mark-Based Visual Prompting
  • Posted on: 02/10/2025
  • Comments: 6

New Posts

  • [TCSVT 2024] Question-Aware Global-Local Video Understanding Network for Audio-Visual Question Answering
  • [CVPR 2025] Video Summarization with Large Language Models
  • [ICCV 2025] Toward Better Out-painting: Improving the Image Composition with Initialization Policy Model
  • [ICCV 2025] How Can Objects Help Video-Language Understanding?
  • [ICCV 2025] SAME: Learning Generic Language-Guided Visual Navigation with State-Adaptive Mixture of Experts

New Comments

  1. 이 재찬 on [IROS 2025] VLM See, Robot Do: Human Demo Video to Robot Action Plan via Vision Language Model (12/16/2025)

    Seunghyun, thank you for reading the review. 1. I think it is a valid question, but since this paper treats pick-and-place as the low-level primitive action, in keyframe selection the notion of being in motion…

  2. 이 재찬 on [IROS 2025] VLM See, Robot Do: Human Demo Video to Robot Action Plan via Vision Language Model (12/16/2025)

    Inha, thank you for reading the review! Among the points you raised: 1. Did we compute the velocity only for the wrist keypoint? -> We compute the centroid of all the keypoints on the hand and then… (a rough sketch of this centroid-velocity idea is given after this comment list)

  3. 이 재찬 on [IROS 2025] VLM See, Robot Do: Human Demo Video to Robot Action Plan via Vision Language Model (12/16/2025)

    Yeonggyu, thank you for reading the review. 1. That part puzzled me as well while I was writing the review. How could the success rate possibly drop all the way to 0? Maybe the authors ran the experiments incorrectly…

  4. 이 재찬 on [IROS 2025] VLM See, Robot Do: Human Demo Video to Robot Action Plan via Vision Language Model (12/16/2025)

    Yeeun, thank you for reading the review! I had not thought of that at all; it is a completely valid question and seems like a good problem definition. That said, I think it is a rather difficult problem, so we…

  5. 이 재찬 on [IROS 2025] VLM See, Robot Do: Human Demo Video to Robot Action Plan via Vision Language Model (12/16/2025)

    Taeju, thank you for reading the review! Q1. Is the premise that the human-demo-video workspace and the robot workspace have an identical layout? A layout that is perfectly identical down to the camera viewpoint and fine position and pose adjustments…
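The centroid-velocity detail in comment 2 above can be sketched in a few lines. This is a minimal illustration only: the comment excerpt states just that the centroid of all hand keypoints is computed, so the array shape, frame rate, speed threshold, and the function name select_keyframes below are all hypothetical assumptions for illustration, not the paper's actual implementation.

    import numpy as np

    def select_keyframes(hand_keypoints, fps=30.0, speed_thresh=0.02):
        """Sketch of velocity-based keyframe selection from hand keypoints.

        hand_keypoints: (T, K, 3) array of 3D hand keypoints per frame.
        fps and speed_thresh are illustrative values, not from the paper.
        """
        # Centroid of all hand keypoints in each frame, as the comment describes
        centroids = hand_keypoints.mean(axis=1)                      # (T, 3)
        # Finite-difference speed of the centroid between consecutive frames
        speeds = np.linalg.norm(np.diff(centroids, axis=0), axis=1) * fps
        # Frames where the hand is nearly stationary become keyframe candidates
        return np.where(speeds < speed_thresh)[0] + 1

    # Usage on a dummy 120-frame demo with 21 hand keypoints
    keyframes = select_keyframes(np.random.rand(120, 21, 3))

Under a pick-and-place primitive (see comment 1), such low-speed frames would plausibly correspond to grasp and place moments, which is presumably why distinguishing "in motion" frames matters for keyframe selection.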

Only a strong tenacity that never gives up makes the small difference.
