
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action


Profile: 김 형준

Posts
[2021 ICML] What Makes for End-to-End Object Detection?
  • Posted on: 10/31/2021
  • Comments: 6
[2021 ICCV Workshop] TPH-YOLOv5: Improved YOLOv5 Based on Transformer Prediction Head for Object Detection on Drone-captured Scenarios
  • Posted on: 10/21/2021
  • Comments: 2
YOLOX: Exceeding YOLO Series in 2021
  • Posted on: 10/16/2021
  • Comments: 1
Protected: [2021 AAAI Under Review] Pay Attention to Spatial Alignment for RGB-Thermal Object Detection
  • Posted on: 10/10/2021
  • Comments: Enter your password to view comments.
Review of YOLO Series 1–3
  • Posted on: 10/03/2021
  • Comments: 5
Protected: [RA-L ICRA 2022] BAANet: Learning Bi-directional Adaptive Attention Gates for Multispectral Pedestrian Detection
  • Posted on: 09/26/2021
  • Comments: Enter your password to view comments.
[2021 AIRE] Vision-based Robotic Grasping from Object Localization, Pose Estimation, Grasp Detection to Motion Planning: A Review
  • Posted on: 09/19/2021
  • Comments: 0
[2021 IEEE TNNLS] Weakly Aligned Feature Fusion for Multimodal Object Detection
  • Posted on: 09/12/2021
  • Comments: 6
[2021 ScienceDirect] Adaptive spatial pixel-level feature fusion network for multispectral pedestrian detection
  • Posted on: 09/05/2021
  • Comments: 2
[Sensors 2021] Attention Fusion for One-Stage Multispectral Pedestrian Detection
  • Posted on: 08/29/2021
  • Comments: 4


NEW POSTS

  • [CVPR 2025] Video Summarization with Large Language Models
  • [ICCV 2025] Toward Better Out-painting: Improving the Image Composition with Initialization Policy Model
  • [ICCV 2025] How Can Objects Help Video-Language Understanding?
  • [ICCV 2025] SAME: Learning Generic Language-Guided Visual Navigation with State-Adaptive Mixture of Experts
  • [NeurIPS 2025] AdaVideoRAG: Omni-Contextual Adaptive Retrieval-Augmented Efficient Long Video Understanding

New Comments

  1. 이 재찬 on [IROS 2025] VLM See, Robot Do: Human Demo Video to Robot Action Plan via Vision Language Model – 12/16/2025

    승현님, thank you for reading the review. 1. I think that is a fair question, but since this paper treats pick-and-place as a low-level primitive action, in keyframe selection the "in motion" state…

  2. 이 재찬 on [IROS 2025] VLM See, Robot Do: Human Demo Video to Robot Action Plan via Vision Language Model – 12/16/2025

    인하님, thank you for reading the review! Among the points you raised: 1. Is the velocity computed only for the wrist keypoint? -> We compute the centroid of all the keypoints on the hand and then…

  3. 이 재찬 on [IROS 2025] VLM See, Robot Do: Human Demo Video to Robot Action Plan via Vision Language Model – 12/16/2025

    영규님, thank you for reading the review. 1. That part puzzled me too while writing the review: how could the success rate possibly drop all the way to 0? Perhaps the authors ran the experiments incorrectly…

  4. 이 재찬 on [IROS 2025] VLM See, Robot Do: Human Demo Video to Robot Action Plan via Vision Language Model – 12/16/2025

    예은님, thank you for reading the review! I had not thought of that at all, and it is an entirely fair question and a good problem definition. That said, I think it is a rather difficult problem, so we…

  5. 이 재찬 on [IROS 2025] VLM See, Robot Do: Human Demo Video to Robot Action Plan via Vision Language Model – 12/16/2025

    태주님, thank you for reading the review! Q1. Is the premise that the human-video workspace and the robot workspace share an identical layout? A perfectly identical layout, down to the camera viewpoint and fine position and pose adjustments,…


Only a strong tenacity that never gives up makes the small difference.

Design by SejongRCV