
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action


Profile

이 승현

Posts
[ECCV Workshop 2022] TransNet: Transparent Object Manipulation Through Category-Level Pose Estimation
  • Posted on: 05/19/2024
  • Comments: 2 Comments
[arXiv 2024] Leveraging Positional Encoding for Robust Multi-Reference-Based Object 6D Pose Estimation
  • Posted on: 05/12/2024
  • Comments: 2 Comments
[3DV 2022 (Oral)] PIZZA: A Powerful Image-only Zero-Shot Zero-CAD Approach to 6 DoF Tracking
  • Posted on: 05/05/2024
  • Comments: 2 Comments
[CVPR 2024] NOPE: Novel Object Pose Estimation from a Single Image
  • Posted on: 04/28/2024
  • Comments: 6 Comments
[CVPR 2022] Templates for 3D Object Pose Estimation Revisited: Generalization to New Objects and Robustness to Occlusions
  • Posted on: 04/07/2024
  • Comments: No Comments
[CVPR 2023] SCANet: Self-Paced Semi-Curricular Attention Network for Non-Homogeneous Image Dehazing
  • Posted on: 04/01/2024
  • Comments: 8 Comments
[CVPR 2023] AShapeFormer: Semantics-Guided Object-Level Active Shape Encoding for 3D Object Detection via Transformers
  • Posted on: 03/24/2024
  • Comments: 4 Comments
[CVPR 2024] MatchU: Matching Unseen Objects for 6D Pose Estimation from RGB-D Images
  • Posted on: 03/17/2024
  • Comments: No Comments
[ECCV 2022] Zero-Shot Category-Level Object Pose Estimation
  • Posted on: 03/04/2024
  • Comments: 4 Comments
[RA-L 2023] i2c-net: Using Instance-Level Neural Networks for Monocular Category-Level 6D Pose Estimation
  • Posted on: 02/25/2024
  • Comments: No Comments

New Posts

  • [arXiv 2025.02] SOFAR: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation
  • [arXiv 2024] Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG
  • [arXiv 2025] Accurate and efficient Zero-shot 6D pose estimation with frozen foundation models
  • [NIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering
  • [CVPR 2023] Feature Aggregated Queries for Transformer-based Video Object Detectors

New Comments

  1. 박 성준 on [NIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering, 09/09/2025

    Hello, Hyunwoo. Thank you for the kind comment. Yes, as you noted in your comment, A and B in Table 3 sample 32 and 4 frames respectively, so when sparsely sampled…

  2. 신 인택 on [ICLR 2024] CLIPSELF: VISION TRANSFORMER DISTILLS ITSELF FOR OPEN-VOCABULARY DENSE PREDICTION, 09/09/2025

    Hello, Hyunwoo. Thank you for the reply. To answer your questions: 1. Based on paper [1] below, the reason for removing the self-attention in the last block is that the CLS…

  3. 박 성준 on [NIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering, 09/09/2025

    Hello, Hwang Yujin. Thank you for the kind comment. For keyframes, the authors simply extract them based on their similarity to the natural-language query. However, BLIP-2 itself also has image-text…

  4. 신 인택 on [ICLR 2024] CLIPSELF: VISION TRANSFORMER DISTILLS ITSELF FOR OPEN-VOCABULARY DENSE PREDICTION, 09/09/2025

    Hello, Woojin. Thank you for the answer. I have not read other papers on the self-distillation process, so I cannot speak to the usual setup, but in this paper, with the same architecture…

  5. 신 인택 on [ICLR 2024] CLIPSELF: VISION TRANSFORMER DISTILLS ITSELF FOR OPEN-VOCABULARY DENSE PREDICTION, 09/09/2025

    Hello, Jaeyeon. Thank you for the reply. 1. That part is not explained in detail in the paper and no code was released, but based on the code the authors shared in a GitHub issue…


Only the strong tenacity to never give up makes the small difference.

Design by SejongRCV