
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action


Profile

홍 주영

[ECCV 2024] Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection
  • Posted on: 09/02/2024 – 8 Comments
[CVPR 2022] Entropy-based Active Learning for Object Detection with Progressive Diversity Constraint
  • Posted on: 08/19/2024 – No Comments
[ICML 2021] (CLIP) Learning Transferable Visual Models From Natural Language Supervision
  • Posted on: 08/04/2024 – 16 Comments
[NeurIPS 2022] Flamingo: a Visual Language Model for Few-Shot Learning
  • Posted on: 07/22/2024 – 4 Comments
[NeurIPS 2023] Visual Instruction Tuning
  • Posted on: 07/08/2024 – 4 Comments
First-Half 2024 Retrospective @홍주영
  • Posted on: 06/30/2024 – No Comments
[CVPR 2024] Active Prompt Learning in Vision Language Models
  • Posted on: 06/10/2024 – 8 Comments
[CVPR 2022] Active Learning by Feature Mixing
  • Posted on: 06/02/2024 – 2 Comments
[ICML 2023] SAAL: Sharpness-Aware Active Learning
  • Posted on: 05/19/2024 – 6 Comments
[NIPS 2017] Neural Discrete Representation Learning
  • Posted on: 05/06/2024 – 3 Comments


New Posts

  • [CoRL 2024] 3D Diffuser Actor: Policy Diffusion with 3D Scene Representations
  • [CVPR 2025] Unbiased Video Scene Graph Generation via Visual and Semantic Dual Debiasing
  • [TPAMI 2018] SEED: Semantics Enhanced Encoder-Decoder Framework for Scene Text Recognition
  • [NeurIPS 2024] Introspective Planning: Aligning Robots’ Uncertainty with Inherent Task Ambiguity
  • [ECCV 2024] FreeZe: Training-free zero-shot 6D pose estimation with geometric and vision foundation models

New Comments

  1. 안 우현 on [ICLR 2025] DEPTH PRO: Sharp Monocular Metric Depth In Less Than a Second (07/15/2025)

    Hello 영규님, thank you for reading. Of course, since the Patch Encoder also downsamples the entire image to 1x1 and takes it as a single patch, it can capture global information to some extent…

  2. 안 우현 on [ICLR 2025] DEPTH PRO: Sharp Monocular Metric Depth In Less Than a Second (07/15/2025)

    Hello 우진님, thank you for reading. Feature 6 from the already-trained depth estimation network is used in a frozen state, and a small CNN head is attached on top of it…

  3. 손 우진 on [CVPR 2024] SAM-6D: Segment Anything Model Meets Zero-Shot 6D Object Pose Estimation (07/15/2025)

    Hello 정민님. Thank you for leaving a review and questions. First, the proposal set M you mentioned refers to the object-level mask maps inferred through SAM. What SAM outputs…

  4. 이 재찬 on [EMNLP 2024] LUQ: Long-text Uncertainty Quantification for LLMs (07/15/2025)

    Hello 영규님, thank you for reading the review. That's right, it was used to mean a clear object that is semantically distinguishable. The second one, as in "saying nothing because there is nothing to say"…

  5. 이 재찬 on [EMNLP 2024] LUQ: Long-text Uncertainty Quantification for LLMs (07/15/2025)

    Hello 정민님, thank you for reading the review. 1. In the research I want to pursue, when decomposing robot tasks with an LLM, the effect of hallucination…


Only a strong tenacity that never gives up makes the small difference.
