
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action


Profile

정 윤서

[CVPR 2025] UniVAD: A Training-free Unified Model for Few-shot Visual Anomaly Detection
  • Posted on: 08/17/2025 –
  • Comments: 2 Comments
[ICCV 2025] MultiADS: Defect-aware Supervision for Multi-type Anomaly Detection and Segmentation in Zero-Shot Learning
  • Posted on: 08/11/2025 –
  • Comments: 2 Comments
[CVPR 2025] Towards Training-free Anomaly Detection with Vision and Language Foundation Models
  • Posted on: 08/04/2025 –
  • Comments: 2 Comments
2025 First-Half Retrospective
  • Posted on: 07/28/2025 –
  • Comments: No Comments
[ICLR 2025] MMAD: A Comprehensive Benchmark for Multimodal Large Language Models in Industrial Anomaly Detection
  • Posted on: 07/28/2025 –
  • Comments: 2 Comments
[CVPR 2025] Towards Zero-Shot Anomaly Detection and Reasoning with Multimodal Large Language Models
  • Posted on: 07/14/2025 –
  • Comments: 2 Comments
[AAAI 2024] (Oral) AnomalyGPT: Detecting Industrial Anomalies Using Large Vision-Language Models
  • Posted on: 07/07/2025 –
  • Comments: 8 Comments
[CVPR 2024] PromptAD: Learning Prompts with only Normal Samples for Few-Shot Anomaly Detection
  • Posted on: 06/30/2025 –
  • Comments: 2 Comments
[arXiv 2024] Char-SAM: Turning Segment Anything Model into Scene Text Segmentation Annotator with Character-level Visual Prompts
  • Posted on: 06/23/2025 –
  • Comments: 2 Comments
[TPAMI 2024] Hi-SAM: Marrying Segment Anything Model for Hierarchical Text Segmentation
  • Posted on: 06/09/2025 –
  • Comments: 4 Comments


New Posts

  • [arXiv 2025.02] SOFAR: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation
  • [arXiv 2024] Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG
  • [arXiv 2025] Accurate and Efficient Zero-shot 6D Pose Estimation with Frozen Foundation Models
  • [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering
  • [CVPR 2023] Feature Aggregated Queries for Transformer-based Video Object Detectors

New Comments

  1. 최 인하 on [arXiv 2025.02] SOFAR: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation – 09/08/2025

    Hello 재찬님, thank you for the great paper review. Reading it, I realized that in robot manipulation, not only object-centric position but also semantic orientation information is very important…

  2. 박 성준 on [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering – 09/08/2025

    Hello, 홍주영 연구원님, thank you for the great comment. The authors do acknowledge the importance of the localizer's initial performance. They also seem to trust BLIP-2's capability(?), and additionally…

  3. 박 성준 on [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering – 09/08/2025

    Hello, 정의철 연구원님, thank you for the great comment. The authors' claim is that, because the Localizer builds on a pretrained model, it already possesses a certain degree of localizing ability.…

  4. 신 인택 on [NeurIPS 2017] Attention Is All You Need – 09/08/2025

    Hello 찬미님, thank you for the reply. To answer in order: 1. The phrase "information dilution due to the attention structure" can be understood as the difference between using a single head and using multiple heads…

  5. 홍 주영 on [ICCV 2025] MobileViCLIP: An Efficient Video-Text Model for Mobile Devices – 09/08/2025

    Q1. Doesn't removing the Skip Connection & BatchNorm degrade performance? -> The reason the model structure differs between training and inference is Structural…


Only a strong tenacity that never gives up makes the small difference.

Design by SejongRCV