
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action


Profile

정윤서

Posts
[ECCV 2022] Language Matters – A Weakly Supervised Vision-Language Pre-training Approach for Scene Text Detection and Spotting
  • Posted on: 10/09/2024
  • Comments: 2
[CVPR 2023] Turning a CLIP Model into a Scene Text Detector
  • Posted on: 09/29/2024
  • Comments: 9
[AAAI 2023] DPText-DETR: Towards Better Scene Text Detection with Dynamic Points in Transformer
  • Posted on: 09/02/2024
  • Comments: 2
[ICCV 2023] Open-Vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models
  • Posted on: 08/26/2024
  • Comments: 1
A Report on Attending KCCV 2024
  • Posted on: 08/18/2024
  • Comments: 0
[CVPR 2023] Self-supervised Implicit Glyph Attention for Text Recognition
  • Posted on: 07/28/2024
  • Comments: 1
[ACM MM 2022] Reading and Writing: Discriminative and Generative Modeling for Self-Supervised Text Recognition
  • Posted on: 07/21/2024
  • Comments: 6
[CVPR 2024] Bridging the Gap Between End-to-End and Two-Step Text Spotting
  • Posted on: 07/07/2024
  • Comments: 2
Retrospective on the First Half of 2024 @정윤서
  • Posted on: 06/30/2024
  • Comments: 0
[TCSVT 2024] Pro-Tuning: Unified Prompt Tuning for Vision Tasks
  • Posted on: 06/24/2024
  • Comments: 4


New Posts

  • [NeurIPS 2025] VideoLucy: Deep Memory Backtracking for Long Video Understanding
  • [arXiv 2025] SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics
  • [NeurIPS 2025] Vgent: Graph-based Retrieval-Reasoning-Augmented Generation for Long Video Understanding
  • [AAAI 2025] Motion-aware Contrastive Learning for Temporal Panoptic Scene Graph Generation
  • [arXiv 2025] IGen: Scalable Data Generation for Robot Learning from Open-World Images

New Comments

  1. 김기현 on [arXiv 2025] SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics (01/12/2026)

    Hello 영규님, thank you for the comment. Regarding asynchronous inference, the paper never claims superior performance explicitly or quantitatively; qualitatively, it reports faster responsiveness and more continuous motion…

  2. 김 영규 on [arXiv 2025] SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics (01/12/2026)

    Hello 기현님, thank you for the review. I think you explained the architecture of SmolVLA well. Regarding the experiments where asynchronous inference improves performance…

  3. 김기현 on [arXiv 2025] IGen: Scalable Data Generation for Robot Learning from Open-World Images (01/12/2026)

    Hello 영규님, thank you for the great review. What struck me most while reading was that, from a single image, a sequence containing not only the robot's actions but also the visual observations…

  4. 허 재연 on [NeurIPS 2025] Don’t Just Chase “Highlighted Tokens” in MLLMs: Revisiting Visual Holistic Context Retention (01/12/2026)

    Thank you for the great review. It is interesting that pruning on a single criterion such as importance can end up keeping only tokens that carry similar information. I have one question…

  5. 허 재연 on [NeurIPS 2025] VideoLucy: Deep Memory Backtracking for Long Video Understanding (01/12/2026)

    Thank you for the great review. The key idea seems to be agent-based video summarization, but unlike existing frameworks that give overly local, frame-level answers, clips are…


Only a strong determination that never gives up makes the small difference.

Design by SejongRCV