
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action


Profile

신 정민

About Posts
[NeurIPS2022] CroCo: Self-supervised Pre-training for 3D Vision Tasks by Cross-view Completion
  • Posted on: 01/15/2023 –
  • Comments: 2 Comments
[ECCV2022] MultiMAE: Multi-modal Multi-task Masked Autoencoders
  • Posted on: 01/06/2023 –
  • Comments: No Comments
Self-supervised Learning
  • Posted on: 12/15/2022 –
  • Comments: No Comments
[ICLR2019] ImageNet-Trained CNNs are Biased Towards Texture; Increasing Shape Bias Improves Accuracy And Robustness
  • Posted on: 12/09/2022 –
  • Comments: 3 Comments
[CVPR2022] Toward Practical Monocular Indoor Depth Estimation
  • Posted on: 12/03/2022 –
  • Comments: 4 Comments
[CVPR2021] Reducing Domain Gap by Reducing Style Bias
  • Posted on: 11/27/2022 –
  • Comments: 4 Comments
[CVPR2022] Iterative Deep Homography Estimation
  • Posted on: 10/30/2022 –
  • Comments: 2 Comments
[CVPR2022] (Oral) Splicing ViT Features for Semantic Appearance Transfer
  • Posted on: 09/22/2022 –
  • Comments: 4 Comments
Wrapping Up the First Half of 2022
  • Posted on: 08/27/2022 –
  • Comments: 2 Comments
[CVPR2022] InstaFormer: Instance-Aware I2I Translation with Transformer
  • Posted on: 08/21/2022 –
  • Comments: 2 Comments


New Posts

  • [IJCV 2025] Guiding Audio-Visual Question Answering with Collective Question Reasoning
  • [NeurIPS2025] VideoLucy: Deep Memory Backtracking for Long Video Understanding
  • [arXiv 2025] SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics
  • [NIPS2025] Vgent: Graph-based Retrieval-Reasoning-Augmented Generation For Long Video Understanding
  • [AAAI 2025] Motion-aware Contrastive Learning for Temporal Panoptic Scene Graph Generation

New Comments

  1. 신 인택 on [NIPS 2017] Attention Is All You Need – 01/13/2026

    Hello Inha, I see you covered the Transformer. Both when I first encountered the Transformer and even now, while using cross attention or self attention in modules, how the computation…

  2. 김기현 on [arXiv 2025] SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics – 01/12/2026

    Hello Yeonggyu, thank you for the comment. Regarding asynchronous inference, the paper makes no explicit, quantitative claim of superior performance; qualitatively, it reports faster responsiveness and more continuous motion…

  3. 김 영규 on [arXiv 2025] SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics – 01/12/2026

    Hello Gihyeon, thanks for the review. I think you explained the structure of SmolVLA well. In the experimental results for asynchronous inference, the performance improvement…

  4. 김기현 on [arXiv 2025] IGen: Scalable Data Generation for Robot Learning from Open-World Images – 01/12/2026

    Hello Yeonggyu, thanks for the great review. What particularly struck me while reading was that, from a single image, a sequence containing not only the robot's actions but also its visual observations…

  5. 허 재연 on [NIPS 2025] Don’t Just Chase “Highlighted Tokens” in MLLMs: Revisiting Visual Holistic Context Retention – 01/12/2026

    Thanks for the great review. It is interesting that pruning by a single criterion such as importance can end up keeping only tokens that carry similar information. I have one question…


Only relentless determination that never gives up makes the small difference.

Design by SejongRCV