
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action

  • About
    • History
    • Photo
    • Admission
  • Members
  • Publications
    • Patents
  • X-Review
  • X-Diary
  • Peer Review

Profile

신 정민

Posts
[CVPR2022] RFNet: Unsupervised Network for Mutually Reinforcing Multi-modal Image Registration and Fusion
  • Posted on: 05/20/2022
  • Comments: 1 Comment
[AAAI2017] Unsupervised Deep Learning for Optical Flow
  • Posted on: 05/08/2022
  • Comments: 1 Comment
[CVPR2021] Deep Rectangling for Image Stitching: A Learning Baseline
  • Posted on: 04/17/2022
  • Comments: No Comments
[CVPR2019] Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving
  • Posted on: 04/10/2022
  • Comments: 2 Comments
[NeurIPS2020] Swapping Autoencoder for Deep Image Manipulation
  • Posted on: 03/27/2022
  • Comments: 4 Comments
MPViT: Multi-Path Vision Transformer for Dense Prediction
  • Posted on: 03/18/2022
  • Comments: 2 Comments
[CVPR2018] Pyramid Stereo Matching Network
  • Posted on: 03/11/2022
  • Comments: 1 Comment
Visual Attention Network
  • Posted on: 03/04/2022
  • Comments: 1 Comment
[ICLR2022] (Spotlight) How Do Vision Transformers Work?
  • Posted on: 02/20/2022
  • Comments: 1 Comment
Robust Mutual Learning for Semi-supervised Semantic Segmentation
  • Posted on: 02/13/2022
  • Comments: 4 Comments


New Posts

  • [CoRL 2025] Planning from Point Clouds over Continuous Actions for Multi-object Rearrangement
  • [ACCV2024] Vision language models are blind: Failing to translate detailed visual features into words
  • Improving Language Understanding by Generative Pre-Training
  • [CoRL 2025] O3Afford: One-Shot 3D Object-to-Object Affordance Grounding for Generalizable Robotic Manipulation
  • [CoRL 2025] One View, Many Worlds: Single-Image to 3D Object Meets Generative Domain Randomization for One-Shot 6D Pose Estimation

New Comments

  1. 김 영규 on [CoRL 2025(Oral)] X-Sim: Cross-Embodiment Learning via Real-to-Sim-to-Real (09/16/2025)

    Hello Inha, thank you for reading the review. To answer your first question: the method defines the object's trajectory as a dense reward, so that through reinforcement learning the manipulator follows the trajectory…

  2. 김 영규 on [CoRL 2025(Oral)] X-Sim: Cross-Embodiment Learning via Real-to-Sim-to-Real (09/16/2025)

    Hello Jaechan, thank you for the comment. By "an ablation that runs RL with the reward changed in a different way," do you mean the object-centric reward versus the motion-centric reward?…

  3. 허 재연 on [CVPR 2023] Feature Aggregated Queries for Transformer-based Video Object Detectors (09/16/2025)

    Since the basic queries are randomly initialized, they do not carry the visual information of the corresponding frame. Using them together can still help during training…

  4. 신 인택 on [CVPR 2024] Open-Vocabulary Calibration for Fine-tuned CLIP (09/15/2025)

    Hello Yeeun, thank you for the reply. For the first question, your understanding is correct. Without fine-tuning, the base and novel classes naturally show a similar distribution…

  5. 신 인택 on [CVPR 2024] Open-Vocabulary Calibration for Fine-tuned CLIP (09/15/2025)

    Hello Jaeyoon, thank you for the reply. The temperature divides the logits by a constant T before the softmax turns them into probabilities, so that the resulting probabilities are not skewed too extremely… (see the sketch below)
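The temperature scaling described in the last comment can be summarized with a minimal sketch (Python; the function name and example values are illustrative assumptions, not code from the reviewed paper):

import numpy as np

def softmax_with_temperature(logits, T=1.0):
    # Divide the logits by the constant T before the softmax; a larger T
    # flattens the distribution so no class probability becomes too extreme.
    scaled = np.asarray(logits, dtype=np.float64) / T
    scaled -= scaled.max()  # subtract the max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [4.0, 1.0, 0.5]
print(softmax_with_temperature(logits, T=1.0))   # sharp, overconfident distribution
print(softmax_with_temperature(logits, T=10.0))  # flatter, less extreme distribution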

  • RCV-Calendar
  • RCV-Github
  • Paper R/W
    • Arxiv
    • Deadline
    • Overleaf
  • Coding
    • OnlineJudge
    • Kaggle

Only strong tenacity that never gives up makes the small difference.

Design by SejongRCV