
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action


Daily Archive: February 1, 2020

Posted in Publications

“Single-Shot Adaptive Fusion Network for Robust Multispectral Pedestrian Detection,” 32nd Workshop on Image Processing and Image Understanding (IPIU), Feb 2020.

김 지원 · 02/01/2020
Posted in Publications

“A Lightweighting Technique for Pedestrian Detection Algorithms Using Automatic Mixed Precision,” 32nd Workshop on Image Processing and Image Understanding (IPIU), Feb 2020.

김 지원 · 02/01/2020

Conference Deadlines

  • [BMVC 2023] 2023.05.31 
  • [ICCV 2023] 2023.03.08 
  • [CVPR 2023] 2022.11.12
  • [ICRA 2023] 2022.09.16
  • [AAAI 2022] 2022.08.16
  • [NeurIPS 2022] 2022.05.20
  • [ECCV 2022] 2022.03.08


Recent Posts

  • [NeurIPS 2021] CLIP-It! Language-Guided Video Summarization
  • [ICLR 2025] PhysBench: Benchmarking and Enhancing Vision-Language Models for Physical World Understanding
  • [ICLR 2025] How new data permeates LLM knowledge and how to dilute it
  • [ECCV 2024] Scene-Graph ViT: End-to-End Open-Vocabulary Visual Relationship Detection
  • [arXiv 2025] AffordanceSAM: Segment Anything Once More in Affordance Grounding
  • [arXiv 2025] RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning
  • [CVPR 2024] OMNIPARSER: A Unified Framework for Text Spotting, Key Information Extraction and Table Recognition
  • [CVPR 2024] ECoDepth: Effective Conditioning of Diffusion Models for Monocular Depth Estimation
  • [ECCV 2020] End-to-End Object Detection with Transformers
  • [CVPR 2022] SGTR: End-to-end Scene Graph Generation with Transformer


Only relentless, unyielding persistence makes the small difference.

Design by SejongRCV