
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action

 Posted in Peer Review, X-Review

Protected: [PeerReview] A Three-Stage Enhancement Framework for TIR Images with Hazy Atmospheric Background

신 정민 01/24/2021

This content is password protected. To view it, please enter the password below:

Author: 신 정민

Post navigation

← DepressNet – Visually Interpretable Representation Learning for Depression Recognition from Facial Images
[NeurIPS 2020] FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence →

Conference Deadlines

  • [BMVC 2023] 2023.05.31 
  • [ICCV 2023] 2023.03.08 
  • [CVPR 2023] 2022.11.12
  • [ICRA 2023] 2022.09.16
  • [AAAI 2022] 2022.08.16
  • [NeurIPS 2022] 2022.05.20
  • [ECCV 2022] 2022.03.08

Tags

3D, adversarial, adversarial validation, attention, AutoML, autonomous driving, BoW, CDVA, compression, contrastive learning, data augmentation, descriptor, GAN, ICP, LIFT, NIP, NLP, NT-Xent, object detection, point clouds, range image, self-supervised, SIFT, transformer, video retrieval, Waymo, basic research lab, underwater robot, challenge

Categories

B.S., BoW.2020, Conference, Director, M.S., News, Paper, Patents, Peer Review, Ph.D., Publications, RCVWS.2020, Videos, X-Course, X-Diary, X-Project, X-Review, Uncategorized

Recent Posts

  • [CVPR 2023] Align and Attend: Multimodal Summarization with Dual Contrastive Losses
  • [EMNLP 2024] LUQ: Long-text Uncertainty Quantification for LLMs
  • [CVPR 2025(Highlight)] OmniManip: Towards General Robotic Manipulation via Object-Centric Interaction Primitives as Spatial Constraints
  • [NeurIPS 2021] Aligning Pretraining for Detection via Object-Level Contrastive Learning
  • [TPAMI 2024] Hi-SAM: Marrying Segment Anything Model for Hierarchical Text Segmentation
  • [IROS 2024] ShapeGrasp: Zero-Shot Task-Oriented Grasping with Large Language Models through Geometric Decomposition
  • [COLING 2025] Less is More: A Simple yet Effective Token Reduction Method for Efficient Multi-modal LLMs
  • [arXiv 2025] Depth Anything with Any Prior
  • [CVPR 2025] Rethinking Noisy Video-Text Retrieval via Relation-aware Alignment
  • [CVPR 2022] Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation

Recent Comments

  • 황 유진 on [arXiv 2025] Video Summarization with Large Language Models
  • 신 인택 on [EMNLP 2024] LUQ: Long-text Uncertainty Quantification for LLMs
  • 신 인택 on [CVPR 2020] On Recognizing Texts of Arbitrary Shapes with 2D Self-Attention
  • 신 인택 on [CVPR 2016] Deep Residual Learning for Image Recognition
  • 홍 주영 on [CVPR 2025] Rethinking Noisy Video-Text Retrieval via Relation-aware Alignment

Only a strong, never-give-up tenacity makes the small difference.
