
Robotics and Computer Vision Lab

AI in Sensing, AI in Perception, AI in Action

 Posted in Publications

“실내 수직농장의 재배 제어 방식 자동화를 위한 영상 기반의 작물 성장 상태 모니터링” (Image-Based Crop Growth Monitoring for Automating Cultivation Control in Indoor Vertical Farms), 33rd Workshop on Image Processing and Image Understanding (IPIU), Feb 2021.

최 유경 · 02/01/2021

Author: 최 유경

Computer Vision, Machine Learning

Post navigation

← [CVPR2018]High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
“3rd Place Solution to NAVER LABS Mapping & Localization Challenge 2020: Indoor Track,” 33rd Workshop on Image Processing and Image Understanding (IPIU), Feb 2021. →






Only strong determination that never gives up makes the small difference.

Design by SejongRCV