- 👋 Hi, I’m Qingyun Li (李青云).
- 👀 I’m interested in multimodal models/data, perception (detection/segmentation), and agent frameworks.
- 🌱 I’m currently a Ph.D. candidate at Harbin Institute of Technology (HIT), supervised by Prof. Yushi Chen.
- 🐳 I participated in research at OpenGVLab, collaborating with Xue Yang, Wenhai Wang, and Jifeng Dai.
- 💞️ I was an active contributor to MMDetection, collaborating with Shilong Zhang and Haian Huang.
Pinned repositories
- [OpenGVLab/OmniCorpus](https://github.com/OpenGVLab/OmniCorpus): [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text
- [OpenGVLab/all-seeing](https://github.com/OpenGVLab/all-seeing): [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of the Open World
- [VisionXLab/mllm-mmrotate](https://github.com/VisionXLab/mllm-mmrotate): [IGARSS 2025 Oral] A Simple Aerial Detection Baseline of Multimodal Language Models
- [VisionXLab/sam-mmrotate](https://github.com/VisionXLab/sam-mmrotate): SAM (Segment Anything Model) for generating rotated bounding boxes with MMRotate; used as a comparison method for H2RBox-v2
- [open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection): OpenMMLab Detection Toolbox and Benchmark