About Me

I'm an undergraduate researcher at Jiangnan University, School of AI & Computer Science (Class of 2027). My work spans Multimodal Learning, AI Safety, LLM Reasoning, and AI for Science (Computational Biology).

  • Four papers published or under review as first or co-first author (1 CCF-B, 3 CCF-C)
  • iGEM 2025 Gold Medalist — presented in Paris as dry-lab lead

Research Interests

Multimodal Learning & VLMs  ·  Generative AI Safety & Deepfake Detection
LLM Reasoning & Evaluation  ·  AI for Science (Protein Design / Genomics)

Publications

| # | Title | Venue | Role | Status |
|---|-------|-------|------|--------|
| 1 | MambaGuard: A CLIP-Mamba Approach for OOD Generated Image Detection | PRCV 2026 (CCF-C) | Co-first Author | Accepted |
| 2 | SHINE: A Neuro-Symbolic Approach to Language-Driven Travel Planning | ICAPS 2026 (CCF-B) | First Author | Under Review |
| 3 | MLLM-based Image Forgery Detection and Localization | ICIC 2026 (CCF-C) | First Author | Under Review |
| 4 | Response-Pattern Enhanced Item Response Theory for Large Language Model Evaluation | IJCNN 2026 (CCF-C) | First Author | Under Review |

Featured Projects

AMP Forge

iGEM 2025 Gold Medal — Antimicrobial Peptide de novo Design Platform

A Transformer-VAE plus latent-diffusion architecture that fuses protein language model embeddings (ESM-2 / ProtT5 / Ankh) for conditional AMP generation across six generation modes.

PyTorch Diffusion Models PLM React
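For flavor, here is a toy sketch of the conditional latent-diffusion sampling loop such a platform is built around. Everything here is a stand-in: the denoiser, the noise schedule, and the latent dimension are illustrative, not AMP Forge's actual code.

```python
import math
import random

def sample_latent(denoise, cond, dim=8, steps=10):
    """Toy DDPM-style ancestral sampling in a VAE latent space.

    `denoise(z, t, cond)` is a stand-in for a trained latent-diffusion
    model; `cond` encodes the chosen generation mode / property prompt.
    A real pipeline would decode the final z with the VAE decoder into
    a peptide sequence.
    """
    z = [random.gauss(0.0, 1.0) for _ in range(dim)]  # start from pure noise
    for t in range(steps, 0, -1):
        eps_hat = denoise(z, t, cond)                 # predicted noise
        alpha = 1.0 - 0.02 * t                        # toy linear schedule
        z = [(zi - (1.0 - alpha) * ei) / math.sqrt(alpha)
             for zi, ei in zip(z, eps_hat)]
    return z

# usage with a trivial stand-in denoiser
latent = sample_latent(lambda z, t, c: [0.0] * len(z), cond="high-activity")
print(len(latent))  # 8
```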

Boltz-Universal

Universal Biomolecular Rational Design System

End-to-end protein-ligand complex design covering small molecules, peptides, and DNA/RNA aptamers. Improved design pLDDT from 0.55 to 0.75+.

Boltz-1 LigandMPNN PyRosetta
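Confidence gains like this typically come from an iterative propose-score-accept loop. A minimal, hypothetical sketch of that pattern follows; the scorer is a toy stand-in for a folding-model confidence metric such as Boltz-1 pLDDT, not the project's actual pipeline.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def redesign(seq, score, rounds=200, seed=0):
    """Greedy hill-climbing over single-residue mutations.

    `score(seq)` stands in for a structure-confidence metric; a real
    pipeline would fold the candidate and read out pLDDT here.
    """
    rng = random.Random(seed)
    best, best_s = seq, score(seq)
    for _ in range(rounds):
        i = rng.randrange(len(best))
        cand = best[:i] + rng.choice(AMINO_ACIDS) + best[i + 1:]
        s = score(cand)
        if s > best_s:                # accept only strict improvements
            best, best_s = cand, s
    return best, best_s

# toy scorer: fraction of alanines, just to show the loop climbing
seq, s = redesign("MKTW" * 5, lambda x: x.count("A") / len(x))
```

In practice the proposal step would be a learned designer (e.g. an inverse-folding model) rather than random point mutation, but the accept-if-better control flow is the same.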

Evo2 Fine-tuning

Genomic Language Model Domain Adaptation

LoRA fine-tuning of Evo2-1B on the BioNeMo Framework with Megatron-LM distributed preprocessing; a full bio-sequence deep learning pipeline covering preprocessing, training, inference, embeddings, and evaluation.

BioNeMo Megatron-LM LoRA
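The LoRA update itself is simple: the frozen weight W is augmented with a trainable low-rank term (alpha/r) * B @ A, so only r * (d_in + d_out) parameters are trained instead of d_out * d_in. A dependency-free toy illustration (not the BioNeMo pipeline):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(x, W, A, B, alpha=16, r=2):
    """y = x @ (W + (alpha/r) * B @ A)^T, the LoRA-adapted layer.

    W (d_out x d_in) is frozen; only the low-rank factors
    A (r x d_in) and B (d_out x r) are trained.
    """
    delta = matmul(B, A)                       # d_out x d_in, rank <= r
    scale = alpha / r
    W_eff = [[w + scale * d for w, d in zip(wr, dr)]
             for wr, dr in zip(W, delta)]
    return matmul([x], [list(col) for col in zip(*W_eff)])[0]  # x @ W_eff^T

W = [[1, 0], [0, 1]]          # frozen 2x2 weight
A = [[1, 1]]                  # trainable low-rank factor, r = 1
B = [[0], [0]]                # zero-init, so the update starts as a no-op
print(lora_forward([3, 4], W, A, B, alpha=1, r=1))  # [3.0, 4.0]
```

With B initialized to zeros (LoRA's usual init) the adapted layer reproduces the frozen model exactly, and training then moves only A and B.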

SHINE System

Neuro-Symbolic Travel Planning

Hybrid LLM + symbolic reasoning architecture with constraint-aware DFS. Achieved a 77.27% pass rate on the ChinaTravel benchmark (state of the art).

LLM RAG Symbolic AI
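The constraint-aware DFS idea can be sketched in a few lines. Everything below is an illustrative toy, not SHINE's actual planner: the search prunes any partial plan that already violates a hard constraint, and in the full system the LLM front end would translate the natural-language request into the `constraints` predicates.

```python
def plan_dfs(days, candidates, constraints, plan=None):
    """Depth-first search over day-by-day activity choices, pruning any
    partial plan that already violates a hard constraint.

    `candidates[d]` lists the options for day d; each constraint is a
    predicate over a (possibly partial) plan.
    """
    plan = plan or []
    if any(not c(plan) for c in constraints):   # prune infeasible prefixes
        return None
    if len(plan) == days:
        return plan                             # complete, feasible plan
    for option in candidates[len(plan)]:
        result = plan_dfs(days, candidates, constraints, plan + [option])
        if result is not None:
            return result
    return None

# toy instance: 2 days, budget <= 100, no repeated attraction
candidates = [[("museum", 60), ("park", 20)], [("museum", 60), ("tower", 50)]]
constraints = [
    lambda p: sum(cost for _, cost in p) <= 100,        # budget
    lambda p: len({name for name, _ in p}) == len(p),   # no repeats
]
print(plan_dfs(2, candidates, constraints))  # [('park', 20), ('museum', 60)]
```

Because both constraints are checked on prefixes, the branch starting with the 60-cost museum is abandoned before day two is fully enumerated.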

Tech Stack

Deep Learning & LLMs

PyTorch HuggingFace DeepSpeed CUDA

Architectures & Techniques

Transformer Mamba/SSM Diffusion LoRA RAG CLIP

AI for Science

ESM-2 ProtT5 PyRosetta BioNeMo

Languages & Tools

Python JavaScript React LaTeX Linux Git


"The best way to predict the future is to invent it." — Alan Kay

Pinned Repositories

  1. AMP-Forge (Public)

     AMP Forge is an antimicrobial peptide generation project based on PLM embeddings, a VAE, and latent diffusion.

     Python

  2. evo_fine_tune (Public)

     Fine-tuning pipeline for NVIDIA BioNeMo Evo2 1B on genomic sequences, including preprocessing, training, inference, embeddings, and evaluation.

     Python

  3. SHINE (Public)

     SHINE-LLM-Travel-Planner

     Python