Yunqi Hong


yunqihong@ucla.edu

I am a second-year PhD student in the Computer Science Department at UCLA, advised by Prof. Cho-Jui Hsieh.

My research focuses on LLM post-training, inference, and downstream applications. I am currently working on LLM reinforcement learning, reward modeling, and text-to-image generation. Previously, I explored LLM automatic prompt optimization, model interpretability, scalable graph adversarial attacks, graph representation learning, and recommender systems.

I also collaborate with Prof. Neil Y.C. Lin on developing LLM-driven methods for biomedical research.

🙌 I’m actively looking for research internships for Summer 2026. Feel free to reach out if you are interested.

selected publications

  1. NeurIPS 2025
    Unlabeled Data Improves Fine-Grained Image Zero-shot Classification with Multimodal LLMs
    Yunqi Hong, Sohyun An, Andrew Bai, Neil YC Lin, and Cho-Jui Hsieh
    Advances in Neural Information Processing Systems, 2025
  2. Preprint
    IRIS: Intrinsic Reward Image Synthesis
    Yihang Chen, Yuanhao Ban, Yunqi Hong, and Cho-Jui Hsieh
    arXiv preprint arXiv:2509.25562, 2025
  3. Preprint
    Adaptive Diagnostic Reasoning Framework for Pathology with Multimodal Large Language Models
    Yunqi Hong, Johnson Kao, Liam Edwards, Nein-Tzu Liu, Chung-Yen Huang, and 3 more authors
    arXiv preprint arXiv:2511.12008, 2025
  4. EMNLP 2025
    QG-CoC: Question-Guided Chain-of-Captions for Large Multimodal Models
    Kuei-Chun Kao, Hsu Tzu-Yin, Yunqi Hong, Ruochen Wang, and Cho-Jui Hsieh
    In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, 2025