Jinghan Jia


Room 3210

428 S Shaw LN

East Lansing, MI, USA

I am Jinghan Jia, a fourth-year Ph.D. student in the OPTML Group at Michigan State University, advised by Prof. Sijia Liu. My research focuses on advancing the trustworthiness and efficiency of AI systems, with particular emphasis on bridging theoretical foundations and real-world applications. I am actively seeking research collaborations and full-time opportunities where I can apply and extend these research directions.

My research interests include the following:

🛡️ Trustworthy and Aligned Foundation Models: My work seeks to improve the reliability, safety, and ethical alignment of foundation models. I focus on machine unlearning, LLM alignment, privacy-preserving techniques, and the development of robust AI systems that align with human values and can withstand real-world challenges.

⚡ Efficient and Scalable AI Training: I develop methods for efficient and scalable model training, including memory- and parameter-efficient fine-tuning, model sparsification, and Mixture-of-Experts architectures. This line of research aims to make large-scale models more adaptable and accessible while reducing resource requirements.

🧠 Reasoning and Advanced AI Capabilities: A key focus of my research is on enhancing the reasoning abilities of LLMs through test-time computation, reasoning-driven training, and reinforcement learning. These approaches aim to empower AI systems to address complex problems with greater transparency, adaptability, and reliability.

📈 Optimization for Modern AI: My research explores advanced optimization techniques, including gradient-free (zeroth-order) and bi-level optimization methods, to boost performance and scalability across diverse AI applications.

Research Keywords: Foundation Models (LLMs / Diffusion Models), Trustworthy AI (Unlearning, Alignment, Privacy), Efficient Training (Sparsification, Memory-/Parameter-Efficient Fine-Tuning, MoE), LLM Reasoning (Test-Time Computation, Reasoning-Enhanced Training), Machine Learning, Zeroth-Order Optimization, Bi-Level Optimization, Convex/Non-Convex Optimization

news

May 16, 2025 🥳 One paper was accepted to the ACL 2025 main conference! SEUF: Is Unlearning One Expert Enough for Mixture-of-Experts LLMs?
May 1, 2025 🥳 Three of my papers have been accepted to ICML 2025. Explore my co-first-authored work: Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond.
Dec 2, 2024 🥳 Thrilled to announce that our position paper, Rethinking Machine Unlearning for LLMs, has been accepted for publication in Nature Machine Intelligence.
Sep 25, 2024 🥳 Two papers were accepted to NeurIPS 2024, and one paper was accepted to the NeurIPS 2024 Datasets and Benchmarks track!
Sep 19, 2024 🥳 A first-author paper was accepted to the EMNLP’24 main track.

Selected publications

See the full publication list here.
  1. ICML’25
    Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond
    Chongyu Fan*, Jinghan Jia*, Yihua Zhang, and 3 more authors
    In The Forty-second International Conference on Machine Learning 2025
  2. NeurIPS’24
    WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models
    Jinghan Jia, Jiancheng Liu, Yihua Zhang, and 3 more authors
    In The Thirty-eighth Annual Conference on Neural Information Processing Systems 2024
  3. EMNLP’24
    SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning
    Jinghan Jia, Yihua Zhang, Yimeng Zhang, and 5 more authors
    In The 2024 Conference on Empirical Methods in Natural Language Processing
  4. ECCV’24
    To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images... For Now
    Yimeng Zhang*, Jinghan Jia*, Xin Chen, and 5 more authors
    In European Conference on Computer Vision 2024
  5. NAACL’24
    Leveraging LLMs for Dialogue Quality Measurement
    Jinghan Jia, Abi Komma, Timothy Leffel, and 5 more authors
    In the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics
  6. NeurIPS’23
    Model Sparsity Can Simplify Machine Unlearning
    Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, and 5 more authors
    In Thirty-seventh Conference on Neural Information Processing Systems 2023
  7. SANER’23
    CLAWSAT: Towards Both Robust and Accurate Code Models
    Jinghan Jia*, Shashank Srikant*, Tamara Mitrovska, and 4 more authors
    In the 2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)