Jinghan Jia


Room 3210

428 S Shaw LN

East Lansing, MI, USA

I am Jinghan Jia, a third-year Ph.D. student in the OPTML Group at Michigan State University, advised by Prof. Sijia Liu. My research aims to advance the trustworthiness and efficiency of AI systems through optimization and machine learning techniques. My areas of focus include:

  1. Machine Unlearning and AI Alignment: I develop methods to align generative AI models, such as Large Language Models (LLMs) and Diffusion Models, with human values, and to remove the influence of selected data from trained models. I am particularly interested in extending machine unlearning to conventional computer vision tasks, strengthening privacy and control over data.
  2. Adversarial Robustness: I work to strengthen the resilience of generative AI systems against adversarial attacks, commonly referred to as "jailbreaking," so that these systems remain reliable and perform consistently under diverse and challenging conditions.
  3. AI for Code: I apply AI to improve the robustness of programming, software development, and code understanding.
  4. Gradient-Free Optimization: I explore optimization techniques that avoid reliance on traditional gradient computation, reducing memory usage and computational cost and thereby enabling more efficient and scalable AI solutions.

My research strives to produce AI innovations that are not only effective but also reliable and ethically sound.

news

Sep 25, 2024 :partying_face: Two papers were accepted to NeurIPS 2024, and one paper was accepted to the NeurIPS 2024 Datasets and Benchmarks track!
Sep 19, 2024 :partying_face: A first-author paper was accepted to the EMNLP'24 main track.
Jul 2, 2024 :partying_face: A first-author paper was accepted to ECCV'24.
Oct 24, 2023 :partying_face: Grateful to receive the NeurIPS 2023 Scholar Award!
Oct 22, 2023 :partying_face: Released a preprint: To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images… For Now.

Selected publications

See the full publication list here.
  1. NeurIPS’24
    WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models
    Jinghan Jia, Jiancheng Liu, Yihua Zhang, and 3 more authors
    In The Thirty-eighth Annual Conference on Neural Information Processing Systems 2024
  2. EMNLP’24
    SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning
    Jinghan Jia, Yihua Zhang, Yimeng Zhang, and 5 more authors
    In The 2024 Conference on Empirical Methods in Natural Language Processing 2024
  3. ECCV’24
    To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images... For Now
    Yimeng Zhang*, Jinghan Jia*, Xin Chen, and 5 more authors
    In European Conference on Computer Vision 2024
  4. NAACL’24
    Leveraging LLMs for Dialogue Quality Measurement
    Jinghan Jia, Abi Komma, Timothy Leffel, and 5 more authors
    In 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics 2024
  5. NeurIPS’23
    Model Sparsity Can Simplify Machine Unlearning
    Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, and 5 more authors
    In Thirty-seventh Conference on Neural Information Processing Systems 2023
  6. SANER’23
    CLAWSAT: Towards Both Robust and Accurate Code Models
    Jinghan Jia*, Shashank Srikant*, Tamara Mitrovska, and 4 more authors
    In 2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER) 2023