Jinghan Jia
Room 3210
428 S. Shaw Ln
East Lansing, MI, USA
I am Jinghan Jia, a third-year Ph.D. student in the OPTML Group at Michigan State University, advised by Prof. Sijia Liu. My research aims to advance the trustworthiness and efficiency of AI systems through optimization and machine learning techniques. My areas of focus include:
- Machine Unlearning and AI Alignment: I develop methods to align generative AI models, such as Large Language Models (LLMs) and diffusion models, with human ethical standards. This spans both aligning model behavior with human values and enabling selective removal of data from trained models. I am particularly interested in extending machine unlearning to conventional computer vision tasks, strengthening privacy and control over data.
- Adversarial Robustness: I work on strengthening the resilience of generative AI systems against adversarial attacks, commonly referred to as “jailbreaking”, so that these systems remain reliable and performant under diverse and challenging conditions.
- AI for Code: I apply AI techniques to programming, software development, and code understanding, with a focus on robustness.
- Gradient-Free Optimization: I explore optimization techniques that avoid reliance on traditional gradient computation, reducing memory usage and computational cost and thereby enabling more efficient and scalable AI solutions (see the sketch after this list).
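As a concrete illustration of the last point, here is a minimal sketch of a two-point zeroth-order gradient estimator, a standard gradient-free technique of the kind studied in this line of work. The function names and the toy quadratic loss are hypothetical, chosen only to show how a gradient can be approximated from function evaluations alone, without any backpropagation graph in memory.

```python
import numpy as np

def zo_gradient(loss, theta, mu=1e-3, n_samples=20, rng=None):
    """Two-point zeroth-order estimate of grad loss(theta).

    Uses only function evaluations:
        g ~ mean_i [ (loss(theta + mu*u_i) - loss(theta - mu*u_i)) / (2*mu) ] * u_i
    where u_i are random Gaussian directions. No backward pass is needed,
    so memory cost stays at that of forward evaluations.
    """
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        u = rng.standard_normal(theta.shape)
        grad += (loss(theta + mu * u) - loss(theta - mu * u)) / (2 * mu) * u
    return grad / n_samples

# Toy usage: minimize f(theta) = ||theta - 3||^2 without computing gradients.
f = lambda t: np.sum((t - 3.0) ** 2)
theta = np.zeros(5)
for step in range(200):
    theta -= 0.05 * zo_gradient(f, theta)
print(theta)  # approaches [3, 3, 3, 3, 3]
```

The same estimator plugs into any first-order update rule; the trade-off is extra forward passes in exchange for never storing activations for backpropagation.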
My research strives to produce AI innovations that are not only effective but also reliable and ethically grounded.
news
Sep 25, 2024 | Two papers were accepted to NeurIPS 2024, and one paper was accepted to the NeurIPS 2024 Datasets and Benchmarks track! |
---|---|
Sep 19, 2024 | A first-author paper was accepted to the EMNLP’24 main track. |
Jul 2, 2024 | A first-author paper was accepted at ECCV’24. |
Oct 24, 2023 | Grateful to be awarded the NeurIPS 2023 Scholar Award! |
Oct 22, 2023 | New preprint released: To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images… For Now. |
selected publications
See the full publication list here.
NAACL’24 | Leveraging LLMs for dialogue quality measurement. In 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2024.