Run Wang

Associate Professor of Cyber Science and Engineering

Wuhan University


I am currently a tenured associate professor in the School of Cyber Science and Engineering (SCSE) at Wuhan University (WHU). Prior to joining WHU, I worked as a postdoctoral research fellow (2019-2021) under the supervision of Prof. LIU Yang at Nanyang Technological University (NTU), Singapore. I obtained my Ph.D. (2015-2018) in Information Security at Wuhan University, advised by Prof. Lina Wang.


My research interests lie at the intersection of security, privacy, and AI. I am dedicated to improving the trustworthiness, safety, and efficiency of AI systems, as well as investigating how AI can enhance security measures. Recently, my work has concentrated on Large Language Models (LLMs), Large Vision Models (LVMs), and Multi-modal Large Models (MLMs). For a more comprehensive view of my research endeavors and academic contributions, please see my research interests and publications.


I am actively seeking highly self-motivated Undergraduate/Master/PhD students who have a strong background and interest in my research topics (but not limited to them). Please drop me an email with your CV if you are interested in working with me. Together, we have the chance to embark on a gratifying journey, confronting real-world problems and achieving substantial, tangible impact.

Working Experience

1. Wuhan University, China, Feb 2024 - Now

   Tenured Associate Professor

2. Wuhan University, China, Feb 2021 - Jan 2024

   Tenure-track Associate Professor

3. Nanyang Technological University, Singapore, Apr 2019 - Jan 2021

   Research Fellow, supervised by Prof. LIU Yang

Research

AI Security & Privacy

As model architectures become more complex and parameter counts continue to grow, models become more capable and are widely applied in fields such as images, speech, and text. However, these models face multiple security threats, both internal and external. Internally, the growth in model parameters introduces emerging risks such as insufficient explainability and new forms of bias. Externally, large models face various threats that degrade performance and violate data privacy. Our research goal is to design effective methods for identifying model vulnerabilities and to develop efficient methods for protecting models against these threats.

Efficient AI Algorithm & System

The incredible potential of large models in Artificial Intelligence Generated Content (AIGC), including cutting-edge technologies like Large Language Models (LLMs) and Diffusion Models, has revolutionized a wide range of applications spanning natural language processing, content generation, creative arts, and more. However, their sheer size and high memory and computational requirements present formidable challenges. We aim to tackle these hurdles head-on and make these advanced AI technologies more practical, democratizing access to these future-changing technologies for everyone.
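To make the memory argument concrete, here is a minimal, self-contained PyTorch sketch (illustrative only, not a description of our own methods) that applies dynamic INT8 quantization to a small placeholder model and compares checkpoint sizes; the layer sizes are arbitrary stand-ins for a much larger model.

    # Minimal sketch: shrinking a model's weight footprint with dynamic INT8 quantization.
    # The tiny Sequential model below is a hypothetical stand-in, not a research artifact.
    import io
    import torch
    import torch.nn as nn

    def checkpoint_mib(m: nn.Module) -> float:
        # Serialized state_dict size in MiB, a rough proxy for weight memory.
        buf = io.BytesIO()
        torch.save(m.state_dict(), buf)
        return buf.getbuffer().nbytes / 2**20

    model = nn.Sequential(
        nn.Linear(4096, 4096),
        nn.ReLU(),
        nn.Linear(4096, 4096),
    )

    # Store Linear weights as 8-bit integers; they are dequantized on the fly at inference.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

    print(f"fp32 checkpoint: {checkpoint_mib(model):.1f} MiB")
    print(f"int8 checkpoint: {checkpoint_mib(quantized):.1f} MiB")  # roughly 4x smaller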

AI Safety

The rapid development of Artificial Intelligence (AI) has raised concerns about the misuse of advanced technologies, particularly the emergence of DeepFakes. DeepFakes use deep learning techniques to create highly convincing fake media, posing risks such as spreading misinformation and manipulating public opinion. As DeepFake technology becomes more sophisticated, the primary objective of our research is to develop effective defense strategies (e.g., DeepFake Passive Detection, DeepFake Proactive Defense) as well as countermeasures for evasion (e.g., DeepFake Evasion Attack, Realistic Creation).
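For readers unfamiliar with the terminology, the sketch below shows the general shape of a passive DeepFake detector: a binary real-versus-fake classifier over face images. The backbone, input shapes, and dummy data are illustrative assumptions, not our published detectors.

    # Minimal sketch of passive DeepFake detection: a binary real-vs-fake image classifier.
    # The backbone choice and input shapes are generic placeholders.
    import torch
    import torch.nn as nn
    from torchvision import models

    class PassiveDetector(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = models.resnet18(weights=None)  # generic CNN feature extractor
            self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)  # single "fake" logit

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.backbone(x)  # higher logit means more likely fake

    detector = PassiveDetector()
    frames = torch.randn(4, 3, 224, 224)  # a dummy batch of face crops
    prob_fake = torch.sigmoid(detector(frames)).squeeze(1)  # per-frame fake probability
    print(prob_fake)

Roughly speaking, proactive defenses and evasion attacks sit on either side of such a classifier: the former embed protective signals into genuine media in advance, while the latter perturb fakes so that they slip past detectors.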


News

  • Jan 2024

    I was awarded (early) tenure at Wuhan University. I couldn't have done it without my students, team leader, and family. Thank you all!

  • Dec 2023

    One paper was accepted by AAAI 2024. Congrats to Mengjie Wu and Jingui Ma.

  • Nov 2023

    Our team “DeepRadar” won the Second Prize at the 2nd "HUAWEI Cup" China Post-Graduate Cyber Security Innovation Contest (“华为杯”第二届中国研究生网络安全创新大赛全国二等奖). Congrats to our students Wei Wang, Yuyang Zhang, Ziyou Liang, and Mengjie Wu.

  • Nov 2023

    Our team “泰Kun啦” won the Investment Value Award (the highest award, with an award rate below 0.1%) at the 2023 Cyber Security Outstanding Innovation Achievement Competition (Finals of College Students’ Innovation and Entrepreneurship Works), organized by the China Cybersecurity Industry Alliance (CCIA) (2023年网络安全优秀创新成果大赛总决赛大学生创新创业作品投资价值奖).

  • Oct 2023

    NSFOCUS will sponsor our research on defending against real-time interactive DeepFakes via the CCF-Kunpeng research fund (CCF-绿盟科技“鲲鹏”科研基金).

  • Oct 2023

    Student achievements: our master's students Ziheng Huang, Jixing Ren, and Yuyang Zhang won the National Scholarship (top 0.2% nationwide).

  • Oct 2023

    Student achievements: our undergraduate students Boheng Li and Yan Cai won the National Scholarship (top 0.2% nationwide).

  • Aug 2023

    Our team “泰Kun啦” won the First Prize and the Most Innovative and Entrepreneurial Value Award (the highest award) at the 16th National College Student Information Security Contest (全国大学生信息安全竞赛全国一等奖和最具创新创业价值奖). Congrats to Yuyang Zhang, Mingming Zhang, Jinzhao Lv, and Muyang Li.

  • Jul 2023

    One paper was accepted by ACM MM 2023. Congrats to Jixing Ren, Boheng Li, Tianyi She, and Wenhui Zhang.

  • Jul 2023

    One paper was accepted by ICCV 2023. Congrats to Ziheng Huang, Boheng Li, and Yan Cai.

Contact

    Office: Room B532

    Office Tel: 027-68758132

    Email: d2FuZ3J1bkB3aHUuZWR1LmNu (base64-encoded; see the decoding snippet below)

    Address: Xinjia Building, National Cybersecurity Talent and Innovation Base, Linkonggang Avenue, Dongxihu District, Wuhan, Hubei Province (湖北省武汉市东西湖区临空港大道国家网络安全人才与创新基地新珈楼)

    Address: School of Cyber Science and Engineering, Wuhan University, No. 299 Bayi Road, Wuchang District, Wuhan, Hubei Province (湖北省武汉市武昌区八一路299号武汉大学国家网络安全学院)
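The email address above is base64-encoded to deter crawlers; a few lines of standard-library Python along the following lines recover the plain-text address.

    # Decode the base64-obfuscated email address from the contact line above.
    import base64

    encoded = "d2FuZ3J1bkB3aHUuZWR1LmNu"
    print(base64.b64decode(encoded).decode("utf-8"))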