Course information

Course overview

What comes to mind when you see the word “trustworthy”? In the era of machine learning, deep learning, and generative AI, the need for trustworthiness has never been more pressing. The key to ensuring AI safety is to build a great wall of trustworthiness. In this course, we will cover several key topics related to trustworthy AI, including, but not limited to: robustness, generalization, responsibility, privacy, security, safety, and interpretability. By taking this course, students will form a basic understanding of the essential concepts of trustworthiness, learn how to think beyond an existing AI system, develop key skills for dealing with different untrustworthy conditions, read relevant papers, and even write up their own research ideas. The course will take flexible forms, including lectures, guest lectures, paper reading, homework, and group projects.

Syllabus

Note 1: the following table contains all planned content but may be subject to change, since I may be traveling to conferences or other events, and guest lecturers may need to reschedule due to personal matters. Please keep an eye on my emails.

Note 2: each class lasts 80 minutes = 55 min lecture + 5 min buffer + 20 min presentation. Each student is expected to give one presentation. Students who do not present during the lecture sessions can present in the remaining classes (i.e., week 13, 14, or 15).

| Week | Topic/Content | Resources/Suggested Reading | Note |
| --- | --- | --- | --- |
| 1 (01/23) | L01: Introduction (Slides, Video) | 1. KDD 2023 tutorial: Trustworthy Machine Learning<br>2. Trustworthy AI: From Principles to Practices | Homework 1 starts |
| 2 (01/27, 01/29) | Guest lecture: Kunpeng Liu, Towards Smarter LLMs: Stages and Innovations in Large Language Models Reasoning, Portland State University | 1. Trustworthy AI Explained (video)<br>2. Microsoft Trustworthy AI (video)<br>3. Google CEO and the future of AI (video)<br>4. Trustworthy AI in healthcare (video)<br>5. The Next Frontier: Sam Altman on the Future of AI and Society (video) | |
| 3 (02/03, 02/05) | L02: Transfer learning (Slides) | 1. Introduction to Transfer Learning<br>2. A Comprehensive Survey on Transfer Learning | |
| 4 (02/10, 02/12) | L03: Domain Generalization (Slides) | 1. Generalizing to Unseen Domains: A Survey on Domain Generalization<br>2. IJCAI 2022 Tutorial on Domain Generalization<br>3. DIVERSIFY: A General Framework for Time Series Out-of-Distribution Detection and Generalization<br>4. AAAI 2024 tutorial on OOD generalization for time series | Homework 1 due; Homework 2 starts |
| 5 (02/17, 02/19) | L04: Adversarial robustness (Slides) | 1. Explaining and Harnessing Adversarial Examples<br>2. Towards Deep Learning Models Resistant to Adversarial Attacks<br>3. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | |
| 6 (02/24, 02/26) | L05: Backdoor attack (Slides, Video) | 1. Communication-Efficient Learning of Deep Networks from Decentralized Data<br>2. Deep Learning with Differential Privacy | |
| 7 (03/03, 03/05) | L06: AI Privacy (Slides); guest lecture on 03/05: Songgaojun Deng, Assistant Professor, Eindhoven University of Technology (TU/e) | 1. FedCLIP: Fast Generalization and Personalization for CLIP in Federated Learning<br>2. MetaFed: Federated Learning among Federations with Cyclic Knowledge Distillation for Personalized Healthcare | Homework 3 starts |
| 8 (03/10, 03/12) | Spring Break | | |
| 9 (03/17, 03/19) | L07: Federated Learning (Slides); L08: Large Language Models (Slides) | 1. On the Opportunities and Risks of Foundation Models<br>2. What is Generative AI and How Does It Work? (video) | Homework 2 due |
| 10 (03/24, 03/26) | L09: Trustworthy Large Language Models | 1. CulturePark: Boosting Cross-cultural Understanding in Large Language Models<br>2. PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts<br>3. Constitutional AI: Harmlessness from AI Feedback | |
| 11 (03/31, 04/02) | L09: Trustworthy Large Language Models (Slides) | 1. DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks<br>2. The Good, the Bad, and Why: Unveiling Emotions in Generative AI<br>3. Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks | Homework 3 due |
| 12 (04/07, 04/09) | Guest lecture: Yiqiao Jin, PhD student, Georgia Institute of Technology; L10: Fairness and Interpretability (Slides) | 1. Fairness and Abstraction in Sociotechnical Systems<br>2. ["Why Should I Trust You?": Explaining the Predictions of Any Classifier](https://dl.acm.org/doi/pdf/10.1145/2939672.2939778?) | Homework 4 starts |
| 13 (04/14, 04/16) | L11: Self-supervised learning (Slides) | 1. SimCLR: A Simple Framework for Contrastive Learning of Visual Representations<br>2. Masked Autoencoders Are Scalable Vision Learners<br>3. Momentum Contrast for Unsupervised Visual Representation Learning | |
| 14 (04/21, 04/23) | L12: Semi-supervised learning | 1. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence<br>2. FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling<br>3. SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-Supervised Learning | |
| 15 (04/28, 04/30) | Preparing for the final project | | Homework 4 due |

Grading

Acknowledgements