Instructor: Jindong Wang
Office hours: 3:30–4:30pm, Wednesday
Teaching assistant: Haoyang Jiang
Email group: [email protected]
Duration: 01/22/2025 – 05/02/2025
When: 2–3:20pm, Monday and Wednesday
Where: Integrated Science Center 3291
Website: https://go.jd92.wang/spring25
What comes to mind when you hear the word “trustworthy”? In the era of machine learning, deep learning, and generative AI, the need for trustworthiness has never been more pressing. The key to ensuring AI safety is to build a great wall of trustworthiness. In this course, we will cover several key topics related to trustworthy AI, including, but not limited to: robustness, generalization, responsibility, privacy, security, safety, and interpretability. By taking this course, students will form a basic understanding of the essential concepts of trustworthiness, learn to think beyond an existing AI system, develop key skills for dealing with different untrustworthy conditions, read relevant papers, and even write their own research ideas. This course will take flexible forms, including lectures, guest lectures, paper reading, homework, and group projects.
Note 1: the following table contains all contents but may be subject to change, since I could be traveling to conferences or other events, and/or guest lecturers could change due to personal matters. Please watch for my email announcements.
Note 2: each class lasts 80 minutes = 55min lecture + 5min buffer + 20min presentation. Each student is expected to give one presentation. Those who do not present during the lecture sessions can present in the remaining classes (i.e., week 13, 14, or 15).
| Week | Topic/Content | Resources/Suggested Reading | Note |
| --- | --- | --- | --- |
| 1 (01/23) | L01: Introduction (Slides, Video) | 1. KDD 2023 tutorial: Trustworthy Machine Learning | |
| 11 | | 1. DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks 2. The Good, the Bad, and Why: Unveiling Emotions in Generative AI 3. Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks | Homework 3 due |
| 12 (04/07, 04/09) | Guest lecture: Yiqiao Jin, PhD student, Georgia Institute of Technology; L10: Fairness and Interpretability (Slides) | 1. Fairness and Abstraction in Sociotechnical Systems 2. ["Why Should I Trust You?": Explaining the Predictions of Any Classifier](https://dl.acm.org/doi/pdf/10.1145/2939672.2939778) | Homework 4 starts |
| 13 (04/14, 04/16) | L11: Self-supervised Learning (Slides) | 1. SimCLR: A Simple Framework for Contrastive Learning of Visual Representations 2. Masked Autoencoders Are Scalable Vision Learners 3. Momentum Contrast for Unsupervised Visual Representation Learning | |
| 14 (04/21, 04/23) | L12: Semi-supervised Learning | 1. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence 2. FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling 3. SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-Supervised Learning | |
| 15 (04/28, 04/30) | Preparing for the final project | | Homework 4 due |
- Homework (30%)
- Paper reading (30%)
- Final project (40%)
- Bonus