Since 2022, I have written a Chinese Zhihu article once a year to summarize my research highlights. Starting this year, I have decided to also write an English version to share some highlights of our research with more friends around the world. Your criticism, comments, and suggestions are welcome and will help me, a junior researcher, learn and grow😄
The year 2023 was amazing and will be remembered as a milestone. The release of ChatGPT in late 2022 brought revolutionary change to AI research, making 2023 a significant year of paradigm shift. Therefore, this blog is roughly organized into two sections: “Pre-LLM era” and “LLM era”. That said, it is worth acknowledging that, at least for me, we are not stopping our investment in non-LLM research, since LLMs and other large-scale models cannot simply solve every problem in our world.
Pre-LLM Era
Semi-supervised Learning
We made nice progress in semi-supervised learning (SSL) this year with two new algorithms (their core ideas are sketched in code after this list) and a library update.
- FreeMatch received the highest review score among all SSL papers at ICLR 2023: https://arxiv.org/abs/2205.07246. FreeMatch is an automatic threshold-tuning algorithm that works remarkably well for low-resource semi-supervised learning; it has become the new SOTA, significantly advancing our previous FlexMatch (NeurIPS’21), and has received 86 citations as of today.
- SoftMatch (ICLR 2023) studies the quality-quantity trade-off of pseudo labels: https://arxiv.org/abs/2301.10921. Reviewers commonly acknowledged this contribution, and the paper has received 37 citations so far.
- USB, our unified semi-supervised learning library, has now officially joined the PyTorch ecosystem. USB has received more than 1,000 stars on GitHub and continuously serves the community as an easy-to-use code library: https://github.com/microsoft/Semi-supervised-learning.
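Since both papers center on how to select and weight pseudo labels, here is a minimal PyTorch sketch of their core ideas as I would summarize them. This is not the official implementation: the `PseudoLabelState` helper and all names are my own for illustration, and FreeMatch's additional fairness regularizer is omitted; the authoritative implementations live in USB.

```python
import torch
import torch.nn.functional as F

class PseudoLabelState:
    """Hypothetical helper holding EMA statistics across training steps."""
    def __init__(self, num_classes, ema=0.999):
        self.ema = ema
        self.tau = torch.tensor(1.0 / num_classes)               # global confidence (FreeMatch)
        self.p_local = torch.full((num_classes,), 1.0 / num_classes)  # class-wise confidence
        self.mu = torch.tensor(1.0 / num_classes)                # confidence mean (SoftMatch)
        self.var = torch.tensor(1.0)                             # confidence variance (SoftMatch)

def pseudo_label_stats(logits_u_weak, state):
    """Return (pseudo_labels, freematch_mask, softmatch_weight) for one unlabeled batch."""
    probs = F.softmax(logits_u_weak.detach(), dim=-1)
    conf, pseudo = probs.max(dim=-1)
    m = state.ema

    # FreeMatch-style self-adaptive thresholding: a global threshold tracks overall
    # confidence, and class-local thresholds rescale it per class (MaxNorm).
    state.tau = m * state.tau + (1 - m) * conf.mean()
    state.p_local = m * state.p_local + (1 - m) * probs.mean(dim=0)
    threshold = state.p_local / state.p_local.max() * state.tau
    mask = (conf >= threshold[pseudo]).float()

    # SoftMatch-style truncated-Gaussian weighting: instead of a hard cutoff,
    # low-confidence samples get smoothly down-weighted below the running mean.
    state.mu = m * state.mu + (1 - m) * conf.mean()
    state.var = m * state.var + (1 - m) * conf.var()
    weight = torch.exp(-(conf - state.mu).clamp(max=0.0) ** 2 / (2 * state.var))

    return pseudo, mask, weight
```

In the unlabeled loss, the per-example cross-entropy on the strongly augmented view is then multiplied by `mask` (FreeMatch, hard selection) or by `weight` (SoftMatch, soft weighting), which is exactly where the quality-quantity trade-off shows up.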
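And to give a flavor of how easy USB is to use, here is a rough usage sketch following the quickstart pattern in the repo's README. I am writing the function and parameter names from memory, so treat them as assumptions and consult the repository for the exact API.

```python
# Rough sketch of USB usage (names from memory; see the repo README for the exact API).
from semilearn import get_config, get_dataset, get_data_loader, get_net_builder, get_algorithm, Trainer

config = get_config({
    'algorithm': 'freematch',   # or 'softmatch', 'fixmatch', ...
    'net': 'wrn_28_2',
    'dataset': 'cifar10',
    'num_labels': 40,           # low-resource setting: 4 labels per class
    'num_classes': 10,
    'batch_size': 64,
    'uratio': 7,                # unlabeled-to-labeled batch size ratio
})

algorithm = get_algorithm(config, get_net_builder(config.net, from_name=True), tb_log=None, logger=None)
data = get_dataset(config, config.algorithm, config.dataset, config.num_labels, config.num_classes)

trainer = Trainer(config, algorithm)
trainer.fit(get_data_loader(config, data['train_lb'], config.batch_size),
            get_data_loader(config, data['train_ulb'], config.batch_size * config.uratio),
            get_data_loader(config, data['eval'], config.batch_size))
trainer.evaluate(get_data_loader(config, data['eval'], config.batch_size))
```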
Transfer Learning and OOD Generalization
We continued to push the frontier of transfer learning and out-of-distribution (OOD) generalization this year.
- The English version of my book, Introduction to Transfer Learning, has finally been published by Springer. I finished the translation from Chinese to English during the Christmas holiday in 2022, and I really hope to reach more international friends and help them quickly learn transfer learning. (Relax: this is not an advertisement and you do not need to buy the book🙂) The official website of the book: https://jd92.wang/tlbook/.
- Our DIVERSIFY algorithm was finally accepted at ICLR 2023: https://arxiv.org/abs/2209.07027. It is a new attempt at domain-label-free OOD generalization in a dynamic world, which is a more realistic setting. A pity that the paper was rejected at ICLR’22 with a score of 866… This year, we further extended DIVERSIFY to OOD detection, and it worked amazingly well: https://arxiv.org/abs/2308.02282.
- Addressing the trade-off between robustness and generalization in adversarial training (ICCV 2023): https://arxiv.org/abs/2308.02533. This is my first adversarial training paper; we proposed a simple technique that maintains both good robustness and generalization ability.
- Code libraries:
- We have continuously maintained the transfer learning library on GitHub for the 6th year; it has received more than 12.4K stars: https://github.com/jindongwang/transferlearning.
- We created a new GitHub repo called “robustlearn” to host our latest research on OOD and robustness: https://github.com/microsoft/robustlearn.
- We also maintain the personalized federated learning library: https://github.com/microsoft/PersonalizedFL.
LLM Era
I started a new series of articles on Zhihu called “Research in the era of LLMs”.
Evaluation and Enhancement of LLMs
We built two websites, https://llm-eval.github.io/ and https://llm-enhance.github.io/, to host all our LLM evaluation and enhancement research.
- What should an ordinary researcher do in the era of LLMs: a blog describing possible research directions for normal researchers, i.e., those who do not have many GPUs, myself included. The article was later presented on Bilibili (a Chinese video website) and received more than 70K views (a first for me): https://zhuanlan.zhihu.com/p/623690301.