Choose Your Weapon: Survival Strategies for Depressed AI Academics

21st April 2023 at 8:04pm

https://arxiv.org/abs/2304.06035

This paper has a lot of suggestions that I find helpful. In particular:

“In-the-wild domains such as games, IoT, and autonomous vehicles could allow AI to be deployed next to their end user and the data the user generates, i.e. at the edge of the network. This is often called edge AI [8], the operation of AI applications in devices of the physical world is possible when memory requirements are low and inference occurs rapidly.” (Togelius and Yannakakis, 2023, p. 4)
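The "low memory, fast inference" point is the crux of edge AI. A minimal sketch of what that looks like in practice, assuming PyTorch and a toy model (none of this is from the paper): post-training dynamic quantization stores the weights as int8, shrinking the memory footprint and typically speeding up CPU inference on the kind of device that sits next to the end user.

```python
# Hypothetical sketch: shrinking a small model for on-device ("edge") inference.
import torch
import torch.nn as nn

# Tiny classifier standing in for a model deployed next to the user's data.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# Post-training dynamic quantization: Linear weights stored as int8,
# cutting memory use and usually speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Inference on a single "sensor reading" generated at the edge.
x = torch.randn(1, 64)
with torch.no_grad():
    logits = quantized(x)
print(logits.argmax(dim=1))
```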

“Solve Problems Few Care About (For Now!)” (Togelius and Yannakakis, 2023, p. 5)

“In academia, failure can be as instructive and valuable as success and the stakes are overall lower. Many important inventions and ideas in AI come from trying the “wrong” thing. In particular, all of deep learning stems from researchers stubbornly working on neural networks even though there were good theoretical reasons why they shouldn’t work.” (Togelius and Yannakakis, 2023, p. 5)

“At the most basic level, open-sourcing models, including both weights and training scripts, helps a lot. It allows academic AI researchers to study the trained models, fine-tune them, and build systems around them. It still leaves academic researchers uncompetitive when it comes to training new models, but it is a start. To their credit, several large industrial research organizations regularly release their most capable models publicly. Others don’t, and could rightly be shamed for not doing so.” (Togelius and Yannakakis, 2023, p. 7)
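To make the "study, fine-tune, build around" point concrete, here is a minimal sketch of what open weights enable for an academic lab, assuming the Hugging Face `transformers` library; the checkpoint name and toy text are placeholders, not anything the paper specifies.

```python
# Hypothetical sketch: load an openly released checkpoint and take one
# fine-tuning step on your own domain data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any openly released model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy "domain" text; in practice this would be the lab's own dataset.
text = "Procedural content generation for games with small open models."
batch = tokenizer(text, return_tensors="pt")

# One gradient step of plain causal-LM fine-tuning.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
```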