Celebrating Excellence: Soufiane Hayou Awarded the Gradient AI Research Fellowship

Feb 28, 2024

Gradient Team

In the realm of artificial intelligence and machine learning, it is the relentless pursuit of innovation and the quest for understanding complex problems that drive the field forward. At Gradient, we're excited to announce Soufiane Hayou as the first recipient of our Gradient AI Research Fellowship, a testament to his ongoing work in the field of neural network scalability.

About the Gradient AI Research Fellowship

Gradient's AI Research Fellowship is designed to recognize and support individuals who have shown exceptional talent and promise in the field of artificial intelligence. By empowering researchers to continue their explorations without the constraints that often present themselves in academic and research endeavors, we hope to nurture the next generation of AI pioneers.

Meet Our First Fellowship Recipient

Soufiane Hayou is a researcher at the Simons Institute for the Theory of Computing and the Statistics Department at UC Berkeley, hosted by Bin Yu and Peter Bartlett. He is currently on leave from his Peng Tsu Ann Assistant Professorship in mathematics at the National University of Singapore. Soufiane's current research focuses on the theory and practice of scaling neural networks, a critical area of study that promises to redefine the capabilities and efficiency of AI systems. This includes scaling rules for initialization, learning rate, architecture, data, and more, and it naturally extends to scaling rules for fine-tuning LLMs, an area of immense importance as the AI community continues to build more complex and capable systems.

Soufiane Hayou holds a PhD in statistics from Oxford, where he was advised by Arnaud Doucet and Judith Rousseau, and graduated from École Polytechnique in Paris before joining Oxford.

“My work is focused on scaling neural algorithms, and having access to compute resources is essential in my research. The Gradient fellowship is a great opportunity for me to access state of the art computing resources, especially in an era where GPUs are becoming scarcer and more expensive. Having access to such resources will help us (with my team) accelerate our research and further improve state-of-the-art methods in fine-tuning large models.” - Soufiane Hayou

Research Project

Soufiane's project is centered around advancing neural network scaling techniques, with a particular focus on developing principled methods for adjusting network initialization and learning rates as models scale up.

With the support of Gradient's AI Research Fellowship, Soufiane will zero in on optimizing scaling methods for fine-tuning large language models (LLMs) using Low-Rank Adaptation (LoRA). This process traditionally relies on trial and error for setting the learning rate and the rank of the LoRA weights. Collaborating with his team at UC Berkeley, Soufiane aims to establish systematic scaling rules for fine-tuning, addressing a critical need among practitioners.
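For readers unfamiliar with the setup: LoRA freezes the pre-trained weight matrix and trains only a low-rank update, so the rank and the learning rate become the key hyperparameters mentioned above. Below is a minimal NumPy sketch of the LoRA parameterization; the dimensions, variable names, and hyperparameter values are illustrative assumptions, not taken from Soufiane's work.

```python
import numpy as np

# Hypothetical dimensions and hyperparameters, chosen only for illustration.
d_out, d_in = 64, 64   # shape of the frozen pre-trained weight matrix
r = 8                  # LoRA rank -- in practice often set by trial and error
alpha = 16.0           # scaling factor; effective update is (alpha / r) * B @ A

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen pre-trained weights

# Common LoRA initialization: A is small random, B starts at zero,
# so the adapted model initially matches the pre-trained one.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def adapted_forward(x):
    """Forward pass through the adapted layer: (W + (alpha / r) * B @ A) @ x."""
    return (W + (alpha / r) * B @ A) @ x

x = rng.standard_normal(d_in)
# With B = 0, the low-rank adapter contributes nothing yet,
# so the adapted output equals the frozen model's output.
assert np.allclose(adapted_forward(x), W @ x)
```

During fine-tuning, only `A` and `B` would receive gradient updates; how to scale their learning rates and the rank `r` as the base model grows is exactly the kind of question the project targets.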

Preliminary work includes a method for determining the fine-tuning learning rate based on the pre-training learning rate, suggesting that the former should typically be higher. This insight, however, awaits further large-scale empirical validation, underlining the importance of Soufiane's ongoing research efforts and his fellowship project.