“Large Language Model: A Challenge Called Scalability” is organised by CASTIEL2 in collaboration with NVIDIA. This internal webinar for NCCs and CoEs will take place on 24 October from 11:00 to 12:30 CEST.


The advent of Large Language Models (LLMs) has ushered in a new era for deep learning, transforming application areas ranging from Natural Language Processing to Computer Vision. However, fully harnessing the potential of these models means confronting a series of challenges, particularly around numerical stability during training and performance optimization. In this webinar for NCCs and CoEs, Giuseppe Fiameni (Data Scientist at NVIDIA) will explore the fundamental aspects of LLMs and analyse the crucial role that computing infrastructures play in this context. Model complexity, data preprocessing, computational requirements, and optimization techniques will be discussed.


Register here by 23 October.