Scaling the fine-tuning of large language models (LLMs) to multiple GPUs can unlock new levels of performance and efficiency, making it accessible to organizations of all sizes.
In this 3.5-hour course organized by the VSC Research Center, TU Wien, and NCC Austria, participants from start-ups, SMEs, and large enterprises will gain hands-on experience with multi-GPU fine-tuning techniques and learn to optimize their LLM workflows for both speed and scalability.
Read more and register here.