EuroCC@Greece announces the “HPC Tech & Tools Training Snippets” online webinar, organized by ICS-FORTH, on January 20th, 2022, 16:00–19:00 (EET).

Format: Webinar Training Snippets of 8 hands-on sessions
Date: 20/1/2022
Time: 16:00 – 19:00
Duration: 15-20 minute tutorials with hands-on examples
Language: Greek
Audience: SME personnel interested in finding out more about the benefits and use of HPC for their business, or looking into acquiring HPC-related expertise.
Title – Speaker
Intro and Agenda presentation – E. Kanellou, ICS-FORTH
Welcome and Introduction – I. Hatzakis, GRNET
Case study: The role of HPC technologies in Computational Mechanics and Digital Twins – S. Kokkinos, FEAC Engineering
HPC infrastructures in Greece: How to use and access – D. Dellis, GRNET
Introduction to CUDA – M. Pavlidakis, ICS-FORTH
Use of GPUs: hands-on examples with Tensorflow on Python Notebooks – C. Kozanitis, ICS-FORTH
Using Spark – I. Kolokasis, ICS-FORTH
FPGAs for faster application execution; market trends and benefits – D. Theodoropoulos, ISC-TUC
Using key-value stores for organizing data – G. Xanthakis, ICS-FORTH
Get Started with Kubernetes – A. Chazapis, ICS-FORTH
Frisbee: An advanced deployment and experiment orchestration framework over Kubernetes – F. Nikolaidis, ICS-FORTH
Karvdash: facilitating data science on Kubernetes – A. Chazapis, ICS-FORTH
Closing remarks

Presentation Descriptions

Welcome and Introduction. A welcome to the event and an introduction to the EuroCC project, its goals, and its context.

Case study: The role of HPC technologies in Computational Mechanics and Digital Twins. In this talk, a real-life use case of HPC technologies will be presented – specifically, their application to the domain of computer-aided engineering and the benefits they offer in terms of solution time and the technical capabilities of the software.

HPC infrastructures in Greece: How to use and access. In this tutorial, you will get acquainted with ARIS, the Greek supercomputer deployed and operated by GRNET. You will find out how to access it and will furthermore be introduced to the workflow for requesting HPC resources and the process of submitting a job, via practical examples.
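
As a taste of what job submission looks like in practice, here is a minimal sketch assuming a Slurm-based batch scheduler (the workload manager used on ARIS); the job name, partition, account, and resource values are placeholders, and the exact options for a given allocation are described in the system documentation.

```python
import subprocess
import textwrap

# A minimal Slurm batch script; partition, account, and resource values are placeholders.
job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=hello_hpc
    #SBATCH --partition=compute      # placeholder partition name
    #SBATCH --account=my_project     # placeholder project/account
    #SBATCH --ntasks=4
    #SBATCH --time=00:10:00
    srun hostname
    """)

with open("hello_hpc.slurm", "w") as f:
    f.write(job_script)

# Submit the script with sbatch and print the scheduler's reply
# (e.g., "Submitted batch job 12345").
result = subprocess.run(["sbatch", "hello_hpc.slurm"], capture_output=True, text=True)
print(result.stdout or result.stderr)
```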

Introduction to CUDA. GPUs are a powerful tool for higher computing performance. They are specifically apt for processing large blocks of data in parallel. GPUs excel at parallel computing due to their large number of cores, which operate at lower frequencies than CPU cores. CUDA (Compute Unified Device Architecture) is the parallel computing platform and programming model used to program NVIDIA GPUs. It allows the user to access a GPU through general-purpose programming. In this tutorial, you will get acquainted with the core concepts of CUDA and then see them showcased in simple use cases.
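
The tutorial targets CUDA itself, typically written in C/C++. As an illustrative sketch of the same concepts of kernels, thread blocks, and grids, the snippet below uses Python with the Numba library, which compiles Python functions into CUDA kernels; the tutorial's own examples may look different.

```python
import numpy as np
from numba import cuda

# Illustrative CUDA-style kernel written with Numba; each GPU thread adds one pair of elements.
@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # global thread index within the launch grid
    if i < out.size:          # guard against threads beyond the array bounds
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

# Choose a launch configuration: enough blocks of 256 threads to cover all n elements.
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)   # launch the kernel on the GPU

assert np.allclose(out, a + b)
```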

Use of GPUs: hands-on examples with Tensorflow on Python Notebooks. In this tutorial, the use of Python notebooks will be introduced and the use of GPUs will be showcased. Based on a Tensorflow code example, execution with and without a GPU will be compared, and the benefits of each case will be explored.
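
A minimal sketch of the kind of CPU-versus-GPU comparison described above, using the public TensorFlow API; the matrix size and device names are illustrative.

```python
import time
import tensorflow as tf

# Check whether TensorFlow can see a GPU on this machine.
print("GPUs available:", tf.config.list_physical_devices("GPU"))

x = tf.random.normal((4000, 4000))

def benchmark(device):
    # Run a large matrix multiplication on the chosen device and time it.
    with tf.device(device):
        start = time.time()
        y = tf.matmul(x, x)
        _ = y.numpy()         # force the computation to complete before stopping the clock
        return time.time() - start

print("CPU time:", benchmark("/CPU:0"))
if tf.config.list_physical_devices("GPU"):
    print("GPU time:", benchmark("/GPU:0"))
```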

Using Spark. Apache Spark is an open-source, distributed big data analytic framework designed to help users easily analyze huge volumes of data. It supports several programming languages and ties in with libraries used for various analytic computations, making it suitable for speeding up a wide range of data-intensive applications (e.g., machine learning, graph processing). This tutorial will teach the essentials of coding with Spark and will touch upon the benefits of its use.
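
A minimal PySpark sketch of the style of code the tutorial covers; the application name and toy dataset are illustrative, and a real workload would read data from a distributed store instead.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start (or reuse) a Spark session; on a cluster, the same code runs distributed across workers.
spark = SparkSession.builder.appName("spark-snippet").getOrCreate()

# A toy dataset; in practice this would be read from HDFS, S3, Parquet files, etc.
df = spark.createDataFrame(
    [("sensor-1", 21.5), ("sensor-1", 22.0), ("sensor-2", 19.8)],
    ["sensor", "temperature"],
)

# Transformations are lazy: Spark builds an execution plan and runs it when .show() is called.
df.groupBy("sensor").agg(F.avg("temperature").alias("avg_temp")).show()

spark.stop()
```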

Using key-value stores for organizing data. A key-value store is a database-like storage system that stores data as pairs of a unique identifying key and an associated value. The simplicity of this concept provides flexibility and makes data easier to access and manipulate; it also makes key-value stores easy to scale in distributed contexts. Thus, key-value stores are a useful option for organizing big data. In this tutorial, you will get acquainted with core concepts of key-value stores and see a use case as an example.
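
As an illustration of the basic put/get interface, the sketch below uses Redis, one widely available key-value store; the tutorial may focus on a different system, and the key names are illustrative.

```python
import redis

# Redis is used here as a readily available key-value store; the talk may cover a different system.
kv = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Store values under unique keys; the application decides how keys are laid out.
kv.set("user:42:name", "Alice")
kv.set("user:42:last_login", "2022-01-20")

# Retrieve a value by its key, regardless of how many other keys the store holds.
print(kv.get("user:42:name"))
```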

Get Started with Kubernetes. Containers are a way of virtualizing a deployment environment, such as an OS, by making it independent of the underlying infrastructure. This makes it easier and more streamlined to develop software that is portable, e.g. in today’s cloud environments. Kubernetes is a popular container orchestration environment developed by Google. It provides automation for the creation, deployment, and management of containerized applications and is supported by major commercial cloud service providers. In this tutorial, you will learn the essential concepts necessary for its use.
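
A minimal sketch of interacting with a cluster programmatically, using the official Kubernetes Python client; it assumes a kubeconfig file is already set up (e.g., by a cluster administrator or a local minikube installation).

```python
from kubernetes import client, config

# Load cluster credentials from the local kubeconfig (the same file used by kubectl).
config.load_kube_config()
v1 = client.CoreV1Api()

# List the pods (groups of containers) the cluster is currently running, across all namespaces.
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```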

Frisbee: An advanced deployment and experiment orchestration framework over Kubernetes. As distributed systems evolve, the testing scale multiplies, calling for dozens of test cases combined with different benchmarks (e.g., performance, correctness) and arbitrary operating conditions. Despite their abundance, existing benchmarks and Chaos engineering tools work in isolation, thus restricting the complexity of the testing scenarios we can build. Moreover, the validation of complex distributed experiments is typically done manually by the system’s evaluator, a rather laborious and error-prone process. We present Frisbee: a suite for the automated testing of distributed applications over Kubernetes. Frisbee simplifies a series of time-demanding activities, including the spin-up of the dependency stack required to bring the system into a steady state, the unified execution of workloads and faultloads, and the validation of the system’s behavior via test cases. We will evaluate Frisbee through a series of tests, focusing on uncertainties at the level of the application (e.g., dynamically changing request patterns), the infrastructure (e.g., crashes, network partitions), and the deployment (e.g., saturation points).

Karvdash: facilitating data science on Kubernetes. Karvdash (Kubernetes CARV dashboard) is a dashboard service for facilitating data science on Kubernetes. It supplies the landing page for users, allowing them to launch notebooks and other services, design workflows, and specify parameters related to execution through a user-friendly interface. Karvdash manages users, wires up relevant storage to the appropriate paths inside running containers, securely provisions multiple services under one externally-accessible HTTPS endpoint, while keeping them isolated in per-user namespaces at the Kubernetes level, and provides an identity service for OAuth 2.0/OIDC-compatible applications.