Cloud Native Timisoara, Romania

Cloud Native Timisoara - Winter Edition 2025

Attendees: 67
Hybrid event
Event date
December 10, 2025
06:00 PM - 09:00 PM EET
Location
Haufe Group, Timișoara
About this event

❄️ As winter settles in, so does our next CNCF Community Timisoara gathering — the Winter Edition.

This time, we’re bringing two standout sessions that blend cloud native engineering with the evolving world of AI.
Together with our friends at Haufe Group, we’re excited to host an evening focused on how modern infrastructure is powering the next wave of intelligent systems.

Alessandro Pilotti will walk us through how Kubernetes can accelerate bioinformatics workflows, while Rareș Istoc will continue the journey by unpacking why MLOps is rapidly claiming the spotlight as the new DevOps.

Learn more about their talks below:

Accelerating Bioinformatics AI use cases with Kubernetes
Bioinformatics is an interdisciplinary scientific field that deals with large amounts of biological data. The advent of machine learning models, and in particular Transformer models, has yielded very interesting results in the field (e.g. Google's AlphaFold or Meta's ESM research).

The complexity of training or fine-tuning protein language models (PLMs), along with inference tasks, requires a non-trivial amount of GPU resources and a disciplined approach, where DevOps and MLOps methodologies fit in very well. In this session we will present a series of tasks related to fine-tuning PLMs for the classification of SARS-CoV-2's spike proteins.

We will highlight how Kubernetes is used to execute large numbers of computationally intensive tasks, including best practices for sharing NVIDIA GPUs (MIG, time slicing, MPS). Orchestration will be handled by a DAG-based workflow manager such as Apache Airflow, working alongside other HPC cluster technologies often available in research labs, e.g. Slurm.
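As a small taste of the GPU-sharing topic, here is what requesting a single MIG slice of an A100 looks like in a Pod spec. The resource name follows the convention exposed by NVIDIA's Kubernetes device plugin; the pod name, image tag, and entry point are illustrative placeholders, not from the talk itself:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: plm-finetune-worker            # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.08-py3   # illustrative image tag
      command: ["python", "finetune.py"]        # hypothetical entry point
      resources:
        limits:
          nvidia.com/mig-1g.5gb: 1   # one MIG slice instead of a whole GPU
```

Because each MIG slice is hardware-partitioned, several such pods can share one physical GPU with predictable memory and compute isolation, which is what makes it attractive for running many smaller fine-tuning or inference tasks side by side.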

Why MLOps is the new DevOps
Remember when getting code to production was the hard part? We solved that with DevOps: automated pipelines, continuous deployment, monitoring. Problem solved, right?

Not quite. When you deploy a machine learning model, you're not just shipping code; you're shipping a living, breathing system that learned from data. And here's the thing: models drift, data changes, and what worked yesterday might fail tomorrow without anyone changing a single line of code.

This talk explores why ML systems need their own playbook. We'll walk through the real challenges teams face: How do you know when your model stops working? How do you ensure the features you use for training match what you compute in production? How do you deploy a new model without crossing your fingers and hoping for the best?
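To make the "how do you know when your model stops working?" question concrete: one common answer is to compare the distribution of a feature in production against the distribution it had at training time. A minimal sketch, using the Population Stability Index (a standard drift metric; the thresholds and synthetic data below are illustrative, not from the talk):

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time sample and a
    production sample. Rule of thumb: < 0.1 stable, > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")  # catch out-of-range values

    def fractions(values):
        counts = [0] * bins
        for v in values:
            for i in range(bins):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
        # small floor avoids log(0) for empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(42)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]      # training-time feature
same = [random.gauss(0.0, 1.0) for _ in range(5000)]       # healthy production data
shifted = [random.gauss(1.0, 1.0) for _ in range(5000)]    # drifted production data

print(f"no drift: PSI = {psi(train, same):.3f}")
print(f"drifted:  PSI = {psi(train, shifted):.3f}")
```

Wiring a check like this into a scheduled job, and alerting when the score crosses a threshold, is exactly the kind of ML-specific operational plumbing the talk argues DevOps alone does not cover.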

Hosts
Speakers
Organizers