CNCG Pune May Meetup x Platform9 (Invite Only)
CNCG Pune is all set to host its May meetup on 22nd May 2026 in collaboration with Platform9.
Join us for an exclusive event where industry experts will dive deep into AI, Cloud Native, Kubernetes, and CNCF technologies while you connect with some of the brightest minds in the ecosystem.
Meet the core cloud-native community (developers, SREs, platform engineers, and cloud enthusiasts) for a power-packed evening of technical talks, hands-on workshops, lightning sessions, and real-world stories around Kubernetes, Platform Engineering, Observability, AI + Cloud Native, Multi-cluster, Security, and the wider CNCF ecosystem.
This is a community-first, vendor-neutral event curated by Cloud Native Pune with special support from Platform9.
Important: This event is invite-only.
How to Register
Fill out this short Google Form (takes < 2 minutes): 👉 https://forms.gle/7j1FmxvnT4EGexLQ6
Our team will review all responses and send final confirmation + venue details by 18th May 2026. Only confirmed attendees will receive entry instructions.
Here’s what you can expect:
Talks and workshops:
- Maximise GPU efficiency with MIG on Kubernetes by Devendra Kulkarni (Senior Customer Support Engineer IV @SUSE)
- Your Database Deserves Better: Run It the Kubernetes Way with OpenEverest by Atharva Mhaske (GSoC’2026 Open Source Contributor at Open Science Labs)
- Architecting Local AI Agents by Ashutosh Bhakare (Chief Executive Officer @Unnati Development and Training Centre Pvt Ltd)
- Escaping Hypervisor Lock-in: Open-Source VM Migration with vJailbreak by Omkar Deshpande (Senior MTS @Platform9)
Followed by Networking & Dinner
-
6:00 PM IST
Maximise GPU efficiency with MIG on Kubernetes
in-person
Modern AI workloads demand GPUs, yet in real-world setups they are often 80% underutilized. This session explores how NVIDIA Multi-Instance GPU (MIG) transforms a single GPU into multiple isolated instances, enabling efficient sharing across workloads, and how it integrates with Kubernetes to run multiple workloads on the same hardware.
The session includes a live hands-on demo where a single GPU is partitioned into multiple MIG instances and used to run several AI workloads, showing improved utilization, isolation, and cost efficiency. Along the way, attendees will learn how MIG works, when to use it, and how it integrates with Kubernetes scheduling through device plugins and GPU operators, including a discussion of use cases and trade-offs.
Whether you are a student or a working professional experimenting with AI or optimizing infrastructure costs, this session will give you practical insight into maximizing GPU ROI without adding more hardware.
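As a small taste of what the session covers: once MIG slices are exposed to Kubernetes by the NVIDIA device plugin, a pod can request one like any other resource. This is a minimal sketch, assuming a MIG-capable GPU with the device plugin in mixed strategy; the exact resource name (e.g. `nvidia.com/mig-1g.5gb`) depends on your GPU model and the MIG profiles you create.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mig-demo
spec:
  restartPolicy: Never
  containers:
    - name: cuda-workload
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi", "-L"]   # lists the single MIG instance visible to this pod
      resources:
        limits:
          nvidia.com/mig-1g.5gb: 1    # one 1g.5gb MIG slice (profile name is setup-specific)
```

Because the scheduler treats each MIG slice as a distinct resource, several such pods can share one physical GPU with hardware-level isolation.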
-
6:30 PM IST
Your Database Deserves Better: Run It the Kubernetes Way with OpenEverest
in-person
Managing databases manually is time-consuming, error-prone, and difficult to scale, especially in modern cloud-native environments. While Kubernetes has transformed how we run stateless applications, running databases on it still feels complex for many teams.
OpenEverest, now incubated under the CNCF, aims to change that by making database management on Kubernetes simple, reliable, and production-ready.
In this talk, we will explore how OpenEverest eliminates the operational burden of managing databases like PostgreSQL, MySQL, and MongoDB. I will walk through the core ideas behind the platform and how it leverages Kubernetes operators to automate provisioning, scaling, and maintenance.
The session will include a live demo using the everestctl CLI, where we will deploy a database cluster in just a few commands. By the end, you will see how easy it is to move from manual database management to a fully automated, cloud-native approach.
This talk is designed for developers, DevOps engineers, and anyone curious about running stateful workloads efficiently on Kubernetes.
Link: https://openeverest.io (by Percona and Solanica)
7:00 PM IST
Architecting Local AI Agents
in-person
This talk demonstrates a streamlined architecture for running advanced AI agents entirely on local infrastructure. By integrating the Agent Development Kit (ADK) for agentic orchestration, the state-of-the-art Gemma 4 model for reasoning, and a Docker-based Model Runner for containerized inference, this solution bridges the gap between complex AI development and local deployment.