Managing AI/ML Model Provenance and Compliance with KitOps
The AI/ML ecosystem is drowning in ad-hoc tooling. Every team picks a different combination of experiment trackers, model registries, serving frameworks, and deployment scripts, none of which talk to each other. There's no standard way to package a model, and no consistent way to answer "what went into this model, and where has it been?"
This is a standardization problem, not a tooling problem. Adding another tool to the pile makes it worse.
This session covers how KitOps takes a different approach: it builds on OCI (Open Container Initiative) artifacts, the same standard that containers and Helm charts already rely on, to package models, datasets, code, and metadata into versioned, immutable ModelKits. We'll walk through:
- Why the current ad-hoc tooling landscape creates provenance gaps and compliance blind spots
- How OCI-based standardization lets models flow through existing infrastructure (registries, CI/CD, access controls) without reinventing the wheel
- Setting up governance workflows: approval gates, audit trails, and lineage tracking
- Aligning model lifecycle management with regulatory requirements
- Practical migration patterns from fragmented toolchains to a standards-based approach
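To make the ModelKit idea concrete, here is a minimal sketch of a Kitfile, the manifest KitOps uses to declare what goes into a ModelKit. The field layout follows the KitOps documentation, but every name, path, and version below is a placeholder for illustration:

```yaml
# Kitfile: declares the contents of a ModelKit (illustrative example)
manifestVersion: "1.0"

package:
  name: fraud-detector        # placeholder project name
  version: 1.2.0
  description: Gradient-boosted fraud scoring model
  authors: ["ml-platform-team"]

model:
  name: fraud-detector
  path: ./model.onnx          # the serialized model artifact
  framework: onnx

datasets:
  - name: training-data
    path: ./data/train.csv    # dataset versioned alongside the model

code:
  - path: ./src               # training and preprocessing code
```

With a Kitfile in place, the `kit` CLI can pack and push the ModelKit to any OCI-compatible registry (e.g. `kit pack . -t registry.example.com/ml/fraud-detector:v1.2.0` followed by `kit push registry.example.com/ml/fraud-detector:v1.2.0`); the registry reference here is hypothetical. Because the result is a standard OCI artifact, the same registry access controls, scanning, and CI/CD promotion flows used for container images apply to the model and its provenance metadata.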
If your org has more ML tools than ML models in production, or you're duct-taping provenance together from git logs and Slack messages, this session gives you a path toward standardization that works with your existing infrastructure instead of replacing it.