
AI/ML Framework Integration with DataHub

Why Integrate Your AI/ML System with DataHub?

As a data practitioner, you may find it challenging to keep track of your AI experiments, models, and the relationships between them. DataHub makes this easier by providing a central place to organize and track your AI assets.

This guide will show you how to integrate your AI workflows with DataHub. With integrations for popular ML platforms like MLflow and Amazon SageMaker, DataHub enables you to easily find and share AI models across your organization, track how models evolve over time, and understand how training data connects to each model. Most importantly, it enables seamless collaboration on AI projects by making everything discoverable and connected.

Goals Of This Guide

In this guide, you'll learn how to:

  • Create your basic AI components (models, experiments, runs)
  • Connect these components to build a complete AI system
  • Track relationships between models, data, and experiments

Core AI Concepts

Here's what you need to know about the key components in DataHub:

  • Experiments are collections of training runs for the same project, like all attempts to build a churn predictor
  • Training Runs are attempts to train a model within an experiment, capturing parameters and results
  • Model Groups organize related models together, like all versions of your churn predictor
  • Models are versioned artifacts produced by successful training runs and registered for use

The hierarchy works like this:

  1. Every run belongs to an experiment
  2. Successful runs can be registered as models
  3. Models belong to a model group
  4. Not every run becomes a model

Terminology Mapping

Different AI platforms (MLflow, Amazon SageMaker) have their own terminology. To keep things consistent, we'll use DataHub's terms throughout this guide. Here's how DataHub's terminology maps to these platforms:

| DataHub | Description | MLflow | SageMaker |
|---|---|---|---|
| ML Model Group | Collection of related models | Model | Model Group |
| ML Model | Versioned artifact in a model group | Model Version | Model Version |
| ML Training Run | Single training attempt | Run | Training Job |
| ML Experiment | Collection of training runs | Experiment | Experiment |

For platform-specific details, see our integration guides for MLflow and Amazon SageMaker.

Basic Setup

To follow this tutorial, you'll need DataHub Quickstart deployed locally. For detailed steps, see the DataHub Quickstart Guide.

Next, set up the Python client for DataHub using DataHubAIClient.

Create an access token in the DataHub UI and replace <your_token> with your token:

from dh_ai_client import DataHubAIClient

client = DataHubAIClient(token="<your_token>", server_url="http://localhost:9002")
Verifying via GraphQL

Throughout this guide, we'll show how to verify changes using GraphQL queries. You can run these queries in the DataHub UI at http://localhost:9002/api/graphiql.
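
If you prefer to verify from Python instead of the UI, the DataHub SDK's DataHubGraph client can run the same GraphQL queries. Below is a minimal sketch, assuming the quickstart GMS endpoint at http://localhost:8080 and the URN of an entity you've created (for example, the model group from the next section):

from datahub.ingestion.graph.client import DataHubGraph, DatahubClientConfig

# Connect to the quickstart metadata service (assumed to be at http://localhost:8080)
graph = DataHubGraph(DatahubClientConfig(server="http://localhost:8080", token="<your_token>"))

# Fetch an ML model group by URN; replace the placeholder with the URN shown in the UI
result = graph.execute_graphql(
    query="""
    query getModelGroup($urn: String!) {
      mlModelGroup(urn: $urn) {
        urn
        name
      }
    }
    """,
    variables={"urn": "<model_group_urn>"},
)
print(result)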

Create Simple AI Assets

Let's create the basic building blocks of your ML system. These components will help you organize your AI work and make it discoverable by your team.

Create a Model Group

A model group contains different versions of a similar model. For example, all versions of your "Customer Churn Predictor" would go in one group.

Create a basic model group with just an identifier:
client.create_model_group(
    group_id="airline_forecast_models_group",
)

Let's verify that the model group was created, either with a GraphQL query or by viewing the new model group in the DataHub UI.

Create a Model

Next, let's create a specific model version that represents a trained model ready for deployment.

Create a model with just an identifier and version:
client.create_model(
    model_id="arima_model",
    version="1.0",
)

Let's verify the model, either with a GraphQL query or by checking its details in the DataHub UI.

Create an Experiment

An experiment helps organize multiple training runs for a specific project.

Create a basic experiment:
client.create_experiment(
    experiment_id="airline_forecast_experiment",
)

Verify your experiment with a GraphQL query, or view its details in the UI.

Create a Training Run

A training run captures all details about a specific model training attempt.

Create a basic training run:
client.create_training_run(
    run_id="simple_training_run_4",
)

Verify your training run with a GraphQL query, or view the run details in the UI.

Define Entity Relationships

Now let's connect these components to create a comprehensive ML system. These connections enable you to track model lineage, monitor model evolution, understand dependencies, and search effectively across your ML assets.
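
The snippets below assume you already have the URNs of the entities created above in variables such as model_urn and run_urn. As a sketch, the model and model group URNs can be built with helpers from the DataHub SDK; the platform ("mlflow"), environment ("PROD"), and the run and experiment URN formats shown here are assumptions, so copy the exact URNs from the DataHub UI if yours differ:

from datahub.emitter.mce_builder import make_ml_model_urn, make_ml_model_group_urn

# MLModel and MLModelGroup URNs include a platform, a name, and an environment.
# The platform and environment below are assumptions -- adjust them to match
# how your client registered the entities.
model_urn = make_ml_model_urn("mlflow", "arima_model", "PROD")
model_group_urn = make_ml_model_group_urn("mlflow", "airline_forecast_models_group", "PROD")

# DataHub models training runs as data process instances and experiments as containers;
# the id-based formats below are assumptions -- copy the real URNs from the UI if they differ.
run_urn = "urn:li:dataProcessInstance:simple_training_run_4"
experiment_urn = "urn:li:container:airline_forecast_experiment"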

Add Model To Model Group

Connect your model to its group:

client.add_model_to_model_group(model_urn=model_urn, group_urn=model_group_urn)

View model versions on the Model Group page under the Models section, and find group information on the Model page under the Group tab.

Add Run To Experiment

Connect a training run to its experiment:

client.add_run_to_experiment(run_urn=run_urn, experiment_urn=experiment_urn)

Find your runs on the Experiment page under the Entities tab, and see the experiment details on the Run page.

Add Run To Model

Connect a training run to its resulting model:

client.add_run_to_model(model_urn=model_urn, run_urn=run_urn)

This relationship enables you to:

  • Track which runs produced each model
  • Understand model provenance
  • Debug model issues
  • Monitor model evolution

Find the source run on the Model page under the Summary tab, and see related models on the Run page under the Lineage tab.

Add Run To Model Group

Create a direct connection between a run and a model group:

client.add_run_to_model_group(model_group_urn=model_group_urn, run_urn=run_urn)

This connection lets you:

  • View model groups in the run's lineage
  • Query training jobs at the group level
  • Track training history for model families

See model groups on the Run page under the Lineage tab.

Add Dataset To Run

Track input and output datasets for your training runs:

client.add_input_datasets_to_run(
    run_urn=run_urn,
    dataset_urns=[str(input_dataset_urn)],
)

client.add_output_datasets_to_run(
    run_urn=run_urn,
    dataset_urns=[str(output_dataset_urn)],
)
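
The dataset URNs passed above are assumed to point at datasets that already exist in DataHub. As a sketch, they can be built with the SDK's make_dataset_urn helper; the platform and dataset names below are hypothetical placeholders:

from datahub.emitter.mce_builder import make_dataset_urn

# Hypothetical names -- point these at datasets that actually exist in your DataHub instance
input_dataset_urn = make_dataset_urn(platform="snowflake", name="forecast_training_data", env="PROD")
output_dataset_urn = make_dataset_urn(platform="snowflake", name="forecast_predictions", env="PROD")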

These connections help you:

  • Track data lineage
  • Understand data dependencies
  • Ensure reproducibility
  • Monitor data quality impacts

Find dataset relationships in the Lineage tab of either the Dataset or Run page.

Full Overview

Here's your complete ML system with all components connected: a full lineage view of your ML assets, from training data through runs to production models.
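
As a recap, here is a compact end-to-end sketch of the calls covered in this guide, assuming the client from the setup step and the URN variables sketched earlier:

# Create the building blocks
client.create_model_group(group_id="airline_forecast_models_group")
client.create_model(model_id="arima_model", version="1.0")
client.create_experiment(experiment_id="airline_forecast_experiment")
client.create_training_run(run_id="simple_training_run_4")

# Connect them into a single, discoverable ML system
client.add_model_to_model_group(model_urn=model_urn, group_urn=model_group_urn)
client.add_run_to_experiment(run_urn=run_urn, experiment_urn=experiment_urn)
client.add_run_to_model(model_urn=model_urn, run_urn=run_urn)
client.add_run_to_model_group(model_group_urn=model_group_urn, run_urn=run_urn)

# Attach training data lineage
client.add_input_datasets_to_run(run_urn=run_urn, dataset_urns=[str(input_dataset_urn)])
client.add_output_datasets_to_run(run_urn=run_urn, dataset_urns=[str(output_dataset_urn)])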

What's Next?

To see these integrations in action, check out the platform-specific integration guides for MLflow and Amazon SageMaker.