
Advanced LLMOps with MLflow Tracing

# Install dependencies if running in Google Colab
try:
    import google.colab
    !pip install mlflow langchain langchain-openai openai
except ImportError:
    pass


Logging prompt and response strings in a table isn’t enough for LLM applications. You need visibility into the internal “thought process” of your chains and agents — which prompts were built, which tools were called, and in what order. MLflow Tracing provides this visibility.

1. Automatic Tracing for LangChain

If you use LangChain, MLflow can instrument the entire chain automatically.

import mlflow
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# Enable autologging for LangChain
mlflow.langchain.autolog()

llm = ChatOpenAI(model="gpt-4o")
prompt = PromptTemplate.from_template("Summarize this: {text}")
chain = prompt | llm

# This will create a Trace in MLflow UI showing the Prompt construction and LLM call
chain.invoke({"text": "MLflow is an open-source platform for the machine learning lifecycle..."})

2. Manual Tracing with Custom Spans

For complex apps involving database lookups or custom logic, use the @mlflow.trace decorator.

from mlflow.entities import SpanType

@mlflow.trace(name="Knowledge_Base_Search", span_type=SpanType.RETRIEVER)
def search_db(query):
    # Simulate a DB lookup
    return "Found: MLflow 2.14 supports Tracing."

@mlflow.trace(name="AI_Agent", span_type=SpanType.AGENT)
def run_agent(query):
    context = search_db(query)
    # ... logic to call LLM with context ...
    return "The latest version of MLflow is 2.14."

run_agent("What is the latest MLflow?")

3. Evaluating LLMs

MLflow also provides mlflow.evaluate() to run automated benchmarks against your models, using LLM-judged GenAI metrics such as ‘faithfulness’ and ‘answer_relevance’ that score each answer with a judge model.
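A minimal sketch of how such an evaluation might be wired up. The run_evaluation wrapper and the model_uri argument are hypothetical; the eval_data here is a one-row toy benchmark, and the judge metric calls an LLM under the hood, so an OpenAI key must be configured before actually running it:

```python
import pandas as pd

# A tiny question-answering benchmark; in practice this would come from
# logged traces or a curated test set.
eval_data = pd.DataFrame(
    {
        "inputs": ["What is MLflow?"],
        "ground_truth": ["MLflow is an open-source MLOps platform."],
    }
)

def run_evaluation(model_uri: str):
    # Compare the model's answers against ground_truth using a built-in
    # GenAI judge metric (requires an OpenAI key for the judge model).
    import mlflow

    return mlflow.evaluate(
        model_uri,
        eval_data,
        targets="ground_truth",
        model_type="question-answering",
        extra_metrics=[mlflow.metrics.genai.answer_relevance()],
    )
```

The returned EvaluationResult carries aggregate scores plus a per-row table, so regressions can be tracked run-over-run in the MLflow UI.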