LangChain Python
Build LLM-powered applications with Python
LangChain is a Python framework for developing applications powered by language models. Route your LangChain LLM calls through Decyra's proxy to gain observability and control over every model interaction.
Prerequisites
- Decyra account with API key
- Python 3.8+ installed
- LangChain packages installed
Installation
```bash
pip install langchain langchain-openai decyra-sdk
```
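The integration code below reads your key from the `DECYRA_API_KEY` environment variable. One way to set it for the current shell session (where you ultimately store the key is up to you):

```shell
# Replace the placeholder with the API key from your Decyra account
export DECYRA_API_KEY="your-api-key-here"
```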
Integration
Configure ChatOpenAI with Decyra's proxy:
```python
import os

from langchain_openai import ChatOpenAI
from decyra import wrap_openai

llm = ChatOpenAI(
    model="gpt-4",
    base_url="https://proxy.decyra.com/v1",
    default_headers={
        "X-Decyra-API-Key": os.getenv("DECYRA_API_KEY")
    },
)

# Wrap the model for enhanced tracking
decyra_llm = wrap_openai(llm)
```
Create and invoke a chain:
```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("human", "{input}")
])

chain = prompt | decyra_llm
result = chain.invoke({"input": "Explain machine learning"})
print(result.content)
```
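Beyond `result.content`, LangChain chat results carry provider metadata on `result.response_metadata`, which is where the token counts Decyra records come from. A minimal sketch of reading them, assuming an OpenAI-style `token_usage` payload (the dict below is an illustrative stand-in for a real `response_metadata`, not captured output):

```python
# Sketch: summarize token usage from a chat result's response_metadata.
# The "token_usage" field names follow the OpenAI-style payload and are an
# assumption about what your provider returns.
def summarize_usage(response_metadata: dict) -> str:
    usage = response_metadata.get("token_usage", {})
    prompt = usage.get("prompt_tokens", 0)
    completion = usage.get("completion_tokens", 0)
    return f"{prompt} in / {completion} out / {prompt + completion} total"

# Illustrative stand-in for result.response_metadata from a real invocation:
example_metadata = {"token_usage": {"prompt_tokens": 21, "completion_tokens": 148}}
print(summarize_usage(example_metadata))  # 21 in / 148 out / 169 total
```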
What Gets Captured
| Field | Description |
|---|---|
| Model | The AI model name |
| Temperature | Temperature parameter value |
| Max Tokens | Maximum tokens limit |
| Prompt Hash | SHA-256 hash of prompt |
| Tools | Available function calling tools |
| Response Time | API call duration |
| Token Usage | Input/output token counts |
| Cost | Estimated cost per request |
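The Prompt Hash field lets you spot identical prompts across traces without storing the prompt text itself. The exact canonicalization Decyra applies is not specified here, so the sketch below is only an illustration of the idea: hash a deterministic rendering of the role/content pairs with SHA-256.

```python
import hashlib

# Hypothetical sketch of a prompt hash: a SHA-256 digest of the rendered
# prompt. The "role: content" canonicalization is an assumption, not
# Decyra's documented scheme.
def prompt_hash(messages: list[tuple[str, str]]) -> str:
    canonical = "\n".join(f"{role}: {content}" for role, content in messages)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

digest = prompt_hash([
    ("system", "You are a helpful assistant"),
    ("human", "Explain machine learning"),
])
print(digest)  # 64-character hex digest; identical prompts yield identical hashes
```

Because the digest is deterministic, two traces with the same hash used byte-identical prompts, while any edit to the prompt produces a completely different value.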
Verify
Open your Decyra dashboard and navigate to the traces section. You'll see each chain invocation with complete details about the model interaction.