Haystack
Build production-ready NLP applications with Python
Haystack is an open-source framework for building production-ready NLP pipelines. Decyra captures every LLM interaction within your Haystack pipelines, giving you visibility into document processing and generation.
Prerequisites
- Decyra account with API key
- Python 3.8+ installed
- Haystack 2.x (the haystack-ai package) installed
Installation
```bash
pip install haystack-ai decyra-sdk
```
Integration
Configure OpenAIGenerator with Decyra's proxy:
```python
import os

from haystack.components.generators import OpenAIGenerator
from haystack.utils import Secret

# Route all OpenAI calls through Decyra's proxy so they are captured.
generator = OpenAIGenerator(
    api_key=Secret.from_env_var("DECYRA_API_KEY"),
    model="gpt-4",
    api_base_url="https://proxy.decyra.com/v1",
    generation_kwargs={
        "temperature": 0.7,
    },
    headers={
        "X-Decyra-API-Key": os.getenv("DECYRA_API_KEY"),
    },
)
```
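You can sanity-check the proxied generator on its own before wiring a pipeline. A minimal sketch, assuming DECYRA_API_KEY is set and the proxy URL above is reachable:

```python
# Call the generator directly; OpenAIGenerator returns "replies" and "meta".
reply = generator.run(prompt="Say hello in one short sentence.")
print(reply["replies"][0])
print(reply["meta"][0].get("usage"))
```

If this call succeeds, it should already show up as a trace in your Decyra dashboard.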
Create and run a pipeline:
```python
from haystack import Pipeline
from haystack.components.builders import PromptBuilder

prompt_template = """
Answer the following question: {{ question }}
"""
prompt_builder = PromptBuilder(template=prompt_template)

# Wire the prompt builder's output into the Decyra-proxied generator.
pipeline = Pipeline()
pipeline.add_component("prompt_builder", prompt_builder)
pipeline.add_component("llm", generator)
pipeline.connect("prompt_builder.prompt", "llm.prompt")

result = pipeline.run({
    "prompt_builder": {"question": "What is AI?"}
})
```
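The run output is keyed by component name, so the generator's replies and metadata are under llm (the name given in add_component):

```python
# Pipeline output is keyed by component name ("llm" from add_component above).
print(result["llm"]["replies"][0])

# Per-reply metadata includes the model, finish reason, and token usage.
print(result["llm"]["meta"][0])
```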
What Gets Captured
| Field | Description |
|---|---|
| Model | The AI model used |
| Temperature | Temperature parameter |
| Pipeline Component | Name of the component |
| Prompt Template | Template used for generation |
| Prompt Hash | Hash of rendered prompt |
| Response Time | Time for pipeline execution |
| Token Usage | Input/output tokens |
| Cost | Estimated API cost |
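Decyra's exact prompt-hashing scheme isn't documented here. If you want to correlate traces with prompts on your side, one illustrative approach is to hash the rendered prompt yourself; this is an assumption, not necessarily the hash shown in the dashboard:

```python
import hashlib

# Render the prompt exactly as the pipeline would, then hash it.
# NOTE: illustrative only; Decyra may compute its prompt hash differently.
rendered = prompt_builder.run(question="What is AI?")["prompt"]
print(hashlib.sha256(rendered.encode("utf-8")).hexdigest())
```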
Verify
Check your Decyra dashboard to see pipeline executions in the traces view. Each component interaction will be tracked separately.