LangChain.js
Orchestrate chains of LLM calls with TypeScript
LangChain.js lets you build applications with language models through composable chains. Routing model calls through Decyra's proxy captures every LLM interaction, giving you visibility into your LangChain workflows.
Prerequisites
- Decyra account with API key
- Node.js 18+ installed
- LangChain packages installed
Installation
npm install @langchain/openai @decyra/sdk langchain
Integration
Configure ChatOpenAI to use Decyra's proxy:
import { ChatOpenAI } from '@langchain/openai';
import { wrapOpenAI } from '@decyra/sdk';
const model = new ChatOpenAI({
  modelName: 'gpt-4',
  configuration: {
    baseURL: 'https://proxy.decyra.com/v1',
    defaultHeaders: {
      'X-Decyra-API-Key': process.env.DECYRA_API_KEY!,
    },
  },
});
// Wrap for enhanced tracking
const decyraModel = wrapOpenAI(model);
Create and invoke a chain:
import { ChatPromptTemplate } from '@langchain/core/prompts';
const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a helpful assistant'],
  ['human', '{input}'],
]);
const chain = prompt.pipe(decyraModel);
const result = await chain.invoke({
  input: 'What is the capital of France?',
});
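The same chain can also stream partial output. A minimal sketch using LangChain's standard `.stream()` runnable method, reusing the `prompt` and `decyraModel` from above and adding a `StringOutputParser` so chunks arrive as plain strings (the question text is illustrative):

```typescript
import { StringOutputParser } from '@langchain/core/output_parsers';

// Parse model output to plain strings so each streamed chunk is text.
const streamingChain = prompt.pipe(decyraModel).pipe(new StringOutputParser());

// stream() yields chunks as they arrive; the underlying model call is
// still routed through the Decyra proxy, so the trace is captured as usual.
const stream = await streamingChain.stream({
  input: 'Name three cities in France.',
});
for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```

Streaming and `invoke` produce the same trace data; only the delivery of the response differs.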
What Gets Captured
| Field | Description |
|---|---|
| Model | The AI model identifier |
| Temperature | Sampling temperature setting |
| Max Tokens | Maximum response tokens |
| Prompt Hash | Hash of the full prompt chain |
| Tools | List of tools/functions available |
| Response Time | Total chain execution time |
| Token Usage | Input/output token counts |
| Cost | Estimated API cost |
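The Temperature and Max Tokens fields are populated from your model configuration, so set them explicitly if you want them in your traces. A configuration sketch (the values shown are illustrative, not recommendations):

```typescript
import { ChatOpenAI } from '@langchain/openai';

// Explicit sampling settings appear in the corresponding trace fields.
const model = new ChatOpenAI({
  modelName: 'gpt-4',
  temperature: 0.2,   // captured as the Temperature field
  maxTokens: 512,     // captured as the Max Tokens field
  configuration: {
    baseURL: 'https://proxy.decyra.com/v1',
    defaultHeaders: {
      'X-Decyra-API-Key': process.env.DECYRA_API_KEY!,
    },
  },
});
```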
Verify
Open your Decyra dashboard and go to the Traces page. Each chain invocation appears as a trace with the full context of the underlying model call.