# Semantic Kernel

Integrate LLMs with your Python applications
Semantic Kernel enables building AI applications with plugins and planners. Decyra captures all kernel operations, providing visibility into your AI orchestration workflows.
## Prerequisites
- Decyra account with API key
- Python 3.8+ installed
- Semantic Kernel package installed
## Installation

```bash
pip install semantic-kernel decyra-sdk
```
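To confirm both packages installed, you can query the standard library for their versions (this only verifies installation, not configuration):

```python
from importlib.metadata import version

# Distribution names match the pip install command above
print(version("semantic-kernel"))
print(version("decyra-sdk"))
```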
## Integration

Configure the OpenAI connector to route all requests through the Decyra proxy. The snippets below assume Semantic Kernel's 1.x Python API, where `OpenAIChatCompletion` accepts a preconfigured `AsyncOpenAI` client; the proxy base URL and the `X-Decyra-API-Key` header are set on that client:

```python
import os

import semantic_kernel as sk
from openai import AsyncOpenAI
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

kernel = sk.Kernel()

# The OpenAI client appends /chat/completions itself, so the base URL
# stops at /v1; requests still reach proxy.decyra.com/v1/chat/completions.
client = AsyncOpenAI(
    api_key=os.getenv("DECYRA_API_KEY"),
    base_url="https://proxy.decyra.com/v1",
    default_headers={"X-Decyra-API-Key": os.getenv("DECYRA_API_KEY")},
)

service = OpenAIChatCompletion(
    service_id="openai",
    ai_model_id="gpt-4",
    async_client=client,
)
kernel.add_service(service)
```
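If a kernel call fails, it can help to exercise the same proxy route with the bare OpenAI client first. A minimal sketch using the same endpoint and header as above; it sends one throwaway request:

```python
import os

from openai import OpenAI

# Same proxy endpoint and headers as the kernel configuration above
client = OpenAI(
    api_key=os.getenv("DECYRA_API_KEY"),
    base_url="https://proxy.decyra.com/v1",
    default_headers={"X-Decyra-API-Key": os.getenv("DECYRA_API_KEY")},
)

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```

If this request appears in your Decyra dashboard but kernel invocations do not, the problem is in the kernel configuration rather than the proxy.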
Register a prompt function on the kernel and invoke it. In the 1.x Python API this is done with `add_function`, and `invoke` must run inside an async context:

```python
import asyncio

chat = kernel.add_function(
    plugin_name="chat",
    function_name="chat",
    prompt="You are a helpful assistant. {{$input}}",
)

async def main() -> None:
    # Keyword arguments bind to template variables such as {{$input}}
    result = await kernel.invoke(chat, input="Explain AI")
    print(result)

asyncio.run(main())
```
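Per-call sampling settings flow through the same path, so fields like Temperature in the table below reflect what you set here. A sketch continuing the example above, assuming the 1.x execution-settings API (the temperature and token values are illustrative):

```python
from semantic_kernel.connectors.ai.open_ai import OpenAIChatPromptExecutionSettings
from semantic_kernel.functions import KernelArguments

# Run inside the same async context as main() above
settings = OpenAIChatPromptExecutionSettings(temperature=0.2, max_tokens=256)
result = await kernel.invoke(chat, KernelArguments(settings=settings, input="Explain AI"))
print(result)
```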
## What Gets Captured

| Field | Description |
|---|---|
| Model | The AI model identifier (e.g. `gpt-4`) |
| Temperature | The sampling temperature for the request |
| Function Name | Name of the invoked kernel function |
| Prompt Template | The prompt template used |
| Prompt Hash | Hash of the fully rendered prompt |
| Response Time | Wall-clock time for the kernel execution |
| Token Usage | Tokens consumed by the request |
| Cost | Estimated API cost for the call |
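For reference, a rendered-prompt hash of the kind listed above can be computed like this. This is illustrative only; Decyra's actual hashing scheme is not documented here:

```python
import hashlib

def prompt_hash(rendered_prompt: str) -> str:
    # Stable digest of the rendered prompt text; useful for spotting
    # duplicate prompts across traces. Decyra's scheme may differ.
    return hashlib.sha256(rendered_prompt.encode("utf-8")).hexdigest()

print(prompt_hash("You are a helpful assistant. Explain AI"))
```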
## Verify
Visit your Decyra dashboard to see kernel operations in the traces view. Each function invocation will appear with full context.