## What's Included

- **Multi-Provider**: GPT-4o, Claude, Gemini, and Llama via OpenRouter
- **Chat Completions**: Streaming responses with function calling
- **Image Generation**: DALL-E 3, Stable Diffusion, Midjourney
- **Embeddings**: Vector embeddings for semantic search
- **Auto Fallbacks**: Automatic provider switching on errors
- **Usage Tracking**: Per-request cost and token tracking (sketched below)
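Usage data is surfaced through `getUsage` on the `useAI` hook (see React Hooks below). Here is a minimal sketch of a per-request cost badge; the `UsageStats` shape and whether `getUsage` is async are assumptions, not documented behavior, so check the SDK's actual return value:

```tsx
import { useEffect, useState } from 'react'
import { useAI } from '@sylphx/sdk/react'

// Assumed shape -- the SDK's real usage payload may differ.
type UsageStats = { totalTokens: number; totalCost: number }

function UsageBadge() {
  const { getUsage } = useAI()
  const [usage, setUsage] = useState<UsageStats | null>(null)

  useEffect(() => {
    // Promise.resolve works whether getUsage() is sync or async.
    Promise.resolve(getUsage()).then((u) => setUsage(u as UsageStats))
  }, [getUsage])

  if (!usage) return null
  return (
    <span>
      {usage.totalTokens} tokens / ${usage.totalCost.toFixed(4)}
    </span>
  )
}
```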
## Quick Start

Chat completions with streaming:
```tsx
import { useChat } from '@sylphx/sdk/react'
import { useState } from 'react'

function ChatComponent() {
  const [input, setInput] = useState('')
  const { messages, send, isLoading } = useChat({
    model: 'openai/gpt-4o', // or 'anthropic/claude-sonnet-4', 'google/gemini-2.0-flash'
    systemMessage: 'You are a helpful assistant.',
  })

  const handleSubmit = () => {
    if (input.trim()) {
      send(input)
      setInput('')
    }
  }

  return (
    <div>
      {messages.map((msg, i) => (
        <div key={i} className={msg.role}>
          {msg.content}
        </div>
      ))}
      <input
        value={input}
        onChange={(e) => setInput(e.target.value)}
        onKeyDown={(e) => e.key === 'Enter' && handleSubmit()}
        disabled={isLoading}
      />
    </div>
  )
}
```

## Unified API
Switch between providers by changing the `model` parameter. The SDK normalizes responses across all providers automatically.
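For example, the same component can target any provider; only the model string changes (IDs from the table below). A small sketch:

```tsx
import { useChat } from '@sylphx/sdk/react'

// One component, any provider: because the SDK normalizes the response
// shape, swapping the model string is the only change needed.
function ProviderChat({ model }: { model: string }) {
  const { messages, send, isLoading } = useChat({ model })
  // Works unchanged with 'openai/gpt-4o', 'anthropic/claude-sonnet-4',
  // 'google/gemini-2.0-flash', and so on.
  return (
    <button disabled={isLoading} onClick={() => send('Hello!')}>
      Ask {model} ({messages.length} messages)
    </button>
  )
}
```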
## Available Models

| Model | Provider | Description |
|---|---|---|
| `openai/gpt-4o` | OpenAI | Fastest GPT-4 class model |
| `openai/gpt-4o-mini` | OpenAI | Cost-effective GPT-4 class |
| `anthropic/claude-sonnet-4` | Anthropic | Best overall intelligence |
| `anthropic/claude-haiku-3.5` | Anthropic | Fast and affordable |
| `google/gemini-2.0-flash` | Google | 1M-token context window |
| `meta-llama/llama-3.3-70b-instruct` | Meta | Open-weight Llama 3.3 |
## React Hooks

```tsx
import {
  useChat,
  useCompletion,
  useEmbedding,
  useAI,
  useModels,
} from '@sylphx/sdk/react'

// Chat with conversation history
const { messages, send, isLoading, clear } = useChat({
  model: 'openai/gpt-4o',
  systemMessage: 'You are a helpful assistant.',
})

// Single completion (aliased to avoid redeclaring isLoading)
const { complete, completion, isLoading: isCompleting } = useCompletion({
  model: 'anthropic/claude-sonnet-4',
})

// Embeddings for semantic search
const { embed, embedMany, isLoading: isEmbedding } = useEmbedding({
  model: 'openai/text-embedding-3-small',
})

// Low-level AI access (chat, embed, vision, usage stats)
const { chat, embed: embedRaw, vision, getUsage } = useAI()

// Browse available models with filtering
const { models, setSearch, setCapability } = useModels({
  capability: 'chat',
  fetchOnMount: true,
})
```

| Hook | Type | Description |
|---|---|---|
| `useChat()` | Hook | Multi-turn chat with conversation history |
| `useCompletion()` | Hook | Single prompt completion |
| `useEmbedding()` | Hook | Generate vector embeddings |
| `useAI()` | Hook | Low-level access to chat, vision, embeddings, and usage |
| `useModels()` | Hook | Browse and filter available AI models |
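As a concrete use of `useEmbedding`, the sketch below ranks documents against a query with cosine similarity. It assumes `embed` resolves to a `number[]` and `embedMany` to a `number[][]`; verify against the SDK's actual return types.

```tsx
import { useEmbedding } from '@sylphx/sdk/react'

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

function useSemanticSearch(documents: string[]) {
  const { embed, embedMany } = useEmbedding({
    model: 'openai/text-embedding-3-small',
  })

  // Returns documents sorted by similarity to the query.
  return async (query: string) => {
    const [queryVec, docVecs] = await Promise.all([
      embed(query),
      embedMany(documents),
    ])
    return documents
      .map((text, i) => ({ text, score: cosine(queryVec, docVecs[i]) }))
      .sort((a, b) => b.score - a.score)
  }
}
```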
## Server-Side Usage

Use AI in API routes and server actions:
```ts
// app/api/chat/route.ts
import { ai } from '@sylphx/sdk/server'
import { streamText } from 'ai'

export async function POST(req: Request) {
  const { messages } = await req.json()

  const result = await streamText({
    model: ai.model('openai/gpt-4o'),
    messages,
  })

  return result.toDataStreamResponse()
}
```
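Since `ai.model()` plugs into the AI SDK's `streamText` above, other AI SDK helpers should accept it too. A sketch of a non-streaming server action using `generateText`, assuming the adapter works anywhere an AI SDK model is accepted (the file path is illustrative):

```ts
// app/actions.ts -- hypothetical location for a server action
'use server'

import { ai } from '@sylphx/sdk/server'
import { generateText } from 'ai'

export async function summarize(text: string): Promise<string> {
  // Same ai.model() adapter as the streaming route, non-streaming helper.
  const { text: summary } = await generateText({
    model: ai.model('anthropic/claude-sonnet-4'),
    prompt: `Summarize this in two sentences:\n\n${text}`,
  })
  return summary
}
```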