
AI

Unified AI APIs for text, images, and embeddings. Access GPT-4o, Claude, Gemini, and more through a single interface with automatic fallbacks and cost tracking.

What's Included

Multi-Provider

GPT-4o, Claude, Gemini, Llama via OpenRouter

Chat Completions

Streaming responses with function calling

Image Generation

DALL-E 3, Stable Diffusion, Midjourney

Embeddings

Vector embeddings for semantic search

Auto Fallbacks

Automatic provider switching on errors

Usage Tracking

Per-request cost and token tracking
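The auto-fallback behavior can be illustrated with a generic wrapper that tries models in order and moves to the next on error. This is a minimal sketch of the pattern, not the SDK's internal implementation; `withFallback` is a hypothetical name.

```typescript
// Sketch of the auto-fallback pattern: attempt each model in order,
// falling through to the next provider on error. Illustrative only --
// the SDK performs this switching internally.
async function withFallback<T>(
  models: string[],
  run: (model: string) => Promise<T>,
): Promise<T> {
  let lastError: unknown
  for (const model of models) {
    try {
      return await run(model)
    } catch (err) {
      lastError = err // this provider failed; try the next one
    }
  }
  throw lastError
}
```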

Quick Start

Chat completions with streaming:

import { useChat } from '@sylphx/sdk/react'
import { useState } from 'react'

function ChatComponent() {
  const [input, setInput] = useState('')
  const { messages, send, isLoading } = useChat({
    model: 'openai/gpt-4o', // or 'anthropic/claude-sonnet-4', 'google/gemini-2.0-flash'
    systemMessage: 'You are a helpful assistant.',
  })

  const handleSubmit = () => {
    if (input.trim()) {
      send(input)
      setInput('')
    }
  }

  return (
    <div>
      {messages.map((msg, i) => (
        <div key={i} className={msg.role}>
          {msg.content}
        </div>
      ))}
      <input
        value={input}
        onChange={(e) => setInput(e.target.value)}
        onKeyDown={(e) => e.key === 'Enter' && handleSubmit()}
        disabled={isLoading}
      />
    </div>
  )
}

Unified API

Switch between providers by changing the model parameter. The SDK normalizes responses across all providers automatically.
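Model identifiers follow a `provider/model` naming convention, so switching providers is a one-string change. The helper below shows how such an identifier splits into its parts; `parseModelId` is illustrative, not part of the SDK API.

```typescript
// Illustrative sketch: model IDs use the "provider/model" convention
// (e.g. 'openai/gpt-4o', 'anthropic/claude-sonnet-4').
interface ModelId {
  provider: string
  model: string
}

function parseModelId(id: string): ModelId {
  const slash = id.indexOf('/')
  if (slash === -1) throw new Error(`Invalid model id: ${id}`)
  // Everything before the first slash is the provider; the rest is
  // the model name (which may itself contain dashes or dots).
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) }
}
```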

Available Models

Model                              Provider   Description
openai/gpt-4o                      OpenAI     Fastest GPT-4 class model
openai/gpt-4o-mini                 OpenAI     Cost-effective GPT-4 class
anthropic/claude-sonnet-4          Anthropic  Best overall intelligence
anthropic/claude-haiku-3.5         Anthropic  Fast and affordable
google/gemini-2.0-flash            Google     1M context window
meta-llama/llama-3.3-70b-instruct  Meta       Open-weight Llama 3.3

React Hooks

import {
  useChat,
  useCompletion,
  useEmbedding,
  useAI,
  useModels,
} from '@sylphx/sdk/react'

// Chat with conversation history
const { messages, send, isLoading, clear } = useChat({
  model: 'openai/gpt-4o',
  systemMessage: 'You are a helpful assistant.',
})

// Single completion (isLoading aliased to avoid clashing with useChat above)
const { complete, completion, isLoading: isCompleting } = useCompletion({
  model: 'anthropic/claude-sonnet-4',
})

// Embeddings for semantic search
const { embed, embedMany, isLoading: isEmbedding } = useEmbedding({
  model: 'openai/text-embedding-3-small',
})

// Low-level AI access (chat, embed, vision, usage stats);
// embed aliased to avoid clashing with useEmbedding above
const { chat, embed: embedRaw, vision, getUsage } = useAI()

// Browse available models with filtering
const { models, setSearch, setCapability } = useModels({
  capability: 'chat',
  fetchOnMount: true,
})
Hook             Type  Description
useChat()        Hook  Multi-turn chat with conversation history
useCompletion()  Hook  Single prompt completion
useEmbedding()   Hook  Generate vector embeddings
useAI()          Hook  Low-level access to chat, vision, embeddings, and usage
useModels()      Hook  Browse and filter available AI models
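Embeddings enable semantic search by comparing vectors: documents whose embeddings point in a similar direction to the query's embedding are more relevant. A self-contained cosine-similarity ranking sketch in plain TypeScript; in practice the vectors would come from embed or embedMany.

```typescript
// Cosine similarity: dot product of two vectors divided by the
// product of their magnitudes. Ranges from -1 (opposite) to 1 (same direction).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Rank document vectors by similarity to a query vector,
// returning document indices from most to least similar.
function rankBySimilarity(query: number[], docs: number[][]): number[] {
  return docs
    .map((vec, i) => ({ i, score: cosineSimilarity(query, vec) }))
    .sort((x, y) => y.score - x.score)
    .map((x) => x.i)
}
```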

Server-Side Usage

Use AI in API routes and server actions:

app/api/chat/route.ts
import { ai } from '@sylphx/sdk/server'
import { streamText } from 'ai'

export async function POST(req: Request) {
  const { messages } = await req.json()

  const result = await streamText({
    model: ai.model('openai/gpt-4o'),
    messages,
  })

  return result.toDataStreamResponse()
}
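On the client, the streamed response from a route like this can be read incrementally with the standard Streams API. A minimal sketch: `readStream` collects decoded chunks into one string (a real UI would update state per chunk); the fetch call assumes the route above is deployed at /api/chat.

```typescript
// Consume a streaming response body chunk by chunk, decoding
// UTF-8 incrementally so multi-byte characters split across
// chunk boundaries are handled correctly.
async function readStream(body: ReadableStream<Uint8Array>): Promise<string> {
  const reader = body.getReader()
  const decoder = new TextDecoder()
  let text = ''
  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    text += decoder.decode(value, { stream: true })
  }
  return text + decoder.decode() // flush any buffered bytes
}

// Usage against the route above (illustrative):
// const res = await fetch('/api/chat', {
//   method: 'POST',
//   body: JSON.stringify({ messages }),
// })
// const text = await readStream(res.body!)
```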