Typhoon Documentation & Developer Guide
👋 Welcome to Typhoon Documentation
Explore Typhoon — Thai-first, multilingual large language models optimized for practical deployment and developer accessibility. Whether you’re prototyping in a notebook, scaling an AI product, or just getting started with automation, you’ll find everything you need here.
🗺️ Overview: Where & How to Access Typhoon Models
Typhoon is available across multiple platforms. Choose what fits your workflow and infrastructure:
0. Web Playground 🌐
Best for: Fast experimentation and prompt design
No setup required — just start typing. Our playgrounds support both text and multimodal input.
1. API-based Access 🔌
Best for: Web apps, agents, automation workflows, and scalable integrations
Covers any setup where you send a request over HTTP and receive a response, regardless of where the model is hosted. Options include:
Typhoon API (hosted by Typhoon team)
Together.ai (official partner)
OpenRouter (community router)
Cloud Marketplaces (e.g. AWS, Azure)
1.1 Official APIs
Typhoon API ✅ (Free Tier)
Key Features
- OpenAI-Compatible API: Drop-in replacement for existing OpenAI-based apps
- Streaming Support: Token-by-token real-time responses
- Zero cost for light usage — ideal for startups and prototyping
- Hosted and maintained by Typhoon
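Because the endpoint is OpenAI-compatible, the standard `openai` Python client works after changing only the base URL. A minimal streaming sketch, assuming the base URL `https://api.opentyphoon.ai/v1` (verify it in the Quickstart Guide), a key from the Typhoon Playground, and a model ID from the table below:

```python
# Minimal streaming sketch against the OpenAI-compatible Typhoon API.
# Assumptions: base URL https://api.opentyphoon.ai/v1 and an API key stored in
# the TYPHOON_API_KEY environment variable. Requires `pip install openai`.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["TYPHOON_API_KEY"],
    base_url="https://api.opentyphoon.ai/v1",  # assumed endpoint; see the Quickstart Guide
)

# Stream the reply token by token (model ID taken from the table below).
stream = client.chat.completions.create(
    model="typhoon-v2.1-12b-instruct",
    messages=[{"role": "user", "content": "สวัสดีครับ ช่วยแนะนำตัวหน่อย"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```

The same snippet works against any other OpenAI-compatible host in this guide by swapping `base_url`, the API key, and the model name.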
Available Models
Model ID | Size | Description | Context Window | Rate Limits | Release Date |
---|---|---|---|---|---|
typhoon-v2.1-12b-instruct | 12B | Best model for instruction-following, tool use, and Thai NLP | 56K tokens | 5 req/s, 200 req/m | 2025-05-05 |
typhoon-v2-70b-instruct | 70B | SOTA Thai performance in large model class | 8K tokens | 5 req/s, 50 req/m | 2024-12-19 |
typhoon-v2-r1-70b-preview | 70B | Enhanced reasoning from DeepSeek R1 distillation + SFT | 8K tokens | 5 req/s, 50 req/m | 2025-03-17 |
We also offer an API for our OCR model; see the OCR tab for more information.
Typhoon API Pro via Together.ai
If you’re looking to scale Typhoon in your business or product with guaranteed availability, high throughput, and simple billing, Together.ai is our official cloud partner.
✅ Why Together.ai?
- Production-ready infrastructure: optimized for low latency and stability
- Almost no rate limits: deploy large-scale agents or multi-user apps
- Simple pay-as-you-go billing: scale from startup to enterprise
🔧 Available Typhoon Models on Together.ai
Model | Size | Description | Endpoint |
---|---|---|---|
Typhoon 2 70B Instruct | 70B | Flagship Thai model for robust comprehension and generation | 🔗 Endpoint |
📘 Read our quickstart guide
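The same OpenAI-compatible pattern applies on Together.ai; only the base URL, API key, and model slug change. A sketch, with the model slug as a placeholder (copy the exact slug from the endpoint linked in the table above):

```python
# Sketch of a Typhoon call through Together.ai's OpenAI-compatible API.
# Assumptions: an API key in TOGETHER_API_KEY; the model slug below is a
# placeholder -- use the exact slug from the endpoint page linked above.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["TOGETHER_API_KEY"],
    base_url="https://api.together.xyz/v1",  # Together.ai endpoint
)

response = client.chat.completions.create(
    model="scb10x/typhoon-2-70b-instruct",  # placeholder slug
    messages=[{"role": "user", "content": "แปลประโยคนี้เป็นภาษาอังกฤษ: วันนี้ฝนตกหนักมาก"}],
)
print(response.choices[0].message.content)
```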
1.2 Third-party Routers and Marketplaces
OpenRouter 🔁
OpenRouter is a rising favorite among developers for its flexibility and ease of use. Typhoon models are now available on OpenRouter, making it easy to:
- 🔄 Compare Typhoon with other top open-source models
- ⚙️ Plug directly into automation tools like n8n, Zapier, and Make.com
- 🧩 Use Typhoon in custom chatbot frameworks and no-code/low-code agents

This is ideal for Thai startups building agents, lead-generation bots, or content automation flows (see the request sketch below).
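For teams wiring Typhoon into automation tools, the sketch below shows the kind of raw HTTP request that n8n, Zapier, or Make send to OpenRouter under the hood. The model slug is a placeholder; look up the current Typhoon slug in the OpenRouter model list.

```python
# Sketch of a raw chat-completions request routed through OpenRouter.
# Assumptions: an API key in OPENROUTER_API_KEY; the model slug is a placeholder.
# Requires `pip install requests`.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "scb10x/llama3.1-typhoon2-70b-instruct",  # placeholder slug
        "messages": [{"role": "user", "content": "ร่างอีเมลติดตามลูกค้าแบบสุภาพ"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```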
Cloud Marketplace Access ☁️
Deploy Typhoon in your existing cloud environment.
- Azure AI Model Catalog: Typhoon 1.5 available (Typhoon 2 models coming soon)
- AWS Marketplace: Coming soon
Please refer to respective platforms for deployment and billing setup.
2. Run Locally 💻
Use Typhoon entirely offline, which is great for R&D, privacy-sensitive use, or enterprise IT environments. The following tools support local use:
🤗 Hugging Face
- Fine-tune and run Typhoon models with the transformers library (see the sketch after the table below)
- Use in offline, on-premise settings
- Access models in GGUF, Safetensors, and standard formats
Model Family | Sizes | Use Case |
---|---|---|
Typhoon2.1-Text | 4B, 12B | Latest instruction & multilingual model |
Typhoon2-Text | 1B–70B | Versatile Thai-first models |
Typhoon2-R1 | 70B | Advanced reasoning with SFT + DeepSeek merge |
Typhoon T1 | 3B | Thai reasoning model, research preview |
Typhoon2-Audio | 8B | End-to-end speech-to-speech |
Typhoon2-Vision | 7B | Vision-Language model for image/video input |
🔗 Explore Typhoon on Hugging Face
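As a starting point, here is a minimal `transformers` sketch for running a Typhoon instruct model locally. The repository ID is an assumption; pick the exact ID from the collection linked above.

```python
# Minimal local-inference sketch with Hugging Face transformers.
# Assumptions: the repo ID below is illustrative -- pick the exact one from the
# Typhoon collection. Requires `pip install transformers accelerate torch`.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="scb10x/llama3.1-typhoon2-8b-instruct",  # assumed repo ID
    torch_dtype=torch.bfloat16,
    device_map="auto",  # needs the accelerate package
)

messages = [
    {"role": "system", "content": "You are a helpful Thai assistant."},
    {"role": "user", "content": "อธิบายคำว่า 'โมเดลภาษา' ให้เด็กสิบขวบฟังหน่อย"},
]
output = pipe(messages, max_new_tokens=256)
print(output[0]["generated_text"][-1]["content"])  # the assistant's reply
```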
LM Studio & Ollama
Use Typhoon on your laptop with local tools. No GPU? No problem — use quantized models with CPU inference.
- LM Studio: GUI app for offline use (GGUF format)
- Ollama: Run with one CLI command (Python client sketch below)
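If you prefer scripting over the CLI, Ollama also exposes a Python client. A sketch, assuming you have already pulled or imported a Typhoon GGUF build (the model tag below is a placeholder):

```python
# Sketch of chatting with a locally served Typhoon model via Ollama's Python client.
# Assumptions: the Ollama server is running and a Typhoon GGUF model has been
# pulled or created from a Modelfile; the tag below is a placeholder.
# Requires `pip install ollama`.
import ollama

response = ollama.chat(
    model="typhoon",  # placeholder tag; use the name of the model you pulled or created
    messages=[{"role": "user", "content": "ช่วยสรุปข้อดีของการรันโมเดลแบบออฟไลน์"}],
)
print(response["message"]["content"])
```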
🧪 Get Started with Typhoon API
Jump in and start building with Thai-first LLMs in just a few steps:
- 🔑 Create your API key: Sign up on the Typhoon Playground to generate your free API key.
- 🚀 Follow the Quickstart Guide: Learn how to make your first API call with step-by-step instructions.
- 💡 Explore real-world use cases + source code: See how you can use Typhoon in business, automation, and research.
- 💻 Browse example code and templates: Speed up your integration with ready-to-use API calls and workflows.
🧑🤝🧑 Support and Community
- 📧 Email: contact@opentyphoon.ai
- 💬 Discord: Join the Typhoon Community
- 📄 Papers: Typhoon | Typhoon 2 | Typhoon T1 | Typhoon R1