Models

OpenTyphoon.ai offers several models optimized for Thai language understanding and generation. Each model has different capabilities, performance characteristics, and rate limits.

Available Models

| Model ID | Size | Description | Context Window | Rate Limits | Release Date |
|---|---|---|---|---|---|
| typhoon-v2-8b-instruct | 8B | Improved model for general Thai language tasks | 8K tokens | 5 req/s, 50 req/min | 2024-12-19 |
| typhoon-v2-70b-instruct | 70B | Our most powerful model with advanced Thai language capabilities | 8K tokens | 5 req/s, 50 req/min | 2024-12-19 |
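
The sketch below shows how a model ID from the table is passed to a chat completion request. It assumes an OpenAI-compatible chat completions endpoint at https://api.opentyphoon.ai/v1 and an API key stored in a TYPHOON_API_KEY environment variable; both the base URL and the variable name are illustrative assumptions, not guaranteed values.

```python
# Minimal sketch: selecting a model by its Model ID.
# Assumptions: OpenAI-compatible endpoint at api.opentyphoon.ai/v1 and an API key
# exported as TYPHOON_API_KEY (both illustrative, check your account settings).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.opentyphoon.ai/v1",  # assumed OpenAI-compatible base URL
    api_key=os.environ["TYPHOON_API_KEY"],     # assumed environment variable
)

response = client.chat.completions.create(
    model="typhoon-v2-8b-instruct",  # or "typhoon-v2-70b-instruct"
    messages=[
        # "Hello, please introduce yourself."
        {"role": "user", "content": "สวัสดีครับ ช่วยแนะนำตัวหน่อย"},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```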

Model Details

typhoon-v2-70b-instruct

Our flagship model, typhoon-v2-70b-instruct, is designed for advanced Thai language understanding and generation. It excels at:

  • Complex Thai language understanding and generation
  • Following detailed instructions in both Thai and English
  • Maintaining context over longer conversations
  • Code understanding and generation
  • Creative writing and content generation in Thai

Best For: Applications requiring the highest quality Thai language understanding and generation, including customer support, content creation, and complex question answering.
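
To illustrate the point about maintaining context over longer conversations, here is a hedged multi-turn sketch that reuses the `client` object from the example above; the prompts and flow are hypothetical.

```python
# Hypothetical multi-turn conversation with typhoon-v2-70b-instruct.
# The full message history is sent on each call so the model keeps the
# earlier turns in context.
messages = [
    {"role": "system", "content": "You are a helpful Thai-speaking assistant."},
    # "Explain how HTTP caching works."
    {"role": "user", "content": "อธิบายการทำงานของ HTTP cache ให้หน่อย"},
]
reply = client.chat.completions.create(model="typhoon-v2-70b-instruct", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# "Now summarize that briefly in English." (follow-up relying on prior context)
messages.append({"role": "user", "content": "ช่วยสรุปเป็นภาษาอังกฤษสั้น ๆ"})
reply = client.chat.completions.create(model="typhoon-v2-70b-instruct", messages=messages)
print(reply.choices[0].message.content)
```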

typhoon-v2-8b-instruct

A smaller but still powerful model, typhoon-v2-8b-instruct offers good performance for general Thai language tasks with lower latency and cost:

  • Good Thai language understanding and generation
  • Following instructions in both Thai and English
  • Basic coding tasks
  • Content generation and summarization

Best For: Applications requiring good Thai language capabilities with lower latency and higher throughput, such as real-time assistants and high-volume applications.
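
For real-time assistants, streaming keeps perceived latency low by printing tokens as they arrive. The sketch below assumes the same OpenAI-compatible `client` as in the first example; the prompt is illustrative.

```python
# Hypothetical streaming sketch with typhoon-v2-8b-instruct for a
# low-latency, real-time assistant.
stream = client.chat.completions.create(
    model="typhoon-v2-8b-instruct",
    # "Briefly summarize today's technology news."
    messages=[{"role": "user", "content": "สรุปข่าวเทคโนโลยีวันนี้ให้สั้น ๆ"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # delta can be None on some chunks
        print(delta, end="", flush=True)
```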

Choosing the Right Model

Consider these factors when choosing a model:

  • Task Complexity: More complex tasks benefit from the larger model
  • Response Speed: The smaller model responds with lower latency
  • Rate Limits: Consider your application’s request volume
  • Quality Requirements: The larger model generally produces higher-quality output

For most applications requiring high-quality Thai language understanding and generation, we recommend using typhoon-v2-70b-instruct. If you need higher throughput or have simpler requirements, consider using typhoon-v2-8b-instruct.
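
Whichever model you choose, the published limits (5 requests per second, 50 per minute) mean high-volume applications should handle HTTP 429 responses gracefully. Below is a minimal retry sketch; the `RateLimitError` class comes from the openai SDK, while the function name and backoff schedule are illustrative assumptions.

```python
# Minimal sketch: exponential backoff when the API signals a rate limit (HTTP 429).
# Hypothetical helper; tune max_retries and delays to your own traffic profile.
import time
from openai import RateLimitError

def complete_with_backoff(client, model, messages, max_retries=5):
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except RateLimitError:
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... before retrying
    raise RuntimeError("rate limit retries exhausted")
```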

Research and Development

Our models are built on extensive research in Thai language processing. You can learn more about the technical details in our research papers.

Future Models

We’re continuously working on improving our models and will release new versions as they become available. Stay updated by following our blog or joining our Discord community.