Quick Start Guide
This guide will help you get started with the OpenTyphoon.ai API quickly. The API is compatible with OpenAI’s API format, making it easy to integrate if you’re already familiar with OpenAI.
Get your API Key
- Sign up at OpenTyphoon.ai
- Navigate to the API Keys section in your dashboard
- Create a new API key
- Store your API key securely - it won’t be shown again!
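One way to keep the key out of your source code is to load it from an environment variable. The sketch below uses `TYPHOON_API_KEY` as the variable name, which is our own choice for illustration, not an official convention:

```python
import os

def load_api_key(var_name: str = "TYPHOON_API_KEY") -> str:
    """Return the API key from the environment, failing loudly if it is missing."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set the {var_name} environment variable first")
    return key
```

You can then pass `api_key=load_api_key()` when constructing the client instead of hard-coding the key.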
Make your first API call
The same request is shown below in cURL, Python, and JavaScript.
```bash
curl --location 'https://api.opentyphoon.ai/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --data '{
    "model": "typhoon-v2-70b-instruct",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant. You must answer only in Thai."
      },
      {
        "role": "user",
        "content": "ขอสูตรไก่ย่าง"
      }
    ],
    "max_tokens": 512,
    "temperature": 0.6,
    "top_p": 0.95,
    "repetition_penalty": 1.05,
    "stream": false
  }'
```
```python
from openai import OpenAI

# Initialize the client with your API key and the OpenTyphoon base URL
client = OpenAI(
    api_key="<YOUR_API_KEY>",
    base_url="https://api.opentyphoon.ai/v1",
)

# Make a completion request
response = client.chat.completions.create(
    model="typhoon-v2-70b-instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant. You must answer only in Thai."},
        {"role": "user", "content": "ขอสูตรไก่ย่าง"},
    ],
    max_tokens=512,
    temperature=0.6,
)

# Print the response
print(response.choices[0].message.content)
```
```javascript
import OpenAI from 'openai';

// Initialize the client with your API key and the OpenTyphoon base URL
const openai = new OpenAI({
  apiKey: '<YOUR_API_KEY>',
  baseURL: 'https://api.opentyphoon.ai/v1',
});

async function main() {
  const response = await openai.chat.completions.create({
    model: 'typhoon-v2-70b-instruct',
    messages: [
      { role: 'system', content: 'You are a helpful assistant. You must answer only in Thai.' },
      { role: 'user', content: 'ขอสูตรไก่ย่าง' },
    ],
    max_tokens: 512,
    temperature: 0.6,
  });

  console.log(response.choices[0].message.content);
}

main();
```
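The cURL example sets `"stream": false`; the OpenAI-compatible format also supports streamed responses via `stream=True`. Below is a small helper, as a sketch, for collecting the text deltas of a streamed chat completion. It assumes the chunk shape used by the OpenAI Python SDK (`chunk.choices[0].delta.content`):

```python
def collect_stream(chunks):
    """Concatenate the text deltas of a streamed chat completion into one string."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta is typically None
            parts.append(delta)
    return "".join(parts)
```

With the Python client above, you would call `client.chat.completions.create(..., stream=True)` and pass the returned iterator to `collect_stream`, or print each delta as it arrives for a typewriter effect.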
Recommended Parameter Settings
For optimal results with Typhoon models, we recommend the following parameter settings:
| Parameter | Recommended Value | Description |
|---|---|---|
| `temperature` | 0.6 | Controls randomness. Use lower values (around 0.2) for factual, consistent responses and higher values (0.8+) for more creative ones. |
| `max_tokens` | 512 | Adjust based on how long you expect the response to be. |
| `top_p` | 0.95 | Nucleus sampling; an alternative to temperature for controlling randomness. |
| `repetition_penalty` | 1.05 | Discourages repetitive text. Increase slightly (1.1-1.2) if you notice repetition. |
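One way to apply these settings consistently is to keep them in a single dictionary and merge per-request overrides on top. `build_request` below is an illustrative helper of our own, not part of any SDK:

```python
# Recommended defaults from the table above, collected in one place so every
# request uses them consistently.
RECOMMENDED_PARAMS = {
    "max_tokens": 512,
    "temperature": 0.6,
    "top_p": 0.95,
    "repetition_penalty": 1.05,
}

def build_request(messages, model="typhoon-v2-70b-instruct", **overrides):
    """Build a chat-completion payload from the recommended defaults,
    letting keyword arguments override individual parameters."""
    payload = {"model": model, "messages": messages, **RECOMMENDED_PARAMS}
    payload.update(overrides)
    return payload
```

The resulting dictionary can be sent as the JSON body of a raw HTTP request, as in the cURL example. If you use the OpenAI Python SDK, note that `repetition_penalty` is not a standard argument of `chat.completions.create`, so you may need to pass it via `extra_body` instead.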
Next Steps
- Check out our API Reference for detailed endpoint documentation
- Explore Examples for common use cases
- Read about Prompting to get better results
- Learn about Models to understand which model is best for your needs