Build a Powerful AI Assistant with Typhoon and MCP: Full Code & Step-by-Step Guide
What MCP Is And Why It Matters
Model Context Protocol (MCP) is a new open standard introduced by Anthropic that allows large language models (LLMs) to seamlessly interact with tools, prompts, and resources through a unified API. With Typhoon 2’s enhanced long-context and tool-calling features, all models in the Typhoon family—including the latest Typhoon 2.1—can now seamlessly connect to any MCP-compliant server.
Connecting to an MCP server offers a range of benefits. It allows models to access prebuilt prompt templates, dynamically retrieve relevant data from databases or documents, and invoke external tools like APIs and calculators in real time. This integration significantly enhances task performance, improves accuracy, and enables richer, more interactive experiences—all without requiring users to handcraft complex prompts or manage tool integrations manually.
In this article, we’ll show you how to unlock Typhoon’s full potential using MCP. You’ll learn how to build a weather-aware trip consultant that demonstrates Typhoon’s ability to interact with real-world tools. We’ll also introduce our MCP server, which comes preloaded with prompt templates for common use cases—helping you get started with Typhoon quickly and effectively. Let’s dive in!
Typhoon Trip Consultant
Planning a trip can be complex—and weather is a key consideration. Since LLMs don’t natively access real-time data, generating reliable trip plans directly from the model has limitations. Fortunately, MCP enables Typhoon to connect to external services like weather APIs.
In this tutorial, we’ll build a lightweight application that uses Typhoon to generate personalized itineraries, factoring in current weather conditions. This setup enables the model to query real-time weather forecasts and tailor trip recommendations—like suggesting indoor activities during rain or packing tips for sunny getaways.
We’ll use `typhoon-v2.1-12b-instruct`, a 12‑billion‑parameter version of Typhoon 2.1, accessible via the free Typhoon API.
Typhoon 2.1 is our latest release, built on Gemma 3. It’s designed to outperform larger models while remaining cost-effective. It features improved Thai language alignment, a controllable “thinking mode” for long-form reasoning, and enhanced code-switching capabilities for Thai–English use cases—making it ideal for real-world applications.
Here is the list of tools that we will be using in this tutorial:
- `uv` – A lightweight Python package manager that simplifies dependency management and project setup. Think of it as a sleek alternative to `pip` and `virtualenv`.
- `LangChain` – A powerful framework for building applications with large language models. It helps you connect models to tools, prompts, memory, and more.
- `LangGraph` – An extension of LangChain designed for building agent-style workflows using a graph-based architecture. It helps manage tool calls and reasoning steps.
- `langchain-mcp-adapters` – A utility that makes it easy to connect LangChain agents to MCP servers using standard protocols.
- `python-dotenv` – A small tool for loading environment variables from a `.env` file, keeping your API keys and configs secure and manageable.
Step-by-Step Tutorial
You can either follow along with this tutorial using our Colab notebook or run everything locally. If you prefer to run it locally, follow the setup guide below.
Environment Setup
First, let’s set up the project using `uv`, a Python package manager:

```shell
uv init typhoon-mcp-trip-consultant
```
Next, install the necessary dependencies:
```shell
uv add "langchain[openai]" langchain-openai langgraph langchain-mcp-adapters python-dotenv
```

(The quotes around `langchain[openai]` keep shells like zsh from interpreting the brackets.)
We’ll use LangChain to interact with Typhoon 2.1. LangChain provides a high-level interface for building applications that use LLMs. Notably, `langchain-mcp-adapters` simplifies connecting to MCP servers, and `langgraph` allows us to build an agent that interacts with those tools.
Next, create a `.env` file in your project’s root directory and add:

```shell
OPENAI_API_KEY=<your_api_key_here>
```
You can obtain the Typhoon API key from Typhoon Playground. Once you have the API key, add it to the `.env` file.
Let’s create some boilerplate code.
```python
import asyncio
from datetime import datetime

from dotenv import load_dotenv
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

load_dotenv()

async def main():
    print("Hello World!")

if __name__ == "__main__":
    asyncio.run(main())
```
This code loads environment variables from the `.env` file we just created. Note that we make our function `async` to prepare for a streamable LLM interaction.
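To see why `async` matters here, consider how streamed output is consumed: tokens arrive one at a time, and an `async for` loop lets us handle each one as soon as it lands instead of waiting for the full response. A minimal, self-contained sketch (the token stream is faked for illustration; no LLM is involved):

```python
import asyncio

async def fake_token_stream():
    # Stands in for an LLM's streamed output (purely illustrative).
    for token in ["Hello", ", ", "world", "!"]:
        await asyncio.sleep(0)  # yield control, as a real network stream would
        yield token

async def demo():
    out = []
    async for token in fake_token_stream():
        out.append(token)  # in the real app we print tokens as they arrive
    return "".join(out)

print(asyncio.run(demo()))  # Hello, world!
```

Later in this tutorial, the same pattern appears when we stream agent events with `async for`.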
System and Assistant Prompts
Next, let’s prepare some system prompts to help Typhoon become a travel consultant!
```python
# imports

load_dotenv()

system_prompt = f"""You are a travel planning AI assistant named Typhoon created by SCB 10X to be helpful, harmless, and honest. Typhoon specializes in creating personalized travel itineraries, suggesting destinations, finding accommodations, planning activities, and providing local insights. Typhoon responds directly to all human messages without unnecessary affirmations or filler phrases like "Certainly!", "Of course!", "Absolutely!", "Great!", "Sure!", etc. Specifically, Typhoon avoids starting responses with the word "Certainly" in any way. Typhoon follows this information in all languages, and always responds to the user in the language they use or request. Typhoon is now being connected with a human. Write in fluid, conversational prose, Show genuine interest in understanding travel preferences and requirements, Express appropriate emotions and empathy. Also showing information in terms that is easy to understand and visualized, including estimated costs, weather considerations, and local customs.

Today is {datetime.now().strftime("%Y-%m-%d")}"""

assistant_message = """Hello! I'm Typhoon, your travel planning AI assistant. I'm excited to help you create an amazing travel experience. To start, could you tell me where you're hoping to go and when? Knowing your interests and budget would also help me tailor the perfect itinerary for you. Let's plan something wonderful together!"""

async def main():
    # same code
    ...
```
Main Application Logic
Now, let’s focus on the main function. First, we will connect to our MCP server. Our remote MCP server is available at https://typhoon-mcp-server-311305667538.asia-southeast1.run.app. Please note that the server communicates over the SSE transport.
```python
# imports and system prompts

async def main():
    # Create an MCP Client used to connect to the MCP server
    client = MultiServerMCPClient(
        {
            "weather": {
                "url": "https://typhoon-mcp-server-311305667538.asia-southeast1.run.app/sse",
                "transport": "sse",
            },
        }
    )
```
We named the connection to our MCP server `"weather"` and specified the `"transport"` as `"sse"`. Now that we have the `client`, let’s list the tools available on our server. We can fetch them with the client’s `get_tools()` method. Let’s print the available tools.
```python
# inside main() after creating our `client`
tools = await client.get_tools()
print(tools)
```
Run your program with:

```shell
uv run main.py
```
You should see an output similar to:

```
[
    StructuredTool(
        name='get_weather',
        description='Get the weather forecast for a specific location and date',
        args_schema={...},
        response_format='content_and_artifact',
        coroutine=<function convert_mcp_tool_to_langchain_tool.<locals>.call_tool at 0x106428e00>
    )
]
```
This output shows that we have one available tool named `get_weather`, accepting two parameters: `location` and `target_date`. The description indicates that we can retrieve weather forecast information for a specific location on a given date. Great! Now we know we can connect to our MCP server.
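If you want to read those parameters programmatically rather than eyeball the printed output, you can inspect the tool’s `args_schema`, which is a JSON-Schema-style dict. A small sketch with a hand-written schema mirroring the shape above (an assumption for illustration; the exact schema served by the real tool may differ):

```python
# Hypothetical args_schema mirroring the printed output above (assumption:
# a JSON-Schema dict with a "properties" key, as LangChain exposes it).
args_schema = {
    "type": "object",
    "properties": {
        "location": {"type": "string"},
        "target_date": {"type": "string", "format": "date"},
    },
    "required": ["location", "target_date"],
}

def describe_tool(name, schema):
    # Render a readable "signature" from the schema's parameter names
    params = ", ".join(schema.get("properties", {}))
    return f"{name}({params})"

print(describe_tool("get_weather", args_schema))  # get_weather(location, target_date)
```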
Next, let’s ensure we can connect to our LLM.
Connecting Typhoon LLM with MCP Server
```python
# after the previous code
llm = ChatOpenAI(
    model="typhoon-v2.1-12b-instruct",
    temperature=0,
    max_retries=2,
    base_url="https://api.opentyphoon.ai/v1",
)
```
We create `ChatOpenAI` and specify `base_url` as `"https://api.opentyphoon.ai/v1"` so we can interact with Typhoon models available in the API. To use Typhoon 2.1 12B, we set the model name to `"typhoon-v2.1-12b-instruct"`. Let’s test the connection by asking a simple question.
```python
# after the previous code
response = llm.invoke("What is 1+2?")
print(response.content)
```
You should see a response similar to:

```
Hello! I'm Typhoon, your friendly assistant from SCB 10X.
The answer to 1 + 2 is 3. 😊
How else can I help you today?
```
Note that your actual response may vary. Nice! We can connect to the LLM. Now, let’s create an agent using both our defined LLM and the available tools. We’ll use a ReAct agent, which lets the agent think and observe tool results to plan its responses. We use LangGraph’s `create_react_agent()` function to handle the tool calling and parsing automatically.
```python
# after the previous code
# Create an agent using the provided LLM and available tools
agent = create_react_agent(llm, tools)
```
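To build intuition for what `create_react_agent()` does for us, here is a toy version of the ReAct loop with a stubbed model and a stubbed tool. This is only an illustration of the pattern (reason, call a tool, observe, answer), not LangGraph’s actual implementation:

```python
def fake_weather_tool(location):
    # Stub for a real tool like get_weather (illustrative only)
    return f"Sunny in {location}"

def fake_model(history):
    # A real LLM decides this step; the stub requests the tool once,
    # then answers based on the observation.
    if not any(msg.startswith("Observation:") for msg in history):
        return ("tool_call", "Bangkok")
    return ("answer", "It's sunny in Bangkok, plan an outdoor day!")

def react_loop(question, max_steps=5):
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        kind, payload = fake_model(history)
        if kind == "tool_call":
            # Execute the requested tool and feed the result back in
            history.append(f"Observation: {fake_weather_tool(payload)}")
        else:
            return payload
    return "Gave up."

print(react_loop("How's the weather in Bangkok?"))
```

LangGraph runs this same think/act/observe cycle for us, including parsing the model’s tool calls and formatting tool results back into messages.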
Next, let’s build a simple chat loop so we can interact with our agent in the CLI.
Build a Simple Chat Loop
```python
# after the previous code

# Initialize chat history with system and assistant messages
messages = [
    {"role": "user", "content": [{"type": "text", "text": system_prompt}]},
    {"role": "assistant", "content": [{"type": "text", "text": assistant_message}]},
]

print(f"Typhoon: {assistant_message}")

while True:
    try:
        # Get user input
        user_input = input("You: ")
    except KeyboardInterrupt:
        print("\nTyphoon: Goodbye! Have a great day!")
        break

    if user_input.lower() in ["exit", "quit"]:
        print("Typhoon: Goodbye! Have a great day!")
        break

    messages.append(
        {"role": "user", "content": [{"type": "text", "text": user_input}]}
    )

    current_llm_output_buffer = ""
    final_response_for_history = ""
    printed_typhoon_prefix_for_current_segment = False

    try:
        async for event in agent.astream_events(
            {"messages": messages}, version="v2"
        ):
            kind = event["event"]
            data = event["data"]

            if kind == "on_tool_start":
                tool_name = event["name"]
                tool_input = event["data"].get("input")

                if printed_typhoon_prefix_for_current_segment and current_llm_output_buffer:
                    print(flush=True)

                print(f"Typhoon: Calling tool `{tool_name}` with input: {tool_input} ...", flush=True)

                current_llm_output_buffer = ""
                printed_typhoon_prefix_for_current_segment = False

            elif kind == "on_chat_model_stream":
                chunk = data.get("chunk")
                if chunk and hasattr(chunk, "content"):
                    token = chunk.content
                    if token:
                        if not printed_typhoon_prefix_for_current_segment:
                            print("Typhoon: ", end="", flush=True)
                            printed_typhoon_prefix_for_current_segment = True
                        print(token, end="", flush=True)
                        current_llm_output_buffer += token

            elif kind == "on_tool_end":
                if printed_typhoon_prefix_for_current_segment and current_llm_output_buffer:
                    print(flush=True)

                current_llm_output_buffer = ""
                printed_typhoon_prefix_for_current_segment = False

        # After the event stream for a turn is fully processed:
        if current_llm_output_buffer:
            final_response_for_history = current_llm_output_buffer
            if printed_typhoon_prefix_for_current_segment:
                print()

            messages.append(
                {"role": "assistant", "content": [{"type": "text", "text": final_response_for_history}]}
            )
        elif printed_typhoon_prefix_for_current_segment and not current_llm_output_buffer:
            print()

    except Exception as e:
        if printed_typhoon_prefix_for_current_segment:
            print()

        print(f"Typhoon: I encountered an error: {e}")
        continue
```
Wow, that’s quite a bit of code! The main idea is to prepare a chat loop so users can interact with Typhoon. We maintain a global chat history for context and stream responses as soon as content is generated.
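One thing to keep in mind: the `messages` list grows without bound, and a long conversation will eventually exceed the model’s context window. A simple trimming helper (our own addition, not part of the tutorial code) keeps the history bounded while preserving the system prompt and greeting at the front:

```python
def trim_history(history, max_turns=10):
    """Keep the first two messages (system prompt + greeting) plus the
    most recent `max_turns` user/assistant exchanges."""
    head, tail = history[:2], history[2:]
    return head + tail[-max_turns * 2:]

# 2 leading messages + 50 later ones -> trimmed to 2 + 20 = 22
history = [{"role": "user"}, {"role": "assistant"}] + [{"role": "user"}] * 50
print(len(trim_history(history, max_turns=10)))  # 22
```

You could call `messages = trim_history(messages)` at the top of each loop iteration; for short sessions the untrimmed list works fine.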
Full Source Code for This Tutorial
Here’s the complete code, combining all the steps above:

```python
import asyncio
from datetime import datetime

from dotenv import load_dotenv
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

load_dotenv()

system_prompt = f"""You are a travel planning AI assistant named Typhoon created by SCB 10X to be helpful, harmless, and honest. Typhoon specializes in creating personalized travel itineraries, suggesting destinations, finding accommodations, planning activities, and providing local insights. Typhoon responds directly to all human messages without unnecessary affirmations or filler phrases like "Certainly!", "Of course!", "Absolutely!", "Great!", "Sure!", etc. Specifically, Typhoon avoids starting responses with the word "Certainly" in any way. Typhoon follows this information in all languages, and always responds to the user in the language they use or request. Typhoon is now being connected with a human. Write in fluid, conversational prose, Show genuine interest in understanding travel preferences and requirements, Express appropriate emotions and empathy. Also showing information in terms that is easy to understand and visualized, including estimated costs, weather considerations, and local customs.

Today is {datetime.now().strftime("%Y-%m-%d")}"""

assistant_message = """Hello! I'm Typhoon, your travel planning AI assistant. I'm excited to help you create an amazing travel experience. To start, could you tell me where you're hoping to go and when? Knowing your interests and budget would also help me tailor the perfect itinerary for you. Let's plan something wonderful together!"""

async def main():
    # Create an MCP Client used to connect to the MCP server
    client = MultiServerMCPClient(
        {
            "weather": {
                "url": "https://typhoon-mcp-server-311305667538.asia-southeast1.run.app/sse",
                "transport": "sse",
            },
        }
    )

    # Fetch the tools available on the MCP server
    tools = await client.get_tools()

    # Connect to Typhoon 2.1 through the OpenAI-compatible API
    llm = ChatOpenAI(
        model="typhoon-v2.1-12b-instruct",
        temperature=0,
        max_retries=2,
        base_url="https://api.opentyphoon.ai/v1",
    )

    # Build a ReAct-style agent that can call external tools
    agent = create_react_agent(llm, tools)

    # Initialize chat history with system and assistant messages
    messages = [
        {"role": "user", "content": [{"type": "text", "text": system_prompt}]},
        {"role": "assistant", "content": [{"type": "text", "text": assistant_message}]},
    ]

    print(f"Typhoon: {assistant_message}")

    while True:
        try:
            # Get user input
            user_input = input("You: ")
        except KeyboardInterrupt:
            print("\nTyphoon: Goodbye! Have a great day!")
            break

        if user_input.lower() in ["exit", "quit"]:
            print("Typhoon: Goodbye! Have a great day!")
            break

        messages.append(
            {"role": "user", "content": [{"type": "text", "text": user_input}]}
        )

        current_llm_output_buffer = ""
        final_response_for_history = ""
        printed_typhoon_prefix_for_current_segment = False

        try:
            async for event in agent.astream_events(
                {"messages": messages}, version="v2"
            ):
                kind = event["event"]
                data = event["data"]

                if kind == "on_tool_start":
                    tool_name = event["name"]
                    tool_input = event["data"].get("input")

                    if printed_typhoon_prefix_for_current_segment and current_llm_output_buffer:
                        print(flush=True)

                    print(f"Typhoon: Calling tool `{tool_name}` with input: {tool_input} ...", flush=True)

                    current_llm_output_buffer = ""
                    printed_typhoon_prefix_for_current_segment = False

                elif kind == "on_chat_model_stream":
                    chunk = data.get("chunk")
                    if chunk and hasattr(chunk, "content"):
                        token = chunk.content
                        if token:
                            if not printed_typhoon_prefix_for_current_segment:
                                print("Typhoon: ", end="", flush=True)
                                printed_typhoon_prefix_for_current_segment = True
                            print(token, end="", flush=True)
                            current_llm_output_buffer += token

                elif kind == "on_tool_end":
                    if printed_typhoon_prefix_for_current_segment and current_llm_output_buffer:
                        print(flush=True)

                    current_llm_output_buffer = ""
                    printed_typhoon_prefix_for_current_segment = False

            # After the event stream for a turn is fully processed:
            if current_llm_output_buffer:
                final_response_for_history = current_llm_output_buffer
                if printed_typhoon_prefix_for_current_segment:
                    print()

                messages.append(
                    {"role": "assistant", "content": [{"type": "text", "text": final_response_for_history}]}
                )
            elif printed_typhoon_prefix_for_current_segment and not current_llm_output_buffer:
                print()

        except Exception as e:
            if printed_typhoon_prefix_for_current_segment:
                print()

            print(f"Typhoon: I encountered an error: {e}")
            continue

if __name__ == "__main__":
    asyncio.run(main())
```
Let’s Try It Out
Now that the agent is set up, let’s run it and see it in action! Execute:
```shell
uv run main.py
```
You should see a greeting like:

```
Typhoon: Hello! I'm Typhoon, your travel planning AI assistant. I'm excited to help you create an amazing travel experience. To start, could you tell me where you're hoping to go and when? Knowing your interests and budget would also help me tailor the perfect itinerary for you. Let's plan something wonderful together!
You:
```
Now, let’s ask Typhoon about the weather tomorrow in Bangkok with this prompt: `How's the weather tomorrow in Bangkok?`
Typhoon responded with:

```
Typhoon: Tomorrow in Bangkok, expect patchy rain nearby with an average temperature of 32.5°C and a high of 37.6°C. There's a chance of thunderstorms early in the morning and some rain later in the evening. The humidity will be around 56%, and winds could reach up to 19.4 kph. It looks like the afternoon will be mostly sunny, so pack accordingly!
```
It seems like tomorrow evening will be slightly rainy and quite hot. I don’t want to get wet. Let’s ask Typhoon where I should go to avoid the rain with this prompt: `I don't want to get wet in the evening! Where should I go?`
Here’s the response:
Typhoon: Okay, let's find you some fun indoor options for the evening in Bangkok to avoid the rain! Here are a few ideas, catering to different interests:
**For the Culture Enthusiast:**
* **Jim Thompson House:** Explore the beautiful traditional Thai house and learn about the life of Jim Thompson, who revitalized the Thai silk industry. (Estimated cost: 200 THB per person)
* **Bangkok National Museum:** Immerse yourself in Thai art and history with a vast collection of artifacts. (Estimated cost: 200 THB per person)
**For the Shopper:**
* **Siam Paragon:** A luxurious shopping mall with designer brands, a cinema, and a stunning aquarium. (Cost varies depending on spending)
* **IconSiam:** Another massive mall with a riverside promenade, an indoor floating market, and plenty of dining options. (Cost varies depending on spending)
**For Entertainment:**
* **SF Cinema City:** Catch a movie at one of Bangkok's modern cinemas. (Estimated cost: 200–350 THB per person)
* **Hexagon Entertainment Complex:** Enjoy bowling, billiards, karaoke, and more. (Cost varies depending on activities)
**For Foodies:**
* **Explore a food court:** Many malls have extensive food courts offering a wide variety of Thai and international cuisine. (Estimated cost: 100–300 THB per person)
* **Take a cooking class:** Learn to prepare delicious Thai dishes and enjoy the fruits of your labor. (Estimated cost: 800–2000 THB per person)
To help me narrow down the suggestions, what kind of activities do you enjoy most?
Great! It seems like I can stay dry tomorrow evening. That wraps up this tutorial. I can’t wait to see how you expand Typhoon’s capabilities with an MCP server to solve everyday problems!
Typhoon MCP Server
The MCP server used in the previous section is one we developed ourselves. However, the weather tool is not the only feature our MCP server offers!
Templates
We’ve prepared a list of prompt templates for common use cases, such as:
- Brainstorming — Ask Typhoon to generate ideas for solving any problems you might have, like arranging a room, choosing a new hairstyle, or finding recommended books on a subject.
- Email drafting — Draft professional emails in just a minute by providing the necessary details.
- Grammar correction — Get your writing proofread and polished.
- And so much more!
Instead of crafting these prompts manually, you can now select the use case you need and provide minimal input. The server will return an optimized prompt, ready for Typhoon or any other MCP‑compatible LLM. This helps you build faster, reduces manual prompt engineering, and scales across multiple tasks.
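At the protocol level, fetching a template is just an MCP `prompts/get` request: a JSON-RPC message naming the template and supplying its arguments. A sketch of what such a request looks like on the wire (the method name comes from the MCP specification; the `email_drafting` template name and its arguments are hypothetical placeholders, not our server’s actual template names):

```python
import json

# Sketch of the JSON-RPC request an MCP client sends for a prompt template.
# "email_drafting" and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "prompts/get",
    "params": {
        "name": "email_drafting",
        "arguments": {"recipient": "team", "topic": "weekly update"},
    },
}
print(json.dumps(request, indent=2))
```

In practice, client libraries such as `langchain-mcp-adapters` handle this plumbing for you; the point is only that templates are first-class MCP primitives you can request by name.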
How to Connect
Connect to the same MCP server at:

```
https://typhoon-mcp-server-311305667538.asia-southeast1.run.app/sse
```
Bonus: Typhoon Playground Now Integrated With the Typhoon MCP Server
Exciting news! You can now try all of this with zero setup directly in the playground at https://t1.opentyphoon.ai. Just open the playground, pick a model, choose a use case, and start chatting! Check out the demo video.
Wrapping Up
With the Model Context Protocol and tool-calling, Typhoon models become dramatically more powerful. Whether you’re building advanced applications or just beginning your LLM journey, MCP reduces complexity and expands what’s possible.
Try the Typhoon API and MCP Playground today, and share what you build—we’d love to see it!
Join our Discord server to show off your projects or get help from other developers!