
Tool Calling with Typhoon

Introduction

What is Tool Calling?

If you ask an LLM a question like "What is the square root of 15129?" or "How do the stock prices of Nvidia and AMD compare?", it might give an incorrect answer. Fundamentally, LLMs are language models, not calculators or real-time data providers.

This is where tool calling comes in. Tool calling enables LLMs to interact with external tools, such as calculators or APIs, to handle tasks beyond their inherent capabilities.

Some examples of tools:
- A tool that searches the internet for information
- A tool that sends an email or SMS
- A tool that converts money from one currency to another
- A tool that fetches YouTube videos

How does it work?

The LLM does not use the tool directly. Instead, it generates a response containing the information the tool needs, called a tool call message. The tool processes that message and returns a result for your specific use case. The result is then sent back to the LLM, which uses it to generate an answer to the original question. This approach lets the LLM focus on producing a natural language answer, while the tools handle the actual computation behind the scenes.

To summarize, integrating tool calling with an LLM involves 5 main steps:

  1. Define Your Tool
  2. Equip Your LLM with the Tool
  3. Obtain a Tool Call Message
  4. Use the tool
  5. Pass the Tool Result Back to the LLM

As you can see, the LLM is invoked at least twice: once to obtain the tool call message, and again to create the final response for the user after receiving the tool's result.
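The round trip between steps 3 and 5 can be sketched in plain Python with no API involved. Here, add and the hand-written tool_call dict are stand-ins for a real tool and the message an LLM would emit:

```python
import json

# A stand-in tool and registry; a real app would map names to real functions.
def add(a, b):
    return a + b

TOOLS = {"add": add}

# Step 3: the LLM emits a tool call message (simulated here).
tool_call = {"name": "add", "arguments": json.dumps({"a": 2, "b": 3})}

# Step 4: look up the named tool and run it with the parsed arguments.
args = json.loads(tool_call["arguments"])
result = TOOLS[tool_call["name"]](**args)

# Step 5: package the result as a tool message for the second LLM call.
tool_message = {"role": "tool", "content": json.dumps({"result": result})}
print(tool_message["content"])  # {"result": 5}
```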

Integrations

  1. OpenAI
  2. LangChain

OpenAI

Full Code

from openai import OpenAI
import json

llm = OpenAI(
   api_key=TYPHOON_API_KEY,                 # Insert your API key here
   base_url='https://api.opentyphoon.ai/v1' # Use this url to access Typhoon models
)

def get_color(day: str) -> dict:
    outfit_colors = {
    "Monday": {
        "เสริมเสน่ห์" : ["สีขาว","สีครีม","สีเหลือง","สีชมพู"],
        "การงาน" : ["สีเขียว"],
        "การเงิน" : ["สีส้ม","สีน้ำตาล"],
        "โชคดี" : ["สีดำ","สีเทา","สีม่วง"],
        "สีต้องห้าม" : ["สีแดง"]
        },
    "Tuesday": {
        "เสริมเสน่ห์" : ["สีชมพู"],
        "การงาน" : ["สีม่วง","สีเทา"],
        "การเงิน" : ["สีเงิน","สีทอง"],
        "โชคดี" : ["สีส้ม","สีน้ำตาล"],
        "สีต้องห้าม" : ["สีเหลือง","สีครีม"]
        }
    }

    return outfit_colors.get(day)


dailyColor = {
    "type": "function",
    "function": {
        "name": "daily_color",
        "description": "Retrieve the traditional color of the outfit to wear for a specific day of the week based on Thai cultural beliefs, along with the symbolic meaning behind the color.",
        "parameters": {
            "type": "object",
            "properties": {
                "day": {
                    "type": "string",
                    "description": "The name of the day of the week e.g. Monday Tuesday.",
                }
            },
            "required": ["day"],
            "additionalProperties": False,
        },
    }
}

messages = [
    {"role": "system", "content": "You are an expert at composing functions named Typhoon"},
    {"role": "user", "content": "วันอังคารควรใส่เสื้อสีอะไร"},
    ]

response = llm.chat.completions.create(
    model="typhoon-v2-8b-instruct",     # Selected Typhoon model
    messages=messages,                  # Message history
    tools=[dailyColor],                 # Tools
)

llm_response = response.choices[0].message
messages.append(llm_response)

for tool_call in llm_response.tool_calls:
    name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)

    day = arguments.get("day")
    result = get_color(day)
    tool_result = {
        "role" : "tool",
        "tool_call_id" : tool_call.id,  # Match the tool result to its tool call
        "content" : json.dumps({
                "name": name,
                "arguments": arguments,
                "results": result
            }, indent=4, ensure_ascii=False),
        }

    messages.append(tool_result)

final_response = llm.chat.completions.create(
    model="typhoon-v2-8b-instruct",     # Selected Typhoon model
    messages=messages                   # Message history
)

print(messages[1]["content"])
print(final_response.choices[0].message.content)

Code Breakdown

1. Prepare Your LLM
from openai import OpenAI

llm = OpenAI(
   api_key=TYPHOON_API_KEY,                 # Insert your API key here
   base_url='https://api.opentyphoon.ai/v1' # Use this url to access Typhoon models
)

Let's check that your LLM is up and running.

response = llm.chat.completions.create(
    model="typhoon-v2-8b-instruct",         # Selected Typhoon model
    messages=[{"role": "user", "content": "Hello how are you doing today?"}]
)
print(response.choices[0].message.content)

If you got a response, then you're ready to move to the next step.

2. Define your tool

Tool descriptions used with the OpenAI client must follow a specific format.

tool_1 = {
        "type": "function",
        "function": {
            "name": tool_name,
            "description": tool_description,     
            "parameters": {
                "type": "object",
                "properties": {
                    tool_input_1: {
                        "type": tool_input_1_datatype,
                        "description": tool_input_1_description,
                    },
                    tool_input_2: {
                        "type": tool_input_2_datatype,
                        "description": tool_input_2_description,
                    },
                    ......
            },
            "required": [tool_input_1,tool_input_2, ....],
            "additionalProperties": False,
            },
        }
    }

Here are some example tools.

dailyColor = {
    "type": "function",
    "function": {
        "name": "daily_color",
        "description": "Retrieve the traditional color of the outfit to wear for a specific day of the week based on Thai cultural beliefs, along with the symbolic meaning behind the color.",
        "parameters": {
            "type": "object",
            "properties": {
                "day": {
                    "type": "string",
                    "description": "The name of the day of the week e.g. Monday Tuesday.",
                },
                "aspect": {
                    "type": "string",
                    "description": "The aspect of life to focus on when choosing the outfit color.",
                },
            },
            "required": ["day"],
            "additionalProperties": False,
        },
    }
}


searchInternet = {
    "type": "function",
    "function": {
        "name": "search_internet",
        "description": "Search the internet to obtain information.",
        "parameters": {
            "type": "object",
            "properties": {
                "keywords": {
                    "type": "array",
                    "items": {
                        "type": "string"
                    },
                    "description": "List of keywords to search the internet.",
                },
            },
            "required": ["keywords"],
            "additionalProperties": False,
        },
    }
}
Function of the daily_color tool
def get_color(day: str) -> dict:
    outfit_colors = {
    "Monday": {
        "เสริมเสน่ห์" : ["สีขาว","สีครีม","สีเหลือง","สีชมพู"],
        "การงาน" : ["สีเขียว"],
        "การเงิน" : ["สีส้ม","สีน้ำตาล"],
        "โชคดี" : ["สีดำ","สีเทา","สีม่วง"],
        "สีต้องห้าม" : ["สีแดง"]
        },
    "Tuesday": {
        "เสริมเสน่ห์" : ["สีชมพู"],
        "การงาน" : ["สีม่วง","สีเทา"],
        "การเงิน" : ["สีเงิน","สีทอง"],
        "โชคดี" : ["สีส้ม","สีน้ำตาล"],
        "สีต้องห้าม" : ["สีเหลือง","สีครีม"]
        }
    }

    return outfit_colors.get(day)
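One detail worth noting: dict.get returns None for any day missing from the table, so the caller should handle that case. A condensed sketch of the same lookup (only two days covered, as above):

```python
# Condensed version of the outfit_colors lookup; dict.get falls back
# to None for any day that is not a key in the table.
outfit_colors = {
    "Monday": {"การงาน": ["สีเขียว"]},
    "Tuesday": {"การงาน": ["สีม่วง", "สีเทา"]},
}

def get_color(day: str):
    return outfit_colors.get(day)

print(get_color("Tuesday"))  # {'การงาน': ['สีม่วง', 'สีเทา']}
print(get_color("Sunday"))   # None — uncovered days need handling
```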

Prepare the tools by storing them in a list of tools.

# The list of tools we have
tools = [
    searchInternet,
    dailyColor,
    ]
3. Obtain a Tool Call Message

Suppose we want to know what color to wear on Monday

messages = [
    {"role": "system", "content": "You are an expert at composing functions named Typhoon"},
    {"role": "user", "content": "วันจันทร์ควรใส่เสื้อสีอะไรดี"},
    ]

response = llm.chat.completions.create(
    model="typhoon-v2-8b-instruct",     # Selected Typhoon model
    messages=messages,                  # Message history
    tools=tools,                        # Tools
)

llm_response = response.choices[0].message
# llm_response.tool_calls
[
    ChatCompletionMessageToolCall(
        id='2e2996eb-4347-43a7-b46c-e082c7d6fe55',
        function=Function(
            arguments='{"day": "Monday"}',
            name='daily_color'
            ),
        type='function'
        )
]

llm_response.tool_calls yields a list of tool call messages.

Let's see what each value means:

  • id : The tool call message id, used to match the tool result to its tool call.
  • function : The tool call content, which specifies:
    • name : The name of the tool to use.
    • arguments : The arguments for the tool.
  • type : The type of the message. It is always function for tool call messages.
4. Use Tool

Extract the tool name and its arguments, then pass them into the tool function.
The arguments field is a JSON string, so we need to parse it with json.loads first.

for tool_call in llm_response.tool_calls:

    # Obtain the name and arguments from tool call message
    name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)

    day = arguments.get("day")      # Monday

    # Use the tool
    tool_result = get_color(day)
# tool_result
{
    "เสริมเสน่ห์"  : ["สีขาว","สีครีม","สีเหลือง","สีชมพู"],
    "การงาน"    : ["สีเขียว"],
    "การเงิน"    : ["สีส้ม","สีน้ำตาล"],
    "โชคดี"     : ["สีดำ","สีเทา","สีม่วง"],
    "สีต้องห้าม"  : ["สีแดง"]
}

After obtaining the tool's result, you must wrap it in a tool message (a message with role "tool" and a tool_call_id matching the tool call) before passing it back to the LLM, or the API will raise an error.

    # .....
    # tool_result = get_color(day)

    tool_result = {
        "role" : "tool",
        "tool_call_id" : tool_call.id,  # Match the tool result to its tool call
        # Convert to string
        "content" : json.dumps({
            "name": name,
            "arguments": arguments,
            "results": tool_result
            },
            # ensure_ascii=False is needed because the tool result is in Thai
            indent=4, ensure_ascii=False
            )
        }

    # Add tool result to messages
    messages.append(tool_result)
# tool_result
{
    'role': 'tool',
    'content': {
        "เสริมเสน่ห์"  : ["สีขาว","สีครีม","สีเหลือง","สีชมพู"],
        "การงาน"    : ["สีเขียว"],
        "การเงิน"    : ["สีส้ม","สีน้ำตาล"],
        "โชคดี"     : ["สีดำ","สีเทา","สีม่วง"],
        "สีต้องห้าม"  : ["สีแดง"]
    },
    'tool_call_id': '91704bde-a78f-4298-98d4-fe9882da6eee'
}

5. Pass the Tool Result Back to the LLM

If you print out messages, you will see the chat history.

# messages
[
    {'role': 'system', 'content': ...,      # System prompt
    {'role': 'user', 'content': ...,        # Input message
    ChatCompletionMessage(content=....),    # LLM response including tool call message
    {'role': 'tool', 'content': ...         # Result from tool
]

If any of these messages are missing, the LLM might fail to generate the final response.
Note that tools=tools is not required for this call.

final_response = llm.chat.completions.create(
    model="typhoon-v2-8b-instruct",     # Selected Typhoon model
    messages=messages,                  # Message history
    tools=tools,                        # Tool
)
# messages[1]["content"]
วันจันทร์ควรใส่เสื้อสีอะไรดี

# final_response.choices[0].message.content
วันจันทร์ควรใส่เสื้อสีขาว สีครีม สีเหลือง หรือสีชมพูเพื่อเสริมเสน่ห์ สีเขียวสำหรับการงาน สีส้มหรือสีน้ำตาลสำหรับการเงิน และสีดำ สีเทา หรือสีม่วงเพื่อโชคดี ส่วนสีแดงเป็นสีต้องห้ามในวันนี้

LangChain

Full Code

from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage, ToolMessage
from pydantic import BaseModel, Field

llm = ChatOpenAI(
    model='typhoon-v2-8b-instruct',             # Selected Typhoon model
    base_url='https://api.opentyphoon.ai/v1',   # Use this url to access Typhoon models
    api_key=TYPHOON_API_KEY)                    # Insert your API key here

class convert_currency(BaseModel):
    """Convert money from one currency to another currency."""

    amount: float = Field(..., description="the amount of money to convert")
    from_currency: str = Field(..., description="the old currency code to convert from eg. THB USD")
    to_currency: str = Field(..., description="the new currency code to convert to eg. THB USD")

    @staticmethod
    def invoke(tool_call: dict) -> dict:
        args = tool_call.get("args")
        amount = args.get("amount")
        from_currency = args.get("from_currency")
        to_currency = args.get("to_currency")
        # rate = 0.03   # USD
        rate = 0.046    # AUD
        to_amount = amount * rate

        return {
            "from_amount": amount,
            "from_currency": from_currency,
            "to_amount": to_amount,
            "to_currency": to_currency,
        }

llm = llm.bind_tools([convert_currency])

messages = [
    SystemMessage(content="You are an expert at composing functions named Typhoon"),
    # HumanMessage(content="เดือนหน้าจะไปเที่ยวอเมริกา มีอยู่ 100000 บาทแลกได้กี่ดอลลาร์"),
    HumanMessage(content="เดือนหน้าจะไปเที่ยวออสเตรเลีย มีอยู่ 100000 บาทแลกได้กี่ดอลลาร์"),
    ]

llm_response = llm.invoke(messages)
messages.append(llm_response)     

for tool_call in llm_response.tool_calls:
    tool_result = convert_currency.invoke(tool_call)
    tool_msg = ToolMessage(content=tool_result,
            name=tool_call["name"],
            tool_call_id=tool_call["id"])
    messages.append(tool_msg)

final_response = llm.invoke(messages)

print(messages[1].content)
print(final_response.content)
เดือนหน้าจะไปเที่ยวออสเตรเลีย มีอยู่ 100000 บาทแลกได้กี่ดอลลาร์
คุณมีเงินประมาณ 4600 ดอลลาร์ออสเตรเลียสำหรับการเดินทางของคุณในออสเตรเลีย

Code Breakdown

1. Prepare Your LLM

Let's start with getting your LLM up and running first.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model='typhoon-v2-8b-instruct',             # Selected Typhoon model
    base_url='https://api.opentyphoon.ai/v1',   # Use this url to access Typhoon models
    api_key=TYPHOON_API_KEY)                    # Insert your API key here

Try testing whether your LLM is working:

response = llm.invoke("Hello Typhoon how are you doing today?")
print(response.content)

If you get a valid result, you're ready to move on to the next step.

2. Define Your Tool

LangChain allows several ways to define your tool.

Use Tool Decorator
from langchain_core.tools import tool
from typing import Annotated

@tool
def get_weather(
    location: Annotated[str, "The name of the location eg. Bangkok Pattaya"]
) -> dict:
    """Get the current weather of a given location."""

    # Write your function here

    result = "Incoming Typhoon and Heavy Rainfall"

    # It is a good idea to include the input in the result
    return {
        "location" : location,
        "weather" : result
    }

You can design your own custom tools using the tool decorator.

Use Pydantic Class
from pydantic import BaseModel, Field

class convert_currency(BaseModel):
    """Convert money from one currency to another currency."""

    amount: float = Field(..., description="the amount of money to convert")
    from_currency: str = Field(..., description="the old currency code to convert from eg. THB USD")
    to_currency: str = Field(..., description="the new currency code to convert to eg. THB USD")
Use TypedDict Class
from typing_extensions import Annotated, TypedDict

class calculator(TypedDict):
    """Solve mathematic equations."""

    query: Annotated[str, ..., "The mathematic equation to solve"]

When you use a Pydantic or TypedDict class, you only define the tool's input schema.
You still need to write the tool's function separately.

Example function for the convert_currency tool
class convert_currency(BaseModel):
    """Convert money from one currency to another currency."""

    amount: float = Field(..., description="the amount of money to convert")
    from_currency: str = Field(..., description="the old currency code to convert from eg. THB USD")
    to_currency: str = Field(..., description="the new currency code to convert to eg. THB USD")

    @staticmethod
    def invoke(tool_call: dict) -> dict:
        args = tool_call.get("args")
        amount = args.get("amount")
        from_currency = args.get("from_currency")
        to_currency = args.get("to_currency")
        # rate = 0.03   # USD
        rate = 0.046    # AUD
        to_amount = amount * rate

        return {
            "from_amount": amount,
            "from_currency": from_currency,
            "to_amount": to_amount,
            "to_currency": to_currency,
        }
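Because invoke only reads the args mapping, you can exercise the conversion logic with a plain dict shaped like a LangChain tool call; no LLM or network is needed. A stdlib-only sketch, reusing the same illustrative 0.046 THB-to-AUD rate (not a live rate):

```python
def invoke(tool_call: dict) -> dict:
    # Same conversion logic as the method above, as a standalone function.
    args = tool_call["args"]
    rate = 0.046  # illustrative THB -> AUD rate, not live data
    return {
        "from_amount": args["amount"],
        "from_currency": args["from_currency"],
        "to_amount": args["amount"] * rate,
        "to_currency": args["to_currency"],
    }

# A dict shaped like one entry of llm_response.tool_calls
result = invoke({"args": {"amount": 100000,
                          "from_currency": "THB",
                          "to_currency": "AUD"}})
print(result["to_amount"])  # 4600.0
```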
Use Community Tool
from langchain_community.tools import YouTubeSearchTool

youtube_search = YouTubeSearchTool()

Many platforms offer community tools that do not require you to write the tool yourself.

To equip the LLM with your tools, simply call .bind_tools().

# The input of .bind_tools() is a list of runnable tools.
tools = [
    get_weather,
    convert_currency,
    calculator,
    youtube_search
    ]

# Create a new LLM runnable
llm_with_tool = llm.bind_tools(tools)

# Or overwrite itself
llm = llm.bind_tools(tools)

This works because tools written in any of these ways are OpenAI-compatible, so they can be bound to the LLM without issue.
You can check the schema of these tools using convert_to_openai_function().

from langchain_core.utils.function_calling import convert_to_openai_function

tool_descriptions = [convert_to_openai_function(t) for t in tools]
The resulting tool descriptions:
[
    # Tool Decorator
    {
        "name": "get_weather",
        "description": "Get the current weather of a given location.",
        "parameters": {
            "properties": {
                "location": {
                    "description": "The name of the location eg. Bangkok Pattaya",
                    "type": "string"
                }
            },
            "required": ["location"],
            "type": "object"
        }
    },

    # Pydantic Class
    {
        "name": "convert_currency",
        "description": "Convert money from one currency to another currency.",
        "parameters": {
            "properties": {
                "amount": {
                    "description": "The amount of money to convert",
                    "type": "number"
                },
                "from_currency": {
                    "description": "The old currency code to convert from eg. THB USD",
                    "type": "string"
                },
                "to_currency": {
                    "description": "The new currency code to convert to eg. THB USD",
                    "type": "string"
                }
            },
            "required": ["amount", "from_currency", "to_currency"],
            "type": "object"
        }
    },

    # TypedDict Class
    {
        "name": "calculator",
        "description": "Solve mathematic equations.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "description": "The mathematic equation to solve",
                    "type": "string"
                }
            },
            "required": ["query"]
        }
    },

    # Community Tool
    {
        "name": "youtube_search",
        "description": "Search for youtube videos associated with a person. The input to this tool should be a comma separated list, where the first part contains a person name and the second part is a number, which is the maximum number of video results to return (aka num_results). The second part is optional.",
        "parameters": {
            "properties": {
                "query": {
                    "type": "string"
                }
            },
            "required": ["query"],
            "type": "object"
        }
    }
]
3. Obtain a Tool Call Message

Suppose we want to convert currency from THB to AUD.

# use llm_with_tool instead if you did not overwrite LLM

# Invoke with string
llm_response = llm.invoke("เดือนหน้าจะไปเที่ยวออสเตรเลีย มีอยู่ 100000 บาทแลกได้กี่ดอลลาร์")

# Invoke with Human Message class
from langchain_core.messages import HumanMessage

messages = [
    HumanMessage(content="เดือนหน้าจะไปเที่ยวออสเตรเลีย มีอยู่ 100000 บาทแลกได้กี่ดอลลาร์")
    ]

llm_response = llm.invoke(messages)

# Add LLM response (AI Message) to messages
messages.append(llm_response)

Passing questions as a HumanMessage is encouraged, since it keeps the message types distinct.

  • SystemMessage : Prompts or instructions to the LLM.
  • HumanMessage : Messages or inputs provided by the human user.
  • AIMessage : Responses or outputs generated by the LLM (AI).
  • ToolMessage : Outputs or responses generated by external tools.

llm_response.tool_calls yields a list of tool call messages.

# llm_response.tool_calls
[
    {
        "name": "convert_currency",
        "args": {
            "amount": 100000,
            "from_currency": "THB",
            "to_currency": "USD"
        },
        "id": "5bdff285-618d-4844-b6d3-03b8bb1631f0",
        "type": "tool_call"
    }
]

Let's see what each value means:

  • name : The name of the tool to use.
  • args : The arguments for the tool.
  • id : The tool call message id, used to match the tool result to its tool call.
  • type : The type of the message. Since it is a tool call message, it is always tool_call.

What if the LLM needs to call multiple tools at once?
When more than one tool is called, each tool call message is appended to the list.

llm_response = llm.invoke("พรุ่งนี้จะไปปารีส สภาพอากาศที่นู่นเป็นยังไงบ้าง ตอนนี้มีอยู่ 100,000 บาท แลกเป็นเงินที่นู่นได้เท่าไหร่ และระหว่างนี้ขอเพลงฝรั่งเศษมาฟังระหว่างจัดเสื้อผ้าหน่อย")
# llm_response.tool_calls
[
    {
        "name": "get_weather",
        "args": {
            "location": "Paris"
        },
        "id": "90e62f3b-bfd1-482e-ab97-3fab69c83ff6",
        "type": "tool_call"
    },
    {
        "name": "convert_currency",
        "args": {
            "amount": 100000,
            "from_currency": "THB",
            "to_currency": "EUR"
        },
        "id": "d432207c-d578-4ed6-b80e-ad44a2fb62e2",
        "type": "tool_call"
    },
    {
        "name": "youtube_search",
        "args": {
            "query": "French music"
        },
        "id": "435c64f9-be3f-4b4f-83ff-09467adca429",
        "type": "tool_call"
    }
]
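When several tools are called in one turn, a name-to-function registry keeps the dispatch loop simple. A stdlib-only sketch with stub tools (the names mirror the tools above, but the bodies and the exchange rate are fakes for illustration):

```python
# Stub tools; real implementations would call actual services.
def get_weather(location):
    return {"location": location, "weather": "sunny"}

def convert_currency(amount, from_currency, to_currency):
    return {"to_amount": amount * 0.033, "to_currency": to_currency}  # fake rate

REGISTRY = {"get_weather": get_weather, "convert_currency": convert_currency}

# Shaped like llm_response.tool_calls
tool_calls = [
    {"name": "get_weather", "args": {"location": "Paris"},
     "id": "1", "type": "tool_call"},
    {"name": "convert_currency",
     "args": {"amount": 100000, "from_currency": "THB", "to_currency": "EUR"},
     "id": "2", "type": "tool_call"},
]

# Dispatch every call to the tool named in it
results = [REGISTRY[call["name"]](**call["args"]) for call in tool_calls]
print(results[0])  # {'location': 'Paris', 'weather': 'sunny'}
```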
4. Use Tool

Once we receive tool call messages, we pass each one into the corresponding tool.
Depending on the tool, the method to invoke it is .invoke, .run, or something else.
Generally, .invoke works just fine.

# Pydantic class tool with a custom invoke method
for tool_call in llm_response.tool_calls:

    # Invoke the tool with the tool call message
    tool_result = convert_currency.invoke(tool_call)
# tool_result
{'from_amount': 100000, 'from_currency': 'THB', 'to_amount': 4600.0, 'to_currency': 'AUD'}
You need to convert the tool's result into the ToolMessage class before passing it back to the LLM; otherwise an error is raised.

If your tool is created using a Pydantic class (as in this case) or a TypedDict class, you need to convert it to a ToolMessage manually.

from langchain_core.messages import ToolMessage
    # .....
    # tool_result = convert_currency.invoke(tool_call)

    # Convert to Tool Message
    tool_msg = ToolMessage(content=tool_result,
            name=tool_call["name"],
            tool_call_id=tool_call["id"])

    # Add the Tool Message to messages
    messages.append(tool_msg)
# tool_msg
ToolMessage(content="{'from_amount': 100000, 'from_currency': 'THB', 'to_amount': 4600.0, 'to_currency': 'AUD'}",
            name='convert_currency',
            tool_call_id='5bdff285-618d-4844-b6d3-03b8bb1631f0')
The content of the ToolMessage is coerced to a string, regardless of its original data type.
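A practical consequence of that coercion: a dict turned into a string becomes a Python repr (single quotes), which is not valid JSON, as the single-quoted content above shows. If you want the model to see valid JSON, serialize with json.dumps yourself before building the ToolMessage. A quick stdlib illustration:

```python
import json

tool_result = {"to_amount": 4600.0, "to_currency": "AUD"}

# str() produces a Python repr with single quotes — not parseable as JSON
print(str(tool_result))         # {'to_amount': 4600.0, 'to_currency': 'AUD'}

# json.dumps produces valid JSON with double quotes
print(json.dumps(tool_result))  # {"to_amount": 4600.0, "to_currency": "AUD"}
```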

If your tool is created with the tool decorator or is a community tool, its result is automatically converted to a ToolMessage.

llm_response = llm.invoke("ขอเพลงไทย 1 เพลง")

for tool_call in llm_response.tool_calls:

    # youtube_search is a community tool
    tool_result = youtube_search.invoke(tool_call)
# tool_result
ToolMessage(content="['https://www.youtube.com/watch?v=gEsCDJGqrlc']",
            name='youtube_search',
            tool_call_id='167db84b-b8eb-4754-b679-f9aaebb2292a')

# type(tool_result)
<class 'langchain_core.messages.tool.ToolMessage'>

5. Pass the Tool Result Back to the LLM

If you print out messages, you will see the chat history.

# messages
[
    SystemMessage(content=...),     # System prompt
    HumanMessage(content=....),     # User's question
    AIMessage(content=.......),     # LLM's response including the tool calls
    ToolMessage(content=.....)      # Result from using tools
]
If no messages are missing, invoking the LLM with messages will return the final answer.

final_response = llm.invoke(messages)

# messages[1].content
เดือนหน้าจะไปเที่ยวออสเตรเลีย มีอยู่ 100000 บาทแลกได้กี่ดอลลาร์

# final_response
คุณมีเงินประมาณ 4600 ดอลลาร์ออสเตรเลียสำหรับการเดินทางของคุณในออสเตรเลีย