Tool Use & Function Calling
Tool use (also called function calling) lets an LLM request calls to external functions — APIs, databases, code executors — which your code runs on its behalf so the model can complete tasks.
How It Works
- Define tools as JSON schemas describing the function name, parameters, and description
- Send tools + user message to the LLM
- LLM decides which tool to call and with what arguments
- Your code executes the function and returns the result
- LLM uses the result to generate the final response
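The steps above can be sketched as a minimal loop. This is an illustrative simulation, not a real SDK: `fake_llm` stands in for the model call, and the message/tool-call field names are assumptions modeled on common LLM APIs.

```python
# Minimal tool-use loop sketch. `fake_llm` simulates the model;
# all field names ("role", "tool_call", etc.) are illustrative.

# Tool definition: a JSON schema describing name, parameters, description.
WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    # Stand-in for a real weather API call.
    return f"Sunny, 22C in {city}"

TOOLS = {"get_weather": get_weather}

def fake_llm(messages, tools):
    """Simulated model: requests the weather tool once, then
    produces a final text answer from the tool result."""
    last = messages[-1]
    if last["role"] == "user":
        return {"tool_call": {"name": "get_weather",
                              "arguments": {"city": "Paris"}}}
    return {"text": f"The weather is: {last['content']}"}

def tool_loop(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    while True:
        reply = fake_llm(messages, [WEATHER_TOOL])
        if "tool_call" not in reply:      # final text response: stop looping
            return reply["text"]
        call = reply["tool_call"]
        # Your code executes the function the model asked for...
        result = TOOLS[call["name"]](**call["arguments"])
        # ...and feeds the result back so the model can continue.
        messages.append({"role": "tool", "content": result})

print(tool_loop("What's the weather in Paris?"))
```

The loop terminates exactly when the model replies with plain text instead of another tool call — the "tool loop" from the key terms below.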
Key Terms
| Term | Meaning |
| --- | --- |
| tool definition | JSON schema describing a callable function |
| tool call | The LLM's request to invoke a specific tool |
| tool result | The output returned to the LLM after executing the tool |
| parallel tool calls | LLM requests multiple tools at once |
| tool loop | Cycle of: LLM calls tool → get result → LLM continues |
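As a concrete illustration of the terms above — the field names (`id`, `name`, `arguments`, `tool_call_id`) are assumptions modeled on common LLM APIs, not any specific vendor's format:

```python
# A "tool call": the model's request to invoke a specific tool.
tool_call = {
    "id": "call_1",
    "name": "search",
    "arguments": {"query": "latest Python release"},
}

# "Parallel tool calls": the model requests several tools in one turn,
# so your code can execute them concurrently.
parallel_calls = [
    {"id": "call_1", "name": "get_weather", "arguments": {"city": "Oslo"}},
    {"id": "call_2", "name": "get_news", "arguments": {"topic": "tech"}},
]

# A "tool result" echoes the call id so the model can match
# each result back to the call that produced it.
tool_result = {"tool_call_id": "call_1", "content": "3 results found"}
```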
Useful Phrases
- "The model decided to call the search tool with the query 'latest Python release'."
- "We support parallel tool calls so the agent can fetch weather and news simultaneously."
- "The tool loop runs until the model produces a final text response with no more tool calls."