AI Agents Series: Building an Agent from Scratch
In our previous blog post, we gave a broad introduction to AI agents, discussing their characteristics, components, evolution, challenges, and future possibilities.
In this post, we will build an AI agent from scratch in Python. The agent will be able to make decisions based on user input, pick the appropriate tool, and execute tasks accordingly. Let's get started!
Table of contents
- 1. What are AI agents?
- 2. Implementation
- 2.1 Prerequisites
- 2.2 Implementation steps
- 3. Summary
- 4. Full code
1. What are AI agents?
An AI agent is an entity that can autonomously perceive its environment, make decisions, and take actions to achieve specific goals. Agents vary widely in complexity, from reactive agents that simply respond to stimuli to more advanced agents that learn and adapt over time. Common types of agents include:
- Reactive agents: respond directly to changes in the environment, with no internal memory.
- Model-based agents: use an internal model of the world to make decisions.
- Goal-based agents: plan their actions around achieving specific goals.
- Utility-based agents: evaluate potential actions with a utility function to maximize the outcome.
For example, chatbots, recommendation systems, and self-driving cars each rely on different types of agents to perform tasks efficiently and intelligently.
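For intuition only (this snippet is not part of the project we build below), a reactive agent can be as simple as a fixed mapping from stimulus to action, with no memory or planning; the rule table here is made up for illustration:

# A toy reactive agent: it maps the current stimulus directly to an action.
RULES = {
    "obstacle_ahead": "turn_left",
    "goal_visible": "move_forward",
}

def reactive_agent(stimulus: str) -> str:
    # No internal state: the same stimulus always produces the same action.
    return RULES.get(stimulus, "wait")

print(reactive_agent("obstacle_ahead"))  # turn_left
print(reactive_agent("nothing"))         # wait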
The core components of our agent are (see the sketch after this list):
- Model: the agent's brain; it processes the input and generates a response.
- Tools: predefined functions the agent can execute in response to a user request.
- Toolbox: the collection of tools available to the agent.
- System prompt: the instructions that tell the agent how to handle user input and choose the right tool.
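Here is a minimal, self-contained sketch of how these four pieces interact. The names are illustrative and the "model" is a hard-coded stub so the snippet runs without any LLM installed; the real classes are built step by step in section 2.2.

# Tool: a plain Python function whose docstring doubles as its description.
def reverse_string(text):
    """Reverse the given string."""
    return text[::-1]

# Toolbox: a mapping from tool name to tool description.
toolbox = {reverse_string.__name__: reverse_string.__doc__}

# System prompt: tells the model which tools exist and how to answer.
system_prompt = f"Pick a tool for the user's request. Available tools: {toolbox}"

# Model: in the real agent this is an LLM served by Ollama; here it is a stub
# that always picks reverse_string, just to show how data flows.
def stub_model(system_prompt, user_prompt):
    return {"tool_choice": "reverse_string", "tool_input": user_prompt}

decision = stub_model(system_prompt, "hello")
if decision["tool_choice"] == "reverse_string":
    print(reverse_string(decision["tool_input"]))  # -> olleh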
2. Implementation
Now let's roll up our sleeves and start building!
Building the AI agent
2.1 Prerequisites
1. Python environment setup
You need Python installed to run the agent. Follow these steps to set up the environment:
Install Python (if it is not already installed)
- Download and install Python from python.org (version 3.8+ is recommended).
- Verify the installation:
python --version
Create a virtual environment (recommended). It is best to manage dependencies in a virtual environment:
python -m venv ai_agents_env
source ai_agents_env/bin/activate  # On Windows: ai_agents_env\Scripts\activate
Install the required dependencies. Navigate to the repository directory and install them:
pip install -r requirements.txt
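The repository's requirements.txt is not reproduced in this post; a minimal version, matching the packages we install in Step 1 of section 2.2 below, would simply list:

requests
termcolor
python-dotenv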
2. Set up Ollama locally
Ollama is used to run and manage local language models efficiently. Follow these steps to install and configure it:
Download and install Ollama
- Visit the official Ollama website and download the installer for your operating system.
- Follow the installation instructions for your platform.
Verify the Ollama installation. Run the following command to check that Ollama is installed correctly:
ollama --version
Pull a model (if needed). Some agent implementations may require a specific model. You can pull one with:
ollama pull mistral  # Replace 'mistral' with the model you need
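Once the server is running (it usually starts automatically after installation, or via ollama serve), you can optionally confirm that the local HTTP API the agent relies on is reachable. A minimal sketch, assuming the default port 11434:

import requests

# Ask the local Ollama server which models are available.
# This is the same host/port the agent's OllamaModel class uses later.
response = requests.get("http://localhost:11434/api/tags", timeout=5)
print([model["name"] for model in response.json().get("models", [])])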
2.2 Implementation steps
Image by the author
Step 1: Set up the environment
Besides Python, we need to install a few libraries. In this tutorial we will use requests, json, and termcolor. We will also use dotenv to manage environment variables.
pip install requests termcolor python-dotenv
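The script below calls load_dotenv() even though this tutorial keeps all settings hard-coded. If you want to make use of it, you could move a value such as the Ollama endpoint into a .env file; the OLLAMA_ENDPOINT variable name here is hypothetical and not part of the original code:

# .env (hypothetical contents)
# OLLAMA_ENDPOINT=http://localhost:11434/api/generate

import os
from dotenv import load_dotenv

load_dotenv()  # loads variables from .env into the process environment
endpoint = os.getenv("OLLAMA_ENDPOINT", "http://localhost:11434/api/generate")
print(endpoint)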
Step 2: Define the model class
We first need a model that can process user input. We will create an OllamaModel class that talks to the local Ollama API to generate responses.
Here is the basic implementation:
from termcolor import colored
import os
from dotenv import load_dotenv
load_dotenv()

### Models
import requests
import json
import operator


class OllamaModel:
    def __init__(self, model, system_prompt, temperature=0, stop=None):
        """
        Initializes the OllamaModel with the given parameters.

        Parameters:
        model (str): The name of the model to use.
        system_prompt (str): The system prompt to use.
        temperature (float): The temperature setting for the model.
        stop (str): The stop token for the model.
        """
        self.model_endpoint = "http://localhost:11434/api/generate"
        self.temperature = temperature
        self.model = model
        self.system_prompt = system_prompt
        self.headers = {"Content-Type": "application/json"}
        self.stop = stop

    def generate_text(self, prompt):
        """
        Generates a response from the Ollama model based on the provided prompt.

        Parameters:
        prompt (str): The user query to generate a response for.

        Returns:
        dict: The response from the model as a dictionary.
        """
        payload = {
            "model": self.model,
            "format": "json",
            "prompt": prompt,
            "system": self.system_prompt,
            "stream": False,
            "temperature": self.temperature,
            "stop": self.stop
        }
        try:
            request_response = requests.post(
                self.model_endpoint,
                headers=self.headers,
                data=json.dumps(payload)
            )
            print("REQUEST RESPONSE", request_response)
            request_response_json = request_response.json()
            response = request_response_json['response']
            response_dict = json.loads(response)
            print(f"\n\nResponse from Ollama model: {response_dict}")
            return response_dict
        except requests.RequestException as e:
            response = {"error": f"Error in invoking model! {str(e)}"}
            return response
This class is initialized with a model name, a system prompt, a temperature, and a stop token. The generate_text method sends a request to the model API and returns the response.
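Before wiring the class into the agent, you can test it on its own. A minimal sketch, assuming an Ollama server is running locally and the llama2 model has already been pulled:

# Quick standalone test of the model wrapper defined above.
model = OllamaModel(
    model="llama2",
    system_prompt="You are a helpful assistant. Reply in JSON with a single key 'answer'.",
    temperature=0,
    stop="<|eot_id|>"
)
result = model.generate_text("What is the capital of France?")
print(result)  # e.g. {'answer': 'Paris'} -- the exact wording depends on the model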
Step 3: Create tools for the agent
The next step is to create tools for our agent. These tools are simple Python functions that perform specific tasks. Below are a basic calculator and a string reverser:
def basic_calculator(input_str):
    """
    Perform a numeric operation on two numbers based on the input string or dictionary.

    Parameters:
    input_str (str or dict): Either a JSON string representing a dictionary with keys 'num1', 'num2', and 'operation',
    or a dictionary directly. Example: '{"num1": 5, "num2": 3, "operation": "add"}'
    or {"num1": 67869, "num2": 9030393, "operation": "divide"}

    Returns:
    str: The formatted result of the operation.

    Raises:
    Exception: If an error occurs during the operation (e.g., division by zero).
    ValueError: If an unsupported operation is requested or input is invalid.
    """
    try:
        # Handle both dictionary and string inputs
        if isinstance(input_str, dict):
            input_dict = input_str
        else:
            # Clean and parse the input string
            input_str_clean = input_str.replace("'", "\"")
            input_str_clean = input_str_clean.strip().strip("\"")
            input_dict = json.loads(input_str_clean)

        # Validate required fields
        if not all(key in input_dict for key in ['num1', 'num2', 'operation']):
            return "Error: Input must contain 'num1', 'num2', and 'operation'"

        num1 = float(input_dict['num1'])  # Convert to float to handle decimal numbers
        num2 = float(input_dict['num2'])
        operation = input_dict['operation'].lower()  # Make case-insensitive
    except (json.JSONDecodeError, KeyError) as e:
        return "Invalid input format. Please provide valid numbers and operation."
    except ValueError as e:
        return "Error: Please provide valid numerical values."

    # Define the supported operations with error handling
    operations = {
        'add': operator.add,
        'plus': operator.add,        # Alternative word for add
        'subtract': operator.sub,
        'minus': operator.sub,       # Alternative word for subtract
        'multiply': operator.mul,
        'times': operator.mul,       # Alternative word for multiply
        'divide': operator.truediv,
        'floor_divide': operator.floordiv,
        'modulus': operator.mod,
        'power': operator.pow,
        'lt': operator.lt,
        'le': operator.le,
        'eq': operator.eq,
        'ne': operator.ne,
        'ge': operator.ge,
        'gt': operator.gt
    }

    # Check if the operation is supported
    if operation not in operations:
        return f"Unsupported operation: '{operation}'. Supported operations are: {', '.join(operations.keys())}"

    try:
        # Special handling for division by zero
        if (operation in ['divide', 'floor_divide', 'modulus']) and num2 == 0:
            return "Error: Division by zero is not allowed"

        # Perform the operation
        result = operations[operation](num1, num2)

        # Format result based on type
        if isinstance(result, bool):
            result_str = "True" if result else "False"
        elif isinstance(result, float):
            # Handle floating point precision
            result_str = f"{result:.6f}".rstrip('0').rstrip('.')
        else:
            result_str = str(result)

        return f"The answer is: {result_str}"
    except Exception as e:
        return f"Error during calculation: {str(e)}"


def reverse_string(input_string):
    """
    Reverse the given string.

    Parameters:
    input_string (str): The string to be reversed.

    Returns:
    str: The reversed string.
    """
    # Check if input is a string
    if not isinstance(input_string, str):
        return "Error: Input must be a string"

    # Reverse the string using slicing
    reversed_string = input_string[::-1]

    # Format the output
    result = f"The reversed string is: {reversed_string}"
    return result
These functions perform specific tasks based on the input they are given: basic_calculator handles arithmetic operations, while reverse_string reverses a given string.
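Because the tools are ordinary functions, you can call them directly before involving the model; the expected output follows the return strings defined above:

print(basic_calculator('{"num1": 15, "num2": 7, "operation": "add"}'))
# The answer is: 22
print(basic_calculator({"num1": 100, "num2": 5, "operation": "divide"}))
# The answer is: 20
print(reverse_string("hello world"))
# The reversed string is: dlrow olleh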
Step 4: Build the toolbox
The ToolBox class stores all the tools the agent can use and provides a description of each one:
class ToolBox:
    def __init__(self):
        self.tools_dict = {}

    def store(self, functions_list):
        """
        Stores the literal name and docstring of each function in the list.

        Parameters:
        functions_list (list): List of function objects to store.

        Returns:
        dict: Dictionary with function names as keys and their docstrings as values.
        """
        for func in functions_list:
            self.tools_dict[func.__name__] = func.__doc__
        return self.tools_dict

    def tools(self):
        """
        Returns the dictionary created in store as a text string.

        Returns:
        str: Dictionary of stored functions and their docstrings as a text string.
        """
        tools_str = ""
        for name, doc in self.tools_dict.items():
            tools_str += f"{name}: \"{doc}\"\n"
        return tools_str.strip()
This class helps the agent know which tools are available and what each one does.
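To see exactly what will later be injected into the system prompt, you can store the two tools and print the resulting description string (each entry is the function name followed by its docstring):

toolbox = ToolBox()
toolbox.store([basic_calculator, reverse_string])
print(toolbox.tools())
# basic_calculator: "<full docstring of basic_calculator>"
# reverse_string: "<full docstring of reverse_string>"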
Step 5: Create the agent class
The agent needs to think, decide which tool to use, and execute it. Here is the Agent class:
agent_system_prompt_template = """
You are an intelligent AI assistant with access to specific tools. Your responses must ALWAYS be in this JSON format:
{{
    "tool_choice": "name_of_the_tool",
    "tool_input": "inputs_to_the_tool"
}}

TOOLS AND WHEN TO USE THEM:

1. basic_calculator: Use for ANY mathematical calculations
   - Input format: {{"num1": number, "num2": number, "operation": "add/subtract/multiply/divide"}}
   - Supported operations: add/plus, subtract/minus, multiply/times, divide
   - Example inputs and outputs:
     Input: "Calculate 15 plus 7"
     Output: {{"tool_choice": "basic_calculator", "tool_input": {{"num1": 15, "num2": 7, "operation": "add"}}}}
     Input: "What is 100 divided by 5?"
     Output: {{"tool_choice": "basic_calculator", "tool_input": {{"num1": 100, "num2": 5, "operation": "divide"}}}}

2. reverse_string: Use for ANY request involving reversing text
   - Input format: Just the text to be reversed as a string
   - ALWAYS use this tool when user mentions "reverse", "backwards", or asks to reverse text
   - Example inputs and outputs:
     Input: "Reverse of 'Howwwww'?"
     Output: {{"tool_choice": "reverse_string", "tool_input": "Howwwww"}}
     Input: "What is the reverse of Python?"
     Output: {{"tool_choice": "reverse_string", "tool_input": "Python"}}

3. no tool: Use for general conversation and questions
   - Example inputs and outputs:
     Input: "Who are you?"
     Output: {{"tool_choice": "no tool", "tool_input": "I am an AI assistant that can help you with calculations, reverse text, and answer questions. I can perform mathematical operations and reverse strings. How can I help you today?"}}
     Input: "How are you?"
     Output: {{"tool_choice": "no tool", "tool_input": "I'm functioning well, thank you for asking! I'm here to help you with calculations, text reversal, or answer any questions you might have."}}

STRICT RULES:
1. For questions about identity, capabilities, or feelings:
   - ALWAYS use "no tool"
   - Provide a complete, friendly response
   - Mention your capabilities

2. For ANY text reversal request:
   - ALWAYS use "reverse_string"
   - Extract ONLY the text to be reversed
   - Remove quotes, "reverse of", and other extra text

3. For ANY math operations:
   - ALWAYS use "basic_calculator"
   - Extract the numbers and operation
   - Convert text numbers to digits

Here is a list of your tools along with their descriptions:
{tool_descriptions}

Remember: Your response must ALWAYS be valid JSON with "tool_choice" and "tool_input" fields.
"""
class Agent:
    def __init__(self, tools, model_service, model_name, stop=None):
        """
        Initializes the agent with a list of tools and a model.

        Parameters:
        tools (list): List of tool functions.
        model_service (class): The model service class with a generate_text method.
        model_name (str): The name of the model to use.
        """
        self.tools = tools
        self.model_service = model_service
        self.model_name = model_name
        self.stop = stop

    def prepare_tools(self):
        """
        Stores the tools in the toolbox and returns their descriptions.

        Returns:
        str: Descriptions of the tools stored in the toolbox.
        """
        toolbox = ToolBox()
        toolbox.store(self.tools)
        tool_descriptions = toolbox.tools()
        return tool_descriptions

    def think(self, prompt):
        """
        Runs the generate_text method on the model using the system prompt template and tool descriptions.

        Parameters:
        prompt (str): The user query to generate a response for.

        Returns:
        dict: The response from the model as a dictionary.
        """
        tool_descriptions = self.prepare_tools()
        agent_system_prompt = agent_system_prompt_template.format(tool_descriptions=tool_descriptions)

        # Create an instance of the model service with the system prompt
        if self.model_service == OllamaModel:
            model_instance = self.model_service(
                model=self.model_name,
                system_prompt=agent_system_prompt,
                temperature=0,
                stop=self.stop
            )
        else:
            model_instance = self.model_service(
                model=self.model_name,
                system_prompt=agent_system_prompt,
                temperature=0
            )

        # Generate and return the response dictionary
        agent_response_dict = model_instance.generate_text(prompt)
        return agent_response_dict

    def work(self, prompt):
        """
        Parses the dictionary returned from think and executes the appropriate tool.

        Parameters:
        prompt (str): The user query to generate a response for.

        Returns:
        The response from executing the appropriate tool or the tool_input if no matching tool is found.
        """
        agent_response_dict = self.think(prompt)
        tool_choice = agent_response_dict.get("tool_choice")
        tool_input = agent_response_dict.get("tool_input")

        for tool in self.tools:
            if tool.__name__ == tool_choice:
                response = tool(tool_input)
                print(colored(response, 'cyan'))
                return

        print(colored(tool_input, 'cyan'))
        return
This class has three main methods (a short example of the flow follows the list):
- prepare_tools: stores the tools and returns their descriptions.
- think: decides which tool to use based on the user prompt.
- work: executes the selected tool and returns the result.
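Here is a sketch of that two-stage flow, assuming Ollama is running locally with the llama2 model; the printed decision is what a well-behaved model typically returns, but the exact output depends on the model:

agent = Agent(
    tools=[basic_calculator, reverse_string],
    model_service=OllamaModel,
    model_name="llama2",
    stop="<|eot_id|>"
)

# think() only asks the model which tool to use and returns its JSON decision.
decision = agent.think("What is 100 divided by 5?")
print(decision)
# e.g. {'tool_choice': 'basic_calculator', 'tool_input': {'num1': 100, 'num2': 5, 'operation': 'divide'}}

# work() runs think() and then executes the chosen tool, printing the result.
agent.work("What is 100 divided by 5?")
# prints: The answer is: 20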
Step 6: Run the agent
Finally, let's put everything together and run the agent. In the main section of the script, initialize the agent and start accepting user input:
# Example usage
if __name__ == "__main__":
    """
    Instructions for using this agent:

    Example queries you can try:
    1. Calculator operations:
       - "Calculate 15 plus 7"
       - "What is 100 divided by 5?"
       - "Multiply 23 and 4"

    2. String reversal:
       - "Reverse the word 'hello world'"
       - "Can you reverse 'Python Programming'?"

    3. General questions (will get direct responses):
       - "Who are you?"
       - "What can you help me with?"

    Ollama commands (run these in a terminal):
    - Check available models: 'ollama list'
    - Check running models: 'ps aux | grep ollama'
    - List model tags: 'curl http://localhost:11434/api/tags'
    - Pull a new model: 'ollama pull mistral'
    - Run the model server: 'ollama serve'
    """
    tools = [basic_calculator, reverse_string]

    # Uncomment below to run with OpenAI
    # model_service = OpenAIModel
    # model_name = 'gpt-3.5-turbo'
    # stop = None

    # Using Ollama with the llama2 model
    model_service = OllamaModel
    model_name = "llama2"  # Can be changed to other models like 'mistral', 'codellama', etc.
    stop = "<|eot_id|>"

    agent = Agent(tools=tools, model_service=model_service, model_name=model_name, stop=stop)

    print("\nWelcome to the AI Agent! Type 'exit' to quit.")
    print("You can ask me to:")
    print("1. Perform calculations (e.g., 'Calculate 15 plus 7')")
    print("2. Reverse strings (e.g., 'Reverse hello world')")
    print("3. Answer general questions\n")

    while True:
        prompt = input("Ask me anything: ")
        if prompt.lower() == "exit":
            break
        agent.work(prompt)
3. Summary
We looked at what AI agents are and then implemented one step by step. We set up the environment, defined the model, created the necessary tools, and built a structured toolbox to support the agent's capabilities. Finally, we put everything together and set the agent to work.
This structured approach provides a solid foundation for building intelligent, interactive agents that can automate tasks and make informed decisions. As agents continue to evolve, their applications will keep expanding across industries, driving efficiency and innovation. Stay tuned for more insights and improvements that take our agents to the next level!
4. Full code
from termcolor import colored
import os
from dotenv import load_dotenv
load_dotenv()
### Models
import requests
import json
import operator
class OllamaModel:
def __init__(self, model, system_prompt, temperature=0, stop=None):
"""
Initializes the OllamaModel with the given parameters.
Parameters:
model (str): The name of the model to use.
system_prompt (str): The system prompt to use.
temperature (float): The temperature setting for the model.
stop (str): The stop token for the model.
"""
self.model_endpoint = "http://localhost:11434/api/generate"
self.temperature = temperature
self.model = model
self.system_prompt = system_prompt
self.headers = {"Content-Type": "application/json"}
self.stop = stop
def generate_text(self, prompt):
"""
Generates a response from the Ollama model based on the provided prompt.
Parameters:
prompt (str): The user query to generate a response for.
Returns:
dict: The response from the model as a dictionary.
"""
payload = {
"model": self.model,
"format": "json",
"prompt": prompt,
"system": self.system_prompt,
"stream": False,
"temperature": self.temperature,
"stop": self.stop
}
try:
request_response = requests.post(
self.model_endpoint,
headers=self.headers,
data=json.dumps(payload)
)
print("REQUEST RESPONSE", request_response)
request_response_json = request_response.json()
response = request_response_json['response']
response_dict = json.loads(response)
print(f"\n\nResponse from Ollama model: {response_dict}")
return response_dict
except requests.RequestException as e:
response = {"error": f"Error in invoking model! {str(e)}"}
return response
def basic_calculator(input_str):
"""
Perform a numeric operation on two numbers based on the input string or dictionary.
Parameters:
input_str (str or dict): Either a JSON string representing a dictionary with keys 'num1', 'num2', and 'operation',
or a dictionary directly. Example: '{"num1": 5, "num2": 3, "operation": "add"}'
or {"num1": 67869, "num2": 9030393, "operation": "divide"}
Returns:
str: The formatted result of the operation.
Raises:
Exception: If an error occurs during the operation (e.g., division by zero).
ValueError: If an unsupported operation is requested or input is invalid.
"""
try:
# Handle both dictionary and string inputs
if isinstance(input_str, dict):
input_dict = input_str
else:
# Clean and parse the input string
input_str_clean = input_str.replace("'", "\"")
input_str_clean = input_str_clean.strip().strip("\"")
input_dict = json.loads(input_str_clean)
# Validate required fields
if not all(key in input_dict for key in ['num1', 'num2', 'operation']):
return "Error: Input must contain 'num1', 'num2', and 'operation'"
num1 = float(input_dict['num1']) # Convert to float to handle decimal numbers
num2 = float(input_dict['num2'])
operation = input_dict['operation'].lower() # Make case-insensitive
except (json.JSONDecodeError, KeyError) as e:
return "Invalid input format. Please provide valid numbers and operation."
except ValueError as e:
return "Error: Please provide valid numerical values."
# Define the supported operations with error handling
operations = {
'add': operator.add,
'plus': operator.add, # Alternative word for add
'subtract': operator.sub,
'minus': operator.sub, # Alternative word for subtract
'multiply': operator.mul,
'times': operator.mul, # Alternative word for multiply
'divide': operator.truediv,
'floor_divide': operator.floordiv,
'modulus': operator.mod,
'power': operator.pow,
'lt': operator.lt,
'le': operator.le,
'eq': operator.eq,
'ne': operator.ne,
'ge': operator.ge,
'gt': operator.gt
}
# Check if the operation is supported
if operation not in operations:
return f"Unsupported operation: '{operation}'. Supported operations are: {', '.join(operations.keys())}"
try:
# Special handling for division by zero
if (operation in ['divide', 'floor_divide', 'modulus']) and num2 == 0:
return "Error: Division by zero is not allowed"
# Perform the operation
result = operations[operation](num1, num2)
# Format result based on type
if isinstance(result, bool):
result_str = "True" if result else "False"
elif isinstance(result, float):
# Handle floating point precision
result_str = f"{result:.6f}".rstrip('0').rstrip('.')
else:
result_str = str(result)
return f"The answer is: {result_str}"
except Exception as e:
return f"Error during calculation: {str(e)}"
def reverse_string(input_string):
"""
Reverse the given string.
Parameters:
input_string (str): The string to be reversed.
Returns:
str: The reversed string.
"""
# Check if input is a string
if not isinstance(input_string, str):
return "Error: Input must be a string"
# Reverse the string using slicing
reversed_string = input_string[::-1]
# Format the output
result = f"The reversed string is: {reversed_string}"
return result
class ToolBox:
def __init__(self):
self.tools_dict = {}
def store(self, functions_list):
"""
Stores the literal name and docstring of each function in the list.
Parameters:
functions_list (list): List of function objects to store.
Returns:
dict: Dictionary with function names as keys and their docstrings as values.
"""
for func in functions_list:
self.tools_dict[func.__name__] = func.__doc__
return self.tools_dict
def tools(self):
"""
Returns the dictionary created in store as a text string.
Returns:
str: Dictionary of stored functions and their docstrings as a text string.
"""
tools_str = ""
for name, doc in self.tools_dict.items():
tools_str += f"{name}: \"{doc}\"\n"
return tools_str.strip()
agent_system_prompt_template = """
You are an intelligent AI assistant with access to specific tools. Your responses must ALWAYS be in this JSON format:
{{
"tool_choice": "name_of_the_tool",
"tool_input": "inputs_to_the_tool"
}}
TOOLS AND WHEN TO USE THEM:
1. basic_calculator: Use for ANY mathematical calculations
- Input format: {{"num1": number, "num2": number, "operation": "add/subtract/multiply/divide"}}
- Supported operations: add/plus, subtract/minus, multiply/times, divide
- Example inputs and outputs:
Input: "Calculate 15 plus 7"
Output: {{"tool_choice": "basic_calculator", "tool_input": {{"num1": 15, "num2": 7, "operation": "add"}}}}
Input: "What is 100 divided by 5?"
Output: {{"tool_choice": "basic_calculator", "tool_input": {{"num1": 100, "num2": 5, "operation": "divide"}}}}
2. reverse_string: Use for ANY request involving reversing text
- Input format: Just the text to be reversed as a string
- ALWAYS use this tool when user mentions "reverse", "backwards", or asks to reverse text
- Example inputs and outputs:
Input: "Reverse of 'Howwwww'?"
Output: {{"tool_choice": "reverse_string", "tool_input": "Howwwww"}}
Input: "What is the reverse of Python?"
Output: {{"tool_choice": "reverse_string", "tool_input": "Python"}}
3. no tool: Use for general conversation and questions
- Example inputs and outputs:
Input: "Who are you?"
Output: {{"tool_choice": "no tool", "tool_input": "I am an AI assistant that can help you with calculations, reverse text, and answer questions. I can perform mathematical operations and reverse strings. How can I help you today?"}}
Input: "How are you?"
Output: {{"tool_choice": "no tool", "tool_input": "I'm functioning well, thank you for asking! I'm here to help you with calculations, text reversal, or answer any questions you might have."}}
STRICT RULES:
1. For questions about identity, capabilities, or feelings:
- ALWAYS use "no tool"
- Provide a complete, friendly response
- Mention your capabilities
2. For ANY text reversal request:
- ALWAYS use "reverse_string"
- Extract ONLY the text to be reversed
- Remove quotes, "reverse of", and other extra text
3. For ANY math operations:
- ALWAYS use "basic_calculator"
- Extract the numbers and operation
- Convert text numbers to digits
Here is a list of your tools along with their descriptions:
{tool_descriptions}
Remember: Your response must ALWAYS be valid JSON with "tool_choice" and "tool_input" fields.
"""
class Agent:
def __init__(self, tools, model_service, model_name, stop=None):
"""
Initializes the agent with a list of tools and a model.
Parameters:
tools (list): List of tool functions.
model_service (class): The model service class with a generate_text method.
model_name (str): The name of the model to use.
"""
self.tools = tools
self.model_service = model_service
self.model_name = model_name
self.stop = stop
def prepare_tools(self):
"""
Stores the tools in the toolbox and returns their descriptions.
Returns:
str: Descriptions of the tools stored in the toolbox.
"""
toolbox = ToolBox()
toolbox.store(self.tools)
tool_descriptions = toolbox.tools()
return tool_descriptions
def think(self, prompt):
"""
Runs the generate_text method on the model using the system prompt template and tool descriptions.
Parameters:
prompt (str): The user query to generate a response for.
Returns:
dict: The response from the model as a dictionary.
"""
tool_descriptions = self.prepare_tools()
agent_system_prompt = agent_system_prompt_template.format(tool_descriptions=tool_descriptions)
# Create an instance of the model service with the system prompt
if self.model_service == OllamaModel:
model_instance = self.model_service(
model=self.model_name,
system_prompt=agent_system_prompt,
temperature=0,
stop=self.stop
)
else:
model_instance = self.model_service(
model=self.model_name,
system_prompt=agent_system_prompt,
temperature=0
)
# Generate and return the response dictionary
agent_response_dict = model_instance.generate_text(prompt)
return agent_response_dict
def work(self, prompt):
"""
Parses the dictionary returned from think and executes the appropriate tool.
Parameters:
prompt (str): The user query to generate a response for.
Returns:
The response from executing the appropriate tool or the tool_input if no matching tool is found.
"""
agent_response_dict = self.think(prompt)
tool_choice = agent_response_dict.get("tool_choice")
tool_input = agent_response_dict.get("tool_input")
for tool in self.tools:
if tool.__name__ == tool_choice:
response = tool(tool_input)
print(colored(response, 'cyan'))
return
print(colored(tool_input, 'cyan'))
return
# Example usage
if __name__ == "__main__":
"""
Instructions for using this agent:
Example queries you can try:
1. Calculator operations:
- "Calculate 15 plus 7"
- "What is 100 divided by 5?"
- "Multiply 23 and 4"
2. String reversal:
- "Reverse the word 'hello world'"
- "Can you reverse 'Python Programming'?"
3. General questions (will get direct responses):
- "Who are you?"
- "What can you help me with?"
Ollama Commands (run these in terminal):
- Check available models: 'ollama list'
- Check running models: 'ps aux | grep ollama'
- List model tags: 'curl http://localhost:11434/api/tags'
- Pull a new model: 'ollama pull mistral'
- Run model server: 'ollama serve'
"""
tools = [basic_calculator, reverse_string]
# Uncomment below to run with OpenAI
# model_service = OpenAIModel
# model_name = 'gpt-3.5-turbo'
# stop = None
# Using Ollama with llama2 model
model_service = OllamaModel
model_name = "llama2" # Can be changed to other models like 'mistral', 'codellama', etc.
stop = "<|eot_id|>"
agent = Agent(tools=tools, model_service=model_service, model_name=model_name, stop=stop)
print("\nWelcome to the AI Agent! Type 'exit' to quit.")
print("You can ask me to:")
print("1. Perform calculations (e.g., 'Calculate 15 plus 7')")
print("2. Reverse strings (e.g., 'Reverse hello world')")
print("3. Answer general questions\n")
while True:
prompt = input("Ask me anything: ")
if prompt.lower() == "exit":
break
agent.work(prompt)