Agent frameworks are application-layer frameworks built on top of LLMs: their core job is to communicate with the LLM while making business logic orchestrable and explainable. AutoGen supports multiple agents, and several of its capabilities are powerful and worth learning from.
Design
Taking the business requirements as the baseline, combine them with the capabilities provided by a multi-agent framework (AutoGen) to design a development architecture that integrates product and engineering.
Capabilities
Nested
Packages a workflow or agent into a single agent that other agents can embed and reuse. There are two key pieces:
- Registering the nested agent.
- Triggering the agent.
The register_nested_chats function handles both of these, controlled by its parameters.
nested_chats = [
    {
        "recipient": group_chat_manager_with_intros,
        "summary_method": "reflection_with_llm",
        "summary_prompt": "Summarize the sequence of operations used to turn the source number into the target number.",
    },
    {
        "recipient": code_writer_agent,
        "message": "Write a Python script to verify that the arithmetic operations are correct.",
        "summary_method": "reflection_with_llm",
    },
    {
        "recipient": poetry_agent,
        "message": "Write a poem about it.",
        "max_turns": 1,
        "summary_method": "last_msg",
    },
]
arithmetic_agent.register_nested_chats(
    nested_chats,
    # The trigger function is used to determine if the agent should start the nested chat
    # given the sender agent.
    # In this case, the arithmetic agent will not start the nested chats if the sender is
    # from the nested chats' recipients, to avoid recursive calls.
    trigger=lambda sender: sender not in [group_chat_manager_with_intros, code_writer_agent, poetry_agent],
)
LLM Reflection
Reflection on LLM responses. This still builds on the register_nested_chats function; the difference lies in the message parameter, which can now be a function that dynamically produces the message content, whereas the default is a static string.
# Define the reflection function, used as the prompt message for the nested agent.
def reflection_message(recipient, messages, sender, config):
    print("Reflecting...")
    return f"Reflect and provide critique on the following writing. \n\n {recipient.chat_messages_for_summary(sender)[-1]['content']}"
nested_chat_queue = [
    {
        "recipient": reflection_assistant,
        "message": reflection_message,
        "max_turns": 1,
    },
]
user_proxy.register_nested_chats(
    nested_chat_queue,
    trigger=writing_assistant,
)
Inside the reflection function you can process the chat content however you like, e.g. summarize or abstract it; the full chat history is freely accessible.
initiate_chat
Initializes a chat chain and invokes the agents strictly in the given order. In multi-agent business scenarios this can be used to orchestrate agents, ultimately achieving a continue-chat effect.
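This strict-order chaining can be sketched without AutoGen as a loop that runs each chat in turn and carries the previous chat's summary into the next message. The chat-queue shape below mirrors the one used above, but the "agents" are hypothetical stand-in functions, not real LLM-backed agents:

```python
# Minimal sketch of sequential chats with carryover (plain Python; the
# recipients below are toy functions standing in for LLM-backed agents).

def run_sequential_chats(chat_queue):
    """Call each recipient in strict order, carrying each summary forward."""
    carryover = []
    summaries = []
    for chat in chat_queue:
        message = chat["message"]
        if carryover:
            message += "\nContext: " + " ".join(carryover)
        reply = chat["recipient"](message)  # stand-in for one chat round
        summary = chat.get("summary_method", lambda r: r)(reply)  # default: identity
        carryover.append(summary)
        summaries.append(summary)
    return summaries

# Toy "agents": one uppercases its input, one counts the words it receives.
shout_agent = lambda msg: msg.upper()
count_agent = lambda msg: f"{len(msg.split())} words"

chats = [
    {"recipient": shout_agent, "message": "hello world"},
    {"recipient": count_agent, "message": "count this"},
]
print(run_sequential_chats(chats))  # ['HELLO WORLD', '5 words']
```

The second agent sees the first chat's summary appended as context, which is exactly the carryover behavior the summary_method parameter controls.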
summary_method
Supports carrying message content over (carryover) between sequential agents, controlled by the summary_method parameter, which offers two built-in options: last_msg and reflection_with_llm. By following this parameter's contract you can also implement your own summary method, as shown below:
def my_summary_method(
    sender: ConversableAgent,
    recipient: ConversableAgent,
    summary_args: dict,
):
    return recipient.last_message(sender)["content"]
Taking the reflection_with_llm implementation as an example: it aggregates all the messages, then calls the LLM client with the preset summary_args["summary_prompt"] prompt and returns the processed content.
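That logic can be sketched as follows; llm_client here is a hypothetical stand-in for the real LLM call, and the exact prompt assembly is an assumption:

```python
# Sketch of reflection_with_llm-style summarization: aggregate all messages,
# then ask the LLM to summarize them with the preset summary_prompt.

def llm_client(full_prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call: echo the conversation's last line.
    return "[LLM summary of] " + full_prompt.splitlines()[-1]

def reflection_with_llm_summary(messages: list, summary_args: dict) -> str:
    prompt = summary_args.get("summary_prompt", "Summarize the conversation.")
    history = "\n".join(m["content"] for m in messages)
    return llm_client(prompt + "\n" + history)

messages = [
    {"role": "user", "content": "Turn 3 into 13."},
    {"role": "assistant", "content": "3 * 4 + 1 = 13"},
]
summary = reflection_with_llm_summary(messages, {"summary_prompt": "Summarize the operations."})
print(summary)  # [LLM summary of] 3 * 4 + 1 = 13
```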
Teachability
Teachability [1] (an agent learning capability). The Teachability class stores information the agent has learned in a vector database persisted on local disk, supporting later retrieval and reasoning, i.e. conversational memory.
# Start by instantiating any agent that inherits from ConversableAgent.
teachable_agent = ConversableAgent(
    name="teachable_agent",  # The name is flexible, but should not contain spaces to work in group chat.
    llm_config={"config_list": config_list, "timeout": 120, "cache_seed": None},  # Disable caching.
)

# Instantiate the Teachability capability. Its parameters are all optional.
teachability = Teachability(
    verbosity=0,  # 0 for basic info, 1 to add memory operations, 2 for analyzer messages, 3 for memo lists.
    reset_db=True,
    path_to_db_dir="./tmp/notebook/teachability_db",
    recall_threshold=1.5,  # Higher numbers allow more (but less relevant) memos to be recalled.
)

# Now add the Teachability capability to the agent.
teachability.add_to_agent(teachable_agent)

# Instantiate a UserProxyAgent to represent the user. But in this notebook, all user input will be simulated.
user = UserProxyAgent(
    name="user",
    human_input_mode="NEVER",
    is_termination_msg=lambda x: True if "TERMINATE" in x.get("content") else False,
    max_consecutive_auto_reply=0,
    code_execution_config={
        "use_docker": False
    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
)
Tool use
Tool calling. The core is the registration flow, which has two steps: registering the tool's signature and registering the tool for execution. These two operations are bound to different agents.
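The calculator referenced below is not defined in this excerpt; a minimal sketch might look like this (the type annotations are what lets the framework derive a tool schema; the implementation details are an assumption):

```python
from typing import Annotated, Literal

Operator = Literal["+", "-", "*", "/"]

def calculator(a: int, b: int, operator: Annotated[Operator, "operator"]) -> int:
    # A deliberately simple integer calculator used as the example tool.
    if operator == "+":
        return a + b
    elif operator == "-":
        return a - b
    elif operator == "*":
        return a * b
    elif operator == "/":
        return int(a / b)
    raise ValueError("Invalid operator")
```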
# Register the tool signature with the assistant agent.
assistant.register_for_llm(name="calculator", description="A simple calculator")(calculator)
# Register the tool function with the user proxy agent.
user_proxy.register_for_execution(name="calculator")(calculator)
Or do both in a single call:
# Register the calculator function to the two agents.
register_function(
    calculator,
    caller=assistant,  # The assistant agent can suggest calls to the calculator.
    executor=user_proxy,  # The user proxy agent can execute the calculator calls.
    name="calculator",  # By default, the function name is used as the tool name.
    description="A simple calculator",  # A description of the tool.
)
RAG
The RetrieveUserProxyAgent and RetrieveAssistantAgent classes implement the RAG functionality.
# 1. Create a RetrieveAssistantAgent instance named "assistant".
assistant = RetrieveAssistantAgent(
    name="assistant",
    system_message="You are a helpful assistant.",
    llm_config={
        "timeout": 600,
        "cache_seed": 42,
        "config_list": config_list,
    },
)

ragproxyagent = RetrieveUserProxyAgent(
    name="ragproxyagent",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
    retrieve_config={
        "task": "code",
        "docs_path": [
            "https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Examples/Integrate%20-%20Spark.md",
            "https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Research.md",
            os.path.join(os.path.abspath(""), "..", "website", "docs"),
        ],
        "custom_text_types": ["non-existent-type"],
        "chunk_token_size": 2000,
        "model": config_list[0]["model"],
        "vector_db": "mongodb",  # MongoDB Atlas database
        "collection_name": "demo_collection",
        "db_config": {
            "connection_string": os.environ["MONGODB_URI"],  # MongoDB Atlas connection string
            "database_name": "test_db",  # MongoDB Atlas database
            "index_name": "vector_index",
            "wait_until_index_ready": 120.0,  # Wait up to 120 seconds or until the index is constructed before querying.
            "wait_until_document_ready": 120.0,  # Wait up to 120 seconds or until the document is properly indexed after insertion/update.
        },
        "get_or_create": True,  # Set to False if you don't want to reuse an existing collection.
        "overwrite": False,  # Set to True to overwrite an existing collection; each overwrite forces index creation and a reupload of documents.
    },
    code_execution_config=False,  # Set to False if you don't want to execute the code.
)
Open questions
- How is RAG triggered and switched? That is, RAG conversations are kept separate from plain LLM conversations: some questions should go only to the RAG agent, while others go to other agents.
- How are tools recognized? There is usually a tool_choice parameter, defaulting to auto, which lets the LLM itself decide whether to call a function.
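For reference, this is roughly how an OpenAI-style chat completion payload carries the tool schema and tool_choice. This is a payload sketch only, assuming an OpenAI-compatible endpoint; nothing is actually sent:

```python
# Sketch of an OpenAI-style request payload with tool_choice="auto".
request = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "What is 44 * 13?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "calculator",
                "description": "A simple calculator",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "a": {"type": "integer"},
                        "b": {"type": "integer"},
                        "operator": {"type": "string", "enum": ["+", "-", "*", "/"]},
                    },
                    "required": ["a", "b", "operator"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # the model decides whether to emit a tool call
}
```

With "auto", the model may answer directly or return a tool call; setting tool_choice to a specific function forces that call instead.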
[1] Teachability: https://microsoft.github.io/autogen/docs/notebooks/agentchat_teachability/#set-your-api-endpoint
Originally published on the WeChat official account 阿郎小哥的随笔驿站, under the title 聊聊基于AutoGen多Agents框架带来的产研技术思考.