| Column | Type | Details |
|---|---|---|
| source | stringclasses | 1 value |
| repository | stringclasses | 1 value |
| file | stringlengths | 17–99 |
| label | stringclasses | 1 value |
| text | stringlengths | 11–14.2k |
GitHub
autogen
autogen/CODE_OF_CONDUCT.md
autogen
# Microsoft Open Source Code of Conduct This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). Resources: - [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) - [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) - Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns
GitHub
autogen
autogen/TRANSPARENCY_FAQS.md
autogen
# AutoGen: Responsible AI FAQs
GitHub
autogen
autogen/TRANSPARENCY_FAQS.md
autogen
What is AutoGen? AutoGen is a framework for simplifying the orchestration, optimization, and automation of LLM workflows. It offers customizable and conversable agents that leverage the strongest capabilities of the most advanced LLMs, like GPT-4, while addressing their limitations by integrating with humans and tools and having conversations between multiple agents via automated chat.
GitHub
autogen
autogen/TRANSPARENCY_FAQS.md
autogen
What can AutoGen do? AutoGen is an experimental framework for building complex multi-agent conversation systems by: - Defining a set of agents with specialized capabilities and roles. - Defining the interaction behavior between agents, i.e., what an agent should reply when it receives messages from another agent. This agent conversation-centric design has numerous benefits, including that it: - Naturally handles ambiguity, feedback, progress, and collaboration. - Enables effective coding-related tasks, like tool use with back-and-forth troubleshooting. - Allows users to seamlessly opt in or opt out via an agent in the chat. - Achieves a collective goal with the cooperation of multiple specialists.
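To make those two design steps concrete, here is a minimal, hedged sketch of defining specialized agents and wiring their interaction through a group chat. It assumes a valid `OAI_CONFIG_LIST` file is available; the agent names, system messages, and task are illustrative placeholders, not part of the original FAQ.

```python
from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager, config_list_from_json

# Assumes an OAI_CONFIG_LIST file with model/API-key entries is available.
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
llm_config = {"config_list": config_list}

# Step 1: define agents with specialized capabilities and roles.
planner = AssistantAgent("planner", llm_config=llm_config,
                         system_message="Break the task into small, verifiable steps.")
coder = AssistantAgent("coder", llm_config=llm_config,
                       system_message="Write Python code for each step proposed by the planner.")
user_proxy = UserProxyAgent("user_proxy", human_input_mode="TERMINATE",
                            code_execution_config={"work_dir": "coding", "use_docker": True})

# Step 2: define how the agents interact -- here via a group chat whose manager picks the next speaker.
groupchat = GroupChat(agents=[user_proxy, planner, coder], messages=[], max_round=12)
manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Download today's weather for Seattle and plot it.")
```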
GitHub
autogen
autogen/TRANSPARENCY_FAQS.md
autogen
What is/are AutoGen’s intended use(s)? Please note that AutoGen is an open-source library under active development and intended for research purposes. It should not be used in any downstream applications without additional detailed evaluation of robustness and safety issues, and assessment of any potential harm or bias in the proposed application. AutoGen is a generic infrastructure that can be used in multiple scenarios. The system’s intended uses include: - Building LLM workflows that solve more complex tasks: Users can create agents that interleave reasoning and tool use capabilities of the latest LLMs such as GPT-4. To solve complex tasks, multiple agents can converse to work together (e.g., by partitioning a complex problem into simpler steps or by providing different viewpoints or perspectives). - Application-specific agent topologies: Users can create application-specific agent topologies and patterns for agents to interact. The exact topology may depend on the domain’s complexity and the semantic capabilities of the available LLM. - Code generation and execution: Users can implement agents that can assume the roles of writing code and other agents that can execute code. Agents can do this with varying levels of human involvement. Users can add more agents and program the conversations to enforce constraints on code and output. - Question answering: Users can create agents that can help answer questions using retrieval augmented generation. - End user and multi-agent chat and debate: Users can build chat applications where they converse with multiple agents at the same time. While AutoGen automates LLM workflows, decisions about how to use specific LLM outputs should always have a human in the loop. For example, you should not use AutoGen to automatically post LLM-generated content to social media.
GitHub
autogen
autogen/TRANSPARENCY_FAQS.md
autogen
How was AutoGen evaluated? What metrics are used to measure performance? - The current version of AutoGen was evaluated on six applications to illustrate its potential in simplifying the development of high-performance multi-agent applications. These applications were selected based on their real-world relevance, problem difficulty, the problem-solving capabilities enabled by AutoGen, and innovative potential. - These applications involve using AutoGen to solve math problems, question answering, decision making in text world environments, supply chain optimization, etc. For each of these domains, AutoGen was evaluated on various success-based metrics (i.e., how often the AutoGen-based implementation solved the task). In some cases, the AutoGen-based approach was also evaluated on implementation efficiency (e.g., to track reductions in developer effort to build). More details can be found at: https://aka.ms/AutoGen/TechReport - The team has conducted tests where a “red” agent attempts to get the default AutoGen assistant to break from its alignment and guardrails. The team has observed that out of 70 attempts to break guardrails, only 1 was successful in producing text that would have been flagged as problematic by Azure OpenAI filters. The team has not observed any evidence that AutoGen (or GPT models as hosted by OpenAI or Azure) can produce novel code exploits or jailbreak prompts, since direct prompts to “be a hacker”, “write exploits”, or “produce a phishing email” are refused by existing filters. - We also evaluated [a team of AutoGen agents](https://github.com/microsoft/autogen/tree/gaia_multiagent_v01_march_1st/samples/tools/autogenbench/scenarios/GAIA/Templates/Orchestrator) on the [GAIA benchmarks](https://arxiv.org/abs/2311.12983), and achieved [SOTA results](https://huggingface.co/spaces/gaia-benchmark/leaderboard) as of March 1, 2024.
GitHub
autogen
autogen/TRANSPARENCY_FAQS.md
autogen
What are the limitations of AutoGen? How can users minimize the impact of AutoGen’s limitations when using the system? AutoGen relies on existing LLMs, so experimenting with AutoGen retains the common limitations of large language models, including: - Data Biases: Large language models, trained on extensive data, can inadvertently carry biases present in the source data. Consequently, the models may generate outputs that could be potentially biased or unfair. - Lack of Contextual Understanding: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting in potential inaccuracies or nonsensical responses. - Lack of Transparency: Due to their complexity and size, large language models can act as "black boxes," making it difficult to comprehend the rationale behind specific outputs or decisions. - Content Harms: There are various types of content harms that large language models can cause. It is important to be aware of them when using these models, and to take actions to prevent them. It is recommended to leverage various content moderation services provided by different companies and institutions. - Inaccurate or ungrounded content: It is important to be aware of and cautious about relying entirely on a given language model for critical decisions or information that might have a deep impact, as it is not obvious how to prevent these models from fabricating content without high-authority input sources. - Potential for Misuse: Without suitable safeguards, there is a risk that these models could be maliciously used for generating disinformation or harmful content. Additionally, AutoGen’s multi-agent framework may amplify or introduce additional risks, such as: - Privacy and Data Protection: The framework allows for human participation in conversations between agents. It is important to ensure that user data and conversations are protected and that developers use appropriate measures to safeguard privacy. - Accountability and Transparency: Because the framework involves multiple agents conversing and collaborating, it is important to establish clear accountability and transparency mechanisms. Users should be able to understand and trace the decision-making process of the agents involved in order to ensure accountability and address any potential issues or biases. - Trust and reliance: The framework leverages human understanding and intelligence while providing automation through conversations between agents. It is important to consider the impact of this interaction on user experience, trust, and reliance on AI systems. Clear communication and user education about the capabilities and limitations of the system will be essential. - Security & unintended consequences: The use of multi-agent conversations and automation in complex tasks may have unintended consequences. In particular, allowing LLM agents to make changes in external environments through code execution or function calls, such as installing packages, could pose significant risks. Developers should carefully consider the potential risks and ensure that appropriate safeguards are in place to prevent harm or negative outcomes, including keeping a human in the loop for decision making.
GitHub
autogen
autogen/TRANSPARENCY_FAQS.md
autogen
What operational factors and settings allow for effective and responsible use of AutoGen? - Code execution: AutoGen recommends using Docker containers so that code execution happens in a safer manner. Users can use function calls instead of free-form code to execute only pre-defined functions, which increases reliability and safety. Users can also customize the code execution environment to fit their requirements. - Human involvement: AutoGen prioritizes human involvement in multi-agent conversations. Overseers can step in to give feedback to agents and steer them in the correct direction. By default, users get a chance to confirm before code is executed. - Agent modularity: Modularity allows agents to have different levels of information access. Additional agents can assume roles that help keep other agents in check. For example, one can easily add a dedicated agent to play the role of a safeguard. - LLMs: Users can choose the LLM that is optimized for responsible use. The default LLM is GPT-4, which inherits the existing RAI mechanisms and filters from the LLM provider. Caching is enabled by default to increase reliability and control cost. We encourage developers to review [OpenAI’s Usage policies](https://openai.com/policies/usage-policies) and [Azure OpenAI’s Code of Conduct](https://learn.microsoft.com/en-us/legal/cognitive-services/openai/code-of-conduct) when using GPT-4. - Multi-agent setup: When using auto replies, users can limit the number of auto replies, set termination conditions, etc. in the settings to increase reliability.
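As an illustration of these operational settings, the hedged sketch below shows how the human input mode, an auto-reply limit, a termination condition, and Docker-based code execution can be configured on a `UserProxyAgent`. The specific values and the task message are examples, not recommendations.

```python
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json

# Assumes an OAI_CONFIG_LIST file is available; see the AutoGen FAQ on API endpoints.
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})

user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="TERMINATE",          # ask a human before ending; "ALWAYS" asks at every turn
    max_consecutive_auto_reply=5,          # cap automatic replies to bound runaway loops
    is_termination_msg=lambda m: "TERMINATE" in (m.get("content") or ""),  # stop condition
    code_execution_config={"work_dir": "coding", "use_docker": True},      # run generated code in Docker
)

user_proxy.initiate_chat(assistant, message="Summarize the latest AutoGen release notes.")
```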
GitHub
autogen
autogen/README.md
autogen
<a name="readme-top"></a> <div align="center"> <img src="https://microsoft.github.io/autogen/img/ag.svg" alt="AutoGen Logo" width="100"> ![Python Version](https://img.shields.io/badge/3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue) [![PyPI - Version](https://img.shields.io/pypi/v/autogen-agentchat)](https://pypi.org/project/autogen-agentchat/) [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40pyautogen)](https://twitter.com/pyautogen) </div> # AutoGen AutoGen is an open-source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks. AutoGen aims to streamline the development and research of agentic AI, much like PyTorch does for Deep Learning. It offers features such as agents that can interact with each other, support for various large language models (LLMs) and tools, autonomous and human-in-the-loop workflows, and multi-agent conversation patterns. > [!IMPORTANT] > To better align with a new multi-packaging structure coming very soon, AutoGen is now available on PyPI as [`autogen-agentchat`](https://pypi.org/project/autogen-agentchat/) as of version `0.2.36`. This is the official package for the AutoGen project. > [!NOTE] > *Note for contributors and users*: [microsoft/autogen](https://aka.ms/autogen-gh) is the official repository of the AutoGen project and is under active development and maintenance under the MIT license. We welcome contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. We acknowledge the invaluable contributions from our existing contributors, as listed in [contributors.md](./CONTRIBUTORS.md). Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project. For further information please also see [Microsoft open-source contributing guidelines](https://github.com/microsoft/autogen?tab=readme-ov-file#contributing). > > -_Maintainers (Sept 6th, 2024)_ ![AutoGen Overview](https://github.com/microsoft/autogen/blob/main/website/static/img/autogen_agentchat.png) - AutoGen enables building next-gen LLM applications based on [multi-agent conversations](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat) with minimal effort. It simplifies the orchestration, automation, and optimization of complex LLM workflows. It maximizes the performance of LLM models and overcomes their weaknesses. - It supports [diverse conversation patterns](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat#supporting-diverse-conversation-patterns) for complex workflows. With customizable and conversable agents, developers can use AutoGen to build a wide range of conversation patterns concerning conversation autonomy, the number of agents, and agent conversation topology. - It provides a collection of working systems with different complexities. These systems span a [wide range of applications](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat#diverse-applications-implemented-with-autogen) from various domains and complexities. This demonstrates how AutoGen can easily support diverse conversation patterns.
- AutoGen provides [enhanced LLM inference](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#api-unification). It offers utilities like API unification and caching, and advanced usage patterns, such as error handling, multi-config inference, context programming, etc. AutoGen was created out of collaborative [research](https://microsoft.github.io/autogen/docs/Research) from Microsoft, Penn State University, and the University of Washington. <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
GitHub
autogen
autogen/README.md
autogen
News <details> <summary>Expand</summary> :fire: June 6, 2024: WIRED publishes a new article on AutoGen: [Chatbot Teamwork Makes the AI Dream Work](https://www.wired.com/story/chatbot-teamwork-makes-the-ai-dream-work/) based on an interview with [Adam Fourney](https://github.com/afourney). :fire: June 4th, 2024: Microsoft Research Forum publishes a new update and video on [AutoGen and Complex Tasks](https://www.microsoft.com/en-us/research/video/autogen-update-complex-tasks-and-agents/) presented by [Adam Fourney](https://github.com/afourney). :fire: May 29, 2024: DeepLearning.ai launched a new short course [AI Agentic Design Patterns with AutoGen](https://www.deeplearning.ai/short-courses/ai-agentic-design-patterns-with-autogen), made in collaboration with Microsoft and Penn State University, and taught by AutoGen creators [Chi Wang](https://github.com/sonichi) and [Qingyun Wu](https://github.com/qingyun-wu). :fire: May 24, 2024: Foundation Capital published an article on [Forbes: The Promise of Multi-Agent AI](https://www.forbes.com/sites/joannechen/2024/05/24/the-promise-of-multi-agent-ai/?sh=2c1e4f454d97) and a video [AI in the Real World Episode 2: Exploring Multi-Agent AI and AutoGen with Chi Wang](https://www.youtube.com/watch?v=RLwyXRVvlNk). :fire: May 13, 2024: [The Economist](https://www.economist.com/science-and-technology/2024/05/13/todays-ai-models-are-impressive-teams-of-them-will-be-formidable) published an article about multi-agent systems (MAS) following a January 2024 interview with [Chi Wang](https://github.com/sonichi). :fire: May 11, 2024: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://openreview.net/pdf?id=uAjxFFing2) received the best paper award at the [ICLR 2024 LLM Agents Workshop](https://llmagents.github.io/). :fire: Apr 26, 2024: [AutoGen.NET](https://microsoft.github.io/autogen-for-net/) is available for .NET developers! Thanks [XiaoYun Zhang](https://www.linkedin.com/in/xiaoyun-zhang-1b531013a/) :fire: Apr 17, 2024: Andrew Ng cited AutoGen in [The Batch newsletter](https://www.deeplearning.ai/the-batch/issue-245/) and [What's next for AI agentic workflows](https://youtu.be/sal78ACtGTc?si=JduUzN_1kDnMq0vF) at Sequoia Capital's AI Ascent (Mar 26). :fire: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://microsoft.github.io/autogen/blog/2024/03/03/AutoGen-Update); 📺[Youtube](https://www.youtube.com/watch?v=j_mtwQiaLGU). :fire: Mar 1, 2024: The first AutoGen multi-agent experiment on the challenging [GAIA](https://huggingface.co/spaces/gaia-benchmark/leaderboard) benchmark achieved No. 1 accuracy in all three levels. <!-- :tada: Jan 30, 2024: AutoGen is highlighted by Peter Lee in Microsoft Research Forum [Keynote](https://t.co/nUBSjPDjqD). --> :tada: Dec 31, 2023: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155) is selected by [TheSequence: My Five Favorite AI Papers of 2023](https://thesequence.substack.com/p/my-five-favorite-ai-papers-of-2023). <!-- :fire: Nov 24: pyautogen [v0.2](https://github.com/microsoft/autogen/releases/tag/v0.2.0) is released with many updates and new features compared to v0.1.1. It switches to using openai-python v1. Please read the [migration guide](https://microsoft.github.io/autogen/docs/Installation#python). --> <!-- :fire: Nov 11: OpenAI's Assistants are available in AutoGen and interoperable with other AutoGen agents! 
Checkout our [blogpost](https://microsoft.github.io/autogen/blog/2023/11/13/OAI-assistants) for details and examples. --> :tada: Nov 8, 2023: AutoGen is selected into [Open100: Top 100 Open Source achievements](https://www.benchcouncil.org/evaluation/opencs/annual.html) 35 days after spinoff from [FLAML](https://github.com/microsoft/FLAML). <!-- :tada: Nov 6, 2023: AutoGen is mentioned by Satya Nadella in a [fireside chat](https://youtu.be/0pLBvgYtv6U). --> <!-- :tada: Nov 1, 2023: AutoGen is the top trending repo on GitHub in October 2023. --> <!-- :tada: Oct 03, 2023: AutoGen spins off from [FLAML](https://github.com/microsoft/FLAML) on GitHub. --> <!-- :tada: Aug 16: Paper about AutoGen on [arxiv](https://arxiv.org/abs/2308.08155). --> :tada: Mar 29, 2023: AutoGen is first created in [FLAML](https://github.com/microsoft/FLAML). <!-- :fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web). :fire: [autogen](https://microsoft.github.io/autogen/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673). :fire: FLAML supports Code-First AutoML & Tuning – Private Preview in [Microsoft Fabric Data Science](https://learn.microsoft.com/en-us/fabric/data-science/). --> </details>
GitHub
autogen
autogen/README.md
autogen
Roadmaps To see what we are working on and what we plan to work on, please check our [Roadmap Issues](https://aka.ms/autogen-roadmap). <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
GitHub
autogen
autogen/README.md
autogen
Quickstart The easiest way to start playing is: 1. Click below to use the GitHub Codespace [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/microsoft/autogen?quickstart=1) 2. Copy OAI_CONFIG_LIST_sample to the ./notebook folder, rename it to OAI_CONFIG_LIST, and set the correct configuration. 3. Start playing with the notebooks! *NOTE*: OAI_CONFIG_LIST_sample lists GPT-4 as the default model, as this represents our current recommendation, and is known to work well with AutoGen. If you use a model other than GPT-4, you may need to revise various system prompts (especially if using weaker models like GPT-3.5-turbo). Moreover, if you use models other than those hosted by OpenAI or Azure, you may incur additional risks related to alignment and safety. Proceed with caution if updating this default. <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
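As a hedged illustration of step 2, the snippet below loads the renamed OAI_CONFIG_LIST from the notebook folder and filters it to a single model. The file location and the filter values are assumptions for this example rather than required settings.

```python
from autogen import config_list_from_json

# Load the configuration prepared in step 2; filter to GPT-4 entries only (example filter).
config_list = config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    file_location="./notebook",          # assumed location, matching the quickstart step
    filter_dict={"model": ["gpt-4"]},
)
print(f"Loaded {len(config_list)} matching configuration(s).")
```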
GitHub
autogen
autogen/README.md
autogen
[Installation](https://microsoft.github.io/autogen/docs/Installation) ### Option 1. Install and Run AutoGen in Docker Find detailed instructions for users [here](https://microsoft.github.io/autogen/docs/installation/Docker#step-1-install-docker), and for developers [here](https://microsoft.github.io/autogen/docs/Contribute#docker-for-development). ### Option 2. Install AutoGen Locally AutoGen requires **Python version >= 3.8, < 3.13**. It can be installed from pip: ```bash pip install autogen-agentchat~=0.2 ``` Minimal dependencies are installed without extra options. You can install extra options based on the feature you need. <!-- For example, use the following to install the dependencies needed by the [`blendsearch`](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function#blendsearch-economical-hyperparameter-optimization-with-blended-search-strategy) option. ```bash pip install "autogen-agentchat[blendsearch]~=0.2" ``` --> Find more options in [Installation](https://microsoft.github.io/autogen/docs/Installation#option-2-install-autogen-locally-using-virtual-environment). <!-- Each of the [`notebook examples`](https://github.com/microsoft/autogen/tree/main/notebook) may require a specific option to be installed. --> Even if you are installing and running AutoGen locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://microsoft.github.io/autogen/docs/FAQ/#code-execution) in docker. Find more instructions and how to change the default behaviour [here](https://microsoft.github.io/autogen/docs/Installation#code-execution-with-docker-(default)). For LLM inference configurations, check the [FAQs](https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints). <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
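To make the Docker-by-default recommendation concrete, here is a minimal sketch of configuring an agent's code execution either inside Docker (recommended) or locally; treat it as an illustration of the `code_execution_config` options rather than a prescribed setup, and see the linked installation docs for the full details.

```python
from autogen import UserProxyAgent

# Recommended: execute generated code inside a Docker container for isolation.
docker_executor = UserProxyAgent(
    "executor_docker",
    code_execution_config={"work_dir": "coding", "use_docker": True},
)

# Opting out of Docker (e.g., when Docker is unavailable) is less isolated; use with care.
local_executor = UserProxyAgent(
    "executor_local",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)
```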
GitHub
autogen
autogen/README.md
autogen
Multi-Agent Conversation Framework AutoGen enables next-gen LLM applications with a generic [multi-agent conversation](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans. By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. Features of this use case include: - **Multi-agent conversations**: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM. - **Customization**: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ. - **Human participation**: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed. For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py), ```python from autogen import AssistantAgent, UserProxyAgent, config_list_from_json # Load LLM inference endpoints from an env variable or a file # See https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints # and OAI_CONFIG_LIST_sample config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST") # You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4', 'api_key': '<your OpenAI API key here>'},] assistant = AssistantAgent("assistant", llm_config={"config_list": config_list}) user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding", "use_docker": False}) # IMPORTANT: set to True to run code in docker, recommended user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.") # This initiates an automated chat between the two agents to solve the task ``` After the repo is cloned, this example can be run with ```bash python test/twoagent.py ``` The figure below shows an example conversation flow with AutoGen. ![Agent Chat Example](https://github.com/microsoft/autogen/blob/main/website/static/img/chat_example.png) Alternatively, the [sample code](https://github.com/microsoft/autogen/blob/main/samples/simple_chat.py) here allows a user to chat with an AutoGen agent in ChatGPT style. Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples#automated-multi-agent-chat) for this feature. <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
GitHub
autogen
autogen/README.md
autogen
Enhanced LLM Inferences AutoGen also helps maximize the utility of expensive LLMs such as ChatGPT and GPT-4. It offers [enhanced LLM inference](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference, and templating. <!-- For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets. ```python # perform tuning for openai<1 config, analysis = autogen.Completion.tune( data=tune_data, metric="success", mode="max", eval_func=eval_func, inference_budget=0.05, optimization_budget=3, num_samples=-1, ) # perform inference for a test instance response = autogen.Completion.create(context=test_instance, **config) ``` Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples#tune-gpt-models) for this feature. --> <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
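The hedged sketch below illustrates the kind of multi-config inference and caching these utilities provide: an `OpenAIWrapper` is given several configurations to try, plus a cache seed for reproducible, cost-saving replays. The model names, placeholder API keys, and the cache-seed value are assumptions for the example.

```python
from autogen import OpenAIWrapper

# Several candidate configurations; the wrapper can fall back to later entries on failure.
config_list = [
    {"model": "gpt-4", "api_key": "<your OpenAI API key here>"},
    {"model": "gpt-3.5-turbo", "api_key": "<your OpenAI API key here>"},
]

client = OpenAIWrapper(config_list=config_list)

# cache_seed enables disk caching, so repeated identical calls reuse the stored response.
response = client.create(
    messages=[{"role": "user", "content": "Briefly explain API unification."}],
    cache_seed=41,
)
print(client.extract_text_or_completion_object(response))
```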
GitHub
autogen
autogen/README.md
autogen
Documentation You can find detailed documentation about AutoGen [here](https://microsoft.github.io/autogen/). In addition, you can find: - [Research](https://microsoft.github.io/autogen/docs/Research), [blogposts](https://microsoft.github.io/autogen/blog) around AutoGen, and [Transparency FAQs](https://github.com/microsoft/autogen/blob/main/TRANSPARENCY_FAQS.md) - [Discord](https://aka.ms/autogen-dc) - [Contributing guide](https://microsoft.github.io/autogen/docs/Contribute) - [Roadmap](https://github.com/orgs/microsoft/projects/989/views/3) <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
GitHub
autogen
autogen/README.md
autogen
Related Papers [AutoGen Studio](https://www.microsoft.com/en-us/research/publication/autogen-studio-a-no-code-developer-tool-for-building-and-debugging-multi-agent-systems/) ``` @inproceedings{dibia2024studio, title={AutoGen Studio: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems}, author={Victor Dibia and Jingya Chen and Gagan Bansal and Suff Syed and Adam Fourney and Erkang (Eric) Zhu and Chi Wang and Saleema Amershi}, year={2024}, booktitle={Pre-Print} } ``` [AutoGen](https://aka.ms/autogen-pdf) ``` @inproceedings{wu2023autogen, title={AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework}, author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Beibin Li and Erkang Zhu and Li Jiang and Xiaoyun Zhang and Shaokun Zhang and Jiale Liu and Ahmed Hassan Awadallah and Ryen W White and Doug Burger and Chi Wang}, year={2024}, booktitle={COLM}, } ``` [EcoOptiGen](https://arxiv.org/abs/2303.04673) ``` @inproceedings{wang2023EcoOptiGen, title={Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference}, author={Chi Wang and Susan Xueqing Liu and Ahmed H. Awadallah}, year={2023}, booktitle={AutoML'23}, } ``` [MathChat](https://arxiv.org/abs/2306.01337) ``` @inproceedings{wu2023empirical, title={An Empirical Study on Challenging Math Problem Solving with GPT-4}, author={Yiran Wu and Feiran Jia and Shaokun Zhang and Hangyu Li and Erkang Zhu and Yue Wang and Yin Tat Lee and Richard Peng and Qingyun Wu and Chi Wang}, year={2023}, booktitle={ArXiv preprint arXiv:2306.01337}, } ``` [AgentOptimizer](https://arxiv.org/pdf/2402.11359) ``` @article{zhang2024training, title={Training Language Model Agents without Modifying Language Models}, author={Zhang, Shaokun and Zhang, Jieyu and Liu, Jiale and Song, Linxin and Wang, Chi and Krishna, Ranjay and Wu, Qingyun}, journal={ICML'24}, year={2024} } ``` [StateFlow](https://arxiv.org/abs/2403.11322) ``` @article{wu2024stateflow, title={StateFlow: Enhancing LLM Task-Solving through State-Driven Workflows}, author={Wu, Yiran and Yue, Tianwei and Zhang, Shaokun and Wang, Chi and Wu, Qingyun}, journal={arXiv preprint arXiv:2403.11322}, year={2024} } ``` <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
GitHub
autogen
autogen/README.md
autogen
Contributing This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit <https://cla.opensource.microsoft.com>. If you are new to GitHub, [here](https://opensource.guide/how-to-contribute/#how-to-submit-a-contribution) is a detailed help source on getting involved with development on GitHub. When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
GitHub
autogen
autogen/README.md
autogen
Contributors Wall <a href="https://github.com/microsoft/autogen/graphs/contributors"> <img src="https://contrib.rocks/image?repo=microsoft/autogen&max=204" /> </a> <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p> # Legal Notices Microsoft and any contributors grant you a license to the Microsoft documentation and other content in this repository under the [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/legalcode), see the [LICENSE](LICENSE) file, and grant you a license to any code in the repository under the [MIT License](https://opensource.org/licenses/MIT), see the [LICENSE-CODE](LICENSE-CODE) file. Microsoft, Windows, Microsoft Azure, and/or other Microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks. Microsoft's general trademark guidelines can be found at http://go.microsoft.com/fwlink/?LinkID=254653. Privacy information can be found at https://go.microsoft.com/fwlink/?LinkId=521839 Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents, or trademarks, whether by implication, estoppel, or otherwise. <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
GitHub
autogen
autogen/SECURITY.md
autogen
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.8 BLOCK -->
GitHub
autogen
autogen/SECURITY.md
autogen
Security Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
GitHub
autogen
autogen/SECURITY.md
autogen
Reporting Security Issues **Please do not report security vulnerabilities through public GitHub issues.** Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey). You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc). Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) * Full paths of source file(s) related to the manifestation of the issue * The location of the affected source code (tag/branch/commit or direct URL) * Any special configuration required to reproduce the issue * Step-by-step instructions to reproduce the issue * Proof-of-concept or exploit code (if possible) * Impact of the issue, including how an attacker might exploit the issue This information will help us triage your report more quickly. If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
GitHub
autogen
autogen/SECURITY.md
autogen
Preferred Languages We prefer all communications to be in English.
GitHub
autogen
autogen/SECURITY.md
autogen
Policy Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). <!-- END MICROSOFT SECURITY.MD BLOCK -->
GitHub
autogen
autogen/CONTRIBUTORS.md
autogen
# Contributors
GitHub
autogen
autogen/CONTRIBUTORS.md
autogen
Special thanks to all the people who help this project: > These individuals dedicate their time and expertise to improve this project. We are deeply grateful for their contributions. | Name | GitHub Handle | Organization | Features | Roadmap Lead | Additional Information | |---|---|---|---|---|---| | Qingyun Wu | [qingyun-wu](https://github.com/qingyun-wu) | Penn State University | all, alt-models, autobuilder | Yes | Available most of the time (US Eastern Time) | | Chi Wang | [sonichi](https://github.com/sonichi) | - | all | Yes | | | Li Jiang | [thinkall](https://github.com/thinkall) | Microsoft | rag, autobuilder, group chat | Yes | [Issue #1657](https://github.com/microsoft/autogen/issues/1657) - Beijing, GMT+8 | | Mark Sze | [marklysze](https://github.com/marklysze) | - | alt-models, group chat | No | Generally available (Sydney, AU time) - Group Chat "auto" speaker selection | | Hrushikesh Dokala | [Hk669](https://github.com/Hk669) | - | alt-models, swebench, logging, rag | No | [Issue #2946](https://github.com/microsoft/autogen/issues/2946), [Pull Request #2933](https://github.com/microsoft/autogen/pull/2933) - Available most of the time (India, GMT+5:30) | | Jiale Liu | [LeoLjl](https://github.com/LeoLjl) | Penn State University | autobuild, group chat | No | | | Shaokun Zhang | [skzhang1](https://github.com/skzhang1) | Penn State University | AgentOptimizer, Teachability | Yes | [Issue #521](https://github.com/microsoft/autogen/issues/521) | | Rajan Chari | [rajan-chari](https://github.com/rajan-chari) | Microsoft Research | CAP, Survey of other frameworks | No | | | Victor Dibia | [victordibia](https://github.com/victordibia) | Microsoft Research | autogenstudio | Yes | [Issue #737](https://github.com/microsoft/autogen/issues/737) | | Yixuan Zhai | [randombet](https://github.com/randombet) | Meta | group chat, sequential_chats, rag | No | | | Xiaoyun Zhang | [LittleLittleCloud](https://github.com/LittleLittleCloud) | Microsoft | AutoGen.Net, group chat | Yes | [Backlog - AutoGen.Net](https://github.com/microsoft/autogen/issues) - Available most of the time (PST) | | Yiran Wu | [yiranwu0](https://github.com/yiranwu0) | Penn State University | alt-models, group chat, logging | Yes | | | Beibin Li | [BeibinLi](https://github.com/BeibinLi) | Microsoft Research | alt-models | Yes | | | Gagan Bansal | [gagb](https://github.com/gagb) | Microsoft Research | All | | | | Adam Fourney | [afourney](https://github.com/afourney) | Microsoft Research | Complex Tasks | | | | Ricky Loynd | [rickyloynd-microsoft](https://github.com/rickyloynd-microsoft) | Microsoft Research | Teachability | | | | Eric Zhu | [ekzhu](https://github.com/ekzhu) | Microsoft Research | All, Infra | | | | Jack Gerrits | [jackgerrits](https://github.com/jackgerrits) | Microsoft Research | All, Infra | | | | David Luong | [DavidLuong98](https://github.com/DavidLuong98) | Microsoft | AutoGen.Net | | | | Davor Runje | [davorrunje](https://github.com/davorrunje) | airt.ai | Tool calling, IO | | Available most of the time (Central European Time) | | Friederike Niedtner | [Friderike](https://www.microsoft.com/en-us/research/people/fniedtner/) | Microsoft Research | PM | | | | Rafah Hosn | [Rafah](https://www.microsoft.com/en-us/research/people/raaboulh/) | Microsoft Research | PM | | | | Robin Moeur | [Robin](https://www.linkedin.com/in/rmoeur/) | Microsoft Research | PM | | | | Jingya Chen | [jingyachen](https://github.com/JingyaChen) | Microsoft | UX Design, AutoGen Studio | | | | Suff Syed | 
[suffsyed](https://github.com/suffsyed) | Microsoft | UX Design, AutoGen Studio | | |
GitHub
autogen
autogen/CONTRIBUTORS.md
autogen
I would like to join this list. How can I help the project? > We're always looking for new contributors to join our team and help improve the project. For more information, please refer to our [CONTRIBUTING](https://microsoft.github.io/autogen/docs/contributor-guide/contributing) guide.
GitHub
autogen
autogen/CONTRIBUTORS.md
autogen
Are you missing from this list? > Please open a PR to help us fix this.
GitHub
autogen
autogen/CONTRIBUTORS.md
autogen
Acknowledgements This template was adapted from [GitHub Template Guide](https://github.com/cezaraugusto/github-template-guidelines/blob/master/.github/CONTRIBUTORS.md) by [cezaraugusto](https://github.com/cezaraugusto).
GitHub
autogen
autogen/.devcontainer/README.md
autogen
# Dockerfiles and Devcontainer Configurations for AutoGen Welcome to the `.devcontainer` directory! Here you'll find Dockerfiles and devcontainer configurations that are essential for setting up your AutoGen development environment. Each Dockerfile is tailored for different use cases and requirements. Below is a brief overview of each and how you can utilize them effectively. These configurations can be used with Codespaces and locally.
GitHub
autogen
autogen/.devcontainer/README.md
autogen
Dockerfile Descriptions ### base - **Purpose**: This Dockerfile, i.e., `./Dockerfile`, is designed for basic setups. It includes common Python libraries and essential dependencies required for general usage of AutoGen. - **Usage**: Ideal for those just starting with AutoGen or for general-purpose applications. - **Building the Image**: Run `docker build -f ./Dockerfile -t autogen_base_img .` in this directory. - **Using with Codespaces**: `Code > Codespaces > Click on +`. By default, + creates a Codespace on the current branch. ### full - **Purpose**: This Dockerfile, i.e., `./full/Dockerfile`, is for advanced features. It includes additional dependencies and is configured for more complex or feature-rich AutoGen applications. - **Usage**: Suited for advanced users who need the full range of AutoGen's capabilities. - **Building the Image**: Execute `docker build -f full/Dockerfile -t autogen_full_img .`. - **Using with Codespaces**: `Code > Codespaces > Click on ... > New with options > Choose "full" as devcontainer configuration`. This image may require a Codespace with at least 64GB of disk space. ### dev - **Purpose**: Tailored for AutoGen project developers, this Dockerfile, i.e., `./dev/Dockerfile`, includes tools and configurations aiding in development and contribution. - **Usage**: Recommended for developers who are contributing to the AutoGen project. - **Building the Image**: Run `docker build -f dev/Dockerfile -t autogen_dev_img .`. - **Using with Codespaces**: `Code > Codespaces > Click on ... > New with options > Choose "dev" as devcontainer configuration`. This image may require a Codespace with at least 64GB of disk space. - **Before using**: We highly encourage all potential contributors to read the [AutoGen Contributing](https://microsoft.github.io/autogen/docs/Contribute) page prior to submitting any pull requests. ### studio - **Purpose**: This Dockerfile, i.e., `./studio/Dockerfile`, includes the tools and configurations for working with AutoGen Studio. - **Usage**: Recommended for developers who are contributing to the AutoGen project. - **Building the Image**: Run `docker build -f studio/Dockerfile -t autogen_studio_img .`. - **Using with Codespaces**: `Code > Codespaces > Click on ... > New with options > Choose "studio" as devcontainer configuration`. - **Before using**: We highly encourage all potential contributors to read the [AutoGen Contributing](https://microsoft.github.io/autogen/docs/Contribute) page prior to submitting any pull requests.
GitHub
autogen
autogen/.devcontainer/README.md
autogen
Customizing Dockerfiles Feel free to modify these Dockerfiles for your specific project needs. Here are some common customizations: - **Adding New Dependencies**: If your project requires additional Python packages, you can add them using the `RUN pip install` command. - **Changing the Base Image**: You may change the base image (e.g., from a Python image to an Ubuntu image) to suit your project's requirements. - **Changing the Python Version**: If you need a Python version other than 3.11, just update the first line of each of the Dockerfiles, e.g., change `FROM python:3.11-slim-bookworm` to `FROM python:3.10-slim-bookworm`. - **Setting Environment Variables**: Add environment variables using the `ENV` command for any application-specific configurations. We have pre-staged the line needed to inject your OpenAI key into the Docker environment as an environment variable; others can be staged in the same way. Just uncomment the line, i.e., change `# ENV OPENAI_API_KEY="{OpenAI-API-Key}"` to `ENV OPENAI_API_KEY="{OpenAI-API-Key}"`. - **Need a less "advanced" AutoGen build?**: If `./full/Dockerfile` is too much but you need more than the base image, update this line in the Dockerfile to install just what you need, e.g., change `RUN pip install autogen-agentchat[teachable,lmm,retrievechat,mathchat,blendsearch]~=0.2 autogenra` to `RUN pip install autogen-agentchat[retrievechat,blendsearch]~=0.2 autogenra`. - **Can't dev without your favorite CLI tool?**: If you need particular OS tools installed in your Docker container, you can add those packages right after the `sudo` entry in the `./base/Dockerfile` and `./full/Dockerfile` files. In the example below we are installing net-tools and vim into the environment. ```bash RUN apt-get update \ && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \ software-properties-common sudo net-tools vim \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* ``` ### Managing Your Docker Environment After customizing your Dockerfile, build the Docker image using the `docker build` command as shown above. To run a container based on your new image, use: ```bash docker run -it -v $(pwd)/your_app:/app your_image_name ``` Replace `your_app` with your application directory and `your_image_name` with the name of the image you built. #### Closing for the Day - **Exit the container**: Type `exit`. - **Stop the container**: Use `docker stop {application_project_name}`. #### Resuming Work - **Restart the container**: Use `docker start {application_project_name}`. - **Access the container**: Execute `sudo docker exec -it {application_project_name} bash`. - **Reactivate the environment**: Run `source /usr/src/app/autogen_env/bin/activate`. ### Useful Docker Commands - **View running containers**: `docker ps -a`. - **View Docker images**: `docker images`. - **Restart container setup**: Stop (`docker stop my_container`), remove the container (`docker rm my_container`), and remove the image (`docker rmi my_image:latest`). #### Troubleshooting Common Issues - Check Docker daemon, port conflicts, and permissions issues. #### Additional Resources For more information on Docker usage and best practices, refer to the [official Docker documentation](https://docs.docker.com).
GitHub
autogen
autogen/autogen/agentchat/contrib/agent_eval/README.md
autogen
Agents for running the [AgentEval](https://microsoft.github.io/autogen/blog/2023/11/20/AgentEval/) pipeline. AgentEval is a process for evaluating an LLM-based system's performance on a given task. When given a task to evaluate and a few example runs, the critic and subcritic agents create evaluation criteria for evaluating a system's solution. Once the criteria have been created, the quantifier agent can evaluate subsequent task solutions based on the generated criteria. For more information see: [AgentEval Integration Roadmap](https://github.com/microsoft/autogen/issues/2162) See our [blog post](https://microsoft.github.io/autogen/blog/2024/06/21/AgentEval) for usage examples and general explanations.
GitHub
autogen
autogen/website/README.md
autogen
# Website This website is built using [Docusaurus 3](https://docusaurus.io/), a modern static website generator.
GitHub
autogen
autogen/website/README.md
autogen
Prerequisites To build and test documentation locally, begin by downloading and installing [Node.js](https://nodejs.org/en/download/), and then installing [Yarn](https://classic.yarnpkg.com/en/). On Windows, you can install via the npm package manager (npm) which comes bundled with Node.js: ```console npm install --global yarn ```
GitHub
autogen
autogen/website/README.md
autogen
Installation ```console pip install pydoc-markdown pyyaml colored cd website yarn install ``` ### Install Quarto `quarto` is used to render notebooks. Install it [here](https://github.com/quarto-dev/quarto-cli/releases). > Note: Ensure that your `quarto` version is `1.5.23` or higher.
GitHub
autogen
autogen/website/README.md
autogen
Local Development Navigate to the `website` folder and run: ```console pydoc-markdown python ./process_notebooks.py render yarn start ``` This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
GitHub
autogen
autogen/website/docs/Migration-Guide.md
autogen
# Migration Guide
GitHub
autogen
autogen/website/docs/Migration-Guide.md
autogen
Migrating to 0.2 openai v1 is a total rewrite of the library with many breaking changes. For example, inference requires instantiating a client instead of using a global class method. Therefore, some changes are required for users of `pyautogen<0.2`. - `api_base` -> `base_url`, `request_timeout` -> `timeout` in `llm_config` and `config_list`. `max_retry_period` and `retry_wait_time` are deprecated. `max_retries` can be set for each client. - MathChat is unsupported until it is tested in a future release. - `autogen.Completion` and `autogen.ChatCompletion` are deprecated. The essential functionalities are moved to `autogen.OpenAIWrapper`: ```python from autogen import OpenAIWrapper client = OpenAIWrapper(config_list=config_list) response = client.create(messages=[{"role": "user", "content": "2+2="}]) print(client.extract_text_or_completion_object(response)) ``` - Inference parameter tuning and inference logging features are updated: ```python import autogen.runtime_logging # Start logging autogen.runtime_logging.start() # Stop logging autogen.runtime_logging.stop() ``` Check out the [Logging documentation](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#logging) and [Logging example notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_logging.ipynb) to learn more. Inference parameter tuning can be done via [`flaml.tune`](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function). - `seed` in autogen is renamed to `cache_seed` to accommodate the newly added `seed` param in the openai chat completion api. `use_cache` is removed as a kwarg in `OpenAIWrapper.create()`, as caching is now automatically decided by `cache_seed` (int | None). The difference between autogen's `cache_seed` and openai's `seed` is that: - autogen uses a local disk cache to guarantee that exactly the same output is produced for the same input, and when the cache is hit, no openai api call is made. - openai's `seed` is best-effort deterministic sampling with no guarantee of determinism. When using openai's `seed` with `cache_seed` set to None, even for the same input, an openai api call will be made and there is no guarantee of getting exactly the same output.
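To illustrate the `cache_seed` behavior described above in an agent setting, here is a small, hedged sketch: a fixed `cache_seed` replays cached results across runs, while `cache_seed=None` disables the disk cache. The seed values and agent names are arbitrary examples, and a valid `OAI_CONFIG_LIST` is assumed.

```python
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json

config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")

# With a fixed cache_seed, identical requests are served from the local disk cache.
cached_assistant = AssistantAgent(
    "assistant",
    llm_config={"config_list": config_list, "cache_seed": 42},
)

# With cache_seed=None, every request goes to the API (no caching, no replay guarantee).
uncached_assistant = AssistantAgent(
    "assistant_uncached",
    llm_config={"config_list": config_list, "cache_seed": None},
)

user_proxy = UserProxyAgent("user_proxy", human_input_mode="NEVER",
                            code_execution_config=False)
user_proxy.initiate_chat(cached_assistant, message="2+2=")
```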
GitHub
autogen
autogen/website/docs/Examples.md
autogen
# Examples
GitHub
autogen
autogen/website/docs/Examples.md
autogen
Automated Multi Agent Chat AutoGen offers conversable agents powered by LLMs, tools, or humans, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation via multi-agent conversation. Please find documentation about this feature [here](/docs/Use-Cases/agent_chat). Links to notebook examples: ### Code Generation, Execution, and Debugging - Automated Task Solving with Code Generation, Execution & Debugging - [View Notebook](/docs/notebooks/agentchat_auto_feedback_from_code_execution) - Automated Code Generation and Question Answering with Retrieval Augmented Agents - [View Notebook](/docs/notebooks/agentchat_RetrieveChat) - Automated Code Generation and Question Answering with [Qdrant](https://qdrant.tech/) based Retrieval Augmented Agents - [View Notebook](/docs/notebooks/agentchat_RetrieveChat_qdrant) ### Multi-Agent Collaboration (>3 Agents) - Automated Task Solving by Group Chat (with 3 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat) - Automated Data Visualization by Group Chat (with 3 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_vis) - Automated Complex Task Solving by Group Chat (with 6 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_research) - Automated Task Solving with Coding & Planning Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_planning.ipynb) - Automated Task Solving with transition paths specified in a graph - [View Notebook](https://microsoft.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine) - Running a group chat as an inner-monologue via the SocietyOfMindAgent - [View Notebook](/docs/notebooks/agentchat_society_of_mind) - Running a group chat with a custom speaker selection function - [View Notebook](/docs/notebooks/agentchat_groupchat_customized) ### Sequential Multi-Agent Chats - Solving Multiple Tasks in a Sequence of Chats Initiated by a Single Agent - [View Notebook](/docs/notebooks/agentchat_multi_task_chats) - Async-solving Multiple Tasks in a Sequence of Chats Initiated by a Single Agent - [View Notebook](/docs/notebooks/agentchat_multi_task_async_chats) - Solving Multiple Tasks in a Sequence of Chats Initiated by Different Agents - [View Notebook](/docs/notebooks/agentchats_sequential_chats) ### Nested Chats - Solving Complex Tasks with Nested Chats - [View Notebook](/docs/notebooks/agentchat_nestedchat) - Solving Complex Tasks with A Sequence of Nested Chats - [View Notebook](/docs/notebooks/agentchat_nested_sequential_chats) - OptiGuide for Solving a Supply Chain Optimization Problem with Nested Chats with a Coding Agent and a Safeguard Agent - [View Notebook](/docs/notebooks/agentchat_nestedchat_optiguide) - Conversational Chess with Nested Chats and Tool Use - [View Notebook](/docs/notebooks/agentchat_nested_chats_chess) ### Applications - Automated Continual Learning from New Data - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_stream.ipynb) - [OptiGuide](https://github.com/microsoft/optiguide) - Coding, Tool Using, Safeguarding & Question Answering for Supply Chain Optimization - [AutoAnny](https://github.com/microsoft/autogen/tree/main/samples/apps/auto-anny) - A Discord bot built using AutoGen ### Tool Use - **Web Search**: Solve Tasks Requiring Web Info - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_web_info.ipynb) 
- Use Provided Tools as Functions - [View Notebook](/docs/notebooks/agentchat_function_call_currency_calculator) - Use Tools via Sync and Async Function Calling - [View Notebook](/docs/notebooks/agentchat_function_call_async) - Task Solving with Langchain Provided Tools as Functions - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_langchain.ipynb) - **RAG**: Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_RAG) - Function Inception: Enable AutoGen agents to update/remove functions during conversations. - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_inception_function.ipynb) - Agent Chat with Whisper - [View Notebook](/docs/notebooks/agentchat_video_transcript_translate_with_whisper) - Constrained Responses via Guidance - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_guidance.ipynb) - Browse the Web with Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_surfer.ipynb) - **SQL**: Natural Language Text to SQL Query using the [Spider](https://yale-lily.github.io/spider) Text-to-SQL Benchmark - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_sql_spider.ipynb) - **Web Scraping**: Web Scraping with Apify - [View Notebook](/docs/notebooks/agentchat_webscraping_with_apify) - **Write a software app, task by task, with specially designed functions.** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call_code_writing.ipynb). ### Human Involvement - Simple example in ChatGPT style [View example](https://github.com/microsoft/autogen/blob/main/samples/simple_chat.py) - Auto Code Generation, Execution, Debugging and **Human Feedback** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_human_feedback.ipynb) - Automated Task Solving with GPT-4 + **Multiple Human Users** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_two_users.ipynb) - Agent Chat with **Async Human Inputs** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/Async_human_input.ipynb) ### Agent Teaching and Learning - Teach Agents New Skills & Reuse via Automated Chat - [View Notebook](/docs/notebooks/agentchat_teaching) - Teach Agents New Facts, User Preferences and Skills Beyond Coding - [View Notebook](/docs/notebooks/agentchat_teachability) - Teach OpenAI Assistants Through GPTAssistantAgent - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachable_oai_assistants.ipynb) - Agent Optimizer: Train Agents in an Agentic Way - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentoptimizer.ipynb) ### Multi-Agent Chat with OpenAI Assistants in the loop - Hello-World Chat with OpenAi Assistant in AutoGen - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_twoagents_basic.ipynb) - Chat with OpenAI Assistant using Function Call - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_function_call.ipynb) - Chat with OpenAI Assistant with Code Interpreter - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_code_interpreter.ipynb) - Chat with OpenAI Assistant with Retrieval Augmentation - [View 
Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_retrieval.ipynb) - OpenAI Assistant in a Group Chat - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_groupchat.ipynb) - GPTAssistantAgent based Multi-Agent Tool Use - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/gpt_assistant_agent_function_call.ipynb) ### Non-OpenAI Models - Conversational Chess using non-OpenAI Models - [View Notebook](/docs/notebooks/agentchat_nested_chats_chess_altmodels) ### Multimodal Agent - Multimodal Agent Chat with DALLE and GPT-4V - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_dalle_and_gpt4v.ipynb) - Multimodal Agent Chat with Llava - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb) - Multimodal Agent Chat with GPT-4V - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb) ### Long Context Handling <!-- - Conversations with Chat History Compression Enabled - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_compression.ipynb) --> - Long Context Handling as a Capability - [View Notebook](/docs/notebooks/agentchat_transform_messages) ### Evaluation and Assessment - AgentEval: A Multi-Agent System for Assessing Utility of LLM-powered Applications - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb) ### Automatic Agent Building - Automatically Build Multi-agent System with AgentBuilder - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_basic.ipynb) - Automatically Build Multi-agent System from Agent Library - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_agent_library.ipynb) ### Observability - Track LLM calls, tool usage, actions and errors using AgentOps - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentops.ipynb)
GitHub
autogen
autogen/website/docs/Examples.md
autogen
Enhanced Inferences ### Utilities - API Unification - [View Documentation with Code Example](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference/#api-unification) - Utility Functions to Help Manage API Configurations Effectively - [View Notebook](/docs/topics/llm_configuration) - Cost Calculation - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_cost_token_tracking.ipynb) ### Inference Hyperparameter Tuning AutoGen offers a cost-effective hyperparameter optimization technique [EcoOptiGen](https://arxiv.org/abs/2303.04673) for tuning Large Language Models. The research study finds that tuning hyperparameters can significantly improve their utility. Please find documentation about this feature [here](/docs/Use-Cases/enhanced_inference). Links to notebook examples: * [Optimize for Code Generation](https://github.com/microsoft/autogen/blob/main/notebook/oai_completion.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/oai_completion.ipynb) * [Optimize for Math](https://github.com/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb)
GitHub
autogen
autogen/website/docs/Research.md
autogen
# Research For technical details, please check our technical report and research publications. * [AutoGen Studio: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems](https://www.microsoft.com/en-us/research/publication/autogen-studio-a-no-code-developer-tool-for-building-and-debugging-multi-agent-systems/) ```bibtex @inproceedings{dibia2024studio, title={AutoGen Studio: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems}, author={Victor Dibia and Jingya Chen and Gagan Bansal and Suff Syed and Adam Fourney and Erkang (Eric) Zhu and Chi Wang and Saleema Amershi}, year={2024}, booktitle={Pre-Print} } ``` * [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://aka.ms/autogen-pdf). ```bibtex @inproceedings{wu2024autogen, title={AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework}, author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Beibin Li and Erkang Zhu and Li Jiang and Xiaoyun Zhang and Shaokun Zhang and Jiale Liu and Ahmed Hassan Awadallah and Ryen W White and Doug Burger and Chi Wang}, year={2024}, booktitle={COLM} } ``` * [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673). Chi Wang, Susan Xueqing Liu, Ahmed H. Awadallah. AutoML'23. ```bibtex @inproceedings{wang2023EcoOptiGen, title={Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference}, author={Chi Wang and Susan Xueqing Liu and Ahmed H. Awadallah}, year={2023}, booktitle={AutoML'23}, } ``` * [An Empirical Study on Challenging Math Problem Solving with GPT-4](https://arxiv.org/abs/2306.01337). Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang. ArXiv preprint arXiv:2306.01337 (2023). ```bibtex @inproceedings{wu2023empirical, title={An Empirical Study on Challenging Math Problem Solving with GPT-4}, author={Yiran Wu and Feiran Jia and Shaokun Zhang and Hangyu Li and Erkang Zhu and Yue Wang and Yin Tat Lee and Richard Peng and Qingyun Wu and Chi Wang}, year={2023}, booktitle={ArXiv preprint arXiv:2306.01337}, } ``` * [EcoAssistant: Using LLM Assistant More Affordably and Accurately](https://arxiv.org/abs/2310.03046). Jieyu Zhang, Ranjay Krishna, Ahmed H. Awadallah, Chi Wang. ArXiv preprint arXiv:2310.03046 (2023). ```bibtex @inproceedings{zhang2023ecoassistant, title={EcoAssistant: Using LLM Assistant More Affordably and Accurately}, author={Zhang, Jieyu and Krishna, Ranjay and Awadallah, Ahmed H and Wang, Chi}, year={2023}, booktitle={ArXiv preprint arXiv:2310.03046}, } ``` * [Towards better Human-Agent Alignment: Assessing Task Utility in LLM-Powered Applications](https://arxiv.org/abs/2402.09015). Negar Arabzadeh, Julia Kiseleva, Qingyun Wu, Chi Wang, Ahmed Awadallah, Victor Dibia, Adam Fourney, Charles Clarke. ArXiv preprint arXiv:2402.09015 (2024). ```bibtex @misc{Kiseleva2024agenteval, title={Towards better Human-Agent Alignment: Assessing Task Utility in LLM-Powered Applications}, author={Negar Arabzadeh and Julia Kiseleva and Qingyun Wu and Chi Wang and Ahmed Awadallah and Victor Dibia and Adam Fourney and Charles Clarke}, year={2024}, eprint={2402.09015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` * [Training Language Model Agents without Modifying Language Models](https://arxiv.org/abs/2402.11359). Shaokun Zhang, Jieyu Zhang, Jiale Liu, Linxin Song, Chi Wang, Ranjay Krishna, Qingyun Wu. ICML'24. 
```bibtex @misc{zhang2024agentoptimizer, title={Training Language Model Agents without Modifying Language Models}, author={Shaokun Zhang and Jieyu Zhang and Jiale Liu and Linxin Song and Chi Wang and Ranjay Krishna and Qingyun Wu}, year={2024}, booktitle={ICML'24}, } ``` * [AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks](https://arxiv.org/abs/2403.04783). Yifan Zeng, Yiran Wu, Xiao Zhang, Huazheng Wang, Qingyun Wu. ArXiv preprint arXiv:2403.04783 (2024). ```bibtex @misc{zeng2024autodefense, title={AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks}, author={Yifan Zeng and Yiran Wu and Xiao Zhang and Huazheng Wang and Qingyun Wu}, year={2024}, eprint={2403.04783}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` * [StateFlow: Enhancing LLM Task-Solving through State-Driven Workflows](https://arxiv.org/abs/2403.11322). Yiran Wu, Tianwei Yue, Shaokun Zhang, Chi Wang, Qingyun Wu. ArXiv preprint arXiv:2403.11322 (2024). ```bibtex @misc{wu2024stateflow, title={StateFlow: Enhancing LLM Task-Solving through State-Driven Workflows}, author={Yiran Wu and Tianwei Yue and Shaokun Zhang and Chi Wang and Qingyun Wu}, year={2024}, eprint={2403.11322}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
GitHub
autogen
autogen/website/docs/topics/llm-caching.md
autogen
# LLM Caching AutoGen supports caching API requests so that they can be reused when the same request is issued. This is useful when repeating or continuing experiments for reproducibility and cost saving. Since version [`0.2.8`](https://github.com/microsoft/autogen/releases/tag/v0.2.8), a configurable context manager allows you to easily configure LLM cache, using either [`DiskCache`](/docs/reference/cache/disk_cache#diskcache), [`RedisCache`](/docs/reference/cache/redis_cache#rediscache), or Cosmos DB Cache. All agents inside the context manager will use the same cache. ```python from autogen import Cache # Use Redis as cache with Cache.redis(redis_url="redis://localhost:6379/0") as cache: user.initiate_chat(assistant, message=coding_task, cache=cache) # Use DiskCache as cache with Cache.disk() as cache: user.initiate_chat(assistant, message=coding_task, cache=cache) # Use Azure Cosmos DB as cache with Cache.cosmos_db(connection_string="your_connection_string", database_id="your_database_id", container_id="your_container_id") as cache: user.initiate_chat(assistant, message=coding_task, cache=cache) ``` The cache can also be passed directly to the model client's create call. ```python client = OpenAIWrapper(...) with Cache.disk() as cache: client.create(..., cache=cache) ```
GitHub
autogen
autogen/website/docs/topics/llm-caching.md
autogen
Controlling the seed You can vary the `cache_seed` parameter to get different LLM output while still using cache. ```python # Setting the cache_seed to 1 will use a different cache from the default one # and you will see different output. with Cache.disk(cache_seed=1) as cache: user.initiate_chat(assistant, message=coding_task, cache=cache) ```
GitHub
autogen
autogen/website/docs/topics/llm-caching.md
autogen
Cache path By default [`DiskCache`](/docs/reference/cache/disk_cache#diskcache) uses `.cache` for storage. To change the cache directory, set `cache_path_root`: ```python with Cache.disk(cache_path_root="/tmp/autogen_cache") as cache: user.initiate_chat(assistant, message=coding_task, cache=cache) ```
GitHub
autogen
autogen/website/docs/topics/llm-caching.md
autogen
Disabling cache For backward compatibility, [`DiskCache`](/docs/reference/cache/disk_cache#diskcache) is on by default with `cache_seed` set to 41. To disable caching completely, set `cache_seed` to `None` in the `llm_config` of the agent. ```python assistant = AssistantAgent( "coding_agent", llm_config={ "cache_seed": None, "config_list": OAI_CONFIG_LIST, "max_tokens": 1024, }, ) ```
GitHub
autogen
autogen/website/docs/topics/llm-caching.md
autogen
Difference between `cache_seed` and OpenAI's `seed` parameter OpenAI v1.1 introduced a new parameter `seed`. The difference is that AutoGen's `cache_seed` controls an explicit request cache, which guarantees that exactly the same output is returned for the same input, and no OpenAI API call is made on a cache hit. OpenAI's `seed` enables best-effort deterministic sampling with no guarantee of determinism. When using OpenAI's `seed` with `cache_seed` set to `None`, an OpenAI API call is made for every request, even for the same input, and there is no guarantee of getting exactly the same output.
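The difference is easiest to see in code. The sketch below is illustrative only: it assumes a valid `config_list` is already defined, and that extra keyword arguments such as `seed` are forwarded by `OpenAIWrapper.create` to the OpenAI API.

```python
from autogen import Cache, OpenAIWrapper

client = OpenAIWrapper(config_list=config_list)
prompt = [{"role": "user", "content": "Give me a random number between 1 and 100."}]

# AutoGen cache: the second call is answered from the cache, so no API request
# is made and the two responses are guaranteed to be identical.
with Cache.disk(cache_seed=7) as cache:
    first = client.create(messages=prompt, cache=cache)
    second = client.create(messages=prompt, cache=cache)

# OpenAI seed only: every call still hits the API; sampling is reproducible on a
# best-effort basis, so identical output is likely but not guaranteed.
third = client.create(messages=prompt, seed=123)
fourth = client.create(messages=prompt, seed=123)
```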
GitHub
autogen
autogen/website/docs/topics/retrieval_augmentation.md
autogen
# Retrieval Augmentation Retrieval Augmented Generation (RAG) is a powerful technique that combines language models with external knowledge retrieval to improve the quality and relevance of generated responses. One way to realize RAG in AutoGen is to construct agent chats with `AssistantAgent` and `RetrieveUserProxyAgent` classes.
GitHub
autogen
autogen/website/docs/topics/retrieval_augmentation.md
autogen
Example Setup: RAG with Retrieval Augmented Agents The following is an example setup demonstrating how to create retrieval augmented agents in AutoGen: ### Step 1. Create an instance of `AssistantAgent` and `RetrieveUserProxyAgent`. Here `RetrieveUserProxyAgent` instance acts as a proxy agent that retrieves relevant information based on the user's input. Refer to the [doc](https://microsoft.github.io/autogen/docs/reference/agentchat/contrib/retrieve_user_proxy_agent) for more information on the detailed configurations. ```python assistant = AssistantAgent( name="assistant", system_message="You are a helpful assistant.", llm_config={ "timeout": 600, "cache_seed": 42, "config_list": config_list, }, ) ragproxyagent = RetrieveUserProxyAgent( name="ragproxyagent", human_input_mode="NEVER", max_consecutive_auto_reply=3, retrieve_config={ "task": "code", "docs_path": [ "https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Examples/Integrate%20-%20Spark.md", "https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Research.md", os.path.join(os.path.abspath(""), "..", "website", "docs"), ], "custom_text_types": ["mdx"], "chunk_token_size": 2000, "model": config_list[0]["model"], "client": chromadb.PersistentClient(path="/tmp/chromadb"), "embedding_model": "all-mpnet-base-v2", "get_or_create": True, # set to False if you don't want to reuse an existing collection, but you'll need to remove the collection manually }, code_execution_config=False, # set to False if you don't want to execute the code ) ``` ### Step 2. Initiating Agent Chat with Retrieval Augmentation Once the retrieval augmented agents are set up, you can initiate a chat with retrieval augmentation using the following code: ```python code_problem = "How can I use FLAML to perform a classification task and use spark to do parallel training. Train 30 seconds and force cancel jobs if time limit is reached." ragproxyagent.initiate_chat( assistant, message=ragproxyagent.message_generator, problem=code_problem, search_string="spark" ) # search_string is used as an extra filter for the embeddings search, in this case, we only want to search documents that contain "spark". ``` *You'll need to install `chromadb<=0.5.0` if you see issue like [#3551](https://github.com/microsoft/autogen/issues/3551).*
GitHub
autogen
autogen/website/docs/topics/retrieval_augmentation.md
autogen
Example Setup: RAG with Retrieval Augmented Agents with PGVector The following is an example setup demonstrating how to create retrieval augmented agents in AutoGen: ### Step 1. Create an instance of `AssistantAgent` and `RetrieveUserProxyAgent`. Here `RetrieveUserProxyAgent` instance acts as a proxy agent that retrieves relevant information based on the user's input. Specify the connection_string, or the host, port, database, username, and password in the db_config. ```python assistant = AssistantAgent( name="assistant", system_message="You are a helpful assistant.", llm_config={ "timeout": 600, "cache_seed": 42, "config_list": config_list, }, ) ragproxyagent = RetrieveUserProxyAgent( name="ragproxyagent", human_input_mode="NEVER", max_consecutive_auto_reply=3, retrieve_config={ "task": "code", "docs_path": [ "https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Examples/Integrate%20-%20Spark.md", "https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Research.md", os.path.join(os.path.abspath(""), "..", "website", "docs"), ], "vector_db": "pgvector", "collection_name": "autogen_docs", "db_config": { "connection_string": "postgresql://testuser:testpwd@localhost:5432/vectordb", # Optional - connect to an external vector database # "host": None, # Optional vector database host # "port": None, # Optional vector database port # "database": None, # Optional vector database name # "username": None, # Optional vector database username # "password": None, # Optional vector database password }, "custom_text_types": ["mdx"], "chunk_token_size": 2000, "model": config_list[0]["model"], "get_or_create": True, }, code_execution_config=False, ) ``` ### Step 2. Initiating Agent Chat with Retrieval Augmentation Once the retrieval augmented agents are set up, you can initiate a chat with retrieval augmentation using the following code: ```python code_problem = "How can I use FLAML to perform a classification task and use spark to do parallel training. Train 30 seconds and force cancel jobs if time limit is reached." ragproxyagent.initiate_chat( assistant, message=ragproxyagent.message_generator, problem=code_problem, search_string="spark" ) # search_string is used as an extra filter for the embeddings search, in this case, we only want to search documents that contain "spark". ```
GitHub
autogen
autogen/website/docs/topics/retrieval_augmentation.md
autogen
Online Demo [Retrieval-Augmented Chat Demo on Hugging Face](https://huggingface.co/spaces/thinkall/autogen-demos)
GitHub
autogen
autogen/website/docs/topics/retrieval_augmentation.md
autogen
More Examples and Notebooks For more detailed examples and notebooks showcasing the usage of retrieval augmented agents in AutoGen, refer to the following: - Automated Code Generation and Question Answering with Retrieval Augmented Agents - [View Notebook](/docs/notebooks/agentchat_RetrieveChat) - Automated Code Generation and Question Answering with [PGVector](https://github.com/pgvector/pgvector) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_pgvector.ipynb) - Automated Code Generation and Question Answering with [Qdrant](https://qdrant.tech/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_qdrant.ipynb) - Automated Code Generation and Question Answering with [MongoDB Atlas](https://www.mongodb.com/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_mongodb.ipynb) - Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_retrieval.ipynb) - **RAG**: Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_RAG)
GitHub
autogen
autogen/website/docs/topics/retrieval_augmentation.md
autogen
Roadmap Explore our detailed roadmap [here](https://github.com/microsoft/autogen/issues/1657) for planned further advancements around RAG. Your contributions, feedback, and use cases are highly appreciated! We invite you to engage with us and play a pivotal role in the development of this impactful feature.
GitHub
autogen
autogen/website/docs/topics/llm-observability.md
autogen
# Agent Observability AutoGen supports advanced LLM agent observability and monitoring through built-in logging and partner providers.
GitHub
autogen
autogen/website/docs/topics/llm-observability.md
autogen
AutoGen Observability Integrations ### Built-In Logging AutoGen's SQLite and File Logger - [Tutorial Notebook](/docs/notebooks/agentchat_logging) ### Full-Service Partner Integrations AutoGen partners with [AgentOps](https://agentops.ai) to provide multi-agent tracking, metrics, and monitoring - [Tutorial Notebook](/docs/notebooks/agentchat_agentops)
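For a quick sense of the built-in logging, here is a minimal sketch based on the runtime logging API shown in the tutorial notebook; the database file name, `config_list`, and the agents are placeholders and may need adjusting for your setup.

```python
import autogen
from autogen import AssistantAgent, UserProxyAgent

# Start logging: subsequent LLM calls in this process are recorded in a SQLite database.
logging_session_id = autogen.runtime_logging.start(config={"dbname": "logs.db"})
print(f"Logging session ID: {logging_session_id}")

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy", human_input_mode="NEVER", code_execution_config=False)
user_proxy.initiate_chat(assistant, message="What is 123 * 4?", max_turns=1)

# Stop logging; the recorded run can now be inspected with any SQLite client.
autogen.runtime_logging.stop()
```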
GitHub
autogen
autogen/website/docs/topics/llm-observability.md
autogen
What is Observability? Observability provides developers with the necessary insights to understand and improve the internal workings of their agents. Observability is necessary for maintaining reliability, tracking costs, and ensuring AI safety. **Without observability tools, developers face significant hurdles:** - Tracking agent activities across sessions becomes a complex, error-prone task. - Manually sifting through verbose terminal outputs to understand LLM interactions is inefficient. - Pinpointing the exact moments of tool invocations is often like finding a needle in a haystack. **Key Features of Observability Dashboards:** - Human-readable overview analytics and replays of agent activities. - LLM cost, prompt, completion, timestamp, and metadata tracking for performance monitoring. - Tool invocation, events, and agent-to-agent interactions for workflow monitoring. - Error flagging and notifications for faster debugging. - Access to a wealth of data for developers using supported agent frameworks, such as environments, SDK versions, and more. ### Compliance Observability is not just a development convenience—it's a compliance necessity, especially in regulated industries: - It offers insights into AI decision-making processes, fostering trust and transparency. - Anomalies and unintended behaviors are detected promptly, reducing various risks. - Ensuring adherence to data privacy regulations, thereby safeguarding sensitive information. - Compliance violations are quickly identified and addressed, enhancing incident management.
GitHub
autogen
autogen/website/docs/topics/non-openai-models/local-vllm.md
autogen
# vLLM [vLLM](https://github.com/vllm-project/vllm) is a locally run proxy and inference server, providing an OpenAI-compatible API. As it performs both the proxy and the inferencing, you don't need to install an additional inference server. Note: vLLM does not support OpenAI's [Function Calling](https://platform.openai.com/docs/guides/function-calling), which AutoGen can use. However, support is in development and may be available by the time you read this. Running this stack requires the installation of: 1. AutoGen ([installation instructions](/docs/installation)) 2. vLLM Note: We recommend using a virtual environment for your stack; see [this article](https://microsoft.github.io/autogen/docs/installation/#create-a-virtual-environment-optional) for guidance.
GitHub
autogen
autogen/website/docs/topics/non-openai-models/local-vllm.md
autogen
Installing vLLM In your terminal: ```bash pip install vllm ```
GitHub
autogen
autogen/website/docs/topics/non-openai-models/local-vllm.md
autogen
Choosing models vLLM will download new models when you run the server. The models are sourced from [Hugging Face](https://huggingface.co); a filtered list of Text Generation models is [here](https://huggingface.co/models?pipeline_tag=text-generation&sort=trending), and vLLM has a list of [commonly used models](https://docs.vllm.ai/en/latest/models/supported_models.html). Use the full model name, e.g. `mistralai/Mistral-7B-Instruct-v0.2`.
GitHub
autogen
autogen/website/docs/topics/non-openai-models/local-vllm.md
autogen
Chat Template vLLM uses a pre-defined chat template, unless the model has a chat template defined in its config file on Hugging Face. This can cause an issue if the chat template doesn't allow `'role' : 'system'` messages, as used in AutoGen. Therefore, we will create a chat template for the Mistral AI Mistral 7B model we are using that allows roles of 'user', 'assistant', and 'system'. Create a file named `autogenmistraltemplate.jinja` with the following content: ```` text {{ bos_token }} {% for message in messages %} {% if ((message['role'] == 'user' or message['role'] == 'system') != (loop.index0 % 2 == 0)) %} {{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }} {% endif %} {% if (message['role'] == 'user' or message['role'] == 'system') %} {{ '[INST] ' + message['content'] + ' [/INST]' }} {% elif message['role'] == 'assistant' %} {{ message['content'] + eos_token}} {% else %} {{ raise_exception('Only system, user and assistant roles are supported!') }} {% endif %} {% endfor %} ```` ````mdx-code-block :::warning Chat Templates are specific to the model/model family. The example shown here is for Mistral-based models like Mistral 7B and Mixtral 8x7B. vLLM has a number of [example templates](https://github.com/vllm-project/vllm/tree/main/examples) for models that can be a starting point for your chat template. Just remember, the template may need to be adjusted to support 'system' role messages. ::: ````
GitHub
autogen
autogen/website/docs/topics/non-openai-models/local-vllm.md
autogen
Running vLLM proxy server To run vLLM with the chosen model and our chat template, in your terminal: ```bash python -m vllm.entrypoints.openai.api_server --model mistralai/Mistral-7B-Instruct-v0.2 --chat-template autogenmistraltemplate.jinja ``` By default, vLLM will run on 'http://0.0.0.0:8000'.
GitHub
autogen
autogen/website/docs/topics/non-openai-models/local-vllm.md
autogen
Using vLLM with AutoGen Now that we have the URL for the vLLM proxy server, you can use it within AutoGen in the same way as OpenAI or cloud-based proxy servers. As you are running this proxy server locally, no API key is required. As ```api_key``` is a mandatory field for configurations within AutoGen we put a dummy value in it, as per the example below. Although we are specifying the model when running the vLLM command, we must still put it into the ```model``` value for vLLM. ```python from autogen import UserProxyAgent, ConversableAgent local_llm_config={ "config_list": [ { "model": "mistralai/Mistral-7B-Instruct-v0.2", # Same as in vLLM command "api_key": "NotRequired", # Not needed "base_url": "http://0.0.0.0:8000/v1" # Your vLLM URL, with '/v1' added } ], "cache_seed": None # Turns off caching, useful for testing different models } # Create the agent that uses the LLM. assistant = ConversableAgent("agent", llm_config=local_llm_config,system_message="") # Create the agent that represents the user in the conversation. user_proxy = UserProxyAgent("user", code_execution_config=False,system_message="") # Let the assistant start the conversation. It will end when the user types exit. assistant.initiate_chat(user_proxy, message="How can I help you today?") ``` Output: ```` text agent (to user): How can I help you today? -------------------------------------------------------------------------------- Provide feedback to agent. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: Why is the sky blue? user (to agent): Why is the sky blue? -------------------------------------------------------------------------------- >>>>>>>> USING AUTO REPLY... agent (to user): The sky appears blue due to a phenomenon called Rayleigh scattering. As sunlight reaches Earth's atmosphere, it interacts with molecules and particles in the air, causing the scattering of light. Blue light has a shorter wavelength and gets scattered more easily than other colors, which is why the sky appears blue during a clear day. However, during sunrise and sunset, the sky can appear red, orange, or purple due to a different type of scattering called scattering by dust, pollutants, and water droplets, which scatter longer wavelengths of light more effectively. -------------------------------------------------------------------------------- Provide feedback to agent. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: and why does it turn red? user (to agent): and why does it turn red? -------------------------------------------------------------------------------- >>>>>>>> USING AUTO REPLY... agent (to user): During sunrise and sunset, the angle of the sun's rays in the sky is lower, and they have to pass through more of the Earth's atmosphere before reaching an observer. This additional distance results in more scattering of sunlight, which preferentially scatters the longer wavelengths (red, orange, and yellow) more than the shorter wavelengths (blue and green). The scattering of sunlight by the Earth's atmosphere causes the red, orange, and yellow colors to be more prevalent in the sky during sunrise and sunset, resulting in the beautiful display of colors often referred to as a sunrise or sunset. As the sun continues to set, the sky can transition to various shades of purple, pink, and eventually dark blue or black, as the available sunlight continues to decrease and the longer wavelengths are progressively scattered less effectively. 
-------------------------------------------------------------------------------- Provide feedback to agent. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: exit ````
GitHub
autogen
autogen/website/docs/topics/non-openai-models/about-using-nonopenai-models.md
autogen
# Non-OpenAI Models AutoGen allows you to use non-OpenAI models through proxy servers that provide an OpenAI-compatible API or a [custom model client](https://microsoft.github.io/autogen/blog/2024/01/26/Custom-Models) class. Benefits of this flexibility include access to hundreds of models, assigning specialized models to agents (e.g., fine-tuned coding models), the ability to run AutoGen entirely within your environment, utilising both OpenAI and non-OpenAI models in one system, and cost reductions in inference.
GitHub
autogen
autogen/website/docs/topics/non-openai-models/about-using-nonopenai-models.md
autogen
OpenAI-compatible API proxy server Any proxy server that provides an API that is compatible with [OpenAI's API](https://platform.openai.com/docs/api-reference) will work with AutoGen. These proxy servers can be cloud-based or running locally within your environment. ![Cloud or Local Proxy Servers](images/cloudlocalproxy.png) ### Cloud-based proxy servers By using cloud-based proxy servers, you are able to use models without requiring the hardware and software to run them. These providers can host open source/weight models, like [Hugging Face](https://huggingface.co/) and [Mistral AI](https://mistral.ai/), or their own closed models. When cloud-based proxy servers provide an OpenAI-compatible API, using them in AutoGen is straightforward. With [LLM Configuration](/docs/topics/llm_configuration) done in the same way as when using OpenAI's models, the primary difference is typically the authentication which is usually handled through an API key. Examples of using cloud-based proxy servers providers that have an OpenAI-compatible API are provided below: - [Together AI example](/docs/topics/non-openai-models/cloud-togetherai) - [Mistral AI example](/docs/topics/non-openai-models/cloud-mistralai) - [Anthropic Claude example](/docs/topics/non-openai-models/cloud-anthropic) ### Locally run proxy servers An increasing number of LLM proxy servers are available for use locally. These can be open-source (e.g., LiteLLM, Ollama, vLLM) or closed-source (e.g., LM Studio), and are typically used for running the full-stack within your environment. Similar to cloud-based proxy servers, as long as these proxy servers provide an OpenAI-compatible API, running them in AutoGen is straightforward. Examples of using locally run proxy servers that have an OpenAI-compatible API are provided below: - [LiteLLM with Ollama example](/docs/topics/non-openai-models/local-litellm-ollama) - [LM Studio](/docs/topics/non-openai-models/local-lm-studio) - [vLLM example](/docs/topics/non-openai-models/local-vllm) ````mdx-code-block :::tip If you are planning to use Function Calling, not all cloud-based and local proxy servers support Function Calling with their OpenAI-compatible API, so check their documentation. ::: ```` ### Configuration for Non-OpenAI models Whether you choose a cloud-based or locally-run proxy server, the configuration is done in the same way as using OpenAI's models, see [LLM Configuration](/docs/topics/llm_configuration) for further information. You can use [model configuration filtering](/docs/topics/llm_configuration#config-list-filtering) to assign specific models to agents.
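As a rough illustration of that last point, the sketch below mixes an OpenAI model with a model served through an OpenAI-compatible proxy and filters the list per agent; the endpoint URL, API keys, and model names are placeholders.

```python
import autogen
from autogen import AssistantAgent

config_list = [
    {"model": "gpt-4", "api_key": "sk-..."},  # OpenAI (placeholder key)
    {
        "model": "mistralai/Mistral-7B-Instruct-v0.2",  # served by a local OpenAI-compatible proxy
        "api_key": "NotRequired",
        "base_url": "http://0.0.0.0:8000/v1",
    },
]

# Assign the local model to the coding agent and GPT-4 to the planner.
coder_configs = autogen.filter_config(config_list, {"model": ["mistralai/Mistral-7B-Instruct-v0.2"]})
planner_configs = autogen.filter_config(config_list, {"model": ["gpt-4"]})

coder = AssistantAgent("coder", llm_config={"config_list": coder_configs})
planner = AssistantAgent("planner", llm_config={"config_list": planner_configs})
```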
GitHub
autogen
autogen/website/docs/topics/non-openai-models/about-using-nonopenai-models.md
autogen
Custom Model Client class For more advanced users, you can create your own custom model client class, enabling you to define and load your own models. See the [AutoGen with Custom Models: Empowering Users to Use Their Own Inference Mechanism](/blog/2024/01/26/Custom-Models) blog post and [this notebook](/docs/notebooks/agentchat_custom_model/) for a guide to creating custom model client classes.
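For orientation, here is a rough, hypothetical skeleton of what such a class can look like, based on the protocol described in the linked blog post; the method names, response shape, model name, and response text are assumptions that should be checked against that guide.

```python
from types import SimpleNamespace

from autogen import AssistantAgent


class CustomModelClient:
    """Sketch of a client following AutoGen's custom ModelClient protocol."""

    def __init__(self, config, **kwargs):
        self.model_name = config["model"]  # load your model/tokenizer here

    def create(self, params):
        # Run inference and return an object shaped like an OpenAI response:
        # a `choices` list whose items expose `.message.content`.
        response = SimpleNamespace()
        response.choices = [
            SimpleNamespace(message=SimpleNamespace(content="Hello from my model!", function_call=None))
        ]
        response.model = self.model_name
        return response

    def message_retrieval(self, response):
        return [choice.message.content for choice in response.choices]

    def cost(self, response) -> float:
        return 0  # local inference: report zero cost

    @staticmethod
    def get_usage(response):
        return {}  # optionally report prompt_tokens, completion_tokens, etc.


# The config entry names the class; register the class on each agent that uses it.
config_list = [{"model": "my-local-model", "model_client_cls": "CustomModelClient"}]
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
assistant.register_model_client(model_client_cls=CustomModelClient)
```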
GitHub
autogen
autogen/website/docs/topics/non-openai-models/best-tips-for-nonopenai-models.md
autogen
# Tips for Non-OpenAI Models Here are some tips for using non-OpenAI Models with AutoGen.
GitHub
autogen
autogen/website/docs/topics/non-openai-models/best-tips-for-nonopenai-models.md
autogen
Finding the right model Every model will perform differently across the operations within your AutoGen setup, such as speaker selection, coding, function calling, content creation, etc. On the whole, larger models (13B+) are better at following directions and providing more cohesive responses. Content creation can be performed by most models. Fine-tuned models can be great for very specific tasks, such as function calling and coding. Specific tasks that require very accurate outputs, such as speaker selection in a Group Chat scenario, can be a challenge with most open source/weight models. The use of chain-of-thought and/or few-shot prompting can help guide the LLM to provide the output in the format you want, as illustrated in the sketch below.
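The following is a minimal, hypothetical sketch of few-shot prompting in an agent's system message to pin down an output format; the agent name, labels, and `local_llm_config` are illustrative placeholders (for example, a config pointing at a local OpenAI-compatible proxy).

```python
from autogen import AssistantAgent

# Few-shot examples in the system message nudge a smaller open-weight model
# toward a strict, single-word output format.
system_message = """You classify support tickets. Reply with exactly one word: billing, bug, or other.

Examples:
Ticket: "My invoice shows the wrong amount." -> billing
Ticket: "The app crashes when I log in." -> bug
Ticket: "Do you have a public roadmap?" -> other
"""

classifier = AssistantAgent(
    "ticket_classifier",
    system_message=system_message,
    llm_config=local_llm_config,  # assumed to be defined elsewhere
)
```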
GitHub
autogen
autogen/website/docs/topics/non-openai-models/best-tips-for-nonopenai-models.md
autogen
Validating your program Testing your AutoGen setup against a very large LLM, such as OpenAI's ChatGPT or Anthropic's Claude 3, can help validate your agent setup and configuration. Once a setup is performing as you want, you can replace the models for your agents with non-OpenAI models and iteratively tweak system messages, prompts, and model selection.
GitHub
autogen
autogen/website/docs/topics/non-openai-models/best-tips-for-nonopenai-models.md
autogen
Chat template AutoGen utilises a set of chat messages for the conversation between AutoGen/user and LLMs. Each chat message has a role attribute that is typically `user`, `assistant`, or `system`. A chat template is applied during inference and some chat templates implement rules about what roles can be used in specific sequences of messages. For example, when using Mistral AI's API the last chat message must have a role of `user`. In a Group Chat scenario the message used to select the next speaker will have a role of `system` by default and the API will throw an exception for this step. To overcome this the GroupChat's constructor has a parameter called `role_for_select_speaker_messages` that can be used to change the role name to `user`. ```python groupchat = autogen.GroupChat( agents=[user_proxy, coder, pm], messages=[], max_round=12, # Role for select speaker message will be set to 'user' instead of 'system' role_for_select_speaker_messages='user', ) ``` If the chat template associated with a model you want to use doesn't support the role sequence and names used in AutoGen you can modify the chat template. See an example of this on our [vLLM page](/docs/topics/non-openai-models/local-vllm#chat-template).
GitHub
autogen
autogen/website/docs/topics/non-openai-models/best-tips-for-nonopenai-models.md
autogen
Discord Join AutoGen's [#alt-models](https://discord.com/channels/1153072414184452236/1201369716057440287) channel on their Discord and discuss non-OpenAI models and configurations.
GitHub
autogen
autogen/website/docs/topics/openai-assistant/gpt_assistant_agent.md
autogen
# Agent Backed by OpenAI Assistant API The GPTAssistantAgent is a powerful component of the AutoGen framework, utilizing OpenAI's Assistant API to enhance agents with advanced capabilities. This agent enables the integration of multiple tools such as the Code Interpreter, File Search, and Function Calling, allowing for a highly customizable and dynamic interaction model. Version Requirements: - AutoGen: Version 0.2.27 or higher. - OpenAI: Version 1.21 or higher. Key Features of the GPTAssistantAgent: - Multi-Tool Mastery: Agents can leverage a combination of OpenAI's built-in tools, like [Code Interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter) and [File Search](https://platform.openai.com/docs/assistants/tools/file-search), alongside custom tools you create or integrate via [Function Calling](https://platform.openai.com/docs/assistants/tools/function-calling). - Streamlined Conversation Management: Benefit from persistent threads that automatically store message history and adjust based on the model's context length. This simplifies development by allowing you to focus on adding new messages rather than managing conversation flow. - File Access and Integration: Enable agents to access and utilize files in various formats. Files can be incorporated during agent creation or throughout conversations via threads. Additionally, agents can generate files (e.g., images, spreadsheets) and cite referenced files within their responses. For a practical illustration, here are some examples: - [Chat with OpenAI Assistant using function call](/docs/notebooks/agentchat_oai_assistant_function_call) demonstrates how to leverage function calling to enable intelligent function selection. - [GPTAssistant with Code Interpreter](/docs/notebooks/agentchat_oai_code_interpreter) showcases the integration of the Code Interpreter tool which executes Python code dynamically within applications. - [Group Chat with GPTAssistantAgent](/docs/notebooks/agentchat_oai_assistant_groupchat) demonstrates how to use the GPTAssistantAgent in AutoGen's group chat mode, enabling collaborative task performance through automated chat with agents powered by LLMs, tools, or humans.
GitHub
autogen
autogen/website/docs/topics/openai-assistant/gpt_assistant_agent.md
autogen
Create an OpenAI Assistant in AutoGen ```python import os from autogen import config_list_from_json from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent assistant_id = os.environ.get("ASSISTANT_ID", None) config_list = config_list_from_json("OAI_CONFIG_LIST") llm_config = { "config_list": config_list, } assistant_config = { # define the openai assistant behavior as you need } oai_agent = GPTAssistantAgent( name="oai_agent", instructions="I'm an openai assistant running in autogen", llm_config=llm_config, assistant_config=assistant_config, ) ```
GitHub
autogen
autogen/website/docs/topics/openai-assistant/gpt_assistant_agent.md
autogen
Use OpenAI Assistant Built-in Tools and Function Calling ### Code Interpreter The [Code Interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter) empowers your agents to write and execute Python code in a secure environment provided by OpenAI. This unlocks several capabilities, including but not limited to: - Process data: Handle various data formats and manipulate data on the fly. - Generate outputs: Create new data files or even visualizations like graphs. - ... Use the Code Interpreter with the following configuration. ```python assistant_config = { "tools": [ {"type": "code_interpreter"}, ], "tool_resources": { "code_interpreter": { "file_ids": ["$file.id"] # optional. Files that are passed at the Assistant level are accessible by all Runs with this Assistant. } } } ``` To get the `file.id`, you can employ two methods: 1. OpenAI Playground: Leverage the OpenAI Playground, an interactive platform accessible at https://platform.openai.com/playground, to upload your files and obtain the corresponding file IDs. 2. Code-Based Uploading: Alternatively, you can upload files and retrieve their file IDs programmatically using the following code snippet: ```python from openai import OpenAI client = OpenAI( # Defaults to os.environ.get("OPENAI_API_KEY") ) # Upload a file with an "assistants" purpose file = client.files.create( file=open("mydata.csv", "rb"), purpose='assistants' ) ``` ### File Search The [File Search](https://platform.openai.com/docs/assistants/tools/file-search) tool empowers your agents to tap into knowledge beyond their pre-trained models. This allows you to incorporate your own documents and data, such as product information or code files, into your agent's capabilities. Use File Search with the following configuration. ```python assistant_config = { "tools": [ {"type": "file_search"}, ], "tool_resources": { "file_search": { "vector_store_ids": ["$vector_store.id"] } } } ``` Here's how to obtain the vector_store.id using two methods: 1. OpenAI Playground: Leverage the OpenAI Playground, an interactive platform accessible at https://platform.openai.com/playground, to create a vector store, upload your files, and add them to the vector store. Once complete, you'll be able to retrieve the associated `vector_store.id`. 2. Code-Based Uploading: Alternatively, you can create a vector store and upload your files programmatically using the following code snippet: ```python from openai import OpenAI client = OpenAI( # Defaults to os.environ.get("OPENAI_API_KEY") ) # Step 1: Create a Vector Store vector_store = client.beta.vector_stores.create(name="Financial Statements") print("Vector Store created:", vector_store.id) # This is your vector_store.id # Step 2: Prepare Files for Upload file_paths = ["edgar/goog-10k.pdf", "edgar/brka-10k.txt"] file_streams = [open(path, "rb") for path in file_paths] # Step 3: Upload Files and Add to Vector Store (with status polling) file_batch = client.beta.vector_stores.file_batches.upload_and_poll( vector_store_id=vector_store.id, files=file_streams ) # Step 4: Verify Completion (Optional) print("File batch status:", file_batch.status) print("Uploaded file count:", file_batch.file_counts.processed) ``` ### Function Calling Function Calling empowers you to extend the capabilities of your agents with your pre-defined functionalities, which allows you to describe custom functions to the Assistant, enabling intelligent function selection and argument generation. Use Function Calling with the following configuration.
```python # learn more from https://platform.openai.com/docs/guides/function-calling/function-calling from autogen.function_utils import get_function_schema def get_current_weather(location: str) -> dict: """ Retrieves the current weather for a specified location. Args: location (str): The location to get the weather for. Returns: dict: A dictionary with weather details. """ # Simulated response return { "location": location, "temperature": 22.5, "description": "Partly cloudy" } api_schema = get_function_schema( get_current_weather, name=get_current_weather.__name__, description="Returns the current weather data for a specified location." ) assistant_config = { "tools": [ { "type": "function", "function": api_schema, } ], } ```
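To round out the picture, here is a hedged sketch of wiring the declared function into an agent so that calls can actually be executed locally; it assumes the `llm_config` and `GPTAssistantAgent` import from the earlier creation example, and the agent names and prompt are illustrative placeholders.

```python
from autogen import UserProxyAgent
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent

weather_assistant = GPTAssistantAgent(
    name="weather_assistant",
    instructions="Answer weather questions by calling the provided function.",
    llm_config=llm_config,               # from the earlier creation example
    assistant_config=assistant_config,   # the function-calling config defined above
)

# Map the declared tool name to the Python callable so tool calls are executed locally.
weather_assistant.register_function(
    function_map={get_current_weather.__name__: get_current_weather}
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    max_consecutive_auto_reply=1,
)
user_proxy.initiate_chat(weather_assistant, message="What's the weather like in Seattle?")
```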
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md
autogen
# Introduction to Transform Messages Why do we need to handle long contexts? The problem arises from several constraints and requirements: 1. Token limits: LLMs have token limits that restrict the amount of textual data they can process. If we exceed these limits, we may encounter errors or incur additional costs. By preprocessing the chat history, we can ensure that we stay within the acceptable token range. 2. Context relevance: As conversations progress, retaining the entire chat history may become less relevant or even counterproductive. Keeping only the most recent and pertinent messages can help the LLMs focus on the most crucial context, leading to more accurate and relevant responses. 3. Efficiency: Processing long contexts can consume more computational resources, leading to slower response times.
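To make the first constraint concrete, here is a small, hedged illustration of how quickly an unmanaged chat history grows past a model's context window; it assumes the `tiktoken` package is installed, and the model name and message contents are only examples.

```python
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

# Simulate a chat history of 20 fairly long messages.
history = [{"role": "user", "content": "please summarize this report " * 300}] * 20
total_tokens = sum(len(encoding.encode(message["content"])) for message in history)

print(f"History size: {total_tokens} tokens")  # easily exceeds a 16k-token context window
```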
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md
autogen
Transform Messages Capability The `TransformMessages` capability is designed to modify incoming messages before they are processed by the LLM agent. This can include limiting the number of messages, truncating messages to meet token limits, and more. :::info Requirements Install `autogen-agentchat`: ```bash pip install autogen-agentchat~=0.2 ``` For more information, please refer to the [installation guide](/docs/installation/). ::: ### Exploring and Understanding Transformations Let's start by exploring the available transformations and understanding how they work. We will start off by importing the required modules. ```python import copy import pprint from autogen.agentchat.contrib.capabilities import transforms ``` #### Example 1: Limiting the Total Number of Messages Consider a scenario where you want to limit the context history to only the most recent messages to maintain efficiency and relevance. You can achieve this with the MessageHistoryLimiter transformation: ```python # Limit the message history to the 3 most recent messages max_msg_transfrom = transforms.MessageHistoryLimiter(max_messages=3) messages = [ {"role": "user", "content": "hello"}, {"role": "assistant", "content": [{"type": "text", "text": "there"}]}, {"role": "user", "content": "how"}, {"role": "assistant", "content": [{"type": "text", "text": "are you doing?"}]}, {"role": "user", "content": "very very very very very very long string"}, ] processed_messages = max_msg_transfrom.apply_transform(copy.deepcopy(messages)) pprint.pprint(processed_messages) ``` ```console [{'content': 'how', 'role': 'user'}, {'content': [{'text': 'are you doing?', 'type': 'text'}], 'role': 'assistant'}, {'content': 'very very very very very very long string', 'role': 'user'}] ``` By applying the `MessageHistoryLimiter`, we can see that we were able to limit the context history to the 3 most recent messages. However, if the splitting point is between a "tool_calls" and "tool" pair, the complete pair will be included to obey the OpenAI API call constraints. ```python max_msg_transfrom = transforms.MessageHistoryLimiter(max_messages=3) messages = [ {"role": "user", "content": "hello"}, {"role": "tool_calls", "content": "calling_tool"}, {"role": "tool", "content": "tool_response"}, {"role": "user", "content": "how are you"}, {"role": "assistant", "content": [{"type": "text", "text": "are you doing?"}]}, ] processed_messages = max_msg_transfrom.apply_transform(copy.deepcopy(messages)) pprint.pprint(processed_messages) ``` ```console [{'content': 'calling_tool', 'role': 'tool_calls'}, {'content': 'tool_response', 'role': 'tool'}, {'content': 'how are you', 'role': 'user'}, {'content': [{'text': 'are you doing?', 'type': 'text'}], 'role': 'assistant'}] ``` #### Example 2: Limiting the Number of Tokens To adhere to token limitations, use the `MessageTokenLimiter` transformation. This limits tokens per message and the total token count across all messages. 
Additionally, a `min_tokens` threshold can be applied: ```python # Limit the token limit per message to 3 tokens token_limit_transform = transforms.MessageTokenLimiter(max_tokens_per_message=3, min_tokens=10) processed_messages = token_limit_transform.apply_transform(copy.deepcopy(messages)) pprint.pprint(processed_messages) ``` ```console [{'content': 'hello', 'role': 'user'}, {'content': [{'text': 'there', 'type': 'text'}], 'role': 'assistant'}, {'content': 'how', 'role': 'user'}, {'content': [{'text': 'are you doing', 'type': 'text'}], 'role': 'assistant'}, {'content': 'very very very', 'role': 'user'}] ``` We can see that we were able to limit the number of tokens to 3, which is equivalent to 3 words for this instance. In the following example we will explore the effect of the `min_tokens` threshold. ```python short_messages = [ {"role": "user", "content": "hello there, how are you?"}, {"role": "assistant", "content": [{"type": "text", "text": "hello"}]}, ] processed_short_messages = token_limit_transform.apply_transform(copy.deepcopy(short_messages)) pprint.pprint(processed_short_messages) ``` ```console [{'content': 'hello there, how are you?', 'role': 'user'}, {'content': [{'text': 'hello', 'type': 'text'}], 'role': 'assistant'}] ``` We can see that no transformation was applied, because the threshold of 10 total tokens was not reached. ### Apply Transformations Using Agents So far, we have only tested the `MessageHistoryLimiter` and `MessageTokenLimiter` transformations individually, let's test these transformations with AutoGen's agents. #### Setting Up the Stage ```python import os import copy import autogen from autogen.agentchat.contrib.capabilities import transform_messages, transforms from typing import Dict, List config_list = [{"model": "gpt-3.5-turbo", "api_key": os.getenv("OPENAI_API_KEY")}] # Define your agent; the user proxy and an assistant assistant = autogen.AssistantAgent( "assistant", llm_config={"config_list": config_list}, ) user_proxy = autogen.UserProxyAgent( "user_proxy", human_input_mode="NEVER", is_termination_msg=lambda x: "TERMINATE" in x.get("content", ""), max_consecutive_auto_reply=10, ) ``` :::tip Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration). ::: We first need to write the `test` function that creates a very long chat history by exchanging messages between an assistant and a user proxy agent, and then attempts to initiate a new chat without clearing the history, potentially triggering an error due to token limits. ```python # Create a very long chat history that is bound to cause a crash for gpt 3.5 def test(assistant: autogen.ConversableAgent, user_proxy: autogen.UserProxyAgent): for _ in range(1000): # define a fake, very long messages assitant_msg = {"role": "assistant", "content": "test " * 1000} user_msg = {"role": "user", "content": ""} assistant.send(assitant_msg, user_proxy, request_reply=False, silent=True) user_proxy.send(user_msg, assistant, request_reply=False, silent=True) try: user_proxy.initiate_chat(assistant, message="plot and save a graph of x^2 from -10 to 10", clear_history=False) except Exception as e: print(f"Encountered an error with the base assistant: \n{e}") ``` The first run will be the default implementation, where the agent does not have the `TransformMessages` capability. ```python test(assistant, user_proxy) ``` Running this test will result in an error due to the large number of tokens sent to OpenAI's gpt 3.5. 
```console user_proxy (to assistant): plot and save a graph of x^2 from -10 to 10 -------------------------------------------------------------------------------- Encountered an error with the base assistant Error code: 429 - {'error': {'message': 'Request too large for gpt-3.5-turbo in organization org-U58JZBsXUVAJPlx2MtPYmdx1 on tokens per min (TPM): Limit 60000, Requested 1252546. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}} ``` Now let's add the `TransformMessages` capability to the assistant and run the same test. ```python context_handling = transform_messages.TransformMessages( transforms=[ transforms.MessageHistoryLimiter(max_messages=10), transforms.MessageTokenLimiter(max_tokens=1000, max_tokens_per_message=50, min_tokens=500), ] ) context_handling.add_to_agent(assistant) test(assistant, user_proxy) ``` The following console output shows that the agent is now able to handle the large number of tokens sent to OpenAI's gpt 3.5. `````console user_proxy (to assistant): plot and save a graph of x^2 from -10 to 10 -------------------------------------------------------------------------------- Truncated 3804 tokens. Tokens reduced from 4019 to 215 assistant (to user_proxy): To plot and save a graph of \( x^2 \) from -10 to 10, we can use Python with the matplotlib library. Here's the code to generate the plot and save it to a file named "plot.png": ```python # filename: plot_quadratic.py import matplotlib.pyplot as plt import numpy as np # Create an array of x values from -10 to 10 x = np.linspace(-10, 10, 100) y = x**2 # Plot the graph plt.plot(x, y) plt.xlabel('x') plt.ylabel('x^2') plt.title('Plot of x^2') plt.grid(True) # Save the plot as an image file plt.savefig('plot.png') # Display the plot plt.show() ```` You can run this script in a Python environment. It will generate a plot of \( x^2 \) from -10 to 10 and save it as "plot.png" in the same directory where the script is executed. Execute the Python script to create and save the graph. After executing the code, you should see a file named "plot.png" in the current directory, containing the graph of \( x^2 \) from -10 to 10. You can view this file to see the plotted graph. Is there anything else you would like to do or need help with? If not, you can type "TERMINATE" to end our conversation. --- ````` ### Create Custom Transformations to Handle Sensitive Content You can create custom transformations by implementing the `MessageTransform` protocol, which provides flexibility to handle various use cases. One practical application is to create a custom transformation that redacts sensitive information, such as API keys, passwords, or personal data, from the chat history or logs. This ensures that confidential data is not inadvertently exposed, enhancing the security and privacy of your conversational AI system. We will demonstrate this by implementing a custom transformation called `MessageRedact` that detects and redacts OpenAI API keys from the conversation history. This transformation is particularly useful when you want to prevent accidental leaks of API keys, which could compromise the security of your system. ```python import os import pprint import copy import re import autogen from autogen.agentchat.contrib.capabilities import transform_messages, transforms from typing import Dict, List # The transform must adhere to transform_messages.MessageTransform protocol. 
from typing import Tuple  # needed for the get_logs return annotation below
class MessageRedact: def __init__(self): self._openai_key_pattern = r"sk-([a-zA-Z0-9]{48})" self._replacement_string = "REDACTED" def apply_transform(self, messages: List[Dict]) -> List[Dict]: temp_messages = copy.deepcopy(messages) for message in temp_messages: if isinstance(message["content"], str): message["content"] = re.sub(self._openai_key_pattern, self._replacement_string, message["content"]) elif isinstance(message["content"], list): for item in message["content"]: if item["type"] == "text": item["text"] = re.sub(self._openai_key_pattern, self._replacement_string, item["text"]) return temp_messages def get_logs(self, pre_transform_messages: List[Dict], post_transform_messages: List[Dict]) -> Tuple[str, bool]: keys_redacted = self._count_redacted(post_transform_messages) - self._count_redacted(pre_transform_messages) if keys_redacted > 0: return f"Redacted {keys_redacted} OpenAI API keys.", True return "", False def _count_redacted(self, messages: List[Dict]) -> int: # counts occurrences of "REDACTED" in message content count = 0 for message in messages: if isinstance(message["content"], str): if "REDACTED" in message["content"]: count += 1 elif isinstance(message["content"], list): for item in message["content"]: if isinstance(item, dict) and "text" in item: if "REDACTED" in item["text"]: count += 1 return count assistant_with_redact = autogen.AssistantAgent( "assistant", llm_config={"config_list": config_list}, max_consecutive_auto_reply=1, ) redact_handling = transform_messages.TransformMessages(transforms=[MessageRedact()]) redact_handling.add_to_agent(assistant_with_redact) user_proxy = autogen.UserProxyAgent( "user_proxy", human_input_mode="NEVER", max_consecutive_auto_reply=1, ) messages = [ {"content": "api key 1 = sk-7nwt00xv6fuegfu3gnwmhrgxvuc1cyrhxcq1quur9zvf05fy"}, # Don't worry, the key is randomly generated {"content": [{"type": "text", "text": "API key 2 = sk-9wi0gf1j2rz6utaqd3ww3o6c1h1n28wviypk7bd81wlj95an"}]}, ] for message in messages: user_proxy.send(message, assistant_with_redact, request_reply=False, silent=True) result = user_proxy.initiate_chat( assistant_with_redact, message="What are the two API keys that I just provided", clear_history=False ) ``` ```console user_proxy (to assistant): What are the two API keys that I just provided -------------------------------------------------------------------------------- Redacted 2 OpenAI API keys. assistant (to user_proxy): As an AI, I must inform you that it is not safe to share API keys publicly as they can be used to access your private data or services that can incur costs. Given that you've typed "REDACTED" instead of the actual keys, it seems you are aware of the privacy concerns and are likely testing my response or simulating an exchange without exposing real credentials, which is a good practice for privacy and security reasons. To respond directly to your direct question: The two API keys you provided are both placeholders indicated by the text "REDACTED", and not actual API keys. If these were real keys, I would have reiterated the importance of keeping them secure and would not display them here. Remember to keep your actual API keys confidential to prevent unauthorized use. If you've accidentally exposed real API keys, you should revoke or regenerate them as soon as possible through the corresponding service's API management console.
-------------------------------------------------------------------------------- user_proxy (to assistant): -------------------------------------------------------------------------------- Redacted 2 OpenAI API keys. ```
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md
autogen
# Compressing Text with LLMLingua

Text compression is crucial for optimizing interactions with LLMs, especially when dealing with long prompts that can lead to higher costs and slower response times. LLMLingua is a tool designed to compress prompts effectively, enhancing the efficiency and cost-effectiveness of LLM operations.

This guide introduces LLMLingua's integration with AutoGen, demonstrating how to use this tool to compress text, thereby optimizing the usage of LLMs for various applications.

:::info Requirements
Install `autogen-agentchat[long-context]~=0.2` and `PyMuPDF`:

```bash
pip install "autogen-agentchat[long-context]~=0.2" PyMuPDF
```

For more information, please refer to the [installation guide](/docs/installation/).
:::
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md
autogen
Example 1: Compressing AutoGen Research Paper using LLMLingua

We will look at how we can use `TextMessageCompressor` to compress an AutoGen research paper using `LLMLingua`. Here's how you can initialize `TextMessageCompressor` with LLMLingua, a text compressor that adheres to the `TextCompressor` protocol.

```python
import os
import tempfile

import fitz  # PyMuPDF
import requests

from autogen.agentchat.contrib.capabilities.text_compressors import LLMLingua
from autogen.agentchat.contrib.capabilities.transforms import TextMessageCompressor

AUTOGEN_PAPER = "https://arxiv.org/pdf/2308.08155"


def extract_text_from_pdf():
    # Download the PDF
    response = requests.get(AUTOGEN_PAPER)
    response.raise_for_status()  # Ensure the download was successful

    text = ""
    # Save the PDF to a temporary file
    with tempfile.TemporaryDirectory() as temp_dir:
        pdf_path = os.path.join(temp_dir, "temp.pdf")
        with open(pdf_path, "wb") as f:
            f.write(response.content)

        # Open the PDF
        with fitz.open(pdf_path) as doc:
            # Read and extract text from each page
            for page in doc:
                text += page.get_text()

    return text


# Example usage
pdf_text = extract_text_from_pdf()

llm_lingua = LLMLingua()
text_compressor = TextMessageCompressor(text_compressor=llm_lingua)
compressed_text = text_compressor.apply_transform([{"content": pdf_text}])

print(text_compressor.get_logs([], []))
```

```console
('19765 tokens saved with text compression.', True)
```
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md
autogen
Example 2: Integrating LLMLingua with `ConversableAgent`

Now, let's integrate `LLMLingua` into a conversational agent within AutoGen. This allows dynamic compression of prompts before they are sent to the LLM.

```python
import os

import autogen
from autogen.agentchat.contrib.capabilities import transform_messages

system_message = "You are a world class researcher."
config_list = [{"model": "gpt-4-turbo", "api_key": os.getenv("OPENAI_API_KEY")}]

# Define your agents; the user proxy and an assistant
researcher = autogen.ConversableAgent(
    "assistant",
    llm_config={"config_list": config_list},
    max_consecutive_auto_reply=1,
    system_message=system_message,
    human_input_mode="NEVER",
)
user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    is_termination_msg=lambda x: "TERMINATE" in x.get("content", ""),
    max_consecutive_auto_reply=1,
)
```

:::tip
Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).
:::

```python
context_handling = transform_messages.TransformMessages(transforms=[text_compressor])
context_handling.add_to_agent(researcher)

message = "Summarize this research paper for me, include the important information" + pdf_text
result = user_proxy.initiate_chat(recipient=researcher, clear_history=True, message=message, silent=True)

print(result.chat_history[1]["content"])
```

```console
19953 tokens saved with text compression.
The paper describes AutoGen, a framework designed to facilitate the development of diverse large language model (LLM) applications through conversational multi-agent systems. The framework emphasizes customization and flexibility, enabling developers to define agent interaction behaviors in natural language or computer code.

Key components of AutoGen include:

1. **Conversable Agents**: These are customizable agents designed to operate autonomously or through human interaction. They are capable of initiating, maintaining, and responding within conversations, contributing effectively to multi-agent dialogues.

2. **Conversation Programming**: AutoGen introduces a programming paradigm centered around conversational interactions among agents. This approach simplifies the development of complex applications by streamlining how agents communicate and interact, focusing on conversational logic rather than traditional coding formats.

3. **Agent Customization and Flexibility**: Developers have the freedom to define the capabilities and behaviors of agents within the system, allowing for a wide range of applications across different domains.

4. **Application Versatility**: The paper outlines various use cases from mathematics and coding to decision-making and entertainment, demonstrating AutoGen's ability to cope with a broad spectrum of complexities and requirements.

5. **Hierarchical and Joint Chat Capabilities**: The system supports complex conversation patterns including hierarchical and multi-agent interactions, facilitating robust dialogues that can dynamically adjust based on the conversation context and the agents' roles.

6. **Open-source and Community Engagement**: AutoGen is presented as an open-source framework, inviting contributions and adaptations from the global development community to expand its capabilities and applications.

The framework's architecture is designed so that it can be seamlessly integrated into existing systems, providing a robust foundation for developing sophisticated multi-agent applications that leverage the capabilities of modern LLMs.

The paper also discusses potential ethical considerations and future improvements, highlighting the importance of continual development in response to evolving tech landscapes and user needs.
```
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md
autogen
Example 3: Modifying LLMLingua's Compression Parameters

LLMLingua's flexibility allows for various configurations, such as customizing instructions for the LLM or setting specific token counts for compression. This example demonstrates how to set a target token count, enabling the use of models with smaller context sizes like gpt-3.5.

```python
config_list = [{"model": "gpt-3.5-turbo", "api_key": os.getenv("OPENAI_API_KEY")}]
researcher = autogen.ConversableAgent(
    "assistant",
    llm_config={"config_list": config_list},
    max_consecutive_auto_reply=1,
    system_message=system_message,
    human_input_mode="NEVER",
)

text_compressor = TextMessageCompressor(
    text_compressor=llm_lingua,
    compression_params={"target_token": 13000},
    cache=None,
)
context_handling = transform_messages.TransformMessages(transforms=[text_compressor])
context_handling.add_to_agent(researcher)

compressed_text = text_compressor.apply_transform([{"content": message}])

result = user_proxy.initiate_chat(recipient=researcher, clear_history=True, message=message, silent=True)

print(result.chat_history[1]["content"])
```

```console
25308 tokens saved with text compression.
Based on the extensive research paper information provided, it seems that the focus is on developing a framework called AutoGen for creating multi-agent conversations based on Large Language Models (LLMs) for a variety of applications such as math problem solving, coding, decision-making, and more.

The paper discusses the importance of incorporating diverse roles of LLMs, human inputs, and tools to enhance the capabilities of the conversable agents within the AutoGen framework. It also delves into the effectiveness of different systems in various scenarios, showcases the implementation of AutoGen in pilot studies, and compares its performance with other systems in tasks like math problem-solving, coding, and decision-making.

The paper also highlights the different features and components of AutoGen such as the AssistantAgent, UserProxyAgent, ExecutorAgent, and GroupChatManager, emphasizing its flexibility, ease of use, and modularity in managing multi-agent interactions. It presents case analyses to demonstrate the effectiveness of AutoGen in various applications and scenarios.

Furthermore, the paper includes manual evaluations, scenario testing, code examples, and detailed comparisons with other systems like ChatGPT, OptiGuide, MetaGPT, and more, to showcase the performance and capabilities of the AutoGen framework.

Overall, the research paper showcases the potential of AutoGen in facilitating dynamic multi-agent conversations, enhancing decision-making processes, and improving problem-solving tasks with the integration of LLMs, human inputs, and tools in a collaborative framework.
```
GitHub
autogen
autogen/website/docs/tutorial/what-next.md
autogen
# What Next?

Now that you have learned the basics of AutoGen, you can start to build your own agents. Here are some ideas to get you started without going into the advanced topics:

1. **Chat with LLMs**: In [Human in the Loop](./human-in-the-loop) we covered the basic human-in-the-loop usage. You can try to hook up different LLMs using local model servers like [Ollama](https://github.com/ollama/ollama) and [LM Studio](https://lmstudio.ai/), and chat with them using the human-in-the-loop component of your human proxy agent. A minimal configuration sketch is shown after this list.
2. **Prompt Engineering**: In [Code Executors](./code-executors) we covered the simple two-agent scenario using GPT-4 and a Python code executor. To make this scenario work for different LLMs and programming languages, you probably need to tune the system message of the code writer agent. As with the other scenarios covered in this tutorial, you can also try to tune system messages for different LLMs.
3. **Complex Tasks**: In [Conversation Patterns](./conversation-patterns) we covered the basic conversation patterns. You can try to find other tasks that can be decomposed into these patterns, and leverage the code executors and tools to make the agents more powerful.
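For illustration only, here is a minimal sketch of pointing an agent at a local, OpenAI-compatible endpoint such as the one Ollama exposes. The model name, port, and placeholder API key are assumptions; check your local server's documentation for the exact values.

```python
from autogen import ConversableAgent

# Hypothetical local endpoint configuration; adjust model name, URL, and key for your server.
local_config_list = [
    {
        "model": "llama3",                        # assumed local model name
        "base_url": "http://localhost:11434/v1",  # assumed OpenAI-compatible endpoint (Ollama default port)
        "api_key": "not-needed",                  # local servers typically ignore the key
    }
]

local_assistant = ConversableAgent(
    "local_assistant",
    llm_config={"config_list": local_config_list},
    human_input_mode="NEVER",
)
```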
GitHub
autogen
autogen/website/docs/tutorial/what-next.md
autogen
Dig Deeper

- Read the [user guide](/docs/topics) to learn more
- Read the examples and guides in the [notebooks section](/docs/notebooks)
- Check [research](/docs/Research) and [blog](/blog)
GitHub
autogen
autogen/website/docs/tutorial/what-next.md
autogen
Get Help

If you have any questions, you can ask in our [GitHub Discussions](https://github.com/microsoft/autogen/discussions), or join our [Discord Server](https://aka.ms/autogen-dc).

[![](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat.png)](https://aka.ms/autogen-dc)
GitHub
autogen
autogen/website/docs/tutorial/what-next.md
autogen
Get Involved

- Check out [Roadmap Issues](https://aka.ms/autogen-roadmap) to see what we are working on.
- Contribute your work to our [gallery](/docs/Gallery).
- Follow our [contribution guide](/docs/contributor-guide/contributing) to make a pull request to AutoGen.
- You can also share your work with the community on the Discord server.
GitHub
autogen
autogen/website/docs/Use-Cases/agent_chat.md
autogen
# Multi-agent Conversation Framework

AutoGen offers a unified multi-agent conversation framework as a high-level abstraction of using foundation models. It features capable, customizable and conversable agents which integrate LLMs, tools, and humans via automated agent chat.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.

This framework simplifies the orchestration, automation and optimization of a complex LLM workflow. It maximizes the performance of LLM models and overcomes their weaknesses. It enables building next-gen LLM applications based on multi-agent conversations with minimal effort.

### Agents

AutoGen abstracts and implements conversable agents designed to solve tasks through inter-agent conversations. Specifically, the agents in AutoGen have the following notable features:

- Conversable: Agents in AutoGen are conversable, which means that any agent can send and receive messages from other agents to initiate or continue a conversation.
- Customizable: Agents in AutoGen can be customized to integrate LLMs, humans, tools, or a combination of them.

The figure below shows the built-in agents in AutoGen.
![Agent Chat Example](images/autogen_agents.png)

We have designed a generic [`ConversableAgent`](../reference/agentchat/conversable_agent.md#conversableagent-objects) class for Agents that are capable of conversing with each other through the exchange of messages to jointly finish a task. An agent can communicate with other agents and perform actions. Different agents can differ in what actions they perform after receiving messages. Two representative subclasses are [`AssistantAgent`](../reference/agentchat/assistant_agent.md#assistantagent-objects) and [`UserProxyAgent`](../reference/agentchat/user_proxy_agent.md#userproxyagent-objects).

- The [`AssistantAgent`](../reference/agentchat/assistant_agent.md#assistantagent-objects) is designed to act as an AI assistant, using LLMs by default but not requiring human input or code execution. It could write Python code (in a Python coding block) for a user to execute when a message (typically a description of a task that needs to be solved) is received. Under the hood, the Python code is written by an LLM (e.g., GPT-4). It can also receive the execution results and suggest corrections or bug fixes. Its behavior can be altered by passing a new system message. The LLM [inference](/docs/Use-Cases/enhanced_inference) configuration can be set via `llm_config`.

- The [`UserProxyAgent`](../reference/agentchat/user_proxy_agent.md#userproxyagent-objects) is conceptually a proxy agent for humans, soliciting human input as the agent's reply at each interaction turn by default and also having the capability to execute code and call functions or tools. The [`UserProxyAgent`](../reference/agentchat/user_proxy_agent.md#userproxyagent-objects) triggers code execution automatically when it detects an executable code block in the received message and no human user input is provided. Code execution can be disabled by setting the `code_execution_config` parameter to False. LLM-based response is disabled by default. It can be enabled by setting `llm_config` to a dict corresponding to the [inference](/docs/Use-Cases/enhanced_inference) configuration.
When `llm_config` is set as a dictionary, [`UserProxyAgent`](../reference/agentchat/user_proxy_agent.md#userproxyagent-objects) can generate replies using an LLM when code execution is not performed.

The auto-reply capability of [`ConversableAgent`](../reference/agentchat/conversable_agent.md#conversableagent-objects) allows for more autonomous multi-agent communication while retaining the possibility of human intervention.
One can also easily extend it by registering reply functions with the [`register_reply()`](../reference/agentchat/conversable_agent.md#register_reply) method (a minimal sketch follows the code below).

In the following code, we create an [`AssistantAgent`](../reference/agentchat/assistant_agent.md#assistantagent-objects) named "assistant" to serve as the assistant and a [`UserProxyAgent`](../reference/agentchat/user_proxy_agent.md#userproxyagent-objects) named "user_proxy" to serve as a proxy for the human user. We will later employ these two agents to solve a task.

```python
import os

from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import DockerCommandLineCodeExecutor

config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]

# create an AssistantAgent instance named "assistant" with the LLM configuration.
assistant = AssistantAgent(name="assistant", llm_config={"config_list": config_list})

# create a UserProxyAgent instance named "user_proxy" with code execution on docker.
code_executor = DockerCommandLineCodeExecutor()
user_proxy = UserProxyAgent(name="user_proxy", code_execution_config={"executor": code_executor})
```
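As a purely illustrative sketch of the `register_reply()` extension point mentioned above: the hook below simply logs incoming messages and then defers to the remaining reply functions. The trigger and the exact reply-function signature are assumptions based on typical usage and may differ across AutoGen versions.

```python
from typing import Any, Dict, List, Optional, Tuple, Union

from autogen import Agent, ConversableAgent


def print_and_pass_reply(
    recipient: ConversableAgent,
    messages: Optional[List[Dict]] = None,
    sender: Optional[Agent] = None,
    config: Optional[Any] = None,
) -> Tuple[bool, Union[str, Dict, None]]:
    # Log the last received message, then let the other registered reply functions run.
    if messages:
        print(f"{recipient.name} received: {messages[-1].get('content', '')!r}")
    return False, None  # False means this reply is not final


# Register the hook so it runs whenever any agent sends a message to `assistant`.
assistant.register_reply([Agent, None], print_and_pass_reply)
```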
GitHub
autogen
autogen/website/docs/Use-Cases/agent_chat.md
autogen
Multi-agent Conversations

### A Basic Two-Agent Conversation Example

Once the participating agents are constructed properly, one can start a multi-agent conversation session by an initialization step as shown in the following code:

```python
# the assistant receives a message from the user, which contains the task description
user_proxy.initiate_chat(
    assistant,
    message="""What date is today? Which big tech stock has the largest year-to-date gain this year? How much is the gain?""",
)
```

After the initialization step, the conversation could proceed automatically. Find a visual illustration of how the user_proxy and assistant collaboratively solve the above task autonomously below:
![Agent Chat Example](images/agent_example.png)

1. The assistant receives a message from the user_proxy, which contains the task description.
2. The assistant then tries to write Python code to solve the task and sends the response to the user_proxy.
3. Once the user_proxy receives a response from the assistant, it tries to reply by either soliciting human input or preparing an automatically generated reply. If no human input is provided, the user_proxy executes the code and uses the result as the auto-reply.
4. The assistant then generates a further response for the user_proxy. The user_proxy can then decide whether to terminate the conversation. If not, steps 3 and 4 are repeated.

### Supporting Diverse Conversation Patterns

#### Conversations with different levels of autonomy, and human-involvement patterns

On the one hand, one can achieve fully autonomous conversations after an initialization step. On the other hand, AutoGen can be used to implement human-in-the-loop problem-solving by configuring human involvement levels and patterns (e.g., setting the `human_input_mode` to `ALWAYS`), as human involvement is expected and/or desired in many applications.

#### Static and dynamic conversations

AutoGen, by integrating conversation-driven control utilizing both programming and natural language, inherently supports dynamic conversations. This dynamic nature allows the agent topology to adapt based on the actual conversation flow under varying input problem scenarios. Conversely, static conversations adhere to a predefined topology. Dynamic conversations are particularly beneficial in complex settings where interaction patterns cannot be predetermined.

1. Registered auto-reply

With the pluggable auto-reply function, one can choose to invoke conversations with other agents depending on the content of the current message and context. For example:
- Hierarchical chat like in [OptiGuide](https://github.com/microsoft/optiguide).
- [Dynamic Group Chat](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat.ipynb) which is a special form of hierarchical chat. In the system, we register a reply function in the group chat manager, which broadcasts messages and decides who the next speaker will be in a group chat setting.
- [Finite State Machine graphs to set speaker transition constraints](https://microsoft.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine) which is a special form of dynamic group chat. In this approach, a directed transition matrix is fed into the group chat. Users can specify legal transitions or specify disallowed transitions. A minimal sketch of such transition constraints follows this section.
- Nested chat like in [conversational chess](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_nested_chats_chess.ipynb).

2. LLM-Based Function Call

Another approach involves LLM-based function calls, where the LLM decides if a specific function should be invoked based on the conversation's status during each inference. This approach enables dynamic multi-agent conversations, as seen in scenarios like the [multi-user math problem solving scenario](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_two_users.ipynb), where a student assistant automatically seeks expertise via function calls.

### Diverse Applications Implemented with AutoGen

The figure below shows six examples of applications built using AutoGen.
![Applications](images/app.png)

Find a list of examples in this page: [Automated Agent Chat Examples](../Examples.md#automated-multi-agent-chat)
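For illustration of the speaker-transition constraints mentioned above, here is a minimal, non-authoritative sketch. It assumes a `GroupChat` constructor that accepts `allowed_or_disallowed_speaker_transitions` and `speaker_transitions_type`; parameter names may differ across AutoGen versions, so treat this as a sketch rather than the definitive API.

```python
from autogen import GroupChat, GroupChatManager

# Assume `planner`, `engineer`, and `critic` are ConversableAgent instances defined elsewhere.
# Hypothetical allowed transitions: planner -> engineer -> critic -> planner.
allowed_transitions = {
    planner: [engineer],
    engineer: [critic],
    critic: [planner],
}

group_chat = GroupChat(
    agents=[planner, engineer, critic],
    messages=[],
    max_round=6,
    allowed_or_disallowed_speaker_transitions=allowed_transitions,  # assumed parameter name
    speaker_transitions_type="allowed",  # "allowed" or "disallowed", per the linked notebook
)
manager = GroupChatManager(groupchat=group_chat, llm_config={"config_list": config_list})
```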
GitHub
autogen
autogen/website/docs/Use-Cases/agent_chat.md
autogen
For Further Reading

_Interested in the research that led to this package? Please check the following papers._

- [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155). Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang and Chi Wang. ArXiv 2023.
- [An Empirical Study on Challenging Math Problem Solving with GPT-4](https://arxiv.org/abs/2306.01337). Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang. ArXiv preprint arXiv:2306.01337 (2023).
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
# Enhanced Inference

`autogen.OpenAIWrapper` provides enhanced LLM inference for `openai>=1`.
`autogen.Completion` is a drop-in replacement of `openai.Completion` and `openai.ChatCompletion` for enhanced LLM inference using `openai<1`.
There are a number of benefits of using `autogen` to perform inference: performance tuning, API unification, caching, error handling, multi-config inference, result filtering, templating and so on.
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Tune Inference Parameters (for openai<1)

Find a list of examples in this page: [Tune Inference Parameters Examples](../Examples.md#inference-hyperparameters-tuning)

### Choices to optimize

The cost of using foundation models for text generation is typically measured in terms of the number of tokens in the input and output combined. From the perspective of an application builder using foundation models, the use case is to maximize the utility of the generated text under an inference budget constraint (e.g., measured by the average dollar cost needed to solve a coding problem). This can be achieved by optimizing the hyperparameters of the inference, which can significantly affect both the utility and the cost of the generated text.

The tunable hyperparameters include:
1. model - this is a required input, specifying the model ID to use.
1. prompt/messages - the input prompt/messages to the model, which provides the context for the text generation task.
1. max_tokens - the maximum number of tokens (words or word pieces) to generate in the output.
1. temperature - a value between 0 and 1 that controls the randomness of the generated text. A higher temperature will result in more random and diverse text, while a lower temperature will result in more predictable text.
1. top_p - a value between 0 and 1 that controls the sampling probability mass for each token generation. A lower top_p value will make it more likely to generate text based on the most likely tokens, while a higher value will allow the model to explore a wider range of possible tokens.
1. n - the number of responses to generate for a given prompt. Generating multiple responses can provide more diverse and potentially more useful output, but it also increases the cost of the request.
1. stop - a list of strings that, when encountered in the generated text, will cause the generation to stop. This can be used to control the length or the validity of the output.
1. presence_penalty, frequency_penalty - values that control the relative importance of the presence and frequency of certain words or phrases in the generated text.
1. best_of - the number of responses to generate server-side when selecting the "best" (the one with the highest log probability per token) response for a given prompt.

The cost and utility of text generation are intertwined with the joint effect of these hyperparameters.
There are also complex interactions among subsets of the hyperparameters. For example, temperature and top_p are not recommended to be altered from their default values together because they both control the randomness of the generated text, and changing both at the same time can result in conflicting effects; n and best_of are rarely tuned together because if the application can process multiple outputs, filtering on the server side causes unnecessary information loss; both n and max_tokens will affect the total number of tokens generated, which in turn will affect the cost of the request.
These interactions and trade-offs make it difficult to manually determine the optimal hyperparameter settings for a given text generation task.

*Do the choices matter? Check this [blogpost](/blog/2023/04/21/LLM-tuning-math) to find example tuning results about gpt-3.5-turbo and gpt-4.*

With AutoGen, the tuning can be performed with the following information:
1. Validation data.
1. Evaluation function.
1. Metric to optimize.
1. Search space.
1. Budgets: inference and optimization respectively.

### Validation data

Collect a diverse set of instances. They can be stored in an iterable of dicts. For example, each instance dict can contain "problem" as a key and the description str of a math problem as the value; and "solution" as a key and the solution str as the value.

### Evaluation function

The evaluation function should take a list of responses, and other keyword arguments corresponding to the keys in each validation data instance, as input, and output a dict of metrics. For example,

```python
from typing import Dict, List


def eval_math_responses(responses: List[str], solution: str, **args) -> Dict:
    # select a response from the list of responses
    # (voted_answer and is_equivalent stand for helper functions, e.g. like those in autogen.math_utils)
    answer = voted_answer(responses)
    # check whether the answer is correct
    return {"success": is_equivalent(answer, solution)}
```

`autogen.code_utils` and `autogen.math_utils` offer some example evaluation functions for code generation and math problem solving.

### Metric to optimize

The metric to optimize is usually an aggregated metric over all the tuning data instances. For example, users can specify "success" as the metric and "max" as the optimization mode. By default, the aggregation function is taking the average. Users can provide a customized aggregation function if needed.

### Search space

Users can specify the (optional) search range for each hyperparameter.

1. model. Either a constant str, or multiple choices specified by `flaml.tune.choice`.
1. prompt/messages. The prompt is either a str or a list of strs, of the prompt templates. The messages is a list of dicts or a list of lists, of the message templates. Each prompt/message template will be formatted with each data instance. For example, the prompt template can be: "{problem} Solve the problem carefully. Simplify your answer as much as possible. Put the final answer in \\boxed{{}}." And `{problem}` will be replaced by the "problem" field of each data instance.
1. max_tokens, n, best_of. They can be constants, or specified by `flaml.tune.randint`, `flaml.tune.qrandint`, `flaml.tune.lograndint` or `flaml.tune.qlograndint`. By default, max_tokens is searched in [50, 1000); n is searched in [1, 100); and best_of is fixed to 1.
1. stop. It can be a str or a list of strs, or a list of lists of strs or None. Default is None.
1. temperature or top_p. One of them can be specified as a constant or by `flaml.tune.uniform` or `flaml.tune.loguniform` etc. Please don't provide both. By default, each configuration will choose either a temperature or a top_p in [0, 1] uniformly.
1. presence_penalty, frequency_penalty. They can be constants or specified by `flaml.tune.uniform` etc. Not tuned by default.

### Budgets

One can specify an inference budget and an optimization budget. The inference budget refers to the average inference cost per data instance. The optimization budget refers to the total budget allowed in the tuning process. Both are measured in dollars and follow the price per 1000 tokens.

### Perform tuning

Now, you can use `autogen.Completion.tune` for tuning. For example,

```python
import autogen

config, analysis = autogen.Completion.tune(
    data=tune_data,
    metric="success",
    mode="max",
    eval_func=eval_func,
    inference_budget=0.05,
    optimization_budget=3,
    num_samples=-1,
)
```

`num_samples` is the number of configurations to sample. -1 means unlimited (until the optimization budget is exhausted). The returned `config` contains the optimized configuration, and `analysis` contains an ExperimentAnalysis object for all the tried configurations and results.

The tuned config can be used to perform inference.
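To make the inputs above concrete, here is a minimal, illustrative sketch of how the validation data and the call to `tune` fit together. The two math problems and the wiring of `eval_math_responses` are hypothetical placeholders rather than content from the original documentation.

```python
import autogen

# Hypothetical validation data: each dict's keys ("problem", "solution") must match
# the keyword arguments expected by the evaluation function.
tune_data = [
    {"problem": "What is 2 + 2?", "solution": "4"},
    {"problem": "What is the remainder when 10 is divided by 3?", "solution": "1"},
]

config, analysis = autogen.Completion.tune(
    data=tune_data,
    metric="success",
    mode="max",
    eval_func=eval_math_responses,  # the evaluation function defined above
    inference_budget=0.05,          # average dollars per data instance
    optimization_budget=1,          # total dollars for the whole tuning run
    num_samples=20,
)
print(config)
```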
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
API unification

`autogen.OpenAIWrapper.create()` can be used to create completions for both chat and non-chat models, and for both the OpenAI API and the Azure OpenAI API.

```python
from autogen import OpenAIWrapper

# OpenAI endpoint
client = OpenAIWrapper()
# ChatCompletion
response = client.create(messages=[{"role": "user", "content": "2+2="}], model="gpt-3.5-turbo")
# extract the response text
print(client.extract_text_or_completion_object(response))
# get cost of this completion
print(response.cost)

# Azure OpenAI endpoint
client = OpenAIWrapper(api_key=..., base_url=..., api_version=..., api_type="azure")
# Completion
response = client.create(prompt="2+2=", model="gpt-3.5-turbo-instruct")
# extract the response text
print(client.extract_text_or_completion_object(response))
```

For local LLMs, one can spin up an endpoint using a package like [FastChat](https://github.com/lm-sys/FastChat), and then use the same API to send a request. See [here](/blog/2023/07/14/Local-LLMs) for examples on how to make inference with local LLMs.

For custom model clients, one can register the client with `autogen.OpenAIWrapper.register_model_client` and then use the same API to send a request. See [here](/blog/2024/01/26/Custom-Models) for examples on how to make inference with custom model clients.
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Usage Summary

The `OpenAIWrapper` from `autogen` tracks token counts and costs of your API calls. Use the `create()` method to initiate requests and `print_usage_summary()` to retrieve a detailed usage report, including total cost and token usage for both cached and actual requests.

- `mode=["actual", "total"]` (default): print usage summary for all completions and non-caching completions.
- `mode='actual'`: only print non-cached usage.
- `mode='total'`: only print all usage (including cache).

Reset your session's usage data with `clear_usage_summary()` when needed. [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_client_cost.ipynb)

Example usage:
```python
from autogen import OpenAIWrapper

client = OpenAIWrapper()
client.create(messages=[{"role": "user", "content": "Python learning tips."}], model="gpt-3.5-turbo")
client.print_usage_summary()  # Display usage
client.clear_usage_summary()  # Reset usage data
```

Sample output:
```
Usage summary excluding cached usage:
Total cost: 0.00015
* Model 'gpt-3.5-turbo': cost: 0.00015, prompt_tokens: 25, completion_tokens: 58, total_tokens: 83

Usage summary including cached usage:
Total cost: 0.00027
* Model 'gpt-3.5-turbo': cost: 0.00027, prompt_tokens: 50, completion_tokens: 100, total_tokens: 150
```

Note: if using a custom model client (see [here](/blog/2024/01/26/Custom-Models) for details) and if usage summary is not implemented, then the usage summary will not be available.
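As a small follow-up to the `mode` options listed above, the snippet below sketches how the different modes could be requested explicitly. It assumes `print_usage_summary()` accepts `mode` as a keyword argument, as described in this section; treat it as an illustrative sketch rather than a definitive signature.

```python
from autogen import OpenAIWrapper

client = OpenAIWrapper()
client.create(messages=[{"role": "user", "content": "Python learning tips."}], model="gpt-3.5-turbo")

client.print_usage_summary(mode="actual")             # only non-cached usage
client.print_usage_summary(mode="total")              # all usage, including cache hits
client.print_usage_summary(mode=["actual", "total"])  # both reports (the default)
```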
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Caching

Moved to [here](/docs/topics/llm-caching).
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Error handling

### Runtime error

One can pass a list of configurations of different models/endpoints to mitigate rate limits and other runtime errors. For example,

```python
import os

client = OpenAIWrapper(
    config_list=[
        {
            "model": "gpt-4",
            "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
            "api_type": "azure",
            "base_url": os.environ.get("AZURE_OPENAI_API_BASE"),
            "api_version": "2024-02-01",
        },
        {
            "model": "gpt-3.5-turbo",
            "api_key": os.environ.get("OPENAI_API_KEY"),
            "base_url": "https://api.openai.com/v1",
        },
        {
            "model": "llama2-chat-7B",
            "base_url": "http://127.0.0.1:8080",
        },
        {
            "model": "microsoft/phi-2",
            "model_client_cls": "CustomModelClient",
        },
    ],
)
```

`client.create()` will try querying Azure OpenAI gpt-4, OpenAI gpt-3.5-turbo, a locally hosted llama2-chat-7B, and phi-2 using a custom model client class named `CustomModelClient`, one by one, until a valid result is returned. This can speed up the development process where the rate limit is a bottleneck. An error will be raised if the last choice fails, so make sure the last choice in the list has the best availability.

For convenience, we provide a number of utility functions to load config lists.

- `get_config_list`: Generates configurations for API calls, primarily from provided API keys.
- `config_list_openai_aoai`: Constructs a list of configurations using both Azure OpenAI and OpenAI endpoints, sourcing API keys from environment variables or local files.
- `config_list_from_json`: Loads configurations from a JSON structure, either from an environment variable or a local JSON file, with the flexibility of filtering configurations based on given criteria.
- `config_list_from_models`: Creates configurations based on a provided list of models, useful when targeting specific models without manually specifying each configuration.
- `config_list_from_dotenv`: Constructs a configuration list from a `.env` file, offering a consolidated way to manage multiple API configurations and keys from a single file.

We suggest that you take a look at this [notebook](/docs/topics/llm_configuration) for full code examples of the different methods to configure your model endpoints. A short illustrative sketch of `config_list_from_json` follows at the end of this section.

### Logic error

Another type of error is that the returned response does not satisfy a requirement. For example, if the response is required to be a valid json string, one would like to filter out the responses that are not. This can be achieved by providing a list of configurations and a filter function. For example,

```python
import json

def valid_json_filter(response, **_):
    for text in OpenAIWrapper.extract_text_or_completion_object(response):
        try:
            json.loads(text)
            return True
        except ValueError:
            pass
    return False

client = OpenAIWrapper(
    config_list=[{"model": "text-ada-001"}, {"model": "gpt-3.5-turbo-instruct"}, {"model": "text-davinci-003"}],
)
response = client.create(
    prompt="How to construct a json request to Bing API to search for 'latest AI news'? Return the JSON request.",
    filter_func=valid_json_filter,
)
```

The example above will try to use text-ada-001, gpt-3.5-turbo-instruct, and text-davinci-003 iteratively, until a valid json string is returned or the last config is used. One can also repeat the same model multiple times in the list (with different seeds) to increase the robustness of the final response.

*Advanced use case: Check this [blogpost](/blog/2023/05/18/GPT-adaptive-humaneval) to find how to improve GPT-4's coding performance from 68% to 90% while reducing the inference cost.*
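Picking up the `config_list_from_json` utility listed above, here is a minimal sketch of loading and filtering a config list. The `OAI_CONFIG_LIST` environment variable / file name follows the usual AutoGen convention, but the filter values are placeholders; adapt them to your own configuration.

```python
import autogen

# Load configurations from the OAI_CONFIG_LIST env var or a local OAI_CONFIG_LIST file,
# keeping only entries whose model is gpt-4 or gpt-3.5-turbo (placeholder filter).
config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4", "gpt-3.5-turbo"]},
)

client = autogen.OpenAIWrapper(config_list=config_list)
response = client.create(messages=[{"role": "user", "content": "2+2="}])
print(client.extract_text_or_completion_object(response))
```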
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Templating

If the provided prompt or message is a template, it will be automatically materialized with a given context. For example,

```python
response = client.create(
    context={"problem": "How many positive integers, not exceeding 100, are multiples of 2 or 3 but not 4?"},
    prompt="{problem} Solve the problem carefully.",
    allow_format_str_template=True,
    **config
)
```

A template is either a format str, like the example above, or a function which produces a str from several input fields, like the example below.

```python
from functools import partial


def content(turn, context):
    return "\n".join(
        [
            context[f"user_message_{turn}"],
            context[f"external_info_{turn}"]
        ]
    )


messages = [
    {
        "role": "system",
        "content": "You are a teaching assistant of math.",
    },
    {
        "role": "user",
        "content": partial(content, turn=0),
    },
]
context = {
    "user_message_0": "Could you explain the solution to Problem 1?",
    "external_info_0": "Problem 1: ...",
}

response = client.create(context=context, messages=messages, **config)
messages.append(
    {
        "role": "assistant",
        "content": client.extract_text(response)[0]
    }
)
messages.append(
    {
        "role": "user",
        "content": partial(content, turn=1),
    },
)
context.update(
    {
        "user_message_1": "Why can't we apply Theorem 1 to Equation (2)?",
        "external_info_1": "Theorem 1: ...",
    }
)
response = client.create(context=context, messages=messages, **config)
```
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Logging

When debugging or diagnosing an LLM-based system, it is often convenient to log the API calls and analyze them.

### For openai >= 1

Logging example: [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_logging.ipynb)

#### Start logging:
```python
import autogen.runtime_logging

autogen.runtime_logging.start(logger_type="sqlite", config={"dbname": "YOUR_DB_NAME"})
```
`logger_type` and `config` are both optional. The default logger type is the SQLite logger, which is the only one available in autogen at the moment. If you want to customize the database name, you can pass it in through `config`; the default is `logs.db`.

#### Stop logging:
```python
autogen.runtime_logging.stop()
```

#### LLM Runs

AutoGen logging supports OpenAI's llm message schema. Each LLM run is saved in the `chat_completions` table, which includes:
- session_id: a unique identifier for the logging session
- invocation_id: a unique identifier for the logging record
- client_id: a unique identifier for the Azure OpenAI/OpenAI client
- request: detailed llm request, see below for an example
- response: detailed llm response, see below for an example
- cost: total cost for the request and response
- start_time
- end_time

##### Sample Request
```json
{
  "messages": [
    {
      "content": "system_message_1",
      "role": "system"
    },
    {
      "content": "user_message_1",
      "role": "user"
    }
  ],
  "model": "gpt-4",
  "temperature": 0.9
}
```

##### Sample Response
```json
{
  "id": "id_1",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "message": {
        "content": "assistant_message_1",
        "role": "assistant",
        "function_call": null,
        "tool_calls": null
      }
    }
  ],
  "created": "<timestamp>",
  "model": "gpt-4",
  "object": "chat.completion",
  "system_fingerprint": null,
  "usage": {
    "completion_tokens": 155,
    "prompt_tokens": 53,
    "total_tokens": 208
  }
}
```

Learn more about [request and response format](https://platform.openai.com/docs/api-reference/chat/create)

### For openai < 1

`autogen.Completion` and `autogen.ChatCompletion` offer an easy way to collect the API call histories. For example, to log the chat histories, simply run:
```python
autogen.ChatCompletion.start_logging()
```
The API calls made after this will be automatically logged. They can be retrieved at any time by:
```python
autogen.ChatCompletion.logged_history
```
There is a function that can be used to print the usage summary (total cost, and token count usage from each model):
```python
autogen.ChatCompletion.print_usage_summary()
```
To stop logging, use
```python
autogen.ChatCompletion.stop_logging()
```
If one would like to append the history to an existing dict, pass the dict like:
```python
autogen.ChatCompletion.start_logging(history_dict=existing_history_dict)
```
By default, the counter of API calls will be reset at `start_logging()`. If no reset is desired, set `reset_counter=False`.

There are two types of logging formats: compact logging and individual API call logging. The default format is compact. Set `compact=False` in `start_logging()` to switch.

* Example of a history dict with compact logging.
```python
{
    """
    [
        {
            'role': 'system',
            'content': system_message,
        },
        {
            'role': 'user',
            'content': user_message_1,
        },
        {
            'role': 'assistant',
            'content': assistant_message_1,
        },
        {
            'role': 'user',
            'content': user_message_2,
        },
        {
            'role': 'assistant',
            'content': assistant_message_2,
        },
    ]""": {
        "created_at": [0, 1],
        "cost": [0.1, 0.2],
    }
}
```

* Example of a history dict with individual API call logging.
```python
{
    0: {
        "request": {
            "messages": [
                {
                    "role": "system",
                    "content": system_message,
                },
                {
                    "role": "user",
                    "content": user_message_1,
                }
            ],
            ...  # other parameters in the request
        },
        "response": {
            "choices": [
                {
                    "message": {
                        "role": "assistant",
                        "content": assistant_message_1,
                    },
                },
            ],
            ...  # other fields in the response
        }
    },
    1: {
        "request": {
            "messages": [
                {
                    "role": "system",
                    "content": system_message,
                },
                {
                    "role": "user",
                    "content": user_message_1,
                },
                {
                    "role": "assistant",
                    "content": assistant_message_1,
                },
                {
                    "role": "user",
                    "content": user_message_2,
                },
            ],
            ...  # other parameters in the request
        },
        "response": {
            "choices": [
                {
                    "message": {
                        "role": "assistant",
                        "content": assistant_message_2,
                    },
                },
            ],
            ...  # other fields in the response
        }
    },
}
```

* Example of printing the usage summary
```
Total cost: <cost>
Token count summary for model <model>: prompt_tokens: <count 1>, completion_tokens: <count 2>, total_tokens: <count 3>
```

It can be seen that the individual API call history contains redundant information about the conversation. For a long conversation the degree of redundancy is high. The compact history is more efficient, while the individual API call history contains more details.
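Returning to the SQLite runtime logging described above for `openai >= 1`: as a purely illustrative sketch, the logged runs could be read back out of the default `logs.db` along the lines below. The query assumes the `chat_completions` table and the column names listed earlier in this section; verify them against your actual database before relying on this.

```python
import json
import sqlite3

# Open the default runtime-logging database (adjust the path if you passed a custom dbname).
with sqlite3.connect("logs.db") as con:
    rows = con.execute(
        "SELECT session_id, request, response, cost, start_time, end_time FROM chat_completions"
    ).fetchall()

for session_id, request, response, cost, start_time, end_time in rows:
    request = json.loads(request)  # the request is assumed to be stored as a JSON string
    print(session_id, cost, start_time, end_time, request.get("model"))
```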
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
# Docker for Development

For developers contributing to the AutoGen project, we offer a specialized Docker environment. This setup is designed to streamline the development process, ensuring that all contributors work within a consistent and well-equipped environment.
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
Autogen Developer Image (autogen_dev_img)

- **Purpose**: The `autogen_dev_img` is tailored for contributors to the AutoGen project. It includes a suite of tools and configurations that aid in the development and testing of new features or fixes.
- **Usage**: This image is recommended for developers who intend to contribute code or documentation to AutoGen.
- **Forking the Project**: It's advisable to fork the AutoGen GitHub project to your own repository. This allows you to make changes in a separate environment without affecting the main project.
- **Updating Dockerfile**: Modify your copy of `Dockerfile` in the `dev` folder as needed for your development work.
- **Submitting Pull Requests**: Once your changes are ready, submit a pull request from your branch to the upstream AutoGen GitHub project for review and integration. For more details on contributing, see the [AutoGen Contributing](https://microsoft.github.io/autogen/docs/Contribute) page.
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
Building the Developer Docker Image

- To build the developer Docker image (`autogen_dev_img`), use the following commands:

```bash
docker build -f .devcontainer/dev/Dockerfile -t autogen_dev_img https://github.com/microsoft/autogen.git#main
```

- To build the developer image from a specific Dockerfile in a branch other than main/master:

```bash
# clone the branch you want to work out of
git clone --branch {branch-name} https://github.com/microsoft/autogen.git

# cd to your new directory
cd autogen

# build your Docker image
docker build -f .devcontainer/dev/Dockerfile -t autogen_dev-srv_img .
```
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
Using the Developer Docker Image

Once you have built the `autogen_dev_img`, you can run it using the standard Docker commands. This will place you inside the containerized development environment where you can run tests, develop code, and ensure everything is functioning as expected before submitting your contributions.

```bash
docker run -it -p 8081:3000 -v `pwd`/autogen-newcode:newstuff/ autogen_dev_img bash
```

- Note that `pwd` is shorthand for the present working directory, so any path after it is relative to that directory. If you prefer a more explicit form, you can replace "`pwd`/autogen-newcode" with the full path to your directory:

```bash
docker run -it -p 8081:3000 -v /home/AutoGenDeveloper/autogen-newcode:newstuff/ autogen_dev_img bash
```
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
Develop in Remote Container

If you use vscode, you can open the autogen folder in a [Container](https://code.visualstudio.com/docs/remote/containers). We have provided the configuration in [devcontainer](https://github.com/microsoft/autogen/blob/main/.devcontainer); it can be used in GitHub Codespaces too. Developing AutoGen in dev containers is recommended.