You must know these Agentic System Workflow Patterns as an AI Engineer.

If you are building agentic systems in an enterprise setting, you will soon discover that the simplest workflow patterns work best and bring the most business value. At the end of last year, Anthropic did a great job summarising the top patterns for these workflows, and they still hold strong.

Let's explore what they are and where each can be useful:

1. Prompt Chaining: A complex task is decomposed into manageable pieces that are solved by chaining LLM calls together, so the output of one call becomes the input to the next.

✅ In most cases such decomposition results in higher accuracy, at the cost of latency.

ℹ️ In heavy production use cases, Prompt Chaining is combined with the patterns below: any of them can replace an LLM call node in the Prompt Chaining topology.

2. Routing: The input is classified into one of several possible paths, and the appropriate path is taken.

✅ Useful when the workflow is complex and specific paths in the topology can be solved more efficiently by a specialized sub-workflow.

ℹ️ Example: an agentic chatbot - should I answer the question with RAG, or should I perform the action the user asked for?

3. Parallelization: The initial input is split into multiple queries that are passed to the LLM in parallel, and the answers are then aggregated to produce the final answer.

✅ Useful when speed matters and multiple inputs can be processed in parallel without waiting for other outputs, or when additional accuracy is required.

ℹ️ Example 1: query rewriting in Agentic RAG to produce multiple different queries for majority voting. Improves accuracy.

ℹ️ Example 2: multiple line items are extracted from an invoice, and all of them are processed further in parallel for better speed.

4. Orchestrator: An orchestrator LLM dynamically breaks down the task and delegates the pieces to other LLMs or sub-workflows.

✅ Useful when the system is complex and there is no clear hardcoded topology path to the final result.

ℹ️ Example: choosing which datasets to use in Agentic RAG.

5. Evaluator-Optimizer: A generator LLM produces a result, then an evaluator LLM assesses it and provides feedback for further improvement if necessary.

✅ Useful for tasks that require continuous refinement.

ℹ️ Example: a Deep Research agent workflow, where a report paragraph is refined through repeated web searches.

Minimal Python sketches of each pattern are included at the end of this post.

Tips:

☝️ Before going for full-fledged Agents, you should always try to solve the problem with the simpler workflows described in the article.

What are the most complex workflows you have deployed to production? Let me know in the comments 👇

#LLM #AI #MachineLearning
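
A minimal sketch of pattern 1, Prompt Chaining. The three-step chain (extract facts, outline, summarise) and the call_llm helper are illustrative assumptions, not any specific framework's API; call_llm stands in for whichever LLM client you actually use.

```python
# Prompt Chaining: each LLM call's output becomes the next call's input.
# `call_llm` is a hypothetical placeholder for your real LLM client.

def call_llm(prompt: str) -> str:
    """Placeholder: swap in your actual LLM client call here."""
    return f"<llm response to: {prompt[:40]!r}...>"

def prompt_chain(document: str) -> str:
    # Step 1: extract the key facts from the raw document.
    facts = call_llm(f"Extract the key facts from this text:\n{document}")
    # Step 2: the output of the first call is the input to the second.
    outline = call_llm(f"Turn these facts into a report outline:\n{facts}")
    # Step 3: the final call turns the outline into a polished summary.
    return call_llm(f"Write a short summary following this outline:\n{outline}")

if __name__ == "__main__":
    print(prompt_chain("Quarterly revenue grew 12%, driven by the EU region."))
```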
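
A minimal sketch of pattern 2, Routing, for the chatbot example above. The 'rag'/'action' labels, the handlers, and call_llm are assumptions made for illustration; the key idea is one cheap classification call followed by a dispatch table.

```python
# Routing: a classifier LLM picks the path, then a specialized handler runs.
# `call_llm`, the route labels and the handlers are illustrative placeholders.

def call_llm(prompt: str) -> str:
    return "rag"  # Placeholder: a real client would return the chosen label.

def answer_with_rag(question: str) -> str:
    return f"[RAG answer to: {question}]"

def perform_action(question: str) -> str:
    return f"[action executed for: {question}]"

ROUTES = {"rag": answer_with_rag, "action": perform_action}

def route(question: str) -> str:
    label = call_llm(
        "Classify this user request as 'rag' (knowledge question) or "
        f"'action' (task to execute). Reply with one word.\n{question}"
    ).strip().lower()
    # Fall back to the default path if the classifier returns an unknown label.
    return ROUTES.get(label, answer_with_rag)(question)
```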
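
A minimal sketch of pattern 3, Parallelization, using a thread pool and majority voting over rewritten queries (Example 1 above). Both call_llm and rewrite_query are hypothetical stubs; in a real system both would hit your LLM.

```python
# Parallelization: fan the query variants out concurrently, then aggregate by vote.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    return "42"  # Placeholder for a real LLM call.

def rewrite_query(question: str, n: int = 3) -> list[str]:
    # In practice an LLM would produce n paraphrases; hardcoded variants here.
    return [f"{question} (variant {i})" for i in range(n)]

def parallel_answer(question: str) -> str:
    variants = rewrite_query(question)
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(call_llm, variants))  # run variants in parallel
    # Majority vote across the independent answers.
    return Counter(answers).most_common(1)[0][0]
```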
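
A minimal sketch of pattern 4, Orchestrator. The JSON plan format, the worker names, and the canned call_llm stub are assumptions made up for this example; the point is that the planner LLM decides the topology at runtime instead of it being hardcoded.

```python
# Orchestrator: a planner LLM chooses which sub-workflows to run, then a
# synthesis call merges their results. All names here are illustrative.
import json

def call_llm(prompt: str) -> str:
    # Placeholder: always returns a canned plan; swap in a real client.
    return '{"subtasks": [{"worker": "search_invoices", "input": "Q3 totals"}]}'

WORKERS = {
    "search_invoices": lambda q: f"[invoice hits for {q}]",
    "search_contracts": lambda q: f"[contract hits for {q}]",
}

def orchestrate(task: str) -> str:
    plan = json.loads(call_llm(f"Break this task into subtasks as JSON: {task}"))
    results = [
        WORKERS[step["worker"]](step["input"])
        for step in plan["subtasks"]
        if step["worker"] in WORKERS  # ignore workers we do not recognise
    ]
    return call_llm(f"Combine these partial results into one answer: {results}")
```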
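
A minimal sketch of pattern 5, Evaluator-Optimizer. The APPROVED convention, the max_rounds cap, and the call_llm stub are assumptions; the loop structure (generate, critique, revise until approved or out of budget) is the pattern itself.

```python
# Evaluator-Optimizer: a generator drafts, an evaluator critiques, and the
# loop stops on approval or after a bounded number of refinement rounds.

def call_llm(prompt: str) -> str:
    return "APPROVED"  # Placeholder for a real LLM call.

def refine(task: str, max_rounds: int = 3) -> str:
    draft = call_llm(f"Write a report paragraph for: {task}")
    for _ in range(max_rounds):
        feedback = call_llm(
            f"Critique this draft for the task: {task}\n\n{draft}\n\n"
            "Reply APPROVED if it is good enough, otherwise give concrete feedback."
        )
        if feedback.strip().upper().startswith("APPROVED"):
            break  # the evaluator is satisfied, stop refining
        draft = call_llm(f"Improve the draft using this feedback:\n{feedback}\n\n{draft}")
    return draft
```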
