
Aurimas Griciūnas

@Aurimas_Gr

8/14/2025, 12:51:15 PM

You must know these Agentic System Workflow Patterns as an AI Engineer.

If you are building Agentic Systems in an Enterprise setting, you will soon discover that the simplest workflow patterns work best and bring the most business value.

At the end of last year, Anthropic did a great job summarising the top patterns for these workflows, and they still hold strong.

Let's explore what they are and where each can be useful:

1. Prompt Chaining: This pattern decomposes a complex task and solves it in manageable pieces by chaining them together. The output of one LLM call becomes the input to the next.

✅ In most cases such decomposition results in higher accuracy, at the cost of added latency.
ℹ️ In heavy production use cases, Prompt Chaining is combined with the following patterns: another pattern replaces an LLM call node in the chain.

2. Routing: In this pattern, the input is classified into one of multiple potential paths and the appropriate path is taken.

✅ Useful when the workflow is complex and specific topology paths could be more efficiently solved by a specialized workflow.
ℹ️ Example: Agentic Chatbot - should I answer the question with RAG, or should I perform some action that the user has asked for?

3. Parallelization: Initial input is split into multiple queries to be passed to the LLM, then the answers are aggregated to produce the final answer.

✅ Useful when speed is important and multiple inputs can be processed in parallel without needing to wait for other outputs. Also, when additional accuracy is required.
ℹ️ Example 1: Query rewrite in Agentic RAG to produce multiple different queries for majority voting. Improves accuracy.
ℹ️ Example 2: Multiple items are extracted from an invoice, all of them can be processed further in parallel for better speed.

4. Orchestrator: An orchestrator LLM dynamically breaks down tasks and delegates to other LLMs or sub-workflows.

✅ Useful when the system is complex and there is no clear hardcoded topology path to achieve the final result.
ℹ️ Example: Choice of datasets to be used in Agentic RAG.

5. Evaluator-optimizer: A Generator LLM produces a result, then an Evaluator LLM evaluates it and provides feedback for further improvement if necessary.

✅ Useful for tasks that require continuous refinement.
ℹ️ Example: Deep Research Agent workflow when refinement of a report paragraph via continuous web search is required.

๐—ง๐—ถ๐—ฝ๐˜€:

โ—๏ธ Before going for full fledged Agents you should always try to solve a problem with simpler Workflows described in the article.

What are the most complex workflows you have deployed to production? Let me know in the comments 👇

#LLM #AI #MachineLearning