
Agentic Design Patterns Implementation

Project: MSU AI Agents Coursework (Fall 2025)

Python · LLM Agents · Prompt Engineering · Design Patterns

This coursework involved designing and implementing an autonomous AI agent capable of multi-step reasoning and tool use. The primary goal was to move beyond simple prompt-response interactions by structuring the agent's internal process around established **Agentic Design Patterns** to enhance reliability and capability.

The agent had to interact with a defined external environment and use a set of predefined tools to solve complex, novel problems requiring dynamic planning.

Programming Deep Dive: ReAct Pattern

The agent was built around the **ReAct (Reasoning and Acting)** pattern, which requires the Large Language Model (LLM) to interleave its **Thought (Reasoning)** process with its **Action (Tool Use)** steps.

This approach dramatically improves the agent's ability to plan and self-correct compared to plain Chain-of-Thought prompting. The prompt explicitly instructed the LLM to output a sequence of `Thought` and `Action(tool_call)` blocks, which the custom agent framework parsed in order to execute the tool and feed the resulting observation back into the next prompt iteration.
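A minimal sketch of how such a loop can be wired is shown below. The `llm_complete` client, the `Action(tool: argument)` syntax, and the helper names are illustrative assumptions; the course framework's actual parser and model client are not reproduced in this write-up.

```python
import re
from typing import Callable

# Matches a line such as: Action(search: latest Python release)
# The exact Thought/Action syntax here is an assumption, not the course's verbatim format.
ACTION_RE = re.compile(r"Action\((?P<tool>\w+)\s*:\s*(?P<arg>.*)\)")

def run_react_loop(
    task: str,
    llm_complete: Callable[[str], str],            # hypothetical LLM client
    tools: dict[str, Callable[[str], str]],
    max_turns: int = 10,
) -> str:
    """Interleave Thought/Action output with tool execution and observations."""
    history = f"Task: {task}\n"
    for _ in range(max_turns):
        output = llm_complete(history)             # model emits Thought + Action blocks
        history += output + "\n"
        match = ACTION_RE.search(output)
        if match is None:                          # no Action emitted -> treat as final answer
            return output
        tool = tools.get(match.group("tool"), lambda _: "Error: unknown tool")
        observation = tool(match.group("arg"))
        history += f"Observation: {observation}\n" # feed the result into the next iteration
    return "Stopped: turn limit reached"
```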

The agent components implemented were: the **Planner** (the LLM itself), **Memory** (for storing conversation and past observations), and the **Tool Kit** (a collection of functions the agent could call, such as a code execution sandbox or external API query).
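As a rough illustration, the Memory and Tool Kit components can be sketched as small Python classes like the ones below. The class names and interfaces are assumptions made for this sketch; the Planner is simply the LLM call itself and is not shown.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Memory:
    """Keeps the running transcript of thoughts, actions, and observations."""
    entries: list[str] = field(default_factory=list)

    def add(self, entry: str) -> None:
        self.entries.append(entry)

    def as_context(self) -> str:
        # The joined transcript is prepended to each new prompt to the Planner (the LLM).
        return "\n".join(self.entries)

@dataclass
class ToolKit:
    """Registry of named, callable tools the Planner may invoke."""
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def call(self, name: str, argument: str) -> str:
        if name not in self.tools:
            return f"Error: unknown tool '{name}'"
        return self.tools[name](argument)
```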

Problem Solving: Reliable Tool Parsing

A critical challenge in developing tool-using agents is ensuring that the LLM's output for tool calls can be reliably parsed by the Python execution environment. Unstructured text output leads to frequent parsing errors (often described as 'tool hallucination' or malformed output).
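To make the failure mode concrete, compare a free-text tool call with a structured one. The JSON field names (`tool`, `arguments`) and the `code_sandbox` tool below are illustrative assumptions, not the course's exact schema.

```python
import json

# Free-text output like this is brittle: parsing it requires fragile heuristics,
# and small variations in wording break the agent framework.
free_text = "I will now run the code. Action: use the sandbox to run print(2 + 2)"

# A structured action is machine-checkable. Field names here are an assumption.
structured = '{"tool": "code_sandbox", "arguments": {"code": "print(2 + 2)"}}'

action = json.loads(structured)               # raises json.JSONDecodeError if malformed
assert {"tool", "arguments"} <= action.keys() # schema check on the required fields
print(action["tool"])                         # -> code_sandbox
```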

Solution: JSON Structured Output and Guardrails

To enforce strict tool-call syntax, the prompt was engineered to request a specific **JSON format** for all actions. If the LLM deviated from this JSON schema, a "Reflect" step was triggered:

  1. If parsing failed, the system generated an `Observation` stating the tool call was malformed.
  2. This observation was fed back to the LLM, prompting it to **reflect** on the error and correct the JSON structure in the subsequent turn (a condensed sketch of this loop follows the list).
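A condensed sketch of that closed loop, assuming the hypothetical `llm_complete` client and the illustrative JSON schema shown earlier; the actual course framework is not reproduced here.

```python
import json
from typing import Callable

def parse_action(raw: str) -> dict | None:
    """Return the parsed tool call, or None if the output is malformed."""
    try:
        action = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if isinstance(action, dict) and {"tool", "arguments"} <= set(action):
        return action
    return None

def act_with_reflection(
    history: str,
    llm_complete: Callable[[str], str],    # hypothetical LLM client
    max_retries: int = 2,
) -> dict:
    """Request an action; on a parse failure, feed the error back as an Observation."""
    for _ in range(max_retries + 1):
        raw = llm_complete(history)
        action = parse_action(raw)
        if action is not None:
            return action
        # Reflect step: surface the failure so the model can repair its own output.
        history += (
            "\nObservation: the previous tool call was malformed JSON. "
            "Re-emit the action as valid JSON with 'tool' and 'arguments' keys.\n"
        )
    raise RuntimeError("Agent failed to produce a parsable tool call")
```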

This closed-loop feedback mechanism significantly increased the agent's reliability, achieving a tool-execution success rate above 95% in complex, multi-step problem environments.

Result and Impact

The final agent successfully navigated environments requiring the sequential use of multiple tools and demonstrated strong zero-shot planning capabilities. This project provided deep practical experience in prompt engineering, managing complex state and memory, and building robust, production-ready AI agent systems, which are key to next-generation software development.