
Closing the loop on agents with test-driven development

by Delarno


Traditionally, developers have used test-driven development (TDD) to validate applications before implementing the actual functionality. In this approach, developers follow a cycle: write a test designed to fail, write the minimum code necessary to make the test pass, refactor the code to improve quality, and repeat by adding more tests.

As AI agents have entered the conversation, the way developers use TDD has changed. Rather than evaluating for exact answers, they are evaluating behaviors, reasoning, and decision-making, and they must continuously adjust based on real-world feedback. This development process also helps mitigate unforeseen hallucinations as we give AI systems more control.

The ideal AI product development process follows the experimentation, evaluation, deployment, and monitoring format. Developers who follow this structured approach can better build reliable agentic workflows. 

Stage 1: Experimentation: In this first phase of test-driven development, developers test whether the models can solve the intended use case. Best practices include experimenting with prompting techniques and testing different architectures. Involving subject matter experts in this phase also saves engineering time. Other best practices include staying model and inference-provider agnostic and experimenting with different modalities.
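One way to stay model- and provider-agnostic during experimentation is to put every candidate model behind the same interface so prompt variants can be compared side by side. The sketch below is a minimal illustration of that pattern; the `call_provider_a` and `call_provider_b` functions are hypothetical stubs standing in for real SDK calls, not any specific vendor's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    name: str
    call: Callable[[str], str]  # prompt in, completion out

def call_provider_a(prompt: str) -> str:
    # Stub: swap in a real provider SDK call here.
    return f"provider-a response to: {prompt}"

def call_provider_b(prompt: str) -> str:
    # Stub: a second provider behind the same interface.
    return f"provider-b response to: {prompt}"

CANDIDATES = [
    Candidate("provider-a/model-x", call_provider_a),
    Candidate("provider-b/model-y", call_provider_b),
]

PROMPT_VARIANTS = [
    "Summarize the listing in one sentence: {listing}",
    "You are a real-estate assistant. Briefly describe: {listing}",
]

def run_experiments(listing: str) -> None:
    """Try each prompt variant against each candidate model so results can be compared side by side."""
    for candidate in CANDIDATES:
        for template in PROMPT_VARIANTS:
            output = candidate.call(template.format(listing=listing))
            print(f"[{candidate.name}] {template!r}\n  -> {output}")

run_experiments("3-bed condo, 1,200 sq ft, downtown")
```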

Stage 2: Evaluation: The next phase is evaluation, where developers create a data set of hundreds of examples to test their models and workflows against. Here, developers must balance quality, cost, latency, and privacy. Since no AI system will perfectly meet all of these requirements, developers must make trade-offs and define their priorities.
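A minimal evaluation harness might look like the following sketch: a small bank of test cases run against the workflow while tracking quality and latency (per-call cost could be recorded the same way). The `call_model` function and the test cases are hypothetical placeholders for the workflow under test.

```python
import time
from dataclasses import dataclass

@dataclass
class TestCase:
    input: str
    expected: str  # ground-truth or reference answer

TEST_CASES = [
    TestCase("What is the capital of France?", "Paris"),
    TestCase("2 + 2 = ?", "4"),
    # ...in practice, hundreds of cases covering the intended use case
]

def call_model(prompt: str) -> str:
    # Hypothetical placeholder for the model or workflow under test.
    return "Paris" if "France" in prompt else "4"

def exact_match(output: str, expected: str) -> bool:
    return output.strip().lower() == expected.strip().lower()

def evaluate(cases: list[TestCase]) -> dict:
    """Run every case, tracking quality (exact match here) and latency."""
    passed, latencies = 0, []
    for case in cases:
        start = time.perf_counter()
        output = call_model(case.input)
        latencies.append(time.perf_counter() - start)
        passed += exact_match(output, case.expected)
    return {
        "pass_rate": passed / len(cases),
        "avg_latency_s": sum(latencies) / len(latencies),
    }

print(evaluate(TEST_CASES))
```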

If ground truth data is available, it can be used to evaluate and test workflows. Ground truth data is often seen as the backbone of AI model validation, as it consists of high-quality examples demonstrating ideal outputs. If ground truth data is not available, developers can instead use another LLM to judge the first model's responses. At this stage, developers should also use a flexible framework with various metrics and a large test case bank.
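When no ground truth exists, an LLM-as-judge setup can score responses instead. The sketch below assumes a hypothetical `call_judge_model` stub standing in for a second, independent model, and asks it to return a structured score.

```python
import json

JUDGE_PROMPT = """You are grading an AI assistant's answer.
Question: {question}
Answer: {answer}
Return JSON: {{"score": 1-5, "reason": "..."}}"""

def call_judge_model(prompt: str) -> str:
    # Hypothetical placeholder for a second, independent LLM acting as the judge.
    return json.dumps({"score": 4, "reason": "Accurate but slightly verbose."})

def judge(question: str, answer: str) -> dict:
    """Ask the judge model to score an answer when no ground truth exists."""
    raw = call_judge_model(JUDGE_PROMPT.format(question=question, answer=answer))
    return json.loads(raw)

print(judge("What neighborhoods fit a $500k budget?", "Based on current listings, ..."))
```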

Developers should run evaluations at every stage and put guardrails on internal nodes. This ensures that models produce accurate responses at every step in the workflow. Once real production data is available, developers can return to this stage and refresh their evaluations.
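A guardrail on an internal node can be as simple as validating each intermediate output before it feeds the next step. The sketch below uses hypothetical `retrieve_listings` and `summarize` steps to show the idea: fail fast on a bad intermediate result rather than letting it propagate into the final answer.

```python
def retrieve_listings(query: str) -> list[dict]:
    # Hypothetical internal node: retrieval step.
    return [{"id": 1, "price": 480_000}, {"id": 2, "price": 510_000}]

def summarize(listings: list[dict]) -> str:
    # Hypothetical internal node: generation step.
    return f"Found {len(listings)} listings near your budget."

def guardrail_nonempty(listings: list[dict]) -> None:
    # Guardrail on an internal node: stop before an empty retrieval result
    # turns into a hallucinated final answer.
    if not listings:
        raise ValueError("Retrieval returned no listings; aborting before generation.")

def run_workflow(query: str) -> str:
    listings = retrieve_listings(query)
    guardrail_nonempty(listings)  # check the intermediate output, not just the final answer
    return summarize(listings)

print(run_workflow("condos under $550k"))
```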

Stage 3: Deployment: Once the model is deployed, developers must monitor more than deterministic outputs. This includes logging all LLM calls and tracking inputs, outputs, latency, and the exact steps the AI system took, so developers can see and understand how the AI operates at every step. This is becoming even more critical with the introduction of agentic workflows, as these systems are more complex, can take different workflow paths, and make decisions independently.
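One lightweight way to log every LLM call with its inputs, outputs, and latency is a tracing decorator wrapped around each workflow step, as in the sketch below. The `generate_answer` function is a hypothetical stand-in for an actual model call; in production the log line would typically go to a tracing or observability backend rather than stdout.

```python
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("llm-trace")

def traced(step_name: str):
    """Decorator that logs inputs, outputs, and latency for each step of the workflow."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            log.info(json.dumps({
                "step": step_name,
                "input": [str(a) for a in args],
                "output": str(result)[:200],
                "latency_s": round(time.perf_counter() - start, 3),
            }))
            return result
        return inner
    return wrap

@traced("generate_answer")
def generate_answer(question: str) -> str:
    # Hypothetical placeholder for an actual LLM call.
    return "Here are three listings that match your criteria."

generate_answer("Show me 2-bedroom homes near downtown.")
```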

In this stage, developers should maintain stateful API calls along with retry and fallback logic to handle outages and rate limits. Lastly, developers should practice sound version control, use staging environments, and perform regression testing to maintain stability during updates.
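Retry and fallback logic can follow a simple pattern: back off and retry the primary model a few times, then fall back to a secondary model or provider. The `call_primary`, `call_fallback`, and `RateLimitError` names below are hypothetical placeholders for whatever client and error types the chosen provider actually exposes.

```python
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for a provider rate-limit or outage error."""

def call_primary(prompt: str) -> str:
    # Stub: always fails, to demonstrate the retry-then-fallback path.
    raise RateLimitError("primary provider is rate-limiting us")

def call_fallback(prompt: str) -> str:
    # Stub: a secondary model or provider used as a last resort.
    return "fallback model response"

def call_with_retry(prompt: str, retries: int = 3, base_delay: float = 0.5) -> str:
    """Retry the primary model with exponential backoff, then fall back to a secondary model."""
    for attempt in range(retries):
        try:
            return call_primary(prompt)
        except RateLimitError:
            time.sleep(base_delay * (2 ** attempt))  # back off before the next attempt
    return call_fallback(prompt)

print(call_with_retry("Summarize this listing."))
```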

Stage 4: Monitoring: After the model is deployed, developers can collect user responses and create a feedback loop. This enables developers to identify edge cases captured in production, continuously improve, and make the workflow more efficient.
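Closing the loop can be as simple as logging user feedback from production and promoting the failures into new evaluation cases, which feeds Stage 2 again. The sketch below assumes a hypothetical thumbs-up/thumbs-down signal and a local JSONL log; in practice this data would live in a database or an observability platform.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class FeedbackRecord:
    question: str
    answer: str
    thumbs_up: bool

FEEDBACK_LOG = "feedback.jsonl"

def record_feedback(record: FeedbackRecord) -> None:
    """Append production feedback to a log for later review."""
    with open(FEEDBACK_LOG, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

def failures_to_test_cases() -> list[dict]:
    """Turn thumbs-down interactions into new evaluation cases, closing the loop back to Stage 2."""
    cases = []
    with open(FEEDBACK_LOG) as f:
        for line in f:
            rec = json.loads(line)
            if not rec["thumbs_up"]:
                cases.append({"input": rec["question"], "bad_output": rec["answer"]})
    return cases

record_feedback(FeedbackRecord("Is this condo pet-friendly?", "I am not sure.", thumbs_up=False))
print(failures_to_test_cases())
```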

The Role of TDD in Creating Resilient Agentic AI Applications

A recent Gartner survey revealed that by 2028, 33% of enterprise software applications will include agentic AI. These massive investments must be resilient to achieve the ROI teams are expecting.

Agentic workflows often rely on many tools and multi-agent structures that execute tasks in parallel. When evaluating them with a test-driven approach, it is no longer enough to measure performance at every level; developers must also assess the agents' behavior to ensure they make accurate decisions and follow the intended logic.
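In practice, this means behavioral tests assert on the path the agent took, not just its final answer. The sketch below assumes a hypothetical `run_agent` that returns both the answer and the trajectory of steps, so a test can check that the intended tools were called in the intended order.

```python
def run_agent(question: str) -> dict:
    # Hypothetical agent run that returns the final answer plus the trajectory of steps taken.
    return {
        "answer": "Two listings match.",
        "steps": ["classify_intent", "search_listings", "summarize"],
    }

def test_agent_follows_intended_path():
    """Behavioral assertion: check not only the answer but the path the agent took."""
    result = run_agent("Find condos under $500k")
    assert "search_listings" in result["steps"], "agent skipped the retrieval tool"
    assert result["steps"][0] == "classify_intent", "agent did not classify intent first"
    assert result["answer"]

test_agent_follows_intended_path()
```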

Redfin recently announced Ask Redfin, an AI-powered chatbot that powers daily conversations for thousands of users. Using Vellum's developer sandbox, the Redfin team collaborated on prompts to pick the right prompt/model combination, built complex AI virtual assistant logic by connecting prompts, classifiers, APIs, and data manipulation steps, and systematically evaluated prompts pre-production using hundreds of test cases.

Following a test-driven development approach, their team could simulate various user interactions, test different prompts across numerous scenarios, and build confidence in their assistant’s performance before shipping to production. 

Reality Check on Agentic Technologies

Every AI workflow has some level of agentic behavior. At Vellum, we believe in a six-level framework that breaks down the different levels of autonomy, control, and decision-making for AI systems: from L0: Rule-Based Workflows, where there's no intelligence, to L4: Fully Creative, where the AI creates its own logic.

Today, most AI applications sit at L1. The focus is on orchestration: optimizing how models interact with the rest of the system, tweaking prompts, optimizing retrieval and evals, and experimenting with different modalities. These applications are also easier to manage and control in production; debugging is relatively straightforward, and failure modes are fairly predictable.

Test-driven development truly makes its case here, as developers need to continuously improve the models to create a more efficient system. This year, we are likely to see the most innovation in L2, with AI agents being used to plan and reason. 

As AI agents move up the stack, test-driven development presents an opportunity for developers to better test, evaluate, and refine their workflows. Third-party developer platforms offer enterprises and development teams a platform to easily define and evaluate agentic behaviors and continuously improve workflows in one place.


