
Next Batch Starts October 11

Accelerator

Audience

Automation testers, SDETs, or manual testers with Python knowledge.

Goal

Equip testers to automate AI evaluations, integrate them into CI/CD, and trace and debug agent pipelines.

Outcome

  • Automate LLM evaluations using Eval SDK

  • Manage datasets & golden sets programmatically

  • Define & run custom metrics in CI/CD

  • Trace LangChain agent pipelines

  • Integrate AI testing into release workflows

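The sketch below illustrates, under assumptions, the kind of check this track builds toward: a small golden set managed in code, a custom keyword-coverage metric, and a pass-rate assertion that can gate a CI job. The ask_model stub, the golden-set records, and the 90% threshold are placeholders for illustration, not part of any specific Eval SDK.

    # Hypothetical stub: in practice this would call the system under test
    # (an LLM endpoint or agent). Here it returns canned answers so the
    # example runs standalone.
    CANNED = {
        "What is the capital of France?": "Paris is the capital of France.",
        "Who wrote Hamlet?": "Hamlet was written by William Shakespeare.",
    }

    def ask_model(question: str) -> str:
        return CANNED.get(question, "")

    # Golden set managed programmatically: each record pairs an input with
    # the keywords an acceptable answer must contain.
    GOLDEN_SET = [
        {"question": "What is the capital of France?", "must_contain": ["Paris"]},
        {"question": "Who wrote Hamlet?", "must_contain": ["Shakespeare"]},
    ]

    def keyword_coverage(answer: str, must_contain: list[str]) -> float:
        """Custom metric: fraction of required keywords found in the answer."""
        hits = sum(1 for kw in must_contain if kw.lower() in answer.lower())
        return hits / len(must_contain)

    def test_golden_set_pass_rate():
        """CI gate: fail the build if fewer than 90% of cases score 1.0."""
        scores = [
            keyword_coverage(ask_model(case["question"]), case["must_contain"])
            for case in GOLDEN_SET
        ]
        pass_rate = sum(1 for s in scores if s == 1.0) / len(scores)
        assert pass_rate >= 0.9, f"Golden-set pass rate too low: {pass_rate:.0%}"

Run with pytest inside the CI job; the assertion turns the custom metric into a release gate.
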
Curriculum

  • Python Foundations for AI Testers

  • Automated AI Agent Evaluations

  • Dataset Automation

  • Prompt Optimization in CI/CD

  • Metrics for Automation (Single-Turn, Multi-Turn, Custom)

  • Tracing Agent Pipelines (see the tracing sketch at the end of this outline)

  • LangChain + Eval Integration

  • LangGraph Tracing

  • CI/CD for AI Testing

  • Reporting & Observability

  • Capstone Project: Automated RAG Agent Testing Pipeline

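As a preview of the tracing topics above, here is a minimal sketch, assuming langchain_core is installed: a custom callback handler that logs chain and LLM events so an agent pipeline can be inspected step by step. The agent it is attached to in the usage comment is a placeholder.

    from langchain_core.callbacks import BaseCallbackHandler

    class PipelineTracer(BaseCallbackHandler):
        """Logs the start and end of each chain/LLM step for debugging."""

        def on_chain_start(self, serialized, inputs, **kwargs):
            name = (serialized or {}).get("name", "chain")
            print(f"[trace] chain start: {name} inputs={inputs}")

        def on_chain_end(self, outputs, **kwargs):
            print(f"[trace] chain end: outputs={outputs}")

        def on_llm_start(self, serialized, prompts, **kwargs):
            print(f"[trace] llm start: {len(prompts)} prompt(s)")

        def on_llm_end(self, response, **kwargs):
            print("[trace] llm end")

    # Attach the tracer when invoking any runnable or agent, e.g.:
    #   result = agent.invoke({"input": "..."},
    #                         config={"callbacks": [PipelineTracer()]})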