ADK Local Development to Agent Engine Workflow
Author: Venkata Sudhakar
A production ADK agent passes through three environments before reaching users: local development for rapid iteration, a staging Agent Engine for integration testing and evaluation, and a production Agent Engine for live traffic. The key insight is that the same ADK agent code runs identically in all three environments - only the runner changes. Locally you use InMemorySessionService and Runner; on Agent Engine you wrap the same agent definition in AdkApp. This means bugs found locally are real bugs, not environment differences, and a passing local eval suite gives high confidence in production behaviour.

The recommended project structure separates agent logic from deployment code. The agent module defines the tools and the Agent object - this part is environment-agnostic. A local_run.py script wires up Runner and InMemorySessionService for development. A deploy.py script wraps the same agent in AdkApp and deploys it to the staging or production Agent Engine. A tests directory contains the eval suite that runs in CI before every deployment. This structure scales from a solo developer to a team in which different people own the agent logic, the deployment pipeline, and the evaluation suite.

The example below shows the complete three-environment workflow with the recommended project structure, environment-specific configuration, and a deployment script that automatically runs evaluations before promoting to production.
First, the local development runner used for fast iteration.
Next, the deployment script, which targets staging or production and gates production deployments behind an automatic evaluation pass.
Running the full local-to-production workflow produces output like the following:
# Step 1: Local development
python local_run.py
Local agent test:
Your Samsung 4K TV order ORD-88421 is out for delivery - expected today by 7pm!
iPhone 15 Pro has 15 units in Bangalore. Available for same-day delivery.
# Step 2: Interactive testing
adk web
INFO: Starting ADK web server on http://localhost:8000
INFO: Agent: shopmax_support | Tools: get_order_status, check_availability
# Step 3: Deploy to staging
python deploy.py staging
Deploying to: staging
Creating new staging deployment...
Deployed: projects/my-project/locations/us-central1/reasoningEngines/staging-111
Running smoke test...
Smoke test: PASSED
# Step 4: Deploy to production
python deploy.py production
Deploying to: production
Creating new production deployment...
Deployed: projects/my-project/locations/us-central1/reasoningEngines/prod-222
Running smoke test...
Smoke test: PASSED
Team workflow recommendations: use feature branches for agent instruction changes and open a pull request for review before merging. Run the eval suite in CI on every PR using GitHub Actions or Cloud Build, and block the merge if the eval score drops below the agreed threshold. Deploy to staging automatically on merge to main, and require a manual approval step before deploying to production. Use separate GCP projects for staging and production so there is no shared quota or accidental cross-contamination. Document the expected eval score for each agent version in the PR description so you have a historical record of quality improvements over time.
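The CI merge gate above can be sketched as a small script that reads an eval report and fails the build below a threshold. The JSON report shape ({"cases": [{"name": ..., "score": ...}]}) and the 0.9 threshold are illustrative assumptions, not an ADK-defined format:

```python
# ci_eval_gate.py - fail the CI job if the aggregate eval score is too low.
# Report format and threshold are assumptions; adapt to your eval runner.
import json
import sys


def aggregate_score(report: dict) -> float:
    """Mean score across all eval cases; 0.0 for an empty report."""
    cases = report.get("cases", [])
    if not cases:
        return 0.0
    return sum(c["score"] for c in cases) / len(cases)


def gate(report: dict, threshold: float = 0.9) -> bool:
    """True when the report's aggregate score meets the threshold."""
    return aggregate_score(report) >= threshold


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "eval_report.json"
    try:
        with open(path) as f:
            report = json.load(f)
        score = aggregate_score(report)
        print(f"Eval score: {score:.2f}")
        if not gate(report):
            raise SystemExit(f"Eval score {score:.2f} below 0.9; blocking merge")
    except FileNotFoundError:
        pass  # no report present (e.g. when imported for unit tests)
```

A nonzero exit code is all CI needs: both GitHub Actions and Cloud Build mark the step failed, which blocks the merge when branch protection requires it.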