Deploy Diff
Record a deploy marker and get a behavioral diff — new patterns, increased errors, top impacted services — in the 15 minutes before and after your release.
What you’ll need
- At least some log events ingested before and after a deploy
- (For CI/CD integration) a project API key
Step 1: Navigate to Deployments
Click Deployments in the left sidebar. You’ll see a table of deploy markers for your project.
If you haven’t recorded any deploys yet, the table will be empty.
Step 2: Record a deploy from the dashboard
- Click Record Deploy in the top right.
- Fill in the form:
  - Version — e.g. v1.4.2 or a git SHA
  - Service — the service you’re deploying (optional)
  - Environment — production, staging, etc. (optional)
- Click Record.
The deploy marker is saved with the current server timestamp as deployed_at.
Tip: For accurate diffs, record the deploy right when you push to production — not hours later.
Step 3: View the deploy diff
Click the View Diff link on any deploy row to open the diff page.
The diff compares system behavior in the 15 minutes before and 15 minutes after the deploy timestamp.
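The fixed ±15-minute comparison window can be expressed as a small helper (a sketch for illustration; the window size is the only fact taken from the docs, the function name is made up):

```python
from datetime import datetime, timedelta, timezone

WINDOW = timedelta(minutes=15)  # fixed comparison window per the docs

def diff_windows(deployed_at: datetime):
    """Return the (before, after) windows compared around a deploy timestamp."""
    before = (deployed_at - WINDOW, deployed_at)
    after = (deployed_at, deployed_at + WINDOW)
    return before, after
```

This is also why recording the marker late skews the diff: the windows are anchored to deployed_at, not to when traffic actually shifted.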
What you’ll see
New patterns (highlighted in red) — log message shapes that appeared for the first time after the deploy. These are the most important signal: a new error pattern right after a release almost always means something broke.
Increased patterns — patterns that fired ≥50% more often after the deploy. If database query timeout triples, that’s a regression.
Decreased patterns — patterns that dropped significantly. Sometimes good (a bug you fixed), sometimes suspicious (a health check that stopped appearing).
Error delta / Warn delta — raw change in error and warning counts. Positive = more errors after deploy.
Top impacted services — ranked by absolute event count change. Shows which part of your system was most affected.
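The pattern categories above can be sketched as a comparison of per-pattern event counts in the before/after windows (an illustrative approximation, not ScryWatch's actual implementation; the ≥50% increase threshold comes from the docs, the 50% decrease threshold is an assumption):

```python
def diff_patterns(before: dict, after: dict) -> dict:
    """Classify log patterns given event counts before/after a deploy.

    before/after map pattern -> event count in each 15-minute window.
    """
    new = [p for p in after if p not in before]
    increased = [
        p for p in after
        if p in before and after[p] >= before[p] * 1.5  # fired >=50% more often
    ]
    decreased = [
        p for p in before
        if after.get(p, 0) < before[p] * 0.5  # dropped by more than half (assumed cutoff)
    ]
    return {"new": new, "increased": increased, "decreased": decreased}
```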
Reading the diff
A healthy deploy looks like: small or zero error delta, no new error patterns, decreased or stable existing patterns.
A bad deploy looks like: new error patterns, large positive error delta, a service with dramatically increased event count.
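The two rules of thumb above can be turned into a rough triage function, e.g. for an automated post-deploy check (the thresholds and return strings are illustrative, not part of ScryWatch):

```python
def assess_deploy(error_delta: int, new_error_patterns: list) -> str:
    """Rough triage from the diff: new error patterns outrank a raw error delta."""
    if new_error_patterns:
        return "investigate: new error patterns appeared after the deploy"
    if error_delta > 0:
        return "watch: error count rose after the deploy"
    return "healthy"
```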
Step 4: Integrate with CI/CD
Add a deploy marker automatically at the end of your deployment pipeline. Use your project API key:
curl -X POST https://api.scrywatch.com/v1/deploys \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "version": "'"$GIT_SHA"'",
    "service": "api-worker",
    "environment": "production"
  }'
In GitHub Actions:
- name: Record deploy
  run: |
    curl -X POST https://api.scrywatch.com/v1/deploys \
      -H "Authorization: Bearer ${{ secrets.SCRYWATCH_API_KEY }}" \
      -H "Content-Type: application/json" \
      -d "{\"version\": \"${{ github.sha }}\", \"service\": \"api-worker\", \"environment\": \"production\"}"
Note: The API endpoint (/v1/deploys) accepts API key auth. The dashboard endpoint requires a session cookie.
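If your pipeline runs Python rather than shell, the same request can be made with the standard library. This sketch assumes only what the curl example above shows (the /v1/deploys URL, bearer auth, and the version/service/environment fields); the response body format is not documented here, so only the status code is returned:

```python
import json
import urllib.request

API_URL = "https://api.scrywatch.com/v1/deploys"  # endpoint from the curl example

def build_deploy_payload(version: str, service: str = None, environment: str = None) -> bytes:
    """Mirror the dashboard form: version is required, the rest optional."""
    payload = {"version": version}
    if service:
        payload["service"] = service
    if environment:
        payload["environment"] = environment
    return json.dumps(payload).encode()

def record_deploy(api_key: str, version: str, service: str = None, environment: str = None) -> int:
    """POST a deploy marker; returns the HTTP status code."""
    req = urllib.request.Request(
        API_URL,
        data=build_deploy_payload(version, service, environment),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Call it at the end of your deploy script, e.g. `record_deploy(os.environ["SCRYWATCH_API_KEY"], os.environ["GIT_SHA"], "api-worker", "production")`.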
Step 5: Request an AI summary (optional)
On the diff page, click Generate AI Summary. ScryWatch reads the pattern data and error counts and writes a plain-English description of what changed — useful for incident notes or post-mortems.
You’re done
You now know how to:
- Record a deploy marker from the dashboard
- Read the deploy diff and identify new/increased/decreased patterns
- Integrate deploy recording into your CI/CD pipeline
Related docs
Full deploy API reference — record markers, list deploys, and retrieve diffs programmatically.