Distributed Tracing
ScryWatch APM traces each request across your services — from the initial entry point through every downstream call — and visualizes the result as a span waterfall with timing data.
What you’ll need
- A project with an API key
- Instrumented services (using the custom span API or an OpenTelemetry SDK)
Step 1: Instrument your code
You can send traces two ways:
Custom JSON (recommended for Workers/edge):
curl -X POST https://api.scrywatch.com/api/traces \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "spans": [{
      "trace_id": "abc123",
      "span_id": "span001",
      "name": "handle request",
      "service": "api-gateway",
      "start_time": 1700000000000,
      "duration_ms": 142,
      "status": "ok"
    }, {
      "trace_id": "abc123",
      "span_id": "span002",
      "parent_span_id": "span001",
      "name": "query database",
      "service": "db-worker",
      "start_time": 1700000000010,
      "duration_ms": 95,
      "status": "ok"
    }]
  }'
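If you are sending spans from your own code rather than curl, the same request can be sketched in Python using only the standard library. The payload mirrors the curl example above; `build_trace_payload` and `send_trace` are illustrative names, not part of a ScryWatch SDK.

```python
import json
import urllib.request

API_KEY = "<your-api-key>"  # replace with your project's API key


def build_trace_payload() -> dict:
    """Build the same two-span trace shown in the curl example."""
    return {
        "spans": [
            {
                "trace_id": "abc123",
                "span_id": "span001",
                "name": "handle request",
                "service": "api-gateway",
                "start_time": 1700000000000,  # epoch milliseconds
                "duration_ms": 142,
                "status": "ok",
            },
            {
                "trace_id": "abc123",
                "span_id": "span002",
                "parent_span_id": "span001",  # links this span under span001
                "name": "query database",
                "service": "db-worker",
                "start_time": 1700000000010,
                "duration_ms": 95,
                "status": "ok",
            },
        ]
    }


def send_trace(payload: dict) -> None:
    """POST the spans to the traces endpoint; raises on non-2xx responses."""
    req = urllib.request.Request(
        "https://api.scrywatch.com/api/traces",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    urllib.request.urlopen(req)


payload = build_trace_payload()
# send_trace(payload)  # uncomment once API_KEY is set
```

Note that the child span links to its parent via `parent_span_id` and shares the parent's `trace_id` — that relationship is what the waterfall view uses to nest spans.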
OpenTelemetry (OTLP):
Point your OpenTelemetry SDK's exporter at https://api.scrywatch.com/v1/traces and pass your API key in the Authorization header (`Bearer <your-api-key>`). ScryWatch accepts the standard OTLP/HTTP JSON format.
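Most OpenTelemetry SDKs can be configured without code changes via the spec-defined environment variables. A config sketch, assuming your SDK honors them (the service name shown is an example):

```shell
export OTEL_EXPORTER_OTLP_PROTOCOL="http/json"   # ScryWatch accepts OTLP/HTTP JSON
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://api.scrywatch.com/v1/traces"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your-api-key>"
export OTEL_SERVICE_NAME="api-gateway"           # example service name
```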
Step 2: Open Traces
Click Traces in the left sidebar.
You’ll see a table of recent traces, sorted by most recent start time. Each row shows:
- Trace ID — the root trace identifier
- Service — the root service that started the trace
- Duration — total end-to-end duration
- Status — `ok`, `error`, or `slow`
- Spans — number of spans in the trace
- Started — when the trace began
Step 3: Filter traces
Use the filter controls to narrow down:
- Service — show only traces from a specific service
- Status — show only errors or slow traces
- Environment — production, staging, etc.
- Duration range — find traces slower than N milliseconds
Tip: Filter by `status=error` to find traces where at least one span failed. These are the traces worth investigating.
Step 4: Open the waterfall view
Click any trace row to open the span waterfall.
The waterfall shows:
- All spans for the trace, arranged by parent-child relationship
- Each span’s start offset and duration as a horizontal bar
- Color coding: green for OK, red for error, yellow for slow (>1s)
- Hover over any span to see its full timing and metadata
Read the waterfall left to right — the root span starts at the left edge, child spans begin when the parent calls them.
Note: Long spans indicate bottlenecks. If a database query span takes 800ms of a 900ms total trace, your database is the bottleneck.
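To make that reading order concrete, here is a small Python sketch of how waterfall rows can be derived from raw spans: each span's offset is relative to the root span's start, and the color thresholds match those listed above. The helper names and sample spans are illustrative, not part of ScryWatch itself.

```python
def span_color(span: dict) -> str:
    """Classify a span the way the waterfall colors it."""
    if span["status"] == "error":
        return "red"
    if span["duration_ms"] > 1000:  # slow: longer than 1s
        return "yellow"
    return "green"


def waterfall_rows(spans: list[dict]) -> list[dict]:
    """Compute each span's start offset relative to the root span."""
    root_start = min(s["start_time"] for s in spans)
    return [
        {
            "name": s["name"],
            "offset_ms": s["start_time"] - root_start,
            "duration_ms": s["duration_ms"],
            "color": span_color(s),
        }
        for s in sorted(spans, key=lambda s: s["start_time"])
    ]


def bottleneck(spans: list[dict]) -> dict:
    """Longest child span — the best single candidate for the bottleneck."""
    children = [s for s in spans if "parent_span_id" in s]
    return max(children or spans, key=lambda s: s["duration_ms"])


# Sample trace matching the note above: an 800ms query inside a 900ms request.
spans = [
    {"name": "handle request", "service": "api-gateway",
     "start_time": 1700000000000, "duration_ms": 900, "status": "ok"},
    {"name": "query database", "service": "db-worker", "parent_span_id": "span001",
     "start_time": 1700000000010, "duration_ms": 800, "status": "ok"},
]
rows = waterfall_rows(spans)
slowest = bottleneck(spans)  # → the "query database" span
```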
Step 5: View the Service Map
Click Service Map in the left sidebar.
The Service Map shows:
- All services that have emitted spans
- Throughput (requests per minute) per service
- p50 and p95 latency per service
- Error rate per service
This gives you a topology view of your system — which services are most active, which are slow, and which are seeing errors.
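The per-service metrics above can be sketched from raw spans. This is an illustration of what the Service Map aggregates, not ScryWatch's actual implementation; the one-minute window and the nearest-rank percentile are simplifying assumptions.

```python
def percentile(values: list[float], p: float) -> float:
    """Nearest-rank percentile — simple, adequate for a sketch."""
    vals = sorted(values)
    idx = min(len(vals) - 1, round(p / 100 * (len(vals) - 1)))
    return vals[idx]


def service_metrics(spans: list[dict], window_minutes: float = 1.0) -> dict:
    """Aggregate spans into per-service throughput, latency, and error rate."""
    by_service: dict[str, list[dict]] = {}
    for s in spans:
        by_service.setdefault(s["service"], []).append(s)
    out = {}
    for service, group in by_service.items():
        durations = [s["duration_ms"] for s in group]
        errors = sum(1 for s in group if s["status"] == "error")
        out[service] = {
            "rpm": len(group) / window_minutes,          # requests per minute
            "p50_ms": percentile(durations, 50),
            "p95_ms": percentile(durations, 95),
            "error_rate": errors / len(group),
        }
    return out


# Example: one busy service with a single error, one quiet one.
spans = [
    {"service": "api-gateway", "duration_ms": 100, "status": "ok"},
    {"service": "api-gateway", "duration_ms": 200, "status": "ok"},
    {"service": "api-gateway", "duration_ms": 300, "status": "error"},
    {"service": "api-gateway", "duration_ms": 400, "status": "ok"},
    {"service": "db-worker", "duration_ms": 50, "status": "ok"},
]
metrics = service_metrics(spans)
```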
You’re done
You now know how to:
- Instrument your code to send spans (custom JSON or OTLP)
- Browse and filter traces in the Traces page
- Read the span waterfall and identify bottlenecks
- Use the Service Map to understand system topology
Related docs
Full traces API reference — span schema, OTLP endpoint, filtering parameters, and retention policy.