# Kubernetes

Deploy the OpenTelemetry Collector in your cluster to collect and forward traces to ScryWatch. For logs, use the HTTP ingest endpoint, since OTLP log ingestion is not yet supported.
## Prerequisites

- Kubernetes 1.24+
- `kubectl` configured against your cluster
- Helm 3 (for the Helm path)
## Deployment patterns
| Pattern | When to use |
|---|---|
| DaemonSet | One pod per node — collects from all workloads on that node |
| Deployment | Centralized receiver — good for cluster-wide telemetry like API server audit logs |
Use DaemonSet for most production setups.
## Path 1: Helm (recommended)

Add the OpenTelemetry Collector chart:

```bash
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
```
Create `values.yaml`:

```yaml
mode: daemonset

config:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
  processors:
    batch:
      timeout: 5s
      send_batch_size: 200
    memory_limiter:
      check_interval: 1s
      limit_mib: 256
  exporters:
    otlphttp/scrywatch:
      endpoint: https://api.scrywatch.com
      headers:
        Authorization: "Bearer ${env:SCRYWATCH_API_KEY}"
      sending_queue:
        enabled: true
      retry_on_failure:
        enabled: true
  service:
    pipelines:
      traces:
        receivers: [otlp]
        processors: [memory_limiter, batch]
        exporters: [otlphttp/scrywatch]
      # Logs: OTLP log ingestion is not yet supported by ScryWatch.
      # Instrument your apps with the ScryWatch SDK or POST /v1/ingest directly.

image:
  repository: otel/opentelemetry-collector-contrib

resources:
  limits:
    memory: 512Mi
    cpu: 250m
  requests:
    memory: 128Mi
    cpu: 50m
```
Create the API key secret:

```bash
kubectl create namespace observability
kubectl create secret generic scrywatch-credentials \
  --from-literal=api-key=YOUR_API_KEY \
  -n observability
```
Install (the `extraEnvs` values are quoted so the `[0]` index is not interpreted by your shell):

```bash
helm install otel-collector open-telemetry/opentelemetry-collector \
  -n observability \
  --create-namespace \
  -f values.yaml \
  --set "extraEnvs[0].name=SCRYWATCH_API_KEY" \
  --set "extraEnvs[0].valueFrom.secretKeyRef.name=scrywatch-credentials" \
  --set "extraEnvs[0].valueFrom.secretKeyRef.key=api-key"
```
## Path 2: Raw YAML

### 1. Create namespace and secret

```bash
kubectl create namespace observability
kubectl create secret generic scrywatch-credentials \
  --from-literal=api-key=YOUR_API_KEY \
  -n observability
```

### 2. Apply collector config and DaemonSet

Save as `otel-collector.yaml` and apply with `kubectl apply -f otel-collector.yaml`:
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
  namespace: observability
data:
  config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    processors:
      batch:
        timeout: 5s
        send_batch_size: 200
    exporters:
      otlphttp/scrywatch:
        endpoint: https://api.scrywatch.com
        headers:
          Authorization: "Bearer ${env:SCRYWATCH_API_KEY}"
        retry_on_failure:
          enabled: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlphttp/scrywatch]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-collector
  namespace: observability
spec:
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: collector
          # Pin a specific version tag in production instead of :latest
          image: otel/opentelemetry-collector-contrib:latest
          args: ["--config=/conf/config.yaml"]
          env:
            - name: SCRYWATCH_API_KEY
              valueFrom:
                secretKeyRef:
                  name: scrywatch-credentials
                  key: api-key
          ports:
            - containerPort: 4317 # OTLP gRPC
            - containerPort: 4318 # OTLP HTTP
          volumeMounts:
            - name: config
              mountPath: /conf
          resources:
            limits:
              memory: 512Mi
              cpu: 250m
            requests:
              memory: 128Mi
              cpu: 50m
      volumes:
        - name: config
          configMap:
            name: otel-collector-config
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  namespace: observability
spec:
  selector:
    app: otel-collector
  ports:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
    - name: otlp-http
      port: 4318
      targetPort: 4318
```
## Instrumenting your apps

Point your services at the in-cluster Collector:

```yaml
# Add to your app Deployment's env block
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://otel-collector.observability.svc.cluster.local:4318"
  - name: OTEL_SERVICE_NAME
    value: "my-service"
  - name: OTEL_TRACES_EXPORTER
    value: "otlp"
```
Most OpenTelemetry SDKs pick these up automatically via the environment variable spec.
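As a quick sanity check of what an SDK does with these variables: per the OTLP/HTTP specification, exporters derive each signal's URL by appending a per-signal path (`/v1/traces`, `/v1/metrics`, `/v1/logs`) to `OTEL_EXPORTER_OTLP_ENDPOINT`. A minimal shell sketch of that resolution:

```shell
# Same variables the Deployment env block sets, shown here for a local check.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector.observability.svc.cluster.local:4318"
export OTEL_SERVICE_NAME="my-service"

# OTLP/HTTP exporters append the signal path to the base endpoint:
echo "${OTEL_EXPORTER_OTLP_ENDPOINT}/v1/traces"
# -> http://otel-collector.observability.svc.cluster.local:4318/v1/traces
```

If you set `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` instead, SDKs use it verbatim without appending a path, so include `/v1/traces` yourself in that case.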
## Verify

Check the Collector is running:

```bash
kubectl get pods -n observability
# NAME                   READY   STATUS    RESTARTS
# otel-collector-xxxxx   1/1     Running   0

kubectl logs -n observability daemonset/otel-collector --tail=30
```
Send a test trace directly to ScryWatch:

```bash
curl -X POST https://api.scrywatch.com/api/traces/otlp \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "resourceSpans": [{
      "resource": {
        "attributes": [{"key":"service.name","value":{"stringValue":"k8s-test"}}]
      },
      "scopeSpans": [{
        "spans": [{
          "traceId": "aabbccdd00112233aabbccdd00112233",
          "spanId": "1122334455667788",
          "name": "test-span",
          "startTimeUnixNano": "1741200000000000000",
          "endTimeUnixNano": "1741200000250000000",
          "status": {"code": 1}
        }]
      }]
    }]
  }'
# Expected: {"inserted":1}
```
## Logs strategy

OTLP log ingestion is not yet supported by ScryWatch. Options:

| Approach | How |
|---|---|
| ScryWatch SDK | Use the JS SDK or Flutter SDK in your app |
| HTTP ingest | `POST /v1/ingest` with `Authorization: Bearer KEY`; see the PHP or Go guides for examples |
| Collector transform | Experimental: use the Collector's transform processor to map OTLP log records to ScryWatch's JSON ingest format |
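For the HTTP ingest route, a minimal sketch of the request shape. Note the payload field names (`level`, `message`, `timestamp`) are illustrative assumptions, not the documented `/v1/ingest` schema; check the PHP or Go guides for the real field names:

```shell
# Hypothetical log payload: field names are assumptions for illustration only.
PAYLOAD='{"level":"info","message":"deploy finished","timestamp":"2025-03-05T12:00:00Z"}'
echo "$PAYLOAD"

# To actually send it (requires a real API key in SCRYWATCH_API_KEY):
# curl -X POST https://api.scrywatch.com/v1/ingest \
#   -H "Authorization: Bearer $SCRYWATCH_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```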
## When to use Deployment instead of DaemonSet

Use a Deployment (single replica) when:

- You want a single centralized OTLP receiver for the whole cluster
- You are collecting cluster-level telemetry (API server audit logs, control plane metrics)
- You want to scale the Collector independently of node count

Change `mode: daemonset` to `mode: deployment` in the Helm values, or change `kind: DaemonSet` to `kind: Deployment` and add `replicas: 1` in the raw YAML.
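On the Helm path, the centralized pattern can be expressed as a small values override (a sketch; `replicaCount` is the chart's usual replica key for non-DaemonSet modes, so verify it against your chart version):

```yaml
# values-deployment.yaml: switch from DaemonSet to a single centralized
# Deployment. replicaCount is assumed from the chart's defaults; verify
# the key name for your chart version.
mode: deployment
replicaCount: 1
```

Apply it on top of your existing values, e.g. `helm upgrade otel-collector open-telemetry/opentelemetry-collector -n observability -f values.yaml -f values-deployment.yaml` (later `-f` files override earlier ones).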
## See also
- OpenTelemetry guide — OTLP exporter setup per language
- Distributed Tracing guide — understanding traces in ScryWatch
- Go integration — Go OTel SDK setup