Every application produces logs. But most teams either log too little (and can't debug issues) or log too much (and can't find what matters). The key is logging the right things at the right level.
Here's how to set up log monitoring that actually helps you ship faster.
Why Log Monitoring Matters
Logs are your primary debugging tool when things go wrong. But unstructured logging creates problems:
- Too many logs hide the important ones
- Inconsistent formats make parsing and searching difficult
- Missing context turns logs into useless noise
- No aggregation means the same repeated errors fill your storage
What to Log
Always Include
- Timestamp — ISO 8601 format with timezone
- Level — ERROR, WARN, INFO, DEBUG
- Service/component — Which part of your app
- Request ID — For tracing across services
- Message — Human-readable description
- Context — Relevant data (user ID, resource ID, etc.)
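A minimal sketch of a log-entry builder that enforces the fields above (the function name and field names are illustrative, not a standard):

```javascript
// Build a structured log entry with the always-include fields.
// Field names mirror the JSON example later in this article.
function buildLogEntry({ level, service, requestId, message, context = {} }) {
  return {
    timestamp: new Date().toISOString(), // ISO 8601 with UTC ("Z") timezone
    level,                               // ERROR, WARN, INFO, DEBUG
    service,                             // which part of the app
    request_id: requestId,               // for tracing across services
    message,                             // human-readable description
    context,                             // relevant data: user ID, resource ID, etc.
  };
}

// Example:
const entry = buildLogEntry({
  level: 'INFO',
  service: 'auth-service',
  requestId: 'req_abc123',
  message: 'Login succeeded',
  context: { user_id: 'user_456' },
});
console.log(JSON.stringify(entry));
```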
For Errors, Also Include
- Stack trace — But not for every framework frame
- Error code — If applicable
- Input that caused the error — Sanitized
- Recovery action taken — What did your app do?
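One way to assemble those extra error fields, sketched as a helper (`trimStack` and `errorContext` are hypothetical names; the `node_modules` filter is one heuristic for dropping framework frames):

```javascript
// Keep the error message plus a handful of app frames; drop framework noise.
function trimStack(err, maxFrames = 5) {
  return (err.stack || '')
    .split('\n')
    .filter((line) => !line.includes('node_modules'))
    .slice(0, maxFrames + 1) // first line is the error message itself
    .join('\n');
}

// Build the error-specific context for a structured log entry.
function errorContext(err, sanitizedInput, recoveryAction) {
  return {
    stack: trimStack(err),
    error_code: err.code || null,    // if applicable
    input: sanitizedInput,           // caller must redact secrets first
    recovery_action: recoveryAction, // e.g. "returned cached response"
  };
}
```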
For Performance Logs, Also Include
- Duration — How long the operation took
- Resource usage — Memory, CPU if significant
- External dependencies — Database queries, API calls
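A sketch of capturing duration for a performance log by wrapping the operation (the `timed` helper and JSON shape are illustrative; they follow the structured format used in this article):

```javascript
// Wrap an async (or sync) operation and emit a performance log with its duration.
async function timed(operation, fn) {
  const start = process.hrtime.bigint();
  try {
    return await fn();
  } finally {
    const durationMs = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(JSON.stringify({
      level: 'INFO',
      message: 'Operation completed',
      context: { operation, duration_ms: Math.round(durationMs * 100) / 100 },
    }));
  }
}
```

Usage might look like `await timed('db.query', () => db.users.findById(id))`, where `db` is whatever client your app uses.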
What NOT to Log
Avoid Logging
- Sensitive data — Passwords, tokens, API keys, PII
- Full request bodies — Log structure, not contents
- Repeated errors — Log once, increment counter
- Health check successes — Only log failures
- Every middleware step — Log at boundaries, not every step
- Framework internals — Unless debugging the framework itself
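The "log once, increment counter" advice for repeated errors can be sketched like this (a minimal in-memory version; a real implementation would flush and reset the counts periodically):

```javascript
// Deduplicate repeated errors: log the first occurrence, count the rest.
const errorCounts = new Map();

function logErrorOnce(key, message) {
  const count = (errorCounts.get(key) || 0) + 1;
  errorCounts.set(key, count);
  if (count === 1) {
    console.log(JSON.stringify({ level: 'ERROR', message, context: { key } }));
  }
  return count; // flush counts periodically, e.g. every 60 seconds
}
```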
Structured Logging
Structured logging makes logs searchable and parseable:
```javascript
// Bad: Unstructured
console.log('User ' + userId + ' failed to login at ' + new Date());
```

Good: Structured (JSON):

```json
{
  "timestamp": "2026-03-20T03:34:00Z",
  "level": "WARN",
  "service": "auth-service",
  "request_id": "req_abc123",
  "message": "Login failed",
  "context": {
    "user_id": "user_456",
    "reason": "invalid_password",
    "ip": "192.168.1.1"
  }
}
```
Benefits of Structured Logging
- Searchable — Query by any field
- Parseable — Tools can process automatically
- Consistent — Same format across all services
- Alertable — Set up alerts on specific patterns
Log Levels Done Right
| Level | When to Use | Production Default |
|---|---|---|
| ERROR | Something is broken, needs attention | Always on |
| WARN | Unexpected but handled, investigate later | Usually on |
| INFO | Important business events | Sometimes on |
| DEBUG | Detailed diagnostic info | Usually off |
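Level filtering is what makes the production defaults above cheap to enforce: compare each log's level against a configured threshold and drop everything below it. A minimal sketch (the `LEVELS` map and defaulting to `INFO` are assumptions, not a standard API):

```javascript
// Numeric severity per level; higher means more severe.
const LEVELS = { DEBUG: 10, INFO: 20, WARN: 30, ERROR: 40 };

// Emit only logs at or above the configured threshold.
function shouldLog(level, configured = process.env.LOG_LEVEL || 'INFO') {
  return LEVELS[level] >= LEVELS[configured];
}
```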
Log Aggregation Options
Self-Hosted
- ELK Stack — Elasticsearch, Logstash, Kibana (resource intensive)
- Loki + Grafana — Lightweight, integrates with Prometheus
- Fluentd/Fluent Bit — Flexible log collection and routing
Managed Services
- Datadog — Comprehensive but expensive
- Papertrail — Simple, affordable
- Logtail (Better Stack) — Modern, developer-friendly
- Cloud provider logs — CloudWatch, Stackdriver, etc.
Log Retention Strategy
Not all logs need the same retention:
| Log Type | Retention | Reason |
|---|---|---|
| Error logs | 90 days | Debug old issues, patterns |
| Access logs | 30 days | Security investigations |
| Debug logs | 7 days | Short-term debugging only |
| Audit logs | 1-7 years | Compliance requirements |
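A per-type retention policy like the table above could be enforced with a check like this (a sketch; the record shape and the audit value, shown at the 1-year minimum, are assumptions):

```javascript
// Retention windows in days, per log type (audit shown at its 1-year minimum).
const RETENTION_DAYS = { error: 90, access: 30, debug: 7, audit: 365 };

// Decide whether a log record is past its retention window.
function isExpired(record, now = Date.now()) {
  const days = RETENTION_DAYS[record.type] ?? 30; // default for unknown types
  const ageMs = now - Date.parse(record.timestamp);
  return ageMs > days * 24 * 60 * 60 * 1000;
}
```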
Common Log Monitoring Mistakes
Mistake 1: Logging Everything at INFO Level
When everything is INFO, nothing stands out. Use levels appropriately.
Mistake 2: No Request Correlation
Without request IDs, you can't trace a request across services. Always include correlation IDs.
Mistake 3: Ignoring Log Volume
Log volume that grows unbounded will eventually cause problems. Monitor and alert on log volume spikes.
Mistake 4: Not Sampling High-Volume Logs
If you log every request to a popular endpoint, sample the logs (e.g., 10%) rather than storing everything.
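Probabilistic sampling can be as simple as this sketch: keep roughly the given fraction of routine logs, but never drop errors or warnings (the function name and defaults are illustrative).

```javascript
// Keep ~rate of INFO/DEBUG logs; always keep ERROR and WARN.
function shouldSample(level, rate = 0.1) {
  if (level === 'ERROR' || level === 'WARN') return true;
  return Math.random() < rate;
}
```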
Mistake 5: No Log-Based Alerts
Logs are useless if nobody reads them. Set up alerts for error rate spikes, specific error messages, and anomalies.
Log Monitoring Checklist
- ☐ Use structured logging (JSON)
- ☐ Include timestamp, level, service, request ID
- ☐ Never log sensitive data
- ☐ Use appropriate log levels
- ☐ Aggregate logs to a central location
- ☐ Set retention policies by log type
- ☐ Create alerts for error patterns
- ☐ Include request IDs for tracing
- ☐ Sample high-volume logs
- ☐ Review log volume regularly
Monitor Your Application Health with OpsPulse
Combine uptime monitoring with your log analysis for complete visibility. Know when things break, not just when logs say so.
Start Free Monitoring →

Summary
Effective log monitoring comes down to:
- Structure — Use consistent, parseable formats
- Context — Include enough to debug, not so much it's noise
- Levels — Use them appropriately
- Aggregation — Centralize for searching
- Alerts — Turn patterns into notifications
Logs are for debugging. Make them work for you, not against you.