Alert Rules
Define the conditions that trigger alerts
Slack Integration
Get notified in your team channels
PagerDuty
On-call escalation for critical errors
Email Alerts
Direct notifications to your inbox
Setting Up Alert Rules
Alert rules define when notifications should be sent based on error activity. You can create rules for specific error types, frequency thresholds, or user impact.
Create an Alert Rule
{
"name": "High Error Rate Alert",
"description": "Triggers when error rate exceeds threshold",
"enabled": true,
"conditions": {
"type": "threshold",
"metric": "error_count",
"operator": "greater_than",
"value": 100,
"timeWindow": "5m"
},
"filters": {
"environment": "production",
"level": ["error", "fatal"]
},
"notifications": [
{
"channel": "slack",
"target": "#alerts-critical"
},
{
"channel": "email",
"target": "oncall@company.com"
}
],
"cooldown": "15m"
}
Alert Conditions
Configure different types of conditions to trigger alerts based on your needs:
Threshold Alerts
Trigger when a metric exceeds a defined threshold within a time window:
// Alert when more than 50 errors occur in 5 minutes
{
"type": "threshold",
"metric": "error_count",
"operator": "greater_than",
"value": 50,
"timeWindow": "5m"
}
// Alert when error rate exceeds 1% of requests
{
"type": "threshold",
"metric": "error_rate",
"operator": "greater_than",
"value": 0.01, // 1%
"timeWindow": "10m"
}
Frequency Alerts
Trigger based on how often an error occurs:
// Alert when a new error type appears
{
"type": "new_issue",
"filters": {
"environment": "production"
}
}
// Alert when an error occurs N times
{
"type": "frequency",
"occurrences": 10,
"timeWindow": "1h",
"perIssue": true // Track per unique error
}
User Impact Alerts
Trigger based on how many users are affected:
// Alert when error affects 100+ unique users
{
"type": "user_impact",
"metric": "unique_users",
"operator": "greater_than",
"value": 100,
"timeWindow": "1h"
}
// Alert when error affects 5% of active users
{
"type": "user_impact",
"metric": "user_percentage",
"operator": "greater_than",
"value": 5,
"timeWindow": "15m"
}
| Property | Type | Description |
|---|---|---|
| type (required) | "threshold" \| "frequency" \| "new_issue" \| "user_impact" \| "regression" | The type of alert condition |
| metric | string | The metric to evaluate (error_count, error_rate, unique_users, etc.) |
| operator | "greater_than" \| "less_than" \| "equals" | Comparison operator for the threshold |
| value | number | Threshold value to compare against |
| timeWindow | string (default: "5m") | Time window for evaluation (5m, 1h, 24h) |
| perIssue | boolean (default: false) | Track metrics per unique error issue |
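The examples above cover the threshold, frequency, new_issue, and user_impact condition types, but not regression. Assuming a regression condition follows the same shape as the other conditions (the table implies this but does not show it), a minimal sketch might look like this:
// Hypothetical sketch: alert when a previously resolved issue starts occurring again
{
  "type": "regression",
  "filters": {
    "environment": "production"
  }
}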
Slack Integration
Connect Sylphx to Slack to receive error alerts directly in your team channels.
Setup Steps
{
"channel": "slack",
"config": {
"workspace": "your-workspace",
"defaultChannel": "#engineering-alerts",
"channelRouting": {
// Route by error severity
"fatal": "#incidents",
"error": "#engineering-alerts",
"warning": "#monitoring"
},
"mentionUsers": ["@oncall"],
"mentionGroups": ["@backend-team"],
"includeStackTrace": true,
"maxStackTraceLines": 10
}
}
Slack Message Format
Customize what information appears in Slack notifications:
{
"messageTemplate": {
"title": ":rotating_light: {{error.type}}: {{error.message}}",
"fields": [
{ "name": "Environment", "value": "{{environment}}" },
{ "name": "Users Affected", "value": "{{stats.uniqueUsers}}" },
{ "name": "Occurrences", "value": "{{stats.count}} in {{stats.timeWindow}}" }
],
"actions": [
{ "text": "View Issue", "url": "{{issue.url}}" },
{ "text": "Resolve", "action": "resolve" },
{ "text": "Snooze 1h", "action": "snooze", "duration": "1h" }
]
}
}
Interactive Actions
The "Resolve" and "Snooze" buttons in the message template above let teammates act on an issue directly from the Slack message without opening the dashboard; snoozing from Slack is covered under Silencing and Snoozing Alerts below.
PagerDuty Integration
Integrate with PagerDuty for on-call alerting and incident management. Critical errors can automatically page your on-call team.
Coming Soon
The PagerDuty integration is planned but not yet available. The configuration below previews the planned feature set:
{
"channel": "pagerduty",
"config": {
"integrationKey": "your-integration-key",
"serviceId": "PXXXXXX",
"severity": {
"fatal": "critical",
"error": "error",
"warning": "warning"
},
"routingKey": "your-routing-key",
"dedupKey": "{{issue.fingerprint}}"
}
}
Email Alerts
Receive error notifications directly in your inbox. Email alerts include detailed error information and quick action links.
{
"channel": "email",
"config": {
"recipients": [
"team@company.com",
"oncall@company.com"
],
"digestMode": false, // Send immediately vs. batched
"digestInterval": "1h", // If digestMode is true
"includeStackTrace": true,
"includeBreadcrumbs": true,
"maxBreadcrumbs": 10,
"replyTo": "errors@sylphx.dev"
}
}
Email Digest
Instead of individual emails for each alert, receive a summary digest:
{
"digestMode": true,
"digestInterval": "1h", // Options: 15m, 30m, 1h, 4h, 24h
"digestConfig": {
"groupBy": "issue", // Group by issue or severity
"maxIssues": 20, // Limit issues per digest
"sortBy": "occurrences", // Sort by count or recency
"includeResolved": false // Include recently resolved
}
}
Alert Escalation
Set up escalation policies to ensure critical alerts are handled if the primary responder doesn't acknowledge them.
{
"name": "Critical Error Escalation",
"escalationLevels": [
{
"level": 1,
"delay": "0m",
"targets": [
{ "channel": "slack", "target": "#alerts" },
{ "channel": "email", "target": "primary-oncall@company.com" }
]
},
{
"level": 2,
"delay": "15m",
"condition": "not_acknowledged",
"targets": [
{ "channel": "slack", "target": "#incidents", "mention": "@oncall" },
{ "channel": "email", "target": "backup-oncall@company.com" }
]
},
{
"level": 3,
"delay": "30m",
"condition": "not_acknowledged",
"targets": [
{ "channel": "email", "target": "engineering-lead@company.com" },
{ "channel": "slack", "target": "@engineering-lead" }
]
}
]
}
Time-based escalation
Escalate after a defined time period if not acknowledged
Severity-based escalation
Auto-escalate if error count or user impact increases (see the sketch below)
Multiple notification channels
Each level can notify different channels and people
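The escalation policy above relies on the not_acknowledged condition, i.e. time-based escalation. A severity-based level might instead react to growing impact; the impact_increased condition name and the threshold field in this sketch are assumptions, not a documented schema:
// Hypothetical sketch: escalate early if the number of affected users keeps growing
{
  "level": 2,
  "delay": "10m",
  "condition": "impact_increased",      // assumed condition name
  "threshold": { "unique_users": 500 }, // assumed field
  "targets": [
    { "channel": "slack", "target": "#incidents", "mention": "@oncall" }
  ]
}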
Silencing and Snoozing Alerts
Temporarily suppress alerts during maintenance windows or when investigating known issues.
// Snooze a specific alert rule
{
"action": "snooze",
"ruleId": "alert-rule-123",
"duration": "2h",
"reason": "Investigating the root cause"
}
// Snooze from Slack:
// Click "Snooze 1h" button on alert message
// Or use: /sylphx snooze alert-rule-123 2h
Don't Forget to Unsnooze
Remember to re-enable snoozed or silenced rules once the maintenance window or investigation is over, so a recurring issue isn't silently missed.
Alert History and Analytics
Track alert history to understand alerting patterns and fine-tune your configuration over time; for example, reviewing which rules fire most often can reveal noisy rules worth adjusting.
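The alert history API itself isn't documented in this section, so the query below is only a hypothetical sketch of the kind of questions worth asking; all field names are illustrative:
// Hypothetical sketch of an alert history query (all field names are illustrative)
{
  "timeRange": "30d",
  "groupBy": "rule",
  "metrics": ["times_triggered", "times_acknowledged", "median_time_to_acknowledge"],
  "sortBy": "times_triggered"
}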
Best Practices
Start with a few alerts
Begin with critical alerts only. Add more as you understand your error patterns.
Avoid alert fatigue
Too many alerts lead to ignored alerts. Focus on actionable notifications.
Use appropriate channels
Route critical errors to PagerDuty, warnings to Slack, and summaries to email (see the sketch below).
Set up escalation
Ensure critical alerts have escalation paths so they never go unnoticed.
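As a sketch of the channel-routing advice above, severity routing can be expressed with the level filter from the rule schema in Create an Alert Rule. Both rules are abbreviated (conditions and other fields omitted), and the PagerDuty target value is illustrative:
// Sketch: page on-call for fatal errors, send warnings to Slack (rules abbreviated)
{
  "name": "Fatal errors page on-call",
  "filters": { "environment": "production", "level": ["fatal"] },
  "notifications": [{ "channel": "pagerduty", "target": "PXXXXXX" }]
}
{
  "name": "Warnings to Slack",
  "filters": { "environment": "production", "level": ["warning"] },
  "notifications": [{ "channel": "slack", "target": "#monitoring" }]
}
Email summaries are best handled with digestMode, as shown in the Email Digest section.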