
Alerting

Configure alert rules and integrations to get notified when errors occur. Connect to Slack, PagerDuty, email, and more.

Alert Rules

Define conditions for when to trigger alerts

Slack Integration

Get notified in your team channels

PagerDuty

On-call escalation for critical errors

Email Alerts

Direct notifications to your inbox

Setting Up Alert Rules

Alert rules define when notifications should be sent based on error activity. You can create rules for specific error types, frequency thresholds, or user impact.

Create an Alert Rule

1. Navigate to Settings > Monitoring > Alert Rules
2. Click "Create Alert Rule"
3. Define the trigger conditions
4. Select notification channels
5. Set up escalation (optional)
6. Save and activate the rule

Example Alert Rule Configuration
{
  "name": "High Error Rate Alert",
  "description": "Triggers when error rate exceeds threshold",
  "enabled": true,
  "conditions": {
    "type": "threshold",
    "metric": "error_count",
    "operator": "greater_than",
    "value": 100,
    "timeWindow": "5m"
  },
  "filters": {
    "environment": "production",
    "level": ["error", "fatal"]
  },
  "notifications": [
    {
      "channel": "slack",
      "target": "#alerts-critical"
    },
    {
      "channel": "email",
      "target": "oncall@company.com"
    }
  ],
  "cooldown": "15m"
}

Alert Conditions

Configure different types of conditions to trigger alerts based on your needs:

Threshold Alerts

Trigger when a metric exceeds a defined threshold within a time window:

// Alert when more than 50 errors occur in 5 minutes
{
  "type": "threshold",
  "metric": "error_count",
  "operator": "greater_than",
  "value": 50,
  "timeWindow": "5m"
}

// Alert when error rate exceeds 1% of requests
{
  "type": "threshold",
  "metric": "error_rate",
  "operator": "greater_than",
  "value": 0.01,  // 1%
  "timeWindow": "10m"
}

Frequency Alerts

Trigger based on how often an error occurs:

// Alert when a new error type appears
{
  "type": "new_issue",
  "filters": {
    "environment": "production"
  }
}

// Alert when an error occurs N times
{
  "type": "frequency",
  "occurrences": 10,
  "timeWindow": "1h",
  "perIssue": true  // Track per unique error
}

User Impact Alerts

Trigger based on how many users are affected:

// Alert when error affects 100+ unique users
{
  "type": "user_impact",
  "metric": "unique_users",
  "operator": "greater_than",
  "value": 100,
  "timeWindow": "1h"
}

// Alert when error affects 5% of active users
{
  "type": "user_impact",
  "metric": "user_percentage",
  "operator": "greater_than",
  "value": 5,
  "timeWindow": "15m"
}
Condition properties:

type ("threshold" | "frequency" | "new_issue" | "user_impact" | "regression", required): The type of alert condition.
metric (string): The metric to evaluate (error_count, error_rate, unique_users, etc.).
operator ("greater_than" | "less_than" | "equals"): Comparison operator for the threshold.
value (number): Threshold value to compare against.
timeWindow (string, default "5m"): Time window for evaluation (5m, 1h, 24h).
perIssue (boolean, default false): Track metrics per unique error issue.
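The condition type list above also includes "regression" (a resolved issue that starts occurring again), which none of the examples cover. As a rough sketch only, such a condition might look like the following; the fields are assumed by analogy with the other condition types, not taken from a reference.

// Sketch: alert when a previously resolved issue reappears in production.
// The fields below are assumptions modeled on the other condition types.
{
  "type": "regression",
  "filters": {
    "environment": "production"
  }
}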

Slack Integration

Connect Sylphx to Slack to receive error alerts directly in your team channels.

Setup Steps

1. Go to Settings > Integrations > Slack
2. Click "Connect to Slack"
3. Authorize the Sylphx app in your Slack workspace
4. Select the default channel for notifications
5. Configure alert routing rules

Slack Notification Configuration
{
  "channel": "slack",
  "config": {
    "workspace": "your-workspace",
    "defaultChannel": "#engineering-alerts",
    "channelRouting": {
      // Route by error severity
      "fatal": "#incidents",
      "error": "#engineering-alerts",
      "warning": "#monitoring"
    },
    "mentionUsers": ["@oncall"],
    "mentionGroups": ["@backend-team"],
    "includeStackTrace": true,
    "maxStackTraceLines": 10
  }
}

Slack Message Format

Customize what information appears in Slack notifications:

Custom Message Template
{
  "messageTemplate": {
    "title": ":rotating_light: {{error.type}}: {{error.message}}",
    "fields": [
      { "name": "Environment", "value": "{{environment}}" },
      { "name": "Users Affected", "value": "{{stats.uniqueUsers}}" },
      { "name": "Occurrences", "value": "{{stats.count}} in {{stats.timeWindow}}" }
    ],
    "actions": [
      { "text": "View Issue", "url": "{{issue.url}}" },
      { "text": "Resolve", "action": "resolve" },
      { "text": "Snooze 1h", "action": "snooze", "duration": "1h" }
    ]
  }
}

Interactive Actions

Use Slack's interactive buttons to resolve, snooze, or assign errors directly from the notification without leaving Slack.
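The template above already defines "Resolve" and "Snooze 1h" buttons. If you also want an assign button, as mentioned here, one plausible addition to the actions array is sketched below; the "assign" action name and "assignee" field are assumptions and may not match the actual option names.

{
  "actions": [
    { "text": "View Issue", "url": "{{issue.url}}" },
    { "text": "Resolve", "action": "resolve" },
    { "text": "Snooze 1h", "action": "snooze", "duration": "1h" },
    // Assumed shape; "assign" and "assignee" are not confirmed option names.
    { "text": "Assign to on-call", "action": "assign", "assignee": "@oncall" }
  ]
}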

PagerDuty Integration

Integrate with PagerDuty for on-call alerting and incident management. Critical errors can automatically page your on-call team.

Coming Soon

PagerDuty integration is currently in development. Contact support for early access.

Planned Features

Automatic incident creation
On-call schedule integration
Escalation policies
Bi-directional sync (resolving in Sylphx resolves the incident in PagerDuty)
Service mapping
Priority routing
PagerDuty Configuration (Preview)
{
  "channel": "pagerduty",
  "config": {
    "integrationKey": "your-integration-key",
    "serviceId": "PXXXXXX",
    "severity": {
      "fatal": "critical",
      "error": "error",
      "warning": "warning"
    },
    "routingKey": "your-routing-key",
    "dedupKey": "{{issue.fingerprint}}"
  }
}

Email Alerts

Receive error notifications directly in your inbox. Email alerts include detailed error information and quick action links.

Email Alert Configuration
{
  "channel": "email",
  "config": {
    "recipients": [
      "team@company.com",
      "oncall@company.com"
    ],
    "digestMode": false,  // Send immediately vs. batched
    "digestInterval": "1h",  // If digestMode is true
    "includeStackTrace": true,
    "includeBreadcrumbs": true,
    "maxBreadcrumbs": 10,
    "replyTo": "errors@sylphx.dev"
  }
}

Email Digest

Instead of individual emails for each alert, receive a summary digest:

{
  "digestMode": true,
  "digestInterval": "1h",  // Options: 15m, 30m, 1h, 4h, 24h
  "digestConfig": {
    "groupBy": "issue",  // Group by issue or severity
    "maxIssues": 20,     // Limit issues per digest
    "sortBy": "occurrences",  // Sort by count or recency
    "includeResolved": false  // Include recently resolved
  }
}

Alert Escalation

Set up escalation policies to ensure critical alerts are handled if the primary responder doesn't acknowledge them.

Escalation Policy
{
  "name": "Critical Error Escalation",
  "escalationLevels": [
    {
      "level": 1,
      "delay": "0m",
      "targets": [
        { "channel": "slack", "target": "#alerts" },
        { "channel": "email", "target": "primary-oncall@company.com" }
      ]
    },
    {
      "level": 2,
      "delay": "15m",
      "condition": "not_acknowledged",
      "targets": [
        { "channel": "slack", "target": "#incidents", "mention": "@oncall" },
        { "channel": "email", "target": "backup-oncall@company.com" }
      ]
    },
    {
      "level": 3,
      "delay": "30m",
      "condition": "not_acknowledged",
      "targets": [
        { "channel": "email", "target": "engineering-lead@company.com" },
        { "channel": "slack", "target": "@engineering-lead" }
      ]
    }
  ]
}

Time-based escalation

Escalate after a defined time period if not acknowledged

Severity-based escalation

Auto-escalate if error count or user impact increases (see the sketch below)

Multiple notification channels

Each level can notify different channels and people
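The escalation policy example above only shows time-based levels gated on "not_acknowledged". A severity-based level might look roughly like the sketch below; the "impact_increased" condition and its "threshold" field are assumptions used purely for illustration.

{
  "level": 2,
  "delay": "0m",
  // Assumed condition and threshold fields: escalate as soon as user impact
  // grows past a limit, regardless of acknowledgement state.
  "condition": "impact_increased",
  "threshold": { "metric": "unique_users", "value": 500 },
  "targets": [
    { "channel": "slack", "target": "#incidents", "mention": "@oncall" }
  ]
}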

Silencing and Snoozing Alerts

Temporarily suppress alerts during maintenance windows or when investigating known issues.

// Snooze a specific alert rule
{
  "action": "snooze",
  "ruleId": "alert-rule-123",
  "duration": "2h",
  "reason": "Investigating the root cause"
}

// Snooze from Slack:
// Click "Snooze 1h" button on alert message
// Or use: /sylphx snooze alert-rule-123 2h

Set Appropriate Snooze Durations

Snoozed alerts automatically re-enable after the duration expires. Choose durations long enough to cover the investigation but short enough that you don't miss important new errors.
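Maintenance windows are mentioned above, but only snoozing is shown. A scheduled silence might look something like this sketch; the "silence" action and its "from"/"until" fields are assumptions rather than documented options.

// Sketch: suppress a rule for a fixed maintenance window.
// "silence", "from", and "until" are assumed names, not documented ones.
{
  "action": "silence",
  "ruleId": "alert-rule-123",
  "from": "2025-01-15T22:00:00Z",
  "until": "2025-01-16T02:00:00Z",
  "reason": "Scheduled database maintenance"
}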

Alert History and Analytics

Track alert history to understand patterns and optimize your alerting configuration:

Alert frequency by rule
Mean time to acknowledge (MTTA)
Mean time to resolve (MTTR)
False positive rate
Escalation frequency
Channel effectiveness

Best Practices

Start with a few alerts

Begin with critical alerts only. Add more as you understand your error patterns.

Avoid alert fatigue

Too many alerts lead to ignored alerts. Focus on actionable notifications.

Use appropriate channels

Route critical errors to PagerDuty, warnings to Slack, and summaries to email (see the routing sketch below).

Set up escalation

Ensure critical alerts have escalation paths so they never go unnoticed.
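Putting the "use appropriate channels" advice into practice, the rule shapes shown earlier on this page can be combined into per-severity rules, for example as sketched below. Splitting routing across separate rules is just one approach; whether a single rule can route per level is not covered here, and the channel targets are illustrative.

// Illustrative routing sketch reusing the rule fields shown earlier
// (name, filters.level, notifications). Targets are placeholders.
[
  {
    "name": "Page on-call for fatal errors",
    "filters": { "level": ["fatal"] },
    "notifications": [{ "channel": "pagerduty", "target": "PXXXXXX" }]
  },
  {
    "name": "Warnings to Slack",
    "filters": { "level": ["warning"] },
    "notifications": [{ "channel": "slack", "target": "#monitoring" }]
  },
  {
    "name": "Error summary by email",
    "filters": { "level": ["error"] },
    "notifications": [{ "channel": "email", "target": "team@company.com" }]
  }
]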