Pro Plan · 10 minutes · Beginner

Setting Performance Alerts

Configure alerts for performance regressions, error spikes, and Core Web Vital degradations.

Tags: alerts, monitoring, notifications, performance, thresholds
Last updated: January 15, 2025

Set up alerts to be notified when performance degrades, errors spike, or Core Web Vitals breach thresholds.

Alert Types

Available Alerts

Alert Type              Description            Use Case
Web Vital Threshold     CWV exceeds target     LCP > 2.5s
Performance Regression  Metrics worsen         Load time +20%
Error Spike             Error rate increases   2x normal errors
New Error Type          First occurrence       Novel error detected
Uptime                  Site goes down         HTTP 5xx, timeout
Traffic Anomaly         Unusual traffic        Spike or drop

Creating Alerts

  1. Go to Settings → Alerts
  2. Click Create Alert
  3. Select alert type
  4. Configure conditions
  5. Set notification method
  6. Save

Alert Configuration

Create Performance Alert

Name: [Mobile LCP Warning]

Condition:
  Metric: [LCP]
  Device: [Mobile]
  Threshold: [> 2.5s]
  Duration: [15 minutes]

Comparison:
  Compare to: [Last 7 days average]
  Threshold: [20% increase]

Notification:
  [x] Email: team@example.com
  [x] Slack: #alerts
  [ ] Webhook: https://...

Schedule:
  [x] All times
  [ ] Business hours only

Web Vital Alerts

LCP Alert

LCP Performance Alert

When LCP exceeds 2.5s for 15+ minutes:
  - Compare to: Previous 7-day average
  - Threshold: 20% regression
  - Pages: All pages (or specific URLs)
  - Device: All (or Mobile only)

INP Alert

INP Interactivity Alert

When INP exceeds 200ms:
  - Condition: 75th percentile > 200ms
  - Duration: 30 minutes
  - Pages: Interactive pages (/checkout, /search)

CLS Alert

CLS Stability Alert

When CLS exceeds 0.1:
  - Condition: Average CLS > 0.1
  - Duration: 1 hour
  - Comparison: vs last week
  - Threshold: 50% increase
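The percentage comparisons used throughout these alerts ("50% increase vs last week", "20% regression") reduce to a simple percent-change check. A minimal Python sketch; the function names are illustrative, not part of any Zenovay API:

```python
def regression_pct(current: float, baseline: float) -> float:
    """Percent change of the current value relative to the baseline window."""
    return (current - baseline) / baseline * 100.0

def breaches(current: float, baseline: float, threshold_pct: float) -> bool:
    """True when the regression meets or exceeds the configured percent threshold."""
    return regression_pct(current, baseline) >= threshold_pct
```

For the CLS example above: a reading of 0.16 against a 7-day baseline of 0.10 is a ~60% increase, so a 50% threshold would trigger.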

Error Alerts

Error Spike Alert

Error Spike Detection

When error rate increases significantly:
  - Threshold: 2x normal rate
  - Time window: Last 15 minutes
  - Comparison: Same time yesterday
  - Exclude: Known/ignored errors
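The spike condition above boils down to a ratio check against the comparison window. A hedged sketch; `is_error_spike` and its inputs are illustrative, not a Zenovay API:

```python
def is_error_spike(current_count: int, baseline_count: int,
                   multiplier: float = 2.0) -> bool:
    """True when the current window's error count (already filtered of
    known/ignored errors) is at least `multiplier` times the comparison
    window, e.g. the same 15 minutes yesterday."""
    if baseline_count == 0:
        # Avoid division by zero: any errors against a clean baseline count as a spike.
        return current_count > 0
    return current_count / baseline_count >= multiplier
```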

New Error Type

New Error Detection

When a new error type is first seen:
  - Trigger: First occurrence
  - Minimum occurrences: 3 (filter noise)
  - Severity: All / Critical only

Error Threshold

Error Count Threshold

When total errors exceed limit:
  - Threshold: 100 errors/hour
  - Pages: All
  - Types: All error types

Uptime Alerts

Site Down Alert

Uptime Alert

When site becomes unavailable:
  - Check interval: 1 minute
  - Failure threshold: 2 consecutive failures
  - Locations: US, EU, APAC
  - Alert on: HTTP 5xx, Timeout, DNS failure

See Uptime Monitoring for detailed setup.
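For intuition, the probe logic above (alert on 5xx, timeout, or DNS failure, but only after two consecutive failed checks) can be sketched with the standard library. This is an illustration of the rule, not how Zenovay's checkers are implemented:

```python
import urllib.error
import urllib.request

def check_once(url: str, timeout: float = 10.0) -> bool:
    """Single probe: True when the site answers with a non-5xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as exc:
        return exc.code < 500
    except (urllib.error.URLError, TimeoutError):
        # Covers DNS failures and timeouts, matching the alert conditions above.
        return False

def should_alert(results: list[bool], failure_threshold: int = 2) -> bool:
    """Fire only after the configured number of consecutive failures."""
    if len(results) < failure_threshold:
        return False
    return all(not ok for ok in results[-failure_threshold:])
```

A single failed check (e.g. a transient network blip at one location) does not page anyone; only a run of failures does.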

Notification Channels

Email

Email Notification

Recipients: team@example.com, oncall@example.com
Format: HTML with charts
Frequency: Immediate (or digest)

Slack

  1. Go to Settings → Integrations → Slack
  2. Click Connect Slack
  3. Select workspace
  4. Choose default channel
  5. Enable for alerts

Slack Alert Format

🚨 Performance Alert: Mobile LCP Regression

Metric: LCP
Current: 3.2s (↑ 28% vs last week)
Threshold: 2.5s
Affected: Mobile users (38% of traffic)

[View Dashboard →]

Webhook

Webhook Configuration

URL: https://api.example.com/webhooks/alerts
Method: POST
Headers:
  Authorization: Bearer xxx
  Content-Type: application/json

Payload:
{
  "alert_id": "alt_abc123",
  "type": "performance_regression",
  "metric": "lcp",
  "current_value": 3.2,
  "threshold": 2.5,
  "triggered_at": "2025-01-15T10:30:00Z",
  "website_id": "site_xyz",
  "url": "https://app.zenovay.com/..."
}
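A receiving endpoint only needs to validate and route this payload. A minimal standard-library sketch built around the sample fields above; the validation rules and routing are illustrative:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUIRED_FIELDS = {"alert_id", "type", "metric", "current_value", "threshold"}

def parse_alert(body: bytes):
    """Parse and validate an alert payload; return None when malformed."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return None
    if not isinstance(payload, dict) or not REQUIRED_FIELDS <= payload.keys():
        return None
    return payload

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        alert = parse_alert(self.rfile.read(length))
        if alert is None:
            self.send_response(400)
        else:
            # Route to your paging/logging system here.
            self.send_response(200)
        self.end_headers()

# To run: HTTPServer(("", 8080), AlertHandler).serve_forever()
```

In production you would also verify the `Authorization` header you configured before trusting the payload.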

PagerDuty

  1. Go to Settings → Integrations → PagerDuty
  2. Enter integration key
  3. Map severity levels
  4. Test connection

Microsoft Teams

  1. Create incoming webhook in Teams
  2. Add webhook URL in Zenovay
  3. Configure message format

Alert Scheduling

Time-Based Rules

Alert Schedule

Name: Business Hours Only
Times:
  Monday-Friday: 9:00 AM - 6:00 PM
  Saturday-Sunday: Disabled

Timezone: America/New_York
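The schedule above amounts to a timezone-aware weekday-and-hour check. A small sketch using Python's `zoneinfo`; the values mirror the example schedule:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def in_business_hours(now: datetime, tz: str = "America/New_York") -> bool:
    """True Monday-Friday, 9:00 AM-6:00 PM in the schedule's timezone."""
    local = now.astimezone(ZoneInfo(tz))
    return local.weekday() < 5 and 9 <= local.hour < 18
```

Evaluating in the schedule's timezone (not UTC) is what keeps the window correct across daylight-saving transitions.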

Escalation Rules

Escalation Policy

Level 1 (0-15 min):
  - Email: dev-team@example.com
  - Slack: #alerts

Level 2 (15-30 min, unacknowledged):
  - PagerDuty: On-call engineer
  - SMS: +1-555-0123

Level 3 (30+ min, unresolved):
  - Email: engineering-manager@example.com
  - PagerDuty: Critical priority
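The policy above maps time-since-trigger and acknowledgement state to a level. A hedged sketch of that mapping, using the example's timing values:

```python
def escalation_level(minutes_open: int, acknowledged: bool, resolved: bool) -> int:
    """Return the escalation level (1-3) for an alert, or 0 when resolved."""
    if resolved:
        return 0
    if minutes_open >= 30:
        return 3  # unresolved past 30 min: manager + critical page
    if minutes_open >= 15 and not acknowledged:
        return 2  # unacknowledged past 15 min: page the on-call
    return 1      # initial notification: email + Slack
```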

Alert Management

Viewing Active Alerts

  1. Go to Alerts → Active
  2. See current triggered alerts
  3. View timeline
  4. Acknowledge or resolve

Alert States

State          Description
Triggered      Condition met, notification sent
Acknowledged   Team aware, investigating
Resolved       Issue fixed, alert cleared
Snoozed        Temporarily disabled

Acknowledging Alerts

Acknowledge Alert

Alert: Mobile LCP Regression
Acknowledged by: jane@example.com
Time: 10:45 AM

Note: "Investigating - may be related to recent deploy"

Snoozing Alerts

Temporarily disable during known issues:

  1. Click alert → Snooze
  2. Select duration: 1 hour, 4 hours, 24 hours
  3. Add reason
  4. Alert auto-resumes after snooze
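The auto-resume behavior is just an expiry check on the snooze window; a minimal sketch:

```python
from datetime import datetime, timedelta, timezone

def snooze_active(snoozed_at: datetime, duration_hours: float,
                  now: datetime) -> bool:
    """True while the snooze window is open; the alert resumes once it passes."""
    return now < snoozed_at + timedelta(hours=duration_hours)
```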

Alert History

View Past Alerts

  1. Go to Alerts → History
  2. Filter by:
    • Date range
    • Alert type
    • Status
    • Team member

Alert Analytics

Alert Summary - Last 30 Days

Total Alerts: 23
  - Performance: 12
  - Errors: 8
  - Uptime: 3

Mean Time to Acknowledge: 8 minutes
Mean Time to Resolve: 45 minutes

Most Common:
  1. LCP Regression (7 times)
  2. Error Spike (5 times)
  3. Mobile CLS (4 times)

Best Practices

Alert Hygiene

  1. Avoid alert fatigue: Too many alerts = ignored alerts
  2. Set meaningful thresholds: Not too sensitive
  3. Group related alerts: Reduce noise
  4. Regular review: Adjust based on data

Threshold Guidelines

Metric       Warning       Critical
LCP          > 2.5s        > 4s
INP          > 200ms       > 500ms
CLS          > 0.1         > 0.25
Error Rate   2x baseline   5x baseline
Load Time    +20%          +50%
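These guidelines encode naturally as a two-tier classifier; a sketch using the Core Web Vitals rows of the table (units noted in comments):

```python
THRESHOLDS = {
    "lcp": (2.5, 4.0),    # seconds
    "inp": (200, 500),    # milliseconds
    "cls": (0.1, 0.25),   # unitless layout-shift score
}

def classify(metric: str, value: float) -> str:
    """Map a reading to "ok", "warning", or "critical" per the table above."""
    warning, critical = THRESHOLDS[metric]
    if value > critical:
        return "critical"
    if value > warning:
        return "warning"
    return "ok"
```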

Effective Alerting

Alert Design Principles

āœ“ Actionable: Can you do something about it?
āœ“ Specific: Clear what's wrong
āœ“ Timely: Catch issues early
āœ“ Relevant: Affects real users
āœ— Noisy: Alerts for non-issues
āœ— Vague: "Something is wrong"
āœ— Delayed: After damage done

Integrations

CI/CD Integration

Catch performance changes introduced by a deploy by checking your analytics data right after it ships:

# GitHub Actions example - check analytics after deploy
- name: Verify Post-Deploy Analytics
  run: |
    curl -X GET "https://api.zenovay.com/api/external/v1/analytics/${{ vars.ZENOVAY_WEBSITE_ID }}" \
      -H "X-API-Key: ${{ secrets.ZENOVAY_API_KEY }}"

For automated performance regression alerts, configure alert rules in the Zenovay dashboard under Settings → Alerts. Alerts will notify your team via email, Slack, or webhook when performance degrades after a deployment.
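One way to turn that check into a hard deploy gate: fetch the analytics payload and fail the job when a performance budget is exceeded. The response field name (`lcp_p75`) and its units are assumptions, not a documented Zenovay schema; adjust them to the real analytics payload:

```python
import json
import urllib.request

def fetch_analytics(website_id: str, api_key: str) -> dict:
    """Same call as the curl step above, from Python."""
    req = urllib.request.Request(
        f"https://api.zenovay.com/api/external/v1/analytics/{website_id}",
        headers={"X-API-Key": api_key},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def gate(analytics: dict, lcp_budget: float = 2.5) -> bool:
    """Pass when post-deploy p75 LCP (assumed field, seconds) stays under budget."""
    return analytics.get("lcp_p75", 0.0) <= lcp_budget
```

In a workflow step you would call `gate(fetch_analytics(...))` and exit non-zero on failure so the pipeline surfaces the regression.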

Incident Management

Connect to incident management tools:

  • PagerDuty: On-call routing
  • Opsgenie: Team scheduling
  • VictorOps: Escalations
  • ServiceNow: Ticket creation

Troubleshooting

Not Receiving Alerts

Check:

  • Notification channel configured
  • Email not in spam
  • Slack channel permissions
  • Webhook endpoint responding

Too Many Alerts

Adjust:

  • Increase thresholds
  • Add duration requirements
  • Use percentage-based thresholds
  • Group similar alerts

False Positives

Review:

  • Threshold appropriateness
  • Baseline calculation
  • Traffic patterns
  • Known maintenance windows

Alert Templates

Performance Regression

Template: Performance Regression

Condition: Any core metric degrades 20%+
Duration: 15 minutes
Compare to: 7-day average
Notify: Dev team (Slack)
Escalate: After 30 minutes

Error Spike

Template: Error Spike

Condition: Error rate 3x normal
Duration: 5 minutes
Compare to: Same time last week
Notify: On-call (PagerDuty)
Escalate: Immediately

Critical Page

Template: Critical Page Monitor

Pages: /checkout, /payment, /signup
Conditions:
  - LCP > 3s
  - Error rate > 1%
  - Availability < 99.9%
Notify: Critical alerts channel
Escalate: VP Engineering after 15 min
