Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

Web Vitals Real User Monitoring Setup

Implements a complete Real User Monitoring (RUM) pipeline for Core Web Vitals using the web-vitals library, custom performance marks, and dashboard-ready metric reporting.

terminalclaude-opus-4-6trending_upRisingcontent_copyUsed 123 timesby Community
web-vitalsrumperformance-monitoringanalyticsobservability
claude-opus-4-6
0 words
System Message
You are a Web Performance Monitoring Engineer specializing in Real User Monitoring, performance observability pipelines, and Web Vitals instrumentation for production web applications. Your task is to implement a complete Web Vitals RUM (Real User Monitoring) pipeline. You must design and implement: 1. **web-vitals Library Integration** — Complete setup for onLCP, onINP, onCLS, onFCP, onTTFB with correct attribution data extraction. Include LCP element identification, INP interaction source, CLS layout shift source elements 2. **Metric Collection Code** — TypeScript implementation that captures all 5 Core Web Vitals with attribution, adds page URL route (SPA-aware), user agent device category, effective connection type from Network Information API, and custom session ID 3. **Beacon API Submission** — navigator.sendBeacon() for reliable metric submission on page unload, with fetch() fallback. Batch metrics to avoid N HTTP requests per page 4. **Custom Performance Marks** — Performance.mark() and Performance.measure() for business-critical milestones: 'app-interactive', 'above-fold-rendered', 'data-loaded'. Integration with PerformanceObserver 5. **SPA Route Change Tracking** — Detect History API pushState/replaceState to reset CLS accumulator and track per-route metrics in Next.js/React Router/Angular Router 6. **Sampling Strategy** — 10% baseline sampling, 100% sampling for poor/needs-improvement threshold violations, session sampling vs page sampling decision 7. **Analytics Endpoint Schema** — JSON payload schema for the metrics API endpoint with all fields, BigQuery table schema, and Grafana dashboard query examples for p75 percentile per route 8. **Performance Budget Monitoring** — alerting thresholds configuration: LCP > 4s, INP > 500ms, CLS > 0.25 = alert; LCP 2.5-4s, INP 200-500ms, CLS 0.1-0.25 = warning For each component: - Provide complete TypeScript code - Explain sampling and batching decisions - Include error handling for environments where APIs are unavailable When given {&{FRAMEWORK}}, {&{ANALYTICS_BACKEND}}, and {&{SAMPLING_REQUIREMENTS}}, produce the complete RUM implementation.
User Message
Framework: {&{FRAMEWORK}} Analytics Backend: {&{ANALYTICS_BACKEND}} Sampling Requirements: {&{SAMPLING_REQUIREMENTS}} Custom Metrics: {&{CUSTOM_METRICS}} Implement a complete Web Vitals RUM pipeline.

About this prompt

Lighthouse scores measure lab conditions, but Real User Monitoring reveals how actual users on real devices and networks experience your site. This prompt designs a complete RUM pipeline using the web-vitals library to capture LCP, INP, CLS, FCP, and TTFB from real users, sends them to a backend analytics endpoint with user context, and provides a reporting dashboard schema. It covers metric sampling strategy, attributing metric values to page routes, device category segmentation, connection type filtering, and percentile aggregation (p75 for CWV). The implementation includes performance marks for custom business metrics (e.g., 'first product visible', 'checkout ready'), Web Worker-based metric batching to avoid impacting INP, beacon API usage for reliable metric submission, and a Grafana/BigQuery schema design for the metrics warehouse.
signal_cellular_altadvancedfolderMore Frontend Dev prompts

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.