Real-Time and Predictive Reporting
Replace monthly reports with a live performance dashboard and threshold-based alerts that surface problems while there's still time to act.
The business case
Most firm leaders are making decisions based on data that's 30 - 60 days old. The monthly report that lands on the CEO's desk reflects last month's reality, not this week's. By the time a problem appears - pipeline below target, billing rate declining, a key account going quiet - it's usually been a problem for weeks. The report confirms what has already happened. It doesn't give you enough lead time to respond.

The underlying data exists. It lives in the CRM, the billing system, the project management tool. The problem is that pulling it together into a coherent picture requires manual work on a monthly schedule. By the time the picture is assembled, the moment it describes has already passed.

This Play builds a live performance dashboard with threshold-based alerts that fire when a metric moves outside its defined range.
What this play does
n8n runs scheduled workflows that pull current data from your CRM, billing system, and project management tools at defined intervals - hourly for fast-moving metrics, daily for everything else. The data flows into a dashboard tool (Grafana, Metabase, or Google Looker Studio) that maintains a live view of your defined metrics. In parallel, n8n monitors each metric against its defined threshold. When a metric crosses a threshold - pipeline drops below target, a key account hasn't been contacted in 45 days, billable hours fall below target - an alert fires to the responsible person with context: what the metric is, where it moved, and what the recent pattern looks like.
Before and after
Before
The managing partner receives a monthly report on the 10th. It shows September was below target on billed revenue. Someone investigates and finds two partners had lower-than-expected billings in the last two weeks of September. It's now mid-October. The correction is already a month behind.
After
On September 19th, the system detects that billed revenue for the month is trending 18% below target. An alert fires: "Monthly billings on track for $282K vs. $348K target. Two active projects showing no logged time entries in 8+ days: [project names]." The managing partner addresses it on September 20th. The month ends on target.
Business impact
The financial impact of early warning is difficult to isolate but real. Firms that build this Play consistently report that the biggest value is in the alerts - specifically, the confidence that if something important moves in the wrong direction, someone will know before it becomes a problem. That changes how leadership operates: less time pulling manual reports, more confidence in decisions, and shorter recovery times from operational problems, because those problems surface while there's still time to respond.
Prerequisites
Complete these before opening n8n. Skipping prerequisites is how you end up rebuilding workflows.
Define exactly 5 - 7 metrics before building
The temptation is to build a comprehensive dashboard. The result is usually one nobody uses. Define the 5 - 7 metrics that, if they move, require someone to act. 'Pipeline health' is not a metric. 'Total pipeline value in proposal stage' is. Be specific.
Map each metric to its source system
For each metric, identify exactly which system it comes from and confirm API access for that system. CRM metrics come from your CRM API. Billing metrics come from your billing system. Time tracking metrics come from your time tracking tool. Confirm access before building.
Pull 12 months of historical data before setting thresholds
Thresholds set without reference to actual historical patterns will either fire too often or too rarely. Before setting any threshold, know what 'normal' looks like for your firm. What's the typical range for monthly billings? What's the normal pipeline-to-revenue ratio?
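One way to turn that historical pull into a concrete threshold is a simple mean-and-deviation calculation. The sketch below is a minimal example, assuming 12 months of metric values are already loaded from your own data store; the billing figures and the two-standard-deviation rule are illustrative, not a recommendation for your firm.

```javascript
// Sketch: calibrate an alert threshold from 12 months of metric history.
// `monthlyBillings` holds hypothetical monthly billing totals in $K,
// used only for illustration - substitute your own pulled data.
function calibrateThreshold(history, stdDevs = 2) {
  const mean = history.reduce((sum, v) => sum + v, 0) / history.length;
  const variance =
    history.reduce((sum, v) => sum + (v - mean) ** 2, 0) / history.length;
  const stdDev = Math.sqrt(variance);
  // Alert when the metric falls more than `stdDevs` standard deviations
  // below its historical mean.
  return { mean, stdDev, lowerThreshold: mean - stdDevs * stdDev };
}

const monthlyBillings = [310, 295, 340, 325, 360, 330, 315, 348, 355, 320, 335, 342];
const calibrated = calibrateThreshold(monthlyBillings);
// mean is $331.25K; the lower alert threshold lands around $294K
```

A percentile-based range works just as well if your metrics are seasonal; the point is that the threshold comes from observed history, not from a guess.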
Confirm source data quality
A dashboard is only as good as the data feeding it. Run a data quality check on each source system before connecting it. This Play comes last in the sequence for a reason - the data foundation it depends on is built by the earlier Plays, particularly Play One.
Step-by-step implementation
The steps below are the full build guide. Each step includes configuration notes and exact AI prompts where applicable.
Define and document your metrics
Before opening any tool, document each metric with: the exact name, how it's calculated, which system it comes from, which API endpoint provides the data, the normal range (based on 12 months of historical data), the threshold that triggers an alert, who receives the alert, and what action they're expected to take.

For most professional services firms, the core metrics fall into four categories:

**Revenue metrics**: Monthly billed vs. target (daily tracking), pipeline value by stage, average engagement size, close rate on proposals.

**Operations metrics**: Time-to-first-response on new leads (from Play Two), CRM field completeness (from Play One), invoice aging by bucket (from Play Six), new client onboarding cycle time.

**Team metrics**: Billable hours by person vs. target, utilization rate, capacity for new work.

**Client health metrics**: Key accounts with no logged activity in 45+ days, open service issues by age, accounts where sentiment has trended negative for 2+ consecutive interactions.

Start with one metric from each category - four metrics total. Get those working before adding more.
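A documented metric can live as a small config record that the workflows later read. This is a sketch only - the field names and the endpoint path are illustrative assumptions, not a specific vendor's API.

```javascript
// Sketch: one fully documented metric as a config record.
// Every field here maps to an item in the documentation checklist above.
const metricDefinition = {
  name: "monthly_billing_pace",
  calculation: "sum of paid invoices this month / monthly target * 100",
  sourceSystem: "billing",
  endpoint: "/api/v1/invoices?issued_after=<first_of_month>", // hypothetical path
  normalRange: { low: 85, high: 115 },  // percent of target, from 12 months of history
  alertThreshold: 85,                   // fire when pace drops below this
  alertRecipient: "billing_manager",
  expectedAction: "Review projects with no logged time entries in 8+ days",
};
```

Keeping definitions in one place like this means a threshold change is a one-line edit rather than a hunt through workflow nodes.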
Build the data collection workflows
Create a separate n8n workflow for each data category, running on appropriate schedules:

- Hourly: new lead response time, new unread VIP messages (from Play Eight)
- Every 4 hours: billing system invoice status, CRM activity count
- Daily (6 AM): all other metrics - pipeline values, billing actuals, utilization rates, account health signals

Each workflow queries the source system API, calculates the metric value, and writes it to a central data store. A Supabase database works well for most firms - it can store time-series metric data and serve it to your dashboard tool in real time. A Google Sheet works as an alternative for firms not ready to manage a database.

The data store structure: metric_name, value, timestamp, period (daily/weekly/monthly), source_system. Each workflow adds a new row for each metric on each run.
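The row each workflow appends can be built in an n8n Code node. A minimal sketch, matching the data store structure described above; the actual write (a Supabase insert or Sheets append) is left as a comment because it depends on your chosen store.

```javascript
// Sketch: shape of the row a collection workflow appends to the central
// data store on each run.
function buildMetricRow(metricName, value, period, sourceSystem) {
  return {
    metric_name: metricName,
    value,
    timestamp: new Date().toISOString(),
    period,                 // "daily" | "weekly" | "monthly"
    source_system: sourceSystem,
  };
}

// 81% of monthly target, computed earlier in the workflow (hypothetical value)
const row = buildMetricRow("monthly_billing_pace", 81, "daily", "billing");
// In n8n, a Supabase node (or HTTP Request node) would insert `row`
// into a `metrics` table at this point.
```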
Configuration Notes
Example: Billing metric workflow (daily)
1. Schedule trigger: 6:00 AM daily
2. Query billing system API for all invoices issued in current month
3. Calculate: sum of paid invoices = billed_to_date, target = monthly_target field
4. Calculate: billed_to_date / monthly_target * 100 = percent_of_target
5. Calculate: (billed_to_date / day_of_month) * days_in_month = projected_month_end
6. Write to data store: {metric: "monthly_billing_pace", value: percent_of_target, projection: projected_month_end, timestamp: now}
7. Check: if percent_of_target < alert_threshold → trigger alert workflow

Build the dashboard
Connect your data store to your dashboard tool. Google Looker Studio is free and connects to Google Sheets, Supabase, and most major databases. Metabase is open-source and self-hosted. Grafana works well if you're comfortable with more technical setup.

Build one view per metric category - not one view for everything. Cluttered dashboards get ignored. Each view shows:

- Current value vs. target (large, prominent)
- Trend for the last 30 days (line chart)
- Alert threshold line (horizontal reference line)
- Date of last update (to confirm data is current)

Set the dashboard as the default view for leadership to check every morning. The goal is for checking the dashboard to take 2 minutes, not 20.
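The 30-day trend line each view plots is just a filtered, sorted slice of the data store. A sketch under the row structure described earlier; the sample rows are hypothetical.

```javascript
// Sketch: derive the 30-day trend series for one metric from data store rows.
function trendSeries(rows, metricName, days = 30) {
  const cutoff = Date.now() - days * 24 * 60 * 60 * 1000;
  return rows
    .filter(r => r.metric_name === metricName && Date.parse(r.timestamp) >= cutoff)
    .sort((a, b) => Date.parse(a.timestamp) - Date.parse(b.timestamp))
    .map(r => ({ date: r.timestamp.slice(0, 10), value: r.value }));
}

const DAY = 24 * 60 * 60 * 1000;
const sampleRows = [
  { metric_name: "monthly_billing_pace", value: 95, timestamp: new Date(Date.now() - 40 * DAY).toISOString() }, // outside window
  { metric_name: "monthly_billing_pace", value: 92, timestamp: new Date(Date.now() - 2 * DAY).toISOString() },
  { metric_name: "utilization_rate",     value: 71, timestamp: new Date(Date.now() - 1 * DAY).toISOString() },  // different metric
  { metric_name: "monthly_billing_pace", value: 88, timestamp: new Date(Date.now() - 1 * DAY).toISOString() },
];
const series = trendSeries(sampleRows, "monthly_billing_pace");
// series keeps only the two in-window billing points, oldest first
```

In practice the dashboard tool runs an equivalent query itself; this shape is also what the alert workflow can reuse for its "recent trend" context.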
Build the threshold alert system
In each data collection workflow, after writing to the data store, add a threshold check. If the metric value crosses the defined threshold, trigger an alert workflow. The alert workflow generates a brief context message (current value, threshold, recent trend, who owns the response) and delivers it via the configured channel (Slack message, email, SMS via Twilio).

Alert message structure: what metric crossed, what it moved to, what the recent trend looks like, who's responsible for acting, and a direct link to the relevant dashboard view. Keep alerts to 3 - 5 sentences - if they're too long, people stop reading them.

Route each alert to the person responsible for acting on that specific metric. Billing alerts go to the CFO or billing manager. Pipeline alerts go to the managing partner. Utilization alerts go to the practice leader. Routing everything to one person creates alert fatigue that causes the alerts to be ignored.
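The check-and-route step can be sketched as a small routing table plus a comparison. The metric names, owners, and Slack channels below are illustrative assumptions - map them to your own people and channels.

```javascript
// Sketch: threshold check and owner routing, run after each data-store write.
const alertRouting = {
  monthly_billing_pace:    { owner: "CFO",              channel: "#billing-alerts" },
  pipeline_value_proposal: { owner: "Managing Partner", channel: "#pipeline-alerts" },
  utilization_rate:        { owner: "Practice Leader",  channel: "#ops-alerts" },
};

function checkThreshold(metricName, value, threshold, direction = "below") {
  const breached = direction === "below" ? value < threshold : value > threshold;
  if (!breached) return null; // nothing to do - no alert fires
  const route = alertRouting[metricName] || { owner: "unassigned", channel: "#general" };
  return {
    metric: metricName,
    value,
    threshold,
    owner: route.owner,
    channel: route.channel, // an n8n Slack node would post the message here
  };
}

// 81% of target against an 85% threshold -> alert routed to the CFO
const billingAlert = checkThreshold("monthly_billing_pace", 81, 85);
```

Because each metric carries its own route, adding a metric never widens anyone else's alert stream - which is what keeps alert fatigue down.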
AI Prompt
Generate a brief, actionable alert message for a performance metric that has crossed its threshold.
Metric: {{metric_name}}
Current value: {{current_value}}
Target/threshold: {{target_value}}
Direction of concern: {{direction}} (above or below threshold)
Recent trend (last 7 days): {{trend_description}}
Responsible owner: {{owner_name}}
Dashboard link: {{dashboard_url}}
Write a 3-5 sentence alert message that:
1. States what metric moved and by how much (specific numbers)
2. Provides brief context - has this been trending this direction, or is this a sudden change?
3. States what the owner should look at or do
4. Includes the dashboard link
Write in direct, business language. No filler. The owner receiving this is busy - they need to understand the situation and know what to do in under 30 seconds.

Week-by-week rollout plan
Week 1

- Define 5 - 7 metrics with exact calculations, sources, and thresholds.
- Confirm API access for each source system.
- Pull 12 months of historical data to calibrate thresholds.

Week 2

- Build data collection workflows for each metric category.
- Set up central data store.
- Run for 3 - 5 days to verify data accuracy before building the dashboard.

Week 3

- Build dashboard in your chosen tool.
- Build threshold alert workflows.
- Test alerts by temporarily lowering thresholds to confirm delivery and format.
- Share dashboard link with leadership team.

Week 4

- Run alerts in shadow mode for one week (deliver to yourself, not recipients) to verify accuracy.
- Activate alerts for all recipients. Calibrate thresholds based on first two weeks of data.
Success benchmarks
These are the specific, measurable signals that confirm the play is working. Check against each benchmark at the 30-, 60-, and 90-day mark.
Common mistakes
Trying to track too many metrics at launch
Start with 5 - 7 metrics. The temptation is to build a comprehensive dashboard that covers everything. The result is one nobody uses. Build narrowly, prove value, then expand.
Building before source data quality is acceptable
A live dashboard connected to a CRM with 40% data completeness will display 40%-complete information with the appearance of authority. Fix the data problems first. Play One runs first for this reason.
Setting thresholds without historical data
Thresholds set without reference to actual historical patterns will fire too often or too rarely. Pull 12 months of data and understand what 'normal' looks like before defining 'abnormal.'
Routing all alerts to the managing partner
Not every alert requires the managing partner's attention. Routing everything to one person creates inbox fatigue that causes the alerts to be ignored. Map each alert to the person who actually owns the metric.
Exception rule
Read before going live
Dashboard data is only as reliable as the source data feeding it. If your CRM is incomplete, your time tracking is inconsistent, or your billing system has reconciliation gaps, the dashboard will reflect those problems and give them the appearance of authority. This Play comes last in the sequence because the data foundation it depends on is built by the earlier Plays.
Downloads & Templates
Play 12 Predictive Reporting Workflow (n8n JSON)
Ready-to-import n8n workflow file that aggregates data from tools to forecast utilization and revenue risks.
Revenue Institute
Want someone to build this play for your firm? Revenue Institute implements the full AI Workforce Playbook system as part of every engagement.