WBS Section 6 of 7
Screening Analytics
Track AI quality, measure efficiency gains, and validate screening outcomes.
Feeds directly into the AI Evaluation Plan's post-launch dashboard.
Legend: Start / End · Process · Decision · Output / Display · Jay (Decision Maker)
Screening activity generates data
Aggregate screening events: scores, overrides, timestamps, approvals, rejections, failures
Three metric streams:
① Override Tracking
• Override rate (%)
• Avg score delta (±)
• Direction: up vs. down
• Distribution by score range
Decision: override rate > 30%?
Yes → ⚠ UI warning: review rubric/prompt
Pattern detection (sketched below):
Consistently ↑ = rubric too strict
Consistently ↓ = rubric too lenient
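A minimal Python sketch of how stream ① could be derived from raw screening events, including the >30% warning and the pattern signal. The ScreeningEvent shape and its field names are assumptions for illustration, not the product's actual schema.

```python
from collections import Counter
from dataclasses import dataclass
from statistics import mean

@dataclass
class ScreeningEvent:
    ai_score: float      # 0-10 score the AI assigned (assumed scale)
    final_score: float   # score after any reviewer override
    overridden: bool     # True if a reviewer changed the score

def override_metrics(events: list[ScreeningEvent]) -> dict:
    """Stream ①: override rate, avg signed delta, direction, distribution."""
    overrides = [e for e in events if e.overridden]
    deltas = [e.final_score - e.ai_score for e in overrides]
    return {
        "override_rate": len(overrides) / len(events) if events else 0.0,
        "avg_score_delta": mean(deltas) if deltas else 0.0,
        "direction": {"up": sum(d > 0 for d in deltas),
                      "down": sum(d < 0 for d in deltas)},
        # distribution of overrides by the AI's original score bucket
        "by_score_range": Counter(int(e.ai_score) for e in overrides),
    }

def rubric_signal(m: dict) -> str | None:
    """Mirror the >30% decision node and the pattern-detection note."""
    if m["override_rate"] <= 0.30:
        return None
    if m["avg_score_delta"] > 0:
        return "⚠ Reviewers consistently raise scores: rubric/prompt may be too strict"
    return "⚠ Reviewers consistently lower scores: rubric/prompt may be too lenient"
```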
② Screening Throughput
• Resumes processed / role
• Avg time: upload → shortlist
• Resumes / hour rate
• Parse & scoring failure rates
Baseline comparison: manual 80–100 hrs/hire vs. actual AI-assisted time (sketched below)
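A sketch of the stream ② computations under the same caveat: the per-resume dict fields (role, uploaded_at, shortlisted_at, parse_failed, scoring_failed) are hypothetical. The baseline helper uses 90 hours, the midpoint of the 80–100 hrs/hire manual figure above.

```python
from collections import Counter
from datetime import timedelta

def throughput_metrics(resumes: list[dict]) -> dict:
    """Stream ②: volume, cycle time, hourly rate, and failure rates."""
    total = len(resumes)
    done = [r for r in resumes if r["shortlisted_at"] is not None]
    cycles = [r["shortlisted_at"] - r["uploaded_at"] for r in done]
    # wall-clock span from first upload to last shortlist, for the hourly rate
    span_hours = 0.0
    if done:
        span = max(r["shortlisted_at"] for r in done) - min(r["uploaded_at"] for r in resumes)
        span_hours = span.total_seconds() / 3600
    return {
        "resumes_per_role": Counter(r["role"] for r in resumes),
        "avg_upload_to_shortlist": sum(cycles, timedelta()) / len(cycles) if cycles else None,
        "resumes_per_hour": len(done) / span_hours if span_hours else 0.0,
        "parse_failure_rate": sum(r["parse_failed"] for r in resumes) / total if total else 0.0,
        "scoring_failure_rate": sum(r["scoring_failed"] for r in resumes) / total if total else 0.0,
    }

def estimated_hours_saved(hires: int, ai_assisted_hours: float,
                          manual_baseline: float = 90.0) -> float:
    """Baseline comparison: 90 = midpoint of the 80-100 hrs/hire manual figure."""
    return hires * manual_baseline - ai_assisted_hours
```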
③ Funnel Conversion
• % advancing each stage
• Approved → Phone → Tech → Panel → Offer → Hired
• Breakdown by AI score range
Calibration signal:
Do candidates scored 9–10 convert at higher rates than those scored 7–8?
If not → scores aren't predictive; review rubric/prompt (see the calibration check below)
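A sketch of stream ③'s stage-to-stage conversion broken down by AI score band, plus the calibration check described above. Stage names, score bands, and candidate fields are illustrative assumptions.

```python
import math

STAGES = ["approved", "phone", "tech", "panel", "offer", "hired"]
BANDS = {"9-10": (9, 10), "7-8": (7, 8), "0-6": (0, 6)}  # assumed buckets

def funnel_by_band(candidates: list[dict]) -> dict:
    """Stream ③: % advancing at each stage, per AI score band.
    Each candidate dict is assumed to carry ai_score and last_stage."""
    out = {}
    for band, (lo, hi) in BANDS.items():
        cohort = [c for c in candidates if lo <= c["ai_score"] <= hi]
        # how many candidates reached at least stage i (index 0 = approved)
        reached = [sum(STAGES.index(c["last_stage"]) >= i for c in cohort)
                   for i in range(len(STAGES))]
        # conversion rate from each stage to the next
        out[band] = [n / d if d else 0.0 for d, n in zip(reached, reached[1:])]
    return out

def calibration_ok(funnel: dict) -> bool:
    """Calibration signal: 9-10 candidates should out-convert 7-8 end to end."""
    end_to_end = lambda rates: math.prod(rates)  # approved → hired
    return end_to_end(funnel["9-10"]) > end_to_end(funnel["7-8"])
```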
Leo's Analytics Dashboard
All 3 streams unified
Filterable by role + time period
CSV export available
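One way the dashboard's role/time-period filter and CSV export might look; the row fields and function name are hypothetical, not Leo's actual implementation.

```python
import csv
from datetime import date

def export_metrics_csv(rows: list[dict], path: str, role: str | None = None,
                       start: date | None = None, end: date | None = None) -> None:
    """Filter unified metric rows by role and time period, then write CSV.
    Row fields ('role', 'date', ...) are assumed for illustration."""
    keep = [r for r in rows
            if (role is None or r["role"] == role)
            and (start is None or r["date"] >= start)
            and (end is None or r["date"] <= end)]
    if not keep:
        return  # nothing in the selected filter window
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(keep[0]))
        writer.writeheader()
        writer.writerows(keep)
```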
🔗 Feeds into AI Evaluation Plan
Post-launch 8-metric dashboard
Jay's Executive Summary
Accessible from the main dashboard; no drill-down required
Jay opens dashboard → Executive Summary View:
• Total candidates screened
• Estimated hours saved
• Override rate · Pipeline conversion
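A small sketch of how the headline figures could be rolled up for this view; the inputs are assumed to come from the stream helpers above, and the formatting is illustrative.

```python
def executive_summary(total_screened: int, hires: int, ai_assisted_hours: float,
                      override_rate: float, hired_conversion: float) -> str:
    """Format the no-drill-down headline line for Jay. The 90-hr constant is
    the midpoint of the 80-100 hrs/hire manual baseline."""
    hours_saved = hires * 90.0 - ai_assisted_hours
    return (f"Screened: {total_screened:,} · Est. hours saved: {hours_saved:,.0f} · "
            f"Override rate: {override_rate:.0%} · Conversion: {hired_conversion:.0%}")
```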