Manual Review
Human investigation for complex fraud decisions.
Prerequisites
Before setting up manual review, understand:
- Risk scoring thresholds that trigger review
- Fraud types your reviewers need to recognize
- Device fingerprinting and other data sources
TL;DR
- Manual review = human investigation for gray-zone cases where automation is uncertain
- Good candidates: high-value transactions, VIP customers, uncertain ML scores, customer appeals
- Poor candidates: clear fraud (auto-decline), clear legitimate (auto-approve), low-value transactions
- Target metrics: >95% decision accuracy, >90% SLA adherence
- Feed review decisions back to ML models to improve automation over time
When to Use Manual Review
Good Candidates for Review
| Scenario | Why Manual Review |
|---|---|
| Gray zone scores | ML uncertain, needs judgment |
| High-value transactions | Loss too big for automation error |
| VIP customers | False positive cost too high |
| Complex patterns | Multiple signals, needs synthesis |
| Appeals | Customer disputes automated decision |
Poor Candidates for Review
| Scenario | Better Alternative |
|---|---|
| Clear fraud signals | Auto-decline |
| Clear legitimate signals | Auto-approve |
| Low-value transactions | Risk-accept the loss |
| High volume attacks | Automated rules |
Review Queue Design
Prioritization
Priority Score =
(Transaction Value × Risk Score × Time Sensitivity)
÷ Analyst Capacity
| Priority | Criteria | SLA |
|---|---|---|
| Critical | >$5K, high risk, time-sensitive | 15 min |
| High | >$1K, high risk OR VIP | 1 hour |
| Medium | Medium risk, medium value | 4 hours |
| Low | Low value, marginal signals | 24 hours |
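The priority formula and the tier table can be sketched as a small routing helper. This is a hypothetical sketch: the field names, the 0.8 "high risk" cutoff, and the medium-tier criteria are illustrative assumptions, not values from a specific system.

```python
def priority_score(value_usd, risk_score, time_sensitivity, analyst_capacity):
    """Priority Score = (Transaction Value x Risk Score x Time Sensitivity) / Analyst Capacity."""
    return (value_usd * risk_score * time_sensitivity) / analyst_capacity

def priority_tier(value_usd, risk_score, time_sensitive, is_vip):
    """Map a case to a queue tier per the table above (cutoffs are illustrative)."""
    high_risk = risk_score >= 0.8          # assumed threshold for "high risk"
    if value_usd > 5_000 and high_risk and time_sensitive:
        return "critical"                  # 15-minute SLA
    if value_usd > 1_000 and (high_risk or is_vip):
        return "high"                      # 1-hour SLA
    if risk_score >= 0.5 or value_usd > 100:
        return "medium"                    # 4-hour SLA
    return "low"                           # 24-hour SLA
```

Injecting `analyst_capacity` into the score keeps priorities comparable as staffing changes: the same case ranks higher when the team is stretched thin.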
Queue Management
- Real-time SLA tracking – Monitor aging
- Automatic escalation – If SLA breached
- Capacity planning – Staff to volume
- Skill-based routing – Complex cases to senior analysts
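The SLA-tracking and auto-escalation bullets can be sketched as a couple of helpers. The SLA minutes mirror the prioritization table above; the one-tier-up escalation order is an assumption for illustration.

```python
from datetime import datetime, timedelta

# SLA per priority tier, in minutes (from the prioritization table)
SLA_MINUTES = {"critical": 15, "high": 60, "medium": 240, "low": 1440}
# Assumed escalation path: bump one tier when an SLA is breached
ESCALATE_TO = {"low": "medium", "medium": "high", "high": "critical", "critical": "critical"}

def is_sla_breached(priority, enqueued_at, now):
    """A case breaches SLA once it has aged past its tier's limit."""
    return now - enqueued_at > timedelta(minutes=SLA_MINUTES[priority])

def escalate_if_breached(priority, enqueued_at, now):
    """Automatic escalation: move a breached case one tier up."""
    return ESCALATE_TO[priority] if is_sla_breached(priority, enqueued_at, now) else priority
```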
The Review Process
Investigation Steps
1. Review automated decision reason
↓
2. Examine transaction/application details
↓
3. Check customer history
↓
4. Query external data (device, email, phone)
↓
5. Look for linked accounts
↓
6. Make decision
↓
7. Document rationale
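The seven steps can be modeled as a pipeline that collects findings per step and refuses to finish without a documented rationale (step 7). This is a sketch; the step functions and case shape are invented for illustration.

```python
def run_investigation(case, steps, make_decision):
    """Walk the investigation steps in order, then make and document a decision."""
    # Steps 1-5: each step inspects the case and returns its findings
    findings = {step.__name__: step(case) for step in steps}
    # Step 6: decide based on the case and accumulated findings
    decision, rationale = make_decision(case, findings)
    # Step 7: documentation is mandatory, not optional
    if not rationale:
        raise ValueError("review incomplete: every decision needs a rationale")
    return {"decision": decision, "rationale": rationale, "findings": findings}
```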
Key Data Points
| Category | What to Check |
|---|---|
| Identity | Name, address, SSN verification |
| Device | Fingerprint, reputation, velocity |
| Behavior | Pattern vs. history |
| Network | Links to other accounts (see synthetic identity) |
| External | Email age, phone history, bureau |
Decision Framework
| Evidence | Decision |
|---|---|
| Clear fraud (Tier 1 indicators) | Decline, flag account |
| Strong fraud (multiple Tier 2) | Decline, flag account |
| Unclear but risky | Challenge (step-up verification) |
| Risky but VIP | Approve with monitoring |
| Clear legitimate | Approve, whitelist signals |
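The decision framework above maps naturally to a small policy function. A sketch under stated assumptions: the `risky` flag, the two-Tier-2 cutoff for "strong fraud", and the final fallback to approve are illustrative choices, not a canonical policy.

```python
def decide(tier1_hits, tier2_hits, risky, is_vip, clearly_legitimate):
    """Map the evidence table to an action."""
    if clearly_legitimate:
        return "approve_and_whitelist_signals"
    if tier1_hits > 0 or tier2_hits >= 2:   # clear or strong fraud evidence
        return "decline_and_flag"
    if risky and is_vip:                    # false-positive cost too high
        return "approve_with_monitoring"
    if risky:                               # unclear but risky
        return "challenge"                  # step-up verification
    return "approve"                        # assumed default for benign cases
```

Note the ordering: the VIP check must come before the generic `risky` branch, or VIPs would be challenged instead of approved with monitoring.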
Analyst Tools
Essential Features
- Single pane of glass – All data in one view
- Decision shortcuts – One-click common actions
- Notes/comments – For handoffs and history
- Timer – Track review time
- Feedback loop – Outcome tracking
Nice-to-Have Features
- Similar case search – "Show me cases like this"
- Graph visualization – Network connections
- Communication tools – Contact customer if needed
- Quality scoring – Manager review integration
Quality Assurance
Review Sampling
| Sample Rate | Application |
|---|---|
| 100% | New analysts (first 30 days) |
| 20% | Standard analyst |
| 10% | Senior analyst |
| 5% | Expert analyst |
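The sampling table is a weighted coin flip per completed review. A minimal sketch (the level names are assumptions); the random source is injectable so tests and replays can be deterministic.

```python
import random

# QA sample rate by analyst experience level (from the table above)
SAMPLE_RATES = {"new": 1.00, "standard": 0.20, "senior": 0.10, "expert": 0.05}

def needs_qa_review(analyst_level, rng=random.random):
    """True when this completed review should be pulled for manager QA."""
    return rng() < SAMPLE_RATES[analyst_level]
```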
Quality Metrics
| Metric | Target |
|---|---|
| Decision accuracy | >95% |
| Documentation completeness | 100% |
| SLA adherence | >90% |
| False positive rate | Track by analyst |
| False negative rate | Track by analyst |
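The targets in the table can be checked mechanically from per-review records. A sketch, assuming each review record carries three booleans; the record shape is invented for illustration.

```python
def quality_report(reviews):
    """Compute the table's metrics from per-review records.

    Each record is a dict with boolean 'correct', 'documented', 'within_sla'.
    """
    n = len(reviews)
    report = {
        "decision_accuracy": sum(r["correct"] for r in reviews) / n,
        "documentation_completeness": sum(r["documented"] for r in reviews) / n,
        "sla_adherence": sum(r["within_sla"] for r in reviews) / n,
    }
    # Targets from the table: >95% accuracy, 100% documentation, >90% SLA
    report["meets_targets"] = (
        report["decision_accuracy"] > 0.95
        and report["documentation_completeness"] == 1.0
        and report["sla_adherence"] > 0.90
    )
    return report
```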
Feedback Loop
- Track outcomes – Was decision correct?
- Feed to models – Human decisions train ML
- Identify patterns – What do humans catch that ML misses?
- Update rules – Encode learnings
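Feeding decisions back to the models means turning each reviewed case into a labeled training example. A hypothetical sketch: the field names are invented, and the rule of preferring a confirmed outcome (e.g. a later chargeback) over the analyst's label is an assumption.

```python
def to_training_example(case_features, analyst_decision, confirmed_outcome=None):
    """Turn a reviewed case into a labeled example for model retraining.

    Prefer the confirmed outcome over the analyst label when it is known,
    since analysts are themselves only ~95% accurate.
    """
    if confirmed_outcome is not None:
        label = confirmed_outcome
    else:
        label = 1 if analyst_decision in ("decline", "decline_and_flag") else 0
    return {"features": case_features, "label": label, "source": "manual_review"}
```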
Scaling Manual Review
When Volume Exceeds Capacity
- Raise review threshold – Only highest risk
- Auto-decide more – Accept some error
- Reduce review scope – Focus on key signals
- Add staff – If sustainable
- Improve models – Long-term solution
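"Raise the review threshold" can be made concrete: pick the risk-score cutoff so the expected daily review volume fits analyst capacity. A sketch, assuming a recent window of scores is a reasonable predictor of today's distribution.

```python
def review_threshold(recent_scores, daily_capacity):
    """Cutoff such that only the top `daily_capacity` scores get reviewed."""
    if daily_capacity <= 0:
        return float("inf")      # no capacity: review nothing
    ranked = sorted(recent_scores, reverse=True)
    if len(ranked) <= daily_capacity:
        return 0.0               # capacity covers everything
    return ranked[daily_capacity - 1]
```

Cases scoring at or above the returned cutoff go to the queue; everything below is auto-decided, explicitly accepting some error as the section notes.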
Efficiency Improvements
| Initiative | Impact |
|---|---|
| Better data presentation | 10-20% faster |
| Keyboard shortcuts | 5-10% faster |
| Pre-computed insights | 15-25% faster |
| Decision templates | 10-15% faster |
Next Steps
Setting up manual review?
- Define queue prioritization - Critical vs. low priority
- Design the review process - Step-by-step workflow
- Set quality targets - SLA and accuracy goals
Improving review efficiency?
- Check efficiency improvements - Quick wins
- Build better analyst tools - Single pane of glass
- Implement feedback loop - Train ML from decisions
Scaling beyond capacity?
- Raise review threshold - Only highest risk
- Improve models - Long-term solution
- Consider vendors - Outsource review
Related Topics
- Evidence Framework - Tier 1/Tier 2 indicators
- Risk Scoring - Automated scoring
- Device Fingerprinting - Device intelligence
- Behavioral Analytics - User behavior patterns
- Identity Verification - Document and biometric checks
- Fraud Metrics - Measuring performance
- Chargeback Representment - Fighting disputes
- Velocity Rules - Pattern detection
- Rules vs. ML - Detection approaches
- Friendly Fraud - First-party abuse cases