
Playbook: Increase Authorization Rates

TL;DR
  • Week 1: Retry logic experiments—soft declines (51, 61) retry timing, technical (91, 96) immediate retry
  • Week 2: Card updater enrollment (VAU, ABU), proactive expiration emails
  • Week 3: 3DS on recurring setup, frictionless tuning (more data = higher frictionless rate)
  • Week 4: Network token conversion for card-on-file (+2-5% lift typical)
  • Cumulative potential: 5-15% improvement—but measure each experiment individually

Systematic approach to improving payment authorization rates.

Auth rate optimization is a series of experiments, not a checklist. Each change is a hypothesis about what's causing declines.

Workflow Overview

| Phase | Key Tasks |
| --- | --- |
| Retry Logic | Baseline assessment, soft decline retry timing, technical retry experiments |
| Card Update | Account updater enrollment (VAU, ABU), proactive expiration outreach, measure ROI |
| 3DS Tuning | 3DS on recurring setup, frictionless tuning (more data), measure impact |
| Network Tokens | Convert cards to network tokens, measure lift (+2-5% typical), calculate ROI |

Prerequisites

Before starting, ensure you have:

  • Auth rate data by segment (see Payments Metrics)
  • Decline code breakdown by volume
  • Ability to run A/B tests on retry logic
  • Access to processor dashboard for card updater enrollment

When to Use This Playbook

  • Auth rates below industry benchmark (typically 85-95%)
  • Significant auth rate decline
  • High rate of soft declines
  • Customer complaints about payment failures

First Experiment to Run This Week

Hypothesis: A meaningful share of our declines is addressable, but we don't yet know which decline codes account for it.

Experiment:

  1. Pull your top 5 decline codes from last 30 days
  2. Categorize each as: retry-able, card-updatable, fraud-related, or hard decline
  3. Calculate what % of declines are actually addressable

Expected outcome: You'll know where to focus. Usually 30-50% of declines are addressable with retry logic or card updates.
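The categorization step can be sketched in a few lines. This is a minimal sketch: the code-to-category mapping mirrors the table later in this playbook (adjust it to your processor's code set), and the decline counts in the example are purely illustrative.

```python
# Categorize top decline codes and estimate the addressable share.
# Mapping follows this playbook's categorization table; "05" (Do Not
# Honor) is treated conservatively as a hard decline here.
CATEGORIES = {
    "51": "retry-able", "61": "retry-able", "65": "retry-able",  # insufficient funds
    "91": "retry-able", "96": "retry-able",                      # technical errors
    "14": "card-updatable", "54": "card-updatable",
    "41": "card-updatable", "43": "card-updatable",              # card errors
    "57": "fraud-related", "59": "fraud-related",                # fraud blocks
    "05": "hard-decline",
}

def addressable_share(decline_counts: dict[str, int]) -> float:
    """Fraction of declined volume in retry-able or card-updatable buckets."""
    total = sum(decline_counts.values())
    addressable = sum(
        n for code, n in decline_counts.items()
        if CATEGORIES.get(code) in ("retry-able", "card-updatable")
    )
    return addressable / total if total else 0.0

# Illustrative 30-day decline counts by code
counts = {"51": 4200, "05": 3100, "91": 900, "54": 700, "59": 400}
print(f"Addressable: {addressable_share(counts):.0%}")  # -> "Addressable: 62%"
```

If most of your volume lands in the "hard-decline" or "fraud-related" buckets, retry and card-update work will underdeliver, and the fraud-rule review belongs first.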

Baseline Assessment

Current State:
□ Overall auth rate: _______%
□ Auth rate by card brand:
- Visa: _______%
- Mastercard: _______%
- Amex: _______%
□ Auth rate by transaction type:
- First purchase: _______%
- Recurring: _______%
- Card on file: _______%

Decline Code Categorization

CategoryExample CodesAddressable?How to Test
Insufficient funds51, 61, 65YesRetry with timing experiment
Card errors14, 54, 41, 43YesAccount updater
Do Not Honor05MaybeMultiple approaches
Technical91, 96YesImmediate retry
Fraud blocks57, 59MaybeReview your fraud rules

Week 1: Retry Logic Experiments

Experiment: Soft Decline Retry Timing

Hypothesis: Retrying insufficient funds declines at end of month will recover X% of transactions.

Test:

  • Segment A: Retry code 51/61 after 24 hours
  • Segment B: Retry code 51/61 on the 1st and 15th of month
  • Segment C: No retry (control)

Metrics: Recovery rate, customer complaints

Run length: 30 days (need full month for payday cycles)

Guardrail: Stop if customer complaints about duplicate charges exceed 0.1%

Experiment: Technical Retry

Hypothesis: Retrying code 91/96 immediately will recover X%.

Test:

  • Retry technical errors once immediately
  • If fail, retry again after 5 seconds
  • Max 3 attempts

Metrics: Recovery rate per retry

Expected: 50-70% recovery on technical errors
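The retry policy above (first retry immediate, second after 5 seconds, 3 attempts total) can be sketched like this. `charge_with_retry` and its `attempt_charge` callable are hypothetical names; plug in whatever issues the authorization in your stack.

```python
import time

TECHNICAL_CODES = {"91", "96"}  # issuer/switch unavailable, system malfunction

def charge_with_retry(attempt_charge, max_attempts: int = 3):
    """Run attempt_charge() -- a callable returning (approved, decline_code) --
    with the technical-retry policy from this experiment: retry once
    immediately, then once after 5 seconds, at most 3 attempts total."""
    delays = [0, 0, 5]  # seconds to wait before attempts 1, 2, 3
    for attempt, delay in enumerate(delays[:max_attempts], start=1):
        if delay:
            time.sleep(delay)
        approved, code = attempt_charge()
        if approved or code not in TECHNICAL_CODES:
            return approved, code, attempt  # success, or not a technical error
    return approved, code, attempt          # retries exhausted
```

Note the early return on non-technical declines: retrying a 05 or a fraud block here would violate the anti-patterns at the end of this playbook.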

Week 2: Card Update Experiments

Experiment: Account Updater ROI

Hypothesis: Account updater will improve auth rate by X% and is worth the cost.

Test:

  1. Enroll in Visa Account Updater (VAU) and Mastercard ABU
  2. Measure: cards updated, auth rate lift on updated cards, cost per update

Decision rule: If auth rate lift × transaction value > cost per update, keep it

Typical results: 2-5% lift, usually positive ROI for recurring billing
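The decision rule above is simple arithmetic; a sketch, with all example numbers illustrative (use your measured lift, transaction value, billing frequency, and your processor's per-update fee):

```python
def updater_roi(auth_rate_lift: float, avg_txn_value: float,
                txns_per_updated_card: float, cost_per_update: float) -> float:
    """Net value per updated card: recovered revenue (lift x value x number
    of future charges on that card) minus the update fee. Positive -> keep."""
    return auth_rate_lift * avg_txn_value * txns_per_updated_card - cost_per_update

# Illustrative: 3% lift, $30 average charge, 12 recurring charges, $0.25/update
net = updater_roi(0.03, 30.0, 12, 0.25)
print(f"Net value per updated card: ${net:.2f}")  # -> "$10.55"
```

For recurring billing the future-charges multiplier is what makes the economics work; for one-off checkout flows it is usually close to 1 and the ROI is much tighter.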

Experiment: Proactive Expiration Updates

Hypothesis: Reaching out to customers before card expiration reduces involuntary churn.

Test:

  • Segment A: Email customers 30 days before expiration with easy update link
  • Segment B: No proactive outreach (control)

Metrics: Card update rate, churn rate, auth rate

Week 3: 3DS Optimization

Experiment: 3DS on Recurring Setup

Hypothesis: Using 3DS on initial recurring transaction will improve subsequent auth rates.

Test:

  • Segment A: Require 3DS on initial subscription transaction
  • Segment B: No 3DS on initial transaction

Metrics: Initial conversion (expect drop), subsequent recurring auth rate (expect lift)

Decision: If recurring auth improvement × LTV > initial conversion drop × AOV, keep it
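The decision rule can be written out directly. A minimal sketch: `keep_3ds_on_setup` is a name introduced here, inputs are per-customer rate changes, and the example numbers are illustrative.

```python
def keep_3ds_on_setup(recurring_auth_lift: float, ltv: float,
                      initial_conversion_drop: float, aov: float) -> bool:
    """Keep 3DS on the initial transaction if the recurring-auth gain
    (valued at LTV) outweighs the up-front conversion loss (valued at AOV)."""
    return recurring_auth_lift * ltv > initial_conversion_drop * aov

# Illustrative: +1.5% recurring auth at $400 LTV vs. -2% initial conversion at $50 AOV
print(keep_3ds_on_setup(0.015, 400.0, 0.02, 50.0))  # -> True
```

The asymmetry is the point: a small recurring-auth lift compounds over the subscription lifetime, so it can justify a visibly larger drop in first-purchase conversion.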

Experiment: Frictionless 3DS Tuning

Hypothesis: Sending more data to 3DS will increase frictionless rate without hurting fraud protection.

Test:

  1. Baseline your current frictionless rate
  2. Add additional data fields (device info, customer history, etc.)
  3. Measure frictionless rate change

Expected: 10-20% frictionless rate improvement with more data

Week 4: Network Tokens

Experiment: Network Token Conversion

Hypothesis: Network tokens will improve auth rates on card-on-file transactions.

Test:

  • Convert existing saved cards to network tokens (Visa, Mastercard)
  • Compare auth rates: tokenized vs. PAN

Typical results: 2-5% improvement

Cost consideration: Some processors charge per token. Calculate ROI.
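When comparing tokenized vs. PAN auth rates, a 2% lift on a small sample can be noise. One way to check, sketched with a standard two-proportion z-score (the counts below are illustrative):

```python
import math

def auth_rate_lift(tok_approved: int, tok_total: int,
                   pan_approved: int, pan_total: int):
    """Lift of tokenized over PAN auth rate, plus a two-proportion z-score
    so a small lift isn't mistaken for sampling noise."""
    p1, p2 = tok_approved / tok_total, pan_approved / pan_total
    p = (tok_approved + pan_approved) / (tok_total + pan_total)
    se = math.sqrt(p * (1 - p) * (1 / tok_total + 1 / pan_total))
    z = (p1 - p2) / se
    return p1 - p2, z

lift, z = auth_rate_lift(9300, 10000, 9100, 10000)
print(f"Lift: {lift:+.1%}, z = {z:.1f}")  # |z| > 1.96 ~ significant at 95%
```

Then feed the verified lift into the same ROI arithmetic as the account updater: lift x transaction value per tokenized card vs. per-token cost.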

Measurement Framework

For each experiment:

| Experiment | Hypothesis | Metric | Control | Result | Keep/Kill |
| --- | --- | --- | --- | --- | --- |
| Retry timing | +2% recovery | Recovery rate | No retry | ___% | |
| Account updater | +3% auth | Auth rate | No updater | ___% | |
| 3DS on recurring | +1% recurring auth | Recurring auth | No 3DS | ___% | |
| Network tokens | +2% COF auth | COF auth | PAN | ___% | |

Expected Improvements (Cumulative)

| Optimization | Typical Lift | Confidence |
| --- | --- | --- |
| Smart retry | 1-3% | High |
| Account updater | 2-5% | High |
| Network tokens | 2-5% | Medium |
| 3DS optimization | 1-3% | Medium |
| Data quality fixes | 0.5-2% | Variable |

Cumulative potential: 5-15% improvement

But measure each one. Your mileage will vary based on your current state, customer base, and card mix.
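One reason the individual lifts don't simply add: the experiments compete for the same shrinking pool of declines. A toy model, assuming (purely for illustration) that each optimization recovers some fraction of the declines left by the previous one:

```python
def stacked_auth_rate(base: float, recovery_rates: list[float]) -> float:
    """Auth rate after stacking optimizations, each modeled as recovering
    a fraction of the declines that remain after the previous ones."""
    declines = 1 - base
    for r in recovery_rates:
        declines *= (1 - r)
    return 1 - declines

# Illustrative: 88% baseline; four optimizations recovering 10-25% of
# the remaining declines each
print(f"{stacked_auth_rate(0.88, [0.15, 0.25, 0.15, 0.10]):.1%}")
```

Summing the per-experiment lifts from the table above overstates the stacked result; sequencing the experiments and re-measuring the baseline after each one is what keeps the estimate honest.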

Where Experiments Lie to You

  • Seasonality: Auth rates fluctuate with spending patterns. Compare to same period last year, not just last month.
  • Card mix changes: If you're acquiring different customer types, your baseline is shifting.
  • Processor changes: If you switched processors, your auth rates will be different regardless of your optimizations.
  • Selection bias: If you only retry "good" looking declines, you'll overestimate retry effectiveness.

Anti-Pattern: What Not to Do

  • Don't implement all optimizations at once (can't measure what worked)
  • Don't retry hard declines (05 codes) aggressively (can get you blocked)
  • Don't assume vendor claims (test their "2-5% lift" claims yourself)
  • Don't optimize auth rate at the expense of fraud rate (measure both)

Next Steps

After running optimization experiments:

  1. Keep winners, kill losers: Document which experiments improved auth rates and make them permanent
  2. Set baselines: Lock in new auth rate benchmarks for future comparison
  3. Monitor for regression: Set alerts if auth rate drops below new baseline
  4. Expand to other segments: Apply winning strategies to international or different card types
  5. Balance with fraud: Verify fraud rate didn't increase alongside auth improvements