Building Fraud Rules
Before building rules, understand:
- Rules vs. ML trade-offs and the hybrid approach
- Velocity rules for rate-based detection
- Risk scoring for threshold setting
- Processor rules configuration for your platform's syntax
- Start with six rules covering amount ceilings, hourly and daily velocity limits, country mismatch, repeated declines, and new-account risk
- Use allow and block lists to override decisions for trusted customers and confirmed fraudsters
- Shadow mode everything for 2 weeks before enforcing. If you skip this, you'll learn about false positives from angry emails
- Lifecycle: Create, Shadow, Live, Tune, Retire. Monthly reviews keep your rule set healthy
- Rules are your first line of defense. They catch known patterns while ML catches the rest
Rules are how you translate "we know this is fraud" into automated decisions. They're fast to deploy, easy to explain, and they catch the patterns you already understand. This page gives you a working starter set, teaches allow/block list management, and walks through the full rule lifecycle from creation to retirement.
For the theory behind rules vs. ML, see Rules vs. ML. For velocity-specific tuning, see Velocity Rules. For processor-specific syntax, see Processor Rules Configuration.
How Fraud Rules Work
Rule Anatomy
Every fraud rule has three parts: a condition (what to check), an action (what to do), and a threshold (where to draw the line).
RULE: High-value new customer
CONDITION: account_age < 30 days AND order_total > $300
ACTION: review
THRESHOLD: any match
RATIONALE: New accounts with high-value orders have 4x the fraud rate
The condition evaluates transaction data in real time. The action tells your system what to do when the condition is true. The threshold sets the sensitivity.
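The three parts can be expressed directly in code. Here's a minimal sketch, assuming a transaction arrives as a dict; the field names (`account_age_days`, `order_total`) are hypothetical and should map to whatever your pipeline provides:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # what to check
    action: str                        # what to do: approve / review / decline / request_3ds

# The "high-value new customer" rule from above, with hypothetical field names.
high_value_new_customer = Rule(
    name="high_value_new_customer",
    condition=lambda txn: txn["account_age_days"] < 30 and txn["order_total"] > 300,
    action="review",
)

def evaluate(rule: Rule, txn: dict) -> Optional[str]:
    """Return the rule's action if the condition matches, else None."""
    return rule.action if rule.condition(txn) else None

print(evaluate(high_value_new_customer, {"account_age_days": 5, "order_total": 450}))  # review
```

The threshold lives inside the condition here; pulling it out into a config value makes monthly tuning easier.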
Decision Actions
Most fraud systems support these actions:
| Action | What Happens | When to Use |
|---|---|---|
| Approve | Transaction proceeds normally | Low-risk, trusted customers |
| Review | Transaction queued for manual inspection | Ambiguous signals, medium risk |
| Decline | Transaction rejected | High confidence of fraud |
| Request 3DS | Trigger authentication challenge | Moderate risk, want liability shift |
When multiple rules fire on the same transaction, the highest-risk action wins. If one rule says "review" and another says "decline," the transaction gets declined. Allow list entries are the exception: they override other rules (see Allow Lists below).
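The precedence logic described above can be sketched in a few lines. The severity ordering (where "request 3DS" sits relative to "review") is an assumption; adjust it to your own policy:

```python
# Higher index = higher risk. Ordering of request_3ds vs. review is a policy choice.
SEVERITY = ["approve", "request_3ds", "review", "decline"]

def resolve(actions: list, on_allow_list: bool = False) -> str:
    """Combine the actions of every rule that fired on one transaction."""
    if on_allow_list:
        return "approve"   # allow list entries override all other rules
    if not actions:
        return "approve"   # no rule fired: transaction proceeds normally
    return max(actions, key=SEVERITY.index)  # highest-risk action wins

print(resolve(["review", "decline"]))            # decline
print(resolve(["decline"], on_allow_list=True))  # approve
```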
How Rules Map Across Vendors
The concepts are the same everywhere. The syntax differs.
| Concept | Stripe Radar | Sift | Forter | Adyen |
|---|---|---|---|---|
| Block a transaction | Block if ... | Workflow: set decision = "block" | Decision rule: Decline | Risk rule: Refuse |
| Send to review | Review if ... | Workflow: set decision = "watch" | Decision rule: Manual Review | Risk rule: Review |
| Allow a trusted customer | Allow if ... | Allow list + workflow | Approve list | Risk rule: Accept |
| Trigger 3DS | Request 3D Secure if ... | Not native (use processor) | Not native (use processor) | Risk rule: 3D Secure |
| Score threshold | :risk_score: > 75 | abuse_score > 75 | Confidence < 60% | Risk score > threshold |
If you use a processor like Stripe or Adyen, rules are configured in their dashboard. If you use a standalone vendor like Sift or Forter, you configure rules in the vendor's console and the vendor passes decisions back to your processor. See Processor Rules Configuration for platform-specific setup.
Your Day-One Rule Set
These six rules cover the most common fraud patterns and take about 15 minutes to set up. Deploy them in shadow mode first (see Shadow Mode below), then promote to live after two weeks.
Rule 1: Transaction Amount Ceiling
Flag unusually large orders for review. Fraudsters maximize value per stolen card.
RULE: high_value_review
IF order_total > $X
THEN review
Set $X based on your average order value. A common starting point is 3-5x your AOV. If your AOV is $80, start at $300.
What it catches: Stolen card purchases where fraudsters buy the most expensive items.
Stripe Radar:
Review if :amount_in_usd: > 300
Rule 2: Hourly Velocity Limit
Multiple charges on the same card in a short window usually means automated fraud or card testing.
RULE: card_velocity_hourly
IF transactions_per_card_1hr > 3
THEN decline
What it catches: Card testing attacks, automated purchases with stolen cards.
Stripe Radar:
Block if :total_charges_per_card_number_hourly: > 3
See Velocity Rules for tuning windows and thresholds by business type.
Rule 3: Daily Velocity Limit
A broader window catches slower attacks that dodge hourly limits.
RULE: card_velocity_daily
IF transactions_per_card_24hr > 10
THEN review
What it catches: Distributed card testing, repeated small purchases from a single stolen card.
Stripe Radar:
Review if :total_charges_per_card_number_daily: > 10
Rule 4: Country Mismatch
When the card's issuing country doesn't match the customer's IP location, it's worth a second look.
RULE: country_mismatch
IF card_country != ip_country
THEN review
What it catches: Cross-border fraud using stolen card numbers. This is one of the highest-signal single indicators.
Stripe Radar:
Review if :ip_country: != :card_country:
Travelers, expats, and VPN users trigger this constantly. Start with "review," not "decline." If your false positive rate is under 20%, you can tighten to decline later. See Velocity Rules edge cases for more on VPN and travel patterns.
Rule 5: Repeated Declines
Multiple failed authorization attempts in a short window is a classic card testing signal.
RULE: repeated_declines
IF declined_transactions_per_card_10min > 3
THEN decline
What it catches: Card testing attacks where fraudsters cycle through stolen card numbers to find valid ones.
Stripe Radar (Radar has no 10-minute counter; the hourly declined-charge counter is the closest built-in approximation):
Block if :total_declined_charges_per_card_number_hourly: > 3
Rule 6: New Account + High Value
Brand new accounts making large purchases are high risk. Legitimate new customers rarely start with their largest possible order.
RULE: new_account_high_value
IF account_age < 30_days
AND order_total > $300
THEN review
What it catches: Account takeover (recently compromised), synthetic accounts created for fraud, stolen credentials used on new accounts.
Stripe Radar:
Review if :seconds_since_first_seen: < 2592000 AND :amount_in_usd: > 300
Putting It Together
| Rule | Condition | Action | Primary Fraud Type |
|---|---|---|---|
| Amount ceiling | order > $X (3-5x AOV) | Review | Stolen card maximization |
| Hourly velocity | 3+ charges/card/hour | Decline | Card testing |
| Daily velocity | 10+ charges/card/day | Review | Distributed card testing |
| Country mismatch | card country != IP country | Review | Cross-border fraud |
| Repeated declines | 3+ declines in 10 min | Decline | Card testing |
| New account + high value | account < 30 days + order > $300 | Review | ATO, synthetic identity |
Under $100K/month: These six rules plus your processor's default ML are enough. Don't over-engineer.
$100K-$1M/month: Add product-specific rules (gift cards, electronics) and customer tenure exceptions.
Over $1M/month: Layer these with ML scoring and device fingerprinting. Rules become backstops, not your primary defense.
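The whole day-one set can be expressed as predicates over a transaction dict. This is an illustrative sketch, not a production engine; the field names (`charges_per_card_1h`, `declines_per_card_10m`, etc.) are hypothetical placeholders for your own velocity counters:

```python
# Day-one rule set: (name, condition, action). Thresholds match the table above.
DAY_ONE_RULES = [
    ("high_value_review",     lambda t: t["order_total"] > 300,               "review"),
    ("card_velocity_hourly",  lambda t: t["charges_per_card_1h"] > 3,         "decline"),
    ("card_velocity_daily",   lambda t: t["charges_per_card_24h"] > 10,       "review"),
    ("country_mismatch",      lambda t: t["card_country"] != t["ip_country"], "review"),
    ("repeated_declines",     lambda t: t["declines_per_card_10m"] > 3,       "decline"),
    ("new_account_high_value",
        lambda t: t["account_age_days"] < 30 and t["order_total"] > 300,      "review"),
]

def fired(txn: dict) -> list:
    """Return (rule_name, action) for every rule whose condition matched."""
    return [(name, action) for name, cond, action in DAY_ONE_RULES if cond(txn)]

txn = {"order_total": 450, "charges_per_card_1h": 1, "charges_per_card_24h": 2,
       "card_country": "GB", "ip_country": "US", "declines_per_card_10m": 0,
       "account_age_days": 7}
print(fired(txn))  # amount ceiling, country mismatch, and new-account rules fire
```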
Device Intelligence as Rule Inputs
The six rules above use transaction data: amount, velocity, country, decline count, account age. Device intelligence adds a second layer of signals that transaction data alone can't see. If you use a device intelligence vendor (or your processor's built-in fingerprinting), these signals become available as rule conditions.
This section shows what's possible. The specific fields and syntax depend on your vendor. See Device Fingerprinting for a deep dive on signals and vendors.
Example: Combining Transaction Rules with Device Signals
A transaction rule catches what happened. A device signal catches how it happened and who did it.
RULE: high_value_new_account (transaction-only)
IF account_age < 30_days AND order_total > $300
THEN review
RULE: high_value_new_account_v2 (with device signals)
IF account_age < 30_days
AND order_total > $300
AND (device_age_hours < 1 OR emulator_detected = true OR vpn_detected = true)
THEN decline
The second version is more precise. A new account with a high-value order from a brand-new device behind a VPN is a much stronger signal than the transaction data alone.
Signals You Can Use in Rules
These are examples of what modern device intelligence platforms can expose as rule-able fields. Availability depends on your vendor.
| Signal Category | Example Rule Conditions | What It Catches |
|---|---|---|
| Device reputation | Device seen on 3+ fraud accounts | Known bad devices, fraud rings |
| Device age | Device first seen < 1 hour ago | Freshly created emulator instances, factory-reset farm devices |
| Emulator/VM | emulator_detected = true | Fraud factories running virtual devices at scale |
| VPN/proxy (deep) | true_ip != visible_ip via WebRTC/TLS analysis | Pierces VPNs to find the real IP. Basic VPN/IP-type detection is available from enrichment APIs without an SDK |
| Remote desktop | remote_software_detected = true | Remote access scams where a fraudster controls the victim's device |
| True IP mismatch | true_ip_country != visible_ip_country | VPN users whose real location differs from their presented location |
| Behavioral anomaly | behavioral_risk = high | Bots, scripted form filling, coached victims |
| Copy-paste in identity fields | name_field_pasted = true | Fraudsters working from stolen data lists (legitimate users type their own name) |
| Multiple accounts per device | accounts_per_device_30d > 3 | Promo abuse, multi-accounting, synthetic identity |
| Sensor anomaly | gyroscope_data = none AND device_type = mobile | Emulator (no physical sensors) or phone farm device on a rack |
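The `high_value_new_account_v2` pseudocode above might look like this in code. A sketch only: the device signal names (`device_age_hours`, `emulator_detected`, `vpn_detected`) are hypothetical and depend entirely on your vendor's payload:

```python
from typing import Optional

def high_value_new_account_v2(txn: dict, device: dict) -> Optional[str]:
    """Transaction rule sharpened with device signals (field names assumed)."""
    if txn["account_age_days"] < 30 and txn["order_total"] > 300:
        risky_device = (
            device.get("device_age_hours", float("inf")) < 1  # brand-new device
            or device.get("emulator_detected", False)
            or device.get("vpn_detected", False)
        )
        # Strong device signal escalates review to decline.
        return "decline" if risky_device else "review"
    return None  # rule doesn't apply

print(high_value_new_account_v2(
    {"account_age_days": 3, "order_total": 500},
    {"device_age_hours": 0.5, "vpn_detected": True},
))  # decline
```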
Layering Transaction and Device Rules
A practical approach is to use device signals to sharpen rules you already have, not to create a separate device rule set.
| Existing Rule | Enhanced With Device Signal | Result |
|---|---|---|
| Country mismatch (card != IP) | + vpn_detected = false | If no VPN, the mismatch is a real geographic discrepancy, not a traveler |
| High-value new account | + device_age > 30_days | New account on a long-established device = lower risk (probably a real person, new to your site) |
| Hourly velocity > 3 | + emulator_detected = true | Velocity from an emulator = almost certainly automated card testing |
| Repeated declines | + accounts_per_device > 5 | Multiple accounts with failed auths from one device = fraud ring testing cards |
If you're on Stripe Radar or Adyen RevenueProtect, basic device signals (IP type, device fingerprint, velocity per device) are already factored into your ML score. Standalone device intelligence vendors add the deeper signals (behavioral biometrics, sensor data, True IP, remote desktop detection). Layer them in when your fraud patterns outgrow what processor tools catch. See Device Fingerprinting: Choosing a Vendor for when to upgrade.
Data Enrichment as Rule Inputs
The rules above use transaction and device data. Data enrichment adds a third layer: server-side API lookups that tell you about the IP, email, and phone on a transaction. Is the IP from a datacenter? Is the email disposable? Is the phone a prepaid burner?
These signals are cheap (often free) and require no client-side SDK. See Data Enrichment for Fraud Rules for the full signal catalog, vendor options, and example rules you can build from enrichment data.
Rules That Prevent Fraud Losses
The rules above focus on blocking fraud. But for friendly fraud, the transaction IS legitimate. You can't block it without blocking a real customer. Instead, you need rules that trigger two things: 3DS for liability shift and enhanced evidence collection so you win the chargeback if it comes.
This is the difference between preventing fraud and preventing fraud losses. See Defending Against Fraud Losses for the full strategy.
Rules That Trigger 3DS
3DS shifts liability to the issuer for fraud chargebacks. The key insight: any transaction you'd decline for fraud risk, trigger 3DS instead. If the customer passes, you get the sale AND liability shift. If they fail, you've lost nothing you wouldn't have lost anyway.
Use rules to trigger 3DS selectively on transactions with high dispute probability:
RULE: 3ds_high_value_new_customer
IF account_age < 60_days AND order_total > $200
THEN request_3ds
RATIONALE: New customer + high value = highest chargeback rate segment
RULE: 3ds_high_dispute_category
IF product_category IN (digital_goods, subscriptions, electronics)
THEN request_3ds
RATIONALE: These categories have 2-3x the dispute rate of general retail
RULE: 3ds_repeat_disputer
IF customer_prior_disputes > 0
THEN request_3ds
RATIONALE: Customers who dispute once are 40% likely to dispute again
| Rule | Condition | Why 3DS | Friction Impact |
|---|---|---|---|
| High-value + new customer | account < 60d, order > $200 | Highest friendly fraud segment | Low (frictionless 60-90% of the time) |
| High-dispute product category | digital goods, subscriptions, electronics | These categories generate the most disputes | Low to medium |
| Prior disputer | customer has 1+ previous disputes | Repeat behavior is predictable | Acceptable (protecting yourself) |
| Moderate risk score | risk_score 50-75 (not high enough to decline) | Grey zone transactions benefit from authentication | Low |
| International + high value | cross-border AND order > $150 | Cross-border disputes are harder to win | Medium (some issuers have higher challenge rates) |
Stripe Radar examples:
Request 3D Secure if :amount_in_usd: > 200 AND :seconds_since_first_seen: < 5184000
Request 3D Secure if :risk_score: > 50
For friendly fraud, 3DS is your strongest defense. The customer authenticated with their own bank. If they later claim "I didn't make this purchase," the liability is on the issuer. This single action protects you against the most common chargeback reason code (10.4/4837 fraud). See 3D Secure for implementation details.
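The 3DS trigger rules above combine into a single predicate. Here's a hedged sketch using the thresholds from this section; field names are hypothetical:

```python
HIGH_DISPUTE_CATEGORIES = {"digital_goods", "subscriptions", "electronics"}

def should_request_3ds(txn: dict) -> bool:
    """Trigger 3DS selectively on segments with high dispute probability."""
    if txn["account_age_days"] < 60 and txn["order_total"] > 200:
        return True   # new customer + high value: highest chargeback segment
    if txn["product_category"] in HIGH_DISPUTE_CATEGORIES:
        return True   # categories with 2-3x the general dispute rate
    if txn["prior_disputes"] > 0:
        return True   # repeat disputers are predictable
    if 50 <= txn.get("risk_score", 0) <= 75:
        return True   # grey zone: not decline-worthy, but worth authenticating
    return False
```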
Rules That Trigger Evidence Collection
Not every rule needs to block or challenge. Some rules should quietly escalate evidence collection so you're prepared if a chargeback arrives weeks later.
RULE: evidence_escalation_high_value
IF order_total > $150
THEN require_signature_on_delivery
RATIONALE: Signature confirmation wins "not received" disputes
RULE: evidence_escalation_digital
IF product_type = digital
THEN log_device_fingerprint + log_ip + log_usage_after_delivery
RATIONALE: CE 3.0 requires device/IP match to prior undisputed transactions
RULE: evidence_escalation_subscription_renewal
IF transaction_type = recurring AND renewal_count > 1
THEN send_renewal_reminder_email + log_email_delivery
RATIONALE: "I cancelled" is the #1 subscription dispute. Renewal reminder email with confirmed delivery defeats it
| Trigger Condition | Evidence Action | What It Defeats |
|---|---|---|
| Order > $150 (physical goods) | Require signature on delivery | "Not received" (Visa 13.1, MC 4855) |
| Digital goods delivery | Log device fingerprint, IP, download timestamp | Fraud claim (Visa 10.4) via CE 3.0 |
| Subscription renewal | Send reminder email 7 days before, log delivery receipt | "I cancelled" (Visa 13.2) |
| First purchase from customer | Capture device ID, IP, and link to account | Builds CE 3.0 history for future disputes |
| Order > $500 | Require delivery photo + signature | "Not received" and "not as described" |
| Customer has prior dispute history | Log all post-purchase activity (logins, usage, downloads) | Friendly fraud repeat behavior |
None of these rules add checkout friction. The customer doesn't see any difference. You're collecting data you already have access to and storing it in a way that's retrievable when a dispute arrives. The cost is a few lines of integration code, not lost conversion. See Defending Against Fraud Losses for the full evidence strategy.
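The evidence-escalation rules can be sketched as a function that returns a list of side-effect actions rather than a block/review decision. Field names and action strings are illustrative placeholders:

```python
def evidence_actions(txn: dict) -> list:
    """Escalate evidence collection silently; no checkout friction added."""
    actions = []
    if txn.get("product_type") == "physical" and txn["order_total"] > 150:
        actions.append("require_signature_on_delivery")   # defeats "not received"
    if txn["order_total"] > 500:
        actions.append("require_delivery_photo")          # high-value physical orders
    if txn.get("product_type") == "digital":
        # CE 3.0 needs device/IP matched to prior undisputed transactions
        actions += ["log_device_fingerprint", "log_ip", "log_usage_after_delivery"]
    if txn.get("transaction_type") == "recurring" and txn.get("renewal_count", 0) > 1:
        actions += ["send_renewal_reminder_email", "log_email_delivery"]
    return actions

print(evidence_actions({"product_type": "physical", "order_total": 200}))
```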
Allow Lists and Block Lists
Allow and block lists are manual overrides that bypass your rule engine. They're simple and powerful, but they need management or they'll rot.
Block Lists
Block lists prevent known bad actors from transacting. Add entries when you have confirmed fraud.
What to block:
| Identifier | When to Block | Notes |
|---|---|---|
| Email address | Confirmed fraud from this email | Fraudsters reuse emails less than you'd think |
| Card hash/fingerprint | Chargeback received, confirmed stolen | Most effective single identifier |
| IP address | Active attack from this IP | Set an expiration (see warning below) |
| Device ID | Device linked to multiple fraud incidents | Requires device fingerprinting |
| Shipping address | Known reshipping address | Common with package forwarding fraud |
When to add entries:
- Confirmed chargeback (automatic in most processors)
- Manual review confirms fraud (reviewer marks as fraudulent)
- Fraud reported by customer (e.g., "I didn't make this purchase")
Expiration policy:
| Entry Type | Recommended Expiration |
|---|---|
| Card hash | Permanent (card is compromised) |
| Email address | 12 months, then review |
| IP address | 30-90 days |
| Device ID | 6 months |
| Shipping address | 6 months, then review |
Set expiration dates on IP and device blocks. A fraudster's IP today is an innocent customer's IP tomorrow. Shared IPs (VPNs, corporate networks, mobile carriers) can block thousands of legitimate users. Review your block list quarterly and prune expired entries.
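A TTL-based block list is straightforward to implement. This sketch uses the expiration policy from the table above; the entry kinds and TTLs are one possible configuration, not a standard:

```python
import time

class BlockList:
    """Block list with per-entry TTLs so IP and device blocks expire automatically."""
    TTL_DAYS = {"card_hash": None, "email": 365, "ip": 60, "device": 180, "address": 180}

    def __init__(self):
        self._entries = {}  # (kind, value) -> expiry epoch seconds, or None = permanent

    def add(self, kind: str, value: str, now: float = None):
        if now is None:
            now = time.time()
        ttl = self.TTL_DAYS[kind]
        self._entries[(kind, value)] = None if ttl is None else now + ttl * 86400

    def is_blocked(self, kind: str, value: str, now: float = None) -> bool:
        key = (kind, value)
        if key not in self._entries:
            return False
        expiry = self._entries[key]
        if now is None:
            now = time.time()
        return expiry is None or expiry > now  # expired entries stop blocking
```

A quarterly prune job can then simply delete entries whose expiry has passed, keeping the list from rotting.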
Allow Lists
Allow lists let trusted customers bypass rules that would otherwise flag them. Use them sparingly.
What to allow:
| Identifier | When to Allow | Scope |
|---|---|---|
| Customer ID | Verified repeat customer with clean history | Time-capped (12 months) |
| Email domain | Your own corporate domain, trusted partners | Permanent |
| Card fingerprint | Customer's primary card, verified via support | Transaction cap ($5,000/month) |
Precedence: Allow list entries override block rules. If a customer is on the allow list and also triggers a velocity rule, the allow list wins.
Bounded scope: Don't give unlimited allow list access. Set limits:
- Time cap: Allow list entry expires after 12 months
- Spend cap: Allow list bypasses rules only up to $X per month
- Rule cap: Allow list bypasses velocity rules but NOT amount ceilings
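The three caps combine into a single check before an allow-list entry is honored. A sketch with hypothetical entry fields (`expires_at`, `monthly_spend_cap`, `bypasses`):

```python
from datetime import datetime, timezone

def allow_list_applies(entry: dict, txn: dict, monthly_spend: float,
                       now: datetime = None) -> bool:
    """Honor an allow-list entry only within its time, spend, and rule-scope caps."""
    now = now or datetime.now(timezone.utc)
    if now > entry["expires_at"]:
        return False  # time cap: entry expired (e.g. after 12 months)
    if monthly_spend + txn["order_total"] > entry["monthly_spend_cap"]:
        return False  # spend cap: bypass only up to $X per month
    # Rule cap: bypass only the rules this entry explicitly covers
    return txn["triggered_rule"] in entry["bypasses"]
```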
Automatic vs. Manual Management
| Trigger | Action | Manual or Auto |
|---|---|---|
| Chargeback received | Add card hash to block list | Automatic (most processors do this) |
| Review confirms fraud | Add email + device to block list | Manual (reviewer action) |
| Review confirms legitimate | Add customer to allow list | Manual (reviewer action) |
| Block list entry expires | Remove from block list | Automatic (TTL-based) |
| Customer contacts support about block | Investigate, potentially add to allow list | Manual |
In Stripe Radar for Fraud Teams, manage lists via Dashboard > Radar > Lists. You can create custom lists (emails, card fingerprints, IP addresses) and reference them in rules:
Block if :email: in @block_list
Allow if :customer_id: in @trusted_customers
For other processors, see Processor Rules Configuration.
Shadow Mode: Test Before You Block
Shadow mode means a rule fires and logs the result, but doesn't actually block the transaction. Every rule should spend time in shadow mode before going live. Without it, you learn about false positives from customer complaints instead of from your dashboard.
The Decision Framework
Deploy every new rule in shadow mode for at least 2 weeks. Then analyze:
- Over 30% fraud in hits: Promote the rule to live (with exceptions for trusted customers)
- 10-30% fraud in hits: Tighten the threshold or add conditions
- Under 10% fraud in hits: Kill the rule. It's not worth the false positives
If the rule triggers on more than 5% of total traffic, the threshold is too loose regardless of fraud-in-hits.
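The decision framework reduces to a small function over two weeks of shadow data:

```python
def shadow_verdict(hits: int, confirmed_fraud_in_hits: int, total_traffic: int) -> str:
    """Apply the shadow-mode decision framework after two weeks of data."""
    if total_traffic and hits / total_traffic > 0.05:
        return "tighten"   # firing on >5% of traffic: threshold too loose
    fraud_rate = confirmed_fraud_in_hits / hits if hits else 0.0
    if fraud_rate > 0.30:
        return "promote"   # promote to live (with trusted-customer exceptions)
    if fraud_rate >= 0.10:
        return "tighten"   # tighten the threshold or add conditions
    return "kill"          # not worth the false positives

print(shadow_verdict(hits=120, confirmed_fraud_in_hits=50, total_traffic=10_000))  # promote
```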
For the full 4-week testing methodology (shadow, analyze, enforce, monitor), backtest calculations, and sample size requirements, see Velocity Rules: How to Actually Test a Rule. The testing process is the same whether you're testing a velocity rule or any other rule type.
Stripe doesn't have a native "shadow mode" toggle. Instead, deploy the rule as a Review rule rather than a Block rule. Review rules flag transactions for manual inspection without blocking them. After two weeks, analyze the review queue to see what would have been caught.
For Adyen, use the "simulation" feature in RevenueProtect to test rules against historical transactions.
The Rule Lifecycle
Rules are never "done." They follow a lifecycle from creation to retirement.
| Stage | Duration | Key Activity |
|---|---|---|
| Create | 1 day | Write rule based on observed fraud pattern |
| Shadow | 2-4 weeks | Collect data, measure trigger rate and fraud-in-hits |
| Live | Ongoing | Rule actively blocking or reviewing transactions |
| Tune | Monthly | Adjust thresholds based on performance data |
| Retire | When stale | Remove rules that haven't triggered in 90 days |
Monthly Review Checklist
Run this every month to keep your rule set healthy:
□ Pull trigger rate per rule
- Which rules are firing most?
- Any rule blocking > 1% of traffic? (Probably too aggressive)
□ Check fraud caught vs. false positives per rule
- Fraud-in-hits > 30%? Keep the rule, consider tightening
- Fraud-in-hits < 10%? Loosen or retire
- False positive rate > 70%? Move from decline to review, or add exceptions
□ Tighten rules with low false positives
- If a rule has 5% FP rate and 60% fraud-in-hits, tighten the threshold
□ Loosen rules with high false positives
- If a rule has 80% FP rate, add customer tenure or history exceptions
□ Retire rules that haven't triggered in 90 days
- If the pattern it was catching no longer exists, remove the rule
- Keep a log of retired rules in case the pattern returns
□ Check for new fraud patterns not covered by existing rules
- Review recent chargebacks: what got through?
- Talk to your review team: what patterns are they seeing?
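The numeric parts of the checklist can be automated per rule. A sketch applying the thresholds above; rates are fractions (0.01 = 1%):

```python
def review_rule(trigger_rate: float, fraud_in_hits: float, fp_rate: float) -> list:
    """Turn the monthly checklist thresholds into recommendations for one rule."""
    notes = []
    if trigger_rate > 0.01:
        notes.append("blocking >1% of traffic: probably too aggressive")
    if fraud_in_hits > 0.30:
        notes.append("keep; consider tightening the threshold")
    elif fraud_in_hits < 0.10:
        notes.append("loosen or retire")
    if fp_rate > 0.70:
        notes.append("move from decline to review, or add exceptions")
    return notes

print(review_rule(trigger_rate=0.02, fraud_in_hits=0.05, fp_rate=0.80))
```

Retirement (no triggers in 90 days) still needs a trigger-history query, and new-pattern discovery stays a human job.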
For processor-specific review processes, see Processor Rules Configuration: Monthly Review.
When to Create New Rules
New rules should come from observed patterns, not guesses:
- Chargeback analysis reveals a pattern (e.g., all recent chargebacks came from accounts under 7 days old)
- Review team flags a trend (e.g., "we're seeing a lot of gift card orders from the same IP range")
- Fraud attack post-mortem (e.g., "this attack would have been caught by a BIN-range rule") - see Survive a Fraud Attack
- Industry alert (e.g., new fraud technique targeting your product category)
Next Steps
Just getting started?
- Deploy the day-one rule set in shadow mode
- Set up your block list to auto-add on chargebacks
- Review shadow results after 2 weeks
Rules deployed but need tuning?
- Run the monthly review checklist
- Check your allow list scope and expiration dates
- See Velocity Rules for threshold tuning
Ready for the full operations picture?
- Running Fraud Operations - Daily/weekly/monthly operational cadence
- Fraud Model Feedback - How your vendor's ML learns from your data
- Processor Rules Configuration - Platform-specific syntax and setup
Related
- Rules vs. ML - When to use rules, when to use ML
- Velocity Rules - Rate-based detection and threshold tuning
- Risk Scoring - Combining rules with ML scores
- Processor Rules Configuration - Stripe, Adyen, Braintree syntax
- Fraud Model Feedback - Feedback loops and model tuning
- Running Fraud Operations - Operational cadence playbook
- Manual Review - Managing the review queue
- Data Enrichment - IP, email, phone signals for rules
- Device Fingerprinting - Device-based signals
- Card Testing - The fraud type rules catch best
- Account Takeover - Login and credential fraud
- Fraud Metrics - Measuring rule effectiveness
- Survive a Fraud Attack - Emergency rule deployment
- Experimentation - A/B testing rule changes