Detection Rule Tuning
Optimize detection rules to reduce false positives and improve alert actionability.
Last updated: February 2026
Purpose and Scope
Detection rule tuning is the ongoing process of refining alert logic to maximize true positive rates while minimizing noise. This playbook covers strategies for measuring alert quality, identifying tuning opportunities, and implementing sustainable tuning practices.
Prerequisites
- SIEM access: Ability to modify detection rules and queries
- Alert tracking: Ticketing or case management with disposition data
- Environment knowledge: Understanding of normal activity and authorized tools
- Analyst feedback: Input from those triaging alerts daily
Goals of Tuning
Effective tuning achieves:
- Higher true positive rate (precision)
- Reduced analyst time spent on false positives
- Maintained or improved detection coverage (recall)
- Actionable alerts with sufficient context
- Sustainable alert volumes for the team
Alert Quality Metrics
Core Metrics
- True positive rate: Percentage of alerts confirmed as real security issues
- False positive rate: Percentage of alerts triggered by benign activity the rule should not have matched
- Alert volume: Total alerts per rule per time period
- Mean time to triage: Average analyst time per alert
- Escalation rate: Percentage of alerts escalated to incidents
Tracking Dispositions
Require analysts to categorize every alert:
- True positive: confirmed malicious activity
- False positive: benign activity, rule needs tuning
- Benign true positive: expected activity, allowlist candidate
- Inconclusive: insufficient data to determine
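The metrics above fall out directly from disposition data. As a minimal sketch, assuming dispositions can be exported from your case management system as simple records (the field names and disposition labels here are illustrative, not a standard schema):

```python
from collections import Counter

# Illustrative labels matching the disposition categories above.
DISPOSITIONS = {"true_positive", "false_positive", "benign_true_positive", "inconclusive"}

def rule_metrics(alerts):
    """Compute quality metrics for one rule from its dispositioned alerts.

    `alerts` is an iterable of dicts such as:
        {"disposition": "false_positive", "escalated": False}
    (a hypothetical export format -- adapt to your case management schema).
    """
    alerts = list(alerts)
    counts = Counter(a["disposition"] for a in alerts)
    unknown = set(counts) - DISPOSITIONS
    if unknown:
        raise ValueError(f"unrecognized dispositions: {unknown}")
    total = sum(counts.values())
    escalated = sum(1 for a in alerts if a.get("escalated"))
    return {
        "volume": total,
        "true_positive_rate": counts["true_positive"] / total if total else 0.0,
        "false_positive_rate": counts["false_positive"] / total if total else 0.0,
        "escalation_rate": escalated / total if total else 0.0,
    }
```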
Identifying Tuning Opportunities
High Volume Rules
Rules generating excessive alerts:
- Review rules with highest alert counts
- Identify common false positive patterns
- Consider if the rule is too broad
- Evaluate if the detection value justifies the volume
Low True Positive Rate Rules
- Rules with a true positive rate below an acceptable threshold (often 50%)
- Analyze common false positive characteristics
- Determine if tuning can improve precision without losing coverage (the sketch below shows one way to surface both high volume and low precision candidates)
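Both selection criteria are easy to automate once per-rule metrics exist. A hypothetical pass, building on the `rule_metrics()` sketch above (the 50% threshold and top-10 cutoff are example figures, not recommendations):

```python
def tuning_candidates(per_rule, min_tpr=0.5, top_n=10):
    """Flag rules that are high volume or below an acceptable TP rate.

    `per_rule` maps rule name -> metrics dict as produced by the
    rule_metrics() sketch above.
    """
    by_volume = sorted(per_rule.items(), key=lambda kv: kv[1]["volume"], reverse=True)
    high_volume = [name for name, _ in by_volume[:top_n]]
    low_tpr = sorted(
        name for name, m in per_rule.items()
        if m["volume"] > 0 and m["true_positive_rate"] < min_tpr
    )
    return {"high_volume": high_volume, "low_tpr": low_tpr}
```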
Analyst Feedback
- Regularly collect input from analysts
- Identify rules they find most frustrating
- Understand what context is missing from alerts
- Learn which rules consistently require the same verification steps
Tuning Strategies
Allowlisting
Exclude known good activity:
- Authorized admin tools and users
- Scheduled maintenance activities
- Known vendor or partner systems
- Security tools and scanners
Best practices:
- Be specific: allowlist exact paths, users, or hashes rather than broad patterns
- Document why each allowlist entry exists
- Review allowlists periodically
- Validate allowlist entries are still appropriate
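One way to keep entries specific, documented, and reviewable is to make those properties part of the entry itself. A sketch assuming flattened event fields; the schema is illustrative, not a standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AllowlistEntry:
    """One documented exclusion."""
    field: str       # e.g. "process.hash.sha256" -- prefer exact fields to patterns
    value: str       # an exact path, user, or hash, never a wildcard
    reason: str      # why this activity is expected
    owner: str       # who vouches for the entry
    review_by: date  # when the entry must be revalidated

def is_allowlisted(event, entries, today):
    """Match only exact values, and let stale entries expire automatically."""
    return any(
        event.get(e.field) == e.value and today <= e.review_by
        for e in entries
    )
```

Expiring entries past their review date turns the periodic review into an enforced property rather than a matter of discipline.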
Threshold Adjustment
- Increase thresholds for volume-based detections
- Require multiple occurrences before alerting
- Use time windows appropriate to the behavior
- Consider rolling baselines instead of fixed thresholds
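A rolling baseline replaces a fixed threshold with one derived from each entity's own history. A minimal sketch, assuming you can query a trailing window of daily counts; mean plus k standard deviations is one common choice, not the only one:

```python
import statistics

def rolling_threshold(daily_counts, k=3.0, floor=10):
    """Derive an alert threshold from a trailing window of daily counts.

    Returns mean + k standard deviations, with a floor so that quiet
    entities do not alert on trivially small counts.
    """
    if len(daily_counts) < 2:
        return floor  # not enough history to baseline
    mean = statistics.fmean(daily_counts)
    spread = statistics.stdev(daily_counts)
    return max(floor, mean + k * spread)

# Usage: fire only when today's count exceeds the entity's own baseline.
# if todays_count > rolling_threshold(last_30_days): raise_alert(...)
```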
Adding Context Requirements
Require additional conditions for alerting:
- Correlate with other suspicious indicators
- Require unusual time, user, or location
- Exclude common benign command-line patterns
- Filter by asset criticality or user risk
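In pseudocode terms, the base match alone no longer fires; at least one corroborating condition must also hold. A sketch with illustrative field names (map them to your own schema):

```python
def should_alert(event, asset_criticality, business_hours=range(8, 18)):
    """Fire only when the base detection co-occurs with added risk context."""
    base_hit = event["detection_matched"]              # the original rule logic
    off_hours = event["hour"] not in business_hours    # unusual time
    critical_asset = asset_criticality.get(event["host"]) == "high"
    new_user = event.get("first_time_user", False)     # user never seen on this host
    return base_hit and (off_hours or critical_asset or new_user)
```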
Splitting Rules
- Separate high-fidelity variants from noisy ones
- Create specific rules for known attack patterns
- Lower severity for informational detections
- Route different variants to different queues
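A routing table makes the split explicit: the high-fidelity variant pages the main queue at high severity while the broad variant files lower-priority work. Rule names and queues below are hypothetical:

```python
# Severity and destination per rule variant (hypothetical names).
ROUTES = {
    "remote_exec_known_attack_pattern": {"severity": "high", "queue": "soc-triage"},
    "remote_exec_any_admin_tool":       {"severity": "info", "queue": "hunt-review"},
}

def route_alert(rule_name, alert):
    """Attach severity and queue, defaulting unknown rules to analyst triage."""
    alert.update(ROUTES.get(rule_name, {"severity": "medium", "queue": "soc-triage"}))
    return alert
```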
Tuning Workflow
1. Measure: Collect alert volume and disposition data
2. Analyze: Identify patterns in false positives
3. Hypothesize: Propose tuning changes
4. Test: Validate changes against historical data
5. Implement: Deploy tuning in production
6. Monitor: Track metrics post-change
7. Iterate: Refine based on results
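Capturing each pass through this loop in a structured record keeps decisions auditable and supports the before/after comparisons used in the testing and monitoring steps. A minimal illustrative shape (adapt the fields to your tracking system):

```python
from dataclasses import dataclass, field

@dataclass
class TuningChange:
    """One iteration of the tuning loop, kept under version control."""
    rule: str
    baseline: dict                 # pre-change volume / TP rate    (Measure)
    fp_pattern: str                # what the false positives share (Analyze)
    proposed_change: str           # the tuning modification        (Hypothesize)
    backtest_result: str           # outcome on historical data     (Test)
    deployed: bool = False         #                                (Implement)
    post_metrics: dict = field(default_factory=dict)  # (Monitor / Iterate)
```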
Testing Tuning Changes
- Run modified queries against 30 to 90 days of historical data
- Compare new results to original rule
- Verify tuning removes false positives
- Confirm true positives are still detected
- Check for unintended consequences
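Conceptually the comparison reduces to set arithmetic over alert identifiers from both query versions. A sketch, assuming you can re-run both queries over the historical window and that prior true positives are tagged in your case data:

```python
def backtest(old_hits, new_hits, known_true_positives):
    """Compare a tuned rule against the original over historical data.

    `old_hits` / `new_hits` are sets of alert IDs produced by the original
    and modified queries; `known_true_positives` are IDs previously
    dispositioned as true positives (illustrative inputs).
    """
    suppressed = old_hits - new_hits     # alerts the tuning removes
    introduced = new_hits - old_hits     # unintended new matches
    lost = known_true_positives & suppressed
    if lost:
        raise AssertionError(f"tuning would drop confirmed detections: {lost}")
    return {"suppressed": len(suppressed), "introduced": len(introduced)}
```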
Alert Enrichment
Improve alert quality by adding context:
- User details: role, department, manager
- Asset details: criticality, owner, recent changes
- Related alerts: same user or host
- Threat intelligence: IOC reputation
- Historical context: first time seen, frequency
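An enrichment step typically sits between detection and queueing, joining the alert against directory, asset, intel, and history lookups. A sketch in which every lookup object is a hypothetical stand-in for your own systems:

```python
def enrich(alert, user_dir, asset_db, intel, history):
    """Attach triage context to an alert before it reaches the queue.

    `user_dir`, `asset_db`, `intel`, and `history` are hypothetical
    lookup interfaces standing in for a directory, CMDB, threat intel
    platform, and alert store.
    """
    alert["user_context"] = user_dir.get(alert["user"], {})    # role, department, manager
    alert["asset_context"] = asset_db.get(alert["host"], {})   # criticality, owner, changes
    alert["related_alerts"] = history.recent(alert["user"], alert["host"])
    alert["ioc_reputation"] = {i: intel.get(i) for i in alert.get("iocs", [])}
    alert["first_seen"] = history.first_seen(alert["user"], alert["host"])
    return alert
```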
Sustainable Tuning Practices
Regular Reviews
- Schedule weekly or monthly tuning sessions
- Review highest volume and lowest TPR rules
- Incorporate analyst feedback systematically
- Track tuning changes over time
Documentation
- Document rationale for every tuning decision
- Maintain allowlist justifications
- Record baseline metrics before and after tuning
- Version control detection rules
Balance Detection and Noise
- Accept that some noise is unavoidable for broad coverage
- Prioritize tuning rules with the worst signal-to-noise ratio
- Consider tiered alerting: high confidence alerts vs. informational
- Do not tune away legitimate detection capability
Common Pitfalls
- Over-tuning: Excluding so much activity that real threats are missed
- Broad allowlists: Allowlisting patterns that attackers could abuse
- No documentation: Forgetting why tuning was applied
- No testing: Implementing changes without validation
- Ignoring analyst feedback: Missing practical tuning opportunities