Burp Suite Sequencer Tool
Comprehensive guide to using Burp Suite's Sequencer tool for analyzing randomness in session tokens and other important values
The Sequencer tool is a specialized utility within Burp Suite that analyzes the randomness (entropy) of session tokens, CSRF tokens, password reset tokens, and other important values generated by web applications. It's available in both Community and Professional editions with identical functionality.
Overview
Purpose and Functionality
Sequencer serves as Burp Suite's statistical analysis tool:
- Analyzes the quality of randomness in security-critical values
- Identifies predictable patterns in tokens
- Performs statistical tests on token samples
- Detects weaknesses in token generation algorithms
- Helps assess the security of session management
It's an essential tool for security testing, helping you identify whether tokens are sufficiently unpredictable to resist attacks like session hijacking or brute forcing.
Common Use Cases
Sequencer is useful in several testing scenarios:
- Session token analysis
- Evaluate the security of session identifiers
- Detect patterns that could lead to session prediction
- Assess resistance to brute force attacks
- CSRF token testing
- Verify the unpredictability of anti-CSRF tokens
- Ensure tokens cannot be guessed by attackers
- Password reset token evaluation
- Analyze the randomness of tokens used in password reset links
- Identify if tokens can be predicted or brute-forced
- API key assessment
- Test the quality of generated API keys
- Ensure keys have sufficient entropy
- Random number generator evaluation
- Assess the quality of random number generators
- Identify weaknesses in randomization algorithms
Proper token randomness is critical for security, as predictable tokens can lead to account takeover and other serious vulnerabilities.
Interface Overview
Key Interface Elements
Understanding the Sequencer interface:
- Live capture tab
- Configure and perform live token capture
- Set up token extraction from responses
- Start/stop token collection
- Manual load tab
- Manually paste or load token samples
- Analyze pre-collected token sets
- Analysis options
- Configure token processing
- Set analysis parameters
- Choose statistical tests
- Results summary
- Overall entropy assessment
- Effective entropy bits
- FIPS compliance status
- Detailed analysis
- Character-level analysis
- Bit-level analysis
- Visualization of token patterns
- Statistical test results
The interface is designed to facilitate both automated token collection and detailed statistical analysis.
Basic Operations
Live Token Capture
Setting Up Live Capture
Steps to configure and perform live token capture:
- Select the Live capture tab
- This is the default mode for capturing tokens in real-time
- Configure the token request
- Select a request that generates a new token
- This could be a login page, registration form, or any page that issues tokens
- Click "Configure request" to set up the request details
- Set token location
- Specify where in the response the token appears
- Options include:
- Cookie: Extract from a specific cookie
- Form field: Extract from an HTML form input
- Response body: Extract using regex
- Header: Extract from a response header
- Start the capture
- Click "Start capture"
- Sequencer will repeatedly send the configured request
- Tokens will be extracted and collected for analysis
- Monitor progress
- Watch the token count increase
- Typically need 100+ tokens for reliable analysis
- More tokens (1,000+) provide more accurate results
Live capture automates the process of collecting token samples for analysis.
Token Extraction Methods
Different ways to extract tokens from responses:
- Cookie-based extraction
- Select a specific cookie name
- Sequencer extracts the cookie value
- Useful for session cookies
- Form field extraction
- Specify a form input name (e.g., "csrf_token")
- Sequencer extracts the value attribute
- Useful for CSRF tokens in forms
- Custom location (regex)
- Define a regular expression with a capture group
- Sequencer extracts the matched content
- Useful for tokens in response body or custom formats
- HTTP header extraction
- Specify a header name
- Sequencer extracts the header value
- Useful for tokens in custom headers
- JSON extraction
- Use regex to extract values from JSON responses
- Target specific JSON properties containing tokens
Proper token extraction configuration is crucial for accurate analysis.
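Outside of Burp, you can sanity-check an extraction expression against a saved response before using it in a live capture. A minimal sketch; the response snippets, the `csrf_token` field name, and the JSON property are illustrative assumptions:

```python
import re

# Illustrative response body; in practice, test against a real response captured in Proxy.
response_body = '<input type="hidden" name="csrf_token" value="a3f9c2d417b8e6a1">'

# Capture group 1 holds the token, mirroring a custom-location extraction rule.
form_field_pattern = re.compile(r'name="csrf_token"\s+value="([^"]+)"')
match = form_field_pattern.search(response_body)
if match:
    print("Form token:", match.group(1))

# The same approach works for JSON responses: capture the value of a named property.
json_body = '{"session": {"token": "9f1b0c7e55aa43d2"}}'
json_pattern = re.compile(r'"token"\s*:\s*"([^"]+)"')
print("JSON token:", json_pattern.search(json_body).group(1))
```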
Manual Token Analysis
Analyzing pre-collected token samples:
- Select the Manual load tab
- Switch to manual mode for pre-collected tokens
- Input token samples
- Paste tokens directly, or load them from a file
- Each token should be on a separate line
- Configure token handling
- Set prefix/suffix to remove
- Configure custom token processing
- Specify character set if needed
- Start analysis
- Click "Analyze now"
- Sequencer will process the provided tokens
- Review results
- Examine the entropy assessment
- Check detailed statistical results
- Identify potential weaknesses
Manual load is useful when you've already collected tokens or want to analyze tokens from sources outside of Burp Suite.
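When collecting tokens outside of Burp for manual load, a short script can request a fresh session repeatedly and write one token per line. A rough sketch, assuming a hypothetical login URL and a session cookie named `SESSIONID`:

```python
import requests

# Hypothetical target and cookie name; adjust both for the application under test.
LOGIN_URL = "https://example.com/login"
COOKIE_NAME = "SESSIONID"
SAMPLES = 200

tokens = []
for _ in range(SAMPLES):
    # A fresh Session object each time forces the server to issue a new token.
    response = requests.Session().get(LOGIN_URL, timeout=10)
    token = response.cookies.get(COOKIE_NAME)
    if token:
        tokens.append(token)

# One token per line, ready to paste or load into the Manual load tab.
with open("tokens.txt", "w") as fh:
    fh.write("\n".join(tokens))
print(f"Collected {len(tokens)} tokens")
```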
Token Preprocessing Options
Configuring how tokens are processed before analysis:
- Prefix/suffix handling
- Remove consistent prefixes or suffixes
- Focus analysis on the variable portion
- Example: Remove "sess_" prefix from "sess_abc123"
- Character handling
- Count format: ASCII or bit-level
- Ignore case: Treat upper/lowercase as same
- Ignore non-alphanumeric: Focus on letters/numbers
- Token length
- Analyze fixed-length substrings
- Useful for tokens with variable length
- Focus on specific portions of tokens
- Base encoding detection
- Automatically detect base64 or hex encoding
- Convert to raw binary for analysis
- Ensure accurate entropy measurement
Proper preprocessing ensures that the analysis focuses on the truly random portions of tokens.
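As a rough illustration of this kind of preprocessing, the sketch below strips a constant prefix and decodes hex or base64 so that analysis can work on the raw bytes. The `sess_` prefix mirrors the example above, and the decoding order is an assumption:

```python
import base64
import binascii

def to_raw_bytes(token: str, prefix: str = "sess_") -> bytes:
    """Strip a constant prefix, then decode hex or base64 to raw bytes."""
    if token.startswith(prefix):
        token = token[len(prefix):]
    try:
        return bytes.fromhex(token)
    except ValueError:
        pass
    try:
        # Re-add padding in case the application strips trailing '=' characters.
        padded = token + "=" * (-len(token) % 4)
        return base64.urlsafe_b64decode(padded)
    except (binascii.Error, ValueError):
        # Fall back to the literal ASCII bytes if no encoding is detected.
        return token.encode()

print(to_raw_bytes("sess_abc123").hex())  # the variable portion decoded as hex
```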
Understanding Results
Entropy Assessment
Understanding the concept of entropy in token analysis:
- What is entropy?
- Measure of randomness or unpredictability
- Higher entropy means more unpredictable tokens
- Expressed in bits of entropy
- Bits of entropy
- Each bit doubles the number of possible values
- 8 bits = 256 possibilities
- 32 bits = 4.3 billion possibilities
- 128 bits = 3.4 × 10^38 possibilities
- Effective entropy
- The actual unpredictability in the tokens
- Often lower than theoretical maximum
- What matters for security assessment
- Minimum requirements
- Session tokens: 64+ bits recommended
- CSRF tokens: 56+ bits recommended
- Password reset: 64+ bits recommended
- Entropy vs. token length
- Long tokens can still have low entropy
- Quality of randomness matters more than length
- Patterns reduce effective entropy
Understanding entropy helps assess whether tokens are sufficiently secure against prediction or brute force attacks.
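For a quick back-of-envelope check, the theoretical maximum entropy of a token is its length multiplied by log2 of the character-set size; the effective entropy Sequencer reports is usually lower. A small illustrative calculation:

```python
import math

def max_entropy_bits(token_length: int, alphabet_size: int) -> float:
    """Upper bound on entropy: every position independent and uniformly random."""
    return token_length * math.log2(alphabet_size)

print(max_entropy_bits(16, 16))  # 16 hex characters: at most 64.0 bits
print(max_entropy_bits(16, 36))  # 16 lowercase alphanumerics: at most ~82.7 bits
```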
Results Summary
Interpreting the main results display:
- Overall quality
- Excellent: Highly random, suitable for security purposes
- Adequate: Acceptable randomness for most purposes
- Poor: Insufficient randomness, potentially vulnerable
- Effective entropy
- Estimated bits of entropy in the tokens
- Higher is better; 64+ bits recommended
- Critical for assessing security strength
- FIPS compliance
- Federal Information Processing Standards
- Pass/fail for each statistical test
- Industry standard for randomness quality
- Sample size adequacy
- Indication if enough tokens were analyzed
- More samples = more reliable results
- Recommendations for additional samples if needed
- Character-level analysis
- Distribution of characters across positions
- Identifies position-specific patterns
- Shows character frequency anomalies
The summary provides a quick assessment of token security, while detailed results offer deeper insights.
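To translate an effective-entropy figure into something tangible for a report, a rough estimate of brute-force effort can help. The figures below are purely illustrative and assume an attacker limited to a fixed guess rate:

```python
def average_guess_time_days(entropy_bits: float, guesses_per_second: float) -> float:
    """Expected brute-force time: on average, half the keyspace must be searched."""
    keyspace = 2 ** entropy_bits
    return (keyspace / 2) / guesses_per_second / 86_400

# At 1,000 guesses per second, 32 bits of entropy falls in roughly 25 days,
# while 64 bits would take on the order of a hundred million years.
print(f"{average_guess_time_days(32, 1_000):.1f} days")
print(f"{average_guess_time_days(64, 1_000):.2e} days")
```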
Detailed Analysis
Understanding the statistical tests performed:
- Character-level tests
- Character frequency analysis
- Transition frequency analysis
- Character distribution by position
- Identifies patterns in character usage
- Bit-level tests
- Frequency test: Distribution of 0s and 1s
- Runs test: Sequences of consecutive bits
- Spectral test: Periodic patterns
- Maurer's universal test: Overall randomness
- FIPS 140-2 tests
- Monobit test: Proportion of 1s vs. 0s
- Poker test: Patterns in 4-bit blocks
- Runs test: Sequences of consecutive bits
- Long runs test: Excessive repetition
- Interpretation guidelines
- Multiple failed tests indicate problems
- Some tests are more important than others
- Context matters (session tokens vs. CSRF tokens)
These tests apply established statistical methods to detect non-random patterns that could make tokens predictable.
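As an example of what these tests check, the sketch below implements a simplified version of the FIPS 140-2 monobit test, which requires the number of ones in a 20,000-bit sample to fall strictly between 9,725 and 10,275. It is only an approximation of how Sequencer applies the test:

```python
import secrets

def fips_monobit(bitstream: str) -> bool:
    """Simplified FIPS 140-2 monobit test: in a 20,000-bit sample, the number
    of ones must fall strictly between 9,725 and 10,275."""
    if len(bitstream) != 20_000:
        raise ValueError("the test is defined over exactly 20,000 bits")
    ones = bitstream.count("1")
    return 9_725 < ones < 10_275

# Build a 20,000-bit stream; in practice, concatenate the bits of decoded tokens.
sample = "".join(f"{byte:08b}" for byte in secrets.token_bytes(2_500))
print("Monobit pass:", fips_monobit(sample))
```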
Data Visualizations
Understanding the visual representations of token analysis:
- Character-level charts
- Shows distribution of characters by position
- Highlights position-specific patterns
- Identifies fixed or predictable positions
- Bit-level charts
- Shows the distribution of 0 and 1 bits
- Identifies bit-level patterns
- Reveals subtle biases in randomness
- Transition maps
- Shows likelihood of character sequences
- Identifies predictable character transitions
- Reveals algorithm patterns
- Interpreting visualizations
- Even, random distribution = good
- Clear patterns or hotspots = bad
- Position-specific anomalies indicate weaknesses
Visualizations make it easier to spot patterns that might not be obvious from numerical results alone.
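If you want to reproduce the character-level view outside of Burp, a few lines of scripting can tabulate which characters appear at each position. A minimal sketch with a hypothetical token sample:

```python
from collections import Counter

# Hypothetical token sample; in practice, load the tokens you captured.
tokens = ["a1f9", "a2c3", "a3b7", "a4d1", "a5e8"]

length = len(tokens[0])
for position in range(length):
    counts = Counter(t[position] for t in tokens if len(t) == length)
    print(f"position {position}: {dict(counts)}")
# Position 0 is always 'a' here: a fixed, zero-entropy position that the
# character-level chart would show as a single dominant bar.
```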
Advanced Features
Custom Token Analysis
Tailoring the analysis to specific token types:
- Token format options
- Auto-detect format (hex, base64, etc.)
- Specify custom character sets
- Set fixed token length for analysis
- Analysis scope
- Analyze entire token or specific portions
- Focus on variable parts of tokens
- Exclude known non-random components
- Test selection
- Choose which statistical tests to run
- Adjust test parameters
- Focus on tests relevant to the token type
- Sample handling
- Filter duplicate tokens
- Handle tokens with special characters
- Process tokens with variable length
Custom analysis configuration allows for more accurate assessment of specific token types and formats.
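A small preprocessing step along these lines, deduplicating tokens and keeping only the most common length, is sketched below; the sample tokens are hypothetical:

```python
from collections import Counter

def clean_sample(tokens: list[str]) -> list[str]:
    """Drop blanks and duplicates, then keep only tokens of the most common
    length so variable-length outliers do not skew position-based analysis."""
    unique = list(dict.fromkeys(t.strip() for t in tokens if t.strip()))
    modal_length = Counter(len(t) for t in unique).most_common(1)[0][0]
    return [t for t in unique if len(t) == modal_length]

raw = ["abc123", "abc123", "def456", "short", "ghi789"]
print(clean_sample(raw))  # ['abc123', 'def456', 'ghi789']
```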
Handling Different Token Formats
Analyzing various token encoding schemes:
- Base64-encoded tokens
- Common in many web applications
- May need decoding before analysis
- Often used for JWT and session tokens
- Hexadecimal tokens
- Represent binary data as hex characters
- Common in many session management systems
- Each byte represented by two hex characters
- UUID/GUID format
- Standard format for unique identifiers
- Fixed structure with version information
- May have predictable components
- Custom formats
- Application-specific token formats
- May combine multiple encoding schemes
- May include timestamps or other predictable data
Understanding the token format is crucial for accurate analysis and proper interpretation of results.
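For example, standard library support makes it easy to see how much of a UUID is actually random: a version 4 UUID fixes six of its 128 bits (version and variant), while a version 1 UUID embeds a timestamp and node identifier. A short illustration:

```python
import uuid

random_uuid = uuid.uuid4()
print(random_uuid, "version:", random_uuid.version)
# A version 4 UUID carries at most 122 random bits out of 128, because the
# 4 version bits and 2 variant bits are fixed by the format.

time_based_uuid = uuid.uuid1()
print(time_based_uuid, "timestamp field:", time_based_uuid.time)
# A version 1 UUID is largely predictable: it embeds a 60-bit timestamp
# and the node identifier rather than random data.
```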
Practical Applications
Session Token Analysis
Using Sequencer to evaluate session token security:
- Capture methodology
- Obtain multiple session tokens (100+)
- Use the same user account or different accounts
- Collect tokens in quick succession
- Key security criteria
- Minimum 64 bits of entropy
- No predictable patterns or sequences
- No correlation with user information
- No time-based patterns
- Common weaknesses
- Time-based token generation
- Sequential counters in tokens
- User information encoded in tokens
- Weak random number generators
- Impact of vulnerabilities
- Session prediction attacks
- Session hijacking
- Authentication bypass
- Account takeover
Secure session management is critical for protecting user accounts and preventing unauthorized access.
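One quick check for the counter weakness mentioned above is to interpret consecutively captured tokens as integers and look for a constant step. A minimal sketch using hypothetical hex tokens:

```python
def looks_sequential(tokens: list[str]) -> bool:
    """Return True when consecutively issued tokens, read as hex integers,
    differ by a constant step, which indicates a counter-based generator."""
    values = [int(t, 16) for t in tokens]
    deltas = {b - a for a, b in zip(values, values[1:])}
    return len(deltas) == 1

# Hypothetical tokens, listed in the order they were captured:
print(looks_sequential(["01f4", "01f5", "01f6", "01f7"]))  # True: fixed step of 1
print(looks_sequential(["9a31", "04fc", "e210", "77b8"]))  # False: no fixed step
```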
CSRF Token Evaluation
Analyzing anti-CSRF token security:
- Capture methodology
- Collect multiple CSRF tokens from forms
- Obtain tokens in quick succession
- Use the same session for consistency
- Key security criteria
- Minimum 56 bits of entropy
- Unique per form or session
- No predictable patterns
- Not based solely on session ID
- Common weaknesses
- Static tokens across sessions
- Predictable generation algorithms
- Reuse of tokens across forms
- Simple transformations of session IDs
- Impact of vulnerabilities
- CSRF protection bypass
- Forced actions on behalf of users
- Unauthorized state-changing operations
Effective CSRF tokens must be sufficiently random to prevent attackers from guessing or predicting them.
Password Reset Token Analysis
Analyzing the security of password reset tokens:
- Capture methodology
- Request multiple password resets
- Collect tokens from emails or URLs
- Analyze token patterns
- Key security criteria
- Minimum 64 bits of entropy
- Limited validity period
- Single-use only
- No correlation with user information
- Common weaknesses
- Time-based generation patterns
- User information encoded in tokens
- Insufficient token length
- Reusable tokens
- Impact of vulnerabilities
- Account takeover
- Unauthorized password changes
- Persistent access to victim accounts
Password reset tokens must be highly secure as they provide direct access to change user credentials.
Integration with Other Tools
Sequencer in the Burp Workflow
How Sequencer integrates with other Burp tools:
- Proxy to Sequencer
- Identify requests that generate tokens
- Right-click and select "Send to Sequencer"
- Configure token extraction
- Repeater integration
- Use Repeater to test token generation
- Manually collect tokens for analysis
- Send to Sequencer via clipboard
- Intruder connection
- Use Intruder to harvest multiple tokens
- Export results for Sequencer analysis
- Test token behavior under different conditions
- Scanner findings
- Follow up on Scanner entropy warnings
- Verify token randomness issues
- Assess impact of identified weaknesses
This integration creates a comprehensive workflow for identifying and assessing token-related security issues.
Best Practices
Efficient Testing Workflow
Maximize your productivity with Sequencer:
- Sample size considerations
- Collect at least 100 tokens for basic analysis
- 1000+ tokens for more reliable results
- More samples provide higher confidence
- Token selection strategy
- Focus on security-critical tokens first
- Prioritize authentication and session tokens
- Test tokens that protect high-value functions
- Analysis approach
- Start with overall entropy assessment
- Investigate failed statistical tests
- Look for patterns in visualizations
- Compare results across different token types
- Documentation best practices
- Save analysis results for reporting
- Document token generation conditions
- Include sample tokens and entropy measurements
- Explain the security impact of findings
An efficient workflow helps you thoroughly assess token security without unnecessary effort.
Result Interpretation Guidelines
How to properly interpret Sequencer results:
- Entropy thresholds
- 64+ bits: Suitable for session tokens
- 56+ bits: Minimum for CSRF tokens
- 128+ bits: Ideal for critical applications
- Statistical test results
- Multiple failed tests indicate problems
- Some variance is normal in truly random data
- Look for patterns across different tests
- Context considerations
- Higher security requirements for financial applications
- Consider token lifetime and exposure
- Evaluate alongside other security controls
- False positives/negatives
- Small sample sizes may give misleading results
- Some encoding schemes can affect analysis
- Consider additional manual verification
- Reporting guidance
- Clearly explain the security impact
- Provide specific entropy measurements
- Include recommendations based on token usage
Proper interpretation ensures accurate assessment of token security and appropriate remediation recommendations.
Troubleshooting
Common Issues and Solutions
Solutions to frequently encountered problems:
- Token extraction failures
- Verify the token location is correct
- Check if the application changed token format
- Try different extraction methods
- Use regex with capture groups for complex formats
- Inconsistent results
- Collect more token samples
- Verify tokens are from the same context
- Check for time-based patterns
- Ensure proper token preprocessing
- Analysis performance issues
- Reduce the number of tokens for initial analysis
- Close other resource-intensive Burp tools
- Increase memory allocation to Burp
- Split analysis into smaller batches
- Interpretation challenges
- Compare with known-good token examples
- Consult statistical references for test results
- Consider the specific security context
- Verify findings with manual analysis
- Token format problems
- Identify and remove non-random prefixes/suffixes
- Handle special characters appropriately
- Configure correct character set for analysis
- Pre-process tokens to isolate random portions
Use Case Examples
Session Token Analysis Example
Step-by-step example of analyzing session tokens:
- Capture setup
- Identify the login request that sets session cookies
- Send to Sequencer
- Configure to extract the session cookie
- Token collection
- Start the capture
- Collect 1000+ tokens
- Stop the capture when sufficient
- Analysis configuration
- Remove any known prefixes (e.g., "SESS_")
- Set appropriate token format options
- Start the analysis
- Results evaluation
- Check overall entropy (should be 64+ bits)
- Review character distribution
- Look for patterns in visualizations
- Check FIPS compliance
- Findings interpretation
- Assess if tokens are sufficiently random
- Identify any weaknesses or patterns
- Determine security impact
- Formulate recommendations
This methodical approach ensures thorough assessment of session token security.
Detecting Time-Based Token Generation
Identifying time-based patterns in token generation:
- Capture methodology
- Collect tokens in two batches
- First batch: rapid succession (e.g., one per second)
- Second batch: after a delay (e.g., one hour later)
- Analysis approach
- Analyze each batch separately
- Compare entropy between batches
- Look for similar patterns within each batch
- Pattern identification
- Convert tokens to binary if needed
- Look for incremental patterns
- Check for timestamp encoding
- Analyze character positions for changes
- Verification techniques
- Generate tokens at known times
- Try to predict future tokens
- Decode suspected timestamp portions
- Security impact
- Assess predictability window
- Determine feasibility of token prediction
- Evaluate risk based on application context
Time-based token generation is a common weakness that can lead to token prediction attacks.
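A rough way to test for this outside of Burp is to record the capture time alongside each token and see whether part of the token tracks the clock. The sketch below assumes a hypothetical login URL, a session cookie named `SESSIONID`, and hex-encoded tokens:

```python
import time
import requests

# Hypothetical endpoint and cookie name; adjust for the application under test.
URL, COOKIE = "https://example.com/login", "SESSIONID"

observations = []
for _ in range(20):
    token = requests.Session().get(URL, timeout=10).cookies.get(COOKIE)
    observations.append((time.time(), token))
    time.sleep(1)

# Assumes hex-encoded tokens; adjust the slice and decoding for other formats.
for captured_at, token in observations:
    prefix_value = int(token[:8], 16)
    print(f"{captured_at:.0f}  {token[:8]}  {prefix_value}")
# A prefix value that increases in step with the left-hand column is strong
# evidence that the generator seeds from, or directly embeds, a timestamp.
```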
Callout:
The Sequencer tool is identical in both Community and Professional editions, making it fully accessible to all Burp Suite users without limitations.