Run your Tests
Execute your test specifications with browser automation
The `bugster run` command executes your generated test specifications using browser automation, validating that your application works as expected. It runs tests against your configured application URL and reports results.
You must run `bugster init` and `bugster generate` before using the run command. Ensure your application is running on the configured base URL.
Basic Usage
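Run all generated tests with your saved configuration:

```bash
bugster run
```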
This command will:
- Load test specifications from the `.bugster/tests/` directory
- Launch browser automation using Playwright
- Execute test steps against your application
- Report results with pass/fail status and detailed feedback
How It Works
Test Discovery
Bugster scans for test files:
- Loads all `.yaml` files from `.bugster/tests/`
- Applies test limits and filtering (if configured)
- Processes always-run tests first
- Organizes execution order
Browser Automation
For each test specification:
- Launches a browser instance (Chrome by default)
- Navigates to your application
- Authenticates using configured credentials
- Executes each test step in sequence
Results Collection
After execution:
- Records pass/fail status for each test
- Captures failure reasons and screenshots
- Optionally records video of test execution
- Streams results to dashboard (if enabled)
Command Flags and Options
Path Targeting
`path` argument
Purpose: Run tests from a specific file or directory
Formats:
- File: `bugster run auth/1_login.yaml`
- Directory: `bugster run auth/`
- Relative paths: resolved relative to `.bugster/tests/`
Examples:
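```bash
# Run a single test file
bugster run auth/1_login.yaml

# Run every test in a feature folder
bugster run auth/
```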
Browser Configuration
`--headless`
Purpose: Run tests without visible browser window
When to use:
- CI/CD pipelines
- Server environments without GUI
- Faster execution when debugging isn’t needed
Default: Browser window is visible for local debugging
Example:
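```bash
# Execute tests without opening a browser window
bugster run --headless
```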
Output Control
`--silent`, `-s`
Purpose: Minimize console output during execution
What’s hidden:
- Individual test step details
- Browser automation logs
- Progress indicators
What’s shown:
- Final test results
- Error messages
- Summary statistics
Example:
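```bash
# Minimal console output
bugster run --silent

# Short form
bugster run -s
```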
`--verbose`
Purpose: Show detailed execution logs and debugging information
Additional output:
- Individual test step execution
- Browser automation details
- Network requests and responses
- Detailed error information
When to use: Debugging test failures or understanding execution flow
Example:
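```bash
# Detailed execution logs for debugging
bugster run --verbose
```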
Environment Configuration
`--base-url`
Purpose: Override the base URL from configuration
Use cases:
- Testing against different environments
- CI/CD with dynamic URLs
- Vercel preview deployments
- Staging environment testing
Examples:
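The URLs and value syntax below are illustrative:

```bash
# Test against a staging environment
bugster run --base-url https://staging.example.com

# Test a local development server
bugster run --base-url http://localhost:3000
```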
Test Selection and Filtering
`--only-affected`
Purpose: Run only tests affected by recent code changes
How it works:
- Analyzes git changes since last commit
- Identifies modified pages and components
- Maps changes to relevant test files
- Includes always-run tests regardless
Benefits:
- Faster feedback in development
- Focus on potentially broken functionality
- Efficient CI/CD testing
Example:
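```bash
# Run only tests mapped to recent git changes
bugster run --only-affected
```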
Concurrency Control
`--max-concurrent`
Purpose: Control maximum number of parallel test executions
Range: 1 to 5 concurrent tests
Default: 5 concurrent tests for optimal performance
Considerations:
- Higher values: Faster execution, more resource usage
- Lower values: More stable, better for debugging
- Value 1: Sequential execution, easiest debugging
Examples:
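```bash
# Maximum parallelism (the default)
bugster run --max-concurrent 5

# Sequential execution for easiest debugging
bugster run --max-concurrent 1
```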
Combined Usage Examples
Development Workflow
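A typical local iteration might combine the flags above (a sketch; it assumes the flags compose as documented):

```bash
# Run only tests affected by recent changes, sequentially, with full logs
bugster run --only-affected --max-concurrent 1 --verbose
```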
Test Execution Flow
Test Discovery and Limits
Parallel Execution
When running multiple tests concurrently, you'll see compact status updates as each test starts and finishes.
Results Summary
Test Limits and Selection
Default Test Limits
- Free tier: Maximum 5 tests per execution
- Distribution: Tests spread across feature folders
- Always-run tests: Execute in addition to the limit
- Selection algorithm: Representative tests chosen intelligently
Test Selection Logic
Folder Distribution
Algorithm: Tests distributed evenly across folders
Example: 15 total tests, limit of 5
- `auth/`: 2 tests selected
- `dashboard/`: 2 tests selected
- `checkout/`: 1 test selected
Always-Run Priority
Execution: Always-run tests execute first
Additional: Counted separately from the regular test limit
Maximum: Up to 3 always-run tests per execution
Video Recording
Tests automatically record execution for debugging:
- Location: `.bugster/videos/{run-id}/{test-id}/`
- Format: WebM video files
- Content: Complete test execution from start to finish
- Accessibility: Videos uploaded to dashboard when streaming enabled
Authentication Handling
Credential Usage
Tests automatically handle authentication using configured credentials:
Authentication Flow
- Credential selection: Test uses appropriate credential set
- Login execution: Automated login using provided credentials
- Session management: Maintains authentication throughout test
- Logout handling: Cleans up sessions between tests
Best Practices
Performance Optimization
- Use appropriate concurrency: Higher for fast feedback, lower for stability
- Run targeted tests: Use `--only-affected` during development
- Environment-specific runs: Use `--base-url` for different environments
- Headless in CI/CD: Always use `--headless` in automated pipelines (combined in the sketch below)
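For instance, a CI pipeline step might combine these practices (a sketch; `$DEPLOY_URL` is a hypothetical pipeline variable):

```bash
# Headless, quiet run against a dynamically provisioned environment
bugster run --headless --silent --base-url "$DEPLOY_URL"
```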
Debugging and Development
- Sequential execution: Use `--max-concurrent 1` when debugging (see the example below)
- Verbose output: Add `--verbose` to understand test execution
- Local URLs: Test against `localhost` during development
- Save results: Use `--output` to analyze test patterns
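Put together, a focused debugging session might look like this (the `auth/` folder is illustrative):

```bash
# Debug one feature folder: visible browser, one test at a time, full logs
bugster run auth/ --max-concurrent 1 --verbose
```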
Troubleshooting
Application not accessible
Error: Failed to navigate to application URL
Solutions:
- Verify your application is running on the configured base URL
- Check network connectivity and firewall settings
- Use `--base-url` to override configuration if needed
- Ensure the URL includes the correct protocol (http/https)
Authentication failures
Error: Login failed or authentication required
Solutions:
- Verify credentials in `.bugster/config.yaml` are correct
- Test credentials manually in your application
- Check if the authentication flow has changed
- Ensure test user accounts exist and are active
Test timeouts
Error: Test execution timeout or unresponsive browser
Solutions:
- Reduce the `--max-concurrent` value for better stability
- Check application performance and response times
- Use `--verbose` to identify slow steps
- Verify adequate system resources (CPU, memory)
Element not found errors
Error: UI elements not found during test execution
Solutions:
- Verify application UI hasn’t changed significantly
- Run `bugster update` to refresh test specifications
- Use `--verbose` to see exact element selectors
- Check if elements have dynamic IDs or classes
Network or streaming issues
Error: Failed to stream results or connect to Bugster services
Solutions:
- Check internet connectivity
- Try with `--no-stream-results` to run locally only
- Verify API key is valid and not expired
- Check if corporate firewall blocks connections
Exit Codes
The `bugster run` command uses standard exit codes:
- Exit 0: All tests passed successfully
- Exit 1: One or more tests failed or execution error occurred
This enables proper CI/CD integration where failed tests fail the build.
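For example, a CI shell step can branch on the exit status (a minimal sketch; the echo messages are illustrative):

```bash
# Run headless in CI; the exit code signals overall success or failure
if bugster run --headless --silent; then
  echo "All Bugster tests passed"
else
  echo "Bugster tests failed; see the dashboard or recorded videos" >&2
  exit 1
fi
```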
Next Steps
After running tests, you can review results in the dashboard, refresh outdated specs with `bugster update`, or wire the command into your CI/CD pipeline.