Getting started
From a fresh purchase to your first benchmark, plus a reference for every campaign field.
1. Install & activate
- Sign in at the customer portal and click Admin installer (Windows).
- Run BenchStress-Setup-x.y.z.exe. Windows SmartScreen will warn the first time (we're in the process of getting an Authenticode cert) — click More info → Run anyway. Walk through the wizard.
- Launch BenchStress from the Start Menu. On first launch you'll see a sign-in dialog asking for your purchase email and a 6-digit code.
- Enter your purchase email, click Send code. Check your inbox (subject: Your BenchStress activation code), enter the code in the dialog.
- The dashboard loads. The bundled Server (the SignalR broker that workers connect to) auto-starts on
localhost:5050. You'll see "Server not connected" for ~2 seconds while it spins up, then it flips to Connected.
2. Add a worker (same machine)
- From the portal, download Worker (Windows x64). Extract the .zip anywhere — desktop is fine.
- Double-click BenchStress.Worker.exe. A console window opens.
- It asks for your purchase email + 6-digit code (same flow as Admin). Enter both.
- After activation, the worker queries the cloud for your registered Admin sessions, finds the one running on the same box, and auto-connects to localhost:5050.
- Within a few seconds the worker shows up in your Admin's Dashboard tab with state Connected, and in your portal under Workers with a green status dot.
3. Add a worker (different machine)
For real distributed load testing, run workers on separate boxes. Each worker can drive 10K–50K simulated browsers depending on hardware.
- On the Admin machine, click + Add Worker on the dashboard. The dialog shows every detected LAN IP on this box. Pick the network the worker boxes can reach (usually wired Ethernet, not Hyper-V/VPN/WSL adapters), click Save. This is what gets reported to the cloud as your Admin's address.
- Run the firewall command shown in Step 3 of that dialog (one-time, in PowerShell as admin) so inbound TCP 5050 is allowed.
- On each worker box: download the appropriate worker bundle (Windows zip or Linux .tar.gz) from the portal. No login required — wget works fine on headless Linux:

  wget https://benchstress.com/downloads/BenchStress.Worker.linux-x64.tar.gz
  tar xzf BenchStress.Worker.linux-x64.tar.gz
  chmod +x BenchStress.Worker
  ./BenchStress.Worker

- The worker prompts for email + code. After activation it queries the cloud, finds your Admin (auto-picks if there's only one, shows a console menu if multiple), and connects.
- It saves the chosen Admin URL into worker.json next to the exe so subsequent launches are silent. Within ~60 seconds the worker shows up in your Admin's Dashboard.
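The worker.json schema isn't documented here; treat this as a rough sketch. The only thing the text guarantees it stores is the chosen Admin URL, and the field name below is an assumption:

```json
{
  "serverUrl": "http://192.168.1.10:5050"
}
```

Deleting this file forces fresh discovery on the next launch (the Troubleshooting section uses exactly that as a fix).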
4. Build a campaign
- Click Campaigns in the sidebar.
- Click New campaign. Fill in the form (full reference below).
- Tick the workers you want this campaign to run on (you can also exclude specific workers if some are busy).
- Click Save & Start. The campaign begins ramping immediately. Switch to the Telemetry tab to watch live.
- Click Stop on the Telemetry tab to end the campaign. It can be restarted at any time from Campaigns.
5. Campaign field reference
Campaign name
Free-form label so you can find this campaign later. Doesn't affect traffic.
Browser profile
A preset that injects realistic User-Agent, Accept, Accept-Language, Accept-Encoding, and Sec-Ch-Ua-* client hints for the chosen browser. Pick the browser your real traffic looks like — servers, CDNs, and bot-protection middleware all branch on these headers.
Custom headers (below) layer on top and override these.
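How the layering works can be sketched as a plain dict merge. The header values here are illustrative, not the actual profile contents:

```python
# Hypothetical subset of what a Chrome browser profile might inject.
profile_headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/124.0",
    "Accept": "text/html,application/xhtml+xml,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

# Custom headers from the campaign form.
custom_headers = {
    "Accept": "application/json",      # overrides the profile's Accept
    "Authorization": "Bearer eyJ...",  # layered on top
}

# Custom headers layer on top of the profile and win on conflict.
effective = {**profile_headers, **custom_headers}
print(effective["Accept"])  # application/json
```

Profile headers you don't override pass through untouched, which keeps the request realistic while your overrides take effect.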
Target URL
Full HTTP/HTTPS URL to hit. Path and query string are sent verbatim.
Examples:
https://api.example.com/v1/health
https://shop.example.com/products?category=shoes&sort=popular
https://staging.example.com/api/users/42
http://internal-svc.lan:8080/metrics
Method
GET, POST, PUT, PATCH, DELETE, HEAD. Pair non-GET methods with a Body and a matching Content-Type header.
Host header override
Sends a Host: header that differs from the URL's hostname. The URL still controls where the TCP connection goes (DNS resolution); the Host header controls which vhost the server routes to.
Use it to:
- SNI / vhost testing: pin TCP to a specific IP via URL, but tell the server "I'm asking for the customer-facing hostname".
- CDN bypass: hit your origin directly while still triggering the right vhost-routed handler.
- A/B between hosts on the same IP: same TCP target, different vhost.
Examples (for each, the URL goes in Target URL, the value below goes in this field):
www.example.com
shop.example.com
api-staging.example.com
prod-origin.example.com:8443
Concrete pairing — bypass Cloudflare and hit your origin directly:
Target URL: https://203.0.113.5/api/health
Host header override: api.example.com
Empty = use the URL's hostname (normal behavior).
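The split can be sketched with Python's standard library, using the example IP and hostname above. This illustrates the mechanics, not how BenchStress is implemented:

```python
import urllib.request

# The URL controls where the TCP connection goes;
# the Host header controls which vhost the server routes to.
req = urllib.request.Request(
    "https://203.0.113.5/api/health",
    headers={"Host": "api.example.com"},
)

print(req.host)                # 203.0.113.5
print(req.get_header("Host"))  # api.example.com
```

Note that over HTTPS, certificate validation against a raw IP usually fails in ordinary clients, which is one reason a built-in override field is convenient.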
Body
Raw request body, sent verbatim. Plain text, JSON, form-encoded, anything your endpoint accepts. Pair with a Content-Type custom header so the server parses it.
Example — JSON login payload:
{"email":"[email protected]","password":"hunter2","remember":true}
Example — form-encoded (also set Content-Type: application/x-www-form-urlencoded in custom headers):
username=alice&password=hunter2&next=%2Fdashboard
Example — GraphQL query (with Content-Type: application/json):
{"query":"query { products(first: 20) { id name price } }"}
Empty for GET/HEAD.
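If you build the body in a script before pasting it into the form, serializing with a library avoids quoting mistakes. A sketch using the example payloads above:

```python
import json
import urllib.parse

# JSON login payload (same as the example above).
body = json.dumps({"email": "[email protected]", "password": "hunter2", "remember": True})
print(body)  # {"email": "[email protected]", "password": "hunter2", "remember": true}

# Form-encoded equivalent; pair it with the
# Content-Type: application/x-www-form-urlencoded custom header.
form = urllib.parse.urlencode({"username": "alice", "password": "hunter2", "next": "/dashboard"})
print(form)  # username=alice&password=hunter2&next=%2Fdashboard
```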
Custom headers (Key: Value)
Layered on top of the Browser profile headers. Your benchmark is only as realistic as the headers it sends — a missing Cookie: or Authorization: header hits the unauthenticated path and badly underestimates real load.
Common pattern — authenticated JSON API call:
Content-Type: application/json
Accept: application/json
Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...
Common pattern — logged-in browser session:
Cookie: session=abc123def456; csrftoken=xyz789
Accept-Language: en-US,en;q=0.9
Referer: https://shop.example.com/
X-Requested-With: XMLHttpRequest
Common pattern — multi-tenant SaaS, route to a specific tenant:
X-Tenant-Id: acme
X-Api-Key: live_pk_8a91f...
Authorization: Bearer eyJhbGciOi...
Common pattern — bypass CDN/edge cache so your origin actually sees the load:
Cache-Control: no-cache
Pragma: no-cache
CF-Connecting-IP: 198.51.100.42
Common pattern — mobile client simulation:
User-Agent: MyApp/2.4.1 (iPhone; iOS 17.5; Build/21F79)
Accept: application/json
X-Client-Version: 2.4.1
X-Device-Id: 3F2A8B1C-...
Tip: open your real site in Chrome DevTools → Network tab → right-click a request → Copy as cURL. The headers in that command line are exactly what you should mirror here for a representative benchmark.
Clients per worker
How many simulated browsers each worker keeps alive simultaneously. 20 = each worker holds 20 concurrent clients. With 3 workers, 60 concurrent clients total.
High-end limit per worker is roughly 50,000 on beefy hardware. Practical sweet spot for most targets is 1,000–10,000.
Ramp rate
How many additional clients each worker activates per second until it hits Clients per worker. 2 with 20 clients = full ramp in 10 seconds.
Higher = more aggressive cold-start. Lower = gentler, easier to spot exactly when the system tipped over.
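The ramp arithmetic is worth sanity-checking before a run; using the numbers from the example above:

```python
clients_per_worker = 20  # target concurrency per worker
ramp_per_second = 2      # additional clients activated per second
workers = 3

seconds_to_full_ramp = clients_per_worker / ramp_per_second
total_clients = clients_per_worker * workers

print(seconds_to_full_ramp)  # 10.0 seconds to full ramp
print(total_clients)         # 60 concurrent clients campaign-wide
```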
RPS cap
Hard ceiling on total requests/second across all workers in this campaign. 10000 = workers throttle themselves so the sum never exceeds 10K RPS.
Set this when you have an SLO target you want to sustain rather than overrun. Leave high (or empty) when you want to find the breaking point.
Duration
How long the campaign runs before auto-stopping. Untick for an open-ended run you stop manually.
Think time
Pause between requests on each individual client, simulating a user reading the page. 100ms means each client waits 100ms after a request finishes before sending the next one.
Set to 0 for max-throughput stress (each client fires the next request immediately on response). Higher values produce more realistic per-user load and let you sustain more concurrent clients without saturating.
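Think time, concurrency, and throughput are linked by Little's law: each client completes one request every (response time + think time) seconds. A sketch with assumed numbers (the 100 ms response time is an assumption for illustration):

```python
clients = 1_000          # concurrent clients across all workers
response_time_s = 0.100  # assumed average response time
think_time_s = 0.100     # the 100ms think time from the example

# Steady-state throughput: clients / (response time + think time).
rps = clients / (response_time_s + think_time_s)
print(rps)  # 5000.0 requests/second

# With think time 0, the same clients drive roughly double the load.
print(clients / response_time_s)  # ~10000 requests/second
```

This is also why higher think times let you hold more concurrent clients at the same RPS.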
HTTP/2
Toggles the wire protocol.
- Off (HTTP/1.1): each TCP connection serves one request at a time.
- On: many in-flight requests share one TCP connection (multiplexing). Modern CDNs/Nginx/k8s ingresses default to HTTP/2.
Match the protocol your real users actually use. CDN/Cloudflare-fronted? Almost certainly HTTP/2 on. Legacy on-prem behind an old load balancer? Maybe HTTP/1.1.
Keep-alive
- On (default): the TCP connection is reused between requests. Realistic for browsers/API clients.
- Off: every request opens a fresh TCP+TLS handshake. Stresses your connection-establishment path — TLS termination, accept queue depth, slow-start CPU.
Turn off only when you specifically suspect the handshake/TLS layer is your bottleneck. Otherwise leave on.
Authorization
Convenience shortcut for the Authorization: header. Equivalent to setting it manually in custom headers, just less typo-prone.
Examples:
Bearer: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ0ZXN0In0.xxx
Basic: admin:hunter2
Basic: apikey: (some APIs use empty password)
Bearer is what most modern API tokens / OAuth use. Basic is mostly legacy admin endpoints. If you need a non-standard scheme (Token, Hawk, etc.), put it in Custom headers directly:
Authorization: Token a1b2c3d4e5f6...
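For reference, Basic auth is just base64 of user:password, which is exactly why hand-building the header is typo-prone:

```python
import base64

user, password = "admin", "hunter2"  # the Basic example above
token = base64.b64encode(f"{user}:{password}".encode()).decode()
authorization = f"Basic {token}"

print(authorization)  # Basic YWRtaW46aHVudGVyMg==
```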
6. Reading the dashboard & telemetry grid
Dashboard summary tiles
- Connected: workers reachable from the Server right now.
- Running: workers actively executing a campaign (could be different from Connected if some are idle).
- Disconnected: previously-seen workers that have lost SignalR. They auto-prune from the registry after 5 min of being gone.
Telemetry top bar
- RPS: requests per second, summed across all workers.
- In flight: requests currently sent and waiting for a response.
- p50 / p95 / p99 ms: response-time percentiles. The p99 readout turns yellow when it crosses 500ms.
- Errors: count + percentage of non-2xx and transport failures.
- Total requests / Bytes: cumulative since campaign start.
Per-client telemetry grid
Each cell is one simulated client on one worker. Color reflects current state:
Cells flicker between Transmitting (cyan/blue) and Connected/Done as requests cycle. A solid-green column with no flicker for a long time means clients connected but aren't completing requests — check that worker's network reachability to the target.
7. Common pitfalls
- Hitting the cached path. If your real users carry a session cookie or auth token, missing it means you're benching the cache layer instead of the backend. Add the right Cookie / Authorization headers in custom headers.
- Choosing the wrong browser profile. Some bot-protection middleware short-circuits on User-Agent. Your benchmark might be invisibly served a 200 OK from the edge while the real path never executes. Match your actual users' browser.
- Picking a virtual NIC as your Admin LAN IP. If + Add Worker defaults to a Hyper-V/vEthernet/VPN address, remote workers can't reach you. The dropdown lists wired Ethernet first — pick that.
- Forgetting the firewall rule. Inbound TCP 5050 must be allowed on the Admin's machine. Without it, remote workers fail to connect with no obvious error on the Admin side.
- Hammering an endpoint behind a CDN. The CDN absorbs the load and your origin sees only a fraction of it. Add a cache-busting query param (?cb=$RAND) or a Cache-Control: no-cache request header, or hit the origin directly via Host header override.
- Running the bench from your customer-facing IP. WAFs and rate limiters will throttle a single IP doing 10K RPS. Distribute across multiple workers on different egress IPs — or temporarily add your test IPs to your allowlist.
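A cache-busting parameter just needs to make every request's URL unique so the CDN can't serve it from cache. A sketch of the idea (the cb name mirrors the ?cb=$RAND example above; the helper itself is hypothetical):

```python
import secrets
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def cache_bust(url: str) -> str:
    """Append a random cb= param so each request misses the CDN cache."""
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    query.append(("cb", secrets.token_hex(8)))  # 16 random hex chars
    return urlunparse(parts._replace(query=urlencode(query)))

print(cache_bust("https://shop.example.com/products?category=shoes"))
# e.g. https://shop.example.com/products?category=shoes&cb=<16 hex chars>
```

Be aware this also inflates your origin's cache with junk keys, so prefer it on staging or behind an allowlist.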
8. Troubleshooting
Admin window doesn't appear
Logs are at %LocalAppData%\BenchStress\logs\admin-YYYYMMDD.log. Open in Notepad — the last lines will explain. The dashboard's Reconnect button safely retries the SignalR handshake.
Worker prints "No Admin sessions are registered to this license yet"
Means the cloud doesn't see any Admin running for your license. Make sure your Admin app is open and has hit at least one heartbeat (~60 seconds after launch). Then re-launch the worker.
Worker connects to localhost:5050 but Admin is on a different machine
Discovery hasn't run for some reason. Delete the worker.json next to BenchStress.Worker.exe and re-launch — that forces fresh discovery. Or pass --server http://<admin-lan-ip>:5050 on the command line.
I revoked a machine in the portal and now its app shows nothing
Wait ~60 seconds, then close and reopen the app on that machine. It detects the revoke, prompts for a fresh activation code, and reconnects.
Two rows for the same hostname in the dashboard
Each Worker install gets a fresh worker-id.txt. Reinstalling to a new folder creates a new id, so the old one shows as Disconnected for ~5 minutes before auto-pruning.
Logs flooded with HttpRequestException: Only one usage of each socket address
That's Windows ephemeral port exhaustion (WSAEADDRINUSE / error 10048), not a BenchStress bug. At high outbound rates the OS runs out of source ports faster than they recycle from TIME_WAIT. Default range is only ~16K ports with a ~240s recycle window.
Fix on the worker box (PowerShell as Administrator):
netsh int ipv4 set dynamicport tcp start=10000 num=55535
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f
Reboot after the registry tweak. That gives you ~55K ports recycling every 30s — about 1,800 fresh outbound TCP per second sustained on a single machine.
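The arithmetic behind that ~1,800/second figure:

```python
port_pool = 55_535  # dynamic ports after the netsh tweak (num=55535)
time_wait_s = 30    # TIME_WAIT recycle window after the registry tweak

# Sustainable new outbound connections per second from one machine.
sustained = port_pool / time_wait_s
print(round(sustained))  # 1851, i.e. roughly 1,800/second
```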
Better long-term: spread the load across more worker boxes. Each worker has its own port pool.
Can I include a port in the Target URL?
Yes — standard URI syntax works. Examples:
http://10.0.0.5:8080/health
https://api.staging.example.com:8443/v2/orders
http://localhost:3000/test
The Host header override field is separate — only set that if you need a non-matching Host: header for vhost / SNI testing.
Still stuck? Email [email protected] from your purchase email and include the relevant log file.