Experiments
Experiments are how you test beliefs against reality. Apex supports A/B tests that run client-side through the tracking snippet or server-side through the SDK, with deterministic visitor assignment and built-in statistical analysis.
Experiment Types
Apex supports three variant types, each suited to different kinds of tests:
- Text change — Swap text content on a page without changing the URL. The snippet finds the target element and replaces its content for visitors in the variant group.
- Redirect — Send variant visitors to a completely different URL. Useful for testing entirely different page designs or flows.
- Code — Execute custom JavaScript (client-side) or use the SDK for server-side feature flags. The most flexible option for complex changes.
Tip
Start with text changes — they're the fastest to set up and don't require any code deployment. Graduate to redirects and code variants as your testing program matures.
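To make the text-change mechanics concrete, here is a minimal sketch of what the snippet does for variant visitors: find the target element and swap its copy. The function name, the element-like parameter type, and the selector are all illustrative, not the real Apex snippet internals.

```ts
// Hypothetical sketch of a text-change variant: swap an element's copy in place.
// The minimal "document" interface lets the logic run outside a browser too.
interface MinimalDoc {
  querySelector(selector: string): { textContent: string } | null;
}

function applyTextVariant(doc: MinimalDoc, selector: string, text: string): boolean {
  const el = doc.querySelector(selector);
  if (!el) return false; // target not on the page; visitor keeps the control content
  el.textContent = text;
  return true;
}
```

Returning `false` when the selector misses mirrors the safe default: if the variant can't be applied, the visitor simply sees the control.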
Experiment Structure
Every experiment has these core fields:
| Field | Description |
|---|---|
| name | A descriptive name (e.g. "Pricing page social proof") |
| targetUrl | The page where the experiment runs |
| trafficSplit | Percentage of visitors included (e.g. 100 for all traffic) |
| variants | Always two: control (no change) and variant_b |
| status | One of: draft, running, paused, completed |
| goal | The conversion goal that defines success |
Traffic is split evenly between control and variant_b by default (50/50). You can adjust this, but even splits give you the fastest path to statistical significance.
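Putting the fields above together, here is an illustrative sketch of an experiment record as a TypeScript shape. This reflects the table, not the exact Apex API schema; field names beyond those listed are assumptions.

```ts
// Illustrative experiment shape based on the fields above — not the exact Apex API schema.
type ExperimentStatus = "draft" | "running" | "paused" | "completed";

interface Experiment {
  name: string;
  targetUrl: string;
  trafficSplit: number;                 // percentage of visitors included, 0-100
  variants: ["control", "variant_b"];   // always two variants
  status: ExperimentStatus;
  goal: string;                         // conversion goal that defines success
}

const pricingTest: Experiment = {
  name: "Pricing page social proof",
  targetUrl: "https://example.com/pricing", // hypothetical URL
  trafficSplit: 100,                        // include all traffic
  variants: ["control", "variant_b"],
  status: "draft",
  goal: "signup",                           // hypothetical goal name
};
```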
How Assignment Works
Apex uses MurmurHash3 for deterministic visitor bucketing. Here's what that means in practice:
- Each visitor gets a persistent anonymous ID (stored in a first-party cookie)
- The hash of visitorId + experimentId produces a number between 0 and 99
- That number determines whether the visitor sees control or the variant
- The same visitor always gets the same assignment — no flickering between variants
This approach is fast (no network round-trip needed), deterministic (same input always produces same output), and statistically uniform (even distribution across buckets).
Info
Assignment happens at the edge, in the snippet itself. There's no server call to determine which variant a visitor sees, so experiments add zero latency to page loads.
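The bucketing described above can be sketched in a few lines. This uses a 32-bit FNV-1a hash as a simple stand-in for the MurmurHash3 that Apex actually uses, so the buckets it produces won't match Apex's; the point is the shape of the logic: hash, take modulo 100, compare against the split.

```ts
// Deterministic visitor bucketing — a sketch of the idea, with FNV-1a
// standing in for the MurmurHash3 that Apex uses in production.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0; // force unsigned 32-bit
}

// Map (visitorId, experimentId) to a bucket in 0-99.
function bucket(visitorId: string, experimentId: string): number {
  return fnv1a(visitorId + experimentId) % 100;
}

// With the default 50/50 split: buckets 0-49 see control, 50-99 see variant_b.
function assign(visitorId: string, experimentId: string): "control" | "variant_b" {
  return bucket(visitorId, experimentId) < 50 ? "control" : "variant_b";
}
```

Because the hash depends only on the visitor ID and experiment ID, the same visitor always lands in the same bucket, and no network call is needed.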
Running an Experiment
Create the experiment
From the dashboard, click New Experiment. Give it a name, choose the target URL, and select the variant type.
Configure the variant
For text changes, specify the CSS selector and new text. For redirects, provide the variant URL. For code variants, write the JavaScript that should execute.
Set a goal
Choose an existing goal or create a new one. This is the metric Apex uses to determine a winner.
Link a belief (optional)
Connect the experiment to a belief to automatically update confidence when results come in.
Activate
Set the status to running. The snippet starts assigning visitors immediately.
Results and Statistical Confidence
As visitors flow through the experiment, Apex tracks:
- Visitor count per variant
- Conversion count and conversion rate per variant
- Relative lift (how much better or worse the variant performs vs control)
- Statistical confidence (how likely the observed difference is real, not noise)
Apex calculates confidence using a frequentist two-proportion z-test. Results are considered significant at 95% confidence by default. You'll see the confidence percentage climb as more data comes in.
Warning
Don't call experiments early. Statistical significance requires enough data: declaring a winner at 80% confidence means accepting roughly a 1-in-5 chance that the observed lift is just noise. Wait for 95%.
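The two-proportion z-test behind the confidence number can be sketched as follows. This is the textbook pooled formulation with a standard polynomial approximation of the normal CDF, not Apex's exact implementation.

```ts
// Two-proportion z-test: is the variant's conversion rate different from control's?
// Textbook pooled formulation — a sketch, not Apex's exact code.
function zStat(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pPool = (convA + convB) / (nA + nB);       // pooled conversion rate
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nB));
  return (pB - pA) / se;
}

// Standard normal CDF via the Abramowitz & Stegun 7.1.26 erf approximation.
function normCdf(z: number): number {
  const x = Math.abs(z) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * x);
  const poly =
    t * (0.254829592 +
      t * (-0.284496736 +
        t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  const erf = 1 - poly * Math.exp(-x * x);
  return z >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

// Two-sided confidence that the observed difference is real, in [0, 1].
function confidence(convA: number, nA: number, convB: number, nB: number): number {
  const z = zStat(convA, nA, convB, nB);
  return 2 * normCdf(Math.abs(z)) - 1;
}
```

With identical conversion rates the confidence is near zero; as the gap between variants grows relative to the standard error, it climbs toward 1, crossing 0.95 at the default significance threshold.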
Server-Side Experiments with the SDK
For experiments that require server-rendered changes (pricing logic, feature flags, API responses), use the Apex SDK:
```ts
import { Apex } from "@anthropic/apex-sdk";

const apex = new Apex({ projectKey: "YOUR_KEY" });

const variant = apex.getVariant("pricing-test", visitorId);
if (variant === "variant_b") {
  // Show the experimental pricing
}
```
The SDK uses the same MurmurHash bucketing as the snippet, so a visitor assigned to variant_b client-side will also get variant_b server-side.
Connecting to Predictions
Before running an experiment, log a prediction — what you think will happen. This builds your team's calibration score and makes experiment results more actionable.
After the experiment completes, Apex compares your prediction against actual results to calculate an accuracy score. Over time, this feedback loop makes your team better at anticipating outcomes.
Lifecycle
| Status | What's happening |
|---|---|
| draft | Experiment is configured but not live. No visitors are assigned. |
| running | Visitors are being assigned and tracked. Results update in real-time. |
| paused | Assignment stops. Existing data is preserved. Can be resumed. |
| completed | Experiment is finished. Results are final. Belief confidence is updated. |