A/B Testing
Test link variants and optimize click-through rates with built-in A/B testing
Overview
GrowQR A/B Testing lets you create multiple variants of a short link, each pointing to a different destination URL, and split traffic between them to determine which performs best. Instead of guessing which landing page, offer, or CTA will resonate, you let real visitor behavior decide — and GrowQR handles the traffic splitting, data collection, and statistical analysis for you.
A/B tests can run on any GrowQR link. You define the variants, set traffic allocation percentages, and optionally add targeting conditions so that different audiences see different variants. When the test reaches statistical significance, GrowQR can automatically route all traffic to the winner.
What Problem It Solves
Marketers frequently debate which landing page, headline, or offer will perform better. Without a testing framework built into the link layer, running an A/B test requires setting up redirect rules on a web server, configuring analytics goals, and manually monitoring results — a process that takes hours and is easy to get wrong.
GrowQR A/B Testing moves experimentation upstream to the link itself. Because the split happens at the redirect layer, you don't need to modify your website or configure server-side experiments. Any link you share — in an email, a social post, a QR code, or an ad — can silently route visitors to variant A or variant B without the visitor knowing they're part of a test.
How It Works
When a visitor clicks an A/B-tested link, the GrowQR redirect server:
- Evaluates targeting conditions (if any) — geographic location, device type, time of day, or referrer.
- Selects a variant based on the traffic allocation weights you configured. If targeting conditions route the visitor to a specific variant, that takes priority.
- Records the variant assignment so the same visitor sees the same variant on repeat visits (sticky assignment via cookie).
- Redirects the visitor to the variant's destination URL.
- Tracks downstream events — clicks, page views, and conversions are attributed to the variant for performance comparison.
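The variant-selection step above can be sketched in a few lines. This is an illustrative implementation, not GrowQR's published code: it buckets a stable visitor identifier by hash so repeat visits deterministically map to the same variant (the product itself tracks stickiness via a cookie), and the variant names and weights are hypothetical.

```python
import hashlib

# Hypothetical variant table: name -> traffic weight in percent.
VARIANTS = {"A": 50, "B": 30, "C": 20}

def assign_variant(visitor_id: str) -> str:
    """Map a visitor to a variant by hashing their ID into a bucket
    in [0, 100), so the same visitor always gets the same variant."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    cumulative = 0
    for name, weight in VARIANTS.items():
        cumulative += weight
        if bucket < cumulative:
            return name
    # Unreachable when weights sum to 100; defensive fallback.
    return next(iter(VARIANTS))

# Repeat visits land on the same variant (sticky assignment).
assert assign_variant("visitor-42") == assign_variant("visitor-42")
```

Over many visitors, the hash buckets approximate the configured 50/30/20 split while keeping each individual visitor pinned to one variant.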
Statistical significance is calculated using a two-tailed Z-test on conversion rates. GrowQR continuously monitors the test and flags when a variant reaches the confidence threshold you've set (default: 95%).
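A two-tailed Z-test on conversion rates, of the kind described above, can be computed as follows. This is a standard textbook formula shown for transparency, not GrowQR's internal code; the sample counts are made up.

```python
import math

def z_test_two_proportions(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-tailed Z-test comparing two conversion rates.
    conv_* = number of conversions, n_* = number of visitors."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: 12% vs 16% conversion on 1,000 visitors each.
z, p = z_test_two_proportions(120, 1000, 160, 1000)
# Significant at the default 95% confidence level when p < 0.05.
```

With these inputs the p-value comes in well under 0.05, so a dashboard using this test would flag the difference as significant at 95% confidence.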
Step-by-Step Usage
Creating Link Variants
- Navigate to Dashboard → Links and select the link you want to test (or create a new one).
- Click the Variants tab, then click Add Variant.
- Configure each variant:
| Field | Description |
|---|---|
| Variant Name | A descriptive label (e.g., "Control", "New Headline", "Green CTA") |
| Destination URL | The URL visitors assigned to this variant will be redirected to |
| Traffic Weight | Percentage of traffic allocated to this variant |
- The original link destination becomes the default variant (usually named "Control"). Add one or more additional variants.
- Ensure traffic weights sum to 100%.
Example configuration:
Link: growqr.io/spring-sale
├── Variant A (Control): example.com/sale — 50% traffic
├── Variant B (New Hero): example.com/sale-v2 — 30% traffic
└── Variant C (Video Hero): example.com/sale-v3 — 20% traffic
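Expressed as data, the configuration above looks like the following. The field names here are illustrative (GrowQR's API schema is not documented in this section); the one rule worth automating is the weight check from step 5.

```python
# Hypothetical representation of the example configuration above.
variants = [
    {"name": "Control",    "url": "https://example.com/sale",    "weight": 50},
    {"name": "New Hero",   "url": "https://example.com/sale-v2", "weight": 30},
    {"name": "Video Hero", "url": "https://example.com/sale-v3", "weight": 20},
]

# Traffic weights must sum to exactly 100%.
assert sum(v["weight"] for v in variants) == 100, "weights must sum to 100%"
```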
Setting Traffic Splitting Rules
Traffic is split randomly by default using the configured weights. You can optionally layer on targeting conditions to route specific segments to specific variants:
Geographic targeting — Show variant B only to visitors from the United States:
Variant B:
Condition: country = "US"
Fallback: Variant A
Device targeting — Show a mobile-optimized page to mobile visitors:
Variant C:
Condition: device = "mobile"
Fallback: Variant A
Time-based targeting — Test a different page during business hours:
Variant B:
Condition: time >= 09:00 AND time <= 17:00 (visitor's local timezone)
Fallback: Variant A
Referrer targeting — Show a custom variant to visitors from a specific source:
Variant B:
Condition: referrer contains "twitter.com"
Fallback: Variant A
Conditions can be combined using AND/OR logic. When a visitor matches multiple conditions, the first matching variant takes priority.
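The first-match priority rule described above can be sketched as an ordered list of (condition, variant) pairs. This is a hypothetical illustration of the evaluation order, not GrowQR's implementation; the rules mirror the geographic and device examples above.

```python
# rules: list of (condition_fn, variant_name), checked in order.
def pick_targeted_variant(visitor: dict, rules, default: str) -> str:
    """Return the variant of the first matching condition,
    or the default variant if no condition matches."""
    for condition, variant in rules:
        if condition(visitor):
            return variant
    return default

rules = [
    (lambda v: v.get("country") == "US",      "B"),  # geographic targeting
    (lambda v: v.get("device") == "mobile",   "C"),  # device targeting
]

# A US mobile visitor matches both rules, but the first match wins:
pick_targeted_variant({"country": "US", "device": "mobile"}, rules, "A")
# -> "B"
```

Visitors who match no condition fall through to the default variant, which then goes through the normal weighted random split.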
Monitoring Test Performance
Open the link's Variants tab to see the test dashboard:
- Visitors per variant — How many unique visitors each variant has received.
- Clicks per variant — Total click count per variant.
- Conversion rate per variant — If conversion goals are configured, the percentage of visitors who converted.
- Confidence level — The statistical confidence that the observed difference is not due to chance.
- Projected winner — The variant currently leading, with an estimate of when the test will reach significance.
A real-time chart shows cumulative conversion rates over time, with confidence intervals displayed as shaded bands.
Auto-Optimization
When auto-optimization is enabled, GrowQR takes action once a test reaches statistical significance:
- Winner detection — When a variant's conversion rate is significantly higher than all others at the configured confidence level, it's declared the winner.
- Traffic shift — All traffic is automatically redirected to the winning variant.
- Notification — You receive an email and in-app notification with the test results.
- Report generation — A summary report is created with full statistical details, archived under the link's test history.
To enable auto-optimization, toggle Auto-optimize when creating or editing a test and set the confidence threshold (90%, 95%, or 99%).
Statistical Significance
GrowQR uses a frequentist approach to determine significance:
- Minimum sample size — The test won't declare a winner until each variant has received at least 100 visitors (configurable).
- Confidence threshold — Default is 95%, meaning a difference is only declared significant when there is at most a 5% probability it would arise from random variation alone.
- Effect size — Smaller differences between variants require larger sample sizes to detect. The dashboard shows an estimated time to significance based on current traffic volume.
The confidence calculation is displayed transparently so you can make informed decisions. If you prefer to end tests manually, disable auto-optimization and use the confidence metric as a guide.
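The relationship between effect size and required sample size can be estimated with the standard two-proportion formula. This is a rough planning sketch (95% confidence, 80% power assumed), not the dashboard's own estimator, which may differ.

```python
import math

def sample_size_per_variant(p_base: float, relative_lift: float,
                            z_alpha: float = 1.96,  # 95% confidence
                            z_beta: float = 0.84) -> int:  # 80% power
    """Estimate visitors needed per variant to detect a given
    relative lift over a baseline conversion rate."""
    p1 = p_base
    p2 = p_base * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a 10% relative lift on a 5% baseline needs far more
# visitors per variant than detecting a 50% lift:
sample_size_per_variant(0.05, 0.10)
sample_size_per_variant(0.05, 0.50)
```

This is why the dashboard's estimated time to significance stretches out dramatically when the variants perform similarly.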
Best Practices
- Test one variable at a time. If variant B has a different headline and a different CTA color, you won't know which change caused the performance difference. Isolate variables for clear learnings.
- Run tests for at least 7 days to account for day-of-week effects, even if statistical significance is reached earlier. Traffic patterns on Monday differ from Saturday.
- Don't peek and stop early. Ending a test the moment one variant looks better leads to false positives. Trust the statistical significance calculation.
- Use a 50/50 split for two-variant tests. Equal traffic allocation reaches significance fastest. Use unequal splits only when you need to limit exposure to a risky variant.
- Set conversion goals before starting. A/B tests measured on clicks alone are less meaningful than tests measured on downstream conversions (sign-ups, purchases).
- Document your hypothesis. Before launching a test, write down what you expect to happen and why. This discipline prevents random testing and builds organizational knowledge.
- Archive completed tests. Keep a record of past tests so your team doesn't re-run experiments that have already been answered.
Example Workflows
Landing Page Headline Test
- Create a short link growqr.io/demo pointing to your demo request page.
- Add Variant B pointing to a version with a new headline.
- Split traffic 50/50 and set the conversion goal to "Demo Requested."
- Share the link in your email campaigns and social posts.
- After two weeks and 2,000 visitors per variant, Variant B shows a 12% higher conversion rate at 97% confidence.
- Auto-optimization kicks in and routes all traffic to Variant B.
- Update your canonical page to use the winning headline.
Geo-Targeted Offer Test
- Create a short link for a seasonal promotion.
- Add Variant A (standard offer) and Variant B (region-specific pricing).
- Set a geographic condition: visitors from Europe see Variant B; everyone else sees Variant A.
- Run the test for 30 days and compare conversion rates.
- If Variant B outperforms for European visitors, make the regional pricing permanent for that audience.
Multi-Variant Creative Test
- Create a short link for a product page used in paid ads.
- Add four variants, each pointing to a different creative treatment: text-heavy, image-heavy, video, and testimonial-focused.
- Allocate 25% traffic to each variant.
- Set the conversion goal to "Add to Cart."
- After reaching significance, the testimonial variant wins with a 22% higher add-to-cart rate.
- Consolidate ad spend on the testimonial variant and apply learnings to future campaigns.