The challenge

Two years of A/B tests with contradictory results. Optimizely and Adobe Analytics rarely agreed on conversion rates. The product team had stopped trusting test results and was making decisions based on intuition instead of evidence. The testing program was at risk of being shut down.

The first step was a diagnostic — understanding exactly where and why Optimizely and Adobe Analytics diverged. We ran a systematic comparison of conversion rate calculations across both platforms, tracing discrepancies back to their source.
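The comparison step can be sketched as a simple reconciliation pass over per-experiment counts exported from each platform. This is an illustrative sketch, not the actual audit tooling; the function names, data shapes, and the 2-point tolerance are assumptions.

```python
# Hypothetical reconciliation sketch. Each input maps
# experiment_id -> (visitors, conversions); names and tolerance are illustrative.

def conversion_rate(visitors: int, conversions: int) -> float:
    """Conversions per visitor; 0.0 when there is no traffic."""
    return conversions / visitors if visitors else 0.0

def find_divergent_experiments(optimizely: dict, adobe: dict,
                               tolerance: float = 0.02) -> list:
    """Return (experiment_id, rate_delta) pairs whose conversion rates
    differ between the two platforms by more than `tolerance`."""
    divergent = []
    for exp_id in sorted(optimizely.keys() & adobe.keys()):
        rate_a = conversion_rate(*optimizely[exp_id])
        rate_b = conversion_rate(*adobe[exp_id])
        if abs(rate_a - rate_b) > tolerance:
            divergent.append((exp_id, round(rate_a - rate_b, 4)))
    return divergent
```

A pass like this turns "the platforms rarely agree" into a concrete list of experiments to trace back to their source.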

The root causes

Three issues accounted for the majority of the divergence. First, visitor identification was inconsistent — Optimizely was using its own cookie while Adobe Analytics used a different identifier, and the two were only loosely correlated. Second, conversion event definitions differed — Optimizely's conversion goal included actions that Adobe Analytics did not count as conversions. Third, the experiment activation timing meant some visitors were being counted as experiment participants before they had actually been exposed to the variant.
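The second root cause can be made concrete with a small sketch: the same session produces different conversion counts when the two platforms count different event sets. The event names here are hypothetical, not the team's actual taxonomy.

```python
# Illustrative sketch of the conversion-definition mismatch.
# Event names are hypothetical.

# Events Optimizely's conversion goal counted...
OPTIMIZELY_CONVERSION_EVENTS = {"purchase", "add_to_cart", "signup"}
# ...versus the stricter set Adobe Analytics counted.
ADOBE_CONVERSION_EVENTS = {"purchase", "signup"}

def count_conversions(events: list, definition: set) -> int:
    """Count how many events in a session match a platform's definition."""
    return sum(1 for event in events if event in definition)

session_events = ["page_view", "add_to_cart", "purchase"]
# The same session yields 2 conversions under Optimizely's definition
# (add_to_cart + purchase) but only 1 under Adobe's (purchase).
```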

The rebuild

We implemented a shared identity key — the Adobe Analytics visitor ID — as the primary identifier in Optimizely. We redefined conversion goals to match precisely across both platforms. We fixed activation timing by moving experiment assignment to a server-side decision that fired before page render.
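Server-side assignment keyed on the shared visitor ID might look like the sketch below: hashing the ID together with an experiment key gives a stable, deterministic bucket, so the decision can fire before page render and both platforms attribute it to the same identity. The function name and the two-variant split are assumptions, not the actual implementation.

```python
# Hypothetical sketch of deterministic server-side variant assignment.
# Keying the hash on the shared visitor ID keeps assignment stable across
# requests and consistent between platforms.
import hashlib

def assign_variant(visitor_id: str, experiment_key: str,
                   variants: tuple = ("control", "treatment")) -> str:
    """Deterministically bucket a visitor into a variant before render."""
    digest = hashlib.sha256(f"{experiment_key}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

Because the hash depends only on the visitor ID and experiment key, the same visitor always lands in the same variant, with no client-side flicker or pre-exposure counting.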

We also built a pre-launch validation checklist that every test had to pass before going live — ensuring that metric definitions, sample size calculations, and activation logic were verified upfront.
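The checklist can be expressed as a small validation gate. This is a minimal sketch assuming three boolean checks mirroring the items above; the field names are illustrative, not the team's actual checklist schema.

```python
# Illustrative pre-launch checklist gate; field names are assumptions.
from dataclasses import dataclass

@dataclass
class TestPlan:
    metric_definition_matched: bool   # same conversion events in both platforms
    sample_size_calculated: bool      # minimum detectable effect documented
    activation_server_side: bool      # assignment fires before page render

def failed_checks(plan: TestPlan) -> list:
    """Return names of failed checks; an empty list means cleared to launch."""
    checks = {
        "metric definitions": plan.metric_definition_matched,
        "sample size": plan.sample_size_calculated,
        "activation timing": plan.activation_server_side,
    }
    return [name for name, passed in checks.items() if not passed]
```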

Outcome

Conversion metrics from the two platforms agreed within the accepted margin of statistical error on every live test. Test velocity tripled as stakeholders regained confidence in results. In the six months following the rebuild, 14 tests produced winning variants that were shipped to production.
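"Agreement within the margin of statistical error" can be made precise: two platform rates agree when the observed difference falls inside the standard two-proportion confidence interval. The sketch below assumes a 95% interval (z = 1.96); the function name and threshold choice are illustrative, not the actual acceptance criterion.

```python
# Hypothetical agreement check using the standard two-proportion z-interval.
# Rates "agree" when their difference is within z standard errors of zero.
import math

def rates_agree(n1: int, c1: int, n2: int, c2: int, z: float = 1.96) -> bool:
    """True if the conversion rates (c/n) from two platforms agree at level z."""
    p1, p2 = c1 / n1, c2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p1 - p2) <= z * se
```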