// CONVERSION DROP      90%
// INVESTIGATION        6 WK
// ACTUAL FIX           4 HR
// REVENUE AT RISK      £2.4M / QTR

The setup

Late 2025, a DTC footwear brand completed a domain migration from a regional architecture to a single eu.toms.com presence. The move unified four local European storefronts, simplified hosting, and tidied up a tag stack that had grown fifteen vendors deep across five years of tactical additions. On paper it was straightforward. In practice, the migration window overlapped with peak Q4 trading, and the engineering team was already shipping to a different deadline.

Performify was engaged two weeks after go-live because conversion reporting had fallen off a cliff. The brief was narrow: validate the post-migration tracking, identify what was broken, and stabilise reporting before the January board pack. The assumption going in, shared by both the brand team and their agency, was that something in the Google Ads tag configuration had been missed during the cutover.

That assumption was wrong. The campaigns were fine. The migration was fine. What was actually broken was a single rogue script that nobody had noticed firing on every pageview, quietly telling the entire vendor stack to ignore user consent. By the time we found it, it had been active in production for nineteen days.

What broke

The symptom was severe and specific. GA4 conversion volume dropped roughly ninety percent overnight on the day of migration. Reported conversions in Google Ads followed the same curve with a two-day lag. Meta and TikTok told similar stories.

What made the picture strange was everything that stayed stable. Brand search traffic held at its pre-migration baseline. Direct revenue in the data warehouse matched the historical trend, both week over week and in weekday pattern. Order confirmation emails were firing normally. The only metric that had collapsed was the one in the advertising platforms.

// WATCH OUT

When warehouse revenue looks healthy but reported conversions collapse, the problem is almost always in the tag stack, not the campaigns. Don't start optimising bids until tracking is confirmed.
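The check above can be operationalised as a simple discrepancy ratio between warehouse orders and platform-reported conversions. This is a minimal sketch; the function name, inputs, and thresholds are illustrative, not from the engagement:

```javascript
// Hedged sketch: flag a likely tracking break when warehouse numbers hold
// but platform-reported conversions collapse. Thresholds are illustrative.
function diagnoseDiscrepancy({ warehouseOrders, platformConversions, baselineRatio }) {
  // baselineRatio: historical platformConversions / warehouseOrders (e.g. ~0.9)
  const currentRatio = platformConversions / warehouseOrders;
  const drop = 1 - currentRatio / baselineRatio;
  if (drop > 0.5) {
    // Platforms see far fewer conversions than the warehouse records orders:
    // suspect the tag/consent pipeline before touching bids.
    return "suspect-tracking";
  }
  return drop < 0.1 ? "healthy" : "investigate";
}

// In this engagement's shape: ~90% of warehouse orders missing from platforms.
diagnoseDiscrepancy({ warehouseOrders: 1000, platformConversions: 90, baselineRatio: 0.9 });
// → "suspect-tracking"
```

Wired into a daily job against the warehouse, a check like this would have surfaced the break on day one rather than day fourteen.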

The investigation trail

We started where most teams start: the campaigns themselves. Over the first forty-eight hours we catalogued every change made during the migration window, compared the live tag container against the pre-migration snapshot, ran consent-mode diagnostics against a batch of cached session recordings, and pulled the Billy Grace attribution logs for the first seventy-two hours post-launch.

Everything looked fine. The dashboards said healthy. That was the first tell.

We ranked the hypotheses from most to least likely based on prior engagements with migrations of this shape:

  1. Campaign changes during the migration window that quietly shifted audience eligibility or bidding signal.
  2. GTM container version mismatch between preview and production.
  3. Consent mode mis-configuration causing vendor tags to default to deny.
  4. Billy Grace attribution rules drift after the domain change.
  5. Tag firing failures in production not surfacing in preview.

Hypotheses one through four ruled themselves out inside a week. Nothing had changed in the campaigns. The container was identical. The consent mode configuration was correct as documented. Billy Grace was still ingesting events, just ingesting ninety percent fewer of them.

The Billy Grace dashboard said everything was healthy. The browser said otherwise. That gap was the whole story.

The breakthrough came from doing the thing we should have done on day one: opening the network tab in an incognito session, granting full consent, and watching what actually fired. The OneTrust consent banner loaded. The user accepted. And then every downstream vendor tag, every pixel, every Billy Grace event, returned a default-deny in its consent signature. In production. With explicit consent granted.
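That manual inspection can also be scripted. A minimal sketch of reading the effective consent state from a gtag-style dataLayer; this is a deliberate simplification (real consent mode pushes Arguments objects and has region scoping and default/update precedence rules), and the helper name is ours:

```javascript
// Hedged sketch: fold a gtag-style dataLayer into an effective consent state.
// Simplified to last-write-wins; real consent mode has stricter precedence.
function effectiveConsent(dataLayer) {
  const state = {};
  for (const entry of dataLayer) {
    if (entry[0] === "consent") {
      // "default" sets the baseline, "update" reflects a CMP choice
      Object.assign(state, entry[2]);
    }
  }
  return state;
}

// The shape we saw in production: the user granted consent, then a second
// (staging) CMP script re-applied a default-deny on top of it.
const dataLayer = [
  ["consent", "default", { ad_storage: "denied", analytics_storage: "denied" }],
  ["consent", "update",  { ad_storage: "granted", analytics_storage: "granted" }],
  ["consent", "default", { ad_storage: "denied", analytics_storage: "denied" }], // rogue script
];
```

Run against the page's live dataLayer in the console, anything other than granted-after-acceptance is the smoking gun.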

Tracing the consent signal back through the tag manager surfaced a second OneTrust script, loaded alongside the production one, pointed at a staging domain ID. It had been added during a QA run three months earlier, was supposed to have been removed before go-live, and had survived because nobody had audited the All Pages trigger list before migration.

// JAVASCRIPT · gtm-staging-misfire.js
// Rogue OneTrust tag firing in production
// Triggered on All Pages, but with staging config
const oneTrustConfig = {
  domainId: "TEST-a1b2c3d4-0000", // should be the production domain ID
  env: "staging",                 // should be "production"
  consentMode: "default-deny"     // blocking all vendor tags
};
IMAGE PLACEHOLDER · 1280 × 720
FIG 01 · The GTM container view showing the rogue tag firing on every pageview.

The fix

Removing the rogue tag took four hours end to end, most of it spent double-verifying the production OneTrust configuration before pulling the staging one. We disabled the offending tag first rather than deleting it, watched the consent signals recover in real time across incognito sessions on desktop and mobile, then pushed a container version bump once behaviour was stable for thirty minutes.

Conversion volume began recovering the same day as cached sessions rolled through. By end of day two, GA4 was back within five percent of pre-migration baseline. Google Ads lagged by a further two days as its conversion windows refilled. By day four, every platform was back on trend.

// NOTE

If you're migrating domains and using OneTrust or any other CMP, audit every tag's environment flag before go-live. It's a five-minute check that can save six weeks.
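That five-minute check can be automated against a GTM container export. A sketch, assuming the standard export shape (`containerVersion.tag[].parameter[]`); the staging-marker patterns are illustrative and should match whatever naming convention your QA environment uses:

```javascript
// Hedged sketch: scan a GTM container export for tags whose parameter
// values look like staging config. Marker patterns are illustrative.
const STAGING_MARKERS = [/^TEST-/, /staging/i, /-test$/i];

function findStagingTags(containerExport) {
  const tags = containerExport.containerVersion?.tag ?? [];
  return tags
    .filter((tag) =>
      (tag.parameter ?? []).some(
        (p) =>
          typeof p.value === "string" &&
          STAGING_MARKERS.some((rx) => rx.test(p.value))
      )
    )
    .map((tag) => tag.name);
}
```

Run it over the container export before every go-live; any hit is a tag to review by hand. In this engagement it would have flagged the rogue OneTrust tag by its `TEST-` domain ID three months before migration.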

What it cost, what it taught

The commercial impact was measurable and ugly. Nineteen days of under-reported conversions meant three weeks of bidding decisions made on bad data. Google's smart-bidding algorithms had quietly pulled spend away from the channels that looked like they'd stopped converting, which was all of them. The revenue at risk came in at roughly £2.4m for the quarter, most of which was preserved because the brand's marketing director trusted the warehouse numbers over the platform numbers and held the line on spend during the investigation window.

The systemic lesson is one we already knew and will keep repeating: the platforms cannot be trusted as the source of truth for whether tracking is healthy. They report what they receive. A broken pipeline looks identical to a broken campaign if you only read the dashboards.

This is why every engagement at Performify now begins with a tracking audit, not a strategy deck. Strategy without measurement is theatre. Measurement without verification is fiction. The five minutes it takes to check a tag's environment flag is the cheapest insurance in performance marketing, and the most frequently skipped.