Data Differences


Users familiar with our previous performance reporting may find discrepancies between classic reporting and the Jacquard Reporting Platform for metrics such as open rates, uplifts, and incrementals. A difference in numbers does not necessarily indicate an error in either the new system or the old one. Statistics and performance metrics can be calculated using various methodologies, each with distinct strengths.

Overall, the definitions and methods in the Jacquard Reporting Platform have been updated to provide more robust, stable, and transparent reporting.

Batch aggregation vs. time-based rollups

In the legacy system, the “batch” was the base reporting unit. For example, if a campaign ran one batch per day, daily incrementals were calculated based on that single batch. This often resulted in “noisy” or volatile data for low-volume campaigns. The new system rolls batches up into standardised reporting windows (hour, day, week, or month) depending on volume, smoothing out volatility and resulting in more stable and statistically robust numbers.
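As a rough illustration, the sketch below (in Python, using invented batch data and a simple daily windowing rule, not the platform's actual logic) shows how summing counts within a window before computing a rate smooths out per-batch volatility:

```python
from collections import defaultdict
from datetime import datetime

# Per-batch counts: (batch timestamp, sends, opens) -- illustrative data.
batches = [
    (datetime(2024, 5, 1, 9), 1_200, 240),
    (datetime(2024, 5, 1, 17), 800, 200),
    (datetime(2024, 5, 2, 9), 1_000, 180),
]

# Legacy view: one open rate per batch, which is noisy at low volumes.
per_batch_rates = [opens / sends for _, sends, opens in batches]

# New view: sum counts within each day, then compute one rate per window.
daily = defaultdict(lambda: [0, 0])  # date -> [sends, opens]
for ts, sends, opens in batches:
    daily[ts.date()][0] += sends
    daily[ts.date()][1] += opens

per_day_rates = {day: opens / sends for day, (sends, opens) in daily.items()}

print(per_batch_rates)  # [0.2, 0.25, 0.18]
print(per_day_rates)    # {2024-05-01: 0.22, 2024-05-02: 0.18}
```

Note how the two batches on 1 May, with rates of 20% and 25% individually, roll up into a single, more stable daily rate of 22%.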

Champion definition

Previously, the champion (winner) of a test and send campaign was defined as the variant with the highest rate at the specific moment the test window ended. To align with other test methodologies, the champion is now defined as the variant with the highest rate overall, utilising all available data, even after the test has ended.

Consequently, the reported champion may differ from the variant sent to the final audience. On average, this methodology tends to report higher uplift numbers, because the champion is by definition the variant with the highest uplift across all available data.
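The sketch below contrasts the two definitions using hypothetical variant names and counts (it is not the platform's implementation). A variant can lead at the close of the test window yet be overtaken once post-window data, such as late opens, is included:

```python
# Cumulative (sends, opens) per variant at the close of the test window...
at_test_end = {"A": (5_000, 1_050), "B": (5_000, 1_000)}
# ...and once all data is in (opens keep accruing after the test ends).
final = {"A": (5_000, 1_150), "B": (5_000, 1_250)}

def champion(counts):
    """Return the variant with the highest open rate."""
    return max(counts, key=lambda v: counts[v][1] / counts[v][0])

print(champion(at_test_end))  # A -- the legacy champion
print(champion(final))        # B -- the new champion, using all data
```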

Data filtering and transparency

The legacy Dynamic Optimisation system used anomaly filtering to automatically exclude outliers, and that same filtered data was used for reporting. This was problematic: because the filtering was automatic and silent, data issues often went undetected, and raw counts (e.g. opens, sends) did not always match those shown in your customer engagement platform (CEP).

The philosophy of the new system is complete data transparency: all data is included in reporting. This means data issues should be more obvious, and reported numbers should be more consistent with those in external systems. The trade-off is that if the source data contains errors, these will now be included in the final calculations. In some cases, it may be necessary for you to manually remove problematic campaigns from reports.
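As a rough illustration of the trade-off, the sketch below contrasts a silent outlier filter with fully transparent reporting plus an explicit manual exclusion; the campaign data and the legacy filter rule are invented for the example:

```python
campaigns = {"spring": 0.21, "summer": 0.19, "data-glitch": 0.92}

# Legacy-style: outliers dropped silently before reporting (invented rule).
filtered = {k: v for k, v in campaigns.items() if v < 0.5}
legacy_avg = sum(filtered.values()) / len(filtered)

# New-style: everything is reported; exclusion is an explicit, visible step.
new_avg = sum(campaigns.values()) / len(campaigns)
kept = {k: v for k, v in campaigns.items() if k != "data-glitch"}
manual_avg = sum(kept.values()) / len(kept)

print(round(legacy_avg, 2), round(new_avg, 2), round(manual_avg, 2))
# 0.2 0.44 0.2 -- same end result, but the exclusion is now your decision
```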

Calculation methodologies

In analytics, there is often more than one valid way to calculate a specific number. Discrepancies may arise because the new system employs a different (but equally valid) mathematical approach than the legacy system.

For example, account-level average uplift can be calculated by averaging the uplift of every campaign equally, or by averaging the uplift within each project and then averaging those project results. Both are correct, but the final number will differ. Similarly, a standard “average” can be calculated using the mean, the median, or a weighted average. Variations of this kind are not errors but differences in methodology.
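A small worked example, with invented uplift figures, shows how the two equally valid averaging methods produce different account-level numbers:

```python
# Uplift per campaign, grouped by project -- invented figures.
projects = {
    "welcome":  [0.10, 0.12, 0.14],  # three campaigns
    "win-back": [0.30],              # one campaign
}

# Method 1: average every campaign equally.
all_campaigns = [u for uplifts in projects.values() for u in uplifts]
campaign_avg = sum(all_campaigns) / len(all_campaigns)

# Method 2: average each project, then average the project results.
project_means = [sum(u) / len(u) for u in projects.values()]
project_avg = sum(project_means) / len(project_means)

print(round(campaign_avg, 3))  # 0.165 -- each campaign weighted equally
print(round(project_avg, 3))   # 0.21  -- each project weighted equally
```

The gap arises because the second method gives the single-campaign project the same weight as the three-campaign project.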