Reporting periods
The Jacquard Reporting Platform aggregates your campaign data into standardised time slices (ranging from hourly to monthly) called reporting periods. The system dynamically adjusts periodicity to ensure every reporting period contains at least 500 engagements.
This approach enables long-term uplift calculation. By averaging uplift over distinct periods rather than using one “big bucket,” the platform prevents single high-volume days from skewing the final numbers.
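To make the mechanics concrete, here is a minimal sketch of the idea in Python. The 500-engagement minimum is taken from above; the greedy hour-merging and the simple mean of per-period uplifts are assumptions for illustration, not the platform's actual implementation.

```python
# Illustrative sketch only: the 500-engagement minimum comes from the docs,
# but the greedy bucketing and mean-of-periods uplift below are assumptions.
MIN_ENGAGEMENTS = 500

def build_periods(hourly_counts):
    """Greedily merge hourly engagement counts until each reporting
    period contains at least MIN_ENGAGEMENTS engagements."""
    periods, bucket = [], []
    for count in hourly_counts:
        bucket.append(count)
        if sum(bucket) >= MIN_ENGAGEMENTS:
            periods.append(bucket)
            bucket = []
    if bucket and periods:
        periods[-1].extend(bucket)  # fold a short remainder into the last period
    return periods

def long_term_uplift(per_period_uplifts):
    """Average uplift across periods rather than pooling one 'big bucket',
    so a single high-volume day cannot dominate the final number."""
    return sum(per_period_uplifts) / len(per_period_uplifts)

# A quiet campaign needs many hours per period; a busy one needs few.
print(build_periods([120, 90, 310, 700, 50]))    # [[120, 90, 310], [700, 50]]
print(long_term_uplift([0.04, 0.06, 0.05]))      # ~0.05, each period weighted equally
```

Note that in this scheme a period with 10,000 engagements counts no more toward the final average than one with 500, which is exactly what keeps a single high-volume day from skewing the result.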
Note for legacy users
Reporting periods replace the legacy “batch-based” reporting, which often resulted in volatile or “noisy” data for low-volume campaigns. This new model smooths out volatility and provides a more stable, reliable view of performance.
Learning Mode
After the initial launch of your Jacquard experiment, the algorithm explores all variants equally to gather data. This is called Learning Mode. During this period, results are statistically underpowered and directional only. They should not be used for making decisions or evaluating outcomes.
You can see the Learning Mode status in the Optimisation results table on the campaign report. A Boolean indicator shows which metrics are still in Learning Mode for a given experiment. Note that Learning Mode status is calculated independently for each metric. As such, it's entirely possible that a campaign may still be in Learning Mode for clicks but not for opens.
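Conceptually, the indicator is a simple map from metric to Boolean. The shape below is purely illustrative; it is not the platform's actual API or export format.

```python
# Hypothetical shape of the per-metric Learning Mode indicator; the field
# names are illustrative, not Jacquard's actual API or export format.
learning_mode_status = {
    "opens": False,   # enough open data collected; results are reliable
    "clicks": True,   # still gathering click data; treat as directional only
}
```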

A campaign exits Learning Mode when either of the following conditions is met (a sketch of this logic follows the list):
Minimum engagement threshold is reached - On average, all testing variants have received approximately 1,000 engagements each. Note: this refers to success-metric events, not the number of messages sent.
A champion variant is found - A specific variant is declared a champion when it satisfies both of the following conditions:
It consistently maintains the highest engagement rate among all active variants.
It has been sent to the majority of the audience for two or more consecutive reporting periods.
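As a rough illustration, the check might look like the following sketch. The thresholds (approximately 1,000 engagements per variant, two consecutive periods) come from the list above; the data structures and function names are assumptions.

```python
# Illustrative sketch: the thresholds (1,000 engagements, 2 consecutive
# periods) come from the docs; everything else here is an assumption.
ENGAGEMENT_TARGET = 1000
CHAMPION_STREAK = 2

def exits_learning_mode(variant_engagements, recent_periods):
    """variant_engagements: {variant_id: engagement_count}.
    recent_periods: most-recent-first list of (top_rate_variant,
    majority_audience_variant) tuples, one per reporting period."""
    # Condition 1: on average, each variant has ~1,000 success-metric events.
    avg = sum(variant_engagements.values()) / len(variant_engagements)
    if avg >= ENGAGEMENT_TARGET:
        return True
    # Condition 2: one variant both leads on engagement rate and was sent
    # to the majority of the audience for 2+ consecutive periods.
    streak = recent_periods[:CHAMPION_STREAK]
    if len(streak) == CHAMPION_STREAK:
        leaders = {leader for leader, _ in streak}
        majorities = {majority for _, majority in streak}
        if len(leaders) == 1 and leaders == majorities:
            return True
    return False
```

In practice this evaluation happens per metric, which is why clicks and opens can exit Learning Mode at different times.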
Once a campaign exits Learning Mode, the optimisation shifts toward maximising performance and reported metrics become more reliable. Until then, expect the top-performing variant to change frequently.
Optimisation metric vs. tracked metrics
The optimisation metric is the single key metric the algorithm is designed to maximise. It acts as the “north star” for the optimisation engine, steering more of your audience toward the best-performing variants. Choosing this metric often requires balancing business value against data volume.
Ideally, you would optimise for the metric closest to revenue (e.g. clicks or conversions). However, these downstream events occur less frequently. With a small audience, there may never be enough click data to exit Learning Mode.
In these cases, you must make a strategic compromise and select a high-volume proxy metric instead: open rate. While open rate is not the ultimate business goal, it provides enough data volume for the algorithm to function properly. Optimising for opens is often a necessary precursor to getting clicks.
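A quick back-of-the-envelope check makes the trade-off concrete. All numbers below are hypothetical, and assume a single broadcast rather than an ongoing campaign.

```python
# Back-of-the-envelope check (hypothetical numbers): will a metric ever
# generate the ~1,000 events per variant needed to exit Learning Mode?
audience, variants = 50_000, 5
sends_per_variant = audience / variants         # 10,000 messages per variant

clicks_per_variant = sends_per_variant * 0.02   # 2% click rate  -> 200 clicks
opens_per_variant = sends_per_variant * 0.25    # 25% open rate  -> 2,500 opens

print(clicks_per_variant >= 1_000)  # False: clicks alone cannot exit Learning Mode
print(opens_per_variant >= 1_000)   # True: opens provide ample data volume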
Tracked metrics are the other performance indicators you monitor alongside the optimisation metric. As the engine is not optimising for these metrics, the data may be statistically underpowered and volatile.
The engine is blind to tracked metrics. If a variant has high clicks but low opens, an engine optimising for opens will reduce that variant's share of the audience or drop it altogether, because it only sees the lower open rate. Don’t read too much into the performance of non-optimisation metrics.