Dynamic Optimisation - Understanding Jacquard Uplift Calculations

What a time-adjusted rate is, and more, answered here


Jacquard's AI is constantly working in the background to optimise dynamic campaigns using the most innovative technology available today. Therefore, part of understanding how Jacquard is performing is understanding how Jacquard looks at data and calculates success.

Uplift defined

Jacquard displays uplift based on engagement metrics, as those are the metrics directly attributable to content. To properly assess the impact of content on engagement metrics over time, Jacquard considers events like opens and clicks in the context of both collective campaign performance and the performance of individual batches.

It's important to understand that, generally speaking, humans consider the performance of a campaign in totality. Jacquard displays total performance because humans often request that view of the data.

However, when Jacquard is making its decisions from batch to batch (you can think of a batch as a window of time), it considers data in the context of that individual period's performance. This is because Jacquard must make real-time decisions on whether to add variants, drop variants, or increase or decrease a variant's share of the audience.

Individual performance for a period is essential to good decision-making because volumes and audience engagement can vary wildly batch-to-batch. Jacquard ultimately devises time-adjusted rates to take this batch-to-batch variation in audience size and behaviour into account.

Further, Jacquard's incremental uplift calculations always take into account the opportunity cost of conducting optimisation, by subtracting the events the human control hypothetically would have generated if the whole audience had been shown the human control variant.

A practical example

The best way to understand how Jacquard makes decisions and calculates its time-adjusted rates is with an example. Consider the following data:

Day 1

| Variant              | Source   | Opens | Sends | Open rate | Uplift |
| -------------------- | -------- | ----- | ----- | --------- | ------ |
| Variant 1            | Jacquard | 400   | 1000  | 40.0%     | 0%     |
| Variant 2 - champion | Jacquard | 450   | 1000  | 45.0%     | 13%    |
| Variant 3            | Jacquard | 300   | 1000  | 30.0%     | -25%   |
| Variant 4            | Human    | 400   | 1000  | 40.0%     | -      |
| Daily Total          |          | 1550  | 4000  |           |        |

For this batch, 1550 opens were achieved. If the human line had been sent to the full 4000 audience, its 40% open rate would have achieved 1600 opens. Therefore, -50 incremental opens were generated this day. In this case, there is an opportunity cost to testing: 50 more opens would have been achieved if the human control had been sent to the whole audience.
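
As a rough illustration only, here is a minimal Python sketch of that per-batch calculation using the Day 1 figures above (the function and variable names are ours for illustration, not anything from Jacquard's internals):

    def incremental_opens(total_opens, total_sends, control_opens, control_sends):
        # Opens actually achieved across all variants, minus what the human
        # control would have achieved had it been sent to the whole batch.
        control_rate = control_opens / control_sends
        hypothetical_control_opens = control_rate * total_sends
        return total_opens - hypothetical_control_opens

    # Day 1: 1550 opens from 4000 sends in total; human control: 400 opens from 1000 sends
    incremental_opens(1550, 4000, 400, 1000)  # -50.0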

Day 2

| Variant              | Source   | Opens | Sends | Open rate | Uplift |
| -------------------- | -------- | ----- | ----- | --------- | ------ |
| Variant 1            | Jacquard | 160   | 800   | 20.0%     | 0%     |
| Variant 2 - champion | Jacquard | 1300  | 6000  | 21.7%     | 8%     |
| Variant 3            | Jacquard | 70    | 400   | 17.5%     | -13%   |
| Variant 4            | Human    | 160   | 800   | 20.0%     | -      |
| Daily Total          |          | 1690  | 8000  |           |        |

For this batch, the previous day's uplifts have been analysed and the proportion of audience being sent each line adjusted accordingly. 1690 opens were achieved. If the human line had been sent to the full 8000 audience, with the 20% open rate it achieved today, it would have generated 1600 opens. Therefore, 90 incremental opens were generated this day.

Day 3

| Variant              | Source   | Opens | Sends | Open rate | Uplift |
| -------------------- | -------- | ----- | ----- | --------- | ------ |
| Variant 1            | Jacquard | 0     | 0     | -         | -      |
| Variant 2 - champion | Jacquard | 1000  | 1800  | 55.6%     | 11.1%  |
| Variant 3            | Jacquard | 0     | 0     | -         | -      |
| Variant 4            | Human    | 100   | 200   | 50.0%     | -      |
| Daily Total          |          | 1100  | 2000  |           |        |

For this batch, all the previous uplifts have been analysed and the proportion of audience being sent each line adjusted accordingly. 1100 opens were achieved. If the human line had been sent to the full 2000 audience, with the 50% open rate it achieved today, it would have generated 1000 opens. Therefore, 100 incremental opens were generated this day.

Overall performance

| Metric                          | Value  |
| ------------------------------- | ------ |
| Human open rate                 | 33.0%  |
| Champion open rate (unadjusted) | 31.3%  |
| Champion uplift (Variant 2)     | 10.65% |
| Champion open rate (adjusted)   | 36.5%  |

Summarising the overall performance, we find that:

  • -50 + 90 + 100 = 140 incremental opens were achieved by the experiment in total

  • Overall, the human line achieved 660 opens from 2000 sends, which is an open rate of 660/2000 = 33.0%.

  • The best performing variant (#2) achieved 2750 opens from 8800 sends, which is an open rate of 2750/8800 = 31.3%.

  • There is an interesting paradox here: Variant 2 got a higher open rate than the human control every day. However, the total open rate for Variant 2 is less than the human control. Why is this? It is because the open rates and send volumes vary from day to day. Variant 2 got a high volume of sends on a "bad" day (Day 2). This brought its overall average down. This illustrates the risk of using metrics aggregated over time.

  • To correct for this problem, Jacquard looks at batch-wise uplifts. The average uplift achieved by Variant 2 is (12.5% + 8.3% + 11.1%) / 3 ≈ 10.65% (the batch uplifts shown in the tables above are rounded).

  • To represent this, Jacquard presents a time-adjusted open rate by applying this average uplift to the human line performance: 33.0% + (10.65% * 33.0%) = 36.5%.
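
Again as a rough sketch only (Python, with illustrative names; this is not Jacquard's internal code), the overall summary figures can be reproduced from the three batches above:

    days = [
        # champion opens/sends, human-control opens/sends, batch totals
        {"champ_opens": 450,  "champ_sends": 1000, "human_opens": 400, "human_sends": 1000, "opens": 1550, "sends": 4000},
        {"champ_opens": 1300, "champ_sends": 6000, "human_opens": 160, "human_sends": 800,  "opens": 1690, "sends": 8000},
        {"champ_opens": 1000, "champ_sends": 1800, "human_opens": 100, "human_sends": 200,  "opens": 1100, "sends": 2000},
    ]

    # Incremental opens: actual opens minus the human control's hypothetical opens per batch
    incremental = sum(d["opens"] - (d["human_opens"] / d["human_sends"]) * d["sends"] for d in days)  # -50 + 90 + 100 = 140

    # Aggregate (unadjusted) open rates
    human_rate = sum(d["human_opens"] for d in days) / sum(d["human_sends"] for d in days)  # 660 / 2000 = 33.0%
    champ_rate = sum(d["champ_opens"] for d in days) / sum(d["champ_sends"] for d in days)  # 2750 / 8800 = 31.3%

    # Batch-wise uplifts and the time-adjusted champion open rate
    uplifts = [(d["champ_opens"] / d["champ_sends"]) / (d["human_opens"] / d["human_sends"]) - 1 for d in days]  # 12.5%, 8.3%, 11.1%
    avg_uplift = sum(uplifts) / len(uplifts)        # ~10.65%
    adjusted_rate = human_rate * (1 + avg_uplift)   # ~36.5%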

Issues with incrementality

We sometimes get questions regarding why Jacquard is showing the engagement uplift as positive while showing negative incremental opens or clicks.

The crux of the issue is that Jacquard has found a variant that outperforms your human control in terms of engagement. But when the raw opens and clicks that occurred are tallied and the opportunity cost of the test is subtracted, fewer opens and clicks happened overall than we predict would have occurred had the experiment not been run.

This then usually leads to the question of what is actually causing the opportunity cost to outweigh the gains. There can be many reasons for this. We'll explore the most common below.

Experiment length and model maturity

Due to our focus on content that performs over the long term, as opposed to "quick win" (or "spammy") content, it may take a while for Jacquard to find the right content and send it to the majority of the audience.

For example, during your model's "exploration" period, there may be negative click incrementals because some content performing worse than your human control is still being sent out. This is part of the scientific experimentation process.

Consequently, you can experience early negative incrementals, but this is not cause for alarm. As long as there is an uplift in engagement rates, incremental engagement volumes will eventually trend positive, as lower-performing variants are dropped and top-performing variants are deployed in greater proportions.

The speed of the switch from negative incrementality to positive incrementality depends on the audience volume and engagement levels.

Audience size

In some cases, if the audience is small, a user may stop an experiment before Jacquard has had the chance to send the higher-uplift content to enough people, which is ultimately what leads to positive incrementals.

Randomness and noise

Randomness and noise can also be related to audience size or too few optimisation events for the chosen metric. If batch volumes or optimisation events are so low that Jacquard can't get a good read on the performance of the human control for that batch, the incrementals are essentially a flip of a coin (i.e. random numbers in, random numbers out). However, over lots of additional batches we might still see a positive uplift.

This method does take into consideration the opportunity cost of testing. This means that even if Jacquard AI finds a winning variant, you can still experience negative incremental engagement if there isn’t a large enough volume of sends deployed to winning variants.

Combating negative incrementals

Broadcast experiments

Extend the testing period

As mentioned above, the more batches Jacquard has to analyse, the better your optimisation gets.

Often, emphasising speed of send over experimentation time will result in negative incrementals. It is critical to adhere to our Data Science Team's recommended testing methodology. Particularly at the beginning of a relationship, your language model needs time to learn and adjust.

We may initially recommend a testing window with which you feel uncomfortable. But it is a critical piece of the optimisation puzzle. We may also need to recommend a change to your testing window if your audience is particularly small or optimisation events do not roll in quickly.

Focus on use cases with larger audiences or greater optimisation events

No one likes to hear that their audience may be too small for proper experimentation. However, there are recommended minimums for a reason. We have found over our many years of being marketing content experimentation experts that there are certain audience and optimisation event thresholds a population must meet for the likelihood of positive incrementals to be sufficient.

It's for this reason we created our Jacquard Core Platform content generation product. This allows you to get the power of AI performance-predicted content without the time and audience needed to experiment.

We'd encourage you to try out this offering for your smaller segments, as it comes included free of charge with your Audience Optimisation product.

Reduce the number of tested variants

This is typically not our recommended solution. We have found content diversity in experiments to be the number one driver of engagement. Reducing the variants tested means less diversity and less likelihood that we're able to find variations that really resonate. However, this can occasionally help right the ship, in terms of incrementals, with smaller or less engaged broadcast audiences.

Speak with your Jacquard Customer Success representative and request a Data Science review if you think this option may be for you.

Triggered experiments

Give it more time

Generally for triggered experiments, the best way to correct negative incrementals is letting the experiment run for a longer period. Nine times out of 10, this issue will correct itself. If you're noticing negative incrementals, you can be sure the Jacquard AI is seeing them, too.

In some cases with smaller daily volumes, experiments may take many months to optimise. If you're concerned the recommended testing methodology is misaligned with your business needs, please speak with your Jacquard Customer Success representative and request a Data Science review.

Approve new variants quickly

It's critical that you're diligent about approving your new variants as quickly as possible.

We know it can be easy to forget about a trigger for a few weeks or even months amongst the other things you are doubtlessly juggling. But the longer the experiment runs without a full arsenal of variants to test, the more incrementals you leave on the table long term.


Last reviewed: 11 July 2024