Dynamic Optimisation - Understanding Dynamic Optimisation

Learn about this unique method for maximising experiment outcomes


Dynamic Optimisation defined

Dynamic Optimisation is a proprietary methodology we use for maximising experiment outcomes in conjunction with our industry-leading language models. It's designed to help you get great performance out of your experiments while minimising opportunity cost.

Dynamic Optimisation availability

Dynamic Optimisation is only available in customer engagement platforms (CEPs) that allow us to make real-time or near real-time calls from the platform to fetch content. For broadcast campaigns, it's also critical the CEP offers a deployment throttling mechanism.

Dynamic Optimisation is currently available for:

  • Adobe Campaign Classic

  • Adobe Journey Optimizer (optimisation on open data only; clicks coming soon)

  • Bloomreach

  • Braze

  • Iterable

  • MessageGears (triggers only)

  • MoEngage (optimisation on open data only; clicks coming soon)

  • Responsys (optimisation on open data only)

  • Salesforce Marketing Cloud

Dynamic Optimisation process

Dynamic Optimisation follows a deceptively simple five-step process.

1. Experiment begins

Dynamic Optimisation begins by serving up to 10 variants evenly to your audience.

2. Weight message distribution

The distribution of messages is updated batch-over-batch based on engagement (e.g. opens and clicks), ensuring the most effective language reaches as many people as possible. This occurs once a day for triggers and up to every five minutes for broadcast messages. (A simplified sketch of this reweighting appears after step five.)

3. Remove low-performing language (triggers only)

Consistently underperforming message variants are automatically dropped from the experiment entirely, though you can disable this feature and drop variants manually if you wish.

4. Introduce fresh language (triggers only)

New message variants from your unique language model are suggested for your approval, combating fatigue and promoting ongoing optimisation. You can let this happen automatically at the discretion of Dynamic Optimisation, or you can choose a cadence that suits your business needs.

5. Continuing optimisation

Performance optimisation is always on, repeating step two for broadcast and steps two, three, and four for triggers until the scheduled end of your experiment. Broadcast experiments can be set to run for as long as a week, while trigger experiments can be set to run as far into the future as you like.
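
To make step two more concrete, here is a minimal sketch of how engagement-weighted redistribution can work, assuming a Thompson-sampling-style approach over open data. Jacquard's actual methodology is proprietary, so the function and numbers below are purely illustrative.

    import random

    # Illustrative only: Jacquard's real methodology is proprietary.
    # Each variant tracks opens and sends; at every decision point we draw
    # plausible open rates from each variant's Beta posterior and give each
    # variant a share of the next batch equal to how often it "wins".

    def reweight(stats, samples=10_000):
        """stats: {variant: (opens, sends)} -> {variant: share of next batch}"""
        wins = {variant: 0 for variant in stats}
        for _ in range(samples):
            draws = {
                variant: random.betavariate(opens + 1, sends - opens + 1)
                for variant, (opens, sends) in stats.items()
            }
            wins[max(draws, key=draws.get)] += 1
        return {variant: wins[variant] / samples for variant in stats}

    # Hypothetical engagement after the first batch: Variants 3 and 4 are
    # opening best, so they earn a larger share while the rest stay in play.
    print(reweight({
        "Variant 1": (40, 400),
        "Variant 2": (42, 400),
        "Variant 3": (60, 400),
        "Variant 4": (58, 400),
        "Variant 5": (25, 400),
    }))

In this sketch, each variant's share of the next batch is the probability that it is currently the best performer, which is why strong variants gain audience quickly while weaker ones are kept in play at a reduced share.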

The Dynamic Optimisation difference

Dynamic Optimisation's message distribution can be difficult to understand for those who are used to traditional A/B/n testing.

The key difference is this: Rather than having a single decision point after an experiment has run for a predetermined amount of time, there are many decision points that occur throughout the life of an experiment.

These decision points occur as infrequently as once a day for triggered campaigns and as often as every five minutes for broadcast campaigns.

Rather than deploying to a large segment of your audience and waiting for a single decision to be made, Dynamic Optimisation is constantly analysing engagement data throughout the life of your experiment to adjust which variant is delivered to the most people.

What this ultimately means is better dynamic decisioning based on how your subscribers respond to the content delivered over time. It also means that at the end of a campaign there may not be a single winner, but rather several pieces of content that have each received a larger share of the audience.

Broadcast distribution example

This concept is best explained through an example. For simplicity's sake, let's say Dynamic Optimisation is adjusting our weighted message distribution once per hour.

With five variants in the mix, let's say our human control is Variant 1. Our experiment could progress something like this:

Hour      Variant 1   Variant 2   Variant 3   Variant 4   Variant 5
Hour 1    20%         20%         20%         20%         20%
Hour 2    16%         17%         30%         30%         7%
Hour 3    10%         5%          45%         35%         5%
Hour 4    2%          1%          75%         15%         1%
Hour 5    2%          1%          60%         36%         1%
Hour 6    2%          1%          51%         45%         1%

Notice how Variants 3 and 4 pull ahead in Hour 2, but because the experiment is still young, Dynamic Optimisation keeps the other variants healthily in the mix.

Then in Hour 3, we see most of the audience allocated to Variants 3 and 4, as they're neck-and-neck for the lead, while the other variants have been reduced even further. But because audience behaviour can shift as new data comes in, Dynamic Optimisation still keeps the others in play a little.

By Hour 4, we see Variant 3 has taken a commanding lead. However, knowing the history of Variant 4 throughout the test, Dynamic Optimisation still gives it some of the audience as a precaution.

In Hour 5, we see Dynamic Optimisation was correct to do so because that variant has come back from behind.

Finally in Hour 6, Variants 3 and 4 are almost equal.

The result

Looking at this data at the end of the test, a user would notice Variants 3 and 4 were both technically "winners" and received a decent share of the audience, even though traditional A/B/n testing might have us believe there can only be one winner.

The beauty of this method is that, by constantly adjusting and never considering the test portion complete, more of the audience receives better-performing variants throughout the experiment, lessening the opportunity cost of testing as a whole.
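
To see why this lessens opportunity cost, here is a small, purely hypothetical calculation. The open rates below are assumptions made up for illustration; the weights are the hour-by-hour proportions from the table above.

    # Hypothetical open rates per variant (not real data), used only to show
    # how shifting weight toward stronger variants lifts the blended open rate
    # above what a fixed, even 20% split would deliver.
    open_rates = {1: 0.08, 2: 0.08, 3: 0.14, 4: 0.13, 5: 0.06}

    hourly_weights = [
        {1: 0.20, 2: 0.20, 3: 0.20, 4: 0.20, 5: 0.20},  # Hour 1
        {1: 0.16, 2: 0.17, 3: 0.30, 4: 0.30, 5: 0.07},  # Hour 2
        {1: 0.10, 2: 0.05, 3: 0.45, 4: 0.35, 5: 0.05},  # Hour 3
        {1: 0.02, 2: 0.01, 3: 0.75, 4: 0.15, 5: 0.01},  # Hour 4
        {1: 0.02, 2: 0.01, 3: 0.60, 4: 0.36, 5: 0.01},  # Hour 5
        {1: 0.02, 2: 0.01, 3: 0.51, 4: 0.45, 5: 0.01},  # Hour 6
    ]

    even_split = sum(open_rates.values()) / len(open_rates)
    for hour, weights in enumerate(hourly_weights, start=1):
        blended = sum(weights[v] * open_rates[v] for v in open_rates)
        print(f"Hour {hour}: {blended:.1%} blended open rate vs {even_split:.1%} with an even split")

Every hour spent on the weighted distribution rather than an even split converts part of the testing audience into better-performing sends.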

The timing

We often get asked why Dynamic Optimisation requires a minimum optimisation window like the six-hour window shown in our example. This is because Dynamic Optimisation is sorting through lots of statistical noise and uncertainty due to small sample sizes. It selects variant proportions based on statistical confidence, which increases over time as more results are returned and analysed.

Users sometimes choose to compress their sends into a shorter window than the recommended optimisation duration, but it is important to note that this forces Dynamic Optimisation to make decisions more quickly than the data can confidently support, which can adversely impact outcomes.

For example, Variant 1 receives 10% of the audience in Hour 3 but only 2% by Hour 6; stopping the optimisation at Hour 3 would have left that larger share with a weaker variant. There is an opportunity cost to ending the optimisation too quickly.
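
As a rough illustration of why confidence takes time to build, the sketch below uses a hypothetical 10% open rate and a standard normal approximation to show how much the observed rate can wobble at small sample sizes.

    import math

    # Hypothetical numbers only: with a true open rate near 10%, the uncertainty
    # in the observed rate shrinks roughly with the square root of the sample
    # size, so early, small batches are far noisier than later ones.
    p = 0.10
    for sends in (200, 1_000, 5_000, 25_000):
        se = math.sqrt(p * (1 - p) / sends)
        low, high = p - 1.96 * se, p + 1.96 * se
        print(f"{sends:>6} sends: observed open rate likely between {low:.1%} and {high:.1%}")

With only a few hundred sends per variant, an apparent two-point lead can easily be noise, which is why Dynamic Optimisation shifts weight gradually early on.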

Trigger distribution example

While very similar in overall function to broadcast, trigger experiments with Dynamic Optimisation have the added complexity of dropping underperforming variants completely and introducing new ones to attempt to overtake the current champion(s).

With five variants in the mix once again, let's say our human control is Variant 1. Dynamic Optimisation will update our weighted message distribution once per day.

It's also important to note trigger experiments have much longer lifecycles and therefore have several options available to users in the Jacquard interface with respect to decision timing. For this example, we'll assume the daily volume is fairly high and the decisioning is set to its fastest mode.

Our experiment could progress something like this:

Day      Variant 1   Variant 2   Variant 3   Variant 4   Variant 5   Variant 6   Variant 7
Day 1    20%         20%         20%         20%         20%         -           -
Day 2    20%         31%         27%         12%         10%         -           -
Day 3    10%         57%         20%         8%          5%          -           -
Day 4    5%          70%         20%         5%          -           -           -
Day 5    2%          75%         11.5%       -           -           11.5%       -
Day 6    2%          80%         6%          -           -           6%          6%

(A dash indicates a variant that is not part of the experiment on that day.)

On Day 1, we start just as we do in any Jacquard experiment: with an even distribution.

On Day 2, we can see Dynamic Optimisation has started to adjust proportions. Nothing dramatic just yet. Even in its most aggressive mode, the system is still judicious about its changes to ensure that what it sees in the initial batches is not an aberration.

By Day 3, Variant 2 has taken a commanding lead and is therefore receiving more than half of the population. Dynamic Optimisation has still left the other variants in play to give them an attempt at a comeback.

By Day 4, Variant 5, having been consistently behind, is dropped from the experiment entirely. Variant 2, still in the lead, has been given an even greater percentage, while Variants 1 and 4 have been further reduced.

On Day 5, Variant 4 has also been dropped, and we see the user has approved another variant for inclusion in the experiment, so Dynamic Optimisation has officially started testing Variant 6. Notice how Variant 1, the human control, always remains part of the test with 2% of the population. This is vital for ongoing reporting purposes.

Finally, we conclude our example on Day 6. Variant 2 has become the decisive winner, receiving 80% of the population. A new variant, Variant 7, has been approved by the user and introduced for testing. Variant 1, the human control, retains 2%, and the remaining share is divided evenly among the other variants so testing continues in the background while the majority of subscribers are served the winning variant.
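
The Day 6 proportions follow a pattern that is easy to express: the champion takes the bulk of the audience, the human control keeps a small fixed share for reporting, and the remaining share is split evenly across the variants still being tested. The sketch below reproduces that pattern; the shares and the function are illustrative, not Jacquard's actual decisioning.

    # Illustrative only: reproduces the Day 6 pattern from the example above,
    # not Jacquard's actual decisioning logic.
    def allocate(champion, control, challengers,
                 champion_share=0.80, control_share=0.02):
        remaining = 1.0 - champion_share - control_share
        weights = {control: control_share, champion: champion_share}
        weights.update({variant: remaining / len(challengers) for variant in challengers})
        return weights

    # Day 6: Variant 2 leads, Variant 1 is the human control, and Variants 3,
    # 6 and 7 continue testing in the background.
    print(allocate(champion="Variant 2", control="Variant 1",
                   challengers=["Variant 3", "Variant 6", "Variant 7"]))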

The result

Examining this brief example, we can see how Dynamic Optimisation balances optimising toward the most impactful variant(s) with keeping a fair test of the other variants running simultaneously. The end result is better initial and ongoing outcomes than simple A/B/n testing alone could achieve.

Further, the language model that created these variants continues to receive all of this updated data in the background to enhance its understanding of the audience. That allows it to serve more relevant variants for testing on an ongoing basis.

With one experiment, you're able to optimise outcomes, reduce subscriber fatigue and continually test new variants that could overtake the current champion(s). It's also noteworthy that the only human intervention required during the test was approving new variants as they became available.

The timing

Every experiment, audience, and business is different. Timing will vary greatly depending on existing engagement levels, send volumes, and experiment settings.

Some experiments can optimise as quickly as you see in the example, while others may take weeks or even months. Thankfully with trigger experiments, there is no limit in the platform to how long you can run your test.


Last reviewed: 18 June 2024
