Dynamic Optimisation - Understanding Dynamic Optimisation

Learn about this unique method for maximising experiment outcomes


Dynamic Optimisation defined

Dynamic Optimisation is a proprietary methodology we use for maximising experiment outcomes in conjunction with our industry-leading language models. It's designed to help you get great performance out of your experiments while minimising opportunity cost.

Dynamic Optimisation availability

Dynamic Optimisation is only available in customer engagement platforms (CEPs) that allow us to make real-time or near real-time calls from the platform to fetch content. For broadcast campaigns, it's also critical the CEP offers a deployment throttling mechanism.

Dynamic Optimisation is currently available for:

  • Adobe Campaign Classic

  • Adobe Journey Optimizer (optimisation on open data only; clicks coming soon)

  • Bloomreach

  • Braze

  • Iterable

  • MessageGears (triggers only)

  • MoEngage (optimisation on open data only; clicks coming soon)

  • Responsys (optimisation on open data only)

  • Salesforce Marketing Cloud

Dynamic Optimisation process

Dynamic Optimisation follows a deceptively simple five-step process.

1. Experiment begins

Dynamic Optimisation begins by serving up to 10 variants evenly to your audience.

2. Weight message distribution

The distribution of messages is updated batch-over-batch based on engagement (e.g. opens and clicks), ensuring the most effective language reaches as many people as possible. This occurs once a day for triggers and up to every five minutes for broadcast messages.

3. Remove low-performing language (triggers only)

Consistently underperforming message variants are automatically dropped from the experiment entirely, though you can disable this feature and drop variants manually if you wish.

4. Introduce fresh language (triggers only)

New message variants are suggested for your approval from your unique language model, combating fatigue and promoting ongoing optimisation. You can set this to happen automatically at the discretion of Dynamic Optimisation, or you can choose a unique cadence based on your business needs.

5. Continuing optimisation

Performance optimisation is always on, repeating step two for broadcast and steps two, three, and four for triggers until the scheduled end of your experiment. Broadcast experiments can be set to run for up to a week, while trigger experiments can be set to run as far into the future as you like.
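For readers who prefer to see the flow in code, the sketch below is a minimal illustration of the loop the five steps describe: start with an even split, re-weight each batch from observed engagement, and repeat until the experiment ends. The function names, the smoothed-rate weighting rule, and the engagement numbers are assumptions made purely for illustration; they are not Jacquard's actual algorithm or API.

```python
import random

def reweight(stats):
    """Re-weight variants in proportion to their smoothed engagement rate.
    `stats` maps variant id -> dict(sends=..., engagements=...).
    Illustrative only; the real system also weighs statistical confidence."""
    rates = {v: (s["engagements"] + 1) / (s["sends"] + 2) for v, s in stats.items()}
    total = sum(rates.values())
    return {v: r / total for v, r in rates.items()}

def send_batch(weights, stats, batch_size, true_rates):
    """Simulate one batch: deliver messages per the current weights and
    record simulated engagement (opens and clicks collapsed to one rate)."""
    for _ in range(batch_size):
        v = random.choices(list(weights), weights=list(weights.values()))[0]
        stats[v]["sends"] += 1
        stats[v]["engagements"] += random.random() < true_rates[v]

# Step 1: five variants served evenly (Variant 1 is the human control).
true_rates = {1: 0.10, 2: 0.11, 3: 0.16, 4: 0.15, 5: 0.09}  # hidden "true" engagement
stats = {v: {"sends": 0, "engagements": 0} for v in true_rates}
weights = {v: 1 / len(true_rates) for v in true_rates}

# Steps 2-5: re-weight batch over batch until the scheduled end of the experiment.
for batch in range(1, 7):
    send_batch(weights, stats, batch_size=1000, true_rates=true_rates)
    weights = reweight(stats)
    print(f"Batch {batch}:", {v: f"{w:.0%}" for v, w in sorted(weights.items())})
```

The triggers-only steps (dropping and introducing variants) are layered on top of this loop; hedged sketches of those appear later in this article.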

The Dynamic Optimisation difference

Dynamic Optimisation message distribution can be difficult to understand for those accustomed to traditional A/B/n testing.

The key difference is this: Rather than having a single decision point after an experiment has run for a predetermined amount of time, there are many decision points that occur throughout the life of an experiment.

These decision points occur as infrequently as once a day for triggered campaigns and as frequently as every five minutes for broadcast campaigns.

Rather than deploying to a large segment of your audience and waiting for a single decision to be made, Dynamic Optimisation is constantly analysing engagement data throughout the life of your experiment to adjust which variant is delivered to the most people.

Ultimately, this means better dynamic decisioning based on how your subscribers respond to the content delivered over time. It also means that at the end of a campaign there may not be a single winner, but rather several pieces of content that have each received a larger share of the audience.

Broadcast distribution example

This concept is best explained through an example. For simplicity's sake, let's say Dynamic Optimisation is adjusting our weighted message distribution once per hour.

With five variants in the mix, let's say our human control is Variant 1. By default, we will never serve your human control to less than 2% of the audience, ensuring there is always a baseline for performance comparison, though you do have the option to adjust this percentage or remove it completely.
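To illustrate how a minimum control allocation can be enforced when weights are recalculated, here is a small hedged sketch. The 2% default matches the behaviour described above; the function itself is an assumption for illustration, not Jacquard's implementation.

```python
def apply_control_floor(weights, control_id, floor=0.02):
    """Guarantee the human control at least `floor` of the audience, then
    rescale the remaining variants so the weights still sum to 1.
    Set floor=0.0 to remove the guarantee entirely. Illustration only."""
    if weights[control_id] >= floor:
        return dict(weights)
    others = {v: w for v, w in weights.items() if v != control_id}
    scale = (1.0 - floor) / sum(others.values())
    rescaled = {v: w * scale for v, w in others.items()}
    rescaled[control_id] = floor
    return rescaled

# Example: engagement weighting has pushed the control down to 0.5%.
weights = {1: 0.005, 2: 0.015, 3: 0.60, 4: 0.37, 5: 0.01}
print(apply_control_floor(weights, control_id=1))
# The control is lifted back to 2%; the other variants shrink proportionally.
```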

With that in mind, our default settings experiment could progress something like this:

Hour | Proportions
Hour 1 | Control: 20%, Variant 2: 20%, Variant 3: 20%, Variant 4: 20%, Variant 5: 20%
Hour 2 | Control: 16%, Variant 2: 17%, Variant 3: 30%, Variant 4: 30%, Variant 5: 7%
Hour 3 | Control: 10%, Variant 2: 5%, Variant 3: 45%, Variant 4: 35%, Variant 5: 5%
Hour 4 | Control: 2%, Variant 2: 1%, Variant 3: 75%, Variant 4: 15%, Variant 5: 1%
Hour 5 | Control: 2%, Variant 2: 1%, Variant 3: 60%, Variant 4: 36%, Variant 5: 1%
Hour 6 | Control: 2%, Variant 2: 1%, Variant 3: 51%, Variant 4: 45%, Variant 5: 1%

Notice how Variants 3 and 4 pull ahead early in Hour 2, but with only one hour of data behind that decision, Dynamic Optimisation keeps the other variants healthily in the mix.

Then in Hour 3, we see most of the audience is allocated to Variants 3 and 4, as they're neck-and-neck for the lead. The other variants have been decreased even more. But knowing audience behaviour is ever-changing as new data comes in, Dynamic Optimisation still keeps the others in play a bit.

By Hour 4, we see Variant 3 has taken a commanding lead. However, knowing the history of Variant 4 throughout the test, Dynamic Optimisation still gives it some of the audience as a precaution.

In Hour 5, we see Dynamic Optimisation was correct to do so because that variant has come back from behind.

Finally in Hour 6, Variants 3 and 4 are almost equal.

The result

Looking at this data at the end of the test, a user would notice Variants 3 and 4 were both technically "winners" and received a decent share of the audience, even though traditional A/B/n testing might have us believe there can only be one winner.

The beauty of this method is that, by constantly adjusting and never considering the test portion complete, more of the audience receives better-performing variants throughout the experiment, lessening the opportunity cost of testing as a whole.

The timing

We often get asked why Dynamic Optimisation requires a minimum optimisation window like the six-hour window shown in our example. This is because Dynamic Optimisation is sorting through lots of statistical noise and uncertainty due to small sample sizes. It selects variant proportions based on statistical confidence, which increases over time as more results are returned and analysed.

While users sometimes set their send throttling to a window shorter than the recommended optimisation duration, it is important to note this forces Dynamic Optimisation to make decisions more quickly than the data would confidently support, which can adversely impact outcomes.

For example, notice Variant 1 (the control) receives 10% of the audience in Hour 3, but by Hour 6 it has dropped to 2%. Ending the optimisation at Hour 3 would have left that extra share of the audience on a lower-performing message; that is the opportunity cost of stopping too quickly.
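The role of statistical confidence can be illustrated with a quick calculation. The sketch below uses a simple normal approximation for a confidence interval around an observed open rate; it shows why small samples are noisy, and it is not a description of Jacquard's internal statistical model.

```python
def open_rate_interval(opens, sends, z=1.96):
    """Approximate 95% confidence interval for an observed open rate,
    using a simple normal approximation. Illustration only."""
    rate = opens / sends
    half_width = z * (rate * (1 - rate) / sends) ** 0.5
    return rate - half_width, rate + half_width

# The same 15% observed open rate is far less certain early in the window.
for sends in (200, 2000, 20000):
    low, high = open_rate_interval(opens=int(sends * 0.15), sends=sends)
    print(f"{sends:>6} sends: {low:.1%} to {high:.1%}")
```

With only a few hundred sends, two variants whose true open rates differ by a point or two are statistically indistinguishable, which is why Dynamic Optimisation shifts proportions cautiously early in the optimisation window.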

Trigger distribution example

While very similar in overall function to broadcast, trigger experiments with Dynamic Optimisation have the added complexity of dropping underperforming variants completely and introducing new ones to attempt to overtake the current champion(s).
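As a rough illustration of what "consistently underperforming" might look like in code, the sketch below drops a variant once it has trailed the current leader by a wide margin for several consecutive daily decisions. The margin, the streak length, and the function itself are assumptions for illustration, not Jacquard's actual dropping criteria; in practice the human control is never dropped and always retains its baseline share.

```python
def update_underperformers(daily_rates, streak, margin=0.8, max_streak=3):
    """Track how many consecutive daily decisions each variant has trailed the
    leader by a wide margin, and return the ids that should now be dropped.
    `daily_rates` maps variant id -> today's engagement rate. Illustration only."""
    leader_rate = max(daily_rates.values())
    dropped = []
    for variant, rate in daily_rates.items():
        if rate < margin * leader_rate:
            streak[variant] = streak.get(variant, 0) + 1
        else:
            streak[variant] = 0
        if streak[variant] >= max_streak:
            dropped.append(variant)
    return dropped

# Example: Variant 5 has already trailed badly for two days running.
streak = {5: 2}
today = {1: 0.10, 2: 0.14, 3: 0.12, 4: 0.09, 5: 0.06}
print(update_underperformers(today, streak))  # prints [5]
```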

With five variants in the mix once again, let's say our human control is Variant 1. Dynamic Optimisation will update our weighted message distribution once per day.

It's also important to note trigger experiments have much longer lifecycles and therefore have several options available to users in the Jacquard interface with respect to decision timing. For this example, we'll assume the daily volume is fairly high and the decisioning is set to its fastest mode.

Our experiment could progress something like this:

Day | Proportions
Day 1 | Control: 20%, Variant 2: 20%, Variant 3: 20%, Variant 4: 20%, Variant 5: 20%
Day 2 | Control: 20%, Variant 2: 31%, Variant 3: 27%, Variant 4: 12%, Variant 5: 10%
Day 3 | Control: 10%, Variant 2: 57%, Variant 3: 20%, Variant 4: 8%, Variant 5: 5%
Day 4 | Control: 5%, Variant 2: 70%, Variant 3: 20%, Variant 4: 5%
Day 5 | Control: 2%, Variant 2: 75%, Variant 3: 11.5%, Variant 6: 11.5%
Day 6 | Control: 2%, Variant 2: 80%, Variant 3: 6%, Variant 6: 6%, Variant 7: 6%

On Day 1, we start just as we do in any Jacquard experiment: with an even distribution.

On Day 2, we can see Dynamic Optimisation has started to adjust proportions. Nothing dramatic just yet. Even in its most aggressive mode, the system will still be judicious about its changes to ensure what it sees in the initial batches is not an aberration.

By Day 3, Variant 2 has taken a commanding lead and is therefore receiving more than half of the population. Dynamic Optimisation has still left the other variants in play to give them an attempt at a comeback.

By Day 4, Variant 5, having been consistently behind, is dropped from the experiment entirely. Variant 2, still in the lead, has been given an even greater percentage. Variants 1 and 4 have been further reduced.

On Day 5, we see the user has approved another variant for inclusion in the experiment and Dynamic Optimisation has officially started testing with Variant 6. Notice how Variant 1, the human control, always remains a part of the test with 2% of the population. This is vital for ongoing reporting purposes.

Finally, we conclude our example on Day 6. Variant 2 has become the decisive winner, receiving 80% of the population. A new variant has been approved by the user and introduced for testing. Variant 1, the human control, retains 2%, and the remaining variants split the rest evenly so testing can continue in the background while the majority of subscribers are served the winning variant.

Ongoing trigger behaviour with default Automatic language introduction

Trigger Dynamic Optimisation experiments have a diverse array of settings available to help customise the experiment's behaviour and testing methodology to your unique business needs. You can read about the various settings available in our Dynamic Optimisation experiment configuration article.

One such setting is the language introduction selector. This setting is unique to triggers. As triggers generally run in perpetuity, the default Automatic mode for this setting pairs Jacquard's scientific testing with an exploitation phase that allows you to reap the benefits of the test.

Simply put, once Jacquard finds a champion, Automatic mode will allocate the majority of your audience to that champion and stop introducing new variants to the experiment for a period of time. That period of time is dependent upon how much the Jacquard champion is outperforming the human control variant.
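In code, that decision might look something like the sketch below: the larger the champion's uplift over the human control, the longer the system can afford to exploit before suggesting new language. The 20% threshold and the roughly three-month pause mirror the example that follows; the exact tiers are assumptions for illustration, not Jacquard's published schedule.

```python
def exploitation_pause_days(champion_rate, control_rate):
    """Decide how long to exploit the current champion before suggesting new
    variants, based on its uplift over the human control.
    The tiers and durations are illustrative assumptions."""
    uplift = (champion_rate - control_rate) / control_rate
    if uplift >= 0.20:   # beating the control by 20%+: exploit for months
        return 90
    if uplift >= 0.10:   # a clear but smaller lead: a shorter pause
        return 30
    return 0             # no decisive champion yet: keep introducing language

print(exploitation_pause_days(champion_rate=0.13, control_rate=0.10))  # prints 90
```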

This is perhaps best explained by extending our example from above:

Day | Proportions
Day 6 | Control: 2%, Variant 2: 80%, Variant 3: 6%, Variant 6: 6%, Variant 7: 6%
... | ...
Day 30 | Control: 2%, Variant 2: 98%
... | ...
Day 121 | Control: 2%, Variant 2: 80%, Variant 8: 6%, Variant 9: 6%, Variant 10: 6%

Between Day 6 and Day 30, the experiment has discovered a definitive winner, Variant 2. Variant 2 is not only winning, but it's beating the human control by a significant percentage (20%+). Therefore, the system recognises it can safely enter the exploitation phase, where it leaves the champion in place receiving the entirety of the audience, save for the 2% that always goes to the human control.

We can then see how the system will allow up to three months to pass with that winner in place without introducing any new variants. This allows you to maximise the incremental benefits of discovering a winner.

However, it is important to continue to periodically test new variants to ensure your audience is still receiving fresh, diverse language and to gauge whether your audience's tastes have changed in the intervening months.

So, we see that by Day 121, the system has offered new variants to test. When the user approves these variants, Jacquard will then allocate an even percentage of the audience reserved for testing (20% by default) to the newly approved lines.
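One way to read that default, which happens to reproduce the Day 121 numbers above, is that the 20% testing reserve includes the 2% control floor and the remainder is split evenly across the newly approved variants. The sketch below encodes that reading; it is an interpretation offered for illustration, not a confirmed description of the platform's arithmetic.

```python
def reintroduction_split(new_variant_ids, testing_reserve=0.20, control_floor=0.02):
    """Split the testing reserve evenly across newly approved variants, keep the
    control floor, and give the champion everything else. One possible reading
    of the defaults, for illustration only."""
    per_new = round((testing_reserve - control_floor) / len(new_variant_ids), 4)
    weights = {v: per_new for v in new_variant_ids}
    weights["Control"] = control_floor
    weights["Champion (Variant 2)"] = 1.0 - testing_reserve
    return weights

print(reintroduction_split(["Variant 8", "Variant 9", "Variant 10"]))
# {'Variant 8': 0.06, 'Variant 9': 0.06, 'Variant 10': 0.06,
#  'Control': 0.02, 'Champion (Variant 2)': 0.8}
```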

A critical takeaway from this example is that when in Automatic mode, you will have periods of time where the system is not testing anything new but rather is capitalising on the performance of the champion. Automatic mode does this to allow you to balance the benefits of experimentation with the opportunity cost of conducting such a test.

The result

Examining this brief example, we can see how Dynamic Optimisation combines optimising toward the most impactful variant(s) with a fair, ongoing test of the other variants. The end result is better initial and ongoing outcomes than could be achieved by simple A/B/n testing alone.

Further, the language model that created these variants continues to receive all of this updated data in the background to enhance its understanding of the audience. That allows it to serve more relevant variants for testing on an ongoing basis.

With one experiment, you're able to optimise outcomes, reduce subscriber fatigue and continually test new variants that could overtake the current champion(s). Also noteworthy is that the only human intervention required during the test was approving new variants as they became available.

The timing

Every experiment, audience, and business is different. Timing will vary greatly depending on existing engagement levels, send volumes, and experiment settings.

Some experiments can optimise as quickly as you see in the example, while others may take weeks or even months. Thankfully with trigger experiments, there is no limit in the platform to how long you can run your test.


Last reviewed: 3 October 2024