Dynamic Optimisation - Experiment Configuration

How to configure a Dynamic Optimisation experiment in the Jacquard platform


Configuration panel

The Dynamic Optimisation configuration panel enables you to schedule broadcast or trigger experiments in your delivery provider.

The Dynamic Optimisation configuration panel is currently available for these platforms:

  • Adobe Campaign Classic

  • Adobe Journey Optimizer (optimisation on open data only; clicks coming soon)

  • Bloomreach

  • Braze

  • Iterable

  • MessageGears (triggers only)

  • MoEngage (optimisation on open data only; clicks coming soon)

  • Responsys (optimisation on open data only)

  • Salesforce Marketing Cloud

When everything is configured and your campaign or workflow is pushed live, the delivery platform will send a request to Jacquard to retrieve a language variant and its associated open tracking pixel. The language variant will be embedded within the message via a parameter and the open pixel within the body of the message.
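
As a hedged illustration of that request flow, here is a minimal Python sketch. The endpoint URL and response field names below are invented for this example only; your real URL comes from the Dynamic Optimisation panel and differs per platform.

```python
import requests

# Invented endpoint and field names for illustration only -- the real URL
# is provided in the Dynamic Optimisation panel and differs per platform.
resp = requests.get(
    "https://api.jacquard.example/optimisation/variant",
    params={
        "customer_id": "<unique customer ID>",  # your platform substitutes its merge tag here
        "delivery_id": "<unique delivery ID>",
    },
    timeout=5,
)
resp.raise_for_status()
payload = resp.json()

subject_line = payload["variant_text"]  # embedded in the message via a parameter
pixel_url = payload["open_pixel_url"]   # embedded as a tracking pixel in the body
```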

How this is accomplished differs from platform to platform, so check our documentation for each individual platform to determine the best way to set up your deployment.

Ensure you're following recommended testing guidelines.

The process

1. Open the Dynamic Optimisation settings panel

First, follow your usual process for generating experiment language variants in Jacquard.

Once you've approved the language variants, you'll progress to the Results tab, and a plug icon will appear in the upper-right corner of the screen. Click on this icon to open the Dynamic Optimisation configuration panel.

You may need to expand the panel by clicking the arrow icon at the top.

Select Dynamic Optimisation to open the settings.

Once you open the panel, you'll have a number of options to configure. Certain options will only be available for trigger experiments. We'll go through each setting below in the order they appear in the panel.

2. Choose the optimisation mode

The Optimisation mode dropdown allows you to select from the available optimisation modes. For triggers, there are three modes available. For broadcast campaigns, you are limited to Fast, maximise revenue mode.

Let's examine what each mode does.

Fast, maximise revenue

This mode employs a methodology to find the best-performing variant as quickly as possible. Once a winning variant is found, the majority of sends will use this variant, maximising overall revenue for the campaign and minimising the opportunity cost of testing. This is the only mode available for broadcast experiments.
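
The article doesn't name the exact algorithm behind this mode, but "find the winner fast, then exploit it" behaviour is commonly implemented with a multi-armed bandit such as Thompson sampling. Here is a minimal, hypothetical sketch assuming binary engagement data (e.g. opens); none of these names are Jacquard's:

```python
import random

def thompson_allocate(variants, n_sends=10_000):
    """Split the next batch of sends across variants via Thompson sampling.

    `variants` maps a variant id to (successes, failures) -- e.g. unique
    opens vs. non-opens observed so far. Variants that are probably better
    win a larger share of the batch, so a clear winner ends up receiving
    the majority of sends.
    """
    counts = {v: 0 for v in variants}
    for _ in range(n_sends):
        # Sample a plausible engagement rate for each variant from a
        # Beta(successes + 1, failures + 1) posterior.
        draws = {v: random.betavariate(s + 1, f + 1)
                 for v, (s, f) in variants.items()}
        counts[max(draws, key=draws.get)] += 1
    return counts

# Example: variant B is pulling ahead, so it wins most of the batch.
print(thompson_allocate({"control": (50, 950), "A": (60, 940), "B": (90, 910)}))
```

Under this kind of allocation, a clearly better variant quickly wins most of the draws, matching the described behaviour. The Fast, repeat sends mode below can be thought of as settling on the top few samplers rather than a single champion.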

Fast, repeat sends to the same audience

This uses the same optimisation method as Fast, maximise revenue. The difference is that you can choose to settle on a group of champions rather than a single one.

The default is three champions, which means any one of the winning three variants will be sent to subscribers most of the time. This option works well when you are sending a campaign to the same audience week after week, or even multiple times within the same week. This provides the recipient with a variety of different high-performing lines to reduce fatigue.

Slow, maximise statistical significance

This uses a slow, deliberate method to find the best-performing variant. The approach uses a straight split test to find the statistically worst-performing variant. That variant is then dropped and the process is repeated until two lines remain: the human control and the best-performing Jacquard variant. The system then performs a head-to-head test with the last two variants. Once a true winner is found, the final split is user-configurable.

The default settings will send 80% of the deployment to the winner and 20% to the loser. This process requires Drop bad variants to be active. If Drop bad variants is inactive, the optimisation will give all live variants an equal proportion of the send. While guaranteeing the best line wins, this option can take up to 10 times longer to complete than Fast, maximise revenue.
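
The statistical test behind this mode isn't specified in the article; as an illustrative sketch only, here is a drop-the-worst loop built on a standard two-proportion z-test (all names are hypothetical):

```python
from math import sqrt

def z_score(s1, n1, s2, n2):
    """Two-proportion z statistic for rates s1/n1 vs s2/n2."""
    p1, p2 = s1 / n1, s2 / n2
    pooled = (s1 + s2) / (n1 + n2)
    return (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))

def next_variant_to_drop(results, threshold=1.96):
    """Return the statistically worst variant to drop, or None to keep waiting.

    `results` maps a variant id to (successes, sends). Repeating this as data
    accrues reproduces the described loop: drop the worst, re-split, repeat
    until only the human control and the best Jacquard line remain.
    """
    def rate(v):
        return results[v][0] / results[v][1]
    best, worst = max(results, key=rate), min(results, key=rate)
    if z_score(*results[best], *results[worst]) > threshold:  # ~95% confidence
        return worst
    return None  # keep the current split running until significance is reached
```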

3. Choose the optimisation metric

The metric you select here tells Jacquard what it should use when deciding how to adjust the proportion of the audience served each variant and when to drop variants that are performing badly.

Depending on your platform and the methodology agreed with Jacquard, you may have up to three options (a worked example follows the list):

  1. Opens to sends - Jacquard will use unique open rate to determine the performance of a variant. By default, we collect open rates through the use of an open tracking pixel for Dynamic Optimisation experiments.

  2. Clicks to sends - Jacquard will use unique click-through rate to determine the performance of a variant. Jacquard has webhooks available through which certain platforms can provide click tracking data.

  3. Clicks to opens - This uses click-to-open rate (CTOR) to determine performance. This is best used when optimising variants within an email body while also testing subject line.
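
As a worked example with illustrative numbers, here is how the three metrics are calculated for a single variant:

```python
sends, unique_opens, unique_clicks = 10_000, 2_500, 500

opens_to_sends  = unique_opens / sends          # 2,500 / 10,000 = 25% open rate
clicks_to_sends = unique_clicks / sends         # 500 / 10,000  = 5% click-through rate
clicks_to_opens = unique_clicks / unique_opens  # 500 / 2,500   = 20% CTOR
```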

Your Jacquard Customer Success representative will make a recommendation based on your use case in consultation with our Data Science team.

If your project has been configured with the split calculator, it will have already selected the best option for the experiment.

4. Configure the optimisation schedule

Input the Start schedule and End schedule for your Dynamic Optimisation experiment. This is the duration of time that Jacquard will be listening for new data and adjusting your variants and proportions accordingly.

There are two important things to note about the schedule:

  1. Once the experiment has been activated, you can still adjust the optimisation Start schedule until that point in time has passed. Once the optimisation period has begun, you can no longer adjust the Start schedule to stop it.

  2. Similarly, you can adjust the End schedule after the experiment has begun to increase the length of time Jacquard will listen for data and adjust variants and proportions. However, once the configured End schedule has passed, you cannot adjust it to make the experiment run longer.

If you need to change the Start schedule or End schedule after that time has passed, you will need to make a new experiment.
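
To summarise the two rules, here is a minimal sketch (hypothetical helpers, not part of any Jacquard API):

```python
from datetime import datetime

def can_adjust_start(now: datetime, start: datetime) -> bool:
    # The Start schedule is editable only until the optimisation period begins.
    return now < start

def can_adjust_end(now: datetime, end: datetime) -> bool:
    # The End schedule can be extended while the experiment is still running,
    # but not once the configured end point has passed.
    return now < end
```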

Broadcast experiments

For broadcast experiments, the minimum optimisation schedule for an experiment is three days. Even though you'll likely only be deploying for one of those days, it's critical for Jacquard to keep listening for data.

This is because broadcast campaigns have about a 72- to 96-hour maturation period. During this time, your dashboard will continue to update the Mature Data tab. You can ask Jacquard to listen for additional data on a broadcast experiment for up to seven days.

Trigger experiments

Trigger experiments can be configured to run for as long as you like. Keeping in mind that once the End schedule has passed it cannot be extended, it is best practice to set a trigger campaign to run longer than you think you will need. You can always adjust the End schedule to a final end date if you settle on one.

As Jacquard dynamic trigger experiments are built to run in perpetuity, you may find you never need to turn one off and create another experiment for a particular marketing touchpoint unless the goal or content of that touchpoint changes.

5. Triggers only: Determine if Jacquard should automatically drop bad variants

With Jacquard's Dynamic Optimisation, you have the ability to manually drop variants by using the Status dropdown next to each variant in the Results tab.

However, Dynamic Optimisation works best when you put that decisioning in Jacquard's hands.

By default when you create a new experiment, Jacquard will toggle Drop bad variants on. If you decide you want to turn it off, simply uncheck the box next to Drop bad variants. This is not generally recommended, but it is available as an option should you need it.

6. Triggers only: Choose how Jacquard should introduce new variants

The controls for introducing new variants are in a dropdown just after the Optimisation schedule box. Again, this is for triggers only. Broadcast experiments do not run long enough to test more than the initial variants generated.

You have four different options for controlling the introduction of new variants in a dynamic trigger experiment (a simplified sketch follows the list):

  1. Automatic - This is the default. If Jacquard is winning with a strong uplift, we will delay adding new variants for up to 3 months to maximise the overall performance of the experiment. If we're not winning by much or are losing to the control, we will keep introducing new variants until we find a strong winner. This option also controls the number of new variants that we introduce. So if we are winning, we may only introduce a few new variants at a time.

  2. Time based - This allows you to instruct Jacquard to wait a configurable number of days before introducing new variants. The default for this setting is 30 days.

  3. Do not introduce new language - This does what it says on the tin. Jacquard will not add new variants, allowing the experiment to find a winner from the current variants, irrespective of whether a Jacquard variant is winning or not.

  4. Continuous testing - This mode will simply allow Jacquard to ask for the approval of new language as soon as an old variant is dropped.
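
Jacquard's actual decision logic is internal; the following is only a simplified, hypothetical sketch of how the four modes might gate the offer of new variants:

```python
from datetime import datetime, timedelta

def should_offer_new_variants(mode: str,
                              now: datetime,
                              last_introduced: datetime,
                              wait_days: int = 30,
                              variant_just_dropped: bool = False) -> bool:
    """Hypothetical gate for the four introduction modes described above."""
    if mode == "do_not_introduce":
        return False                 # never add new language
    if mode == "continuous":
        return variant_just_dropped  # ask for new language as soon as one drops
    if mode == "time_based":
        # Wait a configurable number of days (default 30) between introductions.
        return now - last_introduced >= timedelta(days=wait_days)
    # "automatic": Jacquard weighs current uplift internally -- if it's winning
    # strongly it may hold off for up to three months, otherwise it keeps
    # introducing variants. Not reproducible here.
    raise NotImplementedError("automatic mode is decided by Jacquard internally")
```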

No matter which mode you choose, Jacquard will never introduce language to a live experiment you haven't approved. You must approve newly offered variants from the Language Approval tab just as you would in a fresh experiment.

Otherwise, your new variants will just sit there in an unyielding approval limbo forever and ever. The horror! 😱

Language that has been approved may be held before being introduced into the experiment depending on the current performance of the existing language.

7. Triggers only: Choose if Jacquard can drop your human control

This is another option that does what it says on the tin: If the control is performing badly and this setting is toggled on, Jacquard will drop it. Uplift and other performance figures will then use the last known human control performance from before it was dropped.

By default, this option is toggled off. We generally wouldn't recommend enabling it, particularly at the start of an experiment. It's important for Jacquard to be able to establish a solid benchmark against which to measure its own hypotheses.

If you wish to enable it, simply check the box next to Allow the human control to be automatically dropped.

8. Determine the human control's minimum audience share

By default, Jacquard will always send the human control variant you entered to at least 2% of your audience, irrespective of how poorly the variant is performing.

You can choose to toggle Minimum sends to the human control off or set it lower, which would allow Jacquard to drop the percentage below the 2% threshold. Alternatively, you can increase it and ensure Jacquard always sends a larger percentage of your audience the human control.

Note that if your human control is winning, Jacquard will increase the share to it no matter what percentage you've entered here.
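
As an illustrative sketch of what such a floor implies for the split (not Jacquard's actual implementation): raising a losing control to its minimum share means scaling the other variants down pro rata.

```python
def apply_control_floor(shares, min_control=0.02):
    """Hold the human control at a minimum audience share (default 2%).

    `shares` maps variant id -> proposed audience fraction summing to 1.0,
    with "control" as the human control.
    """
    proposed = shares.get("control", 0.0)
    if proposed >= min_control:
        return dict(shares)  # already at or above the floor
    # Raise the control to the floor and scale every other variant down
    # pro rata so the split still sums to 1.0.
    scale = (1.0 - min_control) / (1.0 - proposed)
    floored = {v: share * scale for v, share in shares.items() if v != "control"}
    floored["control"] = min_control
    return floored

# A losing control proposed at 0.5% is held at 2%; A and B absorb the change.
print(apply_control_floor({"control": 0.005, "A": 0.795, "B": 0.20}))
```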

9. Triggers only: Choose the percentage of the audience Jacquard can use for ongoing testing

The final setting available for triggers is Minimum percentage of audience used for testing (1-20%). If you have chosen settings in your experiment that allow Jacquard to introduce and test new variants, Jacquard will use this setting to determine what percentage of your audience can be used to test new variants you approve.

The default is 20%. This is also the maximum Jacquard will ever use for ongoing testing of new language. You can adjust this as low as 1% or turn it off completely by unchecking the box next to Minimum percentage of audience used for testing (1-20%).

Though there is always an opportunity cost to testing, ongoing testing of trigger language helps keep pace with your audience's changing tastes and engagement, so your language remains relevant and continues to earn engagement. Disabling this function risks declining engagement over time as language becomes stale or falls out of step with your audience's evolving preferences.

10. Click Start the Awesome

With all of your settings in place, click Start the Awesome. The button will turn green and an alert will appear at the top of the panel to let you know that Dynamic Optimisation has been enabled for your experiment.

Note: This does not actually schedule your send. It has only enabled the optimisation on Jacquard's side. You need to complete experiment implementation and deployment steps within your delivery provider.

Certain providers require copying and pasting a URL that pulls information from the Dynamic Optimisation endpoint. When the URL appears in the panel, copy it into your delivery provider platform. This configures Jacquard as a data source from which language variants are returned on a per-subscriber basis.

In some cases, you may need to replace our placeholder values (e.g. <unique customer ID>, <unique delivery ID>) with the actual dynamic variables from your platform. We cannot always preconfigure these, as many of them vary based on how your unique deployment platform instance was provisioned.
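
As a hedged illustration of that substitution, the URL and merge-tag syntax below are invented for this sketch; use the URL shown in the panel and your own platform's dynamic-variable syntax.

```python
# Hypothetical sketch: swap Jacquard's placeholders for your platform's own
# dynamic variables before pasting the URL into the delivery provider.
url_template = (
    "https://api.jacquard.example/optimisation"
    "?customer=<unique customer ID>&delivery=<unique delivery ID>"
)

substitutions = {
    "<unique customer ID>": "{{user.id}}",      # example merge tag only
    "<unique delivery ID>": "{{delivery.id}}",  # example merge tag only
}

url = url_template
for placeholder, merge_tag in substitutions.items():
    url = url.replace(placeholder, merge_tag)
print(url)
```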

Platform-specific settings

As mentioned above, each platform operationalises its sends differently and, therefore, the process for adding Dynamic Optimisation to sends in those platforms differs. Please find your platform below and read the article for the type of experiment you're running.



Last reviewed: 20 June 2024
