Frequently Asked Questions (FAQ)

Find answers to some of our most commonly asked questions


General

Does Jacquard work in other languages besides English?

Yes, Jacquard is multilingual.

Jacquard can generate content in 27+ languages and dialects, all precision-tailored by our team of computational linguists using more than 6,250 linguistic factors.

How do I get buy-in from senior stakeholders?

The best way to show your senior stakeholders the Jacquard difference is to include them in your onboarding sessions so they can see what to expect from Jacquard, how it works, and how to work with us to achieve the best results.

I’m thinking about changing my customer engagement platform (CEP) or email service provider (ESP). What should I do?

Please let your Customer Success representative know which platform you are considering. We’d love to help, and we have extensive experience working with CEPs and ESPs. We also have integrations with many major platforms.

We’re happy to offer our perspective on which of your shortlisted platforms are best positioned to offer the functionality needed to work with Jacquard, such as the ability to handle multivariate testing.

Will Jacquard replace our copywriting team?

Absolutely not.

Jacquard is an enterprise platform built for your marketing and copywriting teams to harness AI content generation and optimisation in a safe, controlled, and built-for-purpose environment, not a tool to replace them.

Can't I just use a large language model (LLM) instead?

Large language models can be incredibly useful tools, but by their very nature they have major limitations and drawbacks that concern marketers and legal teams alike (e.g. brand safety, factual errors, tone errors).

Audience Optimisation Language

What can I learn using Jacquard Audience Optimisation?

Through complex multivariate testing, Jacquard identifies new ways of communicating with your customers that resonate with them and foster greater engagement. Jacquard is uniquely positioned to offer unparalleled scale in the AI space with controls that give brand marketers ultimate confidence in the output.

After a number of experiments have been conducted, depending upon your audience size, Jacquard will be able to provide insights and language strategies that work best for your brand.

Can all of my generated variants include all the information I've got in the human control (e.g. offer, product name, brand name)?

If your model uses the Hybrid Generation method, Audience Optimisation includes a constants feature that lets you choose up to two topics to include in all of your generated variants. This feature is not available with the Control Generation method.

However, including all the same data for all variants imposes huge restrictions on the numerous variables Jacquard tests in order to discover more about your audience, so frequent use of this feature is not encouraged.

Jacquard tests a range of different language to see what your audience responds to best on any given campaign. This includes some variants that may highlight multiple parts of your offer (e.g. 20% off and free delivery), some that only have one part of the offer (e.g. 20% off) and some that don't include the offer at all.

Users are often surprised when variants they were unsure of, or that made no mention of the particular offer, product name, or brand name, go on to win a multivariate test.

Can I change things I don't like in the Audience Optimisation variants?

Everything Jacquard generates for an Audience Optimisation experiment is selected purposefully for specific, scientific reasons determined by our proprietary AI.

We really urge you to leave the variants as they appear, unless of course there are words or phrases that are off-brand according to your official brand style guidelines or cannot be used for legal or other compliance reasons. If you need to make changes for those reasons, please use the Request a change button and be as descriptive as possible with your feedback.

Otherwise, any changes you make introduce human bias into an AI-designed experiment. When you allow the original language to be tested, you may be surprised to find that what you didn't personally like was actually favoured by your audience.

Can Jacquard support merge tags/dynamic fields/personalisation in Audience Optimisation variants (e.g. coupon value, first name, discount amount, loyalty tier)?

Yes, Jacquard does support this. It's important to have this discussion up front with your Customer Success representative and Solutions Architect, as the requirements and capabilities for how these sorts of tags and fields render, and the logic they carry, vary by platform.

Please be aware that dynamic content may only appear in a few of the Jacquard variants, as this will be merely one of many variables Jacquard tests to learn more about how best to communicate with your audience.

This is because there are thousands of elements that impact how language engages an audience. Including the recipient's name in an email subject line, for example, is just one of those variables. For one audience, this may be an effective strategy, while for another audience it has a negative impact on engagement. And for both of these audiences, the effectiveness will change over time.

Overexposure of one variable can also have an adverse effect. Therefore, continued testing is put in place to educate your Jacquard model about the changing audience response.
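
For illustration only, a variant set might distribute a dynamic field like this. The {{ first_name }} merge tag below is a generic placeholder assumption, and the subject lines are invented examples rather than Jacquard output; actual syntax depends on your platform:

```python
# Hypothetical variant set: the {{ first_name }} merge tag and all
# subject lines are invented for illustration; syntax varies by platform.
variants = [
    "{{ first_name }}, your 20% off ends tonight",  # carries the dynamic field
    "Your 20% off ends tonight",                    # same offer, no personalisation
    "Free delivery is back for the weekend",        # different angle, no tag
]
```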

Do I have to use all the variants an Audience Optimisation experiment produces?

Yes. It is incredibly important to use all of the variants generated for your multivariate test. Each variant was selected for a specific, scientific reason. Choosing just one or two from the group to test will restrict Jacquard's ability to learn about your audience properly.

If you find yourself in a situation where you don't have time for an experiment on a particular campaign, we recommend you use our performance-predicted Core Platform language generation tool to generate a single variant for use on the time-restricted campaign.

If you simply need fewer variants for a test, we recommend you choose the desired number of variants in the initial experiment creation step and regenerate your variants.

How do I update the inputs or topics in my Jacquard Audience Optimisation user interface?

Open a support ticket and provide as much detail as you can about the new inputs or topics you require. One of our team of computational linguists will then make any required changes to your bespoke user interface. Typically, we require one to two weeks' notice to make these sorts of amendments.

Our copywriting/brand/executive team don't like the language Jacquard is generating. What do I do?

Get in touch with your Customer Success representative and let us know. We’d love to chat with them about the benefits and advantages of using purpose-built AI for content optimisation.

It's important to position Jacquard as a solution deliberately designed to help marketers and copywriters optimise short-form content so they can dedicate more of their time and creative skills to writing long-form content.

Some of the Audience Optimisation variants are too generic. Can I change them?

Jacquard is testing a massive number of variables simultaneously to build a deep understanding of what resonates with your audience(s). Unless every variant created for a multivariate test feels too generic, we'd urge you to test them as they are.

Some may occasionally feel generic and this is by design, as Jacquard tests and retests various hypotheses throughout the life of your bespoke model. Changing the variants risks adding human bias into a scientifically designed AI experiment. The performance of certain variants you felt unsure about may well surprise you.

The language Jacquard generated doesn't fit the campaign. What do I do?

Oftentimes, returning to the content generation step and trying slightly different prompts or topics can resolve these issues. It's important to use the model in the way the Language and Customer Success teams trained you during your onboarding.

If this doesn't work, open a support ticket with examples and reasoning so our team of computational linguists can investigate the issue.

What if the language doesn’t sound like something our copywriters would write?

As long as the language sounds like it was written by a human, is on brand, and is legally compliant, then we would seek to reassure all concerned that the variants should be tested as created. Remember, Jacquard is testing an incredible number of variables, operating well beyond the capacity of a human copywriter yet still sounding human.

Why is a word or phrase I rejected still appearing in the replacement variants Jacquard Audience Optimisation creates?

Jacquard is artificial intelligence with an industry-leading team of computational linguists keeping it on track. Our Language Team monitors and incorporates the feedback you input into Jacquard to make sure no personal bias sullies your bespoke algorithm. Although we try to do this as quickly as possible, changes are not immediate.

Experiment Results and Data

Can Jacquard pull in the multivariate test results from my customer engagement platform (CEP) or email service provider (ESP)?

In many instances, yes. Jacquard integrates with most major CEPs and ESPs, which may make it possible for results to be fed back automatically. We're also expanding our relationships with platforms all the time.

If there is a platform or element of a platform you wish Jacquard integrated with or integrated with differently, you can open a feature request ticket. We cannot guarantee all feature requests will be implemented, but we do guarantee that we read and consider each one.

If no integration exists, there are a number of ways to feed results back to Jacquard, including simple copying and pasting or a CSV upload into the platform. Depending on your platform and Jacquard contract, you may also be able to schedule an automated export to Jacquard Support for upload.
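
If you're uploading manually, a results file might look something like the sketch below. Treat the column names and figures as placeholder assumptions rather than a required schema:

```python
# Hypothetical results CSV: the column names and figures are
# placeholder assumptions, not Jacquard's required format.
import csv

rows = [
    {"variant": "Control", "sends": 50_000, "opens": 9_800, "clicks": 820},
    {"variant": "Jacquard A", "sends": 50_000, "opens": 11_150, "clicks": 940},
    {"variant": "Jacquard B", "sends": 50_000, "opens": 10_400, "clicks": 1_010},
]

with open("experiment_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["variant", "sends", "opens", "clicks"])
    writer.writeheader()
    writer.writerows(rows)
```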

Your Customer Success representative or Solutions Architect can advise you on the most efficient way of getting your results into Jacquard.

Can Jacquard tell me what language does and doesn't work so I can share the learnings with other parts of the business?

There's no such thing as perfect language. We've analysed millions of variants and can confirm that no single element guarantees success. It's always a combination of linguistic features and elements, aligned with proper timing, that leads to success.

A human might erroneously think they've identified one thing that "always" works and send it over and over again. However, the initially successful results will start to decay over time. Jacquard is constantly testing and finding new strategies that resonate with your customers to avoid creative fatigue and results decay.

That said, we do analyse Audience Optimisation experiment results and your bespoke language model(s) on an ongoing basis. You can generate a Language Insights report in the platform at any time, provided the particular project you've chosen for analysis has enough valid experiments in the selected time period.

Your Customer Success representative will also provide custom insights tailored to your brand during your business reviews.

How frequently do I need to input Audience Optimisation experiment results?

Ideally the results are automatically fed back to Jacquard from your CEP or ESP within 72 hours of campaign completion if we have an integration available.

Otherwise, we would urge you to upload results on a regular cadence (e.g. at least once a week or once a month) depending on the frequency of your experiments. Feeding results into Jacquard enables our AI to continually optimise your bespoke language model(s) and communicate with your audience(s) in the most effective way.

Our human control variant generated more opens/clicks. Should I be concerned?

This can happen occasionally. There will be fluctuations from experiment to experiment, particularly when a bespoke model is new. While most Jacquard users see immediate positive results, a model will require 12 to 24 valid experiments before it is considered fully calibrated.

Post calibration, this is often the result of not following the recommended testing methodology outlined by our Data Science Team. In many cases, if Jacquard detects abnormalities or aberrations in the test data, it will default to deploying your human control as a safety measure. So, it is important to follow the recommended testing methodology, including wait time, test cell size, number of variants, and selection metric.

If your business needs have changed and you'd like our Data Science Team to review and amend your testing methodology, please get in touch with your Customer Success representative.

Jacquard variants might be winning the multivariate test, but I don't believe the Jacquard variants are generating more conversions. What do I do?

Jacquard is designed to create fair and brand-compliant experiments. In avoiding spammy or misleading language, Jacquard works to build trust and brand affinity with your customers over time.

While on an individual campaign basis you may find your human control leads to more immediate direct conversions, long-term loyalty and behaviour change are what Jacquard is after. These are best measured by proxy through engagement metrics.

Simply put, with Jacquard experiments the following generally hold true: more opens = more clicks = more customer loyalty = more revenue long term. This is why campaign-over-campaign conversion measurements comparing the human control to the Jacquard winner don't tell the full story of Jacquard's impact.

If you need to prove the business case for Jacquard or need assistance with analysing metrics, please get in touch with your Customer Success representative. They will put you in contact with a Data Science Team member who can help you with measurements that fit your use cases.

What if Jacquard has generated an uplift but our average order value (AOV) and year-over-year numbers are down?

Though there are many things outside our control that could be contributing to this outcome, we can say with confidence we have demonstrated Jacquard's ability to deliver an uplift in opens and clicks when compared to human control variants you would have used in your campaigns without Jacquard.

Consistently employing Jacquard experiments, adhering to recommended testing methodologies, and keeping our team apprised of changes to your brand and editorial calendar throughout the course of your Jacquard relationship are the best methods to mitigate this sort of occurrence.

We would suggest taking a look at other touchpoints in your customer journeys to see if there are any indicators and trends in the macro environment that could help you understand more about the changes you're seeing.

Please get in touch with your Jacquard Customer Success representative to discuss any issues as soon as possible.

Testing Methodology

What's statistical significance?

Simply put, when the test audience is of a sufficient size, we can have greater confidence that the results of a test are unlikely to be caused by chance. In other words, the test can reach statistical significance.

Jacquard aims for a 95% confidence level in its experiments, which is why it is so important to strictly adhere to the testing methodology provided by our Data Science Team.

If the audience size is too small, sends are too infrequent, or the testing window is too short, your experiment is less likely to reach statistical significance. If this happens on a consistent basis, Jacquard is unable to learn about your audience or confidently determine the successful patterns of language that resonate.
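
As a rough illustration of how audience size drives confidence, here is a minimal sketch using a generic two-proportion z-test approximation. The open rates and lift below are hypothetical assumptions, not Jacquard's methodology:

```python
# Illustrative only: a textbook two-proportion z-test sample-size
# approximation. The 20% -> 22% open-rate lift is a hypothetical
# example, not a Jacquard benchmark.
import math

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate recipients needed per variant to detect a lift
    from rate p1 to rate p2 at 95% confidence and 80% power."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

print(sample_size_per_variant(0.20, 0.22))  # ~6,503 recipients per variant
print(sample_size_per_variant(0.20, 0.21))  # ~25,554: halving the lift
                                            # roughly quadruples the need
```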

Why can't I choose the winner of an experiment based on conversions?

Variants generated for your experiments are designed to maximise long-term engagement and loyalty, converting recipients into readers, while also helping determine the best variant for the experiment itself. Using down-funnel metrics like conversions impedes timely decision-making on experiments, and such metrics are often defined differently from organisation to organisation.

Using standardised, top-of-funnel engagement metrics allows for speedy, scientific, scalable experiments. Experiments based on conversions would greatly increase opportunity cost by dramatically increasing decision time, test population size, reporting time, or all three. In fact, these would increase so much that the campaign in question would more than likely be over before a winner could be determined.

Furthermore, even if conversions could be gathered and reported back to Jacquard in a timely manner, the number of conversions compared to the number of people who received the experiment would be so small that the decision made would ultimately be a statistical guess.
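
To make that arithmetic concrete, here is a minimal sketch; the test-cell size, variant count, and conversion rate are all hypothetical assumptions for illustration:

```python
# Illustrative arithmetic only: the test-cell size, variant count,
# and conversion rate below are hypothetical assumptions.
import math

recipients_per_variant = 100_000 / 10   # 100k test cell split across 10 variants
conversion_rate = 0.01                  # hypothetical down-funnel rate

expected = recipients_per_variant * conversion_rate
noise = math.sqrt(recipients_per_variant * conversion_rate * (1 - conversion_rate))
print(f"~{expected:.0f} conversions per variant, +/- {noise:.0f} from chance alone")
# -> ~100 conversions per variant, +/- 10 from chance alone: variant
#    differences of a few percent are indistinguishable from noise.
```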

Taking this a step further, even if the test population size could reasonably be drastically increased to get enough conversions quickly enough to make such a decision, only a tiny portion of your audience would be left to receive the optimised variant. This means you wouldn't see the meaningful incremental effect as a result of testing in the first place.

For all of these reasons, Jacquard has used top-of-funnel metrics like opens and clicks for experimentation since the very beginning with great success.

Can I use Jacquard to do a simple A/B test against my human control?

Limiting an experiment to two variants removes an enormous number of variables from the test and severely limits Jacquard's ability to determine what does and doesn't work for your audience(s).

A/B testing alone is simply not enough for the complex and rapidly changing communication demands of modern consumers, particularly for enterprise companies.

Can I use a previous Jacquard winner as my new human control?

For triggered campaigns, this is in a sense part of the methodology. But it is still important to have a true, non-AI human control in place to benchmark performance against.

For broadcast campaigns, how your audience responds to variants will vary day to day. So, the winning variant is very much "of the moment" and should be utilised in the final send a matter of hours after initiating the split test. What works best one week may not fare so well the following week, particularly as broadcast content tends to change frequently.

Can I use send time optimisation (STO) in tandem with Jacquard experiments?

This would very much depend upon the customer engagement platform (CEP) or email service provider (ESP) you are using.

For broadcast messaging in particular, using STO on the multivariate test introduces an external factor that Jacquard does not control or account for. This will skew not only the results of that particular experiment but also the long-term learnings of your bespoke language model. Therefore, we generally recommend against using STO.

For triggered messaging, STO is more possible from a testing methodology perspective, but if STO is used on a trigger to delay the message until a later time, it is often not as effective as an immediately triggered message.

Please contact your Customer Success representative who can investigate this further with a member of our Solutions Architecture Team.

How long do I need to wait between a multivariate test and winning deployment or for a Dynamic Optimisation campaign to run?

This will depend largely on your audience size, average selection metric, and number of variants. Our Data Science Team will provide you with a tailored recommendation during your onboarding process.

But, generally speaking, we recommend a 4-hour wait for open selection and a 6-hour wait for click selection.

What if my management team wants to receive all of the variants upon deployment?

Whilst it's great that your management team is interested, we would suggest including them on just the final send. The direct involvement of a large number of internal stakeholders during the experiment, including deploying multiple variants to the same person, risks introducing bias and additional noise.

Why does Jacquard get multiple variants to test against just one human variant? Obviously, Jacquard is going to win.

When Jacquard generates multiple variants to test, it isn't just testing random pieces of language. It's setting up a scientific experiment, testing each variant against the others.

Jacquard will be trying to predict the performance of each variant based on your previous experiments' results. However, it will also ensure it's constantly generating fresh content and adapting to your changing audience, too. You wouldn't want to test the same variants every week—that's a surefire recipe for engagement decay.

The aim is to find out what language works for your audience for this experiment and build learning for subsequent campaigns. Jacquard is not designed to get maximum uplift for one experiment in isolation, and we would never base success on just a few experiments. Jacquard is designed to take a long-term view, and part of that long-term view is both interclass and intraclass linguistic diversity.

By testing a wide variety of variants in the space of one experiment, Jacquard is learning how best to communicate with your audience as quickly as possible. If you reduce the number of variants, Jacquard will take significantly longer to understand which language best resonates with your audience and, therefore, slow down all learnings.
