
How do Pencil’s AI models use my brand library?

Written by Tim Bowers
Updated this week

How are the different sections of my brand library used?

When you populate your brand library in Pencil, the different sections have distinct purposes and interactions within the platform. None of these interactions constitutes model training, although some of them do influence the outputs of Pencil’s text models. As you can see in the table below, only the Knowledge and Tone sections have a direct impact on the outputs of AI models:

Brand library section: Purpose

Knowledge: Informs the outputs of text models by equipping them with knowledge about your brand.

Tone: Tailors the tone of voice in outputs from the text models.

Logos: Enables users to select brand logos in the template and creative editors; can influence outputs of the static ad agent.

Fonts: Enables users to select brand fonts in the template and creative editors; can influence outputs of the static ad agent.

Colors: Enables users to select brand colors in the template and creative editors; can influence outputs of the static ad agent.

Elements: Enables users to select brand elements in the template and creative editors; can influence outputs of the static ad agent.

Style references: Enables users to select reference images in the image generation tool.

Sounds: Enables users to select brand sounds in the template and creative editors.

Skin: Enables users to select brand skins in the template and creative editors.

Documents: Acts as a repository for the documents used to populate the brand library.

Which models does the brand library affect?

The only models that interact with the brand library are Pencil’s Large Language Models (LLMs), which we also refer to as text generation or copy generation models. There is no direct connection between the brand library and Pencil’s other model types (image, video and voiceover).

This interaction works as follows: every time you generate text in-platform, Pencil submits your Knowledge and Tone information alongside your prompt, together with preset guidance on how these inputs should be utilised. This means you do not need to write prompts with the same depth of detail as you would when using Pencil’s LLMs out of the box, because they will always refer to your brand.
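To make this concrete, here is a minimal sketch of the idea of submitting brand context alongside a prompt. This is purely illustrative: Pencil’s internal implementation is not public, and every name below (build_llm_request, BRAND_KNOWLEDGE, BRAND_TONE, the example brand text) is a hypothetical stand-in, not Pencil’s real API.

```python
# Illustrative sketch only; all names and values are hypothetical.
BRAND_KNOWLEDGE = "We are an ice cream brand known for playful, seasonal flavours."
BRAND_TONE = "Friendly, upbeat and a little cheeky."

def build_llm_request(user_prompt: str) -> list[dict]:
    """Combine brand context with the user's prompt, as described above:
    Knowledge and Tone are submitted alongside every text generation request,
    together with preset guidance on how to use them."""
    system_guidance = (
        "Use the brand knowledge below as factual context and match the "
        "specified tone of voice in all copy you produce.\n"
        f"Brand knowledge: {BRAND_KNOWLEDGE}\n"
        f"Tone of voice: {BRAND_TONE}"
    )
    return [
        {"role": "system", "content": system_guidance},
        {"role": "user", "content": user_prompt},
    ]

request = build_llm_request("Write a headline for our summer campaign.")
```

Because the brand context travels with every request, a short user prompt like the one above is enough; the model never sees the prompt in isolation.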

This applies to most text generation in the platform, including the text generation tool, Pencil’s default agents, feed variation, voiceover script generation and the floating toolbar in the creative editor. Please note that the brand library does not affect the prompt improver feature within the image and video generation tools’ prompt entry menus.

Is the brand library training the models?

No, Pencil operates a no-train policy. The brand library provides a source of truth about your brand that enables the models to tailor their outputs whilst fully protecting your data.

Can I switch off the brand library’s connection to the LLMs?

Whilst the brand library link is a core part of the way that your Pencil workspace provides consistent on-brand text outputs, there are some situations where you might want more neutral outputs.

To enable this, you can create specific custom agents that do not draw on your brand library. In the process of creating a custom agent, you can select or deselect a Brand Knowledge checkbox. This selection will apply to all generations using this agent.

Can the brand library influence my image and video generations?

Knowledge and Tone information from the brand library only directly affects Pencil’s LLMs; however, you can also use agents to draw on the power of your brand library to influence your image and video generation.

Pencil’s multimodal agents (the image generation agent, video generation agent and static ad agent) all use LLMs in conjunction with visual generation models. This means that the LLM takes your requirements, interprets them and prompts the image or video generation models on your behalf. Brand knowledge and tone are a key part of this process: your prompt and the brand library are submitted to the LLM so that its prompts to the other models are in line with your brand requirements.
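The two-stage flow described above can be sketched as follows. Again, this is a hypothetical illustration of the pattern (an LLM rewrites your request into a brand-aware prompt before the visual model is called); the function names and the string-assembly stand-ins are not Pencil’s real implementation.

```python
# Hypothetical sketch of the agent pipeline; names are illustrative only.

def llm_expand_prompt(user_prompt: str, knowledge: str, tone: str) -> str:
    """Stage 1: an LLM interprets the user's request plus brand context and
    produces a detailed prompt for the visual model (simulated here with
    simple string assembly)."""
    return (f"{user_prompt} Style and content must reflect: {knowledge} "
            f"Mood: {tone}")

def generate_image(visual_prompt: str) -> dict:
    """Stage 2: stand-in for the call to the image or video generation model."""
    return {"model": "image-gen", "prompt": visual_prompt}

knowledge = "Ice cream brand with bright pastel packaging."
tone = "Playful and summery."
job = generate_image(llm_expand_prompt(
    "Create a concept image for an Instagram story.", knowledge, tone))
```

The key point is that the visual model only ever sees the expanded, brand-aware prompt produced in stage 1, which is how brand knowledge reaches image and video outputs without any direct connection to those models.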

You can see the brand influence in the image generation agent example below, from a workspace and brand library built around an ice cream brand:

Prompt: Create a concept image for my brand that I could use in an Instagram story.

Output:

As you can see, Pencil is able to generate on-brand imagery without any reference to the detail of the brand in question.

Can visual elements from the brand library contribute to my generations?

Visuals from the brand library will not influence your outputs from prompt-based image and video generation or from the image and video generation agents, but they will be drawn into the static ad agent, as you can see in the example below:

Prompt: Create an ad for my brand that I could use in an Instagram story, include the brand logo and an on-brand CTA

Output:
