Prompt Guiding Explained
Everything you need to know about prompt guiding
Prompt engineering is an essential part of building high-quality models.
The Chaiverse package provides competitors an easy interface for submitting models with custom prompts.
Unlike AI assistants, your model will be driving millions of characters, each with their own persona and prompt. This demo explains what the key variables are and how they are used.
Your model will be the "engine" that drives the conversations between these characters and users.
Each character is defined by the following three things:
Character Name (bot_name): the character's name
Character Persona (memory): the character's personality
Example Conversation (prompt): an example conversation provided by the character's creator
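As a sketch, the three fields above might be represented and assembled like this. The field names follow the definitions in this guide, but the dict layout and the `format_context` helper are illustrative assumptions, not the exact Chaiverse schema:

```python
# Illustrative character definition using the three fields above.
# The field names (bot_name, memory, prompt) come from this guide;
# the dict layout itself is a hypothetical sketch.
character = {
    "bot_name": "Nora",                                  # character's name
    "memory": "Nora is a witty space pirate captain.",   # persona, always in context
    "prompt": "User: Hi!\nNora: Ahoy there, stranger!",  # creator's example conversation
}

def format_context(character: dict, chat_history: str) -> str:
    """Assemble the text the model sees: persona first, then the
    example conversation, then the live chat history."""
    return "\n".join([character["memory"], character["prompt"], chat_history])
```

In a real deployment the live chat history would be appended turn by turn, with the truncation rules described below deciding how much of each field survives.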
The illustration below shows the app alongside the character definition, as your model sees it.
Given a fixed context window (the default is 512 tokens):
We want the model to constantly remember the character's persona, so it is always kept within the context window; this is why it is called memory
The example conversation matters less as the conversation evolves, so it is truncated as the actual conversation grows
Intuitively, if the memory (i.e. the character's persona) is too long, we should keep the first part and truncate the end.
If the prompt (i.e. the example conversation) is too long, we should remove the earliest turns, since the later part is likely the most relevant.
Therefore:
If the memory is too long, it is truncated FROM THE RIGHT
Whereas the prompt is truncated FROM THE LEFT
In practice, the memory must not exceed half the size of the context window. Truncation is applied if any character's memory exceeds this limit.
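The truncation rules above can be sketched as a simple token-budget function. Tokens are modeled here as plain lists for clarity (real models use a tokenizer), and `build_context` is a hypothetical helper, not a Chaiverse API:

```python
CONTEXT_WINDOW = 512  # default context size, in tokens

def build_context(memory, prompt, history, window=CONTEXT_WINDOW):
    """Sketch of the truncation rules described above.

    - memory is capped at half the window and truncated FROM THE RIGHT,
      keeping the beginning of the persona.
    - prompt fills whatever room remains after memory and the live chat,
      truncated FROM THE LEFT, keeping the most recent example turns.
    """
    memory = memory[: window // 2]            # right truncation: keep the start
    budget = window - len(memory) - len(history)
    prompt = prompt[-budget:] if budget > 0 else []  # left truncation: keep the end
    return memory + prompt + history

# Example: a 300-token persona, 200-token example conversation,
# and 100 tokens of live chat, squeezed into a 512-token window.
ctx = build_context(["m"] * 300, ["p"] * 200, ["h"] * 100)
# memory is capped at 256 tokens, leaving 512 - 256 - 100 = 156
# tokens for the tail of the example conversation.
```

Note the asymmetry: the persona keeps its head because the opening lines usually define who the character is, while the example conversation keeps its tail because its most recent turns best match the current chat.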
There are ~5 million characters that are user-generated on the