
mistral-7b-instruct-v0.2-lora

Beta

Model ID: @cf/mistral/mistral-7b-instruct-v0.2-lora

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.2.

Properties

Task Type: Text Generation

Use the Playground

Try out this model in the Workers AI Model Playground. It requires no setup or authentication and is an instant way to preview and test a model directly in the browser.

Launch the Model Playground

Code Examples

Worker
export interface Env {
  AI: Ai;
}

export default {
  async fetch(request, env): Promise<Response> {
    const response = await env.AI.run("@cf/mistral/mistral-7b-instruct-v0.2-lora", {
      prompt: "tell me a story",
      raw: true, // skip applying the default chat template
      lora: "00000000-0000-0000-0000-000000000", // the finetune ID or name
    });
    return Response.json(response);
  },
} satisfies ExportedHandler<Env>;
curl
curl https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/ai/run/@cf/mistral/mistral-7b-instruct-v0.2-lora \
  -X POST \
  -H "Authorization: Bearer $CLOUDFLARE_AUTH_TOKEN" \
  -d '{
    "prompt": "tell me a story",
    "raw": true,
    "lora": "00000000-0000-0000-0000-000000000"
  }'
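The same REST endpoint can be called from any HTTP client. Here is a minimal TypeScript sketch for Node.js 18+ (which has a global fetch), assuming CLOUDFLARE_ACCOUNT_ID and CLOUDFLARE_AUTH_TOKEN are set in the environment as in the curl example:

// Minimal sketch: call the REST endpoint shown above from Node.js 18+.
// Assumes CLOUDFLARE_ACCOUNT_ID and CLOUDFLARE_AUTH_TOKEN are set in the environment.
const accountId = process.env.CLOUDFLARE_ACCOUNT_ID;
const token = process.env.CLOUDFLARE_AUTH_TOKEN;

const res = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${accountId}/ai/run/@cf/mistral/mistral-7b-instruct-v0.2-lora`,
  {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
    body: JSON.stringify({
      prompt: "tell me a story",
      raw: true, // a boolean, matching the input schema below
      lora: "00000000-0000-0000-0000-000000000", // the finetune ID or name
    }),
  },
);
console.log(await res.json());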

Prompting

Part of getting good results from text generation models is asking questions correctly. LLMs are usually trained with specific predefined templates, which should then be used with the model’s tokenizer for better results when doing inference tasks.

We recommend using unscoped prompts for inference with LoRA.

Unscoped prompts

You can use unscoped prompts to send a single question to the model without worrying about providing any context. Workers AI will automatically convert your { prompt: } input to a reasonable default scoped prompt internally so that you get the best possible prediction.

{
  prompt: "tell me a joke about cloudflare"
}

You can also use unscoped prompts to construct the model chat template manually. In this case, you can use the raw parameter. Here’s an input example of a Mistral chat template prompt:

{
  prompt: "<s>[INST]comedian[/INST]</s>\n[INST]tell me a joke about cloudflare[/INST]",
  raw: true
}
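If you assemble raw prompts in code, a small helper keeps the tags consistent. The sketch below reproduces the template shown above; buildMistralPrompt is a hypothetical helper, and the exact tag layout your fine-tune expects may differ, so verify it against the template the model was trained with:

// Hypothetical helper: build a raw Mistral-style prompt from a system hint
// and a list of user turns. The tag layout mirrors the example above;
// confirm it matches the template your fine-tune was trained with.
function buildMistralPrompt(system: string, userTurns: string[]): string {
  const head = `<s>[INST]${system}[/INST]</s>`;
  const turns = userTurns.map((turn) => `[INST]${turn}[/INST]`).join("\n");
  return `${head}\n${turns}`;
}

const prompt = buildMistralPrompt("comedian", ["tell me a joke about cloudflare"]);
// => "<s>[INST]comedian[/INST]</s>\n[INST]tell me a joke about cloudflare[/INST]"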

Responses

{
  "response": "The origin of the phrase \"Hello, World\" is not well-documented, but it is believed to have originated in the early days of computing. In the 1970s, when personal computers were first becoming popular, many programming languages, including C, had a simple \"Hello, World\" program that was used to demonstrate the basics of programming.\nThe idea behind the program was to print the words \"Hello, World\" on the screen, and it was often used as a first program for beginners to learn the basics of programming. Over time, the phrase \"Hello, World\" became a common greeting among programmers and computer enthusiasts, and it is now widely recognized as a symbol of the computing industry.\nIt's worth noting that the phrase \"Hello, World\" is not a specific phrase that was coined by any one person or organization, but rather a catchphrase that evolved over time as a result of its widespread use in the computing industry."
}

API Schema

The following schemas are based on JSON Schema.

Input JSON Schema
{
  "type": "object",
  "oneOf": [
    {
      "properties": {
        "prompt": {
          "type": "string",
          "minLength": 1,
          "maxLength": 6144
        },
        "raw": {
          "type": "boolean",
          "default": false
        },
        "stream": {
          "type": "boolean",
          "default": false
        },
        "max_tokens": {
          "type": "integer",
          "default": 256
        },
        "temperature": {
          "type": "number",
          "minimum": 0,
          "maximum": 5
        },
        "top_p": {
          "type": "number",
          "minimum": 0,
          "maximum": 2
        },
        "top_k": {
          "type": "integer",
          "minimum": 1,
          "maximum": 50
        },
        "seed": {
          "type": "integer",
          "minimum": 1,
          "maximum": 9999999999
        },
        "repetition_penalty": {
          "type": "number",
          "minimum": 0,
          "maximum": 2
        },
        "frequency_penalty": {
          "type": "number",
          "minimum": 0,
          "maximum": 2
        },
        "presence_penalty": {
          "type": "number",
          "minimum": 0,
          "maximum": 2
        },
        "lora": {
          "type": "string"
        }
      },
      "required": [
        "prompt"
      ]
    },
    {
      "properties": {
        "messages": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "role": {
                "type": "string"
              },
              "content": {
                "type": "string",
                "maxLength": 6144
              }
            },
            "required": [
              "role",
              "content"
            ]
          }
        },
        "stream": {
          "type": "boolean",
          "default": false
        },
        "max_tokens": {
          "type": "integer",
          "default": 256
        },
        "temperature": {
          "type": "number",
          "minimum": 0,
          "maximum": 5
        },
        "top_p": {
          "type": "number",
          "minimum": 0,
          "maximum": 2
        },
        "top_k": {
          "type": "integer",
          "minimum": 1,
          "maximum": 50
        },
        "seed": {
          "type": "integer",
          "minimum": 1,
          "maximum": 9999999999
        },
        "repetition_penalty": {
          "type": "number",
          "minimum": 0,
          "maximum": 2
        },
        "frequency_penalty": {
          "type": "number",
          "minimum": 0,
          "maximum": 2
        },
        "presence_penalty": {
          "type": "number",
          "minimum": 0,
          "maximum": 2
        }
      },
      "required": [
        "messages"
      ]
    }
  ]
}
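The second branch of the input schema accepts a chat-style messages array in place of a single prompt; note that lora is only defined on the prompt branch. A minimal Worker sketch using the messages shape (the message contents here are illustrative):

// Sketch: the messages input variant from the schema above. Per the schema,
// lora is only defined on the prompt branch, so this example uses the
// standard chat fields only.
export interface Env {
  AI: Ai;
}

export default {
  async fetch(request, env): Promise<Response> {
    const response = await env.AI.run("@cf/mistral/mistral-7b-instruct-v0.2-lora", {
      messages: [
        { role: "system", content: "You are a concise storyteller." },
        { role: "user", content: "tell me a story" },
      ],
      max_tokens: 256, // the schema default, shown here explicitly
    });
    return Response.json(response);
  },
} satisfies ExportedHandler<Env>;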
Output JSON Schema
{
  "oneOf": [
    {
      "type": "object",
      "contentType": "application/json",
      "properties": {
        "response": {
          "type": "string"
        }
      }
    },
    {
      "type": "string",
      "contentType": "text/event-stream",
      "format": "binary"
    }
  ]
}
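When stream is true, the second output variant applies: the binding returns a stream of server-sent events rather than a JSON object. A minimal sketch of a Worker that requests a streamed response and forwards it to the client:

// Sketch: request a streamed response (the text/event-stream output
// variant above) and pass it through to the client unchanged.
export interface Env {
  AI: Ai;
}

export default {
  async fetch(request, env): Promise<Response> {
    const stream = await env.AI.run("@cf/mistral/mistral-7b-instruct-v0.2-lora", {
      prompt: "tell me a story",
      stream: true,
    });
    return new Response(stream, {
      headers: { "content-type": "text/event-stream" },
    });
  },
} satisfies ExportedHandler<Env>;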

More resources