Wednesday, October 2, 2024

Some notes on Mistral 7B

Mistral 7B is a 7-billion-parameter model released by Mistral AI under the Apache 2.0 license. It is available in both instruct (instruction-following) and text-completion variants.

The Mistral AI team has noted that Mistral 7B:

Outperforms Llama 2 13B on all benchmarks

Outperforms Llama 1 34B on many benchmarks

Approaches CodeLlama 7B performance on code, while remaining good at English tasks

Mistral 0.3 supports function calling with Ollama’s raw mode.

Example raw prompt

[AVAILABLE_TOOLS] [{"type": "function", "function": {"name": "get_current_weather", "description": "Get the current weather", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "format": {"type": "string", "enum": ["celsius", "fahrenheit"], "description": "The temperature unit to use. Infer this from the users location."}}, "required": ["location", "format"]}}}][/AVAILABLE_TOOLS][INST] What is the weather like today in San Francisco [/INST]

Example response

[TOOL_CALLS] [{"name": "get_current_weather", "arguments": {"location": "San Francisco, CA", "format": "celsius"}}]
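The raw prompt and response above can be assembled and parsed programmatically. A minimal Python sketch, assuming the tag format follows the example shown (the helper names `build_raw_prompt` and `parse_tool_calls` are illustrative, not part of any library):

```python
import json

def build_raw_prompt(tools, user_message):
    # Wrap a tool list and user message in the raw-prompt tags shown above.
    return (
        "[AVAILABLE_TOOLS] " + json.dumps(tools) + "[/AVAILABLE_TOOLS]"
        "[INST] " + user_message + " [/INST]"
    )

def parse_tool_calls(response):
    # Extract tool calls from a reply that begins with the [TOOL_CALLS] tag.
    prefix = "[TOOL_CALLS] "
    if not response.startswith(prefix):
        return []  # plain text answer, no tool call
    return json.loads(response[len(prefix):])

# The tool schema from the example prompt above:
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string",
                             "description": "The city and state, e.g. San Francisco, CA"},
                "format": {"type": "string", "enum": ["celsius", "fahrenheit"],
                           "description": "The temperature unit to use."},
            },
            "required": ["location", "format"],
        },
    },
}]

prompt = build_raw_prompt(tools, "What is the weather like today in San Francisco")

# Parsing the example response shown above:
calls = parse_tool_calls(
    '[TOOL_CALLS] [{"name": "get_current_weather", '
    '"arguments": {"location": "San Francisco, CA", "format": "celsius"}}]'
)
print(calls[0]["name"])  # get_current_weather
```

The prompt string this builds matches the raw example above and can be sent through Ollama's raw mode, which skips the default prompt template.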

Variations

instruct: Instruct models are fine-tuned to follow instructions in a conversational format.

text: Text models are the base foundation model without any fine-tuning for conversations, and are best used for simple text completion.

Usage

CLI

Instruct:

ollama run mistral

API

Example:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Here is a story about llamas eating grass"
}'
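By default, /api/generate streams its reply as newline-delimited JSON objects, each carrying a "response" fragment and a "done" flag. A minimal Python sketch of assembling the streamed fragments (the chunk contents here are illustrative, not a real model reply):

```python
import json

def collect_stream(lines):
    # Join the "response" fragments from the streamed JSON lines.
    text = []
    for line in lines:
        chunk = json.loads(line)
        text.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(text)

# Illustrative chunks in the shape /api/generate streams back:
stream = [
    '{"model": "mistral", "response": "Once upon a time", "done": false}',
    '{"model": "mistral", "response": ", a llama grazed.", "done": false}',
    '{"model": "mistral", "response": "", "done": true}',
]
print(collect_stream(stream))  # Once upon a time, a llama grazed.
```

Passing "stream": false in the request body returns a single JSON object instead.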

To run Mistral locally, use ollama run mistral.

References:

https://ollama.com/library/mistral

