
LLMSpeak Mistral AI
===================

[![License: MIT](https://camo.githubusercontent.com/fdf2982b9f5d7489dcf44570e714e3a15fce6253e0cc6b5aa61a075aac2ff71b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c6963656e73652d4d49542d79656c6c6f772e737667)](https://opensource.org/licenses/MIT)[![PHP](https://camo.githubusercontent.com/0f16581d1180dbfd4c0e13166ec1267d4ad2f2fab8281ea6d6b284cf5c65d921/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5048502d382e322532422d626c75652e737667)](https://php.net/releases/)[![Laravel](https://camo.githubusercontent.com/906dea2eb7060f4769da2f105c19b13c8e426284048c7217e23102fb572d64a8/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c61726176656c2d31302e7825374331312e7825374331322e782d7265642e737667)](https://laravel.com)[![Latest Version on Packagist](https://camo.githubusercontent.com/7e69cf2fa307ae3201d549c18f8d008b4dbc8f05ee255c5dbcc451ec9b3877b5/68747470733a2f2f696d672e736869656c64732e696f2f7061636b61676973742f762f6c6c6d2d737065616b2f6d69737472616c2d61692e7376673f7374796c653d666c61742d737175617265)](https://packagist.org/packages/llm-speak/mistral-ai)[![Total Downloads](https://camo.githubusercontent.com/f263d33e657a8c1f2676b0d7bba4deaa378a0358b970d2bdc4b311534c6e4441/68747470733a2f2f696d672e736869656c64732e696f2f7061636b61676973742f64742f6c6c6d2d737065616b2f6d69737472616c2d61692e7376673f7374796c653d666c61742d737175617265)](https://packagist.org/packages/llm-speak/mistral-ai)

**LLMSpeak Mistral AI** is a Laravel package that provides a fluent, Laravel-native interface for integrating with Mistral AI's powerful language models. Built as part of the LLMSpeak ecosystem, it offers seamless access to Mistral's cutting-edge models including Mistral Large, Mistral Medium, and specialized models like Codestral for code generation.

> **Note:** This package is part of the larger [LLMSpeak ecosystem](https://github.com/projectsaturnstudios/llm-speak). For universal provider switching and standardized interfaces, check out the [LLMSpeak Core](https://github.com/projectsaturnstudios/llm-speak-core) package.

Table of Contents
-----------------

- [Features](#features)
- [Get Started](#get-started)
- [Usage](#usage)
    - [Chat Completions](#chat-completions)
    - [Model Selection](#model-selection)
    - [Embeddings](#embeddings)
    - [Advanced Embedding Configuration](#advanced-embedding-configuration)
    - [Universal LLMSpeak Interface](#universal-llmspeak-interface)
    - [Universal Format Mapping](#universal-format-mapping)
    - [Advanced Universal Configuration](#advanced-universal-configuration)
    - [Why Use Universal Interface?](#why-use-universal-interface)
    - [Fluent Request Building](#fluent-request-building)
    - [Batch Configuration](#batch-configuration)
    - [Tool Calling](#tool-calling)
    - [Multiple Choices](#multiple-choices)
    - [Response Formatting](#response-formatting)
    - [Streaming Responses](#streaming-responses)
    - [Advanced Configuration](#advanced-configuration)
- [Response Handling](#response-handling)
- [Testing](#testing)
- [Credits](#credits)
- [License](#license)

Features
--------

- **🧠 Advanced Models**: Access to Mistral Large, Medium, Small, and specialized models like Codestral
- **🚀 Laravel Native**: Full Laravel integration with automatic service discovery
- **🔧 Fluent Interface**: Expressive request builders with method chaining
- **📊 Laravel Data**: Powered by Spatie Laravel Data for robust data validation
- **🛠️ Tool Support**: Complete function calling capabilities with flexible tool choice
- **📝 Embeddings**: Advanced embedding generation with multiple output formats
- **🎛️ Output Control**: Precise control over embedding dimensions and data types
- **💨 Streaming**: Real-time streaming responses for chat completions
- **🎯 JSON Mode**: Structured output generation for reliable data extraction
- **🎯 Type Safety**: Full PHP 8.2+ type declarations and IDE support
- **🔐 Secure**: Built-in API key management and request validation

Get Started
-----------

> **Requires [PHP 8.2+](https://php.net/releases/) and Laravel 10.x/11.x/12.x**

Install the package via [Composer](https://getcomposer.org/):

```
composer require llm-speak/mistral-ai
```

The package will automatically register itself via Laravel's package discovery.
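If you have opted out of package discovery (via `extra.laravel.dont-discover` in your `composer.json`), you can register the service provider manually. Note that the provider class name below is an assumption based on the package's namespace, not something confirmed by this README:

```
// config/app.php — only needed when package discovery is disabled.
// NOTE: the provider class name is assumed from the package namespace.
'providers' => [
    // ...
    LLMSpeak\Mistral\Providers\MistralAIServiceProvider::class,
],
```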

### Environment Configuration

Add your Mistral AI API key to your `.env` file:

```
MISTRAL_API_KEY=your_mistral_api_key_here
```

Get your API key from [Mistral AI Console](https://console.mistral.ai/).

Usage
-----

### Chat Completions

The simplest way to chat with Mistral AI models:

```
use LLMSpeak\Mistral\MistralCompletionsRequest;

$request = new MistralCompletionsRequest(
    model: 'mistral-large-latest',
    messages: [
        ['role' => 'user', 'content' => 'Explain quantum computing in simple terms']
    ]
);

$response = $request->post();

echo $response->getTextContent(); // "Quantum computing is..."
```

### Model Selection

Choose the right Mistral model for your use case:

```
// Most capable model for complex reasoning
$request = new MistralCompletionsRequest(
    model: 'mistral-large-latest',
    messages: $messages
);

// Balanced performance and speed
$request = new MistralCompletionsRequest(
    model: 'mistral-medium-latest',
    messages: $messages
);

// Fast and efficient for simple tasks
$request = new MistralCompletionsRequest(
    model: 'mistral-small-latest',
    messages: $messages
);

// Specialized for code generation
$request = new MistralCompletionsRequest(
    model: 'codestral-latest',
    messages: $messages
);
```

### Embeddings

Generate embeddings with advanced output control:

```
use LLMSpeak\Mistral\MistralEmbeddingsRequest;

// Simple text embedding
$request = new MistralEmbeddingsRequest(
    model: 'mistral-embed',
    input: 'Generate embeddings for this text'
);

$response = $request->post();

$embeddings = $response->getEmbeddings();
$dimensions = $response->getDimensions();
```

### Advanced Embedding Configuration

Control output format and dimensions:

```
// High-precision embeddings
$request = new MistralEmbeddingsRequest(
    model: 'mistral-embed',
    input: 'Research paper abstract content'
)
->setOutputDimension(1024)     // Custom dimensions
->setOutputDtype('float');     // High precision

// Memory-efficient embeddings
$request = new MistralEmbeddingsRequest(
    model: 'mistral-embed',
    input: ['Text 1', 'Text 2', 'Text 3']  // Batch processing
)
->setOutputDimension(512)      // Reduced dimensions
->setOutputDtype('int8');      // Quantized format

// Ultra-compact embeddings
$request = new MistralEmbeddingsRequest(
    model: 'mistral-embed',
    input: $documentTexts
)
->setOutputDimension(256)
->setOutputDtype('binary');    // Maximum compression

$response = $request->post();

// Access different embedding formats
$embeddings = $response->getEmbeddings();        // Raw embeddings
$firstEmbedding = $response->getFirstEmbedding(); // Single vector
$count = $response->getEmbeddingCount();         // Number of embeddings
```
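A common next step with embeddings is comparing vectors, for example to rank documents by similarity to a query. This helper is plain PHP with no package dependencies:

```
// Cosine similarity between two equal-length embedding vectors.
// Returns 1.0 for identical directions, 0.0 for orthogonal vectors.
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;
    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value * $value;
        $normB += $b[$i] * $b[$i];
    }
    return $dot / (sqrt($normA) * sqrt($normB));
}

$query = $response->getFirstEmbedding();
foreach ($response->getEmbeddings() as $index => $vector) {
    echo "Document {$index}: " . cosineSimilarity($query, $vector) . "\n";
}
```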

### Universal LLMSpeak Interface

For **provider-agnostic embeddings** that work across Mistral, Gemini, Ollama, and other providers, use the universal LLMSpeak interface:

```
use LLMSpeak\Core\Support\Facades\LLMSpeak;
use LLMSpeak\Core\Support\Requests\LLMSpeakEmbeddingsRequest;

// Universal request works with ANY provider
$request = new LLMSpeakEmbeddingsRequest(
    model: 'mistral-embed',
    input: 'Generate embeddings for this text',
    encoding_format: 'float',    // Maps to Mistral's outputDtype
    dimensions: 1024,            // Maps to Mistral's outputDimension
    task_type: null              // Not applicable for Mistral
);

// Execute with Mistral - same code works with other providers!
$response = LLMSpeak::embeddingsFrom('mistral', $request);

// Universal response methods
$embeddings = $response->getAllEmbeddings();
$firstVector = $response->getFirstEmbedding();
$dimensions = $response->getDimensions();
$tokenUsage = $response->getTotalTokens();
```

### Universal Format Mapping

The universal interface automatically maps encoding formats to Mistral's native types:

```
// Float precision (maps to Mistral's outputDtype: 'float')
$floatRequest = new LLMSpeakEmbeddingsRequest(
    model: 'mistral-embed',
    input: 'High precision embeddings',
    encoding_format: 'float',    // → outputDtype: 'float'
    dimensions: 1024,            // → outputDimension: 1024
    task_type: null
);

$floatResponse = LLMSpeak::embeddingsFrom('mistral', $floatRequest);

// Quantized format (maps to Mistral's outputDtype: 'int8')
$quantizedRequest = new LLMSpeakEmbeddingsRequest(
    model: 'mistral-embed',
    input: 'Memory-efficient embeddings',
    encoding_format: 'base64',   // → outputDtype: 'int8' (quantized)
    dimensions: 512,             // → outputDimension: 512
    task_type: null
);

$quantizedResponse = LLMSpeak::embeddingsFrom('mistral', $quantizedRequest);

// Batch processing with universal interface
$batchRequest = new LLMSpeakEmbeddingsRequest(
    model: 'mistral-embed',
    input: [
        'Document one for embeddings',
        'Document two for embeddings',
        'Document three for embeddings'
    ],
    encoding_format: 'float',
    dimensions: null,            // Use model default
    task_type: null
);

$batchResponse = LLMSpeak::embeddingsFrom('mistral', $batchRequest);

echo "Generated {$batchResponse->getEmbeddingCount()} embeddings";
echo "Vector dimensions: {$batchResponse->getDimensions()}";
```

### Advanced Universal Configuration

Access Mistral's advanced features through the universal interface:

```
// Ultra-compact embeddings with automatic format mapping
$compactRequest = new LLMSpeakEmbeddingsRequest(
    model: 'mistral-embed',
    input: 'Large document corpus for storage',
    encoding_format: 'base64',   // Automatically maps to binary/int8
    dimensions: 256,             // Reduced dimensions for storage
    task_type: null
);

$compactResponse = LLMSpeak::embeddingsFrom('mistral', $compactRequest);

// Different models with same interface
$models = ['mistral-embed', 'codestral-embed'];
foreach ($models as $model) {
    $request = new LLMSpeakEmbeddingsRequest(
        model: $model,
        input: 'Code snippet for analysis',
        encoding_format: 'float',
        dimensions: 1024,
        task_type: null
    );

    $response = LLMSpeak::embeddingsFrom('mistral', $request);
    echo "Model {$model}: {$response->getDimensions()} dimensions";
}
```

### Why Use Universal Interface?

- ✅ **Provider Independence:** Switch between Mistral, Gemini, and Ollama with zero code changes
- ✅ **Automatic Mapping:** Encoding formats are automatically mapped to provider-specific types
- ✅ **Future Proof:** New providers are supported automatically
- ✅ **Consistent API:** Same methods across all providers
- ✅ **Type Safety:** Full PHP type declarations and IDE support

```
// Same request works with different providers!
$request = new LLMSpeakEmbeddingsRequest(
    model: 'embedding-model',
    input: 'Universal text input',
    encoding_format: 'float',
    dimensions: 512,
    task_type: null
);

$mistralResponse = LLMSpeak::embeddingsFrom('mistral', $request); // Mistral AI
$geminiResponse = LLMSpeak::embeddingsFrom('gemini', $request);   // Google AI
$ollamaResponse = LLMSpeak::embeddingsFrom('ollama', $request);   // Local models
```

### Fluent Request Building

Build complex requests using the fluent interface:

```
use LLMSpeak\Mistral\MistralCompletionsRequest;

$request = new MistralCompletionsRequest(
    model: 'mistral-large-latest',
    messages: [
        ['role' => 'user', 'content' => 'Write a creative story about AI']
    ]
)
->setMaxTokens(2000)
->setTemperature(0.8)
->setPresencePenalty(0.1)
->setFrequencyPenalty(0.1)
->setStop(['THE END', '---']);

$response = $request->post();

// Access response properties
echo $response->id;                    // chatcmpl-abc123
echo $response->model;                 // mistral-large-latest
echo $response->getTotalTokens();      // 1850
echo $response->getTextContent();      // Generated story
```

### Batch Configuration

Set multiple parameters at once:

```
$request = new MistralCompletionsRequest(
    model: 'mistral-medium-latest',
    messages: $conversation
)->setMultiple([
    'maxTokens' => 1500,
    'temperature' => 0.7,
    'presencePenalty' => 0.2,
    'frequencyPenalty' => 0.1,
    'stop' => ['Human:', 'Assistant:'],
    'n' => 3  // Generate 3 different responses
]);
```

### Tool Calling

Enable Mistral models to use external functions:

```
$tools = [
    [
        'type' => 'function',
        'function' => [
            'name' => 'get_weather_forecast',
            'description' => 'Get weather forecast for a specific location',
            'parameters' => [
                'type' => 'object',
                'properties' => [
                    'location' => [
                        'type' => 'string',
                        'description' => 'City and country (e.g., "Paris, France")'
                    ],
                    'days' => [
                        'type' => 'integer',
                        'description' => 'Number of days to forecast (1-7)',
                        'minimum' => 1,
                        'maximum' => 7
                    ]
                ],
                'required' => ['location']
            ]
        ]
    ]
];

$request = new MistralCompletionsRequest(
    model: 'mistral-large-latest',
    messages: [
        ['role' => 'user', 'content' => 'What\'s the weather forecast for London this week?']
    ]
)
->setTools($tools)
->setToolChoice('auto');  // Let model decide when to use tools

$response = $request->post();

// Check for tool usage
if ($response->usedTools()) {
    $toolCalls = $response->getToolCalls();
    foreach ($toolCalls as $call) {
        echo "Function: {$call['function']['name']}\n";
        echo "Arguments: " . json_encode($call['function']['arguments']) . "\n";
    }
}
```
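After running the tool locally, you normally send its result back so the model can produce a final answer. The round-trip below is a sketch: the `role: 'tool'` message shape follows Mistral's chat API, but whether this package accepts it verbatim is an assumption, and `$forecast` stands in for your own tool logic:

```
// Hypothetical continuation of the example above. The 'tool' message
// format follows Mistral's chat API; passing it through this package
// unchanged is an assumption, not confirmed by this README.
$call = $response->getToolCalls()[0];
$forecast = ['monday' => 'rain', 'tuesday' => 'sunny']; // your tool's output

$followUp = new MistralCompletionsRequest(
    model: 'mistral-large-latest',
    messages: [
        ['role' => 'user', 'content' => 'What\'s the weather forecast for London this week?'],
        ['role' => 'assistant', 'content' => '', 'tool_calls' => [$call]],
        [
            'role' => 'tool',
            'tool_call_id' => $call['id'],
            'name' => $call['function']['name'],
            'content' => json_encode($forecast),
        ],
    ]
);

echo $followUp->post()->getTextContent(); // Final answer using the tool result
```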

### Multiple Choices

Generate multiple response alternatives:

```
$request = new MistralCompletionsRequest(
    model: 'mistral-large-latest',
    messages: [
        ['role' => 'user', 'content' => 'Give me three different marketing slogans for an eco-friendly product']
    ]
)
->setN(3)                    // Generate 3 different responses
->setTemperature(0.9);       // Higher creativity for variety

$response = $request->post();

// Access all choices
$allChoices = $response->getAllChoices();
foreach ($allChoices as $index => $choice) {
    echo "Option " . ($index + 1) . ": " . $choice['message']['content'] . "\n\n";
}

// Or get a specific choice
$firstChoice = $response->getChoice(0);
$secondChoice = $response->getChoice(1);
```

### Response Formatting

Control output format for structured data:

```
$request = new MistralCompletionsRequest(
    model: 'mistral-large-latest',
    messages: [
        [
            'role' => 'user',
            'content' => 'Extract the following information from this text as JSON: name, age, occupation. Text: "John Smith is a 35-year-old software engineer."'
        ]
    ]
)
->setResponseFormat(['type' => 'json_object'])
->setMaxTokens(200);

$response = $request->post();

$jsonContent = $response->getTextContent();
$data = json_decode($jsonContent, true);

echo "Name: " . $data['name'];           // John Smith
echo "Age: " . $data['age'];             // 35
echo "Occupation: " . $data['occupation']; // software engineer
```
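JSON mode constrains the model's output, but decoding defensively is still worthwhile. This guard uses only PHP's standard `json_decode` flags:

```
// Decode model output defensively: JSON_THROW_ON_ERROR turns malformed
// JSON into a catchable exception instead of a silent null.
function decodeModelJson(string $raw): array
{
    try {
        return json_decode($raw, true, 512, JSON_THROW_ON_ERROR);
    } catch (JsonException $e) {
        // Fall back to an empty result; log or retry in real code.
        return [];
    }
}

$data = decodeModelJson($response->getTextContent());
```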

### Streaming Responses

Enable real-time streaming for long responses:

```
$request = new MistralCompletionsRequest(
    model: 'mistral-large-latest',
    messages: [
        ['role' => 'user', 'content' => 'Write a detailed technical article about machine learning']
    ]
)
->setStream(true)
->setMaxTokens(4000);

$response = $request->post();

// Stream handling will be processed by the CompletionsEndpoint
// Response contains streaming data format
```

### Advanced Configuration

Configure advanced parameters for optimal performance:

```
$request = new MistralCompletionsRequest(
    model: 'mistral-large-latest',
    messages: $conversationHistory
)
->setMaxTokens(4000)
->setTemperature(0.7)
->setPresencePenalty(0.3)     // Encourage topic diversity
->setFrequencyPenalty(0.2)    // Reduce repetition
->setStop(['[END]', '###'])   // Custom stop sequences
->setN(2)                     // Generate 2 alternatives
->setResponseFormat(['type' => 'json_object']);

$response = $request->post();
```

Response Handling
-----------------

Access comprehensive response data:

```
$response = $request->post();

// Basic response info
$responseId = $response->id;
$modelUsed = $response->model;
$timestamp = $response->created;
$responseObject = $response->object;

// Content access
$textContent = $response->getTextContent();
$allChoices = $response->getAllChoices();
$firstChoice = $response->getChoice(0);

// Token usage analysis
$totalTokens = $response->getTotalTokens();
$promptTokens = $response->getPromptTokens();
$completionTokens = $response->getCompletionTokens();

// Completion analysis
$finishReason = $response->getFinishReason();
$completedNaturally = $response->completedNaturally();
$hitTokenLimit = $response->reachedTokenLimit();
$stoppedBySequence = $response->stoppedBySequence();

// Tool usage analysis
$usedTools = $response->usedTools();
$toolCalls = $response->getToolCalls();
$hasAnyToolCalls = $response->hasAnyToolCalls();

// Quality metrics
$responseQuality = $response->calculateQualityScore();
$isHighQuality = $response->isHighQuality();

// System information
$systemFingerprint = $response->system_fingerprint;

// Convert to array for storage
$responseArray = $response->toArray();

// Embeddings Response Handling
$embeddingResponse = $embeddingRequest->post();

$embeddings = $embeddingResponse->getEmbeddings();
$firstVector = $embeddingResponse->getFirstEmbedding();
$dimensions = $embeddingResponse->getDimensions();
$embeddingCount = $embeddingResponse->getEmbeddingCount();
$tokenUsage = $embeddingResponse->getTotalTokens();
```

Testing
-------

The package provides testing utilities for mocking Mistral responses:

```
use LLMSpeak\Mistral\MistralCompletionsRequest;
use LLMSpeak\Mistral\MistralCompletionsResponse;
use LLMSpeak\Mistral\MistralEmbeddingsResponse;

// Create a mock chat completion response
$mockResponse = new MistralCompletionsResponse(
    id: 'chatcmpl-test123',
    object: 'chat.completion',
    created: time(),
    model: 'mistral-large-latest',
    choices: [
        [
            'index' => 0,
            'message' => [
                'role' => 'assistant',
                'content' => 'Mock response content'
            ],
            'finish_reason' => 'stop'
        ]
    ],
    usage: [
        'prompt_tokens' => 15,
        'completion_tokens' => 20,
        'total_tokens' => 35
    ]
);

// Test your application logic
$this->assertEquals('Mock response content', $mockResponse->getTextContent());
$this->assertEquals(35, $mockResponse->getTotalTokens());
$this->assertTrue($mockResponse->completedNaturally());

// Create a mock embeddings response
$mockEmbeddingResponse = new MistralEmbeddingsResponse(
    id: 'emb-test123',
    object: 'list',
    data: [
        [
            'object' => 'embedding',
            'embedding' => array_fill(0, 1024, 0.1),
            'index' => 0
        ]
    ],
    model: 'mistral-embed',
    usage: [
        'prompt_tokens' => 5,
        'total_tokens' => 5
    ],
    status_code: 200,
    headers: []
);

// Test embedding functionality
$this->assertEquals(1024, $mockEmbeddingResponse->getDimensions());
$this->assertEquals(1, $mockEmbeddingResponse->getEmbeddingCount());
```

Credits
-------

- [Project Saturn Studios](https://github.com/projectsaturnstudios)
- [Mistral AI](https://mistral.ai) for providing advanced language models

License
-------

The MIT License (MIT). Please see [License File](LICENSE.md) for more information.

---

**Part of the LLMSpeak Ecosystem** - Built with ❤️ by [Project Saturn Studios](https://projectsaturnstudios.com)
