
oziri/laravel-llm-suite
=======================

A unified driver-based LLM toolkit for Laravel.

v0.2.0 (5 months ago) · MIT · PHP ^8.1 · CI passing · [1 issue](https://github.com/oziriemeka/laravel-llm-suite/issues) · [1 PR](https://github.com/oziriemeka/laravel-llm-suite/pulls)

Created Dec 4 · pushed 3 months ago · 1 watcher · synced 1 month ago

[Source](https://github.com/oziriemeka/laravel-llm-suite) · [Packagist](https://packagist.org/packages/oziri/laravel-llm-suite) · [RSS](/packages/oziri-laravel-llm-suite/feed)

 [![Laravel LLM Suite](https://raw.githubusercontent.com/OziriEmeka/laravel-llm-suite/main/assets/logo/laravel-llm-suite-logo.png)](https://raw.githubusercontent.com/OziriEmeka/laravel-llm-suite/main/assets/logo/laravel-llm-suite-logo.png)

Laravel LLM Suite
=================


A unified, driver-based Laravel toolkit for working with multiple LLM providers for chat, image generation, and more.

Features
--------


- **Unified API** - Same interface regardless of provider (OpenAI, Anthropic, LM Studio, etc.)
- **Driver Pattern** - Switch providers like Laravel's Storage or Mail systems
- **Conversation Management** - Automatic message history with session or database storage
- **Token Usage Tracking** - Monitor token consumption for cost management
- **Local LLM Support** - Run models locally with LM Studio for development and testing
- **Laravel Native** - Config files, facades, service providers
- **Testable** - Built-in faking support for testing without API calls

Supported Providers
-------------------

| Provider | Driver | Chat | Image | Models List |
| --- | --- | --- | --- | --- |
| **OpenAI** | `openai` | Yes | Yes | Yes |
| **Anthropic** | `anthropic` | Yes | - | Yes |
| **LM Studio** | `lmstudio` | Yes | - | Yes |
| **Dummy** | `dummy` | Yes | Yes | - |

- **OpenAI** - GPT-4, GPT-4.1, DALL-E 3, and other OpenAI models
- **Anthropic** - Claude 3.5 Sonnet, Claude 3 Opus, and other Claude models
- **LM Studio** - Run any open-source LLM locally (Llama, Mistral, Phi, etc.)
- **Dummy** - For testing and offline development (returns configurable mock responses)

Requirements
------------


- PHP 8.1+
- Laravel 10.x, 11.x, or 12.x

Installation
------------


Install via Composer:

```
composer require oziri/laravel-llm-suite
```

Publish the configuration file and migrations:

```
# Publish both config and migrations (recommended)
php artisan vendor:publish --tag=llm-suite

# Or publish separately:
php artisan vendor:publish --tag=llm-suite-config
php artisan vendor:publish --tag=llm-suite-migrations
```

Configuration
-------------


Add your API keys to your `.env` file (keep keys secret; never commit them to version control or share them):

```
# Default provider
LLM_SUITE_DEFAULT=openai

# OpenAI
OPENAI_API_KEY=your-openai-api-key
OPENAI_CHAT_MODEL=gpt-4.1-mini
OPENAI_IMAGE_MODEL=dall-e-3

# Anthropic
ANTHROPIC_API_KEY=your-anthropic-api-key
ANTHROPIC_CHAT_MODEL=claude-3-5-sonnet-20241022

# LM Studio (local)
LMSTUDIO_PROTOCOL=http   # http or https
LMSTUDIO_HOST=127.0.0.1
LMSTUDIO_PORT=1234
LMSTUDIO_API_KEY=        # Optional - leave empty if not using authentication
LMSTUDIO_TIMEOUT=120
# LMSTUDIO_BASE_URL=     # Optional - override protocol/host/port with full URL

# Conversation Storage (optional - database is default)
LLM_CONVERSATION_DRIVER=database   # or 'session'
```

The configuration file (`config/llm-suite.php`) allows you to customize providers:

```
return [
    'default' => env('LLM_SUITE_DEFAULT', 'openai'),

    'providers' => [
        'openai' => [
            'driver' => 'openai',
            'api_key' => env('OPENAI_API_KEY'),
            'base_url' => env('OPENAI_BASE_URL', 'https://api.openai.com/v1'),
            'chat_model' => env('OPENAI_CHAT_MODEL', 'gpt-4.1-mini'),
            'image_model' => env('OPENAI_IMAGE_MODEL', 'dall-e-3'),
        ],

        'anthropic' => [
            'driver' => 'anthropic',
            'api_key' => env('ANTHROPIC_API_KEY'),
            'base_url' => env('ANTHROPIC_BASE_URL', 'https://api.anthropic.com/v1'),
            'chat_model' => env('ANTHROPIC_CHAT_MODEL', 'claude-3-5-sonnet-20241022'),
        ],

        'lmstudio' => [
            'driver' => 'lmstudio',
            'protocol' => env('LMSTUDIO_PROTOCOL', 'http'),
            'host' => env('LMSTUDIO_HOST', '127.0.0.1'),
            'port' => env('LMSTUDIO_PORT', 1234),
            'api_key' => env('LMSTUDIO_API_KEY'),
            'chat_model' => env('LMSTUDIO_CHAT_MODEL', 'local-model'),
            'timeout' => env('LMSTUDIO_TIMEOUT', 120),
        ],

        'dummy' => [
            'driver' => 'dummy',
        ],
    ],

    // Conversation storage settings
    'conversation' => [
        'driver' => env('LLM_CONVERSATION_DRIVER', 'database'),
        'table' => 'llm_conversations',
    ],
];
```

Usage
-----


### Basic Chat


```
use Llm;

// Simple chat - returns string
$response = Llm::chat('Explain transformers in simple terms.');

// Get full response object with metadata
$response = Llm::chatWithResponse('Explain transformers in simple terms.');
echo $response->content;
echo $response->model;
echo $response->latencyMs;
```

### Switching Providers


```
use Llm;

// Use default provider (from config)
$response = Llm::chat('Hello!');

// Switch to Anthropic for this request
$response = Llm::using('anthropic')->chat('Write a Laravel policy example.');

// Switch to dummy for offline development
$response = Llm::using('dummy')->chat('Test message');

// Use LM Studio for local models
$response = Llm::using('lmstudio')->chat('Hello from local LLM!');
```

### Override Model Per Request


```
use Llm;

$response = Llm::chat('Explain queues in Laravel.', [
    'model' => 'gpt-4.1',
    'temperature' => 0.7,
    'max_tokens' => 1000,
]);
```

### System Prompts


```
use Llm;

$response = Llm::chat('What is 2 + 2?', [
    'system' => 'You are a helpful math tutor. Always explain your reasoning.',
]);
```

### Conversations (Multi-turn Chat)


Build chatbots and maintain context across multiple messages:

```
use Llm;

// Start a NEW conversation (auto-generates UUID)
$conversation = Llm::conversation();
$conversation->system('You are a helpful assistant.');

// Chat with automatic context - the LLM remembers previous messages!
$response = $conversation->chat('My name is John.');
$response = $conversation->chat('What is my name?'); // "Your name is John."

// Get the conversation ID for later use
$conversationId = $conversation->getId();
// e.g., "550e8400-e29b-41d4-a716-446655440000"
```

**Resume an existing conversation:**

```
// Resume conversation using the saved ID
$conversation = Llm::conversation($conversationId);
$response = $conversation->chat('What else do you remember about me?');
```

**Use a specific provider for conversations:**

```
$conversation = Llm::using('openai')->conversation();
// or
$conversation = Llm::using('lmstudio')->conversation();
```

**Practical API example:**

```
// Start new chat
Route::post('/chat/new', function (Request $request) {
    $conversation = Llm::conversation();
    $conversation->system('You are a helpful assistant.');
    $response = $conversation->chat($request->input('message'));

    return [
        'conversation_id' => $conversation->getId(),
        'response' => $response->content,
        'tokens' => $response->tokenUsage->totalTokens,
    ];
});

// Continue existing chat
Route::post('/chat/{id}', function (Request $request, string $id) {
    $conversation = Llm::conversation($id);
    $response = $conversation->chat($request->input('message'));

    return [
        'response' => $response->content,
        'tokens' => $response->tokenUsage->totalTokens,
    ];
});
```

**Other conversation methods:**

```
$conversation->getMessages();       // Get all messages
$conversation->getMessageCount();   // Count messages
$conversation->getSystemPrompt();   // Get system prompt
$conversation->clear();             // Clear history (keeps system prompt)
$conversation->delete();            // Delete entire conversation
$conversation->export();            // Export as array
```

**List all conversations:**

```
use Llm;

// Get all conversation IDs
$conversationIds = Llm::conversations();
// ['550e8400-e29b-41d4-a716-446655440000', '6ba7b810-9dad-11d1-80b4-00c04fd430c8', ...]

// With database driver, you can also get metadata
$store = Llm::getConversationStore();
if ($store instanceof \Oziri\LlmSuite\ConversationStores\DatabaseStore) {
    $conversations = $store->allWithMetadata();
    // [['id' => '...', 'message_count' => 5, 'created_at' => '...', 'updated_at' => '...'], ...]
}
```

**Storage Drivers:**

| Driver | Storage | Best For |
| --- | --- | --- |
| `database` | Database table | Persistent storage, chat history (default) |
| `session` | Laravel session | Temporary chats, no database setup |

**Database Driver (Default):**

Conversations are stored in the database for persistent storage. Publish and run the migration:

```
php artisan vendor:publish --tag=llm-suite-migrations
php artisan migrate
```

This creates the `llm_conversations` table for storing conversation history.
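The published migration defines the table's exact schema; as a rough illustration, something like the following would be enough to hold serialized history per conversation (the column names here are illustrative assumptions, not the package's actual schema - check the published migration file):

```php
// Illustrative only - the real columns ship with the published migration.
Schema::create('llm_conversations', function (Blueprint $table) {
    $table->uuid('id')->primary();              // conversation UUID
    $table->longText('messages');               // serialized message history
    $table->text('system_prompt')->nullable();  // optional system prompt
    $table->timestamps();
});
```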

**Session Driver:**

For temporary chats that don't need persistence (expires with session):

```
LLM_CONVERSATION_DRIVER=session
```

No migration required for session driver.

### Token Usage


Track token consumption for cost monitoring:

```
use Llm;

$response = Llm::chatWithResponse('Explain Laravel in one paragraph.');

// Access token usage
echo $response->tokenUsage->promptTokens;      // Input tokens
echo $response->tokenUsage->completionTokens;  // Output tokens
echo $response->tokenUsage->totalTokens;       // Total tokens

// Helper methods
echo $response->getTotalTokens();
echo $response->getPromptTokens();
echo $response->getCompletionTokens();

// As array
$usage = $response->tokenUsage->toArray();
// ['prompt_tokens' => 10, 'completion_tokens' => 50, 'total_tokens' => 60]
```
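Token counts map directly to spend. A minimal sketch of turning those numbers into a cost estimate - `estimateCostUsd` is a hypothetical helper, not a package method, and the per-1K-token rates below are placeholders, not real prices:

```php
<?php

// Hypothetical helper: estimate request cost from token usage.
// $inRate / $outRate are USD per 1K tokens - substitute your provider's pricing.
function estimateCostUsd(int $promptTokens, int $completionTokens, float $inRate, float $outRate): float
{
    return ($promptTokens / 1000) * $inRate + ($completionTokens / 1000) * $outRate;
}

// With the usage array from above: 10 prompt + 50 completion tokens
echo number_format(estimateCostUsd(10, 50, 0.40, 1.60), 4); // 0.0840
```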

### Image Generation


```
use Llm;

// Generate an image
$image = Llm::image()->generate([
    'prompt' => 'A minimalist logo for a Laravel AI package',
    'size' => '1024x1024',
]);

echo $image->url;

// Or use the convenience method
$image = Llm::generateImage([
    'prompt' => 'A futuristic cityscape',
    'size' => '512x512',
    'quality' => 'hd',
]);
```
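When a provider returns `base64` data instead of a URL, you can write it to disk yourself. A small sketch - `saveBase64Png` is a hypothetical helper, not part of the package, and assumes the payload is a PNG:

```php
<?php

// Hypothetical helper - not part of the package.
function saveBase64Png(string $base64, string $path): int
{
    $bytes = base64_decode($base64, true);   // strict mode rejects malformed input
    if ($bytes === false) {
        throw new InvalidArgumentException('Invalid base64 image data');
    }
    return file_put_contents($path, $bytes); // returns bytes written
}

// e.g. if ($image->base64 !== null) { saveBase64Png($image->base64, storage_path('app/image.png')); }
```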

### Listing Available Models


```
use Llm;

// Get available models from OpenAI
$client = Llm::client('openai');
$models = $client->getAvailableModels();
print_r($models);
// ['gpt-4.1-mini', 'gpt-4.1', 'dall-e-3', ...]

// Check if the API is accessible
if ($client->isAvailable()) {
    echo "OpenAI API is accessible!";
}

// Same works for LM Studio
$lmClient = Llm::client('lmstudio');
if ($lmClient->isAvailable()) {
    $localModels = $lmClient->getAvailableModels();
    print_r($localModels);
}
```

### Using LM Studio (Local LLMs)


LM Studio allows you to run open-source LLMs locally. Perfect for development, testing, or privacy-sensitive applications.

**Setup:**

1. Download [LM Studio](https://lmstudio.ai/)
2. Load a model (e.g., Llama, Mistral, Phi)
3. Start the local server (default: `http://localhost:1234`)

**Usage:**

```
use Llm;

// Basic chat with local model
$response = Llm::using('lmstudio')->chat('Explain Laravel middleware.');

// Check if LM Studio is running
$client = Llm::using('lmstudio')->client();
if ($client->isAvailable()) {
    echo "LM Studio is running!";
}

// List available models
$models = $client->getAvailableModels();
print_r($models);

// Use a specific local model
$response = Llm::using('lmstudio')->chat('Hello!', [
    'model' => 'mistral-7b-instruct',
    'temperature' => 0.7,
]);
```

**Set as default for local development:**

```
LLM_SUITE_DEFAULT=lmstudio
```
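Since `isAvailable()` lets you probe each provider, you can fall back gracefully when the local server is down. A sketch of that pattern - `pickProvider` is a hypothetical helper, not a package method:

```php
<?php

// Hypothetical helper: return the first provider whose health check passes,
// else a fallback. $checks maps provider name => callable returning bool.
function pickProvider(array $checks, string $fallback = 'dummy'): string
{
    foreach ($checks as $name => $isAvailable) {
        if ($isAvailable()) {
            return $name;
        }
    }
    return $fallback;
}

// Wired up with the suite's isAvailable() checks (illustrative):
// $provider = pickProvider([
//     'lmstudio' => fn () => Llm::client('lmstudio')->isAvailable(),
//     'openai'   => fn () => Llm::client('openai')->isAvailable(),
// ]);
// $response = Llm::using($provider)->chat('Hello!');
```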

### Anthropic Provider


#### Available Models


```
use Llm;

// List available Claude models
$client = Llm::client('anthropic');

if ($client->isAvailable()) {
    $models = $client->getAvailableModels();

    foreach ($models as $model) {
        echo "- {$model}\n";
    }
}

// Example output:
// - claude-3-5-sonnet-20241022
// - claude-3-opus-20240229
// - claude-3-sonnet-20240229
// - claude-3-haiku-20240307
```

**Note**: Anthropic Claude models support chat completions and image analysis but do not support image generation.

### Working with Message History


```
use Llm;

$response = Llm::chat('What is the capital of France?', [
    'messages' => [
        ['role' => 'system', 'content' => 'You are a geography expert.'],
        ['role' => 'user', 'content' => 'What continent is Brazil in?'],
        ['role' => 'assistant', 'content' => 'Brazil is in South America.'],
        ['role' => 'user', 'content' => 'What is the capital of France?'],
    ],
]);
```
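Hand-built histories like this grow without bound, and provider context windows are finite. One way to prune them - plain PHP, not a package feature, and the 4-characters-per-token estimate is a crude assumption:

```php
<?php

// Keep the most recent non-system messages that fit a rough token budget,
// always preserving system messages. Illustrative only, not a package feature.
function trimHistory(array $messages, int $maxTokens): array
{
    $system = array_values(array_filter($messages, fn ($m) => $m['role'] === 'system'));
    $rest   = array_values(array_filter($messages, fn ($m) => $m['role'] !== 'system'));

    $kept = [];
    $budget = $maxTokens;
    foreach (array_reverse($rest) as $message) {
        $cost = (int) ceil(strlen($message['content']) / 4); // ~4 chars per token
        if ($cost > $budget) {
            break;
        }
        $budget -= $cost;
        array_unshift($kept, $message);      // restore chronological order
    }

    return array_merge($system, $kept);
}
```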

Testing
-------


### Using Laravel HTTP Fakes


The simplest approach is to use Laravel's built-in HTTP faking:

```
use Illuminate\Support\Facades\Http;
use Llm;

Http::fake([
    'api.openai.com/*' => Http::response([
        'id' => 'chatcmpl-test',
        'model' => 'gpt-4.1-mini',
        'choices' => [
            ['message' => ['content' => 'Fake response']],
        ],
    ]),
]);

$response = Llm::chat('Test');
$this->assertEquals('Fake response', $response);
```

### Using LlmFake


For more control, use the built-in fake helper:

```
use Llm;

// Set up the fake
$fake = Llm::fake()
    ->shouldReturnChat('Hello world')
    ->shouldReturnImage('https://example.com/image.png');

// Make requests
$chatResponse = Llm::chat('Hi there');
$imageResponse = Llm::image()->generate(['prompt' => 'A cat']);

// Assert requests were made
$fake->assertChatSent('Hi there');
$fake->assertImageSent('A cat');
$fake->assertChatCount(1);
$fake->assertImageCount(1);
```

### Using the Dummy Provider


You can also use the dummy provider directly in your tests:

```
use Llm;

// Switch to dummy provider
$response = Llm::using('dummy')->chat('Test message');
// Returns: "This is a dummy response to: Test message"
```

Extending with Custom Drivers
-----------------------------


You can register custom drivers for other LLM providers:

```
use Oziri\LlmSuite\Facades\Llm;
use Oziri\LlmSuite\Contracts\ChatClient;
use Oziri\LlmSuite\Support\ChatResponse;

// Create your custom client
class MyCustomClient implements ChatClient
{
    public function __construct(protected array $config) {}

    public function chat(string $prompt, array $options = []): ChatResponse
    {
        // Your implementation here
        return new ChatResponse(
            content: 'Response from custom provider',
            raw: [],
            model: 'custom-model',
        );
    }
}

// Register the driver (in a service provider)
Llm::extend('custom', function (array $config) {
    return new MyCustomClient($config);
});

// Add to config/llm-suite.php
'providers' => [
    'my-custom' => [
        'driver' => 'custom',
        'api_key' => env('CUSTOM_API_KEY'),
    ],
],

// Use it
$response = Llm::using('my-custom')->chat('Hello!');
```

Available Methods
-----------------


### Facade Methods


| Method | Description |
| --- | --- |
| `Llm::chat($prompt, $options)` | Send a chat message, returns string |
| `Llm::chatWithResponse($prompt, $options)` | Send a chat message, returns ChatResponse |
| `Llm::using($provider)` | Switch to a different provider |
| `Llm::image()` | Get the image client |
| `Llm::generateImage($params)` | Generate an image |
| `Llm::extend($driver, $callback)` | Register a custom driver |
| `Llm::fake()` | Create a fake for testing |
| `Llm::getProviders()` | List available providers |
| `Llm::getDefaultProvider()` | Get the default provider name |
| `Llm::client($name)` | Get the underlying client instance |
| `Llm::conversation($id)` | Start a new or resume an existing conversation |
| `Llm::conversations()` | Get all conversation IDs |

### Client Methods (OpenAI, LM Studio)


You can access the underlying client instance using `Llm::client('provider')` to call provider-specific methods:

```
$client = Llm::client('openai');    // or 'lmstudio'
```

| Method | Description |
| --- | --- |
| `$client->isAvailable()` | Check if the API/server is accessible |
| `$client->getAvailableModels()` | List available models from the provider |

### ChatResponse Properties


| Property | Type | Description |
| --- | --- | --- |
| `content` | string | The response text |
| `raw` | array | Raw API response data |
| `model` | string\|null | Model used for the request |
| `id` | string\|null | Request ID from the provider |
| `latencyMs` | float\|null | Request latency in milliseconds |
| `tokenUsage` | TokenUsage | Token usage statistics |

### TokenUsage Properties


| Property | Type | Description |
| --- | --- | --- |
| `promptTokens` | int | Number of tokens in the prompt/input |
| `completionTokens` | int | Number of tokens in the completion/output |
| `totalTokens` | int | Total tokens used |

**Methods:**

- `toArray()` - Convert to array
- `hasData()` - Check if usage data is available
- `TokenUsage::fromArray($data)` - Create from API response
- `TokenUsage::empty()` - Create empty instance
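For reporting, the `toArray()` payloads from several requests can be rolled up into one total. A plain-PHP sketch - `sumUsage` is a hypothetical helper, not a package method:

```php
<?php

// Hypothetical helper: aggregate several toArray() usage payloads.
function sumUsage(array $usages): array
{
    $total = ['prompt_tokens' => 0, 'completion_tokens' => 0, 'total_tokens' => 0];
    foreach ($usages as $usage) {
        foreach (array_keys($total) as $key) {
            $total[$key] += $usage[$key] ?? 0; // missing keys count as 0
        }
    }
    return $total;
}
```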

### ImageResponse Properties


| Property | Type | Description |
| --- | --- | --- |
| `url` | string\|null | URL of the generated image |
| `base64` | string\|null | Base64-encoded image data |
| `raw` | array | Raw API response data |
| `revisedPrompt` | string\|null | Revised prompt (if modified by provider) |

Roadmap
-------


- LM Studio support (local LLMs)
- Conversation management (session & database storage)
- Token usage tracking
- Streaming support
- Tool/Function calling
- Embeddings API
- RAG helpers
- Additional providers (Gemini, Groq, Ollama)
- Rate limiting
- Caching layer

License
-------


MIT License. See [LICENSE](LICENSE) for details.

### Health Score

**37 (Low)**, better than 83% of packages.

- **Maintenance: 77** - Regular maintenance activity
- **Popularity: 18** - Limited adoption so far
- **Community: 12** - Small or concentrated contributor base
- **Maturity: 36** - Early-stage or recently created project
- **Bus Factor: 1** - Top contributor holds 96% of commits, a single point of failure

**How is this calculated?**

- **Maintenance (25%)** - Last commit recency, latest release date, and issue-to-star ratio. Uses a 2-year decay window.
- **Popularity (30%)** - Total and monthly downloads, GitHub stars, and forks. Logarithmic scaling prevents top-heavy scores.
- **Community (15%)** - Contributors, dependents, forks, watchers, and maintainers. Measures real ecosystem engagement.
- **Maturity (30%)** - Project age, version count, PHP version support, and release stability.

### Release Activity

- Cadence: every ~0 days
- Total releases: 4
- Last release: 159 days ago

### Community

**Maintainers:** [oziriemeka](/maintainers/oziriemeka)

**Top contributors:** [oziriemeka](https://github.com/oziriemeka) (24 commits), [emekambachu](https://github.com/emekambachu) (1 commit)

**Tags:** laravel, laravel-framework, llm, llms, openai, php, ai, chat, claude, anthropic, gpt, image-generation

### Code Quality

Tests: PHPUnit

### Embed Badge

![Health badge](/badges/oziri-laravel-llm-suite/health.svg)

```
[![Health](https://phpackages.com/badges/oziri-laravel-llm-suite/health.svg)](https://phpackages.com/packages/oziri-laravel-llm-suite)
```

### Alternatives

- [sbsaga/toon](/packages/sbsaga-toon) - 🧠 TOON for Laravel: a compact, human-readable, and token-efficient data format for AI prompts & LLM contexts. Works with ChatGPT, Gemini, Claude, Mistral, and OpenAI integrations (JSON ⇄ TOON).
- [vizra/vizra-adk](/packages/vizra-vizra-adk) - Vizra Agent Development Kit: a comprehensive Laravel package for building intelligent AI agents.
- [cognesy/instructor-php](/packages/cognesy-instructor-php) - The complete AI toolkit for PHP: unified LLM API, structured outputs, agents, and coding agent control.
- [helgesverre/toon](/packages/helgesverre-toon) - Token-Oriented Object Notation: a compact data format for reducing token consumption when sending structured data to LLMs.
- [llm-agents/agents](/packages/llm-agents-agents) - LLM Agents PHP SDK: autonomous language model agents for PHP.
- [claude-php/claude-php-sdk-laravel](/packages/claude-php-claude-php-sdk-laravel) - Laravel integration for the Claude PHP SDK (Anthropic Claude API).
