laravel-local-llm/sdk · v1.1.0 · MIT · PHP ^8.2 · CI passing

[Source](https://github.com/shamimlaravel/Laravel-Local-LLM-SDK) · [Packagist](https://packagist.org/packages/laravel-local-llm/sdk)

Laravel Local LLM SDK
=====================

Modern, multi-driver, failover-ready local LLM integration for Laravel.

[![PHP Version](https://camo.githubusercontent.com/a36c0210404104eacc59eff100cd75e4f7698b14b271d9870ac9331d7f7314ef/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5048502d382e342532422d3737374242343f7374796c653d666c6174266c6f676f3d706870)](https://www.php.net/)[![Laravel Version](https://camo.githubusercontent.com/bd5c3a1f6b2800a6886739143d9019f2ba8e7b9174750a7669941d8c43e5f549/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c61726176656c2d31322532422d4646324432303f7374796c653d666c6174266c6f676f3d6c61726176656c)](https://laravel.com/)[![MIT License](https://camo.githubusercontent.com/784362b26e4b3546254f1893e778ba64616e362bd6ac791991d2c9e880a3a64e/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c6963656e73652d4d49542d677265656e2e737667)](LICENSE)[![Static Analysis](https://camo.githubusercontent.com/942bdbddc7b2adea1d63ed80793492d06d72ef41911edcba33310d0745581548/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5048505374616e2d4c6576656c253230392d627269676874677265656e)](phpstan.neon)[![Code Style](https://camo.githubusercontent.com/8dce636d5374d5aed75936dd650f85479403f843643da105a5f1821d4d85f7f1/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f64652532305374796c652d50696e742d626c7565)](pint.json)

Why Laravel Local LLM SDK?
--------------------------

| Feature | Description |
| --- | --- |
| **Zero Cloud Dependency** | Run LLMs entirely on your local infrastructure |
| **Enterprise Ready** | Type-safe, tested, and PSR-compliant |
| **Modern PHP 8.4** | Strict typing, readonly classes, union types |
| **Production Tested** | Comprehensive test suite with 100% coverage goals |

Quick Links
-----------

- [📘 Documentation](docs/index.html)
- [⚙️ API Reference](#api-reference)
- [🧪 Testing](#testing)
- [🚀 Roadmap](#roadmap)
- [📦 Packagist](https://packagist.org/packages/laravel-local-llm/sdk)

Overview
--------

Laravel Local LLM SDK is a modern, enterprise-grade Laravel package designed to integrate local Large Language Models (LLMs) such as Ollama and LM Studio into your Laravel applications.

Features
--------

- **Multi-Driver Architecture** - Support for Ollama, LM Studio, AirLLMLlama, and OpenAI-compatible local servers
- **Intelligent Failover** - Automatic fallback to healthy drivers
- **Auto-Detection** - Automatically detect available local LLM engines
- **Streaming Support** - Server-Sent Events (SSE) for real-time responses
- **Token-Based Authentication** - Built-in API token system with rate limiting
- **Usage Tracking** - Track token usage and quotas
- **Builder Pattern** - Fluent API for building requests
- **Event-Driven** - Dispatch events for observability
- **Embeddings Support** - Generate vector embeddings for semantic search
- **Tool Calling** - Define and use tools/functions with LLMs
- **Batch Processing** - Process multiple requests efficiently
- **Webhooks** - Send LLM events to external services
- **Metrics** - Prometheus-compatible metrics for monitoring
- **Caching** - Cache model lists and health status

Requirements
------------

- PHP 8.4+
- Laravel 12+
- Composer 2+
- Local LLM engine (Ollama, LM Studio, or OpenAI-compatible server)

Installation
------------

```
composer require laravel-local-llm/sdk
```

Configuration
-------------

Publish the configuration file:

```
php artisan vendor:publish --provider="LaravelLocalLlm\LocalLlmServiceProvider" --tag="llm-config"
```

### Environment Variables

```
# Default driver
LLM_DEFAULT_DRIVER=ollama

# Ollama
LLM_OLLAMA_ENABLED=true
LLM_OLLAMA_URL=http://localhost:11434
LLM_OLLAMA_DEFAULT_MODEL=llama3.2

# LM Studio
LLM_LMSTUDIO_ENABLED=true
LLM_LMSTUDIO_URL=http://localhost:1234/v1
LLM_LMSTUDIO_DEFAULT_MODEL=llama-3.2-1b-instruct

# OpenAI Compatible
LLM_OPENAI_COMPATIBLE_ENABLED=false
LLM_OPENAI_COMPATIBLE_URL=http://localhost:8080/v1

# Failover
LLM_FAILOVER_ENABLED=true

# Auto-detection
LLM_AUTO_DETECT=true
```

Usage
-----

### Using the Facade

```
use LaravelLocalLlm\Facades\LocalLlm;
use LaravelLocalLlm\DTO\ChatRequest;
use LaravelLocalLlm\DTO\Message;

// Simple chat
$response = LocalLlm::chat(
    new ChatRequest(
        model: 'llama3.2',
        messages: [
            Message::user('Hello, how are you?'),
        ]
    )
);

echo $response->content;
```

### Using the Builder

```
$response = LocalLlm::chatWithBuilder()
    ->model('llama3.2')
    ->withUserMessage('Hello, how are you?')
    ->temperature(0.7)
    ->send();

echo $response->content;
```

### Streaming

```
LocalLlm::chatWithBuilder()
    ->model('llama3.2')
    ->withUserMessage('Tell me a story')
    ->stream(true)
    ->sendStream(function ($chunk) {
        echo $chunk->content;

        if ($chunk->finished) {
            echo "\nDone!\n";
        }
    });
```

### Using Specific Driver

```
$response = LocalLlm::chat(
    new ChatRequest(...),
    Driver::LM_STUDIO
);
```

### Failover

```
$response = LocalLlm::chatWithFailover(new ChatRequest(...));
```

### Checking Models

```
$models = LocalLlm::models();
```

### Health Check

```
$isHealthy = LocalLlm::health();
```

Embeddings
----------

Generate vector embeddings for semantic search:

```
use LaravelLocalLlm\Facades\LocalLlm;
use LaravelLocalLlm\DTO\EmbeddingRequest;

// Single text
$response = LocalLlm::embeddings(new EmbeddingRequest(
    model: 'text-embedding-3-small',
    input: 'Hello world'
));

$embedding = $response->embeddings[0]->embedding;

// Multiple texts
$response = LocalLlm::embeddings(new EmbeddingRequest(
    model: 'text-embedding-3-small',
    input: ['Hello world', 'Goodbye world']
));
```
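Once you have embeddings, semantic search reduces to comparing vectors, most commonly with cosine similarity. A minimal plain-PHP sketch with no SDK dependency (the function name is illustrative, not part of the package):

```php
/**
 * Cosine similarity between two equal-length embedding vectors.
 * Returns 1.0 for identical directions, 0.0 for orthogonal vectors.
 */
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot += $value * $b[$i];
        $normA += $value * $value;
        $normB += $b[$i] * $b[$i];
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

// Rank stored embeddings against a query embedding.
$query = [0.9, 0.1, 0.0];
$docs = [
    'greeting' => [1.0, 0.0, 0.0],
    'farewell' => [0.0, 1.0, 0.0],
];

$scores = array_map(fn (array $vec) => cosineSimilarity($query, $vec), $docs);
arsort($scores); // highest similarity first ('greeting' in this example)
```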

Batch Processing
----------------

Process multiple chat requests efficiently:

```
use LaravelLocalLlm\Facades\LocalLlm;
use LaravelLocalLlm\DTO\BatchChatRequest;
use LaravelLocalLlm\DTO\ChatRequest;
use LaravelLocalLlm\DTO\Message;

$requests = [
    new ChatRequest(model: 'llama3.2', messages: [Message::user('Hello')]),
    new ChatRequest(model: 'llama3.2', messages: [Message::user('How are you?')]),
    new ChatRequest(model: 'llama3.2', messages: [Message::user('Tell me a joke')]),
];

$batchResponse = LocalLlm::batchChat(new BatchChatRequest($requests));

echo "Total requests: " . $batchResponse->count();
echo "Total tokens: " . $batchResponse->totalTokens();
echo "Avg latency: " . $batchResponse->averageLatencyMs() . "ms";
```

Token Authentication
--------------------

### Creating Tokens

```
use LaravelLocalLlm\Models\LlmToken;
use Illuminate\Support\Facades\Hash;

$token = LlmToken::create([
    'name' => 'API Token',
    'hashed_token' => Hash::make('your-secret-token'),
    'abilities' => ['chat', 'stream'],
    'rate_limit' => 60,
    'monthly_quota' => 1000000,
]);
```

### Using Tokens

Include the token in your request:

```
curl -H "Authorization: Bearer your-secret-token" \
  https://your-app.com/api/llm/chat
```

### Middleware

Protect your routes:

```
Route::middleware(['llm.guard:chat,stream'])->group(function () {
    Route::post('/llm/chat', [LlmController::class, 'chat']);
});
```

Events
------

- `ChatCompleted` - Dispatched when a chat request completes
- `StreamChunkReceived` - Dispatched for each streaming chunk

```
Event::listen(\LaravelLocalLlm\Events\ChatCompleted::class, function ($event) {
    Log::info('Chat completed', [
        'model' => $event->response->model,
        'latency' => $event->response->latencyMs,
    ]);
});
```

Webhooks
--------

Send LLM events to external services:

```
use LaravelLocalLlm\Webhooks\WebhookDispatcher;

$webhook = new WebhookDispatcher();

$webhook->register('chat.completed', 'https://your-app.com/webhooks/llm', [
    'secret' => env('WEBHOOK_SECRET'),
]);

$webhook->dispatchChatCompleted($request, $response, $driver);
```
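On the receiving end, webhook payloads are typically verified with an HMAC over the raw request body using the shared secret. The signing scheme below (HMAC-SHA256, hex-encoded) is an assumption for illustration; check `WebhookDispatcher` for the scheme the SDK actually uses:

```php
/**
 * Verify a webhook payload against an HMAC-SHA256 signature.
 * Hypothetical receiver-side sketch; the SDK's real signing
 * scheme and signature header may differ.
 */
function verifyWebhookSignature(string $payload, string $signature, string $secret): bool
{
    $expected = hash_hmac('sha256', $payload, $secret);

    // Constant-time comparison avoids timing attacks.
    return hash_equals($expected, $signature);
}

$payload = '{"event":"chat.completed","model":"llama3.2"}';
$secret = 'webhook-secret';

// The sender would attach this signature, e.g. in a request header.
$signature = hash_hmac('sha256', $payload, $secret);

var_dump(verifyWebhookSignature($payload, $signature, $secret)); // bool(true)
```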

Metrics
-------

Track LLM usage with Prometheus-compatible metrics:

```
use LaravelLocalLlm\Services\Metrics;

$metrics = new Metrics();

$metrics->recordRequest('ollama', 'llama3.2', 150.5, 20, 50);
$metrics->recordRequest('ollama', 'llama3.2', 120.0, 15, 45);

// Get per-model metrics
$allMetrics = $metrics->getMetrics();

// Get aggregate metrics
$aggregate = $metrics->getAggregateMetrics();
// ['total_requests' => 2, 'avg_latency_ms' => 135.25, ...]

// Export to Prometheus format
$prometheus = $metrics->toPrometheusFormat();
```
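For reference, Prometheus text exposition output generally looks like the excerpt below. The metric names here are hypothetical, shown only to illustrate the format; they are not taken from the SDK:

```
# HELP llm_requests_total Total LLM requests processed
# TYPE llm_requests_total counter
llm_requests_total{driver="ollama",model="llama3.2"} 2

# HELP llm_request_latency_ms_avg Average request latency in milliseconds
# TYPE llm_request_latency_ms_avg gauge
llm_request_latency_ms_avg{driver="ollama",model="llama3.2"} 135.25
```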

Helpers
-------

Utility functions for common tasks:

```
use LaravelLocalLlm\Helpers\TokenCalculator;
use LaravelLocalLlm\Helpers\ResponseFormatter;

// Estimate tokens
$tokens = TokenCalculator::estimateTokens('Hello world');

// Calculate cost
$cost = TokenCalculator::calculateCost(100, 50, 0.001, 0.002);

// Format response
$html = ResponseFormatter::markdown('**bold** and *italic*');

// Extract code blocks
$codeBlocks = ResponseFormatter::extractCode($markdown);
```
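If you only need a rough figure without the SDK, a common heuristic is about four characters per token for English text. A standalone sketch under that assumption (the heuristic and function name are illustrative; `TokenCalculator` may count differently):

```php
/**
 * Rough token estimate using the ~4 characters-per-token heuristic
 * for English text. Byte length is a fine proxy for ASCII input.
 * Illustrative only; the SDK's TokenCalculator may use another method.
 */
function roughTokenEstimate(string $text): int
{
    return (int) ceil(strlen($text) / 4);
}

echo roughTokenEstimate('Hello world'); // 3
```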

Console Commands
----------------

```
# Check driver health
php artisan llm:health

# Check specific driver
php artisan llm:health --driver=ollama

# List models
php artisan llm:models

# List models for specific driver
php artisan llm:models --driver=lmstudio

# Clear cache
php artisan llm:clear-cache
```

Extending
---------

### Custom Driver

```
use LaravelLocalLlm\Contracts\DriverInterface;
use LaravelLocalLlm\Enums\Driver;
use LaravelLocalLlm\DTO\ChatRequest;
use LaravelLocalLlm\DTO\ChatResponse;

class CustomDriver implements DriverInterface
{
    public function getDriver(): Driver
    {
        return Driver::OLLAMA; // Driver is an enum, so return an existing case (or add your own case to the enum)
    }

    public function chat(ChatRequest $request): ChatResponse
    {
        // Implementation
    }

    public function stream(ChatRequest $request, callable $onChunk): void
    {
        // Implementation
    }

    public function models(): array
    {
        // Implementation
    }

    public function health(): bool
    {
        // Implementation
    }

    public function isEnabled(): bool
    {
        return true;
    }
}
```

Testing
-------

```
composer test
```

License
-------

MIT License - see [LICENSE](LICENSE) for details.
