PHPackages


Active · Library · [API Development](/categories/api)

lzhx00/laravel-llm-client
=========================

Laravel LLM Client Package - A unified interface for multiple LLM providers

v2.0.1 (7 months ago) · MIT · PHP >= 8.1

Since Jul 9 · Pushed 7 months ago

[Source](https://github.com/lzhx00/laravel-llm-client) · [Packagist](https://packagist.org/packages/lzhx00/laravel-llm-client) · Synced 1 month ago


Laravel LLM Client
==================


A Laravel package providing a unified, chainable interface for multiple LLM (Large Language Model) providers: **OpenAI**, **Anthropic (Claude)**, **Gemini**, and **Ollama**.

---

Requirements
------------


- **Laravel 10.x, 11.x, or 12.x** (versions below Laravel 10 are not supported)
- PHP 8.1 or higher

> Tested on Laravel 12.x. Other versions may work, but are not officially tested.

---

Installation
------------


```
composer require lzhx00/laravel-llm-client
```

Laravel will auto-discover and register the package.
If you have disabled auto-discovery, register the service provider and facade manually in `config/app.php`:

```
'providers' => [
    // ...
    Lzhx00\LLMClient\LLMClientServiceProvider::class,
],

'aliases' => [
    // ...
    'LLMClient' => Lzhx00\LLMClient\Facades\LLMClient::class,
],
```

---

Configuration
-------------


Publish the config file (optional, for customization):

```
php artisan vendor:publish --tag=llm-client-config
```

Set your API keys and provider settings in `.env` or `config/llm.php`:

```
LLM_DEFAULT_PROVIDER=openai
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AIza...
OLLAMA_BASE_URL=http://localhost:11434
```

### config/llm.php Example


Each provider has its own `default_model`, `embedding_model`, and `options`.

```
return [
    'default' => env('LLM_DEFAULT_PROVIDER', 'openai'),
    'providers' => [
        'openai' => [
            'api_key' => env('OPENAI_API_KEY'),
            'default_model' => 'gpt-3.5-turbo',
            'embedding_model' => 'text-embedding-3-small',
            'options' => [
                'temperature' => 0.7,
                // ...other OpenAI-specific options
            ],
        ],
        'ollama' => [
            'base_url' => env('OLLAMA_BASE_URL', 'http://localhost:11434'),
            'default_model' => 'llama3',
            'embedding_model' => 'nomic-embed-text',
            'options' => [
                'temperature' => 0.5,
                // ...other Ollama-specific options
            ],
        ],
        // ...other providers
    ],
];
```
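Because these are ordinary Laravel config values, they can be read at runtime with the `config()` helper. A small sketch, using only the keys shown in the example above:

```php
// Read values from config/llm.php at runtime (inside a Laravel app).
$provider = config('llm.default');                        // e.g. 'openai'
$model    = config('llm.providers.ollama.default_model'); // e.g. 'llama3'
$options  = config('llm.providers.openai.options', []);   // provider options array
```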

---

Usage
-----


### Basic Text Generation


```
$response = LLMClient::generate('Say hello in English.');
```

### Specify Provider


```
$response = LLMClient::use('ollama')->generate('Say hello in English.');
```

### Chainable Model/Options (Recommended)


```
$response = LLMClient::model('llama3')->with(['temperature' => 0.5])->generate('Say hello.');
```

- `model()` only affects chat/completion calls.
- `embedModel()` only affects `embed()` calls.
- `with()` sets provider-specific options (anything except the model and embedding model).
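These chainable calls compose. A sketch of a combined chain (`top_p` is a hypothetical provider-specific option, not taken from this README):

```php
// Pick a provider, a chat model, and options in a single fluent chain.
$response = LLMClient::use('ollama')
    ->model('llama3')
    ->with([
        'temperature' => 0.2, // provider-specific option
        'top_p'       => 0.9, // hypothetical option; check your provider's docs
    ])
    ->generate('Summarize Laravel service providers in one sentence.');
```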

### Embeddings


```
$vector = LLMClient::use('ollama')->embed('hello world');

// Specify embedding model
$vector = LLMClient::embedModel('nomic-embed-text')->embed('hello world');
```
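Embedding vectors are plain arrays of floats, so they can be compared with standard similarity measures. A minimal cosine-similarity helper (this helper is not part of the package):

```php
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(array $a, array $b): float
{
    $dot = $normA = $normB = 0.0;
    foreach ($a as $i => $x) {
        $dot   += $x * $b[$i];
        $normA += $x * $x;
        $normB += $b[$i] * $b[$i];
    }
    if ($normA == 0.0 || $normB == 0.0) {
        return 0.0; // define similarity with a zero vector as 0
    }
    return $dot / (sqrt($normA) * sqrt($normB));
}

// With the package, compare two texts by embedding both:
// $score = cosineSimilarity(
//     LLMClient::embed('hello world'),
//     LLMClient::embed('hi there')
// ); // closer to 1.0 means more similar
```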

### Streaming Response


```
LLMClient::generateStream('Tell me a joke.', [], function($chunk) {
    echo $chunk;
});
```
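The callback receives each chunk as it arrives, so a common pattern is to echo chunks live while also accumulating the full response. A sketch assuming the `generateStream()` signature shown above:

```php
// Stream to the client while keeping the complete text for later use.
$full = '';
LLMClient::generateStream('Tell me a joke.', [], function (string $chunk) use (&$full) {
    echo $chunk;     // send the chunk to the client immediately
    $full .= $chunk; // accumulate the complete response
});
// $full now holds the entire generated text (e.g. for logging or caching).
```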

### List Models


```
$models = LLMClient::use('gemini')->models();
```

---

Supported Providers
-------------------


- **OpenAI (ChatGPT)**
- **Anthropic (Claude)**
- **Gemini (Google)**
- **Ollama**

> ⚠️ Note: Only the Ollama provider has been fully tested.
> Other providers are implemented based on official docs, but not tested with real API keys.

---

📄 License
---------


MIT License

### Health Score

**33 (Low)**, better than 75% of packages

- Maintenance: 64 (regular maintenance activity)
- Popularity: 8 (limited adoption so far)
- Community: 6 (small or concentrated contributor base)
- Maturity: 47 (maturing project, gaining track record)
- Bus factor: 1 (top contributor holds 100% of commits, a single point of failure)


### Release Activity

- Cadence: every ~46 days
- Total releases: 3
- Last release: 215 days ago
- Major versions: v1.0.0 → v2.0.0 (2025-10-09)

### Community

- Maintainer: [lzhx00](/maintainers/lzhx00)
- Top contributor: [lzhx00](https://github.com/lzhx00) (3 commits)

---

Tags: laravel, ai, openai, Gemini, claude, llm, ChatGpt, ollama

### Code Quality

- Tests: PHPUnit

### Embed Badge

![Health badge](/badges/lzhx00-laravel-llm-client/health.svg)

```
[![Health](https://phpackages.com/badges/lzhx00-laravel-llm-client/health.svg)](https://phpackages.com/packages/lzhx00-laravel-llm-client)
```

### Alternatives

- [vizra/vizra-adk](/packages/vizra-vizra-adk): Vizra Agent Development Kit, a comprehensive Laravel package for building intelligent AI agents.
- [cognesy/instructor-php](/packages/cognesy-instructor-php): the complete AI toolkit for PHP with a unified LLM API, structured outputs, agents, and coding agent control.
- [sbsaga/toon](/packages/sbsaga-toon): 🧠 TOON for Laravel, a compact, human-readable, and token-efficient data format for AI prompts and LLM contexts (JSON ⇄ TOON).
- [claude-php/claude-php-sdk-laravel](/packages/claude-php-claude-php-sdk-laravel): Laravel integration for the Claude PHP SDK (Anthropic Claude API).
- [vectorifyai/vectorify-laravel](/packages/vectorifyai-vectorify-laravel): Vectorify package for Laravel, the fastest way to ask AI about your data.

PHPackages © 2026

