LLM API Library
===============

PHP 8.3+ library for interacting with multiple LLM providers (Google, Mistral, OpenAI, OVH and any OpenAI-compatible endpoint) with failover, retry, guard validation, tool calling, MCP client, and OpenAPI integration support.

Installation
------------

You can install the library with [Composer](https://getcomposer.org/); this is the recommended installation method.

```
$ composer require bycerfrance/llm-api-lib
```

Providers
---------

### Built-in providers

- **Google** -- Google Generative Language API
- **Mistral** -- Mistral AI API
- **OpenAI** -- OpenAI API
- **OVH** -- OVH AI Endpoints

### Generic (OpenAI-compatible)

The `Generic` provider connects to any OpenAI-compatible endpoint (local servers, proxies, third-party providers):

```
use ByCerfrance\LlmApiLib\Provider\Generic;

$provider = new Generic(
    uri: 'https://my-local-server.com/v1/chat/completions',
    apiKey: 'my-api-key',
    model: 'my-model',
    client: $httpClient, // PSR-18 ClientInterface
);
```

### Model metadata

Use `ModelInfo` to attach rich metadata to a provider (capabilities, quality/cost tiers, pricing, context window):

```
use ByCerfrance\LlmApiLib\Model\ModelInfo;
use ByCerfrance\LlmApiLib\Model\QualityTier;
use ByCerfrance\LlmApiLib\Model\CostTier;
use ByCerfrance\LlmApiLib\Model\Capability;
use ByCerfrance\LlmApiLib\Provider\OpenAi;

$model = new ModelInfo(
    name: 'gpt-4o',
    capabilities: [Capability::TEXT, Capability::IMAGE, Capability::TOOLS, Capability::JSON_OUTPUT],
    qualityTier: QualityTier::PREMIUM,
    costTier: CostTier::HIGH,
    inputCost: 2.50,   // $ per million tokens
    outputCost: 10.00,  // $ per million tokens
    maxContextTokens: 128_000,
);

$provider = new OpenAi(
    apiKey: 'sk-...',
    model: $model,       // ModelInfo or plain string
    client: $httpClient,
);
```

Chat
----

### Basic usage

```
$llm = new \ByCerfrance\LlmApiLib\Llm($provider);

$completion = $llm->chat('Hello!');
print $completion->getLastMessage()->getContent(); // "Hello! How are you?"

$completion = $llm->chat($completion->withNewMessage('Fine, thanks. And you?'));
print $completion->getLastMessage()->getContent(); // "Fine, thank you. How can I help you?"
```

### With instructions

```
use ByCerfrance\LlmApiLib\Completion\Completion;
use ByCerfrance\LlmApiLib\Completion\Message\SystemMessage;
use ByCerfrance\LlmApiLib\Llm;

$completion = new Completion(new SystemMessage(
    'You are an accounting assistant; introduce yourself as such.',
));

$llm = new Llm($provider);

$completion = $llm->chat($completion->withNewMessage('Hello!'));
print $completion->getLastMessage()->getContent();
// "Hello, I am your accounting assistant. How can I help you?"
```

### Message types

The library provides typed message classes for convenience:

```
use ByCerfrance\LlmApiLib\Completion\Message\SystemMessage;
use ByCerfrance\LlmApiLib\Completion\Message\UserMessage;
use ByCerfrance\LlmApiLib\Completion\Message\Message;
use ByCerfrance\LlmApiLib\Completion\Message\RoleEnum;

// Typed classes (recommended)
$system = new SystemMessage('You are a helpful assistant.');
$user = new UserMessage('Hello!');

// Or using the generic Message class with explicit role
$system = new Message('You are a helpful assistant.', role: RoleEnum::SYSTEM);
```

### Completion parameters

Fine-tune the LLM behavior with immutable `with*()` methods:

```
use ByCerfrance\LlmApiLib\Completion\Completion;

$completion = (new Completion(['Explain quantum computing']))
    ->withModel('gpt-4o')           // Override the provider's default model
    ->withMaxTokens(2000)           // Maximum tokens in the response
    ->withTemperature(0.7)          // Creativity (0 = deterministic, 2 = very creative)
    ->withTopP(0.9)                 // Nucleus sampling
    ->withSeed(42);                 // Reproducible outputs (provider-dependent)
```

Content Types
-------------

### ArrayContent

`ArrayContent` combines multiple contents (`ContentInterface` instances or strings) into a single object. It is useful for sending several elements in a single message.

Example:

```
$content = new ArrayContent(
    new TextContent('First message'),
    'Second message'
);
```

### DocumentUrlContent

Represents a document accessible via a URL. It supports the `document` and `ocr` capabilities.

Example:

```
$content = new DocumentUrlContent('https://example.com/document.pdf');
```

The static `fromFile()` method creates a `DocumentUrlContent` instance from a local file path or stream. The file is automatically converted to a base64-encoded data URL.

Example:

```
$content = DocumentUrlContent::fromFile('/path/to/document.pdf', 'custom-name.pdf');
```

Parameters:

- `$file`: Path to the file as a string or a stream resource.
- `$name`: Optional custom name for the document.
- `$detail`: Optional detail level for processing (e.g., 'auto', 'low', 'high').

### ImageUrlContent

Represents an image accessible via a URL. It supports the `image` and `ocr` capabilities.

Example:

```
$content = new ImageUrlContent('https://example.com/image.jpg');
```

The static `fromGdImage()` method creates an `ImageUrlContent` instance from a GD image resource. The image is automatically converted to a base64-encoded data URL.

Example:

```
$content = ImageUrlContent::fromGdImage($gdImage, 'high');
```

Parameters:

- `$image`: A GD image resource.
- `$detail`: Optional detail level for processing (e.g., 'auto', 'low', 'high').
- `$maxSize`: Optional maximum size for resizing the image.
- `$format`: Optional image format ('jpeg', 'png', 'gif', 'webp').
- `$quality`: Optional quality setting for JPEG/PNG/WebP formats.

The static `fromFile()` method creates an `ImageUrlContent` instance from a local file path or stream. The file is automatically converted to a base64-encoded data URL.

Example:

```
$content = ImageUrlContent::fromFile('/path/to/image.png', 'low');
```

Parameters:

- `$file`: Path to the file as a string or a stream resource.
- `$detail`: Optional detail level for processing (e.g., 'auto', 'low', 'high').

### InputAudioContent

Represents audio content encoded in base64 with a specified format. It supports the `audio` capability.

Example:

```
$content = new InputAudioContent('base64encodeddata', 'wav');
```

### TextContent & JsonContent

`TextContent` represents plain text or text read from a file. It supports the `text` capability.

`JsonContent` represents structured data in JSON format. It also supports the `text` capability.

Examples:

```
$text = new TextContent('Hello, world!');
$json = new JsonContent(['key' => 'value']);
```

When creating a `TextContent` instance, you can pass an associative array of placeholders that will be applied to the content using `str_replace`. This allows dynamic content generation based on placeholders in the text.

Example:

```
$content = new TextContent('Hello {name}, you are {age} years old.', ['name' => 'John', 'age' => 30]);
echo $content; // Outputs: "Hello John, you are 30 years old."
```

The placeholders are applied using the format `{key}` where `key` corresponds to the keys in the placeholder array.

The static `fromFile()` method creates a `TextContent` instance from a local file path or stream. The file content is loaded automatically and can be processed with optional placeholders.

Example:

```
$content = TextContent::fromFile('/path/to/text.txt', ['name' => 'John', 'age' => 30]);
```

Parameters:

- `$file`: Path to the file as a string or a stream resource.
- `$placeholders`: Optional associative array of placeholders to apply to the content.

Response Formats
----------------

Control the output format of the LLM response using `withResponseFormat()`.

### Text (default)

```
use ByCerfrance\LlmApiLib\Completion\ResponseFormat\TextFormat;

$completion = (new Completion(['Explain gravity']))
    ->withResponseFormat(new TextFormat());
```

### JSON Object

Forces the LLM to return valid JSON. Requires a provider with `JSON_OUTPUT` capability.

```
use ByCerfrance\LlmApiLib\Completion\ResponseFormat\JsonObjectFormat;

$completion = (new Completion(['List 3 colors as a JSON array']))
    ->withResponseFormat(new JsonObjectFormat());

$response = $llm->chat($completion);
$data = json_decode($response->getLastMessage()->getContent(), true);
```

### JSON Schema

Forces the LLM to return JSON conforming to a specific schema. Requires a provider with `JSON_SCHEMA` capability.

```
use ByCerfrance\LlmApiLib\Completion\ResponseFormat\JsonSchemaFormat;

$completion = (new Completion(['Describe a person']))
    ->withResponseFormat(new JsonSchemaFormat(
        name: 'person',
        schema: [
            'type' => 'object',
            'properties' => [
                'name' => ['type' => 'string'],
                'age' => ['type' => 'integer'],
            ],
            'required' => ['name', 'age'],
        ],
        strict: true,
    ));

$response = $llm->chat($completion);
// {"name": "John", "age": 30}
```

Tools (Function Calling)
------------------------

Tools allow the LLM to call external functions during inference. The library handles the tool execution loop automatically.

### Defining a tool

```
use ByCerfrance\LlmApiLib\Completion\Tool\Tool;

$weatherTool = new Tool(
    name: 'get_weather',
    description: 'Get the current weather for a location',
    parameters: [
        'type' => 'object',
        'properties' => [
            'location' => [
                'type' => 'string',
                'description' => 'The city name',
            ],
        ],
        'required' => ['location'],
    ],
    callback: function (array $arguments): array {
        // Your logic here
        return [
            'temperature' => 20,
            'unit' => 'celsius',
            'condition' => 'sunny',
        ];
    },
);
```

### Using tools in a completion

```
use ByCerfrance\LlmApiLib\Completion\Completion;
use ByCerfrance\LlmApiLib\Llm;

$completion = (new Completion(['What is the weather like in Paris?']))
    ->withTools($weatherTool)
    ->withMaxToolIterations(5); // Optional, default is 10

$llm = new Llm($provider);
$response = $llm->chat($completion);

print $response->getLastMessage()->getContent();
// "Il fait actuellement 20°C a Paris avec un temps ensoleille."
```

### Multiple tools

```
use ByCerfrance\LlmApiLib\Completion\Tool\ToolCollection;

$completion = (new Completion(['...']))
    ->withTools($weatherTool, $calculatorTool, $searchTool);

// Or using a collection
$tools = new ToolCollection($weatherTool, $calculatorTool);
$completion = (new Completion(['...']))->withTools($tools);
```
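
The `$calculatorTool` and `$searchTool` above are placeholders. For illustration, a calculator tool could be defined with the same `Tool` constructor shown earlier (the name, schema, and callback below are hypothetical):

```
use ByCerfrance\LlmApiLib\Completion\Tool\Tool;

// Hypothetical second tool, built like $weatherTool above
$calculatorTool = new Tool(
    name: 'calculate',
    description: 'Apply a basic arithmetic operation to two numbers',
    parameters: [
        'type' => 'object',
        'properties' => [
            'a' => ['type' => 'number'],
            'b' => ['type' => 'number'],
            'operator' => ['type' => 'string', 'enum' => ['+', '-', '*', '/']],
        ],
        'required' => ['a', 'b', 'operator'],
    ],
    callback: fn (array $args): array => [
        'result' => match ($args['operator']) {
            '+' => $args['a'] + $args['b'],
            '-' => $args['a'] - $args['b'],
            '*' => $args['a'] * $args['b'],
            '/' => $args['a'] / $args['b'],
        },
    ],
);
```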

### Filtered tools

Use `FilteredToolCollection` to restrict which tools are visible to the LLM. It supports include patterns (a whitelist of tool names) and exclude patterns (names prefixed with `!`):

```
use ByCerfrance\LlmApiLib\Completion\Tool\FilteredToolCollection;

// Only expose specific tools
$filtered = new FilteredToolCollection($toolCollection, ['get_weather', 'search']);

// Exclude specific tools (expose everything else)
$filtered = new FilteredToolCollection($toolCollection, ['!delete_user', '!drop_table']);
```

The library automatically (see the sketch after this list):

- Sends tools definition to the LLM
- Detects when the LLM wants to call a tool
- Executes the callback with the provided arguments
- Sends the result back to the LLM
- Continues until the LLM provides a final response or max iterations is reached
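
As a quick sanity check that this loop is fully automatic, a tool callback can record its own invocations; a single `chat()` call may trigger several of them. A sketch built from the `Tool` API above, using the `$llm` instance from the earlier examples and a hypothetical `get_time` tool:

```
use ByCerfrance\LlmApiLib\Completion\Completion;
use ByCerfrance\LlmApiLib\Completion\Tool\Tool;

$calls = [];

$timeTool = new Tool(
    name: 'get_time',
    description: 'Get the current UTC time',
    parameters: ['type' => 'object', 'properties' => []],
    callback: function (array $arguments) use (&$calls): array {
        $calls[] = $arguments; // recorded on every iteration of the tool loop
        return ['utc' => gmdate('c')];
    },
);

$response = $llm->chat((new Completion(['What time is it?']))->withTools($timeTool));
echo count($calls) . " tool call(s) executed automatically\n";
```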

MCP Client (Model Context Protocol)
-----------------------------------

The library includes a full MCP client that connects to remote MCP servers, discovers tools, and executes them. The `McpServer` client implements `ToolCollectionInterface` and can be passed directly to `withTools()`.

### McpServer

```
use ByCerfrance\LlmApiLib\Mcp\McpServer;
use ByCerfrance\LlmApiLib\Mcp\Transport\HttpStreamable;

// Create transport
$transport = new HttpStreamable(
    uri: 'https://my-mcp-server.com/mcp',
    client: $httpClient,
    headers: ['Authorization' => 'Bearer my-token'],
);

// Create MCP server client
$mcp = new McpServer($transport);

// Use MCP tools in a completion (tools are discovered automatically via lazy initialization)
$completion = (new Completion(['Search for documents about PHP']))
    ->withTools($mcp);

$response = $llm->chat($completion);
```

The MCP client handles the full lifecycle: initialization handshake, tool discovery (with pagination), tool execution via JSON-RPC `tools/call`, and graceful shutdown.
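
For reference, a tool execution on the wire is a plain JSON-RPC 2.0 request; the `tools/call` payload has roughly this shape per the MCP specification (the tool name and arguments below are hypothetical, shown as the PHP array the transport would encode):

```
// Shape of an MCP tools/call request (JSON-RPC 2.0)
$request = [
    'jsonrpc' => '2.0',
    'id'      => 1,
    'method'  => 'tools/call',
    'params'  => [
        'name'      => 'search_documents',   // a tool discovered via tools/list
        'arguments' => ['query' => 'PHP'],   // must match the tool's input schema
    ],
];

echo json_encode($request, JSON_PRETTY_PRINT);
```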

### OpenAPI integration

Connect to any REST API described by an OpenAPI 3.x specification. Each operation becomes a tool the LLM can call.

> Requires the optional dependency: `composer require devizzent/cebe-php-openapi`

```
use ByCerfrance\LlmApiLib\Mcp\OpenApi;
use cebe\openapi\Reader;

$spec = Reader::readFromJsonFile('/path/to/openapi.json');

$openApi = new OpenApi(
    spec: $spec,
    client: $httpClient,
    headers: ['Authorization' => 'Bearer api-token'],
    baseUrl: 'https://api.example.com', // Optional, overrides spec servers
);

// Use OpenAPI operations as tools
$completion = (new Completion(['List all users']))
    ->withTools($openApi);

// Or filter specific operations
$filtered = new FilteredToolCollection($openApi, ['listUsers', 'getUser']);
$completion = (new Completion(['List all users']))
    ->withTools($filtered);

$response = $llm->chat($completion);
```

LlmTool (Agentic Sub-Model Delegation)
--------------------------------------

`LlmTool` allows the orchestrator LLM to delegate tasks to a different model via tool calling:

```
use ByCerfrance\LlmApiLib\Completion\Tool\LlmTool;
use ByCerfrance\LlmApiLib\Completion\Completion;

$analysisTool = new LlmTool(
    name: 'analyze_code',
    description: 'Analyze code for security vulnerabilities',
    parameters: [
        'type' => 'object',
        'properties' => [
            'code' => ['type' => 'string', 'description' => 'The code to analyze'],
            'language' => ['type' => 'string', 'description' => 'Programming language'],
        ],
        'required' => ['code'],
    ],
    llm: $specializedProvider, // A different LlmInterface (e.g., a more powerful model)
    promptBuilder: fn(string $code, string $language = 'php') => new Completion([
        "Analyze this {$language} code for security issues:\n{$code}",
    ]),
);

$completion = (new Completion(['Review my application for security issues']))
    ->withTools($analysisTool);

$response = $llm->chat($completion);

// Aggregate usage/cost across all sub-model calls
$subModelUsage = $analysisTool->getLlm()->getUsage();
$subModelCost = $analysisTool->getLlm()->getCost();
```

Model Selection
---------------

### Selection strategy

When using multiple providers with `Llm`, control which provider is preferred:

```
use ByCerfrance\LlmApiLib\Model\SelectionStrategy;
use ByCerfrance\LlmApiLib\Completion\Completion;

$completion = (new Completion(['Complex reasoning task']))
    ->withSelectionStrategy(SelectionStrategy::BEST_QUALITY);
```

Available strategies:

| Strategy | Description | Scoring formula |
| --- | --- | --- |
| `CHEAP` | Prefer low-cost providers | 80% cost + 20% quality |
| `BALANCED` | Balance cost and quality | 50% cost + 50% quality |
| `BEST_QUALITY` | Prefer highest quality | 80% quality + 20% cost |

Scoring is based on the `QualityTier` (BASIC, GOOD, PREMIUM) and `CostTier` (LOW, MEDIUM, HIGH) defined in each provider's `ModelInfo`.
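
A minimal sketch of how such a weighted score can be computed. The numeric tier mapping below is an assumption for illustration (the library's internal scoring may differ); the 80/20 weights are the documented `CHEAP` formula, and `$costTier`/`$qualityTier` stand in for the tiers from a provider's `ModelInfo`:

```
use ByCerfrance\LlmApiLib\Model\CostTier;
use ByCerfrance\LlmApiLib\Model\QualityTier;

// Hypothetical mapping: cheaper and higher-quality both score closer to 1.0
$costScore = match ($costTier) {
    CostTier::LOW => 1.0,
    CostTier::MEDIUM => 0.5,
    CostTier::HIGH => 0.0,
};
$qualityScore = match ($qualityTier) {
    QualityTier::BASIC => 0.0,
    QualityTier::GOOD => 0.5,
    QualityTier::PREMIUM => 1.0,
};

// CHEAP strategy: 80% cost + 20% quality
$score = 0.8 * $costScore + 0.2 * $qualityScore;
```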

Response Handling
-----------------

### CompletionResponseInterface

The `chat()` method returns a `CompletionResponseInterface` which extends `CompletionInterface` with additional response data:

```
$response = $llm->chat('Hello');

// Access the response content
$content = $response->getLastMessage()->getContent();

// Per-request token usage
$usage = $response->getUsage();
echo $usage->getPromptTokens();
echo $usage->getCompletionTokens();
echo $usage->getTotalTokens();

// Finish reason
$finishReason = $response->getFinishReason(); // FinishReason::STOP, LENGTH, TOOL_CALLS, CONTENT_FILTER

// Continue the conversation (CompletionResponseInterface extends CompletionInterface)
$response = $llm->chat($response->withNewMessage('Follow up question'));
```

### FinishReason

The `FinishReason` enum indicates why the LLM stopped generating:

| Value | Description |
| --- | --- |
| `STOP` | Normal completion |
| `LENGTH` | Maximum token limit reached |
| `TOOL_CALLS` | Model wants to call tools (handled automatically) |
| `CONTENT_FILTER` | Content was filtered by the provider's safety system |
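
For example, a truncated response can be detected and retried with a larger token budget (a sketch using the documented enum and `withMaxTokens()`, for a `$completion` built as above):

```
use ByCerfrance\LlmApiLib\Completion\FinishReason;

$response = $llm->chat($completion);

if ($response->getFinishReason() === FinishReason::LENGTH) {
    // The answer was cut off: retry the same conversation with more room
    $response = $llm->chat($completion->withMaxTokens(4000));
}
```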

Retry
-----

The `Retry` decorator wraps any `LlmInterface` and retries on failure with configurable backoff:

```
use ByCerfrance\LlmApiLib\Retry;

$retryableProvider = new Retry(
    provider: $provider,
    time: 5000,          // Base wait time in milliseconds (default: 5000)
    retry: 3,            // Maximum retry attempts (default: 2)
    multiplier: 2.0,     // Exponential backoff multiplier (default: 1 = constant delay)
    retryOnGuard: false, // Retry on GuardException (default: false)
);

// Wait times: 5s, 10s, 20s (time * multiplier^attempt)
$response = $retryableProvider->chat('Hello');
```

Guard System
------------

Guards validate LLM responses after each `chat()` call. If validation fails, a `GuardException` is thrown with the rejected response attached.

### Custom guard

```
use ByCerfrance\LlmApiLib\Guard\Guard;
use ByCerfrance\LlmApiLib\Guard\GuardException;

$guarded = new Guard(
    provider: $provider,
    guard: function (\ByCerfrance\LlmApiLib\Completion\CompletionResponseInterface $response): void {
        $content = $response->getLastMessage()->getContent();
        if (str_contains($content, 'I cannot')) {
            throw new \RuntimeException('Response contains a refusal');
        }
    },
);

try {
    $response = $guarded->chat('...');
} catch (GuardException $e) {
    $rejectedResponse = $e->getResponse(); // Access the rejected response
    echo $e->getMessage();
}
```

### FinishReasonGuard

A built-in guard that rejects responses with specific finish reasons (defaults to `LENGTH` and `CONTENT_FILTER`):

```
use ByCerfrance\LlmApiLib\Guard\FinishReasonGuard;
use ByCerfrance\LlmApiLib\Completion\FinishReason;

// Default: rejects LENGTH and CONTENT_FILTER
$guarded = new FinishReasonGuard($provider);

// Custom: only reject LENGTH
$guarded = new FinishReasonGuard($provider, FinishReason::LENGTH);
```

### Combining Guard + Retry

Guards and retries compose naturally as decorators:

```
use ByCerfrance\LlmApiLib\Guard\FinishReasonGuard;
use ByCerfrance\LlmApiLib\Retry;

// Retry up to 3 times if the response is truncated (LENGTH) or filtered
$robust = new Retry(
    provider: new FinishReasonGuard($provider),
    retry: 3,
    retryOnGuard: true, // Required to retry on GuardException
);

$response = $robust->chat('...');
```

Failover
--------

[](#failover)

The `Llm` class accepts multiple providers and implements automatic failover:

```
use ByCerfrance\LlmApiLib\Llm;

$llm = new Llm($openAiProvider, $mistralProvider, $googleProvider);

// If OpenAI fails, Mistral is tried. If Mistral fails, Google is tried.
$response = $llm->chat('Hello');
```

### Capability-based filtering

Before attempting providers, `Llm` automatically filters them by required capabilities. If a message contains an image, only providers with the `IMAGE` capability are tried; if a `JsonSchemaFormat` is used, only providers with the `JSON_SCHEMA` capability are tried.
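
For example, requesting a `JsonSchemaFormat` narrows the failover pool to schema-capable providers before any request is sent (a sketch combining the APIs shown above):

```
use ByCerfrance\LlmApiLib\Completion\Completion;
use ByCerfrance\LlmApiLib\Completion\ResponseFormat\JsonSchemaFormat;
use ByCerfrance\LlmApiLib\Llm;

$llm = new Llm($openAiProvider, $mistralProvider, $googleProvider);

// Only providers declaring the JSON_SCHEMA capability take part in the failover
$completion = (new Completion(['Describe a person']))
    ->withResponseFormat(new JsonSchemaFormat(
        name: 'person',
        schema: [
            'type' => 'object',
            'properties' => ['name' => ['type' => 'string']],
            'required' => ['name'],
        ],
    ));

$response = $llm->chat($completion);
```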

### Strategy-based ordering

When a `SelectionStrategy` is set on the completion, providers are sorted by their score (based on `ModelInfo` quality/cost tiers) before the failover sequence begins.

Logging
-------

The `chat()` method accepts an optional PSR-3 logger for per-call logging:

```
use Psr\Log\LoggerInterface;

/** @var LoggerInterface $logger */
$response = $llm->chat($completion, logger: $logger);
```

The library logs:

- Provider selection and routing decisions
- Request initiation and completion metrics (tokens, cost, finish reason)
- Tool call counts and execution
- Retry attempts with wait times
- Failover transitions with error details

Usage &amp; Cost Tracking
-------------------------

### Token usage

Retrieve aggregated token usage across all calls:

```
$usage = $llm->getUsage();
echo $usage->getPromptTokens();      // Total input tokens
echo $usage->getCompletionTokens();  // Total output tokens
echo $usage->getTotalTokens();       // Total tokens
```

### Cost tracking

Calculate monetary cost based on `ModelInfo` pricing:

```
$cost = $llm->getCost();           // Total cost in dollars (4 decimal precision)
$cost = $llm->getCost(precision: 6); // Higher precision
```

Cost is computed as: `(promptTokens * inputCost / 1M) + (completionTokens * outputCost / 1M)`.
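
As a worked example with the `gpt-4o` `ModelInfo` defined earlier (inputCost 2.50, outputCost 10.00 dollars per million tokens) and a hypothetical request of 1,200 prompt tokens and 400 completion tokens:

```
// (promptTokens * inputCost / 1M) + (completionTokens * outputCost / 1M)
$cost = (1_200 * 2.50 + 400 * 10.00) / 1_000_000;
echo $cost; // 0.007 dollars
```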

### Context window

Query the model's maximum context window size:

```
$maxTokens = $llm->getMaxContextTokens(); // e.g. 128000, or null if undefined
```

When `Llm` wraps multiple providers, this returns the minimum across all of them.
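
A sketch of a pre-flight size check using this value; the four-characters-per-token heuristic below is an illustration, not part of the library:

```
$prompt = 'Some long prompt...'; // the text about to be sent
$maxTokens = $llm->getMaxContextTokens();

if ($maxTokens !== null) {
    // Rough heuristic: ~4 characters per token for Latin-script text
    $estimatedTokens = (int) ceil(strlen($prompt) / 4);

    if ($estimatedTokens > $maxTokens) {
        throw new \LengthException('Prompt likely exceeds the model context window');
    }
}
```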

Capabilities
------------

This library supports a wide range of LLM capabilities, allowing developers to leverage advanced features such as multimodal processing, structured output, and reasoning. The following table lists the supported capabilities along with their descriptions.

| Capability | Description |
| --- | --- |
| **text** | Ability to read, process, and generate natural language text. |
| **image** | Ability to interpret visual content from images. |
| **ocr** | Ability to extract textual content embedded within images (printed or handwritten). |
| **document** | Ability to process structured, often multi-page documents (e.g., PDFs), including visual layout and textual interpretation. |
| **audio** | Ability to process and interpret speech or audio signals. |
| **video** | Ability to understand and analyze visual-temporal content from videos. |
| **reasoning** | Ability to perform logical, analytical, or multi-step reasoning to derive conclusions. |
| **json\_output** | Ability to generate responses strictly formatted as valid JSON. |
| **json\_schema** | Ability to generate responses that strictly follow a predefined JSON schema. |
| **code** | Ability to interpret, generate, or transform programming code. |
| **tools** | Ability to call external tools or functions during inference. |
| **multimodal** | Ability to combine and reason across multiple input types (e.g., text + image + audio + video). |

Each provider implementing the `LlmInterface` must declare its supported capabilities via the `getCapabilities()` method. The `Llm` class automatically filters providers based on compatibility with the requested capabilities, ensuring that only suitable providers are used for each request.
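
Capabilities can also be checked directly in application code; a minimal sketch, assuming `getCapabilities()` returns the provider's list of `Capability` cases:

```
use ByCerfrance\LlmApiLib\Model\Capability;

if (in_array(Capability::IMAGE, $provider->getCapabilities(), true)) {
    // Safe to send ImageUrlContent to this provider
}
```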
