azaharizaman/huggingface-php
============================

A modern PHP client for the Hugging Face Inference API with full type safety and comprehensive model support


 [![GitHub Workflow Status (main)](https://camo.githubusercontent.com/60747ad47493c6933944b2fd100a6ca5af5ed93e00ea4072fe7b0348ad993ff2/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f616374696f6e732f776f726b666c6f772f7374617475732f617a61686172697a616d616e2f68756767696e67666163652d7068702f74657374732e796d6c3f6272616e63683d6d61696e266c6162656c3d7465737473267374796c653d726f756e642d737175617265)](https://github.com/azaharizaman/huggingface-php/actions) [![Total Downloads](https://camo.githubusercontent.com/cb44d37ecd8877fe9c93c142f91661132df4588e10b9d53bea438167438f91ed/68747470733a2f2f696d672e736869656c64732e696f2f7061636b61676973742f64742f617a61686172697a616d616e2f68756767696e67666163652d706870)](https://packagist.org/packages/azaharizaman/huggingface-php) [![Latest Version](https://camo.githubusercontent.com/213a23dd54425b9ea5770a5abc8f983496231877a90ec699ff221243e6309b54/68747470733a2f2f696d672e736869656c64732e696f2f7061636b61676973742f762f617a61686172697a616d616e2f68756767696e67666163652d706870)](https://packagist.org/packages/azaharizaman/huggingface-php) [![License](https://camo.githubusercontent.com/f35105590446d06398385fd7445eeb0861e5d19f7c99b0ff23996db73ff175a5/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f6c6963656e73652f617a61686172697a616d616e2f68756767696e67666163652d706870)](https://packagist.org/packages/azaharizaman/huggingface-php)

---

**Huggingface PHP** is a community-maintained PHP API client that allows you to interact with the [Hugging Face API](https://huggingface.co/inference-api) and the latest [Chat Completions API](https://huggingface.co/docs/inference-providers).

**✨ NEW**: Now supports OpenAI-compatible Chat Completions API with multiple inference providers!

Table of Contents
-----------------

- [Get Started](#get-started)
- [Usage](#usage)
    - [Chat Completions](#chat-completions)
    - [Streaming Chat Completions](#streaming-chat-completions)
    - [Text Generation](#text-generation)
    - [Fill Mask](#fill-mask)
    - [Summarization](#summarization)
    - [Sentiment Analysis](#sentiment-analysis)
    - [Emotion Classification](#emotion-classification)
    - [Hub API](#hub-api)
- [Advanced Usage](#advanced-usage)
    - [Provider Selection](#provider-selection)
    - [Auto Task Type Detection](#auto-task-type-detection)
    - [Custom Configuration](#custom-configuration)
    - [Error Handling](#error-handling)
- [Supported Task Types](#supported-task-types)
- [Inference Providers](#inference-providers)
- [Popular Models](#popular-models)
- [Examples](#-examples)
- [Testing](#testing)

Get Started
-----------

> **Requires [PHP 8.2+](https://php.net/releases/)**

First, install Huggingface PHP via the [Composer](https://getcomposer.org/) package manager:

```
composer require azaharizaman/huggingface-php
```

If your project does not already include a PSR-18 HTTP client, either ensure the `php-http/discovery` Composer plugin is allowed to run, or install a client manually:
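If Composer prompts you about the plugin, you can allow it explicitly. A minimal sketch of the relevant `config` section of your project's `composer.json`:

```
{
    "config": {
        "allow-plugins": {
            "php-http/discovery": true
        }
    }
}
```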

```
composer require guzzlehttp/guzzle
```

Then, interact with Hugging Face's API:

```
use AzahariZaman\Huggingface\Huggingface;
use AzahariZaman\Huggingface\Enums\Type;
use AzahariZaman\Huggingface\Enums\Provider;

$yourApiKey = getenv('HUGGINGFACE_API_KEY');
$client = Huggingface::client($yourApiKey);

// NEW: Chat Completions API with provider support
$result = $client->chatCompletion()->create([
    'model' => 'meta-llama/Llama-3.1-8B-Instruct',
    'messages' => [
        ['role' => 'user', 'content' => 'What is the meaning of life?']
    ],
    'provider' => Provider::SAMBANOVA,
    'max_tokens' => 100,
    'temperature' => 0.7,
]);

echo $result->choices[0]->message->content . "\n";

// Original Inference API (still supported)
$result = $client->inference()->create([
    'model' => 'gpt2',
    'inputs' => 'The goal of life is?',
    'type' => Type::TEXT_GENERATION,
]);

echo $result['generated_text'] . "\n";
```

Usage
-----

### Chat Completions

The new Chat Completions API provides OpenAI-compatible endpoints with support for multiple inference providers:

```
use AzahariZaman\Huggingface\Huggingface;
use AzahariZaman\Huggingface\Enums\Provider;

$client = Huggingface::client(getenv('HUGGINGFACE_API_KEY'));

// Basic chat completion
$result = $client->chatCompletion()->create([
    'model' => 'meta-llama/Llama-3.1-8B-Instruct',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
        ['role' => 'user', 'content' => 'Explain quantum computing in simple terms.']
    ],
    'max_tokens' => 200,
    'temperature' => 0.7,
]);

echo $result->choices[0]->message->content . "\n";

// With specific provider
$result = $client->chatCompletion()->create([
    'model' => 'deepseek-ai/DeepSeek-V3-0324',
    'messages' => [
        ['role' => 'user', 'content' => 'Write a Python function to calculate fibonacci numbers.']
    ],
    'provider' => Provider::SAMBANOVA,
    'max_tokens' => 300,
]);

// Multi-turn conversation
$result = $client->chatCompletion()->create([
    'model' => 'meta-llama/Llama-3.1-8B-Instruct',
    'messages' => [
        ['role' => 'user', 'content' => 'What is machine learning?'],
        ['role' => 'assistant', 'content' => 'Machine learning is a subset of AI...'],
        ['role' => 'user', 'content' => 'Can you give me a practical example?']
    ],
    'provider' => Provider::TOGETHER,
    'temperature' => 0.5,
]);

// Advanced options
$result = $client->chatCompletion()->create([
    'model' => 'meta-llama/Llama-3.1-8B-Instruct',
    'messages' => [
        ['role' => 'user', 'content' => 'Generate a JSON object with user information.']
    ],
    'response_format' => [
        'type' => 'json_schema',
        'json_schema' => [
            'name' => 'user',
            'schema' => [
                'type' => 'object',
                'properties' => [
                    'name' => ['type' => 'string'],
                    'age' => ['type' => 'integer'],
                    'email' => ['type' => 'string']
                ],
                'required' => ['name', 'age']
            ]
        ]
    ],
    'max_tokens' => 150,
]);
```

### Streaming Chat Completions

For real-time applications, use streaming responses:

```
// Stream responses in real-time
$streamResponse = $client->chatCompletion()->createStream([
    'model' => 'meta-llama/Llama-3.1-8B-Instruct',
    'messages' => [
        ['role' => 'user', 'content' => 'Tell me a story about space exploration.']
    ],
    'provider' => Provider::SAMBANOVA,
    'max_tokens' => 200,
    'temperature' => 0.7,
]);

// Process chunks as they arrive
echo "Streaming response: ";
foreach ($streamResponse->getIterator() as $chunk) {
    if (isset($chunk['choices'][0]['delta']['content'])) {
        echo $chunk['choices'][0]['delta']['content'];
        flush(); // Output immediately for real-time effect
    }
}

// Or collect the complete response
$completeResponse = $streamResponse->collect();
echo $completeResponse->choices[0]->message->content;

// Manual chunk processing with progress tracking
$fullContent = '';
$chunkCount = 0;

foreach ($streamResponse->getIterator() as $chunk) {
    $chunkCount++;

    if (isset($chunk['choices'][0]['delta']['content'])) {
        $content = $chunk['choices'][0]['delta']['content'];
        $fullContent .= $content;

        // Custom processing per chunk
        echo "Chunk $chunkCount: " . json_encode($content) . "\n";
    }

    // Check for completion
    if (isset($chunk['choices'][0]['finish_reason'])) {
        echo "Finished: " . $chunk['choices'][0]['finish_reason'] . "\n";
        break;
    }
}
```

### Text Generation

Generate creative text continuations using various language models:

```
use AzahariZaman\Huggingface\Huggingface;
use AzahariZaman\Huggingface\Enums\Type;

$client = Huggingface::client(getenv('HUGGINGFACE_API_KEY'));

// Using GPT-2
$result = $client->inference()->create([
    'model' => 'gpt2',
    'inputs' => 'The future of artificial intelligence is',
    'type' => Type::TEXT_GENERATION,
]);

echo $result['generated_text'] . "\n";
// Output: "The future of artificial intelligence is bright and full of possibilities..."

// Using Microsoft DialoGPT for conversational AI
$result = $client->inference()->create([
    'model' => 'microsoft/DialoGPT-medium',
    'inputs' => 'Hello, how are you today?',
    'type' => Type::TEXT_GENERATION,
]);

echo $result['generated_text'] . "\n";

// Using CodeT5 for code generation
$result = $client->inference()->create([
    'model' => 'Salesforce/codet5-base',
    'inputs' => 'def fibonacci(n):',
    'type' => Type::TEXT_GENERATION,
]);

echo $result['generated_text'] . "\n";
```

### Fill Mask

Predict missing words in sentences using masked language models:

```
// Using BERT for general text
$result = $client->inference()->create([
    'model' => 'bert-base-uncased',
    'inputs' => 'The capital of France is [MASK].',
    'type' => Type::FILL_MASK,
]);

foreach ($result['filled_masks'] as $prediction) {
    echo "Token: {$prediction['token_str']}, Score: {$prediction['score']}\n";
    echo "Full sequence: {$prediction['sequence']}\n\n";
}

// Using RoBERTa for more accurate predictions
$result = $client->inference()->create([
    'model' => 'roberta-base',
    'inputs' => 'The best programming language for web development is <mask>.',
    'type' => Type::FILL_MASK,
]);

// Using domain-specific models like BioBERT for medical text
$result = $client->inference()->create([
    'model' => 'dmis-lab/biobert-base-cased-v1.1',
    'inputs' => 'The patient was diagnosed with [MASK] diabetes.',
    'type' => Type::FILL_MASK,
]);
```

### Summarization

Create concise summaries of longer texts:

```
// Using BART for general summarization
$longText = "Artificial intelligence (AI) is intelligence demonstrated by machines, " .
           "in contrast to the natural intelligence displayed by humans and animals. " .
           "Leading AI textbooks define the field as the study of 'intelligent agents': " .
           "any device that perceives its environment and takes actions that maximize " .
           "its chance of successfully achieving its goals.";

$result = $client->inference()->create([
    'model' => 'facebook/bart-large-cnn',
    'inputs' => $longText,
    'type' => Type::SUMMARIZATION,
]);

echo "Summary: " . $result['summary_text'] . "\n";

// Using T5 for more flexible summarization
$result = $client->inference()->create([
    'model' => 't5-base',
    'inputs' => 'summarize: ' . $longText,
    'type' => Type::SUMMARIZATION,
]);

// Using Pegasus for news article summarization
$newsArticle = "The latest research in quantum computing shows promising results...";
$result = $client->inference()->create([
    'model' => 'google/pegasus-xsum',
    'inputs' => $newsArticle,
    'type' => Type::SUMMARIZATION,
]);
```

### Sentiment Analysis

Analyze the emotional tone and sentiment of text:

```
// Basic sentiment analysis
$result = $client->inference()->create([
    'model' => 'distilbert-base-uncased-finetuned-sst-2-english',
    'inputs' => 'I absolutely love this new product! It works perfectly.',
    'type' => Type::SENTIMENT_ANALYSIS,
]);

foreach ($result['sentiment_analysis'] as $sentiment) {
    echo "Label: {$sentiment['label']}, Score: {$sentiment['score']}\n";
}

// Multi-language sentiment analysis
$result = $client->inference()->create([
    'model' => 'nlptown/bert-base-multilingual-uncased-sentiment',
    'inputs' => 'Este producto es increíble, me encanta!',
    'type' => Type::SENTIMENT_ANALYSIS,
]);

// Financial sentiment analysis
$result = $client->inference()->create([
    'model' => 'ProsusAI/finbert',
    'inputs' => 'The company reported strong quarterly earnings with revenue growth of 15%.',
    'type' => Type::SENTIMENT_ANALYSIS,
]);

// Customer review sentiment
$result = $client->inference()->create([
    'model' => 'cardiffnlp/twitter-roberta-base-sentiment-latest',
    'inputs' => 'Just tried the new restaurant downtown. Food was amazing but service was slow.',
    'type' => Type::SENTIMENT_ANALYSIS,
]);
```

### Emotion Classification

Detect specific emotions in text with fine-grained analysis:

```
// Detect multiple emotions
$result = $client->inference()->create([
    'model' => 'SamLowe/roberta-base-go_emotions',
    'inputs' => 'I am so excited about the upcoming vacation! Can\'t wait to relax on the beach.',
    'type' => Type::EMOTION_CLASSIFICATION,
]);

foreach ($result['emotion_classification'] as $emotions) {
    foreach ($emotions as $emotion => $score) {
        echo "Emotion: {$emotion}, Intensity: " . number_format($score, 3) . "\n";
    }
}

// Emotional analysis for customer support
$result = $client->inference()->create([
    'model' => 'j-hartmann/emotion-english-distilroberta-base',
    'inputs' => 'I am frustrated with this service. Nothing works as expected and support is unhelpful.',
    'type' => Type::EMOTION_CLASSIFICATION,
]);

// Detect emotions in social media posts
$result = $client->inference()->create([
    'model' => 'cardiffnlp/twitter-roberta-base-emotion',
    'inputs' => 'Watching the sunset with my loved ones. Feeling grateful for these precious moments.',
    'type' => Type::EMOTION_CLASSIFICATION,
]);
```

### Hub API

Access model metadata, inference status, and provider information:

```
// Get model information
$modelInfo = $client->hub()->getModel([
    'model_id' => 'meta-llama/Llama-3.1-8B-Instruct',
    'expand' => ['inference', 'inferenceProviderMapping']
]);

echo "Model: " . $modelInfo->id . "\n";
echo "Inference status: " . ($modelInfo->inference ?? 'not available') . "\n";

if ($modelInfo->inferenceProviderMapping) {
    echo "Available providers:\n";
    foreach ($modelInfo->inferenceProviderMapping as $provider => $info) {
        echo "- $provider: " . $info['status'] . " (" . $info['task'] . ")\n";
    }
}

// List models by provider
$models = $client->hub()->listModels([
    'inference_provider' => 'sambanova',
    'pipeline_tag' => 'text-generation',
    'limit' => 10
]);

foreach ($models->models as $model) {
    echo "Model: " . $model->id . "\n";
}

// Get user information
$userInfo = $client->hub()->whoami();
echo "Username: " . $userInfo['name'] . "\n";
```

Advanced Usage
--------------

### Provider Selection

Choose specific inference providers for your requests:

```
use AzahariZaman\Huggingface\Enums\Provider;

// Using Sambanova for fast inference
$result = $client->chatCompletion()->create([
    'model' => 'meta-llama/Llama-3.1-8B-Instruct',
    'messages' => [['role' => 'user', 'content' => 'Hello!']],
    'provider' => Provider::SAMBANOVA,
]);

// Using Together AI
$result = $client->chatCompletion()->create([
    'model' => 'meta-llama/Llama-3.1-8B-Instruct',
    'messages' => [['role' => 'user', 'content' => 'Hello!']],
    'provider' => Provider::TOGETHER,
]);

// Auto-select best provider
$result = $client->chatCompletion()->create([
    'model' => 'meta-llama/Llama-3.1-8B-Instruct',
    'messages' => [['role' => 'user', 'content' => 'Hello!']],
    'provider' => Provider::AUTO, // or omit provider parameter
]);
```

### Auto Task Type Detection

The client can automatically detect the appropriate task type based on the model:

```
// Automatic detection - no need to specify type
$result = $client->inference()->create([
    'model' => 'openai/whisper-large-v3',
    'inputs' => $audioData, // Will auto-detect as AUTOMATIC_SPEECH_RECOGNITION
]);

$result = $client->inference()->create([
    'model' => 'stabilityai/stable-diffusion-2',
    'inputs' => 'A beautiful sunset', // Will auto-detect as TEXT_TO_IMAGE
]);

$result = $client->inference()->create([
    'model' => 'sentence-transformers/all-MiniLM-L6-v2',
    'inputs' => ['source' => 'Hello', 'targets' => ['Hi', 'Goodbye']], // Will auto-detect as SENTENCE_SIMILARITY
]);

// You can still override with explicit type
$result = $client->inference()->create([
    'model' => 'gpt2',
    'inputs' => 'Translate this: Hello',
    'type' => Type::TRANSLATION, // Override auto-detection
]);
```

### Custom Configuration

Customize the client with advanced options:

```
use AzahariZaman\Huggingface\Huggingface;

// Using the factory for advanced configuration
$client = Huggingface::factory()
    ->withApiKey(getenv('HUGGINGFACE_API_KEY'))
    ->withBaseUri('https://api-inference.huggingface.co')
    ->withHttpHeader('User-Agent', 'MyApp/1.0')
    ->withQueryParam('wait_for_model', 'true')
    ->make();

// Custom HTTP client configuration
$httpClient = new \GuzzleHttp\Client([
    'timeout' => 30,
    'connect_timeout' => 10,
]);

$client = Huggingface::factory()
    ->withApiKey(getenv('HUGGINGFACE_API_KEY'))
    ->withHttpClient($httpClient)
    ->make();

// Stream handling for real-time responses
$streamHandler = function ($request) use ($httpClient) {
    return $httpClient->send($request, ['stream' => true]);
};

$client = Huggingface::factory()
    ->withApiKey(getenv('HUGGINGFACE_API_KEY'))
    ->withStreamHandler($streamHandler)
    ->make();
```

### Error Handling

Handle various types of errors gracefully:

```
use AzahariZaman\Huggingface\Exceptions\ErrorException;
use AzahariZaman\Huggingface\Exceptions\TransporterException;
use AzahariZaman\Huggingface\Exceptions\UnserializableResponse;

try {
    $result = $client->inference()->create([
        'model' => 'invalid-model-name',
        'inputs' => 'Test input',
        'type' => Type::TEXT_GENERATION,
    ]);
} catch (ErrorException $e) {
    // Handle API errors (model not found, invalid input, etc.)
    echo "API Error: " . $e->getMessage() . "\n";
    echo "Error Type: " . $e->getErrorType() . "\n";
    echo "Error Code: " . $e->getErrorCode() . "\n";
} catch (TransporterException $e) {
    // Handle network/HTTP errors
    echo "Network Error: " . $e->getMessage() . "\n";
} catch (UnserializableResponse $e) {
    // Handle response parsing errors
    echo "Response Error: " . $e->getMessage() . "\n";
}
```

Supported Task Types
--------------------

The library supports all major Hugging Face inference tasks:

### Traditional NLP Tasks

- **TEXT\_GENERATION**: Generate text continuations
- **FILL\_MASK**: Fill missing words in sentences
- **SUMMARIZATION**: Create text summaries
- **SENTIMENT\_ANALYSIS**: Analyze emotional tone
- **EMOTION\_CLASSIFICATION**: Detect specific emotions
- **TRANSLATION**: Translate between languages

### Audio Tasks

- **AUTOMATIC\_SPEECH\_RECOGNITION**: Convert speech to text
- **AUDIO\_CLASSIFICATION**: Classify audio content
- **AUDIO\_TO\_AUDIO**: Audio enhancement and transformation
- **TEXT\_TO\_SPEECH**: Generate speech from text

### Vision Tasks

- **IMAGE\_TO\_TEXT**: Generate captions for images
- **TEXT\_TO\_IMAGE**: Generate images from text descriptions
- **IMAGE\_TO\_IMAGE**: Transform images based on prompts

### Multimodal Tasks

- **IMAGE\_TEXT\_TO\_TEXT**: Process images and text together
- **CONVERSATIONAL**: Multi-turn chat conversations

### Specialized Tasks

- **SENTENCE\_SIMILARITY**: Compare text similarity
- **CHAT\_COMPLETION**: OpenAI-compatible chat interface

Inference Providers
-------------------

Choose from multiple inference providers for optimal performance:

### Supported Providers

- **SAMBANOVA**: Fast inference with competitive pricing
- **TOGETHER**: High-quality models with good performance
- **REPLICATE**: Specialized in image and video generation
- **FAL\_AI**: Fast inference for creative tasks
- **FIREWORKS\_AI**: Enterprise-grade inference
- **CEREBRAS**: Optimized for large language models
- **COHERE**: Advanced language understanding
- **GROQ**: Ultra-fast inference speeds
- **MISTRAL**: European AI provider
- **OPENAI**: Direct OpenAI API integration
- **ANTHROPIC**: Claude models access
- **HUGGINGFACE**: Default Hugging Face inference
- **AUTO**: Automatic provider selection

### Provider Usage

```
use AzahariZaman\Huggingface\Enums\Provider;

// Specify provider explicitly
$result = $client->chatCompletion()->create([
    'model' => 'meta-llama/Llama-3.1-8B-Instruct',
    'messages' => [['role' => 'user', 'content' => 'Hello!']],
    'provider' => Provider::SAMBANOVA,
]);

// Let the system choose the best provider
$result = $client->chatCompletion()->create([
    'model' => 'meta-llama/Llama-3.1-8B-Instruct',
    'messages' => [['role' => 'user', 'content' => 'Hello!']],
    'provider' => Provider::AUTO,
]);
```

Popular Models
--------------

Here are some popular models for different tasks:

### Text Generation

- **GPT-2**: `gpt2`, `gpt2-medium`, `gpt2-large`, `gpt2-xl`
- **GPT-Neo**: `EleutherAI/gpt-neo-1.3B`, `EleutherAI/gpt-neo-2.7B`
- **DialoGPT**: `microsoft/DialoGPT-medium`, `microsoft/DialoGPT-large`
- **CodeT5**: `Salesforce/codet5-base`, `Salesforce/codet5-large`

### Fill Mask

- **BERT**: `bert-base-uncased`, `bert-large-uncased`
- **RoBERTa**: `roberta-base`, `roberta-large`
- **DeBERTa**: `microsoft/deberta-base`, `microsoft/deberta-large`
- **ALBERT**: `albert-base-v2`, `albert-large-v2`

### Summarization

- **BART**: `facebook/bart-large-cnn`, `facebook/bart-large-xsum`
- **T5**: `t5-small`, `t5-base`, `t5-large`
- **Pegasus**: `google/pegasus-xsum`, `google/pegasus-cnn_dailymail`
- **LED**: `allenai/led-base-16384`

### Sentiment Analysis

- **DistilBERT**: `distilbert-base-uncased-finetuned-sst-2-english`
- **RoBERTa**: `cardiffnlp/twitter-roberta-base-sentiment-latest`
- **FinBERT**: `ProsusAI/finbert` (Financial sentiment)
- **Multilingual**: `nlptown/bert-base-multilingual-uncased-sentiment`

### Emotion Classification

- **GoEmotions**: `SamLowe/roberta-base-go_emotions`
- **Emotion**: `j-hartmann/emotion-english-distilroberta-base`
- **Twitter Emotion**: `cardiffnlp/twitter-roberta-base-emotion`

📁 Examples
----------

The `examples/` directory contains comprehensive demonstrations of all features and AI tasks supported by this library:

### 🚀 Quick Start

- **`quick_test.php`** - Instant verification that the library is working correctly
- **`inference.php`** - Basic inference examples with common AI tasks

### 🤖 AI Task Examples

- **`advanced_inference_tasks.php`** - Complete showcase of all 17 AI task types including:
    - 🖼️ **Vision**: Image captioning, text-to-image, visual Q&A
    - 🔊 **Audio**: Speech recognition, text-to-speech, audio classification
    - 🌐 **Language**: Translation, sentence similarity, conversational AI
    - 📝 **Text**: Generation, sentiment analysis, summarization

### ⚡ Advanced Features

- **`streaming_example.php`** - Real-time streaming responses
- **`comprehensive_example.php`** - Full API integration with multiple providers

### 🏃 Running Examples

```
# Quick verification (no API key needed)
php examples/quick_test.php

# Basic AI tasks (set API key first)
export HUGGINGFACE_API_KEY="your_key_here"
php examples/inference.php

# Explore advanced AI capabilities
php examples/advanced_inference_tasks.php
```

> 💡 **Tip**: Each example file is self-contained and includes error handling, so you can run them individually to explore specific features.

Testing
-------

Huggingface PHP uses PHPUnit for testing. The test suite provides comprehensive coverage of all classes, methods, and lines.

### Running Tests

To run the test suite:

```
composer test
```

### Running Tests with Coverage

To generate a code coverage report:

```
composer test-coverage
```

This will display coverage statistics in the terminal. The current test suite achieves:

- **Lines: 100%** (249/249)
- **Methods: 100%** (80/80)
- **Classes: 100%** (21/21)

### Running Specific Tests

To run a specific test file:

```
vendor/bin/phpunit tests/HuggingfaceTest.php
```

To run tests for a specific class or method:

```
vendor/bin/phpunit --filter=testMethodName
```

### Test Structure

The test suite is organized to mirror the source code structure:

```
tests/
├── Core: Huggingface, Factory, Client
├── Resources: Inference
├── Transporters: HttpTransporter
├── ValueObjects: ApiKey, ResourceUri, BaseUri, Headers, QueryParams, Payload
├── Enums: Type, Method, ContentType
├── Responses: CreateResponse + specialized response types
├── Exceptions: ErrorException, TransporterException, UnserializableResponse
└── Traits: ArrayAccessible

```

Acknowledgements
----------------

This library was inspired at the source level by the PHP OpenAI client and Kambo-1st/Huggingface-php. Portions of the code have been copied directly from these outstanding libraries.

License
-------

Huggingface PHP is open-source software licensed under the **[MIT License](https://opensource.org/licenses/MIT)** - see the [LICENSE](LICENSE) file for details.
