falahatiali/homa
================

Homa - The legendary bird that brings AI wisdom to Laravel. A simple and elegant AI assistant package.

v1.4.1 · MIT · PHP ^8.1

[Source](https://github.com/falahatiali/Homa) · [Packagist](https://packagist.org/packages/falahatiali/homa)

![Homa - The legendary bird](.github/images/homa.png)

Homa 🦅
======


**The legendary bird that brings AI wisdom to Laravel.**

[![Tests](https://github.com/falahatiali/homa/actions/workflows/tests.yml/badge.svg)](https://github.com/falahatiali/homa/actions/workflows/tests.yml)[![Code Quality](https://github.com/falahatiali/homa/actions/workflows/code-quality.yml/badge.svg)](https://github.com/falahatiali/homa/actions/workflows/code-quality.yml)[![Latest Version](https://camo.githubusercontent.com/20c2dc55a3c978e3051ff498f3549103ea42662184b2efb49051c1f259ebf47c/68747470733a2f2f696d672e736869656c64732e696f2f7061636b61676973742f762f66616c6168617469616c692f686f6d612e7376673f7374796c653d666c61742d737175617265)](https://packagist.org/packages/falahatiali/homa)[![Total Downloads](https://camo.githubusercontent.com/4f8e022d9fcdd58374c13705087aca55294daeb9879c75b664638a578cf38b09/68747470733a2f2f696d672e736869656c64732e696f2f7061636b61676973742f64742f66616c6168617469616c692f686f6d612e7376673f7374796c653d666c61742d737175617265)](https://packagist.org/packages/falahatiali/homa)[![License](https://camo.githubusercontent.com/957c82d43f130c5cc92b5103028704d1e37c1cb9f7ae27e4371ea75ba318a212/68747470733a2f2f696d672e736869656c64732e696f2f7061636b61676973742f6c2f66616c6168617469616c692f686f6d612e7376673f7374796c653d666c61742d737175617265)](https://packagist.org/packages/falahatiali/homa)

---

Homa is a simple and elegant AI assistant package for Laravel applications. Integrate multiple AI providers (OpenAI, Anthropic Claude, and more) with a clean, fluent API. Named after the mythical Persian bird that brings good fortune and wisdom to those it flies over.

✨ Features
----------


- 🦅 **Simple, Fluent API** - Elegant interface inspired by Laravel's design philosophy
- 🔌 **Multiple AI Providers** - Support for OpenAI (GPT-4, GPT-3.5), Anthropic (Claude), Grok, Groq, Google Gemini, and Ollama (local)
- 💬 **Conversation Management** - Built-in context-aware multi-turn conversations
- ⚙️ **Highly Configurable** - Extensive configuration options for every use case
- 🧪 **Fully Tested** - 70 tests with 135 assertions covering all critical paths
- 📦 **Zero Configuration** - Works out of the box with sensible defaults
- 🎯 **Extensible** - Easy to add custom AI providers via Factory Pattern
- 🔒 **Type Safe** - Full PHP 8.1+ type hints and return types
- 🏗️ **SOLID Principles** - Clean architecture following best practices
- ⚡ **Production Ready** - Uses official OpenAI PHP client for reliability
- 🔍 **Code Quality** - PHPStan level 5 + Laravel Pint for consistency

📋 Requirements
--------------


- PHP 8.1 or higher
- Laravel 10.x or 11.x
- API keys for your chosen AI provider(s)

📦 Installation
--------------


Install the package via Composer:

```
composer require falahatiali/homa
```

### Publish Configuration (Optional)


```
php artisan vendor:publish --tag=homa-config
```

This will create a `config/homa.php` configuration file.

### Quick Setup


1. **Copy environment file:**

    ```
    cp .env.example .env
    ```
2. **Add your API keys to `.env`:**

    ```
    HOMA_PROVIDER=openai
    OPENAI_API_KEY=sk-your-actual-api-key
    ```
3. **Start using Homa:**

    ```
    use Homa\Facades\Homa;
    $response = Homa::ask('Hello!');
    ```

### Configure Your API Keys


Copy the example environment file and add your AI provider API keys:

```
# Copy the example environment file
cp .env.example .env

# Edit with your actual API keys
nano .env
```

**Required Environment Variables:**

```
# Choose your default provider (openai, anthropic, grok, groq, gemini, ollama)
HOMA_PROVIDER=openai

# OpenAI Configuration
OPENAI_API_KEY=sk-your-openai-api-key-here
OPENAI_MODEL=gpt-4

# Anthropic Configuration
ANTHROPIC_API_KEY=sk-ant-your-anthropic-api-key-here
ANTHROPIC_MODEL=claude-3-5-sonnet-20241022

# Grok Configuration
GROK_API_KEY=xai-your-grok-api-key-here
GROK_MODEL=grok-2

# Groq Configuration (Ultra-fast inference)
GROQ_API_KEY=gsk_your-groq-api-key-here
GROQ_MODEL=openai/gpt-oss-20b

# Gemini Configuration (Google AI with multimodal)
GEMINI_API_KEY=your-gemini-api-key-here
GEMINI_MODEL=gemini-2.0-flash-exp

# Ollama (Local, free)
OLLAMA_API_URL=http://localhost:11434
OLLAMA_MODEL=llama3
```

**Get API Keys:**

- **OpenAI**: [platform.openai.com/api-keys](https://platform.openai.com/api-keys)
- **Anthropic**: [console.anthropic.com](https://console.anthropic.com/)
- **Grok**: [console.x.ai](https://console.x.ai/)
- **Groq**: [console.groq.com](https://console.groq.com/)
- **Gemini**: [aistudio.google.com/apikey](https://aistudio.google.com/apikey)

### 🖥️ Use Ollama locally (free)


Ollama lets you run models such as Llama 3, Mistral, and Qwen locally with no API cost.

1. Install Ollama

```
curl -fsSL https://ollama.com/install.sh | sh    # Linux
# macOS/Windows: download the installer at https://ollama.com/download
```

2. Download and run a model

```
ollama run llama3               # or: mistral:7b-instruct, qwen2.5:7b-instruct
```

3. Configure Homa for Ollama

```
HOMA_PROVIDER=ollama
OLLAMA_API_URL=http://localhost:11434
OLLAMA_MODEL=llama3
```

4. Use in code

```
$response = Homa::provider('ollama')->ask('Explain Laravel service container.');
echo $response->content();
```

Recommended local models (balanced speed and quality): `llama3.1:8b-instruct`, `mistral:7b-instruct`, `qwen2.5:7b-instruct`, `phi3:mini`. Browse models at `https://ollama.com/library`, with curated lists at `https://llm-explorer.com/`.
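Before pointing Homa at Ollama, you can verify the local server is reachable. This check uses Ollama's standard REST API (not part of Homa itself):

```shell
# List the models your local Ollama server has pulled.
# An empty "models" array means you still need to run `ollama pull <model>`.
curl -s http://localhost:11434/api/tags
```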

🚀 Quick Start
-------------


### Basic Usage


Ask a simple question:

```
use Homa\Facades\Homa;

$response = Homa::ask('What is Laravel?');
echo $response->content();
```

### Configure On-The-Fly


Chain configuration methods for custom behavior:

```
$response = Homa::model('gpt-4')
    ->temperature(0.7)
    ->maxTokens(500)
    ->ask('Explain dependency injection in Laravel');

echo $response->content();
```

### Switch Between Providers


Easily switch between different AI providers:

```
// Use OpenAI
$openaiResponse = Homa::provider('openai')
    ->model('gpt-4')
    ->ask('What is Laravel?');

// Use Anthropic Claude
$claudeResponse = Homa::provider('anthropic')
    ->model('claude-3-5-sonnet-20241022')
    ->ask('What is Laravel?');

// Use Groq (Ultra-fast inference)
$groqResponse = Homa::provider('groq')
    ->model('openai/gpt-oss-20b')
    ->ask('What is Laravel?');

// Use Gemini (Google AI with multimodal)
$geminiResponse = Homa::provider('gemini')
    ->model('gemini-2.0-flash-exp')
    ->ask('What is Laravel?');
```

### Custom System Prompts


Set custom system prompts for specialized behavior:

```
$response = Homa::systemPrompt('You are a Laravel expert. Answer concisely.')
    ->ask('What is a service provider?');
```

### Multi-Turn Conversations


Create context-aware conversations:

```
$conversation = Homa::startConversation();

$response1 = $conversation->ask('Hello! My name is Ali.');
// AI: Hello Ali! Nice to meet you...

$response2 = $conversation->ask('What is my name?');
// AI: Your name is Ali.

// Access conversation history
$history = $conversation->history();

// Clear conversation and start fresh
$conversation->clear();
```

### Advanced Chat Control


For full control over the conversation, use the `chat()` method:

```
$messages = [
    ['role' => 'system', 'content' => 'You are a helpful Laravel assistant.'],
    ['role' => 'user', 'content' => 'What are service containers?'],
    ['role' => 'assistant', 'content' => 'Service containers are...'],
    ['role' => 'user', 'content' => 'Can you give me an example?'],
];

$response = Homa::chat($messages);
```

### Working with Responses


The `AIResponse` object provides several useful methods:

```
$response = Homa::ask('Hello!');

// Get the response content
$content = $response->content();

// Get the model used
$model = $response->model();

// Get usage statistics (tokens, etc.)
$usage = $response->usage();

// Get raw API response
$raw = $response->raw();

// Convert to array
$array = $response->toArray();

// Convert to JSON
$json = $response->toJson();

// Use as string
echo $response; // Automatically calls content()
```
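The automatic string conversion above relies on PHP's `__toString()` magic method. The class below is a stand-in, not Homa's actual `AIResponse` source, and only illustrates the pattern that makes `echo $response` print the content:

```php
// Stand-in response class (NOT Homa's real AIResponse) showing how
// __toString() lets an object be used directly as a string.
class FakeResponse
{
    public function __construct(private string $content) {}

    public function content(): string
    {
        return $this->content;
    }

    // PHP calls this automatically when the object is used in a string context.
    public function __toString(): string
    {
        return $this->content();
    }
}

$response = new FakeResponse('Hello from the sketch!');
echo $response, PHP_EOL; // prints "Hello from the sketch!"
```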

⚙️ Configuration
----------------


### Configuration File


After publishing the config file, you can customize every option in `config/homa.php`. See `.env.example` for the full list of supported environment variables:

```
return [
    // Default AI provider
    'default' => env('HOMA_PROVIDER', 'openai'),

    // Provider configurations
    'providers' => [
        'openai' => [
            'api_key' => env('OPENAI_API_KEY'),
            'api_url' => env('OPENAI_API_URL', 'https://api.openai.com/v1'),
            'model' => env('OPENAI_MODEL', 'gpt-4'),
            'temperature' => env('OPENAI_TEMPERATURE', 0.7),
            'max_tokens' => env('OPENAI_MAX_TOKENS', 1000),
            'timeout' => env('OPENAI_TIMEOUT', 30),
        ],
        'anthropic' => [
            'api_key' => env('ANTHROPIC_API_KEY'),
            'api_url' => env('ANTHROPIC_API_URL', 'https://api.anthropic.com/v1'),
            'model' => env('ANTHROPIC_MODEL', 'claude-3-5-sonnet-20241022'),
            'temperature' => env('ANTHROPIC_TEMPERATURE', 0.7),
            'max_tokens' => env('ANTHROPIC_MAX_TOKENS', 1000),
            'timeout' => env('ANTHROPIC_TIMEOUT', 30),
        ],
        'grok' => [
            'api_key' => env('GROK_API_KEY'),
            'model' => env('GROK_MODEL', 'grok-2'),
            'temperature' => env('GROK_TEMPERATURE', 0.7),
            'max_tokens' => env('GROK_MAX_TOKENS', 1000),
        ],
        'groq' => [
            'api_key' => env('GROQ_API_KEY'),
            'api_url' => env('GROQ_API_URL', 'https://api.groq.com/openai/v1'),
            'model' => env('GROQ_MODEL', 'openai/gpt-oss-20b'),
            'temperature' => env('GROQ_TEMPERATURE', 0.7),
            'max_tokens' => env('GROQ_MAX_TOKENS', 1000),
            'timeout' => env('GROQ_TIMEOUT', 30),
        ],
        'gemini' => [
            'api_key' => env('GEMINI_API_KEY'),
            'base_uri' => env('GEMINI_BASE_URI', 'https://generativelanguage.googleapis.com/v1beta'),
            'model' => env('GEMINI_MODEL', 'gemini-2.0-flash-exp'),
            'temperature' => env('GEMINI_TEMPERATURE', 0.7),
            'max_tokens' => env('GEMINI_MAX_TOKENS', 1000),
            'timeout' => env('GEMINI_TIMEOUT', 30),
        ],
    ],

    // Default system prompt
    'system_prompt' => env('HOMA_SYSTEM_PROMPT', 'You are a helpful AI assistant.'),

    // Logging configuration
    'logging' => [
        'enabled' => env('HOMA_LOGGING', false),
        'channel' => env('HOMA_LOG_CHANNEL', 'stack'),
    ],

    // Caching configuration
    'cache' => [
        'enabled' => env('HOMA_CACHE_ENABLED', false),
        'ttl' => env('HOMA_CACHE_TTL', 3600),
        'prefix' => 'homa_',
    ],
];
```

### Available Models


**OpenAI:**

- `gpt-5` - Latest, most advanced model
- `gpt-5o` - Optimized GPT-5 variant
- `gpt-4o` - Latest GPT-4 with vision capabilities
- `gpt-4o-mini` - Smaller, faster GPT-4o
- `gpt-4-turbo` - Fast GPT-4 variant
- `gpt-4` - Most capable, best for complex tasks
- `gpt-3.5-turbo` - Fast and cost-effective

**Anthropic:**

- `claude-3-5-sonnet-20241022` - Latest, most capable
- `claude-3-opus-20240229` - Most powerful for complex tasks
- `claude-3-sonnet-20240229` - Balanced performance
- `claude-3-haiku-20240307` - Fastest, most cost-effective

**Groq (Ultra-fast inference):**

- `openai/gpt-oss-20b` - Large, capable model
- `openai/gpt-oss-7b` - Smaller, faster model
- `llama-3.1-70b-versatile` - Meta's Llama model
- `llama-3.1-8b-instant` - Fast Llama model
- `mixtral-8x7b-32768` - Mixtral model
- `gemma-7b-it` - Google's Gemma model

**Gemini (Google AI with multimodal capabilities):**

- `gemini-2.0-flash-exp` - Latest, fastest (experimental)
- `gemini-1.5-pro-latest` - Most capable
- `gemini-1.5-flash-latest` - Balanced speed and capability
- `gemini-1.5-pro` - Stable pro model
- `gemini-1.5-flash` - Fast and efficient
- `gemini-1.5-pro-002` - Versioned pro model
- `gemini-1.5-flash-002` - Versioned flash model

🎯 Use Cases
-----------


### Content Generation


```
$blogPost = Homa::model('gpt-4')
    ->maxTokens(2000)
    ->ask('Write a blog post about Laravel best practices');
```

### Code Assistance


```
$response = Homa::systemPrompt('You are an expert PHP developer.')
    ->ask('Review this code and suggest improvements: ' . $code);
```

### Customer Support Bot


```
$conversation = Homa::systemPrompt('You are a helpful customer support agent.')
    ->startConversation();

$response = $conversation->ask($customerQuestion);
```

### Data Analysis


```
$analysis = Homa::model('claude-3-5-sonnet-20241022')
    ->ask("Analyze this data and provide insights: " . json_encode($data));
```

🏗️ Architecture
---------------


### Package Structure


```
homa/
├── config/
│   └── homa.php                    # Configuration file
├── src/
│   ├── Contracts/
│   │   └── AIProviderInterface.php # Provider interface
│   ├── Conversation/
│   │   └── Conversation.php        # Conversation manager
│   ├── Exceptions/
│   │   ├── AIException.php         # Base exception
│   │   └── ConfigurationException.php
│   ├── Facades/
│   │   └── Homa.php                # Laravel facade
│   ├── Manager/
│   │   └── HomaManager.php         # Main manager class
│   ├── Providers/
│   │   ├── AnthropicProvider.php   # Anthropic implementation
│   │   └── OpenAIProvider.php      # OpenAI implementation
│   ├── Response/
│   │   └── AIResponse.php          # Response wrapper
│   └── HomaServiceProvider.php     # Laravel service provider
└── tests/                          # Comprehensive test suite

```

### Adding Custom Providers


You can extend Homa with custom AI providers by implementing the `AIProviderInterface`:

```
use Homa\Contracts\AIProviderInterface;
use Homa\Response\AIResponse;

class CustomProvider implements AIProviderInterface
{
    public function sendMessage(array $messages, array $options = []): AIResponse
    {
        // Your implementation
    }

    // Implement other required methods...
}
```
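To make the snippet above concrete, here is a self-contained sketch. The interface and response class are local stand-ins that mirror the names used above so the example runs without the package or network access; in a real application you would implement `Homa\Contracts\AIProviderInterface` and return `Homa\Response\AIResponse` instead:

```php
// Local stand-ins mirroring Homa's contract, for illustration only.
interface AIProviderInterface
{
    public function sendMessage(array $messages, array $options = []): AIResponse;
}

class AIResponse
{
    public function __construct(private string $content, private string $model) {}

    public function content(): string { return $this->content; }
    public function model(): string { return $this->model; }
}

// A toy provider that echoes the last user message back,
// standing in for a real HTTP call to an AI API.
class EchoProvider implements AIProviderInterface
{
    public function sendMessage(array $messages, array $options = []): AIResponse
    {
        $last = end($messages);

        return new AIResponse(
            'You said: ' . $last['content'],
            $options['model'] ?? 'echo-1'
        );
    }
}

$provider = new EchoProvider();
$response = $provider->sendMessage([
    ['role' => 'user', 'content' => 'Hello Homa!'],
]);
echo $response->content(), PHP_EOL; // prints "You said: Hello Homa!"
```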

🧪 Testing
---------


Run the test suite:

```
composer test
```

Or with PHPUnit directly:

```
./vendor/bin/phpunit
```
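In your own application's tests you will usually want to avoid real API calls. Because Homa ships a Laravel facade, Laravel's standard facade mocking (`shouldReceive()`, which proxies to a Mockery mock) should apply. A sketch, with the caveat that the string return value is a stand-in (the real `ask()` returns an `AIResponse`, so build or mock one in practice):

```php
use Homa\Facades\Homa;
use Tests\TestCase;

class AskEndpointTest extends TestCase
{
    public function test_ask_endpoint_uses_homa(): void
    {
        // Standard Laravel facade mocking; no real provider is contacted.
        Homa::shouldReceive('ask')
            ->once()
            ->with('What is Laravel?')
            ->andReturn('Laravel is a PHP framework.');

        // ...exercise the code under test that calls Homa::ask(...)
    }
}
```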

🤝 Contributing
--------------


Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

📝 Changelog
-----------


Please see [CHANGELOG](CHANGELOG.md) for more information on what has changed recently.

🔒 Security
----------


If you discover any security-related issues, please email the maintainer instead of using the issue tracker.

📄 License
---------


The MIT License (MIT). Please see [License File](LICENSE.md) for more information.

🌟 Credits
---------


- **Author**: [Ali Falahati](https://github.com/falahatiali)
- **Inspired by**: The mythical Persian Homa bird, a symbol of wisdom and good fortune

📖 About Homa
------------


In Persian mythology, the **Homa** (also spelled Huma) is a legendary bird that brings good fortune and wisdom to those fortunate enough to be graced by its shadow. The bird is said to never land, continuously soaring through the skies, much like how this package aims to elevate your Laravel applications with the power of AI.

Just as the Homa bird is known for its wisdom and grace, this package strives to bring intelligent, elegant solutions to your AI integration needs, making it effortless to incorporate cutting-edge AI capabilities into your Laravel applications.

---

**May the wisdom of Homa guide your code!** 🦅

Made with ❤️ for the Laravel community
