
llm-speak/google-gemini
=======================

A Laravel package for integrating Google Gemini into LLMSpeak

Version 0.4.1 · MIT License · PHP ^8.2

[Source](https://github.com/projectsaturnstudios/llm-speak-gemini) · [Packagist](https://packagist.org/packages/llm-speak/google-gemini)

LLMSpeak Google Gemini
======================

[![License: MIT](https://camo.githubusercontent.com/fdf2982b9f5d7489dcf44570e714e3a15fce6253e0cc6b5aa61a075aac2ff71b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c6963656e73652d4d49542d79656c6c6f772e737667)](https://opensource.org/licenses/MIT)[![PHP](https://camo.githubusercontent.com/0f16581d1180dbfd4c0e13166ec1267d4ad2f2fab8281ea6d6b284cf5c65d921/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5048502d382e322532422d626c75652e737667)](https://php.net/releases/)[![Laravel](https://camo.githubusercontent.com/906dea2eb7060f4769da2f105c19b13c8e426284048c7217e23102fb572d64a8/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c61726176656c2d31302e7825374331312e7825374331322e782d7265642e737667)](https://laravel.com)[![Latest Version on Packagist](https://camo.githubusercontent.com/8a00bc2dc4ab730707cd40e00f5c696e01d30f74931e7d13d73fbd875a0ec45a/68747470733a2f2f696d672e736869656c64732e696f2f7061636b61676973742f762f6c6c6d2d737065616b2f676f6f676c652d67656d696e692e7376673f7374796c653d666c61742d737175617265)](https://packagist.org/packages/llm-speak/google-gemini)[![Total Downloads](https://camo.githubusercontent.com/21b5e369cd557b30158802edffc41abbca80b35c728c2415aed910a48d43cf75/68747470733a2f2f696d672e736869656c64732e696f2f7061636b61676973742f64742f6c6c6d2d737065616b2f676f6f676c652d67656d696e692e7376673f7374796c653d666c61742d737175617265)](https://packagist.org/packages/llm-speak/google-gemini)

**LLMSpeak Google Gemini** is a Laravel package that provides a fluent, Laravel-native interface for integrating with Google's Gemini AI models. Built as part of the LLMSpeak ecosystem, it offers seamless integration with Laravel applications through automatic service discovery and expressive request builders.

> **Note:** This package is part of the larger [LLMSpeak ecosystem](https://github.com/projectsaturnstudios/llm-speak). For universal provider switching and standardized interfaces, check out the [LLMSpeak Core](https://github.com/projectsaturnstudios/llm-speak-core) package.

Table of Contents
-----------------

- [Features](#features)
- [Get Started](#get-started)
- [Usage](#usage)
    - [Content Generation](#content-generation)
    - [Embeddings](#embeddings)
    - [Universal LLMSpeak Interface](#universal-llmspeak-interface)
    - [Advanced Universal Features](#advanced-universal-features)
    - [Why Use Universal Interface?](#why-use-universal-interface)
    - [Fluent Request Building](#fluent-request-building)
    - [Batch Configuration](#batch-configuration)
    - [System Instructions](#system-instructions)
    - [Tool Calling](#tool-calling)
    - [Thinking Mode](#thinking-mode)
    - [Safety Settings](#safety-settings)
    - [Advanced Configuration](#advanced-configuration)
- [Response Handling](#response-handling)
- [Testing](#testing)
- [Credits](#credits)
- [License](#license)

Features
--------

- **🚀 Laravel Native**: Full Laravel integration with automatic service discovery
- **🔧 Fluent Interface**: Expressive request builders with method chaining
- **📊 Laravel Data**: Powered by Spatie Laravel Data for robust data validation
- **🛠️ Tool Support**: Complete function calling capabilities
- **🧠 Thinking Mode**: Support for Gemini 2.0+ extended thinking features
- **🛡️ Safety Controls**: Built-in safety filtering and content moderation
- **📝 Embeddings**: Full text embedding support with task-specific optimization
- **🎯 Type Safety**: Full PHP 8.2+ type declarations and IDE support
- **🔐 Secure**: Built-in API key management and request validation

Get Started
-----------

> **Requires [PHP 8.2+](https://php.net/releases/) and Laravel 10.x/11.x/12.x**

Install the package via [Composer](https://getcomposer.org/):

```
composer require llm-speak/google-gemini
```

The package will automatically register itself via Laravel's package discovery.

### Environment Configuration

Add your Google API key to your `.env` file:

```
GEMINI_API_KEY=your_google_api_key_here
```

Usage
-----

### Content Generation

The simplest way to generate content with Gemini:

```
use LLMSpeak\Gemini\GeminiGenerateRequest;

$request = new GeminiGenerateRequest(
    model: 'gemini-2.0-flash-exp',
    contents: [
        [
            'parts' => [
                ['text' => 'Explain quantum computing in simple terms']
            ]
        ]
    ]
);

$response = $request->post();

echo $response->getTextContent(); // "Quantum computing is like..."
```

### Embeddings

Generate embeddings for text with task-specific optimization:

```
use LLMSpeak\Gemini\GeminiEmbeddingsRequest;

// Simple text embedding
$request = GeminiEmbeddingsRequest::withText(
    model: 'text-embedding-004',
    text: 'Generate embeddings for this text'
);

$response = $request->post();

$embeddings = $response->getEmbeddings();
$dimensions = $response->getDimensionality();

// Task-specific embedding
$request = (new GeminiEmbeddingsRequest(
    model: 'text-embedding-004',
    content: [
        'parts' => [
            ['text' => 'Research paper abstract content']
        ]
    ]
))
->setTaskType('RETRIEVAL_DOCUMENT')
->setTitle('Research Paper: AI in Healthcare')
->setOutputDimensionality(768);

$response = $request->post();
```
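Once vectors come back, similarity search is plain arithmetic. A minimal sketch of cosine similarity in dependency-free PHP, assuming each embedding is a flat array of floats as returned by `getEmbeddings()` above:

```
// Cosine similarity between two embedding vectors of equal length.
// No package dependencies; works on any flat array of floats.
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i]; // accumulate dot product
        $normA += $value ** 2;     // squared magnitude of $a
        $normB += $b[$i] ** 2;     // squared magnitude of $b
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

// Parallel vectors score 1.0; orthogonal vectors score 0.0.
echo cosineSimilarity([1.0, 2.0], [2.0, 4.0]); // 1
```

Ranking documents by `cosineSimilarity()` against a `RETRIEVAL_QUERY` embedding is the usual building block for semantic search on top of these responses.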

### Universal LLMSpeak Interface

For **provider-agnostic embeddings** that work across Gemini, Mistral, Ollama, and other providers, use the universal LLMSpeak interface:

```
use LLMSpeak\Core\Support\Facades\LLMSpeak;
use LLMSpeak\Core\Support\Requests\LLMSpeakEmbeddingsRequest;

// Universal request works with ANY provider
$request = new LLMSpeakEmbeddingsRequest(
    model: 'text-embedding-004',
    input: 'Generate embeddings for this text',
    encoding_format: null,           // Optional: 'float' or 'base64'
    dimensions: null,                // Optional: Custom dimensions (Matryoshka)
    task_type: 'SEMANTIC_SIMILARITY' // Optional: Gemini task optimization
);

// Execute with Gemini - same code works with other providers!
$response = LLMSpeak::embeddingsFrom('gemini', $request);

// Universal response methods
$embeddings = $response->getAllEmbeddings();
$firstVector = $response->getFirstEmbedding();
$dimensions = $response->getDimensions();
$tokenUsage = $response->getTotalTokens();
```

### Advanced Universal Features

Leverage Gemini's unique capabilities through the universal interface:

```
// Task-specific optimization
$request = new LLMSpeakEmbeddingsRequest(
    model: 'text-embedding-004',
    input: 'Research paper about artificial intelligence',
    encoding_format: 'float',
    dimensions: 768,                    // Matryoshka representation
    task_type: 'RETRIEVAL_DOCUMENT'    // Gemini-specific optimization
);

$response = LLMSpeak::embeddingsFrom('gemini', $request);

// Batch processing with universal interface
$batchRequest = new LLMSpeakEmbeddingsRequest(
    model: 'text-embedding-004',
    input: [
        'Document one content',
        'Document two content',
        'Document three content'
    ],
    encoding_format: 'float',
    dimensions: 512,
    task_type: 'CLUSTERING'
);

$batchResponse = LLMSpeak::embeddingsFrom('gemini', $batchRequest);

echo "Generated {$batchResponse->getEmbeddingCount()} embeddings";
echo "Vector dimensions: {$batchResponse->getDimensions()}";
```

### Why Use Universal Interface?

- **✅ Provider Independence:** Switch between Gemini, Mistral, and Ollama with zero code changes
- **✅ Future-Proof:** New providers are automatically supported
- **✅ Consistent API:** Same methods across all providers
- **✅ Type Safety:** Full PHP type declarations and IDE support
- **✅ Best of Both:** Access provider-specific features when needed

```
// Same request works with different providers!
$request = new LLMSpeakEmbeddingsRequest(/*...*/);

$geminiResponse = LLMSpeak::embeddingsFrom('gemini', $request);   // Google AI
$mistralResponse = LLMSpeak::embeddingsFrom('mistral', $request); // Mistral AI
$ollamaResponse = LLMSpeak::embeddingsFrom('ollama', $request);   // Local models
```

### Fluent Request Building

Build complex requests using the fluent interface:

```
use LLMSpeak\Gemini\GeminiGenerateRequest;

$request = (new GeminiGenerateRequest(
    model: 'gemini-2.0-flash-exp',
    contents: [
        [
            'parts' => [
                ['text' => 'Write a creative story about space exploration']
            ]
        ]
    ]
))
->setTemperature(0.9)
->setMaxOutputTokens(2048)
->setTopP(0.8)
->setTopK(40)
->setResponseMimeType('application/json')
->setStopSequences(['THE END']);

$response = $request->post();

// Access response properties
echo $response->getTextContent();
echo $response->getTotalTokens();
echo $response->getModelVersion();
```

### Batch Configuration

Set multiple parameters at once:

```
$request = (new GeminiGenerateRequest(
    model: 'gemini-2.0-flash-exp',
    contents: $conversation
))->setMultiple([
    'temperature' => 0.7,
    'maxOutputTokens' => 1024,
    'topP' => 0.9,
    'topK' => 50,
    'stopSequences' => ['Human:', 'Assistant:']
]);
```

### System Instructions

Provide system-level instructions to guide model behavior:

```
$request = (new GeminiGenerateRequest(
    model: 'gemini-2.0-flash-exp',
    contents: [
        [
            'parts' => [
                ['text' => 'Explain machine learning concepts']
            ]
        ]
    ]
))->setSystemInstruction([
    'parts' => [
        [
            'text' => 'You are an expert computer science professor. ' .
                     'Provide detailed explanations with examples and analogies. ' .
                     'Always include practical applications.'
        ]
    ]
]);

$response = $request->post();
```

### Tool Calling

Enable Gemini to use external functions and tools:

```
$tools = [
    'functionDeclarations' => [
        [
            'name' => 'get_current_weather',
            'description' => 'Get the current weather for a specific location',
            'parameters' => [
                'type' => 'object',
                'properties' => [
                    'location' => [
                        'type' => 'string',
                        'description' => 'The city and state/country'
                    ],
                    'unit' => [
                        'type' => 'string',
                        'enum' => ['celsius', 'fahrenheit']
                    ]
                ],
                'required' => ['location']
            ]
        ]
    ]
];

$request = (new GeminiGenerateRequest(
    model: 'gemini-2.0-flash-exp',
    contents: [
        [
            'parts' => [
                ['text' => 'What\'s the weather like in Tokyo?']
            ]
        ]
    ]
))
->setTools($tools)
->setToolConfig([
    'functionCallingConfig' => [
        'mode' => 'AUTO'
    ]
]);

$response = $request->post();

// Check for tool usage
if ($response->usedTools()) {
    $toolCalls = $response->getToolCalls();
    foreach ($toolCalls as $call) {
        echo "Function: {$call['name']}\n";
        echo "Arguments: " . json_encode($call['args']) . "\n";
    }
}
```
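The snippet above shows the model *requesting* a call; completing the loop means executing the function yourself and sending the result back. A hedged sketch of the follow-up `contents` payload, mirroring the Gemini REST API's `functionCall`/`functionResponse` part shapes (the exact role and key names are assumptions to verify against Google's current docs):

```
// Hypothetical local result of running get_current_weather() yourself.
$weather = ['location' => 'Tokyo', 'temperature' => 18, 'unit' => 'celsius'];

// Rebuild the conversation: the user's question, the model's functionCall
// turn, and a functionResponse part carrying your result.
$contents = [
    [
        'role'  => 'user',
        'parts' => [['text' => "What's the weather like in Tokyo?"]],
    ],
    [
        'role'  => 'model',
        'parts' => [[
            'functionCall' => [
                'name' => 'get_current_weather',
                'args' => ['location' => 'Tokyo'],
            ],
        ]],
    ],
    [
        'role'  => 'user',
        'parts' => [[
            'functionResponse' => [
                'name'     => 'get_current_weather',
                'response' => $weather,
            ],
        ]],
    ],
];
```

Passing `$contents` (with the same `$tools`) to a fresh `GeminiGenerateRequest` and calling `post()` should yield the model's natural-language answer incorporating the tool result.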

### Thinking Mode

Enable Gemini 2.0+ extended thinking capabilities:

```
$request = (new GeminiGenerateRequest(
    model: 'gemini-2.0-flash-thinking-exp',
    contents: [
        [
            'parts' => [
                ['text' => 'Solve this complex math problem: What is the derivative of x^3 + 2x^2 - 5x + 3?']
            ]
        ]
    ]
))
->setThinkingBudget(1024)  // Allow up to 1024 thinking tokens
->setIncludeThoughts(true); // Include reasoning in response

$response = $request->post();

// Access thinking content
$thoughts = $response->getThinkingContent();
$finalAnswer = $response->getTextContent();
$thinkingRatio = $response->getThinkingRatio();

echo "Reasoning: " . $thoughts;
echo "Answer: " . $finalAnswer;
echo "Thinking used: " . ($thinkingRatio * 100) . "% of budget";
```

### Safety Settings

Configure content safety and filtering:

```
$request = (new GeminiGenerateRequest(
    model: 'gemini-2.0-flash-exp',
    contents: $contents
))->setSafetySettings([
    [
        'category' => 'HARM_CATEGORY_HATE_SPEECH',
        'threshold' => 'BLOCK_MEDIUM_AND_ABOVE'
    ],
    [
        'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT',
        'threshold' => 'BLOCK_ONLY_HIGH'
    ]
]);

$response = $request->post();

// Check safety feedback
$promptFeedback = $response->getPromptFeedback();
if ($promptFeedback && isset($promptFeedback['blockReason'])) {
    echo "Content blocked: " . $promptFeedback['blockReason'];
}
```

### Advanced Configuration

Configure advanced parameters for optimal performance:

```
$request = (new GeminiGenerateRequest(
    model: 'gemini-2.0-flash-exp',
    contents: $conversation
))
->setTemperature(0.8)
->setMaxOutputTokens(4096)
->setTopP(0.95)
->setTopK(64)
->setResponseMimeType('application/json')
->setResponseSchema($jsonSchema)
->setCachedContent($cachedContentName)
->setPresencePenalty(0.2)
->setFrequencyPenalty(0.3);

$response = $request->post();
```
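The `$jsonSchema` passed to `setResponseSchema()` is not defined above; one plausible shape, following the OpenAPI-style schema object and uppercase type names of the Gemini REST API's `Type` enum (treat the exact casing and keys as assumptions to verify against Google's docs):

```
// Constrains the JSON the model returns when responseMimeType is
// 'application/json': an object with two required string fields,
// a numeric score, and a string array of tags.
$jsonSchema = [
    'type' => 'OBJECT',
    'properties' => [
        'title'   => ['type' => 'STRING'],
        'summary' => ['type' => 'STRING'],
        'score'   => ['type' => 'NUMBER'],
        'tags'    => [
            'type'  => 'ARRAY',
            'items' => ['type' => 'STRING'],
        ],
    ],
    'required' => ['title', 'summary'],
];
```

With a schema in place, `$response->getTextContent()` should be machine-parseable with `json_decode()` rather than free-form prose.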

Response Handling
-----------------

Access comprehensive response data:

```
$response = $request->post();

// Basic response info
$textContent = $response->getTextContent();
$allCandidates = $response->getAllCandidates();
$bestCandidate = $response->getBestCandidate();

// Token usage
$totalTokens = $response->getTotalTokens();
$inputTokens = $response->getInputTokens();
$outputTokens = $response->getOutputTokens();
$cachedTokens = $response->getCachedContentTokenCount();

// Model information
$modelVersion = $response->getModelVersion();
$finishReason = $response->getFinishReason();

// Safety and feedback
$promptFeedback = $response->getPromptFeedback();
$safetyRatings = $response->getSafetyRatings();

// Tool usage
$usedTools = $response->usedTools();
$toolCalls = $response->getToolCalls();

// Thinking mode (Gemini 2.0+)
$thinkingContent = $response->getThinkingContent();
$thinkingRatio = $response->getThinkingRatio();

// Completion status
$completedNaturally = $response->completedNaturally();
$hitTokenLimit = $response->reachedTokenLimit();
$wasBlocked = $response->wasBlockedBySafety();

// Convert to array for storage
$responseArray = $response->toArray();
```

Testing
-------

The package provides testing utilities for mocking Gemini responses:

```
use LLMSpeak\Gemini\GeminiGenerateRequest;
use LLMSpeak\Gemini\GeminiGenerateResponse;

// Create a mock response
$mockResponse = new GeminiGenerateResponse(
    candidates: [
        [
            'content' => [
                'parts' => [
                    ['text' => 'Mock response content']
                ]
            ],
            'finishReason' => 'STOP'
        ]
    ],
    usageMetadata: [
        'promptTokenCount' => 10,
        'candidatesTokenCount' => 15,
        'totalTokenCount' => 25
    ],
    modelVersion: 'gemini-2.0-flash-exp'
);

// Test your application logic
$this->assertEquals('Mock response content', $mockResponse->getTextContent());
$this->assertEquals(25, $mockResponse->getTotalTokens());
```

Credits
-------

- [Project Saturn Studios](https://github.com/projectsaturnstudios)

License
-------

The MIT License (MIT). Please see [License File](LICENSE.md) for more information.

---

**Part of the LLMSpeak Ecosystem** - Made with ADHD by [Project Saturn Studios](https://projectsaturnstudios.com)
