
nr-llm — The Shared AI Foundation for TYPO3
===========================================


[![CI](https://github.com/netresearch/t3x-nr-llm/actions/workflows/ci.yml/badge.svg)](https://github.com/netresearch/t3x-nr-llm/actions/workflows/ci.yml)[![codecov](https://camo.githubusercontent.com/ef89c9355a8934b29b1d71e9822e442becfa991f47419d4d32c045231ed27528/68747470733a2f2f636f6465636f762e696f2f67682f6e657472657365617263682f7433782d6e722d6c6c6d2f67726170682f62616467652e737667)](https://codecov.io/gh/netresearch/t3x-nr-llm)[![Documentation](https://github.com/netresearch/t3x-nr-llm/actions/workflows/docs.yml/badge.svg)](https://github.com/netresearch/t3x-nr-llm/actions/workflows/docs.yml)[![OpenSSF Scorecard](https://camo.githubusercontent.com/2cce612ba1aea0034c5d46d9eb9b8eef2dc78add7ebb57765327afc262a26948/68747470733a2f2f6170692e736563757269747973636f726563617264732e6465762f70726f6a656374732f6769746875622e636f6d2f6e657472657365617263682f7433782d6e722d6c6c6d2f6261646765)](https://securityscorecards.dev/viewer/?uri=github.com/netresearch/t3x-nr-llm)[![OpenSSF Best Practices](https://camo.githubusercontent.com/4c0d499d678bd62201efdc50427078585c19180a3e6f328166e44db6c690d8db/68747470733a2f2f7777772e626573747072616374696365732e6465762f70726f6a656374732f31313639372f6261646765)](https://www.bestpractices.dev/projects/11697)[![PHPStan](https://camo.githubusercontent.com/d18b9a987aa81e64470a11caecf72caa66597c9ebd6b307bd1c2cb7a752b0dff/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5048505374616e2d6c6576656c25323031302d627269676874677265656e2e737667)](https://phpstan.org/)[![PHP 8.2+](https://camo.githubusercontent.com/0f16581d1180dbfd4c0e13166ec1267d4ad2f2fab8281ea6d6b284cf5c65d921/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5048502d382e322532422d626c75652e737667)](https://www.php.net/)[![TYPO3 v13.4+](https://camo.githubusercontent.com/55f8fb237b0ea7b977760a9c819113ec71df597862190a8cba75d6059d38c7c0/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5459504f332d7631332e342532422d6f72616e67652e737667)](https://typo3.org/)[![License: 
GPL v2](https://camo.githubusercontent.com/6b639abd353f330706524560924b761ba87ba1895911daf38f4b71b9347bb94f/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c6963656e73652d47504c5f76322d626c75652e737667)](https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html)[![Latest Release](https://camo.githubusercontent.com/613d85c07bbfb53f9ba0e7f96f967c6ea712bdf3d24f0e92765a133abee9fb94/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f762f72656c656173652f6e657472657365617263682f7433782d6e722d6c6c6d)](https://github.com/netresearch/t3x-nr-llm/releases)[![Contributor Covenant](https://camo.githubusercontent.com/29a162993634c8b29ac484c10c05c2db4be4e6ea7a916cf5c130711e91d26364/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f6e7472696275746f72253230436f76656e616e742d322e302d3462616161612e737667)](CODE_OF_CONDUCT.md)[![SLSA 3](https://camo.githubusercontent.com/dc294f15fb5f1c96307863a1e96860310be940504e7ee370cee94bf4400cbac9/68747470733a2f2f736c73612e6465762f696d616765732f67682d62616467652d6c6576656c332e737667)](https://slsa.dev)

**One LLM setup. Every extension. Full admin control.**

nr-llm is shared infrastructure for AI in TYPO3 — like the caching framework, but for language models. Administrators configure providers once; every AI-powered extension on the site uses them automatically.

The Problem
-----------


Every TYPO3 extension that wants AI capabilities today has to:

- Build its own provider integration (HTTP calls, auth, error handling, streaming)
- Store API keys in its own way (often plaintext in extension settings)
- Create its own backend configuration UI
- Leave administrators with no central overview of AI usage or costs

When a site runs three AI extensions, that's three separate API key configurations, three places to check when something breaks, and no way to switch providers globally.

The Solution
------------


nr-llm provides the missing shared layer:

```
┌─────────────────────────────────────────────────┐
│  Your Extension  │  Cowriter  │  SEO Assistant  │
│  (3 lines of DI) │            │                 │
└────────┬─────────┴─────┬─────┴────────┬────────┘
         │               │              │
┌────────▼───────────────▼──────────────▼────────┐
│              nr-llm Service Layer               │
│  Chat · Translation · Vision · Embeddings       │
│  Streaming · Tool Calling · Caching             │
├─────────────────────────────────────────────────┤
│         Provider Abstraction Layer              │
│  OpenAI · Anthropic · Gemini · Ollama · …       │
├─────────────────────────────────────────────────┤
│     Admin Tools > LLM (Backend Module)          │
│  Encrypted keys · Usage tracking · Setup wizard │
└─────────────────────────────────────────────────┘

```

---

For Extension Developers
------------------------


**Add AI to your TYPO3 extension in 5 minutes** — no API key handling, no HTTP client code, no provider-specific logic.

### 1. Require the package


```
composer require netresearch/nr-llm
```

### 2. Inject the service you need


```
use Netresearch\NrLlm\Service\LlmServiceManagerInterface;

class MyController
{
    public function __construct(
        private readonly LlmServiceManagerInterface $llm,
    ) {}

    public function summarizeAction(string $text): string
    {
        return $this->llm->complete("Summarize: {$text}")->content;
    }
}
```

That's it. Provider selection, API keys, caching, error handling — all managed by nr-llm.
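Error handling is unified as well: provider failures surface as typed exceptions rather than raw HTTP errors. A minimal sketch of what that buys you (the catch uses `\Throwable` here for illustration; in real code you would narrow it to the package's typed exceptions, documented in the Developer Guide, and `$this->logger` is an assumed injected PSR-3 logger):

```php
public function summarizeSafelyAction(string $text): string
{
    try {
        return $this->llm->complete("Summarize: {$text}")->content;
    } catch (\Throwable $e) { // narrow to nr-llm's typed exceptions in real code
        // The exception carries provider context, so the log entry stays
        // meaningful no matter which provider the administrator configured.
        $this->logger->error('LLM request failed', ['exception' => $e]);
        return '';
    }
}
```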

### What you get for free

| Capability | Without nr-llm | With nr-llm |
| --- | --- | --- |
| Provider switching | Rewrite HTTP calls | Change one admin setting |
| API key storage | `$GLOBALS` or plaintext | Encrypted (sodium / nr-vault) |
| Response caching | Build your own | Built-in, TYPO3 caching framework |
| Streaming (SSE) | Implement per provider | `foreach ($llm->streamChat($msg) as $chunk)` |
| Error handling | Parse each provider's errors | Typed exceptions with provider context |
| Multiple providers | N × integration effort | One interface, all providers |

### Available services


```
// Chat & completion
$response = $llm->chat($messages);
$response = $llm->complete('Explain TYPO3 content elements');

// Translation
$result = $translationService->translate('Hello world', 'de');

// Vision / image analysis
$altText = $visionService->generateAltText($imageUrl);

// Embeddings & similarity
$vector = $embeddingService->embed('semantic search query');

// Streaming
foreach ($llm->streamChat($messages) as $chunk) { echo $chunk; }

// Tool/function calling
$response = $llm->chatWithTools($messages, $toolDefinitions);
```
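Embedding vectors lend themselves to similarity ranking with a few lines of plain PHP. A sketch, assuming `embed()` returns a plain float array (the `cosineSimilarity` helper below is written for this example and is not part of the nr-llm API):

```php
/**
 * Cosine similarity between two embedding vectors of equal length.
 * Helper for this example only — not part of the nr-llm API.
 *
 * @param float[] $a
 * @param float[] $b
 */
function cosineSimilarity(array $a, array $b): float
{
    $dot = $normA = $normB = 0.0;
    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }
    return $dot / (sqrt($normA) * sqrt($normB));
}

// e.g. rank stored document vectors against a query vector from embed()
```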

### Why not call the OpenAI API directly?


You can — but then your extension only works with OpenAI. Your users can't switch to Anthropic, use a local Ollama instance, or route through Azure. And every extension on the site manages its own API keys, its own error handling, its own caching.

nr-llm solves this once for the entire TYPO3 ecosystem.

> **[Developer Guide](Documentation/Developer/Index.rst)** — Full API reference, custom provider registration, typed options, response objects
>
> **[Integration Guide](Documentation/Developer/IntegrationGuide.rst)** — Step-by-step tutorial for building your extension on nr-llm

---

For TYPO3 Administrators
------------------------


### One dashboard for all AI


The **Admin Tools > LLM** backend module gives you full control:

- **Providers** — Register API connections (OpenAI, Anthropic, Gemini, Ollama, …)
- **Models** — Define which models are available and their capabilities
- **Configurations** — Create use-case presets (temperature, system prompts, token limits)
- **Tasks** — Define reusable prompt templates for editors

### Security by default


- **Encrypted API keys** — All keys stored with `sodium_crypto_secretbox` (XSalsa20-Poly1305), optionally via [nr-vault](https://github.com/netresearch/t3x-nr-vault) envelope encryption
- **Admin-only access** — Backend module restricted to administrators
- **No plaintext secrets** — Keys never stored or logged in plain text
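For context, this is what secretbox encryption looks like at the PHP level — an illustration of the primitive named above, not nr-llm's actual key-storage code path:

```php
// Illustration of sodium_crypto_secretbox (XSalsa20-Poly1305), the primitive
// nr-llm uses for key storage. Not the extension's actual code.
$key       = sodium_crypto_secretbox_keygen();                 // 256-bit secret key
$nonce     = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES); // unique per message
$encrypted = sodium_crypto_secretbox('sk-example-api-key', $nonce, $key);

// Decryption returns false if the ciphertext was tampered with.
$plaintext = sodium_crypto_secretbox_open($encrypted, $nonce, $key);
```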

### Setup in 2 minutes


The **Setup Wizard** auto-detects your provider type from the endpoint URL, discovers available models, and generates a ready-to-use configuration. Paste your API key and go.

### AI-powered wizards


- **Task Wizard** — Describe what you need in plain language; the AI generates a complete task with configuration, system prompt, and model recommendation in one step
- **Configuration Wizard** — AI-assisted configuration generation with system prompts, parameters, and model selection
- **Model Discovery** — "Fetch Models" button on model fields queries the provider API and auto-fills capabilities and pricing

### Supported providers

| Provider | Adapter | Capabilities |
| --- | --- | --- |
| OpenAI | `openai` | Chat, Embeddings, Vision, Streaming, Tools |
| Anthropic Claude | `anthropic` | Chat, Vision, Streaming, Tools |
| Google Gemini | `gemini` | Chat, Embeddings, Vision, Streaming, Tools |
| Ollama | `ollama` | Chat, Embeddings, Streaming (local, no API key) |
| OpenRouter | `openrouter` | Chat, Embeddings, Vision, Streaming, Tools |
| Mistral | `mistral` | Chat, Embeddings, Streaming |
| Groq | `groq` | Chat, Streaming (fast inference) |
| Azure OpenAI | `azure_openai` | Same as OpenAI |
| Any OpenAI-compatible | `custom` | Varies (vLLM, LocalAI, LiteLLM, …) |

---

For Agencies & Solution Architects
----------------------------------


- **Reduce integration effort** — AI capabilities across client projects without per-project plumbing
- **No vendor lock-in** — Switch from OpenAI to Anthropic (or a local model) without code changes
- **Compliance-friendly** — Encrypted keys, admin-only access, SBOM and SLSA provenance on every release
- **Local-first option** — Ollama support means AI features work without sending data to external APIs
- **Production-proven** — Powers [t3x-cowriter](https://github.com/netresearch/t3x-cowriter), the CKEditor 5 AI writing assistant for TYPO3

---

Built on nr-llm
---------------

| Extension | What it does | nr-llm services used |
| --- | --- | --- |
| [t3x-cowriter](https://github.com/netresearch/t3x-cowriter) | AI writing assistant in CKEditor 5 | Chat, Streaming, Translation, Tasks |

*Building on nr-llm? [Open a PR](https://github.com/netresearch/t3x-nr-llm/pulls) to add your extension here.*

---

Architecture
------------


The three-tier configuration hierarchy separates concerns cleanly:

```
┌─────────────────────────────────────────────────────────┐
│ CONFIGURATION (Use-Case Specific)                       │
│ "blog-summarizer", "product-description", "translator"  │
│ → system_prompt, temperature, max_tokens                │
└───────────────────────────┬─────────────────────────────┘
                            │ references
┌───────────────────────────▼─────────────────────────────┐
│ MODEL (Available Models)                                │
│ "gpt-5.3-instant", "claude-sonnet-4-6", "llama-70b"     │
│ → model_id, capabilities, pricing                       │
└───────────────────────────┬─────────────────────────────┘
                            │ references
┌───────────────────────────▼─────────────────────────────┐
│ PROVIDER (API Connections)                              │
│ "openai-prod", "openai-dev", "local-ollama"             │
│ → endpoint, api_key (encrypted), adapter_type           │
└─────────────────────────────────────────────────────────┘

```

**Benefits:**

- Multiple API keys per provider type (prod/dev/backup)
- Custom endpoints (Azure OpenAI, Ollama, vLLM, local models)
- Reusable model definitions across configurations
- Clear separation of concerns
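In code, the hierarchy means an extension only ever names the top tier; model and provider resolution happen behind the scenes. A hypothetical sketch — the option name `configuration` and the call shape below are assumptions for illustration, so consult the Developer Guide for the real API:

```php
// Hypothetical: ask for a use-case configuration by name. The configuration
// references a model, which references a provider — the caller sees none of it.
$response = $llm->complete(
    'Summarize this blog post: …',
    ['configuration' => 'blog-summarizer'], // assumed option name, not confirmed API
);
```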

---

Requirements
------------


- PHP 8.2+
- TYPO3 v13.4+ or v14.x
- PSR-18 HTTP Client (e.g., guzzlehttp/guzzle)

Installation
------------


```
composer require netresearch/nr-llm
```

Then activate in **Admin Tools > Extensions** and run **Admin Tools > LLM > Setup Wizard**.

---

Documentation
-------------


- **[Introduction](Documentation/Introduction/Index.rst)** — Overview and use cases
- **[Configuration](Documentation/Configuration/Index.rst)** — Backend module setup
- **[Developer Guide](Documentation/Developer/Index.rst)** — API reference, services, custom providers
- **[Integration Guide](Documentation/Developer/IntegrationGuide.rst)** — Build your extension on nr-llm
- **[Architecture](Documentation/Architecture/Index.rst)** — Design decisions and ADRs

---

Supply Chain Security
---------------------


This project implements supply chain security best practices:

### SLSA Level 3 Provenance


All releases include [SLSA Level 3](https://slsa.dev) build provenance attestations, providing:

- **Non-falsifiable provenance**: Cryptographically signed attestations
- **Isolated build environment**: Builds run in GitHub Actions with no user-controlled steps
- **Source integrity**: Provenance links artifacts to exact source commit

### Verifying Release Artifacts


```
# Install slsa-verifier
go install github.com/slsa-framework/slsa-verifier/v2/cli/slsa-verifier@latest

# Download release artifacts
gh release download v1.0.0 -R netresearch/t3x-nr-llm

# Verify SLSA provenance
slsa-verifier verify-artifact nr_llm-1.0.0.zip \
  --provenance-path multiple.intoto.jsonl \
  --source-uri github.com/netresearch/t3x-nr-llm \
  --source-tag v1.0.0
```

### Verifying Signatures (Cosign)


All release artifacts are signed using [Sigstore Cosign](https://www.sigstore.dev/) keyless signing:

```
# Install cosign
go install github.com/sigstore/cosign/v2/cmd/cosign@latest

# Verify signature
cosign verify-blob nr_llm-1.0.0.zip \
  --signature nr_llm-1.0.0.zip.sig \
  --certificate nr_llm-1.0.0.zip.pem \
  --certificate-identity-regexp 'https://github.com/netresearch/t3x-nr-llm/' \
  --certificate-oidc-issuer 'https://token.actions.githubusercontent.com'
```

### Software Bill of Materials (SBOM)


Each release includes SBOMs in both [SPDX](https://spdx.dev/) and [CycloneDX](https://cyclonedx.org/) formats:

- `nr_llm-X.Y.Z.sbom.spdx.json` - SPDX format
- `nr_llm-X.Y.Z.sbom.cdx.json` - CycloneDX format

### Checksums


SHA256 checksums for all artifacts are provided in `checksums.txt`, which is also signed.
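Verification is a single `sha256sum -c` run against that file (add `--ignore-missing` if you only downloaded some of the artifacts). A self-contained sketch of the mechanism, using a stand-in file instead of a real release zip:

```shell
# Self-contained demo of checksum verification (stand-in file, not a real release).
printf 'artifact-bytes' > nr_llm-demo.zip      # stand-in for a downloaded artifact
sha256sum nr_llm-demo.zip > checksums.txt      # what the release pipeline publishes
sha256sum --ignore-missing -c checksums.txt    # prints "nr_llm-demo.zip: OK" on success
rm nr_llm-demo.zip checksums.txt
```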

License
-------


GPL-2.0-or-later

Author
------


Netresearch DTT GmbH
