
Laravel AI Toxicity Filter Package
==================================

[![Latest Version](https://camo.githubusercontent.com/34e695c6016bc2a934a96bed696e29b2f2ab562a7134d65a55d00653cd506bea/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f76657273696f6e2d312e302e302d626c75652e737667)](https://github.com/mohammed-abd-razaq/laravel-toxicity-filter)[![PHP Version](https://camo.githubusercontent.com/d994d75c13a9cd84975ec22cd83d2a7eba90d1de6a997599a566b9be8354d28b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f7068702d253545382e302d627269676874677265656e2e737667)](https://php.net)[![Laravel Version](https://camo.githubusercontent.com/0a499ba4ed472c682d8fc5f7e6b7b1b57fca489f73da770891d4c55b2cf3a0d6/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6c61726176656c2d253545392e3025374325354531302e3025374325354531312e3025374325354531322e302d7265642e737667)](https://laravel.com)[![License](https://camo.githubusercontent.com/8bb50fd2278f18fc326bf71f6e88ca8f884f72f179d3e555e20ed30157190d0d/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6c6963656e73652d4d49542d677265656e2e737667)](https://opensource.org/licenses/MIT)

A professional Laravel library that integrates AI-based toxicity detection engines to automatically evaluate, moderate, and filter user-generated content such as comments, posts, messages, and reviews within your application.

Features
--------

- 🤖 **Multiple AI Providers**: Support for OpenAI Moderation API, Google Perspective API, and extensible for more
- ⚡ **Laravel Integration**: Seamless integration with Laravel facades, service providers, and middleware
- 🛡️ **Automatic Filtering**: Middleware for automatic content moderation on routes
- 🎛️ **Configurable Thresholds**: Customizable toxicity thresholds for blocking, flagging, and warning
- 📊 **Detailed Analytics**: Comprehensive logging and database storage of toxicity detection results
- 🚀 **Queue Support**: Async processing for bulk or large content moderation
- 💾 **Caching**: Redis/database caching to reduce API calls and improve performance
- 🔧 **Extensible**: Easy to add new AI providers through clean interfaces
- 🔒 **Privacy First**: Content hashing for privacy protection
- 📈 **Performance Optimized**: Built-in rate limiting and content optimization
- 🛠️ **Developer Friendly**: Rich testing utilities and comprehensive error handling
- 🌐 **Multi-language Support**: Native support for Arabic and English content with automatic language detection

Requirements
------------

- PHP 8.0 or higher
- Laravel 9.0, 10.0, 11.0, or 12.0
- OpenAI API key (for OpenAI provider)
- Google Perspective API key (for Perspective provider)

Installation
------------

1. Install via Composer:

```
composer require mohammed-abd-razaq/laravel-toxicity-filter
```

Or, if you are developing against a local copy of the package (e.g. under `packages/toxicity-filter`), Composer also needs a `path` repository entry so it can resolve the package. Update your root `composer.json` (the `require` key must match the `name` declared in the package's own `composer.json`):

```
{
    "repositories": [
        {
            "type": "path",
            "url": "packages/toxicity-filter"
        }
    ],
    "require": {
        "packages/toxicity-filter": "^1.0"
    }
}
```

2. Run composer update:

```
composer update
```

3. Publish the configuration file:

```
php artisan vendor:publish --tag=toxicity-filter-config
```

4. Publish and run migrations:

```
php artisan vendor:publish --tag=toxicity-filter-migrations
php artisan migrate
```

5. Clear configuration cache:

```
php artisan config:clear
```

Configuration
-------------

Set up your AI provider API keys in `.env`:

```
# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key
OPENAI_MODERATION_MODEL=text-moderation-latest

# Google Perspective API Configuration
PERSPECTIVE_API_KEY=your_perspective_api_key

# Toxicity Thresholds (0.0 - 1.0)
TOXICITY_BLOCK_THRESHOLD=0.8
TOXICITY_FLAG_THRESHOLD=0.6
TOXICITY_WARN_THRESHOLD=0.4

# Language-Specific Thresholds
TOXICITY_ARABIC_BLOCK_THRESHOLD=0.8
TOXICITY_ARABIC_FLAG_THRESHOLD=0.6
TOXICITY_ARABIC_WARN_THRESHOLD=0.4
TOXICITY_ENGLISH_BLOCK_THRESHOLD=0.8
TOXICITY_ENGLISH_FLAG_THRESHOLD=0.6
TOXICITY_ENGLISH_WARN_THRESHOLD=0.4

# Caching
TOXICITY_CACHE_ENABLED=true
TOXICITY_CACHE_TTL=3600

# Logging
TOXICITY_LOGGING_ENABLED=true
TOXICITY_STORE_CONTENT=false
```

Usage
-----

### Basic Usage with Facade

```
use Packages\ToxicityFilter\Facades\ToxicityFilter;

// Analyze English content
$result = ToxicityFilter::analyze("This is some content to check");

echo $result->getToxicityScore(); // 0.85
echo $result->getProvider(); // 'openai'
var_dump($result->getCategories()); // ['harassment', 'hate']

// Analyze Arabic content (automatic language detection)
$arabicResult = ToxicityFilter::analyze("مرحبا بالعالم");

// Analyze multilingual content
$multilingualResult = ToxicityFilter::analyze("Hello مرحبا world");

// Quick checks (uses language-specific thresholds)
if (ToxicityFilter::shouldBlock($content)) {
    // Block the content
}

if (ToxicityFilter::shouldFlag($content)) {
    // Flag for manual review
}

if (ToxicityFilter::shouldWarn($content)) {
    // Show warning to user
}
```
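
The `shouldBlock`/`shouldFlag`/`shouldWarn` helpers boil down to comparing the toxicity score against the configured thresholds. A minimal standalone sketch of that cascade (the function name is illustrative, not part of the package; defaults mirror the `.env` example above):

```
<?php

// Illustrative sketch of the block/flag/warn cascade; not the
// package's internal code. Defaults mirror the .env example.
function decideAction(float $score, array $thresholds = [
    'block' => 0.8,
    'flag'  => 0.6,
    'warn'  => 0.4,
]): string {
    if ($score >= $thresholds['block']) {
        return 'block';
    }
    if ($score >= $thresholds['flag']) {
        return 'flag';
    }
    if ($score >= $thresholds['warn']) {
        return 'warn';
    }
    return 'allow';
}
```

Checking against the highest threshold first ensures each score maps to exactly one action.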

### Using Specific Providers

```
// Use OpenAI specifically
$result = ToxicityFilter::analyze($content, 'openai');

// Use Perspective API specifically
$result = ToxicityFilter::analyze($content, 'perspective');

// Get available providers
$providers = ToxicityFilter::getAvailableProviders();
```

### Optional Middleware Usage

The package includes optional middleware for automatic content filtering. To use it, register the middleware manually first.

#### Register the Middleware

Add to your `app/Http/Kernel.php`:

```
// In app/Http/Kernel.php

protected $routeMiddleware = [
    // ... other middleware
    'toxicity-filter' => \Packages\ToxicityFilter\Middleware\ToxicityFilterMiddleware::class,
];
```
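
Note that Laravel 11 and 12 no longer ship an `app/Http/Kernel.php`; there, the alias is registered in `bootstrap/app.php` instead. A sketch using the framework's `withMiddleware` hook:

```
// bootstrap/app.php (Laravel 11+)
->withMiddleware(function (Middleware $middleware) {
    $middleware->alias([
        'toxicity-filter' => \Packages\ToxicityFilter\Middleware\ToxicityFilterMiddleware::class,
    ]);
})
```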

#### Apply to Routes

```
// In your routes file
Route::post('/comments', [CommentController::class, 'store'])
    ->middleware('toxicity-filter');

// Or specify fields to check
Route::post('/posts', [PostController::class, 'store'])
    ->middleware('toxicity-filter:title,content,description');
```

The middleware will:

- Automatically block toxic content (returning a 422 response)
- Flag moderately toxic content for review
- Add warnings to the request for mildly toxic content
- Log all detections to the database

### Advanced Usage

```
use Packages\ToxicityFilter\Contracts\ToxicityFilterInterface;

class ContentModerationService
{
    public function __construct(
        private ToxicityFilterInterface $toxicityFilter
    ) {}

    public function moderateComment(string $content, User $user): array
    {
        $result = $this->toxicityFilter->analyze($content);

        $response = [
            'allowed' => true,
            'message' => null,
            'requires_review' => false,
        ];

        if ($result->shouldBlock(0.8)) {
            $response['allowed'] = false;
            $response['message'] = 'Content blocked due to inappropriate language';
        } elseif ($result->shouldFlag(0.6)) {
            $response['requires_review'] = true;
            $response['message'] = 'Content flagged for review';
        }

        return $response;
    }
}
```

### Arabic Language Support

The package includes native support for Arabic content with automatic language detection and text normalization:

#### Automatic Language Detection

```
// Arabic content is automatically detected
$arabicContent = "مرحبا بالعالم";
$result = ToxicityFilter::analyze($arabicContent);
// Language is automatically detected as 'ar'

// Multilingual content is supported
$mixedContent = "Hello مرحبا world";
$result = ToxicityFilter::analyze($mixedContent);
// Primary language is determined based on character count
```
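
The character-count heuristic can be illustrated with a small standalone function (a sketch only, not the package's `LanguageDetectionService`):

```
<?php

// Sketch of majority-script detection: count characters in the
// Arabic Unicode block versus Latin letters and pick the larger.
function detectPrimaryLanguage(string $text): string {
    preg_match_all('/\p{Arabic}/u', $text, $arabic);
    preg_match_all('/[A-Za-z]/', $text, $latin);
    return count($arabic[0]) > count($latin[0]) ? 'ar' : 'en';
}
```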

#### Arabic Text Normalization

The package automatically normalizes Arabic text for better analysis:

- **Character Normalization**: Converts different forms of Arabic characters (أ, إ, آ → ا)
- **Diacritics Removal**: Removes tashkeel (diacritics) for better matching
- **Hamza Handling**: Normalizes hamza variations

```
// Raw Arabic text with diacritics
$rawArabic = "مَرْحَباً بِالعَالَمِ";

// Package automatically normalizes for analysis
$result = ToxicityFilter::analyze($rawArabic);
```
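
In plain PHP, the normalization steps listed above can be sketched roughly as follows (illustrative only; the package's internal routine may differ):

```
<?php

// Sketch of Arabic normalization: unify alef variants, then strip
// tashkeel (the combining diacritics U+064B through U+0652).
function normalizeArabic(string $text): string {
    $text = preg_replace('/[أإآ]/u', 'ا', $text); // أ, إ, آ → ا
    return preg_replace('/[\x{064B}-\x{0652}]/u', '', $text);
}
```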

#### Language-Specific Thresholds

Configure different toxicity thresholds for Arabic and English content:

```
// In config/toxicity-filter.php
'languages' => [
    'thresholds' => [
        'ar' => [
            'block' => 0.8,  // Arabic blocking threshold
            'flag' => 0.6,   // Arabic flagging threshold
            'warn' => 0.4,   // Arabic warning threshold
        ],
        'en' => [
            'block' => 0.8,  // English blocking threshold
            'flag' => 0.6,   // English flagging threshold
            'warn' => 0.4,   // English warning threshold
        ],
    ],
],
```

#### Manual Language Detection

You can also use the language detection service directly:

```
use Packages\ToxicityFilter\Services\LanguageDetectionService;

$detector = new LanguageDetectionService();

$language = $detector->detectLanguage("مرحبا بالعالم"); // 'ar'
$isArabic = $detector->isArabic("مرحبا"); // true
$isMultilingual = $detector->isMultilingual("Hello مرحبا"); // true
$normalized = $detector->normalizeArabicText("مَرْحَباً"); // "مرحبا"
```

### Queue Processing

For async processing, you can dispatch jobs:

```
use Packages\ToxicityFilter\Jobs\AnalyzeToxicityJob;

// Process large content asynchronously
AnalyzeToxicityJob::dispatch($content, $userId, $options);
```

Supported AI Providers
----------------------

### OpenAI Moderation API

- **Pros**: High accuracy, fast response, multiple toxicity categories
- **Cons**: Requires API key, has usage costs
- **Content Limit**: ~32,000 characters

### Google Perspective API

- **Pros**: Free tier available, detailed attribute scoring
- **Cons**: Limited free quota, requires Google Cloud setup
- **Content Limit**: 3,000 characters
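
Since the limits differ by an order of magnitude, it can be worth clamping content before dispatch. A hypothetical helper (limits taken from the two sections above; whether the package truncates internally is not documented here):

```
<?php

// Hypothetical pre-check: clamp content to the provider's
// character limit so oversized requests are not rejected.
function clampForProvider(string $content, string $provider): string {
    $limits = ['openai' => 32000, 'perspective' => 3000];
    $limit = $limits[$provider] ?? 3000; // default to the stricter limit
    return mb_substr($content, 0, $limit);
}
```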

Configuration Options
---------------------

The package offers extensive configuration options:

- **Providers**: Configure multiple AI providers with failover
- **Thresholds**: Set different toxicity thresholds for various actions
- **Caching**: Cache results to reduce API calls and costs
- **Logging**: Comprehensive logging with configurable storage
- **Queue**: Async processing for better performance
- **Bypass Rules**: Skip filtering for trusted users or content

Database Schema
---------------

The package creates a `toxicity_detections` table to log all analysis results:

```
- id (primary key)
- provider (string, indexed)
- toxicity_score (decimal, indexed)
- categories (json)
- content_hash (text, indexed)
- content (text, optional)
- metadata (json)
- action_taken (string, indexed)
- user_id (bigint, nullable, indexed)
- ip_address, user_agent, request_path
- timestamps
```
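
If you need to recreate or extend the table, the published migration corresponds roughly to the sketch below (column types such as the decimal precision and the nullability flags are assumptions, not the package's exact definition):

```
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

// Rough sketch of the toxicity_detections schema listed above;
// consult the published migration for the authoritative version.
Schema::create('toxicity_detections', function (Blueprint $table) {
    $table->id();
    $table->string('provider')->index();
    $table->decimal('toxicity_score', 5, 4)->index(); // precision assumed
    $table->json('categories');
    $table->text('content_hash');
    $table->text('content')->nullable(); // only stored when enabled
    $table->json('metadata')->nullable();
    $table->string('action_taken')->index();
    $table->foreignId('user_id')->nullable()->index();
    $table->string('ip_address')->nullable();
    $table->text('user_agent')->nullable();
    $table->string('request_path')->nullable();
    $table->timestamps();
});
```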

Extending the Package
---------------------

### Adding New AI Providers

Implement the `ToxicityProviderInterface`:

```
use Packages\ToxicityFilter\Contracts\ToxicityProviderInterface;
use Packages\ToxicityFilter\ValueObjects\ToxicityResult;

class CustomProvider implements ToxicityProviderInterface
{
    public function analyze(string $content, array $options = []): ToxicityResult
    {
        // Implement your provider logic
    }

    public function getName(): string
    {
        return 'custom';
    }

    // ... implement other interface methods
}
```

Testing
-------

```
# Run package tests
cd packages/toxicity-filter
composer test

# Run with coverage
composer test-coverage

# Run specific test file
vendor/bin/phpunit tests/Unit/ToxicityFilterServiceTest.php

# Run tests with debug output
vendor/bin/phpunit --debug
```

### Test Configuration

Create a `.env.testing` file for test environment:

```
TOXICITY_CACHE_ENABLED=false
TOXICITY_LOGGING_ENABLED=false
OPENAI_API_KEY=test_key
PERSPECTIVE_API_KEY=test_key
```

Troubleshooting
---------------

### Common Issues

**1. Configuration not loaded**

```
php artisan config:clear
php artisan config:cache
```

**2. Provider API errors**

- Verify API keys are correctly set in `.env`
- Check API rate limits and quotas
- Ensure network connectivity to provider endpoints

**3. Migration issues**

```
php artisan migrate:rollback
php artisan vendor:publish --tag=toxicity-filter-migrations --force
php artisan migrate
```

**4. Cache issues**

```
php artisan cache:clear
php artisan config:clear
```

### Debug Mode

Enable debug logging in your configuration:

```
'debug' => env('TOXICITY_DEBUG', false),
'log_level' => env('TOXICITY_LOG_LEVEL', 'info'),
```

Performance Considerations
--------------------------

- **Caching**: Enable caching to reduce API calls for duplicate content
- **Queue**: Use async processing for bulk content or non-blocking operations
- **Rate Limiting**: Configure rate limits to stay within API quotas
- **Content Optimization**: Pre-filter very short content or obvious spam

Security & Privacy
------------------

- **Content Hashing**: Store MD5 hashes instead of actual content for privacy
- **API Key Management**: Store API keys securely in environment variables
- **User Bypass**: Allow trusted users to bypass filtering when appropriate
- **Audit Trail**: Comprehensive logging for compliance and debugging
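
For example, a cache or audit key can be derived from the hash alone, so the raw text never needs to be persisted (the key format here is illustrative):

```
<?php

// Illustrative: key cache entries and log rows by content hash so
// the original text need not be stored alongside them.
function toxicityCacheKey(string $content): string {
    return 'toxicity:' . md5($content);
}
```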

Changelog
---------

### Version 1.1.0

**Arabic Language Support**

- ✅ Native Arabic language detection and support
- ✅ Automatic Arabic text normalization (character and diacritics)
- ✅ Language-specific toxicity thresholds
- ✅ Multilingual content support
- ✅ Arabic language detection service
- ✅ Enhanced provider support for Arabic content

### Version 1.0.0

**Initial Release**

- ✅ OpenAI Moderation API integration
- ✅ Google Perspective API integration
- ✅ Laravel facade and service provider
- ✅ Configurable toxicity thresholds
- ✅ Middleware for automatic filtering
- ✅ Database logging and analytics
- ✅ Caching support
- ✅ Queue processing
- ✅ Extensible provider system
- ✅ Comprehensive test suite

Contributing
------------

We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.

### Development Setup

1. Clone the repository
2. Install dependencies: `composer install`
3. Copy `.env.example` to `.env` and configure
4. Run tests: `composer test`

Support
-------

- 📧 **Email**:
- 🐛 **Issues**: [GitHub Issues](https://github.com/mohammed-abd-razaq/laravel-toxicity-filter/issues)
- 📖 **Documentation**: [GitHub Repository](https://github.com/mohammed-abd-razaq/laravel-toxicity-filter)

License
-------

This package is open-sourced software licensed under the [MIT License](https://opensource.org/licenses/MIT).

Author
------

**Mohammed Abd Razaq**

- GitHub: [@mohammed-abd-razaq](https://github.com/mohammed-abd-razaq)
- Email:

---

⭐ If you find this package helpful, please consider giving it a star on GitHub!
