oat-sa/extension-lti-test-review
================================

Extension for reviewing passed tests, with the display of actual and correct answers.

v3.9.10 · GPL-2.0-only · [Source](https://github.com/oat-sa/extension-lti-test-review) · [Packagist](https://packagist.org/packages/oat-sa/extension-lti-test-review) · [Docs](http://www.taotesting.com) · [Issues](https://github.com/oat-sa/extension-lti-test-review/issues)

Extension ltiTestReview
=======================

Extension for reviewing passed tests, with the display of actual and correct answers, as well as the number of points for each answer.

Usage
-----

Run `composer require "oat-sa/extension-lti-test-review"` to add the package to your project, then install the extension either through the extension manager or via the CLI: `php tao/scripts/installExtension.php ltiTestReview`.

### LTI calls

To launch the review of a specific delivery execution, use the following endpoint:

```
https://YOUR_DOMAIN/ltiTestReview/ReviewTool/launch?execution=YOUR_DELIVERY_EXECUTION_URI
```

Calling the same endpoint without the `execution` parameter (`https://YOUR_DOMAIN/ltiTestReview/ReviewTool/launch`) makes the tool use the `lis_result_sourcedid` field from the launch data to determine the delivery execution.
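As an illustration, the launch URL can be assembled with standard URL encoding. The domain and the delivery execution URI below are hypothetical placeholders, not values from this project:

```python
from urllib.parse import urlencode

# Hypothetical placeholder values -- substitute your own platform domain
# and delivery execution URI.
domain = "tao.example.com"
execution_uri = "https://tao.example.com/ontologies/tao.rdf#i1234"

# The execution URI contains reserved characters (':', '/', '#'),
# so it must be URL-encoded before being placed in the query string.
query = urlencode({"execution": execution_uri})
launch_url = f"https://{domain}/ltiTestReview/ReviewTool/launch?{query}"
print(launch_url)
```

Without the encoding step, the `#` in the execution URI would be interpreted as a fragment delimiter and the parameter would be truncated.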

To launch the review of the *latest* delivery execution for a user, use the following endpoint:

```
https://YOUR_DOMAIN/ltiTestReview/ReviewTool/launch1p3?delivery=YOUR_DELIVERY_URI
```

The user id should be provided in the `for_user` claim:

```
{
  "https://purl.imsglobal.org/spec/lti/claim/message_type": "LtiSubmissionReviewRequest",
  "https://purl.imsglobal.org/spec/lti/claim/for_user": {
    "user_id": ""
  }
}
```

The delivery execution to review can also be scoped by the `resource_link` claim: the latest delivery execution matching the given `user_id`, `delivery_id`, and `resource_link_id` will be launched.

```
{
  "https://purl.imsglobal.org/spec/lti/claim/resource_link": {
    "id": "unique_Id"
  }
}
```

For backwards compatibility, the following endpoint allows selecting an exact delivery execution; its id must be provided in a custom claim:

```
https://YOUR_DOMAIN/ltiTestReview/ReviewTool/launch1p3
```

```
{
  "https://purl.imsglobal.org/spec/lti/claim/custom": {
    "execution": ""
  }
}
```

### LTI options

Several modes are available for reviewing a test. By default the simplest mode is applied: the test is shown as it was taken, with the student's responses and no score.

The following custom parameters control the mode:

| parameter | description |
| --- | --- |
| `custom_show_score=1` | Show the student's score. |
| `custom_show_correct=1` | Show the correct responses when the student has failed. **Note:** this option discloses all the correct responses for the whole test. |
| `custom_review_layout=simple` | Switch the review panel layout from the `default` to the `simple` variant. |
| `custom_section_titles=0` | Hide section titles in the `simple` review panel layout. |
| `custom_item_tooltip=1` | Show a tooltip with the item label in the `simple` review panel layout. |

When you use the [IMS emulator](http://ltiapps.net/test/tc.php) you must remove the `custom_` prefix.
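For illustration, in an LTI 1.1 launch these options travel as ordinary form parameters. The particular combination below is a hypothetical example; assembled as a query string it looks like this:

```python
from urllib.parse import urlencode

# Hypothetical option set: show the score, use the simple review panel
# layout, and hide section titles. Any subset of the documented
# custom_* parameters can be combined the same way.
options = {
    "custom_show_score": "1",
    "custom_review_layout": "simple",
    "custom_section_titles": "0",
}

query = urlencode(options)
print(query)
```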

#### Default values

By default the options `show_score` and `show_correct` are turned off. To change the platform-wide defaults, edit the file `config/ltiTestReview/DeliveryExecutionFinderService.conf.php`:

```
return new oat\ltiTestReview\models\DeliveryExecutionFinderService([
    // set to true to show scores / correct responses by default
    'show_score' => false,
    'show_correct' => false
]);
```

**Note:** This will set the default value of these options for the whole platform. If you enable them by default, you can still disable them using LTI custom parameters.

The default values for `custom_review_layout`, `custom_section_titles`, and `custom_item_tooltip` are read from `config/ltiTestReview/ReviewPanel.conf.php`:

```
return new oat\oatbox\config\ConfigurationService([
    'config' => [
        'reviewLayout' => 'default',      // or 'simple'
        'displaySectionTitles' => true,
        'displayItemTooltip' => false
    ]
]);
```

**Note:** This will set the default value of these options for the whole platform. If you enable them by default, you can still disable them using LTI custom parameters.



