
Turnitin Scoring Engine FAQ

What is it?

- What does the Turnitin Scoring Engine (TSE) do?

Using sample essays graded by experts, the Turnitin Scoring Engine identifies and learns patterns that it can use to quickly and reliably grade new submissions to writing prompts.


- How many prompts can TSE be set up to score?

There is no limit to the number of prompts TSE can be trained to score, nor is there any limit on the number of prompt responses/submissions that can be scored. During the implementation phase of TSE, Turnitin staff will assist with the training of up to 5 writing prompts. Customers are expected to utilize the self-service tools provided with TSE to train and manage all additional prompts they wish to create. Of course, if a customer needs training or assistance they will have access to all the typical types of support offered by Turnitin.


- What is a "writing prompt" or "prompt response" as defined by TSE?

A writing prompt is a statement or question that students must read and respond to in writing. A prompt might include a source document (or a set of sources), and may be situated within a broader learning context, but it could be as simple as a brief one-sentence question that students are expected to respond to. 


In general, the more specific a prompt is, the more likely it will be successful with TSE. Broad, open-ended prompts, on the other hand, can be challenging to assess reliably for both instructors and our technology. We evaluate possible prompts on a case-by-case basis, however, so there's no hard-and-fast rule for defining when a prompt is too broad.


- Does TSE come with analytics and reports?

The primary purpose of TSE is to automatically grade written prompt responses. This information can be collected and stored for the purpose of allowing the institution to create its own analytics or reports.

How does it work?

- Can pre-written papers be uploaded to it, or is it "copy-and-paste" or "direct-type" only?

Pre-written papers are fine, so long as they are all in response to a common prompt (we generally recommend targeting 500 responses to a prompt).

- How does one feed it thousands of papers at once (so that automatic scoring can then happen for thousands of submissions in seconds, as advertised)? Via ZIP file submission? Bulk uploader?

Today, this can be done through an API web service. We are currently building a user-friendly data upload web service, which will be available along with other self-service tools designed to facilitate the bulk grading process.
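To illustrate the bulk-submission idea, here is a minimal sketch of how thousands of papers might be grouped into batched JSON payloads before being sent to an API web service. The endpoint is not shown, and the field names (`prompt_id`, `submissions`, `text`) and batch size are assumptions for illustration only; the actual TSE API will define its own format.

```python
import json

def build_batches(essays, prompt_id, batch_size=100):
    """Group de-identified essay texts into JSON payloads for bulk submission.

    Field names here are hypothetical -- consult the real API documentation.
    """
    batches = []
    for start in range(0, len(essays), batch_size):
        chunk = essays[start:start + batch_size]
        batches.append(json.dumps({
            "prompt_id": prompt_id,                      # assumed field name
            "submissions": [{"text": t} for t in chunk]  # assumed field name
        }))
    return batches

# Example: 500 responses (the recommended target per prompt) split into 5 batches.
payloads = build_batches([f"essay {i}" for i in range(500)], prompt_id="hist-101")
```

Each payload could then be posted with any HTTP client; batching keeps individual requests small while still allowing thousands of submissions to be scored quickly.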

- Does TSE integrate or work with Turnitin? 

Compared to Turnitin or Revision Assistant, TSE requires a deeper level of integration with client institutions' learning environments and thus cannot simply be used in conjunction with Turnitin or its LMS integrations. We are exploring ideas for future product enhancements that will allow existing Turnitin customers to seamlessly leverage TSE, but for now the two technologies should be considered separate products with separate purposes.


- Does TSE integrate with a school's Student Information System (SIS)?

TSE does not require a school to have, utilize, or integrate with a SIS, because the primary data a customer must provide to the engine are essays, and these are not typically stored in a SIS.


- Does TSE check grammar and spelling?

TSE does not explicitly check for grammar or spelling mistakes when evaluating and scoring an essay.


Work is currently underway to add grammar-checking capabilities to the Scoring Engine in order to further enhance its ability to detect the grammar-related scoring patterns exhibited by the expert instructors who provide sample training data.

- How does TSE react when a non-English word (or words) is mixed into the essay? Or uncommon English words like "pneumonoultramicroscopicsilicovolcanoconiosis"?

If complex or uncommon words are frequently present in a training set - for instance, "dyspnea" or "myalgia" in a medical school assessment - TSE will discover these patterns and learn to assess them appropriately. This is why gathering essays and scores from each new context is so important.


If a rare word appears that has no bearing on the quality of essays, it will usually be ignored by TSE for the purpose of assessment, not counting for or against a student's score. TSE uses statistical analysis to discover these correlations automatically based on the sample data it is trained with.


- Does TSE take the writing style, structure, choice of wording, or other genre-specific traits into account when scoring student responses?

TSE is not designed to explicitly evaluate things like style, wording, or genre and use that information to subjectively provide scores for essays. Rather, it is designed to detect patterns in the scores given by expert graders who may base their evaluations of a writing response on any of those same traits. In this way, TSE may take these traits into account--but only because patterns indicative of these traits are determined to be relevant to predicting essay scores.



- Does TSE grade the cover page (which some instructors require to "look" a certain way with the name in a certain area)? Or is that considered "styling" and therefore not graded?

TSE does not provide assessment for mechanics and conventions; we would consider the formatting and styling of a cover page as conventions, and those elements of a paper are not considered during automated processing. In general, we do not expect that the contents of a cover page would be evaluated at all.


- Does TSE provide feedback commentary on prompt responses?

TSE is only intended to provide summative assessment information (i.e., scores, grades, pass/fail indicators, etc.) and not formative feedback. Because TSE can score using rubric criteria, the scores it produces can act as indicators for areas a student may need to focus on based on the score they receive. However, this is not the same as receiving explicit feedback written by an expert that suggests specific considerations or corrections for the student to make. That type of feedback is what the Revision Assistant tool is designed to provide.


- Can TSE score using a rubric? If so, does it come with predefined rubrics or can customers supply their own?

TSE is capable of producing both summative (holistic) scores and rubric-based scores. It does not come with predefined rubrics; rather, it produces scores for each of the individual rubric criteria that are created by the customer and defined in the sample data used to train the engine.


- If I use a rubric, how many criteria (a.k.a. dimensions, traits, etc.) can it include? 

Preferably, rubrics will not exceed 10 criteria/dimensions. Practically speaking, however, the more criteria there are, the longer TSE will need to make scoring predictions on writing prompt responses. Depending on a customer's intended use of TSE, the speed benefit of automated scoring may be diminished by the time delays caused by rubrics with too many traits. TSE staff can evaluate any rubric a customer intends to use and will be able to offer detailed guidance on whether the rubric's structure could impact performance.


- If I use a rubric, what types or ranges of scores can my criteria be evaluated on? 

TSE can reproduce numeric formats that match a customer's defined grading scale. Common examples include predicting values on small scales like 1 to 4 or larger scales like 0 to 100.
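The idea of customer-defined rubric criteria and scales can be sketched as follows. This is not the actual TSE data format; the criterion names and the validation helper are hypothetical, shown only to make the "each criterion gets its own score on a defined scale" concept concrete.

```python
def validate_scores(scores, scale=(1, 4)):
    """Check a dict of per-criterion rubric scores against a defined scale.

    Hypothetical helper for illustration -- TSE's real training-data
    format and validation are defined during implementation.
    """
    lo, hi = scale
    bad = {c: v for c, v in scores.items() if not lo <= v <= hi}
    if bad:
        raise ValueError(f"scores outside the {lo}-{hi} scale: {bad}")
    return scores

# One essay's scores on a customer-defined three-criterion rubric (1-4 scale).
sample = validate_scores({"focus": 3, "evidence": 4, "organization": 2})
```

Each training essay would carry one such set of per-criterion scores, and the trained engine would then predict scores in the same format for new submissions.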

How does a user begin using TSE?

- What is the TSE implementation process like?

The implementation process for TSE requires an up-front time investment from customer organizations. The first step in this process is to identify content and/or curriculum assessment experts in your institution, as well as one or more technical leads who will be responsible for integrating TSE into the customer's learning platform. These individuals will need to communicate with TSE staff about their organization, assessment processes, and how they anticipate using TSE. During these conversations, TSE staff will provide information on the parameters of the training data that must be gathered (including specifics on the rubrics, the need to de-identify student essays, etc.).


Once training data has been assembled it must be sent to TSE staff. It will be evaluated for validity and then used to train a Scoring Engine for the customer. Feedback on the reliability of the training engine will be given to the appropriate customer representatives. If the engine is deemed reliable, the customer organization may proceed with the integration work necessary to automatically grade new writing prompt responses.


- How long does it take to implement TSE?

TSE can be implemented within a few weeks, or it may take up to several months, depending primarily on the customer and their access to several key resources. Examples of these resources include: time for a content/curriculum expert and technical lead to discuss their needs with TSE staff; the existence of an adequate amount of reliably scored training data; and the availability of technical personnel to work on integrating the TSE service into the customer's existing learning management system or testing/assessment environment (the time needed for integration may vary based on whether the customer is using the API, an LTI interface, or other integration methods that may be available in the future).


- Is there any benefit to having more than one expert score the essays provided to TSE during the training process?

Absolutely. Having a second expert opinion on the score given to a particular essay (whether that's an additional total score or an additional set of rubric trait scores) allows TSE staff to perform comparisons on the inter-rater reliability of scoring data. Such measures can be useful because they provide customers with information on the consistency of their experts when compared to one another. This information is also valuable during the implementation of TSE because it effectively doubles the amount of scoring data the engine can examine when learning expert scoring patterns. This generally means a higher level of accuracy can be achieved by the engine.
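To illustrate the kind of inter-rater reliability comparison described above, here is a standalone sketch using Cohen's kappa, a standard chance-corrected agreement statistic. The source does not specify which measures TSE staff use, so this is only an example of the general technique; the rater data is invented.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' scores on the same essays."""
    n = len(rater_a)
    # Proportion of essays where the two experts agreed exactly.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Probability both raters would assign the same score by chance alone.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two experts scoring the same six essays on a 1-4 scale (invented data).
expert_1 = [1, 2, 2, 3, 4, 4]
expert_2 = [1, 2, 3, 3, 4, 4]
kappa = cohens_kappa(expert_1, expert_2)
```

A kappa near 1.0 indicates the experts score very consistently; values much lower would suggest the rubric or scoring guidance needs refinement before training the engine.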

Who provides support for TSE?

- How does an instructor get supported?

Turnitin Scoring Engine will likely be an institutional relationship, not a relationship with an individual instructor. Interested partner institutions should contact their Turnitin account representatives and identify the content/curriculum experts and/or technical leads interested in learning more about TSE.
