The Authorship Dashboard

From the point your account is provisioned, every paper submitted to your institution via Turnitin is processed by Authorship for Investigators and displayed in the My Dashboard area of your Authorship for Investigators account.

The Authorship Dashboard is only available for users who have been given the 'Product Admin' user role.

The dashboard body

The main body of the Dashboard shows the students whose submissions have been given a prediction score of 0.57 and above.

Select the row containing the student you want to investigate further to open the report.

The dashboard body does not show every student that has submitted to your Turnitin account. It only shows students who:

  1. have submitted more than one file to your Turnitin account;
  2. have submitted in a manner that attributes a student ID to them; and
  3. have been given a prediction score higher than 0.57, or
  4. have work that contains 2 or more flags.

The students can be filtered by threshold or by submission date.

The dashboard has six columns.

[Image: The six columns of the dashboard body]

Name

The first column contains the name of the student.

The Dashboard only generates reports for papers with student IDs. Non-enrolled students or students who have used Direct Submission or Quick Submit will not appear in the dashboard.

Priority

The second column shows the priority of the student based on their prediction score: Critical, High, or None. Select the question mark next to the column title to learn more about the priority and prediction score, as well as the average prediction score of the dashboard.

Flags

The third column shows how many Flags the report has found.

Flags are visible evidence that a student’s work may not be entirely their own. We alert you when the metadata of a file does not match the student’s information, or if the metadata is different across multiple documents.

The presence of flags is not proof of a different author, but rather a suggestion that a student’s body of work be reviewed.

Hover over the number to see which Flags are highlighted in this report.

[Image: Flag details shown when hovering over the Flags column]

Select the question mark next to the column title to learn more about Flags and see the average number of Flags in the Dashboard.

Last Submission Date

The fourth column contains the date of the last submission by this student.

Every time a student submits a new paper, the report will regenerate. This will overwrite the previous dashboard report for this student.

Status

The fifth column contains the status of the report, reflecting the state of the student’s Authorship Report the last time it was saved. Use the status to triage cases and investigate specific students. You must open the report to update the status.

Last Saved By

The sixth column indicates the last investigator to leave a summary comment or select an investigation status for the Authorship Report.

Download

To download the dashboard as a .csv file, select the download button.

[Image: The download .csv button]
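If you want to triage the export outside of Turnitin, a short script can help. The sketch below is illustrative only: it assumes the .csv headers match the six column names described above (Name, Priority, Flags, Last Submission Date, Status, Last Saved By) and a file named authorship_dashboard.csv; check your own export, as the exact headers may differ.

```python
import csv

# Illustrative sketch: list students marked Critical in a dashboard export.
# Assumes the header names mirror the dashboard columns; adjust to your file.
with open("authorship_dashboard.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

for row in rows:
    if row.get("Priority") == "Critical":
        print(f"{row.get('Name')}: {row.get('Flags')} flag(s), "
              f"last submitted {row.get('Last Submission Date')}")
```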

Dismiss a student from the Dashboard

Select the check-box next to the student name to select them. Once selected, a student can be dismissed from the dashboard. You can select and dismiss multiple students at once.

[Image: A student row selected in the dashboard]

Dismissed students will no longer appear in the dashboard even if future submissions have 2 or more flags or a score higher than 0.57. We recommend only dismissing a student if they have left your institution.

Dismissed papers will not be included in the .csv download.

If you would like to see students that you have dismissed from the Dashboard, select Show dismissed.

Filter the dashboard

By threshold

You can filter the dashboard by prediction score using the three flag filters.

[Image: The flag filters on the dashboard]

The three filters are:

All Flags - This shows all reports with a score over 0.57. We have calculated this threshold to capture all reports that could be potential contract cheating cases.

High Score - This shows all reports with a score between 0.57 and 0.76. We have calculated this range to capture reports that are likely contract cheating cases.

Critical Score - This shows all reports with a score between 0.77 and 1.00. This range identifies the cases with the highest number of flags and anomalies, which require further investigation.
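To make the score bands concrete, here is a minimal sketch that maps a prediction score onto the filters described above. The cut-off values come from this article; how boundary values such as exactly 0.57 are handled in the product is not documented, so the comparison operators below are an assumption.

```python
def filter_band(prediction_score: float) -> str:
    """Illustrative mapping of a 0-1 prediction score to the dashboard filter bands."""
    if prediction_score >= 0.77:
        return "Critical Score"   # 0.77 to 1.00
    if prediction_score > 0.57:
        return "High Score"       # above 0.57 up to 0.76
    return "Below threshold"      # not surfaced by the flag filters

print(filter_band(0.82))  # Critical Score
print(filter_band(0.60))  # High Score
print(filter_band(0.40))  # Below threshold
```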

By date range

Along the top of the dashboard, you can filter the reports by latest submission date.

The earliest date you can set the filter to is the date that your license for Turnitin Originality was provisioned. If your account has only just been provisioned, you may not be able to select a wide date range. Historical submissions will be included in the dashboard once a student has made a submission after the provisioning date.

Select the date range dropdown to view the calendar and select the date range for which you would like to see reports. You can also type the date in the following format: 1 Aug 2019.

[Image: The date range calendar]

The student row will refresh upon each submission. If you use an older submission date you may not see the student you wish to investigate.

What is our prediction based on?

The prediction score is a score between 0 and 1 that is attributed to each student in the Dashboard. The closer the score is to 1, the more likely it is there is something worth investigating. It is calculated using Turnitin’s prediction algorithm, which uses Natural Language Processing (NLP) methods.

NLP is a subfield of Artificial Intelligence that is focused on enabling computers to understand and process human languages, to get computers closer to a human-level understanding of language.

So how does it work?

The prediction algorithm was trained on a set of labeled data containing student work by the same author and work by different authors. The machine-learning algorithm was trained on a very large labeled dataset across hundreds of linguistic features to learn what characteristics signify authorship and non-authorship. We then tested the trained algorithm on another large test/validation dataset to ensure we did not overtrain our model on the training data.

Individually, these linguistic features are often too complex to present in a useful way. For this reason, we combine them into an easily digestible score that we can attribute to a student.
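Turnitin does not publish the details of its model, but the general pattern, training a supervised classifier on pairs of texts labelled same-author or different-author and reporting a probability-like score, can be sketched with standard open-source tools. The example below is purely illustrative and is not Turnitin’s algorithm: it uses character n-gram TF-IDF vectors as crude stand-ins for linguistic features, a toy dataset, and an off-the-shelf logistic regression model.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labelled data: pairs of texts, labelled 1 if written by different authors
# and 0 if by the same author. A real system uses far larger datasets and
# hundreds of curated linguistic features.
pairs = [
    ("I walked to the shop and bought milk.", "I walked home and made some tea.", 0),
    ("The experiment yielded significant results.", "lol idk it was fine i guess", 1),
    ("Results indicate a strong correlation.", "Findings suggest a robust relationship.", 0),
    ("omg this essay is due tmrw!!", "The ontological argument treats existence as a predicate.", 1),
]

# Crude "linguistic features": character n-gram TF-IDF vectors per document.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
vectorizer.fit([t for a, b, _ in pairs for t in (a, b)])

def pair_features(a: str, b: str) -> np.ndarray:
    # Represent a pair by the absolute difference of its two feature vectors,
    # so the classifier learns how far apart two writing styles are.
    va, vb = vectorizer.transform([a, b]).toarray()
    return np.abs(va - vb)

X = np.array([pair_features(a, b) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])
clf = LogisticRegression().fit(X, y)

# Probability-like score that a new pair of texts has different authors.
score = clf.predict_proba([pair_features("I enjoy concise, formal prose.",
                                         "yo that lecture was sooo long")])[0, 1]
print(round(float(score), 2))
```

In practice, the pair representation, feature set, and model choice vary widely; the point of the sketch is only the shape of the pipeline: labelled pairs in, a 0 to 1 score out.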

Is it accurate?

Our accuracy targets were based on research conducted by Deakin University on how well markers identify contract cheating when they are told to look for it. In that study, markers reached a sensitivity of 62% in identifying contract cheating. Our algorithm is tuned to have the same level of sensitivity (detection rate) in identifying different authors, based on our prediction algorithm validation.
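Sensitivity (detection rate) is the share of genuine cases that get flagged: true positives divided by all actual positives. A minimal worked example with made-up numbers, not figures from Turnitin’s validation:

```python
# Sensitivity = true positives / (true positives + false negatives).
# Illustrative numbers only; not Turnitin's evaluation data.
true_positives = 62    # different-author cases correctly flagged
false_negatives = 38   # different-author cases missed
sensitivity = true_positives / (true_positives + false_negatives)
print(f"{sensitivity:.0%}")  # 62%
```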

The more you and other institutions use Authorship for Investigators, the better the prediction model will become.

We never claim that a student has contract cheated, we simply recommend further investigation. It is up to the investigator to determine if there is enough evidence to make a contract cheating allegation.
