
Turnitin's AI writing detection capabilities FAQs

The following content and FAQs pertain to our AI writing detection capabilities for English submissions only. For Spanish submissions, please refer to our Spanish AI writing detection capabilities FAQs.


How do Turnitin’s AI writing detection capabilities work?

Does Turnitin offer a solution to detect AI writing?

Yes. Turnitin has released its AI writing detection capabilities to help educators uphold academic integrity while ensuring that students are treated fairly.

We have added an AI writing indicator to the Similarity Report. It shows an overall percentage of the document that AI writing tools, such as ChatGPT, may have generated. The indicator further links to a report which highlights the text segments that our model predicts were written by AI. Please note, only instructors and administrators are able to see the indicator.

While Turnitin has confidence in its model, Turnitin does not make a determination of misconduct; rather, it provides data for educators to make an informed decision based on their academic and institutional policies. Hence, we must emphasize that the percentage on the AI writing indicator should not be used as the sole basis for action or as a definitive grading measure by instructors.

How does it work?

When a paper is submitted to Turnitin, the submission is first broken into segments of text that are roughly a few hundred words (about five to ten sentences). Those segments are then overlapped with each other to capture each sentence in context.

The segments are run against our AI detection model and we give each sentence a score between 0 and 1 to determine whether it is written by a human or by AI. If our model determines that a sentence was not generated by AI, it will receive a score of 0. If it determines the entirety of the sentence was generated by AI it will receive a score of 1.

Using the average scores of all the segments within the document, the model then generates an overall prediction of how much text in the submission we believe has been generated by AI.
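As a sketch, the segment-and-average flow described above might look like the following Python. The per-sentence scoring model is proprietary, so `score_sentence` is a hypothetical stand-in, and the window size and overlap are illustrative values, not Turnitin's actual parameters.

```python
def split_into_segments(sentences, size=8, stride=4):
    """Break a document into overlapping windows of sentences,
    so each sentence is scored in context."""
    segments = []
    for start in range(0, len(sentences), stride):
        segment = sentences[start:start + size]
        if segment:
            segments.append(segment)
    return segments

def predict_ai_share(sentences, score_sentence):
    """Average per-segment sentence scores (0 = human, 1 = AI)
    into an overall prediction for the document."""
    segments = split_into_segments(sentences)
    segment_scores = [
        sum(score_sentence(s) for s in seg) / len(seg) for seg in segments
    ]
    return sum(segment_scores) / len(segment_scores)
```

A document whose every sentence scores 1 would be predicted as entirely AI-generated; one whose every sentence scores 0 as entirely human-written.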

Currently, Turnitin’s AI writing detection model is trained to detect content from the GPT-3 and GPT-3.5 language models, which includes ChatGPT. Because the writing characteristics of GPT-4 are consistent with earlier model versions, our detector is able to detect content from GPT-4 (ChatGPT Plus) most of the time. We are actively working on expanding our model to enable us to better detect content from other AI language models. 

What parameters or flags does Turnitin’s model take into account when detecting AI writing?

GPT-3 and ChatGPT are trained on text from across the internet, and they generate sequences of words by repeatedly picking a highly probable next word. This means that GPT-3 and ChatGPT tend to generate the next word in a sequence in a consistent and highly probable fashion. Human writing, on the other hand, tends to be inconsistent and idiosyncratic, so the probability of predicting the next word a human will use is low.

Our classifiers are trained to detect these differences in word probability and are attuned to the particular word-probability sequences of human writers.
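To illustrate the kind of signal being described (this is not Turnitin's actual classifier), one common statistic is the average log-probability a language model assigns to each next word. `next_word_prob` below is a hypothetical stand-in for a real model's estimate.

```python
import math

def mean_log_prob(words, next_word_prob):
    """Average log-probability of each word given the words before it.

    LLM-generated text tends to score high (close to 0) because each
    word was a highly probable choice; human writing is more
    idiosyncratic and tends to score lower (more negative).
    """
    total = 0.0
    for i in range(1, len(words)):
        total += math.log(next_word_prob(words[:i], words[i]))
    return total / (len(words) - 1)
```

A classifier can then learn a decision boundary over statistics like this one, rather than using a fixed threshold.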

How was Turnitin’s model trained?

Our model is trained on a representative sample of data, collected over a period of time, that includes both AI-generated and authentic academic writing across geographies and subject areas. While creating our sample dataset, we also took into account statistically under-represented groups, such as second-language learners, English users from non-English-speaking countries, students at colleges and universities with diverse enrollments, and less common subject areas such as anthropology, geology, and sociology, to minimize bias when training our model.

Can I check past submitted assignments for AI writing?

Yes. Previously submitted assignments can be checked for AI writing detection if they’re re-submitted to Turnitin and if you have AI writing enabled for your account.

What languages are supported?

English and Spanish. Turnitin’s AI writing detection capabilities are able to detect likely AI-generated content for documents submitted in long-form English, and since September 2024, we can also detect likely AI-generated content for documents submitted in long-form Spanish. However, our AI paraphrasing detection capabilities are only available for English submissions.

What will happen if a non-English or non-Spanish paper is submitted?

If a non-English or non-Spanish paper is submitted, the detector will not process the submission. The indicator will show an empty/error state with ‘in-app’ guidance that will tell users that this capability only works for English or Spanish submissions at this time. No report will be generated if the submitted content is not in English or Spanish.

Can I or my admin suppress the new indicator and report if we do not want to see it?

Yes, admins have the option to enable/disable the AI writing feature from their admin settings page. Disabling the feature will remove the AI writing indicator and report from the Similarity Report, and they won't be visible to instructors and admins until the feature is enabled again.

Will the addition of Turnitin’s AI detection functionality to the Similarity report change my workflow or the way I use the Similarity report?

No. This additional functionality does not change the way you use the Similarity report or your existing workflows. Our AI detection capabilities have been added to the Similarity report to provide a seamless experience for our customers.

Will the AI detection capabilities be available via LMSs such as Moodle, Blackboard, Canvas, etc?

Yes, users will be able to see the indicator and the report via the LMS they’re using. We have made AI writing detection available via the Similarity report. There is no AI writing indicator or score embedded directly in the LMS user interface and users will need to go into the report to see the AI score.

Does the MS Teams integration support the AI writing detection feature?

AI writing detection is only available to instructors using the Microsoft Teams integration with Turnitin Feedback Studio. It is not available for customers using the Microsoft Teams Assignment Similarity integration, because that integration only uses the student viewer; there is no separate teacher viewer, and our AI detection capability is only available to educators.

However, if an instructor using the Similarity integration has a concern that a report may have been written with an AI writing tool, they can request that their administrator use the paper lookup tool to view a full report.

How is authorship detection within Originality different from AI writing detection?

Turnitin’s AI writing detection technology is different from the technology used within Authorship (Originality). Our AI writing detection model calculates the overall percentage of text in the submitted document that was likely generated by an AI writing tool. Authorship, on the other hand, uses metadata as well as forensic language analysis to detect if the submitted assignment was written by someone other than the student. It will not be able to indicate if it was AI written; only that the content is not the student’s own work.

AI detection results & interpretation

What does the percentage in the AI writing detection indicator mean?

The percentage indicates the amount of qualifying text within the submission that Turnitin’s AI writing detection model determines was generated by AI. Qualifying text includes only prose, meaning that we only analyze blocks of text written in standard grammatical sentences, and exclude other types of writing such as lists, bullet points, and other non-sentence structures.
This percentage is not necessarily the percentage of the entire submission. If text within the submission is not considered long-form prose text, it will not be included.
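In other words, the denominator of the score is the qualifying prose, not the whole document. A toy illustration, where `is_prose` and `is_ai` are naive stand-ins for Turnitin's actual qualification and detection logic:

```python
def ai_percentage(blocks, is_prose, is_ai):
    """Percentage of *qualifying* (prose) blocks flagged as AI-written."""
    qualifying = [b for b in blocks if is_prose(b)]
    if not qualifying:
        return None  # nothing to evaluate, e.g. an all-bullet document
    flagged = [b for b in qualifying if is_ai(b)]
    return 100 * len(flagged) / len(qualifying)
```

A document that is half bullet points could thus show a high percentage even though only a portion of its raw text is highlighted.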

What do the different colors in the report mean?

The percentage detected as AI is the total amount of qualifying text in the submission that likely originated from a Large Language Model (LLM). The submission breakdown divides this AI writing score into two categories:

  • Likely AI-generated text is highlighted in cyan in the submission.
  • Likely AI-generated text that was also likely AI-paraphrased is highlighted in purple.

What is the accuracy of Turnitin’s AI writing indicator?

We strive to maximize the effectiveness of our detector while keeping our false positive rate (incorrectly identifying fully human-written text as AI-generated) under 1% for documents with over 20% AI writing. In other words, we might incorrectly flag roughly one out of every 100 fully human-written documents as AI-written.
To bolster our testing framework and diagnose statistical trends in false positives, in April 2023 we performed additional tests on 800,000 academic papers written before the release of ChatGPT to further validate our less than 1% false positive rate.

In order to maintain this low false positive rate of under 1%, there is a chance that we might miss up to 15% of the AI-written text in a document. We’re comfortable with that trade-off, since we do not want to incorrectly highlight human-written text as AI-written. For example, if we identify that 50% of a document was likely written by an AI tool, it could contain as much as 65% AI writing.
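The trade-off above can be read as simple arithmetic: the reported score is effectively a lower bound, with up to roughly 15 percentage points of additional AI text possibly missed. A small illustrative helper (not a Turnitin API):

```python
def likely_ai_range(reported_pct, miss_margin=15):
    """Interpret a reported AI score as a (lower, upper) bound in percent,
    given that up to `miss_margin` points of AI text may go undetected."""
    return reported_pct, min(reported_pct + miss_margin, 100)
```

For the document in the example, `likely_ai_range(50)` gives `(50, 65)`.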

We’re committed to safeguarding the interests of students while helping institutions maintain high standards of academic integrity. We will continue to adapt and optimize our model based on our learnings from real world document submissions, and as large language models evolve to ensure we maintain this less than 1% false positive rate.

How does Turnitin ensure that the false positive rate for a document remains less than 1%?

Since the launch of our solution in April 2023, we have tested 800,000 academic papers written before the release of ChatGPT. Based on the results of these tests, we made the updates below to our model in May to hold steadfast on our objective of keeping our false positive rate below 1% for a document.

Added an additional indicator for documents with less than 20% AI writing detected
We learned that AI writing detection scores under 20% have a higher incidence of false positives. This is inconsistent behavior, and we will continue to test to understand the root cause. To reduce the likelihood of misinterpretation, we have updated the AI indicator button in the Similarity Report to show an asterisk for percentages below 20%, calling attention to the fact that the score is less reliable.

Increased the minimum word count from 150 to 300 words
Based on our data and testing, we increased the minimum word requirement from 150 to 300 words for a document to be evaluated by our AI writing detector. Results show that our accuracy increases with just a little more text, and our goal is to focus on long-form writing. We may adjust this minimum word requirement over time based on the continuous evaluation of our model.

Changed how we aggregate sentences in the beginning and at the end of a submission
We observed a higher incidence of false positives in the first few or last few sentences of a document.
Usually, this is the introduction and conclusion in a document. As a result, we changed how we aggregate these specific sentences for detection to reduce false positives.

The percentage shown sometimes doesn’t match the amount of text highlighted. Why is that?

Unlike our Similarity Report, the AI writing percentage does not necessarily correspond to the amount of text highlighted in the submission. Turnitin’s AI writing detection model only looks for prose sentences contained in long-form writing, meaning individual sentences contained in paragraphs that make up a longer piece of written work, such as an essay, a dissertation, or an article. The model does not reliably detect AI-generated text in non-prose forms such as poetry, scripts, or code, nor does it detect short-form or unconventional writing such as bullet points or annotated bibliographies.

This means that a document containing several different writing types would result in a disparity between the percentage and the highlights.

What do the different indicators mean?

The AI writing report opens in a new tab of the window used to launch the Similarity Report. 

Upon opening the Similarity Report, after a short period of processing, the AI writing detection indicator will show one of the following:

  • Blue with a percentage between 0 and 100: The submission has processed successfully. The displayed percentage indicates the amount of qualifying text within the submission that Turnitin’s AI writing detection model determines was generated by AI. As noted previously, this percentage is not necessarily the percentage of the entire submission. If text within the submission was not considered long-form prose text, it will not be included.    

We no longer show an AI score for documents where we detect less than 20% of AI writing. With the introduction of AI paraphrasing, there is additional data for the instructor, and we want to reduce data that may not be actionable. 

To explore the results of the AI writing detection capabilities, select the indicator to open the AI writing report. The AI writing report opens in a new tab of the window used to launch the Similarity Report. If you have a pop-up blocker installed, ensure it allows Turnitin pop-ups.

  • Low percentage (*%):
    • False positives (incorrectly flagging human-written text as AI-generated) are a possibility in AI models. To reduce the potential incidence of false positives, no score or highlights are shown for AI detection scores in the 1% to 19% range. When AI is detected below the 20% threshold, the report indicates it with an asterisk (*%) and no percentage is shown. This change applies to new submissions only and will not retroactively apply to existing submissions.
  • Gray with no percentage displayed (- -): The AI writing detection indicator is unable to process this submission. This can be due to one, or several, of the following reasons:
    • The submission was made before the release of Turnitin’s AI writing detection capabilities. The only way to see the AI writing detection indicator/report on historical submissions is to resubmit them.
    • The submission does not meet the file requirements needed to successfully process it for AI writing detection. In order for a submission to generate an AI writing report and percentage, the submission needs to meet the following requirements:
      • File size must be less than 100 MB
      • File must have at least 300 words of prose text in a long-form writing format
      • Files must not exceed 30,000 words
      • File must be written in English
      • Accepted file types: .docx, .pdf, .txt, .rtf
  • Error ( ! ): This error means that Turnitin has failed to process the submission. Turnitin is constantly working to improve its service, but unfortunately, events like this can occur. Please try again later. If the file meets all the file requirements stated above, and this error state still shows, please get in touch through our support center so we can investigate for you.
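As a quick sanity check before submitting, the file requirements above can be expressed as a simple predicate. The extension, size, and word-count checks here are naive stand-ins for whatever validation Turnitin performs internally.

```python
# Requirements as listed in this FAQ for generating an AI writing report.
ACCEPTED_EXTENSIONS = {".docx", ".pdf", ".txt", ".rtf"}
MAX_FILE_SIZE_MB = 100
MIN_WORDS = 300
MAX_WORDS = 30_000

def meets_ai_report_requirements(extension, size_mb, word_count):
    """Return True if a file satisfies the listed AI-report requirements."""
    return (
        extension.lower() in ACCEPTED_EXTENSIONS
        and size_mb < MAX_FILE_SIZE_MB
        and MIN_WORDS <= word_count <= MAX_WORDS
    )
```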

What can I do if I feel that the AI writing detection indicator is incorrect? How does Turnitin’s indicator address false positives?

If you find AI written documents that we've missed, or notice authentic student work that we've predicted as AI-generated, please let us know! Your feedback is crucial in enabling us to improve our technology further. You can provide feedback via the 'feedback' button found in the AI writing report. While using the new, enhanced report experience, simply click or tap the Turnitin logo in the bottom right corner of the viewer and select Give feedback.

Sometimes false positives (incorrectly flagging human-written text as AI-generated) can occur with lists without much structural variation, text that literally repeats itself, or text that has been paraphrased without developing new ideas. If our indicator shows a high amount of AI writing in such text, we advise you to take that into consideration when looking at the percentage indicated.

In a longer document with a mix of authentic writing and AI generated text, it can be difficult to exactly determine where the AI writing begins and original writing ends, but our model should give you a reliable guide to start conversations with the submitting student.

In shorter documents where there are only a few hundred words, the prediction will be mostly "all or nothing" because we're predicting on a single segment without the opportunity to overlap. This means that some text that is a mix of AI-generated and original content could be flagged as entirely AI-generated.

Please consider these points as you are reviewing the data and following up with students or others.

Will students be able to see the results?

The AI writing detection indicator and report are not visible to students. However, with the PDF download feature, instructors can download and share the AI report with students.

Does the AI Indicator automatically feed a student’s paper into a repository?

No, it does not. There is no separate repository for AI writing detection. Our AI writing detection capabilities are part of our existing similarity report workflow. For institutions that have the AI writing feature enabled, when we receive submissions, they are compared and evaluated via our proprietary algorithms for both similarity text matching and the likelihood of being AI writing (generated by LLMs). Customers retain the ability to choose whether to add their student papers into the repository or not.

If enabled, AI writing detection is run on a submission and the results are shared in the Similarity Report. The percentage of AI writing identified by the detector, along with the segments identified as highly likely to have been written by AI, are retained as part of the Similarity Report.

What is the difference between the Similarity score and the AI writing detection percentage? Are the two completely separate or do they influence each other?

The Similarity score and the AI writing detection percentage are completely independent and do not influence each other. The Similarity score indicates the percentage of matching text found in the submitted document when compared to Turnitin’s comprehensive collection of content for similarity checking.

The AI writing detection percentage, on the other hand, shows the overall percentage of text in a submission that Turnitin’s AI writing detection model predicts was generated by AI writing tools.

Does the Turnitin model take into account that AI writing detection technology might be biased against particular subject-areas or second language writers?

Yes, it does. One of the guiding principles of our company and of our AI team has been to minimize the risk of harm to students, especially those disadvantaged or disenfranchised by the history and structure of our society. Hence, while creating our sample dataset, we took into account statistically under-represented groups like second-language learners, English users from non-English speaking countries, students at colleges and universities with diverse enrollments and less common subject areas such as anthropology, geology, sociology, and others.

How can I use the AI writing detection indicator percentage in the classroom with students?

Turnitin’s AI writing detection indicator shows the percentage of text that has likely been generated by an AI writing tool while the report highlights the exact segments that seem to be AI-written. The final decision on whether any misconduct has occurred rests with the reviewer/instructor. Turnitin does not make a determination of misconduct, rather it provides data for the educators to make an informed decision based on their academic and institutional policies.

Can I download the AI report like the Similarity report?

Yes. The AI detection report can be downloaded as a PDF via the ‘download’ button located in the right-hand corner of the report.

Can I view aggregated AI scores across submissions for my institution?

Yes, starting February 13, 2024, administrators can view aggregated AI scores for their institution. This feature is currently only available to admins and shows statistics at the institution/parent level. We will update you as we continue to develop this functionality.

Scope of detection

Which AI writing models can Turnitin’s technology detect?

The first iteration of Turnitin’s AI writing detection capabilities was trained to detect models including GPT-3, GPT-3.5, and variants. Our technology can also detect other AI writing tools that are based on these models, such as ChatGPT. We’ve completed our testing of GPT-4 (ChatGPT Plus), and our solution detects text generated by GPT-4 most of the time. We plan to expand our detection capabilities to other models in the future.

Which model is Turnitin’s AI detection model based on?

Our model is based on an open-source foundation model from Hugging Face, which we refined through multiple rounds of carefully calibrated retraining, evaluation, and fine-tuning. What we must emphasize is that the unique power of our model arises from the carefully curated data we've used to train it, leveraging our 20+ years of expertise in authentic student writing, along with the technology we developed to extract the maximum predictive power from the model trained on that data. In training our model, we focused on minimizing false positives while maximizing accuracy for the latest generation of LLMs, ensuring that we help educators uphold academic integrity while protecting the interests of students.

How will Turnitin be future-proofing for advanced versions of GPT and other large language models yet to emerge?

We recognize that Large Language Models (LLMs) are rapidly expanding and evolving, and we are already hard at work building detection systems for additional LLMs. Our focus initially has been on building and releasing an AI writing detector for GPT-3 and GPT-3.5, and other writing tools based on these models, such as ChatGPT.  Since then, we’ve expanded our detection capabilities to include GPT-4, GPT-4o, Gemini (Pro) and LLaMA.

Will the AI percentage change over time as the detector and the models it is detecting evolve?

Yes, as we iterate and develop our model further, it is likely that our detection capabilities will also change, affecting the AI percentage. However, for a submitted document, the AI percentage will change only if it's re-submitted again to be processed.

If students use Grammarly for grammar checks, does Turnitin detect it and flag it as AI?

No. Our detector is not tuned to target Grammarly-generated spelling, grammar, and punctuation modifications to content, but rather other AI content written by LLMs such as GPT-3.5. Based on initial tests we conducted on human-written documents with no AI-generated content in them, in most cases, changes made by Grammarly (free and premium) and/or other grammar-checking tools were not flagged as AI-written by our detector. Please note that this excludes GrammarlyGo, which is a generative AI writing tool; content produced using that tool will likely be flagged as AI-generated by our detector.

Access & licensing

How can customers get access to AI writing detection?

AI writing detection is only available to customers that license Turnitin Originality. If you are a Turnitin Similarity, Turnitin Feedback Studio (TFS), or OriginalityCheck customer, please speak to your Turnitin account manager regarding access to AI writing detection.

Please note that for customers licensing TFS, the ‘OriginalityCheck’ listed among the products in your institutional account dashboard refers to the component of your institution’s TFS license that allows papers to be checked against our database and generates Similarity Reports; it is distinct from Turnitin Originality.

iThenticate 2.0 customers can get access to this feature if they license AI writing capabilities as an add-on. Please speak to your account manager for details.

Is Turnitin’s AI writing detection a standalone solution or is it part of another product?

Turnitin’s AI writing detection capabilities are a separate feature of the Similarity Report and are available to customers when licensing Turnitin Originality in addition to their existing product, or the AI writing add-on, when using iThenticate 2.0.

How will TFS with Originality customers access AI writing from Jan 2024?

TFS with Originality customers will be able to access AI writing detection from both their TFS account as well as Originality accounts. Customers can choose the workflow that works best for them.

I had AI writing detection enabled in my TFS sub-account but I don’t seem to have access to it anymore. Why is that?

Since AI writing detection is now only available to customers that license Originality, we have updated our user-settings in the back-end to manage usage within our products. This means that if your institution decides not to license Originality and/or disables the AI writing detection feature, any sub-account under the main account will lose access to the feature. If you’re unsure about your institution’s licensing or are unable to access AI writing detection despite licensing Originality, please speak to your institution’s Turnitin administrator.

In 2024, will I be able to see AI scores for previously submitted documents if I don’t license Originality?

No, from January 1, 2024, AI writing detection will only be available via the Originality license. Customers who do not license Originality will be unable to see the AI score and the reports, including reports for prior submissions. However, if your institution purchases the Originality add-on and enables the AI detection feature, you will be able to view the AI scores and reports for prior submissions.

Why is AI detection not being added to Gradescope?

We focused our resources on what we view as the biggest, most acute problem: long-form writing in higher education and K12. We do not currently have plans to add these capabilities to Gradescope, since the primary use case for Gradescope is handwritten text, while AI detection focuses on typed text. However, we are happy to learn more about customer needs for AI writing detection within this product. In addition, we are not pursuing ChatGPT code detection at this time.

Where can I find more information about this new solution?

You can find more information in our Turnitin AI writing detection capabilities article.


Why is Turnitin charging for AI writing from January 2024?

We made the decision to provide free access to our AI detection capabilities during the preview phase to support educators during an unprecedented time of rapid change. We received a significant amount of positive feedback from customers, and we acted on that feedback.

The decision to move to a paid licensing structure beginning January 2024 was made to ensure that we can continue to provide high-quality AI writing detection features to our customers. This enables us to invest in further research and development and improve our infrastructure to meet the evolving needs of our customers.

This is also in-line with our overall product strategy wherein we will continue to develop new capabilities to meet the changing requirements of our customers and to ensure we keep pace with technological enhancements. Some of these new features will form part of our base offering such as the new Similarity Report, and will be included in the price of the license, while others will be considered as premium capabilities such as AI writing, and will be available at an additional cost.

Will Turnitin process my submission for AI writing detection if my institution does not use the feature?

No, we will only process submissions for AI writing detection if the institution has the feature enabled.


Can my institution opt out if we do not want Turnitin to process our submissions for AI writing detection?

Yes, if your institution does not want submissions to be processed for AI writing detection, they can opt out by disabling AI writing detection from their admin account settings page.

If I re-enable AI writing detection, will it automatically show me scores for submissions made before it was enabled?

No, we cannot retroactively process submissions. If you would like to process past submissions for AI writing, you will need to re-submit the document.

Can I request deletion of my institution’s data prior to disabling the feature?

Yes, customers can request a full deletion of their submissions; we cannot support partial data deletion requests to delete only the AI writing component of the submission data.

 

AI paraphrasing detection

The FAQs in the previous section on AI writing also apply to AI paraphrasing, except for the ones answered below. This capability only works for English submissions.

Product capabilities

Can Turnitin detect if content has been paraphrased using an AI paraphrasing tool?

Yes, Turnitin’s AI indicator includes detection of AI-generated content that may have been paraphrased using a word spinner/AI paraphrasing tool. This technology is run automatically for all submissions to Turnitin by institutions that have the AI writing indicator enabled for their accounts.

In the AI writing report, likely AI-generated content and AI-generated content that was likely AI paraphrased are highlighted in different colors to enable instructors and other users to interpret results easily.

 

How does Turnitin’s AI paraphrasing detection work?

Turnitin’s AI paraphrasing detection is part of our AI writing detection capabilities. When a submission is made, it is first run through our AI writing model. The AI paraphrasing model is then run only on segments marked by the AI detection model as AI-generated.

Explaining this in further detail - when a submission is made, it is first broken into segments of text that are roughly a few hundred words (about five to ten sentences). Those segments are then overlapped with each other to capture each sentence in context. 


The segments are first run against our AI detection model. We give each sentence a score between 0 and 1 to determine whether it is written by a human or by AI. 

Next, if our model determines a segment was AI-generated, the AI paraphrase model also runs an analysis. Like the AI writing detector, the AI paraphrase model gives each sentence a score between 0 and 1. If the model determines that a sentence was AI-generated but not paraphrased using an AI tool, it will receive a score of 1 from the AI detection model and a score of 0 from the AI paraphrasing model, and will be marked only as AI-generated. Conversely, a sentence will be marked as human-written if it receives a score of 0 from the AI detection model. For the current iteration of this model, we do not show text detected as human-paraphrased.

Using the average scores of all the segments within the document, the models then generate an overall prediction of how much text in the submission we believe has been generated by AI and paraphrased using an AI tool.
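A minimal sketch of how the two stages combine per sentence, assuming hypothetical scores between 0 and 1 from each model and an illustrative 0.5 decision threshold (the real thresholds are not public):

```python
def classify_sentence(ai_score, paraphrase_score, threshold=0.5):
    """Map the two model scores to the report's highlight categories."""
    if ai_score < threshold:
        return "human-written"  # paraphrase model output is not shown
    if paraphrase_score >= threshold:
        return "AI-generated, AI-paraphrased"  # purple highlight
    return "AI-generated"  # cyan highlight
```

Note that the paraphrase verdict only matters for sentences the first model already flagged as AI-generated.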

Can I check past submitted assignments for AI paraphrasing detection?

Yes. The AI paraphrase detector only processes submissions made from the date it was released, so previously submitted assignments can be checked for AI paraphrasing if they’re re-submitted to Turnitin and you have AI writing enabled for your account.

What are the technical requirements for a paper to be checked for AI paraphrasing detection?

The requirements are the same for both AI writing and AI paraphrasing detection:

  • File size must be less than 100 MB
  • File must have at least 300 words of prose text in a long-form writing format
  • File must not exceed 30,000 words 
  • File must be written in English
  • Accepted file types: .docx, .pdf, .txt, .rtf
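The requirements above can be expressed as a simple eligibility pre-check. This is an illustrative sketch, not Turnitin's actual validation logic: the whitespace-based word count is a naive stand-in for real tokenization, and the English-language requirement is omitted because it needs a language detector.

```python
# Thresholds taken from the FAQ's list of requirements.
ACCEPTED_EXTENSIONS = {".docx", ".pdf", ".txt", ".rtf"}
MAX_FILE_SIZE = 100 * 1024 * 1024   # less than 100 MB
MIN_WORDS, MAX_WORDS = 300, 30_000

def eligible_for_ai_detection(filename, file_size, text):
    """Return (eligible, problems) for a submission.

    Naive sketch: word count is a whitespace split, and the
    English-language check is intentionally left out.
    """
    problems = []
    if not any(filename.lower().endswith(ext) for ext in ACCEPTED_EXTENSIONS):
        problems.append("unsupported file type")
    if file_size >= MAX_FILE_SIZE:
        problems.append("file is 100 MB or larger")
    words = len(text.split())
    if words < MIN_WORDS:
        problems.append("fewer than 300 words of prose")
    if words > MAX_WORDS:
        problems.append("more than 30,000 words")
    return (len(problems) == 0, problems)
```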

How will the paraphrasing functionality surface in the product? Will it change my workflow?

AI paraphrasing detection is integrated within our AI writing indicator. This means that submissions will automatically be checked for both AI writing and AI paraphrasing, if institutions have AI writing detection licensed and enabled. 

Users will experience no change in their current workflow. Likely AI-generated content that may have also been AI-paraphrased will be highlighted separately in the AI writing report.


Which AI paraphrasing tools can Turnitin detect?

In our AI Innovation Lab, we have conducted tests using open-source paraphrasing tools. Our technology has retained its effectiveness and is able to identify AI-generated text as likely paraphrased by a word spinner.

We also tested our model on some of the more easily available and popular text spinners, such as Quillbot, Grammarly (free paraphraser), and Scribbr, and our model is able to identify AI-generated text as likely paraphrased when run through these tools.


If I use Grammarly’s paraphrasing tool, will it flag my content as AI-generated?

Yes, if Grammarly’s paraphrasing tool is used to modify AI-generated text, it will likely be flagged as AI-generated by our detector. 


Will my admin be able to see statistics for AI paraphrased content?

No. At this time, the statistics dashboard available in the Admin console only provides an aggregated view of the AI score and does not differentiate between likely AI-generated content and likely AI-generated content that may have been AI-paraphrased. We are continually enhancing our data insights and will look to add further granularity to the dashboard in the future.

Detection of results & interpretation

Does the AI paraphrasing feature affect the overall accuracy of Turnitin’s AI writing indicator?

No, AI paraphrasing detection works only on content marked as AI-generated by the AI writing detector, so the AI paraphrasing model doesn't influence the false positive rate, the main metric of our overall AI writing detection capabilities. 


What is the accuracy of Turnitin’s AI paraphrasing technology?

We’re committed to safeguarding the interests of students while helping institutions maintain high standards of academic integrity. The objective of our AI writing indicator is to maximize the effectiveness of our detector while keeping our false positive rate under 1% for documents with over 20% likely AI-generated content. AI paraphrasing detection doesn't change these metrics, as it works only on content identified as AI-generated by our detector. However, when run on the content marked as likely AI-generated by our detector, it can misidentify text. This can happen in two ways: first, incorrectly identifying likely AI-generated but not AI-paraphrased text as likely AI-generated and AI-paraphrased; and second, incorrectly identifying likely AI-generated and AI-paraphrased text as likely AI-generated but not AI-paraphrased.


What is the difference between false positives and misidentifications in this context?

The false positive rate is the percentage of human-written content incorrectly identified as likely AI-generated. Misidentification is the percentage of likely AI-generated text incorrectly classified as likely AI-generated and likely AI-paraphrased (misidentification type 1), or of likely AI-generated and likely AI-paraphrased text classified as only likely AI-generated (misidentification type 2).
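As an illustration of the definitions above, the two misidentification rates could be computed from ground-truth versus predicted labels as follows. This is a hypothetical evaluation sketch, not Turnitin's measurement methodology; the label names are assumptions for illustration.

```python
def misidentification_rates(true_labels, pred_labels):
    """Sketch of the two misidentification rates defined above.

    Labels: "ai"      = AI-generated, not AI-paraphrased
            "ai-para" = AI-generated and AI-paraphrased
    Only text that is truly AI-generated is considered, so neither
    rate involves human-written text (and neither affects the
    false positive rate).
    """
    ai_only = [(t, p) for t, p in zip(true_labels, pred_labels)
               if t in ("ai", "ai-para")]
    n = len(ai_only)
    # Type 1: AI-generated (not paraphrased) classified as AI-paraphrased.
    type1 = sum(t == "ai" and p == "ai-para" for t, p in ai_only) / n
    # Type 2: AI-generated and AI-paraphrased classified as only AI-generated.
    type2 = sum(t == "ai-para" and p == "ai" for t, p in ai_only) / n
    return type1, type2
```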


Does misidentification increase the false positive rate? 

Misidentification doesn’t increase the false positive rate. Because the AI paraphrasing model runs only on text already flagged as likely AI-generated, the false positive rate remains less than 1% for documents with more than 20% AI-generated content.


How should instructors interpret the results? 

Turnitin’s AI writing & paraphrasing features provide data and insight to enable educators to start a formative conversation with their students in conjunction with their academic misconduct policies. The final decision on whether any misconduct has occurred rests with the reviewer/instructor. Turnitin does not make a determination of misconduct; rather, it provides tools to help its customers understand how AI tools are being used at their institution.



Access & licensing

Will my institution need to pay extra for AI paraphrasing detection?

No, AI paraphrasing is part of our AI writing indicator. If your institution licenses Originality (for Similarity, TFS & OC customers) and has the AI indicator enabled, it will get access to AI paraphrasing as well. Submissions will automatically be processed for both AI-generated content and AI-paraphrased content, if AI writing is enabled.

The same is true for iThenticate 2.0 customers licensing AI writing detection capabilities.


Can my institution get access to trial this new capability?

No, there is no option to separately trial this feature. Our AI paraphrasing detection is integrated within our AI writing detection workflow and works in conjunction with it. Customers that have access to AI writing detection will automatically get access to AI paraphrasing, and their submissions will be processed for both AI writing and AI paraphrasing detection.


Can my institution suppress the AI paraphrasing if we do not want to use it and if we only want to use AI writing detection?

No. Our AI paraphrasing detection is integrated within our AI writing detection workflow and cannot be suppressed separately. Submissions from customers that have AI writing detection enabled on their accounts will automatically be processed for AI paraphrasing as well.


Will students be able to see the results?

The AI writing score and report, which includes AI paraphrasing scores as well, are not visible to students. However, instructors can download and share the PDF report with students if they wish.


Can I download the AI paraphrasing report?

Yes, AI paraphrasing scores and highlights are included in the AI writing report. When you download the AI writing report, it will also include the AI paraphrasing results.
