Archive ouverte HAL
Preprint, 2022

Flagging suspect publications and crowdsourcing post-publication reassessments: the ‘Problematic Paper Screener’

Guillaume Cabanac, Cyril Labbé, Alexander Magazinov

Abstract

Probabilistic text generators have been used to produce fake scientific papers for more than a decade. More sophisticated AI-powered generation techniques now produce texts indistinguishable from those written by humans, and the generation of scientific texts from a few input keywords has been documented. Our study introduces the concept of tortured phrases: unexpected, odd phrases used in place of established ones, such as ‘counterfeit consciousness’ instead of ‘artificial intelligence.’ Hypothesising the use of advanced language models, we ran a detector on the abstracts of recent articles and on several control sets.
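The idea of screening for tortured phrases can be sketched as dictionary matching against known substitutions. This is only an illustrative toy, not the authors' actual detector: ‘counterfeit consciousness’ comes from the abstract above, while the other dictionary entries and all function names are hypothetical examples.

```python
import re

# Toy dictionary mapping tortured phrases to the established wording
# they likely replace. Only 'counterfeit consciousness' is taken from
# the abstract; the other entries are hypothetical illustrations.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "colossal information": "big data",
}

def screen_abstract(text):
    """Return (tortured phrase, expected phrase) pairs found in `text`."""
    hits = []
    lowered = text.lower()
    for tortured, expected in TORTURED_PHRASES.items():
        # Word boundaries avoid matching inside longer words.
        if re.search(r"\b" + re.escape(tortured) + r"\b", lowered):
            hits.append((tortured, expected))
    return hits
```

For example, `screen_abstract("We apply counterfeit consciousness to chip design.")` would flag the phrase and suggest ‘artificial intelligence’ as the likely original wording; a real screener would of course need a far larger fingerprint list and human reassessment of each hit.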
Main file: 20220308_CLM_Naverlabs.pdf (17.39 MB)

Dates and versions

hal-03603538, version 1 (09-03-2022)

Licence

Creative Commons Attribution 4.0 (CC BY 4.0)

Identifiers

  • HAL Id: hal-03603538, version 1

Cite

Guillaume Cabanac, Cyril Labbé, Alexander Magazinov. Flagging suspect publications and crowdsourcing post-publication reassessments: the ‘Problematic Paper Screener’. 2022. ⟨hal-03603538⟩