By using the process outlined above, we developed 4 broad recommendations (Table 2). Rather than target a particular sector, or problem, these recommendations speak to multiple points in the scholarly communications ecosystem.

Table 2 Reducing the Inadvertent Spread of Retracted Science (RISRS) Recommendations

Develop a Systematic Cross-industry Approach to Ensure the Public Availability of Consistent, Standardized, Interoperable, and Timely Information about Retractions

Over 94% of post-retraction citations in biomedicine do not demonstrate awareness that the cited item was retracted [24]. Users’ typical citation workflows may involve citing preprints, reusing downloaded copies, citing older works contained in their reference managers, and copying citations from their own or others’ previous bibliographies [21, 22]. Among citation styles, only the American Medical Association [76], National Library of Medicine [77], and American Psychological Association [78] styles provide explicit standards for citing retracted papers (See Appendix E: Existing Citation Standards for Retracted Publications in [48]). Among commonly used systems, only a handful of databases (such as PubMed and RetractionWatch) and tools built on them (such as Zotero, EndNote, Papers and scite) ensure that users know that a paper they are citing is retracted.

Information about retraction needs to move across different industry information providers (publishers, abstracting and indexing services, scholarly search engines, etc.). Currently, however, this flow is hampered by unreliable dissemination, inconsistent information, and inconsistent presentation of retraction status [30, 79,80,81].

Shared standards amongst publishers are necessary, but currently there are no industry-wide standards for retraction information or its visibility. The most widely accepted guidelines, from the Committee on Publication Ethics (COPE) [1] and the International Committee of Medical Journal Editors (ICMJE) [2], recommend how to make retraction information easy to use and find. However, they are not uniformly adopted. Although both are widely accepted by many publishing groups, particularly in medicine [82, 83], previous research has found that publishers do not uniformly adhere to COPE and ICMJE recommendations [84,85,86] and that more consistent display standards are needed, particularly regarding uniformity in landing pages [81]. In 2015, Retraction Watch also published its own standard for what a retraction notice should include, with more details than those of COPE and ICMJE [87].

Supporting and motivating stakeholders to consistently adopt and follow COPE and ICMJE recommendations for managing retracted articles and retraction notices is a baseline for further improvements. Beyond COPE and ICMJE recommendations, publishers should update procedures to add ‘Retracted’ to the titles of retracted articles following the example of the database Web of Science [27, 88].

A standards group should develop best practices for databases that facilitate the public and unrestricted access to and dissemination of retraction statuses and retraction notices. Processes, model license agreements, and standards for retraction data interchange are needed to facilitate information flow between publishers, aggregators, and database providers. Model license agreements could expand on established agreements such as the National Library of Medicine’s participation agreement for deposit [89].

Future models for disseminating retraction status include specialized databases such as the Retraction Watch Database, inclusion in field-specific databases such as PubMed, and/or metadata in centralized repositories such as DOI registrars including Crossref and DataCite. These data sources are not mutually exclusive, and ideally, retraction status would be up-to-date in all sites where readers encounter publications. Although a well-curated infrastructure could, in principle, be developed by centralizing metadata sourced from publishers, in practice, relying solely on publishers poses challenges because publishers vary in, and sometimes lack, the resources and commitment needed for metadata maintenance (see, for instance, the metadata improvement efforts of Metadata 20/20 [90]). General and field-specific databases also currently curate retraction metadata, but again, the quality varies [28]. Centralized metadata maintained by an external group focused solely on retraction, such as the Retraction Watch Database, ensures high quality; however, it requires an ongoing commitment, with financial resources for skilled curators and technological infrastructure.

Sustainable funding sources are urgently needed for databases to facilitate the public and unrestricted access to and dissemination of retraction notices. For example, the difference that funding makes between restricted and unrestricted access can be seen by comparing the Retraction Watch Database to PubMed. Retraction Watch is a public database with restricted access to over 32,000 retractions across all disciplines [91]: free public search results are limited to 600, license agreements are required for bulk use, and (as of January 2022) there is no Application Programming Interface (API). Its funding has included grants, private donations, and licensing agreements. PubMed, by contrast, is a public database with unrestricted access to and dissemination of over 10,000 retraction notices in biomedicine [92]; its public interface and API are free to users because it is fully funded by the United States government. For other databases, such as Retraction Watch, that are not government-funded, additional funding sources could help make retraction information more freely accessible, especially through automated electronic means of data retrieval (e.g., APIs) to track and disseminate retraction statuses.
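As an illustration of the kind of automated retrieval an open API enables, PubMed's free E-utilities service can be queried with the "Retraction of Publication" publication-type filter to list retraction notices. The sketch below only constructs the query URL (actually issuing the request requires network access), and the helper name is our own, not part of the E-utilities API:

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint (PubMed's free public API).
EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def retraction_notice_search_url(extra_term: str = "", retmax: int = 100) -> str:
    """Build an esearch URL listing PMIDs of retraction notices.

    Uses the "retraction of publication"[pt] publication-type filter;
    "retracted publication"[pt] would instead find the retracted
    articles themselves.
    """
    term = '"retraction of publication"[pt]'
    if extra_term:
        term = f"({term}) AND ({extra_term})"
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return f"{EUTILS_BASE}?{urlencode(params)}"

# Example: retraction notices with a 2020 publication date.
url = retraction_notice_search_url("2020[dp]")
```

Fetching the resulting URL returns a JSON list of PMIDs that can then be passed to the efetch or esummary endpoints for full records.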

Another working group should be convened, composed of reference and citation industry groups along with members from COPE, the National Information Standards Organization (NISO), and others. This working group should be charged with defining best practices for addressing retraction and post-publication amendments in citation styles and citation software, and with developing additional citation styles and standards for indicating the retraction or correction status of a paper in text and in a bibliography.

Multiple stakeholders can play a part in adoption. Citation software developers should add features to flag retracted papers in their tools (e.g., Mendeley, Paperpile, RefWorks, etc.); Zotero, which flags retracted papers based on DOIs in Retraction Watch Database data, can serve as a model, as can EndNote [93] and Papers [40], which announced integrations of Retraction Watch data in Fall 2021. Researchers should use citation software that flags retracted papers. Submission management platforms should integrate tools that enable systematic identification of retracted articles. Publishers should adopt software solutions that identify retracted articles in bibliographies prior to publication and should check bibliographies for retracted papers as part of manuscript review and publishing workflows. Publishers should also invest in maintaining metadata, including promptly registering post-publication amendments with Crossmark [33], which became free to Crossref members in March 2020 [94].
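The bibliography check described above can be sketched as a simple lookup of a manuscript's cited DOIs against a retraction list (e.g., one licensed from a retraction database). This is a minimal illustration, not any vendor's implementation; note that DOIs must be normalized before comparison, since the same DOI may appear as a bare string, a `doi:` URI, or a `https://doi.org/` URL:

```python
def normalize_doi(doi: str) -> str:
    """Lower-case and strip common URL/URI prefixes so DOIs compare reliably."""
    doi = doi.strip().lower()
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
    return doi

def flag_retracted(bibliography_dois, retracted_dois):
    """Return the cited DOIs (in original form) that match the retraction list."""
    retracted = {normalize_doi(d) for d in retracted_dois}
    return [d for d in bibliography_dois if normalize_doi(d) in retracted]
```

For example, a submission platform could run `flag_retracted` over each manuscript's reference list at submission time and surface matches to the handling editor.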

Recommend a Taxonomy of Retraction Categories/Classifications and Corresponding Retraction Metadata that can be Adopted by All Stakeholders

A COPE working group noted in 2017 that “No standard taxonomy of updates exists for publishers to adopt. This leads to inconsistencies from journal to journal and potential confusion for the reader” [95]. Currently, retraction notices often provide vague or limited information about the reasons for retraction [84, 96, 97]. Terms currently in use are not used consistently: for instance, “withdrawal” is currently used in different ways [98,99,100].

People using retracted science and evaluating authors of retracted science demand additional context about retraction to both clean up the literature and disincentivize misconduct [101]. For example, researchers concerned with the stigma of retraction would like to distinguish retraction due to honest error from retraction due to misconduct [102, 103]. Some journals use “retract and republish” [104] or “retract and replace” [105] to signal handling of ‘honest’ errors; but some journals “keep the same DOI for the original and retracted article”, leading to poor indexing of these items’ retraction statuses in bibliographic databases [106].

Several taxonomies have been suggested in the literature: Fanelli et al. proposed definitions for 13 types of amendments, differentiated by asking: What is the issue? What is the impact? Who caused it? and Who communicated it? [102]. A bottom-up classification of retraction notices published in the journal Science led Andersen and Wray to 12 categories of error (4 levels of impact × 3 characterizations of intentionality) [107]. However, concerns about possible reputational damage and the risk of litigation can disincentivize the use of fine-grained distinctions about reasons for retraction [108]. A 2017 COPE working group advocated simplicity, reducing the complexity of the classification and using identifiers to interlink publicly available documents [95]. Their proposal centers on 'use of the neutral term "amendment" to describe all forms of post-publication change to an article' [95]; each amendment notice would indicate who was issuing it and any dissenters; the type ("minor", "major", or "complete"); links to the article being amended and associated resources; the date; and an associated narrative that is "updated as needed with links to any investigation if that is publicly available" [95]. More recently, a 2021 COPE RISRS taxonomy working group made an initial proposal to coalesce on 5 or 6 essential terms [109].

We recommend that a working group composed of standards organizations, publishers, platforms, infrastructure providers, and metadata development organizations work to develop a core taxonomy of retraction categories and corresponding metadata standards in tandem. The corresponding metadata standards should draw on existing models of persistent identifiers, versioning, and explicit links between expressions of concern, retraction notices, and the publications to which they refer. These links should be both machine-actionable and human-understandable. The taxonomy should be integrated with existing versioning systems. The working group should recommend how the taxonomy’s terminology should appear in database records for retraction notices and retracted articles when the taxonomy is implemented and adopted.
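To make the machine-actionable side of such a standard concrete, the fields of the 2017 COPE "amendment" proposal discussed above can be illustrated as a minimal data model. The field names, class names, and example DOIs below are our own hypothetical choices for illustration, not part of any adopted standard:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class AmendmentType(Enum):
    """The three amendment levels in the 2017 COPE proposal."""
    MINOR = "minor"
    MAJOR = "major"
    COMPLETE = "complete"

@dataclass
class AmendmentNotice:
    amended_doi: str                 # persistent identifier of the amended article
    notice_doi: str                  # the amendment notice's own identifier
    amendment_type: AmendmentType
    issued_by: str                   # who is issuing the amendment
    issued_on: date
    narrative: str = ""              # updatable narrative, with investigation links
    dissenters: list = field(default_factory=list)   # any dissenting parties
    related_links: list = field(default_factory=list)  # associated resources

# Hypothetical example record for a complete amendment (i.e., a retraction).
notice = AmendmentNotice(
    amended_doi="10.1234/example.2020.001",
    notice_doi="10.1234/example.2022.099",
    amendment_type=AmendmentType.COMPLETE,
    issued_by="Journal editors",
    issued_on=date(2022, 1, 15),
)
```

Because every field is either a controlled term, an identifier, or a date, a record like this is straightforward to exchange between publishers, aggregators, and databases while remaining human-readable.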

Models of persistent identifier usage may come from best practices for research outputs beyond the traditional scholarly journal article, including preprints, data, software, study protocols, registered reports, and repository content. Examples of best practices include (1) resolving a persistent identifier to a tombstone page [110] when the full text must be removed; (2) providing versioned DOIs (e.g., Zenodo's versioned DOIs for software [111]) so that substantive changes to the content of an item are reflected by a change to the persistent identifier; and (3) using explicit metadata to interlink versions (e.g., relations such as "IsNewVersionOf" and "Obsoletes" in the DataCite Metadata Schema [112], or the more general Crossref intra-work relationships "isReplacedBy" and "Replaces" [113]), as well as older best practices for publications (e.g., the 2008 Journal Article Versions (JAV): Recommendations of the NISO/ALPSP JAV Technical Working Group [114]).
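Practice (3) can be sketched as the metadata a repository might register for a new version. This assumes DataCite-style JSON fields (`relatedIdentifier`, `relatedIdentifierType`, `relationType`); the helper name is ours, and the DOIs use `10.5072`, a prefix conventionally reserved for testing, so they are illustrative only:

```python
def new_version_metadata(new_doi: str, previous_doi: str) -> dict:
    """Build a DataCite-style record for a new version, linking back to the old one."""
    return {
        "doi": new_doi,
        "relatedIdentifiers": [
            {
                "relatedIdentifier": previous_doi,
                "relatedIdentifierType": "DOI",
                "relationType": "IsNewVersionOf",  # DataCite relation type
            },
            {
                "relatedIdentifier": previous_doi,
                "relatedIdentifierType": "DOI",
                "relationType": "Obsoletes",  # DataCite relation type (schema 4.2+)
            },
        ],
    }

record = new_version_metadata("10.5072/example.v2", "10.5072/example.v1")
```

Because the links are expressed as DOIs with controlled relation types, they are machine-actionable; a reader-facing site can render the same links as a human-understandable "newer version available" notice.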

Within contemporary scholarly article publishing, F1000 provides an example of explicit versioning with the use of persistent identifiers: "All versions of an article are accessible, each with their own DOI (digital object identifier) and may be cited individually." [115]. Its website has a useful interface for ensuring that human readers are alerted to the most recent version of an article. For a reader browsing an older version, the F1000 website displays a notification stating: "There is a newer version of this article available." Similar notifications exist on preprint servers and data repositories to indicate new versions, typically as banner messages. At F1000, however, the content cannot be viewed before clicking on "Suppress this message for one day": this interaction design ensures that a human reader with standard web browser settings cannot miss the message. Adoption of similar alerting would address one of the most challenging current problems: ensuring that readers are notified about retraction.

In order to ensure the taxonomy and metadata are viable over the long term, they should be curated and maintained on a discoverable website with a formal home, based in an industry standards organization such as NISO or the International Association of Scientific, Technical and Medical Publishers (STM). Finally, to ensure adoption, we recommend that highly visible organizations build support through endorsement and adoption of the taxonomy and metadata standards, thereby motivating other stakeholders to adopt them as well.

Develop Best Practices for Coordinating the Retraction Process to Enable Timely, Fair, and Unbiased Outcomes

The time between the publication of papers and their potential amendment or retraction is a period in which papers may be adopted, used, and woven into the tapestry of scholarship. The need for retraction can be raised at any time after publication, and this time has been as long as 45 years (e.g., [116]). From the point of view of citation and use, there are two interrelated issues: First, the longer a publication is “alive” in the literature before retraction, the more time it has had to accrue citation and use while considered normal citable literature; this increases the potential impact on the rest of the literature, because there is currently no systematic process for updating knowledge claims when publications have already been cited by the time they are retracted [45]. Second, publications with shorter time to retraction may also receive fewer post-retraction citations [24]. Reducing the time to retraction is desirable to ensure the clear and timely communication of amendments to publications.

Another danger is that compromised research is identified but fails to be retracted because of logistical complexity amongst all stakeholders involved in the retraction process. Additional complications may arise when the author or editor is no longer publishing or no longer living [117, 118]. In these cases, failure to retract enables the continued citation of research that should have been retracted. Likewise, transfer of journals between publishers may adversely impact the display of retraction status.

Existing guidelines acknowledge the problems related to time to retraction. For example, the COPE 2019 guidelines say: "Publications should be retracted as soon as possible after the editor is convinced that the publication is seriously flawed, misleading, or falls into any of the categories described above." However, stakeholders suggest that coordination amongst authors, co-authors, editors, and in some cases institutions may present complex logistical problems or conflicts of interest. For example, review of compromised figures, data sets, and data represented in images can be costly and time-consuming. For editors and publishers, the COPE flowchart library is in common use and could be a model for developing workflow models and suggestions aimed at a variety of additional stakeholders. Some interviewees and workshop participants suggested that efforts to innovate retraction processes in this nexus (between institutions, publishers/editors, funders, and researchers) are often hampered by perceptions of risk and liability. Early adopters of reforms potentially face increased risks (e.g., liability) on top of the cost of developing policies and procedures; potential costs include referral boards or independent investigative bodies.

Here, we recommend using the recommendations of the Cooperation & Liaison between Universities & Editors (CLUE) report [4] to develop best practice guidelines that streamline the retraction process with respect to institutions and sponsoring agencies by improving coordination between institutions, publishers, funders, and researchers. Additionally, COPE and research integrity groups such as the Association of Research Integrity Officers (ARIO) and the European Network of Research Integrity Offices (ENRIO) should work to clarify best practices and guidelines for journals, authors, and institutions to efficiently coordinate and address concerns about published work. This should include offering fast tracks for retraction notices to move through the process more quickly if the authors agree with or request retraction, or if a retraction is requested following an institutional misconduct investigation. Here, publishers should reserve the right to retract in legal agreements with authors. Publishers should also ensure that all journal websites provide clear instructions on how to submit an inquiry or concern about possible research misconduct or serious error; for instance, websites may lack up-to-date contact information or email addresses. Finally, we suggest creating a workflow template for starting a retraction inquiry and adopting a checklist of requisite information for a retraction notice. Publishers and editorial societies should encourage journal editors and institutions to develop systematic processes, including templates and checklists, to coordinate and communicate about the retraction inquiry.

Educate Stakeholders about Pre- and Post-publication Stewardship, including Retraction and Correction of the Scholarly Record

Stakeholder education can help researchers and editors understand the range of post-publication corrections. Retraction is a publishing mechanism for cleaning up the literature and does not in itself signify misconduct. Currently, stakeholders report a tension between the need to correct the literature and the need to preserve their reputations, whether as researchers or as editors. Fear of stigma or career impacts can make researchers reluctant to participate in retraction processes, even to correct honest mistakes or errors. Fear of litigation makes editors reluctant to initiate retraction inquiries [1, 119]. Awareness of retraction and the reasons for retracting research may vary by field; this contributes to confusion about the severity and impacts of retraction. Professional, disciplinary, and scholarly societies, publishing associations and editorial groups, government agencies, and local institutional programming should develop education aimed at multiple groups. Our detailed recommendations for researchers, authors, and editors can be found in the RISRS report [48]. These are examples; education should also be developed for additional stakeholder groups such as librarians, developers of bibliographic databases and search engines, and research integrity officers. Scholarly publications are used not only within the communities that produce them, but also more widely for application to public decisions: in the future, science communicators and journalists, as well as other knowledge brokers who help the public interpret scholarly communication, could be a target for further education as well.



Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
