Published in Vol 27 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/50862.
Jargon and Readability in Plain Language Summaries of Health Research: Cross-Sectional Observational Study


Original Paper

Department of Health and Community Sciences, University of Exeter Medical School, University of Exeter, Exeter, United Kingdom


Corresponding Author:

Iain A Lang, DPhil

Department of Health and Community Sciences

University of Exeter Medical School

University of Exeter

South Cloisters

St Luke's Campus

Exeter

United Kingdom

Phone: 44 7500 786180

Email: i.lang@exeter.ac.uk


Background: The idea of making science more accessible to nonscientists has prompted health researchers to involve patients and the public more actively in their research. This sometimes involves writing a plain language summary (PLS), a short summary intended to make research findings accessible to nonspecialists. However, whether PLSs satisfy the basic requirements of accessible language is unclear.

Objective: We aimed to assess the readability and level of jargon in the PLSs of research funded by the largest national clinical research funder in Europe, the United Kingdom’s National Institute for Health and Care Research (NIHR). We also aimed to assess whether readability and jargon were influenced by internal and external characteristics of research projects.

Methods: We downloaded the PLSs of all NIHR Journals Library reports published from mid-2014 to mid-2022 (N=1241) and analyzed them using the Flesch Reading Ease (FRE) formula and a jargon calculator (the De-Jargonizer). In our analysis, we included the following study characteristics of each PLS: research topic, funding program, project size, length, publication year, and the readability and jargon scores of the original funding proposal.

Results: Readability scores ranged from 1.1 to 70.8, with an average FRE score of 39.0 (95% CI 38.4-39.7). Only 2.8% (35/1241) of the PLSs had an FRE score classified as “plain English” or better; none had readability scores in line with the average reading age of the UK population. Jargon scores ranged from 76.4 to 99.3, with an average score of 91.7 (95% CI 91.5-91.9), and 21.7% (269/1241) of the PLSs had a jargon score suitable for general comprehension. Variables such as research topic, funding program, and project size significantly influenced readability and jargon scores. The biggest differences related to the original proposals: projects whose funding application included a PLS in the most readable 20% were almost 3 times as likely to have a more readable final PLS (incidence rate ratio 2.88, 95% CI 1.86-4.45), and those whose application PLS was in the 20% with least jargon were more than 10 times as likely to have low levels of jargon in the final PLS (incidence rate ratio 13.87, 95% CI 5.17-37.2). There was no observable trend over time.

Conclusions: Most of the PLSs published in the NIHR’s National Journals Library have poor readability due to their complexity and use of jargon. None were readable at a level in keeping with the average reading age of the UK population. There were significant variations in readability and jargon scores depending on the research topic, funding program, and other factors. Notably, the readability of the original funding proposal seemed to significantly impact the final report’s readability. Ways of improving the accessibility of PLSs are needed, as is greater clarity over who and what they are for.

J Med Internet Res 2025;27:e50862

doi:10.2196/50862


Introduction

In recent years, the idea that science should involve and be accessible to nonscientists has grown. Activities such as patient and public involvement, citizen science, open science, and research coproduction represent different facets of this development and are grounded in both practical and normative motives [1-7]. In health research, one aspect of this involves writing a plain language summary (PLS), sometimes also called a “plain English summary” or lay summary. PLSs are short summaries of a study or project intended to increase its accessibility to nonspecialists, and many regulatory agencies and research funders now require them. For example, the European Union requires a PLS as part of the reporting of all clinical trials [8], and PLSs must be included in all systematic reviews published in the Cochrane Library [9] and in all proposals submitted to funders such as the Medical Research Council and the National Institute for Health and Care Research (NIHR), the 2 major state-backed funders of health research in the United Kingdom.

There is evidence to suggest that PLSs do not improve accessibility to the extent we might hope. For example, patients enrolled in clinical trials often have a partial or incorrect understanding of the trial in which they are participating, such as its risks and benefits, despite requirements that they be informed about these issues [10-12]. Pharmaceutical and other industry groups have proposed standards for the preparation of PLSs [13-15]. Existing guidelines on how to write a PLS vary and are sometimes contradictory [16] but often highlight issues around readability and avoidance of jargon [17]. Studies of how research findings are disseminated to nonspecialists usually focus on potential users of research and what they need to do to understand research better, rather than on how the characteristics of researchers and research settings influence dissemination to nonacademic audiences [18,19]. Like those who develop public health guidelines, researchers may emphasize internal validity (confidence in the reliability of the results) over external validity (whether and how the results can be applied in other places) [20]. Studies of public health researchers [21] and dissemination and implementation researchers [22,23] found that the context in which research is funded and conducted influences efforts to communicate it to nonspecialists, but we are aware of no studies comparing across fields or subfields. More knowledge about how different characteristics of research influence attempts to communicate it could help improve communication in the future.

Our aim was to assess the readability and level of jargon in the PLSs of research funded by the United Kingdom’s NIHR. The NIHR is the largest single funding body in the United Kingdom and the largest national clinical research funder in Europe [24]. Full reports of all projects funded in its major research programs are published on the web, and since 2014, it has been obligatory for these to include a PLS that summarizes the research clearly and simply in a way that is accessible to nonspecialists and members of the public. NIHR guidance for researchers on PLSs [25] is that they should follow “a few simple rules” that include “avoid, wherever possible, jargon, abbreviations, and technical terms,” “avoid complicated language or uncommon words,” and “keep sentences short.” In this study, we addressed 3 questions about these PLSs: How readable are they? How much jargon do they contain? Are readability and use of jargon influenced by study characteristics such as topic and size?


Methods

Overview

Our data came from all NIHR Journals Library reports published from mid-2014, when the requirement to include a PLS was introduced, to mid-2022 (May 30). We downloaded the full text of each report from the NIHR website, where they are publicly available [26]. We then used a purpose-written computer program (written in Python) to go through each text and find the PLSs. In a few instances, we could not process the reports in this way, in which case we looked up the PLS manually. We had complete data on 1241 PLSs that were part of reports published in the NIHR Journals Library. Apart from 5 reports that did not have a PLS, we included all the reports published during this period.
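The extraction step can be illustrated with a short sketch. This is not the program we used: it is a minimal example that assumes reports are saved as local text files and that the summary sits between a “Plain English summary” heading and a “Scientific summary” section (the heading wording and file layout are assumptions).

    import re
    from pathlib import Path

    # Assumed section headings; real report layouts may differ.
    PLS_PATTERN = re.compile(
        r"Plain English summary\s*\n(.*?)\nScientific summary", re.DOTALL
    )

    def extract_pls(report_text):
        # Return the summary text, or None so the report can be checked by hand.
        match = PLS_PATTERN.search(report_text)
        return match.group(1).strip() if match else None

    summaries = {}
    for path in Path("reports").glob("*.txt"):  # hypothetical local copies
        pls = extract_pls(path.read_text(encoding="utf-8"))
        if pls is None:
            print(f"{path.name}: no PLS found; check manually")
        else:
            summaries[path.name] = pls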

Ethical Considerations

Our study was based on open-access data related to academic publications and we did not engage directly with the individuals or groups making or receiving these publications. As such, our study was exempt from ethical review as per the terms of the authors’ institutional ethics policy and framework (University of Exeter Research Ethics Policy and Framework, Paragraph 4.3.1).

Outcome Variables

We measured readability using the Flesch Reading Ease (FRE) formula. The FRE is often used in analyzing written health information [27] as well as other scientific texts [28,29] and is based on the idea that longer words and longer sentences make a text less readable. Each text is given a score that gets lower in proportion to the number of longer words and sentences used, so a higher score indicates a text that is easier to read. The formula is as follows:

FRE = 206.835 − 1.015 × (total words / total sentences) − 84.6 × (total syllables / total words)

FRE scores can be categorized as “extremely easy,” “very easy,” “fairly easy,” and so on, down to “very difficult,” as well as by approximate reading age (Table 1). We used a short prewritten computer program [30] to calculate readability scores for each PLS. In the rest of this document, when we refer to readability scores, we mean FRE scores calculated in this way.
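As an illustration, the formula can be implemented directly in a few lines of Python. We used the py-readability-metrics package [30] rather than this sketch, and the syllable counter below is a deliberately naive stand-in (counting runs of vowels), so its scores will differ slightly from the package’s.

    import re

    def count_syllables(word):
        # Rough heuristic: count runs of vowels, with a minimum of 1 per word.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        # FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (206.835
                - 1.015 * (len(words) / len(sentences))
                - 84.6 * (syllables / len(words)))

    print(round(flesch_reading_ease(
        "We tested a new treatment. It helped some patients sleep better."), 1))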

Table 1. Distribution of summaries by Flesch Reading Ease score classification.

    Readability score   Approximate reading age (years)   Difficulty         Values, n (%)
    ≥100                Up to 10                          Extremely easy     0 (0)
    90-100              11                                Very easy          0 (0)
    80-90               12                                Easy               0 (0)
    70-80               13                                Fairly easy        3 (0.2)
    60-70               14-15                             Plain English      32 (2.6)
    50-60               16-18                             Fairly difficult   169 (13.6)
    30-50               Undergraduate: 18-21              Difficult          768 (61.9)
    0-30                Postgraduate: ≥21                 Very difficult     269 (21.7)

We measured jargon using a calculator called the “De-Jargonizer.” It was created to help scientists engage with the public, and it identifies jargon based on the frequency with which words appear in everyday English usage. The developers of the calculator analyzed more than 90 million words used in around 250,000 articles on the British Broadcasting Corporation websites (including news, sports, and science pages) [31] during the years 2012-2015. Based on existing work about how commonly words are used and understood in everyday communication, they categorized the words into high frequency (belonging to the 2000 most common word families, which each appeared more than 1000 times), mid-frequency (appearing between 50 and 1000 times), and jargon (fewer than 50 appearances). Acronyms, which can often be part of jargon, are treated the same way as words, which means that common acronyms such as NHS (National Health Service) or USA (United States of America) fall into the “high frequency” category. Full details of how the calculator was put together, including testing and validation, have been published [32], and a web-based version of the calculator contains a description and additional details [33].

The developers of the calculator also created a score to indicate how suitable a text is for a general audience. If a text uses only common words, the score is 100; lower scores indicate more use of mid-frequency and jargon words. The score is a weighted average: each high-frequency word counts 1, each mid-frequency word counts 0.5, and each jargon word counts 0, and the mean across all words is multiplied by 100.

We downloaded the source code for the calculator [34] and used it to work out a jargon score for each PLS. A higher score means that less jargon was used.
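The scoring logic can be sketched as follows. The word lists here are tiny placeholders, not the calculator’s real corpus-derived lists, and the weighting reflects our reading of the published description [32]; for actual analyses, the released source code [34] should be used.

    import re

    # Tiny placeholder word lists; the real calculator uses corpus-derived
    # lists covering thousands of high- and mid-frequency word families.
    HIGH_FREQ = {"the", "we", "in", "and", "people", "help"}
    MID_FREQ = {"trial", "symptom", "therapy"}

    def jargon_score(text):
        # Weighted average: high-frequency words score 1, mid-frequency 0.5,
        # and jargon (anything else) 0; the mean is scaled to 0-100.
        words = re.findall(r"[a-z']+", text.lower())
        points = sum(
            1.0 if w in HIGH_FREQ else 0.5 if w in MID_FREQ else 0.0
            for w in words
        )
        return 100 * points / len(words)

    print(round(jargon_score("We help people in the trial"), 1))  # 91.7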

We created additional outcome variables to identify PLSs that were better in terms of readability and jargon. For readability scores, we focused on summaries with scores of >50: those classed as “fairly difficult to read” or better. A total of 204 summaries (16.4%, 204/1241) fell into this category. The starting point of this study related to the average reading age in the United Kingdom (see the “Patient and Public Involvement” section), so we also created an outcome variable showing whether a readability score was suitable for a reading age of 9 years—that is, a readability score of 100 or above (Table 1). However, no summaries fell into this category. Rakedzon and colleagues [32] refer to 2 levels of jargon (2% or 5%) as “recommended for general comprehension.” We used the more generous 5% level and categorized scores of more than 95 as having low levels of jargon. A total of 269 summaries (21.7%, 269/1241) fell into this category.
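In pandas terms, these outcome variables reduce to simple threshold flags; this is a minimal sketch with invented scores, and the column names are ours.

    import pandas as pd

    # Toy scores standing in for the 1241 summaries; column names are illustrative.
    df = pd.DataFrame({"fre": [39.8, 61.2, 28.4], "jargon": [92.4, 95.6, 88.1]})

    df["easier_to_read"] = df["fre"] > 50   # "fairly difficult" or better
    df["reading_age_9"] = df["fre"] >= 100  # accessible at a reading age of 9
    df["low_jargon"] = df["jargon"] > 95    # the more generous 5% threshold
    print(df)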

Multimedia Appendix 1 shows examples of PLSs with high and low readability and jargon scores.

Study Characteristics

We used the following information relating to each PLS in our analyses:

  1. Research topic: NIHR classifies all research projects using the UK Clinical Research Collaboration Health Research Classification System, a system for categorizing health research funding that allows funders and others to assess funding schemes [35]. We used the Health Categories dimension of the Health Research Classification System, which groups each project into 1 or more of 21 categories. In our analysis, we included categories if they had at least 50 projects associated with them, which left us with 12 categories: cancer and neoplasms, cardiovascular, infection, mental health, metabolic and endocrine, musculoskeletal, neurological, oral and gastrointestinal, reproductive health and childbirth, respiratory, stroke, and “generic health relevance.”
  2. Funding program: Reports published in the NIHR Journals Library relate to 5 NIHR funding programs: Efficacy and Mechanism Evaluation, Health Technology Assessment, Health and Social Care Delivery Research, Programme Grants for Applied Research, and Public Health Research. Of these, the Efficacy and Mechanism Evaluation program is generally considered most “upstream” (closer to basic than applied science), and the Health and Social Care Delivery Research and Public Health Research programs are most “downstream” (most applied). Details of all NIHR programs are available on the web [36].
  3. Project size: We wanted to know whether the size of a project made a difference to the PLS, and we used the amount of funding as an approximate measure of this. Smaller projects tend to be more focused and contained; larger projects may have more resources to move around and support activities such as public engagement (which does not necessarily mean that a “better” PLS will be produced). We categorized projects by size by ranking them in terms of the amount of funding received and then sorting them into 5 equal groups: top 20%, next 20%, and so on (see the sketch after this list).
  4. Length in words: When preparing their report for the NIHR Journals Library, authors are asked to write a PLS of up to 300 words, but some write shorter summaries, and some write longer ones. Studies of patient-information leaflets in trials have found that long [37,38] and short [39] texts each have problems regarding readability and clarity, and we wanted to see whether the length of the PLS made a difference.
  5. Readability and jargon scores of the original funding proposal: Information about all projects funded by the NIHR is publicly available on the web, including a copy of the PLS submitted as part of the original funding proposal. The difference between this and the final NIHR Journals Library PLS is that the original one sets out what the researchers proposed to do, whereas the PLS in the NIHR Journals Library summarizes what they ultimately did and found. NIHR instructs writers of summaries to “follow the same principles and procedures as in writing the plain language summary that accompanied your funding submission” [40], and we wanted to find out whether original and final report summaries were written in similar ways. Just as we did for the NIHR Journals Library PLSs, we calculated readability and jargon scores for each of the funding proposal PLSs. We categorized these scores by ranking them and then sorting them into 5 equal groups.
  6. Publication year: PLSs became a requirement in mid-2014, and we downloaded our data in mid-2022 (July 22). We wanted to see whether the readability and use of jargon in PLSs had changed over time and did this by categorizing each one according to the year it was published.
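As a sketch of the quintile grouping used for project size, summary length, and original-proposal scores, the split can be done with pandas; the funding amounts below are invented.

    import pandas as pd

    # Invented funding amounts; the real values came from NIHR project records.
    funding = pd.Series([120_000, 450_000, 2_300_000, 890_000, 65_000,
                         1_500_000, 300_000, 700_000, 95_000, 5_000_000])

    # Rank first so ties cannot produce unequal groups, then cut into 5 bins.
    quintile = pd.qcut(funding.rank(method="first"), q=5,
                       labels=["smallest 20%", "2", "3", "4", "largest 20%"])
    print(quintile.value_counts().sort_index())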

Analysis

We analyzed our data using Stata/SE (version 17.0; StataCorp LLC) and Microsoft Excel (version 2409; Microsoft Corporation). For each of the study characteristics described in the preceding section (research topic, funding program, project size, length in words, readability or jargon of the original proposal, and publication year), we calculated descriptive statistics on the number of PLSs in each category or, where categories were of equal size, the range covered by the category. For each category, we estimated the average readability and jargon scores, the percentage of summaries with readability scores higher than 50, the percentage of summaries with jargon scores higher than 95, and the 95% CIs for these estimates. We examined the relationship between readability and jargon scores by calculating their pairwise (Pearson) correlation. We estimated incidence rate ratios and 95% CIs for PLSs in the easiest-to-read and lowest-jargon categories (readability scores of >50 and jargon scores of >95) using a generalized linear model with a modified Poisson approach and robust error variances [41]. We report the results of a model in which all study characteristics were entered simultaneously.
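We ran the models in Stata, but the same modified Poisson approach can be sketched in Python with statsmodels; the data file and column names here are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical file with one row per PLS: binary outcome plus characteristics.
    df = pd.read_csv("pls_scores.csv")

    # Poisson GLM on a binary outcome with robust (sandwich) standard errors:
    # the modified Poisson approach, which yields incidence rate ratios.
    model = smf.glm(
        "easier_to_read ~ C(topic) + C(programme) + C(year)"
        " + C(size_quintile) + C(length_quintile) + C(proposal_quintile)",
        data=df,
        family=sm.families.Poisson(),
    ).fit(cov_type="HC0")

    irr = np.exp(model.params)  # exponentiated coefficients are the IRRs
    print(pd.concat([irr, np.exp(model.conf_int())], axis=1))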

Patient and Public Involvement

This study was prompted by an assertion made at a patient and public involvement meeting, attended by AK (who is a patient and member of the public but not a researcher) and hosted by a leading UK research-funding charity, that the average reading age of the UK population is 9 years. This assertion (along with a few variations) can be found on numerous websites by searching the web for the term “UK average reading age”; we subsequently found that these figures come from the UK Government’s Skills for Life Survey [42]. This raised the question: if this is the case, how accessible to the general population are the PLSs routinely produced in health research funding applications and reports? AK and IAL’s discussions about how to address this question led to the writing of this paper. AK has been involved throughout and is a coauthor.


Results

Readability scores in our sample ranged from 1.1 to 70.8. The mean (average) FRE score was 39.0 (95% CI 38.4-39.7), and the median (middle) score was 39.8. The distribution of scores across readability categories is shown in Table 1. Around one-fifth of summaries had a score below 30, “very difficult to read.”

Jargon scores in our sample ranged from 76.4 to 99.3. The mean (average) jargon score was 91.7 (95% CI 91.5-91.9), and the median (middle) score was 92.4. The distribution of scores across jargon categories is shown in Table 2. Around one-fifth of summaries had a score of 95 or above, suggesting that they would be suitable for a general audience.

Table 2. Distribution of summaries by jargon score.

    Jargon score                 Values, n (%)
    95-100 (least jargon)        269 (21.7)
    90-95                        592 (47.7)
    85-90                        300 (24.2)
    85 or lower (most jargon)    80 (5.4)

The pairwise correlation between readability scores and jargon scores was 0.249 (P<.001), which suggests that when one score is higher, the other is also likely to be higher but that the relationship is moderate. Sixty-six summaries (5.3%, 66/1241) were in both the “easier to read” and “least jargon” categories.

Table 3 shows, for each of the study characteristics, the number of summaries in each category (where relevant) and the mean readability and jargon scores for that category. We found statistically significant variations in estimated readability and jargon scores between categories for each study characteristic.

Table 3. Distribution of summaries, mean readability scores, and mean jargon scores in relation to each study characteristic. Higher readability scores mean easier reading (scores of ≥60 are “plain English,” 50-60 suit high-school level, 30-50 undergraduate level, and ≤30 postgraduate level); higher jargon scores mean less jargon (scores of ≥95 are suitable for a general audience).

    Category                                    n (%)        Mean readability score (95% CI)   Mean jargon score (95% CI)
    Research topic (a)
      Cancer and neoplasms                      164 (13.2)   39.9 (37.9-41.9)                  91.2 (90.6-91.9)
      Cardiovascular                            165 (13.3)   41.6 (39.8-43.4)                  91.6 (91.0-92.1)
      Generic health relevance                  271 (21.8)   37.7 (36.3-39.1)                  94.2 (93.9-94.4)
      Infection                                 93 (7.5)     38.4 (36.2-40.6)                  90.3 (89.5-91.1)
      Mental health                             211 (17.0)   38.2 (36.7-39.7)                  93.1 (92.7-93.5)
      Metabolic and endocrine                   78 (6.3)     42.6 (39.9-45.2)                  92.4 (91.6-93.2)
      Musculoskeletal                           57 (4.6)     37.8 (33.3-40.3)                  91.2 (90.2-92.2)
      Neurological                              81 (6.5)     38.0 (35.3-40.7)                  92.4 (91.6-93.2)
      Oral and gastrointestinal                 106 (8.5)    42.1 (39.9-44.2)                  91.7 (91.0-92.5)
      Reproductive health and childbirth        96 (7.7)     41.6 (39.6-43.7)                  91.2 (90.5-91.9)
      Respiratory                               56 (4.4)     41.3 (38.2-44.4)                  91.0 (90.1-92.0)
      Stroke                                    110 (8.9)    41.7 (39.5-43.9)                  92.9 (92.3-93.5)
    Funding program (b)
      Efficacy and Mechanism Evaluation         81 (6.5)     38.8 (36.4-41.2)                  88.4 (87.7-89.1)
      Health Technology Assessment              630 (50.8)   39.4 (38.5-40.3)                  90.3 (90.0-90.6)
      Health and Social Care Delivery Research  327 (26.4)   37.3 (36.1-38.6)                  94.2 (93.9-94.5)
      Programme Grants for Applied Research     96 (7.7)     37.5 (35.3-39.6)                  92.8 (92.2-93.4)
      Public Health Research                    107 (8.6)    43.4 (41.3-45.6)                  94.0 (93.6-94.5)
    Publication year (b)
      2014                                      91 (7.3)     37.6 (35.4-39.8)                  92.5 (91.8-93.2)
      2015                                      170 (13.7)   38.7 (36.8-40.6)                  91.5 (90.9-92.0)
      2016                                      168 (13.5)   37.5 (35.5-39.5)                  91.2 (90.5-91.9)
      2017                                      147 (11.8)   39.2 (37.6-40.9)                  91.8 (91.2-92.4)
      2018                                      142 (11.4)   41.1 (39.4-42.8)                  91.9 (91.4-92.5)
      2019                                      153 (12.3)   41.3 (39.6-43.0)                  91.7 (91.0-92.3)
      2020                                      154 (12.4)   38.7 (36.7-40.7)                  92.1 (91.4-92.7)
      2021                                      145 (11.6)   38.7 (36.8-40.5)                  91.7 (91.0-92.4)
      2022                                      71 (5.7)     37.1 (34.2-39.9)                  91.5 (90.6-92.5)
    Project size
      Smallest 20%                                           36.9 (35.5-38.4)                  91.4 (90.9-91.9)
      2                                                      38.1 (36.6-39.7)                  92.7 (92.2-93.1)
      3                                                      39.5 (38.1-40.9)                  93.1 (92.7-93.6)
      4                                                      40.2 (38.8-41.6)                  90.5 (89.9-91.0)
      Largest 20%                                            40.3 (38.9-41.7)                  91.0 (90.5-91.4)
    Length of summary
      Shortest 20%                                           38.6 (37.0-40.1)                  91.6 (91.1-92.2)
      2                                                      39.2 (37.7-40.7)                  92.0 (91.5-92.4)
      3                                                      39.3 (37.9-40.6)                  91.2 (90.7-91.7)
      4                                                      40.9 (39.5-42.2)                  92.8 (92.3-93.2)
      Longest 20%                                            37.2 (35.7-38.6)                  91.1 (90.5-91.6)
    Scores in original proposal (c)
      Lowest 20% of original scores                          35.3 (33.8-36.8)                  88.0 (87.5-88.4)
      2                                                      36.8 (35.3-38.4)                  90.5 (90.1-91.0)
      3                                                      39.4 (38.0-40.7)                  92.2 (91.8-92.6)
      4                                                      39.7 (38.4-40.9)                  93.3 (92.9-93.6)
      Highest 20% of original scores                         43.9 (42.5-45.3)                  94.7 (94.4-94.9)

(a) Some summaries were associated with more than 1 area of research, so the total percentage does not add up to 100.

(b) Percentages may not add up to 100 because of rounding.

(c) Readability scores are categorized by the readability score of the original funding proposal; jargon scores are categorized by the jargon score of the original funding proposal.

For the research topic, the highest estimated readability score (most readable) was in the “Metabolic and endocrine” category, and the lowest score (least readable) was for “Generic health relevance.” However, the “Generic health relevance” category was associated with the highest estimated jargon score (least jargon), and the lowest score (most jargon) was for “Infection.”

For the funding program, the highest estimated readability score was associated with the Public Health Research program, and the lowest was in Health and Social Care Delivery Research. In contrast, the Health and Social Care Delivery Research program had the highest estimated jargon score (least jargon). The lowest jargon score (most jargon) was associated with the Efficacy and Mechanism Evaluation program.

For project size, mean readability scores but not jargon scores rose as projects got larger. The highest estimated readability score was associated with the largest projects (top 20% of funding), and the lowest readability was associated with the smallest (bottom 20%). There was no clear pattern of variation in jargon scores.

For length of summary, longer summaries were associated with better readability and less jargon, but only up to a point. Scores appeared to rise across the first 80% of summaries when ranked by length but then dipped so that the lowest scores were in the longest 20% of summaries.

For publication year, the lowest readability score was for 2022, but data for that year were incomplete at the time we collected our data. Readability and jargon scores varied by year, but there did not appear to be any trend in either score over time.

Estimated scores for summaries rose steadily, for both readability and jargon, across the categories of original proposal scores: PLSs whose proposals scored highest had the highest estimated scores, and those whose proposals scored lowest had the lowest.

The proportions of summaries with readability scores of >50 (“fairly difficult to read” or better) and with jargon scores of >95 (low level of jargon) are shown in Figure 1 (by research topic), Figure 2 (by funding program), Figure 3 (by project size), and Figure 4 (by scores in original proposals).

Figure 1. Percentage of summaries with readability scores >50 (“fairly difficult to read” or better) and with jargon scores >95 (low level of jargon), by research topic, with 95% CIs. PLSs: plain language summaries.
Figure 2. Percentage of summaries with readability scores >50 (“fairly difficult to read” or better) and with jargon scores >95 (low level of jargon), by National Institute for Health and Care Research funding program, with 95% CIs. PLSs: plain language summaries.
Figure 3. Percentage of summaries with readability scores >50 (“fairly difficult to read” or better) and with jargon scores >95 (low level of jargon), by project size, with 95% CIs. PLSs: plain language summaries.
Figure 4. Percentage of summaries with readability scores >50 (“fairly difficult to read” or better) and with jargon scores >95 (low level of jargon), by readability scores or jargon scores (as appropriate: readability scores are categorized in relation to the readability score of the original funding proposal, and jargon scores are categorized in relation to the corresponding original jargon score) in original proposals, with 95% CIs. PLSs: plain language summaries.

Finally, Table 4 shows the incidence rate ratios of having each of these “better” scores when all study characteristics are included in the same model. This is useful because it allows us to assess whether the differences we have observed persist when the other differences between summaries are accounted for. It could be, for example, that summaries in the Public Health Research program were more likely to be in the easier-to-read category than those in Programme Grants for Applied Research because of differences in project size or research topic. The results of the regression suggest, however, that even when other differences (eg, size, topic, year) are controlled for, Public Health Research summaries are approximately 5 times as likely as Programme Grant summaries to be more readable. Similarly, projects whose original proposal had little jargon are more than 10 times as likely to have little jargon in their final reports as those with lots of jargon in the original proposal, even when differences in size, topic, and so on are accounted for.

Table 4. Incidence rate ratios (IRRs) and 95% CIs from regression analyses including all study characteristics for each outcome of interest: readability scores >50 (“fairly difficult to read” or better) and jargon scores >95 (low level of jargon). An IRR of 1 with no CI indicates the reference category.

    Category                                    Readability scores >50, IRR (95% CI)   Jargon scores >95, IRR (95% CI)
    Research topic
      Cancer and neoplasms                      1.12 (0.70-1.79)                       0.84 (0.45-1.56)
      Cardiovascular                            0.95 (0.57-1.58)                       0.99 (0.55-1.78)
      Generic health relevance                  0.90 (0.57-1.41)                       1.86 (1.32-2.60)
      Infection                                 0.55 (0.31-0.97)                       1.25 (0.75-2.09)
      Mental health                             0.73 (0.47-1.14)                       1.71 (1.26-2.32)
      Metabolic and endocrine                   1.72 (1.08-2.76)                       1.26 (0.80-1.98)
      Musculoskeletal                           0.68 (0.31-1.49)                       1.17 (0.63-2.15)
      Neurological                              1.05 (0.63-1.75)                       1.74 (1.21-2.50)
      Oral and gastrointestinal                 1.10 (0.63-1.93)                       1.10 (0.66-1.86)
      Reproductive health and childbirth        1.45 (0.95-2.20)                       1.32 (0.84-2.09)
      Respiratory                               1.72 (1.03-2.87)                       0.90 (0.44-1.86)
      Stroke                                    0.50 (0.23-1.09)                       1.69 (0.98-2.92)
    Funding program
      Efficacy and Mechanism Evaluation         2.59 (1.01-6.61)                       1 (reference)
      Health Technology Assessment              3.00 (1.31-6.87)                       2.25 (0.74-6.85)
      Health and Social Care Delivery Research  2.56 (1.05-6.25)                       4.08 (1.33-12.54)
      Programme Grants for Applied Research     1 (reference)                          3.27 (1.00-10.66)
      Public Health Research                    5.35 (2.13-13.4)                       3.87 (1.25-11.96)
    Publication year
      2014                                      0.78 (0.40-1.54)                       0.62 (0.38-1.00)
      2015                                      0.95 (0.56-1.62)                       0.72 (0.47-1.11)
      2016                                      1.00 (0.60-1.67)                       0.89 (0.58-1.37)
      2017                                      0.82 (0.47-1.44)                       1.05 (0.70-1.58)
      2018                                      1.01 (0.64-1.61)                       0.81 (0.54-1.22)
      2019                                      1 (reference)                          1 (reference)
      2020                                      1.25 (0.82-1.89)                       1.22 (0.88-1.70)
      2021                                      0.82 (0.50-1.32)                       1.49 (1.05-2.12)
      2022                                      0.65 (0.34-1.25)                       1.03 (0.67-1.60)
    Project size
      Smallest 20%                              1 (reference)                          1 (reference)
      2                                         1.19 (0.77-1.83)                       0.89 (0.67-1.20)
      3                                         1.36 (0.89-2.06)                       1.01 (0.76-1.35)
      4                                         1.18 (0.76-1.83)                       0.69 (0.49-0.99)
      Largest 20%                               1.64 (1.06-2.55)                       0.59 (0.37-0.97)
    Length of summary
      Shortest 20%                              1 (reference)                          1 (reference)
      2                                         1.07 (0.71-1.61)                       1.09 (0.81-1.47)
      3                                         0.81 (0.51-1.27)                       0.67 (0.47-0.97)
      4                                         1.07 (0.67-1.69)                       0.93 (0.65-1.32)
      Longest 20%                               0.69 (0.42-1.13)                       0.74 (0.51-1.10)
    Scores in original proposal (a)
      Lowest 20% of original scores             1 (reference)                          1 (reference)
      2                                         1.39 (0.85-2.26)                       4.53 (1.62-12.68)
      3                                         1.40 (0.87-2.26)                       7.26 (2.66-19.79)
      4                                         1.48 (0.91-2.40)                       8.74 (3.23-23.63)
      Highest 20% of original scores            2.88 (1.86-4.45)                       13.87 (5.17-37.2)

(a) We analyzed readability score outcomes in relation to the readability score of the original funding proposal and jargon score outcomes in relation to the jargon score of the original funding proposal.


Discussion

Main Findings

Our findings suggest that the PLSs published in the NIHR’s Journals Library are often difficult to read and likely inaccessible to a general audience. Despite the NIHR’s advice to avoid jargon and complicated words and to keep sentences short, many published summaries had lots of jargon and poor readability. We analyzed more than 1200 summaries and found none with readability scores suggesting that they would be accessible to people with the average UK reading age of 9 years.

Readability and jargon scores varied significantly in relation to research topic (where “Metabolic and endocrine” projects did best), funding program (where projects in the Public Health Research Programme did best), and, most noticeably, in relation to how readable the original funding proposal was. The relationship between original funding proposal summaries and final report summaries is notable because it suggests that some authors are consistently better (or worse) at writing accessible summaries.

Readability scores and jargon scores were correlated but did not always coincide. For instance, summaries in the “Generic health relevance” category used relatively few jargon words but were not very readable. The opposite was true of larger projects, which, compared with small projects, had more readable summaries but more jargon. The Efficacy and Mechanism Evaluation funding program was also associated with summaries that were moderately readable yet had high levels of jargon. These differences suggest that summaries may be accessible or easy to read in some ways but not in others and that relying on a single measure to assess a text may miss important aspects of readability. We found no pattern over time: looked at year-on-year, there were differences but no upward or downward trend in either readability or jargon scores.

Comparisons With Other Studies

The mean FRE score we found was higher than the mean score of 23.6 reported for PLSs in 2 psychology journals [43] and the mean of 21 found in a sample taken from the Physiotherapy Evidence Database [44]. A study of research on cystic fibrosis found that the mean FRE score of PLSs was 43.3 in journals and 46.3 in Cochrane reviews [45]. An analysis of medicine information sheets produced by the professional associations of rheumatologists in 3 countries found average FRE scores of 50.8 for Australia, 48.5 for the United Kingdom, and 66.1 for Canada [46]; in patient information leaflets produced by the British Association of Dermatologists, the mean score was 52.2 [47]. An analysis of Cochrane PLSs, which used a different readability formula, concluded that most would be difficult to read for someone with no medical education [48]; this is in line with a previous analysis of Cochrane PLSs, which found that they were very heterogeneous and often failed to adhere to standards [49]. Comparing scores across different samples of texts is not straightforward because of differences in the intended audiences for each. What is clear is that the level of readability we found is markedly worse than that recommended for texts in plain language and suitable for a general audience, which is a score of 60 or above (Table 1).

We are aware of only 1 previous use of a jargon calculator to assess PLSs, by the developers of the calculator we used [32]. They compared levels of jargon in academic abstracts and lay summaries in 2 journals, PLOS Computational Biology and PLOS Genetics. Although they found that the PLSs contained less jargon than the abstracts, they—like us—found that jargon use in the summaries was significantly higher than the recommended levels. We used a cutoff of 5% to identify summaries with low levels of jargon and found that only 21.7% (269/1241) of summaries met this criterion; only 1.0% (13/1241) met the more stringent 2% cutoff proposed by Rakedzon and colleagues [32].

Strengths and Weaknesses of This Study

Previous studies of PLSs have focused on specific research areas (such as physiotherapy) or methods (such as reviews). Our broader approach has enabled us to address part of what Uphold and colleagues [18] described as a critical gap in the dissemination and implementation literature: the analysis of how researcher characteristics and environmental determinants influence attempts to disseminate research findings. We also looked at both readability and jargon, whereas previous studies have focused on one or the other.

FRE scores are widely used [27], are recommended by NIHR and other funders as a way of assessing the readability of PLSs [25], and—as we have used them here—provide a way of looking at large numbers of texts and identifying trends and tendencies in readability. We recognize, all the same, that readability indices are an imperfect way of assessing texts. They can be misleading and have little to do with how easy a text is to understand [50-52], and scores may be inconsistent when tested across different pieces of software because of formatting or other differences [27]. Readability indices also capture only 1 aspect of how science is communicated to the public, and there are more sophisticated ways of understanding [53,54], conducting [55,56], and assessing [57-59] science communication. Our approach focused on summaries written by research teams and could be extended by looking at the perceptions and responses of readers and evaluating the impact of summaries [60-62]. We had no information on the extent to which teams responded to NIHR’s “strong encouragement” to involve a nonacademic member of the public in writing the PLS [25], so we cannot comment on whether this alters their content and presentation.

The way we have assessed jargon focuses on single words and ignores the use of phrases that might otherwise count as jargon.

Multimedia Appendix 1 contains some examples of this. For example, “confidence” and “interval” are both classed as mid-frequency words in our jargon calculator, but the phrase “confidence interval,” often used in reporting statistical estimates in health studies (as in this paper), would probably be considered jargon. The same applies to longer jargon terms such as “incremental cost-effectiveness ratio,” which has a specific technical meaning that the jargon calculator does not pick up on. We have also not accounted for other aspects of summaries that can affect how readable they are, such as the use of numbers and statistics.

The summaries we looked at were all funded by a single large funder (NIHR) and relate to research done in a single country (the United Kingdom). Over the past 3 decades, the United Kingdom has embedded public involvement and engagement in health research more swiftly than most other countries [63,64]. While we may appear critical of NIHR, this study has been possible only because NIHR requires researchers to write summaries and then makes these publicly accessible. NIHR sponsors the UK Standards for Public Involvement, intended as a “description of what good public involvement looks like,” and 1 of the 6 standards is “Communication—use plain language for well-timed and relevant communications, as part of involvement plans and activities” [65]. The NIHR also says that it is “the world’s first health research funder to publish comprehensive accounts of its funded research within its own publicly and permanently available journals” [26].

The PLSs included in NIHR Journals Library publications are central to these 2 commitments—to make publicly available details of the work it funds and to communicate in plain language—and are “in keeping with the NIHR Journals Library’s commitment to accessibility” [26]. They also relate to a substantial investment: the cost of the research represented in the reports we examined is not easy to calculate, but NIHR expenditure on these funding streams in financial year 2021/22 was £206.3 million (approximately €241 million/US $268 million), so it seems reasonable to assume that the cost over the years covered here exceeded £1 billion (approximately €1.17 billion/US $1.30 billion). We might expect the NIHR to have a well-developed set of processes and mechanisms for communicating the results of research to the public, and for this reason, we consider the NIHR’s flagship publication stream, the Journals Library, to be an apt focus for our enquiry.

We have not assessed the extent to which readability indicates comprehension because no existing measure captures this relationship. Assessing comprehensibility usually involves asking people to read 1 or more texts and then measuring their understanding (eg, as Koops van ’t Jagt et al [66] did), which would be challenging to do with a large sample of texts such as those in the NIHR Journals Library. In the absence of measurable comprehensibility, assumptions about the accessibility of PLSs deserve scrutiny and a willingness to reconsider the methods of their production.

Unanswered Questions and Future Research

We would like this study to contribute to a debate on 2 connected questions: what are PLSs for, and who are they for?

From a funder’s perspective, PLSs are a relatively cheap and easy way of disseminating research findings. The teams doing the research are responsible for producing a PLS, and this has strengths (information “straight from the horse’s (or scientist’s) mouth” [67] is presumably less likely to be incorrect or imprecise) as well as weaknesses (researchers often lack training in science communication). Scientific writing can be exclusive: “The language of science, though forward-looking in its origins, has become increasingly anti-democratic… [it] sets apart those who understand it and shields them from those [who] do not” [68], but fluency in this language is necessary for scientists to do their work and have it accepted by other scientists [69]. PLSs could be an important aspect of a more “engaged” university sector [64], but the intended role and audience for PLSs are unclear, and this limits their potential value.

Communicating in a certain way means communicating to a group of people who can understand that type of communication. Attempts to communicate with “the general public” [25] must address multiple audiences [70], and the members of the public being addressed include people who differ widely in their interest, knowledge, and trust in science [71]. Ahmed [72] argues, drawing on the work of Warner [73], that addressing a public generates a public that can be addressed. Saying something in Swahili implies that you are speaking to people who can understand Swahili, and writing something using lots of scientific jargon and complicated sentences implies that you intend it for an audience who can understand such language. The dissemination of research findings has been challenged as a 1-way form of communication that falls short of the principles and expectations of public engagement [74], can be seen as representing public relations rather than science communication [75,76], and contributes to the exclusion of people from low-income, minority ethnic groups [77]. If we regard the production of PLSs as an aspect of Open Science, with its commitments to social engagement in the conduct and outcomes of research [5], we also need to be aware of what is being made visible and invisible in the process [78] and of who is being included and excluded.

Evidence on what is most effective in PLSs of scientific research, in terms of both form and content, is emerging [16,67,79], but there is still much to learn about what works, for whom, and why. The value of training to improve science communication is unclear [80], and attempts to improve PLSs have met with mixed results. For example, Kirkpatrick and colleagues [81] tested 2 approaches: having authors rewrite PLSs using new guidance and having an independent medical writer edit the PLS. In each case, a group of nonspecialists rated the revised versions as easier to read but not easier to understand. A service designed to improve recruitment to studies by having trained patients and carers review research documents succeeded in reducing the amount of jargon but not in improving readability [82].
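
As a practical aside, the readability half of such an assessment is straightforward for authors to automate while drafting. What follows is a minimal, illustrative sketch in Python using the openly available py-readability-metrics package [30]; the sample text and variable names are our own inventions for illustration, and the library requires a passage of at least 100 words.

    import nltk

    # Tokenizer models used by the library; "punkt_tab" is the equivalent
    # resource name on newer NLTK releases.
    nltk.download("punkt", quiet=True)
    nltk.download("punkt_tab", quiet=True)

    from readability import Readability  # pip install py-readability-metrics

    # A made-up draft PLS; the short passage is repeated purely to meet the
    # library's 100-word minimum.
    draft_pls = (
        "We wanted to find out whether a new exercise class helped older "
        "people stay steady on their feet. We asked 120 people aged over 65 "
        "to take part. Half went to the class twice a week and half carried "
        "on as usual. After six months we counted how many falls each group "
        "had and compared the two groups. "
    ) * 2

    fre = Readability(draft_pls).flesch()
    print(f"FRE score: {fre.score:.1f} ({fre.ease})")
    # FRE scores of 60 or above are conventionally classed as plain English;
    # the average in our sample was 39.0.

A comparable self-check for jargon can be run through the De-Jargonizer’s web interface [33] or its published source code [34].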

Creative approaches to communicating research findings have the potential to enable 2-way communication and flatten hierarchies between scientists and nonscientists [83]. Again, when put to the test, results have been mixed: one recent study found that PLSs of published research were more effective than scientific abstracts or graphical abstracts in terms of comprehension and understanding [84], whereas another concluded that graphical summaries were among the most preferred formats [85]. A recent review of journals’ instructions to authors on writing PLSs found considerable inconsistency and suggested that consistent instructions could be developed with members of the public [17]. The creation of common standards for summaries has also been proposed as part of the OpenPharma project [13,14]. Expert consensus conference methods have been used to produce recommendations on maximizing the accessibility of clinical research participant-information leaflets and informed-consent forms [86], and a similar approach could be applied to the preparation of PLSs.

Another approach from which we might learn is citizen science. Communication of all aspects of research is fundamental to citizen science, which has been described as one of the most dramatic developments in science communication in decades [87]. It emphasizes multidirectional and ongoing communication [88] and recognizes storytelling and visualization as central to this [89]. At least in aspiration, citizen science has the potential to improve and transform science communication while also empowering and informing citizen scientists [90]. Other approaches to dissemination have emphasized coproduction [91] and community engagement [92]. As Knowles and colleagues noted, “finding mutually acceptable and valuable ways to express findings is yet another area requiring open discussion and negotiation” [93].

Conclusions

We found that the PLSs we examined had low readability and contained substantial amounts of jargon. Although readability and jargon varied with study characteristics, such as topic and project size, none of the PLSs had readability scores in line with the average reading age of the UK public. The aims of, and audiences for, these PLSs are unclear, and their place in science communication and public engagement requires further consideration. It is uncertain whether these summaries improve public access to research.

Acknowledgments

This report is independent research supported by the UK National Institute for Health and Care Research Applied Research Collaboration South West Peninsula (grant reference number: NIHR200167). The views expressed in this publication are those of the author(s) and not necessarily those of the National Institute for Health and Care Research or the UK Department of Health and Social Care. We did not use generative artificial intelligence in any portion of the writing of this manuscript.

Data Availability

The data we used came from the texts and metadata of reports published in the NIHR’s Journals Library. This is an open-access resource, and all the data we used are publicly available on the web via the Journals Library website: https://www.journalslibrary.nihr.ac.uk/journals/

Authors' Contributions

IAL and AK contributed to conceptualization and methodology. IAL contributed to statistical analysis and writing the original draft. IAL, AK, KB, KS, LA, JD, and KL contributed to the interpretation of the data, critical revision of the manuscript for important intellectual content, and writing—review and editing, and all authors agree to be accountable for all aspects of the work.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Exemplar plain language summaries with high and low readability and high and low jargon scores.

DOCX File, 18 KB

  1. Beresford P, Russo J. Patient and Public Involvement In Research. In: Nolte E, Merkur S, Anell A, editors. Achieving Person-Centred Health Systems. Cambridge. Cambridge University Press; 2020:145-172.
  2. Liabo K, Boddy K, Bortoli S, Irvine J, Boult H, Fredlund M, et al. Public involvement in health research: what does 'good' look like in practice? Res Involv Engagem. 2020;6:11. [FREE Full text] [CrossRef] [Medline]
  3. Duncan S, Oliver S. Editorial: motivations for engagement. Research for All. 2017;1(2):229-233. [CrossRef]
  4. Solomon MZ, Gusmano MK, Maschke KJ. The ethical imperative and moral challenges of engaging patients and the public with evidence. Health Aff (Millwood). 2016;35(4):583-589. [CrossRef] [Medline]
  5. Leonelli S. Data-Centric Biology: A Philosophical Study. Chicago. University of Chicago Press; 2016.
  6. Gradinger F, Britten N, Wyatt K, Froggatt K, Gibson A, Jacoby A, et al. Values associated with public involvement in health and social care research: a narrative review. Health Expect. 2015;18(5):661-675. [FREE Full text] [CrossRef] [Medline]
  7. Martin GP. 'Ordinary people only': knowledge, representativeness, and the publics of public participation in healthcare. Sociol Health Illn. 2008;30(1):35-54. [FREE Full text] [CrossRef] [Medline]
  8. Regulation (EU) No 536/2014 of the European Parliament and of the Council of 16 April 2014 on Clinical Trials on Medicinal Products for Human use, and Repealing Directive 2001/20/EC. 2014. URL: https://health.ec.europa.eu/system/files/2016-11/reg_2014_536_en_0.pdf [accessed 2024-09-26]
  9. Pitcher N, Mitchell D, Hughes C. Guidance for writing a Cochrane plain language summary. Cochrane Handbook for Systematic Reviews of Interventions. 2022. URL: https://training.cochrane.org/handbook/current/chapter-iii-s2-supplementary-material [accessed 2024-09-26]
  10. Flory J, Wendler D, Emanuel E. Empirical issues in informed consent for research. In: Emanuel E, editor. The Oxford Textbook of Clinical Research Ethics. London. Oxford University Press; 2008:645-660.
  11. Mandava A, Pace C, Campbell B, Emanuel E, Grady C. The quality of informed consent: mapping the landscape. a review of empirical data from developing and developed countries. J Med Ethics. 2012;38(6):356-365. [FREE Full text] [CrossRef] [Medline]
  12. Joffe S, Cook EF, Cleary PD, Clark JW, Weeks JC. Quality of informed consent in cancer clinical trials: a cross-sectional survey. Lancet. 2001;358(9295):1772-1777. [CrossRef]
  13. Rosenberg A. Working toward standards for plain language summaries. Sci Editor. 2022;45(2):46-50. [FREE Full text] [CrossRef]
  14. Rosenberg A, Baróniková S, Feighery L, Gattrell W, Olsen RE, Watson A, et al. Open pharma recommendations for plain language summaries of peer-reviewed medical journal publications. Curr Med Res Opin. 2021;37(11):2015. [FREE Full text] [CrossRef] [Medline]
  15. Lobban D, Gardner J, Matheis R. Plain language summaries of publications of company-sponsored medical research: what key questions do we need to address? Curr Med Res Opin. 2021;38(2):189-200. [FREE Full text] [CrossRef]
  16. Stoll M, Kerwer M, Lieb K, Chasiotis A. Plain language summaries: a systematic review of theory, guidelines and empirical research. PLoS One. 2022;17(6):e0268789. [FREE Full text] [CrossRef] [Medline]
  17. Gainey KM, Smith J, McCaffery KJ, Clifford S, Muscat DM. What author instructions do health journals provide for writing plain language summaries? A scoping review. Patient. 2023;16(1):31-42. [FREE Full text] [CrossRef] [Medline]
  18. Uphold HS, Drahota A, Bustos TE, Crawford MK, Buchalski Z. “There’s no money in community dissemination”: a mixed methods analysis of researcher dissemination-as-usual. J Clin Transl Sci. 2022;6(1):e105. [CrossRef]
  19. Brownson RC, Fielding JE, Green LW. Building capacity for evidence-based public health: reconciling the pulls of practice and the push of research. Annu Rev Public Health. 2018;39:27-53. [FREE Full text] [CrossRef] [Medline]
  20. Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006;29(1):126-153. [CrossRef] [Medline]
  21. Tabak RG, Stamatakis KA, Jacobs JA, Brownson RC. What predicts dissemination efforts among public health researchers in the United States? Public Health Rep. 2014;129(4):361-368. [FREE Full text] [CrossRef] [Medline]
  22. McNeal DM, Glasgow RE, Brownson RC, Matlock DD, Peterson PN, Daugherty SL, et al. Perspectives of scientists on disseminating research findings to non-research audiences. J Clin Transl Sci. 2020;5(1):e61. [FREE Full text] [CrossRef] [Medline]
  23. Knoepke CE, Ingle MP, Matlock DD, Brownson RC, Glasgow RE. Dissemination and stakeholder engagement practices among dissemination & implementation scientists: results from an online survey. PLoS One. 2019;14(11):e0216971. [FREE Full text] [CrossRef] [Medline]
  24. Davies SC, Walley T, Smye S, Cotterill L, Whitty CJ. The NIHR at 10: transforming clinical research. Clin Med (Lond). 2016;16(6):501-502. [FREE Full text] [CrossRef] [Medline]
  25. Plain English summaries. National Institute for Health and Care Research. 2021. URL: https://www.nihr.ac.uk/documents/plain-english-summaries/27363 [accessed 2024-09-26]
  26. NIHR Journals Library. National Institute for Health and Care Research. URL: https://www.journalslibrary.nihr.ac.uk/#/ [accessed 2024-10-09]
  27. Wang LW, Miller MJ, Schmitt MR, Wen FK. Assessing readability formula differences with written health information materials: application, results, and recommendations. Res Social Adm Pharm. 2013;9(5):503-516. [CrossRef] [Medline]
  28. Baram-Tsabari A, Wolfson O, Yosef R, Chapnik N, Brill A, Segev E. Jargon use in Public Understanding of Science papers over three decades. Public Underst Sci. 2020;29(6):644-654. [FREE Full text] [CrossRef] [Medline]
  29. Plavén-Sigray P, Matheson GJ, Schiffler BC, Thompson WH. The readability of scientific texts is decreasing over time. eLife. 2017;6:e27725. [FREE Full text] [CrossRef]
  30. DiMascio C. Py-Readability-Metrics [Source code]. GitHub. 2019. URL: https://github.com/cdimascio/py-readability-metrics [accessed 2019-12-09]
  31. BBC. URL: https://www.bbc.com/ [accessed 2024-09-26]
  32. Rakedzon T, Segev E, Chapnik N, Yosef R, Baram-Tsabari A. Automatic jargon identifier for scientists engaging with the public and science communication educators. PLoS One. 2017;12(8):e0181742. [FREE Full text] [CrossRef] [Medline]
  33. Jargon Project. De-Jargonizer. 2023. URL: https://scienceandpublic.com/ [accessed 2020-01-31]
  34. NoamAndRoy. JargonProject [Source code]. GitHub. 2017. URL: https://github.com/NoamAndRoy/JargonProject [accessed 2024-09-26]
  35. HRCS Online. Health Research Classification System. 2023. URL: https://hrcsonline.net/ [accessed 2024-09-26]
  36. Funding Programmes. National Institute for Health and Care Research. 2024. URL: https://www.nihr.ac.uk/explore-nihr/funding-programmes/ [accessed 2024-09-26]
  37. Ennis L, Wykes T. Sense and readability: participant information sheets for research studies. Br J Psychiatry. 2016;208(2):189-194. [FREE Full text] [CrossRef] [Medline]
  38. Sharp SM. Consent documents for oncology trials: does anybody read these things? Am J Clin Oncol. 2004;27(6):570-575. [CrossRef] [Medline]
  39. Brierley G, Richardson R, Torgerson DJ. Using short information leaflets as recruitment tools did not improve recruitment: a randomized controlled trial. J Clin Epidemiol. 2012;65(2):147-154. [CrossRef] [Medline]
  40. National Journals Library - Plain language summary. National Institute for Health and Care Research. URL: https://www.journalslibrary.nihr.ac.uk/information-for-authors/manuscript-preparation/report-sections/plain-language-summary.htm [accessed 2024-09-26]
  41. Zou G. A modified poisson regression approach to prospective studies with binary data. Am J Epidemiol. 2004;159(7):702-706. [CrossRef] [Medline]
  42. The 2011 Skills for Life survey: a survey of literacy, numeracy and ICT levels in England. London, UK; 2012. URL: https://www.gov.uk/government/publications/2011-skills-for-life-survey [accessed 2024-09-26]
  43. Stricker J, Chasiotis A, Kerwer M, Günther A. Scientific abstracts and plain language summaries in psychology: a comparison based on readability indices. PLoS One. 2020;15(4):e0231160. [FREE Full text] [CrossRef] [Medline]
  44. Carvalho FA, Elkins MR, Franco MR, Pinto RZ. Are plain-language summaries included in published reports of evidence about physiotherapy interventions? Analysis of 4421 randomised trials, systematic reviews and guidelines on the Physiotherapy Evidence Database (PEDro). Physiotherapy. 2019;105(3):354-361. [CrossRef] [Medline]
  45. Anderson HL, Moore JE, Millar BC. Comparison of the readability of lay summaries and scientific abstracts published in CF research news and the journal of cystic fibrosis: recommendations for writing lay summaries. J Cyst Fibros. 2022;21(1):e11-e14. [FREE Full text] [CrossRef] [Medline]
  46. Oliffe M, Thompson E, Johnston J, Freeman D, Bagga H, Wong PKK. Assessing the readability and patient comprehension of rheumatology medicine information sheets: a cross-sectional health literacy study. BMJ Open. 2019;9(2):e024582. [FREE Full text] [CrossRef] [Medline]
  47. Hunt WTN, Sofela J, Mohd Mustapa MF, British Association of Dermatologists' Clinical Standards Unit. Readability assessment of the British Association of Dermatologists' patient information leaflets. Clin Exp Dermatol. 2022;47(4):684-691. [CrossRef] [Medline]
  48. Banić A, Fidahić M, Šuto J, Roje R, Vuka I, Puljak L, et al. Conclusiveness, linguistic characteristics and readability of Cochrane plain language summaries of intervention reviews: a cross-sectional study. BMC Med Res Methodol. 2022;22(1):240. [FREE Full text] [CrossRef] [Medline]
  49. Jelicic Kadic A, Fidahic M, Vujcic M, Saric F, Propadalo I, Marelja I, et al. Cochrane plain language summaries are highly heterogeneous with low adherence to the standards. BMC Med Res Methodol. 2016;16(1):61. [CrossRef] [Medline]
  50. Colter A. Assessing the usability of credit card disclosures. Clarity - The Journal of the International Association Promoting Plain Legal Language. 2009;62:46-52.
  51. Redish J. Readability formulas have even more limitations than Klare discusses. ACM J. Comput. Doc. 2000;24(3):132-137. [CrossRef]
  52. Zheng J, Yu H. Readability formulas and user perceptions of electronic health records difficulty: a corpus study. J Med Internet Res. 2017;19(3):e59. [FREE Full text] [CrossRef] [Medline]
  53. Baram-Tsabari A, Lewenstein BV. An instrument for assessing scientists’ written skills in public communication of science. Science Communication. 2012;35(1):56-85. [CrossRef]
  54. Longnecker N. An integrated model of science communication — more than providing evidence. JCOM. 2016;15(05):Y01. [FREE Full text] [CrossRef]
  55. Shonkoff JP, Bales SN. Science does not speak for itself: translating child development research for the public and its policymakers. Child Dev. 2011;82(1):17-32. [CrossRef] [Medline]
  56. Bales SN, Gilliam FD. Communications for Social Good. In: Patrizi P, Sherwood K, Spector A, editors. Practice Matters - The Improving Philanthropy Project. New York, NY, United States. The Foundation Center; 2004.
  57. Olesk A, Renser B, Bell L, Fornetti A, Franks S, Mannino I, et al. Quality indicators for science communication: results from a collaborative concept mapping exercise. JCOM. 2021;20(03):A06. [FREE Full text] [CrossRef]
  58. Pellegrini G. Evaluating science communication: concepts and tools for realistic assessment. In: Bucchi M, Trench B, editors. Routledge Handbook of Public Communication of Science and Technology, 3rd ed. New York, NY, USA. Routledge; 2021:305-322.
  59. Sevian H, Gonsalves L. Analysing how scientists explain their research: a rubric for measuring the effectiveness of scientific explanations. International Journal of Science Education. 2008;30(11):1441-1467. [CrossRef]
  60. Spicer S. The nuts and bolts of evaluating science communication activities. Semin Cell Dev Biol. 2017;70:17-25. [CrossRef] [Medline]
  61. Jensen E. The problems with science communication evaluation. JCOM. 2014;13(01):C04. [CrossRef]
  62. Jensen E. Evaluate impact of communication. Nature. 2011;469(7329):162. [CrossRef] [Medline]
  63. Lang I, King A, Jenkins G, Boddy K, Khan Z, Liabo K. How common is patient and public involvement (PPI)? Cross-sectional analysis of frequency of PPI reporting in health research papers and associations with methods, funding sources and other factors. BMJ Open. 2022;12(5):e063356. [FREE Full text] [CrossRef] [Medline]
  64. Wilson P, Mathie E, Keenan J, McNeilly E, Goodman C, Howe A, et al. Research with patient and public involvement: a realist evaluation – the RAPPORT study. Health Serv Deliv Res. 2015;3(38):1-176. [FREE Full text] [CrossRef] [Medline]
  65. UK standards for public involvement: better public involvement for better health and social care research. UK Public Involvement Standards Development Partnership. 2022. URL: https://drive.google.com/file/d/1U-IJNJCfFepaAOruEhzz1TdLvAcHTt2Q/view [accessed 2024-09-26]
  66. Koops van 't Jagt R, Hoeks JCJ, Jansen CJM, de Winter AF, Reijneveld SA. Comprehensibility of health-related documents for older adults with different levels of health literacy: a systematic review. J Health Commun. 2016;21(2):159-177. [CrossRef] [Medline]
  67. Kerwer M, Chasiotis A, Stricker J, Günther A, Rosman T. Straight from the scientist's mouth—plain language summaries promote laypeople's comprehension and knowledge acquisition when reading about individual research findings in psychology. Collabra: Psychology. 2021;7(1):18898. [CrossRef]
  68. Halliday M, Webster J. Language of Science 5. London. Continuum International Pub. Group; 2004.
  69. Prelli LJ. A Rhetoric of Science: Inventing Scientific Discourse. Columbia, SC. University of South Carolina Press; 1989.
  70. Schäfer MS, Füchslin T, Metag J, Kristiansen S, Rauchfleisch A. The different audiences of science communication: a segmentation analysis of the Swiss population's perceptions of science and their information and media use patterns. Public Underst Sci. 2018;27(7):836-856. [CrossRef] [Medline]
  71. OST and Wellcome Trust. Science and the public: A review of science communication and public attitudes to science in Britain. London, UK. A Joint Report by the UK Office of Science and Technology and the Wellcome Trust; 2000. URL: https://wellcomecollection.org/works/gqk2wxw6/items [accessed 2024-09-26]
  72. Ahmed S. On Being Included: Racism and Diversity in Institutional life. Durham, London. Duke University Press; 2012.
  73. Warner M. Publics and Counterpublics. Zone Books. 2005. URL: https://press.princeton.edu/books/paperback/9781890951290/publics-and-counterpublics [accessed 2024-09-26]
  74. Weingart P, Joubert M. The conflation of motives of science communication — causes, consequences, remedies. JCOM. 2019;18(03):Y01. [FREE Full text] [CrossRef]
  75. Entradas M, Bauer MW, O'Muircheartaigh C, Marcinkowski F, Okamura A, Pellegrini G, et al. Public communication by research institutes compared across countries and sciences: building capacity for engagement or competing for visibility? PLoS One. 2020;15(7):e0235191. [FREE Full text] [CrossRef] [Medline]
  76. Carver RB. Public communication from research institutes: is it science communication or public relations? JCOM. 2014;13(03):C01. [FREE Full text] [CrossRef]
  77. Dawson E. Reimagining publics and (non) participation: exploring exclusion from science communication through the experiences of low-income, minority ethnic groups. Public Underst Sci. 2018;27(7):772-786. [FREE Full text] [CrossRef] [Medline]
  78. Levin N, Leonelli S. How does one "open" science? Questions of value in biological research. Sci Technol Human Values. 2017;42(2):280-305. [FREE Full text] [CrossRef] [Medline]
  79. Maurer M, Siegel JE, Firminger KB, Lowers J, Dutta T, Chang JS. Lessons learned from developing plain language summaries of research studies. Health Lit Res Pract. 2021;5(2):e155-e161. [FREE Full text] [CrossRef] [Medline]
  80. Rubega MA, Burgio KR, MacDonald AAM, Oeldorf-Hirsch A, Capers RS, Wyss R. Assessment by audiences shows little effect of science communication training. Science Communication. 2020;43(2):139-169. [CrossRef]
  81. Kirkpatrick E, Gaisford W, Williams E, Brindley E, Tembo D, Wright D. Understanding plain English summaries. A comparison of two approaches to improve the quality of plain english summaries in research reports. Res Involv Engagem. 2017;3(1):17. [FREE Full text] [CrossRef] [Medline]
  82. Jilka S, Hudson G, Jansli SM, Negbenose E, Wilson E, Odoi CM, et al. How to make study documents clear and relevant: the impact of patient involvement. BJPsych Open. 2021;7(6):e32. [FREE Full text] [CrossRef] [Medline]
  83. Thompson Coon J, Orr N, Shaw L, Hunt H, Garside R, Nunns M, et al. Bursting out of our bubble: using creative techniques to communicate within the systematic review process and beyond. Syst Rev. 2022;11(1):56. [FREE Full text] [CrossRef] [Medline]
  84. Bredbenner K, Simon SM. Video abstracts and plain language summaries are more effective than graphical abstracts and published abstracts. PLoS One. 2019;14(11):e0224697. [FREE Full text] [CrossRef] [Medline]
  85. Martínez Silvagnoli L, Shepherd C, Pritchett J, Gardner J. Optimizing readability and format of plain language summaries for medical research articles: cross-sectional survey study. J Med Internet Res. 2022;24(1):e22122. [FREE Full text] [CrossRef] [Medline]
  86. Coleman E, O'Sullivan L, Crowley R, Hanbidge M, Driver S, Kroll T, et al. Preparing accessible and understandable clinical research participant information leaflets and consent forms: a set of guidelines from an expert consensus conference. Res Involv Engagem. 2021;7(1):31. [FREE Full text] [CrossRef] [Medline]
  87. Lewenstein B. Can we understand citizen science? JCOM. 2016;15(01):E. [FREE Full text] [CrossRef]
  88. Delfanti A. Users and peers. From citizen science to P2P science. JCOM. 2010;09(01):E. [CrossRef]
  89. Hecker S, Luckas M, Brandt M, Kikillus H, Marenbach I, Schiele B, et al. Stories Can Change The World - Citizen Science Communication in Practice. In: Hecker S, Haklay ME, Bowser A, Makuch Z, Vogel J, Bonn A, editors. Citizen Science: Innovation in Open Science, Society and Policy. London. UCL Press; 2018:445-462.
  90. Hecker S, Haklay M, Bowser A, Makuch Z, Vogel J, Bonn A. Innovation in open science, society and policy - setting the agenda for citizen science. In: Hecker S, Haklay M, Bowser A, Makuch Z, Vogel J, Bonn A, editors. Citizen Science: Innovation in Open Science, Society and Policy. London, UK. UCL Press; 2018:1-23.
  91. MacGregor S, Cooper A. Blending research, journalism, and community expertise: a case study of coproduction in research communication. Science Communication. 2020;42(3):340-368. [CrossRef]
  92. Stewart EC, Davis JS, Walters TS, Chen Z, Miller ST, Duke JM, et al. Development of strategies for community engaged research dissemination by basic scientists: a case study. Transl Res. 2023;252:91-98. [CrossRef] [Medline]
  93. Knowles SE, Allen D, Donnelly A, Flynn J, Gallacher K, Lewis A, et al. Participatory codesign of patient involvement in a learning health system: how can data-driven care be patient-driven care. Health Expect. 2022;25(1):103-115. [FREE Full text] [CrossRef] [Medline]


FRE: Flesch Reading Ease
NIHR: National Institute for Health and Care Research
PLS: plain language summary


Edited by T de Azevedo Cardoso; submitted 14.07.23; peer-reviewed by M Stoll, C Baur; comments to author 03.11.23; revised version received 04.03.24; accepted 23.09.24; published 13.01.25.

Copyright

©Iain A Lang, Angela King, Kate Boddy, Ken Stein, Lauren Asare, Jo Day, Kristin Liabo. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 13.01.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.