Review
Abstract
Background: Medical progress depends on the evaluation of new diagnostic and therapeutic interventions within clinical trials. Clinical trial recruitment support systems (CTRSS) aim to improve the recruitment process in terms of effectiveness and efficiency.
Objective: The goals were to (1) create an overview of all CTRSS reported until the end of 2013, (2) find and describe similarities in design, (3) theorize on the reasons for different approaches, and (4) examine whether projects were able to illustrate the impact of CTRSS.
Methods: We searched PubMed titles, abstracts, and keywords for terms related to CTRSS research. Query results were classified according to clinical context, workflow integration, knowledge and data sources, reasoning algorithm, and outcome.
Results: A total of 101 papers on 79 different systems were found. Most lacked details in one or more categories. Three CTRSS designs dominated: (1) systems for the retrospective identification of trial participants based on existing clinical data, typically through Structured Query Language (SQL) queries on relational databases; (2) systems that monitored an existing health information technology component for a key event whose occurrence either triggered a comprehensive eligibility test for the patient or was communicated directly to the researcher; and (3) independent systems that required a user to enter patient data into an interface to trigger an eligibility assessment. Although older systems required the treating physician to act on the patient's behalf, it is now becoming increasingly popular to offer this possibility directly to the patient.
Conclusions: Many CTRSS are designed to fit the existing infrastructure of a clinical care provider or the particularities of a trial. We conclude that the success of a CTRSS depends more on its successful workflow integration than on sophisticated reasoning and data processing algorithms. Furthermore, some of the most recent literature suggests that an increase in recruited patients and improvements in recruitment efficiency can be expected, although the former will depend on the error rate of the recruitment process being replaced. Finally, to increase the quality of future CTRSS reports, we propose a checklist of items that should be included.
doi:10.2196/jmir.3446
Introduction
Medical progress depends on the evaluation of new diagnostic and therapeutic interventions within clinical trials. The value of each clinical trial depends on the successful recruitment of patients within a limited time frame. The number of participants must be sufficiently large to allow for scientifically and statistically valid analysis. Unfortunately, many trials experience gaps between initially planned and finally achieved participant numbers or they need to prolong their recruitment period. Slow recruitment delays medical progress and leads to unnecessarily high study costs [ - ].

The main stakeholders in the recruitment process are the patient, the treating physician, the study nurse, and the principal investigator. But when it comes to the details of how responsibilities and tasks are distributed and how stakeholders interact with one another, recruitment processes start to show large variability. These specifics are influenced by a multitude of factors, including whether the trial is prospective or retrospective, the number of patients to be screened, the fraction of potential participants among the screened patients, the number of participating clinics, the urgency of recruiting a patient after discovering eligibility, the local data protection laws, the available funds, and the organization and infrastructure of the clinical institutions that conduct the trial.
Because of this variability in the recruitment processes, numerous reasons have been identified for the failure to include sufficient participants in a trial [ - ]. On the most abstract level, these are overoptimistic feasibility estimations of future eligible patient numbers [ , ], the inability to motivate physicians to approach their patients [ - ], and the inability to motivate patients to participate [ , ].

Following increased levels of patient data capture in digital systems and the advent of clinical decision support systems, the early 1990s also saw the use of computers for matching patients and trial protocols. These clinical trial recruitment support systems (CTRSS) aim to solve the issue of false feasibility estimations, to generate a positive impact on the treating physicians' enrollment efforts, and to reduce the resources required to set up a successful recruitment process. Although many CTRSS have been proposed, the problems in recruitment persist [ , ].

In this context, Cuggia et al [ ] raised the question "What significant work has been carried out toward automating patient recruitment?" and reviewed the literature published between 1998 and October 2009. They found a comparatively small number of papers related to 28 distinct CTRSS. Most of these projects had focused on the technological feasibility of the search algorithm and neglected assessments of the system's impact on recruitment in real-life scenarios. Cuggia et al concluded "that the automatic recruitment issue is still open" and that in 2009 it was still "difficult to make any strong statements about how effective automatic recruitment is, or about what makes a good decision support system for clinical trial recruitment."

Since then, CTRSS have become even more popular. Many independent institutions have tackled the challenge of improving their local recruitment processes. Large European collaborations, such as Electronic Health Records for Clinical Research (EHR4CR) [ ], and national collaborations, for example in Germany [ ], have been initiated to create information technology (IT)-supported patient recruitment architectures and platforms. For the related but broader challenge of extracting meaningful patient information from electronic health record (EHR) data, numerous publications have appeared in recent years and the term patient phenotyping has been coined [ ]. Recently, Shivade et al [ ] presented a review of phenotyping techniques. They observed "a rise in the number of studies associated with cohort identification using electronic medical records."

The rapidly growing knowledge about, and the importance of, electronic patient recruitment systems warrants a new review of the existing literature. Our objectives were to (1) create an overview of all papers published until the end of 2013, (2) find and describe similarities in CTRSS design, (3) discuss the reasons for different approaches, and (4) examine whether new projects were able to illustrate the impact of CTRSS.
Methods
Search Strategy
One of the authors (FK) searched the PubMed database with 2 queries. The first query contained keywords for publication titles and Medical Subject Headings (MeSH) terms. Because the most recent articles were not yet completely indexed with MeSH terms, a second query performed a more thorough keyword search across all fields. Neither query was limited to a specific time period:
- PubMed query 1: (“clinical trial”[Title] OR “clinical trials”[Title] OR “Clinical Trials as Topic”[MESH]) AND (“eligibility”[Title] OR “identification”[Title] OR “recruitment”[Title] OR “Patient Selection”[MESH] OR “cohort”[Title] OR “accrual”[Title] OR “enrollment”[Title] OR “enrolment” [Title] OR “screening”[Title]) AND (“electronic”[Title] OR “computer”[Title] OR “software”[Title] OR “Decision Making, Computer-Assisted”[MESH] OR “Decision Support Systems, Clinical”[Mesh] OR “Medical Records Systems, Computerized”[Mesh])
- PubMed query 2: (“clinical trial”[All Fields] OR “clinical trials”[All Fields]) AND (“eligibility”[All Fields] OR “identification”[All Fields] OR “recruitment”[All Fields] OR “accrual”[All Fields] OR “enrollment”[All Fields] OR “enrolment”[All Fields] OR “screening”[All fields]) AND (“participants”[All Fields] OR “cohort”[All fields] OR “patients”[All Fields]) AND (“electronic”[All fields] OR “computer”[All fields] OR “software”[All fields] OR “automatic”[All Fields])
Both queries were executed on January 15, 2014. After removing all duplicates from the combined result sets of both queries, FK screened titles and abstracts for the inclusion criteria. We then tried to obtain the full text of all included articles for a second screening. Finally, FK reviewed all references of the included manuscripts for additional articles. In case of uncertainty about the inclusion of an article, it was discussed with HUP for a final decision.
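For readers who wish to reproduce or update this search, both queries can be submitted programmatically to PubMed through the NCBI E-utilities interface. The sketch below is illustrative only and was not part of the original search protocol; it assumes Python with the third-party requests package and abbreviates the query strings, which should be replaced by the full queries 1 and 2 given above.

```python
# Sketch: run both PubMed queries via NCBI E-utilities and combine the result sets.
# The query strings are abbreviated placeholders for the full queries listed above.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

QUERY_1 = '("clinical trial"[Title] OR "clinical trials"[Title]) AND "recruitment"[Title] AND "electronic"[Title]'
QUERY_2 = '"clinical trials"[All Fields] AND "recruitment"[All Fields] AND "electronic"[All Fields]'

def pubmed_ids(query):
    """Return the set of PubMed IDs (PMIDs) matching a query."""
    params = {"db": "pubmed", "term": query, "retmode": "json", "retmax": 10000}
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return set(response.json()["esearchresult"]["idlist"])

# Duplicates between the two result sets are removed before title/abstract screening.
combined = pubmed_ids(QUERY_1) | pubmed_ids(QUERY_2)
print(len(combined), "unique articles")
```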
Inclusion Criteria
Our review covers primary research articles and conference proceedings on computer systems that compared patient data and eligibility criteria of a clinical trial to identify either potential participants for a given trial or suitable trials for a given patient. The system must have employed a computer to determine patient eligibility; that is, the utilization of electronically captured data was insufficient if the matchmaking process itself was done manually (eg, [ , ]). Manual processes before and after eligibility determination were otherwise accepted. Articles on the construction and processing of eligibility criteria, although closely tied to the construction and usage of CTRSS, were not part of this review (eg, [ , ]). Although technically similar, we also excluded decision support systems that identified patients for purposes other than clinical trial recruitment (eg, for diagnosing [ ] or phenotyping [ ]).

Classification
The classification of CTRSS was roughly based on that of a previous review by Cuggia et al [ ] to render results comparable with one another. The categories included (1) the clinical context or setting in which the system was deployed, (2) the manner of integration into the existing clinical or recruitment workflow, (3) the source and format of patient data and eligibility criteria, (4) the reasoning method employed to derive eligible patients, and (5) the outcome obtained by the system's application to one or more clinical trials.

Results
Included Studies
The 2 PubMed queries together yielded 1693 articles. A total of 1581 articles were removed from the literature pool based on their titles and abstracts. After removal of 8 articles that could not be obtained as full text and 21 duplicates, we arrived at 83 distinct articles, of which 60 were included in the qualitative analysis after review. In all, 5 of the excluded articles described other supportive measures for trial recruitment, 4 were deemed nonscientific (eg, commentaries), 6 described manual systems or did not clearly state the mode of eligibility determination, 3 constituted general contributions without a relation to a specific CTRSS, 3 focused on the representation of eligibility criteria in a computable format, and 2 articles dealt with other topics (eg, phenotyping, personalized medicine). We obtained 41 additional articles through references and arrived at a final pool of 101 articles [ , - ] on 79 different systems. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram [ ] shows the different phases of the article selection process.

Results Structure
Multimedia Appendix 1 shows a list of all articles grouped by system and ordered by first publication date (objective 1). It also summarizes the CTRSS characteristics according to the categories described subsequently. In the following sections on CTRSS characteristics, we identify and describe CTRSS groups with similar features (objective 2). We also speculate on environmental characteristics that led the developers to favor one approach over another (objective 3). All evidence for the impact of CTRSS on patient recruitment is presented in the Outcomes section (objective 4).
Characteristics of Included Articles
Regarding system maturity, 12 articles reported on a CTRSS concept that was not yet implemented. A total of 42 articles described a prototypical implementation, often including performance tests, but no application to a running clinical trial. Another 47 articles described fully matured systems that were used to recruit patients into at least 1 trial. The first publications on CTRSS dated back to 1990. However, there were no more than 3 publications per year until 2003. Since then, 7 articles per year were published on average, so that nearly 80% of all articles were from the past 10 years ( ).

Clinical Context and Scope of Application
CTRSS have been implemented and used in trials in a wide variety of clinical domains. Still, many systems were evaluated for only 1 trial or for trials from the same domain. Oncology (especially breast cancer) was particularly common, with 17 CTRSS in this domain. This domain may be favorable because it is research intensive, with many open trials and exceptionally large available volumes of patient data and funding. The functionalities and algorithms of the CTRSS seemed largely independent of the clinical domain. Thus, no author precluded the use of their system for clinical trials from other domains and many actually suggested it.
The accuracy of a CTRSS depends on the available patient data and its effect depends on the organizational environment in which it operates. Therefore, each CTRSS should, if possible, be evaluated for a large number of trials and at multiple sites to increase the reliability of reported results. Many authors did so: 43 articles reported on using their system for more than 1 trial (11 did not name an exact figure) and 14 CTRSS were intended for use at multiple sites. In comparison, 37 reports evaluated a CTRSS for a single trial and 62 CTRSS were used at a single institution. In all, 11 papers failed to give the number of trials their CTRSS had been evaluated or used for.
Workflow Integration
Overview
Every CTRSS has 2 points of contact with the recruitment workflow of a clinical trial. The first is the trigger that causes the system to assess the eligibility of one or more patients. The second is the communication of the assessment’s results (eg, a list of potential trial participants) to the system’s user.
Trigger
One way to trigger the eligibility assessment is to have the user or an administrator execute a manual process. Manual triggers are both the easiest to implement and the most commonly found. They are sufficient for cases in which patient data are entered into the CTRSS by the user, who can subsequently view patient eligibility in an interactive fashion. The user can be a physician [ , ] or the patient [ , , ]. Manual triggers are also sufficient for cases in which an eligibility assessment is required only once to generate a patient list, which is not expected to change during the trial's recruitment phase. The latter is generally the case for retrospective trials and feasibility studies. Typical examples include Payne et al [ ], Thadani et al [ ], and Köpcke et al [ ], who required an administrator to develop a Structured Query Language (SQL)-based query. Based on 16 years of COSTAR research queries, Murphy [ ] created the graphical interface Informatics for Integrating Biology and the Bedside (i2b2) to allow investigators to parameterize query templates themselves.

For trials that require regular re-evaluation of patient eligibility because of changing patient data over time, manual triggers are generally inefficient and are replaced by automatic triggers. Automatic triggers can start eligibility assessments periodically at given time intervals [ , ] or in reaction to particular events in the hospital information system (HIS) [ , , , ]. Time-based triggers are generally easier to implement than event-based triggers. The interval length between assessments depends on the requirements of each trial and the computing time required for an assessment. It is usually set to a value between several minutes and 1 day. For trials that require an immediate reaction to new patient data by trial staff and for trials with comparatively rare potential participants, event-based triggers are preferred. Such triggers include the availability of new data or the admission of a patient.
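To illustrate the two automatic trigger styles, the following sketch contrasts a time-based trigger (periodic re-evaluation of all active patients) with an event-based trigger (reacting to an admission message). All identifiers, including run_eligibility_assessment, are hypothetical; a real CTRSS would hook into the local HIS or its HL7 message stream.

```python
# Sketch of time-based versus event-based triggers; all identifiers are hypothetical.
import time

def run_eligibility_assessment(patient_ids):
    """Placeholder for the CTRSS reasoning step (eg, one database query per trial)."""
    ...

# Time-based trigger: re-evaluate all active patients at a fixed interval
# (typically set between several minutes and 1 day).
def periodic_trigger(get_active_patients, interval_seconds=3600):
    while True:
        run_eligibility_assessment(get_active_patients())
        time.sleep(interval_seconds)

# Event-based trigger: assess a single patient as soon as a relevant event,
# such as an HL7 ADT admission message, appears in the hospital information system.
def on_his_event(message):
    if message.get("event_type") == "ADT^A01":  # patient admission
        run_eligibility_assessment([message["patient_id"]])
```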
Communication

The results of an eligibility assessment must be communicated somehow to the CTRSS user. The primary factor of influence when choosing a mode of communication is the target user group. If patients are supposed to use the CTRSS, it is most common to offer a separate user interface that interactively displays potentially fitting trials and/or a score indicating the patient's fit with a certain trial [ , , , , , , ]. Exceptions are found if the patient is interested in future trials instead of ones that are currently recruiting. In these cases, patients enter their health data into a registry or a personal health record and they are notified by email as soon as a fitting trial is detected [ , ]. If the CTRSS has no clinical/research user (ie, the direct user is IT staff), it usually transforms the raw result of the reasoning algorithm into a patient list, which is subsequently handed out to the researcher [ , , , , , ]. This is the preferred mode of communication if eligibility assessments are only required once [ , ]. However, when the target users are either treating physicians or clinical investigators, the mode of communication also needs to accommodate data security regulations and the trial's temporal requirements. Pagers seem to be the only option if the user needs to react immediately to new patient data, such as critical laboratory values [ , ]. When time is of less importance, emails are chosen to deliver both proposals for single patients and patient sets alike [ , , , ]. A recurring scenario is that the physician or nurse is reminded of a trial during their first consultation. To achieve this, alerts or flags are placed in the EHR, which appear at a convenient moment and often allow direct evaluation of the patient's eligibility [ , ].

When coupled with simultaneous messages, automatic triggers have the disadvantage of easily initiating alerts or prompts at a time when the user is not prepared to answer them. Untimely messages will cause the receiver to ignore them. The same effect occurs for systems with a large share of false positive alerts. This alert fatigue is regularly mentioned as a problem for CTRSS efficiency and acceptance. Reported fractions of alerts that are actually reviewed by the receiver range from 25% [ ], through more than 30% to 40% [ ] and 56% [ ], to less than 70% [ ]. For Ruffin [ ], even "numerous prompts and reminders and customized requests" could not solve the problem. Additionally, Embi and Leonard [ ] found that response rates declined at a rate of 2.7% per 2-week time period.

Knowledge Representation and Data Sources
Overview
The core technical functionality of a CTRSS is the comparison of eligibility criteria with the electronically available patient data. According to Weng et al [ ], the process is characterized by 3 aspects: "the expression language for representing eligibility rules, the encoding of eligibility concepts, and the modeling of patient data." The underlying problem is that eligibility criteria are almost always given in narrative form and need to be translated into a structure that can be interpreted by the CTRSS. The same is true for the patient data, which needs to be analyzed to identify concepts that match the eligibility concepts before developing the eligibility rules themselves.

Source of Patient Data
Most authors choose the data source for their CTRSS according to availability and accessibility. Few CTRSS designs are based on a comparison of different potential data sources (eg, for timeliness or comprehensiveness). Nevertheless, the reuse of existing patient data for the purpose of recruitment is common practice: 64 CTRSS relied on data that were originally collected for other purposes. A total of 5 monitored the Health Level 7 (HL7) messages of a clinical information system, 46 read patient data directly from the EHR of the hospital or general practitioner, 12 used a data warehouse, and 1 used a clinical registry. In this order, these data sources increasingly collect and integrate patient data over time, software applications, and institutions, which makes access to the data of large patient sets comparatively easy. However, more integrated data often means the data source becomes increasingly detached from its origin as well (ie, some information is lost during processing and delays between the documented event and availability of the corresponding data grow). For some trials, such delays are unacceptable because trial staff need to be notified about specific events as soon as possible. Specialty subsystems, such as an electronic tracking board [ , ], or the messages exchanged between these systems [ , , , , ] need to be monitored directly in these cases. A total of 3 CTRSS preloaded patient characteristics from the EHR and prompted the physician to complete missing data [ , , ]. Wilcox et al [ ] conceptualized a CTRSS that integrated EHR data and the personal health record of a patient. Only 16 CTRSS made exclusive use of data that were entered directly into the system itself by the physician (n=8), the patient (n=7), or an investigator (n=1).

Terminologies
The CTRSS developer can choose the terminology for clinical concept names arbitrarily if patient data are entered only for the purpose of eligibility assessment. However, if patient data are taken from an already existing data source, most developers chose to reuse the terminologies found there. A total of 66 articles did not mention the use of any terminology. Of these, 5 performed pure free-text analysis and did not necessarily require terminologies. Of those papers that did mention the use of a specific terminology, 16 named the International Classification of Diseases (ICD). This makes sense because it is also the terminology most commonly used within EHRs. No other terminology was widely used for CTRSS. The Unified Medical Language System (UMLS) appeared in 6 publications and the Medical Entities Dictionary (MED), Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and Logical Observation Identifiers Names and Codes (LOINC) in only 3 publications each. In all, 10 further terminologies were used in only 1 CTRSS (such as Cerner Multum, National Drug Code, Hospital International Classification of Diseases Adapted, Read, NCI-Thesaurus (NCI-T), and 837 billing data) or in 2 CTRSS (such as MeSH, NCI Common Data Elements (CDE), and Current Procedural Terminology).
Intermediary Criteria Format
Terminologies are usually chosen to suit the available patient data, whereas the intermediary criteria format is strongly associated with the reasoning method of the CTRSS. SQL is the most frequently found representation of criteria logic. Unfortunately, the CTRSS literature lacks details on the representation of criteria expressions. A comparison of the eligibility criteria as given in the study protocol and their representation in the CTRSS is rare; 49 papers gave no information on the chosen format of eligibility expressions.
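As an illustration of SQL as an intermediary criteria format, two narrative criteria such as "age 50-75 years" and "no insulin therapy" might be collapsed into a single query. The schema below (tables patient and medication with an ATC-coded medication list) is hypothetical; in practice the query is bound to the local EHR or data warehouse schema.

```python
# Sketch: two narrative eligibility criteria expressed as one SQL query over a
# hypothetical schema; "connection" is assumed to expose an execute() method
# (as, eg, a sqlite3 connection does).
ELIGIBILITY_QUERY = """
    SELECT p.patient_id
    FROM patient p
    WHERE p.age_years BETWEEN 50 AND 75          -- inclusion: age 50-75 years
      AND NOT EXISTS (
            SELECT 1 FROM medication m
            WHERE m.patient_id = p.patient_id
              AND m.atc_code LIKE 'A10A%'        -- exclusion: insulins (ATC group A10A)
      )
"""

def potentially_eligible(connection):
    """Return the pre-screening list of patient identifiers for one trial."""
    return [row[0] for row in connection.execute(ELIGIBILITY_QUERY)]
```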
Translation Process
With a few exceptions, the translation process to make eligibility criteria processable by the computer seemed to be a manual one. For 51 CTRSS, the administrator was responsible for reading the trial protocol, mapping clinical concepts to the target terminology, and creating eligibility expressions. This is the most efficient process in clinical settings that generate few trials per researcher because teaching costs are minimized and experience is concentrated in 1 person. Yet, a notable fraction of the CTRSS offered the user an interface to select eligibility criteria autonomously from a small [ ] or large [ ] set of predefined criteria. Having the user translate the eligibility criteria of a trial is primarily meaningful for feasibility studies, giving a researcher the means to dynamically modify the criteria for a new trial and to instantly receive feedback on the change's influence on the expected number of participants.

Lonsdale et al [ ] proposed natural language processing (NLP) to support the translation process. They read eligibility sentences from the trial registry ClinicalTrials.gov, parsed them to retrieve logical forms, and mapped concepts to standard terminologies to generate executable Arden syntax Medical Logic Modules (MLMs). The process succeeded for 16% of all criteria from 85 randomly chosen trials [ , ]. Zhang et al [ ] and Köpcke et al [ ] proposed case-based reasoning algorithms for free-text and structured patient data, respectively. These algorithms did not require the translation of eligibility criteria into rules, but tried to determine the unknown eligibility of new patients by comparing them with a set of patients with known eligibility status.
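A minimal sketch of the case-based idea follows, assuming structured patient data encoded as numeric feature vectors: the eligibility of a new patient is estimated from the k most similar patients whose eligibility is already known. It illustrates only the principle, not the specific algorithms of Zhang et al or Köpcke et al.

```python
# Case-based reasoning sketch: estimate eligibility from the k most similar known cases.
# The feature encoding and the Euclidean distance are illustrative assumptions.
import math

def eligibility_by_neighbors(new_patient, known_cases, k=5):
    """known_cases is a list of (feature_vector, was_eligible) pairs."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(known_cases, key=lambda case: distance(case[0], new_patient))[:k]
    votes = sum(1 for _, was_eligible in nearest if was_eligible)
    return votes / k  # fraction of similar patients who were eligible
```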
Reasoning

Overview
Closely tied to the previously described CTRSS characteristics is the reasoning process itself (ie, the method to assess whether the available data for a patient suffice for the conditions set by the trial's eligibility criteria). Almost all CTRSS "perform 'pre-screening' for clinical research staff" [ ] instead of trying to determine the actual eligibility of a patient. They do not replace manual chart review, but act as a filter that limits the number of patients who require such review by selecting the most likely candidates. The presentation of reasoning details, such as a probability of eligibility or missing patient characteristics together with the screening list, can facilitate the manual screening process even further.

The dominance of relational databases for the storage of patient data entails that most CTRSS employ database queries somewhere in the reasoning process. Consequently, most CTRSS are based on an elaborate query or a set of subsequently executed queries per trial [ , , , ]. If the result set of potentially eligible patients is sufficiently accurate, no further processing is required.

Some authors demonstrated the feasibility of more exotic reasoning methods. A total of 4 CTRSS used Arden syntax to control the reasoning process [ , , , ]; 3 CTRSS employed an ontologic reasoner after transforming eligibility criteria and, in 2 cases, patient data into separate ontologies [ , , ]. However, although technically interesting, the authors failed to convey the advantages of these algorithms compared with the aforementioned simpler ones.

Dealing With Incomplete Data
Some CTRSS designers paid particular attention to missing patient data. Tu et al [ ] developed 2 methods for dealing with this problem. In their qualitative method, each criterion was attributed 1 of 5 qualities according to a patient's concrete data: patient meets the criterion, patient probably meets the criterion, no assertion possible, patient probably fails the criterion, and patient fails the criterion. Specific rules for each criterion derived one of these qualities from the patient's data or assigned default values. In their probabilistic method, a Bayesian belief network was manually constructed for each trial. The network represented variables as nodes and dependencies as links between nodes. All nodes and links were given probabilities based on legacy data or experts. If data for a variable were found, the variable was given a probability of 1 or zero; otherwise, the default probabilities were used. When all available data for a patient were retrieved, a probability for the patient's eligibility could be calculated. This probabilistic approach was applied again later by Papaconstantinou et al [ ] and Ash et al [ ]. Bhanja et al [ ] suggested that scalability as well as time and design complexities discouraged the use of probabilistic approaches.
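A minimal sketch of the qualitative idea, under the assumption of simple hand-written rules per criterion: each rule returns one of the five eligibility qualities, and missing data fall back to a default value instead of blocking the assessment. The example rule and the aggregation are hypothetical.

```python
# Sketch of qualitative handling of missing data (five eligibility qualities).
# The criterion rule, the default value, and the aggregation are hypothetical.
MEETS, PROBABLY_MEETS, UNKNOWN, PROBABLY_FAILS, FAILS = range(5)

def age_criterion(patient):
    """Criterion "age 50-75 years" with an explicit default for missing data."""
    age = patient.get("age_years")
    if age is None:
        return UNKNOWN  # default quality when the characteristic is undocumented
    return MEETS if 50 <= age <= 75 else FAILS

def assess(patient, criteria):
    """Combine per-criterion qualities; the worst quality determines the result."""
    return max(criterion(patient) for criterion in criteria)
```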
Natural Language Processing

The wish to include unstructured (ie, free-text) data could also warrant the utilization of complex reasoning algorithms. Keyword searches were often employed when no structured data elements were available [ , , , , , , ]. They could easily be added to complement queries of structured patient data [ , , , ]. Pakhomov [ ] compared a keyword search with 2 other NLP methods: naive Bayes and perceptron. Naive Bayes yielded the best sensitivity (95% vs 86% and 71% for perceptron and keyword search, respectively) and perceptron offered the best specificity (65% vs 57% and 54% for naive Bayes and keyword search, respectively). Although it performed worst of all methods, a simple keyword search has the advantages of easy implementation (no need for training data) and transparency. In a similar comparison, Zhang et al [ ] found that regular expressions outperformed a vector space method and latent semantic indexing and achieved accuracy similar to a specifically developed method called subtree match. However, they also proposed algorithms for automatic keyword and subtree generation, which could offer distinct potential for automation.
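A minimal sketch of the keyword-search approach over free-text notes: patients are flagged when any note matches a trial-specific pattern. The patterns are illustrative, and negation handling as well as the more elaborate NLP methods compared above are deliberately omitted.

```python
# Keyword/regular-expression pre-screening of free-text notes; patterns are illustrative.
import re

PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"\bmyocardial infarction\b",
    r"\btype\s*2\s*diabetes\b",
)]

def flag_patients(notes_by_patient):
    """Return the IDs of patients whose notes match at least one pattern."""
    return [patient_id for patient_id, notes in notes_by_patient.items()
            if any(pattern.search(note) for note in notes for pattern in PATTERNS)]
```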
Sensitivity-Versus-Specificity Tradeoff

Independent of the chosen reasoning method, the inclusiveness of each CTRSS is subject to the desires of its user. Ultimately, the setup of a CTRSS "requires sensitivity-versus-specificity tradeoffs" for each trial [ ]. The upper limit to specificity might be determined by the fit between available patient data and eligibility criteria, whereas its lower limit is simply determined by what the user is willing to accept ( ). The required level of sensitivity is limited by the availability of trial participants. Sensitivity should be chosen as low as possible to increase specificity and, thus, reduce recruitment workload. In practice, however, when the CTRSS is motivated by a lack of participants for a specific trial, maximum sensitivity is imperative and low specificity must be accepted [ ].
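A small worked example with assumed numbers makes the workload argument concrete: if 50 of 1000 screened patients are truly eligible, a CTRSS with 90% sensitivity and 95% specificity shortens the manual screening list from 1000 to roughly 93 patients (45 true positives plus about 48 false positives) at the price of missing 5 eligible patients.

```python
# Worked example of the sensitivity-versus-specificity tradeoff (assumed numbers).
screened, truly_eligible = 1000, 50
sensitivity, specificity = 0.90, 0.95

true_positives = sensitivity * truly_eligible                      # 45 eligible patients flagged
false_positives = (1 - specificity) * (screened - truly_eligible)  # 47.5 non-eligible patients flagged
missed = (1 - sensitivity) * truly_eligible                        # 5 eligible patients not flagged

manual_review_list = true_positives + false_positives              # ~93 charts instead of 1000
print(round(manual_review_list), round(missed))
```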
Outcome

Overview
All studies in this review shared the common goal of improving the recruitment process of clinical trials. However, the performance of a CTRSS in terms of specificity and sensitivity alone is, at best, a secondary indicator of its effect. Direct comparison with the manual recruitment process with regard to its effects on one or more of the following 3 variables should be favored: (1) the pure number of trial participants (ie, the effectiveness of the recruitment process), (2) the cost to recruit a given number of patients in terms of money and/or time (ie, the efficiency of the recruitment process), and (3) the quality of the collective of trial participants (eg, measures for selection bias and dropouts). All reported system effects were weighted according to the scientific quality of the evaluation as (1) reliable quantitative measurement, (2) quantitative measurement with insufficient description of or flawed method, or (3) survey or estimation (corresponding to A-C in , respectively).

Impact on Recruitment Effectiveness
We found 5 papers that reliably quantified differences in recruitment effectiveness between manual and CTRSS-supported recruitment. Embi et al [ ] reported a doubling of the physician enrollment rate from 3 to 6 per month, which was attributed to a concurrent significant increase in the number of referring physicians from 5 to 42. The CTRSS presented by Cardozo et al [ ] increased identification of eligible patients from 1 in 2 months to 6 in 2 months after physicians had failed to generate pager notifications in time. Herasevich et al [ ] doubled monthly enrollment rates from 37 in approximately 8.5 months to 68 in approximately 9 months in a time-critical setting. They attributed the effect to the change from imprecise clinical notes (manual process) to specific physiologic criteria (automated process) as the basis for eligibility evaluation. Beauharnais et al [ ] also doubled recruitment, in this case from 11 patients in 63 days to 20 patients in 62 days. The effect seemed to correlate with an increase in screening efficiency that similarly doubled the number of screened patients. A comparatively minor increase in recruited patients of 14% from 306 to 348 in the same week was reported by Köpcke et al [ ], who addressed pure oversight of otherwise well-organized manual recruiters. They also found that 7% of the manually included patients did not fulfill the trial's eligibility criteria.

Lane et al [ ], Tu et al [ ], and a research group from the University of South Florida [ , , , ] ran their respective CTRSS on legacy patient data and evaluated how many of the patients found potentially eligible by their system had actually been enrolled in the past. These works only showed an upper limit of CTRSS effectiveness because it was unclear whether "physicians actually missed the matches, rather than having undocumented reasons for omitting them" [ ]. Similarly, Weiner et al [ ] described an increase in the number of eligibility alerts sent to the trial investigator. Again, these can only be an upper limit for the effect of the CTRSS on enrollment because the physicians' reasons for not alerting the investigator were unclear. It is possible that the physicians judged patients unfit for the trial for reasons beyond the criteria that were considered by the CTRSS or that the patients were unwilling to participate. Séroussi and Bouaud [ ], Weng et al [ ], and Treweek et al [ ] compared the effectiveness of their CTRSS with conventional methods of recruitment by running them in parallel over the whole study period. However, the lack of enrollment numbers for a preceding phase without the CTRSS made it impossible to quantify the effect of the CTRSS. Finally, Ferranti et al [ ] reported an increase in recruitment numbers of 53%. Although we found their methodology suitable, the authors failed to discuss reasons for a sharp increase in recruitment numbers 2 months before introduction of the CTRSS.

Impact on Recruitment Efficiency
We judged 4 papers to reliably quantify differences in the efficiency of a CTRSS and the manual recruitment process. Thompson et al [ ] reduced the screening time required per eligible patient from 18 to 6 minutes (66%) in a 2-week evaluation of their CTRSS prototype. This reduction was achieved solely through a higher fraction of eligible patients among screened patients, whereas the individual screening time was actually higher for patients proposed by the CTRSS. Penberthy et al [ ] verified this circumstance for 5 additional trials, achieving screening time reductions of 95%, 34%, 86%, and 34% in 4 trials and an increase of 31% in 1 trial. Again, time savings resulted from screening fewer noneligible patients, whereas individual screening time remained unchanged. Therefore, the benefit in efficiency was found to depend on the specificity of the CTRSS. Nkoy et al [ ] decreased screening time from 2 hours to zero hours daily with no manual control of the patient list generated by their CTRSS. They translated these time savings into cost savings of US $1200 per month. Beauharnais et al [ ] halved screening time from 4 to 2 hours daily, measuring manual and CTRSS-aided recruitment over 60 subsequent days, respectively. They concluded that "the use of an algorithm is most beneficial for studies with low enrollment rates because of the long duration of the accrual period."

Following a proposition by Ohno-Machado et al [ ], the aforementioned research group from the University of South Florida [ , , , ] presented a unique approach to increase screening efficiency. By ordering the necessary clinical tests for eligibility determination in such a way that cheap but decisive tests were done first, they expected a reduction of costs by 50%. The cost of each test and the number of clinical trials and eligibility criteria that required a test's results were included in the calculation. Unfortunately, the evaluation of the methodology was based on retrospective data and it remained unclear how the costs for tests without reordering were calculated. Seyfried et al [ ] reported decreased screening time, but used the same dataset with the same test physicians for both manual and CTRSS-aided screenings (50 patients, 1-week interval). Furthermore, the CTRSS appeared to be trained with the same dataset on which it was tested later. Thadani et al [ ] and Schmickl et al [ ] did not directly measure screening time decreases, but stated that screening only the patients proposed by their respective CTRSS might have been sufficient, which would reduce the patient pool by 81% and 76%, respectively. Obviously, such a strategy would require the CTRSS to feature a sufficiently high sensitivity.

Impact on Recruitment Quality
Only Rollman et al [ ] compared the characteristics of patient sets after manual and CTRSS-aided recruitment. To this end, they observed 2 subsequent trials with similar eligibility criteria, the same recruitment period of 22 months, and the same 4 recruiting primary care physicians. They found that usage of the CTRSS significantly increased the proportion of male nonwhite patients, as well as the fraction of patients with more severe disease grades.

Discussion
Principal Findings
There are some CTRSS setups that reappear on a regular basis. Firstly, for the retrospective identification of trial participants based on existing clinical data, database queries are designed and executed once or on a regular basis. They create a list of potentially eligible patients that is printed on paper or otherwise delivered to the researcher. Secondly, for trials with short windows of opportunity for recruitment, a key event in the EHR or another health IT component is constantly monitored. Its occurrence causes a more comprehensive eligibility test for the concerned patient and is communicated to the researcher via pager. Thirdly, if no patient data exist yet, they are entered directly into the CTRSS, which assesses and communicates the patient's eligibility directly after completion of data entry. Although the treating physician was required to act for the patient in older systems, it is now becoming increasingly popular to offer this possibility directly to the patient via dedicated websites. Our review confirms the findings of Weng et al [ ], who also gave names to these CTRSS types: (1) mass screening decision support, (2) EHR-based recruitment alerts, and (3) computerized research protocol systems and Web-based patient-enabling systems (depending on the user).

The setup of a specific CTRSS is rarely chosen on a theoretical basis (ie, after an evaluation of different options for triggering the system and communicating the results). Instead, the setup is dictated mostly by the existing clinical environment, available IT tools, and the needs of a specific trial or group of researchers. Because CTRSS are a subset of clinical decision support systems (CDSS), it will generally be possible to configure existing CDSS such that they assume CTRSS functionalities (eg, [ , , ]).

Limitations of the Review
Our review is limited in that the collection of publications and the extraction of information from these publications were done by only 1 author. We reduced the impact of this approach by refraining from any interpretation of the given information in this step. Nevertheless, we cannot preclude mistakes, especially when stating that no or unclear information on a certain CTRSS characteristic was found in an article. Furthermore, all unreferenced statements made in this review reflect only the opinion of the 2 authors and are subject to discussion by the research community.
Comparison With Previous Review
Our review of 101 CTRSS publications offers the most comprehensive and up-to-date overview of CTRSS. Compared to the previous review by Cuggia et al from 2011 [ ], which analyzed 28 CTRSS from articles published before October 2009, we identified an increase in publications in the subsequent years. These more recent publications present more data on the impact of CTRSS on the recruitment process, which we discuss subsequently. Of the 7 tendencies in CTRSS research formulated by Cuggia et al, all but the exclusive reliance on structured data appear to continue. We found many CTRSS that include unstructured data as a data source, although many of them are limited to keyword searches. There are 3 additional lessons we believe can be learned from the existing research, which are described subsequently.

The lack of standards is not limited to the terminologies of the patient data source, but also applies to the computational representation of eligibility criteria. Although researchers have proposed independent languages to encode the free-text criteria of a trial's protocol (eg, ERGO [ ], EliXR [ ]), most CTRSS bind the representation of eligibility criteria in 2 ways to the specifics of their environment: (1) to the terminology of the patient data source and (2) to the chosen reasoning method. We believe independent and exchangeable eligibility criteria to be desirable because multisite trials have become the norm. However, judging from the experience so far, readily encoded criteria will need to become the norm in trial protocols before they will be adopted by CTRSS designers. Tools to help translate the criteria into SQL statements could speed up the adoption process.

The choice of the reasoning method should consider its pervasiveness (ie, how easily third parties interested in its deployment can learn to install and administrate it). Considering this, no other method seems to be as suitable for CTRSS as SQL queries on relational databases. Queries can make use of existing data from the EHR, a data warehouse (DWH), or a registry, and their administrators are likely to be experienced creators and users of such queries. Resistance to adopting and maintaining an additional query-based system is likely to be small compared to CTRSS that require additional training in one of the less widespread technologies, such as probabilistic methods or Arden syntax. Although complex reasoning methods have been shown to achieve high accuracy, it is unclear whether they lead to an increased CTRSS impact compared to queries.
Using patient care data promises efficiency and effectiveness gains for a CTRSS. But because these data are collected for other purposes, they also introduce new challenges [ ]. They are imperfect from the viewpoint of eligibility assessments because they lack uniformity (the same information can be documented differently for 2 patients), timeliness (information might be documented too late), and completeness (information might be missing for some or all patients). Uniformity and completeness problems can lead to severe selection bias and increase the cost of eligibility rule creation. For example, low uniformity necessitates an analysis of documentation habits; low completeness might enforce the use of proxy data [ ] or estimates [ ]. Timeliness must be ensured by the documentation process, which might resist change. Untimely data will severely limit the possibility to support a trial, especially in outpatient settings [ ]. Thus, unfit data can constitute a major limitation to CTRSS impact.

First Conclusions on Clinical Trial Recruitment Support Systems Impact
We suggested that the introduction of a CTRSS can be motivated by 3 expectations: (1) an increase in the number of participants for a given clinical trial or a set of trials, (2) a reduction of trial costs through decreased screening costs, and (3) the guarantee to select a representative set of patients (ie, the reduction of selection bias). Many authors do not elaborate on the shortcomings of the manual recruitment process that led to the development of their CTRSS.
Whether a CTRSS is able to increase the number of participants for a trial depends little on its setup, but rather on the deficits of the manual recruitment process it is set to replace. To begin with, an untapped group of potential participants (ie, a gap between those patients who are eligible and those who are asked to participate) needs to exist. This gap originates from some patients not being screened at all or from communication problems between the different actors of the recruitment process. Thus, a CTRSS can close this gap if it can ensure that every patient is screened and that the necessary information on the patient and the trial is available in time.
Often, a CTRSS is expected to close the gap between estimated and realized participant numbers, or that between eligible and recruited patients, or even the gap between needed and available patients. These expectations are likely to be disappointed. They disregard that many causes of insufficient recruitment are out of the scope of a CTRSS or simply cannot be addressed by an IT intervention. The most important are the willingness of the patient to participate and the motivation of physicians to participate in recruitment. The analysis of the existing recruitment process and its weaknesses should, therefore, be part of every CTRSS design process. Weng et al [ , ] give examples of how to do this. They characterize patient eligibility status in different categories, such as potentially eligible, approachable, consentable, eligible, and ultimately enrolled. By comparing the ratios of patients in each category, such a taxonomy can be used to identify the weak spots in recruitment that need to be addressed by the CTRSS.

Although the effectiveness of a CTRSS is determined by its setting, improvements in screening efficiency might be more generally achievable. Many successes in reducing screening time are based on using existing data to reliably exclude patients from the screening list (ie, the CTRSS generates no or few false negatives). In this way, the CTRSS can be used to reduce the number of patients that must be screened manually. Under the reasonable assumption that documented data are correct, but not all patient characteristics are documented, we believe CTRSS should focus on the exclusion criteria of a clinical trial to maximize efficiency gains. No final eligibility decision should be based on the trial's inclusion criteria because this can reduce the sensitivity of the CTRSS and motivate the screeners to use other screening methods in parallel. To realize efficiency gains, the CTRSS must completely replace the former screening process. This also means that the aim of increasing recruitment efficiency is opposed to the other 2 potential aims of a CTRSS, which profit from running multiple screening methods in parallel.
The potential benefit of a CTRSS on the composition of a trial’s participants has been insufficiently explored so far. Because patient demographics should be easily obtainable for all experiments comparing manual and CTRSS-aided recruitment, we suggest including them in future publications.
Future Directions
We found that most articles describe the characteristics and operating principles of their CTRSS reasonably well, but all were lacking in some regard. The intermediary criteria representation, the terminologies of the patient data, and an evaluation of the system's effects were often missing. Many authors present prototypes of their CTRSS directly after finishing the design and fail to report on its outcome and usage. We encourage more follow-up publications on the experiences with existing CTRSS, such as those by Embi et al [ ], Embi and Leonard [ ], and Dugas et al [ ]. To strengthen the comprehensibility and usefulness of future reports, we propose a list of essential elements that should be included ( ).

In their review of patient cohort identification systems in general, Shivade et al [ ] found a "growing trend in the areas of machine learning and data mining" and believe these necessary to develop generalizable solutions. For CTRSS in particular, this trend has not yet manifested in the literature. Only Zhang et al [ ] and Köpcke et al [ ] report on experiments to exploit these techniques for recruitment purposes, but both are still in a prototype stage. Machine learning promises more independence from the individual representation of patient data in a hospital and better portability. Still, more data are needed to assess advantages and disadvantages and to explore hybrid solutions.

Essential elements to be included in future CTRSS studies.
Clinical Context
- Number of trials that the CTRSS has been evaluated for
- Length of time the CTRSS has been in use
- Brief description and number of sites that use the CTRSS
Input
- Representation format of eligibility criteria in the CTRSS
- Comparison of original and computable eligibility criteria for an exemplary trial
- Summary of how well the eligibility criteria could be translated for all other trials (if any)
- Details on the translation process
- Representation of patient data in the CTRSS
- Details on the patient data source (eg, purpose, terminologies)
Working Principle
- What triggers an eligibility assessment?
- How are eligibility criteria and patient data compared?
- How is the result of the assessment communicated, when is it communicated, and to whom?
Outcome
- Recruitment process before introduction of CTRSS including perceived problems
- Recruitment process following introduction of CTRSS
- Patient numbers and time spent for each step
Current publications in the area of CTRSS are still too focused on, and sometimes limited to, technical aspects of system setup and the accuracy of its eligibility assessment. After review of most of the existing literature, we believe that the impact of a CTRSS on a given recruitment process is determined more by the context of the CTRSS (ie, the available patient data, its integration into trial and clinical workflows, and its attractiveness to users) than by its technical details. Therefore, what is needed are research projects that evaluate how a CTRSS can be embedded in different recruitment workflows, the characteristics of trials that profit from CTRSS, different designs for user interaction, and the outcomes of CTRSS in relation to these parameters.
Conclusions
We further found that differences in the setup of CTRSS are due to existing infrastructure and particularities of the recruitment process, particularly the target user of the CTRSS (eg, treating physician, study nurse) and the prior recruitment problem (eg, failure to identify, failure to communicate). Yet, many questions remain open in defining when and how CTRSS can best improve recruitment processes in clinical trials. Based on the questions that remained open in our analysis of many of the 101 articles, we propose an item list that should be considered for future publications on CTRSS design, implementation, and evaluation. This shall ensure that CTRSS setup and background, their integration in research processes, and their outcome results are sufficiently described to allow researchers to better learn from others' experiences.
Acknowledgments
We acknowledge support by Deutsche Forschungsgemeinschaft and Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) within the funding program Open Access Publishing.
Authors' Contributions
FK conducted the review, wrote, and reviewed the manuscript. HUP wrote and reviewed the manuscript.
Conflicts of Interest
None declared.
Multimedia Appendix 1
Table of publications included in qualitative analysis together with major CTRSS characteristics.
XLSX File (Microsoft Excel File), 31KB

References
- Tassignon JP, Sinackevich N. Speeding the Critical Path. Appl Clin Trials 2004 Jan;13(1):42 [FREE Full text]
- Sullivan J. Subject recruitment and retention: Barriers to success. Appl Clin Trials 2004 Apr:50-54 [FREE Full text]
- Nkoy FL, Wolfe D, Hales JW, Lattin G, Rackham M, Maloney CG. Enhancing an existing clinical information system to improve study recruitment and census gathering efficiency. AMIA Annu Symp Proc 2009;2009:476-480 [FREE Full text] [Medline]
- Hunninghake DB, Darby CA, Probstfield JL. Recruitment experience in clinical trials: literature summary and annotated bibliography. Control Clin Trials 1987 Dec;8(4 Suppl):6S-30S. [Medline]
- Gotay CC. Accrual to cancer clinical trials: directions from the research literature. Soc Sci Med 1991;33(5):569-577. [Medline]
- Ellis PM. Attitudes towards and participation in randomised clinical trials in oncology: a review of the literature. Ann Oncol 2000 Aug;11(8):939-945 [FREE Full text] [Medline]
- Collins JF, Williford WO, Weiss DG, Bingham SF, Klett CJ. Planning patient recruitment: fantasy and reality. Stat Med 1984 Dec;3(4):435-443. [Medline]
- Campbell MK, Snowdon C, Francis D, Elbourne D, McDonald AM, Knight R, STEPS group. Recruitment to randomised trials: strategies for trial enrollment and participation study. The STEPS study. Health Technol Assess 2007 Nov;11(48):iii, ix-iii,105 [FREE Full text] [Medline]
- Taylor KM, Kelner M. Interpreting physician participation in randomized clinical trials: the Physician Orientation Profile. J Health Soc Behav 1987 Dec;28(4):389-400. [Medline]
- Siminoff LA, Zhang A, Colabianchi N, Sturm CM, Shen Q. Factors that predict the referral of breast cancer patients onto clinical trials by their surgeons and medical oncologists. J Clin Oncol 2000 Mar;18(6):1203-1211. [Medline]
- Mannel RS, Walker JL, Gould N, Scribner DR, Kamelle S, Tillmanns T, et al. Impact of individual physicians on enrollment of patients into clinical trials. Am J Clin Oncol 2003 Apr;26(2):171-173. [CrossRef] [Medline]
- Rahman M, Morita S, Fukui T, Sakamoto J. Physicians' reasons for not entering their patients in a randomized controlled trial in Japan. Tohoku J Exp Med 2004 Jun;203(2):105-109 [FREE Full text] [Medline]
- Comis RL, Miller JD, Aldigé CR, Krebs L, Stoval E. Public attitudes toward participation in cancer clinical trials. J Clin Oncol 2003 Mar 1;21(5):830-835. [Medline]
- Grunfeld E, Zitzelsberger L, Coristine M, Aspelund F. Barriers and facilitators to enrollment in cancer clinical trials: qualitative study of the perspectives of clinical research associates. Cancer 2002 Oct 1;95(7):1577-1583 [FREE Full text] [CrossRef] [Medline]
- McDonald AM, Knight RC, Campbell MK, Entwistle VA, Grant AM, Cook JA, et al. What influences recruitment to randomised controlled trials? A review of trials funded by two UK funding agencies. Trials 2006;7:9 [FREE Full text] [CrossRef] [Medline]
- Toerien M, Brookes ST, Metcalfe C, de Salis I, Tomlin Z, Peters TJ, et al. A review of reporting of participant recruitment and retention in RCTs in six major journals. Trials 2009;10:52 [FREE Full text] [CrossRef] [Medline]
- Cuggia M, Besana P, Glasspool D. Comparing semi-automatic systems for recruitment of patients to clinical trials. Int J Med Inform 2011 Jun;80(6):371-388. [CrossRef] [Medline]
- Coorevits P, Sundgren M, Klein GO, Bahr A, Claerhout B, Daniel C, et al. Electronic health records: new opportunities for clinical research. J Intern Med 2013 Dec;274(6):547-560. [CrossRef] [Medline]
- Trinczek B, Köpcke F, Leusch T, Majeed RW, Schreiweis B, Wenk J, et al. Design and multicentric implementation of a generic software architecture for patient recruitment systems re-using existing HIS tools and routine patient data. Appl Clin Inform 2014;5(1):264-283 [FREE Full text] [CrossRef] [Medline]
- Hripcsak G, Knirsch C, Zhou L, Wilcox A, Melton G. Bias associated with mining electronic health records. J Biomed Discov Collab 2011;6:48-52 [FREE Full text] [CrossRef] [Medline]
- Shivade C, Raghavan P, Fosler-Lussier E, Embi PJ, Elhadad N, Johnson SB, et al. A review of approaches to identifying patient phenotype cohorts using electronic health records. J Am Med Inform Assoc 2014;21(2):221-230 [FREE Full text] [CrossRef] [Medline]
- Andersen MR, Schroeder T, Gaul M, Moinpour C, Urban N. Using a population-based cancer registry for recruitment of newly diagnosed patients with ovarian cancer. Am J Clin Oncol 2005 Feb;28(1):17-20. [Medline]
- Ashburn A, Pickering RM, Fazakarley L, Ballinger C, McLellan DL, Fitton C. Recruitment to a clinical trial from the databases of specialists in Parkinson's disease. Parkinsonism Relat Disord 2007 Feb;13(1):35-39. [CrossRef] [Medline]
- Tu SW, Peleg M, Carini S, Bobak M, Ross J, Rubin D, et al. A practical method for transforming free-text eligibility criteria into computable criteria. J Biomed Inform 2011 Apr;44(2):239-250 [FREE Full text] [CrossRef] [Medline]
- Patel CO, Weng C. ECRL: an eligibility criteria representation language based on the UMLS Semantic Network. AMIA Annu Symp Proc 2008:1084. [Medline]
- Friedlin J, Overhage M, Al-Haddad MA, Waters JA, Aguilar-Saavedra JJ, Kesterson J, et al. Comparing methods for identifying pancreatic cancer patients using electronic data sources. AMIA Annu Symp Proc 2010;2010:237-241 [FREE Full text] [Medline]
- Carroll RJ, Eyler AE, Denny JC. Naïve Electronic Health Record phenotype identification for Rheumatoid arthritis. AMIA Annu Symp Proc 2011;2011:189-196 [FREE Full text] [Medline]
- Afrin LB, Oates JC, Boyd CK, Daniels MS. Leveraging of open EMR architecture for clinical trial accrual. AMIA Annu Symp Proc 2003:16-20 [FREE Full text] [Medline]
- Ahmad F, Gupta R, Kurz M. Real time electronic patient study enrollment system in emergency room. AMIA Annu Symp Proc 2005:881 [FREE Full text] [Medline]
- Ainsworth J, Buchan I. Preserving consent-for-consent with feasibility-assessment and recruitment in clinical studies: FARSITE architecture. Stud Health Technol Inform 2009;147:137-148. [Medline]
- Ash N, Ogunyemi O, Zeng Q, Ohno-Machado L. Finding appropriate clinical trials: evaluating encoded eligibility criteria with incomplete data. Proc AMIA Symp 2001:27-31 [FREE Full text] [Medline]
- Atkinson NL, Massett HA, Mylks C, Hanna B, Deering MJ, Hesse BW. User-centered research on breast cancer patient needs and preferences of an Internet-based clinical trial matching system. J Med Internet Res 2007;9(2):e13 [FREE Full text] [CrossRef] [Medline]
- Bache R, Miles S, Taweel A. An adaptable architecture for patient cohort identification from diverse data sources. J Am Med Inform Assoc 2013 Dec;20(e2):e327-e333. [CrossRef] [Medline]
- Beauharnais CC, Larkin ME, Zai AH, Boykin EC, Luttrell J, Wexler DJ. Efficacy and cost-effectiveness of an automated screening algorithm in an inpatient clinical trial. Clin Trials 2012 Apr;9(2):198-203. [CrossRef] [Medline]
- Besana P, Cuggia M, Zekri O, Bourde A, Burgun A. Using semantic web technologies for clinical trial recruitment. : Springer; 2010 Presented at: The Semantic Web-ISWC 2010: 9th International Semantic Web Conference, ISWC 2010; November 7-11, 2010; Shanghai, China p. 7748.
- Bhanja S, Fletcher-Heath LM, Hall LO, Goldgof DB, Krischer JP. A qualitative expert system for clinical trial assignment. In: FLAIRS-98: Proceedings of the Eleventh International Florida Artificial Intelligence Research Symposium Conference.: AAAI Press; 1998 Presented at: Eleventh International Florida Artificial Intelligence Research Symposium Conference; May 18-20, 1998; Sanibel Island, Florida p. 84.
- Boland MR, Miotto R, Gao J, Weng C. Feasibility of feature-based indexing, clustering, and search of clinical trials. A case study of breast cancer trials from ClinicalTrials.gov. Methods Inf Med 2013;52(5):382-394 [FREE Full text] [CrossRef] [Medline]
- Breitfeld PP, Weisburd M, Overhage JM, Sledge G, Tierney WM. Pilot study of a point-of-use decision support tool for cancer clinical trials eligibility. J Am Med Inform Assoc 1999 Dec;6(6):466-477 [FREE Full text] [Medline]
- Breitfeld PP, Ullrich F, Anderson J, Crist WM. Web-based decision support for clinical trial eligibility determination in an international clinical trials network. Control Clin Trials 2003 Dec;24(6):702-710. [Medline]
- Butte AJ, Weinstein DA, Kohane IS. Enrolling patients into clinical trials faster using RealTime Recruiting. Proc AMIA Symp 2000:111-115 [FREE Full text] [Medline]
- Cardozo E, Meurer WJ, Smith BL, Holschen JC. Utility of an automated notification system for recruitment of research subjects. Emerg Med J 2010 Oct;27(10):786-787. [CrossRef] [Medline]
- Carlson RW, Tu SW, Lane NM, Lai TL, Kemper CA, Musen MA, et al. Computer-based screening of patients with HIV/AIDS for clinical-trial eligibility. Online J Curr Clin Trials 1995 Mar 28;Doc No 179:[3347 words; 32 paragraphs]. [Medline]
- Deshmukh VG, Meystre SM, Mitchell JA. Evaluating the informatics for integrating biology and the bedside system for clinical research. BMC Med Res Methodol 2009;9:70 [FREE Full text] [CrossRef] [Medline]
- Duftschmid G, Gall W, Eigenbauer E, Dorda W. Management of data from clinical trials using the ArchiMed system. Med Inform Internet Med 2002 Jun;27(2):85-98. [CrossRef] [Medline]
- Dugas M, Lange M, Berdel WE, Müller-Tidow C. Workflow to improve patient recruitment for clinical trials within hospital information systems - a case-study. Trials 2008;9:2 [FREE Full text] [CrossRef] [Medline]
- Dugas M, Amler S, Lange M, Gerss J, Breil B, Köpcke W. Estimation of patient accrual rates in clinical trials based on routine data from hospital information systems. Methods Inf Med 2009;48(3):263-266. [CrossRef] [Medline]
- Dugas M, Lange M, Müller-Tidow C, Kirchhof P, Prokosch HU. Routine data from hospital information systems can support patient recruitment for clinical studies. Clin Trials 2010 Apr;7(2):183-189. [CrossRef] [Medline]
- Embi PJ, Jain A, Harris CM. Physician perceptions of an Electronic Health Record-based Clinical Trial Alert system: a survey of study participants. AMIA Annu Symp Proc 2005:949 [FREE Full text] [Medline]
- Embi PJ, Jain A, Clark J, Bizjack S, Hornung R, Harris CM. Effect of a clinical trial alert system on physician participation in trial recruitment. Arch Intern Med 2005 Oct 24;165(19):2272-2277 [FREE Full text] [CrossRef] [Medline]
- Embi PJ, Jain A, Clark J, Harris CM. Development of an electronic health record-based Clinical Trial Alert system to enhance recruitment at the point of care. AMIA Annu Symp Proc 2005:231-235 [FREE Full text] [Medline]
- Embi PJ, Jain A, Harris CM. Physicians' perceptions of an electronic health record-based clinical trial alert approach to subject recruitment: a survey. BMC Med Inform Decis Mak 2008;8:13 [FREE Full text] [CrossRef] [Medline]
- Embi PJ, Leonard AC. Evaluating alert fatigue over time to EHR-based clinical trial alerts: findings from a randomized controlled study. J Am Med Inform Assoc 2012 Jun;19(e1):e145-e148 [FREE Full text] [CrossRef] [Medline]
- Fernández-Breis JT, Maldonado JA, Marcos M, Legaz-García Mdel C, Moner D, Torres-Sospedra J, et al. Leveraging electronic healthcare record standards and semantic web technologies for the identification of patient cohorts. J Am Med Inform Assoc 2013 Dec;20(e2):e288-e296. [CrossRef] [Medline]
- Ferranti JM, Gilbert W, McCall J, Shang H, Barros T, Horvath MM. The design and implementation of an open-source, data-driven cohort recruitment system: the Duke Integrated Subject Cohort and Enrollment Research Network (DISCERN). J Am Med Inform Assoc 2012 Jun;19(e1):e68-e75 [FREE Full text] [CrossRef] [Medline]
- Fink E, Hall LO, Goldgof DB, Goswami BD, Boonstra M, Krischer JP. Experiments on the automated selection of patients for clinical trials. In: Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics. 2003 Presented at: IEEE International Conference on Systems, Man, and Cybernetics; October 5-8, 2003; Washington, DC p. 4541-4545. [CrossRef]
- Fink E, Kokku PK, Nikiforou S, Hall LO, Goldgof DB, Krischer JP. Selection of patients for clinical trials: an interactive web-based system. Artif Intell Med 2004 Jul;31(3):241-254. [CrossRef] [Medline]
- Gennari JH, Reddy M. Participatory design and an eligibility screening tool. Proc AMIA Symp 2000:290-294 [FREE Full text] [Medline]
- Gennari JH, Sklar D, Silva J. Cross-tool communication: from protocol authoring to eligibility determination. Proc AMIA Symp 2001:199-203 [FREE Full text] [Medline]
- Graham AL, Cha S, Cobb NK, Fang Y, Niaura RS, Mushro A. Impact of seasonality on recruitment, retention, adherence, and outcomes in a web-based smoking cessation intervention: randomized controlled trial. J Med Internet Res 2013;15(11):e249 [FREE Full text] [CrossRef] [Medline]
- Grundmeier RW, Swietlik M, Bell LM. Research subject enrollment by primary care pediatricians using an electronic health record. AMIA Annu Symp Proc 2007:289-293 [FREE Full text] [Medline]
- Harris PA, Lane L, Biaggioni I. Clinical research subject recruitment: the Volunteer for Vanderbilt Research Program www.volunteer.mc.vanderbilt.edu. J Am Med Inform Assoc 2005 Dec;12(6):608-613 [FREE Full text] [CrossRef] [Medline]
- Heinemann S, Thüring S, Wedeken S, Schäfer T, Scheidt-Nave C, Ketterer M, et al. A clinical trial alert tool to recruit large patient samples and assess selection bias in general practice research. BMC Med Res Methodol 2011;11:16 [FREE Full text] [CrossRef] [Medline]
- Herasevich V, Pieper MS, Pulido J, Gajic O. Enrollment into a time sensitive clinical study in the critical care setting: results from computerized septic shock sniffer implementation. J Am Med Inform Assoc 2011 Oct;18(5):639-644 [FREE Full text] [CrossRef] [Medline]
- Jenders RA, Hripcsak G, Sideli RV, DuMouchel W, Zhang H, Cimino JJ, et al. Medical decision support: experience with implementing the Arden Syntax at the Columbia-Presbyterian Medical Center. Proc Annu Symp Comput Appl Med Care 1995:169-173 [FREE Full text] [Medline]
- Kamal J, Pasuparthi K, Rogers P, Buskirk J, Mekhjian H. Using an information warehouse to screen patients for clinical trials: a prototype. AMIA Annu Symp Proc 2005:1004 [FREE Full text] [Medline]
- Kho A, Zafar A, Tierney W. Information technology in PBRNs: the Indiana University Medical Group Research Network (IUMG ResNet) experience. J Am Board Fam Med 2007 Apr;20(2):196-203 [FREE Full text] [CrossRef] [Medline]
- Koca M, Husmann G, Jesgarz J, Overath M, Brandts C, Serve H. A special query tool in the hospital information system to recognize patients and to increase patient numbers for clinical trials. Stud Health Technol Inform 2012;180:1180-1181. [Medline]
- Kokku PK, Hall LO, Goldgof DB, Fink E, Krischer JP. A cost-effective agent for clinical trial assignment. In: Proceedings of the 2002 IEEE International Conference on Systems, Man & Cybernetics: bridging the digital divide: cyber-development, human progress, peace and prosperity. 2002 Presented at: 2002 IEEE International Conference on Systems, Man & Cybernetics; October 6-9, 2002; Yasmine Hammamet, Tunisia.
- Köpcke F, Kraus S, Scholler A, Nau C, Schüttler J, Prokosch HU, et al. Secondary use of routinely collected patient data in a clinical trial: an evaluation of the effects on patient recruitment and data acquisition. Int J Med Inform 2013 Mar;82(3):185-192. [CrossRef] [Medline]
- Köpcke F, Lubgan D, Fietkau R, Scholler A, Nau C, Stürzl M, et al. Evaluating predictive modeling algorithms to assess patient eligibility for clinical trials from routine data. BMC Med Inform Decis Mak 2013;13:134 [FREE Full text] [CrossRef] [Medline]
- Lane NM, Kemper CA, Fodor M, Tu SW, Lai T, Yvon M, et al. Opportunity to enhance accrual to clinical trials using a microcomputer. 1991 Presented at: International Conference On AIDS: Science Challenging AIDS; June 16-21, 1991; Florence, Italy p. 16-21.
- Lee Y, Dinakarpandian D, Katakam N, Owens D. MindTrial: an intelligent system for clinical trials. AMIA Annu Symp Proc 2010;2010:442-446 [FREE Full text] [Medline]
- Li L, Chase HS, Patel CO, Friedman C, Weng C. Comparing ICD9-encoded diagnoses and NLP-processed discharge summaries for clinical trials pre-screening: a case study. AMIA Annu Symp Proc 2008:404-408 [FREE Full text] [Medline]
- Lonsdale D, Tustison C, Parker C, Embley DW. Formulating queries for assessing clinical trial eligibility. In: Proceedings of the 11th international conference on Applications of Natural Language to Information Systems. Berlin: Springer; 2006 Presented at: NLDB’06: 11th international conference on Applications of Natural Language to Information Systems; May 31-June 2, 2006; Klagenfurt, Austria p. 82-93. [CrossRef]
- Lonsdale DW, Tustison C, Parker CG, Embley DW. Assessing clinical trial eligibility with logic expression queries. Data Knowl Eng 2008 Jul;66(1). [CrossRef]
- Majeed RW, Röhrig R. Identifying patients for clinical trials using fuzzy ternary logic expressions on HL7 messages. Stud Health Technol Inform 2011;169:170-174. [Medline]
- Marcos M, Maldonado JA, Martínez-Salvador B, Boscá D, Robles M. Interoperability of clinical decision-support systems and electronic health records using archetypes: a case study in clinical trial eligibility. J Biomed Inform 2013 Aug;46(4):676-689 [FREE Full text] [CrossRef] [Medline]
- McGregor J, Brooks C, Chalasani P, Chukwuma J, Hutchings H, Lyons RA, et al. The Health Informatics Trial Enhancement Project (HITE): Using routinely collected primary care data to identify potential participants for a depression trial. Trials 2010;11:39 [FREE Full text] [CrossRef] [Medline]
- Metz JM, Coyle C, Hudson C, Hampshire M. An Internet-based cancer clinical trials matching resource. J Med Internet Res 2005 Jul 1;7(3):e24 [FREE Full text] [CrossRef] [Medline]
- Miller JL. The EHR solution to clinical trial recruitment in physician groups. Health Manag Technol 2006 Dec;27(12):22-25. [Medline]
- Miller DM, Fox R, Atreja A, Moore S, Lee JC, Fu AZ, et al. Using an automated recruitment process to generate an unbiased study sample of multiple sclerosis patients. Telemed J E Health 2010 Feb;16(1):63-68 [FREE Full text] [CrossRef] [Medline]
- Mosis G, Koes B, Dieleman J, Stricker B, van der Lei J, Sturkenboom MC. Randomised studies in general practice: how to integrate the electronic patient record. Inform Prim Care 2005;13(3):209-213. [Medline]
- Mosis G, Dieleman JP, Stricker B, van der Lei J, Sturkenboom MC. A randomized database study in general practice yielded quality data but patient recruitment in routine consultation was not practical. J Clin Epidemiol 2006 May;59(5):497-502. [CrossRef] [Medline]
- Murphy SN, Morgan MM, Barnett GO, Chueh HC. Optimizing healthcare research data warehouse design through past COSTAR query analysis. Proc AMIA Symp 1999:892-896 [FREE Full text] [Medline]
- Murphy SN, Barnett GO, Chueh HC. Visual query tool for finding patient cohorts from a clinical data warehouse of the partners HealthCare system. Proc AMIA Symp 2000:1174 [FREE Full text] [Medline]
- Musen MA, Carlson RW, Fagan LM, Deresinski SC, Shortliffe EH. T-HELPER: automated support for community-based clinical research. Proc Annu Symp Comput Appl Med Care 1992:719-723 [FREE Full text] [Medline]
- Nalichowski R, Keogh D, Chueh HC, Murphy SN. Calculating the benefits of a Research Patient Data Repository. AMIA Annu Symp Proc 2006:1044 [FREE Full text] [Medline]
- Oberg R, Rasmussen L, Melski J, Peissig P, Starren J. Evaluation of the google search appliance for patient cohort discovery. AMIA Annu Symp Proc 2008:1104. [Medline]
- Ohno-Machado L, Parra E, Henry SB, Tu SW, Musen MA. AIDS2: a decision-support tool for decreasing physicians' uncertainty regarding patient eligibility for HIV treatment protocols. Proc Annu Symp Comput Appl Med Care 1993:429-433 [FREE Full text] [Medline]
- Ohno-Machado L, Wang SJ, Mar P, Boxwala AA. Decision support for clinical trial eligibility determination in breast cancer. Proc AMIA Symp 1999:340-344 [FREE Full text] [Medline]
- Olesen F, Kjeldsen HC, Christensen MB. Identifying patients for research in general practice. Scand J Prim Health Care 1996 Mar;14(1):62-63. [Medline]
- Pakhomov SV, Buntrock J, Chute CG. Prospective recruitment of patients with congestive heart failure using an ad-hoc binary classifier. J Biomed Inform 2005 Apr;38(2):145-153 [FREE Full text] [CrossRef] [Medline]
- Papaconstantinou C, Theocharous G, Mahadevan S. An expert system for assigning patients into clinical trials based on Bayesian networks. J Med Syst 1998 Jun;22(3):189-202. [Medline]
- Parker CG, Embley DW. Generating medical logic modules for clinical trial eligibility criteria. AMIA Annu Symp Proc 2003:964 [FREE Full text] [Medline]
- Patel C, Cimino JJ, Dolby J, Fokoue A, Kalyanpur A, Kershenbaum A, et al. Matching patient records to clinical trials using ontologies. In: Proceedings of the 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007.: Springer; 2007 Presented at: The Semantic Web: 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007; November 11-15, 2007; Busan, Korea p. 6297.
- Payne TH, Goroll AH, Morgan M, Barnett GO. Conducting a matched-pairs historical cohort study with a computer-based ambulatory medical record system. Comput Biomed Res 1990 Oct;23(5):455-472. [Medline]
- Penberthy L, Brown R, Puma F, Dahman B. Automated matching software for clinical trials eligibility: measuring efficiency and flexibility. Contemp Clin Trials 2010 May;31(3):207-217. [CrossRef] [Medline]
- Peterson KA, Delaney BC, Arvanitis TN, Taweel A, Sandberg EA, Speedie S, et al. A model for the electronic support of practice-based research networks. Ann Fam Med 2012 Dec;10(6):560-567 [FREE Full text] [CrossRef] [Medline]
- Pressler TR, Yen PY, Ding J, Liu J, Embi PJ, Payne PR. Computational challenges and human factors influencing the design and use of clinical research participant eligibility pre-screening tools. BMC Med Inform Decis Mak 2012;12:47 [FREE Full text] [CrossRef] [Medline]
- Quinn J, Durski K. A real-time tracking, notification, and web-based enrollment system for emergency department research. Acad Emerg Med 2004 Nov;11(11):1245-1248. [CrossRef] [Medline]
- Richesson RL, Lee HS, Cuthbertson D, Lloyd J, Young K, Krischer JP. An automated communication system in a contact registry for persons with rare diseases: scalable tools for identifying and recruiting clinical research participants. Contemp Clin Trials 2009 Jan;30(1):55-62 [FREE Full text] [CrossRef] [Medline]
- Rokach L, Maimon O, Averbuch M. Information retrieval system for medical narrative reports. In: Flexible Query Answering Systems: 6th International Conference, FQAS 2004, Proceedings (Lecture Notes in Computer Science / Lecture Notes in Artificial Intelligence).: Springer; 2004 Presented at: Flexible Query Answering Systems: 6th International Conference, FQAS 2004; June 24-26, 2004; Lyon, France p. 217.
- Rollman BL, Fischer GS, Zhu F, Belnap BH. Comparison of electronic physician prompts versus waitroom case-finding on clinical trial enrollment. J Gen Intern Med 2008 Apr;23(4):447-450 [FREE Full text] [CrossRef] [Medline]
- Ruffin MT, Nease DE. Using patient monetary incentives and electronically derived patient lists to recruit patients to a clinical trial. J Am Board Fam Med 2011 Oct;24(5):569-575 [FREE Full text] [CrossRef] [Medline]
- Schmickl CN, Li M, Li G, Wetzstein MM, Herasevich V, Gajic O, et al. The accuracy and efficiency of electronic screening for recruitment into a clinical trial on COPD. Respir Med 2011 Oct;105(10):1501-1506 [FREE Full text] [CrossRef] [Medline]
- Schreiweis B, Grüber G, Bergh B. First experiences in implementing a software-based support for patient recruitment at Heidelberg university hospital. Stud Health Technol Inform 2012;180:1147-1149. [Medline]
- Séroussi B, Bouaud J. Using OncoDoc as a computer-based eligibility screening system to improve accrual onto breast cancer clinical trials. Artif Intell Med 2003 Oct;29(1-2):153-167. [Medline]
- Seward J, Follansbee S, Christianson B. Development of a computer database system to assist patients with enrollment into an HIV clinical trial. 1992 Presented at: VII International Conference on AIDS and the II STD World Congress; July 19-24, 1992; Amsterdam, Netherlands.
- Seyfried L, Hanauer DA, Nease D, Albeiruti R, Kavanagh J, Kales HC. Enhanced identification of eligibility for depression research using an electronic medical record search engine. Int J Med Inform 2009 Dec;78(12):e13-e18 [FREE Full text] [CrossRef] [Medline]
- Silva MI, Abbott P, Petrucci K, Canfield K, Muir J. Improving the efficiency of patient recruitment with an automated telephone screening system in a client-server environment. Proc Annu Symp Comput Appl Med Care 1993:41-45 [FREE Full text] [Medline]
- Smith KS, Eubanks D, Petrik A, Stevens VJ. Using web-based screening to enhance efficiency of HMO clinical trial recruitment in women aged forty and older. Clin Trials 2007;4(1):102-105. [CrossRef] [Medline]
- Stell A, Sinnott R, Ajayi O. Supporting the clinical trial recruitment process through the grid. In: Proceedings of the UK e-Science All Hands Meeting 2006. Edinburgh: National e-Science Centre; 2006 Presented at: UK e-Science All Hands Meeting; September 18-21, 2006; Nottingham p. 61-68.
- Sumi E, Teramukai S, Yamamoto K, Satoh M, Yamanaka K, Yokode M. The correlation between the number of eligible patients in routine clinical practice and the low recruitment level in clinical trials: a retrospective study using electronic medical records. Trials 2013;14:426 [FREE Full text] [CrossRef] [Medline]
- Thadani SR, Weng C, Bigger JT, Ennever JF, Wajngurt D. Electronic screening improves efficiency in clinical trial recruitment. J Am Med Inform Assoc 2009 Dec;16(6):869-873 [FREE Full text] [CrossRef] [Medline]
- Thew S, Leeming G, Ainsworth J, Gibson M, Buchan I. FARSITE: evaluation of an automated trial feasibility assessment and recruitment tool. Trials 2011;12(Suppl 1):A113. [CrossRef]
- Thompson DS, Oberteuffer R, Dorman T. Sepsis alert and diagnostic system: integrating clinical systems to enhance study coordinator efficiency. Comput Inform Nurs 2003 Feb;21(1):22-6; quiz 27. [Medline]
- Treweek S, Pearson E, Smith N, Neville R, Sargeant P, Boswell B, et al. Desktop software to identify patients eligible for recruitment into a clinical trial: using SARMA to recruit to the ROAD feasibility trial. Inform Prim Care 2010;18(1):51-58. [Medline]
- Tu SW, Kemper CA, Lane NM, Carlson RW, Musen MA. A methodology for determining patients' eligibility for clinical trials. Methods Inf Med 1993 Aug;32(4):317-325. [Medline]
- Vahabzadeh M, Lin JL, Mezghanni M, Contoreggi C, Leff M. A clinical recruiting management system for complex multi-site clinical trials using qualification decision support systems. AMIA Annu Symp Proc 2007:1141. [Medline]
- Weber S, Lowe HJ, Malunjkar S, Quinn J. Implementing a real-time complex event stream processing system to help identify potential participants in clinical and translational research studies. AMIA Annu Symp Proc 2010;2010:472-476 [FREE Full text] [Medline]
- Weiner DL, Butte AJ, Hibberd PL, Fleisher GR. Computerized recruiting for clinical trials in real time. Ann Emerg Med 2003 Feb;41(2):242-246. [CrossRef] [Medline]
- Weng C, Bigger JT, Busacca L, Wilcox A, Getaneh A. Comparing the effectiveness of a clinical registry and a clinical data warehouse for supporting clinical trial recruitment: a case study. AMIA Annu Symp Proc 2010;2010:867-871 [FREE Full text] [Medline]
- Weng C, Batres C, Borda T, Weiskopf NG, Wilcox AB, Bigger JT, et al. A real-time screening alert improves patient recruitment efficiency. AMIA Annu Symp Proc 2011;2011:1489-1498 [FREE Full text] [Medline]
- Wilcox A, Natarajan K, Weng C. Using personal health records for automated clinical trials recruitment: the ePaIRing Model. Summit on Translat Bioinforma 2009;2009:136-140 [FREE Full text] [Medline]
- Yamamoto K, Sumi E, Yamazaki T, Asai K, Yamori M, Teramukai S, et al. A pragmatic method for electronic medical record-based observational studies: developing an electronic medical records retrieval system for clinical research. BMJ Open 2012;2(6) [FREE Full text] [CrossRef] [Medline]
- Zhang J, Gu Y, Liu W, Hu W, Zhao T, Mu X, et al. Automatic patient search for breast cancer clinical trials using free-text medical reports. In: Proceedings of ACM International Health Informatics Symposium, IHI 2010. 2010 Presented at: IHI ’10 Proceedings of the 1st ACM International Health Informatics Symposium; November 11-12, 2010; Arlington, VA p. 405-409. [CrossRef]
- Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009 Jul 21;6(7):e1000097 [FREE Full text] [CrossRef] [Medline]
- Weng C, Tu SW, Sim I, Richesson R. Formal representation of eligibility criteria: a literature review. J Biomed Inform 2010 Jun;43(3):451-467 [FREE Full text] [CrossRef] [Medline]
- Weng C, Wu X, Luo Z, Boland MR, Theodoratos D, Johnson SB. EliXR: an approach to eligibility criteria extraction and representation. J Am Med Inform Assoc 2011 Dec;18 Suppl 1:i116-i124 [FREE Full text] [CrossRef] [Medline]
- Hersh WR, Weiner MG, Embi PJ, Logan JR, Payne PR, Bernstam EV, et al. Caveats for the use of operational electronic health record data in comparative effectiveness research. Med Care 2013 Aug;51(8 Suppl 3):S30-S37. [CrossRef] [Medline]
Abbreviations
CDE: common data elements
CTRSS: clinical trial recruitment support systems
EHR: electronic health record
ERGO: Eligibility Rule Grammar and Ontology
HIS: hospital information system
ICD: International Classification of Diseases
LOINC: Logical Observation Identifiers Names and Codes
MED: Medical Entities Dictionary
MLM: Medical Logic Module
NCI: National Cancer Institute
NLP: natural language processing
SNOMED CT: Systematized Nomenclature of Medicine Clinical Terms
SQL: Structured Query Language
UMLS: Unified Medical Language System
Edited by G Eysenbach; submitted 07.04.14; peer-reviewed by J Luo, A Young; comments to author 28.04.14; revised version received 15.05.14; accepted 31.05.14; published 01.07.14
Copyright © Felix Köpcke, Hans-Ulrich Prokosch. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 01.07.2014.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.