Published on 06.02.09 in Vol 11, No 1 (2009): Jan-Mar

Comparing Administration of Questionnaires via the Internet to Pen-and-Paper in Patients with Heart Failure: Randomized Controlled Trial

Original Paper

1Department of Medicine, University of Toronto, Toronto, ON, Canada

2Division of General Internal Medicine, University Health Network, Toronto, ON, Canada

3Knowledge Translation Program, Li Ka Shing Knowledge Institute, Toronto, ON, Canada

4Department of Public Health Sciences, Faculty of Medicine, University of Toronto, Toronto, ON, Canada

5Division of Cardiology, University Health Network, Toronto, ON, Canada

6School of Nursing, Ryerson University, Toronto, ON, Canada

Corresponding Author:

Robert C Wu, MD, MSc

Toronto General Hospital

200 Elizabeth Street, 14EN-222

Toronto, ON M5G 2C4

Canada

Phone: +1 416 340 4567

Fax: +1 416 595 5826

Email: robert.wu@uhn.on.ca


Abstract

Background: The use of the Internet to administer questionnaires has many potential advantages over pen-and-paper administration. Yet it is important to validate Internet administration, as most questionnaires were initially developed and validated for pen-and-paper delivery. While some questionnaires have been validated for use over the Internet, such validation has predominantly involved the healthy general population. To date, information is lacking on the validity of questionnaires administered over the Internet in patients with chronic diseases such as heart failure.

Objectives: To determine the validity of three heart failure questionnaires administered over the Internet compared to pen-and-paper administration in patients with heart failure.

Methods: We conducted a prospective randomized study using a test-retest design to compare Internet administration with pen-and-paper administration of three heart failure questionnaires in patients recruited from a heart failure clinic in Toronto, Ontario, Canada: the Kansas City Cardiomyopathy Questionnaire (KCCQ), the Minnesota Living with Heart Failure Questionnaire (MLHFQ), and the Self-Care of Heart Failure Index (SCHFI).

Results: Of the 58 subjects enrolled, 34 completed all three questionnaires. The mean differences and confidence intervals for the summary scores of the KCCQ, MLHFQ, and SCHFI were 1.2 (CI -1.5 to 4.0; scale 0 to 100), 4.0 (CI -1.98 to 10.04; scale 0 to 105), and 10.1 (CI 1.18 to 19.07; scale 66.7 to 300), respectively.

Conclusions: Internet administration of the KCCQ appears to be equivalent to pen-and-paper administration. For the MLHFQ and SCHFI, we were unable to demonstrate equivalence. Further research is necessary to determine if the administration methods are equivalent for these instruments.

J Med Internet Res 2009;11(1):e3

doi:10.2196/jmir.1106

Keywords



Introduction

Using the Internet to help manage patients with heart failure may improve quality of life and reduce health care costs [1-5]. Internet-based disease management warrants further evaluation, and its evaluation may be facilitated by using the Internet to administer questionnaires. Indeed, Internet administration of questionnaires may have advantages over pen-and-paper administration, including being easier for participants to complete, improving completeness of data, and eliminating the data entry errors that occur when paper questionnaires are transcribed [6,7]. However, most questionnaires have been developed and validated for pen-and-paper administration, and there may be important differences between pen-and-paper and Internet administration that affect data quality [8]. Responses to Internet questionnaires may differ from those to pen-and-paper questionnaires because of issues such as a participant’s computer anxiety or differences in how the questionnaire is displayed on a participant’s computer [9]. It should not be assumed that Internet administration yields results as valid as traditional administration, and it has been recommended that each questionnaire be validated for Internet administration [7,10].

Some data exist comparing administration of questionnaires via the Internet with pen-and-paper administration. Overall, these data suggest that Internet administration is associated with lower completion rates but less missing data than traditional administration [6]. There is also some evidence on whether Internet administration yields participant responses similar to pen-and-paper administration: the two modes appear to be equivalent for quality-of-life measures in adolescents, health-related questionnaires completed by Internet volunteers, and a trauma survey in healthy college students [11-13]. However, to date, there are few data on the equivalence of responses in patients with heart failure or other chronic, complex medical conditions. Patients with heart failure differ from the participants in previous samples in that they are older and have more comorbidities [14].

Since improving quality of life is recognized as one of the main goals of managing heart failure [15], validating questionnaires that assess this outcome is important. We hypothesized that Internet administration would provide results similar to those of pen-and-paper administration in a cohort of patients with heart failure. We tested this hypothesis by evaluating the equivalence of three heart failure questionnaires using a test-retest study design.


Methods

Study Design

This was a prospective trial comparing pen-and-paper administration to Internet administration using a classic test-retest design. Between June 2006 and May 2007, we randomized participants to first complete either the pen-and-paper or the Internet questionnaires. We then retested participants two weeks later with the alternate method of administration. A two-week interval was considered short enough to minimize clinical change yet long enough to reduce recall bias.

Participants

We enrolled patients from the Heart Function Clinic at Toronto General Hospital, University Health Network, Toronto, Ontario, Canada. The Heart Function Clinic is a tertiary care, multidisciplinary heart failure clinic. Patients were eligible for inclusion in the study if they were diagnosed with heart failure, aged 18 years or older, able to access the Internet, able to read and comprehend English, and able to provide informed consent. Participants were given information describing the study at the time of their clinic appointment. For those people interested in participating, the research associate initiated the process of informed consent. Ethics approval was obtained from the research ethics board at the University Health Network. Since this trial did not have an intervention, it was not registered with a randomized trial registry.

Randomization and Allocation

A computer-generated randomization schedule was prepared by the study biostatistician and then stored and securely concealed until allocation was assigned. For patients meeting the inclusion criteria and providing informed consent, the research assistant assigned the next allocation in the schedule, which determined whether the patient completed the pen-and-paper or the Internet version first. Blinding of participants and the research assistant was not possible due to the study design.

Data Collection

Participants randomized to the pen-and-paper version first either completed the questionnaires in the clinic or completed them at home and returned them by mail. Participants randomized to the Internet version first completed the questionnaires online, either at a computer in the clinic or at home. Two weeks later, participants were retested by the alternate method. Email reminders were sent after one week to participants who had not completed a given set of questionnaires.

Instruments

We administered the following surveys, none of which had been previously validated for use on the Internet:

  • Kansas City Cardiomyopathy Questionnaire (KCCQ) [16]. The KCCQ consists of 23 items measuring the impact of heart failure. Including the overall summary score, there are 10 summary scores measuring the dimensions of a patient’s physical function, symptoms, social limitation, self-efficacy, and quality of life. The overall summary score ranges from 0 to 100 with higher scores representing better quality of life. The KCCQ has been validated, used in large randomized controlled trials, and found to be highly responsive [16-19]. A change of over 5 points on the KCCQ summary score is considered to be a clinically significant change in heart failure status [17].
  • Minnesota Living with Heart Failure Questionnaire (MLHFQ) [20]. The MLHFQ is a questionnaire that provides a patient’s self-assessment of how heart failure affects his or her daily life. It consists of 21 items, each with the same 0 - 5 Likert scale. The range of scores is 0 - 105 with higher scores representing worse quality of life. Subscores include physical and emotional dimensions. The MLHFQ has been validated and is commonly used as a measure of health-related quality of life of heart failure patients in large randomized controlled trials [21-23]. The minimal clinically important difference is considered to be 5 - 7 points on the total score [24,25].
  • Self-Care of Heart Failure Index (SCHFI) [26]. The SCHFI is a 15-item questionnaire measuring self-care. It consists of three subscales: management, maintenance, and self-confidence. The range of scores for each subscale is 16.7 - 100, 25 - 100, and 25 - 100, respectively, with higher scores representing more self-care. The range of the summary scale is 66.7 - 300. While it has been validated, the minimal clinically important difference is not yet known [26].

Outcomes

Our primary outcome of interest was the difference in scores between Internet and pen-and-paper administration of the main summary scores for each of the three instruments. Outcomes of secondary interest were the differences in the subscores of the three questionnaires and also whether the order of administration affected participants’ responses.

Sample Size

As the KCCQ has been found to be more responsive than the MLHFQ [16], this instrument was used to calculate the sample size. In the original validation of the KCCQ, the authors performed a test-retest validation over a 3-month period, during which the mean clinical summary score changed by -2.1 in “stable patients” with a P value of .36, equating to a standard deviation of 14.2 [16]. Assuming that half of the observed variance was true change and half was due to measurement error, the standard deviation of change scores due to measurement error would be 10.01. We assumed that a change of more than 5 points on the KCCQ was clinically significant. Using an alpha error of .05 and a beta error of .20, we calculated the desired sample size for an equivalence study to be 35 subjects per group and then assumed two groups, resulting in a total of 70 subjects. In retrospect, this calculation likely overestimated the required sample size, since the analysis was based on paired differences within a single group [27]; thus, the true sample size required was 35 subjects. Due to slower than expected enrollment, the study was terminated at one year, before the planned enrollment of 70 subjects was completed.
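For readers who wish to retrace the arithmetic, the sketch below is an approximate reconstruction of this calculation using a normal-approximation formula for a paired two one-sided tests (TOST) design with an assumed true difference of zero. It is not the authors' original calculation, but under these assumptions it reproduces the figure of roughly 35 paired subjects.

```python
# Hypothetical reconstruction of the sample-size calculation (not the authors' code).
# Assumes a paired equivalence (TOST) design with a true difference of zero,
# a measurement-error SD of about 10 points, and an equivalence margin of 5 points.
from math import ceil, sqrt
from scipy.stats import norm

sd_change = 14.2                 # SD of 3-month change scores in "stable" patients [16]
sd_error = sd_change / sqrt(2)   # half of the observed variance attributed to measurement error (~10.0)
margin = 5.0                     # clinically significant change on the KCCQ summary score
alpha, power = 0.05, 0.80
beta = 1 - power

# Normal-approximation sample size for a paired TOST with true difference zero:
#   n = (z_{1-alpha} + z_{1-beta/2})^2 * (sd / margin)^2
n = (norm.ppf(1 - alpha) + norm.ppf(1 - beta / 2)) ** 2 * (sd_error / margin) ** 2
print(ceil(n))  # approximately 35 paired subjects
```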

Analysis

Mean paired differences between delivery methods were calculated for the summary scores and subscores of each of the three questionnaires. To determine whether the Internet and pen-and-paper administration methods were equivalent, we calculated one-sided confidence intervals. Because an equivalence hypothesis is tested as a pair of one-sided hypothesis tests, we reported one-sided confidence intervals, which can be more informative than P values. Given that an acceptable equivalence margin is not precisely known for most of the scales considered, the confidence interval approach provides more detailed information about how close the results of the two administration methods are [28,29]. For the KCCQ and MLHFQ, the administration methods were considered equivalent if the one-sided confidence limits fell within the minimal clinically important difference. Since the minimal clinically important difference is not known for the SCHFI, the mean paired difference and one-sided confidence intervals were calculated to provide information about equivalence.
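As an illustration of this approach, the following minimal sketch (with invented data, not the authors' analysis code) computes the mean paired difference and its lower and upper one-sided 95% confidence limits and checks them against an assumed equivalence margin.

```python
# Minimal sketch of the equivalence analysis (not the authors' code; data are made up).
import numpy as np
from scipy import stats

def paired_one_sided_limits(internet_scores, paper_scores, level=0.95):
    """Mean paired difference with its lower and upper one-sided confidence limits."""
    d = np.asarray(internet_scores, float) - np.asarray(paper_scores, float)
    n = d.size
    mean = d.mean()
    se = d.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(level, df=n - 1)
    return mean, mean - t_crit * se, mean + t_crit * se

# Illustrative paired KCCQ overall summary scores (hypothetical):
internet = [72, 65, 80, 71, 68, 75, 90, 62]
paper    = [70, 66, 78, 73, 66, 74, 88, 60]
mean_diff, lower, upper = paired_one_sided_limits(internet, paper)

margin = 5  # assumed minimal clinically important difference for the KCCQ
equivalent = -margin < lower and upper < margin
print(f"mean diff {mean_diff:.2f}, one-sided limits ({lower:.2f}, {upper:.2f}), equivalent: {equivalent}")
```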

To determine whether the order of administration affected responses, we compared the paired differences in summary scores and subscores between the two order-of-administration groups using a t test.
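The exact formulation of this test is not specified in the text; the sketch below assumes an unpaired t test comparing the paired differences of the paper-first and Internet-first groups, with invented data.

```python
# Sketch of the order-of-administration comparison (hypothetical data, not the authors' code).
from scipy import stats

# Paired (Internet minus paper) differences, grouped by which mode was completed first:
paper_first_diffs = [4.0, -2.0, 7.0, 1.0, 3.0, 5.0]
internet_first_diffs = [-3.0, 0.0, -5.0, 2.0, -1.0, -2.0]

t_stat, p_value = stats.ttest_ind(paper_first_diffs, internet_first_diffs)
print(f"t = {t_stat:.2f}, P = {p_value:.2f}")
```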


Results

From the start of the study in June 2006 until its completion in May 2007, 58 participants were enrolled. Of these, 28 received the paper version first and 30 received the Internet version first (Figure 1). The average age was 51 years (range 24 to 80) (Table 1).

Figure 1. Flow of study participants

Table 1. Demographics of participants
                                 Paper First    Internet First    Completed Both    Total
n                                28             30                34                58
Age in Years (SD)                50 (13.3)      52 (15.1)         49 (14.2)         51 (14.2)
Female                           7 (25%)        12 (40%)          11 (32%)          19 (33%)
Highest education achieved
  Some High School               0 (0%)         0 (0%)            0 (0%)            0 (0%)
  High School Graduate           7 (25%)        5 (17%)           7 (21%)           12 (21%)
  Some University/College        4 (14%)        4 (13%)           6 (18%)           8 (14%)
  University/College Graduate    14 (50%)       19 (63%)          19 (56%)          33 (57%)
  Post-graduate                  1 (4%)         1 (3%)            1 (3%)            2 (3%)
  Undetermined                   2 (7%)         1 (3%)            1 (3%)            3 (5%)

There were 34 participants who completed both Internet and pen-and-paper questionnaires. Of these 34 subjects, 18 completed paper questionnaires first, and 16 completed Internet questionnaires first. There were 4 participants who completed Internet questionnaires but did not complete pen-and-paper questionnaires. Conversely, 2 completed pen-and-paper questionnaires but did not complete Internet questionnaires.

The summary scores and subscores for the KCCQ, MLHFQ, and SCHFI are shown in Table 2. For the KCCQ, the one-sided confidence limits of both the overall and the clinical summary scores fell within the equivalence margin of 5 points, demonstrating that the Internet and pen-and-paper versions are equivalent. For the MLHFQ, the one-sided 95% confidence limit exceeded the minimal clinically important difference of 5 - 7 points. For the SCHFI, the one-sided confidence intervals were wide.

Table 2. Paired differences and one-sided confidence intervals for the overall and sub-domain scores for the three questionnaires
                           Internet             Paper                Paired Difference     95% One-Sided CI
                           n, Mean (SD)         n, Mean (SD)         n, Mean (SD)          Lower to Upper
KCCQ
  Overall Summary Score    38, 71.8 (19.9)      36, 70.1 (22.0)      34, 1.2 (9.5)         -1.5 to 4.0
  Clinical Summary Score   38, 78.0 (18.3)      36, 77.9 (19.1)      34, -0.1 (13.7)       -4.1 to 3.8
  Physical Limitation(a)   37, 75.5 (23.4)      36, 75.7 (22.6)      33, -0.76 (21.8)      -7.19 to 5.68
  Symptom Stability        38, 54.6 (16.3)      36, 55.6 (19.0)      34, -0.74 (17.9)      -5.94 to 4.47
  Symptom Frequency        38, 79.8 (19.4)      36, 78.2 (20.9)      34, 1.96 (13.3)       -1.91 to 5.83
  Symptom Burden           38, 80.9 (19.4)      36, 81.9 (18.4)      34, -1.47 (13.1)      -5.26 to 2.32
  Total Symptom Score      38, 80.4 (18.3)      36, 80.1 (19.0)      34, 0.25 (12.0)       -3.23 to 3.72
  Self-Efficacy            38, 83.9 (17.4)      36, 82.3 (21.6)      34, 1.10 (15.8)       -3.48 to 5.69
  QoL                      38, 63.8 (23.7)      36, 60.2 (27.1)      34, 2.45 (13.1)       -1.34 to 6.24
  Social Limitation(a)     37, 66.1 (27.2)      36, 64.3 (29.9)      33, 2.21 (13.1)       -1.66 to 6.08
MLHFQ
  Overall                  38, 39.3 (25.6)      36, 36.4 (26.3)      34, 4.0 (20.70)       -1.98 to 10.04
  Physical                 38, 18.5 (11.8)      36, 15.3 (11.5)      34, 3.8 (8.65)        1.28 to 6.31
  Emotional                38, 7.1 (6.0)        36, 8.1 (6.5)        34, -0.88 (5.98)      -2.62 to 0.85
SCHFI
  Overall                  36, 224.2 (34.9)     34, 215.7 (31.5)     31, 10.1 (29.4)       1.18 to 19.1
  Maintenance              38, 73.4 (11.6)      36, 71.0 (12.4)      34, 2.8 (8.3)         0.39 to 5.20
  Management(a)            37, 78.0 (18.6)      34, 73.3 (16.6)      31, 5.7 (19.2)        -0.21 to 11.5
  Confidence(a)            36, 71.4 (17.4)      35, 71.3 (18.0)      32, 1.4 (15.6)        -3.31 to 6.05

(a) Note that the sample size for some subscores is less than the total sample size due to different responses, not due to missing data.

With respect to order of administration, Table 3 summarizes the mean paired differences between Internet and pen-and-paper administration according to which mode was completed first for the three questionnaires. P values were not adjusted for multiple testing. We observed no significant differences attributable to the order of administration.

Table 3. Effect of order of administration on mean paired differences for the three questionnaires
                           Paper First           Internet First        P value
                           n, Mean (SD)          n, Mean (SD)
KCCQ
  Overall Summary Score    18, 4.0 (9.1)         16, -2.0 (9.2)        .07
  Clinical Summary Score   18, 2.9 (12.5)        16, -3.5 (14.5)       .18
MLHFQ
  Overall                  18, 0.9 (14.0)        16, 7.5 (26.4)        .38
  Physical                 18, 3.3 (6.3)         16, 4.3 (10.9)        .76
  Emotional                18, -2.2 (5.5)        16, 0.6 (6.3)         .17
SCHFI
  Overall                  17(a), 15.8 (30.9)    14, 3.3 (26.8)        .24
  Maintenance              18, 3.3 (8.0)         16, 2.2 (8.8)         .69
  Management               17(a), 7.1 (17.2)     14, 3.9 (21.9)        .66
  Confidence               18, 4.9 (19.0)        14, -3.1 (8.4)        .12

(a) Note that the sample size for some subscores is less than the total sample size due to different responses, not due to missing data.

To determine whether there was a true clinical change over the test-retest interval, we examined responses to the KCCQ symptom stability question: “Compared with 2 weeks ago, have your symptoms of heart failure (shortness of breath, fatigue or ankle swelling) changed?” Of the respondents, 79% (n = 27) reported no change or no symptoms, 15% (n = 5) reported slight changes, and 6% (n = 2) reported that their symptoms were much better.


Discussion

Principal Results

In patients with heart failure, we found that Internet administration was equivalent to pen-and-paper administration for the Kansas City Cardiomyopathy Questionnaire, a questionnaire that is known to be valid and responsive, as well as an independent predictor of poor prognosis [18,19,30].

We were unable to show that Internet administration was equivalent to pen-and-paper administration for the Minnesota Living with Heart Failure Questionnaire and the Self-Care of Heart Failure Index. The MLHFQ was not originally intended to be self-administered; rather, the intention was that research personnel would administer it. This may have affected both pen-and-paper and Internet responses. Indeed, it has been found that who administers a questionnaire (ie, whether it is self-administered or administered by an interviewer) may have a greater effect on responses than how it is administered [8]. While the SCHFI had a larger absolute mean difference and wider confidence intervals than both the MLHFQ and KCCQ, this is likely attributable mostly to the greater range of its summary scale. For the SCHFI, further research to establish the minimal clinically important difference would help to determine whether the delivery methods are indeed equivalent.

Limitations

There were several study limitations of note. First, enrollment was slow, and after one year of recruitment we did not achieve our desired sample size. Our sample was also much smaller than those of previous validation studies, which may reflect the fact that we studied people with a chronic disease rather than healthy populations [11-13]. Nevertheless, because our original calculation overestimated the required sample size, we still had sufficient power to show equivalence for the KCCQ. Second, requiring Internet access may have biased selection toward participants who were highly educated and relatively young. Indeed, the average age of our sample was 51 years, much younger than the average age of 72 years among patients admitted to our hospital with heart failure [31]. With respect to level of education, 60% of those enrolled had completed a university or college degree, compared with 52% possessing the same level of education in the general population of our province [32]. Third, survey completion was an issue: of all who consented and were enrolled in the study, only 58% completed all parts. However, this is similar to other studies comparing pen-and-paper to Internet administration [33]. Finally, we examined three questionnaires but did not randomize the order in which they were administered. While our design is similar to other evaluations of Internet questionnaires [13,33], bias may have been introduced because questionnaires administered last may be less valid due to participant fatigue, which increases the chance of inaccurate answers and may produce differences in test-retest scores. The order of the questionnaires was KCCQ, SCHFI, and MLHFQ on paper and SCHFI, MLHFQ, and KCCQ on the Internet. Given this ordering, fatigue effects would be least for the SCHFI, moderate for the KCCQ, and greatest for the MLHFQ. We are reassured that the KCCQ was still found to be equivalent despite any bias from fatigue.

Comparison With Prior Work

Previous literature suggests that pen-and-paper administration of questionnaires is equivalent to Internet administration [12,13,33]. To date, these studies have been limited to healthy, younger populations. This study adds to the literature, demonstrating the equivalence between pen-and-paper administration and Internet administration for the KCCQ in patients with heart failure.

Summary

In summary, Internet administration of the KCCQ appears to be equivalent to pen-and-paper administration. For the MLHFQ and SCHFI, we were unable to demonstrate equivalence, and further research is necessary to determine if the administration methods are equivalent for these instruments.

Our research suggests that one cannot presume equivalency between results from the same questionnaire administered over the Internet and by the pen-and-paper method in individuals with chronic disease. Therefore, it is important that such questionnaires are validated before being used online. Future research should confirm these findings and examine why such differences exist.

Acknowledgments

We would like to thank the University of Toronto Faculty of Medicine for the Dean’s Fund New Staff Grant which provided funding support for this project.

Conflicts of Interest

None declared.

  1. Artinian NT, Harden JK, Kronenberg MW, Vander Wal JS, Daher E, Stephens Q, et al. Pilot study of a Web-based compliance monitoring device for patients with congestive heart failure. Heart Lung 2003;32(4):226-233. [Medline] [CrossRef]
  2. Jackson RA. Internet and telephone-based congestive heart failure program as effective as and cheaper than traditional one, study says. Rep Med Guidel Outcomes Res 2001 Feb 22;12(4):9-10, 12. [Medline]
  3. Kashem A, Cross RC, Santamore WP, Bove AA. Management of heart failure patients using telemedicine communication systems. Curr Cardiol Rep 2006 May;8(3):171-179. [Medline] [CrossRef]
  4. Kashem A, Droogan MT, Santamore WP, Wald JW, Marble JF, Cross RC, et al. Web-based Internet telemedicine management of patients with heart failure. Telemed J E Health 2006 Aug;12(4):439-447. [Medline] [CrossRef]
  5. Ross SE, Moore LA, Earnest MA, Wittevrongel L, Lin CT. Providing a web-based online medical record with electronic communication capabilities to patients with congestive heart failure: randomized trial. J Med Internet Res 2004 May 14;6(2):e12 [FREE Full text] [Medline] [CrossRef]
  6. Kongsved SM, Basnov M, Holm-Christensen K, Hjollund NH. Response rate and completeness of questionnaires: a randomized study of Internet versus paper-and-pencil versions. J Med Internet Res 2007;9(3):e25 [FREE Full text] [Medline] [CrossRef]
  7. Coles ME, Cook LM, Blake TR. Assessing obsessive compulsive symptoms and cognitions on the internet: evidence for the comparability of paper and Internet administration. Behav Res Ther 2007 Sep;45(9):2232-2240 Epub 2007 Jan 12. [Medline] [CrossRef]
  8. Bowling A. Mode of questionnaire administration can have serious effects on data quality. J Public Health (Oxf) 2005 Sep;27(3):281-291 [FREE Full text] [Medline] [CrossRef]
  9. Schulenberg SE, Yutrzenka BA. Ethical issues in the use of computerized assessment. Computers in Human Behavior 2004;20(4):477-490. [CrossRef]
  10. Buchanan T. Internet-based questionnaire assessment: appropriate use in clinical contexts. Cogn Behav Ther 2003;32(3):100-109. [Medline] [CrossRef]
  11. Raat H, Mangunkusumo RT, Landgraf JM, Kloek G, Brug J. Feasibility, reliability, and validity of adolescent health status measurement by the Child Health Questionnaire Child Form (CHQ-CF): internet administration compared with the standard paper version. Qual Life Res 2007 May;16(4):675-685 [FREE Full text] [Medline] [CrossRef]
  12. Ritter P, Lorig K, Laurent D, Matthews K. Internet versus mailed questionnaires: a randomized comparison. J Med Internet Res 2004 Sep 15;6(3):e29 [FREE Full text] [Medline] [CrossRef]
  13. Fortson BL, Scotti JR, Del Ben KS, Chen YC. Reliability and validity of an Internet traumatic stress survey with a college student sample. J Trauma Stress 2006 Oct;19(5):709-720. [Medline] [CrossRef]
  14. Ho KK, Pinsky JL, Kannel WB, Levy D. The epidemiology of heart failure: the Framingham Study. J Am Coll Cardiol 1993 Oct;22(4 Suppl A):6A-13A. [Medline]
  15. Liu P, Arnold JM, Belenkie I, Demers C, Dorian P, Gianetti N, et al; Canadian Cardiovascular Society. The 2002/3 Canadian Cardiovascular Society consensus guideline update for the diagnosis and management of heart failure. Can J Cardiol 2003 Mar 31;19(4):347-356. [Medline]
  16. Green CP, Porter CB, Bresnahan DR, Spertus JA. Development and evaluation of the Kansas City Cardiomyopathy Questionnaire: a new health status measure for heart failure. J Am Coll Cardiol 2000 Apr;35(5):1245-1255. [Medline] [CrossRef]
  17. Spertus J, Peterson E, Conard MW, Heidenreich PA, Krumholz HM, Jones P, et al; Cardiovascular Outcomes Research Consortium. Monitoring clinical changes in patients with heart failure: a comparison of methods. Am Heart J 2005 Oct;150(4):707-715. [Medline] [CrossRef]
  18. Eurich DT, Johnson JA, Reid KJ, Spertus JA. Assessing responsiveness of generic and specific health related quality of life measures in heart failure. Health Qual Life Outcomes 2006;4(1):89 [FREE Full text] [Medline] [CrossRef]
  19. Spertus JA, Tooley J, Jones P, Poston C, Mahoney E, Deedwania P, et al. Expanding the outcomes in clinical trials of heart failure: the quality of life and economic components of EPHESUS (EPlerenone's neuroHormonal Efficacy and SUrvival Study). Am Heart J 2002 Apr;143(4):636-642. [Medline] [CrossRef]
  20. Rector TS, Kubo SH, Cohn JN. Patients' self-assessment of their congestive heart failure. Part II: Content, reliability, and validity of a new measure - the Minnesota Living with Heart Failure questionnaire. Heart Failure 1987;Oct/Nov:198-209.
  21. Gorkin L, Norvell NK, Rosen RC, Charles E, Shumaker SA, McIntyre KM, et al. Assessment of quality of life as observed from the baseline data of the Studies of Left Ventricular Dysfunction (SOLVD) trial quality-of-life substudy. Am J Cardiol 1993 May 1;71(12):1069-1073. [Medline] [CrossRef]
  22. Rose EA, Moskowitz AJ, Packer M, Sollano JA, Williams DL, Tierney AR, et al. The REMATCH trial: rationale, design, and end points. Randomized Evaluation of Mechanical Assistance for the Treatment of Congestive Heart Failure. Ann Thorac Surg 1999 Mar;67(3):723-730. [Medline] [CrossRef]
  23. Abraham WT. Rationale and design of a randomized clinical trial to assess the safety and efficacy of cardiac resynchronization therapy in patients with advanced heart failure: the Multicenter InSync Randomized Clinical Evaluation (MIRACLE). J Card Fail 2000 Dec;6(4):369-380. [Medline] [CrossRef]
  24. Bennett SJ, Oldridge NB, Eckert GJ, Embree JL, Browning S, Hou N, et al. Comparison of quality of life measures in heart failure. Nurs Res 2003;52(4):207-216. [Medline] [CrossRef]
  25. Rector TS. Overview of The Minnesota Living with Heart Failure Questionnaire. 2005   URL: http://www.mlhfq.org/_dnld/mlhfq_overview.pdf [accessed 2008 Jun 23] [WebCite Cache]
  26. Riegel B, Carlson B, Moser DK, Sebern M, Hicks FD, Roland V. Psychometric testing of the self-care of heart failure index. J Card Fail 2004 Aug;10(4):350-360. [Medline] [CrossRef]
  27. Julious SA. Sample sizes for clinical trials with normal data. Stat Med 2004 Jun 30;23(12):1921-1986. [Medline] [CrossRef]
  28. Hwang IK, Morikawa T. Design issues in noninferiority/equivalence trials. Drug Information J 1999;33:1205-1218.
  29. Dunnett CW, Gent M. An alternative to the use of two-sided tests in clinical trials. Stat Med 1996 Aug 30;15(16):1729-1738. [Medline] [CrossRef]
  30. Heidenreich PA, Spertus JA, Jones PG, Weintraub WS, Rumsfeld JS, Rathore SS, et al; Cardiovascular Outcomes Research Consortium. Health status identifies heart failure outpatients at risk for hospitalization or death. J Am Coll Cardiol 2006 Feb 21;47(4):752-756. [Medline] [CrossRef]
  31. Tu JV, Donovan LR, Austin PC, Ko DT, Newman AM, Wang J, et al. Quality of Cardiac Care in Ontario—Phase I. Report 2. Toronto, Ontario: Institute for Clinical Evaluative Sciences; 2005.
  32. Statistics Canada. 2006 Census   URL: http://www12.statcan.ca/census-recensement/index-eng.cfm [accessed 2009 Jan 12] [WebCite Cache]
  33. Vallejo MA, Jordán CM, Díaz MI, Comeche MI, Ortega J. Psychological assessment via the internet: a reliability and validity study of online (vs paper-and-pencil) versions of the General Health Questionnaire-28 (GHQ-28) and the Symptoms Check-List-90-Revised (SCL-90-R). J Med Internet Res 2007;9(1):e2 [FREE Full text] [Medline] [CrossRef]


Abbreviations

KCCQ: Kansas City Cardiomyopathy Questionnaire
MLHFQ: Minnesota Living with Heart Failure Questionnaire
SCHFI: Self-Care of Heart Failure Index


Edited by K El Emam; submitted 02.07.08; peer-reviewed by B Malin, N Barrowman; comments to author 27.08.08; revised version received 03.11.08; accepted 21.11.08; published 06.02.09

Copyright

© Robert C Wu, Kevin Thorpe, Heather Ross, Vaska Micevski, Christine Marquez, Sharon E Straus. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 06.02.2009.  

This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.