Multimedia Appendix 1

Table of contents
1. Collaboration for Research, Implementation and Training in Critical Care in Asia Investigators (page 2)
2. Characteristics of sites and clinical registries included in the study (page 4)
   Table 2.1. Characteristics of sites and intensive care units (ICUs) represented in this study (page 4)
   Table 2.2. Characteristics of clinical registries of the Collaboration for Research, Implementation and Training in Critical Care in Asia (CCA) network included in the study at the time of the interviews (page 5)
3. Template for Intervention Description and Replication (TIDieR) checklist (page 6)
4. Standards for Reporting Qualitative Research checklist (page 10)
5. Interview guide (page 12)


Characteristics of sites and clinical registries included in the study

Template for Intervention Description and Replication (TIDieR) checklist

1. Brief name
Provide the name or a phrase that describes the intervention.
To implement a cloud-based, setting-adapted registry platform in 42 intensive care units in nine low-income and middle-income countries in Asia.

2. Why
Describe any rationale, theory, or goal of the elements essential to the intervention.
To provide real-time data on service activity, case mix, processes of care, patient experience and outcomes for critical care services in Asia.

3. What - Materials
Describe any physical or informational materials used in the intervention, including those provided to participants or used in intervention delivery or in training of intervention providers. Provide information on where the materials can be accessed (such as online appendix, URL).
Materials required for implementation included:
• Post-implementation support
  - Weekly or fortnightly review of data completeness, frequency of reporting and validity of data entry with the national and NICS-MORU implementation teams
  - Data validation queries from automated and manual checks sent to sites
  - Errors, edits and validation queries generated were stored separately, so that common errors could be identified and acted on
  - Adaptation of the registry where necessary
    - E.g. variable labels were adapted to reflect local healthcare system terminology, user interfaces were adapted to maximise data completeness, and field validation or alerts were added to reduce user error
    - E.g. data fields were added and/or edited due to local projects and research questions, or participation in observational studies, pandemic surveillance, international trials or other data collection
  - Creation and testing of new developments to the registry
  - Training and close communication following the release of new registry features
  - Constant technical and operational support through telephone support, app-based messaging groups, and meetings as required by sites
  - Production of customised reports and analyses of data based on site requirements

5. Who provided
For each category of intervention provider (such as psychologist, nursing assistant), describe their expertise, background, and any specific training given for the implementation.
NICS-MORU implementation team
- Project coordinators: Overall responsibility for the design of the registry, implementation and strategic vision for the network. Involved in coordinating the development of the registry, identifying potential collaborators, supporting stakeholders in implementation, and ensuring sustainability by ensuring the utility of the registry and demonstrating its value to stakeholders as well as potential research partners and funders.
- Implementation coordinators: The NICS-MORU implementation coordinators centrally managed operational aspects of the implementation of the registry. They assisted national coordinators in all phases of project start-up, approval submissions, implementation and data monitoring, and coordinated between the local, national and development teams. They assisted national and local coordinators in ensuring data safety and quality, and provided post-implementation support.

NICS-MORU development team
- Data analyst: Provided technical support for the registry and data visualisation.

National implementation teams
- National leads: These individuals acted as liaison between the NICS-MORU teams and the sites in their country. They helped identify sites, clinical leads and data collectors within each country, held an advocacy role, and acted as a liaison for clinical, professional and ministerial stakeholders. They ensured that all necessary local ethical and regulatory approvals were obtained before the start of implementation.
- Implementation coordinators: Under the supervision of the NICS-MORU implementation coordinators, the national implementation coordinators supported sites in their country with logistics, administration, implementation and post-implementation support.

Site teams
- Clinical leads: In each site, a clinical lead (doctor or senior nurse) was appointed who led local implementation and acquisition of necessary approvals, supported peer training, troubleshot problems in the daily use of the registry, and guided the local use of the data. They reported to the national and NICS-MORU implementation teams.

- Data collectors: Data collectors, either clinical or non-clinical depending on site preferences, were appointed in each unit and trained in data entry and dashboard navigation.

6. How
Describe the modes of delivery (such as face to face or by some other mechanism, such as internet or telephone) of the intervention and whether it is provided individually or in a group.
• Face to face: Members of the NICS-MORU implementation team visited new sites to deliver training and collect feedback from stakeholders directly.

• Video conferencing: Shared-screen video conferencing tools such as Skype, Hangouts and Zoom were used to deliver training on data input and validation on the platform, individually and to groups. Training for new platform features was also conducted using video conferencing.

• Instant messaging: Web-based 24-hour implementation and technical support was available via WhatsApp in all countries.
• Email: Email addresses of each country's team members and the central team were shared, and email threads were saved as records of discussions.

• Via the registry portal: The reporting facilities provided near real-time reporting on data quality, including completeness, epidemiology, severity of illness, treatment, microbiology and outcomes of ICU patients.
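The completeness side of this reporting can be illustrated with a minimal sketch. The record structure and field names below (age, apache_ii, outcome) are illustrative assumptions, not the registry's actual schema.

```python
# Hypothetical sketch of a per-field data-completeness summary, the kind of
# figure a registry dashboard might surface in near real time.
# Field names and the record structure are assumptions for illustration.

def completeness_report(records, fields):
    """Return the percentage of non-missing values per field."""
    report = {}
    for field in fields:
        filled = sum(1 for r in records if r.get(field) not in (None, ""))
        report[field] = round(100.0 * filled / len(records), 1) if records else 0.0
    return report

admissions = [
    {"age": 54, "apache_ii": 18, "outcome": "discharged"},
    {"age": 61, "apache_ii": None, "outcome": "died"},
    {"age": 47, "apache_ii": 22, "outcome": ""},
]
print(completeness_report(admissions, ["age", "apache_ii", "outcome"]))
# {'age': 100.0, 'apache_ii': 66.7, 'outcome': 66.7}
```

Treating empty strings as missing matters here, because registries that restrict free text still receive blank submissions for optional fields.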

7. Where
Describe the type(s) of location(s) where the intervention occurred, including any necessary infrastructure for relevant features.
The registry implementation occurred on-site in all the relevant units and was centrally overseen by NICS-MORU, based in Sri Lanka. Infrastructure required for implementation has been outlined in detail in section 4 (implementation) and included:
• 3G connection or Wi-Fi
• Mobile, desktop or tablet device
• A physical server, housed in a secure location in the network site country or at NICS-MORU's secure location

8. When and how much
Describe the number of times the intervention was delivered and over what period of time, including the number of sessions, their schedule, and their duration, intensity, or dose.
• Initial training and set-up: This was a period of close supervision, communication and support, including email, web-based 24-hour implementation and technical support available via WhatsApp, training, and meetings to discuss any issues.
• Support: Weekly video conferencing sessions with sites and implementation teams provided an opportunity for troubleshooting and feedback on feasibility and usability, and enabled adaptation to be discussed and prioritised.
• Feedback and iteration: Twenty-four-hour support via online messaging was available during implementation and post-implementation support.

9. Tailoring
If the intervention was planned to be personalised, titrated or adapted, then describe what, why, when, and how.
The steps that were undertaken to ensure tailoring of the platform to each site have been outlined in detail in Section 4. All iterations were guided by stakeholders, led by the implementation team and conducted by the development team. Iterations included the following:
• Landscaping survey to tailor the data set and the platform identifiers: number of beds, unit names, logos, person identifiers, users and logins.

• The data dictionaries were produced collaboratively, and local terminology as well as preferences for units of measure and bioclinical information were captured at the start.

• Network countries had a choice of optional forms for quality indicators and electronic health record features, including daily observations and investigations.

10. Modifications
If the intervention was modified during the course of the study, describe the changes (what, why, when, and how).
• Weekly video conferencing sessions with sites and implementation teams provided an opportunity for feedback on feasibility and usability, and enabled adaptation to be discussed and prioritised.

• Twenty-four-hour support via online messaging was available during implementation and post-implementation. An iterative cycle of adaptation and implementation was used to adapt the registry where necessary.

• Examples of areas of modification included:
  ○ Login and access portals
  ○ Data set: core data set; ability to embed additional variables for research studies; ability to embed quality improvement audits; ability to report pandemic surveillance
  ○ User interface: navigation changes
  ○ Visualisation and output: stakeholders identified which indicators or outputs they wanted prioritised, including occupancy, acuity, equipment and resource availability, and bed capacity
  ○ Reports and PDF generation: modifications were made to which variables and indicators were reported in the unit and registry reports
  ○ Field labels: these were adapted to reflect local healthcare system terminology; user interfaces were adapted to maximise data completeness; and field validation and alerts were added in a personalised way to reduce user error

11. How well - Planned
If intervention adherence or fidelity is assessed, describe how and by whom, and if any strategies were used to maintain or improve fidelity, describe them.
Adherence or fidelity in the context of implementing the CCA registry refers to the extent of achieving the target of 42 units in nine South Asian countries over three years. This is monitored by the NICS-MORU implementation team and reviewed in regular project grant meetings. Recruitment of additional units and countries proceeded as described in Section 4.
Fidelity and accuracy of data capture for the registry was assessed and supported using the strategies described below. Further information about these processes is available in Section 4.
- Data validation comprised automated rules within the registry, on-site checks, and remote checks by the NICS-MORU team. Automated rules flagged abnormally high or low values and missed information. Use of free-text entry was restricted as much as possible.
- Visual display of daily data capture facilitated checks on accuracy and completeness by the clinical lead and data collector. Checks were subsequently also performed centrally by the NICS-MORU team, who monitored data entry daily, aided by automated scripts.
- Queries were sent back to the sites, where they were addressed by the data collector and clinical lead, overseen by the national coordinator.
- Errors, edits and validation queries generated in the registry were stored separately. This allowed common errors to be identified and acted on.
- Rapid feedback loops were maintained, since in the low-income and middle-income setting traceability of paper-based notes is generally limited. The use of a single platform for entering, querying, reporting and validating data increased efficiencies.
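The automated rules described above (flagging abnormally high or low values and missed information) can be sketched as follows. The specific fields, plausibility ranges and mandatory-field list are illustrative assumptions, not the registry's actual rule set.

```python
# Minimal sketch of automated validation: flag out-of-range values and
# missing mandatory fields, producing queries of the kind sent back to sites.
# Field names, ranges and the mandatory list are assumptions for illustration.

RANGES = {"heart_rate": (20, 250), "temperature_c": (30.0, 43.0)}
MANDATORY = ("admission_date", "heart_rate")

def validate(record):
    """Return a list of human-readable validation queries for one record."""
    queries = []
    for field in MANDATORY:
        if record.get(field) in (None, ""):
            queries.append(f"{field}: missing mandatory value")
    for field, (lo, hi) in RANGES.items():
        value = record.get(field)
        if value is not None and not lo <= value <= hi:
            queries.append(f"{field}: value {value} outside plausible range {lo}-{hi}")
    return queries

print(validate({"heart_rate": 300, "temperature_c": 36.8}))
# ['admission_date: missing mandatory value',
#  'heart_rate: value 300 outside plausible range 20-250']
```

Storing the generated queries separately, as the text describes, is what makes it possible to spot recurring errors across sites rather than handling each one in isolation.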

12. How well - Actual
If intervention adherence or fidelity was assessed, describe the extent to which the intervention was delivered as planned.
The achievement of the work packages of the CCA registry is monitored annually by the NICS-MORU project coordinators and reported to the grant committee.
Data reporting from the registry is as follows:
• Private reporting: All sites had immediate access to their own unit's information captured through the registry via the private dashboard. This information enabled healthcare staff, administrators and researchers to evaluate trends in unit activity, severity of illness, bed occupancy, length of stay and outcomes within their respective institution.
• Peer reporting: The aim of peer reporting was to enable evaluation of performance between comparable units. Benchmarking was used to enable stakeholders within the network to evaluate services, identify 'good or effective practice' and share this with other units. Benchmarking processes of care and outcomes helped healthcare leaders identify relationships between resources, infrastructure, healthcare culture and experience. This complemented the qualitative health systems evaluation undertaken by the network, and the training provided to support stakeholders in accurately evaluating and interpreting the indicators of quality measured through the registry.
• Public reporting: Anonymised aggregate data dashboards were accessible through a public portal. The aim was to provide community stakeholders with the opportunity to see information about healthcare within their region or country, promoting greater accountability of care. The dashboards were tailored to patient and public priorities and included supportive information to aid interpretation. The information was used to inform those responsible for strategic planning and financial provision of healthcare at regional and national level.

Standards for Reporting Qualitative Research Checklist [26]
No | Topic | Item | Location where item reported

Title and abstract
S1 Title: Concise description of the nature and topic of the study identifying the study as qualitative or indicating the approach (e.g., ethnography, grounded theory) or data collection methods (e.g., interview, focus group) is recommended. (p 1)
S2 Abstract: Summary of key elements of the study using the abstract format of the intended publication; typically includes background, purpose, methods, results, and conclusions. (p 1)

Introduction
S3 Problem formulation: Description and significance of the problem/phenomenon studied; review of relevant theory and empirical work; problem statement. (pp 2-3)
S4 Purpose or research question: Purpose of the study and specific objectives or questions. (p 3)

Methods
S6 Researcher characteristics and reflexivity: Researchers' characteristics that may influence the research, including personal attributes, qualifications/experience, relationship with participants, assumptions, and/or presuppositions; potential or actual interaction between researchers' characteristics and the research questions, approach, methods, results, and/or transferability. (pp 4-5)
S7 Context: Setting/site and salient contextual factors. (pp 3-4; Appendix pp 2-3)
S8 Sampling strategy: How and why research participants, documents, or events were selected; criteria for deciding when no further sampling was necessary (e.g., sampling saturation).
S9 Ethical issues pertaining to human subjects: Documentation of approval by an appropriate ethics review board and participant consent, or explanation for lack thereof; other confidentiality and data security issues. (p 4)
S10 Data collection methods: Types of data collected; details of data collection procedures including (as appropriate) start and stop dates of data collection and analysis, iterative process, triangulation of sources/methods, and modification of procedures in response to evolving study findings.
S11 Data collection instruments and technologies: Description of instruments (e.g., interview guides, questionnaires) and devices (e.g., audio recorders) used for data collection; if/how the instrument(s) changed over the course of the study. (p 4; Appendix pp 15-18)
S12 Units of study: Number and relevant characteristics of participants, documents, or events included in the study; level of participation (could be reported in results). (pp 4-5; Appendix pp 2-3)
S13 Data processing: Methods for processing data prior to and during analysis, including transcription, data entry, data management and security, verification of data integrity, data coding, and anonymization/deidentification of excerpts. (pp 4-5)
S14 Data analysis: Process by which inferences, themes, etc., were identified and developed, including the researchers involved in data analysis; usually references a specific paradigm or approach. (p 5)
S15 Techniques to enhance trustworthiness: Techniques to enhance trustworthiness and credibility of data analysis (e.g., member checking, audit trail, triangulation). (p 5)

Results/findings
S16 Synthesis and interpretation: Main findings (e.g., interpretations, inferences, and themes); might include development of a theory or model, or integration with prior research or theory. (pp 5-8)
S17 Links to empirical data: Evidence (e.g., quotes, field notes, text excerpts, photographs) to substantiate analytic findings. (pp 6-7)

Discussion
S18 Integration with prior work, implications, transferability, and contribution(s) to the field: Short summary of main findings; explanation of how findings and conclusions connect to, support, elaborate on, or challenge conclusions of earlier scholarship; discussion of scope of application/generalizability; identification of unique contribution(s) to scholarship in a discipline or field.