@Article{info:doi/10.2196/66279,
  author   = "{\v{S}}uvalov, Hendrik and Lepson, Mihkel and Kukk, Veronika and Malk, Maria and Ilves, Neeme and Kuulmets, Hele-Andra and Kolde, Raivo",
  title    = "Using Synthetic Health Care Data to Leverage Large Language Models for Named Entity Recognition: Development and Validation Study",
  journal  = "J Med Internet Res",
  year     = "2025",
  month    = "Mar",
  day      = "18",
  volume   = "27",
  pages    = "e66279",
  keywords = "natural language processing; named entity recognition; large language model; synthetic data; LLM; NLP; machine learning; artificial intelligence; language model; NER; medical entity; Estonian; health care data; annotated data; data annotation; clinical decision support; data mining",
  abstract = "Background: Named entity recognition (NER) plays a vital role in extracting critical medical entities from health care records, facilitating applications such as clinical decision support and data mining. Developing robust NER models for low-resource languages, such as Estonian, remains a challenge due to the scarcity of annotated data and domain-specific pretrained models. Large language models (LLMs) have proven to be promising in understanding text from any language or domain. Objective: This study addresses the development of medical NER models for low-resource languages, specifically Estonian. We propose a novel approach by generating synthetic health care data and using LLMs to annotate them. These synthetic data are then used to train a high-performing NER model, which is applied to real-world medical texts, preserving patient data privacy. Methods: Our approach to overcoming the shortage of annotated Estonian health care texts involves a three-step pipeline: (1) synthetic health care data are generated using a GPT-2 model trained locally on Estonian medical records, (2) the synthetic data are annotated with LLMs, specifically GPT-3.5-Turbo and GPT-4, and (3) the annotated synthetic data are then used to fine-tune an NER model, which is later tested on real-world medical data. This paper compares the performance of different prompts; assesses the impact of GPT-3.5-Turbo, GPT-4, and a local LLM; and explores the relationship between the amount of annotated synthetic data and model performance. Results: The proposed methodology demonstrates significant potential in extracting named entities from real-world medical texts. Our top-performing setup achieved an F1-score of 0.69 for drug extraction and 0.38 for procedure extraction. These results indicate strong performance in recognizing certain entity types while highlighting the complexity of extracting procedures. Conclusions: This paper demonstrates a successful approach to leveraging LLMs for training NER models using synthetic data, effectively preserving patient privacy. By avoiding reliance on human-annotated data, our method shows promise in developing models for low-resource languages, such as Estonian. Future work will focus on refining the synthetic data generation and expanding the method's applicability to other domains and languages.",
  issn     = "1438-8871",
  doi      = "10.2196/66279",
  url      = "https://www.jmir.org/2025/1/e66279"
}