Named Entity Recognition in Chinese Medical Literature Using Pretraining Models

Co-authors

Sun, Yining
Ma, Zuchang
Gao, Lisheng
Wang, Yu
Xu, Yang

Source

Scientific Programming

Issue

Volume 2020, Issue 2020 (31 December 2020), pp. 1-9, 9 p.

الناشر

Hindawi Publishing Corporation

Publication Date

2020-09-09

Country of Publication

Egypt

Number of Pages

9

Main Subjects

Mathematics

Abstract (EN)

The medical literature contains valuable knowledge, such as the clinical symptoms, diagnosis, and treatments of a particular disease.

Named Entity Recognition (NER) is the initial step in extracting this knowledge from unstructured text and presenting it as a Knowledge Graph (KG).

However, previous approaches to NER have often suffered from the small scale of human-labelled training data.

Furthermore, extracting knowledge from Chinese medical literature is a more complex task because Chinese text has no explicit word boundaries between characters.

Recently, pretraining models, which learn representations that encode prior semantic knowledge from large-scale unlabelled corpora, have achieved state-of-the-art results for a wide variety of Natural Language Processing (NLP) tasks.

However, the capabilities of pretraining models have not been fully exploited, and the application of pretraining models other than BERT to specific domains, such as NER in Chinese medical literature, is also of interest.

In this paper, we enhance the performance of NER in Chinese medical literature using pretraining models.

First, we propose a method of data augmentation that replaces words in the training set with synonyms through the Masked Language Model (MLM), which is a pretraining task.
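
As a rough illustration of this idea, the sketch below masks a single character of a sentence and asks a masked language model for replacement candidates. It is a minimal sketch assuming the HuggingFace transformers library and the public bert-base-chinese checkpoint; the paper's exact augmentation procedure may differ.

```python
# MLM-based synonym replacement, sketched with the HuggingFace fill-mask
# pipeline and the public "bert-base-chinese" checkpoint (assumptions,
# not necessarily the paper's exact setup).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-chinese")

def augment(sentence: str, position: int, top_k: int = 5) -> list[str]:
    """Mask the character at `position` and return augmented sentences."""
    chars = list(sentence)
    chars[position] = fill_mask.tokenizer.mask_token  # usually "[MASK]"
    candidates = fill_mask("".join(chars), top_k=top_k)
    # Keep only predictions that differ from the original character, so each
    # returned sentence is a genuine variant; one-character substitutions
    # also leave the NER label sequence aligned.
    return [c["sequence"].replace(" ", "") for c in candidates
            if c["token_str"] != sentence[position]]

# Example on a hypothetical medical sentence: mask one character, collect variants.
print(augment("糖尿病患者应控制血糖", position=6))
```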

Then, we consider NER as the downstream task of the pretraining model and transfer the prior semantic knowledge obtained during pretraining to it.
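
Concretely, this amounts to attaching a token-classification head to the pretrained encoder. The sketch below shows the shape of that setup with transformers; the tag set and checkpoint are illustrative assumptions, and the classification head remains randomly initialized until fine-tuned.

```python
# NER as the downstream task of a pretrained encoder: a sketch using a
# token-classification head; the BIO tag set below is hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-DISEASE", "I-DISEASE", "B-SYMPTOM", "I-SYMPTOM"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-chinese", num_labels=len(labels))  # head is untrained here

text = "患者出现发热症状"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits              # (1, seq_len, num_labels)
pred = [labels[i] for i in logits.argmax(-1)[0]]
print(list(zip(tokenizer.tokenize(text), pred[1:-1])))  # skip [CLS]/[SEP]
```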

Finally, we conduct experiments to compare the performances of six pretraining models (BERT, BERT-WWM, BERT-WWM-EXT, ERNIE, ERNIE-tiny, and RoBERTa) in recognizing named entities from Chinese medical literature.
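
The comparison itself reduces to fine-tuning each checkpoint on the same training set and scoring entity-level F1 on a held-out set. Below is a sketch of the scoring step using the seqeval package; the hub checkpoint identifiers are assumptions standing in for the models the paper names (the ERNIE variants originally ship with PaddlePaddle and are omitted here).

```python
# Entity-level F1 for comparing models, via the "seqeval" package; the
# checkpoint identifiers are assumptions, not the paper's artifacts.
from seqeval.metrics import f1_score

checkpoints = [
    "bert-base-chinese",           # BERT
    "hfl/chinese-bert-wwm",        # BERT-WWM
    "hfl/chinese-bert-wwm-ext",    # BERT-WWM-EXT
    "hfl/chinese-roberta-wwm-ext", # RoBERTa
]

def entity_f1(gold: list[list[str]], pred: list[list[str]]) -> float:
    """F1 over whole entities (BIO spans), not individual tags."""
    return f1_score(gold, pred)

# A span is only credited when both boundary and type match:
gold = [["B-DISEASE", "I-DISEASE", "O", "B-SYMPTOM"]]
pred = [["B-DISEASE", "I-DISEASE", "O", "O"]]
print(entity_f1(gold, pred))  # ~0.67: one of two gold entities recovered
```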

The effects of feature extraction and fine-tuning, as well as different downstream model structures, are also explored.
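
The feature-extraction versus fine-tuning distinction comes down to whether gradients flow into the encoder. A minimal sketch, assuming the same transformers setup as above:

```python
# Feature extraction vs. fine-tuning, sketched by toggling gradient flow;
# assumes the BertForTokenClassification model from the earlier sketch.
import torch
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-chinese", num_labels=5)

FEATURE_EXTRACTION = True
if FEATURE_EXTRACTION:
    # Feature extraction: freeze the pretrained encoder so only the
    # downstream classification layer is trained.
    for param in model.bert.parameters():
        param.requires_grad = False
# Fine-tuning is the default: every parameter stays trainable.

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=5e-5)
```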

Experimental results demonstrate that the proposed data augmentation method yields meaningful improvements in recognition performance.

Moreover, RoBERTa-CRF achieves the highest F1-score, outperforming previous methods and the other pretraining models.
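
The RoBERTa-CRF structure pairs the encoder's per-token emission scores with a CRF layer that decodes a globally consistent tag sequence (e.g. forbidding an I- tag directly after O). A minimal sketch, assuming the pytorch-crf package and the hfl/chinese-roberta-wwm-ext checkpoint as stand-ins for the paper's exact components:

```python
# An encoder+CRF tagger, sketched with "pytorch-crf" and the
# "hfl/chinese-roberta-wwm-ext" checkpoint (both assumptions).
import torch
import torch.nn as nn
from torchcrf import CRF
from transformers import AutoModel

class RobertaCRF(nn.Module):
    def __init__(self, checkpoint: str, num_tags: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(checkpoint)
        self.to_emissions = nn.Linear(self.encoder.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.encoder(input_ids,
                              attention_mask=attention_mask).last_hidden_state
        emissions = self.to_emissions(hidden)   # per-token tag scores
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: Viterbi-decode the best tag path per sentence.
        return self.crf.decode(emissions, mask=mask)

model = RobertaCRF("hfl/chinese-roberta-wwm-ext", num_tags=5)
```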

American Psychological Association (APA) Citation Style

Wang, Yu, Sun, Yining, Ma, Zuchang, Gao, Lisheng, & Xu, Yang. (2020). Named Entity Recognition in Chinese Medical Literature Using Pretraining Models. Scientific Programming, Vol. 2020, no. 2020, pp. 1-9.
https://search.emarefa.net/detail/BIM-1209146

Modern Language Association (MLA) Citation Style

Wang, Yu, et al. Named Entity Recognition in Chinese Medical Literature Using Pretraining Models. Scientific Programming, no. 2020 (2020), pp. 1-9.
https://search.emarefa.net/detail/BIM-1209146

American Medical Association (AMA) Citation Style

Wang Yu, Sun Yining, Ma Zuchang, Gao Lisheng, Xu Yang. Named Entity Recognition in Chinese Medical Literature Using Pretraining Models. Scientific Programming. 2020. Vol. 2020, no. 2020, pp. 1-9.
https://search.emarefa.net/detail/BIM-1209146

Data Type

Articles

Text Language

English

Notes

Includes bibliographical references

Record Number

BIM-1209146