Named Entity Recognition in Chinese Medical Literature Using Pretraining Models

Joint Authors

Sun, Yining
Ma, Zuchang
Gao, Lisheng
Wang, Yu
Xu, Yang

Source

Scientific Programming

Issue

Vol. 2020, Issue 2020 (31 Dec. 2020), pp.1-9, 9 p.

Publisher

Hindawi Publishing Corporation

Publication Date

2020-09-09

Country of Publication

Egypt

No. of Pages

9

Main Subjects

Mathematics

Abstract EN

The medical literature contains valuable knowledge, such as the clinical symptoms, diagnosis, and treatments of a particular disease.

Named Entity Recognition (NER) is the initial step in extracting this knowledge from unstructured text and presenting it as a Knowledge Graph (KG).

However, previous approaches to NER have often suffered from the small scale of human-labelled training data.

Furthermore, extracting knowledge from Chinese medical literature is a more complex task because Chinese text is written without spaces marking word boundaries.

Recently, pretraining models, which learn representations with prior semantic knowledge from large-scale unlabelled corpora, have achieved state-of-the-art results on a wide variety of Natural Language Processing (NLP) tasks.

However, the capabilities of pretraining models have not been fully exploited, and the application of pretraining models other than BERT to specific domains, such as NER in Chinese medical literature, is also of interest.

In this paper, we enhance the performance of NER in Chinese medical literature using pretraining models.

First, we propose a data augmentation method that replaces words in the training set with synonyms generated by the Masked Language Model (MLM), one of the pretraining tasks.
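To illustrate the idea, the following is a minimal sketch of MLM-based substitution, assuming the Hugging Face transformers library and the public bert-base-chinese checkpoint; the helper name augment, the example sentence, and the top_k setting are illustrative and not taken from the paper.

```python
# A minimal sketch of MLM-based data augmentation (assumed setup, not the
# authors' implementation): mask one character and let the MLM propose
# substitutes, which keeps the sentence length and entity labels aligned.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-chinese")

def augment(sentence: str, position: int, top_k: int = 5):
    """Propose MLM substitutes for the character at `position` (hypothetical helper)."""
    masked = sentence[:position] + fill_mask.tokenizer.mask_token + sentence[position + 1:]
    candidates = fill_mask(masked, top_k=top_k)
    # Keep only predictions that differ from the original character.
    return [sentence[:position] + c["token_str"] + sentence[position + 1:]
            for c in candidates if c["token_str"] != sentence[position]]

# Example: propose substitutes for the fifth character of a medical sentence.
print(augment("患者出现发热和咳嗽等症状", position=4))
```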

Then, we treat NER as a downstream task of the pretraining model and transfer the prior semantic knowledge obtained during pretraining to it.

Finally, we conduct experiments to compare the performances of six pretraining models (BERT, BERT-WWM, BERT-WWM-EXT, ERNIE, ERNIE-tiny, and RoBERTa) in recognizing named entities from Chinese medical literature.

The effects of feature extraction and fine-tuning, as well as different downstream model structures, are also explored.
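The contrast between feature extraction and fine-tuning can be sketched as follows, assuming the Hugging Face transformers token-classification head and a placeholder BIO tag set; the checkpoint and labels are assumptions, not the paper's configuration.

```python
# A minimal sketch (assumed setup): load a pretrained Chinese encoder with a
# token-classification head, and optionally freeze the encoder so that only
# the classifier is trained (feature extraction) instead of fine-tuning.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-DISEASE", "I-DISEASE", "B-SYMPTOM", "I-SYMPTOM"]  # hypothetical tag set
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-chinese", num_labels=len(labels)
)

FEATURE_EXTRACTION = False  # True: freeze the encoder, train only the classifier head
if FEATURE_EXTRACTION:
    for param in model.base_model.parameters():
        param.requires_grad = False

# Character-level tokenization keeps NER labels aligned with Chinese characters.
enc = tokenizer("患者出现发热症状", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits           # (1, seq_len incl. [CLS]/[SEP], num_labels)
pred = logits.argmax(-1)[0].tolist()        # predicted label index per token
```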

Experimental results demonstrate that the proposed data augmentation method yields meaningful improvements in recognition performance.

Moreover, RoBERTa-CRF achieves the highest F1-score, outperforming previous methods and the other pretraining models.
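An encoder-plus-CRF tagger in the spirit of RoBERTa-CRF can be sketched as below, assuming the pytorch-crf package (torchcrf) and a Hugging Face encoder; this is an illustration of the general architecture, not the authors' implementation.

```python
# A minimal encoder + CRF sequence tagger (assumed components): the encoder
# produces per-token emissions and the CRF models label transitions.
import torch
from torch import nn
from torchcrf import CRF
from transformers import AutoModel

class EncoderCRF(nn.Module):
    def __init__(self, checkpoint: str, num_tags: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(checkpoint)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.classifier(hidden)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence under the CRF.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: Viterbi decoding returns the most likely tag sequence per sentence.
        return self.crf.decode(emissions, mask=mask)
```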

American Psychological Association (APA)

Wang, Yu, Sun, Yining, Ma, Zuchang, Gao, Lisheng, & Xu, Yang. 2020. Named Entity Recognition in Chinese Medical Literature Using Pretraining Models. Scientific Programming, Vol. 2020, no. 2020, pp. 1-9.
https://search.emarefa.net/detail/BIM-1209146

Modern Language Association (MLA)

Wang, Yu…[et al.]. Named Entity Recognition in Chinese Medical Literature Using Pretraining Models. Scientific Programming No. 2020 (2020), pp.1-9.
https://search.emarefa.net/detail/BIM-1209146

American Medical Association (AMA)

Wang, Yu, Sun, Yining, Ma, Zuchang, Gao, Lisheng, Xu, Yang. Named Entity Recognition in Chinese Medical Literature Using Pretraining Models. Scientific Programming. 2020. Vol. 2020, no. 2020, pp. 1-9.
https://search.emarefa.net/detail/BIM-1209146

Data Type

Journal Articles

Language

English

Notes

Includes bibliographical references

Record ID

BIM-1209146