Labelling Training Samples Using Crowdsourcing Annotation for Recommendation
Co-authors
Wang, Qingren
Zhang, Min
Tao, Tao
Sheng, Victor S.
Source
Complexity
Issue
Volume 2020, Issue 2020 (31 December 2020), pp. 1-10, 10 pp.
Publisher
Hindawi Publishing Corporation
Publication Date
2020-05-05
Country of Publication
Egypt
Number of Pages
10
Main Subjects
Abstract (EN)
Supervised learning-based recommendation models, which rest on a sufficient supply of high-quality training samples, have been widely applied in many domains.
In the era of big data, with data volumes growing explosively, training samples must be labelled promptly and accurately to guarantee the strong recommendation performance of supervised learning-based models.
Machine annotation cannot label training samples to a high quality because machine intelligence remains limited.
Although expert annotation can achieve high accuracy, it takes a long time and consumes more resources.
As a new way for human intelligence to participate in machine computing, crowdsourcing annotation makes up for the shortcomings of both machine annotation and expert annotation.
Therefore, in this paper, we utilize crowdsourcing annotation to label training samples.
First, a suitable crowdsourcing mechanism is designed to create crowdsourcing annotation-based tasks for labelling training samples, and then two entropy-based ground truth inference algorithms (i.e., HILED and HILI) are proposed to improve the quality of the noisy labels provided by the crowd.
In addition, descending and random orderings of crowdsourcing annotation-based tasks are also explored.
The experimental results demonstrate that crowdsourcing annotation significantly improves on the performance of machine annotation.
Among the ground truth inference algorithms, both HILED and HILI improve on the baselines, with HILED performing better than HILI.
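As a rough illustration of the general idea of entropy-based ground truth inference (a minimal sketch only; it does not reproduce the paper's HILED or HILI algorithms), one can aggregate each item's noisy crowd labels by majority vote and use the Shannon entropy of the item's label distribution as an uncertainty score:

```python
import math
from collections import Counter

def infer_labels(annotations):
    """Infer one label per item from crowd labels via majority vote,
    reporting label-distribution entropy as an uncertainty score.
    Illustrative sketch only, not the paper's HILED/HILI algorithms."""
    results = {}
    for item, labels in annotations.items():
        counts = Counter(labels)
        total = len(labels)
        # Shannon entropy of the label distribution: 0 means full agreement,
        # higher values mean the crowd disagrees more on this item.
        entropy = -sum((c / total) * math.log2(c / total)
                       for c in counts.values())
        inferred = counts.most_common(1)[0][0]
        results[item] = (inferred, entropy)
    return results

crowd = {
    "item1": ["pos", "pos", "pos"],         # unanimous -> entropy 0.0
    "item2": ["pos", "neg", "pos", "neg"],  # evenly split -> entropy 1.0
}
print(infer_labels(crowd))
```

High-entropy items are the natural candidates for relabelling or extra crowd votes, which is the intuition behind using entropy to drive label-quality improvement.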
American Psychological Association (APA) citation style
Wang, Q., Zhang, M., Tao, T., & Sheng, V. S. (2020). Labelling Training Samples Using Crowdsourcing Annotation for Recommendation. Complexity, 2020(2020), 1-10.
https://search.emarefa.net/detail/BIM-1139938
Modern Language Association (MLA) citation style
Wang, Qingren, et al. "Labelling Training Samples Using Crowdsourcing Annotation for Recommendation." Complexity, no. 2020, 2020, pp. 1-10.
https://search.emarefa.net/detail/BIM-1139938
American Medical Association (AMA) citation style
Wang Q, Zhang M, Tao T, Sheng VS. Labelling Training Samples Using Crowdsourcing Annotation for Recommendation. Complexity. 2020;2020:1-10.
https://search.emarefa.net/detail/BIM-1139938
Data Type
Articles
Text Language
English
Notes
Includes bibliographical references
Record Number
BIM-1139938