Automated essay scoring validity for native Arabic speaking learners of English

Author

al-Jundi, Mirfat Muhammad

Source

Bulletin of the Faculty of Arts

Issue

Vol. 77, No. 2 (31 January 2017), pp. 49-68, 20 pp.

Publisher

Cairo University, Faculty of Arts

Publication Date

2017-01-31

Country of Publication

Egypt

Number of Pages

20

Main Subjects

Educational Sciences
Languages and Comparative Literature

Abstract (EN)

This study aimed to investigate the validity of using Automated Essay Scoring (AES) systems to score essays written by nonnative university female students of English whose native language was Arabic.

For this purpose, the performance of the AES program My Access, which is supported by the IntelliMetric scoring system, was compared with that of human raters in assigning scores.

The data were obtained by using the IntelliMetric scoring system to score 55 essays and by asking three qualified, experienced human raters to score the same essays.

The human raters were native English speakers.

Four-point informative analytic and holistic rubrics were used.

The analytic rubric included five traits: focus and meaning; content and development; organization; language use, voice and style; and mechanics and conventions.

The scores were then aggregated.

Descriptive statistics, mean differences, and Pearson correlation coefficients were calculated.

The results showed that across the five traits the correlations between the human raters' and IntelliMetric scores were weak to moderate, ranging from 0.308 to 0.435.

On holistic scoring, the correlation between IntelliMetric and the first human rater (H1) was 0.278, and 0.288 with the second human rater (H2).

There was no significant correlation between IntelliMetric and the third human rater (H3) on holistic scoring.

Across the five traits, the results of one-way analysis of variance (ANOVA) indicated statistically significant differences between the IntelliMetric and human scores. Most prior AES research has examined essays written by native English speakers.
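As an illustration of the agreement statistic used in the study, the Pearson correlation between a machine score list and a human rater's score list can be sketched as follows. The score lists below are hypothetical examples on a four-point scale, not the study's data:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical four-point holistic scores for five essays (illustrative only)
intellimetric = [3, 2, 4, 3, 2]
rater_h1 = [3, 3, 4, 2, 2]
r = pearson(intellimetric, rater_h1)  # ≈ 0.64
```

Values near 1 indicate strong rater agreement; values near the 0.3-0.4 range reported in the study indicate weak to moderate agreement.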

Little research has been done on nonnative English students.

Burstein and Chodorow (1999), Wang (2006) and Dikli (2007) assert that more studies are needed to provide further evidence to verify the validity and generalizability of the AES tools.

A shift in research focus regarding AES technology is needed.

Most studies focus on native English speaking populations.

There is much interest in examining the performance of the systems on nonnative speakers' essays.

The National Council of Teachers of English (NCTE) believes that "AES systems are unfair to nonnative speakers of English" (Lewis, 2013, p. 6).

Most AES systems "are intended for native English speaker/writers and may not have been systematically piloted on non-native writers" (Weigle, 2013, p. 90).

There is little published evidence of their appropriateness for English language learners.

Nonnative English students may need different support systems in writing classes.

Wang and Brown (2007) demonstrate the need for more studies investigating the generalizability of IntelliMetric, an AES system, across different student populations.

Various studies on IntelliMetric have been conducted by the development company in the form of technical reports.

Dikli (2010) asserts that "it is essential to increase the number of studies that are conducted by independent researchers" (p. 102) to draw a clear picture of the efficiency of AES programs.

This study is an attempt to fulfill this need.

It examines the validity of the IntelliMetric system for native Arabic speakers.

American Psychological Association (APA) Citation Style

al-Jundi, Mirfat Muhammad. 2017. Automated essay scoring validity for native Arabic speaking learners of English. Bulletin of the Faculty of Arts, Vol. 77, no. 2, pp. 49-68.
https://search.emarefa.net/detail/BIM-857612

Modern Language Association (MLA) Citation Style

al-Jundi, Mirfat Muhammad. Automated essay scoring validity for native Arabic speaking learners of English. Bulletin of the Faculty of Arts, Vol. 77, no. 2 (Jan. 2017), pp. 49-68.
https://search.emarefa.net/detail/BIM-857612

American Medical Association (AMA) Citation Style

al-Jundi, Mirfat Muhammad. Automated essay scoring validity for native Arabic speaking learners of English. Bulletin of the Faculty of Arts. 2017. Vol. 77, no. 2, pp. 49-68.
https://search.emarefa.net/detail/BIM-857612

Data Type

Articles

Text Language

English

Notes

Record Number

BIM-857612