Automated scoring of teachers’ pedagogical content knowledge: a comparison between human and machine scoring

dc.contributor.author: Wahlen, Andreas
dc.contributor.author: Kuhn, Christiane
dc.contributor.author: Zlatkin-Troitschanskaia, Olga
dc.contributor.author: Gold, Christian
dc.contributor.author: Zesch, Torsten
dc.contributor.author: Horbach, Andrea
dc.date.accessioned: 2020-10-21T10:35:24Z
dc.date.available: 2020-10-21T10:35:24Z
dc.date.issued: 2020
dc.description.abstract [en_GB]: To validly assess teachers’ pedagogical content knowledge (PCK), performance-based tasks with open-response formats are required. Automated scoring is considered an appropriate approach to reducing the resource intensity of human scoring and to achieving more consistent scoring results than human raters. This study focuses on the comparability of human and automated scoring of PCK for economics teachers. The answers of (prospective) teachers (N = 852) to six open-response tasks from a standardized and validated test were scored by two trained human raters and by the engine “Educational SCoRIng Toolkit” (ESCRITO). The average agreement between human and computer ratings, κw = 0.66, suggests convergent validity of the scoring results. The results of a single-factor analysis of variance show a significant influence of the answers from each homogeneous subgroup (students: n = 460, trainees: n = 230, in-service teachers: n = 162) on the automated scoring. Findings are discussed in terms of implications for the use of automated scoring in educational assessment and its potentials and limitations.
dc.description.sponsorship [de]: DFG, Open Access-Publizieren Universität Mainz / Universitätsmedizin Mainz
dc.identifier.doi: http://doi.org/10.25358/openscience-5243
dc.identifier.uri: https://openscience.ub.uni-mainz.de/handle/20.500.12030/5247
dc.language.iso [de]: eng
dc.rights: CC-BY-4.0
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject.ddc [de_DE]: 300 Sozialwissenschaften
dc.subject.ddc [en_GB]: 300 Social sciences
dc.subject.ddc [de_DE]: 330 Wirtschaft
dc.subject.ddc [en_GB]: 330 Economics
dc.title [en_GB]: Automated scoring of teachers’ pedagogical content knowledge: a comparison between human and machine scoring
dc.type [de]: Zeitschriftenaufsatz (journal article)
jgu.journal.title [de]: Frontiers in Education
jgu.journal.volume [de]: 5
jgu.organisation.department [de]: FB 03 Rechts- und Wirtschaftswissenschaften
jgu.organisation.name: Johannes Gutenberg-Universität Mainz
jgu.organisation.number: 2300
jgu.organisation.place: Mainz
jgu.organisation.ror: https://ror.org/023b0x485
jgu.pages.alternative [de]: Art. 149
jgu.publisher.doi: 10.3389/feduc.2020.00149
jgu.publisher.issn [de]: 2504-284X
jgu.publisher.name [de]: Frontiers Media
jgu.publisher.place [de]: Lausanne
jgu.publisher.uri [de]: https://doi.org/10.3389/feduc.2020.00149
jgu.publisher.year: 2020
jgu.rights.accessrights: openAccess
jgu.subject.ddccode [de]: 300
jgu.subject.ddccode [de]: 330
jgu.type.dinitype [en_GB]: Article
jgu.type.resource [de]: Text
jgu.type.version [de]: Published version
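
Note on the agreement statistic: the abstract above reports human-machine agreement as a weighted kappa (κw = 0.66). As a purely illustrative sketch of how such a statistic can be computed, the snippet below assumes a quadratically weighted Cohen's kappa via scikit-learn; the record does not state which weighting scheme or software the study used, and the rating data shown is invented.

from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal scores (e.g., 0-3 points) assigned to the same answers
# by a trained human rater and by an automated scoring engine.
human_scores   = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]
machine_scores = [0, 1, 2, 2, 2, 1, 1, 3, 2, 0]

# Quadratic weights penalize large disagreements more than adjacent ones,
# a common choice for ordinal rating scales.
kappa_w = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")
print(f"Weighted kappa: {kappa_w:.2f}")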

Files

Original bundle

Name: wahlen_andreas-automated_scor-20201021123117590.pdf
Size: 886.22 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 3.57 KB
Format: Item-specific license agreed upon to submission