Please use this identifier to cite or link to this item: http://doi.org/10.25358/openscience-5243
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wahlen, Andreas | -
dc.contributor.author | Kuhn, Christiane | -
dc.contributor.author | Zlatkin-Troitschanskaia, Olga | -
dc.contributor.author | Gold, Christian | -
dc.contributor.author | Zesch, Torsten | -
dc.contributor.author | Horbach, Andrea | -
dc.date.accessioned | 2020-10-21T10:35:24Z | -
dc.date.available | 2020-10-21T10:35:24Z | -
dc.date.issued | 2020 | -
dc.identifier.uri | https://openscience.ub.uni-mainz.de/handle/20.500.12030/5247 | -
dc.description.abstract | To validly assess teachers’ pedagogical content knowledge (PCK), performance-based tasks with open-response formats are required. Automated scoring is considered an appropriate approach to reduce the resource-intensity of human scoring and to achieve more consistent scoring results than human raters. The focus is on the comparability of human and automated scoring of PCK for economics teachers. The answers of (prospective) teachers (N = 852) to six open-response tasks from a standardized and validated test were scored by two trained human raters and the engine “Educational SCoRIng Toolkit” (ESCRITO). The average agreement between human and computer ratings, κw = 0.66, suggests a convergent validity of the scoring results. The results of the single-sector variance analysis show a significant influence of the answers for each homogeneous subgroup (students = 460, trainees = 230, in-service teachers = 162) on the automated scoring. Findings are discussed in terms of implications for the use of automated scoring in educational assessment and its potentials and limitations. | en_GB
dc.description.sponsorship | DFG, Open Access-Publizieren Universität Mainz / Universitätsmedizin Mainz | de
dc.language.iso | eng | de
dc.rights | CC BY | *
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | *
dc.subject.ddc | 300 Sozialwissenschaften | de_DE
dc.subject.ddc | 300 Social sciences | en_GB
dc.subject.ddc | 330 Wirtschaft | de_DE
dc.subject.ddc | 330 Economics | en_GB
dc.title | Automated scoring of teachers’ pedagogical content knowledge : a comparison between human and machine scoring | en_GB
dc.type | Zeitschriftenaufsatz | de
dc.identifier.doi | http://doi.org/10.25358/openscience-5243 | -
jgu.type.dinitype | article | en_GB
jgu.type.version | Published version | de
jgu.type.resource | Text | de
jgu.organisation.department | FB 03 Rechts- und Wirtschaftswissenschaften | de
jgu.organisation.number | 2300 | -
jgu.organisation.name | Johannes Gutenberg-Universität Mainz | -
jgu.rights.accessrights | openAccess | -
jgu.journal.title | Frontiers in education | de
jgu.journal.volume | 5 | de
jgu.pages.alternative | Art. 149 | de
jgu.publisher.year | 2020 | -
jgu.publisher.name | Frontiers Media | de
jgu.publisher.place | Lausanne | de
jgu.publisher.uri | https://doi.org/10.3389/feduc.2020.00149 | de
jgu.publisher.issn | 2504-284X | de
jgu.organisation.place | Mainz | -
jgu.subject.ddccode | 300 | de
jgu.subject.ddccode | 330 | de
jgu.publisher.doi | 10.3389/feduc.2020.00149 |
jgu.organisation.ror | https://ror.org/023b0x485 |
Appears in collections: JGU-Publikationen

Files in This Item:
  File: wahlen_andreas-automated_scor-20201021123117590.pdf
  Size: 886.22 kB
  Format: Adobe PDF