Please use this identifier to cite or link to this item:
https://doi.org/10.25358/openscience-5243
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wahlen, Andreas | - |
dc.contributor.author | Kuhn, Christiane | - |
dc.contributor.author | Zlatkin-Troitschanskaia, Olga | - |
dc.contributor.author | Gold, Christian | - |
dc.contributor.author | Zesch, Torsten | - |
dc.contributor.author | Horbach, Andrea | - |
dc.date.accessioned | 2020-10-21T10:35:24Z | - |
dc.date.available | 2020-10-21T10:35:24Z | - |
dc.date.issued | 2020 | - |
dc.identifier.uri | https://openscience.ub.uni-mainz.de/handle/20.500.12030/5247 | - |
dc.description.abstract | To validly assess teachers’ pedagogical content knowledge (PCK), performance-based tasks with open-response formats are required. Automated scoring is considered an appropriate approach to reduce the resource intensity of human scoring and to achieve more consistent scoring results than human raters. This study focuses on the comparability of human and automated scoring of PCK for economics teachers. The answers of (prospective) teachers (N = 852) to six open-response tasks from a standardized and validated test were scored by two trained human raters and by the engine “Educational SCoRIng Toolkit” (ESCRITO). The average agreement between human and computer ratings, κw = 0.66, suggests convergent validity of the scoring results. The results of the single-factor analysis of variance show a significant influence of the answers for each homogeneous subgroup (students: n = 460, trainees: n = 230, in-service teachers: n = 162) on the automated scoring. Findings are discussed in terms of implications for the use of automated scoring in educational assessment, including its potentials and limitations. | en_GB |
dc.description.sponsorship | DFG, Open Access-Publizieren Universität Mainz / Universitätsmedizin Mainz | de |
dc.language.iso | eng | de |
dc.rights | CC BY | * |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | * |
dc.subject.ddc | 300 Sozialwissenschaften | de_DE |
dc.subject.ddc | 300 Social sciences | en_GB |
dc.subject.ddc | 330 Wirtschaft | de_DE |
dc.subject.ddc | 330 Economics | en_GB |
dc.title | Automated scoring of teachers’ pedagogical content knowledge : a comparison between human and machine scoring | en_GB |
dc.type | Zeitschriftenaufsatz (journal article) | de |
dc.identifier.doi | https://doi.org/10.25358/openscience-5243 | - |
jgu.type.dinitype | article | en_GB |
jgu.type.version | Published version | de |
jgu.type.resource | Text | de |
jgu.organisation.department | FB 03 Rechts- und Wirtschaftswissenschaften | de |
jgu.organisation.number | 2300 | - |
jgu.organisation.name | Johannes Gutenberg-Universität Mainz | - |
jgu.rights.accessrights | openAccess | - |
jgu.journal.title | Frontiers in education | de |
jgu.journal.volume | 5 | de |
jgu.pages.alternative | Art. 149 | de |
jgu.publisher.year | 2020 | - |
jgu.publisher.name | Frontiers Media | de |
jgu.publisher.place | Lausanne | de |
jgu.publisher.uri | https://doi.org/10.3389/feduc.2020.00149 | de |
jgu.publisher.issn | 2504-284X | de |
jgu.organisation.place | Mainz | - |
jgu.subject.ddccode | 300 | de |
jgu.subject.ddccode | 330 | de |
jgu.publisher.doi | 10.3389/feduc.2020.00149 | - |
jgu.organisation.ror | https://ror.org/023b0x485 | - |
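The abstract above reports human–machine agreement as a weighted kappa (κw = 0.66). As a point of reference only, the following is a minimal sketch of how such an agreement coefficient can be computed from two raters' item scores. The quadratic weighting and the example scores are assumptions for illustration; the record does not state which weighting scheme or toolkit the authors used.

```python
# Illustrative only: weighted Cohen's kappa between a human rater and an
# automated scorer. Quadratic weights and the scores below are assumptions;
# the record does not specify how kappa_w = 0.66 was computed.
from sklearn.metrics import cohen_kappa_score

# Hypothetical item scores (e.g., 0-3 rubric points) for the same answers.
human_scores   = [0, 1, 2, 3, 2, 1, 0, 3, 2, 2]
machine_scores = [0, 1, 2, 2, 2, 1, 1, 3, 2, 3]

# Weighted kappa penalizes larger disagreements more strongly, which suits
# ordinal scoring rubrics; 1.0 is perfect agreement, 0 is chance level.
kappa_w = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")
print(f"weighted kappa: {kappa_w:.2f}")
```

By the common Landis and Koch convention, values between 0.61 and 0.80 are read as substantial agreement, the range the abstract reports.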
Appears in collections: JGU-Publikationen
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
wahlen_andreas-automated_scor-20201021123117590.pdf | - | 886.22 kB | Adobe PDF |