Initial Push

2025-11-15 23:55:07 -06:00
commit 8c209d1ae4
10 changed files with 742 additions and 0 deletions

BIN
AMSTUD Paper/main.pdf Normal file

Binary file not shown.

138
AMSTUD Paper/main.tex Normal file

@@ -0,0 +1,138 @@
\documentclass[12pt]{article}
%
%Margin - 1 inch on all sides
%
\usepackage[letterpaper]{geometry}
\usepackage{times}
\geometry{top=1.0in, bottom=1.0in, left=1.0in, right=1.0in}
%Doublespacing
%
\usepackage{setspace}
\doublespacing
%
%Rotating tables (e.g. sideways when too long)
%
\usepackage{rotating}
%
%Fancy-header package to modify header/page numbering (insert last name)
%
\usepackage{fancyhdr}
\pagestyle{fancy}
\lhead{}
\chead{}
\rhead{Anand \thepage}
\lfoot{}
\cfoot{}
\rfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
%To make sure we actually have header 0.5in away from top edge
%12pt is one-sixth of an inch. Subtract this from 0.5in to get headsep value
\setlength\headsep{0.333in}
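%Worked check of the arithmetic above: 12pt = 12/72in = 0.1667in,
%so headsep = 0.5in - 0.1667in = 0.3333in (rounded to 0.333in)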
\usepackage[style=mla]{biblatex}
\addbibresource{references.bib} % your .bib file name
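%Build sketch, assuming the biber backend (required by the biblatex-mla style);
%latexmk would run the equivalent sequence automatically:
%  pdflatex main && biber main && pdflatex main && pdflatex main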
\begin{document}
\begin{flushleft}
%%%%First page name, class, etc
Keshav Anand\\
Russell/Alexander\\
American Studies\\
31 October 2025\\
%%%%Title
\begin{center}
A Synthesis of Research on the Efficacy of Narrative Assessment in Secondary Education
\end{center}
%%%%Changes paragraph indentation to 0.5in
\setlength{\parindent}{0.5in}
%%%%Begin body of paper here
%TODO: ADD MLA PAGES FOR ALL CITATIONS
Assessment remains a critical component of education, serving as a means to evaluate student progress and mastery over a given subject.
Traditionally, quantitative assessments (e.g. multiple-choice tests, standardized exams) have been preferred for their objectivity and ease of grading \parencite{Lamiell2018}.
Even today, many standardized tests and important exams are primarily quantitative in nature, with American exams such as the SAT and ACT having recently undergone changes
to remove their essay components or make them optional \parencite{McGrath2021_SATessay,Semos2024_ACTwriting}. While quantitative assessments have their merits,
their defined scope limits their ability to capture a student's critical thinking and creativity \parencite{KU200970}.
Conversely, narrative assessments (e.g. essays, projects, presentations) with subjective grading
\footnote{Structured written responses scored with an inflexible rubric, such as numerical math problems or essay questions with rigid criteria, are not considered narrative assessments in this context.}
allow students to express
understanding in a more holistic and authentic manner \parencite{KU200970}. Recent developments in educational trends due to the rise of Artificial Intelligence (AI)
following the COVID-19 pandemic have led many educators to reconsider their methods, accelerating changes in curriculum and assessment \parencite{Kamalov2023New}.
Hence, this paper synthesizes research on the efficacy of subjective narrative assessment in secondary education when compared
to traditional quantitative assessments.\\
A central advantage of narrative assessments is their ability to allow students to demonstrate higher-order thinking skills.
While quantitative assessments often reward students for accurate recall of facts,
narrative assessments provide a means to distinguish a satisfactory understanding from a nuanced comprehension.
This is particularly important for distinguishing gifted students: while a standard 4-option multiple-choice question gives full credit for knowing the tested concept (and a 25\% chance of guessing correctly),
both the student who barely understands a concept and the student who has mastered it receive the same score \parencite{Liu2023Multiple-choice}.
University professors second this notion, agreeing that multiple-choice questions (MCQs) are limited in their ability to assess higher-order cognition \parencite{Liu2023Multiple-choice}.
In the context of a secondary education setting, the impact of narrative assessments is particularly pronounced due to the focus on knowledge retention.
Failing to properly distinguish between levels of understanding can inadvertently incentivize surface-level memorization,
where students fail to recall basic information learned in a class after the subject is no longer tested \parencite{Kooloos2019The}.
In fact, \textcite{Kooloos2019The} found that students lost about 33\% of gained knowledge when learning is geared towards short-term recall.
Known as the ``forgetting curve,'' this rapid loss of information can only be mitigated by constantly revisiting material or by understanding concepts at a deeper level \parencite{Kooloos2019The}.
Due to the open-ended nature of narrative assessments, students are encouraged to engage with material more deeply,
and when educators reward complex understanding, students are incentivized to internalize concepts rather than memorize facts \parencite{KU200970}.
Additionally, these benefits are compounded in secondary education because narrative assessment promotes creativity, which is beneficial for growth and development at younger ages \parencite{Redó2021Dimensions}.
Given that growth and development are central aims of secondary education, the creativity fostered by narrative assessments makes them an appealing option for educators.\\
Another vital component of narrative assessments is their ability to thoroughly evaluate student communication skills.
In an increasingly interconnected world, the ability to effectively communicate ideas is paramount,
with narrative assessments providing a platform for students to hone these skills \parencite{WILBY20191164}.
In fact, \textcite{WILBY20191164} found that narrative assessments have merit even in STEM (Science, Technology, Engineering, and Mathematics) subjects,
with the researchers finding that narrative assessments are useful for summative decision making in medical education.
Due to the ubiquitous importance of writing and communication skills across disciplines, educators' primary rationale for using narrative assessments
lies in their ability to foster these skills \parencite{Wilby2019Discriminating}.
These motives are backed by evidence that extends beyond communication itself, as it has been repeatedly shown that the ability to communicate a certain concept
— especially when supported by metacognitive strategies such as planning, monitoring, and evaluation —
is correlated with mastery of that concept \parencite{Hamzah2022Systematic}. While narrative assessments promote such communication skills,
students who are not proficient writers may be disadvantaged. \textcite{Lo2021Assessing} found that assessing students through essays
requires an implied mastery of the English language, which may unfairly penalize non-native speakers.
This issue is particularly sensitive in secondary education, where a substantial share of students are immigrants or non-native speakers
whose course evaluations then depend on their English proficiency. Hence, while narrative assessments promote communication skills,
educators must be wary of potential biases against non-native speakers.\\
While proponents of narrative assessments highlight their benefits, critics focus on the subjectivity of such assessments as a major flaw.
Unsurprisingly, bias and subjectivity are primary complaints among students, a majority of whom perceive essays and projects as unfairly graded \parencite{Bullock2019In}.
These complaints are not without merit, as inconsistencies are bound to arise in subjective evaluation due to educator bias and differences between sections and educators.
For example, the same history course can be taught by multiple teachers, where each teacher may have different expectations for essay responses.
In fact, \textcite{LOPERAOQUENDO2024101992} found that inter-rater reliability (IRR) for essay grading was often low,
with even rubric-based grading systems failing to ensure consistency between graders.
This inconsistency can lead to student frustration and a perception of unfairness, which can negatively impact student motivation and engagement.
While many focus on the negative aspects of subjectivity, some researchers argue that subjectivity can be beneficial in certain contexts.
For example, positive expectation bias — where educators hold higher expectations for certain students — can lead to improved performance
due to positive reinforcement of a student's potential and abilities \parencite{Boer2010Sustainability}.
Positive expectation bias is also useful in smoothing out performance outliers;
for example, a student having a bad day on an assessment date may be unfairly penalized by a quantitative assessment,
while positive expectation bias allows teachers to recognize and compensate for such outliers.
This benefits not only gifted students but also weaker students, and studies show
that teachers sometimes subconsciously apply shifting standards to grade struggling (yet hardworking) students leniently \parencite{Gil-Hernández_2024}.\\
\newpage
\printbibliography[title={Works Cited}]
\end{flushleft}
\end{document}

151
AMSTUD Paper/references.bib Normal file

@@ -0,0 +1,151 @@
@inbook{Lamiell2018,
title = {Some Historical Perspective on the Marginalization of Qualitative Methods Within Mainstream Scientific Psychology},
ISBN = {9781351136426},
url = {http://dx.doi.org/10.4324/9781351136426-2},
DOI = {10.4324/9781351136426-2},
booktitle = {Situating Qualitative Methods in Psychological Science},
publisher = {Routledge},
author = {Lamiell, James T.},
year = {2018},
month = jul,
pages = {1126}
}
@online{McGrath2021_SATessay,
author = {Steve McGrath},
title = {College Board Updates on the SAT Essay and Subject Tests},
year = {2021},
url = {https://info.methodlearning.com/blog/college-board-updates-on-the-sat-essay-and-subject-tests},
note = {Accessed: 2025-10-26},
organization = {Method Learning Blog}
}
@online{Semos2024_ACTwriting,
author = {Kristina Semos},
title = {What's Staying the Same on the New ACT? (Pt 2)},
year = {2024},
month = sep,
url = {https://www.ivyloungetestprep.com/blog/act-changes},
note = {Accessed: 2025-10-26},
organization = {IVY Lounge Test Prep}
}
@article{KU200970,
title = {Assessing students' critical thinking performance: Urging for measurements using multi-response format},
journal = {Thinking Skills and Creativity},
volume = {4},
number = {1},
pages = {70--76},
year = {2009},
issn = {1871-1871},
doi = {10.1016/j.tsc.2009.02.001},
url = {https://www.sciencedirect.com/science/article/pii/S1871187109000054},
author = {Kelly Y.L. Ku}
}
@article{Kamalov2023New,
title = {New Era of Artificial Intelligence in Education: Towards a Sustainable Multifaceted Revolution},
author = {Firuz Kamalov and David Santandreu Calonge and Ikhlaas Gurrib},
journal = {Sustainability},
year = {2023},
doi = {10.3390/su151612451}
}
@article{Liu2023Multiple-choice,
title = {Multiple-choice questions (MCQs) for higher-order cognition: Perspectives of university teachers},
author = {Qian Liu and Navé Wald and Chandima Daskon and T. Harland},
journal = {Innovations in Education and Teaching International},
year = {2023},
volume = {61},
pages = {802--814},
doi = {10.1080/14703297.2023.2222715}
}
@article{Kooloos2019The,
title={The Effect of Passive and Active Education Methods Applied in Repetition Activities on the Retention of Anatomical Knowledge},
author={J. Kooloos and Esther M. Bergman and Marieke A G P Scheffers and A. Schepens-Franke and M. Vorstenbosch},
journal={Anatomical Sciences Education},
year={2019},
volume={13},
pages={458--466},
doi={10.1002/ase.1924}
}
@article{WILBY20191164,
title = {Reliability of narrative assessment data on communication skills in a summative OSCE},
journal = {Patient Education and Counseling},
volume = {102},
number = {6},
pages = {1164--1169},
year = {2019},
issn = {0738-3991},
doi = {10.1016/j.pec.2019.01.018},
url = {https://www.sciencedirect.com/science/article/pii/S0738399118307493},
author = {Kyle John Wilby and Marjan J.B. Govaerts and Diana H.J.M. Dolmans and Zubin Austin and Cees {van der Vleuten}}
}
@article{Wilby2019Discriminating,
title={Discriminating Features of Narrative Evaluations of Communication Skills During an OSCE},
author={K. Wilby and M. Govaerts and Z. Austin and D. Dolmans},
journal={Teaching and Learning in Medicine},
year={2019},
volume={31},
pages={298--306},
doi={10.1080/10401334.2018.1529570}
}
@article{Hamzah2022Systematic,
title = {Systematic Literature Review on the Elements of Metacognition-Based Higher Order Thinking Skills (HOTS) Teaching and Learning Modules},
author = {Hainora Hamzah and M. I. Hamzah and Hafizhah Zulkifli},
journal = {Sustainability},
year = {2022},
doi = {10.3390/su14020813}
}
@article{Lo2021Assessing,
title={Assessing content knowledge through L2: mediating role of language of testing on students' performance},
author={Y. Lo and D. Fung and Xuyan Qiu},
journal={Journal of Multilingual and Multicultural Development},
year={2021},
volume={44},
pages={1013--1028},
doi={10.1080/01434632.2020.1854274}
}
@article{Redó2021Dimensions,
title={Dimensions of Creativity in Secondary School High-Ability Students},
author={Núria Arís Redó and María Ángeles Millán Gutiérrez and José-Diego Vargas Cano},
journal={European Journal of Investigation in Health, Psychology and Education},
year={2021},
volume={11},
pages={953--961},
doi={10.3390/ejihpe11030070}
}
@article{Boer2010Sustainability,
title = {Sustainability of teacher expectation bias effects on long-term student performance.},
author = {H. Boer and R. Bosker and M. Werf},
journal = {Journal of Educational Psychology},
year = {2010},
volume = {102},
pages = {168--179},
doi = {10.1037/a0017289}
}
@article{Bullock2019In,
title={In Pursuit of Honors: A Multi-Institutional Study of Students' Perceptions of Clerkship Evaluation and Grading.},
author={Justin L. Bullock and Cindy J. Lai and T. Lockspeiser and P. O'Sullivan and P. Aronowitz and Deborah Dellmore and C. Fung and Christopher Knight and K. Hauer},
journal={Academic medicine : journal of the Association of American Medical Colleges},
year={2019},
doi={10.1097/acm.0000000000002905}
}
@article{LOPERAOQUENDO2024101992,
title = {Rating writing: Comparison of holistic and analytic grading approaches in pre-service teachers},
journal = {Learning and Instruction},
volume = {94},
pages = {101992},
year = {2024},
issn = {0959-4752},
doi = {10.1016/j.learninstruc.2024.101992},
url = {https://www.sciencedirect.com/science/article/pii/S0959475224001191},
author = {Carolina Lopera-Oquendo and Anastasiya A. Lipnevich and Ignacio Mañez}
}
@article{Gil-Hernández_2024,
author = {Carlos J. Gil-Hernández and Irene Pañeda-Fernández and Leire Salazar and Jonatan Castaño Muñoz},
title = {Teacher Bias in Assessments by Student Ascribed Status: A Factorial Experiment on Discrimination in Education},
journal = {Sociological Science},
volume = {11},
number = {27},
issn = {2330-6696},
url = {http://dx.doi.org/10.15195/v11.a27},
doi = {10.15195/v11.a27},
pages = {743--776},
year = {2024},
}