
LARC 2025 Conference Presentation—Investigating Convergent Validity Evidence of an English Literacy Test

Investigating Convergent Validity Evidence of an English Literacy Test
Date & Time: April 4, 2025 | 05:00–05:30 p.m. PDT
Location: LARC, Fullerton, CA
Presenter(s):
- Yage Leah Guo (Center for Applied Linguistics)
- Reshmi Kompakha
- Rachel Myers
- Anna Zilberberg
Description:
Building a validity argument involves examining the relationships between test scores and other measures intended to assess the same or similar constructs, which provides convergent validity evidence for the test score interpretations (AERA, APA, & NCME, 2014). This paper presents such convergent validity evidence in support of the claims based on scores from an English literacy assessment.
Designed in alignment with the 2017 National Reporting System Educational Functioning Levels (NRS EFLs), this literacy test is composed of Reading items and Writing tasks presented to examinees in an integrated manner, organized into thematic exercises that reflect the domains and topic areas commonly addressed in adult ESL programs. Scale scores on the literacy test are interpreted in terms of the 2017 NRS EFLs and are used for NRS reporting. Three external variables measuring similar or related constructs were used to investigate convergent validity: (1) scores from another standardized measure of Reading and Writing for adult English language learners (Avant STAMP Pro), (2) existing program placement, and (3) teacher judgments of examinees’ English proficiency. A total of 67 students from one adult ESL program in Texas participated in the study. The relationship between the literacy test and each of these measures is briefly described next.
The Avant STAMP Pro Reading and Writing sections were chosen as measures of constructs similar to those of the literacy test. Avant STAMP Pro reports proficiency level scores on the ACTFL proficiency scale (ACTFL, 2012). We examined the relationship between the literacy test scale scores and the STAMP Pro classifications by computing the average literacy scale score of examinees at each STAMP Pro level, separately for Reading and Writing. In addition, Spearman rank-order correlations between the two sets of level classifications (ACTFL levels for STAMP Pro and NRS levels for the literacy test) were examined. As expected, the average scale scores on the literacy test increased steadily as the corresponding STAMP Pro level increased, and the correlations were moderate (0.63–0.65). These results suggest a moderately strong relationship between the literacy test and STAMP Pro and support the claim that the literacy test assesses English language proficiency.
For program class placement, students were divided into three class levels, and correlations between class level and scale score on the literacy assessment were calculated. As expected, these relationships were of moderate strength.
For teacher judgments, we used a single estimate of each student’s overall reading and writing proficiency level according to the 2017 NRS EFLs. For 82% of examinees, there was exact or adjacent agreement between teacher judgments and literacy test scores, for both the Reading and Writing sections.
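As a rough illustration of the statistics described above, the Python sketch below computes a mean scale score per level, a Spearman rank-order correlation between two ordinal classifications, and an exact/adjacent agreement rate. All level codes and scale scores here are invented for illustration; they are not the study’s data, and the analysis shown is a minimal sketch of these standard computations rather than the authors’ actual procedure.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data for illustration only; the study's actual scores
# (n = 67) are not reproduced here.
stamp_levels = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5])  # STAMP Pro (ACTFL) levels, coded ordinally
nrs_levels = np.array([1, 2, 2, 2, 3, 4, 4, 5, 5, 6])    # literacy test NRS EFL classifications
scale_scores = np.array([410, 425, 440, 452, 468, 480, 495, 510, 522, 540])

# Average literacy scale score at each STAMP Pro level; a monotone
# increase across levels is the pattern the paper reports.
for level in np.unique(stamp_levels):
    mean_score = scale_scores[stamp_levels == level].mean()
    print(f"STAMP Pro level {level}: mean scale score = {mean_score:.1f}")

# Spearman rank-order correlation between the two ordinal classifications;
# the study reported values in the 0.63-0.65 range on its real data.
rho, _ = spearmanr(stamp_levels, nrs_levels)
print(f"Spearman rho = {rho:.2f}")

# Exact/adjacent agreement between teacher judgments and test-based levels:
# a case counts as agreeing if the two NRS levels differ by at most one.
teacher_levels = np.array([1, 1, 2, 3, 3, 4, 4, 5, 6, 6])
agreement = np.mean(np.abs(teacher_levels - nrs_levels) <= 1)
print(f"Exact/adjacent agreement = {agreement:.0%}")
```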
In summary, the present study provides robust convergent validity evidence that the literacy test measures reading and writing proficiency among adult English language learners and places examinees appropriately within a hierarchical proficiency framework.