
Adult ESL Learner Assessment: Purposes and Tools


Miriam Burt and Fran Keenan
National Center for ESL Literacy Education (NCLE)
September 1995

Learner assessment is conducted in adult basic education (ABE) and adult English as a second language (ESL) educational programs for many reasons: to place learners in appropriate instructional levels, to measure their ongoing progress, to qualify them to enroll in academic or job training programs, to verify program effectiveness, and to demonstrate learner gains in order to justify continued funding for a program. Because of this multiplicity of objectives, learner assessment involves using a variety of instruments and procedures to gather data on a regular basis to ensure that programs are "identifying learners' needs, documenting the learners' progress toward meeting their own goals, and ascertaining the extent to which the project objectives are being met" (Holt, 1994, p. 6).

This digest looks at learner assessment in adult ESL programs. It describes commercially available tests and alternative assessment tools, discusses key issues in assessment, and highlights some of the differences between assessment and evaluation.

Commercially Available Tests
In adult basic education, commercially available instruments such as the Test of Adult Basic Education (TABE) and the Adult Basic Learning Examination (ABLE) predominate as assessment tools because they have construct validity and scoring reliability, are easy to administer to groups, require minimal training on the part of the teacher, and are often stipulated by funding sources (Solórzano, 1994; Wrigley, 1992). The ESL tests most commonly used in adult education programs are the Basic English Skills Test (BEST) and the CASAS ESL Appraisal (Sticht, 1990).

The BEST, originally developed by the Center for Applied Linguistics in 1982 to test newly arrived Southeast Asian refugees, assesses English literacy (reading and writing) skills and listening and speaking skills. Although this test measures language and literacy skills at the lowest levels (no speaking is necessary for some items, as learners respond to pictures by pointing), it requires some training on the part of the tester. Also, the oral segment is lengthy and must be administered individually (Sticht, 1990).

The Comprehensive Adult Student Assessment System (CASAS) of California has developed competencies, training manuals, and assessment tools for ABE and ESL programs. The CASAS ESL Appraisal is a multiple-choice test that includes reading and listening items. It is easy to administer because it can be given to groups, but it does not test oral skills (Sticht, 1990).

Other tests used for ESL are the NYSPLACE Test, published by New York State, which is designed for placement and includes a basic English literacy screening and an oral assessment; the Basic Inventory of Natural Language (BINL), which provides a grammatical analysis of spoken language; the Henderson-Moriarty ESL Placement (HELP) test, which was designed to measure the literacy skills (in the native language and in English) and the oral English proficiency of Southeast Asian refugee adults; and Literacy Volunteers of America's ESL Oral Assessment (ESLOA), which assesses a learner's ability to speak and understand English.

Limitations of Commercially Available Tests
The use of commercially available tests with adult learners is problematic because these tools may not adequately assess individual learner strengths and weaknesses, especially at the lowest levels of literacy. Such tests do not necessarily measure what has been learned in class, nor do they address learner goals (Lytle & Wolfe, 1989; Wrigley, 1992).

Some testing issues are unique to ESL learners. It is not always clear whether ESL learners have trouble with selected test items because of difficulties with reading, with the vocabulary, or with the cultural notions underlying the test items (Wrigley & Guth, 1992). Another problem may be that some low-literate ESL learners are unfamiliar with classroom conventions such as test taking. Henderson and Moriarty, in their introduction to the HELP test, advise that ESL programs should evaluate whether learners possess the functional skills necessary for writing (such as holding a pencil), are familiar with classroom behaviors (such as responding to teacher questions), and are able to keep up with the pace of learning in beginning-level classes (Wrigley & Guth, 1992).

Some would argue that the tests themselves are not the problem, but rather their inappropriate use: for example, administering a commercially available adult literacy test (which assesses reading and writing skills) to measure English language proficiency (listening and speaking ability). Funding stipulations may specify inappropriate instruments (Solórzano, 1994) or even tests developed for native speakers of English (e.g., TABE, ABLE). Wilde (1994) suggests that programs maximize the benefits of commercially available, norm-referenced, and diagnostic tests by:

(1) choosing tests that match the demographic and educational backgrounds of the learners;

(2) interpreting scores carefully;

(3) ensuring that test objectives match the program objectives and curricular content; and

(4) using additional instruments to measure learner achievement.

Alternatives to Commercially Available Tests
Due in part to the drawbacks of the tests described above, many adult (and K-12) educators promote the use of alternative assessment tools that incorporate learner goals and relate more closely to instruction (Lytle & Wolfe, 1989). Alternative assessment (also known as classroom-based, authentic, or congruent assessment) includes such tools as surveys, interviews, checklists, observation measures, teacher-developed tests, learner self-assessment, portfolios and other performance samples, and performance-based tests (Balliro, 1993; Genesee, 1994; Isserlis, 1992; Wrigley, 1992).

Alternative assessment allows for flexibility in gathering information about learners and measures what has been taught in class. Learner portfolios, collections of an individual learner's work, are a common example of alternative assessment. Portfolios can include such items as reports on books read, notes from learner/teacher interviews, learners' reflections on their progress, writing samples, data from performance-based assessments, and scores on commercially available tests (Fingeret, 1993; Wrigley, 1992). From learner interviews, administrators and instructors get information to help with placement decisions and to determine an individual's progress. In one survey of adult education teachers, 80% reported using oral interviews to assess what students needed and what they were learning (Davis & Yap, 1992). From program-developed performance-based tests, instructors, administrators, and the learners themselves get information on how learners use English and basic skills in everyday situations. These tests, in which items (such as reading a chart or locating information on a schedule) are set in actual contexts the learners might encounter (Alamprese & Kay, 1993), are common in workplace programs, where authentic materials such as job schedules, pay stubs, and union contracts provide the context in which literacy skills are assessed (Holt, 1994).

Alternative assessment procedures, however, are not a panacea. Maintaining portfolios is time-consuming for both learners and teachers. The cultural expectations and educational backgrounds of ESL learners might make them especially resistant to the use of participatory and other alternative assessments (Wrigley & Guth, 1992). Furthermore, funders often require "hard data," and it is difficult to quantify outcomes without using commercially available tests. Finally, data from alternative assessment instruments may not meet eligibility requirements for job training programs, higher level classes, or certification (Balliro, 1993; Lytle & Wolfe, 1989).

Because of these issues, ESL programs often use a combination of commercially available and program-developed assessment instruments to assess literacy and language proficiency (Guth & Wrigley, 1992; Wrigley, 1992).

Learner Assessment and Program Evaluation
Although learner progress, as measured by both commercially available and alternative assessment instruments, is an indicator of program effectiveness, it is not the only factor in evaluating ABE and adult ESL programs. Other quantifiable indicators include learner retention, learner promotion to higher levels of instruction, and learner transition to jobs or to other types of programs (e.g., moving from an adult ESL program to a vocational program, or to a for-credit ESL or academic program). Less quantifiable learner outcomes include heightened self-esteem and increased participation in community, school, and church events (Alamprese & Kay, 1993).

Other measures of adult education program effectiveness depend to a large extent on program goals. In family literacy programs, increased parental participation in children's learning, parents reading more frequently to their children, and the presence of more books in the home might indicate success (Holt, 1994). Workplace program outcomes might include promotion to higher level jobs, increased participation in work teams, and improved worker attitude that shows up in better job attendance and in a willingness to learn new skills (Alamprese & Kay, 1993).

Conclusion
Assessment is problematic for adult ESL educators searching for tools that will quantify learner gains and program success for funders, demonstrate improvement in English proficiency and literacy skills to learners, and clarify for the educators themselves what has been learned and what has not. Dissatisfaction with commercially available tools has been widespread, and many teachers have felt left out of the process of determining how to assess learner gains in a way that helps teaching and learning. Current practice and theory seem to recommend using a combination of commercially available and program-developed alternative assessment instruments. Further research in this area, by both teachers and researchers, is warranted.



References

Alamprese, J.A., & Kay, A. (1993). Literacy on the cafeteria line: Evaluation of Skills Enhancement Training Program. Washington, DC: COSMOS Corporation and Ruttenberg, Kilgallon & Associates. (ERIC No. ED 368 933)

Balliro, L. (1993). What kind of alternative? Examining alternative assessment. TESOL Quarterly, 27(3), 558-560.

Davis, A.E., & Yap, K.O. (1992). Results of field research: ABE/ESL assessment. Portland, OR: Northwest Regional Educational Laboratory. (ERIC No. ED 376 379)

Fingeret, H.A. (1993). It belongs to me: A guide to portfolio assessment in adult education programs. Washington, DC: U.S. Department of Education. (ERIC No. ED 359 352)

Genesee, F. (1994). President's message: Assessment alternatives. TESOL Matters, 4(5), 2.

Guth, G.J.A., & Wrigley, H.S. (1992). Adult ESL literacy: Programs and practice. Technical report. San Mateo, CA: Aguirre International. (ERIC No. ED 348 895)

Holt, D.D. (Ed.). (1994). Assessing success in family literacy projects: Alternative approaches to assessment and evaluation. Washington, DC and McHenry, IL: Center for Applied Linguistics and Delta Systems. (ERIC No. ED 375 688)

Isserlis, J. (1992). What you see: Ongoing assessment in the ESL/literacy classroom. Adventures in Assessment, 2, 41-48.

Lytle, S.L., & Wolfe, M. (1989). Adult literacy education: Program evaluation and learner assessment. Columbus, OH: ERIC Clearinghouse on Adult, Career, and Vocational Education. (ERIC No. ED 315 665)

Solórzano, R.W. (1994). Instruction and assessment for limited-English-proficient adult learners (Technical report TR94-06). Philadelphia, PA: National Center on Adult Literacy. (ERIC No. ED 375 686)

Sticht, T.G. (1990). Testing and assessment in adult basic education and English as a second language programs. San Diego, CA: Applied Behavioral & Cognitive Sciences, Inc. (ERIC No. ED 317 867)

Wilde, J. (1994). Next steps: Using the results to refine the project. In D.D. Holt (Ed.), Assessing success in family literacy projects: Alternative approaches to assessment and evaluation (pp. 119-133). Washington, DC and McHenry, IL: Center for Applied Linguistics and Delta Systems.

Wrigley, H.S. (1992). Learner assessment in adult ESL literacy. ERIC Digest. Washington, DC: National Center for ESL Literacy Education. (ERIC No. ED 353 863)

Wrigley, H.S., & Guth, G.J.A. (1992). Bringing literacy to life: Issues and options in adult ESL literacy. San Mateo, CA: Aguirre International. (ERIC No. ED 348 896)

Resources

Adventures in Assessment: Learner-Centered Approaches to Assessment and Evaluation in Adult Literacy. (This journal is available from SABES Central Resource Center, World Education, 210 Lincoln Street, Boston, MA 02111.)

Annotated Bibliography on ABE/ESL Assessment. (1992). (Davis & Arter). (ERIC No. ED 367 191)

English as a Second Language Model Standards for Adult Education Programs. (1992). (Available from Bureau of Publications, California Department of Education, P.O. Box 271, Sacramento, CA 95812-0271.)

Reviews of English Language Proficiency Tests. (1987). (Alderson, Krahnke, & Stansfield). (Available from TESOL, 1660 Cameron Street, Suite 300, Alexandria, VA 22314.)


This document was produced at the Center for Applied Linguistics (4646 40th Street, NW, Washington, DC 20016; 202-362-0700) with funding from the U.S. Department of Education (ED), Office of Educational Research and Improvement, under contract no. RI 93002010. The opinions expressed in this report do not necessarily reflect the positions or policies of ED. This document is in the public domain and may be reproduced without permission.