
Assessing Children For The Presence Of A Disability

Methods of Gathering Information



Source

National Information Center for Children and Youth with Disabilities

Contents

Introduction to Assessment

Methods of Gathering Information

Parents' Role in the Assessment Process

Assessing Students Who Are Culturally and Linguistically Diverse

Primary Areas of Assessment

Putting It All Together: Interpreting Results and Summary

References and List of Publishers





One of the cornerstones of the IDEA's evaluation requirements is that it is inappropriate and unacceptable to base any eligibility or placement decision upon the results of only one procedure [34 Code of Federal Regulations (CFR) Section 300.532(d)]. The child must be assessed "in all areas related to the suspected disability, including, if appropriate, health, vision, hearing, social and emotional status, general intelligence, academic performance, communicative status, and motor abilities" [34 CFR Section 300.532(f)].

Because standardized tests are convenient and plentiful, it is perhaps tempting to administer a battery (group) of tests to a student and make an eligibility or placement determination based upon the results. However, tests alone will not give a comprehensive picture of how a child performs or what he or she knows or does not know. Evaluators need to use a variety of tools and approaches to assess a child, including observing the child in different settings to see how he or she functions in those environments, interviewing individuals who know the child to gain their insights, and testing the child to evaluate his or her competence in whatever skill areas appear affected by the suspected disability, as well as those that may be areas of strength. In recent years, a number of other approaches have also come into use for collecting information about students; these include curriculum-based assessment, ecological assessment, task analysis, dynamic assessment, and assessment of learning style. These approaches yield rich information about students, are especially important when assessing students who come from culturally or linguistically diverse backgrounds, and are therefore critical methods in the overall approach to assessment. Students with medical or mental health problems may also have assessment information from sources outside of the school. Such information would need to be considered along with assessment information from the school's evaluation team in making appropriate diagnoses, placement decisions, and instructional plans.

Only by collecting data through a variety of approaches (observations, interviews, tests, curriculum-based assessment, and so on) and from a variety of sources (parents, teachers, specialists, peers, and the student) can an adequate picture be obtained of the child's strengths and weaknesses. Synthesized, this information can be used to determine the specific nature of the child's special needs and whether the child needs special services and, if so, to design an appropriate program.

Reviewing School Records

School records can be a rich source of information about the student and his or her background. The number of times the student has changed schools may be of interest; frequent school changes can be disruptive emotionally as well as academically and may be a factor in the problems that have resulted in the student's being referred for assessment. Attendance is another area to note; are there patterns in absences (e.g., during a specific part of the year, as is the case with some students who have respiratory problems or allergies), or is there a noticeable pattern of declining attendance, which may be linked to a decline in motivation, an undiagnosed health problem, or a change within the family?

The student's past history of grades is usually of interest to the assessment team as well. Is the student's current performance in a particular subject typical of the student, or is the problem being observed something new? Are patterns noticeable in the student's grades? For example, many students begin the year with poor grades and then show gradual improvement as they get back into the swing of school. For others, the reverse may be true: During the early part of the year, when prior school material is being reviewed, they may do well, with declines in their grades coming as new material is introduced. Also, transition points such as beginning the fourth grade or middle school may cause students problems; the nature and purpose of reading, for example, tends to change when students enter the fourth grade, where reading to learn content becomes more central. Similarly, middle school requires students to assume more responsibility for long-term projects (Hoy & Gregg, 1994). These shifts may bring about a noticeable decline in grades for some students.

Test scores are also important to review. Comparing these scores to a student's current classroom performance can indicate that the student's difficulties are new ones, perhaps resulting from some environmental change that needs to be investigated more fully, or the comparison may show that the student has always found a particular skill area to be problematic. "In this situation, the current problems the student is experiencing indicate that the classroom demands have reached a point that the student requires more support to be successful" (Hoy & Gregg, 1994, p. 37).

Looking at Student Work

Often, an initial part of the assessment process includes examining a student's work, either by selecting work samples that can be analyzed to identify academic skills and deficits, or by conducting a portfolio assessment, where folders of the student's work are examined.

When collecting work samples, the teacher selects work from the areas where the student is experiencing difficulty and systematically examines them. The teacher might identify such elements as how the student was directed to do the activity (e.g., orally, in writing), how long it took the student to complete the activity, the pattern of errors (e.g., reversals when writing, etc.), and the pattern of correct answers. Analyzing the student's work in this way can yield valuable insight into the nature of his or her difficulties and suggest possible solutions.

Maintaining portfolios of student work has become a popular way for teachers to track student progress. By assembling in one place the body of a student's work, teachers can see how a student is progressing over time, what problems seem to be recurring, what concepts are being grasped or not grasped, and what skills are being developed. The portfolio can be analyzed in much the same way as selective work samples, and can form the basis for discussions with the student or other teachers about difficulties and successes and for determining what modifications teachers might make in their instruction.

Prereferral Procedures

Many school systems recommend or require that, before an individualized evaluation of a student is conducted, his or her teacher meet with an assistance team to discuss the nature of the problem and what possible modifications to instruction or the classroom might be made. These procedures are known as prereferral. Prereferral procedures have arisen out of a number of research studies documenting faulty referral practices, including, among other practices, the overreferral of students who come from backgrounds that are culturally or linguistically different from the majority culture, those who are hard to teach, or those who are felt to have behavioral problems. According to Overton (1992), "the more frequent use of better prereferral intervention strategies is a step forward in the prevention of unnecessary evaluation and the possibility of misdiagnosis and overidentification of special education students" (p. 6).

This process recognizes that many variables affect learning; rather than first assuming that the difficulty lies within the student, the assistance team and the teacher will look specifically at what variables (e.g., classroom, teacher, student, or an interaction of these) might be affecting this particular student. Examining student records and work samples and conducting interviews and observations are part of the assistance team's efforts. These data-gathering approaches are intended to specify the problem more precisely and to document its severity. Modifications to the teacher's approach, to the classroom, or to student activities may then be suggested, attempted, and documented; if no progress is made within a specific amount of time, then the student is referred for an individualized evaluation. It is important for teachers to keep track of the specific modifications they attempt with a student who is having trouble learning or behaving, because these can provide valuable information to the assessment team at the point the student is referred for evaluation.

Observation

Observing the student and his or her environment is an important part of any assessment process. Observations in the classroom and in other settings where the student operates can provide valuable information about his or her academic, motor, communication, or social skills; behaviors that contribute to or detract from learning; and overall attitude or demeanor.

Observing the student's environment(s) and his or her behavior within those environments can identify the factors that are influencing the student. For the information from observations to be useful, the team must first define the purpose for the observation and specify:

  • Who will make the observation;

  • Who or what will be observed;

  • Where the observation will take place (observing a range of situations where the student operates is recommended);

  • When the observation will take place (a number of observations at different times is also important); and

  • How the observations will be recorded. (Wallace, Larsen, & Elksnin, 1992, p. 12).

Observations are a key part of some of the assessment methods that will be discussed later in this section, including curriculum-based assessment, ecological assessment, and task analysis. There are many ways in which to record what is observed; the box below entitled "Common Observational Techniques" lists and briefly describes the more common observational methods.


Common Observational Techniques

Anecdotal Records: The observer describes incidents or behaviors observed in a particular setting in concrete, narrative terms (as opposed to drawing inferences about feelings or motives). This type of record allows insight into cause and effect by detailing what occurred before a behavior took place, the behavior itself, and consequences or events that occurred after the behavior.

Event Recording: The observer is interested in recording specific behavioral events (such as how many times the student hits or gets out of his or her seat). A tally sheet listing the behaviors to be observed and counted is useful; when the observer sees the behavior of interest, he or she can simply make a tick mark on the sheet.

Duration Recording: This method usually requires a watch or clock, so that a precise measurement of how much time a student spends doing something of concern to the teacher or assessment team (e.g., talking to others, tapping, rocking) can be recorded.

Time-sampling Recording: With this technique, observers count the number of times a behavior occurs during a specific time interval. Rather than observe for long periods of time and tally all incidences of the behavior causing concern, the observer divides the observation period into equal time units and observes and tallies behavior only during short periods of time. Based upon the time sampling, predictions can then be made about the student's total behavior. (A short sketch of this arithmetic follows this box.)

Checklists and Rating Scales: A checklist usually requires the observer to note whether a particular characteristic is present or absent, while a rating scale typically asks the observer to note the degree to which a characteristic is present or how often a behavior occurs. There are many commercially available checklists and rating scales, but they may be developed locally as well.

Sources: Swanson & Watson, 1989, pp. 273-277; Wallace, Larsen, & Elksnin, 1992, pp. 12-13.
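
To make the arithmetic behind event recording and time sampling concrete, the brief sketch below (written in Python, with invented counts purely for illustration) tallies a behavior across sampled intervals and extrapolates to the full observation period. Any real recording form would, of course, be designed by the assessment team.

    # Hypothetical time-sampling record: the observer watches 10-second
    # intervals and marks whether the behavior of interest (e.g., out of
    # seat) occurred during each sampled interval.
    interval_seconds = 10
    sampled_intervals = 60            # intervals actually observed (10 minutes)
    intervals_with_behavior = 15      # tick marks on the tally sheet

    proportion = intervals_with_behavior / sampled_intervals

    # Extrapolate to a full 30-minute period (180 ten-second intervals).
    total_intervals = (30 * 60) // interval_seconds
    estimate = round(proportion * total_intervals)

    print(f"Behavior in {proportion:.0%} of sampled intervals; "
          f"estimated {estimate} of {total_intervals} intervals overall.")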


While observations can yield useful information about the student and his or her environments, a number of errors can occur during observations and distort or invalidate the information collected. One source of error may come from the observer -- he or she must record accurately, systematically, and without bias. If his or her general impression of the student influences how he or she rates that student with regard to specific characteristics, the data will be misleading and inaccurate. This can be especially true if the student comes from a background that is different from the majority culture. In such cases, it is important that the observer have an understanding of, and a lack of bias regarding, the student's cultural or language group.

Often, multiple observers are used to increase the reliability of the observational information collected. All observers should be fully trained in how to collect information using the specific method chosen (e.g., time-sampling using a checklist) and how to remain unobtrusive while observing and recording, so as not to influence the student's behavior. It is also important to observe more than once, in a number of situations or locations, and at various times, and to integrate these data with information gathered through other assessment procedures. Decisions should not be made based upon a narrow range of observational samples.

Interviews

Interviewing the student in question, his or her parents, teachers, and other adults or peers can provide a great deal of useful information about the student. Ultimately, "an interview should be a conversation with a purpose" (Wallace, Larsen, & Elksnin, 1992, p. 16), with questions designed to collect information that "relates to the observed or suspected disability of the child" (p. 260). Preparing for the interview may involve a careful review of the student's school records or work samples, for these may help the assessment team identify patterns or areas of specific concern that can help determine who should be interviewed and some of the questions to be asked. Parents, for example, may be able to provide detailed information about the child's academic or medical background. It is especially important that they contribute their unique, "insider" perspective on their child's functioning, interests, motivation, difficulties, and behavior in the home or community. They may have valuable information to share about possible solutions to the problems being noted. Teachers can provide insight into the types of situations or tasks that the child finds demanding or easy, what factors appear to contribute to the child's difficulties, and what has produced positive results (e.g., specific activities, types of rewards) (Wodrich & Joy, 1986). The student, too, may have much to say to illuminate the problem. "All persons interviewed should be asked if they know of information important to the solution of the academic or behavior problem that was not covered during the interview" (Hoy & Gregg, 1994, p. 44).

Organizing interview results is essential. Hoy and Gregg (1994) suggest that the interviewer might summarize the "perceptions of each person interviewed in a way that conveys similarities and differences in viewpoints" (p. 46), including:

  • perceptions of the primary problem and its cause,

  • what attempts have been made to solve or address the problem,

  • any recent changes in the problem's severity, and

  • student strengths and weaknesses.

Testing

Most assessments include tests, although this has become increasingly controversial. Many educators question the usefulness of the information gained from tests, for reasons that will be discussed in a moment. However controversial testing may be, this News Digest will nonetheless present a basic overview of the issues, because testing so often forms a part of the assessment process. Parents, teachers, and other professionals may find this basic information helpful (a) for understanding some of the controversy surrounding testing and, thus, what principles schools need to consider when using standardized tests, and (b) for identifying what sources of information about tests are available and what alternatives to testing exist.

Standardized tests are very much a part of the education scene, as we all know; most of us have taken many such tests in our lifetime. Tests may be informal -- meaning a measure developed locally -- or they may be commercially developed, formal measures, commonly called standardized tests. Unlike informal tests, standardized tests have detailed procedures for administration, timing, and scoring. A wide variety of tests is available to assess the different skill areas.

Some tests are known as criterion-referenced tests. This means that they are scored according to a standard, or criterion, that the teacher, school, or test publisher decides represents an acceptable level of mastery. An example of a criterion-referenced test might be a teacher-made spelling test where there are 20 words to be spelled and where the teacher has defined an "acceptable level of mastery" as 16 correct (or 80%). These tests, sometimes called content-referenced tests, are concerned with the mastery of specific, defined skills; the student's performance on the test indicates whether or not he or she has mastered those skills.

Other tests are known as norm-referenced tests. Scores on these tests are not interpreted according to an absolute standard or criterion (i.e., 8 out of 10 correct) but, rather, according to how the student's performance compares with that of a particular group of individuals. In order for this comparison to be meaningful, a valid comparison group -- called a norm group -- must be defined. A norm group is a large number of children who are representative of all the children in that age group. Such a group can be obtained by selecting children who have the characteristics of children across the United States -- that is, a certain percentage must be from each gender, from various ethnic backgrounds (e.g., Caucasian, African American, American Indian, Asian, Hispanic), from each geographic area (e.g., Southeast, Midwest), and from each socioeconomic class. By having all types of children take the test, the test publisher can provide information about how various types of children perform on it. (This information -- what types of students comprised the norm group and how each type performed on the test -- is generally given in the manuals that accompany the test.) The school will compare the scores of the child being evaluated to the scores obtained by the norm group. This helps evaluators determine whether the child is performing at a level typical of, below, or above that expected for children of a given ethnicity, socioeconomic status, age, or grade.
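
As a rough illustration of how such a comparison works -- a sketch only, since real tests publish their own norm tables and score conversions -- a child's raw score is often expressed as a standard score based on the norm group's mean and standard deviation:

    from statistics import NormalDist

    # Hypothetical norm-group statistics of the kind reported in a test manual.
    norm_mean, norm_sd = 100.0, 15.0
    child_score = 88.0

    # Standard (z) score: distance from the norm mean in standard deviations.
    z = (child_score - norm_mean) / norm_sd

    # Approximate percentile rank, assuming roughly normal score distribution.
    percentile = NormalDist().cdf(z) * 100
    print(f"z = {z:.2f}; percentile rank of about {percentile:.0f}")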

Not all tests use large, representative norm groups, however; some tests were normed using a group of individuals who were not representative of the population in general. For example, on one such test, the norm group may have included few or no African American, Hispanic, or Asian students. Because it is not known how such students typically perform on the test, there is nothing to which an individual student's scores can be compared, which has serious implications for the interpretation of results.

Thus, before making assumptions about a child's abilities based upon test results, it is important to know something about the group to which the child is being compared -- particularly whether or not the student is being compared to children who are similar in ethnicity, socioeconomic status, and so on. The more unlike the child the norm group is, the less valuable the results of testing will generally be. This is one of the areas in which standardized testing has come under considerable criticism: often, test administrators do not use the norm group information appropriately, or there may be no children in the norm group who are similar to the child being tested. Furthermore, many tests were originally developed some time ago, and the norm groups reported in the test manual are not similar at all to the children being tested today.

Selecting an Appropriate Instrument. The similarity of the norm group to the student being tested is just one area to be carefully considered by the professionals who select and administer standardized tests. Choosing which test is appropriate for a given student requires investigation; it is extremely important that those responsible for test selection do not just use what is available to or "always used by" the school district or school. The child's test results will certainly influence eligibility decisions, instructional decisions, and placement decisions, all of which have enormous consequences for the child. If the child is assessed with an instrument that is not appropriate for him or her, the data gathered are likely to be inaccurate and misleading, which in turn results in faulty decisions regarding that child's educational program. This is one of the reasons that many educators object vehemently to standardized testing as a means of making decisions about a student's strengths, weaknesses, and educational needs.

Therefore, selecting instruments with care is vital, as is the need to combine any information gathered through testing with information gathered through other approaches (e.g., interviews, observations, dynamic assessment).

Given the number of standardized tests available today, how does the individual charged with testing select an appropriate test for a given student? Here are some suggestions.

  1. Consider the student's skill areas to be assessed, and identify a range of tests that measure those skill areas. There are a variety of books that can help evaluators identify what tests are available; one useful reference book is Tests: A Comprehensive Reference for Assessments in Psychology, Education, and Business (3rd edition) by Sweetland and Keyser (1991). Another is A Consumer's Guide to Tests in Print (Hammill, Brown, & Bryant, 1992). Both books describe what each available test claims to measure, the age groups for which it is appropriate, whether it is group- or individually administered (all testing of children with suspected disabilities must be individualized), how long it takes to administer, and much more. Additionally, the two NICHCY bibliographies -- one for families and one for schools -- available separately from this News Digest list many books on assessment that describe and critique a subset of the tests available in any given skill area. Taking advantage of the review information available on tests is a critical responsibility of all those charged with assessing students and making decisions about their education.

  2. Investigate how suitable each test identified is for the student to be assessed, and select those that are most appropriate. A particularly valuable resource for evaluating tests is the Mental Measurements Yearbook (Conoley & Kramer, 1992), which describes tests in detail and includes expert reviews of many of them. This yearbook is typically available in professional libraries for teachers, in university libraries, and in the reference section of many public libraries. Publishers of tests generally also make literature available to help professionals determine whether a test is suitable for a specific student. This literature typically includes sample test questions; information on how the test was developed; a description of what groups of individuals (e.g., ethnic groups, ages, grade levels) were included in the "norm" group; and general guidelines for administration and interpretation.

Some questions professionals consider when reviewing a test are:

  • According to the publisher or expert reviewers, what, specifically, is the test supposed to measure? Is its focus directly relevant to the skill area(s) to be assessed? Will student results on the test address the educational questions being asked? (In other words, will the test provide the type of educational information that is needed?) If not, the test is not appropriate for that student and should not be used.

  • Is the test reliable and valid? These are two critical issues in assessment. Reliability refers to the degree to which a child's results on the test are the same or similar over repeated testing. If a test is not reliable or if its reliability is uncertain -- meaning that it does not yield similar results when the student takes the test again -- then it should not be used. Validity refers to the degree to which the test measures what it claims to measure. For example, if a test claims to measure anxiety, a person's scores should be higher under a stressful situation than under a nonstressful situation. Test publishers make available specimen sets that will typically report the reliability and validity of the test. This information may also be reported in books describing the test, in the Mental Measurements Yearbook (Conoley & Kramer, 1992), or in many of the books listed in the reference section of this News Digest or in the two NICHCY bibliographies on assessment (available separately from this document). (A rough sketch of how a test-retest reliability coefficient might be computed appears after this list.)

  • Is the content/skill area being assessed by the test appropriate for the student, given his or her age and grade? (Scope and sequence charts that identify the specific hierarchy of skills for different academic areas are useful here.) If not, there is no reason to use the test.

  • If the test is norm-referenced, does the norm group resemble the student? This point was mentioned above and is important for interpretation of results.

  • Is the test intended to evaluate students, to diagnose the specific nature of a student's disability or academic difficulty, to inform instructional decisions, or to be used for research purposes? Many tests will indicate that a student has a disability or specific problem academically, but results will not be useful for instructional planning purposes. Additional testing may then be needed, in order to fully understand what type of instruction is necessary for the student.

  • Is the test administered in a group or individually? By law, group tests are not appropriate when assessing a child for the presence of a disability or to determine his or her eligibility for special education.

  • Does the examiner need specialized training in order to administer the test, record student responses, score the test, or interpret results? In most, if not all, cases, the answer to this question is yes. If the school has no one trained to administer or interpret the specific test, then it should not be used unless the school arranges for the student to be assessed by a qualified evaluator outside of the school system.

  • Will the student's suspected disability impact upon his or her taking of the test? For example, many tests are timed tests, which means that students are given a certain amount of time to complete items. If a student has weak hand strength or dexterity, his or her performance on a timed test that requires holding a pencil or writing will be negatively affected by the disability. Using a timed test would only be appropriate for determining how speed affects performance. To determine the student's actual knowledge of a certain area, a nontimed test would be more appropriate. It may also be possible to make accommodations for the student (e.g., removing time restrictions from a timed test). If an accommodation is made, however, results must be interpreted with caution. Standardized tests are designed to be administered in an unvarying manner; when accommodations are made, standardization is broken, and the norms reported for the test no longer apply.

  • How similar to actual classroom tasks are the tasks the child is asked to complete on the test? For example, measuring spelling ability by asking a child to recognize a misspelled word may be very different from how spelling is usually measured in a class situation (reproducing words from memory). If test tasks differ significantly from classroom tasks, information gathered by the test may do little to predict classroom ability or provide information useful for instruction.
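
To make the reliability discussion above concrete, here is a minimal sketch of how a test-retest coefficient might be computed. The scores are invented; published tests report their own reliability coefficients in their manuals.

    from statistics import correlation  # Python 3.10+

    # Hypothetical scores for ten students on two administrations of one test.
    first_testing  = [85, 92, 78, 88, 95, 70, 82, 90, 76, 84]
    second_testing = [83, 94, 80, 85, 97, 72, 80, 91, 75, 86]

    # Test-retest reliability: the correlation between the two sets of scores.
    # Coefficients near 1.0 indicate consistent results over repeated testing.
    r = correlation(first_testing, second_testing)
    print(f"test-retest reliability: r = {r:.2f}")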

Limitations of Testing. Even when all of the above considerations have been observed, there are those who question the usefulness of traditional testing in making good educational decisions for children. Many educators see traditional tests as offering little in the way of information useful for understanding the abilities and special needs of an individual child. Martin Kozloff (1994) offers the following example to illustrate how rigid use and interpretation of tests can result in useful information being overlooked or misinterpreted.

Ms. Adams: (Holding up a picture of a potato.) And this one?
Indra: You eat it.
Ms. Adams: No. It's a potato. Let's try another. (Holds up a picture of a duck.) What is this?
Indra: Swimming.
Ms. Adams: No. It's a duck. Say, "duck."
Indra: Duck.
Ms. Adams: Very good. (Still showing picture of a duck.) Now, what is this?
Indra: Swimming! (p. 16)

Kozloff notes that:
There are many competent ways to respond to "What is this?". Indra said what potatoes are for and what the duck was doing. Ms. Adams scores Indra's answers incorrect because the test Ms. Adams is using narrowly defines as correct those answers with an object-naming function. Thus, Ms. Adams underestimates the size of Indra's object-naming repertoire and does not notice the other functions of Indra's vocabulary. (Kozloff, 1994, pp. 16-17)

Another concern about the overuse of testing in assessment is its lack of usefulness in designing interventions. Historically, it has seemed as if tests have not been interpreted in ways that allow for many specific strategies to be developed. While scores help to define the areas in which a student may be performing below his or her peers, they may offer little to determine particular instruction or curricular changes that may benefit the child.

Traditional tests often seem to overlap very little with the curriculum being taught. This suggests that scores may not reflect what the child really knows in terms of what is taught in the actual classroom. Other concerns include overfamiliarity with a test that is repeated regularly, inability to apply test findings in any practical way (i.e., generating specific recommendations based on test results), and difficulty in using such measures to monitor short-term achievement gains.

The sometimes circular journey from the referral to the outcome of the assessment process is frustrating. The teacher or parent requests help because the student is having problems, and the assessment results in information that more or less states, "The student is having problems."

It may be, however, that it is not that the tests themselves offer little relevant information but, rather, that the evaluators may fail to interpret them in ways that are useful. If we only ask questions related to eligibility (e.g., does this child meet the criteria as an individual with mental disabilities?) or about global ability (e.g., what is this child's intellectual potential?), then those are the questions that will be answered. Such information is not enough, if the goal is to develop an effective and appropriate educational program for the student.

Other Assessment Questions

During the assessment process, we often ask questions such as:

  • How can we help the child to do his or her work?

  • How can we manage the child's behavior, or teach the child to manage his or her own behavior?

  • How can we help the child to be neater, faster, quieter, more motivated?

As alluded to a moment ago, it may be that a different set of questions needs to be asked, questions that may be more effective in eliciting practical and useful information that can be readily applied toward intervention. Such questions might include:

  • In what physical environment does the child learn best?

  • What is useful, debilitating, or neutral about the way the child approaches the task?

  • Can the student hold multiple pieces of information in memory and then act upon them?

  • How does increasing or slowing the speed of instruction impact upon the child's accuracy?

  • What processing mechanisms are being taxed in any given task?

  • How does this student interact with a certain teacher style?

  • With whom has the child been successful? What about the person seems to have contributed to the child's success?

  • What is encouraging to the child? What is discouraging?

  • How does manipulating the mode of teaching (e.g., visual or auditory presentation) affect the child's performance?

The two sets of questions above differ from each other in two important ways. Within the first set, there is a subtle assumption that the problem is known (e.g., we "know" that the child is not trying hard enough) and that only the solution to the problem is needed. The second set of questions, in contrast, seeks information about the problem; the assessment is designed to find out what is keeping the child from trying harder or producing readable work. Also, the first set of questions tends to be more "child-blaming," while the second attempts to understand more about the child's experience. Assuming one already "knows" the problem may result in fewer and less effective interventions. On the other hand, if we seek to understand why the child is having difficulty succeeding in school (e.g., he or she has trouble remembering and integrating information; fear of failure results in reduced classroom effort), we engage in an assessment process that seeks information about the problem and results in the identification of specific strategies to reduce the problem's negative impact on learning.

To this end, assessment that goes beyond administering standardized tests and includes other evaluation methods is essential. In the remainder of this section, several valuable assessment methods are briefly described. Sources of additional information are listed in the two NICHCY bibliographies on assessment available separately from this News Digest.

Ecological Assessment

Ecological assessment basically involves directly observing and assessing the child in the many environments in which he or she routinely operates. The purpose of conducting such an assessment is to probe how the different environments influence the student and his or her school performance. Where does the student manifest difficulties? Are there places where he or she appears to function appropriately? What is expected of the student academically and behaviorally in each type of environment? What differences exist in the environments where the student manifests the greatest and the least difficulty? What implications do these differences have for instructional planning? As Wallace, Larsen, and Elksnin (1992) remark: "An evaluation that fails to consider a student's ecology as a potential causative factor in reported academic or behavioral disorders may be ignoring the very elements that require modification before we can realistically expect changes in that student's behavior" (p. 19).

Direct Assessment

Direct assessment of academic skills is one alternative that has recently gained in popularity. While a number of direct assessment models exist (Shapiro, 1989), they are similar in that all tie assessment directly to the instructional curriculum. Curriculum-based assessment (CBA) is one type of direct evaluation. "Tests" of performance in this case come directly from the curriculum. For example, a child may be asked to read from his or her reading book for one minute. Information about the accuracy and the speed of reading can then be obtained and compared with that of other students in the class, building, or district. CBA is quick and offers specific information about how a student may differ from peers.

Because the assessment is tied to curriculum content, it allows the teacher to match instruction to a student's current abilities and pinpoints areas where curriculum adaptations or modifications are needed. Unlike many other types of educational assessment, such as I.Q. tests, CBA provides information that is immediately relevant to instructional programming. (Berdine & Meyer, 1987, p. 33)

CBA also offers information about the accuracy and efficiency (speed) of performance. The latter is often overlooked when assessing a child's performance but is an important piece of information when designing intervention strategies. CBA is also useful in evaluating short-term academic progress. The resources on CBA which are listed in the NICHCY bibliographies on assessment (available separately from this News Digest) offer detailed guidance on how to design assessments that are tied to the curriculum.
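
As a minimal sketch of the one-minute reading probe described above (the student and class figures are invented for illustration):

    from statistics import median

    # Hypothetical one-minute oral reading probe taken from the classroom
    # reading book: words attempted minus errors gives words correct per minute.
    words_attempted, errors = 54, 6
    wcpm = words_attempted - errors

    # Compare with classmates who read the same passage (invented scores).
    class_wcpm = [88, 95, 72, 81, 90, 77, 85, 93, 79, 84]
    print(f"Student: {wcpm} words correct per minute; "
          f"class median: {median(class_wcpm)}")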

Dynamic Assessment

Dynamic assessment refers to several distinct but related approaches to evaluating student learning. Although these approaches have been in use for some time, only recently has dynamic assessment been acknowledged as a valuable means of gathering information about students (Lidz, 1987). The goal of this type of assessment "is to explore the nature of learning, with the objective of collecting information to bring about cognitive change and to enhance instruction" (Sewell, 1987, p. 436).

One of the chief characteristics of dynamic assessment is that it includes a dialogue or interaction between the examiner and the student. Depending on the specific dynamic assessment approach used, this interaction may include modeling the task for the student, giving the student prompts or cues as he or she tries to solve a given problem, asking what the student is thinking while working on the problem, sharing on the part of the examiner to establish the task's relevance to experience and concepts beyond the test situation, and giving praise or encouragement (Hoy & Gregg, 1994). The interaction allows the examiner to draw conclusions about the student's thinking processes (e.g., why he or she answers a question in a particular way) and his or her response to a learning situation (i.e., whether, with prompting, feedback, or modeling, the student can produce a correct response, and what specific means of instruction produce and maintain positive change in the student's cognitive functioning).

Typically, dynamic assessment involves a test-train-retest approach. The examiner begins by testing the student's ability to perform a task or solve a problem without help. Then, a similar task or problem is given to the student, and the examiner models how the task or problem is solved or gives the student cues to assist his or her performance. In Feuerstein's (1979) model of dynamic assessment, the examiner is encouraged to interact constantly with the student, an interaction called mediation, which is felt to maximize the probability that the student will solve the problem. Other approaches to dynamic assessment use what is called graduated prompting (Campione & Brown, 1987), where "a series of behavioral hints are used to teach the rules needed for task completion" (Hoy & Gregg, 1994, p. 151). These hints do not evolve from the student's responses, as in Feuerstein's model, but, rather, are scripted and preset, a standardization which allows for comparison across students. The prompts are given only if the student needs help in order to solve the problem. In both of these approaches, the "teaching" phase is followed by a retesting of the student with a similar task but with no assistance from the examiner. The results indicate the student's "gains" or responsiveness to instruction -- whether he or she learned and could apply the earlier instructions of the examiner and the prior experience of solving the problem.

An approach known as "testing the limits" incorporates the classic training and interactional components of dynamic assessment but can be used with many traditional tests, particularly tests of personality or cognitive ability (Carlson & Wiedl, 1978, 1979, as cited in Jitendra & Kameenui, 1993). Modifications are simply included in the testing situation -- while taking a particular standardized test, for example, the student may be encouraged to verbalize before and after solving a problem. Feedback, either simple or elaborated, may be provided by the examiner as well.
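
Because graduated prompting uses scripted, preset hints, the procedure can be outlined in a short sketch. The task, hints, and student responses below are invented and do not represent any published instrument:

    # Hypothetical graduated-prompting sequence: preset hints are offered one
    # at a time, only if the student still needs help; the number of prompts
    # needed serves as an index of responsiveness to instruction.
    PROMPTS = [
        "general hint (restate the strategy)",
        "focused hint (point out the relevant feature)",
        "model (examiner demonstrates the first step)",
    ]

    def administer(solves_after):
        """Return the number of prompts the student needed (0 = unaided),
        or None if the item was not solved even with every prompt."""
        for n in range(len(PROMPTS) + 1):
            if solves_after(n):
                return n
        return None

    # Toy simulation: two prompts needed during training, none at retest,
    # suggesting the student learned and transferred the rule.
    train  = administer(lambda n: n >= 2)
    retest = administer(lambda n: n >= 0)
    print(f"prompts needed -- train: {train}, retest: {retest}")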

Of course, dynamic assessment is not without its limitations or critics. One particular concern is the amount of training needed by the examiner to both conduct the assessment and interpret results. Another is a lack of operational procedures or "instruments" for assessing a student's performance or ability in the different content areas (Jitendra & Kameenui, 1993). Further, conducting a dynamic assessment is undeniably labor intensive.

Even with these limitations, dynamic assessment is a promising addition to current evaluation techniques. Because it incorporates a teaching component into the assessment process, this type of assessment may be particularly useful with students from minority backgrounds who may not have been exposed to the types of problems or tasks found on standardized tests. The interactional aspect of dynamic assessment also can contribute substantially to developing an understanding of the student's thinking process and problem-solving approaches and skills. Certainly, having detailed information about how a student approaches performing a task and how he or she responds to various instructional techniques can be highly relevant to instructional planning.

Task Analysis

Task analysis is very detailed; it involves breaking down a particular task into the basic sequential steps, component parts, or skills necessary to accomplish the task. The degree to which a task is broken down into steps depends upon the student in question; "it is only necessary to break the task down finely enough so that the student can succeed at each step" (Wallace, Larsen, & Elksnin, 1992, p. 14).

Taking this approach to assessment offers several advantages to the teacher. For one, the process identifies what is necessary for accomplishing a particular task. It also tells the teacher whether or not the student can do the task, which part or skill causes the student to falter, and the order in which skills must be taught to help the student learn to perform the task. According to Bigge (1990), task analysis is a process that can be used to guide the decisions made regarding:

  • what to teach next;

  • where students encounter problems when they are attempting but are not able to complete a task;

  • the steps necessary to complete an entire task;

  • what adaptations can be made to help the student accomplish a task;

  • options for those students for whom learning a task is not a possible goal (as described in Wallace, Larsen, & Elksnin, 1992, p. 14).

Task analysis is an approach to assessment that goes far beyond the need to make an eligibility or program placement decision regarding a student. It can become an integral part of classroom planning and instructional decision-making.
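
A task analysis lends itself to a very simple representation. In the hypothetical sketch below, the steps and mastery judgments are invented; the first unmastered step becomes the next instructional target:

    # Hypothetical task analysis of "writes a complete sentence," in the
    # sequential order the steps must be performed and taught.
    steps = [
        ("holds pencil with a functional grip", True),
        ("copies letters legibly",              True),
        ("writes words from dictation",         True),
        ("begins the sentence with a capital",  False),
        ("ends the sentence with punctuation",  False),
    ]

    # The first step the student cannot yet perform is where instruction begins.
    next_target = next((s for s, mastered in steps if not mastered), None)
    print(f"teach next: {next_target}")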

Outcome-Based Assessment

Outcome-based assessment is another approach to gathering information about a student's performance. This type of assessment has been developed, at least in part, in response to concerns that education, to be meaningful, must be directly related to what educators and parents want the child to have gained in the end. Outcome-based assessment involves considering, teaching, and evaluating the skills that are important in real-life situations; learning such skills helps the student become an effective adult. Assessment, from this point of view, starts by identifying what outcomes are desired for the student (e.g., being able to use public transportation). In steps similar to those used in task analysis, the team then determines what competencies are necessary for the outcomes to take place (e.g., the steps or subskills the student needs to have mastered in order to achieve the outcome desired) and identifies which subskills the student has mastered and which he or she still needs to learn. The instruction that is needed can then be pinpointed and undertaken.
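
Continuing the public transportation example, here is a minimal sketch (with invented competencies and mastery judgments) of how such a checklist pinpoints the instruction that is still needed:

    # Hypothetical competency checklist for the desired outcome
    # "uses public transportation independently."
    competencies = {
        "identifies the correct bus by number": True,
        "pays the correct fare":                True,
        "signals for the stop":                 False,
        "exits and walks to the destination":   False,
    }

    to_teach = [skill for skill, mastered in competencies.items() if not mastered]
    print("instruction needed on:", "; ".join(to_teach))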

Learning Styles

The notion of learning styles is not new, but seems to have revived in the past few years. Learning styles theory suggests that students may learn and problem solve in different ways and that some ways are more natural for them than others. When they are taught or asked to perform in ways that deviate from their natural style, they are thought to learn or perform less well. A learning style assessment, then, would attempt to determine those elements that impact on a child's learning and "ought to be an integral part of the individualized prescriptive process all special education teachers use for instructing pupils" (Berdine & Meyer, 1987, p. 27).

Some of the common elements that may be included here would be the way in which material is typically presented (visually, auditorily, tactilely) in the classroom, the environmental conditions of the classroom (hot, cold, noisy, light, dark), the child's personality characteristics, the expectations for success that are held by the child and others, the response the child receives while engaging in the learning process (e.g., praise or criticism), and the type of thinking the child generally utilizes in solving problems (e.g., trial and error, analyzing). Identifying the factors that positively impact the child's learning may be very valuable in developing effective intervention strategies.
