News from the Tennessee Valley Opinion
SUNDAY, JANUARY 8, 2006

OP-ED

Many factors influence test-score differences

By James E. McLean

Since the recent article in the New York Times ("Students Ace State Tests, but Earn D's From U.S.," Nov. 26, 2005) reported that many states were scoring well on their own assessments but not performing well on the National Assessment of Educational Progress, I have had a number of inquiries from reporters asking why this discrepancy was also observed in Alabama. That is a reasonable question, and the people of Alabama should know some of the reasons it could occur. It is impossible to know the exact cause, but there are many possible explanations for these differences.

Different standards

The first thing to realize is that even though the tests assess the same global skills (reading and math), they are not based on the same standards. The national test uses national standards that are somewhat broader than Alabama's; the Alabama tests are keyed to Alabama standards. While Alabama's standards are based on the national ones, they are narrower, focusing on the knowledge and skills deemed most important to Alabama. Thus, one explanation is that students taught a curriculum based on Alabama standards may do better on a test developed to assess those standards than on one that includes areas they may not have been taught. It is entirely reasonable that we would want our schools to teach the areas and subjects our policymakers have judged most important.

Motivational factors

Another possible reason for performance differences relates to the motivation of the students. The Alabama tests have been used for some time to make judgments about schools and school systems and, by implication, principals, superintendents and teachers. Since these groups are judged by results from these tests, they tend to emphasize them more in their teaching and motivate students to perform well on them. The national test has no direct implications for students, teachers, principals or superintendents. Thus, there is little motivation to emphasize its importance.

Familiarity breeds success

Familiarity provides yet another possible explanation. Students have been taking the state tests, or tests similar to them, for years; the current edition is several years old. Thus, they are used to the content, the way the questions are presented, the way they respond to the questions and the entire format of the tests. The national test is quite different. For example, it uses a sampling approach in which each student is randomly assigned only a small portion of the total test; results for the complete test are obtained by combining the responses of all the students.

Judges, regional bias

In addition, the proficiency levels for the national test are set by a different group of judges who may hold different standards. Judges for the national proficiency scores include educational specialists and members of the general public as well as teachers, while the proficiency levels in Alabama are recommended by teachers who teach students at all levels. Thus, the performance one group of judges considers acceptable may differ from what another considers acceptable. Compounding these differences are possible test biases: national tests are generally developed by testing companies in the North, Midwest and Far West, and there have been many historical examples of national tests with a regional bias against Southerners.

Funding differences

Another difference relates to the levels at which the various states fund education. Alabama students do remarkably well considering the funding gap between Alabama and most of the states that outscored it on the national test. Funding imbalances manifest themselves in a number of ways.

Fewer school days

One simple example is the number of days of instruction per year. Alabama provides only 175 days of instruction per year, compared with 180 to 185 days in most other states. Five to 10 additional days of preparation each year could plausibly improve performance, but additional days of instruction would cost the state additional dollars.

Thus, it is easy to see why students from many other states outperform Alabama students on this national test. That does not suggest our children are any less intelligent than children anywhere else in the country. I believe the differences are due to the factors cited above and to the reduced educational opportunities many Alabama students face because of this state's poor funding of education.

Dr. James E. McLean is dean of the College of Education at The University of Alabama.



  www.decaturdaily.com