**I. INTRODUCTION**

This report consists of four sections:

- A description of the Milwaukee Parental Choice Program and the data being collected;
- A description of the choice families and students;
- A five-year report on outcomes; and
- A brief response to some of the criticisms of our previous evaluations.

This report is an abbreviated version of earlier reports, for three reasons. First, adequate resources to do this research have not been available for the past two years. Second, program legislation has been changed to include parochial schools and allow students already in those schools to enter the program. The new program will not be comparable and will not be evaluated except for an audit in the year 2000. Third, comparable Milwaukee Public School (MPS) system achievement test score data were not available for the fifth year because the MPS has essentially dropped the Iowa Test of Basic Skills (ITBS) as state-mandated tests have been introduced. The private schools continue to use the ITBS and are not required to take the state tests. This report does, however, more than fulfill the statutory requirements (section 119.23 (5) (d)). This report does not contain any information on the 1995-96 school year.

For the most part, this report updates the *Fourth Year Report* (December 1994)
and should be read in conjunction with that report. Most of the findings are consistent
with that very detailed report. This report includes three new items: (1) a brief
description of the new program legislation; (2) new multivariate estimates of achievement
differences between choice and MPS schools, combining four years of data; and (3) a brief
response to some of the criticisms of previous reports. Some of these critiques were
inaccurate in the extreme and should not have been used as policy-making tools. The
Peterson critique, for example, circulated when the legislature was considering major
changes in the legislation, and dramatic changes did occur--particularly an increase in
the scope of the program and appropriations for students to attend religious schools.

**II. THE MILWAUKEE PARENTAL CHOICE PROGRAM**

*The Original Program*. The Milwaukee Parental Choice Program, enacted in
spring 1990, provides an opportunity for students meeting specific criteria to attend
private, nonsectarian schools in Milwaukee. A payment from public funds equivalent to the
MPS per-member state aid ($3,209 in 1994-95) is paid to the private schools in lieu of
tuition and fees for the student. Students must come from families with incomes not
exceeding 1.75 times the national poverty line. New choice students must not have been in
private schools in the prior year or in public schools in districts other than MPS. The
total number of choice students in any year was limited to 1% of the MPS membership in the
first four years, but was increased to 1.5% beginning with the 1994-95 school year.

Schools initially had to limit choice students to 49% of their total enrollment. The
legislature increased that to 65% beginning in 1994-95. Schools must also admit choice
students without discrimination (as specified in s. 118.13, Wisconsin Stats.). Both the
statute and administrative rules specify that pupils must be accepted "on a random
basis." This has been interpreted to mean that if a school was oversubscribed in a
grade, random selection is required in that grade. In addition, in situations in which one
child from a family was admitted to the program, a sibling is exempt from random selection
even if random selection is required in the child's grade.

*The New Program*. The legislation was amended as part of the biennial
state budget in June 1995. The principal changes were: (1) to allow religious schools to
enter the program; (2) to allow students in grades kindergarten through grade three who
were already attending private schools to be eligible for the program; (3) to eliminate
all funding for data collection and evaluations (the Audit Bureau is required simply to
file a report by the year 2000). Thus unless the legislation changes, data of the type
collected for this and previous reports will not be available for the report to be
submitted in the year 2000.

*Research and Data*. The study on which this report is based employs a
number of methodological approaches. Surveys were mailed in the fall of each year from
1990 to 1994 to all parents who applied for enrollment in one of the choice schools.
Similar surveys were sent in May and June of 1991 to a random sample of 5,474 parents of
students in the Milwaukee Public Schools. Among other purposes, the surveys were intended
to assess parent knowledge of and evaluation of the Choice Program, educational
experiences in prior public schools, the extent of parental involvement in prior MPS
schools, and the importance of education and the expectations parents hold for their
children. We also obtained demographic information on family members. A follow-up survey
of choice parents assessing attitudes relating to their year in private schools was mailed
in June of each year.

In addition, detailed case studies were completed in April 1991 in the four private schools that enrolled the majority of the choice students. An additional study was completed in 1992, and six more case studies in the spring of 1993. Case studies of the K-8 schools involved approximately 30 person-days in the schools, including 56 hours of classroom observation and interviews with nearly all of the teachers and administrators in the schools. Smaller schools required less time. Researchers also attended and observed parent and community group meetings, and Board of Directors meetings for several schools. Results of these case studies were included in the December 1994 report.

Finally, beginning in the fall of 1992 and continuing through the fall of 1994, brief mail and phone surveys were completed with as many parents as we could find who chose not to have their children continue in the program.

In accordance with normal research protocol, and by agreement with the private schools, reported results are aggregated and individual schools are not identified, in order to maintain student confidentiality. Thus these findings should not be construed as an audit or as an assessment of the effectiveness of the educational environment in any specific school.

Readers of earlier reports may notice slight differences in a few of the statistics
reported for earlier years. These differences are attributable to improvements in
collecting data (such as search for student names in MPS records), errors in reporting,
survey results that come in after the reports are prepared, and data entry and analysis
errors that are subsequently corrected. Unless noted in the text, these differences are
minor.

**III. CHOICE FAMILIES AND STUDENTS**

Over the years we have reported on a number of important questions concerning the students and families who applied to the choice program. How many families apply? How many seats are available in the private schools? How do families learn about the program? Why do they want to participate? What are the demographic characteristics of students and families? What are parental attitudes toward education? What are the prior experiences of applicants in public schools? How well were the students doing in their prior public schools?

Findings are very consistent over the five years of the program. For economy, and
because five years of data provide a better picture than a single year, most tables
contain combined data from 1990 to 1994. The most appropriate comparison group to the
choice families, on most measures, is the low-income MPS sample. That group, which
includes about two-thirds of Milwaukee students, is defined as qualifying for free or
reduced-price lunch. The income ceiling for reduced-price lunch is 1.85 times the poverty
line; for free lunch it is 1.35 times the poverty line. Almost all low-income students
qualify for full free lunch.

*Enrollment in the Choice Program*. Enrollment statistics for the Choice
Program are provided in Table 1. Enrollment in the Choice Program has increased steadily
but slowly, never reaching the maximum number allowed by the law. September enrollments
have been 341, 521, 620, 742, and 830 from 1990-91 to 1994-95. The number of schools
participating was 7 in 1990-91, 6 in 1991-92, 11 in 1992-93, and 12 in the last two
years. The number of applications has also increased, with again the largest increase in
1992-93. In the last two years, however, applications have leveled off at a little over
1,000 per year. Applications exceeded the number of available seats (as determined by the
private schools) by 171, 143, 307, 238 and 64 from 1990-91 through 1994-95. Some of these
students eventually fill seats of students who are accepted but do not actually enroll.
The number of seats available exceeds the number of students enrolled because of a mismatch
between applicant grades and seats available by grade. It is difficult to determine how
many more applications would be made if more schools participated and more seats were
available. In 1992-93, when the number of participating schools increased from 6 to 11,
applications rose by 45%. In the last two years, however, seats available increased by 22%
and 21%, but applications increased only by 5% from 1992-93 to 1993-94 and declined this
past year.

*Learning About Choice and the Adequacy of Information on Choice*. Table 2
indicates how survey respondents learned about the Choice Program. The results are fairly
similar for the first four years. The most prevalent source of information on choice
remains friends and relatives, which essentially means word-of-mouth information. That
informal communication is almost double the frequency of other sources. It also increased
in 1994.

Parental satisfaction with the amount and accuracy of that information, and with the
assistance they received, is presented in Table 3. Satisfaction with the amount of
information on the program in general is high in all years and even higher in 1994. There
is, however, a difference in satisfaction of parents selected and not selected into the
choice schools. Looking ahead to Table 6, when we create additive scales for these
questions measuring satisfaction with administration of the Choice Program, we see a large
difference between those applicants who enroll in the Choice Program and those not
selected. As indicated in the rows for Scale T3 (top panel, Table 6), choice enrollees for
1990-94 have a mean dissatisfaction of 11.6, while non-selected parents have a much higher
average dissatisfaction score of 14.4.
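The additive scales used throughout Table 6 are simple sums of item responses. A minimal sketch of how such a scale is scored (the item names and the 1-4 response coding below are hypothetical illustrations, not the report's actual survey items):

```python
# Hypothetical sketch of an additive scale like Scale T3: each survey item
# is answered on a 1 (very satisfied) to 4 (very dissatisfied) rating, and
# the scale score is the sum across items, so higher totals indicate
# greater dissatisfaction with the program's administration.
ITEMS = ["info_amount", "info_accuracy", "application_help", "staff_courtesy"]

def scale_score(responses: dict) -> int:
    """Sum one parent's item responses into a single scale score."""
    return sum(responses[item] for item in ITEMS)

parent = {"info_amount": 2, "info_accuracy": 3,
          "application_help": 4, "staff_courtesy": 3}
print(scale_score(parent))  # 12
```

Group means such as the 11.6 and 14.4 reported above are then just averages of these per-parent totals within each group.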

*Why Choice Parents Participated in the Choice Program*. Table 4 provides
the responses to survey questions rating the importance of various factors in parents'
decisions to participate in the Choice Program. The results are consistent across the
years. The leading reasons given for participating in the Choice Program were the perceived

*Demographic Characteristics of Choice Families and Students*. The Choice
Program was established, and the statute written, explicitly to provide an opportunity for
relatively poor families to attend private schools. The program has clearly accomplished
this goal. Relevant demographic statistics are presented in Table 5, which, unless
otherwise noted, are based on our surveys. For comparison purposes seven groups are
identified, including applicants in the most recent year (1994), and the combined year
samples for choice applicants, choice students actually enrolled, students not selected,
students who left the program (and did not graduate), and the MPS control groups.

In terms of reported family income (Table 5a), the average income was $11,630 in the first five years. Incomes of 1994 choice families increased considerably to an average of $14,210. Low-income MPS parents reported a slightly higher family income, which averaged $12,100. The average in the full MPS control group was $22,000.

In racial terms, the program has had the greatest impact on African-American students, who comprised 74% of those applying to choice schools and 72% of those enrolled in the first five years of the program (Table 5b). Hispanics accounted for 19% of the choice applicants and 21% of those enrolled. Both groups were represented at disproportionately higher rates than in the MPS sample. Compared with the low-income MPS sample, however, Hispanics were the most overrepresented, with Asians and White students the most underrepresented. The overrepresentation of Hispanics was due to a new building and considerable expansion in capacity of Bruce-Guadalupe Bilingual School, which has an emphasis on Spanish culture and is located in a neighborhood with an increasing Hispanic population.

In terms of marital status (Table 5c), choice families were much more likely to be headed by a non-married parent (75%) than the average MPS family (49%), and somewhat more likely than the low-income MPS parent (65%). The percentage was almost identical for the five separate years.

One important difference between MPS and choice applicants was in family size (Table 5d). For the combined years, only 42% of the choice families reported having more than two children. The average number of children in families applying to the choice program was 2.54. This compared with 54% of the MPS families having more than two children (2.95 children per family) and 65% of the low-income MPS families (3.24 on average).

A unique characteristic of choice parents was that despite their economic status, they reported higher education levels than either low-income or average MPS parents (Table 5e). Over half of the choice mothers reported some college education (56%), compared with 40% for the entire MPS sample and 30% of the low-income MPS respondents. That number was even higher in 1994, with 64% of the mothers new to the program reporting some college. That was consistent with the higher incomes reported for the last year (Table 5a, column 2). The biggest difference in education appears in the category titled "some college." Although fathers more closely match the MPS control groups, they were also somewhat more educated.

*The Importance of Education and Educational Expectations*. Based on our
measures, choice and MPS parents were similar in terms of the importance they place on
education. We measured the importance of education relative to other important family
values. Scale descriptive statistics are in Table 6.

Educational expectations were high for all groups, with choice parents somewhat higher
than MPS and low-income MPS parents. Eighty-six percent of choice parents in the first
four years indicated that they expected their child to go to college or do post-graduate
work. This compared with 76% of the MPS parents and 72% of the low-income MPS parents.
Because sample sizes were large, these proportions were significantly different.

*Experience of Choice Parents in Prior Public Schools*. A more complete
picture of choice parents includes the level of parental involvement, attitudes toward,
and student success in their child's prior public school. Our surveys measured the degree
of parental involvement in the school, the amount of parental help for children at home,
and parent satisfaction with prior schools. The results are presented in Table 6. Prior
achievement test results are provided in Table 8.

Based on five years of highly consistent data, the conclusions we draw are as follows: (1) that choice parents were significantly more involved in the prior education of their children than MPS parents (Table 6, Scales A3-A5); (2) that they expressed much higher levels of dissatisfaction with their prior schools than MPS parents (Table 6, Scale A6); and, (3) that their children had lower prior achievement test results than the average MPS student (Table 8).

Choice students had prior test scores at, or in some years below, those of the low-income MPS students. The test scores in Table 8 are from the Iowa Test of Basic Skills (ITBS), which is given in grades 1-8. The tests used are the Reading Comprehension Test and the Math Total Test. The latter consists of three subtests: Math Concepts, Math Problem Solving, and Math Computations. The results reported are for the last test taken while the student was in MPS. The reason for this is that we are trying to get as complete a picture as possible of a relatively small number of choice students. The majority of those tests were taken in the spring of the year of application.

The prior test scores for the last year, 1994, were the lowest of any cohort, especially in math. This may have been due to the small number of students who completed all portions of the math test (because of changes in MPS testing requirements). In any event, the pattern remained consistent.

The absolute level of the scores indicates the difficulty these students were having
prior to entering the program. The median national percentile for choice students ranged
from 25.5 to 31, compared with the national median of 50. The Normal Curve Equivalent
(NCE), which is standardized to a national mean of 50, ranged from 35.5 to 39.8, which is
about two-thirds of a standard deviation below the national average. *In short, the
choice students were already very low in terms of academic achievement when they entered
this program.*
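The relationship between percentile ranks and NCE scores described above follows directly from the NCE definition: a normal scale with mean 50 and standard deviation 21.06, pinned so the 1st, 50th, and 99th percentiles map to NCEs of 1, 50, and 99. A sketch of the conversion, using only that definition:

```python
from statistics import NormalDist

# NCE (Normal Curve Equivalent) scores rescale national percentile ranks
# onto a normal scale with mean 50 and standard deviation 21.06.
SD_NCE = 21.06

def percentile_to_nce(pct: float) -> float:
    """Convert a national percentile rank (0-100) to an NCE score."""
    return 50 + SD_NCE * NormalDist().inv_cdf(pct / 100)

def nce_to_percentile(nce: float) -> float:
    """Convert an NCE score back to a national percentile rank."""
    return 100 * NormalDist().cdf((nce - 50) / SD_NCE)

# An NCE of 35.5 (the low end of the choice students' range) sits
# (50 - 35.5) / 21.06, about 0.69 standard deviations, below the national
# mean -- i.e. roughly the 25th percentile, matching the reported range.
print(round((50 - 35.5) / SD_NCE, 2))   # 0.69
print(round(nce_to_percentile(35.5)))   # 25
```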

**IV. OUTCOMES**

We discuss five outcome measures in this section: (1) achievement test results; (2)
attendance data; (3) parent attitudes; (4) parental involvement; and (5) attrition from the
program. The legislation specified that suspension, expulsion, and dropping out also be
monitored. Those measures, however, would be meaningful only at the high-school level. The
high-school-level choice schools consisted of alternative education programs for seriously
at-risk students. Therefore it is unclear what the relevant MPS comparisons would have
been.

**Achievement Test Results**

*Cohort Test Results*. Table 9 provides the aggregate test results for 1991
to 1995 for choice students and for 1991 to 1994 for students taking tests in MPS in the
respective years. Tests were administered in April or May of each year. The results may be
compared only crudely with those in Table 8, which indicated prior test scores for
students accepted into the Choice Program. The prior test data in Table 8 were mostly from
the previous spring, but were based on the last prior test in the student's file. As
stated above, those data indicated the choice applicants were clearly behind the average
MPS student, and also behind a large random sample of low-income MPS students.

As noted in the introduction, test scores from the MPS sample were not available for 1995. MPS now requires the ITBS only of fifth-grade students, because state-mandated tests have been substituted for the ITBS. In addition, the ITBS version given to fifth graders is a new and very different test that is not substantively comparable to the older versions used in the choice schools. Choice schools were not required to take the state tests. Thus we report only the 1995 choice tests, and change scores are analyzed using only the first four years of data.

In reading, Table 9 indicates that the choice students tested in 1991 did better than the MPS low-income group, but in math they did somewhat worse. Tests taken in the choice schools in the second year--spring 1992--were considerably lower. They were lower than the scores in both the full MPS sample and among the low-income MPS students. Comparing the more relevant low-income MPS and choice students, the choice students' mean math NCE was more than five points lower; the reading score was two points lower.

For 1993, the scores of choice students were approximately the same in reading as in the prior year, but were considerably higher in mathematics. Although the reading scores were lower than the low-income MPS sample, the math scores had the same median National Percentile Ranking, but were 2.3 NCE points higher.

The 1994 spring scores for all choice students were slightly higher than the 1993 choice cohort, and very similar to the MPS low-income group on both reading and mathematics. In contrast to the first two years, for both 1993 and 1994, choice students did better in math than in reading (which is the pattern for MPS students in all years).

Choice scores in 1995 were very similar to the scores in 1994. In terms of NCEs, they were almost identical. In terms of median National Percentile Rankings, the 1995 scores of choice students were 2% lower than in 1994 for both reading and math.

Because the cohort scores do not report on the same students from year to year, and
because we hope that schools are able to add to the educational achievement of students,
the most accurate measure of achievement is "value-added" change scores.

*Change Scores*. Analysis of year-to-year change scores for individual
students produced somewhat different results. By comparing students' scores with national
samples, we can estimate how individual students changed relative to the national norms
over the year. Descriptive change scores are depicted in Tables 10a to 10e for the
respective years. We caution the reader, as we have in previous reports, that sample sizes
in some years are quite small for choice students. The results in the table are based on
differences in NCEs, subtracting the first-year score from the second.
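The change-score computation described above can be sketched as follows. The scores here are made up for illustration, and the t statistic is the standard paired-comparison measure used to judge whether a mean change differs from zero:

```python
from math import sqrt
from statistics import mean, stdev

def change_scores(pre: list, post: list) -> list:
    """Year-to-year NCE change: second-year score minus first-year score."""
    return [b - a for a, b in zip(pre, post)]

def t_statistic(diffs: list) -> float:
    """Paired t statistic: mean change divided by its standard error.
    With large samples, |t| > 1.96 roughly corresponds to p < .05."""
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Illustrative (made-up) NCE scores for six students:
pre  = [38, 42, 35, 40, 31, 44]
post = [41, 45, 36, 44, 35, 46]
d = change_scores(pre, post)
print(mean(d))         # average gain in NCE points
print(t_statistic(d))  # significance measure for that gain
```

Because an NCE is anchored to a national norm, a positive mean change means students gained ground on the national sample; near-zero means they simply kept pace.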

In the first year, choice students gained in reading, but math scores stayed essentially the same. MPS students improved considerably in math, with smaller gains in reading. Both low-income and non-low-income MPS students gained in math. Three of these gains were statistically significant (due in part to larger sample sizes), while the gains for choice students were not.

There were quite different effects in the second year (Table 10b). Change scores for choice students in math, and for MPS students in both reading and math, were not appreciable. None of these differences approached standard levels of statistical significance. In contrast to the first year, however, reading scores dropped for choice students. The decline was 3.9 NCE points for all students between 1991 and 1992. Because NCE scores are standardized against a national norm, this decline means the choice students lost considerable ground relative to the national sample over the year. The decline was statistically significant at the .001 level.

The results shifted again in the third year. Choice students declined slightly in reading, which was not significant. On the other hand, for the first time, math scores for choice students improved. The mean math NCE went from 38.3 to 42.7 for a 4.4 NCE gain, which was statistically significant. Scores for MPS students, on the other hand, declined for both tests and for both groups. Because of relatively large sample sizes for the MPS control group, the decline in math scores was significant and estimated to be 1.2 NCEs for both the total MPS control group and the low-income sample.

The results for 1994 again shifted for both groups. For all groups, reading scores
effectively did not change. The same was true of math scores for the MPS groups, although
the low-income MPS scores declined slightly more than those of the non-low-income group. For the
choice students, after a large math increase in 1993, there was a decline of 2 NCE points
in 1994. In 1995, reading scores for choice students essentially remained the same, but
again there was a 1.5-point drop in math NCE scores.

*Regression Analysis*. Because test score differences could be based on a
number of factors, and the factors could be distributed differentially between choice and
MPS students, it is necessary to control statistically for factors other than whether
students were in MPS or choice schools. Those controls are provided by multivariate regression
analysis of the combined MPS and choice samples. There are a number of ways to model
achievement gains. The most straightforward is to estimate the second-year test score,
controlling for prior achievement and background characteristics, and then include an
indicator (dichotomous) variable to measure the effect of being in a choice or MPS school.
A series of dichotomous variables indicating the number of years in the choice program can
be substituted for the single choice indicator variable.
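The two ways of coding the choice indicator described above can be sketched as follows (the four-year maximum mirrors the four years of stacked data; the function names are our own):

```python
# Sketch of the indicator (dummy) coding described above: model 1 uses a
# single 0/1 "choice" variable, while model 2 replaces it with one
# indicator per number of years in the program.
def choice_indicator(in_choice: bool) -> int:
    """Model 1: 1 if the student is in a choice school, 0 if in MPS."""
    return 1 if in_choice else 0

def choice_year_indicators(years_in_choice: int, max_years: int = 4) -> list:
    """Model 2: one indicator per year in the program; an MPS student
    (0 years in choice) gets all zeros."""
    return [1 if years_in_choice == y else 0
            for y in range(1, max_years + 1)]

# A second-year choice student has the "Choice Year 2" indicator set:
print(choice_year_indicators(2))  # [0, 1, 0, 0]
print(choice_year_indicators(0))  # [0, 0, 0, 0]  (MPS student)
```

The regression then estimates one coefficient per indicator, so the single-indicator model yields one overall choice effect while the year-indicator model yields a separate effect for each year of participation.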

In previous years' reports, regressions were provided for each yearly change. The
conclusion of those analyses, which were comprehensively described in the *Fourth
Year Report* (Tables 12-16), was that there was no consistent difference between
choice and MPS students, controlling for a wide range of variables.

For annual reporting purposes, and to test for consistency of results between years,
previous reports included yearly regressions. For those analyses, small sample sizes
precluded including survey variables. In this report, we "stack" four years of
change scores and thus we have included survey variables in the regressions reported in
Tables 12a and 12b. Critics of earlier reports surmised that exclusion of these variables
biased the results against choice. Unfortunately, as Witte predicted in response to
critics, *the results of including these variables do not favor the choice schools*.

The *B* columns in Tables 11 and 12 contain the coefficients that determine the
effects of the variables on predicting the dependent variable (the relevant test). In both
tables, a critical coefficient will be the "Choice" (dichotomous) indicator
variable. Dichotomous variables have a value of 1 if the student has the characteristic,
and 0 otherwise. The effect of the coefficient can be read as a straight difference
in the NCE score on the respective test being estimated. Thus, for example, in Table 11a,
being a low-income student has an estimated negative effect of 2.510 NCE points on the
reading score of students taking two tests a year apart from 1990 to 1994. Similarly, the
effect of being in a private school in the Choice Program (as compared to being in MPS)
produced an estimated decrease in Reading of .568 Normal Curve Equivalent points. Because
of the size of the coefficient relative to its standard error, the negative impact of
being in a low-income family is considered statistically significant, but not the effect
of being in a choice school. These predicted effects occurred after simultaneously
controlling, or in essence keeping constant, all other variables in the model.

The variables in the models that are not dichotomous variables are also easy to
interpret. They can be read as the effect that a *one point difference in the variable*
has on the predicted test. Thus, for reading in Table 11a, if a student is one NCE point
better on the prior reading test than another student, we would predict he or she would do
.495 points better on the post-test. For "Test Grade" for each additional grade
students are in, we would estimate a lower reading score of .176 NCE points--lower because
the sign of the coefficient is negative.
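Reading the coefficients as a linear prediction can be sketched with the Table 11a values quoted above. The intercept is a hypothetical placeholder (it is not reproduced in the text), and other variables in the full model are omitted, so the function illustrates only how the quoted coefficients combine:

```python
# Predicted post-test reading NCE, using the Table 11a coefficients quoted
# in the text: prior reading (.495), low income (-2.510), choice (-.568),
# test grade (-.176). INTERCEPT is a hypothetical placeholder, and the
# full model's remaining variables are omitted for clarity.
INTERCEPT = 20.0  # hypothetical; not reported in the text

def predicted_reading_nce(prior_nce: float, low_income: int,
                          choice: int, grade: int) -> float:
    return (INTERCEPT
            + 0.495 * prior_nce
            - 2.510 * low_income   # 1 if qualifying for free lunch, else 0
            - 0.568 * choice       # 1 if in a choice school, else 0
            - 0.176 * grade)       # grade in which the test was taken

# A one-NCE-point advantage on the prior test predicts a .495-point higher
# post-test score, holding the other variables constant:
print(predicted_reading_nce(41, 1, 1, 4) - predicted_reading_nce(40, 1, 1, 4))
```

The same reading applies to the dichotomous variables: flipping the choice indicator from 0 to 1, all else constant, shifts the prediction by the full -.568 coefficient.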

The probabilities indicated in the tables are a statistical measure of how likely the coefficients are to differ from 0. Traditionally, those coefficients that have a probability of .05 or less are considered "significant."

Table 11 differs from Table 12 in that the latter includes a full set of survey variables: a more accurate measure of income than free lunch, mother's education, marital status, and educational expectations for their children. There are also four scales of parental involvement, paralleling Scales A2-A5 in Table 6. Also notice that the sample sizes decline from 4716 and 4653 (in Table 11) to 1385 and 1372 (in Table 12).

There are several consistent patterns across all the models. Prior tests were always good predictors of post-tests, and the coefficients were relatively stable no matter how the results were modeled. Income was also always significant, measured either as a low-income dichotomous variable (qualifying for free lunch) or as family income in thousands of dollars. Higher income was always associated with higher achievement gains. Gender was significantly related to greater achievement in three of the models, with girls scoring higher than boys. Test grade was significant only in the math models (Tables 11b and 12b): the higher the grade of the student, the lower they scored.

Racial effects varied, although African American students always did less well than white students. For Hispanics and other minority students, the coefficients relative to whites were usually negative, but only significantly so for reading as shown in Table 11a.

As we anticipated in an earlier response to our critics, most of the other survey
variables included in Tables 12a and 12b were not statistically significant. Mother's
education was significant for reading, and two parental involvement variables were barely
significant for math. The parental involvement variables, however, went in the wrong
direction: greater involvement predicted *lower* test scores. The reason these
variables were generally insignificant, and probably the reason the signs of the parental
involvement coefficients were negative, was that these variables were correlated with other
variables in the equations. This correlation, and the small sample sizes, produced unstable
results and large standard errors for the estimates of the *B*s.

The effects of being in a choice school were somewhat more discouraging. With the large samples, excluding the survey variables, none of the choice variables were even close to significant. In model 1, including the simple indicator variable of being in a choice school or not (Tables 11a and 11b), the coefficients were very close to zero, with large standard errors. The same was generally true when, rather than simply indicating whether a student was in a choice school, we included a set of indicator variables for the years they had been in choice (thus the "Choice Year 2" variable is 1 for a second-year choice student and 0 for everyone else). The negative 1.2 on reading for two-year students was the only one of these variables that was close to significant (Table 11a, model 2).

The one exception to the result that there were no differences between public school
students and choice students was for reading scores including the survey variables (Table
12a). In that model, choice students' reading scores were estimated to be 2.15 NCE points *lower*
than MPS students--controlling for all the other variables. The estimate was significant
at the .01 level of probability. When analyzed one year at a time in the Choice Program,
the effect was mostly due to second-year students. This result may well be an artifact of
the poor performance of choice students in the second year of the program (1991-92). In
earlier reports it was estimated that the difference in reading in that year was -3.35
points (significant at the .001 level). It was also previously reported that after that
year, many of the poorest students left the choice schools (see *Second Year Milwaukee
Parental Choice Report*, pp. 16-17, Tables 20 and 21).

The choice estimates were probably more negative in this model than in the other models in Tables 11a and 11b because of the inclusion of mother's education. The variable was positively related to higher achievement and was significant in the model. Choice mothers were more educated than MPS mothers, which should have produced higher scores for choice students, but did not. Thus when we controlled for education of mothers, the estimated impact of choice schools was more negative than when we did not. As noted above, this result was produced with a much smaller sample size, and with high standard errors on a number of variables. Thus it should be interpreted with caution. What clearly cannot be concluded, however, is that the inclusion of survey variables improves the results for choice students relative to public school students. At best they did as well as MPS students, and they may have done worse.

**Attendance**

Attendance is not a very discriminating measure of educational performance at this
level because there is little school-to-school variation. For example, in the last three
years, average attendance in MPS elementary schools has been 92% in each year. Middle school attendance for the same years averaged 89%, 88%, and 89%. Attendance of choice
students in the private schools (excluding alternative schools) averaged 94% in 1990-91
and 92% in 1991-92, which put them slightly above MPS, although the differences were small. In 1992-93, excluding SEA Jobs and Learning Enterprises, attendance at the other
schools was 92.5%. For 1993-94, attendance in the non-alternative schools (thus excluding
Exito and Learning Enterprises) was 93%. For 1994-95 attendance dropped in the choice
schools to 92%. It can be concluded that overall attendance was satisfactory and on
average not a problem in choice schools.

**Parental Involvement**

Parental involvement was stressed in most of the choice schools and, in fact, was required in the contracts signed by parents in several of the schools. Involvement took two broad forms: (1) organized activities, ranging from working on committees and boards to helping with teas and fund-raising events; and (2) involvement in educational activities, such as chaperoning field trips and helping out in the classroom or with special events.

Table 6 provides five-year data on parental involvement scales for choice parents, both in their prior public schools and in the private schools their students attended under the Choice Program. It reports the means and standard deviations of the scales for applicants from 1990 to 1994 and the comparable scale scores for choice parents from the June surveys of 1991 to 1995.

Of the four types of parental involvement we measured, three increased significantly in the private choice schools from already very high levels (Table 6, Scales A2, A3, and A4): parents contacting schools, schools contacting parents, and parental involvement in organized school activities. The increases were significant at the .001 level. The one exception was parental involvement in educational activities at home (Scale A5); for that scale, involvement increased, but the increase was not statistically significant. These results were confirmed independently in each of the five years of the program.

**Parental Attitudes**

*Parental Satisfaction with the Choice Schools*. In all five years, parental satisfaction with choice schools increased significantly over satisfaction with prior public schools. School satisfaction is indicated by Scale A6 in Table 6. As described above, choice parents' satisfaction with their prior schools was significantly lower than the satisfaction of the average or low-income MPS parent. Reported satisfaction with the choice schools is indicated in these tables as "Choice Private School 1991-95" and was measured on the spring surveys in each year.

Another indication of parent satisfaction was the grade parents gave for their
children's school. On the follow-up survey in June of each year, we asked parents to grade
the private school their children attended. The grades, which differ substantially from the grades parents gave their prior MPS schools, are given in Table 7. For
the five-year period, the average prior grade (on a scale where an A is 4.0) improved from
2.4 for prior MPS schools to 3.0 for current private schools. The overall grades were
relatively consistent for each year and were always above the grades given to prior MPS
schools.

*Attitudes Toward the Choice Program*. Follow-up surveys in June of each
year asked parents of choice students if they wanted the Choice Program to continue.
Respondents almost unanimously agreed the program should continue. Over the five years,
98% of the respondents felt the program should continue.

**Attrition From the Program**

*Attrition Rates*. One of the issues concerning the Choice Program is the rate of attrition from the program. Attrition rates, calculated with and without alternative high school programs, are presented in the last two rows of Table 1. Attrition
from the program was not inconsequential, although there appears to be a downward trend
that leveled off in the last year. Overall attrition, defined as the percentage of
students who did not graduate and who could have returned to the choice schools, was 46%,
35%, 31%, 27%, and 28% over the five years. Excluding students in alternative programs,
the rates were 44%, 32%, 28%, 23%, and 24%. Whether these attrition rates are high or not
has been discussed at length in prior reports. A Legislative Audit Bureau report on the
program last year concluded it was a problem. We interpreted it as a problem for both
public and private schools in that the public school student mobility rate was similar in
magnitude to the attrition rate from choice schools.
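
The attrition arithmetic above is simple enough to state explicitly. A minimal sketch, using hypothetical counts chosen only to reproduce a 46% first-year rate, not the program's actual enrollment figures:

```python
def attrition_rate(could_return, returned):
    """Percent of students who could have returned to a choice school
    (i.e., did not graduate) but did not re-enroll the following fall."""
    return 100.0 * (could_return - returned) / could_return

# Hypothetical counts (not the report's data): 341 eligible to return,
# 184 actually re-enrolled.
rate = attrition_rate(could_return=341, returned=184)  # about 46%
```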

*Why Students Did Not Return to the Choice Schools*. Because analysis of
the causes of attrition was not part of the original study design, by the time we realized
how many students were not returning after the first year, we were unable to follow up
with non-returning families. That was rectified in the following years by using very brief
mail surveys or telephone interviews. The surveys and interviews simply asked where the
students were currently enrolled in school (if they were), and (open ended) why they left
the choice school. The response rate to our inquiries has been 39%. This rate of return
was slightly lower than for the other surveys. The normal problems of mailed surveys were
compounded by the fact that we did not know who would return until enrollment actually
occurred in September. Thus in many cases addresses and phone numbers were not accurate.
The largest bias in our responses was undoubtedly families who moved out of the Milwaukee
area and did not leave forwarding addresses. Telephone searches were impossible for that
group. Our results should thus be treated with some caution.

The reasons parents gave for leaving are presented in Table 13. Approximately 18% of the responses indicated child or family specific reasons--including moving. We suspect that this category is underestimated since we undoubtedly were not able to locate as high a proportion of families who moved out of the area.

Almost all of the remaining respondents were critical of some aspect of the Choice
Program or the private schools. The leading problems with the program were transportation
problems, difficulties in reapplying to the program, problems with extra fees charged by
some schools, and the lack of religious training, which was not allowed by the statute.
Within-school problems most often cited were unhappiness with the staff, usually teachers,
dissatisfaction with the general quality of education, and perceptions that discipline was
too strict. The lack of special programs, which might have been available elsewhere, was
also cited in 6% of the responses.

**Outcome Summary**

Outcomes after five years of the Choice Program remain mixed. Achievement change scores have varied considerably over the first five years of the program. Choice students' reading scores increased in the first year, fell substantially in the second year, and remained approximately the same over the next three years. Because the sample size was very small in the first year, the gain in reading was not statistically significant, but the decline in the second year was. In math, choice students' scores were essentially flat in the first two years, recorded a significant increase in the third year, and then declined significantly in this last year.

MPS students as a whole gained in reading in the first two years, with a relatively small gain in the first year being statistically significant. There were small and not significant declines in the last two years. Low-income MPS students followed approximately the same pattern, with none of the changes approaching significance. Math scores for MPS students were extremely varied. In the first year there were significant gains for both the total MPS group and the low-income subgroup. In the second year, the scores were essentially flat, but in the third year they declined significantly. Again, in the fourth year there was essentially no change in either the total MPS or low-income MPS groups.

Regression results, using a wide range of modeling approaches, including yearly models and a combined four-year model, generally indicated that choice and public school students were not much different. If there was a difference, MPS students did somewhat better in reading.

Parental attitudes toward choice schools, opinions of the Choice Program, and parental involvement were very positive over the first five years. Attitudes toward choice schools and the education of their children were much more positive than their evaluations of their prior public schools. This shift occurred in every category (teachers, principals, instruction, discipline, etc.) for each of the five years. Similarly, parental involvement, which was more frequent than for the average MPS parent in prior schools, was even greater for most activities in the choice schools. In all years, parents expressed approval of the program and overwhelmingly believed the program should continue.

Attrition (not counting students in alternative choice schools) has been 44%, 32%, 28%,
23%, and 24% in the five years of the program. Estimates of attrition in MPS are
uncertain, but in the last two years, attrition from the Choice Program was comparable to
the range of mobility between schools in MPS. The reasons given for leaving included
complaints about the Choice Program, especially application and fee problems,
transportation difficulties and the limitation on religious instruction. They also
included complaints about staff, general educational quality and the lack of specialized
programs in the private schools.

**V. A BRIEF RESPONSE TO SOME OF OUR CRITICS**

Several choice supporters criticized our previous reports for outcomes they interpreted as negative for choice. Positive results were usually lauded and accepted as
gospel.^{14} They focused on two sets of results: high attrition from the program,
and test score results. Our attrition numbers are straightforward and accurate. We counted
students who could have returned to the choice schools, without being subject to random selection again, but chose not to return. These statistics were highlighted in an independent report by the
Wisconsin Legislative Audit Bureau. Many students did not stay in these schools. It is
that simple.

In terms of test scores, three points require a response. The first is the belief that inclusion of the survey variables would aid the results for choice students, and thus omitting them from the analysis biased the results against choice. In general, the excluded variables were correlated with variables already in the earlier equations and thus did not have significant effects. For the most part that is what happened in the regressions depicted in Tables 12a and 12b.

If the survey variables were to have an effect, however, the *a priori* prediction would be that inclusion would favor the public schools, not the reverse. The reason is that choice families were higher on most of the omitted variables, and we would expect those variables to be positively correlated with greater achievement. For example, we would predict that the greater a mother's education, the more her involvement in her child's education, and the higher her educational expectation for her child, the higher her child's achievement.^{15} Because those variables were higher for choice families, holding those factors constant should lower, not increase, the relative scores for choice families. And that is precisely what happened on reading scores (Table 12a).
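
The direction-of-bias argument can be illustrated with a small simulation (a sketch with invented numbers, not the study's data): when a control variable is positively related to achievement and is higher in the choice group, adding it to the model pushes the estimated choice coefficient downward.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
choice = rng.integers(0, 2, n).astype(float)    # 1 = choice student
# Assumption mirroring the survey finding: choice mothers average
# about one more year of education.
mom_ed = 12 + 1.0 * choice + rng.normal(0, 2, n)
# Simulated data-generating process: education raises scores; the
# choice indicator itself has zero true effect.
score = 40 + 0.8 * mom_ed + rng.normal(0, 10, n)

def coef(y, *cols):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack((np.ones(len(y)),) + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_omit = coef(score, choice)[1]           # mother's education omitted
b_ctrl = coef(score, choice, mom_ed)[1]   # mother's education controlled
# b_omit absorbs the education advantage (about +0.8 NCE-style points);
# b_ctrl falls back toward the true effect of zero.
```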

Second, several commentators and newspaper reports have picked up on claims that
because choice students started with low test scores, they should be expected to do worse
than other students. This is undoubtedly true in absolute terms, but not likely in terms
of *change scores.* That is why, in both regressions and descriptive statistics, we
emphasized the changes in test scores, not the cohort scores. In fact, because of
regression to the mean effects, those scoring near the bottom of a distribution on a prior
test are likely to improve more than those scoring initially higher.^{16} That means that in terms of change scores, choice students would be advantaged, not disadvantaged, by initially lower scores.

Finally, and most absurd of all, is the charge that we did not control for poor English skills, and that this biased the results against choice because of the disproportionate number of Hispanic students in the program. Unfortunately, no measure of English proficiency is available in any of the data sets. Hispanic origin (which has always been included in the analyses) is probably a good proxy, but certainly not a perfect one.

However, again, when considering *change scores*, the exact opposite result is
likely. Regression to the mean will also occur for these students and thus favor students
with poor English on a second test. These students, however, will also be getting a year
of instruction in English between the tests. Think of a child coming into a lower grade
knowing little English. The student will obviously not do well on either the initial
reading or math tests (because math tests also require reading). The student is then
immersed in English for the first time. Relative to English speakers, what will the *change
score* look like after one year?

The analysis of the critics is so blatantly erroneous that it seems as though either
they have little statistical or research training, or they simply do not care and are
motivated by other factors, or both.

**VI. CONCLUSIONS AND RECOMMENDATIONS**

We ended the *Fourth Year Report* by summarizing the positive and negative
consequences of the program. We then wrote:

Honorable people can disagree on the importance of each of these factors. One way to
think about it is whether the majority of the students and families involved are better
off because of this program. The answer of the parents involved, at least those who
respond to our surveys, was clearly yes. This is despite the fact that achievement, as
measured by standardized tests, was no different than the achievement of MPS students.
Obviously the attrition rate and the factors affecting attrition indicate that not all
students will succeed in these schools, but the majority remain and applaud the program.

Although the achievement test results in this analysis may be somewhat bleaker for choice, the differences are not very large in terms of their impact, and the negative estimates are based on less stable models and smaller sample sizes. In addition, test scores are only one indication of educational achievement. Thus we see no reason to change last year's conclusion.

Our recommendations are no different from those offered in previous years. For those interested, they are stated in their entirety in the *Fourth Year Report*.

^{1 }Daniel McGroarty, "School Choice Slandered," *The Public
Interest* (No. 17), Fall, 1994; Paul E. Peterson, "A Critique of the Witte
Evaluation of Milwaukee's School Choice Program," Center for American Political
Studies, Harvard University, February 1995, unpublished occasional paper.

^{2 }This change is extremely important because most students have been
admitted to the Choice Program in those grades. Private schools in general prefer to limit
lateral entry at higher grades and therefore most have a grade structure which has many
more students in the lower grades than the upper grades.

^{3} Table 6 contains information on "scales" which are sets of
questions measuring an underlying concept. The exact questions for the scales appear in
the *Fourth Year Report* (Tables A1-A6). We created simple scale scores by adding
together responses to each item. Table 6 contains statistics on the scales, defines the
scale direction, and reports the Cronbach Alpha statistic for the scale. Cronbach Alpha is
a measure of how well the items form a scale. It has a maximum value of 1.0 when all items
are perfectly correlated with each other.
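
For readers unfamiliar with the statistic, Cronbach's Alpha can be computed directly from item responses. A minimal sketch (the item data below are invented, not the survey's):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: a list of equal-length response lists, one per scale item.
    Alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(responses) for responses in zip(*items)]
    item_var = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three perfectly correlated items (identical responses) yield alpha = 1.0,
# the maximum the footnote describes.
a = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```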

^{4} Test scores were not available for all students in either group because
tests were not given every year in MPS. Therefore, there were no tests for 4- and 5-year-old kindergartners, and few for first-grade students. Lateral entry into higher grades could
also have missed some students because primary testing was in grades 2, 5, 7, and 10. For
the few high school students in the Choice Program, the 10th grade test was excluded
because very few of these students were tested and because students were entering
alternative schools (schools for students contemplating dropping out of school or pregnant
teenage students).

^{5} A number of the tests taken in MPS were dictated by rules for the federal
Chapter I program which required testing in every grade in reading and math using a
standardized achievement test. In 1993, the federal regulations changed from requiring
total math, consisting of three subtests, to just "problem solving." With that
change, MPS dropped Chapter I testing using all three subtests for some students.
Fortunately, the correlation between the Problem Solving Component and the Total Math
score is .88. We were able to use an estimated regression model with Problem Solving
estimating Total Math for students taking just the Problem Solving portion. The details of
this procedure were described in Technical Appendix F of the *Fourth Year Report*.
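
The estimation procedure described here amounts to a one-predictor regression fit on students who took both portions, then applied to students with only a Problem Solving score. A sketch with invented NCE pairs (the actual coefficients are in Technical Appendix F of the *Fourth Year Report*):

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x with a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical (Problem Solving, Total Math) NCE pairs for students
# who took all three subtests; not the report's data.
ps  = [30, 40, 45, 55, 60, 70]
tot = [32, 41, 44, 56, 58, 69]
a, b = fit_line(ps, tot)

def predict_total(ps_score):
    """Estimate Total Math when only Problem Solving was taken."""
    return a + b * ps_score
```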

^{6} Because sample sizes were relatively small for choice students, the most
reliable statistic in these tables is the mean Normal Curve Equivalent (NCE). Median and
percent at or above 50% National Percentile Rankings are included because these statistics
are routinely reported by MPS. Because a number of students may be bunched together with
small samples, both of these numbers are volatile.

^{7} There is almost no dropping out at the elementary level. Drop-out rates
are also extremely low in middle schools. In MPS suspensions are also rare in these grades
and the policies and reporting vary considerably from school to school. For example,
student fighting, which leads to a suspension for up to three days in most of the private
schools, may result in a student being sent home in MPS. Whether that becomes an official
suspension or not may depend on the principal and the reactions of the child or parents.
The numbers of official expulsions are even smaller than dropouts or suspensions. See John
F. Witte, "Metropolitan Milwaukee Dropout Report," Report of the Commission on
the Quality and Equity of the Milwaukee Metropolitan Public Schools, 1985.

^{8} Please note that the cohort population described in the last section is
not identical with students for whom we have change data from one year to the next. Thus Tables 9 and 10 are not directly comparable.

^{9} Normal Curve Equivalents are used because National Percentile Rankings are
not interval level data. One of the problems with the transformation from NPRs to NCEs is
that the very lowest and highest ends of the distribution are compressed. This tends to
inflate very low-end scores and deflate very high-end scores. The lower end inflation may
affect this population, which has quite a few test scores below the 10th National
Percentile. For later analysis, if sample sizes become large enough, we will also analyze
scores by grade using the Iowa Test Standard Scores, which are interval level but do not
have this compression effect. NCEs are, however, the national standard for reporting
results across populations and grades for Chapter I and other programs.
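
The NPR-to-NCE conversion this footnote refers to is the standard one, NCE = 50 + 21.06·z, where z is the normal deviate of the percentile. A small sketch shows the low-end inflation and high-end deflation described above:

```python
from statistics import NormalDist

def npr_to_nce(npr):
    """Normal Curve Equivalent: 50 + 21.06 * z, where z is the normal
    deviate of the National Percentile Rank (standard definition)."""
    return 50 + 21.06 * NormalDist().inv_cdf(npr / 100)

low = npr_to_nce(5)    # about 15.4: inflated relative to NPR 5
mid = npr_to_nce(50)   # exactly 50: the middle is unchanged
high = npr_to_nce(95)  # about 84.6: deflated relative to NPR 95
```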

^{10} All MPS results in this and last year's report use the estimated math
score. In the third year we reported the estimated scores in footnotes and appendices. We
changed because we feel the more accurate method is to include the estimated scores to
prevent an income bias associated with Chapter I eligibility.

^{11} These regressions are equivalent to a change-score model with the prior tests weighted by their B values (rather than constrained to 1.0). This can be seen by simply rewriting the regression equations, subtracting the prior tests from each side of the equation.
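
The algebra referred to in this footnote can be written out. Taking a post-test regression with prior test score, other covariates, and generic coefficients (the symbols below are illustrative, not the report's notation):

```latex
\begin{align*}
Y_{\text{post}} &= \alpha + \beta\, Y_{\text{pre}} + \gamma X + \varepsilon \\
Y_{\text{post}} - Y_{\text{pre}} &= \alpha + (\beta - 1)\, Y_{\text{pre}} + \gamma X + \varepsilon
\end{align*}
```

When $\beta = 1$ the prior-test term vanishes and the second line is exactly a simple change-score regression; the fitted models instead let $\beta$ be estimated from the data.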

^{12} See John F. Witte, "A Reply to Paul Peterson," unpublished
paper, February 1995.

^{13 }A final set of regressions, correcting for selection into and out of the
choice and MPS samples, is currently being done. Those corrections require quite strong statistical assumptions and will be interpreted with caution. However, preliminary results
indicate that taking into account primarily students not taking tests in a given year
(because they left the respective systems or were not tested if present), both MPS and
choice scores are corrected upward, but MPS scores are corrected upward considerably more.
The reason is that non-Chapter I, MPS students were not as likely to be tested each year
and White, non-low-income students were more likely to leave the system.

^{14 }The McGroarty article may be the most notable for this. After spending
pages demonstrating how choice opponents use our reports to support their positions, and
criticizing test scores and attrition along the lines indicated below, he ends with an
unqualified litany of the positive results we "demonstrated" (positive parental
attitudes, parental involvement, etc.). See McGroarty, 1994.

^{15} The one exception to this is the income variable. Choice families have
lower income than MPS families and we assume income is related to higher achievement. That
variable, however, was already in the earlier equations as an indicator variable defined
as qualifying for free lunch. The survey variable may add accuracy, but essentially that
control was already in place.

^{16} Regression to the mean, taught in first-semester statistics courses, occurs because of measurement error in tests. The easiest way to think about it is that the score a person actually records varies randomly around the person's true score. A person near the bottom of the distribution most likely recorded a score pulled below the true score by negative error; because the error on a second test is just as likely to be positive as negative, the probability is that the recorded score will go up, not down. The same is true in reverse for people near the top of the distribution: if they are at 98 on the first test, the likelihood is that they will go down on the second try. Thus those near the bottom are likely to move upward, toward the mean, and those at the top the reverse.
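
The footnote's point is easy to verify with a small simulation (a sketch under classical measurement-error assumptions, not the study's data): draw true scores once, add independent error for two test administrations, and look at the average change for students who scored low the first time.

```python
import random

random.seed(1)
# Stable underlying ability; two noisy test administrations of it.
true = [random.gauss(50, 15) for _ in range(20000)]
test1 = [t + random.gauss(0, 10) for t in true]   # observed, year 1
test2 = [t + random.gauss(0, 10) for t in true]   # observed, year 2

low = [i for i, s in enumerate(test1) if s < 30]  # bottom scorers, year 1
avg_gain = sum(test2[i] - test1[i] for i in low) / len(low)
# avg_gain is positive even though nothing about ability changed:
# low first-year scores were partly bad luck that does not repeat.
```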