
Category Archives: NYS APPR

Developing Leadership

This article was originally published on 10 September in the Washington Post under the title “Good or Bad? New rating system can’t decide about this principal.”

 

Sean C. Feeney
Principal, The Wheatley School
President, Nassau County High School Principals Association
Co-author, Letter of Concern about NYS APPR Legislation

www.newyorkprincipals.org

Beginning last week, superintendents in New York State’s public school districts received the “Growth Scores” for teachers of grades 4-8 and for principals serving grades 4-8 and 9-12. These scores are based on the new Common Core test results, which saw a 30-point drop in the percentage of students classified as “proficient.” Although our Commissioner of Education has repeatedly tried to assure the public (http://www.engageny.org/resource/parent-resources-grades-3-8-ela-mathematics-tests) that the dramatic drop in student scores does not reflect negatively on student, teacher or school performance, this message flies in the face of reality. As the State continues to “build its plane in the air” (http://www.washingtonpost.com/blogs/answer-sheet/post/the-dangers-of-building-a-plane-in-the-air/2011/09/30/gIQAojqWAL_blog.html), those of us who work in schools are seeing how destructive these poorly planned initiatives have been.

As thousands of educators across New York State have publicly indicated, the excessive testing and use of student scores to rate teachers, principals and schools is misguided, not based in sound research and rushed in its implementation. These facts are ignored at the peril of our students and schools.

The more results we see from New York State’s initiatives, the more serious problems and contradictions are revealed. Looking deeper into the recently released scores, one can see at least three glaring problems:

1. State-Provided Scores are Not Reflective of Reality

As the principal of an 8-12 school, I receive two separate growth scores from the State. According to the State’s growth measure for 4-8 principals, I have been classified as a “Developing” principal, one step above the “Ineffective” rating. According to the growth measure for 9-12 principals, I am an “Effective” principal. The notion that one can look at a single grade and extrapolate a rating for an educator is nonsensical and bad statistical practice. It is also not reflective of the educational offerings we provide our students across grades 8-12.

Our school programs and offerings reflect a supportive community that has high expectations for its children. We offer robust music, theatre and art programs, and students have ample opportunity to participate in student clubs and athletics. Our school has a strong commitment to community service, with faculty and students all participating in a day-long Day of Service and Learning. I have the blessing of working with a wonderfully talented faculty. As the principal, I have been smart enough to listen and offer support all the time, to stay out of the way sometimes, and to push hard at other times.

So how has our school done preparing students for college? Our most recent State report card reflects a graduation rate of 100%, with 89% of our students earning the higher Advanced Designation diploma. These rates are far above the New York State averages of 74% and 30%, respectively. Our school is highly ranked on national lists of top high schools. Virtually every graduate attends college, with over 90% of them attending a four-year college. Students have the opportunity to participate in career mentoring, science research and mathematics research. Every school in our country should have the support and programs we are able to provide our students. How do these two scores, as well as the scores that my teachers received, reflect our work in preparing all students for college and careers?

2. State-Provided Scores are Not Consistent

In one of the many memoranda to schools, Commissioner King boasts that “about three-quarters of individual teachers will earn the same or better HEDI rating than they did in 2011-12” (http://www.engageny.org/sites/default/files/resource/attachments/growth_score_release_letter_to_superintendents.pdf). This is not something that should make our Commissioner proud! If 75% of teachers earned the same or better rating than last year, then about 25% of teachers earned a worse rating than they did in 2011-12. No matter how it is viewed, this amounts to an alarming amount of movement in a model that purports to measure “teacher effectiveness.” Did that many teachers become worse at their craft from one year to the next? Which measure is accurate: last year’s or this year’s? Of course, this lack of intertemporal stability in value-added measures is one that researchers have identified for years (http://www.urban.org/UploadedPDF/1001266_stabilityofvalue.pdf). Clearly, there is a problem with the model. A system that purports to be objective but results in teachers bouncing from Ineffective to Effective, or from Highly Effective to Developing, in a single year is a capricious, inconsistent system.
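To see how easily this kind of churn can arise, consider a toy simulation of my own devising (an illustration only, not the State’s actual growth model): every teacher’s true effectiveness is held perfectly constant across two years, yet measurement noise alone shuffles a large share of them across rating bands.

    # Toy simulation (my construction, not NYSED's growth model): hold each
    # teacher's true effectiveness fixed, add measurement noise of the same
    # size, and count how many ratings change between two years.
    import random

    random.seed(1)
    N = 10_000  # hypothetical teachers

    def band(score):
        # Hypothetical cut points standing in for the four HEDI bands.
        if score < -1.0:
            return "Ineffective"
        if score < 0.0:
            return "Developing"
        if score < 1.0:
            return "Effective"
        return "Highly Effective"

    changed = 0
    for _ in range(N):
        true_quality = random.gauss(0, 1)          # identical in both years
        year1 = true_quality + random.gauss(0, 1)  # noisy estimate, year 1
        year2 = true_quality + random.gauss(0, 1)  # noisy estimate, year 2
        if band(year1) != band(year2):
            changed += 1

    print(f"{changed / N:.0%} of teachers changed rating bands")

In runs of this sketch, well over a third of the simulated teachers change bands even though, by construction, not one of them changed at all. Noise of the magnitude researchers have documented in value-added measures is enough to produce exactly the churn the Commissioner describes.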

3. State Measures Contradict Each Other

My “Developing” rating as a 4-8 principal is based on the performance of our 8th graders on the Common Core examinations administered in April 2013. As our Deputy Commissioner explained in the infamous March memo (http://www.engageny.org/resource/field-memo-transition-to-common-core-assessments), only students achieving at a level of 3 or 4 on these exams are considered to be on a college and career trajectory. (This is the same memo that informed us that only one-third of New York State students would demonstrate proficiency on exams that were still weeks away from even being taken!) Well, how did our 8th graders do on the mathematics examination? Only 39% of them earned a level 3 or level 4 designation. Clearly, I must be an ineffective leader to have such low performance among my students!

Not so fast. At our school, 92% of our 8th graders also took the high-school Integrated Algebra examination. Our overall passing rate on this exam was 97% last year. The passing rate of the 8th graders who took the Algebra Regents examination was over 99%. A few years ago, the State established “College and Career Readiness” passing thresholds for these exams. Despite the fact that these thresholds and the correlational study on which they are based have been discredited (http://roundtheinkwell.com/2011/10/20/letter-to-the-new-york-state-regents-on-ccr), Commissioner King insists that only students who meet the Aspirational Index score of at least 80 on the Integrated Algebra examination are college and career ready. So how did our 8th Graders do against this measure?

Nearly 74% of them scored at this higher threshold. Of even greater concern is the fact that nearly 60% of the students who earned Level 1 or Level 2 scores on the 8th grade assessment passed the Integrated Algebra Regents at the “college and career ready” threshold of 80. How do I tell a student that he was not proficient in April, but met the Commissioner’s “College Ready” standard for graduation two months later?

In their desire to earn Race to the Top funding, our State Education officials have created a system of contradictions, mixed messages and harmful outcomes. This is what happens when one throws an entire state into a chaotic system of mandates and practices that are not thoughtfully planned and are simply not grounded in best practices. Yet I’m the one being labeled as a developing leader.

More than a Number Panel Discussion

On Wednesday, April 10th, a panel of educators presented an overview of the impact that our current regimen of high-stakes testing is having on our students and schools. Panelists included:

  • Dr. William Johnson, Superintendent of Schools, Rockville Centre
  • Dr. Carol Burris, Principal, South Side High School
  • Dr. Sean C. Feeney, Principal, The Wheatley School
  • Ms. Sharon Fougner, Principal, EM Baker Elementary
  • Dr. Don Sternberg, Principal, Wantagh Elementary
  • Nikhil Goyal, Student Author and Activist
  • Dr. Lola Nouryan, Psychologist
  • Leonie Haimson, Parent

The guiding presentation can be downloaded through the following link: More than a Number Updated 10April13

Where is our Flexibility?

Recently, John King, the New York State Education Commissioner, testified before a US Senate Committee about the reauthorization of the No Child Left Behind law (you can view his testimony through this link: http://www.scribd.com/doc/124387499/Esea-Testimony). In his testimony, Commissioner King urged the Senate panel to include a federal requirement for teacher and principal evaluation. “It would be helpful in the potential reauthorization to set a few clear, bright-line parameters, and then to give states flexibility to adapt those parameters to their context,” King testified.

What might those “parameters” be? Well…those of us who have followed the recent “education reform” efforts know these parameters to be veritable pillars of faith. Namely: 1) the inclusion of student performance in the evaluations of teachers and principals, and 2) the use of these performance evaluations when making employment or salary decisions. This is exactly what Commissioner King requested in his testimony, adding that states must be afforded flexibility to adapt these parameters to the specific needs of the state.

John King wants to make sure that the federal government provides him with the flexibility necessary to adapt its mandates to the unique needs of New York State. Yet as State Education Commissioner, King’s own fixed views of what works best in schools have denied local districts any real flexibility in demonstrating high levels of student achievement and teacher competence.

What’s good for the goose is not good for the gander, it seems.

Last month, local news reports were filled with descriptions of the failure of the New York City teachers union (UFT) and the Bloomberg administration to reach an agreement regarding New York State’s APPR legislation. Lost in the battle of blame was the fact that, no matter what local agreement was reached regarding the implementation of APPR, the State Education Department could ultimately order a “corrective action plan” if it did not like the results of a local district’s APPR plan. As described in the second paragraph of the cover letter of every district’s approved APPR plan (see http://usny.nysed.gov/rttt/teachers-leaders/plans/home.html), NYSED assures the district superintendent that such a corrective action plan will be required if an analysis of data reveals:

“unacceptably low correlation results between the student growth subcomponent and any other measures of teacher and principal effectiveness and/or if the teacher or principal scores or ratings show little differentiation across educators and/or the lack of differentiation is not justified by equivalently consistent student achievement results”

Ah yes…those student growth scores! Despite the numerous studies and explanations showing their unreliability (see, for example, http://schoolfinance101.wordpress.com/2012/11/17/air-pollution-in-ny-state-comments-on-the-ny-state-teacherprincipal-rating-modelsreport/), these student scores trump the judgment of the professionals in the school. If a district’s evaluations of teachers do not correlate with the teacher scores that NYS generates through its unreliable and erratic value-added (student growth) model, the problem must lie with the district, which in turn must be forced to change its methods for evaluating teachers. No flexibility there!

My colleague, Carol Burris, described the first part of a training series she attended recently in order to prepare her for Calibration Day. Ostensibly, the purpose of the training was to remove all biases from the observation process in order to achieve the holy grail of a fully objectified observation process. Despite her years of experience supervising teachers and leading one of the country’s highest achieving schools, Carol had to spend the day being told how to conduct a proper observation free of any bias. Of course, the biases being removed were the ones identified by the Master Coder, who is an unidentified individual somewhere in Albany. There was no discussion at this training, no dialogue about the flexibility needed to teach and supervise different classes based on the numerous variables related to a class (e.g., student composition, time of day).

Gone are the days of dialogue in New York State; instead, we are repeatedly reminded that SED has the keys to what works in schools. We are told to simply follow along so that our students will find success. Are you wondering how to evaluate teachers and principals? Forget what years of practice and research tell us about what works. Instead, you must follow SED’s APPR guidelines. Forget what research tells us about developing a positive and collaborative culture (http://www.ascd.org/publications/educational-leadership/oct09/vol67/num02/Creating-Collaborative-Cultures.aspx). Instead, expect to be told each year that a specific percentage of your faculty is ineffective and must be removed. The collective voices and experience of some of this country’s most effective educators are being ignored. Don’t think; follow.

Recently, New York State announced an exciting grant opportunity “to provide funds to support the dissemination of effective practices and programs that have been developed, tested, and proven successful” in schools (http://www.p12.nysed.gov/funding/currentapps.html#nycs_dissemination). With eighteen of the top hundred high schools in the country (as ranked by Newsweek: http://www.thedailybeast.com/newsweek/2012/05/20/america-s-best-high-schools.html), New York would seem well positioned for some of its high-achieving public schools to share their best practices through such a grant. But wait! These grants are not for public schools to share their practices. Rather, the grant funds “are made available to assist charter schools in disseminating their successful innovations to any district school(s) in New York through designated partnerships.” Once again, in John King’s rigid view of what works in schools, charter schools are the answer; public schools have little to share regarding best practices. This is not what some of the most recent research on charter schools tells us (http://blogs.edweek.org/edweek/inside-school-research/2013/02/charters_struggle_with_staffing_like_traditional_schools_study_finds.html). What message is being sent to the very people our Commissioner is supposed to be leading?

Why does the Commissioner of Education refuse to engage in any real dialogue about successful practices in New York State schools? Why are the deep concerns of nearly 1600 New York State principals (see www.newyorkprincipals.org) dismissed as simply the anxieties of individuals faced with change (http://www.nytimes.com/2011/11/28/education/principals-protest-increased-use-of-test-scores-to-evaluate-educators.html?pagewanted=all)? Why is he asking for the very type of flexibility from the federal government that he refuses to accord districts in New York?

Meet Ashley, a great teacher with a bad ‘value-added’ score

Originally published online at the Washington Post’s Answer Sheet blog

New York State schools are back in session! With the new school year comes a new responsibility for principals across the state: the need to inform teachers of their “growth score” based on the New York State assessments their students took in the spring. This teacher growth score is one part of the New York State APPR system that was implemented last year in a rushed manner, against the very public objection of over one-third of New York State principals along with thousands of teachers, administrators, parents and concerned citizens (see www.newyorkprincipals.org for more information).

These state-supplied scores were the missing piece in a teacher’s final end-of-year score, potentially determining whether or not a teacher is deemed Ineffective and therefore must be placed on a Teacher Improvement Plan (TIP) within 10 days of the start of the school year. These scores were not available to schools until the third week of August. So there you have it: high-stakes information that can have a serious impact on a teacher’s career, supplied well past any reasonable timeframe. Welcome to New York’s APPR system!

As a principal, I sat with each of the teachers who received a score from the state and tried to explain how the state arrived at these scores out of 20 points. One of the first teachers with whom I did this was Ashley.

Ashley is the type of teacher that all parents want for their child: smart in her content area and committed to making a difference in her students’ lives. Ashley works incessantly with her students, both inside and outside of the classroom.

During her free time, Ashley can always be found working with small groups of students in the hallways or in any free space in the area. She has taken our school’s math team on weekend trips as it has found success in various competitions. Over the past four years, 91% of her 179 Algebra 1, Geometry or Algebra 2/Trigonometry students have passed the corresponding Regents examination on their first attempt.

At the end of every year, students and parents send in countless notes of thanks to Ashley for her tireless efforts. Ashley has worked with our highest achieving students as well as many of those who struggle with mathematical understanding. For those who struggle, Ashley has a well-deserved reputation for making them more confident, successful and comfortable with the material. Last spring, Ashley was recognized as the Parent Teacher Organization teacher of the year.

So what score did the state assign Ashley? Well, she earned a score of 7 out of 20 points. According to the state’s guidelines, this makes Ashley a Developing teacher. Goodness. To those of us who know Ashley and have had the pleasure of working with her over the years, this is a jaw-dropping result. Ashley’s score defies all understanding of who she is as an educator. Her score flies in the face of how she is valued in our school and what she has done for students in our school. Her score contradicts the thoughtful evaluations given to her over the past five years.

How, then, is one to understand this score?

Officials at our State Education Department have certainly spent countless hours putting together guides explaining the scores. These documents describe what they call an objective teacher evaluation process that is based on student test scores, takes into account students’ prior performance, and arrives at a score that is able to measure teacher effectiveness. Along the way, the guides are careful to walk the reader through their explanations of Student Growth Percentiles (SGPs) and a teacher’s Mean Growth Percentile (MGP), impressing the reader with discussions and charts of confidence ranges and the need to be transparent about the data. It all seems so thoughtful and convincing! After all, how could such numbers fail to paint an accurate picture of a teacher’s effectiveness?

(One of the more audacious claims in these documents is that the development of this evaluative model is the result of the collaborative efforts of the Regents Task Force on Teacher and Principal Effectiveness. Those of us who know people who served on this committee are well aware that the recommendations of the committee were either rejected or ignored by State Education officials.)

One of the items missing from this presentation, however, is an explanation of how State officials translated SGPs and MGPs into a number from 1 to 20. To find out how the State went from MGPs to a teacher effectiveness score out of 20 points, one needs to refer to the 2010-11 Beta Growth Model for Educator Evaluation Technical Report. Why a separate document for explaining these scores? Most likely because few State officials are fluent in the psychometrics necessary to explain how this part of our APPR system works.

It is incredible that the state feels it is perfectly fine to use a statistical model still in its beta phase to arrive at these amorphous teacher effectiveness scores. I make it a point not to use beta software on my computer, for I do not want something untested and filled with bugs to contaminate the programs that are working fine on my machine. It is a shame that the State does not share the same caution regarding its reform initiatives.

As explained in the technical paper, the SGP model championed by New York State claims to account for students who are English Language Learners (ELL), students with disabilities (SWD) and even economically disadvantaged students as it determines a teacher’s adjusted mean growth percentile. While the statistical machinery underlying the SGP model is carefully developed, nowhere do the statisticians justify the underlying cause of any measured change in student scores. In other words, what is the research basis for attributing any change in score from year to year to the single variable of the teacher? The reason this is never explained is that there is virtually no research justifying the attribution of a change in a student’s score from one year to the next to the teacher alone.

So if it is not solely the teacher who caused the change in score, to what should one attribute a change in a student’s score? Well, that is a question that continues to challenge statisticians and educational researchers. Despite the hopes and declarations of so many of our present-day “reformers,” we simply do not have the tools necessary to quantify the impact a single teacher has on an individual student’s test score over the course of time. Derek Briggs presented a critique of the use of SGPs in this paper.
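For readers who want to see what these calculations do, and do not, take into account, here is a deliberately oversimplified sketch of the core SGP idea: rank each student’s current score against peers who had the same prior score, then average those percentiles for each teacher. The data and names below are hypothetical, and the State’s actual beta model is far more elaborate (quantile regression with the ELL, SWD and economic-disadvantage covariates noted above). The point is that nothing in either version identifies the teacher as the cause of the measured growth.

    # Oversimplified sketch of the SGP/MGP idea (hypothetical data and names;
    # NYSED's beta model uses quantile regression with extra covariates, not
    # this simple ranking). Nothing here establishes WHY a score changed.
    from collections import defaultdict
    from bisect import bisect_left

    # (prior_score, current_score, teacher) for each student -- toy data.
    students = [
        (70, 78, "Ashley"), (70, 62, "X"), (70, 70, "Y"),
        (85, 90, "Ashley"), (85, 81, "X"), (85, 95, "Y"),
    ]

    # A student's SGP is the percentile rank of their current score among
    # peers who started from the same prior score.
    peers = defaultdict(list)
    for prior, current, _ in students:
        peers[prior].append(current)
    for scores in peers.values():
        scores.sort()

    def sgp(prior, current):
        scores = peers[prior]
        return 100 * bisect_left(scores, current) / len(scores)

    # A teacher's MGP is the mean SGP of their students; the State then maps
    # an adjusted MGP onto the 1-20 subcomponent scale (not modeled here).
    by_teacher = defaultdict(list)
    for prior, current, teacher in students:
        by_teacher[teacher].append(sgp(prior, current))

    for teacher, values in by_teacher.items():
        print(teacher, "MGP =", round(sum(values) / len(values), 1))

Everything else that shaped those scores, from vacation schedules to curriculum mismatches, is simply absent from the calculation.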

How can one explain Ashley’s shockingly low score, however? As a principal who has always availed himself of data when evaluating teachers, I would sit down and have a conversation about the test results so that I could put them in context. Here is what we know about the context of Ashley’s score:

* This year, Ashley’s score was based on her two eighth-grade classes, not the results of her Regents-level classes.

* The two eighth-grade classes followed different curricula: one was an Algebra course and the other a Math 8 course.

* The Algebra course is geared towards the Regents exam, a high-school-level assessment that is beyond the mathematical level of the NYS Math 8 examination. Ninety-one percent of Ashley’s students in this class passed the Regents Algebra 1 examination. The Math 8 exam covers different content, which can make it a challenge for some of our weaker Algebra students. In fact, one-quarter of the students who took the Algebra course passed the Regents examination but scored below proficiency on the Math 8 exam.

* In the two weeks prior to the three-day administration of the Math 8 exam in April 2012, students in Ashley’s class had one week of vacation followed by three days of English testing; as a result, Ashley saw her class only three times before the exam began.

Rather than place the student results in context, the State issued a blind judgment based on data that was developed through unproven and invalid calculations. These scores are then distributed with an authority and “scientific objectivity” that is simply unwarranted. Along the way, teacher reputations and careers will be destroyed.

Despite the judgment of the New York State Education Department, Ashley remains a model teacher in our school: beloved by students and parents; respected by colleagues and supervisors. She continues to work on perfecting her practice and helping her students gain confidence and skills. My hope, of course, is that she will continue to feel that she is part of a profession that respects teachers and students alike, not one that reduces them to a poorly conceived and incoherent number.