Monday, April 3, 2017

Guernica: On Impractical Urges, Ayana Mathis

We have a cult of success in America. We believe that if we just work hard enough, we will achieve. It is certainly better to hold these beliefs than a fatalist vision of the world in which fortunes are determined entirely by factors outside of oneself (social position, nepotism, economic status, etc.). Nonetheless, there is something naive about our way of looking at things, and cruel too, in the way children can be cruel because they are too young to have anything but an absolutist vision of the world. It isn’t always true that failure has direct correlation to insufficient grit or ambition. We resist the fact that race and class play a significant role in what we want and whether we are provided with the tools to make an attempt at getting it. The humbling, and unsettling, reality is that all obstacles are not surmountable. And in any case, is the sole objective of our lives the surmounting of obstacles so that we can come in first, like dogs in a race? This seems an impoverished vision of our human experience, more tragic and empty than any failure could ever be. But I have wandered into questions about how we might characterize a life well lived, and that is not the subject of this essay.

quote context: http://pllqt.it/OFMxyo

Wednesday, March 22, 2017

Why Are We Criminalizing Black Students?


Five steps to address racial disparities in school discipline

A recent special report in Education Week revealed serious concerns about the prevalence of school resource officers at elementary and secondary schools across the nation ("Policing America's Schools: An Education Week Analysis," Jan. 25, 2017). On the surface, the presence of law-enforcement personnel would seem to be a good step in helping to create and sustain safe learning environments for students and school personnel. However, a deeper look at the presence of SROs on school campuses raises serious concerns that reflect a pattern of racial inequities about who is policed, who is profiled, and who is punished.
Consider the fact that black students are the most likely to be punished despite often being one of the smallest student populations in many school districts across the country. Data shown in Education Week reveal that black males are three times more likely to be arrested at school than their white male peers. Black girls do not fare much better: They are arrested 1.5 times more often than their white male peers. It is not the sheer number of arrests that is so disturbing, but the disproportionality: At schools that referred students to law enforcement, black students made up 17 percent of enrollment, yet they accounted for 26 percent of all students referred. Across a majority of states, no other group has such a high arrest-to-enrollment ratio.
School resource officers are making racial disparities in discipline worse, not better, writes UCLA professor Tyrone C. Howard. (Illustration: Jared Boggess for Education Week)
The first reaction for some, on seeing disproportionality data, is to conclude that SROs are simply stationed on campuses where violent acts are most likely to occur. In reality, school resource officers are most prevalent at schools serving black students, as well as at low-income and racially segregated schools that children of color are most likely to attend. This should trouble educators. Incidents of violence are not higher at the schools black students attend, which raises serious concerns about how schools may be contributing to damaging racial profiles of particular students.
The irony of school discipline is that the increased presence of SROs is a direct result of the "zero tolerance" disciplinary policies of the early 2000s. Such policies emerged largely as a result of mass shootings on school campuses. However, over the past two decades, a majority of mass school shootings have been in largely white, middle-class, rural, or suburban communities. They were also overwhelmingly perpetrated by white males. Consider Littleton, Colo.; Paducah, Ky.; Jonesboro, Ark.; and Newtown, Conn.
The presence of resource officers seems to create a campus environment in which a school looks more like a police headquarters than a community of learning. And those officers are given the responsibility of interacting with students, typically without having any training on youth development, theories of learning, student disabilities, and overall child behavior.
To underscore the severity of the problem of SRO presence on campuses, consider that the U.S. Department of Education estimates that there are close to 31,000 resource officers or other law-enforcement officers stationed in the nation's nearly 100,000 public schools. The Education Department states that another 13,000 sworn law-enforcement officers are spending at least part of their time at schools. On some campuses, the number of officers is greater than that of school psychologists, nurses, psychiatric social workers, and learning specialists combined. It prompts the question: What is the priority? Is it to police children or to support them?
Instead of punishing students, schools might be better served by allocating their limited resources to additional mental-health services and programs rather than to SROs. Much of the behavior seen from students who engage in conflict reflects a need for intervention for depression, anxiety, bipolar disorder, or untreated trauma. More schools are adopting restorative-justice practices, which in some cases are showing positive outcomes. More resources should be devoted to such programs, which seek to help and heal students rather than criminalize them.
Finally, a sustained focus on mental-health supports and mindfulness would take significant steps toward ameliorating the chronic gaps in school outcomes that have plagued low-income students and children of color. Moreover, schools need an explicit focus on creating a school culture that neither criminalizes students nor creates an unfair racial climate. Other steps could also make a difference in changing these outcomes:
• Eliminate the criminalization of low-level behaviors that pose no public-safety threat to students, teachers, or staff, such as "willful defiance," dress-code violations, and talking back to teachers. Also, reduce the ability of school personnel to refer student-behavior cases to juvenile court for minor offenses.
• Eradicate zero-tolerance policies and develop more culturally sustaining and appropriate pedagogies centered on student communication and learning.
• Create more trauma-sensitive schools and classrooms, which identify the roots of student behavior and provide the appropriate resources for them and families.
• Ensure that suspensions, expulsions, and arrests can be used only when immediate safety threats exist and no other interventions are available.
• Provide more sustained training for school personnel and SROs on unconscious bias and racial microaggressions. In some cases, white children and children of color engage in similar types of behavior in schools, yet the responses and punishments can differ notably. Unconscious attitudes and beliefs may explain much of the racial disproportionality of school arrests.
Why do schools criminalize black students? What message does this tendency send to other students about black students? And, more importantly, what is the message that black children take away from continually being depicted as problem children? To upend the inequities, students, school leaders, and classroom teachers must discuss the data around discipline, talk openly about race, and recognize how they could be contributing to a hostile learning environment for black children. It is time for schools to be accountable to the students they serve.

Faculty Statement on Charles Murray Lecture


Tuesday, March 21, 2017

Stop Calling Some Needs ‘Special’


Divisiveness Is Not Diversity

Linus Owens, Maya Goldberg-Safir and Rebecca Flores Harper
March 17, 2017


During the coverage of the protests against Charles Murray’s recent visit to Middlebury College, something got lost in the scuffle: the actual students.

Commenters lumped all college students into a homogeneous group, an object to condemn. But college students, even at an elite place like Middlebury College, are not monolithic. Before criticizing them on the grounds of privilege, perhaps we should do what no one has done and try to understand why those who protested were so angry.

It was about Murray, true, but what other factors were involved? Allison Stanger, the Middlebury professor who moderated his remarks and was injured in the ensuing fracas, suggests protesters are reacting to their anxieties about life under Trump. However, it goes even deeper, to the contradictions of being a student of color at a predominantly white college and being asked to respond civilly while having one's humanity attacked.

This situation is more complex than just being an issue of free speech or diversity of ideas. Any effective response requires taking the students seriously -- which is, after all, the primary job of educators.

Many people deride students as coddled snowflakes who use safe spaces and trigger warnings to protect themselves against the big, bad outside world teeming with microaggressions. This image, always a caricature, could not be farther from the truth at Middlebury. The protesters, primarily students of color and working-class students, are hardly coddled. Life on the campus for them is and has historically been anything but easy. Students and former students frequently confront blatant and subtle forms of racism and classism. Students of color are often assumed to be on financial aid or are told they are only here because of affirmative action. Some professors make assumptions about their intellectual abilities or single them out in class to play the spokesperson on race issues. The overwhelming culture of whiteness and wealth leaves many working-class students or students of color feeling depressed and alienated.

To suggest that they needed a visit from Murray to expose them to “controversial” ideas is laughable and offensive. They confront racism and classism every day on campus. Moreover, they talk about race and class all the time, whether they want to or not -- in personal conversations, in the many courses exploring these subjects, at town hall forums recently held on the campus to address incidents of racial insensitivity, as well as at the numerous meetings organized in the days leading up to Murray’s visit. Those discussions all took place with the high level of civility many commenters assume cannot happen.

Civil discourse on hard issues does happen here, primarily through the labor of students of color and working-class students. It is an insult to call these students sheltered. They aspire to turn the campus not into a safe space, but simply a safer one. In this context, Murray’s divisive ideas offered a sharp rebuke to all their hard-won achievements to create a campus where they, too, feel they belong.

We must not mistake divisiveness for diversity. Conservatives seek to push debates on settled topics, using free speech as a club to reopen discussions long ago resolved. The primarily white faculty members and students at Middlebury feel comfortable welcoming “all debates” because they never worry about their own humanity being called into question.

If free speech can justify a platform for Murray, it also justifies students talking back. We don’t have to agree with the protesting students’ tactics to still recognize that the nonviolent demonstrators were defending speech just as much as the people now rushing to condemn them.

Actions have consequences. People use this claim to demand punishment, but it provides an even more compelling reason for considering the type of community we want. Middlebury will not punish abstract “college students,” but actual people, many of them students of color and/or from working-class backgrounds. Currently, many find themselves the targets of widespread harassment, bullying and attacks on social media and in the national press. Punishing them for making the moral choice to protest a racist provocateur would add another injury to the initial insult.

This current fight focuses on speech, but the true war is over diversity at colleges and universities. Controversial speakers are not the key to expanding the marketplace of ideas, contrary to what many have argued. In fact, the single most robust source of a broad and varied range of ideas on a campus is a student body and faculty composed of people from many diverse backgrounds. They will do the most to upend orthodoxy and challenge comfort levels. Treating divisiveness as a proxy for diversity is, at best, naïve. At worst, it is an active step to roll back progress.

Institutions like Middlebury need to change, but not in the way many people currently demand. Such colleges and universities cannot accept students, take their tuition and use them to market their diverse campus, and then refuse to recognize their individual needs. Doing so gives the impression that institutions do not want actual diversity to enhance learning but rather just want to look good publicly and improve their bottom line.

Colleges and universities have always needed to balance the goals of speech and inclusion. In the “good old days,” when faculty members looked like the students, who all looked like one another, this largely went unnoticed. Today, however, using old standards for a more diverse community does not work. The biggest danger now is a response from Middlebury that leads to a less diverse student body, one that is whiter and richer. Let’s not let that happen.

Bio


Linus Owens is an associate professor of sociology at Middlebury College. Rebecca Flores Harper is a 2011 graduate of the college and served as chair of diversity for the student government there from 2008-11. Maya Goldberg-Safir is a 2012 graduate.

Wednesday, March 15, 2017

Learning Styles | Center for Teaching | Vanderbilt University


by Nancy Chick, CFT Assistant Director

What are Learning Styles?

The term learning styles is widely used to describe how learners gather, sift through, interpret, organize, come to conclusions about, and “store” information for further use.  As spelled out in VARK (one of the most popular learning styles inventories), these styles are often categorized by sensory approaches:  visual, aural, verbal [reading/writing], and kinesthetic.  Many of the models that don’t resemble the VARK’s sensory focus are reminiscent of Felder and Silverman’s Index of Learning Styles, with a continuum of descriptors for how learners process and organize information:  active-reflective, sensing-intuitive, verbal-visual, and sequential-global.
There are well over 70 different learning styles schemes (Coffield, 2004), most of which are supported by “a thriving industry devoted to publishing learning-styles tests and guidebooks” and “professional development workshops for teachers and educators” (Pashler, et al., 2009, p. 105).
Despite the variation in categories, the fundamental idea behind learning styles is the same: that each of us has a specific learning style (sometimes called a “preference”), and we learn best when information is presented to us in this style.  For example, visual learners would learn any subject matter best if given graphically or through other kinds of visual images, kinesthetic learners would learn more effectively if they could involve bodily movements in the learning process, and so on.  The message thus given to instructors is that “optimal instruction requires diagnosing individuals’ learning style[s] and tailoring instruction accordingly” (Pashler, et al., 2009, p. 105).

Caution!

Despite the popularity of learning styles and inventories such as the VARK, it’s important to know that there is no evidence to support the idea that matching activities to one’s learning style improves learning.  It’s not simply a matter of “absence of evidence isn’t evidence of absence.”  On the contrary, for years researchers have tried to make this connection through hundreds of studies.
In 2009, Psychological Science in the Public Interest commissioned cognitive psychologists Harold Pashler, Mark McDaniel, Doug Rohrer, and Robert Bjork to evaluate the research on learning styles to determine whether there is credible evidence to support using learning styles in instruction.  They came to a startling but clear conclusion:  “Although the literature on learning styles is enormous,” they “found virtually no evidence” supporting the idea that “instruction is best provided in a format that matches the preference of the learner.”  Many of those studies suffered from weak research design, rendering them far from convincing.  Others with an effective experimental design “found results that flatly contradict the popular” assumptions about learning styles (p. 105). In sum,
“The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing” (p. 117).

Why Are They So Popular?

Pashler and his colleagues point to some reasons to explain why learning styles have gained—and kept—such traction, aside from the enormous industry that supports the concept.  First, people like to identify themselves and others by “type.” Such categories help order the social environment and offer quick ways of understanding each other.  Also, this approach appeals to the idea that learners should be recognized as “unique individuals”—or, more precisely, that differences among students should be acknowledged—rather than treated as a number in a crowd or a faceless class of students (p. 107). Carried further, teaching to different learning styles suggests that “all people have the potential to learn effectively and easily if only instruction is tailored to their individual learning styles” (p. 107).
There may be another reason why this approach to learning styles is so widely accepted. They very loosely resemble the concept of metacognition, or the process of thinking about one’s thinking.  For instance, having your students describe which study strategies and conditions for their last exam worked for them and which didn’t is likely to improve their studying on the next exam (Tanner, 2012).  Integrating such metacognitive activities into the classroom—unlike learning styles—is supported by a wealth of research (e.g., Askell Williams, Lawson, & Murray-Harvey, 2007; Bransford, Brown, & Cocking, 2000; Butler & Winne, 1995; Isaacson & Fujita, 2006; Nelson & Dunlosky, 1991; Tobias & Everson, 2002).
Importantly, metacognition is focused on planning, monitoring, and evaluating any kind of thinking about thinking and does nothing to connect one’s identity or abilities to any singular approach to knowledge.  (For more information about metacognition, see CFT Assistant Director Cynthia Brame’s “Thinking about Metacognition” blog post, and stay tuned for a Teaching Guide on metacognition this spring.)

Now What?

There is, however, something you can take away from these different approaches to learning—not based on the learner, but instead on the content being learned.  To explore the persistence of the belief in learning styles, CFT Assistant Director Nancy Chick interviewed Dr. Bill Cerbin, Professor of Psychology and Director of the Center for Advancing Teaching and Learning at the University of Wisconsin-La Crosse and former Carnegie Scholar with the Carnegie Academy for the Scholarship of Teaching and Learning.  He points out that the differences identified by the labels “visual, auditory, kinesthetic, and reading/writing” are more appropriately connected to the nature of the discipline:
“There may be evidence that indicates that there are some ways to teach some subjects that are just better than others, despite the learning styles of individuals…. If you’re thinking about teaching sculpture, I’m not sure that long tracts of verbal descriptions of statues or of sculptures would be a particularly effective way for individuals to learn about works of art. Naturally, these are physical objects and you need to take a look at them, you might even need to handle them.” (Cerbin, 2011, 7:45-8:30)

Pashler and his colleagues agree: “An obvious point is that the optimal instructional method is likely to vary across disciplines” (p. 116). In other words, it makes disciplinary sense to include kinesthetic activities in sculpture and anatomy courses, reading/writing activities in literature and history courses, visual activities in geography and engineering courses, and auditory activities in music, foreign language, and speech courses.  Obvious or not, it aligns teaching and learning with the contours of the subject matter, without limiting the potential abilities of the learners.

References

Tuesday, March 14, 2017

Spanking and child outcomes: Old controversies and new meta-analyses.


Gershoff, Elizabeth T.; Grogan-Kaylor, Andrew

Journal of Family Psychology, Vol 30(4), Jun 2016, 453-469.

Whether spanking is helpful or harmful to children continues to be the source of considerable debate among both researchers and the public. This article addresses 2 persistent issues, namely whether effect sizes for spanking are distinct from those for physical abuse, and whether effect sizes for spanking are robust to study design differences. Meta-analyses focused specifically on spanking were conducted on a total of 111 unique effect sizes representing 160,927 children. Thirteen of 17 mean effect sizes were significantly different from zero and all indicated a link between spanking and increased risk for detrimental child outcomes. Effect sizes did not substantially differ between spanking and physical abuse or by study design characteristics.

Risks of Harm from Spanking Confirmed by Analysis of Five Decades of Research

Thursday, March 2, 2017

From Bruce Baker


Wednesday, March 1, 2017

Monday, February 27, 2017

Dress Code

Some resources for dress code:
  • Shame: A Documentary on School Dress Code[i]. This is a documentary by a 17-year-old student, available on YouTube. This could be a text in this unit or a model for documentaries created by students.
  • “Why School Dress Codes Are Sexist,” Li Zhou (The Atlantic).[ii] This is a well-written work of journalism that covers the topic of sexism in dress codes well and serves as a strong model for public writing that uses hyperlinks as citation.
  • “Sexualization, Sex Discrimination, and Public School Dress Codes,” Meredith J. Harbach.[iii] Here, students can examine a scholarly approach to the issues of sexism and dress codes.
  • “The Unspoken Messages of Dress Codes: Uncovering Bias and Power,” Rosalind Wiseman (Anti-Defamation League).[iv] A curriculum resource and excellent overview, this can serve as a guideline for students lobbying for changes to dress codes and/or writing alternative codes that avoid bias.
  • “Baby Woman,” Emily Ratajkowski (Lenny).[v] Ratajkowski is a contemporary celebrity, model and actress, who takes a strong public position as a feminist, despite her association with provocative and sexualized media (controversial music videos and TV commercials). Her personal narrative is a strong model of the genre, but it also complicates views of feminism and female sexuality as well as objectification.


[i] Maggie Sunseri, Shame: A Documentary on School Dress Codes, YouTube, May 29, 2015, accessed February 10, 2017, https://www.youtube.com/watch?v=XDgAZO_5U_U
[ii] Li Zhou, “Why School Dress Codes Are Sexist,” The Atlantic, October 20, 2015, accessed February 10, 2017, https://www.theatlantic.com/education/archive/2015/10/school-dress-codes-are-problematic/410962/
[iii] Meredith Johnson Harbach, “Sexualization, Sex Discrimination, and Public School Dress Codes,” 50 U. Rich. L. Rev. 1039 (2016), accessed February 10, 2017, http://scholarship.richmond.edu/cgi/viewcontent.cgi?article=2275&context=law-faculty-publications
[iv] Rosalind Wiseman, “The Unspoken Messages of Dress Codes: Uncovering Bias and Power,” Anti-Defamation League, September 2014, accessed February 10, 2017, http://www.adl.org/education-outreach/curriculum-resources/c/the-unspoken-language-of-bias-and-power.html
[v] Emily Ratajkowski, “Baby Woman,” Lenny, February 16, 2016, accessed February 2, 2017, http://www.lennyletter.com/life/a265/baby-woman-emily-ratajkowski/

5 facts about crime in the U.S.


Donald Trump made crime-fighting an important focus of his campaign for president, and he cited it again during his inaugural address in January. With the White House and Justice Department announcing steps to address violence in American communities, here are five facts about crime in the United States.
1. Violent crime in the U.S. has fallen sharply over the past quarter century. There are two commonly cited measures of the nation’s crime rate. One is an annual report by the FBI of serious crimes reported to police in approximately 18,000 jurisdictions around the country. The other is an annual survey of more than 90,000 households conducted by the Bureau of Justice Statistics, which asks Americans ages 12 and older whether they were the victims of crime in the past six months (regardless of whether they reported those crimes to the police or not). Both the FBI and BJS data show a substantial decline in the violent crime rate since its peak in the early 1990s.
Using the FBI numbers, the rate fell 50% between 1993 and 2015, the most recent full year available. Using the BJS data, the rate fell by 77% during that span. It’s important to note, however, that the FBI reported a 3% increase in the violent crime rate between 2014 and 2015, including a 10% increase in the murder rate. (The BJS figures show a stable violent crime rate between 2014 and 2015, but they do not count murders.) Some experts have projected that the 2016 FBI data will show another increase in the violent crime rate – including another rise in the murder rate – when they are released later this year.
2. Property crime has declined significantly over the long term. Like the violent crime rate, the U.S. property crime rate today is far below its peak level. FBI data show that the rate fell 48% between 1993 and 2015, while BJS reports a decline of 69% during that span. Both the FBI and BJS reported a decline in the property crime rate between 2014 and 2015, even as the violent crime rate went up in the FBI’s data. Property crime includes offenses such as burglary, theft and motor vehicle theft and is generally far more common than violent crime.
3. Public perceptions about crime in the U.S. often don’t align with the data. Opinion surveys regularly find that Americans believe crime is up, even when the data show it is down. In 21 Gallup surveys conducted since 1989, a majority of Americans said there was more crime in the U.S. compared with the year before, despite the generally downward trend in both violent and property crime rates during much of that period. In a Pew Research Center survey in late 2016, 57% of registered voters said crime had gotten worse since 2008, even though BJS and FBI data show that violent and property crime rates declined by double-digit percentages during that span.
4. There are large geographic variations in crime rates. The FBI’s data allow for geographic comparisons of crime rates, and these comparisons can show big differences from state to state and city to city. In 2015, for instance, there were more than 600 violent crimes per 100,000 residents in Alaska, Nevada, New Mexico and Tennessee. By contrast, Maine, New Hampshire, Vermont and Virginia had rates below 200 violent crimes per 100,000 residents. And while Chicago has drawn widespread attention for its soaring murder total in recent years, its murder rate in 2015 – 18 murders and non-negligent manslaughters per 100,000 residents – was less than a third of the rate in St. Louis (59 per 100,000) and Baltimore (55 per 100,000). The FBI notes that various factors might influence a particular area’s crime rate, including its population density and economic conditions.
5. Many crimes are not reported to police. In its annual survey, BJS asks victims of crime whether or not they reported that crime to police. In 2015, the most recent year available, only about half of the violent crime tracked by BJS (47%) was reported to police. And in the much more common category of property crime, only about a third (35%) was reported. The proportion was substantially higher for offenses classified as serious violent crime (55%), a category that includes serious domestic violence (61% of which was reported), serious violent crime involving injury (59%) and serious violent crime involving weapons (56%). There are a variety of reasons why crime might not be reported, including a feeling that police “would not or could not do anything to help” or that the crime is “a personal issue or too trivial to report,” according to BJS.
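The comparisons above all express crime as a rate per 100,000 residents, which is what makes jurisdictions of very different sizes comparable. A minimal sketch of that computation — the function name and the example figures are illustrative, not official FBI counts:

```python
def rate_per_100k(incidents, population):
    """Express a raw incident count as a rate per 100,000 residents."""
    return incidents / population * 100_000

# A hypothetical city of 2.7 million residents with 486 murders:
# rate_per_100k(486, 2_700_000) -> 18.0 murders per 100,000 residents
```

Comparing raw totals instead of such rates is exactly how a large city's murder count can look worse than a smaller city with a far higher rate.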

How Equitable is Access to Advanced Coursework in Pennsylvania High Schools? Dr. Ed Fuller


Thursday, February 23, 2017

Bunkum Award 2016


The National Education Policy Center is pleased to announce the winner of the 2016 Bunkum Award, recognizing the think tank whose reviewed work best captures the true spirit of bunk.
Watch the 2016 Bunkum Award Ceremony:

2016 Bunkum Award Honoree:

Center for American Progress for Lessons From State Performance on NAEP

Many organizations publish reports they call research. But what does this mean? These reports often are published without having first been reviewed by independent experts — the “peer review” process commonly used for academic research.
Even worse, many think tank reports subordinate research to the goal of making arguments for policies that reflect the ideology of the sponsoring organization.
Yet, while they may provide little or no value as research, advocacy reports can be very effective for a different purpose: they can influence policy because they are often aggressively promoted to the media and policymakers.
To help the public determine which elements of think tank reports are based on sound social science, NEPC’s “Think Twice” Project has, every year for the past decade, asked independent experts to assess these reports’ strengths and weaknesses.
The results have been interesting. Some advocacy reports have been found by experts to be sound and useful, but most are found to have little if any scientific merit.
At the end of each year, the editors at NEPC sift through the think tank reports that had been reviewed, to identify the worst offender.
We then award the organization publishing that report NEPC’s Bunkum Award for shoddy research.
This year’s award goes to the Center for American Progress (CAP) for Lessons From State Performance on NAEP: Why Some High-Poverty Students Score Better than Others, authored by Ulrich Boser and Catherine Brown. The CAP report is based on a correlational study with the key finding that high standards increase learning for high-poverty students. The researchers compared changes in states’ test scores for low-income students to changes to those states’ standards-based policy measures as judged by the researchers. Their conclusions were that high standards lead to higher test scores and that states should adopt and implement the Common Core.
Alas, there was much less than met the eye.
In choosing the worst from among the many “worthy” contenders, we applied evaluation criteria drawn from two short papers: Five Simple Steps to Reading Research and Reading Qualitative Educational Policy Research.

Here’s how the CAP report scored:
  • Was the design appropriate?   No: The design was not sensitive, so they tossed in “anecdotes” and “impressions.”
The apparent purpose of the paper was to advocate for standards-based reform, particularly for the Common Core State Standards, by demonstrating a correlational association between better NAEP scores and states with stronger and better standards and assessments. The data could do little to support this conclusion, so the report largely relied on evidence the authors repeatedly acknowledged as “anecdotal.”
  • Were the methods clearly explained?   No: The methods section is incomplete and opaque.
The report claims the authors studied five two-year implementation cycles, from 2003 to 2013, but the results from these time periods were not presented. The reader is at a loss as to whether the data were missing, mushed together across years, too weak to present, spiked with Common Core implementation, or something else. The authors apparently used only one aggregated “policy implementation score,” which was derived from “norming” each of three “categories” and then averaging the three categories together. The categories were in turn derived from the 14 scaled “indicators.” Apparently they regressed this policy implementation score against NAEP scores. While useful measures of scale integrity exist, the report includes no analysis of the effect of this de facto amalgamation of 14 different variables into one. A finance measure was included but had no statistical effect on the outcomes; why it was reported is not clear. In short, the methods presentation omitted critical data.
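To make the aggregation concern concrete, here is a minimal sketch of the procedure the report appears to describe. This is our reconstruction, not CAP’s actual code: the indicator values and the grouping of the 14 indicators into three categories are invented for illustration, since the report does not specify them.

```python
import numpy as np

# Hypothetical data: 50 "states," 14 scaled policy indicators.
rng = np.random.default_rng(1)
n_states = 50
indicators = rng.uniform(0, 5, size=(n_states, 14))

# Hypothetical grouping into 3 categories (the report does not
# specify how indicators map to categories).
groups = [indicators[:, :5], indicators[:, 5:10], indicators[:, 10:]]
cats = np.column_stack([g.mean(axis=1) for g in groups])

# "Norm" each category (z-score across states), then average the
# three categories into a single policy implementation score.
z = (cats - cats.mean(axis=0)) / cats.std(axis=0)
policy_score = z.mean(axis=1)

print(policy_score.shape)  # one number per state
```

Collapsing 14 variables into one number per state discards which policies drive the score: two very different policy mixes can receive identical scores, which is exactly the scale-integrity question the report never analyzes.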
  • Were the data sources appropriate?   No: The variables used were inadequate and were aggregated in unclear ways.
The report’s goal was to isolate and determine the relationship between state standards and student achievement. But test-score differences between, say, Massachusetts, Kentucky, and Michigan likely vary for many reasons beyond the particular standards each state adopted, and the study does not control for the vast majority of these likely reasons. The study also includes no measure of the quality or fidelity of the implementation of the standards themselves.
  • Were the data gathered of sufficient quality and quantity?   No: The report uses just state-level NAEP scores and summary data.
This was a correlational study of convenience measures adopted from an Education Week publication. Without knowing more about other factors potentially impacting NAEP test scores in each state, and without knowing about the implementation process for each state’s standards, it is difficult to see how data about the simple act of a state adopting standards is sufficient. Readers are asked to accept the authors’ conclusion that “rigorous” implementation of standards was effective for the favored states. But even if a reader were to accept this premise, “rigor” was never directly addressed in the study.
  • Were the statistical analyses appropriate?   No: Fifty cases are too few for multiple regression.
Conducting multiple regressions with 50 cases is not an appropriate methodological approach. Not surprisingly, the resulting effect sizes were quite small. The authors acknowledge—even while they don’t restrain their claims or conclusions—that this analysis is “anecdotal” and “impressionistic.” For example:
While there is an important debate over the definition of standards-based reform—and this analysis is undoubtedly anecdotal and impressionistic—it appears clear that states that have not embraced the approach have shown less success, while more reform-oriented states have shown higher gains over the long term. (p. 2)
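The sample-size problem is easy to demonstrate. Below is a minimal simulation, our illustration rather than the report’s data or code, showing that with only 50 cases, ordinary least squares on a handful of pure-noise predictors routinely produces a non-trivial R², which is why unrestrained conclusions from such a fit are unwarranted.

```python
import numpy as np

# Illustrative simulation: 50 "states" and 5 predictors of pure
# noise, with an outcome that is unrelated to the predictors
# by construction.
rng = np.random.default_rng(0)
n, k = 50, 5
X = rng.normal(size=(n, k))
y = rng.normal(size=n)

# Ordinary least squares with an intercept term.
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta

# Even pure noise yields a non-trivial R^2 at this sample size;
# adjusted R^2 penalizes for the number of predictors.
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(round(r2, 3), round(adj_r2, 3))
```

With noise predictors, R² is expected to be roughly k/(n−1), about 0.10 here, purely by chance; any apparent fit at this sample size therefore needs far more caution than the report exercises.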
  • Were the analyses properly executed?   Cannot be determined: The full results were not presented.
The authors added together the 14 state policy variables of interest, and they then regressed this “change in policy implementation score” against NAEP scores from the previous two years. Since the report does not present the specific results (for example, the multiple R’s, a correlation matrix, or the 14 predictor variables) or explain why (or how) the variables were weighted and added together, the reader cannot tell whether the analyses were properly executed.
  • Was the literature review thorough and unbiased?   No: The report largely neglected peer-reviewed research.
Of the 55 references, only one was clearly peer-reviewed. Despite a rich literature on which the authors could have drawn, the report’s literature review over-relied on (and in many ways attempted to replicate) a single non-peer-reviewed source from 2006.
  • Were the effect sizes strong enough to be meaningful?   Effect sizes were not presented, and the claims are based on the generally unacceptable 0.10 significance level.
Although effect sizes can be estimated from correlations (e.g., Cohen’s d), only the results from one of the five two-year contrast panels were reported. The single table, which appears in the report’s appendix, purports to show a small relationship between the standards policies and NAEP scores, but this relationship is significant only at the 0.10 level, and even then only for 4th grade math and 8th grade reading, not for 8th grade math and 4th grade reading, where the weak relationships are negative. It is generally not acceptable to claim significance at the 0.10 level.
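For readers unfamiliar with the conversion, a Pearson correlation r can be translated into Cohen’s d with the standard formula d = 2r/√(1 − r²). The sketch below applies it to hypothetical correlations only, since the report publishes no correlation values.

```python
import math

def r_to_cohens_d(r: float) -> float:
    """Standard conversion from a Pearson correlation to Cohen's d:
    d = 2r / sqrt(1 - r^2)."""
    return 2.0 * r / math.sqrt(1.0 - r * r)

# Hypothetical r values for illustration; not the report's results.
for r in (0.1, 0.2, 0.3):
    print(f"r = {r:.1f} -> d = {r_to_cohens_d(r):.2f}")
```

This is why reporting the correlations matters: without them, readers cannot judge for themselves whether the claimed relationships clear even the conventional threshold for a small effect.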
  • Were the recommendations supported by strong evidence?   No: Their conclusion is based on weak correlations.
Despite the authors’ claim that “Our findings suggest that there is clear evidence that standards-based reform works, particularly when it comes to the needs of low-income students,” an objective reader of these data and analyses could easily come to exactly the opposite conclusion: that there is no demonstrated relationship.
The fundamental flaw in this report is simply that it uses inadequate data and analyses to make a broad policy recommendation in support of the Common Core State Standards. A reader may or may not agree with the authors’ conclusion that “states should continue their commitment to the Common Core’s full implementation and aligned assessments.” But that conclusion cannot and should not be based on the flimsy analyses and anecdotes presented in the report.

Find the report at:
Boser, U., & Brown, C. (2016, January 14). Lessons From State Performance on NAEP: Why Some High-Poverty Students Score Better than Others. Washington, DC: Center for American Progress. Available online at https://cdn.americanprogress.org/wp-content/uploads/2015/12/23090515/NAEPandCommonCore.pdf
Find the review at:
Nichols, S.L. (2016). Review of “Lessons From State Performance on NAEP: Why Some High-Poverty Students Score Better Than Others.” Boulder, CO: National Education Policy Center. Available online at http://nepc.colorado.edu/thinktank/review-CAP-standards

Watch the 2016 Bunkum Award video presentation, read the Bunkum-worthy report and the review, and learn about past Bunkum winners and the National Education Policy Center’s Think Twice think tank review project, all by going to
http://nepc.colorado.edu/think-tank/bunkum-awards/2016