Monday, February 27, 2017

Dress Code

Some resources on dress codes:
  • Shame: A Documentary on School Dress Code[i]. This is a documentary by a 17-year-old student, available on YouTube. This could be a text in this unit or a model for documentaries created by students.
  • “Why School Dress Codes Are Sexist,” Li Zhou (The Atlantic).[ii] This is a well-written work of journalism that covers the topic of sexism in dress codes well and serves as a strong model for public writing that uses hyperlinks as citation.
  • “Sexualization, Sex Discrimination, and Public School Dress Codes,” Meredith J. Harbach.[iii] Here, students can examine a scholarly approach to the issues of sexism and dress codes.
  • “The Unspoken Messages of Dress Codes: Uncovering Bias and Power,” Rosalind Wiseman (Anti-Defamation League).[iv] A curriculum resource and excellent overview, this can serve as a guideline for students lobbying for changes to dress codes and/or writing alternative codes that avoid bias.
  • “Baby Woman,” Emily Ratajkowski (Lenny).[v] Ratajkowski is a contemporary celebrity, model, and actress who takes a strong public position as a feminist, despite her association with provocative and sexualized media (controversial music videos and TV commercials). Her personal narrative is a strong model of the genre, but it also complicates views of feminism, female sexuality, and objectification.

[i] Maggie Sunseri, Shame: A Documentary on School Dress Codes, YouTube, May 29, 2015, accessed February 10, 2017,
[ii] Li Zhou, “Why School Dress Codes Are Sexist,” The Atlantic, October 20, 2015, accessed February 10, 2017,
[iii] Meredith Johnson Harbach, “Sexualization, Sex Discrimination, and Public School Dress Codes,” 50 U. Rich. L. Rev. 1039 (2016), accessed February 10, 2017,
[iv] Rosalind Wiseman, “The Unspoken Messages of Dress Codes: Uncovering Bias and Power,” Anti-Defamation League, September 2014, accessed February 10, 2017,
[v] Emily Ratajkowski, “Baby Woman,” Lenny, February 16, 2016, accessed February 2, 2017,

5 facts about crime in the U.S.


Donald Trump made crime-fighting an important focus of his campaign for president, and he cited it again during his inaugural address in January. With the White House and Justice Department announcing steps to address violence in American communities, here are five facts about crime in the United States.
1. Violent crime in the U.S. has fallen sharply over the past quarter century. There are two commonly cited measures of the nation’s crime rate. One is an annual report by the FBI of serious crimes reported to police in approximately 18,000 jurisdictions around the country. The other is an annual survey of more than 90,000 households conducted by the Bureau of Justice Statistics, which asks Americans ages 12 and older whether they were the victims of crime in the past six months (regardless of whether they reported those crimes to the police or not). Both the FBI and BJS data show a substantial decline in the violent crime rate since its peak in the early 1990s.
Using the FBI numbers, the rate fell 50% between 1993 and 2015, the most recent full year available. Using the BJS data, the rate fell by 77% during that span. It’s important to note, however, that the FBI reported a 3% increase in the violent crime rate between 2014 and 2015, including a 10% increase in the murder rate. (The BJS figures show a stable violent crime rate between 2014 and 2015, but they do not count murders.) Some experts have projected that the 2016 FBI data will show another increase in the violent crime rate – including another rise in the murder rate – when they are released later this year.
2. Property crime has declined significantly over the long term. Like the violent crime rate, the U.S. property crime rate today is far below its peak level. FBI data show that the rate fell 48% between 1993 and 2015, while BJS reports a decline of 69% during that span. Both the FBI and BJS reported a decline in the property crime rate between 2014 and 2015, even as the violent crime rate went up in the FBI’s data. Property crime includes offenses such as burglary, theft and motor vehicle theft and is generally far more common than violent crime.
3. Public perceptions about crime in the U.S. often don’t align with the data. Opinion surveys regularly find that Americans believe crime is up, even when the data show it is down. In 21 Gallup surveys conducted since 1989, a majority of Americans said there was more crime in the U.S. compared with the year before, despite the generally downward trend in both violent and property crime rates during much of that period. In a Pew Research Center survey in late 2016, 57% of registered voters said crime had gotten worse since 2008, even though BJS and FBI data show that violent and property crime rates declined by double-digit percentages during that span.
4. There are large geographic variations in crime rates. The FBI’s data allow for geographic comparisons of crime rates, and these comparisons can show big differences from state to state and city to city. In 2015, for instance, there were more than 600 violent crimes per 100,000 residents in Alaska, Nevada, New Mexico and Tennessee. By contrast, Maine, New Hampshire, Vermont and Virginia had rates below 200 violent crimes per 100,000 residents. And while Chicago has drawn widespread attention for its soaring murder total in recent years, its murder rate in 2015 – 18 murders and non-negligent manslaughters per 100,000 residents – was less than a third of the rate in St. Louis (59 per 100,000) and Baltimore (55 per 100,000). The FBI notes that various factors might influence a particular area’s crime rate, including its population density and economic conditions.
5. Many crimes are not reported to police. In its annual survey, BJS asks victims of crime whether or not they reported that crime to police. In 2015, the most recent year available, only about half of the violent crime tracked by BJS (47%) was reported to police. And in the much more common category of property crime, only about a third (35%) was reported. The proportion was substantially higher for offenses classified as serious violent crime (55%), a category that includes serious domestic violence (61% of which was reported), serious violent crime involving injury (59%) and serious violent crime involving weapons (56%). There are a variety of reasons why crime might not be reported, including a feeling that police “would not or could not do anything to help” or that the crime is “a personal issue or too trivial to report,” according to BJS.

How Equitable is Access to Advanced Coursework in Pennsylvania High Schools? Dr. Ed Fuller


Thursday, February 23, 2017

Bunkum Award 2016


The National Education Policy Center is pleased to announce the winner of the 2016 Bunkum Award, which recognizes the think tank whose reviewed work best captures the true spirit of bunk.
Watch the 2016 Bunkum Award Ceremony:

2016 Bunkum Award Honoree:

Center for American Progress for Lessons From State Performance on NAEP

Many organizations publish reports they call research. But what does this mean? These reports often are published without having first been reviewed by independent experts — the “peer review” process commonly used for academic research.
Even worse, many think tank reports subordinate research to the goal of making arguments for policies that reflect the ideology of the sponsoring organization.
Yet, while they may provide little or no value as research, advocacy reports can be very effective for a different purpose: they can influence policy because they are often aggressively promoted to the media and policymakers.
To help the public determine which elements of think tank reports are based on sound social science, NEPC’s “Think Twice” Project has, every year for the past decade, asked independent experts to assess these reports’ strengths and weaknesses.
The results have been interesting. Some advocacy reports have been found by experts to be sound and useful, but most are found to have little if any scientific merit.
At the end of each year, the editors at NEPC sift through the think tank reports that had been reviewed, to identify the worst offender.
We then award the organization publishing that report NEPC’s Bunkum Award for shoddy research.
This year’s award goes to the Center for American Progress (CAP) for Lessons From State Performance on NAEP: Why Some High-Poverty Students Score Better than Others, authored by Ulrich Boser and Catherine Brown. The CAP report is based on a correlational study with the key finding that high standards increase learning for high-poverty students. The researchers compared changes in states’ test scores for low-income students to changes in those states’ standards-based policy measures as judged by the researchers. Their conclusions were that high standards lead to higher test scores and that states should adopt and implement the Common Core.
Alas, there was much less than met the eye.
To choose the worst from among the many “worthy” contenders, the editors applied evaluation criteria drawn from two separate short papers entitled Five Simple Steps to Reading Research and Reading Qualitative Educational Policy Research.

Here’s how the CAP report scored:
  • Was the design appropriate?   No: The design was not sensitive, so they tossed in “anecdotes” and “impressions.”
The apparent purpose of the paper was to advocate for standards-based reform, particularly for the Common Core State Standards, by demonstrating a correlational association between better NAEP scores and states with stronger and better standards and assessments. The data could do little to support this conclusion, so the report largely relied on evidence the authors repeatedly acknowledged as “anecdotal.”
  • Were the methods clearly explained?   No: The methods section is incomplete and opaque.
The report claims the authors studied five two-year implementation cycles, from 2003 to 2013, but the results from these time periods were not presented. The reader is at a loss as to whether the data were missing, mushed together across years, too weak to present, had a spike with Common Core implementation, or something else. The authors apparently used only one aggregated “policy implementation score,” which was derived from “norming” each of three “categories” and then averaging the three categories together. The categories were derived from the 14 scaled “indicators.” Apparently, they regressed this policy implementation score against NAEP scores. While there exist useful measures of scale integrity, the report includes no analysis of the effect of this de facto amalgamation of 14 different variables into one. A finance measure was included but had no statistical effect on the outcomes; why this measure was reported is not clear.
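To make the reviewer's objection concrete, the aggregation described here (norm each indicator, average within categories, then average the categories into one score) can be sketched as follows. The grouping and input data are hypothetical, since the report does not fully document its own weighting:

```python
import numpy as np

def composite_policy_score(indicators, category_slices):
    """Z-score each indicator column, average within each category,
    then average the category means into one composite score.
    This mirrors the aggregation the review describes; the actual
    report does not fully specify its procedure."""
    z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
    cats = [z[:, s].mean(axis=1) for s in category_slices]
    return np.mean(cats, axis=0)

# Hypothetical example: 4 states, 14 indicators split into
# categories of 5, 5, and 4 indicators.
rng = np.random.default_rng(1)
scores = composite_policy_score(
    rng.normal(size=(4, 14)),
    [slice(0, 5), slice(5, 10), slice(10, 14)],
)
print(scores)
```

Collapsing 14 variables into one number this way discards any information about which individual policies drive the score, which is exactly the scale-integrity problem the review raises.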
  • Were the data sources appropriate?   No: The variables used were inadequate and were aggregated in unclear ways.
The report’s goal was to attempt to isolate and determine the relationship between state standards and student achievement. But test-score differences between, say, Massachusetts, Kentucky and Michigan, likely vary for many, many reasons beyond the particular standards that each state adopted, and the study does not control for the vast majority of these likely reasons. The study also includes no measure of the quality or fidelity of the implementation of the standards themselves.
  • Were the data gathered of sufficient quality and quantity?   No: The report uses just state-level NAEP scores and summary data.
This was a correlational study of convenience measures adopted from an Education Week publication. Without knowing more about other factors potentially impacting NAEP test scores in each state, and without knowing about the implementation process for each state’s standards, it is difficult to see how data about the simple act of a state adopting standards is sufficient. Readers are asked to accept the authors’ conclusion that “rigorous” implementation of standards was effective for the favored states. But even if a reader were to accept this premise, “rigor” was never directly addressed in the study.
  • Were the statistical analyses appropriate?   No: A multiple regression with just 50 cases rests on too small a sample.
Conducting multiple regressions with 50 cases is not an appropriate methodological approach. Not surprisingly, the resulting effect sizes were quite small. The authors acknowledge—even while they don’t restrain their claims or conclusions—that this analysis is “anecdotal” and “impressionistic.” For example:
While there is an important debate over the definition of standards-based reform—and this analysis is undoubtedly anecdotal and impressionistic—it appears clear that states that have not embraced the approach have shown less success, while more reform-oriented states have shown higher gains over the long term. (p. 2)
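The small-sample objection above is easy to demonstrate: with 14 predictors and only 50 cases, ordinary least squares will "find" a sizeable in-sample fit even in pure noise. A minimal simulation (an assumed setup for illustration, not the report's actual data):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 14                      # 50 states, 14 policy indicators
X = rng.normal(size=(n, p))        # random "policy" predictors
y = rng.normal(size=n)             # outcome is pure noise: no true effect

# Ordinary least squares with an intercept
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
resid = y - X1 @ beta
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(f"in-sample R^2 on pure noise: {r2:.2f}")
```

Under the null hypothesis the expected in-sample R² here is roughly p/(n-1) ≈ 0.29, so an apparent relationship of that magnitude in a 14-predictor, 50-case regression carries essentially no evidential weight.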
  • Were the analyses properly executed?   Cannot be determined: The full results were not presented.
The authors added together the 14 state policy variables of interest, and they then regressed this “change in policy implementation score” against NAEP scores from the previous two years. Since the report did not include specific results (for example, the multiple R’s, a correlation matrix, or the 14 predictor variables), or explain why (or how) the variables were weighted and added together, the reader cannot tell whether the analyses were properly executed.
  • Was the literature review thorough and unbiased?   No: The report largely neglected peer-reviewed research.
Of the 55 references, only one was clearly peer-reviewed. Despite a rich literature on which the authors could have drawn, the report’s literature review over-relied on (and in many ways attempted to replicate) a single non-peer-reviewed source from 2006.
  • Were the effect sizes strong enough to be meaningful?   Effect sizes were not presented, and the claims are based on the generally unacceptable 0.10 significance level.
Although effect sizes can be estimated from correlations (e.g., Cohen’s d), only the results from one of the five two-year contrast panels were reported. The single table, which appears in the report’s appendix, purports to show a small relationship between the standards policies and NAEP scores, but this relationship is significant only at the 0.10 level, and even then only for 4th grade math and 8th grade reading, not for 8th grade math and 4th grade reading, where the weak relationships are negative. It is generally not acceptable to claim significance at the 0.10 level.
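For readers wanting to check the "effect sizes can be estimated from correlations" point: the standard conversion from a correlation r to Cohen's d is d = 2r / sqrt(1 - r²). A small illustration (the input value is hypothetical, not taken from the report):

```python
import math

def cohens_d_from_r(r):
    """Convert a correlation coefficient r to Cohen's d
    using the standard formula d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r * r)

# A weak correlation of 0.10 corresponds to d of about 0.20,
# "small" by Cohen's conventional benchmarks
# (0.2 small, 0.5 medium, 0.8 large).
print(round(cohens_d_from_r(0.10), 2))
```

This is why the review can characterize the reported relationships as weak: even where the correlations clear the lenient 0.10 significance bar, the implied effect sizes sit at the bottom of Cohen's scale.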
  • Were the recommendations supported by strong evidence?   No: Their conclusion is based on weak correlations.
Despite the authors’ claim that “Our findings suggest that there is clear evidence that standards-based reform works, particularly when it comes to the needs of low-income students,” an objective reader of these data and analyses could easily come to exactly the opposite conclusion: that there is no demonstrated relationship.
The fundamental flaw in this report is simply that it uses inadequate data and analyses to make a broad policy recommendation in support of the Common Core State Standards. A reader may or may not agree with the authors’ conclusion that “states should continue their commitment to the Common Core’s full implementation and aligned assessments.” But that conclusion cannot and should not be based on the flimsy analyses and anecdotes presented in the report.

Find the report at:
Boser, U., & Brown, C. (2016, January 14). Lessons From State Performance on NAEP: Why Some High-Poverty Students Score Better than Others. Washington, DC: Center for American Progress. Available online at
Find the review at:
Nichols, S.L. (2016). Review of “Lessons From State Performance on NAEP: Why Some High-Poverty Students Score Better Than Others.” Boulder, CO: National Education Policy Center. Available online at

Watch the 2016 Bunkum Award video presentation, read the Bunkum-worthy report and the review, and learn about past Bunkum winners and the National Education Policy Center’s Think Twice think tank review project, all by going to

Monday, February 13, 2017

Which States Pay Teachers the Most (and Least)?

Alaska and New York pay teachers nearly double the salaries of those working in Mississippi and Oklahoma, says a new study by GoBankingRates.
According to the finance website, teachers in Alaska and New York are paid each year on average $77,843 and $76,953, respectively. By contrast, the averages in Mississippi and Oklahoma are $42,043 and $42,647, respectively. To be fair, many of the states with higher teacher pay also have higher costs of living. (You can use this tool to compare costs of living in different cities and states across the country.)
And a salary on the high end doesn't necessarily mean easy living. The authors show, for instance, that the average salary in California of $72,050 "is just a tad under the amount of money needed to live comfortably in [the state]." What's more, a starting teacher's salary would be much less, closer to $40,000 per year, according to the California Department of Education.
Many of the states with the lowest salaries are working to increase teacher pay, often to combat teacher shortages. Lawmakers in Oklahoma say raising teacher pay is a top priority. Under a bill filed by state Senator David Holt, Oklahoma teachers would receive a $10,000 pay raise by 2021. Governor Doug Ducey of Arizona has also recently made a big push to boost teacher salaries across the state.
The average teacher salaries in 50 states (not including the District of Columbia) were calculated using data from the Bureau of Labor Statistics. The authors averaged the mean salaries of elementary, middle, and high school teachers to get the average salary in each state. The calculations did not include the salaries of special education teachers. Here are the 10 states where teachers get paid the most and the 10 states where teachers earn the least.

The 10 states where teachers get paid the most:

1. Alaska: $77,843
2. New York: $76,593
3. Connecticut: $75,867
4. California: $72,050
5. Massachusetts: $71,587
6. New Jersey: $70,700
7. Rhode Island: $67,533
8. Maryland: $65,257
9. Illinois: $65,153
10. Virginia: $63,493
The 10 states where teachers get paid the least:
1. Mississippi: $42,043
2. Oklahoma: $42,647
3. South Dakota: $43,200
4. North Carolina: $43,587
5. Arizona: $43,800
6. West Virginia: $45,477
7. Arkansas: $47,053
8. Idaho: $47,063
9. Kansas: $47,127
10. Louisiana: $48,587
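The methodology described above, an unweighted average of the BLS mean salaries for elementary, middle, and high school teachers (special education excluded), can be sketched as follows. The input figures are illustrative only, not the actual BLS values for any state:

```python
def state_average(elementary, middle, high):
    """Unweighted average of the three BLS mean salaries, the method
    GoBankingRates describes (special education teachers excluded)."""
    return round((elementary + middle + high) / 3, 2)

# Illustrative, made-up figures; not real BLS data.
print(state_average(60_000, 62_500, 64_400))
```

Note that an unweighted average treats the three categories as equally sized, which will differ slightly from a true all-teacher mean wherever one level employs many more teachers than another.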
Map courtesy of GoBankingRates 

Sunday, February 5, 2017

2017 SRI Winter Meeting Closing Keynote by Paul Gorski


Wednesday, February 1, 2017

Corporal Punishment: A Reader

Elizabeth T. Gershoff and Andrew Grogan-Kaylor, “Spanking and Child Outcomes: Old Controversies and New Meta-Analyses”
Follow Dr. Stacey Patton on Twitter and Facebook; and see Spare the Kids.
Patton keynote Part I:
Patton keynote Part II:
Corporal punishment leads to more immediate compliant behavior in children, but is also associated with physical abuse. Should parents be counseled for or against spanking?
WASHINGTON — Corporal punishment remains a widely used discipline technique in most American families, but it has also been a subject of controversy within the child development and psychological communities. In a large-scale meta-analysis of 88 studies, psychologist Elizabeth Thompson Gershoff, PhD, of the National Center for Children in Poverty at Columbia University, looked at both positive and negative behaviors in children that were associated with corporal punishment. Her research and commentaries on her work are published in the July issue of Psychological Bulletin, published by the American Psychological Association.
While conducting the meta-analysis, which included 62 years of collected data, Gershoff looked for associations between parental use of corporal punishment and 11 child behaviors and experiences, including several in childhood (immediate compliance, moral internalization, quality of relationship with parent, and physical abuse from that parent), three in both childhood and adulthood (mental health, aggression, and criminal or antisocial behavior) and one in adulthood alone (abuse of own children or spouse).
Gershoff found “strong associations” between corporal punishment and all 11 child behaviors and experiences. Ten of the associations were negative, such as increased child aggression and antisocial behavior. The single desirable association was between corporal punishment and increased immediate compliance on the part of the child.
The two largest effect sizes (strongest associations) were immediate compliance by the child and physical abuse of the child by the parent. Gershoff believes that these two strongest associations model the complexity of the debate around corporal punishment.
“That these two disparate constructs should show the strongest links to corporal punishment underlines the controversy over this practice. There is general consensus that corporal punishment is effective in getting children to comply immediately while at the same time there is caution from child abuse researchers that corporal punishment by its nature can escalate into physical maltreatment,” Gershoff writes.
But, Gershoff also cautions that her findings do not imply that all children who experience corporal punishment turn out to be aggressive or delinquent. A variety of situational factors, such as the parent/child relationship, can moderate the effects of corporal punishment. Furthermore, studying the true effects of corporal punishment requires drawing a boundary line between punishment and abuse. This is a difficult thing to do, especially when relying on parents’ self-reports of their discipline tactics and interpretations of normative punishment.
“The act of corporal punishment itself is different across parents – parents vary in how frequently they use it, how forcefully they administer it, how emotionally aroused they are when they do it, and whether they combine it with other techniques. Each of these qualities of corporal punishment can determine which child-mediated processes are activated, and, in turn, which outcomes may be realized,” Gershoff concludes.
The meta-analysis also demonstrates that the frequency and severity of the corporal punishment matters. The more often or the more harshly a child was hit, the more likely the child was to be aggressive or to have mental health problems.
While the nature of the analyses prohibits causally linking corporal punishment with the child behaviors, Gershoff also summarizes a large body of literature on parenting that suggests why corporal punishment may actually cause negative outcomes for children. First, corporal punishment on its own does not teach children right from wrong. Second, although it makes children afraid to disobey when parents are present, those same children will misbehave when parents are not there to administer the punishment.
In commentary published along with the Gershoff study, George W. Holden, PhD, of the University of Texas at Austin, writes that Gershoff’s findings “reflect the growing body of evidence indicating that corporal punishment does no good and may even cause harm.” Holden submits that the psychological community should not be advocating spanking as a discipline tool for parents.
In a reply to Gershoff, researchers Diana Baumrind, PhD (University of California, Berkeley), Robert E. Larzelere, PhD (Nebraska Medical Center), and Philip Cowan, PhD (University of California, Berkeley), write that because the original studies in Gershoff’s meta-analysis included episodes of extreme and excessive physical punishment, her finding is not an evaluation of normative corporal punishment.
“The evidence presented in the meta-analysis does not justify a blanket injunction against mild to moderate disciplinary spanking,” conclude Baumrind and her team. Baumrind et al. also conclude that “a high association between corporal punishment and physical abuse is not evidence that mild or moderate corporal punishment increases the risk of abuse.”
Baumrind et al. suggest that parents whose emotional make-up may cause them to cross the line between appropriate corporal punishment and physical abuse should be counseled not to use corporal punishment to discipline their children, but that other parents could use mild to moderate corporal punishment effectively. “The fact that some parents punish excessively and unwisely is not an argument, however, for counseling all parents not to punish at all.”
In her reply to Baumrind et al., Gershoff states that excessive corporal punishment is more likely to be underreported than overreported and that the possibility of negative effects on children cautions against the use of corporal punishment.
“Until researchers, clinicians, and parents can definitively demonstrate the presence of positive effects of corporal punishment, including effectiveness in halting future misbehavior, not just the absence of negative effects, we as psychologists cannot responsibly recommend its use,” Gershoff writes.