I keep seeing folks substitute missing values in one of the PTS scales (say, PTS-Amnesty) with existing values from the other (in this case, PTS-StateDept). We ourselves (i.e., the Political Terror Scale project) report average annual scores (the mean of PTS-Amnesty and PTS-StateDept) in our data releases. I think both practices are problematic. Amnesty International and the State Department each describe human rights practices through their own lens. Each organization faces a unique set of constraints and incentives in producing the reports it does.
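To make the two practices concrete, here is a minimal sketch of what they look like in code. The data and column names (`pts_amnesty`, `pts_statedept`) are hypothetical, not the official PTS release schema; the point is only to show what each operation silently does to the data.

```python
import numpy as np
import pandas as pd

# Hypothetical toy data; column names are illustrative only.
df = pd.DataFrame({
    "country": ["A", "A", "B", "B"],
    "year": [1980, 1981, 1980, 1981],
    "pts_amnesty": [3.0, np.nan, 4.0, 4.0],
    "pts_statedept": [2.0, 2.0, np.nan, 3.0],
})

# Practice 1 (problematic): fill a missing Amnesty score with the
# State Department score for the same country-year.
df["pts_amnesty_filled"] = df["pts_amnesty"].fillna(df["pts_statedept"])

# Practice 2 (problematic): average the two scales. With pandas'
# default skipna=True, a country-year with only one report silently
# collapses to that single source's score.
df["pts_mean"] = df[["pts_amnesty", "pts_statedept"]].mean(axis=1)

print(df)
```

Both operations mix two differently biased measurement processes into a single number, and neither flags which rows were backed by one report rather than two.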
We also know that missingness (i.e., non-existent reports for some countries in some years) is not random, especially for Amnesty International. Likely due to resource constraints and limited monitoring capacity, Amnesty International did not cover some Western European countries with arguably strong human rights records during the 1970s and 1980s (e.g., Belgium, the Netherlands, or Denmark) and instead focused its resources on countries where violations were likely. Amnesty International also appears to disproportionately cover autocracies, while the State Department’s reports appear to track democracies and autocracies more closely.
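Non-random missingness of this kind is easy to check before imputing anything. The sketch below uses hypothetical data with an illustrative `regime` column (not part of the PTS release) to compare missingness rates across regime types; a large gap like this is evidence against treating the missing Amnesty scores as missing completely at random.

```python
import numpy as np
import pandas as pd

# Hypothetical data: if Amnesty skipped strong-record democracies,
# missingness should concentrate there rather than in autocracies.
df = pd.DataFrame({
    "regime": ["democracy"] * 4 + ["autocracy"] * 4,
    "pts_amnesty": [np.nan, np.nan, 3.0, np.nan, 4.0, 5.0, 4.0, np.nan],
})

# Share of country-years with no Amnesty report, by regime type.
miss_by_regime = df["pts_amnesty"].isna().groupby(df["regime"]).mean()
print(miss_by_regime)
```

In this toy example the missingness rate is 75% for democracies versus 25% for autocracies, which is exactly the pattern that makes mechanical substitution from the other scale hazardous.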
Simmons (2009) conjectures that non-governmental organizations (NGOs) such as Amnesty International have incentives to consistently report bad news even if states’ human rights records improve. If human rights records across the world improved sufficiently, Amnesty International’s ability to mobilize members and attract donations would arguably be eroded. In short, Amnesty International has an incentive to change its standards, or to shift its attention to violations ignored in the past, in order to remain relevant.1
Although sample bias and incentives to strategically adjust reporting standards are less likely to be a problem for scores generated from the U.S. State Department’s annual reports, these reports have also been criticized. Whereas Amnesty International was arguably covering more violent countries in its annual reports, the State Department’s reports were allegedly biased in their content (see Poe and Tate (1994), among others).
Critics frequently claim that the U.S. State Department unfairly emphasized violations in countries that were ideologically opposed to the United States (particularly during the Cold War), while ignoring similar violations in countries where the U.S. had an interest. Poe, Carey, and Vazquez (2001), for instance, provide anecdotal evidence that the State Department’s reports for communist Cuba prior to 1989 suffered from “exaggeration and undocumented conclusions,” whereas reports for U.S. allies such as El Salvador in the 1980s were “extremely politicized” (665). When comparing PTS-Amnesty and PTS-StateDept, Poe et al. find that the U.S. State Department’s reports sometimes favored U.S. allies in the 1970s and early 1980s, and that, particularly during the Reagan administration, leftist countries received disproportionately worse scores (667). By the late 1980s, however, this bias had disappeared and PTS-StateDept and PTS-Amnesty converged.
In short, then, both PTS-Amnesty and PTS-StateDept likely present a biased view of physical integrity rights violations – certainly in the 1970s and early to mid 1980s. Even with almost 40 years’ worth of data, these biases remain problematic. Consider the Figure produced below. (You can also download a .pdf version here: PTS-Bias.pdf.) Replacing a missing State Department score – say, for Saudi Arabia – with the Amnesty score serves nobody’s interest. It is also very unlikely that the bias in one of the scales can easily be “fixed” with the other, differently biased, scale. The PTS scores should be treated less as representations of the true human rights conditions in a given country and more as representations of human rights records from the perspectives of different monitoring organizations.