Tag: surveys

A Global Survey of IR Students – Might be Worth Pitching in your Classes

Daryl Morini, an IR PhD candidate at the University of Queensland whom I know, has put together an interesting global survey for undergraduate and graduate students of international relations. It looks quite thorough and might make a useful student counterpoint to TRIP. Eventually the goal is an article on our students’ attitudes toward the discipline; here is the full write-up of the project at e-IR. So far as I know, nothing like this has been done before (please comment if that is incorrect), so this strikes me as exactly the sort of student work we should support. Daryl has also made a creative effort to use Twitter as a simulation tool in IR, so I am happy to pitch this survey for him. Please take a look; Daryl may be contacted here.


Some Things are Best Expressed in an Eldritch Tongue

This survey of American households has been around in some form since 1850, either as a longer version of or a richer supplement to the basic decennial census. It tells Americans how poor we are, how rich we are, who is suffering, who is thriving, where people work, what kind of training people need to get jobs, what languages people speak, who uses food stamps, who has access to health care, and so on. 

It is, more or less, the country’s primary check for determining how well the government is doing — and in fact what the government will be doing. The survey’s findings help determine how over $400 billion in government funds is distributed each year. 

But last week, the Republican-led House voted to eliminate the survey altogether, on the grounds that the government should not be butting its nose into Americans’ homes. 

“This is a program that intrudes on people’s lives, just like the Environmental Protection Agency or the bank regulators,” said Daniel Webster, a first-term Republican congressman from Florida who sponsored the relevant legislation. 

“We’re spending $70 per person to fill this out. That’s just not cost effective,” he continued, “especially since in the end this is not a scientific survey. It’s a random survey.”

Via PM.


Open-ended vs. Scale Questions: A note on survey methodology

Aaron Shaw had an interesting post at the Dolores Labs blog last week that examined how using different question scales in surveys can elicit very different responses:

You can ask “the crowd” all kinds of questions, but if you don’t stop to think about the best way to ask your question, you’re likely to get unexpected and unreliable results. You might call it the GIGO theory of research design.

To demonstrate the point, I decided to recreate some classic survey design experiments and distribute them to the workers in Crowdflower’s labor pools. For the experiments, every worker saw only one version of the questions and the tasks were posted using exactly the same title, description, and pricing. One hundred workers did each version of each question and I threw out the data from a handful of workers who failed a simple attention test question. The results are actual answers from actual people.

Shaw asked both samples the same question but altered the scale of the available answers:

Low Scale Version:
About how many hours do you spend online per day?
(a) 0 – 1 hour
(b) 1 – 2 hours
(c) 2 – 3 hours
(d) More than 3 hours

High Scale Version:
About how many hours do you spend online per day?
(a) 0 – 3 hours
(b) 3 – 6 hours
(c) 6 – 9 hours
(d) More than 9 hours

He found a statistically significant difference between the responses to the high- and low-scale versions of the question. More specifically, more people reported spending more than 3 hours online per day when presented with the high-scale question, while more of those shown the low-scale version reported spending less than 3 hours online per day. What accounts for this? Shaw hypothesizes that it is the result of satisficing:

[…] it happens when people taking a survey use cognitive shortcuts to answer questions. In the case of questions about personal behaviors that we’re not used to quantifying (like the time we spend online), we tend to shape our responses based on what we perceive as “normal.” If you don’t know what normal is in advance, you define it based on the midpoint of the answer range. Since respondents didn’t really differentiate between the answer options, they were more likely to have their responses shaped by the scale itself.

These results illustrate a sticky problem: it’s possible that a survey question that is distributed, understood, and analyzed perfectly could give you completely inaccurate results if the scale is poorly designed.

It’s an important point: how you ask a question can have a significant impact on the answers you get. Put another way, you need to pay as much attention to the design and structure of your questions (and answer options) as to the content of those questions.
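
For readers who want to probe this kind of scale effect in their own data, here is a minimal sketch of the sort of check one might run, assuming Python with scipy available. The counts are hypothetical placeholders for illustration, not Shaw’s actual data.

```python
# Minimal sketch (not Shaw's actual analysis): compare the share of respondents
# who report more than three hours online under the low-scale versus high-scale
# version of the question, using a chi-square test of independence.
# The counts below are hypothetical placeholders.
from scipy.stats import chi2_contingency

# Rows: scale version; columns: [3 hours or less, more than 3 hours]
observed = [
    [70, 30],  # low-scale version (hypothetical counts)
    [45, 55],  # high-scale version (hypothetical counts)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")  # a small p suggests the scale shifted responses
```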

A number of commentators chimed in about when it is better to use scale versus open-ended questions. One major advantage of scale questions that comes immediately to mind is that they don’t require analysts to spend additional time coding answers before beginning their analysis. While open-ended questions may avoid the issue of satisficing (which I am not convinced they do; respondents could easily reference their own subjective scale or notions), they do place an additional burden on the analyst. For short, small-n surveys this isn’t much of an issue. However, once you start scaling up the n and the number of questions, it can become problematic. Once you get into coding, all sorts of issues can arise (subjectivity and bias, data entry errors, etc.). Crowdsourcing platforms like Crowdflower may provide a convenient and reliable way to handle the coding (as I’ve mentioned before), but at some level researchers will always have to make an intelligent trade-off between scale and open-ended questions.
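
To make that coding burden concrete, here is a toy sketch of what even the simplest rule-based coding of open-ended “hours online” answers might look like. The regular expression and the category cut-offs (borrowed from the high-scale version above) are my own illustrative choices; real free-text responses would be far messier and would usually need human judgment.

```python
import re

# Toy coder for open-ended "hours online per day" answers (illustrative only).
def code_hours(answer: str) -> str:
    match = re.search(r"(\d+(?:\.\d+)?)", answer)
    if not match:
        return "uncodable"          # e.g. "a lot", "depends" still need a human coder
    hours = float(match.group(1))
    if hours <= 3:
        return "0-3 hours"
    if hours <= 6:
        return "3-6 hours"
    if hours <= 9:
        return "6-9 hours"
    return "more than 9 hours"

responses = ["about 4 hours", "2", "all day, maybe 10+", "a lot"]
print([code_hours(r) for r in responses])
# ['3-6 hours', '0-3 hours', 'more than 9 hours', 'uncodable']
```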

[Cross-posted at bill | petti]


Is America cool again?


According to the latest data from the Pew Global Attitudes Project, “the image of the United States has improved markedly in most parts of the world reflecting global confidence in Barack Obama.”

Follow the Pew survey link to find data charting some dramatic improvements in America’s image throughout the world. The biggest upswings seem to have occurred in Western Europe, parts of Latin America, India, and in Indonesia and Nigeria. Note that in some of these states the U.S. image was already on the upswing over the past year or so, following lows reached earlier this decade.

Data from Eastern Europe, Russia, and the Muslim Middle East do not reflect major changes. Indeed, the US image has actually declined in Israel post-Bush and is flat (with low marks) in Pakistan, Palestine, and Turkey.

What does it mean that major U.S. foreign policy initiatives of the past half year are most popular in other advanced states?

I think the results reflect rational thinking around the globe. After all, the Obama administration announced some important changes in the war on terror, escalated the war in Afghanistan, followed the path towards withdrawal from Iraq (starting with the cities), and started talking differently about global climate change.

Those are all relatively popular moves in Europe. Thousands of NATO troops are in Afghanistan, so even though Europeans are not especially hawkish on the war, many have reason to seek victory.

President Obama has personal connections to Indonesia and Nigeria, so the improvements in U.S. image in those states may only reflect favorable views towards a “favorite son.” Do the Pew survey results from the rest of Muslim world mean the famed “Cairo speech” didn’t hit its intended target audience?

I’ve come here to Cairo to seek a new beginning between the United States and Muslims around the world, one based on mutual interest and mutual respect, and one based upon the truth that America and Islam are not exclusive and need not be in competition. Instead, they overlap, and share common principles — principles of justice and progress; tolerance and the dignity of all human beings.

Apparently, much of the Muslim world is going to wait awhile for policy results before changing their impression of the USA.


The William and Mary Survey, 2006-2007

The Foreign Policy summary (registration required) of the William and Mary “Inside the Ivory Tower” survey is out (as is the full version).

The 2006-2007 survey was conducted by Daniel Maliniak, Amy Oakes, Susan Peterson, and Michael J. Tierney. I participated in a trial run, but I’m not sure if I filled out the final version. No matter.

Part of the survey involves ranking PhD, MA, and undergraduate programs in international studies. In essence, these questions measure the reputation of various programs across the field. How did the institutions of international-relations bloggers fare?

First, PhD programs:

1. Harvard University
2. Princeton University
3. Columbia University
4. Stanford University
5. University of Chicago
6. Yale University
7. University of California, Berkeley
8. University of Michigan
9. University of California (San Diego?)
10. Cornell University
11. Mass. Institute of Technology
12. Johns Hopkins University
13. Georgetown University
14. Duke University
15. Ohio State University
16. New York University
17. University of Minnesota
18. University of California, Los Angeles
19. Tufts University
20. University of Rochester

These data, as I mentioned above, measure reputation among survey participants and little else. That might account for the fact that Tufts, which does not have an academic PhD program, breaks into the top 20. The survey also, at least in my view, provides some good evidence of a continued lag between reputation and current performance.

Georgetown and Johns Hopkins both moved up this year, largely because of Duke’s drop. Now, I have nothing but wonderful things to say about Georgetown–at least most of the time–but I wouldn’t rank us above some of the programs that come in below us on this list.

One of the major problems with any of these kinds of rankings is that they don’t specify “in what dimension of international relations.” Rochester, for example, is much better than number 20 if you want to do formal modeling; Minnesota, Ohio State, and Cornell are the premier schools for constructivist scholarship.

The MA rankings demonstrate, at least in my view, a better (but still imperfect) correlation between rankings and quality:

1. Georgetown University
2. Johns Hopkins University
3. Harvard University
4. Tufts University
5. Columbia University
6. Princeton University
7. George Washington University
8. American University
9. University of Denver
10. Syracuse University
11. University of California
12. University of Chicago
12. Yale University
14. Stanford University
15. University of Pittsburgh
16. University of California
16. University of Maryland
18. Mass. Institute of Technology
18. Monterey Institute of Int’l Studies
20. University of Southern California

Georgetown appears to have edged out SAIS this year. I wonder if that represents a sampling change?

In the undergraduate international-relations program rankings, I think Georgetown probably underperformed at number 4:

1. Harvard University
2. Princeton University
3. Stanford University
4. Georgetown University
5. Columbia University
6. Yale University
7. University of Chicago
8. University of California, Berkeley
9. Dartmouth College
10. George Washington University
11. American University
12. University of Michigan
13. Tufts University
14. Swarthmore College
14. University of California (?)
16. Cornell University
17. Brown University
18. Williams College
19. Duke University
19. Johns Hopkins University

For those of you who don’t care about the inside baseball, the most interesting parts of the survey concern academics’ attitudes towards foreign policy. We’re very pessimistic about the prospects for democracy in Iraq in the next 10-15 years (85% say “unlikely” or “very unlikely”). We tend to think (and this surprised me a bit) that the “Israel lobby” has too much influence over US foreign policy, and so forth. These, and other opinions, do vary with political orientation (conservative, liberal, and moderate) but not as much as one would expect. Conservative and moderate international-relations scholars don’t, for example, care very much for the current administration.

One of the more intriguing findings:

This support for multilateralism is remarkably stable across ideology. In the cases of both Iran and North Korea, liberals and conservatives agree that U.N.-sanctioned action is preferable. More striking are the attitudes of self-identified realists. Scholars of realism traditionally argue that international institutions such as the United Nations do not (and should not) influence the choices of states on issues of war and peace. But we found realists to be much more supportive of military intervention with a U.N. imprimatur than they are of action without such backing. Among realists, in fact, the gap between support for multilateral and unilateral intervention in North Korea is identical to the gap among scholars of the liberal tradition, whose theories explicitly favor cooperation.

I don’t believe this is because realists have suddenly turned into Wilsonians; rather, I suspect the data reflect that a broad cross-section of realist scholars has come to the conclusion that international legitimacy greases the wheels of power and makes counterbalancing less likely. Thank (or blame) the Bush administration.

Anyway, as they say in blogland “read the whole thing.” I hope Mike and his colleagues sort out their technical problem so we can access the complete report.

