Tag: research

How to Work Effectively with a Research Assistant

So you've finally received research funding to hire a research assistant… now what? Securing resources for research support is a wonderful thing, but figuring out whom to hire and how to work with them can be a challenge. While I have been very fortunate to work with some excellent research assistants, I've also experienced difficult situations where I've hired someone with the wrong skill set, or clearly failed to mentor or manage the relationship well. These different experiences have taught me a few things about working with research assistants that I hope others find useful. Since I'm relatively new at managing a research team, I'd also love to hear your suggestions on this topic. So, here's my list of frequently asked questions about research assistants:

1. Who do I hire? Once you have funds to hire a research assistant (RA), it can be exciting – and daunting – to choose the right person. If you are new to an institution, or you don't know anyone who isn't already working at their max, it can be especially challenging. Before you can choose the right person, it is important to determine what types of tasks you need an RA for. I used to think that the most senior graduate students would necessarily make the best research assistants. I assumed that they would have the highest level of skill and maturity, so it made sense to seek out available PhD students, if possible. While PhD students are often excellent RAs, there are several types of tasks that aren't necessarily suited to advanced graduate students. I've found that simple work like data collection, editing, entering information, and organizing data is better suited to advanced undergraduates. This is a huge over-generalization, but undergrads can be much less grouchy about doing some of the more tedious work that some of us need RAs to help with. PhD students often want to work on more macro-level thinking, which could include synthesizing data or writing briefs. This makes perfect sense. In some cases, it is useful to hire both an advanced undergraduate and a PhD student to work together on different aspects of a project. Both get to build useful skill sets, and you get the results you need in less time.

2. How can I mentor while I get research assistance? This question is linked to the first one. I see hiring an RA as a form of mentorship. If you can figure out what skills different potential RAs have, and also what skills they want to refine and work on, it can help you not only hire the right person, but also mentor them. Students working on a Master's degree who hope to work for an NGO, for example, will want to attain different skills than a PhD student who wants to be associated with a potential publication. I have found that if students feel they are being mentored, they work better and are happier than when they simply feel like 'employees.'

Cyber Spillover: The Transition from Cyber Incident to Conventional Foreign Policy Dispute

*Post written with my coauthor Ryan Maness.  We are currently rounding the corner and almost ready to submit the final version of our Cyber Conflict book.  This post represents ongoing research as we fill out unanswered questions in our text.

My coauthor and I have dissected the contemporary nature of cyber conflict in many ways, from cataloging all actual cyber incidents and disputes between states, to examining cyber espionage, and finally, examining the impact of cyber incidents on the conflict-cooperation nexus of states. What we have not done until now is examine the nature of what we call cyber spillover.

Cyber spillover is when cyber conflicts seep and bleed into the traditional arenas of militarized and foreign policy conflict. While it is dubious to claim that the cyber domain is disconnected from the physical domain, given that cyber technology has to be housed somewhere, it is also true that there are very few incidents of cyber actions causing physical damage (the only case being Stuxnet). Our question is not about the transition from cyber to physical, but about when cyber disagreements lead directly to conventional foreign policy disputes between states, thus altering how international interactions work.

In Defense of Teaching (and Grading) the Long Research Paper

A Slate article titled "The End of the College Essay," critical of assigning long essays to undergraduates, is circulating in various Facebook and Twitter circles. The gist of the complaint mirrors the complaints I've heard over the years from students and colleagues (and others outside the academy) about assigning long research papers. Last summer, I attended a conference in Toronto on the future of liberal education in which a number of participants criticized the long-form research paper by noting that, unless students go into Ph.D. programs, most will never write a long paper again in their lives. I heard from quite a few people who argued that faculty should give students assignments that reflect new communication technologies and the skills associated with those technologies — and that failure to do so will only exacerbate the increasing irrelevance of the liberal arts.

I really disagree with all of this.

Another Great Video – Why You Should NEVER Lecture from the Book’s Slides

My first semester teaching as a newly minted professor was tough – I was constantly struggling to stay on top of my research responsibilities and my family responsibilities. Add in teaching two new preps – something had to give! Well, I thought I had found a solution – the textbook I was using for Intro to IR came with ready-made PowerPoint presentations! All I needed to do was change the name and date on the slides and – Voilà! – teaching duties done! Unfortunately, every time I tried that, I ended up looking exactly like this guy:

Save IR and Politics at University of the West of England

Earlier today, I received an email alerting me to the fact that the University of the West of England’s Academic Board supported a recommendation from the Vice Chancellor’s Executive Group to close all international relations and politics programs.

Apparently, the plan is to refocus the university (one of the ten largest in England) on skills-based learning and vocational courses, which essentially means that arts and social sciences have no place in future plans. As long-time Duck readers know, I think this is a very bad idea — and some much-discussed research strongly supports the value of liberal arts education. Indeed, this research suggests that liberal arts students even out-perform vocationally trained students in the job market. In IR at UWE, 95% of “Students [are] in work / study six months after finishing” their course.

Unsurprisingly, students are very happy with the education they receive at UWE:

In the last five National Student Surveys History at UWE has consistently scored over 90 per cent in the overall satisfaction ratings and Politics at UWE has scored close to 90 per cent. In the 2011 Guardian University League Tables Politics at UWE scored 91 per cent for overall course satisfaction.

Indeed, the students are campaigning to save their programs. They have set up an online petition. They also have a Facebook page. An especially resourceful Politics/IR student at UWE made the following video about the pending decision and the value of the programs:

Do You Need a Pro-Cancer Oncologist? Bias and Human Rights Scholarship

It's a question faced by scientists daily: if you found that X wasn't associated with Y, would you report it? What if you found that treatment X was harmful to Y – would you report your findings? For example, let's say you are an oncologist and you just concluded, based on years of research, that smoking wasn't associated with cancer – would you report your findings? What if you were employed by the maker of treatment X, or were dealing with cancer personally – would you report your findings about treatment X then? Is it unethical to leave the results unpublished?

Questions of personal biases and valid science permeate all facets of science; of course, we as social scientists face these questions all the time in our research.  Do personal biases get in the way of our science?  Is there any way around our personal biases?

I’m a firm believer that the process of science allows us to eliminate many of the potential biases that we carry around with us.  As Jay Ulfelder just pointed out in a blog post on Dart Throwing Chimp with respect to democracy research in comparative politics,  the scientific process isn’t easy – there are often strong personal and professional reasons that lead people to stray from the scientific process (to me, sequestering results would imply straying from the scientific process).  But, I would contend, the scientific process allows us to overcome many of our personal and professional biases.  This is especially relevant, of course, to human rights research.  As Jake Wobig just wrote,

“a person does not start studying human rights unless they want to identify ways to change the world for the better.  However, wanting something to be so does not make it so, and we scholars do not do anyone any favors by describing the world incorrectly.” 

No More Cups of Tea: Terrorism Research and the Law

This is a guest post from Tanisha Fazal, a professor of political science at Columbia University, and Jessica Martini, a human rights and international trade attorney based in New York City.

To conduct research on terrorism and insurgency, it's best to be able to talk to people. Combing through incident reports is helpful, but often an informal conversation over a cup of tea is as illuminating, if not more so. But according to the ban on providing "material support" (18 United States Code (U.S.C.) 2339B), buying a cup of tea for a terrorist can land you in [US] jail. In 1996 the Antiterrorism and Effective Death Penalty Act (AEDPA) prohibited providing "material support or resources" to terrorists, which included providing goods and financing, in addition to intangibles such as training and personnel. This was expanded in 2001, in the wake of the September 11th attacks, as part of the USA PATRIOT Act, and by subsequent court decisions interpreting the law, to include "expert advice and assistance" and coordinated advocacy.

As part of the government's broader counterterrorism strategy, the Departments of Defense, State, and Homeland Security all have major initiatives and funding today to develop and promote better research on terrorism. But another element of US counterterrorism – the material support ban – not only directly hinders the conduct of exactly this type of research, but also puts scholars in a position where they risk being fined or even imprisoned for researching terrorism and/or insurgency.
According to the American Bar Association, the material support ban

prohibits “providing material support or resources” to an organization the Secretary of State has designated as a “foreign terrorist organization.” The material support ban was first passed as part of the Antiterrorism and Effective Death Penalty Act of 1996 (AEDPA). The provision’s purpose is to deny terrorist groups the ingredients necessary for planning and carrying out attacks. Congress was concerned that terrorist organizations with charitable or humanitarian arms were raising funds within the United States that could then be used to further their terrorist activities. The provision outlawed any support to these groups, irrespective of whether that support was intended for humanitarian purposes.

The list of foreign terrorist organizations, or FTOs, contains many groups whose members scholars would like to interview to further their own research. In addition to the restriction on contacts with FTOs and other entities on a number of other US Government lists, there are restrictions on bringing the modern tools of research – such as laptop computers and cell phones – into sanctioned countries like Syria or Iran, due to trade sanctions and export controls.

Prominent NGOs such as Human Rights Watch, The Carter Center, and the International Crisis Group, and academic centers such as Notre Dame's Kroc Institute, have protested these restrictions, specifically by submitting amicus briefs (see more such briefs here, here, and here) in Holder v. Humanitarian Law Project, an unsuccessful test case challenging the constitutionality of the material support ban on First Amendment grounds. Ambiguity in the Holder decision creates uncertainty about what is legal when conducting research involving people who may be affiliated with terrorists. Any resources transferred to these groups – be it a discussion of your broader research that could be translated into advice, or buying lunch for a subject to thank them for taking the time to speak with you – could, in theory, "free up other resources within the organization that may be put to violent ends," according to the majority opinion of the court.

The Holder decision is an issue not just for academics, but also for journalists and activists.  Many of the groups co-sponsoring the amicus briefs were engaged in peacebuilding activities with groups such as the LTTE in Sri Lanka.  But the court’s ruling was that training members of these groups in international human rights law was illegal.

The material support ban and export control restrictions serve an important purpose. Terrorists are a proven threat to the US, and we shouldn’t abet them.  But in restricting resource transfers wholesale, we limit our ability to understand and help these groups find alternative means to achieve the ends they currently seek violently.  There are, in other words, important unintended consequences to the law and to the subsequent decision on its constitutionality.

The main danger for scholars is the vagueness of both the law and the court's decision. Insofar as academic research tends to stay within the academy, it's highly unlikely that a terrorism scholar will be prosecuted for buying a cup of tea for an interview subject from a listed FTO. But to the extent that scholarship makes it up to the level of policy debate – which is partly the point of government programs such as the Minerva Initiative, as well as foundation and university initiatives such as the Bridging the Gap program – these laws make conducting research on terrorism and insurgency even riskier than it already is.

— Cross-posted from The Monkey Cage

Visas and scholarship

Sarah Duff (who has contributed to this blog before) had a very interesting piece in the UK Guardian this week on the hurdles scholars in developing countries face in order to engage with scholars in the developed world. Rather than focusing on whether or not the visa system is fair, she describes exactly what she must do in order to present a paper in "the West" and how this impacts the development of her research:

I describe the expensive, time-consuming, and often quite invasive procedure of applying for a visa to explain why they influence my work. Because my American visa is valid until 2015, I jump at the chance of attending conferences in the US. Next year, I hope to present at a conference in Australia, but I will only attend if I manage to secure travel funds that will cover the cost of the visa (another £65). I recently presented a paper at a conference in London via Skype because I had neither the time nor the funds to apply for a British visa.

Given what we hear in the media (and how Europeans complain to me about lines at US airports), it's interesting that the US system (which can provide up to a 10-year visa) seems almost enlightened by comparison. Certainly it is fairer to scholars who are trying to network and get their research noticed.

However, the point I want to raise is (writing as a Western academic) more selfish. While Duff's article suggests how these expensive and complicated visa systems affect scholars in the developing world and how they do research, it seems clear to me that these systems are also affecting, if not damaging, research in the West. If scholars "in the West" cannot get access to scholars in the developing world, surely this also affects our ability to carry out research and exchange information and ideas. Yes, of course there is the internet, Skype, online journals, etc. The research is there if you look for it. But don't we learn more at conferences when we have better global representation and views? Additionally, aren't our students (who may not have large grants or funds to travel) better off when they can meet with and speak to scholars from the developing world? These things just seem self-evident.

Given recent trends in the West, I don't expect this visa situation to change any time soon. But I think it is important for scholars to consider the subtle and not-so-subtle ways that the absence of voices from the developing world – a result of those scholars' inability to engage and network – affects the way both groups of scholars carry out research.

Information wants to be free. Congress wants it to be held for ransom.

[Image caption: It's bad form to criticize other disciplines' journals based solely on titles, but Annals of Tourism Research? This is the sort of thing libraries spend their budgets on?]

Representative Carolyn Maloney (D-NY) is trying to end taxpayer access to publicly-funded research. The article is worth reading, not least because it is the only time that you’ll ever see the term “powerful publishing cartels” in this age of disruptive new-media innovation.

And yet the academic publishing market really is different, as one UC-Berkeley professor argued last year. When Nature tried to extort a 400% subscription fee increase from the University of California system, there was very little to do except engage the nuclear option–that is, threaten to boycott the journals entirely. Academics, whose lives are shaped by publishing in journals, are at the mercy of those journals’ publishers. In such negotiating positions, it’s unsurprising that publishers have managed to steadily increase their yield from universities that–as you may have heard!–are otherwise struggling to get by.

In the long term, the disjuncture between stagnant or shrinking university resources and increasing fees for access will lead to a rather severe readjustment. The same thing will happen to the plethora of new journals that is happening to the plethora of newly-minted Ph.D.s. That is, they will starve, wither, and — well, only the journals will die. The Ph.D.s will move on to jobs in industry. (I hope.)

What could help, of course, would be a far-sighted policy guaranteeing that the fruits of taxpayer-funded research are available to taxpayers. This utopian dream is easily oversold. Let's be frank: the general public doesn't particularly care about, or directly benefit from, research. The indirect benefits are pretty good, but no single journal article is likely to matter much to the public, which is simply unable to read and evaluate the articles unless they get their union card (that is, earn their doctorate). But it's reprehensible that universities, which even if "private" are tax-supported through their nonprofit status, are given federal money to produce research which is then given to private publishers which, in turn, take quite a bit of money from universities to let them see that research in slightly better-formatted versions.

The good news is that the publishing house Elsevier has managed to rent their very own congressperson for, apparently, only a couple of thousand dollars in campaign contributions. At this point, even academics can scrape together a few shillings and find a senator or two to champion our cause. But please: let’s stick to the small-state legislators. Their campaigns are cheaper and some of us have a pay freeze.

F for the Professor?

Have you heard about a new study, Academically Adrift: Limited Learning on College Campuses, authored by academics Richard Arum and Josipa Roksa and published by the University of Chicago Press? Their research question should be of interest to most of the people reading this blog: "are undergraduates really learning anything once they get" to college?**

The results are disturbing:

Their extensive research draws on survey responses, transcript data, and, for the first time, the state-of-the-art Collegiate Learning Assessment, a standardized test administered to students in their first semester and then again at the end of their second year. According to their analysis of more than 2,300 undergraduates at twenty-four institutions, 45 percent of these students demonstrate no significant improvement in a range of skills—including critical thinking, complex reasoning, and writing—during their first two years of college.

According to press reports, “36 percent of students ‘did not demonstrate any significant improvement in learning’ over four years of college.”

The linked press report quotes education experts who frame this as a moral issue and describe the findings as “devastating.” The halls of academe, write the authors, are filled with too many students “drifting through college without a clear sense of purpose.”

According to Arum and Roksa, the greatest problem from the institution’s perspective is lack of rigorous expectations for undergraduates:

They review data from student surveys to show, for example, that 32 percent of students each semester do not take any courses with more than 40 pages of reading assigned a week, and that half don’t take a single course in which they must write more than 20 pages over the course of a semester.

The findings do suggest some good news for liberal arts majors:

Students majoring in liberal arts fields see “significantly higher gains in critical thinking, complex reasoning, and writing skills over time than students in other fields of study.” Students majoring in business, education, social work and communications showed the smallest gains.

Greek life, extracurricular activities, study groups, and other social experiences tend to be associated with reduced learning outcomes. Students who study alone for many hours per week achieve more.

Grade inflation is apparently another important part of the problem, as students "earn" top grades without really learning anything. At an Arts and Sciences assembly last week, I learned that more than one-third of students receive a grade of A+, A, or A- in University of Louisville classes. The portion of students earning an A of any type in my classes is less than half of that number, and I found the statistic quite disheartening.

The Social Science Research Council has published a short report, "Improving Undergraduate Learning: Findings and Policy Recommendations from the SSRC-CLA Longitudinal Project," by the same scholars (along with Esther Cho of the SSRC), that is available for free download.

What are the key recommendations of this SSRC report? Unsurprisingly, the authors call for more rigorous requirements:

Enhanced curriculum and instruction associated with academic rigor. More rigorous, appropriately demanding course requirements and standards must be put in place to ensure the development of critical thinking, complex reasoning, and written communication skills (i.e., increased academic assignments requiring greater student effort, adequate student reading and writing, and high expectations by faculty).

They call for students, faculty and administrators to demand rigor — and for additional assessment to assure that it works (though not really via standardized tests). More funding would be needed to implement these measures at the undergraduate level. One huge current problem is that undergraduate education is often near the bottom of the priority list for major research institutions that are nonetheless considered top-notch institutions of higher learning.

** Faithful readers may recall that I have an additional personal stake in this topic.

Open-ended vs. Scale Questions: A note on survey methodology

Aaron Shaw had an interesting post at the Dolores Labs blog last week that examined how using different question scales in surveys can elicit very different responses:

You can ask “the crowd” all kinds of questions, but if you don’t stop to think about the best way to ask your question, you’re likely to get unexpected and unreliable results. You might call it the GIGO theory of research design.

To demonstrate the point, I decided to recreate some classic survey design experiments and distribute them to the workers in Crowdflower’s labor pools. For the experiments, every worker saw only one version of the questions and the tasks were posted using exactly the same title, description, and pricing. One hundred workers did each version of each question and I threw out the data from a handful of workers who failed a simple attention test question. The results are actual answers from actual people.

Shaw asked both samples the same question but altered the scale of the available answers:

Low Scale Version:
About how many hours do you spend online per day?
(a) 0 – 1 hour
(b) 1 – 2 hours
(c) 2 – 3 hours
(d) More than 3 hours

High Scale Version:
About how many hours do you spend online per day?
(a) 0 – 3 hours
(b) 3 – 6 hours
(c) 6 – 9 hours
(d) More than 9 hours

He found a statistically significant difference between the responses to the high-scale and low-scale versions of the question. More specifically, more people reported spending more than 3 hours online per day when presented with the high-scale question, while more people exposed to the low scale reported spending less than 3 hours online per day. What accounts for this? Shaw hypothesizes that it is the result of satisficing:

[…] it happens when people taking a survey use cognitive shortcuts to answer questions. In the case of questions about personal behaviors that we’re not used to quantifying (like the time we spend online), we tend to shape our responses based on what we perceive as “normal.” If you don’t know what normal is in advance, you define it based on the midpoint of the answer range. Since respondents didn’t really differentiate between the answer options, they were more likely to have their responses shaped by the scale itself.

These results illustrate a sticky problem: it’s possible that a survey question that is distributed, understood, and analyzed perfectly could give you completely inaccurate results if the scale is poorly designed.

It's an important point – how you ask a question can have a significant impact on the answers you get. Put another way, you need to pay as much attention to the design and structure of your questions (and answers) as to their content.
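
For anyone who wants to run this kind of check on their own survey data, here is a minimal sketch in Python of the comparison described above. The counts are hypothetical placeholders (Shaw's raw numbers are not reproduced here); the test simply asks whether the share of respondents reporting more than 3 hours online differs between the two scale conditions.

# A minimal sketch (hypothetical counts, not Shaw's data): a chi-square test of
# whether the share of respondents reporting more than 3 hours online per day
# differs between the low-scale and high-scale versions of the question.
from scipy.stats import chi2_contingency

# Rows: question version; columns: [3 hours or less, more than 3 hours].
# Roughly 100 workers per condition, as in the original experiment.
observed = [
    [70, 30],  # low-scale version (hypothetical)
    [45, 55],  # high-scale version (hypothetical)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Response distributions differ significantly across scale versions.")

Collapsing both versions at the shared 3-hour cut point is what makes the two conditions comparable despite their different answer options.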

A number of commentators chimed in about when it is better to use scale versus open-ended questions. One major advantage that comes immediately to mind is that scale questions don't require analysts to spend additional time coding answers before commencing with their analysis. While open-ended questions may avoid the issue of satisficing (which I am not convinced they do – respondents could easily reference their own subjective scale or notions), they do place an additional burden on the analyst. For short, small-n surveys this isn't that big of an issue. However, once you start scaling up in terms of n and the number of questions, it can become problematic. Once you get into coding, all sorts of issues can arise (subjectivity and bias, data entry errors, etc.). Some crowdsourcing applications like Crowdflower may provide a convenient and reliable platform for coding (as I've mentioned before), but at some level researchers will always have to make an intelligent trade-off between scale and open-ended questions.
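
To make that coding burden concrete, here is a hypothetical sketch (an illustration only, not anything from Shaw's post or its comments) of binning free-text answers to the same "hours online" question into the low-scale categories. Even a toy coder like this has to flag unparseable answers for manual review, which is exactly where the subjectivity and data entry issues mentioned above creep in.

# A hypothetical sketch of coding open-ended responses: map free-text
# "hours online per day" answers onto the low-scale categories.
# Real answers, and real coding schemes, are messier than this.
import re

LOW_SCALE_BINS = ["0 - 1 hour", "1 - 2 hours", "2 - 3 hours", "More than 3 hours"]

def code_response(text):
    """Pull the first number out of a free-text answer and bin it."""
    match = re.search(r"\d+(?:\.\d+)?", text)
    if match is None:
        return None  # no number found: flag for manual coding
    hours = float(match.group())
    if hours <= 1:
        return LOW_SCALE_BINS[0]
    if hours <= 2:
        return LOW_SCALE_BINS[1]
    if hours <= 3:
        return LOW_SCALE_BINS[2]
    return LOW_SCALE_BINS[3]

answers = ["about 2 hours", "all day, maybe 10", "half an hour", "not sure"]
print([code_response(a) for a in answers])
# ['1 - 2 hours', 'More than 3 hours', None, None]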

[Cross-posted at bill | petti]

Research and Data on September 11 Terrorist Attacks

It is an appropriately gloomy day here in Manhattan, as the city and the country remember the horror of September 11th, 2001 and attempt to continue to collectively heal. For me, part of that healing process has been trying to understand what happened and, more importantly, how to prevent it from ever happening again. Over the past eight years many others have been moved to investigate and analyze these events, which has led to a plethora of research on 9/11—some good, some not so good.

As someone who attempts to read everything that comes across my desk related to these attacks, I thought today an appropriate time to compile a short list of my favorite research and data on the terrorist attacks of September 11th.

Research

  • Leaderless Jihad: Terror Networks in the Twenty-First Century, Marc Sageman – Much of the initial academic and popular research on the causes of terrorism in the aftermath of 9/11 focused on the colloquial wisdom that terrorists were poor, uneducated, and disaffected young men. Sageman was the first scholar to actually apply scientific rigor to the analysis of terrorist origins, and, using his own data on the Hamburg cell, the book continues to stand out as one of the best treatments of the formation and motivation of the 9/11 hijackers.
  • Responder Communication Networks in the World Trade Center Disaster: Implications for Modeling of Communication Within Emergency Settings, Journal of Mathematical Sociology, 31(2), 121-147. Carter T. Butts; Miruna Petrescu-Prahova; B. Remy Cross – This is one of the most unusual and interesting studies on the events of 9/11. Butts and his co-authors use data from emergency responder radio communication to build a dynamic collaboration network. This is a great paper for those interested in time-space relations under heavy stress and uncertainty.
  • The Internet Under Crisis Conditions: Learning from September 11, National Academies Press – I was fortunate enough to have attended the release conference for this research in Washington, DC. This remains the most comprehensive examination of global internet traffic and network response in the aftermath of the loss of a major node at the World Trade Center.
  • An economic perspective on transnational terrorism, European Journal of Political Economy, Volume 20, Issue 2, June 2004, Pages 301-316, Todd Sandler and Walter Enders – Sandler and Enders are two leading scholars on the relationships among politics, economics, and terrorism, and have written extensively on the topic. This article is one of the first to apply a game-theoretic model to the economics of terrorism in the aftermath of 9/11.

Data

As always, I welcome any and all additions to the list.

Shipwrecked

Ah, August 15 NSF target dates! This time of year makes the image above ring truer than ever. Hat Tip to Stu Shulman.
