Just wanted to let everyone know that starting January 4th I will be writing a weekly baseball column (sometimes twice weekly if I am feeling especially opinionated) at Beyond the Box Score.
Beyond the Box Score is a fantastic site, examining baseball from an analytical perspective. The authors definitely embrace sabermetrics, but they don’t beat readers over the head with complex statistics. As with most things that I do, the subject of my columns will vary quite a bit.
Generally speaking I’ll likely focus on team performance, player valuation, and lots of exploratory questions about the game. Oh, and you can be sure there will be lots of pretty visuals and laments about the NY Mets.
This post originally appeared at Beyond the Box Score. If you are a baseball analysis fan and don’t already read BTBS I highly recommend it.
2010 marks the end of the “aught” decade for Major League Baseball. I thought I would take the opportunity to analyze the last 10 years by visualizing team data. I used Tableau Public to create the visualization and pulled team data from ESPN.com (on-field statistics) and USA Today (team payroll).
The data is visualized through three dashboards. The first visualizes the relationship between run differential (RunDiff) and OPS differential (OPSDiff), as well as the cost per win for teams. The second looks at expected wins and actual wins through a scatter plot. The size of each team’s bubble represents the absolute difference between their actual and expected wins; teams lying above the trend line were less lucky than their counterparts below it. The final tab presents relevant data in table form and can be sorted and filtered along a number of dimensions.
The first visualization lists all 30 teams and provides their RunDiff, OPSDiff, wins, and cost per win for 2001-2010. The default view lists the averages per team over the past 10 years, but you can select a single year or range of years to examine averages over that time frame. The visualization also allows users to filter by whether teams made the playoffs, were division winners or wild card qualifiers, won a championship, or were in the AL or NL. The height of the bars corresponds to a team’s wins (or average wins over a range of years). The color of the bars corresponds to a team’s cost per win: the darker green the bar, the more costly a win was for that team. Total wins (or the average for a range of years) is listed at the end of each bar. In order to create the bar graph I normalized the run and OPS differential data (taking the absolute value of each score plus 20) to make sure there were no negative values. For the decade, run differential explained about 88% of the variation in wins, and OPS differential explained about 89% of the variation in run differential.
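The normalization and the “variation explained” figures can be sketched in a few lines of Python. This is only an illustrative reading of the parenthetical above (each differential replaced by its absolute value plus 20), applied to hypothetical team totals rather than the actual ESPN/USA Today data:

```python
# Sketch of the normalization and "variation explained" (R^2) calculations.
# The team totals below are hypothetical placeholders, not the real 2001-2010 data.

def normalize(diffs):
    """One reading of the normalization above: replace each score with |score| + 20."""
    return [abs(d) + 20 for d in diffs]

def r_squared(x, y):
    """Share of variance in y explained by a linear fit on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)

run_diff = [-120, -45, 10, 80, 160]  # hypothetical run differentials
wins = [65, 74, 81, 90, 98]          # hypothetical win totals

print(normalize(run_diff))           # no negative bar lengths
print(round(r_squared(run_diff, wins), 2))
```

With the real decade-long team data, `r_squared` on run differential and wins would land near the 88% figure cited above.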
The visualization illustrates the tight correlation between RunDiff and OPSDiff, as the respective bars for each team are generally equidistant from the center line, creating an inverted V shape when sorted by RunDiff. In terms of average wins over the decade, there are few surprises, as the Yankees, Red Sox, Cardinals, Angels, and Braves make up the top 5. However, St. Louis did a much better job of winning efficiently, paying less per win than the other winningest teams (<$1M per win).
There are tons of ways to manipulate the visualizations and cut the data. Hopefully viewing the data in this way is helpful and illuminates some things we didn’t know and drives home other things we had a hunch about. This is my first attempt to visualize this data, so please feel free to send along any and all comments so I can improve it.
In the days after the US midterm elections, cable news outlets, radio programs, political pundits, newspapers, and activists on both sides of the ideological spectrum have expended a great deal of blood and sweat to explain the nationwide drubbing of the Democrats. Democrats are predictably covering their behinds—conceding voter anger, but cautioning that the country has not lurched to the right in just two years. Republicans are claiming validation of their position and a greater ideological alignment with the American people. Activists and enthusiasts of all stripes are weaving narratives that use the election results to validate their personal political perspective. The question, of course, is whether any of this is correct or meaningful. Was this election a mass repudiation of Democratic policies? Was it a validation of the Republican platform and/or Tea Party-style conservatives?
Elections are like Rorschach blots: everyone sees something different, and often what they see is what they want to see. Particularly with elections, people like to place causation in the hands of people—agents—whose efforts, words, thoughts, etc., drive the outcome. And to be sure, individual agents can and do wield a great deal of influence on events. But an overemphasis on agents can lead to spurious conclusions about why something happens. You must also look at structural or environmental factors.
Over at the Monkey Cage, John Sides has a great piece precisely along these lines. Sides and his colleagues looked at which factors were the best predictors of voter choice:
If you had one thing, and one thing only, to predict which Democratic House incumbents would lose their seats in 2010, what would you take? The amount of money they raised? Their TARP vote? Their health care vote? Whether they had a Tea Party opponent? A Nazi reenactor opponent?
As is typically the case, the partisan makeup of a politician’s district mostly determines which candidate will win. Sides and his colleagues found that the 2008 presidential vote in a district explained 83% of the variation in the 2010 vote share (see graph below).
This data does not negate agent-centered factors, but it certainly dulls them. Additionally, many of the theories being thrown about (the vote was a referendum on Obama, on Democrats, on “Big Government”, etc) just don’t have the explanatory power that the partisan makeup of a district has.
What’s clear is that, structurally speaking, the Democrats were set up for a shellacking. Historically, the President’s party takes a big hit in the midterms, incumbents are punished in a poor economy (regardless of their control over it), and incumbents in swing districts will be the first to go. Many of the seats Democrats gained in 2006 and 2008 to take a commanding majority in the House were obtained by targeting vulnerable Republicans in swing districts. Conservative Democrats ran and won in those districts, meaning they faced a center-right electorate. Given these structural factors, it is no surprise that the Democrats lost so many seats.
Structural explanations are not very sexy. They don’t allow a ton of room for debate and analysis after the initial work is done. By their nature, there isn’t a whole lot that can be done to alter the conditions (i.e. a reduced role for agency). And they don’t really allow people to indulge in great philosophical and ideological satisfaction. But, at the end of the day, they can be powerful explanations. Democrats in 2006 and 2008 were overzealous in their interpretation of what those election results implied, and the same may happen to Republicans in 2010. Savvy politicians and operatives should take heed.
[Cross-posted at Signal/Noise]
I recently finished Diego Gambetta’s Codes of the Underworld: How Criminals Communicate. For those looking for a more academic take on signaling (particularly from a sociological point of view) it’s a great find. As I previously mentioned, Gambetta uses the extreme case of cooperation amongst criminals to tease out more general dynamics of trust, signaling, and communication. The Mafia can be considered a “hard-case” for theories of signaling trust; given the extreme incentives for criminals to lie and the lack of credibility they wield given the very fact that they are criminals, how is it that criminals manage to coordinate their actions and trust each other at all? By understanding how trust works in this harsh environment we learn something about how to signal trustworthiness in broader, less restrictive environments. As Gambetta notes:
Studying criminal communication problems, precisely because they are the magnified extreme versions of problems that we normally solve by means of institutions, can teach us something about how we might communicate, or even should communicate, when we find ourselves in difficult situations, when, say, we desperately want to be believed or keep our messages secret.
The book is a great example of studying deviant cases or outliers, particularly when the area of study is not well worn. This is a valuable general methodological lesson. We are typically taught to avoid outliers as they skew analysis. However, they can be of great value in at least two circumstances: 1) Generating hypotheses in areas that have not been well studied and 2) Testing hypotheses in small-N research designs, where hard cases can establish potential effect and generalizability and easy cases suggest minimal plausibility.
Gambetta takes a number of criminal actions and views them through the lens of signaling. This allows readers to see actions, in many cases, in completely new ways, highlighting the instrumental causes of behavior. For example, Gambetta looks at how criminals solve the problem of identifying other criminals by selectively frequenting environments where non-criminals are not likely to go. Since criminals cannot openly advertise their criminality, they face a coordination problem. Frequenting these locations acts as a screening mechanism, since only criminals are likely willing to pay the costs of frequenting them. (This ignores the issue of undercover law enforcement, but Gambetta deals with that as well.) Gambetta also makes the reader look at prison in a new light. Criminals derive a number of advantages from serving time in prison, not the least of which is a signaling mechanism for communicating their credibility to other criminals (as prison time can be verified by third parties). Additionally, many criminal organizations require that new members have already served time before they are allowed to join. Moreover, Gambetta explores how incompetence can work to a criminal’s advantage, since it can signal loyalty to a boss who provides the criminal’s only real means of income (a topic I discussed here).
Gambetta also looks at the conspicuous use of violence within prisons. This isn’t a new topic, as any law enforcement drama will undoubtedly portray the dilemma of a new inmate who must establish their reputation for toughness and resolve or else suffer constant assaults by other inmates. However, Gambetta makes it interesting by embedding the acts in a signaling framework.
First, Gambetta’s hypothesis regarding the importance of non-material interests is borne out by various studies. Among others, he cites one study of prison conflict that found:
“[n]on-material interests (self-respect, honour, fairness, loyalty, personal safety and privacy) were important in every incident.” While only some violent conflicts occur for the immediate purpose of getting or keeping resources, all of them have to do with establishing one’s reputation or correcting wrong beliefs about it. Even “a conflict that began over the disputed ownership of some item could quickly be interpreted by both parties as a test of who could exploit whom.”
Second, Gambetta hypothesizes that we should expect to see more fights when prisoners arrive in prison without much of a track record of violence. One observable implication of this is higher rates of prison violence among female prisoners and younger prisoners. In fact, the empirical record bears this out quite nicely. Rates of violence are inversely related to age, providing “a plausible social rather than biological explanation” for youth violence. Additionally, Gambetta finds that, although less violent in the outside world, “women become at least as violent and often more prone to violence than men”. Interestingly, women are less often convicted of violent offenses, suggesting that the results are not simply the product of selection effects.
Both points have implications for political science and international relations, given the growing use of signaling models to explain political behavior. The study of reputation in international relations is still growing, and Gambetta’s hypothesis about a lack of “violence capital” fits right into much of the current work in conflict studies.
Overall, Codes of the Underworld is a unique and thought-provoking work. For those with a strong interest in communication and signaling, it is a must-read.
[Cross-posted at Signal/Noise]
At Gallup, we are officially predicting—regardless of turnout level—at least 40 seats for Republicans. Based on the numbers and our historical model, Republicans should pick up 60 or more House seats, easily gaining the majority.
Personally, I’ll say 65 just to be (arbitrarily) specific. I’ll also predict that Republicans pick up 7 seats in the Senate, 3 short of a majority in that body.
What do you think? Feel free to leave your own predictions in the comments section.
Makes you laugh, but also cry a little bit inside…
Update: and for those polisci folks out there
Joseph Nye gives a TED talk:
To my knowledge, Nye is only the third political scientist and IR specialist to give a TED talk (Bruce Bueno de Mesquita and Samantha Power being the other two). Hopefully we’ll see more, as I think the TED series is a great way for polisci and IR scholars to make their knowledge relevant and understandable outside of the discipline.
For those who have not read it yet, The Atlantic recently featured an article profiling Dr. John Ioannidis, who has made a career out of falsifying many of the findings of the medical research that guides clinical practice. Ioannidis’ research should cause us all to appreciate the various biases we may bring to our own work:
[C]an any medical-research studies be trusted?
That question has been central to Ioannidis’s career. He’s what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community; it has been published in the field’s top journals, where it is heavily cited; and he is a big draw at conferences. Given this exposure, and the fact that his work broadly targets everyone else’s work in medicine, as well as everything that physicians do and all the health advice we get, Ioannidis may be one of the most influential scientists alive. Yet for all his influence, he worries that the field of medical research is so pervasively flawed, and so riddled with conflicts of interest, that it might be chronically resistant to change—or even to publicly admitting that there’s a problem. [my emphasis]
Unlike most famous researchers, Ioannidis is not famous for a positive discovery or finding (unless you count his mathematical proof predicting error rates for studies with different methodological designs). Instead, his status comes from his ability to falsify the work of others: to take their hypotheses and empirical research and show that they are wrong.
This is highly unusual, not only in medical research but in most academic disciplines. The article notes that researchers are incentivized to publish positive findings (preferably paradigm-altering ones), and this leads to a breakdown in the scientific method. As Karl Popper so famously argued, knowledge accumulates through the testing of theories that are then subjected to replication by other researchers. If the original findings are falsified, meaning the evidence does not support the theory, the theory is scrapped and replaced with a new theory that has greater explanatory power. Knowledge is built through the cumulative falsification of theories. One can think of falsification as the successive chipping away at a block of stone: the more we chip away, the closer we get to the actual form. If researchers are not incentivized to pursue falsification, we all lose, since incorrect findings are not vigorously retested and challenged. According to Ioannidis, when they are challenged it is often years, if not decades, after they have been generally accepted by research communities.
It would appear that Theodore Roosevelt was not entirely correct. The critic should, in fact, count a great deal.
[Cross-posted at Signal/Noise]
[Cross-posted at Signal/Noise]
There are many reasons why organizations (governments, businesses, etc.) grow dysfunctional and stagnant. One major reason lies with the promotion and retention of less capable workers. A number of studies have explored this dynamic (for example, The Peter Principle, which theorizes that people are promoted as long as they are competent, meaning that at some point they reach a position of incompetence). In general, though, the promotion and retention of incompetent workers would seem to run counter to the rational interests of the larger organization. So why does this behavior persist? Why are less competent workers able to retain their positions and, in some cases, obtain promotions?
One potential reason is that it is their very incompetence that is valued. Incompetence acts as a credible, costly signal that they can be trusted by superiors looking to accumulate a power base.
Sociologist Diego Gambetta is a pioneer in the study of signaling. In his 2009 book Codes of the Underworld: How Criminals Communicate, Gambetta uses the extreme case of cooperation amongst criminals to tease out more general dynamics of trust, signaling, and communication. The Mafia can be considered a “hard case” for theories of signaling trust: given the extreme incentives for criminals to lie, and the lack of credibility they wield given the very fact that they are criminals, how do criminals manage to coordinate their actions and trust each other at all? By understanding how trust works in this harsh environment, we learn something about how to signal trustworthiness in broader, less restrictive environments.
Gambetta theorizes that one way that a criminal can signal their trustworthiness to another is through their own incompetence:
The mobsters’ henchman, so often caricaturised in fiction as an énergumène, epitomizes the extreme case of this class. If he were too clever he would be a menace to the boss. Idiocy implies a kind of trustworthiness. […] One way of convincing others that one’s best chance of making money lies in behaving as an ‘honourable thief’, is by showing that one lacks better alternatives. […] Incompetence is one way of telling people “You can count on me, for even if I wanted to I would not be able to cheat.”
Through this mechanism, lower-level criminals can signal their trustworthiness to their bosses, since they are essentially dependent on their bosses for their economic gains given their lack of independent skill and intelligence. This pervasive logic means that criminal organizations are likely to employ mostly incompetent criminals and that leaders will likely surround themselves with less competent lieutenants over time.
It is not hard to see this same logic play out in businesses, schools, and government. If organizations are set up in such a way where the accumulation of loyalists is incentivized instead of performance, we should expect to see a greater number of incompetent employees relative to competent ones. Additionally, we should see more incompetent employees advance as their “sponsor” advances.
“A unique anniversary is upon us. Seventy-five years ago today — Oct. 20, 1935 — the Gallup Poll published its first official release of public opinion data.
Here we are three-quarters of a century later, still working to fulfill the mission laid out in that first release: providing scientific, nonpartisan assessment of American public opinion.
The subject of that first release? Well, given the fact that 1935 was smack dab in the middle of the Depression, it may come as no surprise that the topic focused on public opinion about “relief and recovery,” or in other words, welfare. President Franklin Delano Roosevelt was at that time heavily involved in creating a number of relief, recovery, and work programs designed to help people whose lives were being affected by the Depression. Figuring out what the public thought about all of this became Dr. George Gallup’s first official poll question.”
You can read the rest of Frank Newport’s write-up of the first poll here.
[Cross-posted at Signal/Noise]
Time for a little baseball blogging.
There is quite a lot of buzz surrounding the AL Cy Young award this year. While there are a number of pitchers who possess a high number of wins (17, 18, 19, and even 20 games), many believe the award should go to Seattle’s Felix Hernandez. Despite only winning 13 games and losing 12, Hernandez’s performance this year has been nothing short of amazing. His problem is that he played on one of the worst teams in the league. He was 8th in the league amongst starters in terms of run support (86 runs over 34 starts) and was actually dead last in terms of run support per nine innings (3.1). If you look beyond wins to the other two orthodox statistics that make up the pitching triple crown, Hernandez finished first in ERA (2.27) and second in strikeouts (233). It is his performance in these other two categories that has many arguing for Hernandez to win the award, since he shouldn’t be penalized for his team’s inability to score runs to support his dominance.
If someone like Hernandez wins this year, it would truly represent a paradigm shift in the way baseball writers evaluate player performance. In the history of the AL Cy Young Award, no starting pitcher has ever won with fewer than 16 victories (Zack Greinke won with 16 last year). In the NL, only Fernando Valenzuela managed to win the award with as few as 13 wins, and that was in 1981. No winner from either league has had a record as close to .500 as Hernandez’s.
That being said, I would actually argue that Hernandez is not the only “non-orthodox” contender.
There is only so much control a pitcher has over the outcome of a game. And while starting pitchers have more control than most, they still must rely on their defense to play well and on their offense to score runs. So rather than focus on statistics such as wins (which are heavily dependent on a team’s offense), we should evaluate starting pitchers on their performance independent of their offense and, to the extent possible, their defense. Doing this means focusing on how often pitchers deny batters the chance to put the ball in play (strikeouts), how often they give a batter a free pass (walks), how many base runners they allow (WHIP), and how deep into a game they pitch, which gives their bullpen rest and allows their manager to use only the team’s best relievers (thereby giving the team the best chance to win).
So let’s look at a few statistics:
K/9 – Strikeouts per 9 innings: The more batters a pitcher strikes out, the better.
K/BB – Strikeouts to Walk Ratio: The more strikeouts relative to walks, the better.
WHIP – Walks + Hits per Inning Pitched: The fewer baserunners a pitcher allows to reach base, the better.
FIP – Fielding Independent Pitching: Measures a pitcher’s performance independent of the quality of their defense. The lower, the better.
RS/9 – Run Support per 9 innings: How many runs a pitcher’s offense scores for them per nine innings.
IP/GS – Innings Pitched per Game Started: The more innings pitched per start, the better.
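For readers who want to compute these rate statistics from raw season totals, here is a minimal Python sketch. The FIP constant (about 3.10) and the sample stat line are assumptions for illustration, not numbers taken from the table discussed below:

```python
# Rate statistics defined above, computed from raw season totals.

def k_per_9(k, ip):
    return 9 * k / ip

def k_per_bb(k, bb):
    return k / bb

def whip(bb, h, ip):
    return (bb + h) / ip

def fip(hr, bb, hbp, k, ip, constant=3.10):
    # Common FIP form: (13*HR + 3*(BB + HBP) - 2*K) / IP + league constant.
    # The constant (~3.10) varies by year; it is treated here as an assumption.
    return (13 * hr + 3 * (bb + hbp) - 2 * k) / ip + constant

def ip_per_gs(ip, gs):
    return ip / gs

# A hypothetical stat line (innings expressed as a plain decimal for simplicity):
ip, gs, k, bb, h, hr, hbp = 212.0, 28, 185, 18, 195, 16, 1
print(round(k_per_9(k, ip), 2))     # K/9
print(round(k_per_bb(k, bb), 2))    # K/BB
print(round(whip(bb, h, ip), 2))    # WHIP
print(round(fip(hr, bb, hbp, k, ip), 2))
print(round(ip_per_gs(ip, gs), 1))  # IP/GS
```

Note that none of these depend on a team’s run support, which is exactly why they are better measures of a starter’s individual performance than wins.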
I’ve created a table with non-counting statistics for the top 10 pitchers in the AL this year, but I have not included their names or their traditional statistics (Wins, ERA, or K’s). Take a look and think about who jumps out as the best pitcher:
Now, all of these guys are good, but there is one whose performance really jumps out.
First, it’s hard to miss the obvious gap between Pitcher A, with a K/BB ratio of 10.28, and the rest of the field. For every batter Pitcher A walks, he strikes out more than 10. That is more than double the next closest pitcher (Pitcher B at 4.31). A ratio of 10.28 is the second highest in the history of baseball and only the third double-digit ratio ever recorded (the other two came in 1994 and 1884). Pitcher A also had the lowest WHIP, the lowest FIP, and the highest IP/GS. The only two categories in which he didn’t finish first are K/9 (10th) and RS/9 (4th fewest).
So who is Pitcher A? Felix Hernandez? Nope. It’s Cliff Lee.
Here’s the chart with the names included:
In terms of traditional statistics, Lee only went 12-9 with a 3.18 ERA (6th in the AL) and 185 strikeouts (10th in the AL) in 28 starts. At first blush, his body of work doesn’t look that impressive. But if you go beyond mere “counting” stats, Lee’s dominance becomes more evident and Hernandez-esque. His higher ERA (still 6th best) can be explained by an unusually high .302 batting average on balls in play (BABIP), meaning that when batters actually managed to put the ball in play, they reached base roughly 30% of the time. BABIP is strongly correlated with ERA. My guess is that Lee’s high BABIP can be explained by the fact that the defense behind him wasn’t the greatest, reflected in the fact that he had the best fielding independent pitching in the AL amongst starters.
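As a side note, BABIP is easy to compute from counting stats. The formula below is the standard one (it isn’t given in the text), and the season totals are hypothetical numbers chosen only to land near the .302 figure cited above:

```python
# Standard BABIP formula: hits on balls in play divided by balls in play.
# BABIP = (H - HR) / (AB - K - HR + SF)

def babip(h, hr, ab, k, sf):
    return (h - hr) / (ab - k - hr + sf)

# Hypothetical season totals against a pitcher:
print(round(babip(h=195, hr=16, ab=787, k=185, sf=7), 3))  # 0.302
```

Home runs and strikeouts are excluded from both numerator and denominator because neither involves the defense, which is what makes BABIP useful for separating pitching from fielding.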
Hernandez had less run support (3.10 to 4.45) and more strikeouts per nine innings (8.36 to 7.84), but otherwise Lee was better than Hernandez in every non-counting category (and he was better than every other contender).
Will Lee win the AL Cy Young? I doubt it. My guess is it will go to either Hernandez or CC Sabathia (who had 21 wins and played for the Yankees in the AL East), but it is hard to argue with how dominant Lee was over the course of the regular season.
[Cross-posted at Signal/Noise]
Stephen Biddle has a spot-on piece over at Foreign Policy on how presidents, and Obama in particular, must take into account domestic politics when setting national security strategy. With the release of Bob Woodward’s latest book, Obama’s Wars, many have jumped on the President’s alleged quote that he couldn’t lose the entire Democratic Party, offered to justify setting a troop draw-down date for Afghanistan, as evidence that he’s putting politics above national security (as if anything can be separated from politics).
…I do know that it’s no sin for a president to consider the domestic politics of military strategy. On the contrary, he has to. It’s a central part of his job as commander in chief.
Waging war requires resources — money, troops, and equipment — and in a democracy, resources require public support. In the United States, the people’s representatives in Congress control public spending. If a majority of lawmakers vote against the war, it will be defunded, and this means failure every bit as much as if U.S. soldiers were outfought on the battlefield. A necessary part of any sound strategy is thus its ability to sustain the political majority needed to keep it funded, and it’s the president’s job to ensure that any strategy the country adopts can meet this requirement. Of course, war should not be used to advance partisan aims at the expense of the national interest; the role of politics in strategy is not unlimited. But a military strategy that cannot succeed at home will fail abroad, and this means that politics and strategy have to be connected by the commander in chief.
State leaders must always balance the domestic and the international when formulating policy. What may be possible internationally may not be sustainable domestically, and vice versa. Ignoring either one typically leads to disaster. Political scientists have long argued that outcomes are the result of simultaneous negotiations with domestic and international audiences, and have documented the difficulty states face when trying to sustain public support for wars of choice. Condemning leaders for being prudent may make for good copy, but it makes no sense given all we know about policymaking.
Every night, about 15 minutes or so after we’ve put my 3-year-old daughter to bed, we inevitably hear a knock at the door. She’s typically knocking because she needs to go to the bathroom. She’s also knocking because she wants to scope out what we are doing and find out if she is missing anything. One thing that bothers her is if my wife or I leave the house after she goes to bed. In order to go to sleep she needs some kind of guarantee that we aren’t leaving and are getting ready to go to bed just like her. It appears she’s found one: whether my wife or I have changed into our pajamas.
If we come to her door in our pajamas, or at least in different clothes (e.g., sweatpants) than when she last saw us, she takes it as a signal that we are in for the night. If we were going out or not going to bed soon, we would still be in the regular clothes we wore earlier. If we haven’t changed, she probes: “why aren’t you in your jammies?” This lets us know that she suspects we aren’t in for the night. It also means that she will likely spend a fair amount of time looking out her window to see if our cars stay in the driveway before she will settle in and go to sleep. Now, putting on pajamas isn’t that costly a signal: there is nothing stopping us from putting them on and then changing back into regular clothes to leave the house or host guests. (However, in all honesty this isn’t likely to happen.)
The lesson here is that a) the idea of seeking out signals is intuitive for people, and we start at a very early age, and b) rather than fight with our daughter about going to bed, we might be better served just changing into our pajamas at the outset to demonstrate to her that we aren’t leaving the house, no one is coming over, and we are also getting ready for bed. She may not believe our words, but she seems to believe the signal that she’s identified. Leveraging that signal can lead to better communication and the outcome that we want.
[Cross-posted at Signal/Noise]
I’ve just started reading Matt Ridley’s The Rational Optimist. So far, it is an excellent, thought-provoking read. A key to Ridley’s argument is that it was the innovation of exchange (the trading between two parties of separate items or services that both parties value) that led to mankind’s dominance of the planet and the explosion of knowledge and technology.
Ridley explains how exchange–or barter–is qualitatively different from reciprocity (an activity that can be found in other species):
at some point, after millions of years of indulging in reciprocal back-scratching of gradually increasing intensity, one species, and one alone, stumbled upon an entirely different trick. Adam gave Oz an object in exchange for a different object. This is not the same as Adam scratching Oz’s back now and Oz scratching Adam’s back later, or Adam giving Oz some spare food now and Oz giving Adam some spare food tomorrow. The extraordinary promise of this event was that Adam potentially now had access to objects he did not know how to make or find; and so did Oz. And the more they did it, the more valuable it became. For whatever reason, no other animal species ever stumbled upon this trick – at least between unrelated individuals.
As I read this it occurred to me that Ridley is likely right, but also that exchange is just as dangerous an activity as it is a transformative one. Why? Because to base one’s existence on exchange means making oneself vulnerable to and dependent on others for what one needs. As Ridley notes, earlier humans were self-sufficient. But moving from self-sufficiency to exchange means trusting that others will provide what you need and will honor the exchange.
In the present, we take this somewhat for granted. I assume that my local grocer will have the fruits, vegetables, etc., that I need to feed myself and my family. I don’t worry about the possibility that they either won’t have my food or that they will refuse to provide it to me in exchange for the money that I have. But imagine back about 100,000 years ago. At some point, someone had to take a very big leap and become dependent on someone else for what they required for survival.
As a political scientist, my initial reaction is that trust both emerged from repeated interactions with barter partners and was then institutionalized through the emergence of government. Too often government is derided as an impediment to economic growth, but we forget that without it one is hard-pressed to explain sustained progress. A capitalist economic system cannot function without a robust legal system that includes rules for exchange and a system that monitors compliance and sanctions violators. How else can a society become so utterly dependent on anonymous, non-local actors to provide that which is crucial for survival? That isn’t to say that government can’t also play a negative role–often it has. But ignoring the positive, necessary role that it plays is quite dangerous in my view. It also requires us to ignore the lessons of history.
I am only up to Chapter 2, and it appears that Ridley will take up this question of how trust emerged in Chapter 3. I’ll be curious to see how he deals with this question and what answer he proposes.
[Cross-posted at Signal/Noise]
Via Drew Conway, a great quote this morning from Stephen Curry, a professor at Imperial College London:
Students should think more broadly about what a PhD could prepare them for. We should start selling a PhD as higher level education but not one that necessarily points you down a tunnel…We should not see moving out of academia as a failure. We need to see it as a stepping stone, a way of moving forward to something else.
Curry was commenting here on changing the mindset of students, but I would argue that in many disciplines the problem isn’t the students, but the professors. There are still large groups of people in academia who not only disagree with this sentiment, but actively work to undermine students who choose to take their education and apply it outside of academia. My experience has been in the realm of political science, but I certainly know others who have had similar experiences in other disciplines.
The skills one learns in graduate school are absolutely applicable outside of academia. In many cases, students may be better positioned to apply what they’ve learned and have a more fulfilling career in either government or business. Not everyone is cut out for this type of career, but then again not everyone is cut out for a life in academia either. In many cases, it takes a different set of talents to thrive in either environment. And when we take into account the utter dysfunction of the academic labor market, I don’t think pressuring students to seek a career in that market is the most responsible thing to do.
Bottom line: the focus should be on the students and what will be the best move for them, not what professors think is the ‘proper’ career for those pursuing and holding a Ph.D.
Loyal Duck readers, I was hoping you might be able to help me out.
Do you have any recommendations for books about the inventive ways that people (scientists, designers, business folk, etc.) have evaluated hard-to-test subjects? I am looking for something that is less about methodology, per se, and more about how people test ideas in settings where the environment or subject matter makes experimentation difficult (thinking here of astrophysics, for example). I am not looking for something that approaches the subject from a philosophical standpoint, but rather a collection of examples highlighting the inventive ways people have gone about testing hypotheses in practice.
Hopefully this makes some sense. Any suggestions?
Thanks in advance!