A recent IR Twitter flare-up occurred on a seemingly innocuous topic illustrated by the flow-chart above: what should I call my professor? A PSA from Prof. Megan L. Cook recommended that students address their professors as "Professor" or "Dr.," avoiding references to their marital status or first names. Prof. Raul Pacheco-Vega tweeted the following:
I also delete every email that first-names me on a first email. Them’s the rules. You can decide how you want to be addressed, but I’m the one who decides how *I* want to be addressed.
Dr. Jenny Thatcher and several others disagreed, pointing out that taking offence at an "improper" address is elitist, disrupts collegiality, and can potentially push out first-gen scholars or people from backgrounds that do not share the same culture of academic etiquette. For that intervention, Dr. Thatcher endured insults, digs at her dyslexia, and threats of being reported to the police by random Tweeps.
Rousseau once remarked that "It is, therefore, very certain that compassion is a natural sentiment, which, by moderating the activity of self-esteem in each individual, contributes to the mutual preservation of the whole species" (Discourses on Inequality). Indeed, it is compassion, and not "reason," that keeps this frail species progressing. Yet this ability to be compassionate, which is by its very nature an other-regarding ability, is (ironically) the other side of the same coin: comparison. Comparison, or perhaps "reflection on certain relations" (e.g. small/big; hard/soft; fast/slow; scared/bold), also gives rise to the degenerative passions of pride and envy. These twin vices, for Rousseau, are the root of much of the evil in this world. They are tempered by compassion, but they engender the greatest forms of inequality and injustice.
Rousseau’s insights ought to ring true in our ears today, particularly as we attempt to create artificial intelligences to overtake or mediate many of our social relations. Recent attention given to “algorithmic bias,” where the algorithm for a given task draws on either biased assumptions or biased training data and yields discriminatory results, is, I would argue, working the problem of reducing bias from the wrong direction. Many, the White House included, are presently paying close attention to how to eliminate algorithmic bias, or in some instances to solve the “value alignment problem” and thereby indirectly eliminate it. Why does this matter? Allow me a brief technological interlude on machine learning and AI to illustrate why eliminating this bias (a la Rousseau) is impossible.
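To make the mechanism concrete before the interlude, here is a minimal sketch, with entirely invented data and hypothetical group labels, of how a model that learns "neutrally" from historically biased decisions will simply reproduce those decisions:

```python
# Toy illustration (hypothetical data): a model trained on historically
# biased hiring records reproduces the bias, even though the learning
# step itself is just neutral counting.

from collections import defaultdict

# Invented historical records: (group, qualified, hired).
# Group "A" was hired when qualified; group "B" often was not.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, False),
    ("B", True, False),
]

# "Train": estimate the hire rate per (group, qualified) by counting,
# a stand-in for any model that learns patterns from labeled data.
counts = defaultdict(lambda: [0, 0])  # (group, qualified) -> [hired, total]
for group, qualified, hired in history:
    counts[(group, qualified)][0] += int(hired)
    counts[(group, qualified)][1] += 1

def predict_hire(group, qualified):
    hired, total = counts[(group, qualified)]
    # Predict the majority outcome observed in the training data.
    return hired / total > 0.5

# Two equally qualified candidates receive different predictions,
# because the bias lives in the data, not in the arithmetic.
print(predict_hire("A", True))   # True  (2 of 2 hired in history)
print(predict_hire("B", True))   # False (1 of 3 hired in history)
```

Nothing in the counting procedure is prejudiced; the discriminatory output is inherited wholesale from the comparisons already embedded in the training data.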
The grassroots advocacy campaign, Women on 20s, had a simple request: put a woman on the $20 bill by 2020 to commemorate the 100th anniversary of the 19th Amendment, which granted women the right to vote in the United States. Starting with a list of 15 women candidates, online voters cast an electronic ballot in the primary round and chose four finalists: Harriet Tubman, Eleanor Roosevelt, Rosa Parks, and Wilma Mankiller. One month later, voters elected Harriet Tubman as their choice for the portrait on the twenty-dollar bill. As the final votes were pouring in, Senator Jeanne Shaheen (D-NH) introduced S.925, the Women on the Twenty Act, which is currently being considered by the Senate Committee on Banking, Housing and Urban Affairs.
The momentum of the campaign came to a halt when Treasury Secretary Jack Lew announced that a woman would appear on the redesigned $10 bill, but that she would share the honor with Alexander Hamilton, who currently appears on the bill.