There is a long tradition of philosophers occupying the political role of rationalizing the actions and existence of destructive institutions. This usually happens under the guise of ‘nuance’, ‘rigor’, or other nebulous scholarly virtues. We the audience get the sense that if such a careful and credentialed thinker thinks something is ok, it cannot be all bad. Usually, to lubricate the moderating effect, some benign critiques of the institution are thrown in, but only ones that the institution already took the initiative of admitting. In the end, ‘radical’ gets conflated with ‘unnuanced’, ‘rational’ with ‘moderate’.
In the piece Liao does some public philosophy in translation mode, importing an academic distinction between duties to oneself and duties to others. If Facebook causes you harm, you might have a duty to yourself to delete your account. When we try (and probably fail) to grasp the extent of the damage that the company has caused all over the world, the question becomes whether we have a duty to others to delete. Liao’s best point is in showing that even if you only share cat videos, you are not free from concerns about right-wing propaganda and hate speech on the site. Facebook is still collecting your data in order to sell ads.
By starting the article with an academic distinction and then moving methodically through numbered points, we get the message that we are reading a nuanced treatment of a real problem. The distinction will help us draw the right conclusions.
What conclusions does Liao provide for us?
“So do we have an obligation to leave Facebook for others’ sake? The answer is a resounding yes for those who are intentionally spreading hate speech and fake news on Facebook.”
Some truly groundbreaking moral philosophy for you! Unfortunately, the conclusion doesn’t actually follow. Do those people really need to leave? Couldn’t they just switch to cat video content? Liao hasn’t ruled out that possibility.
Quibbling aside, notice the sudden appearance of intentions. ‘Intention’ is an important moral concept that had not yet been mentioned in the article. When you study logic, you learn that if a conclusion contains content that appears in none of the premises, something is wrong.
My concern goes beyond the logical errors. The key is in recognizing that, after we’ve been drawn in by the distinctions, a clear outline, and helpful metaphors, all of the nuance stops.
We saw the trivial conclusion about hate speech and fake news—after which I’m sure the people guilty of posting such content stopped reading and promptly deleted their accounts. Liao then has a conclusion dedicated to the rest of us:
“For me at least, Facebook would have crossed a moral red line if it had, for example, intentionally sold the data of its users to Cambridge Analytica with the full knowledge that company would use the data subversively to influence a democratic election. Likewise, Facebook would have crossed a red line if it had intentionally assisted in the dissemination of hate speech in Myanmar. But the evidence indicates that Facebook did not intend for those things to occur on its platform.”
Here is where the sudden appearance of intentions matters. What does it mean to say that Facebook has intentions? An answer would probably require us to reformulate the question. Should we be talking about Zuckerberg or Sandberg? The board? We don’t know what Facebook is.
And wait. Even if we can determine Facebook’s intention, is that what should matter most in determining duties? These are issues that call for nuance.
Deciphering the intentions of corporations is notoriously difficult, legally and philosophically. And corporations know it. As long as our moral discussion is premised on knowing the corporation’s intentions as a whole, the corporation and its leaders are safe. They can hide behind the vagaries of corporate decision making, behind the assumption that it is politically acceptable for businesses to make piles of money with only the thinnest of moral constraints, and, in the case of Facebook, behind the claim that the site is only a “neutral platform” for the voices of users, a claim that should be laughable at this point. Given the complexities surrounding intentions, Liao is making a corporation-friendly case.
He then offers a conclusion that is as hollow as it is common: Facebook should be “much more proactive in fixing such problems.” Well, yes. But what would that look like? When we think about it, we see that this, too, is an issue that calls for nuance. And Facebook knows it. That is why Zuckerberg can copy/paste his apologies for each new failure and (re)commit to being “much more proactive.” It lulls the public into complacency by appropriating moral language. In effect, it makes moderate moral critiques of Facebook, like Liao’s, indistinguishable from Facebook PR strategy.
It is laudable to publish moral critiques of Facebook. I have done so numerous times. It is probably less laudable to focus instead on the moral duties of Facebook users, but there may be a place for it. In either case, when you do it using a moral framework that Facebook can so easily use to its benefit, you undermine the project. Your philosophy becomes a tool for destructive institutions.
Was that your intention? If the destruction comes, and you, philosopher or social media entrepreneur, had a hand in it, will your intention be what matters? You still failed in your duty to us. This is why it is better to ask whether Facebook has a duty to delete itself.
Liao’s ultimate conclusion is that people like him don’t currently have an obligation to leave the site. He tells us to watch the red line. But since crossing the red line requires us to know Facebook’s intentions, we are left with a piece that exemplifies a pervasive flaw in philosophy: “clarity” and “rigor” about the trivial and obvious, handwaving and silence on what actually matters.