Tuesday, October 3, 2017

My take on Charlottesville (When the Nazis come marching in)

One of my most controversial predictions is that North Americans are sleepwalking towards a Second Civil War, in which the growing estrangement of the increasingly polarized halves of the electorate will finally act out the enormous reservoirs of animosity and spite they have been building towards each other for decades. The recent events in Charlottesville, Virginia, in which rightists and leftists clashed in the streets, causing enough turmoil to force Governor Terry McAuliffe to declare a state of emergency, and in which a counterdemonstrator (32-year-old Heather Heyer) was killed, would seem to be a validation of the direness of my predictions, and a clear harbinger of more virulent clashes to come. However, as is usually the case, there is more in the picture than meets the eye, and a sober assessment of the events makes me more cautious, and even a bit more optimistic. Let’s see why.

1-    What happened (are the streets burning yet?)

First let’s unpack what actually went on. The Virginia locality (population: 48,210 as of the 2010 census, and home to the University of Virginia) had announced its intention to remove a statue of Confederate general Robert E. Lee from the eponymous park (previously renamed Emancipation Park, which already tells you a lot about Charlottesville’s municipal government).

A loose network of rightist groups opposed to the measure plans a rally on Saturday, Aug 12th under the denomination “Unite the Right”. The rally is duly notified to the authorities and nominally permitted. The previous day (Friday, Aug 11th) Governor McAuliffe announces via Twitter that, although the demonstrators are protected by their constitutional right to express their views, he finds those views “abhorrent” and encourages Virginians of all persuasions to stay away from the march.

The night of the 11th, hundreds of demonstrators march through the campus of the University of Virginia carrying torches (as seen in countless photos, most seem to be oddly out-of-place tiki torches, better fit for lighting a barbecue on a patio in the ’burbs than for an exhibition of white power or whatnot…) and chanting niceties like “White lives matter”, “you will not replace us” (most journalists transcribe it as “Jews will not replace us”, as it has more sinister overtones and surely sells more papers) and “blood and soil”. Along with the torch-carrying and the chanting of white-supremacist slogans, a number of marchers can be seen unequivocally extending their arms in what can only be construed as the traditional Nazi salute. The march is widely and luridly broadcast by all major media organizations in the country.

On the morning of Saturday, Aug 12th, demonstrators start gathering around Emancipation Park, both for the planned march and to protest against it. A significant number on both sides are armed with visible weaponry and paramilitary gear that would be unimaginable in any other country but, Virginia being a permissive open-carry state, we can assume nothing was out of bounds in the USA. To the surprise of exactly nobody, given the level of publicity achieved by the rally, as many counterdemonstrators as potential demonstrators can be seen around the park, and there are a number of clashes between the two (check the NYT account: People pretending to fight - badly ). Around 11:00 AM Governor McAuliffe declares a state of emergency, revoking the authorization for the right’s march and ordering the rally’s attendees to abandon their location and disperse. It has to be noted that such a “dispersal” would force them to march through the throngs of counterdemonstrators gathered around the park, multiplying the chances for fights, clashes and brawls (check the alternative account of an avowedly alt-right attendee: What Loretta saw at C-ville ).

Clashes and widespread violence continue (but, amazingly, given the stupendous number of guns visible in every video and photo, no shootouts are reported… of the somewhere north of 30 wounded, most were punched or beaten with blunt objects, plus some pepper-sprayed), reaching their high point (in lethality) when a Dodge Challenger driven by one James Alex Fields Jr. slams into a group of counterprotesters, killing the aforementioned Ms. Heyer. Later in the day, the crash of a police helicopter monitoring the day’s developments would add two officers to the total body count.

To top off the division and shock of the nation, the President famously talked to reporters from his golf resort in Bedminster, NJ, condemning the violence, which he blamed on “many sides”, causing not only liberals and progressives, but also members of his (purportedly) own party, like Speaker Paul Ryan, to denounce him for “putting on the same moral plane” the “Nazis and anti-Nazis”. Not one to back off or publicly retract any proffered opinion, Trump would later say that a lot of “very fine people” attended the alt-right rally, and would insist on apportioning blame equally to “both sides” in remarks to reporters at Trump Tower the following Tuesday (Orange one's response ).

2-    How the media reacted (burning? Man, they are exploding! Crumbling! Sizzling!)

Before I offer my interpretation of the facts I’ve just described, I think it’s worthwhile to reflect on how the media have portrayed them, from different points of the political spectrum. We have grown accustomed to the very post-modern idea that there is no “true truth”, but just different discourses, or narratives, weaving a hermeneutical network of signifiers that denote no precise signified at all. What that somewhat obscure assertion means is that, for the few who still read news from outlets with different political alignments, it is pretty common to find entirely diverging descriptions of a single event, to the point of making it difficult to identify such descriptions as applying to the same underlying facts. In the case under consideration, the differences have been predictably magnified: for the mainstream media it has been a national disgrace, the symptom of a seriously corroded and corrupted social compact that not only allows, but apparently encourages normal, seemingly well-adjusted young men (wearing that ultimate symbol of successful integration into middle-class status: khakis and polo shirts! That is no way to dress for the fascist takeover of the state, sir! What self-respecting revolutionary would exchange his jackboots and brown shirt for such bland attire?) to publicly and unashamedly proclaim their Nazi sympathies and their scorn for every ages-old convention of what is acceptable and proper in a democracy. Nazi salutes? Check. Open proclamation of racial slurs? Check. Embrace not just of a somewhat morally tainted past (the Confederacy and the Old South) but of beyond-the-pale fringe elements (like the Ku Klux Klan, David Duke and even ol’ Adolf himself)? Double check.

Unsurprisingly, both the MSM and left-leaning circles have been having fits of apoplexy, denouncing the whole thing as furiously and unambiguously as possible, while at the same time pointing to the (at best equivocal) reaction of the White House as the indisputable proof of the racism and unabashed association with White Supremacism of not just the President and his inner circle, but the whole Republican establishment (the most commented piece along those lines is surely the one penned by Ta-Nehisi Coates in “The Atlantic”: The first white president (how much does the USA suck?) ). For them the right in general is racist, no exceptions allowed. And with fascist tendencies all along, so no surprise it resorts to threats, violence and finally murder. The sad outcome of the many clashes in Charlottesville is not an isolated incident carried out by a mentally unstable individual, but the unavoidable consequence of a noxious ideology that, left unchecked, will cause many more eruptions like that one, and many more deaths (hence, combating it by any means is the only rational and commendable action).

Fox News, the right-leaning talk-radio hosts (Limbaugh, Hannity, Savage, etc.), the Murdoch press and the abundant orthosphere, NeoReaction and alt-right sites in the Interwebz see the whole story in a very different light. Their sympathies were from the beginning clearly with the initial demonstrators, not just because they fully endorse the country’s racist past (which they more or less unabashedly do), and thus also oppose the removal of any statue of Confederate heroes over the -in their eyes- minor feature of having led a rebel army against a constitutionally legitimate government for the sacred cause of being able to keep humans of a different skin color enslaved, but because in general “uniting the right” is something they can all rally behind (sharing, all of them, a sense of dread and disgust towards what they see as an almost unstoppable tide of progressivism and leftism that constitutes an existential threat to everything they hold dear and consider sacred). For all those media, the counterdemonstrators were an unholy and ragtag alliance of everything that is wrong with America today: feminists (“feminazis”), BLM sympathizers (“race traitors”), LGBT advocates (“faggots and butches”) and in general progressives and liberals. Instead of “proud boys”, impeccably white and well-groomed, marching in their khakis and polo shirts (oddly complemented by a peculiar assortment of shields, knee pads and helmets), a bunch of blacks, short-haired girls and old hippies with questionable fashion sense, carrying bullhorns and placards that seemed plucked from some outdated documentary about the racial protests of the ’70s (but let’s not forget the mobile phones, which were mercifully absent back then… one can only wonder about the volume of uploads to Instagram, FB and the like by demonstrators of every stripe preening about their exploits, in a new, social-media-age version of the old “radical chic”).

Few have claimed that the victim among the counterdemonstrators somehow deserved it, or “had it coming”, but the narrative they weave leaves little doubt that this is how they see it. For the right-wing media the whole episode is a further illustration of the inability of the current state (seized by liberals and traitors) to protect decent citizens, from the declaration of the state of emergency (which only served to further curtail the constitutional right to free expression of the always-silenced part of the social body that does not share the left’s worldview) to the failure to protect the people gathered in Emancipation Park from the taunts and aggressions of the dangerous “Antifa” mob. Never mind that the only actual casualty was in the ranks of the supposedly aggressive, dangerous and deranged anti-American extremists who went to disrupt a perfectly peaceful and tranquil event. Again, it was all the fault of the “Cathedral”, in this case personified in the Democratic Mayor, the Democratic Governor, the mob of dangerous radicals bent on violence and mayhem grouped under the label “Antifa” and, of course, the devious mainstream media that distorted and manipulated the emotions of some young man so that he ended up committing a crime.

3-    So, all of this validates the narrative of “civil war tomorrow”, right?

Er, actually wrong. Always the contrarian, I see more positive than negative aspects to take into consideration after the events in Charlottesville. And remember, I could construe them as a validation of my predictions of a quick descent of the American polity into fractiousness and conflict (Guys, you're screwed ). But that’s not how I read it. For one, I won’t claim to be the greatest street brawler and bruiser of all time, but I’ve been involved in my share of fights (most of them involved unwise amounts of alcohol, so take my account with a pinch of salt) and I was surprised, watching the many videos of the “violence”, by how… “performative” it looked, and how little actual rage or aggression it showed. The few punches that are exchanged in front of the cameras (whose presence may be a distorting factor or, the other way round, the catalyst for all the action) resemble a limp attempt to swat a mosquito more than an actual intent to cause maximum damage whilst minimizing the puncher’s exposure.

We humans are a social species and, as every military instructor will tell you, getting normal people ready to shoot at their fellow human beings (even at considerable distance, where the feeling of common humanity can be more easily overcome) requires quite extensive reprogramming. When I was younger (actually, much, much younger) I knew my share of seedy neighborhood gyms, each of which had its crew of testosterone-addled asocial troublemakers (and yes, a disproportionate percentage among them were already “extreme right” and trained to join either the armed forces, the police or some self-styled anticommunist crusade in a Spain then not so distant from Francoism, their fathers or grandfathers having typically fought side by side with actual, honest-to-God German Nazis and Italian Fascists, the real thing and not the imagined bugbear so easily peddled in leftist fantasies). Even the most apparently psychotic among them had some difficulty overcoming the innate human revulsion towards doing harm and seeing other people suffer (although in some cases, it has to be said, they became quite good at such overcoming). I’ve seen how those guys hit, and that’s very different from what the footage by the NYT, CBS, ABC, WaPo, CNN or Fox shows. What that footage (much of it seems to be the same limited number of events shot from different points of view) depicts is a certain, limited number of posers running in front of the camera to have a go at throwing a (typically ineffective) jab at the least imposing element of the opposite tribe, and then retreating precipitously back to safety among their own numbers, having accomplished their main goal, which we can surmise was never to gain territorial control of the contested streets, but to snatch a nice graphical testimony to hang on their Snapchat or Instagram accounts.

As I was not present in the city during the events, I cannot say for sure that all the “street violence” so luridly reported by alarmed journalists was of this theatrical nature. Obviously the guy who rammed his car into the crowd, killing one and wounding many others, was not “just doing it to look badass on Instagram”, and caused real, grievous and irreversible damage. Additional people were hurt to the point of requiring medical attention (19 in the car attack and 14 in other incidents), but if what the newsreels show is any representative indication, I think Charlottesville was a hybrid between a theater and a giant playground for not-fully-grown-up kids, where self-styled radicals from left and right enacted their fantasies of being badass and rebellious, and of violently (and valiantly) opposing the unacceptable ideas of the other side.

Please note that with this first interpretation of the events I’m not trying to establish any moral equivalence between the two sides, or claiming that white supremacism, racism and even ol’ Nazism are somehow OK (or not; I really don’t buy pieties or second-hand opinions from any peddler of political correctness, and my opinions about such issues are really my own and not to be discussed here). I hope we can all agree that the “Unite the Right” organizers had considered their little show turning violent a real possibility (heck, if not, why come with all the defensive gear, the shields, the helmets and, especially, the “security details” for the most prominent figures?), and that organizing a public event knowing it will turn violent (and thus assuming people will be hurt) is at best irresponsible, and at worst outright evil. Yep, I know oppressed minorities in repressive states may be excused for resorting to violence when no other way of redressing their grievances is open to them, and for many people in the USA alt-right theirs is precisely that kind of state. We’ll get back to that contention in a moment…

But similarly irresponsible/evil is attending such an event to take part in that same violence from the other side, regardless of how virtuous your ideas are. The traditional distinction between “defensive” and “offensive” violence does not apply, as we are not dealing with people who were going about their daily lives and were suddenly presented with a bunch of aggressive fascists threatening them, but with groups of activists who travelled to the scheduled demonstration location to harass and confront the demonstrators for expressing their ideas, with the justification that such ideas are obnoxious and morally indefensible (again, I’m not yet declaring whether I concur with such a characterization). It is the embrace of violent means that constitutes a) the essence of totalitarian regimes (which define themselves by abandoning the public, pacific discussion of policy alternatives as the main way of consensus formation, resorting instead to its unilateral imposition by whatever means -that’s where violence comes in) and b) the most salient and morally repugnant of their features (an imaginary “benevolent dictatorship” that never inflicted any pain on any of its subjects would be much less evil than one that systematically did -see “enlightened despotism” as proof). Am I saying with this that Trump was right, and that we should condemn violence “from all sides”? Does such a generic condemnation mean that we indeed consider both sides “morally equivalent”? (Let’s call them, for greater clarity, fascists and anti-fascists, or white supremacists and anti-white supremacists -or would it be more accurate to call the latter “white subserviencists”?) Not to put too fine a point on it: yes and no. I do indeed oppose (and condemn) every kind of violence, regardless of how honorable the cause it defends, or how ignoble and vile the cause it attacks.
In cases of terrible oppression, when every other means of redress is closed, it could be justified to resort to causing pain (even to innocent people), always in the most limited, most circumspect manner; but those cases are few and far between, and certainly none of them obtains in modern-day America (or in modern-day Europe).

Which is not to say that, once violence is unleashed, every participant is equally to blame. Those who start it (those who hit first) are normally more to blame than those who respond to it. Those who lose their temper and escalate it (and respond to a taunt with a punch, or to a punch with a shot to the head) are more to blame than those who keep their cool and show some restraint, trying to keep it proportional and not inflict more pain than they themselves may have suffered. And yes, those who engage in it to advance a “respectable” cause (for a Kantian like me the test of respectability is pretty straightforward: acting according to a maxim one can universalize, so one would like to see it become a law of nature or, alternatively, treating other people as ends in themselves and never merely as means) are less to blame than those who defend dubious, non-universalizable, particularistic causes. Only according to the first two criteria are the white supremacists who intended to march in Charlottesville morally equivalent to the counterdemonstrators who tried to stop them; according to the third, their cause, being associated with racial segregation, a celebration of slavery and sedition (which entails a violation of the rule of law), and thus strictly non-universalizable, makes them clearly inferior to those who showed up to oppose them.

Now that that has been taken out of the way, let’s go back to why the sad and tragic events that unfolded on the 12th of August still contain a reason to rejoice: essentially, what they showed is that the US of A is much farther from a civil war than I feared, as the most vocal proponents of the dehumanization of half of the country that such a confrontation requires are a really tiny minority, unable to inflame the passions of a sizeable number of their countrymen (as of today still very, very far from reaching the critical mass needed to have any significant impact on the political, let alone military, balance of forces of the country) and willing to fold when confronted with the possibility of a real fight. During the presidential campaign I tended to disagree with many analysts on the left who dismissed the perils of a Trump victory, saying that the number of his followers who bought into white supremacist fantasies was very minor, on the order of a few thousand; but now I think they were spot on, given how easy it was for a bunch of ragtag organizations to outnumber them on very short notice. Breitbart may claim some hundreds of thousands of readers, and the Daily Stormer (now disappeared from the “public” internet) some tens of thousands, but we’ve seen that lurking in unsavory virtual places while safely seated in your parents’ basement is one thing, and hauling your ass to a demonstration with fellow extremists where said ass can be repeatedly kicked is a very different one. A lot of people seem to have signed up for the first, but precious few for the second.

And the media on the right have noticed, as the diagnosis I’ve encountered most frequently is that the “Unite the Right” rally was an unmitigated disaster, brought tons of bad publicity and has probably set the movement back a few years, if not decades. A lot of people, even those of a most conservative persuasion, still balk at being identified as Nazis, or at being associated with the Ku Klux Klan. If I were a cynic I would show some surprise at the apparent inconsistency of endorsing blatant discrimination towards certain ethnic/cultural groups (browns and blacks) while being uneasy about association with those who demonize others (Jews), as that’s where the line seems to be drawn. Sorry, but I fail to see how a Nazi is so much worse (and thus so amazingly more evil) than a super-nationalist, jingoistic hick who wants to send “people not like him” (because he considers them inferior and not fully human) back to their countries of origin, just because the former includes in the “not like him” category some people externally indistinguishable from himself (Jews) and the latter does not. And just to be clear and avoid mistakes, it is not because I sympathize with one more than the other: for the record, I consider both equally unacceptable and indefensible. However, a number of alt-right bloggers and neoReactionary thinkers seem to be happily aligned with the super-nationalist jingoist but reject being labelled as full-throated Nazis (see Mencius Moldbug, for obvious reasons).

But enough cynicism already; back to the uplifting consequences of proto-fascist thugs being routed in the streets: we can expect much less visibility from them, and that is not a bad thing. We will see similar levels of rancor, spite and foaming at the mouth between progressives and conservatives; we will see one or the other condone ugly behavior (both in words and deeds) as long as it is exhibited by someone from their tribe and causes harm to someone from the opposite one; but such ugly behavior will return to the electronic realm: the usual trolling, badmouthing, toxic name-calling and occasional banding together to be overheard in front of niche audiences (sad puppies), but no street fighting (and, hopefully, no cars ramming into the enemy’s ranks). So cheer up, Americans! It seems your simmering Second Civil War will remain a virtual conflict for a few years yet. Whether and when it becomes real (not that original an idea; see American War by Omar El Akkad -oh, I forgot, you barely read, much less a book by an Arab-sounding author) is still up to you to decide.

Friday, September 8, 2017

The days of commercial TV are numbered

Only nobody knows what that number of remaining days is, or even whether it is very high (say, we still have 100,000 days of commercial, open television left -that would be 274 years, far longer than the time it’s already been around, since the first broadcasts in the ’50s of the last century). That’s the problem with prognostication in the social realm: nobody really has much of a clue about how technology will evolve (as Popper famously quipped in The Poverty of Historicism, if we knew exactly what would be invented in the future, we could just as well invent it right away!) and even less about how such technologies will interact with the underlying social forces to shape the development of the collectives that embrace them.

In a certain sense, then, the title of today’s post is a bit misleading (no surprise; I didn’t exactly invent clickbait, and as the author of one of the world’s least read blogs I wouldn’t readily confess to indulging much in the practice), but as usual, in a tortuous and circuitous way, I still believe it may help illuminate some tendencies in our world that are worth paying attention to.

What put me on the track of this line of thought this time was a comment by my elder son, to the effect that neither he nor any of his friends or acquaintances watched live TV any more (they just downloaded or streamed the shows they were interested in… admittedly, my son’s friends are the nerdy type who don’t have much use for sports broadcasts, where there is indeed a premium on immediacy). It reminded me of an internal report, back in my consultant days, announcing the imminent demise of TV as we knew it because of the arrival of a gadget that would revolutionize the way people consumed audiovisual entertainment: the TiVo box (for those who are not familiar with the contraption: essentially a digital VCR with a friendlier user interface that basically any moron could use, which would allow people to decouple the viewing experience from the time when the originating network chose to broadcast it and, more importantly, would allow viewers to skip the advertising, thus depriving the content producers of revenue in the long run… guess what? Sending users bundles of just ads, with no annoying programs interrupting them, ended up being a very popular feature of the service).

That was in the second half of the ’90s, so with 20 years of hindsight we may agree that the announcement of the imminent demise was a tad premature. The same may be said, in another 20 years’ time, of my son’s impression, but now as then it got me thinking about the money flux that keeps TV going and how such flux may be diverted or weakened, with potentially huge social implications. Because the conventional wisdom states that TV is the most powerful instrument of social control ever devised. In my own terms, the great conveyor belt for infusing each society’s dominant reason into the unsuspecting brains of its citizens. We could have twenty philosophers as brilliant as Kant plus Aristotle plus Stuart Mill (to reflect each major moral tradition’s sensibility) publishing their most persuasive works tomorrow and, frankly, nobody would so much as yawn if they couldn’t advertise them on the telly (not by themselves, mind you, unless they were outrageously good-looking and able to spice their communication with some raunchy personal stories; such are the nature and demands of the medium). Indeed, statistically, it is almost certain that we have striding the Earth among us, at this very moment, some thinkers of comparable stature (if only because the number of people pursuing speculative thought full time in the countless faculties of the modern world is much, much bigger than the total number of people who were able to pursue such endeavors during the whole previous history of our species), without anybody noticing, or being able to take any advantage of it (the only explanation for such a glaring loss is that such geniuses are most likely either old, ugly, boring or all of the above, so there is no way they could draw an audience primed to value flashier attributes).

Notice that I said “the conventional wisdom”, so it is appropriate to consider whether, in this case as in so many others, such wisdom may indeed be woefully wrong. Before we can answer that question, let us get back for a moment to how such a powerful institution, supposedly in charge of shaping much of the public’s sensibility, of their conception of what is ugly and what is nice, what is right and what is wrong -the moral educator of the masses- could sustain itself. Which takes us right into the murky realm of advertising. It is commonly said that TV “sold” entertainment to the audience, a cheap, easy way to pass their time that proved to be almost unbeatable (want proof? The average human being spends almost two hours a day watching TV, which means that once you subtract the time for earning a living, commuting, doing the household chores and sleeping, that’s essentially all they do). Bollocks, as until relatively recently almost nobody “paid” for being entertained (we’ll get to how that is changing in a moment). TV spread like wildfire because it actually sold something much more valuable: its audience’s attention, offered to manufacturers who could use that attention to convince viewers that their products were superior to those of the competition (it didn’t matter at all whether such claims were true or false).

Seen from outside, it is a pretty silly proposition: pay me to insert whatever flashy message you want (whose production you have to fund separately, of course) between my regular programming, so you can use such message to convince “my” audience of the superiority of your product. If such a scheme works, you will be able to sell more units, and ask for a higher price, which would supposedly more than cover the costs of producing the message and paying me for showing it to as many people as possible. But what if your competitors adopt the same strategy? The whole market may end up at a higher price level, with consumers funding both the Coca-Cola Company and PepsiCo by paying more for their sodas so they can “enjoy” having their favorite shows interrupted by a barrage of ads from both, extolling the supposedly greater virtues of their wares over those of the other. Which is a decidedly inferior equilibrium to the one in which there is no advertising, the sodas are cheaper, and nobody’s show is interrupted by flashy ads that are known to have only the most tenuous relationship with truth.
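The logic described above is, of course, the classic prisoner’s dilemma, and a minimal sketch makes the trap explicit (the payoff numbers here are entirely invented for illustration; nothing about actual soda margins is implied):

```python
# Prisoner's-dilemma sketch of the advertising arms race between two soda
# makers. Payoff numbers are made up purely for illustration.
# Each entry maps (choice_A, choice_B) -> (profit_A, profit_B).
payoffs = {
    ("no_ads", "no_ads"): (10, 10),  # cheap sodas, uninterrupted shows
    ("no_ads", "ads"):    (4, 12),   # B steals market share from A
    ("ads", "no_ads"):    (12, 4),   # A steals market share from B
    ("ads", "ads"):       (7, 7),    # both pay for ads, shares unchanged
}

def best_response(player, rival_choice):
    """Return the strategy maximizing this player's payoff, given the rival's choice."""
    if player == "A":
        return max(["no_ads", "ads"], key=lambda s: payoffs[(s, rival_choice)][0])
    return max(["no_ads", "ads"], key=lambda s: payoffs[(rival_choice, s)][1])

# Whatever the rival does, advertising is the better reply...
assert best_response("A", "no_ads") == "ads"
assert best_response("A", "ads") == "ads"
assert best_response("B", "no_ads") == "ads"
# ...so (ads, ads) is the stable outcome, even though it pays each firm
# less than the mutual (no_ads, no_ads) outcome nobody can unilaterally reach.
```

With these (invented) numbers, advertising strictly dominates for each firm individually, which is exactly why the market settles into the inferior equilibrium the paragraph describes: neither firm can unilaterally stop advertising without losing share to the other.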

When you add the fact that a significant portion of the advertising that has historically kept broadcasting afloat is not just a zero-sum game, but has actively promoted deleterious products and practices (just to name a few: tobacco, alcohol and sugary drinks, which collectively account for a staggering number of premature deaths in the advanced economies over the last decades), you really have to wonder how it is possible that we collectively allowed such insanity to proceed apace, and how it is possible that any attempt at changing it (leveraged by a raft of technological “game-changers” that always change much less than expected, beyond the financial situation of some of their promoters, that is) tends to meet with at most very limited success. I think the explanation comes from an unholy intersection of an unfortunate feature of human nature with the current mode of development of our overarching social structure (for lack of a better word I’ll stick to the term “capitalism”, which I’ve tried unsuccessfully to qualify as “digital”, “post-industrial”, “advanced” or even “desiderative”, without ever settling on any of them as clearer or more illuminating than the rest). Such intersection, when analyzed, makes me fear that open television, funded by advertising and uncritically watched by enormous audiences, will still be with us for many years to come.

Starting with human nature, and without having to go all Maslow-y here, people do indeed need to be entertained (as easily and effortlessly as possible), and after their other basic needs have been satisfied something to fill the endless hours gets a pretty high priority in their scale. The fact that a complete history of leisure still has to be written is revealing, as my hunch is that for most of humanity’s existence there simply was no such thing. People mostly herded cattle and cultivated the fields, or collected wild berries and tubers, sought suitable partners, wooed them, raised a family and took care of them until they croaked. A tiny few were lucky enough not to have to devote 100% of their time to that, and demanded to be entertained by others instead, but they were such a minuscule proportion as to be entirely irrelevant for the ordering of society, although the tales of their exploits have survived the passing of time much better than the relatively traceless remnants of the majority, and thus they occupy a fraction of our image of past ages much greater than is warranted. Only after the agrarian revolution that preceded the Industrial Revolution in Europe did our species face the prospect of what we may term “mass leisure”, that is, the existence of significant numbers of modestly well-off citizens with time on their hands they could devote to pretty much whatever they wanted, their basic necessities (food, clothing, a suitable dwelling and child-rearing) having been taken care of.

Interestingly, we start to have a better grasp of how people passed their recently (in historical terms) acquired “free” time thanks to the concomitant appearance of the modern novel, a form of narration in which common folk are suddenly deemed worthy of attention, and in which the inner lives of said folk are minutely traced (or may we say invented? But let us not get too post-modern here) by the authors, which requires presenting how they spend their days. And what we grasp from them is that women did manual labors (mainly sewing, well past the point when, thanks to mass retailing, it stopped making economic sense to do so), lorded over a dwindling supply of home servants and paid visits to one another (visits that, when received, demanded stupendous efforts from said servants to keep the house in pristine condition and prepare the complicated foodstuffs that back then played the part of an expensive car and wall-to-wall plasma TV). Men kept themselves busy feigning to work at almost all times (not that different from today), having a drink at public houses (which, then as now, didn’t require the patrons to speak more than a few words if so inclined) and, when confined to the house due to bad weather, played cards and read the newspaper. They might even, in some scarce cases, read some works of speculative thought. It was also assumed that they needed an outlet for their “bodily passions” (which no sane and healthy man could entirely satisfy with his wife), requiring either the maintenance of a full-time mistress or frequent visits to houses of ill repute that would still consume a good deal of their free time.

Problem was, due to automation, feigning to work more than 80 hours a week was starting to get difficult (we are talking about the beginning of the XX century here, folks, so do not think of humanoid robots taking the dentists’ jobs yet). The temperance movement made spending many hours a day in the public houses (and the houses of ill repute) less and less admissible, and there is only so long a man can play cards without wanting to assassinate his playing partners for a change. So, in a possible instance of the old Marxian dictum that society only sets itself problems it can solve, in came the radio and the birth of mass “culture”, soon to be followed by TV.

Rich societies adopted both radio and TV at a speed that had no parallel in human history (except if you forget the internal combustion engine automobile, which in the less densely populated USA went in a decade from being owned by 0.1% of families to being owned by 99% of them, quite a bit faster than mobile telephony or the internet, regardless of what today’s techno-utopians like to claim). And this is where the second dimension I mentioned before comes in handy (the first dimension, remember, was the brute fact of the human need to be entertained and receive help passing the time once basic necessities have been satisfied): some social arrangements are more vulnerable to the threat of massive boredom than others. If a group has agreed that the ultimate goal of life is to excel among your peers in some public pursuit (be it poetry, philosophizing or charioteering -that would be my current understanding of the keystone of dominant reason in classical Greece, btw) it provides enough incentives for every one of its members to cultivate certain (socially valued) abilities that will be meaningful enough for them to devote most if not all of their “free” time to. If a society mostly agrees that the only valid end of human life is to prepare for the afterlife according to the precepts laid out by a barely-literate people more than twenty centuries ago (as most of Europe did between the fall of Rome and the demise of baroque reason in the XVIII century), again, its members will find better uses for their time (praying, fasting, renouncing, or whatever activity makes the attainment of such afterlife more likely) than to idly sit watching images rolled in front of them (especially if they can indirectly decide what kind of images they are presented with, and witness how they collectively decide for the ones celebrating lust, sloth, greed and wrath, which the aforementioned book tends to condemn as the most despicable vices).

But if a collective has settled on our current desiderative reason, and considers that a) the ultimate end of life is to satisfy desire (or alternatively, to experience the maximum amount of pleasure over pain in a lifespan); b) every desire is but the expression of a single desire: to show others that you are better socially considered than they are; and c) the only meaningful, socially sanctioned marker of social consideration is having unfettered access to tons and tons of goodies, then once you have exhausted your means of ensuring your access to said goodies (i.e. once you have worked your butt off to have as much money as possible given your circumstances) there isn’t much you can do with the rest of your waking time. You may just eliminate such rest, and devote absolutely 100% of your energies, your every waking hour, to work, and to the undistracted pursuit of more money, but even our ultra-materialistic, ultra-consumerist society seems to have learned that there are limits it is better to impose on individuals (as we already did with the pursuit of other pleasures, from drinking -and in general using any drug- to boning, which beyond certain limits are loudly and universally frowned upon). Limits that are in some places (Silicon Valley, where a 9 to 5 work schedule is considered to reveal “loser” status, or the financial community) crumbling and actively contested, but globally our society still thinks that people should do “something” apart from work.

How could it be otherwise? If people only worked, who (or when) would consume the fruits of such super-intensive labor? And if such fruits (abstract and intangible as they may be) are not to be consumed, or fought for, what is the point of producing them in the first place? Note that the need for consumption commensurate with production is a necessary corollary of the final success of our current dominant reason: once there are no alternative societies with which to vie for supremacy, the system must either find a super-hyped-up, super-evil “other” to justify the growing sacrifices demanded of its population (even more so in a scenario where people are pushed to 100% production, with almost no time to enjoy the goods produced themselves, which would be mostly diverted to the military-industrial complex) or attain a steady-state equilibrium (acceptance of lack of growth, lack of opportunities for professional advancement for the ambitious young, and likely lack of technological advances and productivity growth due to reduced incentives for them).

Which is really the world we are living in, with both developmental paths open and in an uneasy balance, as we collectively seem unsure about which one to pursue, so we waver between the two. Some societies seem more committed to the military-industrial path, casting around in search of an external enemy strong enough to justify the maintenance of a “weaponized Keynesianism” that can keep the system in a state of perpetual growth to prepare for an ever-growing menace (surely the USA is the country in which such tendency is most marked, and where it has worked best – the other countries where you can see it in play are places like Cuba, Venezuela, North Korea, Iran… and China may be the other moderately successful economy toying with the idea of following the nationalistic-militaristic path). Others seem more resigned to the steady state and the accompanying stagnation and likely deflation (Japan and the EU are the poster boys, with much of South East Asia soon to follow). What the little technological development that remains ensures is that, given the current levels of productivity, there won’t be much work to go around, regardless of the path chosen. Just as there isn’t now, hence the prevalence of bullshit jobs, make-work, high unemployment and low participation levels in the workforce.

What all these societies have in common is the embrace of a dominant reason that precludes any pursuit outside of working more to earn more money and thus be recognized as socially superior. Not just one that “undervalues” such pursuits, or “gives less weight” to them. For our current dominant reason any life project outside of the crass materialism outlined above is an existential threat, and because of that it is both unintelligible (it cannot be “understood”, or compared with the alternative it presents and weighed against it) and dangerous. But let us take stock for a moment of the kind of life projects and interests that have no place in the grand narrative we have collectively settled for, through some examples:

·         identifying some transcendent truth and devoting ourselves to it

·         choosing a field with a well-recognized set of external criteria of excellence (what MacIntyre called a “practice”) and trying to be as good as possible in it, even if it doesn’t earn you any money

·         caring for other people altruistically (not “reciprocally altruistically”, but for the sheer joy of helping flourish and prosper those we deeply care about, regardless of being paid or even recognized)

I’m not saying that the activities derived from those pursuits are somehow superior or nobler or in any sense “better” than the ones our dominant reason would rather direct us to engage in. Aw, what the heck! Of course I’m sayin’ that! Because what, at the end of the day, is it that desiderative reason beseeches us to do? Watch friggin’ TV so we get more brainwashed into desiring more stuff, which will in turn make us want to work more, or go deeper into debt, but will help keep the whole system spinnin’… Whoa! Some life program, isn’t it? But for some time (was that the secret allure of the 60’s, so difficult to understand from today’s perspective?) it seemed like people were waking up to the emptiness of a value system that wasn’t yet as hegemonic as it is today (what the Marxist critique would call its “internal contradictions”). A system that forbade any “meaningful” activity outside of working for its cancerous, suicidal, perpetual augmentation (regardless of how close it got to running up against nature’s limits through overpopulation and the exhaustion of non-renewable resources) was at the end of the day a system nobody would want to live in. If the only leisure it could abet was watching a shabby cathode-ray tube showing endless commercials barely interrupted by snippets of shoddy storytelling, it is no surprise people started flirting with “alternative” lifestyles and different value systems that allowed for more time to be devoted to activities perceived as more “meaningful”, that required a higher level of engagement, that allowed people to find a fulfillment, a contentment, that passively watching the tube could not match.

But, alas! The system reacted… there are multiple narrative threads that may contribute to an explanation of how, from that momentary weakness, the then-dominant reason, instead of evolving and adapting and changing gears, just doubled down and succeeded in becoming hegemonic over the whole globe: with the development of identity politics the powers-that-be played different segments of the populace against one another, with the result that each embraced the overarching value system even more fiercely; the main alternative to dominant reason (embodied in capitalist society) was even more exhausted (it was, after all, communism, the embodiment of bureaucratic reason, an older version of Western values); and, last but not least, entertainment technology simply got much better, and fused itself with the dominant ideology more strongly and more subtly, which leads directly to our own days’ flat-screen HD TVs, the internet, mobile telephony, social networks, the golden age of TV shows and, of course, videogames and, any time soon, Virtual Reality.

Which leads me back to the original thread of this post: “commercial TV”, understood as TV produced by enormous corporations (the “networks”, and most likely the traditional ones, as they are the institutions both well connected to the legislative power and possessed of the knowledge of how content is produced and distributed and of what customers are willing to spend time watching… see the mostly failed efforts of big telcos to gain a significant foothold in that turf for well over two decades), distributed for free (although some “carriers” -satellite and fiber- will be able to wring some revenue from taking some premium-quality signal to a substantial number of homes that pay for only a fraction of the distribution costs), will still be with us for the foreseeable future. It is just too valuable for the maintenance of the whole civilizational compact to be easily replaced.

Thursday, August 24, 2017

An unexpected use of philosophy (how to speak six languages… and counting)

It is frequently argued that we live in distinctly unphilosophical times: our attention spans have been shortened by the deluge of messages we receive 24 hours a day through our hyper-connected existences, making it difficult to engage with the long (like really, really long!) texts that dominate the field. Having been habituated to almost instant gratification in most areas of life (from super tasty food to exercise “regimes” where electro-stimulation supposedly does all the hard work so you don’t have to exert yourself too much) we disdain any discipline that requires many years of patient study to bear its fruits. Being reared in a dominantly audiovisual culture, we find it alien and uncomfortable to have to resort to that quaint contraption, a written text, to find our bearings (I suppose you can find the Nicomachean Ethics in audiobook format, or have Siri read it to you if you are too lazy to do it yourself, but I wonder how much mileage you may extract from that).

But the main problem seems to be the utilitarian, pragmatic nature of our era, in which time has so many alternative uses (the opportunity costs have grown so big) that we expect an immediate, measurable payback from every activity we engage in, and in that department the study of philosophy comes up woefully short: you devote countless hours to reading abstruse books, and what do you have to show for it? A hunch that being… is? The revolutionary idea that you shouldn’t do unto others what you don’t want done unto you? The nagging suspicion that what you thought you knew is a social construction, so you cannot be entirely sure of anything anymore? I can understand the potential disappointment of parents who have invested a few thousand quid in the education of their progeny (sorry mates, been in the UK these days, picked up some lingo, ya’ know) if they see those answers coming from their kid’s mouth after a few years of grad school on their dime.

I will not enter into a more nuanced discussion of the truth of the previous assertion, as I’m not entirely convinced that these times are indeed more materialist, or more obsessed with the practicality of the endeavors they celebrate, than other times in our history (somehow, I can’t see wealthy parents in classical Greece being any less disappointed with their children coming out of Socrates’ school and spouting some gibberish not so distinct from the one I just spoofed… well, that probably explains how good ol’ Socrates ended), but I leave such discussion to Pitirim Sorokin and the like (the latest iteration of the like being Peter Turchin and Sergey Nefedov, whose Secular Cycles I read this summer, and which, although it mainly deals with the correlations between the material aspects of agricultural societies, quite nicely dovetails with my own construction of socioeconomic dimensions that in turn determine the evolution of the ideological ones… like the amount of idealism vs. materialism prevailing in a given society at a given time; but I digress). The previous unusually long-winded sentence (even for my admittedly ultra-lax standards) was my way of saying that however our own epoch compares with past times regarding its distaste for abstract thinking, that would be immaterial to the argument that it is distinctly unwelcoming of the pursuit of philosophy.

A symptom of such lack of congeniality would be the continuous and seemingly unstoppable diminution of the time devoted to the study of the topic in most countries’ school curricula. A diminution, in turn, systematically and frequently decried by any self-respecting member of the cultured classes and the ruling intelligentsia. Every review of what we should teach the poor kids in the last half century has seen a further extension of what could roughly be called STEM disciplines, at the expense of the humanities, and typically philosophy (or the history of Western thought and its correlates) has been first in line for seeing the weekly hours devoted to it reduced, and the number of courses in which it is taught at all further shrunk. I can understand that the philosophy professors at every level protest and complain about such societal choices: the work prospects of the kids’ teachers-to-be are significantly curtailed, and as forming teachers-to-be is the main purpose of the university departments of philosophy, the mighty professors join the chorus and decry the short-sightedness of the politicians who are contributing to the nurturing of future generations of uncritical asses, oblivious to the greatness of their own cultural tradition, which counts among its biggest contributors the revered figures (Plato! Aristotle! Kant! Hegel! And let’s rather stop there, as when we reach more recent ages the subject becomes irredeemably ideological and murky) that poor little children will have to grow up in abject ignorance of.

As bashing politicians is a nice and almost risk-free activity, most self-appointed “public intellectuals” also join enthusiastically in the denunciation, but in another sign of their own diminishing relevance, as far as I know they have been pathetically ineffective at stemming the tide, and in one country after another (Germany, the UK, France, the USA, to name but a few) the curriculum has been more and more lightened of “classic” humanities, substituted by some easy science and math, some pop sociology and watered-down history and, in some cases, even some more modern-sounding alternatives which are truly the same stuff repackaged (so the International Baccalaureate program does not offer philosophy as an option, but it does offer “theory of knowledge”, which is but the old epistemology renamed so as to be consumed without alarming the picky parents who mostly want their boys and girls to be well prepared to pursue successful careers in economics at prestigious international universities, and mostly do not have the slightest interest in them being exposed to old, mushy stuff with Greek names).
As regular readers well know, joining any kind of tribe or clique or group is not my cup of tea, so far be it from my intention to extol the virtues of old-fashioned philosophy as it is taught both at school and university level. Rather, I’ll play the contrarian again and declare that most of the defenses I’ve seen in all these years leave me pretty cold. Before moving to the main subject of this post, I’ll briefly recapitulate them (and spice them with some critique of my own, as critiquing things is one of my most well-known weaknesses):

·         The teaching of philosophy helps children (or the young) to develop critical thinking abilities, and is fundamental to them becoming well-rounded citizens, able to rationally judge the actions of their government and thus collaborate in the construction of a more perfect polity. Let us leave aside for a moment the fact that 99% of the thinkers included in the typical philosophy course were a bunch of scoundrels who devoted their best efforts to justifying the society of their own times, presenting it as the pinnacle of human perfection and the most conducive to the happiness of its members. I just can’t see what social interest (or individual interest, for what it’s worth) is served by enhancing the citizenry’s capability for criticizing and judging harshly. That would make sense if we conceded that the current social arrangement was somehow “wrong” and urgently needed to be amended, which in theory is what the critiquing ability should establish in the first place…

·         The teaching of philosophy is distinctly “useful”, as in a “knowledge economy” our reasoning ability and our capacity to manipulate symbols are more important than our ability to perform mechanical, repetitive tasks (as such tasks will be performed, supposedly, by robots or algorithms any day now). Such an argument probably presupposes that such ability and capacity are best trained by the study of what (mostly) dead white males wrote a bunch of centuries ago, rather than (say) by actually manipulating symbols (what mathematics teaches) or learning and practicing the rules for communicating them (linguistics and presumably literature). I’ll just dwell for a moment on the incompatibility of this argument with the previous one. If philosophy succeeds in developing the critical faculties of the alumni, they may very well realize that the definition of a life well lived that society is presenting them with (earn as much money as possible so you can rise in a social hierarchy determined exclusively by how much of it you possess) is not that attractive to begin with, so excelling at it (by mastering a “useful” ability) is meaningless.

·         The teaching of philosophy is necessary so we can have a sufficiently big pool from which to draw great thinkers who help us understand the human condition, as it is particularly expressed in our peculiar historical circumstances, and who can offer wise guidance on how best to deal with the hand we have been dealt. As much as I would love, really, really love to sympathize with this one, it reminds me of the old definition Einstein gave of insanity (doing the same thing over and over again but somehow expecting to obtain a different result). It is not as if in the last half century the old system has been very successful at producing powerful, original, persuasive thinkers who have helped steer society away from its current, destructive, dominant reason. So I’ve just given up any hope that the current educational establishment, which has been tasked with processing a raw amount of potential talent unrivalled in all of our species’ history (if only because the total number of pupils sent to school and higher education has been the greatest ever, plus, thanks to the Flynn effect and the universal spread of literacy, those pupils have probably been the brightest ones ever), may be able to somehow extract from that enormous pool even a tiny fraction of the intellectual daring and verve and sheer energy you could find in a tiny (by modern standards) town in Attica 2,500 years ago (What made Athens so great?)

In summary, my thought until very recently was that philosophy is almost entirely useless, both from the individual perspective and from the societal one. Just to make things clear, that doesn’t mean it lacks any value. Rather the opposite: it is precisely because it has no use that its value is not only unmeasurable, but also extremely high. So high, indeed, that I would concur with Socrates that the non-reflective life is not worth living, and to properly reflect on it (i.e. to be able to have a life worth living) it is absolutely essential to have a quite extensive exposure to philosophy. But, but… such exposure doesn’t necessarily have to happen at the earliest age, and almost certainly there are better ways to make it fruitful than to have it unenthusiastically explained to you by a poorly paid, unmotivated teacher who would like to do almost anything other than drone on about Aristotle’s works in front of a class of uninterested children. Which is an again probably unnecessarily long and circuitous way of saying that I’m perfectly OK with philosophy being entirely taken out of education, and all the faculties and schools of philosophy being closed (remember I already proposed the same fate, albeit for entirely different reasons, for the schools of Economics: Modest proposal), and its learning, its research and its development being entirely left in the hands of private citizens so inclined, unaffiliated with any formal institution.
But, for the sake of honesty, I have to confess to my readers that I’ve recently found a (totally unexpected, as I openly declared in the title) practical, utilitarian, honest-to-God useful application of the study of philosophy, which somehow shames me, as it tarnishes the selflessness and just-doin’-it-for-the-heck-of-it shtick of the whole endeavor. It is an application that helps explain why so many noted philosophers were polymaths, and spoke a multitude of languages: what philosophy really enables you to do is read unbelievable amounts of crap, and retain a substantial portion of it. Let me explain: any student of humanistic disciplines will develop good, sophisticated reading skills. In fields like history, law, or literature you have to read many, many pages, but they normally… make sense. Not so in philosophy, where you may discover long after the fact that whole tracts you have duly digested and mulled over for a good long time (and that you considered very thoughtful and deep and profound at the time of reading them) don’t really make any sense at all.

Now I recognize that this may sound a tad extreme, a bit nonchalant and surprising (although I claim no originality, this line of thought was awakened in me by a reporter in the Spanish journal “El País”), so I’ll expand a bit, with a dab of personal experience. I’ve always been quite a bookish person, and I read more than my share of history and philosophy when young, but after my (first) university career, what with finding my bearings in a very demanding job and founding a family, I really eased off for some years. I tried to sneak in one or two “serious” books every now and then, but it was difficult to give them the attention they required, so although I always had six or seven books I was reading more or less simultaneously (poetry, fiction, politics, history, some social commentary… what I’ve never been a fan of are “management” and “self-help” books, which then as now I avoided like the plague), the more abstruse ones could sit on the table beside my bed for months on end until I found the willpower to have a go at them.

However, the tug of devoting more time to philosophical pursuits was definitely there; the impetus to put my thoughts on paper to clarify them and sharpen them and see where they took me never entirely dried up, and the satisfaction every time I managed to finish a more philosophically bent text was so marked (the mental equivalent of finishing a grueling workout… you feel you are a better person for having put yourself through such an ordeal and having survived) that I never entirely stopped buying that kind of book. I remember having a breakthrough (though I would only recognize it as such many years later) reading the Critique of Cynical Reason by Peter Sloterdijk: I was working something like 100 hours a week, my wife and very young kid had just moved with me to Brasilia (where we knew absolutely nobody) and all the strength I could muster allowed me to read something like ten minutes per day, just before going to sleep, normally at about 1:00 or 2:00 AM, knowing I had to wake up in 5 or 6 hours to go back to work, Saturdays and Sundays included. You may guess I was not in top intellectual form, and my eyes typically glazed over by the second or third word (I had originally written “by the second or third sentence”, but Sloterdijk’s style is not that different from my own, and his sentences may go on for one or two pages, so it was not uncommon for me not to be able to finish a single one). I usually fell asleep in the middle of long-winded, convoluted and highly self-referential paragraphs, and if the next morning you had asked me to summarize what I had read the night before, I would have been grossly unable to give even a hint of an explanation. So I wondered why on God’s green earth I was putting myself through such misery, and if there could not be some less absurd way of passing my precious few leisure minutes.
I can’t say I had an epiphany, or that I came to some sort of redemptive answer; I just trundled along until I finished the danged book, after which I picked another one of a similar tone…

If you are waiting for some clear-cut moral I’m sorry to disappoint (but bear with me a bit longer -although certainly much less than a full-Sloterdijk length, do not panic!- and I still think you may draw some useful lesson). The thing is, by dutifully, stubbornly, joylessly reading almost unintelligible books (without expecting to obtain any utility from them), I slowly developed an ability to, wait for this… read almost unintelligible books. Indeed, I ended up enjoying them so much that they became almost my only source of reading material. I abandoned my old habit of reading six or seven books simultaneously. I started one and didn’t start the next until I had finished it. And finished meant finished: footnotes, endnotes, bibliography (I still spare myself the index when there is one, though). I allowed myself to order a new batch from Amazon only when I cracked the spine and started the first page of the last copy of the previous batch. Now, I know what some of you are thinking: “nothing new here. Pierre Bourdieu already described it. The poor guy was just accruing tons of symbolic power which he could then translate into an enhanced status within his social milieu. The old trick of acquiring high culture as a surefire status-marker”. Bollocks. I was working in a consulting firm, so the amount of status I got from telling my peers I had read Hegel was exactly zilch. It is not just that they could care less (they could not), it’s that such a conceit was actively frowned upon, as the time devoted to such a patently fruitless pursuit was time detracted from more useful applications, like playing golf with would-be clients or, if some reading had to be involved, memorizing the WSJ, “Forbes” and, for some extra intellectual stimulation, Who Moved My Cheese?

But again, it was strangely fulfilling to be able to read what I knew nobody else around me would be able to read, and to see how it more or less helped create a complex, sophisticated, reasonably coherent understanding of what this thing called life was about in a “deep” sense. I know a lot of people don’t need to read much philosophy to reach that point; just a chapter by Paulo Coelho takes them there. Good for them (for the record, I hate Paulo Coelho, but that is beside our current argument). However, what I was able to extract from so much idiosyncratic reading is beside the point; what I discovered is that reading without much understanding, without much seeing the point of it (without caring if there is a point to it after all), is in itself a skill that, through constant practice, can be improved. And man, did I practice and improve it! OK, so at this point you may legitimately wonder: so what? So philosophy is likely to help develop the skill to read boring books that nobody gives a rat’s ass about… how is that supposed to be useful at all? We’ll get to it in a moment.

But before that I have to continue with my little story. Just for funsies I decided that, knowing so much philosophy, I might as well get some official recognition for it, and I embarked on a doctoral program to obtain a PhD. I thought I could be done with the mandatory credits in a year, and have a dissertation written in two more. It finally took me seven in all, which was a superb enticement to amp up my reading habits even more, as suddenly all that seemingly pointless exposure to the thoughts of long-dead guys might have a use after all (to be regurgitated in a document -the PhD dissertation- that most likely not even the members of the tribunal tasked with judging it would read in its entirety). So the dissertation was written, defended and lauded, and it was all great fun. After having completed that milestone I wanted to do other things, and something that weighed on me was having to rely on translations for so many of the works I was interested in. So, having a bit more time, learning additional languages seemed a sensible investment, and that’s what I did. At the beginning of 2016 I could fluently speak Spanish, English and Portuguese, and had the joyful experience of having devoted all of four weeks to learning French (plus reading the collected works of Saint-John Perse in a bilingual edition), so French seemed a reasonably easy place to start expanding my language base.

So I just started buying books in French (heeeeello Amazon.fr! a new compulsive client has arrived!) and reading them. The first ones were somewhat disappointing, as I didn’t understand much. Just like with Sloterdijk’s Critique back in Brazil, I told myself. I googled the conjugations of 20-30 common verbs, plus some adverbs and prepositions (so I could get an inkling of how the elements of meaning I could glean connected with one another) and I just trundled along. After having read 10 books it all got easier and clearer (let’s say I felt I understood 85-90% of what the writer was trying to convey). After 20 books (that’s more or less where I’m at now) it really doesn’t make a difference whether the book I’m reading is in English, Spanish, Portuguese or French, and it doesn’t matter how abstruse or abstract it is (I just finished L’art de penser, a treatise on logic by the XVIIth-century Antoine Arnauld, and am currently reading Matière et mémoire by Bergson… both a piece of cake). Seeing how easy it was to “learn” French (I still have to practice writing it to consider myself truly tetralingual, but I’ll get there soon) I started applying the same approach to Italian in January this year. With similar results, although the learning curve was even faster (Italian is really very close to Spanish, and the few words that differ more are normally very similar to a French one, so you can pick it up really fast when you already know another three Romance languages). Almost pentalingual, then.
Time, then, to push it a little further and go for a non-Romance tongue. German was the almost unavoidable option, as I have always wanted to master the German language to be able to read Kant’s works in their original form. So I’ve started reading in German, and am pretty stoked by how easy it seems to pick up.

Only a couple of books so far, and I’ve tweaked my method a bit: with French and Italian I focused on reading philosophical works, and that has created certain lacunae in my vocabulary. I can draw on an ample panoply of words to describe mental events (memories, opinions, intentions, emotions and the like) but I’m at a loss if I have to describe everyday objects like clothes, furniture, foodstuffs and so on. With Romance languages that is not an insurmountable problem, as the words are normally very similar to their Spanish equivalents, but with German that’s not the case. So instead of jumping directly into philosophy (and man, do they have a rich philosophical tradition to draw from!) I’ll read some fiction, starting with children’s books (I’ve found that many of the books I read in translation as a kid were by German authors, so I am delighted to revisit them in their original form), then moving up to YA, and finally some hefty XIXth- and XXth-century classics (Hesse, Mann, Grass -all of whom I’ve read in translation- but also Döblin, Kraus and Musil, and Hölderlin, Rilke and Trakl). I intend to spend a little more time consolidating my acquisition of German (2-3 years, and about 50-60 books read, before I finally order Kant’s Gesammelte Werke… the Akademie edition, if I can, although I’ll need to sell a kidney and an eye to afford it), but after that the sky is the limit (my current plan: who cares about living languages? The next three are Latin, Classical Greek and Hebrew).

So there you are: without the ability to read through books I barely understood (but kept on reading anyway, retaining more of them than what initially meets the eye) I’m sure I wouldn’t have been able to apply this idiosyncratic method of learning, and this is the only method, given my time commitments, that I can afford. And I owe that ability entirely to my training in philosophy, as I’m also pretty sure there isn’t any other subject that forces its practitioners to go through such unappetizing reads. But, alas! That is not what the current academic climate prepares you for. The way philosophy is currently taught, it has to be pre-masticated, pre-digested so it is “easy” for the poor fellas who have to learn it. It has to be presented excitingly and energetically so it seems attractive and “fun”. Well, you know what? I don’t think the Nicomachean Ethics can ever be made exciting, or fun; ditto for the Critique of Pure Reason, as much as I have enjoyed reading both (and for what it’s worth, if I were asked whether I would rather read the Critique for the first time again or relive the most pleasurable orgasm I’ve ever experienced, I would go with the Critique without hesitation, but please don’t tell my wife). That’s why most standard arguments for teaching philosophy ring hollow to me, as the single real benefit I can see coming from the conscientious pursuit of the field can only come from doing it in isolation, and requires a certain level of seclusion that the current aversion to demanding effort from students can only curtail, rather than incentivize.

And yes, I know in a few years we will have AIs translating any text in any language online, directly in our ears, so learning another language is a big waste of time and effort (but if you believe that is really going to happen, and that the jumbled translations of Google, as much as they may improve, somehow compare with understanding a foreign text or foreign speech yourself, I still have my proverbial bridge in Brooklyn which I’m willing to sell).

Friday, July 28, 2017

Organizational justice III

[This may be the third -and last, I swear- part of a paper to be submitted to a conference at the Jesuit University Ignatianum in Krakow with the same title, or may not, depending on how happy I am with the final result… it is, however, an issue I have been wanting to tackle for some time now but had postponed so far to deal with lighter matters. You may find the first and second parts here: OJ I and OJ II ]

In my previous post on this subject I sketched what a framework for justice within an organization might look like, based on the twin Kantian precepts of:

a)       granting dignity and recognition to each individual (which implies seeing them as ends in themselves, and never using them as means for a certain end of ours that they have not willingly agreed to pursue themselves) and

b)      construing every decision as the application of a rule we could wish was universally followed

Although I concluded that an organization where everybody follows such precepts (what I called a “KoE organization”) is clearly preferable to a “utility maximizing” one for its members, I introduced the possibility that it may not be so clear-cut for other groups, be they consumers, stockholders or society in general. In this post I want to close the series by extending the previous analysis to those wider groups, to see if there is something we can conclude (regular readers of this blog surely know that reaching ultimate conclusions is not exactly one of its strongest selling points…)

To make the analysis as comprehensive as possible I will distinguish four groups of people with a legitimate interest in the organization’s operations:

·         employees (the people formally affiliated with the organization, the members, who typically receive payment from it in exchange for their work)

·         owners (the people that have advanced the capital required for the organization to start operating, in the expectation of receiving in exchange a percentage of the benefits it produces proportional to the capital each one has given). Note that the juridical personality of the owners can be very varied, ranging from stockholders (a wide set of individuals with limited liability, as they only risk a fixed amount of money) to partners (typically fewer, who may potentially risk all of their wealth) to other corporations (be they industrial conglomerates or financial institutions who do not actively participate in running the organization)

·         consumers (the rest of the society that can potentially buy what the organization produces, or at least be impacted by the way it operates -be it because it depletes some common good or it contributes to the common well-being)

To make the analysis more dynamic, I will also consider a fourth group:

·         potential employees (people who do not yet belong to the organization, but may do so in the future if certain decisions are taken)

What I will do then is analyze how the framework we sketched deals with the potential conflicts of interest between the (current) employees and each of the other groups, in opposition to what the economic (utility-maximizing) framework would dictate. I can advance that in most cases we will find that the utility-maximizing theory serves to perpetuate the existing relationships of production and systematically defends the interests of the owners over the employees, the employees over the consumers and the current employees over the potential ones, in ways that are very difficult to reconcile with any widely accepted definition of justice. It remains to be seen to what extent a Kantian (or contractualist -we will highlight some of its limitations) approach can overcome similar difficulties.

Employees vs. Owners

We don’t need to get all Galbraithian here (although coming back to the insights of good ol’ Kenneth is always a good thing) to remember how modern capitalism relies on a “caste” of people with a distinct, hard-to-acquire knowledge (what has been somewhat pompously called “management science” and, to be honest, isn’t that difficult to acquire to begin with) to run most businesses. That managerial class is what Galbraith termed a “technostructure”, and it tends to pay itself nicely, leaving little to distribute in the form of benefits to the poor capitalists who originally risked their hard-earned money to get the concern going (never mind that, with decreasing social mobility, the savers with capital to spare have mostly inherited it, and are forming a new rentier class not that different from the most famed one of the Gilded Age). Of course, by the eighties of the past century those cunning little capitalists had already developed a sure-fire way to align the interests of the leading managers with their own: link the rewards of the top executives (in the form of bonuses and stock options) almost exclusively to their ability to increase the return on the money they had been entrusted with. Conveniently, that’s when the theory of shareholder value as the main (and almost only) metric of a company’s success was born (along with the theory of efficient markets, to provide a patina of legitimacy to such a patently absurd construct).

I find it convenient to highlight that this particular conflict plays out with high frequency in the boardrooms of almost every company that has a board as its supreme governing body. Every time results go south, or when the yearly salary review comes around, it has to be decided how much will be taken from the employees (how many of them will be laid off, or how little total salaries will rise for the coming year) versus how much the benefit will be reduced, or how much loss will be suffered by the capital owners. Only there isn’t usually much conflict, because nobody represents the employees in the boardroom (I know, I know, in Germany that’s not the case, but my German contacts luridly tell me how the workers’ representatives can be easily bought, manipulated or outmaneuvered, so the dynamics in big German corporations are not that distinct from those in the rest of the world). And any CEO worth his salt knows just how many people he has to give the pink slip to in order to have the analysts praise his resoluteness and his share value recover from the most negative profit warning. You are going to sell 25% less than what you announced just a few quarters ago, and your planned 25 cents per share of benefit rather looks like a 5-cent-per-share loss? No biggie, lay off a few tens of thousands of workers and the share will probably stay put, or even recover a bit if the market is generally on an upward trajectory. Some astute observer may object that laying off a substantial percentage of the headcount may compromise the ability of the company to meet its production targets, thus further endangering its very survival, to which I reply by reminding my readers that the average corporation carries within its ranks 20% (and that’s a conservative estimate) of bullshit positions that add no value whatsoever to its bottom line.
Any given company, from the biggest corporate behemoth to the slickest family firm, can sustain a significant reduction of its workforce without losing much of its ability to deliver on its existing commitments, or even to take on additional contracts.

So how does the dominant economic theory propose we deal with this type of conflict? On paper it is pretty easy: each worker “deserves” to be paid as much as he contributes at the margin to the value of whatever it is his employer sells. If the last machining operator adds $20 per hour worked to the final product, $20/hour is what he should make. If, due to the law of diminishing returns, the next machining operator the company hires only adds $18 per hour, then it would be fit and proper for all the machining operators already hired to see their salary reduced to $18/hour (the fact that such reductions rarely happen is a testament to the extraordinary generosity of modern corporations, for sure). Of course, such an answer contains a double dose of baloney: first, there is no way a company knows how much each worker adds to the value of the product (believe me, I’ve designed enough analytic accounting systems to know; it doesn’t matter what the GAAP says, it doesn’t matter if it is ABC, Pareto costing, proportional allocation, rule of thumb or whatever, the process of determining how much value any given activity and any given position adds is rife with subjective allocations, hunches or outright creative obfuscation) and second, even if it knew -if in some ideal world there were unassailable rules (not subject to manipulation to favor the rich and powerful and better informed at the expense of the poor and weak and ignorant) to determine how the work of A compares with the work of B regarding how much each contributes proportionally to the final price that may be demanded for their joint product- why should it be more just to pay A and B in that same proportion? There are a number of confounding factors it may be equally just to take into consideration (A may need to devote much more time than B to complete his contribution, the work of A may be more physically or mentally demanding, A’s activity may be more hazardous, or injurious, etc.)
To top it off, the abilities demanded to perform A’s and B’s work may be very different, and command very different rewards in the market. So, although the “differential value contribution” may sound nice, it is but a justification. The truth of the matter is that what I’ve termed the “economic approach” has no answer at all to the question of what is just to pay each worker (or how many workers the company should employ). What it has is the following rule: “use as few workers as you can get away with (given the labor laws of the country) and pay them as little as possible (given what it would cost you to replace them with somebody with similar skills hired in the market) to maximize the profit of the company”. A pretty ugly rule when formulated that way, but I myself was a relentless applier of it for my first fifteen years in business, and am still frequently called on to keep applying it and selling it to the ranks, so I know firsthand it is not as simple as “selfish and megalomaniacal executives throw workers under the bus every time the going gets tough”, as sometimes you either

a) have no other choice or 

b) become very good at rationalizing and convincing yourself you have no other choice, although in fact you do…
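For what it’s worth, the marginal-productivity wage rule described above is mechanical enough to be put in a few lines of code. The numbers below are entirely invented for illustration (a hypothetical shop where each extra operator adds $2/hour less than the previous one); the point is only to show how, under the textbook rule, each new hire drags down everyone’s pay:

```python
# Toy model of the marginal-productivity wage rule (invented numbers):
# under diminishing returns, every operator is paid whatever the LAST
# one hired adds at the margin.

def marginal_product(n):
    """Hourly value (in $) added by the n-th machining operator."""
    return 22 - 2 * n  # operator 1 adds $20/h, operator 2 adds $18/h, ...

def textbook_wage(workers_hired):
    # The "just" wage for ALL operators equals the marginal product
    # of the last hire, so hiring one more cuts everybody's pay.
    return marginal_product(workers_hired)

for n in (1, 2, 3):
    print(f"{n} operator(s): last adds ${marginal_product(n)}/h, "
          f"all are paid ${textbook_wage(n)}/h")
```

Running it shows the first operator’s pay falling from $20/h to $16/h merely because two colleagues were hired after him, which is part of the baloney denounced above.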

So how would a contractualist, or Kantian, approach look instead? Let’s start with the owners: when someone chooses to finance a new operation they should clearly state what return they are aiming for (typically above the market average, depending on the risk level). Sadly, an investment in any enterprise beyond a purely commercial one (entered into for a single or at most a very limited number of transactions) may take years to come to fruition, years in which market conditions will change, new competitors will enter the market, business cycles will exert their deleterious influence, technology will evolve and potentially disrupt existing practices, etc. Which means that God only knows what may happen to the targeted return, regardless of how good or bad the executives tasked with steering the company turn out to be. And it’s not as if all investments are similarly liquid. If you own shares of a company (thus negotiated on a secondary market) you can always sell them if you are not happy with how said company is performing, but not all companies are publicly traded, and selling a stake in one that is not may be fraught with difficulties, and end up costing more than enough to erase any potential gain. Still, a clearly stated desired rate of return, set as a target for the administrators, is the only valid, just solution for clearing up that side of the equation.

What about the employees’ side? We face a similar difficulty, as you may be (indeed you should be) as transparent and straightforward as possible when negotiating each hire regarding not just salary and overall working conditions (overtime compensation, holidays, health coverage, safety measures, responsibilities) but also scope for professional advancement and career prospects. People work not just for what you pay them today, but also for what they hope to get from it in five, ten, even twenty years (although given market dynamics it’s difficult to fathom what company can still offer any serious semblance of stability over such a long term). When I left/was kicked out of my first job, what really pissed me off was not that my employer was failing to honor some short-term aspect of our contract, but how a whole long-term promise of what working there, and committing to the kind of work-life (un)balance it required, was being voided and trampled, not just for me but for all the employees who remained. I do know that companies have to adapt to changing market conditions, and that labor contracts cannot be renegotiated continuously to reflect such changes, but I still think the twin obligations of any director are honoring the contract with the owners and honoring the contract with the employees, giving both similar weight. And at some point in time both will be largely verbal, implicit, based on a shared understanding and not subject to being claimed in a court of law. And of course one side (the owners) has the power of firing you, while the other has not (workers nowadays really have to be pressed crazily, and badly mistreated, before they may threaten collective action against their employer), so it is only human to give more weight to the interests of the former over the latter. Even then, the Kantian right thing to do is to strive to balance both.

Employees vs. Consumers

According to classic economic theory, there is no real conflict here: a company tries to sell as dear as it may (regardless of who reaps most of the benefits of the income it tries to maximize) and its clients try to pay as little as possible. That’s shown by the slope of the supply curve tilting upwards with price, whilst the demand curve tilts downward. Where both curves intersect we find the equilibrium price at which the market clears, and at that price everybody has as much of the product as makes them maximally happy: the consumers ideally derive as much marginal utility from consuming that precise amount of what the company produces as they would from any other combination of consumables (by definition, were that not the case, they would choose the alternative combination they found more pleasurable, or more utility-maximizing), and the company uses its inputs with maximal efficiency, deriving from them as much income as possible given the technological possibilities of the age.
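The textbook market-clearing story just described condenses into a trivial computation (the linear curves below are invented placeholders, not anything measured):

```python
# Minimal sketch of the textbook equilibrium: supply slopes up with
# price, demand slopes down, and the market "clears" wherever the two
# curves cross. Both curves are invented linear placeholders.

def demand(p):
    """Quantity consumers are willing to buy at price p."""
    return 100 - 2 * p

def supply(p):
    """Quantity producers are willing to sell at price p."""
    return 3 * p

# Solve 100 - 2p = 3p for the clearing price: p* = 100 / (2 + 3)
p_star = 100 / (2 + 3)
q_star = supply(p_star)
print(f"market clears at p = {p_star}, q = {q_star}")  # p = 20.0, q = 60.0
```

At any other price the model predicts a surplus or a shortage; the neatness of that picture is precisely what the next paragraphs take issue with.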

A very neat and nice construct (let’s forget for a moment that there is no such thing as a continuous supply or demand function, as in most markets there are so many other variables that determine how many units will be sold as to render price almost immaterial) that allows us, as a society, to forget that old concept, so exhaustingly discussed for ages: value. What is, in a market economy, the value of an item for sale (be it the hour of a strategic consultant, or a car, or a house, or an EpiPen to self-inject a life-saving medicine in a moment of dire need)? Being blunt: who cares. Every one of those items has a measurable amount attached to it: its price. And the moment somebody is willing to pay such a price, the question is settled, regardless of how much effort it took to put the item on the market, or any other scholastic nicety (the scarcity of capital or land, the marginal productivity of the factors required to produce it or whatnot) that people resorted to in the old times to explain such a price. What is actually paid is all there is to it when it comes to valuing any commodity offered in the market, and any alternative concept of “value” is but a metaphysical fiction. So there is no such thing as a “just price” that could be calculated independently of what people pay for things, conceivably deviate from the latter amount, and thus point to a “market failure”. By definition, the market has no failures.
Remember that old canard about “use value” and “exchange value”, and how the difference between the two allowed the evil capitalist to appropriate the excess work capacity of the proletarians, and grow rich exploiting their immiseration? That was Ol’ Karl talking nonsense (well, he was actually talking mostly nonsense anyway, so you may be pardoned for not paying attention to this particular point). But we don’t need to resort to Marxist obfuscations based on a muddled understanding of Hegel’s dialectics to recognize that the market may indeed fail to reflect, in the price at which certain commodities exchange, the whole loss of collective utility involved in producing and consuming them; the difference between such real costs and what society can successfully pass on to the producers is normally known as “externalities”. Such failure creates an unfair distribution of burdens and rewards, as the costs are borne by groups different from those obtaining the benefit of such production and consumption.

The examples are well known, from the environmental degradation caused by polluting companies (a negative externality borne by consumers who breathe worse air, or drink less clean water, or simply enjoy some utterly spoiled landscape) to the commonly enjoyed benefits of security and the rule of law not paid for by tax avoiders (a positive externality that creates the problem of “free riders” who profit from such shared goods without having to pay for them, leaving others with a higher price tag to pay).

Economic thinking doesn’t have much to say about externalities, other than that they may be bad indeed, and the only way it prescribes to minimize them is to put a price on them, so they can be properly internalized and taken into account as normal costs by the producers. Those are the vaunted “market solutions” to many problems that besiege modern economies, from overfishing to climate change to healthcare to pollution. How do they work? Let’s say, charitably, that it’s a decidedly mixed bag. Acid rain and the excessive content of sulfur dioxide in the air due to coal plants have been quite successfully tackled in the West through cap & trade and taxation, but CO2 emissions haven’t. On the other hand, the global population of whales in the ocean seems to be slowly recovering, but surely there was no market solution here: every country except Japan simply agreed to friggin’ stop killing the poor beasts, which constitutes as good an example as you could dream of of the virtues of plain ol’ government steppin’ in and mandating what was self-evidently good and just, markets be damned. Another example? The only country that stubbornly sticks to a supposedly market-driven approach to providing health care to its citizens has an abysmal record, with skyrocketing expenses for very subpar results.

I think the evidence points clearly and unequivocally in the direction that in certain areas there is no rational way to put a cost on a productive activity so its costs are fairly borne, and there simply is no alternative to regulation to protect the common goods. I myself formulated a utopian approach for all economic activity, which I dubbed the “zero footprint rule” (Good stewards of the Earth), stating that each and every degradation of the environment is a cost to be borne by future generations, so each and every economic activity should be required to leave said environment exactly as it encountered it when its activity started (meaning producers should potentially contribute to a fund not only for cleaning up their short-term, immediate activity, but also for 100% recycling of whatever they produce and for the replacement of whatever non-renewable resources they consume).

What about contractualism, then? Other than avoiding fraudulent advertising, what does it have to say about the legitimate distribution of burdens between the enterprising manufacturers and the passive consumers down the road? Well, it turns out a load of things, because even if no contract can be signed between producer and consumer (or, even less, between producer and future potential consumers who may not have been born yet), there are a couple of concepts that should be taken into account. The first is what Habermas dubbed the “ideal communicative condition”, which in this case means that the producer should try to imagine himself in an ideal world where he has the same power as the consumer, both have access to the same information, both are blessed with similar discernment (so he cannot hope to outwit his client) and both have an infinite amount of time to reach a voluntary agreement both could feel happy about. That provides a set of stringent, “strong” boundaries on what a company can do regarding its possible externalities, starting with openly recognizing them, and ending with giving the affected parties a say in how they value them when it comes to minimizing their impact. Second, let’s remember that old deontological tool, the categorical imperative: producers should act under rules they could wish universal. They can pollute the water to the extent they would be OK with everybody else similarly polluting the water they themselves would have access to, and then drinking it unfiltered. They can add toxic (but addictive) carcinogens to the cigarettes they produce if they would similarly be OK with other people adding similarly toxic components to other products they consume and hiding it (that is, IF they were truly rational in a deontological sense, there is no way Jose in God’s green Earth they would behave as they have factually been behaving for decades… but hey! They weren’t acting deontologically at all, just plain ol’ utility maximizing, and according to venerable Milton that’s all that could be asked of them: Friedman's everlasting shame).

So, kids, utility maximization doesn’t have a stellar track record regarding the internalization of externalities (no matter what economists of the Chicago school may want to tell you), whilst a Kantian approach (with some Habermasian sprinkling) may take you closer to a fairer, more just world. Let’s turn our eyes, finally, to the last potential conflict we identified at the outset.

Actual employees vs. potential ones

Including people who exist only theoretically in your moral calculations is always tricky, as Derek Parfit famously showed in Reasons and Persons (where he essentially argued that we can literally do no wrong to future generations, as doing anything differently would imply they would not be born in the first place -other, different people would- so whatever shitty world we bequeathed them, such shittiness would be a necessary condition of their coming into existence, and could only be seen as a net positive provided their life was at least barely livable, that is, barely better than not being born at all). We have seen that utilitarianism (or marginal economic reasoning) gives us a lot of latitude to engineer organizations that are less just than what deontology would allow, both for their members and for society in general. But what if we extend the scope of our concerns to those people who do not yet belong to the organization, but may end up doing so (and profiting from such belonging) if it were more economically efficient, guided itself by profit maximization (and rewarded its members by equalizing such reward with their contribution at the margin, however difficult that may be to calculate)? Might we find that the greater injustice within the organization (in the form of the extra exploitation of its workers it condones) and within the wider society (in the form of the greater externalities it allows) can be somehow compensated by the extra employment opportunities it provides, bringing more people into its beneficent (or not so beneficent, but let us leave that aside for a moment) fold?

Such considerations have historically been behind the discussion of the overall benefit of minimum wage laws: economists of a libertarian bent tend to disparage such laws, saying that “artificially” raising wages (by legislative fiat) depresses employment, as companies hire fewer people than they would choose to in a “free” market situation. For an example of such disparaging, see Tyler Cowen: Minimum wage is evil and his co-blogger at MR Alex Tabarrok: And awful and And the absolute worst. For Scott Sumner, the fact that higher wages cause fewer people to be employed was “one of the few certainties in economics” (and those few were already precious, if we want to take seriously Economics’ claim to be a “science”); that’s him comparing his musical chairs model with Keynes’ (or at least with the first 50 pages of Keynes’ General Theory): Money Illusion charitably reading (part of) the GT. On the other hand, economists of a non-libertarian persuasion (and that may well be the majority of the profession, from Neo-Keynesians to those of a more statist bent) tend to minimize the impact of such laws, and to highlight the social benefits of people earning a “living wage” that, complemented by generous state aid if needs be, allows every worker (regardless of level of qualification) to participate fully in the benefits of shared socio-economic life (something that earning less than $8/hour makes damn hard, especially if hired on a temporary basis). Recently, two studies analyzing the impact of Seattle raising its minimum wage to $15/hour reached opposing conclusions (one, from UC Berkeley, found little impact on employment levels, whilst another from the University of Washington found a significant impact; a good summary of the differences and the potential origin of the diverging interpretations can be found at the NYT: Is the minimum wage in Seattle good or bad for employment?)
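The libertarian claim summarized above is, again, easy enough to caricature in code: with diminishing marginal product, a wage floor above what the last would-be hire adds at the margin reduces headcount. The production numbers below are invented for illustration, and nothing here settles the empirical dispute between the two Seattle studies:

```python
# Toy version of the "minimum wage depresses employment" textbook claim
# (invented numbers): a profit-maximizing firm keeps hiring only while
# each extra worker adds at least the wage floor.

def marginal_product(n):
    """Hourly value (in $) added by the n-th worker, diminishing returns."""
    return 22 - 2 * n

def workers_hired(wage_floor, max_n=10):
    # Hire workers one by one as long as the next one still "pays for himself".
    n = 0
    while n < max_n and marginal_product(n + 1) >= wage_floor:
        n += 1
    return n

print(workers_hired(8))   # 7 workers employed at an $8/h floor
print(workers_hired(15))  # only 3 at a $15/h floor
```

Whether real labor markets behave anything like this monotone toy model is, of course, exactly what the opposing studies disagree about.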

It has to be noted that the contractualist framework we have used as a counterbalance to the prescriptions of what we may term "crass utilitarianism" (and I readily confess I'm "slightly" strawmanning here to make my case more clearly) seems to fail us in this case. You cannot "give voice to" or "put yourself in the shoes of" people who don't actually exist, being only potential. You cannot imagine an "ideal communication situation" where the other party can be any of your countrymen, or even as-yet-unborn generic human beings. Deontology and contractualism require an actually existing other whose actual concerns and preferences can be duly taken into account.

In this latter type of conflict, then, I will not resort to the superiority of the deontological ethical position, but to a three-pronged approach, based on three observations about how our society works:

·         If instead of every potential worker who may find employment at a given wage level we fix our gaze on society as a whole, it seems evident that the lower the wage, the less attracted new cohorts of workers seem to be towards paid occupation. That observation seems to be confirmed by the low participation rate in economies where (overall) minimum wage regulations are laxer, or stopped being updated for inflation decades ago, so that the minimum wage now stands at a smaller percentage of the median salary (the USA, which now shows one of the lowest participation rates among the advanced economies). So paying shitty wages may seem a good idea for each individual organization that does it (it may provide employment to more people, which at first blush seems a net positive) but end up hurting society as a whole, as it discourages more people from seeking work in the first place (I admit this argument is tentative, and a lot of additional research should be conducted to test its robustness)

·         An economy in which companies are allowed to pay peanuts to their less qualified employees may indeed have a higher aggregate employment level (again, it may not, as hinted in the previous point) and even produce more goods and services (translating that higher employment level -which means it uses one factor of production, labor, more intensively- into a higher aggregate supply). But that doesn't mean that its citizens end up deriving more utility from such a state of things. That in our opulent societies we may have reached a limit to the amount of things we value consuming, which may in turn explain the low productivity gains in most advanced economies, was recently pointed out to me by this interesting article (What if we grow less because we want?), whose implications I will probably develop in more detail in a further post.

·         Finally, we shouldn’t forget that each organization’s decisions (about salary and production levels) are taken in the context of a certain dominant reason, which dictates what both employers and employees should understand as a life well lived, what desires they are allowed to act upon in the pursuit of such a life, and how they should defer to each other when acting on those desires. The utility-maximizing mindset is, in this context, part and parcel of the dominant reason of our times (desiderative reason), and the outcomes of such a mindset are inseparable from the more encompassing consequences of that reason (something Daron Acemoglu recently recognized in one of his latest posts: We need to grow because... growth! -he calls it “commodification and the insatiability of needs”, but recognizes beforehand that such commodification rests on setting the social hierarchy by the amount of goods each individual can command)

So for a society that accepts the tenets of desiderative reason (grading people based only on what they earn, and teaching them to value only what translates into a better grade) it may be that the conflict between existing workers and potential ones can indeed be solved along utility-maximizing lines, and that it is indeed better to pay each member as little as possible, because then there can be more of them.

But I hope I have sufficiently argued that we shouldn’t resign ourselves to living forever in such a society (though I do recognize it is undeniable that we live under one now). And what I say regarding the resolution of the conflict between current employees and potential ones can be extended to the previous two (between employees and owners, and between employees and consumers). Both can be solved by appealing to utility-maximization rules, and in both cases the solution fits with the overall framework for organizing social relations we collectively adopted in the second half of the last century (desiderative reason). Not only does it fit: they necessitate each other. Desiderative reason needs a way of organizing society (a set of rules within organizations) that steers everybody to produce as much as humanly possible (and then some), rewarding each individual in strict proportion to their contribution (regardless of how “deserved” their ability to contribute is in the first place) and punishing them for any perceived defect (the biggest defect being any inability to actively produce and participate in the system of social signaling that determines everybody’s position in the social hierarchy). And such a way of organizing society needs a conceptual framework that legitimizes its rules and makes them acceptable to the many, regardless of how unfair it is in allocating the social product.

As a final aside, I’ll remind my readers that the original justification for accepting Desiderative Reason (“a life well lived is a life in which the maximum number of desires are satisfied or, which is the same, a life in which a maximum balance of pleasure over pain is achieved”) is both:

a) inconsistent (pursuing that kind of happiness is, in the end, detrimental to achieving it), and

b) unfair (at the end of the day, and after a couple decades of widely shared growth, a tiny minority has seen their ability to satisfy more desires enhanced to unprecedented levels whilst the vast majority has seen their desires grow, but their ability to actually satisfy them stagnate or even decrease -in the West, I know the story in the undeveloped world has been very different). 

But, and I’ve said this a million times already, dominant reasons do not evolve because they are “good” for the societies that adopt them, or because they make their members happier. They evolve because they become better at crushing (either by forcing imitation or by outright military annihilation) the competition.

Which, as I stated at the beginning, has very little to do with justice…