Thursday, November 23, 2017

“Neuroscience”, a Monism for the XXth century (or maybe even the XIXth). On Robert Sapolsky’s Behave

Regulars of this blog (here I always insert the by now somewhat tired cliché “all two or three of them”) know I have a thing for reading books I thoroughly, consistently, deeply, unambiguously and exhaustively do not enjoy. I do it frequently enough to conclude it is not just lack of attention when choosing what to read, or bad luck (the “I thought it would be good, but more than halfway through it I realized it was not just mediocre but downright atrocious… and having already devoted so much time to it I just trundled along, thinking it would be better to see it through” excuse), but an active courting of works that I won’t a) like, b) agree with or c) have any use for.

As a brief aside, I know there are people who wouldn’t be caught dead reading a book they don’t like, exemplified by Tyler Cowen (whom I quote frequently and approvingly on so many other issues): Tyler reads a bunch of books, but not many pages of each. All I can say is I feel sorry for them, as they are missing out on one of the most important teaching and character-formation tools of this time and age. The ability to read unpleasant writings, to soldier through them no matter how unsatisfying, is highly trainable, and not only does it spill over into other, more practical areas of life (like the much-lauded trait of grit, with so much media exposure of late), but it helps immeasurably in some more mundane applications, like making it much easier to learn foreign languages (something I already recounted in this recent post: Unexpected uses of Philosophy). So, regardless of what Tyler says, I strongly encourage my readers to persevere when caught in the midst of a dreadful, dreary book, as it is more conducive to a life well lived than the perpetual surrender to the thrill of novelty seeking, with its concomitant rejection of effort and toughness and its indulgence in instant gratification, both so damaging to the formation of a resilient, well-rounded character.

Back to my quirky habit of reading books that bore me or anger me or somehow disappoint me, I recently finished a doorstopper (well over 700 pages) from Robert Sapolsky aptly titled Behave: The Biology of Humans at Our Best and Worst, extravagantly praised by a number of figures of the scientific establishment, among them Richard Wrangham, who in his July 6th review for the New York Times describes the book as a “quirky, opinionated and magisterial synthesis of psychology and neurobiology that integrates this complex subject more accessibly and completely than ever”. That I bought the book after reading such a description already denotes a masochistic streak on my part, as the integration of psychology and neurobiology sounds as appealing to me as the integration of astrology and XIIth century Chinese ceramics (I have only the slightest interest in the latter, and utterly despise the former, for the same reasons I despise psychology: both disciplines’ unsubstantiated claims to some sort of “scientific validity”). I’ll devote this post to commenting on it, not specifically on the more scientific claims (which, not being an expert on the field, I found well informed and crisply stated, if a bit discombobulated and lacking a logical structure at times; just too many things seemed to be written in the spirit of “this is really cool and fascinates me to no end, so whether it really belongs here or not, whether it really helps drive home whatever point I’m trying to make or not, I’m going to describe it in some detail no matter what”), but on the metaphysical ones.

“Whoa, whoa, whoa!” - I can hear my readers saying. “Hold your horses here for a minute! Metaphysical claims? In a book about the brain, and hormones, and the genetic origins of behavior? Are you sure?” - Well, I very much am, for as scientifically inclined and devoted to empiricism as the author professes to be, he argues from a very strongly felt metaphysical position about what the ultimate components of reality are, and about what the realness of such components tells us about how we should think about our behavior, what we should praise and condemn, and how we should punish deviations (with the unavoidable foray into trolleyology… man, I endlessly wonder what kind of fixation Anglo-Saxon casual readers of philosophy have with electric cars on rails that they simply cannot stop dragging them into any discussion of ethics, cursory and circumstantial as it may be).

However, before we dive into the arguments of the book, I’d like to share something I find quite puzzling about the whole “neuromania” (a term I borrow from Raymond Tallis; if you haven’t already read him, especially the excellent Aping Mankind, go and do it ASAP). Let’s consider the following two descriptions of the same behavior (young Rick knows he should study for next day’s biology exam, but he just can’t gather enough willpower, so he spends the evening watching TV instead):

·         Description 1: when Rick starts considering what to do (while conveniently having his head inside an fMRI) his frontal lobe lights up, which tells us the parts of his brain in charge of executive control and long-term planning are being activated. Unfortunately for him, his orbitofrontal complex is not fully developed, and due to neglect and inadequate parental supervision during his early childhood (or maybe because when walking down the university hall to the fMRI he crossed paths with an attractive research assistant who reminded him of a day care nurse that had been mean to him all those years ago, or because he had been primed to be more attuned to earthly pleasures by being made to unconsciously smell freshly baked bread when entering the room) the metabolic capacity for sustaining the heightened glucose demand of that sensitive part of his neural tissue is not up to par, and cannot counteract the activity of the medial PFC (somehow the PFC -prefrontal cortex- is always involved in anything having to do either with emotion and expectations of pleasure or with rationality and delayed gratification), or of the limbic system (don’t even ask: lots of neurotransmitters, more Latin-themed brain topology and a bunch of enzymes produced in exotic and distant parts of the body), or the surge in dopamine caused by the prospect of some good ol’ procrastination. To make things worse, we can appreciate that both Broca’s and Wernicke’s areas (in his frontal and temporal lobes, respectively) are also lighting up, and them being neural correlates (we don’t know exactly what a neural correlate is yet, at least as regards the subjective feeling of consciousness, but bear with me) of language processing, we can only conclude that he is rationalizing his rascally behavior, and talking himself into it. Indeed, his nucleus accumbens (just below his septum pellucidum; if we are going to use funny Latin names let’s use them to the end!) is also showing signs of enhanced activity, surely as he contemplates the pleasures of just doing nothing, while the insula (which in theory evolved to warn us of the dangers of rotting fruit) remains muted, meaning that our moral sense (a form of disgust, typically geared towards outgroups, but potentially useful for directing that disgust towards non-adaptive behaviors of our own) is similarly muted and has nothing to contribute to the final decision on what to do.

·         Description 2: the guy is lazy. Probably not entirely his fault (but more on that in a moment).

Any self-respecting neuromaniac (or neurobabbler) will tell you that only the first description is “informative”, “scientific”, “information-rich” and constitutes true understanding of human nature and of the aspects of personality involved in the observed behavior. What I contend, on the contrary, is that both descriptions have exactly the same informative content. Whilst the former is doubtlessly more florid, and definitely longer, it doesn’t provide any additional understanding of what is going on, nor any additional insight. Specifically (and we’ll come back to that in a moment) it doesn’t provide us with an iota of additional, “useful” information about what the subject exhibiting the behavior may or may not do in the future (the ability to predict the future being one of the defining features of scientific knowledge).

On a side note, such unkind critique reminds me of what C. Wright Mills did, in The Sociological Imagination, with Talcott Parsons’s The Social System: he transcribed super-long paragraphs of the latter and then “translated” them into much shorter, simpler, more elegant and concise ones, contending that they really meant the same thing, and thus exposing Parsons’s magnum opus as a lot of unnecessary, inflated and somewhat pompous babble. I think I was lucky to read both books in the “right” order (first Wright Mills’s, with his scathing critique, and afterwards the one by Parsons), which allowed me to better appreciate the aspects properly deserving criticism, and to separate them from those where the long-winded sentences and the convoluted reasoning were called for (at this point, no reader of mine will be surprised to find me sympathizing with other writers prone to the use and abuse of long-winded sentences and convoluted arguments). And I find it amusing that I may now level such criticism against a work that on many levels is quite well written, engaging and even witty and downright funny… such are the foibles of the world.

But back to Sapolsky: the whole book is really not much more than a masterly, exhaustive enumeration of all the aspects of mental life we have found to correspond with the illumination of different parts of the brain when seen inside an fMRI, or with different concentrations of neurotransmitters and enzymes in the subjects’ blood (measured at different moments of experiencing some contrived experimental situation or other), or with different waves as picked up by an EEG, along with the minutiae of the experiments that purportedly settled such correspondence. And, man, is there a lot to enumerate! Abstract thinking, volition, desire (of different types and kinds), moral evaluation, affects, emotion, reasoning, memory, feelings, broad categorization, narrow categorization, ascription of agency, prediction, anticipation, delayed gratification, succumbing to impulses, visual perception, purposeful meditation… you may think that whatever may happen in your mind, Mr. Sapolsky has it covered and pithily conveys its biological basis, which means identifying it with the firing of some neurons (or at least the distinct oxygen consumption of some broad areas of the brain) and the variation in the blood concentration of some chemicals.

There is the mandatory (for a book that aspires to fairly represent the “state of the art” of neuroscientific investigation) mountain of notes and copious bibliography pointing to the apparently insurmountable mountain of (impeccably “scientific”, of course) evidence supporting its claims, but it is a pity no mention is made of the dubious replicability recently noted in many of those experiments. Which is surprising, given that the book was published this very same year (2017), and Brian Nosek’s important paper “Estimating the reproducibility of psychological science”, which kicked off what has been termed the “replication crisis” in most social fields, was published in August 2015. So Sapolsky is still using experiments from John Bargh and the like as evidence (when I read Social Psychology and the Unconscious I was left with the clear and distinct feeling that the whole field was completely, utterly bunk, and I didn’t need sophisticated resources and a failed attempt at replication to conclude that its findings were either trivial or false; unsurprisingly, the field of “social psychology” has been one of the worst hit by the replication crisis…) without mentioning their dubious epistemic status, which is at best a bit careless, and at worst disingenuous.

Unfortunately (or fortunately, depending on your previous metaphysical commitments) I came away with the idea that the book fails spectacularly in its declared intent of “explaining” in any meaningful way why we humans act as we do. Maybe it is because it has to resort to too many causal chains (on very different timescales, which makes them mightily difficult to integrate with one another). Maybe it is because I read these things to gauge to what extent the advances in medicine and biology should make me question my belief in (or commitment to) free will, and given my prejudices and biases it is not surprising that I come out of such exercises reassured, rather than shaken or converted. There are many intelligent, considered and thoughtful arguments against the existence of such a mythical beast (freedom of the will), made since the time of the classical Greeks, but I’m afraid you won’t find any of them in Behave.

Let’s start with how the author proposes to tackle it head-on, appealing to the somewhat worn-out and belittling homunculus argument (after this, don’t ever accuse me again of strawmanning!). I’ll need to quote at some length to capture the rhetoric in all its gloriousness. This is how Mr. Sapolsky presents his understanding of what he calls “mitigated free will”:

There’s the brain – neurons, synapses, neurotransmitters, receptors, brain-specific transcription factors, epigenetic effects, gene transpositions during neurogenesis. Aspects of brain function can be influenced by someone’s prenatal environment, genes and hormones, whether their parents were authoritative or their culture egalitarian, whether they witnessed violence in childhood, when they had breakfast. It’s the whole shebang, all of this book.

And, then, separate from that, in a concrete bunker tucked away in the brain, sits a little man (or woman, or agendered individual), a homunculus, at a control panel. The homunculus is made of a mixture of nanochips, old vacuum tubes, crinkly ancient parchment, stalactites of your mother’s admonishing voice, streaks of brimstone, rivets made out of gumption. In other words, not squishy biological brain yuck.

And the homunculus sits there controlling behavior. There are some things outside its purview – seizures blow the homunculus fuses, requiring it to reboot the system and check for damaged files. Same with alcohol, Alzheimer’s disease, a severed spinal cord, hypoglycemic shock.

There are domains where the homunculus and that brain biology stuff have worked out a détente – for example, biology is usually automatically regulating your respiration, unless you must take a deep breath before singing an aria, in which case the homunculus briefly overrides the automatic pilot.

But other than that, the homunculus makes decisions. Sure, it takes careful note of all the inputs and information from the brain, checks your hormone levels, skims the neurobiology journals, takes it all under advisement, and then, after reflecting and deliberating, decides what you do. A homunculus in your brain, but not of it, operating independently of the material rules of the universe that constitute modern science.

At this point the author may think he has gone a bit overboard; after all, polls suggest (although free will is a slippery concept that may arguably not be all that well understood by people answering that kind of question) that consistent majorities in all countries do believe people are endowed with free will, mitigated or not, even in our very scientific and deterministic age, when the dominant reason has been hammering into them, at least since the mid-eighteenth century, that this “voluntariness” thing is but a fiction, the sooner discarded the better (so as to work more towards accumulating more material goods, mindless as such accumulation may look to a dispassionate observer). So Mr. Sapolsky digs deeper, trying to excuse our unenlightened foolishness (but not much):

That’s what mitigated free will is about. I see incredibly smart people recoil from this and attempt to argue against the extremity of this picture rather than accept its basic validity: “You are setting up a straw homunculus, suggesting that I think that other than the likes of seizures or brain injuries, we are making all our decisions freely. No, no, my free will is much softer and lurks around the edges of biology, like when I freely decide which socks to wear.” But the frequency or significance with which free will exerts itself doesn’t matter. Even if 99.99 percent of your actions are biologically determined (in the broadest sense of this book) and it is only once in a decade that you claim to have chosen out of “free will” to floss your teeth from left to right instead of the reverse, you’ve tacitly invoked a homunculus operating outside the rules of science.

Well, I’m not sure Mr. Sapolsky would consider me “incredibly smart” (I’m a theist, maybe even a Deist, after all, which in his book is surely a giant letdown), so it is just par for the course that I do not recoil from “this” at all, and wouldn’t even attempt to argue against the “extremity” of such a picture, a picture belief in which is shared by such obvious morons as Aristotle, John Stuart Mill, Kant and arguably even Hume (but hey!, who were they to know? They didn’t have fMRIs, microscopes, EEGs and the like… bunch of intellectual midgets, that’s who). I’ll apply Sapolsky’s rhetoric to his own position (“no homunculus at all, just the neurons, hormones, genes, and that’s it”) towards the end of this post, and we’ll see which one seems more ridiculous, or looks more “extreme”, if we dwell for a moment on where that dispassionate, seemingly objective and scientific alternative that he champions leads us.

But before we get there I think it’s worth considering the super-duper, bold, brave and in the end quite empty calls for a radical overhaul of the penal system (if all the world is indistinguishable from a prison, where nobody has any freedom at all, but just blind obedience to material forces that were put in play at the big bang and have been playing out necessarily ever since, what does jailing criminals even mean?) in light of the non-existence of that vaunted free will. At various points the author admonishes us that, given what we know of what makes people tick, we absolutely must reconsider all our laws and, most markedly, punishments. But when it comes to actually defining what such reconsideration should look like, he is maddeningly vague. All he does is point out that concepts like “guilt”, “intent” and even “recidivism” lack any real meaning, as each and every action any of us performs is preordained, overdetermined by an overlap of evolution (which shaped our species), individual genetics (which shaped our capabilities and dispositions) and the individual circumstances in which we find ourselves (which trigger the evolved responses finely tuned by our genetic endowment and previous history), and so merits praise or blame, reward or punishment, no more than the actions performed by animals (donkeys, pigs, cows) that, in past and unenlightened centuries, were similarly judged and given silly verdicts. Very bold and brave, indeed, until we try to apply it to the real world.

Let’s remember that in penal theory punishments pursue at least three objectives: we deem it acceptable to harm the perpetrator of a crime because a) it has a deterrent effect on other people (who see that committing crimes is punished and thus become less likely to commit them themselves), b) it makes it more difficult or impossible for the criminal to repeat his bad deeds (from killing him to imprisoning him to depriving him of the material means necessary for recidivism) and c) it compensates the victim (either materially, giving her the proceeds of the fine imposed on the evildoer, or morally, signaling society’s rejection of what her tormentor did). You may notice that all three objectives are quite independent of the assumption of free agency in either the criminal or the rest of society. Even if we accept we are all mindless robots, we would still need to levy fines, temporarily imprison wrongdoers and, in some extreme cases, either imprison for life or kill those individuals so dangerous and prone to violence that we cannot afford to let them loose among their fellow beings. We may use a bit less shaming, and more consequentialist reasoning, but I don’t see the actual penalties of any modern, progressive penal system (like the ones you can find in most advanced parts of the world, the USA not included) changing much, or at all. Which doesn’t mean our current system is maximally humane or maximally just, as it already treats criminals to a great extent as somehow mentally defective, and such condescension may be a harder punishment than granting them independence and recognizing their moral agency, even if that recognition entails a harsher penalty (finely illustrated in Henrik Stangerup’s The Man Who Wanted to Be Guilty).

Regardless of the consequences of admitting the fictitious nature of free will (consequences presented as unimaginably bold and revolutionary while at the same time requiring that we leave current norms, laws and institutions essentially unchanged), I’m afraid the animosity of Mr. Sapolsky towards the possibility of such a fiction not being a fiction at all lies in a misunderstanding: a misunderstanding of what the freedom worth having would look like in an (in)deterministic universe. His confusion reproduces almost verbatim an argument from Daniel Dennett (which I’ve read both in Consciousness Explained and in Freedom Evolves): even if the universe were at heart strictly indeterministic, that wouldn’t threaten his understanding of all behavior following necessarily, within a causal chain with no slack, from material causes that were essentially set in stone at the moment of the Big Bang, because A) the indeterministic nature of reality applies only at very small scales (the quantum realm, for particles smaller than a proton or a neutron), and when it comes to “big” stuff, noticeable by our senses, such indeterminism vanishes, so we can entirely ignore it; and B) even if there were truly “uncaused” macroscopic events, events for which we could really and ultimately never find a material “cause”, such events would never constitute the basis of a “freedom worth having”, as we traditionally consider a “free” action (free in the sense of being valuable, morally worthy, deserving praise or blame, etc.) one that is consistent with the “personality”, the “character”, the “true self” of the agent, and such an action could never come out of the blue, or be entirely random; it could never be supported (or made to appear more likely) by the fact that the universe is finally not “causally closed”, if we understand such lack of causal closure only to entail the possibility of entirely stochastic, uncaused events.

That is indeed hefty metaphysical stuff, and reading Behave has just reinforced my original hunch that such stuff is but very lightly illuminated by what we learn from neurology and biology. Without needing to resort to so much neurobabble, Ted Honderich expressed it better and more nuancedly in his (alas! quite difficult to find) Theory of Determinism (1990), which was wrong for the same reasons Dennett’s and Sapolsky’s arguments are wrong, namely: A) their understanding of physics is between 50 and 80 years out of whack; the old debate between Heisenberg and Laplace was decisively won by the former, and appeals to hidden variables to causally explain quantum effects have so far been shown to be not only unsubstantiated, but probably incongruous (the best explanations of Heisenberg’s uncertainty principle I’ve seen are entirely apodictic, relying not on any particular experimental result but on the nature of reality and mathematics themselves); the consensus position among physicists is that nature is fundamentally non-deterministic, that there are really, deeply, entirely uncaused events, and that those events, microscopic as they may be individually, may aggregate to have macroscopic, entirely observable effects (from the decay of a radioactive atom to certain phase changes), due to the non-linearity of the equations governing complex systems (which would demand a potentially infinite precision in the measurement of the initial conditions to get even an estimation of the order of magnitude of the end state of a given system… but the HUP puts a hard limit on the precision with which we can measure such an initial state, seriously limiting what we can know of the end state, regardless of how macroscopic we find it to be). But wait, it gets worse, because B) you don’t even need to appeal to quantum indeterminism to accept the possibility of a freedom worth having, once you recognize that classical mechanics provides a fairly limited description of a fairly limited percentage of reality (the behavior of a “reduced” number of “big” solid bodies moving “slowly” -no need to get too technical about the precise meaning of each term in quotes), and that, sadly, classical mechanics is the only realm where the determinism our authors propound holds sway. What Honderich, and Dennett, and finally Sapolsky are doing is taking the neatly defined (albeit, as I just mentioned, woefully incomplete) concept of causality from classical mechanics and applying it to the field of chemistry (mostly a valid extension, for big enough compounds), then extending it again to the field of biology (a sneakier and, judging by the results, not entirely legitimate extension) and finally extending it once more to the field of human behavior, forgetting the entirely different meaning a “cause” may have for creatures possessing a symbolic language, culture and a complex neural system that translates into sophisticated motivational structures.
Then, they look back as if such a chain of extensions of the original concept were entirely uncomplicated and immediate and claim: “see, as the only valid causes are big bodies gravitationally attracted to one another, or imparting momentum to one another through physical contact, or heating one another or exchanging atoms and molecules with one another… there can be no free will, as it would require weird stuff -a homunculus- that has no way to gravitationally attract, impart momentum to, heat or exchange atoms or molecules with normal, everyday, honest-to-God, measurable matter” (that’s the “Casper argument” from Dennett, which I’ve criticized elsewhere, in a nutshell). But it is they who have beforehand (and somewhat arbitrarily, as we are about to see) limited what can validly be considered a cause, so their whole argument is circular and lacks validity!
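As an aside, the sensitivity-to-initial-conditions point made above is easy to demonstrate. Here is a minimal sketch (my own illustration, nothing from Behave or from Honderich) using the logistic map, the textbook chaotic system; it shows how a hard floor on the precision of an initial measurement, however tiny, becomes a hard ceiling on how far ahead the system can be predicted:

```python
# Two initial conditions differing by one part in a billion -- far
# coarser than any quantum limit -- decorrelate completely after a
# few dozen iterations of the chaotic logistic map (r = 4).

def logistic(x, r=4.0, steps=50):
    """Iterate x_{n+1} = r * x_n * (1 - x_n); return the trajectory."""
    traj = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        traj.append(x)
    return traj

a = logistic(0.400000000)   # the "measured" initial state
b = logistic(0.400000001)   # the true state, 1e-9 away

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: |a - b| = {abs(a[n] - b[n]):.3e}")
# The gap grows roughly exponentially until it is of order 1:
# knowing the initial state to nine decimal places buys only about
# thirty steps of forecast, and each extra digit only a few more.
```

The design point is that the divergence is exponential, so any ultimate limit on measurement precision (such as the one the HUP imposes) caps predictability outright, no matter how macroscopic the system we are tracking.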

Indeed, someone with credentials as impeccably empiricist as John Stuart Mill’s, in his A System of Logic, Ratiocinative and Inductive (first published in 1843), already noted (although he hid it in a footnote) that the very concept of what a “cause” is had significantly evolved from the times of classical Greece, when its paradigmatic model was precisely… mental intention! (their main example of a cause was me thinking of moving my arm and my arm dutifully moving as intended), to his own time, when the paradigmatic model had become the billiard ball being moved mechanically by the clash of another billiard ball. That’s why in Aristotle’s canonical model we find listed as causes not just material causes, but formal and final causes, which are entirely absent from our modern concept of what a cause consists in. Add Descartes and his distinction between res cogitans and res extensa and you may start to understand why the idea of mind as something outright distinct (metaphysically distinct, I dare say) from matter has become more and more alien to our sensibilities (a somewhat more garrulous and blockheaded example of such estrangement would be the abominable Descartes’ Error, by Antonio Damasio, but I refuse to devote much attention to such claptrap).

Okay, let’s take stock, then, of where we stand: we’ve found that Bob Sapolsky’s idea of what constitutes a cause may not be all there is to it when it comes to explaining human (or, for that matter, animal) behavior. We’ve based such a finding on a prior (probably unbeknownst to him; that’s the problem of doing metaphysics without enough training in basic philosophy, only partially excusable if you think what you are doing is biology, or neurology, or psychology) and unjustified commitment about what constitutes a valid explanation, one that leaves out enormous tracts of reality that are as able to have an empirically verifiable effect on the material evolution of the universe as any legit neutron star or black hole or lump of atoms you may think of. And before anybody accuses me of magical thinking, of just attempting to open the causal closure of the material world to sneak into it fairies and old bearded guys in robes inhabiting the skies (not that I would care, really), I will clarify that identifying the insufficiency of matter as a universal explanation does not require any kind of prior commitment to revealed religion, the existence of immortal souls or anything like it. I’ve already pointed to one of the fiercest critics of what he terms “neuromania”, Raymond Tallis, an avowed and proud atheist. Another recent critic of the view that there is, and can only be, matter, interacting according to the laws revealed by the scientific method so far: Thomas Nagel (similarly open about his atheism). Want a recent criticism of Dennett that doesn’t rely at all on any supernaturalism? Look no further: The Illusionist

The way I see it, there could only be one redeeming feature of Behave’s underlying thesis: that it worked, and all the hullabaloo about Latin-named regions of the brain and chemical compounds and genes and evolutionary just-so stories (being a bit harsh here, I know) served the purpose of actually being able to predict how people… well, behaved. But alas! Per the author’s own admission, it’s not yet to be:

…my stance has a major problem, one that causes Morse to conclude that the contributions of neuroscience to the legal system “are modest at best, and neuroscience poses no genuine, radical challenge to concepts of personhood, responsibility and competence.” The problem can be summarized in a hypothetical exchange:

Prosecutor: So, professor, you’ve told us about the extensive damage that the defendant sustained to his frontal cortex when he was a child. Has every person who has sustained such damage become a multiple murderer, like the defendant?

Neuroscientist: No

Prosecutor: Has every such person at least engaged in some sort of serious criminal behavior?

Neuroscientist: No

Prosecutor: Can brain science explain why the same amount of damage produced murderous behavior in the defendant?

Neuroscientist: No

The problem is that, even amid all these biological insights that allow us to be snitty about those silly homunculi, we still can’t predict much about behavior. Perhaps at the statistical level of groups, but not when it comes to individuals.

Indeed. As he titles the following subchapter, the whole discipline “explains a lot, but predicts little”. If you restrict explanation to certain very limited terms, I would add (because if not, it doesn’t even explain that much to begin with). Which is mighty honest of the author, and deserves great kudos for its humility and realism. Humility and realism somewhat sullied when he explains that such lack of predictive power is caused by the relative youth of the discipline (supported by very tricky and fishy graphs showing how many articles containing a number of trendy terms have been published in recent years, compared with how many were published a century ago, when there were all of five medical journals on the whole planet Earth), and that if we wait a bit neuroscientists will tease out all the contributing causes and finally start making brilliant, honest-to-God predictions that will hold up to scrutiny and allow for the vaunted overhaul of the legal system (predictably in the direction foreshadowed in Minority Report, when we will detain and even execute potential criminals before they can commit their evil deeds). Sorry, but that reminds me of the baseless hope in some mysterious hidden variables that will end up allowing us to predict everything. Didn’t work in physics, won’t work in psychology.
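Since “perhaps at the statistical level of groups, but not when it comes to individuals” is the crux of the prosecutor’s victory, here is a minimal sketch (my own toy simulation, with entirely made-up numbers, not anything from Behave or from Morse) of how a risk factor can produce a robust group-level difference while remaining nearly useless for predicting any given individual:

```python
# Hypothetical numbers: assume a rare outcome whose base rate is
# tripled by some "brain damage" risk factor (1% -> 3%). Both values
# are invented for illustration only.
import random

random.seed(42)
N = 100_000
P_VIOLENT = {False: 0.01, True: 0.03}

# For each simulated person: (has the risk factor?, violent outcome?)
people = [(damaged, random.random() < P_VIOLENT[damaged])
          for damaged in (random.random() < 0.5 for _ in range(N))]

for group in (False, True):
    outcomes = [v for d, v in people if d == group]
    print(f"damaged={group}: rate = {sum(outcomes)/len(outcomes):.3%}")

# Group level: a threefold difference in rates, trivially detectable.
# Individual level: flagging every damaged subject as violent is
# still wrong ~97% of the time, because the base rate is so low.
hits = sum(v for d, v in people if d)
total = sum(1 for d, v in people if d)
print(f"precision of 'damaged => violent': {hits/total:.1%}")
```

A threefold difference in group rates coexists with near-useless individual prediction; that, in a nutshell, is why the hypothetical prosecutor wins the exchange.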

But if you are not all that demanding about what you consider scientific (remembering Rutherford’s quip, most of the curios within Behave are more akin to stamp classification and collection than to physics, or whatever your model of hard science is), or are happy to roll with good ol’ Collingwood and accept as scientific any discipline that searches systematically for the answers to certain questions, and is willing to share the data and methods used to arrive at such answers (even if those data and methods end up demonstrating the opposite of what they were marshaled to prove), then I cannot recommend this book highly enough. It is comprehensive, witty, not very well structured but full of silly experiments that will amuse and entertain you, and will give you an edge in otherwise dreary conversations (you can follow “the latest science conclusively proves that humans…” with whatever foolish conceit you wish, and you can probably find some supporting “evidence” within its pages). Best of all (or not, depending on what makes you tick), don’t expect it to question your previous assumptions about what moves us to act or how the mind works (as opposed to the brain), as it is woefully short in those departments…

Tuesday, October 3, 2017

My take on Charlottesville (When the Nazis come marching in)

One of my most controversial predictions is that North Americans are sleepwalking towards a Second Civil War, in which the growing estrangement of the increasingly polarized halves of their electorate will have to act out the enormous reservoirs of animosity and spite they have been building towards each other for the last decades. The recent events in Charlottesville, Virginia, in which rightists and leftists clashed in the streets, causing enough turmoil to force Governor Terry McAuliffe to declare a state of emergency, and in which a counterdemonstrator (32-year-old Heather Heyer) was killed, would seem to be a validation of the direness of my predictions, and a clear harbinger of more virulent clashes to come. However, as is usually the case, there is more to the picture than meets the eye, and a sober assessment of the events makes me more cautious, and even a bit more optimistic. Let’s see why.

1-    What happened (are the streets burning yet?)

First let’s unpack what actually went on: the Virginia locality (population: 48,210 as of the 2010 census, and home to the University of Virginia) had announced its intention to remove a statue of Confederate general Robert E. Lee from the eponymous park (previously renamed Emancipation Park, which already tells you a lot about Charlottesville’s municipal government).


A loose network of rightist groups opposed to the measure plans a rally on Saturday, Aug 12th under the banner “Unite the Right”. The rally is duly notified to the authorities and nominally permitted. The previous day (Fri, Aug 11th) Governor McAuliffe announces via Twitter that, although the demonstrators are protected by their constitutional right to express their views, he finds such views “abhorrent” and encourages Virginians of all persuasions to stay away from the march.

On the night of the 11th, hundreds of demonstrators march through the campus of the University of Virginia carrying torches (as seen in countless photos, most seem to be oddly out-of-place tiki torches that would be more fit for lighting a barbecue on a patio in the ‘burbs than for wielding in an exhibition of white power or whatnot…) and chanting niceties like “White lives matter”, “you will not replace us” (most journalists transcribe it as “Jews will not replace us”, as it has more sinister overtones and surely sells more papers) and “blood and soil”. Along with the torch-carrying and the white-supremacist slogans, a number of marchers can be seen unequivocally extending their arms in what can undoubtedly be construed as the traditional Nazi salute. The march is widely and luridly broadcast by all major media organizations in the country.

On the morning of Saturday, Aug 12th, demonstrators start gathering around Emancipation Park, both for the planned march and to protest against it. A significant number on both sides are armed with visible weaponry and paramilitary gear that would be unimaginable in any other country but, Virginia being a permissive open-carry state, we can assume nothing was out of bounds for the USA. To the surprise of exactly nobody, given the level of publicity achieved by the rally, as many counterdemonstrators as potential demonstrators can be seen around the park, and there are a number of clashes between the two (check the NYT account: People pretending to fight - badly ). Around 11:00 AM Governor McAuliffe declares a state of emergency, revoking the authorization for the right’s march and ordering the attendees of the rally to abandon their location and disperse. It has to be noted that such a “dispersal” would force them to march through the throngs of counterdemonstrators gathered around the park, multiplying the chances of fights, clashes and brawls (check the alternative account of an avowedly alt-right attendee: What Loretta saw at C-ville ).

Clashes and widespread violence continue (but, amazingly, given the stupendous number of guns seen all around in every video and photo, no shootouts are reported… of the somewhere north of 30 wounded, most are from punches and blows with blunt objects, plus some pepper-sprayings), reaching their high point (regarding lethality) when a Dodge Challenger driven by one James Alex Fields Jr. slams into a group of counterprotesters, killing the aforementioned Ms. Heyer. Later in the day, the crash of a police helicopter monitoring the day’s developments would add two officers to the total body count.

To top off the division and shock of the nation, the President famously talked to reporters from his golf resort in Bedminster, NJ, condemning the violence, which he blamed on “many sides”, causing not only liberals and progressives, but also members of his (purportedly) own party, like Speaker Paul Ryan, to denounce him for “putting on the same moral plane” the “Nazis and anti-Nazis”. Not one to back off or publicly retract any proffered opinion, Trump would later say that a lot of “very fine people” attended the alt-right rally, and would insist on equally apportioning blame to “both sides” in a speech to reporters at Trump Tower the following Tuesday (Orange one's response ).

2-    How the media reacted (burning? Man, they are exploding! Crumbling! Sizzling!)

Before I offer my interpretation of the facts I’ve just described, I think it’s worthwhile to reflect on how the media have portrayed them, from different points of the political spectrum. We have grown accustomed to the very post-modern idea of there not being a “true truth”, but just different discourses, or narratives, weaving a hermeneutical network of signifiers that denote no precise signified at all. What that somewhat obscure assertion means is that for the few who still read news from outlets with different political alignments it is pretty common to find entirely diverging descriptions of a single event, to the point of making it difficult to identify such descriptions as applying to the same underlying facts. In the case under consideration, the differences have been predictably magnified: for the mainstream media it has been a national disgrace, the symptom of a seriously corroded and corrupted social compact that not only allows, but apparently encourages, normal, seemingly well-adjusted young men (wearing that ultimate symbol of successful integration into middle-class status: khakis and polo shirts! That is no way to dress for the fascist takeover of the state, sir! What self-respecting revolutionary would exchange his jackboots and brown shirt for such bland attire?) to publicly and unashamedly proclaim their Nazi sympathies and their scorn for every ages-old convention of what is acceptable and proper in a democracy. Nazi salutes? Check. Open proclamation of racial slurs? Check. Embrace not just of a somewhat morally tainted past (the Confederacy and the Old South) but of beyond-the-pale fringe elements (like the Ku Klux Klan, David Duke and even ol’ Adolf himself)? Double check.

Unsurprisingly, both the MSM and left-leaning circles have been having fits of apoplexy, denouncing the whole thing as furiously and unambiguously as possible, while at the same time pointing to the (at best equivocal) reaction of the White House as indisputable proof of the racism and unabashed association with white supremacism of not just the President and his inner circle, but the whole Republican establishment (the most commented piece along those lines is surely the one penned by Ta-Nehisi Coates in The Atlantic: The first white president (how much does the USA suck?) ). For them the right in general is racist, no exceptions allowed. And with fascist tendencies all along, so it is no surprise they resort to threats, violence and finally murder. The sad outcome of the many clashes in Charlottesville is not an isolated incident carried out by a mentally unstable individual, but the unavoidable consequence of a noxious ideology that, left unchecked, will cause many more eruptions like this one, and many more deaths (hence, combatting it by any means is the only rational and commendable action).

Fox News, the right-leaning talk-radio hosts (Limbaugh, Hannity, Savage, etc.), the Murdoch press and the abundant orthosphere, NeoReaction and alt-right sites on the Interwebz see the whole story in a very different light. Their sympathies were from the beginning clearly with the initial demonstrators, not just because they fully endorse the country’s racist past (which they more or less unabashedly do, and thus also oppose the removal of any statue of Confederate heroes, undeterred by the -in their eyes- minor detail that those heroes led a rebel army against a constitutionally legitimate government for the sacred cause of being able to keep humans of a different skin color enslaved), but because in general “uniting the right” is something they can all rally behind (sharing, all of them, a sense of dread and disgust towards what they see as an almost unstoppable tide of progressivism and leftism that constitutes an existential threat to everything they hold dear and consider sacred). For all those media, the counterdemonstrators were an unholy and ragtag alliance of everything that is wrong with America today: feminists (“feminazis”), BLM sympathizers (“race traitors”), LGBT advocates (“faggots and butches”) and in general progressives and liberals. Instead of “proud boys” impeccably white and well groomed, marching in their khakis and polo shirts (oddly complemented by a peculiar assortment of shields, knee pads and helmets), a bunch of blacks, short-haired girls and old hippies with questionable fashion sense carrying bullhorns and placards that seemed plucked from some outdated documentary about the racial protests of the 70’s (but let’s not forget the mobile phones, which were mercifully absent back then… one can only wonder about the volume of uploads to Instagram, FB and the like from demonstrators of every stripe preening about their exploits, in a new, social-media-age version of the old “radical chic”).

Few have claimed that the victim among the counterdemonstrators somehow deserved it, or “had it coming”, but the narrative they weave leaves little doubt this is how they see it. For the right-wing media the whole episode is a further illustration of the inability of the current state (seized by liberals and traitors) to protect decent citizens, from the declaration of the state of emergency (which only served to further curtail the constitutional right to free expression of the always-silenced part of the social body that does not share the left’s worldview) to the failure to protect the people gathered in Emancipation Park from the taunts and aggressions of the dangerous “Antifa” mob. Never mind that the only actual casualty was in the ranks of the supposedly aggressive, dangerous and deranged anti-American extremists who went to disrupt a perfectly peaceful and tranquil event. Again, it was all the fault of the “Cathedral”, in this case personified by the Democratic Mayor, the Democratic Governor, the mob of dangerous radicals bent on violence and mayhem grouped under the label “Antifa” and, of course, the devious mainstream media that distorted and manipulated the emotions of some young man until he ended up committing a crime.

3-    So, all of this validates the narrative of “civil war tomorrow”, right?

Er, actually wrong. Always the contrarian, I see more positive than negative aspects to take into consideration after the events in Charlottesville. And remember, I could have construed them as a validation of my predictions of a quick descent of the American polity into fractiousness and conflict (Guys, you're screwed ). But that’s not how I read them. For one, I won’t claim to be the greatest street brawler and bruiser of all time, but I’ve been involved in my share of fights (most of them had unwise amounts of alcohol involved, so take my account with a pinch of salt) and I was surprised, watching the many videos of the “violence”, by how… “performative” it looked, and how little actual rage or aggression it showed. The few punches that are exchanged in front of the cameras (whose presence may be a distorting factor or, the other way round, the catalyst for all the action) resemble more a limp attempt to swat a mosquito than an actual intent to cause maximum damage whilst minimizing the puncher’s exposure.

We humans are a social species, and as every military instructor will tell you, getting normal people ready to shoot at their fellow human beings (even at considerable distance, where the feeling of common humanity can be more easily overcome) requires quite extensive reprogramming. When I was younger (actually, much, much younger) I knew my share of seedy neighborhood gyms, each of which had its crew of testosterone-addled asocial troublemakers (and yes, a disproportionate percentage of them were already “extreme right”, and trained to join the armed forces, the police or any self-styled anticommunist crusade in not-so-distant-Francoist Spain, their fathers or grandfathers having typically fought side by side with actual, honest-to-God German Nazis and Italian Fascists, the real thing and not the imagined bugbear so easily peddled in leftist fantasies). Even the most apparently psychotic among them had some difficulty overcoming the innate human revulsion towards doing harm and seeing other people suffer (although in some cases, it has to be said, they became quite good at such overcoming). I’ve seen how those guys hit, and that’s very different from what the footage from the NYT, CBS, ABC, WaPo, CNN or Fox shows. What that footage (much of it seems to be the same limited number of events shot from different points of view) depicts is a certain, limited number of posers running in front of the camera to have a go at throwing a (typically ineffective) jab at the least imposing element of the opposite tribe, and then retreating precipitously back to safety among their own numbers, having accomplished their main goal, which we can surmise was never to gain territorial control of the contested streets, but to snatch a nice graphical testimony to hang on their Snapchat or Instagram account.

As I was not present in the city during the events, I cannot say for sure that all the “street violence” so luridly reported by alarmed journalists was of this theatrical nature. Obviously the guy who rammed his car into the crowd, killing one person and wounding multiple others, was not “just doing it to look badass on Instagram”, and caused real, grievous and irreversible damage. Additional people were hurt to the point of requiring medical attention (19 in the car attack and 14 in other incidents), but if what the newsreels show is any representative indication, I think Charlottesville was a hybrid between a theater and a giant playground for not-fully-grown-up kids, where self-styled radicals from left and right enacted their fantasies of being badass, rebellious and violently (and valiantly) opposed to the unacceptable ideas of the other side.


Please note that with this first interpretation of the events I’m not trying to establish any moral equivalence between both sides, or claiming that white supremacism, racism and even ol’ Nazism are somehow OK (or not; I really don’t buy pieties or second-hand opinions from any peddler of political correctness, and my opinions about such issues are really my own and not to be discussed here). I hope we can all agree the “Unite the Right” organizers had considered that their little show turning violent was a real possibility (heck, if not, why come with all the defensive gear, the shields, the helmets and, especially, the “security details” of the most prominent figures?), and that organizing a public event knowing it will turn violent (and thus, assuming people will be hurt) is at best irresponsible, and at worst outright evil. Yep, I know oppressed minorities in repressive states may be excused for resorting to violence when no other way of redressing their grievances is open to them, and for many people in the USA’s alt-right theirs is precisely that kind of state. We’ll get back to that contention in a moment…

But similarly irresponsible/evil is attending such an event to take part in that same violence from the other side, regardless of how virtuous your ideas are. The traditional distinction between “defensive” and “offensive” violence does not apply, as we are not dealing with people who were going about their daily lives and suddenly found themselves confronted by a bunch of aggressive fascists threatening them, but with groups of activists who travelled to the scheduled demonstration location to harass and confront the demonstrators for expressing their ideas, with the justification that such ideas are obnoxious and morally indefensible (again, I’m not yet declaring whether I concur with such a characterization). It is the embrace of violent means which constitutes a) the essence of totalitarian regimes (which define themselves by abandoning the public, pacific discussion of policy alternatives as the main way of consensus formation, resorting instead to its unilateral imposition by whatever means -that’s where violence comes in) and b) the most salient and morally repugnant of their features (an imaginary “benevolent dictatorship” that never inflicted any pain on any of its subjects would be much less evil than one which systematically did -see “enlightened despotism” as proof). Am I saying with this that Trump was right, and we should condemn violence “from all sides”? Does such a generic condemnation mean that we indeed consider both sides “morally equivalent”? (Let’s call them, for greater clarity, fascists and anti-fascists, or white supremacists and anti-white supremacists -or would it be more accurate to call the latter “white subserviencists”?) Not to put too fine a point on it: yes and no. I do indeed oppose (and condemn) every kind of violence, regardless of how honorable the cause it defends, or how ignoble and vile the cause it attacks. In cases of terrible oppression, when every other means of redress is closed, it could be justified to resort to causing pain (including to innocent people), always in a most limited, most circumspect manner, but those cases are few and far between, and certainly none of them obtains in modern-day America (or in modern-day Europe).

Which is not to say that, once violence is unleashed, every participant is equally to blame. Those who start it (those who hit first) are normally more to blame than those who respond to it. Those who lose their temper and escalate it (and respond to a taunt with a punch, or to a punch with a shot to the head) are more to blame than those who keep their cool and show some restraint, trying to keep it proportional and not inflict more pain than they themselves may have suffered. And yes, those who engage in it to advance a “respectable” cause (for a Kantian like me the test of respectability is pretty straightforward: those who act according to a maxim they can universalize, so they would like to see it become a law of nature or, alternatively, those who treat other people as ends in themselves and never merely as means) are less to blame than those who defend dubious, non-universalizable, particularistic causes. Only according to the first two criteria are the white supremacists who intended to march in Charlottesville morally equivalent to the counterdemonstrators who tried to stop them; according to the third, their cause, being associated with racial segregation, a celebration of slavery and sedition (which entails a violation of the rule of law), and thus strictly non-universalizable, makes them clearly inferior to those who showed up to oppose them.

Now that that is out of the way, let’s go back to why I find that the sad and tragic events that unfolded on the 12th of August still contain a reason to rejoice: essentially, what they showed is that the US of A is much farther from a civil war than I feared, as the most vocal proponents of the dehumanization of half the country that is required for such a confrontation to take place are a really tiny minority, unable to inflame the passions of a sizeable number of their countrymen (as of today still very, very far from reaching the critical mass needed to have any significant impact on the political, let alone military, balance of forces in the country) and willing to fold when confronted with the possibility of a real fight. During the presidential campaign I tended to disagree with many analysts on the left who dismissed the perils of a Trump victory by saying that the number of his followers who bought into white-supremacist fantasies was very minor, on the order of a few thousand, but now I think they were spot on, given how easy it was for a bunch of ragtag organizations to outnumber them on very short notice. Breitbart may claim some hundreds of thousands of readers, and the Daily Stormer (now disappeared from the “public” internet) some tens of thousands, but we’ve seen that lurking in unsavory virtual places while safely seated in your parents’ basement is one thing, and hauling your ass to a demonstration with fellow extremists where said ass can be repeatedly kicked is a very different one. A lot of people seem to have signed up for the first, but precious few for the second.

And the media on the right have noticed, as the diagnosis I’ve met most frequently is that the “Unite the Right” rally was an unmitigated disaster, brought tons of bad publicity and has probably set the movement back a few years, if not decades. A lot of people, even those of a most conservative persuasion, still balk at being identified as Nazis, or at being associated with the Ku Klux Klan. If I were a cynic I would show some surprise at the apparent inconsistency of endorsing blatant discrimination against certain ethnic/cultural groups (browns and blacks) while being uneasy about association with those who demonize others (Jews), as that’s where the line seems to be drawn. Sorry, but I fail to see how a Nazi is so much worse (and thus so amazingly more evil) than a super-nationalist, jingoistic hick who wants to send “people not like him” (because he considers them inferior and not fully human) back to their countries of origin, just because the former includes in the “not like him” category some people externally indistinguishable from himself (Jews) and the latter does not. And just to be clear and avoid mistakes, it is not because I sympathize with one more than the other: for the record, I consider both equally unacceptable and indefensible. However, a number of alt-right bloggers and neoReactionary thinkers seem to be happily aligned with the super-nationalist jingoists while rejecting the label of full-throated Nazis (see Mencius Moldbug, for obvious reasons).

But enough cynicism already; back to the uplifting consequences of proto-fascist thugs being routed in the streets: we can expect much less visibility from them, and that is not a bad thing. We will still see similar levels of rancor and spite and foaming at the mouth between progressives and conservatives, and we will see one or the other condone ugly behavior (both in words and deeds) as long as it is exhibited by someone from their tribe and causes harm to someone from the opposite one, but such ugly behavior will return to the electronic realm: the usual trolling, badmouthing, toxic name-calling and occasional banding together to be overheard in front of niche audiences (sad puppies), but no street fighting (and hopefully, no ramming cars into the enemy’s ranks). So cheer up, Americans! It seems like your simmering Second Civil War will remain a virtual conflict for a few years still to come. Whether and when it becomes real (not that original an idea; see American War by Omar El Akkad -oh, I forgot, you barely read, much less a book by an Arab-sounding author) is still up to you to decide.

Friday, September 8, 2017

The days of commercial TV are numbered

Only nobody knows what the number of remaining days is, or even whether it is very high (say, we still have 100,000 days of commercial, open television left -that would be 274 years, far longer than the time it's already been around, since the first broadcasts in the 50's of the last century). That's the problem with prognostication in the social realm: nobody has much of a clue about how technology will evolve (as Popper famously quipped in The Poverty of Historicism, if we knew exactly what would be invented in the future, we might as well invent it right away!) and even less about how such technologies will interact with the underlying social forces to shape the development of the collectives that embrace them.

In a certain sense, then, the title of today's post is a bit misleading (no surprise; I didn't exactly invent clickbait, and as author of one of the world's least read blogs I wouldn't readily confess to indulging much in the practice), but as usual, in a tortuous and circuitous way I still believe it may help illuminate some tendencies in our world that are worth paying attention to.

What put me on the track of this line of thought this time was a comment by my elder son, to the effect that neither he nor any of his friends or acquaintances watched live TV any more (they just downloaded or streamed the shows they were interested in… admittedly, my son's friends are the nerdy type that don't have much use for sports broadcasts, where there is indeed a premium on immediacy). It made me remember an internal report, back in my consultant days, announcing the imminent demise of TV as we knew it because of the arrival of a gadget that would revolutionize the way people consumed audiovisual entertainment: the TiVo box (for those who are not familiar with the contraption: essentially a digital VCR with a friendlier user interface that basically any moron could use, which would allow people to decouple the viewing experience from the time the originating network chose to broadcast it and, more importantly, would allow viewers to skip the advertising, thus depriving the content producers of revenue in the long run… guess what? Sending users bundles of just ads, with no annoying programs interrupting them, ended up being a very popular feature of the service).

That was in the second half of the 90's, so with 20 years of hindsight we may agree that the announcement of the imminent demise was a tad premature. The same may be said, in another 20 years' time, of my son's impression, but now as then it got me thinking about the money flow that keeps TV going and how such flow may be diverted or weakened, with potentially huge social implications. Because the conventional wisdom states that TV is the most powerful instrument of social control ever devised. In my own terms, the great conveyor belt for infusing each society's dominant reason into the unsuspecting brains of its citizens. We could have twenty philosophers as brilliant as Kant plus Aristotle plus Stuart Mill (to reflect each major moral tradition's sensibility) publishing their most persuasive works tomorrow and, frankly, nobody would so much as yawn if they couldn't advertise them on the telly (not by themselves, mind you, unless they were outrageously good-looking and able to spice their communication with some raunchy personal stories; such are the nature and servitudes of the medium). Indeed, statistically, it is almost certain that we have striding the Earth among us, at this very moment, some thinkers of comparable stature (if only because the number of people pursuing speculative thought full time in the countless faculties of the modern world is much, much bigger than the total number of people who were able to pursue such endeavors during the whole previous history of our species), without anybody noticing, or being able to take any advantage of it (the only explanation for such a glaring loss is that such geniuses are most likely either old, ugly, boring or all of the above, so there is no way they could draw an audience primed to value flashier attributes).
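For the skeptics of that "almost certain", here is a minimal back-of-envelope sketch. Every number in it is an assumption of mine, invented purely for illustration, and you can make them far less favorable to my point without changing the conclusion:

```python
# Fermi estimate: full-time speculative thinkers alive today vs. the
# accumulated historical total. All figures are assumptions, not data.

academics_today = 50_000      # assumed philosophers/theorists employed full time worldwide
years_of_history = 2_500      # roughly since the pre-Socratics
thinkers_per_year = 200       # assumed historical average enjoying leisure or patronage
career_length_years = 40      # assumed working life, to avoid double counting

historical_total = years_of_history * thinkers_per_year / career_length_years
print(f"Accumulated historical total: ~{historical_total:,.0f} thinkers")
print(f"Alive and working today:      ~{academics_today:,} thinkers")

# Even under these (deliberately generous to the past) assumptions, today's
# cohort alone outnumbers everyone who came before, so a Kant-grade mind
# walking unnoticed among us is statistically quite plausible.
```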

Notice that I said "the conventional wisdom", so it is appropriate to consider whether, in this case as in so many others, such wisdom may indeed be woefully wrong. Before we can answer that question, let us get back for a moment to how such a powerful institution, supposedly in charge of shaping much of the public sensibility, of people's conception of what is ugly and what is nice, what is right and what is wrong, the moral educator of the masses, could sustain itself. Which takes us right into the murky realm of advertising: it is commonly said that TV "sold" entertainment to the audience, a cheap, easy way to pass their time that proved to be almost unbeatable (want proof? The average human being spends almost two hours a day watching TV, which means that once you subtract the time for earning a living, commuting, doing the household chores and sleeping, that's essentially all they do). Bollocks, as until relatively recently almost nobody "paid" for being entertained (we'll get to how that is changing in a moment). TV spread like wildfire because it actually sold something much more valuable: its audience's attention, offered to manufacturers who could use that attention to convince viewers that their products were superior to those of the competition (it didn't matter at all whether such claims were true or false).

Seen from outside, it is a pretty silly proposition: pay me to insert whatever flashy message you want (whose production you have to fund separately, of course) into my regular programming, so you can use such message to convince "my" audience of the superiority of your product. If such a scheme works, you will be able to sell more units, and charge a higher price, which would supposedly more than cover the costs of producing the message and paying me for showing it to as many people as possible. But what if your competitors adopt the same strategy? The whole market may end up at a higher price level, with consumers funding both the Coca-Cola Company and PepsiCo through paying more for their sodas so they can "enjoy" having their favorite shows interrupted by a barrage of ads from both, extolling the supposedly greater virtues of their wares over those of the other. Which is a decidedly inferior equilibrium to the one in which there is no advertising, the sodas are cheaper, and nobody's show is interrupted with flashy ads that are known to have only the most tenuous relationship with truth.
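For the game-theoretically inclined, this trap is just the classic prisoner's dilemma. A minimal sketch, with payoff numbers entirely invented for illustration, shows why each firm advertises anyway even though both would be better off abstaining:

```python
# Toy payoff matrix for the advertising arms race described above.
# All numbers are invented for illustration: profits (arbitrary units)
# for soda maker A and soda maker B depending on who buys TV ads.
# Ads shift market share but cost money; when both advertise, the
# share shifts cancel out while the ad costs remain.

PAYOFFS = {
    # (A advertises, B advertises): (profit of A, profit of B)
    (False, False): (10, 10),  # no ads: cheap sodas, full margins
    (True, False): (14, 4),    # A steals share from B, net of ad costs
    (False, True): (4, 14),    # the symmetric case
    (True, True): (7, 7),      # share shifts cancel, both still pay for ads
}

def best_response(opponent_advertises: bool, player: int) -> bool:
    """Return the action (advertise or not) that maximizes this player's
    payoff, holding the opponent's choice fixed."""
    def payoff(action: bool) -> int:
        profile = (action, opponent_advertises) if player == 0 else (opponent_advertises, action)
        return PAYOFFS[profile][player]
    return max((False, True), key=payoff)

# Whatever B does, A's best response is to advertise (and symmetrically for B)...
for b_advertises in (False, True):
    print(f"If B advertises={b_advertises}, A's best response: advertise={best_response(b_advertises, 0)}")

# ...so (advertise, advertise) is the only equilibrium, even though both
# firms (and the viewers) would prefer the no-advertising world:
print("Equilibrium payoffs (both advertise):", PAYOFFS[(True, True)])
print("Cooperative payoffs (nobody advertises):", PAYOFFS[(False, False)])
```

Advertising is the dominant strategy for each firm taken in isolation, which is precisely why the collectively inferior outcome is the one the market settles on.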

When you add the fact that a significant portion of the advertising that has historically kept broadcasting afloat is not just a zero-sum game, but has actively promoted deleterious products and practices (just to name a few: tobacco, alcohol, and sugary drinks, which collectively account for a staggering number of premature deaths in the advanced economies in the last decades), you really have to wonder how it is possible that we collectively allowed such insanity to proceed gingerly apace, and how it is possible that any attempt at changing it (leveraged by a raft of technological "game-changers" that always change much less than expected, beyond the financial situation of some of their promoters, that is) tends to meet with at most very limited success. I think the explanation comes from an unholy intersection of an unfortunate feature of human nature with the current mode of development of our overarching social structure (for lack of a better word I'll stick to the term "capitalism", which I've tried unsuccessfully to qualify as "digital", "post-industrial", "advanced" or even "desiderative", without ever settling on any as being clearer or more illuminating than the rest). Such intersection, when analyzed, makes me fear that open television, funded by advertising and uncritically watched by enormous audiences, will still be with us for many years to come.

Starting with human nature, and without having to go all Maslow-y here, people do indeed need to be entertained (as easily and effortlessly as possible), and after their other basic needs have been satisfied, something to fill the endless hours gets a pretty high priority on their scale. The fact that a complete history of leisure has yet to be written is revealing, as my hunch is that for most of humanity's existence there simply was no such thing. People mostly herded cattle and cultivated the fields, or collected wild berries and tubers, sought suitable partners, wooed them, raised a family and took care of it until they croaked. A tiny few were lucky enough not to have to devote 100% of their time to that, and demanded to be entertained by others instead, but they were such a minuscule proportion as to be entirely irrelevant to the ordering of society, although the tales of their exploits have survived the passing of time much better than the relatively traceless remnants of the majority, and thus they occupy a fraction of our image of past ages much greater than is warranted. Only after the agrarian revolution that preceded the Industrial Revolution in Europe did our species face the prospect of what we may term "mass leisure", that is, the existence of significant numbers of modestly well-off citizens with time on their hands they could devote to pretty much whatever they wanted, their basic necessities (food, clothing, a suitable dwelling and child-rearing) having been taken care of.

Interestingly, we start to have a better grasp of how people passed their recently (in historical terms) acquired "free" time thanks to the concomitant appearance of the modern novel, a form of narration in which common folk are suddenly deemed worthy of attention, and in which the inner lives of said folk are minutely traced (or may we say invented? But let us not get too post-modern here) by the authors, which requires presenting how they spend their days. And what we grasp from those novels is that women did manual labor (mainly sewing, well past the point when, thanks to mass retailing, it stopped making economic sense), lorded over a dwindling supply of home servants and paid visits to one another (visits that, when received, demanded stupendous efforts from said servants to keep the house in pristine condition and prepare the complicated foodstuffs that back then played the part of an expensive car and a wall-to-wall plasma TV). Men kept themselves busy feigning to work at almost all times (not that different from today), having a drink at public houses (which, then as now, didn't require the patrons to speak more than a few words if so inclined) and, when confined to the house due to bad weather, playing cards and reading the newspaper. They might even, in some scarce cases, read some works of speculative thought. It was also assumed that they needed an outlet for their "bodily passions" (which no sane and healthy man could entirely satisfy with his wife), requiring either the maintenance of a full-time mistress or frequent visits to houses of ill repute, which would still consume a good deal of their free time.

Problem was, due to automation, feigning to work more than 80 hours a week was starting to get difficult (we are talking about the beginning of the XX century here, folks, so do not think of humanoid robots taking the dentists' jobs yet). The temperance movement made spending many hours a day in the public houses (and the houses of ill repute) less and less admissible, and there is only so long a man can play cards without wanting to assassinate his playing partners for a change. So, in a possible instance of the old Marxian dictum that society only sets itself problems it can solve, in came the radio and the birth of mass "culture", soon to be followed by TV.

Rich societies adopted both radio and TV at a speed that had no parallel in human history (unless you count the internal-combustion automobile, which in the less densely populated USA went in a decade from being owned by 0.1% of families to being owned by 99% of them, considerably faster than mobile telephony or the internet, regardless of what today's techno-utopians like to claim). And this is where the second dimension I mentioned before comes in handy (the first dimension, remember, was the brute fact of the human need to be entertained and helped to pass the time once basic necessities have been satisfied): some social arrangements are more vulnerable to the threat of massive boredom than others. If a group has agreed that the ultimate goal of life is to excel among your peers in some public pursuit (be it poetry, philosophizing or charioteering -that would be my current understanding of the keystone of dominant reason in classical Greece, btw), it provides enough incentives for every one of its members to cultivate certain (socially valued) abilities that will be meaningful enough for them to devote most if not all of their "free" time to. If a society mostly agrees that the only valid end of human life is to prepare for the afterlife according to the precepts laid out by a barely literate people more than twenty centuries ago (as most of Europe did between the fall of Rome and the demise of baroque reason in the XVIII century), again, its members will find better uses for their time (praying, fasting, renouncing, or whatever activity makes the attainment of such afterlife more likely) than to sit idly watching images roll in front of them (especially if they can indirectly decide what kind of images they are presented with, and witness how the collective settles on the ones celebrating lust, sloth, greed and wrath, which those same precepts condemn as the most despicable vices).

But if a collective has settled on our current desiderative reason, and considers that a) the ultimate end of life is to satisfy desire (or alternatively, to experience the maximum amount of pleasure over pain in a lifespan); b) every desire is but the expression of a single desire: to show others that you are better considered socially than they are; and c) the only meaningful, socially sanctioned marker of social consideration is having unfettered access to tons and tons of goodies, then once you have exhausted your means of ensuring your access to said goodies (i.e. once you have worked your butt off to have as much money as possible given your circumstances) there isn't much you can do with the rest of your waking time. You might just eliminate such rest, and devote absolutely 100% of your energies, your every waking hour, to work and the undistracted pursuit of more money, but even our ultra-materialistic, ultra-consumerist society seems to have learned that there are limits it is better to impose on individuals (as we already did with the pursuit of other pleasures, from drinking -and in general using any drug- to boning, which beyond certain limits are loudly and universally frowned upon). Limits that in some places (Silicon Valley, where a 9-to-5 work schedule is considered to reveal "loser" status, or the financial community) are crumbling and actively contested, but globally our society still thinks that people should do "something" apart from work.

How could it be otherwise? If people only worked, who would consume the fruits of such super-intensive labor, and when? And if such fruits (abstract and intangible as they may be) are not to be consumed, or fought for, what is the point in producing them in the first place? Note that the need for consumption commensurate with production is a necessary corollary of the final success of our current dominant reason: once there are no alternative societies with which to vie for supremacy, the system must either find a super-hyped-up, super-evil "other" to justify the growing sacrifices demanded of its population (even more so in a scenario where people are pushed to 100% production, with almost no time to enjoy the goods produced, which would be mostly diverted to the military-industrial complex) or attain a steady-state equilibrium (acceptance of a lack of growth, a lack of opportunities for professional advancement for the ambitious young, and likely a lack of technological advances and productivity growth due to reduced incentives for them).

Which is really the world we are living in, with both developmental paths open and in an uneasy balance, as we collectively seem unsure about which one to pursue, so we waver between the two. Some societies seem more committed to the military-industrial path, casting around in search of an external enemy strong enough to justify the maintenance of a "weaponized Keynesianism" that can keep the system in a state of perpetual growth to prepare for an ever-growing menace (surely the USA is the country in which such a tendency is most marked, and where it has worked best -the other countries where you can see it in play are places like Cuba, Venezuela, North Korea and Iran… and China may be the other moderately successful economy toying with the idea of following the nationalistic-militaristic path). Others seem more resigned to the steady state and the accompanying stagnation and likely deflation (Japan and the EU are the poster boys, with much of South East Asia soon to follow). What the little technological development that remains ensures is that, given the current levels of productivity, there won't be much work to go around regardless of the path chosen. Just as there isn't now, hence the prevalence of bullshit jobs, make-work, high unemployment and low participation levels in the workforce.

What all these societies have in common is the embrace of a dominant reason that precludes any pursuit other than working more to earn more money and thus be recognized as socially superior. Not just one that "undervalues" such pursuits, or "gives less weight" to them. For our current dominant reason any life project outside the crass materialism outlined above is an existential threat, and because of that it is both unintelligible (it cannot be "understood", or compared with the alternative it presents and weighed against it) and dangerous. But let us take stock for a moment of the kind of life projects and interests that have no place in the grand narrative we have collectively settled for, through some examples:

· identifying some transcendent truth and devoting ourselves to it

· choosing a field with a well-recognized set of external criteria of excellence (what MacIntyre called a "practice") and trying to be as good as possible at it, even if it doesn't earn you any money

· caring for other people altruistically (not "reciprocally altruistically", but for the sheer joy of helping flourish and prosper those we deeply care about, regardless of being paid or even recognized)

I'm not saying that the activities derived from those pursuits are somehow superior or nobler or in any sense "better" than the ones our dominant reason would rather direct us to engage in. Aw, what the heck! Of course I'm sayin' that! Because what, at the end of the day, does desiderative reason beseech us to do? Watch friggin' TV so we get more brainwashed into desiring more stuff, which will in turn make us want to work more, or go deeper into debt, but will help keep the whole system spinnin'… Whoa! Some life program, isn't it? But for some time (was that the secret allure of the 60's, so difficult to understand from today's perspective?) it seemed like people were waking up to the emptiness of a value system that wasn't yet as hegemonic as it is today (what the Marxist critique would call its "internal contradictions"). A system that forbade any "meaningful" activity outside of working for its cancerous, suicidal, perpetual augmentation (regardless of how close it got to running against nature's limits through overpopulation and the exhaustion of non-renewable resources) was at the end of the day a system nobody would want to live in. If the only leisure it could abet was watching a shabby cathode-ray tube showing endless commercials barely interrupted by snippets of shoddy storytelling, it is no surprise people started flirting with "alternative" lifestyles and different value systems that allowed more time to be devoted to activities perceived as more "meaningful", that required a higher level of engagement, that allowed people to find a fulfillment, a contentment, that passively watching the tube could not match.

But, alas! The system reacted… there are multiple narrative threads that may contribute to an explanation of how, after that momentary weakness, the then-dominant reason, instead of evolving and adapting and changing gears, just doubled down and succeeded in becoming hegemonic over the whole globe: with the development of identity politics the powers-that-be played different segments of the populace against one another, with the result that each embraced the overarching value system even more fiercely; the main alternative to dominant reason (embodied in capitalist society) was even more exhausted (it was, after all, communism, the embodiment of bureaucratic reason, an older version of Western values); and, last but not least, entertainment technology simply got much better, and fused itself with the dominant ideology more strongly and more subtly, which leads directly to our own days' flat-screen HD TVs, the internet, mobile telephony, social networks, the golden age of TV shows and, of course, videogames and, any time soon, Virtual Reality.


Which leads me back to the original thread of this post: "commercial TV", understood as TV produced by enormous corporations (the "networks", and most likely the traditional ones, as they are the institutions both well connected to the legislative power and possessed of the knowledge of how content is produced and distributed and of what customers are willing to spend time watching… see the mostly failed efforts of big Telcos to gain a significant foothold in that turf for well over two decades) and distributed for free (although some "carriers" -satellite and fiber- will be able to wring some revenue from taking some premium-quality signal to a substantial number of homes paying for only a fraction of the distribution costs), will still be with us for the foreseeable future. It is just too valuable for the maintenance of the whole civilizational compact to be easily replaced.