Tuesday, January 2, 2018

Optimal training strategies for each of the four main skills of life

God knows I’m no great fan of the self-help genre. Actually, that would be something of an understatement, as not only have I never bought a book under that label, but I tend to actively despise both them and their readers (I’m arrogant and unsympathetic to my distressed fellow human beings’ plight like that), an ugly tendency that leads me to hector anybody I see reading them about the likely evils and shortcomings of their content. Take a look at the almost 200 posts I’ve published over the last years and, although they are mainly concerned with moral philosophy and ethics (well, and weightlifting, which ends up being the most practical part of it), you won’t find much applicable advice on how you, anonymous reader, should conduct your life.

Which makes some sense. I don’t know you and your specific circumstances, so how could I dare to suggest that you do this instead of that? (whatever “this” and “that” may be). However, a few weeks ago I had a conversation with my elder son that produced a semi-decent, reasonably generalizable piece of advice, and it seemed kinda selfish to keep it to myself. So, breaking a lifelong tradition of being circumspect about these issues, I’m gonna share in this post some thoughts that can be immediately put into practice, and that can turn your actual, real, lived life for the better (as opposed to my usual pabulum of a highly abstract nature, which tends to be about as practically useful as an Inuit sealskin jacket in the Sahara).

To begin with, we all want to get better. By definition, we all want what is good, and we intuitively understand what being good in certain areas consists in, although we may differ on the details. I’m going to focus on four areas of goodness in which I think there is wide agreement, both about their overall desirability (that is, almost everybody, in any age and culture, agrees that they constitute a “good”, that having greater mastery in that area is a net positive for living a worthy life) and about the most salient features being desired (what mastering such an area looks like). The four areas are:

·         Relationships

·         Intellectual ability

·         Physical ability

·         Transcendence

You may miss some areas that are important to you, and you may find some in there that you don’t think are that salient right now. Just bear with me, as I’ll be briefly justifying each one in turn. But there is one area, in particular, that is conspicuously absent and that may cause some puzzlement: work. So its absence requires some justification up front. Isn’t work, like, super-important? Especially in our materialistic, anomic, secularized societies, much more important than some other areas in the list (I’m looking at you right now, transcendence)? Don’t people show, in their day-to-day actions, that they value their work more than relationships (all those broken or deprioritized friends and family ties), more than intellectual ability (all those courses not taken, books not read), more than physical ability (all those workouts missed and gym memberships left to expire unused) and, surely, more than transcendence (what was that about, again)?

In a sense, I’m not going to write about work because sure as hell you don’t need another voice telling you how essential it is for a fulfilling life and for developing your full potential. You are already drowning in messages along those lines, and, you know what? They are mostly hokum. I’ve been attending a number of retirement parties lately, some of them to say goodbye to lowly (although much loved) assistants and some to CEOs and chairmen (a more mixed bag, although I’ve been incredibly lucky to work with widely respected professionals at all levels), and the impression I take away is that, at the end of the day, even for those at the very top of the corporate hierarchy, in almost every case the job is not all it was hyped up to be. Even for the captains of industry, the sacred entrepreneurs, the titans who created companies from scratch and turned them into multi-billion-dollar concerns, what remains, what they are remembered for, is their gentleness (or lack thereof), their intelligence (ditto) or, in one surprising case, how insanely in shape he was. And that seems to be the norm: what do people remember of Steve Jobs? That he was a) a tremendous jerk, b) pretty clever and c) lucky (you don’t hear that last one much, but the undercurrent of his biography underlines how he had the right ideas at the right place and time for them to pay off beyond anybody’s wildest dreams). Something similar could be said of Bill Gates, Jack Welch or Lee Iacocca.

Which points to a second aspect of how we perform our (paid) jobs that is also worth noting. Was Jobs (or Welch, or Gates) a good worker (in their case, a good CEO)? It depends on who you ask and, more pointedly, on what moment in time you focus on. Jobs was terrible in his first tenure (he almost ran Apple into the ground) and then stellar in his second (but then again, was it an increased, learned ability, or did he just get lucky?). People close to Gates say he was at best mediocre, but consistently lucky, and consistently good at finding great helpers to compensate for his deficiencies (or lack of interest). Then again, others say the opposite, and maintain he was a second-to-none manager and visionary (a most rare combination). Same with Welch, who oversaw wild fluctuations in the total share value of GE (the metric on which he, and the rest of his sycophants, were fixated). Work performance, simply, is notoriously hard to measure, and there is nothing like a consensus on what excellence as a worker means. A lack of consensus, by the way, that is played out yearly in performance reviews all over the world when, regardless of what “objective” goals and metrics have been set, managers end up rewarding those workers with personalities most like their own, and then try to rationalize such self-preference the best they can.

Furthermore, even if there were a consensus on what a good worker looks like, beyond someone healthy, good at relationships (so it is easy to get along and work with them) and clever (three areas we have already covered), I’m not sure such a set of qualities would be “trainable”, which allows me to finish this justificatory detour with a clarification about the purpose of this post. To that end, I need to explain the difference between “trainable” and “non-trainable” abilities. The paradigm of the first group would be physical strength, and the defining feature of the group is that we have a pretty good understanding of how to increase the abilities within it (applying consistently the principle of progressive overload, as I expounded in this old post: Progressive Overload in a nutshell). A possible paradigm of the second group could be playing rugby (or being a CEO), as their defining feature is that we don’t have a clear grasp on how to improve at them, other than practicing something as close to the real thing as possible. We could try to break such an ability down into its components (in the case of rugby, say, decide that playing well consists in having good positional awareness, “reading” where the ball should go, running fast, passing both far and precisely, tackling bigger guys unerringly without getting injured, dribbling past opponents, etc.) and try to develop protocols to improve in each of them separately, but as that would seem insanely time-consuming, and there would always be doubts about how effectively each of those “component” abilities transferred to actual play, that is not the most common approach. What most people do is just devote a ton of time to playing, in actual games or in training, and hope that such devotion will pay off by enhancing (more or less) in unison all the components, and thus making them better players overall. Which is what usually happens. The application of such a strategy to work would be “do as much of it as you think you need to get to the level of competence you desire to achieve”… which is essentially something you already knew. But I cannot add much more, and neither can any self-professed self-help guru, as what that level of competence consists in is subject to debate, probably quite different from one job to the next, and has an uncertain influence compared with other factors outside your control (like global market conditions, the personality of your boss or your fit with the corporate culture, all of which are proxies for “sheer luck”, as much as we would like to think life is fair, meritocracy works and everybody really gets what they deserve).

What we do know in rugby is that, everything else being equal (ability to read the field, precision in passing, ability to tackle), the stronger player will win over the weaker one, so, strength being highly trainable, it always pays to devote time to developing it. Similarly, in your job I can safely bet that being better at forming meaningful relationships, having more mental acuity and sharpness, and having more endurance and stamina (being physically fitter) are going to be beneficial. Regarding transcendence… you’ll have to wait ‘til we get there to judge to what extent it is similarly positive and worthy of your attention and time. Without further ado, then, let’s talk about how best to train those qualities:

Relationships


This one should be a cinch. Probably the single factor with the most influence on how satisfied we are with our own life is the density of our social network, the number of high-quality relationships we have managed to weave over our life with lovers, family members, friends, and even colleagues and professional partners (doubt it? Have the most cursory look at the Harvard longitudinal study: The Harvard Study, which has been following the original participants for seven decades now, and has been expanded to include people of different socioeconomic status, plus their wives and children). Having many people who care for you, and whom you care for, not only makes you happier, but strongly protects your health, both mental and physical. So investing in maintaining such a robust network should be a no-brainer.

Sadly, a surprising majority of people in the West don’t seem to have gotten the message, and reach the end of their working lives estranged from their families (all those years too busy working to devote much time to them, you know?) and knowing very few people outside of their professional circle. And those inside, after retirement, don’t seem all that interested in maintaining much in the way of a relationship with the retiree, once he or she can no longer advance or promote their interests (yep, professional relationships based on self-interest are mostly selfish like that, sorry to break the bad news to you).

Given, then, that we accept that such a state of affairs is bad, and thus that it makes sense to invest time in improving our ability to form that kind of meaningful, stable, long-term, rewarding relationship, how can you go about getting better at it? Simple: family reunions. Yeah, I know, you already visit your parents at Thanksgiving, and at Christmas every other year. Tough enough already, given you don’t get along all that well with that know-it-all of a brother-in-law. And you went some years ago to the wake of that second uncle, and saw again those cousins you hadn’t seen since you were a freckled and carefree kid. Starting to see the problem? Meeting people once a year or less is no way to sustain, much less nurture and grow, a relationship! My suggestion is to go full Mediterranean here: lunch at your parents’ (or your spouse’s parents’) house every friggin’ weekend, unless something really big impedes it (“really big” means the Second Coming of Christ is probably an acceptable excuse; a hangover from poker night with friends is probably not). And the more the merrier: prod your old folks to invite aunts, uncles, cousins, old family friends, whoever has the thinnest connection and is still breathing.

I know that sometimes having lunch (or dinner) with the extended family may seem a chore. You don’t necessarily agree with the political views of your siblings, although, being the same blood, you are willing to give ‘em a pass. But the in-laws? Well, they are putting up with your kin much more than you are, so shut up, man up and be nice to them too! Because that’s the trick, that’s what makes the weekly family lunch invaluable for developing your relationship-forming abilities. You are constantly reminded that your own view is not that important. It doesn’t matter how much or how little you have accomplished in life: for your parents you will always be that lovely little critter that ran around the house screaming every Christmas morning, the one that was run over by a bus and got up smiling as if nothing had happened, or that gulped down a bottle of rat poison and had to be rushed to the hospital for an orogastric lavage, or whatever foolish and mildly embarrassing thing you did when very young that still gets regularly brought up at every reunion (not to mention how huge your big toe was when you were born…). And for your siblings you will always be that mildly annoying, arrogant, self-important dick who competed with them for mom and dad’s attention, told them incessantly they were really (really!) adopted, and paid them so little attention that you didn’t remember at all their life-transforming summer trip to Italy and kept insisting they must have imagined it all. And that’s OK! Even if it is a pain to relive (again and again!) those little things, because by anchoring us in a shared narrative they help put us in our place and remind us of our (relative) insignificance in the grand scheme of things, and that’s really what a relationship, and love, are about: being a part of a whole bigger than you, a whole that will go on after you are not here anymore, a part that you didn’t entirely choose (you were born into it without anybody asking your opinion beforehand) but didn’t entirely renounce either, so it’s not as if you can’t be fiercely proud of it anyway.

And even more important, those reunions remind you that for a meaningful relationship to flourish it has to be disinterested, unselfish. It has to be “not about you”, or not too much anyway (and certainly not only about you, as our hyper-individualistic, self-centered age is wont to forget). It’s great, if you have a serious illness, to have friends and family genuinely concerned, visiting you in the hospital and all, but for most of your life you keep on seeing them, and putting up with their put-downs, just because they are your friends and family, not because you expect to receive something in return.

Which takes us to the thorny issue of relationships within your work environment. Why not train to be better at them by practicing them more directly? Wouldn’t it be more advisable to recommend going out to lunch every week with a group of co-workers? To organize social outings with colleagues? Absolutely not. That is akin to putting the cart before the horse. Remember: you don’t push yourself to see your family more often in order to form more bonds at work. Forming meaningful relationships is not a means to a higher end (like professional progress, or making more money, or closing more sales). Forming meaningful relationships is an end in itself, the highest end, indeed, if you want to be happier (brief aside: if you really want to be happier there is exactly one end you should not pursue, and that is happiness itself; it is an ironic feature of human nature that the more obsessively you pursue happiness, the less likely you are to achieve it). And, ahem, sorry again if this is news to you, but professional relationships are rarely deep enough to qualify as “meaningful” or “happiness enhancing”. Remember the last episode of 30 Rock, and the frustration of Tracy Jordan as the character played by Tina Fey explains to him that after all those years working together they would probably never see each other again? I found that pretty insightful, as after a number of job changes I can attest that more often than not job relationships are quite volatile and, being sustained by each party’s pursuit of its own interest, they dissolve quickly and without a trace.

I grant that there is a special category of job relationship that can endure for the long term: the ones made in the first years of your professional career (5 to 10, tops), when you are fresh out of college and suddenly surrounded by similarly minded colleagues, as clueless as yourself, and probably as willing to give it their all and carve a name for themselves in the corporate world. You can make real friends among your equals then, but after those 5-10 initial years you have too many responsibilities, the environment typically becomes too political, you can do favors, and ask for favors, that may advance or hinder other people’s careers, so they simply stop approaching you for your charm and congeniality, and start doing it to see what they can gain by associating with you.

So yes, by all means participate in your company’s social life (respectfully and graciously… the young intern that you think is making a pass at you is probably a) not doing so, and b) even if she is, it is not because you are so goddamn attractive and irresistible, but because your position and authority make you appear so… just say no), but don’t think that somehow compensates for not having anybody who gives a rat’s ass about you outside of your professional circle. It doesn’t, and if that is the case, you’d better do something to fix it!

Intellectual Ability


If the previous one was easy, this one is darn complex. I’m sure you are already familiar with a thousand miraculous solutions for making you sharper, more clever, a faster reasoner, and thus better at any number of mental fields (mathematics, learning foreign languages, passing IQ tests, improving your SAT score and whatnot): from mobile apps that supposedly train your brain, to listening to audiotapes while sleeping, to solving sudokus, to consuming nootropic drugs, the market is awash with products that promise to increase your intelligence for a pittance. Which would be all right, except that all those solutions have one thing in common: they don’t work. There is only one thing that consistently seems to work: having chosen the right parents, as general intelligence (which is the foundation on which all the other abilities are built) is both highly heritable and not very amenable to training.

What a bummer! Does that mean that you are stuck with the same mediocre mental capability you were born with, and that you are condemned to never get past first-year French, unable to go beyond “Bonjour, mon nom est Choderlos, comment est-ce que vous vous appelez?” (or something similarly lame)? Well, kinda, yes, but the mileage you can extract from that predefined set of capabilities may vary, and I’m going to propose a proven method to get the maximum from what you have got. I’ll readily admit that the sample size on which the proof rests is somewhat modest (one person; I’ll let my astute readers guess his identity).
The method is pretty simple, actually, and darn cheap. Its only drawback is that to bear fruit it requires a ton of willpower and grit, as it demands a lot of consistency over a very long period of time. Here it goes: Read. Lots. Of. Boring. Books.

That’s it. That’s the optimal method to increase your mental acuity, your cerebral suppleness, your reasoning capabilities, your working memory and your symbolic manipulation capacity, all at once. You may understandably have some doubts, so let me clarify a bit:

·         Read: that part should be uncontroversial. Reading is the fastest way to absorb new information, and the more information you have at your command, the better you can think. Marshall McLuhan identified reading alphabetic writing as the key to Western dominance (the basic tool that taught us how to solve problems in the most generic way imaginable: break them down as we break words into syllables, tackle each one separately, and then put the solution back together as we do with the meaning of the word), and I recently read in Emmanuel Todd (absolutely atrocious and unwarranted conclusions, but that would be matter for another post) that learning to read between the ages of 5 and 10 irreversibly alters kids’ brains, making them more plastic and more intelligent. So no audiobooks, and no videos. Just read.

·         Lots: that means a really big number. Twenty, maybe (gasp) even fifty? Nah. Think in thousands, not in hundreds (well, I already warned you that it required time, didn’t I?). At least half an hour every single day, no matter how exhausted you arrive from work; how badly you want to watch TV, or play videogames, or hang out with friends; how hung over you feel from yesterday’s party or how seductively your wife is trying to woo you to the conjugal bed. Aw, OK, in that last case you can go, and leave reading for another day. We are talking about weekdays, of course. Anything under three hours on the weekend is considered cheating. Please note that I’m talking about the bare minimum. For the method to work, that half hour on weekdays and three hours on the weekend constitute the non-negotiable floor, but as for a possible ceiling, the sky is the limit. If you can leave work early and clock 2-3 hours on a weekday, so much the better! If on a blessed Sunday you can read for 14-16 hours, go gladly for it! Few pleasures compare to finishing a couple of numbingly, maddeningly, apocalyptically boring books in a single day! You go to bed a so much better man, and feeling so contented with yourself that it truly defies words.

·         Boring: I know what you are thinking now: “boring? Why do they have to be boring? If I’m going to read thousands of books, surely I can at least make it as enjoyable an effort as possible! If I read 2,000-3,000 books I’m gonna turn into an insufferable egghead regardless, it doesn’t matter if they are all about Jewish jokes and sci-fi and erotica”. Nope, sonny, it doesn’t work like that. Read 3,000 books of good science fiction, or heroic fantasy, or YAF (is that even a thing?) and you end up being definitely nerdier, but not an iota more clever. What really makes your “cleverness muscle” (aka brain) pump is forcing yourself to go through concepts and ideas you don’t care a patootie about. Having to fight to stay awake as your vision crawls across a page mercilessly scribbled with combinations of words that make barely any sense. And, once you reach the end of the page, having to start again because you realize you haven’t registered a single concept, and if you were asked about the meaning of what you just read you would have to stare blankly and confess you didn’t have a clue. I recognize that the prospect of slaving through so many boring books may seem like insufferable drudgery to many, so soul-crushing and willpower-depleting as to be an insurmountable obstacle on your way to cleverness. All I can recommend is the “Weight Watchers” strategy: join a group of similarly minded people to whom you declare your intentions and to whom you report periodically on your progress, so they keep you accountable. Human teenagers have been doing it for centuries; it is called a university degree, and its main tenet has always been to subject the students to a syllabus of works as boring and unrelated to anything actually done in a real job as the teachers could get away with. Indeed, I find it so inspiring that I have been studying for one or another, almost without interruption, since I was 17 years old…

·         Books: it almost goes without saying, but only books will do the trick. Reading millions of fortune cookies, advertising placards, movie posters or T-shirt messages won’t cut it. They have to be long and convoluted and have many pages, so you have to keep your working memory fully engaged trying to remember as much as possible of what came before. I hear some TV shows now (“Lost”, “Breaking Bad”, “The Sopranos”, “Game of Thrones”) demand almost as much from their viewers as your average Russian novel from the XIXth century… maybe, maybe not, but there are simply not that many shows of that kind around, and they don’t exercise our ability to break problems down into tiny components and then reassemble the solution from the ground up, so I’ll stick with books, and I recommend you do too.

So there you are: the optimal training strategy for becoming more clever, more disciplined, more acute, is to read lots and lots of boring books. On matters you are not familiar with. Of course, at a certain point you run out of matters, and you start to actually enjoy reading XIVth-century metaphysics, or tracts on the philosophy of right in the Carolingian codex, or treatises on the sociology of Andaman Islands tribesmen, as you recognize events, ideas and theories, and you see how they “fit” in the vast and wonderful network of human knowledge. Does that mean you cannot keep pushing the envelope, and have to admit you have reached a definitive plateau and can only stagnate in your mental development? Far from it! At that point you introduce reading in foreign languages you have not yet mastered. But that is a very advanced technique which we will leave for another day.

Another thing we will leave for another day is discussing the optimal strategies for training the two remaining skills (physical ability and transcendence), as this post is already inordinately long, and I’ve surely taxed the patience of my readers more than enough… 

Thursday, November 23, 2017

“Neuroscience”, a Monism for the XXth century (or maybe even the XIXth). On Robert Sapolsky’s Behave

Regulars of this blog (here I always insert the by now somewhat tired cliché “all two or three of them”) know I have a thing for reading books I thoroughly, consistently, deeply, unambiguously and exhaustively do not enjoy. I do it frequently enough to consider that it is not just lack of attention when choosing what to read, or bad luck (the “I thought it would be good but more than halfway through I realized it was not just mediocre, but downright atrocious… having already devoted so much time to it I just trundled along thinking it would be better to see it through” excuse), but an active courting of works that I won’t a) like, b) agree with or c) have any use for.

As a brief aside, I know there are people who wouldn’t be caught dead reading a book they don’t like, exemplified by Tyler Cowen (whom I quote frequently and approvingly on so many other issues): Tyler reads a bunch of books, but not many pages of each. All I can say is I feel sorry for them, as they are missing out on one of the most important teaching and character-formation tools of this day and age. The ability to read unpleasant writings, to soldier through them no matter how unsatisfying, is highly trainable, and not only does it spill over into other, more practical areas of life (like the much-lauded trait of grit, which has had plenty of media exposure of late), but it helps immeasurably in some more mundane applications, like making it much easier to learn foreign languages (something I already recounted in this recent post: Unexpected uses of Philosophy). So, regardless of what Tyler says, I strongly encourage my readers to persevere when caught in the midst of a dreadful, dreary book, as it is more conducive to a life well lived than perpetual surrender to the thrill of novelty seeking, with its concomitant rejection of effort and toughness and its indulgence in the instant gratification that is so damaging to the formation of a resilient, well-rounded character.

Back to my quirky habit of reading books that bore me or anger me or somehow disappoint me: I recently finished a doorstopper (well over 700 pages) by Robert Sapolsky aptly titled Behave: The Biology of Humans at Our Best and Worst, extravagantly praised by a number of figures of the scientific establishment, and by Richard Wrangham, who in his July 6th New York Times review describes the book as a “quirky, opinionated and magisterial synthesis of psychology and neurobiology that integrates this complex subject more accessibly and completely than ever”. That I bought the book after reading such a description already denotes a masochistic streak on my part, as the integration of psychology and neurobiology sounds about as appealing to me as the integration of astrology and XIIth-century Chinese ceramics (I have only the slightest interest in the latter, and utterly despise the former, for the same reasons I despise psychology: both disciplines’ unsubstantiated claims to some sort of “scientific validity”). I’ll devote this post to commenting on it, not specifically on the more scientific claims (which, not being an expert in the field, I found well informed and crisply stated, if a bit discombobulated and lacking a logical structure at times; just too many things seemed to be written in the spirit of “this is really cool and fascinates me to no end, so whether it really belongs here or not, whether it really helps drive home whatever point I’m trying to make or not, I’m going to describe it in some detail no matter what”), but on the metaphysical ones.

“Whoa, whoa, whoa!” I can hear my readers saying. “Hold your horses here for a minute! Metaphysical claims? In a book about the brain, and hormones, and the genetic origins of behavior? Are you sure?” Well, I very much am, and as scientifically inclined, and surely as devoted to empiricism and all that, as the author professes to be, he argues from a very strongly felt metaphysical position about what the ultimate components of reality are, and about what the realness of such components tells us about how we should think about our behavior, what we should praise and condemn, and how we should punish deviations (with the unavoidable foray into trolleyology… man, I endlessly wonder what kind of fixation Anglo-Saxon casual readers of philosophy have with electric cars on rails that they simply cannot stop dragging them into any discussion of ethics, cursory and circumstantial as it may be).

However, before we dive into the arguments of the book, I’d like to share something I find quite puzzling about the whole “neuromania” (a term I borrow from Raymond Tallis; if you haven’t already read him, especially the excellent Aping Mankind, go and do it ASAP). Let’s consider the following two descriptions of the same behavior (young Rick knows he should study for the next day’s biology exam, but he just can’t gather enough willpower, so he spends the evening watching TV instead):

·         Description 1: when Rick starts considering what to do (while conveniently having his head inside an fMRI) his frontal lobe lights up, which tells us the parts of his brain in charge of executive control and long-term planning are being activated. Unfortunately for him, his orbitofrontal complex is not fully developed, and due to neglect and inadequate parental supervision during his early childhood (or maybe because, when walking down the university hall to the fMRI, he crossed paths with an attractive research assistant who reminded him of a day-care nurse who had been mean to him all those years ago, or because he had been primed to be more attuned to earthly pleasures by unconsciously smelling freshly baked bread when entering the room) the metabolic capacity for sustaining the heightened glucose demand of that sensitive part of his neural tissue is not up to par, and cannot counteract the activity of the medial PFC (somehow the PFC, the prefrontal cortex, is always involved in anything having to do either with emotion and expectations of pleasure or with rationality and delayed gratification), or of the limbic system (don’t even ask: lots of neurotransmitters, more Latin-themed brain topology and a bunch of enzymes produced in exotic and distant parts of the body), or the surge in dopamine caused by the prospect of some good ol’ procrastination. To make things worse, we can appreciate that both Broca’s and Wernicke’s areas are also lighting up, and since they are neural correlates (we don’t know exactly what a neural correlate is yet, at least with regard to the subjective feeling of consciousness, but bear with me) of language processing, we can only conclude that he is rationalizing his rascally behavior, and talking himself into it. Indeed, his nucleus accumbens (just above his septum pellucidum; if we are going to use funny Latin names, let’s use them to the end!) is also showing signs of enhanced activity, surely as he contemplates the pleasures of just doing nothing, while the insula (which in theory evolved to warn us of the dangers of rotting fruit) remains muted, meaning that our moral sense (a form of disgust, typically geared towards outgroups, but potentially useful for directing that disgust towards non-adaptive behaviors of our own) is similarly muted and has nothing to contribute to the final decision on what to do.

·         Description 2: the guy is lazy. Probably not entirely his fault (but more on that in a moment).

Any self-respecting neuromaniac (or neurobabbler) will tell you that only the first description is “informative”, “scientific”, “information-rich” and constitutes true understanding of human nature and of the aspects of personality involved in the observed behavior. What I contend, on the contrary, is that both descriptions have exactly the same informative content. Whilst the former is doubtlessly more florid, and definitely longer, it doesn’t provide any additional understanding of what is going on, and it doesn’t really give us any additional insight. Specifically (and we’ll come back to this in a moment), it doesn’t provide us with an iota of additional, “useful” information about what the subject exhibiting the behavior may or may not do in the future (the ability to predict the future being one of the defining features of scientific knowledge).

On a side note, such an unkind critique reminds me of what C. Wright Mills did, in The Sociological Imagination, with Talcott Parsons’s The Social System: he transcribed super-long paragraphs of the latter and then “translated” them into much shorter, simpler, more elegant and concise ones, contending that they really meant the same thing, and thus exposing Parsons’s magnum opus as a lot of unnecessary, inflated and somewhat pompous babble. I think I was lucky to read both books in the “right” order (first Wright Mills’s, with his scathing critique, and afterwards the one by Parsons), which allowed me to better appreciate the aspects properly deserving criticism, and to separate them from those where the long-winded sentences and the convoluted reasoning were called for (at this point, no reader of mine will be surprised to find me sympathizing with other writers prone to the use and abuse of long-winded sentences and convoluted arguments). And I find it amusing that I may now level such criticism against a work that on many levels is quite well written, engaging and even witty and downright funny… such are the foibles of the world.

But back to Sapolsky: the whole book is really not much more than a masterly, exhaustive enumeration of all the aspects of mental life we have found to correspond with the illumination of different parts of the brain when seen inside an fMRI, or with different concentrations of neurotransmitters and enzymes in the subjects’ blood (measured at different moments of experiencing some contrived experimental situation or other), or with different waves as picked up by an EEG, along with the minutiae of the experiments that purportedly settled each correspondence. And, man, is there a lot to enumerate! Abstract thinking, volition, desire (of different types and kinds), moral evaluation, affects, emotion, reasoning, memory, feelings, broad categorization, narrow categorization, ascription of agency, prediction, anticipation, delayed gratification, succumbing to impulses, visual perception, purposeful meditation… you may think that whatever may happen in your mind, Mr. Sapolsky has it covered and pithily conveys its biological basis, which means identifying it with the firing of some neurons (or at least the distinct oxygen consumption of some broad areas of the brain) and the variation in the concentration of some chemicals in the blood.

There is the mandatory (for a book that aspires to fairly represent the “state of the art” of neuroscientific investigation) mountain of notes and copious bibliography pointing to an apparently insurmountable pile of (impeccably “scientific”, of course) evidence supporting its claims, but it is a pity that no mention is made of the dubious replicability recently noted of many of those experiments. Which is surprising, given that the book has been published this very same year (2017) and Brian Nosek’s important paper “Estimating the Reproducibility of Psychological Science”, which kicked off what has been termed the “crisis of replication” in most social fields, was published in August 2015. So the fact that Sapolsky is still using experiments from John Bargh and the like as evidence (when I read Social Psychology and the Unconscious I was left with the clear and distinct feeling that the whole field was completely, utterly bunk, and I didn’t need sophisticated resources or a failed attempt at replication to conclude that its findings were either trivial or false; unsurprisingly, “social psychology” has been one of the fields worst hit by the replication crisis…) without mentioning their dubious epistemic status is at best a bit careless, and at worst disingenuous.

Unfortunately (or fortunately, depending on your prior metaphysical commitments), I came away with the idea that the book fails spectacularly in its declared intent of “explaining” in any meaningful way why we humans act as we do. Maybe it has to resort to too many causal chains (on very different timescales, which makes them mightily difficult to integrate with one another). Maybe I read these things to gauge to what extent the advances in medicine and biology should make me question my belief in (or commitment to) free will, and given my prejudices and biases it is not surprising that I come out of such exercises reassured, rather than shaken or converted. There are many intelligent, considered and thoughtful arguments against the existence of that mythical beast (freedom of the will), made since the time of the Classical Greeks, but I’m afraid you won’t find any of them in Behave.

Let’s start with how the author proposes to tackle it head-on, appealing to the somewhat worn-out and belittling homunculus argument (after this, don’t ever accuse me again of strawmanning!). I’ll need to quote at some length to capture the rhetoric in all its gloriousness. This is how Mr. Sapolsky presents his understanding of what he calls “mitigated free will”:

There’s the brain – neurons, synapses, neurotransmitters, receptors, brain-specific transcription factors, epigenetic effects, gene transpositions during neurogenesis. Aspects of brain function can be influenced by someone’s prenatal environment, genes and hormones, whether their parents were authoritative or their culture egalitarian, whether they witnessed violence in childhood, when they had breakfast. It’s the whole shebang, all of this book.

And, then, separate from that, in a concrete bunker tucked away in the brain, sits a little man (or woman, or agendered individual), a homunculus, at a control panel. The homunculus is made of a mixture of nanochips, old vacuum tubes, crinkly ancient parchment, stalactites of your mother’s admonishing voice, streaks of brimstone, rivets made out of gumption. In other words, not squishy biological brain yuck.

And the homunculus sits there controlling behavior. There are some things outside its purview – seizures blow the homunculus fuses, requiring it to reboot the system and check for damaged files. Same with alcohol, Alzheimer’s disease, a severed spinal cord, hypoglycemic shock.

There are domains where the homunculus and that brain biology stuff have worked out a détente – for example, biology is usually automatically regulating your respiration, unless you must take a deep breath before singing an aria, in which case the homunculus briefly overrides the automatic pilot.

But other than that, the homunculus makes decisions. Sure, it takes careful note of all the inputs and information from the brain, checks your hormone levels, skims the neurobiology journals, takes it all under advisement, and then, after reflecting and deliberating, decides what you do. A homunculus in your brain, but not of it, operating independently of the material rules of the universe that constitute modern science.

At this point the author may think he has gone a bit overboard, since, after all, polls suggest (although it’s a slippery concept that may arguably not be all that well understood by people answering that kind of question) that consistent majorities in all countries do believe people are endowed with free will, mitigated or not, even in our very scientific and deterministic age, when the dominant reason has been hammering into them, at least since the mid-eighteenth century, that this “voluntariness” thing is but a fiction, the sooner discarded the better (so as to work more towards accumulating more material goods, mindless as such accumulation may look to a dispassionate observer). So Mr. Sapolsky digs deeper, trying to excuse us for our unenlightened foolishness (but not by much):

That’s what mitigated free will is about. I see incredibly smart people recoil from this and attempt to argue against the extremity of this picture rather than accept its basic validity: “You are setting up a straw homunculus, suggesting that I think that other than the likes of seizures or brain injuries, we are making all our decisions freely. No, no, my free will is much softer and lurks around the edges of biology, like when I freely decide which socks to wear.” But the frequency or significance with which free will exerts itself doesn’t matter. Even if 99.99 percent of your actions are biologically determined (in the broadest sense of this book) and it is only once in a decade that you claim to have chosen out of “free will” to floss your teeth from left to right instead of the reverse, you’ve tacitly invoked a homunculus operating outside the rules of science.

Well, I’m not sure Mr. Sapolsky would consider me “incredibly smart” (I’m a theist, maybe even a Deist, after all, which in his book is surely a giant letdown), so it is just par for the course that I do not recoil from “this” at all, and wouldn’t even attempt to argue against the “extremity” of such a picture, a picture belief in which is shared by obvious morons like Aristotle, John Stuart Mill, Kant and arguably even Hume (but hey, who were they to know? They didn’t have fMRIs, microscopes, EEGs and the like… bunch of intellectual midgets, that’s who). I’ll apply Sapolsky’s rhetoric to his own position (“no homunculus at all, just the neurons, hormones, genes and that’s it”) towards the end of this post, and we’ll see which one seems more ridiculous, or looks more “extreme”, if we dwell for a moment on where that dispassionate, seemingly objective and scientific alternative he champions leads us.

But before we get there I think it’s worth considering the super-duper, bold, brave and, in the end, quite empty calls for a radical overhaul of the penal system (if all the world is indistinguishable from a prison, where nobody has any freedom at all, but just blind obedience to material forces that were put in play at the Big Bang and have been playing out necessarily ever since, what does jailing criminals even mean?) in light of the non-existence of that vaunted free will. At various points the author admonishes us that, given what we know of what makes people tick, we absolutely must reconsider all our laws and, most markedly, our punishments. But when it comes to actually defining what such a reconsideration should look like, he is maddeningly vague. All he does is point out that concepts like “guilt”, “intent” and even “recidivism” lack any real meaning, as each and every action that any of us performs is preordained, overdetermined by an overlap of evolution (which shaped our species), individual genetics (which shaped our capabilities and dispositions) and the individual circumstances in which we find ourselves (which trigger the evolved responses finely tuned by our genetic endowment and previous history), and so merits being praised or blamed, rewarded or punished no more than the actions performed by animals (donkeys, pigs, cows) that, in past and unenlightened centuries, were similarly judged and on which silly verdicts were passed. Very bold and brave, indeed, until we try to apply it to the real world.

Let’s remember that in penal theory punishments pursue at least three objectives: we deem it acceptable to harm the perpetrator of a crime because a) it has a deterrent effect on other people (who see that committing crimes is punished and thus become less likely to do it themselves), b) it makes it more difficult or impossible for the criminal to repeat his bad deeds (from killing him to imprisoning him to depriving him of the material means necessary for recidivism) and c) it compensates the victim (either materially, giving her the proceeds of the fine imposed on the evildoer, or morally, signaling society’s rejection of what her tormentor has done). You may notice that all three objectives are quite independent of the assumption of free agency in either the criminal or the rest of society. Even if we accept that we are all mindless robots, we would still need to levy fines, temporarily imprison wrongdoers and, in some extreme cases, either imprison for life or kill those individuals so dangerous and prone to violence that we cannot afford to let them loose among their fellow beings. We might use a bit less shaming, and more consequentialist reasoning, but I don’t see the actual penalties of any modern, progressive penal system (like the ones you can find in most advanced parts of the world, the USA not included) changing much, or at all. Which doesn’t mean our current system is maximally humane or maximally just, as it already, to a great extent, treats criminals as somehow mentally defective, and such condescension may be a harder punishment than granting them independence and recognizing their moral agency, even if that means a harsher penalty (finely illustrated in Henrik Stangerup’s The Man Who Wanted to Be Guilty).

Regardless of the consequences of admitting the fictitious nature of free will, which are presented as unimaginably bold and revolutionary while at the same time requiring that we leave current norms, laws and institutions essentially unchanged, I’m afraid the animosity of Mr. Sapolsky towards the possibility of such a fiction not being a fiction at all rests on a misunderstanding: a misunderstanding of what the freedom worth having in an (in)deterministic universe would look like. His confusion reproduces almost verbatim an argument from Daniel Dennett (which I’ve read both in Consciousness Explained and in Freedom Evolves): even if the universe were at heart strictly indeterministic, that wouldn’t threaten his understanding of all behavior following necessarily, within a causal chain with no slack, from material causes that were essentially set in stone at the moment of the Big Bang, because A) the indeterministic nature of reality applies only at very small scales (the quantum realm, for particles smaller than a proton or a neutron), and when it comes to “big” stuff, noticeable by our senses, such indeterminism vanishes, so we can entirely ignore it; and B) even if there were truly “uncaused” macroscopic events, events for which we could really and ultimately never find a material “cause”, such events would never constitute the basis of a “freedom worth having”, as we traditionally consider a “free” action (free in the sense of being valuable, morally worthy, deserving praise or blame, etc.) one that is consistent with the “personality”, the “character”, the “true self” of the agent, and such an action could never come out of the blue, or be entirely random; it could never be supported (or be made to appear more likely) by the fact that the universe is finally not ”causally closed”, if we understand such lack of causal closure only to entail the possibility of entirely stochastic, uncaused events.

That is indeed hefty metaphysical stuff, and reading Behave has just reinforced my original hunch that such stuff is but very dimly illuminated by what we learn from neurology and biology. Without needing to resort to so much neurobabble, Ted Honderich expressed it better, and with more nuance, in his (alas, quite difficult to find) Theory of Determinism (1990), which was wrong for the same reasons Dennett’s and Sapolsky’s positions are wrong, namely: A) their understanding of physics is between 50 and 80 years out of whack; the old debate between Heisenberg and Laplace was decisively won by the former, and appeals to hidden variables to causally explain quantum effects have so far been shown to be not only unsubstantiated, but probably incongruous (the best explanations of Heisenberg’s uncertainty principle I’ve seen are entirely apodictic, relying not on any particular experimental result but on the nature of reality and mathematics themselves); the consensus position among physicists is that nature is fundamentally non-deterministic, that there are really, deeply, entirely uncaused events, and that those events, microscopic as they may be individually, may aggregate to have macroscopic, entirely observable effects (from the disintegration of a radioactive particle to certain phase changes), due to the non-linearity of the equations governing complex systems (which would demand a potentially infinite precision in the measurement of the initial conditions to get even to an estimate of the order of magnitude of the end state of a given system… but the uncertainty principle puts a hard limit on the precision with which we can measure such an initial state, seriously limiting what we can know of the end state, regardless of how macroscopic we find it to be). But wait, it gets worse, because B) you don’t even need to appeal to quantum indeterminism to accept the possibility of a freedom worth having, once you recognize that classical mechanics provides a fairly limited description of a fairly limited portion of reality (the behavior of a “reduced” number of “big” solid bodies moving “slowly” -no need to get too technical about the precise meaning of each term in quotes), and that, sadly, classical mechanics is the only realm where the determinism our authors propound holds sway. What Honderich, and Dennett, and finally Sapolsky are doing is taking the neatly defined (albeit, as I just mentioned, woefully incomplete) concept of causality from classical mechanics and applying it to the field of chemistry (mostly a valid extension, for big enough compounds), then extending it again to the field of biology (a sneakier and, judging by the results, not entirely legitimate extension) and finally extending it once more to the field of human behavior, forgetting the entirely different meaning a “cause” may have for creatures possessing a symbolic language, a culture and a complex neural system that translates into sophisticated motivational structures.
Then they look back as if such a chain of extensions of the original concept were entirely uncomplicated and immediate, and claim: “see, as the only valid causes are big bodies gravitationally attracted to one another, or imparting momentum to one another through physical contact, or heating one another or exchanging atoms and molecules with one another… there can be no free will, as it would require some weird stuff -a homunculus- that has no way to gravitationally attract, impart momentum to, heat, or exchange atoms or molecules with normal, everyday, honest-to-God, measurable matter” (that’s Dennett’s “Casper argument”, which I’ve criticized elsewhere, in a nutshell). But it is they who have beforehand (and somewhat arbitrarily, as we are about to see) limited what can validly be considered a cause, so their whole argument is circular and lacks validity!

Indeed, someone with such impeccable empiricist credentials as John Stuart Mill, in his A System of Logic, Ratiocinative and Inductive (first published in 1843), already noted (although he hid it in a footnote) that the very concept of what a “cause” is had significantly evolved from the times of classical Greece, when its paradigmatic model was precisely… mental intention! (their main example of a cause was my thinking of moving my arm and my arm dutifully moving as intended), to his own time, when the paradigmatic model had become the billiard ball being moved mechanically by the clash of another billiard ball. That’s why in Aristotle’s canonical model we find listed as causes not just material causes, but also formal and final causes, which are entirely absent from our modern concept of what a cause consists in. Add Descartes and his distinction between res cogitans and res extensa and you may start to understand why the idea of mind as something outright distinct (metaphysically distinct, I dare say) from matter has become more and more alien to our sensibilities (a somewhat more garrulous and blockheaded example of such strangeness would be the abominable Descartes’ Error, by Antonio Damasio, but I refuse to devote much attention to such claptrap).

Okay, let’s take stock, then, of where we stand: we’ve found that Bob Sapolsky’s idea of what constitutes a cause may not be all there is to it when it comes to explaining human (or, for what it’s worth, animal) behavior. We’ve based such a finding on his prior (probably unexamined: that’s the problem with doing metaphysics without enough training in basic philosophy, only partially excusable if you think what you are doing is biology, or neurology, or psychology) and unjustified commitment to what constitutes a valid explanation, one that leaves out enormous tracts of reality that are as able to have an empirically verifiable effect on the material evolution of the universe as any legit neutron star or black hole or lump of atoms you may think of. And before anybody accuses me of magical thinking, of just attempting to open the causal closure of the material world to sneak into it fairies and old bearded guys in robes inhabiting the skies (not that I would care, really), I will clarify that identifying the insufficiency of matter as a universal explanation does not require any kind of prior commitment to revealed religion, the existence of immortal souls or anything like it. I’ve already pointed to one of the fiercest critics of what he terms “neuromania”, Raymond Tallis, an avowed and proud atheist. Another recent critic of the view that there is only, and can only be, matter, interacting according to the laws revealed by the scientific method so far: Thomas Nagel (similarly open about his atheism). Want a recent criticism of Dennett that doesn’t rely at all on any supernaturalism? Look no further: The Illusionist

The way I see it, there could only be one redeeming feature of Behave’s underlying thesis: that it worked, and all the hullabaloo about Latin-named regions of the brain and chemical compounds and genes and evolutionary just-so stories (being a bit harsh here, I know) served the purpose of actually being able to predict how people… well, behaved. But alas! Per the author’s own admission, it’s not yet to be:

…my stance has a major problem, one that causes Morse to conclude that the contributions of neuroscience to the legal system “are modest at best, and neuroscience poses no genuine, radical challenge to concepts of personhood, responsibility and competence.” The problem can be summarized in a hypothetical exchange:

Prosecutor: So, professor, you’ve told us about the extensive damage that the defendant sustained to his frontal cortex when he was a child. Has every person who has sustained such damage become a multiple murderer, like the defendant?

Neuroscientist: No

Prosecutor: Has every such person at least engaged in some sort of serious criminal behavior?

Neuroscientist: No

Prosecutor: Can brain science explain why the same amount of damage produced murderous behavior in the defendant?

Neuroscientist: No

The problem is that, even amid all these biological insights that allow us to be snitty about those silly homunculi, we still can’t predict much about behavior. Perhaps at the statistical level of groups, but not when it comes to individuals.

Indeed. As he titles the following subchapter, the whole discipline “explains a lot, but predicts little”. If you restrict explanation to certain very limited terms, I would add (because otherwise it doesn’t even explain that much to begin with). Which is mighty honest of the author, and he deserves great kudos for his humility and realism. Humility and realism somewhat sullied when he explains that such lack of predictive power is caused by the relative youth of the discipline (supported by very tricky and fishy graphs showing how many articles containing a number of trendy terms have been published in recent years, compared with how many were published a century ago, when there were all of five medical journals on the whole planet), and that if we wait a bit they will tease out all the contributing causes and finally start making brilliant, honest-to-God predictions that will hold up to scrutiny and allow for the vaunted overhaul of the legal system (predictably in the direction foreshadowed in Minority Report, when we will detain and even execute potential criminals before they can commit their evil deeds). Sorry, but that reminds me of the baseless hope in some mysterious hidden variables that will end up allowing us to predict everything. Didn’t work in physics, won’t work in psychology.

But if you are not all that demanding about what you consider scientific (remembering Feynman, most of the curios within Behave are more akin to stamp classification and collection than to physics, or whatever your model of hard science is), or are happy to roll with good ol’ Collingwood and accept as scientific any discipline that searches systematically for the answers to certain questions, and is willing to share the data and methods used to arrive at such answers (even if those data and methods end up demonstrating the opposite of what they were marshaled to prove), then I cannot recommend this book highly enough. It is comprehensive, witty, not very well structured but full of silly experiments that will amuse and entertain you, and give you an edge in otherwise dreary conversations (you can follow “the latest science conclusively proves that humans…” with whatever foolish conceit you wish and you can probably find some supporting “evidence” within its pages). Best of all (or not, depending on what makes you tick), don’t expect it to question your previous assumptions about what moves us to act or how the mind works (as opposed to the brain), as it is woefully short in those departments…

Tuesday, October 3, 2017

My take on Charlottesville (When the Nazis come marching in)

One of my most controversial predictions is that North Americans are sleepwalking towards a Second Civil War, in which the increasingly polarized and estranged halves of the electorate will have to act out the enormous reservoirs of animosity and spite they have been building towards each other for the last decades. The recent events in Charlottesville, Virginia, in which rightists and leftists clashed in the streets, causing enough turmoil to force Governor Terry McAuliffe to declare a state of emergency, and a counterdemonstrator (32-year-old Heather Heyer) was killed, would seem to be a validation of the direness of my predictions, and a clear harbinger of more virulent clashes to come. However, as is usually the case, there is more to the picture than meets the eye, and a sober assessment of the events rather makes me more cautious, and even a bit more optimistic. Let’s see why.

1-    What happened (are the streets burning yet?)

First, let’s unpack what actually went on. The Virginia locality (population: 48,210 as of the 2010 census, and home to the University of Virginia) had announced its intention to remove a statue of Confederate general Robert E. Lee from the eponymous park (previously renamed Emancipation Park, which already tells you a lot about Charlottesville’s municipal government).


A loose network of rightist groups opposed to the measure plans a rally for Saturday, Aug 12th under the banner “Unite the Right”. The rally is duly notified to the authorities and nominally permitted. The previous day (Fri, Aug 11th) Governor McAuliffe announces via Twitter that, although the demonstrators are protected by their constitutional right to express their views, he finds such views “abhorrent” and encourages Virginians of all persuasions to stay away from the march.

The night of the 11th, hundreds of demonstrators march through the campus of the UoV carrying torches (as seen in countless photos, most seem to be oddly out-of-place tiki torches more fit for lighting a barbecue on a patio in the ‘burbs than for being wielded in an exhibition of white power or whatnot…) and chanting niceties like “White lives matter”, “you will not replace us” (most journalists transcribe it as “Jews will not replace us”, as it has more sinister overtones and surely sells more papers) and “blood and soil”. Along with the torch-carrying and white-supremacist slogan chanting, a number of marchers can be seen unequivocally extending their arms in what can undoubtedly be construed as the traditional Nazi salute. The march is widely and luridly covered by all major media organizations in the country.

On the morning of Saturday, Aug 12th, demonstrators start gathering around Emancipation Park, both for the planned march and to protest against it. A significant number on both sides are armed with visible weaponry and paramilitary gear that would be unimaginable in any other country but which, Virginia being a permissive open-carry state, we can assume is nothing out of bounds in the USA. To the surprise of exactly nobody, given the level of publicity achieved by the rally, as many counterdemonstrators as potential demonstrators can be seen around the park, and there are a number of clashes between the two (check the NYT account: People pretending to fight - badly ). Around 11:00 AM Governor McAuliffe declares a state of emergency, revoking the authorization for the Right’s march and ordering the attendees of the rally to abandon their location and disperse. It has to be noted that such “dispersal” would force them to march through the throngs of counterdemonstrators gathered around the park, multiplying the chances for fights, clashes and brawls (check the alternative account of an avowedly alt-right attendee: What Loretta saw at C-ville ).

Clashes and widespread violence continue (but, amazingly given the stupendous number of guns seen all around in every video and photo, no shootouts are reported… of the somewhere north of 30 wounded, most are from punches and blows with blunt objects, plus some pepper spray), reaching their high point (regarding lethality) when a Dodge Challenger driven by one James Alex Fields Jr. slams into a group of counterprotesters, killing the aforementioned Ms. Heyer. Later in the day, the crash of a police helicopter monitoring the day’s developments would add two officers to the total body count.

To top off the division and shock of the nation, the President famously talked to reporters from his golf resort in Bedminster, NJ, condemning the violence, which he blamed on “many sides”, causing not only liberals and progressives but also members of his (purportedly) own party, like Speaker Paul Ryan, to denounce him for “putting on the same moral plane” the “Nazis and anti-Nazis”. Not one to back off or publicly retract any proffered opinion, Trump would later say that a lot of “very fine people” attended the alt-right rally, and would insist on equally apportioning blame to “both sides” in remarks to reporters at Trump Tower the following Tuesday (Orange one's response ).

2-    How the media reacted (burning? Man, they are exploding! Crumbling! Sizzling!)

Before I offer my interpretation of the facts I’ve just described, I think it’s worthwhile to reflect on how the media have portrayed them, from different points of the political spectrum. We have grown accustomed to the very post-modern idea that there is no “true truth”, just different discourses, or narratives, weaving a hermeneutical network of signifiers that denote no precise signified at all. What that somewhat obscure assertion means is that, for the few who still read news from outlets with different political alignments, it is pretty common to find entirely diverging descriptions of a single event, to the point of making it difficult to identify such descriptions as applying to the same underlying facts. In the case under consideration, the differences have been predictably magnified: for the mainstream media it has been a national disgrace, the symptom of a seriously corroded and corrupted social compact that not only allows, but apparently encourages, normal, seemingly well-adjusted young men (wearing that ultimate symbol of successful integration into middle-class status: khakis and polo shirts! That is no way to dress for the fascist takeover of the state, sir! What self-respecting revolutionary would exchange his jackboots and brown shirt for such bland attire?) to publicly and unashamedly proclaim their Nazi sympathies and their scorn for every age-old convention of what is acceptable and proper in a democracy. Nazi salutes? Check. Open proclamation of racial slurs? Check. Embrace not just of a somewhat morally tainted past (the Confederacy and the Old South) but of beyond-the-pale fringe elements (like the Ku Klux Klan, David Duke and even ol’ Adolf himself)? Double check.

Unsurprisingly, both the MSM and left-leaning circles have been having fits of apoplexy and denouncing the whole thing as furiously and unambiguously as possible, while at the same time pointing to the (at best equivocal) reaction of the White House as the indisputable proof of the racism and unabashed association with White Supremacism of not just the President and his inner circle, but the whole Republican establishment (the most commented-on piece along those lines is surely the one penned by Ta-Nehisi Coates in “The Atlantic”: The first white president (how much does the USA suck?) ). For them the right in general is racist, no exceptions allowed. And with fascist tendencies all along, so no surprise it resorts to threats, violence and finally murder. The sad outcome of the many clashes in Charlottesville is not an isolated incident carried out by a mentally unstable individual, but the unavoidable consequence of a noxious ideology that, left unchecked, will cause many more eruptions like that one, and many more deaths (hence, combatting it by any means is the only rational and commendable course of action).

Fox News, the right-leaning talk-radio hosts (Limbaugh, Hannity, Savage, etc.), the Murdoch press and the abundant orthosphere, NeoReaction and alt-right sites in the Interwebz see the whole story in a very different light. Their sympathies were from the beginning clearly with the initial demonstrators, not just because they fully endorse the country’s racist past, which they more or less unabashedly do, and thus also oppose the removal of any statue of Confederate heroes over the -in their eyes- minor detail of their having led a rebel army against a constitutionally legitimate government for the sacred cause of being able to keep humans of a different skin color enslaved, but because in general “uniting the right” is something they can all rally behind (sharing, all of them, a sense of dread and disgust towards what they see as an almost unstoppable tide of progressivism and leftism that constitutes an existential threat to everything they hold dear and consider sacred). For all those media, the counterdemonstrators were an unholy and ragtag alliance of everything that is wrong with America today: feminists (“feminazis”), BLM sympathizers (“race traitors”), LGBT advocates (“faggots and butches”) and in general progressives and liberals. Instead of “proud boys”, impeccably white and well-groomed, marching in their khakis and polo shirts (oddly complemented by a peculiar assortment of shields, knee pads and helmets), a bunch of blacks, short-haired girls and old hippies with questionable fashion sense carrying bullhorns and placards that seemed plucked from some outdated documentary about racial protests of the 70’s (but let’s not forget the mobile phones, which were mercifully absent back then… one can only wonder about the volume of uploads to Instagram, FB and the like from demonstrators of every stripe preening about their exploits, in a new, social-media-age version of the old “radical chic”).

Few have claimed that the victim among the counterdemonstrators somehow deserved it, or “had it coming”, but the narrative they weave leaves little doubt that this is how they see it. For the right-wing media the whole episode is a further illustration of the inability of the current state (seized by liberals and traitors) to protect decent citizens, from the declaration of the state of emergency (which only served to further curtail the constitutional right to free expression of the perennially silenced part of the social body that does not share the left’s worldview) to the failure to protect the people gathered in Emancipation Park from the taunts and aggressions of the dangerous “Antifa” mob. Never mind that the only actual casualty was in the ranks of the supposedly aggressive, dangerous and deranged anti-American extremists who went to disrupt a perfectly peaceful and tranquil event. Again, it was all the fault of the “Cathedral”, in this case personified in the Democratic Mayor, the Democratic Governor, the mob of dangerous radicals bent on violence and mayhem grouped under the label “Antifa” and, of course, the devious mainstream media that distorted and manipulated the emotions of some young man so that he ended up committing a crime.

3-    So, all of this validates the narrative of “civil war tomorrow”, right?

Er, actually wrong. Always the contrarian, I see more positive than negative aspects to take into consideration after the events in Charlottesville. And remember, I could construe them as a validation of my predictions of a quick descent of the American polity into fractiousness and conflict (Guys, you're screwed ). But that’s not how I read it. For one, I won’t claim to be the greatest street brawler and bruiser of all time, but I’ve been involved in my share of fights (most of them had unwise amounts of alcohol involved, so take my account with a pinch of salt) and I was surprised, watching the many videos of the “violence”, by how… “performative” it looked, and how little actual rage or aggression it showed. The few punches that are exchanged in front of the cameras (whose presence may be a distorting factor, or, the other way round, the catalyst for all the action) resemble more a limp attempt to swat a mosquito than an actual intent to cause maximum damage whilst minimizing the puncher’s exposure.

We humans are a social species, and as every military instructor will tell you, getting normal people ready to shoot at their fellow human beings (even at considerable distance, where the feeling of common humanity can be more easily overcome) requires quite extensive reprogramming. When I was younger (actually, much, much younger) I knew my share of seedy neighborhood gyms, each of which had its crew of testosterone-addled asocial troublemakers (and yes, a disproportionate percentage among them were already “extreme-right” and trained to join either the armed forces, the police or any self-styled anticommunist crusade in not-so-distant Francoist Spain, their fathers or grandfathers having typically fought side by side with actual, honest-to-God German Nazis and Italian Fascists, the real thing and not the imagined bugbear so easily peddled in leftist fantasies). Even the most apparently psychotic among them had some difficulty overcoming the innate human revulsion towards doing harm and seeing other people suffer (although in some cases, it has to be said, they became quite good at such overcoming). I’ve seen how those guys hit, and that’s very different from what the footage by the NYT, CBS, ABC, WaPo, CNN or Fox shows. What that footage (much of it seems to be the same limited number of events shot from different points of view) depicts is a certain, limited number of posers running in front of the camera to have a go at throwing a (typically ineffective) jab at the least imposing element of the opposite tribe, and then retreating precipitously back to safety among their own numbers, having accomplished their main goal, which we can surmise was never to gain territorial control of the contested streets, but to snatch a nice graphical testimony to hang on their Snapchat or Instagram account.

As I was not present in the city during the events, I cannot say for sure that all the “street violence” so luridly reported by alarmed journalists was of this theatrical nature. Obviously the guy who rammed his car into the crowd, killing one and wounding many others, was not “just doing it to look badass on Instagram”, and caused real, grievous and irreversible damage. Additional people were hurt badly enough to require medical attention (19 in the car ramming and 14 in other incidents), but if what the newsreels show is any representative indication, I think Charlottesville was a hybrid between a theater and a giant playground for not-fully-grown-up kids, where self-styled radicals from left and right enacted their fantasies of being badass, rebellious and violently (and valiantly) opposed to the unacceptable ideas of the other side.


Please note that with this first interpretation of the events I’m not trying to establish any moral equivalence between both sides, or claiming that white supremacism, racism and even ol’ Nazism are somehow OK (or not; I really don’t buy pieties or second-hand opinions from any peddler of political correctness, and my opinions about such issues are really my own and not to be discussed here). I hope we can all agree that the “Unite the Right” organizers had considered that their little show turning violent was a real possibility (heck, if not, why come with all the defensive gear, the shields, helmets and, especially, the “security details” for the most prominent figures?), and that organizing a public event knowing it will turn violent (and thus assuming people will be hurt) is at best irresponsible, and at worst outright evil. Yep, I know oppressed minorities in repressive states may be excused for resorting to violence when no other way of redressing their grievances is open to them, and for many people in the US “alt-right”, theirs is precisely that kind of state. We’ll get back to that contention in a moment…

But it is similarly irresponsible/evil to attend such an event in order to take part in that same violence from the other side, regardless of how virtuous your ideas are. The traditional distinction between “defensive” and “offensive” violence does not apply, as we are not dealing with people who were going about their daily lives and suddenly found themselves confronted by a bunch of aggressive fascists threatening them, but with groups of activists who travelled to the scheduled demonstration location to harass and confront demonstrators for expressing their ideas, with the justification that such ideas are obnoxious and morally indefensible (again, I’m not yet declaring whether or not I concur with such characterization). It is the embrace of violent means which constitutes a) the essence of totalitarian regimes (which define themselves by abandoning the public, peaceful discussion of policy alternatives as the main way of consensus formation and resorting instead to its unilateral imposition, by whichever means -that’s where violence comes in) and b) the most salient and morally repugnant of their features (an imaginary “benevolent dictatorship” that never inflicted any pain on any of its subjects would be much less evil than one which systematically did -see “enlightened despotism” as proof). Am I saying with this that Trump was right, and we should condemn violence “from all sides”? Does such generic condemnation mean that we indeed consider both sides “morally equivalent”? (Let’s call them, for greater clarity, fascists and anti-fascists, or white supremacists and anti-white supremacists -or would it be more accurate to call the latter “white subserviencists”?) Not to put too fine a point on it, yes and no. I do indeed oppose (and condemn) every kind of violence, regardless of how honorable the cause it defends, or how ignoble and vile the cause it attacks. In cases of terrible oppression, when every other means of redress is closed, it could be justified to resort to causing pain (including to innocent people), always in a most limited, most circumspect manner, but those cases are few and far between, and certainly none of them obtains in modern-day America (or in modern-day Europe).

Which is not to say that, once violence is unleashed, every participant is similarly to blame: those who start it (those who hit first) are normally more to blame than those who respond to it. Those who lose their temper and escalate it (and respond to a taunt with a punch, or to a punch with a shot to the head) are more to blame than those who keep their cool and show some restraint, trying to keep it proportional and not inflict more pain than what they themselves may have suffered. And yes, those who engage in it to advance a “respectable” cause (for a Kantian like me the proof of respectability is pretty straightforward: those who act according to a maxim they can universalize, so they would like to see it become a universal law of nature, or, alternatively, those who treat other people as ends in themselves and never merely as means) are less to blame than those who defend dubious, non-universalizable, particularistic causes. Only according to the first two criteria are the white supremacists who intended to march in Charlottesville morally equivalent to the counterdemonstrators who tried to stop them; according to the third, their cause, being associated with racial segregation, a celebration of slavery and sedition (which entails a violation of the rule of law), and thus strictly non-universalizable, makes them clearly inferior to those who showed up to oppose them.

Now that that’s out of the way, let’s go back to why I find that the sad and tragic events that unfolded on the 12th of August still contain a reason to rejoice: essentially, what they showed is that the US of A is much farther from a civil war than I feared, as the most vocal proponents of the dehumanization of half the country that is required for such a confrontation to take place are a really tiny minority, unable to inflame the passions of a sizeable number of their countrymen (as of today still very, very far from reaching the critical mass needed to have any significant impact on the political, let alone military, balance of forces in the country) and willing to fold when confronted with the possibility of a real fight. During the presidential campaign I tended to disagree with many analysts on the left who dismissed the perils of a Trump victory by saying that the number of his followers who bought into white supremacist fantasies was very minor, on the order of a few thousand, but now I think they were spot on, given how easy it was for a bunch of ragtag organizations to outnumber them on very short notice. Breitbart may claim some hundreds of thousands of readers, and the Daily Stormer (now disappeared from the “public” internet) some tens of thousands, but we’ve seen that lurking in unsavory virtual places while safely seated in your parents’ basement is one thing, and hauling your ass to a demonstration with fellow extremists where said ass can be repeatedly kicked is a very different one. A lot of people seem to have signed up for the first, but precious few for the second.

And the media on the right have noticed, as the diagnosis I’ve encountered most frequently is that the “Unite the Right” rally was an unmitigated disaster, brought tons of bad publicity and has probably set the movement back a few years, if not decades. A lot of people, even those of a most conservative persuasion, still balk at being identified as Nazis, or at being associated with the Ku Klux Klan. If I were a cynic I would show some surprise at the apparent inconsistency of endorsing blatant discrimination towards certain ethnic/cultural groups (browns and blacks) but being uneasy about being associated with those who demonize others (Jews), as that’s where the line seems to be drawn. Sorry, but I fail to see how a Nazi is so much worse (and thus so amazingly more evil) than a super-nationalist, jingoistic hick who wants to send “people not like him” (because he considers them inferior and not fully human) back to their countries of origin, just because the former includes in the “not like him” category some people externally indistinguishable from himself (Jews) and the latter does not. And just to be clear and avoid mistakes, it is not because I sympathize with one more than the other: for the record, I consider both equally unacceptable and indefensible. However, a number of alt-right bloggers and neoReactionary thinkers seem to be happily aligned with the super-nationalist jingoist but reject being labelled as full-throated Nazis (see Mencius Moldbug, for obvious reasons).

But enough cynicism already. Back to the uplifting consequences of proto-fascist thugs being routed in the streets: we can expect much less visibility from them, and that is not a bad thing. We will see similar levels of rancor and spite and foaming-at-the-mouth between progressives and conservatives, and we will see one or the other condone ugly behaviors (both in words and deeds) as long as they are exhibited by someone from their tribe and cause harm to someone from the opposite one, but such ugly behavior will return to the electronic realm: the usual trolling, badmouthing, toxic name-calling and occasional banding together to be overheard in front of niche audiences (sad puppies), but no street fighting (and hopefully, no car ramming into the enemy’s ranks). So cheer up, Americans! It seems like your simmering Second Civil War will remain a virtual conflict for a few years yet. Whether and when it becomes real (not that original an idea; see American War by Omar El Akkad -oh, I forgot, you barely read, much less a book by an Arab-sounding author) is still up to you to decide.