Wednesday, November 12, 2014

The Second Machine Age (Brynjolfsson & McAfee)

I finished "The Second Machine Age" this past weekend, a book I knew I was not going to like and with which I knew I would mainly disagree. Boy, was I right.

The whole book is a fine example of what has been called "present bias", or the tendency of the human mind to give recent experience an inordinate weight when assessing its own situation (it's the same phenomenon that makes us think crime must be at an all-time high right after being mugged, regardless of the objective fact that crime rates are at lows not seen since the 60's, or that air travel must be more dangerous than other means of transportation after hearing about a plane crash, when it is actually one of the safest ways to move). According to the authors, we are (right now, aren't we lucky?) at a critical moment in the history of the human species, as we are about to enter the "2nd machine age" of the title, and so about to witness ever-increasing improvements in our standard of living, our well-being and our material (and spiritual) wealth. How so? Mainly through the exponential growth of computing power (captured by the by now all too famous Moore's law), which has enabled the development of ever more powerful Artificial Intelligences (AIs) that will in turn boost our productivity, and thus produce growing amounts of goods (both material and cultural, as in many cases the only product of today's industries is information) with decreasing consumption of resources (be they capital, natural or labour).

There are a number of obvious flaws in that reasoning, starting with the fact that the "brilliant technologies" that just a few years ago "seemed like science fiction", and with which the authors are so besotted, lose a lot of their luster on closer examination (and after quick acclimation; few things look as dated as yesteryear's wonders): yup, Google has developed a car that drives itself under most circumstances (but not in heavy city traffic, which is precisely the kind of driving people loathe most); IBM has developed a software program that beats human champions at the game of Jeopardy! (this one tends to impress us non-American audiences less, as we are not that fond of the game in the first place) and may be used for clinical diagnosis; Waze uses the location of its users to dynamically calculate the fastest path between two points (which is less useful in European, Asian or African cities with older, non-grid-like centers, where there may only be one way to get from point A to point B, so all Waze can do is let its users know the amount of misery unavoidably in front of them)... and that's about it. Convenient? Certainly. Shocking? Earth-shattering? Hardly.

What the cheerleaders for the beneficial and unprecedented impact of what we could legitimately call the "IT revolution" do not seem to grasp is that, technologically, we haven't progressed that much in the last four decades. We produce most of our energy with power plant designs from the 60's and 70's. We travel in planes, trains and boats designed in the same era. Even the cars that populate our roads are based on designs that haven't changed that much (as Bill Gates famously noted in a much ballyhooed comparison) since the 40's of the past century. Construction-wise, our buildings, offices and factories use the same materials, shapes (with the exception of Santiago Calatrava's and Frank Gehry's creations) and techniques that were developed a century ago. Even our ability to put payloads in orbit has not evolved that much since the late 60's. We do have almost viable solar energy (that's a novelty), almost viable electric vehicles and almost viable cheap rocket launchers, but I'm hesitant to declare that a significant breakthrough heralding a new era of ever-increasing productivity gains and unmatched progress. That doesn't mean I am ready to dismiss the aforementioned revolution and declare all the recent developments in IT and communication inconsequential. I do think it is affecting our society in fundamental ways, that it will end up spelling the end of capitalism as we know it, and that it will prove as significant as the discovery and popularization of the printing press (a good parallel, by the way: the printing press didn't significantly improve the material conditions of its era either, and it most definitely didn't usher in a new era of productivity gains or economic development). But for reasons that have nothing to do with what Brynjolfsson and McAfee present in this book.

Almost as an aside, I have to say I found especially tiresome and uninformed their comment about the tremendous impact IT is having on the "human sciences" (as it should, since no area of human interest can be left unaffected by such important developments!), advancing the nauseating name of "culturomics" for a purported new approach to the study of social disciplines heavily reliant on the use of computers and numerical analysis. Such an approach, by parsing millions and millions of pages written over the past centuries, has already brought to light amazing discoveries, to wit:

  • fame is achieved faster these days than it used to be, but it also fades faster
  • the total number of words in the English language grew by 70% between 1970 and 2010
  • by the middle of the 20th century, interest in Darwinian evolution was fading, until Watson and Crick's discovery of the structure of DNA reignited it

Well, this just blew my mind! Such insight! Such depth! This really convinced me the time of the machine had finally arrived, as only sophisticated algorithms poring over unimaginable amounts of raw data could have reached such counterintuitive, deeply disturbing conclusions! Steven Pinker would be proud (and Leon Wieseltier put to shame) of such brilliant, scientifically sound inroads into the dusty, musty world of the humanities...

Jokes aside, the fact that the authors unwittingly expose themselves to such ridicule by presenting as "interesting" what any scholar worth his salt would consider trite (if not downright moronic) just highlights the problem with this techno-optimism: when chanting the praises of AI, it tends to forget that we still do not know (and do not have an operational working definition of) what "NI" is. It is great to declare we are building better and better Artificial Intelligences, but you don't need to scratch very deep to find that we would never know whether we truly are, as we still do not agree (are quite clueless, frankly) as to what "Natural Intelligence" consists in. And this is not just wordplay, but a classic case of people with hammers seeing the whole field as a nail.

By the beginning of the last century, as we learned to compute with machines, we got all excited and assumed that was all intelligence consisted in: really complex computations. So you got a machine computing and following algorithms at enough speed and voilà! you would have an intelligent machine (this may sound whimsical, but completely serious philosophers of mind said exactly that: put enough processing power in a sufficiently small volume and sentience and intelligence would almost magically "supervene"). Now we have become able to implement some sorts of pattern recognition via algorithms, and we have duly come to the conclusion that pattern recognition is really what intelligence is all about (see Jeff Hawkins's "On Intelligence", which impressed me much when I read it for the first time eight years ago, but which I now find much less compelling), and we are again feeling very excited, thinking we are on the verge of getting really "intelligent" machines without the need to really understand how our own intelligence works. Well, sorry to disappoint, but I'm afraid we are as far from getting anything resembling intelligence as we have ever been. More and more I see that intelligence requires things like "caring about" and "valuing", just for starters, which we are utterly clueless about how to implement in a software program (or in a hardware substrate, for that matter), and without which there is not much progress to be made (apart from cleverer and cleverer "mechanical Turks" that may someday even pass the Turing test without really having a single thought).

But economists being all wrong about neuroscience and philosophy of mind is really par for the course, and doesn't really surprise me at this point. What saddens me is seeing obviously brilliant people also being mostly wrong in the economic analysis that should be their core competency. But we will need another post to dwell on that...
