The human report in the age of artificial intelligence
This blog post was written by Professor Peter Kawalek.
Sometimes I say to the students, "…it is like Rome. It is like Rome. Silicon Valley is like Rome. We live in its wake, somewhere north of it, part of it but not really at the centre. And it is the era of Antoninus Pius or somewhat earlier. We are not sure of the peak of it, not sure of the end."
There are lots of differences, of course. Even more than ancient Rome, Silicon Valley is at the centre of a web. Its Ivy League temples and its cousin in Seattle must be considered as part of the inner court. But like Rome, Silicon Valley holds the vines of power, it is a magnet to a multi-racial citizenry, and its output, its export, is the world itself.
Last week, Elon Musk sent a Tesla to space with David Bowie looping on the hi-fi. Two out of three sections of ‘Falcon Heavy’ made it back to Earth. What is this really? The start of inter-planetary mineral mining?
Just a couple of days ago, on Twitter, an online populace watched as two Boston Dynamics robotic dogs unlatched a door and freed themselves from a laboratory. At Stanford, researchers contemplate new strategic roles for radiologists, whilst AI takes over radiology itself. The professional classes are in the front line of these changes.
McKinsey has something to sell, of course, but it does the stats and the tables well, and, citing AI and other developments, it concludes:
“We estimate that between 400 million and 800 million individuals could be displaced by automation and need to find new jobs by 2030 around the world, based on our midpoint and earliest (that is, the most rapid) automation adoption scenarios.” (Read the McKinsey report)
My younger daughter will still be 26 in 2030. It is less than 12 years away.
Here is Mark Carney speaking in December 2016 in the former economic power of Liverpool:
“The fundamental challenge is that, alongside its great benefits, every technological revolution mercilessly destroys jobs and livelihoods – and therefore identities – well before the new ones emerge.
“This was true of the eclipse of agriculture and cottage industry by the industrial revolution, the displacement of manufacturing by the service economy, and now the hollowing out of many of those middle-class services jobs through machine learning and global sourcing.”
“…well before the new ones emerge.”
If a transition takes long enough, then for most people it is better described as decline. And probably also as ‘poverty.’
I enjoyed Harari’s books Sapiens and Homo Deus even though as an academic I am supposed to point out that they rely on many simplifications.
They spit fire in multiple directions and ultimately assemble to a drumbeat of constant reminder: soon we will no longer possess the most functional intelligence on the planet, says Harari. One day we will not have the most functional intelligence on the planet, d-rum, dr-um. Soon you will not be posses’d of the most functional intelligence on the planet. Point made, again and again. That power will be passed to machines.
All we will have to offer is our consciousness. And our heart.
We will see.
One of the best things about a business school is its ontological uncertainty. Long thought a weakness by some scholars, an academic institution that struggles to define and defend its own disciplinary boundaries is better able to take on the variety of the world it seeks to comprehend. All it now takes is for the world to speed up some more, and scholars will increasingly see that for the business school, at least, information theory is a nexus. I do not want to call it a Theory of Everything, but I will settle for the politer claim that it is a helpful ‘Theory of Every Underpinning.’ Accountancy – information. Marketing – information. Supply chain – information. Power – information. Leadership – the massage of information.
Indeed, on the best campuses, I think, over the next ten or so years, there will be a kind of productive uncertainty. There will be a sense of the questions that lie between disciplines and between departments. The future itself will be somehow metamorphic, not designed, but rather drawn out of the affordances of many disciplinary inventions and interventions. No single school or department can monopolise this.
Coda, for the students.
There is no course I teach with the breadth that would allow it, but really I should conclude with my students by taking them back to California. This time we should visit neither Tesla nor Google, nor the digital labs of AI and robotics and tech. Rather, we should visit the human labs of biology, psychology, neuroendocrinology and neurology.
Scholars like Elizabeth Blackburn and Robert Sapolsky have long been onto findings as significant as those we see in the tech labs. This time, rather than Artificial Intelligence, we learn of Human Intelligence itself: how it is formed, how it develops, how it is affected by trauma and stress, how the body might sometimes be protected or repaired.
It is a delicate thing, this human intelligence, and we have nurtured it badly, or at least unevenly and haphazardly. Sort of cosmologically, I don’t care whether or not a student can read a graph of data as well as a robot might. When I listen to Blackburn or Sapolsky, or Martha Farah in Pennsylvania, I am interested in that student’s intelligence as it is entire. And kind of for its own sake.
Many scholars are starting to say this now, but maybe the shift we are seeing in society is to lessen or end the functional account of human beings, where we are only ever as good as our in-tray, and to start a developmental account in which we learn and we become because these things are good in themselves.