What is it about Artificial Intelligence and Humanity…?
At Loughborough University’s Centre for Information Management, we have been asking, “What is it about humanity that we can’t give away to intelligent machines?”
By this we mean to ask what we might give and what we must not give. What authority given to intelligent machines is conducive to our human selves, and what authority is not? What helps us and what hurts? Where is the boundary, and how will we know it?
We are writing about this topic with multiple authors from across Europe, and a paper is forthcoming in the International Journal of Information Management. Our aim is to develop a new perspective on the relationship between our human place in the world and the intelligent automation (Coombs et al., 2020) that we are increasingly drawing into our lives.
A New Bauhausian Unity
Taking a cue from Dr Patrick Stacey, whose work calls out a “de-levelling” and calls for a “critique!”, we are writing of a humanity that is elemental in the world, and whose best becoming is the ultimate justification of any automation project. In simpler terms, we are suggesting that the basis of automation decisions begin to shift from the economic towards considerations of wellbeing. Hence, not for the first time, an economic rationality and a deeper cause of wellbeing are pulled into tension. Resolving this tension will require new understanding of both subject areas: we need to look again at economic reasoning and at theories of wellbeing. The theatre will be staged over thousands, even millions, of decisions about what should be automated and what should not.
We need to know.
Stacey’s position is that human wellbeing has to be primary. It has to be understood, known collectively, and pursued as an end-goal in the new instrumentation. This “New Humanism” with its call to critique stands in opposition to the fluidity of sociomateriality (Orlikowski & Scott, 2008), wherein human wellbeing is not explicitly prioritised within a social and technological arrangement. Hence, as machine intelligence advances, there is the risk that the human becomes “de-levelled”, or just another aspect of a system that is constructed on some economic rationale and for some ultimate political interest or dystopian technical necessity.
To debate this issue we convened a special discussion dedicated to our question, “What is it about humanity that we can’t give away to intelligent machines?” This discussion took place during the 2019 annual meeting of the European Research Centre for Information Systems (ERCIS), which was held at the Centre for Information Management (CIM) of Loughborough University. Patrick Stacey led the discussion with his presentation “Towards a New Humanism. Or are we too late?” The host team from CIM identified four themes to investigate regarding machines and humanism: Crime & Conflict, Jobs, Attention Economy, and Wellbeing. These themes were selected because of the coverage they had already attracted in the media and in academic discourse.
To encourage discussion, a world café format was adopted. Our world café had four tables, one for each theme, each with a table host who collated the discussion. There was time for three café rotations, allowing ERCIS colleagues to discuss different themes. Key discussion points were captured using post-it notes on flip charts.
After the workshop, Dr Boyka Simeonova reviewed all the post-its and created a matrix of themes and cross-cutting issues, which was shared with the table hosts. Patrick and the table hosts were invited to write up their contributions for publication. The final paper has authors from five European nations and one from South America. The workshop and write-up process have produced a thought-provoking research agenda that we believe will be pursued in further partnership through ERCIS and across global academic networks.
An important output from this debate is that we propose an emboldened humanism that we centre on the notion of a new Bauhausian human-machine unity. Such a unity rests on three pillars.
- Accepting the centrality of human critique of our own systems.
- Deliberately enabling this critique through design: human need and value must be designed into the loop of intelligent machine decision-making.
- Deliberately enabling this design through a traceable relationship – the unity requires that humans can interpret and explain intelligent machine decision-making in progress.
It is through the acceptance of these terms that a new theoretical fork is reached, and the human requirements that have been “de-levelled” in debate are elevated and prioritised in a re-turn to a newly expressed humanism (Stacey, 2019). Human wellbeing rests on the design of systems that serve it: systems that refuse to de-level the human to a functionary of some combined intelligence, and which reserve for us the power to “critique!”
This blog post was written by Professor Peter Kawalek, Director of Loughborough University’s Centre for Information Management, and Dr Crispin Coombs, Head of Loughborough University’s Information Management Academic Group. If you’d like to know more about the research we do in this area, please visit the Centre for Information Management website.
Coombs, C., Hislop, D., Taneva, S. K., & Barnard, S. (2020). The strategic impacts of Intelligent Automation for knowledge and service work: An interdisciplinary review. The Journal of Strategic Information Systems, 101600. https://doi.org/10.1016/J.JSIS.2020.101600
Orlikowski, W. J., & Scott, S. V. (2008). Sociomateriality: Challenging the Separation of Technology, Work and Organization. Academy of Management Annals, 2(1), 433–474. https://doi.org/10.5465/19416520802211644
Stacey, P. (2019). What is it that Humanity Should Not Give Away to Machines? In Bath Royal Literary and Scientific Institution. Retrieved from https://www.brlsi.org/events-proceedings/proceedings/what-it-humanity-should-not-give-away-machines
Image licensed under Creative Commons.