The AI Governance of AI: The Rising Dilemma of Determinism versus Efficiency

by Josh Entsminger, Mark Esposito, Terence Tse, and Danny Goh

Our lives are ruled by data. Not just because it informs companies of what we want, but because it helps us to remember and differentiate what we want, what we need, and what we can ignore. All these decisions give rise to patterns, and patterns, when aggregated, give us a picture of ourselves. A world where such patterns follow us, or are even sent ahead of us (to restaurants to let them know if we have allergies, to retail stores to let them know our preferred clothing size) is now so feasible that labeling it science fiction would expose a lack of awareness more than a lack of imagination.

AI is making many of our choices easier and more convenient, and in so doing, tightening competition in the space of customer choice. As this evolution unfolds, the question is less to what extent AI is framing our choices than how it is shaping them. In such a world, we need to understand when our behavior is being shaped, and by whom.

Clearly, most of us are quite comfortable living in a world where our choices are shaped by AI. Indeed, we already live in such a world: from search engines to smooth traffic flow, many of our daily conveniences are built on algorithmic decisions made in the backend. The question we need to ask ourselves when considering AI, and its governance, is whether we are comfortable living in a world where we do not know if, and how, we are being influenced.

Behavioral Bias or Behavioral Cues?

AI can do for our understanding of behavior what the microscope did for biology.

We have already reached the point where software can discover tendencies in our behavioral patterns that we ourselves might not notice, identifying traits that even our friends and family would not know. The infamous, but apocryphal, story of the father who discovered his daughter was pregnant when Target began sending her advertisements for baby supplies (after detecting a shift in her spending) gives us a sneak peek.[i]

Our lives are already ruled by probabilistic assumptions intended to drive behavior. Now we need to ask, and answer honestly: how much of our lives are we willing to have shaped by algorithms we do not understand? More importantly, who should be tasked with monitoring these algorithms to flag when they have made a bad decision, or an intentionally manipulative one?

As more companies use AI, and the complexity of its insights continues to grow, we will face a gap beyond the right to an explanation or the right to be informed: a gap in knowing when, and whether, a violation has occurred at all.

As our digital presence grows, pulled in public directions by the future of e-governance and in private ones by how we engage with our interests, meaningful governance will have to include an essential first step: the right to know how our data is being used, who has it, and when they are using it.

Another example of how quickly behavior and technology are interfacing comes from chatbots, which have been shown to foster emotional attachments that might be exploited for manipulative purposes.[ii] As developments in natural language processing combine with advanced robotics, the potential to build that bond through touch, warmth, and comfort also grows, particularly in a world experiencing an epidemic of loneliness severe enough to drive the UK to appoint a minister for loneliness.

As machine-to-machine data grows in the Internet of Things, companies with preferential access will gain ever more insight into ever more minute aspects of behavioral patterns we ourselves might not understand, and with that comes a powerful ability to nudge behavior. Good data is not just about volume, it is about veracity; as IoT grows, we are handing firms everything they need to know about us on a silver platter.

Still, we can argue that the issue is not the volume but the asymmetry of analytic competence in managing that volume, and therefore the asymmetry in capturing value. In turn, this means some companies not only understand you, but can predict your behavior to the point of knowing how to influence a particular choice most effectively. In the age of big data, the best business is the insight business.

Accountability: Who Is Looking After Us?

The first question in building accountability is how to keep humans in the decision loop of processes made more autonomous through AI. The next is to preserve accountability through the right to understanding: to know why an algorithm made one decision instead of another.

New proposals are already emerging on how to do this. For example, when specific AI projects are proprietary aspects of a firm's competitiveness, we might use counterfactual explanations to assess what would have had to change for an AI to decide differently.[iii] But systems that map decisions without breaking open the black box will not be able to provide the rationale by which an algorithm made one decision instead of another.
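
The idea in [iii] can be made concrete with a small sketch. The code below is a hypothetical illustration, not the authors' proposed system: it treats a toy loan model as a black box that can only be queried, and searches for the nearest input that flips the decision, which is the counterfactual.

```python
import numpy as np

# A stand-in black-box model: we can query its decisions but not inspect it.
# (Toy loan scorer; the counterfactual search only uses predict() calls.)
def black_box_approve(x):
    """Approve a loan for features x = [income_k, debt_k]?"""
    return 0.04 * x[0] - 0.3 * x[1] > 1.0

def find_counterfactual(x, predict, n_dirs=500, max_radius=50.0, n_steps=200):
    """Probe random directions at growing radii and return the closest point
    where the decision flips. The black box stays closed throughout."""
    target = not predict(x)
    rng = np.random.default_rng(0)
    best, best_dist = None, float("inf")
    for _ in range(n_dirs):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)                  # unit direction
        for r in np.linspace(0.1, max_radius, n_steps):
            if r >= best_dist:
                break                           # cannot improve on current best
            if predict(x + r * d) == target:
                best, best_dist = x + r * d, r
                break
    return best

applicant = np.array([30.0, 2.0])               # income 30k, debt 2k: rejected
print("approved:", black_box_approve(applicant))
print("nearest flip found:", find_counterfactual(applicant, black_box_approve))
```

The gap described above is visible here: the search can report that a somewhat higher income would have flipped the decision, but it cannot say why the model weighs income the way it does.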

Yet the problem goes deeper still. The problem with transparency models is the assumption that we will even know what to look for: that we will know when we should be offered a choice to opt out of a company's use of our data. In the near future, we may not be able to tell on our own when an AI is influencing us.

This leads us to a foundational issue: to govern AI, we may need to use AI.

We will need AI not just to understand when we are being influenced in overt ways, but to understand the new and emerging ways in which companies can leverage a micro-understanding of our behavior. The capacity of existing legal frameworks, existing political institutions, and existing standards of accountability to understand, predict, and catch the use of AI for manipulative purposes is sorely lacking.

Algorithmic collusion is already a problem: traditional pricing cartels are giving way to coordinated price movements that can appear and disappear without any prior agreement, thereby escaping the legal claims built on proving one.[iv] We can imagine a world where collusion is organized not across a market, but by tracking the behavior of distinct groups of individuals to coordinate micro-pricing changes.
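
To see how coordination can arise with no agreement at all, consider a minimal sketch: a toy Bertrand-style duopoly with two independent Q-learning sellers. All prices, payoffs, and parameters here are illustrative assumptions, not drawn from the OECD report; the point is that the code contains no channel through which the sellers communicate.

```python
import numpy as np

# Toy duopoly: each seller picks LOW (2.0) or HIGH (3.0) each round.
# Undercutting captures the whole market, so LOW is the one-shot equilibrium,
# while (HIGH, HIGH) is the jointly profitable "collusive" outcome.
PRICES = [2.0, 3.0]

def profits(a, b):
    pa, pb = PRICES[a], PRICES[b]
    if pa < pb:
        return pa, 0.0                  # a undercuts and takes the market
    if pb < pa:
        return 0.0, pb
    return pa / 2, pb / 2               # equal prices: split demand

def run(rounds=200_000, alpha=0.1, gamma=0.95, seed=0):
    rng = np.random.default_rng(seed)
    # Each agent conditions only on last round's joint prices (4 states).
    Q = [np.zeros((4, 2)), np.zeros((4, 2))]
    state, high_high = 0, 0
    for t in range(rounds):
        eps = max(0.01, np.exp(-t / 20_000))    # decaying exploration
        acts = [rng.integers(2) if rng.random() < eps
                else int(np.argmax(Q[i][state])) for i in range(2)]
        rewards = profits(*acts)
        nxt = 2 * acts[0] + acts[1]
        for i in range(2):                      # independent Q-updates
            Q[i][state, acts[i]] += alpha * (
                rewards[i] + gamma * Q[i][nxt].max() - Q[i][state, acts[i]])
        state = nxt
        if acts == [1, 1] and t >= rounds - 1000:
            high_high += 1
    return high_high / 1000

print("share of (HIGH, HIGH) in final 1000 rounds:", run())
```

Depending on the parameters and the run, the sellers can end up holding the high price much of the time, each reacting only to the other's last move. That is precisely the governance problem: there is no agreement to point to, only conduct.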

Naturally, questions emerge: who will govern the AI that we use to watch AI? How will we know that collusion is not emerging between the watchers and the watched? What kind of transparency will a governing AI itself require if it is to reduce the transparency demands we place on corporate AI?

The future of AI governance will be decided at the margins. What we need to pay attention to is less the shifting structure of collusion and manipulation than the conduct itself, and the ability of a competent AI to find the minimal number of points of influence needed to shape decision making.

We need a conversation that makes our assumptions and beliefs about price fixing, about collusion, and about manipulation painfully clear. In an age of AI, we cannot afford to be vague.

References
[i] Piatetsky, Gregory. "Did Target Really Predict a Teen's Pregnancy? The Inside Story." KDnuggets, May 7, 2014.
[ii] Yearsley, Liesl. "We Need to Talk About the Power of AI to Manipulate Humans." MIT Technology Review, June 5, 2017.
[iii] Mittelstadt, Brent, and Sandra Wachter. "Could Counterfactuals Explain Algorithmic Decisions Without Opening the Black Box?" Oxford Internet Institute Blog, January 15, 2018.
[iv] OECD. "Algorithms and Collusion: Competition Policy in the Digital Age." OECD, 2017.

Josh Entsminger

Josh Entsminger is an applied researcher at Nexus Frontier Tech. He additionally serves as a senior fellow at Ecole des Ponts Business School's Center for Policy and Competitiveness, a research associate at IE Business School's social innovation initiative, and a research contributor to the World Economic Forum's Future of Production initiative.

Mark Esposito

Mark Esposito is a member of the Teaching Faculty at Harvard University's Division of Continuing Education and a Professor of Business and Economics with appointments at Grenoble Ecole de Management and Hult International Business School. He is an appointed Research Fellow in the Circular Economy Centre at the University of Cambridge's Judge Business School. He is also a Fellow of the Mohammed Bin Rashid School of Government in Dubai. At Harvard, Mark teaches Systems Thinking and Complexity, Economic Strategy, and Business, Government & Society for the Extension and Summer Schools, and serves as Institutes Council Co-Leader for the Microeconomics of Competitiveness (MOC) program developed at the Institute for Strategy and Competitiveness at Harvard Business School. He is Founder & Director of the Lab-Center for Competitiveness, a think tank affiliated with the MOC network of Prof. Michael Porter at Harvard Business School, and Head of the Political Economy and Sustainable Competitiveness Initiative. He researches the circular economy inside and out, and his work on the topic has appeared in top outlets such as The Guardian, the World Economic Forum, Harvard Business Review, and California Management Review. He is the co-founder of the concepts of "Fast Expanding Markets" and "DRIVE", which represent new lenses of growth detection at the macro, meso, and micro levels of the economy. He is also an active entrepreneur and co-founded Nexus FrontierTech, an artificial intelligence studio providing AI solutions to a large portfolio of clients. He was named one of the emerging thought leaders most likely to reinvent capitalism by Thinkers50, the world's premier ranking of management thinkers, and was inducted into its "Radar" list of the 30 most influential thinkers on the rise.

Terence Tse

Terence is a co-founder & managing director of Nexus Frontier Tech: An AI Studio. He is also an Associate Professor of Finance at the London campus of ESCP Europe Business School. Terence is the co-author of the bestseller Understanding How the Future Unfolds: Using DRIVE to Harness the Power of Today’s Megatrends. He also wrote Corporate Finance: The Basics. In addition to providing consulting to the EU and UN, Terence regularly provides commentaries on the latest current affairs and market developments in the Financial Times, the Guardian and the Economist, as well as through CNBC and the World Economic Forum. He has also appeared on radio and television shows and delivered speeches at the UN, International Monetary Fund and International Trade Centre. Invited by the Government of Latvia, he was a keynote speaker at a Heads of Government Meeting, alongside the Premier of China and Prime Minister of Latvia. Terence has also been a keynote speaker at corporate events in India, Norway, Qatar, Russia and the UK. Previously, Terence worked in mergers and acquisitions at Schroders, Citibank and Lazard Brothers in Montréal and New York. He also worked in London as a consultant at EY, focusing on UK financial services. He obtained his PhD from the Judge Business School at the University of Cambridge.

Danny Goh

Danny is a serial entrepreneur and an early-stage investor. He is a partner and the Commercial Director of Nexus Frontier Tech, an AI advisory business with a presence in London, Geneva, Boston, and Tokyo that assists CEOs and board members of different organisations in building innovative businesses that take full advantage of artificial intelligence technology.
 
Danny has also co-founded Innovatube, a technology group that operates an R&D lab in software and AI development, invests in more than 20 early-stage start-ups, and acts as an incubator to foster the local start-up community in South East Asia. Innovatube's labs have a team of researchers and engineers developing cutting-edge technology to help start-ups and enterprises bolster their operational capabilities. Danny currently serves as an Entrepreneurship Expert at the Entrepreneurship Centre at the Saïd Business School, University of Oxford, and he is an advisor and judge to several technology start-ups and accelerators, including Startupbootcamp IoT London. Danny has lived on four continents in the last 20 years, in Sydney, Kuala Lumpur, Boston, and London, and constantly finds himself travelling.