Embracing artificial intelligence ethically

Categories
Management
Socio-Technical Centre

Dr Kyle Griffith is a Teaching Fellow in the Management Division. Kyle’s research interests include diversity and talent management, working lives, and ethics in artificial intelligence.


In November 2019, I attended the ‘Artificial Intelligence & Data: Disruptions to Society, Organisations and People’ conference hosted by the Artificial Intelligence Research Centre (Brunel University London) in association with the British Academy of Management. The two-day conference brought together academics from across the UK to take stock of the opportunities, challenges, priorities and actions associated with the digitisation and tracking of our daily lives for the development of artificially intelligent products and services.

Conceptually, the field goes back almost 70 years, but it is the mass production (and collection) of user data by tech giants like Google and Facebook that has thrust artificial intelligence (AI) back into the public sphere. If the European Union’s General Data Protection Regulation (GDPR) is any indication, demand is growing for greater transparency. AI is a hot topic and everyone has an opinion on its future, ranging from a ‘Terminator’-style rise of the machines to a digital utopia. As researchers, I believe we can have the greatest impact with an approach of responsible optimism that recognises the part we have to play in AI development. This means encouraging AI development in ways that improve our daily lives. One way to do this is to create proper safety mechanisms that draw clear boundaries as to what is and is not beneficial to society.

One big takeaway from the conference was that AI is informing organisational decisions now, and managers have a responsibility to ensure its implementation is ethical. Conference roundtable discussions also highlighted the need for ethics development alongside ongoing investment in research and development. Without this, instances of data misuse such as the Cambridge Analytica scandal will continue to be commonplace. In that case, users’ Facebook data was collected without their knowledge or consent and was eventually used for micro-targeting in political advertising campaigns. Looking more positively, the evolution of maps and wayfinding is a great example of how AI can benefit our lives. Over the span of a few decades we have gone from static maps to dynamic smartphone apps like Waze and Google Maps that incorporate real-time user data to help us find our way.

As with many emerging industries, government regulation appears to be one step behind; however, managers have an opportunity to fill the void. We have started to see shifts in the direction of self-regulation, for example, Twitter’s recent decision to no longer accept political advertisements. This decision appeared to be ethically motivated, given the success of political misinformation campaigns on its platform. Conference participants reached a consensus that new ethics standards will need to focus on the transparency of data collection and use, to build trust as AI diffuses further into our lives.

In my own research in organisational behaviour, I am writing up findings from a recent project on organisational change in retail and employee perceptions of increasing technology diffusion (the so-called ‘digital transition’). We plan to publish this research and write a blog post on our findings in due course.

Contact us

If you would like to get in touch regarding any of these blog entries, or are interested in contributing to the blog, please contact:

Email: research.lubs@leeds.ac.uk
Phone: +44 (0)113 343 8754

You can view our privacy statement on our website. You can repost this blog article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International licence.

The views expressed in this article are those of the author and may not reflect the views of Leeds University Business School or the University of Leeds.