Ever since I was a child, I have been fascinated by how things work. “Why does custard powder get thicker when I stir it, but my bottle of ketchup gets runnier when I shake it?”, “How does putting petrol in the car make the wheels turn?”, and “Just how does an aeroplane actually fly?”. I recall being described as ‘an inquisitive child’. With hindsight, this was probably a polite way of referring to the annoying kid who never stopped asking questions!
I now have a lot of sympathy for my parents after producing my very own ultra-inquisitive 6-year-old son…
Finding solutions to problems
However, I think it is this innate desire to understand how things work, and the inherent logic behind the mechanics, that has led me to the role I currently hold at Deloitte.
I started my career back in 2005 working in global mobility tax, helping employers ensure their mobile workforce were tax compliant, and working through the many challenges they faced day-to-day in that and other related areas. After learning first-hand about some of those issues, I moved into our Data Analytics & Digital Products team to help clients find digital solutions to their business problems. Faced with new, emerging technologies, I find myself once again asking ‘how does it work?’
The role of AI
Unsurprisingly, given its increasing popularity, Artificial Intelligence (AI), and more specifically, machine learning, is playing a bigger role in the technology organisations use; certainly at Deloitte we have a keen focus on it - whether helping clients categorise vast datasets for tax reporting purposes, or building algorithms that can more accurately predict the cost of an international assignment.
Whilst I have often asked myself “how does machine learning work?”, without having a PhD in Mathematics I have come to accept it as another one of those things that will remain a mystery, like end-to-end encryption; ‘it just works’. However, not being able to explain the maths behind machine learning is no excuse for failing to exercise caution in our use of it. For example, we should not ignore the risk of bias where historical data shows patterns that are clearly less than ideal. So the question I now find myself asking in my role, instead of ‘how does this work?’, is ‘what is the impact of this, and how do we use it appropriately?’
What is AI bias?
Take predicting the cost of an assignment as an example. Some of the key features that will impact overall costs are the assignment length, the seniority of the individual, and whether or not they will be accompanied by dependants. If we take the last two elements, a natural assumption may be to consider whether someone’s age is linked to their level of seniority, and to whether or not they have already started a family. But would it be right for the algorithm to assume that because you are in your early twenties you are (a) not already high on the income scale, or (b) without dependent children? If we attempted to predict the cost of a ‘social media influencer’s’ international assignment, our algorithm would most likely be misfiring. Admittedly an extreme example, but it makes the point.
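To make the point concrete, here is a minimal sketch of how a model trained only on historical patterns can encode an age-based assumption. All figures, ages, and the averaging approach are invented for illustration; a real assignment-cost model would use many more features and a proper learning algorithm.

```python
# Toy illustration: a naive cost model that uses age as a proxy
# for seniority and family status, because that is the pattern
# in its (invented) historical data.

historical = [
    # (age, annual_assignment_cost_gbp) -- in this made-up history,
    # older assignees were more senior and travelled with families,
    # so their assignments cost more.
    (24, 40_000), (26, 45_000), (28, 50_000),
    (45, 120_000), (50, 130_000), (55, 140_000),
]

def predict_cost(age, data=historical):
    """Predict cost as the average of assignees within 5 years of age."""
    band = [cost for a, cost in data if abs(a - age) <= 5]
    return sum(band) / len(band) if band else None

# The model assumes any 23-year-old is junior and unaccompanied,
# so a young, high-earning assignee with children -- the 'social
# media influencer' case -- would be badly underestimated.
print(predict_cost(23))   # low estimate, driven purely by age
print(predict_cost(50))   # high estimate, for the same reason
```

The bias here is not malicious code; it is simply that age correlated with cost in the history the model saw, so age became the deciding feature.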
Should AI be used?
AI bias is receiving a lot of attention in the media at the moment, given its wider societal implications; several leading tech companies have set up teams specifically to study how machine learning models can encode the implicit biases present in their training data. And it is these complexities that are leading many thinkers to question the value and morality of using AI for anything more complex than music or movie recommendations.
But AI should definitely not be seen as the ‘bad guy’. Consider the tremendously positive impact it is having on the fight against climate change and the incredible steps it is making in improving patient outcomes in the healthcare industry.
AI, therefore, like an inquisitive child, just needs to be taught correctly, and raised not to have inherent biases. A lesson we could all learn, perhaps.