Bernard Marr is founder and CEO of Bernard Marr & Co. He is also a futurist, keynote speaker, strategic adviser to companies and governments, and a best-selling business author.
There are different ways to understand artificial intelligence (AI). Often when we think of the term, we think of machines that are able to pass themselves off as ‘intelligent’ – possessing, in some way, the hard-to-define abilities of organic, sentient beings to plan, analyse, make decisions and perhaps even dream.
It’s true that we’re some way away from the science fiction vision of robots that can converse with us in a way that is indistinguishable from how we might talk to another person. Let alone ponder their own place in the cosmos, as human philosophers do, or even consciously strive to become increasingly human – in the manner, say, of the android Data in Star Trek.
In truth, machines which are classed as capable of AI today mark only the first steps towards this type of machine intelligence. And there’s good reason for that. The tremendous surge in progress and activity we’ve seen in the field of AI over the last decade has been driven by business. And business doesn’t want machines to pontificate on the nature of humanity and conscious thought. It wants them to work!
Within the space of a few short years, we’ve moved from a situation where AI was being talked about by futurists and boffins as something that was set to change the world, to a situation where it’s clearly having real and tangible effects on just about every area of industry, as well as our day-to-day lives. If you use a global system like Visa or American Express to make payments, then your transactions are being analysed by smart, learning machines that are becoming increasingly effective at determining whether your payment is valid or fraudulent.
If you are being treated for a medical condition, then it’s increasingly likely that the treatment you’re receiving was developed with the help of AI analysis of thousands of clinical trials and scientific papers.
The advertising you’re exposed to when you browse the internet, watch videos on YouTube or even open the junk mail that comes through your letterbox is determined by AI analysis of the personal data you’ve left behind in your digital footprint.
The food you eat may well come from crops which a farmer grew with the help of AI, telling them how to efficiently use the land available to them, as well as the most economical way to deploy fertilisers and pesticides to reduce waste and boost yield.
When you take a picture with your smartphone, AI circuitry analyses lighting conditions and ‘recognises’ prominent features of the data the camera sensor is exposed to, such as faces or fast-moving objects, to return an image which will be more pleasing to the eye.
If you apply for a job with a large corporation, it’s increasingly likely your application will be pre-screened by AI algorithms to determine how good a fit your skills and personality will be for a role, before you set foot through the front door.
When you shop in a supermarket, the products you see on the shelves are determined by yet more algorithms, which are learning to take geography, demographics and meteorology into account when making stocking decisions.

Changing the world as we know it
The AI that is changing the world today – from the healthcare industry to finance, education, recruitment and the service industry – is what is known as specialised AI. This refers to AI applications which are designed to do one task, very efficiently and very rapidly, and which become increasingly good at that task as they generate and consume more and more information. In essence, it’s a single step forward from the ‘traditional’ computer software we’ve grown accustomed to over the last half-century.

Many of us have grown up with the definition of a computer as a device which takes an input, processes it and supplies an output. Whether the output is ‘right’ or ‘wrong’ is outside the boundaries of a traditional computer’s understanding – like a foot soldier of a despotic regime, all it does is blindly follow orders. What’s new is essentially the addition of a feedback loop. Without input from us, and based purely on the data it has access to, today’s AI ‘learns’ how accurate its results are and how to improve them.

This is the basic premise of all machine learning. It’s actually nothing new – the theory has been understood for decades – but it does require enormous amounts of data and processing power to work effectively. What’s changed recently is that, thanks to the internet and cloud computing, those are things we now have in abundance.

Out of the lab and into the world
