Lately, artificial intelligence has become the hot topic in Silicon Valley and the broader tech scene. To those of us involved in that scene it feels like incredible momentum is building around the subject, with all kinds of companies building A.I. into the core of their business. There has also been a rise in A.I.-related university courses, which is sending a wave of extremely bright new talent into the job market. But this is not simply a case of confirmation bias – interest in the topic has been on the rise since mid-2014.

The noise around the subject will only increase, and for the layman it is all very confusing. Depending on what you read, it's easy to believe that we're headed for an apocalyptic Skynet-style obliteration at the hands of cold, calculating supercomputers, or that we're all going to live forever as purely digital entities in some kind of cloud-based artificial world. In other words, either The Terminator or The Matrix is imminently going to become disturbingly prophetic.

When I jumped on the A.I. bandwagon in late 2014, I knew very little about it. Although I have been involved with web technologies for over two decades, I hold an English Literature degree and am more engaged with the business and creative possibilities of technology than with the science behind it. I was drawn to A.I. because of its positive potential, but when I read warnings from the likes of Stephen Hawking about the apocalyptic dangers lurking in our future, I naturally became as concerned as anybody else would.

So I did what I normally do when something worries me: I started researching it so that I could understand it. More than a year's worth of constant reading, talking, listening, watching, tinkering and studying has led me to a pretty solid understanding of what it all means, and I'd like to spend the next few paragraphs sharing that knowledge, in the hope of enlightening anybody else who is curious but naively scared of this brave new world.

The first thing I discovered was that A.I., as an industry term, has actually been around since 1956, and has had multiple booms and busts in that period. In the 1960s the A.I. industry was basking in a golden era of research, with Western governments, universities and big business throwing enormous amounts of money at the sector in the hope of building a brave new world. But in the mid-1970s, when it became apparent that A.I. was not delivering on its promise, the industry bubble burst and the funding dried up. In the 1980s, as computers became more popular, another A.I. boom emerged, with similar levels of mind-boggling investment being poured into various enterprises. But, again, the sector failed to deliver and the inevitable bust followed.

To understand why these booms failed to stick, you first need to understand what artificial intelligence really is. The short answer to that (and believe me, there are very long answers out there) is that A.I. is a range of overlapping technologies which broadly deal with the challenge of using data to make a decision about something. It encompasses a variety of disciplines and technologies (Big Data or the Internet of Things, anyone?), but the most important one is a concept called machine learning.

Machine learning basically involves feeding computers large amounts of data and letting them analyse that data to extract patterns from which they can draw conclusions. You have probably seen this in action with face recognition technology (on Facebook, or in modern digital cameras and smartphones), where the computer can identify and frame human faces in photographs. To do this, the computers reference a huge library of photos of people's faces, and have learned to identify the characteristics of a human face from shapes and colours averaged out over a dataset of many millions of different examples. The process is essentially the same for any application of machine learning, from fraud detection (analysing purchasing patterns in credit card purchase histories) to generative art (analysing patterns in paintings and randomly generating pictures using those learned patterns).
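The idea above can be sketched in a few lines of code. The example below is a deliberately toy illustration, not how Facebook's face recognition or any real fraud-detection system works: the six "measurements" and the two labels are invented for demonstration, and a real system would learn from millions of examples with far more sophisticated models. But the shape of the process is the same: the program is never told the rules; it averages patterns out of labelled examples, then uses those averages to make a decision about new data.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Labelled training examples: hypothetical (width, height) measurements.
training_data = {
    "cat": [(30, 25), (35, 30), (32, 28)],
    "dog": [(60, 55), (70, 65), (65, 60)],
}

def learn(examples):
    """The 'learning' step: reduce each class to the average of its examples."""
    centroids = {}
    for label, points in examples.items():
        xs, ys = zip(*points)
        centroids[label] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return centroids

def classify(centroids, point):
    """The 'decision' step: pick the class whose learned average is closest."""
    return min(centroids, key=lambda label: dist(centroids[label], point))

model = learn(training_data)
print(classify(model, (33, 27)))  # near the cat examples -> "cat"
print(classify(model, (68, 62)))  # near the dog examples -> "dog"
```

The same learn-then-classify split underlies the real applications mentioned above; only the data and the mathematics of the "averaging" get vastly more complicated.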