ARTIFICIAL INTELLIGENCE: EXPLORING THE INVISIBLE INNOVATION

WHAT IS ARTIFICIAL INTELLIGENCE?

Artificial Intelligence is a field of Information Technology (IT) that studies and demonstrates how a software system can act rationally.

The earliest references to studies of the human brain date back to around the 17th century BC with the Edwin Smith Surgical Papyrus, showing that humans have been fascinated by gray matter since soon after the dawn of civilization.
It’s only natural that, with the advent of IT, humans tried to replicate the brain’s workings in a machine.

BRIEF HISTORY OF ARTIFICIAL INTELLIGENCE

Contrary to common belief, Artificial Intelligence is not a new field. The first studies started in the 1930s with the “thinking machines”, when three major figures laid the foundations:

  • Norbert Wiener, with his electrical networks (mimicking the activation of neurons);
  • Claude Shannon, who described digital signal processing;
  • Alan Turing, who defined the rules for assessing any problem from a digital point of view.

These three key ideas came together in 1943, when Walter Pitts and Warren McCulloch described the first neural network, in which artificial neurons were tasked with resolving simple logic functions.
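
To make the idea concrete, here is a minimal sketch of a McCulloch-Pitts-style threshold neuron computing basic logic functions. The weights and thresholds are illustrative choices of mine, not values taken from the 1943 paper:

```python
# Minimal sketch of a McCulloch-Pitts-style threshold neuron.
# Weights and thresholds below are illustrative, not from the original paper.

def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    # An inhibitory input: weight -1 with threshold 0 inverts the signal.
    return mcp_neuron([a], weights=[-1], threshold=0)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"AND({a},{b})={AND(a, b)}  OR({a},{b})={OR(a, b)}")
    print(f"NOT(0)={NOT(0)}  NOT(1)={NOT(1)}")
```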

In the following years, studies continued without a real focus (or a real name), until the Dartmouth Workshop was held in 1956, with a very straightforward proposal: every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. At that precise moment the term Artificial Intelligence was born, and research carried on. Public attention and funding were on a constant rise, except for two periods known as “AI winters” – 1974-1980 and 1987-1993 – triggered respectively by a major cut of funds by DARPA (1974) and the collapse of the Lisp machine market (1987).

THE INVISIBLE COMPANION

Luckily, history has proven that Artificial Intelligence isn’t just vaporware: after those dark times, research began to prosper again (with some hiccups, such as the 2010 Flash Crash of E-mini S&P 500 futures contracts, when a “hot potato” effect unrolled between automated trading agents).

Fast forward to today: we barely notice the presence of Artificial Intelligence, yet it is an important and integral part of our lives, playing a role in:

  • Utility supply distribution;
  • Traffic control in major cities;
  • Weather forecasting;
  • Food transportation chains;
  • Logistics;
  • Social media;
  • Habit analysis;
  • Art;
  • and so on.

A survey conducted by Ipsos for the World Economic Forum reported that 60% of respondents think AI will make their lives easier in the next three to five years, but only 52% think their lives will actually be better.

DATA: A DIGITAL DNA

The reason for this skepticism lies at the very core of AI: data.

To make a system autonomous, it needs to be fed data, which is subsequently organized into training datasets from which the machine can learn.
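
As a minimal illustration of this idea, the sketch below turns a handful of labelled records into a training dataset and “learns” from it with a trivial nearest-neighbour rule. All numbers, features, and labels are invented purely for the example:

```python
# Minimal sketch: labelled records form a training dataset, and a trivial
# 1-nearest-neighbour rule "learns" from it. All values are invented
# purely for illustration.
import math

# Each record: (hours online per day, purchases per month) -> label
training_set = [
    ((1.0, 0), "casual"),
    ((2.5, 1), "casual"),
    ((6.0, 4), "heavy"),
    ((7.5, 6), "heavy"),
]

def predict(sample):
    """Return the label of the closest training example (1-NN)."""
    closest_features, label = min(
        training_set, key=lambda pair: math.dist(pair[0], sample)
    )
    return label

print(predict((6.5, 5)))  # -> "heavy"
print(predict((1.5, 0)))  # -> "casual"
```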

While much of the data for specific applications is gathered by governments, institutions, and organizations, personal data can only be collected through applications like social media. Personal data is, of course, highly dynamic, and therefore requires constant collection and updating.

This has raised a lot of concerns about privacy, and while our data is gradually becoming better protected thanks to regulations (like the GDPR in the EU), it still feels like the Wild West.

While in most cases the data is collected for a relatively harmless end goal (such as clustering users for marketing purposes), the same data could be used to manipulate people (e.g. the Cambridge Analytica scandal) or, worse, to control people’s lives.