It’s been 20 years since IBM’s supercomputer Deep Blue defeated world chess champion Garry Kasparov in a historic first victory for artificial intelligence. What was once a futuristic concept in 1997 has slowly become part of everyday reality.
Scientists have since made huge strides towards creating computing systems that emulate the human brain’s neurons, working together in a neural network to solve problems. Today, supercomputers are smart enough not only to beat chess players with ease, but also to succeed in similarly sophisticated games, like the 3,000-year-old Chinese game of Go and, most recently, poker challenges against multiple human pros.
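The "neurons working together" idea can be sketched in a few lines: each artificial neuron takes a weighted sum of its inputs and squashes it through an activation function, and a network is just neurons feeding neurons. This is a minimal illustrative sketch; the weights below are arbitrary made-up numbers, whereas a real network learns its weights from data.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum passed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation, always in (0, 1)

def tiny_network(inputs):
    """A two-layer toy network: two hidden neurons feeding one output neuron."""
    hidden = [
        neuron(inputs, [0.5, -0.6], 0.1),   # hidden neuron 1 (illustrative weights)
        neuron(inputs, [-0.3, 0.8], -0.2),  # hidden neuron 2
    ]
    return neuron(hidden, [1.2, -0.7], 0.0)  # output neuron

print(tiny_network([1.0, 0.0]))
```

Training consists of nudging those weights until the network’s outputs match known examples, which is what the learning systems discussed below automate at enormous scale.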
But how do these artificial intelligence systems win? How did AlphaGo, a computer program developed by Google DeepMind, beat a Grandmaster of Go? And how did Libratus, a software robot, win $1.5m from four of the world’s top poker players in a three-week challenge at a Pittsburgh casino? Artificial intelligence had to not only develop highly strategic thinking, but also intuit, anticipate moves, and even recognize a bluff.
Self-learning Takes From Human Learning. Then Outperforms It.
The goal of artificial intelligence (AI) is to make computers as smart as, or even smarter than, human beings by giving them human-like thinking and reasoning abilities. But there are many ways to achieve this.
Two decades ago, Deep Blue was taught using hand-written evaluation functions that encoded the knowledge and wisdom of top human chess players. With this approach, IBM’s supercomputer was able to recognize positions it had seen before, consider all possible moves, predict human responses, and then decide on the best move. This also relied on training it on large amounts of data, using algorithms that let it improve at the task without step-by-step human instruction. This process led to what is now known as “machine learning.”
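The Deep Blue style of play can be sketched as two pieces: a hand-written evaluation function that encodes expert judgement, and a minimax search that considers every move and assumes the opponent replies optimally. The sketch below uses a made-up toy game (piece counts and a "capture"/"pass" move generator are my illustrative assumptions), not chess, but the structure is the same.

```python
def evaluate(position):
    """Hand-written evaluation, Deep Blue-style: human experts decide what
    makes a position good. Here: simply our pieces minus the opponent's."""
    return position["my_pieces"] - position["opp_pieces"]

def minimax(position, depth, maximizing, moves_fn):
    """Search every move to `depth`, assume optimal replies, and return
    (best achievable score, best move)."""
    moves = moves_fn(position, maximizing)
    if depth == 0 or not moves:
        return evaluate(position), None
    best_score, best_move = (float("-inf"), None) if maximizing else (float("inf"), None)
    for move, child in moves:
        score, _ = minimax(child, depth - 1, not maximizing, moves_fn)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

def toy_moves(position, maximizing):
    """Toy move generator: 'capture' removes one enemy piece, 'pass' does nothing."""
    them = "opp_pieces" if maximizing else "my_pieces"
    moves = [("pass", dict(position))]
    if position[them] > 0:
        captured = dict(position)
        captured[them] -= 1
        moves.append(("capture", captured))
    return moves

score, move = minimax({"my_pieces": 3, "opp_pieces": 3}, 2, True, toy_moves)
print(move)  # the search prefers capturing, since passing lets the opponent capture
```

Note that all the chess knowledge lives in `evaluate`; the machine-learning step was tuning such functions against large game databases rather than searching differently.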
But it doesn’t end there. Defeating human players in more complex games like Go, and especially poker, takes even more intelligent systems: there are not only billions of possibilities to foresee, but the games also demand something like “feeling” or intuition. This is where deep learning comes into play. Deep learning is a highly innovative branch of machine learning that imitates the way the human brain processes data and forms patterns for decision making.
In contrast to the approach used for Deep Blue, AlphaGo and Libratus learn not only from historical games, using a database of around 30 million moves, but also by playing against themselves, searching through the possible options and finding better ones over time. It is much the same way humans learn from practice, trying many different things before settling on a final decision. Thanks to this self-learning procedure, the system intuits from experience how to play optimally, without added human input or manually implemented rules.
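The self-play idea can be demonstrated on a game small enough to fit in a few lines. The sketch below is not AlphaGo’s actual algorithm (which combines deep networks with tree search); it is a minimal tabular analogue of my own devising, learning the game of Nim (take 1 or 2 stones; whoever takes the last stone wins) purely by playing against itself, with no hand-coded strategy.

```python
import random

def self_play_nim(episodes=20000, stones=7, seed=0):
    """Learn Nim by self-play: play many games against yourself, then credit
    every move made by the eventual winner and penalize the loser's moves."""
    rng = random.Random(seed)
    value = {}  # value[(stones_left, action)] -> estimated win rate for the mover
    for _ in range(episodes):
        n, history = stones, []
        while n > 0:
            actions = [a for a in (1, 2) if a <= n]
            if rng.random() < 0.2:                                   # explore sometimes
                a = rng.choice(actions)
            else:                                                    # otherwise exploit
                a = max(actions, key=lambda a: value.get((n, a), 0.5))
            history.append((n, a))
            n -= a
        # Whoever made the final move won; moves alternate between the players.
        for i, (state, action) in enumerate(reversed(history)):
            win = 1.0 if i % 2 == 0 else 0.0
            old = value.get((state, action), 0.5)
            value[(state, action)] = old + 0.1 * (win - old)         # nudge estimate
    return value

v = self_play_nim()
```

After training, the table prefers winning moves it was never told about, e.g. taking both stones when two remain; AlphaGo applies the same learn-from-your-own-games principle at vastly greater scale.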
Deep Learning Makes Computers Ultra-precise
Deep learning is one of the most promising subfields of AI research, bringing us a step closer to the sci-fi vision of robots that can think autonomously. Today, deep learning is used in the gaming industry and is finding applications across any industry that relies on technology. Self-learning algorithms drive many different advancements, from healthcare and image recognition to self-driving cars and personal assistants. They can help diagnose conditions from spine injuries and heart disease to cancer, and can even play a role in the creative arts, adding colour to black and white photos.
The ultra-precision offered by deep learning has also been used in the advertising industry. Self-learning algorithms can deliver highly accurate product recommendations, while also better predicting the probability that a user will click on an ad (conversion potential) or the value of a purchase (conversion value), making advertising activities up to 50% more efficient.
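Click-probability prediction is, at its core, a supervised learning problem: given features describing a user and an ad, output a probability of a click. As a minimal sketch, here is a logistic regression trained by gradient descent on made-up toy data; production systems use deep networks over far richer feature sets, and both the features and examples below are my illustrative assumptions.

```python
import math

def train_ctr_model(rows, epochs=300, lr=0.5):
    """Fit logistic regression to (features, clicked) rows by stochastic
    gradient descent on the log-loss."""
    dim = len(rows[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in rows:
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y                                   # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_click_probability(model, x):
    w, b = model
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Toy features: [saw_product_before, ad_relevance] -> clicked? (invented data)
data = [([1, 0.9], 1), ([1, 0.8], 1), ([0, 0.2], 0),
        ([0, 0.1], 0), ([1, 0.3], 0), ([0, 0.7], 1)]
model = train_ctr_model(data)
```

A returning user shown a highly relevant ad should now score a much higher predicted click probability than a new user shown an irrelevant one, which is exactly the signal an ad platform bids on.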
The power of deep learning in advertising lies in the way the algorithms use massive amounts of data and act like humans, without specific instructions or rules. It works, among other places, in the recommendations used not only by e-commerce companies to persuade customers to buy additional products, but also by other companies to suggest music, events or even dating profiles.
The typical approach to targeted ads (personalized retargeting) works as follows: a user sees a banner creative based on a set of predefined assumptions. If you checked out black shoes with a gold clasp, the system might show black boots with buckles, drawn from the historical information it has gathered. With deep learning, however, there are no pre-assumed rules. The computer learns by practising what the best combination will be: the next pair of black shoes, or perhaps a better option such as brown sandals or a matching bag. The key point is that no human being has programmed the computer to perform any of these specific actions; every display is driven by data and algorithmic learning. In other words, human action is only required at the step of teaching the algorithm how to teach itself.
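The contrast between the two approaches can be shown side by side. Below, the rule-based recommender encodes a human’s assumption directly, while the data-driven one learns pairings from past shopping baskets. This is a deliberately simple co-occurrence model rather than a deep network, and the product names and baskets are invented for illustration, but it captures the shift from hand-written rules to patterns that emerge from data.

```python
from collections import Counter
from itertools import combinations

def rule_based_recommend(viewed_item):
    """Classic retargeting: a human wrote down what counts as 'similar'."""
    rules = {"black shoes": ["black boots"]}   # predefined assumption
    return rules.get(viewed_item, [])

def learn_cooccurrence(baskets):
    """Count how often each pair of items appears in the same basket."""
    pairs = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            pairs[(a, b)] += 1
            pairs[(b, a)] += 1
    return pairs

def learned_recommend(pairs, viewed_item, k=2):
    """Recommend the items most often bought alongside the viewed one."""
    scores = Counter({b: c for (a, b), c in pairs.items() if a == viewed_item})
    return [item for item, _ in scores.most_common(k)]

baskets = [
    ["black shoes", "matching bag"],
    ["black shoes", "brown sandals"],
    ["black shoes", "matching bag", "belt"],
    ["brown sandals", "sun hat"],
]
pairs = learn_cooccurrence(baskets)
print(learned_recommend(pairs, "black shoes"))
```

No one told the learned recommender that bags go with shoes; that pairing surfaced from the purchase data, and a deep model extends the same idea to far subtler patterns.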
What Will The Future Bring?
Does this all mean that human knowledge is no longer needed? For now at least, the answer is no. Machine learning works when human activities generate the data it learns from, and we can use its results to understand our world better.
From virtual personal assistants like Siri and Cortana, to Google’s or Tesla’s self-driving cars, deep learning is becoming a part of everyday technology. The speech-recognition options on our smartphones work much better than they used to, and the advances in image recognition extend far beyond what we expected. The ultimate goal for deep learning is to make our lives easier and our work more effective.
The next step could be a higher level of artificial intelligence involvement in different fields, especially in ones as important as medicine. Thanks to rapidly maturing AI capabilities such as speech and image recognition, the future may bring us robots that act like doctors and can be trusted to diagnose common conditions.
The sci-fi aspect of image recognition, presented in many episodes of Crime Scene Investigation, has also become reality since Google introduced a new AI system capable of “enhancing” an eight-pixel-square image, increasing the resolution 16-fold and effectively filling in plausible detail for the lost data. Curiously, another quickly developing technique in the field is generative adversarial networks, rumoured soon to be able to “draw” a new image for a given request, based on the other pictures the algorithm has seen.
There are many more ways to implement AI, not only in scientific applications but also in our day-to-day lives. Because when your software is powered by relevant data sets, the possibilities are simply endless.