Google DeepMind’s Victory for AI

March 13, 2016


Time and again, when we have doubted the ability of computers to take a new step, technology has proved us wrong – and perhaps no field shows this more clearly than artificial intelligence. This week’s decisive defeat of Lee Se-dol, South Korea’s top professional and one of the world’s best Go players, marked another milestone in the progress of artificial intelligence.
The ancient Chinese game of Go has more possible board positions – roughly 10^170 – than there are atoms in the observable universe (around 10^80), yet Google’s AlphaGo software, developed in Britain by DeepMind, won the highly anticipated showdown this week with ease. Go professionals said that AlphaGo played unorthodox, even questionable-looking moves that made sense only in hindsight. Much like a student who only understands a problem after reading the answer manual, professionals grasped the logic of AlphaGo’s moves only after they had been played – a telling indicator that AI has conquered the game.

Google co-founder Sergey Brin, who was in Seoul to watch the third match, described Go as a “beautiful game” and said he was excited that the company has been able to “instill that kind of beauty in our computers.”

Go is played largely through intuition and feel, and its beauty, subtlety and intellectual depth have captured the human imagination for centuries. AlphaGo is the first computer program ever to beat a professional human player. The AI behind AlphaGo combines neural networks with machine learning – a combination known as deep learning. The idea behind deep learning is to let the computer develop something like an intuition for how to play the game, rather than relying on hand-coded rules. Deep learning requires two things: plenty of processing grunt and plenty of data to learn from. During its development, AlphaGo repeatedly played tweaked versions of itself, and that self-play nurtured the intuition that defeated one of the world’s best Go players. Its two main components, the policy network and the value network, allow it to propose moves in the style of strong human play and to evaluate how favourable a resulting position is, respectively.
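
For readers curious what “policy network” and “value network” mean in practice, here is a minimal, purely hypothetical Python sketch of that division of labour – not DeepMind’s actual code. A stand-in policy_network assigns each legal move a prior probability, a stand-in value_network guesses how good the resulting position is, and the two signals are combined to choose a move.

import random

# Purely illustrative stand-ins for AlphaGo's two networks (hypothetical,
# not DeepMind's implementation). A "board" here is just a list of moves.

def policy_network(board, legal_moves):
    # Assign each legal move a prior probability of being a good move.
    # AlphaGo's real policy network is a deep neural net trained on expert
    # games; this placeholder simply returns a uniform distribution.
    prior = 1.0 / len(legal_moves)
    return {move: prior for move in legal_moves}

def value_network(board):
    # Estimate the probability that the player to move goes on to win from
    # this position. AlphaGo's real value network is trained on self-play
    # games; this placeholder returns a random guess.
    return random.random()

def choose_move(board, legal_moves):
    # Combine the two signals: weight each move's prior by the estimated
    # value of the position it leads to, then pick the highest-scoring move.
    scores = {}
    for move, prior in policy_network(board, legal_moves).items():
        next_board = board + [move]  # "play" the move on our toy board
        scores[move] = prior * value_network(next_board)
    return max(scores, key=scores.get)

board = []                                # empty toy board
legal_moves = ["D4", "Q16", "C3", "K10"]  # a few Go-style coordinates
print("Chosen move:", choose_move(board, legal_moves))

In the real system both functions are deep neural networks, trained on expert games and millions of games of self-play, and they guide a tree search rather than a single greedy choice.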

One reason for the commercial and academic excitement around deep learning is that the techniques employed in AlphaGo can be used to teach computers to recognize faces, translate between languages, show relevant advertisements to internet users or hunt for subatomic particles in data from atom-smashers.

This is not the first time AI has faced off against humans. In chess, IBM’s Deep Blue took on chess legend Garry Kasparov in a six-game series in 1996. Kasparov won the match 4-2, but his loss to the system in the first game hinted at what was coming – “I could feel – I could smell – a new kind of intelligence across the table.” Before Deep Blue, there was Maven, a Scrabble program developed in 1986 by programmer Brian Sheppard. Its first incarnation used a set of 100 patterns to evaluate the value of letter racks. By beating strong human players roughly two-thirds of the time, Sheppard’s creation earned its reputation as a superhuman player of the word game. Exciting times for AI.


View the video on YouTube.