We've heard Google's machines were learning at a startling rate, but who knew they were learning this quickly? A few weeks back, Google's AlphaGo beat the world's greatest human Go player, a huge stepping stone in AI research. This week, Google's AlphaGo lost to a newer version of itself that had never once been taught by a human.
Google is achieving more success in machine learning than many thought possible. Even Google's own engineers were startled by the machine's ability to teach itself. DeepMind's AI can now teach itself the game better than the humans who built it ever could.
DeepMind is the Google research lab behind both AlphaGo and AlphaGo Zero. The original AlphaGo ran on 48 AI processors and was loaded with data from thousands of human Go matches. From the moment AlphaGo was born, it could reasonably comprehend the game, and with time and direction from humans, it began to learn the game's different strategies.
At a certain point, AlphaGo was able to compete with and beat the world's number one human Go player. The machine had mastered a game so difficult it makes a marathon look like a one-mile jog. After the amazing win, Google's engineers couldn't sit still; they wanted to be better than the best. Within forty days of defeating the number one Go player, Google's AlphaGo Zero was able to defeat AlphaGo.
As previously mentioned, AlphaGo has 48 processors; AlphaGo Zero has just four. And if that's not shocking enough, the only data AlphaGo Zero was given was the rules of the game. No one taught it how to play, and it never received thousands of matches to analyze. AlphaGo Zero learned by playing against the world's number one Go player: AlphaGo itself.
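To give a feel for the "rules only, learn by playing" idea described above, here is a minimal, hypothetical sketch in Python. It is emphatically not DeepMind's method (AlphaGo Zero combines deep neural networks with Monte Carlo tree search); it just shows an agent given nothing but the rules of a trivial game (single-heap Nim, where taking the last stone wins) improving purely through self-play. The game choice, reward scheme, and all parameter values are illustrative assumptions.

```python
import random
from collections import defaultdict

# Illustrative self-play sketch (NOT DeepMind's algorithm): the agent knows
# only the rules of single-heap Nim and learns values by playing itself.
HEAP = 10          # starting number of stones (arbitrary choice)
MOVES = (1, 2, 3)  # legal moves: take 1-3 stones; taking the last stone wins

Q = defaultdict(float)  # Q[(heap, move)] -> estimated value for the mover


def choose(heap, epsilon):
    """Pick a legal move: explore randomly with probability epsilon."""
    legal = [m for m in MOVES if m <= heap]
    if random.random() < epsilon:
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(heap, m)])


def self_play_episode(epsilon=0.1, alpha=0.3):
    """Both sides share the same value table; update it from the outcome."""
    heap, history = HEAP, []
    while heap > 0:
        move = choose(heap, epsilon)
        history.append((heap, move))
        heap -= move
    # Whoever made the last move won; rewards alternate sign going backwards.
    reward = 1.0
    for state_move in reversed(history):
        Q[state_move] += alpha * (reward - Q[state_move])
        reward = -reward


random.seed(0)
for _ in range(20000):
    self_play_episode()

# Optimal play on a 10-stone heap is to take 2, leaving a multiple of 4.
best = max((m for m in MOVES if m <= HEAP), key=lambda m: Q[(HEAP, m)])
print(best)
```

The point of the sketch is that no human game records and no human teacher appear anywhere: the table starts empty, and strategy emerges solely from the agent playing both sides against itself, which is the same principle the article describes at a vastly larger scale.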