While Elon Musk works to advance the field of artificial intelligence, he also believes there is a high probability that AI will pose a risk to humanity in the future. In an interview with Rolling Stone, the tech entrepreneur said we have only a five to 10 percent chance of success at making AI safe.
First, a notable goal of AI research, and one that OpenAI is already pursuing, is building AI that is smarter than humans and capable of learning on its own, without any human programming or intervention. Where that capability could lead is unclear.
Second, there is the fact that machines have no ethics, remorse, or emotions. Future AI may be capable of distinguishing "good" and "bad" actions, but distinctly human emotions remain just that: human.
In the Rolling Stone interview, Musk also elaborated on the dangers and problems that currently exist with AI, one of which is the potential for a handful of companies to effectively control the AI sector. He cited Google's DeepMind as a prime example.
"Between Facebook, Google, and Amazon — and arguably Apple, but they seem to care about privacy — they have more information about you than you can remember," said Musk. "There's a lot of risk in concentration of power. So if AGI [artificial general intelligence] represents an extreme level of power, should that be controlled by a few people at Google with no oversight?"