From Science Fiction To Reality: Evolution Of Artificial Intelligence
In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots, but it took decades of breakthrough developments in computer technology to turn that fiction into reality. In fact, the idea of artificial intelligence has existed in mythology for centuries, yet it was not until the 1950s that its scientific possibility was seriously explored. We have now reached a stage where AI is being applied to maximize benefits for humans in fields as varied as advertising, banking, agriculture, education, and medical technology.
Dawn of AI
A generation of scientists, mathematicians, and philosophers had entertained the concept of AI, but it was the British polymath Alan Turing who asked: if humans use available information, as well as reason, to solve problems and make decisions, why can't machines do the same?
Although Turing outlined intelligent machines and how to test their intelligence in his 1950 paper "Computing Machinery and Intelligence", his ideas did not immediately advance. The main roadblock was the computers themselves: they could execute commands but could not store them, and they needed to change fundamentally before any further progress could happen. Funding, too, remained an obstacle up until 1974.
Computer Revolution and Push
By 1974, computers had flourished: they were faster, more affordable, and able to store more information. Early demonstrations such as Allen Newell and Herbert Simon's General Problem Solver, funded by the Research and Development Corporation (RAND), and Joseph Weizenbaum's ELIZA showed promise toward machine problem-solving and the interpretation of spoken language. Yet there was still a long way to go before machines could think abstractly, recognize themselves, and achieve natural language processing.
Rise of Powerful Algorithms
In the 1980s, AI research gained momentum with an expansion of funding and algorithmic tools. John Hopfield and David Rumelhart popularized "deep learning" techniques that allowed computers to learn from experience.
Meanwhile, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert. But it was not until the 2000s that many of the landmark goals were achieved, and AI thrived despite a lack of government funding and public attention.
Start of AI Spring
The big change today is that we are in an unprecedented period of technological innovation across so many different fields that there is good reason to believe the "AI Spring" has not only arrived but is here to stay. Key developments behind this optimism are:
- Unlimited Access to Computing Power: The worldwide public cloud services market is projected to grow exponentially, and this access is amplified by rapid increases in computational power.
- Huge Fall in the Cost of Storing Data: The hard drive cost per gigabyte has fallen exponentially, to the point that we are approaching a near-zero marginal cost for storing data.
- Explosion of Digitised Data: International Data Corporation (IDC) forecasts that by 2025 the global datasphere will grow to ten times the data generated in 2016. Data is to AI what food is to humans, so in an increasingly digital world, the exponential growth of data constantly feeds AI improvements.
- Policy Initiatives: The last few years have seen tremendous activity on AI policy positions and the development of AI ecosystems in different countries: the US published its AI report in December 2016; France published an AI strategy in January 2017, followed by a detailed policy document in March 2018; Japan released a document in March 2017; China published its AI strategy in July 2017; the UK released its industrial strategy in November 2017; and India's NITI Aayog released the National Strategy for Artificial Intelligence in 2018.
- Government Funds and Research Initiatives: The creation of "data trusts", the roll-out of digital connectivity infrastructure such as 5G and full-fibre networks, common supercomputing facilities, fiscal incentives, and the creation of open-source software libraries are some of the focus areas of various governments.
Current Status of AI
AI research continues to grow rapidly: over the last five years, it has grown by 12.9% annually worldwide.
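To put that growth rate in perspective, here is a small arithmetic sketch. The 12.9% figure and the five-year horizon are from the text above; the assumption that the rate compounds annually is mine:

```python
# Compound growth: 12.9% per year (from the text), compounded over five years
# (annual compounding is an assumption for illustration).
annual_growth = 0.129
years = 5

factor = (1 + annual_growth) ** years
print(f"Five years at 12.9% annual growth ≈ {factor:.2f}x")  # ≈ 1.83x
```

In other words, at that rate the volume of AI research nearly doubles in five years.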
- The US is the global market leader in artificial intelligence, with a 40% market share. China (2nd) and Israel (3rd) have the next strongest AI ecosystems, and China is poised to become the global leader in AI research.
- In the area of core research in AI and related technologies, universities and research institutions from the US, China, and Japan led the publication volume on AI research topics between 2010 and 2016.
- Many countries have instituted dedicated public offices, such as the Ministry of AI (UAE) and the Office of AI and AI Council (UK), while China and Japan have tasked existing ministries with AI implementation in their sectoral areas.
- The first graduate-level AI university in the world has been announced in Abu Dhabi, UAE. The Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), named after His Highness Sheikh Mohamed bin Zayed Al Nahyan, Crown Prince of Abu Dhabi, will offer MSc and PhD programmes in key areas of AI, including machine learning, computer vision, and natural language processing.
While early science fiction writers might have expected dramatic outcomes from AI technology, our world is moving steadily and cautiously toward the many fields where it can prove to be a game-changer.