Today’s world is so heavily driven by assistants such as Siri, Google Now, and Cortana that it is hard to imagine such technology existing in the 80s, let alone the 50s. But is the concept really that new and nascent?
Let’s go back to the Dartmouth proposal of 1955, in which the term ‘Artificial Intelligence’ was coined for the first time. On August 31, 1955, J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon wrote: "We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
Going by the definition quoted in the sidebar “The Dartmouth Conference,” any program or algorithm is an AI system simply because it does something that would normally be considered an act of intelligence in humans.
Artificial intelligence (AI) is a field of computer science focused on developing computers capable of performing tasks that would normally require intelligent human behavior.
What, then, spurred its re-emergence?
Ironically, the foundational concepts of AI have not changed substantially, and today’s AI engines are in many ways similar to past ones. The techniques of yesteryear fell short not because of inadequate design, but because the necessary conditions and environment did not yet exist. In short, the biggest difference between AI then and now is the exponential growth of raw data, the focus on specific applications, and the increase in computational resources, all of which contribute to the success of any AI system.
As the name suggests, an AI system would typically be expected to replicate human intelligence and efficiency. However, depending on the target function and application, AI can be classified by its extent: strong AI, weak AI, and practical AI. Strong AI would simulate actual human reasoning, thinking and explaining the way humans do; such systems are yet to be built. AI systems that behave like humans in certain ways and execute specific intelligent functions can be termed weak AI. Practical AI strikes a balance between the two: systems guided by human intelligence but not limited to imitating it. In brief, for a system to qualify as AI it does not need to be as intelligent as a human; it just needs to be intelligent.
Machine learning, cognitive computing, predictive analytics, deep learning, and recommendation systems are all different facets of AI.
Amazon and Netflix recommendations, predictive text on mobile phones, Apple’s Siri, Microsoft’s Cortana, smart thermostats, and Google Now are excellent examples of AI systems.
For example, IBM Watson builds on the idea that a fact can be expressed in many forms, and that each match against a possible form counts as evidence for a candidate answer. The technology first analyzes the language of the input to identify the elements and relationships that indicate what is being asked. It then uses arrays of patterns built from the words of the original request to find matches in a colossal collection of text. Each match provides a single piece of evidence, and the evidence for each candidate is summed to give Watson a confidence score for that answer. Watson is a good example of excellently executed classic AI.
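The evidence-summing idea can be illustrated with a toy sketch (this is an illustration of the general technique, not Watson’s actual implementation; the corpus, question terms, and scoring rule are all made up): each sentence that mentions a candidate answer alongside terms from the question contributes evidence, and the candidate with the highest total score wins.

```python
from collections import Counter

def score_candidates(question_terms, corpus, candidates):
    """Toy evidence aggregation: every sentence that mentions a
    candidate contributes one unit of evidence per question term
    co-occurring in that sentence; evidence sums to a score."""
    scores = Counter()
    for sentence in corpus:
        words = set(sentence.lower().split())
        for cand in candidates:
            if cand.lower() in words:
                scores[cand] += len(words & question_terms)
    return scores

# Hypothetical mini-corpus and question: "What is the capital of France?"
corpus = [
    "Paris is the capital of France",
    "Lyon is a city in France",
    "The capital of France is Paris",
]
question_terms = {"capital", "france"}
scores = score_candidates(question_terms, corpus, ["Paris", "Lyon"])
best = scores.most_common(1)[0][0]  # candidate with the most evidence
```

Here "Paris" co-occurs with both question terms in two sentences, so it accumulates more evidence than "Lyon" and is returned as the answer.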
Another excellent use case for AI is in data analysis and interpretation. As new data comes in, many of us spend our time reviewing it and making decisions based on the insights we gain from it. While we may still want to do the decision-making ourselves, few of us want to spend our time and resources digging through raw incoming data. What if we could use AI to do just that, and apply our actual intelligence only at the end? vPhrase’s augmented analytics tool Phrazor uses natural language generation technology to make sense of data by turning it into effective narratives. With such a technology enabling automated scenario assessment, businesses need not sift through endless data to gain insights or make decisions. In the future, we believe, our technology will be able to analyze even larger data sets, not just to support better decisions but to draw conclusions, as humans would.
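To make the data-to-narrative idea concrete, here is a minimal template-based sketch (an illustration of natural language generation in general, not Phrazor’s actual technology; the `narrate` function and its inputs are invented for this example): two raw data points are turned into a readable one-sentence insight.

```python
def narrate(metric, previous, current):
    """Minimal template-based narrative generation: convert two
    data points into a one-sentence, human-readable insight."""
    change = (current - previous) / previous * 100
    direction = "rose" if change > 0 else "fell"
    return (f"{metric} {direction} {abs(change):.1f}% "
            f"from {previous} to {current}.")

sentence = narrate("Quarterly revenue", 100, 120)
# "Quarterly revenue rose 20.0% from 100 to 120."
```

Real systems layer variation, aggregation, and significance testing on top of such templates, but the core move is the same: numbers in, narrative out.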
As you will observe, the allure lies in the alliance between AI and human expertise. The machine does what it does best: reviewing enormous data sets and finding patterns that differentiate various activities and situations. Meanwhile, we humans do what we do best: examining the situation, fitting it into a larger picture, and deriving suitable solutions from it.
Romil Shah is an AI enthusiast with considerable experience across varied technology domains, primarily Natural Language Generation (NLG) and blockchain technology. He is passionate about technological innovation and, more importantly, its real-world applications. His work in the field has sparked constructive conversations and explorations around AI and its wider use in business.