Artificial Intelligence (AI) has been redefining society in ways we never anticipated. Technology accompanies us in every walk of life, from unlocking our smartphones to online shopping, intelligent car dashboards, and autonomous robots. Though the concept of AI was first discussed in the early 1950s, forming the basis for computer learning and complex decision-making, it is only recently, as applications that must process huge amounts of data have multiplied, that this field of technology has picked up pace.
What is Artificial Intelligence?
In the simplest terms, artificial intelligence refers to systems or machines that mimic human intelligence to perform tasks and that can iteratively improve themselves based on the information they collect. AI manifests in several forms. A few examples are:
- Chatbots use AI to understand customer problems faster and provide more efficient answers
- Intelligent assistants use AI to parse critical information from large free-text datasets to improve scheduling
- Recommendation engines can provide automated recommendations for TV shows based on users’ viewing habits
AI is much more about the process and the capability for superpowered thinking and data analysis than it is about any particular format or function. Although AI brings up images of high-functioning, human-like robots taking over the world, AI isn’t intended to replace humans. It’s intended to significantly enhance human capabilities and contributions. That makes it a very valuable business asset.
Stages of Artificial Intelligence
AI technologies are categorized by their capacity to mimic human characteristics, the technology they use to do so, and their real-world applications. Using these characteristics for reference, all artificial intelligence systems – real and hypothetical – fall into one of three categories. These are better thought of as three stages through which AI can evolve than as three distinct types of Artificial Intelligence.
- Artificial Narrow Intelligence (ANI), which has a narrow range of abilities
- Artificial General Intelligence (AGI), which is on par with human capabilities
- Artificial Super Intelligence (ASI), which is more capable than a human
Artificial Narrow Intelligence / Weak AI / Narrow AI (ANI)
Artificial narrow intelligence (ANI), also referred to as weak AI or narrow AI, is the only type of artificial intelligence we have successfully realized to date. Narrow AI is goal-oriented, designed to perform singular tasks – for example, facial recognition, speech recognition/voice assistants, driving a car, or searching the internet – and is very intelligent at completing the specific task it is programmed to do.
While these machines may seem intelligent, they operate under a narrow set of constraints and limitations, which is why this type is commonly referred to as weak AI. Narrow AI doesn’t mimic or replicate human intelligence, it merely simulates human behaviour based on a narrow range of parameters and contexts.
Consider the speech and language recognition of the Siri virtual assistant on iPhones, the vision recognition of self-driving cars, and recommendation engines that suggest products you might like based on your purchase history. These systems can only learn or be taught to complete specific tasks.
Narrow AI has experienced numerous breakthroughs in the last decade, powered by achievements in machine learning and deep learning. For example, AI systems today are used in medicine to diagnose cancer and other diseases with extreme accuracy through replication of human-esque cognition and reasoning.
Much of narrow AI’s machine intelligence comes from the use of natural language processing (NLP) to perform tasks. NLP is evident in chatbots and similar AI technologies. By understanding speech and text in natural language, AI is programmed to interact with humans in a natural, personalized manner.
Narrow AI can either be reactive or have limited memory. Reactive AI is incredibly basic; it has no memory or data storage capabilities, emulating the human mind’s ability to respond to different kinds of stimuli without prior experience. Limited memory AI is more advanced, equipped with data storage and learning capabilities that enable machines to use historical data to inform decisions.
Most AI is limited memory AI, where machines use large volumes of data for deep learning. Deep learning enables personalized AI experiences, for example, virtual assistants or search engines that store your data and personalize your future experiences.
Some of the common examples of Narrow AI are:
- Rankbrain by Google / Google Search
- Siri by Apple, Alexa by Amazon, Cortana by Microsoft, and other virtual assistants
- IBM’s Watson
- Image / facial recognition software
- Disease mapping and prediction tools
- Manufacturing and drone robots
- Email spam filters / social media monitoring tools for dangerous content
- Entertainment or marketing content recommendations based on watch/listen/purchase behaviour
- Self-driving cars
Artificial General Intelligence / Strong AI / Deep AI (AGI)
Artificial general intelligence (AGI), also referred to as strong AI or deep AI, is the concept of a machine with general intelligence that mimics human intelligence and/or behaviours, with the ability to learn and apply its intelligence to solve any problem. AGI can think, understand, and act in a way that is indistinguishable from that of a human in any given situation.
AI researchers and scientists have not yet achieved strong AI. To succeed, they would need to find a way to make machines conscious, programming a full set of cognitive abilities. Machines would have to take experiential learning to the next level, not just improving efficiency on singular tasks, but gaining the ability to apply experiential knowledge to a wider range of different problems.
Strong AI uses a theory of mind AI framework, which refers to the ability to discern the needs, emotions, beliefs, and thought processes of other intelligent entities. Theory-of-mind-level AI is not about replication or simulation; it is about training machines to truly understand humans.
The immense challenge of achieving strong AI is not surprising when you consider that the human brain is the model for creating general intelligence. The lack of comprehensive knowledge on the functionality of the human brain has researchers struggling to replicate basic functions of sight and movement. Fujitsu-built K, one of the fastest supercomputers, is one of the most notable attempts at achieving strong AI, but considering it took 40 minutes to simulate a single second of neural activity, it is difficult to determine whether or not strong AI will be achieved in our foreseeable future. As image and facial recognition technology advances, it is likely we will see an improvement in the ability of machines to learn and see.
Artificial Super Intelligence (ASI)
Artificial Super Intelligence (ASI) is the hypothetical AI that doesn’t just mimic or understand human intelligence and behaviour; ASI is where machines become self-aware and surpass the capacity of human intelligence and ability.
Superintelligence has long been the muse of dystopian science fiction in which robots overrun, overthrow, and/or enslave humanity. The concept of artificial superintelligence sees AI evolve to be so akin to human emotions and experiences, that it doesn’t just understand them, it evokes emotions, needs, beliefs, and desires of its own.
In addition to replicating the multi-faceted intelligence of human beings, ASI would theoretically be far better at everything we do: math, science, sports, art, medicine, hobbies, emotional relationships, everything. ASI would have greater memory and a faster ability to process and analyze data and stimuli. Consequently, the decision-making and problem-solving capabilities of super-intelligent beings would be far superior to those of human beings.
The potential of having such powerful machines at our disposal may seem appealing, but the concept itself has a multitude of unknown consequences. If self-aware super-intelligent beings came to be, they would be capable of ideas like self-preservation. The impact this will have on humanity, our survival, and our way of life is pure speculation.
Types of Artificial Intelligence
Based on the functionality of AI-based systems, AI can be categorized into the following types:
- Reactive Machines AI
- Limited Memory AI
- Theory of Mind AI
- Self-Aware AI
Reactive Machines AI
Reactive machines are the simplest kind of AI system. They cannot create memories or use learned information to influence future decisions – they are only able to react to presently existing situations.
IBM’s Deep Blue, a machine designed to play chess against a human, is an example of this. Deep Blue evaluates pieces on a chessboard and reacts to them, based on pre-coded chess strategies. It does not learn or improve as it plays – hence, it is simply ‘reactive’.
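This ‘reactive’ behaviour can be sketched as a stateless function that maps the current observation straight to an action. The rule table below is a hypothetical illustration, not Deep Blue’s actual strategy:

```python
# A reactive agent: a stateless mapping from the current observation
# to an action. It keeps no memory between calls, so identical inputs
# always produce identical outputs. (Hypothetical rules, for illustration.)
RULES = {
    "opponent_attacks_queen": "move_queen_to_safety",
    "checkmate_available": "deliver_checkmate",
    "piece_undefended": "defend_piece",
}

def reactive_agent(observation: str) -> str:
    """React to the present situation only; no learning, no history."""
    return RULES.get(observation, "make_neutral_move")
```

Because the agent stores nothing, playing the same position twice always produces the same move – exactly the ‘simply reactive’ behaviour described above.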
Limited Memory AI
A limited memory machine, as the name suggests, can retain some information learned from observing previous events or data, and can build knowledge using that memory in conjunction with pre-programmed data. Self-driving cars, for instance, store pre-programmed data such as lane markings and maps, alongside observed information such as the speed and direction of nearby cars or the movement of nearby pedestrians.
These vehicles can evaluate the environment around them and adjust their driving as necessary. As the technology evolves, machines have also become faster at reacting and making judgments – an invaluable asset in technology as potentially dangerous as self-driving cars. Improvements in machine learning also help autonomous vehicles continue to learn how to drive the way humans do: through experience over time.
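A minimal sketch of limited memory, assuming a hypothetical driving agent that combines pre-programmed data (a speed limit) with a short rolling window of observed traffic speeds:

```python
from collections import deque

class LimitedMemoryDriver:
    """Combines pre-programmed data (a speed limit) with a short,
    rolling window of past observations to choose an action.
    A hypothetical sketch, not a real autonomous-driving system."""

    def __init__(self, speed_limit: float, window: int = 3):
        self.speed_limit = speed_limit             # pre-programmed data
        self.recent_speeds = deque(maxlen=window)  # the "limited memory"

    def observe(self, nearby_car_speed: float) -> None:
        # Old observations fall out automatically once the window is full.
        self.recent_speeds.append(nearby_car_speed)

    def decide(self) -> str:
        if not self.recent_speeds:
            return "maintain_speed"
        avg = sum(self.recent_speeds) / len(self.recent_speeds)
        # Slow down when surrounding traffic is moving below the limit.
        return "slow_down" if avg < self.speed_limit else "maintain_speed"
```

Unlike the reactive agent, the decision here depends on recent history, not just the present instant – which is exactly what separates the two stages.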
Theory of Mind AI
Human beings have thoughts and feelings, memories, or other brain patterns that drive and influence their behaviour. It is based on this psychology that theory of mind researchers work, hoping to develop computers that can imitate human mental models. That is – machines that can understand that people and animals have thoughts and feelings that can affect their own behaviour.
It is this theory of mind that allows humans to have social interactions and form societies. Theory of mind machines would be required to use the information derived from people and learn from it, which would then inform how the machine communicates in or reacts to a different situation.
A famous but still very primitive example of this technology is Sophia, the world-famous robot developed by Hanson Robotics, who often goes on press tours as an ever-evolving example to the public of what robots are capable of doing. Whilst Sophia is not natively able to determine or understand human emotion, she can hold a basic conversation and has image recognition and an ability to respond to interactions with humans with the appropriate facial expression, as well as an incredibly human-like appearance.
Researchers have yet to truly develop theory of mind technology, however; critics of Sophia, for instance, describe her as simply “a chatbot with a face”.
Self-Aware AI
Self-aware AI machines are the most complex that we might ever be able to envision, and are described by some as the ultimate goal of AI.
These are machines that have human-level consciousness and understand their existence in the world. They don’t just ask for something they need, they understand that they need something; ‘I want a glass of water’ is a very different statement to ‘I know I want a glass of water’.
As a conscious being, this machine would not just know of its own internal state but be able to predict the feelings of others around it. For instance, as humans, if someone yells at us we assume that that person is angry because we understand that is how we feel when we yell. Without a theory of mind, we would not be able to make these inferences from other humans.
Self-aware machines are, at present, a work of science fiction and not something that exists – and, in fact, may never exist. As it is, we are probably best off focusing on developing machine learning in our AI. A machine that has a memory, can learn from events in that memory, and can then apply that learning to future decisions is the baseline of evolution in Artificial Intelligence. Developing this will lead to AI innovation that could turn society on its head, exponentially enhance how we live day to day, and even save lives.
Branches of Artificial Intelligence
Artificial Intelligence can be used to solve real-world problems by implementing the following processes/techniques:
- Machine Learning
- Neural Networks
- Robotics
- Expert Systems
- Fuzzy Logic
- Natural Language Processing
1. Machine Learning
Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.
The process of learning begins with observations or data, such as examples, direct experience, or instruction, to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow the computers to learn automatically without human intervention or assistance and adjust actions accordingly.
Classic machine learning algorithms treat text as a mere sequence of keywords; an approach based on semantic analysis, by contrast, mimics the human ability to understand the meaning of a text.
Under Machine Learning there are four categories:
- Supervised machine learning algorithms can apply what has been learned in the past to new data using labeled examples to predict future events. Starting from the analysis of a known training dataset, the learning algorithm produces an inferred function to make predictions about the output values. The system is able to provide targets for any new input after sufficient training. The learning algorithm can also compare its output with the correct, intended output and find errors in order to modify the model accordingly.
- Unsupervised machine learning algorithms are used when the information used to train is neither classified nor labeled. Unsupervised learning studies how systems can infer a function to describe a hidden structure from unlabeled data. The system doesn’t figure out the right output, but it explores the data and can draw inferences from datasets to describe hidden structures from unlabeled data.
- Semi-supervised machine learning algorithms fall somewhere between supervised and unsupervised learning, since they use both labeled and unlabeled data for training – typically a small amount of labeled data and a large amount of unlabeled data. Systems that use this method can considerably improve learning accuracy. Semi-supervised learning is usually chosen when labeling the data requires skilled and costly resources, whereas acquiring unlabeled data generally does not.
- Reinforcement machine learning algorithms are learning methods in which an agent interacts with its environment by producing actions and discovering errors or rewards. Trial-and-error search and delayed reward are the most relevant characteristics of reinforcement learning. This method allows machines and software agents to automatically determine the ideal behaviour within a specific context in order to maximize their performance. Simple reward feedback, known as the reinforcement signal, is required for the agent to learn which action is best.
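To make the supervised case concrete, here is a minimal sketch of a 1-nearest-neighbour classifier: it predicts the label of whichever labelled training example lies closest to the query. The training pairs and labels below are invented purely for illustration:

```python
def nearest_neighbour(train, query):
    """1-nearest-neighbour classification: return the label of the
    labelled training point closest to the query point."""
    def sq_dist(a, b):
        # Squared Euclidean distance (square root is not needed for ranking).
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda pair: sq_dist(pair[0], query))
    return label

# Labelled examples: (features, label) pairs – hypothetical 2-D data.
training_set = [((1.0, 1.0), "small"), ((1.2, 0.9), "small"),
                ((5.0, 5.2), "large"), ((4.8, 5.1), "large")]
```

The labelled pairs play the role of the “known training dataset” described above: the prediction for any new input is derived entirely from those examples.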
2. Neural Networks
Incorporating cognitive science and machines to perform tasks, the neural network is a branch of artificial intelligence inspired by neurology (the branch of biology concerned with nerves and the nervous system). A neural network loosely replicates the human brain, which comprises billions of interconnected neurons; encoding the behaviour of such neurons into a system or machine is what a neural network does.
In simple terms, a neural network is a set of algorithms used to find elemental relationships across batches of data via a process that imitates the way the human brain operates.
A neural network is thus a system of neurons, natural or artificial; artificial neurons are known as perceptrons, and the perceptron is the basic building block of the neural network model.
A neuron in a neural network is a mathematical function (built around an activation function) whose job is to gather and classify information according to a particular structure; the network also draws on statistical techniques, such as regression analysis, to accomplish its tasks.
Neural networks are used extensively, from forecasting and market research to fraud detection, risk analysis, stock-exchange prediction, sales prediction, and many more applications.
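Since an artificial neuron is a perceptron, a single neuron can be sketched in a few lines. The classic error-correction rule below learns the logical AND function from labelled examples; the learning rate and epoch count are illustrative choices, not canonical values:

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Train a single artificial neuron (perceptron) with the classic
    error-correction rule: nudge the weights whenever a prediction is wrong."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    """Step activation: fire (1) when the weighted sum crosses zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learn the logical AND function from four labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Each pass through the data adjusts the weights only when the neuron is wrong, which is the error-correction behaviour the statistical techniques mentioned above generalize.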
3. Robotics
Robotics has emerged as a very active field of artificial intelligence. This field of research and development focuses mainly on designing and constructing robots.
What is a Robot?
A robot is the product of the robotics field, where programmable machines are built that can assist humans or mimic human actions. Robots were originally built to handle monotonous tasks (like building cars on an assembly line), but have since expanded well beyond their initial uses to perform tasks like fighting fires, cleaning homes, and assisting with incredibly intricate surgeries. Each robot has a differing level of autonomy, ranging from human-controlled bots that carry out tasks that a human has full control over to fully-autonomous bots that perform tasks without any external influences.
- Robotics is an interdisciplinary field of science and engineering that incorporates mechanical engineering, electrical engineering, computer science, and many other disciplines.
- Robotics covers the design, production, operation, and use of robots, and deals with the computer systems for their control, intelligent behaviour, and information processing.
Robots are often deployed to conduct tasks that would be laborious for humans to perform consistently. Major robotics tasks include assembly lines for automobile manufacturing and moving large objects in space for NASA. AI researchers are also developing robots that use machine learning to interact at a social level.
4. Expert Systems
Expert systems were considered among the first successful models of AI software. They were first designed in the 1970s and then proliferated in the 1980s.
Under the umbrella of AI technology, an expert system refers to a computer system that mimics the decision-making intelligence of a human expert. It does this by deriving knowledge from its knowledge base, applying reasoning and inference rules to user queries.
The effectiveness of an expert system relies entirely on the expert knowledge accumulated in its knowledge base: the more information it collects, the more the system’s efficiency improves. For example, an expert system provides suggestions for spelling and errors in the Google search engine. Expert systems are built to deal with complex problems by reasoning through bodies of knowledge, expressed mainly as “if-then” rules rather than conventional procedural code. Key features of expert systems include high responsiveness, reliability, understandability, and execution speed.
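The “if-then” style can be sketched as a small forward-chaining rule engine: fire any rule whose conditions are all satisfied, add its conclusion to the known facts, and repeat. The medical-flavoured rules below are hypothetical, for illustration only:

```python
# A toy expert system: a knowledge base of if-then rules plus a simple
# forward-chaining inference loop. (Hypothetical rules, not medical advice.)
RULES = [
    ({"fever", "cough"}, "possible_flu"),
    ({"sneezing", "runny_nose"}, "possible_cold"),
    ({"possible_flu", "fatigue"}, "recommend_doctor_visit"),
]

def infer(initial_facts):
    """Repeatedly fire every rule whose conditions are all present,
    adding its conclusion to the working set of facts, until nothing changes."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Note how one rule’s conclusion (“possible_flu”) can satisfy another rule’s condition: that chaining is what lets the knowledge base answer questions no single rule covers.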
5. Fuzzy Logic
In the real world, we sometimes face a condition where it is difficult to decide whether a statement is true or false; there, fuzzy logic offers valuable flexibility for reasoning, allowing us to account for the inaccuracies and uncertainties of the condition.
In simpler terms, Fuzzy logic is a technique that represents and modifies uncertain information by measuring the degree to which the hypothesis is correct. Fuzzy logic is also used for reasoning about naturally uncertain concepts. Fuzzy logic is convenient and flexible to implement machine learning techniques and assist in imitating human thought logically.
It is simply a generalization of standard logic, in which a concept exhibits a degree of truth between 0.0 and 1.0. In standard logic, a concept is either completely true (1.0) or completely false (0.0); in fuzzy logic there are also intermediate values, which are partially true and partially false.
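A degree of truth between 0.0 and 1.0 is commonly expressed as a membership function. The sketch below grades how “warm” a temperature is; the 15 °C and 25 °C thresholds are chosen purely for illustration:

```python
def warm_membership(temp_c: float) -> float:
    """Degree to which a temperature counts as 'warm', on a 0.0-1.0 scale:
    completely false below 15 C, completely true above 25 C, and a
    partial truth in between. (Hypothetical thresholds, for illustration.)"""
    if temp_c <= 15:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 15) / 10  # linear ramp between the two extremes
```

At 20 °C the statement “it is warm” is neither true nor false but 0.5 true – the intermediate value that standard two-valued logic cannot express.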
6. Natural Language Processing
In layman’s terms, NLP is the part of computer science and AI that enables communication between computers and humans in natural language. It is a technique for the computational processing of human languages, enabling a computer to read and understand data by mimicking natural human language.
NLP is a method that deals with searching, analyzing, understanding, and deriving information from text data. Programmers use NLP libraries to teach computers how to extract meaningful information from text. A common example of NLP is spam detection: computer algorithms can check whether an email is junk by looking at its subject line or the text of the email.
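The spam-detection example can be sketched as a simple keyword check. Real filters use statistical models such as naive Bayes; the word list and threshold here are hypothetical, for illustration only:

```python
# A minimal keyword-based spam check. (Hypothetical word list; real
# spam filters use statistical models trained on large corpora.)
SPAM_WORDS = {"winner", "prize", "free", "urgent"}

def is_spam(subject: str, threshold: int = 2) -> bool:
    """Flag a message when enough spam keywords appear in its subject line."""
    words = subject.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?:") in SPAM_WORDS)
    return hits >= threshold
```

Lower-casing and stripping punctuation are the tiniest possible form of the text normalization that NLP libraries perform before any real analysis.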
Implementing NLP provides various benefits:
- It improves the accuracy and efficiency of document processing.
- It can produce automated, readable summaries of text.
- It is very advantageous for personal assistants such as Alexa.
- It enables organizations to adopt chatbots for customer support.
- It makes sentiment analysis easier.
Some NLP applications are text translation, sentiment analysis, and speech recognition. For example, Twitter uses NLP techniques to filter terroristic language out of tweets, and Amazon implements NLP to interpret customer reviews and enhance the customer experience.