AI is capable of almost anything, from predicting patterns to creating images, like this one. Image: Bing Image Creator

Hear the term artificial intelligence (AI) and you might think of self-driving cars, robots, ChatGPT or other AI chatbots, and artificially created images. But it's also important to look behind the outputs of AI and understand how the technology works and its impact on this and future generations.
AI is a concept that has been around, formally, since the 1950s, when it was defined as a machine's ability to perform a task that would've previously required human intelligence. This is quite a broad definition and one that has been modified over decades of research and technological advancements.
When you consider assigning intelligence to a machine, such as a computer, it makes sense to start by defining the term 'intelligence' -- especially when you want to determine if an artificial system is truly deserving of it.
Also: These experts are racing to protect AI from hackers
Our level of intelligence sets us apart from other living beings and is essential to the human experience. Some experts define intelligence as the ability to adapt, solve problems, plan, improvise in new situations, and learn new things.
With intelligence sometimes seen as the foundation of the human experience, it's perhaps no surprise that we'd try to recreate it artificially in scientific endeavors.
And today's AI systems might demonstrate some traits of human intelligence, including learning, problem-solving, perception, and even a limited spectrum of creativity and social intelligence.
AI comes in different forms that have become widely available in everyday life. The smart speakers on your mantel with Alexa or Google Assistant built in are great examples of AI. Other good examples are popular AI chatbots, such as ChatGPT, the new Bing Chat, and Google Bard.
When you ask ChatGPT for the capital of a country or you ask Alexa to give you an update on the weather, you'll get responses that are the result of machine-learning algorithms.
Also: How does ChatGPT work?
Though these systems aren't a replacement for human intelligence or social interaction, they have the ability to use their training to adapt and learn new skills for tasks that they weren't explicitly programmed to perform.
Artificial intelligence can be divided into three widely accepted subcategories: narrow AI, general AI, and super AI.
Artificial narrow intelligence (ANI) is crucial to voice assistants, such as Siri, Alexa, and Google Assistant. This category includes intelligent systems that have been designed or trained to carry out specific tasks or solve particular problems, without being built to generalize beyond them.
ANI is often referred to as weak AI, as it doesn't possess general intelligence. But examples of the power of narrow AI include the voice assistants above, as well as image-recognition systems, technologies that respond to simple customer-service requests, and tools that flag inappropriate content online.
Also: 6 things ChatGPT can't do (and another 20 it refuses to do)
ChatGPT is an example of ANI, as it is programmed to perform a specific task, which is to generate text responses to the prompts it is given.
Artificial general intelligence (AGI), also known as strong AI, is still a hypothetical concept as it involves a machine understanding and performing vastly different tasks based on its accumulated experience. This type of intelligence is more on the level of human intellect, as AGI systems would be able to reason and think like a human.
Also: AI's true goal may no longer be intelligence
Like a human, AGI would potentially be able to understand any intellectual task, think abstractly, learn from its experiences, and use that knowledge to solve new problems. Essentially, we're talking about a system or machine capable of common sense, which is currently not achievable with any form of available AI.
Developing a system with its own consciousness is still, presumably, a long way off, but it is the ultimate goal of AI research.
Artificial super intelligence (ASI) is a system that wouldn't just rock humankind to its core -- it could also destroy it. If that sounds straight out of a science fiction novel, that's because it kind of is: ASI is a system in which the intelligence of a machine surpasses all forms of human intelligence, in all aspects, and outperforms humans in every function.
Also: How can generative AI improve the customer experience?
An intelligent system that can learn and continuously improve itself is still a hypothetical concept. However, it's a system that, if applied effectively and ethically, could lead to extraordinary progress and achievements in medicine, technology, and more.
Overall, the most notable advancements in AI are the development and release of GPT-3.5 and GPT-4. But there have been many other revolutionary achievements in artificial intelligence -- too many, in fact, to include all of them here.
Here are some of the most notable:
ChatGPT is an AI chatbot capable of natural language generation, translation, and answering questions. Though it's arguably the most popular AI tool, thanks to its widespread accessibility, OpenAI made significant waves in the world of artificial intelligence with the creation of GPTs 1, 2, and 3.
Also: 5 ways to use chatbots to make your life easier
GPT stands for Generative Pre-trained Transformer, and GPT-3 was the largest language model in existence at the time of its 2020 launch, with 175 billion parameters. The latest version, GPT-4, accessible through ChatGPT Plus or Bing Chat, is rumored to have around one trillion parameters, though OpenAI has not confirmed its size.
Though the safety of self-driving cars is a top concern of potential users, the technology continues to advance and improve with breakthroughs in AI. These vehicles use machine-learning algorithms to combine data from sensors and cameras to perceive their surroundings and determine the best course of action.
Also: An autonomous car that wakes up and greets you could be in your future
Tesla's Autopilot feature in its electric vehicles is probably what most people think of when considering self-driving cars, but Waymo, from Google's parent company, Alphabet, offers autonomous rides -- like a taxi without a taxi driver -- in San Francisco, CA, and Phoenix, AZ.
Cruise is another robotaxi service, and companies like Apple, Audi, GM, and Ford are also reportedly working on self-driving vehicle technology.
The achievements of Boston Dynamics stand out in the area of AI and robotics. Though we're still a long way away from creating AI at the level of technology seen in the movie The Terminator, watching Boston Dynamics' robots use AI to navigate and respond to different terrains is impressive.
Google sister company DeepMind is an AI pioneer making strides toward the ultimate goal of artificial general intelligence (AGI). Though not there yet, the company initially made headlines in 2016 with AlphaGo, a system that beat a human professional Go player.
Since then, DeepMind has created a protein-folding prediction system, which can predict the complex 3D shapes of proteins, and it's developed programs that can diagnose eye diseases as effectively as the top doctors around the world.
The biggest quality that sets AI apart from other computer science topics is its ability to automate tasks by employing machine learning, which lets computers learn from experience rather than being explicitly programmed to perform each task. This capability is what many refer to as AI, but machine learning is actually a subset of artificial intelligence.
Machine learning involves a system being trained on large amounts of data, so it can learn from mistakes and recognize patterns in order to make accurate predictions and decisions, whether or not it has been exposed to that specific data before.
Also: What is machine learning? Everything you need to know
Examples of machine learning include image and speech recognition, fraud protection, and more. One specific example is the image-recognition system that runs when users upload a photo to Facebook. The social network can analyze the image and recognize faces, which leads to recommendations to tag different friends. With time and practice, the system hones this skill and learns to make more accurate recommendations.
As mentioned above, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning, with reinforcement learning often treated as a third.
Supervised learning is a common technique for teaching AI systems, using many labeled examples that have been categorized by people. These machine-learning systems are fed huge amounts of data that has been annotated to highlight the features of interest -- you're essentially teaching by example.
If you wanted to train a machine-learning model to recognize and differentiate images of circles and squares, you'd start by gathering a large dataset of both shapes in different contexts -- a drawing of a planet for a circle, or a table top for a square, for example -- complete with labels for what each shape is.
The algorithm would then learn from this labeled collection of images to distinguish the shapes and their characteristics, such as circles having no corners and squares having four equal sides. After it's trained on the dataset of images, the system will be able to look at a new image and determine which shape it's seeing.
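As a rough sketch of how that circles-and-squares example might look in code, here's a minimal supervised-learning snippet that assumes the scikit-learn library. Instead of raw images, each shape is boiled down to two invented numeric features (corner count and width-to-height ratio), and the labels are supplied by hand.

```python
# Minimal supervised-learning sketch (illustrative only).
# Instead of raw images, each shape is reduced to two made-up features:
# number of detected corners and the ratio of width to height.
from sklearn.tree import DecisionTreeClassifier

# Labeled training examples: [corners, width/height ratio]
X_train = [
    [0, 1.00],  # circle (e.g. a drawing of a planet)
    [0, 1.05],  # circle
    [4, 1.00],  # square (e.g. a table top)
    [4, 0.98],  # square
]
y_train = ["circle", "circle", "square", "square"]

# "Teaching by example": the model learns which features map to which label
model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# A new, unlabeled shape: four corners, roughly equal sides
print(model.predict([[4, 1.02]]))  # -> ['square']
```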
Unsupervised learning takes a different approach: algorithms try to identify patterns in data, looking for similarities that can be used to categorize that data.
An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.
Also: Machine learning is going real-time: Here's why and how
The algorithm isn't set up in advance to pick out specific types of data; it simply looks for data with similarities that it can group -- for example, grouping customers together based on shopping behavior to target them with personalized marketing campaigns.
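Here's a minimal sketch of that customer-grouping idea, assuming scikit-learn's KMeans algorithm. The shopping figures are invented, and the algorithm is never told which customer belongs to which group -- it discovers the clusters on its own.

```python
# Minimal unsupervised-learning sketch (illustrative only).
# No labels are given; the algorithm finds groups of similar customers itself.
from sklearn.cluster import KMeans

# Each row: [purchases per month, average spend per purchase]
customers = [
    [2, 15], [3, 20], [2, 18],       # occasional, low-spend shoppers
    [12, 150], [15, 170], [14, 160]  # frequent, high-spend shoppers
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(customers)
print(labels)  # e.g. [0 0 0 1 1 1] -- two groups, discovered without labels
```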
In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.
Consider training a system to play a video game, where it receives a positive reward for a higher score and a negative reward for a low score. The system analyzes the game and makes moves, learning solely from the rewards it receives, until it reaches the point of being able to play on its own and earn a high score without human intervention.
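As a toy-scale illustration of that trial-and-error loop, here's a minimal tabular Q-learning sketch in plain Python. The "game" is just a five-cell corridor with a reward for reaching the end -- far simpler than a real video game, but the reward-driven update is the same basic idea.

```python
# Minimal reinforcement-learning sketch (tabular Q-learning, illustrative only).
# The "game" is a 5-cell corridor; reaching the rightmost cell scores +1,
# every other step scores 0. The agent learns purely from these rewards.
import random

n_states, actions = 5, [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != n_states - 1:
        # Explore sometimes, otherwise pick the best-known action
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Trial and error: nudge the estimate toward reward + future value
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy is simply "move right" in every cell
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)])  # [1, 1, 1, 1]
```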
Reinforcement learning is also used in research, where it can help teach autonomous robots about the optimal way to behave in real-world environments.
Among the most renowned types of AI right now are large language models (LLMs). These models use unsupervised machine learning and are trained on massive amounts of text to learn how human language works. These texts include articles, books, websites, and more.
In the training process, LLMs process billions of words and phrases to learn patterns and relationships between them, making the models able to generate human-like answers to prompts.
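To make the idea of learning patterns and relationships between words concrete, here's a drastically simplified toy sketch. A real LLM uses a transformer neural network with billions of parameters; this version just counts which word follows which in a tiny sample of text and always picks the most common follower.

```python
# Toy "language model" sketch (illustrative only) -- real LLMs use transformer
# neural networks with billions of parameters, but the core idea is similar:
# learn, from lots of text, which words tend to follow which.
from collections import Counter, defaultdict

text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)
words = text.split()

# Count how often each word follows each other word (bigram statistics)
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

# "Generate" text by repeatedly picking the most likely next word
word, output = "the", ["the"]
for _ in range(5):
    word = following[word].most_common(1)[0][0]
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the cat"
```

The output quickly gets repetitive because this toy model only looks one word back; an LLM's strength comes from modeling much longer-range context.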
The most popular LLM is GPT-3.5, on which ChatGPT is based, and the largest is reportedly GPT-4. Bard uses LaMDA, an LLM developed by Google, which is the second-largest LLM.
Part of the machine-learning family, deep learning involves training artificial neural networks with three or more layers to perform different tasks. These neural networks are expanded into sprawling networks with a large number of deep layers that are trained using massive amounts of data.
Deep-learning models tend to have more than three layers, and can have hundreds. Deep learning can use supervised or unsupervised learning, or a combination of both, in the training process.
Also: What is deep learning? Everything you need to know
Because deep-learning technology can learn to recognize complex patterns in data using AI, it is often used in natural language processing (NLP), speech recognition, and image recognition.
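As a rough illustration of what "three or more layers" means in practice, here's a minimal sketch that assumes the PyTorch library. The layer sizes and the 10-class output are arbitrary choices for the example, not anything specific to the systems discussed above.

```python
# Sketch of a small deep-learning model (PyTorch assumed; sizes are arbitrary).
# "Deep" simply means several layers stacked between input and output.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(28 * 28, 128),  # input layer: e.g. a flattened 28x28 image
    nn.ReLU(),
    nn.Linear(128, 64),       # hidden layer 1
    nn.ReLU(),
    nn.Linear(64, 32),        # hidden layer 2
    nn.ReLU(),
    nn.Linear(32, 10),        # output layer: scores for 10 possible classes
)

# One forward pass on a fake batch of four "images"
fake_images = torch.randn(4, 28 * 28)
print(model(fake_images).shape)  # torch.Size([4, 10])
```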
The success of machine learning relies on neural networks. These are mathematical models whose structure and functioning are loosely based on the connection between neurons in the human brain, mimicking the way they signal to one another.
Imagine a group of robots working together to solve a puzzle. Each one is programmed to recognize a different shape or color in the puzzle pieces, and the robots combine their abilities to solve the puzzle together. A neural network is like that group of robots: many simple components, each handling one small part of the problem, working together to produce the overall result.
Neural networks can tweak their internal parameters to change what they output. During training, each one is fed large datasets to learn what it should output when presented with certain data.
Also: We will see a completely new type of computer, says AI pioneer
They are made up of interconnected layers of algorithms that feed data into each other. Neural networks can be trained to carry out specific tasks by modifying the importance attributed to data as it passes between layers. During the training of these neural networks, the weights attached to data as it passes between layers will continue to be varied until the output from the neural network is very close to what is desired.
At that point, the network will have 'learned' how to carry out a particular task. The desired output could be anything from correctly labelling fruit in an image to predicting when an elevator might fail based on its sensor data.
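To make "weights varied until the output is close to what is desired" more concrete, here's a minimal, self-contained sketch using NumPy. It trains a single artificial "neuron" -- just one weight -- rather than a full layered network, but the adjust-and-check loop is the same basic principle.

```python
# Sketch of training by adjusting a weight until the output matches the target
# (illustrative only): a single artificial neuron learning, by gradient
# descent, to predict y = 2*x from a handful of examples.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y_desired = 2.0 * x            # the output we want the network to produce

weight = 0.0                   # start with a poor initial guess
learning_rate = 0.05

for step in range(100):
    y_output = weight * x                       # current network output
    error = y_output - y_desired                # how far off we are
    gradient = (2 * error * x).mean()           # direction to nudge the weight
    weight -= learning_rate * gradient          # adjust the weight slightly

print(round(weight, 3))  # close to 2.0: the network has "learned" the task
```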
Conversational AI includes systems that are programmed to have conversations with a user: trained to listen (input) and respond (output) in a conversational manner. Conversational AI uses natural language processing (NLP) to understand and respond in a natural way.
Also: Why conversational AI is now ready for prime time
Some examples of conversational AI are chatbots like Google Bard, smart speakers with a voice assistant like Amazon Alexa, or virtual assistants on your smartphone like Siri.
General consumers and businesses alike have a wealth of AI services available to expedite tasks and add convenience to day-to-day life -- you probably have something in your home that uses AI in some capacity.
Here are some common examples of artificial intelligence available to the public, both free and for a fee: AI chatbots such as ChatGPT, Bing Chat, and Google Bard; image generators such as Dall-E 2, Midjourney, and Bing Image Creator; and voice assistants such as Siri, Alexa, and Google Assistant.
Though generative AI dominates the artificial intelligence breakthroughs of 2023, several top companies are working on their own advances.
It's not surprising that OpenAI has taken the lead so far in the AI race this year, after making generative AI tools available for widespread use for free, such as the AI chatbot ChatGPT and the image generator Dall-E 2.
Also: ChatGPT's intelligence is zero, but it's a revolution in usefulness, says AI expert
Google's parent company, Alphabet, has its hands in several different AI systems through some of its companies, including DeepMind, Waymo, and the aforementioned Google.
DeepMind continues to pursue artificial general intelligence, as evidenced by the scientific problems it is trying to solve with AI systems. It has developed machine-learning models for Document AI, optimized the viewer experience on YouTube, made AlphaFold available to researchers worldwide, and more.
Also: DeepMind: Why is AI so good at language? It's something in language itself
Though you may not hear about Alphabet's artificial intelligence endeavors in the news every day, its work in deep learning and AI in general has the potential to change the future for human beings.
Aside from creating Microsoft 365 Copilot for its Microsoft 365 suite of applications, Microsoft provides a suite of AI tools for developers on Azure, including platforms for machine learning, data analytics, and conversational AI, as well as customizable APIs that achieve human parity in computer vision, speech, and language.
Also: Microsoft CEO Nadella: 'Expect us to incorporate AI in every layer of the stack'
Microsoft has also invested heavily in OpenAI's development, and is using GPT-4 in the new Bing Chat, as well as a more advanced version of Dall-E 2 for the Bing Image Creator.
These are just a few examples of companies leading the AI race, but there are many others worldwide that are also making strides into artificial intelligence, including Baidu, Alibaba, Cruise, Lenovo, Tesla, and more.
Artificial intelligence has the power to change how we work, how we manage our health, how we consume media, how we get to work, our privacy, and more.
Consider the impact that certain AI systems can have on the world as a whole. People can ask a voice assistant on their phones to hail rides from autonomous cars to get them to work, where they can use AI tools to be more efficient than ever before.
Also: Generative AI could lower drug prices. Here's how
Doctors and radiologists could make cancer diagnoses using fewer resources, spot genetic sequences related to diseases, and identify molecules that could lead to more effective medications, potentially saving countless lives.
Alternatively, it's worth considering the disruption that could result from neural networks that can create realistic images -- such as Dall-E 2, Midjourney, and Bing Image Creator -- or that can replicate someone's voice or create deepfake videos using a person's likeness. These tools could threaten people's ability to trust that a photo, video, or audio clip is genuine.
Also: Why your ChatGPT conversations may not be as secure as you think
Another ethical issue with AI concerns facial recognition and surveillance, and how this technology could be an intrusion on people's privacy; many experts are calling for it to be banned altogether.
Artificially intelligent systems replacing a considerable chunk of modern labor is a credible near-future possibility.
While commonplace artificial intelligence won't replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.
Also: Generative AI is changing your technology career path. What to know
However, artificial intelligence can't run on its own, and while many jobs with routine, repetitive data work might be automated, workers in other jobs can use tools like generative AI to become more productive and efficient.
There's a broad range of opinions among AI experts about how quickly artificially intelligent systems will surpass human capabilities.
Fully autonomous self-driving vehicles aren't a reality yet, but, by some predictions, the self-driving trucking industry alone is poised to eventually take over 500,000 jobs in the US, even without considering the impact on couriers and taxi drivers.