What is artificial intelligence (AI)?

No longer a feature of science fiction, artificial intelligence (AI) is here – and it’s here to stay. While the world attempts to grasp the ramifications of the technology in its current iterations, AI continues to evolve at a blistering pace. Whether in the realm of industrial automation, scientific research or the creative industries, the far-reaching effects of AI are still to be determined. However, it is already impacting our daily lives.

Amid the hyperbolic language that surrounds AI, many people struggle to understand what it is and what it means for them. For a better understanding of what AI is, how it works, its practical applications – and why standards are crucial to its safe onward development – read on.

What is AI? Decoding the AI meaning

Artificial intelligence is “a technical and scientific field devoted to the engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives” [ISO/IEC 22989:2022]. While this definition of artificial intelligence is accurate from the technical perspective, how does it translate for the average person?

In truth, AI is just a practical tool, not a panacea. It’s only as good as the algorithms and machine learning techniques that guide its actions. AI can get really good at performing a specific task, but it takes tonnes of data and repetition. It simply learns to analyse large amounts of data, recognize patterns, and make predictions or decisions based on that data, continuously improving its performance over time.

Today, the meaning of AI has evolved beyond mere data processing to include the development of machines capable of learning, reasoning and problem-solving. Machine learning has become so “competent” that it can generate everything from software code to images, articles, videos and music. This is the next level of AI, so-called generative AI, which differs from traditional AI in its capabilities and applications. While traditional AI systems are primarily used to analyse data and make predictions, generative AI goes a step further by creating new data similar to its training data.

How does AI work?

In essence, AI analyses data to extract patterns and make predictions. It does this by combining large datasets with intelligent AI algorithms – or sets of rules – that allow the software to learn from patterns in the data. The way the system accomplishes this is through a neural network – an array of interconnected nodes that relay information between various layers to find connections and derive meaning from data.
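To make this more concrete, here is a minimal sketch of a single pass through a tiny neural network, written in Python with NumPy; the layer sizes, weights and input values are illustrative placeholders, not taken from any real system.

```python
# A minimal sketch of one forward pass through a tiny neural network (NumPy).
# All sizes, weights and inputs are illustrative placeholders.
import numpy as np

def relu(x):
    return np.maximum(0, x)  # simple non-linearity applied at each hidden node

rng = np.random.default_rng(0)

# A tiny network: 3 inputs -> 4 hidden nodes -> 1 output
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)  # connections from inputs to the hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # connections from the hidden layer to the output

x = np.array([0.2, -1.0, 0.5])   # one example with three input features

hidden = relu(x @ W1 + b1)       # information relayed through the hidden layer
output = hidden @ W2 + b2        # combined into a single prediction
print(output)
```

In a real system, the weights would be adjusted during training so that the outputs move closer to the desired results.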

To grasp how this works, we must unpack the following concepts:

  • Learning: AI’s machine learning feature enables machines to learn from data, identify patterns and make decisions without explicit programming. Going one step further, advancements in deep learning empower AI software to understand more complex patterns using millions of data points.
  • Reasoning: The ability to reason is crucial to AI because it allows computers to mimic the human brain. AI can make inferences based on commands it is given, or other available information, to form hypotheses or develop strategies for addressing a problem.
  • Problem solving: AI’s problem-solving capability is based on the manipulation of data through trial-and-error techniques. It involves using algorithms to explore various possible paths to find the optimal solution to complex problems.
  • Processing language: AI uses natural language processing – or NLP – to analyse human language data in a way that is meaningful to computers. What is NLP? It refers to the ability of computers to understand, interpret and generate human language, using text analysis, sentiment analysis and machine translation (a brief sentiment-analysis sketch follows this list).
  • Perception: AI scans the environment through devices such as temperature sensors and cameras. Known as computer vision, this field of AI enables machines to interpret and understand visual data and is used in image recognition, facial recognition and object detection.
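As a small illustration of the natural language processing point above, the sketch below performs sentiment analysis on a single sentence. It assumes the Hugging Face transformers library is installed and that its default English sentiment model can be downloaded on first use.

```python
# A minimal sentiment-analysis sketch using the Hugging Face "transformers" library.
# The default model is downloaded automatically the first time this runs.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # ready-made text-classification pipeline
result = classifier("Standards make AI development safer and more predictable.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```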

Strong AI vs weak AI

Artificial intelligence (AI) encompasses a diverse spectrum of capabilities, which can be broadly classified into two categories: weak AI and strong AI. Weak AI, often referred to as artificial narrow intelligence (ANI) or narrow AI, embodies systems meticulously crafted to excel at specific tasks within well-defined parameters. These systems operate within a confined scope of expertise and lack the capacity for general intelligence. Think of them as specialists trained to perform particular functions efficiently.

Despite its name, weak AI is anything but weak; it is the powerhouse behind many of the artificial intelligence applications we interact with every day. We see examples of narrow AI all around us. From Siri and Alexa’s lightning-fast responses to IBM Watson’s data-crunching prowess and the seamless navigation of self-driving cars, ANI fuels the remarkable innovations shaping our world.

Here are some other examples of narrow AI applications, characterized by their specialized algorithms designed for specific tasks:

  • Smart assistants: Often referred to as the best examples of weak AI, digital voice assistants use natural language processing for a range of specific tasks like setting alarms, answering questions and controlling smart home devices.
  • Chatbots: If you’ve ever chatted online with your favourite e-store, chances are you were communicating with AI. Many customer service platforms use ANI algorithms to answer common enquiries, leaving humans free to perform higher-level tasks.
  • Recommendation engines: Ever wondered how Netflix always seems to know what movie you want to watch or how Amazon predicts your next purchase? These platforms use ANI to analyse your viewing or purchasing habits, alongside those of similar users, to deliver personalized suggestions.
  • Navigation apps: How do you get from A to B without getting lost? A navigation app, such as Google Maps, is a software application that uses ANI designed to provide real-time directions to users when navigating from one location to another.
  • Email spam filters: A computer uses artificial narrow intelligence to learn which messages are likely to be spam, and then redirects them from the inbox to the spam folder (see the sketch below).
  • Autocorrect features: When your iPhone rectifies your typos as you write, you’re experiencing the power of weak AI at work in your everyday life. By leveraging algorithms and user data, these predictive text functions ensure smoother and more efficient text composition across devices.

Each of these applications demonstrates the strength of ANI to execute well-defined tasks by analysing large datasets and following specialized algorithms. So, next time you marvel at the capabilities of AI, remember that it’s weak AI that powers these remarkable innovations, shaping our world in ways that we once thought unimaginable.
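To give the spam-filter example above some shape in code, here is a minimal sketch of a narrow-AI classifier, assuming scikit-learn is installed; the tiny training set is made up purely for illustration.

```python
# A minimal narrow-AI spam filter: word counts plus a naive Bayes classifier.
# The training messages and labels below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "claim your free money",
          "meeting agenda for monday", "lunch with the project team"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()          # turn each message into word counts
X = vectorizer.fit_transform(emails)

model = MultinomialNB().fit(X, labels)  # learn which words tend to signal spam

new_email = ["claim your free prize"]
print(model.predict(vectorizer.transform(new_email)))  # [1] -> likely spam
```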

Strong AI
  • Also known as artificial general intelligence (AGI)
  • Designed to adapt, learn and apply knowledge across various domains
Weak AI
  • Also known as artificial narrow intelligence (ANI) or narrow AI
  • Designed to excel at specific tasks within well-defined parameters

In contrast, the concept of strong AI, also known as general AI, aspires to develop systems capable of tackling a wide array of tasks with a level of proficiency that satisfies human standards. Unlike their narrow AI counterparts, strong AI systems aim to possess a form of general intelligence, allowing them to adapt, learn and apply knowledge across various domains. Essentially, the goal is to create artificial entities endowed with cognitive abilities akin to those of humans, capable of engaging in intellectual endeavours spanning diverse fields.

While strong AI is purely speculative, with no practical examples in use today, that doesn’t mean AI researchers aren’t busy exploring its potential. Notably, the concept drives research in the field of artificial general intelligence (AGI) and informs the development of ever more capable intelligent machines and algorithms.

Theoretically, AGI could perform any human job, from cleaning to coding. So, although there are currently no real-life applications of AGI, the concept is poised to have a transformative impact in several fields. These include:

  • Language: Writing essays, poems, and engaging in conversations.
  • Healthcare: Medical imaging, drug research and surgery.
  • Transportation: Fully automated cars, trains and planes.
  • Arts and entertainment: Creating music, visual art and films.
  • Domestic robots: Cooking, cleaning and childcare.
  • Manufacturing: Supply chain management, stocktaking and consumer services.
  • Engineering: Programming, building and architecture.
  • Security: Detecting fraud, preventing security breaches and improving public safety.

While researchers and developers continuously strive to push the boundaries of AGI capabilities, achieving true general intelligence comparable to human cognition poses immense challenges and remains an elusive goal on the horizon. That being said, with the significant advancements in AI technology and machine learning, it seems the question we should ask is not if but when.

What are the four types of AI?

Artificial intelligence (AI) encompasses a wide range of capabilities, each serving distinct functions and purposes. Understanding the four types of AI sheds some light on the evolving landscape of machine intelligence:

  • Reactive machines: These AI systems operate within predefined rules but lack the capacity to learn from new data or experiences. For instance, chatbots used to interact with online customers often rely on reactive machine intelligence to generate responses based on programmed algorithms. While they perform well within their designated functions, they cannot adapt or evolve beyond their initial programming (a minimal rule-based sketch follows this list).
  • Limited memory: Unlike reactive machines, AI systems with limited memory possess the ability to learn from historical data and past experiences. By processing information from previous interactions, these types of AI systems can make informed decisions and adapt to some extent based on their training. Examples include self-driving cars equipped with sensors and machine learning algorithms that enable them to navigate through dynamic environments safely. Natural language processing applications also use historical data to enhance language comprehension and interpretation over time.
  • Theory of mind: This type of AI is still a pipe dream, but it describes the idea of an AI system that can perceive and understand human emotions, then use that information to predict future actions and make decisions on its own. Developing AI with a theory of mind could revolutionize a wide range of fields, including human-computer interactions and social robotics, by enabling more empathetic and intuitive machine behaviour.
  • Self-aware AI: This refers to the hypothetical scenario of an AI system that has self-awareness, or a sense of self. Self-aware AI possesses human-like consciousness and understands its own existence in the world, as well as the emotional state of others. So far, these types of AI are only found in the fantastical world of science fiction, popularized by iconic movies such as Blade Runner.
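As flagged in the first bullet, here is a minimal, purely illustrative sketch of a reactive, rule-based responder: it follows fixed rules and never learns from new interactions, which is exactly why it cannot evolve beyond its initial programming. The rules and wording are invented for this example.

```python
# A minimal reactive system: fixed rules, no memory, no learning.
RULES = {
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 working days.",
}

def respond(message: str) -> str:
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply               # always the same answer for the same keyword
    return "Sorry, I can only help with opening hours and refunds."

print(respond("What are your opening hours?"))
```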

These four types of AI showcase the rich diversity of intelligence seen in artificial systems. As AI continues to progress, exploring the capabilities and limitations of each type will contribute to our understanding of machine intelligence and its impact on society.

Machine learning vs deep learning

Central to these advancements are machine learning and deep learning, two subfields of AI that drive many of the innovations we see today. While related, each of these terms has its own distinct meaning. Machine learning enables algorithms to learn from data rather than follow fixed instructions, and it generally takes one of three forms:

  • Supervised learning: The algorithm is trained on a labelled dataset where each example has an input and a corresponding output, learning from this labelled data to make predictions on new, unseen data.
  • Unsupervised learning: Without any predefined labels or outputs, the algorithm learns to discover hidden structures or groupings within the data (a minimal clustering sketch follows this list).
  • Reinforcement learning: Trained to interact with an environment and learn through trial and error, the agent receives feedback in the form of rewards or penalties as it performs actions, allowing it to learn and improve its performance over time.
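As a concrete illustration of the unsupervised case, the sketch below groups unlabelled data points into clusters; it assumes scikit-learn is installed, and the data points are invented for illustration.

```python
# A minimal unsupervised-learning sketch: k-means clustering of unlabelled points.
import numpy as np
from sklearn.cluster import KMeans

# Unlabelled data: two loose groups of 2-D points (illustrative values)
data = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
                 [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.labels_)  # the grouping discovered without any labels being provided
```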

Deep learning is a subset of machine learning focused on training artificial neural networks with multiple layers. Inspired by the structure and function of the human brain, these networks consist of interconnected nodes (neurons) that transmit signals between layers.

By automatically extracting features from raw data through multiple layers of abstraction, these AI algorithms excel at image and speech recognition, natural language processing and many other tasks. Deep learning can handle large-scale datasets with high-dimensional inputs, but it requires significant computational power and extensive training because of the complexity of its models.
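For a sense of what “multiple layers” looks like in practice, here is a minimal sketch of a small deep network, assuming TensorFlow/Keras is installed; the layer sizes and the random training data are illustrative only.

```python
# A minimal deep-learning sketch: a small multi-layer network in Keras.
# Layer sizes and the random training data are placeholders for illustration.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),              # 20 input features
    tf.keras.layers.Dense(64, activation="relu"),    # first hidden layer
    tf.keras.layers.Dense(64, activation="relu"),    # second hidden layer
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output: a single probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Made-up data, just to show the training call
X = np.random.rand(100, 20)
y = np.random.randint(0, 2, size=100)
model.fit(X, y, epochs=3, verbose=0)
```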

Examples of AI technology

So what can AI do? Most people are familiar with it through smart speakers and smartphone assistants like Siri and Alexa, but new AI technology constantly makes our lives easier and more efficient in many other ways.

Here are some examples of AI technology and applications:

  • Healthcare: AI can process and analyse vast amounts of patient data to provide accurate predictions and recommend personalized treatment for better outcomes.
  • Business and manufacturing: Automation brings benefits in every field, from fraud detection, risk assessment and market trend analysis to AI-powered robots on production lines. AI systems can also predict equipment failures before they occur and detect anomalies in network traffic patterns, identifying cybersecurity threats. And in retail, AI offers inventory management, personalized shopping experiences, chatbots to assist customers and analysis of customer preferences, increasing sales through better-targeted adverts.
  • Education: AI includes intelligent tutoring systems that adapt to students’ needs, providing tailored feedback and guidance. AI also offers automated grading, content creation and virtual-reality simulations.
  • Transportation and agriculture: AI optimizes traffic flow, predicts maintenance needs and improves logistics in shipping companies, while in agriculture it can optimize crop yields and reduce resource wastage. Drone technology monitors soil conditions, identifies crop diseases and assesses irrigation requirements, and AI systems can recommend efficient pesticide usage and crop management.
  • Entertainment: By analysing user preferences, AI can recommend movies, music or books. Virtual and augmented reality create immersive entertainment environments, while realistic CGI and AI-driven special effects enhance the visual experience of movies and games.

The growth and impact of generative AI

These examples of artificial intelligence, culminating in the rise of large-scale language models like ChatGPT, mark just the beginning of a remarkable journey. This is the advent of generative AI – an exciting new frontier in artificial intelligence, focused on the creation of new content rather than just the analysis of existing data. Unlike traditional AI systems that are primarily designed for classification or prediction tasks, generative models aim to produce novel outputs that mimic human creativity and imagination. This will enable machines to autonomously produce various types of content, including images, text, music and even entire virtual worlds.
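As a small, hedged illustration of what “creating new data” can mean, the sketch below asks a publicly available language model to continue a prompt; it assumes the Hugging Face transformers library is installed and that the small GPT-2 model can be downloaded on first use.

```python
# A minimal generative-AI sketch: continuing a text prompt with GPT-2.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence will", max_new_tokens=30)
print(result[0]["generated_text"])  # newly generated text, not copied from a single source
```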

However, generative AI is not yet fully polished. Generative models, while powerful, have several downsides, including the potential for creating convincing misinformation (or deep fakes), perpetuating biases and raising concerns about copyright and job displacement. They also pose security threats, challenges in quality control and require substantial computational resources, leading to high costs and environmental impacts.

The truth is, generative AI is still in its learning phase, and initial setbacks in certain software should not overshadow the extraordinary potential of AI technology. Efforts are underway to address the challenges associated with generative models through advancements in detection technology and improvements in training data and algorithms. They also include enhanced security measures, heightened education and awareness, and more efficient use of computational resources.

This multifaceted approach should ensure a more responsible and beneficial use of generative AI, supported by guidelines and regulations.

AI governance and regulations

With increasing integration across various industries, the importance of ensuring the quality and reliability of AI software cannot be overstated. Despite the risks involved, AI still suffers from a lack of regulation. This is where International Standards can help.

Standards, such as those developed by ISO/IEC JTC 1/SC 42 on artificial intelligence, play a pivotal role in addressing the responsible development and use of AI technologies. They help to bridge the gaps in regulation, giving decision makers and policymakers the tools to establish consistent and auditable data and processes.

These standards can bring long-term value to a business, particularly in areas such as environmental reporting. Standards build credibility with stakeholders and, by aligning with existing regulations and governance tools, help ensure that the benefits of artificial intelligence outweigh the associated risks.

History of artificial intelligence: who invented AI?

AI has progressed in leaps and bounds, transforming many aspects of our world. But to truly appreciate its current capabilities, it’s important to understand its origins and evolution. So who created AI? To find out, let’s take a journey through the fascinating history of artificial intelligence.

Today’s AI loosely stems from the 19th-century invention of Charles Babbage’s “difference engine” – the world’s first successful automatic calculator. British code-breaker Alan Turing – a key figure in the Allies’ intelligence efforts during WWII, amongst other feats – can also be seen as a founding father of today’s iterations of AI. In 1950, he proposed the Turing Test, designed to assess a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human.

From that point onward, advancements in AI technology began to accelerate exponentially, spearheaded by such influential figures as John McCarthy, Marvin Minsky, Herbert Simon, Geoffrey Hinton, Yoshua Bengio, Yann LeCun and many others. But it wasn’t all smooth sailing. While AI flourished in its early years as computers became able to store more information, it soon hit a roadblock: they simply couldn’t store enough information or process it fast enough. It wasn’t until the 1980s that AI experienced a renaissance, sparked by an expansion of the algorithmic toolkit and an increase in funding.

To cut a long story short, here are some key events and milestones in the history of artificial intelligence:

  • 1950: Alan Turing publishes the paper “Computing Machinery and Intelligence”, in which he proposes the Turing Test as a way of assessing whether or not a computer counts as intelligent.
  • 1956: A small group of scientists gather for the Dartmouth Summer Research Project on Artificial Intelligence, which is regarded as the birth of this field of research.
  • 1966-1974: This is conventionally known as the “First AI Winter”, a period marked by reduced funding and progress in AI research due to failure to live up to early hype and expectations.
  • 1997: Deep Blue, an IBM chess computer, defeats world champion Garry Kasparov in a highly publicized chess match, demonstrating the formidable potential of AI systems. In the same year, speech recognition software developed by Dragon Systems is implemented on Windows.
  • 2011: In a televised Jeopardy! contest, IBM’s Watson DeepQA computer defeats two of the quiz show’s all-time champions, showcasing the ability of AI systems to understand natural language.
  • 2012: The “deep learning” approach, inspired by the human brain, revolutionizes many AI applications, ushering in the current AI boom.
  • 2016: Developed by a Google subsidiary, the computer program AlphaGo captures the world’s attention when it defeats legendary Go player Lee Sedol. The ancient board game “Go” is one of the most complex ever created.
  • 2017 to date: Rapid advancements in computer vision, natural language processing, robotics and autonomous systems are driven by progress in deep learning and increased computational power.
  • 2023: The rise of large language models, such as GPT-3 and its successors, demonstrates the potential of AI systems to generate human-like text, answer questions and assist with a wide range of tasks.
  • 2024: New breakthroughs in multimodal AI allow systems to process and integrate various types of data (text, images, audio and video) for more comprehensive and intelligent solutions. AI-powered digital assistants are now capable of engaging in natural, contextual conversations as well as assisting with a wide variety of tasks.

The exponential growth of computing power and the Internet has brought with it the concept – and the reality – of machine learning: the development of AI algorithms that can learn from large datasets without being explicitly programmed. Its most powerful form, deep learning, empowers computers to learn through experience. Over the past decade, AI has become integral to everyday life, influencing how we work, communicate and interact with technology.

How will AI change our world?

As it becomes more sophisticated, we can expect to see artificial intelligence transform the way we work and live. In addition to the many applications outlined above, AI will play a crucial role in addressing global challenges and accelerating the search for solutions.

But the potential implications of AI are far-reaching and profound. As AI becomes more powerful and pervasive, we must ensure it is developed and used responsibly, addressing issues of bias, privacy and transparency. For this to be achieved, it is crucial to stay informed and be proactive in shaping its development, to build a future that is both beneficial and empowering for all.