What is Symbolic Artificial Intelligence?

Kahneman describes human thinking as having two components, System 1 and System 2. System 1 is the fast, intuitive kind used for pattern recognition, while System 2 is far better suited for planning, deduction, and deliberative thinking. In this view, deep learning best models the first kind of thinking while symbolic reasoning best models the second kind, and both are needed.

Unlike symbolic AI, neural networks have no notion of symbols or hierarchical representations of knowledge. This limitation makes it very hard to apply neural networks to tasks that require logic and reasoning, such as science and high-school math. But the benefits of deep learning and neural networks are not without trade-offs. Deep learning has several challenges and disadvantages in comparison to symbolic AI. Notably, deep learning algorithms are opaque, and figuring out how they work perplexes even their creators. The two biggest flaws of deep learning are its lack of model interpretability (i.e. why did my model make that prediction?) and the large amount of data that deep neural networks require in order to learn.

Symbols also serve to transfer learning in another sense, not from one human to another, but from one situation to another, over the course of a single individual’s life. That is, a symbol offers a level of abstraction above the concrete and granular details of our sensory experience, an abstraction that allows us to transfer what we’ve learned in one place to a problem we may encounter somewhere else. In a certain sense, every abstract category, like chair, asserts an analogy between all the disparate objects called chairs, and we transfer our knowledge about one chair to another with the help of the symbol.

LISP is the second oldest programming language after FORTRAN and was created in 1958 by John McCarthy. LISP provided the first read-eval-print loop to support rapid program development. Program tracing, stepping, and breakpoints were also provided, along with the ability to change values or functions and continue from breakpoints or errors.
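
To make the read-eval-print idea concrete, here is a minimal sketch in Python rather than LISP; the repl function and its behavior are illustrative only, and real LISP REPLs add tracing, stepping, and resumable breakpoints on top of this basic loop.

```python
# A minimal read-eval-print loop (REPL) sketch: read an expression,
# evaluate it, print the result, and repeat until the user quits.
def repl(prompt="repl> "):
    environment = {}  # names defined by the user persist between inputs
    while True:
        try:
            line = input(prompt)
        except EOFError:          # Ctrl-D ends the session
            break
        if line.strip() in ("quit", "exit"):
            break
        try:
            result = eval(line, environment)   # the "eval" step (expressions only)
            print(result)                      # the "print" step
        except Exception as err:               # report errors and keep looping
            print(f"error: {err}")

if __name__ == "__main__":
    repl()
```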

You create a rule-based program that takes new images as inputs, compares the pixels to the original cat image, and responds by saying whether your cat is in those images. Symbolic artificial intelligence showed early progress at the dawn of AI and computing. You can easily visualize the logic of rule-based programs, communicate them, and troubleshoot them.
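
A rough sketch of such a pixel-comparison rule might look like the following Python; the synthetic images, the matches_reference helper, and the 90% threshold are all made-up placeholders rather than a recommended design.

```python
import numpy as np

def matches_reference(image, reference, threshold=0.90):
    """Rule: declare a match if at least `threshold` of the pixels
    are identical to the stored reference picture."""
    if image.shape != reference.shape:
        return False                       # this rule only handles same-sized images
    agreement = np.mean(image == reference)
    return agreement >= threshold

# Toy usage with synthetic 8x8 grayscale "images".
reference = np.random.randint(0, 256, size=(8, 8))
same_photo = reference.copy()
new_angle = np.random.randint(0, 256, size=(8, 8))   # a different shot

print(matches_reference(same_photo, reference))  # True: exact copy
print(matches_reference(new_angle, reference))   # almost certainly False
```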

If we put a pot of water on a hot stove, we expect it to heat and possibly boil over, even though we may not know its temperature, its boiling point, or other details, such as the atmospheric pressure. Time periods and titles here are drawn from Henry Kautz’s 2020 AAAI Robert S. Engelmore Memorial Lecture[17] and the longer Wikipedia article on the History of AI, with dates and titles differing slightly for increased clarity.

The conjecture behind the DSN model is that any type of real-world objects sharing enough common features is mapped into human brains as a symbol. Those symbols are connected by links representing composition, correlation, causality, or other relationships between them, forming a deep, hierarchical symbolic network structure. Powered by such a structure, the DSN model is expected to learn like humans do, thanks to several unique characteristics: it can learn symbols from the world and construct deep symbolic networks automatically, by exploiting the fact that real-world objects are naturally separated by singularities, and it is symbolic, with the capacity for causal deduction and generalization.

This method involves using symbols to represent objects and their relationships, enabling machines to simulate human reasoning and decision-making processes. The advantage of neural networks, by contrast, is that they can deal with messy and unstructured data. Instead of manually laboring through the rules of detecting cat pixels, you can train a deep learning algorithm on many pictures of cats.

In LISP, programs were themselves data structures that other programs could operate on, allowing the easy definition of higher-level languages. The rule-based nature of Symbolic AI aligns with the increasing focus on ethical AI and compliance, essential in AI Research and AI Applications. Symbolic AI’s role in industrial automation highlights its practical application in AI Research and AI Applications, where precise rule-based processes are essential.

In pursuit of efficient and robust generalization, we introduce the Schema Network, an object-oriented generative physics simulator capable of disentangling multiple causes of events and reasoning backward through causes to achieve goals. The richly structured architecture of the Schema Network can learn the dynamics of an environment directly from data. We compare Schema Networks with Asynchronous Advantage Actor-Critic and Progressive Networks on a suite of Breakout variations, reporting results on training efficiency and zero-shot generalization, consistently demonstrating faster, more robust learning and better transfer. We argue that generalizing from limited data and learning causal relationships are essential abilities on the path toward generally intelligent systems. Parsing, tokenizing, spelling correction, part-of-speech tagging, and noun and verb phrase chunking are all aspects of natural language processing long handled by symbolic AI, but they have since been improved by deep learning approaches.

Basic computations of the network include predicting high-level objects and their properties from low-level objects and binding/aggregating relevant objects together. These computations operate at a more fundamental level than convolutions, capturing convolution as a special case while being significantly more general. All operations are executed in an input-driven fashion, so sparsity and dynamic computation per sample are naturally supported, complementing recent popular ideas of dynamic networks and possibly enabling new types of hardware acceleration. We show experimentally on CIFAR-10 that it can perform flexible visual processing, rivaling the performance of a ConvNet without using any convolution. Furthermore, it can generalize to novel rotations of images that it was not trained on. We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns simply by looking at images and reading paired questions and answers.

Critiques from outside of the field were primarily from philosophers, on intellectual grounds, but also from funding agencies, especially during the two AI winters. Multiple different approaches to represent knowledge and then reason with those representations have been investigated. Below is a quick overview of approaches to knowledge representation and automated reasoning.

Rule-Based AI, a cornerstone of Symbolic AI, involves creating AI systems that apply predefined rules. This concept is fundamental in AI Research Labs and universities, contributing to significant Development Milestones in AI. At the heart of Symbolic AI lie key concepts such as Logic Programming, Knowledge Representation, and Rule-Based AI. These elements work together to form the building blocks of Symbolic AI systems. Symbolic Artificial Intelligence, often shortened to symbolic AI, is like a really smart robot that follows a bunch of rules to solve problems.

As Galileo put it, the universe is written in the language of mathematics, and its characters are triangles, circles, and other geometric objects. Thomas Hobbes, sometimes called the grandfather of AI, held that thinking is the manipulation of symbols and reasoning is computation. Hinton, Yann LeCun, and Andrew Ng have all suggested that work on unsupervised learning (learning from unlabeled data) will lead to our next breakthroughs. A similar problem, called the qualification problem, occurs in trying to enumerate the preconditions for an action to succeed.

When deep learning reemerged in 2012, it was with a kind of take-no-prisoners attitude that has characterized most of the last decade. Geoffrey Hinton, for example, gave a talk at an AI workshop at Stanford comparing symbols to aether, one of science’s greatest mistakes. Natural language processing focuses on treating language as data to perform tasks such as identifying topics without necessarily understanding the intended meaning.

The second AI summer: knowledge is power, 1978–1987

Currently, Python, a multi-paradigm programming language, is the most popular programming language, partly due to its extensive package library that supports data science, natural language processing, and deep learning. Python includes a read-eval-print loop, functional elements such as higher-order functions, and object-oriented programming that includes metaclasses. Symbolic artificial intelligence is very convenient for settings where the rules are clear-cut and you can easily obtain input and transform it into symbols. In fact, rule-based systems still account for most computer programs today, including those used to create deep learning applications. MIT’s Sum-Product Probabilistic Language (SPPL) is a probabilistic programming system. Probabilistic programming is an emerging field at the intersection of programming languages and artificial intelligence that aims to make AI systems much easier to develop, with early successes in computer vision, common-sense data cleaning, and automated data modeling.

Neural Networks, compared to Symbolic AI, excel in handling ambiguous data, a key area in AI Research and applications involving complex datasets. One solution is to take pictures of your cat from different angles and create new rules for your application to compare each input against all those images. Even if you take a million pictures of your cat, you still won’t account for every possible case.

A second flaw in symbolic reasoning is that the computer itself doesn’t know what the symbols mean; i.e. they are not necessarily linked to any other representations of the world in a non-symbolic way. Again, this stands in contrast to neural nets, which can link symbols to vectorized representations of the data, which are in turn just translations of raw sensory data. So the main challenge, when we think about GOFAI and neural nets, is how to ground symbols, or relate them to other forms of meaning that would allow computers to map the changing raw sensations of the world to symbols and then reason about them.
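
As a toy illustration of grounding, the sketch below links each symbol to a vector standing in for sensory data and maps a new observation to the nearest symbol; the symbols, the three-dimensional vectors, and the ground helper are invented for the example.

```python
import numpy as np

# Each symbol is grounded in a vector summarizing sensory data
# (the vectors here are invented for illustration).
grounding = {
    "cat":   np.array([0.9, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.3, 0.1]),
    "chair": np.array([0.0, 0.2, 0.9]),
}

def ground(observation):
    """Return the symbol whose grounding vector is closest to the observation."""
    return min(grounding, key=lambda s: np.linalg.norm(grounding[s] - observation))

# A new raw observation (e.g., produced by a neural network encoder).
observation = np.array([0.85, 0.15, 0.05])
print(ground(observation))  # -> "cat"
```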

When you provide it with a new image, it will return the probability that it contains a cat. Implementations of symbolic reasoning are called rules engines or expert systems or knowledge graphs. Google made a big one, too, which is what provides the information in the top box under your query when you search for something easy like the capital of Germany. These systems are essentially piles of nested if-then statements drawing conclusions about entities (human-readable concepts) and their relations (expressed in well understood semantics like X is-a man or X lives-in Acapulco). Each approach—symbolic, connectionist, and behavior-based—has advantages, but has been criticized by the other approaches. Symbolic AI has been criticized as disembodied, liable to the qualification problem, and poor in handling the perceptual problems where deep learning excels.
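
A drastically simplified sketch of such a knowledge store, in the same "X is-a man / X lives-in Acapulco" style, might look like this in Python; the facts set and the query helper are hypothetical, not how any production knowledge graph is implemented.

```python
# A tiny knowledge graph: facts are (subject, relation, object) triples.
facts = {
    ("X", "is-a", "man"),
    ("X", "lives-in", "Acapulco"),
    ("Berlin", "capital-of", "Germany"),
}

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the given pattern (None acts as a wildcard)."""
    return [
        (s, r, o)
        for (s, r, o) in facts
        if (subject is None or s == subject)
        and (relation is None or r == relation)
        and (obj is None or o == obj)
    ]

print(query(relation="capital-of", obj="Germany"))  # [('Berlin', 'capital-of', 'Germany')]
print(query(subject="X"))                           # everything known about X
```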

Early work covered both applications of formal reasoning emphasizing first-order logic and attempts to handle common-sense reasoning in a less formal manner. The future includes integrating Symbolic AI with Machine Learning, enhancing AI algorithms and applications, a key area in AI Research and Development Milestones in AI. In Symbolic AI, Knowledge Representation is essential for storing and manipulating information. It is crucial in areas like AI History and development, where representing complex AI Research and AI Applications accurately is vital.

Neural Networks excel in learning from data, handling ambiguity, and flexibility, while Symbolic AI offers greater explainability and functions effectively with less data. Logic Programming, a vital concept in Symbolic AI, integrates Logic Systems and AI algorithms. It represents problems using relations, rules, and facts, providing a foundation for AI reasoning and decision-making, a core aspect of Cognitive Computing. If I tell you that I saw a cat up in a tree, your mind will quickly conjure an image. Error from approximate probabilistic inference is tolerable in many AI applications.

Deep learning and neural networks excel at exactly the tasks that symbolic AI struggles with. They have created a revolution in computer vision applications such as facial recognition and cancer detection. SPPL is different from most probabilistic programming languages, as SPPL only allows users to write probabilistic programs for which it can automatically deliver exact probabilistic inference results. SPPL also makes it possible for users to check how fast inference will be, and therefore avoid writing slow programs. Already, this technology is finding its way into such complex tasks as fraud analysis, supply chain optimization, and sociological research. Samuel’s Checkers Program (1952): Arthur Samuel’s goal was to explore how to make a computer learn.

Think of it like playing a game where you have to follow certain rules to win. In Symbolic AI, we teach the computer lots of rules and how to use them to figure things out, just like you learn rules in school to solve math problems. This way of using rules in AI has been around for a long time and is really important for understanding how computers can be smart. René Descartes, a mathematician and philosopher, regarded thoughts themselves as symbolic representations and perception as an internal process.

Logic Programming and Symbolic AI:

As opposed to pure neural network–based models, the hybrid AI can learn new tasks with less data and is explainable. And unlike symbolic-only models, NSCL doesn’t struggle to analyze the content of images. Symbolic AI is a reasoning-oriented field that relies on classical logic (usually monotonic) and assumes that logic makes machines intelligent. When it comes to implementing symbolic AI, one of the oldest and still most popular logic programming languages, Prolog, comes in handy. Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages it is declarative: a program is a set of facts and rules rather than a sequence of instructions. Also known as rule-based or logic-based AI, symbolic AI represents a foundational approach in the field of artificial intelligence.
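
Real Prolog performs unification and backward chaining over clauses; the sketch below only mimics the flavor of its facts-and-rules style in Python, with an invented family example and a hand-coded grandparent rule.

```python
# Facts: parent(alice, bob), parent(bob, carol), parent(bob, dave).
parent = {("alice", "bob"), ("bob", "carol"), ("bob", "dave")}

# Rule (Prolog-style): grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
def grandparent(x, z):
    """Answer the query grandparent(x, z) by searching for a linking Y."""
    return any((x, y) in parent and (y, z) in parent
               for y in {child for (_, child) in parent})

print(grandparent("alice", "carol"))  # True
print(grandparent("alice", "dave"))   # True
print(grandparent("bob", "alice"))    # False
```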

Symbolic AI’s growing role in healthcare reflects the integration of AI Research findings into practical AI Applications. Improvements in Knowledge Representation will boost Symbolic AI’s modeling capabilities, a focus in AI History and AI Research Labs. Expert Systems, a significant application of Symbolic AI, demonstrate its effectiveness in healthcare, a field where AI Applications are increasingly prominent. Contrasting Symbolic AI with Neural Networks offers insights into the diverse approaches within AI. The justice system, banks, and private companies use algorithms to make decisions that have profound impacts on people’s lives. Unfortunately, those algorithms are sometimes biased — disproportionately impacting people of color as well as individuals in lower income classes when they apply for loans or jobs, or even when courts decide what bail should be set while a person awaits trial.

A change in the lighting conditions or the background of the image will change the pixel values and cause the program to fail. Many of the concepts and tools you find in computer science are the results of these efforts. Symbolic AI programs are based on creating explicit structures and behavior rules. We use symbols all the time to define things (cat, car, airplane, etc.) and people (teacher, police, salesperson). Symbols can represent abstract concepts (bank transaction) or things that don’t physically exist (web page, blog post, etc.). Symbols can be organized into hierarchies (a car is made of doors, windows, tires, seats, etc.).

Integration with Machine Learning:

Problems were discovered both with regard to enumerating the preconditions for an action to succeed and with providing axioms for what did not change after an action was performed. Cognitive architectures such as ACT-R may have additional capabilities, such as the ability to compile frequently used knowledge into higher-level chunks. Our chemist was Carl Djerassi, inventor of the chemical behind the birth control pill, and also one of the world’s most respected mass spectrometrists. We began to add to their knowledge, inventing knowledge of engineering as we went along. Symbolic AI-driven chatbots exemplify the application of AI algorithms in customer service, showcasing the integration of AI Research findings into real-world AI Applications.

But it is undesirable to have inference errors corrupting results in socially impactful applications of AI, such as automated decision-making, and especially in fairness analysis. While this may be unnerving to some, it must be remembered that symbolic AI still only works with numbers, just in a different way. By creating a more human-like thinking machine, organizations will be able to democratize the technology across the workforce so it can be applied to the real-world situations we face every day. A more flexible kind of problem solving occurs when the system reasons about what to do next rather than simply choosing one of the available actions. This kind of meta-level reasoning is used in Soar and in the BB1 blackboard architecture.

Our researchers are working to usher in a new era of AI where machines can learn more like the way humans do, by connecting words with images and mastering abstract concepts. Semantic networks, conceptual graphs, frames, and logic are all approaches to modeling knowledge such as domain knowledge, problem-solving knowledge, and the semantic meaning of language. DOLCE is an example of an upper ontology that can be used for any domain while WordNet is a lexical resource that can also be viewed as an ontology. YAGO incorporates WordNet as part of its ontology, to align facts extracted from Wikipedia with WordNet synsets. The Disease Ontology is an example of a medical ontology currently being used. The key AI programming language in the US during the last symbolic AI boom period was LISP.
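
WordNet can be queried programmatically; the short example below assumes the nltk package is installed and its WordNet corpus has been downloaded (for instance via nltk.download('wordnet')), and simply looks up one sense of "cat".

```python
# Querying the WordNet lexical ontology (requires: pip install nltk
# and a one-time nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

cat = wn.synsets("cat", pos=wn.NOUN)[0]      # the first noun sense of "cat"
print(cat.definition())                      # gloss of that synset
print([h.name() for h in cat.hypernyms()])   # more general concepts (e.g., feline)
print(cat.lemma_names())                     # synonyms grouped in this synset
```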

However, Transformer models are opaque and do not yet produce human-interpretable semantic representations for sentences and documents. Instead, they produce task-specific vectors where the meaning of the vector components is opaque. For other AI programming languages see this list of programming languages for artificial intelligence.

The early pioneers of AI believed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Therefore, symbolic AI took center stage and became the focus of research projects. Being able to communicate in symbols is one of the main things that make us intelligent. Therefore, symbols have also played a crucial role in the creation of artificial intelligence. Contrary to pre-existing Cartesian philosophy, the empiricists maintained that we are born without innate ideas and that knowledge is instead determined only by experience derived from sense perception. Children can manipulate symbols and do addition and subtraction, but they don’t really understand what they are doing. Hobbes was influenced by Galileo: just as Galileo thought that geometry could represent motion, so, following Descartes, geometry can be expressed as algebra, which is the study of mathematical symbols and the rules for manipulating those symbols.

Probabilistic programming languages make it much easier for programmers to define probabilistic models and carry out probabilistic inference — that is, work backward to infer probable explanations for observed data. The deep learning hope—seemingly grounded not so much in science, but in a sort of historical grudge—is that intelligent behavior will emerge purely from the confluence of massive data and deep learning. We introduce the Deep Symbolic Network (DSN) model, which aims at becoming the white-box version of Deep Neural Networks (DNN). The DSN model provides a simple, universal yet powerful structure, similar to DNN, to represent any knowledge of the world, which is transparent to humans.
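
To illustrate "working backward to infer probable explanations for observed data", here is a minimal Bayes-rule calculation in plain Python; the coin hypotheses and all probabilities are invented for the example.

```python
# Infer which coin was used, given that we observed heads.
priors = {"fair_coin": 0.5, "biased_coin": 0.5}            # P(hypothesis)
likelihood_heads = {"fair_coin": 0.5, "biased_coin": 0.9}  # P(heads | hypothesis)

# Bayes' rule: P(h | heads) = P(heads | h) * P(h) / P(heads)
evidence = sum(likelihood_heads[h] * priors[h] for h in priors)
posterior = {h: likelihood_heads[h] * priors[h] / evidence for h in priors}

print(posterior)  # {'fair_coin': ~0.357, 'biased_coin': ~0.643}
```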

Cyc has attempted to capture useful common-sense knowledge and has “micro-theories” to handle particular kinds of domain-specific reasoning. Forward chaining inference engines are the most common, and are seen in CLIPS and OPS5. Backward chaining occurs in Prolog, where a more limited logical representation, Horn clauses, is used. Symbolic AI offers clear advantages, including its ability to handle complex logic systems and provide explainable AI decisions. In legal advisory, Symbolic AI applies its rule-based approach, reflecting the importance of Knowledge Representation and Rule-Based AI in practical applications.

Production rules connect symbols in a relationship similar to an If-Then statement. The expert system processes the rules to make deductions and to determine what additional information it needs, i.e. what questions to ask, using human-readable symbols. For example, OPS5, CLIPS and their successors Jess and Drools operate in this fashion. There have been several efforts to create complicated symbolic AI systems that encompass the multitudes of rules of certain domains. Called expert systems, these symbolic AI models use hardcoded knowledge and rules to tackle complicated tasks such as medical diagnosis. But they require a huge amount of effort by domain experts and software engineers and only work in very narrow use cases.
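
A stripped-down sketch of this production-rule style in Python might look as follows; the facts, the two rules, and the forward_chain helper are toy inventions, nothing like the rule bases of OPS5 or CLIPS.

```python
# Forward chaining over production rules: each rule is
# (set of required facts, fact to add when the rule fires).
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "recommend_doctor_visit"),
]

def forward_chain(facts):
    """Repeatedly fire rules whose conditions hold until nothing new is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "high_risk_patient"}))
# -> includes 'flu_suspected' and 'recommend_doctor_visit'
```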

A separate inference engine processes rules and adds, deletes, or modifies a knowledge store. Looking ahead, Symbolic AI’s role in the broader AI landscape remains significant. Ongoing research and development milestones in AI, particularly in integrating Symbolic AI with other AI algorithms like neural networks, continue to expand its capabilities and applications. Maybe in the future, we’ll invent AI technologies that can both reason and learn. But for the moment, symbolic AI is the leading method to deal with problems that require logical thinking and knowledge representation.

The symbols and the links between them are transparent to us, so we can tell what the model has learned and what it has not, which is key to the security of an AI system. It is also more amenable to unsupervised learning than a DNN. We present the details of the model and the algorithm powering its automatic learning ability, and describe its usefulness in different use cases. The purpose of this paper is to generate broad interest to develop it within an open source project centered on the Deep Symbolic Network (DSN) model towards the development of general AI. The recent adaptation of deep neural network-based methods to reinforcement learning and planning domains has yielded remarkable progress on individual tasks.

In symbolic AI, discourse representation theory and first-order logic have been used to represent sentence meanings. Latent semantic analysis (LSA) and explicit semantic analysis also provided vector representations of documents. In the latter case, vector components are interpretable as concepts named by Wikipedia articles. One such project is the Neuro-Symbolic Concept Learner (NSCL), a hybrid AI system developed by the MIT-IBM Watson AI Lab. NSCL uses both rule-based programs and neural networks to solve visual question-answering problems.
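
As a small illustration of the LSA idea, the sketch below builds TF-IDF vectors for a few invented documents and reduces them to a two-dimensional latent space; it assumes scikit-learn is installed and is not the original LSA implementation.

```python
# Latent semantic analysis in miniature: TF-IDF followed by truncated SVD.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "the cat sat on the mat",
    "a cat and a kitten played",
    "stocks fell as markets reacted",
    "investors sold shares in the market",
]

tfidf = TfidfVectorizer().fit_transform(docs)        # term-document matrix
lsa = TruncatedSVD(n_components=2, random_state=0)   # 2 latent "topics"
doc_vectors = lsa.fit_transform(tfidf)

print(doc_vectors.round(2))  # each row: a document in the latent space
```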

Neural Networks’ dependency on extensive data sets differs from Symbolic AI’s effective function with limited data, a factor crucial in AI Research Labs and AI Applications. This will only work if you provide an exact copy of the original image to your program. For instance, if you take a picture of your cat from a somewhat different angle, the program will fail.

Knowledge representation and reasoning

The General Problem Solver (GPS) cast planning as problem solving and used means-ends analysis to create plans. Graphplan takes a least-commitment approach to planning, rather than sequentially choosing actions from an initial state, working forwards, or from a goal state, working backwards. Satplan is an approach to planning where a planning problem is reduced to a Boolean satisfiability problem. Marvin Minsky first proposed frames as a way of interpreting common visual situations, such as an office, and Roger Schank extended this idea to scripts for common routines, such as dining out.

An infinite number of pathological conditions can be imagined, e.g., a banana in a tailpipe could prevent a car from operating correctly. Similarly, Allen’s temporal interval algebra is a simplification of reasoning about time and Region Connection Calculus is a simplification of reasoning about spatial relationships. Japan championed Prolog for its Fifth Generation Project, intending to build special hardware for high performance. Similarly, LISP machines were built to run LISP, but as the second AI boom turned to bust these companies could not compete with new workstations that could now run LISP or Prolog natively at comparable speeds.
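
A hedged sketch of a few of Allen's interval relations is shown below, treating intervals as (start, end) pairs; only some of the thirteen relations are handled, and the allen_relation helper is illustrative rather than a complete algebra.

```python
# A few of Allen's interval relations between intervals a = (start, end) and b.
def allen_relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return "before"
    if a2 == b1:
        return "meets"
    if a1 == b1 and a2 == b2:
        return "equal"
    if b1 < a1 and a2 < b2:
        return "during"
    if a1 < b1 < a2 < b2:
        return "overlaps"
    return "other"  # remaining relations (starts, finishes, inverses) omitted

breakfast = (8, 9)
meeting = (9, 11)
coffee = (10, 10.5)

print(allen_relation(breakfast, meeting))  # meets
print(allen_relation(coffee, meeting))     # during
```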

Knowledge Representation:

Advantages of multi-agent systems include the ability to divide work among the agents and to increase fault tolerance when agents are lost. Research problems include how agents reach consensus, distributed problem solving, multi-agent learning, multi-agent planning, and distributed constraint optimization. Constraint solvers perform a more limited kind of inference than first-order logic.

Our model builds an object-based scene representation and translates sentences into executable, symbolic programs. To bridge the learning of the two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation. Analogous to human concept learning, given the parsed program, the perception module learns visual concepts based on the language description of the object being referred to. Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences.

  • By the mid-1960s neither useful natural language translation systems nor autonomous tanks had been created, and a dramatic backlash set in.
  • As computational capacities grow, the way we digitize and process our analog reality can also expand, until we are juggling billion-parameter tensors instead of seven-character strings.
  • To that end, we propose Object-Oriented Deep Learning, a novel computational paradigm of deep learning that adopts interpretable “objects/symbols” as a basic representational atom instead of N-dimensional tensors (as in traditional “feature-oriented” deep learning).

Ultimately this will allow organizations to apply multiple forms of AI to solve virtually any and all situations they face in the digital realm – essentially using one AI to overcome the deficiencies of another. A certain set of structural rules are innate to humans, independent of sensory experience. With more linguistic stimuli received in the course of psychological development, children then adopt specific syntactic rules that conform to Universal Grammar.

For example, experimental symbolic machine learning systems explored the ability to take high-level natural language advice and to interpret it into domain-specific actionable rules. The work in AI started by projects like the General Problem Solver and other rule-based reasoning systems like Logic Theorist became the foundation for almost 40 years of research. Symbolic AI (or Classical AI) is the branch of artificial intelligence research that concerns itself with attempting to explicitly represent human knowledge in a declarative form (i.e. facts and rules). If such an approach is to be successful in producing human-like intelligence then it is necessary to translate often implicit or procedural knowledge possessed by humans into an explicit form using symbols and rules for their manipulation.

In turn, connectionist AI has been criticized as poorly suited for deliberative step-by-step problem solving, incorporating knowledge, and handling planning. Finally, Nouvelle AI excels in reactive and real-world robotics domains but has been criticized for difficulties in incorporating learning and knowledge. A key component of the system architecture for all expert systems is the knowledge base, which stores facts and rules for problem-solving.[51] The simplest approach for an expert system knowledge base is simply a collection or network of production rules.

The technology actually dates back to the 1950s, says expert.ai’s Luca Scagliarini, but was considered old-fashioned by the 1990s when demand for procedural knowledge of sensory and motor processes was all the rage. Now that AI is tasked with higher-order systems and data management, the capability to engage in logical thinking and knowledge representation is cool again. In contrast, a multi-agent system consists of multiple agents that communicate amongst themselves with some inter-agent communication language such as Knowledge Query and Manipulation Language (KQML).

They can simplify sets of spatiotemporal constraints, such as those for RCC or Temporal Algebra, along with solving other kinds of puzzle problems, such as Wordle, Sudoku, cryptarithmetic problems, and so on. Constraint logic programming can be used to solve scheduling problems, for example with constraint handling rules (CHR). Knowledge-based systems have an explicit knowledge base, typically of rules, to enhance reusability across domains by separating procedural code and domain knowledge.
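
For instance, the classic SEND + MORE = MONEY cryptarithm can be cracked with a plain brute-force search over digit assignments, as in the Python sketch below; a real constraint solver or CHR program would prune the search far more cleverly.

```python
# Brute-force solution of the cryptarithm SEND + MORE = MONEY:
# assign a distinct digit to each letter so the sum holds.
# (Pure brute force: may take a few seconds to run.)
from itertools import permutations

letters = "SENDMORY"  # the eight distinct letters involved

def value(word, digits):
    return int("".join(str(digits[ch]) for ch in word))

for perm in permutations(range(10), len(letters)):
    digits = dict(zip(letters, perm))
    if digits["S"] == 0 or digits["M"] == 0:      # no leading zeros
        continue
    if value("SEND", digits) + value("MORE", digits) == value("MONEY", digits):
        print(value("SEND", digits), "+", value("MORE", digits),
              "=", value("MONEY", digits))        # 9567 + 1085 = 10652
        break
```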

For example, deep neural networks require very large datasets to work effectively, entailing that they are slow to learn even when such datasets are available. Moreover, they lack the ability to reason on an abstract level, which makes it difficult to implement high-level cognitive functions such as transfer learning, analogical reasoning, and hypothesis-based reasoning. Finally, their operation is largely opaque to humans, rendering them unsuitable for domains in which verifiability is important. In this paper, we propose an end-to-end reinforcement learning architecture comprising a neural back end and a symbolic front end with the potential to overcome each of these shortcomings.

Symbolic AI has numerous applications, from Cognitive Computing in healthcare to AI Research in academia. Its ability to process complex rules and logic makes it ideal for fields requiring precision and explainability, such as legal and financial domains. MIT researchers have developed a new artificial intelligence programming language that can assess the fairness of algorithms more exactly, and more quickly, than available alternatives. Read more about our work in neuro-symbolic AI from the MIT-IBM Watson AI Lab.