Today, artificial intelligence is mostly about artificial neural networks and deep learning. But this is not how it always was. In fact, for most of its six-decade history, the field was dominated by symbolic artificial intelligence, also known as “classical AI,” “rule-based AI,” and “good old-fashioned AI.”

Symbolic AI involves the explicit embedding of human knowledge and behavior rules into computer programs. The practice showed a lot of promise in the early decades of AI research. But in recent years, as neural networks, also known as connectionist AI, gained traction, symbolic AI has fallen by the wayside.

The role of symbols in artificial intelligence

Symbols are things we use to represent other things. Symbols play a vital role in the human thought and cognition process. If I tell you that I saw a cat up in a tree, your mind will quickly conjure an image.

We use symbols all the time to define things (cat, car, airplane, etc.) and people (teacher, police, salesperson). Symbols can represent abstract concepts (bank transaction) or things that don’t physically exist (web page, blog post, etc.). They can also describe actions (running) or states (inactive). Symbols can be organized into hierarchies (a car is made of doors, windows, tires, seats, etc.). They can also be used to describe other symbols (a cat with fluffy ears, a red carpet, etc.).
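To make this concrete, here is a toy sketch (hypothetical examples, in Python) of how such symbols and their relationships can be written down as plain data structures:

# Hypothetical toy examples: symbols and their relationships as plain data.
cat = {"kind": "cat", "ears": "fluffy"}   # a symbol described by another symbol
car = {"kind": "car",
       "parts": ["door", "window", "tire", "seat"]}  # a hierarchy of symbols
running = ("action", "running")           # a symbol for an action
inactive = ("state", "inactive")          # a symbol for a state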

Being able to communicate in symbols is one of the main things that make us intelligent. Therefore, symbols have also played a crucial role in the creation of artificial intelligence.

The early pioneers of AI believed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Therefore, symbolic AI took center stage and became the focus of research projects. Scientists developed tools to define and manipulate symbols.

Many of the concepts and tools you find in computer science are the results of these efforts. Symbolic AI programs are based on creating explicit structures and behavior rules.

An example of symbolic AI tooling is object-oriented programming. OOP languages allow you to define classes, specify their properties, and arrange them in hierarchies. You can create instances of these classes (called objects) and manipulate their properties. Class instances can also perform actions, also known as functions, methods, or procedures. Each method executes a series of rule-based instructions that might read and change the properties of the current and other objects.
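As a minimal sketch (the class names are hypothetical, in Python), here is how those ideas map onto OOP: a class hierarchy, properties, and rule-based methods that read and change object properties:

class Animal:
    def __init__(self, name):
        self.name = name         # a property
        self.awake = True

    def sleep(self):
        # a rule-based instruction that changes the object's own state
        self.awake = False

class Cat(Animal):               # hierarchy: a Cat is a kind of Animal
    def __init__(self, name, ears="pointy"):
        super().__init__(name)
        self.ears = ears         # a symbol describing another symbol

    def greet(self, other):
        # a rule that reads another object's properties
        return self.name + " meows at " + other.name

tom = Cat("Tom", ears="fluffy")  # an instance (object) of the class
tom.sleep()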

Using OOP, you can create extensive and complex symbolic AI programs that perform various tasks.

The benefits and limits of symbolic AI

Symbolic artificial intelligence showed early progress at the dawn of AI and computing. You can easily visualize the logic of rule-based programs, communicate them, and troubleshoot them.


Symbolic artificial intelligence is very convenient for settings where the rules are clear cut and you can easily obtain input and transform it into symbols. In fact, rule-based systems still account for most computer programs today, including those used to create deep learning applications.

But symbolic AI starts to break down when you must deal with the messiness of the world. For instance, consider computer vision, the science of enabling computers to make sense of the content of images and video. Say you have a picture of your cat and want to create a program that can detect images that contain your cat. You create a rule-based program that takes new images as inputs, compares their pixels to the original cat image, and responds by saying whether your cat is in those images.

This will only work as long as you provide an exact copy of the original image to your program. A slightly different picture of your cat will yield a negative answer. For instance, if you take a picture of your cat from a somewhat different angle, the program will fail.

One solution is to take pictures of your cat from different angles and create new rules for your application to compare each input against all those images. Even if you take a million pictures of your cat, you still won’t account for every possible case. A change in the lighting conditions or the background of the image will change the pixel values and cause the program to fail. You’ll need millions of other pictures, and rules for those.
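A minimal sketch of this brittle pixel-matching approach (assuming images stored as NumPy arrays; the function name is hypothetical) shows why it cannot hold up:

import numpy as np

def looks_like_my_cat(image, reference_images, tolerance=0):
    # Declare a match only if the pixels are (nearly) identical to one
    # of the stored reference pictures.
    for ref in reference_images:
        if image.shape != ref.shape:
            continue
        diff = np.abs(image.astype(int) - ref.astype(int)).mean()
        if diff <= tolerance:
            return True
    return False

A new angle, different lighting, or a changed background alters the pixel values, so the comparison fails, and every additional reference picture is just one more hand-written rule.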

And what if you wanted to create a program that could detect any cat? How many rules would you need to create for that?

The cat example might sound silly, but these are the kinds of problems that symbolic AI programs have always struggled with. You can’t define rules for the messy data that exists in the real world. For instance, how can you define the rules for a self-driving car to detect all the different pedestrians it might face?

Also, some tasks can’t be translated into direct rules, including speech recognition and natural language processing.

There have been several efforts to create complicated symbolic AI systems that encompass the multitudes of rules of certain domains. Called expert systems, these symbolic AI models use hardcoded knowledge and rules to tackle complicated tasks such as medical diagnosis. But they require a huge amount of effort from domain experts and software engineers and only work in very narrow use cases. As soon as you generalize the problem, there will be an explosion of new rules to add (remember the cat detection problem?), which will require more human labor. As some AI scientists point out, symbolic AI systems don’t scale.
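A toy illustration of the expert-system idea (the rules and symptoms are hypothetical, not a real medical system) might look like this:

RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
    ({"sneezing", "itchy eyes"}, "possible allergy"),
]

def diagnose(symptoms):
    # Fire every rule whose conditions are all present in the observed facts.
    return [conclusion for conditions, conclusion in RULES
            if conditions <= symptoms]

print(diagnose({"fever", "cough", "headache"}))  # ['possible flu']

Every new disease, symptom, or exception means another hand-written rule, which is exactly the scaling problem described above.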

Neural networks vs symbolic AI

Neural networks are almost as old as symbolic AI, but they were largely dismissed because they were inefficient and required compute resources that weren’t available at the time. In the past decade, thanks to the wide availability of data and processing power, deep learning has gained popularity and has pushed past symbolic AI systems.

The advantage of neural networks is that they can deal with messy and unstructured data. Take the cat detector example. Instead of manually crafting rules for detecting cat pixels, you can train a deep learning algorithm on many pictures of cats. The neural network then develops a statistical model of cat images. When you provide it with a new image, it will return the probability that it contains a cat.
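As a minimal sketch of that workflow (using Keras; the training variables are hypothetical, and images are assumed to be resized to 64×64 RGB):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Rescaling(1.0 / 255, input_shape=(64, 64, 3)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # probability of "cat"
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# train_images / train_labels are hypothetical labeled data:
# model.fit(train_images, train_labels, epochs=10)
# model.predict(new_image[None, ...]) returns the cat probability

No pixel rules are written by hand; the network learns its statistical model of “cat” from the examples.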

Deep learning and neural networks excel at exactly the tasks that symbolic AI struggles with. They have created a revolution in computer vision applications such as facial recognition and cancer detection. Deep learning has also driven advances in language-related tasks.

Deep neural networks are also very suitable for reinforcement learning, AI models that develop their behavior through extensive trial and error. This is the kind of AI that masters complicated games such as Go, StarCraft, and Dota.
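The trial-and-error loop can be sketched in a few lines of tabular Q-learning (a deliberately tiny toy; game-playing systems like the ones above use deep networks instead of a lookup table):

import random

q = {}                               # (state, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(state, actions):
    # Mostly exploit the best-known action, sometimes explore at random.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def update(state, action, reward, next_state, next_actions):
    # Nudge the value estimate toward the observed reward plus the best
    # estimated value of the next state.
    best_next = max((q.get((next_state, a), 0.0) for a in next_actions),
                    default=0.0)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)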

But the benefits of deep learning and neural networks do not come without tradeoffs. Deep learning has several deep challenges and disadvantages in comparison to symbolic AI. Notably, deep learning algorithms are opaque, and figuring out how they work perplexes even their creators. And it’s very hard to interpret and troubleshoot their inner workings.

Neural networks are also very data-hungry. And unlike symbolic AI, neural networks have no notion of symbols or hierarchical representation of knowledge. This limitation makes it very hard to apply neural networks to tasks that require logic and reasoning, such as science and high-school math.

The current state of symbolic AI

Some believe that symbolic AI is dead. But this belief couldn’t be further from the truth. In fact, rule-based AI systems are still very important in today’s applications. Many leading scientists believe that symbolic reasoning will continue to remain a very important component of artificial intelligence.

There are now several efforts to combine neural networks and symbolic AI. One such project is the Neuro-Symbolic Concept Learner (NSCL), a hybrid AI system developed by the MIT-IBM Watson AI Lab. NSCL uses both rule-based programs and neural networks to solve visual question-answering problems. As opposed to pure neural network–based models, the hybrid AI can learn new tasks with less data and is explainable. And unlike symbolic-only models, NSCL doesn’t struggle to analyze the content of images.
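The general pattern (an illustrative sketch of the neuro-symbolic idea, not NSCL’s actual code) is to let a neural network turn raw pixels into symbols and let rule-based code reason over them:

def perceive(image):
    # Stand-in for a trained neural network that detects objects and
    # attributes; it returns hardcoded symbols here for illustration.
    return [{"kind": "cube", "color": "red"},
            {"kind": "sphere", "color": "blue"}]

def answer(question, symbols):
    # A rule-based program reasons over the extracted symbols.
    if question == "how many red objects?":
        return sum(1 for s in symbols if s["color"] == "red")
    raise ValueError("unsupported question")

print(answer("how many red objects?", perceive(image=None)))  # 1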

Maybe in the future, we’ll invent AI technologies that can both reason and learn. But for the moment, symbolic AI remains the leading method for problems that require logical thinking and knowledge representation.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
