Deep Learning

The Basic Idea

As progress is made in the fields of technology, engineering, and science, we are beginning to build machines that can more accurately and successfully mimic human intelligence. Artificial intelligence continues to evolve, bringing with it both excitement and fear.

Artificial intelligence is the overarching field in which computers are taught to mimic human thinking patterns, and it can be broken down into different techniques. Its sub-fields include machine learning and deep learning.

Deep learning is a technique in which machines are programmed similarly to the human brain. Just as the human brain learns from experience, machines using deep learning develop an algorithm from vast amounts of data. For example, voice-controlled personal assistants – like Siri on your phone – develop an algorithm by learning from examples. In order to respond to human requests appropriately, the algorithms are fed examples of human voices making requests.1

Incorporating general intelligence, bodily intelligence, emotional intelligence, spiritual intelligence, political intelligence and social intelligence in AI systems are part of the future deep learning research.


– Amit Ray, Indian author and pioneer of the compassionate artificial intelligence movement, in his book Compassionate Artificial Intelligence2


Key Terms

Artificial Intelligence: a technique that allows a machine to replicate human behavior. It is the science and engineering of machines that mimic human problem solving and decision-making.3

Machine Learning: a technique to achieve artificial intelligence by using algorithms and data. It helped lay the groundwork for deep learning.

Deep Learning: a type of artificial intelligence in which machines learn using layered networks modeled on the structure and function of the human brain.

Connectionism: a theory of artificial intelligence that tries to understand how people learn and how memory functions by examining and mapping the brain at a neural level.4

Inhibitory inputs: an input that has an overriding effect on the decision being made. Regardless of any other inputs, if this one is present, a particular outcome will occur. For example, an input like tickling might take precedence over all other inputs and automatically make you laugh. They are called inhibitory because they inhibit the other inputs from mattering.5

Excitatory inputs: inputs that do not, by themselves, necessarily cause an output to fire, but that, when combined with others, can cause an output to occur.5

Activation Function: the formula, sitting between a machine’s inputs and its output, that is used to determine the output.5

Turing Machine: a mathematical model that reduces a computing device to its essentials.6 When it comes to deep learning, this means reducing the brain to its essential logical structure of neurons firing.

Perceptrons: a type of artificial neural network investigated by Frank Rosenblatt.

Error back-propagation: an algorithm, rooted in Rosenblatt’s work, built on the insight that deep learning models must incorporate both feedforward and feedback. When a computer is learning, inputs that do not result in the desired output are still useful, as they provide a form of error feedback.7
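
To make that feedforward-and-feedback idea concrete, here is a minimal sketch of error back-propagation on a tiny network, written in the modern gradient-descent style rather than as Rosenblatt’s original error-correction procedure; the network size, learning rate, and XOR task are illustrative choices, not anything prescribed by the sources above.

```python
# A minimal back-propagation sketch on a tiny two-layer network (illustrative only).
# Requires numpy. XOR is chosen because a single-layer perceptron cannot solve it.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: the XOR pattern.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and biases for one hidden layer of 4 units.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

for step in range(10000):
    # Feedforward: inputs flow through the hidden layer to an output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Feedback: propagate the error backwards to estimate how much each
    # weight contributed to the mistake.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)

    # Nudge every weight in the direction that reduces the error.
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0)

# After training, the outputs are typically close to the targets [0, 1, 1, 0].
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))
```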

History

Machines with human-like capabilities have been a subject of interest and discussion since science fiction brought them to life in the mid-20th century. Academically, deep learning was first linked to research conducted by Walter Pitts and Warren McCulloch. Walter Pitts was a logician who worked in the neuroscientific field, and Warren McCulloch was a neurophysiologist and cybernetician (a specialist in the science of communication in machines).8

In 1943, Pitts and McCulloch created a computer model based on the neural networks of the brain. The pair moved from the University of Chicago to the Massachusetts Institute of Technology to work in what is now thought to have been the first department of cognitive science. They designed a machine that mimicked the function of a biological neuron. A neuron receives a signal through its dendrites, processes it in the soma, then passes it on as an output through a cable-like structure called an axon. The axon connects to other neurons, producing some kind of response. In the same way, our sense organs, such as our eyes, take in an input from the environment, the brain processes it, and the body produces an output.5

For example, when watching a stand-up comedian, our brain takes in the input – the joke – processes it – determines whether it’s funny – and then develops a response – a laugh. Pitts and McCulloch developed a machine based on this brain process in their paper “A Logical Calculus of Ideas Immanent in Nervous Activity,” and their model became known as the M-P neuron.5

The M-P neuron was revolutionary as one of the first examples of deep learning. It was a simple model: its inputs and output were binary, so each input was either on or off and the neuron either fired or it didn’t. In 1958, Frank Rosenblatt, an American psychologist, put forward a new model: the classical perceptron model, built around the artificial neuron. His model introduced weights, so that some inputs could have a greater impact on the outcome than others. Inputs could either be inhibitory, overriding all other inputs, or excitatory.5
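
A rough sketch can help make the difference concrete. The two toy functions below are purely illustrative (the names, thresholds, and weights are invented): the first mimics an M-P-style neuron, where binary inputs all count equally and a single inhibitory input vetoes firing, while the second mimics a Rosenblatt-style neuron, where weighted inputs can matter more or less.

```python
def mp_neuron(inputs, threshold, inhibitory=None):
    """McCulloch-Pitts-style neuron: inputs are 0 or 1 and carry equal
    weight; a single active inhibitory input prevents firing entirely."""
    if inhibitory and any(inhibitory):
        return 0
    return 1 if sum(inputs) >= threshold else 0

def weighted_neuron(inputs, weights, threshold):
    """Rosenblatt-style artificial neuron: each input has its own weight,
    so some inputs influence the outcome more than others."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With a threshold of 2, an M-P neuron acts like a logical AND of two excitatory inputs.
print(mp_neuron([1, 1], threshold=2))                  # 1: fires
print(mp_neuron([1, 0], threshold=2))                  # 0: does not fire
print(mp_neuron([1, 1], threshold=2, inhibitory=[1]))  # 0: inhibited regardless of other inputs

# A weighted neuron can treat its first input as far more important than its second.
print(weighted_neuron([1, 0], weights=[0.8, 0.3], threshold=0.5))  # 1: fires on the heavy input alone
```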

Over the years, logicians, neuropsychologists, and psychologists have continued to refine the model and improve deep learning systems, allowing the algorithms of machines and computers to become more and more similar to the human mind. According to Terrence Sejnowski, a neuroscientist who played an important role in the founding of deep learning, the moment deep learning became popular came at an artificial intelligence conference in 2012. There, computer scientist Geoffrey Hinton showed that, using a large dataset called ImageNet, with 10,000 categories and 10 million images, deep learning could classify images roughly 20% better than conventional machine learning.9

People

Walter Pitts

A logician who began his work in neuroscience as a teenager. He was a pioneer of cybernetic theory, as he sought to use logic to map out the functionality of the human brain. From an understanding of the fundamental operations of logic, such as conjunctions (and), disjunctions (or), and negation (not), he believed he could understand the way the human brain made decisions as well. He thought the set-up of neurons in our brain was binary – either cells fire and produce a response, or they don’t. Through this theory, Pitts worked with McCulloch to develop the M-P neuron, the first model of deep learning.10

Warren McCulloch

A neurophysiologist who approached the field through connectionism. Together with Pitts, McCulloch developed the first model of deep learning by understanding the brain as a computing machine. McCulloch described his process as turning the human brain into a Turing machine.4

Frank Rosenblatt

An American psychologist who was working at the Cornell Aeronautical Laboratory when he made major breakthroughs in the field of artificial intelligence. Through computer simulations and detailed mathematical analysis, he investigated perceptrons and realized that the brain was a multilayered network, rather than a simple two-layered, binary one. He also developed the error back-propagation algorithm of deep learning.11

Terrence Sejnowski

An American computational neuroscientist and pioneer of the study of learning algorithms. In the 1980s, he challenged the logic- and symbol-based approach to machine learning that artificial intelligence was running on at the time. His book, The Deep Learning Revolution, takes a look at how deep learning went from being a small academic interest to a disruptive technology embedded in all aspects of our lives.12

Geoffrey Hinton

A British-Canadian cognitive psychologist and computer scientist, best known for his work on the artificial neural networks used in deep learning. In 2009, a competition called ImageNet began in the computer science world, challenging teams to build a computer that could recognize 1,000 categories of objects. For three years, no team had been successful, but in 2012, Hinton and his team developed a deep learning system that, trained on millions of labeled images, could recognize the objects far more accurately than previous approaches.13

Consequences

Deep learning is all about understanding the brain as a neural network, which makes it possible for us to create machines that mimic the same algorithms as our brains.

Prior to deep learning, computer scientists attempted to achieve artificial intelligence through machine learning. With machine learning, humans must program systems with algorithms that specify what to check for when analyzing data and what the appropriate output is. For example, if you want a machine to differentiate between apples and oranges, you would have to input the different features of apples and oranges. You would develop an algorithm for the machine learning system, such as ‘if red and/or has a stem and/or a shiny surface, then apple’. That means machine learning can only be used to find pre-programmed features and only helps analyze data with a specific, narrow focus.14
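
A caricature of that hand-programmed approach might look like the snippet below; the features and rules are invented purely for illustration, and anything that falls outside them simply goes unrecognized.

```python
# A caricature of hand-programmed, rule-based classification
# (features and rules invented for illustration).
def classify_fruit(color, has_stem, shiny_surface):
    """Label a fruit using fixed, pre-programmed rules."""
    if color == "red" and has_stem and shiny_surface:
        return "apple"
    if color == "orange" and not has_stem:
        return "orange"
    return "unknown"  # anything outside the rules is unrecognized

print(classify_fruit("red", True, True))    # apple
print(classify_fruit("green", True, True))  # unknown: an atypical apple slips through the rules
```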

When it comes to deep learning, we don’t have to program the features into an algorithm: the system develops its own algorithm after analyzing vast amounts of data. The machine learns by processing data through its neural network, without human intervention.14
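
A minimal sketch of that learning-from-examples approach, using invented fruit measurements and a small off-the-shelf neural network from scikit-learn, might look like this; we never write the apple-versus-orange rules ourselves, we only supply labelled examples.

```python
# A learning-from-examples sketch with invented data. Requires numpy and scikit-learn.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Invented training data: [redness, roundness, stem length] for each fruit.
apples  = rng.normal(loc=[0.9, 0.8, 0.6], scale=0.1, size=(200, 3))
oranges = rng.normal(loc=[0.5, 0.9, 0.1], scale=0.1, size=(200, 3))
X = np.vstack([apples, oranges])
y = ["apple"] * 200 + ["orange"] * 200

# A small multilayer neural network learns the mapping from the data itself.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)

# A paler-than-usual apple is still likely to be recognized, because the network
# has learned from the variation in the examples rather than from fixed rules.
print(model.predict([[0.65, 0.85, 0.5]]))
```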

If a computer were programmed according to machine learning and it encountered an unusual-looking apple, it might not recognize it as an apple, because it has been programmed to look and sort based only on three characteristics: color, stem, and shiny surface. A deep learning system, which has created its own model after encountering hundreds of apples, is more likely to identify an apple with varying or atypical characteristics.14 Deep learning also performs better the more data it encounters, whereas machine learning performs worse because it isn’t as attuned to variation in the data.15

Deep learning has revolutionized what computers can do. Facial recognition and object detection from photographs or videos are possible because of deep learning. Autonomous vehicles – with no driver – are also possible because of deep learning. Siri, Alexa, and Hey Google are all only possible because of deep learning too! The possibilities are endless.15

Controversies

Deep learning allows machines to process more data and decide on an appropriate response even when encountering variation, which is thought to make it ‘smarter’ than a pre-programmed algorithm. However, since deep learning means that machines learn on their own without human intervention, learning the appropriate output for each piece of data requires a lot of practice. For deep learning to actually perform better than machine learning, computers must encounter lots of data to form neural paths between an input and a desired output.15

Since deep learning requires vast amounts of data, it can also be very expensive. The complex data models require state-of-the-art hardware, which comes at a high cost.15 However, as we make advances in technology, computers and other artificial intelligence machines are getting less expensive, whereas human labor is getting more expensive. Since deep learning requires no human programming or intervention, it could be argued that in the long run, it is actually cheaper.9

Medical Imaging Analysis

To find out if something is wrong through medical imaging, computers need to be able to analyze images and identify whether there is something abnormal in them. However, before deep learning, these computers had to be programmed with what to look for, which gave them a very narrowly focused task. They could be programmed to find traces of a specific disease, but if there was an abnormality unrelated to that disease, they would categorize the medical image as normal. Radiologists would have to know exactly what to look for in order to know what algorithm to program.16

That’s where deep learning comes in and can be particularly useful for analyzing chest x-rays. There are a myriad of chest x-ray abnormalities, which, according to Google’s artificial intelligence researchers, “makes it impractical to detect every possible condition by building multiple separate systems, each of which detects one or more pre-specified conditions.” 16

That’s why Google’s artificial intelligence researchers built a deep learning system that can detect whether a chest x-ray is abnormal or not.16 While it might not tell us exactly what is wrong, it lets radiologists know which cases need their attention and which can be excluded, which speeds up the clinical process. No longer does test after test have to be run; one test is enough to raise a red flag and prompt further investigation.

So how did the researchers build a reliable deep learning system that could detect dangerous abnormalities, but didn’t pick up on benign ones? Deep learning engineers often face the challenge that their machines become sensitive to irrelevant factors. For example, one deep learning system engineered to detect skin cancer began to pick up on ruler marks on the skin and categorize those images as cancerous.16

The answer is in big data – the more data, the better the deep learning system. The researchers built the system using 250,000 x-rays from five hospitals in India, and then evaluated whether it was accurate when it came to scans from other countries as well by cross-referencing with x-rays from China and the U.S.16
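
Google has not, of course, published its system as a few lines of code, but a heavily simplified sketch of the general recipe, a small convolutional network trained to score each scan as normal or abnormal, might look like the following; the data, network size, and threshold here are placeholders rather than anything from the actual study.

```python
# A heavily simplified, hypothetical sketch of a binary abnormal/normal x-ray
# classifier; not Google's actual system. Requires numpy and TensorFlow.
import numpy as np
import tensorflow as tf

# Placeholder data: in practice this would be hundreds of thousands of
# labelled chest x-rays, as in the study described above.
x_train = np.random.rand(32, 128, 128, 1).astype("float32")  # grayscale scans
y_train = np.random.randint(0, 2, size=(32,))                # 0 = normal, 1 = abnormal

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability that the scan is abnormal
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=8)

# A score above a chosen threshold would flag the scan for a radiologist's attention.
print(model.predict(x_train[:1]))
```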

While Google’s research doesn’t yet make radiologists obsolete, their system is definitely a useful tool for efficiency and accuracy.

Related TDL Content

Machine Learning and Personalized Interventions: David Halpern

If you’re curious about machine learning, the earlier artificial intelligence technique from which deep learning grew, this podcast episode is for you. David Halpern, CEO of the Behavioral Insights Team, discusses the future of behavioral science, how machine learning ties into it, and how it allows for personalized interventions, which he believes are where the field is headed.

Algorithms for Simpler Decision-Making (1/2): The Case for Cognitive Prosthetics

Deep learning outsources human cognitive functions to computers and machines. It both mimics and supports our own decision-making capabilities by making data analysis easier. Algorithmic decision-making, in which analytics are deployed to make better, data-driven decisions, is seemingly everywhere. In this two-part story, our contributor Jason Burton explores how deep learning and algorithmic decision-making might be further used as a tool that aids or extends our cognitive capabilities.

Sources

  1. Marr, B. (2018, October 1). What Is Deep Learning AI? A Simple Guide With 8 Practical Examples. Forbes. https://www.forbes.com/sites/bernardmarr/2018/10/01/what-is-deep-learning-ai-a-simple-guide-with-8-practical-examples/?sh=1e1073888d4b
  2. Deep learning Quotes. (n.d.). Goodreads. Retrieved September 22, 2021, from https://www.goodreads.com/quotes/tag/deep-learning
  3. What is deep learning? (2020, May 1). IBM. https://www.ibm.com/cloud/learn/deep-learning
  4. Copeland, B. J. (2020, January 9). Connectionism. Encyclopedia Britannica. https://www.britannica.com/technology/connectionism-artificial-intelligence
  5. Chandra, A. L. (2018, November 7). History of the Perceptron. Medium. Retrieved September 22, 2021, from https://towardsdatascience.com/mcculloch-pitts-model-5fdf65ac5dd1
  6. Turing machine. (2020, April 3). Encyclopedia Britannica. https://www.britannica.com/technology/Turing-machine
  7. Whittington, J. C., & Bogacz, R. (2019). Theories of error back-propagation in the brain. Trends in Cognitive Sciences, 23(3), 235-250. https://doi.org/10.1016/j.tics.2018.12.005
  8. Foote, K. D. (2017, January 31). A Brief History of Deep Learning. DATAVERSITY. https://www.dataversity.net/brief-history-deep-learning/
  9. Chen, A. (2018, October 16). A pioneering scientist explains ‘deep learning’. The Verge. https://www.theverge.com/2018/10/16/17985168/deep-learning-revolution-terrence-sejnowski-artificial-intelligence-technology
  10. Gefter, A. (2015, February 5). The man who tried to redeem the world with logic. Nautilus. https://nautil.us/issue/21/information/the-man-who-tried-to-redeem-the-world-with-logic
  11. Copeland, B. J. (2015, November 8). Perceptrons. Encyclopedia Britannica. https://www.britannica.com/technology/perceptrons
  12. The Deep Learning Revolution. (n.d.). The MIT Press. Retrieved September 22, 2021, from https://mitpress.mit.edu/books/deep-learning-revolution
  13. Hao, K. (2020, November 3). AI pioneer Geoff Hinton: “Deep learning is going to be able to do everything”. MIT Technology Review. https://www.technologyreview.com/2020/11/03/1011616/ai-godfather-geoffrey-hinton-deep-learning-will-do-everything/
  14. Simplilearn. (2019, June 3). Deep Learning In 5 Minutes | What Is Deep Learning? | Deep Learning Explained Simply | Simplilearn [Video]. YouTube. https://www.youtube.com/watch?v=6M5VXKLf4D4
  15. Advantages of Deep Learning | disadvantages of Deep Learning. (n.d.). RF Wireless World. Retrieved September 22, 2021, from https://www.rfwireless-world.com/Terminology/Advantages-and-Disadvantages-of-Deep-Learning.html
  16. Dickson, B. (2021, September 15). Google’s new deep learning system can give a boost to radiologists. VentureBeat. https://venturebeat.com/2021/09/16/googles-new-deep-learning-system-can-give-a-boost-to-radiologists/
