Artificial Intelligence Models
The Basic Idea
These days, tech is everywhere. Society has steadily moved towards increased automation and digitization, a shift accelerated by the COVID-19 pandemic. Work-from-home orders and storefront closures have cemented modern society's place in the digital era.
Increased automation and digitization have been made possible by artificial intelligence. Artificial intelligence is all about getting computers and machines to make decisions the way humans do. By programming computers to mimic human thinking patterns, we enable them to perform aspects of our jobs. Although that can be a scary thought (cue the sci-fi movies of robots taking over the world), AI can make processes much more effective and often more accurate.
There are different models of artificial intelligence.
- Artificial intelligence models are the tools and algorithms used to train computers to process and analyze data – just as humans do.
- Machine learning is a broad category of artificial intelligence model in which computers are taught to think for themselves, developing their own algorithms after processing vast amounts of data.
- Other artificial intelligence models need an algorithm programmed into the computer and learn to adjust that algorithm based on experience.
- Lastly, there are also models that do not have the ability to learn on their own at all – they only function according to the preprogrammed algorithm and need human input.1
For example, Google Maps and other navigation applications use artificial intelligence models to guide us to our destinations. The model learns the layout of roads and buildings from data gathered from other travellers and from data fed into its algorithm. As people use the application day to day, the model incorporates the data gathered from these trips and can give more accurate route information by recognizing changes in traffic flow.2
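The feedback loop described above can be sketched in a few lines. This is a toy illustration, not how Google Maps actually works: the route, the update rule, and all the trip times are invented assumptions, but the idea of blending each new observation into a running estimate is the same.

```python
# Hypothetical sketch: refining a travel-time estimate as new trip data
# arrives. The 30-minute starting guess, the trip times, and the simple
# blending rule are illustrative assumptions only.

def update_estimate(current_estimate, observed_minutes, weight=0.2):
    """Blend a new observation into the running travel-time estimate."""
    return (1 - weight) * current_estimate + weight * observed_minutes

# Start with a naive 30-minute estimate for a route.
estimate = 30.0
# Each trip reported by travellers nudges the estimate toward reality.
for trip in [42, 40, 45, 41, 43]:
    estimate = update_estimate(estimate, trip)

print(round(estimate, 2))  # the estimate has moved from 30 toward the low 40s
```

With every batch of reported trips, the estimate drifts closer to the times travellers actually experience, which is why the directions improve the more the app is used.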
However, a big question remains: do artificial intelligence models enhance humanity and society, or do they run the risk of making humans redundant? Here are two different opinions:
“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
– Stephen Hawking, an English theoretical physicist who discovered that black holes emit radiation and worked to unite general relativity with quantum mechanics.3
“Some people call this artificial intelligence, but the reality is that this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.”
– Ginni Rometty, American business executive who was the first woman to serve as the president and CEO of IBM.3
Theory, meet practice
TDL is an applied research consultancy. In our work, we leverage the insights of diverse fields—from psychology and economics to machine learning and behavioral data science—to sculpt targeted solutions to nuanced problems.
Artificial Intelligence: a branch of computer science in which machines are engineered to mimic human problem-solving and decision-making. It is the opposite of the "natural intelligence" exhibited by humans and animals.4
The Artificial Intelligence Effect: a phenomenon in which people no longer see artificial intelligence for what it is once it becomes a widespread part of daily life. We see it as a simple tool because we are so used to technology completing tasks while hiding the work behind them. For example, you likely don't think of using Google Maps as using an artificial intelligence model!5
Machine Learning: the process of a computer attempting to learn from the past. Data is fed into a machine, passes through an algorithm (an artificial intelligence model), and produces an output. If the computer returns the correct result, the algorithm is affirmed; if it is wrong, the computer adjusts its algorithm accordingly.6
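The "adjust when wrong" loop in this definition can be made concrete with a minimal sketch. The task below (classifying numbers as 5 or greater), the learning rate, and the training data are toy assumptions; the point is only to show predictions being checked and the algorithm's parameters being nudged whenever a prediction is wrong.

```python
# Minimal sketch of the learn-from-mistakes loop: a one-feature
# perceptron learning to classify numbers as >= 5 or not.
# The task, data, and learning rate are illustrative assumptions.

def train(examples, epochs=20, lr=0.1):
    w, b = 0.0, 0.0                        # the "algorithm" starts blank
    for _ in range(epochs):
        for x, label in examples:          # label is 1 if x >= 5, else 0
            prediction = 1 if w * x + b > 0 else 0
            error = label - prediction     # 0 means correct: algorithm affirmed
            w += lr * error * x            # wrong: adjust the algorithm
            b += lr * error
    return w, b

data = [(x, 1 if x >= 5 else 0) for x in range(10)]
w, b = train(data)
correct = sum((1 if w * x + b > 0 else 0) == y for x, y in data)
print(correct, "of", len(data), "classified correctly")  # → 10 of 10
```

After a few passes over the data, the adjustments stop because every prediction comes back correct, which is exactly the "affirm or adjust" cycle described above.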
Neural Networks: many artificial intelligence models are designed with neural networks. Neural networks mimic how neurons in our brain interact with one another — an input triggers a response and creates an output.6
Deep Learning: a machine learning technique in which, rather than starting from a pre-programmed algorithm, the machine develops its own algorithm after encountering vast amounts of data.4
Turing Machine: a hypothetical machine devised by mathematician Alan Turing in 1936. By reducing data to 0s and 1s (simplifying it to its essentials) and manipulating those symbols one at a time, it could simulate any computer algorithm.7
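The idea is simple enough to sketch: a tape of symbols, a read/write head, and a table of rules saying what to write and where to move next. The machine below, which flips every bit on its tape, is a made-up example of such a transition table, not anything from Turing's paper.

```python
# A toy Turing machine in the spirit described above: a tape of symbols,
# a read/write head, and a transition table. It runs until it reaches a
# halt state or the head walks off the tape. The bit-flipping rule table
# is an invented example.

def run(tape, transitions, state="start"):
    tape = list(tape)
    head = 0
    while state != "halt" and 0 <= head < len(tape):
        symbol = tape[head]
        state, write, move = transitions[(state, symbol)]
        tape[head] = write                     # write the new symbol
        head += 1 if move == "R" else -1       # move the head
    return "".join(tape)

# (state, symbol read) -> (next state, symbol to write, move direction)
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run("10110", flip_bits))  # → 01001
```

Swapping in a different transition table yields a different program, which is the sense in which one simple mechanism can simulate any algorithm.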
Supervised Machine Learning Models: artificial intelligence models that require human training. People will tag sets of data, and the model will learn from the way that humans are analyzing the data.8
Unsupervised Machine Learning Models: artificial intelligence models that require no human input. These models are trained by software instead, which identifies patterns so that the computer can mimic them.8
Semi-supervised Machine Learning Models: artificial intelligence models that combine the supervised and unsupervised approaches, using both human training and software training.8
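The contrast between the first two approaches can be sketched on toy one-dimensional data. Everything below is an invented illustration: in the supervised case, people have tagged each value "low" or "high" and the model learns a cutoff from those tags; in the unsupervised case, the model receives no tags and finds the two groups itself.

```python
# Illustrative contrast between supervised and unsupervised learning
# on 1-D toy data. The values, labels, and two-cluster task are all
# invented assumptions for the sketch.

def supervised_threshold(labeled):
    """Supervised: learn a cutoff from (value, label) pairs tagged by people."""
    lows = [x for x, tag in labeled if tag == "low"]
    highs = [x for x, tag in labeled if tag == "high"]
    return (max(lows) + min(highs)) / 2

def unsupervised_two_means(values, steps=10):
    """Unsupervised: a tiny 1-D k-means that splits untagged values into two groups."""
    a, b = min(values), max(values)              # initial cluster centers
    for _ in range(steps):
        group_a = [v for v in values if abs(v - a) <= abs(v - b)]
        group_b = [v for v in values if abs(v - a) > abs(v - b)]
        a = sum(group_a) / len(group_a)          # move centers to group means
        b = sum(group_b) / len(group_b)
    return a, b

labeled = [(1, "low"), (2, "low"), (8, "high"), (9, "high")]
print(supervised_threshold(labeled))             # learned cutoff: 5.0

a, b = unsupervised_two_means([1, 2, 2, 8, 9, 10])
print(a, b)                                      # centers near 1.7 and 9.0
```

A semi-supervised model would sit between the two, using a small tagged set to anchor the groupings found in a much larger untagged set.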
Mathematicians Alonzo Church and Alan Turing were the first to use computation as a device to conduct formal reasoning. They developed the Church-Turing thesis in 1936, which suggests that any real-world computation can be translated into an equivalent computation involving a Turing machine. The thesis was developed shortly after Turing developed the Turing machine, and opened the realm of possibilities for computer learning. People began to believe that it might be possible to build an electronic brain.9
Since access to computers wasn't widespread in 1936, it took a few years for the "electronic brain" to develop from idea into theory. The Turing machine was only hypothetical, but in 1943, neuroscientist Warren Sturgis McCulloch and logician Walter Harry Pitts formalized it and created the first computational theory of mind and brain. In their paper titled "A Logical Calculus of the Ideas Immanent in Nervous Activity," they explained how neural mechanisms in computers could realize mental functions.10
However, artificial intelligence wasn't a reality until after 1949, because early computers could not store commands: they could execute them, but could not retain an artificial intelligence model. Computing was also prohibitively expensive, so no one had yet turned the theory into practice. The term "artificial intelligence" wasn't even coined until 1955, and it was in that same year that computer scientists and cognitive psychologists Allen Newell, Cliff Shaw, and Herbert Simon created a proof of concept for artificial intelligence. They developed the Logic Theorist, a program that used artificial intelligence to mimic the problem-solving skills of a human.11
From that moment on, many became interested in developing artificial intelligence models. In 1997, American computer scientist Tom Mitchell gave a more refined definition of machine learning than had previously been expressed: "a computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E." 12
Let's illustrate this with the Google Maps example. If you want a computer to predict traffic patterns (task T), you would feed an artificial intelligence model data about past traffic patterns (experience E), and once it has learned successfully, it will do better at predicting future traffic patterns (performance measure P).12
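Mitchell's definition can be checked directly in a toy version of the traffic example. All the numbers below are invented: the route's "true" time, the logged trips standing in for experience E, and the mean-based predictor are assumptions chosen only to show performance P improving as E grows.

```python
# Hedged sketch of Mitchell's T/E/P framing using the traffic example:
# task T is predicting a route's travel time, experience E is a growing
# log of observed trips, and performance P is the prediction error.
# All numbers are made up for illustration.

TRUE_TIME = 40.0                                   # the route really takes 40 minutes
trips = [38, 44, 39, 41, 37, 43, 40, 42, 39, 41]   # observed trips (experience E)

def predict(experience):
    """Predict travel time as the mean of all trips seen so far."""
    return sum(experience) / len(experience) if experience else 30.0

def performance(prediction):
    """Performance measure P: absolute error against the true time."""
    return abs(prediction - TRUE_TIME)

little_e = performance(predict(trips[:2]))   # error after only 2 trips
lots_of_e = performance(predict(trips))      # error after all 10 trips
print(little_e, ">", lots_of_e)              # more experience E, better performance P
```

The program "learns" in Mitchell's sense because its error on T, measured by P, shrinks as E accumulates.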
There are hundreds of practical, important uses of artificial intelligence models. AI models help make the analysis and processing of data more efficient and increase automation. Both deep learning and artificial intelligence models are revolutionizing society.
Initially, artificial intelligence models were reactive machines that couldn't store any memory, which means they couldn't learn from experience. These days, most artificial intelligence systems can store memory, which means machines are constantly getting better at analyzing data. While deep learning machines learn entirely from experience, even models that start from a pre-programmed algorithm continue to refine it through experience.13 These machines are making processes more efficient, reducing the need for human intervention (and therefore reducing human error), and can help organizations understand how to improve their functions.14
There are advantages to both machine learning models and artificial intelligence models that rely on pre-programmed algorithms rather than learning solely from experience. Models that use pre-programmed algorithms can quickly process data and deliver the desired results; they don't need additional time to "learn" what to do, only to refine their processes, and therefore require simpler, cheaper machinery. Machine learning, although more expensive, can process more complex data and is largely self-sufficient, requiring much less human input.
There are quite a few ethical controversies when it comes to artificial intelligence models.
One is that many artificial intelligence models are used to surveil our behavior, whether through our digital footprint or facial recognition, and we don't know exactly how that data is being used or stored.
Since machine learning models learn by themselves, one ethical concern is the lack of transparency around how artificial intelligence tools reach their conclusions.15 For models that don't rely on machine learning, biases can be embedded in the algorithms programmed into them. For example, there has been much controversy surrounding facial recognition after it became apparent that the technology was significantly less accurate at recognizing the faces of Black people. The models were created by majority-white teams, who were themselves less accurate at distinguishing between people of color; their bias became embedded within the artificial intelligence models.16
There is also the question of whether artificial intelligence models can have morality included as part of their programming. If an autonomous (self-driving) car finds itself in a situation where a jaywalker will be hit if it doesn’t slam down on the brakes, it must decide between the safety of the people in the car and the safety of the pedestrian — how can a computer make that decision?15
Some people also think that artificial intelligence is eroding our humanity and what is natural. Phenomena like "designer babies," where people can choose what genes a child will have, raise debates about whether such technology takes away from what is natural. These innovations require us to consider the moral and ethical aspects of artificial intelligence. As stated by the Executive Chairman of the World Economic Forum, Klaus Schwab, "We must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction." 3
Where are artificial intelligence models used?
Artificial intelligence models can detect cancer in patients. By analyzing X-ray and CT images, they are able to detect abnormalities in the human body related to cancer. Since these models now rely on machine learning, they are becoming more accurate and can recognize even unusual cancers, because they have learned through experience.17
Ever wonder how our phones predict what we're about to say next? Our phones suggest the next word in a text message or predict the end of a sentence in an email. They also offer suggestions when they think we've misspelled a word. All of this is possible through artificial intelligence models: our phones analyze our previous communication (and the communication patterns of the general population) to predict what we want to say next.2
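A stripped-down version of this idea is easy to sketch: count which word most often follows each word in past messages, then suggest the most frequent follower. The message history below is a made-up stand-in for a user's texts, and real keyboards use far richer models, but the "learn from previous communication" principle is the same.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word most often follows each
# word in past messages, then suggest the most frequent follower.
# The message history is an invented stand-in for a user's texts.

history = [
    "see you soon",
    "see you tomorrow",
    "see you soon",
    "talk to you soon",
]

followers = defaultdict(Counter)
for message in history:
    words = message.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1          # tally each observed follower

def suggest(word):
    """Suggest the word that most often followed `word` in past messages."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("you"))   # → soon
```

The more messages the model sees, the better its tallies reflect how this particular user actually writes, which is why suggestions feel increasingly personal over time.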
Chatbots & Digital Assistants
Chatbots have taken over many tasks once handled by customer service agents, and digital assistants like Alexa and Siri are now commonplace. Chatbots can efficiently answer frequently asked questions by analyzing the customer's question and matching it to past experience. Digital assistants listen to your voice, process and analyze the data, and perform the desired function.2
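The matching step can be sketched very simply: score each known question by how many words it shares with the customer's question and reply with the best match. The FAQ entries below are invented for illustration, and production chatbots use far more sophisticated language models, but the match-against-past-questions idea carries over.

```python
# Minimal sketch of FAQ matching: score each known question by word
# overlap with the customer's question and answer with the best match.
# The FAQ entries and fallback reply are invented for illustration.

faq = {
    "what are your opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "how do i reset my password": "Click 'Forgot password' on the login page.",
    "where is my order": "You can track your order from your account page.",
}

def answer(question):
    words = set(question.lower().replace("?", "").split())
    def overlap(known):
        return len(words & set(known.split()))      # shared-word count
    best = max(faq, key=overlap)
    return faq[best] if overlap(best) > 0 else "Let me connect you to an agent."

print(answer("When are your opening hours?"))
```

Note the fallback: when nothing overlaps, a real system would escalate to a human agent rather than guess.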
We've all heard the conspiracy theories that our phones are listening to us, but our phones store so much data about us that they don't need to listen in order to populate our social media with targeted ads. Based on your previous searches, the searches of people in your network, and demographic markers, artificial intelligence models predict what products you are most likely to buy and show them to you in your feeds.2
Artificial intelligence is revolutionizing many fields, including behavioral science. Since "artificial intelligence" is a bit of a buzzword and encompasses many variations of computer learning, this article helps break down what exactly artificial intelligence is and how it is used both positively and negatively. Our contributor Julian Hazell explores whether artificial intelligence really gives us greater insight into human behavior, or whether we program it to reinforce our pre-existing beliefs.
These days, data is one of the most valuable resources (sometimes more valuable than money), and it governs much of our lives. Data determines what ads we are shown and what products we buy, and it shapes our likes and dislikes; all of our choices are somewhat guided by it. In this article, our contributors Mark Esposito, Danny Goh, Josh Entsminger, and Terence Tse question whether we should be comfortable living in a society where our behavior is shaped by AI, or more importantly, by the people controlling the AI. They ask: who is being held accountable for ensuring that the ways AI is used and data is shared are ethical?
- What is an AI model? Here’s what you need to know. (2021, July 6). viso.ai. https://viso.ai/deep-learning/ml-ai-models/
- Reeves, S. (2020, August 10). 8 Helpful Everyday Examples of Artificial Intelligence. IoT For All. https://www.iotforall.com/8-helpful-everyday-examples-of-artificial-intelligence
- Marr, B. (2017, July 25). 28 Best Quotes About Artificial Intelligence. Forbes. https://www.forbes.com/sites/bernardmarr/2017/07/25/28-best-quotes-about-artificial-intelligence/?sh=115b1a8a4a6f
- Deep learning. (2021, October 1). The Decision Lab. https://thedecisionlab.com/reference-guide/computer-science/deep-learning/
- What is the AI effect, and is it set to happen again? (2020, December 3). ThinkAutomation. https://www.thinkautomation.com/bots-and-ai/what-is-the-ai-effect-and-is-it-set-to-happen-again/
- Machine learning. (2021, October 7). The Decision Lab. https://thedecisionlab.com/reference-guide/computer-science/machine-learning/
- Mullins, R. (2012). Raspberry Pi. Department of Computer Science and Technology. https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/turing-machine/one.html
- Soha, G. (May 24). What is an AI model? Reveal Brainspace. Retrieved November 1, 2021, from https://resource.revealdata.com/en/blog/what-is-an-ai-model
- Rustagi, D. (2020, May 20). Church’s Thesis for Turing Machine. GeeksforGeeks. https://www.geeksforgeeks.org/churchs-thesis-for-turing-machine/
- Piccinini, G. (2004). The first computational theory of mind and brain: A close look at Mcculloch and Pitts’s “Logical calculus of ideas immanent in nervous activity”. Synthese, 141(2), 175-215. https://doi.org/10.1023/b:synt.0000043018.52445.3e
- Anyoha, R. (2017, August 28). The History of Artificial Intelligence. Science in the News. https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
- McCrea, N. (2014, August 8). An Introduction to Machine Learning Theory and Its Applications: A Visual Tutorial with Examples. Toptal Engineering Blog. https://www.toptal.com/machine-learning/machine-learning-theory-an-introductory-primer
- Press, G. (2016, December 30). A Very Short History Of Artificial Intelligence (AI). Forbes. https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/?sh=398b8256fba2
- McKinsey. (2018, April 25). The real-world potential and limitations of artificial intelligence. McKinsey Podcast [Audio podcast episode]. https://www.mckinsey.com/featured-insights/artificial-intelligence/the-real-world-potential-and-limitations-of-artificial-intelligence
- Artificial Intelligence: examples of ethical dilemmas. (2020, October 2). UNESCO. Retrieved November 1, 2021, from https://en.unesco.org/artificial-intelligence/ethics/cases
- Najibi, A. (2020, October 24). Racial Discrimination in Face Recognition Technology. Science in the News. https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/
- Cruz, J. A., & Wishart, D. S. (2006). Applications of machine learning in cancer prediction and prognosis. Cancer Informatics, 2, 117693510600200. https://doi.org/10.1177/117693510600200030