Algorithm

The Basic Idea

In a world increasingly dominated by technology, we hear the word “algorithm” everywhere. What actually is an algorithm? In simple terms, an algorithm is a sequence of concrete instructions that tell an operator what to do. Think of a flowchart that moves through a series of YES and NO steps, guiding someone to a specific outcome.

A common analogy used to explain an algorithm is that of a recipe. Imagine a typical recipe for chocolate chip cookies. The recipe would include raw ingredients that go through a sequence of commands in order to create a final product. In this case, the “operator” is the baker, who will read the recipe, execute the instructions, and maybe even update and refine the recipe, depending on how the cookies turn out.

We can apply this conceptual logic to a computer program, where we replace the baker with a computer. The program will also read, execute, and refine the instructions, taking raw ingredients, or inputs, through a sequence of commands to create a final outcome or solve a problem. This process is occurring all around us. Your journey to this page was aided by a series of algorithms. Perhaps you googled “algorithm” or someone shared this link on social media; although it may seem like you found yourself here of your own accord, there’s a hidden layer of algorithmic computer code that made for a seamless process.
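
To make the analogy concrete, here is a minimal, purely illustrative sketch in Python of an algorithm as a fixed sequence of steps that turns inputs (ingredients) into an output (cookies). The function name and step details are hypothetical, invented only to mirror the recipe example.

```python
# A toy "recipe" algorithm: inputs go through a fixed sequence of steps to produce an output.
def bake_cookies(ingredients):
    """Follow a fixed sequence of instructions and return a result."""
    dough = "dough (" + ", ".join(ingredients) + ")"                     # step 1: combine the inputs
    cookies = [f"cookie {i + 1} made from {dough}" for i in range(12)]   # step 2: portion the dough
    return [c + ", baked at 180C for 10 minutes" for c in cookies]       # steps 3-4: bake and output

print(bake_cookies(["flour", "butter", "sugar", "chocolate chips"])[0])
```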

We have already turned our world over to machine learning and algorithms. The question now is, how to better understand and manage what we have done?


- Barry Chudakov, founder and principal of Sertain Research


Key Terms

Artificial Intelligence (AI): A domain within computer science where intelligence is generated by a machine rather than a biological being.

Machine Learning: A branch of computer science concerned with algorithms that improve through experience and feedback – “learning” – rather than through explicit coding by a human programmer.

Big Data: Very large data sets often analyzed through algorithmic processes to reveal patterns.

History

The term algorithm comes from the Latinized name of Muḥammad ibn Mūsā al-Khwārizmī, a 9th-century Persian polymath who introduced the concept of algebra to European mathematics.1 The idea itself goes back much further than the 9th century: a clay tablet discovered near Baghdad records what is believed to be the earliest division algorithm, used by ancient Babylonian mathematicians around 2,500 BC.2

Near the later stages of the Industrial Revolution, a series of breakthroughs set the stage to move the algorithm beyond the clay tablet and toward what it is today. In the 1840s, Ada Lovelace wrote on paper what is believed to be the world’s first machine algorithm. In 1847, George Boole invented binary algebra, also referred to as Boolean algebra, which would become the basis of computer code. In the late 1880s, Giuseppe Peano introduced the axiomatization of mathematics, a rule-based logic now essential to modern computing.

These 19th-century discoveries paved the way for the modern algorithm. In 1936, the British mathematician Alan Turing, who would soon begin doctoral studies at Princeton University, described the world’s first general-purpose computing device. Now referred to as a “Turing machine,” Turing’s theoretical concept could solve problems by following encoded instructions – a breakthrough that was essentially the invention of the computer.3 Although Turing is often labeled the creator of the computer, others who worked to establish the concept of computability in the 1930s, such as Kurt Gödel and Alonzo Church, deserve credit as well. Overall, a number of bright minds contributed to the evolution of modern computing and algorithms – it is difficult to pinpoint a single innovation or intellectual genius that paved the way for the ubiquity of computer algorithms we see today.
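
To give a flavor of Turing’s idea, below is a minimal Python sketch of a machine that follows encoded instructions (a transition table) to manipulate symbols on a tape. The example program, which merely inverts a binary string, is hypothetical and chosen only to illustrate the mechanism.

```python
# A minimal Turing-machine simulator: state + symbol -> (new symbol, move, new state).
def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = dict(enumerate(tape))      # the (conceptually infinite) tape, stored sparsely
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]   # look up the encoded instruction
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != blank)

# Program: walk right, flipping 0s and 1s, and halt at the first blank cell.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", rules))  # -> 01001
```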

People

Muḥammad ibn Mūsā al-Khwārizmī

A 9th-century Muslim astronomer and mathematician, al-Khwārizmī lives on through the Latinized versions of his name and his works, preserved in the terms algebra and algorithm. In the 12th century, his textbook on arithmetic was translated into Latin and has been credited with introducing the decimal number system to the Western world.

Alan Turing

The British mathematician is considered a central figure in the conceptualization of the algorithm and modern computing. Turing’s legacy has been further magnified by his codebreaking contributions to the Allied forces during the Second World War, as well as by the prejudice and discrimination he faced as a gay man in mid-20th-century England. The A.M. Turing Award, named in his honor, is often referred to as the “Nobel Prize of computing.”

Consequences

Any summary of the consequences of algorithms for modern society will be an understatement, as it’s difficult to fully communicate the impact of such a widespread concept in just a few paragraphs. Much of the modern world is now governed by algorithms: almost anything you do on the internet or your smartphone involves an algorithmic process. The majority of trades in financial markets are performed by algorithms, which means that much of your savings and pension is now managed by lines of code as much as by human decision-making. And it’s not just financial decisions that are being outsourced to algorithms; many decisions in medical care, public policy, and business management are now a product of some form of AI.

Algorithms are the foundation of Artificial Intelligence (AI). Many AI programs today rest on a machine learning algorithm. These algorithms seek patterns in vast data sets and, for tasks like recognition and prediction, outperform traditional, explicitly programmed algorithms. Imagine you were writing a program to identify photos of tigers. You would have to explicitly spell out the logic that distinguishes a photograph of a tiger. If you think of this as a flowchart, you might see binary questions such as whether the animal has orange in its coat. The problem with this method is that tigers are more complicated than you might think. Too many rules need to be programmed, and without them the program would be prone to mistakes, such as failing to identify a white tiger. A machine learning algorithm, by contrast, would be shown thousands of pictures of tigers and non-tigers and would predict whether a photo contains a tiger based on the statistical regularities it observed. It can then refine its process based on the feedback it receives on its predictions.
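
The contrast can be sketched in a few lines of Python. The two-feature “images” below (coat orange-ness and stripe intensity) are invented purely for illustration; real systems learn from raw pixels, and the simple averaging used here only stands in for far more sophisticated learning methods.

```python
def rule_based_is_tiger(orange, stripes):
    # Explicit, hand-written logic: fails on edge cases such as white tigers.
    return orange > 0.5 and stripes > 0.5

def train_centroids(examples):
    # "Learning": summarize labeled examples as per-class average feature vectors.
    sums, counts = {}, {}
    for features, label in examples:
        sums[label] = [s + f for s, f in zip(sums.get(label, [0, 0]), features)]
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, features):
    # Predict the class whose average example is closest to this one.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

examples = [([0.9, 0.9], "tiger"), ([0.1, 0.9], "tiger"),     # includes a white tiger
            ([0.8, 0.0], "not tiger"), ([0.2, 0.1], "not tiger")]
centroids = train_centroids(examples)

print(rule_based_is_tiger(0.1, 0.9))    # False: the hand-written rule misses the white tiger
print(predict(centroids, [0.1, 0.9]))   # "tiger": the learned model draws on the examples it was shown
```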

This process is omnipresent in the digital world. Machine learning algorithms on the internet seek patterns in data, incorporating inputs such as user behavior to create an output such as a curated news feed. Netflix’s machine learning algorithms, for example, predict what content you might want to watch based on what you’ve watched in the past, and make recommendations based on those predictions. The algorithm further refines its predictions as you accept or reject its recommendations. Note how this differs from a predetermined set of criteria outlined by a programmer. This capacity to improve with more data is largely responsible for the growth of the big data economy.
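
The feedback loop can be sketched as follows, assuming a small hypothetical catalog tagged by genre. Real recommender systems are vastly more sophisticated, but the predict, get feedback, refine cycle is the same idea.

```python
catalog = {"Stranger Things": "sci-fi", "The Crown": "drama", "Black Mirror": "sci-fi"}
preferences = {"sci-fi": 0.0, "drama": 0.0}   # the model's current estimate of your tastes

def recommend():
    # Predict: suggest a title from the genre with the highest estimated preference.
    best_genre = max(preferences, key=preferences.get)
    return next(title for title, genre in catalog.items() if genre == best_genre)

def give_feedback(title, accepted, learning_rate=0.5):
    # Refine: nudge the genre's score up if the recommendation was accepted, down if rejected.
    preferences[catalog[title]] += learning_rate * (1 if accepted else -1)

give_feedback("Stranger Things", accepted=True)   # you watched a sci-fi show
give_feedback("The Crown", accepted=False)        # you skipped a drama
print(recommend())                                # the next prediction now favors sci-fi titles
```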

When many people think of AI, they think of human-like robots coming to take over the world. What escapes our attention is all of the ways AI is already embedded in our daily lives. Most people now carry an abundance of machine learning algorithms in their pockets. There is no doubt that algorithms – sometimes used synonymously with “technology” – have made our lives easier in many ways. Instead of consulting an ambiguous city map, we can let Google Maps tell us exactly where to go; instead of parsing through countless texts in a public library to find a simple fact, we can retrieve it in milliseconds through a search engine. Given the power and ubiquity of such a concept, it is no surprise that algorithms carry tremendous social responsibility and are subject to widespread public discussion.

Controversies

Algorithms are front and center in many contemporary societal debates. Most of these controversies stem from ethical questions surrounding the use of a given algorithm, both in terms of its consequences and in terms of data privacy.

Algorithms have been criticized as having too much power. They can determine who is hired for a job, who gets a loan, and what news you read. The last point has become a particularly hot topic in the era of fake news, as we recognize that Facebook’s and YouTube’s algorithms play a role in creating echo chambers and enhancing political polarization.4 In 2018, controversy erupted after it was revealed that Cambridge Analytica, a consulting firm that used machine learning techniques to generate tailored political ads, had harvested the personal data of millions of Facebook users. Beyond the scandal’s mishandling of data privacy, the idea of political beliefs being manipulated by algorithmic methods designed to generate user engagement and increase corporate profits strikes many as rather dystopian.

These dystopian concerns are a central theme in much of the skepticism about algorithms. People often point to the use of facial recognition – a popular application of machine learning – in China and the ominous parallels to an Orwellian surveillance system. Similar arguments are made in North America, where law enforcement agencies have been lambasted for using facial recognition technology. In addition to privacy issues, critics highlight the fallibility of these technologies, noting that many such services exhibit racial biases. Although human error and racial discrimination remain prominent in areas such as law enforcement, people are far less comfortable with the notion of errors made by a faulty algorithm.

It’s true that algorithms face extensive scrutiny over flaws and mistakes, but whether this scrutiny is justified has been challenged by proponents of technological governance. Every time an accident involves Tesla’s Autopilot feature, the company is put under the spotlight as people highlight the perils of self-driving cars. Elon Musk has challenged this criticism, once tweeting that “it’s super messed up that a Tesla crash resulting in a broken ankle is front page news and the ~40,000 people who died in US auto accidents alone in the past year gets almost no coverage.” Many share Musk’s sentiment that algorithms, though not perfect, are in many cases an improvement on human decision-making.

Case Studies

IBM’s Watson Promotes Diversity and Inclusion

As the previous section highlighted, algorithmic processes are often challenged for inadvertent biases that can negatively affect individuals such as minorities and those with lower socioeconomic status. Not all algorithms are the same, however; some can actually enhance diversity and inclusion rather than hinder it. This is the message IBM has taken in promoting its AI services for HR practices. In a whitepaper, a group of the company’s researchers listed various cognitive biases, such as confirmation bias and the self-serving bias, that can have negative implications in the workplace, particularly with regard to diversity and inclusion. IBM also cited a finding that 65% of HR professionals believe AI can help combat these biases. “Unlike human beings, machines do not have inherent biases that inhibit diversity and inclusion,” they wrote. “Rather, they are subject to the choices of data and algorithmic features chosen by the people building them. When appropriately developed and deployed, AI may be able to remove the attributes that lead to biases and can learn how to detect potential biases, particularly those unconscious biases that are unintentional and hard to uncover in decision-making processes.”

2010 Flash Crash

The prominence of algorithmic trading in financial markets has given rise to the risk of large and rapid market declines. These events, called flash crashes, can occur when trading algorithms react to a particular change in the market, triggering a snowball effect in which large volumes are sold to prevent further losses. Conceptually, this is an ordinary effect in the stock market: a stock is on a steep decline, and investors sell to cut their losses, further aiding its fall. These sell-offs might happen more slowly when most trades are executed by humans, but with algorithms at the helm, a sharp decline can occur at any moment. Regulations have been put in place in recent years with the aim of preventing this from happening.
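
The snowball effect can be illustrated with a toy simulation. The numbers below (starting price, stop-loss levels, price impact per sale) are invented purely for illustration and bear no relation to the actual events described next.

```python
# Toy cascade: each automated sell order pushes the price down, tripping the next stop-loss.
price = 100.0
stop_losses = sorted([99.0, 98.5, 97.0, 96.5, 95.0, 93.0], reverse=True)
price_impact_per_sale = 1.0

price -= 1.5                               # an initial shock dips the price below the first stop-loss
while stop_losses and price < stop_losses[0]:
    triggered = stop_losses.pop(0)         # this algorithm sells to cut its losses...
    price -= price_impact_per_sale         # ...which pushes the price down and trips the next one
    print(f"stop-loss at {triggered:.1f} triggered, price now {price:.1f}")
```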

On May 6, 2010, at 2:32 p.m. ET, the US stock market experienced one of the most turbulent periods in its history. Within minutes, the Dow Jones Industrial Average suffered its largest-ever intraday decline, equating to losses in the hundreds of billions of dollars.5 The event lasted a little over 30 minutes, and the market quickly recovered its losses, but that day would become an infamous moment in Wall Street’s history. Although the exact cause of the crash is debated and likely more nuanced than a single source, the common thread is that it was triggered by high-frequency trading algorithms that irrationally sparked the sell-off.

Related resources

The AI Governance of AI

This piece addresses some of the moral quandaries around AI that were mentioned in the controversies section, particularly surrounding issues of accountability.

Government Nudging in the Age of Big Data

This piece discusses the potential benefits of using big data and applying machine learning algorithms to maximize the efficacy of public policy interventions.

Sources

  1. Al-Khwārizmī. Encyclopedia Britannica. Retrieved from https://www.britannica.com/biography/al-Khwarizmi
  2. Barbin, É. (1999). A history of algorithms: from the pebble to the microchip (Vol. 23). J. L. Chabert (Ed.). Berlin: Springer.
  3. Watson, I. (2012, April). How Alan Turing Invented the Computer Age. Scientific American. Retrieved from https://blogs.scientificamerican.com/guest-blog/how-alan-turing-invented-the-computer-age/
  4. Bessi, A., Zollo, F., Del Vicario, M., Puliga, M., Scala, A., Caldarelli, G., … & Quattrociocchi, W. (2016). Users polarization on Facebook and Youtube. PLoS ONE, 11(8), e0159641.
  5. Kirilenko, A., Kyle, A. S., Samadi, M., & Tuzun, T. (2017). The flash crash: High-frequency trading in an electronic market. The Journal of Finance, 72(3), 967-998.
