Thinking Machines: The Evolution of AI

AI is pervasive in today’s society. Confusion, excitement, and even fear can stir at the mere mention of this “new” technology. We’re here to explain how AI isn’t quite so new to the tech game after all, and how it evolved from simple rule-based systems to complex models capable of performing tasks once exclusive to humans. Its future holds huge potential, with advancements likely to shape numerous aspects of society and industry. By the end of this article, you’ll be an AI fan, ready to put its power to work in your business.

At OSTechnical, our experts make AI accessible to all business owners, helping make each day a little easier. Let’s step back in time and learn all about the evolution of AI.

Early Concepts and Foundations (1950s)

The term Artificial Intelligence was coined at the Dartmouth Conference in 1956, but the foundations of the idea were laid when Alan Turing published his paper “Computing Machinery and Intelligence” in 1950. The great thinker posed the question, “Can machines think?” [1] In that same paper he proposed what became known as ‘The Turing Test’, a way to determine whether or not a machine could display intelligent behavior.

The Turing Test:

  1. Setup: Imagine a game with three participants: a human (the judge), another human, and a machine. The judge interacts with the other human and the machine through a computer interface, so they can’t see or hear them directly.
  2. Interaction: The judge asks questions to both the human and the machine. The purpose is to try to determine which responses come from the human and which come from the machine.
  3. Evaluation: If the judge frequently cannot tell the difference between the human’s responses and the machine’s responses, then the machine is said to have passed the Turing Test.

In the late 1950s, the first AI programs were developed, such as the Logic Theorist by Allen Newell and Herbert A. Simon, which mimicked human problem-solving skills. The Logic Theorist could prove mathematical theorems on its own, working through logic problems much the way a human mathematician would. This project laid the groundwork for future AI research, showing that machines could be programmed to solve complex problems using logical reasoning. [2]

Expert Systems and Knowledge Representation (1960s-1980s)

By the 1970s, AI had moved into the sciences through Expert Systems: programs that used AI to mimic the decision-making capabilities of human experts. DENDRAL, developed starting in 1965, analyzed chemicals to help figure out the structures of unknown molecules. When chemists had a sample of a chemical but didn’t know what it was, they could give DENDRAL clues, such as readings from a special machine (a mass spectrometer) that describe the chemical’s components. This sped up the process of determining a chemical’s structure, and the system was used widely in industry and academia. [3]

Interest in AI took a giant leap in the 80s. As personal computers grew in popularity, so did interest in making them smart. Expert Systems continued developing and could give specific advice or solve problems in narrow specializations such as mechanics. Computers were gaining capabilities, but AI still didn’t learn on its own at this point. The dream of what it could one day become solidified, though; the problem was that these computers just weren’t powerful enough yet.

AI Winter (1990s)

The 1990s were dark days for AI. People were disappointed in what computers could accomplish on their own; it was as if your robot could only partially clean your room, and did a mediocre job at best. Funding dried up, and interest in AI dwindled as well. Expert Systems were impressive, but they were costly to build and maintain, and conventional programming could often get the job done without breaking the bank. One of the few exciting highlights came in 1997, when IBM’s Deep Blue defeated the reigning world chess champion, Garry Kasparov. [4]

The Birth of Machine Learning and The Deep Learning Revolution (2000s-2010s)

Winter came and went in the new millennium with the advent of Machine Learning. Gone were the days of rule-based systems too specialized to become widespread; the development of algorithms that could learn from data was the way of the future. With the internet on the rise, huge datasets fueled the creation of sophisticated machine learning models. Support Vector Machines (SVMs), which classify data by finding the boundary that best separates examples of different categories, achieved impressive accuracy and are still used today in spam filters and to help diagnose medical ailments.
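For the technically curious, the core idea behind a linear SVM can be sketched in a few lines of plain Python: the training loop below nudges a separating line until it keeps a comfortable margin between two classes. The “spam features” here (link count and exclamation-mark count) are made-up placeholders for illustration, not how a production spam filter works:

```python
def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=500):
    """Fit a linear soft-margin SVM by sub-gradient descent on the hinge loss.

    X: list of feature vectors; y: labels, each +1 (spam) or -1 (not spam).
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:
                # Inside the margin (or misclassified): hinge-loss update
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:
                # Correct with room to spare: only shrink w (regularization)
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    """Return +1 (spam) or -1 (not spam) for feature vector x."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy data: [number of links, number of exclamation marks] per email
emails = [[5, 8], [7, 6], [6, 9], [0, 1], [1, 0], [0, 2]]
labels = [1, 1, 1, -1, -1, -1]  # first three are "spam"

w, b = train_linear_svm(emails, labels)
```

In practice you would reach for a mature library such as scikit-learn rather than hand-rolling the training loop, but the principle — find the line (or plane) with the widest margin between classes — is the same.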

Deep learning takes machine learning further, using layered neural networks that learn patterns from enormous amounts of data in order to make decisions. The goal was to make life easier, and with near-superhuman capabilities like image recognition, speech synthesis, and language understanding, deep learning opened up the real world for real people. Travel, communication, and inclusiveness for all hit fast forward by the 2010s. [5]

Current and Future Trends (2020s and Beyond)

AI is now a part of everyday life, even if you don’t realize it. From Siri to Alexa, Amazon to Netflix, we are constantly interacting with artificial intelligence. The current focus is on ethical practices, and regulatory frameworks are being created now to ensure that AI is used to enhance our lives rather than harm them.

According to Georgia Tech’s Justin Biddle, “Humans generate, design, develop, distribute, and monitor AI systems. Human decisions are impactful throughout the AI development life cycle, and those decisions, reflecting the developers’ values, impact the performance of AI systems in a significant way.” AI has the potential to make major decisions in people’s lives, such as university admissions, and there must be systems in place to ensure that ethical standards are being met. This means diversity and equity in hiring at AI tech companies must be honored, so that the voices creating the artificial intelligence we use represent the population at large. [6]

Artificial Intelligence has been in the mix for over 70 years and is only on the rise. OSTechnical’s pool of experts is not only well-versed in AI best practices but also representative of the diverse society we live in today. We look forward to further discussing AI and its benefits for your business in the future.

Sources

  1. Muggleton, Stephen. “Alan Turing and the development of Artificial Intelligence.” AI Communications, vol. 27, no. 1, 2014, pp. 3-10, https://www.doc.ic.ac.uk/~shm/Papers/TuringAI_1.pdf.
  2. Sloat, Sarah. “A brief history of Logic Theorist, the first AI.” Popular Science, 3 October 2023, https://www.popsci.com/technology/the-first-ai-logic-theorist/. Accessed 23 May 2024.
  3. Copeland, BJ, and Michael Aaron Dennis. “DENDRAL | Artificial Intelligence, Machine Learning & Expert Systems.” Britannica, 25 April 2024, https://www.britannica.com/technology/DENDRAL. Accessed 23 May 2024.
  4. Dickson, Ben. “What is the AI winter?” TechTalks, 12 November 2018, https://bdtechtalks.com/2018/11/12/artificial-intelligence-winter-history/. Accessed 23 May 2024.
  5. Ballan, Meltem. “The Evolution of AI in 20 Years: New and Old Job Titles.” LinkedIn, https://www.linkedin.com/pulse/evolution-ai-20-years-new-old-job-titles-meltem-ballan-ph-d-/. Accessed 23 May 2024.
  6. “5 AI Ethics Concerns the Experts Are Debating | Ivan Allen College of Liberal Arts.” Ivan Allen College of Liberal Arts, https://iac.gatech.edu/featured-news/2023/08/ai-ethics. Accessed 23 May 2024.
