Ever since ancient times, humans have dreamed of creating machines that could think and act like them. They imagined artificial beings that could perform tasks, solve problems, and even create art. They told stories and myths about these creatures, such as the bronze giant Talos who guarded the island of Crete, or the golem who was brought to life by a rabbi in Prague.
But these were only fantasies, until the dawn of the modern era, when science and technology began to make rapid progress. Philosophers and mathematicians tried to understand the nature of human thinking and reasoning, and how it could be represented by symbols and rules. They devised systems of logic, algebra, and calculus that could manipulate abstract concepts and solve complex equations. They wondered if machines could be built that could follow these rules and perform calculations faster and more accurately than humans.
The invention of the programmable digital computer in the 1940s was a breakthrough that made this dream possible. Computers were machines that could store and execute instructions, and process data in binary form. They could perform arithmetic operations at incredible speed, and handle large amounts of information. They were also flexible and adaptable, as they could be programmed to perform different tasks according to different inputs.
Some visionary scientists saw the potential of computers to go beyond mere calculation, and to simulate human intelligence. They asked: Can machines think? Can they learn from data and experience? Can they understand natural language and communicate with humans? Can they play games and solve puzzles? Can they create original and creative content?
These questions gave rise to the field of artificial intelligence, or AI, which was officially founded at a workshop held on the campus of Dartmouth College, USA, during the summer of 1956. The workshop brought together researchers from various disciplines who shared a common goal: to build machines that could exhibit intelligent behavior. They coined the term "artificial intelligence" to describe this endeavor, and they predicted that it would be achieved in a generation or less.
They were optimistic and ambitious, and they had good reasons to be. They had access to powerful computers and generous funding from governments and corporations. Over the following decades, they developed methods and techniques such as search algorithms, logic programming, neural networks, genetic algorithms, expert systems, natural language processing, computer vision, speech recognition, and machine learning. They achieved impressive results in domains as varied as chess, mathematics, medicine, linguistics, robotics, and art.
But they also faced many challenges and difficulties along the way. They realized that human intelligence was not easy to define or measure, and that it involved many aspects such as emotion, intuition, creativity, common sense, and social skills. They encountered problems that were too hard or too vague for computers to solve, such as understanding natural language or recognizing faces. They struggled with limitations of hardware, software, data, and resources. They faced criticism and skepticism from other scientists, philosophers, politicians, and the public.
They also experienced cycles of hype and disappointment, known as AI winters. These were periods when AI research lost funding and interest due to unrealistic expectations or failed promises. The first AI winter occurred in the 1970s, when AI failed to deliver on its grand vision of general intelligence. The second occurred in the late 1980s, when AI systems failed to compete with cheaper and faster alternatives such as statistical methods.
But AI never died. It always bounced back with new ideas and innovations. It always adapted to new challenges and opportunities. It always benefited from new developments in other fields such as neuroscience, psychology, biology, physics, economics, sociology, and art.
And it always made progress.
In the 21st century, AI has witnessed a resurgence of interest and investment, driven by several converging factors: new methods such as deep learning, new hardware such as GPUs, and vast new sources of data such as social media and the web. Cloud computing and open-source collaboration made these tools widely available, while high-profile milestones such as AlphaGo captured public attention. AI spread into new domains such as health care and self-driving cars, and with this growth came new concerns: questions of ethics and fairness, regulations such as GDPR, and risks such as cyberattacks. At the same time, researchers set their sights on new frontiers, from tackling challenges like climate change to pursuing visions such as artificial general intelligence and artificial life.
AI is now everywhere. It is part of our daily lives. It helps us work smarter, learn faster, play harder, communicate better, create more, explore further.
AI is not a dream anymore.
It is a reality.
And it is only the beginning.