Ideas about mechanical reasoning and artificial beings date back centuries, but artificial intelligence as a field of study did not take shape until the 1950s. In 1950, Alan Turing published a paper entitled “Computing Machinery and Intelligence” in which he proposed a test for determining whether a machine could be considered intelligent. The Turing Test, as it came to be known, remains a widely discussed benchmark today.
In the years following Turing’s publication, a number of researchers began working on artificial intelligence projects. Among the most notable were John McCarthy, who coined the term “artificial intelligence” at the 1956 Dartmouth workshop, and Marvin Minsky, who co-founded the MIT AI Lab and is often counted among the fathers of the field. Early programs such as the Logic Theorist showed that machines could carry out symbolic reasoning. Later work in this tradition produced expert systems, programs designed to mimic the decision-making process of human experts in specific domains.
The 1980s saw a major resurgence in AI research, thanks in part to the availability of more powerful computers and the commercial success of expert systems. Lisp, a language created by John McCarthy in 1958, was particularly well suited to AI applications due to its flexible handling of symbolic data structures, and dedicated Lisp machines were even built to run it. This decade also saw a revival of research on neural networks, models inspired by the brain’s structure and function, driven in large part by the popularization of the backpropagation training algorithm.
The 1990s was a decade of great promise for AI. In 1997, IBM’s Deep Blue defeated the reigning world chess champion, Garry Kasparov, in a six-game match. Although Deep Blue was a highly specialized chess machine rather than a generally intelligent one, its victory was a landmark moment for the field.
The early 21st century has seen a continued expansion of the field of artificial intelligence. Machines are now capable of tasks that would have been considered impossible just a few years ago, such as translating between languages, driving cars, and even beating humans at complex games such as Go, as DeepMind’s AlphaGo demonstrated in 2016.
Article written by Franck Jr. Walter
contact me at: franck [at] ketrium.com