The idea of "a machine that thinks" dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article), important events and milestones in the evolution of AI include the following:
1950.
Alan Turing publishes Computing Machinery and Intelligence. In this paper, Turing, famous for breaking the German ENIGMA code during WWII and often called the "father of computer science", asks the following question: "Can machines think?"
From there, he offers a test, now famously known as the "Turing Test," in which a human interrogator tries to distinguish between a computer's and a human's text responses. While this test has undergone much scrutiny since it was published, it remains an important part of the history of AI, and an ongoing concept within philosophy, as it draws on ideas about linguistics.
1956.
John McCarthy coins the term "artificial intelligence" at the first-ever AI conference at Dartmouth College. (McCarthy goes on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw and Herbert Simon create the Logic Theorist, the first-ever running AI computer program.
1958.
Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that "learned" through trial and error. A decade later, in 1969, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both the landmark work on neural networks and, at least for a while, an argument against further neural network research efforts.
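To illustrate the "trial and error" learning described above, here is a minimal Python sketch of the classic perceptron learning rule. It is illustrative toy code under simple assumptions (binary labels, a hard threshold), not a reconstruction of the Mark 1 hardware.

    def train_perceptron(samples, epochs=20, lr=0.1):
        """samples: list of (inputs, label) pairs with label in {0, 1}."""
        n = len(samples[0][0])
        weights = [0.0] * n
        bias = 0.0
        for _ in range(epochs):
            for inputs, label in samples:
                # Fire (1) if the weighted sum crosses the threshold.
                activation = sum(w * x for w, x in zip(weights, inputs)) + bias
                prediction = 1 if activation > 0 else 0
                # Trial and error: nudge the weights by the prediction error.
                error = label - prediction
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    # Learn the logical AND function from four labeled examples.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    print(train_perceptron(data))

Because AND is linearly separable, the rule converges; Minsky and Papert's critique centered on functions, such as XOR, that a single perceptron cannot represent.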
1980s.
Neural networks, which use a backpropagation algorithm to train themselves, become widely used in AI applications.
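As a rough illustration of how backpropagation works, the following self-contained Python sketch trains a tiny one-hidden-layer network on XOR, the function a single perceptron cannot learn. It is a toy example under simple assumptions (sigmoid units, squared-error loss, plain gradient descent); whether it converges depends on the random initialization.

    import math, random

    random.seed(0)

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # Two inputs -> two hidden units -> one output; small random starting weights.
    w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b1 = [0.0, 0.0]
    w2 = [random.uniform(-1, 1) for _ in range(2)]
    b2 = 0.0
    lr = 0.5
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

    for _ in range(10000):
        for (x1, x2), y in data:
            # Forward pass.
            h = [sigmoid(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(2)]
            out = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
            # Backward pass: propagate the output error back through the layers.
            d_out = (out - y) * out * (1 - out)
            d_h = [d_out * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
            # Gradient-descent weight updates.
            for j in range(2):
                w2[j] -= lr * d_out * h[j]
                w1[j][0] -= lr * d_h[j] * x1
                w1[j][1] -= lr * d_h[j] * x2
                b1[j] -= lr * d_h[j]
            b2 -= lr * d_out

    for (x1, x2), y in data:
        h = [sigmoid(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(2)]
        print((x1, x2), round(sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2), 2))

The backward pass is the key step: the output error is multiplied back through each layer's weights and activation derivatives, giving every weight its own gradient.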
1995.
Stuart Russell and Peter Norvig publish Artificial Intelligence: A Modern Approach, which becomes one of the leading textbooks in the study of AI. In it, they explore four potential goals or definitions of AI, differentiating computer systems based on rationality and thinking versus acting.
1997.
IBM's Deep Blue beats then-world chess champion Garry Kasparov in a chess match (and rematch).
2004.
John McCarthy writes a paper, What Is Artificial Intelligence?, and proposes an often-cited definition of AI. By this time, the era of big data and cloud computing is underway, enabling organizations to manage ever-larger data estates, which will one day be used to train AI models.
2011.
IBM Watson® beats champions Ken Jennings and Brad Rutter at Jeopardy! Also, around this time, data science begins to emerge as a popular discipline.
2015.
Baidu's Minwa supercomputer uses a special kind of deep neural network, called a convolutional neural network, to identify and categorize images with a higher rate of accuracy than the average human.
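The core operation behind a convolutional neural network can be shown in isolation. Below is a minimal Python sketch, not Minwa's actual model, of a single 2D convolution: a small filter slides across an image and responds most strongly where its pattern appears, which is what lets such networks pick out local visual features.

    def conv2d(image, kernel):
        """Valid (no padding) 2D convolution of one image with one filter."""
        kh, kw = len(kernel), len(kernel[0])
        out_h = len(image) - kh + 1
        out_w = len(image[0]) - kw + 1
        output = []
        for i in range(out_h):
            row = []
            for j in range(out_w):
                # Dot product of the filter with one image patch.
                row.append(sum(kernel[a][b] * image[i + a][j + b]
                               for a in range(kh) for b in range(kw)))
            output.append(row)
        return output

    # A vertical-edge filter applied to an image whose right half is bright:
    # the response peaks at the column where dark meets bright.
    image = [[0, 0, 1, 1] for _ in range(4)]
    kernel = [[-1, 1], [-1, 1]]
    for row in conv2d(image, kernel):
        print(row)  # [0, 2, 0] on every row

A full convolutional network stacks many such filters, learned by backpropagation, with nonlinearities and pooling between layers.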
2016.
DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves). Google had acquired DeepMind in 2014 for a reported USD 400 million.
2022.
A rise in large language models (LLMs), such as OpenAI's ChatGPT, creates an enormous change in the performance of AI and its potential to drive enterprise value. With these new generative AI practices, deep-learning models can be pretrained on large amounts of data.
2024.
The latest AI trends point to a continuing AI renaissance. Multimodal models that can take multiple types of data as input are providing richer, more robust experiences. These models bring together computer vision image recognition and NLP speech recognition capabilities. Smaller models are also making strides in an age of diminishing returns for massive models with large parameter counts.