An Excerpt from my upcoming book on the history of AI
AI was born at a workshop held at Dartmouth College in Hanover, New Hampshire, in the summer of 1956, themed the ‘Dartmouth Summer Research Project on Artificial Intelligence’. It ran for about eight weeks and consisted mainly of brainstorming sessions. The idea of organizing the workshop is credited to John McCarthy, one of the founding fathers of Artificial Intelligence and a key participant in the project.
At the time, McCarthy was an assistant professor of mathematics at Dartmouth College and a Sloan Fellow in the physical sciences. The workshop grew out of his frustration that a volume of papers he had co-edited with Claude Shannon said too little about the possibility of computers possessing intelligence. One might assume from this that McCarthy had ample experience working with computers, but in fact his first attempt at programming one came only in 1955.
Around that time, IBM made its newest computer, the IBM 704, available for research and educational purposes. Several colleges, Dartmouth among them, would share access to the machine, and Dartmouth chose McCarthy as its representative for the computer's use. As it turned out, his early attempt at programming gave him an edge over others. It was in this capacity that he met Nathaniel Rochester, head of IBM's Information Research Department.
Rochester later invited him to spend the summer of 1955 with IBM. It was during that summer that the two of them convinced Shannon and Marvin Minsky to join them in drafting a proposal for the Dartmouth workshop, though the bulk of the writing and organizing fell on McCarthy. The proposal was titled ‘Dartmouth Summer Research Project on Artificial Intelligence’, and credit for the name ‘Artificial Intelligence’ goes to McCarthy as well.
This wasn’t the only name considered for the new discipline, though; one alternative was ‘Computational Intelligence’. On picking the name, McCarthy said,
‘I had to call it something, so I called it Artificial Intelligence, and I had a vague feeling that I’d heard the phrase before, but in all these years I have never been able to track it down.’
The proposal was submitted to the Rockefeller Foundation for funding in August 1955 and was accepted. The purpose of the workshop as stated in the proposal was to:
‘… proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it… For the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving’
The intention was that the workshop would spark the cross-industry collaboration and knowledge sharing needed to lay the foundations of the new scientific discipline of Artificial Intelligence. However, it did not live up to McCarthy’s expectations. For one, not all the invited participants attended, which contributed to everyone pursuing independent projects.[i] At the workshop and afterward, the top scientists and mathematicians worked separately and in isolation, each pursuing his own research agenda.
McCarthy had hoped that the participants would develop a common approach to, or shared conception of, AI out of the proceedings of the workshop. That did not happen. Still, the workshop brought the top researchers in the field — McCarthy, Minsky, Allen Newell, Herbert Simon, and Arthur Samuel — together in one place, something that had never happened before, and each of them went on to achieve significant landmarks, enough that in hindsight the period from 1956 to 1973 can be called the first AI boom.
At Dartmouth they could talk and plan their future research projects in AI, all of which provided a necessary foundation for the formal discipline. For instance, Newell and Simon shared their idea for a list-processing language that could manipulate symbolic structures and treat programs as data. Even so, the researchers’ closed, siloed approach contributed to the first slump in AI (Chapter 3 will discuss this). Without open networks for collaboration and knowledge sharing, technological development cannot scale easily or advance rapidly.
Today, the tech community has learned from its past and fully embraces the open-source approach to development, not just in AI but in other fourth-generation technologies. This is one of the major factors driving what is now the longest boom in AI R&D, running from 1997 to date; Chapter 6 will discuss it in further detail. For now, let’s focus on the events at Dartmouth. As I said earlier, the participants at the Dartmouth conference each pursued different approaches to AI, all of which are worthy of consideration.
Allen Newell and Herbert Simon, both of Carnegie Mellon University in Pennsylvania, focused on what we now know as the Carnegie approach: studying how humans solve problems in the hope of deciphering the formal rules that govern problem-solving. These formal rules would then be encoded as computer algorithms. Together they developed the Physical Symbol System Hypothesis (PSSH), which they articulated in their joint 1975 ACM Turing Award lecture.
Their hypothesis, explained in a paper they jointly authored and presented, states that a physical symbol system, such as a computer, already has the means to carry out intelligent action. They conceived of intelligent action as the goal of artificial intelligence, achievable only through the manipulation of symbols. The only thing missing for such a computer to behave intelligently was the ‘appropriate symbol-processing programs’, which humans must supply.
The test of an intelligent machine was its ability to achieve a given goal despite variation, difficulty, and complexity in its working environment; the point of these obstacles was to simulate the same challenges a typical human would face in performing the task. Newell and Simon also foresaw the importance of what we now call Artificial General Intelligence, or Strong AI: they believed the investment in computer science, and in AI in particular, would not be worthwhile if such machines could only perform narrow, specific tasks.
It would have been interesting to get their take on modern trends in AI, where the focus has shifted to narrow applications. The best return on investment, in their estimation, would come from introducing these computers into general society to perform knowledge-intensive tasks (IBM Watson would have exceeded their expectations in this respect). Such intelligent machines would act as agents of humans, capable of navigating human realities.[ii]
Both authors were, however, open to the possibility that holes could be poked in their theory, conceding that intelligent behavior was difficult to replicate. They concluded that the best way to defend or attack the hypothesis was to produce empirical evidence for all to see.[iii] In artificial intelligence, they are recognized as founding fathers for developing heuristic programming, heuristic search, means-ends analysis, and methods of induction as ways of solving interesting problems.
Outside of AI, they also contributed to the field of psychology. They were the major proponents of the idea that human cognition can be described in terms of physical symbol processing. They developed theories of human problem solving, verbal learning, and inductive behavior for certain tasks, and they built computer programs that incorporated these theories to replicate human behavior.
We can say that their work in AI and psychology was linked and symbiotic.
[i] J. Moor, ‘The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years’, AI Magazine, January 2006.
[ii] A. Newell and H. A. Simon, ‘Computer Science as Empirical Inquiry: Symbols and Search’, 1975 ACM Turing Award Lecture, Communications of the ACM, vol. 19, no. 3, March 1976, pp. 113-114.