We humans have always anthropomorphized the things we create. The impulse dates back at least to the ancient Greeks: Pygmalion, the mythical sculptor, fell in love with Galatea, the statue he had carved, and wished to bring his creation to life.
When programmable computers were first conceived, people wondered whether they could also think and behave like us humans. The answer then was no, and not much has changed since. Still, such far-fetched questions laid the foundation of a new era: the revolution in information sciences and modern-day Artificial Intelligence (aka modern computing). Today, modern computing is a thriving field with many practical applications and active research topics. We look to modern software to automate routine labour, to recognise, classify, and translate speech and images, to make diagnoses in medicine, and to support basic scientific research.
In its early days, modern computing rapidly tackled and solved problems that are intellectually difficult for human beings but relatively straightforward for computers: problems that can be described by a list of formal, mathematical rules. The true challenge proved to be the tasks that are easy for people to perform but hard for people to describe formally, problems we solve intuitively, that feel automatic, like recognising spoken words or faces in images. It turns out that these, too, can be described by rules, albeit a somewhat different mix of formal and informal mathematical rules and statistical treatments; it just took us a little longer to work out the mathematics, statistics, and programming languages needed to do so. Rapid advances in the speed, reliability, and energy efficiency of computing hardware also contributed to many of these successes.
In this series of blog posts, we will study the solutions to these more intuitive problems. The solution is to allow computers to mimic the hypothetico-deductive reasoning process in humans and to classify the world in terms of a hierarchy of data sets, or what humans would call concepts, with each data set defined through its relation to simpler data sets. This resembles the way humans do it, except that instead of a brain, the process is fully defined by mathematical and statistical rules programmed into the computer in advance. By gathering larger and larger data sets from different sources, this approach eliminates the need for a human operator to formally specify all the data that the computer needs. The hierarchy of data sets also helps the computer define complicated data sets by building them out of simpler ones. If we visualise this process and draw a graph showing how these data sets representing concepts are built on top of one another, the graph comes out deep, with many layers. For this reason, we call this approach to AI deep learning. Mostly we call it that because 'deep' sounds cool and makes us feel special; really, 'deep' is little more than a hedge modifier with no precise meaning.
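The layered picture above can be sketched in a few lines of code. The following toy Python snippet is a hypothetical illustration, not a real neural network: it shows that 'depth' amounts to nothing more than a composition of simple functions, each building on the output of the one before. The particular weights and the ReLU-style activation are arbitrary choices made for the example.

```python
# Toy illustration: a "deep" model as a stack of simple layers.
# Each layer is one scalar "neuron": a weighted input plus a bias,
# clipped at zero (a ReLU-style activation). None of these numbers
# are meaningful; they only exist to show the layered structure.

def make_layer(w, b):
    return lambda x: max(0.0, w * x + b)

# Three layers stacked on top of one another.
layers = [
    make_layer(2.0, -1.0),
    make_layer(0.5, 0.25),
    make_layer(3.0, 0.0),
]

def deep_model(x, layers):
    # Each layer consumes the output of the previous one,
    # which is exactly what makes the computation graph "deep".
    for f in layers:
        x = f(x)
    return x

print(deep_model(1.0, layers))  # 2.25
```

In a real deep learning system each layer would transform whole arrays rather than a single number, and the weights would be learned from data rather than written by hand, but the compositional shape is the same.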
Many of the early successes of AI took place in relatively sterile, formal environments and did not require computers to have any knowledge about the world. This was just as well, because computers and machines cannot have knowledge; if they did, they would no longer be machines. For example, IBM's Deep Blue chess-playing system defeated world champion Garry Kasparov in 1997. Chess is, of course, a very simple game from a computer's point of view: a board of only sixty-four squares and thirty-two pieces that can move in only rigidly circumscribed ways. Devising a successful chess strategy is a tremendous accomplishment, but the challenge does not lie in describing the set of chess pieces and allowable moves to the computer. Chess can be completely described by a very brief list of formal rules, easily provided ahead of time by the programmer.
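To make the 'brief list of formal rules' point concrete, here is a hypothetical Python sketch that enumerates one such rule: the legal moves of a knight on an 8x8 board. The function name and coordinate convention are inventions for this example; the full rules of chess are longer, but every one of them is equally mechanical.

```python
# A knight's movement, written as a formal rule.
# Squares are (file, rank) pairs with both coordinates in 0..7.

def knight_moves(file, rank):
    # The knight's eight L-shaped offsets.
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    # Keep only destinations that stay on the board.
    return [(file + df, rank + dr) for df, dr in deltas
            if 0 <= file + df < 8 and 0 <= rank + dr < 8]

print(len(knight_moves(0, 0)))  # a corner knight can reach only 2 squares
print(len(knight_moves(3, 3)))  # a central knight can reach all 8
```

Contrast this with recognising a face in a photograph: there is no comparably short list of rules we could hand the programmer ahead of time, which is precisely why the intuitive problems resisted computers for so long.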
Computers back then were very good at computation, and today they are even better. Ironically, the abstract, formal tasks that are among the most difficult undertakings for a human being are among the easiest for a computer; think of a calculator defeating a human at basic arithmetic, an entirely abstract and formal task. A person's everyday life, by contrast, requires an immense amount of knowledge about the world. Much of this knowledge is subjective and intuitive, and therefore difficult, though not impossible, to articulate in a formal way. Even if such knowledge cannot be fully described formally, sets of rules, and relationships between rules, can be programmed into a computer that allow a close enough approximation of human-like knowledge capture to make machines seem intelligent, or seem as though they are learning. Of course they are neither intelligent nor capable of learning, but we use these words not in their normal, everyday sense, rather in their new, modern sense. We assume that everyone understands this and thus feel no qualms about deceiving the vast majority of people on the planet who do not. They will catch up eventually.