The idea of an artificial intelligence (AI) is a popular one. Of course, basic forms of AI already exist in the world – in videogames in particular, where non-player characters are often governed by some form of AI of varying sophistication. However, when people refer to artificial intelligence, they generally mean an intelligence that is reflective of human intelligence – an artificial intelligence that is communicable and difficult to distinguish from organic human intelligence. From now on, when I refer to AI, that is what I mean.
Whether this is possible is a matter of debate, but it is undoubtedly tied to another idea: that the human brain is effectively a powerful computer. The computer scientist Alan Turing, for one, thought this to be the case. So, returning to the question that is the title: “If a computer could think like a human, what would its first thoughts be?” To elaborate on what I mean by this question, I don’t simply mean to ask about the thought contents of a fictional AI indistinguishable from human intelligence, since those thoughts would probably be similar to regular human thought. If that were the aim, the obvious way to proceed would be the thought experiment of asking what our first thoughts would be if we were to come into existence with our fully formed adult minds but no memories. While this thought experiment might provide some useful insight into answering the question, it is not the full scope that I wish to consider. The important part is what is meant by thinking like a human. I take this to mean that the computer would be in such a state that, if we were to interact with it, the responses we got would appear to be the result of thought, in the same way that we would perceive a person to be thinking. Just as, when you are talking to a friend, you are certain that they are thinking.
I know this is not really a good definition, and I hope to refine it over the course of this discussion, but my aim was to define what is meant by thinking without considering the mechanics and inner workings of the ‘thought process.’ I do this because I cannot be sure what a machine’s “brain process” would be like – I can’t know the workings of this hypothetical software. In the same vein, so far as I am aware, the brain processes that govern our own thoughts are not especially well understood (if anyone who is more clued up on neuroscience wishes to correct me on this, email me). So the difficulty remains: how do you define thinking?
The immediate reductionist answer is that thinking is just the act of having thought processes. I agree this is true, and so the next question is “what is a thought process?” – a harder question. While we might refer to computers or other inanimate things as ‘thinking’, this is a metaphorical use of language. When someone says “the computer is thinking”, they do not mean that the computer in front of them, which is taking a long time to complete a task, is literally thinking in the same sense that we consider ourselves to think. Do animals think? Possibly. Do plants think? I strongly doubt it.
What is it that separates decision making in response to stimuli from this concept of thinking? I propose that the key ingredient is consciousness. Something about that seems intuitively right; however, it doesn’t close the matter. There is now the trouble of defining what is even meant by consciousness – this is not a trivial issue, and it is one I might return to at a later date. What I aimed to do in this post was open up the question and bring a series of themes to the fore, so that I gain a fuller conception of the issues for the final project.
I wish to finish this post with an idea that I came across recently: emergent properties. Water is a liquid; it is composed of H2O molecules that are held together fairly strongly, each with enough energy that the bulk is in a liquid phase. Consider now a single H2O molecule – it is neither solid, liquid nor gas. Properties such as phase state are referred to in physics as bulk properties – ones that are not present in the constituent molecules but emerge when the system is treated as a whole. Liquidity can be thought of as an emergent property.
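This notion of emergence can even be played with computationally. The sketch below is my own toy illustration (nothing to do with water specifically): in Conway’s Game of Life, every cell obeys the same trivial local rule, yet a “glider” – a pattern that travels across the grid – exists only at the level of the whole system. No individual cell moves; the movement is emergent.

```python
from collections import Counter

def step(live):
    """Apply one generation of Conway's rules to a set of live (x, y) cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and is currently alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The classic glider: after 4 generations the same shape reappears,
# shifted diagonally by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The rule knows nothing about “gliders”; the travelling pattern is a property of the system as a whole, much as liquidity is a property of many molecules rather than of one.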
Taking a physicalist account of the world (i.e. there are only physical things and nothing more), it is possible that consciousness is such an emergent property of tissue – individual neurones firing on their own do not exhibit consciousness, but it is a property that emerges when neurones are connected en masse. This seems sensible, and it raises the question: if this is true, what is it about neurones that gives rise to this emergent property of consciousness? Even more so, is it possible to artificially recreate the base properties of neurones such that an amalgamation of these artificial neurones would also result in consciousness? This is all purely conjecture, and I am far from being any sort of authority or expert on neuroscience, but I find the idea entertaining, and I’ll be coming back to this idea of consciousness as an emergent property in later posts, as it will likely be a major theme in the final project.
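As a purely illustrative toy (my own sketch – the names and weights are made up, and this is nothing like a biological neurone, let alone consciousness), here is the simplest kind of artificial “neurone”: a unit that fires when a weighted sum of its inputs crosses a threshold. A single such unit cannot compute XOR, but three wired together can – a very small-scale example of a capability that belongs to the network rather than to any one unit.

```python
def neurone(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor(a, b):
    """XOR built from three threshold units: (a OR b) AND NOT (a AND b)."""
    or_gate = neurone([a, b], [1, 1], threshold=1)    # fires if either input fires
    and_gate = neurone([a, b], [1, 1], threshold=2)   # fires only if both fire
    # The output unit fires on OR but is inhibited by AND.
    return neurone([or_gate, and_gate], [1, -1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

Whether anything along these lines could scale up to consciousness is, of course, exactly the open question of the post.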