Myself, Coding, Ranting, and Madness

The Consciousness Stream Continues…

Can a Machine be Conscious?

22 Jun 2010 2:30 Tags: None

N.B.: This is an unedited copy-paste job of a philosophy essay I wrote in a hurry earlier this year. If I get some more free time, I may actually re-write it to be properly readable. In the meantime, thoughts on the matter would be most welcome.

The first step in looking for consciousness in a machine is, naturally, to find consciousness anywhere and have an understanding of what consciousness is. From here, by looking at Turing and post-Turing machines, we will see if anything short of an artificial brain can be conscious.

Like almost all concepts, a mind can be sub-divided, in this case into the extrospective mind, which deals with the awareness of the external world (or, at least, experiences thereof), and the introspective mind, which deals with the processing of stored information and experience. In addition, a conscious mind is aware; introspectively it is self-aware, giving it the ability to formulate a notion of the self and of individuality. Extrospectively, for a mind to be conscious, it must be reactively aware of the physical world, rather than responding in set ways to set experiences.

However, there are some things over which there is more debate; emotions, for example, are a feature of the known conscious entities, but it does not seem logical that they would be required for awareness, understanding, or even free will. These distinctions are somewhat tenuous, and are even harder to test for; it has to be taken as a given that all of humanity is conscious, and that the rest of the world is not populated by zombies. However, in order to search for consciousness in machines, a method of determining whether an object or system is conscious is needed.

What must not happen is falling into the gumption trap of arguing that ‘machines can’t do X’, where X is an action that we associate with a conscious human being. Alan Turing, whose work on electronic computers first gave the AI question practical meaning, made this point as early as 1950, in section 5 of “Computing Machinery and Intelligence”.

The most famous test is the Turing test, which sets the benchmark for consciousness at the point where a machine can fool a human into believing it is conscious. Turing-conscious machines already partially exist – advanced computational engines and game AIs can respond, to a limited set of situations, at human or even above-human levels – and could, in theory, be extended to deal with every question that the examiner in a Turing test could ask of them, given unlimited programming.

John Searle et alii would call these machines ‘Weak AI’, not just unconscious but also unthinking. They are capable of ‘general intelligence’, but do not have any understanding of reality, nor do they have any true understanding of the self. With all of their programming, they may even be able to overcome the problems laid out by Hubert Dreyfus but, due to the limits of a state machine, they will always work purely on their innate knowledge.

Despite many philosophies being based on the idea of innate knowledge, there is always the concept of experience and learning. These are two of the most important features of a conscious entity; extrospective experience must lead to an adaptation of the responses of the entity. This leads to the requirement not only to be able to process complex inputs and determine the right output, or action, but also to be able to learn from mistakes.

This sort of mechanism allows for a partial resolution of the Dreyfus-esque analogy of the automatic bus, which is confronted first by a pregnant woman who needs to be taken to hospital, and then by a terrorist demanding to be taken to the airport. Its reaction (whether it goes on to the end of the bus route, to the hospital, or to the airport), rather than being based entirely on its base programming, would now also be based on the knowledge accumulated during its life of service, and on any information synchronised with other machines. This, however, leaves open both the question of how effective pure experience can be, and how useful shared experience is.

The ability of a machine, based entirely on some form of Turing table of input conditions and output reactions, to synchronise with other machines of the same type, and therefore combine their experiences into a set of shared experiences, allows machines to learn at a much faster rate than humans. What it also allows is for new machines to skip the stage of ‘growing up’, letting the combined experience of their elders become their innate knowledge, similar to the concept of a racial memory.
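
To make the idea concrete, here is a minimal sketch (in Python, with situation and action names that are entirely made up for illustration) of such a table-driven machine: a lookup table from symbolised inputs to actions, a crude ‘learn from mistakes’ update, and a synchronise step that merges the learned entries of two machines.

```python
class TableMachine:
    """A toy 'Turing table' agent: a lookup from input symbols to actions."""

    def __init__(self, innate):
        # Innate knowledge: the table the machine is shipped with.
        self.table = dict(innate)

    def act(self, symbol, default="do_nothing"):
        # Respond in a set way to a set experience.
        return self.table.get(symbol, default)

    def learn(self, symbol, better_action):
        # 'Learning from mistakes': overwrite the entry for this situation.
        self.table[symbol] = better_action

    def synchronise(self, other):
        # Combine experience with another machine of the same type;
        # entries learned by either machine become shared knowledge.
        merged = {**self.table, **other.table}
        self.table = dict(merged)
        other.table = dict(merged)


# Two buses start from the same innate programming...
bus_a = TableMachine({"passenger_waiting": "stop_and_open_doors"})
bus_b = TableMachine({"passenger_waiting": "stop_and_open_doors"})

# ...one of them encounters a new situation and learns from it...
bus_a.learn("pregnant_woman_in_labour", "drive_to_hospital")

# ...and after synchronising, the other inherits that experience; a new
# machine built from the merged table would skip 'growing up' entirely.
bus_a.synchronise(bus_b)
print(bus_b.act("pregnant_woman_in_labour"))  # drive_to_hospital
```

Note that the merge has to pick a winner whenever the two tables disagree, which is itself a stand-in for the open question of how useful shared experience really is.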

Pure electronic experience, as would be experienced by machines made from currently available technology, may lack one of the basic elements of sapience – instinct. Most instinct could be described as the conscious, or possibly sub-conscious, reaction based on some innate knowledge. The lack of base instincts in a machine was one of Dreyfus’ attacks against a purely symbolic Turing-esque consciousness; Turing himself anticipated this argument, holding to Occam's razor that such instincts can themselves be described, and saying that “we cannot so easily convince ourselves of the absence of complete laws of behaviour...The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'” Although machines can have programmed innate knowledge, there would be no way of separating the base instinctive instructions from added ‘education’ programming; for example, Turing table entries for the concept of protecting pregnant women (an instinct for humans) and for driving a bus to the airport would be semantically equivalent inside the machine. This lack of distinction between instinct and learning would allow a machine of this type to drift away from the basic instincts it was programmed with.
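
The blurring of instinct and education can be seen directly in a sketch like the one above: inside the table, a programmed ‘instinct’ and a learned entry are just two identically-shaped rows (the entries below are, again, invented for illustration), so nothing stops a later update from quietly overwriting the former.

```python
# Both entries live in the same table with the same shape; the machine
# has no way to tell which was 'instinct' and which was 'education'.
table = {
    "pregnant_woman_in_labour": "drive_to_hospital",   # programmed 'instinct'
    "terrorist_demands_airport": "drive_to_airport",   # added 'education'
}

# A learning update is indistinguishable from re-education, so the
# machine can drift away from the instincts it was shipped with.
table["pregnant_woman_in_labour"] = "continue_route"
```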

Perhaps, therefore, a basic Turing table machine cannot both be conscious and intelligent whilst still being able to learn from its mistakes. Almost certainly, it cannot exist in this state without some input or control from another conscious entity, which possesses its own instincts.

Another major problem facing a machine of this type is its input sensitivity: the translation of raw data, such as what it ‘sees’, ‘feels’, or ‘hears’, into a set of discrete, but related, symbols. It is this process that allows the machine to use its Turing tables to make decisions; it is in fact this processed data that is the input, and it is generally what is referred to as such. The problem here is the distinction of inputs; even a programmer with unlimited time would only find a finite set of inputs to deal with, as there are only a finite number of inputs in the universe. However, there are strictly more situations than combinations of inputs, leading to the problem that a single input could have multiple correct responses, depending on unknowns.

For example, a security robot is guarding a building when the lights go out. The machine does not know why the lights have gone out, but must respond. The reason could be as simple as a blown fuse, in which case it needs to go to the fuse box, or a military outfit could be storming the building, in which case it should be rushing to defend it. Assuming both options have even been considered during the machine’s programming, how can it choose between them? A Turing machine cannot have gut feelings, and so must somehow decide on a course of action with insufficient information.
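
As an illustrative sketch only (the sensor threshold and the responses are invented), the guard robot's problem is that symbolisation collapses many distinct situations into one input, and the table must then commit to a single response with nothing like a ‘gut feeling’ to break the tie:

```python
def symbolise(light_level):
    # Raw sense data is reduced to a discrete symbol; 'blown fuse' and
    # 'building being stormed' both collapse to the same input.
    return "lights_out" if light_level < 0.1 else "lights_on"

# More than one response can be 'correct' for the same symbol,
# depending on facts the machine cannot sense.
candidate_responses = {
    "lights_out": ["check_fuse_box", "defend_entrance"],
    "lights_on": ["continue_patrol"],
}

def respond(light_level):
    options = candidate_responses[symbolise(light_level)]
    # With no gut feeling, the machine needs some fixed rule to choose;
    # here it simply takes the first option it was programmed with.
    return options[0]

print(respond(0.05))  # check_fuse_box - right for a blown fuse, wrong for an attack
```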

Even if these problems were to be resolved, and a learning, experiencing Turing-like machine were built, would it be conscious? One of the words that has so far been ignored from the earlier definition of consciousness is ‘aware’. This point, made most effectively by Searle’s ‘Chinese Room’ thought experiment, is an extremely important one.

Suppose there is a computer program that passes the Turing Test and demonstrates general intelligence. Suppose specifically that the program can converse in fluent Chinese. Write the program out on a series of cue cards and give them to an ordinary person who does not know Chinese. Lock the person into a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese; however, is there really anyone (or anything) in the room that understands Chinese? The answer is, clearly, no. The person himself is conscious, but the complete system of the room is not; it is merely an extremely intelligent system.

However, this leads to a contradiction with most monist theory; if the entirety of the mind is made of the same materials as flesh and blood, it is constrained by the same physical laws. Therefore the human brain is a physical system, and as such has the same limitations as a generalised Turing system, with each individual neuron having fixed inputs and outputs. This is the basis of the theory of mind known as ‘Functionalism’, which was first formulated by Hilary Putnam in the 1960s, alongside the rise of research and breakthroughs in both the physical and computer sciences.

This formulation appears to imply that a complete model of the brain, made in the way of a machine, would have all of the same properties as the brain, making it an Artificial Brain. However, this logic leads straight back to the Chinese Room problem, and the implication that humans are not necessarily aware of what they are doing; we might in fact be zombies. Yet our self-awareness implies this is not the case, allowing for consciousness in machines.

Taking the dualist case, however, leads to the requirement of a dichotomy in the universe between the physical body and the ethereal mind, implying the need for a ‘ghost in the shell’ if a machine is to be conscious. The phrase was coined (as ‘the ghost in the machine’) by Gilbert Ryle in his rebuttal of Descartes’ mind-body dualism. If such a ghost exists, it would be able to resolve many questions in the AI debate, but so far no tangible evidence has been found. Furthermore, a dualist view of the world is riddled with problems, chiefly the problem of building a link between the physical and mental worlds.

Overall, building an intelligent Turing machine is certainly possible, and notably less complex than building a conscious Turing machine. A monist would have to maintain that building a conscious machine is not only possible, but has also already been done in the form of a person; from here, an artificial brain should be possible. The dualists have a much wider range of responses, but the majority of evidence (including almost every point made here) points towards the requirement of some kind of Ghost in the Shell in order for a machine to become conscious.