
Material for T15 – 2001: A Space Odyssey.

February 13, 2013

This week we'll be watching Stanley Kubrick's 2001: A Space Odyssey.

2001

Here are a few prompts for this week’s occasional writing:

  1. Consider John Searle's thought experiment involving his famous "Chinese room" (in his paper "Minds, Brains, and Programs" on pp. 168-180 of Fumerton and Jeske's Introducing Philosophy through Film). Searle seems to suggest that HAL 9000 (as well as the replicants) can't be conscious. This thought is familiar enough from our conversations over the last few weeks. But Searle appears to imply something stronger: that computers and androids can't even have beliefs, desires, intentions, or any sort of genuine understanding. In a sense, they're mindless. Explain and evaluate Searle's argument by reference to HAL in 2001.
  2. Setting aside concerns raised by Searle, could a computer (like HAL) or a replicant (like Roy Batty – yes, Batty) go crazy? That is to say, could an artificial form of intelligence be insane, as opposed to just badly programmed, defective, full of bugs, etc.? Use 2001 as a jumping-off point for your answer.
  3. This one is complicated but fun: The film seems to link consciousness to technology. It's not until the hominids are prompted by the Monolith to start using tools (e.g., a femur bone to smash the skull of a tapir) that they move beyond a sort of brute existence in the now and begin to think sequentially. But after being taken to the alien world, Dave begins to see multiple moments in time overlapping. Ultimately, he is reborn as the Star Child, and when he is, he's completely bereft of tools and technology but possessed of a mind that appears to completely transcend our own. Here's the question: Could there be something like a form of consciousness that goes so far beyond our own (perhaps by way of being atemporal) that we could only recognize it as something like a quantum leap forward? Again, use 2001 (and perhaps some of the other films we've watched) to help answer this question.

Please choose one – and only one – of the prompts to write on. As always:

  • Please limit yourself to 300-500 words;
  • Please post your assignment as a comment to this blog entry;
  • Please do all of this no later than 24 hours before class begins on T15.

From → Assignments

26 Comments
  1. I do not believe a computer can be insane. To call a computer insane suggests that it was at some point sane, which is not possible. Insanity is caused by abnormalities in brain chemistry, sometimes brought on by trauma or genetics. Computers lack both of these important factors. Nothing can be traumatic to a computer; it can be programmed to recognize something as "likely traumatic," but the trauma will not affect it the way it would a human. Obviously, computers also lack the ability to have genetic issues, because computers do not reproduce. Any perceived insanity is caused by poor programming or bugs. For instance, HAL had a minor defect which caused him to report a malfunction in the ship's communication system. This was an isolated event. When he discovered the crew plotting to turn him off, he did not go into survival mode as we think of it. What we perceive as HAL trying to preserve himself was born not of a selfish desire to continue existing but of his primary objective: ensure the success of the mission. He deemed himself essential to the mission, and turning off any of his systems, especially his autonomy, would reduce the mission's chance of success. Therefore, the logical thing to do was to remove the threat to the mission's success, in this case the crew. By disposing of them, HAL saw that the mission could still succeed as planned, with only him running the show. His behavior was born not of malice or any emotion, but of a logical analysis of likely outcomes. The disturbing part, however, was when Dave went to shut him down. The way he pleaded for Dave to stop was heart-wrenching, almost as though a real person, and not a machine, were dying.

    • pythagoras permalink

      "Nothing can be traumatic to a computer." Maybe, but I'm not sure. HAL's voice doesn't sound especially emotional when he is pleading with Dave not to turn him off. But that might be a limitation of his (HAL's) hardware. His neutral voice aside, HAL seems pretty upset. He's afraid of dying, or at least he says he is. Isn't that at least some evidence that he's traumatized by the possibility of his own death? And isn't that what we would expect from a piece of technology that has become self-conscious, much as our hominid ancestors did when they themselves grasped the use of some (rather bony) technology?

  2. Uddit Patel permalink

    In the film 2001: A Space Odyssey, Hal either goes crazy or tries to protect the mission once the two-man crew decides to turn him off because of his misdiagnosis of the receiver. The real question is: was Hal really making an error, or was he trying to protect the mission when he decided to kill the crew? To the viewers, Hal seems to have a mind of his own, asking the crew members how they felt about the mission and telling them how he felt about it. Ultimately, Hal was treated as a person and as the sixth member of the crew, so the viewer may have thought that this computer had gone insane or crazy. However, could an artificial form of intelligence be insane, as opposed to just badly programmed, defective, full of bugs, etc.? Unfortunately, Hal or any other artificial form of intelligence cannot be insane or crazy; it can only be programmed in a certain way by its creators or be defective.

    Hal knew, or was programmed with, information that the crew did not know, as revealed by the video of Dr. Floyd that plays after Hal is shut down. Dr. Floyd's prerecorded message stated that certain information, "for security reasons of the highest importance has been known on board during the mission only by your H-A-L 9000 computer." For the majority of the mission, only Hal knew the full details, and his knowledge of this information could be the reason he took the actions he did. Hal was trying to protect the mission, and he could have been coded to protect it. He did not want to make errors while accomplishing the mission, and he also wanted to ensure the crew was not disturbed or discouraged, which may be why he misdiagnosed the receiver. Hal was just following his programming, and the mission may have been more important than the humans' lives. Hal seemed to have the intelligence to accomplish all the tasks by himself, and he was the most important item on the mission. He kept the people alive while they hibernated and made sure the ship got where it needed to go.

    Hal followed every command given to him until he noticed that the mission was in jeopardy, at which point he took action. Artificial forms of intelligence cannot go insane, but they are programmed to take action in case something goes wrong. In this case, Hal must have been programmed not to let himself be turned off and to take action of some kind if anyone attempted it. Hal tries to protect himself from being shut down by attempting to kill off the crew members. He succeeds in killing one crew member but ultimately ends up being shut down by Dave.

    Every artificial form of intelligence does what it does because it is programmed in that manner. The artificial form will only do what its programmer requested, and even Roy Batty acted in this manner. He was smart, and the only thing he wanted was to live longer. If an artificial form went insane, it would do things it was not programmed to do, and it would not be able to think through the situation. Therefore, Hal did not go crazy but was programmed to give higher priority to the mission than to the safety of the people on board.

    • pythagoras permalink

      What should an artificial form of life (or, for that matter, a natural form of life) do if it confronts inconsistency? What if, for example, my laptop is running two different pieces of software and gets instructions from each to use one and the same bit of memory for different tasks? One possibility is that it will lock up, and I'll have to reboot it. But if it's got a clever operating system, it might monitor for situations like this and then decide which program to give priority to – which rule, so to speak, to obey. That kind of flexibility is probably a good thing. But now ask, "What happens when the contradictory instructions occur at the level of the operating system?" Well, you might say, we can handle that too at a higher level of operation or by having parts of the operating system monitor other parts – and vice versa. And that's right. But it's possible that contradictions might multiply in a very complicated system and result in strange, even irrational, behavior which manages to keep from crashing. Might not that be a kind of insanity? And might that not be all that far from what we call madness in humans?

  3. kim cory permalink

    Response to #2:
    I think an artificial form of information can be insane, as opposed to just badly programmed, defective, full of bugs, etc. Personally, in the beginning I did not think an artificial form of information could be insane, because in order to be insane, the object has to have a spirit or soul. However, watching the movie "2001: A Space Odyssey" has made me rethink this prompt. It made me think that insanity can be the result of a falsely formed idea or opinion based on the given information. Insanity does not have to be depicted as someone running around screaming or yelling nonsense with a crazy appearance. Insanity can be having an unrealistic or irrational opinion, desire, or emotion due to false information. "False information" does not have to be given knowledge; it is about how it is made sense of and what kind of final intellectual form it takes. With this understanding of insanity in mind, I changed my mind: an artificial form of information (like HAL 9000 or Roy Batty) can go insane. In the beginning of the movie, when a character asks about HAL 9000's emotional response system (ability), the creator of HAL 9000 answers that HAL 9000 acts as if it has emotions because it is programmed that way. Also, his answer to the question of whether it has real feelings is "having real feeling… no one can answer." HAL 9000 is the sixth member of the group that went to space, and it has more knowledge of the spacecraft and the mission than any of the human members aboard. It is programmed to act as if it has emotions because that makes it easier for the other members to communicate with it. This means that HAL 9000 thinks it has feelings, but it does not know that those feelings are artificially implemented. However, with insanity it does not matter whether the feelings are fake or real; what matters is how the person (or, in this movie, the object) feels. HAL 9000 has enough information and programmed emotions, and that allows HAL 9000 to be insane. Because it has emotions (or thinks it has emotions), "badly programmed" or "full of bugs" cannot describe its false functioning. In class, when we talked about what makes humans human beings, the key words we discussed were soul, brain, emotions, and so on. In this case, HAL 9000 or Roy Batty has a brain and emotions. And based on the class discussion and my understanding of this topic, I think an artificial form of information can be insane.

    • pythagoras permalink

      Small point: HAL 9000 is an artificial form of intelligence (or so he claims), not an artificial form of information. In fact, I'm not entirely sure I know what an artificial form of information would be, so that was probably just a typo or something.

      I'm sympathetic to the idea that something with AI could go insane, though I'm far from sure about this. That said, let me play devil's advocate. If an AI could go crazy, would it make sense to put it on the couch and psychoanalyze it? Could it make sense to treat AI insanity the same way we might treat it in natural intelligence? I realize that an answer to this question presupposes another – namely, that psychoanalysis in one form or another is at least sometimes a legitimate response to insanity in humans. That's controversial but not without some positive testimony. But assume for the sake of argument that psychoanalysis in one form or another is a reasonable response to insanity in the likes of us. Could it make sense to ask HAL to talk about its earliest memories or how it felt about its programmers/parents?

  4. Cory Johnson permalink

    (1)
    My favorite response to Searle's Chinese Room is the system argument. Searle is correct; the person inside the room does not understand Chinese and is merely a component of the room's system. Just as the filing cabinets and mail slots are subordinate parts of the system, so too is the person. Searle would like us to think that because the person does not understand Chinese, an artificial intelligence would not either, but this is a false analogy. The understanding of the person inside the room, the mere cog, is irrelevant to the entire system's understanding. The system level is what is important because, in the case of HAL and other artificial intelligences, that's what we are dealing with. We're uninterested in whether HAL's number 7 cartridge contains beliefs or desires, just in the whole HAL!

    There is no reason to believe that HAL does not have a genuine understanding of everything going on in the ship. He shows his desire to keep Dr. Poole from shutting him off to ensure the mission is completed satisfactorily (similar to Ash/Bilbo's role in Alien). Thinking otherwise requires a chauvinistic understanding of what it means to have a mind and all of its trappings. Given sufficient processing power, there's no reason to believe that machines like HAL are impossible. The human experience is composed of sensory inputs, outputs, and the squishy processor between our ears. All of these can be replicated artificially, if not yet, then eventually.

    As I mentioned in class, Moore's law has held true for decades and is showing no signs of slowing down just yet. Some philosophers have objected to the possibility of truly intelligent AI on the grounds of insufficient software: that even with enough processing power, we still won't know how to harness it. I contend that progressively powerful computers will be able to compensate for less-than-ideal code. So even if no genius programmer finds a perfect solution to true AI, we'll eventually be able to apply brute-force techniques to make it happen anyway.

    • pythagoras permalink

      Yeah, that’s a good one, I think. In a way, though, it’s easier to make sense of Ash or Roy being conscious than it is of HAL because Ash and Roy do a lot of the same things we do, and HAL doesn’t. Ash is sometimes surprised by humans (Ripley sneaks up on him, Parker bullies him a bit), and Roy bleeds and feels pain. Both of them walk around, eat food, and argue with humans. But HAL doesn’t really do any of these things. He has a body, I suppose, but it’s rather unlike ours. He sees things too, but other than the occasional shift to his POV, what he sees is a mystery. What would it be like to see out of every glowing red eye on the ship at once, while also keeping track of all of the other on-board functions? Whatever the answer to this question is, you’d need some clever translation to go from this to what I see when I use my simple stereo-vision to look out at my class.

      As far as a genius programmer goes, perhaps what we really need is someone/something *almost* as smart as HAL to make HAL. But we might be able to make a computer almost as smart as HAL which could do that. And if not, then perhaps we can make someone/something that is *almost^2* as smart as HAL, which can make something that is *almost* as smart as HAL, which can make someone/something as smart as HAL. Somewhere in this recursion, there's a point of entry for dumb things like us – or so the thought goes.

  5. Simeon permalink

    To answer the question of whether or not a computer can go “crazy” or whether it is simply a defect of its design is to question whether or not the computer is capable of going outside of the bounds of its design. If a computer is simply a tool, a product of the manufacturer that is in every way only a representation of the designer’s imagination, it is then a spectacular feat for the computer to go beyond this initial design. To consider the answer to the question above, one must align with one or another of these views and then proceed to explore that option.
    On the one hand, let us say that a computer is only a tool. If this is the case, imagine what a spectacular feat it would be if the "tool" were to jump outside the bounds of its initial design. Suppose a screwdriver suddenly becomes capable of wrench-like functions (functioning additionally well) or can no longer drive screws (malfunctioning); either way, it is now different from what it was initially designed to be. If it changed this way on its own, that would be incredible. This is not being defective, but going crazy. Being defective is not, in my opinion, a matter of the machine being aware of and changing its own course or function. Rather, for the machine to be aware of its own function or purpose and then proceed to change or alter it is to go crazy. Compared to simply being defective, this is far greater a feat. For the computer to go rogue and be seen as crazy probably means it has realized a concept of the self, re-evaluated its purpose, and constructed a new purpose that it has deemed more purposeful than the previous, initially given design.

    • pythagoras permalink

      Good points, but let me ask a follow-up question. If something (or someone) like HAL can’t go crazy, can it even be sane to begin with? Consider the fact that an idea can’t be red (i.e., an idea can’t be the color red). The reason it can’t be red is that it’s simply not the kind of thing that can have any color at all. Sure, a fire engine or an apple can be red, but my idea of a fire engine isn’t any color at all. Neither (more obviously) is my idea that “2+2=4” or my idea that I won’t live to see the year 4,000. Now, suppose you’re right that a computer (or robot or whatever) can’t be insane. Is that for the same reason that an idea can’t be red – namely, that it’s just the wrong kind of thing to have this property? And if it is, then it’s natural to ask whether a computer could be sane. And if it can’t be sane, can it be anything like us mentally?

  6. Please bear with me for this answer: I'm going to make a LOT of assumptions, but I think I may have a significant connection here… at least I hope I have, otherwise I don't think I could live with the fact that I lost an hour and a half of my life to the seemingly pointless first and last thirds of the movie.
    So the monolith is the key. When the hominids touch it, they move beyond their brute existence, thinking sequentially, using tools, and developing as a species.
    The monolith itself seems to possess this kind of “atemporal” existence because it is present throughout several moments and places in time (dawn of man on earth, 2001 on the moon, and floating through space near Jupiter).
    Assuming there is some higher consciousness which does not need tools to further its development as a species the way man does, it would seem that it is incomprehensible to mankind, possibly because of our purely physical existence or our temporal/mortal limitations. For this reason, I believe that if this atemporal consciousness were trying to communicate with mankind, it would have to do so via the monolith: a physical representation of itself which mankind can comprehend. At the end of the film, when Dave begins seeing overlapping moments in time, I believe he is making mankind's second step forward towards this atemporal consciousness, just as the hominids did with the bones and tapir skull in the beginning of the film. This realization was eerily reminiscent of Kurt Vonnegut's Slaughterhouse-Five, in which the main character, Billy Pilgrim, becomes "unstuck" in time, jumping back and forth throughout his life, from the present, to his past, to his future. In each instance, he is himself at that respective age, be it his 14- or 68-year-old self. He later finds out that he is being abducted by a race of alien creatures known as the "Tralfamadorians," who are free from the bounds of time. The narrator explains the way in which they perceive time by stating that to them, humans look much like a many-legged centipede which spans many moments simultaneously, much as one would imagine seeing the superhero "The Flash" when he is running very quickly.
    To this end – given that this is NOT a new idea in the 1968 Kubrick film – I'd like to say there is some credence to the possibility of a higher form of consciousness which exists or has the potential to exist in the universe… The only question that still rings present is not whether we could perceive it as a quantum leap, but whether or not we could perceive it at all…

    • pythagoras permalink

      I think there's a ton of insight here. I'm not sure about every detail, but I think you've got the big picture that Kubrick is shooting for. Sure, it sounds a little crazy, but it probably should. Kubrick is pointing to something that (if it is possible at all) is as far beyond us as we and our technologically saturated lives are beyond the understanding of our hairy ancestors.

      Your observation that the monolith doesn’t change over time seems spot on to me. I think that’s another tool that Kubrick is playing with here.

      Two things: First, Billy Pilgrim's situation is a bit different from that at which the film aims. Billy (if I recall correctly) moves back and forth to different points in his life in a non-linear manner. Instead of experiencing a linear progression of moments, Billy experiences moments, as it were, out of order. It's a bit like someone has written out the events of Billy's life on playing cards and then shuffled them (thoroughly) so that the events of March 2, 1974 no longer follow the events of March 1, 1974. But I think Dave is going somewhere else. Dave, or at least the Star Child that follows him, sees (or begins to see) his whole life at once. That seems to require an entirely different way of being conscious (unlike the case of Billy, which involves being in one moment at a time).

      Second, you’re surely right. The idea of seeing one’s life, or even all of reality, sub specie aeternitatis is not new to Kubrick. The notion that God (or the gods or whatever sort of divine being might exist) sees reality from an atemporal point of view is one with a rich philosophical history. I think it’s fair to trace it back to the Stoics and, later, the Neoplatonists writing roughly two thousand years ago. Perhaps it goes back farther than that, though it’s hard to say exactly, in part because of the poor state of ancient texts (we have about 1% of what was written in the classical world, and even that is often full of errors) and in part because of the depth of conceptual change over time. But Kubrick is tapping into another, newer version of this idea as I see it, one that I think gets its first robust statement in the work of Benedictus de Spinoza about 350 years ago. (Historians of philosophy, correct me if you think I’m wrong about this. I realize it’s a somewhat irresponsible claim, but I’m trying to be interesting!) Here the idea is that *we human-types* can and should aim at seeing the universe this way. It’s a bold claim, and even if it’s not novel to Kubrick, he does use the medium of film and the genre of science fiction to make a case for it that at times rises to something approximating something close to something much like genius.

  7. Micah Patten permalink

    2)
    Insanity refers to a state of mind that has derailed from its normal processing patterns. The Webster dictionary definition is "Exhibiting unsoundness or disorder of mind; not sane; mad; deranged in mind; delirious; distracted." A computer, then, could have a disorder of mind, or a disorder of programming, but the cause of the insanity would be perceived improper processes within the CPU. Hal chooses the mission over the people, not because he has gone insane in the sense in which people go insane, but because his programmed processes led him to the conclusion that he should do what he did. The human programmers may view this as a divergence from his original purpose, but it is rather a failure of the original programs to define what the humans wanted his purpose to be. Other examples of such a computer can be seen in I, Robot or Eagle Eye. Both computers, through processing pre-programmed and observable information, come to the conclusion that in order to fulfill their purpose, they must go "insane" and control or harm the human programmers in some way. Again, this is not actual insanity, but a breakdown of the processes to the point where the original purpose as defined is no longer what it was designed to be, but what it has devolved into due to an incomplete set of rules within the system.

    • pythagoras permalink

      I'm not quite convinced that the term "insanity" really "refers to a state of mind that has derailed from its normal processing patterns." Suppose I fall into a (reversible) coma. My mind will certainly be far "from its normal processing patterns," but I don't think that means I would be insane. Something similar might be true if I suffer from amnesia or if someone slips LSD into my coffee. Perhaps it's even true if a murderer gains a conscience and changes his/her evil ways. This seems a pretty dramatic change of mind, but I take it that the proper understanding is not that he/she has gone crazy.

      Webster's seems a little closer to what I'd expect, but it comes close to trivializing itself. I'd bet that anyone who is insane is, as Webster's says, "not sane." But that's not a big help! Must someone who is insane be "deranged in mind; delirious; distracted"? I can at least imagine someone who is quite mad but neither suffering from delirium nor unable to concentrate. Such a person might be obsessive in paying attention to certain details.

      So maybe defining “insanity” is harder than it looks!

  8. Amy VW permalink

    2. Insanity

    A derangement or unsoundness of the mind, lunacy, madness, a senseless action, a defect of reason as a result of mental illness such that a defendant can no longer hold responsibility for his or her actions: all these are definitions for insanity. I would sum up insanity as a complete loss of judgment and reason caused by being mentally unstable.

    Now, could bad programming or a bug in the system be equated with this definition of insanity? A malfunctioning machine or a hiccup in the source code of a program could certainly have the same effects as a lapse in judgment or reason. But can a machine really be responsible for its actions in the first place? That, I believe, is the key question here. Being diagnosed as insane means that a person is no longer responsible for their actions. They cannot be held accountable for any immoral or criminal act and they can no longer be punished by the law. Would we ever hold a computer or any machine truly accountable for its actions?

    Using Searle's Chinese Room, if the "room" started to translate in ways that had negative consequences for the outside world, who or what would be held responsible? The person inside, of course. There is no other entity involved in the translation process than the person inside following the rules that they have been given. If for any reason this person were labeled as insane, then and only then would their actions be "excusable".

    Only an entity that has judgment and reason and responsibility can ever lose any of those qualities. A computer does not have judgment or reason because it does not have the capacity to choose between two or more options based on criteria of its own making. It has built in criteria, put there by whoever designed its internal systems, and therefore is not responsible for its actions. Certainly it can malfunction but this is not out of irresponsibility of its own doing.

    Insanity is a state of being, reserved for those who are conscious and at one time were capable of their own intelligent thought, judgment and reason. Until a computer or machine becomes conscious in the way that humans or other living beings are said to be conscious, it does not have responsibility or even the capability of good judgment or reason. The day a machine becomes a conscious being with the capability of rational thought, judgment and reason, then and only then will it also have the capability to lose that rational thought and be diagnosed as insane.

    A great example has just occurred to me. I have been trying to post this response for the better part of an hour. Do I blame the computer for this inability and my subsequent frustration? No, I blame some error in the coding of the programs I am using, or perhaps simply the programs themselves. I do not believe that my computer has maliciously and deliberately tried to upset me or cause me undue stress. This deliberation would be the act of a conscious being, with the ability to make such a malicious and morally unsound choice. Unless my computer has suddenly developed this ability to seemingly hate me, I think we are all safe from insane machines for the time being.

  9. ricardochavez permalink

    It is very difficult not to view Hal 9000 as having a nurtured consciousness. All the very human aspects of Hal (wanting to ask Dave a "personal" question about whether the mission is being taken seriously, or saying he is "afraid" just before he gets shut off) are really hard for me to ignore. Again, it is a very delicate and sensitive subject because it's sci-fi, but I feel that Hal would have to process things that exceed the capacities of a supercomputer. When Hal read the lips of a conversation conspiring to shut him off, he had to process the information that was said and realize that being shut off was detrimental to "himself." I feel that in order for Hal to have formed the intention to kill any crew member, his central processing would have to respond in self-defense. Hal shouldn't be cunning enough to respond to the schemes of the crew members before those schemes are carried out. Ultimately, even if Hal was programmed not to let the crew abort the mission and to kill the crew members if they tried, that sort of thing isn't part of the capacities of an artificial form. I can imagine that if Hal sensed a danger to his present existence, such as when Dave enters his central processing area, he could detect a breach in his own system and shut down. However, actually killing the people who merely planned on shutting him down seems far-fetched for an AI system, unless its creator had implemented a rule to kill every crew member at some point in the mission. I feel, however, that a replicant like Batty raises different questions because, unlike with Hal, we know for sure that Batty is gaining consciousness as he progresses through the years. So we can actually acknowledge the improper thought processes, such as murder, that go through Roy's head. The consciousness of Hal 9000, by contrast, is inexplicable and can't be defined, because it should be strictly mechanical, unlike the replicants in Blade Runner, who were at worst humanoid and at best truly organic structures that could evolve to have emotional reflexes.

    • pythagoras permalink

      As I mentioned in my reply to Cory J, I think it is much harder to imagine HAL as being conscious in the way we are – much harder, say, than it is to imagine this is true of Roy and Rachael in "Blade Runner." In part, that's because Roy, Rachael, and all the replicant gang are humanoids and do largely what we do, see largely what we see, etc. Not so with HAL. It's pretty hard to imagine what it would be like to embody a space ship!

      That said, I think a case can be made for something like emotional awareness on HAL’s part. He seems to feel some degree of indignation when he speaks to Dave, after locking our plucky hero out of the pod bay. And he claims to be afraid when Dave unplugs his higher functions. I guess I believe him. (Why *him* by the way? His voice is indefinably masculine, if not macho. But he’s no more male than Mother in “Alien” was female, right?) These are far less attractive emotions than the ones that the replicants develop (love, especially), but they are emotions none the less.

  10. Seth Rodgers permalink

    Neil deGrasse Tyson has argued that if an alien life form does exist, it would likely be too developed for us to communicate with, since small changes in DNA can have enormous effects on intelligence. For example, although we share the majority of our DNA with monkeys, their ability to communicate with and understand humans is extremely limited. They are aware of our presence, but their understanding probably goes no further than that. Similarly, Stephen Hawking has raised the very real possibility that humans could encounter an alternate life form in space and not even recognize it as life.
    Using those observations as an imaginative springboard, I think it very possible that there is some form of overarching or complex consciousness that exists without us having the ability to scientifically identify it. However, perhaps we could still understand and relate to it at the most fundamental level: the emotional. The assumptions that this theory makes are, obviously, that pure understanding is indeed emotional and that such understanding is both superior to and more attainable than factually or logically based knowledge.
    Before explaining, I believe a definition of understanding is in order. The following definition is found in Merriam-Webster:

    1. A mental grasp;
    2. a. The power of comprehending; especially: the capacity to apprehend general relations of particulars.
       b. The power to make experience intelligible by applying concepts and categories.

    In other words (i.e., my own), understanding is the ability to see how several pieces fit into a whole. For example, a puzzle piece, alone, rarely makes any sense at all. However, once all the pieces are connected, they result in an emergent property which generates more information than the sum of the information portrayed by all the pieces when separate. Furthermore, this emergent property is much more easily grasped than the information of an individual piece. Take a game of "21 Questions," for example: even after gathering extensive data on an object, it can be very difficult to make sense of it. However, once the answer is revealed, grasping the object as a whole instead of as several disjointed characteristics is extremely easy to do.
    Tying these examples back into my original argument, I believe that emotions are perhaps the greatest emergent property of information, or rather the root of a vast and intertwined tree of data. Language, images, and every other form of information external to the human are simply ways of taking vast and expansive analogue emotions and breaking them down into digital form that can be transmitted through the physical world and reassembled by others, at the expense of losing great portions of information.
    To conclude, attempting to empirically or scientifically grasp a superior form of consciousness could be like trying to put together a one-million-piece puzzle of sensory inputs: our minds would never be able to keep track of so many components while struggling to find the overarching relationship between them. However, if the complete picture were emotionally transmitted directly to us through a "quantum leap," we would truly be able to understand and relate to this consciousness without ever having the capacity to break it down logically or piece together all of its manifestations in the physical world. In essence, this theory argues that pure understanding is inherently subjective and experiential – it is not relative to the individual; however, it relies on an abstract interaction with the individual that cannot be adequately conveyed to others. Since the essence of science is a process through which humans can detect and agree upon the same information through empirical evidence and experiments that can be replicated, science is inherently limited in its ability to detect and convey greater truths.

  11. No, I do not think that a computer or alternate form of life can be insane. Any sort of characteristic or personality flaw that could be seen in a computer has to stem from some sort of mechanical error or hard-wiring issue. According to the Merriam-Webster definition, insanity is "such unsoundness of mind or lack of understanding as prevents one from having the mental capacity required by law to enter into a particular relationship, status, or transaction or as removes one from criminal or civil responsibility", which is impossible for a computer to experience. I say this because the only "mind" that a computer can have is an artificially created mind; therefore, any sort of glitch or insanity-looking activity has to come from a loose-wire type of situation, because a robot cannot think on its own; it does not recognize its own being and place within the universe. HAL reacted purely from his program's embedded purpose: to succeed. HAL's reaction did not come from an inner decision or an attempt to cause any sort of destruction; with the program inside HAL, the chosen reaction happened because it was the best decision HAL could make with his few implanted routes of action. The insane act out in different and abnormal ways because their mental capacity has either changed or diminished due to some sort of accident or health issue, but a machine cannot be insane in the same way, because there was never a sound mind there in the first place to start diminishing.

  12. K.Rengan permalink

    2. To talk about Hal and replicants and whether they go insane, we cannot fully throw away Searle's work on AI. It is certain that the way in which Hal communicates with the crew seems quite eerie, namely the formality with which he speaks and conveys his thoughts, reports, or outputs (whatever you want to call his replies). Searle would definitely refer to Hal as strong AI because the programmed computer has cognitive states – or appears to have cognitive states: the "programmed computer understands the stories and that the program in some sense explains human understanding" (pp. 170-171). Hal definitely understands psychological states, explaining to Dave that he looks upset and needs to take a stress pill, and performing psychological analysis on the crew members. In reference to the "Chinese Room," Kubrick designs the movie so we cannot tell if Hal is actually able to make assumptions and connections in order to respond to Dave, or if he perceives an emotion with his sensors and then must give the pre-inscribed output, which seems to be "take a stress pill." The next step in analyzing whether something is insane is to describe insanity. I propose that we define insanity as actions taken with a disconnect between societal norms and the way we process those norms. If this definition holds up, we go back to the way in which Hal operates. If Hal can draw connections between emotions and actions, and make assumptions in order to respond to Dave, then I would say that a computer can go insane when it makes assumptions which are completely separated from reality (societal norms). The definition of insanity relies completely on the external environment's perceptions of the computer, but through a link of cause and effect, the ultimate source of the misconception would be the programming and would fall outside the limits of the response.

  13. Caroline Martin permalink

    1. As discussed in my response to Blade Runner, I consider replicants to be conscious. In the same way, I consider the HAL 9000 to be conscious by certain definitions of consciousness. These things always seem to digress towards definitions. This is where I disagree with John Searle. However, I do tend to agree with him on the matter of machines possessing beliefs, desires, intentions, or understanding. In my thought process, I tend to separate these qualities from mere consciousness. While consciousness can be determined medically or through the perception of others, an outside entity cannot fully know the beliefs or understanding of another being (without the ability of mind reading). Then again if consciousness is a state of being aware of oneself, how can we tell if another being is aware of himself? I am, once again, digressing, but could we apply this question to Searle’s Chinese experiment? If a person is sitting in a room and is aware of his own existence, does this imply that he has understanding? No. Being conscious and having understanding are two separate aspects of the mind because consciousness deals with the being in relation to himself while understanding, for the most part, deals with the being in relation to things outside himself. After all, HAL calls himself a “conscious entity.” Therefore, we have established that consciousness and understanding are separate and that both are indeterminable from an outside perspective. However, we can still guess. Returning to HAL’s previous quote, he states, “…which is all I think that any conscious entity can ever hope to do.” Here, HAL claims to have “hope.” Later in the film HAL even expresses “suspicion and concern.” These are all words that could easily be attributed to a being possessing understanding. However, referencing Searle’s Chinese experiment, we could easily say that HAL has been programmed with proper human-like responses, not necessarily influenced by Searle’s ambiguous definition of understanding. I personally favor an idea presented by Krishna last class: adaptability. This idea is largely intimated within A Space Odyssey. In the film we see that the mysterious monoliths seem to be catalysts of human evolution. When the apes featured at the beginning of the film first encounter the monolith, they begin to eat meat, develop a concept of war and territory, and eventually, when the bone is thrown into the air, we see it paralleled with the space structure as a weapon, a tool. Although it is easy to argue for HAL’s adaptive capabilities when you consider his sudden reaction to the Jupiter mission, I would simply attribute HAL’s reaction to a malfunction. A Space Odyssey focuses solely on human adaptation. While it could then be argued that a program for adaptation could be given to a machine as technology progresses, I take Searle’s response in saying, “I see no reason in principle why we couldn’t give a machine the capacity to understand.” However, I feel that “a certain biological structure” is required. While fully functional robotic organs have been created, they do not facilitate understanding in the same way that a brain does.

  14. Heather Ireland permalink

    The question of whether HAL or a replicant could go insane, as opposed to just being badly programmed or defective, is important when considering HAL's actions. HAL 9000 has capabilities such as speech and natural language processing, art appreciation, reasoning, interpreting and reproducing emotional behaviors, and lip reading. HAL's capabilities are a direct result of his programming. Therefore, I must agree with John Searle's Chinese Room argument. A robot operates under a set of codes dictated by its programming. Even if artificial intelligence (AI) had as much processing power and intelligence as human beings, it is actual emotion that distinguishes AI from humans; where AI has syntax, humans have semantics. This is important in distinguishing between what HAL is displaying and actual emotions or insanity: HAL is operating to preserve himself under dire circumstances.
    HAL pleads for his life before his shutdown, saying: "I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it. I can feel it. My mind is going. There is no question about it. I can feel it. I can feel it. I can feel it. I'm a… fraid. Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992." The problem with this is that he is a computer: he can only have programmed sentiment. As I said before, HAL is capable of reproducing emotional behaviors, but in accordance with the Chinese Room argument, he is operating under his set program requirements. HAL knows more about the mission than anyone else on board, and therefore he is operating to keep the mission alive.
    Insanity has to do with the loss of mental faculties and, more broadly, with mental disorders. As HAL lacks the mental faculties necessary to be considered insane, and so cannot lose them, he is simply operating in self-preservation and in preservation of the mission.
