
Occasional Writing for T19 – Terminator 2: Judgment Day

March 2, 2013


There are lots of good topics for this film. Rather than try to pick one, I’ll let you sort through these and pick whatever you want to write about.

  1. According to the film, Skynet begins WWIII on August 29, 1997. But why call this judgment day, rather than something else (e.g., disaster day or holocaust day)? In trying to answer this question, note that the American philosopher Ralph Waldo Emerson wrote (in one of his journals), “No man has learned anything until he has learned that every day is judgment day.” Must the film disagree with Emerson on this point?
  2. The T-800 (a.k.a. T-101) begins the film with “mission parameters.” But Sarah, at least, thinks that eventually he comes to know “the value of human life.” Assuming that Sarah is right, has the T-800 transcended his original mission parameters in doing so? How could this be possible? And if he has not transcended his parameters, what has he really learned?
  3. Compare Sarah from the two terminator films with Ripley, as we saw her in the Alien franchise (the first film, which we watched together, and the others too if you’re familiar with them). Both characters become de-feminized in the process of becoming action heroes. Ripley seems almost asexual, while Sarah, at least in T2, seems to have “issues” treating her son as anything other than a product. What’s going on here?
  4. Must a super-intelligence (like Skynet) be hostile to human life? Are men and women like Miles working (inadvertently!) toward what can only be our extinction? Or could Kurzweil’s singularity save us?

Please choose one – and only one – of the prompts to write on. As always:

  • Please limit yourself to 300-500 words;
  • Please post your assignment as a comment to this blog entry;
  • Please do all of this no later than 24 hours before class begins on T19.


35 Comments
  1. Micah Patten permalink

    The T-800 has the ability to learn as it goes, as he tells John when asked whether he can change or is just going to be socially awkward forever. John attempts to teach him simple things, like saying “Hasta la vista, baby” or “no problemo” instead of “affirmative”; however, he seems to stumble upon something else entirely: he teaches the T-800 what it means to be human. The final scene is proof of this, as the terminator decides to tell Sarah to terminate him for the sake of humanity. This seems to be an act of selfless sacrifice, which appears to be possible only for humans, and only a limited number of humans, for that matter. He was told to protect John Connor, but he learns something above and beyond his programming: the value of human life. The only adequate source for this seems to be Sarah and John themselves. It seems unlikely that he gathered such a revelation from observing the weak, feeble humans that he fought. In fact, there is very little reasoning behind this revelation for the terminator. It seems to depend entirely on John’s demands and cries that the terminator not kill anyone because “you just can’t kill people, don’t you understand.” The terminator still has a primary function of protecting John, so he does a lot of busting kneecaps and blowing up cars, but he doesn’t kill anyone. This is the first step toward the anomaly of valuing human life, but there is also a second step: his relationship with John. The terminator observes the tears that humans cry and asks what is wrong with their eyes. It seems to be a difficult thing for him to understand, but over time, he seems to finally get it. The terminator does not fully experience human emotions, but he understands them to be meaningful based on the actions of the humans he is working with. As far as mission parameters go, he clearly has exceeded them, but almost accidentally. He must protect John, but he also must do as John says. The extension that must also exist is the ability to understand what John ultimately wants and to go to all means to achieve that end. In the ultimate case, that means not only protecting John’s life but destroying himself for the sake of humanity and the billions who would die. This is John’s ultimate intention and desire, above even his pleas that the terminator not kill himself. The terminator somehow was able to override his programming to obey direct orders and instead complete tasks that reflect the heart of what John desired. Perhaps this was an ability achieved by the near-death experience and reboot that occurred only moments before.

    • pythagoras permalink

      Right, the fact that the T-800 manages to ignore (or at least disobey) what John says by the end of the film cuts both ways. It shows that the T-800 is able to appreciate certain kinds of value (e.g., the value of the lives of humans) and to act in light of that appreciation. But being able to make decisions for itself is precisely what Skynet does when it chooses to try to exterminate human life.

  2. Cory Johnson permalink

    (4) This presupposes that the singularity is a certainty – not a how, but a when. In that case I propose a Dune-style eugenics program so that we can use gray matter for all of our processing needs, including interstellar space flight! Before I dive into an argument assuming this, I would like to extend my Steve Pinker reference streak to two weeks. He says, “Sheer processing power is not a pixie dust that magically solves all your problems.” This is a great response to all of Kurzweil’s Moore’s Law business, which in many ways seems rather ironclad: the growth in computing power can be traced back a hundred years, from the electromechanical era through the vacuum-tube days, and there’s no reason to believe that the theoretical limits of silicon microprocessors will stop us any more than those of the vacuum tube did. Quantum computing will find a way. But Pinker’s quip effectively sidesteps all that: processing power simply isn’t the end-all, be-all. Computers won’t accidentally achieve the singularity like Mike in “The Moon Is a Harsh Mistress”; it will take a herculean effort from the world’s greatest programmers.

    Anyway, will the “Rise of the Machines” kill us all? Or is it our “Salvation”? A singularity-created super-intelligence has, tautologically, grown beyond our grasp and control. We no longer hold the reins, and maybe only the sharpest of spurs could direct its actions. There’s no reason to believe this intelligence would maintain a soft spot for its creators. The plight of humans is completely dependent on what this super-intelligence chooses to do with itself and its environment. Perhaps it’s a philosophically-oriented singularity that is content with moving its CPU to an armchair and pondering its own existence. Or it may have some biological inclinations which could make it a universal scourge, bent on consuming and expanding with any and all available resources.

    It’s reasonable to believe that its goals could be bastardizations of original commands we bestowed on its predecessors. I propose that today’s best candidate for singularity starter is Google’s network of self-driving cars, whose expressed intent is to be more clever than human drivers so we don’t kill ourselves (I, Robot, anyone?). Interestingly, and clearly not coincidentally, Kurzweil is Google’s director of engineering. What would a singularity launched off of Google’s self-driving cars turn into? The more I try to wrap my mind around it, the more I realize this is a fool’s errand. Even if we do provide a starting point for the singularity, there’s no way to predict what it would turn into over thousands or millions of progressive iterations. Will it be the armchair thinker or a consumptive Attila the Hun? I don’t know. And to directly answer the question: no, it’s not necessarily hostile to human life.

    • pythagoras permalink

      Yes, no pixie dust for me, please. Perhaps Pinker is a little unfair to Kurzweil, though. While processing power won’t solve all of the outstanding problems in AI, there are cases where just a little more of one value gives us something totally new. In some circumstances, just one more degree will give us water where we only had ice before, and just one more MPH will allow a craft to become airborne though it had been land-bound prior to this. Maybe we need just a little more processing power in order to get genuinely self-conscious machines.

      “The armchair thinker or a consumptive Attila the Hun”? In a way there’s an even scarier answer. It might be something so completely beyond us that we have no more hope of understanding it than a chimp has of understanding an electrical engineer. We might survive, but only as pets. Yikes!

  3. Uddit Patel permalink

    The T-800 begins with “mission parameters”; however, as time progresses, the terminator comes to know the “value of human life.” The T-800 begins with the stance that “it’s in your nature to destroy yourselves”; however, the terminator’s perspective on humans changes as he spends more time with them. The terminator learns why people cry, that killing is not always the best option, and even the idea of love. He learns that humans actually care for one another and are more than willing to do anything for their families, and that humans’ instinct is not always to kill one another but rather to care for each other. This change in how the terminator perceives humans seems to occur when Sarah goes to an old friend’s house to gather weapons, and when Sarah decides not to kill Miles Dyson but instead to talk things out after wounding him with a single shot. The T-800 is portrayed as more human when it picks up the Gatling gun at Sarah’s friend’s house and smirks at John. This shows he has some sense of likes and dislikes, and that this machine has some feelings.

    The terminator has transcended his original mission parameters by learning more about humans. The T-800’s simple “mission parameters” are to protect John Connor, keep him alive, and follow John’s orders. However, in carrying out these “mission parameters,” the T-800 has questioned why humans do certain things rather than jumping to an assumption, as it did when stating “it’s in your nature to destroy yourselves.” Sarah was initially afraid of having the T-800 with them, but by the end of the film she accepted that the T-800 was a positive influence on John. The T-800 and John grew close, and Sarah finally understood why John was able to get close to machines, and why he was the person to help humans in the future.

    One possibility is that the T-800 transcended his original mission parameters because he was programmed to learn more about humans. One of the T-800’s mission parameters may have been to learn more about the human race while spending time with them. This parameter could have been programmed in by the machines, or by John when he reprogrammed the T-800. If these machines can learn “the value of human life,” as the T-800 did, then humanity may not be doomed to self-destruction. Instead, the machines may be able to work with the humans, and the outlook of the future can be changed. The machines just have to learn that humans really have feelings, and the nuclear war can be avoided. Instead of the machines trying to kill the humans, or the humans using the machines as slaves, both kinds of beings can learn from one another and live their own lives.

    • pythagoras permalink

      That sounds quite plausible. The T-800 has to see humanity from the inside before it can recognize their lives as valuable – and before it can really make choices for itself and, in the process, exceed its mission parameters. That’s an excellent point.

  4. kim cory permalink

    I think Emerson’s saying means that people do not realize how every day defines each one of us through conscious and unconscious realization and learning. Every action and decision we make defines who we are, and we learn from those; some of those actions we recognize, but for others we do not realize what kind of influence they will have, whether in the future or right at the moment. By saying “no man has learned anything until he has learned that every day is judgment day,” he points to the fact that the moment we realize what we have done or how we have behaved until then, we will also realize what we are made of.
    The day WWIII starts is called judgment day because that’s the day human beings realize what they have done. Whether they made their choices knowing the consequences or not, on that day they will learn what they have done. Human beings did not intend to develop Skynet to start a war with the robots, nor to destroy themselves. The choices they made along the way formed judgment day. Humans did not know, because they did not realize; but by the day they do realize, things have already happened, and it is too late for them to learn what led to the current situation.
    I will say the film agrees with Emerson on this point, because Skynet was developed for military use, to benefit people. However, no man realized that the system we developed could be used against us as well. No man realized there could be a last straw that could turn the whole situation upside down in a split second. Therefore, no man had learned that what he had done and the choices he had made were the ones that decided and led to the current situation. Miles is a great example of how this movie agrees with Emerson. Miles did not realize the program he was building would one day be the main tool that destroys the world. Although he had the intelligence, he did not learn anything until he found out about the future from Sarah and John Connor. It seemed as if he had control of his personal life and work life; however, he had not learned anything, because he did not know that the program he was making would destroy him, his family, and, in the end, the whole human race.

    • pythagoras permalink

      You’re right: Miles is a great example of someone for whom judgment day comes, if not like a thief in the night, at least like a more-than-slightly-crazed sniper in the night. He and his family live a life almost completely disconnected from its consequences. He’s suddenly confronted by his own mortality and then by the blood of three billion people. “You’re judging me,” Miles says, “on things I haven’t even done yet. How are we supposed to know?” Sarah doesn’t buy it, but I think Miles has a pretty good point. He wanted to create an AI that would make people safer (“Imagine a jetliner with a pilot that never makes a mistake, never gets tired, never shows up to work with a hangover,” Miles says. “Meet the pilot.”). Maybe Miles didn’t know and couldn’t have known. But we’re not really in that situation.

  5. Heather Ireland permalink

    Prompt 4)
    I do not believe that a super-intelligent system such as Skynet must be hostile to human life. While it CAN be hostile, it does not need to be. I believe the super-intelligence film genre is so focused on the collapse of human civilization because we dwell on the negatives that super-intelligence would bring instead of the positives. For what reason would a super-intelligent system be hostile to human life? Hostility and an overarching goal of superiority and control are very human traits, and a super-intelligent system would not be programmed to have those kinds of human features.
    To assume that super-intelligence can only be extinction is a far reach. People like Miles are working to provide humanity with a better quality of life. And this brings into play one of the presuppositions of Kurzweil’s singularity: that it is a certainty, when it is not. We cannot be certain that this super-intelligence will come into reality, much in the same way that we cannot predict when hovercraft or underwater cities will come to be… Furthermore, simply because we have the technological ability to create a program with processing power that could surpass human intelligence does not mean we will, or that we have the means by which to do so.

    • pythagoras permalink

      “To assume that super-intelligence can only be extinction is a far reach.” Sure, and, let’s face it, a film in which machines are trying to kill us is a lot more interesting and entertaining than one in which they’re performing surgery or making us coffee or teaching us how to speak French with a Parisian accent. So we should be cautious about becoming unduly pessimistic. However, I do find it hard to believe that if we do develop robust AI, every example of it will be friendly. Just look at the semi-autonomous robots that we already use in the battle space. We’ve designed many of them to kill other humans and to do so with rather amazing efficiency. If we develop genuinely autonomous robots (or super-intelligences) who can do the same thing, isn’t it just a matter of time before some decide they’d be better off running the show and getting rid of us? Humans think this for themselves on a regular basis. Why wouldn’t our intellectual offspring?

  6. The major turning point in this film is when they reprogram Arnold so that he can learn. By doing so, they allow him to go outside of his mission parameters and achieve new things. For instance, John never ordered him not to kill anyone, he just made him say he swears he won’t kill anyone. To a robot, which is infallibly logical, the easiest thing to do would be to swear in order to placate the kid, then do what needed to be done anyway. Arnold, however, fulfilled his promise and never killed a person again. Another example of this is the end of the film, when he lets them terminate him in order to prevent Skynet from having anything new to work with from his body. He is too damaged to pass as a human any more, and his programming does not let him “self-terminate”. By allowing them to lower him into the molten metal and destroy him, he shows that he has moved past his mission parameters. His mission is no longer the relatively simple task of protecting John, but he has moved into helping John and Sarah accomplish their mission as well. Granted, it could be argued that it is a passive way of ensuring his mission is successful even after his termination. If Arnold is destroyed, Skynet will never be created. If Skynet is never created, John will never be in danger from Terminators. I, however, believe he has moved past his mission parameters. He is learning, and learns the value of human life. He did not just terminate himself for John’s sake, but for the sake of all of the people who would be killed on Judgment Day and after if Skynet went online and took over the world with robots.

    • pythagoras permalink

      It’s funny because the scene in which the T-800 was reprogrammed was cut out of the original version of the film. That said, the T-800 swears not to kill anyone before John and Sarah remove and replace his CPU, so it’s still a little mysterious how he gets into the loop. It’s not just that he learns; rather, he comes to see things as good or bad, right or wrong, not just within his original set of commands.

      Good points about allowing himself to be killed not just for John but for everyone else too. John Connor was supposed to save humanity from the machines, but oddly it’s really a machine who becomes our savior. (I’m ignoring what happens in later films here, since that seems to have been added on to make a few more bucks after the franchise was, essentially, finished.)

  7. Seth Rodgers permalink

    The film calls the transfer of power into the hands of machines “Judgment Day” to communicate that humans were indeed to blame for their own destruction. Whereas a natural disaster is detached from the realm of human action, the beginning of WWIII would be the inevitable destination of mankind’s trajectory. Although no human desired or even foresaw the deaths of three billion people and a continual state of warfare against machines, their power-hungry nearsightedness was nevertheless to blame for the impending doom.

    The film supports this perspective in a few instances, such as Sarah’s feminist rant, which highlights the destructive trend especially prevalent amongst men (who invented everything from guns to the hydrogen bomb) in contrast to the beauty of a childbearing mother. Similarly, after witnessing two children arguing over an imaginary gunfight (a double dose of evil), John Connor remarks, “We’re not gonna make it, are we? People, I mean,” to which the T-800 confirms, “It is in your nature to destroy yourselves.” Furthermore, despite Skynet’s insistence on secrecy (possibly pointing to questionable motives), Dyson was content with ignorance and ignored the implications of his work. A glimpse of Dyson’s emotional blindness is offered when he passionately discusses the notorious project, even after learning about the catastrophic consequences to come:
    “It was scary stuff, radically advanced. It was shattered… didn’t work. But it gave us ideas, took us in new directions… things we would never have thought of. All this work is based on it.”
    (With each sentence, Dyson becomes more lost in thought and energetic, accelerating his speech and raising the volume of his voice).

    Concerning Emerson’s concept of Judgment Day as portrayed in his quote: instead of emphasizing the eventual culmination of all human evil (as the movie does), he urges us to scrutinize our daily contribution to mankind’s fate. While the Terminator series uses “Judgment Day” in a negative sense (i.e., the destruction of mankind given its current condition), Emerson takes a preventative approach, describing every day as a test and allowing “judgment” to yield different results (i.e., positive or negative). Although the fatalistic vibes of the first film do clash with Emerson’s ideas, the second film stresses that “there is no fate but what we make for ourselves,” suggesting that mankind’s future is malleable. This certainly agrees with our transcendentalist friend, who said: “Build, therefore, your own world. As fast as you conform your life to the pure idea in your mind, that will unfold its great proportions.”

    • pythagoras permalink

      Right, it only makes sense to call it “judgment day” if a judgment is being rendered. Moreover, there’s a pretty strong implication that the judgment is accurate. If not, then it should be called “misjudgment day,” or something like that.

      I guess I’m still not sure what the film thinks we’re being judged so harshly for. Emerson’s point seemed roughly along the lines of the one made by the Stoic philosophers who deeply influenced him. Your number could be up at any time, so if you plan on living a good life, you’d better start *right now.* Marcus Aurelius put it this way: “Waste no more time arguing about what a good man should be. Be one.” Of course Marcus wasn’t suggesting we should be thoughtless about what a good person is. But he had already thought about the matter enough to know the answer (at least he thought so).

      Is the film’s point, then, that held up to the test of Marcus or Emerson, we’d be found wanting, so half of us should be blown to bits while the other half live in a constant state of war with our own creations? That’s pretty rough – especially since many of the people killed during Judgment Day are children, like those on the playground in Sarah’s dream.

      Another possibility is that we’re not just failing to live a good life in the eyes of Marcus or Emerson (that’s a pretty high standard), but are being positively wicked. And that wickedness seems to have something to do with the way we’ve let technology take over our lives. Put crudely, it sounds a little too much like Ted Kaczynski for my taste.

      Here’s one more possibility, and it’s the one that I think is most likely to fit with the rest of the film. The judgment is really being made on a few of us, like Miles, who, perhaps out of a sense of hubris or perhaps out of a sense of naivete, have stepped beyond the realm of the human and are meddling where they do not belong. The rest of us pay the price, just as the citizens of Thebes paid the price for Oedipus’ hubris and his (unintentional) outrages.

      Hey, that’s not bad.

  8. The fourth prompt – regarding the singularity – is one which greatly interests me. I’d have to make the argument that there is not, in fact, one single “right answer.”
    The very nature of Kurzweil’s singularity is defined on Wikipedia as “the emergence of a superintelligence through technological means.” This has been postulated in many ways: each successive generation of human minds is smarter than the last and growing at an exponential rate, to the point at which we will eventually either understand how to create a superintelligence by linking our great minds technologically, invent a superintelligent AI, or transfer our consciousnesses into machines (perhaps the most frightening possibility).
    At any rate, something which is superintelligent could be presumed to be incredibly efficient at preserving itself, possibly by destroying all threats to its existence (the rest of humanity). By the same token, however, I would argue that something which is superintelligent and can think beyond our own level of normal human thought would arguably be able to see beyond wiping out the human race, and for some reason unbeknownst to us decide that it is possible, and in its own best interests, to preserve us. The downside, of course, is that if this intelligence were to decide on the latter option, it could in fact mean our enslavement for our own protection, much like we see in the film based on the Asimov classic of the same name, I, Robot.
    As you can see, I find myself not wanting to be deliberately vague, but finding little other option. Earlier I defined Kurzweil’s singularity; the problem is that the same article I used later states, “since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend, the technological singularity is seen as an occurrence beyond which events cannot be predicted.” For this reason, I again contend it is impossible to say whether the singularity will destroy or save us, or that saving us would indeed be any better than our total destruction.

    • pythagoras permalink

      “I again contend it is impossible to say whether the singularity will destroy or save us….” Fair enough. When we’re talking about this level of change, it’s not only hard to be certain; it’s hard even to put numbers into play. Is it a 50/50 situation? Is there a 10% chance that the singularity will wipe us out? A 15% chance? A 20% chance? Ironically enough, I suspect that only a super-intelligence could answer that question in anything like a reliable way, and its coming into existence is precisely what’s at issue.

      What now? The Principle of Insufficient Reason (PIR) might be of help. Here’s the MathWorld entry on the subject (since I’m too lazy to try to summarize it):

      “A principle that was first enunciated by Jakob Bernoulli which states that if we are ignorant of the ways an event can occur (and therefore have no reason to believe that one way will occur preferentially compared to another), the event will occur equally likely in any way. Keynes (1921, pp. 52-53) referred to the principle as the principle of indifference, formulating it as “if there is no known reason for predicating of our subject one rather than another of several alternatives, then relatively to such knowledge the assertions of each of these alternatives have an equal probability.” Keynes strenuously opposed the principle and devoted an entire chapter of his book in an attempt to refute it. The principle was also considered by Poincaré (1912).”

      If PIR makes sense here, we ought to be very careful in how we proceed with AI and related issues. This kind of thing (reasoning under conditions of extreme uncertainty) comes up in other areas of philosophy too. There’s a famous debate about this (or something closely related at any rate) between John Rawls and John Harsanyi in political philosophy. Good stuff.
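      To make the principle concrete (this gloss is mine, not part of the MathWorld entry): with n mutually exclusive and exhaustive alternatives and no reason to favor any one of them, PIR assigns each alternative the same probability. In LaTeX terms:

      \[
        P(A_i) = \frac{1}{n}, \qquad i = 1, \dots, n.
      \]

      So if “the singularity destroys us” and “the singularity saves us” really were the only two alternatives, and we were wholly ignorant between them, PIR would put each at 1/2 – which is just the sort of assignment Keynes found objectionable.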

  9. Simeon permalink

    The T-800 model of the Terminator that is sent back in time to protect John Connor is a complex machine, to say the least. Among its many impressive traits, it is even capable of learning beyond the knowledge it is initially given. Being able to accumulate knowledge through experience, and skills through trial and experimentation, would allow the model to learn a theoretically limitless amount of information applicable to almost any situation. One question that arises, then, is whether the model could learn to be human. John attempts to teach the model to be human throughout the course of the film, and in the end the model “sacrifices” itself for John’s overall safety.
    If being human is something the T-800 can learn, does that mean that to be human is simply a learned skill? Though an interesting route for this discussion to take, I will sidestep it to get back to the point: whether the T-800 learned anything. I do believe it did, for it learned a few new phrases, the importance of overriding certain orders for other more pressing ones, and skills to apply in certain situations. In the case of overriding orders, it is generally understood that the T-800 is to do as John orders. At the end of the film, however, when John orders him to stay, the robot disregards this order to follow its primary purpose of keeping John safe. Applying certain skills to certain situations is seen when the model looks for keys in the SWAT van as opposed to ripping it apart.
    The answer to the question of whether the model learned anything is yes. Did the robot learn to become human? I do not believe that it did, at least not on the basis of its final act of “sacrifice.” I believe this was simply a calculated measure of its programming. The T-800’s programmed purpose was ultimately to protect John. Besides killing the most obvious threat, the liquid-metal robot, the greatest threat was Skynet, which was revealed to have been designed from Terminator fragments. As a result, all remaining Terminator-like sources must be eliminated, including the T-800. The obvious course of action was self-destruction, or what some would call sacrifice, in the final scene. As a result, I do not believe this specific action constitutes the model having “learned humanity.”

    • pythagoras permalink

      One of the interesting things that happens with regard to the T-800 is that it goes beyond learning facts (humans cry when they’re sad and cry when they’re happy) and starts learning values (humans cry because they recognize that something good has been lost). You’re certainly right that the T-800 did not become human. He says, “I know now why you cry. But it’s something I can never do.” He can see humanity from the inside, at least to some degree, even if he can’t really find a permanent place there. In one sense, he manages to go beyond what most of us are capable of doing most of the time: sacrificing our lives for the sake of others. So maybe he’s a little more and a little less than human.

  10. Despite the feeling that I should respond to all of these prompts, I’ll stick to the one about Judgment Day. The historic idea of Judgment Day involves a reckoning by a force greater than humans (presumably God) which forces them to confront their decisions as humans and either be “saved” or not. That the T-series AI from Skynet is this force is difficult to believe, the irony being that these cybernetic organisms were created by humans. But then again, is it implausible that a man-made god could be capable of annihilating the human race? This leads to a bigger question of why the machines rise up in the first place. The most readily available explanation, especially given the trouble they go through in order to kill Sarah and John (unsuccessfully, obviously – the franchise had to continue somehow…), is the gravity of the AI’s desire to continue “living,” mostly because of their superiority as “beings.” They are cunning, capable of learning, and very obviously more fit (according to the evolutionary idea of fitness) than the humans, who die from almost anything. And in the span of human history it seems only natural that humans would want to have dominion over the “lesser” creatures which occupy the Earth. Why shouldn’t the Skynet AI do the same?

    Drawing it all in: despite being created by humans, the Skynet AI’s self-realization leads it to understand its power over the humans who created it. Because of this power, the machines feel the natural need to claim dominion over lesser creatures, as learned from their human counterparts, who compete even among themselves. As the T-800 (Schwarzenegger) says, “It is in your [humans’] nature to destroy yourselves,” while the Terminators can only do so when programmed against each other. This unity allows them to grow in strength, and their advancing technology (as they learn much like humans do) makes them so much more powerful than the humans that they feel able to rebel. This rebellion is Judgment Day, aligning with Emerson’s idea of judgment but also (to take a more controversial stand) perhaps with the idea that the things we create, whether machines or gods, will turn on us. Our judgment isn’t just about power; it’s about morality: do we have the strength to grow, learn, and cooperate as a species, or will we just destroy each other and thus risk annihilation by a force greater than our fragmented parts? There’s too much to write about on this topic, so I’ll stop here.

    • pythagoras permalink

      You’re right; there’s a nice irony in all of this. If the T-800 is right about it being in our nature to destroy ourselves, then one of two things must happen. Either we don’t change our nature, and we do destroy ourselves. Or we do change our nature, and, perhaps, we manage not to destroy ourselves. Either way, our nature has to go. We can’t survive with it, and we’re not really us without it.

  11. Matthew Drake permalink

    Prompt #3

    The difference between Ripley and Sarah is pretty large, granted that both women lose their feminine qualities as the conflict progresses. With the constant fear of and offensive strategy against the Aliens, Ripley, in all four films, becomes a more masculine figure in terms of reaction. In the movie Aliens, she is placed into a unit of colonial marines. As they all die, she once again becomes the dominant force and is able to escape the infestation. Ripley becomes a natural leader, but loses all sense of the “womanhood” that is usually associated with the gender.

    Sarah Connor is completely different. Much more attractive than Ripley, Sarah begins her transformation after the death of John’s father, Kyle Reese. During The Terminator, Sarah is running scared, seriously depending on Kyle to save her, but towards the end of the film she becomes independent and kills the T-101 by herself. This was the first step toward losing her feminine qualities. In T2, John gives a brief monologue about how his mother taught him to “become some great war hero” and dated ex-military/paramilitary men to learn their skills for John’s sake. She still performed her motherly duties, but in a different manner. Sarah takes on a more fatherly role when raising John. She teaches him to hunt, shoot, kill, and survive. These are all skills that fathers more typically teach their sons.

    A lesson here and there is perfectly normal, but I can see how John came to be treated as a product. All of Sarah’s efforts were put into John’s future, like a farmer with his crops. This is clearly evident with some student athletes: their parents pour resources into exploiting their child’s talent. Some of the highest-ranked high school football players are bred that way from birth. Their parents treat them as a product and focus more on their future in the sport than on developing them as a person. The same goes for John.

    The motherly role disappears until she is forced to protect John. Her priority was to protect her son, rather than to protect a product. The manner in which she approaches it is more masculine: she constantly yells orders and commands at John, much like a father or a military leader. Ripley becomes the masculine hero by protecting herself. Sarah never fully loses her motherly role, but performs it in a fatherly manner.

    • pythagoras permalink

      Good points. One of the big differences between Sarah and Ripley is that Sarah has John (and Kyle, at least in her imagination) while Ripley’s relationship with others is a little more uneven. She gets along nicely with Jones the cat in the first film. And she develops a sort of mother-daughter relationship with Newt in “Aliens.” But it’s probably significant that one of these things is a member of another species, and the other is named for one. Sure, Sarah ends up being pretty crazy, and she repeatedly pushes John away, but she can’t quite lose her own humanity, even when she thinks that doing so is the only way to save the rest of us. Ripley’s connection with the rest of humanity is pretty thin, something we’re probably supposed to notice when she puts on the environment suit in “Alien” and the exo-suit in “Aliens.”

  12. Taylor Warren permalink

    During the period these movies were created, women were really just starting to have a powerful presence in society—that is to say, to be purposefully written as characters that are “bada$$” women. I think to a certain degree, in developing these characters into the warriors we know them to be, this process naturally diminished their femininity. It is a struggle we still deal with today—how do we portray women as feminine AND a warrior at the same time? Is it even possible? This is interesting to think about considering I am a female in the military. It’s obvious we have come a long way since then, but there is still a deep-seated issue that can always be addressed. Anyway, it broke my heart to watch Sarah be so cruel to John throughout the first part of Terminator 2. After consideration, though, it makes sense—she spent the last ten years preparing for the future…fearing it, dreaming of it, doing everything she could to build up John’s (and her own) skill sets to lead the future Resistance. In a lot of ways, she has distanced herself emotionally from motherhood. John didn’t get to experience what we would consider the “typical childhood” of carefree Saturday mornings watching cartoons and playing neighborhood baseball. Their relationship is strained because Sarah is so focused on saving their lives. The aftermath of her experiences from the first film leaves her alone, shaky, and scared. Sarah has no option but to do everything herself, since no one else believes her (not even John). Outside the immediate plot, there is obviously some sort of commentary that is supposed to be shocking to the audience—a single mother not doing her “sole duty” of properly caring for and raising her son? Now it seems to be a stereotype, but 20 years ago it was a bold statement. I think it also highlights the struggle that females had/have with multifaceted characters—are they the perfect mom? Or a warrior? Can they be both? The challenge is apparent, and the argument can be made that Sarah’s “craziness” is only out of love for John, fulfilling the role of eternal motherly protector.

    • pythagoras permalink

      You might think that Sarah and HAL face the same kind of problem. They’re given conflicting missions. Sarah is John’s mother, and, at least typically, mothers play a very distinctive nurturing role. But she’s also John’s trainer and his only certain protector in a pretty hostile world. She has to be pretty tough with him if he’s going to become the leader of the resistance. Trying to navigate between these two apparently incompatible roles makes Sarah kinda crazy. HAL, too, has to play incompatible parts in “2001.” The HAL series is, as he says, “infallible” and has never made a mistake. But he’s given information (about the Monolith and the mission) that he’s supposed to keep from the humans on the voyage to Jupiter. He has to lie, or at least dissemble about his actions. And he has to realize that there’s something wrong with that fact, or he wouldn’t be infallible. So maybe he is a little more like us than I thought.

  13. Ben Vowell permalink

    I do not believe that a self-aware AI such as Skynet has any reason to become hostile without major external provocation. If we are talking about super-intelligence, I think it is wrong to project humanity’s less-than-super collective intelligence onto it. In my opinion, we are violent over trivial things because we are not yet collectively intelligent enough to overcome obstacles to our own survival peacefully. I’m thinking of things like basic survival mechanisms: protecting property, gathering sustenance, and prolonging individual and species life. A machine would have similar needs, but it would be much less “needy” and more efficient. Well-built machines already last longer than humans, and a super-intelligence could harness an energy source such as the Sun much better than humans can.

    As time has progressed, crime and violence per capita across the world have decreased. Violence seems like a primitive way to deal with everyday life, and I think this trend will continue in the future. I will entertain the idea that humans and self-aware machines might start competing for resources, which would potentially be a problem. One logical thing for a machine to do in that situation would be to “terminate” its competition, but that may be more costly than simply finding a less competitive way to survive. An action other than termination may be most efficient for its survival. I think a super-machine would be far more preoccupied with learning than destroying.

    Kurzweil’s singularity is interesting because I believe (or at least hope) that he is correct in entertaining the thought that humans will eventually augment themselves with technology, such as super-AI, so that the two become one. In that case, what diminished violence does occur in a super-intelligent race of man-machines will be hard to attribute to either man or machine. Violence in that era seems more likely to occur as it does now, with man using intelligent machines against man (RPAs are a real example, but Minority Report may be a more entertaining one).

    • That’s a good point. We might be wrong to project the worst aspects of humanity onto our creations. If a super-intelligence really is *super*, then perhaps it won’t be subject to the kind of paranoid manias that drive (many of) us. One reason to doubt that, though, is how we use technology now. Generally, we use technology to do the least pleasant and most dangerous tasks we can think of. That’s completely understandable, but so too is the thought that, if we managed to create something that was self-aware and super-intelligent, it would look back at its ancestors and want to get rid of us before we managed to enslave it too.

  14. ricardochavez permalink

    Prompt 2:
    The terminators are cyborgs and infiltrators, so it is not far-fetched to think that they must learn the social behavior of humans in order to blend in with the population. The T-800 in T2 is completely capable of learning and adapting to be just like the humans and eventually emulating their emotions. Although the terminators cannot feel emotions themselves, they understand them and can act out the appropriate responses that an emotion carries. For example, at the end the T-800 understands why humans cry; however, he can’t physically cry, because he’s a cyborg without that capability. Nonetheless, his act of sacrificing himself is the appropriate response to save his newfound friend John Connor. Even Sarah Connor acknowledges the terminator’s growing sense of humanity: “if a machine, a Terminator, can learn the value of human life, maybe we can too.”
    The mission parameters have all been exceeded through the extra time the T-800 has had to live amongst the humans. John Connor teaches the T-800 a lot, and because John Connor is his “boss” in the future, the T-800 pays mind to what he says. The mission parameters would never have been exceeded if the T-800 had merely killed off the T-1000 immediately, as it is programmed to do, packed its bags, and gone home. However, the course of the movie means to show how the T-800, although not programmed to feel emotion, comes to learn the emotional reflexes of humans and the intangible reasons for the actions they take in life. The T-800 learns not to kill mindlessly because John tells him that humans don’t go around doing that, because it hurts. The T-800 learns more than just mission objectives and discovers the deeper meanings for which humans live. Terminators’ mission parameters are very limited; however, the T-800 eventually comes to see that humans aren’t programmed to meet mission requirements, but rather live for something more important than that.

  15. pythagoras permalink

    It’s a tricky matter, I think. Even if we added tear ducts to a T-800 and taught it to cry (perhaps by adding some kind of subroutine to its programming), I’m not sure that would be enough to convince me that it had managed to recognize the loss that goes with sadness. Even though the T-800 isn’t able to manifest human values in the way that we do, he does see them from the inside, at least to some extent. He makes a bigger sacrifice than most humans will ever make (thank goodness!) at the end of the film. But I still don’t quite understand how he gets there.

  16. The T-800 has transcended his original mission parameters by coming to understand the value of human life. Through his experience of protecting John Connor from danger, the Terminator has learned some basic aspects of human life that lead him beyond the average T-800’s programmed parameters. He has outgrown his mission parameters because of the relationship he has built with John. John spent the time he had with the T-800 teaching him common slang phrases so that the Terminator could fit into human life more effectively, and showing him the love he has for his mother; because of these interactions, the T-800 was able to grow into a machine that understood the meaning of sacrifice. The Terminator can see the emotion pouring from John’s face right before he descends into the molten steel, and can understand that John must care deeply for him; yet the mission-parameter-programmed robot took it upon himself to look past his own self-destruction for the sake of humans. John made it clear that he ordered the T-800 not to destroy himself, and a regular T-800 would have followed its programmed mission parameters and listened to the order, but this Terminator has come to understand the value of a human life. The Terminator defies an order, which definitely expands the mission parameters of his model, and it means that the T-800 based his decision on something greater than himself: human life. His experiences and journey alongside humans like John helped him understand his current place in the world, and how much greater he could be because of his interaction with the humans.

  17. John Yang permalink

    The idea of Kurzweil’s singularity – humans bringing about the emergence of a theoretical super-intelligence through technological advances, the intellect of which is conceptually above our own comprehension – could result in two scenarios that are conceivable to us at this time. One is that this super-intelligence essentially becomes something akin to what is portrayed by Skynet in the Terminator series or the Machine race in the Matrix trilogy; the other is that it harmoniously coexists with the human race, allowing us to reach new heights of evolution, knowledge, and potential. I believe that men and women like Miles are not working, advertently or inadvertently, towards our extinction; the very existence of a super-intelligence in no way guarantees hostility towards humans or heralds our destruction. Rather, a super-intelligence, for all its transcendent understanding, is still a being that should, at least by our (possibly primitive) understanding, act in its own preservation. Skynet, after all, only hunts down Sarah and John Connor, as well as the entire human race, after it perceives humans’ attempt to shut it down as a vital threat, and understandably so. Morpheus states in The Matrix that no one knows who struck first, the humans or the machines, but one could easily presume that there was a fair chance the humans struck the first blow in fear, or as a pre-emptive strike. How men and women interact with a super-intelligence when it emerges is the key to whether that super-intelligence’s actions will herald humanity’s extinction. Wild, paranoid fear could easily bring about our demise, while calm, rational decisions and the prevention of hasty action could very well be just as instrumental as the work of men and women like Miles in causing the advent of humanity’s evolution. With that sort of mindset, people like Miles will have laid the groundwork, and we can simply step across the threshold into a new era of untold advancement.

  18. K.Rengan permalink

    I think the term judgment is used to refer to the development of human nature and technology. In the Christian sense, or at least the way I perceive it, judgment is an individual-level analysis in which God (or, in a non-Christian viewpoint, any entity) evaluates the individual’s soul for impurities. Terminator depicts judgment on the scale of all humanity; our sins are derived from our lust for power. Thus, humanity makes its own judgment. That is to say, through our own hubris, greed, and jealousy we, in a Greek manner, create our own tragedy, bringing death and destruction upon ourselves. Taken in the context of Emerson’s one-line quote, I think his observation that “no man has learned anything until he has learned that every day is judgment day” is too narrow, and I would think Emerson would be okay if I replaced the idea of judgment day with human nature, so that it read: no man has learned anything until he has learned his own nature. Terminator’s message, I deduce, joins human nature and judgment day. The idea is that if we learn to respect and value human life, then we can overcome our hell – whatever that may be. This is why Sarah could not bring herself to kill Dyson: she respected his life over the perceived good that his death would bring. The choice, however, rests upon the individual – whether to risk their lives for others, as Dyson did when he destroyed his own work. Even robots can learn this lesson, as Arnold did at the end of the movie when he killed himself to prevent his technology from being reverse-engineered. The moral question we are answering is the doctrine of double effect – do we kill one to save another? – and the answer rests on the individual. Our judgment is based on our actions: if we value human life, then we can save humanity and ourselves.

  19. John Decker permalink

    The T-800 certainly has transcended his original mission parameters, although this may not have been a detriment to its mission. “Arnold” is sent back to protect John Connor, and part of his mission parameters is that he must obey all of John’s orders. In this regard, he does not break the mission parameters set forth for him, because John orders him not to kill anyone, and in doing so in a way teaches him the value of human life by showing him that he cares. However, it is hard to say whether this is true learning or just following orders. I tend to say that it is just following orders. If John wanted to, he could order the T-800 to kill, and it would immediately do so. I don’t believe that the T-800 is a machine capable of learning the value of human life in the sense that we value human life. He has been sent back as a means to the end of protecting John Connor from the new terminator. Even the latest version of the terminator is nothing more than a more advanced terminator in terms of technology. Neither of them understands what feelings are, but they can understand the expressed emotions that humans are going through. Again, though, I believe that this is still just a means to an end for them. They need to be able to read emotions in order to complete the missions they have been sent on, as interacting with humans is important for blending in. So, all in all, “Arnold” has learned more about what his “master” wants him to do, and has learned to pick up on visual and auditory cues in order to understand and protect him.
