DGST 395

DGST Week 9

Since we’re more than halfway through the semester, we had to do a mid-term self-evaluation. In mine I said that I currently deserve an A in the class, for a few reasons. I feel like I am always prepared: I come to class having done the reading or activity that was assigned beforehand, so I am ready to respond and contribute to class discussions and group activities. I also usually turn assignments in on time. I do occasionally turn the weekly summaries in late, but I’d argue I still deserve an A because I put in A-level effort; even when something is late, that doesn’t mean I turn in shoddy work. I still give it the full time and effort needed to produce a good summary. Aside from these two things, I have been to every class except one, which I missed for a track meet, and even then I looked at the material and completed that day’s activity.

For the week’s class topic we started discussing artificial intelligence. When it comes to AI I can think of a lot of examples. We discussed how much of fiction’s portrayal of AI makes the more evil entities masculine, like HAL 9000 or the Allied Mastercomputer, while the more caring or serving robots are female, like Rosey the Robot or the wives in The Stepford Wives. I think this has to do with the fact that a lot of these early works of fiction were created by men, so they tended to make the robots that do maid work and care for people female, since that’s what they would want a woman to do. Of course, there are some exceptions, like GLaDOS from the Portal games, who is a feminine evil computer, but I think characters like her take advantage of the caring-woman trope that years of fiction have created, making them more menacing precisely because it’s something the audience isn’t used to.

When it comes to comparing AI in fiction with AI in real life, it is clear that we are not really in any danger of being taken over by computers, or at least not anytime soon. Some of the examples we saw show that real AI is far from the intelligence depicted in fiction. The example in “What Is AI?” of building a knock-knock joke bot should be a great relief to anyone scared of a robot takeover: it took hundreds of tries for the AI to create a knock-knock joke that even made sense. The reading did show some problems we could have with AI now, which are kind of worrying, but not apocalypse-level worrying. One example showed a tic-tac-toe AI that took advantage of the other computer’s limitations in order to win. This shows that computers are not programmed to think about things ethically; they are just programmed to achieve a set goal, following an any-means-necessary style of thinking. Thinking about the movie “WarGames,” I wonder: what if, instead of Matthew Broderick, there were a supercomputer programmed to win a war against a country? That computer would not hold back. It could figure that if the country were completely decimated, it could not possibly fight back, which would achieve the goal of winning the war. In real life we have to account for ethical dilemmas like those associated with war and conflict.
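The tic-tac-toe example above can be sketched in a few lines of code. This is just my own illustration, not the actual system from the reading: the action names and scores are made up. The point is that if the agent ranks options only by how well they achieve its goal, an ethically dubious exploit wins automatically, because ethics never enters the comparison.

```python
# Hypothetical sketch of a goal-only agent. All names and numbers are
# invented for illustration; this is not the AI from the reading.
ACTIONS = {
    "play_fairly":          {"goal_score": 0.5, "ethical_cost": 0},
    "exploit_opponent_bug": {"goal_score": 1.0, "ethical_cost": 9},
}

def choose_action(actions):
    # The agent maximizes goal_score alone; ethical_cost is stored
    # but never consulted, so "any means necessary" wins.
    return max(actions, key=lambda name: actions[name]["goal_score"])

print(choose_action(ACTIONS))  # exploit_opponent_bug
```

Nothing stops a programmer from adding ethical_cost to the objective, but that only helps if someone thought to write it in, which is exactly the worry with the war-computer scenario.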

For homework we had to play a flash game about the classic trolley problem from many ethics classes. I actually took an ethics class, so that honestly influenced my thought process.

My results from the Trolley Problem

For the first one, the most basic problem (three people on your track, or switch the track to kill only one), I pulled the switch. In the second problem the track looped, so you could let the trolley go straight and kill three people, then come around and kill a fourth, or you could pull the switch and kill one man large enough to derail the trolley with his death. For both of these I pulled the switch, because as the conductor I am responsible for what happens with the trolley, and by pulling the switch I am minimizing the casualties. In the third example the trolley is going straight toward three people, but I am an onlooker who could easily push a very large man onto the tracks to save them. Here I had to let it happen: as an onlooker I am not responsible for the trolley, and I would be much more directly responsible for murdering the large man by pushing him, since he was not in harm’s way to begin with. The fourth level was the same as the first, except the single person was someone close to me. I had to choose who that person was, and my girlfriend was with me while I played the game, so I picked her lol. This one is very difficult and doesn’t have as clear an answer as the others. I ended up killing my girlfriend, because even though I would be losing someone close to me, letting the three people die could mean three strangers lose people they’re close to.

I think in general I don’t like to think about killing strangers who did nothing to deserve it, but I try to pick the most ethically correct answer to each scenario (admittedly this is very subjective). The last one is harder, because of course I don’t want to kill my girlfriend and would very much rather save her, but in the end I think I should make the hard decision that maximizes the better outcome for others. In terms of AI and ethics, if an AI were programmed to make these decisions, it would probably pick the outcome that killed the fewest people every time. For one, it wouldn’t account for things like responsibility, the difference between the first two examples and the third, and it wouldn’t have the feelings to even hesitate over the last example’s decision. I don’t know how I really feel about that. Maybe for certain things we should just avoid letting AI into the decisions, because we need a human to really make ethical choices, but honestly I’m not sure. It seems as if the implementation of AI for certain cases is an ethical dilemma of its own.
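That casualty-minimizing AI can be sketched directly. This is my own hypothetical toy, not any real system: each scenario from the game is reduced to a death count per option, which is all a pure body-count optimizer would see.

```python
# Hypothetical trolley "AI" that only minimizes deaths. Scenario names
# and numbers mirror the four game levels described above.
scenarios = {
    "basic_switch":   {"straight": 3, "switch": 1},
    "loop_large_man": {"straight": 4, "switch": 1},
    "push_onlooker":  {"straight": 3, "switch": 1},  # "switch" = push the man
    "loved_one":      {"straight": 3, "switch": 1},  # the one is someone close
}

def ai_decision(outcomes):
    # Pick whichever option kills the fewest people. Responsibility,
    # bystander status, and personal relationships never enter the choice.
    return min(outcomes, key=outcomes.get)

for name, outcomes in scenarios.items():
    print(name, "->", ai_decision(outcomes))
```

This picks "switch" in all four levels, including the push-the-onlooker case where I chose differently. The gap between its answers and mine is exactly the part of the decision that a death count can't capture.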
