
DGST 395 Week 10

This week we discussed a range of ethical issues surrounding AI. We finished off last week by playing a small Flash game by Pippin Barr based on the classic ethical thought experiment known as the trolley problem. I covered this in last week's post, but it raised a lot of questions about the ethics of how an AI would carry out its programmed goals.

Starting this week, we continued discussing the trolley problem and then played a game very similar to that thought experiment but more directly related to AI. The game, called the Moral Machine, put the player in scenarios where a self-driving car must either crash, killing its passenger(s) (in some scenarios there were no passengers at all), or stay intact and kill people on a crosswalk instead. It really makes you think about whether computers should be making these decisions and which choices are ethically “correct.” Personally, I concluded that we shouldn't even try to put out fully self-driving cars until the infrastructure fully supports them, with cars syncing up to stop lights and to each other so the AI never has to make these decisions in the first place. The risk currently presented is far too high to make the concept worth it, in my opinion.

Following our discussion of ethics, we talked about AI and race. Reading excerpts from Black Software, it is clear that AI can be used unethically when it comes to race as well. The two chapters we read show how police used AI to map out “hot” areas of crime and attempt predictive policing in order to stop crime before it happens. This concept was carried out using the ALERT II system and was not as great as its descriptions made it sound. It did what it claimed to do on paper, but in reality it promoted over-policing in areas with large minority populations. AI uses data sets to learn how to “think,” and since many of the data sets created by police were already skewed by racist bias against minority groups, the result was a computer that follows that same mindset.

That is one example of AI being created without people of color in mind, and Joy Buolamwini presents another in her TED talk. Facial detection failed to work for Buolamwini on multiple occasions, and when she looked into the matter she found that, once again, the AI had been trained on data sets that left minority faces out. The data sets used to build this facial recognition software consisted overwhelmingly of white men, so minorities and women often were not accurately detected, if they were detected at all. Both of these examples show that AI is often not created with everyone in mind, and that can lead to serious problems. This plays out more tragically in Khari Johnson's article, which describes three men who were wrongfully arrested based on facial recognition. As Buolamwini noted, this technology was not made with their faces in mind, which caused computers to match the men in Johnson's article to the criminals police were looking for. These are also examples of what Cathy O'Neil calls “weapons of math destruction.”
That leaves us with an ethical question: how do we keep these technologies inclusive for the sake of people's safety? Joy Buolamwini offers one solution: on her website, you can report cases like the one she discovered or volunteer to help diversify data sets.
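The feedback loop behind predictive policing can be sketched in a few lines of toy Python. This is my own illustration, not anything from the readings, and the numbers are made up: two neighborhoods with identical true crime rates, where one was historically patrolled twice as heavily.

```python
# Toy sketch of a predictive-policing feedback loop (hypothetical numbers).
# Both neighborhoods have the SAME underlying crime rate, but "A" was
# historically over-policed, so its recorded arrest counts are inflated.

true_crime_rate = {"A": 10, "B": 10}       # identical underlying crime
past_patrol_level = {"A": 2.0, "B": 1.0}   # A got twice the patrols

# Recorded arrests roughly track crime * patrol intensity, not crime alone
historical_arrests = {n: true_crime_rate[n] * past_patrol_level[n]
                      for n in true_crime_rate}

# A naive "model" allocates future patrols in proportion to past arrests,
# so the biased history feeds directly into the biased future.
total = sum(historical_arrests.values())
future_patrols = {n: historical_arrests[n] / total
                  for n in historical_arrests}

print(historical_arrests)  # A's record looks twice as "criminal" as B's
print(future_patrols)      # A now gets ~2/3 of patrols despite equal crime
```

The point of the sketch is that the model never sees the true crime rate, only the arrest record, so over-policing one area in the past guarantees over-policing it in the future.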

We finished off the week on a lighter note by talking about creativity using AI. We went over many examples of AI-generated art and even did a Kahoot where we had to guess whether each piece was made by AI or by a human. Many of them were hard to call, since they looked very similar to art I've seen before. One specific example was a portrait that looked completely real but was AI-generated. I think that when it comes to art, anything goes as long as you're not plagiarizing, and because of that, using AI is a totally viable way to create art. Art has evolved again and again throughout history, and AI is just going to be part of that evolution; it seems to me to be a new medium in which to create. I guess the real question in the end is: does the programmer deserve as much credit for the art as the computer does? Personally, it seems to me that the programmer has a rough idea of what they want the AI to create but no control over the exact outcome, and because of that the computer deserves more of the credit. At the end of the day, though, art is subjective, and no matter the outcome or the means, there is always an argument for how good or creative a work is.
