Tech's Ethics
Ebook · 74 pages · 59 minutes

About this ebook

Technology is advancing at an exponentially increasing rate. From surveillance to encryption, algorithms to automation, technology is reshaping our day-to-day reality. As technological change accelerates, the challenge of our time is to make sure our ethical principles keep up. There are difficult questions to ask with regard to technology and its impact on the world. "Tech's Ethics" asks some of those tough questions and puts forward some potential answers as well.

Language: English
Release date: Sep 12, 2019
ISBN: 9780463437544
Author

Gunner Technology

An AWS Partner specializing in JavaScript development for government and business.

Book preview

Tech's Ethics - Gunner Technology

1 AI

Both the benefits and the dangers of AI are numerous. The problem is that you can't simply take the benefits without also getting the dangers. If it were that simple, no one would care about the rise of artificial intelligence. But the sad reality is that AI is already having incredibly negative impacts on our world and is poised to produce even more disastrous outcomes in the near future.

It's difficult to see the danger of AI when its wonders are so prevalent. It seems like every week a new gadget comes out that automates some part of your daily life you probably weren't too fond of doing yourself. But the same AI that powers your Roomba or your Tesla can also be put to some pretty awful uses.

Practical Considerations

Speaking of Tesla, one of the major issues with AI is liability. Several Tesla drivers have suffered severe injuries or even death as a result of failures in the car's AI-powered automated driving features. One driver's Tesla steered him directly into a median at high speed after the car interpreted an old, faded lane marker as active. That driver died. Or consider the woman in Arizona who was struck and killed by a self-driving Uber car. Who's at fault there? Prosecutors declined to hold Uber and its technology liable at all, which sets a fairly terrifying precedent.

Or consider the current military practices of the United States, the world's military superpower and the greatest force of potential destruction this planet has ever seen. (That's simply an objective evaluation based on the amount of firepower in our arsenal, not a commentary on the military's propensity to use said force.) As early as 2008, autonomous military vehicles (drones, robots, etc.) in Iraq outnumbered the ground troops of all of our allies combined. What happens if military technology advances to the point where power is granted wholly to automated systems? What if human involvement becomes a limiting factor in the push for a swift outcome? The decision to strike a target, with all of its aftereffects and potential for collateral damage, seems like the most human of all decisions. Delegating such decisions to AI seems rash.

And that is largely due to the potential for AI to malfunction. Consider the stock market flash crash of 2010. A bug in automated trading algorithms designed to time the market triggered an absolute fire sale and temporarily erased roughly a trillion dollars of market value. The market eventually recovered, but not before serious damage was done. The most reasonable explanation is that the programmers of various high-frequency trading platforms simply screwed up. Unfortunately, their little coding errors caused a nearly 10% drop in the market in a little over half an hour. That's a record that will never be broken. (Unless an even worse trading algorithm makes its way onto the scene.)
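To get a feel for how a small shared rule can snowball, here is a deliberately toy simulation in Python. Everything in it is invented for illustration: the momentum rule, the per-minute numbers, and the single opening shock. It is not a reconstruction of the actual 2010 trading systems.

    # A toy feedback loop, NOT the real 2010 trading code: imagine a fleet of
    # bots that all share the same momentum rule -- "if the price just fell,
    # sell." One small shock then feeds on itself, because every sale becomes
    # the next bot's sell signal.

    def simulate(minutes=35, shock=-0.5):
        prices = [100.0, 100.0 + shock]  # a single half-point dip starts things off
        for _ in range(minutes):
            falling = prices[-1] < prices[-2]
            # Herd selling knocks 0.35% off each minute; otherwise prices drift up.
            move = -0.0035 if falling else 0.0001
            prices.append(prices[-1] * (1 + move))
        return prices

    prices = simulate()
    drop = (1 - prices[-1] / prices[0]) * 100
    print(f"price fell {drop:.1f}% in 35 simulated minutes")  # roughly 12%

No individual bot here is trying to crash anything; the drop falls out of many copies of one mildly wrong rule reacting to each other's output.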

Offloading responsibilities onto AI makes our lives faster, easier, and more efficient. But maybe our lives shouldn't be so fast or easy or efficient. Maybe we need to slow down, make things more difficult, and struggle with stuff to ensure that small mistakes don't turn into huge problems. But that's certainly not the goal of AI.

Philosophical Considerations

There are even deeper issues at hand when one considers the less practical, more philosophical impacts of AI. While AI can easily replace the more mundane aspects of our day-to-day lives, it can also replace those deeply important parts of our lives that keep us human.

Joseph Weizenbaum is famous for developing ELIZA at MIT in 1966. It was an incredibly simple program that did some basic natural language processing and more or less replicated the behavior of a bad psychologist: digging for information ("Men are all alike." "In what way?") or repeating a statement back as a question ("My boyfriend made me come here." "Your boyfriend made you come here?"). The results shocked Weizenbaum. People felt a strong emotional attachment to ELIZA and considered it a great listener. He would spend the latter half of his life decrying AI and its increasing prevalence in society.
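A minimal sketch of that trick in Python shows how little machinery was behind the attachment people felt. To be clear, this is invented, ELIZA-style pattern matching, not Weizenbaum's actual program (which was written in MAD-SLIP); the rules and responses here are made up for illustration.

    import random
    import re

    # ELIZA-style responder: a few regex rules plus pronoun "reflection".
    # The rules below are invented for illustration, not Weizenbaum's script.

    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "am": "are",
        "you": "I", "your": "my", "yours": "mine",
    }

    RULES = [
        (r"my (.*) made me come here", ["Your {0} made you come here?"]),
        (r"(.*) all alike", ["In what way?", "Can you give a specific example?"]),
        (r"i feel (.*)", ["Why do you feel {0}?", "Do you often feel {0}?"]),
    ]

    def reflect(fragment):
        # Swap first- and second-person words so a statement can be echoed back.
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

    def respond(statement):
        cleaned = statement.lower().rstrip(".!?")
        for pattern, replies in RULES:
            match = re.match(pattern, cleaned)
            if match:
                reply = random.choice(replies)
                return reply.format(*(reflect(g) for g in match.groups()))
        return "Please tell me more."  # default when no rule matches

    print(respond("Men are all alike."))               # In what way?
    print(respond("My boyfriend made me come here."))  # Your boyfriend made you come here?

That is essentially the whole illusion: no understanding, just string substitution. That such a thing could pass for a great listener is exactly what alarmed Weizenbaum.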

One hugely important question with AI is: what should be turned over to it? Another key question: who gets to decide? The problem today is that no one is bothering to ask either question. All aspects of everyday life are gladly turned over to AI, and the decision makers are the vested interests involved in AI development itself. We might, for example, question the ethics of an automated assistant making a dinner reservation with a live human being, but Google doesn't care and has developed technology to do just that, and no one seems to want to question it.
