When it was first invented, the automobile revolutionized America. Distances that had once taken days to travel could suddenly be covered in mere hours, and the world became a more interconnected place. In today's world, however, the automobile is fraught with problems. Over 30,000 people die each year in automobile-related accidents, many of which could have been prevented if drivers had been paying attention to the road rather than to their cell phones, screaming toddlers, or the radio. The growing number of cars on the road also increases emissions of greenhouse gases, which contribute to climate change and harm the environment. Furthermore, cars take up space, spending ninety percent of their existence stationary, parked in a driveway or parking lot, waiting to be used. All of these problems could be addressed by adopting the self-driving car: computers do not get distracted or intoxicated, and they pay attention to the road at all times. Self-driving cars could also be linked to travel as a pack, using a fuel-saving technique employed by NASCAR drivers called "drafting." Finally, self-driving cars could operate as a taxi service, in which customers order a car on their cell phones and it drives itself to their doorstep within minutes, eliminating the need for people to own cars and opening up huge amounts of space that could be put to better use. Yet even though self-driving cars would bring many benefits to society, they are not without their downsides. This technology has the potential to completely change how society and the economy operate, and many companies will be competing to be the first manufacturer selling these new cars, even if they have to ignore codes and safety procedures, leading to many difficult ethical scenarios for the people working on this technology.
For example, if I were an engineer working on this new technology, an ethical dilemma could arise as companies get closer to putting these cars on the market. The CEO of Nissan has already promised that these cars will be on the market by 2020, a pledge that puts a good deal of pressure on other companies trying to release their own versions of the self-driving car. Suppose that I am an engineer working at Google on its computer-driven Toyota Prius in the year 2018. The 2020 deadline is approaching, and Nissan is preparing to release its self-driving car within the next year. My boss wants to release our version before Nissan can, in order to steal the spotlight from its model and gain control of the market as the first self-driving car available. However, during a winter test drive in downtown Pittsburgh, my team and I discover a flaw in the car's main sensor: at extremely cold temperatures it becomes less effective, causing the car to misjudge how far away other vehicles are. The problem is not easily fixable and would take several months to remedy, requiring entirely new sensors built from a material that continues to perform at low temperatures even after several years of use. When I, as the head of the test team, inform my boss of the problem, he tells me to forget about it and release the car as it is, because redesigning the sensors now would mean losing a significant share of the market to Nissan and other competitors whose cars are ready to launch.
The question I have to face is whether I, as an engineer, can approve this car for launch given this flaw, or whether I must speak out about the problem and force the company to redesign the sensor. As an engineer, I have at my disposal a myriad of resources for deciding the best course of action, from official codes of ethics published by professional societies, to articles in engineering journals, to advice from practicing engineers. But to determine the best course of action in this ethical scenario, it is first necessary to understand just how important this sensor is to the self-driving car, and just how catastrophic a failure could be.
Google's self-driving car operates by taking in massive amounts of information every second through its sensors, processing that information with a computer stored inside the car, and then deciding what the car should do in that instant. The car has four main sensors. Two radars, on the front and back of the car, allow it to judge how far objects are from the bumpers. A global positioning system, or GPS, is mounted near the rear window and sends data from satellite maps to the car, allowing it to determine where it is in relation to the road. This data is also compared with data collected by Google engineers driving the same route, which lets the car check where the GPS map says it is against where the engineers' maps say it is. This comparison also allows the car to identify objects on the side of the road, distinguishing pedestrians that need to be watched closely from telephone poles and mailboxes. When the car is made available to the public, each car will be able to compare the route it is taking with routes taken by numerous other driverless cars, ensuring that, after a few months of data collection, cars will not go off the road in the majority of locations. The main sensor, however, is the laser mounted on the roof. The laser shoots out 64 beams of light to record data and generate a 3-D map of the world around the car, showing other cars, cyclists, pedestrians, and stationary objects. A computer inside the car then compares the situation it sees on this map with the rules of the road that have been programmed into it and makes a decision. This process happens hundreds of times per second, which gives the vehicle time to react if something goes wrong. This is where the autonomous car holds its primary advantage over human drivers: it always pays attention to the road, is never distracted or intoxicated, and can react to a dangerous situation far faster than a human can. However, a major flaw in one of the sensors, particularly the main laser sensor on the roof, could be catastrophic. If the sensor becomes less effective, the car has difficulty judging where it is in relation to everything around it. The effects could range from failing to see a snowbank in time and driving into it, to hitting a pedestrian who darts across the street at the last moment, to striking another car on the highway while attempting to pass at high speed. Such a flaw would endanger a great many people in northern countries during the winter, possibly causing as many car accidents as human drivers cause now, or more. As an engineer, I can decide how to act in this dilemma by consulting the codes of ethics set forth by professional organizations.
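To make the stakes of the sensor flaw concrete, the short sketch below imagines this sense-and-decide loop in miniature. Everything in it is an assumption made for the sake of illustration: the class and function names, the braking rule, and the "reads long below freezing" error model are mine, not Google's, and the real software is vastly more sophisticated. The point it demonstrates is simply how a laser that overestimates distance in the cold could leave the car cruising at a moment when an accurate sensor would already have told it to brake.

```python
# Illustrative sketch only: these names, thresholds, and the error model are
# assumptions made for this paper, not Google's actual software.

from dataclasses import dataclass

@dataclass
class Detection:
    """One object reported by the roof-mounted laser."""
    kind: str          # e.g. "car", "pedestrian", "snowbank"
    distance_m: float  # estimated distance to the object, in metres

def stopping_distance(speed_mps: float, reaction_s: float = 0.01,
                      decel_mps2: float = 6.0) -> float:
    """Distance needed to stop: reaction distance plus braking distance."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def decide(detections: list[Detection], own_speed_mps: float) -> str:
    """Compare the laser's picture of the world with simple programmed rules."""
    margin = stopping_distance(own_speed_mps)
    for obj in detections:
        if obj.distance_m <= margin:
            return "BRAKE"                                # within stopping distance
        if obj.kind == "pedestrian" and obj.distance_m <= 2 * margin:
            return "SLOW"                                 # extra caution for pedestrians
    return "CRUISE"

def sensed_distance(true_distance_m: float, temperature_c: float) -> float:
    """Model of the hypothetical cold-weather flaw: below freezing the laser
    reads long, so objects appear farther away than they really are."""
    if temperature_c >= 0:
        return true_distance_m
    bias = 1.0 + 0.15 * min(abs(temperature_c) / 10.0, 2.0)  # made-up error curve
    return true_distance_m * bias

if __name__ == "__main__":
    own_speed = 25.0   # metres per second, roughly 55 mph
    true_gap = 50.0    # actual distance to the car ahead, in metres
    for temp_c in (20.0, -15.0):
        seen = Detection("car", sensed_distance(true_gap, temp_c))
        print(f"{temp_c:+.0f} C: laser reports {seen.distance_m:.1f} m "
              f"-> decision: {decide([seen], own_speed)}")
```

Run on the same true gap, this toy model brakes in warm weather but keeps cruising at -15 C, because the biased reading makes the car ahead look farther away than the stopping distance requires. That is precisely the kind of failure my team uncovered in the Pittsburgh test drive.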
The National Society of Professional Engineers, or NSPE, has put forth a code of ethics that every engineer must abide by when making decisions in difficult scenarios; it contains six fundamental canons. In the scenario of finding an error in the sensors of a self-driving car, the most applicable canon is that "Engineers shall hold paramount the safety, health, and welfare of the public." Clearly, releasing a product known to be flawed would not be in accordance with this canon, and would therefore be unethical. The course of action outlined in the code is to "advise [my] clients or employers when [I] believe a project will not be successful." This would involve informing my boss of the conditions under which the sensor would fail, outlining a plan to fix it, and advising him to delay the release of the car until the problem has been remedied. However, since my employer has vetoed redesigning the sensor to perform better in freezing temperatures, it is necessary to look to other portions of the code. According to the code, "if engineers' judgment is overruled under circumstances that endanger life or property, they shall notify their employer or client and such other authority as may be appropriate." In this scenario, the ethical decision is to notify the relevant parties that the company is prepared to release a flawed product to the public, whether those authorities are people higher up in the company, such as project director Sebastian Thrun, or the government authorities responsible for approving the vehicle. Additionally, from a professional standpoint, engineers "shall at all times strive to serve the public interest," which in this case means protecting the safety of the public by preventing an unsafe car from reaching the market.
I can also make an ethical decision by consulting the code of ethics of my chosen discipline, mechanical engineering. The code of ethics of the American Society of Mechanical Engineers contains ten canons, the first of which matches the first canon of the NSPE code: "Engineers shall hold paramount the safety, health and welfare of the public in the performance of their professional duties." Releasing a product that I know to be faulty would violate this canon and would therefore be the unethical decision. It would also violate the third canon of the code: "Engineers shall continue their professional development throughout their careers and shall provide opportunities for the professional and ethical development of those engineers under their supervision." Choosing to release an unsafe product to the public would set a bad example for the engineers on my team, and would therefore be a violation of this canon as well. Again, my course of action is to alert the relevant authorities to a decision that could harm members of the public.
Engineering ethics is also a subject that veteran engineers have written about extensively in order to advise those just starting out in the field, and these articles can provide a wealth of guidance. For example, in his article "Engineering Case Studies: Bridging Micro and Macro Ethics," Robert R. Kline argues that engineers must connect micro ethics, which concerns the engineer, his or her boss, and the company he or she works for, with macro ethics, which deals with "collective social responsibility" regarding technology.
He also cites a case study in which a Turkish airplane crashed near Paris, killing everyone on board, because the directions for securing the cargo door were printed only in English. Keeping these ideas in mind, I can focus not only on the implications for the company and on my relationship with my boss and coworkers, but also on my duty as an engineer to everyone in society who will be affected by my decision about the sensor failure. The case study is also a reminder to consider the perspectives of all the people who will use this technology, not just those who think the same way I do. Another useful source is the book "Ethical Decision Making" by Lisa Newton. Her book outlines three main tenets to keep in mind when dealing with ethically charged scenarios: welfare, justice, and dignity. These tenets involve preventing harm and putting others first, obeying the codes of the profession, and respecting all people by treating them as one would want to be treated. Following them can lead to a better resolution of ethically charged scenarios, and they can be applied to my scenario as a Google engineer. Making the prevention of harm to others my first priority leads me to speak out about the problem, which is also a central requirement of the codes of my profession. Respect for people is especially important in how I report the situation to my boss and to those higher up in the company; even though my boss is making an ethical mistake by choosing to release the car as it is, he is still a person deserving of respect, which must be considered in making an ethical decision. Finally, a third source containing advice from current and former engineers is the article "Ethical Decision Making in Today's Classrooms." The article lays out steps for deciding whether or not an action is ethical: examining all possible courses of action, identifying who will be affected by each choice, and dissecting the options from an empirical perspective. Because this source is written for college students, its language is fairly simple and easy to digest, which makes it a good introduction to the basics of engineering ethics and a good starting point for making an ethical decision.
After analyzing these sources on ethics, my course of action is clear. Google has made a mistake by choosing to release its self-driving car simply to be the first car available on the market, risking people's lives with an inferior product. The engineering codes of ethics, along with the advice of current engineers in books and articles, make it apparent that an engineer's first duty is to protect the public from harm. To fulfill that duty, I must bring the problem to the attention of the key officials on the car project, within the company, and within the government. Doing so will force the release to be delayed, allowing the sensor to be fixed and preventing harm to thousands of consumers.