
Ethics for Computer Scientists
and Software Engineers
• You will be taking ENGR 482 (Engineering
Ethics) at some point.
• This course is hosted by the Philosophy
Department, but is oriented toward engineers.
• The principles span across all engineering
disciplines.
• Dr. Ed Harris is an expert on Engineering
Ethics and literally wrote the book.
• Why do computer scientists have to learn about
Ethics?
– Employers require that students understand the
ethical implications of design and decision-making
(including how to deal with gray areas).
– There are a lot of interesting related topics, like
intellectual property (e.g. copyrights), privacy and
security, etc.
– We trade in digital information (from images and
music to source code); there are many implications,
such as what can be copied freely.
– Software has a real impact on people. There can be
consequences of software we design and programs
we write, based on how they affect users, as well as
others. (e.g. in terms of safety, privacy...)
• In ENGR 482, you will learn about different
ethical frameworks.
– Philosophical – some researchers argue that there are
common, innate ideals that most humans instinctively
share, such as the Golden Rule (do unto others as
you would have them do unto you)
– Utilitarian – driven by cost-benefit analysis
– Religious – for some people, morals might be based
in their religion
• In ENGR 482, you will learn about sources of
ethical conflicts, and techniques for evaluating
difficult situations and making ethical decisions.
– laziness, greed, pressure from boss, group-think...
• We live in a world of Digital Information
– icons, clipart, photos, videos, music
– scanned, searchable documents
– software
– emails, tweets
• Question:
– What is free (public)?
– What is protected (copyrighted)?
– What is private?
What information is Public?
• images, clip art, icons, music, text,
software, engineering designs?
• can you post pictures of your friends?
– a building?
– what if the Coca-Cola logo is in background?
• do you have to get permission?
– when in doubt, you should cite sources
– this includes online material (e.g. URL)
• Reminder: plagiarism is serious. Do not
copy stuff off the web (or your friends) and
present it as your own.
The Legal Side
• concept of Intellectual Property (IP)
– rationale: time-limited monopoly to incentivize
creativity
– what counts as IP?
• inventions, methods, processes, ‘use’ patents (e.g.
new uses for old medications)
• valuable to employers
– what about design/style (look-and-feel)?
• yes, just ask Apple, which sued Microsoft over
“graphical OS interfaces” and Samsung over
smart phone designs
– however, you can’t copyright or patent an idea,
only the expression of that idea
• copyrights
– the expression of an idea, like a book or a
song
• patents
– inventions, methods for doing/making
something, or improvements of such
• criteria – must be:
– novel
– useful
– non-obvious
– first to file (a recent change in the US)
• What is an adequate modification?
– any derivative work is protected (including
translation to other languages, etc.)
– is there a “minimal clip”? mashups/remixes?
incidental depiction in a photo?
– often subjective – the extent of the clip counts
(and whether it affects profit potential)
• Concept of fair use
– can make personal copies, backup software
– can mention a brief quote in a review article
– can use some material for non-profit or
educational purposes
Can you patent an algorithm?
• Richard Stallman argues that algorithms are like
ideas and shouldn’t be patented (and that patenting
them damages the industry)
• there are major problems with “patent trolls” who
look for intentional or accidental re-use of
methods and try to extort fees
• the courts have ultimately held that an abstract
algorithm (like a mathematical formula) cannot by
itself be patented
• Yet, there are patents for algorithms like:
– IDEA (International Data Encryption Algorithm)
– LZW compression used in GIF images
– both patents are now expired
Copyrighting of Software
• software doesn’t quite fit definition of a
copyright or patent
– can’t copyright an algorithm (it is like an idea), but
can copyright the expression/implementation as
a program in a particular language (like a book)
– it can be viewed as a blueprint for making
executables
• putting a copyright notice on a piece of code
(source code file) is sufficient, though you should
register it if you want to enforce protection
– Copyright © 2013. Thomas Ioerger. All rights
reserved.
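For instance, a fuller header comment at the top of a source file might look like this (a minimal sketch; the file name and wording are hypothetical):

    /*
     * scheduler.c -- hypothetical module
     *
     * Copyright (c) 2013 Thomas Ioerger. All rights reserved.
     *
     * Note: copyright attaches automatically to the expression in
     * this file, but registering the work is recommended if you
     * ever need to enforce it in court.
     */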
• software is often designed using
libraries/components/plugins, so you have
to be careful about what can be re-used
– see Google Drive example
– http://www.google.com/google-d-s/legal.html
• alternative types of software licenses:
– freeware
• GNU FSF (Free Software Foundation)
• Copyleft: can freely use, modify and re-distribute,
but only if your software also permits this.
– shareware
– Open source
The Ethical Side
• Aspirational ethics:
– do more than just minimally follow the rules,
i.e. not just abiding by restrictions and making
decisions based on potential liability, etc.
• Principles
– respect for persons (Golden Rule – you
wouldn’t want somebody stealing your ideas)
– give credit, acknowledge source (even if it
might mean sharing profit)
– when in doubt: cite it, or re-write it in your own
words
Privacy and Security
• what information is private?
– medical/financial records
– academic/employment/voting records
– not Google searches or tweets
• Interestingly, the “right to privacy” is not in the
US Constitution’s Bill of Rights, though the Supreme
Court has interpreted it as an extension of other rights.
• FOIA – Freedom of Information Act
• emails
– caution: they’re not as private as you think, especially
in employer accounts
– might as well assume they could become public
Privacy vs. Social utility (tradeoff)
– there are cases where we justify collection of
private data for the public good
– examples:
• health insurers have to collect info to set premiums
• credit bureaus have to collect info to compute
credit scores
– digital info can be aggregated and cross-referenced
in large databases
• risks to privacy, potential for identity theft
– solutions
• fair information practices
• de-identification
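As a small illustration of de-identification (a minimal sketch in C; mask_account is a hypothetical helper, and real de-identification involves much more than masking one field):

    #include <stdio.h>
    #include <string.h>

    /* Mask all but the last four characters of an identifier so
     * records can be aggregated with less risk of identity theft.
     * Identifiers shorter than four characters are fully masked. */
    void mask_account(const char *full, char *out, size_t outlen)
    {
        size_t n = strlen(full);
        size_t i;
        for (i = 0; i < n && i + 1 < outlen; i++)
            out[i] = (n >= 4 && i >= n - 4) ? full[i] : '*';
        out[i] = '\0';
    }

    int main(void)
    {
        char masked[32];
        mask_account("4417123456789113", masked, sizeof masked);
        printf("%s\n", masked);   /* prints ************9113 */
        return 0;
    }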
The Legal Side
• who is responsible for security?
– you? employer? IT person (firewall)?
– note: don’t share passwords!
• what must you keep password-protected?
– social security numbers, credit card numbers...
• security decisions driven by liability
– legal/financial consequences
– risk of identity theft
• companies have a responsibility to inform
parties of security breaches
– example: in 2007, hackers stole 45 million credit card
numbers from TJ Maxx servers
The Ethical Side
• privacy and security represent a tradeoff
– there are many levels of security with different costs
• passwords
• firewalls?
• is encryption warranted? at what key length?
• could take a utilitarian view
– balance costs and inconvenience
• the problem is, people often disagree on:
– perceived risk (probability of being hacked)
– relative weight/importance of privacy
• aspirational ethics:
– don’t just make decisions based on legal or financial
considerations (which tend to emphasize the negative), but
instead, aspire to protect users’ rights
• ethical principles: individuals’ dignity and right to privacy
Ethics in Software Engineering
• There are many design decisions in software that
can have significant consequences...
• Famous software bugs
– Therac-25 (1985)
• radiation therapy machine
• software bug (a race condition) in the beam/shield
control caused massive radiation overdoses,
multiple deaths
– AT&T long-distance network (1990)
• crashed nationwide (starting from a switch in New York)
due to a faulty ‘break’ in a switch statement, which
caused one switch to send a failure/congestion message
to another, triggering its reset/self-test; this cascaded
and crashed other switches (see the sketch after this list)
– Floating-point division (FDIV) bug in Intel Pentium chips (1994)
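The AT&T failure is usually blamed on a single bad ‘break’. Here is a minimal C sketch (not the actual switch code) of how a missing break makes control fall through into the wrong case:

    #include <stdio.h>

    enum msg { MSG_OK, MSG_CONGESTED, MSG_RESET };

    /* With the 'break' missing after the MSG_OK case, a healthy
     * status report falls through and is handled as congestion,
     * so a working switch ends up telling its neighbors to reset. */
    void handle(enum msg m)
    {
        switch (m) {
        case MSG_OK:
            printf("status normal\n");
            /* 'break;' accidentally omitted here -- falls through! */
        case MSG_CONGESTED:
            printf("send congestion message; neighbor will self-test\n");
            break;
        case MSG_RESET:
            printf("run self-test and reset\n");
            break;
        }
    }

    int main(void)
    {
        handle(MSG_OK);   /* prints BOTH lines -- the bug */
        return 0;
    }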
• Responsibility for documentation of
software, testing, and fixing bugs
– basically, common engineering ethics apply
here
– shared responsibility of programmer, team,
manager, company...
ACM Code of Ethics
• Association for Computing Machinery
• Like most professional societies, we have a
code of ethics
– http://www.acm.org/about/code-of-ethics
– ACM code emphasizes safety of public over
interests of employer
• so for example, if your manager asks you to do
something risky or dangerous, such as release code
you know has bugs in it, you shouldn’t just blindly do it
– members are obliged to take responsibility for
their work, keep informed, honor laws,
confidentiality, privacy, etc.
• 1. GENERAL MORAL IMPERATIVES.
– 1.1 Contribute to society and human well-being.
– 1.2 Avoid harm to others.
– 1.3 Be honest and trustworthy.
– 1.4 Be fair and take action not to discriminate.
– 1.5 Honor property rights including copyrights and patents.
– 1.6 Give proper credit for intellectual property.
– 1.7 Respect the privacy of others.
– 1.8 Honor confidentiality.
• 2. MORE SPECIFIC PROFESSIONAL RESPONSIBILITIES.
– 2.1 Strive to achieve the highest quality, effectiveness and dignity in
both the process and products of professional work.
– 2.2 Acquire and maintain professional competence.
– 2.3 Know and respect existing laws pertaining to professional work.
– ...
• 3. ORGANIZATIONAL LEADERSHIP IMPERATIVES.
– 3.2 Manage personnel and resources to design and build information
systems that enhance the quality of working life.
– 3.3 Acknowledge and support proper and authorized uses of an
organization's computing and communication resources.
– ...
Ethics in Interface Design
• design of software must match “cognitive
structures” – how people think about a system
– we can’t go into detail here (take a class on it), but
this is a significant aspect of design of software
systems that must be considered, especially for safety
– consider how to display state info/status indicators so
users really understand
– consider how to make actions/options and their
effects clear (example: does this action erase data?
see the confirmation sketch below)
– an example:
• early Air-Traffic Control systems had a GUI with lights that
blinked twice a second to indicate everything was OK
• this had to be changed to once per second: the human brain
interprets lights blinking twice per second as “Alert!”, which
confused controllers
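As a tiny illustration of making a destructive action’s effect explicit (a minimal C sketch; confirm_erase and the prompt wording are hypothetical, not from any real system):

    #include <stdio.h>
    #include <string.h>

    /* Spell out exactly what a destructive action will do and
     * require an explicit "yes" -- an ambiguous prompt invites
     * unsafe defaults. */
    int confirm_erase(const char *target)
    {
        char reply[16];
        printf("This will PERMANENTLY ERASE all data in %s.\n", target);
        printf("Type 'yes' to proceed: ");
        if (fgets(reply, sizeof reply, stdin) == NULL)
            return 0;                   /* no input: do nothing */
        return strncmp(reply, "yes", 3) == 0;
    }

    int main(void)
    {
        if (confirm_erase("project.db"))
            printf("erasing...\n");
        else
            printf("cancelled -- no data touched\n");
        return 0;
    }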
Case: Autopilot or Human Error?
•
The following was excerpted from Olson, W.A. (2001). Risks of Cockpit
Automation. http://aupress.au.af.mil/digital/pdf/paper/wf_0014_olson_identifying_and_mitigating_risks.pdf
•
On 24 April 1994 an Airbus 300-600 crashed while on approach to Nagoya,
Japan. During the approach the copilot inadvertently engaged the aircraft’s
“go-around mode,” which caused the automated systems to attempt to fly
away from the ground using the aircraft pitch trim system, while the pilots
attempted to continue the landing approach via input to the elevator.
The pilots were unable to determine that the pitch trim input of the autopilot
system was causing difficulties controlling the aircraft.
Additionally, the design of the A300 autopilot (at that time) did not allow the
pilots to override the autopilot by use of opposing control stick pressure.
Thus, the pilots and automated systems continued to struggle for control,
with the aircraft eventually pitching up to near vertical, stalling, and crashing
on the approach end of the runway—killing 264 passengers and crew.
•
This case illustrates that a technological advance (the autopilot) that was
supposed to help human operators actually got in the way. It didn’t
malfunction. The problem stemmed from the crew not fully understanding
what it was doing and how that interacted with what they wanted to do.
• There are downsides to technology/online/the Internet/social
networking, which can make us...
– ...less sociable via personal interactions
• e.g. children who spend all day playing video games or texting
• loss of inter-personal skills, expression
• alienation, loneliness (think of online shopping vs. face-to-face)
– ...reliant/dependent on computers
• loss of math skills, grammar/spelling, memory
• I don’t know how my car works anymore, and I can’t fix it.
• GPS, googlemaps for navigation
– ...gullible and uncritical
• “It must be true - I saw it on the Internet.”
• blaming the technology - “I didn’t fix it because the red light wasn’t blinking.”
– ...desensitized to violence (video games), pornography, copying
• The point is:
• Design of software features should promote human well-being
• What if you can implement something that
is bad or disruptive?
– examples of abuse:
• disassembly, hacking
• web-bots, spam
– a script that queries a web site so often that it
overloads it (e.g. checking every millisecond
whether a library book is available; a politer
version is sketched after this list)
– the 1988 Morris Worm, which exploited a
loophole in a Unix daemon to spread from
machine to machine
• its author didn’t do it to make money; it was just an
experiment
• a mistake in the implementation generated many
more copies than intended
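By contrast, a respectful client polls at an interval the server’s operators would anticipate. A minimal C sketch (check_availability is a hypothetical stand-in for the actual query):

    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical stand-in for the real query (e.g. an HTTP request). */
    int check_availability(const char *item)
    {
        printf("querying server about %s...\n", item);
        return 0;   /* pretend it is still checked out */
    }

    int main(void)
    {
        /* Poll every 10 minutes, not every millisecond: keep the
         * load within what the server's operators anticipated. */
        while (!check_availability("the book")) {
            sleep(600);
        }
        printf("available!\n");
        return 0;
    }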
• cases related to hacking
– Kevin Mitnick (http://en.wikipedia.org/wiki/Kevin_Mitnick)
• gained remote access to corporate data using phones
• was sentenced to 5 years in prison and was initially
forbidden to use computers after his release
• even though he didn’t profit, did his punishment befit his
crime, or were the feds unduly trying to make an example out
of him (perhaps out of ignorance or fear)?
• 1986 Computer Fraud and Abuse Act
• Ethical Principles
– don’t harm/hinder others
• Respect for Persons
– if someone sets up a server, respect its intended use
• RSS is an example where the provider intends to stream
content to you and has anticipated the load
– even if you figure out how somebody implemented
something, don’t publish it (it violates their rights)
• So in summary...
– try to make decisions guided by ethical principles
– keep in mind the ACM Code of Ethics
– consider the consequences of software you design (on
users and others)
– Design software features that promote human well-being
– take responsibility for testing your code
– respect copyrights and don’t re-use stuff unless you give
due credit
• food for thought:
– Google’s corporate motto is “Don’t be evil.”
– why do you think this is?
– because they have the power to do some really wicked
things with their technology and data