
An Ethical Argument for Dumb Pipes

Design Regrets
At a fundamental level, the Internet was originally intended to be a loosely
interconnected network with an open architecture1. Beyond the technical requirements of the
basic underlying routing technology (namely the addressing information and other metadata in
packet headers), much of the inner content of internet traffic, the very content central to the Net
Neutrality debate, was originally intended to be encrypted in the base TCP/IP protocol rather than
sent as cleartext.2 Had the foundational protocol of the Internet had encryption ‘baked in’ from its
inception, many of the ethical concerns regarding an ISP’s duty, or even ability, to differentiate
traffic would be negated, to say nothing of the impact on cybersecurity posture.
Decades of Fallout
By its very nature, the security evolution of the internet overlaps with end user
privacy and with the ethical considerations of how traffic can be differentiated. As a direct result
of the adoption of ever more sophisticated security protocols and mechanisms, much of the
once-cleartext data is becoming opaque to inspection, not only by potential attackers, but also by
an ISP looking to differentiate data streams. The most recognizable of these technologies is
likely the now almost ubiquitous SSL/TLS (Transport Layer Security, usually indicated by an
HTTPS address), which can be used to fully encrypt many forms of communication, not just web
sites. Because the session is negotiated directly between the end server and the customer’s
device, an intermediary (in our case, the ISP) may not even be able to determine the domain
name the customer intends to visit, or any other data about the connection, beyond the rough IP
addressing information required to forward and route the packets.
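As a rough illustration of how little remains visible, the following minimal Python sketch (using example.com purely as a placeholder host) opens an ordinary TLS connection the way a browser would; an on-path observer such as an ISP would see the IP addresses, ports, packet sizes and timing, and possibly the SNI hostname, but nothing of the request or response itself:

    import socket
    import ssl

    # Minimal sketch: negotiate TLS directly with the end server. With TLS 1.3
    # even the server certificate is encrypted; at most the SNI hostname below
    # remains visible to an intermediary, plus IP/port metadata.
    hostname = "example.com"  # placeholder host for illustration
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
            print("negotiated:", tls_sock.version(), tls_sock.cipher()[0])
            # From this point on, the payload is opaque to the ISP.
            tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls_sock.recv(120))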
The ‘DNS lookups’ customers perform are also a huge source of traffic metadata that
ISPs can potentially gather from their customers, since ISPs typically operate their own DNS
servers. There are now multiple projects underway attempting to give DNS lookups the same
forms of encryption and privacy that SSL/TLS has provided to websites.3 Many ISPs have since
expressed great concern about the pending possibility of losing access to this cleartext metadata,
while other parties call such concerns unfounded.4
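As a hedged illustration of what these projects provide, the short Python sketch below performs a DNS-over-HTTPS lookup against Cloudflare’s public JSON resolver (one of several such services, chosen here only as an example); because the query travels inside TLS, the ISP’s own resolver never sees the name being looked up:

    import requests

    # Sketch of a DNS-over-HTTPS (DoH) query. With classic UDP DNS the ISP's
    # resolver (and any on-path observer) sees the queried name in cleartext;
    # here the query and answer ride inside an ordinary HTTPS connection.
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])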
An early solution, engineered to connect disparate networks, support the travelling
corporate ‘road warrior’, or protect oneself while connected to an untrusted network (even an
ISP’s), is the Virtual Private Network (VPN), which, when used, also prevents any attempt at
inspection of the encapsulated traffic, including for the purposes of traffic differentiation; from
the outside it can only be categorized as opaque bulk traffic. VPN providers are used for many
reasons, the primary one being in the name itself: privacy and security. Additionally, many users
leverage the choice of geographic gateways offered by premium VPN services to circumvent
geo-fencing of content (Netflix, BBC, etc.), and to test for the existence of, and then attempt to
avoid, shaping or throttling of traffic generated over their personal internet connection. While
still atypical, many users also choose to pay for a permanent VPN service through which all of
their internet traffic is routed5.
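A simple way to verify that such a ‘full tunnel’ configuration really does carry all traffic is to compare the apparent public IP address with the VPN down and then up; the tiny Python sketch below uses the public echo service api.ipify.org, chosen purely for illustration:

    import requests

    # Run once with the VPN disconnected and once connected: if the full
    # tunnel is working, the reported address changes to the VPN gateway's.
    print("apparent public IP:", requests.get("https://api.ipify.org", timeout=5).text)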
In addition to VPN tunnels and the piecemeal upgrades and improvements implemented
for legacy internet protocols, there are now multiple decentralized6, abstracted overlay networks
that attempt to provide security and privacy from the ground up while still operating atop the
legacy internet. Some of the more popular overlay networks one may recognize are the Tor
Project7, the I2P network8, and Freenet9. As with VPN tunnels and other protocol evolution, this
has the net effect of removing most of the ability to perform meaningful traffic differentiation.
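To make the ‘opaque bulk traffic’ point concrete, the sketch below routes an ordinary request through a locally running Tor client via its default SOCKS port; the port number and the check.torproject.org endpoint are assumptions about a stock Tor installation, and the requests[socks] extra is required:

    import requests

    # The 'socks5h' scheme sends DNS resolution through the tunnel as well,
    # so the ISP observes only an encrypted stream to a Tor entry node.
    proxies = {
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }
    resp = requests.get("https://check.torproject.org/api/ip", proxies=proxies, timeout=30)
    print(resp.json())  # e.g. {"IsTor": true, "IP": "..."}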
This ultimately presents a considerable challenge for intermediaries (large private
networks, security analysts, ISPs, etc.) even to classify some of the traffic now traversing their
networks. Multiple efforts are underway to leverage machine learning10 and neural networks to
attempt to infer what may be happening ‘within’ this encrypted traffic.11
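The broad shape of those approaches is to classify flows from side-channel features (packet sizes, inter-arrival times, direction ratios) rather than from payload, which is no longer readable. The scikit-learn sketch below, trained on random stand-in data rather than any real capture, is meant only to illustrate that workflow, not to reproduce any particular published model:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Stand-in flow features: e.g. mean/std packet size, mean/std inter-arrival
    # time, upstream/downstream ratio, flow duration. Labels are invented
    # application classes (0=web, 1=VoIP, 2=bulk transfer).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 6))
    y = rng.integers(0, 3, size=1000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))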
The Trouble with QoS (Quality of Service)
Many commonly held beliefs about QoS are in fact not applicable to the wider
internet. While QoS can be (and usually is) applied within the equipment domain of an ISP or
private company, practically all of this packet management intent is lost at the point where the
ISP’s network peers with other ISP networks12. An exception to this is MPLS
(Multiprotocol Label Switching) links. These private circuits are effectively allocated a fixed
commitment of resources between two or more static endpoints owned by a commercial entity,
not by the wider internet. Such ‘leased lines’ are typically used to build a private Wide Area
Network (WAN) between commercial sites. Within these links, company-wide QoS can be
implemented and maintained, even over vast geographic spans. However, while the MPLS links
can be considered to leverage or traverse internet hardware and infrastructure, this is abstracted
away into a single private network under the control of a commercial entity, within which it can
implement whatever QoS policy it wishes.
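One concrete place this intent lives is the DSCP field of the IP header: an application or an ISP’s own equipment can mark packets for expedited treatment, but the marking is commonly rewritten or ignored once the packet crosses a peering point into another operator’s network. A minimal, Linux-oriented Python sketch follows (the destination is a documentation-range placeholder address):

    import socket

    # Mark outgoing UDP packets as EF ("expedited forwarding", DSCP 46).
    # Within one administrative domain the network gear can honour this;
    # beyond the peering edge the marking is frequently stripped or ignored.
    EF_DSCP = 46
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)  # DSCP sits in the top 6 bits of the TOS byte
    sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 5060))  # 192.0.2.10 / port 5060 are placeholders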
There have been many attempts over the years to find consensus on some form of end-to-end
(E2E) QoS across the public internet13, with varying degrees of adoption. However, ISPs
typically only evaluate their own QoS commitments and SLAs within their own equipment
domain. A recent CRTC report regarding QoS metrics for Canadian ISPs found the following:
Parties agreed that including the QoS of the global Internet beyond Canadian Tier 1 cities would
not be appropriate, since this does not constitute Canadian ISPs’ fixed broadband Internet
access network. Parties noted that it would be impossible for Canadian ISPs to measure QoS
beyond Canadian Tier 1 cities into the global Internet.14
Stratification of the Public Internet
For the purposes of this essay, the author has roughly classified ‘public internet traffic’
into the following three categories:
1. Leased, private links between two or more private endpoints (MPLS, etc.), with which ISPs
attempt to create a modern analogue to the leased communication lines of the past. However,
due to the converged nature of telecommunications equipment in the modern era, these now
exist atop the same packet-switched infrastructure as the rest of ‘the internet’.
2. Unencrypted public internet traffic, for which service differentiation is technically
possible. This is what the author considers ‘basic consumer internet’.
3. Encrypted public internet traffic, for which service differentiation is not technically
possible. This includes traffic that has become encrypted, decentralized, or otherwise
obfuscated.
Ethical Contentions
This leads to an interesting contention between unencrypted and encrypted public
traffic, and to an intersection between the ethics of Net Neutrality and the right to privacy. For
many reasons, not least grassroots opposition to already implemented traffic shaping and
manipulation, personal use of premium VPN services has been rising across much of the
internet15, even though users pay a significant premium above and beyond the base subscription
cost of internet service from an ISP.
From the ISP’s perspective, the encrypted traffic cannot be sub-divided and differentiated
based on content (sometimes not even by ‘destination’)16. Due to this hard technical limitation,
the ISP must treat the flow as an atomic, indivisible ‘dumb pipe’, which reduces its capability to
filter or prioritize packets to rudimentary anti-congestion routing algorithms.
How then, from a holistic traffic management and filtering perspective, can an ISP
prioritize or otherwise shape these encrypted, opaque traffic flows relative to other, more
malleable customer data flows? A shaping algorithm may be able to manipulate cleartext traffic,
but it can make no determination about the content of equally legitimate encrypted data. Should
the encrypted tunnel be considered less important merely because it is privacy-enforcing?
The author evaluated multiple public ITMPs (Internet Traffic Management Practices) from
Canadian ISPs and was only able to identify one ISP where this issue is being considered.17 This
ITMP places ‘most encrypted traffic’ at a lower priority than online gaming, social networking,
and basic web browsing, and at a higher priority than cloud backups, P2P file sharing, and other
bulk transfers. Given this example policy, a corporate user attempting to join a voice or video
call via their company VPN may have an unacceptable experience with these real-time protocols,
simply because they protected their privacy and their company’s security using encryption.
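To make that ordering concrete, the hypothetical Python sketch below mirrors the priority tiers described in the cited ITMP; the class names and the classification heuristic are illustrative inventions, not any ISP’s actual implementation:

    # Hypothetical priority tiers mirroring the ITMP described above
    # (smaller number = higher priority).
    PRIORITY = {
        "online_gaming": 1,
        "social_and_web": 2,
        "most_encrypted_traffic": 3,   # VPN tunnels and other opaque flows
        "bulk_transfer": 4,            # cloud backups, P2P file sharing
    }

    def traffic_class(flow):
        """Bucket a flow described by a dict with 'protocol' and 'encrypted' keys."""
        if flow.get("encrypted"):
            # A VoIP call inside a corporate VPN lands here, below gaming and
            # basic web browsing, solely because its contents cannot be inspected.
            return "most_encrypted_traffic"
        if flow.get("protocol") in ("game_udp", "game_tcp"):
            return "online_gaming"
        if flow.get("protocol") in ("http", "social"):
            return "social_and_web"
        return "bulk_transfer"

    print(traffic_class({"protocol": "rtp_voice", "encrypted": True}))  # -> most_encrypted_traffic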
Aside from basic link-capacity congestion mitigation (FIFO or similar algorithms), the
author is of the opinion that prioritizing general purpose internet access via traffic classification
and application filtering becomes not only unethical in the face of a rising volume of encrypted
traffic, but untenable given current trends.
Conclusion
The initial ideal of ‘dumb pipes’ should be aspired to and implemented as far as
technically possible by public, citizen-facing service providers, without resorting to traffic
shaping, deep packet inspection, rate limiting, or other Quality of Service (QoS) technologies.
The current paradigm of the end user (the ISP customer) as a mere consumer of content
from large companies, while a natural and popular use of the medium’s capability, is at odds with
the original intended design of the ‘internet’ itself. At its root, the internet was designed as a
method of digital communication between two or more peers over a resilient, multi-path, routed
communication network.
Almost as if by design, the Internet ‘interprets censorship as damage and routes around
it’18. While what constitutes censorship is clearly subjective across individuals, groups, and even
cultures, any attempt to augment or shape communication traffic will always be a point of
contention, and for every attempt to enact or enforce some form of traffic filtering and
manipulation, there will be technical attempts to thwart it.
The author has grave concerns that the contention between an end user’s right to privacy
and neutral access to the wider public internet will eventually reach an inflection point with
respect to contemporary and future encryption technologies, and the private and public sectors’
(in)ability to interrogate and analyze the underlying content of that traffic (a key requirement for
deviating from true neutrality). While fully aware that such powerful technologies will give rise
to both beneficial and detrimental societal changes, the author notes that encryption and the right
to privacy appear to be an almost binary proposition. Any compromise in encryption’s technical
implementation is likely to be terribly abused or subverted by malicious actors (through a
mistake in that implementation or a lapse in other security), while outlawing or attempting to ban
its use has extremely dystopian overtones (no expectation of privacy in an increasingly online
world) and is just as unlikely to be technically enforceable.
The promise of end-to-end encryption is, ultimately, a simple value proposition: it’s the idea that
no one but you and your intended recipients can read your messages. There’s no amount of
wordsmithing that can get around that.19
Notes

1. https://www.internetsociety.org/internet/history-internet/brief-history-internet/
2. https://www.washingtonpost.com/sf/business/2015/05/30/net-of-insecurity-part-1/
3. https://dnsprivacy.org/the_solutions/
4. https://arstechnica.com/tech-policy/2019/09/isps-worry-a-new-chrome-feature-will-stop-them-from-spying-on-you/
5. https://www.pcmag.com/how-to/how-to-install-a-vpn-on-your-router
6. https://internethealthreport.org/v01/decentralization/
7. https://www.torproject.org/about/history/
8. https://geti2p.net/en/about/intro
9. https://freenetproject.org/pages/about.html
10. Adversarial Network Traffic: Towards Evaluating the Robustness of Deep Learning-Based Network Traffic Classification - https://arxiv.org/pdf/2003.01261.pdf
11. VoIP Traffic Detection in Tunneled and Anonymous Networks Using Deep Learning - https://ieeexplore.ieee.org/document/9406580
12. https://www.nojitter.com/qos-becoming-irrelevant
13. https://en.wikipedia.org/wiki/Quality_of_service#End-to-end_quality_of_service
14. https://crtc.gc.ca/eng/archive/2018/2018-241.htm
15. https://www.go-globe.com/vpn-usage-statistics/
16. https://www.torproject.org/about/history/
17. Telesat Ka2 Satellite Traffic Management Policy - https://www.xplornet.com/policies/usage-traffic-policies/telesat-ka2-satellite-traffic-management-policy-may-20-2016/
18. https://en.wikipedia.org/wiki/John_Gilmore_(activist)#Activism
19. https://www.eff.org/deeplinks/2019/12/fancy-new-terms-same-old-backdoors-encryption-debate-2019