7 November 2014

Going to the Edge with Security

Stratecast Analysis by Michael P. Suby
Stratecast Perspectives & Insight for Executives (SPIE) Volume 14, Number 41

Introduction [1]

In building construction, effective supply chain management is critical to completing projects on time and within budget. In the sequenced orchestration of construction, delayed delivery of building supplies in one stage can have cascading implications for later stages. Recognizing this risk, building contractors make calculated choices among delivery approaches. A tangible example is the mixing of cement, aggregates, and water to produce concrete. Should the concrete be produced on or in close proximity to the job site, or should it be produced at an off-site location and transported to the job site? While many factors go into the decision on where to mix the raw materials (e.g., available space at the job site for storing raw materials and mixing, local ordinances, economies of scale, and quality control), the delivery of wet concrete is also a consideration. If, for example, the transportation of wet concrete from the off-site mixing plant to the job site is unpredictable or excessively long relative to mixing on-site, the construction project timeline would need to be extended, and the anticipated project cost increased to compensate. In a manner of speaking, there are trade-offs.

Might this analogy on location trade-offs be applicable to the delivery of information and network security? Stratecast believes it is. While the use of security technologies by businesses is critical in managing risk, trade-offs are present. For example, in distributed denial of service (DDoS) security, redirecting inbound Web site traffic to a scrubbing center adds network latency and processing time to the end-to-end delivery of legitimate traffic; time that could reach a level noticeable to site visitors (a simple illustration of this trade-off appears at the end of this introduction). While an acceptable trade-off to the Web site owner relative to the potential alternative of a disrupted Web site, it is a trade-off nonetheless. Additionally, there is the implicit cost of the network transport used to direct inbound Web site traffic to the scrubbing center, and then to return legitimate traffic to the Web site. This network usage is not free; the cost is included in the price of the DDoS security service. This is just one example of the trade-offs with a security approach that relies on redirecting network traffic to a centralized processing center. Similar trade-offs in terms of security and network infrastructure investments and latency are present if security processing is conducted at an on-premises gateway location (e.g., at a business network perimeter or in front of a data center).

Perhaps a relocation of security processing is in order. In this SPIE, we examine an alternative approach of pushing security processing outward to the edge of carrier networks. In preparing this report, Stratecast conducted interviews with:

• Akamai – Various executives at Akamai's Analyst Summit on October 16, 2014.
• CenturyLink – Randy Tucker, Senior Marketing Manager and Product Strategist - Network, Hosting, and Cloud Solutions; and Peter Brecl, Senior Product Manager.
• Fortinet – Stephan Tallent, Director MSSP Americas.

[1] Please note that the insights and opinions expressed in this assessment are those of Stratecast and have been developed through the Stratecast research and analysis process. These expressed insights and opinions do not necessarily reflect the views of the company executives interviewed.
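To make the redirection trade-off described above concrete, the following minimal sketch compares the one-way delay of a direct visitor-to-site path against a path that detours through a scrubbing center. All latency figures and path legs are hypothetical assumptions chosen for illustration; they are not measurements from any provider discussed in this report.

# Illustrative comparison of end-to-end delay with and without redirecting
# inbound Web traffic through a DDoS scrubbing center. All figures below are
# hypothetical assumptions for illustration, not measured values.

def one_way_delay_ms(legs_ms):
    """Sum the per-leg propagation and processing delays, in milliseconds."""
    return sum(legs_ms)

# Direct path: visitor -> Web site (assumed 40 ms)
direct_path_ms = [40.0]

# Redirected path: visitor -> scrubbing center -> Web site,
# plus inspection time inside the scrubbing center (all assumed values)
redirected_path_ms = [35.0,  # visitor to scrubbing center
                      5.0,   # inspection and policy processing
                      25.0]  # scrubbing center to Web site

added_ms = one_way_delay_ms(redirected_path_ms) - one_way_delay_ms(direct_path_ms)
print(f"Added one-way delay from redirection: {added_ms:.1f} ms")

# The detour also consumes transport capacity twice: once to carry inbound
# traffic to the scrubbing center, and again to return legitimate traffic
# to the Web site; that cost is folded into the price of the service.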
More than Protection

The prospect of moving security processing to the edge of carrier networks is not exclusively for the benefit of optimizing the performance experience of connected users and reducing costs; it also has a bearing on the changing nature of network traffic flows and the implications for congestion. Conceptually, a looming issue with changing network traffic flows is the rising bandwidth capacity and use of access networks (wireless and wired) relative to the capacity of core networks. Simplistically stated: is the cumulative effect of the escalating number of connected endpoints (e.g., mobile devices, homes, business locations, and the Internet of Things—IoT) and destinations (e.g., Web sites and cloud), changes in high-bandwidth traffic patterns (e.g., cloud-to-cloud, on-premises data center-to-cloud), plus generational leaps in mobile (from 3G to 4G and then to 5G) and wired (from megabit TDM to gigabit and beyond with Ethernet) access networks outstripping the capacity of core networks—much like arteries (access networks) funneling into a traffic circle (core network)?

Definitive evidence on the overall extent of this looming issue of core network congestion is hard to ascertain—a critical tipping point, per se, is more of a projection than a massive, near-term reality. Nevertheless, there are signs that it is approaching; and, positively, there are also examples of security-at-the-edge already in the market.

• Delivery of Web Pages is Slowing Down – As assembled by Akamai, the average page load time has increased by over 60% in the last two years (from 6.6 seconds in 2012 to 10.7 seconds in 2014), as shown in the table below. Increases in page size and number of objects, plus an increase in mobile access, are part of the cause; but the trend is also reflective of a delivery infrastructure that is not keeping pace with the changing dimensions of what is being delivered and how.

Exhibit 1: Delivery of Web Pages is Slowing Down

                                        2012      2013      2014
  Typical Page Size (kilobytes, KB)     788       1,081     1,622
  Typical Number of Objects             88        101       112
  Mobile Penetration                    9%        19%       30%
  Average Page Load Time (seconds)      6.6       7.2       10.7

  Source: httparchive.org, Akamai, Radware

• DDoS Scrubbing Center Capacity is Growing – Reflecting the increasing frequency and size of DDoS attacks, DDoS security service providers are increasing their scrubbing center capacity.[2] Akamai, for example, increased its capacity by 70%, from 1.85 Terabits per second (Tbps) in 2013 to 3.15 Tbps in 2014, and recently opened scrubbing centers in the Asia-Pacific region. Similarly, Arbor Networks is planning an expansion of its Arbor Cloud DDoS scrubbing capacity to 1.5 Tbps by mid-2015.[3] If demand materializes for this capacity, an increasing load on core networks follows. However, for Akamai, with the scrubbing centers it gained in the Prolexic acquisition, the incremental load on core networks due to traffic rerouting is tempered. The core network used by Akamai to reroute traffic is Internet route-optimized (i.e., it avoids congestion points) via the company's real-time traffic analysis and routing algorithms.
• Blocking of DDoS Attacks is Moving to the Edge – In a nod to reducing core network usage in mitigating DDoS attacks, and a change of course from its earlier approach of blocking attack traffic in its scrubbing centers, AT&T has moved blocking upstream, nearer to where DDoS attack traffic originates. According to AT&T, within 15 minutes (on average) after attack traffic has reached its scrubbing centers, 90% of the traffic is blocked upstream in its network, rather than being blocked in the scrubbing center. CenturyLink, with its DDoS Mitigation Service, also blocks confirmed attack traffic in a distributed fashion—frequently, at carrier network peering points. Akamai follows a different but highly effective approach. With its original DDoS service, Kona Site Defender, Akamai's globally distributed edge servers purge DDoS attack traffic from incoming Web site traffic (i.e., scrubbing and blocking at the edge). Incidentally, Kona Site Defender forms the basis for IBM's managed Web defense service; and Akamai is actively pursuing resell and white-label arrangements with other managed security service providers.

• Unified Threat Management (UTM) is Also Moving to the Network – CenturyLink has taken the concept of network-based security services and placed it on a more distributed and virtual plane. With its Network-Based Security service, businesses can subscribe to a suite of security services (firewall, VPN, intrusion detection and prevention, anti-malware, Web content filtering, and data loss prevention) hosted as customer-specific virtual instances at CenturyLink's IP/MPLS [4] points of presence (PoPs). CenturyLink uses Fortinet's technology to deliver the service. With a PoP deployment, each of the customer's locations sits directly on the doorstep to the Internet. Internet-destined traffic from branch offices and remote locations is not redirected to a headquarters gateway for security treatment, or to a small number of regional, multi-tenant security platforms. The performance hit due to network hairpin turns is essentially eliminated.

• For One Provider, the Web Application Firewall has Always Been at the Edge – Akamai's Kona Site Defender is also an edge-based Web Application Firewall (WAF).[5] Built on Akamai analysis to identify Web application threats (e.g., cross-site scripting and SQL injection), customer-specific mitigation policies are broadcast to Akamai's global edge servers to block threatening traffic (a simplified illustration of edge policy enforcement follows this list). Unique in its edge-based WAF approach, Akamai is set to improve the service in 2015 through the introduction of enhanced capabilities in policy activation and tuning, and also in threat monitoring.

[2] In-depth analysis of the market and providers of DDoS security platforms and services is contained in Frost & Sullivan's Analysis of the Global Distributed Denial of Service (DDoS) Mitigation Market (NDD2-74), July 2014. To obtain a copy of this report or any other Stratecast or Frost & Sullivan report, please contact your account representative or email inquiries@stratecast.com.
[3] Arbor Networks Announces Multi-Terabit per Second Mitigation Capacity Expansion for Arbor Cloud DDoS Protection Service, Press Release (September 30, 2014).
[4] Internet Protocol/Multiprotocol Label Switching.
[5] Extensive analysis of this evolving product category is contained in Analysis of the Global Web Application Firewall Market (NE28-74), October 2014.
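As a simplified illustration of the edge enforcement model described in the WAF bullet above, the sketch below shows how an edge node might evaluate broadcast mitigation policies against an incoming request. The policy names, rule patterns, and matching logic are generic assumptions for illustration only; they do not represent Kona Site Defender's actual rule format or engine.

import re

# A deliberately simplified, generic sketch of edge-based WAF policy
# enforcement. The policies below are illustrative assumptions, not any
# vendor's actual rule set.

MITIGATION_POLICIES = [
    # (policy name, compiled detection pattern, action on match)
    ("sql-injection", re.compile(r"(?i)union\s+select|or\s+1\s*=\s*1"), "block"),
    ("cross-site-scripting", re.compile(r"(?i)<script\b|javascript:"), "block"),
]

def inspect_at_edge(query_string: str) -> str:
    """Return the action for a request: 'block' if a policy matches,
    otherwise 'forward' toward the origin Web site."""
    for name, pattern, action in MITIGATION_POLICIES:
        if pattern.search(query_string):
            return action  # threatening traffic never leaves the edge
    return "forward"

print(inspect_at_edge("id=10 UNION SELECT password FROM users"))  # block
print(inspect_at_edge("q=edge+security"))                         # forward

Because a check like this runs at the edge location closest to the requester, blocked traffic consumes neither core network nor origin capacity, and the same policy set can be distributed to every edge location.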
More to Come

The service examples in DDoS, UTM, and WAF are tangible demonstrations that traditional security can be as effective in an edge-based approach as in centralized platforms, and can produce the additional benefits of improved performance (i.e., lower latency) and reduced consumption of network capacity. Stratecast believes more edge-based security developments will unfold in the future. Potential developments include:

• Establishing Multiple Layers of Protection – With edge-based security, the potential to have layers of complementary protection is possible: at the distributed edge or edges, and at a centralized gateway. Similar to the escalating skill levels in video games, protection policies can be escalated to thwart attackers in a stepwise fashion. At the outer edge, general policies that yield high results in purging bad traffic, with limited processing requirements and fast throughput, are employed. As the traffic moves closer to the protected asset (e.g., a Web site), the policies become more sophisticated and require more processing resources, to trip up attackers of more modest skills. At this stage, the throughput demands are lower, as the volume of traffic to process is less than at the outer edge. Before reaching the protected asset, a final set of highly sophisticated and compute-heavy policies is used to thwart expert attackers. With this multi-layer approach, the development and deployment of policies is optimized for both security efficacy and resource utilization, to produce a balanced approach (a conceptual sketch of this staged filtering appears after this list).

• Protecting the Internet of Things (IoT) – The deployment and use of traditional endpoint security software on smartphones and tablets trails PCs by a wide margin, despite attractive multi-device packaging by vendors of endpoint security software.[6] As the IoT moves forward, the risk of compromise will surely rise. Yet, the prospect that the innumerable variations of device types in the IoT will be inherently secure (e.g., via embedded security), or will buck the trend of mobile devices and have after-market security software extensively deployed, is unlikely. A new approach to protect these devices from the risks of being Internet-connected, and likely connected 24x7, is needed. With effective instrumentation, management, and monitoring, edge security could be the type of low-cost and reliable means to protect literally thousands, if not millions or billions, of IoT devices at their point of Internet connectivity.
• Controlling Access Permissions – The true identity of the end user associated with a connecting device, and the security state of that device, are essential pieces of information in determining who has access to what; and this includes public-facing resources (e.g., Web sites and Software as a Service resources). Unfortunately, interrogation of user identity and of device properties and security state typically occurs at or after a connection into a “protected” environment, or not at all. Change is in the wind. In identity and access management (IAM), the movement to cloud-delivered IAM capabilities is gaining momentum. The latest announcement from IBM of a new Cloud Identity Service,[7] built on the company's recent acquisition of Lighthouse Security Group, is just one of many indicators of growing market demand for pervasive and upscale IAM capabilities.[8] Additionally, Network Access Control (NAC), as examined in an upcoming Frost & Sullivan market analysis study, is on a resurgence trajectory. This is also an indicator of growing market demand to assess risk as a standard routine in making decisions on access permissions. Edge security, in Stratecast's view, either in stand-alone mode or in collaboration with cloud or on-premises IAM and NAC solutions, could become an effective mechanism to improve access permission control—broadly, consistently, and cost effectively.

[6] Market analysis on endpoint security products is included in Frost & Sullivan's Analysis of the Endpoint Security Market (NE3F-74), September 2014.
[7] IBM Unveils Industry's First Intelligent Cloud Security Portfolio for Global Businesses, Press Release (November 5, 2014).
[8] Analysis on Lighthouse Security Group is contained in Following the Cloud-Tailored Model in Identity & Access Management (SPIE 2013-37), October 11, 2013.
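The staged escalation of protection policies described in the first bullet of the list above can be expressed as a simple filtering pipeline. The sketch below is conceptual only; the layer names, example rules, relative costs, and sample traffic are illustrative assumptions rather than a description of any provider's architecture.

from dataclasses import dataclass
from typing import Callable, Dict, List

# Conceptual sketch of layered, edge-to-core filtering: each successive layer
# applies a more sophisticated (and more compute-heavy) policy to the smaller
# volume of traffic forwarded by the layer before it. All names, rules, and
# traffic samples are illustrative assumptions.

@dataclass
class Layer:
    name: str                              # where the policy runs
    relative_cost: str                     # processing cost per request
    is_malicious: Callable[[Dict], bool]   # the policy applied at this layer

LAYERS: List[Layer] = [
    # Outer edge: cheap, coarse checks applied to the full traffic volume
    Layer("outer edge", "low", lambda r: r.get("source") == "known-botnet"),
    # Closer to the asset: behavioral checks on the surviving traffic
    Layer("regional node", "medium", lambda r: r.get("rate_per_min", 0) > 1000),
    # In front of the protected asset: compute-heavy payload inspection
    Layer("gateway", "high", lambda r: "union select" in r.get("payload", "").lower()),
]

def filter_traffic(requests: List[Dict]) -> List[Dict]:
    """Pass traffic through each layer in turn; a layer only sees what the
    previous layer forwarded, so throughput demands shrink at each stage."""
    remaining = requests
    for layer in LAYERS:
        remaining = [r for r in remaining if not layer.is_malicious(r)]
        print(f"{layer.name} ({layer.relative_cost} cost): {len(remaining)} requests forwarded")
    return remaining

filter_traffic([
    {"source": "known-botnet", "rate_per_min": 10, "payload": ""},
    {"source": "home-user", "rate_per_min": 5000, "payload": ""},
    {"source": "home-user", "rate_per_min": 3, "payload": "union select *"},
    {"source": "home-user", "rate_per_min": 3, "payload": "hello"},
])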
The Last Word

The concept of moving security operations to the network edge—that is, closer to the points of traffic origination—has merit. For traffic payloads where speed in end-user engagements is critical, an at-the-edge approach helps reduce the network latency caused by rerouting traffic for inspection and treatment to a central or regional processing center, which may not be in the most route-optimized path between the end user and the asset he or she is connecting to. Additionally, for network carriers who are also providers of security services, an edge security approach contributes to the optimization of core network resources, as the need for network optimization expands due to the advancing capabilities of access networks and endpoint devices, and as network traffic patterns become denser (e.g., in the use of cloud-based services). There simply may not be enough core network infrastructure to serve all the demand for capacity at the performance level required.

Although logical on paper, moving security to the edge is not as simple as porting a security operation from its current location (e.g., an on-premises gateway or a regional platform hosted in a carrier network) to a multitude of distributed edge locations. Certainly there are technology considerations to be addressed. Yet, with mature and maturing foundational technologies like virtualization, software defined networking (SDN), and network function virtualization (NFV), the technical hurdles seem solvable. What can be a more challenging hurdle is control. In the security discipline, customers of security solutions demand control, and control requires comprehensive visibility and validation of each of their individual security instances. When information and network security was younger and principally based on a “box on-premises” approach, logical and physical control was assured. As network-based security options became available, delivered from either a multi-tenant platform or a customer-dedicated appliance hosted off-premises in a carrier network, providers had to establish assurances to subscribers that security integrity and control would not drop relative to on-premises appliances. In an edge-based approach, this hurdle of assurance and retention of control must scale with the number of edge security instances.

Manageability is another key consideration. With security and IT organizations accountable for security operations at various physical and virtual locations—on-premises, endpoint devices, network-based platforms, in the cloud, and at the edge—the orchestration and monitoring complexity multiplies. With this multiplication, the potential for a decline in security integrity increases; there are too many balls to juggle.

Therefore, in order for an edge-based security approach to flourish, technical and operational perspectives, from both the provider and the customer, must be considered and addressed. This has been done, as the edge-based security examples cited in this SPIE demonstrate. However, none were accomplished in short order or alone. Years of planning and preparation took place, and collaboration with technology providers was essential. Potentially, these early examples will pave the way to a faster cycle in the introduction of new forms of edge-based security solutions.

Michael P. Suby
VP of Research
Stratecast | Frost & Sullivan
msuby@stratecast.com

About Stratecast

Stratecast collaborates with our clients to reach smart business decisions in the rapidly evolving and hypercompetitive Information and Communications Technology markets. Leveraging a mix of action-oriented subscription research and customized consulting engagements, Stratecast delivers knowledge and perspective that is only attainable through years of real-world experience in an industry where customers are collaborators; today's partners are tomorrow's competitors; and agility and innovation are essential elements for success. Contact your Stratecast Account Executive to engage our experience to assist you in attaining your growth objectives.

About Frost & Sullivan

Frost & Sullivan, the Growth Partnership Company, works in collaboration with clients to leverage visionary innovation that addresses the global challenges and related growth opportunities that will make or break today's market participants. For more than 50 years, we have been developing growth strategies for the Global 1000, emerging businesses, the public sector and the investment community. Is your organization prepared for the next profound wave of industry convergence, disruptive technologies, increasing competitive intensity, Mega Trends, breakthrough best practices, changing customer dynamics and emerging economies? For more information about Frost & Sullivan's Growth Partnership Services, visit http://www.frost.com.

CONTACT US
For more information, visit www.stratecast.com, dial 877-463-7678, or email inquiries@stratecast.com.

© Stratecast | Frost & Sullivan, 2014