Some Vulnerabilities Are Different Than Others: Studying Vulnerabilities and Attack Surfaces in the Wild
Kartik Nayak, Daniel Marino, Petros Efstathopoulos, Tudor Dumitras

Data Breaches and Cyber Attacks Are Common!
- Recent iCloud bug, iOS vulnerability
- Adobe Reader media API vulnerability
- Conficker worm
Newer security technologies have been introduced in response:
- Address space layout randomization and data execution prevention (Windows Vista)
- Sandboxing/protected mode (Adobe Reader 10, Internet Explorer 7)

Can we measure the impact of any of these system-security techniques?

Effectiveness of Current Security Metrics
- CVSS scores do not reflect the creation of proof-of-concept exploits [Bozorgi et al. '10]
- CVSS scores do not reflect exploits in the wild [Allodi et al. '13]
- Classical defect prediction metrics (e.g., code complexity, churn, coverage) have low recall when predicting vulnerabilities [Zimmermann et al. '10]
- Attack surface [Manadhata et al. '10] is difficult to assess in the field

"If you cannot measure it, you cannot improve it." -- Lord Kelvin

In a Nutshell
- Existing security metrics reflect the developer's view, not the deployment environment
- Our security metrics reflect the deployment environment and let us evaluate the impact of system-security techniques
- Example, IE 5 vs. Office 2000: IE 5 has three times as many disclosed vulnerabilities as Office 2000, yet both have the same number of exploited vulnerabilities

Outline
- Current metrics and their limitations
- Proposed metrics
- Data analysis approach
- Data analysis: "if/whether" metrics, "how often" metrics, "when" metrics
- Implications
- Conclusion

Current Metrics and Their Limitations
- Number of vulnerabilities
- Common Vulnerability Scoring System (CVSS): impact on confidentiality, integrity, and availability; access complexity, access vector, etc.; scores are assigned by security vendors
  CVSS BaseScore = ((0.6 * Impact) + (0.4 * Exploitability) - 1.5) * f(Impact)
  These scores do not capture (i) the likelihood of attacks or (ii) their severity [Allodi et al. '12, Allodi '13, Bozorgi et al. '10]
- Attack surface [Howard et al. '03, Manadhata et al. '11, Kurmus et al. '13]: the set of ways in which an adversary can enter the system and cause damage
  Requires access to source code and is hard to measure; users can also modify the attack surface (e.g., by installing software or reconfiguring services)
These metrics do not reflect security in the field.

Proposed Metrics
- "If/whether" a vulnerability is exploited: exploitation ratio and count of exploited vulnerabilities (per product)
- "How often" are products attacked: exercised attack surface and attack volume (per host)
- "When" are attacks against vulnerabilities seen
These metrics are based on field data (i.e., computed on real hosts) and are computed per product or per host.

Data Sets Used
- National Vulnerability Database (NVD): CVE, vulnerable software list
- Symantec signatures (CVE, signature): IPS and AV signatures, obtained by screen-scraping Symantec web pages (links: ter.ps/azlisting, ter.ps/attacksign)
- Worldwide Intelligence Network Environment (WINE): IPS telemetry (machine ID, attack time, signature) and binary reputation (machine ID, binary, time); 300 million IPS entries from 6 million hosts, collected from 2009 to 2014
- Only network-based attacks are considered

Data Analysis Approach (1)
NVD lists the disclosed vulnerabilities of each product; Symantec signatures map attack signatures to the CVEs they detect. Combining the two gives per-product exploitation counts:
- Product A: disclosed CVE-1, CVE-2, ..., CVE-5, ..., CVE-10; signatures Sign A (CVE-2), Sign B (CVE-3), Sign C (CVE-3), Sign D (CVE-8). Exploited vulnerabilities: 3; exploitation ratio: 0.3
- Product B: disclosed CVE-11, CVE-12, ..., CVE-15; signatures Sign E (CVE-12), Sign F (CVE-14). Exploited vulnerabilities: 2; exploitation ratio: 0.4
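This combination step is simple to express in code. Below is a minimal sketch, assuming hypothetical in-memory dictionaries in place of the real NVD and Symantec data sets; it reproduces the toy numbers above.

```python
# Minimal sketch of Data Analysis Approach (1), with hypothetical
# in-memory stand-ins for the NVD and Symantec signature data sets.

# NVD: product -> set of disclosed CVEs (toy data mirroring the slide).
nvd = {
    "Product A": {f"CVE-{i}" for i in range(1, 11)},   # CVE-1 .. CVE-10
    "Product B": {f"CVE-{i}" for i in range(11, 16)},  # CVE-11 .. CVE-15
}

# Symantec signatures: attack signature -> CVE it detects.
signatures = {
    "Sign A": "CVE-2", "Sign B": "CVE-3", "Sign C": "CVE-3",
    "Sign D": "CVE-8", "Sign E": "CVE-12", "Sign F": "CVE-14",
}

# A CVE is "exploited in the wild" if some attack signature targets it.
exploited_cves = set(signatures.values())

for product, disclosed in nvd.items():
    exploited = disclosed & exploited_cves
    ratio = len(exploited) / len(disclosed)
    print(f"{product}: exploited={len(exploited)}, ratio={ratio:.1f}")
# Product A: exploited=3, ratio=0.3
# Product B: exploited=2, ratio=0.4
```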
Data Analysis Approach (2)
WINE IPS telemetry records attacks (machine ID, attack time, signature):
  Mach-1, 12 Jan 2014, Sign A
  Mach-1, 13 Jan 2014, Sign B
  Mach-1, 16 Jan 2014, Sign A
  Mach-1, 17 Jan 2014, Sign E
  Mach-1, 20 Jan 2014, Sign F
WINE binary reputation records installations (machine ID, binary, time):
  Mach-1, Product A, 1 Jan 2014
  Mach-1, Product B, 1 Jan 2014
Joining the two data sets attributes each attack to an installed product:
  Mach-1, Product A, 12 Jan 2014, Sign A, CVE-2
  Mach-1, Product A, 13 Jan 2014, Sign B, CVE-3
  ...
From these records we compute the exercised attack surface and the attack volume.

Products Analyzed
- Microsoft Windows: XP, Vista, 7
- Microsoft Office: 2000, 2003, 2007, 2010
- Internet Explorer: versions 5-8
- Adobe Reader: versions 5-11
These are popular products and hence good candidates for study.

"If/Whether" a Vulnerability Is Exploited?
Exploitation ratio = (number of exploited vulnerabilities) / (number of disclosed vulnerabilities)
- Also: count of vulnerabilities exploited in the wild
- Per-product metrics
- Recall Product A above: 3 of 10 disclosed vulnerabilities were exploited, for an exploitation ratio of 0.3

Have cyber attackers simply had less time to attack newer products?
- No: 90% of vulnerabilities are attacked within 94 days of disclosure, and most are exploited within 30 days
- Analysis using Symantec signatures, OSVDB, and WINE
- Result also confirmed in [Frei '09, Bilge et al. '12, Microsoft Security Intelligence Report '14]

Variation of Exploitation Ratio Between Products
- On average, only 15% of the vulnerabilities were exploited
- The exploitation ratio varies between products: Adobe Reader < Microsoft Office
- IE 5 and Office 2000 have the same number of exploited vulnerabilities (27), but the exploitation ratio of Office 2000 is roughly three times that of IE 5
- This demonstrates the pitfall of using the number of vulnerabilities as a security measure

  Product               Exploitation ratio   Exploited vulnerabilities
  Office 2000           0.32                 27
  Office 2003           0.35                 43
  Office 2007           0.27                 18
  Office 2010           0.25                 5
  Adobe Reader 8        0.16                 29
  Adobe Reader 9        0.11                 29
  Adobe Reader 10       0.08                 13
  Adobe Reader 11       0.06                 5
  Internet Explorer 5   0.12                 27

Newer Versions Have Fewer Exploited Vulnerabilities
- Newer versions have fewer exploited vulnerabilities and a lower exploitation ratio: Adobe Reader 10, 11 < Adobe Reader 8, 9; Windows Vista, Windows 7 < Windows XP
- Newer security technologies were introduced: address space layout randomization (ASLR), data execution prevention (DEP), sandboxing
- Commoditization of exploits

  Product           Exploitation ratio   Exploited vulnerabilities
  Windows XP        0.21                 39
  Windows 7         0.07                 20
  Adobe Reader 8    0.16                 29
  Adobe Reader 9    0.11                 29
  Adobe Reader 10   0.08                 13
  Adobe Reader 11   0.06                 5

[Figure: OS security as seen by developers vs. OS security as seen by users]

"How Often" Are Vulnerabilities Exploited?
- Exercised attack surface: the number of distinct vulnerabilities exploited on a host in a given month
- Attack volume: the number of attacks experienced by a host in a given month
- Per-host metrics
Example (Mach-1, January 2014):
  Mach-1, Product A, 12 Jan 2014, Sign A, CVE-2
  Mach-1, Product A, 13 Jan 2014, Sign B, CVE-3
  Mach-1, Product A, 14 Jan 2014, Sign B, CVE-3
  Mach-1, Product A, 16 Jan 2014, Sign D, CVE-8
  Mach-1, Product B, 20 Jan 2014, Sign E, CVE-12
  Mach-1, Product B, 24 Jan 2014, Sign E, CVE-12
Exercised attack surface: 4 (CVE-2, CVE-3, CVE-8, CVE-12); attack volume: 6
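A minimal sketch of this computation, assuming hypothetical joined records (as produced by Data Analysis Approach (2)) in place of the real WINE data; it reproduces the Mach-1 numbers above.

```python
# Per-host metrics from joined attack records:
# (machine, product, date, signature, CVE).
from collections import defaultdict

attacks = [
    ("Mach-1", "Product A", "2014-01-12", "Sign A", "CVE-2"),
    ("Mach-1", "Product A", "2014-01-13", "Sign B", "CVE-3"),
    ("Mach-1", "Product A", "2014-01-14", "Sign B", "CVE-3"),
    ("Mach-1", "Product A", "2014-01-16", "Sign D", "CVE-8"),
    ("Mach-1", "Product B", "2014-01-20", "Sign E", "CVE-12"),
    ("Mach-1", "Product B", "2014-01-24", "Sign E", "CVE-12"),
]

surface = defaultdict(set)  # (machine, month) -> distinct CVEs attacked
volume = defaultdict(int)   # (machine, month) -> total attacks observed

for machine, product, date, signature, cve in attacks:
    month = date[:7]  # e.g. "2014-01"
    surface[(machine, month)].add(cve)
    volume[(machine, month)] += 1

for key in surface:
    print(key, "exercised attack surface:", len(surface[key]),
          "| attack volume:", volume[key])
# ('Mach-1', '2014-01') exercised attack surface: 4 | attack volume: 6
```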
Exercised Attack Surface for Product Versions
- Exploited vulnerabilities: Reader < IE, yet Reader is easier for cyber criminals to exploit than IE
- Newer security technologies introduced in Reader and IE: Protected Mode
- Newer security technologies introduced in Vista and Windows 7: ASLR, DEP

Variation of Exercised Attack Surface over Time
- Spikes correspond to attacks on CVE-2009-4324
- Applications contribute more to the attack surface than the OS

Can We Reduce the Attack Surface?
- We analyzed the OS vulnerabilities with the most attacks
- The services corresponding to 6 of the top 10 OS vulnerabilities could be disabled
- The others could not be disabled (e.g., vulnerabilities in files related to the kernel)

"When" Are Vulnerabilities Exploited?
- How long do machines survive? Survival means the machine has not been attacked through a vulnerability of an installed product
- We use the Kaplan-Meier estimator to compute survival probability
- It accounts for censored data (e.g., hosts may leave our study without being attacked)

Survival of Windows vs. Reader Vulnerabilities
- Newer security technologies introduced in Reader 10 and 11: Protected Mode
[Figure: survival curves for Windows and Reader vulnerabilities]

Survival of IE Vulnerabilities
- Newer security technologies introduced in IE 7: Protected Mode
[Figure: survival curves for IE vulnerabilities]

Product Upgrade Lag
- Product upgrade lag: how long a user continues to use a product after a newer version is released
- We take the maximum upgrade lag over all versions of the product
[Figure: timeline (2000-2014) of one user's installed versions against the release dates of versions 2-5, illustrating the upgrade lag]
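The survival curves above come from the Kaplan-Meier estimator. Below is a minimal sketch of the estimator itself, assuming hypothetical per-host observations; real analyses would typically use a library such as lifelines.

```python
# Kaplan-Meier survival estimate. Each observation is
# (days until attack, or until leaving the study; True if attacked,
#  False if censored, i.e. the host left the study unattacked).
observations = [
    (30, True), (45, False), (60, True), (60, True),
    (90, False), (120, True), (150, False),
]

def kaplan_meier(observations):
    """Return [(time, survival probability)] at each observed attack time."""
    events = sorted(observations)       # sorted by time; ties are contiguous
    at_risk = len(events)
    survival = 1.0
    curve = []
    i = 0
    while i < len(events):
        t = events[i][0]
        attacked = sum(1 for time, hit in events if time == t and hit)
        removed = sum(1 for time, _ in events if time == t)
        if attacked > 0:
            # Multiply in the conditional probability of surviving time t.
            survival *= 1 - attacked / at_risk
            curve.append((t, survival))
        at_risk -= removed              # attacked and censored hosts leave
        i += removed
    return curve

print(kaplan_meier(observations))
# [(30, 0.857...), (60, 0.514...), (120, 0.257...)]
```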
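The upgrade lag itself is a simple difference of dates, maximized over versions. A minimal sketch, assuming hypothetical release and upgrade dates for one user:

```python
# Product upgrade lag for one user (all dates are hypothetical).
from datetime import date

# Release date of each successor version of the product.
releases = {"V2": date(2002, 1, 1), "V3": date(2005, 1, 1), "V5": date(2010, 1, 1)}

# Date at which this user actually upgraded to each version.
upgrades = {"V2": date(2003, 6, 1), "V3": date(2006, 1, 1), "V5": date(2013, 1, 1)}

# Per-version lag: how long the user kept the old product after the release.
lags = {v: (upgrades[v] - releases[v]).days for v in releases}

# The per-product metric is the maximum lag over all versions.
print(f"{max(lags.values()) / 365.25:.1f} years")  # -> 3.0 years
```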
Effect of Product Upgrade Lag on Exercised Attack Surface
- Does the length of time users take to upgrade have an effect on the attack surface?
[Figure: exercised attack surface vs. upgrade lag]

Can we measure the impact of any of the system-security techniques?
- Our metrics show an improvement after sandboxing/Protected Mode was introduced in Adobe Reader 10 and Internet Explorer 7
- Security was a major goal for Windows Vista; our metrics show an improvement in Windows Vista and Windows 7 after ASLR, DEP, and UAC were introduced

Implications
- Low exploitation ratio for newer products: cyber criminals feel a scarcity of exploits (e.g., the author of the Blackhole exploit kit allocated a budget of $100,000 for purchasing zero-day exploits), and exploit kits are rented and sold on the market
- Policies in enterprises (e.g., BYOD)
- The attack surface varies between hosts, since it takes user behavior into account (e.g., upgrade lag)
- Qualitative analysis: ASLR, DEP, and UAC in Windows Vista and Windows 7; Protected Mode/sandboxing in Adobe Reader 10 and Internet Explorer 7

Conclusion
- Few vulnerabilities are exploited in the wild; the number of vulnerabilities does not correspond to the number of exploited vulnerabilities
- The average exercised attack surface for Windows, IE, and Reader decreases with newer versions
- A few vulnerabilities (e.g., CVE-2009-4324) account for a disproportionate number of attacks
- Users of Reader are attacked more than users of Windows and IE
- Quicker upgrades correspond to a reduced attack surface

THANK YOU
kartik@cs.umd.edu