Uploaded by Eddie Cezniak

LNX-TXT Linux Security Textbook

advertisement
Cybersecurity Professional Program
Linux Security
Textbook
Table of Contents
About this Textbook.................................................................................................... 5
Legend ........................................................................................................................ 5
Chapter 1: Introduction to Linux ................................................................................. 6
Section 1: Linux History ................................................................................................ 6
Section 2: Linux Distributions ....................................................................................... 9
Section 3: Open-Source Philosophy ........................................................................... 12
Section 4: Linux Installation ....................................................................................... 13
Section 5: System Libraries ........................................................................................ 14
Section 6: Users ......................................................................................................... 16
Section 7: Command Line Interface ........................................................................... 17
Section 8: Terminal Emulator ..................................................................................... 18
Chapter 2: CLI Fundamentals......................................................................................20
Section 1: CLI & Terminal Emulators .......................................................................... 20
Section 2: Command Structure .................................................................................. 23
Section 3: File System Structure ................................................................................. 25
Section 4: Listing Files ................................................................................................ 28
Section 5: Working with Files & Folders ..................................................................... 33
Section 6: Data Streams ............................................................................................. 38
Section 7: Grep Basics ................................................................................................ 43
Section 8: Find & Locate ............................................................................................. 44
Section 9: History Command ...................................................................................... 45
Chapter 3: Users & Permissions .................................................................................46
Section 1: Piping......................................................................................................... 46
Section 2: Advanced Grep & AWK .............................................................................. 48
Section 3: Additional Commands ............................................................................... 50
Page 2
Section 4: Users ......................................................................................................... 55
Section 5: Password Management ............................................................................. 57
Section 6: Groups ....................................................................................................... 60
Section 7: Permissions ............................................................................................... 62
Section 8: SUID, SGID & PATH .................................................................................... 68
Chapter 4: Networking & System Management .........................................................71
Section 1: Network Testing using Ping & Traceroute.................................................. 71
Section 2: Networking Files & Configuration .............................................................. 72
Section 3: Updating & Upgrading ............................................................................... 76
Section 4: Apache....................................................................................................... 79
Section 5: Other Applications ..................................................................................... 82
Chapter 5: Services & Hardening ................................................................................84
Section 1: Common Services & Protocols ................................................................... 84
Section 2: SSH & SCP .................................................................................................. 85
Section 3: FTP............................................................................................................. 88
Section 4: Samba........................................................................................................ 90
Section 5: Hardening .................................................................................................. 93
Section 6: Bash Scripting ............................................................................................ 95
Chapter 6: Bash Scripting ...........................................................................................98
Section 1: Bash Scripting Introduction ....................................................................... 98
Section 2: Script Input & Output .............................................................................. 101
Section 3: Conditions ............................................................................................... 103
Section 4: Arithmetic Operators ............................................................................... 109
Section 5: Archives ................................................................................................... 111
Section 6: File Integrity ............................................................................................ 114
Section 7: Extra: Crontab (Cron Table) ..................................................................... 117
Page 3
Chapter 7: Host Security........................................................................................... 120
Section 1: Linux External Mounting .......................................................................... 120
Section 2: Boot Protection ....................................................................................... 121
Section 3: PAM ......................................................................................................... 122
Section 4: SELinux & AppArmor ............................................................................... 125
Section 5: Privilege Escalation .................................................................................. 128
Section 6: Extra — Crontab Security......................................................................... 130
Chapter 8: Network Security .................................................................................... 132
Section 1: Iptables .................................................................................................... 132
Section 2: Firewalld .................................................................................................. 134
Section 3: Fail2Ban (F2B) .......................................................................................... 138
Section 4: Log Monitoring ........................................................................................ 140
Section 5: Bash Scripting to Counter Apache Enumeration ...................................... 142
Section 6: Secure Apache Configuration .................................................................. 145
Section 7: Banner Hiding for SSH & Apache ............................................................. 146
Section 8: SSL Encryption ......................................................................................... 148
Section 9: SFTP ......................................................................................................... 151
Page 4
About this Textbook
This document is a textbook for the entire Linux Security course. It includes all the
information the lecturer will present in class.
The textbook proceeds through the entire course in chronological order, from the
beginning phase through the advanced material.
Legend
Colored text blocks appear throughout this document to redirect the reader to a
specific source or enrich them with additional information.
Colored text boxes include the following:
Lab Assignment
Review the corresponding lab files to practice what has been taught so far.
This box may also include specific questions to practice.
Good to Know
Additional information or insight regarding a topic. This is for enrichment
purposes only and is not part of the exam materials.
Tip
Useful information that may help learners study the material or work with
specific tools.
More Information
Links or references to external material that can be used to expand one’s
knowledge on a subject.
Page 5
Chapter 1: Introduction to Linux
Section 1: Linux History
Linux is a family of operating systems (OS) intended to provide a UNIX-like experience.
Linux uses the GNU General Public License version 2 (GPLv2), in contrast to UNIX, which
has a proprietary license.
Linux was created in the early 1990s by Finnish student Linus Torvalds as a private
project that began with the development of the Linux kernel.
Linux primarily uses a terminal (CLI), and most Linux distributions have a graphical user
interface (GUI) as well, which makes the environment more user friendly. Linux itself is
only the kernel, which manages communication between applications and hardware.
Linux vs. Windows
Some points of comparison between Linux and MS Windows include:
• Windows is a closed-source system; Linux is an open-source system.
• Windows is purchased; Linux is mostly free.
• Windows is not customizable; Linux is highly customizable.
Many Linux distributions include GUI systems, GNU utilities, installation and
management tools, and various applications, such as OpenOffice, Firefox, and others.
Linux also comes with the open-source Netfilter, an iptables-based firewall tool.
Good to Know
As the course continues, you will become familiar with iptables, its purpose,
and how to use it.
Some Linux distributions require additional applications to create a complete and
usable OS. Most Linux systems, however, are considered complete operating systems,
and include A–Z programs, such as an editor, compilers, etc.
Page 6
Linux is one of the most user-friendly UNIX-like operating systems. Installation of sound
cards, flash players, and other desktop applications and software is simple and
straightforward. The fact that it is an open-source software enables many organizations
to create customized Linux distributions.
Linux Kernel
Today there are many distributions based on the Linux kernel, and most are free.
Due to its scalability, Linux-based operating systems can be used for a variety of
products, ranging from wristwatches and refrigerators to supercomputers.
Although the applications and use cases of each OS will be different, the kernel is the
same in all of them, with only slight changes in each distribution. The kernel establishes
communication between the hardware and software components, and manages the
system’s resources.
Linux kernel
Linux OS is composed mainly of the layers shown above, which provide crucial
separation for stable functionality and data security. The kernel is an abstraction layer
that serves as a buffer between users/apps and hardware. Separating those elements
prevents an application or user from obtaining access to the hardware and damaging
the system.
Page 7
Due to the layered architecture, applications cannot directly access the hardware
without the right permissions. Applications that require access to hardware resources
will have to communicate with them through the kernel.
The kernel has four primary responsibilities:
1.
2.
3.
4.
Hardware management
Memory management
Process management
System call management
Hardware Management
Systems can include a variety of hardware components, such as CPUs, memory devices,
sound cards, graphics cards, etc. The kernel stores all data related to the device drivers
and how to interact with them. Without a kernel, the components could not be
controlled.
Memory Management
The kernel is responsible for OS memory management. It keeps track of memory usage
for the purpose of enhancing performance and uses virtual memory addresses to
ensure that one process does not manipulate the data of another process.
Process Management
The kernel assigns resources to individual processes and prioritizes them. It also
manages process security and ownership information.
System Call Management
The kernel can receive requests from programs to perform certain tasks.
Page 8
Section 2: Linux Distributions
There are many Linux distributions, some of which are fully customized, and some that
are compiled to a ready image and uploaded to the internet as ISO files. Common
distributions of the Linux family include Debian, Ubuntu, RedHat, CentOS, SUSE, and
others. Some have different command execution syntax, and some are completely
different in their user interface and experience.
Debian, for example, contains basic utilities that can be used to customize the
environment.
Computers & Servers
Over time, Linux became more and more user friendly, and evolved to include a
modern, convenient desktop environment. This opened Linux to a larger volume of
users, and the Linux personal computer OS today features an excellent GUI experience
and numerous applications, such as those in the Ubuntu desktop.
Linux operating systems designed for servers often do not implement GUI features and
desktop experiences. Ubuntu has a server version without GUI features, which makes it
“lighter” and requires less resources for its basic operations.
Linux servers can be used to host cloud services such as OwnCloud and NextCloud.
Other Uses of Linux
In addition to servers and personal computers, Linux is also used as a core component
in Android and IoT devices. Because the Android OS is based on the Linux OS, every
Android phone, smart watch, and multimedia device runs on the Linux kernel.
Distribution Variety
The fact that Linux is an open-source project enabled it to branch out to form many
different distributions. Each distribution has its own purpose, strong points, and
weaknesses. Some distributions are dedicated to server management, such as Ubuntu
and CentOS. Other distributions are dedicated to penetration testing and hacking, such
as Parrot and Kali.
Page 9
Due to the flexibility of the Linux architecture, many distributions are based on older
ones. For example, Kali Linux is based on Debian, and CentOS is based on Fedora Linux.
Although there are many distributions with different GUI experiences and file
managers, they all share a common kernel.
Lab Assignment: Distribution Research
Become familiar with Linux-based operating systems and search online for
differences among the most common distributions.
Follow the instructions in lab document LNX-01-L1.
Basic Linux Components
File manager – software that provides an interface for file and directory management.
Desktop environment – software that provides a user-friendly GUI.
Package manager – software that provides the ability to download, install, and update
applications.
Debian is a basic distribution of Linux. It does not contain a desktop experience or a
graphical interface other than the basic UNIX terminal. Users can install many utilities in
Debian to enhance their work with the operating system.
Even if a Linux distribution does not come with a GUI environment, one can easily be
installed. The GUI helps users perform many actions without having to run terminal
commands and is especially helpful for new users who may be unfamiliar with the
command-line interface (CLI). For example, the GUI interface simplifies file
management tasks, such as moving directories, copying and pasting files and folders,
and accessing directory listings.
Page 10
Dedicated Environment
Users can choose the Linux distribution that suits their needs. For example, because Kali
OS contains hacking tools, it is often used by penetration testers. Although Kali includes
a GUI interface, the terminal within the GUI is used for many operations, since most of
the hacking tools are supported only via the terminal.
To build an operating system, programmers rely on some of the more commonly used
distributions, such as Debian, to customize the repository, applications, and the entire
look.
Good to Know
In addition to Kali, an operating system called the SANS Investigative
Forensic Toolkit (SIFT), designed by SANS, is used for malware analysis and
forensics.
There are also dedicated Linux distributions such as RHEL and CentOS, which are
enterprise-grade versions for servers. Unlike other distributions, RHEL is licensed, and
requires a fee for support. CentOS is a free alternative to RHEL, with several differences
and without RHEL’s enterprise support.
More Information
Some images in this textbook are taken from the Debian OS or Kali OS.
Note that there aren’t many differences between the two systems, because
Kali is based on Debian.
Page 11
Section 3: Open-Source Philosophy
Open-source refers to source code published by developers and organizations so that
anyone can see how the program is written, and modify and enhance the code.
Modified GNU-licensed code can be used privately or released to the public for the
benefit of the community.
Open-source code has a great advantage over closed-source applications, because
people can work together to improve the code and fix security vulnerabilities.
As an open-source software, Linux is free and can be downloaded from the internet or
redistributed under GNU licenses. Some Linux distributions, such as Red Hat and Novell,
provide additional support and services for a fee.
Licenses
Most operating systems come in a compiled format, meaning the main source code is
not directly accessible. The source code of an open-source OS is included in its compiled
version, and anyone can modify and customize it. In addition, an open-source OS allows
the user to run programs, change the code for a specific use, redistribute copies, and
more.
When software is compiled and ready, it is published and made available to the public.
Applications typically include signatures, and some open-source software have a GNU
General Public License (GPL), that ensures that it remains open-source, free, and
available to be modified and configured at the kernel level.
Berkeley Software Distribution (BSD) is an open-source OS based on research
conducted on UNIX. It was created at the University of California, Berkeley, and its last
release was in 1995. From BSD, derivative programs, or descendants were created,
including FreeBSD, NetBSD, DragonFly BSD, and others.
Apache is an example of a free open-source software. It can interface with third-party
applications, and can be edited, sold, and distributed as a customized package based on
the Apache software. It cannot, however, be redistributed without proper attribution.
Apache is commonly used with the GPL license (version 3), which allows developers to
mix codes.
Page 12
Section 4: Linux Installation
Linux, similar to other operating systems, can be installed as an operating system on the
host computer or as a virtual machine. As an operating system, Linux can interact with
and support many hardware components.
Before installing Linux on a virtual machine, a few things should be preconfigured.
VirtualBox requires the preparation of an installation surface that applies RAM and disk
size and matches and marks the platform (type and version) prior to inserting the ISO.
Some Linux versions include a live version along with a full installation, but all have
several common installation-related steps, such as creating a user, setting a password,
configuring the installation path, updating the OS (during installation), and choosing a
language.
When installing Debian, the first step is to choose the installation type: graphical
experience (32/64-bit), standard installation, or graphical installation that does not
include the GUI interface. The next step is to choose the language and country, and
then the hostname, domain name, and root password. After those steps, the user
specifies his/her actual name, and a username and password for non-admin users.
Debian does not use the root as the main user, for security reasons. The final steps are
to save the configuration details and choose the installation path.
After installation, Debian configuration is required, including HTTP proxy and mirroring.
During the configuration process, options are chosen for the desktop experience or
server installation. Since Debian does not install a desktop experience or server by
default, this stage includes several options for faster installation: Debian Desktop
Environment, Print Server, and Standard System Utilities. The last step is GRand Unified
Bootloader (GRUB) installation, after which the machine reboots.
Lab Assignment: Debian Installation
Learn how to configure primary settings and resources for a Linux
installation and install a Debian environment from an image disk.
Follow the instructions in lab document LNX-01-L2: Debian Installation.
Page 13
Section 5: System Libraries
OS Root Directories
File and directory organization in Linux follows a single-root inverted tree structure. The
file system starts at the root directory, which is indicated by a forward slash (/). Paths
are delimited by a forward slash.
/root – super user home directory
/boot – kernel image
/etc – system configuration files
/home – user directories
/mnt – mount points
/sbin – executables
/dev – device files
/bin – executables
/lib – libraries
Page 14
Some important directories include:
• /root, /home/<username>
The home folder(s)
• /bin, /usr/bin, /usr/local/bin
/sbin, /usr/sbin, /usr/local/sbin
Binary program files
• /media and /mnt
External file system and mount points
Additional directories include:
•
•
•
•
•
•
/etc – System config files
/tmp – Temporary files
/boot – Kernel and bootloader
/var and /srv – Server data
/proc and /sys – System information
/lib – Library directories
For file and directory names, all characters are valid, except the forward slash. Be
careful when using special characters in file or directory names. Some characters should
be set within quotes when they are referenced.
File and directory names are case-sensitive. For example, PROJECT, Project, project, and
pRoject are all different file names, and while they can all be used, caution should be
applied when doing so.
Page 15
Section 6: Users
In Linux, there are three user types:
• Service users
• Regular users
• Root user, or superuser
A simple way to view all users in the system is to read the contents of the /etc/passwd
file. There is a line in that file for each user.
Service Users
Generally, service users run non-interactive or background processes on a system, while
regular users can log in and run interactive processes.
Regular Users
A regular user is added by the root user and has low-level permissions. This user cannot
perform high-level actions in the system. Also known as super-do, a sudo is a normal
user added to sudo groups by the root user. This user may be assigned permissions to
access and execute some root commands.
Root Users
A root user, or superuser, is the highest-level user in the system. Each system has its
own root admin user. Root users can access all files in the system and execute all
commands.
A root user can override any file ownership, permissions, or restrictions. In addition,
their ability to perform system-wide changes means their accounts must be kept
secure.
An unsecure superuser account means a hacker can assume superuser privileges and
make changes to other user accounts in the system.
Page 16
Section 7: Command Line Interface
The terminal allows the user to enter commands for the operating system. The
command interface depends on the distribution.
In the terminal prompt, the dollar sign ($) typically means “logged in as regular user.”
The number sign (#) means “logged in as root user.”
Commands use the following syntax:
Command [options] [arguments]
A command can be a representation of existing data in the system, but can also
configure the system itself, create new files, and run programs. Options are properties
of commands that expand the command’s capabilities. They are typically represented
by hyphens and one or more letters.
command
option
arguments
CLI vs. GUI
While Microsoft Windows OS is developed for regular end users and includes an easyto-use and well-designed interface, Linux is designed more for technical purposes, such
as servers, devices, hacking tools, and forensics. The Linux OS is mainly based on the
command-line interface (CLI), but most Linux operating systems are designed and
installed with a graphical user interface (GUI) as well.
Page 17
Although GUI actions are the equivalent of commands, the CLI is still more commonly
used by IT professionals. CLI allows the user to run a variety of actions with a single
device (keyboard) from any directory. For example, if the CLI command execution
location is in the root folder, you can use a single command to create a new folder in
the Home folder. If you need to create multiple directories, one script can create as
many directories as needed, whereas in the GUI, it would be a much more manual
process. In addition, a GUI consumes more system resources.
Remembering commands may be challenging at first, especially if the command is
complex and works with several flags.
Installing GNOME as a Desktop Experience
In addition to the clean Linux-based OS installation process described above and the
features that can be added as part of the kernel, GNOME (GNU Network Object Model
Environment) installs tools and features in a graphical interface and presents them in a
desktop environment.
GNOME is one of many desktop experience interfaces that are referred to as windows
managers, and can be installed and uninstalled at any time.
Lab Assignment: GNOME DE Installation
Create a more interactive virtual environment that includes graphical
interfaces and tools.
Follow the instructions in lab document LNX-01-L3: GNOME DE Installation.
Section 8: Terminal Emulator
In Linux, the terminal emulator allows the user to run OS commands directly on the
machine. It is a tool that has a graphical interface and emulates shell or terminal text.
The terminal emulator also allows remote command execution through SSH or Telnet
protocols. For example, a network administrator who wants to connect from a
Microsoft Windows OS can run a simple command in Windows or Linux to create a
connection based in the terminal emulator.
Page 18
Graphical Usage & Applications
Some applications cannot be replaced with a terminal emulator, such as those that run
their own database. For example, Firefox can be opened using the terminal, but it
cannot run without using the graphical interface, since it is based on graphics.
Some applications, such as Wireshark, can be used as both GUI and non-GUI.
Lab Assignment: Graphical Applications
Experience the GNOME desktop environment and Debian tools.
Follow the instructions in lab document LNX-01-L4: Graphical Applications.
Page 19
Chapter 2: CLI Fundamentals
Section 1: CLI & Terminal Emulators
As a core component of the operating system, the terminal allows users to run
commands on the system. The CLI accepts text commands, but each operating system
has its own variations of command syntax.
There are many types of terminals. Some operate without a GUI when the computer is
booted, while others, such as GNOME, work with an emulated GUI.
CLI terminals can run remotely via processes such as SSH and Telnet and are known as
Remote Terminals.
Many devices and operating systems support terminal emulators which can be
downloaded and executed as third-party applications on operating systems such as
Windows, Linux, Android, and OSX (Apple). Terminal emulator are basically applications
that enable connection to the system’s shell and can run commands from within the
shell.
Shell Types
A shell is an application that executes commands in text form within an operating
system. Application commands run from shells are checked against the $PATH variable.
Some commands execute binary files often located in the /bin or /sbin directories.
However, each shell has its own slightly different built-in commands (such as cd) that
can be run regardless of the $PATH variable.
C Shell
Utilizing the C Shell is done by running the csh command from a terminal. C shell
commands provide programming features, such as keyboard shortcuts, automation by
scripting, displaying a history of commands, and more. The C Shell lends itself to ease of
extension and expansion using a C-syntax development language (similar to the C
programming language). The C shell is common for developers who would like to
maintain consistency in syntax to the operating system itself.
Page 20
Sh
Also known as the Bourne shell, sh is a simple shell interface that works with the OS. In
some cases, sh is used as a scripting language and contains many features that are
designed for programming.
BASH
BASH, or the Bourne Again Shell, is an improved version of the sh shell and therefore
has a similar command syntax. BASH shell scripts are commonly run on Linux
distributions.
Z Shell (Zsh)
The Z shell is an extension of the sh shell and shares common features with it. The
features include automatic directory movement, recursive directory expansion (for
example, /et/ap will navigate to /etc/apt), spell check, correction, and more.
System Awareness
Commands like whoami, pwd, and uname -r can indicate details about the user,
directory position, and operating system.
whoami outputs the user currently logged in to the CLI.
pwd (Present/Print Working Directory) indicates the current location. For example, if
the command execution was printed from /usr/bin, that will be the output of the
command.
Page 21
uname -r displays information about the system and its version. The command can also
be used with different flags to present additional information, such as the kernel
version.
Page 22
Section 2: Command Structure
The terminal and commands run through it are used to configure system settings and
display existing data. System operations and management can be faster and more
efficient via the terminal, and process automation is simpler.
Commands run via the terminal have unique structures that include letters, numbers,
and characters.
Command Breakdown
Every command has a similar structure that includes several sections.
The first section represents data regarding the user and the system.
root@debian:/etc# represents the logged user, the machine’s name, the current
directory, and a sign representing the user, which in the example above is the number
sign (#) that represents a root user (super user). The dollar sign ($) would represent a
simple user.
The second section is the command itself with an option part for flags, and arguments
passed to the command.
Tip
The default prompt in many Linux distributions shows
username@host:/current/path for a quick orientation of where you are in
the file system.
Page 23
System Commands
System commands provide data regarding the system and its content.
Examples of system commands include:
• whoami – returns the current user name.
• pwd – prints the current working directory.
• uname – returns information regarding the operating system.
Cywar Lab Assignment: Obtain System Information
Use the command line to obtain information about the Linux
operating system.
Navigate to the Cywar Practice Arena. Find the lab in Linux
Security, LNX02 – CLI Fundamentals, “Obtain System
Information”
Page 24
Section 3: File System Structure
The following are some important directories in the Linux system:
• /dev points to the location of device-related files. For example, the file for the
device associated with a hard disk is stored in this directory.
• /etc stores configuration files, such as the host file, kernel configuration, and
system services. Some configurations have folders within the /etc directory.
• /media stores information about removable media, such as CDs and USBs. In
many modern Linux distributions, when a removable disk is connected, a
directory will automatically be created with the disk contents.
• /usr stores user binaries and read-only data.
• /var stores variable data files, such as log files (/var/log) and HTTP server files
(/var/www), although these can be stored in the /srv directory as well.
Good to Know
Linux views everything, hardware, software, running processes, etc., as files
or filestreams.
File System Navigation
The most fundamental skills to master in Linux are manipulating files, navigating the
directory tree, and understanding the file system environment.
After logging in to a server, the initial location is often the user account’s home
directory. This directory is designated for users to store files and create other
directories.
In a Linux system, files and directories typically contain different types of information,
such as application files, user documents, and images. The various types of data are
displayed in a GUI file system manager. In Linux (as in most other systems) files and
directories are navigated from the CLI, using the cd command.
File navigation can be done from any directory location and can reach any other
directory via a single command.
Page 25
Change Directory (cd)
There are two types of paths used for navigation: absolute and relative.
Absolute path – A complete path that always starts from the root directory (/), which is
at the top of the file system hierarchy. The path should be used when the destination is
not located close to the current location.
For example, an absolute path command may be: cd /home/john
Relative path – Navigation to directories without specifying the root. This path
command should be used when the destination is close to the current location.
For example, a relative path command may be: cd Documents/
Tip
The Tab key is an essential auto-complete tool in navigation and command
execution.
The following example shows the step-by-step procedure for navigation to a
destination, with each command entering a different folder in the hierarchy.
Note that from the second line, the cd command uses a relative path to navigate the file
system.
The following example shows how to enter the full path of a destination. If the
command starts with a forward slash, it looks for the destination beginning with the
root folder.
Page 26
The navigation command includes the following options:
•
•
•
•
cd ~ returns to the home directory.
cd - returns to the previous directory.
cd .. navigates to the parent directory of the current directory.
cd / navigates to the root directory of the entire system.
Tip
The tilde sign (~) indicates your home directory.
Page 27
Section 4: Listing Files
The ls command is used to list the contents of a directory.
Note that in the example above, every line begins with d or -. These characters indicate
the type of file listed, as follows:
Regular files (-): Contain programs, executables, and text files.
Directory files (d): Contain lists of files.
Special files:
Block file (b)
Character device file (c)
Named pipe file (p)
Symbolic link file (l)
Socket file (s)
Hidden Files
Invisible files are represented by dots (.) at the beginning of the file name.
Examples of hidden files include the following:
• .bashrc – The Bourne shell configuration file
• .kshrc − The Korn shell (ksh) configuration file
• .cshrc − The C shell (csh) configuration file
Page 28
• .rhosts − The remote shell configuration file
• .profile – User settings related to a shell, such as the location of a shell-based
search for executed commands
Configuration files enable shell customization, such as new function creation, coloring,
and control over the command completion mechanism.
To list invisible files, specify the -a option in the ls command, as follows:
Page 29
Understanding Commands
When some commands are entered without any options, their usage and format are
displayed. However, the man command can be used to display help for a command
from the Linux manual.
man – Opens the manual for a specified binary.
--help – A flag that can be used together with the name of almost any other Linux
command to display helpful information on how to use the binary.
Page 30
Whatis – Displays an informative line from a binary’s manual.
find – Searches for files and directories. For example, if you create a document in the
Home folder, but your current CLI location (working directory) is in the /root directory,
the results will display “no such file or directory.”
This is because the command looks for the file in the current working directory. You can
use the following syntax to run a search in all directories: find / -name “Document”
Whereis – Finds the locations of specific binaries, their manual, and source file.
Which – Shows where the execution location is for specific binaries. Entering it before a
command or set of commands shows the location of the command, which will typically
be /usr/bin.
Page 31
Good to Know
Earlier it was mentioned that an important Linux path is /usr because it
contains distribution documentation. Manual pages are listed in the
/usr/share/doc directory. Erasing the directory will cause commands that
display manual content to fail.
Page 32
Section 5: Working with Files & Folders
Files in Linux can include more than text, images, and compiled programs. They can also
contain partitions, hardware device drivers, and directories.
Linux files are case-sensitive. Two files can share the same name, but their names may
have different letter cases. For example, File.txt and file.txt are two different files.
There are three basic file types:
• Ordinary files: These are files that contain data, text, or program instructions.
• Special files: Some of these files provide access to hardware, such as hard drives,
modems, or Ethernet adapters. Others are similar to aliases or shortcuts that
open or activate the files they represent.
• Directories: These can store special and ordinary files. UNIX-based directories are
similar to folders in Windows.
Creating Files & Directories
Files and directories can be created using terminal commands, and the process is easier
and more flexible than via the GUI.
To create a file, use the command touch [option] [file name].
Page 33
To create a directory, use the command mkdir [option] [directory name].
Deleting Files & Directories
Several commands can be used to delete files and directories. Since directories may
include additional content, a recursive flag is required in the command to delete it (-r).
To delete files, use the command rm [option] [file name].
Additional flags can be added to the rm command. For example, to force file deletion,
you can use the -f flag, and the syntax would be rm -rf *, which would recursively delete
all files in the current directory, including directories within that directory.
Note that the -r causes the deletion to be recursive, and the asterisk (*), which is also
known as a wildcard, represents “everything”.
The following example shows how the r and f flags are used in the command to delete
directories and their content.
Page 34
The following example shows how rmdir is used to delete a directory.
More Information
Ubuntu and many other Linux distributions protect the root directory.
Some, however, do not, and you should never run rm -rf /* on a production
system!
Page 35
Copying Files & Folders
The cp <filename> <destination> command copies files. Filename and destination can
be absolute or relative paths. The destination parameter can include a file name and
also enables the user to define a new name for the file that was copied.
Moving Files
The command to move a file from one location to another is mv <filename>
<destination>. The original location is referenced first, and then the new location. Note
that using the CLI to move a file executes the command immediately with no message
asking to confirm the operation (as would appear in Windows and other GUI
interfaces). The command will also overwrite any files in the destination with the same
specified name.
Page 36
The mv command can also rename files in the system. To change a file’s name and
leave it in the same directory, the destination should be the current working directory
(./).
Cywar Lab Assignment: Working with files and Folders
Create files and folders and move and copy them within the file system.
Navigate to the Cywar Practice Arena. Find the lab in Linux Security, LNX02
– CLI Fundamentals, “Working with Files & Folders”
Page 37
Section 6: Data Streams
There are three standard streams in Linux:
• Stdin: The standard input stream that reads data from the user
• Stdout: The standard output stream that, by default, sends data to the output
terminal
• Stderr: The standard error stream that sends error messages to stdout
Writing & Reading Files
Different commands are used in Linux to view files. The cat [file name] command is
used to output a file’s content to the terminal.
Good to Know
cat is short for “concatenate.” It combines two files and sends the output to
stdout. A benefit of the command is displaying a single file via stdout.
Appending and writing text to a file can be done using the echo command, which
displays typed content on the same terminal interface.
If you add > or >> and a file name, it adds the content in the file without displaying it on
the terminal interface.
The > sign after echo overwrites all existing content and insert the new content.
Using >> appends the existing content to the end of the file.
Page 38
Nano
Nano is a text editor for UNIX-like operating environments that use a command-line
interface. It includes shortcut keys to exit a file, cut a file’s content, look for keywords in
a file, and more. The keys appear at the bottom of the command line when using Nano.
The command used to invoke Nano is nano [file name].
Page 39
Vim
Vim is a built-in text editor based on the terminal. It is invoked by typing vi. Vim uses
colors and displays line and character locations.
Messages displayed at the bottom of the terminal window can be set to appear in
yellow or red.
Writing code in Vim is made easier by its completion feature, which completes what
you type and helps minimize mistakes in the code.
Vim opens in command mode, whereby what you type is interpreted as commands
rather than regular strings. For example, if you type the letter u, the editor will interpret
it as the command to undo a previous action. Ctrl+R will be interpreted as the
command to redo the previous action.
When in a different mode, command mode can be activated by pressing Esc.
Pressing i will activate the insert mode, which will allow regular string input in a file,
rather than characters activating commands.
To run a command while a Vim file is open, use the colon (:). The colon can be used, for
example, to save the file, search for something in the code, or display information on
other commands. When you type a colon (:), a line will appear at the bottom of the
terminal, where you can enter a command.
Page 40
Gedit
Gedit is a text editor for UNIX-like operating environments. It is a third-party application
that works with an interface similar to Notepad.
Gedit is invoked using the gedit command. Terminating the command without exiting
the Gedit interface will close the Gedit window without saving the changes.
It supports syntax highlighting, printing, plugins, spell check, and more.
Text appears in monochrome (black), unless a different color scheme is chosen.
Shortcut keys can be used to perform actions, such as Ctrl+A, which selects all text in a
file, or Ctrl+S, which saves the file.
Page 41
Lab Assignment: Editing Files
Practice using text editors and output redirection to read from and write to
files.
Follow the instructions in lab document LNX-02-L1.
Page 42
Section 7: Grep Basics
The grep command in Linux is often used to filter text for a more specific search. It can
be combined with other functions and commands to improve search results.
Grep outputs the result. If you want to find a word in a large file, you can specify the
word in the command and the output will include the specified word. Although it does
not, by default, indicate the word’s location within the file’s content, you can combine
the basic command with additional commands to obtain that information.
The grep command shows the entire line in the result.
Page 43
Section 8: Find & Locate
Finding files in Linux can be done with the locate or find command.
Locate can find files in the Linux system, but it relies on a database that must be
updated to provide more accurate results.
Flags such as filetype and file name can be added to the find command to further filter
the results.
Locate has an advantage over find in its speed.
Find yields more accurate results and uses more complex syntax.
Find command example: find /root/Desktop -name *.sh
The command will find a file with the .sh extension in the /root/Desktop directory. The
asterisk (*) is used as a wildcard that represents “anything.”
Additional example: find –iname test
This command will find a file named test, with the results being non-case-sensitive.
Using –not in the command will return the opposite (only files that do not match the
exact case).
To find a file by size, use find / -size 50M
Page 44
Section 9: History Command
Linux maintains a list of commands used in the terminal. The list can be useful when
you want to repeat a command and view the order in which commands were executed.
It does not display content that was not executed, such as if you began typing grep
Class but did not send the command.
All commands that were sent, even if they produced an error message with no output,
will be retained in the history.
To repeat a command, use the history command together with an exclamation mark (!)
and the number of the command in the list output. This will complete the command
when part of the syntax is inserted after the sign. For example, !apt will complete the
syntax based on the last apt command. Important: Working this way may lead to
system issues, if not performed with caution.
Lab Assignment: Search & Locate
Use search commands to locate files and find specific text within them.
Follow the instructions in lab document LNX-02-L4.
Page 45
Chapter 3: Users & Permissions
Section 1: Piping
The pipe character (|) redirects the output of one command through the input of
another command. Piping is a type of redirection from a stdout to a stdin. It is used
mainly in command form and can also be used in programs and processes.
The pipe character divides the command into two inter-dependent parts. Since
commands flow from left to right, the second part of a piped command must be
associated with the first part.
The grep command is used to search for strings within text. Concatenating the ls
command with grep using a pipe will filter the search to obtain a more accurate output:
ls -l | grep <file name>
In the example, the output of ls -l flows through grep, which displays all directories and
files that contain the tmp string.
Another example is to use the cat command to display a file’s contents and search for
specific words or sentences within the contents using grep:
cat <file name> | grep <word>
In the example, the output of the file flows through grep, which displays the line
containing the specified word.
Page 46
As another example, you can display the contents of a file using the cat command and
sort the contents by redirecting its output to the sort command:
cat <file name> | sort
In the example, the output is sorted in alphabetical order.
Page 47
Output Redirection & Operators
The pipe is one example of a two-part command in which the first part affects the
second part. Other redirection operators include > and >>.
Using > overwrites data in a file with the text that follows the character. Using >>
appends data to the end of the specified file. Neither redirection displays output on the
command line, but they do generate records in the history log.
Because ls displays the list of items in the current directory, adding > after ls followed
by a file name places the ls output in a file but does not show it in the terminal. If the
name of the file does not exist, it is created and contains the executed command ls >
<file name>
Section 2: Advanced Grep & AWK
Grep and AWK are tools used to find words, characters, lines, and text patterns in files.
Their primary use is for quickly finding specific words and patterns that match the
search in all specified files and folders.
Additional flags that can be used with grep include the following:
• -i – Ignores case sensitivity
• -v – Outputs the unmatched typed filter
• -c – Counts the number of lines that were found with a match
• -l – Displays the file name of input files
• -n – Shows the number of the line the word was found in
• -r – Represents recursive, and processes all files in the directory
An example of the grep command for a recursive search from the directory it is running
on is to find the word success in all files and subdirectories. For each result, it will print
every line containing the word.
Page 48
AWK is a data extraction tool commonly used in Linux systems. It can extract specific
data from texts and output the results to the screen.
Following is an example of AWK that shows how to retrieve the third and fourth
columns of each line in a file.
$3 is the third word (third field of each line in the file).
$4 is the fourth word (fourth field of line in the file).
By default, AWK uses white space to separate between fields; however, you can set a
custom separator using the –F flag. For example -F “:” will set the colon character as a
separator.
Page 49
Section 3: Additional Commands
Tail & Head
Tail and head commands are typically used in content with large amounts of text. They
do not filter for strings. Tail displays the last 10 lines of the document while head
displays the first 10 lines. You can change the number of lines displayed using the -n
flag.
For example, for a file that contains numbers in order from 1 to 20, head displays
numbers 1 to 10, while tail displays numbers 11 to 20.
Additional Useful Commands
Some additional commands can be useful regarding ordering items in text.
For example, the sort command sorts content line by line. If the sort command is used
with numbers, the output displays the numbers from lowest to highest without taking
into consideration the number value. For example, if the numbers 1, 2, and 10 are
sorted, the order will be 1, 10, and 2. If the sort command is executed on letters, the
results will be sorted in alphabetical order in which uppercase letters precede
lowercase.
Some additional flags can be combined with this command, such as -o, which places the
sorted output in a new file; -r, which sorts content in reverse order; and -n, which takes
numeric values into consideration. To combine flags, use a single hyphen with each flag
and place flags one after the other.
Page 50
Another command that customizes alphabetic output is cut. This command “cuts”
letters and presents several parts of the word or words. The text can be arranged by
bytes using the -b flag, by column with -c, by field with -f, and -d is used as a delimiter.
A file’s word count, number of letters, bytes, and more can be displayed with wc. Flags
determine the content that will be displayed. For example, -l lists the line numbers in
the file, -w displays the number of words in the file, -c shows the number of bytes, and
-L displays the longest line in the file.
Page 51
Page 52
More
When working with large files, cat displays all content so you can search the file without
opening it. To avoid scrolling endlessly to find something in the file, it can be viewed
page by page using the more command. In the output of a command using more, you
can use the spacebar to scroll down a page, or press Enter to read the next line. The
bottom line of the terminal displays the percentage of content read.
Page 53
Less
The less command opens one page of the file at a time (more loads the entire file).
During command execution, a search option provides the ability to search the file for
keywords using ?.
The following is an example of a search for logs performed in December:
Cywar Lab Assignment: LNX-03-CYL1 Linux Pipelines
Use stream redirection and piping to filter the output of commands and
obtain a more accurate output.
Follow the instructions in Cywar lab document LNX-03-CYL1.
Page 54
Section 4: Users
User Types
Linux uses groups to manage users and set access permissions. User account types in
Linux can be personal accounts or the root account. The root account is a superuser
that has complete access to the operating system, including making changes and
managing other users.
As a best practice, the root user should not be able to log in to the OS freely, but should
do so using the sudo command, which allows switching to root-level access when
changes are necessary.
Linux is grounded in the concepts of file ownership and permissions associated with
UNIX. Security begins at the file system level, whereby a file or directory owner can
restrict access based on whether or not a user requires it. This method helps build a
control scheme with varying levels of access for different users.
From a security perspective, regular users typically perform daily routine-like
operations. An attacker’s interest will be to obtain higher-level permissions for
unrestricted and unlimited command execution, which the root user has. Not every
command that the root user can run can be executed by a regular user. For example,
the root user can access all other user accounts.
A regular user can be configured as a sudoer, which grants the privilege to execute
commands with root permission via the sudo command.
Creating & Deleting Users
During the installation of a Debian environment, the creation of a regular user account
is required. Only the root user can create and delete other users, and each user should
have a unique login name.
Two commands can be used to create a new user, adduser and useradd. The difference
between them is that adduser does not require additional information because it uses
the value specified in the command and default information from the operating system.
Page 55
For example:
The useradd command simply creates a user account, which is specified directly
following the command. The user is created without a password and home directory,
which must be created separately.
/etc/skel
This is a directory that is used as a template for a new user’s home directory.
Every file or directory created within it, will appear in the new user’s home
directory.
Page 56
Section 5: Password Management
The computing world has grown and developed over the years. In the earliest stages,
computers communicated through wires and carried one-way transmissions. Today’s
technology allows us to manage interfaces remotely, and there is a management panel
for almost every device. As communication has increased in sophistication, people have
developed new ways to overcome security and leverage hacking techniques.
Hackers are always seeking to breach security with an administrator or root password.
In some cases, users are unaware of the importance of security measures, and in other
cases vulnerabilities exist due to programs or technological misconfigurations. To
effectively protect logins, a password must be complex enough so that it cannot be
guessed or cracked. The longer and more complex a password is, the harder it will be
for automated tools to crack it.
Such tools, however, can run through possible passwords and combinations faster than
humans can count, and will eventually guess the password if no security settings are
established to block them after a predetermined number of attempts. Passwords
should be changed every few months in accordance with a security policy that regulates
parameters such as password length.
Note that passwords of users who are not security-minded often include personal
information because it is easier to remember, and such information is the first thing
that comes to mind when the password definition field appears. If a hacker has a
person’s name, they can access that person’s social media accounts, and see the date of
birth and names of family members.
Sometimes, a person’s previously used passwords can be easy to guess. Choosing a new
password that is significantly different from the current and previous passwords (not
just changing a single character) will ensure a higher level of security.
There are specially designed applications with databases that generate secure
passwords, such as KeePass, 1Password, SysPass, and LastPass. These are good
solutions for users who constantly forget their passwords, or who wish to have more
complex and randomly-generated passwords.
Other considerations include keeping the password to yourself, not writing it down, and
avoiding common passwords (such as qwerty or 123456). Lists of common passwords are
freely available online, and those are often the first passwords a hacker tries when
attempting to log in to another user's account.
Login.defs
The login.defs file is located in the /etc directory and is responsible for retaining
password management information. This file contains configuration settings such as the
maximum and minimum password age, the number of days of warning before a password
expires, and more.
The file is consulted when a password is managed with the passwd command; it does not
apply to applications that handle their own passwords, such as SSH, Apache, and others.
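As an illustration, the password-aging directives in /etc/login.defs typically look like the following (the values shown are examples only):
PASS_MAX_DAYS   90    # maximum number of days a password may be used
PASS_MIN_DAYS   7     # minimum number of days allowed between password changes
PASS_WARN_AGE   14    # days of warning given before a password expires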
Users & Passwords
Examples of files that need to be protected are shadow and passwd. They contain password
hashes and the list of user accounts that exist on the system, and hackers often try to
gain access to them.
The /etc/shadow file contains each user's password, hashed with a one-way function.
The /etc/passwd file includes a list of all users in the system.
Every user in the system has a unique user identifier (UID). The UID value is used for
identification and to define which system resources a user can access. New users
created in the system begin with UID 1000, while the root user has the value 0.
Lab Assignment: LNX-03-L1 Creating Users
Create new users and configure customized home directories and password
policies.
Follow the instructions in lab document LNX-03-L1.
Section 6: Groups
Creating groups of users simplifies the application of settings and permissions. Any user
who belongs to a group has the settings and permissions assigned to the entire group.
Linux identifies groups by their Group Identification (GID), which is similar to UID for
users. Each GID value is unique, and group names are case sensitive. For example, a
group named staff and a group named Staff would be recognized as two different
groups.
Creating and deleting groups can be done with the groupadd and groupdel commands.
When a user is created during system installation, they are automatically assigned to a
standard group (unless defined as root). They are not added to any group other than
the standard group upon creation, unless specified during the user definition process.
Usermod is a Linux command that adds existing users to existing groups. Two common
flags used with the command are -a for append, and -G for groups.
The groups <username> command can be run to find out what groups a user belongs
to. The command groups johnd may, for example, display johnd: root staff admins.
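For example, the following sketch adds an existing user to a group and verifies the result (the group and user names are hypothetical):
sudo groupadd admins            # create the group if it does not already exist
sudo usermod -aG admins johnd   # -a appends, -G names the group
groups johnd                    # e.g., johnd : johnd admins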
The /etc/group file lists all groups and the users associated with them. Each line
consists of several fields separated by colons. The first is the group name; in most
cases, the second is an x, which stands for the group password (stored in the /etc/gshadow
file); next is the unique GID number; and last are the users listed in the group. If no
users are listed in the group, the last field is empty.
Besides the built-in groups and the groups created when services are installed, there is a
group called sudo (superuser). Members of this group can prefix a command with sudo to
obtain high-level permissions for that particular command. Another way to grant sudo
rights is to install the sudo package and add users to the file that configures the sudo
users; the file can be found at /etc/sudoers after the package installation.
To edit the sudoers file, use the visudo command, which checks the file's syntax before
saving; root privileges are required to view or edit the file.
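For illustration, a typical entry in /etc/sudoers that grants a user full sudo rights looks like the following (the username is an example):
johnd   ALL=(ALL:ALL) ALL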
Section 7: Permissions
There are three main file permissions: read, write, and execute. Permissions appear in
the terminal as -rwxrwxrwx, and can be set separately for the owner, the group, and
others.
Typically, the root user is the only one who has the permissions to work with system
configuration files.
chmod
Permissions for a file are set with the chmod command using numerical (octal) input (such
as 777 for full permissions) or symbolic letters (r/w/x).
The letters determine the permissions of read, write, and execute for a file, and the
format is divided into the following parts:
1. The first three characters are the file permissions that apply to the creator
(owner) of the file.
2. The next three characters determine the group permissions.
3. The last three characters represent ‘other’ or ‘public’.
Numerical values can also be applied:
• 4: Read only
• 2: Write only
• 1: Execute only
Combined permissions are obtained by adding the values together.
Examples:
• Read and Execute = 4 + 1 = 5
• Read and Write = 4 + 2 = 6
• Read, Write, and Execute = 1 + 2 + 4 = 7
Letter Values                      Numeric Equivalent
Read, write, and execute (rwx)     7
Read and write (rw-)               6
Read and execute (r-x)             5
Read only (r--)                    4
Write and execute (-wx)            3
Write only (-w-)                   2
Execute only (--x)                 1
Good to Know
The method of representation may seem counterintuitive at first, but it is
like that because the representations are of “octal permissions.” For more
information, see: https://docs.nersc.gov/filesystems/unix-file-permissions/
Setting permissions with the numerical format requires three numbers, one for each
permission section: owner, group, and public. For example, chmod 777 <file name>
grants full permissions for anyone to use the file.
The second method, known as UGO, is to specify the entity and the permission using
letters. Each section has a representative letter:
• User – u
• Group – g
• Other – o
The chmod command can use this method with a plus sign (+) for addition, minus sign
(-) for removal, and an equals sign (=) to apply a permission.
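The following illustrative commands show both methods (the file names are examples):
chmod 755 script.sh     # numeric: owner rwx, group and others r-x
chmod u+x script.sh     # symbolic: add execute for the owner
chmod go-w notes.txt    # remove write from group and others
chmod o=r notes.txt     # set others to read only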
Common File Permissions
-rwxrwxrwx   777   Anyone can read, write, and execute.
-rw-rw-rw-   666   Anyone can read and write.
-rwx--x--x   711   The owner can read, write, and execute; the group and others can only execute.
-rwxr-xr-x   755   The owner can read, write, and execute; the group and others can only read and execute.
-rwx------   700   Only the owner can read, write, and execute.
-rw-r--r--   644   The owner can read and write; the group and others can only read.
-rw-------   600   Only the owner can read and write.
File & Directory Ownership
Every file and directory in the system has a user owner and a group. In the output of
ls -l, the owner is the first name that appears after the permission string, and the
group is the second name.
chown
The file owner and group can be changed using the chown command. By changing the
owner or the group of the file or directory, the permissions will be applied to the
specified entity.
For example: chown <user>:<group> <file name>
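An illustrative sketch (the user, group, and file names are hypothetical):
sudo chown johnd:staff report.txt   # change the owner to johnd and the group to staff
sudo chown -R johnd /srv/project    # apply the change recursively to a directory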
Cywar Lab Assignment: Essentials of Linux Group Management
Use groups to allow or deny user access to folders and files.
Navigate to the Cywar Practice Arena. Find the lab in Linux
Security, LNX03 – Identity and Access Management, “Essentials of
Linux Group Management”
Section 8: SUID, SGID & PATH
SUID and SGID are special characters (bits) that can be attached to files or directories to
provide additional permission capabilities. In some cases, these capabilities may be
beneficial for the system’s operation. However, they can also be viewed as potential
vulnerabilities.
SUID & SGID
These are flags that appear as an s in the permission string of a file:
• SUID – Set in the owner section; it allows other users to execute the file with the
permissions of its owner.
• SGID – Set in the group section; it causes the file to execute with the permissions of
the owning group (on directories, new files inherit the directory's group).
The permissions are based on UGO or numerical settings. The numerical settings are
4xxx for setuid, 2xxx for setgid, and 6xxx for both. For example, to assign full
permissions and setuid and setgid to a file, the command would be: chmod 6777
filename
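As an illustrative sketch, the flags can be set symbolically or numerically, and SUID files can be located with find (the binary name is hypothetical):
chmod u+s /usr/local/bin/mytool           # set the SUID bit
chmod 6777 filename                       # SUID + SGID plus full permissions, as above
find / -perm -4000 -type f 2>/dev/null    # list SUID files on the system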
The PATH is an environment variable that lists the directories the shell searches, in
order, when resolving a command name. Command binaries are typically located in
directories such as /bin, /usr/bin, /sbin, and /usr/sbin, and the shell checks each
directory in the PATH until it finds a matching executable. The PATH can be configured
manually and comes with default settings that depend on the distribution, libraries, and
application settings. Each directory in the PATH is separated with a colon (:).
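For example, the PATH can be inspected and extended as follows (the added directory is an example):
echo $PATH                       # e.g., /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PATH=$PATH:/opt/tools     # append a directory for the current session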
Lab Assignment: LNX-03-L2: The Power of SUID
Configure a SUID on one of the system’s binaries to allow a simple
user to copy classified data.
Follow the instructions in lab document LNX-03-L2.
Extra: Symbolic Links & Hard Links
There is a difference between a regular file stored on the hard disk and a separate entry
that merely points to it from another location in the system, such as a shortcut in the
Windows OS.
Symbolic link – Also known as a soft link, it is similar to a shortcut in Windows: a small
file that stores the path of another file and can point to a different directory or even a
different file system. If the target file is removed, the link breaks.
Hard link – An additional directory entry that points to the same data (inode) as the
original file. A change made through any hard link is reflected in all of them, because
they all reference the same underlying data.
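For example (the paths are illustrative):
ln -s /var/log/syslog ~/syslog-link    # create a symbolic (soft) link
ln notes.txt notes-hardlink.txt        # create a hard link to the same data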
Chapter 4: Networking & System Management
Section 1: Network Testing using Ping & Traceroute
Ping
The ping command, which is based on the Internet Control Message Protocol (ICMP),
sends a network packet called an ICMP echo request to a remote server, which replies
with the information about how long it took for the packet to get to the server and back
to the station. The response can also include an error message if the connection fails.
Ping can be used with an IP address or DNS name and has advanced options that can be
set with flags such as -c, which counts the number of pings, -f, which floods the server
with ping requests, and -i, which sets the interval in seconds between each packet it
sends.
Traceroute
Traceroute checks the path of a packet that is sent to its destination and displays it in
the command line. Each device along the path is known as a hop. Not every hop will be
visible because some devices do not reply to ICMP requests, even though they process
them. Such devices are marked with **. The final destination is the target IP address,
and the source IP is linked to the sending station.
IP Address
When clean versions (without additional tools) of some Linux distributions are installed
(such as Debian), the ip address command displays the IP address of the station. The
shorter ip a command can also be used and is included in Debian's iproute2 package. The
popular ifconfig command belongs to the older net-tools package, which usually has to be
installed separately.
ifconfig
Like the ip address command, ifconfig displays information such as the IP address,
network card name, protocol support (IPv4/IPv6), subnet mask, and more. The ifconfig
command is also used to turn the network interface on or off with the following
command syntax: ifconfig <interface> up and ifconfig <interface> down.
Section 2: Networking Files & Configuration
The Linux environment can be used as a client for different needs, and it can be used as
a server to manage stations and users. Either way, the host obtains the IP address from
a DHCP server or acts as the server that distributes IP addresses.
dhclient
IP addresses can be changed, released, renewed, or removed from the server side and
can be configured on a client station using the dhclient command. The dhclient
command requests a new IP address, dhclient -r releases an IP address received from a
DHCP server, and dhclient -p allows configuration of a custom port (the default port is
68 UDP).
Static IP Address
A static IP address can be configured through server configuration or from the client.
However, setting a fixed IP address via the client configuration file can cause a collision
between IP addresses on the same network if the DHCP server distributes the address
when the host is down and the address is available for use. In most cases, the address is
reserved on the server.
There are two ways to set static IP addresses for a client. The first is via a command line
with the following syntax:
ifconfig <interface> <ip_address> netmask <mask> up
All the information about an address can be gathered by a simple ifconfig or ip address
command. To apply the changes, the interface needs to be restarted and in some cases
a reboot is required as well.
The other method is to configure the interface configuration file with the address,
netmask, and gateway in the /etc/network/interfaces file (in Debian). Each Linux
distribution keeps its network configuration file in a different location.
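An illustrative static configuration in /etc/network/interfaces might look like the following (the interface name and addresses are examples):
auto enp0s3
iface enp0s3 inet static
    address 10.0.0.50
    netmask 255.255.255.0
    gateway 10.0.0.1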
Route Display
The route command displays the host's routing table, which shows the possible ways to
forward network traffic. For example, if a host sits on the 10.0.0.x network and needs to
transfer information to the 192.168.0.x network, adding a route for that network through
the gateway will allow the communication. The table is displayed with the route command
and contains columns for the destination, gateway, genmask, flags, metric, Ref, Use, and
Iface.
• Destination – The routed network
• Gateway – The gateway address that points to the network
• Genmask – The network mask of the destination network
• Flags – Additional data that describes the routes (U = up, G = gateway)
• Metric – A value that specifies the preferred route when there are several routes to
the same destination. A lower number indicates a higher preference.
• Ref – References to the specified route.
• Use – Route lookups (decision-making process to determine how to route packets
to their destinations).
• Iface – The interface used for the route.
Lab Assignment: LNX-04-L1 Static & Dynamic IP Configuration
Practice static and dynamic IP address configuration and learn more about
routing.
Follow the instructions in lab document LNX-04-L1.
nslookup
A package called dnsutils contains the nslookup tool, which resolves domain names to IP
addresses. Each domain has information about the owner, main servers, and fully qualified
domain name (FQDN). Dnsutils allows you to run the DNS check from the CLI, although
similar lookups can also be performed online. Flags can be added to the command to query
specific record types. For example, -type=ns displays the NS records that point to the
domain's name servers (managed by providers such as Microsoft, Cloudflare, and Google),
and -type=any displays additional information about the domain, such as NS, mail, and
expiration records, including SPF and TXT.
DNS Configuration
The service in Linux systems responsible for name resolution is called systemd-resolved.
Name resolution is performed by DNS servers specified in the resolv.conf file located
in /etc/.
Windows stores local host-name-to-IP mappings in the hosts file under the
System32\drivers\etc directory, and Linux stores them in the /etc/hosts configuration
file. This file serves the same purpose in Linux and Windows: when a host tries to
resolve a name, it first refers to the local file and then checks other resolving
sources, such as DNS servers.
Network Statistics
Another networking perspective is checking the connection from the local host. The
netstat command shows network statistics about the workstation and displays the port
number, incoming and outgoing connections, if the connection is active or inactive, and
if the port is listening. Some tools using these commands display the information in a
graphical view.
Netstat returns results with information about the online state and live connections,
similar to the ss command, which dumps socket statistics. The ss command is installed
by default in most Linux distributions and is easier to use than netstat.
lsof
Another system indicator tool is lsof, which displays the open files, both local and
network-related, on the host where the command is executed. If the user does not have
permission to inspect a particular file or process, the output may contain warnings such
as permission denied. Some files are not accessible to regular users, such as system
files owned by root, while accessible entries are displayed with the directory path,
process ID (PID), node, user, file type, and device.
Section 3: Updating & Upgrading
During Linux installation, a file in the /etc/apt/ directory called sources.list is installed.
Many Debian-based distributions make use of a directory of list files called
/etc/apt/sources.list.d/. The purpose of the files is to communicate with the sources
listed in the file for online updates. Since Linux is an open-source operating system,
source lists can be configured freely in all distributions. For example, Kali tools can be
distributed in the Debian environment by adding a Kali distribution source list.
Package
A package is a type of archive that contains all files required for a binary to be installed
and operate properly. In some cases, a package may also list other packages required
for the specific binary.
Packages are typically installed by software called Package Manager, which is a simple
and highly efficient utility for installing, updating, and removing binaries. The package
manager is operated via commands and does not require the use of browsers and
download websites.
Apt Package Manager
Debian uses a package manager called "apt" that provides a wide selection of options to
handle packages.
The options include:
• apt install – Installs packages
• apt remove – Removes packages
• apt update – Updates package lists for upgrades
• apt upgrade – Updates existing packages to the newest available version
Running apt update does not install packages. Instead, it refreshes the local list of the
newest available package versions; apt install and apt upgrade then use that list to
download the packages (cached in the /var/cache/apt/archives directory) and install them.
Repositories
The entries listed in the sources.list file are called repositories, which are storage
locations (remote servers) that Linux uses for all packages. The file is located in the
/etc/apt/ directory, and there are usually additional list files in the
/etc/apt/sources.list.d/ directory.
When an update or installation command is initiated, the system must have internet
connection to access the repository. Without the connection, the system will not be
able to view the information in the repository.
Changes can be made in the sources.list file to change the accessed repositories, but
such changes require root access.
Lab Assignment: LNX-04-L2 System Update & Upgrade
Obtain a more comprehensive understanding of how Linux repositories
work and practice using the repositories of other distributions.
Follow the instructions in lab document LNX-04-L2.
Debian package (Dpkg) files (.deb) allow for the easy installation and removal of
applications, and provide information about the package. Dpkg is a common way to
install and remove applications in a Linux environment.
Good to Know
Dpkg stores and saves information about .deb files.
Tip
To install a Debian package, use the command dpkg -i file.deb
Section 4: Apache
Apache is a common server program in Linux that provides web hosting services.
Apache is run as a service itself, called apache2. The service operates by default on port
80 for HTTP and needs to be configured to use port 443 for HTTPS.
Apache is not a physical server. It operates via software that can be installed on a
machine and used as a server. It creates a connection between the server and web
browsers. One of the reasons for its popularity is the fact that Apache is cross-platform
and works on UNIX-based and Windows OS systems.
When Apache is installed on a Debian system, browsing to the server's address displays the default Apache welcome page.
Apache Configuration
The web files are located by default in /var/www/html, which is the document root folder
containing the Apache webpages (including the default index.html). This location can be
changed through the Apache configuration files, which are usually located in
/etc/apache2/. The main file, apache2.conf, is responsible for loading the other
configuration files in the /etc/apache2 directory.
Starting the Service
Since not all daemons (servers) boot with the OS, it is a best practice to know some
basic service commands, such as service <daemon_name> <action> or the newer
version of the service command, systemctl <action> <daemon_name>. The most
common actions are start, restart, stop, and status.
You can configure a service to start at system boot using the command systemctl
enable <daemon_name>. Another important command is journalctl, which is a utility
that retrieves messages from the kernel, system daemons, journals, and other log
sources.
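For example, with the apache2 service:
sudo systemctl start apache2     # start the service now
sudo systemctl enable apache2    # start it automatically at boot
systemctl status apache2         # check whether it is active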
Service Debugging
In some cases, services may encounter errors when activated or used daily. When a
service is activated, it is important to check its status using the command service
<service name> status and verify that it is active. If the service fails to start, a brief
description of the problem will often be provided.
Another command that can be used to view service status is systemctl status <service
name>.
The command journalctl can be used to retrieve service data, such as messages, from the
kernel, system services, and additional sources. The command journalctl -u <service name>
(or --unit=<service name>) can be used to display logs for specific services.
Lab Assignment: LNX-04-L3 Install & Configure Apache
Install and configure an Apache service and create a local web server.
Follow the instructions in lab document LNX-04-L3.
Section 5: Other Applications
Logwatch
Logwatch is a system log analyzer and reporter. It generates periodic reports based on
criteria specified by the user and can aggregate logs from multiple machines to a single
report. Even though Logwatch was written for the Linux OS, it can analyze logs from
multiple platforms, although some older log formats may not be supported by newer
Logwatch versions.
Logwatch is installed using the command apt install logwatch -y
dmesg
dmesg is a diagnostic tool that generates messages from the kernel ring buffer during
the boot sequence. This is especially useful in cases of device failure. The kernel ring
buffer records messages related to the operation of the kernel.
Nginx
Nginx is another open-source web server that focuses on performance optimization.
Aside from being a web server, it is also used as a reverse proxy, HTTP cache, and load
balancer. The advanced capabilities of Nginx have often made it a preferable choice
over Apache.
Other Servers
There are many other types of servers, of which the following are some examples:
• Squid Cache Proxy: A proxy server, mainly used to resolve slow network
connectivity
• Cups Print Server: A print server that runs in the background and transfers print
requests
• MongoDB: A type of server database that is document-oriented and unstructured
(NoSQL)
• Gaming server: Servers that can run on home PCs for multiple-player video
games
• Plex Media Server: A service that can store entertainment content (similar to Netflix),
such as movies, series, music, and more
Cywar Lab Assignment: LNX-04-CYL1 Linux Networking and
Services
Practice a basic level of service troubleshooting.
Follow the instructions in Cywar lab document LNX-04-CYL1.
Chapter 5: Services & Hardening
Section 1: Common Services & Protocols
Linux Services
Iptables is a generic, highly versatile, firewall utility that is pre-installed on most Linux
distributions. It is CLI-based, with no graphical interface, and looks for rules in its table
that match packets, and either allows them, or blocks them.
Named, which is part of the BIND DNS package, is a service that executes the DNS
server daemon, which converts host names to IP and vice versa.
Sendmail is a mail transfer agent (MTA) used to deliver pre-formatted email messages.
It comes pre-installed by default on most Linux distributions and does not have a GUI;
it is managed through the CLI. Sendmail delivers mail using SMTP, the protocol used by
nearly all email services.
Network Time Protocol Daemon (NTPD) is the most widely used method to sync Linux
system clocks with network time servers.
Linux Protocols
Secure Shell (SSH) is a secure network protocol with a range of uses. It can be used to
securely access remote servers and hosts, and operates by creating an encrypted
connection between a local client and a remote server over an insecure network such
as the internet.
The connection not only allows you to access remote servers and hosts but also to
securely transfer files using the SCP protocol via SSH. SSH is used by applications such as
PuTTY, MobaXterm, and many more. Port 22 is reserved for SSH.
File Transfer Protocol (FTP) is one of the oldest methods for transferring files among
hosts over TCP/IP connections. FTP is a client-server protocol that creates two
communication channels between a client and a server: one to control the
conversation, and another to transmit the data. FTP does not encrypt transferred files
and was eventually replaced with Secure Shell FTP (SFTP), which works over SSH.
FTP uses two ports: port 21, to set the connection between both communicating
parties, and port 20, to transfer the data.
Samba is an open-source software suite that runs on Linux-based operating systems
and communicates with Windows clients. Samba enables the sharing of resources such
as printers and files through the Common Internet File System (CIFS) and the Server
Message Block (SMB) protocols. SMB is used by applications and services to talk to each
other over a network.
SMB uses two ports: port 139 when it runs on top of NetBIOS (the older arrangement), and
port 445 when it runs directly over TCP.
The SMB2 version, which Microsoft released in 2006, featured reduced communication
chatter. SMB3, released in 2012, introduced enhanced capabilities such as support for
virtualized environments, and end-to-end encryption.
NFS is the Linux equivalent of SMB. Interoperability between the two is achievable but
tricky due to the difference in authentication and security policies.
Section 2: SSH & SCP
SSH Remote Connection
A remote connection to an SSH server can be established from a CLI with a dedicated
command or with a third-party application, such as PuTTY.
In every connection, one side has the role of the server and the other has the role of
the connecting client. The SSH daemon constantly listens to the service's port and waits
for a client's request to connect.
To connect via the CLI, the ssh command is used with the address of the server and the
user to connect as: ssh [user]@[IP address]
Initiation of an SSH connection via the Terminal
SSH Connection
The connection process consists of several steps. The first step is to run the command
with the server's details to initiate the connection. Upon initiation, the client is asked
to confirm the server's host key fingerprint, and is then prompted for the password of the
specified user. If the password is correct, a remote shell appears.
Secure Copy Protocol
SCP is a protocol that provides the ability to transfer files among parties securely, via
SSH. Like SSH, SCP also has an authentication level in the connection process.
SCP allows file uploads to and downloads from a server, via port 22. Different flags can
be used to control the bandwidth, cipher, and port of the connection.
The SCP protocol is easy-to-use and pre-installed on Linux distributions.
The following command flags can be used to transfer files to a remote location:
SCP Flag       Meaning
-P <port>      Specifies the remote connection port.
-p             Sends original modification details such as creation date, access times, etc.
-r             Copies the entire directory (recursive).
-v             Used in debug mode to observe the connection details between SCP and SSH.
-S <program>   Uses a third-party program for the connection.
-l             Bandwidth limitation.
The following command is used to transfer the file test.txt to a remote location, john's
home directory:
scp test.txt root@10.10.10.12:/home/john
The following command is used to copy the file test.txt from a remote location to the
local system:
scp root@10.10.10.12:/home/john/test.txt /root
Section 3: FTP
FTP works with two sides: one is the client, and the other is the server.
The client application is called ftp, and the server application is called vsftpd.
The client and server applications may not be pre-installed in the Linux distribution by
default.
FTP Installation
The installation process for the client is simple and requires the execution of a single
command: apt install ftp
After the installation, a connection can be established to a server simply by providing
the name of the application and the target IP address, for example: ftp 10.0.0.1
Upon connection, the CLI of the ftp server will appear and allow command execution.
FTP Command    Meaning
append         Appends a file
bye            Terminates the connection
cd             Changes the directory
cdup           Goes to the parent directory
close          Ends the session
delete         Deletes a file
get            Receives a file
A browser can also be an FTP client, as long as the protocol is specified in the target URL (for example, ftp://10.0.0.1).
Access to FTP server via a browser
File Sharing
The protocol allows file sharing among parties and can also be used to deploy a file
storage server that users can access via CLI or a browser and download files from it. The
server’s access permissions can be controlled via its configuration file, vsftpd.conf,
which is located in /etc/.
The default installation of an FTP server is considered insecure, since all data is
transferred in plain text.
vsftpd
This is the server side of the FTP communication. The server is responsible for running
the FTP daemon and stores all information regarding the service and its configurations.
Installation of the vsftpd service is done using the apt install vsftpd command.
Section 4: Samba
Samba is a service that operates via the SMB protocol, which is responsible for file
sharing in a network. The shared directory that it opens is specified in the smb.conf
configuration file located in /etc/samba. The configuration file allows selection of
different access permissions, such as allowing write operations to the directory,
permitting browsing in other directories, and more.
Samba Installation
The installation process is straightforward, and the service is installed via the apt
package manager.
Before installing Samba, it is recommended to have a fixed IP address to avoid
unnecessary changes to the configuration file.
A configuration change in the smb.conf file
Smb.conf
Samba’s configuration file is used to manage all the options regarding data sharing. The
file contains comments that explain the different options and assist in the
configuration process. The process includes the following steps:
1. The first configuration option in the file specifies the service's work environment,
meaning whether it is part of a workgroup or a domain.
2. To share a folder, its name and path must be specified.
3. The folder’s read and access permissions must be set. For example, allow writing
to the share, and provide guest access to it.
4. Any change in the configuration file requires the Samba service to be restarted for it to take effect.
Part of the smb.conf configuration file
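A minimal share definition might look like the following sketch (the share name and path are examples):
[shared]
   path = /srv/share
   browseable = yes
   writable = yes
   guest ok = no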
SMB vs. NFS
NFS (Network File System) is a protocol used to access files over a network.
SMB                                          NFS
Supports Windows                             Supports some Windows versions
More complex than NFS                        Setting up a connection is fast and easy
Meant for multiple users                     Meant for downloading documents only
Resources can be managed from other hosts    Can be run from a single machine
Lab Assignment: Install and Configure Samba & FTP
Perform basic configuration of Samba, provide access via MS Windows
client, and create an FTP server to transfer files from another machine.
Follow the instructions in lab document LNX-05-L2.
Section 5: Hardening
Hardening is the practice of enhancing security, checking configurations, creating rules
and policies, updating and patching software and systems, and a variety of other
measures, with the aim of decreasing the attack surface of programs, services, protocols,
and operating systems.
SSH Hardening
The following steps are recommended to provide a higher level of security for the SSH
service:
• Back up configuration file – It is recommended to back up the original file, so that it
can be recovered if a configuration issue arises.
• Disable root login – To prevent remote connections with high-level privileges, the
option to log in with the root user must be disabled.
• Allow list users – Specify which users can log in to the system to lower the potential
attack surface.
• Change default ports – Changing the ports assists in reducing the attack surface and
can mitigate potential attacks performed by inexperienced attackers.
• Set a certificate – Use this option to control the login security level without
monitoring the usage of strong passwords.
• Configure Fail2Ban – A brute-force prevention solution.
• Audit connections – Monitor requests to connect to the service, to keep track of
potential threats.
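Several of these recommendations map to directives in /etc/ssh/sshd_config; an illustrative (not prescriptive) snippet, with example values and usernames:
Port 2222               # non-default port
PermitRootLogin no      # disable root login
AllowUsers johnd may    # allow list of users
After editing the file, restart the service (for example, sudo systemctl restart ssh) for the changes to take effect.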
FTP Hardening
FTP service hardening options include those described above for SSH, but some
additional options address its unique vulnerabilities, as follows:
• Disable anonymous connections – Prevent the option to log in without
credentials.
• Set disk quotas – Restrict the size of files that can be uploaded.
• Set access time restrictions – Prevent users from connecting to the server at
specific times, such as after work hours.
Samba Hardening
The following hardening options are suggested for the Samba service:
• Allow list the host access segment.
• Turn off the option to save passwords.
• Don't publish the service to the world.
• Use the relevant SMB version.
Note: When the share is configured, the browsable, writeable, and guest ok options
must be considered carefully, since they can increase the attack surface in the server.
Cywar Lab Assignment: LNX-05-CYL1 Troubleshooting Webservers
Make webserver services more secure and less vulnerable.
Follow the instructions in Cywar lab document LNX-05-CYL1.
Section 6: Bash Scripting
The aim of this section is to prepare for the next module, which covers Bash scripting in
more detail.
Why Use Bash?
Bash scripts allow multiple commands to be executed from a single file, which makes
repeated command execution easier. A script can include any system command that would
otherwise be typed in the terminal.
Bash features make it ideal for administrative task automation. Script files have the
extension .sh, and start with the line #!/bin/bash, which tells the system where to pass
the data for execution.
Bash
The characters ./ are placed before the name of a script, for it to be executed.
When a script file is created, it is not assigned execution permissions, which must be
added manually to allow its execution.
When scripts are written in text editors, the editors recognize the code and provide
different colors for their various sections.
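A minimal illustrative example (the script name is arbitrary):
echo '#!/bin/bash' > hello.sh
echo 'echo "Hello from a script"' >> hello.sh
chmod +x hello.sh    # add the missing execute permission
./hello.sh           # run the script from the current directory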
Variables & Arguments
The following signs are used in Bash scripts to define different types of information and
perform actions.
Character    Function
$            Invokes a variable
#            Starts a comment
""           Defines textual content
$!           Expands to the process ID of the last background command
Automation
Automation is a crucial aspect of Linux system management and operation. Bash
scripting is ideal for that purpose and can significantly reduce the time and effort of
administrator and user tasks.
When an automated task is performed, it is recommended to have the script print
messages about the execution for informational and possibly debugging purposes.
sed
The sed command is typically used for word replacement. It runs a search on the
specified item, deletes the desired word, and replaces it with another specified word.
After its execution, it displays an output of the result.
The command includes flags that can be used to enhance the search and replace
operation.
As an example, the following command replaces the word words with the word replaces:
sed 's/words/replaces/' filename
Chapter 6: Bash Scripting
Section 1: Bash Scripting Introduction
Bash, also known as the Bourne-again shell, is a type of interpreter that processes shell
commands. An interpreter is a program that executes instructions written in a high-level language.
Running a high-level language program can be done via an interpreter or compiler. The
difference between them is that the interpreter translates the code into an
intermediate form and then runs it, while compilers translate the code into machine
language before running it. Although compiled programs run faster, they first need to
go through the compilation process, which is time-consuming.
Interpreters run high-level programs immediately. A shell interpreter links commands
written in CLI to OS services and binaries, and runs them.
Bash Scripting
Anything that can be run as a command can be included in a script. For example, you
can write a script to run simple commands such as tar and crontab to schedule a
backup.
Shell scripting is a complete language, with variables, functions, and conditional
execution. When you execute a script, its commands run in a shell just as if they had
been typed at the terminal. Because scripts are plain text, their commands can be read
directly, and commands entered interactively are logged and can be viewed using the
history command.
Similar to processes, scripts can run and perform tasks in the background.
Shebang
A script is indicated by the shebang at the start of the document. In a UNIX-like
operating system, the shebang tells the system which interpreter should run the file. It
consists of a hashtag (#) and an exclamation mark (!) followed by the path of the
interpreter, with no spaces.
To execute a script file, simply run ./ and the name of the script with no spaces, or bash
<script path and name>.
Alias
When working with a shell interpreter, aliases can help you work faster and more
efficiently. An alias is a custom shortcut for a command or a combination of commands,
which may be piped together.
The commands can be sequenced in .bashrc files, which are located in the user’s home
directory. The file is hidden (as indicated by the dot before the file name) and can be
viewed using ls -la (the -a flag shows hidden files) or opened using a text editor (for
example, vim ~/.bashrc). The file can also be viewed using the GUI file explorer, and
Ctrl+H allows you to see hidden files in a folder.
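For example, an alias added to ~/.bashrc (the alias name is arbitrary):
alias ll='ls -la'    # shortcut for a longer command
source ~/.bashrc     # reload the file so the alias takes effect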
If an alias does not exist, the shell interpreter will go to the PATH and look for
executable files that correspond to the given command. PATH is an environment
variable that points the shell to directories where the executable files (such as binaries
and scripts) reside.
Variables
A variable is a character string to which a value can be assigned. Since a variable is a
pointer to actual data, it can represent anything from a character to a device. Variables
are indicated by the dollar sign ($).
To assign a variable, simply write the variable name, followed by an equals sign (=), and
then the value: a number, or a word in quotes. For example, num=3 or str="Hello World".
To assign a command's output to a variable, you can use backticks (`):
myIP=`ip a | grep -A 1 enp0s3`. Using that example, you can display the IP address by
running echo $myIP after the assignment.
Tip
When setting variables in Bash, don’t place spaces between the variable and
the value.
Section 2: Script Input & Output
Read & Echo
When learning a new language, a good way to start is to learn how to print and read
data. The read command waits for input from the user and assigns it to a variable. The
command can be used in a script to collect information from the user and use it as a
variable.
For example:
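A simple illustrative script:
#!/bin/bash
echo "What is your name?"
read name
echo "Hello, $name"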
Running the script prints the question, waits for the user's input, and then prints the
greeting with the value entered.
By using the -p flag (often combined with -e) with the read command, the first echo can
be replaced: the flag prints a prompt message for the user instead of requiring a
separate echo.
Disclaimer
When using the read command, any time you take user input to a program,
you should watch out for potential malicious injections.
Command Substitution
variable=$(command) runs the command in the parentheses and saves its output in the
variable (like the backtick syntax mentioned earlier). A good test is to assign the
output of the date command to a variable and then echo the variable a few times in
succession: you will get the same result even after time has passed, because the command
ran only once, at assignment.
Logical Operators
Several commands can be written on a single line by separating them with a semicolon
(;). This allows you to combine several commands in a single variable. Other command
separators can be used as well, such as ||, which runs the second command if the first
fails, and &&, which runs the second command if the first succeeds.
The special variable $? holds the exit status of the last command and can be displayed with echo $?, both in the terminal and inside a script.
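For example (the directory name is arbitrary):
mkdir /tmp/demo && echo "Directory created" || echo "Creation failed"
echo $?    # displays the exit status of the last command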
Section 3: Conditions
Conditional Operators
Conditional operators perform actions based on whether a statement is true or false.
They consist of if…then statements, whereby if the response is true, then an action will
be performed, or a message will be issued. There can also be an else clause for a false
response, which will perform a different action or issue a different message.
Conditions can also be combined with logical operators, such as AND and OR, which can
check as many things as needed in one statement. Logical operators are indicated by a
double ampersand (&&) for AND and a double pipe (||) for OR. With AND, both conditions
must be true; with OR, only one condition must be true.
Operators
Operator    Meaning
=           Equals (string)
!=          Does not equal
-lt         Less than
-le         Less than or equal to
-gt         Greater than
-ge         Greater than or equal to
-eq         Equals (number)
<           Less than
>           Greater than
If, Else, Then
The if structure includes if followed by open brackets, with the condition appearing in
the brackets. The word then follows, and what should happen if the statement is true.
The structure ends with fi.
Note: If statements must include the following to work:
- They must end with fi.
- They must have spaces before and after an option in brackets.
For example:
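An illustrative script:
#!/bin/bash
num=5
if [ $num -gt 3 ]
then
    echo "The number is greater than 3"
fi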
There is an option to write all of the code in one line.
For example:
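Using the same num variable as above:
if [ $num -gt 3 ]; then echo "Greater than 3"; fi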
Elif
Elif is a combination of else and if. While the if condition is always checked and else
runs only when it is false, elif makes it possible to test several conditions in sequence
within a single statement, instead of relying on a single if and a final else.
For example:
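An illustrative sketch (num is assumed to hold a number):
if [ $num -gt 10 ]; then
    echo "Large"
elif [ $num -gt 5 ]; then
    echo "Medium"
else
    echo "Small"
fi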
Loops
Loop statements simplify repetitious tasks by continuously repeating an action until a
condition is met. Repetitive tasks are used in programming, as well as in malicious
brute-force attacks, the aim of which is to log in to a system by guessing a user’s
password.
In malicious usage of a loop, an attacker can create a script featuring a loop that tries to
enter a website by going through many possible passwords one at a time.
There are three kinds of loops:
While loop: Performs a block of code, as long as the condition is true.
For loop: Performs a block of code with a range, as long as the condition is true.
Until loop: Runs a block of code until a specific condition is met.
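Illustrative sketches of the while and until forms (the counter variable is arbitrary):
i=1
while [ $i -le 3 ]; do
    echo "while iteration $i"
    i=$((i + 1))
done

i=1
until [ $i -gt 3 ]; do
    echo "until iteration $i"
    i=$((i + 1))
done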
'Do' Parameter
The do keyword defines the action to perform while the condition holds. It follows the
loop statement (for, while, or until), and done closes the loop.
Note: done is necessary to let the script know where the loop's actions end.
For example:
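An illustrative for loop (the usernames are examples):
for user in johnd may root; do
    echo "Checking $user"
done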
Exit Status
Exit status is a code returned by every command that indicates whether it succeeded or
failed. The standard exit code is 0 for success, and any number between 1 and 255
indicates a failure.
Diff
Diff is a command that compares two files line by line and displays the differences.
The output uses specific symbols to describe the changes that would be required to make
the two files identical.
Lab Assignment: LNX-06-L1 Basic Scripting
Practice common Linux commands, Bash scripting, and Bash operators.
Follow the instructions in lab document LNX-06-L1.
Section 4: Arithmetic Operators
Arithmetic operators are mathematical functions that can calculate two or more
operands, or objects that can be manipulated. For example, in 2 - 1 = 1, the 2 and 1 are
operands and the minus sign is the operator.
Arithmetic operators are used in equations in computer language scripts, to perform
various types of calculations.
'let' Command
The let command is used to calculate arithmetic expressions. It converts a variable to an
arithmetic expression.
The command does not require spaces, but if one is needed in the expression, it must
appear in quotation marks.
Mathematical Equations
The following mathematical equations can be used for calculations:
• + = Addition
• - = Subtraction
• * = Multiplication
• / = Division
Notes for variables and calculations:
• Using a double plus sign (++) before or after a variable increases the value by 1.
• Variables are calculated relative to their locations within the command flow.
• The last variable that is calculated will be the final value.
Example:
In the following script, the output will be 4 because the NUM (number) variable is
recalculated by each successive equation.
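The original script is not reproduced here; a sketch consistent with that description:
NUM=1
let NUM=NUM+1        # NUM is now 2
let "NUM = NUM * 2"  # quotes allow spaces in the expression; NUM is now 4
echo $NUM            # prints 4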
In arithmetic operators, there is an option to use the escape character (\), which
indicates the removal of the special meaning of a character.
Expr
The expr command evaluates a given sequence. For example, in the following script,
the output will be 6:
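A sketch consistent with that result, which also uses the escape character described above:
expr 2 \* 3    # prints 6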
Double Parentheses
The starting point of a calculation can be indicated using double parentheses.
For example, in the following script, the calculation will begin with 2 + 3, whereby the
internal set of parentheses is used for the arithmetic operation.
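An illustrative example, in which the inner parentheses force 2 + 3 to be evaluated first:
echo $(( (2 + 3) * 4 ))    # prints 20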
Section 5: Archives
An archive is a group of files that are bundled into a single file. Many archives are
compressed to reduce the file’s size.
Transmission of data and program distribution can also be performed using archives.
Software available online is typically distributed in archived files that include all
associated files and documentation.
Zip & Unzip
The zip command is located in the /bin Linux directory. Options can be added to the
command, such as the -d flag, which deletes entries from an existing archive, and -u,
which updates the archive with newer versions of the files.
Zipping Files
The zip command compresses specified files. A list of files that were added will be
displayed at the end of the command.
Zip is not installed by default in the Debian distribution.
Unzipping Files
For a zipped file to be unzipped, the file must be specified in the unzip command.
The -d flag can be used to export the zipped content to a different folder.
Compression
Gzip compresses data to reduce its size. The original data can be recovered by
unzipping the compressed file. This application is essential for web apps and websites
because the HTTP protocol uses Gzip for output, enabling smaller files to be
downloaded by visitors.
BZip2 is an open-source file compression program. It can only compress individual files
and is not a file archiver.
Tar Archive
Tar is typically used by Linux system administrators to back up data. It creates archive
files that can be moved easily from disk to disk. A tar archive is created using tar -zcvf
<archive_name> <directory>.
The -zcvf flags have the following meanings:
• z – Compress using Gzip
• c – Create a new archive
• v – Show a list of processed files
• f – Specify the archive name
For example:
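An illustrative command (the paths are examples):
tar -zcvf home-backup.tar.gz /home/johnd    # create a compressed archive of the directory
tar -zxvf home-backup.tar.gz                # extract it (x replaces c)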
Creating Backups
Creating file backups is crucial for any organization, regardless of its size. As a
general guideline, backups should be created frequently (for example, every 24 hours) to
limit data loss, and important data should be backed up at least once per week.
The following script creates a backup of the home folder and puts it in the temp folder:
backup_files="/home/"
"/home/" is the location you choose to back up.
dest is the destination of the backup files, for example dest="/tmp" (the temp folder mentioned above).
The archived file name consists of the hostname and the current date:
currentDate=`date +%Y-%m-%d`
hostname=$(hostname)
archive_file="$hostname-$currentDate.tgz"
A message to the user regarding the process can be sent using:
echo -e "Backing up $backup_files to $dest/$archive_file \nProcess started"
To print the start date:
echo "Script start time: `date`"
To back up files using the tar command, run:
tar czfP $dest/$archive_file $backup_files
To print a message at the end of the process, run:
echo -e "Backup finished! \nScript end time: `date`"
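Putting the fragments together, a complete sketch of the script might look like the following (the destination directory is an example):
#!/bin/bash
backup_files="/home/"
dest="/tmp"
currentDate=`date +%Y-%m-%d`
hostname=$(hostname)
archive_file="$hostname-$currentDate.tgz"

echo -e "Backing up $backup_files to $dest/$archive_file \nProcess started"
echo "Script start time: `date`"

tar czfP $dest/$archive_file $backup_files

echo -e "Backup finished! \nScript end time: `date`"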
Lab Assignment: LNX-06-L2 Compression & Backup
Build a backup plan for a directory and write an automated script that
compresses the folder and backs it up.
Follow the instructions in lab document LNX-06-L2.
Section 6: File Integrity
In Linux, file integrity refers to whether or not files have been modified.
Hashing
Hashing is the process of generating a unique value from a string or text using a
mathematical algorithm. Hashes are used for almost any type of digital content,
including text, documents, images, and more. A hashing function will always output the
same results for the same given data.
Databases of known password hashes, called rainbow tables, contain data that
correspond with hashed values.
Hashing Tools
Linux has built-in checksum-generation commands that display the output of a
cryptographic hash function. One of the most popular hashing utilities in Linux is
md5sum.
There are tools that make hashing easier and can create a hash for almost anything,
including files, words, and passwords.
The following are examples of hashing tools.
B2SUM
B2SUM is a BLAKE2 hashing tool. BLAKE2 is a cryptographic hash function that runs faster
than MD5 and SHA-1 while offering a level of security comparable to the SHA-3 standard.
Cksum
Checksum is a well-known tool that uses the cksum command to compute a CRC checksum and
count the bytes in a file. This allows you to compare two files, a copy you created and
the source file, to ensure that the data was not altered.
Md5sum
Md5sum uses an MD5 message digest. It generates an MD5 hash for almost any type of
object and is pre-installed in most Linux distributions.
Sha1sum
Sha1sum works on SHA-1 message digests, which are no longer considered secure.
Sum
The sum command performs a checksum and counts 512-byte blocks in a file.
Extra: /etc/shadow
Linux passwords are stored in /etc/shadow, which requires high-level permissions to
access. The file consists of eight fields separated by colons.
The password structure of the shadow file is as follows: $id$salt$hashed.
$id is the algorithm used in Linux.
Other algorithms are indicated as follows:
$1$ is MD5
$2a$ is Blowfish
$2y$ is Blowfish
$5$ is SHA-256
$6$ is SHA-512
The last six fields in the file indicate password expiry and account lockout, as follows:
Last password change: Days since Jan 1, 1970, that a password was last changed.
Minimum: The minimum number of days required between password changes.
Maximum: The maximum number of days the password is valid.
Warn: Number of days before a password is set to expire, upon which the user will be
warned that the password needs to be changed.
Inactive: Number of days after a password expires, upon which the account will be
disabled.
Expire: Days since Jan 1, 1970, that the account was disabled.
Lab Assignment: LNX-06-L3 File Integrity Monitoring
Practice Bash variables and operators while obtaining a better
understanding of the Apache protocol and its folders.
Follow the instructions in lab document LNX-06-L3.
Section 7: Extra: Crontab (Cron Table)
Crontab is a tool that stores tasks that are scheduled to be executed by cron, such as
running a routine script or restarting a machine.
Since Cron is popular among IT system administrators, it is often considered an
attractive target by hackers. If default settings and permissions are never reviewed,
scheduled jobs can remain unprotected and open to abuse.
Crontab Text Editor
When Cron is started for the first time, you need to specify the text editor. The crontab
-e command opens the current user's crontab file for editing (in this example, the
crontab of user johnd).
The default and recommended editor is Nano.
Crontab Structure
The following is an explanation of each field in a crontab entry, such as the example shown below:
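An entry consistent with this explanation (reconstructed; the script path is hypothetical):
00 18 * * 2 /home/johnd/script.sh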
• The zeroes represent minutes, with a range of 0–59.
• The hour specified in the example is 6 p.m.; the range is 0–23.
• All days of the month are specified in the example (*); the range is 1–31 days.
• All months are specified in the example (*); the range is 1–12 months, or the name of
the month.
• Tuesday is specified as the third day of the week in the example (the count starts at
0); the range is 0–6, or the names of the days.
• The file path navigates to the script for execution.
The example above sets a task to run a script every Tuesday at 18:00.
The same script is shown below in the Nano text editor.
The example in Nano includes a description of how the command must be entered for
Cron to run it.
The crontab -l command can display the file to verify changes.
The following are additional examples:
• Cron for every 2 minutes: */2 * * * *
• Cron for every 6 hours: 0 */6 * * *
Crontab Parameter Format
• m – Minutes (0–59)
• h – Hours (0–23)
• dom – Day of month (1–31)
• mon – Month of year (1–12)
• dow – Day of week (0–6, 0 represents Sunday)
• Command – The command chosen to be run
• Dash – Time range
Another tool, called at, is similar to Cron, but is typically used to create one-time
scheduled tasks in Linux. Cron is generally used for more periodic task scheduling, such
as daily, monthly, and yearly.
Cywar Lab Assignment: Bash Scripting for Security
Use all learned commands to build a security script while learning how to
think outside of the box.
Navigate to the Cywar Practice Arena. Find the lab in Linux Security, LNX06
– Bash Scripting, “Scripting for Security”
Chapter 7: Host Security
Section 1: Linux External Mounting
Most major Linux distributions come with the option to live-boot the operating system.
Live-booting means running the OS entirely from RAM, complete with all the programs
and features, without making any changes to the local hard drive, while still having
access to it.
This is accomplished by creating a bootable USB flash drive or a CD/DVD and burning
the OS image file (.iso) using a tool like Rufus. Then, during the boot sequence of your
computer, choose from which medium the BIOS will boot.
There are benefits to this method. If you are thinking of switching to Linux, you can try
different Linux distributions and flavors before making your choice. Another good use is
to keep a bootable flash drive of a preferred OS handy for public computers: it improves
security, and because public computers usually limit what you can do, there are Linux
distributions tailored for exactly this purpose. Linux live boots can also be used for
forensics and recovery, for which there are special distributions as well, such as DEFT
Linux.
There are two significant limitations to the Live-booting method. The first is that this
instance of the OS runs entirely from RAM. Therefore, if there isn’t sufficient RAM, the
OS can be sluggish, crash, or fail to boot entirely. The second is that this instance of the
OS will be gone after a reboot.
RAM is volatile memory, meaning it is cleared upon reboot. However, there is an option
to add persistence to the instance. This involves allocating dedicated space (typically a
persistence partition on the bootable medium or a disk) where the live instance stores
its changes. This is not the same as a dual boot setup. With
dual boot, you have the option at any point to reboot to the alternate OS, while in
persistent live booting, the bootable medium with the OS image on it must be inserted
in the computer for the OS to load. Additionally, in this live OS setup, operating system
files are not stored on the computer’s persistent memory.
Section 2: Boot Protection
When you turn on your computer, several tasks begin to run in sequence in the time it
takes for your desktop to appear. One of them is the boot loader. The boot loader is a
small block of code with instructions for the CPU to prepare the machine and pass it on
to the kernel for more complex processes, like managing the OS. The boot loader code
is in a predefined location on the HDD, and from there it is pulled to the RAM for the
CPU to run.
GNU GRUB (GRand Unified Bootloader) is a boot loader package that allows you to boot
multiple operating systems. Its interface has an easy-to-use selection menu and can list
more than 150 operating systems. GRUB is free software and is used mostly on Linux-based systems, although it can also boot Windows through chain-loading. GRUB can be
reconfigured on the spot using an interactive command line. It can run an OS from a
variety of mediums, even from the network. It provides the capability to view files on
supported file systems.
GRUB has the ability to password-protect its menus from unauthorized access.
However, a person with access to the physical computer can gain access to the file
system through ways that simple password protection cannot prevent, which is why full
disk encryption is necessary. Full disk encryption encrypts the operating system
partition and second stage of the boot loader, which consists of the Linux kernel and
initial RAM disk.
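As a sketch of the menu password protection mentioned above (the user name is arbitrary, the hash placeholder must be replaced with the generated value, and the exact configuration files can vary by distribution), a hashed GRUB password can be created with grub-mkpasswd-pbkdf2 and added to a custom GRUB configuration file:
johnd@debian:~$ grub-mkpasswd-pbkdf2
Add the resulting hash to /etc/grub.d/40_custom:
set superusers="admin"
password_pbkdf2 admin grub.pbkdf2.sha512.10000.<generated hash>
johnd@debian:~$ sudo update-grub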
In Linux systems an entire disk can be encrypted together with all the data stored on it.
For full disk encryption, LVM must be enabled during the installation of the Debian
distribution. After the encryption the files will be available in readable form only when
the system is unlocked by a trusted user. If it is not necessary to encrypt the entire disk,
file-level encryption can be performed to encrypt only specific files.
Linux Unified Key Setup (LUKS) is the standard for Linux hard disk encryption. LUKS provides a standard on-disk format, which facilitates compatibility among distributions.
dm-crypt is a disk encryption subsystem that uses the kernel’s crypto API framework.
dm-crypt can encrypt single files, logical volumes, partitions, or entire disks. Together
with LUKS, it allows different keys to access encrypted data.
Note: dm-crypt does not work solely with LUKS.
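A minimal sketch of encrypting a single partition with LUKS via cryptsetup (the device /dev/sdb1 and the mapping name are hypothetical, and formatting destroys any existing data on the partition):
johnd@debian:~$ sudo cryptsetup luksFormat /dev/sdb1
johnd@debian:~$ sudo cryptsetup open /dev/sdb1 secure_data
johnd@debian:~$ sudo mkfs.ext4 /dev/mapper/secure_data
johnd@debian:~$ sudo mount /dev/mapper/secure_data /mnt
johnd@debian:~$ sudo umount /mnt && sudo cryptsetup close secure_data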
Lab Assignment: LNX-07-L1 Encrypt Grub & System
Learn more about Linux host security, attacks hackers perform on machines
after gaining access, and how the attacks are mitigated.
Follow the instructions in lab document LNX-07-L1.
Section 3: PAM
Pluggable Authentication Modules (PAM) reside between Linux applications and the
Linux native authentication system. Authentication in Linux is done by comparing the hash of an entered password to the stored hash in the /etc/shadow file, in which secure user account information is stored.
Since many Linux programs and services require their own authentication mechanisms
and files, there is a potential for data to be inconsistent among the different types of
files.
With the introduction of PAM, one user database can be used for all services. PAM
integrates multiple low-level authentication modules into a high-level API that provides
dynamic authentication support for applications.
Using PAM, a system admin can deny or allow applications the right to authenticate
users or generate warnings when programs attempt to authenticate.
PAM has a modular design that provides complete control over how users are authenticated simply by editing a configuration file, regardless of the authentication scheme of a certain program or service. This is possible because PAM loads a library containing the functions for the appropriate authentication scheme.
PAM’s features include:
• Enforces password aging and prevents the reuse of old passwords.
• Provides uniform operation among system users.
• Is structured in modules that can interface with services, like SSH.
Packages PAM uses for its operation include:
• cracklib-runtime – a library that prevents users from configuring a weak
password that can easily be cracked.
• libpam-pwquality – a library that provides multiple options for password quality
checks, and rates the passwords based on their randomness.
When PAM is configured, different tokens are available for use to control the created policy:
• auth – Checks if users are who they claim to be.
• required – Control flag for modules.
• pam_listfile.so – Module used to deny or allow services based on an arbitrary file.
• onerr – Set to ‘succeed’ (module argument).
• item – Set to ‘user’ (specifies what is listed).
• sense – Set to ‘allow’ (specifies which action to take).
• file – Set to /etc/ssh/sshd.allow (argument; specifies a file containing one item per line, such as a user name).
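Assembled into a single rule (for example in /etc/pam.d/sshd; this is a sketch using the file path from the list above), the tokens form one line:
auth required pam_listfile.so onerr=succeed item=user sense=allow file=/etc/ssh/sshd.allow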
The configuration of PAM can be done through a single file, located at /etc/pam.conf,
or spread across individual files per program or service, located in the /etc/pam.d
directory.
Note: /etc/pam.d takes precedence over /etc/pam.conf.
Lab Assignment: LNX-07-L2 PAM Authentication Features
How to enforce strong user password policies in Linux.
Follow the instructions in lab document LNX-07-L2.
Section 4: SELinux & AppArmor
Security-Enhanced Linux (SELinux) is a set of security enhancements to the Linux kernel
that allow more granular control over access measures using a labeling system and a
policy enforced by the kernel. SELinux can be traced back to earlier projects by the
United States National Security Agency (NSA). It was designed to enforce the security of
information to better address threats of tampering and malicious or flawed
applications. It comes with a default set of policy configurations that can easily be
changed.
Standard Linux access controls restrict files with read, write, and/or execute
permissions. The permissions can be changed by the user or a program the user is
running. SELinux, on the other hand, governs file access controls using policies deployed
on the system that cannot be changed by a user or program. SELinux uses a system that
labels every process, file, directory, and system object on the machine. The policy rules
define which labels have access to each other.
Multi-Level Security (MLS) is a security scheme that enforces the four hierarchical levels of clearance of the Bell-LaPadula Model: Top Secret, Secret, Confidential, and Unclassified. Each level comprises a sensitivity and a category.
Under MLS, dynamic entities such as users and processes are
referred to as subjects, and static entities such as files,
directories, and devices are referred to as objects. Both subjects
and objects are labeled with a security level: a clearance security
level for subjects and a classification security level for objects.
The purpose of MLS is to prevent access to information by
unauthorized users and machines.
Subjects can read the same or lower levels and write to the same
or higher levels. As defined by the BLP model, they cannot read
up or write down.
Note: MLS access rules are typically used alongside conventional access permissions.
Top-level clearance does not grant administrative rights on the system.
MLS is not installed by default and can be installed with the apt command:
johnd@debian:~$ sudo apt install selinux-policy-mls -y
The SELinux Policy Management Tool is a GUI interface that is installed separately:
johnd@debian:~$ sudo apt install policycoreutils-gui -y
After installation, it can be started from Applications → System Tools → SELinux Policy
Management Tool.
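Once installed, the current SELinux enforcement mode can be checked and switched from the command line (a sketch; getenforce, setenforce, and sestatus come with the SELinux userland utilities, which may need to be installed separately):
johnd@debian:~$ getenforce
johnd@debian:~$ sudo setenforce 0
johnd@debian:~$ sudo setenforce 1
setenforce 0 switches to permissive mode, setenforce 1 back to enforcing, and sestatus prints a fuller summary of the SELinux state.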
A targeted policy is the default policy enforced by SELinux to control processes. When
using a targeted policy, processes that are targeted run in their own confined domain
with limited access to process-relevant files, and processes that are not targeted run in
an unconfined domain. If a targeted process is compromised by an attack, the attacker
will only have access to files in the domain.
AppArmor is a Linux security module (LSM) that creates restrictions for program
capabilities based on individual program profiles. AppArmor is a mandatory access
control (MAC) system, which allows the OS to restrict a subject’s access to objects and
the operations it can run on them. AppArmor profiles operate in one of two modes:
enforcement or complain.
Enforcement mode will enforce the policy defined in the profile and report policy
violations. Complain mode will only report policy violations.
AppArmor policies completely define what resources applications can access, and with
which privileges. By default, access is denied if no profile is set to ‘allow’.
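A sketch of inspecting and switching profile modes with the apparmor-utils tools (the tcpdump profile path is an example and may differ by distribution):
johnd@debian:~$ sudo apt install apparmor-utils -y
johnd@debian:~$ sudo aa-status
johnd@debian:~$ sudo aa-complain /etc/apparmor.d/usr.sbin.tcpdump
johnd@debian:~$ sudo aa-enforce /etc/apparmor.d/usr.sbin.tcpdump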
AppArmor is deployed with several default policies, along with learning-based tools, so
virtually every application can be deployed with AppArmor.
Policy violations can be set to notify users via email or a pop-up notification on the
desktop.
Section 5: Privilege Escalation
Most operating systems are designed with the idea that multiple users may be using the
same machine. Therefore, operating systems have the ability to set different privilege
levels for different users. For example, an administrator will have higher privileges than
a basic user.
Privilege escalation (PE) happens when a user gains privileges they are not entitled to.
These privileges can provide an attacker with the ability to wreak havoc upon the
system by stealing sensitive information, installing malicious software, or deleting
crucial OS files, rendering the OS unusable. An attacker strives to get the highest
available privileges in the system, which the root user has in Linux.
PE is usually achieved by exploiting bugs in the OS or by misusing installed programs,
taking advantage of design flaws to use applications in ways they were not intended to
be used.
Many tools have been created and customized for the purpose of elevating a regular user’s privileges. In addition, there are CVEs that describe system vulnerabilities that can be exploited to achieve PE.
One of the first things an attacker will do upon gaining access to a machine is to look for
sensitive data such as passwords, user names, network schemes, and IPs to maximize
the attack surface and gain access to more machines on the network. Lack of cyber
awareness is a serious issue that hackers exploit.
In one scenario, an attacker sends an email to an employee in an organization. The
email seems legitimate and has a harmless-looking attached file, which the user
downloads and opens.
The file evades security measures since it does not perform suspicious activities; it
simply copies the contents of the /proc/version file and sends it back to the attacker.
This file holds the system version information. The attacker then looks for a CVE for that
version and now has a list of vulnerabilities they can exploit for the machine.
CVE (Common Vulnerabilities and Exposures) is a catalog of publicly known security vulnerabilities, maintained by MITRE and sponsored by the US Department of Homeland Security.
A wildcard is a single character that represents one or more characters. For example, an
asterisk (*) matches any number of characters; searching for dev* will yield developer,
development, devops, and other similar results beginning with the letters dev.
Wildcard injection is an older UNIX hacking trick that takes advantage of unsanitized input in the Linux shell. When a wildcard expands to a file name that begins with a hyphen (-), the executed command interprets that file name as a command-line option rather than as a file.
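A harmless illustration of the underlying behavior (the file name is contrived):
johnd@debian:~$ touch ./-la
johnd@debian:~$ ls *
johnd@debian:~$ ls -- *
In the second command, the shell expands * to include -la, which ls interprets as its -l and -a options; the -- marker in the third command ends option parsing, so -la is treated as a file name.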
Misconfiguration refers to insufficiently securing assets or incorrectly implementing
security measures. An example would be not changing default login credentials on a
router or IP webcam.
LinEnum is an enumeration tool that is used to gather information and check for
escalation points on a machine. Once a shell is accessed on a targeted machine, running
LinEnum will launch a search for kernel and distribution release details, and provide
system information such as current IP, routing details, and DNS information. It will
collect data about users, services, permissions, privileges, and more. LinEnum performs
numerous automated checks and presents you with a detailed report.
Pspy is a command-line tool that collects information about processes without the need
for root permissions. The information it gathers includes commands run by other users,
cron jobs, and more. The tool sets up inotify watchers that trigger procfs scans the
moment a process accesses the /usr folder.
Section 6: Extra — Crontab Security
Cron is a useful tool that provides the ability to automate and schedule tasks and
events. Crontab is a file in which cron jobs to be executed are listed. Manipulating the
file can potentially lead to a compromised system.
By examining the crontab file, an attacker can see which scripts are set to run and
change the scripts to perform malicious actions.
Cywar Lab Assignment: LNX-07-CYL1 Crontab Security
Practice using the Crontab application, configure recurring tasks, and exploit
Crontab vulnerabilities.
Navigate to the Cywar Practice Arena. Find the lab in Linux Security, LNX07
– Host Security, “Crontab Security”
Chapter 8: Network Security
Section 1: Iptables
iptables is the command-line user interface for the built-in Linux kernel subsystem
called Netfilter. Netfilter is what implements the firewall and routing capabilities of the
kernel, making it so that every Linux device can act as a router and/or firewall.
iptables can be likened to a rule-based system: when a packet matches a rule’s conditions, the corresponding action is executed. It either accepts, drops, or rejects network packets based on the rules in its tables.
• Accept – accepts the connection and allows communication.
• Drop – denies the connection and prevents the communication without sending any response to the source.
• Reject – denies the connection and prevents the communication but sends an error message to the source of the request.
iptables has three main tables, which are defined by their rules:
• Filter: The main and most widely used table that controls whether or not a packet
is allowed to continue to its destination.
• NAT: This table implements network address translation rules for packets that
must have their source or destination address modified to reach their destination.
• Mangle: Holds rules that modify IP headers.
Two more tables that are not as widely used are:
• Raw: This table is called before the filter or the NAT tables, and is used to specify
whether or not connection tracking should be applied.
• Security: A table that marks packets in the context of SELinux security.
iptables organizes rules in chains that determine when rules will be evaluated. The filter
table consists of three built-in chains that represent the direction of the traffic. Input
represents packets entering the system, forward is for routers that forward the traffic
to the next hop or recipient, and output is for traffic leaving the system.
New chains can be added, and existing ones can be deleted. Chains contain rules that are evaluated sequentially. For example, if the input chain of the filter table has a rule to accept traffic from 192.168.10.15, but a rule that drops traffic from 192.168.0.0/24 appears before it, the packet will not get through, since the accept rule is only reached after the rule that drops all traffic from that network.
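A sketch of that ordering problem using the addresses from the example above (requires superuser permissions):
johnd@debian:~$ sudo iptables -A INPUT -s 192.168.0.0/24 -j DROP
johnd@debian:~$ sudo iptables -A INPUT -s 192.168.10.15 -j ACCEPT
johnd@debian:~$ sudo iptables -L INPUT --line-numbers
Because the drop rule is evaluated first, traffic from 192.168.10.15 never reaches the accept rule. Inserting the accept rule at position 1 with iptables -I INPUT 1 -s 192.168.10.15 -j ACCEPT resolves the problem.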
iptables has multiple flags that can be used when defining rules:
• -A – Appends a rule
• -C – Checks rules before adding them
• -D – Deletes a rule by number
• -E – Renames a user-defined chain
• -F – Flushes the selected chain
• -h – Lists command structures
• -i – Sets a network interface (incoming)
• -j – Jumps to a specified target
• -o – Sets the outgoing interface
• -p – Protocol name
Section 2: Firewalld
Firewalld is an iptables wrapper and a zone- and service-based system that provides enhanced capabilities, such as runtime and permanent configuration sets and a graphical interface.
Firewalld is installed by default on:
• CentOS 7 and higher
• Fedora 18 and higher
• Red Hat Enterprise Linux 7 and higher
• OpenSUSE Leap 15 and higher
• SUSE Linux Enterprise 15 and higher
It is also possible to install Firewalld on other Linux distributions, such as Debian
systems (Ubuntu and others).
To install Firewalld, use the apt command: sudo apt install firewalld -y.
The graphical interfaces need to be installed separately using the apt command:
apt install firewall-applet -y. After installation, they can be run from the Applications
menu.
[Figure: Firewalld GUI]
Firewall zones are pre-defined sets of rules with varying degrees of trust based on
location or scenario. For example, the public zone is intended for an environment in
which other computers on the network are not to be trusted.
Each zone allows for traffic from its defined services and protocols only. Upon
installation, the default zone will be Public.
Zones can also be defined by interface type, whether internal or external, and have
services defined appropriately. For example, external interface services such as HTTPS
and DNS would be allowed, but DHCP would not.
Unless configured otherwise, interfaces are assigned to the default zone.
New zones can be created in one of two ways:
1. With the new zone command:
firewall-cmd --new-zone=zone-name --permanent
The new zone will appear after the service is reloaded.
2. With an existing zone’s .xml file as a template.
The second method involves copying the .xml file of an existing zone from the
/usr/lib/firewalld/zones/ and /etc/firewalld/zones/ directories and changing its
configuration to suit your needs.
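A sketch of the first method (the zone name is arbitrary):
johnd@debian:~$ sudo firewall-cmd --new-zone=testzone --permanent
johnd@debian:~$ sudo firewall-cmd --reload
johnd@debian:~$ sudo firewall-cmd --get-zones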
Firewalld employs runtime and permanent configuration sets, meaning you can change rules without interrupting connections by adding them to the runtime configuration set. The changes will not be retained upon reboot unless they are added to the permanent configuration set.
Firewalld has a feature called Panic Mode, which is intended for use in extreme cases. Panic mode severs all existing connections and drops all incoming and outgoing packets. The mode can be turned on and off using firewall-cmd --panic-on and firewall-cmd --panic-off. To check whether Panic mode is on, use firewall-cmd --query-panic.
All Firewalld commands begin with firewall-cmd.
The following are some useful options:
• --new-zone=<name> – Creates a new zone with the given name.
• --add-service=<service> – Adds a service to a zone.
• --list-services – Lists all services in the zone.
• --get-services – Lists all available services.
• --zone=<zone name> --list-all – Lists everything enabled in a zone, including ports and services.
• --list-all – Lists everything enabled in the default zone, including ports and services.
• --set-default-zone=<zone name> – Changes the default zone.
• --reload – Reloads Firewalld.
• --permanent – Makes the setting persistent.
• --add-port – Adds a specific port.
• --get-zones – Lists existing zones.
Note: Firewalld commands need to be run with superuser permissions.
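For example, a sketch of permanently allowing HTTPS in the public zone and confirming the change:
johnd@debian:~$ sudo firewall-cmd --zone=public --add-service=https --permanent
johnd@debian:~$ sudo firewall-cmd --reload
johnd@debian:~$ sudo firewall-cmd --zone=public --list-services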
Lab Assignment: LNX-08-L1 Configure an Allow List
Create firewall rules to block all SSH traffic. Create an allow list and add the
host IP.
Follow the instructions in lab document LNX-08-L1.
Section 3: Fail2Ban (F2B)
F2B is an Intrusion Prevention System (IPS) designed mainly to prevent brute-force
attacks on UNIX-based operating systems. F2B uses Python scripts to scan logs located
in directories such as /var/log/auth.log, /var/log/httpd/error_log and
/var/log/apache/access.log. Once it identifies suspicious behavior, such as multiple
incorrect password entries from the same source IP, it adds a corresponding rule to the
local firewall or other packet filtering system to ban traffic from that IP, either
temporarily or permanently. F2B can also send email notifications regarding events.
Fail2Ban is installed separately, using the apt command: apt install fail2ban -y.
F2B comes with filters for common services like Apache, SSH, and others. The filters are regular expressions (regex), processed in Python, that can be modified. The main configuration file for F2B is located at /etc/fail2ban/jail.conf. It is not recommended to make changes to this file, since it will be overwritten when the package is updated. Instead, copy it to jail.local and make changes there. It is important to keep the name jail.local for F2B to recognize the file. To copy the file, use: cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
jail.local has the following global configuration options:
• ignoreip – F2B ignores traffic from this IP, meaning the IP is allow listed and will not be blocked. A single IP or a network can be entered; multiple entries should be separated with spaces.
• bantime – How long a ban should last, in seconds.
• maxretry – Number of failed attempts allowed before the ban.
• findtime – The period of time during which failed attempts are counted.
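A minimal jail.local sketch (the values and the allow-listed network are illustrative):
[DEFAULT]
ignoreip = 127.0.0.1/8 192.168.1.0/24
bantime = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true
With this configuration, five failed attempts within ten minutes from the same IP result in a one-hour ban, and the sshd jail is explicitly enabled.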
Important notes:
1. Fail2Ban is not a replacement for a firewall. It is meant to work in conjunction
with an existing firewall or packet filtering system.
2. F2B cannot handle a distributed brute-force attack, meaning an attacker can use
a large number of machines to perform separate attempts from multiple IPs and
bypass F2B altogether.
Section 4: Log Monitoring
A log file is a file that holds records of events that occur in an operating system or
software. In Linux, most of the actions performed on the system, either by the user or
the system, are recorded in a log in the /var/log directory. The kernel has a log that
records every kernel-related action and change.
Logs record nearly everything that happens in Linux, including memory usage, disk
space changes, services, applications, tasks, authentications, commands, and more.
They are useful for administrators to monitor system activity and troubleshoot the
system when necessary.
Logs are divided into three categories:
• Information logs: Record information without requiring an event or trigger.
These logs include information on lists, authentications, services, and more.
• Warning logs: Include informational messages that require attention, which can
be, for example, warnings for an application that does not function properly, or a
firewall that unexpectedly switches to a backup server.
• Error logs: Record incidents that require the system administrator’s immediate attention. They often indicate that a service or application is not operable.
Because there are so many logs to keep track of, the need arose to make this task more
efficient. Software was developed to collect the data from log files and aggregate it to a
single interface for easier management.
Nagios is a solution that provides monitoring for any kind of network, server, web
service, or application, using a simple, easy-to-work-with graphical user interface that
can be customized. The Nagios configuration wizard makes it easy to set up the
software and collect data from different sources.
In Linux servers, logs are generated at a rapid pace. A web server, for instance, has a lot
of traffic and can generate more than 10GB of log data per day. Having so much data in
a single file can lead to high CPU and memory consumption. Logrotate allows the
sysadmin to configure how big of a log the system should keep, at what frequency the
logs should be rotated and backed up, and how long they should be retained.
Logrotate rotates log files by removing old logs from the system and creating new files. Rotation criteria can be the age of the file or the file size, and logrotate usually runs automatically through the cron utility. Logrotate can also compress log files and can be configured to send emails to users when logs are rotated. Logrotate is configured by editing the logrotate.conf file, located at /etc/logrotate.conf.
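A sketch of a logrotate rule (the log path and retention values are illustrative; per-service rules are commonly placed in /etc/logrotate.d/):
/var/log/apache2/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
This keeps four weekly, compressed rotations of the matching logs, skips missing files, and does not rotate empty ones.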
Section 5: Bash Scripting to Counter Apache Enumeration
Enumeration is the process of establishing an active connection to a system and
extracting information that can be exploited later. The information can include user
names, machine names and addresses, kernel or installed application versions, network
interfaces and resources, and even running services. Information gathered can later be
used to identify vulnerabilities to achieve privilege escalation or lateral movement in
the network.
Enumeration is a crucial step in penetration testing. It can directly affect the outcome of
a penetration test (pentest) since the more information gathered about a target, the
more potential attack vectors will be available for exploitation.
There are different types of information that can be enumerated by an attacker or a
pentester, and various tools that acquire specific types of information.
Examples of enumeration tools include:
• Nmap: Scans a network and looks for hosts, the OS version, and open ports.
• Nikto: A scanner for web servers that looks for server versions and potentially
dangerous files and HTTP methods
• DirBuster, Gobuster, and Dirsearch: Applications that enumerate directories and
file names on web application servers
• WPScan: WordPress vulnerability scanner
• Dnsenum: A script written in Perl to enumerate DNS information on a domain
All-in-one (AIO) or “smart” tools were developed to enumerate several types of
information in one convenient interface.
Wget is a tool created by the GNU project to retrieve files from web servers. Its name is an amalgamation of World Wide Web and get. The tool can download files from FTP, FTPS, HTTP, and HTTPS servers. If a location is not specified, Wget downloads to the directory the user is currently in.
Wget is a non-interactive tool, which means it does not require the user’s attention while it runs; files can download in the background without requiring a user to be logged in to the system. Wget can recursively download entire websites (excluding pages listed in the robots.txt file, unless otherwise defined), keeping the original structure of the site and creating a local copy of the website.
Wget has the capacity to pause and resume downloads, whether manually by the user
or automatically due to an unresponsive server, in which case Wget will automatically
resume the download once the server responds.
The Wget syntax is wget [option] [URL].
Examples of Wget options include (see the usage sketch after the list):
• -h – Lists all available options.
• -b – Downloads in the background.
• -O [file] – Saves the download to the specified file, overwriting it if it exists.
• -c – Resumes a partial download.
• -i [file name] – Reads URLs from a file.
• -m – Mirrors an entire website.
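A brief usage sketch (the URLs are placeholders):
johnd@debian:~$ wget -c https://example.com/files/archive.tar.gz
johnd@debian:~$ wget -m https://example.com/docs/
The first command resumes a partially downloaded archive; the second mirrors a site section into the current directory.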
Curl stands for client URL and is used to transfer data to or from a server using URL syntax. Curl supports a variety of protocols and doesn’t require user interaction. It can run from a CLI, as well as in a script.
The -o option must be used in the curl command, followed by a file name, to save the output to a file; otherwise, the requested webpage will be downloaded and displayed in the terminal window: curl -o facebook_login.txt https://www.facebook.com/
Curl enables multiple page requests in the same command, using braces for sets and square brackets for ranges. For example: https://yoursite.{abc,def,ghi}.com. Or, if you are downloading files from an FTP server, you can use the following syntax: ftp://ftp.yourserver.com/file[1-50].txt. Nesting is not supported, but several sets and ranges can be used in the same command:
http://yoursite.com/archive[2000-2019]/vol[1-15]/part{a,b,c}.html
When output is written to a file, for example with the -o option, curl displays a detailed progress meter for the download or upload. The information it displays includes data transferred, transfer speed, and estimated time.
Curl options vary depending on the protocol in use (see the sketch after the list):
• -a (FTP, SFTP uploads) – Appends to a file instead of overwriting it.
• --basic – Uses HTTP Basic authentication.
• -o [file name] (HTTP, HTTPS, FTP, SFTP) – Outputs to a file.
• -C [offset] – Resumes a download from a given offset.
• -D [file name] (HTTP, FTP) – Dumps headers to a file.
• -E <certificate[:password]> – Tells curl to use a specific certificate file.
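A brief sketch that saves both the response headers and the page body (the URL is a placeholder):
johnd@debian:~$ curl -D headers.txt -o page.html https://example.com/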
Curl vs. Wget
Wget is useful when downloading multiple files or mirroring an entire website.
Curl is better suited to downloading several files from the same address, since it supports URL globbing with sets and ranges. Curl supports more protocols across several platforms and offers more authentication methods.
Wget can resume interrupted downloads automatically, while curl cannot.
The tools are similar, but each can be suitable for a different purpose.
Section 6: Secure Apache Configuration
The following are some methods of hardening and securing an Apache server:
1. Use only necessary HTTP methods: Some HTTP methods, such as PUT, DELETE, TRACE, and CONNECT, are known to be vulnerable to exploitation and privilege escalation and are not recommended.
2. Run services as low-privileged users: Services can be exploited for privilege escalation, so it is a best practice to have them run with regular user permissions rather than special permissions.
3. Reduce or remove server headers: Server headers contain version information
about Apache servers. Attackers can look up known vulnerabilities for specific
versions and exploit them.
4. Disable banners: Banners also divulge important information for hackers to
exploit.
5. Set access controls: Such controls restrict access to your server.
6. Use only TLS 1.2 or 1.3 and disable older versions: Older versions of SSL and TLS have been broken and are not recommended. Some websites still use TLS 1.0, TLS 1.1, and SSL 2.0.
7. Disable directory listings: To prevent enumeration attacks (see the configuration sketch after this list).
8. Update services as needed: New vulnerabilities are discovered all the time, and it
is an essential practice to install the latest security patches.
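A sketch of how several of these recommendations translate into Apache directives (the Debian-style file path and service name are assumptions; the directive names are standard Apache):
# /etc/apache2/conf-available/security.conf
ServerTokens Prod
ServerSignature Off
TraceEnable Off
<Directory /var/www/html>
    Options -Indexes
</Directory>
After editing, reload the service, for example with sudo systemctl reload apache2.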
Lab Assignment: LNX-08-L2 IP Lockout
Create a Bash script that blocks IP addresses listed in iptables when they
exceed the limit of 404 error login attempts.
Follow the instructions in lab document LNX-08-L2.
Section 7: Banner Hiding for SSH & Apache
When an Apache server receives a request, by default it sends back potentially sensitive
information, including the server version, operating system, installed modules, and
more. Banners can also include such information.
This seemingly harmless information gives a hacker more potential attack vectors. Banners should be hidden, not only for Apache, but also for SSH and other services that are externally accessible.
The curl command can be used to view a banner. The following example demonstrates
how an SSH banner provides information regarding the OS.
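As a sketch (using netcat rather than curl; the host address and the banner line below are illustrative):
johnd@debian:~$ nc -w 3 192.168.1.10 22
SSH-2.0-OpenSSH_8.4p1 Debian-5+deb11u1
The Debian suffix in the banner reveals the distribution, which is exactly the kind of detail worth suppressing.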
As a best practice, banners should be disabled to prevent leakage of information that
an attacker may use against the system. When Apache encounters an error, it displays
output with the error code. In some cases, the code may include server-related
information that should be hidden. Custom error messages can be defined and
configured for the service.
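Custom error pages are defined with the ErrorDocument directive (a sketch; the page paths are placeholders):
ErrorDocument 404 /custom_404.html
ErrorDocument 500 /custom_500.html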
ModSecurity
ModSecurity is an open-source web application firewall (WAF) used for data monitoring
over HTTP traffic. It allows filtration of data received by a web server, and in response it
detects and blocks malicious content. Its installation and integration are fairly easy with
the Apache web server.
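On Debian-based systems the module is commonly packaged as libapache2-mod-security2 (a sketch, assuming the Debian-style service name):
johnd@debian:~$ sudo apt install libapache2-mod-security2 -y
johnd@debian:~$ sudo systemctl restart apache2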
Section 8: SSL Encryption
In the early days of the internet, website developers didn’t give much thought to
security, because not many people had access to the internet. Information was passed
between servers and hosts unencrypted and in plain text. Information in transit was
exposed to interception, theft, and modification.
An attack in which a hacker intercepts the information exchange between a host and a
server is called a man-in-the-middle (MITM) attack. The attacker places an unidentified
network device between the host and the server and examines every packet before
sending it to its destination. This type of attack can be mitigated by encrypting the
traffic, which can be done using SSL certificates.
SSL is the standard for encrypting information between a web server and a browser. It
is based on certificates generated by a trusted source (a certificate authority, CA) that
ensures the connection is secure.
Modern browsers have visual cues in the address bar in the form of a closed padlock or
the word secure written before the URL to inform users that the connection is secure.
Websites using SSL encryption include HTTPS instead of HTTP in the URL as another
indicator that the connection is secure.
The process of obtaining a signed certificate for a server begins with a Certificate
Signing Request (CSR), which generates a pair of keys: a public key and a private key.
The CSR data containing the public key is sent to the CA. The CA uses the CSR data to
create a data structure matching the private key, without knowing or compromising it.
The certificate is then installed on the server. When installed, it can create secure
connections with hosts accessing the server.
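A sketch of generating a key pair and CSR with OpenSSL (the file names are placeholders; the req command prompts for the certificate details to include in the CSR):
johnd@debian:~$ openssl genrsa -out server.key 2048
johnd@debian:~$ openssl req -new -key server.key -out server.csr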
SSLv1 was not publicly released. SSLv2, the first public version, was replaced by SSLv3, which, in turn, was replaced by Transport Layer Security (TLS). The versions in current use are TLSv1.2 and TLSv1.3.
The SSL/TLS handshake is a process in which a server and a browser exchange the
necessary information to establish an encrypted communication session.
A TLS handshake is not limited to browser-server communication. It occurs when any
network device wants to establish an encrypted communication connection.
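The handshake can be observed from the command line with OpenSSL’s s_client (the host name is a placeholder); its output shows the negotiated protocol version, the cipher, and the server’s certificate chain:
johnd@debian:~$ openssl s_client -connect example.com:443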
1. During the Client Hello phase, the client sends its SSL/TLS version, encryption
options, and data compression method to the server.
2. The server replies with Server Hello with the chosen encryption and compression
methods, a session ID, the server’s certificate, and the public key.
3. The client verifies the server’s certificate with the CA and sends a secret key
encrypted with the server’s public key.
4. The server decrypts the secret key with its private key and a secure connection is
established.
Cywar Lab Assignment: LNX-08-CYL1 Utilizing ModSecurity
Install ModSecurity and secure the Apache service.
Navigate to the Cywar Practice Arena. Find the lab in Linux Security, LNX08
– Network Security, “Utilizing ModSecurity”
Section 9: SFTP
SFTP stands for SSH File Transfer Protocol. It is an extension of the SSH protocol, which
was included in SSH2 and uses the same port, 22. SFTP was designed to work with the
encrypted connection SSH establishes to securely transfer files between machines. SFTP
relies completely on SSH for encryption and authentication. SFTP also provides
enhanced file functionality, such as permission and attribute manipulation, file locking,
and more.
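A sketch of an interactive SFTP session (the host, user, and file names are placeholders):
johnd@debian:~$ sftp johnd@192.168.1.20
sftp> put report.pdf
sftp> get notes.txt
sftp> bye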
The following are some key differences between FTP and SFTP:
• FTP uses a client-server architecture, meaning that a server holds files the client
downloads, as opposed to SFTP, which transfers files between any two nodes.
• FTP uses two separate channels, one to transfer data and one to control the
connection; both are unencrypted. SFTP uses SSH, which establishes an
encrypted connection, then transmits all the data on a single secure channel.
• FTP works on port 21, while SFTP uses port 22.
• FTP directly transfers files, whereas SFTP uses a tunneling transfer method.
FTPS is another secure file transfer protocol that uses FTP with SSL/TLS for security, and therefore requires a certificate. As opposed to SFTP, FTPS control traffic is human-readable and can be logged in plain text. SFTP traffic is transferred in binary form over the encrypted SSH channel and is not human-readable.
Lab Assignment: LNX-08-L3 SSL Encryption
Generate an SSL certificate for the Apache web server and configure Apache
to use SSL.
Follow the instructions in lab document LNX-08-L3.