
Embedded Linux Diploma

Embedded Linux
Introduction
The great UNICS
● In the late 1960s, Bell Labs was involved in a project with MIT and General Electric to develop a time-sharing system called the Multiplexed Information and Computing Service (Multics).
● Multics was not a great success, so Bell Labs decided to pull out.
● In 1969, a team of programmers at Bell Labs who had worked on Multics decided to build a better OS; they called it the Uniplexed Information and Computing Service (UNICS).
● The team was led by the great programmers Ken Thompson and Dennis Ritchie.
● In the same year, on December 28, 1969, Linus Torvalds was born.
Unix Revolution
● In 1971, the first edition of Unix was released. It included over 60 commands.
● In 1973, Ritchie rewrote B and called the new language C.
● For some years, the Unix team released improvements to the new OS and installed it at many sites inside Bell Labs.
● Unix V6, released in 1975, became very popular. It was free and was distributed with its source code to many universities. As a result, many organizations modified the Unix source code to build their own distributions. The most popular distributions were:
○ Berkeley Software Distribution (BSD)
○ Sun Microsystems Distribution (Solaris)
● In 1983, Bell Labs found that there were many uncontrolled and untracked distributions of Unix. They decided to make a closed, commercial version of Unix and called it System V.
GNU Project
● In 1983, Richard Stallman started development of 100% free software. Not 90% and not 99.5%: totally free. "Free" does not mean free of cost, but free as in freedom. He also established the Free Software Foundation.
It means that this system would give the user:
❏ The freedom to run the program as you wish
❏ The freedom to copy the program and give it away to your friends and co-workers
❏ The freedom to change the program as you wish
❏ The freedom to distribute an improved version and thus help build the community
Richard Stallman
GNU is a recursive acronym meaning "GNU's Not Unix".
GNU Animal
GNU Project
● A Unix-like operating system includes a kernel, compilers, editors, text formatters, mail software, graphical interfaces, libraries, games, and many other things. Writing a whole operating system is therefore a very large job. Stallman started in January 1984.
● One of the most important components is the GNU C Compiler, GCC.
● In 1988, the IEEE specified a standard for maintaining compatibility between operating systems. The Portable Operating System Interface (POSIX) defines the application programming interface (API), along with command-line shells and utility interfaces, for software compatibility. Unix was selected as the basis.
● By 1990 the project had found or written all the major components except one: the kernel.
Finally, The Linux
● In 1991, Linus Torvalds built a free, open-source kernel and named it Linux.
● Combining Linux with the almost-complete GNU system resulted in a complete operating system: the GNU/Linux system.
● Estimates are that tens of millions of people now use GNU/Linux systems.
● Nowadays, there are many distributions of GNU/Linux systems, such as:
Finally, The Linux
Why Linux ??
•Free, safe, robust.
•Source code of all related software is available.
•Ported to a large variety of CPU architectures (x86, ARM, PowerPC, ...).
•Large device-driver coverage.
•Hosts a huge number of languages & libraries.
•Software is highly modularized, making it easy to build something new.
•Due to its low cost and ease of customization, Linux has been shipped in many embedded devices (smartphones, network devices, PDAs, IVI, GPS devices, ...).
•Large community.
Linux Components
•Bootloader (GRUB, U-Boot)
•Kernel (www.kernel.org)
•Filesystem (FHS)
–init process
–shell (bash)
–Services and process manager
–Scripts & environment variables
–Linux commands
–Applications (user apps, GUI, …)
Linux Components
Back to Basics
[Diagram: layered architecture. The user interacts with software — user applications and system applications (GUI, command window) — running on the operating system. The kernel provides the scheduler, dispatcher, file system, resource management, and core hardware drivers, which sit above the hardware: CPU, memory, and I/O.]
Linux Kernel map
UBUNTU The Name
An African word meaning "I am because we are" or "one caring for all"
LTS
LTS is an abbreviation for Long Term Support.
A new Ubuntu Desktop and Ubuntu Server release comes out every six months, so you will always have the latest and greatest applications that the open-source world has to offer. Ubuntu is designed with security in mind: you get free security updates for at least 9 months on the desktop and server.
A new LTS version is released every two years. In previous releases, a Long Term Support (LTS) version had three years of support on Ubuntu Desktop and five years on Ubuntu Server. Starting with Ubuntu 12.04 LTS, both versions receive five years of support. There is no extra fee for the LTS version.
Upgrades to new versions of Ubuntu are, and always will be, free of charge.
Installation
We have four ways to run Ubuntu:
1. Install it as the primary or secondary OS
2. Run Ubuntu from a USB drive without installation
3. Use a virtual machine such as VMware or VirtualBox
4. Dual boot
Please follow the installation steps with your instructor ...
Ubuntu boot Sequence
Exploring Ubuntu GUI
•GNOME & KDE (X Window System)
•Windows & workspaces
•Ubuntu Help
•Common applications
–gedit >> text editor
–nautilus >> file explorer
•Terminal
Virtual machine and hypervisor
A hypervisor is computer software, firmware, or hardware that creates and runs virtual machines.
1. It is the key to enabling virtualization on computers or even ECUs.
2. It is software installed on top of the computer hardware.
3. It is responsible for creating the virtualization layer.
4. It manages the sharing of computer and mainframe resources between virtual machines.
Virtual machine and hypervisor
Linux Root File System
In Windows you might have drives like C:, D:, etc., across which your files are split. In Linux, however, there is a single Root Directory under which all your files live; any file — documents, videos, or songs — can be reached from the root. In other words, the root directory is the start of the absolute path of every file on the system.
Linux root filesystem
File types in Linux
In Linux, everything is a file. Directories are files, regular files are files, and devices like the printer, mouse, and keyboard are files. We can classify files into three types:
● General Files
General files are also called ordinary files. They can contain an image, a video, a program, or simply text, in either ASCII or binary format. These are the files most commonly used by Linux users.
● Directory Files
These files are a warehouse for other file types. You can have a directory file within a directory (a sub-directory). You can think of them as the 'Folders' found in the Windows operating system.
● Device Files
In MS Windows, devices like printers, CD-ROMs, and hard drives are represented as drive letters like G: and H:. In Linux, they are represented as files. For example, if the first SATA hard drive had three primary partitions, they would be named and numbered /dev/sda1, /dev/sda2, and /dev/sda3.
File Naming Rules
In Windows, you cannot have two files with the same name in the same folder.
In Linux, you can have two files with the same name in the same directory, provided they use different cases. The naming rules for files in Linux are:
1. Case sensitive
2. No obvious length limit
3. Can contain any character (including whitespace) except /
4. File-name extensions are not needed and not interpreted; they are just for user convenience
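Rule 1 is easy to check in a terminal; a minimal sketch (the file names here are made up for illustration):

```shell
cd "$(mktemp -d)"       # work in a fresh temporary directory
touch readme README     # two distinct files: the names differ only in case
ls -1                   # both files appear in the listing
```

On Windows the second `touch` would refer to the same file; on Linux you get two files.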
Users in Linux
In Linux, there are two main types of users:
● Regular User
A regular user account is created for you when you install Ubuntu on your system. All your files and folders are stored in /home/, which is your home directory. As a regular user, you do not have access to the directories of other users.
● Root User
Besides your regular account, another user account called root is created at installation time. The root account is a superuser that can access restricted files and install software, and it has administrative privileges. Whenever you want to install software, change system files, or perform any administrative task on Linux, you need to act as the root user. For general tasks like playing music and browsing the internet, you can use your regular account.
Home Directory
Consider a regular user account "Tom". He can store his personal files and directories in the directory "/home/tom". He can't save files outside his user directory and does not have access to the directories of other users. For instance, he cannot access the directory "/home/jerry" of another user account "Jerry".
When you log in to the Linux operating system, your user directory (/home/tom in the example above) is the default working directory.
Embedded Linux
Commands
Part 1
Command Line interface
In a Linux system, you can access all files either through the graphical user interface (GUI) or through a set of commands in the command-line interface (CLI). Although the GUI is more user friendly, the CLI gives you the opportunity to do many things much more easily.
Imagine that you want to create 1000 new files: using the GUI this would take a lot of effort and time, but using the CLI it would cost you just a couple of lines. The CLI has its specific uses and is widely used in scripting.
The CLI program is called the terminal or shell. There are three ways to open it:
1. Anything can be opened from the Search Bar
2. Right-click and choose Open Terminal
3. Press the shortcut Alt+Ctrl+T
Command Line Interface
Working with directories
pwd
Print Working Directory. This command prints the absolute path of the working directory — the current directory of the terminal window.
cd + folder
Change Directory. This command changes the current directory to the specified folder. It takes either an absolute or a relative path.
Note: if there is a space in the folder path, put the path between single or double quotes.
cd (or cd ~)
Go to the home directory
cd ..
Go up one level
General Form for Commands
In a Linux system, each command has a set of options that may be used to modify the command's behavior. An option may be a letter or a word.
Syntax option 1: command without options
command [input] [output]           ex: cp src des
Syntax option 2: command with a letter option
command -letter [input] [output]   ex: cp -r src des
Syntax option 3: command with a word option
command --word [input] [output]    ex: cp --recursive src des
Simple commands in linux
•Command syntax
command [option(s)] [parameter(s)]
command -(one_character_option)
command --(full_word_option)
•Sample commands
cd > change directory
ls > list files in the current directory
pwd > print the current working directory
mv > move & rename
Commands are executed in the shell; some commands are built into the shell itself.
Linux man pages
Linux man
• Help commands
man command
info command
command --help > retrieves the built-in help of the command
help built-in-command
•Man pages
man section command (man 5 passwd)
•Display manual page descriptions
man -f command (short description)
whatis command (short description)
•Search the manual page names and descriptions
man -k pattern (man -k password, man -k password | grep passwd)
apropos pattern
Linux man
How to search in a manual page?
1. man ls
2. Press /
3. Type the string you want to search for, for example "all"
4. Press n for the next result
5. Press N for the previous result
Linux man
Manual pages examples
man -f passwd, to find the command's section
man -w passwd, to find the location of the manual page
man -k password, to find the passwd command
man 1 passwd
man -i 1 PASSWD (case-insensitive page lookup)
man 3 printf
man 2 signal
man 8 fdisk
man 8 reboot
man cd
cd --help
Ready Made Packages
A packaging system is a way to provide programs and applications for installation, so you don't have to build each program from source code. APT (Advanced Package Tool) is the command-line tool for interacting with the packaging system.
Syntax:
sudo apt_command [package_name]
Note:
- apt works on a database of available packages. If the database is not updated, the system won't know whether newer packages are available. This is why updating the repository should be the first thing to do on any Linux system after a fresh install.
- Most apt commands require superuser privileges, so you'll need to use sudo.
Update APT DB
Updating the repository gives you information about the packages available for your system. There are three possible statuses:
Get
A new version is available. apt downloads the information about the version (not the package itself); the 'Get' line shows the download size in kB.
Hit
There is no change in the package version since the previous check.
Ign
The package is being ignored. Either the package is so recent that apt doesn't bother to check it, or there was an error in retrieving the file, but the error was trivial and is being ignored. Don't worry, this is not an error.
Ready Made Packages
There are two commands with the same functionality, apt and apt-get. apt-get is the older and more stable one, but it is less user friendly.

apt command       apt-get equivalent      Function
apt install       apt-get install         Installs a package
apt remove        apt-get remove          Removes a package
apt update        apt-get update          Refreshes the repository index
apt upgrade       apt-get upgrade         Upgrades all upgradable packages
apt full-upgrade  apt-get dist-upgrade    Upgrades packages with auto-handling of dependencies
apt search        apt-cache search        Searches for a program
apt show          apt-cache show          Shows package details
Ready Made Packages
•Install packages from the main repositories (automatically)
apt-get install app-name >> install a new app
apt-get update >> update the list of packages
apt-get remove app-name >> remove an app
•Install packages from .deb files (manually)
dpkg -i pkg-name.deb
•Install from source code
./configure [options] (not always)
make
make install
Install Essential Build Package
The default Ubuntu repositories contain a meta-package named build-essential that contains the GCC
compiler and a lot of libraries and other utilities required for compiling software.
ls -lah /usr/share/doc/build-essential
cat /usr/share/doc/build-essential/list
Write command in Python
We can run any shell command from Python by importing the module named os.
Two functions in this module are important:
1- system
This function executes a given command.
2- popen
This function executes a command and returns its output, if any. The return value is a file-like object; you can use its read() method to get the output as a string.
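A quick way to try both functions from the terminal; a minimal sketch, assuming python3 is on the PATH (the echoed strings are just examples):

```shell
# os.system runs a command and returns its exit status; the command's
# output goes straight to the terminal
python3 -c 'import os; os.system("echo hello from system")'
# os.popen returns a file-like object; read() captures the output as a string
python3 -c 'import os; print(os.popen("echo hello from popen").read().strip())'
```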
Write C code in linux
1. Create your file: touch example.c
2. Open it to add the code: gedit example.c
3. Compile it: gcc example.c -o ex1
4. Run it: ./ex1
5. To view info about the executable: file ex1
System info commands
•whoami
–print the effective current user
•hostname
–show or set the system's host name
•date
–print or set the system date and time
•uptime
–tell how long the system has been running
•uname -a
–print system information
•clear
–clear the terminal
•history
–view the bash history
System info commands
•lsusb
–list USB devices
•lspci
–list all PCI devices
PCI, or Peripheral Component Interconnect is an interface to add additional hardware
components to a computer system.
•lscpu
–display information about the CPU architecture
•lsmod
–list loaded modules
System info commands
-env or printenv: print the system environment variables.
-export is a built-in command of the Bash shell. It is used to mark variables and functions to be passed to child processes. Basically, the variable will be included in child-process environments without affecting other environments.
-export alone shows all the exported variables.
-export myvar=1 exports your variable into the current shell and its children.
-printenv myvar or echo $myvar shows the value of your exported variable; also try printenv myvar in another shell.
-myvar2=1 sets your variable in the current shell only.
-Enter another bash by typing bash, try to print myvar2, and type exit to leave it.
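The difference between an exported and a shell-local variable in one sketch (the variable names are made up):

```shell
myvar2=1            # set in the current shell only — not inherited
export myvar=1      # marked for export — inherited by child processes
# a child bash sees only the exported variable
bash -c 'echo "myvar=[$myvar] myvar2=[$myvar2]"'   # prints: myvar=[1] myvar2=[]
```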
SuperUser Mode
The superuser mode is used to run commands that need higher privileges.
1. To execute a single command as root, use sudo:
sudo command
2. To switch to superuser mode, use one of the following:
sudo su
sudo -i
su root
3. To exit superuser mode and get back to normal mode, use:
exit
SuperUser Mode
•Switch to another user from the terminal
su user-name
•Switch to the root user
su >> without changing the full environment
su - >> switch to the full root environment
•Run a command with root privileges
sudo command
How to use vi
•Open file
–vi filename
•Enter edit (insert) mode (i)
•Exit edit (insert) mode (Esc)
•In command mode you can
–Exit without saving changes (:q!)
–Save changes and exit (:wq)
–Save changes and stay (:w)
–Delete a full line, go to line then press (dd)
Linux Basic Commands
Command   Important Options   Description
echo                          print on the terminal window
date                          get the date from the system
cal                           get the calendar from the system
clear                         clear the command prompt
uname                         get the OS name
          -n                  name of the machine
          -r                  version of the kernel
          -a                  all information
whoami                        get the active user name
gedit                         open a file in the gedit editor
Linux Basic Commands
Command   Important Options   Description
ls                            list the contents of a given path
                              (note: if no path is given, it lists the contents of the current working path)
          -l                  long-format listing, showing details like owner, permissions, and time stamp
          -S                  list sorted by size
          -a                  list including hidden files; note that hidden file names start with .
          *.html              list only files of a particular format (here, .html)
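The options above in action; a minimal sketch with made-up file names:

```shell
cd "$(mktemp -d)"
touch visible.txt .hidden index.html
ls            # the hidden file is not shown
ls -a         # also shows .hidden (plus . and ..)
ls -l         # long format: permissions, owner, size, time stamp
ls *.html     # only files matching the pattern: index.html
```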
“concatenate” commands
❖ cat > file.txt:
take input from the user and save it to file.txt;
end the input with the shortcut "Ctrl + D"
❖ cat file.txt:
show the contents of file.txt
❖ cat file1.txt >> file2.txt:
append the contents of file1.txt to file2.txt;
the change happens in file2.txt only,
file1.txt stays as it is
Some flags with ‘cat’
Instruction   Description
cat -b        add line numbers to non-blank lines
cat -n        add line numbers to all lines
cat -s        squeeze successive blank lines into one
cat -E        show $ at the end of each line
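The four flags compared on one small file (the file contents are made up; note the two blank lines in the middle):

```shell
cd "$(mktemp -d)"
printf 'one\n\n\ntwo\n' > file.txt
cat -n file.txt    # numbers every line (4 lines)
cat -b file.txt    # numbers only the non-blank lines
cat -s file.txt    # squeezes the blank run into a single blank line
cat -E file.txt    # marks each line end with $
```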
Working with Files
“searching” commands
❖ grep string file.txt:
□ search for a particular string in a text file; returns the lines matching "string"
□ similar to "Ctrl+F"
Grep “Search tool”
•Searches a text file line by line
grep
•It prints all the lines matching a pattern
grep search-pattern file-name
•To print all lines without the matching pattern
grep -v pattern file-name
•It can be used with piping
cat file-name | grep pattern
Some flags with ‘grep’
Instruction               Description
grep -i string file.txt   case-insensitive match
grep -n string file.txt   matching lines together with their line numbers
grep -v string file.txt   the lines that do not match the search string
grep -c string file.txt   the number of lines matching the search string
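The flags above on a tiny sample file (the fruit names are made up):

```shell
cd "$(mktemp -d)"
printf 'Apple\nbanana\napple pie\n' > fruit.txt
grep -i apple fruit.txt    # case-insensitive: matches Apple and apple pie
grep -n apple fruit.txt    # matching line with its line number
grep -c apple fruit.txt    # counts matching lines (here: 1)
grep -v apple fruit.txt    # prints the lines that do NOT match
```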
Cut command
To extract a given column from a text file that is organized into fields delimited by a predefined character.
Cut (projection)
•Remove sections from each line of a file and print only the required field
cut -d "delimiter" -f "field-num" file-name
cut -d: -f1 /etc/passwd
grep & cut can be combined to search text using selection & projection (the same concept as in databases)
–Example: print the shell for a user
grep root /etc/passwd | cut -d: -f7
•wc file-name
–Displays the number of lines, words, and characters in a file.
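The selection/projection pair, run against a small passwd-style sample so the result is predictable (the sample line mirrors a typical /etc/passwd entry):

```shell
cd "$(mktemp -d)"
printf 'root:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\n' > passwd.sample
cut -d: -f1 passwd.sample                # projection: user names only
grep root passwd.sample | cut -d: -f7    # selection + projection: root's shell
wc passwd.sample                         # lines, words, characters
```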
Working with Files
“copy” commands
❖ cp file.txt + destination path:
copy file.txt to the given destination.
The same can be done with folders and directories by adding the -R flag.
Note that absolute paths are preceded by /, starting from the root directory.
Working with Files
“copy” commands
❖ Getting a file from its absolute path into the current working directory:
□ cp + absolute source directory/filename .
(the dot indicates the current directory)
❖ Copying a file from anywhere to anywhere:
□ cp + absolute source directory/filename + absolute destination directory
❖ cp can copy any number of files, not only one.
Some flags with ‘cp’
Instruction   Description
cp -i         interactive mode: the shell asks before overwriting files (y for yes, n for no)
cp -n         does not overwrite an existing file
cp -R         recursive copy, for copying folders/directories
Working with Files
“move” commands
❖ mv file.txt + destination path:
move/cut file.txt to the given destination.
Also used for renaming files.
❖ The same options as copy can be used with move.
Working with Files
□ What if I want to copy or move all text files?
□ Use the * operator to mean "all":
❖ cp *.txt destination path:
copy all files whose names end with .txt to the destination path.
❖ The same can be done with "mv".
Working with Files
“touch” command
❖ touch file1 file2 file3:
create any number of files in the current directory
Working with Directories
“make directory” commands
❖ mkdir + FolderName:
create a new folder with the specified name in the current directory
Working with Directories
“make directory” commands
□ What if I want to create nested directories (a folder inside a folder inside a folder…)?
❖ mkdir -p folder1/folder2/folder3 … :
create parent directories and subdirectories in one step
Working with Directories
“make directory” commands
□ What if I want to create multiple directories inside the same parent directory?
❖ mkdir -p parentDirectory/{Folder1,Folder2,F3,..}
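Both mkdir forms in one sketch (the folder names are made up; the {…} brace expansion is a bash feature):

```shell
cd "$(mktemp -d)"
mkdir -p parent/{Folder1,Folder2,F3}   # three siblings under one parent
ls parent                              # Folder1 Folder2 F3
mkdir -p a/b/c                         # nested parents in one step
ls -R a
```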
Working with files and Directories
“Remove and Remove directory” commands
❖ rmdir + FolderName:
□ remove the specified folder from the current directory
□ works only if the directory is empty
Working with files and Directories
“Remove and Remove directory” commands
❖ rm -r + FolderName:
□ remove the specified folder from the current directory
□ works even if the folder isn't empty
❖ rmdir + FolderName:
□ remove the specified folder from the current directory only if it is empty
Text file utilities
•Viewing file contents
cat file-name
more file-name or less file-name
•Scrolling keys for the more command
•Space bar: move forward one screen
•b: move back one screen
•/string: search forward for a pattern
•n: find the next occurrence
•q: quit and return to the shell prompt
head file-name
tail file-name
Links
Soft link:
A soft (symbolic) link is more of a shortcut to the original file. If you delete the original, the shortcut breaks; if you only delete the shortcut, nothing happens to the original.
Hard link:
Creates another file name that points to the same underlying inode.
An inode is a data structure that stores various information about a file in Linux, such as the access mode (read, write, execute permissions), ownership, file type, file size, group, number of links, etc. Each inode is identified by an integer number. An inode is assigned to a file when it is created.
Inodes in Linux
To create a soft link: ln -s file_name softln
To create a hard link: ln file_name hardln
To see details of a file, including its inode: stat file_name
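The difference between the two link types, shown by deleting the original (the file names are made up; `stat -c` is the GNU coreutils form):

```shell
cd "$(mktemp -d)"
echo data > original.txt
ln -s original.txt softln              # symbolic link: a shortcut by name
ln original.txt hardln                 # hard link: another name for the same inode
stat -c '%h %i' original.txt hardln    # same inode number, link count 2
rm original.txt
cat hardln                             # still readable: the inode survives
cat softln 2>/dev/null || echo "softln is dangling"
```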
Permissions
•Each file has an owner and is assigned to a group.
•Linux allows users to set permissions on files and directories to protect them.
•Permissions are assigned to
–the file owner
–members of the group the file is assigned to
–all other users
•Permissions can only be changed by the owner or root.
Permissions
To display the permissions on files
ls -l
|rwx|rwx|rwx
type |usr|grp|other
Permissions
•To change permissions on a file
chmod
•Symbolic mode
chmod u+x file
chmod g+w file
chmod o-r file
•Numeric (octal) mode
chmod 555 file
chmod 755 file
chmod 214 file
0 = ---   1 = --x
2 = -w-   3 = -wx
4 = r--   5 = r-x
6 = rw-   7 = rwx
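Both modes in one sketch (the file name is made up; `stat -c %a` prints the octal permissions):

```shell
cd "$(mktemp -d)"
touch script.sh
chmod 755 script.sh      # numeric mode: rwxr-xr-x
ls -l script.sh
chmod o-r script.sh      # symbolic: remove read from others -> 751
chmod g+w script.sh      # symbolic: add write for the group  -> 771
stat -c '%a' script.sh   # prints: 771
```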
Ownership
•To change the ownership of a file
chown new-owner file-name
•Only root can change the ownership of files.
•To change the group of a file
chgrp new-group file-name
•To change owner & group together
chown new-owner:new-group file-name
Create and delete users
-To add a new user: sudo adduser ali
-To switch to another user: su ali
-To add a user to the sudo group: sudo usermod -aG sudo ali
-To view a user's groups: groups ali
-To delete a user: sudo deluser ali
-To change a user's password: passwd ali
Command Redirection
Redirection is used to send the output of a command into a text file instead of printing it on the terminal window.
The '>' symbol is used for output (STDOUT) redirection. Try: ls -lah > newfile
-Each time you use >, the file is overwritten.
If you do not want a file to be overwritten and want to add more content to an existing file, use the '>>' operator. Try: ls >> newfile
-To view the file: cat newfile
Redirecting the output channel to a text file is useful for logging, among other things.
•There are two main output channels
–Standard output channel
•command > file, overwrite
•command >> file, append
–Standard error channel
•command 2> file, e.g. rm -r / 2> file
–To redirect both channels
•command &> file
•The "black hole" is used to redirect output to nowhere
–/dev/null
Command redirection
•The "black hole" is used to redirect output to nowhere; "/dev/null" is a virtual device file.
Since there are two types of output, standard output and standard error, the first use case is to filter out one type or the other.
grep -r power /sys/
Since "Permission denied" errors are part of stderr, you can redirect them to "/dev/null":
grep -r power /sys/ 2>/dev/null
-If you want to redirect both stderr and stdout to null, you can do
grep -r power /sys/ &>/dev/null
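The channel split in a self-contained sketch (the file names are made up; the `|| true` just keeps the script going after the deliberately failing `ls`):

```shell
cd "$(mktemp -d)"
ls /nonexistent > out.txt 2> err.txt || true   # stdout and stderr go to separate files
echo first  > log.txt                          # '>' overwrites
echo second >> log.txt                         # '>>' appends
cat log.txt                                    # first, then second
ls /nonexistent 2> /dev/null || true           # the error message is discarded
```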
Wildcard characters
Asterisk (*): represents 0 or more characters
ls f*
rm *.c
rm *.*htm*
rm *.*
•Question mark (?): represents any single character
ls file?
rm f?
•Square brackets ([ ]): represent a range of characters for a single character position
ls [a-f]*
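The three wildcard forms on a made-up set of files:

```shell
cd "$(mktemp -d)"
touch file1 file2 fileAB alpha.c beta.c notes.txt
ls file?      # ? matches exactly one character: file1, file2
ls [a-f]*     # names starting with a letter from a to f
rm *.c        # * matches any run of characters: removes alpha.c and beta.c
ls            # the .c files are gone
```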
Sequential commands in Linux
We use a semicolon ; to execute successive commands one after another
ls ; pwd ; cd /
We can use && to run a command only if the previous one succeeded; for example, A && B runs B only if A executed correctly
rm / && pwd
We can use || to run a command only if the previous one failed
cat xfile || touch xfile
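All three operators in one sketch (the names demo and missing.txt are made up):

```shell
cd "$(mktemp -d)"
mkdir demo && cd demo                            # cd runs only because mkdir succeeded
cat missing.txt 2>/dev/null || echo "fallback"   # || runs because cat failed
pwd ; ls ; echo done                             # ';' runs each command regardless
```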
Searching for files
•To locate a command and all its other files (help, ...)
whereis command > whereis ls
•To locate only the binary file
whereis -b command
•To find files by name in the whole system
locate file-name
–A very fast tool, as it searches a database; to update the database run
sudo updatedb
•To search for files in a directory hierarchy
find path-to-search -name search-word
Searching for files
Limit search results to a specific number
locate "*.html" -n 20
Display the number of matching entries
locate -c [eclipse]*
Refresh the mlocate database
sudo updatedb
Review your locate database
locate -S
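Unlike locate, find needs no database and works immediately on fresh files; a minimal sketch with a made-up directory tree:

```shell
cd "$(mktemp -d)"
mkdir -p project/src
touch project/src/main.c project/README
find . -name 'main.c'          # search by exact name
find . -name '*.c' -type f     # search by pattern, regular files only
find project -type d           # list directories in the hierarchy
```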
Process management
•The Linux scheduler divides CPU time into time slices; each process gets a turn to run, higher-priority processes first.
•Users can affect the priority by setting the niceness value of a process.
•Niceness values range from -20 to +19 and indicate how much of a bonus or penalty to apply to the priority of the process.
•Most processes run with a niceness value of 0 (no change).
•Smaller numbers mean higher priority and vice versa.
•Users can adjust this value down as far as +19 but cannot increase it. Root can increase the priority of a process as high as -20.
Process management
•System Monitor
–GUI tool to monitor all running processes
•top: terminal-based tool to display all running Linux tasks
•ps: report a snapshot of the processes in the current session
•ps -a: report a snapshot of the current processes, excluding the session leader
•ps -axu: processes from all running sessions (terminal and non-terminal, GUI and non-GUI)
•pidof prog-name: find the process ID of a running program
•pstree: display a tree of processes > pstree -g
pidof init
ls -lah /sbin/init
Process management
•nice -n priority command
–Assign a niceness value to a new process
–nice -n 19 ./script.sh
•renice -n priority -p pid
–Assign a niceness value to a running process
–renice -n 10 -p 1001
•kill: send a signal to a process
kill pid >> terminate a process
kill -s signal-number pid
kill -l > list signals
•killall: kill processes by name
killall process-name
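renice and kill on a throwaway background process; a minimal sketch (sleep 30 is just a placeholder workload, and raising the niceness needs no privileges):

```shell
sleep 30 &                     # start a background process to experiment on
pid=$!
renice -n 10 -p "$pid"         # raise the niceness (lower the priority)
ps -o pid,ni,comm -p "$pid"    # the NI column now shows 10
kill "$pid"                    # send SIGTERM to clean up
```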
Archives in Linux
•tar: the GNU version of the archiving utility
•To make an archive without compression
tar cf archive.tar file(s)
option c > create
option f > define the output name
option a > auto-compress based on the file extension
•To make a compressed archive
tar cjf archive.tar.bz2 file(s)
tar czf archive.tar.gz file(s)
•To extract an archive
tar xvf myfile.tar    option x > extract, v > verbose log
tar xvf myfile.tar.bz2
tar xvzf myfile.tar.gz
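A full create-then-extract round trip (the directory and file names are made up):

```shell
cd "$(mktemp -d)"
mkdir src && echo hello > src/a.txt
tar cf plain.tar src                 # c: create, f: output name (no compression)
tar czf archive.tar.gz src           # z: gzip compression
mkdir extract
tar xzf archive.tar.gz -C extract    # x: extract, -C: target directory
cat extract/src/a.txt                # prints: hello
```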
Embedded Linux
Linux OS
Content
-Memory management
-Address binding
-Fragmentation
-POSIX interfaces
-Segmentation
-TLBs and caches
-Signals, threads, and alarms
Why Memory Management ?
→ Every process needs its own memory
• Memory division
→ Independence / non-interference between processes
• Protection
→ Communication between processes
• Sharing (pipes / shared memory)
→ Process memory with attributes (like read-only for code)
• Logical organization
→ Executing programs bigger than the available memory
• Physical organization (overlaying & reusing)
Linking and loading
Linker:
A special program that combines the object files generated by the compiler/assembler with other pieces of code to produce an executable file. In the object files, the linker searches for and appends all libraries needed for execution.
Loader:
A special program that takes the object code produced by the linker as input, loads it into main memory, and prepares the code for execution by the computer. The loader allocates memory space to the program.
Linking and loading
Static Vs Dynamic linking
Shared libraries
-Files with the ".so" extension are dynamically linked shared object libraries.
-ldd ex1 shows the .so libraries an executable uses.
-The configuration files holding the locations of shared libraries in Linux are the .conf files in the directory /etc/ld.so.conf.d/
-To view any of the .conf files:
cat /etc/ld.so.conf.d/i386-linux-gnu.conf
-To view the shared libraries:
tree /usr/lib/ | grep .so
By default, ldconfig reads the content of /etc/ld.so.conf, creates the appropriate symbolic links in the dynamic link directories, and then writes a cache to /etc/ld.so.cache, which is then easily used by other programs.
This is done by: sudo ldconfig -v
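Inspecting dynamic linking on any handy binary; a minimal sketch, assuming a typical glibc-based Linux layout (/bin/ls is just a convenient example):

```shell
ldd /bin/ls              # lists the .so libraries ls is linked against
cat /etc/ld.so.conf      # top-level shared-library configuration
ls /etc/ld.so.conf.d/    # per-architecture .conf fragments
```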
Address Binding
Address binding is the process of mapping from one address space to another. A logical address is an address generated by the CPU during execution, whereas a physical address refers to a location in the memory unit (the one that is loaded into memory). Note that the user deals only with logical (virtual) addresses.
Address Binding
Compile time – If you know at compile time where the process will reside in memory, then an absolute address is generated, i.e. the physical address is embedded into the executable of the program during compilation. Loading the executable as a process in memory is very fast. But if the generated address space is occupied by another process, the program crashes, and it becomes necessary to recompile the program to change the address space.
-Locations are fixed and known.
-There is no address translation: the addresses defined by the linker are the same as the locations in main memory. The used addresses are the absolute addresses too.
Address Binding
Load time – If it is not known at compile time where the process will reside, then relocatable addresses are generated. The loader translates the relocatable addresses to absolute addresses: the base address of the process in main memory is added to all logical addresses by the loader to generate absolute addresses.
-The load address becomes known while loading the executable. For example, when we upload a Linux kernel through the bootloader U-Boot, we pass the load address of the kernel.
[Figure: the same short program shown in three columns – the prog.c source using
symbolic names (A:10, B:20, C:0, Add A+B, load C, End), the executable in flash
memory after compile-time binding to absolute addresses (0x1AB:10, 0x1AC:20,
0x1AD:0, Add 0x1AB+0x1AC, load 0x1AD, End), and the image after load-time
relocation into target memory (0xAB:10, 0xAC:20, 0xAD:0, Add 0xAB+0xAC,
load 0xAD, End).]
Address Binding
Execution time – The instructions are in memory and are being processed by the CPU.
Additional memory may be allocated and/or deallocated at this time. This is used if the
process can be moved from one memory region to another during execution (dynamic
linking: linking that is done during load or run time).
Address Binding
-Every program becomes a process.
-The process is allocated during execution.
-An MMU is needed.
-The program resides in memory with its logical addresses, and the MMU converts them
to physical addresses.
-Instructions keep their logical addresses in memory.
- The CPU fetches an instruction > the instruction carries its logical address > the CPU
requests address translation from the MMU > then the CPU executes the instruction
Memory Allocation
Issues with contiguous
allocation:
1-Program sizes must be
less than the available
memory size
2-Fragmentation happens due
to contiguous allocation
3-It is hard for programs to grow
during execution
Memory Allocation types
First Fit
The first-fit approach allocates the first free partition (hole) large enough to
accommodate the process. It finishes after finding the first suitable free partition.
Best Fit
The best-fit approach allocates the smallest free partition that meets the requirement
of the requesting process. This algorithm searches the entire list of free partitions and
picks the smallest hole that is adequate, i.e. the hole closest to the actual process size
needed.
Worst Fit
The worst-fit approach locates the largest available free partition, so that the portion
left over will be big enough to be useful. It is the reverse of best fit.
Paging in Linux
-The program is divided into equal-sized partitions called pages
-RAM is divided into equal-sized partitions called frames
-page size == frame size, to make sure there is no unused space in a frame
-Logical Address or Virtual Address (represented in bits): an address generated by the CPU
-Logical Address Space or Virtual Address Space (represented in words or bytes)
So if we have a 4096-location space, 4096 = 2^12; to have 256 = 2^8 pages, each page
will have 16 = 2^4 locations
Paging
-Consider a process x of size 10 KB
-The page size is 2 KB, so process x is divided into 5 pages [p1..p5]
-The CPU generates a logical address (page number P, displacement D) and the MMU
converts it to a physical address (frame number F, displacement D)
[Figure: the CPU issues a logical address (P, D); the MMU uses the page table to
produce the physical address (F, D) in RAM.]

Page table for process x:

Page no | Frame no
P1      | 1
P2      | 4
P3      | 3
P4      | invalid (swapped out to the HDD)
P5      | 7

RAM frames: 1:p1, 2:-, 3:p3, 4:p2, 5:-, 6:-, 7:p5, 8:-, 9:-
Multi level paging
Multi-level page tables are tree-like structures that hold page tables. As an example, consider a
two-level page table, again on a 32-bit architecture with 2^12 = 4 KB pages. Now we can
divide the virtual address into three parts: say 10 bits for the level-0 index, 10 bits for the
level-1 index, and again 12 bits for the offset within a page.
Single-level memory consumption: 2^20 entries * (2^2 bytes/entry) = 4 MB
Multi-level memory consumption: (2^10 level-0 entries) * (2^2 bytes/entry) + 1 * (2^10
level-1 entries) * (2^2 bytes/entry) = 2 * 2^12 bytes = 8 KB (best case)
Translation lookaside buffer
-A translation lookaside buffer (TLB) is a memory cache used to reduce the
time taken to access a user memory location. It is part of the chip's
memory-management unit (MMU). The TLB stores recent translations of virtual
memory to physical memory and can be called an address-translation cache.
-With TLBs, try to make pages as large as possible to decrease the TLB miss rate
-Multi-level TLBs are used to increase performance and get more utilization out of
translation
Lab
-Write a simple C program to view virtual addresses and the page table:
1- touch example1.c
2- gedit example1.c and add your code
3- Compile your code: gcc example1.c -o ex1
4- Use the objdump binary utility to view the virtual addresses of your code:
objdump -D ex1
5- All the addresses shown are virtual addresses
6- Add while (1) to the code
7- ps -a to find the process id
8- To read the page table (6301 being the example PID): cat /proc/6301/pagemap
Paging pros and cons
Advantages of Paging
•Easy-to-use memory management algorithm
•No external fragmentation
•Swapping is easy between equal-sized pages and page frames
•Allows demand paging and prepaging
•Sharing of pages
•Protection of pages
Disadvantages of Paging
•May cause internal fragmentation > fixed with segmentation
•Page tables consume additional memory
•Longer memory access times (page table lookup) > solved with TLBs
•The page table can be big > solved by multi-level paging
Segmentation
-Segmentation: a process is divided into segments. The chunks that a program is
divided into, which are not necessarily all of the same size, are called segments.
Segmentation gives the user's view of the process, which paging does not. Here the
user's view is mapped to physical memory.
-Segmentation is used to solve the issue of internal fragmentation
Segmentation pros and cons
Advantages of Segmentation
1.No internal fragmentation.
2.The average segment size is larger than the actual page size.
3.Less overhead.
4.It is easier to relocate segments than an entire address space.
5.The segment table is smaller than the page table in paging.
Disadvantages
1.It can have external fragmentation.
2.It is difficult to allocate contiguous memory to variable-sized
partitions.
3.Costly memory management algorithms.
Segmentation with Paging
In this example we use paged segments, which means:
1-Divide your program into logical segments
2-Each segment is divided into equal-sized pages
3-Physical memory is divided into frames, and frame size == page size
4-Each page is assigned to a frame
5-A page table is constructed for each segment and stored in a single frame
6-Then a segment table is constructed to access each segment's page table
[Figure: segment table entry pointing to the page table of segment 0.]
Segmentation with Paging
Pure segmentation is not very popular and is not used in many operating systems.
However, segmentation can be combined with paging to get the best features of both
techniques.
In segmented paging, the main memory is divided into variable-size segments, which are
further divided into fixed-size pages.
1.Pages are smaller than segments.
2.Each segment has a page table, which means every program has multiple page tables.
3.The logical address is represented as Segment Number (base address), Page Number,
and Offset.
Logical address > which segment > which page > which frame
View fault pages
1-Create a while(1) hello-world program
2-Compile your code: gcc example1.c -o ex1
3-Run your code with ./ex1
4-Open a new terminal and get the process id: ps -a
5-Type ps -o min_flt,maj_flt,cmd 6699 (6699 being the example PID) to get the page faults:
min_flt: number of minor page faults
maj_flt: number of major page faults
6-To see your program's sections: readelf ex1 -a
Page Fault
-In an operating system that uses paging for memory management, a page replacement
algorithm is needed to decide which page to replace when a new page comes in.
-Page fault: a page fault happens when a running program accesses a memory page that
is mapped into the virtual address space but not loaded in physical memory.
-The better the algorithm, the lower the page miss rate
Page replacements algorithms
First In First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm, the operating system
keeps track of all pages in memory in a queue; the oldest page is at the front of the
queue. When a page needs to be replaced, the page at the front of the queue is selected
for removal.
Consider the page reference string 1, 3, 0, 3, 5, 6 with 3 page frames. Find the number of
page faults.
Page replacements algorithms
Belady’s anomaly – Belady’s anomaly proves that it is possible to have more page faults
when increasing the number of page frames while using the First in First Out (FIFO) page
replacement algorithm. For example, if we consider reference string 3, 2, 1, 0, 3, 2, 4, 3,
2, 1, 0, 4 and 3 slots, we get 9 total page faults, but if we increase slots to 4, we get 10
page faults.
Page replacements algorithms
Optimal Page replacement –
In this algorithm, pages are replaced which would not be used for the longest duration of
time in the future.
Example: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page
frames. Find the number of page faults.
POSIX in Linux
The Portable Operating System Interface (POSIX) is a family of standards specified by the
IEEE Computer Society for maintaining compatibility between operating systems. POSIX
defines the application programming interface (API), along with command line shells and
utility interfaces, for software compatibility with variants of Unix and other operating
systems.
To view POSIX libraries and interfaces: https://en.wikipedia.org/wiki/C_POSIX_library
Process in Linux
-An instance of a running program is
called a process. Every time you run a shell
command, a program is run and a process is
created for it. Each process in Linux has
a process id (PID) and it is associated with a
particular user and group account.
-Linux is a multitasking operating system,
which means that multiple programs can be
running at the same time (processes are also
known as tasks). Each process has the illusion
that it is the only process on the computer. The
tasks share common processing resources (like
CPU and memory).
To view all process info: ls /proc/<PID>
Process in Linux
A process control block (PCB) is a data
structure used by computer operating systems
to store all the information about a process. It
is also known as a process descriptor.
•When a process is created (initialized or
installed), the operating system creates a
corresponding process control block.
•Information in a process control block is
updated during the transition of process
states.
•When the process terminates, its PCB is
returned to the pool from which new PCBs are
drawn.
•Each process has a single PCB.
Process in Linux
Create a background process by adding & at the
end of the executable's command:
1- create a while (1) program
2- compile the program
3- run it in the background: ./ex1 &
4- ps -a to view running processes
Check examples from 1 to 12
Process in Linux
Running/Runnable (R): running processes are processes using a CPU core right now, a runnable
process is a process that has everything to run and is just waiting for a CPU core slot.
Interruptable sleep (S):Process is sleeping or blocked waiting for some conditions to be running
again.
The difference between a process in Interruptible Sleep (S) state and one in Uninterruptible
Sleep (D) is that the former will wake up to handle signals while the latter won't. We'll talk
about signals in a moment, but suppose a process is waiting for an I/O operation to complete
before waking up. If in the meantime it receives a signal to terminate (SIGKILL), it would
terminate before having the chance to handle the requested data. That's why I/O operations
normally go to uninterruptible sleep while waiting for the result.
Process in Linux
Zombie(Z):A zombie process is a process whose execution is completed but it still has an
entry in the process table. Zombie processes usually occur for child processes, as the
parent process still needs to read its child’s exit status. Once this is done using the wait
system call, the zombie process is eliminated from the process table.
Create Process in Linux with fork
The fork system call is used for creating a new process, called the child process, which runs
concurrently with the process that made the fork() call (the parent process). After a new child
process is created, both processes execute the next instruction following the fork() system call.
The child process gets a copy of the parent's program counter, CPU registers, and open files.
fork() takes no parameters and returns an integer value. The different values returned by fork()
are:
Negative value: creation of a child process was unsuccessful.
Zero: returned to the newly created child process.
Positive value: returned to the parent (caller). The value is the process ID of the newly created
child process.
Create Process in Linux with fork
Nested fork example
execl in Linux
The execl family of functions
Signal in Linux
Signals are software interrupts sent to a program to indicate that an important event has
occurred. The events can vary from user requests to illegal memory access errors. Some
signals, such as the interrupt signal, indicate that a user has asked the program to do
something that is not in the usual flow of control.
Signal in Linux
Default Actions
Every signal has a default action associated with it. The default action for a signal is the
action that a script or program performs when it receives a signal.
Some of the possible default actions are −
•Terminate the process.
•Ignore the signal.
•Dump core. This creates a file called core containing the memory image of the process
when it received the signal.
•Stop the process.
•Continue a stopped process.
View all available signals with kill -l
Check examples from 13 to 19
Threads in Linux
A thread of execution is often regarded as the smallest unit of processing that a
scheduler works on.
A process can have multiple threads of execution which are executed asynchronously.
This asynchronous execution brings in the capability of each thread handling a particular
work or service independently. Hence multiple threads running in a process handle their
services which overall constitutes the complete capability of the process.
Check examples from 20 to 24
Threads in Linux
The functions defined in the pthreads library include:
pthread_create: used to create a new thread
Syntax:
int pthread_create(pthread_t * thread, const pthread_attr_t * attr,
void * (*start_routine)(void *), void *arg);
•thread: pointer to an unsigned integer value that returns the thread id of the thread
created.
•attr: pointer to a structure that is used to define thread attributes like detached state,
scheduling policy, stack address, etc. Set to NULL for default thread attributes.
•start_routine: pointer to a subroutine that is executed by the thread. The return type
and parameter type of the subroutine must be of type void *. The function takes a
single argument, so if multiple values need to be passed, a struct must
be used.
•arg: pointer to void that contains the arguments to the function defined in the earlier
argument
Threads in Linux
pthread_exit: used to terminate a thread
Syntax:
void pthread_exit(void *retval);
Parameters: This method accepts a mandatory parameter retval, a pointer to the
value returned by the terminating thread. The pointed-to data must outlive the
thread (e.g. be global or heap-allocated, not on the thread's stack) so that any
thread joining this thread may read the return status
pthread_join: used to wait for the termination of a thread.
Syntax:
int pthread_join(pthread_t th, void **thread_return);
Parameter: This method accepts following parameters:
th: thread id of the thread for which the current thread waits.
thread_return: pointer to the location where the exit status of the thread mentioned
in th is stored
Threads in Linux
pthread_self: used to get the thread id of the current thread.
Syntax:
pthread_t pthread_self(void);
pthread_equal: compares whether two threads are the same or not. If the two
threads are equal, the function returns a non-zero value otherwise zero.
Syntax:
int pthread_equal(pthread_t t1, pthread_t t2);
Parameters: This method accepts following parameters:
t1: the thread id of the first thread
t2: the thread id of the second thread
Threads in Linux
pthread_cancel: used to send a cancellation request to a thread
Syntax:
int pthread_cancel(pthread_t thread);
Parameter: This method accepts a mandatory parameter thread which is the thread
id of the thread to which cancel request is sent.
pthread_detach: used to detach a thread. A detached thread does not require
another thread to join it on terminating. Its resources are automatically released
on termination if the thread is detached.
Syntax:
int pthread_detach(pthread_t thread);
Parameter: This method accepts a mandatory parameter thread which is the thread
id of the thread that must be detached.
Socket programming
What is socket programming?
Socket programming is a way of connecting two nodes on a network so they can communicate
with each other. One socket (node) listens on a particular port at an IP address, while the other
socket reaches out to it to form a connection. The server forms the listener socket while the
client reaches out to the server.
Server steps:
1. Create socket
2. Bind to address and port
3. Put in listening mode
4. Accept connections and process there after.
Client steps:
1. Create socket
2. Connect to server.
Socket programming
Embedded Linux
Shell Programming
→ No programming language is perfect. There is not even a single best
language; there are only languages well suited, or perhaps poorly suited,
for particular purposes.
- What is bash scripting?
→ The basic idea of bash scripting is to execute multiple commands to
automate a specific job.
→ You can say that the shell is the glue that binds these commands
together
→ Note: a shell script is a text file that contains a series of commands. Any
work you can do on the command line can be automated by a shell script.
History of Bash?
Steve Bourne wrote the Bourne shell which appeared in the Seventh Edition Bell Labs
Research version of Unix.
Many other shells have been written; this particular session concentrates on the Bourne
and the Bourne Again shells.
Other shells include the Korn Shell (ksh), the C Shell (csh), and variations such as tcsh.
[Diagram: your command or shell script (e.g. ls) goes to the Linux shell (bash), which
converts it to binary (e.g. 00100101) so that the Linux kernel understands your request.]
Create your first shell script
1- Create a file to hold the bash script (file.sh)
2- The first line of a bash script is the shebang → (#!/bin/bash)
This tells the system which interpreter to use
to translate your script
We have two functions to print on the terminal:
1- echo
2- printf
1- echo syntax:
→ echo message
→ echo "message"
2- printf syntax:
→ printf "message"
Note → with printf we must write (\n)
at the end of the message to print a new line
1- To run your bash script, first change the file mode → chmod +x filename.sh
2- Then run your bash script using this command → ./filename.sh
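Putting the pieces together, a first script might look like this (the file name and messages are arbitrary choices):

```shell
#!/bin/bash
# first.sh - print the same greeting with both functions
echo "Hello from echo"
printf "Hello from printf\n"    # printf needs an explicit \n
```

Save it as first.sh, then run chmod +x first.sh followed by ./first.sh.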
1- Single-line comment:
# single comment
2- Multi-line comment (a no-op here-document; the delimiter must be a single word):
<<comment_name
multi line comment
comment_name
Just about every programming language in existence has the concept of variables - a symbolic name
for a chunk of memory to which we can assign values, read and manipulate its contents
Examples:
VAR=value      # correct: no spaces around =
VAR = value    # wrong: the shell would treat VAR as a command
#!/bin/sh
MY_MESSAGE="Hello World"
echo $MY_MESSAGE
x="hello"
expr $x + 1
MY_MESSAGE="Hello World"
MY_SHORT_MESSAGE=hi
MY_NUMBER=1
MY_PI=3.142
MY_OTHER_PI="3.142"
MY_MIXED=123abc
• A variable is a part of memory used to hold a piece of data.
• Like Python, the shell has no command for declaring a variable
• A variable is created the moment you first assign a value to it
• Example code:
x=1
y="ahmed"
echo $x $y
echo "$x $y"
printf "%s %d\n" $y $x
• Read-only variable: you cannot change the value of the variable and cannot
unset it (delete the variable from your code)
• Syntax → readonly Variable_Name
EX → name="Ahmed"
readonly name
echo $name
→ Ahmed
name="Ali" → error
•Define a variable
variablename=VALUE
•Print a variable to screen
echo $variablename
•System variables:
-Normal: changes with each user
HOME PWD SHELL
-Environmental: changes with the login shell only (su -)
PATH
•env prints the environmental variables
•read takes a value from the user into a variable
read variablename
read -p "STRING" VAR
• Special variables:
1- $$: the process ID of the running bash
2- $?: the exit status of the last process; zero means it succeeded, any non-zero
value means it had a problem
3- $#: the number of arguments passed to your program
4- $* or $@: all the arguments passed to your program
5- $!: the process ID of the last background process
6- $0: the name of the script
7- $n (where n=1,2,3,...): the value of the nth argument
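The positional and special variables can be demonstrated with a throwaway script (args.sh and the /tmp path are arbitrary choices):

```shell
# Write a small script that echoes its special variables, then run it.
cat > /tmp/args.sh <<'EOF'
#!/bin/bash
echo "script : $0"
echo "first  : $1"
echo "second : $2"
echo "count  : $#"
echo "all    : $*"
EOF
chmod +x /tmp/args.sh
/tmp/args.sh ahmed 30
```

Running it prints the script name, the two arguments, the count 2, and the full argument list.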
Syntax → read variable_name
Example
echo "please enter your name"
read name
echo "welcome" $name
Example
Output
Ask user enter the first name then ask user enter the last name then the program
print welcome first name last name
Expected Output
Arithmetic operations : (+,-,*,/,**,%) we have two method :
First method :
Second method :
Syntax :
Syntax
x=10
y=10
echo $(($x+$y))
echo $(($x-$y))
echo $(($x*$y))
echo $(($x/$y))
echo $(($x%$y))
echo $(($x**$y))
x=10
y=10
echo `expr $x + $y`
echo `expr $x - $y`
echo `expr $x \* $y`
echo `expr $x / $y`
echo `expr $x % $y`
Ask user enter two numbers to Calculate all arithmetic operations on 2
variables by using two method.
Expected Output
Positional variables are the values you pass when you launch the program
Syntax:
./filename.sh value1 value2 .....
How to read them → VariableName=$1
Example:
lab.sh
x=$1
y=$2
echo `expr $x + $y`
terminal
./lab.sh 5 3
output
8
User enter his name and age your program print welcome name and age by using
positional variable
Expected Output
Arithmetic operations on real (floating-point) numbers use bc via a pipe and
command substitution:
Syntax:
c=`echo "$var1 + $var2" | bc -l`
Example:
x=10.5
y=7.36
echo "$x+$y" | bc -l
echo "$x-$y" | bc -l
echo "$x*$y" | bc -l
echo "$x/$y" | bc -l
echo "$x%$y" | bc -l
Note: bc uses ^ rather than ** for exponentiation, and -l enables the math
library so division keeps its fractional part.
Ask user enter two Real numbers to Calculate all arithmetic operations on 2
variables.
Expected Output
The following relational operators are used with numbers only:
-eq: checks if the values of the two operands are equal; if yes, the
condition becomes true
example → [ $a -eq $b ] → is not true
-ne: checks if the values of the two operands are not equal; if they differ,
the condition becomes true
example → [ $a -ne $b ] → is true
-gt: checks if the value of the left operand is greater than the value of the right
operand; if yes, the condition becomes true
example → [ $a -gt $b ] → is not true
-lt: checks if the value of the left operand is less than the value of the right
operand; if yes, the condition becomes true
example → [ $a -lt $b ] → is true
a=10
b=20
The following relational operators are used with numbers only:
-ge: checks if the value of the left operand is greater than or equal to the
value of the right operand; if yes, the condition becomes true
example → [ $a -ge $b ] → is not true
-le: checks if the value of the left operand is less than or equal to the value of
the right operand; if yes, the condition becomes true
example → [ $a -le $b ] → is true
The following relational operators are used with strings only:
a="abc"
b="efg"
=: checks if the values of the two operands are equal; if yes, the condition
becomes true
example → [ $a = $b ] → is not true
!=: checks if the values of the two operands are not equal; if they differ, the
condition becomes true
example → [ $a != $b ] → is true
-z: checks if the given string operand has zero length; if it is zero-length it returns true
example → [ -z $a ] → is not true
-n: checks if the given string operand has non-zero length; if it is not zero-length it returns true
example → [ -n $a ] → is true
A bare string test also returns true when the string is not empty (there is no -str operator in test):
example → [ $a ] → is true
• We have two types of conditional statements:
1- if-statement
2-case-esac
• if-statement:
Syntax (note the spaces inside the brackets):
if [ condition ]
then
commands
fi
Example:
if [ $x -eq 0 ]
then
echo "zero"
fi
Ask user enter number and Determine the absolute number to this number .
Expected Output
• elif and else:
Syntax (note the spaces inside the brackets):
if [ condition#1 ]
then
commands
elif [ condition#2 ]
then
commands
else
commands
fi
Example:
if [ $x -eq 0 ]
then
echo "zero"
elif [ $x -lt 0 ]
then
echo "negative"
else
echo "positive"
fi
Ask user to enter two numbers to determine Addition, subtraction, division and
multiplication but if any number is negative print the number is negative and exit.
Expected Output
Write a bash script to change a file's name to another name. Check that all parameters
have been sent before executing the command.
Expected Output
Embedded Linux
Shell Programming 2
• You can use multiple if...elif statements to perform a multiway branch. However, this
is not always the best solution, especially when all of the branches depend on the
value of a single variable.
• Shell supports case...esac statement which handles exactly this situation, and it does
so more efficiently than repeated if...elif statements.
• Syntax:
case VariableName in
value#1) commands
;;
value#2) commands
;;
esac
Example:
Name=$1
case "$Name" in
"ali") echo "welcome ali"
;;
"gerges") echo "welcome gerges"
;;
*) echo "invalid name"
;;
esac
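Wrapping the case example in a function makes it easy to try out (greet is a helper name chosen here):

```shell
# greet: pick a message based on the first argument, as in the slide.
greet() {
    case "$1" in
        ali)    echo "welcome ali" ;;
        gerges) echo "welcome gerges" ;;
        *)      echo "invalid name" ;;
    esac
}

greet ali        # prints: welcome ali
greet someone    # prints: invalid name
```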
• Type of loops:
1- while
2- until
3- for
4- select
• while:
• Syntax:
while [ condition ]
do
commands
done
Example:
count=0
while [ $count -lt 5 ]
do
echo "I am happy"
count=`expr $count + 1`
done
• until:
• Syntax:
until [ condition ]
do
commands
done
Example:
count=0
until [ $count -gt 4 ]
do
echo "I am happy"
count=`expr $count + 1`
done
• for:
• Syntax:
for item in items
do
commands
done
Example:
for i in *
do
echo $i
done
• select:
• Syntax:
select number in 1 2 3 4 5 6 7 8
do
commands
done
Example:
select n in 1 2 3 4 5 6 7 8 9 10
do
if [ $n -eq 2 ]
then
echo "yes"
fi
done
Example (break):
total=0
while true
do
echo "please enter a number :"
read number
if [ $number -eq -1 ]
then
break
fi
total=`expr $total + $number`
done
echo $total
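The same break logic can be exercised without interactive input by summing a fixed argument list instead of reading from the user (sum_until_sentinel is a helper name chosen here):

```shell
# Sum the given numbers, stopping at the sentinel value -1.
sum_until_sentinel() {
    local total=0 n
    for n in "$@"; do
        if [ "$n" -eq -1 ]; then
            break                    # sentinel reached: leave the loop
        fi
        total=$(expr "$total" + "$n")
    done
    echo "$total"
}

sum_until_sentinel 5 10 20 -1 99     # prints 35 (the 99 is never summed)
```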
Sleep:
With the sleep statement we can add a delay and then continue the code.
Example (continue and sleep):
total=0
number=0
while [ $number -lt 10 ]
do
echo "please enter a number :"
read number
if [ $number -eq 3 ]
then
continue
fi
total=`expr $total + $number`
sleep 1
done
echo $total
Write a bash script to separate files and directories that it is found in the current
directory
Expected Output
Syntax:
array_name=(item1 item2 item3)
Example:
Names=('ahmed' 'ali' 'hassan')
Ages=(25 26 27)
Accessing:
echo ${Names[0]}
output:
ahmed
echo ${Names[*]}
output:
ahmed ali hassan
Appending:
Names[3]='Rami'
echo ${Names[*]}
output:
ahmed ali hassan Rami
Syntax:
Function_Name()
{
commands
}
Calling:
Function_Name
Example:
PrintMyName()
{
echo "Ahmed"
}
PrintMyName
Passing variables to a function:
Inside a function the positional parameters work just as they do for the program
itself; while calling the function we can pass the variables:
Example:
PrintMyName()
{
echo $1
echo $2
}
PrintMyName "ahmed" "30"
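A runnable variant of the argument-passing idea (make_greeting is a helper name chosen here):

```shell
# A function receives its arguments like a script does: $1, $2, ...
make_greeting() {
    echo "welcome $1, age $2"
}

make_greeting "ahmed" "30"           # prints: welcome ahmed, age 30
```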
Start-up scripts are scripts of commands executed at login. They are used to:
–Set up the environment
–Establish commonly used aliases
–Run programs
Location of startup scripts in /etc
•~/.profile
•It gets executed automatically by the login shell when one logs in from the textual
console or GUI.
•~/.bash_profile or ~/.bash_login
•If one of these files exists, bash executes it rather than "~/.profile" when it is started
as a login shell. (Bash prefers "~/.bash_profile" over "~/.bash_login".) However,
these files won't influence a graphical session by default.
•~/.bashrc "this file is executed with each opened terminal"
•This file is executed in each and every invocation of bash as well as while
logging in to the graphical environment. It is executed after the profile script.
•/etc/profile
•This file gets executed whenever a bash login shell is entered, as well as by the
display manager when the desktop session loads.
•/etc/bash.bashrc
•This is the system-wide version of the ~/.bashrc file. By default this file is executed
whenever a user enters a shell or the desktop environment.
Embedded Linux
RPI3 Programming
Raspberry Pi is a series of small single-board computers developed in
the United Kingdom by the Raspberry Pi Foundation in association
with Broadcom. Early on, the Raspberry Pi project leaned towards the
promotion of teaching basic computer science in schools and in developing
countries. Later, the original model became far more popular than
anticipated, selling outside its target market for uses such as robotics. It is now
widely used in many areas, such as for weather monitoring, because of its low
cost, modularity, and open design.
Eben Upton wanted to create a cheap and affordable
computer for students to be able to do projects.
Introduction
• The Raspberry Pi is a credit-card-sized fully functioning
computer (System on chip) created by the non-profit
Raspberry Pi Foundation in the UK. It’s considered as a SOC
(system on chip) device that runs on a Linux OS specially
designed for it, called Raspbian.
• Raspbian is the official OS for the Raspberry Pi. Other
third-party OSes like Firefox OS, Android, RISC OS, and Ubuntu
MATE can be installed on the Pi; even a Windows 10 version is
also available for it.
• Like a computer, It has memory, processor, USB ports, audio output, graphic
driver for HDMI output .
Raspberry Pi Applications:
1- Smart home automation
2-ROV and Robots
3-Camera systems
4-Smart TV
5-super computers
6-open cv projects
7-machine learning projects
8-automotive applications
9-IOT projects
Introduction
Why Raspberry pi ?
- Complete computing platform
- Cheap
- Raw
- It doesn’t come hidden away in a box, you have access to the GPIO.
- Runs Linux
What can you do with it?
- General purpose computing
- Learning programming
- It comes preloaded with compilers and interpreters for many languages.
- Project platform because of its ability to integrate with electronics and
interacting with external devices
- Product prototyping
- Embedded linux products.
- Embedded systems solutions.
- IOT prototyping.
Introduction
• According to the Raspberry Pi Foundation, more than 5 million Raspberry Pis were
sold by February 2015, making it the best-selling British computer. By November 2016
they had sold 11 million units, and 12.5 million by March 2017, making it the third
best-selling "general purpose computer". In July 2017, sales reached nearly 15 million.
In March 2018, sales reached 19 million. In December 2019, sales reached 30 million.
•USB ports — these are used to connect a mouse and keyboard. You can also
connect other components, such as a USB drive.
•SD card slot — you can slot the SD card in here. This is where the operating
system software and your files are stored.
•Ethernet port — this is used to connect Raspberry Pi to a network with a cable.
Raspberry Pi can also connect to a network via wireless LAN.
•Audio jack — you can connect headphones or speakers here.
•HDMI port — this is where you connect the monitor (or projector) that you are
using to display the output from the Raspberry Pi. If your monitor has speakers,
you can also use them to hear sound.
•Micro USB power connector — this is where you connect a power supply. You
should always do this last, after you have connected all your other components.
•GPIO ports — these allow you to connect electronic components such as LEDs
and buttons to Raspberry Pi.
Raspberry Pi Versions
Raspberry Pi 3 Components
Raspberry Pi 3 Pinout
OS Installation
• In the following steps, we will install the Raspbian OS on an SD card and connect to
the Raspberry Pi headless over SSH.
• Step 1: Insert a microSD card into your computer. Your card should be 8GB or larger.
• Step 2: Install Etcher, click the "Select image" button, and choose the Raspbian
image to flash.
• Step 3: Select the SD card and click "Flash".
• Step 4: Etcher will take a few minutes to install Raspbian on your microSD card.
When it's done -- at least in Windows -- you'll see a number of alerts prompting you
to format the card. Close these dialog boxes or hit cancel on them (otherwise, you
will format over the OS).
• Step 5: Write an empty text file named "ssh" (no file extension) to the root
directory of the card. When it sees the "ssh" file on its first boot-up, Raspbian will
automatically enable SSH (Secure Socket Shell), which will allow you to remotely
access the Pi command line from your PC.
• Step 6: For headless mode over Wi-Fi, create a file called wpa_supplicant.conf in
the boot partition and add the configuration of your Wi-Fi network.
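A minimal wpa_supplicant.conf sketch for this step — the country code, SSID, and password below are placeholders you must replace with your own values:

```
country=GB
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkSSID"
    psk="YourNetworkPassword"
}
```

On its first boot, Raspbian moves this file into place and joins the configured network.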
OS Installation
• Step 7: Navigate to the Network Connections menu, which is part of the old-school
Control Panel. You can get to this screen by going to Settings -> Network &
Internet -> Wi-Fi and then clicking "Change Adapter Settings" on the right side of
the screen. This works whether you are sharing an Internet connection that comes to
your PC from Wi-Fi or from Ethernet.
• Step 8: Right-click on the adapter that is connected to the Internet, and select
Properties.
• Step 9: Enable "Allow other network users to connect" on the "Sharing" tab.
• Step 10: Now we can connect to the Raspberry Pi through PuTTY. Enter
raspberrypi or raspberrypi.local as the address you wish to connect to in PuTTY,
and click Open.
• Step 11: Enter pi as your username and raspberry as your password. You may want
to change these later.
• Step 12: After a successful connection to the Pi, enter sudo raspi-config at the
command prompt, then choose Interfacing Options.
• Step 13: Enable VNC (Virtual Network Computing).
• Step 14: Launch VNC Viewer and select New connection from the File menu.
• Step 15: Enter raspberrypi.local in the "VNC Server" field. If this does not work,
try again with the name "raspberrypi" without .local, then click OK.
• Step 16: Double-click on the connection icon to connect, and click OK if you are
shown a security warning.
• Step 17: Enter the Pi's username and password when prompted. The defaults are
username pi and password raspberry. Click OK.
• Step 18: Adjust the resolution.
RPi Linux Commands
• Let's go to
https://www.raspberrypi.org/documentation/linux/usage/commands.md
to view the most used Raspberry Pi commands.
Rpi3 config menu
• Enter the config menu with sudo raspi-config.
• This page describes the console-based raspi-config application. If you are using the
Raspberry Pi desktop, then you can use the graphical Raspberry Pi Configuration
application from the Preferences menu to configure your Raspberry Pi.
• Generally speaking, raspi-config aims to provide the functionality to make the
most common configuration changes. This may result in automated edits to
/boot/config.txt and various standard Linux configuration files.
Rpi3 config menu
System Options
The system options submenu allows you to make configuration changes to various parts
of the boot, login and networking process, along with some other system level changes.
Wireless LAN
Allows setting of the wireless LAN SSID and passphrase.
Audio
Specify the audio output destination.
Password
The default user on Raspberry Pi OS is pi with the password raspberry. You can change
that here.
Hostname
Set the visible name for this Pi on a network.
Rpi3 config menu
Overclock
On some models it is possible to overclock your Raspberry Pi's CPU using this tool. The
overclocking you can achieve will vary; overclocking too high may result in instability.
Selecting this option shows the following warning: Be aware that overclocking may
reduce the lifetime of your Raspberry Pi.
Overlay File System
Enable or disable a read-only filesystem
Boot Order
On the Raspberry Pi 4, you can specify whether to boot from USB or network if the SD
card isn't inserted.
Rpi3 config menu
Boot / Auto login
From this submenu you can select whether to boot to console or desktop and whether
you need to log in or not. If you select automatic login, you will be logged in as the pi
user.
Network at Boot
Use this option to wait for a network connection before letting boot proceed.
Resolution
Define the default HDMI/DVI video resolution to use when the system boots without a
TV or monitor being connected. This can have an effect on RealVNC if the VNC option is
enabled.
Interfacing Options
In this submenu there are the following options to enable/disable: Camera, SSH, VNC,
SPI, I2C, Serial, 1-wire, and Remote GPIO.
SCP (secure copy)
Sharing files between Windows and the RPi3
scp is a command for sending files over SSH. This means you can copy files between
computers, say from your Raspberry Pi to your desktop or laptop, or vice versa:
scp myfile.txt pi@192.168.1.3:/home/pi/project/
On Windows we will use the WinSCP program.
Connect to wifi via terminal
View board info
RPi.GPIO module
• RPi.GPIO is a package that provides a class to control the GPIO pins on a Raspberry
Pi. The RPi.GPIO module is installed by default in Raspbian. Make sure that it is at
the latest version.
• Now we can import the module by adding a single line of code.
• By importing it this way, you can refer to it as just GPIO through the rest of your script.
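The import the bullets refer to can be sketched as follows; the try/except guard is only there so the snippet also loads on a machine without the RPi.GPIO package:

```python
# On the Pi itself, update the package first (run in a shell):
#   sudo apt-get update && sudo apt-get install python3-rpi.gpio
try:
    import RPi.GPIO as GPIO  # from now on, refer to it as just GPIO
except ImportError:
    GPIO = None  # not running on a Raspberry Pi, or package missing
```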
BOARD numbering system
● There are two ways of numbering the IO pins on a Raspberry Pi within RPi.GPIO.
The first is using the BOARD numbering system. This refers to the pin numbers on
the P1 header of the Raspberry Pi board. The advantage of using this numbering
system is that your hardware will always work, regardless of the board revision of
the RPi. You will not need to rewire your connector or change your code.
BCM numbers system
● The second numbering system is the BCM numbers. This is a lower-level way of
working: it refers to the channel numbers on the Broadcom SoC. You always have to
work with a diagram of which channel number goes to which pin on the RPi board.
Your script could break between revisions of Raspberry Pi boards.
Pin Mode Setup
● You may already be familiar with the Digital Input/Output peripheral on other
development boards. The DIO peripheral allows two options for each pin, input
and output. You must configure the pin mode before working with it.
Syntax
Modes
● GPIO.IN
● GPIO.OUT
Output Pin Value
● To write a pin high or low, use output function.
● These pins are 3.3V at high level and 0V at low level
● A maximum of 16mA can be drawn from the digital pin
Syntax
OutValue
● GPIO.HIGH
● GPIO.LOW
Initial Value
Optionally you can define an initial value for the pin while configuring it as
output.
LAB 1
Expected Output
Write a Python program that provides a toggle
button in a window; the button toggles a
LED connected to a GPIO pin on the Pi.
Time To
Code
Input Pin Value
● To read the value of a GPIO pin use input function
● Warning: Do not apply more than 3.3v on GPIO pin used as input.
Syntax
Return
● 0 / GPIO.LOW / False
● 1 / GPIO.HIGH / True
Pull Up/Down
If you do not have the input pin connected to anything, it will 'float'. In other words,
the value that is read in is undefined because it is not connected to anything until you
press a button or switch. It will probably change value a lot as a result of receiving
mains interference.
To get around this, we use a pull-up or a pull-down resistor. In this way, the default value
of the input can be set. Pull up/down resistors can be provided in hardware or configured
in software. In hardware, a 10K resistor between the input channel and 3.3V (pull-up)
or 0V (pull-down) is commonly used. The RPi.GPIO module allows you to configure
the Broadcom SoC to do this in software:
Cleanup
At the end of any program, it is good practice to clean up any resources you might have
used. This is no different with RPi.GPIO. By returning all channels you have used back
to inputs with no pull up/down, you can avoid accidental damage to your RPi by
shorting out the pins. Note that this will only clean up GPIO channels that your script
has used. Note that GPIO.cleanup() also clears the pin numbering system in use.
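A sketch tying together the setup, output, input-with-pull-up, and cleanup calls described above. The pin numbers are illustrative, and the import guard only lets the sketch load on a machine without the RPi.GPIO package:

```python
try:
    import RPi.GPIO as GPIO
except ImportError:
    GPIO = None  # not on a Raspberry Pi

def flash_and_read(led_pin=18, button_pin=23):
    if GPIO is None:
        return None  # hardware not available
    GPIO.setmode(GPIO.BCM)                                     # BCM channel numbers
    GPIO.setup(led_pin, GPIO.OUT, initial=GPIO.LOW)            # output, starts low
    GPIO.setup(button_pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # software pull-up
    GPIO.output(led_pin, GPIO.HIGH)                            # drive the LED (3.3 V)
    pressed = GPIO.input(button_pin) == GPIO.LOW               # pressed pulls to 0 V
    GPIO.cleanup()                                             # return channels to inputs
    return pressed
```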
Delay function
Suspend execution of the calling thread for the given number of seconds. The
argument may be a floating point number to indicate a more precise sleep time. The
actual suspension time may be less than that requested because any caught signal will
terminate the sleep() following execution of that signal’s catching routine. Also, the
suspension time may be longer than requested by an arbitrary amount because of the
scheduling of other activity in the system.
Pulse Width Modulation
Pulse Width modulation (PWM) has wide range of applications in embedded system
development. Using RPi GPIO, you can make software PWM easily by the following
steps:
1- Create PWM Instance
2- Use the following property functions:
Function
Description
PWM_Ch.start(duty_cycle)
where duty_cycle is between 0.0 <= dc <= 100.0
PWM_Ch.ChangeFrequency(freq)
where freq in the value in Hz
PWM_Ch.ChangeDutyCycle(duty_cycle)
where duty_cycle is between 0.0 <= dc <= 100.0
PWM_Ch.stop()
stop the PWM signal
Pulse Width Modulation
Pulse Width Modulation
How to use a timer with interrupts to calculate PWM frequency and duty cycle:
1- Configure the pin as an interrupt on the rising edge.
2- When the rising-edge interrupt fires, configure the pin for the falling edge and
start the timer.
3- When the falling edge is received, save the timer value (it will be Ton), configure
the pin as an interrupt on the rising edge, and clear the timer.
4- When the rising edge is received, save the timer value (it will be Toff).
Freq = 1/(Ton+Toff)    duty_cycle = Ton/(Ton+Toff)
- Get Ton = TimerTicks * TickTime
- Get the maximum frequency that can be measured by the RPi3
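The measurement steps above reduce to two small formulas; a sketch, where the tick counts and tick time are whatever your timer hardware provides:

```python
def pwm_from_ticks(ton_ticks, toff_ticks, tick_time_s):
    # Ton/Toff are the measured high and low durations of one PWM period
    ton = ton_ticks * tick_time_s
    toff = toff_ticks * tick_time_s
    freq_hz = 1.0 / (ton + toff)           # Freq = 1 / (Ton + Toff)
    duty_pct = 100.0 * ton / (ton + toff)  # duty = Ton / (Ton + Toff)
    return freq_hz, duty_pct
```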
Pulse Width Modulation
Programming Interrupts
An edge is the change in state of an electrical signal from LOW to HIGH (rising edge) or
from HIGH to LOW (falling edge). Quite often, we are more concerned with a change in
state of an input than with its value. This change in state is an event.
RPi.GPIO provides the function add_event_detect that allows you to detect events
for GPIO.RISING, GPIO.FALLING or GPIO.BOTH.
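A hedged sketch of add_event_detect; the pin number and callback are illustrative, and the import guard lets the snippet load off-Pi:

```python
try:
    import RPi.GPIO as GPIO
except ImportError:
    GPIO = None  # not on a Raspberry Pi

def watch_button(pin=23):
    if GPIO is None:
        return False  # hardware not available
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    # run the callback on every falling edge, software-debounced by 200 ms
    GPIO.add_event_detect(pin, GPIO.FALLING,
                          callback=lambda ch: print("event on channel", ch),
                          bouncetime=200)
    return True
```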
Interval Timers
Using the threading module in Python, you can create an interval timer. The Timer class
is used to perform an action that should run only after a certain amount of time
has passed.
1- First import the threading module
2- Create the Timer object
3- Start the timer
4- You can stop the timer any time before it fires
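The four steps above can be sketched with threading.Timer (the 50 ms delay is chosen arbitrarily):

```python
import threading

fired = threading.Event()

def action():
    # the deferred action: runs once, on a worker thread, after the delay
    fired.set()

timer = threading.Timer(0.05, action)  # steps 1-2: create a 50 ms one-shot timer
timer.start()                          # step 3: arm it
# timer.cancel() here would stop it before it fires (step 4)
timer.join()                           # Timer is a Thread, so we can wait for it
```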
Uart Communication
UART stands for Universal Asynchronous Receiver/Transmitter. It's a physical circuit in
a microcontroller, or a stand-alone IC. A UART's main purpose is to transmit and
receive serial data.
Uart Communication
UARTs transmit data asynchronously, which means there is no clock signal to
synchronize the output of bits from the transmitting UART to the sampling of bits by the
receiving UART. Instead of a clock signal, the transmitting UART adds start and stop bits
to the data packet being transferred. These bits define the beginning and end of the data
packet so the receiving UART knows when to start reading the bits.
When the receiving UART detects a start bit, it starts to read the incoming bits at a
specific frequency known as the baud rate. Baud rate is a measure of the speed of data
transfer, expressed in bits per second (bps). Both UARTs must operate at about the same
baud rate.
Uart Communication
START BIT
The UART data transmission line is normally held at a high voltage level when it’s not
transmitting data. To start the transfer of data, the transmitting UART pulls the
transmission line from high to low for one clock cycle. When the receiving UART detects
the high to low voltage transition, it begins reading the bits in the data frame at the
frequency of the baud rate.
DATA FRAME
The data frame contains the actual data being transferred. It can be 5 bits up to 8 bits
long if a parity bit is used. If no parity bit is used, the data frame can be 9 bits long. In
most cases, the data is sent with the least significant bit first.
Uart Communication
PARITY
Parity describes the evenness or oddness of a number. The parity bit is a way for the
receiving UART to tell if any data has changed during transmission. Bits can be changed by
electromagnetic radiation, mismatched baud rates, or long distance data transfers. After the
receiving UART reads the data frame, it counts the number of bits with a value of 1 and
checks if the total is an even or odd number. If the parity bit is a 0 (even parity), the 1 bits in
the data frame should total to an even number. If the parity bit is a 1 (odd parity), the 1 bits
in the data frame should total to an odd number. When the parity bit matches the data, the
UART knows that the transmission was free of errors. But if the parity bit is a 0, and the total
is odd; or the parity bit is a 1, and the total is even, the UART knows that bits in the data
frame have changed.
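The framing and parity rules above can be checked with a few lines of Python. This sketch builds the bit sequence of a single frame, assuming 8 data bits and one stop bit:

```python
def uart_frame(byte, parity="even"):
    bits = [0]                                  # start bit: line pulled low
    data = [(byte >> i) & 1 for i in range(8)]  # least significant bit first
    bits += data
    ones = sum(data)
    if parity == "even":
        bits.append(ones % 2)        # total count of 1s becomes even
    elif parity == "odd":
        bits.append(1 - ones % 2)    # total count of 1s becomes odd
    bits.append(1)                   # stop bit: line released high
    return bits
```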
Uart Communication
STOP BITS
To signal the end of the data packet, the sending UART drives
the data transmission line from a low voltage to a high voltage
for at least two bit durations.
Rpi3 Uart
All UARTs on the Raspberry Pi are 3.3V only - damage will occur if they are
connected to 5V systems. An adaptor can be used to connect to 5V systems.
Alternatively, low-cost USB to 3.3V serial adaptors are available from various third
parties.
Rpi3 UART
Primary UART (mini uart) UART1
On the Raspberry Pi, one UART is selected to be present on GPIO 14 (transmit)
and 15 (receive) - this is the primary UART. By default, this will also be the UART
on which a Linux console may be present. Note that GPIO 14 is pin 8 on the GPIO
header, while GPIO 15 is pin 10.
Secondary UART(PL011) UART0
The secondary UART is not normally present on the GPIO connector. By default,
the secondary UART is connected to the Bluetooth side of the combined wireless
LAN/Bluetooth controller, on models which contain this controller.
Rpi3 Uart
By default, only UART0 is enabled. The following table summarises the
assignment of the first two UARTs:
So if, by default, UART0 is the one connected to Bluetooth, how do we use the UART
on the GPIO pins?
1- Enable it from the raspi-config menu,
2- or just add enable_uart=1 in config.txt,
3- or swap the mini UART with the UART connected to Bluetooth by disabling Bluetooth:
4- sudo geany /boot/config.txt
and add these lines:
dtoverlay=pi3-disable-bt
enable_uart=1
To write to the UART that's connected to the GPIO pins:
echo "hi" >> /dev/serial0
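The same write can be done from Python with the third-party pyserial package (an assumption of this sketch; the device path and baud rate match a default Pi 3 setup, and the guard lets the snippet load where pyserial is not installed):

```python
try:
    import serial  # third-party "pyserial" package
except ImportError:
    serial = None

def send_hi(port="/dev/serial0", baud=115200):
    if serial is None:
        return False  # pyserial not installed
    with serial.Serial(port, baud, timeout=1) as ser:
        ser.write(b"hi\n")  # same bytes the echo command above sends
    return True
```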
Rpi3 Uart
UART0 > PL011 > secondary > connected to Bluetooth > set enable_uart=1
and dtoverlay=pi3-disable-bt (disable Bluetooth)
disable-bt disables the Bluetooth device and makes the first PL011 (UART0) the primary
UART.
UART1 > mini UART > primary > GPIO > set enable_uart=0
Rpi3 Uart
IFTTT
What is IFTTT?
IFTTT derives its name from the programming conditional statement "if this, then
that." The company provides a software platform that connects apps,
devices and services from different developers in order to trigger one or more
automations involving those apps, devices and services.
https://ifttt.com/smartlife > for smart home
IFTTT
IFTTT Webhooks
Well, this IS possible thanks to the IFTTT Webhooks service, which was specifically
designed for makers.
Webhooks can send and receive triggers via HTTP POST and GET requests, which can
be easily added to your code using the 'requests' Python library.
A GET request pulls data from a source/server, and a POST request does the opposite:
it sends (or updates) data on a server.
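A sketch of triggering a Webhooks event from Python using only the standard library; the event name and key below are placeholders, and the URL follows the Maker Webhooks trigger pattern:

```python
from urllib import request, parse

def webhook_url(event, key):
    # Maker Webhooks trigger endpoint: event name and key fill the path
    return f"https://maker.ifttt.com/trigger/{event}/with/key/{key}"

def trigger(event, key, value1=""):
    # POSTs an optional ingredient value; only works with a real key
    data = parse.urlencode({"value1": value1}).encode()
    req = request.Request(webhook_url(event, key), data=data)  # POST, since data given
    return request.urlopen(req)
```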
IFTTT
Flask is a web framework. This means flask provides you with tools,
libraries and technologies that allow you to build a web application. This
web application can be some web pages, a blog, a wiki or go as big as a
web-based calendar application or a commercial website.
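A minimal Flask sketch of the idea; the route name is illustrative, and the guard lets the file load where Flask is not installed:

```python
try:
    from flask import Flask
except ImportError:
    Flask = None  # Flask not installed

app = None
if Flask is not None:
    app = Flask(__name__)

    @app.route("/led/<state>")
    def led(state):
        # a real handler would drive a GPIO pin here based on <state>
        return f"LED set to {state}"

    # app.run(host="0.0.0.0", port=5000) would serve it on the Pi's address
```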
IFTTT
No Ip
Router IP
Embedded Linux
Intro to Embedded Linux and Toolchain
We strongly recommend that embedded Linux developers use GNU/Linux as their
desktop operating system, for multiple reasons.
▶ All community tools are developed and designed to run on Linux. Trying to use
them on other operating systems (Windows, Mac OS X) will lead to trouble.
▶ As Linux also runs on the embedded device, all the knowledge gained from using
Linux on the desktop will apply similarly to the embedded device.
▶ If you are stuck with a Windows desktop, at least you should use GNU/Linux in a
virtual machine (such as VirtualBox which is open source), though there could be a
small performance penalty.
-We have chosen Ubuntu, as it is a widely used and
easy to use desktop Linux distribution
Components of Linux system
Cross-compilation ToolChain
Compiler, debugger, libraries, headers, extra tools.
Boot Loader (grub, lilo, uboot)
Started by the hardware.
Responsible for basic initialization, loading and executing the kernel.
Linux Kernel
Contains the process and memory management, network stack, device
drivers, and many other core OS responsibilities, and provides services
to userspace applications.
C Library (glibc, uclibc, ….)
The interface between the kernel and the userspace applications.
User Land
Services manager
Configuration files
Common system commands
User
Booting sequence
Bootloader
-Executed by the hardware at a fixed location in ROM / Flash
-Initializes support for the device where the kernel image is found (local
storage, network, removable media)
-Loads the kernel image into RAM and executes it.
Linux Kernel
-Uncompresses itself
-Initializes the kernel core and statically compiled drivers (required to
access the root filesystem)
-Mounts the root filesystem(specified by the root kernel parameter)
-Executes the first user space program (specified by the init kernel parameter)
User Land
-The first user space program configures user space and starts up system services and
any installed user interface (graphical or terminal-based).
Building process
From scratch
-The original way, you need to build every single component from scratch,
from its source code. Applying your configurations, compiler options, building each
component of the system, then integrate all these parts together.
Auto-Build tools
Automated tools in the form of “build scripts”, written by the experts in this
field to make it easy for everyone to build a full embedded Linux system with the
minimal knowledge of building process.
▶ When doing embedded development, there is always a split between
▶ The host, the development workstation, which is typically a powerful PC
▶ The target, which is the embedded system under development
▶ They are connected by various means: almost always a serial line for debugging
purposes, frequently an Ethernet connection, sometimes a JTAG interface for
low-level debugging
The usual development tools available on a GNU/Linux workstation form a native
toolchain
▶ This toolchain runs on your workstation and generates code for your workstation,
usually x86
▶ For embedded system development, it is usually impossible or not interesting to
use a native toolchain
▶ The target is too restricted in terms of storage and/or memory
▶ The target is very slow compared to your workstation
▶ You may not want to install all development tools on your target.
▶ Therefore, cross-compiling toolchains are generally used. They run on your
workstation but generate code for your target.
Three machines must be distinguished when discussing toolchain
creation
▶ The build machine, where the toolchain is built.
▶ The host machine, where the toolchain will be executed.
▶ The target machine, where the binaries created by the toolchain are executed.
Build machine: where the toolchain is built
Host machine: where the toolchain runs to build executables
Target machine: where the compiled binaries run
GCC Compiler: GNU Compiler Collection
▶ One of the most used free compilers
Elf File
.text, the machine code of the compiled program.
.rodata, read-only data, such as the format strings in printf statements.
.data, initialized global variables.
.bss, uninitialized global variables. BSS stands for block storage start, and this section
actually occupies no space in the object file; it is merely a placeholder.
.symtab, a symbol table with information about functions and global variables defined and
referenced in the program. This table does not contain any entries for local variables;
those are maintained on the stack.
.rel.text, a list of locations in the .text section that need to be modified when the linker
combines this object file with other object files.
.rel.data, relocation information for global variables referenced but not defined in the
current module.
.debug, a debugging symbol table with entries for local and global variables. This section is
present only if the compiler is invoked with a -g option.
.line, a mapping between line numbers in the original C source program and machine code
instructions in the .text section. This information is required by debugger programs.
.strtab, a string table for the symbol tables in the .symtab and .debug sections.
Symbols and Symbol Resolution
Every relocatable object file has a symbol table and associated symbols. In the context of a
linker, the following kinds of symbols are present:
Global symbols defined by the module and referenced by other modules. All non-static
functions and global variables fall in this category.
Global symbols referenced by the input module but defined elsewhere. All functions and
variables with extern declaration fall in this category.
Local symbols defined and referenced exclusively by the input module. All static functions
and static variables fall here.
Expected Output
Lab Description: …
1- open terminal
2- gedit lab1.c
3- gcc lab1.c
4- ./a.out
5- file a.out
6- gcc -E lab1.c > lab1.i
7- gcc -S lab1.i
Read the help of the gcc compiler with gcc --help
▶ Binutils is a set of tools to generate and manipulate binaries for
a given CPU
architecture
▶ as, the assembler, that generates binary code from assembler source code
▶ ld, the linker
▶ ar, ranlib, to generate .a archives (static libraries)
▶ objdump, readelf, size, nm, strings, to inspect binaries. Very useful analysis tools!
▶ objcopy, to modify binaries
▶ strip, to strip parts of binaries that are just needed for debugging (reducing their
size).
▶ https://www.gnu.org/software/binutils/
▶ The C library and compiled programs need to interact
with the kernel
▶ Available system calls and their numbers
▶ Constant definitions
▶ Data structures, etc.
▶ Therefore, compiling the C library requires kernel
headers, and many applications also require them.
▶ Available in <linux/...> and <asm/...> and a few
other directories corresponding to the ones visible in
include/uapi/ and in arch/<arch>/include/uapi in
the kernel sources
The kernel to user space ABI is backward compatible
▶ ABI = Application Binary Interface - It’s about binary compatibility
▶ Kernel developers are doing their best to never break existing programs when the
kernel is upgraded. Otherwise, users would stick to older kernels, which would be
bad for everyone.
▶ Hence, binaries generated with a toolchain using kernel headers older than the
running kernel will work without problem, but won’t be able to use the new
system calls, data structures, etc.
▶ Binaries generated with a toolchain using kernel headers newer than the running
kernel might work only if they don’t use the recent features, otherwise they will
break.
What to remember: fine to keep an old toolchain as long as it works fine for your
project, even if you update the kernel.
Rule of thumb: keep the kernel headers version and the Linux kernel version the same.
▶ The C library is an essential component of a Linux system
▶ Interface between the applications and the kernel
▶ Provides the well-known standard C API to ease application development
▶ Several C libraries are available: glibc, uClibc, musl, klibc, newlib...
▶ The choice of the C library must be made at
cross-compiling toolchain generation time, as
the GCC compiler is compiled against a specific
C library.
glibc:
License: LGPL
▶ C library from the GNU project
▶ Designed for performance, standards compliance and
portability
▶ Found on all GNU / Linux host systems
▶ quite big for small embedded systems
https://www.gnu.org/software/libc/
uClibc-ng:
Lightweight C library for small embedded systems
▶ High configurability: many features can be enabled or disabled through a
menuconfig interface.
▶ Supports most embedded architectures, including MMU-less ones (ARM Cortex-M,
Blackfin, etc.). The only library supporting ARM noMMU.
▶ No guaranteed binary compatibility. May need to recompile applications when the
library configuration changes.
▶ Some glibc features may not be implemented yet (real-time, floating-point
operations...)
▶ Focus on size rather than performance https://uclibc-ng.org/
musl C library:
▶ A lightweight, fast and simple library for embedded systems
▶ Supported by build systems such as Buildroot and Yocto Project.
Let’s compile and strip a hello.c program statically and compare the size
▶ With gcc 6.3, armel, musl 1.1.16:
7300 bytes
▶ With gcc 6.3, armel, uclibc-ng 1.0.22 :
67204 bytes.
▶ With gcc 6.2, armel, glibc:
492792 bytes.
Choosing a C library:
▶ Advice to start developing and debugging your applications with glibc, which is
the most standard solution.
▶ Then, when everything works, if you have size constraints, try to compile your app
and then the entire filesystem with uClibc or musl.
▶ If you run into trouble, it could be because of missing features in the C library.
▶ In case you wish to make static executables, musl will be an easier choice. Note
that static executables built with a given C library can be used in a system with a
different C library.
Building a toolchain manually:
Building a cross-compiling toolchain by yourself is a difficult and painful task! Can
take days or weeks!
▶ Lots of details to learn: many components to build, complicated configuration.
Need to be familiar with building and configuring tools.
▶ Many decisions to make about the components (such as C library, gcc and binutils
versions, ABI, floating point mechanisms...). Not trivial to find working
combinations of such components!
▶ Need to be familiar with current gcc issues and patches on your platform
What to do instead is to get a pre-compiled toolchain:
▶ Solution that many people choose
▶ Advantage: it is the simplest and most convenient solution
▶ Drawback: you can’t fine tune the toolchain to your needs
▶ Make sure the toolchain you find meets your requirements: CPU, endianness, C
library, component versions, ABI, soft float or hard float, etc.
▶ Possible choices
▶ Toolchains packaged by your distribution
Ubuntu example:
sudo apt install gcc-arm-linux-gnueabihf
▶ Bootlin’s toolchains (for most architectures): https://toolchains.bootlin.com
▶ Toolchain provided by your hardware vendor.
Toolchain building utilities:
Another solution is to use utilities that automate the process of building the
toolchain
▶ Same advantage as the pre-compiled toolchains: you don't need to deal with
all the details of the build process
▶ But also offers more flexibility in terms of toolchain configuration, component
version selection, etc.
▶ They also usually contain several patches that fix known issues with the different
components on some architectures
▶ Multiple tools with identical principle: shell scripts or Makefile that automatically
fetch, extract, configure, compile and install the different components
▶ Crosstool-NG: a rewrite of the older Crosstool, with a menuconfig-like configuration system
▶ Feature-full: supports uClibc, glibc and musl, hard and soft float, many architectures
▶ Actively maintained
▶ https://crosstool-ng.github.io/
Obtain a ready-made toolchain:
1- sudo apt-get install gcc-arm-none-eabi
2- Type arm and press Tab to see the installed cross tools
Expected Output
Lab Description: …
use ready made toolchain and build on your host then move binary to your rpi3
arm-linux-gnueabi-gcc main.c -o armcompiled
What does arm-linux-gnueabi-gcc mean?
arm = the target architecture; this cross toolchain generates code for ARM
linux = the target operating system the generated binaries will run on
gnueabi = the GNU Embedded Application Binary Interface
gcc = the GCC compiler
An ABI defines how data structures or computational routines are accessed in machine code,
which is a low-level, hardware-dependent format. In contrast, an API defines this access in
source code, which is a relatively high-level, hardware-independent, often human-readable
format. A common aspect of an ABI is the calling convention, which determines how data is
provided as input to, or read as output from, computational routines. Examples of this are the
x86 calling conventions.
For setup, follow these instructions:
-sudo apt-get install git
-sudo apt-get install -y gcc g++ gperf bison flex texinfo help2man make libncurses5-dev \
python3-dev autoconf automake libtool libtool-bin gawk wget bzip2 xz-utils unzip \
patch libstdc++6 rsync
-git clone https://github.com/crosstool-ng/crosstool-ng
./bootstrap >> configures and sets up the environment for crosstool-NG
./configure --enable-local >> dependency checks
make
./ct-ng list-samples | grep arm
./ct-ng armv8-rpi3-linux-gnueabihf
./ct-ng menuconfig
gedit .config
./ct-ng build
View the toolchain components:
ls ~/x-tools
and add it to the PATH environment variable for easier use.
Embedded Linux
Bootloader Design
▶ The bootloader is a piece of code responsible for
▶ Basic hardware initialization
▶ Loading of an application binary, usually an operating system kernel, from flash
storage, from the network, or from another type of non-volatile storage.
▶ Possibly decompression of the application binary
▶ Execution of the application
▶ Besides these basic functions, most bootloaders provide a shell with
various
commands implementing different operations.
▶ Loading of data from storage or network, memory inspection, hardware diagnostics
and testing, etc.
The x86 processors are typically bundled on a board with a
non-volatile memory containing a program, the BIOS.
▶ On old BIOS-based x86 platforms: the BIOS is responsible for
basic hardware initialization and loading of a very small piece of
code from non-volatile storage.
▶ This piece of code is typically a 1st stage bootloader, which will
load the full bootloader itself.
▶ It typically understands filesystem formats so that the kernel file
can be loaded directly from a normal filesystem.
▶ This sequence is different for modern EFI-based systems.
GRUB, Grand Unified Bootloader, the most powerful one.
https://www.gnu.org/software/grub/
▶ The CPU has an integrated boot code in ROM
▶ Boot ROM on AT91 CPUs, “ROM code” on OMAP, etc.
▶ Exact details are CPU-dependent
▶ This boot code is able to load a first stage bootloader from a storage
device into
an internal SRAM (DRAM not initialized yet)
▶ Storage device can typically be: MMC, NAND, SPI flash, UART
(transmitting data
over the serial line), etc.
▶ The first stage bootloader is
▶ Limited in size due to hardware constraints (SRAM size)
▶ Provided either by the CPU vendor or through community projects
▶ This first stage bootloader must initialize DRAM and other hardware
devices and
load a second stage bootloader into RAM
How to take control of your RPI3 ?
-The boot ROM (the first stage bootloader) is
programmed into SoC during manufacturing of
the RPI. This code looks for the file
bootcode.bin (second stage bootloader) in one
of the partitions of SD card and executes it.
-The bootcode.bin code looks for the file
config.txt for any third stage bootloader info. If
nothing found, it loads the default bootloader
start.elf from the SD card and runs it.
-The start.elf code reads config.txt multiple
times to initialize basic hardware, load dtb and
kernel into RAM.
How to take control of your RPI3 ?
-To load our customized bootloader, we will
replace the kernel with our u-boot binary by
editing config.txt:
1- kernel=u-boot.bin
2- enable_uart=1
3- Edit your cmdline.txt
With this, u-boot will be loaded into RAM and
we now control the boot of the RPi3.
https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/bootflow.md
▶ There are several open-source generic bootloaders.
Here are the most popular ones:
▶ U-Boot, the universal bootloader by Denx
The most used on ARM, also used on PPC, MIPS, x86, m68k, RiscV, etc. The
de-facto standard nowadays. We will study it in detail.
https://www.denx.de/wiki/U-Boot
▶ Barebox, an architecture-neutral bootloader created by Pengutronix.
It doesn't have as much hardware support as U-Boot yet. U-Boot has improved quite a lot thanks to this competitor.
https://www.barebox.org
DTB: Device tree binary
SPL: Secondary Program Loader
TPL: Tertiary Program Loader
cd inside the u-boot directory and run:
tree -L 1
▶ Get the source code from the website or from git, and uncompress it
▶ The configs/ directory contains one or several configuration file(s) for each supported board
▶ It defines the CPU type, the peripherals and their configuration, the memory mapping, the U-Boot features that should be compiled in, etc.
▶ Examples:
configs/stm32mp15_basic_defconfig
configs/stm32mp15_trusted_defconfig
▶ U-Boot must be configured before being compiled
▶ Configuration stored in a .config file
▶ make BOARDNAME_defconfig
▶ Where BOARDNAME is the name of a configuration, as visible in the configs/
directory.
▶ You can then run make menuconfig to further customize U-Boot’s configuration!
▶ Make sure that the cross-compiler is available in PATH
export CROSS_COMPILE=arm-linux-gnueabi-
export ARCH=arm
▶ Compile U-Boot, specifying the cross-compiler prefix.
Example, if your cross-compiler executable is arm-linux-gcc:
make CROSS_COMPILE=arm-linux-
▶ The main result is a u-boot.bin file, which is the U-Boot image. Depending on your specific platform, or what storage device you're booting from (NAND or MMC), there may be other specialized images: u-boot.img
▶ This also generates the U-Boot SPL image to be flashed together with U-Boot. The exact file name can vary, depending on what the ROM code expects.
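Putting the configuration and build steps together, a typical session might look like the transcript below (the board name stm32mp15_basic_defconfig is taken from the examples above; substitute your own board's defconfig):

```
$ export CROSS_COMPILE=arm-linux-gnueabi-
$ export ARCH=arm
$ make stm32mp15_basic_defconfig   # writes .config for this board
$ make menuconfig                  # optional: further customization
$ make -j8                         # builds u-boot.bin (plus SPL images)
```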
▶ Connect the target to the host through a serial console.
▶ Power-up the board. On the serial console, you will see something like:
The U-Boot shell offers a set of commands. We will study the most important
ones, see the documentation for a complete reference or the help command.
https://www.denx.de/wiki/view/U-Bootdoc/BasicCommandSet
BDINFO:
Print board info structure: the bdinfo command (short: bdi) prints the information that U-Boot passes about the board, such as memory addresses and sizes, clock frequencies, MAC address, etc.
This type of information is generally passed to the Linux kernel.
ECHO:
Echo args to console:
ERASE:
Erase flash memory:
FATINFO:
Print information about filesystem:
FATLS:
List files in a directory (default /):
PRINTENV:
Print environment variables
SAVEENV:
Save environment variables to persistent storage
SETENV:
Set environment variables:
SLEEP:
Delay execution for some time:
TFTPBOOT:
Boot image via network using TFTP protocol:
VERSION:
Print monitor version:
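To illustrate the environment commands above, a short session at the U-Boot prompt might look like this (the bootdelay variable and its value are only illustrative):

```
=> printenv bootdelay
bootdelay=2
=> setenv bootdelay 5
=> saveenv
```

Note that setenv only changes the copy of the environment held in RAM; saveenv is what writes it back to persistent storage.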
Expected Output
Lab Description: Configure U-Boot for the RPi3
It's good practice to clean before you start configuring and building: make distclean
-Inside the u-boot directory, type tree configs | grep rpi to view all the possible config files to be used with the RPi3 board
-Run make menuconfig to edit the basic configuration
-Start building your image: make -j8
-Copy u-boot.bin to your boot partition, then edit config.txt and cmdline.txt
cmdline.txt:
console=serial0,115200 console=ttyS0 root=/dev/mmcblk0p2 rootfstype=ext4 rw
rootdelay=5 elevator=deadline fsck.repair=yes rootwait
config.txt:
enable_uart=1
kernel=u-boot.bin
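The config.txt edit from the lab can be scripted. A minimal sketch, assuming the boot partition is mounted at a path you supply in BOOT (in real use something like /media/$USER/boot; here it defaults to a scratch directory so the script can be tried safely anywhere):

```shell
# BOOT points at the mounted boot partition of the SD card in real use;
# the default below is a harmless stand-in directory for practice.
BOOT=${BOOT:-/tmp/rpi-boot-demo}
mkdir -p "$BOOT"

# Tell the RPi firmware to load u-boot.bin instead of the kernel,
# and enable the serial console.
cat >> "$BOOT/config.txt" <<'EOF'
enable_uart=1
kernel=u-boot.bin
EOF

# Show what the firmware will now load:
grep '^kernel=' "$BOOT/config.txt"
```

After copying u-boot.bin next to config.txt on the partition, the firmware chain (bootcode.bin, start.elf) will hand control to U-Boot instead of the kernel.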
Expected Output
Lab Description: Configure U-Boot to run on the Versatile Express (vexpress) A9 board
In this example we will run U-Boot on the vexpress board, but first we have to configure U-Boot correctly.
-Inside the u-boot directory, type tree configs | grep vexpress to view all the possible config files to be used with the vexpress board
export CROSS_COMPILE=arm-linux-gnueabi-
export ARCH=arm
-Type make vexpress_ca9x4_defconfig to write the needed configuration to the .config file that is parsed by make to generate the U-Boot image
-Open menuconfig to edit the default configuration and customize your U-Boot:
make menuconfig
(or, for a graphical interface: make xconfig)
-Start building your U-Boot image:
make -j8
-Start running your U-Boot:
./lab1.sh u-boot
With this command you are passing your bootloader to QEMU, which boots it as if it were booting on a real device.
By executing this command you can extract a lot of useful information from the u-boot ELF file generated by the build:
arm-linux-gnueabi-readelf -h ../../u-boot/u-boot
Embedded Linux
Linux Kernel
Linux is a monolithic kernel with a modular design (e.g., it can insert and remove kernel modules at runtime), supporting most features once only available in the closed-source kernels of non-free operating systems.
https://makelinux.github.io/kernel/map/
The Linux kernel is the layer between the hardware and the system call interface (POSIX)
POSIX: Portable Operating System Interface for Unix
https://pubs.opengroup.org/onlinepubs/009695399/fu
nctions/contents.html
Process ID
https://www.linux.com/news/discoverpossibilities-proc-directory/
https://www.thegeekdiary.com/understanding
-the-sysfs-file-system-in-linux/
The user space is responsible for the following:
1-The program image to be run resides on the root file system (RFS); it is loaded into RAM, but only the pages needed at a given time
2-Each process has its own address space, separate from the kernel address space
3-User space lets the user talk to the kernel via the system calls defined by POSIX
arch
The arch subdirectory contains all of the architecture specific kernel code. It has further
subdirectories, one per supported architecture, for example i386 and alpha.
include
The include subdirectory contains most of the include files needed to build the kernel
code.
init
This directory contains the initialization code for the kernel and it is a very good place to
start looking at how the kernel works.
mm
This directory contains all of the memory management code. The architecture specific
memory management code lives down in arch/*/mm/, for example
arch/i386/mm/fault.c.
drivers
All of the system's device drivers live in this directory. They are further sub-divided into
classes of device driver, for example block.
ipc
This directory contains the kernel's inter-process communication code.
modules
This is simply a directory used to hold built modules.
fs
All of the file system code. This is further sub-divided into directories, one per
supported file system, for example vfat and ext2.
kernel
The main kernel code. Again, the architecture specific kernel code is in arch/*/kernel.
net
The kernel's networking code.
lib
This directory contains the kernel's library code. The architecture specific library code
can be found in arch/*/lib/.
scripts
This directory contains the scripts (for example awk and tk scripts) that are used when
the kernel is configured.
Makefile
This file is the top-level Makefile for the whole source tree. It defines a lot of useful
variables and rules, such as the default gcc compilation flags.
Documentation
This directory contains a lot of useful (but often out of date) information about
configuring the kernel, running with a ramdisk, and similar things. The help entries
corresponding to different configuration options are not found here, though - they're
found in Kconfig files in each source directory.
Expected Output
Lab Description: Run the Linux kernel on the RPi3 board
1- We will use the available default configurations for the RPi3 board. To find the exact name of the default configuration for your board, type:
ls arch/arm/configs/ | grep bcm
2- Apply your configuration to the .config file that will be parsed by make to compile the Linux kernel for the RPi3 board:
make bcm2835_defconfig
3- Run make menuconfig to configure the Linux kernel with the needed configuration
4- Start building the Linux kernel with make -j8
5- Kernel build finished; the zImage is located at:
arch/arm/boot/zImage
1- bootloader U-boot
2- Device Tree binary
3- Root file system
4- zImage or uImage for Linux kernel
5-kernel modules
To initialize and boot a computer system, various software components interact. Firmware might
perform low-level
initialization of the system hardware before passing control to software such as an operating
system, bootloader, or
hypervisor. Bootloaders and hypervisors can, in turn, load and transfer control to operating
systems. Standard, consistent
interfaces and conventions facilitate the interactions between these software components
The device tree makes the Linux system data-driven: hardware is described as data instead of being hard-coded in the kernel.
Device tree example
We have three levels of device tree:
1-SoC level
2-Board level
3-Board-specific level
DTB : Devicetree blob. Compact binary representation of the devicetree.
DTC : Devicetree compiler. An open source tool used to create DTB files from DTS files.
DTS : Devicetree syntax. A textual representation of a devicetree consumed by the DTC.
Embedded Linux
Root File System
A pseudo file system maintains information about the currently running system; it does not exist on the hard disk, but only in RAM
To view all the available files and processes in /proc
ls /proc
Linux provides us a utility called ps, an abbreviation for "Process Status", for viewing information related to the processes on a system. The ps command lists the currently running processes and their PIDs, along with other information depending on the options used. It reads process information from the virtual files in the /proc file system. /proc contains virtual files; this is the reason it is referred to as a virtual file system.
ps
PID – the unique process ID
TTY – terminal type that the user is logged into > ? Means that no terminal is
attached to the process
TIME – amount of CPU in minutes and seconds that the process has been running
CMD – name of the command that launched the process.
/proc/1
A directory with information about process number 1. Each process has a directory
below /proc with the name being its process identification number.
/proc/cpuinfo
Information about the processor, such as its type, make, model, and performance.
/proc/devices
List of device drivers configured into the currently running kernel.
/proc/dma
Shows which DMA channels are being used at the moment.
/proc/filesystems
Filesystems configured into the kernel.
/proc/interrupts
Shows which interrupts are in use, and how many of each there have been.
/proc/meminfo
Information about memory usage, both physical and swap.
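The /proc entries listed above can be explored directly from a shell. A small sketch, assuming a Linux host with /proc mounted ($$ expands to the current shell's own PID):

```shell
# Each running process has a numbered directory under /proc;
# here we inspect our own shell's entry.
echo "shell pid: $$"

# The 'comm' file holds the command name that ps shows in its CMD column:
cat /proc/$$/comm

# 'status' aggregates the process name, PID, state, memory usage, etc.:
grep -E '^(Name|Pid|State):' /proc/$$/status

# System-wide virtual files sit directly under /proc:
head -n 2 /proc/meminfo
```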
/sys is a virtual file system provided by Linux. It provides a set of virtual files by exporting information about various kernel subsystems, hardware devices, and associated drivers from the kernel's device model to user space. In addition to providing information about devices and kernel subsystems, the exported virtual files are also used for their configuration.
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- defconfig
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- menuconfig
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- CONFIG_PREFIX=<install_path> install
Check make help of the BusyBox tool
Apply your needed configuration using menuconfig to make your customized minimal file system; for better practice, build BusyBox as a static binary (no shared libs) to get rid of library dependencies
Build BusyBox as a static binary with no shared libraries
Build BusyBox
Build Busybox
The generated files are: bin, sbin, usr, linuxrc
Here we have 91 commands, each a symbolic link to the busybox binary
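The "many commands, one binary" trick works because BusyBox inspects argv[0]: the name of the symlink it was invoked through selects which applet runs. A toy sketch of the same mechanism in plain shell (the toybox script and its hello/bye applets are hypothetical stand-ins, not BusyBox itself):

```shell
# One "multi-call" script that behaves differently depending on
# the name it was invoked under -- the same idea BusyBox uses.
DEMO=/tmp/toybox-demo
mkdir -p "$DEMO"

cat > "$DEMO/toybox" <<'EOF'
#!/bin/sh
# Dispatch on the invocation name ($0), like BusyBox dispatches applets.
case "$(basename "$0")" in
  hello) echo "hello applet" ;;
  bye)   echo "bye applet" ;;
  *)     echo "usage: link me as hello or bye" ;;
esac
EOF
chmod +x "$DEMO/toybox"

# Symlinks give the single binary many names, just like bin/ls -> busybox:
ln -sf toybox "$DEMO/hello"
ln -sf toybox "$DEMO/bye"

"$DEMO/hello"
"$DEMO/bye"
```

Running the two symlinks executes the same file but prints different output, which is exactly why a BusyBox rootfs can offer dozens of commands from a single statically linked executable.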
Embedded Linux
Buildroot and Yocto
project
Expected Output
Lab Description: Build a fully customized image for the RPi3 using Buildroot
1- Choose your config file
2- Run your config file
3- Apply your configuration
4- Buildroot output
Move the images to your SD card: dd if=sdcard.img of=/dev/sdb
Note: when you insert your SD card, type dmesg to learn the SD card's device name
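Because dd copies raw blocks to whatever of= names, a wrong device name can destroy a disk; checking dmesg or lsblk first is essential. The sketch below rehearses the exact same invocation against ordinary files, so nothing is at risk (paths are examples; in real use of= would be the SD card device):

```shell
# Rehearse the dd invocation against plain files first -- same flags,
# zero risk. In real use, of= is the SD card device found via dmesg/lsblk.
IMG=/tmp/sdcard.img
TARGET=/tmp/fake-sdcard      # stand-in for /dev/sdb

# Make a small dummy "image" (1 MiB of zeroes):
dd if=/dev/zero of="$IMG" bs=1024 count=1024 2>/dev/null

# Copy it block by block, as you would onto the card:
dd if="$IMG" of="$TARGET" bs=4096 2>/dev/null

# Verify the copy is byte-identical:
cmp "$IMG" "$TARGET" && echo "copy verified"
```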
The goal of Yocto Project is to take the complexity out of building and maintaining a
custom embedded Linux image. They plan to achieve this by creating a wrapper around
pre-existing build tools and build systems, as well as collaborating with OpenEmbedded.
By using the metadata syntax of OpenEmbedded, they are able to maintain familiarity for
developers while also providing an ease of entry for newcomers
Poky
Poky, which is pronounced Pock-ee, is a reference embedded distribution and a reference
test configuration. Poky provides the following:
-A base-level functional distro used to illustrate how to customize a distribution.
-A means by which to test the Yocto Project components (i.e. Poky is used to validate the
Yocto Project).
-A vehicle through which you can download the Yocto Project.
Poky is not a product level distro. Rather, it is a good starting point for customization.
BitBake Utility
BitBake can be used to run a full build for a given target with:
bitbake [target]
● But it can be more precise, with optional options:
-c <task>: execute the given task
-s: list all locally available packages and their versions
-f: force the given task to be run
-p, --parse-only: quit after parsing the BB recipes
-b <recipe>: execute the recipe (without resolving dependencies)
Recipes.
A recipe is a set of instructions to describe how to retrieve, patch, compile, install and
generate binary packages for a given application.
● It also defines what build or runtime dependencies are required.
● It also contains functions that can be run (fetch, configure, compile…) which are called
tasks.
Building recipes involves executing the following functions, which can be overridden
when needed for customizations.
do_fetch
do_unpack
do_patch
do_configure
do_compile
do_install
do_package
do_rootfs
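A recipe tying these tasks together might look like the minimal sketch below; the name, version, source URL, and checksum placeholder are purely illustrative, not a real package:

```
# hello_1.0.bb -- illustrative recipe skeleton
SUMMARY = "Example hello application"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://COPYING;md5=..."

# do_fetch, do_unpack, and do_patch run first, driven by SRC_URI:
SRC_URI = "http://example.com/hello-${PV}.tar.gz"

# do_compile and do_install can be overridden when the defaults
# do not fit the package's build system:
do_compile() {
    oe_runmake
}

do_install() {
    install -d ${D}${bindir}
    install -m 0755 hello ${D}${bindir}
}
```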
Layers allow you to isolate different types of customizations from each other. While you
might find it tempting to keep everything in one layer when working on a single project,
the more modular your metadata, the easier it is to cope with future changes.
A list of existing and maintained layers can be found at
http://layers.openembedded.org/layerindex/branch/master/layers/
To integrate any layer with yours, just download it and put it in any location on your disk; a good practice is to save it where all the other layers are stored. The only requirement is to let BitBake know about the new layer by editing [BuildDir]/conf/bblayers.conf
● conf/bblayers.conf
– Contains locations for all layers needed for your build process:
BBLAYERS ?= " \
  /home/user/yocto/poky/meta-yocto \
  "
● conf/local.conf
– Set your build options, choose the target machine, add or remove features from your build image. Configuration variables can be overridden there.
– BB_NUMBER_THREADS = "threads"
● How many tasks BitBake should perform in parallel.
– PARALLEL_MAKE = "-j threads"
● How many processes should be used when compiling.
– MACHINE ?= "raspberrypi"
● The machine the target is built for.
– DL_DIR ?= "<download-dir-path>"
● Sources download directory
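The variables above combine in conf/local.conf like this (the values shown are examples to adapt to your machine and target):

```
# conf/local.conf -- example values
BB_NUMBER_THREADS = "8"        # parallel BitBake tasks
PARALLEL_MAKE     = "-j 8"     # parallel make jobs inside a compile task
MACHINE          ?= "raspberrypi3"
DL_DIR           ?= "${TOPDIR}/downloads"
```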
● The Yocto Project provides tools and metadata for creating custom Linux images.
● These images are created from a repository of 'baked' recipes.
● A recipe is a set of instructions for building
packages:
– Where to obtain the upstream sources and which patches to apply
– Dependencies (on libraries or other recipes)
– Configuration/compilation options
– Define which files go into what output packages
An SDK (Software Development Kit) is a set of tools
allowing the development of applications for a given target (operating system, platform,
environment…).
● It generally provides a set of tools including:
– Compilers or cross-compilers.
– Linkers.
– Library headers.
– Debuggers.
– Custom utilities.
https://www.yoctoproject.org/docs/2.4.2/yocto-project-qs/yocto-project-qs.html
https://www.yoctoproject.org/docs/current/mega-manual/mega-manual.html
https://docs.yoctoproject.org/ref-manual/
https://docs.yoctoproject.org/bitbake/index.html
-Install needed dependencies
sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat libsdl1.2-dev xterm
sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat cpio python3 python3-pip python3-pexpect xz-utils debianutils iputils-ping python3-git python3-jinja2 libegl1-mesa libsdl1.2-dev pylint3 xterm
It's better to use the Yocto Project on a powerful machine with a stable internet connection!
Generating Rpi3 images from Yocto project
1-Clone the Yocto Project source code
-mkdir yocto
-git clone -b pyro git://git.yoctoproject.org/poky.git
2-Make a directory for the needed layers to be cloned
-mkdir sources
-cd sources
-git clone -b pyro git://git.openembedded.org/meta-openembedded
-git clone -b pyro https://github.com/agherzan/meta-raspberrypi.git
-cd ..
Generating Rpi3 images from Yocto project
3-The final command runs the Yocto Project oe-init-build-env environment setup script. Running this script defines the OpenEmbedded build environment settings needed to complete the build. The script also creates the build directory, rpi-estei-build in this case, located in the source directory. After the script runs, your current working directory is set to the build directory. Later, when the build completes, the build directory contains all the files created during the build.
source poky/oe-init-build-env rpi-estei-build
Generating Rpi3 images from Yocto project
4-View available targets from BitBake
-bitbake --help
5-Add your layers to the build system
-bitbake-layers add-layer ../sources/meta-raspberrypi
-bitbake-layers add-layer ../sources/meta-openembedded/meta-oe/
-bitbake-layers add-layer ../sources/meta-openembedded/meta-python/
-bitbake-layers add-layer ../sources/meta-openembedded/meta-networking
Generating Rpi3 images from Yocto project
6-Add your target machine, so that an image is generated for it, by adding to your build directory's conf/local.conf:
MACHINE ?= "raspberrypi3"
7-Start building your minimal RPi image
-bitbake rpi-basic-image
8-Write the image to the SD card
dd if=tmp/deploy/images/raspberrypi/rpi-hwup-image-raspberrypi.rpi-sdimg of=/dev/sdb
-mkdir yocto
-git clone -b pyro git://git.yoctoproject.org/poky.git
-mkdir sources
-cd sources
-git clone -b pyro git://git.openembedded.org/meta-openembedded
-git clone -b pyro https://github.com/agherzan/meta-raspberrypi.git
-cd ..
-source poky/oe-init-build-env rpi-estei-build
-bitbake --help
-bitbake-layers add-layer ../sources/meta-raspberrypi
-bitbake-layers add-layer ../sources/meta-openembedded/meta-oe/
-bitbake-layers add-layer ../sources/meta-openembedded/meta-python/
-bitbake-layers add-layer ../sources/meta-openembedded/meta-networking
-Add to your build directory's conf/local.conf:
EXTRA_IMAGE_FEATURES ?= "package-management debug-tweaks"
MACHINE ?= "raspberrypi3"
-bitbake rpi-basic-image
Install needed dependencies
Download Source of yocto project
Download rpi3 layer to add in yocto build system
Add meta-raspberrypi to BBLAYERS in conf/bblayers.conf, this shall be done to add your rpi3
to yocto build system
Configure yocto that your target is raspberrypi3
Start building your minimal RPi3 image
BitBake task scheduling
Muhammad Hussein: 01066705320