conference materials

TO THE PARTICIPANTS OF THE 3rd STUDENT
SCIENTIFIC AND TECHNICAL
CONFERENCE DEDICATED TO THE 92nd
ANNIVERSARY OF NATIONAL LEADER
HAYDAR ALIYEV
Dear students, participants of the Conference!
For the third time in the short history of Baku Higher Oil
School, our students will present the outcomes of the research
they have carried out during the current academic year at this
scientific and technical conference. Such events are of the utmost
importance for ensuring the exchange of scientific information
among students, as well as for presenting the results of their
research work to the public. Ultimately, participation in such
conferences not only facilitates a deeper understanding of the
programme courses, but also helps students expand their world
view and enrich their scientific and technical knowledge.
As a public higher education institution, Baku Higher Oil
School has as its primary goal the training of highly qualified
prospective engineers, applying up-to-date curricula and
advanced technologies and ensuring integration among teaching,
research and business. To this end, we support every initiative
aimed at motivating students towards scientific research, and we
organize various events where results can be reported, while
doing our best to create the right environment and opportunities.
Your only obligation as a student is to make full use of these
chances and share your knowledge with your peers.
I would like to congratulate all of you on this occasion
and wish you success!
I hope that this Conference will pave the way for achieving the
abovementioned goals and increase the number of students who
wish to embark upon research.
Sincerely,
Elmar Gasimov
Rector of Baku Higher Oil School
PLENARY REPORTS
REULEAUX TRIANGLE
Guldana Hidayatli
Gul.idayatlee@gmail.com
Supervisor: Khanum Jafarova
A Reuleaux triangle is the simplest and most famous
Reuleaux polygon. It is a curve of constant width, that is, a
planar convex oval with the property that the distance between
any two parallel tangents to the curve is constant.
The term derives from Franz Reuleaux, a 19th-century German
engineer who recognized that simple plane curves of constant
width can be constructed from regular polygons with an odd
number of sides; thus triangles and pentagons are frequently
constructed using a corresponding number of intersecting arcs.
However, the first examples of this triangle are found in the
works of Leonardo da Vinci, dated to the 15th century: he
created a world map composed of four Reuleaux triangles
gathered around four poles. Later, in the 18th century, the idea
of constructing such a triangle appears in the works of
Leonhard Euler.
This triangle has some amazing properties. It is of constant
width, meaning that it will hug two parallel lines as it rolls
between them. By rotating the Reuleaux triangle appropriately
about a moving centre, the figure can be made to trace out a
square, perfect except for slightly rounded corners; this idea
forms the basis of a drill that carves out square holes. There is
also an interesting fact: the ratio of its circumference to its
width is PI.
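The last fact is easy to verify: the boundary consists of three circular arcs, each of radius equal to the width and each subtending 60 degrees. A minimal sketch in Python (the function name is ours, chosen for illustration):

```python
import math

def reuleaux_perimeter(width):
    # Three circular arcs, each of radius `width`, each subtending
    # 60 degrees (pi/3 radians): total length = 3 * (pi/3) * width.
    return 3 * (math.pi / 3) * width

w = 2.0
# The ratio of circumference to width is exactly pi, for any width.
print(reuleaux_perimeter(w) / w)
```

This is a special case of Barbier's theorem: every curve of constant width w has perimeter pi * w.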
Applications of the Reuleaux triangle are based on these
properties. The main applications in engineering are: the Watts
drill (drilling square holes); the rotary-piston Wankel engine (a
trihedral rotary piston in the shape of a Reuleaux triangle
moving along a complex trajectory inside an approximately
cylindrical chamber); the claw mechanism in cinema projectors
(using the property that the rotating Reuleaux triangle stays
within a square); cam mechanisms in steam engines, sewing
machines and clockworks; rollers for transporting heavy loads;
manhole covers (the constant-width property); and the plectrum
(guitar pick). Furthermore, since the 13th century the symmetry
and harmony of the triangle have been used in architectural
structures based on lancet arches and in ornamental elements.
GREEN CHEMISTRY
Emilya Mammadova
Emilya.mammedova@gmail.com
Supervisor: Sabina Ahmadova
Assoc. Prof. Rena Abbasova
The recognition of Green Chemistry, also called “Benign
Chemistry”, “Sustainable Chemistry” or “Clean Chemistry”, as
one of the proactive methods of pollution prevention is a quite
recent phenomenon. It is certainly rational to inquire about the
reason for such a late adoption of this straightforward approach.
The answer is found in a conjunction of economic, regulatory,
scientific, and even social factors, which came together in the
1990s to give rise to Green Chemistry. Since then, Green
Chemistry has found implementation and commercialization on
a wide scale [1, 2].
The application of the Green Chemistry approach has been
investigated through the production of benzyl benzoate in the
presence of a homogeneous catalyst, the ionic liquid
N-methylpyrrolidone hydrosulfate. An ionic liquid is a salt in
which the ions are poorly coordinated, which results in these
solvents being liquid below 100°C, or even at room
temperature. Benzyl benzoate is produced from benzoic acid
and benzyl alcohol with benzene as a solvent. The temperature
of the reaction must be equal to the boiling point of benzene
(80°C), and the duration is supposed to be about 5-6 hours.
The end of the process is determined by the amount of water
evolved and by the acid number. The obtained ester is used in
the pharmaceutical industry and in organic synthesis. Moreover,
due to its high boiling point and low volatility, benzyl benzoate
is used as a solvent and as a fixative for volatile odoriferous
substances, such as artificial musk.
The main goal of the esterification of benzoic acid with
benzyl alcohol has been the investigation of the various factors
defining the technological parameters of the process (conversion,
rate, selectivity), which include the chemical nature of the raw
materials, the composition of the initial mixture, the catalyst,
the pressure, the duration, the impact of the chosen solvent and
the intensity of mixing. The molar ratio of acid to alcohol was
selected as 1:1.2, while the amount of benzene used was 100 ml.
The yield of the reaction is quite high due to the correct choice
of the amount of catalyst. At the end of the reaction the mixture
is cooled to room temperature and filtered to remove catalyst
residues. Finally, atmospheric and vacuum distillations were
conducted in order to obtain the pure ester, and a final analysis
was performed.
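As a back-of-the-envelope illustration of the 1:1.2 charge described above, the theoretical ester yield can be computed from standard molar masses (the 1 mol batch size below is our assumption for illustration, not a figure from the abstract):

```python
# Standard molar masses, g/mol (values from reference tables).
M_BENZOIC_ACID    = 122.12
M_BENZYL_ALCOHOL  = 108.14
M_BENZYL_BENZOATE = 212.24

acid_g    = 1.0 * M_BENZOIC_ACID          # assumed 1.00 mol benzoic acid
alcohol_g = 1.2 * M_BENZYL_ALCOHOL        # 1.20 mol alcohol (1:1.2 ratio)

mol_acid    = acid_g / M_BENZOIC_ACID
mol_alcohol = alcohol_g / M_BENZYL_ALCOHOL
limiting = min(mol_acid, mol_alcohol)     # acid is limiting at 1:1.2

# One mole of ester per mole of limiting reagent.
print(f"theoretical ester yield: {limiting * M_BENZYL_BENZOATE:.1f} g")
```

The excess alcohol shifts the esterification equilibrium toward the ester, which is one reason a 1:1.2 rather than 1:1 ratio is chosen.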
Thus the current research is based on the application of
Green Chemistry to esterification. A range of syntheses of
various acids and alcohols with different ionic liquids as
catalysts has been conducted to produce esters. It has been
found that most of these reactions give high yields and are
economically preferable, non-toxic and environmentally
friendly. In conclusion, it can be underlined that the adoption of
Green Chemistry is giving increasingly favourable results both
economically and environmentally, and esterification is a proof
of this statement.
References:
1. Richard A. Bourne and Martyn Poliakoff. Green chemistry:
   what is the way forward? Mendeleev Commun., 2011, 21,
   235-238.
2. P. Tundo, P. Anastas, D. StC. Black, J. Breen, T. Collins,
   S. Memoli, J. Miyamoto, M. Poliakoff, and W. Tumas.
   Synthetic pathways and processes in green chemistry.
   Introductory overview. Pure Appl. Chem., 2000, 72(7),
   1207-1228.
SAND CONTROL
Rasul Samadzade
rasulsamadzade@gmail.com
Supervisor: Arif Mammadzade
Introduction
Sand production is the production of sand along with oil and
gas during extraction. The produced sand has no economic
value, but it causes a lot of damage to the bottom hole assembly
and significantly affects the hydrocarbon production rate. Sand
can either erode production equipment or block the passage for
hydrocarbon migration. Sand production mainly occurs at
shallow depths or in young rock layers. However, there are
exceptions: sand production can also occur in deep rock layers
that are weakly cemented. Sand production is initiated when
stress exceeds the rock strength. Here rock strength means the
bonding between rock grains, and the excess stress can be
caused by tectonic forces, overburden pressure, etc. Sand
production is significant when a permeable formation contains a
large amount of water, because grains can be moved by the
interstitial forces between water and sand.
Sand production consequences
Sand production can lead to significant problems. It affects
both the tools and the operation process. Generally, sand
production is divided into three types: 1) transient, 2) continuous
and 3) catastrophic. The first refers to sand production that
declines over time, so it does not pose any serious threat. The
second occurs in most wells; the amount of produced sand can
be tolerated, meaning the sand can be separated from the fluid
by a separator and does not pose an erosion danger. The third is
the worst, as an excessive amount of sand is produced, which
can result in the following problems:
- Damage to the well, casing and other sophisticated tools
- Accumulation of sand in pipes, leading to stuck pipe
- Low production rate
Sand control methods
Predicting sand infiltration
The first method of fighting sand infiltration is determining it
beforehand. Being able to predict whether a formation will
produce sand or not is a big challenge, and doing so is beneficial
in all aspects: economics, equipment, productivity. However,
prediction is not straightforward, despite the fact that many
methods have been proposed. One of the reasons is that the
field may not have been drilled beforehand, so no log data
exist. In that case, drilling exploration wells will help to predict
the possibility of sand production. Difficulties can arise even if
the log data are known, because formation strength and
properties can vary from layer to layer. This discrepancy can
cause catastrophic results and can lead to improper use of sand
control techniques; for example, gravel packing might be used
even where the sand formation is hard. Some of the methods
used to predict the possibility of sand production beforehand
are:
1) Sieve analysis
2) Sonic log
3) Porosity log
4) Finite element analysis
Controlling sand production
The simplest and most straightforward way to control sand
production is to restrict the flow rate. This method is rarely
used because it does not give accurate and predictable results.
Its aim is to minimize the flow rate in order to decrease the
force exerted on the grains by the flowing fluid. With a
decreased production rate a sand arch will form inside the
formation, and it will prevent further sand production. However,
as production continues, the petrophysical properties of the
production zone, the temperature and the pressure change, and
for each change the flow rate should be adjusted, which takes a
lot of time and effort. So many other methods were invented in
order to improve sand control efficiency. These methods can be
divided into two categories: mechanical and chemical.
Mechanical methods
1) Slotted liner, 2) Screen, 3) Gravel pack, 4) Frac and pack
Chemical methods
1) Plastic consolidation, 2) Resin-Coated gravel
FUTURE ENERGY SOURCES
Ahmed Galandarli
galandaroff@inbox.ru
Supervisor: Prof. Fuad Veliyev
Introduction
It is a really exciting time to be alive, because we have a
front-row seat to the only known transformation of a world
powered by dirty fossil fuels into a planet that gets its energy
from renewable, clean sources. It is happening just once, right
now. Research was done on possible alternative energy
sources, of which the top 10 are: nuclear fusion, flying wind
farms, bio-fuels, solar windows, nuclear fission, geothermal
heat from underground lava beds, hydrogen fuel cells, tidal
power, human power and space-based solar power. Each of
them was considered in terms of the problems it faces, and the
top 3 possible alternative energy sources of the future were
chosen: nuclear fusion, space-based solar power and, the most
probable one, hydrogen energy.
1. Nuclear Fusion
The reason why nuclear fusion is on this list is that it
produces energy in a much more efficient and safer way than
we produce energy today, and it definitely has the potential to
provide a nearly inexhaustible supply of energy.
Fusion produces energy by fusing together two hydrogen
isotopes, deuterium and tritium, that are virtually inexhaustible.
Deuterium comes from ocean water and tritium, though limited
today, will be produced from lithium as a byproduct of the
reaction. The only byproducts of the fusion process are helium
and a fast neutron, which carries the heat to make steam,
meaning there is none of the long-lived radioactive waste
produced by conventional nuclear fission reactors. The
underlying principle is that light nuclei, when combined, form a
heavier nucleus whose mass is slightly less than the total mass
of the initial reacting nuclei; the missing mass is released as
energy, which is usually enormous, in accordance with
E = Δm·c². The law of conservation of mass-energy is therefore
satisfied.
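A minimal sketch of this mass-defect bookkeeping for the deuterium-tritium reaction, using standard tabulated atomic masses (the numerical values below come from reference tables, not from the abstract):

```python
# D + T -> He-4 + n, masses in atomic mass units (u);
# 1 u of rest mass is equivalent to about 931.494 MeV.
m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

delta_m = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)  # mass lost
energy_mev = delta_m * 931.494  # E = delta_m * c^2, in MeV

print(f"mass defect: {delta_m:.6f} u, energy released: {energy_mev:.1f} MeV")
```

The result, about 17.6 MeV per reaction, is roughly a million times the energy of a typical chemical reaction, which is why fusion fuel goes so far.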
2. Space-Based Solar Power
Every hour, more energy from the Sun reaches us than we
earthlings use in an entire year. To capture a lot more of it, one
idea is to build giant solar farms in space that will collect some
of the higher-intensity solar radiation. Giant mirrors would
reflect huge amounts of solar rays onto smaller solar collectors.
This energy would then be wirelessly beamed to Earth as either
a microwave or a laser beam. One of the reasons why this
amazing idea is still just an idea is that, big surprise, it is very
expensive.
Figure 1. Space-Based Solar Power
Figure 2. Hydrogen Energy
3. Hydrogen Energy
To my mind, the most probable alternative energy source
is hydrogen. The element hydrogen, by far the most abundant
in the universe, is very high in energy. An engine that burns
pure hydrogen produces almost no pollution. This is why
NASA powered its space shuttles and parts of the International
Space Station with it for years. The only reason we are not
powering the entire world with hydrogen is that on our planet it
exists only in combination with other elements, such as oxygen.
Hydrogen can be made from molecules called hydrocarbons
by applying heat, a process known as "reforming"; this is how
hydrogen is made from natural gas. An electrical current can
also be used to separate water into its components, oxygen and
hydrogen, in a process called electrolysis. Some algae and
bacteria, using sunlight as their energy source, give off
hydrogen under certain conditions.
Once hydrogen is separated, it can be pumped into mobile
fuel cells in vehicles, which convert it directly into electricity
for propulsion, or it can be burned to generate energy.
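As a rough illustration of the electrolysis route, the minimum electrical energy needed per kilogram of hydrogen can be estimated from the standard higher heating value of hydrogen, about 141.8 MJ/kg (the 70% electrolyser efficiency below is our assumption for illustration):

```python
# Ideal-case estimate from the higher heating value of hydrogen.
HHV_H2_MJ_PER_KG = 141.8   # standard reference figure
MJ_PER_KWH = 3.6

def electrolysis_energy_kwh(kg_hydrogen, efficiency=1.0):
    # Real electrolysers run well below 100% efficiency;
    # pass e.g. efficiency=0.7 for a more realistic figure.
    return kg_hydrogen * HHV_H2_MJ_PER_KG / MJ_PER_KWH / efficiency

print(f"{electrolysis_energy_kwh(1.0):.1f} kWh per kg (ideal)")
print(f"{electrolysis_energy_kwh(1.0, efficiency=0.7):.1f} kWh per kg at 70% efficiency")
```

This is why hydrogen is best seen as an energy carrier rather than an energy source: the electricity to split the water has to come from somewhere.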
COLLECTING OF PETROLEUM BY NATURAL HAIR
Ali Alikishiyev
ali.alikishiyev@mail.ru
Supervisor: Assoc. Prof. Ilhama Zarbaliyeva
These days there are many pressing issues in the world,
and one of the most serious is water contamination. Water is
contaminated in different ways: for example, by household
waste, chemical substances, etc. However, water contamination
caused by oil spills is the worst. Since spill waters contain much
more polycyclic aromatic hydrocarbons (PAHs) than before the
spill, such contamination is also very dangerous for marine life.
PAHs can harm marine species directly, and the microbes that
consume the oil can reduce marine oxygen levels. Methane can
potentially suffocate marine life and create "dead zones" where
oxygen is depleted. The Deepwater Horizon oil spill, which
occurred on 20 April 2010 in the Gulf of Mexico on the
BP-operated Macondo Prospect, can be given as an example.
Figure 1. Structure of human hair
The spill area hosted 8,332 species, including more than 1,270
fish, 218 birds, 1,456 mollusks, 1,503 crustaceans, 4 sea turtles
and 29 marine mammals. Between May and June 2010, the spill
waters contained 40 times more PAHs than before the spill. The
oil contained approximately 40% methane by weight, compared
to about 5% found in typical oil deposits. Oil spills can be
removed from the surface of the water by means of different
reagents and chemical substances. However, it has been found
that human hair also has properties that allow it to remove oil
from the surface of the water.
Hair contains a protein called keratin; this same keratin
also composes the skin and nails. Hair is also like skin in that it
contains three main layers. The layer in the center of the hair
shaft is the medulla, made up of round cells that can sometimes
appear empty. The middle layer of a strand of hair is called the
cortex. The cortex contains many cells joined together; it
supports the hair's strength and makes up the biggest portion of
the hair. The outside layer is the cuticle. Much like a nail
cuticle, the hair cuticle acts as a protectant for the two inner
layers of the hair, and it is not visible. Every hair grows from a
hair follicle, a pocket-like area of the skin that contains what
looks like a bulb. At the center of that bulb is the papilla, whose
blood vessels supply nutrients and oxygen to the hair.
Sometimes people put oil in their hair in an effort to
moisturize and protect it. There is also the common
misconception that hair absorbs oil. While hair does not
technically absorb oil, the oil does coat the hair. The oil is
unable to absorb completely into the hair; instead, it coats the
hair by latching onto cracks and holes in the hair shaft. The hair
has a scaly surface which allows the oil to slide down the strand
and slip into those cracks. It can be thought of like wearing a
coat: a coat will protect you, but you do not actually absorb the
coat. Hair and oil work in a similar manner: the hair is coated
by the oil, but the oil is not absorbed.
Figure 2. Laboratory unit
In order to determine the ability of human hair to remove
oil from the surface of water, we carried out an experiment in
the BHOS laboratory. We took 800 ml of water and added 20 g
of oil. To remove this oil we took 5 g of human hair. At the end
of the experiment we found that by means of 5 g of human hair
we had removed approximately 19.5 g of oil from the surface of
the water (approximately 0.5 g of oil remained on the surface).
The mass of oil that can be removed by means of hair is shown
in the table below as a function of the mass of hair. We
conducted this experiment several times to obtain the
dependence of removed oil mass on hair mass. It was shown
that hair can hold more than 4 times its own weight in oil.
All of us have our hair cut and throw it away as litter.
However, we can make use of this property of human hair in
the oil industry by making hair mats and oil mops.
Oil mass and hair mass dependence table:

Mass of hair (g)    Mass of oil (g)    Mass of removed oil (g)
5                   20                 19.5
7                   30                 29.0
7.5                 40                 39.0
8                   50                 49.0
10                  60                 58.5
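The table data can be reduced to a removal ratio per gram of hair; a quick check in plain Python, using the measured values above, confirms the "more than 4-fold" claim on average:

```python
# Measured data from the experiment described above:
# mass of hair used (g) and mass of oil removed (g).
hair_mass_g   = [5, 7, 7.5, 8, 10]
oil_removed_g = [19.5, 29.0, 39.0, 49.0, 58.5]

# Ratio of removed oil mass to hair mass for each run.
ratios = [oil / hair for oil, hair in zip(oil_removed_g, hair_mass_g)]
for hair, ratio in zip(hair_mass_g, ratios):
    print(f"{hair:>4} g of hair removed {ratio:.2f} times its own mass in oil")

print(f"average ratio: {sum(ratios) / len(ratios):.2f}")
```

Note that the ratio is not constant across runs; it ranges from about 3.9 to about 6.1, so the averaged figure should be read as an order-of-magnitude capacity.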
Also, hair mats can be carried on oil tankers, so that in case of
a disaster the mats can be thrown in and start working
immediately. After the oil is collected, industrial techniques
make it possible to absorb the oil and convert an environmental
threat into non-toxic compost. Hair salons can help by
collecting hair clippings from their shops, and by means of hair
mats thousands of gallons of oil can be removed from water
and recycled.
WIRELESS POWER TRANSMISSION
Elshad Mirzayev
mirzayevelshad@gmail.com
Supervisor: Prof. Siyavush Azakov
Wireless power transmission (WPT) is currently a popular
research topic. The reason is that WPT refers to energy
transmission from one point to another without interconnecting
wires. Wireless transmission is useful in cases where
interconnecting wires are inconvenient, hazardous, or
impossible. The scheme considered here is based on strong
coupling between electromagnetic resonant objects, which
transfer energy wirelessly between them; this differs from other
methods like simple induction, microwaves, or air ionization.
The system consists of transmitters and receivers that contain
magnetic loop antennas critically tuned to the same frequency.
Because the system operates in the electromagnetic near field,
the receiving devices must be no more than about a quarter
wavelength from the transmitter. Unlike far-field wireless power
transmission systems based on travelling electromagnetic waves,
this approach employs near-field inductive coupling through
magnetic fields similar to those found in transformers, except
that the primary coil and the secondary winding are physically
separated and tuned to resonate in order to increase their
magnetic coupling. The tuned magnetic fields generated by the
primary coil can be arranged to interact vigorously with
matched secondary windings in distant equipment, but far more
weakly with any surrounding objects or materials, such as
biological tissue.
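The quarter-wavelength range limit mentioned above is easy to estimate from the operating frequency; a small sketch (the 10 MHz example frequency is our assumption, not a figure from the abstract):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wavelength(frequency_hz):
    # Near-field coupling requires the receiver to be within roughly
    # lambda/4 of the transmitter, where lambda = c / f.
    return C / frequency_hz / 4

# A hypothetical 10 MHz resonant link: range limit of about 7.5 m.
print(f"{quarter_wavelength(10e6):.2f} m")
```

Lower operating frequencies give longer near-field range but require physically larger resonant coils, which is one of the basic trade-offs in WPT design.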
SOLVING NONLINEAR EQUATIONS BY MEANS
OF MATLAB GUI TOOLS
Qulu Quliyev
quluquliyev@gmail.com
Supervisor: Assoc.Professor Naila Allahverdiyeva
Introduction
Numerical analysis in mathematics means solving
mathematical problems by arithmetic operations: addition,
subtraction, multiplication, division and comparison, since these
operations are exactly those that computers can do. One of the
most important problems in science and engineering is to find
the solution of nonlinear equations. We know simple formulae
for solving linear and quadratic equations, and there are
somewhat more complicated formulae for cubic and quartic
equations, but no formulae like these are available for
polynomials of degree greater than four. Besides polynomial
equations, there are many problems in scientific and engineering
applications that involve functions of a transcendental nature.
Numerical methods are often used to obtain approximate
solutions of such problems, because it is not possible to obtain
exact solutions by the usual algebraic processes.
Bisection Method
There are several methods for solving nonlinear equations,
and bisection is one of them. The bisection method in
mathematics is a root-finding method that repeatedly bisects an
interval and then selects the subinterval in which a root must lie
for further processing. In my program the bisection method is
selected in the radio-button group on the left side of the
window. Once we have entered the equation to be solved in the
equation input and chosen the bisection method, small windows
appear for setting the initial parameters: the initial and final
values of the interval, and the error level (an indicator of
accuracy). Once they have been entered, the program shows the
answer at the bottom of the window, next to the label "root".
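The procedure behind the GUI can be sketched as a short routine; this is a generic illustration of the bisection algorithm, not the author's MATLAB code:

```python
def bisect(f, a, b, tol=1e-8):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        mid = (a + b) / 2
        # Keep the half-interval on which the sign change occurs.
        if f(a) * f(mid) <= 0:
            b = mid
        else:
            a = mid
    return (a + b) / 2

# Example: the real root of x^3 - x - 2 on [1, 2] (about 1.52138).
print(bisect(lambda x: x**3 - x - 2, 1.0, 2.0))
```

Bisection halves the interval at every step, so it is slow but guaranteed to converge whenever the starting interval brackets a root of a continuous function.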
Newton-Raphson
The idea of the method is as follows: one starts with an initial
guess which is reasonably close to the true root; the function is
then approximated by its tangent line, and one computes the
x-intercept of this tangent line. This x-intercept will typically be
a better approximation to the function's root than the original
guess, and the method can be iterated. The general formula can
be given as follows:

x(n+1) = x(n) - f(x(n)) / f'(x(n))
In the program, which I created using GUI elements, in
order to find the root of the equation we need to enter two
parameters after inserting the equation as a function of x: the
initial value (represented by xn here) and the error level
(represented by xn+1 - xn here). After inserting these parameters
correctly, the program will calculate the root and show the
final answer.
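The same two parameters drive a generic Newton-Raphson routine; the sketch below is an illustration of the algorithm, not the author's program:

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        # Stop when |x_{n+1} - x_n| falls below the error level.
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# Example: sqrt(2) as the positive root of x^2 - 2, starting from x0 = 1.
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))
```

When the initial guess is close enough to a simple root, convergence is quadratic: the number of correct digits roughly doubles at each step.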
Secant Method
If we take Newton's formula and, instead of the derivative of
the function, take two points on the graph and write
f'(x) ≈ (f(x) - f(x0)) / (x - x0), we get the secant method.
The method was developed independently of Newton's method,
and predates the latter by over 3,000 years. If instead we
express the second point through a small relative perturbation
of x, we get a variant called the modified secant method.
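Substituting that finite difference into Newton's formula gives the following routine, again a generic illustration rather than the author's program:

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: Newton's formula with f'(x) replaced by the
    finite difference (f(x1) - f(x0)) / (x1 - x0)."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2  # keep the two most recent points
    raise RuntimeError("did not converge")

# Example: sqrt(2) from the starting pair (1, 2).
print(secant(lambda x: x**2 - 2, 1.0, 2.0))
```

The secant method needs no derivative, at the cost of slightly slower (superlinear rather than quadratic) convergence than Newton-Raphson.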
“PETROLEUM AND CHEMICAL
ENGINEERING” SECTION
TREATMENT OF WATER: CHEMICAL, PHYSICAL
AND BIOLOGICAL
Nargiz Khalilzada
Simuzar Babazada
nargiz.xalilzada@yahoo.com
simuzerbabazade@gmail.com
Supervisor: Assoc. Prof. Ilhama Zarbaliyeva
Conventional wastewater treatment consists of a combination of
physical, chemical, and biological processes and operations to
remove solids and organic matter; sometimes there is also a
need to remove nutrients from wastewater. Water treatment is
the industrial-scale process that makes water more acceptable
for an end use, which may be drinking, industry or medicine.
Water treatment also includes small-scale water sterilization,
often practised by people who live in wilderness areas. Water
treatment is a method that helps to remove existing water
contaminants and reduce their concentration, so that the water
may safely be returned to the environment for reuse. It should
be taken into account that some of these methods are based on
a natural process, the water cycle. There are also many methods
of water treatment in industry. The general terms used to
describe different degrees of treatment are: preliminary,
primary, secondary and tertiary, the last of which is also called
advanced wastewater treatment.
Chemical treatment consists of using one or more chemical
reactions to improve the water quality. Probably the most
commonly used chemical process is chlorination. Chlorine, a
strong oxidizing agent, is used to kill bacteria and to slow
down the rate of decomposition of the wastewater. Bacterial kill
is achieved when the chlorine disrupts vital biological
processes. Ozone, another strong oxidizing agent, has also been
used as an oxidizing disinfectant.
Neutralization is also commonly used as a chemical process
in many industrial wastewater treatment operations. It consists
of the addition of acid or base to adjust pH levels back to
neutrality. Since lime is a base, it is sometimes used in the
neutralization of acid wastes.
Coagulation consists of the addition of a chemical that, through
a chemical reaction, forms an insoluble end product. This
product serves to remove substances from the wastewater.
Biological wastewater treatment is the conversion of
biodegradable waste products from municipal or industrial
sources by biological means. The practice of using a controlled
biological population to degrade waste has been used for
centuries. However, early wastewater treatment processes were
quite simple. Those methods have become more and more
complex over time. Nowadays, treatment of biodegradable
wastes prior to discharge has become an essential part of
municipal and industrial operations.
Microbial biomass has been used since the early 1900s to
degrade contaminants, nutrients, and organics in wastewater.
Until recently, the biological treatment of drinking water was
limited. Nevertheless, recent developments suggest that
biological drinking water treatment may become more feasible
and more likely to be accepted by the public. These
developments include:
(1) The rising costs and increasing complexities of handling
water-treatment residuals (e.g., membrane concentrates);
(2) The emergence of new contaminants that are particularly
amenable to biological degradation (e.g., perchlorate);
(3) The push for green technologies (i.e., processes that
efficiently destroy contaminants instead of concentrating them);
(4) The emergence of membrane-based treatment systems,
which are highly susceptible to biological fouling.
Physical methods of wastewater treatment accomplish removal
of substances by use of naturally occurring forces, such as
gravity, electrical attraction, and van der Waals forces. In
general, the mechanisms involved in physical treatment do not
result in changes in chemical structure of the target substances.
In some cases, physical state is changed, as in vaporization, and
often dispersed substances are caused to agglomerate, as
happens during filtration.
Physical methods of wastewater treatment include
sedimentation, flotation and adsorption. There are also barriers
such as bar racks, screens, deep bed filters, and membranes.
CLEANER TECHNOLOGIES, CONTROL, TREATMENT
AND REMEDIATION TECHNIQUES OF WATER IN THE
OIL AND GAS INDUSTRY
Aygul Karimova
Kerimova.aygul@gmail.com
Supervisor: Sevda Zargarova
Introduction
The global shortage of fresh water sources requires the use of
unconventional water sources through the recycling and reuse
of waste water in industrial processes. In order to achieve this
aim, new water treatment technologies are being invented,
depending on the branches of different industries and the
degree of water contamination in those industrial processes.
One of the common sources of industrial wastewater is the
oil and gas industry. The objective of oil and gas produced
water treatment is to meet discharge regulations, to reuse
treated produced water in oil and gas operations such as
cooling, heating and irrigation, to develop agricultural water
uses, and to provide water for human consumption and other
beneficial uses. Current water treatment technologies have been
successfully applied in the relevant procedures.
The oil and gas industry produces approximately 14 billion bbl
of water per year, which is considered waste water. However,
modern technology gives an opportunity to treat that water as a
potential profit stream. Produced water handling methodology
depends on the composition of the produced water, its location
and quantity, and the availability of resources.
Some of the options available to the oil and gas operator for
managing produced water include the following [1]:
1. Avoid production of water onto the surface.
2. Inject produced water.
3. Discharge produced water.
4. Reuse in oil and gas operations.
5. Consume in beneficial use.
The general objectives for operators when they plan produced
water treatment are:
1. De-oiling – Removal of free and dispersed oil and grease
present in produced water. De-oiling equipment (typically
hydrocyclones) generates a spinning motion of the fluid that
creates a centrifugal force, pushing the heavier water outward
while the lighter oil collects in the middle and rises; the water
continues down and exits the tapered end.
2. Soluble organics removal – Removal of dissolved organics.
This method includes removal of soluble hydrocarbons together
with on-site liquid condensate coming from compression units.
3. Disinfection – Removal of bacteria, microorganisms, algae,
etc. This step is necessary in order to reduce biological scaling
and water contamination.
4. Suspended solids removal – Removal of suspended particles,
sand, turbidity, etc.
5. Dissolved gas removal – Removal of light hydrocarbon gases,
carbon dioxide, hydrogen sulphide, etc.
6. Desalination or demineralization – Removal of dissolved
salts, sulphates, nitrates, contaminants, scaling agents, etc.
Waste water contains approximately 2,000-150,000 ppm of salt,
which has been the main issue in water treatment.
7. Softening – Removal of excess water hardness.
8. Sodium Adsorption Ratio (SAR) adjustment – Addition of
calcium or magnesium ions into the produced water to adjust
sodicity levels prior to irrigation.
9. Miscellaneous – Removal of naturally occurring radioactive
materials (NORM) [1].
Selection of a produced water treatment scheme is often a
challenging problem that is steered by the overall treatment
objective. The general plan is to select the cheapest method,
preferably mobile treatment units, that assures the achievement
of the targeted output criteria. In this way the technology can be
positioned in the field for optimum convenience and can be
fine-tuned to meet specific end uses for the water.
Sophisticated pipeline networks and treatment plants today
furnish us with this elixir of life and industry. As intense
pressure is placed on the planet's limited water supplies,
businesses are again turning to technological innovation. New
and emerging inventions should see human civilisation through
the 21st century and, with any luck, the next 10,000 years.
The recycling of treated and disinfected wastewater back into
the potable water distribution system is being adopted in many
cities. The combination of advanced biological treatment of
wastewater with reverse osmosis membrane treatment and
disinfection provides a multiple-barrier approach in the risk
management strategies for the delivery of safe reusable water [2].
The main issue in oil and gas produced water treatment in
Azerbaijan is the removal of organic chemicals, especially
paraffins, which occur at a higher percentage in Azeri oil.
Approximately 11,000 tonnes of oily wastewater were produced
at the Baku Oil Refinery. Here the basic treatment procedures
use "TOPAS"-type bioremediation technology, a "VEW system
specification"-type separator, as well as the "OSS-500" system.
Water Treatment for the Upstream Oil and Gas Industry
Water is a key part of the upstream oil and gas industry, from
oilfield exploration to oil and gas production, whether for
enhanced oil recovery (EOR) or chemical EOR, for reuse, or as
an external source of water for injection. Depending on the
water resource available (seawater, brackish water, freshwater),
water treatment solutions differ in order to produce injection
water.
Produced water / wastewater: generated by exploration and
production activity, produced water needs to be treated before
reuse or discharge to the environment.
Reuse / produced water reinjection: recycling the produced
water limits the consumption of external water sources [3].
References
1. J. Daniel Arthur, Bruce G. Langhus, Chirag Patel.
Technical Summary of Oil and Gas Produced Water
Treatment Technologies, 2005.
2. Membrane Filtration, Tech Brief, National Drinking
Water Clearinghouse Fact Sheet, 1999.
3. Aquatech International Corporation,
http://www.aquatech.com/industries/oil-and-gas
HALLOYSITE NANOTUBES:
INVESTIGATION OF STRUCTURE
AND FIELD OF APPLICATION
Natavan Asgerli
nata.asgerli@mail.ru
Supervisor: Acad. Vagif Abbasov
Prof. Tarana Mammadova
Assoc. Prof. Sevda Fatullayeva
Halloysite nanotubes have great potential to be applied in
various industrial fields: as additives to polymer nanocomposites
to improve their mechanical properties, as nanocontainers for
chemically active substances in medicine and the pharmaceutical
industry, as nanocontainers for corrosion inhibitors, as an
admixture to paints to improve their qualities, as catalysts in
the petrochemical industry, etc. [1-5]. Unlike other types of
nanotubes, halloysite is readily available in nature in thousands
of tons, and its use does not require complicated manufacturing
or synthesis. With a high specific surface area, an elongated
tubular shape with a lumen that can serve as a nanocontainer,
and no harm to the environment, halloysite can be used widely
in industry. The purpose of the presented work is to demonstrate
the results of experiments carried out in order to discover new
fields of application of halloysite nanotubes.
Halloysite is a dual-layer aluminosilicate with a substantially
hollow tubular structure in the submicron range, and is
chemically similar to kaolinite [1]. It is extracted from natural
deposits in the USA, New Zealand, Korea, China, Turkey, etc.
These minerals are formed from kaolinite over millions of years
as a result of weathering and hydrothermal processes.
One of the fields of technology, which uses a unique tubular
structure of halloysite, is industrial coatings and paints [3].
Empty halloysite lumens can be loaded with corrosion inhibitors
to achieve slow diffusion of the inhibitors into the outside
environment.
Figure 1. Halloysite loading procedure.
(a) The suspension of halloysite in a solution containing the
loading agent; (b) vacuum applied to remove air bubbles from
the pores; (c) the vacuum causes entry of the solution into the
empty pores; (d) the loaded tube separated from the solution by
centrifugation; (e) drying to remove the solvent.
Preparation of polymer nanocomposites is a rapidly
developing area in the plastics technology, which offers a wide
variety of useful materials with improved properties such as
increased tensile strength, increased resistance to flammability,
low weight, low cost, etc. Having a tubular structure, halloysite
nanoparticles may find wide application in the preparation of
polymer composites because, unlike kaolinite, they do not
require a lengthy exfoliation process, since they lack thin
layered sheets. Halloysite is more disordered than lamellar
minerals, making its dispersion in the polymer matrix easier.
Various halloysite–polymer composites have been obtained
(with rubber, polyamide, epoxy resin, polypropylene, etc.).
The mechanical properties were improved significantly in many
cases. Experiments have shown that the inclusion of 5%
halloysite in polypropylene increased the flexural modulus of the
composite by 35%, while the tensile strength was improved by
6.1%. Addition of halloysite to polar polymers such as
polyamide also showed a significant improvement in the
mechanical properties of the composite compared to the pure
polymer.
Another promising area of application of halloysite nanotubes
is industrial paints. Currently, many paints contain additives
based on nanoparticles, such as titanium dioxide (rutile), silica,
clay, mica, latex, etc. Some of these nanoparticles are added to
improve the properties of the paint, while others are added only
to reduce the cost of the product. Halloysite nanotubes offer
wide application in the paint industry, because the particles are
easily mixed with various coatings and significantly improve
the mechanical properties of the paint.
Figure 2. Ordinary paint vs paint with 10% halloysite.
(a) Deformation curves (pressure, MPa, vs relative deformation,
%) for ordinary paint (blue) and paint with 10% halloysite
(pink); (b) drawing on a surface dyed with ordinary paint;
(c) with paint containing 10% halloysite.
Aluminosilicates have been extensively used as catalysts in
various hydrocarbon oil refining processes, even starting from
the early stage of development of oil industry. Halloysite
belongs to the kaolinite clay mineral family and has a high
Al/Si ratio compared with other aluminosilicates. The high
content of aluminum oxide, together with acidic sites on the
nanoparticles, leads to cracking of hydrocarbons [4,5]. These
acidic sites catalyze heterolytic cleavage of chemical bonds,
which leads to the formation of unstable carbocations that
undergo rearrangements and chain cleavage at C-C bonds or
beta-elimination. All these processes contribute to the
formation of highly reactive radicals and ions that further
accelerate cracking. In fact, the conducted experiments
demonstrated that the gasoline obtained with the halloysite
catalyst possessed a lighter fractional content and a higher
olefin fraction, along with an octane number ranging from 93
to 99.
References:
1. Joussein E., Petit S., Churchman J., et al. /“Halloysite clay
minerals – a review”, Clay Miner. 2005. V. 40. P. 383.
2. Lu Z., Eadula S., Zheng Z., et al. /Colloids and Surfaces A:
Physicochemical and Engineering Aspects. 2007. V. 292. P.
56.
3. Abdullayev E., Shchukin D., Lvov Y. /Polym. Mater. Sci.
Eng. 2008. V. 99. P.331.
4. Kutsuna S., Ibusuki T., Takeuchi K. /Environmental Science
and Technology. 2000. V. 34. P.2484.
5. Gary J., Handwerk G. / Petroleum Refining Technology
and Economics, 4th Edition, Marcel Dekker, Inc., NY, USA.
2001.
MULTISTAGE COMPRESSORS
Ibrahim Mammadov
ibrahimmemmedov.95@gmail.com
Supervisor: Prof. Siyavush Azakov
Compressors are utilized to increase the pressure of
gases and they have many applications in the industry.
Especially in oil-gas industry, compressors play a crucial role
for transportation of low pressure gases for long distances,
drilling process, and injection of low pressure gases into wells.
In many cases, gases which are produced with oil have very low
pressure after separation process and in order to use these gases
for different purposes, they should have very high pressure. To
reach high gas pressures in industry, multistage compressors
are preferable to single-stage compressors because with
multistage compressors the pressure of gases can be increased
by much more than 100 times with less work. This is the main
factor that makes multistage compressors a widely used unit in
industry. Different types of multistage compressor are used
depending on the purpose. For instance, if a very high pressure
(1000 psig) is required for gases at low volume, then
reciprocating or rotary compressors are used; when a huge
amount of gas is to be raised to pressures up to 150 psig,
centrifugal or axial multistage compressors are preferable.
The main advantage of a multistage compressor is that it
reaches high pressure with less work. It also helps to overcome
mechanical problems related to the compression process, the
main one being the rise in gas temperature after compression,
because high-temperature gas is difficult to handle in industry.
Multistage compressors tackle this problem using intercooling,
in which the gas heated in each stage is cooled back to its
initial temperature before the next stage. This brings the
compression process closer to an isothermal process, which
requires the least work, although a truly isothermal process is
an ideal case and cannot be achieved in industry. Another
advantage of multistage compressors is the lower pressure ratio
per stage compared to single-stage compressors. The purpose
of this research is to illustrate the main advantages of
multistage compressors for the oil and gas industry and to show
how a multistage compressor should be chosen depending on
the process.
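The work saving from intercooling can be illustrated with the ideal-gas isentropic work formula. The sketch below (Python) assumes perfect intercooling back to the inlet temperature and an equal pressure ratio in every stage (the minimum-work split); the default temperature, specific heat ratio and gas constant are illustrative air-like values, not data from the report:

```python
def compression_work(total_ratio, stages, t_inlet=293.15, k=1.4, r_gas=287.0):
    """Ideal isentropic compression work per kg of gas, assuming perfect
    intercooling back to t_inlet after every stage and an equal pressure
    ratio per stage (r = total_ratio ** (1/stages)), which minimizes the
    total work. Defaults are illustrative air-like values (SI units)."""
    r_stage = total_ratio ** (1.0 / stages)
    per_stage = (k / (k - 1.0)) * r_gas * t_inlet * (
        r_stage ** ((k - 1.0) / k) - 1.0)
    return stages * per_stage  # J/kg

# Raising pressure 100-fold: three intercooled stages need noticeably
# less work per kg than a single stage.
single = compression_work(100.0, 1)
triple = compression_work(100.0, 3)
print(f"1 stage: {single / 1e3:.0f} kJ/kg, 3 stages: {triple / 1e3:.0f} kJ/kg")
```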
CATALYSTS APPLIED IN
PETROCHEMICAL PROCESSES
Tunzale Imanova
tunzale.imanova@inbox.ru
Supervisor: Assoc. Prof. Sevda Fatullayeva
Thousands of substances have been produced that are not
easily found in nature but possess unique and useful properties.
If you wish to have an economical and eco-friendly approach
to your chemical transformations, then you should use an
appropriate catalyst, which speeds up the chemical reaction and
remains unaltered at the end, so that it can be used for the next
reaction. The catalyst accelerates the reaction by forming bonds
with the reacting molecules and then allowing these
intermediates to react to form the product, which detaches
itself from the catalyst, leaving the catalyst in its original form
for the next reaction. A catalyst provides a new, energetically
more favorable pathway for the reaction, which can be more
complex than the normal path. A catalyst does not change the
thermodynamics of the reaction; it changes the kinetics, so if
the reaction is thermodynamically impossible, a catalyst cannot
change the situation. The catalyst accelerates the forward and
backward reactions to the same extent.
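The kinetic effect described above can be made concrete with the Arrhenius equation; a sketch in Python with hypothetical activation energies (the 100 and 80 kJ/mol figures are assumed for illustration, not taken from any real catalyst):

```python
import math

R = 8.314  # J/(mol K), universal gas constant

def rate_constant(ea_j_mol, temp_k, prefactor=1.0e13):
    """Arrhenius rate constant k = A * exp(-Ea / (R T))."""
    return prefactor * math.exp(-ea_j_mol / (R * temp_k))

# Hypothetical case: a catalyst lowers Ea from 100 kJ/mol to 80 kJ/mol
# at 500 K. The rate increases by a large factor, but the same factor
# applies to the forward and backward reactions, so the equilibrium
# position (the thermodynamics) is unchanged.
T = 500.0
speedup = rate_constant(80e3, T) / rate_constant(100e3, T)
print(f"speedup at {T:.0f} K: about {speedup:.0f}x")
```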
A contact catalyst is one that has a large porous surface to
which other substances adhere by a process called adsorption.
Atoms or molecules of different substances collect on the
surface of the catalyst (Figure 1). While on the surface they
react together and are released in a different form.
The catalyst itself can be a precious metal, like platinum,
palladium or rhodium. Platinum-rhodium catalysts are used as
reduction catalysts and platinum-palladium catalysts are used as
oxidizing catalysts.
Figure 1. Catalytic reaction
Zeolites are microporous aluminosilicate minerals,
commonly used as commercial adsorbents, which are widely
used as catalysts in the petrochemical industry for the synthesis
of intermediate chemicals and polymers; in refining, essentially
in reactions of fluid catalytic-cracking and hydrotreatments; in
technologies for the abatement of pollutants, for removal of NO,
CO and hydrocarbons in emissions of stationary and mobile
combustors; in the production of fine chemicals, for the
synthesis of intermediates and active compounds [1-3].
The crude oil consists of many different chemicals with
various chemical and physical properties. Distillation or
fractionation is the first stage in petroleum refining. This
separates out the various constituents of the crude oil. A process
known as cracking breaks down large molecules into smaller
ones. This is done either by thermal cracking, using high levels
of heat and pressure, or by using less heat and pressure and a
catalyst. Catalytic cracking is done by the fluid technique. In
this, a catalyst in the form of a fine powder is poured like a
liquid through the petroleum and out to a regenerator. In the
regenerator, carbon that has become attached to the catalyst is
removed. The catalyst is then returned to the refining cycle.
Catalytic cracking is normally used to make gasoline. The
gasoline produced by this process has a higher octane number
than gasoline produced by straight distillation. The octane
number is a measure of the tendency of fuels to knock (make a
knocking noise) when used in automobile engines. The higher
the number, the less the knock. Another refining method is called
hydrocracking. In this method, hydrogen and catalysts are used
under high pressure. The process results in greater amounts of
gasoline and less waste than catalytic cracking.
Figure 2. Structure and some representatives of zeolites:
(a) mordenite; (b) chabazite; (c) zeolite catalyst; (d) zeolite
powders
Zeolite catalysts are currently used on a large scale in the
petroleum industry to produce high-grade propellants, fuels and
raw materials for the chemical industry. On the one hand, large
hydrocarbon molecules in the crude oil can be broken down into
more useful medium-sized molecules. On the other hand,
zeolites can also be used to couple very small hydrocarbons –
such as ethene and propene – to obtain higher quality products
also of medium chain length. Advances in zeolite catalysis for
refining applications will continue to be driven by the
availability of new materials and the demands for improved
fuels and lubricants.
References:
1. I.A. Nagim, S. Kulkarni, D. Kulkarni. / Journal of Engineering
Research and Studies. 2011. V. 2. P. 272.
2. J. Scherzer. / Octane Enhancing, Zeolitic FCC Catalysts.
Dekker, New York, 1990. P. 41–109.
3. W.-C. Cheng, G. Kim, A.W. Peters, X. Zhao and
K. Rajagopalan, Catal. Rev. Sci. Eng. 1998. V. 40. P. 39.
LIQUEFIED NATURAL GAS (LNG)
Turan Nasibli
Aysel Mamiyeva
nesibli73@gmail.com
ayselmamiyeva@gmail.com
Supervisor: Rana Abbasova
Liquefied natural gas, usually referred to as LNG, is
natural gas that has passed through a number of processes and
been cooled down until it takes liquid form at atmospheric
pressure (1 atm = 1.01325 bar). The main advantage of this
process is to reduce transportation costs and gain efficiency.
Since there are economic limitations on the distance over which
natural gas can be transported through onshore and offshore
gas pipelines, the liquefaction of natural gas to LNG facilitates
the transportation and storage of natural gas in a safe and
economic manner. By transforming gas from its natural state to
the liquid state, it can be delivered via pipelines or tankers from
distant production areas for consumption. LNG is a good choice
to help meet the world's growing energy needs due to its
flexibility, environmental benefits and large resource base.
Since it consists mainly of methane, the bubble point
temperature of natural gas at atmospheric pressure is about
-163°C; the bubble point is defined as the state, at a certain
pressure, at which the fluid is completely liquid and the first
bubble of gas forms. When natural gas is liquefied, its volume
decreases by about 600 times, while it remains colorless,
odorless, non-corrosive and non-toxic, as in the gaseous phase.
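The roughly 600-fold volume reduction can be sanity-checked from densities; the two density figures below are approximate textbook values, assumed here only for illustration:

```python
# Rough check of the ~600-fold volume reduction quoted above, using
# approximate, assumed densities: natural gas (mostly methane) at
# standard conditions ~0.72 kg/m3, LNG at about -163 C ~450 kg/m3.
rho_gas = 0.72   # kg/m3, typical textbook value (assumed)
rho_lng = 450.0  # kg/m3, typical textbook value (assumed)
volume_reduction = rho_lng / rho_gas
print(f"volume reduction: about {volume_reduction:.0f}x")
```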
Figure 1. Typical natural gas composition
Figure 1 shows a chart with typical raw natural gas
composition. The primary component is methane (CH4), but it
also contains ethane (C2H6), propane (C3H8), butane (C4H10),
and heavier hydrocarbons (C5+); non-hydrocarbons such as
carbon dioxide (CO2), hydrogen sulfide (H2S) and nitrogen (N2)
may be present as well.
Several process steps are carried out to obtain liquefied
natural gas in LNG plants. Figure 2 indicates the block diagram
of an LNG plant including those steps.
The first step is the treatment process. It includes
condensate removal, acid gas (H2S) removal, carbon dioxide
(CO2) removal and dehydration. The gas is first extracted and
transported to a processing plant where it is purified by
removing any condensates such as oil and mud.
BASF Aktiengesellschaft (BASF) is an international
chemical company which supplies the technology for removing
carbon dioxide and hydrogen sulfide from natural gas using
activated methyl diethanolamine (aMDEA), a tertiary amine.
Activated MDEA is an aqueous solution of MDEA plus an
activator chemical. It is a non-toxic and non-corrosive solution.
Carbon dioxide (CO2) is considered a contaminant
because it would freeze in the cryogenic process of converting
gaseous methane to liquid methane and block the process flow.
In this way, carbon dioxide in the composition of natural gas is
removed to acceptable levels by amine absorption. Sufficient
CO2 is removed to ensure that the natural gas reaches the
LNG liquefaction unit with less than 50 ppm (v) of carbon
dioxide. At any greater CO2 concentration, the natural gas
will freeze, obstructing the flow of natural gas and preventing
the production of LNG. The CO2 is treated by countercurrent
contact of the natural gas with the circulating aMDEA solution
in the Acid Gas Removal Absorber.
The water-saturated gas leaving the Acid Gas Removal
Unit is dried in the Dehydration Unit to meet cryogenic process
specification requirements. This unit uses a three-bed molecular
sieve configuration: two beds operate in adsorption mode while
the third bed undergoes regeneration. Each of the molecular
sieve beds is regenerated every 24 hours. The Dehydration
Unit dries water-saturated treated gas down to less than 1 ppm
(v) of water to avoid freezing and plugging in the cryogenic
liquefaction unit by gas hydrates. Dehydration is attained by
first precooling the treated natural gas from the Acid Gas
Removal Unit in the Wet Feed Propane Vaporizer to condense
and remove the bulk of water. Liquids formed in the cooling
process are removed in the Dehydrator Inlet Separator. Then the
remaining water is adsorbed from the natural gas in the
Molecular Sieve System.
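The two-adsorbing / one-regenerating rotation of the three molecular sieve beds can be sketched as a simple schedule. This is a hypothetical illustration of the pattern described above, not the plant's actual control logic:

```python
def bed_schedule(hours, beds=("A", "B", "C"), cycle_hours=24):
    """Return (regenerating_bed, adsorbing_beds) at a given elapsed hour,
    assuming the beds take turns regenerating for one full cycle each
    while the other two remain in adsorption mode -- the 2-adsorbing /
    1-regenerating pattern described above."""
    regenerating = beds[(hours // cycle_hours) % len(beds)]
    adsorbing = tuple(b for b in beds if b != regenerating)
    return regenerating, adsorbing

# In the first 24 h bed A regenerates while B and C dry the gas; then B.
print(bed_schedule(0))   # ('A', ('B', 'C'))
print(bed_schedule(30))  # ('B', ('A', 'C'))
```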
Treated gas from above units is fed to the Refrigeration
and Liquefaction Unit. The Refrigeration and Liquefaction Unit
liquefies the treated natural gas into LNG to enable storage at
near atmospheric pressures. This stage is the heart of any LNG
project. The Refrigeration and Liquefaction Unit will utilize
Air Products and Chemicals technology.
Figure 2. Block diagram of an LNG plant
For the purpose of this work, two of the technologies are of
particular importance: the propane-precooled mixed refrigerant
process (MR) and the mixed fluid cascade (MFC). The well-known
method is the mixed refrigerant process, which 85-90%
of LNG plants worldwide use for sub-cooling.
The natural gas is first precooled by propane refrigeration.
After being precooled, it enters the main heat exchanger for
sub-cooling, where it is cooled using the mixed refrigerant
(MR). The sub-cooled
LNG leaving the heat exchanger is reduced in pressure by a
control valve and then is sent to the LNG Storage Tank. The
LNG entering the Storage Tanks is at 1.08 bara pressure and
-163.1°C temperature. After liquefaction and storage, the LNG
is loaded onto specially designed ships built around insulated
cargo tanks.
LNG is an environmentally friendly fuel. Natural gas is one of
the purest fuels and is used around the world to reduce CO2
emissions. It produces less carbon dioxide and sulfur emissions
than coal. LNG is simply natural gas in a liquid phase. When
LNG is exposed to the environment, it rapidly evaporates,
leaving no residue.
HYDROCARBON PRODUCTS
AND PROCESSING
Camal Ahmadov
camal.ehmedov@yahoo.com
Supervisor: Corresponding Member
of ANAS, Prof. Z.H.Asadov
1. Introduction
In organic chemistry, a hydrocarbon is an organic compound
consisting entirely of hydrogen and carbon. The majority of
hydrocarbons found on Earth occur naturally in crude oil, where
decomposed organic matter provides plenty of carbon and
hydrogen. When hydrogen and carbon bond, they can form
nearly limitless chains of hydrocarbons.
2. Usage
Hydrocarbons are a primary energy source for current
civilizations. The most popular use of hydrocarbons is as a
combustible fuel source. In their solid form, hydrocarbons take
the form of asphalt (bitumen). Mixtures of volatile hydrocarbons
are now used instead of chlorofluorocarbons as propellants for
aerosol sprays, due to chlorofluorocarbons' impact on the ozone
layer. Some examples of the industrial use of hydrocarbons:
• Propane exists in "propane bottles" mostly as a liquid.
• Butane provides a safe, volatile fuel for small pocket
lighters.
• Hexane is used as a significant fraction of common
gasoline.
Hydrocarbons are currently the main source of the world's
electric energy and heat (such as home heating) because of the
energy produced when they are burnt. Often this energy is used
directly as heat, as in home heaters which use either petroleum
or natural gas: the hydrocarbon is burnt and the heat is used to
heat water, which is then circulated.
3. Petroleum
Extracted hydrocarbons in liquid form are referred to as
petroleum, and hydrocarbons in gaseous form are referred to as
natural gas. Petroleum and natural gas are found in the Earth's
subsurface with the tools of petroleum geology. They are a
significant source of fuel and raw materials for the production
of organic chemicals. Hydrocarbons are also mined from oil
sands and oil shale; these reserves require distillation and
upgrading to produce synthetic crude and petroleum. Oil
reserves in sedimentary rocks are the source of hydrocarbons
for the energy, transport and petrochemical industries.
Economically important hydrocarbons include fossil fuels such
as coal, petroleum and natural gas, and their derivatives such as
plastics, paraffin, waxes, solvents and oils.
4. Processing
Hydrocarbon processing is divided into five parts:
• Gas processing
• Nitrogenous fertilizers
• Petrochemicals
• Refining
• Synfuels
I will try to give brief information about processing and its
techniques.
Gas processing:
Gas processing is a complex industrial process designed to clean
raw natural gas. To clean raw natural gas, we separate impurities
and various non-methane hydrocarbons and fluids, producing
what is known as pipeline-quality dry natural gas. Natural gas
processing begins at the wellhead. The composition of the raw
natural gas extracted from producing wells depends on the type,
depth and location of the underground deposit and the geology
of the area. Oil and natural gas are often found together in the
same reservoir. The natural gas produced from oil wells is
generally classified as dissolved gas, meaning that the natural
gas is associated with or dissolved in crude oil. Most natural
gas extracted from the Earth contains, to varying degrees, low
molecular weight hydrocarbon compounds, including methane
(CH4), ethane (C2H6), propane (C3H8) and butane (C4H10).
Refining:
Oil refineries are one of the ways hydrocarbons are processed
for use. Crude oil is processed in several stages to form desired
hydrocarbons, used as fuel and in other products.
Figure 1. Refinery.
Petrochemical:
Petrochemicals are chemical products derived from petroleum.
Some chemical compounds made from petroleum are also
obtained from other fossil fuels, such as coal or natural gas.
The two most common petrochemical classes are olefins
(including ethylene and propylene) and aromatics (including
benzene, toluene and xylene isomers). Oil refineries produce
olefins and aromatics by fluid catalytic cracking of petroleum
fractions. Chemical plants produce olefins by steam cracking
of natural gas liquids like ethane and propane. Aromatics are
produced by catalytic reforming of naphtha. Olefins and
aromatics are the building blocks for a wide range of materials
such as solvents, detergents and adhesives. Olefins are the
basis for polymers and oligomers used in plastics, resins,
fibers, elastomers, etc.
The adjacent diagram depicts the major hydrocarbon sources
used in producing petrochemicals:
Figure 2. Major hydrocarbon sources used in producing
petrochemicals
5. References
http://mitei.mit.edu/research/innovations/hydrocarbon-products-and-processing
http://users.clas.ufl.edu/jbmartin/petroleum_geology/hydrocarbons.html
http://www.sulzer.com/en/Industries/Hydrocarbon-Processing/Gas-Processing
THE HYBRID CLEANER RENEWABLE ENERGY
SYSTEMS
Ali Aladdinov
Farid Mustafayev
alialaddinov@gmail.com
farid.mustafaev@yahoo.com
Supervisor: Assoc. Prof. Rena Abbasova
Introduction
One of the primary needs for socio-economic development in
any nation is the provision of reliable electricity supply
systems. Today, renewable energy systems have become a
focal point for addressing this problem on a worldwide scale.
However, using only one type of renewable source to supply
all the needs of a place or community cannot be efficient, for
common reasons such as insufficient solar, wind or geothermal
resources. The basic solution is to combine two or more
renewable energy systems and use them together; in this way
it is possible to obtain more efficient systems, which are called
hybrid renewable energy systems.
So what is a hybrid renewable energy system? It
combines multiple types of energy generation and/or storage,
or uses two or more kinds of fuel to power a generator. A
hybrid energy system is a valuable approach in the transition
away from fossil fuel based economies.
Particularly in the short term, while new technologies to better
integrate renewable energy sources are still being developed,
backing up renewable generation with conventional thermal
electric production can actually help expand the use of
renewable energy sources.
How it works.
Hybrid energy systems can capitalize on existing energy
infrastructure and add components to help reduce costs,
environmental impacts and system interruptions.
Figure 1. Hybrid Power Systems
Planning a
hybrid electricity system has a market focus rather than a
technology focus: the priority is to choose a mix of energy
technologies that is the most efficient and reliable way to meet
users’ needs.
An important issue in renewable energy development has
been the inability to rely on intermittent renewable sources, such
as wind and solar, for base load power. It is not economical to
ramp up or reduce production at large conventional base load
power plants; so even if wind or solar plants are producing
enough electricity to supply both peaking and some base load
demand, it does not generally offset fossil fuel-based or nuclear
base load energy generation. Small, agile hybrid energy systems
are one way to allow energy production from intermittent
renewable sources into the grid more reliably. To respond
accordingly to peaks and dips in renewable energy production,
hybrid systems are best implemented on a small scale because
small generators are more flexible. These agile systems can,
when possible, be interconnected into the central grid system
and function as small power plants.
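The idea of backing intermittent renewables with a dispatchable generator can be sketched as a toy dispatch rule. All capacities below are hypothetical, loosely inspired by a wind-diesel island grid:

```python
def dispatch(demand_kw, renewable_kw, backup_capacity_kw):
    """Toy dispatch rule for a hybrid system: use renewable output
    first, fill any shortfall from the backup (e.g. diesel) generator,
    and report unmet load if the backup is also exhausted."""
    from_renewable = min(demand_kw, renewable_kw)
    shortfall = demand_kw - from_renewable
    from_backup = min(shortfall, backup_capacity_kw)
    unmet = shortfall - from_backup
    return {"renewable": from_renewable, "backup": from_backup, "unmet": unmet}

# Hypothetical hour on a small wind-diesel grid: the wind turbine
# covers part of the 400 kW demand, diesel covers the rest.
print(dispatch(demand_kw=400.0, renewable_kw=250.0, backup_capacity_kw=300.0))
```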
Opportunities
• Hybrid energy systems are particularly well suited for use
in remote locations. Hybrid systems can serve stand-alone
mini-grids, thus avoiding costly transmission infrastructure.
The increased capability of integrating renewable energy
production into the electricity mix reduces the costs of
transporting fuel to remote areas.
• Applicable for combined heat and power and district
heating: As technology systems that can be used for distributed
generation, isolated grids or on-site application, hybrid energy
systems are generally well suited for combined heat and power
production or district heating.
Challenges to using a hybrid energy system
Financial
• The multiple components required to form a hybrid system
generally make them expensive to build.
Technical
• There is no single optimal hybrid energy system
configuration. Rather, optimizing is based on the availability of
renewable and non-renewable resources, on site-specific energy
infrastructure, production costs and incentive policies. Planning
a hybrid system, thus, necessitates an adequate study period for
each proposed project site.
Example
Koh Tao (island) in southern Thailand: the Provincial
Electricity Authority (PEA) installed a hybrid wind-diesel
energy system to increase power capacity and reliability and to
reduce the long-term costs. The PEA had previously relied on a
diesel system that cost 6.5 million baht (US$200,000) in losses
per year due to high fuel and fuel transportation costs. Based on
the wind resource, electricity infrastructure and geographic
constraints, the PEA chose to install a 250kW wind turbine to
reduce its heavy reliance on diesel.
Conclusion
Overall, the thesis covers information related to the operation
of these systems, their utilization, and the application of hybrid
cleaner renewable energy systems for different purposes, such
as the reduction of ecological issues, the generation of
inexpensive and reliable energy, and the design of eco-friendly
buildings in the construction of cities.
ARTIFICIAL LIFT METHODS
Mahmud Mammadov
mammadovmahmud@gmail.com
Supervisor: prof. Arif Mammadzade
1. Introduction:
Artificial lift refers to the use of artificial means to
increase the flow of liquids, such as crude oil or water, from a
production well. Generally this is achieved by the use of a
mechanical device inside the well (known as pump or velocity
string) or by decreasing the weight of the hydrostatic column by
injecting gas into the liquid some distance down the well.
Artificial lift is needed in wells when there is insufficient
pressure in the reservoir to lift the produced fluids to the
surface, but it is often also used in naturally flowing wells
(which do not technically need it) to increase the flow rate
above what would flow naturally. The produced fluid can be
oil, water or a mix of oil and water, typically mixed with some
amount of gas.
2. Types of Artificial Lift:
1. Rod Pumps - A downhole plunger is moved up and down by
a rod connected to an engine at the surface. The plunger
movement displaces produced fluid into the tubing via a pump
consisting of suitably arranged travelling and standing valves
mounted in a pump barrel.
Figure 2. Hydraulic Pumps
2. Hydraulic Pumps use a high pressure power fluid to:
(a) drive a downhole turbine pump, or
(b) flow through a venturi or jet, creating a low pressure area
which produces an increased drawdown and inflow from
the reservoir.
3. Electric Submersible Pump (ESP) employs a downhole
centrifugal pump driven by a three phase, electric motor
supplied with electric power via a cable run from the surface
on the outside of the tubing.
4. Gas Lift involves the supply of high pressure gas to the
casing/tubing annulus and its injection into the tubing deep in
the well. The increased gas content of the produced fluid
reduces the average flowing density of the fluids in the tubing,
hence increasing the formation drawdown and the well inflow
rate.
5. Progressing Cavity Pump (PCP) employs a helical metal
rotor rotating inside an elastomeric, double-helical stator. The
rotating action is supplied by a downhole electric motor or by
rotating rods.
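The density-reduction mechanism behind gas lift (and the drawdown logic behind artificial lift in general) can be illustrated with the standard oilfield hydrostatic relation p [psi] = 0.052 x density [ppg] x depth [ft]. The well depth and fluid densities below are illustrative assumptions, not data from this abstract:

```python
def hydrostatic_psi(density_ppg: float, tvd_ft: float) -> float:
    """Static pressure (psi) of a fluid column: 0.052 psi per ppg per foot."""
    return 0.052 * density_ppg * tvd_ft

# Illustrative well: 8,000 ft TVD, 8.5 ppg produced liquid.
bhp_no_lift = hydrostatic_psi(8.5, 8_000)    # ~3,536 psi on the formation

# Injected gas cuts the average flowing density to, say, 5.0 ppg
# over the same column (value chosen purely for illustration).
bhp_with_lift = hydrostatic_psi(5.0, 8_000)  # ~2,080 psi

# The reduction in bottomhole pressure is extra drawdown on the reservoir.
drawdown_gain = bhp_no_lift - bhp_with_lift
```

The same relation explains why gas lift works deep in the well: the longer the lightened column, the larger the pressure reduction.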
CEMENTATION
Rza Rahimov
Rza.rehimov@gmail.com
Supervisor: prof. Arif Mammadzade
Cementing operations can be divided into two broad
categories: primary cementing and remedial cementing.
Primary cementing. The objective of primary cementing is to
provide zonal isolation. Cementing is the process of mixing a slurry
of cement, cement additives and water and pumping it down through
casing to critical points in the annulus around the casing or in the open
hole below the casing string. The two principal functions of the
cementing process are:
 To restrict fluid movement between the formations
 To bond and support the casing
Zonal isolation
Zonal isolation is not directly related to production; however,
this necessary task must be performed effectively to allow
production or stimulation operations to be conducted. The
success of a well depends on this primary operation. In addition
to isolating oil-, gas-, and water-producing zones, cement also
aids in
 Protecting the casing from corrosion
 Preventing blowouts by quickly forming a seal
 Protecting the casing from shock loads in deeper drilling
 Sealing off zones of lost circulation or thief zones
Remedial cementing
Remedial cementing is usually done to correct problems
associated with the primary cement job. The most successful
and economical approach to remedial cementing is to avoid it by
thoroughly planning, designing, and executing all drilling,
primary cementing, and completion operations. The need for
remedial cementing to restore a well’s operation indicates that
primary operational planning and execution were ineffective,
resulting in costly repair operations. Remedial cementing
operations consist of two broad categories:
 Squeeze cementing
 Plug cementing
Cement placement procedures
In general, there are five steps required to obtain successful
cement placement and meet the objectives previously outlined.
1. Analyze the well parameters; define the needs of the
well, and then design placement techniques and fluids to
meet the needs for the life of the well. Fluid properties,
fluid mechanics, and chemistry influence the design
used for a well.
2. Calculate fluid (slurry) composition and perform
laboratory tests on the fluids designed in Step 1 to see
that they meet the needs.
3. Use necessary hardware to implement the design in Step
1; calculate volume of fluids (slurry) to be pumped; and
blend, mix, and pump fluids into the annulus.
4. Monitor the treatment in real time; compare with Step
1 and make changes as necessary.
5. Evaluate the results; compare with the design in Step
1 and make changes as necessary for future jobs.
Formation pressures
When a well is drilled, the natural state of the formations is
disrupted. The wellbore creates a disturbance where only the
formations and their natural forces existed before. During the
planning stages of a cement job, certain information must be
known about the formation:
 Pore pressure
 Fracture pressure
 Rock characteristics
Generally, these factors will be determined during drilling.
The density of the drilling fluids in a properly balanced drilling
operation can be a good indication of the limitations of the
wellbore.
To maintain the integrity of the wellbore, the hydrostatic
pressure exerted by the cement, drilling fluid, etc. must not
exceed the fracture pressure of the weakest formation. The
fracture pressure is the upper safe pressure limitation of the
formation before the formation breaks down (the pressure
necessary to extend the formation’s fractures). The hydrostatic
pressures of the fluids in the wellbore, along with the friction
pressures created by the fluids’ movement, cannot exceed the
fracture pressure, or the formation will break down. If the
formation does break down, the formation is no longer
controlled, and lost circulation results. Lost circulation, or fluid
loss, must be controlled for successful primary cementing.
Pressures experienced in the wellbore also affect the strength
development of the cement.
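The pressure-window argument above can be made concrete with a short sketch. It uses the standard oilfield conversion 0.052 psi per ppg per foot; the depths, pore pressure, fracture pressure and friction values are illustrative assumptions, not figures from this abstract:

```python
def hydrostatic_psi(density_ppg: float, tvd_ft: float) -> float:
    """Static pressure (psi) of a fluid column: 0.052 psi per ppg per foot."""
    return 0.052 * density_ppg * tvd_ft

def within_window(density_ppg: float, tvd_ft: float,
                  pore_psi: float, frac_psi: float,
                  friction_psi: float = 0.0) -> bool:
    """True if static plus circulating friction pressure stays above
    pore pressure (no influx) but below fracture pressure (no breakdown)."""
    p = hydrostatic_psi(density_ppg, tvd_ft) + friction_psi
    return pore_psi < p < frac_psi

# Illustrative well: 10,000 ft TVD, pore 5,200 psi, fracture 6,800 psi.
static_ok = within_window(12.0, 10_000, 5200, 6800)          # 6,240 psi: inside window
pumping_ok = within_window(12.0, 10_000, 5200, 6800, 700.0)  # 6,940 psi: breaks down
```

Note how the friction term can push an otherwise safe slurry past the fracture pressure, which is exactly the lost-circulation scenario the text describes.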
DRILLING FLUIDS
Eltun Sadigov
Eltun.sadigov@gmail.com
Supervisor: prof. Arif Mammadzade
Successful drilling and completion of a petroleum well,
and its cost, depend considerably on the properties of the drilling
fluid. The price of drilling fluid is relatively low compared to
other drilling equipment; however, the choice of a proper fluid and
the maintenance of the right properties during drilling have a
significant effect on total costs. An improper choice can cause
problems such as a low penetration rate of the drill bit, loss of
circulation, stuck drill pipe, corrosion of the drill pipe and so on.
Such problems lead to an increase in the number of rig days
required to drill to a given depth, and this in turn affects total
well costs. In addition, drilling fluids have an influence on
formation evaluation and subsequent well productivity.
Drilling fluids vary widely, but all of them share the
following common functions:
1. To carry drill cuttings
2. To cool and clean drill bit
3. To maintain the stability of the sections of the
borehole that has not been cased
4. To decrease friction between the drill pipe and the
sides of the hole
5. To prevent oil, gas or water from flowing to the
borehole from the permeable rocks
6. To form a thin, impermeable filter cake which seals
pores
7. To help collect and interpret information from drill
cuttings, cores, and logs.
Alongside the above functions, certain limitations or negative
requirements are placed on drilling fluids. A drilling fluid
should not:
1. Injure the drilling staff or damage the environment
2. Be unusually expensive
3. Be corrosive or cause excessive wearing of drilling
equipment
As every petroleum reservoir is unique, the properties and
composition of drilling fluid must be adjusted to the
requirements. Therefore, special additives are used to achieve
appropriate properties. Although drilling fluids have a wide range
of compositions, they are commonly classified according to their
base:
1. Water-base mud
2. Oil-base mud
3. Gas
Depending on formation features and requirements, one of the
above three types of drilling fluid is selected and, if necessary,
certain modifications are made to the fluid composition.
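The selection step just described can be caricatured as a toy decision rule. The two input flags and the mapping below are illustrative assumptions for the sketch only; real selection also weighs cost, environmental rules, temperature, and logging requirements, none of which this abstract enumerates:

```python
def select_base_fluid(water_sensitive_formation: bool,
                      underbalanced_target: bool) -> str:
    """Toy rule of thumb mapping two formation flags onto the three
    base types named above (water-base, oil-base, gas). Purely
    illustrative; not a criterion taken from this abstract."""
    if underbalanced_target:
        return "gas"              # e.g. air or mist drilling of hard, dry rock
    if water_sensitive_formation:
        return "oil-base mud"     # limits shale hydration and swelling
    return "water-base mud"       # cheapest and most common default
```

A usage example: a water-sensitive shale section with no underbalanced requirement would map to `select_base_fluid(True, False)`, i.e. an oil-base mud.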
The aim of this research is to study the functions,
properties and selection of drilling fluids. It will be shown
which reservoir features and requirements affect the selection of
a drilling fluid. In addition, problems occurring due to drilling
fluids will be discussed.
PERFORATING WELLS
Gizilgul Huseynova
Gizilgul.Huseynova@gmail.com
Supervisor: prof. Arif Mammadzade
Introduction
Perforating is the next stage in well completion after drilling
and casing the well. The aim of a perforating gun is to create
effective flow contact between the cased wellbore and a
productive reservoir. Maximum reservoir productivity in new or
existing wells depends on optimized perforating. Engineered
perforating systems help engineers achieve the best
production or injection in the well by optimizing the
relationship between the gun system, wellbore, and reservoir.
Perforations are the main element of the inflow section of the well
and have a considerable effect on total completion efficiency. In
the early 1930s, perforations were made with a bullet gun.
Today the bullet gun has been replaced with the shaped-charge
perforator. Perforations, which are simply holes, must be made
through the casing for oil or gas to flow into the
wellbore. The most common method of perforating uses
shaped-charge explosives; however, there are other perforating
techniques, including bullet perforating and high-pressure fluid
jetting.
Perforating Gun Systems
Shaped charges create penetration by forming a high-pressure,
high-velocity jet. The charges are assembled in a tool called a
gun, which is run on wireline into the well opposite the
producing zone. The charges are fired electrically from the
surface, and the tool is retrieved after the perforations are made.
Perforating is usually performed by a service company that
specializes in this technique.
Figure 1. Perforating process
The part of the gun system most crucial to manufacture with
maximum quality consistency is the shaped charge, which is
assembled from a case or container, the main explosive
material, and a liner. Each component of the charge is
manufactured to exact tolerances to make sure that the liner
collapses to form the jet according to the design. Wireline is the
most common way to run perforating guns, because wireline
provides the advantages of real-time depth control and
selectivity, along with reduced logistics compared with
deployment on tubing. The effectiveness of the perforating
process depends on the care and design of the procedure. In
order to create an ideal flow path, a number of critical steps are
taken into consideration:
 Design
 Quality control
 Quality control inspection
Depth control for perforating is usually achieved with a gamma
ray or casing collar locator log. Short joints are also run in the
production casing to assist in the correlation. Perforation
efficiency is achieved with maximum penetration, a uniform
crushed zone, and minimal sealing due to slug debris. To sum
up, maximized productivity from a well depends on getting the
perforations correctly located in the productive interval and
correctly oriented, regardless of the type of gun, for the wellbore
environment, reservoir, and completion geometry.
WELL CONTROL
Konul Alizada
konul.alizada@gmail.com
Supervisor: prof. Arif Mammadzade
Introduction
This thesis aims at giving an introduction to a range of well
control methods and equipment presently in use. Generally, classical
well control is based on decades of experience from worldwide
drilling operations. In the early years of offshore drilling, most
wells were drilled with simple wellbore geometries in shallow
water. Over the years, the boundaries of drilling have continuously
been pushed towards new extremes. Wells are getting deeper, with
higher downhole pressures and temperatures, and wellbore
geometries are getting more complicated. This in turn leads to
some changes in the actual well control procedures. It
is always important to ensure that fluid (oil, gas or water) does
not flow in an uncontrolled way from the formations being
drilled, into the borehole and eventually to surface. This flow
will occur if the pressure in the pore space of the formations
being drilled (the formation pressure) is greater than the
hydrostatic pressure exerted by the column of mud in the
wellbore (the borehole pressure). It is essential that the borehole
pressure, due to the column of fluid, exceeds the formation
pressure at all times during drilling. If, for some reason, the
formation pressure is greater than the borehole pressure an
influx of fluid into the borehole (known as a kick) will occur. If
no action is taken to stop the influx of fluid once it begins, then
all of the drilling mud will be pushed out of the borehole and the
formation fluids will be flowing in an uncontrolled manner at
surface. This would be known as a Blowout. This flow of the
formation fluid to surface is prevented by the secondary control
system. Secondary control is achieved by closing off the well at
surface with valves, known as Blowout Preventers – BOPs.
Well Control Principles
There are basically two ways in which fluids can be prevented
from flowing, from the formation, into the borehole:
 Primary well control is achieved by maintaining
hydrostatic pressure (bottom hole pressure) in the
wellbore greater than the pressure of the fluids in the
formation being drilled (formation pressure), but less
than the pressure required to permanently deform the rock
structure of a formation (fracture pressure).
 Secondary well control is used if the pressure of the
formation fluids exceeds the hydrostatic pressure of the
drilling fluid for any reason. Primary well control is then
lost and the well will flow, so blowout preventers (BOPs)
must be closed quickly to enable the kick to be
controlled by one or more of the "kill procedures".
 Tertiary well control describes any emergency well
control situation in which the formation cannot be controlled
by primary or secondary well control: for example, a kick is
taken with the bit off bottom, the drill pipe plugs off
during a kill operation, there is no pipe in the hole, there is
a hole in the drill string, circulation is lost, casing pressure
is excessive, the string is plugged and stuck off bottom, or
gas percolates without expanding.
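The handover from primary to secondary control can be quantified. Once the well is shut in, the shut-in drill pipe pressure (SIDPP) measures how far formation pressure exceeds the mud hydrostatic, and the standard textbook kill mud weight follows from the 0.052 psi/ppg/ft conversion. The example numbers below are illustrative assumptions:

```python
def kill_mud_weight(current_ppg: float, sidpp_psi: float, tvd_ft: float) -> float:
    """Textbook kill mud weight: the current mud weight plus the density
    increment needed to balance the shut-in drill pipe pressure (SIDPP)
    over the true vertical depth of the well."""
    return current_ppg + sidpp_psi / (0.052 * tvd_ft)

# Illustrative kick: 10.0 ppg mud, SIDPP of 400 psi, 10,000 ft TVD.
kmw = kill_mud_weight(10.0, 400.0, 10_000)   # about 10.8 ppg
```

Circulating mud of at least this weight restores primary control, after which the BOPs can be reopened.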
How Does Well Control Work?
Blowouts are easily the most dangerous and destructive
potential disasters in the world of oil drilling. Not only can they
lead to serious injury and even death, but they can also cause
massive, debilitating production shut-downs and can have a
negative effect on future production from the lost
well. Blowouts can also cause severe ecological damage. As
with any potential disaster, prevention is the first step in
avoiding an otherwise costly and dangerous situation. These
preventative measures are collectively called Well Control.
Blowout preventers (BOPs), in conjunction with other
equipment and techniques, are used to close the well in and
allow the crew to control a kick before it becomes a blowout.
Blowout preventer equipment should be designed to:
 Close the top of the hole.
 Control the release of fluids.
 Permit pumping into the hole.
 Allow movement of the inner string of pipe.
Figure 1. Blowout preventer
References
1. Drilling Engineering - BP handbook
2. Patent Drawing of a Subsea BOP Stack, available from: http://ookaboo.com/o/pictures/picture/13257456/Patent_Drawing_of_a_Subsea_BOP_Stack_wit
APPLICATION OF MAGNETIC FIELD IN OIL INDUSTRY
Munavvar Salmanova
munevver.salmanova@gmail.com
Supervisor: prof. Arif Mammadzade
Introduction
In the process of oil production, wax deposition often occurs on the
wall of oil pipes and offshore oil wells, which will reduce the inner
diameter and even block the pipelines. A number of physical,
chemical and thermal methods are used in the paraffin control
process in oil production. At present the heating method is
extensively utilized to prevent the deposition of wax on the oil
pipelines. Consequently, tens of billions of dollars, along with a
huge amount of gas or electric energy, are consumed in the oil
heating every year. An alternative method is therefore urgently
required to save the energy consumed in paraffin control.
The successful application of technologies based on magnetic field
generation in the well bottom-hole and wellbore solves a
number of problems in the oil industry and increases the production
rate of producers.
of producers. During recent decades, magnetic paraffin control
(MPC) technology has been extensively adopted for its advantage
in energy-saving and pollution reduction. According to the
mechanism of magnetic paraffin control, magnetic treatment can
reorient the paraffin crystalline particles and cause small particles
to aggregate into larger ones, which prevents wax
deposition and reduces the viscosity of the crude oil. In
contrast to the oil heating method, only a little energy is consumed
to maintain a magnetic field. So, remarkable economic efficiency
can be achieved by application of MPC technology on oil
pipelines.
Paraffin Control Investigation Problems
In this section we analyze the conditions and causes of
asphalt-resin-paraffin deposits (ARPD) in oil production fields.
Chemical and physical methods known to date for preventing and
removing ARPD from producing wells are considered. We propose a
method of dealing with ARPD, based on the use of down-hole magnetic
systems, and the main results of their use.
1. Causes and conditions of asphalt-resin-paraffin deposits
(ARPD) in oil production, one of the modern problems that
cause complications in wells, oil field equipment and
pipeline communications (Figure 1).
Figure 1. Asphalt-resin-paraffin deposits in oil well
tubings [1].
The accumulation of ARPD in the flow path of oil field equipment
and on the inner surface of pipes decreases
system performance and reduces the overhaul period of wells
(OPW) and the efficiency of pumping systems.
2. The composition and structure of ARPD.
ARPD is a complex hydrocarbon mixture consisting of
paraffin (wax) (20-70% by weight), asphalt-resin matter
(ARM) (20-40% by weight), silica gel resins, oils, water and
solids [1]. Paraffin hydrocarbons of the methane series comprise
hydrocarbons from C16H34 to C64H130. In situ, they are dissolved
in the oil. Depending on the paraffin content, these oils are
classified as follows: low-paraffin oils, with a paraffin
content of less than 1.5% by weight; paraffin-base crude oils,
from 1.5 to 6% by weight; and high-paraffin-base oils, with a
paraffin content of more than 6% by weight.
3. Classification of methods of control of ARPD.
Along with its high cost, a significant drawback of the
chemical method is the complexity of selecting an effective
reagent, associated with the constant changes in operating
conditions during field development. The methods classed as
physical are based on the effects of mechanical and ultrasonic
vibrations (the vibration method), as well as electrical, magnetic
and electromagnetic fields, on the produced oil and transported
products. The impact of magnetic fields is assumed to be the
most promising of the physical methods. The use of magnetic
devices in oil wells to prevent ARPD began in the fifties of the last
century but, because of low efficiency, did not become
widespread: there were no magnets durable and stable enough
for working conditions in the well. Recently, interest in the use
of a magnetic field to influence ARPD has increased
significantly, due to the appearance on the market of a wide
range of high-energy magnets based on rare-earth materials.
At present about 30 different organizations offer
magnetic deparaffinizers. It has been established that, under the
influence of the magnetic field, aggregates consisting of submicron
ferromagnetic microparticles of iron compounds, present at a
concentration of 10-100 g/t in oil and associated water, are
destroyed in the moving fluid. Each
aggregate contains several hundred to several thousand
microparticles, so the destruction of aggregates leads to a sharp
(100-1000 times) increase in the concentration of
crystallization centers for paraffins and salts, and to the formation
of ferromagnetic particles on the surface of micron-sized gas
bubbles. After the destruction of the aggregates, paraffin
crystals precipitate in the form of a fine, stable
suspension, and the growth rate of deposits is reduced in
proportion to the reduced average size of the wax crystals that
drop out, together with resins and asphaltenes, into the solid
phase. The formation of gas microbubbles at the
crystallization centers after magnetic treatment provides, according
to some researchers, a gas-lift effect, leading to some
increase in well production.
BUBBLE-POINT PRESSURE
Mustafa Asgerov
Mustafa.asgarov@gmail.com
Supervisor: prof. Arif Mammadzade
When heating a liquid consisting of two or more
components, the bubble point is the temperature (at a given
pressure) where the first bubble of vapor is formed. At
discovery, all petroleum reservoir oils contain some natural gas
in solution. Often the oil is saturated with gas when discovered,
meaning that the oil is holding all the gas it can at the reservoir
temperature and pressure, and it is at its bubble-point.
Occasionally, the oil will be undersaturated. In this case, as the
pressure is lowered, the pressure at which the first gas begins to
evolve from the oil is defined as the bubble-point. Given that the
vapor will probably have a different composition than the liquid,
the bubble point (along with the dew point) at different
compositions is useful data when designing distillation
systems. For a single component the bubble point and the dew
point are the same and are referred to as the boiling point. In
order to predict the phase behavior of a mixture,
scientists and engineers examine the limits of phase changes,
and then utilize the laws of thermodynamics to determine what
happens between those limits. In the case of gas-liquid phase
changes, these limits are called the bubble point and the dew
point.
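The single-component case mentioned above can be computed directly from a vapor-pressure correlation. As an illustration (not taken from this abstract), the sketch below inverts the Antoine equation for water, using constants from standard tables valid roughly between 1 and 100 degC, to recover the normal boiling point:

```python
import math

# Antoine equation: log10(P_mmHg) = A - B / (C + T_degC)
# Constants for water, valid roughly 1-100 degC (standard tables).
A, B, C = 8.07131, 1730.63, 233.426

def bubble_point_degc(p_mmhg: float) -> float:
    """Temperature at which the vapor pressure equals p_mmhg.
    For a pure component this is both the bubble and the dew point."""
    return B / (A - math.log10(p_mmhg)) - C

t_boil = bubble_point_degc(760.0)   # normal boiling point, ~100 degC
```

At any other pressure the same inversion gives the shifted bubble/boiling point, which is the simplest instance of the limit-finding procedure the text describes.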
Reservoir fluid characterization is an important issue in
reservoir and production engineering calculations. The bubble
point pressure (Pb), an important Pressure-Volume-Temperature
(PVT) property, determines the oil-water flow
ratio during hydrocarbon production. If it is too high, the quantity of
produced water obtained at the surface may be higher than that
of oil, production will be reduced, and well efficiency will be
low. The produced water, also known as brine, is the water
associated with oil and gas reservoirs that is produced along
with the oil and gas. Accurate determination of bubble point
pressure is of major importance since it affects the phase behavior
of the crude, which in turn influences further upstream and
downstream computations. Several correlations have been
proposed in recent years to predict fluid properties using
linear or non-linear regression and graphical techniques. For
binary systems, the volumetric method gives good results. In
multicomponent systems such as oil, however, phase
transformations do not occur instantaneously but over some
time, which gives rise to a transition region and greatly
reduces the accuracy of the method. By comparing the
bubble point pressure found in a porous medium with that found
without one, we determined the increase of bubble point pressure
in a porous medium. Experiments show the relationship between
bubble point pressure and the solubility of the gas. It is established
that as the gas factor increases, the bubble point pressure increases,
which promotes an increase in the shrinkage of the oil. The paper
notes that, due to the low solubility of nitrogen in oil compared to
hydrocarbon gases, its presence in the reservoir fluid, even in
small amounts, has a significant influence on the bubble point
pressure of the gas-oil mixture. With increasing temperature (in the
range 26.7-87.8 °C) the solubility of gas in the oil increases, and
as a result the bubble point pressure increases more slowly.
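One widely used published correlation of the kind mentioned above is Standing's (1947) bubble-point correlation. The sketch below implements it purely as an illustration; the abstract itself does not claim this particular correlation, and the example fluid properties are assumptions:

```python
def standing_pb(rs_scf_stb: float, gas_sg: float, temp_f: float, api: float) -> float:
    """Standing (1947) bubble-point correlation, psia (field units).
    rs_scf_stb: solution GOR (scf/STB); gas_sg: gas specific gravity (air = 1);
    temp_f: reservoir temperature (degF); api: stock-tank oil gravity (degAPI)."""
    a = 0.00091 * temp_f - 0.0125 * api
    return 18.2 * ((rs_scf_stb / gas_sg) ** 0.83 * 10 ** a - 1.4)

# Illustrative fluid: Rs = 350 scf/STB, gas gravity 0.75, 200 degF, 30 degAPI.
pb = standing_pb(350.0, 0.75, 200.0, 30.0)   # on the order of 1,900 psia
```

The correlation reproduces the trends the text notes: a higher gas factor (Rs) raises the computed Pb, and the temperature term partially offsets the API-gravity term.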
HORIZONTAL DRILLING
Rasul Ismayilzada
Rasul.ismayilzada@gmail.com
Supervisor: prof. Arif Mammadzade
Horizontal drilling is the process of drilling and completing, for
production, a well that begins as a vertical or inclined linear
bore extending from the surface to a subsurface location
just above the target oil or gas reservoir, called the "kickoff
point", then bears off on an arc to intersect the reservoir at the
entry point and thereafter continues at a near-horizontal attitude,
remaining entirely within the reservoir until the desired bottom
hole location is reached.
The technical objective of horizontal drilling is to expose
significantly more reservoir rock to the well bore surface, to
intersect multiple fracture systems within a reservoir, and to
avoid unnecessarily premature water and gas intrusion that
would interfere with oil production. The economic benefits
of successful horizontal drilling are increased productivity of
the reservoir and prolongation of the reservoir's commercial
life. Moreover, horizontal drilling affords a range of benefits:
an increased rate of return from the reservoir, increased recoverable
reserves, lower production costs, and a reduced number of
platforms and wells per field.
Three main types of horizontal wells are defined by the rate at
which the radius of curvature is built.
1. Short-radius horizontal wells
2. Medium radius horizontal wells
3. Long-radius horizontal wells.
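The three classes above differ in the rate at which angle is built. From circular-arc geometry, the radius of curvature follows directly from the build-up rate (BUR); the example build rates and the class labels in the comments are illustrative assumptions:

```python
import math

def curvature_radius_ft(build_rate_deg_per_100ft: float) -> float:
    """Radius of a circular build section: R = 18000 / (pi * BUR),
    i.e. roughly 5,730 ft divided by the build-up rate in deg/100 ft."""
    return 18000.0 / (math.pi * build_rate_deg_per_100ft)

r_long = curvature_radius_ft(3.0)     # ~1,900 ft: long-radius territory
r_medium = curvature_radius_ft(10.0)  # ~570 ft: medium-radius territory
```

The inverse relation is why long-radius wells, built at only a few degrees per 100 ft, need thousands of feet of measured depth to turn horizontal, while short-radius wells turn within tens of feet.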
Depending on the intended radius of curvature and the hole
diameter, the arc section of a horizontal well may be drilled
either conventionally or by use of a drilling fluid-driven axial
hydraulic motor or turbine motor mounted directly above the
bit.
Downhole instrument packages that telemeter various sensor
readings to operators at the surface are included in the drill
string near the bit, at least while drilling the arc and
near-horizontal portions of the hole. Control of hole direction, or
steering, is accomplished by at least one of the following:
 A steerable downhole motor
 Various "bent subs"
 A downhole adjustable stabilizer
The main types of horizontal drilling equipment:
Bent housings: These provide a permanent bend (1/2 to 1 1/2
degrees) in the bottom-hole assembly (BHA) to build well
deviation and control the horizontal trajectory.
Positive displacement motors (PDM): A PDM is located immediately
above the bit in the BHA. It is powered by mud displacing a
helical shaft that rotates inside a rubber housing and turns the bit.
Top drive system: It turns the drillpipe directly, rather than
using a rotary table. Also, while the drillstring is pulled from the
hole, it can be rotated and circulation maintained, making top
drive attractive for horizontal drilling.
The application of horizontal drilling technology to the
discovery and productive development of oil reserves has
become a frequent, worldwide event over the past 5 years. The
aim of this research is to focus primarily on domestic horizontal
drilling applications and on salient aspects of current and
near-future horizontal drilling and completion technology.
SYNTHETIC BASED DRILLING FLUIDS
Rashad Nazaraliyev
resad1957@gmail.com
Supervisor: prof. Arif Mammadzade
The drilling fluid, or drilling mud, is an important component
of a drilling rig. The technology of drilling fluids has developed
as rapidly and extensively as the rotary drilling machine itself.
What is drilling fluid?
This is the fluid (or mud) used in drilling operations, performing
the function (or functions) required to drill a well fit for
operation at minimum cost.
In the late 19th century, water was the principal fluid used
in rotary drilling, even though some mixing of natural clay
particles into the fluid must have occurred much of the time.
The general term "mud" originated when certain types of clays
were added to water to form drilling fluid. However, recent
developments have made the term "mud" somewhat out-dated.
Modern mud systems are now referred to as drilling fluids due
to the large number of additives that can be used to give them
special properties. Drilling muds are materials pumped
through the rig’s drill string and drill bit to remove drill cuttings
from the bore hole during drilling operations. They also clean
the bit, keep desired pressure differential between the formation
and mud constant and serve to stabilize the hole. For most
drilling, the muds used are prepared by dispersing finely
divided clays in water and are called Water Based Muds or
Fluids (WBMs or WBFs). These solids provide the desired
suspending power to assist with cuttings removal and mud
density to control pressure. However, WBMs tend to interact
with water sensitive formations during drilling operations
causing some problems, such as bit balling and hole stability
problems. These conditions may cause a variety of costly
difficulties for operators such as stuck pipe and reduced drilling
rates. To deal with these problems when drilling through
difficult or unknown formation conditions, an invert emulsion
based mud is used. In an invert emulsion mud, an organic based
fluid forms a continuous outer phase surrounding an internal
aqueous phase of finely dispersed droplets (an emulsion). The
mud solids and other additives are also suspended in the organic
phase. Because the external phase is insoluble in water,
interactions with water sensitive formations are reduced. For
this reason invert muds reduce sloughing problems, form better
filter cakes and produce more stable bore holes. These attributes
allow higher mud circulation velocities and thus better
removal of cuttings. Invert muds are generally based on diesel
or mineral oil and called oil base muds or fluids (OBMs or
OBFs). OBFs have been used extensively in the North Sea, in
Canadian offshore waters, and in several other offshore
development areas in the world. Cuttings containing adsorbed
OBFs were routinely discharged to offshore waters of the North
Sea until the early 1990s, when such discharges were strictly
prohibited. For example, in the UK Sector of the North Sea, 75
percent of the 333 wells drilled in 1991 were drilled with OBFs,
resulting in the discharge of an estimated 11,000 metric tons of
oil to the ocean. However, in 1995, less than 30 percent of 342
wells were drilled with OBFs.
The inability to discharge cuttings greatly increased cost as they
now had to be transported to a safe disposal site. This quickly
produced a need for a high-performance, environmentally safe
mud to allow for cuttings discharge. As a result, Enhanced
Mineral Oil Based Fluids (EMOBFs) were produced. Enhanced
mineral oils are conventional paraffinic mineral oils that have
been hydro-treated or otherwise purified to remove all aromatic
hydrocarbons. Enhanced mineral oils generally contain less than
about 0.25 percent total aromatic hydrocarbons and less than
0.001 weight percent total polycyclic aromatic hydrocarbons.
Evaluation of one of the enhanced mineral oils showed that it
contained less than 1 mg/L benzene. Aromatic hydrocarbons,
including polycyclic aromatic hydrocarbons, are considered to
be the most toxic components of OBFs. Although EMOBFs are
less toxic than OBFs, their use is still not allowed in some parts
of the world. To meet this need, alternative invert emulsion
muds were developed using less toxic synthetic based organic
fluids so that drill cuttings discharge would be allowed.
Therefore, SBFs are produced using synthetic fluids formed
from specific purified starting materials and they lead to defined
products that are essentially free of undesirable polycyclic
aromatic hydrocarbons (PAHs). These materials are less toxic
and more biodegradable than refined mineral oil products such
as diesel oil.
Thus, the research project described in the thesis deals with an
investigation of Synthetic based fluids (SBFs or SBMs) in the
following way:
 What is an SBF?
 How and from which substances are SBFs produced?
 What are their advantages and disadvantages?
THE SIDETRACK OPERATION AND ITS IMPORTANCE IN
THE OIL INDUSTRY
Iftikhar Huseynov
iftixar9@gmail.com
Supervisor: prof. Arif Mammadzade
"Sidetrack operation" means drilling a secondary wellbore away
from the original wellbore. To achieve the casing exit 1, some
steps should be followed.
1. The radius of the secondary wellbore should be less than that
of the main wellbore, in order to exit the secondary wellbore
without deformation.
2. Pressure should be controlled.
3. Deformation occurring in the mother and secondary wellbores
should be minimized.
4. The amount of sand entering the wellbore should be minimized.
A sidetracking operation may be done intentionally or may occur
accidentally. Intentional sidetracks might bypass2 an unusable section
of the original wellbore or explore a geologic feature nearby. In the
bypass case, the secondary wellbore is usually drilled substantially
parallel to the original well.
Openhole sidetracking is most commonly applied in three
scenarios:
1. For drilling a horizontal lateral from a main wellbore;
2. For drilling a lateral in a multilateral well;
1 Exit of the secondary wellbore, made by the sidetrack operation.
2 The act of passing the mud around a piece of equipment.
3. For managing unplanned events, such as a collapsed borehole
or lost BHA.
Figure 1. Sidetrack operation
Traditionally, the most frequently employed openhole sidetracking methodology began with setting a cement plug, followed by running a directional BHA once the cement had hardened. The success of the plug-setting operation depends on the formation's compressive strength, the quality of the cement, and the cure time. Consequently, a plug failure results in added trip time, the need for a new cement plug, re-drilling of previously drilled footage, lost rig time, and reconfiguration of the drilling trajectory.
Sidetracking in multilateral wells is a common use of the methodology in the oil industry. The main purpose of multilateral wells is to increase productivity, and sidetracking is the principal operation for drilling their horizontal lateral wellbores. Hence, it can be said that well productivity can be raised by the sidetrack operation.
UNCONVENTIONAL OIL AND GAS
RESOURCES
Umid Tarlanli
Umid.terlanli@gmail.com
Supervisor: Prof. Arif Mammadzade
In recent decades there has been rising interest in unconventional oil and gas resources, as they hold vast reserves worldwide compared to conventional oil and gas resources. Although the extraction and use of these resources is not yet economically viable, they are of paramount importance as potential energy resources and will certainly be widely used in the coming decades as new methods of extraction are found. These resources generally include coalbed methane, gas hydrates, tight gas sands, shale gas and shale oil, oil sands, and oil shale. Some of them are described below:
Natural Gas hydrates
Natural gas hydrates are ice-like structures in which gas, most often methane, is trapped inside water molecules. Unlike the familiar ice derived entirely from water, gas hydrates are highly flammable, a property that makes these crystalline structures both an attractive future energy source and a potential hazard. Hydrates are a much more abundant source of natural gas than conventional deposits. According to the U.S. Geological Survey, global stocks of gas hydrates account for at least 10 times the supply of conventional natural gas deposits, with between 100,000 and 300,000,000 trillion cubic feet of gas yet to be discovered.
Estimates of the resource potential of natural gas hydrates vary,
but most estimates place the resource potential as greater than
the known reserves of all oil, natural gas and coal in the world.
Several possible recovery methods are now under investigation:
• Heating the hydrates using hot water, steam, electromagnetic radiation (such as microwaves) or electricity. These methods would raise the temperature so that the hydrates would melt, releasing the natural gas.
• Lowering the pressure of the hydrates. Lowering the pressure would also cause the hydrates to melt, releasing the natural gas.
• Injecting chemical inhibitors. Inhibitors prevent hydrates from forming or cause hydrates that have formed to “melt.”
The success of any of these techniques will depend on their
ability to overcome a number of inherent challenges.
Oil sands
Oil sand is either loose sand or partially consolidated sandstone
containing a naturally occurring mixture of sand, clay, and
water, saturated with a dense and extremely viscous form of
petroleum technically referred to as bitumen (or colloquially tar
due to its similar appearance, odor, and color). Oil sands
reserves have only recently been considered to be part of the
world's oil reserves, as higher oil prices and new technology
enable profitable extraction and processing. There are numerous
deposits of oil sands in the world, but the biggest and most
important are in Canada and Venezuela, with lesser deposits
in Kazakhstan and Russia. The total volume of nonconventional oil in the oil sands of these countries exceeds the
reserves of conventional oil in all other countries combined.
Shale oil
Shale oil is unconventional oil produced from oil shale rock
fragments by pyrolysis, hydrogenation, or thermal dissolution.
These processes convert the organic matter within the rock
(kerogen) into synthetic oil and gas. The resulting oil can be
used immediately as a fuel or upgraded to meet refinery
feedstock specifications by adding hydrogen and removing
impurities such as sulfur and nitrogen. The refined products can
be used for the same purposes as those derived from crude oil.
Global technically recoverable oil shale reserves have recently been estimated at about 2.8 to 3.3 trillion barrels (450×10⁹ to 520×10⁹ m³) of shale oil, with the largest reserves in the United States, which is thought to hold 1.5–2.6 trillion barrels.
All of the energy resources listed above, as well as the others noted in the first paragraph, have very large potential for industrial use. Their advantages and disadvantages, together with up-to-date technologies for extracting these resources efficiently, will be discussed in the presentation.
DRILLING BITS
Azad Abdullayev
freeazadbhos@gmail.com
Supervisor: prof. Arif Mammadzade
How well a bit drills depends on several factors, such as the condition of the bit, the weight applied to it, and the rate at which it is rotated. Also important for bit performance is the effectiveness of the drilling fluid in clearing cuttings produced by the bit away from the bottom. The aims of drilling are to:
1. Make a hole as fast as possible by selecting bits which
produce good penetration rates
2. Run bits with a long working life to reduce trip time
3. Use bits which drill a full-size or full-gauge hole during the
entire time they are on bottom.
Bits can generally be classified into two categories:
• Roller bits
• Drag bits
Roller bits. The cutting elements of roller-cone bits are arranged on “conical” structures that are attached to a bit body. Typically three cones are used, and the cutters may be tungsten carbide inserts pressed into pre-drilled holes in the steel cone shell, or steel teeth formed by milling directly on the cone shell as it is manufactured. The length, spacing, shape, and tooth material are tailored for drilling a particular rock. Insert types are used as teeth on roller-cone bits.
Drag Bits. There are two general types of drag bits that are in
common usage. The oldest is the natural diamond matrix bit in
which industrial grade diamonds are set into a bit head that is
manufactured by a powdered metallurgy technique.
The size, shape, quantity, quality, and exposure of the diamonds
are tailored to provide the best performance for a particular
formation. Each bit is designed and manufactured for a
particular job rather than being mass produced as roller cone
bits are. The cuttings are removed by mud that flows through a
series of water courses. The design of these water courses is
aimed at forcing fluid around each individual diamond. The
matrix diamond bit cuts rock by grinding and thus a primary
function of the fluid is to conduct heat away from the diamonds.
The other type of drag bit is the polycrystalline diamond compact (PDC) bit, which is constructed with cutters made of a man-made diamond material.
Bit selection is based on using the bit that provides the lowest cost per foot of hole drilled. This cost is expressed by the cost-per-foot equation below. The most common application of a drilling cost formula is in evaluating the efficiency of a bit run. A large fraction of the time required to complete a well is spent either drilling or making trips to replace the bit. The total time required to drill a given depth (D) can be expressed as the sum of the total rotating time during the bit run (tb), the non-rotating time during the bit run (tc), and the trip time (tt). The drilling cost formula is:
Cf = (Cb + Cr(tb + tc + tt)) / D
where Cf is the drilling cost per unit depth, Cb is the cost of the bit, and Cr is the fixed operating cost of the rig per unit time, independent of the alternatives being evaluated.
THE PUMPS WHICH ARE USED IN OIL INDUSTRY
Elvin Hasanli
elvinhasanli1996@gmail.com
Supervisor: Elmaddin Aliyev
Introduction
The transportation of oil is very important in industry, and pumps are crucial elements of the transportation system. As the heart makes blood move through the veins, pumps make oil move through pipelines. When oil is first extracted, the initial reservoir pressure is enough to lift the oil without the help of pumps. However, as time passes and the reservoir pressure decreases, artificial lift methods must be used to bring the oil out, and pumps have wide application as artificial lift methods. The oil and gas industry is producing, transporting and refining unconventional and heavier grades of crude oil from places such as Canada, California, Mexico and South America. Crude oil from these areas is highly viscous, and without the help of pumps all of this would be extremely hard. There are many types of pumps; the most common are centrifugal and positive displacement pumps.
Centrifugal pumps are among the most preferred pumping devices in the hydraulic world. A centrifugal pump uses a revolving device, an impeller, to convert input power into kinetic energy by accelerating the liquid. The impeller is the main part of the system. It has a series of curved vanes fitted between shroud plates. When the impeller rotates, it makes the fluid move radially outward. Since rotational mechanical energy is transferred to the fluid, the pressure and kinetic energy of the water rise at the discharge side of the impeller. At the suction side, water is displaced, so a negative pressure is induced at the eye. This low pressure helps draw a fresh stream of water into the system, and the process continues. If no water is present initially, the negative pressure developed by the rotating air at the eye of the impeller is too small to suck in a fresh stream of water. The casing is the part inside which the impeller is fitted. The casing becomes wider along the flow direction. This shape accommodates the newly added water stream and also helps reduce the exit flow velocity. The reduction in flow velocity results in an increase in static pressure, which is one of the most important aims, as it is required to overcome the resistance of the pumping system.
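The static-pressure rise from slowing the flow can be illustrated with a loss-free Bernoulli sketch; the velocities below are assumed figures, not data for a real pump:

```python
# Loss-free Bernoulli sketch of the static-pressure rise that comes
# from slowing the flow in the widening casing:
#   delta_p = 0.5 * rho * (v1**2 - v2**2)
# The velocities below are assumed figures, not data for a real pump.

rho = 1000.0   # water density, kg/m^3
v1 = 12.0      # flow velocity leaving the impeller, m/s (assumed)
v2 = 4.0       # flow velocity at the casing exit, m/s (assumed)

delta_p = 0.5 * rho * (v1**2 - v2**2)   # static pressure gain, Pa
```

In a real pump some of this gain is lost to friction and turbulence, but the sign of the effect is the same: a wider casing means a slower exit flow and a higher static pressure.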
Commonly, there are two types of positive displacement pumps: “rotary” and “reciprocating”. In a rotary screw pump, the rotor is located inside the casing and is connected to a driver. As the screw rotates, cavities (small holes) are formed between the casing and the screw. These cavities move towards the pump's discharge as the screw rotates and carries fluid. Another type of rotary positive displacement pump is the “two-screw” pump. One of the screws is coupled to the driver and is called the “driver screw” or “power screw”. The force exerted by the rotating screws pushes the entering fluid out of the pump and through the discharge piping.
Piston pumps are operated by a reciprocating piston. The liquid enters a pumping chamber via an inlet valve and is pushed out via an outlet valve by the action of the piston. Piston pumps use a plunger or piston to move media through a cylindrical chamber. These pumps operate with a suction stroke and a discharge stroke; the suction stroke occurs when the piston pulls back.
FORMATION OF HYDRATES IN OIL AND GAS
PIPELINES
Bextiyar Allahverdiyev
allahverdiyevb@bk.ru
Supervisor: Prof. Fuad Veliyev
One of the most pressing problems of the oil industry is the formation of hydrates in oil and gas pipelines while delivering petroleum through water to onshore or offshore facilities, or over long distances at low temperatures.
It is obvious that most wells produce not only pure gas or oil but also water, or even a mixture of them. In pipelines transporting such mixtures at shallow depth and low temperature, hydrates are formed by the reaction of gas and water. Additionally, if the distance from the reservoir to the receiving point (a terminal onshore or a platform) is long enough for a dangerous temperature decrease in the pipeline system, or the pipeline runs under water over long distances in a cold climate zone, inevitable problems associated with hydrate formation can arise. Another main cause of hydrate formation in a pipeline system is shutting the system down for a while for various purposes (e.g. maintenance operations), which lets the temperature fall so far that hydrates emerge.
Since the early development of the petroleum industry this has remained one of the unsolved problems of engineering; nevertheless, several methods are applied to reduce hydrate formation during operation or shut-down. The first method is drying the gas before delivery, which reduces the water-to-gas ratio. The second is the use of inhibitors, classified as kinetic and thermodynamic inhibitors; the latter are the most common and popular, applied via heating the gas, reducing the pressure, or injecting salt solutions, alcohols or glycols. However, this not only costs a great deal but also reduces the effectiveness of time management, which cuts into revenues; moreover, it does not guarantee the safety of the process. The last method is the removal of hydrates via a sudden pressure decrease at the well plug.
In conclusion, despite the development of modern science, the hydrate problem has not been solved completely.
THERMOHYDRODYNAMICAL EFFECTS
IN PRE-TRANSITION PHASE ZONES
Subhana Allahverdiyeva
Subhana.allahverdiyeva@gmail.com
Supervisor: Prof. Fuad Veliyev
1. Introduction
According to the kinetic theory of phase transformations, up until the critical α–β phase transition the phase α contains embryos of the new phase β, whose numbers correspond to Boltzmann's distribution; as a result, the thermodynamic properties of the system change profoundly.
These effects can be illustrated using the paraffin system.
2. Experimental results
The first factor that may be studied is the dependence of the ratio of the isobaric heat capacity to the thermal expansion coefficient (Cp/αp) on the temperature in the pre-transition zone. The process is reversible and adiabatic. For this case we have:
dT = T(αp/Cp)ΔP
For small changes of P and T the ratio αp/Cp may be taken as a constant, and the equation can be written as:
Cp/αp = T0(ΔP/ΔT)
It was experimentally revealed that at temperatures well above the critical value Tcr the ratio Cp/αp is virtually constant.
However, on approaching the critical point (Tcr), embryos of a new phase appear, as a result of which an essentially non-linear change of the ratio takes place.
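A minimal sketch of how Cp/αp could be estimated from such a measurement, using the relation Cp/αp = T0(ΔP/ΔT); all values are illustrative assumptions, not data from this work:

```python
# Sketch of estimating Cp/alpha_p from a measured pressure step and the
# resulting temperature response, per Cp/alpha_p = T0*(dP/dT).
# All values are illustrative assumptions, not data from this work.

T0 = 340.0        # K, temperature well above the transition (assumed)
delta_P = 2.0e5   # Pa, small applied pressure change (assumed)
delta_T = 0.05    # K, measured adiabatic temperature response (assumed)

ratio = T0 * delta_P / delta_T   # Cp/alpha_p, in pressure units (Pa)
```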
In the next step the heat exchange process was considered, and
change of the thermal diffusivity in pre-transition temperature
zone was experimentally investigated.
The thermal diffusivity is one of the basic parameters of the heat transfer process; it is related to the specific heat capacity (Cp), thermal conductivity (λ) and density (ρ) as shown below:
a = λ/(ρCp)
It was revealed that a small change of temperature leads to a considerable change in the heat transfer process. For instance, a decrease of temperature from 333 K to 325 K increases the thermal diffusivity more than 25 times.
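The relation a = λ/(ρCp) can be evaluated directly; the property values below are rough assumed figures for a paraffin-like material, not measurements from this work:

```python
# Direct evaluation of a = lambda / (rho * Cp). Property values are
# rough assumed figures for a paraffin-like material, for illustration.

lam = 0.25     # thermal conductivity, W/(m*K) (assumed)
rho = 900.0    # density, kg/m^3 (assumed)
cp = 2100.0    # specific heat capacity, J/(kg*K) (assumed)

a = lam / (rho * cp)   # thermal diffusivity, m^2/s
```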
Reference
1. Mirzajanzade A.H., Bolotov A.A. Rheological properties of gas-liquid solutions near the saturation pressure. Izvestiya Akademii Nauk SSSR, №1, 1988
2. Veliev F.H., Change of thermophysical parameters in
pretransition temperature zone, Journal of Kocaeli
University, №1, 1999
NEGATIVE PRESSURE IN LIQUIDS
Rafael Samadov
Rafa.L.Samedov@gmail.com
Supervisor: Fuad Veliyev
1. Introduction
Negative pressure is considered to be one of the metastable states, in which a liquid can be stretched up to a certain limit. The metastable condition in this case resembles simultaneous supercooling and superheating. At first sight it seems absurd to consider the possibility of a liquid existing in both extreme conditions at once. However, this can be demonstrated if the investigation of negative pressure is carried out with perfectly clean apparatus. There are various facts confirming the presence of negative pressure in some vital systems of living organisms, e.g., in the blood vessel system.
From a theoretical point of view, the maximum negative pressure that ideally pure water can sustain is approximately 10⁹ N/m². This means that an imaginary rope of water with a diameter of 0.01 m would be able to withstand an enormous extending effort on the order of 10⁵ N. It can be concluded that large values of negative pressure correspond to a strong ability to resist extending efforts.
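A quick order-of-magnitude check of the water-rope figure, using the theoretical tensile limit quoted above:

```python
import math

# Order-of-magnitude check of the "rope of water" figure above:
# the theoretical tensile limit of ideally pure water (~1e9 N/m^2)
# acting on a 0.01 m diameter cross-section.

p_max = 1.0e9                         # theoretical limit, N/m^2
diameter = 0.01                       # m
area = math.pi * diameter ** 2 / 4.0  # cross-sectional area, m^2

force = p_max * area                  # N, on the order of 1e5
```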
Figure 1: Tension manometer
2. Experimental results
The history of negative pressure begins in 1843, when F. M. Donny, using the apparatus illustrated in Figure 1, showed the ability of liquids to withstand this kind of metastable condition. The apparatus consists of a U-tube with one long limb sealed at the top and one short limb linked to a vacuum pump. When the long limb is completely filled with liquid (by tilting it into the horizontal position and then restoring it to the vertical), the liquid is held up by the pressure of the atmosphere at the free surface, B. As the pressure at B is reduced to nearly zero, the liquid in the long limb ordinarily starts to fall until it levels with B. But if care is taken to denucleate the liquid by removing all traces of undissolved air from the long limb, it will remain standing in this limb when the pressure at B is reduced to zero. Under these conditions the pressure at A is less than zero absolute, by an amount depending upon the height AB.
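The pressure at the sealed end follows from simple hydrostatics, P_A = P_B − ρgh; the density and height below are assumptions for illustration:

```python
# Hydrostatic sketch of the pressure at the sealed top A of the long
# limb: P_A = P_B - rho*g*h. Density and height AB are assumptions.

rho = 1000.0   # liquid density, kg/m^3 (water-like, assumed)
g = 9.81       # m/s^2
h = 2.0        # height AB, m (assumed)
P_B = 0.0      # absolute pressure at the free surface after pumping, Pa

P_A = P_B - rho * g * h   # Pa; below zero absolute (tension) for h > 0
```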
In fact, Donny used denucleated sulfuric acid, with the help of which he obtained a negative pressure of approximately −0.12 bar. Other scientists, such as Osborne Reynolds, Berthelot and Meyer, also conducted experiments on obtaining negative pressure. It is worth noting that the record value of negative pressure (−42.5 MPa) was achieved by L. J. Briggs.
It is important to note that the trials conducted by the scientists mentioned above relate to results obtained with homogeneous liquids, or liquids of one kind. The essence of the present topic, however, lies in the creation of negative pressure in real impure, unclean, heterogeneous liquid systems. The basic idea here was to reach negative pressure through the sudden character of the extending efforts.
Figure 2: Negative pressure wave
There are two ways of entering the metastable (overheated) zone of the liquid-steam phase diagram: the quasistatic and the impulse methods. The idea of the impulse method is that the pressure falls so quickly that the existing centers of steam generation, such as bubbles, embryos and admixtures, have no opportunity to act during this time. Under these conditions the “purity” of the system is not decisive, and states of an overheated liquid with negative pressure may exist.
On this basis, long-running experimental work has been performed to create impulsive negative pressure in real heterogeneous compound liquid systems, such as tap water, crude oil and solutions, and to use the phenomenon of negative pressure to raise the effectiveness and efficiency of various technological processes. The following results and the description of the trial help in understanding the significance of negative pressure as an energy factor in the evaluation of many transient thermohydrodynamic processes in nature and in production systems.
Figure 3: Variation of the stream's temperature
The conducted experiment consists of the following sequence of steps: 1) The test liquid filled a tank with a volume of 3 m³ attached to a horizontal steel pipe with an internal diameter of 0.04 m and a length of 30 m, and a certain initial pressure P0 was created in the closed system with the help of compressed air. 2) A valve placed on the free end of the pipe allowed the liquid system to be opened quickly (in 10⁻² s), and oscillograms indicating the changes in pressure and temperature at two points of the stream (0.5 m and 30 m away from the valve) were recorded by transducers mounted inside the pipe. 3) The pressure sensors were semiconductor strain-gauge units with uninterrupted linear characteristics in the regions of extension and compression.
The liquids used in the experiment included tap water and such unclean liquids as crude oil and clay solutions.
Figure 2 represents the typical variation of pressure with time in a crude oil (ρ = 934 kg/m³) stream at the two established test points, with initial pressure P0 = 0.7 MPa and temperature T0 = 298 K.
The liquid at the mouth of the pipe behaves like a boiling system. The process is followed by a quick, considerable fall in the temperature of the stream (7-10), after which it is slowly restored, apparently due to the heat transfer process accompanied by cavitation, reverse condensation of steam, and dissolving of the extracted gas (Fig. 3).
The important result of this particular investigation is the possibility of generating negative pressure waves in real liquid systems. There are several natural effects and technological applications associated with the use of negative pressure.
One example of natural impacts is geological effects. As a matter of fact, extreme dynamic processes in the underground medium can be considered a synergetic manifestation of negative pressure together with other thermohydrodynamical factors.
Energy-saving technologies: new technologies utilizing the negative pressure phenomenon have been worked out and widely field-tested for cleaning oil-producing hydraulic systems (wellbore, pipeline) of various accumulations and for increasing the effectiveness of oil production under different well operation methods.
There is at least one situation where the ability to employ negative pressures could make the difference between life and death. Thousands of men have died of thirst in mangrove swamps and in waterless forest or brush while the sap ducts of the trees around them were full of drinkable but inaccessible fluid. One way of solving this problem would be to invent a negative-pressure syringe which could help draw the life-giving fluid out of the trees.
The most exciting prospect, however, is that of
applying negative pressure to irrigation systems.
3. References
1. Veliev F.H., “Negative pressure waves in
hydraulic systems”, Engineering and Technology International
Journal of Mechanics, Aerospace, Industrial and Mechatronics
Engineering Vol:9 №: 1, 2015
2. Hayward A.T., “Negative Pressure in Liquids:
Can It Be Harnessed to Serve Man?”, American Scientist, 59,
1971
HYDRAULIC SHOCK IN PIPES WITH
CONSIDERATION OF THE TEMPERATURE FACTOR
Fidan Selim-zade
f.selim_zade@mail.ru
Supervisor: Prof. Fuad Veliyev
The problem of the influence of fluid temperature variation, caused by the change of pressure, on hydraulic shock in pipelines is considered.
To calculate the pressure jump during a hydraulic shock in a pipeline, the well-known Zhukovskii formula is used:
P − P0 = λν0ρ0
The application of this relationship to the estimation of hydraulic shock in water pipelines yields values almost identical to the actual ones (about 1% error), so the formula can be considered successful. However, if we consider flows of different fluids in pipelines more generally, neglecting the temperature factor in the formulation of the problem may lead to substantial errors when estimating the wave process in a hydraulic system.
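The Zhukovskii estimate itself is easy to evaluate; the wave speed and flow velocity below are typical assumed figures for water in a steel pipe, not values from this work:

```python
# Evaluation of the Zhukovskii jump P - P0 = lambda * v0 * rho0,
# with lambda the pressure-wave speed. The wave speed and flow
# velocity are typical assumed figures for water in a steel pipe.

wave_speed = 1200.0   # lambda, m/s (assumed)
v0 = 2.0              # initial flow velocity, m/s (assumed)
rho0 = 1000.0         # density, kg/m^3

delta_p = wave_speed * v0 * rho0   # pressure jump, Pa (2.4 MPa here)
```

Even a modest flow velocity thus produces a jump of megapascal order, which is why valve-closure transients matter in pipeline design.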
Differential equations describing the propagation of a shock wave in a pipeline are obtained with consideration of the adiabaticity of the process, and corresponding formulas have been derived to estimate the velocity of sound and the greatest increase of pressure in a pipeline in the presence of a hydraulic shock. The following assumption is placed at the basis of the investigation: owing to the rapid compression and expansion of the medium caused by shock wave propagation, heat in a given area of the system cannot be exchanged with the surrounding medium, so there is no influx or removal of heat. Therefore, the propagation of the shock wave may be considered a reversible adiabatic, or isentropic, process, resulting in a change of the medium temperature as a consequence of the pressure change. Consequently, a temperature factor should be introduced into the equations. The outcomes of the work may be presented as the following equations:
Velocity of the shock wave propagation in an elastic fluid-filled pipe:
ω = 1 / √( ρ0/K + 2R0ρ0/(eE) − ρ0αp²T0/Cp )
Largest increase in the pressure after a direct hydraulic shock:
P − P0 = λν0ρ0 / √( 1 − λ²ρ0αp²T0/Cp )
Conclusion. The results of the experimental investigations carried out point to the necessity of taking the temperature factor into account when estimating the parameters of a hydraulic shock in pipelines.
NEW CLEANUP METHOD OF OIL-CONTAMINATED SOILS WITH CHEMICAL EXTRACTION
Sadigli Lyaman
laman.sadikhli@gmail.com
Supervisor: Prof. Rena Mustafayeva
Introduction
As a result of oil production, an immense amount of oil-polluted soil is left unusable, which creates a hazard for the environment. A lot of work has been done to eliminate this danger, such as:
1. Decreasing the amount of hydrocarbons in soils via bio-decomposition;
2. The thermoadsorption method;
3. Extraction of hydrocarbons with organic and inorganic solvents.
However, these methods do not ensure a complete cleanup of the said soils. The purpose of this project is to eradicate this problem.
Proposed solution
To resolve this problem, the use of solvents with a certain polarity, C3-C4 alkanols and their ethers, was proposed as organic solvents. These solvents dissolve the organic molecules properly and do not extract the inorganic salts and mineral substances. The applied alkanols and their ethers evaporate at low temperatures (35-45°C) and practically do not dissolve. Therefore, the easy drainage of the treated soils and the ability to recycle the solvent within a closed system make it possible to create a waste-free technology. The experiment was conducted in a Soxhlet extractor, and after 1.5 hours of cycling 98.5% removal of oil and oil products from the soil was achieved.
Advantages
This method surpasses the presently used ones by many measures:
1. The components used are recycled repeatedly;
2. The process runs at 35-45°C, which lowers energy costs;
3. The components allow extraction of all oil components from soil and drilling muds;
4. The method allows extraction of a large quantity of highly viscous oils, which can cover the costs;
5. The solvent allows cleaning the soil of oil products to a high level (99.9-100%).
In addition to these advantages, C3-C4 alkanols and their ethers are commonly produced in industry, making the use of this solvent economically advantageous. Therefore, this solvent is proposed for an ex-situ treatment technology to eradicate this problem.
“PHYSICS AND MATHEMATICS” SECTION
ENGINEERING APPLICATIONS OF DIFFERENTIAL
EQUATIONS (SECOND-ORDER)
Hajar Hidayatzade
Chinara Guliyeva
hidayetzade.hecer@gmail.com
cg209@hw.ac.uk
Supervisor: Assoc. Prof. Khanum Jafarova
A differential equation is a mathematical equation that
relates some function with its derivatives. In applications, the
functions usually represent applied mathematics, physics,
and engineering quantities, the derivatives represent their rates
of change, and the equation defines a relationship between the
two. An example of modeling a real world problem using
differential equations is the determination of the velocity of a
ball falling through the air, considering only gravity and air
resistance. The ball's acceleration towards the ground is the
acceleration due to gravity minus the acceleration due to air
resistance. A differential equation is an equation involving variables (say x and y) and ordinary derivatives, i.e. dy/dx (first order), d²y/dx² (second order). When the derivative is of the form dy/dx, then x is the independent variable and y is the dependent variable. The order of a differential equation is determined by the highest order derivative in the differential equation.
a(x) d²y/dx² + b(x) dy/dx + c(x)y = f(x).
Here, y is the dependent variable and x is the independent variable; the highest order term is d²y/dx², and so it is a second order differential equation. It is linear since the
coefficients of y and its derivatives are functions of x only (so
there are no powers of y or its derivatives or products of these).
If a differential equation is not linear, it is said to be nonlinear. If f(x) = 0 the differential equation is homogeneous; otherwise it is inhomogeneous.
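The falling-ball model mentioned earlier, with a linear air-resistance term, can be sketched numerically; the drag coefficient and mass are assumptions for illustration:

```python
# Numerical sketch of the falling-ball model mentioned earlier:
#   m*dv/dt = m*g - k*v   (gravity minus linear air resistance)
# The drag coefficient and mass are assumptions; forward-Euler steps.

g = 9.81    # m/s^2
k = 0.5     # linear drag coefficient, kg/s (assumed)
m = 1.0     # kg (assumed)
dt = 0.001  # time step, s
v = 0.0     # released from rest

for _ in range(20_000):          # integrate 20 s of fall
    v += (g - (k / m) * v) * dt  # dv/dt = g - (k/m)*v

terminal = m * g / k             # analytic terminal velocity, m/s
```

After long enough a fall, the numerical velocity settles at the terminal value mg/k, where the drag force balances gravity.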
Second-order linear differential equations have a variety of applications in science and engineering. In this section, the vibration of a spring will be introduced. Second order equations are usually applied to mechanical vibrations, in particular to a mass hanging from a spring.
Figure 1. Free vibration
There is a spring of length l, called the natural length, and an object with mass m attached to it. When the object is attached to the spring, the spring stretches by a length L. The equilibrium position is the position of the center of gravity of the object as it hangs on the spring with no movement.
Undamped free vibration:
At equilibrium, Hooke's law gives:
mg = kL
where k is the spring constant (k > 0), m is the mass, L is the elongation of the spring (see Figure 1), and g is the acceleration due to gravity. The simplest mechanical vibration equation occurs when γ = 0, F(t) = 0 (γ is the damping constant and F(t) is the externally applied forcing function):
mu’’ + ku = 0
where u(t) is the displacement of the mass. The general solution for u(t) is then:
u(t) = C1cos ωt + C2sin ωt
where ω = √(k/m) is called the natural frequency of the system. A motion of this type is called simple harmonic motion.
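Combining mg = kL with ω = √(k/m) gives ω = √(g/L), so the frequency can be found from the static stretch alone; the mass and stretch below are assumed numbers:

```python
import math

# Tying the two relations together: from the static stretch L
# (mg = k*L) the natural frequency is omega = sqrt(k/m) = sqrt(g/L).
# The mass and stretch below are assumed numbers for illustration.

g = 9.8    # m/s^2
m = 2.0    # kg (assumed)
L = 0.05   # m, static stretch of the spring (assumed)

k = m * g / L              # spring constant from Hooke's law, N/m
omega = math.sqrt(k / m)   # natural frequency, rad/s (equals sqrt(g/L))
```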
Damped free vibration (γ>0, F(t)=0):
When damping is present, the motion equation of the
unforced mass-spring system becomes:
mu’’ + γu’ + ku = 0
where m, γ, k are all positive constants. The displacement u(t) behaves differently in this case, depending on the size of γ relative to m and k. There are three possible classes of behavior, based on the possible types of roots.
Case I. Two distinct real roots:
When γ² > 4mk, there are two distinct real roots, a and b. The displacement is in the form
u(t) = C1e^(at) + C2e^(bt)
A mass-spring system with this type of displacement function is called overdamped.
Case II. One repeated real root:
When γ² = 4mk, there is one repeated real root, and it is negative: r = −γ/(2m). The displacement is in the form
u(t) = C1e^(rt) + C2te^(rt)
A system exhibiting this behavior is called critically damped.
Case III. Two complex conjugate roots:
When γ² < 4mk, there are two complex conjugate roots, whose common real part, λ, is always negative. The displacement is in the form
u(t) = C1e^(λt)cos μt + C2e^(λt)sin μt
A system exhibiting this behavior is called underdamped.
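The three cases can be told apart directly from the sign of the discriminant γ² − 4mk of the characteristic equation mr² + γr + k = 0; a small sketch:

```python
# Classifier for the three damping cases, based on the sign of the
# discriminant gamma^2 - 4*m*k of m*r^2 + gamma*r + k = 0.

def damping_case(m, gamma, k):
    disc = gamma ** 2 - 4 * m * k
    if disc > 0:
        return "overdamped"          # two distinct real roots
    if disc == 0:
        return "critically damped"   # one repeated real root
    return "underdamped"             # two complex conjugate roots

cases = [damping_case(1.0, 5.0, 4.0),   # 25 - 16 > 0
         damping_case(1.0, 4.0, 4.0),   # 16 - 16 = 0
         damping_case(1.0, 1.0, 4.0)]   # 1 - 16 < 0
```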
Differential equations play an important role in modelling
virtually every physical and technical process. These equations
are used to solve real-life problems. Many fundamental laws of
physics and chemistry can be formulated as differential
equations. We can consider differential equations as the
language in which the laws of nature are expressed.
Understanding properties of solutions of differential equations is
fundamental to much of contemporary science and engineering.
FITTING A PIPELINE WITH MINIMAL COST BY
USING OPTIMIZATION OF A FUNCTION ON A
CLOSED INTERVAL
Narmin Abbasli
Emin Balashov
abbaslinarmin@gmail.com
balasov.emin@gmail.com
Supervisor: Assoc. Prof. Khanum Jafarova
A common problem encountered by the oil industry is
determining the most cost effective pipeline route in connecting
various wells in an oil fertile area. As natural gas pipeline
systems have grown larger and more complex, the importance
of optimization of design and operation of pipelines has
increased. The investment costs and operation expenses of
pipeline networks are so large that even small improvements in
optimization of design and operation conditions can lead to
substantial saving in capital and operating cost. Optimization of
natural gas pipelines is often used to determine the optimum
system variables in order to minimize the total cost. This work
presents an optimization framework for the routing and
equipment design of main pipelines to be used for fluid
transmission. Natural gas transmission pipelines transport large
quantities of natural gas across long distances. They operate at
high pressures and utilize a series of compressor stations at
frequent intervals along the pipeline (more than 60 miles) to
move the gas over long distances. The optimization of the
design of a gas pipeline to transmit natural gas involves a
number of variables, which include pipe diameter, pressure,
temperature, line length, spacing between compressor
stations, required inlet and delivery pressures, and delivery
quantity. Each of these parameters influences the overall
construction and operating cost in some degree and the selection
of one or more will determine the economics of the construction
and operation of the system. This is as true for the design of a
system from a clean sheet of paper (grass roots) as it is for the
development and upgrading of an existing system; the only real
difference between these two cases is the extent to which
some of the variables are already fixed. Because of the number
of variables involved, the task of establishing the optimum can
be quite involved and in order to ensure a robust solution, many
options may have to be investigated. Moreover, a simulation
program has been developed that is intended to provide
optimum solutions to the design of a pipeline system and to
permit rapid investigation of the effects of significant
variables on the optimum design. The software
program “Lingo” is used to obtain the solution procedure for
optimal design and operation of gas transmission pipelines.
This work will clearly illustrate how to determine the
optimal cost by using derivatives and global extrema of
functions. We will also consider additional cost in order to get
more accurate results. To recap, this project is readily
applicable to real-life problems, and the mathematics involved
is not too difficult.
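A standard textbook instance of optimization on a closed interval, in the spirit of the problem described here, can be sketched as follows. All distances and unit costs below are invented illustrative numbers, not data from this work:

```python
import math

# A well sits 2 km offshore; a terminal lies 10 km along a straight shore.
# Underwater pipe costs 80 per km, onshore pipe costs 50 per km.
# Land the pipe x km along the shore, 0 <= x <= 10, and minimize total cost.

def cost(x, depth=2.0, shore=10.0, c_water=80.0, c_land=50.0):
    """Total cost: underwater diagonal run plus onshore run."""
    return c_water * math.hypot(depth, x) + c_land * (shore - x)

# Closed-interval method: compare the interior critical point, where
# cost'(x) = 0, i.e. x = depth * c_land / sqrt(c_water^2 - c_land^2),
# with the endpoint values x = 0 and x = shore.
x_crit = 2.0 * 50.0 / math.sqrt(80.0 ** 2 - 50.0 ** 2)
candidates = [0.0, x_crit, 10.0]
best = min(candidates, key=cost)
```

With these numbers the interior critical point (about x ≈ 1.60 km) beats both endpoints, so the global minimum on the closed interval lies there.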
APPLICATIONS OF MATHEMATICS IN MECHANICAL
ENGINEERING PROBLEMS
Ilkin Cavadlı
Ilkin_cavad@mail.ru
Supervisor: Khanum Jafarova
Essentially every part of an automobile engine, and indeed of the
entire vehicle, involves applications of mathematics, generally
described as engineering principles. Some of the formulas used
to design car engines are derived using integrals. Mathematics
is also required for the exact calculations that govern how a car
operates. The main area where math is applied in cars is the
cylinders. It is thanks to mathematics that modern car models
and engines are developing so rapidly.
Figure 1. Internal combustion engine application
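As one concrete instance of such an integral formula, the displacement of an engine is the integral of the constant bore cross-section area over the stroke, V = ∫₀ˢ A dx = A·s. The bore and stroke values below are illustrative, not from any particular engine:

```python
import math

def swept_volume(bore_cm, stroke_cm, n_cylinders=1):
    """Engine displacement as the integral of the constant
    cross-section area A over the stroke: V = A * stroke."""
    area = math.pi * (bore_cm / 2) ** 2     # bore cross-section, cm^2
    return n_cylinders * area * stroke_cm   # total displacement, cm^3

# e.g. a hypothetical four-cylinder engine, 8 cm bore and 9 cm stroke
v = swept_volume(8.0, 9.0, n_cylinders=4)   # about 1810 cm^3
```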
The mathematical sciences have great impact on a wide range of
Army systems and doctrine. The objective of the Mathematical
Sciences program is to respond to the quantitative requirements
for enhanced capabilities of the Army
in the twenty-first century in technologies related to the
physical, informational, biological, and behavioral sciences and
engineering. Mathematics plays an essential role in measuring,
modeling, analyzing, predicting, and controlling complex
phenomena and in understanding the performance of systems
and organizations of critical interest to the Army. Mathematical
Sciences play an important role in solving Army issues related
to materials, information, robotics, networks, C4ISR, testing,
evaluation, decision-making, acquisition, training, and logistics.
With the advent and subsequent refinement of high-performance
computing and large-scale simulation, the mathematical
sciences have become an integral part of every scientific and
engineering discipline and the foundation for many
interdisciplinary research projects. Computing and simulation
now complement analysis and experimentation as fundamental
means to understand informational, physical, biological, and
behavioral phenomena. High-performance computing and
advanced simulation have become enabling tools for the Army
of the future. Real-time acquisition, representation, reduction,
and distribution of vast amounts of battlefield information are
key ingredients of the network-centric nature of the modern
digital battlefield. Management and understanding of modern
information-dominated battlefields and complex, inter-related
networks provide significant motivation for fundamental
research in the design and development of intelligent and
cooperative systems.
MATHEMATICS IN OUR LIFE
Nazrin Dolkhanova
nazrindolkhanova@gmail.com
Supervisor: Khanum Jafarova
Introduction:
We, the children of the universe, use math in order to build our
lives and unravel the secrets of the world. Is there anything
in math besides formulas, scientific discoveries and
calculations? In your opinion, can we encounter math in real life?
The calculations successfully done by the sportsmen, or finding
the angle and the displacement while playing Angry Birds, or
the amount of flour added by our mothers while cooking the
cake? If you look around more closely, you will find that,
in fact, the majority of the things we see or hear are somehow
related to the laws of math. Here, we are going to discuss such
topics as rules of the figures inside figures, the harmony in
music, finding the center of the planet Earth and the criteria of
the perfect human face.
Music:
Mathematics plays an important role in music. It is not limited
to note durations and rhythm. The question is sometimes
asked how Beethoven could create such beautiful melodies
while being completely deaf. The basic principle is that notes
must be in harmony with each other. From a mathematical point
of view, when the sine graphs of several notes played together
intersect at their starting point (0, 0) and then regularly again,
for example at (0.042, 0), the result is known as
consonance, which sounds naturally pleasant to our ears. But
perhaps equally captivating is Beethoven’s use of dissonance.
Take a look at the second panel: the sine graphs of two
dissonant notes show waves that are largely out of sync,
matching up rarely if at all.
Figure 1. Consonance, dissonance
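The idea of consonance as regularly re-intersecting sine graphs can be checked numerically: two sine waves return to zero together after the reciprocal of the greatest common divisor of their frequencies. The integer frequencies below are rounded illustrative values, not exact musical pitches:

```python
import math

def realignment_time(f1_hz, f2_hz):
    """Time in seconds after which sin(2*pi*f1*t) and sin(2*pi*f2*t)
    both return to zero together, for integer frequencies in Hz."""
    return 1.0 / math.gcd(f1_hz, f2_hz)

# A 3:2 frequency ratio (a perfect fifth) realigns quickly -> consonant
t_fifth = realignment_time(300, 200)   # 0.01 s
# Nearby but unrelated frequencies realign slowly -> dissonant
t_clash = realignment_time(301, 200)   # 1.0 s
```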
Symmetry:
Sometimes we do not pay attention to the symmetric objects
around us. Symmetry, however, is also a branch of
mathematics. Symmetry in everyday
language refers to a sense of harmonious and beautiful
proportion and balance. It's not only in science, but also in
nature, architecture, music, and most importantly in biological
creatures.
Golden ratio:
Leonardo Fibonacci, who tried to explain the laws of nature
using math, introduced the beauty formula "golden ratio" to the
world. He wanted to determine the number of rabbits
produced by a pair of rabbits. Working through the problem, he
found that the number increases according to the following
progression: 1, 2, 3, 5, 8, 13, 21, .... Thus, each term of it equals the sum of the
two previous terms. This progression is a key to the ultimate
beauty, so called Fibonacci sequence. Fibonacci determined that
the ratio between each term of this progression and the
preceding one equals approximately 1.618033..., which is the
"golden ratio", famously studied by the artist, sculptor,
inventor and mathematician Leonardo da Vinci. There is a special
relationship between the Fibonacci numbers: the ratio of each
term of the Fibonacci sequence to the previous term is
approximately the same number. This ratio is called the golden
ratio:
x/y = (x + y)/x = 1.618033... = φ
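The convergence of the ratio of successive Fibonacci terms to φ is easy to verify in a few lines (a sketch; the helper names are our own):

```python
def fib(n):
    """First n Fibonacci numbers, starting 1, 1, 2, 3, 5, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

PHI = (1 + 5 ** 0.5) / 2   # golden ratio, about 1.618033988...

f = fib(20)
ratio = f[-1] / f[-2]      # ratio of consecutive terms approaches PHI
```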
The golden ratio has been denoted by the Greek letter φ (phi)
named after Phidias, a sculptor who is said to have employed it.
It seems like all the beauty surrounding us is based on this
number. We can see it in the human body or a cone, a snail shell
or a Lunaria flower, the works of architects and artists, or even
the universe. A remarkable effect of the Fibonacci sequence is
that it uncovers the secrets of the beauty of the human face and
body form. For example, in the face, the ratio of the width to the
length of the face, and in the body, the ratio of the distance from
foot to navel to the distance from navel to the top of the head,
both give the number φ. As we know, human hair is not straight, it is spiral
or leaning, so that the tangent of curve or angle of curve gives
the golden ratio.
The next example given for the relationship between
mathematics and the world is that Mecca is the center of the earth,
which is argued using the golden ratio. Thus, the ratio of the distance
from Mecca to the North Pole (7731.68 km) to the distance from
Mecca to the South Pole (12348.32 km) is equal to 1.618. In
addition, the ratio of the distance between the two poles (19980.00
km) to the distance from Mecca to the South Pole equals
1.618 too. In some other calculations the center is also Mecca,
and the ratio of the east length to the west length is again the
number φ. Moreover, the ratio of the west length to the length
of the circle equals 1.618.
Figure 2. Golden ratio and the city of Mecca
Johannes Kepler said: “Geometry has two great treasures; one is
the Theorem of Pythagoras; the other, the division of a line into
extreme and mean ratio. The first we may compare to a measure
of gold, the second we may name a precious jewel".
Fractal:
We can see the mathematical relationship of fractals in nature's
fascinating creations. A fractal is a natural phenomenon or a
mathematical set that exhibits a repeating pattern displayed at
every scale. In other words, a shape built from proportionally
reduced or enlarged copies of a pattern is called a fractal. The main
feature of a fractal is that the whole model is the same as a small
part of itself. Examples include pyramid sprouts, snowflakes, fern
leaves, icy glass, aloe plants, dragonfly wings, blood vessels and so on.
Conclusion:
In conclusion, the animate and inanimate beings we see, the
environment around us, the music we hear and everything else
are directly connected to mathematics. The laws of musical
harmony, symmetry and fractals in nature, and the golden ratio
which conquers the world, all let us say: "mathematics is in our life".
NUCLEAR POWER INDUSTRY
Afsana Zeynalli
efsanezeynalli@gmail.com
Supervisor: Corresponding Member of ANAS, Prof.
Z.H.Asadov
1. Abstract
This thesis presents nuclear processes and the
use of nuclear power in industry.
2. Introduction
Nuclear energy is the energy released in nuclear
reactions (usually using uranium-235 and plutonium
isotopes) and is observed as nuclear fission, nuclear
decay and nuclear fusion. Neutrons collide with nuclei
and cause the creation of new neutrons and other
particles. These particles carry very high
kinetic energy, which is turned into heat as they
collide with the surrounding matter.
2.1 Nuclear reaction:
Let's look at the reaction between the uranium-235 isotope and a
neutron:
U-235 + n → U-236* → fission fragments + new neutrons + ~200 MeV
In this reaction the uranium-235 isotope, after absorbing a
neutron, turns into a highly excited uranium-236 isotope. This
new isotope is very unstable, so in a short time it splits into two
parts and several new neutrons. In this process about 200 MeV of
energy is released, and the new neutrons cause the fission of
further nuclei.
In the nuclear process, fast neutrons, gamma
radiation and a great amount of energy are released.
2.2 Types of decay
Historically, it was evident from the direction of the electromagnetic
forces applied to the radiations by external magnetic and electric
fields that alpha particles from decay carried a positive charge,
beta particles carried a negative charge, and gamma rays were
neutral.
Figure 1. Alpha particles may be completely stopped by a
sheet of paper, beta particles by aluminum shielding, and
gamma rays only by much denser shielding such as lead.
The types of decay can be summarized as follows (type of
radiation; description; change; example):

Alpha. The nucleus releases a He nucleus, known as an alpha
particle. The remaining nucleus weighs 4 units less and has 2
fewer protons.
Example: ²³⁸₉₂U → ²³⁴₉₀Th + ⁴₂He

Beta. The nucleus releases an electron and converts a neutron to
a proton. The remaining atom has the same mass, one more
proton and one less neutron.
Example: ²³¹₅₃I → ²³¹₅₄Xe + ⁰₋₁e

Gamma. The nucleus goes from a high energy state to a low
energy state. The nucleus remains the same, but a gamma ray is
released.
Example: ²³⁸₉₂U → ²³⁸₉₂U + γ
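All of these decay modes obey the exponential decay law N(t) = N₀ · 2^(−t/T½), where T½ is the half-life. A small generic sketch, not tied to any particular isotope in the table:

```python
def remaining_fraction(t, half_life):
    """Fraction of radioactive nuclei left after time t,
    N(t)/N0 = 2 ** (-t / half_life); t and half_life in the same units."""
    return 2.0 ** (-t / half_life)

# After one half-life, half of the nuclei remain; after two, a quarter.
f1 = remaining_fraction(8.0, 8.0)    # 0.5
f2 = remaining_fraction(16.0, 8.0)   # 0.25
```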
2.3 Disasters in nuclear field
Except advantages of nuclear power in industry, some serious
nuclear and radiation accidents have occurred. Benjamin K.
Sovacool has reported that worldwide there have been 99
accidents at nuclear power plants. Fifty-seven accidents have
occurred since the Chernobyl disaster, and 57% (56 out of 99)
of all nuclear-related accidents have occurred in the USA.
Nuclear energy is used in submarines, atomic electric stations
(AES) and spacecraft; in addition, several countries have already
attempted to produce nuclear weapons.
Conclusion
This thesis has shown that nuclear power is a major
source of energy today, and that the main needs of this industry
are to invest in its expansion, design new advanced
technological equipment, and reduce risk in the
workplace.
LIQUID CRYSTALS
Saida Alasgarova
alasgarovasaida@gmail.com
Supervisor: Assoc. Prof. Sevda Fatullayeva
It is well known that there are three common states of matter:
solid, liquid and gas. However, at the end of the 19th century
another phase of matter was introduced under the name "liquid
crystal". It is an intermediate state in which behaviors of both
liquids and solids can be found.
Liquid crystals were observed first in 1888 in the cholesteryl
benzoate compound.
Basically, liquid crystals are divided into two groups:
thermotropic and lyotropic liquid crystals [1].
Figure 1. Liquid crystal textures
These two groups differ from each other in their
properties. For instance, the basic units of thermotropic
liquid crystals are molecules whose phase transitions depend
on temperature and pressure only, while for lyotropic liquid
crystals dependence on the solvent is added to the list as well [2].
Thermotropics are widely used in the displays of electronic
devices, such as computers, touch screens, digital watches, etc.
They have different shapes which consist of rod-like, disk-like
and banana-shaped. The thermotropic crystals are utilized for
synthesizing organic molecules and bringing the modification
into technology.
Chemically, lyotropic liquid crystals are more interesting.
That is why in this work we consider their main features
and their applications in real life.
Lyotropic liquid crystals
A lyotropic liquid crystal is constituted of two or more
components that exhibit liquid-crystalline properties in certain
concentration ranges. In the lyotropic phases, solvent molecules
fill the space around the compounds to provide fluidity to the
system. Crystals of this type organize themselves into primary
structures without any external force. They are amphiphilic
which means they have hydrophilic and hydrophobic parts. In
this case, it can be said that lyotropic liquid crystals are similar
to the cell membrane.
Hydrophobic end
Hydrophilic end
Figure 2. Lyotropic liquid crystal
Water is usually used as the solvent; however, in the last 15
years experiments have shown that a solid solvent is also
possible. It has been shown that, by exploiting the chemical and
physical properties of lyotropic liquid crystals, smaller sizes of
a compound can be obtained. As an example, modern expensive
technology is only able to work at scales down to about 50 nm;
by means of liquid crystals, however, we can get smaller holes
in organic chemicals.
At the end of our research, we conclude that there are
several advantages to the application of liquid crystals in
different fields of industry. Thus, liquid crystals are applied in
crude oil recovery, detergent and soap production, cosmetics, etc.
If lyotropic liquid crystals are used on the surface of catalysts,
this will also have an effect, and some chemical processes will
be improved.
References:
1. Andrienko D. Introduction to liquid crystals. Inter. Max
Planck Res. School, 2006. P. 32.
2. Martin J.D., Keary C.L., Thornton T.A., Novotnak M.P.,
Knutson J.W., Folmer J.C. "Metallotropic liquid crystals
formed by surfactant templating of molten metal halides".
Nature Materials, 2006. V. 5. P. 271.
however, it was first developed in 1959 by mathematician Paul de
Casteljau using de Casteljau's algorithm, a numerically stable
method to evaluate Bézier curves, at Citroën, another French
automaker.
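De Casteljau's algorithm evaluates a Bézier curve by repeatedly interpolating between adjacent control points. A minimal sketch (the control-point coordinates are arbitrary illustrative numbers):

```python
def de_casteljau(points, t):
    """Evaluate a 2D Bezier curve with the given control points at
    parameter t in [0, 1] by repeated linear interpolation."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # Interpolate each adjacent pair; one fewer point per round
        pts = [
            ((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(pts, pts[1:])
        ]
    return pts[0]

# Quadratic Bezier with control points P0, P1, P2
curve_point = de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5)
```

The repeated averaging is what makes the method numerically stable, and it also shows directly why the curve stays inside the convex hull of its control points.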
The main areas where these curves are applied are the following:
Animation. In animation applications, such as Adobe
Flash and Synfig, Bézier curves are used to outline, for example,
movement. Users outline the wanted path in Bézier curves, and
the application creates the needed frames for the object to move
along the path. For 3D animation Bézier curves are often used to
define 3D paths as well as 2D curves for keyframe interpolation.
Fonts. TrueType fonts use composite Bézier curves composed
of quadratic Bézier curves. Modern imaging systems
like PostScript, Asymptote, Metafont, and SVG use composite
Béziers composed of cubic Bézier curves for drawing curved
shapes. OpenType fonts can use either kind, depending on the
flavor of the font.
String art. Quadratic Bézier curves are obtained from strings
based on two intersecting segments.
Computer graphics. Bézier curves are widely used in computer
graphics to model smooth curves. As the curve is completely
contained in the convex hull of its control points, the points can
be graphically displayed and used to manipulate the curve
intuitively. Affine transformations such as translation and
rotation can be applied to the curve by applying the respective
transform to the control points of the curve.
This work will highlight the above main areas of application
of Bézier curves.
GLOBAL WARMING
Asiman Saidzadeh
saidasiman@gmail.com
Supervisor: Assoc. Prof. Rena Abbasova
Nowadays, there are a lot of problems concerning our
fragile Earth. These are Acid Rains, Ozone Depletion and so on.
The main topic of this report, however, is the phenomenon
called Global Warming. This thesis describes the main culprits
of this phenomenon, gives extended information about human
impact on it, weighs some skeptical views of Global Warming,
and proposes some possible solutions.
First and foremost, what is Global Warming? Global
Warming is the phenomenon caused by Greenhouse Effect.
Scientists observed that the global temperature over the last few
decades has increased considerably (Figure 1). The term
"Greenhouse Effect" was not chosen at random: the processes
which occur in the atmosphere are similar to those in a
greenhouse.
Figure 1: Increase in the temperature
Sunlight in the form of visible light passing through the
atmosphere reaches the surface of the Earth. A very large
percentage of this light is absorbed by the surface, and the
remaining is reflected back to the atmosphere. Some part of this
radiation returns back to the open space and the rest is absorbed,
and then re-emitted by the so-called greenhouse gases in the
form of infra-red radiation. Thus greenhouse gases, such as
carbon dioxide, methane, nitrous oxide, water vapour,
chlorofluorocarbons (CFCs) cause “trapping” of the light in the
atmosphere as it is done by the transparent walls and roof of a
greenhouse. Therefore, it is thought that it is the greenhouse
gases which are the main culprits. However, as many scientists
believe, in the very top of the list of culprits is humanity. The
most potent greenhouse gas is carbon dioxide because of its
high concentration in the atmosphere. The concentration,
however, keeps increasing, and the increase is quite
steep (Figure 2).
The increase in the concentration of carbon dioxide, one
of the major atmospheric contributors to the greenhouse effect
has been carefully documented at the Mauna Loa Observatory in
Hawaii. The 1990 rate of increase was about 0.4% per year. The
interesting cyclic variations represent the reduction in carbon
dioxide by photosynthesis during the growing season in the
northern hemisphere. Current analysis suggests that the
combustion of fossil fuels is a major contributor to the increase
in the carbon dioxide concentration, such contributions being 2
to 5 times the effect of deforestation.
Figure 2: Increase in the concentration of carbon dioxide.
Not all scientists, however, believe that the global
temperature is really rising. The global warming controversy
concerns the public debate over whether global warming is
occurring, how much has occurred in modern times, what has
caused it, what its effects will be, whether any action should be
taken to curb it, and if so what that action should be. In the
scientific literature, there is a strong consensus that global
surface temperatures have increased in recent decades and that
the trend is caused primarily by human-induced emissions of
greenhouse gases. No scientific body of national or international
standing disagrees with this view, though a few organizations
with members in extractive industries hold non-committal
positions. Disputes over the key scientific facts of global
warming are now more prevalent in the popular media than in
the scientific literature, where such issues are treated as
resolved, and more in the United States than globally.
In conclusion, the only thing that remains unsaid is
the possible ways out of this environmental problem. First of all,
it would be better to use less fossil fuel; the time to switch
to renewable sources of energy has already come. Then, there is
an idea of scrubbing exhaust gases after combustion. Carbon
dioxide may be caught by reagents like calcium oxide in order
to be converted to less dangerous species. In order to decrease
the amount of nitrogen oxides, using pure oxygen (instead of air)
during combustion appears to be a possible solution. And,
finally, everything is in our hands. We should protect the
environment for the future generations. Save the environment,
save the future!
BRAIN CAPACITY, MECHANISM OF ADDICTION IN
BRAIN
Cavad Iskenderov
Rafael Abdullayev
cavad.iskenderov@gmail.com
abdullayevrafael@gmail.com
Supervisor: Assoc. Prof. Rena Abbasova
The human brain is the center of the human nervous
system. Enclosed in the cranium, it has the same general
structure as the brains of other mammals, but is over three times
as large as the brain of a typical mammal with an equivalent
body size. The brain has the size and appearance of a small
cauliflower. But thanks to its 100 billion nerve cells, we can
think, plan, imagine, and so much more. The brain has two
cerebral hemispheres. Each takes care of one side of the body,
but the controls are crossed: the right hemisphere takes care of
the left side, and vice versa. The brain monitors and regulates
the body’s actions and reactions. It continuously receives
sensory information, and rapidly analyses this data and then
responds, controlling bodily actions and functions. The brain
stem controls breathing, heart rate, and other autonomic
processes that are independent of conscious brain functions. The
neocortex is the center of higher order thinking, learning, and
memory. The cerebellum is responsible for the body’s balance,
posture, and the coordination of movement.
Making sense of the brain's mind-boggling complexity isn't
easy. What we do know is that it's the organ that makes us
human, giving people the capacity for art, language, moral
judgments, and rational thought. It's also responsible for each
individual's personality, memories, movements, and how we
sense the world.
All this comes from a jellylike mass of fat and protein
weighing about 3 pounds (1.4 kilograms). It is, nevertheless, one
of the body's biggest organs, consisting of some 100 billion
nerve cells that not only put together thoughts and highly
coordinated physical actions but regulate our unconscious body
processes, such as digestion and breathing.
The brain's nerve cells are known as neurons, which
make up the organ's so-called "gray matter." The neurons
transmit and gather electrochemical signals that are
communicated via a network of millions of nerve fibers
called dendrites and axons. These are the brain's "white
matter."
The cerebrum has two halves, or hemispheres. It is
further divided into four regions, or lobes, in each hemisphere.
The frontal lobes, located behind the forehead, are involved
with speech, thought, learning, emotion, and movement. Behind
them are the parietal lobes, which process sensory information
such as touch, temperature, and pain. At the rear of the brain are
the occipital lobes, dealing with vision. Lastly, there are
the temporal lobes, near the temples, which are involved with
hearing and memory.
The second largest part of the brain is the cerebellum,
which sits beneath the back of the cerebrum. It is responsible for
coordinating muscle movement and controlling our balance.
Consisting of both grey and white matter, the cerebellum
transmits information to the spinal cord and other parts of the
brain.
The diencephalon is located in the core of the brain. A
complex of structures roughly the size of an apricot, the two
major sections are the thalamus and hypothalamus. The
thalamus acts as a relay station for incoming nerve impulses
from around the body that are then forwarded to the appropriate
brain region for processing. The hypothalamus controls
hormone secretions from the nearby pituitary gland. These
hormones govern growth and instinctual behavior such as
eating, drinking, sex, anger, and reproduction. The
hypothalamus, for instance, controls when a new mother starts
to lactate.
The brain stem, at the organ's base, controls reflexes
and crucial, basic life functions such as heart rate, breathing, and
blood pressure. It also regulates when you feel sleepy or awake.
The brain is extremely sensitive and delicate, and so
requires maximum protection. This is provided by the
surrounding skull and three tough membranes called meninges.
The spaces between these membranes are filled with fluid that
cushions the brain and keeps it from being damaged by contact
with the inside of the skull.
Drug addiction, or as it is also called, drug dependence,
is a serious health problem; in addition to the huge direct health
costs (psychiatric and physical), there are massive costs in terms
of crime, loss of earnings and productivity, and social damage.
The drugs of primary concern are the opioids, stimulants
(amphetamines, cocaine), and alcohol, although nicotine
addiction (smoking) is also an important health issue. Reducing
the extent of drug dependence is one of the major goals of
medicine. The processes of addiction involve alterations in brain
function because misused drugs are neuroactive substances that
alter brain transmitter function. There is an impressive and
rapidly growing research base that is giving important insights
into the neurochemical and molecular actions of drugs of
misuse: the processes that are likely to determine such misuse in
human beings. Exciting new developments in neuroimaging
with both PET (positron emission tomography) and SPECT
(single photon emission computed tomography) provide, for the
first time, the possibility of testing in human beings theories of
drug addiction derived from preclinical studies.
CYMATICS: FROM VIBRATION TO MANIFESTATION
Saida Ismayilova
saida9517@gmail.com
Supervisor: Prof. Siyavush Azakov
Cymatics is the study of visible sound and vibration, based
on the periodic effects of vibration and sound on matter. Cymatics
analyzes sounds by applying basic principles of wave
mechanics. It may look like a very basic and dull science, but it is
a miracle to witness the beauty of the patterns created by sound.
One of the common experiments is Chladni plate experiment.
This experiment was developed by German physicist and
musician Ernst Florens Friedrich Chladni following the
observations of resonance by Da Vinci and Galileo and the
experiment of English scientist Robert Hooke. He established
a discipline within physics that came to be called acoustics, the
science of sound. Chladni bowed a metal plate lightly covered
with sand until it reached resonance at which point the vibration
caused the sand to move. These figures illustrate two primary
things: vibrating and non-vibrating areas. When a flat sheet of an
elastic material is vibrated, the plate oscillates not only as a whole
but also in parts. The boundaries between these vibrating parts,
which are specific for every particular case, are called node lines
and do not vibrate. The other parts are oscillating constantly. If
sand is then put on this vibrating plate, the sand is concentrated
on the non-vibrating node lines. The oscillating parts or areas
thus become empty. According to Jenny, the converse is true for
liquids; that is to say, water lies on the vibrating parts and not on
the node lines. These patterns are called Chladni figures.
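A common idealized model of these node lines on a unit square plate is the standing-wave superposition u(x, y) = cos(nπx)cos(mπy) − cos(mπx)cos(nπy), where sand collects along the lines with u = 0. This is a textbook simplification, not a model of Chladni's actual bowed plate:

```python
import math

def plate_amplitude(x, y, n=1, m=2):
    """Standing-wave amplitude on an idealized unit square plate;
    sand collects where the amplitude is zero (the node lines)."""
    return (math.cos(n * math.pi * x) * math.cos(m * math.pi * y)
            - math.cos(m * math.pi * x) * math.cos(n * math.pi * y))

# The diagonal x == y is always a node line of this mode:
a = plate_amplitude(0.3, 0.3)   # essentially zero: sand collects here
b = plate_amplitude(0.3, 0.7)   # an oscillating, sand-free region
```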
LASERS AND HOLOGRAPHY
Emin Abdullayev
Chingiz Agasiyev
emin_2112@yahoo.com
chingiz239@gmail.com
Supervisor: Prof. Siyavush Azakov
The physical basis of the laser is a quantum mechanical
phenomenon of stimulated (induced) radiation. Laser
radiation may be continuous, with constant power, or pulsed,
reaching extremely high peak powers. In some schemes, the
operating element is used as a laser amplifier for optical
radiation from another source. There are many types of lasers,
whose working medium may be matter in any physical state.
Some types of lasers, such as dye-solution lasers or
polychromatic solid-state lasers, can generate a whole set of
frequencies (modes of an optical resonator) over a wide spectral
range. Dimensions vary from microscopic, for a number of
semiconductor lasers, to the size of a football field, for some
neodymium glass lasers. The
unique properties of laser radiation have allowed its use in various
fields of science and technology, as well as in everyday life,
starting with reading and writing CDs and ending with research
in the field of controlled thermonuclear fusion. Lasers are used in
holography to create holograms themselves and get holographic
three-dimensional image.
NANOROBOTICS
Vafa Mammadova
vafaamammadovaa@gmail.com
Supervisor: Prof. Siyavush Azakov
Nanorobotics is the emerging technology field creating
machines or robots whose components are at or close to the
scale of a nanometer (10−9 meters). More specifically,
nanorobotics refers to the nanotechnology engineering
discipline of designing and building nanorobots, with devices
ranging in size from 0.1–10 micrometers and constructed of
nanoscale or molecular components. The names nanobots,
nanoids, nanites, nanomachines, or nanomites have also been
used to describe these devices currently under research and
development. Nanomachines are largely in the research-and-development phase, but some primitive molecular machines and
nanomotors have been tested. An example is a sensor having a
switch approximately 1.5 nanometers across, capable of
counting specific molecules in a chemical sample. The first
useful applications of nanomachines might be in medical
technology, which could be used to identify and destroy cancer
cells. Another potential application is the detection of toxic
chemicals, and the measurement of their concentrations, in the
environment. Rice University has demonstrated a single-molecule car developed by a chemical process and including
buckyballs for wheels. It is actuated by controlling the
environmental temperature and by positioning a scanning
tunneling microscope tip.
THE THEORY OF RELATIVITY
Huseyn Abbasov
Elshan Mikayilov
mikayilovelshan@gmail.com
huseyn.abbasovbanm@gmail.com
Supervisor: Prof. Siyavush Azakov
For many centuries, classical ideas about time and motion
dominated. According to these ideas, space and time are
absolute: motion has no influence on time, and the linear
dimensions of an object do not depend on its motion. According
to classical mechanics, mechanical events happen in the same
manner in all inertial systems. However, the theory of relativity
developed by Albert Einstein states that measurements of various
quantities are relative to the velocities of observers. In
particular, space contracts and time dilates. Special relativity is a
theory of the structure of spacetime. It was introduced in
Einstein's 1905 paper "On the Electrodynamics of Moving
Bodies". The theory has many surprising and counterintuitive
consequences.
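These effects can be quantified with the Lorentz factor γ = 1/√(1 − v²/c²), which governs both time dilation and length contraction. The short sketch below is our illustration, not part of the original abstract:

```python
import math

def lorentz_factor(v, c=299_792_458.0):
    """Return gamma = 1/sqrt(1 - v^2/c^2) for speed v in m/s."""
    beta = v / c
    return 1.0 / math.sqrt(1.0 - beta * beta)

# A clock moving at 80% of light speed runs slower by a factor gamma:
gamma = lorentz_factor(0.8 * 299_792_458.0)
print(round(gamma, 4))  # 1/sqrt(1 - 0.64) = 1/0.6 ≈ 1.6667
```

At everyday speeds γ is indistinguishable from 1, which is why classical mechanics works so well in ordinary experience.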
General relativity is a theory of gravitation developed by
Einstein in the years 1907–1915. The development of general
relativity began with the equivalence principle, under which the
states of accelerated motion and being at rest in a gravitational
field (for example when standing on the surface of the Earth) are
physically identical. The upshot of this is that free fall is inertial
motion: an object in free fall is falling because that is how
objects move when there is no force being exerted on them,
instead of this being due to the force of gravity as is the case
in classical mechanics. This is incompatible with classical
mechanics and special relativity because in those theories
inertially moving objects cannot accelerate with respect to each
other, but objects in free fall do so. To resolve this difficulty
Einstein first proposed that spacetime is curved. In 1915, he
devised the Einstein field equations which relate the curvature
of spacetime with the mass, energy, and momentum within it.
STANDARD MODEL AND THE ROLE OF HIGGS
BOSON
Ibrahim Sharifov
Nizam Zahidli
ibrahim-emcz@mail.ru
nizam_zahidli@yahoo.com
Supervisor: Prof. Siyavush Azakov
A new run at CERN’s Large Hadron Collider (LHC) in 2015
could show whether the Higgs boson matches the Standard
Model of particle physics or opens the door to new theories.
If it looks like a Higgs, and acts like a Higgs, it’s probably a
standard Higgs boson. That’s the drift from the latest
measurements at LHC, where physicists have been carefully
characterizing the new particle they discovered in 2012. So
far, every test at the Geneva accelerator confirms that the
new particle closely resembles the Higgs boson described by
the Standard Model of particle physics. These results
resoundingly confirm the Higgs theory first put forward in
1964 by Robert Brout, François Englert and Peter Higgs and
helped win the latter two the Nobel Prize in 2013.
Scientists are eager to detect deviations from this idea,
however, which might reveal a deeper layer of physics. For
instance, if the Higgs boson decays to lighter particles at
slightly different rates than predicted, it could indicate the
presence of new, exotic particles interfering with those
decays. While the most recent results show no sign of such
interference, the next phase of the LHC could offer new
insights; it is set to start operating at higher energies in early
2015. At those energies, the Higgs boson may open the door
to a new theory of physics that more fully explains the
universe.
STRING THEORY AND THE ORIGIN OF UNIVERSE
Rufat Abdulazimov
rufat95@hotmail.com
Supervisor: Prof. Siyavush Azakov
Since ancient times the human race has been trying to
understand how the universe came to exist. Recently,
scientists have been searching for a theory of everything,
which would explain all phenomena in this universe. One of
the candidates for such a theory is string
theory. According to string theory, one can describe an
elementary particle with the help of one-dimensional strings
(just as one can describe a cube in 3 dimensions). These
strings can oscillate and oscillations, in turn, lead to different
species of particles, with their own unique mass, charge, and
other properties determined by the string's dynamics.
Moreover, these oscillations are responsible for the
interactions between particles. However, this theory also
states that our universe has 10 or 26 dimensions, depending
on the version, instead of the standard 4-dimensional model
with 3 spatial dimensions and time as the 4th. String theory
comes in several versions; M-theory unites all versions of
superstring theory and has 11 dimensions. Nonetheless, the
theory is still widely disputed and is yet to be proven by an
experiment. If proven, the theory might possibly become the
theory of everything and answer questions that have long
eluded mankind, such as quantum gravity.
APPLICATION OF A BÉZIER CURVE
Allahverdiyeva Nigar
Hasanov Murad
allahverdiyeva.nigar96@gmail.com
murad.hasanov22@gmail.com
Supervisor: Assoc. Prof. Khanum Jafarova
There is a wide range of areas where mathematics can be
applied, and the Bézier curve is one of its most widespread
applications. A Bézier curve is a parametric curve frequently used
in computer graphics and related fields. The mathematical
basis for Bézier curves — the Bernstein polynomial — has
been known since 1912, but its applicability to graphics was
understood half a century later. Bézier curves were widely
publicized in 1962 by the French engineer Pierre Bézier, who
used them to design automobile bodies at Renault. The study
of these curves was however first developed in 1959 by
mathematician Paul de Casteljau using de Casteljau's
algorithm, a numerically stable method to evaluate Bézier
curves, at Citroën, another French automaker.
Figure 1. String art.
The main areas where these curves are applied are the following:
Animation. In animation applications, such as Adobe
Flash and Synfig, Bézier curves are used to outline, for
example, movement. Users outline the desired path in Bézier
curves, and the application creates the needed frames for the
object to move along the path. For 3D animation Bézier
curves are often used to define 3D paths as well as 2D curves
for keyframe interpolation.
Fonts. TrueType fonts use composite Bézier curves
composed of quadratic Bézier curves. Modern imaging
systems like PostScript, Asymptote, Metafont, and SVG use
composite Béziers composed of cubic Bézier curves for
drawing curved shapes. OpenType fonts can use either kind,
depending on the flavor of the font.
Quadratic Bézier curves are obtained from strings based on
two intersecting segments.
Computer graphics. Bézier curves are widely used in
computer graphics to model smooth curves. As the curve is
completely contained in the convex hull of its control points,
the points can be graphically displayed and used to
manipulate the curve intuitively. Affine transformations such
as translation and rotation can be applied on the curve by
applying the respective transform on the control points of the
curve.
This work highlights the above main areas of application of
the Bézier curve.
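The text above mentions de Casteljau's algorithm as a numerically stable way to evaluate Bézier curves. A minimal sketch of that algorithm (our illustration; the function name is our own):

```python
def de_casteljau(points, t):
    """Evaluate a Bézier curve at parameter t in [0, 1] by repeatedly
    linearly interpolating adjacent control points (de Casteljau)."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # Each pass replaces n points with n-1 interpolated points.
        pts = [
            tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# A quadratic Bézier curve defined by three control points:
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
print(de_casteljau(ctrl, 0.5))  # curve midpoint: (1.0, 1.0)
```

Because each step is a convex combination of the control points, the evaluated point always stays inside their convex hull, which is exactly the property used for intuitive curve manipulation in graphics editors.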
DISTRIBUTED TEMPERATURE SENSING IN
INTELLIGENT WELLS
Riyad Muradov
Gurban Orujov
riyadmuradov@gmail.com
oqurbanov95@gmail.com
Supervisor: Prof. Siyavush Azakov
Nowadays, intelligent well systems enable continuous
downhole monitoring and production control. Knowledge of
such parameters as well pressure and temperature in real
time gives the ability to respond quickly to any changes in
well and reservoir behavior, leading to optimized
recovery. Distributed temperature sensing systems (DTS) are
devices serving as linear sensors of temperature across the
well. In this new technology, temperature is recorded
through optical fiber cable, thus not being measured at
points, but in a continuous manner. The fiber optic line is
encapsulated inside an Inconel tube, and the tube itself is
covered with plastic insulating material. The DTS uses a
laser to launch 10-nanosecond bursts of light down the optical
fiber; from each burst, a small amount of light is backscattered
by molecules in the fiber. This effect is called Raman
scattering, and, unlike the incident ray, the frequency of the
backscattered light is shifted by the oscillation frequency of
the fiber molecules. The backscattered light is then
detected, and the whole spectrum is analyzed.
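A DTS instrument turns the arrival time of the backscattered light into a position along the fiber via time of flight. The sketch below illustrates that conversion; the refractive index value is an assumption typical for silica fiber, not a figure from the abstract:

```python
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_FIBER = 1.468            # assumed group refractive index of silica fiber

def depth_from_delay(t_seconds, n=N_FIBER):
    """Position along the fiber for a backscatter delay t.
    Light travels down to the scattering point and back,
    so the one-way distance is half the round-trip path."""
    return (C_VACUUM / n) * t_seconds / 2.0

# Backscatter arriving 1 microsecond after the burst left the laser:
print(round(depth_from_delay(1e-6), 1))  # ≈ 102.1 m down the fiber
```

Because the 10 ns burst length sets the smallest resolvable time difference, it also sets the spatial resolution of the temperature profile (on the order of a metre).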
INVESTIGATION OF METHODS AND PRINCIPLES
FOR USING SOLAR ENERGY
Jalya Hajiyeva
Sabina Mammadova
Jale.hajieva@mail.ru
Supervisor: Assoc. Professor Naila Allakhverdiyeva
Solar energy is radiant light and heat from the sun harnessed
using a range of ever-evolving technologies such as solar
heating, solar photovoltaics, solar thermal energy, solar
architecture and artificial photosynthesis.
It is an important source of renewable energy and its
technologies are broadly characterized as either passive
solar or active solar depending on the way they capture and
distribute solar energy or convert it into solar power. Active
solar techniques include the use of photovoltaic systems,
concentrated solar power and solar water heating to harness
the energy. Passive solar techniques include orienting a
building to the Sun, selecting materials with
favorable thermal mass or light dispersing properties, and
designing spaces that naturally circulate air.
In 2011, the International Energy Agency said that "the
development of affordable, inexhaustible and clean solar
energy technologies will have huge longer-term benefits. It
will increase countries’ energy security through reliance on
an indigenous, inexhaustible and mostly import-independent
resource, enhance sustainability, reduce pollution, lower the
costs of mitigating global warming, and keep fossil
fuel prices lower than otherwise. These advantages are
global. Hence the additional costs of the incentives for early
deployment should be considered learning investments; they
must be wisely spent and need to be widely shared”.
Solar panel refers either to a photovoltaic module, a solar
hot water panel, or to a set of solar photovoltaic (PV)
modules electrically connected and mounted on a supporting
structure. A PV module is a packaged, connected assembly
of solar cells. Solar panels can be used as a component of a
larger photovoltaic system to generate and supply electricity
in commercial and residential applications.
Each module is rated by its DC output power under standard
test conditions (STC), and typically ranges from 100 to 320
watts. The efficiency of a module determines the area of a
module given the same rated output – an 8% efficient 230
watt module will have twice the area of a 16% efficient 230
watt module. There are a few solar panels available that are
exceeding 19% efficiency. A single solar module can
produce only a limited amount of power; most installations
contain multiple modules. A photovoltaic system typically
includes a panel or an array of solar modules, an inverter, and
sometimes a battery and/or solar tracker and interconnection
wiring.
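The efficiency-versus-area relationship described above can be checked directly: under the STC irradiance of 1000 W/m², module area ≈ rated power / (efficiency × 1000 W/m²). A small sketch (the STC irradiance value is the conventional assumption, not stated in the abstract):

```python
STC_IRRADIANCE = 1000.0  # W/m^2, conventional standard test conditions

def module_area(rated_watts, efficiency):
    """Approximate module area implied by its STC rating and efficiency."""
    return rated_watts / (efficiency * STC_IRRADIANCE)

a_8 = module_area(230, 0.08)    # 2.875 m^2 for an 8% efficient 230 W module
a_16 = module_area(230, 0.16)   # 1.4375 m^2 for a 16% efficient 230 W module
print(a_8 / a_16)               # the 8% module needs twice the area: 2.0
```

This reproduces the claim in the text: halving the efficiency at the same rated output doubles the required panel area.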
The power source of the sun is absolutely free. The
production of solar energy produces no pollution. The
technological advancements in solar energy systems have
made them extremely cost effective. Most systems do not
require any maintenance during their lifespan, which means
you never have to put money into them.
Solar energy systems are now designed for particular
needs. For instance, you can convert your outdoor lighting to
solar. The solar cells are directly on the lights and can’t be
seen by anyone. At the same time, you eliminate all costs
associated with running your outdoor lighting. Solar energy
can be used in remote areas where it is too expensive to
extend the electricity power grid. It is estimated that the
world's oil reserves will last for 30 to 40 years; solar energy,
on the other hand, is effectively infinite.
Several companies have begun embedding electronics into
PV modules. This enables performing maximum power point
tracking(MPPT) for each module individually, and the
measurement of performance data for monitoring and fault
detection at module level. Some of these solutions make use
of power optimizers, a DC-to-DC converter technology
developed to maximize the power harvest from solar
photovoltaic systems. As of about 2010, such electronics can
also compensate for shading effects, wherein a shadow
falling across a section of a module causes the electrical
output of one or more strings of cells in the module to fall to
zero, without the output of the entire module falling to zero.
DISADVANTAGES OF SOLAR POWER. Initial cost: The
initial cost of purchasing and installing solar panels is the
first disadvantage. Although subsidy programs, tax initiatives
and rebate incentives are offered by governments to promote
the use of solar panels, we are still far behind in making full
and efficient use of solar energy.
What is a Solar Cell? A structure that converts solar
energy directly to DC electric energy. It supplies a voltage
and a current to a resistive load (light, battery, motor):
Power = Current × Voltage = Current² × R = Voltage²/R. It is like a
battery because it supplies DC power. It is not like a battery
because the voltage supplied by the cell changes with
changes in the resistance of the load. Solar cells are
long-lasting sources of energy which can be used almost
anywhere. Solar cells are also totally silent.
Figure 1. How solar works
Silicon solar cell principle (p-n junction diode): The
operation of a photovoltaic (PV) cell requires 3 basic
attributes: the absorption of light, generating either
electron-hole pairs or excitons; the separation of charge
carriers of opposite types; and the separate extraction of
those carriers to an external circuit.
POLLUTION: Most photovoltaic panels are made of silicon
and also contain toxic metals like mercury, lead and
cadmium. Pollution in the environment can also degrade the
quality and efficiency of photovoltaic cells. New innovative
technologies can overcome the worst of these effects.
RELIABILITY: Unlike some other renewable sources, which
can also operate at night, solar panels produce nothing at
night, which means you have to depend on the local utility
grid to draw power after dark.
“INFORMATION AND COMMUNICATION
TECHNOLOGIES” SECTION
COMPARATIVE ANALYSIS OF METHODS OF
DIGITAL INTEGRATION BY MEANS OF MATLAB
PACKAGE
Leyli Abbasova
leyli.abbasova@yahoo.com
Supervisor: Naila Allakhverdiyeva
Introduction
The integral is an important concept in mathematics. Integration
is one of the two main operations in calculus, with its inverse,
differentiation, being the other. Integrals appear in many
practical situations. If a swimming pool is rectangular with a flat
bottom, then from its length, width, and depth we can easily
determine the volume of water it can contain, the area of its
surface, and the length of its edge. But if it is oval with a
rounded bottom, all of these quantities call for integrals.
Practical approximations may suffice for such trivial examples,
but precision engineering requires exact and rigorous values for
these elements.
Mathematical Background
The Upper Sum and Lower Sum Rule
To compute the area under a curve, we make approximations by
using rectangles inscribed in the curve and circumscribed on the
curve. The total area of the inscribed rectangles is the lower
sum, and the total area of the circumscribed rectangles is the
upper sum. By taking more rectangles, you get a better
approximation. In the limit, as the number of rectangles
increases “to infinity”, the upper and lower sums converge to a
single value, which is the area under the curve. This process is
illustrated with the area under the curve. For a single
subinterval [a, b] of an increasing function, the upper sum is
𝐼 = 𝑓(𝑏)(𝑏 − 𝑎)
and the lower sum is
𝐼 = 𝑓(𝑎)(𝑏 − 𝑎)
where a and b are the endpoints of the interval.
The Trapezoidal Rule
The trapezoidal rule is the first of the Newton-Cotes closed
integration formulas. It corresponds to the case where the
polynomial is first-order:
𝐼 = ∫ₐᵇ 𝑓(𝑥)𝑑𝑥 ≅ ∫ₐᵇ 𝑓₁(𝑥)𝑑𝑥
A straight line can be represented as
𝑓₁(𝑥) = 𝑓(𝑎) + [(𝑓(𝑏) − 𝑓(𝑎))/(𝑏 − 𝑎)](𝑥 − 𝑎)
Estimating the integral of 𝑓(𝑥) between the limits a and b
then gives
𝐼 = [(𝑓(𝑏) + 𝑓(𝑎))/2](𝑏 − 𝑎)
which is called the trapezoidal rule.
When we employ the integral under a straight-line segment to
approximate the integral under a curve, we obviously can incur
an error that may be substantial. An error for the trapezoidal rule
is
𝐸𝑡 = −(1/12)𝑓′′(ξ)(𝑏 − 𝑎)³
where ξ lies somewhere in the interval from a to b.
Simpson’s Rule
The final method we will consider for the numerical
approximation of definite integrals is known as Simpson’s Rule.
If there is an extra point midway between f (a) and f (b), the
three points can be connected with a parabola. If there are two
points equally spaced between f (a) and f (b), the four points can
be connected with a third-order polynomial. The formulas that
result from taking the integrals under these polynomials are
called Simpson’s rules. Simpson’s 1/3 rule results when a
second-order interpolating polynomial is substituted into:
𝐼 = ∫ₐᵇ 𝑓(𝑥)𝑑𝑥 ≅ ∫ₐᵇ 𝑓₂(𝑥)𝑑𝑥
After integration and algebraic manipulation, the following
formula results:
𝐼 ≅ (𝑏 − 𝑎)[𝑓(𝑥₀) + 4𝑓(𝑥₁) + 𝑓(𝑥₂)]/6
where a = x₀, b = x₂, and x₁ is the point midway between a
and b, given by (a + b)/2. This equation is known as
Simpson’s 1/3 rule. It is the second Newton-Cotes closed
integration formula.
Error for Simpson’s 1/3 rule can be shown as
𝐸𝑡 = −[(𝑏 − 𝑎)⁵/2880]𝑓⁽⁴⁾(ξ)
where ξ lies somewhere in the interval from a to b. Thus,
Simpson’s 1/3 rule is more accurate than the trapezoidal rule.
Comparative analysis
Using the methods mentioned above, I wrote a MATLAB
program for each method and analyzed the results. The upper
sum and lower sum methods are faster than the others, but they
are also less accurate. The trapezoidal rule is quite simple to
implement compared to the other methods: it takes a piecewise
linear approximation of the function, so execution is fast and
the solution converges rapidly, and its weighting coefficients
are nearly equal to each other. On the other hand, its error is
much higher than that of the other methods, and it is less
accurate for nonlinear functions since it uses a straight-line
approximation. Simpson's rule assumes piecewise quadratic
interpolation, so its accuracy is much higher than the
trapezoidal rule's; it integrates polynomials up to third order
exactly, and its error is smaller. Its weighting coefficients are
simple and do not fluctuate in magnitude, but a large number of
ordinates is needed within the interval.
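The comparison described in this abstract was carried out in MATLAB; the following is an independent sketch of the same four rules in Python, applied to ∫₀¹ eˣ dx = e − 1, to illustrate the accuracy ordering reported above:

```python
import math

def lower_upper(f, a, b, n):
    """Composite lower/upper sums for a monotonically increasing f."""
    h = (b - a) / n
    lower = sum(f(a + i * h) for i in range(n)) * h          # left endpoints
    upper = sum(f(a + (i + 1) * h) for i in range(n)) * h    # right endpoints
    return lower, upper

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even for Simpson's 1/3 rule")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd (midpoint) nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return s * h / 3

f, exact = math.exp, math.e - 1
lo, up = lower_upper(f, 0, 1, 100)
for name, val in [("lower", lo), ("upper", up),
                  ("trapezoid", trapezoid(f, 0, 1, 100)),
                  ("simpson", simpson(f, 0, 1, 100))]:
    print(f"{name:9s} error = {abs(val - exact):.2e}")
```

With the same number of function evaluations, the lower/upper sums bracket the true value with the largest errors, the trapezoidal rule improves on them, and Simpson's rule is by far the most accurate, matching the error formulas quoted above.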
PRINCIPLES OF OPERATION AND PRINTING
TECHNOLOGY FOR 3D-PRINTERS
Mustafa Hajiyev
mustafahaciyev@yahoo.com
Supervisor: Assoc. Professor Allakhverdiyeva Naila
In the era of intensive industrialization, numerous technical
miracles are being created constantly. One of those
breakthroughs is surely the 3D printing machine. These marvels
of technology have infiltrated almost all sectors of economic
and industrial manufacturing. This survey will bring some
clarity to common notions about 3D printing and analyze the
main working principles of these machines. So what are 3D
printers?
Figure 1. 3D printer
To begin with, the principal objective of these devices is the
production of three-dimensional objects by inkjet-style printer
heads under computer control; that is why they are sometimes
called industrial robots. The main working principle of 3D
printers is based on the sequential deposition of layers on a
powder bed, and this approach extends the possibilities of
producing various geometrical shapes.
We will discuss general terminology such as additive
manufacturing and identify the main differences between the
latter and subtractive manufacturing, using CNC milling as an
example. These two types have created conflicting perceptions
of three-dimensional production, although 3D printing has
expanded the diversity of raw materials, such as sprayed,
sacrificial and support materials, and has therefore enabled new
object geometries.
Certainly, the systematic design and order of the processes
involved will be analyzed, starting from 3D modeling and
culminating in the generated product. What is more, alongside
its range of possibilities, this technology boasts diverse
production techniques such as extrusion, granular and
lamination methods, photopolymerization, and bio- and
nanoscale printing, to name but a few. Furthermore, the
materials for each of them also differ.
Figure 2. Outputs from 3D-printer
Moreover, the RepRap project is the essence of revolutionary
3D printing. The capability of replicating itself is the most
splendid characteristic of FOSH (free and open-source
hardware) 3D printers, the popular representatives of RepRap.
In fact, achieving absolute 100% cloning does not seem likely
for now, but specialists can already duplicate the vast majority
of essential parts. In addition, low printing speed was the main
setback for mass production, so professionals proposed multiple
extruder heads, regulated by a single controller, for
simultaneous production.
Most importantly, the spheres of 3D printer use are extremely
broad. Not surprisingly, it is expected that the capabilities of
these devices will make it possible to produce food and
chemicals on a large scale in the near future. We will also
enliven the discussion by focusing on the main ideas presented
by Lee Cronin in his TED Talk on the domestic production of
medicines.
3D printers have also infiltrated industrial sectors and today
are used, to a certain extent, in clothing and automobile
production. Besides that, they play a particular role in
construction, easing people's work. But the most impressive
achievements of three-dimensional printing machines are their
use in medical surgery and in preparing implants for patients.
Undoubtedly, robotics is the next sphere where the help of 3D
printers has been vital, and Gael Langevin's experiments with
InMoov are the first examples of that. As time passes, 3D
printers are becoming more and more available for domestic
use. Surely we will not disregard the financial efficiency of
these devices, and 3D printing pens may also take part in our
discussion.
DEVELOPMENT OF AN ALGORITHM FOR
DETERMINING AN OPTIMUM FUNCTIONAL
DEPENDENCE OF THE EXPERIMENTAL DATA
Rahim Rahimli
ragim95@gmail.com
Supervisor: Assoc. Professor Allakhverdiyeva Naila
There are several methods to find the best functional
dependence of the data. One of them is curve fitting, also
known as regression analysis, which is used to find the "best
fit" line or curve for a series of data points.
Most of the time, the curve fit process will produce an
equation that can be used to find points anywhere along the
curve. To do this there are several curve fit models. When you
are deciding on them, it helps to know some of the underlying
properties associated with the data to make an educated decision
on which model to use. Depending on your field of study, you
may find that certain equations or families of equations are used
on a regular basis. For instance, you can use parabolic
(polynomial) regression to approximate the height of dropped
object through time, or trigonometric fitting for data where
values repeats with some period.
After deciding on a model, we should pick the coefficients
that best fit the curve to the data. However, this is not always a
mathematically easy task, because it sometimes requires
finding the roots of nonlinear systems of equations and solving
matrix equations. For this reason, the process of determining an
ideal dependence of the data involves the use of computers and
specially designed software. Therefore, in this project we write
a program in the C# programming language where different
algorithms and methods are used to complete the task,
particularly the least squares method, which minimizes the
square of the error between the original data and the values
predicted by the equation.
Figure 1. Least squares fits available in this project
There are a few least squares fits available in this project:
 Linear (This function fits a straight line through the data, of the form 𝑦 = 𝑎 + 𝑏𝑥. No data restrictions.)
 Polynomial (This function fits a curve through the data, of the form 𝑦 = 𝑎 + 𝑏𝑥 + 𝑐𝑥² + 𝑑𝑥³ + … + 𝑛𝑥ⁿ. The more complex the curvature of the data, the higher the polynomial order required to fit it. No data restrictions.)
 Exponential (This function fits a curve through the data, of the form 𝑦 = 𝑎𝑒ᵇˣ. It is generally used to fit data that increases or decreases at a high rate. This curve fit cannot fit negative data or data equal to zero.)
 Logarithmic (This function fits a curve through the data, of the form 𝑦 = 𝑎 + 𝑏 log 𝑥. A logarithmic curve fit is generally used with data that spans decades (10⁰, 10¹, 10², and so on). This curve fit cannot be used to fit negative data or data equal to zero.)
 Trigonometric (This function fits a curve through the data, of the form 𝑦 = 𝑎 sin(𝑏𝑥 + 𝑐). A trigonometric curve fit is used with data which fluctuates between some values over time. No data restrictions.)
 Combined (This feature adds the ability to combine several curve models. The data should be appropriate for both methods.)
Figure 2. Curve fit model
The application consists of different areas, each performing
its own task: a data grid block, where the user enters data, and
a graph pane, where we can draw the data points as a scatter
graph or, using one of the buttons, apply a curve fit and draw
the approximating line. Moreover, we can let the program
choose the best curve fit model (the one with the least error).
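To illustrate the least squares idea behind the project (sketched here in Python rather than the project's C#), the linear fit y = a + bx has a closed-form solution via the normal equations:

```python
def linear_least_squares(xs, ys):
    """Fit y = a + b*x by minimizing the sum of squared residuals.
    Uses the closed-form solution of the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b

# Noiseless data on the line y = 1 + 2x is recovered exactly:
a, b = linear_least_squares([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 1.0 2.0
```

The exponential and logarithmic fits listed above reduce to this same linear case after transforming the data (ln y against x, or y against log x), which is also why those fits cannot accept zero or negative values.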
COMPARATIVE ANALYSIS OF SORTING
ALGORITHMS
Karim Karimov
kerimovscreations@gmail.com
Supervisor: PhD, Associate Professor Leyla Muradkhanli
Introduction
A sorting algorithm is an algorithm that puts elements of a list in
a certain order. The most-used orders are numerical order
and lexicographical order. Efficient sorting is important for
optimizing the use of other algorithms (such as search
and merge algorithms) which require input data to be in sorted
lists; it is also often useful for canonicalizing data and for
producing human-readable output. More formally, the output
must satisfy two conditions:
The output is in nondecreasing order (each element is no smaller
than the previous element according to the desired total order);
The output is a permutation (reordering) of the input.
Further, the data is often taken to be in an array, which
allows random access, rather than a list, which only allows
sequential access, though often algorithms can be applied with
suitable modification to either type of data.
Since the dawn of computing, the sorting problem has attracted
a great deal of research, perhaps due to the complexity of
solving it efficiently despite its simple, familiar statement. For
example, bubble sort was analyzed as early as 1956. A
fundamental limit of comparison sorting algorithms is that they
require linearithmic time – O(n log n) – in the worst case,
though better performance is possible on real-world data (such
as almost-sorted data), and algorithms not based on comparison,
such as counting sort, can have better performance. Although
many consider sorting a solved problem – asymptotically
optimal algorithms have been known since the mid-20th century
– useful new algorithms are still being invented, with the now
widely used Timsort dating to 2002, and library sort being
first published in 2006.
Sorting algorithms are prevalent in introductory computer
science classes, where the abundance of algorithms for the
problem provides a gentle introduction to a variety of core
algorithm concepts, such as big O notation, divide and conquer
algorithms, data structures such as heaps and binary
trees, randomized algorithms, best, worst and average case
analysis, time-space tradeoffs, and upper and lower bounds.
Classification
Sorting algorithms are often classified by:
 Computational complexity (worst, average and best behavior) of element comparisons in terms of the size of the list (n). For typical serial sorting algorithms good behavior is O(n log n), with parallel sort in O(log² n), and bad behavior is O(n²). Ideal behavior for a serial sort is O(n), but this is not possible in the average case; optimal parallel sorting is O(log n). Comparison-based sorting algorithms, which evaluate the elements of the list via an abstract key comparison operation, need at least O(n log n) comparisons for most inputs.
 Computational complexity of swaps (for "in place" algorithms).
 Memory usage (and use of other computer resources). In particular, some sorting algorithms are "in place". Strictly, an in-place sort needs only O(1) memory beyond the items being sorted; sometimes O(log n) additional memory is considered "in place".
 Recursion. Some algorithms are either recursive or non-recursive, while others may be both (e.g., merge sort).
 Stability: stable sorting algorithms maintain the relative order of records with equal keys (i.e., values).
 Whether or not they are a comparison sort. A comparison sort examines the data only by comparing two elements with a comparison operator.
 General method: insertion, exchange, selection, merging, etc. Exchange sorts include bubble sort and quicksort. Selection sorts include shaker sort and heapsort.
 Whether the algorithm is serial or parallel. The remainder of this discussion concentrates almost exclusively on serial algorithms and assumes serial operation.
 Adaptability: whether or not the presortedness of the input affects the running time. Algorithms that take this into account are known to be adaptive.
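To make these classifications concrete, the following sketch (our illustration, not from the report) contrasts a quadratic, adaptive exchange sort with a linearithmic, stable merge sort:

```python
def bubble_sort(a):
    """O(n^2) exchange sort: repeatedly swap adjacent out-of-order pairs."""
    a = list(a)
    for i in range(len(a) - 1, 0, -1):
        swapped = False
        for j in range(i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:          # adaptive: already-sorted input finishes in O(n)
            break
    return a

def merge_sort(a):
    """O(n log n) stable, recursive divide-and-conquer sort."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps equal keys in order (stability)
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = [5, 2, 9, 2, 7, 1]
print(bubble_sort(data))  # [1, 2, 2, 5, 7, 9]
print(merge_sort(data))   # [1, 2, 2, 5, 7, 9]
```

Both produce a nondecreasing permutation of the input, but only merge sort guarantees O(n log n) comparisons in the worst case, at the cost of O(n) extra memory, illustrating the time-space tradeoff mentioned above.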
THREE-DIMENSIONAL MODELING USING AUTOCAD
ELECTRICAL
Elvin Ismayilov
elvin.ismayilov0@gmail.com
Supervisor: PhD, Associate Professor Leyla Muradkhanli
Introduction
AutoCAD® Electrical is AutoCAD® software for controls
designers, purpose-built to create and modify electrical control
systems. It contains all the functionality of AutoCAD, the
world’s leading CAD software, plus a comprehensive set of
electrical-specific features and functions that offer significant
productivity gains. AutoCAD Electrical helps you stay ahead of
the competition by automating control engineering tasks, such
as building circuits, numbering wires, and creating bills of
material. AutoCAD Electrical provides a library of more than
650,000 electrical symbols and components, includes real-time
error checking, and enables electrical and mechanical teams to
collaborate on digital prototypes built with Autodesk®
Inventor® software. As part of the Autodesk solution for Digital
Prototyping, AutoCAD Electrical helps manufacturers get their
products to market faster with lower costs.
Main Features
In today’s global market, AutoCAD® Electrical stands ahead of other CAD programs intended for designing and visualizing electrical circuits, schematics, and so on, thanks to its unique and in-demand features. The main features of AutoCAD® Electrical are:
• Thousands of Standards-Based Schematic Symbols
AutoCAD® Electrical software ships with more than 2,000 standards-based schematic symbols.
• Automatic Wire Numbering and Component Tagging
Automatically assign unique wire numbers and component tags in your drawings, reducing the time you spend tracking design changes and helping you make fewer errors.
• Real-Time Error Checking
Avoid costly errors before the build phase begins by catching and removing errors during design.
• Real-Time Coil and Contact Cross-Referencing
AutoCAD® Electrical sets up parent/child relationships between coils and contacts, keeping track of how many contacts are assigned to any coil or multi-contact device, and alerting users when they have exceeded the limit.
• Automatically Create PLC I/O Drawings from Spreadsheets
AutoCAD® Electrical gives users the ability to generate a comprehensive set of PLC I/O drawings by simply defining the project’s I/O assignments in a spreadsheet.
• Share Drawings with Customers and Suppliers and Track Their Changes
Easily exchange data with customers or suppliers in native DWG format.
Three-dimensional modeling
As a member of modern CAD systems, AutoCAD® Electrical is
capable of creating models of objects in several different forms.
These models can be further processed to provide the geometry
necessary for analysis by other programs. Thus, for example,
stress or thermal analysis can be done prior to the production of
actual hardware. There are two-dimensional and three-dimensional CAD systems. In a two-dimensional system, the graphics screen is used as a substitute for drawing paper. All drawings are produced in a single plane, without any depth. Like most of the larger CAD systems, AutoCAD® Electrical has the ability to model in three dimensions. The spatial image of the
object is drawn in a pictorial projection using x-y-z co-ordinate
geometry and is stored in the memory. It can be recalled and
redrawn in 3-D pictorial projection or in orthographic projection
representing the image of the object in a number of 2-D views,
i.e. the front, end, plan and auxiliary views.
One application of a three-dimensional model is the generation of an engineering drawing: multiple views of the model are arranged on the drawing sheet and then annotated with dimensions, labels and notes. This approach is also used in AutoCAD® Electrical in a slightly different way: there, a three-dimensional model is drawn by arranging multiple views of electrical devices, circuits, and schemes.
Figure 2. Three-dimensional model
The three-dimensional models, circuits, and schemes that are arranged and designed in AutoCAD® Electrical can also be imported into and easily applied in other Autodesk® products, such as Autodesk® Inventor®.
ANALYSIS OF ALGORITHMS FOR IMAGE
COMPRESSION
Ramil Shukurov
sukurov.ramil652@gmail.com
Supervisor: PhD, Associate Professor Leyla Muradkhanli
What is compression?
Compression is the process of reducing the size of a file by encoding its information more efficiently. This reduces the number of bits and bytes used to store the information: a smaller file is generated, which can be transmitted faster and requires less space to store.
Basically, compression is used in three areas, given below:
1. Physical Science
• Compression (physics) - the result of subjecting a material to compressive stress
• Compressibility - a measure of volume change from pressure
• Gas compression - raising the pressure and reducing the volume of gases
• Compression (geology) - a system of forces that tend to decrease the volume of rocks
2. Information Science
• Data compression - the process of encoding digital information using fewer bits
• Audio compression (data) - the compression of digital audio streams and files
• Bandwidth compression - a reduction in either the time to transmit or in the amount of bandwidth required to transmit
• Image compression - the application of data compression to digital images
• Video compression - the compression of digital video streams and files
• Dynamic range compression - a compression process that reduces the dynamic range of an audio signal
3. Medicine
• Brain compression - a potentially fatal condition where pressure is exerted on the brain by internal bleeding
• Compression bandage - a bandage that uses compression to reduce the flow of blood
• Cold compression therapy - to alleviate the pain of minor injuries
There are several reasons for compressing images. First of all, image compression is important for web designers who want to create faster-loading web pages, which in turn makes a website more accessible to others. Image compression also saves a lot of unnecessary bandwidth by providing high-quality images at a fraction of the file size.
Secondly, image compression is also important for people who attach photos to emails: the email is sent more quickly, bandwidth costs are saved, and the recipient is not annoyed. Sending large image attachments can be considered offensive, because the email takes a long time to download and uses up the recipient's precious bandwidth.
For digital camera users and people who save lots of photos on their hard drives, image compression is even more important. By compressing the images you have taken or downloaded, you can store more images on your disk, saving you the cost of a bigger hard disk.
Image compression has two types: lossless and lossy. These types of image compression differ sharply from each other, so let’s analyse each of them.
Let’s first talk about lossless compression, meaning that the decoded data is binary-identical to the original data. For lossless image compression, the only thing we can really do is describe the information more efficiently compared to its natural description. That means we have to get rid of as much redundancy as possible. Redundant data is data that the receiver can already derive from data received before; it results in no information gain for the receiver, who knew it all in advance. Here is an example of simply describing data more efficiently:
1111888888555227777777
For the given sequence, the natural description stores each item individually. Because of the many repetitions among adjacent items, the following sequence
4168352277
saves a lot of data: each pair states how many times an item occurs, followed by the value of the item. This approach is called run-length encoding, and it only works if the data fits our model, meaning here that the data is highly repetitive. The sequence
12121212
encoded as
1112111211121112
shows that if the data does not fit our assumptions, compression cannot be achieved; even worse, it results in data expansion.
Lossless compression is applied when the decoded data must be the same as the original data; binary data and text are examples of that.
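The run-length idea can be sketched in Python (an illustrative helper, not from the original text; this digit-pair encoding assumes every run is shorter than 10):

```python
def rle_encode(s):
    """Run-length encode a string as (count, symbol) digit pairs.

    Assumes runs shorter than 10, so each count fits in one digit.
    """
    out = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1                      # extend the current run
        out.append(str(j - i) + s[i])   # emit "<count><symbol>"
        i = j
    return "".join(out)

print(rle_encode("1111888888555227777777"))  # -> "4168352277" (compressed)
print(rle_encode("12121212"))                # -> "1112111211121112" (expanded!)
```

The second call reproduces the expansion effect described above: every run has length 1, so the output is twice as long as the input.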
As mentioned previously, when the decoded data does not match the original data and the receiver cannot recover all of it, we speak of lossy image compression. Basically, two techniques are common for removing irrelevant information in image data compression. The first one is subsampling, or downsampling, which simply reduces the resolution of the image. A simple way to do that is to assign the average of, say, 4 pixels in the original image to one single pixel in the downsampled image. The second one is to describe each individual pixel with less accuracy. This type of compression is applied to data that originated from analog signals, for instance audio and video. Furthermore, there are two basic properties of lossy image compression:
• The amount of data that is used to describe the information. It can be expressed by the rate, which relates the file size in bits to the number of pixels:
Rate = (File size / Number of pixels) × 8 [bpp]
where bpp stands for bits per pixel. Another way is to give the ratio between the file size of the compressed image and the file size of the original image:
Compression ratio = Size of compressed image / Size of original image
• Distortion is the amount of error that is introduced due to compression. In compressed images, it appears in the form of so-called compression artifacts. Typical artifacts resulting from JPEG compression are blocks appearing across the entire image. Measuring the distortion is not as straightforward as measuring the rate; after all, it highly depends on the visual perception of the person who views the image, and thus can differ from person to person. The most common and simple metric is the MSE (mean squared error):
MSE = (1/n) Σ_{i=1}^{n} (x_{i1} − x_{i2})²
where x_{i1} is the original image and x_{i2} is the reconstructed image.
Sometimes it is more convenient to express some sort of quality instead of the amount of error. That is done by the peak signal-to-noise ratio (PSNR), which sets the maximum intensity in the image in relation to the MSE:
PSNR = 10 * log10( max_i |x_i|² / MSE )
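As an illustrative aside (not part of the original text), both metrics can be computed in a few lines of Python with NumPy; the tiny 2×2 "images" and the 8-bit peak value of 255 are assumptions made for this example:

```python
import numpy as np

# Hypothetical original and reconstructed 2x2 grayscale images (8-bit range).
orig = np.array([[52.0, 55.0], [61.0, 59.0]])
recon = np.array([[54.0, 55.0], [60.0, 58.0]])

mse = np.mean((orig - recon) ** 2)     # mean squared error over all pixels
psnr = 10 * np.log10(255 ** 2 / mse)   # peak signal-to-noise ratio, in dB

print(mse)              # 1.5
print(round(psnr, 2))   # about 46.37 dB
```

The lower the MSE (and the higher the PSNR), the closer the reconstructed image is to the original.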
How Does Compression Work?
When you have a file containing text, there can be repetitive single words, word combinations and phrases that use up storage space unproductively. Or the file can contain media, such as complex graphical images, whose data occupies too much space. To reduce this inefficiency electronically, you can compress the document.
Compression is done by using compression algorithms
(formula) that rearrange and reorganize data information so that
it can be stored more economically. By encoding information,
data can be stored using fewer bits. This is done by using a
compression/decompression program that alters the structure of
the data temporarily for transporting, reformatting, archiving,
saving, etc.
Compression, when at work, reduces information by using
different and more efficient ways of representing the
information. Methods may include simply removing space
characters, using a single character to identify a string of
repeated characters, or substituting smaller bit sequences for
recurring characters. Some compression algorithms delete
information altogether to achieve a smaller file size. Depending on the algorithm used, files can be adequately or greatly reduced from their original size.
Image compression also varies according to the image format. There are many image formats, 23 to be exact; the basic ones are as follows: JPEG, GIF, BMP, TIFF, PNG. The table below shows the differences between three image formats:
JPEG (JPG)
Pros:
• 24-bit color with up to 16M colors
• Great for photographs
• Great for images with more than 256 colors
• Great for making smaller file sizes (usually)
Cons:
x Discards a lot of data
x After compression, JPEGs tend to create artifacts
x Cannot be animated
x Does not support transparency
x Not good for text images
x Lossy

GIF
Pros:
• Can be animated
• Great for images with limited colors
• Great for line drawings & clip art
• Great for text images
• Lossless
Cons:
x Only supports 256 colors
x File size can be quite large
x Not good for photographs

PNG
Pros:
• Smaller file size
• Great transparency support
• Great for text images
• Great for logos
• Lossless
Cons:
x Cannot be animated
x Not supported by all web browsers
x Not great for large images (large file size compared to JPG)
Image data compression has achieved substantial progress over the previous two decades, using different quantization methods, entropy coding and mathematical transformations. Image compression currently centers around two-dimensional images, but in the future it will extend to three-dimensional images as well. New approaches are being proposed for progressive improvement in terms of feature preservation at a given compression rate for image data compression.
THE RELATIONSHIP BETWEEN
POVERTY AND ELECTRICITY USE
MODELLING OF FIGURES IN ORDER
TO PREDICT THE FUTURE OF TREND
Rauf Mahmudzade
r.mahmudzade@mail.ru
Supervisor: Assoc. Prof. Manafaddin Namazov
1. Introduction
In this research, having analyzed the relationship between energy use and poverty in developing countries, we describe current patterns of energy use, including rates of electrification, by using a unique country-by-country database. This research also details how to model the data using a specific program called MATLAB and projects the figures for electricity access over three decades. Modern energy services enhance the standards of life in countless ways. Electric light makes it possible to extend work and study hours, and refrigeration in hospitals allows medicines to be kept irrespective of circumstances. Consequently, modern energy is directly linked with the economy of a country and can reduce poverty by advancing a poor country’s productivity and extending the quality and range of its products. Although other energy sources such as biomass, kerosene and liquefied petroleum gas (LPG) can meet some of a country's demands, they involve inevitable ramifications for both urban and rural areas.
2. The Transition to Modern Fuels
As the incomes of poor families increase, a tendency to afford more modern appliances and a demand for better energy services emerge. However, this is not a straightforward process, and it depends on various factors.
Three main determinants in the transition from traditional to modern energy use are fuel availability, affordability and cultural preferences. If a contemporary distribution system is not available in a country, households cannot obtain access to modern fuels, even if they are affordable.
Even when modern fuels are available, households may not use them if they are much more expensive than traditional biomass. In rural areas, biomass is frequently considered as something that is “free” and readily available. This kind of thinking seriously precludes the switch to modern energy. The affordability of energy-using equipment is just as important as the affordability of fuels.
In some cases, traditions determine the fuel choice
regardless of fuel availability and income. In India, Pakistan and
other developing countries, even well-off households keep a
biomass stove to prepare their traditional bread.
How to model data and predict the future trend using MATLAB:
MATLAB allows you to model your data using linear
regression. A model is a relationship between independent and
dependent variables. Linear regression produces a model that is
linear in the model coefficients. The most common type of
linear regression is a least-squares fit, which can fit both lines
and polynomials.
Before you model the relationship between pairs of
quantities, it is a good idea to perform correlation analysis to
establish if a relationship exists between these quantities.
Correlation is a method for establishing the degree of
probability that a linear relationship exists between two
measured quantities. When there is no correlation between the
two quantities, then there is no tendency for the values of one
quantity to increase or decrease with the values of the second
quantity. MATLAB provides the following three functions for
computing correlation coefficients and covariance. In typical
data analysis applications, where you are mostly interested in
the degree of relationship between variables, you need only to
calculate correlation coefficients. That is, it is not necessary to
calculate the covariance independently.
Figure 1. MATLAB functions for correlation analysis
If you need to fit nonlinear data using MATLAB, you can
try transforming the variables in your model to make the model
linear, use the nonlinear algorithm fminsearch, or use the
Curve Fitting Toolbox.
2. Residuals and Goodness of Fit
Residuals are defined as the difference between the observed
values of the dependent variable and the values that are
predicted by the model. When you fit a model that is appropriate
for your data, the residuals approximate independent random
errors.
To calculate fit parameters for a linear model, MATLAB
minimizes the sum of the squares of the residuals to produce a
good fit. This is called a least-squares fit. You can gain insight
into the “goodness” of a fit by visually examining a plot of the
residuals: if the residual plot has a pattern, this indicates that the
model does not properly fit the data.
Notice that the “goodness” of a fit must be determined in
the context of your data. For example, if your goal of fitting the
data is to extract coefficients that have physical meaning, then it
is important that your model reflect the physics of the data. In
this case, understanding what your data represents and how it
was measured is just as important as evaluating the goodness of
fit.
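A minimal Python/NumPy sketch of the least-squares workflow described above (the data points are invented for illustration, and np.polyfit plays the role of MATLAB's fitting routines):

```python
import numpy as np

# Hypothetical (x, y) measurements that are roughly linear.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Degree-1 polyfit minimizes the sum of squared residuals (a least-squares fit).
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# For a well-fitting linear model with an intercept, the residuals have zero
# mean and should show no systematic pattern when plotted against x.
print(round(slope, 2))                 # 1.99
print(abs(residuals.mean()) < 1e-8)    # True
```

Plotting `residuals` against `x` is the visual goodness-of-fit check described in the text: any visible pattern suggests the linear model is inappropriate.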
3. Covariance
Use the MATLAB cov function to explicitly calculate the
covariance matrix for a data matrix (where each column
represents a separate quantity).
In typical data analysis applications, where you are mostly
interested in the degree of relationship between variables, you
can calculate the correlation coefficients directly without
calculating the covariance first.
The covariance matrix has the following properties:
• cov(X) is symmetrical.
• diag(cov(X)) is a vector of variances for each data
column, which represent
a measure of the spread or dispersion of data in the
corresponding column.
• sqrt(diag(cov(X))) is a vector of standard deviations.
• The off-diagonal elements of the covariance matrix represent the covariance between the individual data columns. Here, X can be a vector or a matrix. For an m-by-n matrix, the covariance matrix is n-by-n. For an example of calculating the covariance, load the sample data in count.dat, which contains a 24-by-3 matrix:
load count.dat
Calculate the covariance matrix for this data:
cov(count)
MATLAB responds with the following result:
ans =
1.0e+003 *
0.6437 0.9802 1.6567
0.9802 1.7144 2.6908
1.6567 2.6908 4.6278
4. Correlation Coefficients
The correlation coefficient matrix represents the normalized
measure of the strength of linear relationship between variables.
Correlation coefficients r_k are given by
r_k = Σ_{t=1}^{N} (x_t − x̄)(x_{t+k} − x̄) / Σ_{t=1}^{N} (x_t − x̄)²
Figure 2. Correlation Coefficients
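The coefficient r_k defined above is a lag-k autocorrelation, which can be sketched in Python; the helper name autocorr and the sample series are assumptions made for illustration:

```python
import numpy as np

def autocorr(x, k):
    """Lag-k autocorrelation:
    sum((x_t - mean)(x_{t+k} - mean)) / sum((x_t - mean)^2)."""
    x = np.asarray(x, dtype=float)
    xm = x - x.mean()
    return np.sum(xm[:len(x) - k] * xm[k:]) / np.sum(xm ** 2)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
print(autocorr(x, 0))  # 1.0: any series is perfectly correlated with itself
print(autocorr(x, 1))  # 0.4 for this short sample series
```

r_0 is always 1, and for uncorrelated data r_k stays near zero for all k > 0.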
5. Access to electricity
To improve our understanding of the electrification process, this research uses an extensive database that holds the best available information for developing countries on the number of people who have access to electricity in their homes. The database is broken down by rural and urban areas.
Aggregate data for 2000 show that the number of people
without electricity today is 1.64 billion, or 27% of the world’s
population. More than 99% of people without electricity live in
developing countries, and four out of five live in rural areas. The
World Bank estimates that the number of people without electricity has fallen from 1.9 billion in 1970, but not in a straight-line decline: in 1990, the figure was 2 billion. As a proportion of the world’s population, the share of the unelectrified has fallen even more sharply, from 51% in 1970 to 41% in 1990.
Figure 3. Access to electricity
The average electrification rate for the OECD, as well as
for transition economies, is over 99%. Average electrification
rates in the Middle East,
North Africa, East Asia/China and Latin America are all above
85%. More than 80% of the people who currently have no
access to electricity are located in South Asia and sub-Saharan
Africa (Figure 13.4). Lack of electricity is strongly correlated to
the number of people living below $2 per day (Figure 13.5).
Income, however, is not the only determinant in electricity
access. China, with 56% of its people still poor, has managed to
supply electricity to more than 98% of its population.
Over the past three decades, half the growth in world population
occurred in urban areas. Worldwide, electrification has kept
pace with urbanisation, maintaining the number of the urban
population without electricity at roughly 250 million. Put
another way, the urban electrification rate increased from 36%
in 1970 to 91% in 2000. The bulk of the urban unelectrified live
in Africa and South Asia, where more than 30% of the urban
population do not have electricity.
Four out of five people lacking access to electricity live in rural
areas. This ratio has remained constant over the past three
decades. The number of the rural unelectrified has fallen by
more than 200 million, and rural electrification rose from 12%
in 1970 to 57% in 2000.
CONTENTS
To The Participants Of The 3rd Student Scientific And
Technical Conference Dedicated To The 92nd Anniversary
Of National Leader Haydar Aliyev, Rector of Baku Higher
Oil School, Elmar Gasimov, ………................………...……...3
PLENARY REPORTS
Reuleaux Triangle, Guldana Hidayatli …………….…..….…5
Green Chemistry, Emilya Mammadova………………………6
Sand Control, Rasul Samadzade……….……….…….………8
Future Energy Sources, Ahmed Galandarli…..………….....11
Collecting Of Petroleum by Natural Hair, Ali Alikishiyev...14
Wireless Power Transmission, Elshad Mirzayev…………....17
Solving Nonlinear Equations By Means Of Matlab Gui
Tools, Qulu Quliyev……………………..………………..……18
“PETROLEUM AND CHEMICAL ENGINEERING”
SECTION
Treatment Of Water: Chemical, Physical And Biological
Nargiz Khalilzada, Simuzar Babazada……………..……..…..21
Cleaner Technologies, Control, Treatment And Remediation
Techniques Of Water In Oil And Gas Industry,
Aygul Karimova…………………………………..…………..24
Halloysite Nanotubes: Investigation Of Structure And Field
Of Application, Natavan Asgerli…………………………….28
Multistage Compressors, Ibrahim Mammadov.…………...32
Catalysts Applied In Petrochemical Processes,
Tunzale Imanova ……………………………………………33
Liquefied Natural Gas (Lng), Turan Nasibli,
Aysel Mamiyeva …..………………………………………….37
Hydrocarbon Products And Processing
Camal Ahmadov …………………...………...……………..41
The Hybrid Cleaner Renewable Energy Systems
Ali Aladdinov, Farid Mustafayev…………………………….47
Artificial Lift Methods Mahmud Mammadov.………….…..49
Cementation, Rza Rahimov………….………………………51
Drilling Fluids Eltun Sadigov…….……………………….…54
Perforating Wells, Gizilgul Huseynova……………………..56
Well Control, Konul Alizada………..............…...….……….58
Application Of Magnetic Field In Oil Industry
Munavvar Salmanova…...……………………………………62
Bubble-Point Pressure, Mustafa Asgerov………...…...……65
Horizontal Drilling, Rasul Ismayilzada……………………..67
Synthetic Based Drilling Fluids
Rashad Nazaraliyev…………………………………...…..….69
The Sidetrack Operation And Its Importance In Oil
Industry, Iftikhar Huseynov…………….……………….….72
Unconventional Oil And Gas Resources
Umid Tarlanli…………………………...…...……………….74
Drilling Bits, Azad Abdullayev……….………..…………….76
The Pumps Which are used in Oil Industry
Elvin Hasanli…………..………………………..……………78
Formation Of Hydrates In Oil And Gas Pipelines
Bextiyar Allahverdiyev……………….………………………80
Thermohydrodynamical Effects In Pre-Transition Phase
Zones, Subhana Allahverdiyeva….………………………….81
Negative Pressure in Liquids, Rafael Samadov…….…..…..83
Hydraulic Shock In Pipes With Consideration Of The
Temperature Factor, Fidan Selim-zade..…………………..89
New Cleanup Method Of The Oil-Contaminated Soils With
Chemical Extraction, Sadigli Lyaman............................... ....91
“PHYSICS AND MATHEMATICS” SECTION
Engineering Applications Of Differential Equations (Second-Order), Hajar Hidayatzade, Chinara Guliyeva……………….93
Fitting A Pipeline With Minimal Cost By Using
Optimization Of A Function on a Closed Interval
Narmin Abbasli, Emin Balashov…………………………….97
Applications Of Mathematics In Mechanical Engineering
Problems Ilkin Cavadlı…………………………..…...…….98
Mathematics in Our Life, Nazrin Dolkhanova……..……..100
Nuclear Power Industry, Afsana Zeynalli………….…...…104
Liquid Crystals, Saida Alasgarova………….……..………108
Global Warming, Asiman Saidzadeh ……….…………..…111
Brain Capacity, Mechanism of Addiction in Brain
Cavad Iskenderov, Rafael Abdullayev……………….……..115
Cymatics: from Vibration to Manifestation
Saida Ismayilova…………………………………………….118
Lasers and Holography
Emin Abdullayev, Chingiz Agasiyev…….……………….…119
Nanorobotics, Vafa Mammadova…………………..….…..120
The Theory of Relativity
Huseyn Abbasov, Elshan Mikayilov…………………….….121
Standard Model and the Role of Higgs Boson
Ibrahim Sharifov, Nizam Zahidli………………………..….122
String Theory and the Origin of Universe
Rufat Abdulazimov……………………………………...…..123
Application of a Bézier Curve, Allahverdiyeva Nigar,
Hasanov Murad…………………………………….………124
Distributed Temperature Sensing in Intelligent Wells
Riyad Muradov, Gurban Orujov…………………………...126
Investigation of Methods and Principles for Using of Solar
Energy, Jalya Hajiyeva, Sabina Mammadova………..…...127
“INFORMATION AND COMMUNICATION
TECHNOLOGIES” SECTION
Comparative Analysis Of Methods Of Digital Integration by
Means Of Matlab Package, Leyli Abbasova……....………132
Principles Of Operation And Printing Technology for
3d-Printers, Mustafa Hajiyev ……………………..…….…136
Development Of An Algorithm For Determining An
Optimum Functional Dependence Of The Experimental
Data, Rahim Rahimli………….………………..…….…….139
Comparative Analysis of Sorting Algorithms
Karim Karimov…………...…………………………………142
Three-Dimensional Modeling Using Autocad Electrical
Elvin Ismayilov………………………..………………….…144
Analysis of Algorithms for Image Compression
Ramil Shukurov…………………………………….……….147
The Relationship Between Poverty And Electricity Use
Modelling of Figures in Order to Predict the Future of
Trend, Rauf Mahmudzade………………………..….…….154
Contents…………………………………………………….161
Address: Baku, AZ 1025,
Khatai district, 30 Khojaly avenue
e-mail: info@bhos.edu.az
www.bhos.edu.az
http://facebook.com/BakiAliNeftMektebi