International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 1(August 2012) PP: 01-10
A Design Space Exploration of Binary-Tree-Decoder for Multi-Block External-Memory-Controller in Multi-µC Embedded SoC Design
Uday Arun 1, A. K. Sinha 2
1,2 Bharath Institute of Science & Technology, Chennai, Tamil Nadu, India
Abstract––In a typical micro-controller embedded system the whole memory is accessed by a single micro-controller, which uses only a fraction of the memory while the rest is wasted. In addition, that single micro-controller executes the entire given set of instructions or program. In our work we have designed a multi-block external memory and a binary tree decoder for a multi-micro-controller embedded system, so that the complete memory is divided into more than one block (four in our case) and can be accessed independently by more than one micro-controller. This can be done in two ways: static memory mapping and dynamic memory mapping. In static memory mapping, a micro-controller is able to access only the particular block of memory to which it is mapped. In dynamic memory mapping, any micro-controller can access any block of memory; moreover, different parts of the program are executed by different micro-controllers in parallel, which speeds up the execution of the multi-microcontroller system. Current embedded applications are migrating from single-processor-based systems to communication-intensive multi-processing systems in order to fulfill real-time application demands. The performance demanded by these applications requires the use of a multiprocessor architecture. For such multiprocessor systems there is a need to develop a memory mapping mechanism that can support high speed and, for the selected memory mapping mechanism, to choose the decoding mechanism and controller design that give a low-power, high-speed, low-area system. Our binary tree decoder algorithm improves the MMSOPC embedded system design. A binary-tree-decoder algorithm for a 256K memory has not previously been designed, and it is presented in this work.
Keywords––SOC, MPSOC, MMSOC, MPSOPC
I. INTRODUCTION
Current embedded applications are migrating from single-processor-based systems to communication-intensive multi-processing systems in order to fulfill real-time application demands. The performance demanded by these applications requires the use of a multiprocessor architecture. For these types of multiprocessor systems there is a need to develop a memory mapping mechanism that can support high speed and, for the selected memory mapping mechanism, to choose the decoding mechanism and controller design that give a low-power, high-speed, low-area system. Benini and De Micheli address the design of on-chip micro-networks that meet the challenges of providing correct functionality and reliable operation of interacting system-on-chip components [1]. The dynamic partitioning of processing and memory resources in embedded MPSoC architectures is studied in [3]; this idea of dynamic partitioning of memory resources is used in designing the architecture of the multi-block memory. Among the MPSoC design challenges, one of the key architectural points in designing and programming is the memory access method [20].
To access the multi-block memory we have used a dynamic memory mapping technique and designed a controller for this external memory, which is used in an 8-bit multi-microcontroller system-on-chip [20][3].
In a multi-microcontroller SoC there are two possible kinds of memory mapping:
• Static memory mapping
• Dynamic memory mapping
A. STATIC MEMORY MAPPING:
In static memory mapping, the micro-controller is able to access only a particular block of memory to which it is mapped.
Microcontroller No.    Memory block address    Memory block No.
MC01                   0000-1FFF               BLOCK0
MC02                   2000-2FFF               BLOCK1
MC03                   3000-3FFF               BLOCK2
MC04                   4000-4FFF               BLOCK3
Fig. 1 Static memory mapping
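As an illustration of the static scheme, the mapping of Fig. 1 can be realised by a small combinational checker that asserts a block select only when the requesting micro-controller's address lies in its own fixed window. The sketch below is indicative only: the entity name, port names, the 2-bit micro-controller identifier encoding and the active-high selects are assumptions, not the exact design of this work.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Static memory mapping sketch: each micro-controller reaches only the block
-- listed against it in Fig. 1 (address windows copied from the table).
entity static_map is
  port (
    mc_id     : in  std_logic_vector(1 downto 0);   -- "00" = MC01 .. "11" = MC04 (assumed encoding)
    addr      : in  std_logic_vector(15 downto 0);  -- address issued by that micro-controller
    block_sel : out std_logic_vector(3 downto 0)    -- one-hot select for BLOCK0..BLOCK3
  );
end entity static_map;

architecture rtl of static_map is
begin
  process (mc_id, addr)
    variable a : integer range 0 to 65535;
  begin
    a := to_integer(unsigned(addr));
    block_sel <= (others => '0');
    case mc_id is
      when "00"   => if a <= 16#1FFF#                   then block_sel(0) <= '1'; end if;  -- MC01: 0000-1FFF
      when "01"   => if a >= 16#2000# and a <= 16#2FFF# then block_sel(1) <= '1'; end if;  -- MC02: 2000-2FFF
      when "10"   => if a >= 16#3000# and a <= 16#3FFF# then block_sel(2) <= '1'; end if;  -- MC03: 3000-3FFF
      when others => if a >= 16#4000# and a <= 16#4FFF# then block_sel(3) <= '1'; end if;  -- MC04: 4000-4FFF
    end case;
  end process;
end architecture rtl;

Because each window is fixed at synthesis time no arbitration is needed, but the memory outside a micro-controller's own window remains unreachable to it, which is exactly the limitation the dynamic scheme below removes.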
B. DYNAMIC MEMORY MAPPING:
In dynamic memory mapping any micro-controller can access any block of external memory. In addition, different programs can be executed by different micro-controllers in parallel, which results in higher throughput and improves the performance of the multi-microcontroller system [19]. Because dynamic memory mapping gives more processing speed, we follow this technique [22]. However, it is complex and requires extra hardware to implement, so the address-decoding technique used to generate memory addresses needs to be enhanced so that it places no extra burden on the hardware.
Microcontroller No.    Memory block No.                   Control signal    Status
MC01                   Block0, Block1, Block2, Block3     RD=0 & WR=1       READ
MC02                   Block0, Block1, Block2, Block3     RD=0 & WR=1       READ
MC03                   Block0, Block1, Block2, Block3     RD=0 & WR=1       READ
MC04                   Block0, Block1, Block2, Block3     RD=0 & WR=1       READ
In the Dynamic memory mapping, any micro-controller can access any block of memory.
Fig. 2 Dynamic memory mapping
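By contrast, under dynamic mapping the block is chosen by the address itself rather than by the identity of the requesting micro-controller. A minimal sketch is given below; it assumes that the two most-significant address bits select one of the four blocks and that RD and WR are active-low strobes as in Fig. 2. The text does not fix this particular encoding, so the names and bit positions are illustrative only.

library ieee;
use ieee.std_logic_1164.all;

-- Dynamic memory mapping sketch: any micro-controller may access any block.
-- Assumption: addr(15 downto 14) selects the block; RD and WR are active low.
entity dynamic_map is
  port (
    addr     : in  std_logic_vector(15 downto 0);  -- address from the currently granted micro-controller
    rd       : in  std_logic;                      -- RD = '0' means read (Fig. 2)
    wr       : in  std_logic;                      -- WR = '0' means write
    block_rd : out std_logic_vector(3 downto 0);   -- per-block read enables (Block0..Block3)
    block_wr : out std_logic_vector(3 downto 0)    -- per-block write enables
  );
end entity dynamic_map;

architecture rtl of dynamic_map is
  signal sel : std_logic_vector(3 downto 0);
begin
  -- one-hot block select decoded from the two most-significant address bits
  with addr(15 downto 14) select
    sel <= "0001" when "00",
           "0010" when "01",
           "0100" when "10",
           "1000" when others;

  -- qualify the select with the active-low control strobes
  block_rd <= sel when rd = '0' else (others => '0');
  block_wr <= sel when wr = '0' else (others => '0');
end architecture rtl;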
II. MEMORIES IN MULTI-MICROCONTROLLER ENVIRONMENT
There are different memory schemes available for SoCs having multiple microcontrollers. Some of them are listed below [27]:
• UMA (Uniform Memory Access)
• NUMA (Non-Uniform Memory Access)
• COMA (Cache Only Memory Access)
• CC-NUMA (Cache-Coherent Non-Uniform Memory Access)
• CC-COMA (Cache-Coherent Cache Only Memory Access)
A. UMA (Uniform Memory Access) Model:
Physical memory is uniformly shared by all the processors. All processors have equal access time to all memory words, which is why the model is called uniform memory access. Each processor may use a private cache. Multiprocessors of this kind are called tightly coupled systems because of the high degree of resource sharing. The system interconnect takes the form of a common bus, a crossbar switch or a multistage network. Synchronization and communication among processors are done through shared variables in the common memory. When all the processors have equal access to all peripheral devices, the system is called a symmetric multiprocessor; all processors are equally capable of running executive programs such as the operating-system kernel and I/O service routines. When only one processor or a subset of processors is executive-capable, the system is called an asymmetric multiprocessor. An executive or master processor can execute the operating system and handle I/O. The remaining processors, having no I/O capability, are called attached processors (APs); they execute code under the supervision of the master processor.
B. NUMA (Non-Uniform Memory Access) Model:
A NUMA multiprocessor is a shared-memory system in which the access time varies with the location of the memory word.
(Figure: NUMA model of a multiprocessor - processors P1 ... Pn, each with a local memory LM1 ... LMn, connected to a shared memory.)
The shared memory is physically distributed across all processors as local memories. The collection of all local memories forms a global address space accessible by all processors. Accessing a local memory with its local processor is faster than accessing the remote memory attached to other processors, because of the added delay through the interconnection network. Besides the distributed memories, globally shared memory can be added to a multiprocessor system. In this case there are three memory-access patterns:
(a) local memory access (fastest);
(b) global memory access;
(c) remote memory access (slowest).
In hierarchically structured multiprocessors, processors are divided into clusters, each of which is itself a UMA or a NUMA multiprocessor. Clusters are connected to global shared-memory modules. All processors belonging to the same cluster are allowed to uniformly access the cluster's shared-memory modules, and all clusters have equal access to the global memory.
C. COMA (Cache Only Memory Access) Model:
A multiprocessor using cache-only memory follows the COMA model. This model is a special case of a NUMA machine in which the distributed main memories are converted to caches. There is no memory hierarchy at each processor node; all the cache memories form a global address space. Remote cache access is assisted by distributed cache directories (D).
D. Other Model:
Another variation of multiprocessors is a cache-coherent non-uniform memory access (CC-NUMA) model
which is specified with distributed shared memory and cache directories.
One more variation may be a cache-coherent COMA machine where all copies must be kept consistent.
Fig. 5 COMA model of multiprocessor
III. SOC MODEL AND DESIGN
This is the generic system architecture of the multi-microcontroller system-on-chip. In our project the multi-block external memory is basically the integrated form of the following components:
1. MAR (Memory Address Register)
2. MDR (Memory Data Register)
3. MMBAC (Multi-Memory-Block Arbitration Controller)
4. Binary Tree Decoder
5. Memory block comparator
6. Memory of 256 Kb
In this architecture the four microcontrollers are connected to the memory via different controllers and buffers (registers such as the MAR and MDR) with the help of the address bus, data bus and control bus. A bus is simply a set of parallel wires that connects two components. In this architecture the data buses are 8 bits wide and the address buses are 16 bits wide.
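The internals of the MMBAC are not detailed here; purely as an illustration, a fixed-priority arbiter such as the sketch below could decide which of the four micro-controllers drives the shared address and data buses in a given cycle. The entity name, reset polarity and priority order are assumptions, not the paper's actual controller.

library ieee;
use ieee.std_logic_1164.all;

-- Illustrative fixed-priority arbiter for the Multi-Memory-Block Arbitration
-- Controller (MMBAC): grants the shared buses to one of the four
-- micro-controllers. MC01 has the highest priority in this sketch.
entity mmbac_arbiter is
  port (
    clk   : in  std_logic;
    rst   : in  std_logic;                     -- synchronous reset, active high (assumed)
    req   : in  std_logic_vector(3 downto 0);  -- bus requests from MC01..MC04
    grant : out std_logic_vector(3 downto 0)   -- one-hot grant back to MC01..MC04
  );
end entity mmbac_arbiter;

architecture rtl of mmbac_arbiter is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        grant <= (others => '0');
      elsif req(0) = '1' then grant <= "0001";  -- MC01
      elsif req(1) = '1' then grant <= "0010";  -- MC02
      elsif req(2) = '1' then grant <= "0100";  -- MC03
      elsif req(3) = '1' then grant <= "1000";  -- MC04
      else                    grant <= (others => '0');
      end if;
    end if;
  end process;
end architecture rtl;

A round-robin policy could replace the fixed priority if starvation of the lower-priority micro-controllers is a concern.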
A. BINARY TREE DECODER
In the above implementation the binary tree decoder is the block that realizes dynamic memory access while keeping the power consumption and area of the design low.
ENABLE BITS                        ADDRESS BITS                       Integer
EN7 EN6 EN5 EN4 EN3 EN2 EN1 EN0    A7  A6  A5  A4  A3  A2  A1  A0     conversion
 0   0   0   0   0   0   0   0     0   0   0   0   0   0   0   0          0
 0   0   0   0   0   0   0   0     0   0   0   0   0   0   0   1          1
 0   0   0   0   0   0   0   0     0   0   0   0   0   0   1   0          2
 0   0   0   0   0   0   0   0     0   0   0   0   0   0   1   1          3
 0   0   0   0   0   0   0   0     0   0   0   0   0   1   0   0          4
 ...
 0   0   0   0   0   0   0   0     1   1   1   1   1   1   1   1        255
 0   0   0   0   0   0   0   1     0   0   0   0   0   0   0   0        256
 0   0   0   0   0   0   0   1     0   0   0   0   0   0   0   1        257
 0   0   0   0   0   0   0   1     0   0   0   0   0   0   1   0        258
 0   0   0   0   0   0   0   1     0   0   0   0   0   0   1   1        259
 0   0   0   0   0   0   0   1     0   0   0   0   0   1   0   0        260
 ...
 0   0   0   0   0   0   0   1     1   1   1   1   1   1   1   1        511
 ...
 0   0   0   0   0   0   1   1     1   1   1   1   1   1   1   1       1023
 ...
 1   1   1   1   1   1   1   1     1   1   1   1   1   1   1   1      65535
Fig.7. Decoding Mechanism of Binary Tree Decoder
After the decoding mechanism of the binary tree decoder, binary-tree generation and memory-address generation for all blocks of memory follow the algorithm below:
for i in 0 to 255 loop
  if conv_integer(BinaryTreeDecoder_En) = i then
    -- ForestTreeDecoder_Out := 16*i  + conv_integer(y);   -- old algorithm (forest tree decoder)
    BinaryTreeDecoder_Out   := 256*i + conv_integer(y);    -- new algorithm (binary tree decoder)
  end if;
end loop;
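For completeness, the address-generation step above can be wrapped in a small synthesizable unit. The following is only a behavioural sketch using the numeric_std package (the snippet above uses conv_integer from the older arithmetic packages); the entity and port names are assumed and are not taken from the actual RTL of this work.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Behavioural sketch of the binary tree decoder address generation.
entity binary_tree_decoder is
  port (
    en   : in  std_logic_vector(7 downto 0);   -- enable bits EN7..EN0 of Fig. 7
    y    : in  std_logic_vector(7 downto 0);   -- address bits A7..A0 of Fig. 7
    addr : out std_logic_vector(15 downto 0)   -- generated memory address (the integer-conversion column)
  );
end entity binary_tree_decoder;

architecture behav of binary_tree_decoder is
begin
  process (en, y)
  begin
    addr <= (others => '0');
    for i in 0 to 255 loop
      -- when the enable pattern encodes page i, the word address is 256*i + y
      if to_integer(unsigned(en)) = i then
        addr <= std_logic_vector(to_unsigned(256 * i + to_integer(unsigned(y)), 16));
      end if;
    end loop;
  end process;
end architecture behav;

With these widths, 256*i + conv_integer(y) is arithmetically the same as concatenating the enable byte and the address byte (addr <= en & y), so the loop above is essentially a readable way of expressing that concatenation.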
B. RTL Schematic of Decoding Mechanism of Binary Tree Decoder
Fig. 8: RTL schematic of the Binary Tree Decoder (BTD)
Fig. 9: RTL schematic of the integrated block
Fig. 10: RTL schematic of the Multi-Block External-Memory-Controller in the Multi-µC SoC design
C. Results
The memory controller design using the binary tree decoder was implemented on an FPGA board, and the tables below summarise the results.
C.1. Design summary of Binary tree decoder
C.2. Comparative Area Analysis of Binary Tree Decoder and Forest Tree Decoder
Sr. No. 1 - No. of decoders (64KB memory block): Forest Tree Decoder = 4096; Binary Tree Decoder = 256 (16 times fewer than the Forest Tree Decoder).
Sr. No. 2 - Synthesis time/issues: Forest Tree Decoder - unable to finish synthesis, the tool crashes due to the large virtual-memory space required; Binary Tree Decoder - able to synthesize faster.
Sr. No. 3 - Area: Forest Tree Decoder - more gates; Binary Tree Decoder - fewer gates.
C.3. Power Analysis of Binary Tree Decoder
S. No.   Type     Voltage (V)   Current (mA)   Power (mW)
1        Vccint   1.8           15             27
2        Vcco33   3.3           2              6.6
Total power                                    33.6
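As a quick check, the table entries are consistent with the quiescent-power relation P = V x I:

\[
P_{\mathrm{int}} = 1.8\,\mathrm{V} \times 15\,\mathrm{mA} = 27\,\mathrm{mW}, \qquad
P_{\mathrm{o33}} = 3.3\,\mathrm{V} \times 2\,\mathrm{mA} = 6.6\,\mathrm{mW}, \qquad
P_{\mathrm{total}} = 27\,\mathrm{mW} + 6.6\,\mathrm{mW} = 33.6\,\mathrm{mW},
\]

which matches the 33.6 mW total reported in the X-Power summary below.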
C.4. Results from the X-Power summary report of the integrated design
Voltage                    Total current   Total power
Vdd = 3.3 V, Vt = 1.8 V    17 mA           33.6 mW
IV. CONCLUSION
Our results were tested, verified and prototyped in the following environment, using the following hardware and software tools and technology:
• Target device: xc2s600e-6fg456
• On-board frequency: 50 MHz
• Test frequency: 12 MHz
• Supply voltage: 3.3 V
• Worst-case voltage: 1.4 to 1.6 V
• Temperature: -40 °C to 80 °C
• Speed grade: -6
• FPGA compiler: Xilinx ISE 8.2i
• Simulator: ModelSim XE-II 5.7g
The initial implementation was based on a forest tree decoder. The major enhancement made in this work is to replace the forest tree decoder with an area- and power-efficient implementation of a binary tree decoder. Decoding is one of the most time-consuming steps, so improving this decoding mechanism in turn makes the whole MPSoC system faster and more efficient, with the help of the multi-block memory architecture. With reference to the 2-D VLSI complexity model, we found that when the x-axis (data-line) width is kept fixed at 8 bits, the y-axis (memory row-address) height is reduced for the binary tree decoder in comparison to the forest tree decoder. The overall area of the 2-D model is therefore reduced, which in turn increases the accessing speed of the multi-processor system. Because the y-axis height is reduced, the time taken to search for a particular memory word (8 bits here) is reduced, and hence the accessing speed is increased. In addition, as the area is smaller, the power consumed is also lower.
REFERENCES
Journals:
1. Benini, L. and De Micheli, G., 2002. Networks on chips: A new SoC paradigm. IEEE Computer, 35(1), 2002; 70-78.
2. Beltrame, G., Sciuto, D., Silvano, C., Paulin, P. and Bensoudane, E., 2006. An application mapping methodology and case study for multi-processor on-chip architectures. Proceedings of VLSI-SoC 2006, 16-18 October 2006, Nice, France, 146-151.
3. Xue, L., Ozturk, O., Li, F., Kandemir, M. and Kolcu, I., 2006. Dynamic partitioning of processing and memory resources in embedded MPSoC architectures. Proceedings of DATE 2006, 6-10 March 2006, Munich, Germany, 690-695.
4. Benini, L. and De Micheli, G., 2006. Networks on Chips: Technology and Tools. Morgan Kaufmann, San Francisco, 2006, ISBN-10: 0-12-370521-5.
5. Berens, F., 2004. Algorithm to system-on-chip design flow that leverages System Studio and SystemC 2.0.1. The Synopsys Verification Avenue Technical Bulletin, 4(2), May 2004.
6. Bouchhima, A., Yoo, S. and Jerraya, A. A., 2004. Fast and accurate timed execution of high-level embedded software using HW/SW interface simulation model. Proceedings of ASP-DAC 2004, January 2004, Yokohama, Japan, 469-474.
7. Bouchhima, A., Chen, X., Petrot, F., Cesario, W. O. and Jerraya, A. A., 2005. A unified HW/SW interface model to remove discontinuities between HW and SW design. Proceedings of EMSOFT '05, 18-22 September 2005, New Jersey, USA, 159-163.
8. Bouchhima, A., Bacivarov, I., Youssef, W., Bonaciu, M. and Jerraya, A. A., 2005. Using abstract CPU subsystem simulation model for high-level HW/SW architecture exploration. Proceedings of ASP-DAC 2005, Shanghai, China, 969-972.
9. Coppola, M., Locatelli, R., Maruccia, G., Pieralisi, L. and Scandurra, A., 2004. Spidergon: a novel on-chip communication network. Proceedings of the International Symposium on System-on-Chip, 16-18 November 2004.
11. Flake, P. and Schirrmeister, F., 2007. MPSoC demands system level design automation. EDA Tech Forum, 4(1), March 2007; 10-11.
12. Gauthier, L., Yoo, S. and Jerraya, A. A., 2001. Automatic generation and targeting of application specific operating systems and embedded systems software. IEEE TCAD Journal, 20(11), November 2001.
13. Guerin, X., Popovici, K., Youssef, W., Rousseau, F. and Jerraya, A., 2007. Flexible application software generation for heterogeneous multi-processor System-on-Chip. Proceedings of COMPSAC 2007, 23-27 July 2007, Beijing, China.
14. Ha, S., 2007. Model-based programming environment of embedded software for MPSoC. Proceedings of ASP-DAC '07, 23-26 January 2007, Yokohama, Japan, 330-335.
15. Kangas, T., Kukkala, P., Orsila, H., Salminen, E., Hannikainen, M., Hamalainen, T. D., Riihimaki, J. and Kuusilinna, K., 2006. UML-based multiprocessor SoC design framework. ACM Transactions on Embedded Computing Systems (TECS), 5(2), 2006, 281-320.
16. Kienhuis, B., Deprettere, Ed. F., van der Wolf, P. and Vissers, K. A., 2002. A methodology to design programmable embedded systems - the Y-chart approach. Lecture Notes in Computer Science, Volume 2268, Embedded Processor Design Challenges: Systems, Architectures, Modeling and Simulation - SAMOS 2002, Springer, 18-37.
17. Klingauf, W., Gadke, H. and Gunzel, R., 2006. TRAIN: A virtual transaction layer architecture for TLM-based HW/SW codesign of synthesizable MPSoC. Proceedings of DATE 2006, 6-10 March 2006, Munich, Germany, 1318-1323.
18. Jerraya, A. and Wolf, W., 2005. Hardware-software interface codesign for embedded systems. Computer, 38(2), February 2005; 63-69.
19. Jerraya, A., Bouchhima, A. and Petrot, F., 2006. Programming models and HW/SW interfaces abstraction for Multi-Processor SoC. Proceedings of DAC 2006, San Francisco, USA, 280-285.
20. Martin, G., 2006. Overview of the MPSoC design challenge. Proceedings of DAC 2006, 24-28 July 2006, San Francisco, USA, 274-279.
21. De Micheli, G., Ernst, R. and Wolf, W., 2002. Readings in Hardware/Software Co-Design. Morgan Kaufmann, 2002, ISBN 1558607021.
22. Popovici, K. and Jerraya, A., 2008. Programming models for MPSoC. Chapter 4 in Model Based Design of Heterogeneous Embedded Systems, CRC Press, 2008.
23. Popovici, K., Guerin, X., Rousseau, F., Paolucci, P. S. and Jerraya, A., 2008. Platform-based software design flow for heterogeneous MPSoC. ACM Transactions on Embedded Computing Systems (TECS), 7(4), July 2008.
24. Pospiech, F., 2003. Hardware dependent Software (HdS): Multiprocessor SoC aspects, an introduction. MPSoC 2003, 7-11 July 2003, Chamonix, France.
25. Sarmento, A., Kriaa, L., Grasset, A., Youssef, W., Bouchhima, A., Rousseau, F., Cesario, W. and Jerraya, A. A., 2005. Service dependency graph: an efficient model for hardware/software interface modeling and generation for SoC design. Proceedings of CODES+ISSS 2005, New York, USA, 18-21 September 2005.
26. Schirner, G., Gerstlauer, A. and Domer, R., 2008. Automatic generation of hardware dependent software for MPSoC from abstract specifications. Proceedings of ASP-DAC 2008, 21-24 January 2008, Seoul, Korea.
27. Van der Wolf, P. et al., 2004. Design and programming of embedded multiprocessors: An interface-centric approach. Proceedings of CODES+ISSS 2004, Stockholm, Sweden, 206-217.
28. Bacivarov, I. and Bouchhima, A., 2005. ChronoSym: A new approach for fast and accurate SoC cosimulation. International Journal on Embedded Systems (IJES), 1(1); 103-111.
29. Yoo, S. and Jerraya, A. A., 2003. Introduction to hardware abstraction layers for SoC. Proceedings of DATE 2003, 3-7 March 2003, Munich, Germany, 336-337.
30. Youssef, M., Yoo, S., Sasongko, A., Paviot, Y. and Jerraya, A., 2004. Debugging HW/SW interface for MPSoC: Video encoder system design case study. Proceedings of DAC 2004, 7-11 June 2004, San Diego, USA, 908-913.
Books:
1. Wolf and Jerraya, "MPSoC Design", Springer.
2. Marc Moonen and Francky Catthoor, "Algorithms and Parallel VLSI Architectures III", Elsevier.
3. Naveed Sherwani, "Algorithms for VLSI Physical Design Automation", Kluwer Academic Publishers.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 2 (September 2012) PP: 01-07
Resolving Cloud Application Migration Issues
C. Kishor Kumar Reddy 1, P. R. Anisha 2, B. Mounika 3, V. Tejaswini 4
1,2,3,4 Department of CSE, Vardhaman College of Engineering, Shamshabad
Abstract--Cloud computing is a rapidly emerging technology which involves the deployment of various services, such as software, web services and virtualized infrastructure, as products on public, private or hybrid clouds on a lease basis. Cloud storage can be an attractive means of outsourcing the day-to-day management of data, but ultimately the responsibility and liability for that data fall on the company that owns the data, not on the hosting provider. In the first part of this look at cloud application migration, we discuss how cloud providers, through their selection of hypervisors and networking, affect the capability to migrate applications. In the second part, we discuss how appropriate architectures for cloud applications and open standards can reduce the difficulty of migrating applications across cloud environments. This paper also highlights tools that facilitate application migration in clouds, outlines candidate architectures and indicates when each is most appropriate based on system needs and cloud provider capabilities.
Keywords: Virtualized Infrastructure, Public Cloud, Private Cloud, Hybrid Cloud, Cloud Storage, Application Migration,
Hypervisors.
I. INTRODUCTION
Although the term “Cloud Computing” is based on a collection of many old and few new concepts in several
research fields like Service-Oriented Architectures (SOA), distributed and grid computing as well as virtualization, it has
created much interest in the last few years. This was a result of its huge potential for substantiating other technological
advances while presenting a superior utilitarian advantage over the currently under-utilized resources deployed at data
centers. In this sense, cloud computing can be considered a new computing paradigm that allows users to temporarily utilize computing infrastructure over the network, supplied as a service by the cloud provider at possibly one or more levels of abstraction. Consequently, several business models rapidly evolved to harness this technology by providing software applications, programming platforms, data storage, computing infrastructure and hardware as services. While these refer to the core cloud computing services, their inter-relations have been ambiguous and the feasibility of enabling their interoperability has been debatable. Furthermore, each cloud computing service has a distinct interface and employs a different access protocol. A unified interface to provide integrated access to cloud computing services is nonexistent, although portals and gateways can provide such a unified web-based user interface.
In part one of this look at cloud application migration, we discussed how cloud providers, through the selection of
hypervisors and networking, affect the capability to migrate applications. In part two, we will talk about how appropriate
architectures for cloud applications and open standards can reduce the difficulty in migrating applications across cloud
environments.
A good deal of time and money in the IT industry has been spent on trying to make applications portable. Not surprisingly, the goal of migrating applications among clouds is to somehow make applications more cloud-portable. This can be done in at least three ways:
• Architect applications to increase cloud portability;
• Develop open standards for clouds;
• Find tools that move applications around clouds without requiring changes.
Most of today's large, old monolithic applications are not portable and must be rebuilt in order to fit the target
environment. There are other applications that require special hardware, reducing their portability, and even many of the
newer applications being built today are not very portable, certainly not cloud portable.
II. APPLICATION ARCHITECTURES AND THE CLOUD
Numerous cloud experts have indicated how strongly an application’s architecture affects the ability to move
it from one cloud to another. Appropriate cloud application architectures are part of the solution to cloud interoperability,
and existing applications may need to be re-architected to facilitate migration. The key is trying to architect applications that
reduce or eliminate the number of difficult-to-resolve dependencies between the application stack and the capabilities
provided by the cloud service provider.
Bernard Golden, CEO of HyperStratus, has noted that, to exploit the flexibility of a cloud environment, you need
to understand which application architectures are properly structured to operate in a cloud, the kinds of applications and data
that run well in cloud environments, data backup needs and system workloads.
There are at least three cloud application architectures in play today:
• Traditional application architectures (such as three-tier architectures), which are designed for stable demand rather than large variations in load. They do not require an architecture that can scale up or down.
• Synchronous application architectures, where end-user interaction is the primary focus. Typically, large numbers of users may be pounding on a Web application in a short time period and could overwhelm the application and system.
• Asynchronous application architectures, which are essentially all batch applications that do not support end-user interaction. They work on sets of data, extracting and inserting data into databases. Cloud computing offers scalability of server resources, allowing an otherwise long-running asynchronous job to be dispersed over several servers to share the processing load.
Platform as a Service (PaaS) providers provide tools for developing applications and an environment for running
these applications. To deliver an application with a PaaS platform, you develop and deploy it on the platform; this is the way
Google App Engine works. You can only deploy App Engine applications on Google services, but cloud application
platforms such as the Appistry CloudIO Platform allow for in-house private cloud deployment as well as deployment on
public cloud infrastructures such as Amazon EC2.
Where the application is developed and where it is to be run are factors that feed into the application architecture.
For example, if you develop in a private cloud with no multi-tenancy, will this application run in target clouds where multi-tenancy is prevalent? Integrating new applications with existing ones can be a key part of application development. If
integration involves working with cloud providers, it is difficult because cloud providers do not typically have open access
into their infrastructures, applications and integration platforms.
Older applications that depend on specific pieces of hardware -- meaning they'll want to see a certain type of
network controller or disk -- are trouble as well. The cloud provider is not likely to have picked these older pieces of
hardware for inclusion in its infrastructure.
In your efforts to migrate applications, you may decide to start working with a cloud provider template where the
provider gives you an operating system, such as CentOS or a Red Hat Enterprise Linux template. You'll then try to put your
applications on it, fixing up the things that are mismatched between the source application environment and the target
environment. The real challenge is that this approach becomes an unknown process, complete with a lot of workarounds and
changes.
As you move through a chain of events, fixing problems as you go, you are really rewriting your application.
Hopefully you won't have to rewrite it all, but you will surely change configurations and other things. You are then left with
a fundamentally different application. This could be good or bad, but either way you'll have at least two versions of your
application -- the data center version and the cloud version.
If moving an application back and forth between your data center and a cloud (or from one cloud to another cloud)
results in two different versions of the application, you are now managing a collection of apps. As you fix and encounter
problems, you'll have to work across however many versions of the application that you have created.
Open cloud standards are considered the eventual solution to issues around application migration and cloud
interoperability. We view cloud standards as a collection; this one starts at the low level with something like OVF (Open
Virtualization Format) that gives you a universal language for describing the metadata and configuration parameters of
virtual machines. At the next level, something that would describe the environment -- the connectivity between virtual
machines -- would be useful. This would give you the networking between the virtual machines and the functions and scale
of the environment in which the virtual machines operate.
It is unlikely that we will see cloud standards being adopted this year or next year, for reasons that include ongoing
innovation. Vendors such as VMware would love to just say, "We will do the whole black box thing for you: buy our stuff
and you can put up a cloud and offer it to your customers." The cloud providers are not thrilled with this idea because they
want to differentiate their services. They don’t want to go the route of standards if clouds are then driven to commodities. If
and when we have standards, there will almost certainly be a problem with how cloud providers do or offer unique things on
top of standards.
John Considine, the CTO of CloudSwitch, notes that for a cloud provider, a standard provides customers with what
they want and provides a guideline for how cloud is implemented. In the case of the VMware vCloud API -- which has been
submitted to the DMTF for ratification as an open standard for cloud APIs -- VMware dictates how cloud environments are
configured and accessed with respect to things like definition of resources and catalogs of virtual machines. These
"mandates" have a direct impact on how a provider implements its cloud.
What are some hints for architecting cloud applications? One suggestion is to design the application and its
supporting stack components not to rely on the operating system and the infrastructure. The more you do this, the better off
you will be with respect to interoperability and application migration. If you can use mature fourth-generation languages or
interpretive systems to build applications, then you will also have a better chance for interoperability.
The problem you might run into is not getting the performance and/or the functionality you need. In addition, you
may have to avoid certain performance and capability benefits that could be available with hypervisor tools or from the
specifics of an operating system. You also might have to go for a generic operation of your application with min-set
functionality to make it portable from cloud to cloud.
What kind of existing applications are good candidates for running in the cloud? The more generic and higher
level the application is, the greater your chances of moving it from cloud to cloud. One of the cloud's weakest areas is in
needing total control over the operating system. If you are running an old version of Linux or Windows, then you are
probably in trouble; most public clouds do not support older versions of operating systems. This is a dating problem, as
applications written before a certain date are not easily movable.
Migrating applications among clouds is not easy. But open standards for cloud computing, when they appear, and
the advent of tools such as CloudSwitch and Racemi will ease the difficulty and make hybrid clouds more of a reality.
III. HOW PROVIDERS AFFECT CLOUD APPLICATION MIGRATION
In an attempt to reduce lock-in, improve cloud interoperability and ultimately choose the best option for their enterprises,
more than a few cloud computing users have been clamoring for the ability to seamlessly migrate applications from cloud to
cloud. Unfortunately, there's more to application migration than simply moving an application into a new cloud.
To date, cloud application migration has focused on moving apps back and forth between a virtualized data center
or private cloud environment and public clouds such as Amazon Elastic Compute Cloud, Rackspace or Savvis. There is also
a group of public cloud-oriented companies that are looking to move applications to private clouds or virtualized data centers
to save money. Still others are interested in moving applications from one public cloud to another in a quest for better
service-level agreements and/or performance at a lower cost.
What are some of the worries in moving an application from one environment to another?
• Data movement and encryption, both in transit and when it reaches the target environment.
• Setting up networking to maintain certain relationships in the source environment and preparing to connect into different network options provided by the target environment.
• The application itself, which lives in an ecosystem surrounded by tools and processes. When the application is moved to a target cloud, you may have to re-architect it based on the components/resources that the target cloud provides.
Applications are being built in a number of ways using various Platform as a Service offerings, including
Windows Azure, Google App Engine and Force.com. With a few exceptions for Windows Azure, the applications you create
using most platforms are not very, if at all, portable. If you develop on Google App Engine, then you have to run on Google
App Engine.
Public clouds, like Amazon, also allow you to build applications. This is similar to building in your data center or
private cloud, but public clouds may place restrictions on resources and components you can use during development. This
can make testing difficult and create issues when you try to move the application into production mode in your data center
environment or to another cloud environment.
Applications built in data centers may or may not be easily moved to target cloud environments. A large number of
applications use third-party software, such as database and productivity applications. Without access to source code,
proprietary third-party applications may be difficult to move to clouds when changes are needed.
Why is migrating apps in the cloud so difficult? During traditional application development, we have complete control of
everything we need: a physical server, an operating system, the amount of memory needed, the disk storage system, network
configuration, patching and runtime systems such as Java.
But when server consolidation came along, our notion of environment for applications changed somewhat. We are
given an application environment that we still control, sans the personal physical server. Hypervisors provide us with
hardware and all other aspects of an environment, so that our operating system, middleware and applications still have all of
the things that they want and need.
And when it comes to clouds, providers pick the operating system, management tools, the networking architecture,
the storage system and the virtual machine configuration. This is the baseline with which you work, offering much less
control of the environment for developing and deploying applications. If you want to move some of your applications to a
cloud, then you have to make them work in an environment that will almost certainly be different from the one in which they
were last deployed and/or developed.
The decisions made by the cloud provider affect what you can and cannot do within the cloud. For example, a
cloud provider could decide to go with cheap disk drives and do big NFS shares or iSCSI shares to feed the virtualization
layer, which sets a performance envelope, a protection level and a raw capacity for the storage for your virtual machine. You
do not generally get a say in any of this; you must live with it.
Take a look at Amazon. The public cloud leader has decided that your virtual machine gets one network adapter
with one interface. It also doesn't support layer 2 connectivity or broadcast and multi-cast. When you go into these types of
environments, the design of your application is constrained by the cloud resources, in this case networking, that you're
allowed to access.
Another example further illustrates how functions at various levels can affect your ability to move applications to
clouds and from cloud to cloud. You have chosen MySQL as the database system for your application. One of the functions
you can perform with MySQL is database replication -- two instances of a database kept in synch. The replication process in
MySQL utilizes multi-cast as part of low-level Ethernet to communicate between the two database instances. If you want to
run your application on Amazon and replicate a database, you have a problem: Amazon does not support multi-cast in its
networking layers.
What do you do? You could use a different database in your application, or you could implement the replication
function differently, but that would require messing with MySQL or handling it through the application. Either way, if you're
thinking about interoperability and moving applications from cloud to cloud, you'll also have to think about top-to-bottom
integration of your entire software stack into a target cloud environment.
The challenge is that you may not know all of the dependencies between the components of your application stack
and elements of the cloud itself. There is a lot of value to extract from the use of clouds, but if you have to expend
energy and if it creates a burden for maintenance and support, you greatly diminish whatever gains you get.
IV. TOOLS TO FACILITATE APPLICATION MIGRATION IN CLOUDS
Given the differences in environments, some customers may not want to go through the oft-difficult process of
making an application work in a target cloud. But if we can give the virtual machine in the new cloud exactly what it wants
to see -- independent of the hypervisor, the cloud environment it is on -- then application migration is easier. This is what
products like CloudSwitch and Racemi offer.
CloudSwitch facilitates multi-tier application migration in the cloud with its Cloud Isolation Technology, which is
a virtualization technology layer that automatically runs on top of the cloud provider’s hypervisor and beneath the end user’s
operating system. The virtualization layer feeds the virtual machine exactly what it wants to see -- without requiring
anything special from the cloud provider -- and runs on behalf of the customer to protect and isolate an environment in the
cloud. Applications do not have to be modified when using CloudSwitch; it maps the application so that it seems to be
running within the target cloud environment while actually maintaining the same exact configuration as in the source
environment.
Racemi takes a different approach to migrating applications than CloudSwitch. It involves capturing a server,
physical or virtual, in one environment (data center or cloud) and then deploying it in a target environment (data center or
cloud). An important component of the Racemi application migration offering is a management appliance that has access to
both the captured server environment and the target server environment and begins a mapping process between the two.
Once this mapping process is completed, the capturing-deploying activity is complete and the application has been migrated
to the target environment.
Migrating applications can be quite a big issue in hybrid cloud and a little less of an issue when you just want to
move an application from your data center to a public cloud. You may, however, still encounter some of the problems
discussed above.
Application migration involves more than just dealing with potentially incompatible cloud application
programming interfaces. There are potential issues at each level of an application stack, as your two clouds are very likely to
have differences in hypervisors, operating systems, storage and network configurations, and drivers.
When you are creating cloud environments with the intention of moving applications around, you need to perform
a thorough investigation of the differences in the cloud environments and look at your application architectures to determine
if they are reasonable fits. The second part of this piece on cloud application migration will offer up some possible solutions
to difficult migration problems.
V. CLOUD APPLICATION
A cloud application (or cloud app) is an application program that functions in the cloud, with some characteristics
of a pure desktop app and some characteristics of a pure Web app. A desktop app resides entirely on a single device at the
user's location (it doesn't necessarily have to be a desktop computer). A Web app is stored entirely on a remote server and is
delivered over the Internet through a browser interface.
Like desktop apps, cloud apps can provide fast responsiveness and can work offline. Like web apps, cloud apps
need not permanently reside on the local device, but they can be easily updated online. Cloud apps are therefore under the
user's constant control, yet they need not always consume storage space on the user's computer or communications device.
Assuming that the user has a reasonably fast Internet connection, a well-written cloud app offers all the interactivity of a
desktop app along with the portability of a Web app.
If you have a cloud app, it can be used by anyone with a Web browser and a communications device that can
connect to the Internet. While tools exist and can be modified in the cloud, the actual user interface exists on the local
device. The user can cache data locally, enabling full offline mode when desired. A cloud app, unlike a Web app, can be
used on board an aircraft or in any other sensitive situation where wireless devices are not allowed, because the app will
function even when the Internet connection is disabled. In addition, cloud apps can provide some functionality even when no
Internet connection is available for extended periods (while camping in a remote wilderness, for example).
Cloud apps have become popular among people who share content on the Internet. Linebreak S.L., based in Spain,
offers a cloud app named (appropriately enough) "CloudApp," which allows subscribers to share files, images, links, music,
and videos. Amazon Web Services offers an "AppStore" that facilitates quick and easy deployment of programs and
applications stored in the cloud. Google offers a solution called "AppEngine" that allows users to develop and run their own
applications on Google's infrastructure. Google also offers a popular calendar (scheduling) cloud app.
Application architectures for cloud computing environments
The ways that organizations use applications, and the applications themselves, are evolving. Applications
nowadays work with increasing amounts of data while suffering from inconsistent loads and use patterns. In short, they're
well-suited for cloud computing environments.
In this webcast, Bernard Golden, CEO of cloud consulting firm HyperStratus, describes the key principles of
application architectures in a cloud computing environment. He also discusses the limitations of running applications in the
cloud, including application integration issues, software licensing and security.
Choosing an application architecture for the cloud
For most of us, building applications to run in a cloud environment is a new ball of wax. But to exploit the
flexibility of a cloud environment, you need to understand which application architectures are properly structured to operate
in the cloud, the kinds of applications and data that run well in cloud environments, data backup needs and system
workloads. There are three architectural choices for cloud environments:
• Traditional architecture;
• Asynchronous application architecture (i.e., one focused on information processing, not end-user interaction); and
• Synchronous application architecture.
As part of your system design, you need to research and design your application appropriately for the particular
environment offered by your cloud provider. Cloud environments are not all created equal: They offer different mechanisms
to implement applications.
Choosing from the major Platform as a Service providers
Platform as a Service is the preferred cloud computing approach for many -- if not most -- organizations, as it frees software developers and IT operatives from infrastructure management chores, security concerns and licensing issues. Amazon's new Elastic Beanstalk service, albeit in a beta version, enables Amazon Web Services to join Google and Microsoft in forming a triumvirate of top-tier Platform as a Service (PaaS) providers. These organizations readily pass most governance standards established by upper-level management and boards of directors for IT vendors.
Weighing the cloud computing standards dilemma
Many IT pros view the lack of cloud computing standards as a potential roadblock to adoption, stemming from
cloud provider lock-in fears and the inability to move virtual machines and data from cloud to cloud. Today, there is a single
cloud standard -- the Open Virtualization Format (OVF), pioneered by VMware for facilitating the mobility of virtual
machines -- but it alone does not solve the cloud interoperability issue. The vendors with the best shot at providing de facto
cloud API standards are VMware and Amazon. What users want is a cloud application programming interface (API) like the
network API, TCP/IP, one that’s implemented in all cloud products and services and promotes transparent interoperability.
This would increase the confidence of prospective public cloud adopters, as they’d be able to leave their providers whenever
they want. It would also eliminate the belief that it’s easy to get into the cloud but difficult to get out. However, Forrester
analyst James Staten says he believes that a common cloud API is way off in the future. He sees the push for standards as too
far ahead of where the market is: “There is no compelling reason to comply; not enough enterprise users have established
cloud computing initiatives.”
Organizations presently pushing for cloud standards
There are a number of organizations that ratify proposals for open standards and others that develop guidelines and
provide information to those interested in cloud computing. Some of the more important ones include:
• The Distributed Management Task Force (DMTF) develops cloud interoperability and security standards. The DMTF created the Open Cloud Standards Incubator (OCSI) in 2009 to address the need for open management standards for cloud computing.
• The mission of the National Institute of Standards and Technology (NIST) is to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards and technology in ways that enhance economic security.
• The Open Cloud Consortium (OCC) is a member-driven organization that develops reference implementations, benchmarks and standards for cloud computing.
• The Open Grid Forum (OGF) is an open community committed to driving the rapid evolution and adoption of applied distributed computing. OGF accomplishes its work through open forums that build the community, explore trends, share best practices and consolidate these best practices into standards. OGF has launched the Open Cloud Computing Interface Working Group to deliver an open, community- and consensus-driven API targeting cloud infrastructures.
• The Storage Networking Industry Association (SNIA) has adopted the role of industry catalyst for the development of storage specifications and technologies, global standards, and storage education.
• The Cloud Security Alliance (CSA) publishes guidelines for secure cloud computing, and the Cloud Computing Interoperability Forum (CCIF) is a vendor-neutral, open community of technology advocates and consumers dedicated to driving the rapid adoption of global cloud computing services.
VI. THE CLOUD STANDARDS AVAILABLE TODAY
Various proprietary and open APIs have been proposed to provide interoperability among Infrastructure as a
Service (IaaS). As far as I know, however, the first and only cloud-oriented standard that has been ratified is the OVF, which
was approved in September 2010 after three years of processing by the DMTF.
OVF’s open packaging and distribution format for virtual machines (VMs) gives customers and vendors some
platform independence. It helps facilitate mobility, but it does not provide all of the independence needed for cloud
interoperability. OVF lets vendors or enterprises package VMs together with applications and operating systems and calls to
any other applications and hardware as needed. This meta data includes information about VM images, such as the number
of CPUs and memory required and network configuration information.
As for the other prominent APIs, VMware announced the
submission of its vCloud API to the DMTF in September 2009. The API is expected to enable consistent provisioning,
management and service assurance of applications running in private and public clouds. The GoGrid API was submitted to
the Open Grid Forum’s OCCI working group. This effort stalled because the GoGrid API failed to achieve any significant
backing from the industry.
Oracle recently published its Oracle Cloud API on OTN (Oracle Technical Network) and submitted a subset of the
API to the DMTF Open Cloud Standards Incubator group in June 2010. The Oracle Cloud API is basically the same as the
Sun Cloud API but with some refinements. This submitted proposal has not received much attention from the IT industry.
Red Hat submitted the Deltacloud API to DMTF as a standard for cloud interoperability in August 2010. It is a set of open
APIs that can be used to move cloud-based workloads among different private clouds and public cloud providers. Red Hat
has contributed Deltacloud to the Apache Software Foundation as an incubator project. Deltacloud attempts to abstract the
details of cloud provider cloud implementations so that an application or developer writing an application only has to call a
single API to get a response regardless of the back-end cloud.
The Rackspace Cloud API has been open sourced and is included in OpenStack. And finally, the Amazon EC2
API is viewed by many as the de facto public cloud API standard. Vendors such as Eucalyptus Systems and OpenNebula
implement much of the Amazon EC2 API. As far as we know, it has not been submitted to DMTF or any other open
standards group.
VII. QUESTIONS TO POSE ON CLOUD STANDARDS
Because interoperability standards between cloud platforms are not yet in place, what should a prospective cloud
adopter do? For starters, do not wait for interoperability standards to be ratified. In an environment of tremendous change
where the potential benefits could be large, a better decision is to study up and make a choice. Try to determine which
vendors have the best opportunities to turn their cloud APIs into de facto standards for private and public clouds. Be sure to
ask a number of questions like these and then compare the answers with your needs for both the short and the long term:
• Does the vendor have cloud APIs that appeal to customers and service providers, along with widespread acceptance in the cloud marketplace?
• Has the vendor submitted one or more of its cloud APIs for ratification to DMTF or one of the other standards organizations?
• Does the vendor have significant numbers of partners to promote and use its cloud APIs?
• Does the vendor’s cloud API promote interoperability between private and public clouds? What about between the vendor’s cloud and another vendor's cloud?
• Can the vendor provide a way to transfer your data out of its service?
What users want is a cloud API like TCP/IP, one that’s implemented in all cloud products and services. If you
woke up today and wanted to move data from one cloud to another, there is a good chance that you cannot. If you want to do
this in the future, however, make sure that you don't architect your solution to take advantage of a vendor’s proprietary
features and services, such as the data store in Google App Engine or Amazon’s SQS (Simple Queue Service). In my
opinion, the vendors with the best shot at providing de facto cloud API standards are VMware and Amazon. Amazon is the
800-pound gorilla in the public cloud space, but VMware is trying to leverage its dominance in the virtualization software
market to gain the lead in both the private and public cloud markets. If customers go heavily for internal clouds first, as
many are predicting, VMware could become the de facto standard in the private cloud business. And with the vCloud
Express offering and the vCloud API, VMware also has a good chance at challenging Amazon in the public cloud. Over a
year ago, VMware CEO Paul Maritz said that more than 1,000 hosting providers have enlisted to help enable public clouds
via vCloud Express. However, the vCloud API could be considered a lock-in API, unless other hypervisor technologies in
the cloud beyond vSphere actually adopt or make use of it.
As for the other major player in cloud, Microsoft’s strategy is aimed at its huge Windows-installed base. I don't
expect Microsoft to make its APIs open, as its clouds don't have to interoperate with other clouds for it to be a successful
vendor. Even Microsoft’s own services, Hyper-V Cloud and Windows Azure, have limited interoperability. The reason:
Microsoft believes that Azure is the wave of the future. If you are a large, cloud-hungry Windows user who wants to buy
from a single vendor and doesn't object to lock-in (neither of which is advisable), choose Microsoft.
VIII. CONCLUSION
Cloud computing is a rapidly emerging technology which involves the deployment of various services, such as software, web services and virtualized infrastructure, as products on public, private or hybrid clouds on a lease basis. As part of this look at cloud application migration, we discussed how cloud providers, through their selection of hypervisors and networking, affect the capability to migrate applications. We also discussed how appropriate architectures for cloud applications and open standards can reduce the difficulty of migrating applications across cloud environments. We highlighted tools that facilitate application migration in clouds, outlined candidate architectures and indicated when each is most appropriate based on system needs and cloud provider capabilities.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 3 (September 2012) PP: 01-05
Clustering With Multi-Viewpoint Based Similarity Measure:
An Overview
Mrs. Pallavi J. Chaudhari 1, Prof. Dipa D. Dharmadhikari 2
1 Lecturer, Computer Technology Department, MIT College of Polytechnic, Aurangabad
2 Assistant Professor, Computer Science and Engineering Department, MIT College of Engineering, Aurangabad
Abstract––Clustering is a useful technique that organizes a large quantity of unordered text documents into a small number of meaningful and coherent clusters, thereby providing a basis for intuitive and informative navigation and browsing mechanisms. Some clustering methods have to assume a certain cluster relationship among the data objects that they are applied on. Similarity between a pair of objects can be defined either explicitly or implicitly. The major difference between a traditional dissimilarity/similarity measure and ours is that the former uses only a single viewpoint, which is the origin, while the latter utilizes many different viewpoints, namely objects that are assumed not to be in the same cluster as the two objects being measured. Using multiple viewpoints, a more informative assessment of similarity could be achieved. Theoretical analysis and an empirical study are conducted to support this claim. Two criterion functions for document clustering are proposed based on this new measure. We compare them with several well-known clustering algorithms that use other popular similarity measures on various document collections to verify the advantages of our proposal.
Keywords––Document Clustering, Multi-Viewpoint Similarity Measure, Text Mining.
I. INTRODUCTION
Clustering in general is an important and useful technique that automatically organizes a collection with a
substantial number of data objects into a much smaller number of coherent groups [1]. The aim of clustering is to find intrinsic structures in data, and organize them into meaningful subgroups for further study and analysis. Many clustering algorithms are published every year. They can be proposed for very distinct research fields, and developed using totally different techniques and approaches. Nevertheless, according to a recent study [2], more than half a century after it was introduced, the simple k-means algorithm still remains one of the top 10 data mining algorithms. It is the most frequently used partitional clustering algorithm in practice. Another recent scientific discussion [3] states that k-means is the favorite algorithm that practitioners in the related fields choose to use. K-means has more than a few basic drawbacks, such as sensitivity to initialization and to cluster size, difficulty in comparing the quality of the clusters produced, and performance that can be worse than that of other state-of-the-art algorithms in many domains. In spite of that, its simplicity, understandability and scalability are the reasons for its tremendous popularity. While offering reasonable results, k-means is
fast and easy to combine with other methods in larger systems. A common approach to the clustering problem is to treat it as
an optimization process. An optimal partition is found by optimizing a particular function of similarity (or distance) among
data. Basically, there is an implicit assumption that the true intrinsic structure of data could be correctly described by the
similarity formula defined and embedded in the clustering criterion function. Hence, effectiveness of clustering algorithms
under this approach depends on the appropriateness of the similarity measure to the data at hand. For instance, the original k-means has a sum-of-squared-error objective function that uses Euclidean distance. In a very sparse and high-dimensional domain like text documents, spherical k-means, which uses cosine similarity instead of Euclidean distance as the measure, is deemed to be more suitable [4], [5]. A variety of similarity or distance measures have been proposed and widely applied, such as cosine similarity and the Jaccard correlation coefficient. Meanwhile, similarity is often conceived in terms of dissimilarity or distance [6]. Measures such as Euclidean distance and relative entropy have been applied in clustering to calculate the pair-wise distances.
The Vector-Space Model is a popular model in the information retrieval domain [7]. In this model, each element in the domain is taken to be a dimension in a vector space. A collection is represented by a vector, with components along exactly those dimensions corresponding to the elements in the collection. One advantage of this model is that we can now weight the components of the vectors, by using schemes such as TF-IDF [8]. The Cosine-Similarity Measure (CSM) defines the similarity of two document vectors d_i and d_j, sim(d_i, d_j), as the cosine of the angle between them. For unit vectors, this equals their inner product:
sim(d_i, d_j) = \cos(d_i, d_j) = d_i^t d_j   (1)
This measure has proven to be very popular for query-document and document-document similarity in text retrieval. Collaborative-filtering systems such as GroupLens [9] use a similar vector model, with each dimension being a "vote" of the user for a particular item. However, they use the Pearson Correlation Coefficient as a similarity measure, which first subtracts the average of the elements from each of the vectors before computing their cosine similarity. Formally, this similarity is given by the formula:
c(X, Y) = \frac{\sum_j (x_j - \bar{x})(y_j - \bar{y})}{\sqrt{\sum_j (x_j - \bar{x})^2 \sum_j (y_j - \bar{y})^2}}   (2)
where x_j is the value of vector X in dimension j, \bar{x} is the average value of X over the dimensions, and the summation is over all dimensions in which both X and Y are nonzero [9]. Inverse User Frequency may be used to weight the different components of the vectors. There have also been other enhancements such as default voting and case amplification
[10], which modify the values of the vectors along the various dimensions. In a provocative study, Ahlgren et al. questioned the use of Pearson's Correlation Coefficient as a similarity measure in Author Co-citation Analysis (ACA), with the argument that this measure is sensitive to zeros. Analytically, the addition of zeros to two variables should add to their similarity, but the authors show with empirical examples that this addition can depress the correlation coefficient between these variables. Salton's cosine is suggested as a possible alternative because this similarity measure is insensitive to the addition of zeros [7]. In a reaction, White defended the continued use of the Pearson correlation in ACA with the pragmatic
argument that the differences between using different similarity measures can be neglected in the research practice. He
illustrated this with dendrograms and mappings using Ahlgren et al.’s own data. Bensman contributed to the discussion with
a letter in which he argued for using Pearson’s r for additional reasons. Unlike the cosine, Pearson’s r is embedded in
multivariate statistics and because of the normalization implied this measure allows for negative values. The problem with
the zeros can be solved by applying a logarithmic transformation to the data. In his opinion, this transformation is anyhow
advisable in the case of a bivariate normal distribution. Leydesdorff & Zaal experimented with comparing results of using
various similarity criteria—among which the cosine and the correlation coefficient—and different clustering algorithms for
the mapping. Indeed, the differences between using the Pearson’s r or the cosine were also minimal in our case. However,
our study was mainly triggered by concern about the use of single linkage clustering in the ISI’s World Atlas of Science
[11]. The choice for this algorithm had been made by the ISI for technical reasons given the computational limitations of that
time. The differences between using Pearson’s Correlation Coefficient and Salton’s cosine are marginal in practice because
the correlation measure can also be considered as a cosine between normalized vectors [12]. The normalization is sensitive
to the zeros, but as noted this can be repaired by the logarithmic transformation. More generally, however, it remains most
worrisome that one has such a wealth of both similarity criteria (e.g., Euclidean distances, the Jaccard index, etc.) and
clustering algorithms (e.g., single linkage, average linkage, Ward's method, etc.) available that one is able to generate almost any representation from a set of data [13]. The problem of how to estimate the number of clusters, factors, groups, dimensions, etc. is a pervasive one in multivariate analysis. In cluster analysis and multidimensional scaling, decisions
based upon visual inspection of the results are common.
The following Table 1 summarizes the basic notations that will be used extensively throughout this paper to
represent documents and related concepts.
TABLE 1: Notations
Notation | Description
n | number of documents
m | number of terms
c | number of classes
k | number of clusters
d | document vector, ||d|| = 1
S = {d_1, ..., d_n} | set of all the documents
S_r | set of documents in cluster r
D = \sum_{d_i \in S} d_i | composite vector of all the documents
D_r = \sum_{d_i \in S_r} d_i | composite vector of cluster r
C = D / n | centroid vector of all the documents
C_r = D_r / n_r | centroid vector of cluster r, n_r = |S_r|
II. RELATED WORK
2.1 Clustering:
Clustering can be considered the most important unsupervised learning technique; as with every other problem of this kind, it deals with finding a structure in a collection of unlabeled data. Clustering is the process of organizing objects into groups whose members are similar in some way. A cluster is therefore a collection of objects which are similar to one another and dissimilar to the objects belonging to other clusters. Clustering is generally used in Data Mining, Information Retrieval, Text Mining, Web Analysis, Marketing and Medical Diagnostics.
2.2 Document representation:
The various clustering algorithms represent each document using the well-known term frequency-inverse
document frequency (tf-idf) vector-space model (Salton, 1989). In this model, each document d is considered to be a vector
in the term-space and is represented by the vector
d_{tfidf} = (tf_1 \log(n/df_1), tf_2 \log(n/df_2), \ldots, tf_m \log(n/df_m))   (3)
where tf_i is the frequency of the i-th term (i.e., term frequency), n is the total number of documents, and df_i is the number of documents that contain the i-th term (i.e., document frequency). To account for documents of different lengths, the
length of each document vector is normalized so that it is of unit length. In the rest of the paper, we will assume that the
vector representation for each document has been weighted using tf-idf and normalized so that it is of unit length.
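As a concrete illustration of this weighting scheme, the short Python sketch below (the function name and toy documents are ours, not from the paper) builds tf-idf vectors in the style of Eq. (3) and scales each one to unit length:

```python
import math
from collections import Counter

def tfidf_unit_vectors(docs):
    """Build tf-idf document vectors as in Eq. (3) and scale each to unit length."""
    n = len(docs)
    vocab = sorted({t for d in docs for t in d.split()})
    df = Counter(t for d in docs for t in set(d.split()))   # document frequency of each term
    vectors = []
    for d in docs:
        tf = Counter(d.split())                             # term frequency in this document
        v = [tf[t] * math.log(n / df[t]) for t in vocab]
        norm = math.sqrt(sum(x * x for x in v)) or 1.0      # avoid division by zero
        vectors.append([x / norm for x in v])               # unit-length document vector
    return vocab, vectors

vocab, vecs = tfidf_unit_vectors(["data mining and clustering",
                                  "matrix algebra and vector spaces"])
```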
2.3 Similarity measures:
Two prominent ways have been proposed to compute the similarity between two documents di and dj. The
first method is based on the commonly-used (Salton, 1989) cosine function:
\cos(d_i, d_j) = d_i^t d_j / (\|d_i\| \, \|d_j\|)   (4)
Since the document vectors are of unit length, this simplifies to d_i^t d_j. The second method computes the similarity between the documents using the Euclidean distance dis(d_i, d_j) = \|d_i - d_j\|. Note that, besides the fact that one measures similarity and the other measures distance, these measures are quite similar to each other because the document vectors are of unit length.
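The close relationship between the two measures for unit-length vectors can be checked numerically. The small sketch below (vectors chosen arbitrarily for illustration) verifies the identity ||d_i - d_j||^2 = 2(1 - cos(d_i, d_j)), which is why cosine similarity and Euclidean distance induce the same ranking on unit vectors:

```python
import math

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v))   # for unit vectors the dot product is the cosine

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# For unit-length vectors, ||di - dj||^2 = 2 * (1 - cos(di, dj)).
di = [0.6, 0.8, 0.0]
dj = [0.0, 0.6, 0.8]
assert abs(euclidean(di, dj) ** 2 - 2 * (1 - cosine(di, dj))) < 1e-9
```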
III. OPTIMIZATION ALGORITHM
Our goal is to perform document clustering by optimizing the criterion functions I_R and I_V [clustering with MVSC]. To achieve this, we utilize the sequential and incremental version of k-means [14], [15], which is guaranteed to converge to a local optimum. This algorithm consists of a number of iterations: initially, k seeds are selected randomly and each document is assigned to the cluster of the closest seed based on cosine similarity; in each of the subsequent iterations, the documents are picked in random order and, for each document, a move to a new cluster takes place if such a move leads to an increase in the objective function. In particular, considering that the expression of I_V [clustering with MVSC] depends only on n_r and D_r, r = 1, ..., k, let us represent I_V in a general form:
I_V = \sum_{r=1}^{k} I_r(n_r, D_r)   (5)
Assume that, at the beginning of some iteration, a document d_i belongs to a cluster S_p that has objective value I_p(n_p, D_p). Document d_i will be moved to another cluster S_q that has objective value I_q(n_q, D_q) if the following condition is satisfied:
\Delta I_V = \{ I_p(n_p - 1, D_p - d_i) + I_q(n_q + 1, D_q + d_i) \} - \{ I_p(n_p, D_p) + I_q(n_q, D_q) \} > 0,
\quad \text{s.t. } q = \arg\max_r \{ I_r(n_r + 1, D_r + d_i) - I_r(n_r, D_r) \}   (6)
Hence, document di is moved to a new cluster that gives the largest increase in the objective function, if such an
increase exists. The composite vectors of corresponding old and new clusters are updated instantly after each move. If a
maximum number of iterations is reached or no more move is detected, the procedure is stopped. A major advantage of our
clustering functions under this optimization scheme is that they are very efficient computationally. During the optimization
process, the main computational demand is from searching for optimum clusters to move individual documents to, and
updating composite vectors as a result of such moves. If T denotes the number of iterations the algorithm takes and nz the total number of non-zero entries in all document vectors, the computational complexity required for clustering with I_R and I_V is approximately O(nz · k · T).
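A minimal sketch of this incremental optimization scheme is given below. It is not the authors' implementation: the per-cluster objective I_r is a placeholder (here the spherical k-means value ||D_r||, standing in for the paper's I_R / I_V criteria), the initial assignment is random rather than seed-based, and all names are illustrative.

```python
import random
import numpy as np

def I_r(n_r, D_r):
    # Placeholder per-cluster objective; ||D_r|| is the spherical k-means value and is
    # used here only as a stand-in for the paper's MVSC criterion functions.
    return np.linalg.norm(D_r)

def incremental_kmeans(docs, k, iters=20, seed=0):
    """Sequential/incremental k-means: docs are unit-length numpy vectors; a document
    is moved to another cluster only when the move increases the overall objective."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in docs]          # random initial assignment
    n = [labels.count(r) for r in range(k)]
    D = [sum((d for d, l in zip(docs, labels) if l == r), np.zeros_like(docs[0]))
         for r in range(k)]                            # composite vectors D_r
    for _ in range(iters):
        moved = False
        for i in rng.sample(range(len(docs)), len(docs)):   # documents in random order
            p, d = labels[i], docs[i]
            if n[p] <= 1:
                continue                               # keep clusters non-empty
            gains = [I_r(n[q] + 1, D[q] + d) - I_r(n[q], D[q]) for q in range(k)]
            gains[p] = float("-inf")                   # pick the best target q != p
            q = int(np.argmax(gains))
            delta = I_r(n[p] - 1, D[p] - d) + I_r(n[q] + 1, D[q] + d) \
                    - I_r(n[p], D[p]) - I_r(n[q], D[q])
            if delta > 0:                              # Eq. (6): move only on improvement
                D[p] -= d; D[q] += d
                n[p] -= 1; n[q] += 1
                labels[i] = q
                moved = True
        if not moved:
            break
    return labels
```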
IV. EXISTING SYSTEM
The principal definition of clustering is to arrange data objects into separate clusters such that the intra-cluster similarity as well as the inter-cluster dissimilarity is maximized. The problem formulation itself implies that some form of measurement is needed to determine such similarity or dissimilarity. There are many state-of-the-art clustering approaches that do not employ any specific form of measurement, for instance, probabilistic model-based methods [16] and non-negative matrix factorization [17]. Apart from those, Euclidean distance is one of the most popular measures. It is used in the traditional k-means algorithm. The objective of k-means is to minimize the Euclidean distance between objects of a cluster and that cluster's centroid:
\min \sum_{r=1}^{k} \sum_{d_i \in S_r} \| d_i - C_r \|^2   (7)
However, for data in a sparse and high-dimensional space, such as that in document clustering, cosine similarity is
more widely used. It is also a popular similarity score in text mining and information retrieval [18]. Cosine measure is used
in a variant of k-means called spherical k-means [4]. While k-means aims to minimize Euclidean distance, spherical k-means
intends to maximize the cosine similarity between documents in a cluster and that cluster’s centroid:
\max \sum_{r=1}^{k} \sum_{d_i \in S_r} \frac{d_i^t C_r}{\| C_r \|}   (8)
The major difference between Euclidean distance and cosine similarity, and therefore between k-means and
spherical k-means, is that the former focuses on vector magnitudes, while the latter emphasizes vector directions. Besides its direct application in spherical k-means, the cosine of document vectors is also widely used in many other document clustering methods as a core similarity measurement. The cosine similarity in Eq. (1) can be expressed in the following form without changing its meaning:
sim(d_i, d_j) = \cos(d_i - 0, d_j - 0) = (d_i - 0)^t (d_j - 0)   (9)
where 0 is the vector that represents the origin point. According to this formula, the measure takes 0 as the one and only reference point. The similarity between two documents d_i and d_j is determined with respect to the angle between the two
points when looking from the origin. To construct a new concept of similarity, it is possible to use more than just one point
of reference. We may have a more accurate assessment of how close or distant a pair of points is, if we look at them from
many different viewpoints. From a third point dh, the directions and distances to di and dj are indicated respectively by the
difference vectors (di − dh) and (dj − dh). By standing at various reference points dh to view di, dj and working on their
difference vectors, we define similarity between the two documents as:
sim(d_i, d_j)_{\, d_i, d_j \in S_r} = \frac{1}{n - n_r} \sum_{d_h \in S \setminus S_r} sim(d_i - d_h, d_j - d_h)   (10)
As described by the above equation, the similarity of two documents d_i and d_j, given that they are in the same cluster, is defined as the average of similarities measured relative to the views of all other documents outside that cluster. What is interesting is that the similarity here is defined in close relation to the clustering problem. A presumption of cluster memberships has been made prior to the measure: the two objects to be measured must be in the same cluster, while the points from which to establish this measurement must be outside the cluster. We call this proposal the Multi-Viewpoint based Similarity, or MVS. Existing systems greedily pick the next frequent itemset, which represents the next cluster, so as to minimize the overlap between the documents that contain both that itemset and some remaining itemsets. In other words, the clustering result depends on the order of picking up the itemsets, which in turn depends on the greedy heuristic. This method does not follow a sequential order of selecting clusters. Instead, we assign documents to the best cluster.
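To make Eq. (10) concrete, here is a small Python sketch of the multi-viewpoint similarity. The inner similarity of the difference vectors is taken as their dot product, which is one natural reading of the equation above; the function and variable names are ours, not the paper's.

```python
import numpy as np

def mvs_similarity(di, dj, viewpoints):
    """Multi-viewpoint similarity of Eq. (10): average, over viewpoints dh drawn from
    outside the cluster of di and dj, of the similarity of the difference vectors
    (di - dh, dj - dh). The dot product is used as the inner similarity here; the
    paper's exact inner form may differ."""
    total = 0.0
    for dh in viewpoints:                   # dh ranges over S \ S_r
        u, v = di - dh, dj - dh
        total += float(np.dot(u, v))        # viewpoint-dependent similarity term
    return total / len(viewpoints)          # the 1 / (n - n_r) factor

# Toy usage: two documents in the same cluster, two viewpoints outside it.
same_cluster = [np.array([0.6, 0.8]), np.array([0.8, 0.6])]
outside = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
score = mvs_similarity(same_cluster[0], same_cluster[1], outside)
```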
V. PROPOSED SYSTEM
The main work is to develop a novel hierarchical algorithm for document clustering which provides maximum efficiency and performance. A hierarchical clustering algorithm is based on the union of the two nearest clusters. The starting condition is realized by setting every datum as a cluster; after a few iterations, it reaches the final clusters wanted. The final category of probabilistic algorithms is focused on model matching, using probabilities as opposed to distances to decide clusters. The work is particularly focused on studying and making use of the cluster overlapping phenomenon to design cluster merging criteria, and it concentrates mainly on proposing a new way to compute the overlap rate in order to improve time efficiency and veracity. Based on the hierarchical clustering method, the Expectation-Maximization (EM) algorithm is used in the Gaussian Mixture Model to estimate the parameters and to merge the two sub-clusters whose overlap is the largest. Here, the data set is usually modeled with a fixed (to avoid overfitting) number of Gaussian distributions that are initialized randomly and whose parameters are iteratively optimized to fit the data set better. This will converge to a local optimum, so multiple runs may produce different results. In order to obtain a hard clustering, objects are often then assigned to the Gaussian distribution they most likely belong to; for soft clustering this is not necessary.
Fig.1 On Gaussian-distributed data, EM works well, since it uses Gaussians for modeling clusters
Experiments on document clustering data show that this approach can improve the efficiency of clustering and save computing time.
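A minimal sketch of this EM-based Gaussian mixture clustering is shown below. It uses scikit-learn's GaussianMixture on synthetic Gaussian data (our own toy data, not the paper's experiments); hard labels come from the most likely component and the posterior probabilities give the soft clustering.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic Gaussian-distributed data: two overlapping components in 2-D.
X = np.vstack([rng.normal([0, 0], 1.0, size=(200, 2)),
               rng.normal([3, 3], 1.0, size=(200, 2))])

gmm = GaussianMixture(n_components=2, n_init=5, random_state=0)  # fixed k, several EM runs
gmm.fit(X)
hard_labels = gmm.predict(X)          # assign each point to its most likely Gaussian
soft_labels = gmm.predict_proba(X)    # posterior responsibilities for soft clustering
```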
Fig. 2 System Architecture
Fig. 2 shows the basic system architecture. Given a data set satisfying the distribution of a mixture of Gaussians, the degree of overlap between components affects the number of clusters "perceived" by a human operator or detected by a clustering algorithm. In other words, there may be a significant difference between intuitively defined clusters and the true clusters of the mixture.
VI. CONCLUSION
The key contribution of this paper is the fundamental concept of similarity measure from multiple viewpoints. Theoretical analysis shows that the Multi-Viewpoint based Similarity measure (MVS) is potentially more suitable for text documents than the popular cosine similarity measure. Future methods could make use of the same principle but define alternative forms for the relative similarity, or combine the relative similarities from the different viewpoints by methods other than averaging. In future, it would also be possible to apply the proposed criterion functions to hierarchical clustering algorithms. It would be interesting to explore how they work on other types of sparse and high-dimensional data.
REFERENCES
1. A. K. Jain, M. N. Murty, P. J. Flynn, "Data Clustering: A Review", ACM Computing Surveys, Vol. 31, No. 3, pp. 265-321, Sept. 1999.
2. X. Wu, V. Kumar, J. Ross Quinlan, J. Ghosh, Q. Yang, H. Motoda, G. J. McLachlan, A. Ng, B. Liu, P. S. Yu, Z.-H. Zhou, M. Steinbach, D. J. Hand and D. Steinberg, "Top 10 algorithms in data mining," Knowledge Inf. Syst., vol. 14, no. 1, pp. 1-37, 2007.
3. I. Guyon, U. von Luxburg, and R. C. Williamson, "Clustering: Science or Art?", NIPS'09 Workshop on Clustering Theory, 2009.
4. I. Dhillon and D. Modha, "Concept decompositions for large sparse text data using clustering," Mach. Learn., vol. 42, no. 1-2, pp. 143-175, Jan 2001.
5. S. Zhong, "Efficient online spherical k-means clustering," in IEEE IJCNN, 2005, pp. 3180-3185.
6. G. Salton, Automatic Text Processing, Addison-Wesley, New York, 1989.
7. M. J. McGill, Introduction to Modern Information Retrieval, NY, 1983.
8. Gerard M. Salton and Christopher Buckley, "Term Weighting Approaches in Automatic Text Retrieval", 24(5), pp. 513-523, 1988.
9. Paul Resnick, Mitesh Suchak, John Riedl, "GroupLens: an open architecture for collaborative filtering of netnews", CSCW - ACM Conference on Computer Supported Cooperative Work, pp. 175-186, 1994.
10. John Breese, David Heckerman, Carl Kadie, "Empirical analysis of predictive algorithms for collaborative filtering", 14th Conference on Uncertainty in Artificial Intelligence, pp. 43-52, 1998.
11. Small, H., & Sweeney, E., "Clustering the Science Citation Index Using Co-Citations I: A Comparison of Methods", Scientometrics, 7, pp. 391-409, 1985.
12. Jones, W. P., & Furnas, G. W., "Pictures of Relevance: A Geometric Analysis of Similarity Measures", Journal of the American Society for Information Science, 36(6), pp. 420-442, 1987.
13. Oberski, J., "Some Statistical Aspects of Co-Citation Cluster Analysis and a Judgment by Physicist", in A. F. J. van Raan (Ed.), Handbook of Quantitative Studies of Science & Technology, pp. 431-462, 1988.
14. Y. Zhao and G. Karypis, "Empirical and theoretical comparisons of selected criterion functions for document clustering," Mach. Learn., vol. 55, no. 3, pp. 311-331, Jun 2004.
15. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd Ed., New York: John Wiley & Sons, 2001.
16. A. Banerjee, I. Dhillon, J. Ghosh, and S. Sra, "Clustering on the unit hypersphere using von Mises-Fisher distributions," J. Mach. Learn. Res., vol. 6, pp. 1345-1382, Sep 2005.
17. W. Xu, X. Liu, and Y. Gong, "Document clustering based on non-negative matrix factorization," in SIGIR, 2003, pp. 267-273.
18. C. D. Manning, P. Raghavan, and H. Schütze, An Introduction to Information Retrieval, Cambridge University Press, 2009.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September 2012) PP: 01-05
A Literature Survey on Latent Semantic Indexing
Ashwini Deshmukh1, Gayatri Hegde2
1,2 Computer Engineering Department, Mumbai University, New Panvel, India.
Abstract––The job of a web search engine is to store and retrieve web pages, using methods such as crawling and indexing. In this paper we discuss latent semantic indexing, which is an indexing technique. When a user enters a query into a search engine, the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. The index is built from the information stored with the data and the method by which the information is indexed. In this study, we give a systematic introduction to different information retrieval techniques and compare them with latent semantic indexing (LSI). A comparison table has been prepared to give a better view of the different models. A few limitations of LSI are also discussed.
Keywords––Data mining, Latent Semantic Indexing, retrieval techniques, Singular Value Decomposition.
I. INTRODUCTION
Latent semantic indexing is a retrieval technique that indexes documents using a mathematical technique called Singular Value Decomposition (SVD), which identifies patterns in an unstructured collection of text and finds relationships between them. Latent semantic indexing (LSI) has emerged as a competitive text retrieval technique. LSI is a variant of the vector space model in which a low-rank approximation to the vector space representation of the database is computed [1]. There are usually many ways to express a given concept, so the literal terms in a user's query may not match those of a relevant document. In addition, most words have multiple meanings, so terms in a user's query will literally match terms in documents that are not of interest to the user [2]. Textual documents share many similar words, and such documents can be classified depending upon similarities in terms using various techniques. The classic techniques are the Vector Space Model (VSM) and the SMART system, but due to a few disadvantages of these methods, improved techniques like LSI and PLSI are used for semantic search of documents. The first section of the paper gives an overview of different techniques for information retrieval and their comparison with LSI. The second section discusses the various applications of LSI.
II. LATENT SEMANTIC INDEXING (LSI)
Latent Semantic Indexing is an information retrieval technique that indexes documents using the mathematical technique SVD, which identifies patterns in an unstructured collection of text and finds relationships between them. A well-known method for improving the quality of similarity search in text is LSI, in which the data is transformed into a new concept space [6]. This concept space depends upon the document collection in question, since different collections would have different sets of concepts. LSI is a technique which tries to capture this hidden structure using techniques from linear algebra. The idea in LSI is to project the data into a small subspace of the original data such that the noise effects of synonymy and polysemy are removed. The advantageous effects of the conceptual representation extend to problems well beyond the text domain. LSI has emerged as a competitive text retrieval technique; it is a variant of the vector space model in which a low-rank approximation to the vector space representation of the database is computed.
2.1 Algorithm for LSI
To perform Latent Semantic Indexing on a group of documents, the following steps are performed:
1. Convert each document in the index into a vector of word occurrences. The number of dimensions the vector exists in is equal to the number of unique words in the entire document set. Most document vectors will have large empty patches, while some will be quite full.
2. Scale each vector so that every term reflects the frequency of its occurrence in context.
3. Combine these column vectors into a large term-document matrix. Rows represent terms, columns represent documents.
4. Perform SVD on the term-document matrix. This results in three matrices commonly called U, S and V. S is of particular interest: it is a diagonal matrix of singular values for the document system.
5. Set all but the k highest singular values to 0. k is a parameter that needs to be tuned based on the space. Very low values of k are very lossy and yield poor results, but very high values of k do not change the results much from a simple vector search. This makes a new matrix, S'.
6. Recombine the terms to form the original matrix (i.e., U * S' * V^t = M', where ^t signifies transpose). Break this reduced-rank term-document matrix back into column vectors and associate these with their corresponding documents.
Finally, this results in the Latent Semantic Index.
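The steps above can be summarized in a short numerical sketch (the toy matrix and names are illustrative, not from the paper), using NumPy's SVD to keep only the k largest singular values and rebuild the reduced-rank term-document matrix:

```python
import numpy as np

def lsi(term_doc_matrix, k):
    """Rank-k LSI: SVD of the term-document matrix, zero out all but the k highest
    singular values, and rebuild the reduced-rank matrix M' = U * S' * V^t."""
    U, s, Vt = np.linalg.svd(term_doc_matrix, full_matrices=False)
    s_k = np.zeros_like(s)
    s_k[:k] = s[:k]                       # keep only the k largest singular values
    M_reduced = U @ np.diag(s_k) @ Vt     # reduced-rank term-document matrix
    doc_vectors = M_reduced.T             # each row: one document in the smoothed space
    return M_reduced, doc_vectors

# Toy term-document matrix (rows = terms, columns = documents).
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
M2, docs2 = lsi(A, k=2)
```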
III. INFORMATION RETRIEVAL TECHNIQUES
There exist various information retrieval techniques.
3.1 SMART System
In the SMART system, the vector-processing model of retrieval is used to transform both the available information requests and the stored documents into vectors of the form:
D_i = (w_{i1}, w_{i2}, \ldots, w_{it})   (1)
where D_i represents a document (or query) text and w_{ik} is the weight of term T_k in document D_i. A weight of zero is used for terms that are absent from a particular document, and positive weights characterize terms actually assigned [4]. The SMART system is a fully automatic document retrieval system, capable of processing, on a 7094 computer, search requests and documents available in English, and of retrieving those documents most nearly similar to the corresponding queries. The machine programs, consisting of approximately 150,000 program steps, can be used not only for language analysis and retrieval, but also for the evaluation of search effectiveness, by processing each search request in several different ways while comparing the results obtained in each case [5]. The steps of information retrieval in the SMART system are shown in Fig 1 [11].
Figure 1. Information Retrieval SMART System [11]: establish document collection; classify document concepts; index by concepts and weights; user conveys information need; calculate similarity; relevance feedback by user.
3.2 Vector Space Model (VSM)
The representation of a set of documents as vectors in a common vector space is known as the vector space model and is fundamental to a host of information retrieval operations, ranging from scoring documents on a query to document classification and document clustering. The vector model is a mathematically based model that represents terms, documents and queries by vectors and provides a ranking [6]. VSM can be divided into three steps:
- Document indexing, where content-bearing words are extracted;
- Weighting of indexed terms to enhance retrieval of documents;
- Ranking the documents with respect to the query according to the similarity measure [14].
The vector space model is an information retrieval technique in which a word is represented as a vector. Each dimension corresponds to a contextual pattern. The similarity between two words is calculated by the cosine of the angle between their vectors [7].
3.3 Concept Indexing (CI)
In the term-matching method, the similarity between the query and the document is tested lexically [7]. Polysemy (words having multiple meanings) and synonymy (multiple words having the same meaning) are two fundamental problems in efficient information retrieval. Here we compare two techniques for conceptual indexing based on the projection of document vectors (in the least-squares sense) onto a lower-dimensional vector space: latent semantic indexing (LSI) and concept indexing (CI). CI uses concept decomposition (CD) instead of the SVD used in LSI. Concept decomposition was introduced in 2001 and consists of two steps.
First step: clustering of the documents of the term-document matrix A into k groups. The spherical k-means algorithm is a variant of the k-means algorithm which uses the fact that document vectors are of unit norm. The concept matrix is the matrix whose columns are the centroids of the groups, c_j being the centroid of the j-th group.
Second step: calculating the concept decomposition. The concept decomposition D_k of the term-document matrix A is the least-squares approximation of A on the space of concept vectors:
D_k = C_k Z   (2)
where Z is the solution of the least-squares problem, given as:
Z = (C_k^t C_k)^{-1} C_k^t A   (3)
Rows of C_k correspond to terms; columns of Z correspond to documents.
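A minimal sketch of concept decomposition under these definitions is shown below. It substitutes ordinary k-means on unit-normalized document vectors for spherical k-means and uses a least-squares solve for Z, so it is an approximation of the procedure described above rather than a reference implementation; all names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def concept_decomposition(A, k):
    """Concept decomposition D_k = C_k Z of a term-document matrix A (terms x documents).
    The columns of A are clustered (plain k-means on unit-normalized columns stands in
    for spherical k-means); C_k holds the cluster centroids as concept vectors and Z
    solves the least-squares problem min ||A - C_k Z||_F."""
    docs = A.T / np.linalg.norm(A.T, axis=1, keepdims=True)   # unit-norm document vectors
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(docs)
    C_k = np.column_stack([A[:, labels == j].mean(axis=1) for j in range(k)])  # concept matrix
    Z, *_ = np.linalg.lstsq(C_k, A, rcond=None)               # Eq. (3): least-squares solution
    return C_k, Z, C_k @ Z                                    # Eq. (2): D_k = C_k Z
```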
3.4 Probabilistic Latent Semantic Analysis (PLSA)
PLSA is a statistical technique for the analysis of two-mode and co-occurrence data. PLSA evolved from latent semantic analysis, adding a sounder probabilistic model. Compared to standard latent semantic analysis, which stems from linear algebra and downsizes the occurrence tables (usually via a singular value decomposition), probabilistic latent semantic analysis is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics. Probabilistic Latent Semantic Analysis (PLSA) is one of the most popular statistical techniques for the analysis of two-mode and co-occurrence data [8]. Considering observations in the form of co-occurrences (w, d) of words and documents, PLSA models the probability of each co-occurrence as a mixture of conditionally independent multinomial distributions:
P(w, d) = \sum_{c} P(c) P(d|c) P(w|c) = P(d) \sum_{c} P(w|c) P(c|d)   (4)
The first formulation is the symmetric formulation, where w and d are both generated from the latent class c in similar ways (using the conditional probabilities P(d|c) and P(w|c)), whereas the second formulation is the asymmetric formulation, where, for each document d, a latent class is chosen conditionally on the document according to P(c|d), and a word is then generated from that class according to P(w|c). The starting point for Probabilistic Latent Semantic Analysis is a statistical model which has been called the aspect model [13]. The aspect model is a latent variable model for co-occurrence data which associates an unobserved class variable z \in Z = \{z_1, \ldots, z_K\} with each observation. A joint probability model over D \times W is shown in Fig 2.
Figure 2: Graphical model representation of the aspect model in the asymmetric (a) and symmetric (b) parameterization
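The following toy sketch evaluates the symmetric form of Eq. (4) for a hand-made parameter set; the probability tables are invented for illustration and are not fitted by EM, and all names are ours.

```python
import numpy as np

# Symmetric PLSA formulation of Eq. (4): P(w, d) = sum_c P(c) * P(d|c) * P(w|c).
# The parameter tables below are made-up toy values, not learned parameters.
P_c = np.array([0.6, 0.4])                       # P(c)
P_d_given_c = np.array([[0.5, 0.3, 0.2],         # P(d|c): rows = classes, cols = documents
                        [0.2, 0.3, 0.5]])
P_w_given_c = np.array([[0.7, 0.2, 0.1],         # P(w|c): rows = classes, cols = words
                        [0.1, 0.3, 0.6]])

def p_wd(w, d):
    """Probability of the co-occurrence (word w, document d) under the mixture."""
    return float(np.sum(P_c * P_d_given_c[:, d] * P_w_given_c[:, w]))

print(p_wd(w=0, d=1))
```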
IV. COMPARING VARIOUS TECHNIQUES
While composing a document, different expressions are used to express the same or somewhat different data, and the use of synonyms or acronyms is very common. The vector space model regards each term as a separate dimension axis. Various studies have shown that VSM creates a hyper-dimensional space in which about 99% of the term-document matrix is empty [3]. Limitations of VSM, such as long documents being poorly represented because they have poor similarity values, search keywords having to precisely match document terms, and word substrings possibly resulting in a "false positive match", make it difficult to use in some cases; hence a more improved method for semantic search and dimension reduction, LSI, is introduced. The SMART system preprocesses the documents by tokenizing the text into words, removing common words that appear on its stop-list, and performing stemming on the remaining words to derive a set of terms. It is a UNIX-based retrieval system, hence cataloging and retrieval of software components is difficult. LSI is superior in performance to SMART in most cases [9].
Table 1: Document Table 1
D1 | Survey of text mining: clustering, classification, and retrieval
D2 | Automatic text processing: the transformation, analysis and retrieval of information by computer
D3 | Elementary linear algebra: a matrix approach
D4 | Matrix algebra & its applications: statistics and econometrics
D5 | Effective databases for text & document management
D6 | Matrices, vector spaces, and information retrieval
D7 | Matrix analysis and applied linear algebra
D8 | Topological vector spaces and algebras
Table 2: Document Table 2
D9 | Information retrieval: data structures & algorithms
D10 | Vector spaces and algebras for chemistry and physics
D11 | Classification, clustering and data analysis
D12 | Clustering of large data sets
D13 | Clustering algorithms
D14 | Document warehousing and text mining: techniques for improving business operations, marketing and sales
D15 | Data mining and knowledge discovery
Table 3: Terms
Data mining terms: text, mining, clustering, classification, retrieval, information, document, data
Linear algebra terms: linear, algebra, matrix, vector, space
Neutral terms: analysis, application, algorithm
[Figure: Projection of terms by SVD (left) and projection of terms by CD (right), showing the data mining terms and the linear algebra terms in the two-dimensional projected space.]
4.1 Results from Analysis
V. APPLICATIONS OF LSI
LSI is used in various applications. As web technology grows, new web search engines are developed to retrieve data as accurately as possible. A few areas where LSI is used are listed as follows:
1. Clustering of medical documents, since such documents include many acronyms of clinical data [3].
2. As a retrieval method for analysing broken web links; the best results are obtained by applying KLD when the cached page is available, and a co-occurrence method otherwise.
3. In the study of human memory, especially in the areas of free recall and memory search; a positive correlation was found between the semantic similarity of two words (as measured by LSA) and the probability that the words would be recalled one after another in free recall tasks.
4. To identify similar pages in Web applications: this approach is based on a process that first computes the dissimilarity between Web pages using latent semantic indexing and then groups similar pages using clustering algorithms [12].
5. To compare the content of audio: LSI is applied to the audio clip-feature vector matrix, mapping the clip content into a low-dimensional latent semantic space; the clips are compared using a document-document comparison measure based on LSI, and the LSI-based similarity is compared with the results obtained by using the standard vector space model [15].
6. To overcome the semantic problem in image retrieval based on automatic image annotation, where statistical machine translation is used to annotate images automatically by treating image annotation as the translation of image regions to words [15].
7. In bug triage systems, which are used for validation and allocation of bug reports to the most appropriate developers; an automatic bug triage system may reduce the software maintenance time and improve its quality by correct and timely assignment of new bug reports to the appropriate developers [16].
VI. LIMITATIONS OF LSI
1. It is difficult to find out similarities between terms.
2. The bag-of-words representation discards structural information.
3. A compound term is treated as two separate terms.
4. Ambiguous terms create noise in the vector space.
5. There is no way to define the optimal dimensionality of the vector space.
6. There is a high time complexity for SVD in dynamic collections.
Table 4. Comparison between various techniques
Parameters | VSM | SMART | CI | LSI | PLSI
Basic working | Represents terms and documents by vectors | Vector space model | Uses Concept Decomposition (CD) | Uses Singular Value Decomposition (SVD) | Improved LSI with Aspect Model
Size of the matrix | Hyper-dimensional space is created | Hyper-dimensional space is created | Smaller matrix for representing documents | Smaller matrix for representing documents | Small
Performance | Good with small documents | Good with small documents | Good | Good | Very good
VII. CONCLUSION
The representation of documents by LSI is economical. It is basically an extension of the vector space model, and it tries to overcome the problem of lexical matching by conceptual matching. Thus, from this study we can conclude that LSI is a very effective method of retrieving information, even from large documents. It has a few limitations; a detailed solution for those is outside the scope of this paper. The paper mainly focuses on various methods of information retrieval and their comparison. A few disadvantages of LSI can be overcome by PLSI. A detailed description of PLSI has not been included and will be taken up as future study for this paper.
REFERENCES
1. Jing Gao, Jun Zhang, "Clustered SVD strategies in latent semantic indexing", USA, Oct 2004.
2. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, "Indexing by Latent Semantic Analysis".
3. Choonghyun Han, Jinwook Choi, "Effect of Latent Semantic Indexing for Clustering Clinical Documents", IEEE, 2010.
4. Gerard Salton and Chris Buckley, "Term-weighting approaches in automatic text retrieval", Information Processing and Management, 24(5), pp. 513-523, 1988.
5. G. Salton, "The SMART System - Retrieval Results and Future Plans".
6. Silva, I. R.; Souza, J. N.; Santos, K. S., "Dependence among terms in vector space model", IDEAS '04 Proceedings, International, 2004.
7. Jasminka Dobša, "Comparison of information retrieval techniques: Latent semantic indexing (LSI) and Concept indexing (CI)", Varaždin.
8. Chuntao Hong; Wenguang Chen; Weimin Zheng; Jiulong Shan; Yurong Chen; Yimin Zhang, "Parallelization and Characterization of Probabilistic Latent Semantic Analysis", Parallel Processing, ICPP '08, 37th International Conference, 2008.
9. Reea Moldovan, Radu Ioan Bot, Gert Wanka, "Latent Semantic Indexing for Patent Documents".
10. Diana Inkpen, "Information Retrieval on the Internet".
11. Eric Thul, "The SMART Retrieval System", Paper Presentation & Demonstration Session, 2005.
12. Thomas Hofmann, "Probabilistic Latent Semantic Analysis", UAI'99, Stockholm.
13. John Blitzer, Amir Globerson, Fernando Pereira, "Distributed Latent Variable Models of Lexical Co-occurrences".
14. Yan Junhu, "A Kind of Improved Vector Space Model", Dependable, Autonomic and Secure Computing, DASC '09, Eighth IEEE International Conference, 2009.
15. Biatov, K.; Khehler, J.; Schneider, D., "Audio clip content comparison using latent semantic indexing", Semantic Computing, ICSE '09, IEEE International Conference, 2009.
16. Ahsan, S. N.; Ferzund, J.; Wotawa, F., "Automatic Software Bug Triage system (BTS) based on latent semantic indexing", Software Engineering Advances, ICSEA '09, Fourth International Conference, 2009.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 5 (September2012) PP: 01-03
A Virtual Access Point Based Fast Handoff Scheme of IEEE
802.11
Jyoti Sachan 1, Anant Kr. Jaiswal 2
1,2 Computer Science & Engg., Amity University, Noida
Abstract––In fast-growing wireless technologies, IEEE 802.11 handoff is a big issue in communication. The seamless handoff problem in WLANs is a very important design issue for supporting new and demanding applications in WLANs, particularly for a user in a mobile vehicle. The entire delay time of a handoff is divided into probe, authentication, and reassociation delay times. Because the probe delay occupies most of the handoff delay time, efforts have mainly focused on reducing the probe delay to develop faster handoff schemes. In the proposed system we reduce the latency of handoff in the wireless network by using scanning techniques, preregistration and virtualization techniques that make the handoff faster. The main challenge in fast handoff is to find the suitable APs, because most of the delay in seamless handoff is due to the probe delay. We use a heuristic approach for finding the adjacent APs. After finding all the suitable APs, the preregistration process is started and the security context is transferred to all the selected APs. We also provide a solution to the inter-ESS-domain handoff problem with the help of a virtual AP and a buffering mechanism. In the virtual AP technique we use parallel processing of communication between the different APs and the MH.
Keywords––virtual, access point, buffering, handoff, probe delay, ESS domain.
I. INTRODUCTION
The IEEE 802.11 standard has enabled low-cost and effective wireless LAN (WLAN) services. It is widely believed that WLAN will become a major portion of the fourth generation cellular system (4G). Many wireless multimedia and peer-to-peer applications, such as VoIP and mobile video conferencing, are developed on wireless LANs. These real-time applications suffer from handoff latency when a mobile host roams between different wireless LANs. For IEEE 802.11 MAC operation, the "handoff" function, which occurs when a mobile host (MH) moves its association from one access point (AP) to another, is a key issue for mobile users. There are many analysis results on handoff delay time and many methods proposed to reduce the handoff latency. These divided the complete handoff process into two distinct logical steps, discovery and re-authentication, and classified the entire handoff latency into three delays: probe delay, authentication delay, and re-association delay. The handoff procedure is given in fig (a). The probe delay occupies a very large proportion of the whole handoff latency; hence how to reduce the probe delay is our main objective. In the IEEE 802.11 standard, the scan function is used in the discovery phase to help an MH find the potential APs to re-associate with, and a full scan probing eleven channels is performed when the MH enters the handoff process. Improving the scan operation can therefore significantly reduce the handoff latency. In order to decrease the handoff latency, many handoff schemes addressing the different kinds of delay have been proposed. The related research results are briefly described with regard to two parts: the discovery phase, including the probe delay, and the re-authentication phase, including the authentication delay and re-association delay.
Fig (a)
The existing handoff schemes can only resolve the problems of inner ESS domain handoff, but cannot effectively resolve
that of inter ESS domains handoff because the handoff between two ESS domains involves network layer operations.
In the proposed system we reduce the latency of handoff. First of all, we find the adjacent APs having good signal strength by a selective scanning technique. In the scanning technique the MH finds all APs with good signal among the available ones by using the received signal strength indicator (RSSI). After the scanning process, the MH starts the pre-registration process, in which the MH is registered with all the selected APs. If the MH is moving into another ESS domain, then the owner of the MH must provide all the authentication details to the MH, which will be used during the pre-registration process.
II. EXISTING SYSTEM
In IEEE 802.11, handoff is a three-step process, shown in fig (a).
1. Scanning: in the scanning process the Mobile Host (MH) searches for all the available active and passive APs in range.
2. Authentication: after the scanning process the MH sends authentication messages to a particular AP, and the AP sends an authentication response to the MH as confirmation of acceptance.
3. Reassociation: reassociation messages are exchanged to establish a connection at the target AP. At this point in an 802.1X BSS, the AP and the station have a connection but are not allowed to exchange data frames, as they have not yet established a key.
III. PROPOSED SYSTEM
In this paper we propose a model for IEEE 802.11 handoff. In this model we use a virtual AP and a neighbour graph, and we also provide a solution for inter-ESS-domain authentication during handoff. As noted above, the maximum share of handoff time is occupied by the probe delay. This phase of handoff is known as scanning, or searching for APs in range. We find the adjacent APs having good signal strength by a selective scanning technique, in which the MH finds all APs with good signal among the available ones by using the received signal strength indicator (RSSI). After the scanning process the MH starts the pre-registration process, in which the MH is registered with all the selected APs. The virtual AP then works between the MH and all the registered APs. All the authentication details are provided to the virtual AP before the communication starts; fig (b) illustrates the working of the proposed model.
Fig (b)
After the completion of the scanning process, the virtual AP broadcasts the authentication messages to all the selected APs and, after getting responses from the APs, sends the reassociation messages to every AP and waits for responses. Once the MH wants to access the network, it sends the request to the virtual AP, and the virtual AP sends the request to all connected APs. Another problem arises when the MH moves into another ESS domain, as explained in fig (c).
Fig (C)
IV. METHOD USED
In the virtual-access-point-based fast handoff scheme we use the following methods.
A. Scanning method
Here we use the selective scanning technique. In this technique the virtual AP finds all APs with good signal in the range of the network, using the Received Signal Strength Indicator (RSSI). It is important that the selected APs stay in range for a long time, so that the effort of scanning again and again is reduced.
B. Authentication
For authentication we need all the authentication details of the networks (ESS domains) into which we can move. Without proper authentication the virtual AP is not able to connect with any AP. Once we have all the authentication details, the virtual AP broadcasts authentication messages to all selected APs.
C. Reassociation
Reassociation is the final step of handoff; after successful reassociation, communication can start. During the communication the virtual AP uses a buffer that stores the requests and responses. The buffer provides data while the connection is changing, and it is reinitialized automatically at the completion of each request/response cycle.
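The selective scanning step in A can be illustrated with a small Python sketch; the AP identifiers, RSSI values and the -70 dBm threshold are purely illustrative assumptions, not values from the paper.

```python
# A minimal sketch of selective scanning: keep only the candidate APs whose received
# signal strength (RSSI, in dBm) stays above a threshold, strongest first.
def select_candidate_aps(scan_results, rssi_threshold_dbm=-70):
    """Return APs with good signal strength, sorted strongest first."""
    good = [(bssid, rssi) for bssid, rssi in scan_results.items()
            if rssi >= rssi_threshold_dbm]
    return sorted(good, key=lambda item: item[1], reverse=True)

scan_results = {"AP-1": -52, "AP-2": -81, "AP-3": -66}     # hypothetical scan output
candidates = select_candidate_aps(scan_results)            # pre-registration targets
```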
V. CONCLUSIONS
In this paper, we proposed a model for fast handoff in IEEE 802.11. With the help of this model we can reduce the handoff time. We used a virtual access point in the model; it helps communication proceed properly without interruption.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 1(August 2012) PP: 11-15
Study of Fluoride Concentration in the River (Godavari) and
Groundwater of Nanded City
Review Dr. Nazneen Sadat
Asstt. Professor, Deccan College of Engineering and Technology, Hyderabad
Abstract––A study is conducted to assess the fluoride concentration in the River (Godavari) and groundwater of Nanded City, in Maharashtra, which is a prominent city of the Marathwada Region. Fluoride is responsible for many diseases, ranging from simple dental caries reduction to kidney failure and death. In the city, fluoride was found to be in the range of 0.43 mg/L to 2.0 mg/L, while in river water it was found to be in the range of 0.23-1.76 mg/L. It is thereby established that the river, groundwater and even municipal tap water in Nanded cross the permissible limits of fluoride contamination in some seasons.
Keywords––Fluoride, River, Groundwater, Water Pollution, Nanded City, Osteosclerosis, Fluorosis
I. INTRODUCTION
Fluoride, a naturally occurring element, exists in combination with other elements as fluoride compounds and is
found as a constituent of minerals in rocks and soil. When water passes through and over the soil and rock formations
containing fluoride, it dissolves these compounds, resulting in the small amounts of soluble fluoride present in virtually all
water sources.
Fluoride is extensively distributed in nature and enters the human body through both drinking water, that is, river
and groundwater. Fluoride concentration in groundwater fluctuates from more than 1 to 25 mg or more per liter (1). The
levels of fluoride much higher than 25 mg/L have been reported from India, Kenya, and South Africa. (2)
The Indian Council of Medical Research (The Indian Council of Medical Research, 1975) has given the highest
desirable limit of fluoride as 1.0 mg/L and the maximum permissible limit as 1.5 mg/L. The Bureau of Indian Standards has
recommended the limit of 1.5 mg/L. (3).
Dental caries reduction, mottled enamel, osteosclerosis, crippling fluorosis and kidney changes are among the conditions that may be caused by fluoride at different concentrations. When fluoride is ingested, approximately 93% of it is absorbed into the bloodstream. A good part of the material is excreted, but the rest is deposited in the bones and teeth (4) and is capable of causing crippling skeletal fluorosis. This is a condition that can damage the musculoskeletal and nervous systems and result in muscle wasting, limited joint motion, spine deformities, and calcification of the ligaments, as well as neurological deficits (5).
Not only human beings but also animals can be affected by high consumption of fluoride. Excess concentration affects animal breeding and causes mottled teeth in young animals. At very high concentrations of fluoride, plants are prevented from accumulating chlorophyll 'a' and 'b' and photochlorophyll. Fluoride has long been known to undermine fertility in animals and man.
Fluoride in drinking water cannot be detected by taste, sight or smell. Testing is the only way to determine the
fluoride concentration.
II. SOURCES OF FLUORIDE
The main sources of fluoride in natural waters are fluorite (CaF2), fluorapatite [3Ca3(PO4)2·CaF2], cryolite (Na3AlF6), magnesium fluoride (MgF2), and fluoride replacing ions in the crystal lattice of micas and many other minerals.
As rainwater percolates through the soils, it comes in contact with the rocks and minerals present in the aquifer
materials. Due to the presence of acid in the soils, dissolution of fluoride from the country rocks occurs.
III. PERMISSIBLE LIMITS OF FLUORIDE IN DRINKING WATER IN THE INDIAN CONTEXT
The maximum level of fluoride which the body may tolerate is 1.5 parts per million (ppm). This is often based on the fluoride content of water. The practice of basing the maximum permissible level on fluoride in drinking water is a concept borrowed from the West.
This is not a practical proposition in India, for the simple reason that food items, mainly agricultural crops, are also heavily contaminated with fluoride. Besides, cosmetics like toothpastes and certain drugs also contain fluoride. Keeping in view the various sources through which fluoride finds entry into the body, 1.5 ppm of fluoride in water is considered to be on the higher side; in India, the limit refers to the total daily intake irrespective of the source(s). However, the maximum permissible limit of fluoride in drinking water should not exceed 1.0 ppm, and the lower the better (6).
IV. EFFECTS OF INTAKE OF FLUORIDE
Although fluoride is not actively transported into animal cells, it is second only to iron in concentration in the human body, and thus should not be classified as a micro-trace element for humans. The total fluoride found in a typical adult human body is approximately 2.6 g, which puts it above the concentration range for micro-trace elements.
As defined in the medical dictionary (7), fluorosis is a hazardous condition resulting from ingestion of excessive amounts of fluorine. It is an endemic problem in most of the developing countries. Fluorosis caused by high intake of fluoride has been prevalent in India for the last six decades. Fluoride is highly toxic to humans and animals when consumed in more than the prescribed quantities.
Fluorosis is known to take three forms namely Skeletal fluorosis, Dental fluorosis and Non-skeletal fluorosis. The
Crippling malady of fluorosis not only affects the bones and teeth but every tissue and organ of the body leading to death
after prolonged illness. Fluoride can also damage a foetus if the pregnant woman consumes water or food with a high concentration of fluorides. Ingestion of a high fluoride content during breast-feeding can cause infant mortality due to calcification of blood vessels.
When fluoride is naturally present in drinking water, the concentration should not exceed 1.0 mg/L. The Indian Council of Medical Research (8) has given the highest desirable limit of fluoride as 1.0 mg/L and the maximum permissible limit as 1.5 mg/L. The Bureau of Indian Standards has recommended the limit of 1.5 mg/L (3). Manufacturers of products for internal consumption generally restrict the F- concentration of water to about 1.0 mg/L. The effect of F- on livestock is the same as in human beings, and the concentration should not exceed 2.0 mg/L, since excess concentrations affect animal breeding and cause mottled teeth in young animals. At low concentrations fluoride is not significant in irrigation water, but higher concentrations prevent plants from accumulating chlorophyll 'a' and 'b' and protochlorophyll.
4.1 Effects of excess fluoride on the human body (9)
F- (mg/L)    Physiological effect
1.0          Dental caries reduction
2.0          Mottled enamel
5.0          Osteosclerosis
8.0          Osteosclerosis
20-80        Crippling fluorosis
125          Kidney changes
2500         Death
V. REASONS FOR DRINKING WATER CONTAMINATION WITH FLUORIDE
The earth's crust is known to contain an abundance of fluoride owing to the presence of the following:
1) High-calcium granite: 520 ppm of fluoride
2) Low-calcium granite: 850 ppm of fluoride
3) Alkaline rocks: 1200-8500 ppm of fluoride
4) Shale: 740 ppm of fluoride
5) Sandstone: 270 ppm of fluoride
6) Deep sea clays: 1300 ppm of fluoride
7) Deep sea carbonates: 540 ppm of fluoride (6)
India is considered to be one of the richest countries in the world in fluoride-bearing minerals. The fluoride-bearing minerals are broadly classified as:
A) Fluorides
B) Phosphates
C) Silicates and
D) Mica groups (10)
Depending upon the depth at which water pockets are located and the chemical composition of the earth's crust in that zone, the water may or may not be contaminated. In Maharashtra, 10 of the 30 districts are problem districts with high fluoride contamination of drinking water.
VI. MATERIALS AND METHODS
The fluoride content of the groundwater and river water samples was analysed by spectrophotometer in the laboratory. Standards containing 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0 and 9.0 ml of fluoride standard solution were taken in 50 ml Nessler's tubes and distilled water was added up to the mark. 5 ml of zirconyl chloride (ZrOCl2·8H2O) and 5 ml of SPADNS solution were added to each standard. A standard curve of absorbance against concentration was prepared by running these solutions on the spectrophotometer.
To 50 ml of sample, 5 ml of zirconyl chloride and 5 ml of SPADNS solution were added (the same procedure as for the standards). The readings were taken on the spectrophotometer at 570 nm, and the concentration of fluoride was calculated by comparing the absorbance with the standard curve (11).
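As an illustration of this calibration step, the short sketch below (not part of the original study; the absorbance values are hypothetical placeholders) fits a linear standard curve of absorbance against fluoride concentration and inverts it to estimate a sample's concentration.

```python
# A minimal sketch of the standard-curve calculation used in the spectrophotometric
# method: fit absorbance against known fluoride concentrations, then read sample
# concentrations off the fitted line. All numbers below are hypothetical.
import numpy as np

# Known fluoride concentrations of the standards (mg/L) and their absorbances at 570 nm
std_conc = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8])              # hypothetical
std_abs = np.array([0.59, 0.55, 0.51, 0.47, 0.43, 0.39, 0.35, 0.31, 0.27])      # hypothetical

# Least-squares fit: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_abs, deg=1)

def fluoride_concentration(sample_absorbance: float) -> float:
    """Invert the standard curve to estimate fluoride concentration (mg/L)."""
    return (sample_absorbance - intercept) / slope

print(round(fluoride_concentration(0.45), 2))  # e.g. a sample read at 570 nm
```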
VII. DISCUSSION
Gupta and Gupta (12) studied the physico-chemical and biological characteristics of drinking water in Satna (M.P.). They found the fluoride values within the permissible limit in river water: fluoride was 0.03 mg/L at one station and 0.08 mg/L at another. Amalan et al. (13) recorded fluoride in the range of 0.36 to 3.5 mg/L in surface water at an industrial area.
Dahiya et al. (14) studied the prevalence of fluorosis among children in district Bhiwani and noted fluoride in the range of 0.8 to 8.6 mg/L in the ground water of that area.
Bhosle et al. (15) studied the fluoride content of the Godavari river and noted a minimum fluoride concentration of 0.75 mg/L and a maximum of 1.69 mg/L. Adak and Purohit (16) noted fluoride in the range of 0.46 to 0.47 mg/L in pond water of Mandikudar, Orissa. The fluoride concentration ranged from 0.60 to 2.20 mg/L in the groundwater of Hyderabad (17).
Muthu et al. (18), in a study of an improved method of defluoridation, recorded fluoride concentrations between 0.75 and 4.05 mg/L in bore wells of thirty different villages of Natrampalli Tiruvannamalai Circle (Tamil Nadu). Das et al. (19) recorded a minimum fluoride of 0.42 mg/L and a maximum of 6.88 mg/L in the ground water of Guwahati, Assam. Gangal and Magara (20) surveyed 30 villages of Jaipur, Rajasthan; the fluoride concentration varied from 1.5 to 16 mg/L in ground water.
VIII. RESULTS
Table. No. 1 Monthly mean values of Fluoride in mg/L in river water of Nanded.
Site – I Govardhan Ghat (Up Stream)
Site – II Radha Govindh Ghat (Down Stream)
Site – III Municipal water
Table. No. 2 Monthly mean values of Fluoride in mg/L in ground water of Nanded during 2001-2002.
Table. No 3 Monthly mean values of Fluoride in mg/L in ground water of Nanded during 2002-2003.
Groundwater Station 1: CIDCO-I (Sambhaji Chowk)
Groundwater Station 2: CIDCO-II (Hedgewar Chowk)
Groundwater Station 3: HUDCO-42
Groundwater Station 4: Gadipura-19
Groundwater Station 5: Bhagirat Nagar.
The monthly mean values of fluoride during the two years (August 2001 to July 2003) from the two river water, one municipal water and five ground water sampling stations are shown in Tables 1, 2 and 3 and depicted in Figures 5-a, 5-b, 5-c and 5-d respectively.
The Govardhan Ghat (Station I) showed the range of fluoride 0.39 to 1.78 mg/L. The maximum fluoride of 1.78
mg/L was noted in April 2003 while the minimum was noted as 0.39 mg/L in August 2001.
The maximum fluoride recorded was 1.75 mg/L in the month of May 2002 at Radha Govindh Ghat (Station II) and
the minimum of 0.63 mg/L in the month of October 2001.
In municipal water the maximum fluoride recorded was 1.47 mg/L, in the month of May 2002, while the minimum noted was 0.23 mg/L.
The maximum fluoride of river water was noted as 1.78 mg/L during April 2003 at Station I and minimum fluoride
recorded was 0.39 mg/L during August 2001 at Station I.
The fluoride content of ground water from Nanded city at the different stations selected was in the range of 0.43 to 1.78 mg/L at Station I, 0.83 to 1.57 mg/L at Station II, 0.67 to 2.00 mg/L at Station III, 0.50 to 2.00 mg/L at Station IV and 0.49 to 1.53 mg/L at Station V respectively.
In the ground water of Nanded city the maximum fluoride was noted as 2.00 mg/L, in May 2003 at Station IV and in May 2002 at Station III, and the minimum fluoride recorded was 0.43 mg/L at Station I.
In river water the fluoride content was above the permissible limit. The municipal water also contained fluoride in small amounts, touching the permissible limit. Fluoride concentrations at all the ground water stations were found to be very close to the prescribed limit in other seasons and even crossed the permissible limit during the summer season.
IX. CONCLUSION
To summarise, the baseline results of the present investigation into the river (Godavari) and ground water quality of Nanded city showed that in ground water the maximum fluoride concentration recorded was 2 mg/L, from Station IV during summer, and the minimum was 0.43 mg/L, from Station I. Fluoride concentrations at all the ground water stations were found to be very close to the prescribed limit in other seasons and even crossed the permissible limit during the summer season. In river water the fluoride content was above the permissible limit. The municipal water also contained fluoride in small amounts, in the range of 0.23 to 1.47 mg/L, touching the permissible limit.
REFERENCES
1. World Health Organization, Guidelines for Drinking Water Quality, Vol. 1: Recommendations, Geneva, 1984, pp. 1-130.
2. World Health Organisation, Monograph Series No. 59: Fluorides and Human Health, Geneva, 1970, p. 364.
3. Bureau of Indian Standards, Indian Standard Drinking Water Specification, IS 10500, 1991.
4. Chemical Engineering News, January 8, 1988, p. 36.
5. New York State Coalition Opposed to Fluoridation, release 11, 1989.
6. Susheela, A.K., Prevention and Control of Fluorosis - National Technology Mission on Drinking Water; Technical Information for Training cum Awareness Camp, National Technology Mission on Drinking Water, 1989.
7. Encyclopedia and Dictionary of Medicine, Nursing and Allied Health (Medical Dictionary), 2nd ed., 1978.
8. The Indian Council of Medical Research, Special Report, New Delhi: National Institute of Science Communication, 1975. Indian Science Congress, Food, Nutrition and Environment Security: The Road Ahead, Vol. Series 44.
9. Das S., Mehta B.C., Samanta S.K., Das P.K., Srivastava S.K., Fluoride hazards in ground water of Orissa, India, Indian J. Environ. Health, 1(1), 2000, pp. 40-46.
10. Karunakaran, G., Fluoride bearing minerals in India: their geology, mineralogy and geochemistry, Proceedings of the Symposium on Fluorosis, 1974.
11. APHA-AWWA-WPCF, Standard Methods for the Examination of Water and Waste Water, Washington D.C.: American Public Health Association, 1980.
12. Gupta and Gupta, Physico-chemical and biological study of drinking water in Satna, Madhya Pradesh, India, Poll. Res., 18(4), 1999, pp. 523-525.
13. Amalan V. Stanley, Pillai K.S., Dental fluorosis in an industrial area contaminated with fluoride in drinking water, Poll. Res., 19(3), 1991, pp. 305-308.
14. Dahiya Sudhir, Amarjeet Kaur, Nalini Jain, Prevalence of fluorosis among school children in rural area, district Bhiwani - a case study, Indian J. Environ. Health, 42(2), 2000, pp. 192-195.
15. Bhosle A.B., Narkhede R.K., Balaji Rao, Patil P.M., Studies on the fluoride content of Godavari river water at Nanded, Eco. Env. and Cons., 7(3), 2001, pp. 341-344.
16. Adak Dasgupta M. and Purohit, Status of surface and ground water quality of Mandiakudar, part 1: physical chemical parameters, Poll. Res., 20(1), 2001, pp. 103-110.
17. Srinivas Ch., Ravi Shankar Piska, Venkateshwar C., Satyanaraya Rao M.S. and Ravider Reddy R., Studies on ground water quality of Hyderabad, Poll. Res., 19(2), 2000, pp. 285-289.
18. Muthu Ganesh I., Vinodhini V., Padmapriya G., Sathiyanarayana K. and Sabumon P.C., An improved method of defluoridation, Indian J. Environ. Health, 45(1), 2003, pp. 65-72.
19. Das Babulal, Talukadar Jitu, Sarma Surashree, Gohain Biren, Dutta Robin K., Das Himangshu B., Das Subhash C., Fluoride and other inorganic constituents in ground water of Guwahati, Assam, India, Current Science, 85(5), 2003, pp. 657-66.
20. Gangal R.K., Magara Y., Geochemical study of fluoride in groundwater of Sanganer Tehsil of Jaipur district of Rajasthan, Oriental Journal of Chemistry, 19(1), 2003, pp. 55-60.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 2 (September 2012) PP: 08-14
Enriching Image Labels by Auto-Annotation: Simple but
Effective and Efficient
Weilei Wang, Song Han, Ying Zhou, Yong Jia, Yong Chen and Qingxiang Cao
Roboo Inc.
Abstract––Image search is currently popular and welcome; however, the scarcity of available labels may limit the performance of most commercial search engines built on the traditional vector space model. To bridge the semantic gap between text labels and text queries, we propose one simple approach to extend existing labels using a thesaurus. Different from the naive approach where synonyms of the tokens contained in a label are simply added, we employ three metrics to filter out inappropriate candidate labels (combinations of synonyms): user log, semantic vector and context vector. The experimental results indicate that the proposed method performs impressively in terms of effectiveness and efficiency.
Keywords–– auto-annotation, label extension, effective, efficient.
I. INTRODUCTION
The number of multimedia resource search requests, for images, video or music, has been increasing on most commercial search engines. For example, based on our own data, the page views (PVs) contributed by image search occupy over 10% of the total traffic on Roboo® (http://m.roboo.com, a popular mobile search engine in China). The ratio would be much higher if we counted all non-text resource retrieval services. This is consistent with what we understand about the users of mobile search: they prefer entertainment resources during their non-office time [11]. In the remaining text, we will focus the discussion on image annotation, but what is proposed here can be applied to video or music as well.
However, there is one critical challenge posed on us regarding these multimedia resources: their labels are normally quite short compared with Web documents. One study on our own image repository indicates that the average length of image labels is 5.18 characters. Besides, the average query length for images is about 3.52 [11], which further worsens the performance of image search, since there is obviously a gap between image labels and textual queries. Since manually
annotating images is a very tedious and expensive task, image auto-annotation has become a hot research topic in recent
years.
Although many previous works have been proposed using computer vision and machine learning techniques, image annotation is still far from practical [12]. One reason is that it is still unclear how to model semantic concepts effectively and efficiently. The other reason is the lack of training data, and hence the semantic gap cannot be effectively bridged [12]. Therefore, till now, almost all known running commercial image search engines, including ours, depend on labels attached to images. The direct results of this search model are that:
• Hit misses are not avoidable due to the short labels available;
• One image can only serve very few queries. For example, the image shown in Fig 1 has the label "性感车模" (sexy auto model). Based on the current VSM-based (Vector Space Model) search model, the image will and only will be retrieved for the queries "性感", "车模", or "性感车模". Though "性感车模" normally also implies "美女" (pretty girl, one quite popular query as indicated by our log), this example image will not appear in the query results of "美女" because the label "美女" is missing.
Fig. 1. One image example with label “性感车模” (Sexy auto model)
Generally, these two aspects together construct the so-called semantic map between target resources and users’
initiatives, as shown as Path 3 in Fig 2. Ideally, we hope the query can directly be matched by some resources, like Path 3,
but we are not always so fortunate in real world. Two possible options to solve the problem:
1. Query extension (Path 1)[1,8].
2. Label extension (Path 2).
Fig. 2. The path connecting query and resource (Path 1: query extension; Path 2: label extension; Path 3: direct match)
In this paper, we choose Path 2, since it is believed to be a more "secure" approach than Path 1, based on the following considerations:
1. The extension work is normally done offline, so efficiency is not so critical a factor when we choose candidate approaches; better methods normally require more computing resources;
2. Given those zero-hit queries, we may "attach" them purposely to some resources via an appropriate extension algorithm;
3. By preparing the extended labels offline first, the online response speed is not influenced;
4. No matter how well a query extension may work, it is still built on the labels; rich labels serve as the basis for more applications;
5. Extended labels may evolve over time, being added to or removed from an image, to meet specific application requirements.
In this paper, label extension is based on a thesaurus. However, different from the naive approach, we propose three metrics to filter out inappropriate labels. In Section 2, the proposed method is explained in detail. In Section 3, experimental results are presented to demonstrate its effectiveness and efficiency. A short conclusion and future work are shared in Section 4.
II. LABEL EXTENSION
In this section, how labels are extended by us will be discussed in detail. The overall procedure is shown in Fig 3,
and each step will be discussed in the following sub-sections.
2.1 Word Segmentation (Tokenization)
In English, there is space between word and word. However, in Chinese, one word may contain one or more than
one Chinese character, and all characters appear in concatenation in a sentence. Therefore, Chinese text has to be segmented
or tokenized first to extract words before they can be analyzed or indexed [2,3]. In Fig 3, the input “Label” refers to the
original label available to an image, and “Word Segmentation” module outputs a series of tokens appearing as their relative
order in the original label string. Repeated tokens are left unchanged to keep the original semantic meaning. These tokens
play as the input for next step – Get candidate using synonyms.
2.2 Collect Candidates Using Thesaurus
Thesaurus[4,7] plays as an important role here because we have to select the candidates very carefully to keep the
original semantic meaning unchanged. Synonym is believed an ideal candidate considering that normally thesaurus is
constructed in a serious way normally. The appropriateness of synonym is believed as a solid basis for the success of our
method. Actually, thesaurus technique is widely applied in information processing and information retrieval, such as
detecting text similarity over short passages [5], Query expansion using synonymous terms [6], et al.
Fig. 3. Overall procedure of label extension proposed by us: ① word segmentation; ② get candidates using synonyms; ③ evaluation using user log; ④ evaluation using semantic vector; ⑤ evaluation using context vector; ⑥ filter candidates using some rules; ⑦ get the final score by combining the multiple evaluation scores above; ⑧ take the Top-N as result
Here, given the sequence of tokens, the replacement with synonyms is applied to each individual token separately and independently. This candidate construction procedure can be formalized as below:
1. With one label L, its corresponding token sequence as output by the word segmentation component is denoted as TS_L = {T_1, T_2, ..., T_k}, where T_i represents one token.
2. For each T_i, we get its synonyms via reference to one open thesaurus edited and maintained by [7]; all the synonyms plus T_i itself construct one set denoted as Syn(T_i).
3. By selecting one instance from each Syn(T_i) and concatenating them together, we get one candidate label. All possible candidates are denoted as CL = {CL_i = C_1 C_2 ... C_k | C_j ∈ Syn(T_j), j = 1, ..., k}, where |CL| = ∏_{i=1}^{k} |Syn(T_i)| and |Syn(T_i)| is the cardinality of the set Syn(T_i).
With this method, many more possible labels are generated. For example, assume one label "性感车模"; it is first segmented into two tokens, "性感" + "车模". By referring to the thesaurus, we notice that one of the tokens has five synonyms and the other has four, so we may have 30 candidate labels ((5+1) * (4+1) = 30), e.g. "性感模特", "美丽车模", etc. However, some labels obtained by combining candidate synonyms may be meaningless, even though each synonym token itself is acceptable. For example, one possible extension of "我的女人" (my girl), like "我的女性" (my female), may not be appropriate although "女性" (female) may be an acceptable synonym of "女人" (girl or woman). From this example, it is noticed that deciding whether one synonym alone is appropriate may not be easy or even possible. The more practical and reasonable approach is to evaluate each candidate label separately, though this results in more computing work. How this can be achieved is discussed in the following three sub-sections.
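To make the construction above concrete, the following minimal sketch (not the authors' code; the tiny thesaurus is a hypothetical stand-in for the open thesaurus cited as [7]) enumerates the candidate labels as the Cartesian product of each token's synonym set plus the token itself.

```python
# A minimal sketch of candidate-label construction: every token is replaced
# independently by itself or one of its synonyms, and the Cartesian product of
# these choices gives the candidate labels.
from itertools import product

toy_thesaurus = {          # hypothetical synonym sets, not the cited thesaurus [7]
    "性感": ["迷人", "妖娆"],
    "车模": ["模特", "车展模特"],
}

def candidate_labels(tokens):
    """Return all concatenations C1 C2 ... Ck with Cj in Syn(Tj) ∪ {Tj}."""
    choices = [[t] + toy_thesaurus.get(t, []) for t in tokens]
    return ["".join(combo) for combo in product(*choices)]

print(candidate_labels(["性感", "车模"]))
# (2+1) * (2+1) = 9 candidates, including the original label itself
```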
2.3 Evaluation Using User Log
The user log records users' queries and the corresponding frequencies; it not only reflects the interests and behaviors of users but also allows us to evaluate the appropriateness of one candidate label:
1. CL = {CL_i = C_1 C_2 ... C_k | C_j ∈ Syn(T_j), j = 1, ..., k} contains all the candidate labels output by the end of the last step, and Log = {(q_i, f_i) | i = 1, ..., N} represents the log corpus, where q_i is a query and f_i is the frequency with which it was issued;
2. Given the log data and the generated candidate labels, we filter out those not found in the log, with CL' = {CL_i | CL_i ∈ CL and CL_i ∈ Log} left. In the current version, we require a COMPLETE match when deciding whether to filter a candidate out or not. Actually, this filtering is also required by Step 4 and Step 5 as indicated in Fig 3;
3. Each CL_i ∈ CL' is assigned one score, denoted as S_Log(CL_i), with S_Log(CL_i) = f_i / Σ_j f_j.
Therefore, S_Log(CL_i) is the result of Step 3 in Fig 3. Based on our experiments, this is effective for filtering out some inappropriate candidates, and it gives a reasonable score to each accepted candidate, following the heuristic that the more frequently a candidate is queried, the more important it is. However, there is still "noise" among the remaining candidates, and further filtering is needed.
One bonus advantage of this method is that we may "create" images for some queries that originally had no matched results by appending some more labels.
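A minimal sketch of this log-based scoring is given below; it assumes the query log is available as a simple query-to-frequency mapping (the example data are hypothetical) and is not the authors' implementation.

```python
# A minimal sketch of the user-log score S_Log: keep only candidates that appear
# verbatim as logged queries and score them by their normalized query frequency.
def log_scores(candidates, query_log):
    """candidates: list of candidate labels; query_log: dict query -> frequency."""
    kept = [c for c in candidates if c in query_log]          # complete-match filter
    total = sum(query_log[c] for c in kept)
    return {c: query_log[c] / total for c in kept} if total else {}

# Hypothetical example data
query_log = {"性感模特": 120, "美丽车模": 30}
print(log_scores(["性感模特", "美丽车模", "妖娆车展模特"], query_log))
# {'性感模特': 0.8, '美丽车模': 0.2}
```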
2.4 Evaluation Using Semantic Vector
Given a set of candidate labels, it is hard, or impossible, to simply tag each of them as "right" or "wrong". For instance, "女人" (woman) has synonyms like "女性" (female) or "老婆" (wife). Though both are acceptable, it does NOT mean that they are "equal" in meaning compared to the original label "女人". In this section, we propose to rank the remaining candidates based on their semantic similarity relative to the original label.
Given two strings, like "春节愉快" and "新春快乐", they have the same meaning even though they share no common token, which results in a similarity score of 0 in the traditional vector space. To address this problem, a more reasonable as well as reliable measure of the closeness among labels is desired, and it is expected to use more comprehensive background information for reference, instead of simple term-wise occurrences. Fortunately, the Web provides a potentially rich source of data that may be utilized to address this problem, and modern search engines help us leverage the large volume of pages on the web to determine the greater context of a query in an efficient and effective manner.
Our solution to this problem is quite simple, and it is based on the intuition that queries which are similar in concept are expected to have similar contexts. For each query, we depend on the search engine to retrieve a group of related and ranked documents, from which a feature vector is abstracted. Such a feature vector contains those words that tend to co-occur with the original query, so it can be viewed as the corresponding context of the query, providing us with more semantic background information. Then, the comparison of queries becomes the comparison of their corresponding feature, or context, vectors. Actually, we found that there were similar applications of this method for determining the similarity between other objects on the Web [8,10].
Here, one candidate label, or some token contained in it, is viewed as a query and submitted to a search engine to retrieve relevant documents, from which the so-called "semantic background" knowledge can be abstracted and referred to later. The procedure is formalized as follows:
Let q represent a query; we get its feature vector (FV), denoted as FV(q), with the following sequential steps:
1. Issue the query q to a search engine; for now, let us ignore the actual algorithms of the search engine and assume
that the approach is generalizable to any engine that provides “reasonable” results;
2. Let D(q) be the set of retrieved documents. In practice, we only keep the top k documents, assuming that they contain enough information, so D(q) = {d_1, d_2, ..., d_k}.
3. Compute the TF-IDF term vector tv_i for each document d_i ∈ D(q), with each element as
tv_i(j) = tf_{i,j} × log(N / df_j)
where tf_{i,j} is the frequency of the jth term in d_i, N is the total number of documents available behind the search engine, and df_j is the total number of documents that contain the jth term. TF-IDF is widely applied in the information retrieval (IR) community and has been proved to work well in many applications, including the current approach.
4. Sum up tv_i, i = 1, ..., k, to get a single vector. Here, for the same term, its IDF, i.e. log(N / df_j), is identical in the different tv_i, so the sum of the corresponding weights can be calculated;
5. Normalize and rank the features of the vector with respect to their weight (TF-IDF), and truncate the vector to its M highest-weighted terms. What is left is the target FV(q).
After we get the feature vector for each query of interest, the measure of semantic similarity between two queries is defined
as:
Sim(q_i, q_j) = FV(q_i) · FV(q_j)
In our label extension, we evaluate the similarity between the original label L and a candidate label CL from two aspects:
1. One is the overall view, with a score denoted as ScoreAll: ScoreAll = Sim(L, CL);
2. The other is the partial view, with the corresponding score denoted as ScorePartial: ScorePartial = max_i Sim(T_i^L, T_i^CL), where T_i^L refers to the ith token in the original label L and T_i^CL refers to the ith token in the candidate label CL (the candidate construction algorithm ensures that L and CL have the same number of tokens, so the calculation here poses no problem);
3. Given a candidate label CL_i, its final score is determined by S_sv(CL_i) = max(ScoreAll, ScorePartial).
S_sv(CL_i), therefore, is the output of Step 4 in Fig 3.
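The following sketch illustrates the feature-vector construction and comparison under simplifying assumptions: the "retrieved documents" are small hypothetical token lists standing in for real search-engine results, and the TF-IDF weighting, truncation and dot product follow the steps described above rather than the authors' actual system.

```python
# A minimal sketch of the semantic-vector score: build a TF-IDF feature vector
# from the top-k documents retrieved for a query, truncate and normalize it, and
# compare two queries by the dot product of their feature vectors.
import math
from collections import Counter

def feature_vector(docs, M=10):
    """docs: list of token lists retrieved for one query; returns {term: weight}."""
    N = len(docs)
    df = Counter(t for d in docs for t in set(d))
    fv = Counter()
    for d in docs:                                   # sum the per-document TF-IDF vectors
        tf = Counter(d)
        for t, f in tf.items():
            fv[t] += f * math.log(N / df[t])
    top = dict(fv.most_common(M))                    # truncate to the M heaviest terms
    norm = math.sqrt(sum(w * w for w in top.values())) or 1.0
    return {t: w / norm for t, w in top.items()}

def sim(fv_a, fv_b):
    """Dot product of two feature vectors."""
    return sum(w * fv_b.get(t, 0.0) for t, w in fv_a.items())

docs_a = [["sexy", "car", "model", "show"], ["car", "model", "photo"]]   # hypothetical
docs_b = [["pretty", "girl", "photo"], ["girl", "model", "show"]]        # hypothetical
print(round(sim(feature_vector(docs_a), feature_vector(docs_b)), 3))
```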
2.5 Evaluation Using Context Vector
Till now, two dimensions have been introduced to measure whether one candidate label is acceptable or not: the query log and semantic information. However, it is noticed that, due to the nature of our index repository and the queries received (mostly about entertainment), the selection procedure is more or less biased. To weaken this influence, so that more serious or formal labels are not filtered out, one additional dimension called the Context Vector (CV), based on the People's Daily news corpus [9], is introduced. How a context vector is obtained and how the similarity score is determined based on context vectors are formalized as follows:
1. Given a label L, we find all the "contexts" containing L, {(T_i, L, T_j)}. In the current version, we limit the context length to 1, that is, we only abstract triples of the form {(T_1, L, T_2)};
2. Given all the contexts {(T_1, L, T_2)}, we sum up the occurrences of each token T_i appearing in the triples, ignoring whether its position is before or after L. This allows us to get the so-called context vector of L, CV(L) = {tf_1, tf_2, ..., tf_m}, where tf refers to a token's frequency and the entries are ranked in decreasing order of frequency;
3. The similarity between the original label L and a candidate label CL_i is Sim(L, CL_i) = CV(L) · CV(CL_i). For easy reference, this score is denoted as S_CV(CL_i).
So, the score by Context Vector is similar to that by Semantic Vector; both depend on a third-party corpus to gain a deeper understanding of the labels of interest. The only difference lies in how the vectors for comparison are constructed. Table 1 lists two examples where the similarity scores based on the two different vectors are given for different pairs of labels, and it indeed indicates that the two scores are complementary to some extent.
Table 1. Two examples about Semantic Vector vs. Context Vector
Example pair                               Similarity based on Semantic Vector   Similarity based on Context Vector
愉快 vs 快乐 (both refer to happiness)       0.433                                 0.00
美女 vs 尤物 (pretty girl vs. stunner)       0.00                                  0.268
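The sketch below illustrates the context-vector step under the stated assumptions (context length 1); the toy corpus is a hypothetical stand-in for the People's Daily corpus [9], and the code is not the authors' implementation.

```python
# A minimal sketch of the context-vector score: scan a pre-tokenized corpus,
# collect the immediate left/right neighbours of a label, and compare two labels
# by the dot product of their neighbour-frequency vectors.
from collections import Counter

def context_vector(label, tokenized_sentences):
    """Return CV(label): frequencies of tokens adjacent to `label` in the corpus."""
    cv = Counter()
    for sent in tokenized_sentences:
        for i, tok in enumerate(sent):
            if tok == label:
                if i > 0:
                    cv[sent[i - 1]] += 1            # left neighbour T1
                if i + 1 < len(sent):
                    cv[sent[i + 1]] += 1            # right neighbour T2
    return cv

def cv_similarity(cv_a, cv_b):
    return sum(f * cv_b.get(t, 0) for t, f in cv_a.items())

corpus = [["祝", "大家", "愉快", "地", "工作"], ["祝", "你", "快乐", "地", "生活"]]  # hypothetical
print(cv_similarity(context_vector("愉快", corpus), context_vector("快乐", corpus)))
```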
2.6 Reach the Output
Till now, we have introduced how to construct candidates given a label and how to rank the candidates based on the different scores S_Log, S_SV and S_CV. To get the final score of a candidate label, the three scores need to be combined in some manner. In the current version, a weight is assigned to each score and they are added linearly as:
Score = a1 × S_Log + a2 × S_sv + a3 × S_cv
To make them comparable and addable, each individual score is normalized before being summed. Based on observations from many experiments and manual study, a1 = 0.8, a2 = 1.3 and a3 = 1.0 are chosen, since they produce satisfactory results.
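A minimal sketch of this combination step is shown below. The weights a1 = 0.8, a2 = 1.3, a3 = 1.0 are those reported above, but the min-max normalization is an assumption, since the text only states that the individual scores are normalized before being summed.

```python
# A minimal sketch of the final linear score combination of S_Log, S_sv and S_cv.
def normalize(scores):
    """Min-max normalize a dict of candidate -> score into [0, 1] (an assumption)."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {c: (s - lo) / span for c, s in scores.items()}

def combined_score(s_log, s_sv, s_cv, a1=0.8, a2=1.3, a3=1.0):
    s_log, s_sv, s_cv = normalize(s_log), normalize(s_sv), normalize(s_cv)
    candidates = set(s_log) | set(s_sv) | set(s_cv)
    return {c: a1 * s_log.get(c, 0.0) + a2 * s_sv.get(c, 0.0) + a3 * s_cv.get(c, 0.0)
            for c in candidates}

# Hypothetical per-candidate scores from the three evaluations
s_log = {"性感模特": 0.8, "美丽车模": 0.2}
s_sv = {"性感模特": 0.6, "美丽车模": 0.4}
s_cv = {"性感模特": 0.1, "美丽车模": 0.3}
print(combined_score(s_log, s_sv, s_cv))
```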
III. EXPERIMENTAL STUDY
In our experiment, given each label, we apply the procedure shown in Fig 3 to get candidate labels, and some
summary statistics information is presented here:
Table 2. Summary information of experiments
Total # of images in test repository                                        1,957,090
Total # of different labels                                                 270,192
Ratio of image # to label #                                                 7:1
Average length of label (characters)                                        5.18
Total time to process all images                                            3,296 sec
Total # of labels extended finally (with at least one new candidate label)  1,992
Total # of images influenced, i.e. with new labels generated                1,325,816
From Table 2, we can conclude that:
1. Many images share the same labels (the ratio of image # to label # is 7:1);
2. The available labels are short;
3. The running performance is acceptable, costing less than 1 hour to process about two million images on a common PC (CPU 1.6 GHz; 2.0 GB RAM);
4. Although only 1,992 labels were actually extended, about 1.3 million images (over two thirds) were influenced, which further confirms the first point, i.e. many images have the same labels. Actually, these labels are also frequently queried, which reflects a common preference among human beings, whether image annotators or query submitters.
With the 1,992 extended labels obtained, four editors were employed to do manual checking. Finally, 1,405 labels were judged appropriate, an acceptance ratio of about 70.5% (1405/1992). Two primary causes may explain the inappropriate extensions:
1. The quality of the thesaurus relative to our application. Some synonyms may be acceptable in other fields, but not for us in a mobile search application. It may be worth the effort to build a thesaurus customized for our field in the long term;
2. Another problem is polysemy. Although we carefully evaluate the appropriateness of one candidate based on the query log, semantic vector and context vector, extra effort toward further disambiguation is still desired.
IV. CONCLUSION
Image search is a popular online service. However, the scarcity of annotations is widely known as a challenge. In this paper, a simple architecture is proposed to enrich the labels available for an image. Firstly, we construct a series of candidates for a given label based on simple rules and a thesaurus. Then, three different dimensions of information are employed to rank the candidates: user log, semantic similarity and context similarity. The experiments indicate that over 70% of the generated labels are regarded as reasonable and acceptable. Although our discussion is based on images, the proposed method can obviously be applied to other multimedia resources suffering from a similar problem, e.g. video or themes to install on mobile phones. There is still much space to improve the underlying performance of our work. For example, we admit that filtering candidates by whether they appear in the query log is somewhat too restrictive; partial containment relations or similarity between queries and labels may bring us more labels. Other information or metrics may also be introduced and included in the selection in applications.
REFERENCES
1. Mitra M, Singhal A, Buckley C.: Improving automatic query expansion. In: Proceedings of the 21st Annual International ACM SIGIR Conf. on Research and Development in Information Retrieval. Melbourne: ACM Press, 206-214 (1998).
2. Stefan Riezler, Yi Liu, Alexander Vasserman: Translating queries into snippets for improved query expansion. In: Proceedings of the 22nd International Conference on Computational Linguistics (COLING'08) (2008).
3. Jianfeng Gao, Joshua Goodman, Mingjing Li and Kai-fu Lee: Toward a unified approach to statistical language modeling for Chinese. ACM Transactions on Asian Language Information Processing (TALIP), 1(1):3-33 (2002).
4. Mei Jiaju, Zhu Yiming, Gao Yunqi et al. (eds.): 同义词词林 (Tongyici Cilin, a Chinese thesaurus). Shanghai: Shanghai Lexicographical Publishing House, 1983.
5. Hatzivassiloglou V, et al.: Detecting text similarity over short passages: exploring linguistic feature combinations via machine learning. In: Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora. College Park, Maryland, 203-212 (1999).
6. Guo, Y., Harkema, H., & Gaizauskas, R.: Sheffield University and the TREC 2004 genomics track: Query expansion using synonymous terms. In: Proceedings of TREC (2004).
7. http://blog.csdn.net/ganlantree/archive/2007/10/26/1845788.aspx
8. Kamvar, M. and Baluja, S.: Query suggestions for mobile search: understanding usage patterns. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI) (2008).
9. http://icl.pku.edu.cn/icl_groups/corpus/shengming.htm
10. Shunkai Fu, Michel C. Desmarais, Bingfeng Pi, Ying Zhou, Weilei Wang, Gang Zou, Song Han, and Xunrong Rao: Simple but effective porn query recognition by k-NN with semantic similarity measure. In: Proceedings of APWeb/WAIM, LNCS 5446, pp. 3-14 (2009).
11. Fu, S.-K., Han, S., Zhuo, J.-H., Pi, B.-F., Zhou, Y., Desmarais, M.C. and Wang, W.-L.: Large scale analysis of Chinese mobile query behavior. In: Proceedings of the 7th International Conference on Information and Knowledge Engineering, Las Vegas, USA (2009).
12. Wang, X.-J., Zhang, L., Jing, F. and Ma, W.-Y.: AnnoSearch: Image auto-annotation by search. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR), New York, USA (2006).
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 3 (September 2012) PP: 06-11
Bit Error Rate of Mobile Wimax (Phy) Under Different
Communication Channels and Modulation Technique
T. Manochandar1, R. Krithika2
1Department of Electronics and Communication Engineering, VRS College of Engineering and Technology, Villupuram-607 107, Tamil Nadu
2Department of Electronics and Communication Engineering, E.S. College of Engineering and Technology, Villupuram-605 602, Tamil Nadu
Abstract––Mobile WiMAX is a broadband wireless solution that enables the convergence of mobile and fixed broadband networks through a common wide-area broadband radio access technology and a flexible network architecture. The performance of mobile WiMAX under varying channel conditions is an interesting research topic. Most existing performance evaluations of mobile WiMAX are limited to channel conditions such as AWGN, ITU, etc. In this paper the performance of the mobile WiMAX PHY layer under SUI channel models, in addition to different data rates and modulation techniques, is analyzed. The simulation covers important performance parameters such as Bit Error Rate and Signal to Noise Ratio.
Keywords––Wimax, BER, SNR, BPSK, OFDMA
I. INTRODUCTION
IEEE 802.16e is a global broadband wireless access standard capable of delivering high data rates to fixed users as well as portable and mobile ones over long distances [1]. The mobile WiMAX air interface adopts orthogonal frequency division multiple access (OFDMA) for improved multi-path performance in non-line-of-sight (NLOS) environments. Mobile WiMAX extends the OFDM PHY layer to support terminal mobility and multiple access; the resulting technology is Scalable OFDMA. Data streams to and from individual users are multiplexed onto groups of subchannels on the downlink and uplink. By adopting a scalable PHY architecture, mobile WiMAX is able to support a wide range of bandwidths.
The performance of WiMAX (Worldwide Interoperability for Microwave Access) can be evaluated using the Stanford University Interim (SUI) channel models, which comprise a set of six channels for different terrain types [3], with different data rates, coding schemes and modulation techniques.
The mobile WiMAX standard builds on the principles of OFDM by adopting a Scalable OFDMA-based PHY layer (SOFDMA) [4]. SOFDMA supports a wide range of operating bandwidths to flexibly address the need for various spectrum allocations and application requirements.
The simulation done in this paper covers important performance parameters such as Bit Error Rate and Signal to Noise Ratio.
II. WIMAX PHYSICAL LAYER
This paper deals with the Bit Error Rate and Signal to Noise Ratio performance of the mobile WiMAX physical layer. The block diagram of the mobile WiMAX physical layer is given in Figure 1. The transfer and reception of data are done through the physical layer of WiMAX, so the uplink and the downlink of the message are handled by the physical layer. The three levels of the mobile WiMAX physical layer are:
• Bit level processing
• OFDM symbol level processing
• Digital IF processing
Each level of the WiMAX physical layer consists of certain processes for transferring data in the uplink and receiving data in the downlink, such as the encoder, decoder, symbol mapper and randomizer. Every process is done in order to improve the performance of mobile WiMAX under mobility conditions. In this paper the performance is analyzed in terms of the signal to noise ratio and the bit error rate.
Figure 1. Block diagram of the WiMAX physical layer
Table 1. Parameters of the mobile WiMAX physical layer
Parameter                            Value
FFT size                             128, 512, 1024, 2048
Channel bandwidth (MHz)              1.25, 5, 10, 20
Subcarrier frequency spacing (kHz)   10.94
Useful symbol period (µs)            91.4
Guard time                           1/32, 1/16, 1/8, 1/4
III. CHANNEL
The medium between the transmitting antenna and the receiving antenna is said to be the channel. The profile of the received signal can be obtained from that of the transmitted signal if we have a model of the medium between the two. The model of the medium is called the channel model.
Y(f) = X(f) H(f) + n(f)
where
Y(f) -> output signal
X(f) -> input signal
H(f) -> channel response
n(f) -> noise
A. Stanford University Interim (SUI) Channel Models:
This is a set of six channel models representing three terrain types and a variety of Doppler spreads, delay spreads and, mainly, line-of-sight/non-line-of-sight conditions that are typical of the continental US. The terrain types A, B and C are the same as those defined in the Erceg model [10]. The multipath fading is modeled as a tapped delay line with 3 taps with non-uniform delays. The gain associated with each tap is characterized by a Rician distribution and the maximum Doppler frequency.
In a multipath environment, the received envelope r has a Rician distribution, whose pdf is given by
pdf(r) = (r/σ²) exp(-(r² + A²)/(2σ²)) I₀(rA/σ²), 0 ≤ r ≤ ∞
where A is the amplitude of the LOS component. When A is zero, i.e. there is no LOS component, the pdf reduces to
pdf(r) = (r/σ²) exp(-r²/(2σ²)), 0 ≤ r ≤ ∞
which is known as the Rayleigh distribution.
The ratio K = A²/(2σ²) in the Rician case represents the ratio of the LOS component to the NLOS component and is called the "K-factor" or "Rician factor".
The general structure of the SUI channel model is shown in Figure 2. This structure is for Multiple Input Multiple Output (MIMO) channels and includes other configurations like Single Input Single Output (SISO) and Single Input Multiple Output (SIMO) as subsets.
Figure 2. SUI channel model (input mixing matrix, tapped delay line matrix, output mixing matrix)
Power distribution: For each tap, a set of complex zero-mean Gaussian distributed numbers is generated with a variance of 0.5 for the real and imaginary parts, so that the total average power of this distribution is 1. This yields a normalized Rayleigh distribution (equivalent to Rice with K = 0) for the magnitude of the complex coefficients. If a Rician distribution (K > 0 implied) is needed, a constant path component m is added to the Rayleigh set of coefficients. The ratio of powers between this constant part and the Rayleigh (variable) part is specified by the K-factor. For this general case, we show how to distribute the power correctly by first stating the total power P of each tap:
P = |m|² + σ²
where m is the complex constant and σ² the variance of the complex Gaussian set. Second, the ratio of powers is K = |m|²/σ².
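The short sketch below (an illustration, not the paper's MATLAB code) generates one tap's complex coefficients from the description above: a constant component m plus zero-mean complex Gaussian samples, with the total power P split according to the K-factor.

```python
# A minimal sketch of generating Rician-distributed tap coefficients with total
# power P and K-factor K, following P = |m|^2 + sigma^2 and K = |m|^2 / sigma^2.
import numpy as np

def rician_tap(num_samples, P=1.0, K=0.0, rng=np.random.default_rng(0)):
    sigma2 = P / (K + 1.0)           # variance of the scattered (Rayleigh) part
    m = np.sqrt(P * K / (K + 1.0))   # magnitude of the constant (LOS) part
    scatter = np.sqrt(sigma2 / 2.0) * (rng.standard_normal(num_samples)
                                       + 1j * rng.standard_normal(num_samples))
    return m + scatter               # K = 0 gives a plain Rayleigh tap

taps = rician_tap(100000, P=1.0, K=4.0)
print(np.mean(np.abs(taps) ** 2))    # ≈ 1.0, the specified total tap power
```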
Table 2: Terrain type and Doppler spread for the SUI channel models
Channel   Terrain Type   Doppler Spread   Delay Spread   LOS
SUI-1     C              Low              Low            High
SUI-2     C              Low              Low            High
SUI-3     B              Low              Low            Low
SUI-4     B              High             Moderate       Low
SUI-5     A              Low              High           Low
SUI-6     A              High             High           High
In the SUI channel model, the parameters for the SUI-1 and SUI-2 channel models have been tabulated in Tables 3 and 4 respectively for reference. BER performance is evaluated in these channel models. Depending on the performance parameters of the SUI channel, the performance of the WiMAX physical layer is evaluated through the performance graphs.
For a 30° antenna beamwidth, a 2.3 times smaller RMS delay spread is used compared to the RMS delay spread of an omnidirectional antenna. Consequently, the 2nd tap power is attenuated by an additional 6 dB and the 3rd tap power by an additional 12 dB (an effect of the antenna pattern; the delays remain the same). The simulation results for all six channels are evaluated. The above experiments are done by simulation in the MATLAB communication toolbox.
IV. SIMULATION RESULTS
The performance of mobile WiMAX was estimated from the BER versus SNR plot obtained using MATLAB code with BPSK modulation. The bandwidth used in the experiment was 3.5 MHz. An illustrative sketch of this kind of BER estimation is given below.
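The sketch below (Python rather than the MATLAB code used in the paper, and over a plain AWGN channel rather than the SUI models) only illustrates the general shape of such a Monte Carlo BER estimation for BPSK.

```python
# A small illustrative sketch of how a BER-vs-SNR curve of the kind shown in
# Figures 3-6 can be produced: Monte Carlo simulation of BPSK over AWGN.
import numpy as np

def bpsk_ber_awgn(snr_db, num_bits=200_000, rng=np.random.default_rng(1)):
    bits = rng.integers(0, 2, num_bits)
    symbols = 2 * bits - 1                              # BPSK mapping: 0 -> -1, 1 -> +1
    noise_std = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))
    received = symbols + noise_std * rng.standard_normal(num_bits)
    return np.mean((received > 0).astype(int) != bits)  # bit error rate

for snr in range(0, 10, 2):
    print(snr, "dB ->", bpsk_ber_awgn(snr))
```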
Figure 3. BER curve for BPSK modulation
The performance of mobile WiMAX estimated from the Bit Error Rate versus Signal to Noise Ratio plot obtained using MATLAB code with the Quadrature Phase Shift Keying modulation technique is given below in Figure 4.
Figure 4. BER curve for QPSK modulation
The performance of mobile WiMAX estimated from the BER versus SNR plot obtained using MATLAB code with 16QAM modulation is illustrated in Figure 5.
Figure 5. BER curve for 16QAM modulation
The performance of mobile WiMAX estimated from the BER versus SNR plot obtained using MATLAB code with 64QAM modulation is illustrated graphically in Figure 6.
Figure 6. BER curve for 64QAM modulation
V. CONCLUSION
In this paper, the performance of the mobile WiMAX physical layer with OFDMA under different channel conditions, assisted by Mobile IP (Internet Protocol) for mobility management, was analyzed. The analysis demonstrated that the modulation and coding rate had a greater impact on the relative performance between the different SUI channel conditions. The performance was analyzed under the SUI channel models with different modulation techniques for mobility management. It is found from the performance graphs that SUI channels 5 and 6 perform better than the conventional ones.
REFERENCES
1. Omar Arafat, K. Dimyati, A study of the physical layer of Mobile WiMAX under different communication channels & modulation techniques, IEEE, 2010.
2. IEEE Std 802.16e-2004, "Part 16: Air Interface for Fixed and Mobile Broadband Wireless Access Systems," Feb 2004.
3. Md. Zahid Hasan, Md. Ashraful Islam, Comparative Study of Different Guard Time Intervals to Improve the BER Performance of WiMAX Systems to Minimize the Effects of ISI and ICI under Adaptive Modulation Techniques over SUI-1 and AWGN Communication Channels, (IJCSIS) International Journal of Computer Science and Information Security, Vol. 6, No. 2, 2009.
4. IEEE 802.16 Broadband Wireless Access Working Group, Simulating the SUI Channel Models, Li-Chun Wang, Department of Communications Engineering, National Chiao Tung University, Hsinchu, Taiwan.
5. Mai Tran, G. Zaggaoulos and A. Nix, "Mobile WiMAX MIMO Performance Analysis: Downlink and Uplink", PIMRC 2008.
6. J.G. Andrews, A. Ghosh and R. Muhamed, "Fundamentals of WiMAX: Understanding Broadband Wireless Networks," Prentice Hall, Feb 2007.
7. Raj Jain, "Channel Models: A Tutorial", Feb 21, 2007, submitted to the WiMAX Forum.
8. M.S. Smith and C. Tappenden, "Additional enhancements to interim channel models for G2 MMDS fixed wireless applications," IEEE 802.16.3c-00/53.
9. M.S. Smith, J.E.J. Dalley, "A new methodology for deriving path loss models from cellular drive test data", Proc. AP2000 Conference, Davos, Switzerland, April 2000.
10. V. Erceg et al., "A model for the multipath delay profile of fixed wireless channels," IEEE JSAC, vol. 17, no. 3, March 1999, pp. 399-410.
11. L.J. Greenstein, V. Erceg, Y.S. Yeh, and M.V. Clark, "A new path-gain/delay-spread propagation model for digital cellular channels," IEEE Trans. Veh. Technol., vol. 46, no. 2, May 1997.
12. J.W. Porter and J.A. Thweatt, "Microwave propagation characteristics in the MMDS frequency band," ICC'2000 Conference Proceedings, pp. 1578-1582.
13. L.J. Greenstein, S. Ghassemzadeh, V. Erceg, and D.G. Michelson, "Ricean K-factors in narrowband fixed wireless channels: Theory, experiments, and statistical models," WPMC'99 Conference Proceedings, Amsterdam, September 1999.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September 2012) PP: 06-09
Natural Language Processing For Content Analysis in Social
Networking
Mr. A. A. Sattikar1, Dr. R. V. Kulkarni2
1V. P. Institute of Management Studies & Research, Sangli, Maharashtra, India
2Shahu Institute of Business Education & Research (SIBER), Kolhapur, Maharashtra, India
Abstract––After going through a review of almost twenty-five studies on the security and privacy issues of social networking, researchers observed that users have strong expectations of privacy on social networking web sites, yet the content of social networking blogs or posts is surrounded by a very high degree of abusive or defaming content. Social networking has emerged as the most important source of communication in the world, but a huge controversy has continued in full force over supervising offensive content on internet platforms. Often the abusive content is interspersed with the main content, leaving no clean boundaries between them. Therefore, it is essential to be able to identify the abusive content of posts and automatically rate the content according to the degree of abusive content in it. While most existing approaches rely on prior knowledge of website-specific templates and hand-crafted rules specific to websites for the extraction of relevant content, HTML DOM analysis and visual layout analysis approaches have sometimes been used; for higher accuracy in content extraction, the analyzing software needs to mimic a human user and understand content in natural language, similar to the way humans intuitively do, in order to eliminate noisy content. In this paper, we describe a combination of HTML DOM analysis and Natural Language Processing (NLP) techniques for rating blogs and posts with automated extraction of abusive content from them.
Keywords –– BLOG, HTML, NAIVE BAYES, NLP, POST
I. INTRODUCTION
Natural Language Processing (NLP) is both a modern computational technology and a method of investigating and evaluating claims about human language itself. Some prefer the term Computational Linguistics in order to capture this latter function, but NLP is a term that links back into the history of Artificial Intelligence (AI), the general study of cognitive function by computational processes, normally with an emphasis on the role of knowledge representations, that is to say the need for representations of our knowledge of the world in order to understand human language with computers. NLP is the use of computers to process written and spoken language for some practical, useful purpose: to translate languages, to get information from the web or from text data banks so as to answer questions, to carry on conversations with machines so as to get advice about, say, investments, and so on. These are only examples of major types of NLP, and there is also a huge range of small but interesting applications, e.g. getting a computer to decide whether one film story has been rewritten from another or not. NLP is not only about applications but also about the underlying technical methods and processes that the major tasks mentioned above divide into, such as machine learning techniques for automating the construction and adaptation of machine dictionaries and for modelling human agents' beliefs and desires. The latter task is closer to Artificial Intelligence and is an important component of NLP: if computers are to engage in realistic conversations, they must, like humans, have an internal model of those they converse with. "NLP goes by many names — text analytics, data mining,
computational linguistics — but the basic concept remains the same. NLP relates to computer systems that process human
language in terms of its meaning. Apart from common word processor operations that treat text like a sheer sequence of
symbols, NLP considers the hierarchical structure of language: many words make a phrase, many phrases make a sentence
and, ultimately, sentences convey messages. By analyzing language for its meaning, NLP systems have many other useful
roles, such as correcting grammar, converting speech to text and automatically translating text between languages. NLP can
analyze language patterns to understand text. One of the most convincing ways NLP offers valuable intelligence is by tracking sentiment — the nature of a written message (tweet, Facebook update, etc.) — and tagging that text as positive, negative or neutral. Much can be gained from sentiment analysis. Companies can target unsatisfied customers or find their competitors' unsatisfied customers, and generate leads. These examples can be called "actionable insights" and can be directly implemented into marketing, advertising and sales activities. Every day, people discuss brands many times across social media sites. Companies want a little of that to determine how their customers communicate and, more importantly, to discover important information that helps business. However, the sheer density of social conversations makes that difficult. Digital marketers and professionals are taking the help of artificial intelligence tools like Natural Language Processing (NLP) to filter the social noise and properly connect with their target customers. If such tools really detect and analyse how a customer feels about a brand, is it possible with Natural Language Processing to analyse the blogs/posts of social networking sites for detecting abusive and defaming content? One of the long-term goals of artificial intelligence is to develop programs that are capable of understanding and generating human language. A fundamental aspect of human intelligence seems to be not only the ability to use and understand natural language; its successful automation would also have an incredible impact on
the usability and effectiveness of computers themselves. Understanding natural language is much more than parsing
sentences into their individual parts of speech and looking those words up in a dictionary. Real understanding depends on
broad background knowledge about the domain of conversation and the idioms used in that domain as well as an ability to
apply general contextual knowledge to resolve the omissions and ambiguities that are a normal part of human speech.
A general text analysis process includes the following subtasks:
• Information retrieval or identification of a corpus is a preliminary step: collecting or identifying a set of textual materials, on the Web or held in a file system, database, or content management system, for analysis. Although some text analytics systems limit themselves to purely statistical methods, many others apply broader natural language processing, such as part-of-speech tagging, syntactic parsing, and other types of linguistic analysis.
• Named entity recognition is the use of statistical techniques to identify named text features: people, organizations, place names, stock ticker symbols, certain abbreviations, and so on. Disambiguation — the use of contextual clues — may be required to decide where, for instance, "Ford" refers to a former U.S. president, a vehicle manufacturer, a movie star (Glenn or Harrison), a river crossing, or some other entity.
• Recognition of pattern-identified entities: features such as telephone numbers, e-mail addresses, and quantities (with units) can be discerned through regular expression or other pattern matches (a small sketch of this kind of pattern matching follows this list).
• Coreference: identification of noun phrases and other terms that refer to the same object.
• Relationship, fact, and event extraction: identification of associations among entities and other information in text.
• Sentiment analysis involves discerning subjective material and extracting various forms of attitudinal information: sentiment, opinion, mood, and emotion. Text analytics techniques are helpful in analyzing sentiment at the entity, concept, or topic level and in distinguishing opinion holder and opinion object.
• Quantitative text analysis is a set of techniques stemming from the social sciences where either a human judge or a computer extracts semantic or grammatical relationships between words in order to find out the meaning or stylistic patterns of a casual personal text for the purpose of psychological profiling, etc.
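The small sketch below illustrates the pattern-matching bullet above with simplified regular expressions for e-mail addresses and telephone numbers; the patterns are illustrative examples, not production-grade validators, and are not taken from the paper.

```python
# A small illustrative sketch of "pattern identified entities": telephone numbers
# and e-mail addresses detected with simplified regular expressions.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def pattern_entities(text: str) -> dict:
    return {
        "emails": EMAIL_RE.findall(text),
        "phones": PHONE_RE.findall(text),
    }

sample = "Contact us at support@example.com or +91 98765 43210 for details."
print(pattern_entities(sample))
```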
II. ROLE OF NLP IN SOCIAL MEDIA
The rise of social media such as blogs and social networks has raised interest in sentiment analysis. With the
increase in reviews, ratings, recommendations and other forms of online expression, online opinion has turned into a kind of
virtual currency for businesses looking to sell their products, identify new opportunities and manage their reputations. As
businesses look to automate the process of filtering out the noise, understanding the conversations, identifying the relevant
content and actioning it properly, many are now looking to the field of sentiment analysis. If web 2.0 was all about
democratizing publishing, then the next stage of the web may well be based on democratizing data mining of all the content
that is getting published. One step towards this aim is accomplished in research, where several research teams in universities
around the world currently focus on understanding the dynamics of sentiment in e-communities through sentiment analysis.
The CyberEmotions project, for instance, recently identified the role of negative emotions in driving social network discussions. Sentiment analysis could therefore help explain why certain e-communities die or weaken away (e.g., MySpace) while others seem to grow without limits (e.g., Facebook). Social networking sites nowadays are also using such techniques to get smarter. Facebook is using natural language processing to group posts in your News Feed and link to a Page relevant to the topic being discussed. If more than one of your friends post about the same topic, and it has a Page on the social network, the posts will be grouped under a "Posted About" story, even if your friends don't explicitly tag the Page. The story is posted in the following format: "[Friend] and [x] other friends posted about [Page]", where the last part is a link to the Page in question.
But besides this tracking sentiment in the popular social networking sites and blogs has long been of need of today.
With the availability of blogs and posts via social networking, it is now possible to automate some aspects of this process. A
system can be developed that will use active machine learning technique to monitor sentiment in blogs and posts from
popular social networking sites. The proposed system has two novel aspects. Firstly, it generates an aggregated posts feed
containing diverse set of messages relevant to the concerned subject. This allows users to read the posts/blogs from their
preferred social networking sites and annotate these posts/blogs as ―positive‖ or ―negative‖ through embedded links.
Secondly, the results of this manual annotation process are used to train a supervised learner that labels a much larger set of
blogs/posts. The annotation and classification trends can be subsequently tracked online. The main motivation for applying
machine learning techniques in this context is to reduce the annotation effort required to train the system. Annotators can only
be asked to annotate approximately ten blogs/posts per day; the remaining articles are classified using a supervised
learning system trained on the smaller set of manually annotated articles. A number of machine learning techniques can be
used in the system.
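The workflow just described, in which a small number of manually annotated posts trains a supervised learner that labels the rest, can be sketched as follows. This is an illustrative assumption using scikit-learn and invented example posts; the paper itself does not prescribe a specific toolkit.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical manually annotated posts (roughly what ten-per-day annotation yields)
annotated_posts = [
    ("great service, totally recommend it", "positive"),
    ("worst update ever, nothing works", "negative"),
    ("love the new interface", "positive"),
    ("support never replied, very disappointed", "negative"),
]
# The much larger unlabelled stream collected from the aggregated feed
unlabelled_posts = [
    "the app keeps crashing after the update",
    "really happy with the quick delivery",
]

texts, labels = zip(*annotated_posts)
vectorizer = CountVectorizer()                      # bag-of-words features
classifier = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)

# Label the remaining posts with the supervised learner
predictions = classifier.predict(vectorizer.transform(unlabelled_posts))
for post, label in zip(unlabelled_posts, predictions):
    print(label, "->", post)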
In recent years, blogs have become increasingly popular and have changed the style of communications on the
Internet. Blogs allow readers to interact with bloggers by placing comments on specific blog posts. The commenting
behavior not only implies the increasing popularity of a blog post, but also represents the interactions between an author and
readers. To extract comments from blog posts is challenging. Each blog service provider has its own templates to present the
information in comments. These templates do not have a general specification about what components must be provided in a
comment or how many complete sub-blocks a comment is composed of. HTML documents are composed of various kinds
of tags carrying structure and presentation information, and text contents enwrapped by tags. The input to pattern
identification is an encoded string from an encoder. Each token in the string represents an HTML tag or a non-tag text. The
algorithm scans the tokens. When it encounters a token that is likely to be the head of a repetitive pattern (called a "rule"
hereafter), the subsequent tokens are examined to see whether any rules can be formed. Many kinds of irrelevant comments are
posted. For example, spam comments may carry advertisements with a few links, or a commenter may simply leave a
greeting. Identifying relevant comments is an important and challenging issue for correctly finding the opinion of
readers. Natural Language Processing (NLP) can be leveraged in any situation where text is involved. NLP involves a series
of steps that make text understandable (or computable). A critical step, lexical analysis, is the process of converting a
sequence of characters into a set of tokens. Subsequent steps leverage these tokens to perform entity extraction (people,
places, things), concept identification and the annotation of documents with this and other information. From the blogs/posts,
a query selection process selects a diverse set of blogs/posts for manual annotation. The remainder is classified as "positive"
or "negative" by a Bayes classifier trained on the manual annotations. A Naïve Bayes classifier is
effective for producing aggregate sentiment statistics, provided due care is taken in the training process. The
Bayesian Classifier is capable of calculating the most probable output depending on the input. It is possible to add new raw
data at runtime and have a better probabilistic classifier. A naive Bayes classifier assumes that the presence (or absence) of a
particular feature of a class is unrelated to the presence (or absence) of any other feature, given the class variable. For
example, a fruit may be considered to be an apple if it is red, round, and about 4" in diameter. Even if these features depend
on each other or upon the existence of other features, a Naive Bayes classifier considers all of these properties to
independently contribute to the probability that this fruit is an apple.
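Formally, under this independence assumption the classifier scores a class $c$ for a document with features $x_1, \dots, x_m$ as
$$P(c \mid x_1, \dots, x_m) \;\propto\; P(c)\prod_{i=1}^{m} P(x_i \mid c),$$
and assigns the label (or labels) with the highest score. This is the standard Naive Bayes formulation, stated here only to summarize the preceding paragraph.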
A typical Naive Bayes implementation, such as the Perl module Algorithm::NaiveBayes used below, exposes the following methods.
use Algorithm::NaiveBayes;
my $nb = Algorithm::NaiveBayes->new;

$nb->add_instance(
    attributes => { foo => 1, bar => 1, baz => 3 },
    label      => 'sports'
);
$nb->add_instance(
    attributes => { foo => 2, blurp => 1 },
    label      => [ 'sports', 'finance' ]
);
# ... repeat for several more instances, then:
$nb->train;

# Find results for unseen instances
my $result = $nb->predict(
    attributes => { bar => 3, blurp => 2 }
);
The methods used in the above code can be explained as follows.
new()
Creates a new Algorithm::NaiveBayes object and returns it.
add_instance( attributes => HASH, label => STRING|ARRAY )
Adds a training instance to the categorizer. The attributes parameter contains a hash reference whose keys are string
attributes and whose values are the weights of those attributes. For instance, if you're categorizing text documents, the
attributes might be the words of the document, and the weights might be the number of times each word occurs in the
document.
The label parameter can contain a single string or an array of strings, with each string representing a label for this
instance. The labels can be any arbitrary strings. To indicate that a document has no applicable labels, pass an empty array
reference.
train()
Calculates the probabilities that will be necessary for categorization using the predict() method.
predict ( attributes => HASH )
Use this method to predict the label of an unknown instance. The attributes should be of the same format as you passed to
add_instance(). predict() returns a hash reference whose keys are the names of labels, and whose values are the score for
each label. Scores are between 0 and 1, where 0 means the label doesn't seem to apply to this instance, and 1 means it does.
As with most computer systems, NLP technology lacks human-level intelligence, at least for the foreseeable future. On a
text-by-text basis, the system's conclusions may be wrong, sometimes very wrong. For instance, the tweeted phrase
"You're killing it!" may mean either "You're doing great!" or "You're a terrible gardener!" No automated sentiment
analysis that currently exists can handle that level of nuance. Furthermore, certain expressions ("ima") or abbreviations
("#ff") fool the program, especially when people have 140 characters or fewer to express their opinions, or when they use
slang, profanity, misspellings and neologisms.
Finally, much of social media interaction is personal, expressed between two people or among a group. Much of
the language reads in the first or second person ("I," "you" or "we"). This type of communication contrasts directly with news or
brand posts, which are likely written in a more detached, omniscient tone. Furthermore, each of the above social media
participants likely varies their language when they choose to post to Twitter vs. Facebook vs. Tumblr. Last but not least, the
English language and its intonation differ hugely based on the source and the forum.
III. CONCLUSION
NLP is a tool that can help protect privacy while providing insight into blogs and posts. However, it is not
meant to replace human intuition. In social media environments, NLP helps to cut through noise and extract vast amounts of
data, helping to understand customer perception and therefore to determine the most strategic response. The challenges to
producing useful applications of content analysis of blogs/posts, particularly within the context of automated analyses, are
substantial. However, the benefits of such analysis are even more substantial, including greater confidence in knowledge
and the ability to predict future outcomes. In partial pursuit of such a manifesto, this paper briefly outlines an algorithmic
proposal for analysing the content of blogs and posts in social networking using natural language processing with supplemental
user involvement.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 5 (September2012) PP: 04-09
Image Compression for Authentication & Integrity of Data Based
on Discrete Wavelet Transform
Ms. Neha Patil1, Prof. N. A. Dhawas2
1Assistant Professor, Department of Computer Engg., SCOE Khargar, Mumbai University
2Head of Department, Department of Information Technology, SIT, Lonawala, Pune University
Abstract––Encryption and message authentication codes are conventionally considered orthogonal security
mechanisms. The objective of a message authentication mechanism is to validate data authentication, whereas encryption
is used to deform the message to achieve data confidentiality. The digital signature and watermarking methods are used
for image authentication. Digital signature encodes the signature in a file separate from the original image.
Cryptographic algorithms have suggested several advantages over the traditional encryption algorithms such as high
security, speed, reasonable computational overheads and computational power. Frequency-domain techniques are mainly
used for watermarking because the characteristics of the human visual system are better captured by the spectral coefficients. The transforms are
broadly categorized into two types: (a) Discrete Cosine Transformation (DCT) and (b) Discrete Wavelet Transformation (DWT).
It is difficult to cover all the methods, so this paper concentrates on DWT. The proposed research work is the
integration of a message authentication code with encryption, where the parameterized hash value of an encrypted
image is designed to be the same as the hash value of the parent unencrypted original image. The hash value will be
computed without decrypting the original data. Therefore the authenticity can be proven without revealing the actual
information. A digital watermark and signature method for image authentication using cryptography analysis is
proposed. A digital signature is created for the original image and a watermark is applied. Images are resized before
transmission over the network. After digitally signing and watermarking an image, the encryption and decryption
process is applied to the image for authentication. Encryption is used to securely transmit data over open networks: the
image is encrypted using a public key and decrypted using the corresponding private key.
Keywords––Image Authentication, DWT, Integrity of Data.
I. OBJECTIVE
The main objective of our project work is multimedia encryption that aims at rendering the information
unintelligible under the influence of a key. Encryption aims at removing the redundancy and the correlation of the data by
scrambling it. In order to protect data privacy, images and videos need to be encrypted before being transmitted to the
destination. Because the multimedia data such as videos form large streams of data, partial encryption that encrypts only
some significant parts of the data is employed. Multimedia has some special requirements like perceptual security,
efficiency, cryptographic security, format compliance and signal processing in the encrypted domain.
II. INTRODUCTION
In order to boost data security, the system described here focuses on the marriage of two levels of
security. Multimedia encryption and multimedia authentication schemes serve two different purposes, but they can be merged
into one system to protect confidentiality and to check the authenticity of the data. This is a very challenging
problem, but integrating the two functionalities simultaneously would revolutionize the area of multimedia
distribution. A commutative watermarking and encryption scheme has been proposed that embeds the watermark into the encrypted
media directly, which avoids the decryption-watermarking-re-encryption triple. The encryption and watermarking operations
are done on the same data part. A commutative watermarking and encryption scheme is proposed for media data protection. In
the scheme, a partial encryption algorithm is adopted to encrypt the significant part of the media data, while some other part is
watermarked. It has been reported that commutative schemes based on partitioning the data are vulnerable to replacement
attacks: since one data part is not encrypted, it leads to leakage of information and is vulnerable to watermark attacks.
A digital signature is a form of cryptography, and cryptography means keeping communications private; it is mainly
concerned with converting information through encryption and decryption, so that the information cannot be accessed
without the key. A digital signature works much like a handwritten signature: like a paper signature, it is bound to a digital
certificate that is used to verify identity. Watermarking is a sub-discipline of information hiding. It is the process of
embedding information into a digital signal in a way that is difficult to remove, and it provides copyright protection for
intellectual property in digital format. Cryptography provides better mechanisms for information security; in this
analysis, public and private keys are used to recover the original information.
III. METHODOLOGIES
3.1 Digital signature and Watermarking
A digital signature is a form of cryptography. Cryptography means keeping communication private; it deals with
encryption, decryption and authentication.
3.1.1. Secret Key or Symmetric Cryptography
In this process the sender and receiver of messages have to know the same key for encryption and decryption of the image.
3.1.2. Public Key or Asymmetric Cryptography
Asymmetric cryptography involves two related keys: one is kept private (the 'private key') and the other is published (the 'public key').
3.1.3. Creation of Digital Signature
The digital signature is created by obtaining the details from the administrator; the created signature is posted to the
signature table, which is used by the certification authority. The certification authority then creates a personal
identification for the person. This is carried out using the Digital Signature Algorithm (DSA) and the Secure Hash Algorithm,
and the resulting digital signature provides personalization.
3.1.4 Digital Signature Algorithm
(i) GLOBAL PUBLIC-KEY COMPONENTS
1. p = a prime number, where 2^(L-1) < p < 2^L for 512 ≤ L ≤ 1024 and L a multiple of 64.
2. q = a prime divisor of p - 1, where 2^159 < q < 2^160.
3. g = h^((p-1)/q) mod p, where h is any integer with 1 < h < p - 1 such that h^((p-1)/q) mod p > 1 (g has order q mod p).
(ii) THE USER'S PRIVATE KEY:
x = a randomly or pseudo-randomly generated integer with 0 < x < q
(iii) USER'S PUBLIC KEY:
y = g^x mod p
(iv) USER'S PER-MESSAGE SECRET NUMBER:
k = a randomly or pseudo-randomly generated integer with 0 < k < q
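To make the relations among these quantities concrete, the following toy sketch (an illustration added here, not part of the original paper) computes g, the key pair and the per-message secret in Python; the primes are deliberately tiny, whereas real DSA requires p of 512-1024 bits and q of 160 bits as stated above.

from random import randrange

# Toy parameters: p prime with p - 1 = 22, and q = 11 a prime divisor of p - 1
p = 23
q = 11
h = 2                                # any integer with 1 < h < p - 1
g = pow(h, (p - 1) // q, p)          # g = h^((p-1)/q) mod p; here g = 4, which has order q mod p

x = randrange(1, q)                  # private key, 0 < x < q
y = pow(g, x, p)                     # public key, y = g^x mod p
k = randrange(1, q)                  # per-message secret number, 0 < k < q

print(f"g = {g}, private x = {x}, public y = {y}, per-message k = {k}")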
3.2 Watermarking Digital Signature
Digital image watermarking schemes mainly fall into two broad categories:
3.2.1 Spatial-domain techniques
The spatial-domain techniques fall into two categories, as follows:
a) Least-Significant Bit (LSB):
The given image consists of pixels, each represented by an 8-bit sequence; the watermark bits are written into the last (least
significant) bit of selected pixels of the original image. This is used to hide information so that attackers cannot easily
destroy it (a minimal sketch of this idea follows after this list).
b) SSM-Modulation-Based Technique:
In these techniques the watermarking algorithm attaches the information to the original image as a pseudo-noise signal that is modulated by the watermark.
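As a concrete illustration of the LSB idea in (a), the following minimal Python/NumPy sketch (an assumption added for illustration, not the watermarking scheme proposed in this paper) embeds and recovers a short bit string in the least-significant bits of an image array; a real scheme would select the pixels with a secret key.

import numpy as np

def embed_lsb(cover, watermark_bits):
    """Write the watermark bits into the LSBs of the first pixels (row-major order)."""
    stego = cover.flatten().copy()
    n = watermark_bits.size
    stego[:n] = (stego[:n] & 0xFE) | watermark_bits
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read the watermark back from the least-significant bits."""
    return stego.flatten()[:n_bits] & 1

image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)   # hypothetical 8-bit image
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
marked = embed_lsb(image, bits)
assert np.array_equal(extract_lsb(marked, bits.size), bits)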
3.2.2 Frequency-domain techniques
Frequency-domain techniques are mainly used for watermarking because the characteristics of the human visual system are better captured by the
spectral coefficients. The transforms are broadly categorized into two types:
(a) Discrete Cosine Transformation (DCT)
(b) Discrete Wavelet Transformation (DWT)
DWT gives accurate results for the analysis and compression of images. The wavelet transform preserves the time
information along with the frequency representation.
The MATLAB program was executed for different images and is generalized to N levels of decomposition. The program is
given below:
%% Image processing based on discrete wavelet transform (DWT) for N level decomposition %%
clear all
clc
a = imread('cameraman.tif');               %% size of image is 256*256
[NN MM] = size(a);
final = zeros(NN);                         %% required for display of the final image
a = double(a);
[row col] = size(a);
[Lo_D,Hi_D,Lo_R,Hi_R] = wfilters('db2');   %% Daubechies-4 (db2) wavelet
%% Lo_D stands for low-pass decomposition filter
%% Hi_D stands for high-pass decomposition filter
%% Lo_R stands for low-pass reconstruction filter
%% Hi_R stands for high-pass reconstruction filter
N = input('How many stages of decomposition do you want? N = ');
figure(1)
imshow(uint8(a)), title({'Discrete wavelet transform (DWT) for N level decomposition';'Original Image'})
for decomp = 1:1:N                         %% main loop for decomposition
    for n = 1:1:row
        b(n,:) = conv(a(n,:),Lo_D);        %% convolving only the rows
        c(n,:) = conv(a(n,:),Hi_D);
    end
    for x = 1:1:row
        for y = 1:1:(col/2)
            B(x,y) = b(x,2*y);             %% down-sampling the columns
            C(x,y) = c(x,2*y);
        end
    end
    %% the size of B and C is N*n/2
    [row col] = size(B);
    for n = 1:1:col
        e(:,n) = conv(B(:,n).',Lo_D).';    %% convolving only the columns
        f(:,n) = conv(B(:,n).',Hi_D).';
        g(:,n) = conv(C(:,n).',Lo_D).';
        h(:,n) = conv(C(:,n).',Hi_D).';
    end
    E = 0; F = 0; G = 0; H = 0;
    for x = 1:1:row/2
        for y = 1:1:col
            E(x,y) = e(2*x,y);             %% down-sampling the rows
            F(x,y) = f(2*x,y);
            G(x,y) = g(2*x,y);
            H(x,y) = h(2*x,y);
        end
    end
    final(1:NN,1:NN) = [E F; G H];
    %% the process can continue for further decomposition; plot the result
    no = decomp + 1;                       %% required to assign figure(no)
    figure(no)
    imshow(uint8(final)), title(['DWT decomposition image at ',int2str(no-1),' level'])
    [row col extra] = size(E);
    a = E;
    NN = NN/2;
    b = zeros(row,col+3);
    c = zeros(row,col+3);
    B = zeros(row,col/2);
    C = zeros(row,col/2);
    e = zeros(row+3,col/2);
    f = zeros(row+3,col/2);
    g = zeros(row+3,col/2);
    h = zeros(row+3,col/2);
end
After execution of the program, four images are displayed for N = 3: the original image followed by the DWT decomposition at each level.
In the above process, the 2-D discrete wavelet transform divides the image into three detail sub-images, which are used for
watermarking the host image.
3.2.3 Authentication using image verification
This is done by an authentication verifier: initially the verifier logs in with the person's date of birth and a document
bearing a signature, such as a PAN card, passport or driving licence, and extracts the signature. These details are submitted
for the verification process. In this way image verification is performed and the detailed information of the watermarked
image can be known.
IV. CRYPTOGRAPHY
An encryption system is also called a cipher or a cryptosystem. The unencrypted message is the plaintext and the encrypted
message is the ciphertext; denote the plaintext and the ciphertext by P and C, respectively. The encryption procedure of a cipher can be described as C = EKe
(P), where Ke is the encryption key and E is the encryption function. Similarly, the decryption procedure is P = DKd (C),
where Kd is the decryption key and D is the decryption function. For public-key ciphers, the encryption key Ke is published,
and the decryption key Kd is kept private, for which no additional secret channel is needed for key transfer.
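As a toy illustration of the notation C = EKe(P) and P = DKd(C), the sketch below uses unpadded textbook RSA with tiny numbers; the paper does not prescribe a particular public-key cipher, and a real system would use proper key sizes and padding.

# Textbook-RSA illustration of C = E_Ke(P) and P = D_Kd(C); the numbers are far
# too small for real use and no padding is applied.
p, q = 61, 53
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
Ke = 17                       # public (encryption) exponent
Kd = pow(Ke, -1, phi)         # private (decryption) exponent, the modular inverse of Ke

P = 65                        # a "plaintext" value, e.g. one pixel
C = pow(P, Ke, n)             # C = E_Ke(P)
assert pow(C, Kd, n) == P     # P = D_Kd(C)
print(f"n={n}, Ke={Ke}, Kd={Kd}, P={P}, C={C}")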
4.1 Encryption for image
The given picture shows the encryption for image:
4.2 Decryption for image
The given picture shows the decryption for image
V. CONCLUSION
In the DWT method, calculating wavelet coefficients at every pixel generates a large amount of data; to reduce it, a subset of
pixels can be used for analysis while still obtaining accurate readings. In wavelet decomposition the signal is separated into slow and fast components
using a pair of Finite Impulse Response (FIR) filters.
Digital signature and watermarking are two techniques used for copyright protection and authentication, respectively. In this
paper, digital signature and watermark methods are combined with cryptographic analysis for image security. Experiments
show that our scheme is robust to reasonable compression rates while preserving good image quality, and is capable of
authentication.
AUTHORS
Ms. Neha Patil is pursuing an M.E. in Computer Engineering from Sinhgad Institute of Technology, Pune
University, and is working as an assistant professor in Saraswati College of Engineering, Kharaghar. Her areas of interest include
Network Security, Image Processing, Artificial Intelligence and Neural Networks.
Prof. N. A. Dhawas is the Head of the IT Department at Sinhgad Institute of Technology, Lonawala, with
15 years of experience in the field of teaching. His research interests include Mobile Computing, Image Processing, and Applied
Algorithms.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 1(August 2012) PP: 16-23
Application of Geo-Grid Reinforcement Techniques for
Improving Waste Dump Stability in Surface Coal Mines:
Numerical Modeling and Physical Modeling
I L Muthreja1, Dr R R Yerpude2, Dr J L Jethwa3
1,2Associate Professor in Mining Engineering, Visvesvaraya National Institute of Technology, Nagpur, India
3Former Director-level Scientist and Scientist-in-charge, Central Mining Research Institute, Nagpur Regional Centre, presently a Tunneling Consultant
Abstract––The increasing demand for coal production in India can only be met through mechanised surface
mining. Reserves at shallow depths are depleting, and surface coal mining has to go deeper day by day. This
will result in an increase in the volume of waste rock and in the dumping area required. The problem of land
acquisition, along with stringent environmental laws, will compel coal companies to build higher waste
dumps. Higher waste dumps can only be planned and constructed by using earth reinforcement techniques such as
geogrids. As geogrids have not been used in waste dumps of Indian coal mines, numerical models and physical models
are developed to study the role of geogrids in waste dump stability. This paper discusses the comparison of dump
stability analysis results of numerical modeling and results of physical modeling.
Keywords––Factor of safety, Geogrids, Waste dumps
I. INTRODUCTION
Coal is the major source of energy in India and, at the moment, more than 55% of principal energy consumption is
being met by this natural resource. The proven geological reserve of coal in the country makes it the most promising source to
sustain the growing demand for energy arising from the economic development of the country. Even after the advent of nuclear
energy, the dominance of coal is likely to continue for various techno-economic and safety reasons. This tremendous
increase in coal production can only be achieved through mechanised surface mining with increased waste rock
handling. Reserves at shallow depths are depleting fast, and surface coal mining will go deeper in the coming days. This will
result in an increase in the volume of waste rock, which will need more dumping area. Owing to the problem of land acquisition along
with stringent environmental laws, coal companies are bound to have higher dump yards.
Waste dump management is a very complex issue which depends on numerous factors. Some factors are
within the control of the planners and mine operators, but certain factors are not. Land constraints as well as increases in
stripping ratio will force the planners and operators to increase the heights of existing dumps. Thus, it will be
essential in future to have large waste dumps with steeper slopes in opencast coal mines for ecological and economic
reasons. But higher dumps with steeper slopes are associated with increased instability and necessarily require
better waste dump stability management.
Steepened slopes with higher dumps have become increasingly advantageous due to better land usage and lower
site development costs. The proven concept of tensile reinforcement allows construction of dumps with improved stability
compared with present practice, in which dump stability is often poor. Dumps reinforced with geo-synthetics/geo-grids can
reduce land requirement substantially, provide a natural appearance, and improve dump stability and safety.
The application of geogrids has been modeled numerically using FLACSLOPE software developed by M/s
ITASCA, USA. To compare these results, physical modeling has been done in the laboratory and experiments were
performed.
II. GEOGRIDS
For millennia, civilizations have sought effective methods to construct stable soil retaining structures. The
technique of reinforced soil is not a new concept. In a modern context, reinforced soil has become a viable and cost-effective
technique only over the last 30 years. This was brought about by the development of geogrid reinforcements engineered
from strong and durable polymers. These geogrid reinforcements enable substantial tensile loads to be supported at defined
deformations over extended design lives.
Reinforced soil is the technique where tensile elements are placed in the soil to improve stability and control
deformation. To be effective, the reinforcements must intersect potential failure surfaces in the soil mass. Fig. 1 illustrates
the principle of geogrids application. Strains in the soil mass generate strains in the reinforcements, which in turn, generate
tensile loads in the reinforcements. These tensile loads act to restrict soil movements and thus impart additional shear
strength. This results in the composite soil/reinforcement system having significantly greater shear strength than the soil
mass alone.
Reinforced soil using geogrids is a very effective technique. The major benefits of reinforced soil are:
• The inclusion of geogrid in soil improves the shear resistance of the soil, thereby improving its structural capability.
• Land acquisition can be kept to a minimum because reinforced structures can be made steeper and higher than would otherwise be possible.
Fig 1- Geogrid reinforcement of slope
Geogrid reinforcements are manufactured from highly durable polymers that can maintain strength, stiffness,
and soil interaction over extended design lives. Typical design lives may range from 1 to 2 years for a temporary reinforced
soil structure, to 120+ years for a permanent reinforced soil structure. Fig. 2 shows a typical geogrid.
Fig.2. A typical geogrid
Geogrids have not yet been applied in the Indian coal mining industry. Coal mine waste dumps consist
of loose broken rock material mainly consisting of sandstone, shale, and soil. The behavior of this material is similar to soil
as discussed above. The use of geosynthetics/geogrids will definitely help in improving the stability of waste dumps.
III. WASTE DUMP STABILITY ANALYSIS USING NUMERICAL MODEL
Representative samples from existing waste dumps in coal mines were collected and tested for determination of
material characteristics. The results are shown in Table-1
Table 1: Waste dump and floor material characteristics

Sample                    Dump (Broken Overburden Material)   Floor Soil (Black Cotton Soil)
Density, kN/m³            20.02                               20.96
Cohesion, kN/m²           24.0                                39.0
Friction Angle, degrees   27.5                                20.9
For modeling, the above characteristics of material from a surface coal mine have been used.
The following properties of geogrids were used for numerical modeling:
Bond cohesion = 1000 N/m
Bond friction angle = 10°
Thickness = 5 mm
FLAC SLOPE numerical modeling software is used for stability analysis of waste dump. The analysis was performed by
introducing Geogrids in the waste dump in the following way:
a. Basic Waste dump, without any geogrid
b. One layer of geogrid at a height of 25% of total dump height
c. One layer of geogrid at a height of 50% of the total dump height
d. Two layers of geogrid, one at a height of 25% and other at 50% of the total height of dump
The following figures (1-4) show the results of modeling and the stress contours:
Fig.1 Basic waste dump model without any geogrid
Fig.2 Waste dump model with one layer of geogrid at 25% of dump height
Fig.3 Waste dump model with one layer of geogrid at 50% of dump height
Fig.4 Waste dump model with two layers of geogrid, one at 25% and the other at 50% of dump height
The Numerical Modeling results give factor of safety of waste dump in each case.
The table-2 shows the factor of safety as determined by Numerical Modeling for different conditions.
Table 2: Waste dump stability analysis using FLACSLOPE Numerical Modeling Software

Sr. No   Case             Factor of Safety by Numerical Modeling
1        Simple           0.97
2        Geogrid_25%      1.02
3        Geogrid_50%      1.05
4        Geogrid_25_50%   1.11
The above results indicate increase in factor of safety with application of Geogrids in various combinations.
IV. WASTE DUMP PHYSICAL MODELING
The geogrids have not been used in coal industry in India; hence validation of the numerical modeling results with
field study is not possible. Considering this constraint, physical model was prepared and experiments were performed on the
physical model to compare the results from numerical modeling qualitatively.
A model of acrylic sheets having a dimension 300 mm X 300 mm X 450 mm was prepared on a tilting platform.
Road metal dust was used for constructing the dump and the failure was induced by increasing the slope of base, the tilting
platform. A plastic net resembling a geogrid was used for the experiment. The model details are shown in Figure 5.
The following procedure has been adopted for conducting experiments with the physical model:
1. Same material has been used for all cases
2. Same dump configuration has been used for all cases
3. Dumps are stable under normal level floor conditions
4. The failure is induced by increasing the dip angle of the floor gradually
The hypothesis used for the experimentation is that a modeled dump which fails at a higher slope of the base
platform will have higher stability under normal, level base platform conditions.
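One way to see why this hypothesis is reasonable (an illustrative approximation added here, not the FLAC/Slope computation): for an idealized cohesionless, infinite-slope view of the model dump, the factor of safety on a base inclined at angle $\beta$ is approximately
$$\mathrm{FoS} \approx \frac{\tan\varphi_{\mathrm{eff}}}{\tan\beta},$$
where $\varphi_{\mathrm{eff}}$ is the effective friction angle of the (possibly reinforced) material. A dump that only fails at a larger tilt $\beta$ therefore has a larger effective friction angle, and hence a larger factor of safety under level-base conditions.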
The following experiments of monitoring failure, having same conditions as that of Numerical Modeling, were performed:
a. Base model without use of geogrid
b. One layer of geogrid at the height of 25% of total dump height
c. One layer of geogrid at the height of 50% of total height
d. Two layers of geogrid, one at a height of 25% and other at 50% of the total height of dump
The figures 5-13 show the experiments performed as above
Fig.5 Basic Model without any geogrid
Fig.6 Basic Model without any geogrid another view
Fig.7 Basic Model without any geogrid at failure
Fig.8 Model with one layer of geogrid at 25% of dump height
Fig.9 Model with one layer of geogrid at 25% of dump height, another view
Fig.10 Physical model with one layer of geogrid at 50% of dump height
Fig.11 Physical model with one layer of geogrid at 50% of dump height, another view
Fig.12 Physical model with two layers of geogrid one at 25% and another at 50% of dump height
Fig.13 Physical model with two layers of geogrid one at 25% and another at 50% of dump height, at failure
The results of the physical model are shown in Table 3.

Table 3: Results of Physical Modeling

Sr. No   Case            Dip Angle of the Floor at Failure, deg
1        Basic model     12.33
2        Geogrid_25      17.50
3        Geogrid_50      20.00
4        Geogrid_25_50   22.00
The results of numerical modeling (Table 2) and physical modeling (Table 3) show a similar trend. This validates
the results of the numerical model; hence, other parametric studies can now be performed using numerical modeling. This
experimentation opens up the applicability of geogrids in the coal mining industry for improving the stability of waste dumps,
thereby allowing higher dumps to be planned and saving on the dumping land required.
V. CONCLUSIONS
This study is an attempt to introduce geogrids for improving the stability of coal mine waste dumps. The results from
numerical and physical modeling show a similar trend in respect of the stability of waste dumps. This supports the application of
geogrids for improving the stability of coal mine waste dumps. Thus, by using geogrids, higher dumps accommodating larger
volumes of overburden can be designed. The coal mining industry will be relieved of part of the problem of land acquisition, which
is becoming more difficult day by day in India, while at the same time the stability of waste dumps is ensured.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 2 (September 2012) PP: 15-16
Feature Extraction from Gene Expression Data Files
Sayak Ganguli, Manoj Kumar Gupta, Sasti Gopal Das and Abhijit Datta
DBT Centre for Bioinformatics, Presidency University, Kolkata
Abstract––The Gene Expression Omnibus (GEO) maintained at the National Center for Biotechnology Information
(NCBI) is an archive where microarray data is available freely to researchers. The database provides us with numerous
data deposit options including web forms, spreadsheets, XML and Simple Omnibus Format in Text (SOFT). In addition to
serving as a repository, GEO offers a large collection of tools to help users effectively explore and analyze the gene
expression patterns it stores. The need for better analyses of gene expression data was explored using the programming
languages C and Practical Extraction and Reporting Language (PERL). The test set used for this study consisted of human
transcription factors. The results obtained indicate that feature extraction was successful.
Keywords––Bioinformatics, Gene Expression Omnibus, NCBI, Object oriented programming.
I. INTRODUCTION
The GEO database (3) is one of the largest repositories of gene expression data on the web (1, 2) which has been
generated using a large ensemble of high throughput measuring techniques. Such techniques include comparative genomic
hybridization for analyzing gain or loss of genes and tiling arrays for the analyses and detection of transcribed regions or
single-nucleotide polymorphism. Apart from these approaches data from ChIP-chip technology is also stored which provides
information on the protein binding regions on nucleotide molecules. Data from serial analysis of gene expression (SAGE),
massively parallel signature sequencing (MPSS), serial analysis of ribosomal sequence tags (SARST), and some peptide
profiling techniques such as tandem mass spectrometry (MS/MS) are also accepted. Several tools, such as GENE CHASER
(5), exist which focus on the biological condition of expression of a particular data set; however, since microarray expression
data comprise around 95% of the available data types in GEO, it becomes necessary to provide a tool that enables
the user to specifically analyze a particular data type from the expression data set (6).
The following are some existing tools which are used by the researchers for data analyses at GEO.
1. Profile neighbors retrieves gene connections that show a similar or reversed profile shape within a DataSet.
2. Sequence neighbors retrieves profiles which are related by nucleotide sequence similarity across all DataSets.
3. Homolog neighbors retrieves profiles of genes belonging to the same HomoloGene group.
4. Links menu enables the users to easily navigate from the GEO databases to associated records in other Entrez data domains including GenBank, PubMed, Gene, UniGene, OMIM, HomoloGene, Taxonomy, SAGEMap, and MapViewer.
However, analysis of specific gene sets from the files present in GEO is a significantly labour-intensive
task. Here we present a feature extraction program written in C and PERL, two well-established languages for handling
bioinformatics datasets, in which individual values are treated as discrete records, and both of which are largely platform
independent (4). The program allows the user to extract information regarding specific genes in a set of files. The program has
currently been tested on transcription factor expression data sets of human origin but can be extended to other datasets as well.
II. MATERIAL AND METHOD
This program is written in C and integrates PERL for feature extraction from the GEO data files.
Steps for Sorting Human Transcription Factors:
1. Generating raw data
2. Generating transcription factors
3. Generating Gene Data File
4. Generating final transcription factors list
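The following minimal Python sketch (the original program was written in C and PERL; the file names, tab-separated layout and keyword match below are assumptions added for illustration) captures the essence of steps 2-4, filtering rows whose annotation identifies a transcription factor.

import csv

# Keywords assumed to mark transcription-factor annotations in the data table
TF_KEYWORDS = ("transcription factor",)

def extract_transcription_factors(expression_table_path, output_path):
    """Copy rows whose gene annotation mentions a transcription factor."""
    with open(expression_table_path, newline="") as src, open(output_path, "w", newline="") as dst:
        reader = csv.reader(src, delimiter="\t")
        writer = csv.writer(dst, delimiter="\t")
        for row in reader:
            annotation = " ".join(row).lower()
            if any(key in annotation for key in TF_KEYWORDS):
                writer.writerow(row)

# Hypothetical usage:
# extract_transcription_factors("GDS_full_table.txt", "human_TF_expression.tsv")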
III. CONCLUSIONS
With the increase in experimental techniques utilizing high throughput technologies, the need for automated data
curation is paramount for successful time scale studies and for future implementation of robust analyses platforms. The
feature extraction algorithm developed was tested on the most abundant class of genes that are present in general expression
analyses – transcription factors. The results indicate that the algorithm can be extended in the development of an integrated
webserver for the analyses and characterization of more such gene specific queries. Having sequence information together
with expression information can help in the functional annotation and characterization of unknown genes or in finding novel
roles for characterized genes. These data are also valuable to genome-wide studies, allowing biologists to review global gene
expression in various cell types and states, to compare with orthologs in other species, and to search for repeated patterns of
coregulated groups of transcripts that assist formulation of hypotheses on functional networks and pathways.
IV. ACKNOWLEDGEMENTS
The authors acknowledge the support of the Department of Biotechnology, Government of India BTBI scheme.
REFERENCES
1. Akerkar, R. A.; Lingras, P. An Intelligent Web: Theory and Practice, 1st edn. 2008, Jones and Bartlett, Boston.
2. Albert, R.; Jeong, H.; Barabási, A.-L. Diameter of the World-Wide Web. Nature, 1999, 401, pp. 130–131.
3. Barrett, T.; Edgar, R. Mining Microarray Data at NCBI's Gene Expression Omnibus (GEO). Methods Mol Biol. 2006; 338: 175–190.
4. Chakrabarti, S. Data mining for hypertext: A tutorial survey. SIGKDD Explorations, 2000, 1(2), pp. 1–11.
5. Chen, R.; Mallelwar, R.; Thosar, A.; Venkatasubrahmanyam, S.; Butte, A. J. GeneChaser: identifying all biological and clinical conditions in which genes of interest are differentially expressed. BMC Bioinformatics, 2008, 9:548.
6. Tarca, A. L.; Romero, R.; Draghici, S. Analysis of microarray experiments of gene expression profiling. Am J Obstet Gynecol, 2006 Aug; 195(2): 373–88.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 3 (September 2012) PP: 12-18
Framework for Role of Transportation in Estimated Resources
Supplying In Construction Site
M. D. Nadar1, Dr. D. N. Raut2, Dr. S. K. Mahajan3
1Research Scholar, Production Engineering Department, VJTI, Mumbai-400019
2Professor, Production Engineering Department, VJTI, Mumbai-400019
3Director, Directorate of Technical Education (DTE), Govt. of Maharashtra, Mumbai
Abstract––Engineering construction projects play an important role in national economic development; the construction
sector contributes approximately 8.5% of India's GDP. Project schedule slips, budget overruns, compromised quality, and
the resulting claims and counter-claims have plagued the industry, and the reasons for poor project performance abound.
Previous research has dealt largely with the problems of project risk and uncertainty, variations in project outcomes, work
fragmentation, complex relationships among stakeholders and activities, and excessive phase overlaps. Construction projects
are implemented in different phases, viz. conception, definition, planning, scheduling, controlling and termination. Many
construction projects suffer from delay. Delays give rise to disruption of work and loss of productivity, late completion of the
project, increased time-related costs, third-party claims, and abandonment or termination of the contract, and various factors
affect the completion periods of projects. The objectives of this paper are to identify the causes of delay in delivering
renewable and non-renewable resources to site, to optimize the transportation cost of resources supplied to sites, to study the
current problems of the resource delivery procedure in construction industries, and to develop an effective resource supply
model. Construction projects face numerous problems related to the supply of resources to various construction project sites.
The proposed framework is evaluated through expert interviews and a literature survey, in order to develop a model addressing
the problems involved in distributing various resources for executing construction projects. The framework also provides the
physical distribution of resources from different points of source to construction project sites and a mapping for the
improvement of business performance in construction industries.
Keywords––Renewable and non renewable resources, transportation cost, transshipment quantity, delivery channel,
construction site, replenishment quantity of resources, stores location etc.,
I. INTRODUCTION
The goal of EPC industries is to provide deliverable project products (facilities) to customers/owners/developers whose
needs, wishes and desires are defined, quantified and qualified into clear requirements and communicated. Construction
project activities consume resources, and planning is required for supplying resources to the site in the right quantity, at the
right time and in good condition. Construction industries face problems in supplying resources physically to project sites at
lower cost, in the shortest time and in good condition. Transshipment planning problems are more complex in construction
projects. Construction sites are geographically spread across different locations, and the resource storage facilities are
likewise geographically dispersed. Construction project site teams estimate resource quantities, and these quantities vary
from site to site depending on the requirements of the project owners. The resource capacity of a storage facility varies from
location to location, the distances between project sites and storage facilities vary, and transportation cost and transit time
vary as functions of time and of the locations of the site and the storage facility. The transport of construction resources
accounts for 20% of the energy consumed by the construction industry (R. A. Smith et al., 2003).
II. RESEARCH METHODOLOGY
• To identify potential research problems in the area of resource supply to construction sites from EPC industries and their vendors.
• Data and information on EPC industries are collected using the following methods.
• To interview senior management of the contractor, construction management, consultant and owner of an EPC industry project, and to visit construction project sites to collect research-related data and information.
• To collect information on EPC industries through a literature survey of international journals reporting related research problems.
• To develop an integrated business performance enhancement model through heuristic optimization analysis and achieve a trade-off between the transportation cost and the transshipment quantities of resources.
• To measure the business performance factors of EPC industries.
III. FORMULATION
This paper addresses the transport of resources in construction projects. The problem focuses on the transshipment and
transport of renewable and non-renewable resources, with the aim of developing a trade-off model between cost and time in
the transport of resources to construction sites. The paper focuses on how renewable and non-renewable resources are
physically supplied to construction projects and on optimizing the transportation cost of delivering resources to construction
sites.
3.1. Resources:
Renewable resources are available (Blazewicz et al., 1986) on a period-by-period basis, that is, the amount is
renewable from period to period. Renewable resources are limited at any particular time. Typical examples of renewable
resources include machines, tools, equipment, space, manpower, etc.
Non renewable resources are available on a total project basis, with limited consumption availability for the entire
project. Typical examples of nonrenewable resources included raw materials, components, consumption materials etc.,
Doubly constrained resources are constrained per period as well as for the overall project, i.e., they combine the
properties of renewable and non-renewable resources.
Partially renewable/non-renewable resources (Böttcher et al., 1996; Drexl, 1997): for example, manpower may
work every day from Monday through Friday and on either Saturday or Sunday, but not both; such manpower is renewable
over the weekend and non-renewable on weekdays.
A resource is called preemptible if each of its units may be preempted, i.e., withdrawn from a currently processed
task, allotted to another task and then returned to the previous task. Resources which do not possess this property are
called non-preemptible (Blazewicz et al., 1986).
The model inputs are the locations of construction sites and of resource storage facilities, the distances between
storage facilities and project sites, and the transport cost of resources, covering loading, transit, toll and unloading costs
between storage and construction sites.
3.2. Notations:
Location of resource storage = source (k); location of construction site = sink (n)
Cij = unit transportation cost of resources from source i to sink j (i = 1, 2, ..., k; j = 1, 2, ..., n)
Ai = number of units of resources available at source i (i = 1, 2, ..., k)
Bj = number of units of resources required (allocated) at sink j (j = 1, 2, ..., n)
Xij = quantity of resources supplied from source i to sink j
Xi = quantity of resource inventory at source i (i = 1, 2, ..., k)
Xj = quantity of resource inventory at sink j (j = 1, 2, ..., n)
-Xi = quantity of resource backlog at source i (i = 1, 2, ..., k)
-Xj = quantity of resource backlog at sink j (j = 1, 2, ..., n)
Dij = distance between source i and sink j (i = 1, 2, ..., k; j = 1, 2, ..., n)
Pj = penalty cost per unit of backlogged resources at the construction site
Hj = holding cost per unit of resource inventory at the storage
Z = total transportation cost
3.3. Model
The objective of the problem is to determine the amount of resources to be shipped from each source to each construction
site such that the total transportation cost is minimized
3.3.1. Scenario I

When

$$\sum_{i=1}^{k} A_i = \sum_{j=1}^{n} B_j \qquad \text{(I)}$$

minimize the transportation cost. The objective function is

$$Z = \sum_{i=1}^{k}\sum_{j=1}^{n} C_{ij}\,X_{ij} \qquad (1)$$

subject to

$$\sum_{j=1}^{n} X_{ij} = A_i \quad (i = 1, 2, \dots, k) \qquad (2)$$

$$-\sum_{i=1}^{k} X_{ij} = -B_j \quad (j = 1, 2, \dots, n) \qquad (3)$$

$$X_{ij} \ge 0 \quad (i = 1, 2, \dots, k;\; j = 1, 2, \dots, n) \qquad (4)$$
Expression (1) represents the minimization of the total distribution cost, assuming a linear cost structure for
shipping. Equation (2) states that the amount being shipped from source i to all possible destinations should be equal to the
total availability, Ai, at that source. Equation (3) indicates that the amounts being shipped to destination j from all possible
sources should be equal to the requirement, Bj, at that destination. Usually Eq. (3) is written with positive coefficients and
right-hand sides by multiplying through by minus one.
3.3.1.1. Linear programming format

Minimize

$$Z = (C_{11}X_{11} + C_{12}X_{12} + \dots + C_{1n}X_{1n}) + (C_{21}X_{21} + C_{22}X_{22} + \dots + C_{2n}X_{2n}) + \dots + (C_{k1}X_{k1} + C_{k2}X_{k2} + \dots + C_{kn}X_{kn})$$

subject to

$$X_{11} + X_{12} + \dots + X_{1n} = A_1$$
$$X_{21} + X_{22} + \dots + X_{2n} = A_2$$
$$\vdots$$
$$X_{k1} + X_{k2} + \dots + X_{kn} = A_k$$
$$-X_{11} - X_{21} - \dots - X_{k1} = -B_1$$
$$-X_{12} - X_{22} - \dots - X_{k2} = -B_2$$
$$\vdots$$
$$-X_{1n} - X_{2n} - \dots - X_{kn} = -B_n$$
$$X_{ij} \ge 0 \quad (i = 1, 2, \dots, k;\; j = 1, 2, \dots, n)$$
Here $C_{11}$ denotes the unit transport cost from source 1 to sink 1, and in general
$$C_{ij} = f(D_{ij}, W_i, V_i),$$
where $D_{ij}$ is the distance between source i and sink j (i = 1, 2, ..., k; j = 1, 2, ..., n), $W_i$ is the weight of the resource shipped from source i, and $V_i$ is its volume (i = 1, 2, ..., k).
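For the balanced case of Scenario I, the model can be solved directly with any linear programming solver. The sketch below is illustrative only and is not part of the original framework; it uses SciPy's linprog with hypothetical costs, availabilities and site requirements.

import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 2 storage locations (sources) and 3 construction sites (sinks)
C = np.array([[4.0, 6.0, 9.0],      # Cij: unit transport cost
              [5.0, 3.0, 7.0]])
A = np.array([70.0, 80.0])          # Ai: units available at each store
B = np.array([40.0, 60.0, 50.0])    # Bj: units required at each site (sum A = sum B)

k, n = C.shape
# Equality constraints: row sums equal Ai, column sums equal Bj
A_eq = np.zeros((k + n, k * n))
for i in range(k):
    A_eq[i, i * n:(i + 1) * n] = 1.0          # sum_j Xij = Ai
for j in range(n):
    A_eq[k + j, j::n] = 1.0                   # sum_i Xij = Bj
b_eq = np.concatenate([A, B])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
X = res.x.reshape(k, n)
print("shipment plan:\n", X.round(1), "\ntotal cost:", res.fun)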
3.3.2. Scenario II

When

$$\sum_{i=1}^{k} A_i > \sum_{j=1}^{n} B_j,$$

a holding cost is incurred due to the inventory that builds up at the resource storage destinations. Minimize the transportation cost

$$Z = \sum_{i=1}^{k}\sum_{j=1}^{n} C_{ij}\,X_{ij} + \sum_{i=1}^{k} H_i X_i + \sum_{j=1}^{n} H_j X_j \qquad (5)$$

subject to

$$\sum_{j=1}^{n} X_{ij} < A_i \quad (i = 1, 2, \dots, k) \qquad (6)$$

$$-\sum_{i=1}^{k} X_{ij} > -B_j \quad (j = 1, 2, \dots, n) \qquad (7)$$

$$X_{ij} \ge 0 \quad (i = 1, 2, \dots, k;\; j = 1, 2, \dots, n) \qquad (8)$$
Expression (5) represents the minimization of the total distribution cost, assuming a linear cost structure for shipping. Equation (6) states that the amount being shipped from source i to all possible destinations should be less than the total availability, Ai, at that source. Equation (7) indicates that the amounts being shipped to destination j from all possible sources should be greater than the requirements, Bj, at that destination. Usually Eq. (7) is written with positive coefficients and right-hand sides by multiplying through by minus one.
3.3.2.1. Linear programming format
Minimization of transportation cost
$Z = (C_{11}X_{11} + \ldots + C_{1n}X_{1n}) + (C_{21}X_{21} + \ldots + C_{2n}X_{2n}) + \ldots + (C_{k1}X_{k1} + \ldots + C_{kn}X_{kn}) + (H_1X_1 + H_2X_2 + \ldots + H_kX_k) + (H_1X_1 + H_2X_2 + \ldots + H_nX_n)$
Subject to
$X_{11} + X_{12} + \ldots + X_{1n} < A_1$
$X_{21} + X_{22} + \ldots + X_{2n} < A_2$
$\vdots$
$X_{k1} + X_{k2} + \ldots + X_{kn} < A_k$
$-X_{11} - X_{21} - \ldots - X_{k1} > -B_1$
$-X_{12} - X_{22} - \ldots - X_{k2} > -B_2$
$\vdots$
$-X_{1n} - X_{2n} - \ldots - X_{kn} > -B_n$
$X_{ij} \ge 0 \quad (i = 1,2,3,\ldots,k;\ j = 1,2,3,\ldots,n)$
3.3.3. Scenario III
When
$\sum_{i=1}^{k} A_i < \sum_{j=1}^{n} B_j$
then backlog cost is incurred due to the shortage of resources at the construction sites.
Minimize the transportation cost
$Z = \sum_{i=1}^{k}\sum_{j=1}^{n} C_{ij} X_{ij} + \sum_{i=1}^{k} P_i X_i + \sum_{j=1}^{n} P_j X_j$   (9)
Subject to
$\sum_{j=1}^{n} X_{ij} > A_i \quad (i = 1,2,3,\ldots,k)$   (10)
$-\sum_{i=1}^{k} X_{ij} < -B_j \quad (j = 1,2,3,\ldots,n)$   (11)
$X_{ij} \ge 0 \quad (i = 1,2,3,\ldots,k;\ j = 1,2,3,\ldots,n)$   (12)
Expression (9) represents the minimization of the total distribution cost, assuming a linear cost structure for shipping. Equation (10) states that the amount being shipped from source i to all possible destinations should be less than the total availability, Ai, at that source. Equation (11) indicates that the amounts being shipped to destination j from all possible sources should be greater than the requirements, Bj, at that destination. Usually Eq. (11) is written with positive coefficients and right-hand sides by multiplying through by minus one.
3.3.3.1. Linear programming format
Minimization of transportation cost
$Z = (C_{11}X_{11} + \ldots + C_{1n}X_{1n}) + (C_{21}X_{21} + \ldots + C_{2n}X_{2n}) + \ldots + (C_{k1}X_{k1} + \ldots + C_{kn}X_{kn}) + (P_1X_1 + P_2X_2 + \ldots + P_kX_k) + (P_1X_1 + P_2X_2 + \ldots + P_nX_n)$
Subject to
$X_{11} + X_{12} + \ldots + X_{1n} > A_1$
$X_{21} + X_{22} + \ldots + X_{2n} > A_2$
$\vdots$
$X_{k1} + X_{k2} + \ldots + X_{kn} > A_k$
$-X_{11} - X_{21} - \ldots - X_{k1} < -B_1$
$-X_{12} - X_{22} - \ldots - X_{k2} < -B_2$
$\vdots$
$-X_{1n} - X_{2n} - \ldots - X_{kn} < -B_n$
$X_{ij} \ge 0 \quad (i = 1,2,3,\ldots,k;\ j = 1,2,3,\ldots,n)$
IV. SOLVING THE TRANSPORTATION PROBLEM
The optimum resource quantities required to supply the different construction sites from the different storage destinations at minimum transportation cost must be determined under the different working business scenarios. The transportation problem is used to develop an efficient algorithm for the general minimum-cost flow problem by specializing the rules of the simplex method to take advantage of the problem structure. However, before taking a somewhat formal approach to the general problem, the basic ideas are indicated by developing a similar algorithm for the transportation problem. The properties of this algorithm for the transportation problem will then carry over to the more general minimum-cost flow problem in a straightforward manner. Historically, the transportation problem was one of the first special structures of linear programming for which an efficient special-purpose algorithm was developed; in fact, special-purpose algorithms have been developed for all of the network structures. Many transportation computational algorithms are characterized by three stages:
1. Obtaining an initial solution;
2. Checking an optimality criterion that indicates whether or not a termination condition has been met (i.e., in the simplex algorithm, whether the problem is infeasible, the objective is unbounded over the feasible region, or an optimal solution has been found);
3. Developing a procedure to improve the current solution if a termination condition has not been met.
After an initial solution is found, the algorithm repetitively applies steps 2 and 3 so that, in most cases, after a finite number of steps, a termination condition arises. The effectiveness of an algorithm depends upon its efficiency in attaining the termination condition. Since the transportation problem is a linear program, each of the above steps can be performed by the simplex method. Initial solutions can be found very easily in this case, however, so phase I of the simplex method need not be performed. Also, when applying the simplex method in this setting, the last two steps become particularly simple.
The transportation problem is a special network problem, and the steps of any algorithm for its solution can be interpreted in terms of network concepts. However, it is also convenient to consider the transportation problem in purely algebraic terms.
Initial basic solutions for the transportation cost variables are obtained by the following traditional methods. The basic initial solution is determined by the Northwest Corner method (NW corner method), which is easy to use and requires simple calculations (J. Reeb et al. 2002); NW corner solutions, however, are not optimal. For transportation problems with degeneracy, an initial basic solution of better accuracy is determined by the Vogel Approximation Method and improved by the MODI method, and minimum-cost (row minimum, column minimum) methods are also used to determine initial basic solutions.
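A minimal sketch of the Northwest Corner rule described above is given below; the supply and demand figures are hypothetical and serve only to show how an initial basic feasible solution is generated.

def northwest_corner(supply, demand):
    # Returns an initial allocation matrix using the Northwest Corner rule
    supply, demand = list(supply), list(demand)
    k, n = len(supply), len(demand)
    X = [[0.0] * n for _ in range(k)]
    i = j = 0
    while i < k and j < n:
        q = min(supply[i], demand[j])   # allot as much as possible to the current north-west cell
        X[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:              # row exhausted: move down
            i += 1
        else:                           # column satisfied: move right
            j += 1
    return X

# Example with hypothetical balanced data: two sources, three sites
print(northwest_corner([30, 40], [20, 25, 25]))   # [[20, 10, 0], [0, 15, 25]]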
The blocking method is used for finding an optimal solution to bottleneck transportation problems.
4.1. The blocking method
Algorithm:
Step 1: Find the maximum of the minima of each row and column of the given transportation table.
Step 2: Construct a reduced transportation table from the given table by blocking all cells whose time exceeds the value found in Step 1.
Step 3: Check if each column demand is less than or equal to the sum of the supplies in the reduced transportation problem obtained from Step 2. Also, check if each row supply is less than or equal to the sum of the column demands in the reduced transportation problem obtained from Step 2. If so, go to Step 6 (such a reduced transportation table is called the active transportation table). If not, go to Step 4.
Step 4: Find the time value, U, which is immediately next to the currently used blocking value.
Step 5: Construct a reduced transportation table from the given transportation table by blocking all cells having time more than U and then go to Step 3.
Step 6: Do the allocation according to the following rules:
(a) Allot the maximum possible to a cell which is the only unblocked cell in its row/column. Then modify the active transportation table and repeat the process as long as it is possible or until all allocations are completed.
(b) If (a) is not possible, select a row/column having the minimum number of unblocked cells and allot the maximum possible to the cell which helps to reduce the largest supply and/or largest demand.
Step 7: This allotment yields a solution to the given bottleneck transportation problem.
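The first three steps of the blocking method can be sketched as follows, assuming a bottleneck (time) matrix alongside the usual supplies and demands; the matrix and the helper names are illustrative and not part of the original method's notation.

def blocking_threshold(times):
    # Step 1: maximum of the row minima and column minima of the time matrix
    row_mins = [min(row) for row in times]
    col_mins = [min(col) for col in zip(*times)]
    return max(row_mins + col_mins)

def block_cells(times, limit):
    # Steps 2 and 5: mark cells whose time exceeds the limit as blocked (None)
    return [[t if t <= limit else None for t in row] for row in times]

def is_active(blocked, supply, demand):
    # Step 3: each demand must not exceed the unblocked supply in its column, and vice versa
    for j, bj in enumerate(demand):
        if bj > sum(a for i, a in enumerate(supply) if blocked[i][j] is not None):
            return False
    for i, ai in enumerate(supply):
        if ai > sum(b for j, b in enumerate(demand) if blocked[i][j] is not None):
            return False
    return True

# Hypothetical time matrix for 3 sources x 3 sites
T = [[4, 9, 6], [7, 3, 8], [5, 6, 2]]
U = blocking_threshold(T)
print(U, is_active(block_cells(T, U), [30, 40, 30], [20, 45, 35]))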
The basic solution of allocated resource quantities obtained from the blocking method is developed further towards an optimal solution by using the Bottleneck Cost Transportation Problem (BCTP) and the blocking zero point method (R.S. Garfinkel and M.R. Rao, 1971; Y.P. Aneja and K.P.K. Nair, 1979).
Computer solutions for the formulated transportation problems with degeneracy are obtained by solving the linear program with LINDO software; the software will solve the LP problem and will add slack, surplus, and artificial variables when necessary (J. Reeb et al. 2002). LINDO is a system of tools that includes optimization software for integer programming, linear programming, nonlinear programming, stochastic programming and global optimization.
4.2. Combinatorial Optimization, Heuristics and Meta-heuristics (Manar Ibrahim Hosny, 2010):
The importance of CO problems stems from the fact that many practical decision-making issues concerning, for example, resources, machines and people, can be formulated under the combinatorial optimization framework. As such, popular optimization techniques that fit this framework can be applied to achieve optimal or best possible solutions to these problems, which should minimize cost, increase performance and enable a better usage of resources.
The transportation problem is itself a combinatorial optimization problem (Maniezzo et al. 1998; Kowalski 2005). These transportation problems have been solved optimally using exact algorithms such as stage ranking and branch-and-bound methods used in mixed-integer programming (MIP) (Adlakha and Kowalski 2003). However, the application of these methods has been restricted to small-scale problems because computation time dramatically increases with problem size (Kowalski 2005). For even a medium-scale problem with hundreds to thousands of edges, the computation time required by MIP might be so large that the problem becomes unsolvable for practical purposes (Adlakha and Kowalski 2003). Several approximation algorithms, generally called heuristics, have been developed to solve larger problems in a reasonable time (Gottlieb and Paulmann 1998; Sun et al. 1998).
Meta-heuristic Algorithms: Heuristic methods yield an approximate solution for the resource quantities (Xij) to be delivered from the different storage depots to the different construction sites. The resulting variables are compared across the different heuristic methods on the performance factors of cost, time and profit of the transportation problem. Heuristic procedures based on evolutionary algorithms include Genetic Algorithms, Simulated Annealing, Tabu Search and Ant Colony Optimization.
4.3. Genetic Algorithms
Genetic Algorithms (GAs) were introduced by Holland (1975) as a method for modeling complex systems. They are a class of powerful
search algorithms applicable to a wide range of problems with little prior knowledge. GAs are particularly good at global
search and can deal with complex and multimodal search landscapes. They are known as effective methods that allow
obtaining near-optimal solutions in adequate solution times. GAs are an important and evolving component of artificial intelligence, based on fundamental principles of Darwinian evolution and genetics. GAs are robust general-purpose search programs based on the mechanism of natural selection and natural genetics. Genes and chromosomes are the fundamental
elements in GAs. A chromosome is a string of genes. In a real problem, genes are the variables that are considered influential
in controlling the process being optimized, and chromosome is a solution to the problem. Genetic Algorithms (GAs) search
for the optimal solution from populations of chromosomes. A GA may be defined as an iterative procedure that searches for
the best solution of a given problem among a constant-size population, represented by a finite string of symbols, the genome.
The search is made starting from an initial population of individuals often randomly generated. At each evolutionary step,
individuals are evaluated using a fitness function. High-fitness individuals will have the highest probability to reproduce.
The evolution (i.e., the generation of a new population) is made by means of two operators: the crossover operator and the
mutation operator. The crossover operator takes two individuals (the parents) of the old generation and exchanges parts of
their genomes, producing one or more new individuals (the offspring). The mutation operator has been introduced to prevent
convergence to local optima, in that it randomly modifies an individual’s genome (e.g., by flipping some of its bits, if the
genome is represented by a bit string). Crossover and mutation are performed on each individual of the population with
probability Pc and Pm respectively, where Pm<Pc. Further details on GA can be found in (Falkenauer 1998, Goldberg D
1989).
4.4. Procedure for evolution program
begin
    t ← 0
    initialize P(t)            /* initial population of candidate allocations Xij */
    evaluate P(t)
    while (not termination-condition) do
    begin
        t ← t + 1
        select P(t) from P(t−1)
        alter P(t)             /* apply crossover and mutation */
        evaluate P(t)
    end
end
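A small Python sketch of such an evolution program applied to the transportation problem is shown below. The chromosome is a candidate allocation matrix Xij, and the fitness penalizes violation of the supply and demand constraints; the penalty weight, population size and operators are illustrative choices, not those of the cited GA literature.

import random

def fitness(X, C, A, B):
    # Negative transport cost minus a penalty for violating supply/demand (hypothetical weight)
    cost = sum(C[i][j] * X[i][j] for i in range(len(A)) for j in range(len(B)))
    penalty = sum(abs(sum(X[i]) - A[i]) for i in range(len(A)))
    penalty += sum(abs(sum(row[j] for row in X) - B[j]) for j in range(len(B)))
    return -(cost + 100.0 * penalty)

def random_plan(A, B):
    return [[random.uniform(0, min(a, b)) for b in B] for a in A]

def crossover(X, Y):
    # Exchange parts of the parents' genomes cell by cell
    return [[random.choice((x, y)) for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mutate(X, rate=0.1):
    # Randomly perturb some cells to avoid convergence to local optima
    return [[max(0.0, x + random.gauss(0, 1)) if random.random() < rate else x for x in row] for row in X]

def evolve(C, A, B, pop_size=40, generations=200):
    P = [random_plan(A, B) for _ in range(pop_size)]
    for _ in range(generations):                       # t <- t + 1
        P.sort(key=lambda X: fitness(X, C, A, B), reverse=True)
        parents = P[:pop_size // 2]                    # selection: keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        P = parents + children                         # alter P(t) by crossover and mutation
    return max(P, key=lambda X: fitness(X, C, A, B))

best_plan = evolve([[4, 6, 8], [5, 3, 7]], [30, 40], [20, 25, 25])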
V. CONCLUSION
In this paper the problems faced by construction industries in supplying resources to construction sites have been addressed. The transportation problem is formulated to deliver the required resource quantities at the minimum transportation cost; within the scope of the paper, the time factor of transport might be significant in several transportation problems. The proposed methods are quite simple from the computational point of view and are also easy to understand and apply. By the blocking zero point method, we obtain a sequence of optimal solutions to a bottleneck-cost transportation problem for a sequence of time values in a time interval. This method provides a set of transportation schedules for bottleneck-cost transportation problems, which helps the decision maker to select an appropriate transportation schedule depending on his financial position and the extent of bottleneck that he can afford. The blocking zero point method enables the decision maker to evaluate the economical activities and make the correct managerial decisions.
VI. ACKNOWLEDGMENT
I express my sincere thanks to my father, Shri A. Dharmaraj, Erode, whose continuous vision and mission enabled me to develop this paper. My sincere thanks also go to Dr. R. I. K. Moorthy, Principal, Pillai Institute of Information Technology, New Panvel, and I extend my thanks to my family members.
REFERENCES
1. Reeb, J. and Leavengood, S., 2000. Using Duality and Sensitivity Analysis to Interpret Linear Programming Solutions, EM 8744.
2. Manar Ibrahim Hosny, Investigating Heuristic and Meta-Heuristic Algorithms for Solving Pickup and Delivery, 2010.
3. Aneja, Y.P. and Nair, K.P.K., Bi-criteria transportation problem, Management Science, 25 (1979), 73-79.
4. Garfinkel, R.S. and Rao, M.R., The bottleneck transportation problem, Naval Research Logistics Quarterly, 18 (1971), 465-472.
5. Baptiste, P., Le Pape, C., and Nuijten, W. (2001). Constraint-based scheduling: Applying constraint programming in scheduling problems. Kluwer.
6. Sun, M., Aronson, J., McKeown, P., and Drinka, D. 1998. A tabu search heuristic procedure for the fixed charge transportation problem. Eur. J. Oper. Res. 106: 441-456. doi:10.1016/S0377-2217(97)00284-1.
7. Weintraub, A., and Dreyfus, S. 1985. Modifications and extensions of heuristics for solving resource transportation problems. Cooperation Agreement Final Report. University of California, Berkeley, Calif.
8. Weintraub, A., Jones, G., Magendzo, A., Meacham, M., and Kirby, M. 1994. A heuristic system to solve mixed integer forest planning models. Oper. Res. 42: 1010-1024. doi:10.1287/opre.42.6.1010.
9. Weintraub, A., Jones, G., Meacham, M., Magendzo, A., Magendzo, A., and Malchauk, D. 1995. Heuristic procedures for solving mixed-integer harvest scheduling-transportation planning models. Can. J. For. Res. 25: 1618-1626. doi:10.1139/x95-176.
10. Zeki, E. 2001. Combinatorial optimization in forest ecosystem management modeling. Turk. J. Agric. For. 25: 187-194.
11. Zeng, H., Pukkala, T., Peltola, H., and Kellomäki, S. 2007. Application of ant colony optimization for the risk management of wind damage in forest planning. Silva Fenn. 41: 315-332.
12. Saleh Al Hadi Tumi, Abdul Hamid Kadir Pakir, Causes of delay in construction industry in Libya (2009), The International Conference on Economics and Administration, Faculty of Administration and Business, University of Bucharest, Romania, ICEA-FAA Bucharest, 14-15 November 2009.
13. Syed Mahmood Ahmed, Salman Azhar, Irtishad Ahmad, Supply Chain Management in Construction: Scope, Benefits and Barriers, Delhi Business Review, Vol. 3, No. 1, January-June 2002.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September 2012) PP: 10-14
Security Management for Distributed Environment
Ms. Smita Chaudhari1, Mrs. Seema Kolkur2
1 Asst. Prof., S. S. Jondhale College of Engineering, Dombivli, Mumbai University, INDIA
2 Assoc. Prof., Thadomal Shahani College of Engineering, Mumbai University, INDIA
Abstract––A mobile database is a database that can be connected to by a mobile computing device over a mobile network.
Mobile processed information in database systems is distributed, heterogeneous, and replicated. They are endangered by
various threats based on user’s mobility and restricted mobile resources of portable devices and wireless links. Since
mobile circumstances can be very dynamic, standard protection mechanisms do not work very well in such environments.
So our proposed model enhances the security in mobile database system. In this paper we develop a security model for
transaction management framework for peer-to-peer environments. If any attack still occurs on a database system,
evaluation of damage must be performed as soon as the attack is identified. The attack recovery problem has two aspects:
damage assessment and damage repair. The complexity of attack recovery is mainly caused by a phenomenon called
damage spreading. This paper focuses on damage assessment and recovery procedure for distributed database systems.
Keywords––Mobile Database, Transaction Management, Security
I. INTRODUCTION
In mobile environment, several mobile computers collectively form the entire distributed system of interest. These
mobile computers may communicate together in an ad hoc manner by communicating through networks that are formed on
demand. Such communication may occur through wired (fixed) or wireless (ad hoc) networks. Distributed database systems
are made up of mobile nodes and peer-to-peer connections. These nodes are peers and may be replicated for fault tolerance, dependability, and to compensate for nodes which are currently disconnected. Several sites from this system must
participate in the synchronization of transaction. There are different transaction models [5] available for mobile computing
environment, but data transmission between the base station (BS) and the mobile station (MH (S)) is not secure which leads
to data inconsistency as well as large number of rejected transactions. Typical operating system security features such as
memory and file protection, resource access control and user authentication are not useful for distributed environment. A key
requirement in such an environment is to support and secure the communication of mobile database. This paper focuses on
security management processing for MCTO (Multi-Check-out Timestamp Order) [2] model by using symmetric encryption
and decryption [1] between the Base station BS and the mobile host MH with the aim at achieving secure data management
at the mobile host.
If any attack occurs on a database system, evaluation of damage must be performed as soon as the attack is identified. If the evaluation of damage is not performed soon after the attack, the initial damage will spread to other parts of the database via valid transactions, consequently resulting in denial of service. As more and more data items become affected, the spread of damage becomes even faster. Damage assessment is a complicated task due to intricate transaction relationships among distributed sites. For the assessment, the logs need to be checked thoroughly for the effect of the attack. Damage recovery [6] can be "Coldstart" or "Warmstart". This paper focuses on a system that uses the "Coldstart" method for damage assessment and recovery. The proposed system uses the DAA (Damage Assessment Algorithm) [3] to detect the spread of malicious transactions in a distributed replicated database system. After detection of the affected transactions, these are recovered using the recovery
procedure.
II. THE PROPOSED MODEL
The architecture of the proposed system is as shown in fig.1. The mobile host in mobile network first gives the
encrypted request to fixed proxy server. The fixed proxy server updates the data and the result is given back to the mobile
network.
Fig.1 Architecture of the proposed system: the mobile network sends an encrypted request through an Enc/Dec block to the fixed proxy server on the fixed network, which runs DAA Pass1/Pass2, updates the database and returns the decrypted result (DAA - Damage Assessment Algorithm; Enc - Encryption algorithm; Dec - Decryption algorithm).
The proposed model consists of both encryption and decryption algorithms located at the BS and the MH(s) as shown in Fig.1. The encryption algorithm is started when data is transferred, and the decryption algorithm is started when encrypted data is received. The DAA (Damage Assessment Algorithm) on the fixed network uses local logs.
The proposed system uses the MCTO model [2]. The model has two types of networks, i.e., the fixed network and the mobile network. For the fixed network, all sites are logically organized in the form of a two-dimensional grid structure. For example, if the network consists of twenty-five sites, they will be logically organized in the form of a 5 x 5 grid. Each site has a master data file.
A. Diagonal Replication on Grid (DRG) Technique
For replication, the proposed system uses the DRG [4] technique. In the fixed network, the data file is replicated to the diagonal sites, while in the ad hoc network the data file is replicated asynchronously at only one site, based on the most frequently visited site. As an example, assume a 5 x 5 grid: the same file will be replicated to s(1,1), s(2,2), s(3,3), s(4,4) and s(5,5). The 'commonly visited site' is defined as the site that most frequently requests the same data at the fixed network (the commonly visited sites can be given either by a user or selected automatically from a log file/database at each centre).
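A minimal sketch of the two replica-placement decisions described above is given below; the site identifiers and the access log are hypothetical.

from collections import Counter

def diagonal_sites(n):
    # DRG on an n x n grid: the data file is replicated to s(1,1), s(2,2), ..., s(n,n)
    return [(d, d) for d in range(1, n + 1)]

def most_visited_site(access_log):
    # Ad hoc side: replicate asynchronously at the single most frequently requesting site
    return Counter(access_log).most_common(1)[0][0]

print(diagonal_sites(5))                                   # [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]
print(most_visited_site(['s31', 's42', 's31', 's13']))     # 's31'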
B. Damage evaluation protocol
The purpose of this model [3] is to provide an efficient method to assess the effects of a malicious transaction in a
fully distributed replicated database system. The model is based on the following assumptions:
• The local schedules are logged at each site and the attacker cannot destroy the log. The extended log can be considered to include all the read operations in addition to the write operations.
• The attacking transactions are identified.
• Blind writes are not permitted; that is, if a transaction writes some data item, it is assumed to read the value of that item first.
The following transaction classifications are used in the model: malicious transaction, authentic transaction, affected transaction, bad transaction and unaffected transaction. Consider a distributed database system consisting of two logs, as shown in Fig.3, that are replicated at different sites. Since this is a replicated distributed database system, any change in one log is propagated to every site where that log is replicated.
Fig.3 Effect of transaction distribution: example logs replicated at two sites, containing entries such as R1(a) W1(x), R2.1(b) W2.1(b), R2.2(c), R4.1(d), R4.2(c) W4.2(c), R5(g), R6.1(d) W6.1(d), R6.2(h) W6.2(h), R7(d), R8(g) W8(g), R9.2(g) W9.2(g), R12.1(e) W12.1(e), R12.2(k), R14.1(a), R21.2(g) W21.2(g), R25(d) W25(y); the underlined transactions (T6 at Site 1 and T8 at Site 2) are malicious.
The underlined transactions are malicious transactions. Let us first consider the log at Site 1, in which T6 is marked as the first malicious transaction. When the DAA procedure is executed at Site 1, the affected transactions will be detected and added to the undo list (the list of transactions whose effect must be removed from the database). For example, transactions T7 and T25 are affected transactions since they both read the damaged data item d, which has been updated by T6. At Site 2, T8 is marked as malicious; hence T9 and T21 will be detected as affected. After assessing the affected transactions, all affected transactions are repaired using the damage recovery procedure.
i)Proposed Algorithm for Damage Assessment
Input: The update log, read log, the set of malicious transactions M
Output: The set of bad transactions TB, the set of dirty items D and the set of global bad Transactions GB.
1. Initializations: TB ={}, tmp_bad_list ={}, D = {}, tmp_dirty_list = {}.
2. Find the first malicious transaction committed in the log.
3. For each transaction Ti read its entry P ij (x) in the log
3.1 If Ti is in M then
If Pij is a write operation then
D = D U{x}
/* Add the data item to the dirty list */
3.2 Else
3.2.1 Case Pij is a read operation
If x is in D
tmp_bad_list = tmp_bad_list U {Ti}
/*Add Ti to tmp_bad_list*/
3.2.2 Case Pij is a write operation
tmp_dirty_list = tmp_dirty_list U {x}
/* Add x to tmp_dirty_list*/
3.2.3 Case Pij is an abort operation
Delete Ti from tmp_bad_list
Delete x from tmp_dirty_list
3.2.4 Case Pij is a commit operation
If Ti is in the tmp_bad_list
Move Ti from tmp_bad_list to TB.
Move all the data items of Ti from the
tmp_dirty_list to the D.
The algorithm starts by adding data items updated by malicious transactions to the dirty list. After that it scans the log for all
presumed-bad transactions and adds the data items that have been updated by them to the dirty list.
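The single pass described above can be sketched as follows. The log format (transaction, operation, item) and the function name are illustrative simplifications of the paper's algorithm, not the VB/Oracle implementation discussed later.

def assess_damage(log, malicious):
    # log: ordered entries (txn, op, item) with op in {'r', 'w', 'commit', 'abort'}
    dirty, bad = set(), set()
    tmp_dirty, tmp_bad = {}, set()          # per-transaction tentative state
    for txn, op, item in log:
        if txn in malicious:
            if op == 'w':
                dirty.add(item)             # data written by a malicious transaction is dirty
        elif op == 'r' and item in dirty:
            tmp_bad.add(txn)                # read a dirty item: tentatively bad
        elif op == 'w':
            tmp_dirty.setdefault(txn, set()).add(item)
        elif op == 'abort':
            tmp_bad.discard(txn)
            tmp_dirty.pop(txn, None)
        elif op == 'commit' and txn in tmp_bad:
            bad.add(txn)                            # affected (bad) transaction confirmed
            dirty |= tmp_dirty.pop(txn, set())      # its own writes spread the damage
    return bad, dirty

# Example matching Fig.3, Site 1: T6 is malicious, T7 and T25 read the dirty item d
site1_log = [('T6', 'r', 'd'), ('T6', 'w', 'd'), ('T6', 'commit', None),
             ('T7', 'r', 'd'), ('T7', 'commit', None),
             ('T25', 'r', 'd'), ('T25', 'w', 'y'), ('T25', 'commit', None)]
print(assess_damage(site1_log, {'T6'}))     # bad = {'T7', 'T25'}, dirty = {'d', 'y'}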
ii) Proposed Algorithm for Damage Recovery
Input: The update log, set of malicious transaction M, set of affected transactions A.
Output: Set of recovered transactions whose effect has been undone: UN
Intermediate input/output: The set of bad transactions TB, the set of dirty items D, difference, temp.
1. Move to the position in the log where the first malicious transaction appeared.
2. For each transaction Ti in M,
temp ← old value of item X
3. For each Affected transaction Ti in A, read its entry Pij (x) in the log.
3.1 If x is a numeric value,
Read the old values and new values of item x from the log.
Calculate difference ← new value of x – old value of x.
3.2 Else
Read the old values and new values of item x from the log.
4. For every transaction Ti in the update log, read its entry Pij (x) in the log
4.1 If Ti is in A,
4.2 If x is a numeric value
Update old value of x ← temp
New value of x ← temp - difference.
4.3 Else
Copy temp to new value of x.
5. Update the original table's state according to the recovered transactions in the log.
The algorithm begins by scanning every affected transaction in the update log. If the item updated by affected
transaction is a numeric value, it calculates the difference between the old value and new value of the item. The difference is
updated to new value of item. If item is a character value, copy old value of item to new value. Update the changes in the
tables to bring the database back in running state.
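A simplified sketch of the numeric-recovery idea is shown below: for every affected write, the difference between its new and old values is subtracted from the current database state, while non-numeric items are reset to their logged old values. The log layout and names are assumptions made for illustration and compress the paper's multi-step procedure.

def undo_affected(update_log, affected, current_db):
    # update_log: entries (txn, item, old_value, new_value) in commit order
    for txn, item, old, new in update_log:
        if txn not in affected:
            continue
        if isinstance(new, (int, float)):
            delta = new - old                       # contribution of the affected write
            current_db[item] = current_db[item] - delta
        else:
            current_db[item] = old                  # non-numeric: restore the logged old value
    return current_db

# Illustrative example: affected transaction T7 added 50 to item 'd'; later valid updates are kept
db = {'d': 180.0}
log = [('T7', 'd', 100.0, 150.0)]                   # T7 changed d from 100 to 150 (delta = 50)
print(undo_affected(log, {'T7'}, db))               # {'d': 130.0}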
III. IMPLEMENTATION ISSUES
A. System Design
The system uses VB as front-end and Oracle9i as a backend. There are two parts in which the proposed security
management system works. In the first part, the mobile user retrieves the desired information from the distributed database
system. The mobile user connects with the distributed database system using wireless communication. During communication, the request or result may be tampered with during transmission. To avoid this, the communication between the mobile user and the distributed database system is secured using the MD5 algorithm. The database is replicated on the distributed database system using Oracle's MMR (Multi-Master Replication) in such a way that consistent data is available at the different sites.
In the second part, Damage Assessment and Repair module is developed. The main component of this module is
log, which captures different operations on database. Two types of log are required: read log and update log. The prototype
is implemented on top of an Oracle server. Since Oracle redo log structure is confidential and difficult to handle, read and
write information is maintained manually. In particular, a Proxy is used to mediate every user transaction and some specific
triggers to log write operations. A trigger is associated with each user table to log the write operations on that table. All the
write operations are recorded in the log table.
Oracle triggers cannot capture every read operation. Oracle triggers can capture the read operations on a data item
that is updated or deleted but cannot capture read-only operations. To capture every read operation, read set from SQL
statement is extracted and stored in read log table.
B. Performance Issues
The performance of the first part, i.e., data retrieval, can be measured by the response time for user transactions. It is the time from when the mobile user enters his query to the time he gets back the result. It depends on the number of records the user wants to retrieve.
The performance of the attack recovery subsystem can be measured by the average repair time, which indicates
how efficient the subsystem is at repairing damage. The Average repair time depends on the number of Affected
Transactions.
IV. CONCLUSIONS AND FUTURE WORK
In this paper we have developed a mobile transaction model which captures the data and movement nature of mobile transactions. The model is based on multi-check-out and describes mobile transaction management by timestamp order. The encryption and decryption algorithms are used for secured data transmission between the base station BS and the mobile host MH(s). After an attack, the DAA procedures are used to assess and repair the effect of malicious transactions.
As future work, an agent can be used on the fixed proxy server. When the mobile client is disconnected in the MCTO model, the result of the transaction is not lost but will be stored with the mobile agent. When the transaction is completed, the agent returns and delivers the result to the user. If the user is disconnected, it waits until the user is reconnected. The agent is also used to maintain serializability in multi-check-out mode, with timestamp ordering used to serialize the mobile transactions at the fixed proxy server.
REFERENCES
1. Abdul-Mehdi, Z.T.; Mahmod, R., "Security Management Model for Mobile Databases Transaction Management," Information and Communication Technologies: From Theory to Applications, 3rd International Conference on, 7-11 April 2008, pp. 1-6.
2. Mehdi, Z.T., Mamat, A.B., Ibrahim, H., Dirs, Mustafa M., 2006. "Multi-Check-Out Timestamp Order Technique (MCTO) for Planned Disconnections in Mobile Database," The 2nd IEEE International Conference on Information & Communication Technologies: from Theory to Applications, 24-28 April, Damascus, Syria, Vol. 1, pp. 491-498.
3. Rami Samara and Brajendra Panda, "Investigating the Effect of an Attack on a Distributed Database," IEEE Workshop on Information Assurance, United States Military Academy, West Point, NY, 2006, p. 312.
4. M. D. Mustafa, B. Nathrah, M. H. Suzuri, M. T. Abu Osman, "Improving Data Availability using Hybrid Replication Technique in Peer-To-Peer Environments," Proceedings of the 18th International Conference on Advanced Information Networking and Applications, IEEE, 2004.
5. Weider D. Yu, Sunita Sharma, "A Mobile Database Design Methodology for Mobile Software Solutions," 31st Annual International Computer Software and Applications Conference (COMPSAC 2007).
6. Paul Ammann, Sushil Jajodia, Peng Liu, "Recovery from Malicious Transactions," IEEE Transactions on Knowledge and Data Engineering, Vol. 14, No. 5, September/October 2002.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 5 (September2012) PP: 10-21
Fuzzy Self-Adaptive PID Controller Design for Electric Heating
Furnace
Dr. K Babulu1, D. Kranthi Kumar2
1 Professor and HOD, Department of Electronics and Communication Engineering, Jawaharlal Nehru Technological University Kakinada (JNTUK), Kakinada, Andhra Pradesh, India.
2 Project Associate, Department of ECE, Jawaharlal Nehru Technological University Kakinada (JNTUK), Kakinada, Andhra Pradesh, India.
Abstract––This paper deals with the importance of fuzzy logic based self-adaptive PID controller in the application of
temperature process control of an electric heating furnace [1]. Generally, it is necessary to maintain the proper
temperature in the heating furnace. In common, PID controller is used as a process controller in the industries. But it is
very difficult to set the proper controller gains. Also because of nonlinear and large inertia characteristics of the
controller [2], often it doesn’t produce satisfactory results. Hence, the research is going on for finding proper methods for
overcoming these problems. Taking this aspect into consideration, this paper proposes methods to choose the optimum
values for controller gains (proportional, integral, and derivative), fuzzy logic based intelligent controller and also fuzzy
logic based self-adaptive PID controller for temperature process control. This paper comprises the comparison of
dynamic performance analysis of the conventional PID controller, the fuzzy based intelligent controller and the fuzzy based self-adaptive PID controller. The whole system is simulated using MATLAB/Simulink software, and the results show that the proposed fuzzy based self-adaptive PID controller [3] has the best dynamic performance, rapidity and good robustness.
Keywords––Electric heating furnace, PID tuning methods, Fuzzy based self-adaptive PID controller, MATLAB, Simulink.
I. INTRODUCTION
The term electric furnace refers to a system that uses electrical energy as the source of heat. Electric furnaces are used to shape solid materials into desired forms by heating them below their melting-point temperatures. Based on the process by which the electrical energy is converted into heat, electric furnaces are classified as electric resistance furnaces, electric induction furnaces, etc. In industries, electric furnaces are used for brazing, annealing, carburizing, forging, galvanizing, melting, hardening, enameling, sintering and tempering metals such as copper, steel and iron and alloys of magnesium.
Figure 1 shows the schematic diagram of an electric arc furnace. Three vertical rods inserted into the chamber act as electrodes. When current is passed through them, they produce an arc between them. The material to be heated (steel) is placed between the electrodes, touching the arc. The arc is maintained until the material reaches the desired temperature, and the molten steel can then be collected at the bottom of the chamber. There are two types of arcs: the direct arc and the indirect arc. The direct arc is produced between the electrodes and the charge, making the charge a part of the electric circuit. The indirect arc is produced between the electrodes only and heats the material indirectly. Open arc furnaces, DC furnaces, arc-resistance furnaces, etc. come under direct arc furnaces.
Figure.1 Electric arc furnace
In general, in industries the PID (Proportional + Integral + Derivative) controller is used for process control because of good characteristics like high reliability, good stability and a simple algorithm. However, PID control is a crisp control, and the setting of values for the gain parameters is quite a difficult and time-consuming task. Often, the response with a PID controller [5] has large peak overshoots. Many papers have proposed different tuning algorithms to reduce this problem, but it is still a challenging task to set the PID gains.
The PID controller is suitable only for linear systems with known mathematical models. But the temperature of an electric furnace is a non-linear, time-delay, time-varying process. Hence, conventional PID controllers cannot produce satisfactory results when used to control the temperature of the electric furnace.
To get rid of this problem, this paper introduces a fuzzy logic controller to control the temperature. But this still leaves a steady-state error. To reduce this error, the paper also introduces a self-adaptive PID controller based on fuzzy logic to get the best performance of the system. In the self-adaptive fuzzy controller, the PID gain parameters are tuned by the fuzzy controller.
The temperature process [1] of an electric furnace is a common controlled object in temperature control systems. It can be represented mathematically by a first-order transfer function with time delay, given by equation (1):
$G(s) = \dfrac{K}{Ts + 1}\, e^{-\tau s}$   (1)
Hence, the transfer function for the given electric arc furnace system is given by equation (2):
$G(s) = \dfrac{1}{11s + 1}\, e^{-1.8 s}$   (2)
where delay time τ = 1.8 sec, time constant T = 11 sec and static gain K = 1.
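The step response of the model in equation (2) can be checked numerically without MATLAB. The following sketch integrates the first-order lag with a simple Euler scheme and a buffer for the 1.8-second transport delay; the step size and simulation length are arbitrary choices.

import numpy as np

def fopdt_step(K=1.0, T=11.0, theta=1.8, dt=0.01, t_end=60.0):
    # Step response of K * exp(-theta*s) / (T*s + 1) by Euler integration with a delay buffer
    n = int(t_end / dt)
    delay_steps = int(theta / dt)
    u_hist = [0.0] * delay_steps          # transport-delay buffer for the input
    y = 0.0
    t_axis, y_axis = [], []
    for i in range(n):
        u = 1.0                           # unit step applied at t = 0
        u_delayed = u_hist.pop(0) if delay_steps else u
        u_hist.append(u)
        y += dt * (K * u_delayed - y) / T  # first-order lag dynamics
        t_axis.append(i * dt)
        y_axis.append(y)
    return np.array(t_axis), np.array(y_axis)

t, y = fopdt_step()
print(round(np.interp(1.8, t, y), 3), round(np.interp(1.8 + 11.0, t, y), 3))  # ~0 and ~0.63 (one time constant after the delay)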
II. PID CONTROLLER DESIGN WITH TUNING ALGORITHMS
A simple controller widely used in industries is PID controller. But, the selection of values for the PID gains is
always a tough task. Hence, in this paper, some algorithms are discussed to tune the PID gain parameters. They are given
below.
• Pessen's tuning algorithm
• Continuous cycling tuning algorithm
• Tyreus-Luyben tuning algorithm
• Damped cycling tuning algorithm
The procedural steps for applying the Pessen's, continuous cycling and Tyreus-Luyben tuning algorithms are as follows.
Step-1: Design the system with only proportional controller with unity feedback.
Step-2: Adjust the proportional gain value until the system exhibits the sustained oscillations.
Step-3: This gain value represents critical gain (KC) of the system. Note the time period of oscillations (TC). This time
represents the critical time period.
Step-4: From these KC and TC values, calculate PID parameter gains. [7, 15]
The procedural steps for applying damped cycling tuning algorithm are as follows.
Step-1: Design the system with only proportional controller with unity feedback.
Step-2: Make sure that the P-control is working with SP changes as well as with preset value changes.
Step-3: Put controller in automatic mode to obtain damped cycling.
Step-4: Apply a step change to the SP and observe how the preset value settles.
Step-5: Adjust the proportional gain of the system until damped oscillations occurs. This gain value represents KC
Step-6: From the response, note the values of first and second peak overshoots and then calculate PID gain parameters.
[15]
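Step 4 of the sustained-oscillation procedure converts the critical gain KC and critical period TC into PID settings using rule-specific constants. The sketch below uses commonly quoted constants for the Ziegler-Nichols (continuous cycling), Pessen and Tyreus-Luyben rules; the exact constants applied in this work may differ, and the KC, TC values shown are hypothetical.

def pid_from_ultimate(Kc, Tc, rule="ziegler-nichols"):
    # PID settings from critical gain Kc and critical period Tc (commonly quoted constants)
    rules = {
        "ziegler-nichols": (0.6 * Kc, 0.5 * Tc, 0.125 * Tc),   # classic continuous-cycling rule
        "pessen":          (0.7 * Kc, 0.4 * Tc, 0.15 * Tc),    # Pessen integral rule
        "tyreus-luyben":   (0.45 * Kc, 2.2 * Tc, Tc / 6.3),    # more conservative settings
    }
    Kp, Ti, Td = rules[rule]
    return {"KP": Kp, "TI": Ti, "KI": Kp / Ti, "TD": Td, "KD": Kp * Td}

# Hypothetical Kc and Tc obtained from the sustained-oscillation experiment of Steps 1-4
print(pid_from_ultimate(Kc=5.2, Tc=6.8, rule="tyreus-luyben"))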
Figure.2 Conventional PID controller design
Figure.2 shows the conventional PID controller design. The values of KP, KI and KD calculated from the algorithms are applied to the model and the responses are observed. Table.1 gives the values of the PID gain parameters calculated from all the tuning methods. The responses with the different algorithms are shown in Figures 12-16.
TABLE.1 PID GAIN PARAMETER VALUES
S. No. | Method Name        | KP    | TI    | KI    | TD    | KD
1      | Pessen's method    | 3.432 | 3.4   | 1.009 | 2.266 | 7.776
2      | Continuous cycling | 2.08  | 6.8   | 0.305 | 3.266 | 4.713
3      | Tyreus-Luyben      | 4.68  | 14.96 | 0.312 | 1.079 | 50.127
4      | Damped cycling     | 9.324 | 2.33  | 2.354 | 0.252 | 5.484
III. SYSTEM DESIGN WITH FUZZY LOGIC CONTROLLER
Fuzzy logic control [14] can be used even when the process is non-linear and time varying. The temperature control of the electric furnace is non-linear; hence, we can apply fuzzy control to the temperature process control. Fuzzy control has become one of the most successful methods to design a sophisticated control system. It fills the gap in the engineering tools which is left vacant by purely mathematical and purely intelligent approaches to system design. A fuzzy control system is one that is designed on fuzzy logic. A fuzzy logic system is a mathematical system that analyses analog input values in terms of logical values between 0 and 1, as shown in Figure.3.
Figure.3 Difference between crisp and fuzzy
Figure.4 shows the elements of a fuzzy logic system. Fuzzy logic defines the control logic at a linguistic level, but the input values (measured variables) are generally numerical, so fuzzification needs to be done. Fuzzification is the process of converting numerical values of the measured variables to linguistic values. The fuzzy inference structure converts the measured variables to command variables according to its structure.
Figure.4 Basic elements of a fuzzy logic system
Figure.5 shows the MATLAB model of the system with the fuzzy logic controller. It takes two inputs, namely the error signal and the change in error.
Figure.5 MATLAB/Simulink model for Fuzzy logic controller
[1] Fuzzy membership functions
A fuzzy set is described by its membership functions [16], which categorize the elements in the set, whether continuous or discrete. Membership functions can be represented using graphs, and there are some constraints regarding the usage of the shapes. The rules that are written to characterize the fuzziness are also fuzzy. The shape of the membership function to be considered is one of the important criteria. A membership function is represented by the Greek symbol "µ". The commonly used shapes for membership functions are triangular, Gaussian and trapezoidal. A Gaussian-shaped membership function is preferred for temperature process control, but for temperature control in the electric furnace it is better to consider triangular membership functions, which are similar to the Gaussian function, to make the computation relatively simple. Figure.6 shows the selection of membership functions for the inputs and outputs of the fuzzy logic controller. Figure.7 shows the input membership function editor for the fuzzy logic controller.
Figure.6 Selection inputs/outputs for designing fuzzy inference structure for fuzzy logic controller
Figure.7 Membership function editor for fuzzy logic controller
The basic fuzzy inference methods are Sugeno and Mamdani. Some defuzzification methods are middle of maximum, quality method, mean of maxima, first of maximum, semi-linear defuzzification, last of maximum, center of gravity, fuzzy clustering defuzzification, and adaptive integration. This paper uses the Mamdani inference method.
[2] Fuzzy inference structure
The steps performed by the fuzzy rule based inference structure are given as follows [13]
Fuzzification: It is a mapping from the observed input to fuzzy sets; the input data are matched against the conditions of the fuzzy rules. Therefore, fuzzification is a mapping into the unit hypercube [0, 1].
Inference Process: It is the decision-making logic which determines the fuzzy outputs corresponding to the fuzzified inputs, in accordance with the fuzzy rules.
Defuzzification: It produces a non-fuzzy output for applications that need a crisp output. This process is used to convert a fuzzy conclusion into a crisp value.
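A compact illustration of these three steps for a single input is sketched below, using triangular membership functions, Mamdani-style min-max inference over two sample rules and centre-of-gravity defuzzification. The term ranges and the two rules are simplified assumptions, not the 49-rule FIS of Table.3.

import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with feet at a and c and peak at b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Fuzzification: membership of a crisp error value in two illustrative terms
e = 1.2
mu_ze = tri(e, -2.0, 0.0, 2.0)        # degree to which E is "ZE"
mu_lp = tri(e, 0.0, 2.0, 4.0)         # degree to which E is "LP"

# Inference (Mamdani, min-max) over two sample rules:
#   if E is ZE then U is ZE;  if E is LP then U is LN
u = np.linspace(-4.0, 4.0, 801)
aggregated = np.maximum(np.minimum(mu_ze, tri(u, -2.0, 0.0, 2.0)),
                        np.minimum(mu_lp, tri(u, -4.0, -2.0, 0.0)))

# Defuzzification: centre of gravity of the aggregated output set
u_crisp = float((aggregated * u).sum() / aggregated.sum())
print(round(u_crisp, 3))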
[3] Fuzzy rules for developing FIS
Generally human beings take decisions which are like if-then rules in computer language. Suppose, it is forecasted
that the weather will be bad today and fine tomorrow then nobody wants to go out today and postpones the work to
tomorrow. Rules associate with ideas and relate one event to the other.
TABLE.2 IF-THEN STATEMENTS FOR FUZZY INFERENCE SYSTEM FOR FUZZY CONTROLLER
1. If E = HN and EC = HN Then U = HP
2. If E = HN and EC = MN Then U = HP
3. If E = HN and EC = LN Then U = HP
4. If E = HN and EC = ZE Then U = HP
….....
........
….....
47. If E = HP and EC = LP Then U = HN
48. If E = HP and EC = MP Then U = HN
49. If E = HP and EC = HP Then U = HN
The decision makings are replaced by fuzzy sets and rules by fuzzy rules. Table.2 shows some fuzzy rules written for the
fuzzy inference structure.
TABLE.3 FUZZY RULES FOR DEVELOPING FUZZY INFERENCE STRUCTURE (FIS) FOR FUZZY LOGIC CONTROL
Controller output U as a function of E (columns) and EC (rows):
EC \ E | HN | MN | LN | ZE | LP | MP | HP
HN     | HP | HP | HP | HP | MP | LP | ZE
MN     | HP | HP | MP | MP | LP | ZE | LN
LN     | HP | HP | MP | LP | ZE | LN | MN
ZE     | HP | MP | LP | ZE | LN | MN | HN
LP     | MP | MP | ZE | LN | MN | HN | HN
MP     | LP | ZE | LN | MN | MN | HN | HN
HP     | ZE | LN | MN | HN | HN | HN | HN
Where, HP - High Positive; MP - Medium Positive; LP - Low Positive; ZE - Zero Value; LN - Low Negative; MN - Medium Negative; HN - High Negative.
IV. SYSTEM DESIGN WITH FUZZY-PID CONTROLLER
The conventional PID controller [12] parameter gains are usually obtained by testing methods, which have low precision and take a long time for debugging. In recent years, several approaches have been proposed for PID design based on intelligent algorithms, mainly the fuzzy logic method. This method takes advantage of both conventional PID control and fuzzy logic control. Figure.8 shows the design of the system with the fuzzy-PID controller.
Figure.8 MATLAB/Simulink model with fuzzy-PID controller
Figure.9 shows the selection of inputs and outputs for the design of the FIS for the fuzzy-PID controller. In this design, the error and the change in error are considered as inputs, and the PID controller parameter gains (proportional, integral and derivative) are considered as the outputs of the fuzzy logic control. These PID parameter gains are given to the conventional PID control block. Figure.10 shows the membership function editor for the fuzzy-PID controller.
Figure.9 Selection inputs/outputs for designing fuzzy inference structure for fuzzy-PID controller
Figure.10 Membership function editor for fuzzy-PID controller
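The closed loop formed by the fuzzy block and the conventional PID block can be sketched as follows: at every sample, E and EC are fed to a fuzzy stage that returns gain corrections, these are added to base gains as in equations (6)-(8), and the resulting PID law drives the furnace model of equation (2). The fuzzy stage below is a smooth placeholder rather than the Table.5 rule base, and all gain values and step sizes are illustrative assumptions.

def fuzzy_corrections(e, ec):
    # Placeholder for the 49-rule FIS of Table.5: returns (dKP, dKI, dKD).
    # A real implementation would evaluate the rule base with the membership
    # functions of Figure.10; a smooth surrogate is used here for illustration.
    return 0.5 * e, 0.05 * e, 0.1 * ec

def simulate(setpoint=1.0, kp0=2.0, ki0=0.3, kd0=4.0, dt=0.05, steps=2000):
    T, K, theta = 11.0, 1.0, 1.8              # furnace model e^{-1.8s}/(11s+1)
    buf = [0.0] * int(theta / dt)             # transport-delay buffer on the controller output
    y, integ = 0.0, 0.0
    e_prev = setpoint - y
    for _ in range(steps):
        e = setpoint - y
        ec = (e - e_prev) / dt
        d_kp, d_ki, d_kd = fuzzy_corrections(e, ec)
        kp, ki, kd = kp0 + d_kp, ki0 + d_ki, kd0 + d_kd    # gain updates as in equations (6)-(8)
        integ += e * dt
        u = kp * e + ki * integ + kd * ec                  # PID law with the adapted gains
        buf.append(u)
        y += dt * (K * buf.pop(0) - y) / T                 # first-order lag driven by the delayed input
        e_prev = e
    return y

print(round(simulate(), 3))    # should end close to the setpoint of 1.0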
TABLE.4 IF-THEN STATEMENTS FOR FUZZY INFERENCE SYSTEM FUZZY-PID CONTROLLER
1) If (E is HN) and (EC is HN) then (KP is HP) (KI is HN) (KD is LP)
2) If (E is HN) and (EC is MN) then (KP is HP) (KI is HN) (KD is LP)
…
49) If (E is HP) and (EC is HP) then (KP is HN) (KI is HP) (KD is HP)
TABLE.5 FUZZY RULES FOR DEVELOPING FUZZY INFERENCE STRUCTURE (FIS) FOR FUZZY-PID CONTROLLER
Rules for KP (rows: E; columns: EC = HN, MN, LN, ZE, LP, MP, HP):
E = HN | HP | HP | MP | MP | LP | ZE | ZE
E = MN | HP | HP | MP | LP | LP | ZE | LN
E = LN | MP | MP | MP | LP | ZE | LN | LN
E = ZE | MP | MP | LP | ZE | LN | MN | MN
E = LP | LP | LP | ZE | LN | LN | MN | MN
E = MP | LP | ZE | LN | MN | MN | MN | HN
E = HP | ZE | ZE | MN | MN | MN | HN | HN

Rules for KI (rows: E; columns: EC = HN, MN, LN, ZE, LP, MP, HP):
E = HN | HN | HN | MN | MN | LN | ZE | ZE
E = MN | HN | HN | MN | LN | LN | ZE | ZE
E = LN | HN | MN | LN | LN | ZE | LP | LP
E = ZE | MN | MN | LN | ZE | LP | MP | MP
E = LP | MN | LN | ZE | LP | LP | MP | MP
E = MP | ZE | ZE | LP | LP | MP | HP | HP
E = HP | ZE | ZE | LP | MP | MP | HP | HP

Rules for KD (rows: E; columns: EC = HN, MN, LN, ZE, LP, MP, HP):
E = HN | LP | LN | HN | HN | HN | MN | LP
E = MN | LP | LN | HN | MN | MN | LN | ZE
E = LN | ZE | LN | MN | MN | LN | LN | ZE
E = ZE | ZE | LN | LN | LN | LN | LN | ZE
E = LP | ZE | ZE | ZE | ZE | ZE | ZE | ZE
E = MP | HP | LN | LP | LP | LP | LP | HP
E = HP | HP | MN | MP | MP | LP | LP | HP
The output variables and certainty outputs are given by equations (3)-(5); from those values, the PID gain parameters of the fuzzy-PID control are given by equations (6)-(8).
$K_P^{*} = f_1(E, Ec) = \dfrac{\sum_{j=1}^{n} \mu_j(E, Ec)\, K_{Pj}}{\sum_{j=1}^{n} \mu_j(E, Ec)}$   (3)
$K_I^{*} = f_2(E, Ec) = \dfrac{\sum_{j=1}^{n} \mu_j(E, Ec)\, K_{Ij}}{\sum_{j=1}^{n} \mu_j(E, Ec)}$   (4)
$K_D^{*} = f_3(E, Ec) = \dfrac{\sum_{j=1}^{n} \mu_j(E, Ec)\, K_{Dj}}{\sum_{j=1}^{n} \mu_j(E, Ec)}$   (5)
Figure.11 MATLAB/Simulink model for comparison of system responses with different controllers
Here, µ refers to the membership function of the fuzzy set and n to the number of single-point (singleton) sets. $K_{Pj}$, $K_{Ij}$ and $K_{Dj}$ are the output variables, and $K_P^{*}$, $K_I^{*}$ and $K_D^{*}$ are the certainty outputs. The PID gain parameters can be calculated as follows.
$K_P = K_P' + K_P^{*}$   (6)
$K_I = K_I' + K_I^{*}$   (7)
$K_D = K_D' + K_D^{*}$   (8)
V. SIMULATION RESULTS
Figure.12 Response of the system with Pessen’s PID controller
Figures 12-15 show the responses of the system with the conventional PID controller when it is tuned with the Pessen's, continuous cycling, Tyreus-Luyben and damped cycling tuning algorithms. Figure 16 shows the comparison of the responses of the system with the above tuning algorithms.
Figure 17 shows the response of the system with the fuzzy logic controller. It has some steady-state error, but this is eliminated by using the fuzzy-PID controller. Figure 18 shows the response of the system with the fuzzy-PID controller. The responses of these two methods are compared in Figure 19.
Table.6 shows the comparison of the transient response parameters when the system is designed with the different PID control strategies.
Figure.13 Response of the system with continuous cycling PID tuning method
Figure.14 Response of the system with Tyreus-luyben method
Figure.15 Response of system with damped cycling method
Figure.16 Comparison of responses with various tuning algorithms
Figure.17 Response of the system with fuzzy logic controller
Figure.18 Response of the system with Fuzzy-PID controller
Figure.19 Comparison of responses with fuzzy and fuzzy-PID controllers
TABLE.6 COMPARISON OF TIME DOMAIN PERFORMANCE PARAMETERS FOR VARIOUS CONTROLLING METHODS
Controller Used                      | Delay Time (Td) in Sec | Rise Time (Tr) in Sec | Settling Time (Ts) in Sec | Peak Overshoot (MP) in % | Transient Behavior | % Steady State Error (ESS)
1. Pessen's PID Controller           | 2.425 | 3.408 | 50.6   | 33.5         | Oscillatory      | 0
2. Damped Cycling PID Controller     | 3.07  | 7.828 | 51.6   | 8.6          | No Oscillatory   | 0
3. Continuous Cycling PID Controller | 3.208 | 7.644 | 67.13  | 13.5         | No Oscillatory   | 0
4. Tyreus-Luyben PID Controller      | 1.289 | 3.131 | 52.85  | 2.7          | Less Oscillatory | 0
5. Fuzzy Logic Controller            | 1.382 | 32.54 | 123.18 | No overshoot | Smooth           | 5.8
6. Fuzzy-PID Controller              | 21.55 | 45.85 | 101.53 | No overshoot | Smooth           | 0.2
VI. CONCLUSION
Hence, in this paper, firstly the conventional PID controller is used as the temperature process controller for the electric heating furnace system; later on, a fuzzy logic based intelligent controller is introduced for the same. The performance of both is evaluated against each other and, from Table 6, the following observations [8] can be made.
1) Even though the PID controller produces the response with lower delay time, rise time and settling time, it has severe oscillations with a very high peak overshoot of 2.7%. This degrades the system performance.
2) To suppress these severe oscillations, a fuzzy logic controller is proposed. From the results, it can be observed that this controller can effectively suppress the oscillations and produces a smooth response, but it gives a steady-state error of 5.8%.
3) Furthermore, to suppress the steady-state error, it is proposed to use the fuzzy-PID controller, where the PID gains are tuned using fuzzy logic concepts, and the results show that this design can effectively suppress the error to 0.2% while keeping the advantages of the fuzzy controller.
Hence, it is concluded that the conventional PID controller is not well suited for the control of non-linear processes like temperature, and the proposed fuzzy logic based controller design can be a preferable choice to achieve this control.
REFERENCES
1. Hongbo Xin, Xiaoyu Liu, Tinglei Huang, and Xiangjiao Tang, "Temperature Control System Based on Fuzzy Self-Adaptive PID Controller," in Proc. of IEEE International Conference, 2009.
2. Su Yuxin, Duan Baoyan, "A new nonlinear PID controller," Control and Decision, 2003, 18 (1): 126-128.
3. He Xiqin, Zhang Huaguang, "Theory and Application of Fuzzy Adaptive Control [M]," Beihang University Press, Beijing, 2002: 297-303.
4. Wang Li, Wang Yanping, Zang Haihe, "The resin abrasives products hardening furnace temperature control system design based on fuzzy self-tuning PID," International Conference on Computer, Mechatronics, Control and Electronic Engineering, Vol. 3, No. 5, pp. 458-461, 2010.
5. Liu Jinkun, "MATLAB Simulation of Advanced PID Control [M]," Electronic Industry Press, Beijing, 2006: 102-129.
6. Guo Yangkuan, Wang Zhenglin, "Process control engineering and simulation," Beijing: Electronic Industry Press, 2009.
7. Curtis D. Johnson, "Process Control Instrumentation Technology," Pearson Education, 2009.
8. A. Nagoor Kani, "Control Systems," First edition, RBA Publications, 2002.
9. Hao Zhengqing, Shi Xinmin, "Fuzzy Control and Its MATLAB Simulation [M]," Tsinghua University Press, Beijing Jiaotong University Press, Beijing, 2008: 89-126.
10. Zhuzhen Wang, Haiyan Wang, and Xiaodong Zhao, "Design of series leading correction PID controller," in Proc. of IEEE International Conference, 2009.
11. Gang Feng, "Analysis and Synthesis of Fuzzy Control Systems: A Model Based Approach, Automation and Control Engineering Series," CRC Press, Taylor, 2010.
12. N.S. Marimuthu and P. Melba Mary, "Design of self-tuning fuzzy logic controller for the control of an unknown industrial process," IET Control Theory Appl., 2009, Vol. 3, Iss. 4, pp. 428-436.
13. Reza Langari and John Yen, "Fuzzy Logic-Intelligence, Control, and Information," Pearson Education, 2007.
14. Wolfgang Altmann, "Practical Process Control for Engineers and Technicians," IDC Technologies, 2005.
15. S. N. Sivanandam, S. Sumathi, and S. N. Deepa, "Introduction to Fuzzy Logic using MATLAB," Springer-Verlag Berlin Heidelberg, 2007.
Dr. K. Babulu received the Ph.D. degree in VLSI from JNTUA, India, in the year 2010. At present he is working as Professor and Head of the Department of ECE, UCEK, JNTUK, Kakinada, India. His research interests include VLSI design, embedded system design, ASIC design and PLD design. He has published over 28 papers in various national and international journals and conferences.
D. Kranthi Kumar is a Project Associate pursuing his M.Tech degree with Instrumentation and Controls specialization at the Department of Electronics and Communication Engineering, UCEK, JNTUK, Kakinada.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 1(August 2012) PP: 24-31
Optimization of Process Parameters in Eli-Twist Yarn
P. Mageshkumar1, Dr. T. Ramachandran2
1 Assistant Professor, Department of Textile Technology, K.S.Rangasamy College of Technology, Tiruchencode.
2 Principal, Karpagam Institute of Technology, Coimbatore.
Abstract––Eli-Twist yarn, produced by the Suessen EliTe compact spinning technology, has in recent days become technically and economically superior to two-ply yarns. The quality of this Eli-Twist yarn is governed by many parameters. The distance between the roving strands and the negative pressure applied in the suction have substantial effects on the quality of this yarn. The feed roving distance and the negative pressure were varied in such a way that ten different combinations were produced. The effects of these two parameters on yarn fineness, Rkm, elongation %, U% and hairiness were studied. To assess the yarn quality, the concept of Cpk (Process Capability Index) could be used effectively. An attempt has been made to implement the concept of Cpk in the analysis of the properties of Eli-Twist yarn.
Keywords: Eli-Twist, Compact, Hairiness, U%, Cpk
I. INTRODUCTION
Eli-Twist is a patented process which economically produces a compact spin-twisted yarn with the EliTe Compact Set directly on the ring spinning frame. The Eli-Twist process permits the twisting triangle to be reduced to a degree that it has practically no influence on spinning stability. In this process, two rovings pass through the twin condensers at the entrance of the drafting system and are drafted separately at a certain distance. The drafted roving then leaves the front roller pair of the drafting system and passes through the compacting zone. In the condensing zone, the distance between the drafted fibre strands is reduced to about 5 mm by means of two suction slots in a V-shaped arrangement in the compacting zone. This close distance is possible because the two yarn components, when leaving the delivery roller pair, do not produce any fibre ends sticking out and overlapping each other. On the other hand, this close distance enables the twisting point to come closer to the delivery clamping line.
The Eli-Twist Process allows reduction in the twisting triangle to a degree that the restrictions mentioned above are
eliminated. During condensing, both components get closer and reach a minimum distance by means of two suction slots in
the condensing zone in a V shaped arrangement.
Figure 1 - Drafting Zone of Eli-Twist Spinning
Owing to condensation, the two components do not form a spinning triangle after leaving the condensing zone. Consequently, no fibres are found sticking out, spreading over to the other yarn component or remaining unembedded in the yarn. The twist running into the two yarn legs from the twisting point need not overcome any resistance and hence easily reaches the clamping line. As a result, the two fibre strands can be led very close together and the twisting point lies at a very small distance from the clamping line of the front roller pair.
The Eli-Twist yarn has a novel structure combining all the advantages of condensing and doubling. The yarn surface and appearance of Eli-Twist are comparable to a single compact yarn.
II. MATERIALS AND METHODS
The material used in the trials is 100% Indian cotton (variety: Shankar 6), processed under controlled conditions up to the spinning department. Table 1 shows the details of the process path and the important process parameters.
Table 1 - Spinning preparatory process parameters
Carding (LC 300 A V3): Delivery Hank 0.120; Delivery Speed 120 mpm; Production Rate 35.4 kg/hr
Pre Comber Draw Frame (LD-2): No. of Doublings 5; Delivery Hank 0.120; Delivery Speed 600 mpm
Uni Lap (E-5): No. of Doublings 22; Lap Weight 75
Comber (E-65): Delivery Hank 0.120; Comber Speed 450 npm
Finisher Draw Frame (D-40): Delivery Hank 0.120; No. of Doublings 8; Delivery Speed 400 mpm
Simplex (LFS 1660 V): Delivery Hank 0.9; Spindle Speed 980 rpm; TPI/TM 1.40/1.48
Ring Spinning (LR-6 AX - Elitwist Converted 1660 V): Yarn Count 40/2; Avg. Spindle Speed 15500 rpm; TPI 18.0
The process details and parameters given above are common to all the trial samples; changes have been made only in the spinning parameters. Levels have been decided for the two factors by considering the raw material, atmospheric conditions, economic viability, work methods and the possibilities of the mill.
Table 2 - Factors & Levels
Distance between the 2 roving strands: 5 levels (0 mm, 2 mm, 4 mm, 6 mm, 8 mm)
Negative pressure applied in the suction zone: 2 levels (22 mbar and 24 mbar)
The material from the speed frames has been taken for the trials. All the trials have been run on identical spindles in the ring frame at a constant speed. The distance between the roving strands is adjusted by means of specially designed roving guides, and the distance was measured using a normal measuring scale. Fig. 1 shows both the normal roving guide and the specially designed roving guide present in the drafting zone.
Table 3 - Design of experiments (distance between the 2 roving strands, suction pressure applied)
Trial 1: 0 mm, 22 mbar
Trial 2: 0 mm, 24 mbar
Trial 3: 2 mm, 22 mbar
Trial 4: 2 mm, 24 mbar
Trial 5: 4 mm, 22 mbar
Trial 6: 4 mm, 24 mbar
Trial 7: 6 mm, 22 mbar
Trial 8: 6 mm, 24 mbar
Trial 9: 8 mm, 22 mbar
Trial 10: 8 mm, 24 mbar
Figure 2 - Specially designed roving guide
Figure 3 - Measurement of Negative Pressure
III. TESTING PARAMETERS
The trial samples have been tested for count, count variation, lea strength, single yarn strength, elongation %, number of imperfections, hairiness and Classimat faults. All testing has been done using the latest testing instruments under standard atmospheric conditions, following the procedures recommended for each instrument.
Table 4 - Testing process parameters (Test Type | Testing Instrument | No. of Samples Tested | Testing Speed)
Count | Cascade | 30 / Trial | NA
Lea Strength | Statex | 30 / Trial | 25 m/min
Evenness | Uster Tester 5 | 30 / Trial | 800 m/min
Tensile Properties | Uster Tenso Jet 4 | 30 / Trial | 400 m/min
Hairiness | Zweigle | 30 / Trial | 500 m/min
Classimat | Loepfe Zenit | 100% samples | 1500 mpm (i.e., winding speed)
Test results of the samples, both at the ring frame cop stage and at the final cone stage, were tabulated. Apart from the laboratory test results, the online data on autoconer breakages and online Classimat were also included. Laboratory testing was conducted on recently developed Uster testing instruments.
All testing was conducted at standard atmospheric conditions, i.e. 65% relative humidity with a dry-bulb temperature of 74°F and a wet-bulb temperature of 66°F.
Test results have been analysed statistically using the concept of Cpk (Process Capability Index). The quality of yarn has been assessed by comparing the test results with the specifications and their variation. Each parameter has been given a certain weightage and the Cpk values have been calculated for each trial.
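As an illustration of the Cpk calculation used here, the following minimal MATLAB sketch computes Cpk for a single yarn property from a set of measurements; the sample values and specification limits are placeholders, not the mill data reported in the tables, and the weighted combination across properties is only indicated in the final comment.

% Process capability index (Cpk) for one yarn property - illustrative data only.
x   = [20.4 20.6 20.5 20.7 20.3 20.6 20.5 20.4];   % measured values (e.g. count)
USL = 21.0;                                         % upper specification limit
LSL = 20.0;                                         % lower specification limit
mu    = mean(x);
sigma = std(x);                                     % sample standard deviation
% Cpk is the smaller of the two one-sided capability ratios.
Cpk = min((USL - mu)/(3*sigma), (mu - LSL)/(3*sigma));
fprintf('Cpk = %.3f\n', Cpk);
% An overall Cpk for a trial can then be formed as a weighted mean of the
% per-property Cpk values, using weightages such as those in Table 5:
% overallCpk = sum(w .* cpkPerProperty) / sum(w);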
IV. RESULTS & DISCUSSION
Table 6 - Test results of trials 1, 2, 3, 4, 5
(All five trials: count 40/2, set TPI/TM 18.2/4.06, twist direction S, package tested: cone; distance between the two roving strands 0, 2, 4, 6 and 8 mm respectively; negative pressure applied in the suction zone 22 mbar. For each trial the table lists, against the requirement values, the mean and Cvb of Avg Count, Avg Rkm, Elongation %, U%, Thin -50%, Thick +50%, Neps +200%, Total Imperfections/km, Thin -30%, Thick +35%, Neps +140%, Hairiness, S3 and Total Classimat faults, together with the overall Cpk of each trial.)
Table 7 - Test results of trials 6, 7, 8, 9, 10
(All five trials: count 40/2, set TPI/TM 18.2/4.06, twist direction S, package tested: cone; distance between the two roving strands 0, 2, 4, 6 and 8 mm respectively; negative pressure applied in the suction zone 24 mbar. For each trial the table lists, against the requirement values, the mean and Cvb of Avg Count, Avg Rkm, Elongation %, U%, Thin -50%, Thick +50%, Neps +200%, Total Imperfections/km, Thin -30%, Thick +35%, Neps +140%, Hairiness, S3 and Total Classimat faults, together with the overall Cpk of each trial.)
V. EFFECT ON HAIRINESS
Figure 4 - Surface plot of distance between two roving strands & suction pressure on Uster hairiness index
At both levels of suction pressure, roving strand distances of 2 mm and 4 mm give lower hairiness, whereas both larger and smaller distances lead to higher hairiness. From the surface plot in Figure 4 it can be seen that the effect of roving distance and negative pressure on hairiness is clearly non-linear. At very small distances the fibre web cannot spread out evenly during drafting, while at larger distances the protruding fibre ends cannot be controlled effectively by the suction pressure. This may be the main reason for the higher hairiness at both extremes.
VI. EFFECT ON RKM
Figure 5 - Surface plot of distance between two roving strands & suction pressure on Rkm (single yarn strength)
The Rkm value does not change much with the distance between the roving strands or with the negative pressure applied. However, the coefficient of variation is higher at the higher negative pressure for all roving distances, and the 8 mm distance gives a comparatively lower coefficient of variation at both negative pressures.
The surface plot shows that at 8 mm distance and 24 mbar negative pressure the Rkm values are higher. The lower negative pressure (22 mbar) does not change the Rkm values much at any roving distance. This may be due to the weaker compaction effect at low pressure.
VII. EFFECT ON TOTAL IMPERFECTIONS
Figure 6 - Surface plot of distance between two roving strands & suction pressure on total imperfections
The surface plot clearly indicates that total imperfections reduce as the suction pressure and the distance between the roving strands increase, and that the roving distance plays a larger role in total imperfections than the suction pressure. Major defects such as Thin -50%, Thick +50% and Neps +200% increase at higher suction pressure, whereas minor defects such as Thin -30%, Thick +35% and Neps +140% reduce at higher negative pressure. The coefficient of variation increases at larger roving distances. This behaviour follows the single-yarn compact principle: as the roving distance increases, the spinning triangle also increases and the effective control of the edge fibres by the drafting rollers is reduced. The higher negative pressure may then not be utilised effectively for controlling the edge fibres.
VIII. EFFECT ON YARN UNEVENNESS
Figure 7 - Surface plot of distance between two roving strands & suction pressure on yarn unevenness
Yarn unevenness reduces as the distance between the roving strands increases. The unevenness drops sharply up to a roving distance of 6 mm, and no further significant reduction is seen beyond 6 mm. There is no significant effect of negative pressure on yarn unevenness.
The surface plot shows the relationship between yarn unevenness and the distance between the two roving strands: as the distance increases, unevenness reduces up to 6 mm and increases again after 8 mm. This may be because a larger distance leads to poorer control of the individual fibres by the drafting rollers, while a very small distance leaves insufficient space for the fibre web to spread out.
IX. IMPORTANCE GIVEN TO EACH PARAMETER IN Cpk
The table below shows the weightage of each parameter used in the calculation of Cpk. Apart from the average value, the variation of each parameter has also been considered in calculating Cpk. The weightage of each parameter will vary according to the end use of the product.
Table 5 - Weightage given to each test parameter
Avg Count: 5.0
Avg Rkm: 20.0
Elongation %: 15.0
U%: 5.0
Thick +50%: 5.0
Neps +200%: 15.0
Thin -30%: 2.5
Thick +35%: 10.0
Neps +140%: 10.0
Hairiness: 5.0
S3: 2.5
Total CMT Faults: 5.0
A model has been developed for predicting the yarn quality at various distances between the roving strands and negative pressures applied in the suction zone. With this model the yarn quality can be predicted without conducting further trials, which saves much of the time and money involved in such trials.
U% = 7.78 - 0.0593 × (distance between roving strands) - 0.0210 × (suction pressure)
Thick +50% = 9.40 - 0.700 × (distance between roving strands) - 0.100 × (suction pressure)
Neps +200% = 15.5 - 0.125 × (distance between roving strands) - 0.400 × (suction pressure)
Rkm = 27.0 + 0.0275 × (distance between roving strands) - 0.270 × (suction pressure)
Elongation % = -10.4 - 0.0375 × (distance between roving strands) + 0.730 × (suction pressure)
UTH = 4.99 + 0.0375 × (distance between roving strands) - 0.0200 × (suction pressure)
S3 = 1940 - 38.0 × (distance between roving strands) - 45.5 × (suction pressure)
CMT = 49.0 + 0.75 × (distance between roving strands) - 0.80 × (suction pressure)
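The prediction equations above can be evaluated directly for any candidate process setting. A minimal MATLAB sketch is given below; the variable names and the example setting are ours, while the coefficients are simply those reported above.

% Evaluate the reported prediction model for one process setting.
% d = distance between roving strands (mm), p = suction pressure (mbar).
d = 8;    % example: 8 mm strand spacing
p = 24;   % example: 24 mbar negative pressure
U       = 7.78 - 0.0593*d - 0.0210*p;   % U%
thick50 = 9.40 - 0.700*d  - 0.100*p;    % Thick +50%
neps200 = 15.5 - 0.125*d  - 0.400*p;    % Neps +200%
Rkm     = 27.0 + 0.0275*d - 0.270*p;    % single yarn strength (Rkm)
elong   = -10.4 - 0.0375*d + 0.730*p;   % Elongation %
UTH     = 4.99 + 0.0375*d - 0.0200*p;   % Uster hairiness index
S3      = 1940 - 38.0*d   - 45.5*p;     % S3 hairiness
CMT     = 49.0 + 0.75*d   - 0.80*p;     % total Classimat faults
fprintf('Predicted U%% = %.2f, Rkm = %.2f, Hairiness = %.2f\n', U, Rkm, UTH);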
X. CONCLUSION
As the quality of yarn is judged by parameters such as unevenness %, imperfection level, single yarn strength, elongation, Classimat faults and downstream process performance, Cpk is found to be the right tool for identifying the right process in spinning mills, since it considers all the yarn quality parameters according to the end use.
The above analysis shows that the overall yarn quality is best at 8 mm strand spacing with 24 mbar negative pressure. The model developed is also useful for predicting the quality of Eli-Twist yarn in the future.
Based on the end use, the weightage given to each yarn quality parameter can be fine-tuned, and as a result the optimum process parameters can be easily identified.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 2(September 2012) PP: 17-22
A Native Approach to Cell to Switch Assignment Using Firefly
Algorithm
Apoorva Sharma1, Shamsher Malik2
1M.Tech Student, UIET, MDU, Rohtak; 2ECE Department, UIET, MDU, Rohtak
Abstract––This paper deals with a design problem for a network of Personal Communication Services (PCS). The goal is
to assign cells to switches in a PCS Network (PCSN) in an optimum manner so as to minimize the total cost which
includes two types of cost, namely handoff cost between two adjacent cells, and cable cost between cells and switches. The
problem of assigning cells to switches in a cellular mobile network is an NP-hard optimization problem. The design is to
be optimized subject to the constraint that the call volume of each switch must not exceed its call handling capacity.
However, because of the time complexity of the problem, the solution procedures are usually heuristic when the number of cells and switches is large. In this paper we propose an assignment heuristic which is faster than the existing algorithms. The paper describes the Firefly Algorithm and proposes a way in which it can be adapted to assign the cells of a geographical area to the available switches so as to minimize the total cost of the assignment.
Keywords––PCS, cell to switch assignment, handoff, optimization, heuristics and combinatorial optimisation, PSO, FA.
I. INTRODUCTION
In 2008, Xin-She Yang [1] introduced a new meta-heuristic, the Firefly Algorithm, inspired by the social behaviour of fireflies. In its basic form, the algorithm is designed primarily for continuous problems. The aim of this work is to obtain an assignment of the cells in a geographical area to the available switches such that the total cost of the assignment is minimized. Over the last couple of decades there have been significant advances in the development of mobile communication systems. Even though significant improvements to the communication infrastructure have been attained in the personal communication service industry, the issue of assigning cells to switches so as to minimize the cabling and handoff costs in a reasonable time still remains challenging [2].
A handoff, caused by a subscriber moving from one cell to another, involves not only the modification of the subscriber's location in the database but also the execution of a fairly complicated protocol between switches S1 and S2 [3]. There are therefore two types of handoffs: one involves only one switch and the other involves two switches. The latter is relatively more difficult and costly than the former. Intuitively, cells with a high handoff frequency between them should be assigned to the same switch as far as possible to reduce the handoff cost. However, since the call handling capacity of each switch is limited, this must be treated as a constraint. Incorporating the cabling cost that occurs when a call is connected between a cell and a switch, we obtain an optimization problem, called the cell assignment problem [4], of assigning cells to switches such that the total cost, comprising the handoff cost between adjacent cells and the cabling cost between cells and switches, is minimized subject to the call handling capacities of the switches.
Here two algorithms are used to find this assignment:
1) PSO Algorithm
2) Firefly Algorithm
II. CELL ASSIGNMENT PROBLEM
In this section, the problem formulation and the mathematical modelling are discussed.
2.1 Problem Formulation
The assignment of cells to switches was first introduced by Arif Merchant and Bhaskar Sengupta [5] in 1995. The problem is NP-complete. Based on the ideas presented above, this paper states the problem as:
"Assign all the cells in a particular geographical area to the available number of switches in order to minimize the total cost, which is the sum of the cabling cost and the handoff cost, while maintaining two constraints."
The constraints for the problem are as follows:
1) Each cell must be assigned to exactly one switch.
2) Each switch has a limited capacity, and cells must be assigned such that the total load on a switch does not exceed its capacity.
Fig 1: mapping representation for CSA problem [6]
III. MATHEMATICAL MODELLING
The assignment of cells to switches is an NP-hard problem with exponential complexity in n cells and m switches [2].
• n – number of cells; m – number of switches
• hij – handoff cost between cell i and cell j
• cik – cabling cost between cell i and switch k
• dij – distance between cell i and switch (MSC) j
• Mk – call handling capacity of switch k
• λi – number of communications in cell i
• yij – 1 if cells i and j are assigned to the same switch, 0 otherwise
• xik – 1 if cell i is assigned to switch k, 0 otherwise
For all cases, the ranges of i, j and k are: 1 ≤ i ≤ n, 1 ≤ j ≤ n, 1 ≤ k ≤ m.
3.1 Formulation of constraints:
1) Each cell must be assigned to exactly one switch:
\sum_{k=1}^{m} x_{ik} = 1, \quad 1 \le i \le n \qquad \ldots (1)
2) Each switch has a limited capacity:
\sum_{i=1}^{n} \lambda_i x_{ik} \le M_k, \quad 1 \le k \le m \qquad \ldots (2)
3.2 Formulation of the Cost Function:
1) Total cabling cost: this is formulated as a function of the distance between the base station and the switch and of the number of calls that a cell can handle per unit time [7]. c_{ij}(\lambda_j), the cost of cabling per kilometre, is itself modelled as a function of the number of calls handled:
c_{ij} = A_{ij} + B_{ij} \lambda_j \qquad \ldots (3)
\sum_{j=1}^{m} c_{ij}(\lambda_j)\, d_{ij}\, x_{ij}, \quad \text{for } i = 1, 2, \ldots, n \qquad \ldots (4)
2) Total handoff cost: we consider two types of handoffs, one involving only one switch and the other involving two switches. A handoff between cells that belong to the same switch consumes much less network resource than one between cells that belong to two different switches:
\sum_{i=1}^{n} \sum_{j=1}^{n} h_{ij} (1 - y_{ij}) \qquad \ldots (5)
3) Total cost: the objective is to minimize the total cost, formed by the sum of the cabling and handoff costs. The objective function is:
\sum_{i=1}^{n} \sum_{j=1}^{m} c_{ij}(\lambda_j)\, d_{ij}\, x_{ij} + \sum_{i=1}^{n} \sum_{j=1}^{n} h_{ij} (1 - y_{ij}) \qquad \ldots (6)
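A compact MATLAB sketch of this objective and of constraints (1)-(2) is given below. The matrix names follow the notation above, but the function itself is our illustration (the cabling-cost matrix c is assumed to be precomputed, e.g. from equation (3)), not the authors' code.

function [total, feasible] = csaTotalCost(x, c, d, h, lambda, M)
% x      : n-by-m binary assignment matrix, x(i,k) = 1 if cell i is on switch k
% c, d   : n-by-m cabling-cost-per-km and distance matrices
% h      : n-by-n handoff cost matrix between cells
% lambda : n-by-1 call volume of each cell
% M      : m-by-1 call handling capacity of each switch
cabling = sum(sum(c .* d .* x));            % first term of eq. (6)
y       = x * x';                           % y(i,j) = 1 iff cells i and j share a switch
handoff = sum(sum(h .* (1 - y)));           % second term of eq. (6)
total   = cabling + handoff;
% Constraint (1): every cell on exactly one switch; (2): switch loads within capacity.
feasible = all(sum(x, 2) == 1) && all((lambda' * x)' <= M);
end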
IV. EXISTING METHODOLOGY
There are various methods available for assigning cells to switches, but the problem becomes complicated for large problem sizes, hence heuristics are needed to solve it. In this paper the Firefly Algorithm is proposed and the results are compared with PSO.
4.1 Particle Swarm Optimisation
The PSO method consists of a collection (called a swarm) of individual entities (called particles). Each particle represents a solution in the solution space; it iteratively evaluates the fitness of candidate solutions and remembers the location where it had its best success. The individual's best solution is called the local best (lbest). Each particle makes this information available to the other particles, which can thereby learn the global best solution (gbest) [8].
- The global best (gbest) is known to all particles and is immediately updated when a new best position is found by any particle in the swarm.
- The local best (lbest) is the best solution that the particle itself has seen.
Based on these gbest and lbest values, the velocity and the position of each particle are updated. The particle position and velocity update equations that govern PSO, in their simplest form, are given by [8]:
V_{t+1} = c_1 V_t + c_2 r_1 (lbest - x_t) + c_3 r_2 (gbest - x_t) \qquad \ldots (7)
x_{t+1} = x_t + V_{t+1} \qquad \ldots (8)
where c1, c2 and c3 are the constant weights for the previous velocity, the local knowledge and the global knowledge respectively, and r1 and r2 are two random variables in the range 0 to 1 [2]. The selection of coefficients in the velocity update equation affects the convergence and the ability of the swarm to find the optimum.
4.2 PSO Algorithm [2]
Step 1:
- Initialize the objective function which is to be minimized and the various user-defined constants.
- Initialize the number of particles.
- Set the output, i.e. the position of the maximum or minimum of the objective function.
Step 2: Initialize all the particles and their corresponding positions (solutions) in the solution space.
Step 3: Find the best particle, i.e. the one with the minimum solution (for minimization problems) or the maximum solution (for maximization problems). The solution of this particle is called the global best (gbest).
Step 4: For each particle, find the local best solution, i.e. the minimum solution up to that iteration. This local best solution of each particle is called lbest.
Step 5: Using lbest and gbest, update the velocity of each particle using equation (7) and then update the position of each particle using equation (8).
Step 6: Repeat steps 3 to 5 until the stopping criterion is met.
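A minimal MATLAB sketch of these steps is shown below, implementing the velocity and position updates of equations (7) and (8); the coefficients, the swarm size and the toy objective are illustrative only and are not the cell-assignment encoding used in the experiments.

% One run of basic PSO with the updates of equations (7) and (8).
f  = @(x) sum(x.^2, 2);          % toy objective to be minimised
p  = 30;  dim = 5;               % number of particles and problem dimension
c1 = 0.7; c2 = 1.5; c3 = 1.5;    % weights for previous velocity, local and global knowledge
x     = rand(p, dim);            % particle positions
v     = zeros(p, dim);           % particle velocities
lbest = x;  fl = f(x);           % local best positions and their fitness
[~, g] = min(fl);  gbest = x(g, :);
for it = 1:100
    r1 = rand(p, dim);  r2 = rand(p, dim);
    v = c1*v + c2*r1.*(lbest - x) + c3*r2.*(repmat(gbest, p, 1) - x);   % eq. (7)
    x = x + v;                                                          % eq. (8)
    fx = f(x);
    improve = fx < fl;                           % update local bests (step 4)
    lbest(improve, :) = x(improve, :);
    fl(improve) = fx(improve);
    [fg, g] = min(fl);  gbest = lbest(g, :);     % update global best (step 3)
end
fprintf('Best objective found: %.4g\n', fg);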
V. FIREFLY ALGORITHM
In this section the actual implementation of the firefly algorithm for the cell to switch assignment problem is explained [9]. As discussed above, each firefly in the solution space is a complete solution, independent of the other fireflies. The implementation starts by spreading a particular number of fireflies in the solution space. The initial positions of the fireflies may be random or fixed by the user. The initial solution is the initial assignment matrix of each firefly together with the cost of that assignment [10]. Each firefly in the solution space therefore has an assignment and a corresponding cost. Based on this cost, the firefly with the minimum cost is selected as the brightest firefly, and all other fireflies move towards it depending on the randomness of the fireflies, the distance between them and the absorption coefficient. This is the motion update of the fireflies; in the cell assignment problem the motion update means updating the elements of the assignment matrix. Thus the assignment matrix of each firefly is updated according to its cost.
For the new assignment the total cost is again calculated for each firefly, all the parameters such as the brightest firefly and the update value of each assignment matrix are updated, and this process is repeated until a stopping criterion is met [11]. Of all the cost values, the minimum is selected as the best value and the corresponding assignment is selected as the best assignment of the cells.
5.1 Implementation Details
The steps for the implementation of the firefly algorithm are as follows:
Step 1:
- Initialize the number of cells (n), switches (m) and fireflies (p) in the solution space.
- Initialize the positions of the cells and switches randomly in the search space.
- Calculate the distance between each cell and switch.
Step 2:
- Generate the assignment matrix (xij) for each firefly, where each element is between 0 and 1.
- The rows of the matrix represent switches and the columns represent cells.
Step 3:
- Obtain the solution matrix from the assignment matrix by setting the largest value of each column to 1 and all other values to 0.
Step 4:
- Calculate the total cost based on this solution matrix.
Step 5:
- On the basis of this new cost, the brightest firefly, which has the minimum cost for the assignment, is found.
- Update the positions of all other fireflies based on the attractiveness of the best firefly and on the distance and randomness of the fireflies.
- Update the position of the best firefly randomly.
Repeat steps 3 to 5 until the stopping criterion is met.
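A minimal MATLAB sketch of one pass through Steps 2-5 is given below. The problem sizes, the firefly parameters and the placeholder cost matrix C are illustrative; in the actual problem the cost of a solution matrix would be evaluated with the total cost of equation (6).

% One iteration of the firefly-based assignment (steps 2-5), illustrative only.
m = 3; n = 25; p = 20;             % switches, cells, fireflies
alpha = 1; gamma = 1; beta0 = 1;   % randomness, absorption coefficient, base attractiveness
C = rand(m, n);                    % placeholder cost per (switch, cell) assignment
X = rand(m, n, p);                 % step 2: one m-by-n assignment matrix per firefly
cost = zeros(p, 1);
for k = 1:p
    xk = X(:, :, k);
    [~, idx] = max(xk, [], 1);                 % step 3: largest value of each column -> 1
    sol = zeros(m, n);
    sol(sub2ind([m n], idx, 1:n)) = 1;
    cost(k) = sum(sum(C .* sol));              % step 4: cost of this assignment
end
[~, b] = min(cost);                            % step 5: brightest (minimum-cost) firefly
for k = 1:p                                    % move all other fireflies towards it
    if k == b, continue; end
    r    = norm(X(:, :, k) - X(:, :, b), 'fro');      % distance between fireflies
    beta = beta0 * exp(-gamma * r^2);                 % attractiveness
    X(:, :, k) = X(:, :, k) + beta * (X(:, :, b) - X(:, :, k)) + alpha * (rand(m, n) - 0.5);
end
X(:, :, b) = X(:, :, b) + alpha * (rand(m, n) - 0.5); % random walk of the best firefly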
VI. EXPERIMENTAL RESULTS AND ANALYSIS
All the experiments were carried out in MATLAB for various combinations of cells and switches for the two algorithms, namely the Firefly Algorithm (FA) and Particle Swarm Optimization (PSO). The parameters used for the firefly algorithm are: randomness α = 1, absorption coefficient γ = 1, brightness at source = 1.
The parameters used in the initialization of the problem are: handoff cost between two cells = 0 to 5,000 per hour, constants A and B used in the cabling cost = 1 and 0.001 respectively, call handling capacity of a switch = 98,000, and number of communications in a cell = 500 to 1000 per hour.
The comparison of total cost and CPU time for the two algorithms is given in Table 1 and in the graphs below.
Table 1 - Comparison Table
Switches, Cells | Cabling cost | Handoff cost | Total cost (FA) | Total cost (PSO) | CPU time (FA) | CPU time (PSO)
2, 25   | 116  | 1919   | 2036   | 2177    | 0.175  | 0.2655
2, 50   | 316  | 7723   | 8040   | 9070    | 0.1996 | 0.2843
2, 100  | 496  | 32840  | 33337  | 35339   | 0.2766 | 0.3472
2, 150  | 851  | 75953  | 76804  | 79461   | 0.3929 | 0.4456
2, 200  | 1277 | 135630 | 136910 | 140670  | 0.5763 | 0.67
2, 250  | 1440 | 212390 | 213530 | 2201330 | 0.7713 | 0.9318
3, 25   | 153  | 2317   | 2470   | 2989    | 0.1723 | 0.2644
3, 50   | 361  | 10728  | 11095  | 11436   | 0.1701 | 0.2579
3, 100  | 591  | 43593  | 44185  | 46553   | 0.2635 | 0.3231
3, 150  | 693  | 98487  | 99180  | 105210  | 1.0216 | 1.185
3, 200  | 977  | 181380 | 182360 | 187100  | 1.786  | 1.7541
3, 250  | 1536 | 281720 | 283260 | 291850  | 2.2914 | 2.5471
5, 25   | 144  | 2945   | 3089   | 3710    | 0.2138 | 0.2783
5, 50   | 337  | 12673  | 13010  | 13936   | 0.3079 | 0.3817
5, 100  | 507  | 54483  | 54990  | 56452   | 0.5929 | 0.633
5, 150  | 990  | 122020 | 123010 | 126100  | 1.1268 | 1.158
5, 200  | 964  | 219390 | 220360 | 224530  | 1.5611 | 1.5302
5, 250  | 1460 | 347340 | 348800 | 352100  | 2.3752 | 2.5678
10, 25  | 135  | 3493   | 3623   | 3980    | 0.2244 | 0.286
10, 50  | 346  | 14881  | 15227  | 15538   | 0.3364 | 0.38
10, 100 | 515  | 61153  | 61668  | 62940   | 0.66   | 0.7132
10, 150 | 837  | 140070 | 140910 | 142990  | 1.1328 | 1.1871
10, 200 | 1163 | 246800 | 247970 | 253440  | 1.9344 | 1.9383
10, 250 | 1535 | 391560 | 393100 | 394120  | 2.5006 | 2.5572
VII. CONCLUSION AND FUTURE WORK
The Firefly Algorithm has been successfully implemented. From Table 1 and the comparison graphs we can see that the costs obtained by the two algorithms are comparable, but the CPU time required by the Firefly Algorithm is less than that of PSO. As the number of iterations is increased, the probability of finding the lowest cost increases, but so does the CPU time spent in finding this minimum cost. Since the CPU time required to find the minimum cost is comparatively less for the Firefly Algorithm than for PSO, the Firefly Algorithm gives better performance as far as execution time is concerned. In future, this work can be extended to include switching and paging costs in the total cost for cell to switch assignment.
Cost Comparison Graphs:
The above graphs show the cost comparison for the proposed Firefly Algorithm and PSO.
CPU Time Comparison Graphs:
The above graphs show the CPU time comparison for the proposed Firefly Algorithm and PSO.
REFERENCES
1. Xin-She Yang, "Nature-Inspired Metaheuristic Algorithms", Luniver Press, 2008.
2. Siba K. Udgata, U. Anuradha, G. Pawan Kumar, Gauri K. Udgata, "Assignment of Cells to Switches in a Cellular Mobile Environment using Swarm Intelligence", IEEE International Conference on Information Technology, pp. 189-194, 2008.
3. A. Merchant, B. Sengupta, "Assignment of cells to switches in PCS networks", IEEE/ACM Transactions on Networking, Vol. 3 (5), 1995, pp. 521-526.
4. P. Bhattacharjee, D. Saha, A. Mukherjee, "Heuristics for Assignment of Cells to Switches in a PCSN: A Comparative Study", Intl. Conf. on Personal Wireless Communications, Jaipur, India, 1999, pp. 331-334.
5. Partha Sarathi Bhattacharjee, Debashis Saha, Amitava Mukherjee, "CALB: A New Cell to Switch Assignment Algorithm with Load Balancing in the Design of a Personal Communication Services Network (PCSN)", IEEE, pp. 264-268, 2000.
6. Cassilda Maria Ribeiro, Aníbal Tavares Azevedo, Rodolfo Florence Teixeira Jr., "Problem of Assignment Cells to Switches in a Cellular Mobile Network via Beam Search Method", WSEAS Transactions on Communications, 2010.
7. Shyong Jian Shyu, B.M.T. Lin, Tsung-Shen Hsiao, "Ant colony optimization for the cell assignment problem in PCS networks", March 2005.
8. James Kennedy, Russell Eberhart, "Particle Swarm Optimization", Proc. IEEE Int'l Conf. on Neural Networks (Perth, Australia), IEEE Service Center, Piscataway, NJ, pp. 1942-1948, 1995.
9. Xin-She Yang, "Firefly Algorithm For Multimodal Optimization", Luniver Press, 2008.
10. Xin-She Yang, "Metaheuristic optimization", December 2010.
11. K. Rajalakshmi, Prakash Kumar, Hima M Bindu, "Targeting Cells to Switch Assignment of Cellular Mobile Network using Heuristic Algorithm", Latest Trends on Computers (Volume I), 2011.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 3 (September2012) PP: 19-26
A Synthesizable Design of AMBA-AXI Protocol for SoC
Integration
M. Siva Prasad Reddy1, B. Babu Rajesh2, T V S Gowtham Prasad3
1,2,3Kuppam Engineering College, Kuppam
Abstract––System-on-a-Chip (SoC) design has become more and more complex, because components or IPs (Intellectual Property blocks) with different functions are integrated within a single chip. The challenge of integration is how to verify on-chip communication properties. Although traditional simulation-based on-chip bus protocol checking can determine whether bus signals obey the bus transaction behaviour, it still lacks a chip-level dynamic verification to assist hardware debugging. We propose a rule-based synthesizable AMBA AXI protocol checker. The AXI protocol checker contains 44 rules to check on-chip communication properties accurately. In the verification strategy, we use the Synopsys VIP (Verification IP) to verify the AXI protocol checker. In the experimental results, the chip cost of the AXI protocol checker is 70.7K gate counts and the critical path is 4.13 ns (about 242 MHz) under TSMC 0.18 µm CMOS 1P6M technology.
Keywords--- AMBA, AXI, CMOS, SoC, Verilog
I. INTRODUCTION
In recent years the semiconductor process technology has improved and the market requirements have increased, so more IPs with different functions are integrated within a chip. Each IP may have completed its own design and verification, yet the integration of all IPs may still not work together; the most common problems are bus protocol violations and transaction errors. The bus-based architecture has become the major integration methodology for implementing a SoC. The on-chip communication specification provides a standard interface that facilitates IP integration and lets each IP in a SoC communicate easily. The semiconductor process technology is changing at a fast pace: in 1971 the process technology was 10 µm, by 2010 it had been reduced to 32 nm, and Intel, Toshiba and Samsung have reported that the process technology will be further reduced to 10 nm in the future. With decreasing process geometry and increasing consumer design constraints, the SoC has evolved, where all the functional units of a system are modelled on a single chip.
To speed up SoC integration and promote IP reuse, several bus-based communication architecture standards have emerged over the past several years. Since the early 1990s, several on-chip bus-based communication architecture standards have been proposed to handle the communication needs of emerging SoC designs. Some of the popular standards include ARM Microcontroller Bus Architecture (AMBA) versions 2.0 and 3.0, IBM CoreConnect, STMicroelectronics STBus, Sonics SMART Interconnect, OpenCores Wishbone, and Altera Avalon [1]. On the other hand, designers integrate their own IPs with third-party IPs into the SoC to significantly reduce design cycles. However, the main issue is how to efficiently make sure that the IP functionality works correctly after integration into the corresponding bus architecture.
There are many verification works based on formal verification techniques [2]-[6]. The device under test (DUT) is modelled as a finite-state transition system, its properties are written using computation tree logic (CTL) [7], and verification tools are then used to verify the DUT's behaviour [8]-[10]. Although formal verification can verify the DUT's behaviour thoroughly, there are still unpredictable bugs at the chip level, which is what we want to verify.
The benefits of using rule-based design include improved observability, reduced debug time, improved integration through correct-usage checking, and improved communication through documentation; the final purpose is to increase design quality while reducing time-to-market and verification costs [19]. We anticipate that the AMBA AXI protocol checking technique will become more and more important in the future. Hence, we propose a synthesizable AMBA AXI protocol checker with an efficient verification mechanism based on a rule-checking methodology. There are 44 rules to check the AMBA AXI protocol, covering AXI master, slave and default-slave protocol issues.
II. AMBA AXI4 ARCHITECTURE
AMBA AXI4 [3] supports data transfers of up to 256 beats and unaligned data transfers using byte strobes. In an AMBA AXI4 system, 16 masters and 16 slaves are interfaced; each master and slave has its own 4-bit ID tag. An AMBA AXI4 system consists of masters, slaves and the bus (arbiters and decoders). The system uses five channels, namely the write address channel, write data channel, read data channel, read address channel, and write response channel. The AXI4 protocol supports the following mechanisms:
- Unaligned data transfers and updated write response requirements.
- Variable-length bursts, from 1 to 16 data transfers per burst.
- Burst transfer sizes of 8, 16, 32, 64, 128, 256, 512 or 1024 bits.
- Updated AWCACHE and ARCACHE signaling details.
Each transaction is burst-based: address and control information on the address channel describes the nature of the data to be transferred. The data is transferred between master and slave using a write data channel to the slave or a read data channel to the master. Table 1 [3] gives the information on the signals used in the complete design of the protocol.
The write operation starts when the master sends an address and control information on the write address channel, as shown in fig. 1. The master then sends each item of write data over the write data channel. The master keeps the VALID signal low until the write data is available. When the master sends the last data item, the WLAST signal goes HIGH.
Figure 1: Write address and data burst.
Figure 2: Read address and data burst.
TABLE 1: Signal descriptions of AMBA AXI4 protocol.
When the slave has accepted all the data items, it drives a write response signal BRESP[1:0] back to the master to indicate that the write transaction is complete. This signal indicates the status of the write transaction; the allowable responses are OKAY, EXOKAY, SLVERR and DECERR. After the read address appears on the address bus, the data transfer occurs on the read data channel, as shown in fig. 2. The slave keeps the VALID signal LOW until the read data is available. For the final data transfer of the burst, the slave asserts the RLAST signal to show that the last data item is being transferred. The RRESP[1:0] signal indicates the status of the read transfer; the allowable responses are OKAY, EXOKAY, SLVERR and DECERR.
The protocol supports 16 outstanding transactions, so each read and write transaction carries an ARID[3:0] or AWID[3:0] tag respectively. Once a read or write operation completes, the module produces an RID[3:0] or BID[3:0] tag. If the ID tags match, the module has responded to the right operation. ID tags are needed for every operation because for each transaction concatenated input values are passed to the module.
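For illustration only, the ID-matching rule described above can be modelled in software. The MATLAB sketch below is our simplified behavioural model, not the synthesizable Verilog checker of this work; it records the AWID of each issued write and flags any BID on the write response channel that does not correspond to an outstanding transaction, and the ID values are made up.

% Simplified behavioural model of the write-ID rule: every BID returned on the
% write response channel must match the AWID of an outstanding write transaction.
issuedAWID  = [11 3 7 11];         % AWIDs sent on the write address channel (example values)
returnedBID = [11 7 3 5];          % BIDs observed on the write response channel (example values)
outstanding = issuedAWID;          % IDs still waiting for a write response
for k = 1:numel(returnedBID)
    bid = returnedBID(k);
    hit = find(outstanding == bid, 1);
    if isempty(hit)
        fprintf('Protocol violation: BID %d has no outstanding AWID\n', bid);
    else
        outstanding(hit) = [];     % matched response retires the transaction
    end
end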
2.1 Comparison of AMBA AXI3 and AXI4:
The AMBA AXI3 protocol has separate address/control and data phases, while AXI4 has updated write response requirements and updated AWCACHE and ARCACHE signaling details. The AMBA AXI4 protocol supports burst lengths of up to 256 beats and Quality of Service (QoS) signaling, and adds information on ordering requirements and details of optional user signaling. AXI3 has the ability to issue multiple outstanding addresses and out-of-order transaction completion, whereas AXI4 removes locked transactions and write interleaving. One major update seen in AXI4 is that it includes information on the use of default signaling and discusses the interoperability of components, which cannot be seen in AXI3.
In this paper the features of AMBA AXI listed above are designed and verified. The rest of the paper is organized as follows: Section II describes the AMBA AXI4 architecture and its comparison with AXI3 and AHB, Section III discusses related work, Section IV describes the simulation set-up, Sections V and VI present the simulation results for the write and read operations, and Section VII concludes the paper.
Comparison of AMBA 3.0 AXI and AMBA 2.0 AHB:
- AXI: channel-based specification, with five separate channels for read address, read data, write address, write data and write response, enabling flexibility in implementation. AHB: explicit bus-based specification, with a single shared address bus and separate read and write data buses.
- AXI: burst mode requires transmitting the address of only the first data item on the bus. AHB: requires transmitting the address of every data item transmitted on the bus.
- AXI: out-of-order transaction completion provides native support for multiple outstanding transactions. AHB: the simpler SPLIT transaction scheme provides limited and rudimentary outstanding transaction completion.
- AXI: fixed burst mode for memory-mapped I/O peripherals. AHB: no fixed burst mode.
- AXI: advanced security and cache hint support. AHB: simple protection and cache hint support.
- AXI: native low-power clock control interface. AHB: no low-power interface.
- AXI: default bus matrix topology support. AHB: default hierarchical bus topology support.
III. RELATED WORK
A SoC houses many components and electronic modules, and a bus is necessary to interconnect them. Many buses have been introduced over time, among them AMBA [2] developed by ARM, CoreConnect [4] developed by IBM, and Wishbone [5] developed by Silicore Corporation. Different buses have their own properties, and the designer selects the bus best suited to the application. The AMBA bus was introduced by ARM Ltd in 1996 and is a registered trademark of ARM Ltd. The advanced system bus (ASB) and advanced peripheral bus (APB) were released in 1995, AHB in 1999, and AXI in 2003 [6]. The AMBA bus finds application in a wide area. The AMBA AXI bus is used to reduce the precharge time using a dynamic SDRAM access scheduler (DSAS) [7]; here the memory controller is capable of predicting future operations, so throughput is improved. An Efficient Bus Interface (EBI) [8] has been designed for mobile systems to reduce the memory that must be transferred to the IP through AMBA3 AXI. The advantages of introducing a Network-on-Chip (NoC) within a SoC, such as signal quality, dynamic routing and communication links, were discussed in [9]. To verify on-chip communication properties, a rule-based synthesizable AMBA AXI protocol checker [10] is used.
The design consists of:
1) Master
2) AMBA AXI4 Interconnect
   2.1) Arbiters
   2.2) Decoders
3) Slave
The master is connected to the interconnect using a slave interface and the slave is connected to the interconnect using a master interface, as shown in fig. 3. The AXI4 master is connected to the AXI4 slave interface port of the interconnect and the AXI4 slave is connected to the AXI4 master interface port of the interconnect. The parallel capability of this interconnect enables master M1 to access one slave at the same time as master M0 is accessing another.
Figure 4: AMBA AXI slave Read/Write block Diagram
IV. SIMULATION
Simulation is carried out in ModelSim / Quartus II [11], a trademark of Mentor Graphics, using Verilog [12] as the programming language. The test case is run for multiple operations and the waveforms are visible in the discovery visualization environment.
4.1 Simulation inputs:
To perform multiple write and read operations, the concatenated input format and the values passed to invoke a function are shown in figs. 6 and 7 respectively. Here the normal type of burst is passed to the module: the internal lock value is 0, the internal burst value is 1 and the internal port value is 1 for both read and write operations, which indicates that the burst is of normal type. For write operations the address locations passed to the module are 40, 12, 35, 42 and 102; for read operations 45, 12, 67 and 98.
4.2 Simulation outputs:
The simulation output signals generated are as follows:
- On the input side, the validating signals AWVALID/ARVALID are generated by the interconnect, giving information about the valid address and ID tags.
- For write operations, the BRESP[1:0] response signal generated by the slave indicates the status of the write transaction. The allowable responses are OKAY, EXOKAY, SLVERR and DECERR.
- For read operations, the RLAST signal is raised by the slave for every transaction, indicating the completion of the operation.
V. RESULTS
Simulation is carried out in the Modelsim tool and Verilog is used as the programming language.
5.1 Simulation result for write operation:
The AResetn signal is active low. The master drives the address, and the slave accepts it one cycle later. The write address values passed to the module are 40, 12, 35, 42 and 102, as shown in fig. 8, and the simulated result for a single write data operation is shown in fig. 9. The input AWID[3:0] value is 11 for address location 40, which is the same as the BID[3:0] signal for address location 40, the identification tag of the write response. The BID[3:0] value matching the AWID[3:0] value of the write transaction indicates that the slave is responding correctly. The BRESP[1:0] write response signal from the slave is 0, which indicates OKAY. The simulation result of the slave for a multiple write data operation is shown in fig. 10.
Fig 8: Simulation result of slave for write address operation
Fig 9: Simulation result of slave for single write data operation
Figure 10: Simulation result of slave for multiple write data operation
VI. SIMULATION RESULT FOR READ OPERATION
The read address values passed to the module are 45, 12, 67 and 98, as shown in fig. 11, and the simulated result for a single read data operation is shown in fig. 12.
Figure 11: Simulation result of slave for read address operation
The input ARID[3:0] value is 3 for address location 12, which is the same as the RID[3:0] signal for address location 12, the identification tag of the read response. The RID[3:0] and ARID[3:0] values match, which indicates that the slave has responded properly. The RLAST signal from the slave indicates the last transfer in a read burst. The simulation result of the slave for a multiple read data operation is shown in fig. 13.
Figure 12: Simulation result of slave for single read data operation
Figure 13: Simulation result of slave for multiple read data operation.
VII. CONCLUSION
AMBA AXI4 is a plug-and-play IP protocol released by ARM that defines both the bus specification and a technology-independent methodology for designing, implementing and testing customized high-integration embedded interfaces. The data to be read from or written to the slave is assumed to be given by the master and is read or written to a particular address location of the slave through the decoder. In this work, the slave was modelled in Verilog with an operating frequency of 100 MHz and the simulation results were shown in the Modelsim tool. A single read operation consumed 160 ns and a single write operation 565 ns.
REFERENCES
1. Shaila S Math, Manjula R B, "Survey of system on chip buses based on industry standards", Conference on Evolutionary Trends in Information Technology (CETIT), Belgaum, Karnataka, India, pp. 52, May 2011.
2. ARM, AMBA Specifications (Rev 2.0). [Online]. Available at http://www.arm.com.
3. ARM, AMBA AXI Protocol Specification (Rev 2.0). [Online]. Available at http://www.arm.com, March 2010.
4. IBM, CoreConnect bus architecture, IBM Microelectronics. [Online]. Available at http://www.ibm.com/chips/products/coreconnect.
5. Silicore Corporation, Wishbone system-on-chip (SoC) interconnection architecture for portable IP cores.
6. ARM, AMBA AXI protocol specifications. Available at http://www.arm.com, 2003.
7. Jun Zheng, Kang Sun, Xuezeng Pan, and Lingdi Ping, "Design of a Dynamic Memory Access Scheduler", IEEE, Vol 7, pp. 20-23, 2007.
8. Na Ra Yang, Gilsang Yoon, Jeonghwan Lee, Intae Hwang, Cheol Hong Kim, Sung Woo Chung and Jong Myon Kim, "Improving the System-on-a-Chip Performance for Mobile Systems by Using Efficient Bus Interface", International Conference on Communications and Mobile Computing, Vol 4, pp. 606-608, March 2009.
9. Bruce Mathewson, "The Evolution of SOC Interconnect and How NOC Fits Within It", DAC 2010, California, USA, Vol 6, pp. 312-313, June 2010.
10. Chien-Hung Chen, Jiun-Cheng Ju, and Ing-Jer Huang, "A Synthesizable AXI Protocol Checker for SoC Integration", ISOCC, Vol 8, pp. 103-106, 2010.
11. Synopsys, VCS / VCSi User Guide, Version 10.3. [Online]. Available at www.synopsys.com, 2005.
12. Samir Palnitkar, Verilog HDL: A Guide to Digital Design and Synthesis, 2nd ed., Prentice Hall PTR, 2003.
Mr. M. Siva Prasad Reddy, Assistant Professor, Dept of ECE, Kuppam Engineering College, Kuppam
received B.Tech in Electronics and Communication Engineering from Mekapati Rajamohan Reddy
Institute of Technology and Science, Udayagiri, Nellore and received M.Tech from Sree Venkateswara
College of Engineering and Technology, Chittoor.
Mr. B.Babu Rajesh, Assistant Professor, Dept of ECE, Kuppam Engineering College, Kuppam received
B.Tech in Electronics and Communication Engineering from PBR VITS, Kavali, Nellore and received
M.Tech from Sree Venkateswara College of Engineering and Technology, Chittoor.
Mr. T V S Gowtham Prasad, Associate Professor, Dept of ECE, Kuppam Engineering College, Kuppam, received B.Tech in Electronics and Communication Engineering from Sree Vidyanikethan Engineering College, A.rangampet, Tirupati, and M.Tech from S V University College of Engineering, Tirupati. He is pursuing a Ph.D from JNTU, Anantapur in the field of signal processing. His areas of interest are Digital Signal Processing, Array Signal Processing, Image Processing, Video Surveillance, Embedded Systems and Digital Communications.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September 2012) PP: 15-18
Segmentation of Cerebral Tumors Using Coactive Transform
Approach
Mrs. K. SelvaBhuvaneswari1, Dr. P. Geetha2
1Research Scholar, Dept. of CSE, College of Engineering Guindy
2Assistant Professor, Dept. of CSE, College of Engineering Guindy
Abstract––The main topic of this work is to segment brain tumors using a hybrid approach. For tumor segmentation, a coactive approach combining the wavelet and watershed transforms is proposed. If the watershed algorithm alone is used for segmentation, over-segmentation (too many clusters) is obtained. To solve this, the wavelet transform is used to produce initial lower-resolution images, the watershed algorithm is applied to segment the initial image, and then, using the inverse wavelet transform, the segmented image is projected back up to a higher resolution. Even when MR images are nearly noise free, the wavelet decomposition, which involves a low-pass filter, decreases whatever minute noise is present in the image and in turn leads to a robust segmentation. The results demonstrate that combining the wavelet and watershed transforms can yield high-accuracy segmentation for tumor detection.
Keywords––MR image, Region merging, Segmentation, Wavelet, Watershed transform.
I. INTRODUCTION
A brain tumor is one of the most deadly and intractable diseases. A brain tumor can be defined as a disease in which cells grow uncontrollably in the brain. Brain tumors are basically of two types:
1) Benign tumors
2) Malignant tumors
Benign tumors do not have the ability to spread beyond the brain itself. Benign tumors in the brain show limited growth and often do not need to be treated, but they can create problems because of their location and then have to be treated as early as possible.
A malignant tumor is the actual brain cancer. These tumors can spread rapidly, even outside the brain. Malignant tumors are often left almost untreated because the growth is so fast that it becomes too late for the surgeon to control or operate on them. Brain malignancies are again of two types:
i) Primary brain cancer, originating in the brain.
ii) Secondary or metastatic brain cancer, which spreads to the brain from another site in the body.
Tumors can be benign or malignant. Imaging plays a central role in the diagnosis and treatment planning of brain tumors. Tumor volume is an important diagnostic indicator in treatment planning and in the assessment of results, and the measurement of brain tumor volume can assist tumor staging for effective surgical treatment planning. Imaging of the tumors can be done by CT scan, ultrasound, MRI, etc. MR imaging is the best method owing to its higher resolution (~100 microns). The methods used to segment brain tumors include snake segmentation, level-set segmentation, watershed segmentation and region-growing segmentation.
Watershed segmentation [1] is preferred for its wide range of applications and automatic features. Preprocessing experiments are carried out to find which type of filtering is more beneficial; this reduces the effect of speckle and preserves the tumor edges, thereby providing the foundation for a successful segmentation. The desired tumor area is selected from the segmented image to calculate the volume. MR imaging is currently the method of choice for early detection of brain tumors [2]. However, the interpretation of MRI is largely based on the radiologist's opinion. Computer-aided detection systems can now assist in the detection of suspicious brain lesions and masses. The task of manually segmenting brain tumors from MR images is generally time consuming and difficult. An automated segmentation method is desirable because it reduces the load on the operator and generates satisfactory results.
II. OVERVIEW
Basically, segmentation deals with dividing an image into distinct regions; in effect each region corresponds to an object. There are many approaches to image segmentation, such as classification of edges or regions. Mathematical morphology (MM) is a powerful tool for image segmentation. The watershed algorithm is based on MM and is a useful tool for image segmentation, but it is very sensitive to noise and leads to over-segmentation. In this work the watershed algorithm is used for image segmentation [3]. A multi-resolution technique based on the wavelet transform is applied to reduce the over-segmentation caused by the watershed algorithm [4]. Using this method, the noise and the small details are removed from the image and only large objects remain. This idea has many advantages in the segmentation of brain tumor MR images, which greatly helps the classification between malignant and benign tumors. In this paper a collective use of the wavelet transform and the watershed transform is proposed. To do this, first the wavelet transform is used for denoising, which produces four images: an approximation image and detail images. The Sobel operator is then applied to estimate the edges, additional (weak) edges are eliminated by a threshold, and the initial segmentation image is obtained by applying the watershed transform. To reach high resolution in the projected segmented image, the inverse wavelet transform can be applied repeatedly until the segmented image has the same resolution as the initial image.
III. METHODOLOGY
The block diagram of the algorithm is presented in figure 1. In the first step the wavelet transform is used to produce the approximation and detail images; then, using a Sobel mask, the gradient of the approximation image is obtained and additional edges are eliminated by a threshold; next the watershed transform is applied and the segmented image is projected back to high resolution by the inverse wavelet transform. Region merging is applied in the last phase and cropping of the tumor is done.
Input Image → Image Approximation (Wavelet) → Global Thresholding → Watershed Segmentation → Inverse Wavelet (HR Image) → Region Merging → Segmented Image
Fig 1. Overall system architecture
Step 1: Wavelet Transform
The 1-D DWT can be extended to a 2-D transform using separable wavelet filters. With separable filters, the 2-D transform can be computed by applying a 1-D transform to all the rows of the input and then repeating it on all of the columns. When a one-level 2-D DWT is applied to an image, four sets of transform coefficients are created. As depicted in Figure 2(c), the four sets are LL, HL, LH and HH, where the first letter corresponds to applying either a low-pass or a high-pass filter to the rows, and the second letter refers to the filter applied to the columns.
Fig. 2 Block diagram of the DWT: (a) original image; (b) output image after the first 1-D transform applied to the rows; (c) output image after the second 1-D transform applied to the columns.
SAMPLE CODING:
The two-dimensional DWT (2D-DWT) converts images from the spatial domain to the wavelet domain. At each level of the wavelet decomposition, each column of the image is first transformed using a 1-D vertical analysis filter bank; the same filter bank is then applied horizontally to each row of the filtered and subsampled data. Figure 3 depicts how the wavelet transform is applied to an MRI brain image by the following code:
% One-level 2-D DWT of the input MRI image using the Haar wavelet.
input_im = handles.input_im;                  % image previously loaded into the GUI handles
[LL, LH, HL, HH] = dwt2(input_im, 'haar');    % approximation (LL) and detail (LH, HL, HH) sub-bands
Dec = [LL LH; HL HH];                         % tile the four sub-bands for display
axes(handles.axes2);                          % select the display axes of the GUI
imshow(Dec, []);                              % show the decomposition with automatic scaling
title('DWT Image');
handles.LL = LL;                              % store the sub-bands for the later steps
handles.LH = LH;
handles.HL = HL;
handles.HH = HH;
handles.Dec = Dec;
helpdlg('Process Completed');
The wavelet transform can describe an image at different scales, and due to the low-pass filter in the wavelet, the noise magnitude is reduced. Before using the wavelet, the wavelet function should be chosen. Here we used the Haar wavelet, because it has small computational complexity (linear with respect to the size of the input image) [7]. By applying the wavelet to an image, four images are produced, each with half the size of the original image; they are called HH, HL, LH and LL. The first and second letters correspond to the horizontal and vertical directions respectively, and the letters H and L represent high and low frequency respectively (Jung, 2007). Figure 3 demonstrates the output of this step.
Fig 3. (a) LL, LH, HL, HH sets of the wavelet transform; (b) initial, band and final output of the wavelet transform
Step 2: Edge Detection and Removal of Additional edge
One of the most fundamental segmentation techniques is edge detection [1]. There are many methods for edge detection; here the approximation image is convolved with the Sobel mask to estimate its gradient, and the weak (additional) edges are then removed by a global threshold.
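As a hedged illustration of this step, the sketch below (variable names are assumptions carried over from the sample code above) convolves the approximation sub-band with the Sobel masks and suppresses the weak edges with a global threshold:
% Gradient of the approximation image by Sobel convolution (a minimal sketch).
LL = handles.LL;                                       % approximation sub-band from Step 1
gx = conv2(double(LL), fspecial('sobel')', 'same');    % Sobel response in one direction
gy = conv2(double(LL), fspecial('sobel'),  'same');    % Sobel response in the orthogonal direction
grad  = sqrt(gx.^2 + gy.^2);                           % gradient magnitude
thr   = graythresh(mat2gray(grad));                    % global (Otsu) threshold
gradT = grad .* (mat2gray(grad) > thr);                % eliminate the additional (weak) edges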
Step 3: Watershed Transform
In the next step, by applying the watershed transform [5][6], an initial segmentation at the lowest resolution is obtained. Figure 4 shows the output of the watershed transform.
Fig 4.Output of Watershed Transform
Fig 5. High Resolution Image.
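A minimal sketch of Step 3, assuming gradT is the thresholded gradient produced in the previous sketch:
% Watershed transform on the thresholded gradient (initial low-resolution segmentation).
L = watershed(gradT);                                  % label matrix; 0 marks watershed ridges
rgbLabels = label2rgb(L, 'jet', 'w', 'shuffle');       % color-coded regions for inspection
figure, imshow(rgbLabels); title('Initial watershed segmentation');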
Step 4: Low Resolution to High Resolution Segmented Image
The segmented image has a low resolution with respect to the original image. By applying the inverse wavelet transform and using the detail images, a higher-resolution image is obtained from the segmented image [8]. By repeating this step, the segmented image and the original image will have the same resolution. It should be noted that before using the inverse wavelet, only the edge information in the detail images should be kept [9]. See Figure 6 for the original and noisy images. As shown in this figure, there are some pixels which belong to no region; these are lost pixels. In the next step, we use an approach for solving this problem.
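A hedged sketch of one projection step is given below, reusing the sub-bands stored in the earlier sample code and the label image L from the watershed step; keeping only the edge information in the detail images follows the description above:
% Project the low-resolution segmented image to the next finer resolution (one level).
segLow = double(L);                                        % segmented image at low resolution
edgeLH = handles.LH .* edge(handles.LH, 'sobel');          % keep only edge information
edgeHL = handles.HL .* edge(handles.HL, 'sobel');          % in the detail sub-bands
edgeHH = handles.HH .* edge(handles.HH, 'sobel');
segHigh = idwt2(segLow, edgeLH, edgeHL, edgeHH, 'haar');   % higher-resolution segmented image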
Step 5: Finding the Lost Pixel
To assign the lost pixels, the intensity of each lost pixel is compared to its eight non-lost neighbouring pixels, and the intensity difference between the lost pixel and each non-lost neighbour is computed [10]. The lost pixel is assigned to the region of the neighbour with the minimum intensity difference. By repeating steps 4 and 5, the segmented image reaches the same resolution as the original image.
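A minimal sketch of this assignment rule is shown below; L is assumed to be the label image at the current resolution (0 marks lost pixels) and I the intensity image of the same size, both names being assumptions rather than variables from the paper:
% Assign each lost pixel to the neighbouring region with the minimum intensity difference.
[rows, cols] = size(L);
[lostR, lostC] = find(L == 0);                         % coordinates of the lost pixels
for k = 1:numel(lostR)
    r = lostR(k);  c = lostC(k);
    bestLabel = 0;  bestDiff = inf;
    for dr = -1:1
        for dc = -1:1
            nr = r + dr;  nc = c + dc;
            if nr < 1 || nr > rows || nc < 1 || nc > cols, continue; end
            if L(nr, nc) == 0, continue; end           % skip other lost pixels
            d = abs(double(I(r, c)) - double(I(nr, nc)));
            if d < bestDiff                            % neighbour with minimum difference so far
                bestDiff = d;
                bestLabel = L(nr, nc);
            end
        end
    end
    L(r, c) = bestLabel;                               % assign the lost pixel to that region
end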
Step 6: Region Merging
In order to further reduce the number of regions in the high-resolution image, region merging is used [12]: if the intensity difference between two adjacent regions is smaller than a threshold, the regions are combined, which reduces the number of regions.
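A hedged sketch of this merging rule follows, assuming L is a label image (e.g. from the watershed step), I the intensity image, and thr a user-chosen intensity threshold; none of these names or values come from the paper:
% Merge adjacent regions whose mean-intensity difference is below a threshold.
thr = 10;                                               % assumed threshold
stats = regionprops(L, I, 'MeanIntensity');
meanInt = [stats.MeanIntensity];
merged = true;
while merged
    merged = false;
    for lbl = 1:max(L(:))
        mask = (L == lbl);
        if ~any(mask(:)), continue; end                 % label already merged away
        ring  = imdilate(mask, ones(3)) & ~mask;        % 8-connected ring around the region
        neigh = unique(L(ring));
        neigh(neigh == 0 | neigh == lbl) = [];
        for n = neigh'
            if abs(meanInt(lbl) - meanInt(n)) < thr
                L(L == n) = lbl;                        % merge region n into region lbl
                meanInt(lbl) = mean(I(L == lbl));       % update the merged region's mean
                merged = true;
            end
        end
    end
end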
Step 7: Image Cropping:
Cropping is the process of selecting the desired region from an image for further processing [11]. The cropped image shows the desired tumor portion. The cropped tumor set is identified from the regional maxima image obtained through the watershed algorithm, and this image is used to calculate the tumor volume for further analysis.
Fig. 6 Cropped tumor image for area calculation
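A minimal sketch of the cropping and area calculation, under the assumption that tumorMask is the binary mask of the region identified as tumor and I is the corresponding intensity image:
% Crop the tumor region and compute its area in pixels.
stats = regionprops(tumorMask, 'Area', 'BoundingBox');    % assume a single connected tumor region
areaPixels   = stats(1).Area;                             % tumor area in pixels for further analysis
croppedTumor = imcrop(I, stats(1).BoundingBox);           % crop the tumor portion from the image
figure, imshow(croppedTumor, []); title('Cropped tumor');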
IV. CONCLUSION & FUTURE WORK
This type of segmentation can be used to detect tumors early and provides improved results with the help of the coactive approach. Semantics can be incorporated in the region merging phase. This work enables detection of the suspicious region; future work includes calculating the area and volume of the tumor and storing the output in a database, so that it can be matched with pre-stored samples and tumor detection according to the symptoms can be done in an improved manner. The future work also includes integration with the concept of ontology, which can be used for better and more accurate results.
REFERENCES
1. Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, Digital Image Processing Using MATLAB, Second Edition, Pearson Education, 2011.
2. R. Rajeswari, P. Anandhakumar, Segmentation and Identification of Brain Tumor MRI Image with Radix4 FFT Techniques, European Journal of Scientific Research, Vol. 52, No. 1 (2011), pp. 100-109.
3. G. M. N. R. Gajanayake, R. D. Yapa and B. Hewawithana, Comparison of Standard Image Segmentation Methods for Segmentation of Brain Tumors from 2D MR Images.
4. R. B. Dubey, M. Hanmandlu, S. K. Gupta and S. K. Gupta, Region Growing for MRI Brain Tumor Volume Analysis, Indian Journal of Science and Technology, Vol. 2, No. 9 (Sep 2009).
5. Malik Sikandar Hayat Khiyal, Aihab Khan, and Amna Bibi, Modified Watershed Algorithm for Segmentation of 2D Images, Issues in Informing Science and Information Technology, Volume 6, 2009.
6. Yifei Zhang, Shuang Wu, Ge Yu, Daling Wang, A Hybrid Image Segmentation Approach Using Watershed Transform and FCM, IEEE Fourth International Conference on FSKD, 2007.
7. Jun Zhang, Jiulun Fan, Medical Image Segmentation Based on Wavelet Transformation and Watershed Algorithm, IEEE Proceedings of the International Conference on Information Acquisition, pp. 484-489, August 20-23, 2006, Weihai, Shandong, China.
8. Jiang Liu, Tze-Yun Leong, Kin Ban Chee, Boon Pin Tan, A Set-based Hybrid Approach (SHA) for MRI Segmentation, ICARCV, pp. 1-6, 2006.
9. Ataollah Haddadi, Mahmod R. Sahebi, Mohammad J. Valadan Zoej and Ali Mohammadzadeh, Image Segmentation Using Wavelet and Watershed Transform.
10. Hua Li, Anthony Yezzi, A Hybrid Medical Image Segmentation Approach Based on Dual-Front Evolution Model, IEEE Conference Proceedings, 2005.
11. Zhang Xiang, Zhang Dazhi, Tian Jinwen and Liu Jian, A Hybrid Method for 3D Segmentation of MRI Brain Images, IEEE Proceedings, pp. 608-612, 2002.
12. Kostas Haris, Serafim N. Efstratiadis, Nicos Maglaveras, and Aggelos K. Katsaggelos, Hybrid Image Segmentation Using Watersheds and Fast Region Merging, IEEE Transactions on Image Processing, Vol. 7, No. 12, December 1998.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 5 (September2012) PP: 22-25
Design of a Ploughing Vehicle Using Microcontroller
P. Dileep Kumar1, P. Ajay Kumar Reddy2, K. Bhaskar Reddy3, M. Chandra Sekhar Reddy4, E. Venkataramana5
1,2,3,4,5
(Assistant Professor, Department of ECE, Kuppam Engineering College/JNTUA/INDIA)
Abstract––In today's technically advanced world, autonomous systems are rapidly gaining popularity. The goal of this project was to develop a ploughing vehicle, much like the tractors used to plough a field, that can be controlled automatically or manually through a PC. The vehicle integrates a wireless camera so as to provide simultaneous data acquisition and data transmission to the remote user through an RF transceiver.
Keywords––Wireless camera, AV Receiver, Motors, Microcontroller, RF Transceiver
I. INTRODUCTION
Autonomous vehicles have numerous applications; for example, they can be used for object detection and avoidance. Ploughing is a method used to loosen the soil on the surface and bring fresh nutrients to the top. Before each crop is planted, farmers plough their fields using different techniques such as the mouldboard plough and, nowadays, mostly tractors. The focus of this paper is to design a vehicle that ploughs the field using DC motors connected to the ploughing device to lift the plougher up and down. A tractor ploughs the field with a plougher weighing more than 100 kg, but since the DC gear motors connected to the vehicle cannot lift more than about half a kilogram, a plougher prototype is connected to the automatic vehicle to plough the field, and the farmer can remotely monitor the ploughing action on a PC through a wireless camera and AV receiver. The ploughing vehicle can be controlled manually, and automatic operation can also be performed from the PC through keyboard keys. The RF transceiver is used to transmit the commands from the PC through serial communication and to receive the commands on the ploughing vehicle, which then moves in a particular direction according to the commands sent by the farmer through the PC.
II. HARDWARE IMPLEMENTATION
The block diagram of the hardware implementation of the entire system is shown in Figures 1 and 2. This vehicle is radio operated, self-powered and has all the controls of a normal car. A motor has been connected to it so that it can lift the ploughing device up and down when required. The wireless camera sends real-time video and audio signals, which can be seen on a remote monitor, and action can be taken accordingly.
The heart of our robot is the ARM7 (LPC2129). The microcontroller acts as the master controller: it decodes all the commands received from the transmitter, gives commands to the motor driver, and also generates PWM pulses for speed control. Based on the input codes, the master gives commands to the slave microcontroller and the robot behaves as follows:
• moves in the forward direction
• moves in the reverse direction
• speed control in both directions
• turns left or right while moving forward or in reverse
• instant reverse or forward running without stopping
A. Transmitting unit
Here a variable-frequency oscillator (oscillator 1) is used to modulate the frequency to be transmitted; its output is fed to a high-frequency oscillator (oscillator 2) that generates the carrier wave. The carrier wave is then radiated into space by the antenna. The farmer sends commands to the ploughing vehicle from the PC through the RF transceiver, and the AV receiver is used to receive video showing the direction of the ploughing vehicle. To display the video on the computer, a video capture card is placed in a PCI slot of the CPU.
Fig1: Diagram of Vehicle Controller Circuit
Fig2: Block Diagram of PC Control Unit
B. Receiving Unit
The receiving antenna is connected to a tuned wave-detecting circuit that detects the waves transmitted by the transmitter antenna. The output of the tuned wave-detecting circuit is connected to an amplifier, which in turn has its output connected to the input of a high-pass frequency filter as well as to a low-pass frequency filter.
The signals received by the RF transceiver are sent to the microcontroller. According to the commands received from the farmer, the microcontroller drives the motor driver to move the DC gear motors of the ploughing vehicle in a particular direction. The wireless camera, which is separate from the microcontroller, is mounted on the vehicle to capture video of where the vehicle is going. The AV receiver module is shown in Figure 3.
III. HARDWARE RESOURCES
A. Power supply circuit
The main building block of any electronic system is the power supply, which provides the required power for operation. The LPC2129 microcontroller (LPC standing for Low Power Consumption) has on-chip regulators which provide 1.8 V, 3.3 V and 5 V: +1.8 V is required for the CPU core, +3.3 V for the on-chip peripherals and +5 V for the input-output devices. The wireless camera and AV receiver require a 9 V supply, which can be provided through a rechargeable battery.
B. CC2500 RF Transceiver
The CC2500 is a low-cost 2.4 GHz transceiver designed for very low-power wireless applications. The RF transceiver is integrated with a highly configurable baseband modem. The modem supports data rates up to 500 kBaud. The transceiver is intended for the 2400–2483.5 MHz ISM (Industrial, Scientific and Medical) and SRD (Short Range Device) frequency band. It has a low current consumption (RX: 13.3 mA, TX: 21.2 mA) and operates from 1.8 V to 3.6 V.
C. Motor Driver L293D
The Device is a monolithic integrated high voltage, high current four channel driver designed to accept standard
DTL or TTL logic levels and drive inductive loads and switching power transistors. To simplify use as two bridges each pair
of channels is equipped with an enable input.
A separate supply input is provided for the logic, allowing operation at a lower voltage and internal clamp diodes
are included. This device is suitable for use in switching applications at frequencies up to 5 kHz. The L293D is assembled in
a 16 lead plastic package which has 4 center pins connected together and used for heat sinking. The chip is designed to
control 2 DC motors. There are 2 Input and 2 output pins for each motor.
D. Brushed DC Gear Motors
The brushed DC motor is usually the motor of choice for the majority of torque-control and variable-speed applications. For the movement of our ploughing vehicle we use DC motors operated from a 9 V DC supply. In any electric motor, operation is based on simple electromagnetism: a current-carrying conductor generates a magnetic field, and when it is placed in an external magnetic field it experiences a force proportional to the current in the conductor and to the strength of the external magnetic field.
E. UART
A Universal Asynchronous Receiver/Transmitter, abbreviated UART, is a piece of computer hardware that translates data between parallel and serial forms. UARTs are commonly used in conjunction with communication standards such as EIA RS-232, RS-422 and RS-485. The UART takes bytes of data and transmits the individual bits in a sequential fashion. At the destination, a second UART re-assembles the bits into complete bytes. Each UART contains a shift register, which is the fundamental method of conversion between serial and parallel forms. Serial transmission of digital information (bits) through a single wire or other medium is much more cost effective than parallel transmission through multiple wires.
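Regardless of the actual PC application used by the authors, a hedged MATLAB sketch of a PC-side serial sender for this link could look like the following; the COM port, baud rate and single-character command codes are assumptions, not values from the paper:
% Send keyboard-mapped direction commands to the RF transceiver over a serial port.
s = serialport("COM3", 9600);                                  % hypothetical port and baud rate
commands = containers.Map({'w','s','a','d','x'}, {"F","B","L","R","S"});  % key -> command byte
key = 'w';                                                     % e.g. forward; in practice read from the keyboard
if isKey(commands, key)
    writeline(s, commands(key));                               % one ASCII command per keystroke
end
clear s                                                        % release (close) the serial port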
F. RF Communication
Radio frequency (RF) is a rate of oscillation in the range of about 3 kHz to 300 GHz, which corresponds to the
frequency of radio waves, and the alternating currents which carry radio signals. RF usually refers to electrical rather than
mechanical oscillations. The energy in an RF current can radiate off a conductor into space as electromagnetic waves (radio
waves); this is the basis of radio technology.
G. JMK AV Receiver with Wireless Camera
It is a mini wireless monitoring video camera and wireless receiver set for home and small-business surveillance, used here for demonstration purposes. Simply install the wireless camera on the ploughing vehicle where we want to monitor, set the wireless receiver in place (up to 15 meters away) and hook it up to a PC to watch the ploughing action.
H. TV Capture card
A TV capture card is a computer component that allows television signals to be received by a computer. It is a kind of television tuner. Most TV tuners also function as video capture cards, allowing them to record television programs onto a hard disk. The digital TV tuner card is shown in Figure 4.
Figure 3: AV Receiver and Wireless Camera
The card contains a tuner and an analog-to-digital converter along with demodulation and interface logic.
Figure 4: ATI digital TV capture card
IV. SOFTWARE IMPLEMENTATION
For the software implementation, we deploy two software packages: the Keil µVision 3.0 IDE and the Flash Magic programmer. The Keil µVision Debugger accurately simulates the on-chip peripherals (I²C, CAN, UART, SPI, interrupts, I/O ports, A/D converter, D/A converter and PWM modules) of ARM7 devices. Simulation helps to understand hardware configurations and avoids time wasted on setup problems; with simulation, we can write and test applications before target hardware is available. The system program, written in embedded C using the Keil IDE, is stored in the microcontroller. Keil development tools for the microcontroller architecture support every level of software developer, from the professional applications engineer to the student learning embedded software development. The industry-standard Keil C compilers, macro assemblers, debuggers, real-time kernels, single-board computers and emulators support all ARM7 derivatives. The Keil development tools are designed to solve the complex problems facing embedded software developers. Flash Magic is used to download the code from the PC to the microcontroller. Flash Magic is a free, powerful, feature-rich Windows application that allows easy programming of Philips FLASH microcontrollers; it can be used to create custom end-user firmware programming applications or an in-house production-line programming tool. The Flash Memory In-System Programmer is a tool that runs under Windows 95/98/NT4/2K. It allows in-circuit programming of FLASH memories via a serial RS-232 link. The PC-side software, Flash Magic, accepts the Intel HEX format file generated by the Keil compiler and sends it to the target microcontroller; it detects the hardware connected to the serial port.
V. RESULTS
The ploughing vehicle on the field is shown in Fig 5. The farmer can give commands to the vehicle to move in a particular direction, and the video of where the vehicle is going is shown in Fig 6.
Figure 5: Ploughing vehicle on Field
Fig 6: Video from the remote vehicle
VI. CONCLUSION
These days many farmers are leaving their fields and moving to the cities because of water scarcity and the cost payable for ploughing the fields with tractors, JCBs, etc. It is therefore worthwhile to design a model of an apt ploughing vehicle, so that farmers who do not know how to drive a tractor or plough a field can also remotely control the ploughing through this ploughing-device prototype, with the ploughing done either automatically or manually. Through this project, further applications can be developed and interfaced to this ploughing vehicle.
ACKNOWLEDGMENT
This work was supported by Kuppam Engineering College, and the authors are thankful for the valuable references provided. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the supporting institution.
#1 P. Dileep Kumar, obtained B.Tech degree from Kuppam Engineering College, Kuppam. He has
completed M.Tech degree in the area of Embedded Systems under JNTU Anantapur. He is working as
Assistant Professor in department of ECE, Kuppam Engineering College, Kuppam. His areas of
interests are Embedded Systems, Electronic Devices and Circuits, Microprocessors and
Microcontrollers.
#2 P. Ajay Kumar Reddy, obtained B.Tech degree from Kuppam Engineering College, Kuppam. He
has completed M.Tech degree in the area of Embedded Systems under JNTU Anantapur. He is working
as Assistant Professor in department of ECE, Kuppam Engineering College, Kuppam. His areas of
interests are Embedded Systems, Microprocessors and Microcontrollers.
#3 K. Bhaskar Reddy, obtained B.Tech degree from SVPCET, Puttur. He has completed M.Tech
degree in the area of Embedded Systems in Sastra University. He is working as Assistant Professor in
department of ECE, Kuppam Engineering College, Kuppam. His areas of interests are Embedded
Systems, Robotics, Microprocessors and Microcontrollers.
#4 M.Chandra Sekhar Reddy, obtained B.Tech degree from Kuppam Engineering College, Kuppam.
He has completed M.Tech degree in the area of VLSI System Design under JNTU Anantapur. He is
working as Assistant Professor in department of ECE, Kuppam Engineering College, Kuppam. His
areas of interests are VLSI Design, Nano Technology, Microprocessors and Microcontrollers.
#5 E.Venkataramana, obtained B.Tech degree from RGMCET, Nandyal. He has completed
M.Tech degree in the area of Electronics Design and Technology in NIT Calicut . He is working as
Assistant Professor in department of ECE, Kuppam Engineering College, Kuppam. His areas of
interests are Embedded Systems, VLSI Design, Signal Processing, Microprocessors and
Microcontrollers.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 1(August 2012) PP: 32-37
Study on Techniques for Providing Enhanced Security During
Online Exams
Sri Anusha.N1, Sai Soujanya.T2 Dr S.Vasavi3
1
II M.Tech 2 Assistant Professor 3 Professor ,
Department of Computer science & Engineering, VR Siddhartha College of Engineering, Vijayawada
Abstract––In online exams the location of the proctor differs from the location of the examinee. As this distance increases, the chances of malpractice increase. To avoid such situations, the examinee has to be constantly monitored. Many techniques have been proposed for providing security during the conduct of exams. This paper studies various authentication techniques, namely unimodal, multimodal and data visualization techniques. The paper also proposes an enhanced security method, Enhanced Security using Data Visualization (ESDV), for the conduct of online exams.
Keywords––Authentication, data visualization, malpractice, online exam, security
I. INTRODUCTION
In online exams the location of the proctor differs from the location of the examinee. As this distance increases, the chances of malpractice increase. To avoid such situations, the examinee has to be constantly monitored. Monitoring can be done by using authentication techniques such as providing a username and password, or biometric methods such as face, fingerprint and iris identification.
Authentication is a process that permits an entity to establish its identity to another entity [1]. Authentication
methods are of three types, namely passwords, tokens and biometrics. With the use of passwords, only authenticated users are logged in. Conditions such as a minimum of eight characters with at least one letter, one number and one special character are imposed to make passwords strong enough against intruder attacks. Passwords should be changed often to reduce the risk of stealing and guessing.
The second mode of authentication is the use of tokens. Some applications of tokens are physical keys, proximity cards, credit cards and Automated Teller Machine (ATM) cards. They are simple to use and easy to produce. To reduce the risk of theft, these tokens are used along with a Personal Identification Number (PIN). The last mode of authentication is biometrics. In this method the user enrolls by providing a sample physical characteristic, which is converted from analog to digital form. This conversion is stored in a template, which is later compared with a new sample provided by the user at the time of authentication. If the two samples match within a slight variance, the user is authenticated.
Biometric authentication can be applied in the fields of facial recognition, fingerprints, hand geometry, keystroke dynamics, hand vein, iris, retina, signatures, voice, facial thermogram and deoxyribonucleic acid (DNA). There are various authentication systems, namely central authentication systems, multi-factor authentication systems, split authentication systems and message authentication systems. A central authentication system authenticates users remotely using a central authority across a large number of systems; applications of this approach are Remote Authentication Dial-In User Service (RADIUS), Terminal Access Controller Access-Control System (TACACS), Kerberos and Diameter. A multi-factor authentication system combines multiple authentication factors into a single model, thus increasing reliability and security; an application of this system is the use of an ATM card together with a PIN. A split authentication system splits the authentication between two parties: both parties must submit their passwords or cryptographic keys to encrypt or decrypt a message. In a message authentication system, the message is authenticated using a message authentication code (MAC). The MAC is generated by combining the message with a secret key shared by both the sender and the receiver. On receiving the message, the receiver recomputes its own MAC and compares it with the received MAC; if any difference is found, the message is said to have been altered. Digital signatures are used to ensure authenticity and non-repudiation.
The term data visualization can be described as the graphical representation of the given data. It provides an overview of the entire data set, making it easy for viewers to interpret the data. A number of techniques have been proposed for data of different dimensionality. Scatter plots, line graphs, survey plots and bar charts are used for two-dimensional data [2]. Scatter plots represent two-dimensional attributes using an x-y coordinate system; if several data sets are used, a different color is used for each data set to distinguish them. Line graphs display a single-valued function of one dimension. Survey plots consist of n rectangular areas, each representing one dimension; a line is used to represent each dimension, with its length proportional to the dimension's value. Bar charts represent the data using rectangular blocks, with the represented value filled within the blocks.
Scatter plots, survey plots and animation techniques can be used for visualizing three-dimensional data, by adding a third dimension orthogonal to the other two. Animation helps in representing the data by showing the variation of the plot with respect to time.
For visualizing high-dimensional data, icon-based, hierarchical, geometrical and pixel-oriented techniques are used [3]. In icon-based methods there are several techniques, namely Chernoff faces, star glyphs, stick figures, shape coding, color icons and texture. A Chernoff face represents a data item, and each feature of the face represents a dimension. A star glyph represents data using a single point with equally angled rays; each ray represents a dimension, and the length of the ray represents the value of that dimension. Stick figures map two attributes to the display matrix and the remaining attributes to the length, thickness and color of the limbs. In shape coding each data item is visualized using an array of pixels; according to the attribute values the pixels are mapped onto a color scale, which helps in visualizing the multi-dimensional data. In the color icon technique, the pixels are replaced by arrays of color fields that represent the attribute values. Texture helps to gain knowledge about the overall relation between the attributes in addition to the data items.
Section 2 summarizes the methods introduced so far for providing security during conduct of online exams. Section 3
discusses the proposed system. Section 4 gives the conclusions of the paper. Section 5 discusses the future work related to
the paper.
II. RELATED WORK
There are a number of methods available in hierarchical techniques namely dimensional stacking, fractal foam,
hierarchical axis, worlds within worlds, tree map [2],[3]. Dimensional stacking divides the given data into two dimensional
subspaces which are stacked into each other. Later from those subspaces, only attributes are chosen for visualization. This
technique is useful for discrete categorical or binned ordinal values. Fractal foam depicts the correlation between the
dimensions by the usage of circles. The starting dimension is represented in the form of circle. Later other circles are
attached to the first circle, which represent the other dimensions. The size of the circles represents the correlation between
the dimensions. In Hierarchical axis the axis is applied in a hierarchical manner. This technique helps in plotting many
attributes on one screen. Worlds within Worlds divides the given data space into three dimensional subspaces. This
technique helps in generating an interactive hierarchy display, which allows the further exploration of n-dimensional
functional spaces. Tree map divides the given screen into a number of regions based upon the attribute values, in a
hierarchical fashion. It helps in obtaining an overview of large datasets with multiple ordinal values.
The techniques in geometrical methods are parallel coordinates, Andrews curves, multiple views, radial coordinate visualization, Polyviz, hyper slice, hyperbox, star coordinates and table lens [2],[3]. Parallel coordinates make use of parallel axes to represent the dimensions; a vertical line is used for the projection of each dimension or attribute. Andrews curves plot each data point as a function of the data values, using the equation
f_x(t) = x1/√2 + x2 sin(t) + x3 cos(t) + x4 sin(2t) + x5 cos(2t) + …    (1) [2]
where x = (x1, x2, …, xn) and the xi are the values of the data point for the corresponding dimensions. Multiple views are
used for the data sets that contain diverse attributes. This method reveals the correlation and disparities between the
attributes, thus making a visual comparison for better understanding. In radial coordinate visualization, n lines extend from the centre of the circle and end at the perimeter. Each line is associated with one attribute. Data points with
similar or equal values lie close to the centre. Polyviz technique represents each dimension as a line. The position of the data
points depends upon the arrangement of the dimensions. This technique provides more information by giving an inner view
of data distribution in each dimension. Hyper slice contains the matrix graphics representing a scalar function of the
variables. This technique provides interactive data navigation around a user-defined point. Hyperbox is depicted in two-dimensional data space; this technique is helpful in mapping variables to the size and shape of the box. In star coordinates
technique, the data items are represented as points and attributes are represented by axis arranged on a circle. The length of
the axis determines the attribute contribution. Table lens technique uses rows and columns for representing the data items
and attributes. The information among rows and columns is interrelated which helps in analyzing the trends and the
correlation in the data.
The methods included in pixel-oriented techniques are the space-filling curve, recursive pattern, spiral and axes techniques, circle segments and the pixel bar chart [3]. The space-filling curve provides clustering of closely related data items, making the data easy for the user to understand. The recursive pattern follows a generic recursive scheme in which the arrangement of lines and columns is performed iteratively and recursively. Spiral techniques arrange the pixels in
a spiral form, depending on the distance from the query. Axes techniques improve spiral techniques, by adding feedback to
the displacement. In circle segment the attributes are arranged on the segments. The data point that is assigned is available at
the same location on different segments. Pixel bar chart presents the direct representation of the data values. Each data item
is represented by a single pixel and is placed in the respective bars.
For visualization of an image, the image has to be pre-processed and later the features are to be extracted from it
[4]. In the pre-processing step, filtering, normalization and segmentation techniques are applied. Filtering of an image helps in noise removal, sharpening (enhancing the details of an image) and smoothing. Normalization changes the pixel intensity values so as to bring the image into a standard range, making it easier to interpret. Segmentation divides the given image into multiple parts, so that the image can be easily analysed; by dividing the image, further work can be carried out on the required part rather than on the entire image. The
second step in the data visualization is feature extraction, which is a special form of dimensionality reduction. In feature
extraction, the input data is transformed into a set of features. Here features are selected in a way that, the operation on those
features will yield the desired result. The set of features that can be extracted from an image are shape, texture and colour.
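A minimal sketch of this pre-processing chain is shown below, using standard Image Processing Toolbox functions; the file name is an assumption:
% Filtering, normalization and segmentation of a face image (a sketch).
I = imread('face.jpg');                         % hypothetical input image
if size(I, 3) == 3, I = rgb2gray(I); end        % work on the grayscale image
If = medfilt2(I, [3 3]);                        % filtering: median filter removes noise
In = mat2gray(If);                              % normalization to the [0, 1] range
bw = imbinarize(In);                            % segmentation by global (Otsu) thresholding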
Color is the most widely used feature in the feature extraction process [4]. The advantages of using color features are robustness, effectiveness, implementation simplicity, computational simplicity and low storage requirements. The color of an image is represented through a color model. A color model is specified in terms of a 3-D coordinate system and a subspace within that system where each color is represented by a single point. There are three common color models, namely RGB, HSV and YCbCr. RGB colors are called primary colors and are additive; by varying their combinations, other colors can be
obtained. The HSV space is derived from the RGB cube, with the main diagonal of the RGB model as the vertical axis in HSV. As saturation varies from 0.0 to 1.0, the colors vary from unsaturated (gray) to fully saturated (no white component). Hue ranges from 0 to 360 degrees, with variation beginning at red and going through yellow,
green, cyan, blue and magenta, and back to red. HSV is calculated by using the formula
V = (1/3)(R + G + B)    (2) [4]
YCbCr is a color space used in the JPEG and MPEG international coding standards. The formulas used in the calculation are
Y = 0.299R + 0.587G + 0.114B
Cb = -0.169R - 0.331G + 0.500B
Cr = 0.500R - 0.419G - 0.081B    (3) [4]
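A hedged sketch of extracting simple color features in these color spaces with built-in conversions follows; the file name and the choice of per-channel means as features are assumptions:
% Color-feature sketch for the RGB, HSV and YCbCr models discussed above.
rgb   = imread('face.jpg');                          % hypothetical input image
hsv   = rgb2hsv(rgb);                                % H, S, V planes
ycbcr = rgb2ycbcr(rgb);                              % Y, Cb, Cr planes (JPEG/MPEG convention)
colorFeat = [mean(reshape(double(rgb),   [], 3)), ...
             mean(reshape(hsv,           [], 3)), ...
             mean(reshape(double(ycbcr), [], 3))];   % simple per-channel mean features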
The second feature that can be extracted from an image is texture [4]. Texture has been one of the most important characteristics used to classify and recognize objects and to find similarities between images in multimedia databases. Texture alone cannot find similar images, but it can be used to separate textured images from non-textured ones and can then be combined with another visual attribute, such as color, to make the retrieval more effective. There are four classes of texture extraction methods, namely statistical, geometrical, model-based and signal processing methods [5].
Statistical methods define the qualities of texture through the spatial distribution of gray values. We use {I(x, y), 0 ≤ x ≤ N-1, 0 ≤ y ≤ N-1} to denote an N×N image with G gray levels. Two types of statistical features are considered, namely co-occurrence matrices and autocorrelation features. Spatial gray level co-occurrence estimates image properties related to second-order statistics. The gray level co-occurrence matrix Pd for a displacement vector d = (dx, dy) is defined as follows: the entry (i, j) of Pd is the number of occurrences of the pair of gray levels i and j which are a distance d apart. Formally, it is given as
Pd(i, j) = |{((r, s), (t, v)) : I(r, s) = i, I(t, v) = j}|    (4) [5]
where (r, s), (t, v) ∈ N×N, (t, v) = (r + dx, s + dy), and |·| is the cardinality of a set. The second type is the autocorrelation feature, which is used to assess the amount of regularity as well as the fineness/coarseness of the texture present in the image. Formally, the autocorrelation function of an image I(x, y) is defined as
ρ(x, y) = [N² Σu Σv I(u, v) I(u + x, v + y)] / [(N - x)(N - y) Σu Σv I²(u, v)]    (5) [5]
Consider the image function in the spatial domain I(x, y) and its Fourier transform F(u, v). The quantity |F(u, v)|² is defined as the power spectrum, where |·| is the modulus of a complex number.
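A hedged sketch of computing co-occurrence-based texture features with the Image Processing Toolbox follows; the displacement d = (0, 1), number of gray levels and choice of derived properties are assumptions:
% Gray level co-occurrence matrix (GLCM) texture features for one displacement.
I = imread('face.jpg');                                 % hypothetical input image
if size(I, 3) == 3, I = rgb2gray(I); end
glcm  = graycomatrix(I, 'Offset', [0 1], 'NumLevels', 8, 'Symmetric', false);
stats = graycoprops(glcm, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});
textureFeat = [stats.Contrast stats.Correlation stats.Energy stats.Homogeneity];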
Geometrical methods are characterized by their definition of texture as being composed of “texture elements” or
primitives. Once the texture elements are identified in the image, there are two major approaches in analyzing the texture.
First type computes statistical properties from the extracted texture elements and utilizes these as texture features. The
geometrical method comes under the second type, which tries to extract the placement rule that describes the texture.
Geometrical methods are further classified into Voronoi tessellation features and structural methods [6].
Voronoi tessellation helps in defining local spatial neighborhoods, because the local spatial distributions of tokens are reflected in the shapes of the Voronoi polygons. Texture tokens are extracted and then the tessellation is constructed.
Tokens can be points of high gradient in the image or line segments or closed boundaries. The structural models of texture
assume that textures are composed of texture primitives. The texture is produced by the placement of these primitives
according to certain rules. Structural texture analysis consists of two major steps. One is extraction of the texture elements
and the other is inference of the placement rule. Structural methods represent the texture by well-defined primitives (micro texture) and a hierarchy of spatial arrangements (macro texture) of those primitives.
Model based texture analysis methods are based on the construction of an image model that can be used not only
to describe texture, but also to synthesize it [6]. The model parameters capture the essential perceived qualities of texture.
Model based methods are further divided into random field models, fractals, autoregressive, Markov random models.
Random field models have been popular for modeling images. They are able to capture the local (spatial) contextual
information in an image. These models assume that the intensity at each pixel in the image depends on the intensities of only
the neighboring pixels. Fractals are known for modeling roughness and self-similarity at different levels in image processing.
Given a bounded set A in a Euclidean n-space, the set A is said to be self-similar when A is the union of N distinct (non-overlapping) copies of itself, each of which has been scaled down by a ratio r. The fractal dimension D is related to the number N and the ratio r as follows:
D = log N / log(1/r)    (6) [6]
The autoregressive (AR) model assumes a local interaction between image pixels, where the pixel intensity is a weighted sum of the neighboring pixel intensities. A Markov random field (MRF) is a probabilistic process in which all interactions are local; the probability that a cell is in a given state is entirely determined by the probabilities for the states of the neighboring cells. Direct interaction occurs only between immediate neighbors.
Turning to signal processing methods, psychophysical research has given evidence that the human brain performs a frequency analysis of the image [6]. Most techniques compute certain features from filtered images, which are then used in either classification or segmentation tasks.
Signal processing methods are further divided into spatial domain filters, Fourier domain filtering, Gabor and
wavelet models. Spatial domain filters are the most direct way to capture image texture properties. Earlier attempts at
defining such methods concentrated on measuring the edge density per unit area. Fine textures tend to have a higher density
of edges per unit area than coarser textures. In Fourier domain filtering, the frequency analysis of the textured image is done.
As the psychophysical results indicated, the human visual system analyses the textured images by decomposing the image
into its frequency and orientation components. The multiple channels tuned to different frequencies are also referred to as multi-resolution processing in the literature. The Gabor filter is a linear filter used in edge detection; Gabor filters are appropriate for texture representation and discrimination, and are self-similar filters produced from one parent wavelet. A wavelet is a wave-like oscillation that starts at amplitude zero, increases and then decreases back to zero. Wavelets are combined with an unknown signal using a technique called convolution, to extract information from the unknown signal.
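A hedged sketch of Gabor-based texture filtering is given below; the wavelengths, orientations and the use of mean filter magnitudes as features are assumptions:
% Gabor filter bank responses as texture features (a sketch).
I = imread('face.jpg');                              % hypothetical input image
if size(I, 3) == 3, I = rgb2gray(I); end
I = im2double(I);
g = gabor([4 8], [0 45 90 135]);                     % assumed wavelengths and orientations
mag = imgaborfilt(I, g);                             % magnitude response for each filter
gaborFeat = squeeze(mean(mean(mag, 1), 2))';         % mean magnitude per filter as a feature vector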
Shape is another important visual feature and it is one of the primitive features for image content description [4].
This feature helps in measuring the similarity between the images represented by their shapes. There are two steps in shape
based image retrieval namely, feature extraction and similarity measurement between extracted features. Shape descriptors
are used for feature extraction in shape. They are of two types in shape descriptors namely region based which use whole
area of the object and contour based that use information present in the contour of an object. Features calculated from object
contour are based on circularity, aspect ratio, discontinuity angle irregularity, length irregularity, complexity, rightAngularness, sharpness, directedness.
Region based shape descriptor utilizes a set of Zernike movements calculated within a disk centered at the centre
of the image [4]. Following are the advantages in using Zernike movements namely rotation invariance, robustness,
expressiveness, effectiveness, multilevel representation. Zernike polynomials are an orthogonal series of basic functions
normalized over a unit circle. These polynomials increase in complexity with increasing polynomial order. To calculate the
Zernike moments, the image (or region of interest) is first mapped to the unit disc using polar coordinates, where the centre
of the image is the origin of the unit disc. Those pixels falling outside the unit disc are not used in the calculation. The
coordinates are then described by the length of the vector from the origin to the coordinate point.
One of the techniques used in feature extraction is the Discrete Cosine Transform (DCT) [7]. This feature extraction technique is useful for extracting proper features for face recognition. After applying the DCT to the entire face, some of the coefficients are selected to construct feature vectors. This technique helps in processing and highlighting signal frequency features. Whenever an input image is given, its features are extracted and stored along with the input image in the database. When a query image is given, it is first normalized and then converted into a block image. From the block image, DCT-based feature extraction is done, and the features of the query image are compared with the features of the input image. The comparison is done using the Euclidean distance measure. The 2-dimensional DCT of an N×N image f(i, j) is
F(u, v) = α(u) α(v) Σ_{i=0}^{N-1} Σ_{j=0}^{N-1} f(i, j) cos[(2i + 1)uπ / (2N)] cos[(2j + 1)vπ / (2N)],  u, v = 0, …, N-1    (7) [7]
with α(0) = √(1/N) and α(u) = √(2/N) for u > 0. The formula used for the Euclidean distance measure is
D(Iq, Id) = √( Σ_{k=1}^{N} (Iq(k) − Id(k))² )    (8) [7]
where D is the distance between the feature vectors Iq and Id, and N is the number of blocks.
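A hedged sketch of this block-wise DCT feature extraction and Euclidean matching follows; the file names, the 8×8 block size, the 64×64 normalized size and the decision threshold are assumptions, not values from [7]:
% DCT-based feature matching between a query image and a stored image (a sketch).
q = im2double(imread('query.png'));     if size(q, 3) == 3, q = rgb2gray(q); end
d = im2double(imread('enrolled.png'));  if size(d, 3) == 3, d = rgb2gray(d); end
q = imresize(q, [64 64]);  d = imresize(d, [64 64]);            % normalize to a common size

extractDCT = @(im) blockproc(im, [8 8], @(b) dct2(b.data));     % block-wise 2-D DCT
fq = extractDCT(q);  fq = fq(:);                                % query feature vector
fd = extractDCT(d);  fd = fd(:);                                % database feature vector

D = sqrt(sum((fq - fd).^2));                                    % Euclidean distance, as in (8)
someThreshold = 0.5;                                            % hypothetical decision threshold
isMatch = D < someThreshold;                                    % accept/reject the claimed identity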
[8] proposes 3G smartphone technologies that help in the collection of user data. Based on this data, the contact distance between users can be depicted. The technology then predicts the spread of infectious diseases among persons through contact and visualizes it using the line graph method. This method can later be used in our approach for representing the calculated difference between pixels.
[9] came up with a solution for identifying malicious attacks from inside an organization. Visualization of the attacks is done with bipartite graphs, which help in visualizing the unauthorized behavior of persons inside the organization. Using the same visualization method, we can depict the unauthorized behavior of a student. Some forms of unauthorized behavior are accessing questions prior to the exam, accessing unauthorized documents, changing the system time and changing answers after the exam.
[10] proposes a web usage mining technique that helps merchants to know the trends in the buying patterns of customers, which helps them to increase their business options. From the stored purchase data, user behavior is noted based on the navigation of screens and the choice of items bought. Different patterns are mined from this behavior and are later plotted using coordinate graphs. Using the same coordinate graphs, we can plot student behavior in answering questions with respect to time.
[11] proposes a streamline visualization tool that helps in visualizing the lab work done by biologists. This technique also provides visualization for any stream in a selected course, such as the number of class hours, assignment schedules, meetings and group projects. It helps the user to view the data in a temporal format. Using this concept, student behavior in the exam can be visualized according to the change in time; bar charts are used for the visualization of student behavior.
[12] proposes a unimodal technique that provides a method for security in online exams through face recognition. It uses Discrete Cosine Transform (DCT) and Karhunen-Loeve Transform (KLT) methods for feature extraction from face images and later compares these results for face identification. This technique can be further extended by comparing faces with a stored image.
[13] proposes a unimodal security technique using biometric web authentication. This technique uses bio-tracker software that recognizes images on the client side; it visualizes student images and is useful for tracking student behavior. The disadvantage of this technique is that it needs powerful servers to execute the software.
[14] proposes a unimodal technique that implements a one-time password authentication scheme. This scheme provides separate passwords based upon the time and location of the mobile device. Only with that password can the user access applications on the mobile device, such as online banking, which helps reduce the risk of intruders. This technique can be used in our paper during the authentication process of the student.
[15] proposed a machine learning approach which prevents theft and impersonation during online exams. This technique visualizes the student's behavior during the exam, thus preventing further malpractice.
[16] came up with a unimodal technique in which a transparent authentication method is used. The student has to make digital signatures using steganography with a voice sample at regular intervals of time. By comparing the samples, the system ensures that the same student is writing the exam throughout the exam period.
[17] proposes a multimodal approach that provides authentication methods for the student login and later monitors the student during the exam period. For authentication a password, face recognition or tokens can be used; the student is then constantly monitored through a webcam. If any high risk is found, the student is asked to re-authenticate. This helps in identifying the student who is currently in front of the system.
[18] proposed a multimodal technique in which a framework is designed for conducting online exams in a secure environment. A username and password are provided during the registration process, and the student is later monitored through a webcam. This helps in tracking the student's behavior during the exam.
[19] proposed a multimodal biometrics solution for security in online exams. The student provides fingerprints during the examination login and is later constantly monitored using the webcam. If any abnormal behavior is found, the student is prevented from writing the exam.
[20] proposed a multimodal system for authentication of students during examinations using multimodal biometric technology, which combines fingerprints and mouse dynamics. The student's fingerprints are taken at the beginning of the login, and the mouse movements made by the student during the examination period are recorded and later verified to track the student's behavior.
[21] proposed a multimodal technique for conducting a secure online exam. This technology authenticates the student using group key cryptography, and the student is then continuously monitored through a webcam. The proposed cryptographic technique can be used in our system for student registration and authentication.
[22] proposed a unimodal approach that tracks the answering pattern of the student through data visualization techniques. This approach helps the administrator in evaluating tests and assigning grades to the students. These data visualization techniques can be used in our system for tracking student behavior.
III. PROPOSED SYSTEM
In this paper a multimodal technique is proposed for providing security. This includes authentication of the student at the beginning of the exam and later continuous monitoring of the student through a webcam, as shown in Fig 1. The detection and comparison of the student's face and behavior is done in two steps. First, pre-processing is applied to the image through filtering, normalization and segmentation. In the second step, feature extraction is done based on the color, texture and shape of the obtained image; the technique implemented for feature extraction is the Discrete Cosine Transform method. The obtained features of the input image are then compared with the features of the database image using the Euclidean distance measure. The computed difference is then plotted using two-dimensional data visualization techniques. Through these plots, the proctor can easily notice any change in the student's behavior. This approach thus provides full-time security for online exams.
Fig 1: Architecture for enhanced security using data visualization (ESDV)
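A hedged sketch of such a monitoring loop is given below. The webcam interface (webcam/snapshot) requires the MATLAB webcam support package, and refFeat, the enrolled student's feature vector, is assumed to exist and match the feature length; the block size, sampling rate and image size are also assumptions:
% ESDV monitoring sketch: compare each webcam frame to the enrolled template over time.
featFun = @(im) reshape(blockproc(im, [8 8], @(b) dct2(b.data)), 1, []);   % block-wise DCT features
cam = webcam;                                       % assumed available camera (support package)
nSamples = 60;  d = zeros(1, nSamples);
for k = 1:nSamples
    frame = rgb2gray(snapshot(cam));                               % current webcam frame
    f = featFun(im2double(imresize(frame, [64 64])));              % query features
    d(k) = sqrt(sum((f - refFeat).^2));                            % Euclidean distance to the template
    pause(1);                                                      % roughly one sample per second
end
plot(1:nSamples, d); xlabel('time (s)'); ylabel('distance to enrolled face');
title('Student behavior visualization (ESDV)');                    % 2-D line-graph visualization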
IV. CONCLUSIONS
In this paper, a set of authentication techniques is discussed, including passwords, tokens and biometrics. Special conditions are imposed to make the techniques strong enough against intruder attacks. Various authentication systems are discussed, namely central authentication systems, multi-factor authentication systems, split authentication systems and message authentication systems. This paper discusses a multimodal authentication technique that helps in the continuous monitoring of the student; the behavior of the student is visualized using different visualization techniques.
Different data visualization techniques are discussed for data of varying dimensionality. Scatter plots, survey plots, line graphs and bar charts are used for visualizing two-dimensional data. The same techniques are used for visualizing three-dimensional data by adding a third dimension perpendicular to the remaining two. Icon-based, hierarchical, geometrical and pixel-oriented techniques are used for visualizing high-dimensional data. By using these techniques, the student's behavior is tracked and continuous monitoring is provided, thus implementing enhanced security in online exams.
V. FUTURE WORK
Our proposed system should be validated using the face database provided in [23]. The dataset consists of 1521 images. For each image, twenty manual landmark points are taken, based on the right eye pupil, left eye pupil, right mouth corner, left mouth corner, outer end of the right eyebrow, inner end of the right eyebrow, inner end of the left eyebrow, outer end of the left eyebrow, right temple, outer corner of the right eye, inner corner of the right eye, inner corner of the left eye, outer corner of the left eye, left temple, tip of the nose, right nostril, left nostril, centre point on the outer edge of the upper lip, centre point on the outer edge of the lower lip, and tip of the chin. These points of each image are to be computed using the Discrete Cosine Transform technique, and comparison is then to be made through the Euclidean distance measure. Visualization techniques can also be used for tracking student behavior during the exam.
REFERENCES
1. Christopher Mallow, "Authentication Methods and Techniques".
2. Kim Bartke, "2D, 3D and High-Dimensional Data and Information Visualization", Seminar on Data and Information Management, 2005.
3. Winnie Wing-Yi Chan, "A Survey on Multivariate Data Visualization", Department of Computer Science and Engineering, Hong Kong University of Science and Technology, 2006.
4. Ryszard S. Choras, "Image Feature Extraction Techniques and their Applications for CBIR and Biometric System", International Journal of Biology and Biomedical Engineering, 2007.
5. Andrzej Materka and Michal Strzelecki, "Texture Analysis Methods – A Review", Technical University of Lodz, Institute of Electronics, ul. Stefanowskiego, 1998.
6. Mihran Tuceryan, Anil K. Jain, "Texture Analysis", World Scientific Publishing Corporation, 1998.
7. Aamer S. S. Mohamed, Ying Weng, Jianmin Jiang and Stan Ipson, "An Efficient Face Image Retrieval through DCT Features", School of Informatics, University of Bradford BD7 1DP, UK, Signal and Image Processing, 2008.
8. Benavides, J., Demianyk, B., McLeod, R.D., Friesen, M.R., Laskowski, M., Ferens, K., Mukhi, S.N., "3G Smartphone Technologies for Generating Personal Social Network Contact Distributions and Graphs", 13th IEEE International Conference on e-Health Networking Applications and Services (Healthcom), 2011.
9. Im Y. Jung, Yeom, H.Y., "Enhanced Security for Online Exams using Group Cryptography", IEEE Transactions on Education, 2009.
10. Sonal Tiwari, Deepti Razdan, Prashant Richariya, Shivkumar Tomar, "A Web Usage Mining Framework for Business Intelligence", 3rd International Conference on Communication Software and Networks, 2011.
11. Shengqiong Yuan, Aurélien Tabard, Wendy Mackay, "StreamLiner: A General-Purpose Interactive Course-Visualization Tool", International Symposium on Knowledge Acquisition and Modelling Workshop, 2008.
12. C. M. Althaff Irfan, Syusaku Nomura, Karim Ouzzane, Yoshimi Fukumura, "Face-based Access Control and Invigilation Tool for e-Learning Systems", International Conference on Biometrics and Kansei Engineering, 2009.
13. Elisardo Gonzalez Agulla, Rifon, L.A., Castro, J.L.A., Mateo, C.G., "Is My Student at the Other Side? Applying Biometric Web Authentication to E-Learning Environments", Eighth IEEE International Conference on Advanced Learning Technologies (ICALT), 2008.
14. Wen-Bin Hsieh, Jenq-Shiou Leu, "Design of a Time and Location Based One-Time Password Authentication Scheme", 7th International Wireless Communications and Mobile Computing Conference (IWCMC), 2011.
15. Taiwo Ayodele, Charles A. Shoniregun, Galyna Akmayeva, "Toward e-Learning Security: A Machine Learning Approach", International Conference on Information Society (i-Society), 2011.
16. Nawfal F. Fadhal, Gary B. Wills, David Argles, "Transparent Authentication in E-Learning", Conference on Information Society (i-Society), 2011.
17. Kikelomo Maria Apampa, Gary Wills, David Argles, "An Approach to Presence Verification in Summative e-Assessment Security", International Conference on Information Society (i-Society), 2010.
18. Nasser Modiri, Ahmad Farahi, Snaz Katabi, "Providing Security Framework for Holding Electronic Examination in Virtual Universities", 7th International Conference on Networked Computing and Advanced Information Management (NCM), 2011.
19. Levy, Y., Ramim, M. Michelle, "Initial Development of a Learners' Ratified Acceptance of Multibiometrics Intentions Model (RAMIM)", Interdisciplinary Journal of E-Learning and Learning Objects (IJELLO), 2009.
20. Asha, S., Chellappan, C., "Authentication of e-Learners Using Multimodal Biometric Technology", International Symposium on Biometrics and Security Technologies, 2008.
21. Im Y. Jung, Yeom, H.Y., "Enhanced Security for Online Exams using Group Cryptography", IEEE Transactions on Education, 2009.
22. Gennaro Costagliola, Vittorio Fuccella, Massimiliano Giordano, Giuseppe Polese, "Monitoring Online Tests through Data Visualization", IEEE Transactions on Knowledge and Data Engineering, 2009.
23. http://www.facedetection.com/facedetection/dataset.htm
37
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 2 (September 2012) PP: 23-35
Performance Evaluation of a Low Heat Rejection Diesel Engine
with Jatropha
N. Janardhan1, M.V.S. Murali Krishna2, P.Ushasri3, and P.V.K. Murthy4
1 Mechanical Engineering Department, Chaitanya Bharathi Institute of Technology, Gandipet, Hyderabad-500 075.
2 Mechanical Engineering Department, College of Engineering, Osmania University, Hyderabad-500 007.
3,4 Vivekananda Institute of Science and Information Technology, Shadnagar, Mahabubnagar-509 216.
Abstract––Investigations were carried out to evaluate the performance of a low heat rejection (LHR) diesel engine
consisting of air gap insulated piston with 3-mm air gap, with superni (an alloy of nickel) crown, air gap insulated liner
with superni insert and ceramic coated cylinder head with different operating conditions of crude jatropha oil (CJO) with
varied injection timing and injection pressure. Performance parameters were determined at various values of brake mean
effective pressure (BMEP). Pollution levels of smoke and oxides of nitrogen (NOx) were recorded at the various values of
BMEP. Combustion characteristics of the engine were measured with TDC (top dead centre) encoder, pressure
transducer, console and special pressure-crank angle software package. Conventional engine (CE) showed deteriorated
performance, while the LHR engine showed improved performance with vegetable oil operation at the recommended injection
timing and pressure, and the performance of both versions of the engine improved with advanced injection timing and
higher injection pressure when compared with CE with pure diesel operation. Relatively, peak brake thermal efficiency
increased by 14%, smoke levels decreased by 27% and NOx levels increased by 49% with vegetable oil operation on LHR
engine at its optimum injection timing, when compared with pure diesel operation on CE at manufacturer’s
recommended injection timing.
Keywords––Vegetable oil, LHR engine, Performance, Exhaust emissions, Combustion characteristics.
I. INTRODUCTION
In a scenario where the vehicle population is increasing at an alarming rate, the use of diesel fuel not only in the transport sector but also in the agriculture sector is leading to fast depletion of diesel fuels and rising pollution levels, and the search for alternate fuels has become pertinent for engine manufacturers, users and researchers involved in combustion research. Vegetable oils and alcohols are probable candidates to replace conventional diesel fuel, as they are renewable. Most of the alcohol produced in India is diverted to petrochemical industries; moreover, alcohols have a low cetane number, and hence engine modification is necessary if they are to be used as fuel in diesel engines. Rudolf Diesel [1], the inventor of the engine that bears his name, experimented with fuels ranging from powdered coal to peanut oil. Several researchers [2-11] experimented with the use of vegetable oils as fuel on conventional engines (CE) and reported that the performance was poor, citing the problems of high viscosity, low volatility and their polyunsaturated character. In addition, the common problems of crude vegetable oils in diesel engines are formation of carbon deposits, oil ring sticking, and thickening and gelling of lubricating oil as a result of contamination by the vegetable oils.
The presence of the fatty acid components greatly affects the viscosity of the oil. The above-mentioned problems are reduced if crude vegetable oils are converted [12] into biodiesel, which has lower molecular weight, lower density and lower viscosity when compared with crude vegetable oils. Investigations were carried out [13-18] with biodiesel on CE, and it was reported that performance improved, smoke emissions reduced and NOx emissions increased. The drawbacks associated with crude vegetable oil and biodiesel call for the LHR engine. It is a well-known fact that about 30% of the energy supplied is lost through the coolant and another 30% is wasted through friction and other losses, leaving only about 30% of the energy for useful purposes. The concept of the LHR engine is to reduce coolant losses by providing thermal resistance in the path of heat flow to the coolant, thereby gaining thermal efficiency. The methods adopted for achieving LHR to the coolant include i) using ceramic coatings on the piston, liner and cylinder head and ii) creating an air gap in the piston and other components with low-thermal-conductivity materials like superni, cast iron and mild steel. Investigations were carried out by various researchers [19-21] on ceramic coated engines with pure diesel operation, and it was reported that brake specific fuel consumption (BSFC) improved in the range of 5-9% and pollution levels decreased with the ceramic coated engine. Studies were also made on ceramic coated LHR engines with biodiesel operation, and it was reported that performance improved, smoke emissions decreased and NOx emissions increased. The technique of providing an air gap in the piston involved the complications of joining two
different metals. Investigations were carried out [22] on LHR engine with air gap insulated piston with pure diesel. However,
the bolted design employed by them could not provide complete sealing of air in the air gap. Investigations [23] were carried
out with air gap insulated piston with nimonic crown with pure diesel operation with varied injection timing and reported
brake specific fuel consumption improved by 8%. The performance of a diesel engine was studied [24] by insulating engine parts, employing a 2-mm air gap in the piston and the liner; the nimonic piston crown with 2-mm air gap was studded to the body of the piston and a mild steel sleeve with 2-mm air gap was fitted over the total length of the liner, thus attaining a semi-adiabatic condition. Deterioration in the performance of the engine at all loads was reported when compared to pure diesel operation on CE, which was attributed to higher exhaust gas temperatures. Experiments were conducted
[25] on an LHR engine with an air gap insulated piston, air gap insulated liner and ceramic coated cylinder head. The piston with nimonic crown and 2-mm air gap was fitted to the body of the piston by a stud design. A mild steel sleeve with 2-mm air gap was fitted over 50 mm of the liner length. The performance of this engine deteriorated with pure diesel operation at the recommended injection timing; hence the injection timing was retarded to achieve
better performance and pollution levels. Experiments were conducted [12] with air gap insulated piston with superni crown
and air gap insulated liner with superni insert with varied injection timing and injection pressure with different alternate fuels
like vegetable oils and alcohol and reported that LHR engine improved the performance with alternate fuels. Experiments
were conducted [26] on different degrees of insulation and reported that performance was improved with higher degree of
insulation. Investigations were carried out [27] with air gap insulated piston, air gap insulated liner and ceramic coated
cylinder head with crude jatropha oil and crude pongamia oil based biodiesel and reported that these alternate fuels increased the
efficiency of the engine and decreased smoke emissions and increased NOx emissions.
The present paper attempted to evaluate the performance of LHR engine, which consisted of air gap insulated
piston, air gap insulated liner and ceramic coated cylinder head with crude jatropha oil with varied injection pressure and
injection timing and compared with pure diesel operation on CE at recommended injection timing and injection pressure.
II. METHODOLOGY
The LHR diesel engine contained a two-part piston (Fig. 1): the top crown, made of a low-thermal-conductivity material, superni-90, was screwed to the aluminum body of the piston, providing a 3-mm air gap between the crown and the body of the piston. The optimum thickness of the air gap in the piston was found to be 3 mm [23] for better performance of the engine with superni inserts with diesel as fuel. A superni-90 insert was screwed to the top portion of the liner in such a manner that an air gap of 3 mm was maintained between the insert and the liner body. At 500°C the thermal conductivities of superni-90 and air are 20.92 and 0.057 W/m-K respectively. Partially stabilized zirconia (PSZ) of 500 microns thickness was applied to the cylinder head by means of the plasma coating technique.
Figure 1. Assembly details of the air gap insulated piston (1. crown with threads, 2. gasket, 3. air gap, 4. body), the ceramic coated cylinder head (5. ceramic coating, 6. cylinder head) and the air gap insulated liner (7. insert with threads, 8. air gap, 9. liner)
The properties of the vegetable oil and the diesel used in this work are presented in Table 1.

Table 1. Properties of the test fuels
Test Fuel              Viscosity at 25°C (centipoise)   Density at 25°C   Cetane number   Calorific value (kJ/kg)
Diesel                 12.5                             0.84              55              42000
Jatropha oil (crude)   125                              0.90              45              36000
The experimental setup used for the investigations of the LHR diesel engine with jatropha oil is shown in Fig. 2. The CE had an aluminum alloy piston with a bore of 80 mm and a stroke of 110 mm. The rated output of the engine was 3.68 kW at a speed of 1500 rpm. The compression ratio was 16:1, and the manufacturer's recommended injection timing and injection pressure were 27 obTDC and 190 bar respectively. The fuel injector had three holes of size 0.25 mm. The
combustion chamber consisted of a direct injection type with no special arrangement for swirling motion of air. The engine
was connected to electric dynamometer for measuring its brake power. Burette method was used for finding fuel
consumption of the engine.
1. Engine, 2. Electrical dynamometer, 3. Load box, 4. Orifice meter, 5. U-tube water manometer, 6. Air box, 7. Fuel tank, 8. Three-way valve, 9. Burette, 10. Exhaust gas temperature indicator, 11. AVL smoke meter, 12. Netel Chromatograph NOx analyzer, 13. Outlet jacket water temperature indicator, 14. Outlet jacket water flow meter, 15. Piezo-electric pressure transducer, 16. Console, 17. TDC encoder, 18. Pentium personal computer and 19. Printer.
Figure 2. Experimental set-up
Air-consumption of the engine was measured by air-box method. The naturally aspirated engine was provided
with water-cooling system in which inlet temperature of water was maintained at 60oC by adjusting the water flow rate. The
engine oil was provided with a pressure feed system. No temperature control was incorporated, for measuring the lube oil
temperature. Copper shims of suitable size were provided in between the pump body and the engine frame, to vary the
injection timing and its effect on the performance of the engine was studied, along with the change of injection pressures
from 190 bar to 270 bar (in steps of 40 bar) using nozzle testing device. The maximum injection pressure was restricted to
270 bar due to practical difficulties involved. Exhaust gas temperature (EGT) was measured with thermocouples made of
iron and iron-Constantan. Pollution levels of smoke and NOx were recorded by AVL smoke meter and Netel Chromatograph
NOx analyzer respectively at various values of BMEP. Piezo electric transducer, fitted on the cylinder head to measure
pressure in the combustion chamber was connected to a console, which in turn was connected to Pentium personal computer.
TDC encoder provided at the extended shaft of the dynamometer was connected to the console to measure the crank angle of
the engine. A special pressure-crank angle software package evaluated the combustion characteristics such as peak pressure (PP), time of occurrence of peak pressure (TOPP), maximum rate of pressure rise (MRPR) and time of occurrence of maximum rate of pressure rise (TOMRPR) from the signals of pressure and crank angle at the peak load operation of the engine. The pressure-crank angle diagram was obtained on the screen of the personal computer. The accuracy of the instrumentation used in the experimentation is 0.1%.
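For readers unfamiliar with how the performance parameters reported below follow from these measurements, the short sketch that follows computes brake thermal efficiency (BTE) and brake specific energy consumption (BSEC) from a brake power, a burette-derived fuel flow rate and a calorific value. The fuel flow figure in the usage line is an assumed illustrative number, not a measurement from this study; the rated power (3.68 kW) and the calorific value of diesel (42000 kJ/kg) are taken from the text and Table 1.

# Illustrative sketch only (not part of the original study): BTE and BSEC from
# measured brake power, fuel flow and calorific value.
def bte_and_bsec(brake_power_kw, fuel_rate_kg_per_h, calorific_value_kj_per_kg):
    """Return (BTE in %, BSEC as kW of fuel energy per kW of brake power)."""
    fuel_energy_kw = fuel_rate_kg_per_h * calorific_value_kj_per_kg / 3600.0
    bte = 100.0 * brake_power_kw / fuel_energy_kw
    bsec = fuel_energy_kw / brake_power_kw
    return bte, bsec

# Assumed example: rated power 3.68 kW on diesel (CV = 42000 kJ/kg, Table 1)
# with a hypothetical fuel consumption of 1.1 kg/h -> BTE ~ 28.7 %, BSEC ~ 3.49.
print(bte_and_bsec(3.68, 1.1, 42000.0))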
III. RESULTS AND DISCUSSION
3.1 Performance Parameters
Fig 3 indicates that CE with vegetable oil showed the deterioration in the performance for entire load range when
compared with the pure diesel operation on CE at recommended injection timing.
Figure 3. Variation of BTE (%) with BMEP (bar) in CE at various injection timings (curves: 1-CE-Diesel-27bTDC, 2-CE-CJO-27bTDC, 3-CE-CJO-29bTDC, 4-CE-CJO-32bTDC, 5-CE-CJO-33bTDC)
Although carbon accumulations on the nozzle tip might play a partial role in the general trends observed, the difference in viscosity between diesel and vegetable oil provided a possible explanation for the deterioration in the performance of the engine with vegetable oil operation. The lower jet exit Reynolds numbers obtained with vegetable oil adversely affected atomization. The amount of air entrained by the fuel spray was reduced, since the fuel spray plume angle was reduced, resulting in slower fuel-air mixing. In addition, less air entrainment by the fuel spray suggested that the fuel spray penetration might increase, resulting in more fuel reaching the combustion chamber walls. Furthermore, droplet mean diameters (expressed as Sauter mean) were larger for vegetable oil, leading to a reduced rate of heat release as compared with diesel fuel. The higher ignition (chemical) delay of the vegetable oil, due to its lower cetane number, also contributed. According to the qualitative image of the combustion under crude vegetable oil operation with CE, the
lower BTE was attributed to the relatively retarded and lower heat release rates. BTE increased with the advancing of the
injection timing in CE with the vegetable oil at all loads, when compared with CE at the recommended injection timing and
pressure. This was due to initiation of combustion at earlier period and efficient combustion with increase of air entrainment
in fuel spray giving higher BTE. BTE increased at all loads when the injection timing was advanced to 32obTDC in CE at
the normal temperature of vegetable oil. The increase of BTE at optimum injection timing over the recommended injection
timing with vegetable oil with CE could be attributed to its longer ignition delay and combustion duration. BTE increased at
all loads when the injection timing was advanced to 32obTDC in CE, at the preheated temperature of the vegetable oil. The
performance improved further in CE with the preheated vegetable oil for entire load range when compared with normal
vegetable oil. Preheating of the vegetable oil reduced the viscosity, which improved the spray characteristics of the oil.
From Fig 4, it could be noticed that LHR version of the engine showed the improved performance for the entire
load range compared with CE with pure diesel operation.
Figure 4. Variation of BTE (%) with BMEP (bar) in LHR engine at various injection timings (curves: 1-CE-Diesel-27bTDC, 2-LHR-CJO-27bTDC, 3-LHR-CJO-31bTDC, 4-LHR-CJO-32bTDC)
High cylinder temperatures helped in better evaporation and faster combustion of the fuel injected into the
combustion chamber. Reduction of ignition delay of the vegetable oil in the hot environment of the LHR engine improved
heat release rates and efficient energy utilization. Preheating of the vegetable oil improved performance further in the LHR version of the engine. The optimum injection timing was found to be 31 obTDC with the LHR engine with normal vegetable oil operation. Since the hot combustion chamber of the LHR engine reduced ignition delay and combustion duration, the optimum injection timing was obtained earlier with the LHR engine than with CE with vegetable oil operation.
Fig 5 indicates that peak BTE was higher in the LHR engine when compared with CE at all loads with vegetable
oil operation. Preheating of the vegetable oil improved the performance in both versions of the engine compared with the
vegetable oil at normal temperature. Preheating reduced the viscosity of the vegetable oils, which reduced the impingement
of the fuel spray on combustion chamber walls, causing efficient combustion thus improving BTE.
Figure 5. Variation of BTE (%) with BMEP (bar) in both versions of the engine at recommended and optimized injection timings (curves: 1-CE-Diesel-27bTDC, 2-CE-CJO-27bTDC, 3-LHR-CJO-27bTDC, 4-CE-CJO-32bTDC, 5-LHR-CJO-31bTDC)
Injection pressure was varied from 190 bar to 270 bar to improve the spray characteristics and atomization of the vegetable oil, and the injection timing was advanced from 27 to 34 obTDC for the CE and LHR engine. The improvement in BTE at
higher injection pressure was due to improved fuel spray characteristics. However, the optimum injection timing was not
varied even at higher injection pressure with LHR engine, unlike the CE. Hence it was concluded that the optimum injection
timing is 32obTDC at 190 bar, 31obTDC at 230 bar and 30obTDC at 270 bar for CE. The optimum injection timing for LHR
engine was 31obTDC irrespective of injection pressure.
From TABLE 2, it could be noticed that improvement in the peak BTE was observed with the increase of injection pressure and with advancing of the injection timing in both versions of the engine. This was due to improved spray characteristics and more efficient combustion; since vegetable oil has a long combustion duration, advancing the injection timing helped release the fuel energy more efficiently, leading to higher BTE. Peak BTE was higher in the LHR engine when compared to CE under different operating conditions of the vegetable oil.
Table 2. Data of peak BTE (%) for DF and CJO in the conventional and LHR engines at injection pressures of 190, 230 and 270 bar (NT and PT) and injection timings of 27-33 obTDC
From TABLE 3, it is evident that brake specific energy consumption (BSEC) at peak load decreased with the
increase of injection pressure and with the advancing of the injection timing at different operating conditions of the vegetable
oil.
Table 3. Data of BSEC (kW/kW) at peak load operation for DF and CJO in the conventional and LHR engines at injection pressures of 190, 230 and 270 bar (NT and PT) and injection timings of 27-33 obTDC
DF - Diesel Fuel, CJO - Crude Jatropha Oil, NT - Normal or Room Temperature, PT - Preheat Temperature
From Fig 6, it is evident that CE with vegetable oil operation at the recommended injection timing recorded higher
EGT at all loads compared with CE with pure diesel operation. Lower heat release rates and retarded heat release associated
with high specific energy consumption caused increase in EGT in CE. Ignition delay in the CE with different operating
conditions of vegetable oil increased the duration of the burning phase. LHR engine recorded lower value of EGT when
compared with CE with vegetable oil operation. This was due to reduction of ignition delay in the hot environment with the
provision of the insulation in the LHR engine, which caused the gases to expand in the cylinder, giving higher work output and
lower heat rejection. This showed that the performance improved with LHR engine over the CE with vegetable oil operation.
Figure 6. Variation of EGT (°C) with BMEP (bar) in both versions of the engine at recommended and optimized injection timings (curves: 1-CE-Diesel-27bTDC, 2-CE-CJO-27bTDC, 3-LHR-CJO-27bTDC, 4-CE-CJO-32bTDC, 5-LHR-CJO-31bTDC)
The value of EGT at peak load decreased with advancing of injection timing and with increase of injection
pressure in both versions of the engine with vegetable oil as it is evident from TABLE 4. Preheating of the vegetable oil
further reduced the magnitude of EGT, compared with normal vegetable oil in both versions of the engine.
Table 4. Data of EGT at the peak load (°C) for DF and CJO in the conventional and LHR engines at injection pressures of 190, 230 and 270 bar (NT and PT) and injection timings of 27-33 obTDC
Curves from Fig 7 indicate that the coolant load (CL) increased with BMEP in both versions of the engine with the
test fuels. However, CL reduced with LHR version of the engine with vegetable oil operation when compared with CE with
pure diesel operation.
Figure 7. Variation of coolant load (CL, kW) with BMEP (bar) in both versions of the engine at recommended and optimized injection timings with CJO operation at an injection pressure of 190 bar (curves: CE-Diesel-27 bTDC, CE-CJO-27 bTDC, LHR-CJO-27 bTDC, CE-CJO-32 bTDC, LHR-CJO-31 bTDC)
Heat output was properly utilized and hence efficiency increased and heat loss to coolant decreased with effective
thermal insulation with LHR engine. However, CL increased with CE with vegetable oil operation in comparison with pure
diesel operation on CE. This was due to concentration of fuel at the walls of combustion chamber. CL decreased with
advanced injection timing with both versions of the engine with test fuels. This was due to improved air fuel ratios. From
TABLE 5, it is noticed that CL decreased with advanced injection timing and with increase of injection pressure.
Table 5. Data of coolant load (kW) at peak load operation for DF and CJO in the CE and LHR engines at injection pressures of 190, 230 and 270 bar (NT and PT) and injection timings of 27-33 obTDC
This was because of improved combustion and proper utilization of heat energy with reduction of gas
temperatures. CL decreased with preheated vegetable oil in comparison with normal vegetable oil in both versions of the
engine. This was because of improved spray characteristics. Curves from Fig 8 indicate that volumetric efficiency (VE)
decreased with the increase of BMEP in both versions of the engine. This was due to increase of gas temperature with the
load.
Figure 8. Variation of VE (%) with BMEP (bar) in both versions of the engine at recommended and optimized injection timings (curves: 1-CE-Diesel-27bTDC, 2-CE-CJO-27bTDC, 3-LHR-CJO-27bTDC, 4-CE-CJO-32bTDC, 5-LHR-CJO-31bTDC)
At the recommended injection timing, VE in both versions of the engine with vegetable oil operation decreased at all loads when compared with CE with pure diesel operation. This was due to the increase of temperature of the incoming charge in the hot environment created with the provision of insulation, causing a reduction in the density and hence the quantity of air
with LHR engine. VE increased marginally in CE and LHR engine at optimized injection timings when compared with
recommended injection timings with vegetable oil operation. This was due to decrease of un-burnt fuel fraction in the
cylinder leading to increase in VE in CE and reduction of gas temperatures with LHR engine.
TABLE 6 indicates that VE increased marginally with the advancing of the injection timing and with the increase
of injection pressure in both versions of the engine. This was due to better fuel spray characteristics and evaporation at
higher injection pressures leading to marginal increase of VE. This was also due to the reduction of residual fraction of the
fuel, with the increase of injection pressure. Preheating of the vegetable oil marginally improved VE in both versions of the
engine, because of reduction of un-burnt fuel concentration with efficient combustion, when compared with the normal
temperature of the oil.
Table 6. Data of volumetric efficiency at peak load operation
3.2. Exhaust Emissions
It was reported that [28] fuel physical properties such as density and viscosity could have a greater influence on
smoke emission than the fuel chemical properties. It could be observed from Fig 9, that the value of smoke intensity
increased from no load to full load in both versions of the engine. During the first part, the smoke level was more or less
constant, as there was always excess air present. However, in the higher load range there was an abrupt rise in smoke levels
due to less available oxygen, causing the decrease of air-fuel ratio, leading to incomplete combustion, producing more soot
density. The variation of smoke levels with the brake power typically showed a U-shaped behavior due to the pre-dominance
of hydrocarbons in their composition at light load and of carbon at high load. Drastic increase of smoke levels was observed
at the peak load operation in CE at different operating conditions of the vegetable oil, compared with pure diesel operation
on CE. This was due to the higher value of the ratio of C/H of vegetable oil (0.83) when compared with pure diesel (0.45).
Figure 9. Variation of smoke intensity in Hartridge Smoke Units (HSU) with BMEP (bar) in both versions of the engine at recommended and optimized injection timings (curves: 1-CE-Diesel-27bTDC, 2-CE-CJO-27bTDC, 3-LHR-CJO-27bTDC, 4-CE-CJO-32bTDC, 5-LHR-CJO-31bTDC)
The increase of smoke levels was also due to decrease of air-fuel ratios and VE with vegetable oil when compared
with pure diesel operation. Smoke levels were related to the density of the fuel. Since vegetable oil had higher density
compared to diesel fuels, smoke levels were higher with vegetable oil. However, LHR engine marginally reduced smoke
levels due to efficient combustion and less amount of fuel accumulation on the hot combustion chamber walls of the LHR
engine at different operating conditions of the vegetable oil compared with the CE. Density influences the fuel injection
system. Decreasing the fuel density tends to increase spray dispersion and spray penetration. Preheating of the vegetable oils
reduced smoke levels in both versions of the engine, when compared with normal temperature of the vegetable oils. This was
due to i) the reduction of density of the vegetable oils, as density is directly proportional to smoke levels, ii) the reduction of
the diffusion combustion proportion in CE with the preheated vegetable oil, iii) the reduction of the viscosity of the
vegetable oil, with which the fuel spray does not impinge on the combustion chamber walls of lower temperatures rather
than it directs into the combustion chamber.
From TABLE 7 it is evident that smoke levels decreased at optimized injection timings and with increase of
injection pressure, in both versions of the engine, with different operating conditions of the vegetable oil. This was due to
improvement in the fuel spray characteristics at higher injection pressures and increase of air entrainment, at the advanced
injection timings, causing lower smoke levels.
Table 7. Data of smoke levels (HSU) at peak load operation for DF and CJO in the conventional and LHR engines at injection pressures of 190, 230 and 270 bar (NT and PT) and injection timings of 27-33 obTDC
Curves from Fig 10 indicate that NOx levels were lower in CE while they were higher in the LHR engine at different operating conditions of the vegetable oil at the peak load when compared with diesel operation. This was due to the lower heat release rate, because of the longer duration of combustion, causing lower gas temperatures with the vegetable oil operation on CE, which reduced NOx levels. The increase of combustion temperatures with the faster combustion and improved heat release rates in the LHR engine caused higher NOx levels. As expected, preheating of the vegetable oil further increased NOx levels in CE and
in LHR engine cause higher NOx levels. As expected, preheating of the vegetable oil further increased NOx levels in CE and
reduced the same in LHR engine when compared with the normal vegetable oil. This was due to improved heat release rates
and increased mass burning rate of the fuel with which combustion temperatures increase leading to increase NOx emissions
in the CE and decrease of combustion temperatures in the LHR engine with the improvement in air-fuel ratios leading to
decrease NOx levels in LHR engine.
Figure 10. Variation of NOx levels (ppm) with BMEP (bar) in both versions of the engine at recommended and optimized injection timings (curves: 1-CE-Diesel-27bTDC, 2-CE-CJO-27bTDC, 3-LHR-CJO-27bTDC, 4-CE-CJO-32bTDC, 5-LHR-CJO-31bTDC)
TABLE 8 denotes that NOx levels increased with the advancing of the injection timing in CE with different
operating conditions of vegetable oil.
Table 8 Data of NOx emissions at peak load operation
Residence time and combustion temperatures increased when the injection timing was advanced with the vegetable oil operation, which caused higher NOx levels. With the increase of injection pressure, fuel droplets penetrated further and found their oxygen counterpart more easily. Turbulence of the fuel spray increased the spread of the droplets, thus leading to a decrease in
NOx levels with increase of injection pressure in CE. However, decrease of NOx levels was observed in LHR engine, due to
decrease of combustion temperatures, when the injection timing was advanced and with increase of injection pressure.
3.3 Combustion Characteristics
From TABLE 9, it could be seen that with vegetable oil operation, peak pressures were lower in CE while they
were higher in LHR engine at the recommended injection timing and pressure, when compared with pure diesel operation on
CE. This was due to the increase of ignition delay, as vegetable oils require a longer duration of combustion. Meanwhile, the piston started its downward motion, thus increasing the cylinder volume by the time combustion took place in CE. The LHR engine increased the
mass-burning rate of the fuel in the hot environment leading to produce higher peak pressures. The advantage of using LHR
engine for vegetable oils was obvious as it could burn low Cetane and high viscous fuels. Peak pressures increased with the
increase of injection pressure and with the advancing of the injection timing in both versions of the engine, with the
vegetable oil operation. Higher injection pressure produces smaller fuel particles with a higher surface-to-volume ratio, giving
rise to higher PP. With the advancing of the injection timing to the optimum value with the CE, more amount of the fuel
accumulated in the combustion chamber due to increase of ignition delay as the fuel spray found the air at lower pressure
and temperature in the combustion chamber. When the fuel- air mixture burns, it produced more combustion temperatures
and pressures due to the increase of the mass of the fuel. With the LHR engine, peak pressures increased due to effective utilization
of the charge with the advancing of the injection timing to the optimum value. The value of TOPP decreased with the
advancing of the injection timing and with increase of injection pressure in both versions of the engine, at different operating
conditions of vegetable oils. TOPP was more with different operating conditions of vegetable oil in CE, when compared with
pure diesel operation on CE. This was due to higher ignition delay with the vegetable oil when compared with pure diesel
fuel. This once again established the fact by observing lower peak pressure and higher TOPP, that CE with vegetable oil
operation showed the deterioration in the performance when compared with pure diesel operation on CE. Preheating of the
vegetable oils showed lower TOPP, compared with vegetable oil at normal temperature. This once again confirmed, by the lower TOPP and higher PP, that the performance of both versions of the engine improved with the preheated vegetable oil compared with the normal vegetable oil. The trend of increase of MRPR and decrease of TOMRPR indicated better and faster energy substitution and utilization by vegetable oil, which could replace 100% diesel fuel. These combustion characteristics were within limits; hence the vegetable oil could be effectively substituted for diesel fuel.
Table 9. Data of PP, MRPR, TOPP and TOMRPR at peak load operation
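As a rough illustration of how the four combustion characteristics reported in Table 9 could be extracted from sampled pressure-crank angle data, the following sketch (an assumption for illustration, not the paper's software package) picks the peak pressure and the maximum rate of pressure rise, together with the crank angles at which they occur.

import numpy as np

def combustion_characteristics(theta_deg, p_bar):
    """PP, TOPP, MRPR and TOMRPR from sampled crank angle (deg) and pressure (bar)."""
    theta = np.asarray(theta_deg, dtype=float)
    p = np.asarray(p_bar, dtype=float)
    i_pp = int(np.argmax(p))                  # index of peak pressure
    dp_dtheta = np.gradient(p, theta)         # rate of pressure rise, bar/deg
    i_mrpr = int(np.argmax(dp_dtheta))        # index of maximum rate of pressure rise
    return {"PP (bar)": p[i_pp], "TOPP (deg)": theta[i_pp],
            "MRPR (bar/deg)": dp_dtheta[i_mrpr], "TOMRPR (deg)": theta[i_mrpr]}

# Hypothetical usage with synthetic data standing in for the recorded signals:
theta = np.linspace(-30.0, 60.0, 181)
p = 40.0 + 30.0 * np.exp(-((theta - 8.0) / 12.0) ** 2)
print(combustion_characteristics(theta, p))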
IV. CONCLUSIONS
The optimum injection timing was found to be 32 obTDC for CE, while it was 31 obTDC for the LHR engine at an injection pressure of 190 bar. At the recommended injection timing, relative to CE with pure diesel operation, peak brake thermal efficiency increased by 2%, BSEC at peak load decreased by 1%, exhaust gas temperature at peak load increased by 40°C, coolant load decreased by 10%, volumetric efficiency at peak load decreased by 14%, smoke levels decreased by 17%, NOx levels increased by 49%, PP increased by 24% and TOPP was found to be lower with the LHR engine. Performance of both versions of the engine improved with advanced injection timing and with an increase of injection pressure with vegetable oil operation.
ACKNOWLEDGMENTS
The authors thank the authorities of Chaitanya Bharathi Institute of Technology, Hyderabad, for providing the facilities for carrying out this research work. Financial assistance provided by the All India Council for Technical Education (AICTE), New Delhi, is gratefully acknowledged.
REFERENCES
1. Cummins, C. Lyle, Jr. Diesel's Engine, Volume 1: From Conception to 1918. Wilsonville, OR, USA: Carnot Press, 1993, ISBN 978-0-917308-03-1.
2. Pramanik, K. Properties and use of Jatropha curcas oil and diesel fuel blends in compression ignition engine. Journal of Renewable Energy, 2003, 239-248.
3. Ramadhas, A.S., Jayaraj, S. and Muraleedharan, C. Use of vegetable oils as I.C. engine fuels - A review. Renewable Energy, 2004, 29, 727-742.
4. Pugazhvadivu, M. and Jayachandran, K. Investigations on the performance and exhaust emissions of a diesel engine using preheated waste frying oil as fuel. Renewable Energy, 2005, 30(14), 2189-2202.
5. Agarwal, D. and Agarwal, A.K. Performance and emissions characteristics of jatropha oil (preheated and blends) in a direct injection compression ignition engine. Int. J. Applied Thermal Engineering, 2007, 27, 2314-2323.
6. Choudhury, S., Bose, P.K. Karanja or jatropha - a better option for an alternative fuel in C.I. engine. Proc. of the International Conference & XX National Conference on I.C. Engines and Combustion, 2007, 321-326.
7. Surendra, R.K. and Suhash, D.V. Jatropha and karanj bio-fuel: as alternate fuel for diesel engine. ARPN Journal of Engineering and Applied Science, 2008, 3(1).
8. Canaker, M., Ozsezen, A.N. and Turkcan, A. Combustion analysis of preheated crude sunflower oil in an IDI diesel engine. Biomass Bio-energy, 2009, 33, 760-770.
9. Venkanna, B.K. and Venkatarama Reddy, C. Performance, emission and combustion characteristics of DI diesel engine running on blends of honne oil/diesel fuel/kerosene. International Journal of Agriculture and Biology Engineering, 2009, 4(3), 1-10.
10. Misra, R.D. and Murthy, M.S. Straight vegetable oils usage in a compression ignition engine - A review. Renewable and Sustainable Energy Reviews, 2010, 14, 3005-3013.
11. Hanbey Hazar and Huseyin Aydin. Performance and emission evaluation of a CI engine fueled with preheated raw rapeseed oil (RRO)-diesel blends. Applied Energy, 2010, 87, 786-790.
12. Murali Krishna, M.V.S. Performance evaluation of low heat rejection diesel engine with alternate fuels. PhD Thesis, J.N.T. University, Hyderabad, 2004.
13. Raheman, H., Ghadege, S.V. Performance of compression ignition engine with mahua biodiesel. Fuel, 2007, 86, 2568-2573.
14. Banapurmath, N.R., Tewari, P.G., Hosmath, R.S. Performance and emission characteristics of direct injection compression ignition engine operated on honge, jatropha and sesame oil methyl ester. Journal of Renewable Energy, 2008, 33, 1982-1988.
15. Murat, K., Gokhan, Ergen and Murat, H. The effects of preheated cottonseed oil methyl ester on the performance and exhaust emissions of a diesel engine. Applied Thermal Engineering, 2008, 28, 2136-2143.
16. Jayant Singh, Mishra, T.N., Bhattacharya, T.K. and Singh, M.P. Emission characteristics of methyl ester of rice bran oil as fuel in compression ignition engine. International Journal of Chemical and Biological Engineering, 2008, 1(2), 62-66.
17. Rasim, B. Performance and emission study of waste anchovy fish biodiesel in a diesel engine. Fuel Processing Technology, 2011, 92, 1187-1194.
18. Jaichandar, S. and Annamalai, K. The status of biodiesel as an alternative fuel for diesel engine - An overview. Journal of Sustainable Energy & Environment, 2011, 2, 71-75.
19. Parlak, A., Yasar, H., Eldogan, O. The effect of thermal barrier coating on a turbocharged diesel engine performance and exergy potential of the exhaust gas. Energy Conversion and Management, 2005, 46(3), 489-499.
20. Ekrem, B., Tahsin, E., Muhammet, C. Effects of thermal barrier coating on gas emissions and performance of a LHR engine with different injection timings and valve adjustments. Journal of Energy Conversion and Management, 2006, 47, 1298-1310.
21. Ciniviz, M., Hasimoglu, C., Sahin, F., Salman, M.S. Impact of thermal barrier coating application on the performance and emissions of a turbocharged diesel engine. Proceedings of the Institution of Mechanical Engineers Part D - Journal of Automobile Engineering, 2008, 222(D12), 2447-2455.
22. Parker, D.A. and Dennison, G.M. The development of an air gap insulated piston. SAE Paper No. 870652, 1987.
23. Rama Mohan, K., Vara Prasad, C.M., Murali Krishna, M.V.S. Performance of a low heat rejection diesel engine with air gap insulated piston. ASME Journal of Engineering for Gas Turbines and Power, 1999, 121(3), 530-540.
24. Karthikeyan, S., Arunachalam, M., Srinivasan Rao, P. and Gopala Krishanan, K.V. Performance of an alcohol, diesel oil dual-fuel engine with insulated engine parts. Proceedings of 9th National Conference of I.C. Engines and Combustion, 1985, 19-22, Indian Institute of Petroleum, Dehradun.
25. Jabez Dhinagar, S., Nagalingam, B. and Gopala Krishnan, K.V. Use of alcohol as the sole fuel in an adiabatic diesel engine. Proceedings of XI National Conference on I.C. Engines and Combustion, 1989, 277-289, I.I.T., Madras.
26. Jabez Dhinagar, S., Nagalingam, B. and Gopala Krishnan, K.V. A comparative study of the performance of a low heat rejection engine with four different levels of insulation. Proceedings of IV International Conference on Small Engines and Fuels, Chiang Mai, Thailand, 1993, 121-126.
27. Krishna Murthy, P.V. Studies on biodiesel with low heat rejection diesel engine. PhD Thesis, J.N.T. University, Hyderabad, 2010.
28. Barsic, N.J. and Humke, A.L. Performance and emission characteristics of naturally aspirated engine with the vegetable oil fuels. SAE Paper 810262, 1981.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 3 (September 2012) PP: 27-31
A Dynamic Approach to Multicast Communications
Using Decentralized Key Management
D.Chalapathirao1, K.Rajendra Prasad2 , S.Venkata Rao3
1 Department of CSE, Chaithanya Engg. College, Visakhapatnam, A.P., India.
2 Assoc. Professor, Department of CSE, Chaithanya Engg. College, Visakhapatnam, A.P., India.
3 Assist. Professor, Department of CSE, Kaushik College of Engg., Visakhapatnam, A.P., India.
Abstract––Security of multicast communication in large groups is a major obstacle to effectively controlling access to the transmitted data. IP Multicast itself does not provide any specific mechanism to control intruders in the group communication. Group key management is mainly addressed through the trust model developed by the Group Key Management Protocol (GKMP). Several group key management protocols have been proposed; this paper, however, elaborates mainly on group key management, which has sound scalability when compared with other central key management systems. This paper emphasizes a protocol which provides scope for dynamic group operations such as joining the group, leaving the group, and merging, without the need for central mechanisms. An important component for protecting group secrecy is re-keying. Combined with strong public and private key algorithms, this would better serve multicast security.
Index Terms––Group key management, multicast security, scalability.
I. INTRODUCTION
1.1 Unicast, Broadcast and Multicast
A multicast group can be identified with a class D IP address, so that members can enter or leave the group under the management of the Internet Group Management Protocol. The trust model defines the relationships between the entities in a multicast security system. For secure group communication in a multicast network, a group key shared by all group members is required. This group key should be updated when there are membership changes in the group, such as when a new member joins or a current member leaves. Along with these considerations, we take the help of relatively prime numbers and their properties, which play a vital role in the construction of keys and enhance the strength of the security.
Multicast cryptosystems are preferable for sending messages to a specific group of members in the multicast group. Unicast transfers a message to one recipient, and broadcast sends a message to all the members in the network. Multicast applications have played a vital role in the growth of the Internet. The Internet has experienced explosive growth over the last two decades. The number of Internet users, hosts and networks triples approximately every two years. Internet traffic is also doubling every three months, partly because of the increased number of users, but also because of the introduction of new multicast applications such as video conferencing, games and ATM applications. Applications such as the WWW, multimedia conferencing, e-commerce, VOD (Video on Demand), Internet broadcasting and video conferencing require a flexible multicasting capability. Multicast is a relatively new form of communication where a single packet is transmitted to more than one receiver. The Internet does not manage multicast group membership tightly. A multicast message is sent from a source to a group of destination hosts. A source sends a packet to a multicast group, specifying the multicast group address as the destination. The packet is automatically duplicated at intermediate routers, and any host that has joined the group can receive a copy of the packet. Because a host can receive the transmitted data of any multicast group, secure communication is more important in multicasting than in unicasting.
Figure 1.1: Unicast/Multicast Communication through routers
Another important feature of multicasting is its support for data casting applications. Instead of using a set of
point-to-point connections between the participating nodes, multicasting can be used for distribution of the multimedia data
to the receivers.
II. A SCALABLE GROUP KEY MANAGEMENT PROTOCOL
Our new scalable group key management protocol is based on the following: the Chinese Remainder Theorem and a
hierarchical graph in which each node contains a key and a modulus. The protocol is designed to minimize re-key messages,
bandwidth usage, encryption, and signature operations. Chinese Remainder Theorem: Let m_1, m_2, ..., m_n be n positive integers that are pairwise relatively prime (i.e. gcd(m_i, m_j) = 1 for i ≠ j, 1 ≤ i, j ≤ n), let R_1, R_2, ..., R_n be any positive integers, and let M = m_1·m_2·...·m_n. Then the set of linear congruences X ≡ R_1 (mod m_1), ..., X ≡ R_n (mod m_n) has a unique solution modulo M, given by X = Σ_i R_i·M_i·y_i mod M, where M_i = M/m_i and y_i = M_i^(-1) mod m_i. In the new protocol, the keys and moduli are constructed as
a tree and maintained by the key server. The tree graph is similar to the tree graph in the LKH protocol but each node of the
tree in the new protocol is assigned two values: a key and a modulus.
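A minimal sketch of the theorem as stated above, assuming pairwise relatively prime moduli (Python 3.8+ supplies the modular inverse via pow(a, -1, m)):

from math import prod

def crt(residues, moduli):
    """Return the unique X (mod M) with X = R_i (mod m_i) for every i."""
    M = prod(moduli)
    X = 0
    for R_i, m_i in zip(residues, moduli):
        M_i = M // m_i
        y_i = pow(M_i, -1, m_i)   # modular inverse of M_i modulo m_i
        X += R_i * M_i * y_i
    return X % M

# Example: X = 2 (mod 3), X = 3 (mod 5), X = 2 (mod 7)  ->  X = 23
print(crt([2, 3, 2], [3, 5, 7]))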
Figure 1: The key and modulus tree graph
Figure 1 depicts the key and modulus graph, where TEK is a traffic encryption key, k_ij is a key encryption key, and m_ij is a modulus. Moduli maintenance: the key server needs to store 2·log2(n) moduli and each member needs to store log2(n) moduli, but the moduli do not need to be kept secret. Sibling nodes in the tree graph are assigned two different moduli (i.e., m_i1 and m_i2, where i is the depth in the tree), nodes at different levels of the tree are assigned different moduli, and each pair of siblings at the same tree depth is assigned the same two moduli under different parents (see Figure 1). This means there are only 2·log2(n) different moduli in the tree graph, i.e. m_ij (1 ≤ i ≤ log2(n), j = 1, 2) where i is the depth of the node in the tree, and the nodes (except the root) on a path from a leaf to the root, together with their direct children, exactly cover all moduli. For instance, in Figure 1, for the path from u1 to the root, the moduli on the path include m_11, m_21 and m_31, and the moduli on its direct children include m_12, m_22 and m_32. In addition, all different moduli in the tree graph should be pairwise relatively prime (i.e., gcd(m_ij, m_st) = 1 for i ≠ s or j ≠ t), and each modulus should be bigger than the key encryption value, i.e., m_ij > E_kil(k_st), where m_ij and k_il belong to the same node and k_st belongs to its parent node. Key maintenance: the key server needs to store 2n-1 keys, i.e., TEK and k_ij (1 ≤ i ≤ log2(n), 1 ≤ j ≤ 2^i), where i is the depth of the node in the tree and j is the ordinal number of the node at the ith depth of the tree, and each member needs to store log2(n)+1 keys. The key server shares the keys with each member on the path from its leaf to the root. The keys on this path from the leaf to the root need to be updated in the protocol when a member joins or leaves the group, but all moduli are kept fixed. To update the keys on the tree graph, the key server generates a new key for each updated node and encrypts it with its children's keys on the path from the leaf to the root. For instance, the key server needs to generate new keys {TEK', k'_il} to update {TEK, k_il} for the arrival of member u_d (whose leaf key is k_wd, w = log2(n)) to the group, where 1 ≤ i ≤ log2(n)-1 and l = ⌈d/2^(log2(n)-i)⌉, the upper (ceiling) integer of d/2^(log2(n)-i), and encrypts the updated keys using the following formula, where e = ⌈d/2^(log2(n)-i-1)⌉ and v = ⌈d/2^(log2(n)-1)⌉:
The key server then calculates a lock L as follows and multicasts the lock with the indices of the keys (i.e., st in the following formula) to all valid members.
Each member decrypts the updated traffic encryption key and the related key encryption keys based on its own moduli and keys. For the departure of member u_d from the group, the process is the same as above except that K_wd is set to zero (i.e., K_wd = 0).
TABLE I : A COMPARISON ON LEVEL OF PROCESSING DIFFICULTY
As an illustration, we give the following example of the re-key process in Figure 1, where member u8 requests to join the group. The key server generates new keys {TEK', k'_12, k'_24} to update {TEK, k_12, k_24} and performs the following encryptions: K_38 = E_k38(k'_24), K_37 = E_k37(k'_24), K_24 = E_k'24(k'_12), K_23 = E_k23(k'_12), K_12 = E_k'12(TEK'), K_11 = E_k11(TEK'). The key server then calculates a lock as L = K_38·M_32·y_32 + K_37·M_31·y_31 + K_24·M_22·y_22 + K_23·M_21·y_21 + K_12·M_12·y_12 + K_11·M_11·y_11 mod M, where M = m_11·m_12·m_21·m_22·m_31·m_32, M_ij = M/m_ij and y_ij = M_ij^(-1) mod m_ij.
In the protocol, we can see that the key server uses the same modulus (M) and parameters (M_ij, y_ij) to calculate the lock for any re-key process, but the key encryption values (i.e., K_st) used for calculating the lock change based on the re-key requested by different members. This means the key server can pre-calculate the modulus (M) and parameters (M_ij, y_ij) to be used for later re-key processing steps, and only needs to calculate them once for a fixed tree graph.
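The following sketch mirrors the u8 example above under illustrative assumptions: the moduli, plaintext key values and the toy XOR stand-in for the encryption E are all placeholders chosen only to satisfy the protocol's requirements (pairwise coprime moduli, each larger than the value it must carry); the paper's tests used AES-128. It shows that a single lock L lets each member recover exactly its own encrypted update as L mod m_ij.

from math import prod

def toy_encrypt(key, value):            # placeholder for E_key(value), illustration only
    return key ^ value

# Six pairwise coprime moduli m11, m12, m21, m22, m31, m32 (assumed small primes).
moduli = {"m11": 1009, "m12": 1013, "m21": 1019, "m22": 1021,
          "m31": 1031, "m32": 1033}
M = prod(moduli.values())

# Encrypted key-update values, each paired with the modulus of the node that
# must recover it (toy key and plaintext numbers):
K = {"m11": toy_encrypt(11, 500),   # K11 = E_k11(TEK')
     "m12": toy_encrypt(12, 500),   # K12 = E_k'12(TEK')
     "m21": toy_encrypt(23, 400),   # K23 = E_k23(k'12)
     "m22": toy_encrypt(24, 400),   # K24 = E_k'24(k'12)
     "m31": toy_encrypt(37, 300),   # K37 = E_k37(k'24)
     "m32": toy_encrypt(38, 300)}   # K38 = E_k38(k'24)

# Lock: L = sum of K_ij * M_ij * y_ij mod M, exactly as in the example above.
L = sum(K[name] * (M // m) * pow(M // m, -1, m)
        for name, m in moduli.items()) % M

# A member holding modulus m_ij recovers its K_ij simply as L mod m_ij,
# then decrypts it with the key it already shares with the key server.
assert all(L % m == K[name] for name, m in moduli.items())
print("every member recovers its own encrypted update from the single lock")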
III. SCALABILITY OF GROUP KEY MANAGEMENT PROTOCOLS
In order to measure the scalability of group key management protocols more accurately, we propose the following
scalability metrics: 'computational complexity', 'bandwidth usage', 'storage', 'number of re-key messages', and 'level of processing difficulty'. Computational complexity measures the processing time in the central key server. Bandwidth usage
accounts for the size of total messages sent out by the key server for a re-key process. Storage measures the total size of keys
maintained by the key server. The number of rekey messages is the number of such messages needed to be processed by the
key server. The level of processing difficulty indicates applicability for small mobile devices. Table I gives a comparison on
the level of processing difficulty. Table II gives a comparison of the new protocol with the GKMP, Secure Lock, and LKH
protocols without signature protection. Table III gives a comparison with signature protection, where storage and number of
re-key messages are the same as in Table II.The signature technique for GKMP and LKH is based upon a single digital
signature scheme proposed by Merkle, which has been the most efficient method so far for signing a set of messages
destined to different receivers. From Tables II and III, we see that the LKH and SGKMP reduce the encryption operation and
bandwidth usage from O(n) to O(log n) when compared to the Secure Lock and GKMP protocols, and the length of the lock
from O(n) to O(log n) when compared to the Secure Lock. In addition, SGKMP has better performance when compared to
the LKH protocols. The detailed processing time performance according to computational complexity is tested in the next section for both with and without the signature operation.
TABLE II: A COMPARISON OF GROUP KEY MANAGEMENT PROTOCOLS WITHOUT SIGNATURE
TABLE III: A COMPARISON OF GROUP KEY MANAGEMENT PROTOCOLS WITH SIGNATURE
• N is the length of the encrypted secret key (default is 128 bits for a symmetric cryptograph) or the length of a hash value (default is 128 bits)
• L is the length of the signature
• n is the number of members
• H is a hash operation
• E is a symmetric key encryption operation
• S is a signature operation
• A is a Big Integer addition operation
• M is a Big Integer multiplication operation
• MD is a Big Integer modulus operation
• MR is a Big Integer modulus reverse operation
IV. PERFORMANCE OF THE NEW PROTOCOL
The performance of the SGKMP, LKH, and Secure Lock protocols were tested in order to compare their
scalability. The testing was done on a PC (Intel Pentium 4 CPU 3.00GHz and 1 GB RAM). The software used for the testing
was Java JDK 1.6. The main classes included Java Big Integer, security, and crypto. The encryption algorithm was AES with
a 128 bit encryption key, and the signature algorithm was 512 bit RSA. The testing determined the processing time
performance according to group size (see Figure 2 and 3) both with and without signature protection. From the testing
results, we can see that the Secure Lock has very poor scalability when compared to SGKMP and LKH (Figure 2, where the
SGKMP and LKH curves are at the bottom), and the SGKMP has very good scalability both with and without signature
protection when compared to LKH (Figure 3). The processing time of SGKMP with signature for a rekey request
corresponding to a group size of up to a million members is less than 10 milliseconds.
Fig. 2. Performance of the SGKMP, LKH, and SL protocols.
Fig.3. Performance of the SGKMP and LKH protocols.
V. CONCLUSION & FUTURE SCOPE
To improve the scalability of group key management, we propose a scalable group key management protocol and
demonstrate that it has better scalability in terms of computational complexity (from testing) and bandwidth usage (from
calculations in Tables II and III). The security of PGKMP is mainly based on neighborhood authentication of the nodes, as
well as on security associations, while the use of public key cryptography is minimized. PGKMP finds disjoint paths only, so
the route discovery cost will be less as compared to LKH where all possible paths exist and a key server has to be
maintained. Also, due to the double encryption scheme provided in the protocol, the network is more secure. There is scope to further decrease the overheads and increase the security of this protocol (PGKMP), and a positive outlook for its enhancement.
REFERENCES
1. C. K. Wong, M. Gouda, and S. S. Lam, "Secure group communications using key graphs," IEEE/ACM Trans. Networking, vol. 8, no. 1, pp. 16-30, Feb. 2000.
2. D. Wallner, E. Harder, and R. Agee, "Key management for multicast: issues and architecture," National Security Agency, RFC 2627, June 1999.
3. H. Harney and C. Muckenhirn, "Group Key Management Protocol (GKMP) architecture," RFC 2093, July 1997.
4. H. Harney and C. Muckenhirn, "Group Key Management Protocol (GKMP) specification," RFC 2094, July 1997.
5. G. H. Chiou and W. T. Chen, "Secure broadcast using secure lock," IEEE Trans. Software Engineering, vol. 15, no. 8, pp. 929-934, Aug. 1989.
Mr. D. CHALAPATHI RAO received his postgraduate MSc (Maths) degree from the Department of Mathematics, Andhra University, in 2008. He is pursuing a Master of Technology in the Department of Computer Science Engineering, Chaithanya Engineering College, Visakhapatnam, JNT University. His research interests are in computer networking.
Mr. S. VENKATA RAO received his postgraduate MCA degree from the Department of Computer Science and Engineering, Andhra University, in 2008. His research interests are computer networking, data mining and data structures.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September 2012) PP: 19-26
Deformation Studies of Endodontic Obturator Tip Using FEM
Approach
Ratnakar R.Ghorpade1, Dr.Kalyana Sundaram2, Dr.Vivek Hegde3
1 Department of Mechanical Engineering, MAEER's, Maharashtra Institute of Technology, Paud Road, Kothrud, University of Pune, Pune-411 038.
2 Department of Mechanical Engineering, BRACT's, Vishwakarma Institute of Technology, Bibwewadi, University of Pune, Pune-411 037.
3 Department of Conservative Dentistry & Endodontics, M.A. Rangoonwala College of Dental Sciences & Research Centre, Pune-411 001.
Abstract––Heated cutter tips are used extensively by dental surgeons for treating root canals and other obturation
procedures. Biocompatible cutter design involves the challenges of both structural and thermal loads in the vicinity of the
root canal area. Natural gutta percha is very similar to natural rubber and is mainly used as a filler material to seal off
the prepared root canal cavity against bacterial infections. The cutter is used to heat, soften, condense and cut off
the excess gutta percha from the apical region of the tooth crown with ease. The gutta percha cutter tip geometry is selected
initially based on the available ideal root canal geometry. The cutter tip is heated by a suitable heating mechanism to
300 °C for 3 to 5 seconds, so there is a cyclic effect of the heating-tip temperature on the inner point of the tip. Thermo-mechanical
analysis of the gutta percha cutter tip is carried out in ANSYS 10.0, which gives the combined stress pattern and
deformation pattern on the tip. FEA is a proven cost-saving tool that can reduce design cycle time; it can therefore be used
as an accurate tool to investigate the thermo-mechanical stress and deformation pattern on the cutter tip, and it will help ensure minimal
effects of the combined stresses in the tip on the inner wall of the prepared root canal cavity.
Keywords––Gutta Percha, ANSYS10.0, FEA.
I. INTRODUCTION
1.1 Problem Statement:Gutta percha is a natural isoprene polymer extracted from the resin and sap of the trees of the Palaquium family,
which grow mainly in south-east Asia. Natural gutta percha is very similar to natural rubber, it is mainly used as filler
material to seal off the prepared root canal cavity against bacterial infections. The cutter is being used to heat, soften,
condense and cut off the excess gutta percha from the apical region of tooth crown with ease. The objective of this paper is
to evaluate critical failure safety of the cutter used for dental surgery.
1.2 Methodology:The design of Gutta percha cutter tip, challenges the creativity of the designer to use knowledge of solid geometry
with liberty, based on his/her own experience to avoid tooth interference, and knowledge of optimization practices. FEA is a
proven cost saving tool and can reduce design cycle time and therefore can be used as accurate tool to investigate thermomechanical stress and deformation pattern on cutter tip. Our work in this paper is directed towards optimizing the tip
geometry for defined cutting load and given heat pattern on tip surface. We also sought to predict the range of deformation
possible due to combined effect of thermal stress and stress due to cutting load.
1.3 Description:
The gutta percha cutter tip geometry is modeled based on an idealized root canal assumed for a typical molar
tooth. The cutter tip is heated to 300 degrees Celsius for 3 to 5 seconds and is left to cool naturally during usage in the
cavity, where it cools down after transferring heat to the gutta percha. Heat is continuously supplied to the gutta percha tip
by a suitable mechanism. Thus the temperature at the inner point of the tip and the cutting force on the tip surface are the inputs for the
analysis. Thermo-mechanical analysis of the gutta percha cutter tip is carried out in ANSYS 10.0, which gives the combined
stress pattern and deformation pattern on the tip. The data from this analysis were then used to make design decisions.
1.4 Analysis :The results obtained from FEA helps designers to take decisions on the design safety and also to optimize it by
analyzing the design for available different materials. FEA acts as a tool for analyzing different Tip geometries for current
study. It also saves time and also simplify complexity involved in analysis of element i.e. tip in the biomedical device. FEA
also helps us to analyze and better understand various critical sections and to predict the failure mode.
II. LITERATURE SURVEY
Prior work on the gutta percha cutter tip reveals a "wedging effect" created by the intracanal forces
developed during obturation, which can be measured using a force analyzer device, i.e. the Endogramme (it measures force vs.
time and permits the comparison of different obturation techniques, thus providing a new approach to the analysis of
intracanal forces). The wedging effect is the force resulting from the hydrostatic pressure developed by a plugger
while it is pushing gutta-percha into a canal. Assumption: the hydrostatic pressure is assumed equal in all directions [1].
The modified Endographe, with a new cupule, is a force analyzer device used to compare the forces and wedging
effects developed in the root canal using four obturation techniques: warm vertical compaction, lateral condensation,
thermomechanical compaction, and Thermafil condensation. The mean values of the wedging effect for warm vertical
compaction, lateral condensation, thermomechanical compaction, and Thermafil condensation were, respectively, 0.65 ±
0.07 kg, 0.8 ± 0.1 kg, 0.6 ± 0.08 kg, and 0.03 ± 0.01 kg [2].
Stress distribution patterns, i.e. radicular stresses during obturation and root stresses during occlusal loading,
were investigated with finite element analysis in simulated, biomechanically prepared mandibular first premolars with four different tapers at two
different compaction forces and an occlusal load. FEA is used for the numerical analysis of complex
structures to determine the stress and strain distribution pattern in the internal structure of the tooth. Assessment of stress levels
by measuring deformation patterns (using the photoelastic method, strain gauges, or an Instron universal testing machine) inside the root
canal is extremely difficult; therefore FEA has been utilized to address these difficulties and gain insight into internal
stress distributions. Assumptions and input data for FEA: i) A straight root was chosen for this study in order to eliminate
effects due to canal curvature. ii) Gutta-percha was compacted by the vertical condensation technique in three separate vertical
increments (apical 1/3, middle 1/3, cervical 1/3). iii) Vertical compaction forces of 10 N and 15 N were used for each
increment, applied by a simulated plugger. iv) The periodontal ligament layer was assumed to be 200 μm thick. v) A surrounding bone
volume was created to support the root. vi) A simulated standard access opening was made in the crown. vii) Root canals with 2%,
4%, 6%, & 12% taper created out of which 4% & 6% were chosen for clinical relevancy. 12% tapered canal chosen
arbitrarily to simulate the effects of excessive canal preparation. viii) Model with apical preparation of 0.35 mm at the point
of constriction, 0.5 mm (from what would be clinically perceived as the radiographic apex). ix) Isotropic properties applied
for Dentine (D), Periodontal ligament (PDL), Supporting bone volume (B), Gutta percha (GP). PDL modeled as a soft
incompressible connective layer ( to approximate fluid behavior). x) Coefficient of friction between GP and root canal wall
were selected as 0.1 to 0.25. xi) Warm GP compacted in three vertical increments until the canal was filled, at the start of
compaction a temperature of 60 ˚C, reduced to 37˚C during filling procedure. xii) Plugger surface was assumed to have
rounded edges and a tip diameter of 0.5 mm ( smaller than the canal diameter) at each compaction increment. Conclusion:
During simulated obturation, root stresses decreased as the root canal taper increases and stresses were greatest at the apical
third and along the canal wall. The fracture resistance of the restored endodontically treated tooth is a function of the
strength of the root (taper of prepared canal) and remaining coronal tooth structure. The clinician must make a decision to
use instruments which have an inherently larger or smaller taper based on the architecture present in a given canal. The
development of new design features such as varying tapers, noncutting safety tips and varying length of cutting blades in
combination with the metallurgic properties of alloy nickel-titanium have resulted in a new generation of instruments and
concepts. Additional in-vivo and in-vitro tests and clinical trials are desirable in order to elucidate the accuracy of finite
element analysis. [3]
The ultimate goal of endodontic treatment is to preserve the tooth in normal function as long as possible. Stress
distributions were investigated in a mandibular second premolar by using a commercially available 3-dimensional finite
element analysis software program (MSC Patran 2007 r1a and MSC Marc 2007 r1). The original tooth model was adapted to
construct the other tooth models with varying degrees of curvature (15, 30, and 45 degrees). A standard access opening was
made in the crown, and a round root canal preparation was created with 0.04 taper and a final apical preparation of size 35.
The models also simulated a cylindrical section of normal bone 2 mm from the cementoenamel junction and a periodontal
ligament space 200 μm thick. All bonds at the interface between dentin and post materials were assumed to be perfect
bonds. All posts were assumed to leave 4-5 mm of gutta-percha root filling apically. Each model was meshed by structurally
solid elements defined by nodes having 6 degrees of freedom in tetrahedral bodies (MSC Patran 2007 r1a). Isotropic
properties were applied for all materials. Displacement of all nodes on the lateral surface and base of the supporting bone
was constrained. A cementum layer was not incorporated into the models because it was too thin to be simulated accurately,
and its modulus of elasticity is close to that of dentin. All conditions can be kept identical (such as tooth morphology,
mechanical properties, load, and periodontal support). The numeric method ensures that the root canal preparation has the
same size and taper in each model, which would have been impossible to achieve in an experimental study in human teeth.
Stress distribution in the clinical situation is no longer uniform; root canal shape and root morphology also affect the stress
distribution, resulting in an increased tensile stress on the internal root canal wall. The results in this study are presented in a
qualitative rather than a quantitative manner. [4]
From the computational point of view, only three studies (E. Berutti, G. Chiandussi, I. Gaviglio, and A. Ibba, X.
Xu and Y. Zheng, S. Necchi, S. Taschieri, L. Petrini, and F. Migliavacca) based on finite element analyses (FEA) were
conducted, aiming to evaluate some aspects of the mechanical behavior of the instruments (e.g., the stress distribution)
related to their critical condition during root canal instrumentation and not assessable through laboratory or in vivo tests.
Authors carried out extensive research based on their previous findings by comparing the mechanical performance of three
different Ni-Ti alloys when used to produce rotary endodontic instruments. Modeling and analysis: Rhinoceros 2.0
Evaluation (used to create the geometrical model) and ABAQUS 6.5-1/Standard (used for the computational analysis).
Considerations: the handle part of the file does not serve any specific function in the shaping procedure, and therefore it was
neglected in the numerical analyses. Model meshing: 10-node tetrahedral elements; the canals were meshed using 4-node
bilinear elements. Element & node optimization: a grid sensitivity study was performed to choose the most convenient
number of elements (in terms of computational time and results accuracy). The canals were modeled as simple rigid surfaces
shaped as pipes, based on radius, angle, and position of curvature. In this work, four types of canal geometries were
considered, characterized by a radius r of 2 or 5 mm, an angle a of 45˚ or 30˚, and an apical or middle position (p) of the
curvature. The modeled instruments were forced into the root canals until the apex was reached (insertion step) and
immediately retrieved (removal step). As a first approach to the problem, neither friction nor machining action was
considered between the instrument blade and the canal wall. A 'soft' contact with an exponential pressure-overclosure
model was imposed to simulate their interaction. Hence, the torsional stresses induced in the file during the procedure were
neglected. The primary curvature parameter influencing the mechanical behavior of the instrument (producing higher variations of
strain) is the radius of curvature. The second dominant parameter is judged to be the position of the curvature for the Ni-Ti
alloys and the angle of curvature for the stainless steel. The stainless steel instruments showed a lower ability to conform to
the canal shape during an entire insertion-removal cycle. In particular, the tip of the stainless steel file plasticized just after
the first contact with the canal wall. The Ni-Ti file bends uniformly along the blade, following the original curvature of the
canal. FEA are recognized to have an important role in optimizing the behavior of biomedical devices. [5]
After obturation, a vertical load was applied by means of a spreader inserted into the canal until fracture
occurred. Forces encountered during lateral condensation alone should not be a direct cause of vertical root fracture; the load
generated during lateral condensation is less than the load required to fracture the root. GP compaction in the root canal was achieved
by attaching a hand spreader tip (Hu-Friedy) to an Instron testing machine (Model 4206, Instron Corp., Canton, MA). Fracture
loads varied from 5 kg to 24.3 kg; the mean fracture load obtained by Lertchirakarn et al. for the mandibular premolar was 9.7 kg.
Factors influencing the fracture: i) root canal shape with a reduced radius of curvature is the single most important factor
influencing the location and direction of fracture lines; ii) external root morphology; iii) dentine thickness; iv) oval root
canal. Fracture occurs when the tensile stress in the canal wall exceeds the ultimate tensile strength of dentin. When an apical
pressure is applied with a round instrument (D11 hand spreader) inserted into an elliptical canal, it will bind at its narrowest
width, which is typically from mesial to distal. The initial forces will be directed in the mesio-distal direction, leading
to a strain on the bucco-lingual surface. Hence the resulting fracture lines will orient in the bucco-lingual direction. [6]
To calculate the stress in the tooth, the surrounding periodontal ligament, and the alveolar bone when a lower first
premolar is subjected to intrusion or torque movement using a constant moment, the model was subjected to an intrusive force
on the premolar of 0.5 N and a lingual root torque of 3 N·mm. The extent of changes in the tissue structures was proportional to
the amount and duration of the applied forces and moments. The main resorption occurred at the root apex. The tooth is
constructed as a hollow body with a consistent thickness of 0.2 mm. Young's modulus of the tooth is much greater than that
of the periodontal ligament. Young's modulus of the bone is clearly greater (1,000-10,000 times) than that of the
periodontal ligament. The exact geometry of the periodontal ligament space could not be quantified because of its slight
width. Parameters were given for the linear, elastic mechanical properties of the tooth (consisting of enamel and dentin), the periodontal
ligament (whose Young's modulus varies) and the alveolar bone. Capillary blood pressure in the PDL ranges between 15 and 35 mm Hg
(i.e. 0.002-0.0047 MPa). Meshing: 8-noded hexagonal elements (as their stability is much better than tetrahedrons). The PDL
undergoes the greatest deformation (being the softest tissue, so a high element density is used for it). Forces: Fz = -0.25 N (on the
lingual and labial sides of the tooth crown), Fz = -0.5 N (as the intrusive force). Maximums and minimums of the hydrostatic stress
portion occur at the apex and alveolar crest. Further research needs: it is as yet impossible to measure the stress inside the
periodontal ligament in vivo; stress and strain in this area can only be calculated using the finite element method. There has
not been sufficient quantitative investigation so far concerning the connection between forces and movement. It is still
impossible to evaluate all the factors influencing resorption; orthodontic forces represent only one cofactor affecting root
resorption. [7]
A marked decrease of the instantaneous shear modulus was observed at the melting point, and the range of the first-order
transition temperature on heating was from 42 °C to 60 °C. A marked specific volume change was also observed at the
first-order transition temperature. During the setting process, the crystallization of gutta percha is thus thought to adversely
affect the sealing ability in the root canal space. The main cause of endodontic treatment failure is generally said to be
incomplete sealing of the root canal, and accordingly it is important to obturate the root canal closely. Fluidity and melting
point can be altered by adjusting the content, the average molecular weight and the molecular weight distribution, but a large
amount of volumetric shrinkage with crystal growth cannot be avoided when using gutta percha. The technique using melted
gutta percha alone may not be favourable compared with conventional lateral condensation because melted gutta percha
undergoes a large amount of shrinkage during setting. [8]
The findings of preliminary studies showed that the force required to produce a significant increase in the diameter
of heated gutta-percha specimens should be greater than 3 kg. Further research is required to increase the accuracy and
standardisation of the analysis of the thermoplastic properties of gutta-percha and similar root canal filling materials. Few
studies have focused on the differences between commercially available brands of gutta-percha, and no specific methodology for
testing the thermoplasticity of gutta-percha has been described. [9]
In endodontic therapy, dental gutta-percha is plasticized by a heat carrier or by thermomechanical compaction,
which, if used improperly, may cause partial decomposition if the heat generated exceeds 100 °C, according to the Merck
Index. Root canal filling techniques must use temperature control (between 53 °C and 59 °C), permitting the β-phase gutta-percha
to change into the α-phase and avoiding the amorphous phase: gutta-percha in the β-phase begins the α-phase change when the
temperature reaches the range of 48.6 °C to 55.7 °C, and the α-phase material changes to the amorphous phase when heated between
60 °C and 62 °C. The heat source must be used carefully, and the condenser should be heated only for a few seconds before
condensing and cutting the obturation material; if it is overheated, periodontal damage might occur. Heating dental gutta-percha
to 130 °C causes physical changes or degradation. [10]
Heating up to 130 °C causes chemical-physical changes in the gutta-percha; this is due to the presence of additives
(70-80%), which alter the behaviour of the material. For this reason, the dimensional stability of the filling material is not
guaranteed. The mass loss in the gutta-percha polymer could make the cone material more porous and could reduce its root-canal
sealing property. [11]
III. MATHEMATICAL MODELING OF CUTTER TIP
3.1 Mathematical model
The device under study is a handheld portable apparatus used to heat, soften and cut gutta percha as a filler material
with ease and to place it in the root cavity without disturbing the patient. The cutter tip is modeled as a tapered bar loaded at its smaller
end (i.e. the free end) with a force 'P' and fixed at the bigger end, in order to determine its deformation behavior.
Fig. 1 Solid Model of Dental device with cutter tip
Fig. 2 The basic structure of the tooth, with the infected part,
that needs to be cleaned. Also, the root canal of the tooth can be seen.
Fig. 3 Gutta Percha cutter tip geometry under study
D: Diameter at the larger end (i.e. at fixed end 'A'), (mm)
d: Diameter at the smaller end (i.e. at free end 'B'), (mm)
P: Load at the free end of the tapered bar, (N)
lx: Length of the tapered bar, (mm)
l: Length of the smaller-diameter end from point 'C', (mm)
L: Length of the fixed end / larger-diameter end from point 'C', (mm)
x-x: Any cross-section along the bar at distance 'x' from point 'C'
dx: Diameter at section x-x, (mm)
Mx-x: Bending moment at section x-x, (N-mm)
Ix: Moment of inertia at section x-x
E: Young's modulus, (N/mm2)
The taper lines are extended to meet at point 'C'.
Diameter at section x-x: for the uniformly varying cross-section (taper lines meeting at point 'C'), dx = d·(x/l), so that Ix = π·dx^4/64 = π·d^4·x^4/(64·l^4).

Mx-x = E·Ix·(d^2y/dx^2)                                              -------------------(1)

which can be rearranged, with Mx-x = P·(x - l), as

d^2y/dx^2 = Mx-x/(E·Ix) = k·(x - l)/(E·x^4)                          -------------------(2)

The solution of the above equation for the deflection y is obtained by integrating twice, with y = dy/dx = 0 at the fixed end (x = L):

y(x) = (1/E)·∫∫ [k·(x - l)/x^4] dx dx                                 ------------------(3)

where

k = (64·P·l^4)/(π·d^4)                                                ----------------- (4)
3.2 Flowchart to find the deflection at any point on the tapered bar loaded at its smaller (free) end and fixed at the bigger end:
Start
Read P, l, d, E, x, L
k = (64·P·l^4)/(π·d^4)
[compute the deflection y at the given x from equation (3)]
Print 'y'
End
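To give a concrete feel for the magnitudes involved, the short sketch below integrates the curvature of the tapered bar numerically in Python. It is an independent cross-check, not the closed-form solution of equation (3); the geometry and material values (d = 0.2 mm at the loaded free end, 1.2 mm at the fixed end, 10 mm length, E = 83 GPa, P = 10 N) are taken from the FEA input data in Section IV, while the integration scheme and step count are choices made here.

```python
import math

# Numerical double integration of beam curvature for the tapered cutter tip.
# Geometry and material assumed from Section IV; this is a cross-check of the
# deflection magnitudes, not the paper's closed-form solution.
P, E = 10.0, 83e3                       # load (N) and Young's modulus (N/mm^2, i.e. 83 GPa)
d_free, d_fixed, L = 0.2, 1.2, 10.0     # end diameters and length (mm)
n = 100_000
dx = L / n

slope = 0.0   # dy/dx, zero at the fixed end
y = 0.0       # deflection, zero at the fixed end
for i in range(n):
    x = (i + 0.5) * dx                          # distance from the fixed end
    d = d_fixed - (d_fixed - d_free) * x / L    # linearly tapered diameter
    I = math.pi * d ** 4 / 64.0                 # second moment of area (mm^4)
    curvature = P * (L - x) / (E * I)           # |M| / (E*I), load applied at the free end
    slope += curvature * dx
    y += slope * dx

print(f"free-end deflection ~ {y:.3f} mm")
```

With these inputs the integration gives a free-end deflection of roughly 2.4 mm, of the same order as the tip deflections reported in Table 1.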
3.3 Analytical Solution:
Fig. 4 Deflection plot (Analytical Solution)
IV. FINITE ELEMENT ANALYSIS OF CUTTER TIP
4.1 FEA Basis:
The plugger is introduced 3-4 mm into the gutta-percha cone (it must not touch the dentinal walls),
where it remains for a fraction of a second and is then removed. In actual practice, the probe of the Touch'n Heat unit is
introduced into the orifice, activated, and allowed to plunge 3-4 mm into the coronal-most extent of the gutta-percha. The heat-activating
element is then released and, after hesitating momentarily, the cooling instrument is removed along with an
adhered bite of gutta-percha. In this manner, the gutta-percha is heated around the heat-carrier and about 3-4 mm apically
(no more, because gutta-percha is not a good conductor of heat). When the instrument is withdrawn from the root canal,
a bite of gutta-percha remains attached to it and is removed with it.
A new introduction of the heat-carrier will then remove another bite of gutta-percha, moving the level of compaction more
apically. The assistant then hands over the smallest plugger to compact the softened material in the most apical portion of
the root canal.
Fig. 5 Heat softening & compaction of gutta percha by the cutter tip, in steps:
a. The heat-carrier is returned and introduced into the center of the gutta-percha cone in the root canal.
b. The heat-carrier heats the gutta-percha apically to its tip and removes the surrounding material from the canal.
c. A new introduction of the heat-carrier softens the gutta-percha in the apical one third.
d. Another piece of gutta-percha has been removed; now it is time to use the smallest plugger.
4.2 Assumptions and input data for FEA:
i) A straight root was chosen for this study in order to eliminate effects due to canal curvature.
ii) Gutta-percha was compacted by the vertical condensation technique in three separate vertical increments (apical 1/3, middle 1/3, cervical 1/3).
iii) A vertical compaction force of 10 N was used for each increment, applied by a simulated plugger.
iv) Warm GP was compacted in three vertical increments until the canal was filled, with a temperature of 60 °C at the start of compaction, reduced to 37 °C during the filling procedure.
v) The plugger surface was assumed to be flat with a tip diameter of 0.2 mm (smaller than the canal diameter) at each compaction increment.
vi) The handle part of the cutter does not serve any specific function in the shaping procedure and was therefore neglected in the numerical analyses.
4.3 Finite Element Analysis:
Analysis tool: ANSYS 10.0
Material properties used for the NiTi tip in FEA: Young's modulus: 83 GPa, Poisson's ratio: 0.33
Model meshing: BEAM element of class BEAM44 (3D tapered element for structural analysis).
Loads & constraints:
Structural loading: the tip is loaded at its end in the transverse direction by a compaction force of 10 N. As the tip is joined to the NiTi
pipe at its bigger end, it is constrained against displacement in the three directions at this end.
4.4 FEA Results:
2.5
Deflection, mm
2
1.5
y = -2.125x + 2.226
R² = 0.791
1
0.5
0
0
-0.5
Fig. 6 FEA Solution for deflection of tapered bar for 10 N
0.5
1
1.5
Tapered Bar Diameter, mm
Fig. 7 Deflection plot (FEA Solution)
Table 1: Analytical & FEA solution for deflection of a 10 mm long tapered bar at various diameters along its length when loaded at the smaller free end by 10 N

Distance from free end (mm)   Diameter (mm)   Analytical deflection (mm)   FEA deflection (mm)
x = 0                         0.2             2.993315                     -
0.5                           0.28            2.449849                     2.3609
1.5                           0.38            1.603915                     1.5507
2.5                           0.48            1.043858                     0.99342
3.5                           0.58            0.670952                     0.63630
4.5                           0.68            0.419134                     0.40121
5.5                           0.78            0.248644                     0.24401
6.5                           0.88            0.134983                     0.13902
7.5                           0.98            0.062444                     0.07061
8.5                           1.08            0.020558                     0.028752
9.5                           1.18            0.002104                     0.0067579
10                            1.2             0                            0

Fig. 8 Analytical and FEA solution for 10 N load: deflection (mm) vs. tapered bar diameter (mm); upper curve (blue): analytical solution, lower curve (red): FEA solution
4.5 Thermal expansion in the material:
For cutter tips of length 10 mm, 15 mm and 20 mm the thermal expansion is 0.016917 mm, 0.0253755 mm and 0.033834 mm respectively. It is based on

dl = L0·α·(t1 - t0)                ---------------------------------------- (5)

where dl = change in length (mm), L0 = initial length (mm), α = linear expansion coefficient (7.9×10-6 per °C), t0 = initial temperature (°C), t1 = final temperature (°C).
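As a worked use of equation (5), the small sketch below evaluates the expansion for the three tip lengths. Only α and the tip lengths are taken from the text; the initial and final temperatures used here are illustrative assumptions (body temperature and the 300 °C tip temperature mentioned earlier), so the printed numbers will differ somewhat from those quoted above.

```python
# Worked use of equation (5). The temperatures below are illustrative
# assumptions; only alpha and the tip lengths come from the text.
alpha = 7.9e-6            # linear expansion coefficient, per deg C
t0, t1 = 37.0, 300.0      # assumed initial and heated tip temperatures, deg C
for L0 in (10.0, 15.0, 20.0):         # tip lengths, mm
    dl = L0 * alpha * (t1 - t0)       # eq. (5): dl = L0 * alpha * (t1 - t0)
    print(f"L0 = {L0:4.1f} mm -> dl = {dl:.5f} mm")
```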
V. CONCLUSION
The structural deformation behavior of the endodontic cutter tip has been analyzed using both an analytical approach and FEA. The
results obtained (maximum deflection at the cutter tip end observed to be 2.449849 mm analytically and 2.3609 mm by
FEA) match closely and could be further utilized in analyzing the effect of stressing the tip on the root canal
cavity. An optimum balance between the cutting force on the tip and the friction force on the gutta percha surface in contact with the
root canal wall is therefore required while compacting gutta percha in the cavity; this will ensure minimal effects of the combined stresses
in the tip on the inner wall of the prepared root canal cavity. Thus FEA of the gutta percha cutter tip helps us to standardize, to a
certain extent, the design procedure for a tip that will have minimum stress effects on the inner wall of the root canal cavity.
ACKNOWLEDGMENT
We are thankful to Dr. Vivek Hegde, MDS (Consultant Endodontist & Cosmetic Dentist, Master in Lasers, Vienna),
Professor & Head of the Department of Conservative Dentistry & Endodontics, M.A. Rangoonwala College of Dental Sciences
& Research Centre, 2390, K.B. Hidayatulla Road, Azam Campus, Pune - 411 001, for providing the necessary inputs during the
continual phase of this research.
REFERENCES
1. Jean-Yves Blum, Pierre Machtou, Jean-Paul Micallef, "Analysis of forces developed during obturations. Wedging effect: Part I", Journal of Endodontics, Volume 24, Issue 4, p. 217-222, April 1998.
2. Jean-Yves Blum, Pierre Machtou, Jean-Paul Micallef, "Analysis of forces developed during obturations. Wedging effect: Part II", Journal of Endodontics, Volume 24, Issue 4, p. 217-222, April 1998.
3. Dhanya Kumar N. M., Abhishek Singhania, Vasundhara Shivanna, "To Analyze the Distribution of Root Canal Stresses after Simulated Canal Preparation of Different Canal Taper in Mandibular First Premolar by Finite Element Study - An In Vitro Study", Journal of Endodontology, Issue 2, p. 12-21, 2008.
4. Chayanee Chatvanitkul and Veera Lertchirakarn, "Stress Distribution with Different Restorations in Teeth with Curved Roots: A Finite Element Analysis Study", Journal of Endodontics, Volume 36, Issue 1, p. 115-118, January 2010.
5. Lorenza Petrini, Silvia Necchi, Silvio Taschieri, and Francesco Migliavacca, "Numerical Study on the Influence of Material Characteristics on Ni-Ti Endodontic Instrument Performance", Journal of Materials Engineering and Performance, Volume 18(5-6), p. 631-637, August 2009.
6. Mithra N. Hegde, Shishir Shetty, Navneet Godara, "Evaluation of fracture strength of tooth roots following canal preparation by hand and rotary instrumentation - An in vitro study", Journal of Endodontology, Issue 1, p. 22-29, 2008.
7. Christina Dorow, Franz-Günter Sander, "Development of a Model for the Simulation of Orthodontic Load on Lower First Premolars Using the Finite Element Method", Journal of Orofacial Orthopedics, 66: p. 208-218, 2005.
8. Tsukada G, Tanaka T, Torii M, Inoue K, "Shear modulus and thermal properties of gutta percha for root canal filling", Journal of Oral Rehabilitation, 2004, 31: 1139-1144.
9. Mário Tanomaru-Filho, Geraldine Faccio Silveira, Juliane Maria Guerreiro Tanomaru, and Carlos Alexandre Souza Bier, "Evaluation of the thermoplasticity of different gutta-percha cones and Resilon®", Australian Endodontic Journal, 33: p. 23-26, 2007.
10. Maniglia-Ferreira C, Gurgel-Filho ED, Silva Jr JBA, Paula RCM, Feitosa JPA, Gomes BPFA, Souza-Filho FJ, "Brazilian gutta-percha points. Part II: thermal properties", Brazilian Oral Research, 21(1): p. 29-34, 2007.
11. Maurizio Ferrante, Paolo Trentini, Fausto Croce, Morena Petrini, Giuseppe Spoto, "Thermal analysis of commercial gutta-percha", Journal of Thermal Analysis and Calorimetry, 2010.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 5 (September2012) PP: 26-30
Adaptive PID Controller for DC Motor Speed Control
A. Taifour Ali1, Eisa Bashier M. Tayeb2 and Omar Busati alzain Mohd3
1,2 School of Electrical and Nuclear Engineering, College of Engineering, Sudan University of Science & Technology, SUDAN
3 M.Sc. at School of Electrical and Nuclear Engineering
Abstract––This investigation is concerned with model reference adaptive PID control (MRAPIDC) incorporating
separately excited DC motors subjected to uncertainties, including parameter variations, while ensuring optimum
efficiency. The idea is to refine the MRAC method further, to provide even smoother control of the DC
motor, and to minimize the deficiencies of the traditional MRAC method. This is examined by combining the MRAC
method with PID control. The performance of the drive system was obtained over a set of test conditions with
MRAPIDC. To achieve these objectives the simulation environment is provided in MATLAB Simulink.
Keywords––Adaptive PID, Model Reference Adaptive Control (MRAC), Model Reference Adaptive PID Control
(MRAPIDC), DC Motor Speed Control.
I. INTRODUCTION
Direct current (DC) motors have variable characteristics and are used extensively in variable-speed drives.
A DC motor can provide a high starting torque, and it is also possible to obtain speed control over a wide range. There are many
adaptive control techniques, among them MRAC. It is regarded as an adaptive servo system in which the desired
performance is expressed in terms of a reference model. The model reference adaptive control method attracts most of the
interest due to its simple computation, and no extra transducers are required [1-4]. The PID controller has several important
functions: it provides feedback, it has the ability to eliminate steady-state offsets through integral action, and it can anticipate
the future through derivative action. PID controllers are the most numerous controllers found in industry and are sufficient for
solving many control problems [5]. Many approaches have been developed for tuning PID controllers and obtaining their
parameters for single-input single-output (SISO) systems. Among the well-known approaches are the Ziegler-Nichols (Z-N)
method, the Cohen-Coon (C-C) method, the integral of squared time-weighted error rule (ISTE), integral of absolute error
criteria (IAE), internal-model-control (IMC) based methods, and the gain-phase margin method [6-14]. Adaptive control is used to
change the control algorithm coefficients in real time to compensate for variations in the environment or in the system itself.
This paper deals with the model reference adaptive control approach (MRAC) [15-18], in which the output response is forced
to track the response of a reference model irrespective of plant parameter variations.
II. SYSTEM DESCRIPTION AND MODELING
When the plant parameters and the disturbance vary slowly, or more slowly than the dynamic behavior of the
plant, then MRAC control can be used. The model reference adaptive control scheme is shown in Figure 1. The adjustment
mechanism uses the adjustable parameter, known as the control parameter θ, to adjust the controller parameters [4]. The
tracking error and the adaptation law for the controller parameters are determined by the MIT rule [19].
Fig. 1: Structure of Model Reference Adaptive Control (block diagram: Reference Model producing ωM; Parameter Adjustment Mechanism producing θ; Controller taking Uc and θ and producing Va; DC Motor producing ωd)
The MIT (Massachusetts Institute of Technology) rule states that the time rate of change of θ is proportional to the negative gradient of the cost function (J), that is:

dθ/dt = -γ·(∂J/∂θ)                                   (1)
The adaptation error is ε = yp(t) - yM(t). The components of dε/dθ are the sensitivity derivatives of the error with
respect to the adjustable parameter vector θ. The parameter γ is known as the adaptation gain. The MIT rule is a gradient
scheme that aims to minimize the squared model error ε² through the cost function [20]:

J(θ) = (1/2)·ε²(t)                                   (2)
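A minimal single-parameter illustration of equations (1) and (2) is sketched below; it is not the paper's full MRAPIDC law, which adapts three PID gains. A first-order plant with an adjustable feedforward gain theta is driven to follow a first-order reference model, and theta is updated along the negative gradient of J = ½ε². The plant and model constants are assumptions chosen only for the illustration.

```python
# Single-gain MIT-rule illustration: adapt the feedforward gain theta so that a
# first-order plant follows a first-order reference model. The sensitivity
# d(eps)/d(theta) is proportional to ym; the constant factor is absorbed into gamma.
dt, T = 1e-3, 20.0
a = 2.0                      # common pole of plant and model (assumed)
k_plant, k_model = 3.0, 1.0  # unknown plant gain and desired model gain (assumed)
gamma = 0.5                  # adaptation gain
yp = ym = 0.0
theta = 0.0
for i in range(int(T / dt)):
    uc = 1.0 if (i * dt) % 10.0 < 5.0 else -1.0       # square-wave command
    yp += (-a * yp + k_plant * theta * uc) * dt       # plant with adjustable gain
    ym += (-a * ym + k_model * uc) * dt               # reference model
    eps = yp - ym                                     # adaptation error
    theta += -gamma * eps * ym * dt                   # MIT rule: d(theta)/dt = -gamma*eps*d(eps)/d(theta)

print(f"adapted gain theta = {theta:.3f} (ideal value k_model/k_plant = {k_model / k_plant:.3f})")
```

Running the loop long enough drives theta towards the ideal gain ratio, which is the same mechanism the paper applies to the PID gains of the DC-motor loop.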
A. Modeling of the DC Motor: The plant used in the simulation is a separately excited DC motor with dynamic equations as in [21]:

d = dL + dU                                                                              (3)

V(t) = Ia(t)·Ra + La·(dIa/dt) + Kb·ω(t)                                                  (4)

Km·Ia(t) = B·ω(t) + J·(dω/dt) + TL                                                       (5)

Taking the Laplace transform of equations (4) and (5), the transfer function of the DC motor with no load torque and no uncertainties (d = 0, TL = 0) is obtained as:

ω(s)/V(s) = [Km/(J·La)] / [s² + ((J·Ra + B·La)/(J·La))·s + (B·Ra + Km·Kb)/(J·La)]        (6)

First let us consider the case with only load disturbances (TL ≠ 0):

dL = -TL·[Ra/(J·La) + s/J] / [s² + ((J·Ra + B·La)/(J·La))·s + (B·Ra + Km·Kb)/(J·La)]     (7)
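Plugging in the motor parameters listed later in Section III (Km = Kb = 0.55, Ra = 1 Ω, La = 0.046 H, J = 0.093 kg·m², B = 0.08) gives the numerical form of transfer function (6); the short sketch below just evaluates the three coefficients.

```python
# Numerical coefficients of transfer function (6) for the motor parameters
# listed in Section III of the paper.
Km = Kb = 0.55
Ra, La, J, B = 1.0, 0.046, 0.093, 0.08

gain = Km / (J * La)                  # numerator Km/(J*La)
a1 = (J * Ra + B * La) / (J * La)     # coefficient of s in the denominator
a0 = (B * Ra + Km * Kb) / (J * La)    # constant term of the denominator
print(f"w(s)/V(s) = {gain:.1f} / (s^2 + {a1:.1f} s + {a0:.1f})")
# -> w(s)/V(s) = 128.6 / (s^2 + 22.6 s + 89.4)
```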
B. Model Reference Adaptive PID Control: The goal of this section is to develop parameter adaptation laws for a PID
control algorithm using MIT rule.
The reference model for the MRAC generates the desired trajectory yM , which the DC motor speed yP has to follow.
A standard second-order transfer function was chosen as the reference model, given by:

HM(s) = bM / (s² + aM1·s + aM0)                              (8)
Consider also the adaptive control law of the MRAC structure, taken in the following form [22]:

u(t) = KP·e(t) + Ki·∫e(t)dt - Kd·(dyP/dt)                    (9)

where e(t) = uc - yP, KP is the proportional gain, Ki is the integral gain, Kd is the derivative gain and uc is a unit step input. In the
Laplace domain, equation (9) can be transformed to:
U = KP·E + (Ki/s)·E - s·Kd·YP                                (10)
It is possible to show that applying this control law to the system gives the following closed loop transfer function:

YP = GP·[(KP + Ki/s)·(UC - YP) - s·Kd·YP]                    (11)
Applying the MIT gradient rule to determine the values of the PID controller parameters (KP*, Ki* and Kd*), the tracking error (with the reference model of equation (8)) satisfies:

ε = [(GP·KP·s + GP·Ki)·UC] / [s·(1 + GP·KP) + GP·Ki + s²·GP·Kd] - YM          (12)
The exact formulas derived using the MIT rule cannot be used directly; some approximations are required. An
approximation which is valid when the parameters are close to their ideal values is as follows [23]: the denominator of the
plant is taken to be approximately equal to the denominator of the reference model. Then, from the gradient method:
dK/dt = -γ·(∂J/∂K) = -γ·ε·(∂Y/∂K)                            (13)

where ∂J/∂ε = ε and ∂ε/∂Y = 1.
Then the approximate parameter adaptation laws are as follows:

dKP*/dt = -γp·[s/(s³ + a0·s² + am1·s + am2)]·e·ε              (14)

dKi*/dt = -γi·[1/(s³ + a0·s² + am1·s + am2)]·e·ε              (15)

dKd*/dt = γd·[s²/(s³ + a0·s² + am1·s + am2)]·yP·ε             (16)
III. SIMULATION RESULTS
In this section, some simulations were carried out for the MRAC separately excited DC motor controller. MATLAB software
was used for the simulation of the control systems. Fig. 2 shows the Simulink model for the MRAC along with the motor under
control, while Fig. 3 shows the implementation of the MIT rule to obtain the adaptation gains. The parameters of the separately excited DC
motor are taken as:
Km = Kb = 0.55; Ra = 1 Ω; La = 0.046 H; J = 0.093 kg·m²; B = 0.08 N·m·s/rad.
Also, the second-order transfer function of the reference model is as follows:

HM(s) = 16 / (s² + 4·s + 16)
This reference model has a 16% maximum overshoot, a settling time of more than two seconds and a rise time of about 0.45
seconds. In the simulation, the adaptation gains (gammas) were grouped into five sets as in Table 1:
Table 1: Groups of Gammas

Set    γp     γi     γd
1      0.2    0.8    0.48
2      0.4    1.6    0.96
3      0.6    2.4    1.44
4      0.8    3.2    1.92
5      1.0    4.0    2.4
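As a quick check of the transient figures quoted for the reference model above, the sketch below integrates HM(s) = 16/(s² + 4s + 16) with a simple forward-Euler scheme and reports the step-response overshoot and 10-90% rise time; the step size and the rise-time definition are choices made here, not taken from the paper.

```python
# Step response of the reference model H_M(s) = 16 / (s^2 + 4s + 16),
# i.e. y'' + 4y' + 16y = 16u, integrated with forward Euler.
dt, T = 1e-4, 6.0
u = 1.0                      # unit step input
y, yd = 0.0, 0.0             # output and its derivative
ts, ys = [], []
for i in range(int(T / dt)):
    ydd = 16.0 * u - 4.0 * yd - 16.0 * y
    yd += ydd * dt
    y += yd * dt
    ts.append((i + 1) * dt)
    ys.append(y)

overshoot = (max(ys) - 1.0) * 100.0
t10 = next(t for t, v in zip(ts, ys) if v >= 0.1)
t90 = next(t for t, v in zip(ts, ys) if v >= 0.9)
print(f"overshoot ~ {overshoot:.1f}%   rise time (10-90%) ~ {t90 - t10:.2f} s")
```

The run gives roughly 16% overshoot and a rise time of about 0.4 s, in line with the figures quoted above.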
Fig. 2: Simulink Model for MRAC
Fig. 3: Simulink Model for the Proportional Adaptation Gain (MIT rule)
Fig. 4: Output Speed for Different Groups of γ's
Fig. 5: Adaptation Error for Different Groups of γ's
Fig. 6: Error, Adaptation Error and Adaptation PID Gains
As shown in Fig. 4, for low adaptation gains the actual speed has no oscillation but too much delay, hence poor
performance. As the adaptation gains are increased, the output speed response improves towards matching the desired speed of
the reference model. The adaptation error is shown in Fig. 5, while Fig. 6 shows the error, adaptation error and adaptation gains
for one group of gammas. As a result, the MRAPIDC achieves satisfactory performance. The transient performance
specifications are shown in Table 2. These simulations show that MRAPIDC requires less information about the process while at the
same time achieving good performance.
Table 2: Characteristic Values for No-Load Speed

Specifications          Sets of Gammas
                        1      2      3      4      5
Rise Time (sec)         1.15   0.71   0.54   0.46   0.44
Settling Time (sec)     3.2    1.34   1.46   1.29   1.42
% Max Overshoot         0      1.0    3.8    6.1    8.2
IV. CONCLUSION
From the simulation results, it is found that the MRAPIDC achieves satisfactory performance for the output speed
response of the DC motor. The adaptation gains are responsible for improving the transient performance of the speed response
in terms of rise time, overshoot, settling time and steady-state behaviour for a step speed input.
REFERENCES
1. Hans Butler, Ger Honderd, and Job van Amerongen, "Model Reference Adaptive Control of a Direct-Drive DC Motor", IEEE Control Systems Magazine, 1989, pp 80-84.
2. S. Vichai, S. Hirai, M. Komura and S. Kuroki, "Hybrid Control-based Model Reference Adaptive Control", Elektronika Ir Elektrotechnika, Nr. 3(59), 2005, pp 5-8.
3. Chandkishor, R.; Gupta, O., "Speed control of DC drives using MRAC technique", 2nd International Conference on Mechanical and Electrical Technology (ICMET), Sep 2010, pp 135-139.
4. Missula Jagath Vallabhai, Pankaj Swarnkar, D.M. Deshpande, "Comparative Analysis Of PI Control And Model Reference Adaptive Control Based Vector Control Strategy For Induction Motor Drive", IJERA, Vol. 2, Issue 3, May-Jun 2012, pp 2059-2070.
5. Eisa Bashier M. Tayeb and A. Taifour Ali, "Comparison of Some Classical PID and Fuzzy Logic Controllers", accepted for publication in International Journal of Scientific and Engineering Research; likely to be published in Vol. 3, Issue 9, September 2012 (IJSER - France).
6. J. G. Ziegler and N. B. Nichols, "Optimum settings for automatic controllers", Transactions of the American Society of Mechanical Engineers, 1942, Vol. 64.
7. G. H. Cohen and G. A. Coon, "Theoretical investigation of retarded control", Transactions of the American Society of Mechanical Engineers, 1953, Vol. 75.
8. M. Zhuang and D. P. Atherton, "Automatic tuning of optimum PID controllers", IEE Proceedings on Control and Applications, 1993, Vol. 140.
9. D. W. Pessen, "A new look at PID-controller tuning", Journal of Dynamical Systems Measures and Control, 1994, Vol. 116.
10. M. Morari and E. Zafiriou, "Robust Process Control", Prentice-Hall, Englewood Cliffs, New Jersey, 1989.
11. W. K. Ho, C. C. Hang, and L. S. Cao, "Tuning of PID controllers based on gain and phase margin specifications", Automatica, 1995, Vol. 31.
12. P. Cominos, N. Munro, "PID Controllers: Recent Tuning Methods and Design to Specification", IEE Proceedings, Control Theory and Applications, 2002, Vol. 149, 46-53.
13. C. Lee, "A Survey of PID Controller Design based on Gain and Phase Margins", International Journal of Computational Cognition, 2004, 2(3): 63-100.
14. G. Mann, B. Hu, R. Gosine, "Time Domain based Design and Analysis of New PID Tuning Rules", IEE Proceedings, Control Theory Applications, 2001, 148(3): 251-261.
15. K.H. Chua, W.P. Hew, C.P. Ooi, C.Y. Foo and K.S. Lai, "A Comparative Analysis of PI, Fuzzy Logic and ANFIS Speed Control of Permanent Magnet Synchronous Motor", ELEKTROPIKA: Int. J. of Electrical, Electronic Engineering and Technology, 2011, Vol. 1, 10-22.
16. Pankaj Swarnkar, Shailendra Jain and R. K. Nema, "Effect of Adaptation Gain in Model Reference Adaptive Controlled Second Order System", ETASR - Engineering, Technology & Applied Science Research, Vol. 1, No. 3, 2011, 70-75.
17. K. Benjelloun, H. Mechlih, E. K. Boukas, "A Modified Model Reference Adaptive Control Algorithm for DC Servomotor", Second IEEE Conference on Control Applications, Vancouver, B.C., 1993, Vol. 2, pp. 941-946.
18. K. J. Astrom, Bjorn Wittenmark, "Adaptive Control", 2nd Ed., Pearson Education Asia, 2001, pp. 185-225.
19. Stelian Emilian Oltean and Alexandru Morar, "Simulation of the Local Model Reference Adaptive Control of the Robotic Arm with DC Motor Drive", Medimeria Science, 3, 2009.
20. A. Assni, A. Albagul, and O. Jomah, "Adaptive Controller Design For DC Drive System Using Gradient Technique", Proceedings of the 2nd International Conference on Maritime and Naval Science and Engineering, pp 125-128.
21. Faa-Jeng Lin, "Fuzzy Adaptive Model-Following Position Control for Ultrasonic Motor", IEEE Transactions on Power Electronics, 1997, Vol. 12, pp 261-268.
22. Hang-Xiong Li, "Approximate Model Reference Adaptive Mechanism for Nominal Gain Design of Fuzzy Control System", IEEE Transactions on Systems, Man, and Cybernetics, 1999, Vol. 29, pp 41-46.
23. Muhammed Nasiruddin Bin Mahyddin, "Direct model reference adaptive control of coupled tank liquid level control system", ELEKTRIKA, 2008, Vol. 10, No. 2, pp 9-17.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 1(August 2012) PP: 38-41
Source Code Reusability Metric for Enhanced Legacy Software
Shashank Dhananjaya1, Yogesha T2, Syeda Misba3
1 Assistant Professor, Dept. of Information Science & Engg, Vidya Vardhaka College of Engineering, Mysuru, Karnataka, INDIA
2 Lecturer, Center for PG Studies, VTU, Bengaluru, Karnataka, INDIA
3 Assistant Professor, Dept. of Information Science & Engg, New Horizon College of Engineering, Bengaluru, Karnataka, INDIA
Abstract—Most application software is developed by reusing source code in order to reduce the
development effort and delivery time and to improve productivity and quality. Software reuse has been a key means
of acquiring existing knowledge from a software repository. To add new functionality, source code from an older
version is reused. It is difficult to keep track of the source code lines that are being reused, i.e. changed and added. Without this there
is no indicator of the productivity of the project, nor any insight into the number of expected defects; both of
these are related to the actual number of new and changed source code lines. In this paper, we propose a novel
algorithm for a software reusability metric using pattern matching. We have implemented it in the C# language for
evaluation. The result determines the reused files that are added and changed.
Keywords—software development, software reuse, pattern matching, software metrics, knowledge reuse.
I. INTRODUCTION
Software is continuously growing in importance for application development. Currently at Hewlett-Packard
Company, approximately 70% of our engineers do some software development. Systematic software reuse is becoming a key
factor in reducing the development time and effort in the software development process and also in improving software
quality and productivity. New horizons have opened since the idea of using existing knowledge for software reuse
appeared in 1968 [1]. The main idea behind software reuse is domain engineering (also known as product line engineering).
Reusability is the degree to which a thing can be reused [2]. Software reusability represents the ability to use part or the
whole of a system in other systems [3,4], which is related to the packaging and scope of the functions that programs
perform [5]. According to [6], the US Department of Defense alone could save 300 million dollars annually by increasing its level
of reuse by as little as 1%. Moreover, software reusability aims to improve productivity, maintainability, portability and
therefore the overall quality of the end product [7]. Ramamoorthy et al. [8] mention that reusability is not limited to the
source code, but has to include the requirements, design, documents, and test cases besides the source code. New
technologies and techniques are required to reuse the existing knowledge from software historical data, such as code bases,
execution traces and historical code changes, which contains a wealth of information about a software project's status, progress, and
evolution. Basically the software reuse process consists of four steps: identifying the software components,
understanding the context, applying software reuse techniques, and integration and evaluation.
II. RELATED WORKS
Several research works have been carried out on software reuse by many authors. Morisio et al. [9] identified
some of the key factors in adopting or running a company-wide software reuse program. Prediction of the reusability of object-oriented
software systems using a clustering approach has been made by [10]. G. Boetticher et al. proposed a neural network
approach that could serve as an economical, automatic tool to generate a reusability ranking of software [11]. Morisio et al.'s [9]
TSE article on success and failure factors in software reuse sought the key factors that predicted successful software reuse. Tim
Menzies et al. [12] identified numerous discrepancies which exist between expert opinion and the empirical data reported in
Morisio et al.'s TSE article. Hironori Washizaki [13] found that metrics can effectively identify black-box components with
high reusability. Shrddha Sagar [14] proposed an approach using fuzzy logic to select highly reusable components in systems,
which will eventually help in maintaining the system in a better way. G. Singaravel [15] noted that reusability is motivated by the
reduction of time and cost in software development. Yunwen Ye [16] proposed the CodeBroker model that helps Java programmers in
learning reusable software components in the context of their normal development environments and practice by
proactively delivering task-relevant and personalized information. Mogilensky [17] wrote about the importance of software
component reuse as a key means of improving software productivity and quality. Young Lee and Kai H. Chang [18]
proposed a quality model for object-oriented software and an automated metric tool. Octavian Paul [19] proposed
metrics and a mathematical model for the reusability, adaptability, composability and flexibility of software components.
III. PROPOSED ALGORITHM
Various approaches for source-level comparison of different software releases are available, but they all have a
serious shortcoming that makes them unsuited for source code metrics with current software development practices: they do
not consider files that are being moved or renamed during the course of development. Also, they do not take into account
that developers create a significant amount of code by a copy-and-modify approach. This is where this novel approach is
different. It compares two source code trees, one called the current source code tree and the other the previous
source code tree, and reports which files are changed, added, deleted or unmodified. For each of the changed files, it
reports how many lines were added or deleted. However, it does not assume that a file which is not present in the previous source
code tree is a completely new file; it uses powerful heuristics to determine whether another file in the previous source code tree is
highly similar. If so, that file is considered the previous version, and the number of added and deleted lines for the current file
is determined. It turns out that a significant number of files are derived from other files within a development organization.
This approach helps especially in multi-site project teams: it gives more control over, and confidence in, the metrics
collected on the source code being developed.
A. Determining the difference between files
For the difference between file A and file B, the following algorithm is used.
1. First comments (in C or C++ syntax) are removed.
2. Any whitespace is removed for each line in the file.
3. If the line is empty, it is removed.
4. If a file is considered to be 'generated' (see heuristics above) it is ignored; other files are processed further.
5. A hash code is calculated for each line (the Hash algorithm of .Net for strings is used). This reduces a file to an
array of 32bit integers.
6. This array is sorted. From this, the number of new entries (= new lines) and the number of removed entries (=
deleted lines) is determined in a straightforward way.
B.
Algorithm to find matching files
Finding matching files between the older source code tree and the newer source code tree is done in two steps:
first, potentially matching files are coupled using the Locators; second, the content of the potentially matching files is
examined to determine whether the newer one is indeed derived from the older one.
The Locator defines, given a file from the newer source code tree, how to find a match in the older source code tree:
- SameParentSameName: for a file <new>/mix/sol2/mes.c, only the file <old>/mix/sol2/mes.c is a possible match. The content of the file mes.c is not examined.
- SameParentDiffName: for a file <new>/mix/sol2/mes.c, any file in the directory <old>/mix/sol2/ is a possible match. Whether a file actually matches is determined by comparing the contents (see below).
- DiffParentSameName: for a file <new>/mix/sol2/mes.c, the file mes.c in any directory in <old> is a possible match. The contents of all files with the name mes.c are examined to find the one with the best match.
- DiffParentDiffName: for a file <new>/mix/sol2/mes.c, any file at any location in <old> is a possible match. Whether a file actually matches is determined by comparing the contents.
To determine if the file A (in the old source code tree) that is matched by the algorithm using the Locators as
described above is indeed a previous version of file B (in the new source code tree), the following algorithm is used for each
pair of potentially matching files A and B:
Algorithm 1. Pattern Matching for the Software Reuse Metric
1. Let A be the old version file and B be the new version file, and calculate the following parameters:
   - The number of new lines in B that are not in A: N(B)
   - The number of deleted lines in A that are no longer in B: D(A)
   - The total number of lines in A: LC(A)
   - The total number of lines in B: LC(B)
2. The matching criteria are tested. The files are considered to match if one of the following conditions holds:
   - N(B) <= LC(B)/10, i.e. less than 10% of file B is new, or
   - D(A) <= LC(A)/4 and N(B) <= LC(B)/2, i.e. less than a quarter of file A is deleted and less than half of file B is new.
   There is no scientific reasoning for the numbers used in these formulas; they were determined by experimentation and guesstimates and are built into the tool.
3. If they fulfil the matching criteria M(A,B), the match value is determined as:
   M(A,B) = D(A)/LC(A) + N(B)/LC(B)
The pair of files A and B with the lowest match value is considered to be the pair where file B is derived from file A.
Note that file extensions do not play a role in the matching of files, i.e. it is possible that mes.c is considered to match
include.h. This does (infrequently) occur when code is moved from header files to a source code file (or the other way
around).
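The matching test and match value of Algorithm 1 are small enough to sketch directly; in the snippet below the per-file counts and file paths are hypothetical examples and would in practice come from the line-difference step above.

```python
# Sketch of the matching criteria and match value of Algorithm 1. The candidate
# counts (D_A, N_B, LC_A, LC_B) and file paths below are hypothetical examples.

def is_match(N_B: int, D_A: int, LC_A: int, LC_B: int) -> bool:
    # match if <10% of B is new, or <1/4 of A deleted and <1/2 of B new
    return N_B <= LC_B / 10 or (D_A <= LC_A / 4 and N_B <= LC_B / 2)

def match_value(N_B: int, D_A: int, LC_A: int, LC_B: int) -> float:
    # lower is better; the lowest-valued pair is taken as the old/new version pair
    return D_A / LC_A + N_B / LC_B

candidates = {                              # old-file path -> (D_A, N_B, LC_A, LC_B)
    "old/mix/sol2/mes.c": (4, 10, 90, 95),
    "old/mix/sol1/mes.c": (40, 60, 90, 110),
}
best = None
for old_path, (D_A, N_B, LC_A, LC_B) in candidates.items():
    if is_match(N_B, D_A, LC_A, LC_B):
        m = match_value(N_B, D_A, LC_A, LC_B)
        if best is None or m < best[1]:
            best = (old_path, m)
print(best)     # -> ('old/mix/sol2/mes.c', 0.149...)
```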
IV. RESULTS AND DISCUSSIONS
Fig. 1. Identification of added, changed and deleted source code in the old and new versions.
Fig. 2. Calculation of the total number of new files, changed files, unchanged files, generated files and the processing time.
This then results in the following output information:
Package: dummy
Locator statistics:
SameParentSameName found 306
SameParentDiffName found 0
DiffParentSameName found 25
DiffParentDiffName found 20

release    previous   unresolved  files  size   new    del   turn-over
c_new.zip  c_old.zip  169         469    58442  33548  8890  1.31
c_old.zip             408         355    42858  0      0     1.00

name                         previous                 status     size  osize  new  del
/…/CCeTvPca9554.cd           /…/CCeTvPca9554.cd       changed    91    93     1    3
/…/cetv9554_m.c              /…/cetv9554_m.c          unchanged  69    69     0    0
/…/cetv9554_mpow.c           /…/cetv9554_mpow.c       changed    61    50     15   4
…
/ … /cetvscreen_msplit.c                              new        135   0      135  0
/ … /cetvdef_mconst.c                                 generated  0     0      0    0
/ … /ICetvBkedSouWireIds.id                           new        13    0      13   0
/ … /DCetvPictureProfile.dd                           new        125   0      125  0
Total number of files : 469
Number of new files : 159
Number of changed files : 166
Number of unchanged files: 141
Number of generated files: 3
Done.
Processing time 1.6 seconds
The following explains some of the relevant output lines:
- Lines 3-6 show a summary of how many files were tracked to a previous version although the name or the position in the source code tree was different in the new release. In this case 25 files were moved to a different directory and 20 files were renamed as well as moved to a different directory.
- Lines 7-9 show a summary of the line counting statistics over the entire release.
C. Implementation of the Algorithm
The development tool used to implement the algorithm is the Visual Studio 2005 IDE for C#.NET; automation test
scripts for testing the algorithm are written in Perl. The programming language is C#.
V. CONCLUSIONS AND FUTURE WORK
Software reuse has become a solution for reducing development time and improving productivity and quality. Data
mining techniques can be used effectively in the software reuse process. The main contributions of this paper are as follows:
- We identified the need for software reuse in software development practices.
- We proposed a novel algorithm for a software reusability metric using pattern matching.
REFERENCES
1. P. Gomes and C. Bento, "A Case Similarity Metric for Software Reuse and Design", Artif. Intell. Eng. Des. Anal. Manuf., Vol. 15, pp. 21-35, 2001.
2. W. Frakes and C. Terry, "Software Reuse: Metrics and Models", ACM Computing Surveys, Vol. 28, No. 2, June 1996.
3. J.A. McCall, et al., "Factors in Software Quality", Griffiths Air Force Base, N.Y., Rome Air Development Center, Air Force Systems Command, 1977.
4. N.S. Gill, "Reusability Issues in Component-Based Development", Sigsoft Softw. Eng. Notes, Vol. 28, pp. 4-4, 2003.
5. J.J.E. Gaffney, "Metrics in Software Quality Assurance", presented at the proceedings of the ACM '81 Conference, 1981.
6. J.S. Poulin, "Measuring Software Reuse - Principles, Practices and Economic Models", Addison-Wesley, 1997.
7. A. Sharma, et al., "Reusability assessment for software components", Sigsoft Softw. Eng. Notes, Vol. 34, pp. 1-6, 2009.
8. C.V. Ramamoorthy, et al., "Support for Reusability in Genesis", IEEE Transactions on Software Engineering, Vol. 14, pp. 1145-1154, 1988.
9. M. Morisio and C. Tully, "Success and Failure Factors in Software Reuse", IEEE Transactions on Software Engineering, Vol. 28, No. 4, pp. 340-357, 2002.
10. Anju Shri, Parvinder S. Sandhu, Vikas Gupta, Sanyam Anand, "Prediction of Reusability of Object Oriented Software Systems Using Clustering Approach", World Academy of Science, Engineering and Technology, pp. 853-858, 2010.
11. G. Boetticher and D. Eichmann, "A Neural Network Paradigm for Characterising Reusable Software", Proceedings of The 1st Australian Conference on Software Metrics, 18-19 November 1993.
12. Tim Menzies, Justin S. Di Stefano, "More Success and Failure Factors in Software Reuse", IEEE Transactions on Software Engineering, May 2003.
13. Hironori Washizaki, Yoshiaki Fukazawa and Hirokazu Yamamoto, "A Metrics Suite for Measuring Reusability of Software Components".
14. N.W. Nerurkar, Shrddha Sagar, "A Soft Computing Based Approach to Estimate Reusability of Software Components".
15. G. Singaravel, Dr. V. Palanisamy, Dr. A. Krishnan, "Overview analysis of reusability metrics in software development for risk reduction".
16. Yunwen Ye, "Information Delivery in Support of Learning Reusable Software Components on Demand".
17. Judah Mogilensky and Dennis Stipe, "Applying reusability to somm process definition".
18. Young Lee and Kai H. Chang, "Reusability and Maintainability Metrics for Object-Oriented Software".
19. Octavian Paul Rotaru, Marian Dobre, "Reusability Metrics for Software Components".
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 2 (September 2012) PP: 36-39
A Survey of Various Scheduling Algorithms in Cloud
Environment
Sujit Tilak 1, Prof. Dipti Patil 2,
1,2
Department of COMPUTER, PIIT, New Panvel, Maharashtra, India.
Abstract––Cloud computing environments provide scalability for applications by providing virtualized resources
dynamically. Cloud computing is built on the base of distributed computing, grid computing and virtualization. User
applications may need large data retrieval very often and the system efficiency may degrade when these applications are
scheduled taking into account only the ‘execution time’. In addition to optimizing system efficiency, the cost arising from
data transfers between resources as well as execution costs must also be taken into account while scheduling. Moving
applications to a cloud computing environment triggers the need of scheduling as it enables the utilization of various
cloud services to facilitate execution.
Keywords––Cloud computing, Scheduling, Virtualization
I. INTRODUCTION
Cloud computing is an extension of parallel computing, distributed computing and grid computing. It provides secure, quick, convenient data storage and computing power with the help of the internet. Virtualization, distribution and dynamic extendibility are the basic characteristics of cloud computing [1]. Nowadays most software and hardware provide support for virtualization. We can virtualize many factors such as IT resources, software, hardware, operating systems and network storage, and manage them in the cloud computing platform; every environment has nothing to do with the physical platform.
To make effective use of the tremendous capabilities of the cloud, efficient scheduling algorithms are required. These scheduling algorithms are commonly applied by the cloud resource manager to optimally dispatch tasks to the cloud resources. There is a relatively large number of scheduling algorithms for minimizing the total completion time of tasks in distributed systems [2]. These algorithms try to minimize the overall completion time of the tasks by finding the most suitable resources to allocate to them. It should be noted that minimizing the overall completion time of the tasks does not necessarily result in the minimization of the execution time of each individual task.
Fig. 1: Overview of cloud computing
The objective of this paper is to focus on various scheduling algorithms. The rest of the paper is organized as follows. Section 2 presents the need for scheduling in the cloud, Section 3 presents various existing scheduling algorithms, and Section 4 concludes the paper with a summary of our contributions.
II. NEED OF SCHEDULING IN CLOUD
The primary benefit of moving to Clouds is application scalability. Unlike Grids, the scalability of Cloud resources allows real-time provisioning of resources to meet application requirements, and Cloud services like compute, storage and bandwidth resources are available at substantially lower costs. Usually tasks are scheduled according to user requirements. New scheduling strategies need to be proposed to overcome the problems posed by the network properties between users and resources. New scheduling strategies may reuse some conventional scheduling concepts and merge them with network-aware strategies to provide solutions for better and more efficient job scheduling [1].
Initially, scheduling algorithms were implemented in grids [2] [8]. Due to the reduced performance faced in grids, there is now a need to implement scheduling in the cloud. The scalability of Cloud resources enables workflow management systems to readily meet Quality-of-Service (QoS) requirements of applications [7], as opposed to the traditional approach that required advance reservation of resources in global multi-user Grid environments.
Cloud applications often require very complex execution environments. These environments are difficult to create on grid resources [2]. In addition, each grid site has a different configuration, which results in extra effort each time an application needs to be ported to a new site. Virtual machines allow the application developer to create a fully customized, portable execution environment configured specifically for their application.
The traditional way of scheduling in cloud computing tended to use the direct tasks of users as the overhead application base. The problem is that there may be no relationship between the overhead application base and the way that different tasks cause overhead costs of resources in cloud systems [1]. For a large number of simple tasks this increases the cost, whereas the cost decreases if we have a small number of complex tasks.
III. EXISTING SCHEDULING ALGORITHMS
The following scheduling algorithms are currently prevalent in clouds.
3.1 A Compromised-Time-Cost Scheduling Algorithm: Ke Liu, Hai Jin, Jinjun Chen, Xiao Liu, Dong Yuan and Yun Yang [2] presented a novel compromised-time-cost scheduling algorithm which considers the characteristics of cloud computing to accommodate instance-intensive, cost-constrained workflows by compromising execution time and cost, with user input enabled on the fly. The simulation demonstrated that the CTC (compromised-time-cost) algorithm can achieve lower cost than others while meeting the user-designated deadline, or reduce the mean execution time compared with others within the user-designated execution cost. The tool used for simulation is SwinDeW-C (Swinburne Decentralised Workflow for Cloud).
3.2 A Particle Swarm Optimization-based Heuristic for Scheduling Workflow Applications: Suraj Pandey, Linlin Wu, Siddeswara Mayura Guru and Rajkumar Buyya [3] presented a particle swarm optimization (PSO) based heuristic to schedule applications to cloud resources that takes into account both the computation cost and the data transmission cost. It is used for workflow applications by varying their computation and communication costs. The experimental results show that PSO can achieve cost savings and a good distribution of workload onto resources.
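To make the idea concrete, the following is a minimal sketch of a PSO-style mapping of tasks to resources; the cost model (per-task computation cost plus inter-task data-transfer cost) and all parameter values are illustrative assumptions, not the authors' actual formulation or code.

```python
# Illustrative sketch of a PSO-style task-to-resource mapping (assumed cost
# model): each particle encodes one resource index per task, and fitness is
# the total computation cost plus the data-transfer cost between tasks.
import random

N_TASKS, N_RES = 5, 3
compute = [[random.uniform(1, 10) for _ in range(N_RES)] for _ in range(N_TASKS)]
transfer = [[0 if i == j else random.uniform(0.5, 2) for j in range(N_RES)]
            for i in range(N_RES)]
data = [[random.uniform(0, 5) for _ in range(N_TASKS)] for _ in range(N_TASKS)]

def to_assign(particle):
    """Round a continuous position to a valid resource index per task."""
    return [min(N_RES - 1, max(0, int(round(x)))) for x in particle]

def cost(assign):
    c = sum(compute[t][assign[t]] for t in range(N_TASKS))
    for i in range(N_TASKS):
        for j in range(N_TASKS):
            c += data[i][j] * transfer[assign[i]][assign[j]]
    return c

def pso(iters=100, swarm=20, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0, N_RES - 1) for _ in range(N_TASKS)] for _ in range(swarm)]
    vel = [[0.0] * N_TASKS for _ in range(swarm)]
    pbest, pbest_cost = [p[:] for p in pos], [cost(to_assign(p)) for p in pos]
    gbest = pbest[pbest_cost.index(min(pbest_cost))][:]
    for _ in range(iters):
        for k in range(swarm):
            for d in range(N_TASKS):
                r1, r2 = random.random(), random.random()
                vel[k][d] = (w * vel[k][d] + c1 * r1 * (pbest[k][d] - pos[k][d])
                             + c2 * r2 * (gbest[d] - pos[k][d]))
                pos[k][d] += vel[k][d]
            c = cost(to_assign(pos[k]))
            if c < pbest_cost[k]:
                pbest[k], pbest_cost[k] = pos[k][:], c
                if c < cost(to_assign(gbest)):
                    gbest = pos[k][:]
    return to_assign(gbest), cost(to_assign(gbest))

print(pso())   # a task-to-resource mapping and its total cost
```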
3.3 Improved Cost-Based Algorithm for Task Scheduling: Mrs. S. Selvarani and Dr. G. Sudha Sadhasivam [1] proposed an improved cost-based scheduling algorithm for efficiently mapping tasks to the available resources in the cloud. The improvisation of traditional activity-based costing is proposed by a new task scheduling strategy for the cloud environment, where there may be no relation between the overhead application base and the way that different tasks cause overhead costs of resources in the cloud. This scheduling algorithm divides all user tasks into three different lists depending on the priority of each task. It measures both resource cost and computation performance, and it also improves the computation/communication ratio.
3.4 Resource-Aware Scheduling Algorithm (RASA): Saeed Parsa and Reza Entezari-Maleki [2] proposed a new task scheduling algorithm, RASA. It is composed of two traditional scheduling algorithms, Max-min and Min-min. RASA uses the advantages of the Max-min and Min-min algorithms and covers their disadvantages, though the deadline of each task, the arrival rate of tasks, the cost of task execution on each resource and the cost of communication are not considered. The experimental results show that RASA outperforms the existing scheduling algorithms in large-scale distributed systems.
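A minimal sketch of the Min-min/Max-min alternation that RASA builds on is given below, assuming a simple matrix of expected execution times; the function name and data are illustrative, not the authors' implementation.

```python
# Minimal sketch of the Min-min / Max-min alternation underlying RASA,
# assuming a simple matrix of expected execution times (illustrative only).

def rasa_like_schedule(ect):
    """ect[t][r]: expected execution time of task t on resource r."""
    unscheduled = set(range(len(ect)))
    ready = [0.0] * len(ect[0])              # time at which each resource is free
    mapping, use_min_min = {}, True
    while unscheduled:
        # Best resource (minimum completion time) for every unscheduled task.
        best = {t: min(range(len(ready)), key=lambda r: ready[r] + ect[t][r])
                for t in unscheduled}
        completion = lambda t: ready[best[t]] + ect[t][best[t]]
        # Min-min picks the task with the smallest such completion time,
        # Max-min the largest; RASA applies the two heuristics alternately.
        task = (min if use_min_min else max)(unscheduled, key=completion)
        res = best[task]
        mapping[task] = res
        ready[res] += ect[task][res]
        unscheduled.remove(task)
        use_min_min = not use_min_min
    return mapping, max(ready)               # task->resource map and makespan

ect = [[4, 6, 9], [7, 3, 5], [2, 8, 4], [6, 5, 3]]
print(rasa_like_schedule(ect))
```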
3.5 Innovative Transaction-Intensive Cost-Constraint Scheduling Algorithm: Yun Yang, Ke Liu and Jinjun Chen [5] proposed a scheduling algorithm which considers both cost and time. The simulation demonstrated that this algorithm can achieve lower cost than others while meeting the user-designated deadline.
3.6 Scalable Heterogeneous Earliest-Finish-Time Algorithm (SHEFT): Cui Lin, Shiyong Lu [6] proposed an SHEFT
workflow scheduling algorithm to schedule a workflow elastically on a Cloud computing environment. The experimental
results show that SHEFT not only outperforms several representative workflow scheduling algorithms in optimizing
workflow execution time, but also enables resources to scale elastically at runtime.
3.7 Multiple QoS Constrained Scheduling Strategy of Multi-Workflows (MQMW): Meng Xu, Lizhen Cui, Haiyang Wang and Yanbing Bi [7] worked on multiple workflows and multiple QoS constraints. They implemented a strategy for a multiple-workflow management system with multiple QoS requirements. The scheduling success rate is increased by using this strategy, which minimizes the makespan and cost of workflows for the cloud computing platform.
The following table summarizes the above scheduling strategies in terms of scheduling method, parameters, other factors, findings, the environment in which each strategy applies and the tool used for experiments.
Comparison between Existing Scheduling Algorithms

Scheduling Algorithm | Scheduling Method | Scheduling Parameters | Scheduling Factors | Findings | Environment | Tools
A Compromised-Time-Cost Scheduling Algorithm [3] | Batch mode | Cost and time | An array of workflow instances | 1. It is used to reduce cost and time | Cloud Environment | SwinDeW-C
A Particle Swarm Optimization-based Heuristic for Scheduling [4] | Dependency mode | Resource utilization, time | Group of tasks | 1. It achieves three times cost savings as compared to BRS. 2. It gives a good distribution of workload onto resources | Cloud Environment | Amazon EC2
Improved cost-based algorithm for task scheduling [1] | Batch mode | Cost, performance | Unscheduled task groups | 1. Measures both resource cost and computation performance. 2. Improves the computation/communication ratio | Cloud Environment | CloudSim
RASA workflow scheduling [2] | Batch mode | Makespan | Grouped tasks | 1. It is used to reduce makespan | Grid Environment | GridSim
Innovative transaction-intensive cost-constraint scheduling algorithm [5] | Batch mode | Execution cost and time | Workflow with a large number of instances | 1. Minimizes the cost under certain user-designated deadlines. 2. Enables compromises between execution cost and time | Cloud Environment | SwinDeW-C
SHEFT workflow scheduling algorithm [6] | Dependency mode | Execution time, scalability | Group of tasks | 1. Used for optimizing workflow execution time. 2. Also enables resources to scale elastically during workflow execution | Cloud Environment | CloudSim
Multiple QoS Constrained Scheduling Strategy of Multi-Workflows [7] | Batch/dependency mode | Scheduling success rate, cost, time, makespan | Multiple workflows | 1. It is used to schedule the workflows dynamically. 2. It is used to minimize the execution time and cost | Cloud Environment | CloudSim

IV. CONCLUSION
Scheduling is one of the key issues in the management of application execution in a cloud environment. In this paper, we have surveyed the various existing scheduling algorithms in cloud computing and tabulated their parameters, tools and other characteristics. We also noticed that disk space management is critical in virtual environments. When a virtual image is created, the size of the disk is fixed, and having a too small initial virtual disk size can adversely affect the execution of the application. Existing scheduling algorithms do not consider reliability and availability. Therefore there is a need to implement a scheduling algorithm that can improve the availability and reliability in the cloud environment.
REFERENCES
1. Mrs. S. Selvarani and Dr. G. Sudha Sadhasivam, "Improved cost-based algorithm for task scheduling in Cloud computing," IEEE, 2010.
2. Saeed Parsa and Reza Entezari-Maleki, "RASA: A New Task Scheduling Algorithm in Grid Environment," World Applied Sciences Journal 7 (Special Issue of Computer & IT): 152-160, 2009. Berry M. W., Dumais S. T., O'Brien G. W., "Using linear algebra for intelligent information retrieval," SIAM Review, 1995, 37, pp. 573-595.
3. K. Liu, Y. Yang, J. Chen, X. Liu, D. Yuan and H. Jin, "A Compromised-Time-Cost Scheduling Algorithm in SwinDeW-C for Instance-intensive Cost-Constrained Workflows on Cloud Computing Platform," International Journal of High Performance Computing Applications, vol. 24, no. 4, pp. 445-456, May 2010.
4. Suraj Pandey, Linlin Wu, Siddeswara Mayura Guru and Rajkumar Buyya, "A Particle Swarm Optimization-based Heuristic for Scheduling Workflow Applications in Cloud Computing Environments."
5. Y. Yang, K. Liu, J. Chen, X. Liu, D. Yuan and H. Jin, "An Algorithm in SwinDeW-C for Scheduling Transaction-Intensive Cost-Constrained Cloud Workflows," Proc. of the 4th IEEE International Conference on e-Science, pp. 374-375, Indianapolis, USA, December 2008.
6. Cui Lin and Shiyong Lu, "Scheduling Scientific Workflows Elastically for Cloud Computing," IEEE 4th International Conference on Cloud Computing, 2011.
7. Meng Xu, Lizhen Cui, Haiyang Wang and Yanbing Bi, "A Multiple QoS Constrained Scheduling Strategy of Multiple Workflows for Cloud Computing," 2009 IEEE International Symposium on Parallel and Distributed Processing.
8. Nithiapidary Muthuvelu, Junyang Liu, Nay Lin Soe, Srikumar Venugopal, Anthony Sulistio and Rajkumar Buyya, "A Dynamic Job Grouping-Based Scheduling for Deploying Applications with Fine-Grained Tasks on Global Grids," Australasian Workshop on Grid Computing and e-Research (AusGrid 2005), Newcastle, Australia. Conferences in Research and Practice in Information Technology, Vol. 44.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 3 (September 2012) PP: 32-39
On Demand Routing Protocols for Mobile Adhoc Network
Soni Dalal1, Mr Sukhvir Singh(HOD)2
1, 2
N.C. College of Engineering, Israna (INDIA) -132107
Abstract––In an ad hoc network, mobile nodes communicate with each other using multi-hop wireless links. There is no stationary infrastructure such as base stations. The routing protocol must be able to keep up with the high degree of node mobility that often changes the network topology drastically and unpredictably. Most of the on-demand routing protocols for MANETs, namely AODV and DSR, perform well with uniform output under low network load, mobility and number of traffic sources. The objective of the proposed work is to evaluate the performance of each of these protocols under a large number of traffic sources, greater mobility with smaller pause time and varying offered load. The metrics taken into account are: packet size vs. average throughput of generating packets, packet size vs. average simulation end-to-end delay, and packet send time at the source node vs. end-to-end delay. On the basis of the obtained results, the performances of the above-mentioned on-demand routing protocols for MANETs are compared using Network Simulator-2 (NS-2).
Index Terms––AODV - Ad Hoc On-demand Distance Vector Routing, DSRP - Dynamic Source Routing Protocol, TORA
- Temporally Ordered Routing Algorithm, NAM- Network Animator, NS- Network Simulator.
I. INTRODUCTION
Mobile Ad hoc Network: Wireless networking [1,6] is an emerging technology that allows users to access information and services electronically, regardless of their geographic position. Wireless networks can be classified into two types:
• Infrastructured Network: An infrastructured network consists of a network with fixed and wired gateways. A mobile host communicates with a bridge in the network (called a base station) within its communication radius. The mobile unit can move geographically while it is communicating. When it goes out of range of one base station, it connects with a new base station and starts communicating through it. This is called handoff. In this approach the base stations are fixed.
• Infrastructure-less (Ad hoc) Networks: In ad hoc networks all nodes are mobile and can be connected dynamically in an arbitrary manner. All nodes of these networks behave as routers and take part in the discovery and maintenance of routes to other nodes in the network. Ad hoc networks are very useful in emergency search-and-rescue operations, meetings or conventions in which persons wish to quickly share information, and data acquisition operations in inhospitable terrain.
1.1 The ad-hoc routing protocols can be divided into two categories:
• Table-driven routing protocols: In table-driven routing protocols, consistent and up-to-date routing information to all nodes is maintained at each node.
• On-demand routing protocols: In on-demand routing protocols, the routes are created as and when required. When a source wants to send to a destination, it invokes the route discovery mechanisms to find the path to the destination. The motivation behind the on-demand protocols is that the "routing overhead" (typically measured in terms of the number of routing packets transmitted, as opposed to data packets) is typically lower than in shortest-path protocols, as only the actively used routes are maintained. There are four multi-hop wireless ad hoc network routing protocols that cover a range of design choices:
Destination-Sequenced Distance-Vector (DSDV)
Temporally Ordered Routing Algorithm (TORA)
Dynamic Source Routing (DSR)
Ad Hoc On-Demand Distance Vector Routing (AODV).
While DSDV is a table-driven routing protocol, TORA, DSR and AODV fall under the on-demand routing protocols category.
1.2 Dynamic Source Routing (DSR)
The Dynamic Source Routing protocol [2] is a source-routed on-demand routing protocol. A node maintains route caches containing the source routes that it is aware of. The node updates entries in the route cache as and when it learns about new routes. The two major phases of the protocol are route discovery and route maintenance. When the source node wants to send a packet to a destination, it looks up its route cache to determine whether it already contains a route to the destination. If it finds that an unexpired route to the destination exists, then it uses this route to send the packet. But if the node does not have such a route, then it initiates the route discovery process by broadcasting a route request packet. The route request packet contains the address of the source and the destination, and a unique identification number. Each intermediate node checks whether it knows of a route to the destination. If it does not, it appends its address to the route record of the packet and forwards the packet to its neighbors. To limit the number of route requests propagated, a node processes the route request packet only if it has not already seen the packet and its address is not present in the route record of the packet. A route reply is generated when either the destination or an intermediate node with current information about the destination receives the route request packet. A route request packet reaching such a node already contains, in its route record, the sequence of hops taken from the source to this node.
As the route request packet propagates through the network, the route record is formed. If the route reply is generated by the destination, it places the route record from the route request packet into the route reply packet.
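The route-discovery flooding described above can be sketched roughly as follows; the graph model and names are illustrative assumptions, and the sketch ignores timers, caching and route maintenance.

```python
# Sketch of DSR-style route discovery by flooding (illustrative only). Each
# route request carries a route record; a node forwards the request only if
# it has not seen it and is not already in the record; the destination
# answers with the accumulated record.
from collections import deque

def dsr_route_discovery(neighbors, source, destination):
    """neighbors: dict node -> list of nodes currently in radio range."""
    seen = {source}
    queue = deque([[source]])                # each entry is a route record
    while queue:
        record = queue.popleft()
        node = record[-1]
        if node == destination:
            return record                    # the route reply carries this record
        for nxt in neighbors.get(node, []):
            if nxt not in seen and nxt not in record:
                seen.add(nxt)
                queue.append(record + [nxt])
    return None                              # no route to the destination

topology = {"S": ["A", "B"], "A": ["S", "C"], "B": ["S", "C"],
            "C": ["A", "B", "D"], "D": ["C"]}
print(dsr_route_discovery(topology, "S", "D"))   # e.g. ['S', 'A', 'C', 'D']
```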
1.3 Ad Hoc On-Demand Distance Vector Routing (AODV)
AODV is a reactive, or on-demand, protocol [1]. The Ad hoc On-demand Distance Vector routing protocol uses destination sequence numbers to offer loop-free routing and fresh routes to the destination. Unlike table-driven protocols, it does not maintain the status of the network via continuous updates. This approach helps in reducing the number of messages and the size of the routing tables.
AODV provides both multicast and unicast connectivity in an ad-hoc environment. One of the main features of AODV is to respond quickly whenever a link breakage in an active route is found.
AODV is a combination of both DSR and DSDV. It inherits the basic on-demand mechanism of route discovery and route maintenance from DSR, plus the use of hop-by-hop routing, sequence numbers and periodic beacons from DSDV.
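The destination-sequence-number rule that keeps AODV routes fresh and loop-free can be sketched as below; the field and function names are illustrative, not the protocol specification.

```python
# Sketch of the destination-sequence-number rule AODV relies on for fresh,
# loop-free routes (simplified; names are illustrative).
from dataclasses import dataclass

@dataclass
class RouteEntry:
    next_hop: str
    hop_count: int
    dest_seq: int        # destination sequence number known for this route

def should_update(current, advertised):
    """Accept an advertised route if it is fresher, or equally fresh but shorter."""
    if current is None:
        return True
    if advertised.dest_seq > current.dest_seq:
        return True
    return (advertised.dest_seq == current.dest_seq
            and advertised.hop_count < current.hop_count)

print(should_update(RouteEntry("B", 4, 10), RouteEntry("C", 3, 10)))  # True: fewer hops
```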
II. METHODS
Problem Definition: The objective of the dissertation work is to analyse and then perform a simulation comparison of two on-demand routing protocols for mobile ad hoc networks. The two reactive protocols that have been simulated and compared are the Dynamic Source Routing (DSR) protocol and the Ad Hoc On-Demand Distance Vector (AODV) routing protocol. Although both of these protocols share the common feature of being reactive in nature, they behave differently when subjected to identical network conditions in terms of packet size, number of traffic sources, mobility rate, topological area, number of nodes and mobility model.
2.1 Tool Used: The simulations were conducted on an Intel Pentium IV processor at 2.8 GHz with 256 MB of RAM, running Red Hat Linux 10.
• Network Simulator-2 (NS-2): The version of the network simulator used for simulation is NS-2.27.
• Mobility Model: Random Waypoint Model
2.2 Metrics considered for performance evaluation:
1. Packet size vs. average throughput of generating packets.
2. Packet size vs. average simulation end-to-end delay.
3. Packet send time at source node vs. simulation end-to-end delay.
2.2.1 To get a clear picture of the above-mentioned metrics, they are defined as follows:
• Average Simulation End-to-End Delay: the delay a packet suffers between leaving the sender application and arriving at the receiver application.
• Average Throughput or Packet Delivery Ratio: the ratio of the number of packets correctly received by the corresponding peer application to the number of packets sent out by the sender application. A minimal sketch of how both metrics can be computed from packet timestamps is given below.
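As referenced above, the two metrics can be computed from per-packet send and receive timestamps; the record format here is an assumption for illustration and is not the NS-2 trace format.

```python
# Sketch of the two metrics computed from per-packet send/receive timestamps
# (assumed record format, not the NS-2 trace format).

def packet_delivery_ratio(sent, received):
    """sent/received: dicts mapping packet id -> timestamp in seconds."""
    return len(received) / len(sent) if sent else 0.0

def average_end_to_end_delay(sent, received):
    delays = [received[pid] - sent[pid] for pid in received if pid in sent]
    return sum(delays) / len(delays) if delays else 0.0

sent = {1: 0.10, 2: 0.20, 3: 0.30, 4: 0.40}
received = {1: 0.35, 2: 0.55, 4: 0.90}           # packet 3 was dropped
print(packet_delivery_ratio(sent, received))      # 0.75
print(average_end_to_end_delay(sent, received))   # ~0.367 s
```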
2.3 Network Simulator:
Background on the NS-2 simulator: The NS simulator [12,11] is based on two languages: an object-oriented simulator, written in C++, and an OTcl (an object-oriented extension of Tcl) interpreter, used to execute the user's command scripts.
NS has a rich library of network and protocol objects. There are two class hierarchies: the compiled C++ hierarchy and the interpreted OTcl one, with a one-to-one correspondence between them.
The compiled C++ hierarchy allows us to achieve efficiency in the simulation and faster execution times. This is particularly useful for the detailed definition and operation of protocols, as it reduces packet and event processing time.
33
On Demand Routing Protocols for Mobile Adhoc Network
2.4 Tcl and OTcl programming:
Tcl (Tool Command Language) [12,11] is used by millions of people in the world. It is a language with a very simple syntax and it allows easy integration with other languages. Tcl was created by John Ousterhout. The characteristics of these languages are:
• It allows fast development
• It provides a graphic interface
• It is compatible with many platforms
• It is flexible for integration
• It is easy to use
• It is free
2.5 Visualisation: Using NAM
NAM stands for Network Animator. The Network Animator (NAM) is an animation tool for viewing network simulation traces and real-world packet traces. It supports topology layout, packet-level animation and various data inspection tools. Before starting to use NAM, a trace file needs to be created. This trace file is usually generated by NS. It contains topology information, e.g. nodes and links, as well as packet traces. During a simulation, the user can produce topology configurations, layout information and packet traces using tracing events in NS. Once the trace file is generated, NAM can be used to animate it. Upon startup, NAM will read the trace file, create the topology, pop up a window, do layout if necessary and then pause at time 0. Through its user interface, NAM provides control over many aspects of animation. In the figure, a screenshot of a NAM window is shown, where the most important functions are explained.
Although the NAM software contains bugs, as does the NS software, it works fine most of the time and causes only little trouble. NAM is an excellent first step to check that the scenario works as expected. NS and NAM can also be used together for educational purposes and to easily demonstrate different networking issues.
III. RESULTS AND CONCLUSIONS
The following table gives a glance at the parameters that were considered for the simulation.
Parameter              | Set 1           | Set 2           | Set 3
No. of Mobile Nodes    | 40              | 80              | 100
No. of Traffic Sources | 20              | 27              | 30
Type of Traffic        | TCP             | TCP             | TCP
Nodes Speed            | (0-20) m/s      | (0-20) m/s      | (0-20) m/s
Packet Size            | 1024 bytes      | 1024 bytes      | 1024 bytes
Topology Area          | 1100 * 1100 m*m | 1100 * 1100 m*m | 1100 * 1100 m*m
3.1 Metrics considered for performance evaluation:
1. Packet size vs. average throughput of generating packets.
2. Packet size vs. average simulation end-to-end delay.
3. Packet send time at source node vs. simulation end-to-end delay.
The table given above shows the three different sets that were considered for the experiment. The number of nodes was varied as 40, 80 and 100 with 20, 27 and 30 traffic sources respectively. The type of traffic source was TCP, and the packet size was kept the same at 1024 bytes. Each mobile node selects a random destination at the specified time and moves towards it. The simulation ends just one second before the total simulation time, which is taken to be 400 seconds. When the packet size was further increased to 2048 bytes, there was a lot of network congestion and both of the protocols failed to deliver any results.
The following graphs show the results that were obtained and the comparison of the two on-demand routing protocols, DSR and AODV.
Graph A-For 100 nodes
1-Packet size Vs Average Throughput of Generating Packets
[Graph: Packet Size vs. Average Throughput of Generating Packets, 100 nodes; x-axis: Packet Size (bytes), 0-1500; y-axis: average throughput; curves: DSR and AODV]
From this graph we observe that as the packet size increases, the average throughput of generating packets for DSR is slightly greater than that of AODV.
2-Packet send time at Source node Vs. Simulation End To End Delay
[Graph: Packet Send Time at Source Node vs. Simulation End-to-End Delay, 100 nodes; x-axis: packet send time at source node, 0-400; y-axis: end-to-end delay; curves: DSR and AODV]
Here we observe that the end-to-end delay of DSR is greater than that of AODV.
3-Packet Size Vs Average Simulation End to End Delay
[Graph: Packet Size vs. Average Simulation End-to-End Delay, 100 nodes; x-axis: Packet Size (bytes), 0-2000; y-axis: end-to-end delay; curves: DSR and AODV]
This graph shows that as the packet size increases, the average simulation end-to-end delay of DSR increases and is greater than that of AODV.
Graph B-For 40 Nodes
1-Packet size Vs Average Throughput of Generating Packets
From this graph it is clear that the average throughput of generating packets for AODV is greater than that of DSR.
2-Packet send time at Source node Vs. Simulation End To End Delay
[Graph: Packet Send Time at Source Node vs. End-to-End Delay, 40 nodes; x-axis: send time, 0-400; y-axis: end-to-end delay, 0-60; curves: DSR and AODV]
Here also we can see that the end-to-end delay of DSR is much greater than that of AODV.
3-Packet Size Vs Average Simulation End to End Delay
From this we observe that the average simulation end-to-end delay of DSR increases as the packet size increases.
Graph C- For 80 Nodes
1-Packet size Vs Average Throughput of Generating Packets
[Graph: Packet Size vs. Average Throughput of Generating Packets, 80 nodes; x-axis: Packet Size (bytes), 0-1200; y-axis: average throughput, 0-90; curves: DSR and AODV]
One can clearly notice that the average throughput of generating packets for AODV increases as the packet size
keeps increasing.
2-Packet send time at Source node Vs. Simulation End To End Delay
Here we observe that the end-to-end delay of DSR is much greater than that of AODV.
3-Packet Size Vs Average Simulation End to End Delay
This graph shows that the average simulation end-to-end delay of DSR goes on increasing as the packet size increases.
3.2 Conclusion:
The on-demand routing protocols are much more efficient at handling the dynamics of mobile ad-hoc networks than the table-driven routing protocols. We need to undertake a much deeper study of all these reactive routing protocols, which could prove beneficial for making enhancements in their performance. It is highly recommended that we start with the basic building blocks of these protocols and see how each of these blocks interacts with the others, thereby observing how the interaction could be coordinated more effectively so as to lead to an increase in performance differentials.
The protocols taken for this study are AODV and DSR.
Although AODV is an on-demand routing protocol, it maintains routing tables, so it can be said to have features of both table-driven and reactive routing protocols. It has only one entry per source/destination pair, so it has to resort to route discovery more often than DSR. DSR does not make use of any routing tables; instead it can have more than one route per source/destination pair. It makes complete use of source routing, which means that the source, or the initiator of the data packet, has to determine the complete hop-by-hop route to the destination. Due to the availability of many alternate routes it has to resort to route discovery less often than AODV.
On the basis of the results, it was concluded that as the packet size is increased, the end-to-end delay of AODV is lower than that of DSR for a larger number of nodes, while the average throughput of generating packets for DSR is larger than that of AODV for a larger number of nodes and traffic sources. However, the average throughput of generating packets for AODV is greater when the number of nodes is 40 or 80. Delay is an important metric which decides the efficiency of a routing protocol.
The DSR (Dynamic Source Routing) protocol is not a winner when it comes to large network sizes. The end-to-end delay increases when the packet size is increased. The degraded performance might be because of the aggressive use of caching. The basic problem is that in a highly dynamic environment the cache becomes stale, which can lead to a significant drop in performance. There is a lot of scope related to the use of caching in DSR.
So AODV gave the best performance overall, making it suitable for medium as well as larger networks.
3.3 Future Scope:
These protocols, AODV and DSR, need to be evaluated using different mobility models such as Reference Point Group Mobility and Freeway. Also, none of these protocols has any mechanism for load balancing, so there is much scope related to this work. Apart from this, the caching strategy used by DSR needs to be made more efficient in order to handle frequent topology changes when the simulation environment is highly dynamic. So there is a need to remove stale entries from the cache more effectively, whereby the performance of DSR could be considerably improved.
ACKNOWLEDGMENT
I deem it my proud privilege to express my sincere thanks and gratitude to NCCE, Panipat for providing me an opportunity to pursue this work. My special thanks and regards to Mr. SUKHVIR SINGH (HOD), who has guided me with all his relevant knowledge regarding this evaluation.
REFERENCES
1. Hongbo Zhou, "A Survey on Routing Protocols in MANETs," Technical Report MSU-CSE-03-08, March 28, 2003.
2. Johnson et al., "DSR: Dynamic Source Routing Protocol for Multihop Wireless Ad Hoc Networks," IETF, Feb 2002.
3. Samba Sesay et al., "Simulation Comparison of Four Wireless Ad Hoc Routing Protocols," Information Technology Journal 3(3): 219-226, 2004.
4. Perkins, Royer, Das and Marina, "Performance Comparison of Two On-Demand Routing Protocols for Ad Hoc Networks," IEEE Infocom 2000 conference.
5. Broch et al., "Performance Comparison of Multihop Wireless Ad Hoc Network Routing Protocols," http://www.monarch.cs.cmu.edu/
6. Padmini Mishra, "Routing Protocols for Ad Hoc Mobile Wireless Networks," misra@cis.ohio-state.edu
7. Elizabeth M. Royer and C. K. Toh, "A Review of Current Routing Protocols for Ad Hoc Mobile Wireless Networks," eroyer@alpha.ece.ucsb.edu, cktoh@ee.gatech.edu
8. "Performance Evaluation of Ad Hoc Routing Protocols Using NS2 Simulation."
9. Fan Bai, Narayanan Sadagopan and Ahmed Helmy, "BRICS: A Building-Block Approach for Analyzing Routing Protocols in Ad Hoc Networks - A Case Study of Reactive Routing Protocols."
10. C. K. Toh, "Ad Hoc Mobile Wireless Networks: Protocols and Systems," Prentice Hall, New Jersey, 2002.
11. Marc Greis' Tutorial, www.isi.edu/nsnam/ns/tutorial.
12. NS Simulator for Beginners.
13. Sampo Naski, "Performance of Ad Hoc Routing Protocols: Characteristics and Comparison," Seminar on Internetworking, 2004.
14. Laura Marie Feeney, "A Taxonomy for Routing Protocols in Mobile Ad Hoc Networks," Tech. Rep., Swedish Institute of Computer Science, www.sics.se/Imfeeney/, October 1999.
15. IETF working group, www.ietf.org/html.charters/manet-charter.html.
16. Fan Bai and Ahmed Helmy, "A Survey of Mobility Models in Wireless Ad Hoc Networks," University of Southern California, USA.
17. T. Camp, J. Boleng and V. Davies, "A Survey of Mobility Models for Ad Hoc Network Research," Wireless Communication and Mobile Computing (WCMC): Special Issue on Mobile Ad Hoc Networking: Research, Trends and Applications, vol. 2, no. 5, 2002, pp. 483-502.
18. Haas Z. J., Pearlman M. R. and Samar P., "The Zone Routing Protocol (ZRP) for Ad Hoc Networks," IETF Internet Draft, July 2002.
19. Perkins C., Belding-Royer E. M. and Das S., "Ad hoc On-Demand Distance Vector (AODV) Routing," IETF Internet Draft, Jan 2002.
20. Asis Nasipuri, Book Chapter "Mobile Ad Hoc Networking," in Handbook of RF and Wireless Technologies, edited by Farid Dowla, Newnes, 2004.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September 2012) PP: 27-31
Classification of MRI Brain Images Using Neuro Fuzzy Model
Mr. Lalit P. Bhaiya1, Ms. Suchita Goswami2, Mr. Vivek Pali3
1 Associate Professor, HOD (ET&T), RCET, Bhilai (C.G.)
2,3 M-Tech Scholar, RCET, Bhilai
Abstract––It is difficult to identify abnormalities in the brain, especially in the case of Magnetic Resonance (MR) brain image processing. Artificial neural networks employed for brain image classification are computationally heavy and also do not guarantee high accuracy. The major drawback of an ANN is that it requires a large training set to achieve high accuracy. On the other hand, the fuzzy logic technique is more accurate but it fully depends on expert knowledge, which may not always be available. The fuzzy logic technique needs less convergence time but it depends on a trial-and-error method for selecting either the fuzzy membership functions or the fuzzy rules. These problems are overcome by a hybrid model, namely the neuro-fuzzy model. This system removes these essential requirements since it includes the advantages of both the ANN and the fuzzy logic system. In this paper, different brain images are classified using an Adaptive Neuro-Fuzzy Inference System (ANFIS). Experimental results illustrate promising results in terms of classification accuracy and convergence rate.
Keywords––Fuzzy logic, Neural network, ANFIS, Convergence rate
I. INTRODUCTION
With the growing age, there is advancement in each and every field. As far as the medical field is concerned, it also makes progress every day. The medical imaging field in particular has grown substantially in recent years, and has generated additional interest in methods and tools for the management, analysis and communication of medical image data. Medical imaging technology facilitates doctors in seeing the interior portions of the body for easy diagnosis. It also helps doctors to perform keyhole surgeries, reaching the interior parts without really opening too much of the body. CT scanners, ultrasound and magnetic resonance imaging took over from X-ray imaging by enabling doctors to look at the body's elusive third dimension. MRI picks up signals from the body's magnetic particles spinning to its magnetic tune and, with the help of its powerful computer, converts the scanner data into revealing pictures of internal organs. MRI differs from a CT scan in that it does not use radiation. MRI is a noninvasive medical test that helps physicians diagnose and treat medical conditions. It is a technique based on the measurement of magnetic field vectors generated after an appropriate excitation with strong magnetic fields and radio frequency pulses in the nuclei of hydrogen atoms present in the water molecules of a patient's tissues. We know that the water content differs for each tissue; by using this fact one can quantify the differences in radiated magnetic energy and have elements to identify each tissue. When we measure the specific magnetic vector components under controlled conditions, different images can be taken and we can obtain information related to tissue contrast which reveals details that can be missed in other measurements [12]. A detailed MRI image allows physicians to better evaluate various parts of the body and determine the presence of certain abnormalities that may not be assessed adequately with other imaging methods such as X-ray, CT scan and ultrasound. Currently, MRI is the most sensitive imaging test of the head in routine clinical practice. MRI can detect a variety of conditions of the brain such as cysts, tumors, bleeding, swelling, developmental and structural abnormalities, infections, inflammatory conditions or problems with the blood vessels. MRI can provide clear images of parts of the brain that cannot be seen as well with an X-ray, CAT scan or ultrasound, making it particularly valuable for diagnosing problems with the pituitary gland and brain stem.
Figure 1.1: MRI machine
Applications of MRI segmentation include the diagnosis of brain trauma, where a signature of brain injury, such as white matter lesions, may be identified in moderate and mild cases. MRI segmentation methods are also useful in diagnosing multiple sclerosis, including the detection of lesions and the quantification of lesion volume using multispectral methods [5].
Figure 1.2: Schematic diagram of MRI machine
In MRI, water molecules give off radio signals which are converted into high-resolution images that look like the picture shown in figure 1.3.
Figure 1.3: Brain MRI
II. METHODOLOGY
The work involves the feature extraction of MRI images of the brain, dimensionality reduction and finally developing a suitable neuro-fuzzy classifier to classify normal and abnormal brain images. Images of the brain are obtained from MRI and the textural features are extracted using the principal component analysis (PCA) technique. These features are used to train the neuro-fuzzy classifier. The neuro-fuzzy classifier used for classification is the Adaptive Network-based Fuzzy Inference System (ANFIS). The developed neuro-fuzzy classifier is tested for the classification of different brain MRI samples. Thus, the proposed work emphasizes the development of a neural network and fuzzy logic based method for the classification of MRI brain images. The block schematic diagram shown in figure 2.1 is the proposed architecture for the classification of MRI brain images.
Figure 2.1: Proposed Methodology for Classification of MRI brain images
2.1 MRI image data set
For the classification of normal and abnormal brain images, a data set is collected from different sources; one of the sources is the Harvard Medical School website [http://www.med.harvard.edu/aanlib/home.html]. The various types of brain images include axial, T2-weighted, 256x256 pixel MR brain images. The figure shows one of the databases considered for the classification. The images are classified as normal and abnormal brain images.
Figure 2.2: A typical example of the used MRI
2.2 Feature Extraction
Feature extraction extracts the features of importance for image recognition. The extracted features give the characteristic properties of the image, which can be used for training the database. The obtained trained feature is compared with the feature of the test sample and classified accordingly [2]. The feature extraction is done using principal component analysis (PCA). This technique is mostly used for image recognition and reduction; it reduces the large dimensionality of the data. The feature extraction of the test image is done in the same way. The memory of an MR image recognizer is generally simulated by a training set. The training database is a set of MR images. The task of the MR image recognizer is to find the most similar feature vector among the training set images for the test image. In the training phase, feature vectors are extracted for each image in the training set. Let a training image (image 1) have a pixel resolution of M x N (M rows, N columns). In order to extract its PCA features, first convert the image into a pixel vector Φ1 by concatenating each of the M rows into a single vector. The length of the vector Φ1 will be M x N.
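A minimal NumPy sketch of this PCA feature-extraction step is given below; the number of retained components and the toy data are assumptions for illustration, not the values used in the paper.

```python
# NumPy sketch of the PCA feature-extraction step described above
# (illustrative parameters and toy data).
import numpy as np

def extract_pca_features(train_images, n_components=20):
    """train_images: list of MxN arrays; returns mean, PCA basis and features."""
    X = np.stack([img.reshape(-1) for img in train_images]).astype(float)  # MxN vectors
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal directions are the right singular vectors of the centred data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]
    return mean, basis, Xc @ basis.T          # reduced feature vector per image

def project(image, mean, basis):
    """Feature vector of a test image in the same reduced space."""
    return (image.reshape(-1).astype(float) - mean) @ basis.T

train = [np.random.rand(256, 256) for _ in range(10)]      # toy 256x256 "images"
mean, basis, feats = extract_pca_features(train, n_components=5)
print(feats.shape)                            # (10, 5): one feature vector per image
print(project(train[0], mean, basis))         # compared against feats for matching
```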
Figure 2.3 : Schematic diagram of a MR image recognizer.
2.3 Neuro-Fuzzy Classifier
A neuro-fuzzy classifier is used to detect the abnormalities in the MRI brain images. Generally the input layer consists of seven neurons corresponding to the seven features. The output layer consists of one neuron indicating whether the MRI is of a normal or abnormal brain, and the hidden layer changes according to the number of rules that give the best recognition rate for each group of features [3]. Here the neuro-fuzzy classifier used is based on the ANFIS technique. An ANFIS system is a combination of a neural network and a fuzzy system in which the neural network is used to determine the parameters of the fuzzy system. ANFIS largely removes the requirement for manual optimization of the fuzzy system's parameters. The neuro-fuzzy system, with the learning capabilities of a neural network and the advantages of a rule-based fuzzy system, can improve the performance significantly and can also provide a mechanism to incorporate past observations into the classification process. In a neural network the training essentially builds the system. However, using a neuro-fuzzy technique, the system is built by fuzzy logic definitions and is then refined with the help of neural network training algorithms.
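A minimal numeric sketch of a first-order Sugeno ANFIS forward pass (two inputs, two Gaussian membership functions per input, four rules) is shown below to make the layer structure concrete; all parameter values are illustrative and would, in ANFIS, be tuned by training rather than fixed by hand.

```python
# Minimal numeric sketch of a first-order Sugeno ANFIS forward pass: two
# inputs, two Gaussian membership functions per input, four rules.
# All parameter values are illustrative; in ANFIS they are tuned by training.
import numpy as np

def gaussmf(x, c, sigma):
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

mf_x1 = [(0.2, 0.3), (0.8, 0.3)]     # (centre, sigma) for "low" and "high"
mf_x2 = [(0.2, 0.3), (0.8, 0.3)]
# Consequent parameters of the four rules: f_i = p*x1 + q*x2 + r
consequents = [(0.1, 0.2, 0.0), (0.5, -0.3, 0.2), (-0.2, 0.6, 0.1), (0.9, 0.9, -0.5)]

def anfis_forward(x1, x2):
    mu1 = [gaussmf(x1, c, s) for c, s in mf_x1]          # layer 1: memberships
    mu2 = [gaussmf(x2, c, s) for c, s in mf_x2]
    w = np.array([mu1[i] * mu2[j] for i in range(2) for j in range(2)])  # layer 2
    w_norm = w / w.sum()                                 # layer 3: normalisation
    f = np.array([p * x1 + q * x2 + r for p, q, r in consequents])       # layer 4
    return float(np.dot(w_norm, f))                      # layer 5: crisp output

# A threshold on the crisp output could then label an image normal/abnormal.
print(anfis_forward(0.25, 0.7))
```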
Some advantages of ANFIS systems are:
• It refines if-then rules to describe the behavior of a complex system.
• It does not require prior human expertise.
• It uses membership functions plus the desired dataset to approximate.
• It provides a greater choice of membership functions to use.
• Very fast convergence time. [7]
Figure 2.4 : ANFIS architecture
2.3.1 ANFIS GUI
The ANFIS Editor GUI menu bar can be used to load a FIS training initialization, save the trained FIS, or open a new Sugeno system or any of the other GUIs to interpret the trained FIS model. Any data set loaded into the ANFIS Editor GUI (or applied to the command-line function anfis) must be a matrix with the input data arranged as vectors in all but the last column. The output data must be in the last column. A sample of the ANFIS Editor GUI with input is shown in Figure 2.5.
Figure 2.5: ANFIS Editor GUI
III. CONCLUSION
Neural networks perform successfully where other methods do not. There are many areas, viz. medicine, weather forecasting, classification, resource allocation and stock market prediction, where such a decision support system can help in setting priorities and making effective and productive decisions. Since traditional connectionist systems do not provide an explicit fuzzy interface, the proposed hybrid system has wider scope and acceptability and presents the dual advantages of a type-2 fuzzy logic based decision support using ANN techniques. This technique increases accuracy.
REFERENCES
1. "Brain MRI Slices Classification Using Least Squares Support Vector Machine," Vol. 1, No. 1, Issue 1, Page 21 of 33, ICMED 2007.
2. "MRI Fuzzy Segmentation of Brain Tissue Using IFCM Algorithm with Genetic Algorithm Optimization," 1-4244-1031-2/07, IEEE 2007.
3. "Brain Cancer Detection using Neuro Fuzzy Logic," IJESS 2011.
4. "A Support Vector Machine Based Algorithm for Magnetic Resonance Image Segmentation," 978-0-7695-3304-9/08, IEEE 2008.
5. "An artificial neural network for detection of biological early brain cancer," International Journal of Computer Applications (0975-8887), vol. I, no. 6, 2010.
6. "Tracking algorithm for de-noising of MR brain images," IJCSNS, Vol. 9, No. 11, Nov. 2009.
7. "Application of Neuro-Fuzzy Model for MR Brain Tumor Image Classification," International Journal of Biomedical Soft Computing and Human Sciences, Vol. 16, No. 1, 2010.
8. "An Enhanced Implementation of Brain Tumor Detection Using Segmentation Based on Soft Computing," International Journal of Computer Theory and Engineering, Vol. 2, No. 4, August 2010, 1793-8201.
9. "Segmentation of MR Brain Images Using FCM Improved by Artificial Bee Colony (ABC) Algorithm," 978-1-4244-6561-3/10, IEEE 2010.
10. "Biological Early Brain Cancer Detection Using Artificial Neural Network," (IJCSE) Vol. 02, No. 08, 2010, 2721-2725.
11. "Improved Fuzzy C-Means Algorithm for MR Brain Image Segmentation," (IJCSE) Vol. 02, No. 05, 2010, 1713-1715.
12. "Neural Network Based Brain Tumor Detection Using MR Images," IJCSC Vol. 2, No. 2, July-December 2011, pp. 325-331.
13. "A Hybrid Technique for Automatic MRI Brain Images Classification," Studia Univ. Babes-Bolyai, Informatica, Volume LIV, 2009.
14. "Brain tumor detection based on multi-parameter MRI image analysis," ICGST-GVIP Journal, ISSN 1687-398X, vol. 9, issue III, June 2009.
15. "Aiding Neural Network Based Image Classification with Fuzzy-Rough Feature Selection," 978-1-4244-1819-0/08, IEEE 2008.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 5 (September2012) PP: 31-32
Temporal Based Multimedia Data Management System in a
Distributed Environment
Uthaya Kumar M1, Seemakurty Utej2, Vasanth K3, B. Persis Urbana Ivy4
1,2,3 SITE, VIT University, Vellore
4 Asst. Prof (SG), SITE, VIT University, Vellore
Abstract––Multimedia Data Management is an application which focuses on storing, accessing, indexing, retrieving and preserving multimedia data in a distributed environment. It was found that a reliable method was required to discover the validity information of multimedia data. To overcome this problem, a new concept of a temporal database is proposed which includes two time elements that are embedded in the application. The two time elements are the transaction time and the valid time, which provide a schema to manage historical data based on past, present and future. These time elements keep track of the changes and transactions that are updated in the database, without deleting the previous version of the data, and thus ensure that the data presented to users is up-to-date.
I. INTRODUCTION
This research paper concerns the development of a multimedia data management application that provides an effective way of accessing information. The application intends to collect, store, access, retrieve and preserve data in a distributed environment. It traces the data based on the historical view and updates several versions of the data without overwriting the previous version.
II. DETAILED PROBLEM DEFINITION
For accessing multimedia data, a multimedia database management system (MDMS) was developed. But this method was not efficient enough to access the data based on a historical view. Hence it has been enhanced to provide complex features, providing indexing and classification for multimedia data retrieval using temporal databases.
III. SOLUTION METHODOLOGY
The new method proposed provides an efficient way to manage historic data. This method consists of two elements. The first is the transaction time, which records the actual time at which the data is entered into the database. The second element is the valid time, which specifies the validity period of the data. These temporal elements make the management of the data easier and more classified. They also play a major role in monitoring the changes and transactions of the data during the insertion and updating process.
IV. EXPERIMENTAL ANALYSIS
It is demonstrated that the proposed approach is easy and efficient for storing time-varying information, ensuring up-to-date multimedia data. It was observed that in the existing approach the data was discarded during the updating process. Hence the temporal elements are embedded into the database for accessing the data based on its transaction time and valid time.
V. MODELING PRINCIPLES
The two time elements used for tracing time-varying information are the transaction time and the valid time.
1. Transaction time:
This time element is used for recording the time of the insertion and update processes. Hence the inserted and the updated data each have their own transaction time. This presents a clear view of the historical timeline of the data.
2. Valid time:
This time element represents the valid period of the multimedia data stored in the database. The validity of the data changes when the data is edited and modified. The valid time includes the two attributes valid-from and valid-to. The valid time is combined with a set of operators: equal, before, after, meets and met-by.
Consider a multimedia database containing a set of multimedia data M = {m1, m2, ..., mn}; then the complete model for the temporal based multimedia database is TEMPORAL(mi ∈ M) ≤ (tt ∩ OP ∩ vt), where tt is the transaction time, OP is a temporal operator and vt is the valid time.
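A minimal sketch of this model is given below: each record carries a transaction time and a valid-time interval, the interval operators listed above are defined over the valid time, and updates append new versions instead of deleting old ones. Class, field and method names are illustrative assumptions, not the system's actual schema.

```python
# Sketch of the temporal model: transaction time + valid-time interval per
# record, interval operators over valid time, append-only updates.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class TemporalRecord:
    media_id: str
    transaction_time: datetime       # tt: when this version was stored
    valid_from: datetime             # vt: start of the validity period
    valid_to: datetime               # vt: end of the validity period

def equal(a, b):  return a.valid_from == b.valid_from and a.valid_to == b.valid_to
def before(a, b): return a.valid_to < b.valid_from
def after(a, b):  return b.valid_to < a.valid_from
def meets(a, b):  return a.valid_to == b.valid_from
def met_by(a, b): return b.valid_to == a.valid_from

class TemporalStore:
    """Updates append a new version; previous versions are never deleted."""
    def __init__(self):
        self.versions = {}
    def upsert(self, record):
        self.versions.setdefault(record.media_id, []).append(record)
    def current(self, media_id, at):
        """Latest-stored version whose validity interval contains 'at'."""
        for rec in sorted(self.versions.get(media_id, []),
                          key=lambda r: r.transaction_time, reverse=True):
            if rec.valid_from <= at <= rec.valid_to:
                return rec
        return None

store = TemporalStore()
store.upsert(TemporalRecord("clip_001", datetime(2012, 9, 1),
                            datetime(2012, 9, 1), datetime(2012, 12, 31)))
print(store.current("clip_001", datetime(2012, 10, 15)))
```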
Figure 1: Conceptual view of the temporal multimedia database — a user query is evaluated against the temporal elements (transaction time, valid time, temporal operator) over the temporal multimedia database to produce the results.
VI. RESULTS AND DISCUSSION
The experimental results show that the temporal elements involved in the database, the transaction time and the valid time, yield high performance in the indexing and classification of data based on past, present and future versions. These time elements thus monitor the changes that have been incorporated into the database and provide the user with the updated information.
VII. CONCLUSION
An efficient method was required to trace the validity of the data due to the lack of knowledge regarding its time-varying information. To overcome this situation, a temporal database has been developed to discover and monitor historical data based on past, present and future in an efficient way. This method has been evaluated to improve the multimedia data management process by storing the multimedia objects dynamically after changes are made to the current record, without deleting the previous version of the record.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 1(August 2012) PP: 42-46
Technological Innovations and Teaching English Language at
School Level
Dr. Rooble Verma1, Dr. Priyanka Verma2, Manoj Verma3
1 Associate Professor, University Teaching Department in English, Vikram University, Ujjain (M.P.) - INDIA
2 Assistant Professor, Dept. of Management, Maulana Azad National Institute of Technology, Bhopal (M.P.) - INDIA
3 Assistant Professor, Dept. of English, Govt. College Nagod, District Satna (M.P.) - INDIA
Abstract––Proficiency in the English language is a surety for success in today's world. Effective communication is the most important factor when seeking placements. With great changes in the current requirements and the need for chiselled skills in English communication, the teaching-learning process has also undergone incredible changes. Traditional methods of teaching English are being replaced by advanced technological tools and instruments. English language learners require different sets of technical aids at different levels. Some aids are useful at the school level and some at a higher level, so it is very significant to note that the selection of technical aids for English language learners at the school level has to be done very judiciously. Those aids need to be picked that would meet learners' demands at school level. This paper is an endeavour to highlight the role of selective technological innovations that can contribute to honing all four skills of English language learners at the school level in India. The selective technological aids include Interactive Whiteboard Technology, Film/Picture Projection and the Language Laboratory.
I. INTRODUCTION
The real progress of a country is measured by the priority it lays on education. So education in every country needs to be a high priority, with new changes and innovations added to its education process. Given the disposition to acquire and add technology to it, it is necessary to encourage teachers and students to have better contact with this technology, to reinforce, practice and increase knowledge in different areas, and language learning is no exception to this fact. With the rapid development of the Internet, computer use in the classroom also offers additional possibilities for designing communicative tasks such as those built around computer-mediated communication and telecollaboration, including the ability to interact in real time with oral and written communication and to conduct information searches to find attractive and meaningful material.
The English language is an extremely important area in the educational system and it opens new horizons for the learners. Due to the current status of English as a global language of science, technology and international relations, many countries around the world consider the teaching of English a major educational priority (Crystal 1997; McKay 2000). It is the need of the time that the education system must respond to technological and scientific advancements, and the same is true for using these advancements in the development of English language skills to prepare competent manpower for the future. These technological aids can be used effectively for the development of the macro-linguistic skills of the English language.
II.
ADVANTAGES OF USING SELECTED TECHNICAL –AIDS
Teaching and learning English has faced changes in methodologies as well as in techniques, with the advances of
technology. Teaching aids reinforce the spoken or written words with concrete images and thus provide rich perceptual
images which are the bases to learning. When these materials are used in an interrelated way they make learning permanent.
They provide for a great variety of methods. They bring the outside world into the classroom and make us teach efficiently.
Following are the advantages of using selected technical aids at the school level.
• Sharpens the communication skills of the learners faster than the conventional methods.
• Helps the students to apply trial-and-error methods in learning the language.
• The latest technologies bring the world into the classroom for the learners.
• Provides a great variety of language learning methods.
• Learner-friendly aids develop the interest and involvement of school students and a greater understanding of the nuances of the language.
• Stimulates self-activity and reduces verbalism.
• Helps to teach proficiently, eventually helping in overcoming language barriers.
• Saves time and helps learners learn fast and permanently.
• Creates a native-like environment for language learning, helping learners pick up correct language skills.
III. CLASSIFICATION OF TECHNICAL AIDS USED IN ENGLISH LANGUAGE LEARNING
Education and technology today have become inseparable. The educational system leaves no stone unturned to make the best use of technological tools to make learning effective. This can be seen in all areas of educational programmes. The same is the case with language learning, where new gadgets and technological instruments are supporting the teacher in immense ways. The following diagram is a testimony to how language learning today is technologically driven.
IV. ROLE OF THE TEACHER IN USING THE TECHNICAL AIDS
In order to derive the advantages of using teaching aids, a teacher needs the knowledge of different types of
teaching aids available, their place in the teaching-learning process and the methods of their evaluation. Teaching aids
supplement the teacher and they do not supplant him. The teacher should be aware of the fact that the major task in teaching
the students at higher secondary level is to select the appropriate material to hone their English communication skills. The
teacher will play different roles at the following different stages of the language learning process.
Presentation Stage
At this stage the language teacher introduces something to be learned and should make contents clear to the students.
Practice Stage
Here, at the practice stage, students are allowed to practise what they have learned at the first stage, but the teacher keeps the learners working under his direction so that they may learn things in the right direction.
Production Stage
At this stage the teacher gives the students a free hand to work on their own and make efforts in acquiring these skills.
Presentation Stage
This stage is a test of the students and the skills that they have picked up in the initial three stages. Here the teacher serves as a kind of informant or facilitator and gives students opportunities to make their presentations.
V. CRITERIA TO SELECT EFFECTIVE TECHNICAL AIDS
In today's scenario of scientific and technological advancement, where numerous technologically advanced instruments are available to train students in the language, it should be recognised that applying all types of instruments is neither advisable nor possible. Only after careful consideration of the advantages and disadvantages should those technical aids be selected which would be effective in honing the language skills of the students at the higher secondary level. Before selecting the appropriate technical aids, the following criteria must be taken into cognizance:
• The teaching strategy should be appropriate for the student level and in line with recent developments in English language learning and teaching.
• The aids should be user friendly and technically sound.
• The presentation should be clear and precise.
• The readability and difficulty level should be appropriate for the students.
• Graphics should fulfil important purposes (motivation, information) and not distract the learners.
• There should be no grammar, spelling or punctuation errors on the screen.
• The material or content must be authentic, accurate and up to date.
VI. SELECTED TECHNICAL AIDS AND THEIR EFFECTIVE APPLICATIONS
Among the plethora of technical aids available for learning the English language, it is of prime importance to select the appropriate aids and to use them effectively for the betterment of students. Judicious selection not only saves time but also helps the students chisel their communication skills without hampering the study of other subjects at the higher secondary level. The present-day educational set-up is technology based and emphasizes that students build meaning through a high degree of interactivity among students, between students and the curriculum, and between students and the teacher. The selected technical aids that offer enormous potential in generating these interactions are as follows:
I. Interactive Whiteboard Technology
II. Film/Picture Projection
III. Language Laboratory
I. Interactive Whiteboard Technology
Interactive Whiteboard (IWB) technology is a high-class technology that offers enormous potential in generating a high degree of interactive learning among the students. An IWB is a large, interactive whiteboard that is connected to a digital projector and a computer. The projector displays the image from the computer screen on the board. The computer can then be controlled by touching the board with a special pen. Such active classrooms can address different instructional learning needs in a variety of ways.
Effective Application of IWB Technology in Higher Secondary Class
Effective Application of IWB Technology can address the following five instructional principles:
1st Instructional Principle
Learning builds on previous experiences; therefore, English language teachers need to incorporate English language students' prior knowledge, culture, interests, and experiences into new learning. An IWB classroom can help students by using nonlinguistic representations, helping students recognize patterns, and giving students opportunities to practice communicating complex ideas. Moreover, teachers can use IWBs to link the learners' prior experience with new learning by bringing the learners' home culture, interests, and experiences into the classroom through digital images, music, and multimedia.
2nd Instructional Principle
Under this principle, learning takes place in a social setting; therefore, English language teachers need to provide opportunities for student interaction. For example, giving students opportunities to use an IWB to present and discuss their own work with other students, or to become involved in class-wide activities (e.g. a class activity), improves their attention and engagement in the learning process.
3rd Instructional Principle
When knowledge is imparted in a variety of contexts it is more likely to support learning across students with diverse learning needs, so teachers need to integrate language learning strategies into different contexts. In this regard, active-classroom technology features can compensate for many differences in background that the students bring to the classroom and help develop engaged learning environments that encourage students to learn.
4th Instructional Principle
Properly organized, planned, connected and relevant information not only supports students' learning process but also helps them develop higher-order thinking skills. Teachers therefore need to contextualize instruction and use strategies such as graphic organizers that support the learners' development of higher-order skills. Active classrooms also allow instant
and accurate playbacks, which help the learners to access specific segments of material much more easily. Video materials
presented through IWBs can also bring natural and context-rich linguistic and cultural materials to the students while the
Internet—accessed through an active classroom—can enable them to access authentic news and literature in the target
language.
5th Instructional Principle
After the process of IWB-based learning is over, feedback and active evaluation of learning further students' understanding and skill development. English language teachers therefore need to incorporate short-cycle assessments into the lesson plan that provide learners with some measure of how they are progressing through the learning process.
The purpose for using IWBs in the classroom is to enable access to and use of digital resources for the benefit of
the whole class while preserving the role of the teacher in guiding and monitoring learning.
II. Film/Picture Projection
Film/picture projection is an excellent tool for teaching through task-based learning. Many English speakers with good pronunciation and vocabulary feel that they owe their proficiency to watching English movies.
Effective Application of Film/Picture Projection in Higher Secondary Class
The purpose behind teaching English language through movies is to ensure that students learn pronunciation,
vocabulary, sentence structures, modulation and delivery of words in an effective way. It is significant to pick the right kind
of movies. The choice depends on the age group of the students. For children cartoon movie would be effective while
Teenagers would probably enjoy seeing contemporary action or love stories while adult would enjoy wider range. For young
children of higher secondary class a movie with interesting content and native touch would be beneficial as it may contain
much speaking material worthwhile for proficiency in English language. The exercises associated with the video materials
are conducted before, during, and after the video presentation, which are known as the stages of previewing, viewing, and
post-viewing (Gower, Phillips, and Walters 1995). These stages are designed to maximize student understanding of the
subject matter, which will in turn increase motivation and involvement.
Previewing Stage Activities
Previewing activities activate students’ prior knowledge and raise their expectations relating to the content of the video. At
this stage the teacher can prepare vocabulary lists, reading texts, and comprehension questions about the video so students
will start reflecting about what they know of the topic.
Viewing Stage Activities
Viewing activities give students practice in both content-based and form-focused tasks that require them to use top-down and
bottom-up processing. Activities include answering multiple-choice questions, filling in the blanks, drawing inferences, and
listening for the gist.
Post-viewing Stage Activities
Post-viewing activities give students the opportunity to evaluate and comment on the video and the associated activities.
Students answer various open-ended questions about the video in terms of their personal enjoyment and the relevance of the
content. At this stage they are required to reflect and write about the content of the video, which encourages them to think
critically about the subject. At first they can write their comments in their native language, but they are progressively
required to express themselves in the target language.
III. The Language Laboratory
The language laboratory is an audio or audio-visual installation used as an aid in modern language teaching.
The modern multimedia language laboratory (i.e. CALL and WALL) is a type of system comprising a master console with a computer (teacher position) which is connected through a LAN to a number of rows of student booths (student stations), each typically containing a student computer, a headset and a microphone. The teacher console is usually fitted with all the students' ports, and the teacher can monitor each booth without moving from his place. The teacher can see the screen of a student, monitor the students' progress, correct them on the screen, and enter into dialogue with all students or with an individual student.
Language Laboratories have undergone tremendous changes right from the beginning till modern times. The first type of
language laboratory was Conventional Laboratory. The conventional lab has a tape recorder and a few audiocassettes of
the target language to teach the learners. The teacher plays the tape and the learners listen to it and learn the pronunciation.
As it is used in a normal classroom setup, it is prone to distractions and this type of laboratory is no longer common. Then
came Lingua Phone Laboratory where learners are given a headset to listen to the audiocassettes being played. Here
distractions are minimized and a certain amount of clarity in listening is possible. Then came the latest and popular version of the language laboratory, i.e. the Computer Assisted Language Laboratory (CALL). CALL uses the computer to teach language.
The language course materials are already fed into the computer and are displayed according to the features available in the
system. Nowadays, there are also laboratories with computers with a connection to the Internet. These are called Web
Assisted Language Laboratories (WALL).
Effective Application of Language Laboratory in the Higher Secondary Class
There is a lot of software available on the market that can be used in the multimedia language laboratory to help students acquire all the skills of the English language, i.e. listening, reading, writing and speaking.
For higher secondary students the Computer Assisted Language Laboratory (CALL) would be more effective than the WALL, because school students have to learn the English language along with other subjects in their curriculum. Various software packages are available with which various levels of learning are possible according to the needs and levels of the students. The aim of using CALL is to familiarize students with the language and then help them learn it according to their needs.
The basic software recommended is Learn to Speak ENGLISH (Book + 4 CD-ROMs pack; author: BPB Multimedia), a complete language-learning software package (4 CD-ROMs) plus a reference book. It features vocabulary-building exercises, exercises on perfect pronunciation, exercises on grammar, progress tracking, real-world conversations, flexible study plans, sophisticated speech recognition, a PDA and talking dictionary with translations, and learning through games and puzzles. This is the basic software which would help the students at the higher secondary level to gradually learn the language skills.
1st CD Stage
The 1st CD has two parts, i.e. Pronunciation and Basic Course. At this stage the teacher can telecast the course content and material to the students' ports using the teacher's console and help the students understand the rules of pronunciation and introduce them to vowel combinations, consonants, and consonant combinations. The second section of the first CD
carries orientation, pretest, basic words and simulated conversations. Now students are asked to pronounce the words learnt
and check through the software the correctness of their pronunciation. There is a short test after the exercise to see the level
of correctness in the pronunciation.
2nd CD Stage
The second CD carries the Extended Course Menu. This course menu carries various courses and exercises as per the needs of the students. It has three parts: Travel, Business, and Everyday Life. These exercises give the language skills needed to communicate in typical travel, business and everyday-life situations. Here the teacher can also divide the class into small sections and ask them to participate in a dialogue following the situations they have learnt.
3rd CD Stage
This stage can be used to copy certain exercises that are available online. Internet lessons are effective, but the teacher needs to telecast that material from the teacher's console rather than giving complete freedom to the students. Controlled online exercises can be of immense use in learning the language skills.
4th CD Stage
This last CD has cultural movies describing the culture, tradition and society of the USA. These cultural movies can be used by the teacher to help the learners read the text of a story about a city and then watch the movie on the screen, with the text displayed on one side of the screen while the movie runs on the other side. Then the teacher can ask the students to record their understanding on paper and check how much they were able to learn from this task-based learning.
Other than this basic Learn to Speak ENGLISH pack, various other software packages such as Renet, Aristoclass, Hiclass, Globarina, Console OCL-908W, Histudio MHi Tech and online software can be used at a later stage, once students are able to operate the basic software modules and exercises freely.
VII. CONCLUSION
Though no aid can by itself be complete enough to chisel and sharpen language skills at the higher secondary level, the selected technical aids can help learners start their learning gradually. At a later stage more advanced technological aids can assist them in sharpening their language skills, but at the school level they can be shown the path to move on. Thus it can be said that with the use of technical aids the teaching and learning of the English language has gained special attention and has spread across the globe. This sort of reach can only come by virtue of using technical aids to accelerate the process and obtain better results. Today, with the Internet as an immense source of information circulated in great volume around the world, it is significant to select the appropriate tools and instruments supporting English language learning.
REFERENCES
1. Bose, M.N.K. (2005). A Text Book of English Language Teaching for Indian Students. Chennai: New Century Book House (P) Ltd.
2. Crystal, D. (1997). English as a Global Language. Cambridge: Cambridge University Press.
3. Gower, R., Phillips, D., and Walters, S. (1995). Teaching Practice Handbook. London: Macmillan Heinemann.
4. John Kumar, Antony. (1995). Studies in Language Testing 2. Cambridge: Cambridge University Press.
5. Kachru, B. (1983). The Indianization of English. New Delhi: OUP.
6. Kumar, J.K. (2000). Mass Communication in India. Delhi: Jaico Publishing House.
7. McKay, S. L. (2000). "Teaching English as an International Language: Implications for Cultural Materials in the Classroom". TESOL Journal 9 (4): 7–11.
8. Paliwal, A.K. (1998). English Language Teaching. Surabhi Publication.
9. Richards, J. (2001). Approaches and Methods in Language Teaching. Cambridge: CUP.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 2 (September 2012) PP: 40-46
Performance Improvement of PPM Algorithm
Pramila B1, Arunkumar G2, Jagadisha N3, Prasad M.R4
1 Department of E&CE, EWIT, Bangalore.
2 Department of E&CE, STJIT, Ranebennur.
3 Department of ISE, EWIT, Bangalore.
4 Department of CSE, EWIT, Bangalore.
Abstract––While Denial-of-Service (DoS) attack technology continues to evolve, the circumstances enabling attacks have
not significantly changed in recent years. DoS attacks remain a serious threat to the users, organizations, and
infrastructures of the Internet. The research work highlights the usage of the probabilistic packet marking (PPM) algorithm, which is a promising way to discover the Internet map or the attack path that the attack packets traversed during a distributed denial-of-service attack. However, its termination condition is not well defined, which can result in an incorrect attack path being constructed by the PPM algorithm. In this research work, a novel approach with a precise termination condition, named the rectified PPM (RPPM) algorithm, is presented. The most significant merit of the RPPM algorithm is that when it terminates, it guarantees that the constructed attack path is correct with a specified level of confidence. An experimental framework is designed around the RPPM algorithm and shows that the RPPM algorithm can guarantee the correctness of the constructed attack path under different marking probabilities and different network structures.
Keywords––Denial-of-Service (DoS), Probabilistic Packet Marking (PPM), Termination Packet Number (TPN).
I. INTRODUCTION
Today, the Internet is an essential part of our everyday life and many important and crucial services like banking,
shopping, transport, health, and communication are partly or completely dependent on the Internet. According to recent
sources the number of hosts connected to the internet has increased to almost 400 million and there are currently more than 1
billion users of the Internet. Thus, any disruption in the operation of the Internet can be very inconvenient for most of us.
The devastating effects of DoS attacks have attracted the attention of scientists and researchers, leading to various mechanisms being proposed to deal with them. However, most of them are ineffective against massively distributed DoS attacks involving thousands of compromised machines. IP traceback is an important step in defending against denial-of-service
(DoS) attacks. Probabilistic packet marking (PPM) has been studied as a promising approach to realize IP traceback.
The packet marking procedure is designed to randomly encode edges’ information on the packets arriving at the
routers. Then, by using the information, the victim executes the path reconstruction procedure to construct the attack path.
Many IP traceback techniques have been proposed in the past; among them the Probabilistic Packet Marking (PPM) approach has been studied the most. In a PPM approach, the router probabilistically marks the packets with its identification information and then the destination reconstructs the network path by combining a number of such marked packets. In the existing system, the PPM algorithm is not perfect, as its termination condition is not well defined. The algorithm requires prior knowledge of the network topology, the Termination Packet Number (TPN) calculation is not well defined in the literature, and the existing system supports only the single-attacker environment.
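To make the marking step concrete, the following is a minimal sketch of the edge-sampling decision a PPM router is assumed to take for every forwarded packet. The class and field names (MarkedPacket, MarkingRouter, markingProbability) are illustrative only and are not taken from the paper or from any particular implementation.

    import java.util.Random;

    // Illustrative sketch only: one possible router-side marking step for PPM.
    class MarkedPacket {
        int startRouter;   // start of the marked edge
        int endRouter;     // end of the marked edge
        int distance;      // hops travelled since the mark was last written
    }

    class MarkingRouter {
        private final int routerId;
        private final double markingProbability;   // pre-determined marking probability
        private final Random rng = new Random();

        MarkingRouter(int routerId, double markingProbability) {
            this.routerId = routerId;
            this.markingProbability = markingProbability;
        }

        // Called for every packet the router forwards.
        void mark(MarkedPacket pkt) {
            if (rng.nextDouble() < markingProbability) {
                pkt.startRouter = routerId;     // start a new edge mark
                pkt.distance = 0;
            } else {
                if (pkt.distance == 0) {
                    pkt.endRouter = routerId;   // complete the edge begun one hop upstream
                }
                pkt.distance++;                 // count hops since the last mark
            }
        }
    }

Because later routers can overwrite earlier marks, the victim needs enough packets before every edge of the path has been observed, which is exactly where the termination condition studied in this work matters.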
II. LITERATURE SURVEY
The denial-of-service (DoS) attack has been a pressing problem in recent years, and DoS defense research has blossomed into one of the mainstream topics in the computer security field. Various techniques such as the pushback message, ICMP traceback, and packet filtering techniques are the results of this active field of research.
Denial-of-service (DoS) or Distributed DoS (DDoS) attacks have become one of the most severe network attacks today. Though relatively easy to execute [1], they can cause devastating damage. By consuming a huge amount of system resources, DoS attacks can render the normal services unavailable to legitimate users. While email has become
the most popular form of communication, the DDoS attack is a common mode of attack to cripple a mail server [2].
Lee and Fung [3] indicate that a DoS attack could be carried out during an authentication process involving public-key based operations. Many different approaches have been proposed to defend against DoS attacks. To mitigate the damage from DDoS attacks, Su et al. [6] proposed an online approach which involves first identifying the approximate sources of the attack traffic, and then applying packet filtering as near to the attack sources as possible.
Huang et al. [7] proposed an incentive based approach using cooperative filtering and cooperative caching for
defeating DDoS attacks. The most effective approach against DoS attack is to isolate the attackers from the victim’s
network. Thus, locating the attack source would be most important. We cannot rely on the source address in the IP header of
an attack packet since the source address is not authenticated in the current protocol when a router forwards a packet; so the
attacker can spoof the source IP address while launching an attack. Locating the attack source usually involves finding the
paths of the relevant packets. Because of the stateless nature of Internet routing, it is quite difficult to identify such paths.
Finding the attack traffic paths is known as the IP traceback problem [9].
Dean et al. proposed an algebraic marking scheme for IP traceback, which is based on linear algebra and coding theory. One main drawback of this marking scheme is its ineffectiveness in dealing with multiple attacks. DoS and Distributed Denial of Service (DDoS) attacks have posed major security threats to the Internet. By definition, a DoS attack is a malicious attempt to render a node or network incapable of servicing legitimate requests. DDoS is the case in which one or more attackers coordinate the sending of enormous numbers of packets aimed at clogging the victim and the network. DDoS attacks
come from various sources with different types of attacks. These attacks attempt to consume the finite resources like
memory, computational power etc. in the network and also at the victim [1]. They would also result in heavy congestion on
the network links thereby disrupting the communication among the users. Moreover, these attacks would exploit the inherent
weaknesses in the IP protocol. One such weakness is spoofing, i.e. forging the source IP address. Spoofing further complicates the detection of DoS/DDoS attacks. More importantly, innocent host systems are involved in launching such attacks.
For instance, the DoS attacks have disrupted the Internet services and incurred heavy financial losses [2]. Such attacks are
difficult to detect, prevent and trace back, but easy to implement. Thus the devastating effects of the problem have led to the
development of many anti-DoS solutions.
The countermeasures that address the DoS attacks are deployed at different points on the Internet namely at the
source-end, intermediate-network and victim-end. An ideal place to detect and filter the flooding attacks is in the network where they have been generated, that is, as close to the source of the attacks as possible. The source-end defenses act as filters for
the attacks and have advantages over the other two [3]. They include congestion avoidance over the network, small collateral
damage, feasible deployment etc. The source-end defensive systems maintain the statistics of the outgoing traffic and use
them to analyze and conclude about the ongoing attacks. But, because of the highly distributed nature of the attacks,
detection near the source is cumbersome. Attackers may also try to launch attacks that resemble legitimate requests, so that no suspicious alert about the attack traffic can be generated.
The Probabilistic Packet Marking (PPM) algorithm by Savage et al. has attracted the most attention in contributing the idea of IP traceback. The most interesting point of this IP traceback approach is that it allows routers to mark certain information on attack packets based on a pre-determined probability. Upon receiving enough marked packets, the victim (or a data collection node) constructs an attack path, which is the set of paths the attack packets traversed. Logging, in contrast, involves storing the packets on the routers which they come across and using data mining principles to find the path the packets traversed. This method can even trace back the attack long after it is over. Since the process of logging incurs significant storage overhead on routers, a mechanism that stores packet digests rather than the packets themselves has been addressed by Snoeren et al. [4]. Although the hash-based technique requires 0.5% of the total link capacity for digest table storage, the storage requirement at the routers, in general, would be too high.
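As a companion to the marking step, the sketch below shows the simplest possible victim-side reconstruction: received marks are ordered by their hop distance and chained into a path. It is a toy illustration under the assumption of a single attack path and conflict-free marks; the class names are invented, and the actual PPM/RPPM reconstruction procedures handle fragmented and conflicting marks that this sketch ignores.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    // Illustrative sketch only: naive victim-side path reconstruction from marks.
    class MarkedEdge {
        int startRouter;   // router that wrote the mark
        int distance;      // hops from that router to the victim

        MarkedEdge(int startRouter, int distance) {
            this.startRouter = startRouter;
            this.distance = distance;
        }
    }

    class PathReconstructor {
        // Returns router ids ordered from the victim outward toward the attacker.
        List<Integer> reconstruct(List<MarkedEdge> marks) {
            Map<Integer, Integer> routerByDistance = new TreeMap<>();  // sorted by distance
            for (MarkedEdge m : marks) {
                routerByDistance.putIfAbsent(m.distance, m.startRouter);
            }
            return new ArrayList<>(routerByDistance.values());
        }
    }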
III. DESIGN AND IMPLEMENTATION
Implementation of any software is always preceded by important decisions regarding selection of the platform, the
language used, etc. These decisions are often influenced by several factors such as the real environment in which the system
works, the speed that is required, the security concerns, and other implementation-specific details. Three major implementation decisions were made before the implementation of this project. They are as follows:
• Selection of the platform (operating system).
• Selection of the programming language for development of the system.
• Coding standards.
A. Implementation Requirements
The implementation decisions are mainly concerned with the ease of future enhancement of the system. The major
implementation requirements are:
• The operating system used to implement this project is Windows XP.
• The language chosen for this project is Java.
• Java Swing is used for the GUI.
B. Platform Selection
Windows is distinguished from many popular operating systems in the following important ways. The Windows
operating system is designed to be compatible with the largest combination of PC hardware. It has vast compatibility with motherboards, processors, memory chips, USB devices, and internal disk drives of the PATA and SATA variety, and no doubt other devices and standards yet to be designed will also be configurable to run under Windows. The Windows desktop and its
simple user interface have become synonymous with computing. Linux operating systems offer a similar mouse driven
interface and the Apple Mac continues to pioneer with its single button mouse, but Windows remains the most popular and
the most recognizable simply by virtue of its earlier successes and the work done by each release of the operating system to
maintain this user familiarity.
C. Language Selection
For the implementation of our project we need a flexible systems implementation language. Compilation should be relatively straightforward, the language should provide low-level access to memory and language constructs that map efficiently to machine instructions, and it should require minimal run-time support. A program should be compilable for a very wide variety of computer platforms and operating systems with minimal change to its source code. For Graphical User Interface
programming, the language chosen must be simple to use, secure, architecture neutral and portable. Additional requirements of the GUI are: 1) User interface management: windows, menus, toolbars and other presentation components must be supported by the language. 2) Data and presentation management: the language must contain a rich toolset for presenting data to the user and manipulating that data. 3) The editor: the language should have a powerful and extensible toolset for building custom editors. 4) The wizard framework: a toolset for easily building extensible, user-friendly wizards to guide users through more complex tasks. 5) Configuration management: rather than tediously writing code to access remote data and to manage and save user-configurable settings, all of this can be handled well with Java Swing. Therefore Java Swing is chosen for the GUI development.
D. Naming Conventions
Naming conventions make programs more understandable by making them easier to read. They can also give information about the function of an identifier, for example whether it is a constant, package, or class, which can be helpful in understanding the code. The conventions given in this section are high level. The naming rules for the identifiers are explained below, followed by a short illustrative sketch:
• Classes: Class names should be nouns, in mixed case with the first letter of each internal word capitalized. Try to keep class names simple and descriptive. Use whole words and avoid acronyms and abbreviations. Example: class ParseInputFile.
• Methods: Method names should be verbs, in mixed case with the first letter lowercase and the first letter of each internal word capitalized. Example: parseOidFile().
• Variables: Variable names should be short yet meaningful. The choice of a variable name should be mnemonic, that is, designed to indicate to the casual observer the intent of its use.
• Constants: The names of variables declared as class constants and of ANSI constants should be all uppercase with words separated by underscores ("_"). Example: int MIN_WIDTH = 4;
• Global variables: Global variables should be prefixed with a 'g', and it is important to know the scope of a variable.
• Pointer variables: Pointers should be prefixed by a 'p' in most cases.
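The short class below is invented purely to demonstrate the conventions listed above; none of its names or members come from the project itself.

    // Demonstration only: identifiers chosen to follow the naming rules above.
    public class ParseInputFile {                     // class name: noun, mixed case
        static final int MIN_WIDTH = 4;               // constant: uppercase with underscores
        private static int gOpenFileCount = 0;        // global-style variable prefixed with 'g'

        public int parseOidFile(String fileName) {    // method name: verb, lowercase first letter
            int lineCount = 0;                        // short but meaningful local variable
            gOpenFileCount++;
            // ... parsing logic would go here ...
            return lineCount;
        }
    }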
E. Implementation Steps
• Run the RMI batch file to start the server. The server packages are activated through the RMI registry, and the router maintenance, packet marking source, leaf router, transition router, destination and client modules are each started by running their batch files.
• Run the router maintenance module. In the router maintenance module there are three routers, namely R101, R102 and R103 (the leaf, intermediate and transition routers respectively). We can select any one of the routers to be connected or disconnected and select the submit option.
• In the packet marking source we give the source id as client-1 and the destination id as client-4. The constructed path is created automatically based on the data available in the database.
• Then we select a text file, which is divided into a number of packets, each marked with a predefined value and marking probability. This is implemented in Java using the following logic:

    double X = (Math.random() * 9) + 31;                      // random mark value in the range 31-39
    int ch1 = (int) X;
    String pcnt = pktcnt + inc;                               // packet count string
    Tpn t = (Tpn) Naming.lookup("//192.168.0.55/Server");     // RMI lookup of the server object
    t.pktEncode1(pcnt, ch1);                                  // encode the mark value into the packet
    String str1 = Integer.toBinaryString(ch1);                // binary representation of the mark value

• Then the packets are sent to the leaf router by selecting the send option.
• At the leaf router, the packets are received and checked for the values encoded by the packet marking source. They are then forwarded to the intermediate process router.
The introduction of structured programming in the 1960s and 70s brought with it the concept of structured flow charts. In addition to a standard set of symbols, structured flow charts specify conventions for linking the symbols together into a complete flow chart. The structured programming paradigm evolved from the mathematically proven concept that all problems can be solved using only three types of control structures. They are as follows:
• Sequence
• Decision (selection)
• Iteration (looping)
Process Flow Chart
A flowchart is a common type of diagram that represents an algorithm or process, showing the steps as boxes of various kinds, and their order by connecting these with arrows. This diagrammatic representation can give a step-by-step solution to a given problem. Data is represented in these boxes, and the arrows connecting them represent the flow/direction of data. Flowcharts are used in analyzing, designing, documenting or managing a process or program. The flow chart for this project is as shown in Fig. 1.
Figure 1 Flow Chart for the project
Router Maintenance
This function creates a database in which the data about the routers is maintained; it states whether a particular router is connected or disconnected. A minimal sketch of this step is given after the flow chart in Fig. 2.
• Purpose: The main purpose of this method is to check whether a router is available or not.
• Functionality: It updates the database about the availability or non-availability of a router. The various steps in this method are as below:
Step 1: Start.
Step 2: The user selects a particular router to be connected or disconnected.
Step 3: The user submits this option to the database.
Step 4: Stop.
• Input: The chosen availability of a router.
• Output: The database is checked for the options chosen by the user.
Fig 2 Flow chart for the Router Maintenance Module.
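The sketch below is a minimal, illustrative stand-in for the router maintenance step shown in Fig. 2: an in-memory map plays the role of the database that records whether R101, R102 and R103 are connected. The class and method names are assumptions made for illustration; the project itself keeps this state in a database reached through RMI.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch only: router availability tracking for the maintenance module.
    class RouterMaintenance {
        private final Map<String, Boolean> routerStatus = new HashMap<>();

        RouterMaintenance() {
            routerStatus.put("R101", true);   // leaf router
            routerStatus.put("R102", true);   // intermediate router
            routerStatus.put("R103", true);   // transition router
        }

        // Steps 2-3: the user selects a router and submits connect/disconnect.
        void submit(String routerId, boolean connected) {
            routerStatus.put(routerId, connected);
        }

        // Output: check the stored option for a router.
        boolean isAvailable(String routerId) {
            return routerStatus.getOrDefault(routerId, false);
        }
    }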
IV. SOFTWARE TESTING
Testing is the major quality control measure used during software development. Its basic function is to detect
errors in the software. During requirement analysis and design the output is a document that is usually textual and not
executable. After the coding phase computer programs are available that can be executed for testing purpose. This implies
that testing not only has to uncover errors introduced during coding, but also errors introduced during the previous phases.
Thus the goal of testing is to uncover requirements, design and coding errors in the programs.
A. Testing Process
The basic goal of the software development process [8] is to produce software that has no errors or very few errors. In an
effort to detect errors soon after they are introduced, each phase ends with a verification activity such as a review. However,
most of these verification activities in the early phases of software development are based on human evaluation and cannot
detect all the errors. The testing process starts with a test plan. The test plan specifies all the test cases required. Then the test
unit is executed with the test cases. Reports are produced and analyzed. When testing of some units is complete, these tested
units can be combined with other untested modules to form new test units. Testing of any unit involves the following:
• Select test cases,
• Execute test cases, and
• Evaluate the results of testing.
B. Test Plan
Testing is an extremely critical and time consuming activity. It requires proper planning of the overall testing
process. Testing process starts with a test plan that identifies all the testing related activities that must be performed and
specifies the schedule, allocates the resources and specifies guidelines for testing.
Testing can be divided into two phases:
1. Unit Testing
2. Integration Testing
The test cases are selected to ensure that the system behavior can be examined in all possible combinations of
conditions. Accordingly, the expected behavior of the system under different combinations is given. Test cases are therefore selected which cover inputs whose outputs are on expected lines, inputs that are not valid and for which suitable messages must be given, and inputs that do not occur very frequently and can be regarded as special cases.
Apart from unit testing and integration testing, regression testing must be given equal importance. The test plan should always make provision for regression testing. Regression testing is any type of software testing that seeks to uncover
software errors by partially retesting a modified program. The intent of regression testing is to provide a general assurance
that no additional errors were introduced in the process of fixing other problems. Regression testing is commonly used to test
the system efficiently by systematically selecting the appropriate minimum suite of tests needed to adequately cover the
affected change. Common methods of regression testing include rerunning previously run tests and checking whether
previously fixed faults have re-emerged.
Test Environment
This section discusses in detail the environment for conducting the testing process. The minimum hardware and software requirements are specified as follows.
Hardware requirements
• Intel Pentium IV processor
• 256 MB RAM
• 20 GB HDD
• Input and output devices: keyboard, mouse and monitor
Software requirements
• Operating system: Windows XP with SP2
• Programming language: JDK 1.6
The Test Setup
The steps to be followed prior to conducting the tests are as follows:
• Install JDK 1.6 on the computer.
• Place the project folder in any of the hard disk drives.
• Select the main file of the project and execute it by double-clicking on it.
• The project file should be in debug mode.
After performing the above steps, the test cases mentioned in the following section can be executed.
C. Unit Testing
Here all the individual modules identified are tested one at a time as they are being developed. Unit testing deals
with checking every single component of the code individually to make sure that the modules work properly when they are
integrated.
D. Test Strategy
The strategies used to perform unit testing are described below:
• Features to be tested: The features to be tested most importantly include the accuracy of the individual unit and the range of inputs for which the unit functions properly.
• Items to be tested: The items to be tested include all the individual units or functions which collectively form the whole system; in the case of unit testing the items to be tested are the individual units, together with a sample input (any valid input for the unit), the corresponding expected output and the actual output.
• Purpose of testing: To examine whether the protocol anomaly detection modules are able to detect any anomaly which may be present in the headers of the individual protocols.
• Pass/fail criteria: The pass or fail criterion is the matching of the expected and the actual outputs of the integrated modules.
E. Integration Testing
Integration testing deals with checking whether the components work correctly in synchronization, when they are
integrated. Unit testing establishes that each component is singularly working as expected. Thus integration testing is
reduced to just checking the co-ordination amongst the tested components. This integration process involves building the
system and testing the resultant system for problems that arise from component interactions. Integration tests should be
developed from the system specification and integration testing should begin as soon as usable versions of some of the
system components are available.
The main difficulty that arises in integration testing is localizing errors that are discovered during the process.
There are complex interactions between the system components and, when an anomalous output is discovered, it may be
hard to find the source of the error. To make it easier to locate the errors, an incremental approach to system integration and
testing must be used.
F. Test Cases for Integration Testing
In this project all the methods are defined in a single class. Integration testing is done by testing the main module, which invokes all the methods.
• Integration test case for the Sender and Receiver modules.
V. CONCLUSION
The conclusion that can be drawn from this project is, firstly, the detection of errors such as the loss of data packets in the network system. Secondly, when attackers are introduced at an intermediate router, the path that the data packets have traversed can be traced and the location of the attackers can be found. This project can be extended by implementing it for multiple-attacker environments.
REFERENCES
1. Yao Chen, Shantanu Das, Pulak Dhar, Abdulmotaleb El Saddik, and Amiya Nayak, "Detecting and Preventing IP-spoofed Distributed DoS Attacks", School of Information Technology and Engineering, University of Ottawa, International Journal of Network Security, Vol. 7, No. 1, pp. 70–81, July 2008.
2. Moon-Chuen Lee, Yi-Jun He, and Zhaole Chen, "On Improving an Algebraic Marking Scheme for Detecting DDoS Attacks", Dept. of Computer Science & Engineering, The Chinese University of Hong Kong, Journal of Information Assurance and Security, 2008.
3. Tsz-Yeung Wong, Man-Hon Wong, and Chi-Shing (John) Lui, "A Precise Termination Condition of the Probabilistic Packet Marking Algorithm", IEEE Transactions on Dependable and Secure Computing, Vol. 5, No. 1, January-March 2008.
4. S. Malliga and A. Tamilarasi, "A Proposal for New Marking Scheme with Its Performance Evaluation for IP Traceback", Department of Computer Science and Engineering, Kongu Engineering College, Perundurai, Erode, WSEAS Transactions on Computer Research, Issue 4, Volume 3, April 2008.
5. Kevin J. Houle and George M. Weaver, "Trends in Denial of Service Attack Technology", CERT Coordination Center, 2008.
6. Chao Gong and Kamil Sarac, "Toward a Practical Packet Marking Approach for IP Traceback", Department of Computer Science, University of Texas at Dallas, 2008.
7. Tsz-Yeung Wong, John Chi-Shing Lui, and Man-Hon Wong, "Markov Chain Modelling of the Probabilistic Packet Marking Algorithm", Department of Computer Science and Engineering, The Chinese University of Hong Kong, International Journal of Network Security, Vol. 5, No. 1, pp. 32–40, July 2007.
8. Rajib Mall, "Fundamentals of Software Engineering", Prentice-Hall of India Pvt Ltd, 3rd edition, 2009.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 3 (September 2012) PP: 40-44
Application of Synchronized Phasor Measurement in Real Time
Wide-Area Monitoring
Ms Anupama S. Patil
Assistant Professor, ZES’s DCOER, Pune
Abstract––The most important parameters in several monitoring, control, and protection functions in inter-connected electric power networks are the phasors representing positive sequence voltages and currents in the network. There is a need to manage power system congestion and disturbances through wide-area monitoring, protection and control, which is also useful for improving grid planning, operation and maintenance. The latest advances in synchronized measurement technology are
used in such systems to monitor, assess and to take control action to prevent or mitigate problems in real time. Time
synchronization is not a new concept or a new application in power systems. With the advancement in technology, the
time frame of synchronized information has been steadily reduced from minutes to microseconds. This paper stresses the
application of phasor technology for power system operations in Real Time Wide-Area Monitoring and Analysis. The
real-time monitoring provides the operator on-line knowledge of system conditions to increase the operational efficiency
under normal system conditions, and allows the operator to detect, anticipate, and correct problems during abnormal
system conditions. A MATLAB simulation has been carried out to demonstrate the monitoring of the multi-machine
system using synchronized phasor measurement.
Keywords––DFT, power system stability, synchrophasor, synchrophasor measurement
I. INTRODUCTION
Time synchronization provides the advantage of directly measuring the system state instead of estimating it. Phasors representing positive sequence voltages and currents in a power network are the most important parameters in inter-connected electric power networks. To get a consistent description of the state of such networks, it becomes necessary to synchronize the measurement processes. Voltage and current phasors in a three-phase power system can be measured from waveform samples. The synchronized phasor measurement technique can be used to determine the transient stability of a power system.
II. PHASOR AND SYNCHROPHASOR MEASUREMENT
A phasor is a complex number associated with a sinusoidal wave. The phasor magnitude is the same as the
sinusoidal wave magnitude. The phasor phase angle is the phase of the wave at t = 0. Phasors normally are associated with a
single frequency. Synchrophasors, or synchronized phasor measurements provide a means for comparing a phasor to an
absolute time reference. The availability of high-accuracy satellite-synchronized clocks makes synchronized phasor
measurement possible. Through the use of the clock, the phasor measurement unit (PMU) produces a reference sinusoidal
wave [1], [2]. This reference wave is a nominal-frequency sinusoidal wave whose maximum occurs at the exact start of
each second. The measured local voltage or current is then compared to this reference wave. According to the standard, a
synchronizing source that provides the common timing reference may be local or global. The synchronizing signal may be
distributed by broadcast or direct connection, and shall be referenced to Coordinated Universal Time (UTC). One commonly
utilized synchronizing signal is the satellite signal broadcast from the Global Positioning System (GPS). At present Phasor
Measurement Units (PMUs) are the most sophisticated time-synchronized tool available to power engineers and system
operators for wide-area applications. This tool has been made possible by advancements in computer technology and the availability of GPS signals. A phasor is a vector consisting of a magnitude and an angle that corresponds to a sinusoidal waveform. The phasor of a signal can usually be derived based on the Fourier transform by utilizing N data samples of the
signal within a selected time window. The phasor calculated by DFT is given by:
X = \frac{2}{N} \sum_{k=0}^{N-1} x_k \left( \cos k\theta - j \sin k\theta \right) \qquad (1)
For a steady state signal, the magnitude is a constant while the value of the angle depends on the starting point of
samples. In other words, the angle is a relative quantity and a time reference has to be selected before it can be calculated. By
measuring the difference in phase angle between the Positive Sequence Voltage (PSV) phasors at the outputs of the generators, it is possible to detect transient stability or instability conditions.
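As an illustration of equation (1), the following sketch computes the phasor of one cycle of N waveform samples. The assumption θ = 2π/N (one cycle per window), the sample-array convention and the returned magnitude/angle pair are made for the example only and are not taken from the paper or from any PMU standard.

    // Illustrative sketch only: one-cycle DFT phasor estimate following equation (1).
    class PhasorEstimator {
        /** Returns {magnitude, angleInRadians} of the phasor of one cycle of samples. */
        static double[] dftPhasor(double[] samples) {
            int n = samples.length;
            double theta = 2.0 * Math.PI / n;      // assumed: one full cycle in the window
            double real = 0.0, imag = 0.0;
            for (int k = 0; k < n; k++) {
                real += samples[k] * Math.cos(k * theta);
                imag -= samples[k] * Math.sin(k * theta);   // minus sign from (cos - j sin)
            }
            real *= 2.0 / n;
            imag *= 2.0 / n;
            double magnitude = Math.hypot(real, imag);
            double angle = Math.atan2(imag, real);
            return new double[] { magnitude, angle };
        }
    }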
III. POWER SYSTEM STABILITY
Stability is a property of a power system containing two or more synchronous machines. The classical model of a multi-machine system may be used to study the stability of a power system [6]. Transient stability is the ability of the system to return to a steady operating condition after fault clearance. Any unbalance between generation and load initiates transients that cause the rotors of the synchronous machines to “swing” because net accelerating torques are exerted on these rotors. When the total load equals the total
generation, machine speeds are practically equal to synchronous speed. The angular displacements, δ, of machines in a
system provide information about the system dynamics. One cannot measure this angular displacement mechanically. By
computing the voltage phasor behind the machine transient reactance, one can study the phase angle variations to obtain an
image of the machine angular displacement. In practice, machine oscillating modes can be determined by measuring the
phase angle of the positive-sequence voltage phasor at the machine terminals. If the oscillatory response of a power system
during the transient period following a disturbance is damped and the system settles in a finite time to a new steady operating
condition, we say the system is stable. Otherwise the system is considered unstable.
Fig. 1 WSCC 3-machine, 9-bus system
The popular Western System Coordinating Council (WSCC) 3-machine, 9-bus system shown in Fig. 1 [7][8] is considered for analysis. The base MVA is 100, and the system frequency is 60 Hz. A MATLAB simulation has been carried out on the above system for the healthy condition, various fault conditions and a change-in-load condition. The results of the simulation are observed through the PMU.
Fig. 2 Healthy system: (a) relative rotor angle δ12 (deg) vs. time (s); (b) relative rotor angle δ13 (deg) vs. time (s)
41
Application of Synchronized Phasor Measurement in Real Time Wide-Area…
Fig. 3 L-L-L-G fault: (a) relative rotor angle δ12 (deg) vs. time (s); (b) relative rotor angle δ13 (deg) vs. time (s)
Fig. 4 L-L-G fault: (a) relative rotor angle δ12 (deg) vs. time (s); (b) relative rotor angle δ13 (deg) vs. time (s)
Fig. 5 Increase in load at bus 5: (a) relative rotor angle δ12 (deg) vs. time (s); (b) relative rotor angle δ13 (deg) vs. time (s)
Fig. 6 Increase in load at bus 8: (a) relative rotor angle δ12 (deg) vs. time (s); (b) relative rotor angle δ13 (deg) vs. time (s)
IV. RESULT ANALYSIS
We will demonstrate that by measuring the difference in phase angle between the PSV phasor at the output of generator 2 (Bus 2) and the same phasor at the output of the swing generator 1 (Bus 1), and the difference in phase angle between the PSV phasor at the output of generator 3 (Bus 3) and the same phasor at the output of the swing generator 1 (Bus 1), it is possible to detect transient stability or instability conditions. Generator 1 is specified as the swing generator and its inertia is specified to be very high. As we specify an effectively infinite inertia, the speed and therefore the frequency of this machine are kept constant. Various types of faults and load-change conditions are applied to check the stability of the system.
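A minimal sketch of the stability check applied to the simulation output is given below: the relative PSV phase angle between a generator and the swing generator is tracked over the run, and a sustained excursion beyond a chosen bound is taken as loss of synchronism. The sampled angle series and the bound are assumptions for illustration only; they do not correspond to any specific threshold used in the paper.

    // Illustrative sketch only: flag loss of synchronism from a relative-angle series.
    class StabilityCheck {
        /** deltaDegrees: relative PSV/rotor phase angle (deg) sampled over the run. */
        static boolean isStable(double[] deltaDegrees, double boundDeg) {
            double first = deltaDegrees[0];
            for (double d : deltaDegrees) {
                // an excursion far beyond the bound indicates loss of synchronism
                if (Math.abs(d - first) > boundDeg) {
                    return false;
                }
            }
            return true;   // the oscillation stayed bounded: treated as stable
        }
    }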
Fig. 2 shows the results of the simulation for the healthy system. The oscillatory response of the power system during the transient period following a disturbance is damped, the system settles in a finite time to its synchronous value, and the PSV phasor phase-angle-difference waveforms reflect a stable response. Hence the system is stable. When an L-L-L-G fault is applied at Bus 7-5 for 100 ms, the results are shown in Fig. 3. As the fault duration is greater than the critical fault-clearing time, the phase difference increases exponentially with a positive time constant and the machine speed increases linearly. This indicates loss of machine synchronism. Hence the system is unstable.
Similarly, when an L-L-G fault is applied at Bus 1 for 300 ms, the results are shown in Fig. 4. When the load applied at bus 5 is increased to 320 MW, 95 MVAR, i.e. three times the original, the results are shown in Fig. 5. Owing to the unbalance between generation and load, the phase difference increases exponentially with a positive time constant. This indicates loss of machine synchronism. Hence the system is unstable.
When the load applied at bus 8 is increased to 320 MW, 95 MVAR, i.e. more than three times the original, the results are shown in Fig. 6. Here the phase difference increases exponentially with a negative time constant. This indicates loss of machine synchronism. Hence the system is unstable.
V. CONCLUSION
In synchronized phasor measurement applications, all network waveforms are compared to a time-based phase reference. The measurement of positive sequence voltage, motivated by developments in the digital relaying field, is much more useful than measurements made on one phase of a three-phase AC power system. Among the different available techniques of phasor measurement, the DFT technique is widely used. By measuring the difference in phase angle between the PSV phasor at the output of generator 1 (Bus 4) and the same phasor at the output of generator 2 (Bus 5), it is possible to detect transient stability or instability conditions. When the fault duration is less than the critical clearing time, the system remains stable; when the fault duration is greater than the critical clearing time, the system becomes unstable.
REFERENCES
1. Gabriel Benmouyal, E. O. Schweitzer, and A. Guzmán, "Synchronized Phasor Measurement in Protective Relays for Protection, Control, and Analysis of Electric Power Systems", Schweitzer Engineering Laboratories, Inc., Techpaper 6139.
2. IEEE Standard 1344-1995, IEEE Standard for Synchrophasors for Power Systems.
3. J. Z. Yang and C. W. Liu, "A Precise Calculation of Power System Frequency and Phasor", IEEE Transactions on Power Delivery, Vol. 15, No. 2, April 2000.
4. Hadi Saadat, Power System Analysis, Tata McGraw-Hill, 2002.
5. A. G. Phadke and J. S. Thorp, "A New Measurement Technique for Tracking Voltage Phasors, Local System Frequency", IEEE Transactions on Power Apparatus and Systems, May 1983.
6. P. Kundur, Power System Stability and Control, McGraw-Hill, 1994.
7. P. M. Anderson and A. A. Fouad, Power System Control and Stability, Iowa State University Press, Ames, IA, 1977.
8. P. W. Sauer and M. A. Pai, Power System Dynamics and Stability, Prentice Hall, Upper Saddle River, New Jersey, 1998.
9. A. G. Phadke et al., "Synchronized Sampling and Phasor Measurements for Relaying and Control", IEEE Transactions on Power Delivery, Vol. 9, No. 1, January 1994.
10. P. Bonanomi, "Phase Angle Measurements With Synchronized Clocks: Principle and Applications", IEEE Transactions on Power Apparatus and Systems, Vol. PAS-100, No. 12, December 1981.
11. Edward Wilson Kimbark, Power System Stability, Vol. III: Elements of Stability Calculations.
12. D. P. Kothari and I. J. Nagrath, Modern Power System Analysis.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September 2012) PP: 32-38
Content Based Hybrid DWT-DCT Watermarking for Image Authentication in Color Images
S. Radharani1, Dr. M.L. Valarmathi2
1 Assistant Professor, Department of Computer Applications, Sree Narayana Guru College, Coimbatore
2 Associate Professor, Department of Computer Science and Engineering, Government College of Technology, Coimbatore
Abstract––The present work relates to a novel content-based image authentication framework. Here we take four statistics-based methods for creating the watermark; all of these methods embed the statistical feature into the host image. In the first method, the Frobenius norm of the host image is taken as the watermark using the ICA technique. The other methods use the mean, the standard deviation, and the combined mean and standard deviation of the host image as the watermark. The hybrid method combines DCT and DWT to hide the watermark, so that imperceptibility is maintained. The same statistical features are extracted as a feature set, which is embedded in the host image during the embedding phase. In the watermark retrieval process, content modification is identified by a correlation method with a predefined threshold. The experimental results demonstrate that the newly proposed DWT-DCT hybrid watermark embedding algorithm with HSV is robust to compression, noise and other attacks.
Keywords––Discrete Cosine Transform, Discrete Wavelet Transform, Independent Component Analysis, Frobenius
Norm, Human Visual System.
I. INTRODUCTION
Watermarking is the process of inserting predefined data into multimedia content in such a way that the degradation of
quality is minimized and remains at an imperceptible level. Transformations such as the DCT and DWT are used for
watermarking in the frequency domain; a thorough comparison of the DCT and DWT is found in [1]. For content
authentication, the embedded watermark can be extracted and used for verification purposes. The techniques used are
classified into three main categories: robust, fragile, and semi-fragile methods. Robust techniques are primarily used in
applications such as copyright protection and ownership verification of digital media, because they can withstand nearly all
attacks. Fragile methods are mainly applied to content authentication and integrity verification, because they are fragile to
almost all modifications. By contrast, semi-fragile methods are robust against incidental modifications such as JPEG
compression, but fragile to other modifications. During authenticity verification, the difference between the original watermark
and the extracted watermark is compared against a predefined threshold. Applications of watermarking include content
identification and management, content protection for audio and video, forensics, document and image security,
authentication of content and objects (including government IDs), broadcast monitoring, etc. The present work addresses
image authentication for color images. In this paper the researchers first utilize the Frobenius norm, computed using the ICA
technique, as the watermark, and then also use other statistical measures, namely the mean, the standard deviation, and the
combined value of the mean and standard deviation, for watermark generation. To find the tampered location, random noise
is applied as a tamper on the watermarked image and the tampered region is located through coding. The advantage of this
method is that it can tolerate attacks such as JPEG compression, gamma correction, noise, low-pass filtering, sharpening,
histogram equalization, and contrast stretching. The quality metrics used in this method are PSNR, Image Fidelity, and the
Pearson Correlation Coefficient.
II. RELATED WORKS
In the area of robust watermarking techniques, [2] proposes a quantization index modulation algorithm in the
frequency domain to achieve copyright protection. In [3], a binary watermark image is embedded in selected sub-bands of
a 3-level DWT transform of the host image. Further, the DCT of each selected DWT sub-band is computed and the
PN-sequences of the watermark bits are embedded in the coefficients of the corresponding DCT middle frequencies. In the
extraction phase, the watermarked image, which may have been attacked, is first preprocessed by sharpening and Laplacian of
Gaussian filters. Then the same procedure as in the embedding process is used to extract the DCT middle frequencies of each
sub-band. Finally, the correlation between the mid-band coefficients and the PN-sequences is calculated to determine the
watermark bits. In the area of fragile watermarking techniques, [4] proposes a secure fragile watermarking algorithm in which
a signature is extracted from each block of the image and inserted in that block; the extraction of the signature and the
appropriate parameters for computation of the signature were studied. In [5], a novel fragile watermarking technique was
proposed to increase the accuracy of tamper localization by maintaining the dependence among blocks. In that method the
watermark embedded in each block was composed of three digital signatures, which provides security against the vector
quantization counterfeiting attack and increases the accuracy of tamper localization. Earlier work on semi-fragile
watermarking [6] mostly focused on detecting whether an image was tampered with or not; however, those techniques were
not able to identify the tampered location. In [7], the authors used random bias and
nonuniform quantization to improve the performance of the methods proposed by Lin and Chang. In 2000, [8] used the
histogram of the 'value' or 'intensity' component of the HSV color space to find the most appropriate area to insert the
watermark.
III. CONTENT BASED HYBRID DWT-DCT WATERMARKING
In the proposed method the researchers introduce Human Visual System concepts: the given RGB image is converted
into an HSV image and the embedding process is performed in that space. Before embedding, the image is resized to
256×256. During the extraction process, the HSV image is converted back into an RGB image; this increases robustness
(a small sketch of this preprocessing is given below). The proposed method is classified according to the watermark
embedded in the host image with the combined DWT-DCT watermarking technique. The first method uses the Frobenius
norm of the host image as the watermark; the second uses the mean of the host image; the third employs the standard
deviation; and the last uses the combined value of the mean and standard deviation of the host image. For all these methods
a comparative study is carried out based on the time taken for both embedding and extraction of the watermark.
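As an illustration only (the file name and the use of the Pillow and Matplotlib libraries are our own assumptions, not part of this work), a minimal Python sketch of this preprocessing, i.e. resizing to 256×256, converting RGB to HSV for embedding, and converting back to RGB afterwards, is:

import numpy as np
from PIL import Image
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

# Load a hypothetical host image, resize it to 256x256, and scale it to [0, 1]
rgb = np.asarray(Image.open('host.png').convert('RGB').resize((256, 256))) / 255.0
hsv = rgb_to_hsv(rgb)                      # the embedding is performed on this HSV image
# ... embed the block-wise watermark of Sections 3.1-3.2 here ...
marked_rgb = (hsv_to_rgb(hsv) * 255).astype(np.uint8)   # back to RGB after embedding
Image.fromarray(marked_rgb).save('host_marked.png')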
3.1. Watermark Generation
i. First the image is divided into blocks of size 16×16.
ii. For each block, apply FastICA.
iii. Depending on the choice (Frobenius norm, mean, standard deviation, or mean & standard deviation), generate the watermark as follows (an illustrative sketch is given after this list):
a. Compute the Frobenius norm of the block if the choice is 1: w = norm(A, 'fro')
b. Compute the mean of the block if the choice is 2: w = mean(mean(A))
c. Compute the standard deviation of the block if the choice is 3: w = std(std(A))
d. Compute the mean & standard deviation of the block if the choice is 4: w = std(mean(A))
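The following Python sketch (an illustration under our own assumptions, not the authors' MATLAB code; the FastICA preprocessing of each block is omitted for brevity) computes the four block-wise statistics used as watermarks:

import numpy as np

def block_watermark(image, block_size=16, choice=1):
    """Compute a block-wise statistical watermark (a sketch of Section 3.1).

    choice: 1 = Frobenius norm, 2 = mean, 3 = standard deviation,
            4 = combined mean and standard deviation (the std of the column means,
            mirroring the std(mean(A)) expression above).
    """
    h, w = image.shape[:2]
    marks = []
    for r in range(0, h - block_size + 1, block_size):
        for c in range(0, w - block_size + 1, block_size):
            A = image[r:r + block_size, c:c + block_size].astype(float)
            if choice == 1:
                marks.append(np.linalg.norm(A, 'fro'))
            elif choice == 2:
                marks.append(A.mean())
            elif choice == 3:
                marks.append(A.std())
            else:
                marks.append(np.std(A.mean(axis=0)))
    return np.array(marks)

# Example: Frobenius-norm watermark of a random 256x256 test image
w = block_watermark(np.random.rand(256, 256), choice=1)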
3.2. Watermark Embedding
The embedding process follows the algorithm given below:
1. For each block, apply a 3-level DWT.
2. D = DCT(mid-frequency sub-band of the 3rd DWT level).
3. D = sign(D) · α · w(k),
where α is the embedding strength, obtained as
α = (standard deviation of the original watermark image) / (standard deviation of the extracted watermark).
In this work, after completing the experiments, the average value of α is 0.247 for the Frobenius norm, 0.068 for the mean,
0.092 for the standard deviation, and 0.2021 for the combined mean and standard deviation.
4. Perform the inverse DCT and inverse DWT.
5. Repeat steps 2-4 for all the blocks.
The result is the watermarked image I*.
3.3 Watermark Extraction
1. Perform the watermark generation procedure of Section 3.1 on the received image I' to obtain the computed watermark.
2. Perform the DWT of each block and then the DCT of the same block.
3. Extract the embedded watermark from the chosen DCT coefficient:
4. w' = abs(D) / α
5. This set forms the extracted watermark. (A combined sketch of the embedding and extraction steps is given below.)
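A simplified combined sketch of the embedding (Section 3.2) and extraction (Section 3.3) steps follows. It is illustrative only: it assumes the PyWavelets, SciPy, and NumPy packages, uses the Haar wavelet, and embeds one watermark value per 16x16 block into a single coefficient of the DCT of a level-3 detail sub-band; the particular sub-band and coefficient position are our own assumptions, not specified in the paper.

import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed_block(block, w_k, alpha):
    """Embed one watermark value w_k into a 16x16 block; return the new block."""
    coeffs = pywt.wavedec2(block, 'haar', level=3)          # 3-level DWT
    cH3 = coeffs[1][0]                                      # a level-3 detail sub-band (2x2 here)
    D = dctn(cH3, norm='ortho')                             # DCT of the sub-band
    s = np.sign(D[1, 1]) if D[1, 1] != 0 else 1.0
    D[1, 1] = s * alpha * w_k                               # step 3: D = sign(D) * alpha * w(k)
    coeffs[1] = (idctn(D, norm='ortho'),) + coeffs[1][1:]   # inverse DCT
    return pywt.waverec2(coeffs, 'haar')                    # inverse DWT

def extract_block(block, alpha):
    """Recover the embedded watermark value from a (possibly attacked) block."""
    coeffs = pywt.wavedec2(block, 'haar', level=3)
    D = dctn(coeffs[1][0], norm='ortho')
    return abs(D[1, 1]) / alpha                             # step 4: w' = |D| / alpha

block = np.random.rand(16, 16)
marked = embed_block(block, w_k=0.8, alpha=0.247)
print(extract_block(marked, alpha=0.247))                   # recovers approximately 0.8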
3.4 Authentication
1. Compute the block-wise percentage difference between the values w and w':
Block difference = |w'_i - w_i| / max(w_i) * 100
The threshold for the percentage difference has been experimentally determined as 15%.
This block-wise difference is used to check whether any block has been modified, and hence to verify authenticity. The
threshold value is fixed by experimentation: if the difference is less than the threshold value, the image is accepted as
authentic; otherwise it is treated as unauthentic. (A short sketch of this check is given below.)
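A minimal sketch of this authentication check, assuming the embedded watermark w and the extracted watermark w' are available as NumPy arrays, is:

import numpy as np

def is_authentic(w, w_prime, threshold_percent=15.0):
    """Return (authentic?, per-block percentage differences)."""
    diff = np.abs(w_prime - w) / np.max(w) * 100.0   # block-wise percentage difference
    return bool(np.all(diff < threshold_percent)), diff

w = np.array([0.80, 0.55, 0.62])
w_prime = np.array([0.82, 0.54, 0.61])
print(is_authentic(w, w_prime))   # small differences, so the image is accepted as authentic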
IV. EXPERIMENTAL RESULTS
The proposed scheme has been tested using Matlab and evaluated on sets of images of size 256 × 256, 1920 × 1440,
1024 × 768, and 729 × 546. The block size chosen is 16 × 16, as it resulted in a better PSNR value. The alpha value is
calculated based on the standard deviations of the original image and the watermarked image: for the Frobenius norm
watermark its value is 0.247, for the mean watermark 0.068, for the standard deviation watermark 0.092, and for the combined
mean and standard deviation watermark 0.2021. The threshold value for the calculation of the percentage difference is chosen
as 25%. A value below the threshold is treated as a false negative, and a value above the threshold indicates that the method
is fragile. Middle-frequency coefficients are selected to embed the watermark: embedding the watermark in low-frequency
components results in visual degradation of the host
image. Similarly, embedding the watermark data in high-frequency components results in loss of data due to compression.
Hence our method ensures robustness. The ICA algorithm adopted in this method has been discussed in [9], [10].
To assess the quality of the watermarked image, the metrics PSNR, Pearson Normalized Cross-Correlation Coefficient,
Normalized Cross-Correlation, and Image Fidelity are used; these are calculated between the host image and the watermarked
image. The present work implements four different watermark generation techniques. Fig. 1 shows the Baboon image before
and after watermarking, in both RGB and HSV formats, for the Frobenius norm watermark generation technique. Fig. 2
shows the Baboon image after JPEG compression, gamma correction, noise, filtering, histogram equalization, and contrast
stretching attacks.
Figure 1. Original image, original image in HSV, watermarked image in HSV, and watermarked image.
Figure 2. Watermarked image after various attacks: JPEG 100, JPEG 80, JPEG 60, and JPEG 40 compression, uniform noise, Gaussian noise, gamma correction, low-pass filtering, sharpening, histogram equalization, and contrast stretching.
Figure 3 shows the tampered location identification for the Baboon image. Here random noise is applied as a tamper to the
watermarked image, and the tampered location is identified correctly.
Figure 3. Detection of tampered location (tampered image and identified tampered locations).
Figure 4. Embedding time and Extraction time for various test images.
From Fig. 4, the embedding time depends on the size of the image, while the extraction time is almost the same for all image types.
Figure 5. Percentage block difference between watermark extracted and the watermark for various test images.
Figure 6. Percentage block differences for various test images after incidental image processing operations.
From Fig. 6, for all the test images the block-wise percentage difference is about 25 for the JPEG attack. For gamma
correction the block-wise percentage difference is approximately 5. For noise attacks the value is negligible except for natural
scenes. For the low-pass filter the value is above 20 for all images. For the sharpening attack the value is almost the same for
all image types, at about 20. For the histogram equalization attack the value is 25, and for contrast stretching the value
depends on the type of image.
Table 1. Quality metrics after watermarking, using hybrid DCT-DWT watermarking with the Frobenius norm.

Images          MSE        PSNR       IF   PCC       NC
Baboon          2.88E-05   93.539     1    0.99972   1
Forgery1        3.98E-06   102.1355   1    0.99984   1
House           2.13E-05   94.843     1    0.9998    1
Lena            6.23E-05   90.1869    1    0.99963   1
Natural_scene   9.00E-06   98.5898    1    0.99983   1
Pepper          4.00E-05   92.1122    1    0.99764   1
Timthumb        1.70E-05   95.8175    1    0.99966   1
From Table 1, the quality metric PSNR is approximately 95 dB and the mean squared error is negligible. The Image Fidelity,
Pearson Correlation Coefficient, and Normalized Correlation values are all approximately 1.
Figure 7. Embedding time and Extraction time for various test images in various watermark generation techniques.
From Fig. 7, the embedding time is about the same (roughly 80) for the methods using the Frobenius norm, standard deviation,
and combined mean and standard deviation as the watermark, while the embedding time for the mean watermark is 107.
Similarly, the extraction time is the same for all methods (about 20) except for the mean watermark (33).
Figure 8. Percentage block difference between watermark extracted and the watermark for various test images in various
watermark generation techniques.
From Fig. 8, the block-wise difference is lowest for the mean watermark and highest for the combined mean and standard deviation watermark.
Figure 9. Percentage block differences, averaged over the test images, after incidental image processing operations.
From Fig. 9, the Frobenius norm watermark has the highest block-wise percentage difference for all attacks compared with
the other watermark techniques, while the mean watermark has the lowest block-wise percentage difference for all attacks.
Table 2. Quality metrics after watermarking, using hybrid DCT-DWT watermarking with all watermark generation techniques.

Type                     MSE        PSNR       IF         PCC        NC
DWTDCTfastICAfrob        2.61E-05   95.3177    1          0.999446   1
DWTDCTfastICAmean        2.13E-05   96.64904   0.999931   0.999813   0.9999
DWTDCTfastICAstd         2.1E-05    96.72099   0.999933   0.999814   0.999901
DWTDCTfastICAmean&std    1.51E-05   96.95161   0.999933   0.999851   0.999901
From Table 2, the PSNR value is almost the same for all the methods, at approximately 95 dB. The IF, PCC, and NC values are
nearly the same for all methods, at approximately 1. The MSE value is negligible for all methods.
V. CONCLUSION
This paper has discussed content-based watermarking with four different watermark generation schemes for image
authentication using hybrid DWT-DCT watermarking. The proposed system embeds the watermark in the HSV equivalent
of the host image and then converts it back to the RGB watermarked image. The attacks are applied on the RGB
watermarked image. The proposed method correctly authenticates the image under normal image processing operations, and
it correctly detects tampering and identifies the tampered regions of the image for all four watermark types. The quality of
the watermarked image is better than that of existing techniques; the average PSNR value in the proposed system is about
95 dB for all the methods discussed in this paper. In terms of computation time, the mean watermark technique takes slightly
more time than the other methods. Extensive experimentation demonstrates the efficiency of the standard deviation watermark
method among the four methods developed.
REFERENCES
1. Abbasfard, M., Digital Image Watermarking Robustness: A Comparative Study, Master's Thesis, Delft, Netherlands, 2009.
2. Chi-Hung Fan, Hui-Yu Huang and Wen-Hsing Hsu, "A robust watermarking technique resistant to JPEG compression", Journal of Information Science and Engineering, 27, 163-180, 2011.
3. Kasmani, S. A., Naghsh-Nilchi, A., "A new robust digital image watermarking technique based on joint DWT-DCT transformation", Third International Conference on Convergence and Hybrid Information Technology (ICCIT '08), 11-13 Nov. 2008, Vol. 2, pp. 539-544.
4. Raja' S. Alomari and Ahmed Al-Jaber, "A fragile watermarking algorithm for content authentication", International Journal of Computing & Information Sciences, Vol. 2, No. 1, Apr. 2004.
5. Tatar, Y. (Bilgisayar Muhendisligi Bolumu, Firat Univ., Elazig), "A novel fragile watermarking technique with high accuracy tamper localization", Signal Processing and Communications Applications, 2006 IEEE 14th, pp. 1-4.
6. Yeung, M. M., Mintzer, F., "Invisible watermarking for image verification", ICIP (2) '97, pp. 680-683.
7. Maeno, K., Qibin Sun, Shih-Fu Chang, Suto, M., "New semi-fragile image authentication watermarking techniques using random bias and nonuniform quantization", IEEE Transactions on Multimedia, Volume 8, Issue 1, pp. 32-45.
8. Dinu Coltuc and Philippe Bolon, "Color Image Watermarking in HSI Space", 0-7803-6297-7/00, 2000 IEEE.
9. Latha Parameswaran, K. Anbumani, "Content-Based Watermarking for Image Authentication using Independent Component Analysis", Informatica 32 (2008), pp. 299-306.
10. Aapo Hyvärinen, "Survey on Independent Component Analysis", Neural Computing Surveys, Vol. 2, pp. 94-128, 1999.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 5 (September2012) PP: 33-40
A Novel Approach for Secured Symmetric Key Distribution in
Dynamic Multicast Networks
J. Rajani 1, D. Rajani 2
1 Department of CSE, JNTUH, Hyderabad
2 Department of IT, JNTUH, Hyderabad
Abstract––Multicasting is an efficient means of distributing data in terms of resource usage. All the designated receivers
or members in a multicast group share a session key. Session keys must change dynamically to ensure both forward
secrecy and backward secrecy of multicast sessions. The communication and storage complexity of the multicast key
distribution problem has been studied extensively. We implement a new multicast key distribution scheme whose
computation complexity is significantly reduced. Instead of using conventional encryption algorithms, the scheme
employs MDS codes, a class of error control codes, to distribute multicast keys dynamically. This scheme considerably
reduces the computation load on each group member compared with existing schemes employing traditional encryption
algorithms. It can easily be combined with any key-tree-based scheme and provides much lower computation complexity while
maintaining low and balanced communication complexity and storage complexity for secure dynamic multicast key
distribution.
Keywords––distribution, multicast, MDS codes, computation complexity, erasure decoding, GC.
I. INTRODUCTION
Key management is one of the security services required by many group-oriented and distributed applications. In
such applications, data can be communicated under a secure group key supported by a key distribution technique. Multicast
is an essential mechanism for achieving scalable information distribution for group-oriented applications. Multicast refers to
communication in which information is sent from one or more parties to a set of other parties efficiently in terms of resource
usage (such as network bandwidth, server computation, and I/O load). In this case, the information is distributed from one or
more senders to a set of receivers, but not to all users of the network. The advantage of multicast is that it enables applications
to serve many users without overloading the network and the server's resources. Security must be provided when data is
transmitted through an insecure network. Unicast security offers several schemes that address these issues, but they cannot be
extended directly to the multicast environment. Because transmission takes place over multiple network channels, multicasting
is more vulnerable than unicasting. In many applications, the multicast group membership changes dynamically, i.e. some new
members are authorized to join a new multicast session while some old members should be excluded. In order to ensure both
forward secrecy and backward secrecy, session keys are changed dynamically: forward secrecy is maintained if an old
member who has been excluded from the current session cannot access the communication of the current session, and
backward secrecy is guaranteed if a new member of the current session cannot recover the communication of past sessions.
This requires that each session have a new key known only to the current session members, i.e., session keys need to be
dynamically distributed to authorized session members. Group key management is the major issue in multicast security,
and it is the fundamental technology for securing group communication by generating and updating secret keys [4]. Access
control and data confidentiality can be facilitated by key management, which ensures that the keys used to encrypt group
communication are shared only among the legitimate group members and that only those members can access the group
communication [5]. The shared group key can be used for authentication and also for encrypting messages from a
legitimate group member. To address these problems in a secure multicast communication environment,
the following two security criteria are used. Forward secrecy is maintained if an old member who has been evicted is
not able to access the messages of the current and future sessions. Backward secrecy is guaranteed if a new member
of the current session cannot recover the communication data of past sessions. The process of changing the session key and
communicating it to only the legitimate group members is called rekeying. Group key management schemes are
of three types. Centralized key management: group members trust a centralized server, referred to as the key distribution
center (KDC), which generates and distributes encryption keys. Decentralized schemes: the task of the KDC is divided among
subgroup managers. Contributory key management schemes: group members are trusted equally and all participate in key
establishment. In this paper, we study how a multicast group key can be distributed in a computation-efficient manner. A
centralized key management model is used, in which session keys are issued and distributed by a central group controller (GC),
since this has much lower communication complexity than distributed key exchange protocols [4]. The group
controller uses communication, computation, and storage resources to distribute the session key to the group members.
The main problem is how these resources should be used to distribute the session key; this is referred to as the group key
distribution problem. Two approaches are generally used for distributing the session key to a group of n
members. The first approach is that the group controller GC shares an individual key with each group member; this key is
used to encrypt a new group session key. The second approach is that the group controller shares an individual key with each
subset of the group, which can then be used to multicast a session key to a designated subset of group members. This
approach has lower communication, computation, and storage complexity than the first approach.
A multicast group with a large number of members typically uses the key-tree-based approach, which
decomposes a large group into multiple layers of subgroups with smaller sizes. With this approach the communication
complexity is much reduced, but the storage and computation complexity increase. The main aim of this paper is to
reduce the rekeying cost. A novel computation-efficient rekeying approach for multicast key distribution is
introduced [4][5][14], which reduces the rekeying cost by employing a hybrid group key management scheme. It also
maintains the same security level without increasing the communication and storage complexity. In this scheme, session
keys are encoded using error control codes; since encoding and decoding with error control codes is computationally cheap,
the computational complexity of key distribution can be significantly reduced.
1.1 Proposed System
In time-sensitive applications, membership changes occur very frequently. In such an environment the group
controller communicates the new session keys only to the existing members of the group. Efficient key distribution is
an important problem for secure group communications, and the communication and storage complexity of the multicast key
distribution problem has been studied extensively. The scheme employs MDS (Maximum Distance Separable) codes [3], a
class of error control codes, to distribute multicast keys dynamically. This scheme drastically reduces the computation load of
each group member compared to existing schemes employing traditional encryption algorithms. Such a scheme is desirable
for many wireless applications in which portable devices or sensors need to reduce their computation as much as possible due
to battery power limitations. It can easily be combined with any key-tree-based scheme and also provides much lower
computation complexity while maintaining low and balanced communication complexity and storage complexity for secure
dynamic multicast key distribution [10].
Advantages of the Proposed System:
The group controller's responsibilities, such as the rekeying process and the scalability of the group process, are shared with
the group control intermediate, using an identity-tree-based structure:
1. Group members are not affected by the key generation process when they wish to communicate with other group members.
2. A centralized key server is used for the key generation process, and the KGC also acts as a router for group-to-group
communication.
3. The rekeying process is applied only to the particular group members concerned, not to the entire group.
II. BASIC SCHEME (KDC WITH MDS CODES)
MDS codes
Block codes that achieve equality in the Singleton bound are called MDS (maximum distance separable) codes.
Examples of such codes include codes that have only one codeword (minimum distance n) and codes that use the whole of
(F_q)^n (minimum distance 1). MDS codes are a class of error control codes that meet the Singleton bound [13, Chapter 11].
Let GF(q) be a finite field with q elements [13]. An (n, k) (block) error control code is then a mapping from GF(q)^k to
GF(q)^n, E(m) = c, where m = m1 m2 ... mk is the original message block, c = c1 c2 ... cn is its codeword block, and E(·) is an
encoding function, with k ≤ n. If a decoding function D(·) exists such that D(c_i1 c_i2 ... c_ik; i1, i2, ..., ik) = m for
1 ≤ ij ≤ n and 1 ≤ j ≤ k, then this code is called an (n, k) MDS code [3]. For an (n, k) MDS code, the k original message
symbols can be recovered from any k symbols of its codeword block; the process of recovering the k message symbols is
called erasure decoding. All symbols are defined over GF(q), and usually q = 2^m. The well-known Reed-Solomon (RS)
codes [4] are a class of widely used MDS codes. Notably, the RS codes and other MDS codes [8] can be used to construct
secret-sharing and threshold schemes [5][7]. (A small numerical sketch of erasure decoding is given below.)
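As a small numerical illustration of erasure decoding (a simplification of our own: it works over the prime field GF(257) rather than GF(2^m), and it uses Lagrange interpolation as the decoder), the following Python sketch builds an (n, k) Reed-Solomon style MDS code and recovers the k message symbols from any k codeword symbols:

P = 257  # prime field GF(257); the paper works over GF(2^m), this is a simplification

def encode(message, n):
    """Evaluate the degree-(k-1) polynomial with coefficients `message` at
    x = 1..n; any k of the n codeword symbols determine the message."""
    return [(x, sum(m * pow(x, i, P) for i, m in enumerate(message)) % P)
            for x in range(1, n + 1)]

def poly_mul(a, b):
    """Multiply two polynomials (coefficient lists) modulo P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def decode(shares, k):
    """Recover the k message symbols from any k (x, y) codeword symbols by
    Lagrange interpolation modulo P (erasure decoding)."""
    xs, ys = zip(*shares[:k])
    coeffs = [0] * k
    for j in range(k):
        basis = [1]          # basis polynomial L_j(x) = prod_{m != j} (x - x_m) / (x_j - x_m)
        denom = 1
        for m in range(k):
            if m == j:
                continue
            basis = poly_mul(basis, [-xs[m] % P, 1])
            denom = denom * (xs[j] - xs[m]) % P
        inv = pow(denom, -1, P)
        for i, b in enumerate(basis):
            coeffs[i] = (coeffs[i] + ys[j] * b * inv) % P
    return coeffs

msg = [12, 200, 7]                                   # k = 3 message symbols
code = encode(msg, n=6)                              # 6 codeword symbols
print(decode([code[1], code[4], code[5]], k=3))      # any 3 symbols recover [12, 200, 7]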
2.1 MDS Code Algorithm
The scheme consists of three phases: (1) the initialization of the GC, (2) the join of a new member, and (3) the rekeying
procedure whenever a group member leaves.
Step 1: Initialization of the GC. The GC constructs a codeword C using an MDS code; C is over a finite field GF(q).
Step 2: A one-way hash function H(·) is chosen [12]; the domain of H(·) is GF(q).
Step 3: The property of H(·) is that, given H(x) = y, it is computationally infeasible to derive x from y.
Step 4: Join of a new member. For a member i, the GC sends (j_i, s_i) as a unicast message, where
Step 5: j_i is a positive integer with j_i ≠ j_k for every other member k, and
Step 6: s_i is a random seed element.
Step 7: Rekeying whenever a group member leaves. The GC selects r from the field such that r has not already been used
to generate a group key. For every member j, the GC constructs an element c_j in GF(q):
Step 8: c_j = H(s_j + r).
Step 9: Each of the n members in the group calculates its own codeword element, giving c_1, c_2, ..., c_n.
(A small sketch of this hash-based codeword construction is given below.)
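A minimal sketch of Steps 7-9 follows; it is an illustration under our own assumptions (MD5 as the exemplary one-way hash, as in Section IV, and 64-bit seeds), not the complete rekeying protocol:

import hashlib
import secrets

def H(x: int) -> int:
    """One-way hash mapping an integer into a 128-bit value, via MD5 here."""
    return int.from_bytes(hashlib.md5(x.to_bytes(16, 'big')).digest(), 'big')

# seeds s_j previously unicast to members 1..4 during their join phase
seeds = {j: secrets.randbits(64) for j in range(1, 5)}

r = secrets.randbits(64)                            # fresh r, never reused by the GC
codeword = {j: H(s_j + r) for j, s_j in seeds.items()}   # c_j = H(s_j + r)
# The GC would now treat the new group key as a message symbol, complete the MDS
# codeword around these c_j values, and multicast only the remaining symbols so that
# each member can erasure-decode the new key using its own c_j.
print(codeword[1])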
III. KEY GENERATION AND DISTRIBUTION FRAMEWORK
3.1 Key Generation
• Private Key
The private key is generated using the MDS code. The GC sends its number of group members to the KGC (Key Generation
Center). The keys are generated by the KGC and submitted to the GC.
• Session Key
In session key generation, sixteen decimal digits are first generated using a random number generation method. Each decimal
digit is then split and compared with a predetermined binary format. In the DES algorithm, the 64-bit session key is treated as
the message file and the generated user's private key is treated as the key file. The DES algorithm encrypts the session key
using the user's private key, and the result is transmitted to the appropriate users (see the sketch below).
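A small illustrative sketch of this step (not the authors' implementation; it assumes the PyCryptodome package and keeps DES only because the paper specifies it) is:

import secrets
from Crypto.Cipher import DES

digits = ''.join(str(secrets.randbelow(10)) for _ in range(16))   # 16 random decimal digits
session_key = int(digits).to_bytes(8, 'big')                      # packed into a 64-bit block
private_key = secrets.token_bytes(8)                              # member's 64-bit DES key

cipher = DES.new(private_key, DES.MODE_ECB)
encrypted_session_key = cipher.encrypt(session_key)               # transmitted to the member
recovered = DES.new(private_key, DES.MODE_ECB).decrypt(encrypted_session_key)
assert recovered == session_key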
• Join operation
Fig 1: Join operation (the user sends a join request; the GC finds the join position in the tree structure; the keys to be
changed are generated; members update the appropriate keys; and the changed keys are distributed).
• Join Request
A network node issues a request to the GC to join the group. The GC checks whether the request is from an authenticated
member; if so, the GC accepts the request. The node then communicates its session key through a secure channel, as shown
in Figure 1.
• Find join position
The GC (group controller) maintains a tree structure, which is the logical arrangement of members. The GC traverses the
tree structure, finds a position for the new member, and inserts the member details at this new position, which is a leaf node.
• Generate keys
From the new position up to the root, the GC generates the new key(s) along the path. The new keys replace the old keys of
the auxiliary nodes. Update tree structure: old keys are replaced by their corresponding new keys, and henceforth the newly
generated keys are used for future communication. This operation provides backward secrecy, i.e. it prevents the newly joined
member from accessing previously communicated data.
• Distribute keys
A packet is constructed, consisting of the newly generated key(s). This packet is encrypted using the old key known by a
member or sub-group of members.
• User-oriented re-keying
In user-oriented rekeying, the GC constructs each rekeying message so that it contains exactly the messages that some user
or group of users needs. The rekey message contains the encrypted form of the session key.
• Key-oriented re-keying
The key-oriented strategy emphasizes that each new key should be packed into a separate message and distributed to its holders.
• Leave operation
Figure 2: Leave operation (the leave request is processed; the member is removed; the keys to be changed are generated;
members update the appropriate keys; and the changed keys are distributed, with the KGC changing the key and either
multicasting the new key to the group or unicasting the message to a specified user).
• Leave Request
The member issues a request to leave the group, as shown in Figure 2.
• Process Request
The GC checks whether the request is from an existing member; if so, the GC accepts the request, as shown in Figure 2.
• Find leave position
The GC traverses the tree structure and finds the position of the leaving member. The GC then deletes the member details
and removes the node from the tree structure.
• Generate keys
From the leaving position up to the root, the GC generates the new key(s) along the path. Old keys are replaced by their
corresponding new keys, and henceforth the newly generated keys are used for future communication. This operation provides
forward secrecy, i.e. it prevents the departed member from accessing data sent in future communication. (A minimal sketch of
this path-to-root rekeying on a logical key tree is given at the end of this subsection.)
• Distribute keys
A packet is constructed, consisting of the newly generated key(s). This packet is encrypted using the old key known by a
member or sub-group of members. These new keys allow the members to decrypt the messages sent in future communication.
• Member updates keys
After receiving the message, the member updates the appropriate set of keys.
• User-oriented re-keying
In user-oriented rekeying, the group controller constructs each rekeying message [14] so that it contains exactly the messages
that some user or group of users needs. The rekey message contains the encrypted form of the session key.
• Key-oriented re-keying
The key-oriented strategy emphasizes that each new key should be packed into a separate message and distributed to its holders.
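A minimal sketch of the path-to-root rekeying on a logical key tree, under assumptions of our own (a complete binary tree with integer node ids and 128-bit random keys), is:

import secrets

def rekey_after_leave(tree, leaf):
    """Remove the leaving leaf and refresh every key on the path to the root.

    `tree` maps node id -> key; node 1 is the root and node i has parent i // 2.
    """
    tree.pop(leaf, None)                       # the departed member's node is removed
    node = leaf // 2
    while node >= 1:
        tree[node] = secrets.token_bytes(16)   # fresh 128-bit key (forward secrecy)
        node //= 2
    return tree

# Nodes 1..7 form a complete binary tree; leaves 4..7 are the members.
tree = {i: secrets.token_bytes(16) for i in range(1, 8)}
rekey_after_leave(tree, leaf=6)   # member at leaf 6 leaves; keys 3 and 1 are refreshed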
3.2 Message Transmission
Multicasting is a process of sending a message to a selected group. Internet applications, such as online games,
newscast, stock quotes, multiparty conferences, and military communications can benefit from secure multicast
communications [10]. In most of these applications, users typically receive identical information from a single or multiple
senders. Hence, grouping these users into a single multicast group and providing a common session encryption key to all of
them will reduce the number of message units to be encrypted by the senders. The various types of data communication are
broadcast, multicast, and group communication.
Figure 3: Transmission of the message M through 4 point-to-point connections.
Figure 3 above shows the transmission of message M over four point-to-point connections. Node 1 is the service provider,
and nodes 2, 3, 4, and 5 are the receiving nodes, all of which receive the same message.
• Group communication
For group communications, the server distributes to each member a group key to be shared by all members [10]. For the GC,
distributing the group key securely to all members requires messages encrypted with individual keys, as shown in Figure 4.
Each such message may be sent separately via unicast. Alternatively, the messages may be sent as a combined message to all
group members via multicast. Either way, there is a communication cost proportional to group size (measured in terms of the
number of messages or the size of the combined message). Observe that for a point-to-point session, the costs of session
establishment and key distribution are incurred just once, at the beginning of the session. On the other hand, group session
may persist for a relatively long time with members joining and leaving the session. Consequently, the group key should be
changed frequently. To achieve a high level of security, the group key should be changed after every join and leave so that a
former group member has no access to current communications and a new member has no access to previous
communications.
Figure 4: Group communication (the message is encrypted, transferred, decrypted, and then viewed).
3.3 Cryptography
Cryptography is the process of protecting information by transforming it into an unreadable format called cipher
text. Only those who possess a secret key can decrypt the message into text. Encryption is the process of conversion of
original data (called plain text) into unintelligible form by means of reversible translation i.e. based on translation table or
algorithm, which is also called enciphering. Decryption is the process of translation of encrypted text (called cipher text) into
original data (called plain text), which is also called deciphering. Cryptography plays a major role in the security aspects of
multicasting. For example, consider stock data distribution group, which distributes stock information to a set of users
around the world. It is obvious that only those who have subscribed to the service should get the stock data information. But
the set of users is not static. New customers joining the group should receive information immediately but should not receive
the information that was released prior to their joining. Similarly, if customers leave the group, they should not receive any
further information.
3.4 Authentication
The login module is used by newly joined users to send a request to the group controller, and to retrieve their private keys
after the group controller assigns keys to the new users, as shown in Figure 5. The user logs in to the group by entering the
user ID and private key. If the user ID and private key are correct, the user can view the inbox and outbox messages;
otherwise a message box displaying "Enter the correct password" is shown.
Figure 5: Authentication (the user logs in; the username and secret code are checked; on success the inbox and outbox are
shown, otherwise the user is prompted to re-enter the secret code).
• System Flow Diagram
Figure 6: System flow diagram.
IV. COMPARISON WITH TRADITIONAL CRYPTOGRAPHIC SCHEMES
To evaluate the proposed scheme, a multicast key distribution scheme is implemented to disseminate 128-bit
session keys over a 3-ary balanced key tree, and the proposed scheme is compared with traditional cryptographic schemes.
As the communication and storage complexity are the same among all the schemes, it suffices to compare the
computation complexity. The comparison considers the following scenario, where each three-member group has one
member that departs. These departures are not constrained to happen at the same time, but in practice they might tend to
be close, for example at the end of one movie broadcast. This makes a batch process possible, which means that all
remaining members could be rekeyed at once. Before reporting the experimental results, it is worth pointing out that any
one-way hash function [12] used in the proposed scheme can be simplified relative to general-purpose hash function
implementations. For instance, we use the MD5 algorithm [9] as an exemplary hash function in our evaluation, which
produces a 128-bit hash output from an arbitrary-length input. A general MD5 input consists of three components: 1) the
input data, 2) padding bits, and 3) a final 64-bit length field. In our case, as the input is always 128 bits, we can preset the
final length field to represent 128; moreover, all the remaining padding bits can be set to 0 and removed from the MD5
logic, which makes the MD5 computation more efficient. Obviously, the same method can be readily applied to other hash
algorithms, for example SHA-1 and SHA-256 [9], if the MD5 algorithm is considered insufficient or a longer session key
becomes necessary. (A small illustration of hashing a fixed 128-bit input follows below.) Experiments are conducted to
compare the computation complexity of the proposed scheme with conventional cryptographic schemes. The particular
cryptographic algorithms compared in the experiments are CAST-128 [14], IDEA [8], AES [1], and RC4 [9]; all keys are set
to 128 bits. To make the comparison as fair as possible, we use a widely adopted and highly optimized software-based
cryptography implementation, i.e., the Crypto++ [6] package. We use the same optimization flags to compile both the
cryptography package and our own optimized RS code implementation. We disable hardware acceleration (SSE/SSE2) and
do not compare under that scenario, simply because it is not yet ubiquitous over all platforms (for example, it is not
available on mobile devices).
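As a small illustration (our own, not the authors' code) of using MD5 as the one-way hash on a fixed 128-bit input, as described above:

import hashlib
import secrets

x = secrets.token_bytes(16)            # a 128-bit input (e.g. a key-tree node key)
digest = hashlib.md5(x).digest()       # 128-bit one-way hash output
print(digest.hex())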
Table 1: Computational time (ms) comparison with RC4 and other conventional ciphers (multicast group size of 59,049)

TIME (MS)   MDS   AES   IDEA   CAST-128   RC4
GC          23    81    71     99         227
MEMBER      5     23    82     23         61
The computation time of key distribution is compared with conventional ciphers, as shown in Table 1, for
a selected multicast group size. Notice that the computation times of both the GC and the member using the RC4 cipher are
significantly larger than with the other schemes. Even though RC4 itself is a fast stream cipher, its key scheduling process has
a dominant effect in this particular scenario, where only 128 bits of data are encrypted/decrypted under any given key. Results
for other multicast group sizes are similar and are thus not duplicated here. Finally, it is worth noting that our basic
scheme simply reduces computation complexity by replacing cryptographic encryption and decryption operations with more
efficient encoding and decoding operations. It is orthogonal to other schemes that use different rekeying protocols and
procedures, and it can always be combined with any rekeying scheme that uses cryptographic encryption and
decryption operations. For example, the basic scheme can be readily adapted to incorporate the so-called one-way function tree
scheme [6], where a rekeying protocol on a key tree different from the traditional scheme, as described in
Section 4, is used to further reduce the computation complexity. We leave this simple exercise to interested readers.
V. IMPLEMENTATION AND RESULTS
A member can register under a particular group controller by selecting one of the groups, as shown in Figure 5.1.
When the member selects the group, a new session key is created for that group. This session key is used to send a request to
the group controller, which maintains all the details regarding all the members of that particular group in a database, as shown
in Figure 5.2. Whenever a request reaches the group controller, the session key is compared with the existing session key
available; if the session key matches, the member joins the group and the active rekeying process starts, as shown in
Figure 5.3.
Figure 5.1 Group Login Window
Figure 5.2 Send Request Window
Fig 5.3 Group Controller member join Window
Whenever a member leaves the group, the existing database is updated automatically and a new session key is generated so
that a new member can join the group. The entire database is maintained by the group controller. The GC is responsible for
generating the session key every time a group member joins or leaves the group, as shown in Figures 5.4 and 5.5.
Figure 5.4 Group Controller member Leave Window
Figure 5.5 Keys Generated Window
VI. CONCLUSION
We have presented a dynamic multicast key distribution scheme using MDS codes. The computation complexity
of key distribution is greatly reduced by employing only erasure decoding of MDS codes instead of more expensive
encryption and decryption computations. Easily combined with key trees or other rekeying protocols that need encryption
and decryption operations, this scheme provides much lower computation complexity while maintaining low and balanced
communication complexity and storage complexity for dynamic group key distribution. The scheme is thus practical for
many applications in various broadcast-capable networks such as the Internet and wireless networks.
REFERENCES
1. AES Algorithm (Rijndael) Information, http://csrc.nist.gov/Cryptotoolkit/aes/rijindael/, 2007.
2. M. Abdalla, Y. Shavitt, and A. Wool, "Towards Making Broadcast Encryption Practical," IEEE/ACM Trans. Networking, vol. 8, no. 4, pp. 443-454, Aug. 2000.
3. M. Blaum, J. Bruck, and A. Vardy, "MDS Array Codes with Independent Parity Symbols," IEEE Trans. Information Theory, vol. 42, no. 2, pp. 529-542, Mar. 1996.
4. R. Blom, "An Optimal Class of Symmetric Key Generation Systems," Advances in Cryptology - Proc. Workshop Theory and Application of Cryptographic Techniques (EUROCRYPT '84).
5. J. Snoeyink, S. Suri, and G. Varghese, "A Lower Bound for Multicast Key Distribution," Proc. IEEE INFOCOM '01, Apr. 2001.
6. W. Dai, Crypto++ Library, http://www.eskimo.com/~weidai/cryptlib.html, 2007.
7. J. Blomer, M. Kalfane, M. Karpinski, R. Karp, M. Luby, and D. Zuckerman, "An XOR-Based Erasure-Resilient Coding Scheme," Technical Report TR-95-048, Int'l Computer Science Inst., Aug. 1995.
8. vol. 45, no. 1, pp. 272-276, Jan. 1999.
9. B. Schneier, Applied Cryptography, second ed., John Wiley & Sons, 1996.
10. Jack Snoeyink, Subhash Suri, and George Varghese, "A Lower Bound for Multicast Key Distribution," IEEE paper, 2001.
11. Xiaozhou Steve Li, Yang Richard Yang, Mohamed G. Gouda, and Simon S. Lam, "Batch Rekeying for Secure Group Communications," IEEE paper, 2009.
12. A. T. Sherman and D. A. McGrew, "Key Establishment in Large Dynamic Groups Using One-Way Function Trees," IEEE Trans. Software Eng., vol. 20, no. 5, pp. 444-458, May 2003.
13. F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, North-Holland Math. Library, 1977.
14. Sanjeev Setia, Samir Koussih, and Sushil Jajodia, "A Scalable Group Re-Keying Approach for Secure Multicast," IEEE paper, 2000.
Authors
J. Rajani received her M.Tech in Computer Science from JNTUH University, Hyderabad, A.P., India. She is currently
working as an Associate Professor in the CSE department of the Institute of Aeronautical Engineering (IARE), R.R. Dist.,
A.P., India. She has 6 years of experience. Her research interests include computer networks, operating systems, information
security, Java, and J2EE.
D. Rajani received her Bachelor's degree in Computer Science and Engineering from JNTUH and is pursuing an M.Tech in
Information Technology at the Institute of Aeronautical Engineering. She is a research scholar in the field of information
security.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 1(August 2012) PP: 47-48
ZDDP- An Inevitable Lubricant Additive for Engine Oils
Intezar Mahdi 1, Rajnish Garg 2, Anupam Srivastav 3
1,3 Department of Mechanical Engineering, SET, IFTM University, Moradabad, India
2 University of Petroleum & Energy Studies, Dehradun, India
Abstract––Zinc dialkyldithiophosphate (ZDDP) has been widely used as an additive in lubricating oil, especially
engine oil, because of its excellent antiwear and antioxidation properties. ZDDP's properties are partly attributable to the
P, S, and Zn elements contained in its molecular structure. The phosphorus can poison the catalysts in the exhaust gas
converter of the automobile, causing them finally to lose their efficacy. Moreover, the zinc salts produced by ZDDP under
tribological conditions may cause electrolytic corrosion. Automakers have asked the oil industry to eliminate ZDDP, or at
least reduce it to 0.05 per cent, but to date the oil industry remains uncomfortable with reducing the level below 0.08 per cent.
No new antiwear additives that could eliminate or reduce ZDDP have been found so far. This review paper supports the view
that there is no alternative to ZDDP.
Keywords––Additive, Antiwear, ZDDP, Load Carrying Capacity.
I. INTRODUCTION
Zinc Dialkyldithiophosphates, or ZDTPs, are oil soluble chemicals which are used as additives in lubricating oils
for internal combustion engines and transmissions. ZDTPs are a key component of modern engine oils, and while they
represent only a small fraction of the total engine oil, they play a vital role providing wear protection of key metal-metal
contact points in engines and transmissions. This results in extended engine and transmission life. Because they contain
sulfur, ZDTPs also provide oxidation protection which extends the life of the engine oil. With this dual protection role,
ZDTPs are sometimes referred to as “multifunctional” additives.
Figure-1 below shows a representative chemical structure of a ZDTP. ZDTPs are manufactured by reacting various
types of alcohols with phosphorus pentasulfide, then neutralizing the resultant intermediate with zinc oxide. The phosphorus
– sulfur – zinc linkage is key to understanding how a ZDTP protects engines from wear [1].
Figure-1: Chemical Structure of a ZDDP
ZDTPs appear as gold colored liquids with a thickness or viscosity similar to heavy syrup. They are not soluble in
water, and because they are denser than water, they will sink in a water environment. They are readily soluble in oil and
lighter weight hydrocarbons such as gasoline. ZDTPs have very low vapor pressure and little noticeable odor at ambient
temperatures. When heated however, they have a sulfurous odor, similar to that of burning rubber. Since ZDTPs are present
in a small amount in lubricating oils, this odor is not noticeable during normal operation.
II. RESEARCH REVIEW
The synergistic effect of a benzothiazole derivative containing dialkyldithiocarbamate (BSD) with ZDDP has been studied.
Four oil samples were prepared; for each sample, the total concentration of BSD and ZDDP in the base oil was fixed at
0.5 per cent, and the ratio of BSD to ZDDP was varied. The antiwear properties of the four samples were evaluated using a
four-ball machine at a load of 490 N (50 kg) and a sliding time of 15 min [2].
Table 1: Performance comparison between ZDDP and BSD [2]

Additive   Concentration (per cent)   Pb value (kg)   WSD at 30 kg (mm)   WSD at 50 kg (mm)
Base oil   -                          40              0.58                Seizure
ZDDP       0.5                        78              0.38                0.49
BSD        0.5                        70              0.46                0.89
There is a synergistic effect between ZDDP and BSD. BSD may be used to partially replace ZDDP without sacrificing
the lubricating performance.
A novel borate ester derivative containing dialkylthiophosphate groups (coded BDDP for simplicity) was synthesized and
characterized. Compared with ZDDP, BDDP as an additive in a synthetic ester (Esterex A51) shows better load-carrying and
friction-reducing properties. Meanwhile, BDDP can evidently improve the antiwear performance of the base oil.
However, its antiwear property is slightly worse than that of ZDDP. The decomposed borate ester adsorbs on the worn
surface, and the S and P elements in BDDP react with the metal to generate FePO4 and Fex(SO4)y, both of which contribute
to the formation of a boundary lubricating film. BDDP exhibits better thermal stability than ZDDP and is highly effective in
controlling the oxidation of Esterex A51. However, the antioxidative property of ZDDP is better than that of BDDP [3].
Vegetable oils can contribute toward the goal of energy independence and security because they are a naturally renewable
resource. They are promising candidates as base fluids for ecofriendly lubricants because of their excellent lubricity,
biodegradability, good viscosity-temperature characteristics, and low evaporation loss. Their use, however, is restricted by
low thermo-oxidative stability and poor cold-flow behavior. One study presents a systematic approach to improving their
oxidation behavior by searching for a suitable additive combination: antioxidant/antiwear additive synergism was investigated
for a set of four antioxidants and three antiwear additives in vegetable oils using pressure differential scanning calorimetry
(PDSC) and a rotary bomb oxidation test (RBOT). The results indicate that a dialkyldithiocarbamate antioxidant performed
better than diphenylamine and hindered phenol, and the zinc dialkyldithiocarbamate antioxidant showed excellent synergism
with the antiwear additive antimony dithiocarbamate [4].
In another study, it was shown that ZDDP has very good antiwear ability. The addition of 0.25% (v/v) ZDDP increases the
load carrying capacity of the base lubricant by thirteen times, and the presence of the same amount of ZDDP increases the
load carrying capacity of a contaminated lubricant by five times. ZDDP also helps in reducing three-body wear, particularly
when abrasive contaminants are present in the lubricant [5]. The results are summarized in Table 2.
Table 2: Tabulation of Results

Sample                          Timken OK load (pound)   Wt. of bullet before test (gm)   Wt. of bullet after test (gm)   Wear weight loss (gm)   Percentage wear weight loss (%)   Scar diameter, major (mm)   Scar diameter, minor (mm)
(A) Raw base oil                40                       29.3549                          29.3035                         0.0514                  0.1750                            10.00                       6.076
(B) Base oil + 0.25% ZDDP       520                      29.3300                          29.3100                         0.0200                  0.0681                            7.00                        4.076
(C) 0.1% sand particles + (B)   200                      29.3408                          29.3054                         0.0354                  0.1206                            8.028                       5.0008

III. CONCLUSION
The studies show that zinc dialkyldithiophosphate (ZDDP) is a very good antiwear agent, the best known to date. Even in the
presence of contaminants it improves the load carrying capacity to a remarkable degree.
REFERENCES
1. Product Stewardship Summary, Zinc Dialkyldithiophosphates (ZDTPs).
2. Yong Wan, "Synergistic lubricating effects of ZDDP with a heterocyclic compound", Industrial Lubrication and Tribology, Vol. 60, No. 6, 2008, pp. 317-320.
3. Yonggang Wang, Jiusheng Li, Tianhui Ren, "Tribological study of a novel borate ester containing dialkylthiophosphate group as multifunctional additive", Industrial Lubrication and Tribology, Vol. 61, No. 1, 2009, pp. 33-39.
4. Brajendra K. Sharma, Joseph M. Perez and Sevim Z. Erhan, "Soybean Oil-Based Lubricants: A Search for Synergistic Antioxidants", Energy & Fuels, 2007, 21, 2408-2414.
5. Intezar Mahdi, Anupam Srivastav, R. Garg, Hemant Mishra, "Tribological Behaviour of ZDDP Anti Wear Additive with and without Contaminants", International Journal of Mechanical Engineering and Technology (IJMET), Volume 3, Issue 2, May-August 2012, pp. 354-359.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 2 (September 2012) PP: 47-55
Optimal Synthesis of Crank Rocker Mechanism for Point to Point
Path Generation
Subhash N. Waghmare 1, Roshan B. Mendhule 2, M. K. Sonpimple 3
1,2,3 Department of Mechanical Engineering, Priyadarshini College of Engineering, Nagpur-19, India
Abstract––The present work introduces the concept of the orientation structure error of the fixed link and presents a
new optimal synthesis method for crank-rocker linkages for path generation. The orientation structure error of the fixed link
effectively reflects the overall difference between the desired and generated paths; it avoids making point-by-point
comparisons between the two paths and requires no prescription of timing. In the kinematic synthesis of four-bar path
generating linkages, it is required to determine the linkage dimensions so that a point on the coupler link traces out the
desired curve. There are two types of task for path generation; one of them specifies only a small number of points on the
path, and the trajectory between any two specified points is not prescribed. The concept of the orientation structure error of
the fixed link is introduced, and a simple and effective optimal synthesis method for crank-rocker path generating linkages
is presented.
Keywords––Four Bar Linkage, Fixed Link, Path Generation, Rocker, Link
I. INTRODUCTION
1.1 Specification of Problem: The crank link length is the most fundamental parameter, controlling all the major
design as well as performance parameters. The crank-rocker mechanism must be designed for minimum crank link length
while considering the design variables Rmax and Rmin. Minimization of the crank link length is the objective of the
optimization using a genetic algorithm.
In order to carry out the optimum design of a crank-rocker mechanism, it is necessary to understand the
dynamics of the problem and the interrelation of the parameters involved in the synthesis of the mechanism. These parameters
can be grouped as shown in Fig. 1.1 for analysis and optimum design.
1.2 Design Methodology: The most important constraint on the problem is the minimization of the input angle,
considering the design variables. Due to the limitations of the design variable specification, we can calculate the minimum
value of the crank link length together with the optimum value of the input angle. In this work we specify maximum and
minimum values of the variables for the minimization of the objective function.
1.3 Overview of Non-Conventional Techniques: The present work deals with the basic objective of minimizing the crank
link length. The use of a non-traditional algorithm, with its efficient search and optimization process, results in an optimal
crank-rocker mechanism design. In this work, an attempt is made to use a genetic algorithm for the optimal design
process so that we can arrive at the best possible solution. The present study involves the application of the GA technique.
The output results of this work show that the design process using a non-traditional algorithm is efficient and
effective.
1.4 Recommended Parameters in GA:
Crossover rate: the crossover rate should generally be high, about 80%-95% (however, some results show that for some
problems a crossover rate of about 60% is best).
Mutation rate: on the other hand, the mutation rate should be very low; the best rates seem to be about 0.5%-1%.
Population size: a very large population size usually does not improve the performance of a GA (in the sense of the speed of
finding a solution). A good population size is about 20-30, though sizes of 50-100 are sometimes reported as best. The best
population size depends on the size of the encoded string (chromosome): if chromosomes have 32 bits, the population should
be larger than for chromosomes with 16 bits.
Selection: basic roulette wheel selection can be used, but sometimes rank selection is better.
Encoding: the encoding depends on the problem and also on the size of the problem instance.
Crossover and mutation type: the operators depend on the chosen encoding and on the problem.
1.5 Genetic Algorithm Terminology
Fig. 1.1 Process of operation of GA
With the help of Fig. 1.1, the process of operation of a GA is explained below.
Encoding: the representation, a linkage between the solutions in the phenotype space and the chromosomes in the genotype
space (binary or real).
Selection: following Darwinian natural evolution, fitter individuals have a greater chance to reproduce offspring ("survival of
the fittest").
- Parent selection to create offspring.
- Generational selection from both parents and offspring.
Crossover: an operator applied to two chromosomes to generate two offspring; it combines the schemata from two different
solutions in various combinations.
Mutation: after the creation of new individuals via crossover, mutation is applied, usually with a low probability, to introduce
random changes into the population:
- replace gene values lost from the population or not initially present;
- evaluate more regions of the search space;
- avoid premature convergence;
- make the entire search space reachable.
1.6 Plan of action using the Genetic Algorithm
Step 1: Choose a coding to represent the problem parameters, a selection operator, a crossover operator, and a
mutation operator. Choose the population size n, crossover probability pc, and mutation probability pm. Initialize a
random population of strings of length l. Choose a maximum allowable generation number tmax. Set t = 0.
Step 2: Evaluate each string in the population.
Step 3: If t > tmax or another termination criterion is satisfied, terminate.
Step 4: Perform reproduction on the population.
Step 5: Perform crossover on random pairs of strings.
Step 6: Perform mutation on every string.
Step 7: Evaluate the strings in the new population. Set t = t + 1 and go to Step 3.
A minimal program skeleton corresponding to these steps is given in the sketch below.
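The sketch below is a minimal C++ skeleton of Steps 1-7, assuming roulette-wheel selection, single-point crossover and bit-flip mutation; the fitness function is a placeholder and all identifiers are ours, so it only illustrates the structure of the plan, not the program used for the results in Section V.

```cpp
#include <bitset>
#include <cstdlib>
#include <vector>

const int L = 24;                        // string length (two 12-bit variables)
using Chromosome = std::bitset<L>;

// Placeholder fitness: replace with the problem-specific evaluation of Step 2.
double fitness(const Chromosome& c) { return static_cast<double>(c.count()) + 1.0; }

double rnd() { return std::rand() / (RAND_MAX + 1.0); }

void run_ga(int n, double pc, double pm, int tmax) {
    std::vector<Chromosome> pop(n);
    for (auto& c : pop)                                  // Step 1: random initial population
        for (int b = 0; b < L; ++b) c[b] = rnd() < 0.5;

    for (int t = 0; t < tmax; ++t) {                     // Step 3: termination test
        std::vector<double> f(n);
        double total = 0.0;
        for (int i = 0; i < n; ++i) total += (f[i] = fitness(pop[i]));   // Steps 2 and 7

        std::vector<Chromosome> next;
        auto pick = [&]() {                              // Step 4: roulette-wheel reproduction
            double r = rnd() * total, acc = 0.0;
            for (int i = 0; i < n; ++i) { acc += f[i]; if (acc >= r) return pop[i]; }
            return pop[n - 1];
        };
        while ((int)next.size() < n) {
            Chromosome a = pick(), b = pick();
            if (rnd() < pc) {                            // Step 5: single-point crossover
                int cut = 1 + std::rand() % (L - 1);
                for (int k = cut; k < L; ++k) { bool tmp = a[k]; a[k] = b[k]; b[k] = tmp; }
            }
            for (int k = 0; k < L; ++k) {                // Step 6: bit-flip mutation
                if (rnd() < pm) a.flip(k);
                if (rnd() < pm) b.flip(k);
            }
            next.push_back(a);
            if ((int)next.size() < n) next.push_back(b);
        }
        pop.swap(next);                                  // new population; t = t + 1
    }
}

int main() { run_ga(30, 0.85, 0.01, 10); return 0; }
```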
1.7 Background of work:
A crank link is a mechanism member that connects to the rocker link, follower link and fixed link through the
interrelated angles of the linkage. The crank link length should be minimum for smooth rotation of the crank, and the
mechanism should follow the desired curve. In this work, for minimization of the structure error, the crank link of the
mechanism is first minimized by making some assumptions about the parameters.
It is shown that the crank link length of the mechanism is the most fundamental parameter, controlling all the
major design as well as performance parameters. The optimum design of the crank-rocker mechanism can be obtained,
satisfying the design constraints, by controlling the parameters of the mechanism. Hence it is concluded that the crank-rocker
mechanism should be designed for minimum crank link length while satisfying the design variables.
1.8 Development of Non-Traditional Search
This section describes non-traditional search and optimization methods which have become
popular in engineering optimization problems in the recent past. We give a brief introduction to the following techniques of
optimization:
Separable programming
Multi-objective optimization
Calculus of variations
Optimal control theory
Optimality criteria methods
Genetic algorithms
Simulated annealing
Neural - network - based methods
Optimization of fuzzy system
In some practical optimization problems, the objective and constraint functions are separable in the design
variables. Separable programming techniques are useful for solving such problems. If an optimization problem involves
the minimization of several objective functions simultaneously with a specified constraint set, multi-objective
optimization methods can be used for its solution.
If an optimization problem involves the minimization or maximization of a functional subject to constraints of
the same type, the decision variable will not be a number but a function. The calculus of variations can be used to
solve this type of optimization problem. An optimization problem that is closely related to the calculus-of-variations
problem is the optimal control problem. An optimal control problem involves two types of variables, the control and state
variables, which are related to each other by a set of differential equations. Optimal control theory can be used for solving
such problems. In some optimization problems, especially those related to structural design, the necessary conditions of
optimality for specialized design conditions are used to develop efficient iterative techniques to find the optimum solutions.
Such techniques are known as optimality criteria methods.
In recent years, some optimization methods that are conceptually different from the traditional mathematical
programming techniques have been developed. These methods are based on certain biological, molecular and neurological
phenomena. Methods known as genetic algorithms are based on the principles of natural genetics and natural selection.
Simulated annealing is based on the simulation of the thermal annealing of critically heated solids. Both genetic algorithms and
simulated annealing are stochastic methods that can find the global minimum with a high probability and are naturally
applicable to the solution of discrete optimization problems. In neural-network-based methods, the problem is modeled as
a network consisting of several neurons, and the network is trained suitably to solve the optimization problem efficiently. In
many practical systems, the objective functions, constraints and design data are known only in vague and linguistic
terms. Fuzzy optimization methods can be used for solving such problems.
II. MECHANICAL MODEL OF CRANK-ROCKER MECHANISM
Fig. 1.2 Mechanical model of the crank-rocker mechanism
As shown in Fig. 1.2, Rmax and Rmin are the longest and shortest distances from point A to the desired
curve, respectively. When point A lies outside the desired curve, L1 < L5; when point A lies inside the desired curve,
L1 > L5. From Eqs. (1) and (2), we can obtain the values of L1 and L5. The length of the coupler link L2, the rocker link L3, the
fixed link L4, and the coupler angle β are also taken as independent design variables, which are six in all and do not
include the orientation angle ¥ of the fixed link. When a set of design variables is selected, we make ABCD form a closed
loop with point D not fixed, and let point M move along the desired curve; the motion of dyad ABM is then determinate,
and so is that of the other dyad ADC. When point M completes a cycle along the desired curve, the variation of the orientation angle of
link AD is defined as the orientation structure error of the fixed link. If the candidate linkage can generate the desired curve,
the orientation structure error will be zero. The orientation structure error can effectively reflect the overall difference
between the desired and generated paths.
III. OPERATION OF CRANK-ROCKER MECHANISM
Fig. 1.3 Operation of the crank-rocker mechanism
1. Basically, the crank-rocker mechanism has four linkages: the crank (input) link, the rocker link, the follower link and the fixed link.
2. When the mechanism performs its dynamic action, the linkages start to move by following the input link of the mechanism at suitable angles of the linkages. In this mechanism, link 4 is fixed with some orientation angle.
3. When the input link starts to rotate with the input angle, link 2 performs a rocking motion suitable for the input link and transmits power to the follower link 3, which is connected to the fixed link.
4. There should not be any clearance in the linkage joints; if there is clearance in a joint, the coupler curve cannot form the desired curve.
5. The crank link now rotates at the input angle of link 1 and moves the other linkages.
IV. FORMULATION OF WORK
4.1 Mathematical model of the problem
There are three design variables for the optimal synthesis of the linkage. They are
X = [L2, L3, L4]^T
The objective function is to minimize the crank length (input link). The coordinates of point A must ensure that
point M can generate the desired curve when crank AB rotates uniformly.
For the synthesis of the mechanism, we assume values for the coordinates of point A (xa, ya), decide the
ranges of Rmax and Rmin as per the requirement, and then apply the following equations to solve for the values
of L1 and L5 (L1 = input link, L5 = dyad link):
L1 + L5 = Rmax    (1)
L5 - L1 = Rmin    (2)
Solving Eqs. (1) and (2), we get
L5 = (Rmax + Rmin)/2    (3)
and
L1 = Rmax - L5    (4)
Minimizing the structure error of the fixed link therefore means that we have to optimize for the minimization of the crank link length.
Once we obtain the optimal value of the crank link length, we can calculate the following parameters:
θ1 (input angle) = tan^-1[(ym - ya)/(xm - xa)] + cos^-1[(L1^2 + (xm - xa)^2 + (ym - ya)^2 - L5^2) / (2 L1 √((xm - xa)^2 + (ym - ya)^2))]    (5)
θ5 (dyad link angle) = tan^-1[(ym - ya - L1 sin θ1) / (xm - xa - L1 cos θ1)]    (6)
θ2 (rocker link angle) = θ5 - β    (7)
L2 (rocker link) = L5 / cos β    (8)
Xc (x-coordinate of point C) = xa + L1 cos θ1 + L2 cos θ2    (9)
Yc (y-coordinate of point C) = ya + L1 sin θ1 + L2 sin θ2    (10)
L3 (follower link) = √[(yc - yd)^2 + (xc - xd)^2]    (11)
L4 (fixed link) = √[(yd - ya)^2 + (xd - xa)^2]    (12)
¥ (orientation angle) = tan^-1[(yc1 - ya)/(xc1 - xa)] + cos^-1[(L4^2 + (xc1 - xa)^2 + (yc1 - ya)^2 - L3^2) / (2 L4 √((xc1 - xa)^2 + (yc1 - ya)^2))]    (13)
Structure error of the fixed link: Es = ¥max - ¥min    (14)
Average orientation angle: ¥avg = (¥max + ¥min)/2
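To make the chain of Eqs. (1)-(10) concrete, the C++ fragment below evaluates them for a single position of the tracing point M. It is an illustrative sketch only: the sample input values and all identifiers are ours and are not taken from the paper.

```cpp
#include <cmath>
#include <cstdio>

// Evaluate Eqs. (1)-(10) for one tracing-point position (illustrative sketch only).
int main() {
    // Assumed sample inputs; these numbers are NOT from the paper.
    double Rmax = 7.0, Rmin = 5.0;     // longest/shortest distance from point A to the curve
    double xa = 0.0, ya = 0.0;         // coordinates of fixed pivot A
    double xm = 4.0, ym = 3.5;         // current point M on the desired curve
    double beta = 0.3;                 // coupler angle (rad)

    double L5 = (Rmax + Rmin) / 2.0;   // Eq. (3)
    double L1 = Rmax - L5;             // Eq. (4): crank length
    double d  = std::hypot(xm - xa, ym - ya);

    // Eq. (5): input angle (atan2 is used for the arctangent to keep the correct quadrant)
    double th1 = std::atan2(ym - ya, xm - xa)
               + std::acos((L1 * L1 + d * d - L5 * L5) / (2.0 * L1 * d));
    // Eq. (6): dyad link angle
    double th5 = std::atan2(ym - ya - L1 * std::sin(th1),
                            xm - xa - L1 * std::cos(th1));
    double th2 = th5 - beta;                                   // Eq. (7)
    double L2  = L5 / std::cos(beta);                          // Eq. (8)
    double xc  = xa + L1 * std::cos(th1) + L2 * std::cos(th2); // Eq. (9)
    double yc  = ya + L1 * std::sin(th1) + L2 * std::sin(th2); // Eq. (10)

    std::printf("L1=%.3f L5=%.3f th1=%.3f th5=%.3f L2=%.3f C=(%.3f, %.3f)\n",
                L1, L5, th1, th5, L2, xc, yc);
    return 0;
}
```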
4.2 Optimization of the Crank-Rocker Mechanism using the Genetic Algorithm
Objective function: minimize the crank length (input link length),
(crank length) L1 = Rmax - L5
Variables: the longest distance from point A to the desired curve (Rmax) and the shortest distance from point A to the desired curve (Rmin), with
(i) 5 < Rmax < 10
(ii) 3 < Rmin < 7
The objective is thus to minimize the function L1 = Rmax - L5 in the intervals 5 < x1 < 10 and 3 < x2 < 7.
STEP 1: Binary coding to represent the variables x1 and x2.
Choose binary coding with 12 bits for each of the variables x1 and x2, as shown in Table 4.4.
Bit position      2^11  2^10  2^9  2^8  2^7  2^6  2^5  2^4  2^3  2^2  2^1  2^0
Place value       2048  1024  512  256  128   64   32   16    8    4    2    1
Minimum string       0     0    0    0    0    0    0    0    0    0    0    0
Maximum string       1     1    1    1    1    1    1    1    1    1    1    1
Table 4.4 Binary coding table
• The total string length is 12 + 12 = 24.
• For 12 bits, the solution accuracy is (upper limit - lower limit)/(2^(number of bits) - 1).
• Minimum count = 0 and maximum count = 4096 - 1 = 4095.
• The details of the coding of each variable are shown in Table 4.5.
Table 4.5 Details of coding variables
Rmax coding: range of Rmax = 10 - 5 = 5; 2^n = range/accuracy; accuracy for Rmax = 0.0024; number of digits (n) = 12; possible combinations = 2^12 = 4096.
Rmin coding: range of Rmin = 7 - 3 = 4; 2^n = range/accuracy; accuracy for Rmin = 0.0017; number of digits (n) = 12; possible combinations = 2^12 = 4096.
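Each 12-bit substring is decoded back to a real value of Rmax or Rmin using the accuracy defined above. The short C++ sketch below illustrates this decoding; the function name and the example strings are ours.

```cpp
#include <bitset>
#include <cstdio>

// Decode a 12-bit substring to a real value on [lo, hi] (illustrative sketch).
double decode12(const std::bitset<12>& bits, double lo, double hi) {
    double count = static_cast<double>(bits.to_ulong());   // decoded count, 0 .. 4095
    double accuracy = (hi - lo) / (4096.0 - 1.0);           // (upper limit - lower limit)/(2^12 - 1)
    return lo + count * accuracy;
}

int main() {
    std::bitset<12> allZeros(0), allOnes(4095);
    std::printf("Rmax in [5,10]: 000000000000 -> %.4f, 111111111111 -> %.4f\n",
                decode12(allZeros, 5.0, 10.0), decode12(allOnes, 5.0, 10.0));
    std::printf("Rmin in [3,7]:  000000000000 -> %.4f, 111111111111 -> %.4f\n",
                decode12(allZeros, 3.0, 7.0), decode12(allOnes, 3.0, 7.0));
    return 0;
}
```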
STEP 2: Iteration. A total of 10 iterations are performed for minimization of the crank-link length.
STEP 3: Evaluation of each string in the population.
STEP 4: Reproduction of each string in the population.
STEP 5: Crossover of strings in the population.
STEP 6: Mutation of strings in the population.
STEP 7: Iteration. A total of 10 iterations are performed for minimization of the crank-link length after mutation.
V. DESIGN EXAMPLE
5.1 Oval-shaped curve generation. A total of 11 points on the desired curve are specified, which are given in Table 5.1.
A cubic parametric spline is used to interpolate the desired path, which passes through the 11 specified points. The
cumulative sum of the chord lengths between neighbouring specified points is taken as the parameter of the spline. Each chord
is divided into n equal parts.
Table 5.1 Coordinates of specified points of the given curve

Order of points   1      2      3      4      5      6      7      8      9      10     11
x-coordinates     4.38   3.84   3.39   3.05   2.84   3.00   3.53   3.97   4.31   4.67   4.89
y-coordinates     3.96   3.82   3.60   3.33   3.02   2.67   2.59   2.84   3.16   3.52   3.94
Table 5.2 Result of the synthesis example

L1     L2      L3     L4     L5       ¥avg    Es
0.14   1.142   7.66   8.95   1.1125   30.16   1.48
Result of the synthesis example obtained using the genetic algorithm in a C++ language program.
Output of the synthesis example from the C++ language program.
5.2 Eight-shaped curve generation
A total of 11 points on the desired curve are specified, which are given in Table 5.3. A cubic parametric spline is
used to interpolate the desired path, which passes through the 11 specified points. The ranges of Rmax and Rmin are 6 < Rmax < 15
and 4 < Rmin < 10. The cumulative sum of the chord lengths between neighbouring specified points is taken as the parameter
of the spline. Each chord is divided into n equal parts.
Table 5.3 Coordinates of specified points of the given curve

Order of points   1      2      3      4      5      6      7      8      9      10     11
x-coordinates     4.15   4.50   4.53   4.13   3.67   2.96   2.67   2.63   2.92   3.23   3.49
y-coordinates     2.21   2.18   1.83   1.68   1.58   1.33   1.06   0.82   0.81   1.07   1.45
Table 5.4 Result of the synthesis example

L1       L2       L3       L4       L5       ¥avg    Es
0.7295   1.9108   6.9020   8.9453   1.6917   32.69   0.4631
Result of the synthesis example obtained using the genetic algorithm in a C++ language program.
Output of the synthesis example from the C++ language program.
VI. CONCLUSIONS
The salient conclusions drawn from this analysis are:
• The optimal design of the mechanism is closely related to the input link of the mechanism.
• Rmax and Rmin are important for the synthesis of the mechanism.
• In this study, the crank-rocker mechanism optimization problem is solved efficiently using a non-conventional method, namely the genetic algorithm.
• The results obtained by the GA outperform those of the conventional method. The efficiency of the non-conventional technique demonstrated in this study suggests its immediate application to other design optimization problems.
• Minimization of the objective function can be achieved by increasing the number of iterations.
• The GA approach handles the constraints more easily and accurately.
• Genetic algorithms can be applied to a wide range of problems. Their flexibility comes at the cost of more function evaluations. There are many variants of genetic algorithms available, each with its own strengths and weaknesses.
• The optimized values obtained by the genetic algorithm are closer to the desired result and are much better than the values obtained by the conventional method.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 3 (September 2012) PP: 45-56
Tensile Properties of Long Jute Fiber Reinforced Polypropylene
Composites
Battula Sudheer Kumar, Rajulapati.Vineela, Shaik Jahara begham
Assistant Professor, Department of Mechanical, Lakki Reddy Bali Reddy College of Engineering, Mylavaram, Krishna (AP), India.
Abstract––Nowadays the need for composite materials has widely increased. The environmental concerns caused by synthetic
fibers such as carbon and glass have led to a requirement for natural fibers such as jute and hemp,
which are environmentally friendly, low cost and easily available. Synthetic fibers like carbon or glass are widely used in
industry, but their cost is high, so a need for naturally available and low-cost fibers has arisen. Jute, hemp and similar
natural fibers can be used as reinforcement materials. The main objective of our project is to test the tensile
properties of long, continuous natural fiber reinforced polypropylene composites. The natural fiber used in our
project is jute, and the reinforcement used is continuous or long fiber reinforcement. For better adhesion, the fibers
are first chemically treated with NaOH at different concentrations (5%, 10% and 15%). These treated fibers, at
different weight ratios (2.5%, 5%, 7.5% and 10%), are used with a polypropylene matrix. The samples are prepared by
injection molding and the hand lay-up technique as per ASTM standards, tested on a universal testing machine, and the results
are analyzed. The results show that, for the treated jute fiber reinforced polypropylene samples, the tensile strength and
tensile modulus are higher than for the plain polypropylene samples. There is a 28.4% increase in the tensile properties of
the 15% NaOH treated fiber reinforced polypropylene samples with a 10% weight ratio of jute when compared with the plain
polypropylene samples. Chemical treatment increased the surface roughness and improved the bonding, thereby resulting
in increased tensile properties.
I. INTRODUCTION
Composites are structures that are made up of diverse elements, with the principle being that the whole
is greater than the sum of its component parts (i.e. 1 + 1 = 3). An understanding of composites seems to be inherent in animal
behavior, evident in the nest building of birds, bats and insects, for example. Primitive man used the basic materials that
were available to him, such as animal dung, clay, straw and sticks, to form composite structures that were literally the first
building blocks of civilization. Even the biblical Noah's Ark was allegedly made of coal-tar pitch and straw, which could
perhaps be the first reported construction of a reinforced composite boat! Moving forward several thousand years, the
second wave of the industrial revolution that swept through western Europe from the 1830s onwards saw new-found
industries developing their own composite technologies such as laminated wood, alloyed metals and steel-reinforced
concrete. The earliest polymer castings were developed by Lepage in France, using albumen, blood and wood flour to
produce decorative plaques. The first semi-synthetic plastics were produced when cellulose fibers were modified with
nitric acid to form cellulose nitrate, or celluloid as it was to become known. Today, the composites marketplace is
widespread. As reported recently by the SPI Composites Institute, the largest market is still in transportation (31%), but
construction (19.7%), marine (12.4%), electrical/electronic equipment (9.9%), consumer (5.8%), and appliance/business
equipment are also large markets. The aircraft/aerospace market represents only 0.8%, which is surprising in light of its
importance in the origins of composites. Of course, the aerospace products are fewer in number but much higher in
value.
What is a composite?
A composite is a product made with a minimum of two materials – one being a solid material and the other a
binding material (or matrix) which holds together both materials. There are many composite products with more than two
raw materials. Those materials are not miscible together and are of a different nature. Composite materials are engineered or
naturally occurring materials made from two or more constituent materials with significantly different physical and chemical
properties which remain separate and distinct at the macroscopic and microscopic scale within the finished structure.
Composite materials are multiphase materials obtained through the artificial combination of different materials in order to
attain properties that the individual components by themselves cannot attain. They are not multiphase materials in which the
different phases are formed naturally by reactions, phase transformations, or other phenomena. An example is carbon fiber
reinforced polymer. A structural composite is a material system consisting of two or more phases on a macroscopic scale,
whose mechanical performance and properties are designed to be superior to those of the constituent materials acting
independently. One of the phases is usually discontinuous, stiffer and stronger, and is called the reinforcement, whereas the
less stiff and weaker phase is continuous and is called the matrix. Sometimes, because of chemical reactions or other processing
effects, an additional phase called the interphase exists between the reinforcement and the matrix. In our project the matrix is
polypropylene and the reinforcement is jute fiber.
Fig 1.1: Phases of a composite material.
II. EXPERIMENTAL WORK
2.1 Materials
Polypropylene
Polypropylene is produced by the polymerization of propylene using Ziegler-Natta catalysts (Al(iso-C₄H₉)₃ and TiCl₃).
Polypropylene can be prepared in isotactic, syndiotactic or atactic forms.
Fig 2.1: Polypropylene granules
Isotactic: the configuration or arrangement in which the functional (methyl) groups are all arranged on the same side of the polymer backbone:
-CH₂-CH(CH₃)-CH₂-CH(CH₃)-CH₂-CH(CH₃)-CH₂-CH(CH₃)-   (all CH₃ groups on the same side of the chain)
Atactic: the configuration or arrangement in which the functional (methyl) groups are arranged randomly along the backbone:
-CH₂-CH(CH₃)-CH₂-CH(CH₃)-CH₂-CH(CH₃)-CH₂-CH(CH₃)-   (CH₃ groups placed randomly on either side of the chain)
Syndiotactic: the configuration or arrangement in which the functional (methyl) groups are arranged in an alternating manner:
-CH₂-CH(CH₃)-CH₂-CH(CH₃)-CH₂-CH(CH₃)-CH₂-CH(CH₃)-   (CH₃ groups alternating from side to side along the chain)
Isotactic PP melts at 170 °C and is highly crystalline. Being highly crystalline, PP exhibits high stiffness, hardness
and tensile strength; at the same time, PP is one of the lightest polymers. Its high strength-to-weight ratio makes it an
industrial or engineering polymer. It is also highly resistant to many inorganic acids, alkalis and chemicals, but it has
lower stability towards heat and light when compared with HDPE. Nevertheless, PP has excellent mechanical
properties.
Uses
PP plastic is mainly processed by injection moulding. Luggage boxes, battery cases and tool boxes are made of PP.
PP filament or fiber is used in making carpets, ropes, etc., and it is an excellent insulator. PP components are
used in TV, radio and refrigerator parts, storage tanks for chemicals, and seat covers. The polypropylene was purchased from
Marama Chemical Ltd., near Autonagar.
JUTE FIBERS:
Jute fibers are naturally available fibers. They are extracted from bast plants. A water-retting process is
used to separate the fibers from the core of the bast plant. The semi-retted jute fibers were purchased from
Kankipadu Farmers Market, near Kankipadu, Vijayawada.
2.2. Fiber Extraction
Retting is the process of extracting fiber from the stem or bast of bast-fiber plants. The
available retting processes are mechanical retting (hammering), chemical retting (boiling and applying chemicals),
steam/vapor/dew retting, and water or microbial retting. Among them, water or microbial retting is a centuries-old but still the
most popular process for extracting fine bast fibers. However, the selection of a retting process depends on the availability
of water and the cost of the process. To extract fine fibers from the jute plant, a small stalk is harvested for pre-retting,
usually about two weeks before the main harvesting time. If the fiber can easily be removed from the jute hurd
or core, then the crop is ready for harvesting. After harvesting, the jute stalks are tied into bundles and submerged in soft
running water. The stalks stay submerged in water for about 20 days, although the retting process may require less time if the
quality of the jute is better.
When the jute stalk is well retted, the stalks are grabbed in bundles and hit with a long wooden hammer to loosen the
fiber from the jute hurd or core. After loosening, the fiber is washed with water and squeezed for dehydration.
The extracted fibers are further washed with fresh water and allowed to dry on bamboo poles. Finally, they are tied into small
bundles to be sold in the primary market.
These bundles of fiber are bought, soaked in water and then dried in sunlight. After drying, the fiber is cleaned of
dust by vigorous rubbing and combing and then cut to the required length. Bunches of fibers of 10 g each are made and
sent for chemical treatment.
Fig 2.2: Soaking jute in water
Fig 2.3: Rubbing the jute to separate dust and fiber
Fig 2.4: Combing technique to remove knots
Fig 2.5: Extracting individual fibers from the clean bunch
Fig 2.6: Bunches of fibers before weighing
2.3. Chemical Treatment
The fibers are treated with NaOH to increase their surface roughness and to improve their adhesion properties. A
few bunches are treated with 5% NaOH and a few with 10% and 15% NaOH. These bunches are soaked in NaOH for 3 h,
washed with distilled water, and then soaked in an acetic acid solution for 20 min. The samples are again washed
with distilled water, soaked in 6% H2O2 and continuously stirred for 2 h. The bunches are then washed once more with
distilled water and dried in sunlight.
Fig 2.7: Fiber soaked in 5% NaOH
Fig 2.8: Fiber soaked in Acetic acid
Fig 2.9: Fibers soaked in H2O2
Fig 2.10: Fibers being stirred after soaking in H2O2
Fig 2.11: Fibers being cleaned with distilled water
Fig 2.12: Treated fibers being dried in sunlight
2.4. Sample Preparation
Molds are prepared from zinc sheets by tinsmithy; the size of the molds is 160 x 13 x 3 mm.
The polypropylene granules are fed into the injection moulding machine and the temperature is set to 170 °C. When the
polypropylene has melted, it is poured into the molds. The fibers are introduced while the polypropylene is in the molten state. The
composite (PP + fiber) in the mold is shaken to set the polypropylene properly onto the fibers, and then a roller is rolled over it by
applying hand pressure. After cooling, the composite material is removed from the mold and filed to the required dimensions.
Five samples of plain polypropylene are prepared and then ground to the size 150 x 13 x 3 mm for an even surface. The average weight of
a sample is 7.89 g. Five samples each of 2.5%, 5%, 7.5% and 10% by weight of untreated fibers are prepared. The weight of
fiber for 2.5% by weight is 0.2 g, for 5% it is 0.3 g, for 7.5% it is 0.6 g, and for 10% it is 0.8 g. Five samples each of 2.5%,
5%, 7.5% and 10% by weight of 5% NaOH treated fibers, five samples each of 2.5%, 5%, 7.5% and 10% by weight of 10% NaOH
treated fibers, and five samples each of 2.5%, 5%, 7.5% and 10% by weight of 15% NaOH treated fibers are also prepared. In total, 80 samples of fiber +
polypropylene are prepared.
Fig 2.13: Molds for preparing samples
Fig 2.14: Polypropylene being poured into the mold from the injection moulding machine
Fig 2.15: Inserting fibers into the polypropylene
Fig 2.16: Rolling pressure applied to the composite
Fig 2.17: Samples after rolling
Fig 2.18: Removing samples from the molds
Fig 2.19: Plain polypropylene samples
Fig 2.20: 2.5%, 5%, 7.5%, 10% wt% of untreated fiber samples
Fig 2.21: 2.5%, 5%, 7.5%, 10% wt% of 5% NaOH treated fiber samples
Fig 2.22: 2.5%, 5%, 7.5%, 10% wt% of 10% NaOH treated fiber samples
Fig 2.23: 2.5%, 5%, 7.5%, 10% wt% of 15% NaOH treated fiber samples
2.5. Testing
The samples are tested for tensile properties on a universal testing machine and the results are analysed as per
ASTM standards. The samples are loaded on the machine, the maximum load is set to 200 kg, and then the load is applied.
The elongation and the corresponding loads are noted. The diameters and weights of the fibers are measured before and after
chemical treatment.
Fig 2.24: Sample loaded on the machine
III. RESULTS AND DISCUSSIONS
3.1. Calculations:
FORMULAE:
1) Ultimate tensile strength = L/A (MPa), where L = load (N) and A = cross-sectional area (mm²).
2) Tensile modulus = ultimate tensile strength (MPa) / strain, where strain = elongation (mm) / total length (mm).
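The two formulae can be applied directly to the load-elongation readings from the universal testing machine. The C++ fragment below is an illustrative calculation only; the sample load and elongation values are ours and are not measured data from this work.

```cpp
#include <cstdio>

// Illustrative use of the two formulae above; the sample inputs are NOT measured data.
int main() {
    double load = 960.0;                  // L, applied load in N (assumed example)
    double width = 13.0, thickness = 3.0; // specimen cross-section in mm (150 x 13 x 3 mm samples)
    double elongation = 30.0;             // recorded elongation in mm (assumed example)
    double totalLength = 150.0;           // sample length in mm

    double area = width * thickness;           // A, mm^2
    double uts = load / area;                  // ultimate tensile strength, MPa
    double strain = elongation / totalLength;  // dimensionless
    double modulus = uts / strain;             // tensile modulus as defined above, MPa

    std::printf("UTS = %.2f MPa, tensile modulus = %.2f MPa\n", uts, modulus);
    return 0;
}
```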
3.2. Results:
Table 3.1: Weights and diameters of fibers before and after chemical treatment

S.No   Type of fiber              Weight (mg)   Diameter (mm)
1      Untreated fibers           14.4          0.594
2      5% NaOH treated fibers     12            0.391
3      10% NaOH treated fibers    6.2           0.266
4      15% NaOH treated fibers    5             0.197
Fig 3.1: Samples after testing
Table 3.2: Various parameters of the jute fiber reinforced polypropylene composite materials
Table 3.3: Ultimate tensile strength (MPa) of the jute fiber reinforced polypropylene composite material samples

Composition               Plain PP   UTF     5% NaOH   10% NaOH   15% NaOH
97.5% PP + 2.5% fibers    24.51      24.13   24.76     26.05      26.47
95% PP + 5% fibers        24.51      24.06   25.34     26.25      27.56
92.5% PP + 7.5% fibers    24.51      24.06   25.52     29.59      30.46
90% PP + 10% fibers       24.51      23.46   26.16     30.05      31.48
Table 3.4: Tensile modulus (MPa) of the jute fiber reinforced polypropylene composite material samples

Composition               Plain PP   UTF      5% NaOH   10% NaOH   15% NaOH
97.5% PP + 2.5% fibers    122.53     137.10   148.58    158.82     161.40
95% PP + 5% fibers        122.53     152.31   164.53    189.65     216.68
92.5% PP + 7.5% fibers    122.53     160.43   185.85    216.01     247.61
90% PP + 10% fibers       122.53     172.49   194.64    263.62     277.77
3.3 Graphs
Graph 3.1: Ultimate tensile strength vs. fiber weight percentage
Graph 3.2: Tensile modulus vs. fiber weight percentage
IV. DISCUSSIONS AND ANALYSIS
Table 3.1 shows the change in weights and diameters of the jute fibers before and after chemical treatment. Table 3.2
shows the various parameters of the long, continuous jute fiber reinforced polypropylene composite; the load,
elongation, tensile strength, tensile modulus, mass, etc. are noted in this table. Table 3.3 and Graph 3.1 show the ultimate
tensile strength, and Table 3.4 and Graph 3.2 show the tensile modulus. From the graphs it can be seen that as the fiber weight
percentage increases, the ultimate tensile strength and tensile modulus increase, and that the 15% NaOH treated fibers show
the maximum tensile strength. The orientation of the fibers results in unidirectional properties.
In Graph 3.1 the ultimate tensile strength of the untreated fiber composites is less than that of the plain polypropylene:
the ultimate tensile strength of the jute fiber reinforced composites becomes lower than that of the pure polypropylene matrix
as the nominal fiber fraction increases. This can be explained by both the interfacial adhesion between the matrix and the fiber
surface and the voids in the composites. Since no coupling agent was introduced to improve the interfacial bonding in this
study, mechanical interlocking without any chemical bonding may be responsible for the adhesion. The strength of this
mechanical interlocking seems to be insufficient to hold the fiber and matrix together as the composite undergoes large
tensile deformation, resulting in low load transfer and, subsequently, low tensile strength. The void content may be another
source of the low tensile strength, because voids act as stress raisers (i.e., stress concentrations) that can bring about rapid
failure. This tendency can be observed in the jute fiber reinforced composites.
It is clearly seen in the graphs that the tensile strength and modulus of the treated fiber composites are greater than those of the plain
polypropylene and the untreated fiber composite. Alkali treatment generally increases the strength of natural fiber composites: a
strong sodium hydroxide treatment may remove lignin, hemicellulose and other alkali-soluble compounds from the surface
of the fibers, increasing the number of reactive hydroxyl groups on the fiber surface available for chemical bonding, so
the strength should be higher than that of untreated fiber composites. A probable cause of deviation from this behaviour is that the alkali
reacts with the cementing materials of the fiber, especially hemicellulose, which leads to splitting of the fibers into finer
filaments. As a result, the wetting of the fiber as well as the bonding of the fiber with the matrix may improve, but the
fiber also becomes more brittle. Under stress, such fibers break easily and therefore cannot take part in the stress transfer mechanism. A
high concentration of sodium hydroxide may thus increase the rate of hemicellulose dissolution, which can finally lead to strength
deterioration. Moreover, unnecessarily long treatment times may also increase the hemicellulose dissolution.
V. CONCLUSIONS
With the results obtained from the experimental procedure, the following conclusions are made.
1. As the fiber weight percent increases, the tensile strength and modulus increase.
2. The ultimate tensile strength of the untreated fiber composites is less than that of the plain polypropylene because of the formation of voids and improper adhesion between fiber and matrix, due to the lack of the surface roughness needed for bonding of the fiber with the matrix.
3. Due to the chemical treatment, the surface roughness increases, providing strong interfacial adhesion between matrix and fiber and thereby increasing the ultimate tensile strength and tensile modulus.
4. The tensile strength of the composite increases as the concentration of NaOH used for treating the fibers is increased. NaOH dissolves lignin, hemicellulose and other binding materials, providing reactive hydroxyl groups on the surface of the fiber for better interfacial bonding. The generalised equation can be written as
Fiber-OH + NaOH -> Fiber-O⁻ Na⁺ + H₂O
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September2012) PP: 39-46
The Application of PSO to Hybrid Active Power Filter Design for
3phase 4-Wire System with Variable Load
B. Suresh Kumar1, Dr. K. Ramesh Reddy2
Abstract––An optimal design method is presented for hybrid active power filters (HAPFs) installed at high voltage levels to satisfy the
requirements of harmonic filtering and reactive power compensation. Multi-objective optimization models for the HAPF
were constructed. Detuning effects and faults were also considered by constructing constraints during the optimization
process, which improves the reliability and practicability of the designed filters. An effective strategy was adopted to solve
the multi-objective optimization problems for the design of the HAPF. Furthermore, a particle swarm optimization
algorithm was developed for searching for an optimal solution for the planning of the filters. An application of the method to an
industrial case involving harmonic and reactive power problems indicates the superiority and practicality of the proposed
design methods.
I. INTRODUCTION
Power systems have to cope with a variety of nonlinear loads which introduce significant amounts of
harmonics. IEEE Standard 519-1992 provides a guideline for the limitation and mitigation of harmonics. Passive power
filters (PPFs), active power filters (APFs), and hybrid active power filters (HAPFs) can all be used to eliminate harmonics.
For medium- and high-voltage systems, PPFs and HAPFs appear to be the better choices considering cost where the ratings are
of several tens of megavolt-amperes. The design of such PPFs and HAPFs is a complicated nonlinear programming
problem. Conventional trial-and-error methods based on engineering experience are commonly used, but the results are not
optimal in most cases.
In recent years, many studies have appeared involving optimal PPF design. A method based on the sequential
unconstrained minimization technique has been used for PPF design because of its simplicity and versatility, but numerical
instability can limit the application of this method. PPF design using simulated annealing has been reported, but the major
drawback is the repeated annealing.
Genetic algorithms (GAs) have been widely used in PPF design, but the computing burden and convergence
problems are disadvantages of this approach. A design method for PPFs using a hybrid differential evolution algorithm has
also been proposed, but the algorithm is complex, involving mutation, crossover, migrant, and accelerated
operations. For the optimal design of HAPFs, a method based on GAs has been proposed in order to minimize the
rating of the APF, but no other optimal design methods appear to have been suggested. Many methods have treated the optimal
design of PPFs and HAPFs as a single-objective problem. In fact, the filter design should determine the optimal solution where
there are multiple objectives. As these objectives generally conflict with one another, they must be cautiously coordinated to
derive a good compromise solution.
In this paper, optimal multi-objective designs for both PPFs and HAPFs using an advanced particle swarm
optimization (PSO) algorithm are reported. The objectives and constraints were developed from the viewpoint of practicality
and the filtering characteristics.
For the optimal design of the PPFs, the capacity of reactive power compensation, the original investment cost, and the
total harmonic distortion (THD) were taken as the three objectives. The constraints included individual harmonic distortion,
fundamental reactive power compensation, THD, and parallel and series resonance with the system. For the optimal design
of the HAPFs, the capacity of the APF, the reactive power compensation, and the THD were taken as the three objectives; the
constraints were as for the PPFs.
The uncertainties of the filter and system parameters, which cause detuning, were also considered as
constraints during the optimal design process. A PSO-based algorithm was developed to search for the optimal solution. The
numerical results of case studies comparing the PSO method and the conventional trial-and-error method are reported,
from which the superiority and availability of the PSO method and the designed filters are confirmed.
II. SYSTEM UNDER STUDY
A typical 10-kV, 50-Hz system with nonlinear loads, as shown in Fig. 1, was studied to determine the optimal
design for both PPFs and HAPFs. The nonlinear loads are the medium-frequency furnaces commonly found in steel plants,
with abundant harmonic currents, particularly of the fifth and seventh orders, as shown in Table I.
Fig 1. Single-line diagram of the system for the case studies (66 kV supply, main transformer, 10 kV bus feeding the furnaces, other loads and the nonlinear loads, with the PPF or HAPF filters connected at the harmonic current measurement point).
TABLE I: HARMONIC CURRENT DISTRIBUTIONS IN PHASE A AND UTILITY TOLERANCES

Harmonic order   Measured value (%)   National standard (%)   IEEE Standard 519-1992 (%)
5                6.14                 2.61                    4
7                2.77                 1.96                    4
11               1.54                 1.21                    2
13               0.8                  1.03                    2
17               0.6                  0.78                    1.5
19               0.46                 0.7                     1.5
23               0.95                 0.59                    0.6
25               0.93                 0.53                    0.6
THD              7.12                 5                       5
The utility harmonic tolerances given in IEEE Standard 519-1992 and the Chinese National Standard
GB/T14549-93 are listed in Table I as percentages of the fundamental current. Table I shows that the current THD and the 5th,
23rd, and 25th order harmonic currents exceed the tolerances based on both standards. In addition, the 7th and 11th order
harmonics exceed the tolerance based on the National standard.
Filters must therefore be installed to mitigate the harmonics sufficiently to satisfy both standards. Both PPF and
HAPF are suitable and economical for harmonic mitigation in such systems. For this system with nonlinear loads as medium
frequency furnaces, the even and triple harmonics are very small and far below the standard values, so these harmonics are
not considered. In addition, the harmonic voltages are in fact very small, so the voltages are assumed to be ideal.
The fundamental current and reactive power demands are 1012 A and 3–4 MVar, respectively. The short-circuit capacity is
132 MVA, and the equivalent source inductance of the system is 2.4 mH.
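As a quick consistency check of the stated system data, the equivalent source inductance can be recomputed from the short-circuit capacity at the 10-kV bus. The C++ fragment below is an illustrative calculation, not code from the paper; it reproduces the 2.4 mH figure.

```cpp
#include <cstdio>

// Recompute the equivalent source inductance from the short-circuit capacity (illustrative).
int main() {
    const double PI = 3.14159265358979;
    double V = 10e3;      // bus line voltage, V
    double Ssc = 132e6;   // short-circuit capacity, VA
    double f = 50.0;      // system frequency, Hz

    double Zs = V * V / Ssc;             // short-circuit impedance, ohm (about 0.76 ohm)
    double Ls = Zs / (2.0 * PI * f);     // equivalent source inductance, H

    std::printf("Zs = %.3f ohm, Ls = %.2f mH\n", Zs, Ls * 1e3);   // prints about 2.4 mH
    return 0;
}
```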
III. HAPF DESIGN BASED ON PSO
A. HAPF Structure and Performance:
In order to demonstrate the optimal design method for HAPFs based on PSO, an HAPF was designed as shown
in Fig. 2; it is intended for use in the same situation as that shown in Fig. 1. In this HAPF, the PPFs are mainly used to
compensate for harmonics and reactive power, and an APF is used to improve the filtering performance. The PPF consists of
fifth and seventh single-tuned filters and a high-pass damped filter. The APF is implemented with a three-phase voltage-source
inverter. Fig. 3(a) shows the single-phase equivalent circuit of the HAPF, assuming that the APF is an ideal controllable voltage
source VAF and that the load is an ideal current source IL. ZS is the source impedance, ZF is the total impedance of the PPF,
Vpcc is the voltage at the injection point, and K is the controlling gain of the APF.
Fig. 2. Shunt HAPF (system source Vs-Ls feeding the 10 kV bus and nonlinear loads; PPF branches C5-L5, C7-L7 and CH-LH-RH; APF connected through a coupling transformer with L, C and R elements).
Fig 3. Single-phase equivalent circuits of the HAPF: (a) equivalent circuit; (b) equivalent harmonic circuit.
The equivalent harmonic circuit is redrawn as in Fig. 3(b). The harmonic current Ish into the system source and the
harmonic attenuation factor γ are given by the following equations:

Ish = | ZF / (K + ZF + ZS) | · ILh    (1)

γ = | Ish / ILh | = | ZF / (K + ZF + ZS) |    (2)

Assuming that the fundamental component of the supply voltage is fully dropped across the PPF, the voltage and current of
the APF can be derived as follows [24]:

VAF = Σh VAFh = -Σh ZFh · IAFh = -Σh ZFh · ILh    (3)

IAF = IAF1 + Σh IAFh    (4)

The rms value of VAF is defined as

VAF = √( Σ(h=5,7,11,13,17,19,23,25) VAFh^2 )    (5)

The capacity of the APF is determined by the current IAF and the voltage VAF. It is obvious that a low VA rating of the APF
can be achieved by decreasing IAF and VAF. In this shunt hybrid topology, the current IAF is almost constant, so the
only way to reduce the VA rating of the APF is to decrease the voltage VAF.
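Equation (2) can be evaluated numerically with complex impedances. The C++ fragment below is an illustrative sketch: for simplicity ZF is represented by a single tuned branch rather than the full parallel PPF, and the numerical values of the branch elements and the gain K are assumptions of ours, not design values from the paper.

```cpp
#include <complex>
#include <cstdio>

// Illustrative evaluation of gamma = |ZF / (K + ZF + ZS)|, Eq. (2), at one harmonic order.
int main() {
    using cd = std::complex<double>;
    const double PI = 3.14159265358979;
    double h = 5.0, f1 = 50.0;
    double w = 2.0 * PI * f1 * h;                  // angular frequency of the 5th harmonic

    // Assumed example values (not from the paper):
    double Ls = 2.4e-3;                            // source inductance, H
    double R5 = 0.1, L5 = 7.24e-3, C5 = 59.76e-6;  // one single-tuned branch standing in for ZF
    double K  = 20.0;                              // APF controlling gain, ohm

    cd ZS(0.0, w * Ls);                            // source impedance at harmonic h
    cd ZF(R5, w * L5 - 1.0 / (w * C5));            // branch impedance at harmonic h
    double gamma = std::abs(ZF / (cd(K, 0.0) + ZF + ZS));   // Eq. (2)

    std::printf("attenuation factor at h = 5: gamma = %.4f\n", gamma);
    return 0;
}
```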
B. Multi-objective Optimal Design Model for the HAPF:
As mentioned earlier, when designing an HAPF it is very important to minimize the capacity of the APF
component, and there are some other objectives and constraints to be considered: when the APF of the HAPF is cut off due to
faults, the PPF part keeps working to mitigate harmonics until the APF is restored, so some additional constraints
should be included for such occasions. The construction of the objective functions and constraints is described next.
Three important objective functions are defined as follows.
1) Minimize the capacity of the APF, which is mainly determined by the harmonic voltage across it:

min VAF = √( Σ(h=5,7,11,13,17,19,23,25) VAFh^2 )    (6)
2) Minimize the current THD with the HAPF:

min THDIHAPF = √( Σ(h=2..N) (Ish / I1)^2 )    (7)

where THDIHAPF is the current THD with the HAPF in place; the definitions of Ish, I1, and N are the same as those in (7).
3) Maximize the reactive power compensation:

max Σ(i=5,7,H) Qi    (8)

where Qi is the same as that in (9).
Constraints are defined as follows.
The tolerated values quoted hereinafter are also based on the National standard.
1) Requirements of total harmonic filtering with the HAPF:

THDIHAPF ≤ THDIMAX    (9)

where THDIMAX is defined in (10). When the PPF runs individually with the APF cut off, an additional constraint is added as follows:

THDIPPF ≤ THDIMAX    (10)

where THDIPPF is the THD with the PPF alone.
2) Requirements of individual harmonic filtering with the HAPF and with the PPF alone: each harmonic order should satisfy the
standards, so the following constraints are included:

IHAPFsh ≤ Ihmax,  h = 5, 7, 11, 13, 17, 19, 23, 25    (11)

IPPFsh ≤ Ihmax,  h = 5, 7, 11, 13, 17, 19, 23, 25    (12)

where IHAPFsh and IPPFsh are, respectively, the rms values of the hth order harmonic current into the system source with the
HAPF and with the PPF alone, and Ihmax is defined by (11).
3) Fundamental reactive power compensation limits: the fundamental reactive power must be restricted as

Qmin ≤ Σ(i=5,7,H) Qi ≤ Qmax    (13)

where Qmin and Qmax are as defined in (12).
4) Parallel and series resonance restrictions: parallel and series resonance with the system source will rarely happen for the
HAPF, owing to the active damping function of the APF. Nevertheless, it is necessary to consider, during the HAPF design,
parallel and series resonance restrictions for the case when the PPF works alone with the APF cut off. Therefore, constraints are constructed
which are the same as those constructed during the PPF optimal design in (13)-(16).
5) Consideration of the detuning constraints: the HAPF system is not sensitive to detuning effects because of the APF
damping function. In the worst case, where the system impedance decreases by 20%, the system frequency changes to 50.5 Hz,
the capacitance of each branch increases by 5%, and the reactance also increases by 2%, the filtering performance of the
PPF alone should still satisfy all the standards and limits described earlier, as set out in (10), (12), and (13).
C. Optimal Design for the HAPF Based on PSO
Based on the objectives and constraints constructed earlier for the HAPF, the multi-objective optimization task is carried out
using an advanced PSO algorithm. The capacitance in each branch of the PPF and the characteristic frequency of the high-pass
damped filter are chosen as the optimization variables, Xi = (C5, C7, CH, fH)^T, while the tuning frequencies of the fifth and
seventh single-tuned filters are predetermined as 242 and 342 Hz, respectively. According to the optimization objectives, the
corresponding fitness functions are defined as

F1'(X) = VAF    (14)

F2'(X) = THDIHAPF    (15)

F3'(X) = Σ(i=5,7,H) Qi    (16)

Similar methods were adopted to solve this multi-objective optimization problem. The objective of minimizing the APF
capacity is chosen as the final objective, while the other two objectives are handled using acceptable-level constraints, as
shown in the following equations:

min F1'    (17)

F2' ≤ α1'    (18)

α2' ≤ F3' ≤ α3'    (19)

where α1' and α3' are the highest, and α2' the lowest, acceptable levels for the secondary objectives, respectively. The overall
optimization process for the HAPF based on PSO is similar to that of the PPF in Fig. 4.
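The structure of such a PSO search is sketched below in C++. It is an illustration only: the fitness is a dummy placeholder standing in for F1'(X) = VAF with penalty handling of (18)-(19), and the variable bounds, inertia and acceleration constants are assumptions of ours; nothing here reproduces the authors' implementation.

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

// Placeholder fitness standing in for F1'(X) = VAF plus penalties for (18)-(19);
// a real implementation would evaluate the filter network model instead.
double evaluate(const std::vector<double>& x) {
    double s = 0.0;
    for (double xi : x) s += xi * xi;   // dummy convex surrogate
    return s;
}

double rnd01() { return std::rand() / (RAND_MAX + 1.0); }

struct Particle { std::vector<double> x, v, best; double bestFit; };

std::vector<double> pso(const std::vector<double>& lo, const std::vector<double>& hi,
                        int swarmSize, int maxIter) {
    const double w = 0.7, c1 = 1.5, c2 = 1.5;     // assumed inertia/acceleration constants
    int dim = (int)lo.size();
    std::vector<Particle> swarm(swarmSize);
    std::vector<double> gBest(dim);
    double gBestFit = 1e300;

    for (auto& p : swarm) {                       // random initialisation inside the bounds
        p.x.resize(dim);
        p.v.assign(dim, 0.0);
        for (int d = 0; d < dim; ++d) p.x[d] = lo[d] + rnd01() * (hi[d] - lo[d]);
        p.best = p.x;
        p.bestFit = evaluate(p.x);
        if (p.bestFit < gBestFit) { gBestFit = p.bestFit; gBest = p.x; }
    }
    for (int it = 0; it < maxIter; ++it) {
        for (auto& p : swarm) {
            for (int d = 0; d < dim; ++d) {       // velocity and position update
                p.v[d] = w * p.v[d] + c1 * rnd01() * (p.best[d] - p.x[d])
                                    + c2 * rnd01() * (gBest[d] - p.x[d]);
                p.x[d] += p.v[d];
                if (p.x[d] < lo[d]) p.x[d] = lo[d];   // keep the particle inside the bounds
                if (p.x[d] > hi[d]) p.x[d] = hi[d];
            }
            double f = evaluate(p.x);
            if (f < p.bestFit) { p.bestFit = f; p.best = p.x; }
            if (f < gBestFit)  { gBestFit = f;  gBest = p.x; }
        }
    }
    return gBest;                                 // best (C5, C7, CH, fH) found
}

int main() {
    std::vector<double> lo = {10e-6, 5e-6, 5e-6, 500.0};    // assumed bounds for (C5, C7, CH, fH)
    std::vector<double> hi = {100e-6, 50e-6, 80e-6, 900.0};
    std::vector<double> best = pso(lo, hi, 30, 200);
    std::printf("best C5=%.2e C7=%.2e CH=%.2e fH=%.1f\n", best[0], best[1], best[2], best[3]);
    return 0;
}
```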
TABLE II: DESIGN RESULTS OF HAPFS BASED ON PSO AND CONVENTIONAL METHODS

Design parameters                        PSO method    Conventional method
The 5th single-tuned filter   C5         59.76 uF      80.6 uF
                              L5         7.24 mH       5.37 mH
                              Q5         60            60
The 7th single-tuned filter   C7         12.32 uF      23.76 uF
                              L7         17.58 mH      9.11 mH
                              Q7         60            60
High-pass damped filter       CH         52.06 uF      15.28 uF
                              LH         1.20 mH       3.32 mH
                              m          0.5           0.5
TABLE III: HARMONIC CURRENT DISTRIBUTIONS WITH HAPFS BASED ON PSO AND CONVENTIONAL METHODS

Harmonic order                PSO method (%)   Conventional method (%)
5                             0.24             0.17
7                             0.24             0.11
11                            0.25             0.71
13                            0.1              0.3
17                            0.07             0.16
19                            0.06             0.12
23                            0.13             0.26
25                            0.13             0.26
THD                           0.48             0.91
VAF                           116.64 V         340.82 V
Reactive power compensation   4 MVar           3.88 MVar
The design results of the HAPFs using the PSO and conventional trial-and-error methods are listed in Table II, and the
corresponding harmonic current distributions are given in Table III. It can be seen that the harmonic currents and reactive power are well
compensated by both HAPFs and that the HAPF designed using the PSO-based method obtains better filtering
performance, with a lower THD (0.48%) and larger reactive power compensation (4 MVar). Moreover, the voltage VAF of the
APF in this case is much smaller than that based on the conventional method; therefore, the investment cost of the whole
system is much reduced. Table IV shows the harmonic current distributions when the PPF is working alone, without the
APF.
A comparison between the PSO method and the conventional method shows that all the harmonic
currents and the THD are still within the standards, and that the filtering performance of the PPF based on PSO is a little better.
TABLE IV: HARMONIC CURRENT DISTRIBUTIONS WITH PPFS ALONE BASED ON PSO AND CONVENTIONAL METHODS

Harmonic order   PSO method (%)   Conventional method (%)
5                1.1              0.82
7                0.76             0.39
11               0.94             1.13
13               0.26             0.60
17               0.14             0.29
19               0.11             0.21
23               0.21             0.40
25               0.20             0.38
THD              1.68             1.71
TABLE V: HARMONIC CURRENT DISTRIBUTIONS WITH HAPFS AND PPFS ALONE CONSIDERING DETUNING EFFECTS

Harmonic order   PSO method, PPF alone (%)   PSO method, HAPF (%)   Conventional method, HAPF (%)
5                2.56                        0.65                   0.44
7                1.72                        0.75                   0.27
11               0.99                        0.23                   0.71
13               0.30                        0.1                    0.27
17               0.17                        0.08                   0.16
19               0.13                        0.06                   0.13
23               0.25                        0.14                   0.28
25               0.25                        0.14                   0.28
THD              3.31                        1.05                   1.02
In order to verify the filtering performances of HAPF and PPF alone under the worst detuning situations,
comparisons are shown in Table V. It is clear that both HAPFs, using PSO method and conventional method, can obtain
excellent filtering performance in spite of detuning effects.
When PPF runs alone without APF, the PPF using the PSO method can still mitigate all the harmonic currents and
THD within the tolerance, while with the PPF using the conventional method, the 11th order harmonic current exceeds the
tolerance of the National standard.
These results again verify the necessity of considering detuning effects during the design process. Fig. 4 shows the
harmonic attenuation factors of HAPF and PPF alone using the PSO design method and considering detuning effects. It can
be seen that the harmonic currents are still well attenuated, and no resonance point can be found. Furthermore, the
attenuation factor of HAPF is much smaller than that of PPF, which shows the excellent harmonic mitigation performance of
HAPF.
The simulation using the MATLAB/SIMULINK software has been run based on field measurement data. The
compensated source current with the HAPF is shown in Fig. 5, from which we can see that the source current is very close
to a pure sine wave, with the current THD decreasing to 0.48%.
Fig. 6 shows the convergence characteristics of the PSO algorithm developed in this paper for the optimal design of the
HAPF. The PSO algorithm was run 200 times, and every time it converged within 360 iterations. This
demonstrates the efficiency and validity of PSO for the optimal HAPF design.
Fig 4. Harmonic attenuation factors of the HAPF and its PPF alone based on the PSO method.
(a) Harmonic attenuation factor of the HAPF based on the PSO method. (b) Harmonic attenuation of the PPF alone
based on the PSO method
Fig 5. Compensated source current and its THD analysis with the HAPF based on the PSO method
(a) Compensated source currents of phase A with HAPF. (b) THD analysis of compensated source current.
Fig. 6. Convergence characteristics of PSO for the HAPF design.
Fig. 7. Simulink model of the HAPF with and without PSO under variable loads.
Table VI: Results with variable load

          WITHOUT PSO                                  WITH PSO
Scheme    %THD     P.F     Reactive power (VAR)        %THD    P.F     Reactive power (VAR)
HAPF      26.54    0.528   1.31                        0.47    0.99    2.018
IV. CONCLUSION
A novel hybrid APF with injection circuit was proposed. Its principle and control methods were discussed. The
proposed adaptive fuzzy-dividing frequency control can decrease the tracking error and increase dynamic response and
robustness. The control method is also useful and applicable to any other active filters. It is implemented in an IHAPF with a
100-kVA APF system in a copper mill in Northern China, and demonstrates good performance for harmonic elimination.
Simulation and application results proved the feasibility and validity of the IHAPF and the proposed control method.
REFERENCES
1. L. Gyugyi and E. C. Strycula, "Active ac power filters," in Proc. IEEE Ind. Appl. Soc. Annu. Meeting, 1976, pp. 529–535.
2. N. Mohan, H. A. Peterson, W. F. Long, G. R. Dreifuerst, and J. J. Vithayathil, "Active filters for AC harmonic suppression," presented at the IEEE Power Eng. Soc. Winter Meeting, 1977.
3. F. Peng, H. Akagi, and A. Nabae, "A new approach to harmonic compensation in power system - a combined system of shunt passive and series active filters," IEEE Trans. Ind. Appl., vol. 26, no. 6, pp. 983–990, Nov. 1990.
4. C. Madtharad and S. Premrudeepreechacharn, "Active power filter for three-phase four-wire electric systems using neural networks," Elect. Power Syst. Res., vol. 60, no. 2, pp. 179–192, Apr. 2002.
5. H. Fujita and H. Akagi, "A practical approach to harmonic compensation in power system - series connection of passive and active filters," IEEE Trans. Ind. Appl., vol. 27, no. 6, pp. 1020–1025, Nov. 1991.
6. H. Fujita and H. Akagi, "The unified power quality conditioner: the integration of series and shunt-active filters," IEEE Trans. Power Electron., vol. 13, no. 2, pp. 315–322, Mar. 1998.
7. K. J. P. Macken, K. M. H. A. De Brabandere, I. J. L. Dnesen, and R. J. M. Belmans, "Evaluation of control algorithms for shunt active filters under unbalanced and nonsinusoidal conditions," in Proc. IEEE Porto Power Tech. Conf., Porto, Portugal, Sep. 10–13, 2001, pp. 1621–1626.
8. F. Ruixiang, L. An, and L. Xinran, "Parameter design and application research of shunt hybrid active power filter," Proc. CSEE, vol. 26, no. 2, pp. 106–111, Jun. 2006.
9. S. Kim and P. N. Enjeti, "A new hybrid active power filter (APF) topology," IEEE Trans. Power Electron., vol. 17, no. 1, pp. 48–54, Jan. 2002.
10. S. Bhattacharya, P.-T. Cheng, and D. M. Divan, "Hybrid solutions for improving passive filter performance in high power applications," IEEE Trans. Ind. Appl., vol. 33, no. 3, pp. 732–747, May 1997.
11. L. Malesani, P. Mattavelli, and P. Tomasin, "High performance hysteresis modulation technique for active filters," IEEE Trans. Power Electron., vol. 12, no. 5, pp. 876–884, Sep. 1997.
12. S. Fukuda and R. Imamura, "Application of a sinusoidal internal model to current control of three-phase utility-interface converters," IEEE Trans. Ind. Electron., vol. 52, no. 2, pp. 420–426, Apr. 2005.
13. X. Yuan, W. Merk, H. Stemmler, and J. Allmeling, "Stationary-frame generalized integrators for current control of active power filters with zero steady-state error for current harmonics of concern under unbalanced and distorted operating conditions," IEEE Trans. Ind. Appl., vol. 38, no. 2, pp. 523–532, Mar. 2002.
14. K. Nishida, Y. Konishi, and M. Nakaoka, "Current control implementation with deadbeat algorithm for three-phase current-source active power filter," Proc. Inst. Elect. Eng., Electr. Power Appl., vol. 149, no. 4, pp. 275–282, Jul. 2002.
15. J. H. Marks and T. C. Green, "Predictive transient-following control of shunt and series active power filters," IEEE Trans. Power Electron., vol. 17, no. 4, pp. 574–584, Jul. 2002.
16. A. Nakajima, K. Oku, J. Nishidai, T. Shiraishi, Y. Ogihara, K. Mizuki, and M. Kumazawa, "Development of active filter with series resonant circuit," in Proc. 19th IEEE Annu. Power Electronics Specialists Conf. Rec., Apr. 11–14, 1988, vol. 2, pp. 1168–1173.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 5 (September 2012) PP: 41-46
Implementing Trust in Cloud Using Public Key Infrastructure
Heena Kharche1, Prof. Deepak Singh Chouhan2
1 Student at Computer Science and Engineering, IES IPS Academy, Indore, India
2 Faculty at Computer Science and Engineering, IES IPS Academy, Indore, India
Abstract–– Cloud is not a destination; it is an approach. Cloud service providers, along with cloud technology vendors, are today working towards developing a cloud ecosystem. The main areas of concern in the cloud ecosystem are security and trust. As Public Key Infrastructure (PKI) technology has undergone a renaissance, enabling computer-to-computer communications, this paper describes ways to implement trust models using Microsoft Windows Azure. The paper provides implementations of the trust models used to enable trust in the Cloud.
Keywords –– Azure; Cloud Computing; Cryptography; Public Key infrastructure; Windows
I. INTRODUCTION
Cloud computing is rapidly emerging as a new paradigm for delivering computing as a utility. While enterprises in India are apprehensive about public clouds, they would still like to avail themselves of the benefits that cloud computing has to offer. The idea behind any cloud computing proposal is for you to pay only for what you use, scaling up or down according to business needs. Vendors supporting cloud computing can interpret this statement differently, providing varying levels of services to achieve this result. The industry is now on its way towards cloud computing, and virtualization is the infrastructure on which it is being built. The cloud model will see rapid adoption in India, as businesses will appreciate how technology enables them to easily expand scalability and enhance elasticity [1].
This paper is organized in the following sections: in Section II we give a background of the technologies and the trust model to be used; in Section III we describe the implementation methodologies of cloud-based PKI; in Section IV we discuss the conclusion and future work.
II. BACKGROUND
A. Public Key Infrastructure
PKI consists of programs, data formats, procedures, communication protocols, security policies and public-key cryptographic mechanisms working together in a comprehensive manner to enable a wide range of dispersed people to communicate in a secure and predictable fashion. PKI provides authentication, confidentiality, non-repudiation, and integrity of the messages exchanged. PKI is a hybrid system of symmetric and asymmetric key algorithms and methods [2-4]. A public-key infrastructure (PKI) is a framework that provides security services to an organization using public-key cryptography. These services are generally implemented across a networked environment, work in conjunction with client-side software, and can be customized by the organization implementing them. An added bonus is that all security services are provided transparently: users do not need to know about public keys, private keys, certificates, or Certification Authorities in order to take advantage of the services provided by a PKI [5].
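To illustrate this hybrid combination of symmetric and asymmetric methods, the sketch below (not from the paper; a minimal example assuming the third-party Python "cryptography" package is installed) encrypts bulk data with a fresh AES key and wraps that key with an RSA public key, the pattern a PKI-backed application typically follows.

```python
# Minimal sketch of PKI-style hybrid encryption (illustrative only, not the paper's implementation).
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver's asymmetric key pair (in a real PKI the public key would come from an X.509 certificate).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()

# Symmetric part: encrypt the bulk data with a fresh 128-bit AES-GCM key.
aes_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"message for the cloud service", None)

# Asymmetric part: wrap the AES key with the receiver's RSA public key (RSA-OAEP).
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(aes_key, oaep)

# Receiver side: unwrap the AES key with the private key, then decrypt the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"message for the cloud service"
```

The asymmetric operation protects only the small AES key, while the fast symmetric cipher protects the actual payload, which is why PKI-based systems are described as hybrid.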
PKI and the Aims of Secure Internet Communication: The four aims of secure communication on the Internet are
as stated earlier: confidentiality, integrity, authentication and non-repudiation. Authentication is the procedure to verify the
identity of a user. There are three different factors authentication can be based on. These factors are something the user
knows, something the user possesses and something the user is. Something the user knows could be a password that is a
shared secret between the user and the verifying party. This is the weakest form of authentication since the password can be
stolen through, for example, a dictionary attack or sniffing the network. Something the user possesses could be a physical
token like a credit card, a passport or something digital and secret like a private key. This authentication form is usually
combined with something the user knows to form a two-factor authentication. For instance, a credit card and a PIN are
something possessed and something known. Something the user is could be something biometric like a fingerprint, DNA or
a retinal scan which is unique for the user.
B. Cloud Computing
Gartner defines cloud computing as Scalable, IT related capabilities provided as a service on Internet. The term
cloud computing implies access to remote computing services offered by third parties via a TCP/IP connection to the public
Internet. The cloud symbol in a network diagram, which initially represented any type of multiuser network, came to be
associated specifically with the public Internet in the mid-1990s[6].Cloud computing is defined as a model for enabling
convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers,
storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or cloud
provider interaction [7].
41
Implementing Trust in Cloud Using Public Key Infrastructure
1) Cloud services exhibit five essential characteristics [6-7] that demonstrate their relation to, and differences from, traditional computing approaches:
a) On-demand self-service: Computing capabilities are available on demand without any interference of a third party.
b) Broad network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs) as well as other traditional or cloud-based software services.
c) Resource pooling: The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. Examples of resources include storage, processing, memory, network bandwidth, and virtual machines. Even private clouds tend to pool resources between different parts of the same organization.
d) Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
e) Measured service: Cloud systems automatically control and optimize resource usage by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, or active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the service.
NIST Visual Model of Cloud Computing Definition gives the overall perspective and definition of what cloud computing is
[7]:
Fig. II.B NIST Visual Model of Cloud Computing
a) Service Models: Cloud computing can be classified by the model of service it offers into one of three different groups:
i. IaaS (Infrastructure as a Service): The capability provided to the customer of IaaS is raw storage space, computing, or network resources with which the customer can run and execute an operating system, applications, or any software that they choose.
ii. PaaS (Platform as a Service): The cloud provider not only provides the hardware, but also a toolkit and a number of supported programming languages to build higher-level services (i.e. software applications that are made available as part of a specific platform).
iii. SaaS (Software as a Service): The SaaS customer is an end-user of complete applications running on a cloud infrastructure and offered on a platform on demand. The applications are typically accessible through a thin client interface, such as a web browser.
b) Deployment Models: Clouds can also be classified based upon the underlying infrastructure deployment model as Public, Private, Community, or Hybrid clouds.
i. Public Cloud: A public cloud's physical infrastructure is owned by a cloud service provider. Such a cloud runs applications from different customers who share this infrastructure and pay for their resource utilization on a utility computing basis.
ii. Private Cloud: A pure private cloud is built for the exclusive use of one customer, who owns and fully controls this cloud.
iii. Community Cloud: When several customers have similar requirements, they can share an infrastructure and might share the configuration and management of the cloud. This management might be done by themselves or by third parties.
iv. Hybrid Cloud: Any composition of clouds, be they private or public, could form a hybrid cloud and be managed as a single entity, provided that there is sufficient commonality between the standards used by the constituent clouds.
C. Windows Azure
The Windows Azure platform consists of a highly scalable (elastic) cloud operating system, data storage fabric and related services delivered by physical or logical (virtualized) Windows Server 2008 instances. The Windows Azure Software
Development Kit (SDK) provides a development version of the cloud-based services, as well as the tools and APIs needed to
develop, deploy, and manage scalable services in Windows Azure.
According to Microsoft, the primary uses for Azure are to:
i. Add web service capabilities to existing packaged applications
ii. Build, modify, and distribute applications to the Web with minimal on-premises resources
iii. Perform services, such as large-volume storage, batch processing, intense or high-volume computations, and so on, off premises
iv. Create, test, debug, and distribute web services quickly and inexpensively
v. Reduce costs and risks of building and extending on-premises resources
vi. Reduce the effort and costs of IT management
The Windows Azure Platform supports three types of scalable persistent storage:
i. unstructured data (blobs)
ii. structured data (tables)
iii. messages between applications, services, or both (queues)
D. Transport Layer Security
Transport Layer Security (TLS) is a protocol that ensures privacy between communicating applications and their
users on the Internet. When a server and client communicate, TLS ensures that no third party may eavesdrop or tamper with
any message. TLS is the successor to the Secure Sockets Layer (SSL) [8].
Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that
provide endpoint authentication and secure communications over any transport. TLS is normally associated with Internet
communication but can be applied to any transport layer, including sockets and HTTP. Sending unencrypted messages
increases the risk that messages can be intercepted or altered. TLS security technology automatically encrypts e-mail
messages between servers thereby reducing the risk of eavesdropping, interception, and alteration.
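As a small, hedged illustration of how an application endpoint uses TLS (not part of the paper), the sketch below opens a certificate-verified TLS connection using only the Python standard library; the host name is a placeholder.

```python
# Minimal TLS client sketch using the Python standard library (needs network access; hostname is a placeholder).
import socket
import ssl

HOSTNAME = "example.com"  # hypothetical endpoint, replace with the real server

# create_default_context() loads the system CA bundle and enables certificate and hostname checking.
context = ssl.create_default_context()

with socket.create_connection((HOSTNAME, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOSTNAME) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())                 # e.g. 'TLSv1.3'
        print("Peer certificate subject:", tls_sock.getpeercert().get("subject"))
        # Application data sent from here on is encrypted and integrity protected.
        tls_sock.sendall(b"GET / HTTP/1.0\r\nHost: " + HOSTNAME.encode() + b"\r\n\r\n")
        print(tls_sock.recv(200))
```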
E. Trust Models
An enterprise boundary [5] consists of a group of enterprises that wish to implement a common security procedure as a group.
1) Public Root CA and Public CA
Fig. II.E.1 Public root CA and public CA
The above model consists of a public root CA and public CAs; some enterprises in the enterprise boundary that share the same security rules are connected through that boundary, and others are connected individually to the public CAs.
The advantage of the above model is that all web browsers trust the root CA certificate, and hence all certificates generated in the hierarchy.
This model has the following shortcomings:
i. The trust point (CA certificate) is outside the enterprise boundary and issues certificates to other parties.
ii. No control of the security properties of the PKI.
iii. Key strength: 3072-bit RSA rather than 1024-bit RSA.
iv. Certificate extensions: key usage.
v. Inability to issue certificates and certificate revocation information on demand.
vi. High implementation cost.
2) Public root CA and Enterprise CA
Fig. II.E.2 Public Root CA and Enterprise CA
The above model consists of a public root CA and an enterprise CA which governs the secure communication between the various enterprises and the root CA.
Advantages of the above model are:
i. The trust point (CA certificate) is inside the enterprise boundary.
ii. On-demand issuing of certificates and certificate revocation information.
iii. Web browsers trust the root CA certificate, and hence all certificates generated in the hierarchy.
The disadvantages of the above model are that it supports only limited control of the security properties of the PKI and that the implementation cost is too high.
3) Enterprise root CA and Enterprise CA
Fig. II.E.3 Enterprise root CA and enterprise CA
The above model operates completely within the enterprise boundary, which means complete trust is established for the communication and remote access between the enterprises.
Advantages of the above model are:
i. The trust points (root CA and CA certificates) are inside the enterprise boundary.
ii. Full control of the security properties of the PKI.
iii. On-demand issuing of certificates and certificate revocation information.
The only disadvantage of the above model is that web browsers do not trust the root CA certificate, and hence none of the certificates generated in the hierarchy.
III. IMPLEMENTATION METHODOLOGIES
The above trust models can be implemented by the following methodologies, applied in sequence, to reduce risks in cloud computing and develop the cloud customer's trust in the cloud.
A summary of the protocols and algorithms used in the following methodologies: Transport Layer Security is used as the secure protocol, and the Advanced Encryption Standard is used as the cipher algorithm, with the Secure Hash Algorithm and RSA for key exchange.
F. Public Root CA and Public CA
In this model the only things we have to take care of are secure communication using TLS and secure data transmission using the Advanced Encryption Standard.
Fig. III.A Methodology for model 1: (1) implement Transport Layer Security; (2) check the available certificates supported by the public root CA and public CA; (3) implement 128-bit AES managed code for encryption and decryption of the data for data privacy.
G. Public Root CA and Enterprise CA
In this model we first set up secure communication using TLS; we then create a self-signed certificate, which constitutes the enterprise CA, import the certificates into the browser, and secure the data transmission using the 128-bit Advanced Encryption Standard.
Fig. 3.B Methodology for model 2: (1) implement Transport Layer Security; (2) create self-signed certificates for the creation of the enterprise CA; (3) import the certificates into the browser to make it authenticated; (4) maintain data privacy using the 128-bit AES algorithm.
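To make the self-signed-certificate step concrete, here is a hedged sketch using the Python "cryptography" package's X.509 builder; the subject name, file name and validity period are placeholders chosen for illustration, not values from the paper.

```python
# Sketch: generate a self-signed certificate that can act as an enterprise CA root (illustrative values only).
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=3072)  # 3072-bit RSA, as preferred in the text

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Enterprise Root CA")])  # placeholder CN
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # issuer == subject, i.e. self-signed
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)

# PEM output; this is what would be imported into the browsers inside the enterprise boundary.
with open("enterprise_root_ca.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```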
H. Enterprise Root CA and Enterprise CA
Fig. 3.C Methodology for model 3: (1) implement Transport Layer Security; (2) create the enterprise root CA and import its self-signed certificates; (3) create the enterprise CA and import its self-signed certificates; (4) maintain data privacy using AES managed code.
IV. CONCLUSION AND FUTURE WORK
In this model the implemented methodology is quite complicated: we have to create both CAs and maintain them across the enterprises that lie within the enterprise boundary. The table below shows the comparison between the three models on various parameters, which have been assessed logically:
Parameters     Public Root CA and Public CA    Public Root CA and Enterprise CA    Enterprise Root CA and Enterprise CA
Secure         Less                            Medium                              More
Privacy        Less                            Medium                              More
Trust          Medium                          Medium                              High
Cost           High                            High                                High
Feasibility    High                            Medium                              Low
Performance    High                            Low                                 Low
Table IV: Comparative Analysis
By implementing the above model we can easily achieve trust within the enterprise boundary using the Microsoft platform. By creating a root CA and an enterprise CA with the agreement of all the enterprises, complete trust may be built in the cloud. Technologies and incentives to access or destroy systems emerge as technology moves forward and the value of the system increases. Hence, a system can only be classified as secure to an extent, or not secure at all. One critical factor in security is cost: to limit the incentives to break the system, the cost of breaking the system should be higher than or equal to the value of the information the system is protecting. The paper has discussed a model to build trust in the Cloud using public key infrastructure. Despite the limitation of browser support, it can be widely used by enterprises. An application of the above model is that different plants and branch offices that want to share the same data can create a public cloud and define their own root CA and CA to ensure the confidentiality and integrity of the data. As future work, we can make cloud consumers trust that their data are safe on the cloud, within their own enterprise boundary. In future, these algorithms can be extended with new forthcoming algorithms to eliminate the disadvantages of the existing system, and various performance factors can be improved so that the cloud can be used with full trust.
REFERENCES
1. Wayne Jansen and Timothy Grance, Guidelines on Security and Privacy in Public Cloud Computing.
2. Understanding Public Key Infrastructure (PKI), An RSA Data Security White Paper.
3. Article from Entrust.com.
4. Shashi Kiran, Patricia Lareau and Steve Lloyd, PKI Basics - A Technical Perspective.
5. Heena Kharche and Deepak Singh Chouhan, "Building Trust in Cloud Using Public Key Infrastructure," (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 3, No. 3, 2012, pp. 26-31.
6. Roger Jennings, Cloud Computing with the Windows Azure Platform, 2010, pp. 11-12.
7. VMware vCenter Configuration Manager Transport Layer Security Implementation, White Paper.
8. The Bank of New York Mellon, Transport Layer FAQ.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 2 (September 2012) PP: 56-61
Comparison of Low Field Electron Transport Properties in
Compounds of groups III-V Semiconductors by Solving
Boltzmann Equation Using Iteration Model
H. Arabshahi1, A. Pashaei2 and M. H. Tayarani3
1 Physics Department, Payame Noor University, P. O. Box 19395-4697, Tehran, Iran
2, 3 Physics Department, Higher Education Khayyam, Mashhad, Iran
Abstract––Temperature and doping dependencies of the electron mobility in InP, InAs, GaP and GaAs structures have been calculated using an iterative technique. The following scattering mechanisms are included in the calculation: ionized impurity, polar optical phonon, acoustic phonon and piezoelectric scattering. The electron mobility decreases monotonically as the temperature increases from 100 K to 500 K for each material, which depends on their band structure characteristics. The low-temperature value of the electron mobility decreases significantly with increasing doping concentration. The iterative results are in fair agreement with other recent calculations obtained using the relaxation-time approximation and with experimental methods.
Keywords–– Iterative technique; Ionized impurity scattering; Born approximation; electron mobility
I. INTRODUCTION
The problems of electron transport in semiconductors have been investigated extensively, both theoretically and experimentally, for many years. Many numerical methods available in the literature (the Monte Carlo method, the iterative method, the variational method, the relaxation-time approximation, or Matthiessen's rule) have led to approximate solutions of the Boltzmann transport equation [1-2]. In this paper the iterative method is used to calculate the electron mobility of InAs, GaAs, InP and GaP. Because of its high mobility, InP has become an attractive material for electronic devices of superior performance among the III-phosphide semiconductors; therefore, a study of the electron transport in the group III-phosphides is necessary. GaP possesses an indirect band gap of 0.6 eV at room temperature, whereas InP has a direct band gap of about 1.8 eV. InP and InAs offer the prospect of mobility comparable to GaAs and are increasingly being developed for the construction of optical switches and optoelectronic devices.
The low-field electron mobility is one of the most important parameters that determine the performance of a field-effect transistor. The purpose of the present paper is to calculate the electron mobility for various temperatures and ionized-impurity concentrations. The formulation itself applies only to the central Γ-valley conduction band. We have also considered band non-parabolicity and the screening effects of free carriers on the scattering probabilities. All the relevant scattering mechanisms, including polar optical phonons, deformation potential, piezoelectric, acoustic phonons and ionized impurity scattering, are taken into account. The Boltzmann equation is solved iteratively for our purpose, jointly incorporating the effects of all the scattering mechanisms [3-4]. This paper is organized as follows: details of the iteration model, the electron scattering mechanisms which have been used and the electron mobility calculations are presented in Section II, and the results of the iterative calculations carried out on InP, InAs, GaP and GaAs structures are interpreted in Section III.
II. MODEL DETAIL
To calculate mobility, we have to solve the Boltzmann equation to get the modified probability distribution
function under the action of a steady electric field. Here we have adopted the iterative technique for solving the Boltzmann
transport equation. Under the action of a steady field, the Boltzmann equation for the distribution function can be written as
\frac{\partial f}{\partial t} + v \cdot \nabla_{r} f + \frac{eF}{\hbar} \cdot \nabla_{k} f = \left( \frac{\partial f}{\partial t} \right)_{coll}    (1)

where (\partial f / \partial t)_{coll} represents the change of the distribution function due to electron scattering. In the steady state and under the application of a uniform electric field the Boltzmann equation can be written as

\frac{eF}{\hbar} \cdot \nabla_{k} f = \left( \frac{\partial f}{\partial t} \right)_{coll}    (2)
Consider electrons in an isotropic, non-parabolic conduction band whose equilibrium Fermi distribution function is f0(k) in
the absence of electric field. Note the equilibrium distribution f0(k) is isotropic in k space but is perturbed when an electric
field is applied. If the electric field is small, we can treat the change from the equilibrium distribution function as a
perturbation which is first order in the electric field. The distribution in the presence of a sufficiently small field can be
written quite generally as
f(k) = f_{0}(k) + f_{1}(k)\cos\theta    (3)
Where θ is the angle between k and F and f1(k) is an isotropic function of k, which is proportional to the magnitude of the
electric field. f(k) satisfies the Boltzmann equation 2 and it follows that:
-\frac{eF}{\hbar}\frac{\partial f_{0}}{\partial k} = \sum_{i} \left\{ \int \cos\theta\, f_{1}' \left[ S_{i}'(1-f_{0}) + S_{i} f_{0} \right] d^{3}k' - f_{1} \int \left[ S_{i}(1-f_{0}') + S_{i}' f_{0}' \right] d^{3}k' \right\}    (4)
In general there will be both elastic and inelastic scattering processes. For example, impurity scattering is elastic, and acoustic and piezoelectric scattering are elastic to a good approximation at room temperature, whereas polar and non-polar optical phonon scattering are inelastic. Labelling the elastic and inelastic scattering rates with subscripts el and inel respectively, and recognizing that for any elastic process i, S_{el,i}(k', k) = S_{el,i}(k, k'), equation (4) can be written as

f_{1}(k) = \frac{ -\dfrac{eF}{\hbar}\dfrac{\partial f_{0}}{\partial k} + \displaystyle\int \cos\theta\, f_{1}' \left[ S_{inel}'(1-f_{0}) + S_{inel} f_{0} \right] d^{3}k' }{ \displaystyle\int (1-\cos\theta)\, S_{el}\, d^{3}k' + \displaystyle\int \left[ S_{inel}(1-f_{0}') + S_{inel}' f_{0} \right] d^{3}k' }    (5)

Note that the first term in the denominator is simply the momentum relaxation rate for elastic scattering. Equation (5) may be solved iteratively by the relation

f_{1}^{\,n}(k) = \frac{ -\dfrac{eF}{\hbar}\dfrac{\partial f_{0}}{\partial k} + \displaystyle\int \cos\theta\, f_{1}^{\,[n-1]\prime} \left[ S_{inel}'(1-f_{0}) + S_{inel} f_{0} \right] d^{3}k' }{ \displaystyle\int (1-\cos\theta)\, S_{el}\, d^{3}k' + \displaystyle\int \left[ S_{inel}(1-f_{0}') + S_{inel}' f_{0} \right] d^{3}k' }    (6)
where f_{1}^{\,n}(k) is the perturbation to the distribution function after the n-th iteration. It is interesting to note that if the initial distribution is chosen to be the equilibrium distribution, for which f_{1}(k) is equal to zero, we get the relaxation-time approximation result after the first iteration. We have found that convergence can normally be achieved after only a few iterations for small electric fields. Once f_{1}(k) has been evaluated to the required accuracy, it is possible to calculate quantities such as the drift mobility \mu, which is given in terms of spherical coordinates by

\mu = \frac{\hbar}{3 m^{*} F}\; \frac{\displaystyle\int_{0}^{\infty} \left( k^{3} / (1+2\alpha E) \right) f_{1}\, dk}{\displaystyle\int_{0}^{\infty} k^{2} f_{0}\, dk}    (7)
Here, we have calculated low field drift mobility in III-V structures using the iterative technique. In the following
sections electron-phonon and electron-impurity scattering mechanisms will be discussed.
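The fixed-point structure of the iteration in equation (6) can be summarised by the following schematic Python sketch. It is not the authors' code: the scattering kernels are illustrative stubs and the angular integrals are reduced to simple sums, purely to show how the relaxation-time result appears at the first step and how in-scattering is fed back on subsequent iterations.

```python
# Schematic sketch of an iterative (Rode-type) solution of Eq. (6) on a 1-D k-grid (illustrative stubs only).
import numpy as np

k = np.linspace(1e8, 1e10, 200)          # electron wave vectors (1/m), illustrative grid
f0 = np.exp(-(k / 3e9) ** 2)             # stand-in equilibrium distribution
dk = k[1] - k[0]

eF_term = -np.gradient(f0, dk)           # plays the role of -(eF/hbar) * df0/dk (field magnitude factored out)

def elastic_momentum_rate(k):
    # stub for int (1 - cos(theta)) S_el d^3k'  (1/s), illustrative power law
    return 1e12 * (k / k.max()) ** 0.5

def inelastic_out_rate(k):
    # stub for int [S_inel (1 - f0') + S_inel' f0] d^3k'
    return 5e11 * np.ones_like(k)

def inelastic_in_scattering(f1):
    # stub for int f1' cos(theta) [S_inel' (1 - f0) + S_inel f0] d^3k',
    # modelled as a smoothing of f1 to mimic in-scattering from neighbouring states
    return 2e11 * np.convolve(f1, np.ones(5) / 5.0, mode="same")

denominator = elastic_momentum_rate(k) + inelastic_out_rate(k)

f1 = np.zeros_like(k)                    # starting from f1 = 0 gives the relaxation-time result first
for iteration in range(20):
    f1_new = (eF_term + inelastic_in_scattering(f1)) / denominator
    if np.max(np.abs(f1_new - f1)) < 1e-12 * np.max(np.abs(f1_new)):
        f1 = f1_new
        break
    f1 = f1_new

print("converged after", iteration + 1, "iterations")
```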
Deformation potential scattering
The acoustic modes modulate the interatomic spacing. Consequently, the position of the conduction and valence band edges and the energy band gap will vary with position because of the sensitivity of the band structure to the lattice spacing. The energy change of a band edge due to this mechanism is defined by a deformation potential, and the resultant scattering of carriers is called deformation potential scattering. The energy range involved in the case of scattering by acoustic phonons is from zero to 2\hbar v k, where v is the velocity of sound, since momentum conservation restricts the change of phonon wave vector to between zero and 2k, where k is the electron wave vector. Typically, the average value of k is of the order of 10^7 cm^-1 and the velocity of sound in the medium is of the order of 10^5 cm s^-1. Hence 2\hbar v k ~ 1 meV, which is small compared to the thermal energy at room temperature. Therefore, the deformation potential scattering by acoustic modes can be considered as an elastic process except at very low temperature. The deformation potential scattering rate with either phonon emission or absorption for an electron of energy E in a non-parabolic band is given by Fermi's golden rule as [3,5]
R_{de} = \frac{\sqrt{2}\, D_{ac}^{2}\, (m_{t}^{*} m_{l}^{*})^{1/2}\, K_{B} T}{\pi \hbar^{4} \rho v^{2}}\, \sqrt{E(1+\alpha E)}\, (1+2\alpha E) \left[ \frac{(1+\alpha E)^{2} + \tfrac{1}{3}(\alpha E)^{2}}{(1+2\alpha E)^{2}} \right]    (8)

where D_{ac} is the acoustic deformation potential, \rho is the material density and \alpha is the non-parabolicity coefficient. The formula clearly shows that the acoustic scattering increases with temperature.
Piezoelectric scattering
The second type of electron scattering by acoustic modes occurs when the displacements of the atoms create an
electric field through the piezoelectric effect. The piezoelectric scattering rate for an electron of energy E in an isotropic,
parabolic band has been discussed by Ridley [4] .
The expression for the scattering rate of an electron in a non-parabolic band structure retaining only the important terms can
be written as [3,5]:
R_{PZ}(k) = \frac{e^{2} K_{B} T K_{av}^{2}}{2\sqrt{2}\,\pi \hbar^{2} \epsilon_{s}} \sqrt{\frac{m^{*}}{E(1+\alpha E)}}\, (1+2\alpha E) \left[ \frac{(1+\alpha E)^{2} + \tfrac{1}{3}(\alpha E)^{2}}{(1+2\alpha E)^{2}} \right]    (9)

where \epsilon_{s} is the relative dielectric constant of the material and K_{av} is the dimensionless, so-called average electromechanical coupling constant.
Polar optical phonon scattering
The dipolar electric field arising from the opposite displacement of the negatively and positively charged atoms
provides a coupling between the electrons and the lattice which results in electron scattering. This type of scattering is called
polar optical phonon scattering and at room temperature is generally the most important scattering mechanism for electrons
in III-V .The scattering rate due to this process for an electron of energy E in an isotropic, non-parabolic band is [3-5]
 e2 2m*   1 1  1  2E 
PO
  
RPO k 
8         E 

(10)
FPO E , EN op , N op  1
Where E = E'±_wpo is the final state energy phonon absorption (upper case) and emission (lower case) and Nop is
the phonon occupation number and the upper and lower cases refer to absorption and emission, respectively. For small
electric fields, the phonon population will be very close to equilibrium so that the average number of phonons is given by the
Bose- Einstein distribution.
Impurity scattering
This scattering process arises as a result of the presence of impurities in a semiconductor. The substitution of an
impurity atom on a lattice site will perturb the periodic crystal potential and result in scattering of an electron. Since the mass
of the impurity greatly exceeds that of an electron and the impurity is bonded to neighboring atoms, this scattering is very
close to being elastic. Ionized impurity scattering is dominant at low temperatures because, as the thermal velocity of the
electrons decreases, the effect of long-range Coulombic interactions on their motion is increased. The electron scattering by
ionized impurity centers has been discussed by Brooks Herring [6] who included the modification of the Coulomb potential
due to free carrier screening. The screened Coulomb potential is written as
V(r) = \frac{e^{2}}{4\pi \epsilon_{0} \epsilon_{s}}\, \frac{\exp(-q_{0}r)}{r}    (11)

where \epsilon_{s} is the relative dielectric constant of the material and q_{0} is the inverse screening length, which under non-degenerate conditions is given by

q_{0}^{2} = \frac{n e^{2}}{\epsilon_{0} \epsilon_{s} K_{B} T}    (12)

where n is the electron density. The scattering rate for an isotropic, non-parabolic band structure is given by [3,5]

R_{im} = \frac{N_{i} e^{4} (1+2\alpha E)}{32\sqrt{2 m^{*}}\, \pi\, \epsilon_{s}^{2}\, (\gamma(E))^{3/2}} \left[ \ln(1+b) - \frac{b}{1+b} \right]    (13)

b = \frac{8 m^{*} \gamma(E)}{\hbar^{2} q_{0}^{2}}    (14)

where N_{i} is the impurity concentration and \gamma(E) = E(1+\alpha E).
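As a numerical illustration of equations (11)-(14), not taken from the paper, the sketch below evaluates the Brooks-Herring rate for assumed GaAs-like parameters; all material values are placeholders chosen only for the example, and the absolute permittivity ε0·εs is used in the prefactor, consistent with equation (11).

```python
# Illustrative evaluation of the Brooks-Herring ionized-impurity rate, Eqs. (11)-(14).
# All material parameters below are assumed GaAs-like values, for demonstration only.
import numpy as np
from scipy import constants as c

m_star = 0.063 * c.m_e          # effective mass (kg), assumed
eps_s = 12.9                    # relative dielectric constant, assumed
alpha = 0.61 / c.e              # non-parabolicity coefficient (1/J), assumed ~0.61 eV^-1
T = 300.0                       # temperature (K)
n = 1e22                        # free-electron density (m^-3), i.e. 1e16 cm^-3
N_i = 1e22                      # ionized impurity density (m^-3)

E = 0.025 * c.e                 # electron energy (J), roughly kT at room temperature
gamma = E * (1.0 + alpha * E)   # gamma(E) = E(1 + alpha*E)

# Eq. (12): inverse screening length squared (non-degenerate carriers)
q0_sq = n * c.e**2 / (c.epsilon_0 * eps_s * c.k * T)

# Eq. (14)
b = 8.0 * m_star * gamma / (c.hbar**2 * q0_sq)

# Eq. (13), with the absolute permittivity (epsilon_0 * eps_s)^2 in the denominator
prefactor = N_i * c.e**4 * (1.0 + 2.0 * alpha * E) / (
    32.0 * np.sqrt(2.0 * m_star) * np.pi * (c.epsilon_0 * eps_s) ** 2 * gamma ** 1.5
)
R_im = prefactor * (np.log(1.0 + b) - b / (1.0 + b))

print(f"b = {b:.3g},  R_im = {R_im:.3g} 1/s")
```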
III. RESULTS
We have performed a series of low-field electron mobility calculations for GaP, InP, GaAs and InAs. The low-field mobilities have been derived using the iteration method. The electron mobility is a function of temperature and electron concentration.
Figure 1 compares the temperature dependence of the electron mobility at different electron concentrations in bulk GaP, GaAs, InP and InAs. At 300 K the electron mobility of InAs is about 8117 cm2 V-1 s-1, while those of GaAs, InP and GaP are about 4883, 4702 and 760 cm2 V-1 s-1, respectively. The electron mobility decreases quickly as the temperature increases from 200 K to 500 K for all electron concentrations, because increasing the temperature also increases the phonon energy and population; this causes a strong interaction between the electrons and these phonons, which increases the electron scattering rate and finally decreases the electron mobility. Our calculation results show that the electron mobility of InAs is higher than that of the other three materials because of its small effective mass.
Fig. 1. Electron drift mobility (cm2/V·s) as a function of temperature in bulk InAs, GaAs, InP and GaP at a donor electron density of 10^16 cm^-3.
Figure 2 compares the electron mobility of InAs, GaAs, InP and GaP as a function of donor electron density at room temperature. Our calculation results again show that the electron mobility of InAs is higher than that of GaAs, InP and GaP.
Fig. 2. Electron drift mobility (cm2/V·s) as a function of donor electron density (cm^-3) in bulk InAs, GaAs, InP and GaP at 300 K.
Figure 3 compares the dependence of the electron mobility on the electron concentration at different temperatures in bulk GaP and InP. The mobility decreases as the electron concentration increases, because a higher concentration means more ionized impurity centres in the crystal; more electrons then move under the influence of the Coulomb potential of these impurity centres, which increases the electron scattering rate and finally decreases the electron mobility.
Fig. 3. Electron drift mobility (cm2/V·s) as a function of electron concentration in bulk InP and GaP at 200 K and 300 K.
Fig. 4. Electron drift mobility (cm2/V·s) as a function of temperature in bulk GaAs and InAs at ionized impurity densities of 10^16 and 10^17 cm^-3.
Figure 4 compares the temperature dependence of the electron mobility at different ionized impurity concentrations in bulk InAs and GaAs. Our calculation results show that the electron mobility of InAs is higher than that of GaAs.
IV. CONCLUSION
1. InAs has a higher electron mobility than GaAs, GaP and InP because its effective mass is smaller than that of all of them.
2. Ionized impurity scattering is an important factor in reducing the mobility of InAs, GaAs, InP and GaP at all temperatures.
REFERENCES
1. H. Arabshahi, "Comparison of SiC and ZnO field effect transistors for high power applications," Modern Phys. Lett. B, 23 (2009) 2533-2539.
2. J. Fogarty, W. Kong and R. Solanki, Solid State Electronics 38 (1995) 653-659.
3. C. Jacoboni and P. Lugli, The Monte Carlo Method for Semiconductor and Device Simulation, Springer-Verlag, 1989.
4. B. K. Ridley, Electrons and Phonons in Semiconductor Multilayers, Cambridge University Press, 1997.
5. C. Moglestue, Monte Carlo Simulation of Semiconductor Devices, Chapman and Hall, 1993.
6. D. Chattopadhyay and H. J. Queisser, Reviews of Modern Physics, 53, Part 1, 1981.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 3 (September 2012) PP: 57-65
Extended Modified Inverse Distance Method for Interpolation
Rainfall
Nafise Seyyednezhad Golkhatmi1, Seyed Hosein Sanaeinejad2, Bijan Ghahraman3, Hojat Rezaee Pazhand4
1 Graduate student of agricultural meteorology, Ferdowsi University of Mashhad, Water Engineering Group
2, 3 Ferdowsi University of Mashhad, Water Engineering Group
4 Islamic Azad University, Mashhad Branch, Department of Civil
Abstract––Local interpolation and regional analysis of rainfall are among the important issues of water resources and meteorology. There are several methods for this purpose. IDW is one of the classic methods for estimating precipitation; this interpolation method gives each point a weight based on its distance to the reference point. The IDW method has been modified in various ways in different research, and new parameters such as elevation have been added to it. This modification has two forms: the first uses the ratio of elevation to distance, and the second uses the inverse of the product of elevation and distance. This paper has three objectives. The first is to generalize the alignments of elevation and distance in MIDW. The second is to make the weights dimensionless (separately and in an integrated form) according to the relationship that is used. The third is to analyse the interpolation errors of daily rainfall regionally. A genetic algorithm is used to find the optimal answers. The results showed that the optimal answers depend primarily on the inverse effect of the product of altitude and distance (55%) and on a direct effect of altitude with an inverse effect of distance (45%). Furthermore, in 71% of the cases the integrated dimensionless form yields better results than the separated one. The impact of elevation was established in all MIDW methods, and the results showed that this parameter should be used in the interpolation of rainfall. The case study is the daily rainfall of the Mashhad plain catchment.
Keywords––Regionally interpolation, MIDW, Genetic algorithm, Mashhad.
I. INTRODUCTION
Local and areal estimation of rainfall is needed in projects and research that use point, areal and regional hydrological and climatological data, areal and regional water use and weather data. Ordinary methods for estimating this phenomenon include the arithmetic mean, Thiessen polygons, the rainfall gradient, Inverse Distance and isohyetal curves (Horton, 1923; Singh and Birsoy, 1975; Johansson, 2000). The finite element method (Hutchinson and Walley, 1972), Kriging (Vasiliades and Loukas, 2009), fuzzy Kriging (Bondarabadi and Saghafian, 2007), splines (Hutchinson, 1998) and neural networks (Rigol et al., 2001; Moghaddam et al., 2011) are relatively new methods for the spatial estimation of this phenomenon. The conventional methods can also be corrected so that acceptable results are achieved. One of these is the Inverse Distance (IDW) method, in which some researchers have made changes and which they call the Modified Inverse Distance method (MIDW) (Lo, 1992; Chang et al., 2005). Lo (1992) introduced MIDW such that the impact of the elevation difference in the IDW is considered as a factor; therefore, the weight of each station is the distance-to-elevation ratio with power k. This relation is suitable for mountainous areas. Chen and Liu (2012) applied the IDW method for the regional analysis of 46 rain gauge stations located in Taiwan. They also investigated the influence of the neighbourhood radius of the stations; the effective radii obtained were 10 to 30 km and the IDW power fluctuated between 1.5 and 4. Monthly and annual precipitation data were used in their research. Tomczak (1998) performed spatial interpolation of precipitation with the IDW method. He studied two parameters, the distance and the spherical radius, and optimized the parameters of the function with cross-validation. He then used the Jack-Knife method to determine the sampling-frequency skew, the standard error and the confidence intervals for each station. He concluded that this method can obtain satisfactory results with respect to these two points.
Chang et al. (2005) combined the IDW method with fuzzy theory to interpolate missing data. A genetic algorithm was also employed to optimize the fuzzy membership function parameters. The daily rainfall of six rain gauge stations (located in a catchment area of 303 sq km in northern Taiwan) was used for this purpose. Their research investigated the interpolation error equation at each station separately. The estimated precipitation errors of this method are generally reported to be less than those of the Thiessen and arithmetic average methods. Lo (1992) added the ratio of distance to elevation, and Chang et al. (2005) added the inverse of the product of these two parameters, into the IDW method. The regional rainfall analysis is affected by the elevation; it has a negative influence in areas adjacent to water resources (coastal and forest) and a positive influence in remote areas (mountainous regions). It is possible to modify the IDW method by changing the places of its parameters and their powers: we can put each one in the numerator or the denominator of the IDW coefficients, separately or together. So we have four states, which are more general than the modified methods of Chang et al. and Lo. We follow three goals in this research. First, we compare these four methods; the various MIDW approaches optimize the MIDW interpolation equation locally (Moghaddam et al., 2011; Tomczak, 1998; Chen and Liu, 2012). The second goal is modifying the IDW method to optimize the interpolation equation regionally in the Mashhad basin area (regional interpolation). Third, we compare two dimensionless weighting methods for the MIDW coefficients. So, in general, we have eight separate models for regional MIDW interpolation. Our case study is the Mashhad basin in Iran.
II. MATERIALS AND METHODS
1.2 Study area and data
The study region is the catchment area of the Mashhad plain, with an area of 9909.4 sq km, between longitudes 58° 20′ and 60° 8′ East and latitudes 36° 0′ and 37° 5′ North in the north east of Iran. The Mashhad plain is a basin with irregular topography; the climate is arid to semi-arid, and its north and south are bounded by two mountain ranges, Hezar-Masjed and Binaloud, respectively. The number of rain gauge stations in this basin and its adjacent area is 49 (each with 16 years of data, 1993-2009). These stations are run by the Iran Ministry of Energy. 71 days of daily rainfall data are used. The geographical positions and characteristics of the stations are given in Figure (1) and Table (1), respectively. SQL Server 2008 and VB.NET 2010 were used for the descriptive statistics.
Figure (1) - Catchment location Mashhad plain with rain gauge stations in and around
2.2 Methods of Modified Inverse Distance (MIDW)
The Modified Inverse Distance methods are obtained by changing the conventional Inverse Distance (IDW) weight. The distances between stations are used as the weights in IDW, while other factors such as elevation also have an important influence on rainfall and can be used as a weight in IDW. The arrangement of elevations and distances in the IDW leads to a variety of MIDW forms, as follows. Lo (1992) introduced the MIDW according to equation (1). The weight of each station, (d_{xi}/\Delta H_{xi})^{k}, is the ratio of the distance d_{xi} to the elevation difference \Delta H_{xi}; thus the impact of the distance component is assumed to be direct and the impact of the elevation component inverse, with an identical power for both.

p_x = \frac{\sum_{i=1}^{N} (d_{xi}/\Delta H_{xi})^{k}\, p_i}{\sum_{i=1}^{N} (d_{xi}/\Delta H_{xi})^{k}}, \quad k > 0    (1)

where p_x is the point rainfall estimate, d_{xi} is the distance of each station from the base station, N is the number of stations used, k is the power parameter and \Delta H_{xi} is the elevation difference between the base station and the ith station.
Chang et al. (2005) considered the inverse of the product of elevation and distance as the weight in MIDW. They considered different powers for the distance (m) and the elevation (n). The interpolation error equation in this method is discussed locally (separately for each station). Formula (2) is the model of Chang et al. (2005).

P_x = \frac{\sum_{i=1}^{N} \dfrac{1}{\Delta h_{xi}^{\,n}\, d_{xi}^{\,m}}\, P_i}{\sum_{i=1}^{N} \dfrac{1}{\Delta h_{xi}^{\,n}\, d_{xi}^{\,m}}}, \quad m > 0,\ n > 0    (2)

where m and n are the powers (parameters), \Delta h_{xi} is the elevation difference between the base station and the ith station, and the other items are the same as in formula (1).
The model defined in this study is the general pattern of equation (2), in which the distance and the elevation difference are allowed to have different alignments (positive or negative signs of m and n), so that their effect is direct or inverse. In other words, both appear in the numerator, both in the denominator, or one in the denominator and the other in the numerator, with different powers. Estimating the optimal m and n parameters is needed, and the genetic algorithm is suitable for this purpose.
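A compact sketch of this generalised weighting is given below. It is illustrative only (the authors' implementation is in MATLAB and VB.NET); the function name midw_estimate and the station values are invented, and the signs of m and n select the direct or inverse role of distance and elevation difference.

```python
# Sketch of the generalised MIDW estimate: weight_i = d_i**m * h_i**n, where the signs of
# m and n decide whether distance/elevation act directly (positive) or inversely (negative).
import numpy as np

def midw_estimate(rain, dist, dh, m, n):
    """Estimate rainfall at the target point from neighbouring stations.

    rain : observed rainfall at the N neighbouring stations
    dist : (normalised) distances from the target point, all > 0
    dh   : (normalised) elevation differences from the target point, all > 0
    m, n : signed powers of distance and elevation difference
    """
    rain, dist, dh = map(np.asarray, (rain, dist, dh))
    weights = dist ** m * dh ** n
    return float(np.sum(weights * rain) / np.sum(weights))

# Example with three hypothetical stations; m = -2, n = -1 reproduces the "both inverse" alignment of Eq. (2).
print(midw_estimate(rain=[4.0, 7.5, 2.0], dist=[1.5, 3.0, 6.0], dh=[2.0, 1.2, 4.0], m=-2, n=-1))
```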
3.2 Genetic Algorithm
The genetic algorithm is a probabilistic search method that follows the natural biological process of evolution. The method is based on the Darwinian theory of evolution; it encodes the potential solutions of a problem as chromosomes in a simple format and then applies combination operators to a random population of individuals, each one in the form of a chromosome (Kia, 2009). A genetic algorithm can produce many generations. The initial population is generated randomly in the desired interval. A new set of approximations is produced in each generation by selecting the best members, based on their fitness in the problem domain, and multiplying them with operators taken from natural genetics. This process ultimately leads to the evolution of population members that are better adapted to their environment than the original parents.
The members of the population, in other words the current approximations of the optimal solution, are encoded as strings (chromosomes) built from the variables of the problem. The objective function provides an index for selecting suitable pairs of chromosomes in the population for mating. After the initial generation, the next generation is produced using genetic operations such as elitism, mating (crossover) and mutation. Each member is ranked by the value of its compatibility with the objective function. Members that have a higher consistency than the others have a higher chance of being selected and moved to the next generation, and vice versa (elitism). The mating (crossover) operator combines the genes of chromosomes, but it does not necessarily act on all the strings of the population; rather, it is applied with probability p. The mutation operator is performed by randomly selecting chromosomes, randomly selecting one or more genes and substituting their opposite values; the mutation also acts with a prior probability (pm). The cycle then continues through the next generations, and the genetic algorithm process ends when the optimal solutions are reached (Shabani Nia and Saeed Nia, 2009; Kia, 2009). The genetic algorithm is used here to estimate and optimize the variables of the interpolation equations, and because of the time savings it offers. MIDW is an interpolation method with two parameters, m and n, where m is the power of the distance factor and n is the power of the elevation factor. The two parameters (m and n) are generated randomly, and the error function is optimized by repeating the algorithm, producing better m and n in each generation. The objective function in MIDW is to minimize the regional mean absolute error (MAE) (Equation 9), that is, the mean of all local absolute errors. The chromosomes are the two parameters m and n (20 chromosomes in each generation) and are generated randomly according to the GA operators. The two fittest chromosomes in each generation are transferred directly to the next generation (elitism). The genetic algorithm has been programmed in MATLAB 7.8.0 (R2009). It is necessary to preprocess the data before the analysis, as explained in the next section.
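The following self-contained sketch shows the kind of genetic algorithm loop described above, applied to the two parameters (m, n). It is only an illustration of the scheme: the authors' code is in MATLAB 7.8.0, whereas this sketch is in Python, uses synthetic station data, and takes a simple leave-one-out mean absolute error as the fitness.

```python
# Illustrative genetic algorithm over the two MIDW parameters (m, n); synthetic data, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "stations": pairwise normalised distances / elevation differences and one day's rainfall.
N = 10
dist = rng.uniform(1.0, 10.0, size=(N, N)); np.fill_diagonal(dist, 1.0)
dh = rng.uniform(1.0, 10.0, size=(N, N)); np.fill_diagonal(dh, 1.0)
rain = rng.uniform(0.0, 20.0, size=N)

def loo_mae(m, n):
    """Leave-one-out mean absolute error of the MIDW estimate with signed powers m, n."""
    errs = []
    for i in range(N):
        j = np.arange(N) != i                      # all stations except the target one
        w = dist[i, j] ** m * dh[i, j] ** n
        errs.append(abs(rain[i] - np.sum(w * rain[j]) / np.sum(w)))
    return float(np.mean(errs))

POP, GEN, PM = 20, 100, 0.2                        # 20 chromosomes, 100 generations, mutation probability
pop = rng.uniform(-16.0, 16.0, size=(POP, 2))      # each chromosome = (m, n)

for gen in range(GEN):
    fitness = np.array([loo_mae(m, n) for m, n in pop])
    order = np.argsort(fitness)
    elite = pop[order[:2]].copy()                  # elitism: the best two pass unchanged
    parents = pop[order[:POP // 2]]
    children = []
    while len(children) < POP - 2:
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(2) < 0.5, a, b)          # uniform crossover
        if rng.random() < PM:                                # mutation on one gene
            child[rng.integers(2)] = rng.uniform(-16.0, 16.0)
        children.append(child)
    pop = np.vstack([elite, children])

best_m, best_n = pop[0]
print(f"best (m, n) = ({best_m:.2f}, {best_n:.2f}), MAE = {loo_mae(best_m, best_n):.3f} mm")
```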
4.2 Screening and normalization of data and dimensionless weights
The available data are often mixed with errors. These errors arise from false recording, incorrect transfer, system failures, etc. The data must therefore first be screened. The elevation difference and the distance should then be normalized to the same scale in MIDW (Chang et al., 2006; Hebert and Keenleyside, 1995). Equations (3) and (4) are used for the normalization. This conversion maps the elevation difference and the distance into the range 1 to 10.

d' = 1 + 9\,\frac{d - d_{min}}{d_{max} - d_{min}}    (3)

h' = 1 + 9\,\frac{h - h_{min}}{h_{max} - h_{min}}    (4)
- The dimensionless weights are obtained with two methods: integrated (Equation 5) and separated (Equation 6). If m and n are assumed to be in the denominator (both acting inversely), the integrated and separated methods are as in equations (5) and (6); the other arrangements can be written similarly.
- The integrated dimensionless form: the dimensionless factor is the sum of the combined inverse products of distance and elevation (equation 5) (Chang et al., 2005).
\sum_{i=1}^{N} \frac{1}{d_{pi}^{\,m}\, h_{pi}^{\,n}}, \quad m > 0,\ n > 0    (5)

- The separated dimensionless form: the dimensionless factor is the product of the separate sums over distance and elevation (equation 6) (Chang et al., 2005).

\sum_{i=1}^{N} \frac{1}{d_{pi}^{\,m}} \cdot \sum_{i=1}^{N} \frac{1}{h_{pi}^{\,n}}, \quad m > 0,\ n > 0    (6)
The final equations for MIDW are (7) and (8) in the case that m and n are in the denominator. The rest of the functions for
other states of m and n are similar.
P_x = \sum_{i=1}^{N} W_{d_{Pi},h_{Pi}}\, P_i = \sum_{i=1}^{N} \frac{\dfrac{1}{d_{Pi}^{\,m}} \cdot \dfrac{1}{h_{Pi}^{\,n}}}{\sum_{i=1}^{N} \dfrac{1}{d_{Pi}^{\,m}} \cdot \dfrac{1}{h_{Pi}^{\,n}}}\, P_i, \quad m > 0,\ n > 0    (7)

P_x = \sum_{i=1}^{N} W_{d_{Pi},h_{Pi}}\, P_i = \sum_{i=1}^{N} \frac{\dfrac{1}{d_{Pi}^{\,m}} \cdot \dfrac{1}{h_{Pi}^{\,n}}}{\left(\sum_{i=1}^{N} \dfrac{1}{d_{Pi}^{\,m}}\right)\left(\sum_{i=1}^{N} \dfrac{1}{h_{Pi}^{\,n}}\right)}\, P_i, \quad m > 0,\ n > 0    (8)
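To make the difference between the integrated denominator of equation (7) and the separated denominator of equation (8) concrete, a small hedged sketch with invented numbers is given below (it follows the reconstructed form of the equations above for the case where both m and n act inversely).

```python
# Sketch contrasting the integrated (Eq. 7) and separated (Eq. 8) dimensionless MIDW weights.
# Station values are invented for illustration only.
import numpy as np

d = np.array([1.5, 3.0, 6.0])      # normalised distances to the target point
h = np.array([2.0, 1.2, 4.0])      # normalised elevation differences
p = np.array([4.0, 7.5, 2.0])      # rainfall observed at the three stations (mm)
m, n = 2.0, 1.0

raw = 1.0 / (d ** m * h ** n)

# Eq. (7): integrated denominator, the sum of the combined inverse products (weights sum to 1)
p_integrated = np.sum(raw / np.sum(raw) * p)

# Eq. (8): separated denominator, the product of the two separate sums (weights need not sum to 1)
denom_sep = np.sum(1.0 / d ** m) * np.sum(1.0 / h ** n)
p_separated = np.sum(raw / denom_sep * p)

print(f"integrated estimate = {p_integrated:.3f} mm, separated estimate = {p_separated:.3f} mm")
```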
Eight error functions arise according to the above descriptions, i.e. the alignment of elevation and distance and the type of dimensionless weighting. One of them is given in equation (9) (e.g., an inverse effect of distance and a direct effect of elevation with the separated dimensionless method).

E = \sum_{i=1}^{N} e_i = \sum_{i=1}^{N} \left| p_i - \frac{\sum_{j=1,\, j \neq i}^{N} \left( \Delta h_{xj}^{\,n} / d_{xj}^{\,m} \right) p_j}{\sum_{j=1,\, j \neq i}^{N} \Delta h_{xj}^{\,n} \cdot \sum_{j=1,\, j \neq i}^{N} \left( 1 / d_{xj}^{\,m} \right)} \right|, \quad m > 0,\ n > 0    (9)

where p_i is the observed rainfall at the ith station and the second term is the estimated rainfall at that station; the other items are the same as in formula (1). The error function for the other seven cases is formulated similarly to (9).
III. RESULTS AND DISCUSSION
The data analysed in this article are the daily precipitation of the 49 stations of the Mashhad plain catchment for 16 years. A sample of 71 days was randomly selected and analysed. Initially, suspect precipitation values were identified and the data were screened. The data were then normalized using equations (3) and (4), and the analysis is based on these normalized data. The purpose of this study is to optimize the interpolation function (MIDW) considering four general substitutions of distance and elevation, in order to explore their direct or inverse impact on daily rainfall interpolation; the multiple alignments are obtained by changing the signs of m and n (the powers of distance and elevation). The m and n parameters are optimized with the genetic algorithm. The weights of the MIDW equation are made dimensionless with equations (5) and (6). So, in order to determine the best interpolation formula, eight states were studied: four states (substitutions of distance and elevation) were compared to determine the best MIDW relationship, and two states for the best dimensionless form. The genetic algorithm was used to determine the optimal parameters because of the high volume of calculations. The work is as follows.
- Determining the required number of generations
First, the number of generations necessary to achieve the optimal response was determined. Ten days were randomly selected and tested with 5 to 100 generations, divided into steps. The number of generations required to achieve the optimal response was generally less than 50. The largest number of generations required to achieve a stable optimal solution was 90, so 100 generations were chosen for safety. The maximum and minimum errors were calculated separately for each of the eight MIDW states; the error amplitude over the eight states was less than 10 mm. The lowest error of each method, together with its parameters, was chosen as the optimal error and the desired parameters.
- Effect of the various alignments and types of dimensionless weighting
The alignments of distance and elevation were applied in MIDW, and the algorithm was implemented and compared for each state. The comparison used the sign (negative or positive) of the power of the distance (m) and of the elevation difference (n) as the index. Moreover, the weights were made dimensionless in the two ways of equations (5) and (6). The results are shown in Table (2); they indicate that all four alignments may not be necessary. The different states are as follows:
A - Dimensionless with equation (5)
The roles of the distance and the elevation are both inverse in 54% of the cases (the method of Chang et al.); in 31% of the cases the influence of the distance is inverse and the influence of the elevation is direct; in 10% of the cases the influence of the elevation is inverse and the influence of the distance is direct (the Lo method); and in 6% of the cases the influence of the distance and the elevation is direct. All four alignments are therefore needed, with different importance, for this dimensionless form. The distance and the elevation never had zero impact on the optimal solution.
B - Dimensionless with equation (6)
The influence of the elevation difference and the distance was inverse in 55% of the cases (the method of Chang et al., 2005), and in the other 45% of the cases the influence of the elevation was inverse and the influence of the distance was direct (the method of Lo, 1992) (Table 2). The other two alignments gave no optimal solution in any case. The impact of the distance was not eliminated in any equation, although in the separated dimensionless form m was zero in four cases. A comparison of the two methods showed that in 87% of the cases equation (5) (the integrated dimensionless weighting of distance and elevation) gives the optimal solution, while the separated dimensionless form gives the optimal solution in 13% of the cases. The best regional interpolation error is calculated with equation (9), and the regional average error is shown in Table (4). The analysis of the error of the regional interpolation equation for each day shows that the lowest and highest errors are 1.3 and 8.2 mm, respectively (Table 4). The regional daily mean and standard deviation of the errors (71 days) are 4 and 1.58 mm, respectively, with a coefficient of variation of 39.5%. These results indicate that the accuracy of the interpolation equation is relatively good.
- Analysing the range of m and n for the eight states of the regional interpolation equations
The changes of m and n were evaluated for each of the eight methods. The results showed that the range of variation differs for each alignment of elevation and distance and for each dimensionless method (Table 3). If the influence of the distance and the elevation is inverse (in both dimensionless methods), the interval of m and n is [0, 16]. If the effect of the distance is inverse and the effect of the elevation is direct, m ranges from 1.14 to 15.98 and n from 2.09 to 15.98. When the effect of the elevation is inverse and the effect of the distance is direct (in both dimensionless methods), n changes from 0 to 4, and the change of m depends on the dimensionless method: for the integrated dimensionless form m varies between 2.9 and 16, and for the separated dimensionless form between 0 and 3.07. More detail is given in Table (3).
IV. CONCLUSION
The MIDW interpolation method adds elevation as a weight to the IDW equations for precipitation. So far, two forms of MIDW have been discussed: the first is based on the ratio of distance to elevation (with a constant power k), and the second on the inverse of distance and elevation with separate powers m and n. This study extends MIDW to its general form by considering eight weights, which include the two previous forms: four alignments of distance and elevation combined with two methods of making the weights dimensionless (separated and integrated), with m and n denoting the powers of distance and elevation in the models. A genetic algorithm was used to optimize the parameters of the MIDW interpolation equation. Daily rainfall data from 49 stations of the Mashhad plain catchment were used. The results show that the integrated dimensionless weights contribute 87% and the separated dimensionless weights 13% of the optimal solutions across the different alignments of elevation and distance. Moreover, it is not necessary to consider all four alignments of elevation and distance for the study area. In the integrated method the roles of distance and elevation are both inverse in 54% of the cases (Equation 2); in 31% of the cases the role of distance is inverse and the role of elevation is direct, in 10% of the cases the role of distance is direct and the role of elevation is inverse (Equation 1), and in 6% of the cases both are direct. In the separated dimensionless form, the roles of distance and elevation are both inverse in 55% of the cases, and in 45% of the cases the role of distance is direct and the role of elevation is inverse; the two other alignments did not yield good solutions and were excluded from MIDW. Therefore, the search for the best regional interpolation function should not be limited to a specific alignment; studying the six remaining MIDW cases makes it possible to reach the optimal solutions. No fixed range can be assigned to the parameters m and n, since large swings are observed in their intervals over the eight MIDW states. The regional daily average error and standard deviation are 4 and 1.58 mm respectively, with a coefficient of variation of 39.5%. The results show that the accuracy of the interpolation equation is relatively good, and that using the MIDW method with six alignments of distance and elevation and two dimensionless forms is effective for reaching optimal results.
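As a rough illustration of how the m and n powers can be fitted for one day of data, the sketch below runs a small genetic-algorithm-style search that minimizes the leave-one-out mean absolute error of the interpolation over the stations. The operators (tournament selection, blend crossover, Gaussian mutation) and the [0, 16] bounds are assumptions for illustration, not the paper's exact GA settings, and the function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def loo_error(stations, m, n, eps=1e-9):
    """Leave-one-out mean absolute error of the integrated-weight MIDW
    estimate over all stations for a single day (columns: x, y, z, rain)."""
    errs = []
    for i in range(len(stations)):
        test, rest = stations[i], np.delete(stations, i, axis=0)
        d = np.hypot(rest[:, 0] - test[0], rest[:, 1] - test[1]) + eps
        dz = np.abs(rest[:, 2] - test[2]) + eps
        w = 1.0 / (d ** m * dz ** n)
        errs.append(abs(np.sum(w * rest[:, 3]) / np.sum(w) - test[3]))
    return float(np.mean(errs))

def ga_fit(stations, pop_size=30, generations=40, bounds=(0.0, 16.0)):
    """Toy GA over (m, n): tournament selection, blend crossover, mutation."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(generations):
        fit = np.array([loo_error(stations, m, n) for m, n in pop])
        new = [pop[np.argmin(fit)]]                        # keep the best (elitism)
        while len(new) < pop_size:
            i, j = rng.integers(pop_size, size=2)
            a = pop[i] if fit[i] < fit[j] else pop[j]      # tournament parent 1
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if fit[i] < fit[j] else pop[j]      # tournament parent 2
            child = 0.5 * (a + b) + rng.normal(0, 0.5, 2)  # blend + Gaussian mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    best = pop[np.argmin([loo_error(stations, m, n) for m, n in pop])]
    return best   # fitted (m, n) for this day
```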
Table 1 - Geographic characteristics of the stations in and around the Mashhad catchment

Name | Elevation (m) | Long. | Lat.
Dizbad Olya | 1880 | 705752 | 3997566
Jong | 1700 | 731149 | 4073827
Tabarokabad | 1510 | 652515 | 4117177
Abghade Ferizi | 1380 | 685763 | 4044656
Heyhey | 1330 | 636830 | 4105749
Goshe Bala | 1580 | 728529 | 4066718
Eish Abade | 1346 | 664428 | 4018975
All | 1475 | 738067 | 4067314
Bar-Erie | 1560 | 652778 | 4036255
Chenaran | 1170 | 689618 | 4057478
Marusk | 1495 | 638479 | 4043758
Moghan | 1780 | 714164 | 4001945
Yengeje(Abshar) | 1680 | 612457 | 4076877
Jaghargh | 1420 | 708429 | 4021113
Cheshme Ali | 1540 | 684471 | 4004990
Sharif Abad | 1455 | 725852 | 3989818
Saghibaik | 1510 | 626453 | 4064663
Talghor | 1540 | 710308 | 4078053
Farhad Gerd | 1500 | 746352 | 3961364
Ghadirabad | 1175 | 676396 | 4074984
Golmakan | 1400 | 693844 | 4040097
Chakane Olya | 1780 | 631555 | 4078712
DahaneAkhlomad | 1460 | 674033 | 4051676
Hesar Dehbar | 1220 | 715841 | 4020953
Fariman | 1395 | 760631 | 3955446
Hendel Abad | 1210 | 768676 | 4035400
Radkan | 1210 | 679342 | 4075224
Kalate Menar | 990 | 791909 | 3981966
Dolatabad | 1510 | 694409 | 4035379
Mayamei | 1030 | 780656 | 4015387
Sadetorogh | 1240 | 729639 | 4006280
Bazengan | 1020 | 808510 | 4022543
Zoshk | 1880 | 697502 | 4024363
Bahmanjan Olya | 1340 | 675941 | 4086234
EdareMashhad | 990 | 731039 | 4021956
Chahchahe | 479 | 797692 | 4060098
Sar asiyabe Shandiz | 1270 | 709820 | 4031347
Gharetikan | 520 | 784601 | 4079834
Androkh | 1200 | 738131 | 4051588
Darband Kalat | 970 | 742634 | 4098004
Sade karde | 1300 | 738455 | 4056330
Archangan | 745 | 731086 | 4110734
Mareshk | 1870 | 727140 | 4077931
Hatamghale | 490 | 709790 | 4132161
Olang Asadi | 900 | 752266 | 4015822
Kabkan | 1435 | 669345 | 4124444
Bande Saroj | 1310 | 713617 | 4067555
LayenNow | 876 | 721485 | 4112761
Bolghor | 1920 | 731891 | 4081022
Table 2 - Effect of the alignment of elevation and distance in MIDW with dimensionless weights

Alignment | Integrated (equation 5) | Separated (equation 6)
Elevation and distance is inverse | 54.0% | 55.0%
Elevation is direct and distance is inverse | 31.0% | 0.0%
Elevation and distance is direct | 6.0% | 0.0%
Elevation is inverse and distance is direct | 10.0% | 45.0%
Table 3 - Range of m and n in the different MIDW methods

No. | Alignment | Integrated (equation 5) | Separated (equation 6)
1 | Elevation and distance is inverse | 1.14 ≤ m ≤ 15.98, 0.21 ≤ n ≤ 15.98 | 0 ≤ m ≤ 13.78, 1.04 ≤ n ≤ 15.98
2 | Elevation is direct and distance is inverse | 2.09 ≤ m ≤ 15.98, 0.25 ≤ n ≤ 15.98 | -
3 | Elevation and distance is direct | 1.59 ≤ m ≤ 15.98, 8.06 ≤ n ≤ 15.98 | -
4 | Elevation is inverse and distance is direct | 2.98 ≤ m ≤ 15.98, 0.04 ≤ n ≤ 3.56 | 0 ≤ m ≤ 3.07, 1.15 ≤ n ≤ 3.15
Table 4 - Regional average absolute interpolation error of daily rainfall

Date (Shamsi) | m | n | Sign of m & n | Mean error (mm) | Date (Shamsi) | m | n | Sign of m & n | Mean error (mm)
1372/1/15 | 1.313 | 1.297 | 1 | 4.219 | 1380/2/1 | 3.359 | 1.75 | 1 | 3.50
1372/1/7 | 1.188 | 1.344 | 1 | 7.009 | 1380/2/11 | 5.422 | 1.047 | 2 | 3.94
1372/12/11 | 4.656 | 2.156 | 1 | 2.149 | 1380/2/23 | 4.797 | 0.5 | 2 | 6.40
1372/12/21 | 5.719 | 3.094 | 1 | 5.309 | 1381/1/25 | 14.266 | 2.969 | 2 | 4.08
1373/1/2 | 4.844 | 15.984 | 3 | 2.938 | 1381/1/29 | 14.984 | 10.359 | 1 | 4.43
1373/1/29 | 15.984 | 0.797 | 4 | 3.098 | 1381/12/22 | 3.953 | 15.984 | 2 | 2.99
1373/12/27 | 9 | 2.313 | 1 | 6.479 | 1381/2/19 | 3.938 | 0.531 | 1 | 6.70
1373/2/5 | 9.656 | 7.563 | 2 | 2.540 | 1381/3/4 | 4.484 | 0.406 | 1 | 3.27
1374/1/13 | 1.594 | 15.984 | 3 | 2.140 | 1382/1/1 | 2.734 | 1.625 | 2 | 4.69
1374/1/22 | 11.063 | 2.469 | 1 | 3.194 | 1382/1/17 | 8.266 | 0.047 | 4 | 4.63
1374/12/13 | 11.328 | 3.5 | 1 | 6.206 | 1382/12/25 | 2.281 | 0.844 | 1 | 5.29
1374/12/18 | 4.656 | 8.063 | 3 | 6.579 | 1382/2/9 | 6.5 | 3 | 2 | 3.82
1374/12/24 | 15.984 | 6.813 | 1 | 3.979 | 1383/1/6 | 13.109 | 1.453 | 1 | 4.59
1374/2/31 | 4.203 | 1.969 | 1 | 1.864 | 1383/12/24 | 8.219 | 0.813 | 1 | 4.62
1375/1/11 | 14.219 | 0.453 | 2 | 2.575 | 1383/2/21 | 5.25 | 3.313 | 1 | 2.54
1375/1/30 | 5.016 | 1.922 | 2 | 1.848 | 1384/1/19 | 3.484 | 0.922 | 1 | 8.18
1375/1/6 | 4.094 | 2.484 | 1 | 3.423 | 1384/12/10 | 15.875 | 8.422 | 1 | 3.14
1375/12/10 | 4.281 | 1.172 | 2 | 4.163 | 1385/1/11 | 2.438 | 1.781 | 1 | 1.28
1375/2/11 | 5.266 | 2.766 | 2 | 3.396 | 1385/1/2 | 7.266 | 2.813 | 1 | 4.68
1375/3/12 | 7.125 | 12.656 | 3 | 2.310 | 1385/1/26 | 6.984 | 2.609 | 1 | 2.71
1376/1/6 | 15.859 | 1.172 | 2 | 2.861 | 1385/1/8 | 1 | 1.703 | 4 | 5.98
1376/12/11 | 2.203 | 6.047 | 1 | 5.720 | 1385/12/25 | 6.047 | 0.219 | 1 | 5.68
1376/2/11 | 1.719 | 0.375 | 1 | 4.222 | 1385/2/26 | 3.25 | 0.609 | 1 | 5.95
1376/2/4 | 3.766 | 15.984 | 2 | 5.335 | 1386/1/15 | 4.516 | 3.188 | 1 | 3.65
1377/1/9 | 15.984 | 3.125 | 4 | 2.427 | 1386/2/15 | 5 | 0.359 | 2 | 2.23
1377/12/11 | 1.547 | 1.625 | 1 | 6.782 | 1387/1/12 | 0.609 | 1.563 | 1 | 7.80
1377/12/24 | 3.25 | 0.391 | 2 | 3.992 | 1387/1/22 | 6.172 | 0.781 | 2 | 3.26
1377/2/15 | 8.688 | 0.656 | 1 | 4.580 | 1387/1/3 | 4.875 | 3.563 | 4 | 2.61
1378/12/13 | 4.25 | 1.328 | 1 | 2.880 | 1387/1/31 | 8.75 | 1.438 | 1 | 3.42
1378/3/24 | 8.094 | 0.234 | 1 | 3.289 | 1387/12/28 | 2.094 | 3.547 | 2 | 4.21
1379/1/6 | 7.422 | 1.891 | 4 | 5.841 | 1387/2/29 | 6.984 | 2.188 | 1 | 1.65
1379/2/17 | 11.453 | 1.406 | 1 | 2.608 | 1388/1/13 | 7 | 4.438 | 2 | 2.54
1380/1/11 | 8.813 | 5.25 | 2 | 4.718 | 1388/12/8 | 0.156 | 1.313 | 4 | 3.87
1380/1/23 | 2.25 | 0.25 | 2 | 2.441 | 1388/2/17 | 0.828 | 1.531 | 4 | 6.20
1380/1/3 | 4.609 | 1.016 | 1 | 3.078 | 1388/3/11 | 15.984 | 2.297 | 2 | 3.08
1380/12/15 | 6.688 | 0.219 | 1 | 3.255 | | | | |
Sign of m and n: 1: m ≥ 0, n ≥ 0; 2: m < 0, n ≥ 0; 3: m ≥ 0, n < 0; 4: m < 0, n < 0
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September 2012) PP: 47-57
Multicellular Multilayer Plate Model: Numerical Approach and
Phenomenon Related To Blockade by Shear
Mohamed Ibrahim1, Abderrahmane El Harif2
1,2 Laboratory of Mechanics (LM), Department of Physics, Faculty of Sciences-Rabat, P.O. Box 1014, Morocco
Abstract--A numerical model based on the flexibility method for a multilayer beam finite element has been developed, and contributions to its recent developments are being made at the Laboratory of Mechanics, Department of Physics, Faculty of Sciences, Rabat (Morocco). Experimental results and numerical calculations were concordant in the case of quasi-static loading; these results were based on a finite element approach coupled with a non-linear model [23]. In this paper we first present results, based on the finite element approach, for the analysis of a square plate in bending under concentrated and uniform load, clamped or simply supported on its contour. We then present some results that highlight the problem of shear locking. The numerical model is based on a three-dimensional representation of the structure, seen here as a set of multicellular multilayer plate finite elements for the matrix (concrete) and a set of fibre finite elements for the reinforcement. The results obtained confirm the ability of these tools to correctly represent the quasi-static behaviour of such a complex system and support the further development of the numerical tool.
Keywords--multicellular multilayer plate, numerical approach, flexibility-based finite element
I. INTRODUCTION
The phenomenon of shear locking (the appearance of a parasitic stiffness) is a numerical problem that has drawn the attention of many researchers over the past twenty years, and an abundance of solutions has been discussed in [3, 9, 10, 11, 12, 19, 20, 22]. One way to avoid the appearance of shear locking, and thus make the solution independent of the slenderness ratio (length L / thickness h), is to compute the terms of the stiffness matrix by integrating the bending terms exactly while under-integrating the terms relating to shear [4, 5, 6, 8, 13, 14, 15, 16, 17, 21, 22]. To address this numerical phenomenon and propose a more efficient solution, we developed a model based on the flexibility method [23]. The model is formulated on the basis of the force method with an exact interpolation of the stresses [18], which makes it possible to calculate the flexibility matrix, the inverse of the stiffness matrix. The purpose of this study is the modelling of the structural response of load-bearing shear walls subjected to seismic effects, using a comprehensive three-dimensional numerical model based on a nonlinear finite element approach coupled with a damage model developed for the behaviour of concrete. In this second paper, drawing on the results of the first article and those of [1, 2, 7], we present only some results related to the analysis of a homogeneous square plate in bending subjected to a concentrated or uniform load.
II. MODELING
Complementary to the tests and their interpretation, numerical modelling of this type of situation has several advantages. It first requires developing an ambitious and effective model capable of taking into account the different aspects of this complicated problem, including quasi-static and dynamic loading. Once such a model is judged satisfactory, it constitutes a way to complement the experimental measurements by providing new data. As such, it should contribute to a better understanding of the phenomena involved and further provide a basis for the development of dimensioning methods.
1. METHODOLOGY
An immediate challenge before addressing the simulation of such problems is the choice of the right methodology. The philosophy retained here is to bring the contribution of civil engineering research into a context of operational engineering. The choice was made to use multilayer, multicell plate finite elements with three nodes and five degrees of freedom per node.
A realistic numerical prediction of the structural response of such a structure requires a rigorous three-dimensional geometric model of the system components. This model and its numerical analysis are implemented in the finite element code RE-FLEX.
The plate is then meshed by embedding its geometry in a full mesh adapted to the different zones of the problem: it is discretized into layers through its thickness h and into cells along x and y over its surface [Fig. 1].
Figure 1 - Finite Element Model: Efforts resulting in a plate
where $N_{xx}$, $N_{yy}$ represent the normal forces and $N_{xy}$ the in-plane shear force; $M_{xx}$, $M_{yy}$ represent the bending moments and $M_{xy}$ the twisting moment; $T_x$, $T_y$ are the transverse shear forces.
2. CALCULATING THE ELEMENTARY FLEXIBILITY
The exact interpolation functions are obtained by writing the various external forces at any point of the finite element, which here are the internal forces of the structure, in terms of the reduced nodal efforts. We thus determine the matrices representing the exact interpolation functions of the efforts. The external forces of the finite element are assumed to be of the same nature as the internal forces of the same element. One way to calculate the external forces of the finite element is to interpolate them linearly from the equilibrium equations of the system. In our study the membrane efforts are assumed constant at every point of the finite element and the moments vary linearly as functions of the element variables (x and y in the case of a plate). Thus, for a triangular plate finite element IJK, we obtain the following relationships:
- The matrix that relates the membrane and bending efforts at any point to the reduced efforts is defined by:

$$\{\Sigma_{cmf}\} = \{N, M\}^T = \{N_{xx}, N_{yy}, N_{xy}, M_{xx}, M_{yy}, M_{xy}\}^T = [D_{cmf}(\xi,\eta)]\,\{\Gamma_I^r\} \qquad (1)$$

- The matrix that relates the shear efforts at any point to the reduced efforts is defined by:

$$\{T\} = \{T_x, T_y\}^T = [b_{ct}]\,\{\Gamma_I^r\} \qquad (2)$$

$$[D_{cmf}(\xi,\eta)] =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & m_i & 0 & 0 & m_j & 0 & 0 & m_k & 0 & 0 \\
0 & 0 & 0 & 0 & m_i & 0 & 0 & m_j & 0 & 0 & m_k & 0 \\
0 & 0 & 0 & 0 & 0 & m_i & 0 & 0 & m_j & 0 & 0 & m_k
\end{bmatrix} \qquad (3)$$

with $m_i = 1 - \xi - \eta$, $m_j = \xi$ and $m_k = \eta$.
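As a small illustration of how the interpolation matrix of equation (3) can be assembled numerically, the sketch below builds it with the layout reconstructed above (constant membrane forces, linearly interpolated moments ordered as in the reduced effort vector of equation (5)); the function name is hypothetical.

```python
import numpy as np

def d_cmf(xi, eta):
    """Interpolation matrix [D_cmf(xi, eta)] of equation (3): maps the 12
    reduced nodal efforts {N1, N2, N3, MxxI, MyyI, MxyI, MxxJ, ..., MxyK}
    to the 6 membrane/bending efforts {Nxx, Nyy, Nxy, Mxx, Myy, Mxy}."""
    mi, mj, mk = 1.0 - xi - eta, xi, eta          # linear triangle functions
    D = np.zeros((6, 12))
    D[0, 0] = D[1, 1] = D[2, 2] = 1.0             # membrane forces: constant
    for row, offset in zip(range(3, 6), range(3)):
        D[row, 3 + offset] = mi                   # contribution of node I
        D[row, 6 + offset] = mj                   # contribution of node J
        D[row, 9 + offset] = mk                   # contribution of node K
    return D
```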
 0 0 0 b1 0
0 b2

bct    0 0 0
b1 
b2
b3
0
b4 b5
0
b1
0
b4
b3
b6
0
b6 

b5 
(4)
y J  yK
xK  x J
yI  yJ
x J  xI
yK  yI
xI  xK
, b2 
, b3 
, b4 
, b5 
, b6 
S1
S1
S1
S1
S1
S1
s1  yI ( xK  xJ )  yJ ( xI  xK )  yK ( xJ  xI ) is twice the area of the triangle IJK
   N , N , N , M , M , M , M
r T
I
1
2
3
xxI
yyI
xyI
xxJ
, M yyJ , M xyJ , M xxK , M yyK , M xyK

(5)
where $\{\Gamma_I^r\}$ is the vector of reduced nodal efforts, and $[D_{cmf}(\xi,\eta)]$ and $[b_{ct}]$ are the matrices representing the exact interpolation functions of the membrane-bending and shear efforts respectively, in the absence of distributed loads. The stiffness matrix is simply the inverse of the flexibility matrix. $\{\Sigma_{cmf}\}$ and $\{T_x, T_y\}$ are respectively the vector of normal forces, membrane efforts, bending moments and twisting moment, and the vector of shear forces applied to the cell.
The direct connection of the finite element provides the stiffness matrix of the elementary model in local coordinates, expressed by:

$$[K_e] = [R]^T [F_{flx}^e]^{-1} [R] \qquad (6)$$

$$[F_{flx}^e] = [F_{flx}^{pla}(cmf)] + [F_{flx}^{pla}(cis)] \qquad (7)$$

where $[F_{flx}^{pla}(cmf)]$ and $[F_{flx}^{pla}(cis)]$ are respectively the membrane-bending and shear flexibility matrices of the plate, and $[R]$ is the transition matrix to the system without rigid-body modes, with five degrees of freedom per node, whose force field is given by equation (8) and the corresponding displacements by equation (9):

$$\{F^{plaq}\} = [R]^T \{\Gamma_I^r\} \qquad (8)$$

$$\{q\} = [R]\,\{u^e\} \qquad (9)$$

with $\{F^{plaq}\}$ the external forces exerted on a plate finite element (nodal loads equivalent to the same element) and $\{u^e\}$ the corresponding vector of nodal displacements, given by equation (10):
$$\{u^e\}^T = \{u_{0I}, v_{0I}, w_{0I}, \beta_{xI}, \beta_{yI}, u_{0J}, v_{0J}, w_{0J}, \beta_{xJ}, \beta_{yJ}, u_{0K}, v_{0K}, w_{0K}, \beta_{xK}, \beta_{yK}\} \qquad (10)$$
Remark: In the simple case of a beam with two nodes and three degrees of freedom per node [23], this force vector corresponds exactly to the nodal forces of the beam finite element.
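A minimal sketch of the assembly implied by equations (6) and (7) is given below: the element flexibility is the sum of the membrane-bending and shear contributions, and the stiffness in local coordinates follows by inversion and congruence with the rigid-body-mode filter [R]. The matrices are passed in as already-computed arrays and the function name is hypothetical.

```python
import numpy as np

def element_stiffness(R, F_cmf, F_cis):
    """Equations (6)-(7): [K_e] = [R]^T [F_flx^e]^{-1} [R],
    with [F_flx^e] = [F_cmf] + [F_cis].

    R     : transition matrix to the system without rigid-body modes.
    F_cmf : membrane-bending flexibility of the element.
    F_cis : transverse-shear flexibility of the element.
    """
    F = F_cmf + F_cis                    # total element flexibility
    K_reduced = np.linalg.inv(F)         # stiffness of the reduced (rigid-mode-free) system
    return R.T @ K_reduced @ R           # expand to the full local degrees of freedom
```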
Flexibility matrices concerning the plates are given by:
$$[F_{flex}^{pla}(cmf)] = \sum_{k=1}^{mcells} S_{IJK} \int_{\Delta} [D_{cmf}(\xi,\eta)]^T\, [H_{cmf}(\xi_K,\eta_K)]^{-1}\, [D_{cmf}(\xi,\eta)]\; d\xi\, d\eta \qquad (11)$$

$$[F_{flex}^{pla}(cisaill)] = \sum_{k=1}^{mcells} S_{IJK} \int_{\Delta} [b_{ct}]^T\, [H_{ct}(\xi_K,\eta_K)]^{-1}\, [b_{ct}]\; d\xi\, d\eta \qquad (12)$$
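As an illustration of how the cell integrals of equations (11) and (12) can be evaluated, the sketch below uses a standard three-point Gauss rule on the reference triangle. The callables `d_cmf` and the matrix `h_cmf_inv` stand for the interpolation matrix and the inverse membrane-bending rigidity of a cell; they are assumptions of this sketch, not code from the paper.

```python
import numpy as np

# Three-point Gauss rule on the reference triangle (weights sum to 1/2,
# the area of the unit triangle in (xi, eta) coordinates).
GAUSS_POINTS = [(1/6, 1/6), (2/3, 1/6), (1/6, 2/3)]
GAUSS_WEIGHTS = [1/6, 1/6, 1/6]

def cell_flexibility(S_ijk, d_cmf, h_cmf_inv):
    """One term of equation (11): S_IJK times the integral over the
    reference triangle of D_cmf^T H_cmf^{-1} D_cmf d(xi) d(eta).

    d_cmf(xi, eta) -> (6, 12) interpolation matrix of the efforts.
    h_cmf_inv      -> (6, 6) membrane-bending flexibility of the cell.
    """
    F = np.zeros((12, 12))
    for (xi, eta), w in zip(GAUSS_POINTS, GAUSS_WEIGHTS):
        D = d_cmf(xi, eta)
        F += w * D.T @ h_cmf_inv @ D
    return S_ijk * F
```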
The matrices $[H_{cmf}(\xi_K,\eta_K)]^{-1}$ and $[H_{ct}(\xi_K,\eta_K)]^{-1}$ are respectively the membrane-bending and shear flexibility matrices:

$$\{\Sigma_{cmf}(\xi_K,\eta_K)\} = [H_{cmf}(\xi_K,\eta_K)]\,\{d_{cmf}(\xi_K,\eta_K)\}, \qquad \{T\} = [H_{ct}(\xi_K,\eta_K)]\,\{d_{ct}\}$$

$$[H_{cmf}(\xi_K,\eta_K)] = \begin{bmatrix} H_m & H_{mf} \\ H_{mf}^T & H_f \end{bmatrix}, \qquad [H_{c}(\xi_K,\eta_K)] = \sum_{i=1}^{Nstrata} h_i\, H_i^{\tau}$$

with

$$H_m = \sum_{i=1}^{Nstrata} h_i H_i, \qquad H_{mf} = \sum_{i=1}^{Nstrata} h_i \bar{z}_i H_i, \qquad H_f = \frac{1}{3}\sum_{i=1}^{Nstrata} (z_{i+1}^3 - z_i^3)\, H_i$$

$$H_i = \frac{E_i}{1-\nu_i^2}\begin{bmatrix} 1 & \nu_i & 0 \\ \nu_i & 1 & 0 \\ 0 & 0 & \dfrac{1-\nu_i}{2} \end{bmatrix}, \qquad H_i^{\tau} = \frac{E_i}{1-\nu_i^2}\begin{bmatrix} \dfrac{k'(1-\nu_i)}{2} & 0 \\ 0 & \dfrac{k'(1-\nu_i)}{2} \end{bmatrix}$$

$$h_i = z_{i+1} - z_i, \qquad \bar{z}_i = \frac{1}{2}(z_{i+1} + z_i)$$

The matrices $[H_{cmf}(\xi_K,\eta_K)]$ and $[H_c(\xi_K,\eta_K)]$ respectively represent the membrane-bending and shear stiffness of cell k of the plate; $h_i$ and $z_i$ are respectively the thickness and the position along z of layer i of the cell, $E_i$ and $\nu_i$ are the Young's modulus and Poisson's ratio of the corresponding layer, and $k'$ is the shear correction factor. $\{d_{cmf}(\xi_K,\eta_K)\} = \{e_{xx}, e_{yy}, \gamma_{xy}, k_{xx}, k_{yy}, k_{xy}\}^T$ is the vector of membrane strains and curvatures experienced by a cell, and $\{d_{ct}\} = \{\gamma_x, \gamma_y\}^T$ is the vector of transverse shear distortions in the (x, z) and (y, z) planes.
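A compact sketch of the through-thickness sums defining $H_m$, $H_{mf}$, $H_f$ and the shear term $H_c$ is given below; the per-layer plane-stress matrix follows the reconstruction above, and the names and arguments are illustrative.

```python
import numpy as np

def layer_plane_stress(E, nu):
    """Plane-stress constitutive matrix H_i of one layer."""
    return (E / (1.0 - nu**2)) * np.array([[1.0, nu, 0.0],
                                           [nu, 1.0, 0.0],
                                           [0.0, 0.0, (1.0 - nu) / 2.0]])

def laminate_rigidities(z, E, nu, k_shear=5.0/6.0):
    """Membrane (H_m), coupling (H_mf), bending (H_f) and shear (H_c)
    rigidities of a cell with layer interfaces z[0..Nstrata] and per-layer
    Young's moduli E[i] and Poisson ratios nu[i]."""
    H_m, H_mf, H_f = np.zeros((3, 3)), np.zeros((3, 3)), np.zeros((3, 3))
    H_c = np.zeros((2, 2))
    for i in range(len(E)):
        Hi = layer_plane_stress(E[i], nu[i])
        h = z[i + 1] - z[i]                   # layer thickness h_i
        zbar = 0.5 * (z[i + 1] + z[i])        # layer mid-position
        G = E[i] / (2.0 * (1.0 + nu[i]))      # shear modulus of the layer
        H_m += h * Hi
        H_mf += h * zbar * Hi
        H_f += (z[i + 1]**3 - z[i]**3) / 3.0 * Hi
        H_c += k_shear * h * G * np.eye(2)
    return H_m, H_mf, H_f, H_c
```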
3. PRESENTATION OF AN ELEMENT DKT (Discrete Kirchhoff Triangle)
The DKT element defined in [1] is a finite element with three nodes and three degrees of freedom per node. It is considered in this article as a finite element with three nodes and five degrees of freedom per node.
The rotations $\beta_x$, $\beta_y$ are interpolated quadratically and the displacements $u_0$, $v_0$, $w_0$ are interpolated linearly [1, 2]:

$$\beta_x = \sum_{i=1}^{n} N_i\,\beta_{xi} + \sum_{k=n+1}^{2n} P_{xk}\,\alpha_k, \qquad \beta_y = \sum_{i=1}^{n} N_i\,\beta_{yi} + \sum_{k=n+1}^{2n} P_{yk}\,\alpha_k, \qquad P_{xk} = P_k C_k, \quad P_{yk} = P_k S_k$$

$$u_0 = \sum_{i=1}^{n} N_i u_{0i}, \qquad v_0 = \sum_{i=1}^{n} N_i v_{0i}, \qquad w_0 = \sum_{i=1}^{n} N_i w_{0i}$$

where $C_k$, $S_k$ are the direction cosines of side ij and k denotes the mid-point of that side:

$$C_k = (x_j - x_i)/L_k, \qquad S_k = (y_j - y_i)/L_k, \qquad L_k = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2}$$

where n is the number of nodes of the finite element (n = 3 for a triangular element) and the functions $N_i$ and $P_k$ are given by [1, 2]:

$$N_1 = \lambda = 1 - \xi - \eta, \quad N_2 = \xi, \quad N_3 = \eta, \qquad P_4 = 4\xi\lambda, \quad P_5 = 4\xi\eta, \quad P_6 = 4\eta\lambda$$
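For reference, the sketch below evaluates the corner and mid-side functions used in the DKT interpolation at a point (ξ, η) of the reference triangle, together with the side geometry terms C_k, S_k and L_k; it follows the expressions reconstructed above, and the function names are illustrative.

```python
import numpy as np

def dkt_shape_functions(xi, eta):
    """Corner functions N1..N3 and mid-side functions P4..P6 at (xi, eta)."""
    lam = 1.0 - xi - eta
    N = np.array([lam, xi, eta])
    P = np.array([4.0 * xi * lam, 4.0 * xi * eta, 4.0 * eta * lam])
    return N, P

def side_geometry(x, y):
    """Direction cosines C_k, S_k and lengths L_k of the sides ij of a
    triangle with nodal coordinates (x[i], y[i]); sides taken as 1-2, 2-3, 3-1."""
    C, S, L = [], [], []
    for i, j in ((0, 1), (1, 2), (2, 0)):
        dx, dy = x[j] - x[i], y[j] - y[i]
        Lk = np.hypot(dx, dy)
        C.append(dx / Lk); S.append(dy / Lk); L.append(Lk)
    return np.array(C), np.array(S), np.array(L)
```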
The expression of $\alpha_k$ in terms of the nodal variables of nodes i and j is [1]:

$$\alpha_k = \frac{3}{2L_k}(w_i - w_j) - \frac{3}{4}\left(C_k\beta_{xi} + S_k\beta_{yi} + C_k\beta_{xj} + S_k\beta_{yj}\right) \qquad (13)$$

so that

$$\beta_x = \langle N^{x1}\; N^{x2}\; N^{x3}\rangle\, \{u_n\}, \qquad \beta_y = \langle N^{y1}\; N^{y2}\; N^{y3}\rangle\, \{u_n\} \qquad (14)$$

with, for $i = 1, \dots, n$:

$$N_i^{x1} = \frac{3}{2L_k}P_k C_k - \frac{3}{2L_m}P_m C_m, \qquad N_i^{x2} = N_i - \frac{3}{4}P_k C_k^2 - \frac{3}{4}P_m C_m^2, \qquad N_i^{x3} = -\frac{3}{4}P_k C_k S_k - \frac{3}{4}P_m C_m S_m$$

$$N_i^{y1} = \frac{3}{2L_k}P_k S_k - \frac{3}{2L_m}P_m S_m, \qquad N_i^{y2} = N_i^{x3}, \qquad N_i^{y3} = N_i - \frac{3}{4}P_k S_k^2 - \frac{3}{4}P_m S_m^2$$

III. ANALYSIS OF A UNIFORM PLATE WITH DKT AND FLEXIBILITY (FLX)
Figure 2 presents the results obtained with FLX for the analysis of a homogeneous square plate under uniform load, simply supported or clamped on its contour, for different slenderness ratios L/h (5 to 1000). The plate is meshed with 128 (N = 8) isosceles right-triangle elements (§ 2.2). The results are virtually identical to those obtained with DST and Q4γ [1] for the clamped plate (Figure 2.a). For the simply supported plate an error of about 0.5% appears (Figure 2.b).
Figure 2 - Homogeneous square plate under uniform load. Deflection at the centre as a function of L/h ($D = Eh^3/12(1-\nu^2)$, $\nu = 0.3$, $k' = 5/6$)
In a second step, Figures 3 to 7 present some results [1, 2] on the analysis of a homogeneous square plate subjected to a concentrated load at the centre, simply supported or clamped on its contour. A quarter of the plate is meshed with 2, 8, 32 or 128 (N = 1, 2, 4, 8) isosceles right-triangle DKT elements (§ 2.3) and FLX elements (§ 2.2). These elements have five degrees of freedom per node; the DKT elements are of Kirchhoff type (no transverse shear energy, so the results are independent of L/h), while the flexibility elements (FLX) take transverse shear into account, here for L/h = 24 (Figures 6 and 7). Figures 4 and 5 present the results obtained with FLX for the homogeneous square plate under concentrated load, simply supported or clamped on its contour, for different slenderness ratios L/h (5 to 1000) and for both types of mesh (a slight influence of the mesh can be seen). The plate is meshed with 2, 8, 32 and 128 (N = 1, 2, 4, 8) isosceles right-triangle FLX elements (§ 2.2). Convergence is observed for N = 8, that is, for a mesh of 128 elements. For the clamped plate the error is of the order of 0.5% for mesh A (Figure 4.a) and 0.56% for mesh B (Figure 5.a); for the simply supported plate it is 0.3% for mesh A (Figure 4.b) and 0.31% for mesh B (Figure 5.b). In Figure 6 we give the percentage error of the deflection at the centre as a function of the number of divisions N per half side. Convergence with FLX is monotonic (the FLX model converges consistently from one side: the total potential energy $EP_{EF}$ approaches $EP_{exact}$ monotonically and, since $EP = -\tfrac{1}{2} w_c P$, so does $(w_c)_{EF}$ towards $(w_c)_{exact}$). It is observed that DKT is a model that over-estimates $w_c$; however, the monotonic convergence of DKT cannot be demonstrated. There is also a strong influence of the orientation of the mesh with triangular elements of the DKT and FLX type. The convergence of the moment $M_{xD}$ at the middle of the clamped side, and of the concentrated corner reaction ($2M_{xy}$ at B) in the case of the simply supported plate, is presented in Figure 7 for both types of mesh and for DKT and FLX (the efforts are computed directly at the corner nodes, followed by an average when a node is shared by two elements). A fairly rapid convergence is observed, together with an influence of the model and of the mesh orientation.
Figure 3 - Square plate under concentrated load: data
- Symmetry conditions: $\beta_x = 0$ on CA; $\beta_y = 0$ on CD.
- Boundary conditions: clamped edge: $w = \beta_x = \beta_y = 0$ on AB and BD; simple support: $w = \beta_x = 0$ on AB and $w = \beta_y = 0$ on BD.
- Meshes considered: N = 1, 2, 4, 8 (two orientations, mesh A and mesh B; the case N = 2 is illustrated in the figure).
- Kirchhoff solution for a concentrated load P ($D = Eh^3/12(1-\nu^2)$, $\nu = 0.3$):
  clamped: $w_c = 5.6\times 10^{-3}\, PL^2/D$ and $M_{xD} = 0.1257\,P$;
  simply supported: $w_c = 11.6\times 10^{-3}\, PL^2/D$ and $R = 2M_{xy}|_B = 0.1219\,P$.
Figure 4 - Homogeneous square plate under concentrated load. Deflection at the centre as a function of L/h (mesh A)
Figure 5 - Homogeneous square plate under concentrated load. Deflection at the centre as a function of L/h (mesh B), where $w_k$ is the numerical value calculated for the different divisions (N = 1, 2, 4, 8)
Figure 6 - Square plates with concentrated load at the centre, clamped and simply supported. Error on the central deflection for DKT and FLX
Figure 7 - Square plates with concentrated load at the centre, clamped and simply supported. Error on a moment and on the corner reaction for DKT and FLX
IV. CONCLUSION
The flexibility method developed here, with linear (first-order) interpolation functions and with transverse displacements and rotations treated independently, solves the problem of shear locking. In the case of the multicellular multilayer finite element, we observe that the flexibility method, which converges monotonically, converges quickly enough for a plate structure. In this paper we have presented results for the analysis of a square plate in bending under a load concentrated at the centre, simply supported or clamped on its contour, while highlighting the influence of the mesh for different slenderness ratios L/h (Figures 4 and 5: deflection ratio $w_c / w_k$). We have also presented results for the analysis of a square plate subjected to a uniform load, clamped or simply supported on its contour (Figure 2). The percentage error shown in Figures 4, 5, 6 and 7, which reflects the locking phenomenon, is reduced (and becomes negligible) as the number of elements increases; this confirms the reliability of the method in solving the shear locking problem. In future work we will present predictive calculations of the seismic behaviour of load-bearing shear walls by numerical simulation coupled with a damage model, compared with experimental results and adopting a damage model for the multicellular multilayer finite element.
REFERENCES
1. Batoz J.L. & Gouri Dhatt, « Modélisation des structures par éléments finis, Volume 2 : Poutres et Plaques ».
2. Batoz J.L. & Gouri Dhatt, « Modélisation des structures par éléments finis, Volume 1, Volume 2, Volume 3 », Editions HERMES, 34 rue Eugène Flachat, 75017 Paris.
3. Crisfield M.A., [1991]. "Nonlinear Finite Element analysis of solids and structures", Vol. I, John Wiley, Chichester.
4. D. Bui, "Le cisaillement dans les plaques et les coques : modélisation et calcul", Note HI-71/7784, 1992.
5. De Ville de Goyet V., [1989]. "L'analyse statique non linéaire par la méthode des éléments finis des structures spatiales formées de poutres à section non symétrique". Thèse de doctorat, Université de Liège.
6. Friedman Z., Kosmatka J.B., [1993]. "An improved two-node Timoshenko beam finite element". Computers and Structures, vol. 47, no. 3, pp. 473-481.
7. Gouri Dhatt et Gilbert Touzot, « Une présentation de la méthode des éléments finis », Deuxième Edition, 1984, Maloine S.A. Éditeur, 75006 Paris.
8. Ibrahimbegovic N., Frey F., [1992]. "Finite element analysis of linear and non linear deformations of elastic initially curved beams". LSC internal report 92/02, January, Lausanne.
9. Ile N., [2000]. "Contribution à la compréhension du fonctionnement des voiles en béton armé sous sollicitation sismique : apport de l'expérimentation et de la modélisation à la conception". Thèse de doctorat, INSA de Lyon.
10. Ile N., Reynouard J.M., [2000]. "Non linear analysis of reinforced concrete shear wall under earthquake loading". Journal of Earthquake Engineering, Vol. 4, No. 2, pp. 183-213.
11. Kotronis P., [2000]. "Cisaillement dynamique de murs en béton armé. Modèles simplifiés 2D et 3D". Thèse de doctorat, Ecole Normale Supérieure de Cachan.
12. Kotronis P., [2008]. « Stratégies de Modélisation de Structures en Béton Soumises à des Chargements Sévères ». Mémoire d'habilitation à diriger des recherches, Université Joseph Fourier, laboratoire Sols, Solides, Structures - Risques (3S-R).
13. Kotronis P., Davenne L., Mazars J., [2004]. "Poutre 3D multifibre Timoshenko pour la modélisation des structures en béton armé soumises à des chargements sévères". Revue Française de Génie Civil, vol. 8, issues 2-3, pp. 329-343.
14. Kotronis P., Mazars J., Davenne L., [2003]. "The equivalent reinforced concrete model for simulating the behaviour of shear walls under dynamic loading". Engineering Fracture Mechanics, issues 7-8, pp. 1085-1097.
15. Kotronis P., Mazars J., Nguyen X.H., Ile N., Reynouard J.M., [2005b]. "The seismic behaviour of reinforced concrete structural walls: Experiments and modeling". 250th anniversary of the 1755 Lisbon earthquake - Proceedings, Lisbon, Portugal, pp. 441-445, cd paper no. 86, 1-4 November.
16. Kotronis P., Mazars J., [2005a]. "Simplified modelling strategies to simulate the dynamic behaviour of R/C walls". Journal of Earthquake Engineering, vol. 9, issue 2, pp. 285-306.
17. Mazars J., (1984). « Application de la mécanique de l'endommagement au comportement non linéaire et à la rupture du béton de structure ». Thèse de doctorat d'état de l'Université Paris VI.
18. Neuenhofer A., Filippou F.C., "Evaluation of Non-linear Frame Finite-Element Models", Journal of Structural Engineering, Vol. 123, No. 7, July 1997, pp. 958-966.
19. Nguyen X.H., Mazars J., Kotronis P., Ile N., Reynouard J.M., [2006a]. "Some aspects of local and global behaviour of RC structures submitted to earthquake - Experiments and modelling". EURO-C 2006 Computational Modelling of Concrete Structures, edited by G. Meschke, R. de Borst, H. Mang, N. Bicanic, Mayrhofen, Tyrol, Austria, 27-30 March 2006, pp. 757-766.
20. Nguyen X.H., Mazars J., Kotronis P., [2006b]. "Modélisation simplifiée 3D du comportement dynamique de structures en béton armé". Revue Européenne de Génie Civil, vol. 10, No. 3, pp. 361-373.
21. Przemieniecki J.S., [1985]. "Theory of matrix structural analysis". Dover Pubns, October.
22. Stolarski H., Belytschko T., [1983]. "Shear and membrane locking in C° elements". Computer Methods in Applied Mechanics and Engineering, vol. 41, issue 3, December, pp. 279-296.
23. Y. Belmouden (2003). "Modélisation numérique de la tenue aux séismes des structures en béton armé". Bulletin des Ponts et Chaussées.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 5 (September 2012) PP: 47-55
Voltage Sag and Mitigation Using Algorithm for Dynamic
Voltage Restorer by PQR Transformation Theory
Prof. Shivanagouda B. Patil1, Dr. Shekhappa G. Ankaliki2
1 E & E Engineering Department, Hirasugar Institute of Technology, Nidasoshi-591 236, Karnataka, India.
2 E & E Engineering Department, SDM College of Engineering & Technology, Dharwad-580 002, Karnataka, India.
Abstract––Voltage sag is one of the main power quality issues, and the Dynamic Voltage Restorer (DVR) is used for its mitigation. A voltage sag is a sudden reduction of the voltage from its nominal value over a short time, which can cause damage and loss of production in the industrial sector. In this paper the focus is on a DVR using an algorithm, and the related implementations, to control static series compensators (SSCs). Directly sensed three-phase voltages are transformed to p-q-r coordinates without time delay. The controlled variables in p-q-r coordinates have better steady-state and dynamic performance, and are then inversely transformed to the original a-b-c coordinates without time delay, generating the control signals for the SSCs. The control algorithm is used in a DVR system. The simulated results are verified and the mitigation of sag is presented in this paper.
Key words––Voltage Sag, Static Series Compensator (SSC), PQR Transformation, Dynamic Voltage Restorer (DVR)
I. INTRODUCTION
In recent years power quality disturbances have become a major issue, which has led many researchers to look for the best solutions to them. Among the various types of power quality disturbances are voltage sag, voltage swell and interruption. This paper introduces the DVR and its operating principle for sensitive loads. A simple control based on the SPWM technique and the 'pqr' transformation theory [1] is used in the algorithm to compensate voltage sags/swells. The scope of the work is a DVR system simulated using the Matlab/Simulink toolbox, and the results are presented. With the growth of new technology, many devices have been created and developed for the mitigation of voltage sag. Voltage sag is widely recognized as one of the most important power quality disturbances [2]. A voltage sag is a short reduction in rms voltage from the nominal voltage, occurring over a short duration of about 10 ms to a few seconds. IEC 61000-4-30 defines a voltage sag (dip) as a temporary reduction of the voltage at a point of the electrical system below a threshold [3]. IEEE Standard 1159-1995 defines voltage sags as an rms variation with a magnitude between 10% and 90% of the nominal voltage and a duration between 0.5 cycles and one minute [4], [5]. A voltage sag occurs on the feeders adjacent to an unhealthy feeder. An unhealthy feeder is usually caused by two factors: short circuits due to faults in the power system network, and the starting of motors that draw a very high lagging current. Both of these factors are major contributors to voltage sag as a power quality problem in power systems. Voltage sags are the most common power disturbance and particularly affect industrial and large commercial customers, causing damage to sensitive equipment and loss of daily production and finances. Examples of sensitive equipment are programmable logic controllers (PLC), adjustable speed drives (ASD) and chiller controls. There are many ways to mitigate the voltage sag problem; one of them is to minimize short circuits on the utility side, for example by avoiding feeder or cable overloading through correct configuration. The control of the compensation voltages in a DVR based on the dqo algorithm is discussed in [6]. The DVR is a power electronic controller that can protect sensitive loads from disturbances in the supply system. Dynamic voltage restorers (DVR) can provide the most cost-effective solution to voltage sag mitigation by injecting voltage as well as power into the system. The mitigation capability of these devices is mainly influenced by the maximum load, the power factor and the maximum voltage dip to be compensated [7]. In [12-17], voltage sag mitigation using a dynamic voltage restorer system is discussed under fault and dynamic conditions.
II. DVR SYSTEM
The dynamic voltage restorer (DVR) is a series compensator that is able to protect a sensitive load from distortion on the supply side during faults or overload in the power system. The basic principle of a series compensator is simple: by inserting a voltage of the required magnitude and frequency, the series compensator can restore the load-side voltage to the desired amplitude and waveform even when the source voltage is unbalanced or distorted [8]. The DVR employs gate turn-off thyristor (GTO) solid-state power electronic switches in a pulse width modulated (PWM) inverter structure. The DVR can generate or absorb independently controllable real and reactive power at the load side. It is made of a solid-state dc-to-ac switching power converter that injects a set of three-phase ac output voltages in series and in synchronism with the distribution feeder voltages [8]. The amplitude and phase angle of the injected voltages are variable, thereby allowing control of the real and reactive power exchange between the DVR and the distribution system [8]. The dc input terminal of the DVR is connected to an energy source or an energy storage device of appropriate capacity. The reactive power exchanged between the DVR and the distribution system is generated internally by the DVR without ac passive reactive components. The real power exchanged at the DVR output ac terminals is provided to the DVR input dc terminal by an external energy source or energy storage system. The DVR structure comprises a rectifier, an inverter, a filter and a coupling transformer, as shown in Fig. 1. A pulse width modulation (PWM) technique is used to control the variable voltage, and the filter eliminates the harmonics generated by the high switching frequency of the PWM technique. In the power system network, the DVR is connected in series with the distribution feeder that supplies a sensitive load, as shown in Fig. 2. There are two main factors relating to the capability and performance of a DVR acting against voltage sags in a given power system: the sag severity level and the Total Harmonic Distortion (THD); both of these are in turn mainly decided by the DC source [9].
Fig. 1 DVR Structure
Fig. 2 DVR System in Power System
The Principle of Operation of the DVR System
In a normal situation, without a short circuit in the power system, the capacitor between the rectifier and the inverter (Fig. 1) is charged. When a voltage sag occurs, this capacitor discharges to maintain the load voltage supply. The nominal voltage is compared with the sagged voltage to obtain the difference voltage, which is injected by the DVR system to maintain the load voltage. The PWM technique is used to control this variable voltage. To maintain the load voltage, reactive power must be injected by the DVR system. In practice, the voltage injection capability of a DVR system is about 50% of the nominal voltage, which is sufficient for sag mitigation because statistics show that most voltage sag events in power systems involve a voltage drop of less than 0.5 p.u.
Mathematical Model for Voltage Sag Calculation
The assumption in this model is that the fault current is larger than the load current. The point of common coupling (PCC) is the point from which both the fault and the load are fed. Upon the occurrence of a fault, which may be a line-to-line-to-ground fault, a short-circuit current flows, which leads to a reduction of the voltage magnitude at the PCC, as shown in Fig. 3. This voltage sag may be unbalanced and may be accompanied by a phase jump.
Fig. 3 Calculation for voltage sag
$$I_F = \frac{E}{Z_S + Z_F} \qquad (1)$$

$$V_{sag} = I_F\, Z_F \qquad (2)$$

$$\phi = \tan^{-1}\!\left(\frac{X_F}{R_F}\right) - \tan^{-1}\!\left(\frac{X_S + X_F}{R_S + R_F}\right) \qquad (3)$$

where $Z_S = R_S + jX_S$ is the source impedance, $Z_F = R_F + jX_F$ is the impedance between the PCC and the fault, and $E = 1$ p.u. is the source voltage. The missing voltage, i.e. the voltage difference between the pre-sag condition and the sagged (faulted) condition, is given by equation (4), and its vectorial representation is shown in Fig. 4:

$$V_{missing} = V_{presag} - V_{sag} \qquad (4)$$
Fig. 4 Vector Diagram Showing Voltage Sag
The missing voltage has to be provided by a sag-compensating device such as the Dynamic Voltage Restorer (DVR), which injects it in series with the supply voltage to bring the load voltage back to its pre-sag condition.
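A small numeric sketch of equations (1)-(4) is given below: for assumed per-unit source and fault impedances it computes the fault current, the sag voltage at the PCC, its phase jump and the missing voltage that the DVR has to inject. The impedance values are placeholders, not values from the paper.

```python
import cmath

# Assumed per-unit values (placeholders, not from the paper)
E = 1.0 + 0j                      # pre-sag source voltage, 1 p.u.
Zs = 0.05 + 0.30j                 # source impedance R_S + jX_S
Zf = 0.02 + 0.10j                 # feeder impedance to the fault R_F + jX_F

I_fault = E / (Zs + Zf)           # equation (1)
V_sag = I_fault * Zf              # equation (2): voltage left at the PCC
phase_jump = cmath.phase(Zf) - cmath.phase(Zs + Zf)   # equation (3)
V_missing = E - V_sag             # equation (4): voltage the DVR must inject

print(f"|V_sag| = {abs(V_sag):.3f} p.u., phase jump = {phase_jump:.3f} rad")
print(f"|V_missing| = {abs(V_missing):.3f} p.u.")
```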
III. PROPOSED APPROACH
The objective of this paper is to implement a DVR in series with a sensitive load. It is a series-compensating device that injects a set of controllable three-phase AC output voltages in series and in synchronism with the distribution feeder voltages. The DVR employs solid-state switching devices in a pulse width modulated inverter (PWMI); the magnitude and phase angle of the injected voltages can be controlled independently. In this work the directly sensed three-phase voltages are transformed to p-q-r coordinates without time delay [1]. The controlled variables in p-q-r coordinates have better steady-state and dynamic performance, and are later inversely transformed to the original a-b-c coordinates without time delay. The DVR must be able to respond quickly so that the end users of sensitive equipment experience no voltage sag, as shown in Fig. 3.
IV. BASIC CONTROL STRATEGIES OF DVR
The type of control strategy mainly depends on the type of sensitive or critical load, i.e. on the sensitivity of the load to changes in magnitude, phase shift and wave shape of the voltage waveform. The control technique adopted should consider the limitations of the DVR, such as its voltage injection capability and energy limit. Voltage sag compensation can be performed by the DVR using real and reactive power transfer; reactive power alone will not meet the requirements of dynamic compensation of disturbances that require real power transfer.
The three basic control strategies are:
a) Pre-sag Compensation
b) In-phase Compensation
c) Energy Optimal Compensation
a) Pre-sag Compensation
In this technique the DVR compensates for the difference between the sagged and the pre-sag voltages by restoring the instantaneous voltages to the same phase and magnitude as the nominal pre-sag voltages. If the sag is associated with a phase shift, full phase compensation is provided so that the load is undisturbed as far as phase angle and magnitude are concerned. However, there is no control of the energy injected into the line, which often exhausts the rating of the DVR. The vector diagram for pre-sag compensation is shown in Fig. 5.
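As a minimal illustration of pre-sag compensation, the sketch below computes the phasor the DVR must inject so that the load again sees the nominal pre-sag voltage; the sagged voltage (reduced magnitude with a phase jump) is an assumed example value.

```python
import cmath

V_presag = cmath.rect(1.00, 0.0)      # nominal pre-sag phasor, 1 p.u.
V_sag = cmath.rect(0.65, -0.15)       # assumed sag: 0.65 p.u. with a phase jump

# Pre-sag compensation restores both magnitude and phase of the load voltage,
# so the DVR injects the full difference between the pre-sag and sagged phasors.
V_inject = V_presag - V_sag
print(f"inject {abs(V_inject):.3f} p.u. at {cmath.phase(V_inject):.3f} rad")
```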
Fig. 5 Vector Diagram of Pre-sag Compensation
b) In-phase Compensation
In this method the restored voltages are in phase with the depressed source-side voltages, regardless of the load current and the pre-sag voltage, thus minimizing the magnitude of the injected voltage. Phase compensation is not provided, but the method performs better in compensating a broader range of voltage sags. The vector diagram for in-phase compensation is shown in Fig. 6.
Fig. 6 Vector Diagram for Inphase Compensation
c) Energy Optimal Compensation
The voltages are injected at an angle that minimizes the use of real power, since voltage restoration is the only requirement. This means the DVR voltage is injected perpendicular to the load current; however, the current changes its phase according to the new voltage applied to it, and some energy is drawn from the DVR. The vector diagram for optimal compensation is shown in Fig. 7.
Fig. 7 Vector Diagram for Optimal Compensation
V. COMPUTATION OF COMPENSATING VOLTAGES
There are many methods to calculate the compensating reference voltage waveforms for dynamic voltage restorers. One of them is the "PQR power theory". An effective algorithm based on PQR power theory is used here to calculate the reference compensating voltages. In this algorithm, the directly sensed three-phase voltages are converted into p-q-r coordinates instantaneously, without any time delay. The reference voltages in the p-q-r domain then take simple forms - DC values. The controller in the p-q-r domain is very simple and clear, and has better dynamic and steady-state performance than conventional controllers. The algorithm based on PQR power theory is used to obtain the reference compensation voltages in the p-q-r domain, which are then transformed back to the 0-α-β domain to drive the space vector modulator that generates the firing signals for the inverter.
PQR Transformation Theory
The three-phase voltages in a-b-c coordinates can be transformed to 0-α-β coordinates as given below:

$$\begin{bmatrix} v_0 \\ v_\alpha \\ v_\beta \end{bmatrix} = \sqrt{\frac{2}{3}} \begin{bmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ 1 & -\tfrac{1}{2} & -\tfrac{1}{2} \\ 0 & \tfrac{\sqrt{3}}{2} & -\tfrac{\sqrt{3}}{2} \end{bmatrix} \begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix} \qquad (5)$$

If sinusoidal balanced voltages $v_{aREF}$, $v_{bREF}$ and $v_{cREF}$ are selected as the reference waves in the a-b-c coordinates, the reference waves in 0-α-β coordinates can be calculated as shown below:

$$\begin{bmatrix} v_{\alpha REF} \\ v_{\beta REF} \end{bmatrix} = \sqrt{\frac{2}{3}} \begin{bmatrix} 1 & -\tfrac{1}{2} & -\tfrac{1}{2} \\ 0 & \tfrac{\sqrt{3}}{2} & -\tfrac{\sqrt{3}}{2} \end{bmatrix} \begin{bmatrix} v_{aREF} \\ v_{bREF} \\ v_{cREF} \end{bmatrix} \qquad (6)$$

Since the reference waves $v_{aREF}$, $v_{bREF}$ and $v_{cREF}$ are sinusoidal and balanced, the 0-axis component $v_{0REF}$ does not exist and the reference waves $v_{\alpha REF}$, $v_{\beta REF}$ are sinusoidal and orthogonal on the α-β plane. Using the reference waves $v_{\alpha REF}$, $v_{\beta REF}$ of the 0-α-β coordinates in a mapping matrix, the voltages in 0-α-β coordinates can be transformed to p-q-r coordinates as given by (7):

$$\begin{bmatrix} v_p \\ v_q \\ v_r \end{bmatrix} = \frac{1}{v_{REF}} \begin{bmatrix} 0 & v_{\alpha REF} & v_{\beta REF} \\ 0 & -v_{\beta REF} & v_{\alpha REF} \\ v_{REF} & 0 & 0 \end{bmatrix} \begin{bmatrix} v_0 \\ v_\alpha \\ v_\beta \end{bmatrix} \qquad (7)$$

where

$$v_{REF} = \sqrt{v_{\alpha REF}^2 + v_{\beta REF}^2} \qquad (8)$$

Combining (5) and (7), the voltages in a-b-c coordinates can be transformed to p-q-r coordinates as shown below, where the transformation matrix $[C]$ combines $[C_1]$ (the a-b-c to 0-α-β transformation) and $[C_2]$ (the 0-α-β to p-q-r mapping):

$$\begin{bmatrix} v_p \\ v_q \\ v_r \end{bmatrix} = [C]\begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix}, \qquad [C] = [C_2][C_1] \qquad (9)$$

$$[C_1] = \sqrt{\frac{2}{3}} \begin{bmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ 1 & -\tfrac{1}{2} & -\tfrac{1}{2} \\ 0 & \tfrac{\sqrt{3}}{2} & -\tfrac{\sqrt{3}}{2} \end{bmatrix}, \qquad [C_2] = \frac{1}{v_{REF}} \begin{bmatrix} 0 & v_{\alpha REF} & v_{\beta REF} \\ 0 & -v_{\beta REF} & v_{\alpha REF} \\ v_{REF} & 0 & 0 \end{bmatrix}$$

The a-b-c domain is recovered from the p-q-r domain by taking the inverse of $[C]$:

$$\begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix} = [C]^{-1} \begin{bmatrix} v_p \\ v_q \\ v_r \end{bmatrix} \qquad (10)$$

where $[C]^{-1} = ([C_2][C_1])^{-1} = [C_1]^{-1}[C_2]^{-1}$ (11), with

$$[C_1]^{-1} = \sqrt{\frac{2}{3}} \begin{bmatrix} \tfrac{1}{\sqrt{2}} & 1 & 0 \\ \tfrac{1}{\sqrt{2}} & -\tfrac{1}{2} & \tfrac{\sqrt{3}}{2} \\ \tfrac{1}{\sqrt{2}} & -\tfrac{1}{2} & -\tfrac{\sqrt{3}}{2} \end{bmatrix} \qquad (12)$$

$$[C_2]^{-1} = \frac{1}{v_{REF}} \begin{bmatrix} 0 & 0 & v_{REF} \\ v_{\alpha REF} & -v_{\beta REF} & 0 \\ v_{\beta REF} & v_{\alpha REF} & 0 \end{bmatrix} \qquad (13)$$
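The chain of transformations (5)-(9) can be sketched numerically as below; the Clarke matrix and the p-q-r mapping follow the equations reconstructed above, and balanced unit sine waves are used as an assumed reference.

```python
import numpy as np

C1 = np.sqrt(2.0 / 3.0) * np.array([[1/np.sqrt(2), 1/np.sqrt(2), 1/np.sqrt(2)],
                                    [1.0,          -0.5,         -0.5],
                                    [0.0,  np.sqrt(3)/2, -np.sqrt(3)/2]])

def to_pqr(v_abc, v_abc_ref):
    """Transform instantaneous a-b-c voltages to p-q-r coordinates using
    balanced reference waves v_abc_ref (equations (5)-(9) as reconstructed)."""
    v0, v_al, v_be = C1 @ v_abc            # 0-alpha-beta of the sensed voltages
    _, al_ref, be_ref = C1 @ v_abc_ref     # 0-alpha-beta of the reference waves
    v_ref = np.hypot(al_ref, be_ref)       # equation (8)
    C2 = np.array([[0.0,  al_ref,  be_ref],
                   [0.0, -be_ref,  al_ref],
                   [v_ref, 0.0,    0.0]]) / v_ref   # mapping of equation (7)
    return C2 @ np.array([v0, v_al, v_be])          # v_p, v_q, v_r

# Balanced three-phase sample aligned with its reference: v_q and v_r vanish
theta = 0.4
v = np.array([np.cos(theta), np.cos(theta - 2*np.pi/3), np.cos(theta + 2*np.pi/3)])
print(to_pqr(v, v))   # v_p equals the space-vector magnitude, v_q = v_r = 0
```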
Fig. 8 Physical Meaning of the PQR Transformation
When the three-phase voltages are sinusoidal and balanced, the locus of the sensed voltage space vector $V_{SEN}$ is a circle on the α-β plane. If the three-phase voltages are in phase with the three-phase reference waves, the sensed voltage space vector $V_{SEN}$ becomes aligned with the reference wave space vector $V_{REF}$. In this case $v_q$ and $v_r$ do not exist, while $v_p$ comprises only a dc component equal to $|V_{SEN}|$. This condition is the target of the voltage sag compensation by a DVR.
DVR Model Description
The block diagram of the overall control flow is shown in Fig. 9. An open-loop feed-forward control technique is adopted. Upon the occurrence of a sag there is a reduction in the phase voltages downstream of the DVR. The data are acquired by the data acquisition system and the sag is detected; the speed with which the sag is detected depends on the sensors and the sag detection algorithm. The three-phase voltages are then processed using the PQR algorithm. The reference compensating voltages are generated in the PQR domain and transformed back to the alpha-beta domain. The alpha- and beta-axis reference voltages are given as inputs to drive the space vector modulator, and the generated firing pulses are used to fire the switches of the Voltage Source Inverter (VSI). The topology of the VSI used is either a conventional two-level, three-leg inverter or a three-level (multi-level) diode-clamped inverter; depending on the topology, the number of firing pulses may be 6 or 12.
Fig. 9 Simulation Model of the Dynamic Voltage Restorer
VI. SIMULATION RESULTS
The simulation results, obtained using the Matlab/Simulink toolbox for a conventional two-level inverter with a space vector modulation scheme, are presented. The data used for the simulation are:
Vsa = 230∠0°; Vsb = 230∠-120°; Vsc = 230∠120°; Vab = 350 V
The three-phase output voltages generated by the VSI can be controlled individually both in magnitude and in phase. Finally, the three-phase voltages are filtered by a low-pass filter before being injected into the line via the booster transformer. All the simulations are carried out using the MATLAB/SIMULINK toolbox. The phase and line voltages of the conventional two-level inverter are shown in Fig. 10. The sag compensation by the DVR for a double line-to-ground fault is shown in Fig. 11; there is a voltage sag in phases 'b' and 'c', and the DVR provides full magnitude and phase compensation instantaneously.
Fig. 10 Phase and Line Voltages in Conventional Two Level Inverter
Fig. 11 Compensation of Sag by DVR for Phases b and c
VII. CONCLUSION
In this paper a complete simulated DVR system has been developed using the MATLAB/SIMULINK toolbox. The proposed DVR scheme uses the PQR algorithm to generate the reference compensation voltages without time delay; these are applied to a space vector modulator that drives the conventional two-level topology, which generates the required compensation voltages. It is shown that the simulated DVR works successfully, without shortcomings in its performance, when applied to a simulated power system network. By introducing the DVR into the power network, power quality can be improved. Good power quality is important in electrical power systems, especially for critical areas such as the industrial sector, in order to ensure the smoothness of daily operations.
REFERENCES
1. Hyosung Kim, Frede Blaabjerg, Birgitte Bak-Jensen, Jaeho Choi, "Instantaneous Power Compensation in Three-Phase Systems by Using P-Q-R Theory", IEEE Journal, 2001, pp. 478-485.
2. M. F. Faisal, "Power Quality Management Program: TNB's Experience", Distribution Engineering Department, TNB, 2005.
3. A. Felce, G. Matas, Y. D. Silva, "Voltage Sag Analysis and Solution for an Industrial Plant with Embedded Induction Motors", Inelectra S.A.C.A., Caracas, Venezuela, 2004.
4. Pirjo Heine, Matti Lehtonen, "Voltage Sag Distributions Caused by Power System Faults", IEEE Transactions on Power Systems, Vol. 18, No. 4, November 2003.
5. Shairul Wizmar Wahab and Alias Mohd Yusof, "Voltage Sag and Mitigation Using Dynamic Voltage Restorer (DVR) System", Elektrika, Vol. 8, No. 2, 2006, pp. 32-37.
6. Rosli Omar, Nasrudin Abd Rahim, Marizan Sulaiman, "Modeling and Simulation for Voltage Sags/Swells Mitigation Using DVR", Journal of Theoretical and Applied Information Technology, 2005-2009 JATIT, pp. 464-470.
7. H. P. Tiwari and Sunil Kumar Gupta, "Dynamic Voltage Restorer Based on Load Condition", International Journal of Innovation, Management and Technology, Vol. 1, No. 1, April 2010.
8. A. Ghosh and G. Ledwich, "Power Quality Enhancement Using Custom Power Devices", Kluwer Academic Publishers, 2002.
9. P. T. Nguyen and T. K. Saha, "DVR against Balanced and Unbalanced Voltage Sags: Modeling and Simulation", IEEE - School of Information Technology and Electrical Engineering, University of Queensland, Australia, 2004.
10. P. S. Bimbhra, "Generalized Theory of Electrical Machines", Third Edition, Khanna Publishers, Delhi.
11. Pirjo Heine, Matti Lehtonen, "Voltage Sag Distributions Caused by Power System Faults", IEEE Transactions on Power Systems, Vol. 18, No. 4, November 2003.
12. Chris Fitzer, Mike Barnes, and Peter Green, "Voltage Sag Detection for a Dynamic Voltage Restorer", IEEE Transactions on Industry Applications, Vol. 40, No. 1, February 2004.
13. P. Boonchiam, N. Mithulananthan, "Understanding of Dynamic Voltage Restorers through MATLAB Simulation", Thammasat Int. J. Sc. Tech., 11(3), 2006, pp. 1-6.
14. Omar, R., Rahim, N., and Sulaiman, M., "Modeling and Simulation for Voltage Sags/Swells Mitigation Using Dynamic Voltage Restorer", Journal of Theoretical and Applied Information Technology, 2009, pp. 464-470.
15. S. Deepa and Dr. S. Rajapandian, "Voltage Sag Mitigation Using Dynamic Voltage Restorer System", International Journal of Computer and Electrical Engineering, Vol. 2, No. 5, October 2010, pp. 1793-8163.
16. H. P. Tiwari and Sunil Kumar Gupta, "Dynamic Voltage Restorer against Voltage Sag", International Journal of Innovation, Management and Technology, Vol. 1, No. 3, August 2010.
17. Mahmoud A. El-Gammal, Amr Y. Abou-Ghazala, and Tarek I. El-Shennawy, "Dynamic Voltage Restorer (DVR) for Voltage Sag Mitigation", International Journal on Electrical Engineering and Informatics, Volume 3, Number 1, 2011.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 2 (September 2012) PP: 62-65
Providing Security for Data in Multicasting Using Encryption
M. Kiran Kumar1, B.N.V. MadhuBabu2, K. Nageswrarao3
1 Pursuing M.Tech in Motherteresa Institute of Science and Technology, CSE Dept, JNTUH, A.P., India
2,3 Assoc. Prof. in Motherteresa Institute of Science and Technology, CSE Dept, JNTUH, A.P., India
Abstract––In this paper we describe how to provide security for data in multicasting. In multicasting, multiple users share the data that the server sends to them. The data may be general or confidential. When data is sent from source to destination there is a chance of a malicious attack during the transfer, so to protect the data, keys for encryption and decryption are used: the data is encrypted before it is sent to the multiple users, along with the decryption key. Information security has become an important issue in data communication, and encryption has emerged as a solution that plays an important role in information security systems. This security mechanism uses algorithms to scramble data into unreadable text that can only be decoded or decrypted by a party that possesses the associated key. These algorithms consume a significant amount of computing resources such as CPU time, memory, battery power and computation time.
Keywords––Encryption, multicasting, decryption, AES, RC4.
I. INTRODUCTION
For secure communication over public network data can be protected by the method of encryption. Encryption
converts that data by any encryption algorithm using the „key‟ in scrambled form. Only user having access to the key can
decrypt the encrypted data. Encryption is a fundamental tool for the protection of sensitive information. The purpose to use
encryption is privacy (preventing disclosure or confidentiality) in communications. Encryption is a way of talking to
someone while other people are listening, but such the other people cannot understand what you are saying .Encryption
algorithms play a big role in providing data security against malicious attacks. In mobile devices security is very important
and different types of algorithms are used to prevent malicious attack on the transmitted data. Encryption algorithm can be
categorized into symmetric key (private) and asymmetric (public) key In Symmetric keys encryption or secret key
encryption, only one key is used to encrypt and decrypt data. In Asymmetric keys, two keys are used; private and public
keys. Public key is used for encryption and private key is used for decryption (e.g. RSA).Public key encryption is biased on
mathematical function, computationally intensive and is not very efficient for small mobile devices .The present scenario
uses encryption which includes mobile phones, passwords, smart cards and DVDs. It has permeated everyday life and is
heavily used by much web application. Organization of the paper. The paper is organized as follows: In Section 2, we
describe the problem of group key distribution and discuss some related solutions. In Section 3, we describe our family of
key management algorithms for revocation and present sample algorithms from this family. In Section 4, we describe the key
distribution process for adding users to the group. In Section 5, we present the simulation results of our algorithms and
compare their performance with previous solutions. In Section 6, we describe a scenario in which users have variable
requirements and show that our algorithms can adapt to such situations. In Section 7, we describe two key assignment
approaches to reduce the keys stored at different users. In Section 8, we combine our algorithms with an existing solution
and show its benefits. In Section 9, we conclude the paper and discuss future work. Confidential communication is a basic
security requirement for modern communication systems. Solutions to this problem prevent an attacker that observes the
communication between two parties from accessing the exchanged data. We address a related, but harder, problem in a
scenario where the attacker is not only able to observe the communication between the parties, but can also fully
compromise these parties at some time after the confidential data has been exchanged. If a protocol preserves confidentiality
under such attacks, we say that it provides forward secrecy under full compromise. This is a stronger notion than forward
secrecy [18], which guarantees confidentiality when participants' long-term secrets (but not their devices or passwords) are
compromised. For example, a subpoena may be issued and the communication parties must relinquish their devices and secrets
after (e.g., e-mail) communication took place. In this scenario, the parties would like to guarantee that the authorities cannot
access the exchanged information, even when given full access to devices, backups, user passwords, and keys, including all
session keys stored on the devices.
Assuming public communication channels, any solution to the above problem must ensure that the communication
is encrypted to prevent eavesdropping. The challenge in solving this problem is the appropriate management and deletion of
the keys used to encrypt the data. Several solutions to this problem have been proposed. First, the Ephemerizer system
stores the encryption keys on a physically separate, trusted server accessible by all communicating
II.
RELATED WORK
To reflect current group membership, the group controller also needs to change and distribute the shared keys that
are known to the revoked users. There are two approaches available with the group controller for distributing the new shared
keys. In the first approach, the group controller explicitly transmits the new shared keys (e.g., in [2], [3], [5]) to the current
users. In our work, we adopt the second approach where the group controller and the users update the shared keys using the
following technique: k0x = f(kx), where kx is the old shared key, k0x is the new shared key, and f is a one-way function. Using this technique, only those current users who knew the old shared key kx will be able to derive the new shared key k0x. This technique was also used in [4], [7], [13], [14], [15], [16]. However, this technique may be prone to long-term collusive attacks by the revoked users, as described in [4]. To provide resistance against such attacks, the group controller adopts a
policy in which the keys known to the current users are refreshed at regular intervals of time. From the above discussion, we
note that the rekeying cost for the group controller to revoke multiple users is the cost of sending the new group key. We
measure this cost in the number of messages sent and the encryptions performed by the group controller for distributing the
new group key. Other approaches to address the problem of revoking multiple users are proposed in [17], where the group controller
maintains a logical hierarchy of keys that are shared by different subsets of the users. To revoke multiple users, the group
controller aggregates all the necessary key updates to be performed and processes them in a single step. However, the group
controller interrupts the group communication until all the necessary key updates are performed, and then, distributes the
new group key to restore group communication. This interruption to group communication is undesirable for real-time and
multimedia applications. In [18], to handle multiple group membership changes, the group controller performs periodic
rekeying, i.e., instead of rekeying whenever the group membership changes, the group controller performs rekeying only at the end of selected time intervals.
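As an illustration of the one-way-function key refresh described above, a minimal Python sketch is given below. SHA-256 is used as the one-way function f purely for illustration; the papers cited do not prescribe a particular f, and the example key is hypothetical.

import hashlib

def refresh_key(old_key: bytes) -> bytes:
    # One-way key update: new shared key k0x = f(kx).
    # SHA-256 is only an illustrative choice of one-way function.
    return hashlib.sha256(old_key).digest()

# Only users holding the old shared key kx can derive the new key k0x;
# a revoked user who never knew kx cannot compute it.
kx = bytes.fromhex("00" * 32)          # hypothetical old shared key
k0x = refresh_key(kx)
print(k0x.hex())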
Theorem: In the basic structure, when one or more users are revoked, the group controller can distribute the new group key securely to the remaining users using at most one encrypted transmission.
Proof: We consider three possible cases of user revocation from the basic structure.
Case 1: When no users are revoked, the group controller sends the new group key using the current group key that is known
to all the users. Although this trivial case is not important for the basic scheme, it is important for the hierarchical algorithm
we describe in later sections.
Case 2: When m < K users are revoked from the group, the group controller needs to distribute the new group key to the remaining K − m users. The group controller uses the shared key associated with the remaining subset of K − m users to send the new group key; that is, it transmits the new group key encrypted under this shared key. As the revoked users do not know this shared key, only the current users will be able to decrypt this message.
Case 3: All users are revoked from the group. The group controller does not need to distribute the new group key, and thus,
does not send any messages. We note that once the new group key is distributed, the current users update the necessary
shared keys using the one-way function technique we described in Section 2. However, the basic structure requires the group
controller and the users to store a large number of keys, which is not practical if the group is large. In the next Section, we
present our hierarchical algorithm to reduce the number of keys stored at the group controller and the users. Our hierarchical
algorithm preserves some attractive communication properties of the basic structure while reducing the storage requirement
for the shared keys.
III. KEY MANAGEMENT ALGORITHMS FOR ENCRYPTION
AES: The Advanced Encryption Standard (AES) was published by NIST (National Institute of Standards and Technology) in 2001. AES is a symmetric block cipher that is intended to replace DES as the approved standard for a wide range of applications. It has a variable key length of 128, 192 or 256 bits and encrypts data blocks of 128 bits in 10, 12 or 14 rounds depending on the key size. AES encryption is fast and flexible among block ciphers and can be implemented on various platforms. AES can be operated in different modes of operation such as ECB, CBC, CFB, OFB and CTR; in certain modes of operation it works as a stream cipher.
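The following sketch shows AES encryption and decryption in CBC mode, one of the modes of operation listed above. It assumes the third-party PyCryptodome package, which is not referenced in the paper; the key, IV and message are illustrative.

# pip install pycryptodome  (assumed third-party package)
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)             # 128-bit key; 192- or 256-bit keys are also allowed
iv = get_random_bytes(16)              # AES block size is 128 bits
plaintext = b"multicast payload"

cipher = AES.new(key, AES.MODE_CBC, iv)
ciphertext = cipher.encrypt(pad(plaintext, AES.block_size))

decipher = AES.new(key, AES.MODE_CBC, iv)
recovered = unpad(decipher.decrypt(ciphertext), AES.block_size)
assert recovered == plaintext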
RC4: RC4 is a stream cipher designed in 1987 by Ron Rivest. It is officially termed "Rivest Cipher 4". Stream ciphers are more efficient for real-time processing. RC4 is a variable-key-size stream cipher with byte-oriented operations, based on the use of a random permutation. According to various analyses, the period of the cipher is greater than 10¹⁰⁰. Eight to sixteen machine operations are required per output byte, and the cipher runs very quickly in software. The algorithm is simple, fast and easy to explain, and it can be efficiently implemented in both software and hardware.
IV. RESULTS AND ANALYSIS
We compare the performance of our algorithms with tree-based key management algorithms, in which the group controller associates a set of keys with the nodes of a rooted tree and the users with the leaves of the tree. Each user knows the keys associated with the nodes on the path from itself to the root. To revoke a user, the group controller recursively distributes the changed keys at the higher levels in the key tree using the changed keys at the lower levels. To revoke multiple users, the group controller processes all the key updates in a single step. This reduces the cost of changing a key multiple times.
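The sketch below illustrates, under the assumption of a complete binary key tree stored in array order, which internal-node keys must be changed when a single leaf (user) is revoked; the indexing scheme is an assumption made for illustration and is not taken from the compared algorithms.

def keys_to_update(leaf_index: int, num_leaves: int) -> list:
    # Complete binary key tree in array order: node 1 is the root and the
    # children of node n are 2n and 2n+1; leaves occupy indices
    # num_leaves .. 2*num_leaves - 1.  Every key on the path from the revoked
    # leaf to the root is known to the revoked user and must be changed.
    node = num_leaves + leaf_index
    path = []
    while node > 1:
        node //= 2                      # move to the parent node
        path.append(node)
    return path                         # bottom-up: lower-level keys encrypt higher-level ones

# Revoking user 5 in a group of 8 users touches log2(8) = 3 keys.
print(keys_to_update(5, 8))             # [6, 3, 1]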
|Case3|
|Case2|
V.
|Case1|
CONCLUSION
The main objective of this paper is to provide fast data encryption and decryption and strong security for data communication in a multiple-organization system. The approach takes very little time for data processing compared to other algorithms and is also inexpensive to implement in any kind of large network. We addressed the problem of data confidentiality in scenarios where attackers can observe the communication between principals and can also fully compromise the principals after the data has been exchanged, thereby revealing the entire state of the principals' devices. In this paper, we presented a family of algorithms that provide a trade-off between the number of keys maintained by the users and the time required for rekeying due to the revocation of multiple users. Encryption algorithms play an important role in communication security, where encryption time, memory usage, output bytes and battery power are the major issues of concern. The performance metrics were throughput, CPU process time, memory utilization, encryption and decryption time and key size variation. Our algorithms are also suited for overlay multicast applications; in overlay multicast, the end nodes perform the processing and forwarding of multicast data without using IP multicast support. The solutions provide users with full control over their data privacy. Our future work will include experiments on image and audio data, and the focus will be on improving encryption time and reducing memory usage.
ACKNOWLEDGMENT
Grateful thanks to all the authors whose work contributed to this paper; the listed references were also used in developing it.
REFERENCES
1. DiaaSalama AbdElminaam, Hatem Mohamad Abdual Kader, Mohiy Mohamed Hadhoud, "Evaluating the Performance of Symmetric Encryption Algorithms", International Journal of Network Security, Vol. 10, No. 3, pp. 216-222, May 2010.
2. Diaa Salama, Abdul Kader, Mohiy Hadhoud, "Studying the Effect of Most Common Encryption Algorithms", International Arab Journal of e-Technology, Vol. 2, No. 1, January 2011.
3. Lepakshi Goud T, "Dynamic Routing with Security using a DES Algorithm", NCETIT-2011.
4. Donal, "Distributed System", Connexions, IBC Plaza, Houston, Aug 25, 2009.
5. G. Manikandan, R. Manikandan, G. Sundarganesh, "A New Approach for Generating Strong Key in RivestCipher4 Algorithm", Journal of Theoretical and Applied Information Technology, 2011, pp. 113-119.
6. N. Sairam, G. Manikandan, G. Krishnan, "A Novel Approach for Data Security Enhancement using Multi Level Encryption Scheme", International Journal of Computer Science and Information Technologies, Vol. 2(1), 2011, pp. 469-473.
7. Deguang Le, Jinyi Chang, Xingdou Gou, Ankang Zhang, Conglan Lu, "Parallel AES Algorithm for Fast Data Encryption on GPU", 2nd International Conference on Computer Engineering and Technology, Vol. 6, 2010, pp. v6-1-v6-6.
8.
9. G. Manikandan, R. Manikandan, P. Rajendiran, G. Krishnan, G. Sundar Ganesh, "An Integrated Block and Stream Cipher Approach for Key Enhancement", Journal of Theoretical and Applied Information Technology, 2011, Vol. 28(2), pp. 83-87.
10. G. Manikandan, M. Kamarasan, P. Rajendiran, R. Manikandan, "A Hybrid Approach for Security Enhancement by Modified Crypto-Stegno Scheme", European Journal of Scientific Research, Vol. 60(2), 2011, pp. 224-230.
11. G. Manikandan, N. Sairam, M. Kamarasan, "A New Approach for Improving Data Security using Iterative Blowfish Algorithm", Research Journal of Applied Sciences, Engineering and Technology, Vol. 4(6), 2012, pp. 603-607.
12. G. Manikandan, N. Sairam, M. Kamarasan, "A Hybrid Approach for Security Enhancement by Compressed Crypto-Stegno Scheme", Research Journal of Applied Sciences, Engineering and Technology, Vol. 4(6), 2012, pp. 608-614.
13. B. Karthikeyan, V. Vaithiyanathan, B. Thamotharan, M. Gomathymeenakshi and S. Sruthi, "LSB Replacement Stegnography in an Image using Pseudorandomised Key Generation", Research Journal of Applied Sciences, Engineering and Technology, 4(5), 2012, pp. 491-494.
14. Alanazi Hamdan O., Zaidan B.B., Zaidan A.A., Jalab Hamid A., Shabbir M. and Al-Nabhani Y., "New Comparative Study Between DES, 3DES and AES within Nine Factors", Journal of Computing, Volume 2, Issue 3, March 2010, ISSN 2151-9617, pp. 152-157.
15. Salama Diaa, Kader Hatem Abdual, and Hadhoud Mohiy, "Studying the Effects of Most Common Encryption Algorithms", International Arab Journal of e-Technology, Vol. 2, No. 1, January 2011, pp. 1-10.
16. Elminaam Diaa Salama Abdual, Kader Hatem Mohamed Abdual and Hadhoud Mohiy Mohamed, "Evaluating The Performance of Symmetric Encryption Algorithms", International Journal of Network Security, Vol. 10, No. 3, pp. 213-219, May 2010.
17. Elminaam Diaa Salama Abdual, Kader Hatem Mohamed Abdual and Hadhoud Mohiy Mohamed, "Tradeoffs between Energy Consumption and Security of Symmetric Encryption Algorithms", International Journal of Computer Theory and Engineering, Vol. 1, No. 3, pp. 325-333, August 2009.
18. Elminaam Diaa Salama Abdual, Kader Hatem Mohamed Abdual and Hadhoud Mohiy Mohamed, "Evaluating the Effects of Symmetric Cryptography Algorithms on Power Consumption for Different Data Types", International Journal of Network Security, Vol. 11, No. 2, pp. 78-87, Sept. 2010.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 3 (September 2012) PP: 66-69
Comparison of Electron Transport Properties in InN at High Electric Field for Zincblende and Wurtzite Crystal Structure
Hadi Arabshahi1, Adeleh Zamiri2 and Hossain Tayarani3
1 Physics Department, Payame Noor University, P. O. Box 19395-4697, Tehran, Iran
2 Physics Department, Higher Education Khayyam, Mashhad, Iran
Abstract––The drift velocity and mobility of zincblende and wurtzite InN have been calculated using a Monte Carlo technique. All dominant scattering processes have been included in our calculations, and our simulation is based on a three-valley model. Our results show that zincblende InN has a peak electron drift velocity of 3.6×10⁵ m/s at a breakdown electric field of 1.71×10⁷ V/m, while wurtzite InN has a peak drift velocity of 2.8×10⁵ m/s at a breakdown electric field of 3.34×10⁷ V/m.
Keywords––Drift Velocity; Mobility; Zincblende; Wurtzite; Electric Field.
I. INTRODUCTION
The small band gap of InN and its higher peak electron drift velocity compared with other III-nitride materials make InN a promising material for high-speed electron devices, and it could also be suitable for optoelectronic devices [1]. InN normally crystallizes in the wurtzite (hexagonal) structure; the zincblende (cubic) form has been reported to occur in films containing both polytypes [2]. Compared with wurtzite (WZ) InN, much less is known about the properties of the zincblende (ZB) phase of InN. However, ZB InN layers have been grown and the electronic structure of this material has been calculated [3,4]. Unfortunately, so far few results on carrier transport in this InN phase have been reported. This lack has motivated us to recalculate the band structure of ZB InN and to perform low- and high-field transport simulations using the ensemble MC method [5,6]. The purpose of the present paper is to calculate the electron drift velocity for various temperatures and ionized-impurity concentrations. The formulation itself applies only to the central Γ valley of the conduction band and the two nearest valleys. The calculated band structure has been approximated by a nonparabolic three-valley model with the conduction band minima located at the Γ, U and K points for the wurtzite structure and at the Γ, X and L points for the zincblende structure in the Brillouin zone. The valleys have been approximated by the nonparabolic dispersion relation ħ²k²/(2m*) = ε(1 + αε), where ħ is the reduced Planck constant, k is the wave vector, m* is the effective mass, α is the nonparabolicity coefficient and ε is the electron energy relative to the bottom of the valley [7].
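A short Python sketch of this nonparabolic approximation is given below: it solves ħ²k²/(2m*) = ε(1 + αε) for the Γ-valley energy and evaluates the corresponding group velocity v = ħk/[m*(1 + 2αε)], using the ZB InN Γ-valley effective mass and nonparabolicity of Table 1. The wave-vector values are illustrative only.

import numpy as np

HBAR = 1.0545718e-34        # reduced Planck constant (J s)
M0 = 9.10938e-31            # free-electron mass (kg)
QE = 1.602176e-19           # elementary charge (C), for J <-> eV conversion

def gamma_valley_energy(k, m_star=0.07 * M0, alpha=0.245):
    # Solve hbar^2 k^2 / (2 m*) = eps (1 + alpha eps) for eps (in eV).
    rhs_eV = (HBAR * k) ** 2 / (2.0 * m_star) / QE
    return (-1.0 + np.sqrt(1.0 + 4.0 * alpha * rhs_eV)) / (2.0 * alpha)

def group_velocity(k, m_star=0.07 * M0, alpha=0.245):
    # v = (1/hbar) d(eps)/dk = hbar k / (m* (1 + 2 alpha eps))
    eps = gamma_valley_energy(k, m_star, alpha)
    return HBAR * k / (m_star * (1.0 + 2.0 * alpha * eps))

k = np.linspace(1e8, 2e9, 5)            # illustrative wave vectors (1/m)
print(group_velocity(k))                # group velocities (m/s)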
II. SIMULATION METHOD
Ensemble Monte Carlo techniques have been used for well over 30 years as a numerical method to simulate nonequilibrium transport in semiconductor materials and devices, and they have been the subject of numerous books and reviews [8]. In order to calculate the electron drift velocity at large electric fields, consideration of the conduction band satellite valleys is necessary, so a three-valley model for the conduction band is employed. We assume that all donors are ionized and that the free-electron concentration is equal to the dopant concentration. For each simulation, the motion of ten thousand electron particles is examined, with the temperature set to 300 K and the doping concentration set to 10²³ cm⁻³. Electrons in the bulk material suffer scattering by polar optical, nonpolar optical and acoustic phonons, intervalley phonon scattering, and ionized impurity scattering. Electron transport is studied using the Monte Carlo method.
The band structure and material parameters necessary for calculating the scattering probabilities used in the present Monte Carlo simulations are given in Table 1 [2].
Parameter | Wurtzite InN | Zincblende InN
Band gap (eV) | 1.89 | 1.94
Electron effective mass (m*) | 0.11 | 0.07
Nonparabolicity (eV⁻¹) | 0.419 | 0.245
Low-frequency dielectric constant | 15.3 | 12.45
High-frequency dielectric constant | 8.4 | 8.4
Density (kg m⁻³) | 6810 | 6970
Sound velocity (m s⁻¹) | 6240 | 5200
Acoustic deformation potential (eV) | 7.1 | 7.1
Polar optical phonon energy (eV) | 0.089 | 0.089
Table 1: Important parameters used in our calculations for WZ and ZB InN [2]
III. RESULTS AND DISCUSSION
Figure 1 shows the electron drift velocity in both ZB and WZ InN calculated as a function of the applied electric field, with and without dislocations, for a crystal temperature of 300 K, assuming free-electron and ionized impurity concentrations of 10²³ cm⁻³. The difference in the calculated drift velocities between the two crystal phases is obvious.
[Plot: electron drift velocity (×10⁵ m/s) versus electric field (×10⁷ V/m) for WZ InN and ZB InN at T = 300 K]
Figure 1. Electron drift velocity as a function of the applied electric field for ZB and WZ InN at 300 K.
In the case of ZB InN, the velocity increases more rapidly with increasing electric field owing to the lower effective mass in the central valley, while the velocity increase in the case of WZ InN is relatively slower. Similar to the experimental results, it is found that at a field of 3.33×10⁷ V/m the peak velocity for WZ InN is about 80% of that in ZB InN (2.8×10⁵ m/s versus 3.6×10⁵ m/s).
Any further increase of the electric field strength results in a reduced drift velocity for both materials, and a region of negative differential resistance is observed. This is due to the transfer of electrons from the central valley, where they have a smaller effective mass, to the satellite valleys, where they have a larger effective mass.
[Plots: electron drift velocity (×10⁵ m/s) versus electric field (×10⁷ V/m) for WZ InN and ZB InN at 300 K, 450 K and 600 K]
Figure 2. Calculated electron drift velocity as a function of electron field strength for temperature of 300, 450 and 600 K.
Figure 2 shows the calculated steady-state electron drift velocity in bulk ZB and WZ InN as a function of the applied electric field at various lattice temperatures. The peak drift velocity decreases while the threshold field increases as the lattice temperature rises from 300 K to 600 K.
Next, we compare the low-field electron mobility of the two InN phases. The low-field mobility has been calculated from the simple relation μ = Vd/E, taken in the linear region of the simulated Vdrift(E) curves, where Vd is the electron drift velocity. In figure 3 the calculated low-field mobility μ for ZB and WZ InN is presented. Satellite valleys do not affect the low-field mobility calculation since no intervalley transfer occurs at low electric fields; consequently, the low-field mobility is attributed solely to transport of Γ valley electrons.
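As a sketch of this low-field mobility extraction, the least-squares slope of the drift velocity against the electric field can be taken in the linear region; the data points below are hypothetical values chosen only to be consistent with the ZB InN mobility reported in this section, since the simulated points themselves are not tabulated.

import numpy as np

# Hypothetical low-field points of a simulated v_d(E) curve (V/m, m/s)
E = np.array([0.5e6, 1.0e6, 1.5e6, 2.0e6])
vd_zb = 0.183 * E                        # consistent with mu ~ 0.183 m^2/Vs for ZB InN

slope, intercept = np.polyfit(E, vd_zb, 1)   # linear fit v_d = mu * E
print("low-field mobility mu = %.3f m^2/Vs" % slope)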
[Plots: drift velocity (×10⁵ m/s) versus electric field (×10⁷ V/m) with linear fits in the low-field region for ZB InN and WZ InN]
Figure 3. Mobility of two InN phases at low-field electrons.
The electron mobility at low electric field is about 0.183 m2/Vs for ZB InN compared to 0.075 m2/Vs for WZ InN.
IV. CONCLUSIONS
Zincblende InN has a higher peak electron drift velocity than wurtzite InN because of its smaller effective mass. In both zincblende and wurtzite InN the peak electron drift velocity decreases as the lattice temperature increases from 300 K to 600 K. InN has a higher mobility in the zincblende structure than in the wurtzite structure because its effective mass is smaller in ZB than in WZ.
REFERENCES
1. Maslar J. E. and Wang C. A., The Effect of Strain Rate Variations on the Microstructure and Hot Deformation Behaviour of AA2024 Aluminium Alloy, Appl. Spectrosc., 61, 2007, pp. 1093-1098.
2. Bennett H. and Heckert A., Calculation of Electron Hall Mobility in GaSb, GaAs and GaN Using an Iterative Method, J. Appl. Phys., 98, 2005, pp. 103705-103709.
3. Arabshahi H., Comparison of SiC and ZnO Field Effect Transistors for High Power Applications, Modern Physics Letters B, 20, 2006, pp. 787-793.
4. Arabshahi H., Calculation of Electron Hall Mobility in GaSb, GaAs and GaN Using an Iterative Method, African Physical Review, 2007, pp. 45-51.
5. Arabshahi H., The Effect of Polaron Scattering Mechanisms in Electron Transport Properties in Semiconductor Devices, International Journal of Science and Advanced Technology, Volume 2, No. 2, February 2012, pp. 84-86.
6. Turner G. W., Eglash S. J. and Strauss A. J., Comparison of High Field Steady State and Transient Electron Transport in Wurtzite GaN, AlN and InN, J. Vac. Sci. Technol. B, 11, 1993, pp. 864-870.
7. Meyer J. and Ram-Mohan L. R., Comparison of Steady-State and Transient Electron Transport in InAs, InP and GaAs, Appl. Phys. Lett., 67, 1995, pp. 2756-2762.
8. Arabshahi H., The Frequency Response and Effect of Trap Parameters on the Characteristics of GaN MESFETs, The Journal of Damghan University of Basic Sciences, 1, 2007, pp. 45-49.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September2012) PP: 58-67
An Experimental and Analytical Study On Composite High
Strength Concrete – Fibre Reinforced Polymer Beams
R.S.Ravichandran1, K. Suguna2 and P.N. Raghunath3
1 Assistant Professor, 2 Professor, 3 Professor
1,2,3 Department of Civil and Structural Engineering, Annamalai University, Chidambaram-608 001, India
Abstract––This paper presents the results of a nonlinear finite element (FE) analysis conducted on reinforced High Strength Concrete (HSC) beams strengthened with Glass Fibre Reinforced Polymer (GFRP) laminates. Modeling the complex behavior of reinforced concrete, which is both non-homogeneous and anisotropic, is a difficult challenge in the finite element analysis of civil engineering structures. The accuracy and convergence of the solution depend on factors such as mesh density, constitutive properties of concrete, convergence criteria and tolerance values. Three-dimensional finite element models were developed using a smeared crack approach for the concrete and three-dimensional layered elements for the FRP composites. The results obtained through finite element analysis show reasonable agreement with the test results.
Keywords–– Reinforced Concrete, Nonlinear Analysis, Finite Element Analysis and GFRP.
I. INTRODUCTION
Strengthening or upgrading becomes necessary when the structural elements cease to provide satisfactory strength
and serviceability. Fiber Reinforced Polymer (FRP) composites can be effectively used as an external reinforcement for
upgrading such structurally deficient reinforced concrete structures. The most common types of FRP are aramid, glass, and
carbon; AFRP, GFRP, and CFRP respectively. Many researchers have found that FRP composites applied to reinforced
concrete members provide efficiency, reliability and cost effectiveness in upgradation[4,5,9]. The use of FEA has been the
preferred method to study the behaviour of concrete. Wolanski[2] studied the flexural behavior of reinforced and prestressed
concrete beams using finite element analysis. The simulation work contains areas of study such as Behavior at First
Cracking, Behavior at Initial Cracking, Behavior beyond First Cracking, Behavior of Reinforcement Yielding and Beyond,
Strength Limit State, Load-Deformation Response of control beam and Application of Effective Prestress, Self-Weight, Zero
Deflection, Decompression, Initial Cracking, Secondary Linear Region, Behavior of Steel Yielding and Beyond, Flexural
Limit State of prestressed concrete beam. Arduini, et al.[3] used finite element method to simulate the behaviour and failure
mechanisms of RC beams strengthened with FRP plates. The FRP plates were modeled using two dimensional plate
elements. However, the crack patterns were not predicted in that study. Kachlakev, et al. [7] studied the finite element modeling of reinforced concrete structures strengthened with FRP laminates using ANSYS; the objective of this simulation was to examine the structural behaviour of the Horsetail Creek Bridge with and without FRP laminates and to establish a methodology for applying computer modeling to reinforced concrete beams and bridges strengthened with FRP laminates.
II. EXPERIMENTAL BEAMS
2.1 Materials used
The concrete used for all beam specimens had a compressive strength of 64MPa. The concrete consisted of 450
kg/m3 of ordinary Portland cement, 780 kg/m3 of fine aggregate, 680 kg/m3 of coarse aggregate, 450 kg/m3 of medium
aggregate, 0.36 water/cement ratio and 0.8% of hyperplasticizer. The longitudinal reinforcement consisted of high yield
strength deformed bars of characteristic strength 456MPa. The lateral ties consisted of mild steel bars of yield strength
300MPa. The specimens were provided with 8mm diameter stirrups at 150 mm spacing. Two types of GFRP laminates were
used for the study, namely, Chopped Strand Mat (CSM) and Uni-directional Cloth (UDC) of 3mm and 5mm thickness. The
properties of GFRP are shown in Table 1.
TABLE 1 Properties of GFRP Laminates
Type of GFRP | Thickness (mm) | Elasticity Modulus (MPa) | Ultimate Elongation (%) | Tensile Strength (MPa)
Chopped Strand Mat | 3 | 7467.46 | 1.69 | 126.20
Chopped Strand Mat | 5 | 11386.86 | 1.37 | 156.00
Uni-Directional Cloth | 3 | 13965.63 | 3.02 | 446.90
Uni-Directional Cloth | 5 | 17365.38 | 2.60 | 451.50
2.2 Details of Beams
A total of 15 beams were tested. The main test variables considered in the study were steel reinforcement ratio,
type of GFRP laminate and thickness of GFRP laminate. The beams were 150 x 250 mm in cross-section and 3000 mm in
length as shown in Figs.1-3. The beams of A series were reinforced with two numbers of 10 mm diameter bars giving a steel
ratio of 0.419%. The beams of B series were reinforced with three 10 mm diameter bars giving a steel ratio of 0.628%. The
beams of C series were reinforced with three 12 mm diameter bars giving a steel ratio of 0.905%. Stirrups of 8 mm diameter,
at a spacing of 150 mm, were used for the beams. Out of fifteen beams, three served as control beams and the remaining
beams were strengthened with GFRP laminates. The details of the beams are presented in Table 2.
Fig.1 Details of ‘A’ Series Beams
Fig.2 Details of ‘B’ Series Beams
Fig.3 Details of ‘C’ Series Beams
TABLE 2 Specimen Details
Beam Series | Beam Designation | % Steel Reinforcement | GFRP Laminate Type | GFRP Laminate Thickness (mm) | Composite Ratio
A | RA | 0.419 | - | - | -
A | RAC3 | 0.419 | CSM | 3 | 2.864
A | RAC5 | 0.419 | CSM | 5 | 4.774
A | RAU3 | 0.419 | UDC | 3 | 2.864
A | RAU5 | 0.419 | UDC | 5 | 4.174
B | RB | 0.628 | - | - | -
B | RBC3 | 0.628 | CSM | 3 | 1.909
B | RBC5 | 0.628 | CSM | 5 | 3.183
B | RBU3 | 0.628 | UDC | 3 | 1.909
B | RBU5 | 0.628 | UDC | 5 | 3.183
C | RC | 0.905 | - | - | -
C | RCC3 | 0.905 | CSM | 3 | 1.909
C | RCC5 | 0.905 | CSM | 5 | 3.183
C | RCU3 | 0.905 | UDC | 3 | 1.909
C | RCU5 | 0.905 | UDC | 5 | 3.183
2.3 GFRP Laminate Bonding Technique
Glass Fibre Reinforced Polymer (GFRP) laminates were used for strengthening the beams. The soffit of the beam
was well cleaned with a wire brush and roughened with a surface-grinding machine. Two part epoxy adhesive consisting of
epoxy resin and silica filler was used to bond the GFRP laminates. The adhesive was spread over the beam soffit with the
help of a spreader. The GFRP laminate was applied gently by pressing the sheet from one end of the beam to the other along
the length of beam.
2.4 Experimental Test Set-up
All the beams were tested under four point bending in a loading frame of 750 kN capacity. The effective span of
the beam was 2800 mm with 100 mm bearing at the ends. The deflections were measured at mid-span and load-points using
dial gauges of 0.01 mm accuracy. The crack widths of beams were measured using a crack detection microscope with a least
count of 0.02 mm. Figure 4 shows the loading arrangement and instrumentation adopted for the test.
Fig.4 Experimental Test Set-up
III. FINITE ELEMENT MODELING
3.1 Element types
3.1.1 Reinforced Concrete
Solid65 element was used to model the concrete. This element has eight nodes with three degrees of freedom at
each node – translations in the nodal x, y, and z directions. This element is capable of plastic deformation, cracking in three
orthogonal directions, and crushing. A schematic of the element is shown in Figure 5 [6]. Smeared cracking approach has
been used in modeling the concrete in the present study [8].
3.1.2 Reinforcement
The geometry and node locations for Link 8 element used to model the steel reinforcement are shown in Figure 6.
Two nodes are required for this element. Each node has three degrees of freedom, translations in the nodal x, y, and z
directions. The element is also capable of plastic deformation.
3.1.3 FRP Composites
A layered solid element, Solid46, was used to model the FRP composites. The element has three degrees of freedom at each node: translations in the nodal x, y, and z directions. The geometry, node locations, and the coordinate
system are shown in Figure 7.
Fig.5 Solid65 element geometry.
Fig.6 Link 8 – 3-D spar element
Fig. 7 Solid46 – 3-D layered structural solid
For concrete, ANSYS requires input data for material properties as follows:
• Elastic modulus (Ec), MPa
• Ultimate uniaxial compressive strength (f'c), MPa
• Ultimate uniaxial tensile strength (modulus of rupture, fr), MPa
• Poisson's ratio (ν = 0.2)
• Shear transfer coefficient (βt), which represents the condition of the crack face. The value of βt ranges from 0.0 to 1.0, with 0.0 representing a smooth crack (complete loss of shear transfer) and 1.0 representing a rough crack (no loss of shear transfer) [1]. The shear transfer coefficient used in the present study varied between 0.3 and 0.4.
• Compressive uniaxial stress-strain relationship for concrete
For the steel reinforcement, the stress-strain curve for the finite element model was based on the actual stress-strain curve obtained from a tensile test. Material properties for the steel reinforcement are as follows:
• Elastic modulus (Es), MPa
• Yield stress (fy), MPa
• Poisson's ratio (ν)
Material properties for the GFRP laminates are as follows:
• Elastic modulus
• Shear modulus
• Major Poisson's ratio
Fig.8 Simplified Uniaxial Stress-Strain Curve for Concrete
Fig.9 Stress-Strain Curve for Steel Reinforcement
3.2 Cracking of Concrete
The tension failure of concrete is characterized by a gradual growth of cracks, which join together and finally
disconnect larger parts of the structure. It is a usual assumption that crack formation is a brittle process and the strength in
the tension-loading direction abruptly goes to zero after big cracks or it can be simulated with gradually decreasing strength.
The cracked concrete material is generally modeled by a linear-elastic fracture relationship. Two fracture criteria are
commonly used: the maximum principal stress and the maximum principal strain criteria. When a principal stress or strain exceeds its limiting value, a crack is assumed to occur in a plane normal to the direction of the principal stress or strain. This crack direction is then fixed in the subsequent loading sequences. In this study the smeared-crack model was used. A three-dimensional failure surface for concrete is shown in Figure 9.
Fig.9 3-D Failure surface for concrete
3.3 Finite element discretization
The finite element analysis requires meshing of the model, for which the model is divided into a number of small
elements, and after loading, stress and strain are calculated at integration points of these small elements. An important step in
finite element modeling is the selection of the mesh density. A convergence of results is obtained when an adequate number
of elements is used in a model. This is practically achieved when an increase in the mesh density has a negligible effect on
the results.
3.4 Non-linear solution
In nonlinear analysis, the total load applied to a finite element model is divided into a series of load increments
called load steps. At the completion of each incremental solution, the stiffness matrix of the model is adjusted to reflect
nonlinear changes in structural stiffness before proceeding to the next load increment. The ANSYS program (ANSYS 2010)
uses Newton-Raphson equilibrium iterations for updating the model stiffness. Newton-Raphson equilibrium iterations
provide convergence at the end of each load increment within tolerance limits. In this study, for the reinforced concrete solid
elements, convergence criteria were based on force and displacement, and the convergence tolerance limits were initially
selected by the ANSYS program. It was found that convergence of solutions for the models was difficult to achieve due to
the nonlinear behavior of reinforced concrete. Therefore, the convergence tolerance limits were increased to a maximum of 5
times the default tolerance limits in order to obtain convergence of the solutions.
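The following is a minimal Python sketch of the incremental-iterative scheme described above (load steps with Newton-Raphson equilibrium iterations and a force-based convergence check). It is not the ANSYS implementation: a single-degree-of-freedom softening spring with an arbitrary resisting-force law stands in for the beam model.

def internal_force(u):
    # Hypothetical nonlinear resisting force of a 1-DOF softening system
    return 50.0 * u - 4.0 * u ** 2

def tangent_stiffness(u):
    return 50.0 - 8.0 * u

def solve(total_load=100.0, n_steps=10, tol=1e-6, max_iter=25):
    # Apply the total load in increments; at each load step iterate with the
    # updated tangent stiffness until the residual force is within tolerance.
    u = 0.0
    for step in range(1, n_steps + 1):
        target = total_load * step / n_steps
        for _ in range(max_iter):
            residual = target - internal_force(u)
            if abs(residual) < tol:
                break
            u += residual / tangent_stiffness(u)
        print("step %2d: load = %6.1f, displacement = %.4f" % (step, target, u))
    return u

solve()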
TABLE 3. Material Models for the Calibration
Material Model 1 (Element Type: Solid65). Material Properties: Linear Isotropic: Ex = 40000 MPa, PRXY = 0.3. Multilinear Isotropic (stress in MPa, strain): Point 1 (19.2, 0.00048), Point 2 (39.68, 0.0019), Point 3 (54.01, 0.0027), Point 4 (58.56, 0.0029), Point 5 (64, 0.0032). Concrete: ShfCf-Op = 0.3, ShfCf-Cl = 1, UnTensSt = 5.6, UnCompSt = -1, BiCompSt = 0, HydroPrs = 0, BiCompSt = 0, UnTensSt = 0, TenCrFac = 0.
Material Model 2 (Element Type: Solid46). Material Properties: Linear Isotropic: Ex = 7467.46 MPa, PRXY = 0.3.
Material Model 3 (Element Type: Link8). Material Properties: Linear Isotropic: Ex = 2.0E+05 MPa, PRXY = 0.3. Bilinear Isotropic: Yield Stress = 456 MPa, Tangent Modulus = 0.
Figs. 10-17 show the finite element modeling of reinforced high strength concrete beams strengthened with FRP laminates.
Fig.10. Modeled Steel Reinforcement
Fig.11. Modeled Concrete and Steel
Fig.12. Modeled Concrete, Steel and FRP
Fig.13. Full Scale Mesh Modeled Concrete, Steel and FRP
Fig.14. Flexural Crack Pattern
Fig.15. Flexural Crack Pattern
Fig.17. Concrete Crack Signs
Fig.16. Flexural Crack Signs
IV. RESULTS AND DISCUSSION
4.1 Experimental Test Results
Table 4 summarizes the test results at the first crack, yield and ultimate stages of the non-strengthened and strengthened beams.
TABLE 4 Principal Results of Tested Beams
(Loading stages: first crack Pcr, Δcr; yield Py, Δy; ultimate Pu, Δu)
Beam Designation | Pcr (kN) | Δcr (mm) | Py (kN) | Δy (mm) | Pu (kN) | Δu (mm)
RA | 14.39 | 1.26 | 29.42 | 7.91 | 41.68 | 21.05
RAC3 | 16.52 | 1.41 | 36.77 | 9.02 | 51.48 | 33.46
RAC5 | 21.28 | 3.67 | 46.58 | 10.1 | 66.19 | 46.81
RAU3 | 32.94 | 7.98 | 51.48 | 11.42 | 71.09 | 53.26
RAU5 | 36.81 | 9.23 | 53.7 | 10.74 | 78.45 | 57.21
RB | 28.32 | 3.68 | 39.22 | 8.11 | 53.93 | 31.28
RBC3 | 30.95 | 4.71 | 51.48 | 11.35 | 61.29 | 36.23
RBC5 | 32.17 | 4.97 | 53.24 | 12.41 | 63.74 | 56.91
RBU3 | 33.69 | 9.35 | 58.8 | 12.85 | 88.25 | 61.04
RBU5 | 39.41 | 11.14 | 63 | 12.69 | 100.51 | 65.59
RC | 30.37 | 4.45 | 44.13 | 10.19 | 58.84 | 37.89
RCC3 | 31.88 | 3.85 | 44.73 | 12.34 | 66.19 | 51.42
RCC5 | 34.14 | 1.94 | 53.95 | 13.95 | 98.10 | 55.28
RCU3 | 34.33 | 4.41 | 76.02 | 15.28 | 102.81 | 62.38
RCU5 | 36.78 | 4.36 | 78.14 | 15.90 | 112.62 | 65.00
The first crack loads were obtained by visual examination. At this stage, the strengthened beams exhibit a
maximum increase of 174% compared to the control beams. The yield loads were obtained corresponding to the stage of
loading beyond which the load-deflection response was not linear. At the yield load level, the GFRP strengthened beams
showed an increase upto 166% compared to the control beams. The ultimate loads were obtained corresponding to the stage
of loading beyond which the beam would not sustain additional deformation at the same load intensity. At the ultimate load
level, the strengthened beams showed a maximum increase of 170% when compared with the control beams. From the
experimental results, it can be observed that, at all load levels, a significant increase in strength was achieved by externally
bonded GFRP laminates. This increase may be attributed to the increase in the tensile cracking strength of concrete due to
confinement by the laminates. For the A series beams of steel ratio 0.419%, the ultimate load increased by 23.51% and
58.81% for 3mm and 5mm thick CSMGFRP laminated beams. For beams strengthened with 3mm and 5mm thick
UDCGFRP laminates, the ultimate load increased by 70.56% and 88.22%. The CSMGFRP strengthened HSC beams
exhibited an increase in deflection which varied from 58.95% to 122.38% at ultimate load level. The UDCGFRP
strengthened HSC beams exhibited an increase in deflection which varied from 153% to 171% at ultimate load level.
For the B series beams of steel ratio 0.628%, the ultimate load increased by 13.65% and 18.19% for 3mm and 5mm thick
CSMGFRP laminated beams. For beams strengthened with 3mm and 5mm thick UDCGFRP laminates, the ultimate load
increased by 63.64% and 86.37%. The CSMGFRP strengthened HSC beams exhibited an increase in deflection which varied
from 15.82% to 170.36% at ultimate load level. The UDCGFRP strengthened HSC beams exhibited an increase in deflection
which varied from 189.98% to 211.59% at ultimate load level. For the C series beams of steel ratio 0.905%, the ultimate
load increased by 12.49% and 66.72% for 3mm and 5mm thick CSMGFRP laminated beams. For beams strengthened with
3mm and 5mm thick UDCGFRP laminates, the ultimate load increased by 74.72% and 91.40%. The CSMGFRP
strengthened HSC beams exhibited an increase in deflection which varied from 144.27% to 162.61% at ultimate load level.
The UDCGFRP strengthened HSC beams exhibited an increase in deflection which varied from 196.34% to 208.78% at
ultimate load level.
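The percentage increases quoted above follow directly from the ultimate loads of Table 4; a small Python check for the A series beams is sketched below.

# Ultimate loads Pu (kN) taken from Table 4
Pu = {"RA": 41.68, "RAC3": 51.48, "RAC5": 66.19, "RAU3": 71.09, "RAU5": 78.45}

control = Pu["RA"]
for beam in ("RAC3", "RAC5", "RAU3", "RAU5"):
    increase = 100.0 * (Pu[beam] - control) / control
    print("%s: +%.2f%% over control beam RA" % (beam, increase))
# Prints 23.51%, 58.81%, 70.56% and 88.22%, matching the values quoted above.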
4.2 Ductility of beams
TABLE 5 Ductility Indices of Tested Beams
Beam Designation | Deflection Ductility | Energy Ductility
RA | 2.66 | 4.16
RAC3 | 3.71 | 6.82
RAC5 | 4.63 | 7.81
RAU3 | 4.66 | 7.98
RAU5 | 5.33 | 9.27
RB | 3.86 | 6.97
RBC3 | 3.19 | 7.58
RBC5 | 4.59 | 7.65
RBU3 | 4.75 | 7.78
RBU5 | 5.17 | 8.80
RC | 3.72 | 6.82
RCC3 | 4.17 | 7.86
RCC5 | 3.96 | 8.06
RCU3 | 4.08 | 7.89
RCU5 | 4.09 | 9.32
Ductility is considered an important factor in the design of structures, especially in seismic-prone areas. The ductility of a beam can be defined as its ability to sustain inelastic deformation without loss in load carrying capacity prior to failure. The ductility values for the beams were calculated based on deflection and energy absorption. The deflection ductility values were calculated as the ratio of the deflection at the ultimate point to the deflection at the yield point. The energy ductility values were calculated as the ratio of the cumulative energy absorption at the ultimate stage to the cumulative energy absorption at yield. The ductility indices for the tested beams are presented in Table 5. The deflection ductility for the strengthened beams showed a maximum increase of 94.36%.
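As an illustration of the deflection ductility calculation, the index for any beam follows from the yield and ultimate deflections of Table 4, as in the short sketch below (the energy ductility would additionally require the full load-deflection curves, which are not tabulated here).

# Yield and ultimate deflections (mm) from Table 4 for three A-series beams
deflections = {            # beam: (delta_y, delta_u)
    "RA":   (7.91, 21.05),
    "RAC3": (9.02, 33.46),
    "RAU5": (10.74, 57.21),
}

for beam, (dy, du) in deflections.items():
    print("%s: deflection ductility = %.2f" % (beam, du / dy))
# Gives 2.66, 3.71 and 5.33, matching the values listed in Table 5.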
4.3 Comparison of Experimental Results with FEM Results
The load-deflection responses for the tested beams are presented in Figs. 18-20. The general behaviour of the finite element models shows good agreement with observations and data from the experimental tests. The failure mechanism of a
reinforced concrete beam is modeled quite well using FEA and the failure load predicted is very close to the failure load
measured during experimental testing.
[Plot: load (kN) versus deflection (mm); experimental and FEM curves for beams RA, RAC3, RAC5, RAU3 and RAU5]
Figure 18. Load – Deflection Response of ‘A’ Series Beams
[Plot: load (kN) versus deflection (mm); experimental and FEM curves for beams RB, RBC3, RBC5, RBU3 and RBU5]
Figure 19. Load – Deflection Response of ‘B’ Series Beams
[Plot: load (kN) versus deflection (mm); experimental and FEM curves for beams RC, RCC3, RCC5, RCU3 and RCU5]
Figure 20. Load – Deflection Response of ‘C’ Series Beams
V. CONCLUSIONS
Based on the experimental results the following conclusions are drawn:
• Strengthening of HSC beams using GFRP laminates resulted in higher load carrying capacity. The percentage increase in ultimate load varied from 23.51% to 88.22% for GFRP strengthened HSC beams.
• The percentage increase in deflection at the ultimate stage varied from 15.82% to 211.59% for HSC beams strengthened with GFRP laminates.
• The GFRP strengthened HSC beams show enhanced ductility. The increase in deflection ductility varied from 17.36% to 100.38%.
• GFRP strengthened beams failed in flexural mode only.
• The general behaviour of the finite element models shows good agreement with observations and data from the experimental tests. The failure mechanism of a reinforced concrete beam is modeled quite well using FEA, and the predicted failure load is very close to the failure load measured during experimental testing.
REFERENCES
1. ANSYS Manual, Version 10.0.
2. Anthony J. Wolanski, "Flexural Behavior of Reinforced and Prestressed Concrete Beams Using Finite Element Analysis", Master's Thesis, Marquette University, Milwaukee, Wisconsin, 2004.
3. Arduini, M.; Di Tommaso, A.; Nanni, A., Brittle Failure in FRP Plate and Sheet Bonded Beams, ACI Structural Journal, 1997, 94(4), pp. 363-370.
4. Dong-Suk Yang; Sun-Kyu Park; Kenneth W., Flexural behavior of reinforced concrete beams strengthened with prestressed carbon composites, Composites Part B: Engineering, Volume 88, Issue 4, pp. 497-508, doi:10.1016/j.compstruct.2008.05.016.
5. Hsuan-Teh Hu; Fu-Ming Lin; Yih-Yuan Jan, "Nonlinear finite element analysis of reinforced concrete beams strengthened by fiber-reinforced plastics", Composite Structures, 63, 2004, pp. 271-281, doi:10.1016/S0263-8223(03)00174-0.
6. Ibrahim, A.M.; Sh. Mahmood, M., Finite element modeling of reinforced concrete beams strengthened with FRP laminates, European Journal of Scientific Research, Euro Journals Publishing Inc., 2009, 30(4), pp. 526-541.
7. Kachlakev, D.; Miller, T.; Yim, S.; Chansawat, K.; Potisuk, T., Finite Element Modeling of Reinforced Concrete Structures Strengthened with FRP Laminates, Oregon Department of Transportation, Research Group, 2001.
8. Pham, H.B.; Al-Mahaidi, R.; Saouma, V., Modeling of CFRP-concrete bond using smeared and discrete cracks, Composite Structures, 2006, Volume 75, Issues 1-4, pp. 145-150, Thirteenth International Conference on Composite Structures (ICCS/13), doi:10.1016/j.compstruct.2006.04.039.
9. Rabinovitch, O.; Frostig, Y., Experiments and analytical comparison of RC beams strengthened with CFRP composites, Composites Part B: Engineering, Volume 34, Issue 8, 1996, pp. 663-677, doi:10.1016/S1359-8368(03)00090-8.
10. Revathi, P.; Devdas Menon, Nonlinear finite element analysis of reinforced concrete beams, Journal of Structural Engineering, Structural Engineering Research Centre, Chennai, 2005, 32(2), pp. 135-137.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 5 (September2012) PP: 56-59
Development of Design Charts for the Deflection of a Loaded
Circular Plate Resting on the Pasternak Foundation
S. Kumar Shukla1, Ashish Gupta2 and Dilip Kumar3
1 Associate Professor, Discipline of Civil Engineering, School of Engineering, Edith Cowan University, Perth, WA 6027, Australia
2 Assistant Professor, Department of Civil Engineering, B.I.E.T., Jhansi – 284128 (U.P.)
3 Assistant Professor, Department of Civil Engineering, M.M.M.E.C., Gorakhpur (U.P.)
Abstract––In this paper, a nondimensional expression for the deflection of a thin circular elastic plate resting on the Pasternak foundation, obtained by the strain energy approach, has been adopted. Design charts have been developed for different nondimensional values of parameters such as the modulus of subgrade reaction (ks*) and the shear modulus (G*) for estimating the deflection (w*) of the circular plate.
Keywords––Pasternak Foundation; Modulus of Subgrade Reaction; Shear Modulus and Deflection.
Notations
a    Radius of the plate (m)
D    Flexural rigidity of the plate (N m)
G    Shear modulus of the Pasternak foundation (N/m²)
H    Thickness of the Pasternak shear layer (m)
ks   Modulus of subgrade reaction of the Pasternak foundation (N/m³)
q    Reaction of the foundation to the applied concentrated load (N/m²)
Q    Concentrated load applied at the centre of the plate (N)
r    Radius vector of any point on the plate (m)
rb   Contact radius (= b/a) (dimensionless)
w    Deflection of the plate (m)
υ    Poisson's ratio of the plate material (dimensionless)
I. INTRODUCTION
The soil-foundation interaction problem is of importance to many engineering situations, which include the study of
raft foundations, pavement of highways, water tanks, airport runways, buried pipelines and pile foundations. A major
proportion of these studies are primarily concerned with elastic analysis of the interaction problem. A complete analysis of
the interaction problem for elastic bodies generally requires the determination of stresses and strains within the individual
bodies in contact. For generalized foundations, the model assumes that at the point of contact between plate and foundation
there is not only pressure but also moments caused by interaction between the springs. Such a continuum model is often used
to represent flexible structures that are essentially one-dimensional in geometry. The ground may be represented by
foundation model like the one-parameter Winkler model or the two-parameter models proposed by Pasternak (1954) and
Vlazov and Leont’ev (1966) or higher order models like those developed by Kerr (1964) and Reissner (1958). In recent years,
efforts have been made to analyze the thin circular elastic plate resting on a two parameter Pasternak foundation under
concentrated and distributed loads for determining approximate plate deflection. The analysis also considers the effect of
tensionless characteristics of the Pasternak foundation, that is, lift off of the plate from the foundation surface. It was reported
that lift off appears when the slopes of the foundation surface and that of the plate are not equal. A circumferential shear force
develops along the contour of the plate due to the slope discontinuity of the plate and soil surfaces. It also reflects the lift off of the plate and, consequently, the soil reactions develop only within the range of the contact radius.
II. REVIEW OF PAST WORKS
A system of equations was developed by Reissner (1945) for the theory of bending of thin elastic plates which takes into account the transverse shear deformability of the plate. Mindlin (1951) reduced the three-dimensional equations of elasticity to a two-dimensional theory of flexural motions of isotropic elastic plates, computed the velocities of straight-crested waves, and found them to agree with those obtained from the three-dimensional theory. Galletly (1959) studied circular plates on a generalized elastic foundation. Rodriguez (1961) presented the three-dimensional bending of a ring of uniform cross-sectional area supported on a transverse elastic foundation. The modified Bessel function for the three-dimensional case of a finite beam or plate was given by Kerr (1964) by using the kernel function. Weitsman (1970) analyzed the one-parameter Winkler and Reissner foundations under a single concentrated load under the assumption that tensile stresses could not be transmitted across the interface between beams or plates and their supporting subgrade. This assumption leads to no-contact regions under the beam or plate: due to the no-contact region, the beam or plate lifts up away from its foundation and the subgrade does not support the structure. The lift-off phenomenon under various boundary conditions
based on the beam or plate was also reported.
Celep (1988) studied a thin circular plate, subjected to an eccentric concentrated load and moment as well as a uniformly distributed load, supported on an elastic foundation of the Winkler type that reacts in compression only, and reported that the plate will lift off when the foundation stiffness is low. It has also been reported that, if the edge of the plate is not anchored against lifting off, the plate may have only partial contact with the foundation.
Celep and Turhan (1990) analyzed the axisymmetric vibrations of a circular plate subjected to a concentrated
dynamic load at its center. The solution given by them was the extended solution given by Weitsman (1970). The solution
was carried out by using the Galerkin’s method and the modal functions of the completely free plate. This problem was a
nonlinear one but linear theory of plate was used in the problem.
Guler and Celep (1993) reported the extended study of axisymmetric vibrations of a circular plate on tensionless elastic
foundation by Celep and Turhan (1990). The solution was given by using Galerkin’s method using the modal functions of
the completely free plate. Various contact curves and the time variation of the central and edge deflections were shown. The most valuable result was that increasing the load values, while keeping the loading eccentricity constant, increases the displacements without changing the extent of the contact.
Utku et al. (2000) presented a new formulation for the analysis of axisymmetrically loaded circular plates supported on
elastic foundation. In this formulation the circular plate was modeled as a series of concentric annular plates in such a way
that each annular plate is considered as simply supported along the inner and outer edges, hence no contact develops between
the plate and the foundation. The deflection and radial bending moment were chosen as the solution variables at each interior
support. This analysis considered circular plates on a tensionless Winkler foundation that reacts in compression only.
Wang et al. (2001) presented the exact axisymmetric buckling solutions of Reddy circular plates on the Pasternak
foundation. The solution extended Kirchhoff's buckling solution for circular plates without an elastic
foundation.
Guler (2004) analyzed a thin circular plate resting on a two parameter Pasternak type foundation under
concentrated central and distributed loads. The effect of the tensionless characteristics of the Pasternak foundation system was analyzed, and lift off (the plate lifting from the foundation surface) was considered in the analytical expressions. The system of nonlinear algebraic equations was solved by using an iterative method. The Galerkin technique has been
adopted for the approximate analytical solutions.
Gupta, A. (2007) derived an expression for the deflection of a thin circular elastic plate resting on the Pasternak
foundation under concentrated load by adopting the strain energy approach.
III. DESIGN CHARTS DEVELOPMENT
Design charts for the deflection (w*) of the circular plate at different nondimensional values of the modulus of subgrade reaction (ks*) have been developed from the nondimensional deflection expression derived by Gupta (2007), which gives the deflection as a function of the load, the radial distance and the foundation parameters:

w* = f(Q*, r*, ks*, G*, υ)    (1)
Considering the nondimensional parameters as:

w* = w/a        (1a)
r* = r/a        (1b)
Q* = Qa/D       (1c)
G* = Ga²H/D     (1d)
ks* = ks a⁴/D   (1e)
Eq. (1) has been used to develop the design charts for different nondimensional values of the modulus of subgrade reaction
(ks*) for determining the deflection (w*) of the circular plate for different values of the shear modulus (G*) at any radial
distance (r*), measured from the centre of the circular plate. By using these design charts, for any values of the modulus of subgrade reaction and the shear modulus at any radial distance, the deflection of the circular plate can be estimated directly.
Four such design charts have been shown in Figs. 1, 2, 3 and 4.
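As an aid to using the charts, the sketch below converts dimensional plate and foundation properties into the nondimensional chart inputs of Eqs. (1b)-(1e); the numerical input values are hypothetical and chosen only so that the results fall within the plotted ranges.

def chart_parameters(r, Q, G, ks, a, D, H):
    # Nondimensional quantities of Eqs. (1b)-(1e); the deflection read from
    # the chart is w* = w/a, Eq. (1a).
    return {
        "r*": r / a,
        "Q*": Q * a / D,
        "G*": G * a ** 2 * H / D,
        "ks*": ks * a ** 4 / D,
    }

# Hypothetical inputs: a = 2 m, D = 5e6 N m, Q = 2.5e6 N,
# G = 6.25e6 N/m^2, H = 0.5 m, ks = 1.5625e8 N/m^3
params = chart_parameters(r=1.0, Q=2.5e6, G=6.25e6, ks=1.5625e8,
                          a=2.0, D=5e6, H=0.5)
print(params)    # r* = 0.5, Q* = 1.0, G* = 2.5, ks* = 500 for chart lookup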
[Design chart: w* versus ks* (100 to 500) for G* = 0.0, 2.5, 5.0, 7.5 and 10.0; r* = 0.0, Q* = 1.0, ν = 0.33]
Figure 1. A design chart for estimating the deflection for different nondimensional k s* at different G* and central
distance r*=0
[Design chart: w* versus ks* (100 to 500) for G* = 0.0, 2.5, 5.0, 7.5 and 10.0; r* = 0.25, Q* = 1.0, ν = 0.33]
Figure 2. A design chart for estimating the deflection for different nondimensional ks* at different G* and central
distance r*=0.25
[Design chart: w* versus ks* (100 to 500) for G* = 0.0, 2.5, 5.0, 7.5 and 10.0; r* = 0.50, Q* = 1.0, ν = 0.33]
Figure 3. A design chart for estimating the deflection for different nondimensional k s* at different G* and central
distance r*=0.50
[Design chart: w* versus ks* (100 to 500) for G* = 0.0, 2.5, 5.0, 7.5 and 10.0; r* = 0.75, Q* = 1.0, ν = 0.33]
Figure 4. A design chart for estimating the deflection for different nondimensional ks* at different G* and central distance r* = 0.75
IV. CONCLUSIONS
Estimation of the deflection of circular plates is of great importance as it is associated with many geotechnical engineering structures. As described earlier, a large number of studies using various analytical methods and numerical techniques have been carried out for the analysis of circular plates resting on different foundation models under concentrated loading conditions. Various efforts have been made to analyze a thin circular elastic plate resting on the Winkler foundation and the Pasternak foundation by adopting different approaches, in order to present analytical expressions for the plate deflection as well as to study plate lift off. The strain energy approach for the two-parameter Pasternak foundation case has also been reported by Gupta, A. (2007), and with the help of the complete derivations by Gupta, A. (2007), an effort has been made in the present study to analyze circular plates resting on a two-parameter foundation with the help of design charts. The general conclusions of the present study are summarized as follows:
• Design charts have been developed for different nondimensional values of the modulus of subgrade reaction (ks*) for estimating the deflection (w*) of the circular plate, considering several values of the shear modulus (G*).
• For any values of the modulus of subgrade reaction (ks*) and the shear modulus (G*), the deflection (w*) at any radial distance (r*) can be directly interpolated with the help of the design charts, as illustrated by a numerical example.
REFERENCES
1. Celep, Z. (1988). "Circular Plates on Tensionless Winkler Foundation." Journal of Engineering Mechanics, ASCE, Vol. 114, No. 10, pp. 1723-1739.
2. Celep, Z. and Turhan, D. (1990). "Axisymmetric Vibrations of Circular Plates on Tensionless Elastic Foundations." Journal of Applied Mechanics, ASME, Vol. 57, pp. 677-681.
3. Galletly, G.D. (1959). "Circular Plates on a Generalized Elastic Foundation." Journal of Applied Mechanics, Transactions ASME, Vol. 26, p. 297.
4. Guler, K. (2004). "Circular Elastic Plate Resting on Tensionless Pasternak Foundation." Journal of Engineering Mechanics, ASCE, Vol. 130, No. 10, pp. 1251-1254.
5. Guler, K. and Celep, Z. (1993). "Static and Dynamic Responses of a Circular Plate on a Tensionless Elastic Foundation." Journal of Sound and Vibration, Vol. 183, No. 2, pp. 185-195.
6. Gupta, A. (2007). "Strain Energy Analysis of a Circular Plate Resting on the Pasternak Foundation Model." M.Tech Dissertation.
7. Kerr, A.D. (1964). "Elastic and Viscoelastic Foundation Models." Journal of Applied Mechanics, Transactions ASME, Vol. 31, No. 3, pp. 491-498.
8. Mindlin, R.D. (1951). "Influence of Rotatory Inertia and Shear on Flexural Motions of Isotropic, Elastic Plates." Journal of Applied Mechanics, Transactions ASME, Vol. 18, pp. 31-38.
9. Pasternak, P.L. (1954). "Fundamentals of a New Method of Analyzing Structures on an Elastic Foundation by Means of Two Foundation Moduli." (In Russian), Gosudarstvenuoe izdatel'slvo literary po stroital'stvu i arkhitekture, Moskva-Leningrad (as cited in Vlasov and Leont'ev 1966).
10. Rodriguez, D.A. (1961). "Three-dimensional Bending of a Ring on an Elastic Foundation." Journal of Applied Mechanics, 28, pp. 461-463.
11. Utku et al. (2000). "Circular Plates on Elastic Foundations Modeled with Annular Plates." Computers and Structures, Vol. 78, pp. 365-374.
12. Vlazov, V.Z., and Leontiev, U.N. (1966). Beams, Plates and Shells on Elastic Foundations, Israel Program for Scientific Translations, Jerusalem (translated from Russian).
13. Wang, C.M., Xiang, Y., and Wang, Q. (2001). "Axisymmetric Buckling of Reddy Circular Plates on Pasternak Foundation." Journal of Engineering Mechanics, 127(3), pp. 254-259.
14. Weitsman, Y. (1970). "On Foundations That React in Compression Only." Journal of Applied Mechanics, ASME, Vol. 37, No. 1, pp. 1019-1031.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 2(September 2012) PP: 66-72
Evaluation of Urban Water Sector Performance: A Case Study of
Surat City
Mrs. Reena Popawala1, Dr. N. C. Shah2
1 Associate Prof., Civil Engineering Department, C.K. Pithawalla College of Engg. & Tech., Surat
2 Prof. of CED & Section Head (TEP), SVNIT, Surat, India
Abstract––Sustainability of the urban water management system is a concern and a stated objective for all municipal corporations and organizations, but it is often vaguely defined and clear measurement procedures are lacking. Integrated water resource management is a relatively new approach, and it involves the fields of water supply, urban drainage, waste water treatment and storm water management. In this paper, sustainability was examined from the perspective of the “triple bottom line”, which highlights social, environmental and economic dimensions. For evaluation of the urban water sector, a case study of Surat city in Gujarat, India, is taken and the simple additive weighting method is used. The results provide useful information regarding loopholes in the system and the areas where chances of improvement lie. They provide sufficient information to water managers and decision makers for framing development programmes and future policy for sustainable urban water management.
I. INTRODUCTION
Population growth and socio-economic development have placed great stress on the water sector. As a result, the balance between water resources, socio-economic development and ecological protection is disturbed, so the protection and management of water resources must be enforced in a sustainable way. As water demand increases, conventional systems need to change from an unsustainable to a sustainable perspective. The development of a model for sustainable management of the urban water system is one important aspect of this research paper. A model was developed based on the principle of a control circuit: (1, 2) presented the strategy in terms of a control circuit to discuss integrated water management. By measuring variables in the field, the actual situation in a water system is determined. This actual situation is compared to the desired situation (target level); if the measured values match the desired values, the difference is zero and no controlling action comes into play. Based on this principle, a model for a sustainable management system can be developed. In this model, the desired value is the need of users, stakeholders and authorities; the judgment operator is replaced by the sustainability assessment; and the controller is the set of improvement strategies based on various water technologies. The service data are measured through data collection from SMC officers.
[Block diagram showing the desired value, the controller and the system of the control circuit]
Fig.1 Principle of control circuit (2)
If, after the sustainability assessment, the measured service data do not match the desired value, an improvement strategy needs to be framed and implemented, as sketched below.
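The control-circuit step can be pictured with a minimal Python sketch; the function, tolerance and example numbers below are illustrative assumptions, not part of the model in (1, 2).

def control_step(measured, target, tolerance=0.0):
    """One pass of the control circuit: compare the measured service level with the
    desired (target) level and trigger an improvement strategy only if they differ."""
    deviation = target - measured
    if abs(deviation) <= tolerance:
        return "no action"          # measured value matches the desired value
    return f"apply improvement strategy (deviation = {deviation:+.3f})"

# Hypothetical service data vs. targets (e.g., coverage fractions)
print(control_step(measured=0.82, target=1.00))   # triggers a controlling action
print(control_step(measured=1.00, target=1.00))   # difference is zero, nothing to do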
II. BACKGROUND
The principle of sustainable development was embedded for the first time in the 1972 Stockholm conference and was introduced by the International Union for the Conservation of Nature (3). The IUCN was the first to lay down a concrete base for economic, social and environmental sustainability. In subsequent years, efforts have been made at the international level to develop sustainable urban water management systems.
A comparison between the old paradigm and the emerging paradigm is made in Table 1 to identify the gaps and deficiencies on the way to a more sustainable urban water management system (4).
The Old Paradigm | The Emerging Paradigm
Human waste is a nuisance; it should be disposed of after treatment. | Human waste is a resource; it should be captured and processed effectively, and used to nourish land and crops.
Storm water is a nuisance; convey storm water away from the urban area as rapidly as possible. | Storm water is a resource; harvest storm water as a water supply, and infiltrate or retain it to support aquifers, waterways and vegetation.
Demand is a matter of quantity; the amount of water required or produced by different end users is the only parameter relevant to infrastructure choice; treat all supply-side water to potable standard and collect all wastewater for treatment. | Demand is multi-faceted; infrastructure choice should match the varying characteristics of water required or produced for different end users in terms of quantity, quality, level of reliability, etc.
Water follows a one-way path from supply, to a single use, to treatment and disposal to the environment; it is an open-loop system. | Water can be used multiple times, by cascading from higher- to lower-quality needs, and reclaimed by treatment for return to the supply side of the infrastructure; it is a closed-loop system.
Bigger / centralized is better for collection systems and treatment plants. | Small / decentralized is possible, and often desirable, for collection systems and treatment plants.
Urban water management is based on being technically sound, and priority is given to health and hygiene criteria. | Urban water management is taken as technically sound, economically viable and socially acceptable to the public; decision makers are multidisciplinary; new management strategies and technologies are allowed.
Table 1 Comparison of old paradigm and emerging paradigm
III. STATE OF KNOWLEDGE
For integrated assessment, UNEP describes selected tools: stakeholder analysis and mapping, expert panels, focus groups, household surveys, sustainability framework indicators, causal chain analysis, root cause analysis, trend analysis, scenario-building and multi-criteria decision analysis (5). A comparative study of a sustainability index for South African cities was carried out by Carden et al. (11) from a triple-bottom-line perspective using MCA. Sahely et al. (6) used a mathematical model for quantifying environmental and economic sustainability indicators. Dasgupta et al. (7) categorized mandatory screening indicators and judgment indicators; a multi-layer approach was used to incorporate these indicators, and a normalization procedure was adapted to work within the framework and to compare alternatives across a range of indicators and different orders of data magnitude. Lai et al. (8) reviewed a number of methods for integrated sustainability assessment of urban water systems; the four dominant approaches applied were cost-benefit analysis, triple bottom line, integrated assessment and multi-criteria analysis. Vairavamoorthy et al. (9) carried out risk assessment for sustainable urban water management using fuzzy logic. Hajkowicz et al. (10) reviewed MCA methods, which include fuzzy set analysis, comparative programming, the analytical hierarchy process, ELECTRE, PROMETHEE, MAUT, MCQA, EXPROM, MACBETH, SAW, TOPSIS, etc.
IV. PROBLEM STATEMENT
In Surat City, due to population explosion and urbanization, the stress on the urban water sector is increasing. Water supply, sanitation provision and drainage are vital in the quest to promote economically, environmentally and socially healthy development. A scientific approach to facilitate decision making on equity, efficiency and sustainability criteria is the main goal of performance evaluation of the urban water sector. For efficient management of the urban water sector, a Sustainability Index was computed which indicates the performance of the urban water sector in different dimensions.
Surat city has the perennial river Tapi, which is the main source of water supply. Unfortunately, the local government can extract only 300 cusec of water daily from the river Tapi according to riparian rights, which is not sufficient to fulfil the demand of citizens given the high population growth rate. The Surat local government has long been demanding more water extraction capacity from the river Tapi from the state government, but these are political issues and have not yet been resolved. The city limit has increased in the last few years from 112 sq. km to 334 sq. km, and the corporation is not in a position to cope with the demand of the city at this faster rate. Due to the construction of a weir-cum-causeway on the river Tapi, a reservoir has formed on the upstream side of the river, which has led to stagnation of the flowing river water. Stagnation of water gives rise to the growth of algae and weeds, so the raw water quality gets degraded, which causes problems in the intake well as well as in the subsequent treatment processes. It also reduces the yield of water. To improve the raw water quality, frequent releases of water from the nearby (Ukai) dam are required. Moreover, sewage discharge from some of the areas has created a terrible impact on river water quality upstream. Sewage discharge enhances the growth of algae, weeds and other vegetation. Recently, it was suggested in the city development plan to lay down pipelines from Ukai dam to Surat (100 km) to resolve the issues regarding the quality and quantity of water supply. Will this decision be economically viable or sustainable?
Downstream of the weir in the river Tapi, the river water becomes brackish due to tidal influences. Owing to these problems, the bore water of adjacent areas and the old walled city area becomes salty and unfit for drinking. Over-withdrawal of ground water for industrial and irrigation purposes has depleted the ground water table and degraded the quality of ground water as well. Due to the increased city limit, 100% of the population is not covered with access to water supply and sewerage systems.
V. METHODOLOGY
Decide system boundary → Decide criteria and indicators → Data collection → Prepare questionnaire for interviews of experts and stakeholders → Ranking of criteria and indicators → Decide weightage of each criterion and indicator → Normalization of data → Aggregate weights of indicators and criteria → Composite index value (SI)
Fig.2 Flow chart describing methodology
1. System Boundary for the Urban Water Management System:
The system boundary is decided based on systematic consideration of the various dimensions of the water sector. The domain of the system boundary consists of the water supply system, waste water, storm water, rain water recharging/harvesting and their sub-criteria. Sustainability is related to a prolonged time perspective, hence the boundary should be selected accordingly.
2. Selection of Indicators and Criteria:
Criteria selection involved choosing appropriate criteria for the field of research, considering their relevance to current issues, their appropriateness to the area in question, their scientific and analytical basis, and their ability to effectively represent the issues they are designed for. Theoretical framework building provides the underlying basis for criteria selection and supports the overall structure of urban water management. A four-dimensional view of sustainability was employed, and these four dimensions constituted the basic components for the measure of sustainability of the system.
3. Data Collection:
Data were collected for the criteria and indicators selected for the study. This includes data related to social, economic, environmental and engineering factors and their sub-factors, such as the population served by the water supply and waste water systems, storm water, capital investment, economic expenditure and maintenance, water supply per capita per day, waste water generation per capita per day, area covered under the pipe network, energy consumption, cost recovery, revenue collection from water supply and sewerage systems, flood-prone area, etc., obtained from the Surat Municipal Corporation (SMC). The non-availability of data is one of the largest constraints on the success of most assessment studies; where there were instances of indicators with incomplete data, either substitution or exclusion of variables was adopted.
4. Analysis Method:
•	For the analysis, the simple additive weighting method is used. A ranking approach was adopted, in which criteria and sub-criteria were ranked within their category and then assigned corresponding weights based on experts' opinion.
•	Normalization involved the conversion of these criteria and sub-criteria to a comparable form, which ensures commensurability of the data. The criteria are compared with target values based on their unit of measurement. The scores were normalized (converted) by the following formulas:

Xij = aij / ajmax   ----------------  (1)
Xij = ajmin / aij   ----------------  (2)

where
aij = actual existing value for the sub-criterion
ajmax, ajmin = target values for the sub-criterion
When a criterion is to be maximized, formula (1) is used, and formula (2) is used when the criterion is to be minimized. For normalization, the target/threshold value is taken as the standard value.
•	Weighting entailed the aggregation of criteria and sub-criteria. Aggregation refers to the grouping of criteria and sub-criteria. A composite index approach was employed to calculate the overall sustainability index score: the normalized value for each criterion, Xij, was multiplied by the aggregate weight of the criterion and sub-criterion, Wj, and the scores for all sub-criteria were added to get the final Sustainability Index value:

Sustainability Index (S.I.) = Σ (j = 1 … n) Xij · wj

where
n = number of criteria,
wj = weight of the criterion, and
xij = normalized score for the criterion.
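As a rough illustration of the normalization and simple-additive-weighting steps above, the short Python sketch below normalizes two hypothetical sub-criteria with equations (1) and (2) and folds them into a weighted score; the sub-criterion names, actual values and targets are invented for the example and are not the values used in this study.

def normalize(actual, target, maximize=True):
    """Eq. (1) for criteria to be maximized, Eq. (2) for criteria to be minimized."""
    return actual / target if maximize else target / actual

# Hypothetical sub-criteria: (name, actual value, target value, maximize?, weight within criterion)
social_sub_criteria = [
    ("access to water supply", 0.90, 1.00, True,  0.20),
    ("service complaints",     120,  100,  False, 0.17),
]

criterion_weight = 0.24  # e.g., the social criterion weight
score = criterion_weight * sum(w * normalize(a, t, mx) for _, a, t, mx, w in social_sub_criteria)
print(round(score, 4))

Summing such scores over all the criteria of Table 2 gives the composite Sustainability Index reported later in Table 3.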
CRITERIA | VARIABLES USED
Social (0.24) | Access to water supply (0.20); Access to sanitation (0.15); Water availability/capita/day (0.14); Supply hours (0.13); Service complaints (0.17); Flood prone area (0.21)
Economic (0.24) | Capital investment (0.29); Cost recovery and operation and maintenance cost (0.50); Research and development investment (0.21)
Environmental (0.28) | Water withdrawal (0.14); Energy consumption (0.12); Pollution load on environment (0.12); Waste water treatment performance (0.12); Water reuse (0.10); Recycling of nutrients and sludge reuse (0.09); Storm water - area covered under pipe network (0.10); Rain water harvesting/recharging (0.10); Salinity ingress (0.11)
Engineering (0.24) | Metered connection (0.40); Service interruption and water losses (0.60)
Sustainability Index
Table 2 Decided criteria and indicators along with their weightages
[Charts of index values for the engineering criteria (metered connection, service interruption) and the economic criteria (capital investment, operation and maintenance, research and development)]
Fig. 3 Index values for the engineering and economic criteria
[Chart of index values for the environmental criteria: water withdrawal, energy consumption, amount of waste water treated, waste water treatment performance, recycling of nutrients and sludge reuse, water reuse, storm water area covered under pipe network, rain water harvesting/recharging and salinity ingress]
Fig. 4 Index values for the environmental criteria
[Chart of index values for the social criteria: water availability, access to water supply, supply hours daily, access to sanitation, service complaints and flood risk]
Fig. 5 Index values for the social criteria
[Chart of index values for the main criteria: social, economic, environment and engineering]
Fig. 6 Index values for the main criteria
Criteria | Aggregate index value
Social criteria | 0.1087
Economic criteria | 0.158196
Environment criteria | 0.1218
Engineering criteria | 0.0075936
Sustainability Index | 0.3926
Table 3 Composite Sustainability Index value for all criteria
VI. RESULTS AND DISCUSSION
The results show that Surat city's standing in the sustainability continuum is moderate, with individual index scores for the social, economic, environmental and engineering criteria of 0.453, 0.659, 0.4351 and 0.031 respectively. The composite sustainability index found for the urban water management system is 0.396.
The engineering index is very low and has high potential for improvement. According to the collected data, metered connections cover only 0.41% of the area of Surat city, which brings down the engineering index. Water losses can also be minimized by installing efficient devices and conducting water audits.
The environmental index can be raised by implementing water reuse systems. Energy consumption contributes 66% of the total water management cost, so it can be reduced to some extent by implementing energy-efficient techniques or by using renewable energy sources.
There is a huge variation between the area covered under the pipe network and the percentage of population covered before and after the extension of the city limit. This is because of the transition stage of the city-limit extension; it takes time to establish infrastructure facilities, which shows up as a drop in population and area coverage. Apart from that, an important issue is the riparian right governing water withdrawal capacity; hence, for the sustainability of the system, there is an urgent need to develop rainwater recharging and harvesting systems, to reuse water and waste water, and to implement water metering to minimize water losses.
ACKNOWLEDGEMENT
This project has been funded by The Institution of Engineers (India), 8 Gokhale Road, Kolkata 70020, under the R&D Grant-in-aid scheme 2010-2011.
REFERENCES
1. Geldof, G.D. (1995a). Policy analysis and complexity – a non-equilibrium approach for integrated water management. Water Science and Technology, 31(8), 301-309.
2. Geldof, G.D. (1995b). Adaptive water management: integrated water management on the edge of chaos. Water Science and Technology, 32(1), 7-13.
3. World Commission on Environment and Development, Our Common Future (Brundtland report), 1987, Oxford University Press.
4. Barbara Anton, Ralph Philip (2008). “Integrated water management - what's in it for the city of the future”, Stockholm conference.
5. Integrated assessment: Mainstreaming sustainability into policymaking, UNEP guidance manual (August 2009).
6. Halla R. Sahely and Christopher A. Kennedy, “Water used model for environmental and economic sustainability indicators”, Journal of Water Resources Planning and Management, ASCE, Nov/Dec 2007, pp. 550-559.
7. Shovini Dasgupta and Edwin K.L. Tam, “Indicators and framework for assessing sustainable infrastructure”, Canadian Journal of Civil Engineering, 32: 30-44 (2005).
8. E. Lai, S. Lundie and N.J. Ashbilt, “Review of multi-criteria decision aid for integrated sustainability assessment of urban water systems”, Urban Water Journal, Vol. 5, No. 4, December 2008, 315-327.
9. Vairavamoorthy, K. & Khatri, K.B. et al., “Risk assessment for sustainable urban water management”, UNESCO-IHP report (2007).
10. Stefan Hajkowicz, Kerry Collins et al., “A review of multicriteria analysis for water resource planning and management”, Water Resources Management (2007) 21: 1553-1566.
11. S.C.P. De Carvalho, K.J. Carden and N.P. Armitage, “Application of a sustainability index for integrated urban water management in Southern African cities: Case study comparison - Maputo and Hermanus”, Water SA, Vol. 35, No. 2, 2009.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September2012) PP: 68-76
Integrated Conditional Random Fields with the Layered
Approach for Intrusion Detection
K Ranganath1, Shaik Shafia2
1 Computer Science and Engineering Department, Hyderabad Institute of Technology and Management, R.R. Dist
2 Computer Science and Engineering Department, Hyderabad Institute of Technology and Management, R.R. Dist
Abstract— Both Conditional Random Fields (CRFs) and the Layered Approach have some things in common: they can be used for addressing the two issues of accuracy and efficiency. Each on its own has certain advantages and disadvantages, which largely disappear when the two concepts are combined. Intrusion detection (ID) is a type of security management system for computers and networks. An ID system gathers and analyzes information from various areas within a computer or a network to identify possible security breaches, which include both intrusions (attacks from outside the organization) and misuse (attacks from within the organization). CRFs can effectively model relationships among different features of an observation, resulting in higher attack detection accuracy. Another advantage of using CRFs is that every element in the sequence is labelled such that the probability of the entire labelling is maximized, i.e., all the features in the observation collectively determine the final labels. Hence, even if some data is missing, the observation sequence can still be labelled with a smaller number of features. A layered model is used to reduce computation and the overall time required to detect anomalous events. The time required to detect an intrusive event is significant and can be reduced by eliminating the communication overhead among different layers. To improve the speed of operation of the system, we implement the LIDS and select a small set of features for every layer rather than using all 41 features. This results in significant performance improvement during both the training and the testing of the system. This project shows that high attack detection accuracy can be achieved by using CRFs and high efficiency by implementing the Layered Approach. Finally, we show that our system is robust and is able to handle noisy data without compromising performance.
Keywords— CERT, CRF, Intrusion, Session
I. INTRODUCTION
In this project we address three significant issues which severely restrict the utility of anomaly and hybrid intrusion detection systems in present networks and applications. The three issues are: limited attack detection coverage, a large number of false alarms and inefficiency in operation. Present anomaly and hybrid intrusion detection systems have limited attack detection capability, suffer from a large number of false alarms and cannot be deployed in high-speed networks and applications without dropping audit patterns. Hence, most existing intrusion detection systems, such as USTAT, IDIOT, EMERALD, Snort and others, are developed using knowledge engineering approaches where domain experts can build focused and optimized pattern matching models [1]. Though such systems result in very few false alarms [2], they are specific in attack detection and often tend to be incomplete. As a result, their effectiveness is limited. We thus address these shortcomings and develop better anomaly and hybrid intrusion detection systems which are accurate in attack detection, efficient in operation and have wide attack detection coverage.
The objective of an intrusion detection system is to provide data security and ensure continuity of the services provided by a network [3]. Present networks provide critical services which are necessary for businesses to perform optimally and are thus a target of attacks which aim to bring down the services provided by the network. Additionally, with more and more data becoming available in digital format and more applications being developed to access this data, the data and applications are also victims of attackers who exploit these applications to gain access to the data. With the deployment of more sophisticated security tools to protect the data and services, attackers often come up with newer and more advanced methods to defeat the installed security systems [4], [5]. According to the Internet Systems Consortium (ISC) survey, the number of hosts on the Internet exceeded 550,000,000 in July 2008 [6]. Earlier, a project in 2002 estimated the size of the Internet to be 532,897 TB [7]. Businesses increasingly depend on services offered over the Internet, while configuration errors and vulnerabilities in software are exploited by attackers who launch powerful attacks such as Denial of Service (DoS) [8] and information attacks [9].
According to the Computer Emergency Response Team (CERT), the number of vulnerabilities in software has been increasing and many of them exist in widely deployed software [10]. Considering that it is nearly impossible to build 'perfect' software, it becomes critical to build effective intrusion detection systems which can detect attacks reliably. The prospect of obtaining valuable information as a result of a successful attack outweighs the threat of legal conviction. The problem becomes more profound since authorized users can misuse their privileges and attackers can masquerade as authentic users by exploiting vulnerable applications. Given the diverse types of attacks (DoS, Probing, Remote to Local, User to Root and others), it is a challenge for any intrusion detection system to detect a wide variety of attacks with very few false alarms in a real-time environment. Ideally, the system must detect all intrusions with no false alarms. The challenge is,
thus, to build a system which has broad attack detection coverage and at the same time results in very few false alarms. The system must also be efficient enough to handle large amounts of audit data without affecting performance in the deployed environment. However, this alone is in no way a solution for securing today's highly networked computing environment; hence the need to develop better intrusion detection systems.
II. BACKGROUND
Detecting intrusions in networks and applications has become one of the most critical tasks in preventing their misuse by attackers. Intrusion detection started in the 1980s and since then a number of approaches have been introduced to build intrusion detection systems [1]. However, intrusion detection is still in its infancy, and even naive attackers can launch powerful attacks which can bring down an entire network [5]. To identify the shortcomings of different approaches for intrusion detection, we explore the related research in intrusion detection. We describe the problem of intrusion detection in detail and analyse various well-known methods for intrusion detection with respect to two critical requirements, viz. accuracy of attack detection and efficiency of system operation. We observe that present methods for intrusion detection suffer from a number of drawbacks which significantly affect their attack detection capability. Hence, we introduce conditional random fields for effective intrusion detection and motivate our approach for building intrusion detection systems which can operate efficiently and which can detect a wide variety of attacks with relatively higher accuracy, both at the network and at the application level.
A. Intrusion Detection and Intrusion Detection Systems
Intrusion detection systems are a critical component of the network security arsenal. Security is often implemented as a multi-layer infrastructure, and the different approaches for providing security can be categorized into the following six areas:
Attack Deterrence – Attack deterrence refers to persuading an attacker not to launch an attack by increasing the perceived risk of negative consequences for the attacker. Having a strong legal system may be helpful in attack deterrence; however, it requires strong evidence against the attacker in case an attack is launched.
Attack Prevention – Attack prevention aims to prevent an attack by blocking it before it can reach the target. However, it is very difficult to prevent all attacks, because, to prevent an attack, the system requires complete knowledge of all possible attacks as well as complete knowledge of all the allowed normal activities, which is not always available. An example of an attack prevention system is a firewall.
Attack Deflection – Attack deflection refers to tricking an attacker by making the attacker believe that the attack was successful though, in reality, the attacker was trapped by the system and deliberately made to reveal the attack.
Attack Avoidance – Attack avoidance aims to make the resource unusable by an attacker even though the attacker is able to illegitimately access that resource. An example of a security mechanism for attack avoidance is the use of cryptography.
Attack Detection – Attack detection refers to detecting an attack while the attack is still in progress, or to detecting an attack which has already occurred in the past. Detecting an attack is significant for two reasons: first, the system must recover from the damage caused by the attack and, second, it allows the system to take measures to prevent similar attacks in future.
Attack Reaction and Recovery – Once an attack is detected, the system must react to it and perform the recovery mechanisms defined in the security policy. Tools available to perform attack detection followed by reaction and recovery are known as intrusion detection systems. However, the difference between intrusion prevention and intrusion detection is slowly diminishing, as present intrusion detection systems increasingly focus on real-time attack detection and on blocking an attack before it reaches the target. Such systems are better known as Intrusion Prevention Systems.
B. Principles and Assumptions in Intrusion Detection
The principle states that for a system which is not under attack, the following three conditions hold true:
•	Actions of users conform to statistically predictable patterns.
•	Actions of users do not include sequences which violate the security policy.
•	Actions of every process correspond to a set of specifications which describe what the process is allowed to do.
Systems under attack do not meet at least one of the three conditions. Further, intrusion detection is based upon some assumptions which are true regardless of the approach adopted by the intrusion detection system. These assumptions are:
•	There exists a security policy which defines the normal and (or) the abnormal usage of every resource.
•	The patterns generated during abnormal system usage are different from the patterns generated during normal usage of the system; i.e., abnormal and normal usage of a system result in different system behavior. This difference in behavior can be used to detect intrusions.
C. Components of Intrusion Detection Systems
An intrusion detection system typically consists of three sub-systems or components:
Data Preprocessor – The data preprocessor is responsible for collecting and providing the audit data (in a specified form) that will be used by the next component (the analyser) to make a decision. The data preprocessor is thus concerned with collecting the data from the desired source and converting it into a format that is comprehensible by the analyser. Data used for detecting intrusions range from user access patterns, to network packet-level features (such as the source and destination IP addresses, the type of packets and the rate of occurrence of packets), to application and system-level behaviour (such as the sequence of system calls generated by a process). We refer to these data as the audit patterns.
Analyzer (Intrusion Detector) – The analyser or the intrusion detector is the core component which analyses the audit
patterns to detect attacks. This is a critical component and one of the most researched. Various pattern matching, machine
learning, data mining and statistical techniques can be used as intrusion detectors. The capability of the analyser to detect an
attack often determines the strength of the overall system.
Response Engine – The response engine controls the reaction mechanism and determines how to respond when the analyzer
detects an attack. The system may decide either to raise an alert without taking any action against the source or may decide to
block the source for a predefined period of time. Such an action depends upon the predefined security policy of the network.
The authors define the Common Intrusion Detection Framework (CIDF) which recognizes a common architecture for
intrusion detection systems. The CIDF defines four components that are common to any intrusion detection system. The four
components are; Event generators (E-boxes), event Analyzers (A-boxes), event Databases (D-boxes) and the Response units
(R-boxes). The additional component, called the D-boxes, is optional and can be used for later analysis.
D. Challenges and Requirements for Intrusion Detection Systems
The purpose of an intrusion detection system is to detect attacks. However, it is equally important to detect attacks at an early stage in order to minimize their impact. The major challenges and requirements for building intrusion detection systems are:
•	The system must be able to detect attacks reliably without giving false alarms. It is very important that the false alarm rate is low, since in a live network with a large amount of traffic the number of false alarms may exceed the total number of attacks detected correctly, thereby decreasing confidence in the attack detection capability of the system. Ideally, the system must detect all intrusions with no false alarms, i.e. it should detect a wide variety of attacks and at the same time result in very few false alarms.
•	The system must be able to handle a large amount of data without affecting performance and without dropping data, i.e. the rate at which the audit patterns are processed and decisions are made must be greater than or equal to the rate of arrival of new audit patterns. In addition, the system must be capable of operating in real time by initiating a response mechanism once an attack is detected.
•	A system which can link an alert generated by the intrusion detector to the actual security incident is desirable. Such a system would help in quick analysis of the attack and may also provide an effective response to the intrusion, as opposed to a system which offers no after-attack analysis. Hence, it is not only necessary to detect an attack, but it is also important to identify the type of attack.
•	It is desirable to develop a system which is resistant to attacks, since a system that can be exploited during an attack may not be able to detect attacks reliably.
III. IMPLEMENTATION
Ever-increasing network bandwidth poses a significant challenge to building efficient network intrusion detection systems which can detect a wide variety of attacks with acceptable reliability. In order to operate in high-traffic environments, present network intrusion detection systems are often signature based. As a result, anomaly and hybrid intrusion detection systems must be used to detect novel attacks. However, such systems are inefficient and suffer from a large false alarm rate. To ameliorate these drawbacks, we first develop better hybrid intrusion detection methods which are not based on attack signatures and which can detect a wide variety of attacks with very few false alarms. Given the network audit patterns, where every connection between two hosts is presented in a summarized form with 41 features, our objective is to detect most of the anomalous connections while generating very few false alarms. In our experiments, we used the KDD 1999 data set. Conventional methods, such as decision trees and naive Bayes, are known to perform well in such an environment; however, they assume the observation features to be independent. We propose to use conditional random fields, which can capture the correlations among different features in the data.
The KDD 1999 data set represents multiple features, a total of 41, for every session in relational form with only one label for the entire record. However, we represent the audit data in the form of a sequence and assign a label to every feature in the sequence using the first-order Markov assumption, instead of assigning a single label to the entire observation. Though this increases complexity, it also improves the attack detection accuracy. Figure 3.1 represents how conditional random fields can be used for detecting network intrusions.
Figure 3.1: Conditional Random Fields for Intrusion Detection
In the figure, the observation features 'duration', 'protocol', 'service', 'flag' and 'source bytes' are used to discriminate between attack and normal events. The features take some possible value for every connection, and these values are then used to determine the most likely sequence of labels, < attack, attack, attack, attack, attack > or < normal, normal, normal, normal, normal >. Custom feature functions can be defined which describe the relationships among different features in the observation. During training, feature weights are learnt, and during testing, the features are evaluated for the given observation, which is then labeled accordingly. It is evident from the figure that every input feature is connected to every label, which indicates that all the features in an observation determine the final labeling of the entire sequence. Thus, a conditional random field can model dependencies among different features in an observation. Present intrusion detection systems do not consider such relationships: they either consider only one feature, as in the case of system call modeling, or assume independence among different features in an observation, as in the case of a naive Bayes classifier.
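As a rough illustration of this sequence-labelling view, the Python sketch below trains a linear-chain CRF over a couple of hand-made connection records; it assumes the third-party sklearn-crfsuite package and toy feature values, and is not the authors' implementation or the full 41-feature KDD setup.

import sklearn_crfsuite

def to_sequence(record):
    """Turn one connection record into a sequence of per-feature observations."""
    return [{"feature": name, "value": str(value)} for name, value in record.items()]

# Two made-up connection records standing in for the KDD audit patterns.
train_records = [
    {"duration": 0, "protocol": "tcp",  "service": "http",  "flag": "SF", "src_bytes": 181},
    {"duration": 0, "protocol": "icmp", "service": "ecr_i", "flag": "SF", "src_bytes": 1032},
]
train_labels = [["normal"] * 5, ["attack"] * 5]   # one label per feature in the sequence

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit([to_sequence(r) for r in train_records], train_labels)
print(crf.predict([to_sequence(train_records[1])]))   # most likely label sequence

In a real setting the feature functions and the per-layer feature subsets would be designed with domain knowledge, as discussed below.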
We also note that in the KDD 1999 data set, attacks can be grouped into four classes: Probe, DoS, R2L and U2R. In order to treat this as a two-class classification problem, the attacks belonging to all four attack classes can be relabeled as attack and mixed with the audit patterns belonging to the normal class to build a single model which can be trained to detect any kind of attack. The problem can also be treated as a five-class classification problem, where a single system is trained with five classes (normal, Probe, DoS, R2L and U2R) instead of two. As we will see from our experimental results, considering every attack class separately not only improves the attack detection accuracy but also helps to improve the overall system performance when integrated with the layered framework. Furthermore, it also helps to identify the class of an attack once it is detected at a particular layer in the layered framework. However, a drawback of this implementation is that it requires domain knowledge to perform feature selection for every layer.
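The layered idea can be pictured as a cascade of per-class detectors, each restricted to its own small feature subset, where a connection is blocked at the first layer that flags it. In the sketch below the feature names follow the KDD 1999 schema, but the thresholds and rules are invented placeholders standing in for trained CRF models, not the actual detectors.

# One detector per attack class, each looking only at its own feature subset.
LAYERS = [
    ("Probe", ["duration", "src_bytes"],            lambda f: f["src_bytes"] == 0),
    ("DoS",   ["count", "srv_count"],               lambda f: f["count"] > 100),
    ("R2L",   ["num_failed_logins", "hot"],         lambda f: f["num_failed_logins"] > 3),
    ("U2R",   ["root_shell", "num_file_creations"], lambda f: f["root_shell"] == 1),
]

def classify(connection):
    for name, features, detector in LAYERS:
        view = {k: connection[k] for k in features}   # per-layer feature selection
        if detector(view):
            return f"attack ({name})"                 # block here, skip remaining layers
    return "normal"

sample = {"duration": 0, "src_bytes": 232, "count": 150, "srv_count": 25,
          "num_failed_logins": 0, "hot": 0, "root_shell": 0, "num_file_creations": 0}
print(classify(sample))   # -> attack (DoS)

Because each layer sees only a handful of features and later layers are skipped once an attack is found, both training and testing time drop, which is the efficiency argument made above.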
E. Description of our Framework
We present our unified logging framework in Figure 3.2, which can be used for building an effective application intrusion detection system.
Figure 3.2: Framework for Building an Application Intrusion Detection System
In our framework, we define two modules, a session control module and a logs unification module, in addition to an intrusion detection system which is used to detect malicious data accesses in an application. The logs unification module provides input audit patterns to the intrusion detection system, and the response generated by the intrusion detection system is passed on to the session control module, which can initiate appropriate intrusion response mechanisms. In our framework, every request first passes through the session control, which is described next.
Session Control Module
The prime objective of an intrusion detection system is to detect attacks reliably. However, it must also ensure that once an attack is detected, appropriate intrusion response mechanisms are activated in order to mitigate its impact and prevent similar attacks in future. The session control module serves a dual purpose in our framework. First, it is responsible for establishing new sessions and for checking the session id of previously established sessions. For this, it maintains a list of valid sessions which are allowed to access the application. Every request to access the application is checked for a valid session id at the session control, and anomalous requests can be blocked depending upon the installed security policy. Second, the session control also accepts input from the intrusion detection system. As a result, it is capable of acting as an intrusion response system: if a request is evaluated to be anomalous by the intrusion detection system, the response from the application can be blocked at the session control before data is made visible to the user, thereby preventing malicious data accesses in real time.
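A minimal Python sketch of such a session control module is shown below; the class and method names are illustrative assumptions rather than the authors' code.

class SessionControl:
    def __init__(self):
        self.valid_sessions = set()     # sessions allowed to access the application
        self.blocked_sessions = set()   # sessions blocked after an IDS alert

    def establish(self, session_id):
        self.valid_sessions.add(session_id)

    def admit(self, session_id):
        """Called for every incoming request before it reaches the application."""
        return session_id in self.valid_sessions and session_id not in self.blocked_sessions

    def on_alert(self, session_id):
        """Called by the intrusion detection system; acts as the response mechanism."""
        self.blocked_sessions.add(session_id)

sc = SessionControl()
sc.establish("s-101")
print(sc.admit("s-101"))   # True: valid session, not blocked
sc.on_alert("s-101")
print(sc.admit("s-101"))   # False: the response is blocked before data reaches the user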
The session control can either be implemented as a part of the application or as a separate entity. Once the session id is evaluated for a request, the request is sent to the application where it is processed. The web server logs every request, and all corresponding data accesses are also logged. The two logs are then combined by the logs unification module to generate the unified log, which is described next.
Logs Unification Module
Analyzing the web access logs and the data access logs in isolation is not sufficient to detect application-level attacks. Hence, we propose using a unified log, which can better detect attacks compared with independent analysis of the two logs. The logs unification module is used to generate the unified log. The unified log incorporates features from both the web access logs and the corresponding data access logs. Using the unified log thus helps to capture the user-application interaction and the application-data interactions. Hence, we first process the data access logs and represent them using simple statistics such as 'the number of queries invoked by a single web request' and 'the time taken to process them' rather than analyzing every data access individually. We then use the session id, present in both the application access logs and the associated data access logs, to uniquely map the extracted statistics (obtained from the data access logs) to the corresponding web requests in order to generate a unified log. Figure 3.3 represents how the web access logs and the corresponding data access logs can be uniquely mapped to generate a unified log. In the figure, f1, f2, f3, ..., fn and g1', g2', ..., gm' represent the features of the web access logs and the features extracted from the reduced data access logs, respectively.
Figure 3.3: Representation of a Single Event in the Unified log
From Figure 3.3, we observe that a single web request may result in more than one data access, depending upon the logic encoded into the application. Once the web access logs and the corresponding data access logs are available, the next step involves the reduction of the data access logs by extracting simple statistics, as discussed before. The session id can then be used to uniquely combine the two logs to generate the unified log.
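A minimal sketch of this unification step is given below; the log field names are assumed for illustration and do not come from the paper.

from collections import defaultdict

web_log = [
    {"session_id": "s-101", "url": "/orders", "method": "GET", "status": 200},
]
data_log = [
    {"session_id": "s-101", "query": "SELECT ...", "time_ms": 12},
    {"session_id": "s-101", "query": "SELECT ...", "time_ms": 7},
]

# Reduce the data-access log: one summary record (query count, total query time) per session id.
stats = defaultdict(lambda: {"num_queries": 0, "query_time_ms": 0})
for row in data_log:
    stats[row["session_id"]]["num_queries"] += 1
    stats[row["session_id"]]["query_time_ms"] += row["time_ms"]

# Unified log: web-access features f1..fn plus the extracted statistics g1'..gm'.
unified = [{**req, **stats[req["session_id"]]} for req in web_log]
print(unified[0])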
Issues in Implementation
Experimental results show that our approach based on conditional random fields can be used to build effective
application intrusion detection systems. However, before deployment, it is critical to resolve issues such as the availability of
the training data and suitability of our approach for a variety of applications. We now discuss various methods which can be
employed to resolve such issues.
Availability of Training Data
Though our system is application independent and can be used to detect malicious data accesses in a variety of applications, it must be trained before it can be deployed online to detect attacks. This requires training data which is specific to the application, and obtaining such data may be difficult. However, training data can be made available as early as the application testing phase, when the application is tested to identify errors. Logs generated during the application testing phase can be used for training the intrusion detection system. This requires security-aware software engineering practices, which must ensure that the necessary measures are taken to provide training data during the application development phase that can be used to train effective application intrusion detection systems.
Suitability of Our Approach for a Variety of Applications
As we already discussed, our framework is generic and can be deployed for a variety of applications. It is particularly suited to applications which follow the three-tier architecture and have application and data independence. Furthermore, our framework can be easily extended and deployed in a Service Oriented Architecture; our proposed framework can be considered as a special case of the service oriented architecture which defines only one service. The challenge is to identify such correlations automatically, and this provides an interesting direction for future work.
IV. RESULTS
The programs developed have been validated by testing them with a variety of inputs. Here we show the outputs for the different inputs given to the intrusion detection system.
Figure 4.1: RMI Registry window
This form shows the RMI registry, by means of which a method of an object running inside another machine can be invoked from one machine.
Figure 4.2 Intrusion Detection System Windows.
This form shows the main screen of the Intrusion Detection System
Figure 4.3: Authorized Person Login System Window
This form shows the main screen of the Authorized Person Login System
Figure 4.4: Home Page of actual Intrusion Detection System Window
This form shows the main screen of the Authorized Person Login System and Intrusion Detection System
Figure 4.5(a): password mismatch window
Figure 4.5(b): IDS detecting unauthorized user window
This form shows that an intrusion is detected when the user enters invalid credentials
Figure 4.6: Unauthorized user detection window
This form shows what type of operation the unauthorized user performed and what actions the Intrusion Detection System took
Figure 4.7: Authorized user IDS window
This form shows valid user information for the IDS
Figure 4.8: Authorized user retrieving system properties window
This form shows the authorized user retrieving the system properties
Figure 4.9: Process level intruder detection window
This form shows that if an authorized user tries to retrieve process-level information, the IDS detects it and gives the alert information in a dialog box
Figure 4.10(a): how to send data window
Figure 4.10(b): how to add file window
Figure 4.11: Transmission status window
This form shows the number of packets transferred, the size of the packets and the process status
V. CONCLUSION
We explored the suitability of conditional random fields for building robust and efficient intrusion detection systems which can operate both at the network and at the application level. In particular, we introduced novel frameworks and developed models which address three critical issues that severely affect the large-scale deployment of present anomaly and hybrid intrusion detection systems.
VI. FUTURE ENHANCEMENT
The critical nature of the task of detecting intrusions in networks and applications leaves no margin for errors. The effective cost of a successful intrusion overshadows the cost of developing intrusion detection systems and, hence, it becomes critical to identify the best possible approach for developing better intrusion detection systems. Every network and application is custom designed, and it becomes extremely difficult to develop a single solution which can work for every network and application. In this thesis, we proposed novel frameworks and developed methods which perform better than previously known approaches. However, in order to improve the overall performance of our system, we used domain knowledge for selecting better features for training our models. This is justified because of the critical nature of the task of intrusion detection. Using domain knowledge to develop better systems is not a significant disadvantage; however, developing completely automatic systems presents an interesting direction for future research. From our experiments, it is evident that our systems performed efficiently. However, developing faster implementations of conditional random fields, particularly for the domain of intrusion detection, requires further investigation.
REFERENCES
1. Stefan Axelsson. Research in Intrusion-Detection Systems: A Survey. Technical Report 98-17, Department of Computer Engineering, Chalmers University of Technology, 1998.
2. SANS Institute - Intrusion Detection FAQ. Last accessed: November 30, 2008. http://www.sans.org/resources/idfaq/.
3. Kotagiri Ramamohanarao, Kapil Kumar Gupta, Tao Peng, and Christopher Leckie. The Curse of Ease of Access to the Internet. In Proceedings of the 3rd International Conference on Information Systems Security (ICISS), pages 234-249. Lecture Notes in Computer Science, Springer Verlag, Vol. 4812, 2007.
4. Overview of Attack Trends, 2002. Last accessed: November 30, 2008. http://www.cert.org/archive/pdf/attack_trends.pdf.
5. Kapil Kumar Gupta, Baikunth Nath, Kotagiri Ramamohanarao, and Ashraf Kazi. Attacking Confidentiality: An Agent Based Approach. In Proceedings of the IEEE International Conference on Intelligence and Security Informatics, pages 285-296. Lecture Notes in Computer Science, Springer Verlag, Vol. 3975, 2006.
6. The ISC Domain Survey. Last accessed: November 30, 2008. https://www.isc.org/solutions/survey/.
7. Peter Lyman, Hal R. Varian, Peter Charles, Nathan Good, Laheem Lamar Jordan, Joyojeet Pal, and Kirsten Swearingen. How Much Information. Last accessed: November 30, 2008. http://www2.sims.berkeley.edu/research/projects/how-much-info-2003.
8. Tao Peng, Christopher Leckie, and Kotagiri Ramamohanarao. Survey of Network-Based Defense Mechanisms Countering the DoS and DDoS Problems. ACM Computing Surveys, 39(1):3, 2007. ACM.
9. Animesh Patcha and Jung-Min Park. An Overview of Anomaly Detection Techniques: Existing Solutions and Latest Technological Trends. Computer Networks, 51(12): 3448-3470, 2007.
10. CERT/CC Statistics. Last accessed: November 30, 2008. http://www.cert.org/
AUTHORS
Mr. K. Ranganath graduated in Computer Science and Engineering from Osmania University, Hyderabad, India, in 2006 and received his M.Tech in Software Engineering from Jawaharlal Nehru Technological University, Hyderabad, A.P., India, in 2010. He is presently working as an Assistant Professor in the Department of C.S.E at the Hyderabad Institute of Technology and Management (HITAM), R.R. Dist, A.P., India. He has 4 years of experience. His research interests include network security and data mining.
Ms. Shaik Shafia graduated in Computer Science and Engineering from JNTU Hyderabad, India, in 2006 and received her M.Tech in Computer Science from Jawaharlal Nehru Technological University, Hyderabad, A.P., India, in 2011. She is presently working as an Assistant Professor in the Department of C.S.E at the Hyderabad Institute of Technology and Management (HITAM), R.R. Dist, A.P., India. She has 6 years of experience. Her research interests include secure computing.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 5 (September2012) PP: 60-63
A Survey on Algorithms of Mining Frequent Subgraphs
Maryam Gholami 1, Afshin Salajegheh2
1,2 Department of Computer Eng., South Tehran Branch, Islamic Azad University, Tehran, Iran
Abstract––Graphs are currently becoming more important in modeling and representing information. In recent years, graph mining has become an interesting field for various domains such as chemical compounds, protein structures, social networks and computer networks. One of the most important tasks in graph mining is to find frequent subgraphs. The major advantages of utilizing subgraphs are speeding up the search for similarities, finding graph specifications and graph classification. In this article we classify the main algorithms in the graph mining field; some fundamental algorithms are reviewed and categorized. Key issues for any algorithm, namely graph representation, search strategy, nature of the input and completeness of the output, are also discussed in this article.
Keywords––Frequent subgraph, Graph mining, Graph mining algorithms
I. INTRODUCTION
Extraction of useful and new information from data is defined as data mining. The aim of data mining is to discover useful patterns hidden in a data set and to turn them into explicit knowledge. With the increasing amount and complexity of today's data, there is an urgent need to accelerate data mining for large databases. Furthermore, most of the data is either structural in nature or is composed of parts and relations between those parts. Hence, there exists a need for developing scalable tools to analyze and discover concepts in a structural database. Graphs can represent a wide spectrum of data; however, they are typically used when relationships are crucial to the domain. A graph, as a general data structure, can be used to model many complicated relations among data. Labels for vertices and edges can represent different attributes of entities and the relations among them. For instance, in chemical compounds, the labels for vertices and edges can be the different types of atoms and bonds, respectively. In electronic transactions, labels for vertices represent account owners, while the occurrence of payments is indicated by edges between these owners. Since graphs can model complicated objects, they have been widely used in chemical information, text retrieval, computer vision and video indexing. Graph-based data mining, or graph mining, is defined as the extraction of novel and useful knowledge from a graph representation of data. In recent years, graph mining has become a popular area of research due to its numerous applications in a wide variety of practical fields such as sociology, software bug localization and computer networking. Graph mining is an important tool to transform graphical data into graphical information. One of the most important tasks in graph mining is to find frequent subgraphs, i.e. graphs that are repeated within the main graph (Fig. 1). Frequent subgraph mining delivers effective structured information such as common protein structures and shared patterns in object recognition. It can also be utilized in fraud detection to catch similar fraudulent transaction patterns from millions of electronic payments.
Fig. 1. From left to right: theobromine, caffeine and frequent subgraph.
II. APPROACHES OF GRAPH MINING
Graph mining algorithms can be divided into three main categories: greedy search, inductive logic programming and graph theory, as shown in Fig. 2. These three categories are discussed briefly in the following sections.
Fig. 2. Categorization of Graph mining algorithms
2-1- Greedy search approaches
In these approaches at each stage we make a decision that appears to be best at the time. A decision made at one
stage is not altered in later stages and thus, each decision should assure feasibility. The two pioneering works in this field are
SUBDUE [1] proposed by Cook et al and GBI[8] proposed by Yoshida et al. these two algorithms can use powerful
measures such as information entropy, gini-index and description length to figure out important graph structures[4].
SUBDUE is an algorithm to find a subgraph which can maximally compress graph G based on MDL [1]. MDL is a theory in
which the best hypothesis for a given set of data is the one that leads to the best compression of the data. Usually a code is
fixed to compress the data, most generally with a computer language.
The motivation for this process is to find substructures capable of compressing the data, which is done by
abstracting the substructure and identifying conceptually interesting substructures that enhance the interpretation of the data. In
this method a similarity threshold is defined, which is the cost of the changes needed in one graph to make it isomorphic to another one.
There can thus be small differences between the samples. SUBDUE's search is guided by the MDL principle given in
equation (1), where DL(S) is the description length of the substructure being evaluated, DL(G|S) is the description length of
the graph as compressed by the substructure, and DL(G) is the description length of the original graph. The best substructure
is the one that minimizes this compression ratio:

Compression = (DL(S) + DL(G|S)) / DL(G)                                            (1)

where DL(G) is a measure of the minimum number of bits of information needed to represent G.
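As a quick illustration, the following small Python snippet evaluates this measure for two candidate substructures and keeps the one with the lower ratio; the description-length values are made up for the example and do not come from the paper.

    # Toy illustration of the SUBDUE compression measure in equation (1);
    # the description-length values below are invented for the example.
    def compression(dl_s, dl_g_given_s, dl_g):
        # (DL(S) + DL(G|S)) / DL(G): lower means better compression of G by S
        return (dl_s + dl_g_given_s) / dl_g

    candidates = {"S1": (40.0, 310.0), "S2": (65.0, 255.0)}  # (DL(S), DL(G|S))
    dl_g = 400.0
    best = min(candidates, key=lambda s: compression(*candidates[s], dl_g))
    print(best, compression(*candidates[best], dl_g))        # S2, 0.8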
2-2- Inductive Logic Programming approaches
An Inductive Logic Programming (ILP) [9] system represents data as predicates combining examples and background
knowledge. The core of ILP is the use of logic for representation and the search for syntactically legal hypotheses constructed
from predicates provided by the background knowledge.
2-3- Graph theory approaches
These approaches mine a complete set of subgraphs, mainly using a support or frequency measure. A frequent
subgraph is a graph whose frequency is not lower than the minimum support. Mining frequent substructures usually consists
of two steps. First, the frequent substructure candidates are generated. Second, the frequency of each candidate is
checked. Graph-theory-based approaches are mainly divided into Apriori-based approaches and pattern-growth approaches.
2-3-1- Apriori based algorithms
In these approaches, before mining any of the subgraphs of size k+1, mining of subgraphs of size k needs to be
completed [3], where the size of a graph is defined by the number of its vertices. In this method, since the candidates are
generated in a level-wise manner, the BFS search strategy must be used. To generate larger candidate frequent
subgraphs, frequent subgraphs of size k are merged together to generate a graph of size k+1. Major algorithms with this
approach are AGM (Apriori-based Graph Mining) proposed by Inokuchi et al. in 1998 [3], FSG (Frequent Sub-Graph
discovery) proposed by Kuramochi et al. in 2001 [7], and PATH proposed by Vanetik in 2002 [10]. In the AGM algorithm,
the graph is represented by an adjacency matrix [4]. First, this matrix is sorted by the lexicographic order of the labels of the
vertices to find a unique form. Then, the code of the adjacency matrix is defined as shown in Fig. 3 to reduce the memory consumption.
Fig. 3. Code of the adjacency matrix
Two graphs of size k can be joined only if they have k-1 equal elements and the code of the first matrix is less than
or equal to that of the second one. After they are joined, a graph in normal form is generated. As the normal form representation
is in general not unique, the canonical form is defined as the one with the minimum code. The canonical form is used to
count the frequency of frequent subgraphs.
Apriori-like algorithms suffer from two considerable overheads: (1) the generation of size-(k+1) subgraph candidates
from size-k subgraphs is costly and complicated, and (2) the subgraph isomorphism test is an NP-complete problem, for which
no polynomial-time algorithm is known. Due to these limitations, non-Apriori algorithms have been developed.
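The level-wise flow described above can be sketched compactly. The toy Python code below only illustrates the Apriori-style generate-and-test loop: graphs are reduced to sets of labelled edges and support is counted by edge-set inclusion, deliberately side-stepping the subgraph isomorphism test that a real miner such as AGM or FSG must perform; the data and helper names are invented.

    # A minimal sketch of the Apriori-style, level-wise search (illustrative only).
    # Graphs are simplified to frozensets of labelled edges, and support is counted
    # by edge-set inclusion; a real implementation needs a subgraph-isomorphism
    # test, which is exactly the NP-complete step discussed in the text.
    from itertools import combinations

    def frequent_subgraphs(database, min_support):
        # level 1: frequent single edges
        level = {frozenset([e]) for g in database for e in g}
        level = {c for c in level if sum(c <= g for g in database) >= min_support}
        frequent = set(level)
        while level:
            candidates = set()
            for a, b in combinations(level, 2):
                joined = a | b
                if len(joined) == len(next(iter(level))) + 1:   # size k+1 candidate
                    candidates.add(joined)
            level = {c for c in candidates
                     if sum(c <= g for g in database) >= min_support}
            frequent |= level
        return frequent

    # toy database: each graph is a set of (vertex, vertex, edge_label) triples
    db = [frozenset({(1, 2, "C-C"), (2, 3, "C-O")}),
          frozenset({(1, 2, "C-C"), (2, 3, "C-O"), (3, 4, "O-H")}),
          frozenset({(1, 2, "C-C")})]
    print(frequent_subgraphs(db, min_support=2))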
2-3-2- Pattern Growth Algorithms
In these methods, a candidate graph is generated by adding a new edge to the previous candidate; in effect, a
frequent subgraph is extended in all possible ways. The major algorithms in this category are gSpan (Graph-based Substructure
pattern mining) proposed by Yan et al. in 2002, MoFa proposed by Borgelt et al. in 2002, FFSM (Fast Frequent Subgraph
Mining) proposed by Huan et al. in 2003, SPIN (SPanning tree based maximal graph mINing) proposed by Huan et al. in 2004,
and Gaston (GrAph, Sequences and Tree extractiON) proposed by Nijssen in 2004 [2]. In gSpan, unlike the Apriori methods,
the search strategy utilized is DFS [11]. In this method each graph is represented using a canonical DFS code, and the
isomorphism test is done by comparing these codes. This code is written based on the DFS tree. Since a graph can have many
different DFS codes, the code satisfying the conditions noted in [12] is called the minimum DFS code, in order to produce a
unique code. Two graphs are isomorphic if they have equal minimum DFS codes. If a DFS code is frequent, all of its ancestors
are frequent; if not, none of its descendants are frequent. Also, if a DFS code is not a minimum code, all of its subtrees can be
omitted to reduce the size of the search space.
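The following Python fragment is only a toy illustration of the DFS-code idea: a code is a list of (i, j, l_i, l_edge, l_j) tuples compared lexicographically, and the grow helper shows where the minimality test would prune the search. The is_minimum argument stands in for the full minimum-DFS-code check of [12], and the edge labels are invented for the example.

    # Toy illustration of gSpan's DFS codes (not a full implementation).
    # A DFS code is an ordered list of edge tuples (i, j, l_i, l_edge, l_j),
    # where i and j are DFS discovery indices and l_* are labels.
    code_a = [(0, 1, "C", "=", "C"), (1, 2, "C", "-", "O")]
    code_b = [(0, 1, "C", "-", "C"), (1, 2, "C", "-", "O")]

    # Python compares lists of tuples lexicographically, which matches the
    # element-wise ordering gSpan uses once a total order on labels is fixed.
    print(min(code_a, code_b) == code_b)   # True: "-" sorts before "=" here

    def grow(pattern_code, extensions, is_minimum):
        # Rightmost-path extension with minimum-DFS-code pruning:
        # is_minimum is a placeholder for the canonical-form test of [12];
        # any extension whose code is not minimal is discarded.
        return [pattern_code + [edge] for edge in extensions
                if is_minimum(pattern_code + [edge])]

    # Example call with a trivial (always-true) minimality test:
    print(grow(code_b, [(2, 0, "O", "-", "C")], is_minimum=lambda c: True))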
III. COMPARING ALGORITHMS
Different graph mining algorithms take different approaches to aspects such as graph representation, candidate
generation, algorithmic approach, frequency evaluation, search strategy, nature of input, and completeness of output [6]. In the
next sections, we compare some of these attributes.
3.1. Graph Representation
Graph representation is one of the major attributes of an algorithm because it directly affects memory consumption
and runtime. Graphs can be represented by an adjacency matrix, an adjacency list, a hash table, or a trie data structure [5]. It is
notable that the adjacency matrix of a sparse graph has numerous empty elements, which wastes time and memory.
3.2. Search Strategy
Search strategy is categorized into DFS and BFS. BFS mines all isomorphic subgraphs, whereas DFS does not do
so and therefore, DFS consumes less memory.
3.3. Nature of input
Another classification of the algorithms is based on the form of their input. In the first case, we have a graph
database which consists of many individual small graphs; chemical molecules are in this category. In the second case, we have
one big graph which contains small subgraphs; social networks belong to this category.
3.4. Completeness of output
The algorithms can be classified into two groups based on the completeness of the search they carry out and the
output they generate. The algorithms in the first group do not mine the complete set of frequent subgraphs, whereas those in
the second group discover the whole set of frequent subgraphs. Thus, there is a trade-off between performance and
completeness. It is necessary to consider that mining all frequent subgraphs is not always efficient.
Table 1 shows a comparison of the described algorithms based on the explained attributes.
Table 1. Comparison between SUBDUE, AGM and gSpan

                                   SUBDUE             AGM                 gSpan
  Graph representation             Adjacency matrix   Adjacency matrix    Adjacency list
  Algorithm approach               Greedy search      Apriori based       Pattern growth based
  Frequency evaluation             MDL                Canonical form      DFS code
  Search strategy                  BFS                BFS                 DFS
  Nature of input                  First case         Second case         Second case
  Mining all frequent subgraphs    No                 Yes                 Yes
IV. CONCLUSION
In this paper different graph mining algorithms have been categorized and described. In addition, their differences
and structure have been discussed. Because of the importance and efficiency of graphs in different fields of science, graph
mining is a rapidly growing field. Graph mining algorithms generally concentrate on frequent substructures. There are
various ways of categorizing the algorithms. In brief, SUBDUE is preferred to AGM and gSpan when the data is given as
large graphs. Logic-based systems make more efficient use of background knowledge and are better at learning semantically
complex concepts. For graph-theory-based methods, the goal is to find those parts of the graph that have higher support, and
these methods guarantee to mine all subgraphs under user-defined conditions. For large databases and those with a high
degree of stochasticity, AGM and gSpan are better choices. gSpan outperforms AGM by an order of magnitude and is capable of mining
larger frequent subgraphs in a bigger graph set with lower minimum supports.
REFERENCES
1. Cook, D.J.; Holder, L.B. Graph-Based Data Mining. IEEE Intelligent Systems, 2000, pp. 32-41.
2. Han, J.; Pei, J.; Yin, Y. Mining frequent patterns without candidate generation. ACM-SIGMOD International Conference on Management of Data (SIGMOD'00), 2000, pp. 1–12.
3. Inokuchi, A.; Washio, T.; Motoda, H. An apriori-based algorithm for mining frequent substructures from graph data. European Symposium on Principles of Data Mining and Knowledge Discovery (PKDD'00), 2000, pp. 13–23.
4. Inokuchi, A.; Washio, T.; Motoda, H. Complete Mining of Frequent Patterns from Graphs: Mining Graph Data. Machine Learning, 2003, pp. 321-354.
5. Kolosovskiy, M.A. Data structure for representing a graph: combination of linked list and hash table. CoRR, 2009.
6. Krishna, V.; Ranga Suri, N. N. R.; Athithan, G. A comparative survey of algorithms for frequent subgraph discovery. Current Science, 2011.
7. Kuramochi, M.; Karypis, G. Frequent subgraph discovery. International Conference on Data Mining (ICDM'01), pp. 313–320, San Jose, CA, Nov. 2001.
8. Matsuda, T.; Horiuchi, T.; Motoda, H.; Washio, T. Extension of graph-based induction for general graph structured data. PAKDD, 2000, pp. 420–431.
9. Muggleton, S. Stochastic logic programs. In L. de Raedt, ed. Advances in Inductive Logic Programming, 1996, IOS Press, Amsterdam, pp. 254-264.
10. Vanetik, N.; Gudes, E.; Shimony, S.E. Computing frequent graph patterns from semistructured data. ICDM, 2002, pp. 458–465.
11. Yan, X.; Han, J. CloseGraph: Mining Closed Frequent Graph Patterns. SIGKDD, 2003, pp. 286-295.
12. Yan, X.; Han, J. gSpan: Graph-Based Substructure Pattern Mining. ICDM, 2002, pp. 721-724.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September 2012) PP: 77-79
An Optimal FTL for SSDs
Ilhoon Shin
Department of Electronic Engineering, Seoul National University of Science & Technology, South Korea
Abstract––SSDs (Solid State Drives) are replacing traditional HDDs (Hard Disk Drives) in personal and enterprise
computing environments. SSDs use NAND flash memory as the storage medium and have to deploy an FTL (Flash
Translation Layer) to emulate the overwrite feature, because NAND flash memory does not support it. The efficiency of
the FTL is a prime factor in determining the overall performance of SSDs. This work analyses the strengths and
weaknesses of the representative FTL schemes, and presents directions for designing the optimal FTL for SSDs.
Keywords –– sector mapping, flash translation layer, NAND flash memory, SSD
I. INTRODUCTION
SSDs (Solid State Drives) are replacing the traditional HDDs (Hard Disk Drives) in personal and enterprise
computing environments, due to their light weight, silence, low energy consumption, shock resistance, and fast
performance. The drawbacks are a relatively low write performance and a fluctuation in the write performance, which
mainly stem from the properties of the NAND flash memory that is used as the storage medium in SSDs.
NAND flash memory is a kind of EEPROM (Electrically Erasable Programmable Read Only Memory) that
consists of blocks and pages. It basically provides three operations: read, write, and erase. The overwrite operation is not
supported. Once a cell is written, it must be erased before new data can be written, which is called the erase-before-write
feature. What is worse, the erase unit (a block) is larger than the read/write unit (a page). In order to support the overwrite
feature like the standard block devices, SSDs deploy a special firmware called FTL (Flash Translation Layer). FTL emulates
the overwrite operation with an out-of-place update, which writes new data to another clean page and invalidates the
previous page. In the out-of-place update, the location of valid data becomes different on every overwrite operation, and thus
the FTL maintains the mapping information between logical addresses and their physical addresses. For example, if the
file system above it requests to read sector 50, the FTL refers to the mapping table, finds the current physical location of
sector 50, and finally reads its data. For fast access, the mapping table generally resides in the internal
RAM.
Meanwhile, clean pages are eventually exhausted by continuous overwrite requests, which triggers a
garbage collector. The garbage collector reclaims invalidated pages into clean pages by means of the erase operation. Garbage
collection involves multiple block erasures and page copies, and thus it is a main source of poor write performance
and write performance fluctuation. In order to improve the write performance and to reduce the fluctuation, many works
have been conducted to design effective FTL schemes. This work examines the strengths and drawbacks of the
representative FTL schemes, and presents directions for designing the optimal FTL scheme for SSDs.
II. FLASH TRANSLATION LAYER
The representative FTL schemes are categorized into page mapping [1], block mapping [2], and hybrid mapping [3-4], according to the mapping unit between the logical address space and the physical address space. The page mapping
scheme [1] uses a NAND page as a mapping unit. Upon a write request, it searches for a clean page, and the data are written
to the found page. If the amount of the data is less than the page size, the unmodified partial data are copied from the old
page. After finishing writing the data, the previous page is invalidated, and the mapping table is updated to remember the
new location.
When clean pages are exhausted by continuous overwrite operations, the garbage collector runs. It selects
a victim NAND block to reclaim. The valid pages of the victim block are copied to a reserved clean block, and the victim
block is erased. The erased victim block becomes the new reserved clean block, and the previous reserved block serves the
subsequent overwrite requests. Thus, the latency of the garbage collector is proportional to the number of valid pages in the
victim block. In addition, the frequency of garbage collection is also influenced by the number of valid pages, because
the number of clean pages created by the garbage collector is inversely proportional to the number of valid pages. Thus, it is
beneficial to select the most invalidated block as the victim [5].
The page mapping scheme delivers good write performance by fully utilizing the clean pages, and garbage
collection is delayed as long as possible. The drawback is the large memory consumption required to maintain the mapping
table: it needs as many mapping entries as there are pages in the NAND. Even though SSDs deploy a large internal
RAM, such a large mapping table can be a burden.
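The following Python sketch is only a schematic model of the page-mapping behaviour described above (out-of-place update, invalidation, and greedy victim selection); the block and page counts and the helper names are invented for the illustration and do not come from the paper.

    # Schematic page-mapping FTL: out-of-place updates plus greedy garbage
    # collection, as described in the text. Sizes and names are illustrative only.
    PAGES_PER_BLOCK = 4
    NUM_BLOCKS = 8                            # one block is kept as the clean reserve

    class PageMappingFTL:
        def __init__(self):
            self.mapping = {}                 # logical page number -> (block, page)
            self.valid = {}                   # (block, page) -> logical page number
            self.clean = [(b, p) for b in range(NUM_BLOCKS - 1)
                          for p in range(PAGES_PER_BLOCK)]
            self.reserved = NUM_BLOCKS - 1    # reserved clean block for GC copies

        def write(self, lpn):
            if not self.clean:                # no clean page left: reclaim one block
                self._collect_garbage()
            old = self.mapping.get(lpn)
            if old is not None:
                del self.valid[old]           # invalidate the previous copy
            loc = self.clean.pop(0)           # out-of-place update to a clean page
            self.mapping[lpn] = loc
            self.valid[loc] = lpn

        def _collect_garbage(self):
            # Greedy policy: the victim is the block with the fewest valid pages.
            counts = {b: 0 for b in range(NUM_BLOCKS) if b != self.reserved}
            for (b, _p) in self.valid:
                counts[b] += 1
            victim = min(counts, key=counts.get)
            dest, used = self.reserved, 0
            for p in range(PAGES_PER_BLOCK):  # copy valid pages, then "erase" victim
                lpn = self.valid.pop((victim, p), None)
                if lpn is not None:
                    self.mapping[lpn] = (dest, used)
                    self.valid[(dest, used)] = lpn
                    used += 1
            self.clean = [(dest, p) for p in range(used, PAGES_PER_BLOCK)]
            self.reserved = victim            # the erased victim is the new reserve

    ftl = PageMappingFTL()
    for i in range(40):                       # keep overwriting a small hot set
        ftl.write(i % 6)
    print(len(ftl.mapping), len(ftl.clean))   # 6 live logical pages remain mapped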
The block mapping scheme [2] uses a NAND block as a mapping unit. Upon a write request, it searches for a clean
block, and the data are written to the found block. If the amount of the data is less than the block size, the unmodified partial
data are copied from the old block, and the page order inside the block is always preserved. After finishing writing the data,
the previous block is reclaimed to a clean block with the erasure operation, and the mapping table is updated to remember
the new location.
The block mapping scheme reduces the mapping table greatly, because the number of mapping entries is the
same as the number of blocks in the NAND. Note that a block generally consists of 64 or 128 pages. The drawback is poor
write performance: even for a small write request, the whole block is copied to the new block. Due to its low write
performance, the block mapping scheme is rarely used in SSDs.
In order to cope with small-sized write patterns with reasonable memory consumption, the BAST (Block
Associative Sector Translation) scheme [3], which is a hybrid of the block mapping scheme and the page mapping scheme,
has been presented. The BAST scheme uses several NAND blocks, called log blocks, as a write buffer in order to
absorb the small sized write requests. The log blocks are managed with the page mapping scheme, which means that the
pages in the log block can be written regardless of the logical order, and the other blocks that are called data blocks are
managed with the block mapping scheme, which means that the pages in the data block should be written in the logical order.
A log block is associated with one data block. Upon a write request, the scheme searches for the log block associated with the
target data block via the block mapping table of the data blocks, and writes the requested data to the log block sequentially,
regardless of the logical order. If the data block does not have the associated log block, a clean log block is allocated. At this
time, if there is no clean log block, the garbage collector is triggered. It selects a victim log block with the LRU (Least
Recently Used) replacement scheme and merges the victim log block with the associated data block. During the merge
process, the log block and the data block are erased after the valid pages in them are copied to a new clean data block.
The BAST scheme reduces the mapping table because page mapping tables are required only for the log blocks.
Also, it copes with small-sized write patterns by absorbing them with the log blocks. However, it is vulnerable to
widely distributed random write patterns [4]. Under a random write pattern, the clean log blocks are easily exhausted because
a log block cannot be shared by multiple data blocks. The garbage collector is frequently initiated, and log blocks are
erased even though they still have clean pages.
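A simplified Python model of the BAST write path described above is sketched below; the block sizes, the LRU victim choice, and the helper names are illustrative assumptions rather than details taken from [3].

    # Schematic BAST-style hybrid mapping: each data block may own one log block,
    # and a merge is triggered when no clean log block is available.
    from collections import OrderedDict

    PAGES_PER_BLOCK = 4
    NUM_LOG_BLOCKS = 2

    class BastFTL:
        def __init__(self):
            self.log_of = OrderedDict()   # data block -> buffered pages (LRU order)
            self.merges = 0               # count of log/data block merges

        def write(self, lpn):
            data_block = lpn // PAGES_PER_BLOCK
            if data_block not in self.log_of:
                if len(self.log_of) == NUM_LOG_BLOCKS:
                    self._merge_victim()              # no clean log block left
                self.log_of[data_block] = []
            self.log_of.move_to_end(data_block)       # keep LRU order
            log = self.log_of[data_block]
            if len(log) == PAGES_PER_BLOCK:           # log block full: merge first
                del self.log_of[data_block]
                self.merges += 1
                self.log_of[data_block] = []
                log = self.log_of[data_block]
            log.append(lpn)                           # sequential append, any order

        def _merge_victim(self):
            victim, _pages = self.log_of.popitem(last=False)   # LRU log block
            self.merges += 1                                   # erase + copy in a real FTL

    ftl = BastFTL()
    for lpn in [0, 1, 4, 5, 0, 1, 8, 9, 4, 0]:        # writes spread over 3 data blocks
        ftl.write(lpn)
    print(ftl.merges, dict(ftl.log_of))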
In order to address this problem, the FAST (Fully Associative Sector Translation) scheme [4] allows a log block
to be shared by multiple data blocks. Upon a write request, the data are written to the current working log block, regardless of
the target sector number. If there are no clean pages in the working log, the next log block becomes the current working log, and
if the next log block has been used as the working log before and has no clean pages, the garbage collector is triggered.
The garbage collector selects a victim log block with the FIFO (First In First Out) replacement scheme. The victim log is
merged with its associated data blocks, and the merge is repeated as many times as there are associated data blocks, which
makes the garbage collection latency significantly longer than in the BAST scheme.
The FAST scheme also reduces the mapping table, and it delivers a good average performance even under random
write patterns by fully utilizing the log space. However, as mentioned above, the latency of the garbage collector can be
significantly long, which causes a serious fluctuation of the write performance.
III. REQUIREMENTS OF OPTIMAL FTL
Most of the previous works have mainly focused on improving the average performance of the FTL schemes, and
the worst-case performance or the performance fluctuation has not been sufficiently studied. However, in some computing
environments, the worst-case performance may be as important as the average performance. Thus, the optimal FTL is required
to deliver both good average performance and good worst-case performance. In order to achieve this goal, the optimal FTL
should incur infrequent garbage collection, and the latency of the garbage collection should be short and evenly distributed.
The computation overhead and the memory consumption should also be low, because the FTL scheme is executed by the
internal controller using the internal RAM inside the SSD. Even though SSDs tend to deploy a large RAM, the RAM space
left over after maintaining the mapping tables can be used as a read/write buffer and contribute to increasing the overall
performance by absorbing some read/write requests without accessing NAND flash memory. Reducing the number of write
requests to NAND flash memory with the internal buffer leads to a reduced frequency of garbage collection.
Table 1. Evaluation of the FTL schemes

                    Computation      Memory         Frequency of             Latency of
                    overhead         consumption    garbage collection       garbage collection
  Page mapping      insignificant    large          infrequent               skewed
  Block mapping     insignificant    small          significantly frequent   evenly distributed
  BAST              insignificant    small          frequent                 evenly distributed
  FAST              significant      small          infrequent               seriously skewed
Table 1 shows the evaluation of the representative FTL schemes in the four aspects described above. From the
aspect of the frequency of garbage collection, which mainly affects the average performance, the block mapping scheme
is not the optimal FTL for SSDs: its significant page copying overhead on every write request hurts its effectiveness
despite its other strengths. The BAST scheme is not the optimal FTL for the same reason.
The page mapping scheme delivers good average performance by fully utilizing the clean pages, and the garbage
collector is infrequently triggered. Also, its computation overhead is low because the location of the valid data is easily
found by accessing the page mapping table; the time complexity is O(1). However, it consumes a large amount of memory,
which restricts the benefits of the internal buffer. This drawback can be mitigated by loading only the hot portion of the page
mapping table and storing the cold region in NAND flash memory [6]. However, the tradeoff between the benefits of the
buffer and the miss penalty of page mapping table accesses has not been sufficiently studied. Also, in the page mapping
scheme, the latency of the garbage collection fluctuates according to the number of valid pages in the victim block. The garbage
collection latency is proportional to the number of valid pages because the valid pages must be copied to another clean
block. Thus, the greedy replacement scheme that selects the most invalidated block as victim is beneficial for improving the
average performance. However, if the invalidated pages are evenly distributed among the blocks, even the greedy scheme is
not helpful: the latency of write requests can fluctuate, and the average performance is hurt because of the long
garbage collection latency. In this case, we need to move the valid pages of a block to another block during idle time to
create mostly invalidated blocks. If mostly invalidated blocks exist, the garbage collection latency becomes short
and thereby the fluctuation of the write latency becomes flatter.
Therefore, in order to make the page mapping scheme optimal for SSDs, the original scheme should be revised so
that only the hot portion of the page mapping table is loaded into RAM and valid pages are moved to other clean blocks
in the background to create mostly invalidated blocks. The optimal tradeoff point between the benefit of the buffer and the
miss penalty of mapping table accesses should be investigated.
The FAST scheme also delivers good average performance by fully utilizing the log space, and the garbage
collector is infrequently triggered. Also, its memory consumption is low, which means that the available memory space can
be used as a buffer and contribute to improving the average performance further. However, its computation overhead for
finding the location of valid data can be significant because the valid data are distributed across the entire set of log blocks,
which means that we need to scan all the page mapping tables of the entire log blocks in the worst case. This drawback
can be mitigated by using a hashed page table, which records the physical location of the valid data [7]. The hashed page
table is accessed in O(1) time using the logical sector number. The tradeoff is increased memory consumption, but it is
not serious. Another main drawback of the FAST scheme is the large fluctuation of the garbage collection latency, which can
hurt the worst-case write performance significantly. It stems from the excessive association of the victim log block. In order
to reduce the worst-case write latency of the FAST scheme, the KAST (K-way set Associative Sector Translation) scheme
[8] restricts the maximal association of each log block to K. The cost is increased memory consumption: the block mapping
table must additionally record the associated data block numbers, and thus the memory consumption is proportional to K.
Also, a small value of K reduces the worst-case latency, but the average performance is hurt because the utilization
of the log blocks becomes lower than in the FAST scheme. Therefore, in order to make the FAST scheme optimal for SSDs,
similar works to reduce the computation overhead and the worst-case latency, like [7-8], should be conducted, while at the
same time the average performance should not be hurt.
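A minimal sketch of the hashed-page-table idea from [7] is shown below, assuming a Python dictionary as the hash structure; the field names are illustrative, and the snippet only shows how the O(1) lookup replaces a scan over all log-block mapping tables.

    # Minimal sketch: a hash table keyed by logical sector number gives O(1)
    # lookup of valid data in the log area, instead of scanning every
    # log block's page mapping table. Field names are illustrative.
    hashed_page_table = {}   # logical sector -> (log_block, page)

    def on_log_write(lsn, log_block, page):
        hashed_page_table[lsn] = (log_block, page)   # update on every log write

    def lookup(lsn):
        # O(1) average-case lookup; None means the sector lives in a data block
        return hashed_page_table.get(lsn)

    on_log_write(50, log_block=3, page=1)
    on_log_write(50, log_block=3, page=2)            # overwrite: newest copy wins
    print(lookup(50), lookup(77))                    # (3, 2) None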
IV. CONCLUSION
This work analyzed the strengths and weaknesses of the representative sector translation schemes, and
presented directions for designing the optimal FTL for SSDs. The block mapping scheme and the BAST scheme cannot be
the optimal FTL because of their low average performance. The page mapping scheme could be the optimal FTL if its large
memory consumption is adequately reduced and background page movement is performed without hurting the average
performance. The FAST scheme could also be the optimal FTL because it delivers good average performance; however,
its computation overhead and worst-case latency should be reduced without significantly hurting the average performance.
ACKNOWLEDGEMENTS
This work was supported by Seoul National University of Science and Technology.
REFERENCES
1. Ban, A. Flash file system. United States Patent No. 5,404,485, 1995.
2. Ban, A. Flash file system optimized for page-mode flash technologies. United States Patent No. 5,937,425, 1999.
3. Kim, J.; Kim, J. M.; Noh, S.; Min, S.; Cho, Y. A space-efficient flash translation layer for compactflash systems. IEEE Transactions on Consumer Electronics, 2002, 48, pp. 366–375.
4. Lee, S.; Park, D.; Chung, T.; Choi, W.; Lee, D.; Park, S.; Song, H. A log buffer based flash translation layer using fully associative sector translation. ACM Transactions on Embedded Computing Systems, 2007, 6 (3).
5. Kawaguchi, A.; Nishioka, S.; Motoda, H. A flash-memory based file system. Proceedings of USENIX Technical Conference, 1995, pp. 155-164.
6. Gupta, A.; Kim, Y.; Urgaonkar, B. DFTL: A flash translation layer employing demand-based selective caching of page-level address mappings. Proceedings of ACM ASPLOS, 2009.
7. Shin, I. Reducing computational overhead of flash translation layer with hashed page tables. IEEE Transactions on Consumer Electronics, 2010, 56 (4), pp. 2344–2349.
8. Cho, H.; Shin, D.; Eom, Y. KAST: K-associative sector translation for NAND flash memory in real-time systems. Proceedings of DATE, 2009.
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September 2012) PP: 80-90
A Modern Approach of a Three Phase Four Wire Dstatcom for
Power Quality Improvement Using T Connected Transformer
A. Dheepanchakkravarthy, M.E.1, Prof. Jebasalma, M.E.2
1 Department of EEE, M.I.E.T Engineering College, Trichy – 07
2 Department of EEE, A.C. College of Engineering and Technology, Karaikudi – 04
Abstract––Three-phase four-wire distribution systems face severe power quality problems such as poor voltage
regulation, high reactive power and harmonic current burden, load unbalancing, excessive neutral current, etc., due to
various reasons such as single-phase loads, nonlinear loads, etc. A new topology of DSTATCOM (Distribution Static
Compensator) is proposed in this paper in which a three-phase three-leg VSC (Voltage Source Converter) is integrated
with a T-connected transformer for nonlinear loads and is able to perform all the compensation required for a three-phase
four-wire system. The T-connected transformer mitigates the neutral current and the three-leg VSC
compensates harmonic current, reactive power, and balances the load. Two single-phase transformers are connected in
T-configuration for interfacing to a three-phase four-wire power distribution system and the required rating of the VSC is
reduced. The DSTATCOM is tested for power factor correction and voltage regulation along with neutral current
compensation, harmonic reduction, and balancing of nonlinear loads. The performance of the three-phase four-wire
DSTATCOM is validated using MATLAB software with its Simulink and power system block set toolboxes.
Keywords––Power quality improvement, DSTATCOM, voltage source converter, T connected transformer, neutral
current compensation
I. INTRODUCTION
Three-phase four-wire distribution systems are used in commercial buildings, office buildings, hospitals, etc. Most
of the loads in these locations are nonlinear and mostly unbalanced in the distribution system. This creates
excessive neutral current, at both the fundamental and harmonic frequencies, and the neutral conductor gets overloaded. The
voltage regulation is also poor in the distribution system due to the unplanned expansion and the installation of different
types of loads in the existing distribution system. The power quality at the distribution system is governed by various
standards such as IEEE-519 standard [1]. The remedies to power quality problems are reported in the literature and are
known by the generic name of custom power devices (CPD) [2]. These custom power devices include the DSTATCOM
(distribution static compensator), DVR (dynamic voltage restorer) and UPQC (unified power quality conditioner). The
DSTATCOM is a shunt connected device, which takes care of the power quality problems in the currents, whereas the DVR
is connected in series with the supply and can mitigate the power quality problems in the voltage and the UPQC can
compensate power quality problems both in the current and voltage.
Some of the topologies of DSTATCOM for three-phase four-wire system for the mitigation of neutral current
along with power quality compensation in the source current are four-leg voltage source converter (VSC), three single-phase
VSCs, three-leg VSC with split capacitors [5], three-leg VSC with zigzag transformer [9],[10], and three-leg VSC with
neutral terminal at the positive or negative of dc bus [11]. The voltage regulation in the distribution feeder is improved by
installing a shunt compensator [12]. There are many control schemes reported in the literature for control of shunt active
compensators such as instantaneous reactive power theory, power balance theory, synchronous reference frame theory,
symmetrical components based, etc. [13], [14]. The synchronous reference frame theory [14] is used for the control of the
proposed DSTATCOM.
The T-connected transformer is used in three-phase distribution systems for different applications [6]–[8]. But
the application of the T-connected transformer for neutral current compensation is demonstrated here for the first time. Moreover, the
T-connected transformer is suitably designed for magneto motive force (mmf) balance. The T-connected transformer
mitigates the neutral current and the three-leg VSC compensates the harmonic current and reactive power, and balances the
load. The IGBT based VSC is self-supported with a dc bus capacitor and is controlled for the required compensation of the
load current. The DSTATCOM is designed and simulated using MATLAB software with its Simulink and power system
block set (PSB) toolboxes for power factor correction and voltage regulation along with neutral current compensation,
harmonic reduction, and load balancing with nonlinear loads.
II. BLOCK DIAGRAM REPRESENTATION
Fig. 2.1: Block Diagram Representation
The block diagram representation of the proposed three-phase four-wire DSTATCOM and T-connected
transformer based distribution system is shown in Fig. 2.1. It consists of a three-phase linear/nonlinear load block, a ripple
filter block, a control circuit block, and a shunt active filter block. The T-connected transformer block is used for neutral
current compensation, and it reduces the rating of the three-leg voltage source converter. The control circuit consists of the
DSTATCOM with a three-leg voltage source converter; this block is used to compensate the harmonic current and reactive
power and to balance the load. The DSTATCOM is also tested for power factor correction and voltage regulation. The
three-leg VSC is used as an active shunt compensator along with the T-connected transformer. The ripple filter block is used
to reduce the high-frequency ripple in the voltage at the point of common coupling (PCC); this ripple is due to the switching
current of the VSC of the DSTATCOM. All the blocks are connected at the PCC.
III. SYSTEM CONFIGURATION AND DESIGN
Fig. 3.1(a) shows the single-line diagram of the shunt-connected DSTATCOM-based distribution system. The dc
capacitor connected at the dc bus of the converter acts as an energy buffer and establishes a dc voltage for the normal
operation of the DSTATCOM system. The DSTATCOM can be operated for reactive power compensation, for power factor
correction, or for voltage regulation. Fig. 3.1(b) shows the phasor diagram for unity power factor operation. The reactive
current (Ic) injected by the DSTATCOM has to cancel the reactive component of the load current, so that the source
current is reduced to the active power component only (IS). The voltage regulation operation of the DSTATCOM is
depicted in the phasor diagram of Fig. 3.1(c). The DSTATCOM injects a current Ic such that the voltage at the load (VL) is
equal to the source voltage (VS). The DSTATCOM currents are adjusted dynamically under varying load conditions.
The proposed DSTATCOM consisting of a three-leg VSC and a T-connected transformer is shown in Fig.3.2,
where the T-connected transformer is responsible for neutral current compensation. The windings of the T-connected
transformer are designed such that the mmf is balanced properly in the transformer. A three-leg VSC is used as an active
shunt compensator along with a T-connected transformer, as shown in Fig. 3.2, and this topology has six IGBTs, and one dc
capacitor. The required compensation to be provided by the DSTATCOM decides the rating of the VSC components. The
data of the DSTATCOM system considered for analysis are given in Appendix I. The VSC is designed for compensating a
reactive power of 12 kVAR, as decided from the load details. The ripple filter is used to reduce the high-frequency ripple
in the voltage at the PCC, which is due to the switching current of the VSC of the DSTATCOM. All the blocks are connected
at the PCC. The selection of the dc capacitor and the ripple filter is given in the following sections.
3.1. DC CAPACITOR VOLTAGE
The minimum dc bus voltage of VSC of DSTATCOM should be greater than twice the peak of the phase voltage
of the system [17]. The dc bus voltage is calculated as
Vdc = 2√2 VLL / (√3 m)    (1)
Where m is the modulation index and is considered as 1 and VLL is the ac line output voltage of DSTATCOM. Thus, Vdc is
obtained as 677.69V for VLL of 415 V and is selected as 700V.
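A quick numerical check of equation (1) with the values used in the paper (VLL = 415 V, m = 1) can be done in a few lines of Python:

    # Numerical check of equation (1): Vdc = 2*sqrt(2)*VLL / (sqrt(3)*m)
    from math import sqrt

    V_LL, m = 415.0, 1.0
    V_dc = 2 * sqrt(2) * V_LL / (sqrt(3) * m)
    print(round(V_dc, 2))   # 677.69 V, rounded up to the selected 700 V dc bus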
Fig.3.1. (a) Single-line diagram of DSTATCOM system. (b) Phasor diagram for UPF operation. (c) ZVR operation
Fig.3.2. Schematics of the proposed three-leg VSC with T-connected transformer- based DSTATCOM connected in
distribution system
3.2. DC BUS CAPACITOR
The value of dc capacitor (Cdc) of VSC of DSTATCOM depends on the instantaneous energy available to the
DSTATCOM during transients [17]. The principle of energy conservation is applied as
(1/2) Cdc (Vdc² − Vdc1²) = 3 V (a I) t    (2)
where Vdc is the reference dc voltage, Vdc1 is the minimum voltage level of the dc bus, a is the overloading factor,
V is the phase voltage, I is the phase current, and t is the time by which the dc bus voltage is to be recovered. Considering
a 1.5% (10 V) reduction in the dc bus voltage during transients, Vdc1 = 690 V, Vdc = 700 V, V = 239.60 V, I = 28.76 A,
t = 350 μs, and a = 1.2, the calculated value of Cdc is 2600 μF, and 3000 μF is selected.
3.3. RIPPLE FILTER
A low-pass first-order filter tuned at half the switching frequency is used to filter the high-frequency noise from
the voltage at the PCC. Considering a low impedance of 8.1 Ω for the harmonic voltage at a frequency of 5 kHz, the ripple
filter capacitor is designed as Cf = 5 μF. A series resistance (Rf) of 5 Ω is included in series with the capacitor (Cf ). The
impedance is found to be 637 Ω at fundamental frequency, which is sufficiently large, and hence, the ripple filter draws
negligible fundamental current.
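The filter impedances quoted above can be verified quickly; the short Python check below recomputes the series R-C impedance at 5 kHz and at the 50 Hz fundamental:

    # Check of the ripple filter impedances: Rf = 5 ohm in series with Cf = 5 uF
    from math import pi, hypot

    Rf, Cf = 5.0, 5e-6
    def z(freq):
        # magnitude of the series R-C impedance at a given frequency
        return hypot(Rf, 1 / (2 * pi * freq * Cf))

    print(round(z(5e3), 1), round(z(50)))   # ~8.1 ohm at 5 kHz, ~637 ohm at 50 Hz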
IV. DESIGN OF THE T-CONNECTED TRANSFORMER
Fig. 4.1(a) shows the connection of two single-phase transformers in T-configuration for interfacing with a three-phase
four-wire system. The T-connected windings of the transformer not only provide a path for the zero-sequence
fundamental current and harmonic currents but also offer a path for the neutral current when connected in shunt at the point of
common coupling (PCC). Under an unbalanced load, the zero-sequence load-neutral current divides equally into three currents
and takes a path through the T-connected windings of the transformer. The current rating of the windings is decided by the
required neutral current compensation. The voltages across each winding are designed as shown below.
The phasor diagram shown in Fig. 4.1(b) gives the following relations to find the turns ratios of the windings. If Va1
and Vb1 are the voltages across the two windings and Va is the resultant voltage, then
Va1 = K1 Va and Vb1 = K2 Va,
where K1 and K2 are the fractions of the winding in the respective phases. Considering |Va| = |Vb| = |Vc| = V, the phasor
diagram gives
cos 30° = Va1 / Va,  so  Va1 = Va cos 30°    (3)
sin 30° = Vb1 / Va,  so  Vb1 = Va sin 30°    (4)
Fig. 4.1 (a) Design of T-connected transformer (b) Phasor diagram
Then, from (3) and (4), one gets K1 = 0.866 and K2 = 0.5. With a line voltage Vca = 415 V,
Va = Vb = Vc = 415/√3 = 239.60 V    (5)
Va1 = 207.49 V, Vb1 = 119.80 V    (6)
Hence, two single-phase transformers of ratings 5kVA, 240 V/120V/120 V and 5kVA, 208 V/208 V are selected.
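These winding voltages can be reproduced with a few lines of Python, following equations (3)-(6):

    # Check of the T-connected transformer winding voltages, equations (3)-(6)
    from math import sqrt, cos, sin, radians

    V_line = 415.0
    V_phase = V_line / sqrt(3)            # eq. (5): 239.60 V
    V_a1 = V_phase * cos(radians(30))     # eq. (3): K1 = 0.866, about 207.5 V
    V_b1 = V_phase * sin(radians(30))     # eq. (4): K2 = 0.5,   about 119.8 V
    print(round(V_phase, 2), round(V_a1, 2), round(V_b1, 2))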
V. CONTROL OF DSTATCOM
The control approaches available for the generation of reference source currents for the control of VSC of
DSTATCOM for three-phase four-wire system are instantaneous reactive power theory (IRPT), synchronous reference frame
theory (SRFT), unity power factor (UPF) based, instantaneous symmetrical components based, etc. [13], [14]. The SRFT is
used in this investigation for the control of the DSTATCOM. A block diagram of the control scheme is shown in Fig. 5.1.
The load currents (iLa, iLb, iLc), the PCC voltages (VSa, VSb, VSc), and dc bus voltage (Vdc) of DSTATCOM are sensed as
feedback signals. The load currents from the a–b–c frame are converted to the d–q–o frame using Park’s Transformation
[iLd]           [  cos θ        cos(θ − 120°)     cos(θ + 120°) ] [iLa]
[iLq]  =  (2/3) [ −sin θ       −sin(θ − 120°)    −sin(θ + 120°) ] [iLb]        (7)
[iL0]           [   1/2             1/2                1/2      ] [iLc]
Where cos θ and sin θ are obtained using a three-phase phase locked loop (PLL). A PLL signal is obtained from
terminal voltages for generation of fundamental unit vectors [18] for conversion of sensed currents to the d–q–o reference
frame. The SRF controller extracts dc quantities by a low-pass filter, and hence, the non-dc quantities (harmonics) are
separated from the reference signal. The d-axis and q-axis currents consist of fundamental and harmonic components as
iLd = id dc + id ac    (8)
iLq = iq dc + iq ac    (9)
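As a rough illustration of this control step (not the authors' MATLAB model), the short Python sketch below applies the transformation of equation (7) to sampled load currents and separates the dc part of the d-axis current with a simple first-order low-pass filter; the sampling rate, filter constant, and test waveform are invented for the example, and an ideal PLL angle is assumed.

    # Rough sketch of the SRF step: abc -> dq0 via equation (7), then a
    # first-order low-pass filter keeps only the dc part of the d-axis current.
    from math import cos, sin, pi, radians

    def abc_to_dq0(ia, ib, ic, theta):
        k = radians(120)
        i_d = (2/3) * (ia*cos(theta) + ib*cos(theta - k) + ic*cos(theta + k))
        i_q = -(2/3) * (ia*sin(theta) + ib*sin(theta - k) + ic*sin(theta + k))
        i_0 = (ia + ib + ic) / 3
        return i_d, i_q, i_0

    f, fs, alpha = 50.0, 10_000.0, 0.01   # mains frequency, sample rate, LPF constant
    id_dc = 0.0
    for n in range(2000):                 # 0.2 s of a distorted, lagging load current
        t = n / fs
        theta = 2*pi*f*t                  # PLL angle (ideal PLL assumed)
        k = radians(120)
        ia = 10*cos(theta - 0.5) + 2*cos(5*theta)            # fundamental + 5th harmonic
        ib = 10*cos(theta - 0.5 - k) + 2*cos(5*(theta - k))
        ic = 10*cos(theta - 0.5 + k) + 2*cos(5*(theta + k))
        i_d, i_q, i_0 = abc_to_dq0(ia, ib, ic, theta)
        id_dc += alpha * (i_d - id_dc)    # low-pass: extract the dc component of i_d
    print(round(id_dc, 2))                # close to 10*cos(0.5) ~ 8.78, the fundamental active part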
5.1. Unity Power Factor (UPF) operation of DSTATCOM
The control strategy for reactive power compensation for UPF operation considers that the source must deliver the
mean value of the direct-axis component of the load current along with the active power component of current required for
maintaining the dc bus and meeting the losses (iloss) in the DSTATCOM. The output of the proportional-integral (PI)
controller on the dc bus voltage of the DSTATCOM is taken as the current (iloss) for meeting its losses:
iloss(n) = iloss(n−1) + Kpd (Vde(n) − Vde(n−1)) + Kid Vde(n)    (10)
where Vde(n) = V*dc − Vdc(n) is the error between the reference (V*dc) and sensed (Vdc) dc voltages at the nth sampling
instant, and Kpd and Kid are the proportional and integral gains of the dc bus voltage PI controller. The reference source
current is therefore
i*d = id dc + iloss    (11)
The reference source current must be in phase with the voltage at the PCC but with no zero-sequence component.
It is therefore obtained by the following inverse Park's transformation with i*d as in (11) and i*q and i*0 set to zero.
[i*a]   [ cos θ           sin θ            1 ] [i*d]
[i*b] = [ cos(θ − 120°)   sin(θ − 120°)    1 ] [i*q]        (12)
[i*c]   [ cos(θ + 120°)   sin(θ + 120°)    1 ] [i*0]
5.2. Zero-Voltage Regulation (ZVR) operation of DSTATCOM:
The compensating strategy for ZVR operation considers that the source must deliver the same direct-axis
component i*d as in (11), along with the sum of the quadrature-axis current (iq dc) and the component obtained from the PI
controller (iqr) used for regulating the voltage at the PCC. The amplitude of the ac terminal voltage (VS) at the PCC is
controlled to its reference value (V*S) using a PI controller, whose output is taken as the reactive component of current (iqr)
for zero-voltage regulation of the ac voltage at the PCC. The amplitude of the ac voltage (VS) at the PCC is calculated from
the ac voltages (vsa, vsb, vsc) as
VS = [(2/3) (v²sa + v²sb + v²sc)]^(1/2)    (13)
Then, a PI controller is used to regulate this voltage to its reference value as
iqr(n) = iqr(n−1) + Kpq (Vte(n) − Vte(n−1)) + Kiq Vte(n)    (14)
where Vte(n) = V*S − VS(n) denotes the error between the reference (V*S) and actual (VS(n)) terminal voltage amplitudes at
the nth sampling instant, and Kpq and Kiq are the proportional and integral gains of the PCC voltage PI controller. The
reference source quadrature-axis current is
i*q = iq dc + iqr    (15)
The reference source current is obtained by inverse Park’s transformation using (12) with i*d as in (11) and i*q as in (15) and
i*0 as zero.
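Equations (10) and (14) are the same incremental (velocity-form) PI update; a compact Python sketch, with gains taken from Appendix I and an invented error sequence, is shown below:

    # Incremental (velocity-form) PI update used in equations (10) and (14):
    #   u(n) = u(n-1) + Kp*(e(n) - e(n-1)) + Ki*e(n)
    def make_pi(kp, ki):
        state = {"u": 0.0, "e_prev": 0.0}
        def step(error):
            state["u"] += kp * (error - state["e_prev"]) + ki * error
            state["e_prev"] = error
            return state["u"]
        return step

    dc_bus_pi = make_pi(kp=0.19, ki=6.25)     # dc bus loop, eq. (10), gains from Appendix I
    pcc_pi = make_pi(kp=0.9, ki=7.5)          # PCC voltage loop, eq. (14), gains from Appendix I

    for v_error in [10.0, 6.0, 3.0, 1.0, 0.0]:   # invented dc-bus error samples
        i_loss = dc_bus_pi(v_error)
    print(round(i_loss, 2), round(pcc_pi(2.5), 2))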
Fig.5.1 Control algorithm for the three-leg-VSC-based DSTATCOM in a three phase four-wire system.
5.3. Current controlled PWM generator:
In the current controller, the sensed source currents (isa, isb, isc) and the reference source currents (isa*, isb*, isc*) are
compared, and a proportional controller amplifies the current error in each phase. The amplified current error is then
compared with a triangular carrier signal at the switching frequency to generate the gating signals for the six IGBT switches
of the VSC of the DSTATCOM. The gate signals are PWM controlled so that the sensed source currents follow the reference
source currents precisely.
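A bare-bones sketch of this carrier-comparison step for one phase leg is given below; the proportional gain, carrier frequency, and waveforms are invented values, and real gate-drive details (dead time, complementary switching) are omitted:

    # Bare-bones carrier-comparison PWM for one phase: the amplified current error
    # is compared with a triangular carrier to set the upper-switch gate signal.
    from math import sin, pi

    Kp, f_sw, fs = 2.0, 10e3, 1e6             # proportional gain, 10 kHz carrier, 1 MHz sampling

    def triangle(t):
        # unit triangular carrier between -1 and +1 at the switching frequency
        phase = (t * f_sw) % 1.0
        return 4*phase - 1 if phase < 0.5 else 3 - 4*phase

    gate_upper = []
    for n in range(200):                      # 200 us, i.e. two carrier periods
        t = n / fs
        i_ref = 10.0 * sin(2*pi*50*t)         # reference source current (invented)
        i_sensed = 9.8 * sin(2*pi*50*t - 0.01)
        error = Kp * (i_ref - i_sensed)       # amplified current error
        gate_upper.append(1 if error > triangle(t) else 0)
    print(sum(gate_upper), "of", len(gate_upper), "samples high")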
VI. MODELING AND SIMULATION
The three-leg VSC and T-connected-transformer-based DSTATCOM connected to a three-phase four-wire
system is modeled and simulated using MATLAB with its Simulink and PSB toolboxes. The ripple filter is connected to the
DSTATCOM for filtering the ripple in the PCC voltage. The system data are given in Appendix I. The MATLAB-based
model of the three-phase four-wire DSTATCOM is shown in Fig. 6.2. The T-connected transformer in parallel with the load,
the three-phase source, and the shunt-connected three-leg VSC are connected as shown in Fig. 6.2. The available model of
linear transformers, which includes losses, is used for modeling the T-connected transformer. The control algorithm for the
DSTATCOM is also modeled in MATLAB. The reference source currents are derived from the sensed PCC voltages (vsa,
vsb, vsc), load currents (iLa, iLb, iLc), and the dc bus voltage of DSTATCOM (vdc). A PWM current controller is used over the
reference and sensed source currents to generate the gating signals for the IGBTs of the VSC of the DSTATCOM.
6.1 Simulation diagram of Three-Phase Four-Wire Distribution System without Controller Circuits
It consists of two three-phase circuit breakers, an active/reactive power block, a power factor calculation block,
and a display. The circuit breakers are used to simulate the unbalanced condition. The source voltage (Vs), source current
(Is), load current (IL), load neutral current (ILn), and source neutral current (ISn) are measured from the corresponding
scopes, as shown in Fig. 6.1.
6.2. Simulation diagram of the T-connected Transformer and Three leg VSC based DSTATCOM for power quality
improvement
It consists of three three-phase circuit breakers, a nonlinear load, the DSTATCOM block, the T-connected
transformer, the controller block, power factor correction, a ripple filter, and the measurement scopes, as shown in Fig. 6.2.
Initially the three-phase four-wire distribution system is in a stable condition (CB1 and CB2 are open), and the controller
circuit is not connected to the balanced three-phase four-wire distribution system. When circuit breaker 1 closes at 0.2 s, one
phase of the load is disconnected and the load becomes unbalanced. At this instant circuit breaker 3 closes, thereby
connecting the controller circuit to the three-phase four-wire distribution system. Circuit breaker 1 remains closed from
0.2 s to 0.5 s. Further, at 0.3 s circuit breaker 2 closes, disconnecting another phase, and it remains closed until 0.4 s. The
unbalanced condition resulting from this disturbance is rectified by the controller action.
Fig.6.1. Simulation diagram of three-phase four-wire System without controller circuits
Fig.6.2. Simulation diagram of the proposed three-phase four-wire DSTATCOM-connected system for Non-linear Load
VII. RESULTS
7.1. Performance of the three-phase four-wire distribution system for nonlinear load without controller circuits
The source voltage (Vs), source current (Is), load current (IL), load neutral current (ILn), and source neutral current (ISn)
are measured from the corresponding scopes of Fig. 6.1 and shown in Fig. 7.1. The power factor measured in this condition
is 0.5119, as shown in Fig. 6.1. The source current (Is) and load current (IL) both contain harmonics; the THD measured
with the help of FFT analysis is 37.85%. The source current and its harmonic spectrum without the controller circuit are also
shown in Fig. 7.1.
(Waveforms in Fig. 7.1: source voltage Vs, source current Is, source current Isa, load current IL, load neutral current ILn, source neutral current ISn, and source-current harmonic spectrum)
Fig. 7.1. Performance of the three-phase four-wire distribution system for nonlinear load without controller circuits
7.2. Performance of the three-phase four-wire DSTATCOM with nonlinear load for harmonic compensation and load balancing
The dynamic performance under the nonlinear, unbalanced load condition is shown in Fig. 7.2. At 0.2 s the load is
changed to a two-phase load, and at 0.3 s to a single-phase load; the removed phases are reapplied at 0.4 s and 0.5 s,
respectively. The PCC voltage (VS), source current (IS), load current (IL), compensator current (IC), source neutral current
(ISn), load neutral current (ILn), compensator neutral current (ICn), dc bus voltage of the DSTATCOM (VDC), amplitude of
the PCC voltage (VAmp), reactive power (Q), active power (P), harmonic spectrum of the source current, and filter current
are shown in Fig. 7.2.
The source currents are observed to be balanced and sinusoidal under all these conditions. The harmonic currents are
cancelled by the filter currents, so a nearly sinusoidal source current is obtained with the help of the DSTATCOM and VSC.
The total harmonic distortion (THD) of the source current was improved from 37.85% to 5.52%, which shows the
satisfactory performance of the DSTATCOM for harmonic compensation as stipulated by the IEEE 519 standard. The load
current and the compensator current are observed to be balanced under all these conditions with the help of the
DSTATCOM. The T-connected transformer is responsible for neutral current compensation: the compensator neutral current
(ICn) cancels the load neutral current (ILn), so the source neutral current becomes zero (ISn = ILn + ICn = 0), which verifies
proper compensation.
The reactive power (Q) is compensated for power factor correction with the help of the DSTATCOM. From the
waveform, the reactive power drawn from the source is reduced to nearly zero (Q ≈ 0), so the power factor approaches unity
(cos Φ ≈ 1); the measured power factor improved from 0.5119 to 0.8040.
(Waveforms in Fig. 7.2: source voltage Vs, source current Is, source current Isa, load current IL, compensator current Ic, load neutral current ILn, compensator neutral current ICn, source neutral current ISn, reactive power Q and active power P, filter current If, amplitude of PCC voltage VAmp, dc bus voltage VDC, and source-current harmonic spectrum)
Fig. 7.2. Performance of the three-phase four-wire DSTATCOM for nonlinear load with controller circuits
7.3. Comparison with other techniques
A DSTATCOM based on three single-phase VSCs [5] requires a total of 12 semiconductor devices and hence is
not attractive, and the three-leg VSC with split capacitors [5] has the disadvantage of difficulty in maintaining equal dc
voltages across the two series-connected capacitors. The four-leg-VSC-based DSTATCOM [3] is considered superior in
terms of the number of switches, complexity, cost, etc. A three-leg VSC with a T-connected transformer [10] has been
reported recently and has shown improved performance. The three-leg VSC with a T-connected transformer has the
advantages of using a passive device for neutral current compensation, a reduced number of switches, the use of a readily
available three-leg VSC, etc. The proposed three-phase four-wire DSTATCOM is based on a three-leg VSC and a
T-connected transformer; the T-connected transformer requires two single-phase transformers. A star/delta transformer has
also been reported [19] for neutral current compensation, but the kVA rating required is higher than for the T-connected
transformer. Table 7.1 shows the rating comparison of the two transformer techniques for a given neutral current of 10 A. It
is observed that the kVA rating of the T-connected transformer is much lower than that of a star/delta transformer. Similarly,
comparison with the four-leg converter shows that the number of switches is reduced in the proposed configuration, thus
reducing the complexity and cost of the system.
  Transformer    Winding voltage (V)   Winding current (A)   kVA    Number of transformers   Total kVA
  Star/Delta     240/240               10                    2.4    3                        7.2
  T-connected    240/120/120           10                    2.4    1                        4.48
                 208/208               10                    2.08   1
Table 7.1. Comparison with other technique.
VIII. CONCLUSION
A new three-phase four-wire DSTATCOM using a T-connected transformer has been proposed to improve the
power quality of the three-phase four-wire distribution system. The performance of the distribution system has been
demonstrated for neutral current compensation along with reactive power compensation, harmonic reduction, and load
balancing for a nonlinear load. The performance of the three-phase four-wire distribution system with and without the
controller circuits for a nonlinear load was discussed in the preceding sections, and the following observations are obtained.
Without the controller, the source currents of the disconnected phases are reduced to zero during the fault period (ISA = 0,
ISB = 0, ISC = 6 A from 0.2 s to 0.5 s, as shown in Fig. 7.1). This is compensated by using the DSTATCOM circuits, and the
load current of each phase is also compensated, as shown in Fig. 7.2. The THD of the system without the controller is
37.85%, and this is reduced to 5.57% with the help of the DSTATCOM. The reactive power Q was compensated and the
power factor was improved from 0.5119 to 0.8040 by using the DSTATCOM. The dc bus voltage VDC was regulated to its
reference value (700 V) under all load disturbances. The voltage regulation and power factor correction modes of operation
of the DSTATCOM behaved as expected. Two single-phase transformers are used in the T-configuration to interface with
the three-phase four-wire system, and the total kVA rating of the T-connected transformer is lower than that of a star/delta
transformer for a given neutral current compensation. From the above discussion, the proposed three-phase four-wire
DSTATCOM for power quality improvement using a T-connected transformer is very efficient for power quality
improvement of the three-phase four-wire distribution system compared to other techniques (using star/delta and
star/hexagon transformers).
APPENDIX-I
Line impedance: Rs = 0.01 Ω, Ls = 2 mH
For linear Loads: 20 KVA, 0.80 pF lag
For Nonlinear: Three single-phase bridge rectifiers with R = 25 Ω and C = 470 μF
Ripple filter: Rf = 5 Ω, Cf = 5 μF
DC bus voltage of DSTATCOM: 700 V
DC bus capacitance of DSTATCOM: 3000 μF
AC inductor: 2.5 mH
DC voltage PI controller: Kpd = 0.19, Kid = 6.25
PCC voltage PI controller: Kpq = 0.9, Kiq = 7.5
AC line voltage: 415 V, 50 Hz
PWM switching frequency: 10 kHz
The two single-phase transformers selected are:
Transformer 1: 5 kVA, 240 V/120 V/120 V
Transformer 2: 5 kVA, 208 V/208 V
REFERENCES
1. IEEE Recommended Practices and Requirements for Harmonics Control in Electric Power Systems, IEEE Std. 519, 1992.
2. A. Ghosh and G. Ledwich, Power Quality Enhancement Using Custom Power Devices, Kluwer Academic Publishers, London, 2002.
3. Bhim Singh, Jayaprakash Pychadathil, and Dwarkadas Pralhaddas Kothari, "Three-Leg VSC and a Transformer Based Three-Phase Four-Wire DSTATCOM for Distribution Systems," Fifteenth National Power Systems Conference (NPSC), IIT Bombay, December 2008.
4. Bhim Singh, Jayaprakash Pychadathil, and Dwarkadas Pralhaddas Kothari, "Star/Hexagon Transformer Based Three-Phase Four-Wire DSTATCOM for Power Quality Improvement," International Journal of Emerging Electric Power Systems, vol. 9, issue 6, article 1, 2008.
5. H. Akagi, E. H. Watanabe and M. Aredes, Instantaneous Power Theory and Applications to Power Conditioning, John Wiley & Sons, New Jersey, USA, 2007.
6. B. A. Cogbill and J. A. Hetrick, "Analysis of T–T connections of two single phase transformers," IEEE Trans. Power App. Syst., vol. PAS-87, no. 2, pp. 388–394, Feb. 1968.
7. B. Singh, V. Garg, and G. Bhuvaneswari, "A novel T-connected autotransformer based 18-pulse AC–DC converter for harmonic mitigation in adjustable-speed induction-motor drives," IEEE Trans. Ind. Electron., vol. 54, no. 5, pp. 2500–2511, Oct. 2007.
8. IEEE Guide for Applications of Transformer Connections in Three-Phase Distribution Systems, IEEE C57.105-1978 (R2008).
9. H.-L. Jou, J.-C. Wu, K.-D. Wu, W.-J. Chiang, and Y.-H. Chen, "Analysis of zig-zag transformer applying in the three-phase four-wire distribution power system," IEEE Trans. Power Del., vol. 20, no. 2, pp. 1168–1173, Apr. 2005.
10. H.-L. Jou, K.-D. Wu, J.-C. Wu, and W.-J. Chiang, "A three-phase four-wire power filter comprising a three-phase three-wire active filter and a zig-zag transformer," IEEE Trans. Power Electron., vol. 23, no. 1, pp. 252–259, Jan. 2008.
11. H. L. Jou, K. D. Wu, J. C. Wu, C. H. Li, and M. S. Huang, "Novel power converter topology for three-phase four-wire hybrid power filter," IET Power Electron., vol. 1, no. 1, pp. 164–173, 2008.
12. H. Fugita and H. Akagi, "Voltage-regulation performance of a shunt active filter intended for installation on a power distribution system," IEEE Trans. Power Electron., vol. 22, no. 1, pp. 1046–1053, May 2007.
13. M. I. Milanés, E. R. Cadaval, and F. B. González, "Comparison of control strategies for shunt active power filters in three-phase four-wire systems," IEEE Trans. Power Electron., vol. 22, no. 1, pp. 229–236, Jan. 2007.
14. M. C. Benhabib and S. Saadate, "New control approach for four-wire active power filter based on the use of synchronous reference frame," Electr. Power Syst. Res., vol. 73, no. 3, pp. 353–362, Mar. 2005.
15. B. L. Theraja and A. K. Theraja, A Text Book of Electrical Technology, Volume III, first multicolour edition, S. Chand and Company Ltd., 2005.
16. R. Mohan Mathur and Rajiv K. Varma, Thyristor-Based FACTS Controllers for Electrical Transmission Systems, John Wiley & Sons, Inc., 2002.
17. B. N. Singh, P. Rastgoufard, B. Singh, A. Chandra and K. Al-Haddad, "Design, simulation and implementation of three pole/four pole topologies for active filters," IEE Electr. Power Appl., vol. 151, no. 4, pp. 467–476, July 2004.
18. S. Bhattacharya and D. Diwan, "Synchronous frame based controller implementation for a hybrid series active filter system," in Proc. IEEE Ind. Appl. Soc. Meeting, 1995, pp. 2531–2540.
19. P. Enjeti, W. Shireen, P. Packebush, and I. Pitel, "Analysis and design of a new active power filter to cancel neutral current harmonics in three-phase four-wire electric distribution systems," IEEE Trans. Ind. Appl., vol. 30, no. 6, pp. 1565–1572, Nov./Dec. 1994.
20. Wei-Neng Chang and Kuan-Dih Yeh, "Design and implementation of DSTATCOM for fast load compensation of unbalanced loads," Journal of Marine Science and Technology, vol. 17, no. 4, pp. 257–263, 2009.
21. Mahesh K. Mishra, Arindam Ghosh, Avinash Joshi, and Hiralal M. Suryawanshi, "A novel method of load compensation under unbalanced and distorted voltages," IEEE Transactions on Power Delivery, vol. 22, no. 1, January 2007.
22. Fabiana Pottker and Ivo Barbi, "Power factor correction of non-linear loads employing a single phase active power filter: control strategy, design methodology and experimentation," IEEE transaction.
23. Hendri Masdi, Norman Mariun, S. M. Bashi, A. Mohamed, and Sallehhudin Yusuf, "Design of a prototype D-STATCOM for voltage sag," IEEE Transactions on Power Delivery.
24. Pierre Giroux, Gilbert Sybille and Hoang Le-Huy, "Modeling and simulation of a DSTATCOM using Simulink's Power System Blockset," The 27th Annual Conference of the IEEE Industrial Electronics Society.
25. H. Nasiraghdam and A. Jalilian, "Balanced and unbalanced voltage sag mitigation using DSTATCOM with linear and nonlinear loads," World Academy of Science, Engineering and Technology, 28, 2007.
26. Oscar Armando Maldonado Astorga, José Luz Silveira, and Julio Carlos Damato, "The influence of harmonics from non-linear loads in the measuring transformers of electrical substations," Laboratory of High Voltage and Electric Power Quality.
27. Arindam Dutta, Tirtharaj Sen, and S. Sengupta, "Reduction in total harmonic distortion in a non-linear load with harmonic injection method," International Journal of Recent Trends in Engineering, vol. 1, no. 4, May 2009.
28. www.feg.unesp.br/gose
29. www.ieee.org
30. The Berkeley Electronic Press
31. www.google.com