Synopsis Details for Titles 11-15

www.globalsoftsolutions.in
gsstrichy@gmail.com
SECURE SPATIAL TOP-K QUERY PROCESSING VIA UNTRUSTED
LOCATION-BASED SERVICE PROVIDERS
ABSTRACT
Collaborative location-based information generation and sharing is considered,
which has become increasingly popular due to the explosive growth of
Internet-capable and location-aware mobile devices. The system consists of a data
collector, data contributors, location-based service providers (LBSPs), and
system users. The data collector gathers reviews about points of interest (POIs)
from data contributors, while LBSPs purchase POI data sets from the data
collector and allow users to perform spatial top-k queries, which ask for the
POIs in a certain region with the highest k ratings for an interested POI
attribute. In practice, LBSPs are untrusted and may return fake query results
for various bad motives, e.g., in favor of POIs willing to pay. Three novel
schemes are used for users to detect fake spatial snapshot and moving top-k
query results, as an effort to foster the practical deployment and use of the
proposed system.
INTRODUCTION
The explosive growth of Internet-capable and location-aware mobile devices and
the surge in social network usage are fostering collaborative information
generation and sharing on an unprecedented scale. Almost all smartphones
have cellular/Wi-Fi Internet access and can acquire their precise
locations via pre-installed positioning software. Also, owing to the growing
popularity of social networks, it is increasingly convenient and motivating
for mobile users to share with others their experience with all kinds of points of
interest (POIs) such as bars, restaurants, grocery stores, coffee shops, and
hotels. Meanwhile, it becomes commonplace for people to perform various
spatial POI queries at online location-based service providers (LBSPs) such as
Google and Yelp. As probably the most familiar type of spatial queries, a spatial
(or location-based) top-k query asks for the POIs in a certain region and with
the highest k ratings for a given POI attribute. For example, one may search for
the best 10 Italian restaurants with the highest food ratings within five miles of
his current location.
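As a plain illustration of what such a query computes (not the verifiable scheme itself), a spatial top-k query is a filter-and-sort over POI records; the `POI` layout below is a hypothetical choice for this sketch:

```python
from dataclasses import dataclass

@dataclass
class POI:
    name: str
    x: float       # location coordinates (e.g., miles in a local projection)
    y: float
    rating: float  # rating for the attribute of interest, e.g., food

def spatial_top_k(pois, cx, cy, radius, k):
    """Return the k highest-rated POIs within `radius` of (cx, cy)."""
    in_range = [p for p in pois
                if (p.x - cx) ** 2 + (p.y - cy) ** 2 <= radius ** 2]
    return sorted(in_range, key=lambda p: p.rating, reverse=True)[:k]
```

For instance, the best 10 Italian restaurants within five miles would be `spatial_top_k(italian_restaurants, x, y, 5, 10)` after filtering by cuisine.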
PROBLEM DEFINITION
There are two essential drawbacks with current top-k query services. First,
individual LBSPs often have very small data sets comprising POI reviews. This
largely affects the usefulness and eventually hinders the more prevalent
use of spatial top-k query services. For example, the data sets at individual LBSPs may not
cover all the Italian restaurants within a search radius. Additionally, the same
restaurant may receive diverse ratings at different LBSPs, so users may get
confused by very different query results from different LBSPs for the same
query. A leading reason for limited data sets at individual LBSPs is that people
tend to leave reviews for the same POI at one or at most a few LBSPs’
websites which they often visit. Second, LBSPs may modify their data sets by
deleting some reviews or adding fake reviews and return tailored query results
in favor of the restaurants that are willing to pay or against those that refuse to
pay. Even if LBSPs are not malicious, they may return unfaithful query results
under the influence of various attacks such as the Sybil attack whereby the
same attacker can submit many fake reviews for the same POI. In either case,
top-k query users may be misled by the query results to make unwise
decisions.
PROBLEM SOLUTION
A good solution is to introduce trusted data collectors as the central hubs for
collecting POI reviews. In particular, data collectors can offer various
incentives, such as free coffee coupons, for stimulating review submissions and
then profit by selling the review data to individual LBSPs. Instead of submitting
POI reviews to individual LBSPs, people can now submit them to a few data
collectors to earn rewards. The data sets maintained by data collectors can
thus be considered the union of the small data sets currently at individual
LBSPs. Such centralized data collection also makes it much easier and more
feasible for data collectors to employ sophisticated defenses to filter out fake
reviews from malicious entities such as Sybil attackers. Data collectors can be either new
service providers or more preferably existing ones with a large user base, such
as Google, Yahoo, Facebook, Twitter, and MSN. Many of these service providers
have already been collecting reviews from their users and offer open APIs for
exporting selected data from their systems.
The above system model is also highly beneficial for LBSPs. In particular, they
no longer need to struggle to solicit faithful user reviews, which is often a
daunting task especially for small/medium-scale LBSPs. Instead, they can
focus their limited resources on developing appealing functionalities (such as
driving directions and aerial photos) combined with the high-quality review
data purchased from data collectors. The query results they can provide will be
much more trustworthy, which would in turn help them attract more and more
users. This system model thus can greatly help lower the entrance bar for new
LBSPs without sufficient funding and thus foster the prosperity of location-based services and applications.
A main challenge for realizing the appealing system above is how to deal with
untrusted and possibly malicious LBSPs. Specifically, malicious LBSPs may
still modify the data sets from data collectors and provide biased top-k query
results in favor of POIs willing to pay. Even worse, they may falsely claim to
generate query results based on review data from trusted data collectors
which they actually did not purchase. Moreover, non-malicious LBSPs may be
compromised to return fake top-k query results.
EXISTING SYSTEM
Data privacy techniques
 Ensuring data privacy requires the data owner to outsource encrypted
data to the service provider, and efficient techniques are needed to
support querying encrypted data. A bucketization approach was
proposed to enable efficient range queries over encrypted data, which
was recently improved.
 Shi et al. presented novel methods for multi-dimensional range queries
over encrypted data. Some of the most recent proposals aim at secure ranked
keyword search or fine-grained access control over encrypted data.
Ensuring query integrity
 Ensuring query integrity is studied, i.e., ensuring that a query result is
indeed generated from the outsourced data and contains all the data satisfying
the query. In these schemes, the data owner outsources both its data and its
signatures over the data to the service provider, which returns both the query
result and a verification object (VO) computed from the signatures for the
querying user to verify query integrity.
 Many techniques were proposed for signature and VO generation, based on
signature chaining or on the Merkle hash tree.
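The Merkle hash tree mentioned in the last bullet can be sketched as follows: the tree is built over hashed data records, the owner signs only the root, and the VO for one record is the list of sibling hashes on its root path. This is a minimal sketch, not the exact construction of any cited scheme:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root over a list of leaf byte strings."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes (the VO) proving leaves[index] lies under the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (hash, sibling-is-on-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root path from a leaf and its VO."""
    node = h(leaf)
    for sib, is_left in proof:
        node = h(sib + node) if is_left else h(node + sib)
    return node == root
```

A user who trusts the signed root can then check any returned record against its VO without seeing the rest of the data set.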
Secure query processing technique
 Secure remote query processing in tiered sensor networks is also
studied. These schemes assume that some master nodes are in charge of
storing data from regular sensor nodes and answering the queries from
the remote network owner.
Disadvantages
 None of these schemes considers spatial top-k queries.
 Spatial top-k queries exhibit a unique feature: whether a POI is among the
top k is jointly determined by all the other POIs in the query region, and the
query region cannot be predicted in practice.
PROPOSED SYSTEM
 To propose three novel schemes to tackle the special top-k query
processing challenge, fostering the practical deployment and wide use
of the envisioned system.
 The key idea is that the data collector pre-computes and authenticates
some auxiliary information about its data set, which will be sold along
with its data set to LBSPs.
 To faithfully answer a top-k query, an LBSP needs to return the correct top-k
POI data records as well as proper authenticity and correctness proofs
constructed from the authenticated hints.
 The authenticity proof allows the query user to confirm that the query
result only consists of authentic data records from the trusted data
collector’s data set, and the correctness proof enables the user to verify
that the returned top-k POIs are the true ones satisfying the query.
 The first two schemes both target snapshot top-k queries but differ in
how authenticated hints are precomputed and how authenticity and
correctness proofs are constructed and verified, as well as in the related
communication and computation overhead.
 The third scheme, built upon the first scheme, realizes efficient and
verifiable moving top-k queries.
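The user-side checks enabled by these proofs might look like the sketch below. The HMAC tag is only a stand-in for the data collector's digital signature over each record (authenticity), and the region and ordering checks are basic sanity checks; the full correctness proof relies on the precomputed hints and is beyond this illustration. All names here are hypothetical:

```python
import hashlib
import hmac

COLLECTOR_KEY = b"demo-collector-key"   # hypothetical verification key

def sign_record(record: dict) -> bytes:
    """Data collector authenticates one POI record (HMAC stands in for a
    digital signature in this sketch)."""
    msg = repr(sorted(record.items())).encode()
    return hmac.new(COLLECTOR_KEY, msg, hashlib.sha256).digest()

def verify_result(result, k, region_pred):
    """Client-side checks on a claimed top-k result:
    1. every record carries a valid collector tag (authenticity);
    2. every record lies in the query region;
    3. ratings are non-increasing and at most k records are returned."""
    ratings = [r["rating"] for r, _ in result]
    return (len(result) <= k
            and all(hmac.compare_digest(tag, sign_record(r)) for r, tag in result)
            and all(region_pred(r) for r, _ in result)
            and all(a >= b for a, b in zip(ratings, ratings[1:])))
```

Any tampering by the LBSP with a returned record invalidates its tag, so the query user detects it locally.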
HARDWARE REQUIREMENTS
Processor      : Any Processor above 500 MHz.
RAM            : 128 MB.
Hard Disk      : 10 GB.
Compact Disk   : 650 MB.
Input Device   : Standard Keyboard and Mouse.
Output Device  : VGA and High Resolution Monitor.
SOFTWARE SPECIFICATION
Operating System      : Windows Family.
Pages Developed Using : Java Swing
Techniques            : JDK 1.5 or higher
Database              : MySQL 5.0
NEIGHBOR SIMILARITY TRUST AGAINST SYBIL ATTACK IN P2P
E-COMMERCE
ABSTRACT
Peer-to-peer (P2P) e-commerce applications exist at the edge of the
Internet, with vulnerabilities to passive and active attacks. These
attacks have pushed away potential business firms and individuals
whose aim is to get the best benefit in e-commerce with minimal
losses. The attacks occur during interactions between the trading
peers as a transaction takes place. The Sybil attack is addressed as an
active attack, in which peers can assume bogus and multiple identities
to fake their own. Most existing work, which concentrates on social
networks and trusted certification, has not been able to prevent
Sybil attack peers from doing transactions. A neighbor similarity trust
relationship is used to address the Sybil attack. Duplicated Sybil attack
peers can be identified as the neighbor peers become acquainted
and hence more trusted by each other.
INTRODUCTION
P2P networks range from communication systems like email and
instant messaging to collaborative content rating, recommendation,
and delivery systems such as YouTube, Gnutella, Facebook, Digg,
and BitTorrent. They allow any user to join the system easily at the
expense of trust, with very little validation control. P2P overlay
networks are known for many desired attributes such as openness,
anonymity, decentralized nature, self-organization, scalability, and
fault tolerance. Each peer plays the dual role of client as well as
server, meaning that each has its own control. All the resources
utilized in the P2P infrastructure are contributed by the peers
themselves, unlike traditional methods where a central authority is
in control.
PROBLEM DEFINITION
Peers can collude and do all sorts of malicious activities in the
open-access distributed systems. These malicious behaviors lead to
service quality degradation and monetary loss among business
partners. Peers are vulnerable to exploitation, due to the open and
near-zero cost of creating new identities. The peer identities are
then utilized to influence the behavior of the system. However, if a
single defective entity can present multiple identities, it can control
a substantial fraction of the system, thereby undermining the
redundancy. The number of identities that an attacker can generate
depends on the attacker’s resources such as bandwidth, memory,
and computational power. The goal of trust systems is to ensure
that honest peers are accurately identified as trustworthy and Sybil
peers as untrustworthy.
Defending against Sybil attack is quite a challenging task. A peer
can pretend to be trusted with a hidden motive. The peer can
pollute the system with bogus information, which interferes with
genuine business transactions and functioning of the systems. This
must be prevented to protect the honest peers. The link
between an honest peer and a Sybil peer is known as an attack
edge. As each such edge represents a human-established trust
relationship, it is difficult for the adversary to introduce an
excessive number of attack edges.
SYBIL ATTACK OVERVIEW
A peer can give a positive recommendation to a peer that is later
discovered to be a Sybil or malicious peer. Detecting this can
diminish the influence of Sybil identities and hence reduce the Sybil
attack. A peer which has been giving dishonest recommendations
will have its trust level reduced; if it drops to a certain threshold
level, the peer can be expelled from the group. Each peer has an
identity, which is either honest or Sybil.
A Sybil identity can be an identity owned by a malicious user, or it
can be a bribed/stolen identity, or it can be a fake identity obtained
through a Sybil attack [24]. These Sybil attack peers are employed
to target honest peers and hence subvert the system. In Sybil
attack, a single malicious user creates a large number of peer
identities called sybils. These sybils are used to launch security
attacks, both at the application level and at the overlay level. At the
application level, sybils can target other honest peers while
transacting with them, whereas at the overlay level, sybils can
disrupt the services offered by the overlay layer like routing, data
storage, lookup, etc. In trust systems, colluding Sybil peers may
artificially increase a (malicious) peer’s rating (e.g., eBay). Systems
like Credence rely on a trusted central authority to prevent
maliciousness.
EXISTING SYSTEM
 The only known promising defense against Sybil attack is to
use social networks to perform user admission control and
limit the number of bogus identities admitted to a system.
 Authentication-based mechanisms are used to verify the
identities of the peers using shared encryption keys, or
location information.
 Social networks are used to eliminate the Sybil attack, and the
findings are based on preventing Sybil identities.
PROPOSED SYSTEM
 Neighbor similarity trust is used in a group P2P e-commerce
based on interest relationships, to eliminate maliciousness
among the peers. This is referred to as SybilTrust.
 In SybilTrust, peers in the interest-based group infrastructure
have a neighbor similarity trust between each other; hence,
they are able to prevent the Sybil attack.
 SybilTrust gives a better relationship in e-commerce
transactions, as the peers create a link between peer
neighbors.
 Peers use self-certifying identifiers that are exchanged when
they initially come into contact. These can be used as public
keys to verify digital signatures on the messages sent by their
neighbors.
 More honest peers are admitted compared to malicious peers,
where the trust association is aimed at positive results.
 Distributed admission control is used, which requires each
peer to be initially aware of only its immediate trusted
neighbors and to look for honest neighbors. The neighbors
assist in locating other peers of the same interest at other levels.
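The self-certifying identifier idea in the bullets above can be sketched as deriving the peer ID from the peer's own public key, so any neighbor can verify the binding without a central authority. A real deployment would additionally verify message signatures with the key (e.g., RSA or Ed25519 via a crypto library); only the binding check is shown, and the key material below is hypothetical:

```python
import hashlib
import os

def self_certifying_id(public_key: bytes) -> str:
    """A self-certifying identifier is derived from the peer's own public
    key, so no central authority is needed to bind identity to key."""
    return hashlib.sha256(public_key).hexdigest()

def key_matches_id(public_key: bytes, claimed_id: str) -> bool:
    """Any neighbor re-derives the identifier and checks the binding;
    a Sybil peer cannot claim an identifier whose key it lacks."""
    return self_certifying_id(public_key) == claimed_id

# Hypothetical peer key material for illustration.
pk = os.urandom(32)
peer_id = self_certifying_id(pk)
```

Peers exchange these identifiers on first contact and thereafter use the bound keys to check signatures on their neighbors' messages.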
Advantages
 SybilTrust can identify and protect honest peers from the
Sybil attack.
 The Sybil peers can have their trust canceled and dismissed
from a group.
 Based on the group infrastructure in P2P e-commerce, each
neighbor is connected to the peers by the success of the
transactions it makes or the trust evaluation level.
 A peer can only be recognized as a neighbor depending on
whether or not its trust level is sustained above a threshold value.
 SybilTrust enables neighbor peers to carry recommendation
identifiers among the peers in a group.
 Group detection algorithms identify Sybil attack peers
efficiently and scalably in large P2P e-commerce networks.
SYSTEM ARCHITECTURE
(Flow diagram.) Peer: register to region, find neighbor peer, assign
trust to neighbor, check similarity trust, identify Sybil peer,
blacklist Sybil peer.
Sybil attacker: use malicious ID, give wrong recommendation.
HARDWARE REQUIREMENTS
Processor      : Any Processor above 500 MHz.
RAM            : 128 MB.
Hard Disk      : 10 GB.
Compact Disk   : 650 MB.
Input Device   : Standard Keyboard and Mouse.
Output Device  : VGA and High Resolution Monitor.
SOFTWARE SPECIFICATION
Operating System     : Windows Family.
Programming Language : JDK 1.5 or higher
Database             : MySQL 5.0
SECURE AND VERIFIABLE POLICY UPDATE OUTSOURCING FOR
BIG DATA ACCESS CONTROL IN THE CLOUD
ABSTRACT
Due to the high volume and velocity of big data, storing big data in the cloud is
an effective option, as the cloud has the capability to store big data and
process high volumes of user access requests. Attribute-Based Encryption (ABE)
is a promising technique to ensure the end-to-end security of big data in the cloud.
However, policy updating has always been a challenging issue when ABE is
used to construct access control schemes. A trivial implementation is to let data
owners retrieve the data, re-encrypt it under the new access policy, and then
send it back to the cloud. This method, however, incurs a high communication
overhead and a heavy computation burden on data owners. A novel scheme is
proposed that enables efficient access control with dynamic policy updating for big
data in the cloud. The focus is on developing an outsourced policy updating method
for ABE systems. This method can avoid the transmission of encrypted data and
minimize the computation work of data owners by making use of the previously
encrypted data with old access policies. Policy updating algorithms are proposed for
different types of access policies. An efficient and secure method is proposed that
allows data owners to check whether the cloud server has updated the ciphertexts
correctly. The analysis shows that this policy updating outsourcing scheme is
correct, complete, secure, and efficient.
INTRODUCTION
Big data refers to high volume, high velocity, and/or high variety information
assets that require new forms of processing to enable enhanced decision making,
insight discovery and process optimization. Due to its high volume and
complexity, it becomes difficult to process big data using on-hand database
management tools. An effective option is to store big data in the cloud, as the cloud
has the capability to store big data and process high volumes of user access
requests in an efficient way. When hosting big data in the cloud, data security
becomes a major concern, as cloud servers cannot be fully trusted by data owners.
PROBLEM DEFINITION
Policy updating is a difficult issue in attribute-based access control systems,
because once the data owner outsources data to the cloud, it does not keep a
copy in local systems. When the data owner wants to change the access policy, it
has to transfer the data back to the local site from the cloud, re-encrypt the data
under the new access policy, and then move it back to the cloud server. Doing
so incurs a high communication overhead and a heavy computation burden on
data owners. This motivates the development of a new method to outsource the
task of policy updating to the cloud server.
The grand challenge of outsourcing policy updating to the cloud is to guarantee the
following requirements:
1) Correctness: Users who possess sufficient attributes should still be able to
decrypt the data encrypted under new access policy by running the original
decryption algorithm.
2) Completeness: The policy updating method should be able to update any type of
access policy.
3) Security: The policy updating should not break the security of the access control
system or introduce any new security problems.
EXISTING SYSTEM
 Attribute-Based Encryption (ABE) has emerged as a promising technique to
ensure end-to-end data security in cloud storage systems. It allows data
owners to define access policies and encrypt the data under the policies, such
that only users whose attributes satisfy these access policies can decrypt
the data.
 The policy updating problem has been discussed in key policy structure and
ciphertext-policy structure.
Disadvantages
 When more and more organizations and enterprises outsource data into the
cloud, the policy updating becomes a significant issue as data access policies
may be changed dynamically and frequently by data owners. However, this
policy updating issue has not been considered in existing attribute-based
access control schemes.
 Key policy structure and ciphertext-policy structure cannot satisfy the
completeness requirement, because they can only delegate key/ciphertext
with a new access policy that should be more restrictive than the previous
policy.
 Furthermore, they cannot satisfy the security requirement either.
PROPOSED SYSTEM
 Focus on solving the policy updating problem in ABE systems, and propose
a secure and verifiable policy updating outsourcing method.
 Instead of retrieving and re-encrypting the data, data owners only send
policy updating queries to cloud server, and let cloud server update the
policies of encrypted data directly, which means that cloud server does not
need to decrypt the data before/during the policy updating.
 To formulate the policy updating problem in ABE systems and develop a new
method to outsource the policy updating to the server.
 To propose an expressive and efficient data access control scheme for big
data, which enables efficient dynamic policy updating.
 To design policy updating algorithms for different types of access policies,
e.g., Boolean Formulas, LSSS Structure and Access Tree.
 To propose an efficient and secure policy checking method that enables data
owners to check whether the ciphertexts have been updated correctly by
cloud server.
Advantages
 This scheme can not only satisfy all the above requirements, but also avoid
the transfer of encrypted data back and forth and minimize the computation
work of data owners by making full use of the previously encrypted data
under old access policies in the cloud.
 This method does not require any help of data users, and data owners can
check the correctness of the ciphertext updating by their own secret keys and
checking keys issued by each authority.
 This method can also guarantee that data owners cannot use their secret keys to
decrypt any ciphertexts encrypted by other data owners, although their secret
keys contain the components associated with all the attributes.
LITERATURE SUMMARY
Attribute-based encryption (ABE)
The ABE technique is regarded as one of the most suitable technologies for data
access control in cloud storage systems.
There are two complementary forms of ABE:
Key-Policy ABE (KP-ABE)
In KP-ABE, attributes are used to describe the encrypted data, and access policies
over these attributes are built into the user’s secret keys.
Ciphertext-Policy ABE (CP-ABE)
In CP-ABE, attributes are used to describe the user’s credentials, and the access
policies over these attributes are attached to the encrypted data.
Recently, some attribute-based access control schemes were proposed to ensure
data confidentiality in the cloud. They allow data owners to define an access
structure on attributes and encrypt the data under this access structure, so that
data owners can specify the attributes a user needs to possess in order to decrypt
the ciphertext.
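In CP-ABE, decryption succeeds exactly when the user's attributes satisfy the ciphertext's policy. Leaving the cryptography aside, the policy-satisfaction check itself can be sketched for Boolean formulas; the tuple encoding of policies below is a hypothetical choice for illustration:

```python
def satisfies(policy, attributes):
    """Evaluate a Boolean access policy against a set of user attributes.
    A policy is either an attribute string or a tuple of the form
    ("AND"|"OR", subpolicy, subpolicy, ...)."""
    if isinstance(policy, str):
        return policy in attributes
    op, *children = policy
    results = (satisfies(c, attributes) for c in children)
    return all(results) if op == "AND" else any(results)
```

A policy such as ("AND", "doctor", ("OR", "cardiology", "oncology")) is satisfied by a doctor holding either specialty attribute; in a real CP-ABE scheme this evaluation is embedded in the decryption algorithm rather than run in the clear.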
However, policy updating becomes a difficult issue when applying ABE
methods to construct access control schemes, because once data owners
outsource the data to the cloud, they do not store it in local systems.
To change the access policies of encrypted data in the cloud, a trivial method is to
let data owners retrieve the data and re-encrypt it under the new access policy, and
then send it back to the cloud server.
But this method will incur a high communication overhead and heavy computation
burden on data owners.
Key-Policy Attribute-Based Encryption
This method discusses how to change the policies on keys. The authors also
proposed a ciphertext delegation method to update the policy of a ciphertext.
However, these methods cannot satisfy the completeness requirement, because
they can only delegate a key/ciphertext with a new access policy that is more
restrictive than the previous policy.
They cannot satisfy the security requirement either.
HARDWARE REQUIREMENTS
Processor      : Any Processor above 500 MHz.
RAM            : 128 MB.
Hard Disk      : 10 GB.
Compact Disk   : 650 MB.
Input Device   : Standard Keyboard and Mouse.
Output Device  : VGA and High Resolution Monitor.
SOFTWARE SPECIFICATION
Operating System : Windows Family.
Techniques       : JDK 1.5 or higher
Database         : MySQL 5.0
NEIGHBOR DISCOVERY IN WIRELESS NETWORKS WITH MULTIPACKET
RECEPTION
ABSTRACT
Neighbor discovery is one of the first steps in configuring and managing a
wireless network. Most existing studies on neighbor discovery assume a
single-packet reception model, where only a single packet can be received
successfully at a receiver. Neighbor discovery is studied in MPR networks,
which allow packets from multiple simultaneous transmitters to be received
successfully at a receiver. Starting with a clique of n nodes, a simple
Aloha-like algorithm is analyzed to show the time it takes to discover all
neighbors with high probability when up to k simultaneous transmissions
are allowed. Two adaptive neighbor discovery algorithms are designed that
dynamically adjust the transmission probability for each node. The adaptive
algorithms yield an improvement over the Aloha-like scheme for a clique with
n nodes and are thus order-optimal.
INTRODUCTION
Neighbor discovery is one of the first steps in configuring and managing a
wireless network. The information obtained from neighbor discovery, viz. the
set of nodes that a wireless node can directly communicate with, is needed to
support basic functionalities such as medium access and routing.
Furthermore, this information is needed by topology control and clustering
algorithms to improve network performance.
Due to its critical importance, neighbor discovery has received significant
attention, and a number of studies have been devoted to this topic. Most
studies, however, assume a single-packet reception (SPR) model, i.e., a
transmission is successful if and only if there are no other simultaneous
transmissions. The proposed neighbor discovery is based on multipacket
reception (MPR) networks, where packets from multiple simultaneous
transmitters can be received successfully at a receiver. This is motivated by the
increasing prevalence of MPR technologies in wireless networks. For instance,
code division multiple access (CDMA) and multiple-input and multiple-output
(MIMO), two widely used technologies, both support multipacket reception.
EXISTING SYSTEM
 Most studies assume a single-packet reception (SPR) model, i.e., a
transmission is successful if and only if there are no other simultaneous
transmissions.
 In an SPR network, a node is discovered by each of its neighbors if it is the
only node that transmits at a given time instant.
 Randomized/deterministic neighbor discovery algorithms were proposed
in SPR networks.
 Tseng et al. propose three power-saving protocols to schedule
asynchronous node wake-up times in IEEE 802.11-based multi-hop ad
hoc networks, and describe deterministic neighbor discovery schemes in
each of the three protocols.
 A multiuser-detection based approach is used for neighbor discovery.
These schemes require each node to possess a signature as well as know the
signatures of all the other nodes in the network. Further, nodes are
assumed to operate in a synchronous manner.
Disadvantages
 Although multiuser-detection based approaches allow multiple
transmitters to transmit simultaneously, their focus is on using
coherent/noncoherent detection or group testing to identify neighbors
with a high detection ratio and a low false positive ratio, and they do not
provide analytical insights into the time complexity of their schemes.
PROPOSED SYSTEM
 In an MPR network, a node can transmit simultaneously with several other
neighbors, and each of these nodes may be discovered simultaneously by
the receiving nodes.
 Randomization is used as a powerful tool for avoiding centralized control,
especially in settings with little a priori knowledge of network structure,
and it offers extremely simple and efficient algorithms for homogeneous
devices to carry out fundamental tasks like symmetry breaking.
 To propose two adaptive neighbor discovery algorithms, one being
collision-detection based and the other being ID based.
 In both algorithms, a node becomes inactive once it is discovered by its
neighbors, allowing the remaining active nodes to increase their
transmission probability.
 The algorithms are generalized to the cases where the number of neighbors
is not known beforehand or nodes transmit asynchronously, and these
generalizations result in at most a constant factor slowdown in algorithm
performance.
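The adaptive idea in these bullets can be sketched as a small simulation: in each slot, undiscovered nodes in a clique transmit with probability p; if at most k transmit, multipacket reception succeeds, those nodes drop out, and the survivors raise p. The update rule p = k/remaining is a hypothetical stand-in for the paper's adaptive rules:

```python
import random

def adaptive_discovery(n, k, p0=None, seed=0):
    """Simulate MPR neighbor discovery in a clique of n nodes.
    Each slot, every undiscovered node transmits with probability p; if at
    most k nodes transmit, all of them are received (MPR) and become
    inactive, and the survivors raise p (hypothetical rule: p = k/remaining).
    Returns the number of slots until all nodes are discovered."""
    rng = random.Random(seed)
    active = n
    p = p0 if p0 is not None else min(1.0, k / n)
    slots = 0
    while active:
        slots += 1
        tx = sum(rng.random() < p for _ in range(active))
        if 0 < tx <= k:          # successful multipacket reception
            active -= tx
            if active:
                p = min(1.0, k / active)
    return slots
```

Since each successful slot discovers at most k nodes, any run needs at least n/k slots; the adaptive probability keeps the expected number of transmitters near k as nodes drop out.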
Advantages
 Faster neighbor discovery leads to shorter delays.
LITERATURE SUMMARY
 Randomized/deterministic neighbor discovery algorithms were proposed
in SPR networks.
 McGlynn and Borbash propose birthday-like randomized neighbor
discovery algorithms that require synchronization among nodes.
 Tseng et al. propose three power-saving protocols to schedule
asynchronous node wake-up times in IEEE 802.11-based multi-hop ad
hoc networks, and describe deterministic neighbor discovery schemes in
each of the three protocols.
 Zheng et al. provide a more systematic treatment of the asynchronous
wakeup problem and propose a neighbor discovery protocol on top of the
optimal wakeup schedule that they derive.
 Borbash et al. propose asynchronous probabilistic neighbor discovery
schemes for large-scale networks.
 Keshavarzian et al. propose a deterministic neighbor discovery algorithm.
 Dutta and Culler propose an asynchronous neighbor discovery and
rendezvous protocol between a pair of low duty cycling nodes.
 Khalili et al. propose feedback-based neighbor discovery schemes that
operate in fading channels.
 A multiuser-detection based approach is used for neighbor discovery.
These schemes require each node to possess a signature as well as know the
signatures of all the other nodes in the network. Further, nodes are
assumed to operate in a synchronous manner. Although these studies
allow multiple transmitters to transmit simultaneously, their focus is on
using coherent/noncoherent detection or group testing to identify
neighbors with a high detection ratio and a low false positive ratio, and
they do not provide analytical insights into the time complexity of their
schemes.
 There are numerous studies on neighbor discovery when nodes have
directional antennas. The focus in these works is on antenna scanning
strategies for efficient neighbor discovery. There have been several recent
proposals on neighbor discovery in cognitive radio networks. They
determine the set of neighbors for a node as well as the channels that
can be used to communicate among neighbors.
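The birthday-style randomized discovery idea surveyed above can be illustrated with a small simulation. This is a hedged sketch of the general idea only, not McGlynn and Borbash's exact protocol; the function and parameter names are illustrative:

```python
import random

def birthday_discovery_slots(p=0.5, max_slots=10_000, rng=None):
    """Simulate two slot-synchronized nodes. In each slot a node
    transmits with probability p and listens otherwise. Discovery
    happens in the first slot where exactly one node transmits
    (the listener hears it). Returns the number of slots used."""
    rng = rng or random.Random()
    for slot in range(1, max_slots + 1):
        a_tx = rng.random() < p
        b_tx = rng.random() < p
        if a_tx != b_tx:  # exactly one transmitter, one listener
            return slot
    return max_slots  # no discovery within the horizon

# Per-slot success probability is 2*p*(1-p), so the expected number of
# slots is 1/(2*p*(1-p)); for p = 0.5 this is 2 slots on average.
```

The per-slot success probability 2p(1-p) is maximized at p = 0.5, which is why such protocols balance transmitting and listening evenly between a pair of nodes.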
HARDWARE REQUIREMENTS
Processor : Any Processor above 500 MHz.
Ram : 128 MB.
Hard Disk : 10 GB.
Compact Disk : 650 MB.
Input device : Standard Keyboard and Mouse.
Output device : VGA and High Resolution Monitor.
SOFTWARE SPECIFICATION
Operating System : Windows Family.
Techniques : JDK 1.5 or higher
Database : MySQL 5.0
TRUTHFUL GREEDY MECHANISMS FOR DYNAMIC VIRTUAL
MACHINE PROVISIONING AND ALLOCATION IN CLOUDS
ABSTRACT
A major challenging problem for cloud providers is designing efficient
mechanisms for virtual machine (VM) provisioning and allocation. Such
mechanisms enable the cloud providers to effectively utilize their available
resources and obtain higher profits. Recently, cloud providers have introduced
auction-based models for VM provisioning and allocation which allow users to
submit bids for their requested VMs. The dynamic VM provisioning and allocation
problem for the auction-based model is formulated as an integer program
considering multiple types of resources. Truthful greedy and optimal mechanisms
are designed for the problem such that the cloud provider provisions VMs based on
the requests of the winning users and determines their payments.
AIM
The aim of this study is to facilitate dynamic provisioning of multiple types of
resources based on the users’ requests. To design truthful greedy mechanisms that
solve the VM provisioning and allocation problem in the presence of multiple
types of resources (e.g., cores, memory, storage, etc.). The mechanisms allocate
resources to the users such that the social welfare (i.e., the sum of users’ valuations
for the requested bundles of VMs) is maximized.
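The social-welfare objective stated in the aim can be written as an integer program along the following lines. This is a generic multi-unit combinatorial sketch reconstructed here as an assumption; the notation is illustrative rather than the work's exact formulation:

```latex
\max \sum_{i=1}^{n} b_i x_i
\quad \text{subject to} \quad
\sum_{i=1}^{n} \Big( \sum_{m=1}^{M} k_{im} w_{rm} \Big) x_i \le C_r \quad \forall r,
\qquad x_i \in \{0, 1\} \quad \forall i,
```

where $b_i$ is user $i$'s bid for her requested bundle, $k_{im}$ is the number of type-$m$ VM instances in that bundle, $w_{rm}$ is the amount of resource $r$ (e.g., cores, memory, storage) provided by one type-$m$ instance, and $C_r$ is the provider's total capacity of resource $r$. Setting $x_i = 1$ selects user $i$ as a winner.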
INTRODUCTION
The number of enterprises and individuals that are outsourcing their workloads to
cloud providers has increased rapidly in recent years. Cloud providers form a large
pool of abstracted, virtualized, and dynamically scalable resources allocated to
users based on a pay-as-you-go model. These resources are provided as three
different types of services: infrastructure as a service (IaaS), platform as a service
(PaaS), and software as a service (SaaS). IaaS provides CPUs, storage, networks
and other low level resources, PaaS provides programming interfaces, and SaaS
provides already created applications.
IaaS providers such as Microsoft Azure and Amazon Elastic Compute Cloud
(Amazon EC2) offer four types of VM instances: small (S), medium (M), large
(L), and extra large (XL). Cloud providers face many decision problems when
offering IaaS to their customers. One of the major decision problems is how to
provision and allocate VM instances. Cloud providers provision their resources
either statically or dynamically, and then allocate them in the form of VM
instances to their customers. In the case of static provisioning, the cloud provider
pre-provisions a set of VM instances without considering the current demand from
the users, while in the case of dynamic provisioning, the cloud provider provisions
the resources by taking into account the current users’ demand. Due to the variable
load demand, dynamic provisioning leads to a more efficient resource utilization
and ultimately to higher revenues for the cloud provider.
EXISTING SYSTEM
Game theory
The resource allocation problem is formulated as a task scheduling problem with
QoS constraints, and a game-theoretic approximate solution is proposed.
However, this assumes that the cloud provider knows the execution time
of each subtask, which is unrealistic in cloud environments.
An efficient truthful-in-expectation mechanism is designed for resource allocation
in clouds where only one type of resource was considered.
A stochastic mechanism is designed to allocate resources among selfish VMs in a
non-cooperative cloud environment.
These studies considered only one type of VM instance; thus, the problem they
solved is a one-dimensional provisioning problem.
Mechanism design theory has been employed in designing truthful allocation
mechanisms in several areas. In particular, there is a large body of work in
spectrum auctions, where a government or a primary license holder sells the right
to use a specific frequency band in a specific area via auction-based mechanisms.
In these studies, only one type of resource (i.e., the spectrum) is available for
allocation.
Greedy mechanism
A truthful mechanism that groups the users based on their spatial conflicts has
been proposed. The greedy mechanism is based on the ordering of the groups’
bids.
A simple greedy metric orders the users based on the ratio of their bids to the
number of requested channels.
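That ratio-based ordering can be sketched as follows. The names and the toy allocation loop are assumptions for illustration, not the cited mechanism's actual code:

```python
def greedy_order(users):
    """Order users by bid density: bid / number of requested channels.
    `users` is a list of (bid, num_channels) tuples; higher density first."""
    return sorted(users, key=lambda u: u[0] / u[1], reverse=True)

def greedy_allocate(users, capacity):
    """Grant each user's full channel request in density order while
    enough channels remain (users are single-minded: all or nothing)."""
    winners = []
    for bid, need in greedy_order(users):
        if need <= capacity:
            winners.append((bid, need))
            capacity -= need
    return winners
```

For example, with users bidding (10, 2), (9, 3), and (4, 1), the densities are 5, 3, and 4, so the second user is considered last even though her absolute bid is higher than the third user's.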
Design of truthful mechanisms
The design of truthful mechanisms for resource allocation in clouds has been
investigated by Zaman and Grosu. They proposed a combinatorial auction based
mechanism, CA-GREEDY, to allocate VM instances in clouds. They showed that
CA-GREEDY can efficiently allocate VM instances in clouds generating higher
revenue than the currently used fixed price mechanisms.
Disadvantages
Previous work considered only one type of resource and did not take into account
the scarcity of each resource type when making the VM instance provisioning and
allocation decisions.
In the greedy mechanism, VM instances cannot be simultaneously assigned to the
users; thus, that mechanism cannot be used to solve the VM allocation problem.
However, the proposed mechanisms incorporate bid-density metrics that not only
consider the structure of VMs, but also take into account the scarcity of resources.
PROPOSED SYSTEM
 Design of mechanisms for auction-based settings is considered.
 In the auction-based models, users can obtain their requested resources at
lower prices than in the case of the fixed-price models.
 Also, the cloud providers can increase their profit by allowing users to bid
on unutilized capacity.
 In this setting, users are allowed to request bundles of VM instances.
 A set of users is considered and a set of items (VM instances), where each
user bids for a subset of items (bundle).
 Since several VM instances of the same type are available to users, the
problem can be viewed as a multi-unit combinatorial auction. Each user has
a private value (private type) for her requested bundle.
 The users are single-minded: each user is either assigned her entire
requested bundle of VM instances and pays for it, or she obtains no bundle
and pays nothing.
 The users are also selfish in the sense that they want to maximize their own
utility. It may be beneficial for them to manipulate the system by declaring a
false type (i.e., different bundles or bids from their actual request).
 One of the key properties of a provisioning and allocation mechanism is to
give incentives to users so that they reveal their true valuations for the
bundles.
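The bullets above can be sketched as a greedy allocation with critical-value payments. This is a minimal illustration assuming a Euclidean bid-density metric and hypothetical names; the work's actual mechanisms and metrics may differ:

```python
from math import sqrt

def density(bid, demand):
    """Bid density: bid over the Euclidean norm of the resource-demand
    vector (one assumed scarcity-aware metric; others are possible)."""
    return bid / sqrt(sum(d * d for d in demand))

def greedy_winners(bids, capacity):
    """bids: {user: (bid, demand_vector)}. Allocate full bundles in
    decreasing density order while every resource still fits."""
    cap = list(capacity)
    winners = []
    for u in sorted(bids, key=lambda u: density(*bids[u]), reverse=True):
        bid, demand = bids[u]
        if all(d <= c for d, c in zip(demand, cap)):
            winners.append(u)
            cap = [c - d for c, d in zip(cap, demand)]
    return winners

def critical_payment(user, bids, capacity, eps=1e-6):
    """Binary-search the smallest bid at which `user` still wins;
    losers pay nothing. A monotone allocation plus critical-value
    payments is the standard route to truthfulness for
    single-minded bidders."""
    if user not in greedy_winners(bids, capacity):
        return 0.0
    lo, hi = 0.0, bids[user][0]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        trial = dict(bids)
        trial[user] = (mid, bids[user][1])
        if user in greedy_winners(trial, capacity):
            hi = mid
        else:
            lo = mid
    return hi
```

Because the allocation is monotone (raising a bid never turns a winner into a loser) and each winner pays her critical value rather than her bid, no single-minded user can gain by declaring a false type.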
Advantages
 Proposed mechanisms allow dynamic provisioning of VMs, and do not
require pre-provisioning the VMs.
 To address the problem of VM provisioning and allocation in clouds in the
presence of multiple types of resources.
 Proposed mechanisms consist of determining the VM provisioning and
allocation and the payments for each user.
 Proposed greedy mechanisms provide very fast solutions making them
suitable for execution in short time window auctions.
 To design truthful greedy mechanisms despite the fact that greedy
algorithms, in general, do not necessarily satisfy the properties required to
guarantee truthfulness.
 In doing so, the allocation and payment determination of the proposed
mechanisms are designed to satisfy the truthfulness property.
 Cloud providers can fulfill dynamic market demands efficiently.
 A key property of proposed mechanisms is the consideration of multiple
types of resources when provisioning the VMs, which is the case in real
cloud settings.
HARDWARE REQUIREMENTS
Processor : Any Processor above 500 MHz.
Ram : 128 MB.
Hard Disk : 10 GB.
Compact Disk : 650 MB.
Input device : Standard Keyboard and Mouse.
Output device : VGA and High Resolution Monitor.
SOFTWARE SPECIFICATION
Operating System : Windows Family.
Techniques : JDK 1.5 or higher
Database : MySQL 5.0