
CUSTOMER_CODE
SMUDE
DIVISION_CODE
SMUDE
EVENT_CODE
JAN2016
ASSESSMENT_CODE MCA3020_JAN2016
QUESTION_TYPE
DESCRIPTIVE_QUESTION
QUESTION_ID
18248
QUESTION_TEXT
What are the Advantages of Transaction Processing System? Describe
the methods used for Resolving Deadlocks.
SCHEME OF
EVALUATION
Advantages of Transaction Processing System:
1. Each business firm needs a system for collecting, accumulating and
retrieving data and statistics so that it can function competently. A
transaction processing system fulfils this requirement. Every single
transaction is managed and monitored by the transaction processing
system, so that the system can identify whether the entered data is
valid. Once the gathered information clears this check, it is
accumulated and processed in the system.
2. The process of monitoring all transactions can be made simpler by
means of an organized transaction processing system. This saves a firm
an enormous amount of time, energy, money and effort. Furthermore, all
the entered data is kept processed, and all transactions performed are
monitored and recorded in an organized and protected manner. Only
approved people have access to its functions.
3. With a consistent transaction processing system, any firm can
successfully please its customers and gain more satisfied customers.
Customers rely on the consistency of a company. In future, they will
stand by a firm that they can trust with their earnings, money and
personal information.
[2 Marks x 3 = 6 Marks]
Methods used for Resolving Deadlocks:
1. Deadlock Detection by Timeout: When a deadlock occurs, it is
usually not possible to resolve it in such a way that the transactions
involved can continue. Therefore, in such a situation a transaction may
need to be rolled back (aborted and restarted). Timeout is the simplest
way of identifying and resolving deadlocks. A limit is placed on the
active period of a transaction, and the transaction is rolled back if it
goes beyond this time.
2. The Waits-For Graph: The waits-for graph can effectively deal with
deadlocks that take place because transactions wait for locks held by
one another. The waits-for graph records which transactions are waiting
for locks held by other transactions. It can be used either to detect
deadlocks after they have formed or to prevent them from occurring at
all; prevention requires us to maintain the waits-for graph at all times
and to refuse any action that would form a cycle in the graph (a sketch
of cycle detection on such a graph follows below).
[2 Marks x 2 = 4 Marks]
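As an illustration only (not part of the original scheme of evaluation), here is a minimal Python sketch of detecting a deadlock as a cycle in a waits-for graph; the transaction IDs and edge sets are hypothetical.

# Minimal sketch: a deadlock shows up as a cycle in the waits-for graph.
def has_deadlock(waits_for):
    """Return True if the waits-for graph contains a cycle.

    waits_for maps a transaction ID to the set of transaction IDs
    whose locks it is currently waiting for.
    """
    WHITE, GREY, BLACK = 0, 1, 2            # unvisited / on the DFS stack / finished
    colour = {t: WHITE for t in waits_for}

    def visit(t):
        colour[t] = GREY
        for other in waits_for.get(t, ()):
            state = colour.get(other, WHITE)
            if state == GREY:               # back edge: a cycle, hence a deadlock
                return True
            if state == WHITE and visit(other):
                return True
        colour[t] = BLACK
        return False

    return any(colour[t] == WHITE and visit(t) for t in waits_for)

# T1 waits for T2, T2 waits for T3, T3 waits for T1 -> deadlock.
print(has_deadlock({"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}}))   # True
print(has_deadlock({"T1": {"T2"}, "T2": {"T3"}, "T3": set()}))    # False

Under the timeout method described above, no such graph is needed: the system simply rolls back any transaction whose waiting time exceeds a fixed limit.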
QUESTION_TYPE
DESCRIPTIVE_QUESTION
QUESTION_ID
18249
QUESTION_TEXT
Explain various architectural approaches of OODBMS?
SCHEME OF
EVALUATION
1. Distributed Client Server Approach:
Client processes manage application-specific activities such as the use
and updating of individual objects. These processes may be situated on
the same workstation or on different workstations. Usually a single
server communicates with numerous clients, serving simultaneous
requests for the data managed by that server. Three different
workstation-server architectures have been proposed for OODBMS. They
are as follows:
a. Object Server Approach: An object is considered the unit of transfer
from server to client. Both machines store objects and are capable of
performing methods on objects. Object-level locking is carried out
easily. The main disadvantage of this approach is the overhead of the
server interaction needed to access each object.
b. Page Server Approach: In this approach, a page is the unit of
transfer from server to client. Page-level transfers reduce the overhead
of object access, since a server interaction is not needed every time.
The architecture and implementation of the server can be simplified, as
it only needs to provide backend database services. (A sketch
contrasting these transfer units is given at the end of this answer.)
c. File Server Approach: In this approach, the client processes of the
OODBMS interact with a network file service for reading and writing
database pages. This approach makes the server implementation simpler
because there is no need to manage secondary storage. The main
disadvantage of this approach is that it requires two network
interactions for accessing data. (4 marks)
2. Data Access Mechanism: Assessment of object-oriented DBMS products
should take into account the procedure required to move data from the
secondary storage unit into a consumer application. Usually this
necessitates interaction with the server process, possibly across a
network. Objects read into a consumer's memory may need further
processing. The cost and procedure of releasing locks, and of returning
updated objects to the server, should also be considered. (2 marks)
3. Object Clustering: Transferring units larger than a single object is
done under the assumption that when an application accesses a given
object there is a high probability that it will also access other
related objects. When a number of objects are transferred together,
further server interactions may not be required to serve these further
object accesses. Object clustering can be defined as the ability of an
application to provide this information to the object-oriented DBMS, so
that objects which are usually accessed together can be stored close to
each other and therefore benefit from bulk transfers of data. (2 marks)
4. Heterogeneous Operation: In this approach, an object-oriented DBMS
offers a way for applications to work together by sharing access to a
common group of objects. A typical OODBMS supports numerous concurrent
applications, executed on numerous processors connected through a local
area network. Frequently, the processors will be from different
computer vendors, each with its own data representation formats. To
make applications work together in this kind of environment, data must
be converted to the representation format appropriate for each
processor. (2 marks)
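As an illustration only (not part of the scheme of evaluation), a minimal Python sketch contrasting the unit of transfer in the object-server and page-server approaches; the class names, method names and data layout are hypothetical.

class ObjectServer:
    """Object-server approach: one server interaction per object."""
    def __init__(self, objects):
        self.objects = objects                  # oid -> object

    def fetch(self, oid):
        return self.objects[oid]                # transfers exactly one object


class PageServer:
    """Page-server approach: one interaction brings back a whole page."""
    def __init__(self, pages):
        self.pages = pages                      # page_id -> {oid: object}

    def fetch_page(self, page_id):
        return dict(self.pages[page_id])        # transfers every object on the page

# With an object server, reading three related objects costs three server
# interactions; with a page server, a single page transfer may cover all
# three if they happen to be clustered on the same page.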
QUESTION_TYPE
DESCRIPTIVE_QUESTION
QUESTION_ID
18250
QUESTION_TEXT
What are the differences between Homogeneous and Heterogeneous
Databases?
SCHEME OF
EVALUATION
Homogeneous Databases: When the database technology is the same at
each of the locations and the data at the several locations are also
compatible, the data is known as a homogeneous database. A homogeneous
database makes the sharing of data between different users simpler.
This signifies the design goal for a distributed database. Achieving
this objective requires a very high level of planning during the
planning phase. (1 mark)
The following conditions should exist for a homogeneous database:
1. The operating system used at each of the locations should be the
same, or at least extremely compatible.
2. The data models used at every location should be the same.
3. The database management system used at every location should be the
same, or at least extremely compatible.
4. The data at the different locations should have common definitions
and formats.
(4 marks)
Heterogeneous Databases: In a heterogeneous DBMS, each site may run a
different type of DBMS product, which does not need to be based on the
same underlying data model, and so the system may be made up of RDBMS,
OODBMS and ORDBMS products. (1 mark)
*In a heterogeneous database, communication among the various DBMSs is
needed for translations.
*In order to provide database transparency, users should be able to
make requests in the DBMS language of their local site.
*Data from other sites might involve a variety of hardware, diverse
DBMS products, or a mixture of different hardware and DBMS products.
*The job of locating these data and performing any necessary
translation is a capability of the heterogeneous DBMS.
(4 marks)
QUESTION_TYPE
DESCRIPTIVE_QUESTION
QUESTION_ID
125267
QUESTION_TEXT
State the differences between centralized and distributed database.
SCHEME OF
EVALUATION
5 differences -> 2 marks each. Any 5 differences among those mentioned
below may be written.
QUESTION_TYPE
DESCRIPTIVE_QUESTION
QUESTION_ID
125271
QUESTION_TEXT
Differentiate between BCNF and 3NF.
SCHEME OF
EVALUATION
Comparison of BCNF and 3NF:
We have seen BCNF and 3NF.
It is always possible to obtain a 3NF design without sacrificing a
lossless join or dependency preservation.
If we do not eliminate all transitive dependencies, we may need to use
null values to represent some of the meaningful relationships.
Repetition of information occurs.
These problems can be illustrated with the Banker-schema.
Since banker-name → bname (the branch name), we may want to express
relationships between a banker and his or her branch.
Figure: An instance of the Banker-schema.
The figure shows how we must either have a corresponding value for
customer-name, or include a null.
Repetition of information also occurs.
Every occurrence of the banker's name must be accompanied by the
branch name.
If we must choose between BCNF and dependency preservation, it is
generally better to opt for 3NF.
If we cannot check for dependency preservation efficiently, we either
pay a high price in system performance or risk the integrity of the data.
The limited amount of redundancy in 3NF is then a lesser evil.
To summarize, our goal for a relational database design is
BCNF.
Lossless-join.
Dependency-preservation.
If we cannot achieve this, we accept
3NF
Lossless-join.
Dependency-preservation.
A final point: there is a price to pay for decomposition. When we
decompose a relation, we have to use natural joins or Cartesian products
to put the pieces back together. This takes computational time.
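As an illustration only (not part of the scheme of evaluation), here is a minimal Python sketch that checks whether a small schema such as the Banker-schema is in BCNF or 3NF. The attribute names and the second functional dependency ({customer_name, branch_name} -> banker_name) are assumptions made for the example.

from itertools import combinations

def closure(attrs, fds):
    """Compute the attribute closure of attrs under the given FDs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def candidate_keys(relation, fds):
    """Brute-force the minimal candidate keys of a small relation schema."""
    keys = []
    for size in range(1, len(relation) + 1):
        for subset in combinations(sorted(relation), size):
            if closure(subset, fds) >= relation:
                if not any(k <= set(subset) for k in keys):
                    keys.append(set(subset))
    return keys

def is_bcnf(relation, fds):
    """BCNF: the left side of every non-trivial FD must be a superkey."""
    return all(set(rhs) <= set(lhs) or closure(lhs, fds) >= relation
               for lhs, rhs in fds)

def is_3nf(relation, fds):
    """3NF: left side is a superkey, or every right-side attribute is prime."""
    prime = set().union(*candidate_keys(relation, fds))
    for lhs, rhs in fds:
        if set(rhs) <= set(lhs) or closure(lhs, fds) >= relation:
            continue
        if not (set(rhs) - set(lhs)) <= prime:
            return False
    return True

# Banker-schema = (branch_name, customer_name, banker_name)
relation = {"branch_name", "customer_name", "banker_name"}
fds = [
    (("banker_name",), ("branch_name",)),
    (("customer_name", "branch_name"), ("banker_name",)),
]
print(is_3nf(relation, fds))    # True  -- branch_name is a prime attribute
print(is_bcnf(relation, fds))   # False -- banker_name is not a superkey

This matches the discussion above: under these assumed dependencies the schema is in 3NF but not BCNF, which is why the relationship between a banker and his or her branch is repeated.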
QUESTION_TYPE DESCRIPTIVE_QUESTION
QUESTION_ID
125272
QUESTION_TEXT What is an external sort algorithm? Explain with an example.
SCHEME OF
EVALUATION
A sort can be classified as internal if the records that it is sorting
are in main memory, or external if some of the records that it is
sorting are in auxiliary storage.
External Sorting:
External sorting is used when the size of the file to be sorted is
bigger than that of the available memory. External sorting algorithms
are more device dependent than internal sorting algorithms. External
merge sort is a commonly used algorithm, based on the internal sorting
and external merging principle. In general, this strategy is called
divide and conquer:
1. Divide the data into parts that are as small as possible
2. After applying the sorting algorithm on those parts, remerge them
Those already sorted portions of data are called runs. To start the
process, initial runs must be created. According to this strategy, all
data is split into several smaller parts that fit into main memory
completely:
Assume we have to sort n records of equal length stored in a file. The
system's main memory can hold exactly p records, with p << n. So the
source file is divided into ⌈n / p⌉ parts. After being sorted, these
parts represent the initial runs. At the end, to retrieve the complete
sorted set, those parts need to be merged.
(4 marks)
The external sorting algorithm carries out the operation in the
following two phases:
Sorting Phase
Merging Phase
Sorting Phase:
In this phase, the runs are read into main memory. There the runs are
sorted using an internal sorting algorithm, and the result is written
back to the disk as temporary sorted runs. The number of initial runs
and the size of a run are governed by the number of file blocks and the
available buffer space.
For instance:
If nB = 5 buffer blocks and
b = 1024 file blocks,
then nR = ⌈b / nB⌉ = ⌈1024 / 5⌉ = 205 initial runs, each of size 5 blocks.
Merging Phase:
In this phase, the sorted runs are merged in one or more passes. The
number of runs that can be merged together in each pass is termed the
degree of merging.
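As an illustration only (not part of the scheme of evaluation), a minimal Python sketch of the two phases of an external merge sort, assuming the records are plain text lines and that p lines fit in memory at once. The file names and parameter values are hypothetical.

import heapq
import os
import tempfile

def create_initial_runs(input_path, p):
    """Sorting phase: split the input into runs of at most p records,
    sort each run in memory, and write it to a temporary file."""
    run_paths = []
    with open(input_path) as src:
        while True:
            chunk = [line for _, line in zip(range(p), src)]
            if not chunk:
                break
            chunk.sort()                        # internal sort of one run
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as run:
                run.writelines(chunk)
            run_paths.append(path)
    return run_paths

def merge_runs(run_paths, output_path):
    """Merging phase: a single k-way merge of all sorted runs."""
    files = [open(path) for path in run_paths]
    try:
        with open(output_path, "w") as out:
            out.writelines(heapq.merge(*files))   # streams the runs lazily
    finally:
        for f in files:
            f.close()
        for path in run_paths:
            os.remove(path)

def external_sort(input_path, output_path, p=1000):
    merge_runs(create_initial_runs(input_path, p), output_path)

With p records of memory and n records in the file, the sorting phase produces roughly ⌈n / p⌉ initial runs, as in the worked figures above; a real implementation would also limit the degree of merging to the available buffer blocks and merge over several passes if necessary.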