
CUSTOMER_CODE
SMUDE
DIVISION_CODE
SMUDE
EVENT_CODE
OCTOBER15_ReExam
ASSESSMENT_CODE BC0041_OCTOBER15_ReExam
QUESTION_TYPE
DESCRIPTIVE_QUESTION
QUESTION_ID
35431
QUESTION_TEXT
Briefly explain Entity Relationship Modeling.
SCHEME OF
EVALUATION
E-R Modeling
Entity Relationship Modeling (ER modeling) is by far the most common
way to express the analytical result of an early stage in the construction
of a new database. E-R Diagrams are the way to achieve this.
Entity relationship diagrams are a way to represent the structure and
layout of a database, and they are frequently used to describe the database
schema. ER diagrams are very useful because they provide a good conceptual
view of any database, regardless of the underlying hardware and
software. An ERD is a model that identifies the concepts or entities that
exist in a system and the relationships between those entities. An ERD is
often used as a way to visualize a relational database: each entity
represents a database table, and the relationship lines represent the keys
in one table that point to specific records in related tables. ERDs may
also be more abstract, not necessarily capturing every table needed
within a database, but serving to diagram the major concepts and
relationships. An ERD of this latter type might, for example, present an
abstract, theoretical view of the major entities and relationships needed
for the management of electronic resources: it could assist the database
design process for an e-resource management system without identifying
every table that would be necessary for an electronic resource
management database.
The Entity-Relationship (ER) model was originally proposed by Peter Chen in
1976 [Chen76] as a way to unify the network and relational database
views. Simply stated, the ER model is a conceptual data model that views
the real world as entities and relationships. A basic component of the
model is the Entity-Relationship diagram, which is used to visually
represent data objects. Since Chen wrote his paper, the model has been
extended, and today it is commonly used for database design. For the
database designer, the utility of the ER model is:
- It maps well to the relational model. The constructs used in the
ER model can easily be transformed into relational tables (sketched below).
- It is simple and easy to understand with a minimum of training.
Therefore, the model can be used by the database designer to
communicate the design to the end user.
In addition, the model can be used as a design plan by the database
developer to implement a data model in a specific database management
software.
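To illustrate the first point, here is a minimal sketch, using Python's
sqlite3 module, of how two entity sets and a many-to-many relationship set
could be transformed into relational tables. The entity names (Student,
Course, Enrolls) and their attributes are illustrative assumptions, not part
of the text above.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Entity sets map to tables; attributes map to columns, the key attribute
# to a PRIMARY KEY.
cur.execute("CREATE TABLE Student (student_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE Course (course_id INTEGER PRIMARY KEY, title TEXT)")

# A many-to-many relationship set maps to its own table whose key is the
# pair of participating entity keys, each a foreign key to an entity table.
cur.execute("""
CREATE TABLE Enrolls (
    student_id INTEGER REFERENCES Student(student_id),
    course_id INTEGER REFERENCES Course(course_id),
    PRIMARY KEY (student_id, course_id)
)
""")
conn.commit()
conn.close()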
QUESTION_TYPE
DESCRIPTIVE_QUESTION
QUESTION_ID
73651
QUESTION_TEXT
Explain the advantages of Database system and also explain the
functions of DBMS. What is the Role of DBA?
SCHEME OF
EVALUATION
Advantages of Database System:
a. Minimal data redundancy
b. Data consistency
c. Data integration
d. Data sharing
e. Data independence (3 marks)
Functions of a DBMS:
a. Data definition
b. Data manipulation
c. Data security and integrity
d. Data recovery and concurrency
e. Performance (3 marks)
Role of a DBA:
a. Defining the schema
b. Liaising with users
c. Defining security and integrity checks
d. Monitoring performance
e. Defining backup/recovery procedures (4 marks)
QUESTION_TYPE
DESCRIPTIVE_QUESTION
QUESTION_ID
73652
QUESTION_TEXT
Explain the 3-level architecture of a database and also explain the concept
of data independence.
SCHEME OF
EVALUATION
i. A commonly used approach to viewing data is the three-level
architecture suggested by ANSI/SPARC. The three levels of the
architecture are three views of data:
- External: individual user view
- Conceptual: community user view
- Internal: physical or storage view.
(2 marks)
ii. External view: This is the view that the individual user of the
database has. This view is often a restricted view of the database and
the same database may provide a number of different views for
different classes of users. In general, the end users and even the
application programmers are only interested in a subset of the database.
For example, a department head may only be interested in the
departmental finances and student enrolments but not library
information. The librarian would not be expected to have any interest in
the information about academic staff.
(2 marks)
iii. Conceptual view: It is the information model of the enterprise and
contains a view of the whole enterprise without any concern for the
physical implementation. The conceptual view is the overall community
view of the database, and it includes all the information that is going to
be represented in the database. The conceptual view is defined by the
conceptual schema, which includes definitions of each of the various
types of data.
(2 marks)
iv. Internal view: This view is about the actual physical storage of data.
It tells us what data is stored in the database and how. At least the
following aspects are considered at this level:
a. Storage allocation, e.g. B-trees, hashing, etc.
b. Access paths, e.g. specification of primary and secondary keys,
indexes, pointers and sequencing
c. Miscellaneous, e.g. data compression and encryption techniques,
optimization of the internal structures. (2 marks)
v. Data independence: The separation of the conceptual view from
the internal view enables us to provide a logical description of the
database without the need to specify physical structures. This is often
called physical data independence.
Separating the external views from the conceptual view enables us to
change the conceptual view without affecting the external views. This
separation is sometimes called logical data independence.
(2 marks)
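A minimal sketch of these ideas, using Python's sqlite3 module: a
hypothetical Student table stands in for the conceptual schema, and SQL
views act as external views giving each class of users a restricted subset
(the table, columns and sample row are illustrative assumptions). Either
view can be redefined without touching the base table, and the base table
can be reorganised physically without changing either view.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Conceptual level: the community-wide schema, free of storage detail.
cur.execute("""
CREATE TABLE Student (
    student_id INTEGER PRIMARY KEY,
    name TEXT,
    dept TEXT,
    fees_paid REAL,
    library_id TEXT
)
""")
cur.execute("INSERT INTO Student VALUES (1, 'Asha', 'CS', 1200.0, 'L-042')")

# External level: restricted views for different classes of users.
# A department head sees finances and enrolment; the librarian sees library data.
cur.execute("CREATE VIEW DeptHeadView AS "
            "SELECT student_id, name, dept, fees_paid FROM Student")
cur.execute("CREATE VIEW LibrarianView AS "
            "SELECT student_id, name, library_id FROM Student")

print(cur.execute("SELECT * FROM DeptHeadView").fetchall())
print(cur.execute("SELECT * FROM LibrarianView").fetchall())
conn.close()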
QUESTION_TYPE
DESCRIPTIVE_QUESTION
QUESTION_ID
110511
QUESTION_TEXT
Write a detailed note on Clustering and Indexing.
SCHEME OF
EVALUATION
Clustering: Clustering is the method of storing logically related records
physically together.
Assume that there are 100 customer records, each of size 128 bytes, and
that the typical size of a page retrieved by the File Manager is
1 KB. If there is no clustering, it can be assumed that the customer
records are stored at random physical locations. In the worst-case
scenario, each record may be placed in a different page. Hence a
query to retrieve 100 records with consecutive Customer IDs will
require 100 pages to be accessed, which in turn translates to 100 disk
accesses.
But if the records are clustered, a page can contain 8 records. Hence the
number of pages to be accessed for retrieving 100 consecutive
records will be ceil(100/8) = 13, i.e. only 13 disk accesses will be
required to obtain the query results.
Intra-file clustering – clustered records belong to the same file.
Inter-file clustering – clustered records belong to different files.
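The page-access arithmetic above can be checked with a short Python sketch;
the 128-byte records, 1 KB pages and 100-record query are the assumptions
stated in the example.

from math import ceil

record_size = 128     # bytes per customer record
page_size = 1024      # bytes per page fetched by the File Manager (1 KB)
num_records = 100     # consecutive customer records to retrieve

records_per_page = page_size // record_size                  # 8
unclustered_accesses = num_records                           # worst case: one page per record
clustered_accesses = ceil(num_records / records_per_page)    # ceil(100/8) = 13

print(records_per_page, unclustered_accesses, clustered_accesses)   # 8 100 13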
QUESTION_TYPE
DESCRIPTIVE_QUESTION
QUESTION_ID
110513
QUESTION_TEXT
Discuss what is meant by each of the following terms: Database
integrity, Database confidentiality, Role-based access control, Flow
control and Public Key Encryption.
SCHEME OF
EVALUATION
Database Integrity: It refers to the requirement that information be
protected from improper modification. Modification of data includes
creation, insertion, modification, changing the status of data and
deletion. Integrity is lost if unauthorized changes are made to the
data by either intentional or accidental acts. - 2 Marks
Database confidentiality: It refers to the protection of data from
unauthorized disclosure. The impact of unauthorized disclosure of
confidential information can range from violation of the Data
Privacy Act to jeopardizing national security. - 2 Marks
Role-based Access Control: Role-based access control (RBAC) emerged
in the 1990s as a proven technology for managing and enforcing
security in large-scale, enterprise-wide systems. Its basic notion is
that permissions are associated with roles, and users are assigned to
appropriate roles. Roles can be created using the CREATE ROLE
and DESTROY ROLE commands.
RBAC appears to be a viable alternative to traditional discretionary and
mandatory access controls; it ensures that only authorized users are
given access to certain data or resources. - 2 Marks
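The basic RBAC notion, permissions attached to roles and users assigned to
roles, can be sketched in a few lines of Python; the role, user and
permission names below are purely illustrative and do not correspond to any
particular DBMS.

# Permissions are attached to roles, never directly to users.
role_permissions = {
    "clerk": {"read_customer"},
    "manager": {"read_customer", "update_customer"},
}

# Users acquire permissions only through the roles they are assigned.
user_roles = {
    "alice": {"manager"},
    "bob": {"clerk"},
}

def is_authorized(user, permission):
    # A user holds a permission if at least one of their roles grants it.
    return any(permission in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

print(is_authorized("alice", "update_customer"))   # True
print(is_authorized("bob", "update_customer"))     # False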
Flow control: Flow control regulates the distribution or flow of
information among accessible objects. A flow between object X and
object Y occurs when a program reads values from X and writes
values into Y. Flow controls check that information contained in
some objects does not flow explicitly or implicitly into less
protected objects. Thus, a user cannot get indirectly in Y what he or
she cannot get directly from X.
A flow policy specifies the channels along which information is allowed
to move. - 2 Marks
Public-Key Encryption
In 1976, Diffie and Hellman proposed a new kind of cryptosystem, which
they called public key encryption. Public key algorithms are based
on mathematical functions rather than operations on bit patterns.
They also involve the use of two separate keys, in contrast to
conventional encryption, which uses only one key. The two keys
used for public key encryption are referred to as the public key and
the private key. Invariably, the private key is kept secret, but it is
referred to as private key rather than secret key to avoid confusion
with conventional encryption. As the name suggests, the public key
of the pair is made public for others to use, whereas the private key
is known only to its owner. - 2 Marks
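The two-key idea can be illustrated with a toy, textbook-sized RSA
computation in Python; the primes, exponents and message below are
illustrative only and far too small to be secure.

# Key generation from two small primes (textbook values, not secure).
p, q = 61, 53
n = p * q                   # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent: (e, n) is the public key
d = pow(e, -1, phi)         # 2753: private exponent, kept secret (Python 3.8+)

# Anyone can encrypt with the public key; only the private-key holder can decrypt.
message = 65
ciphertext = pow(message, e, n)      # 2790
recovered = pow(ciphertext, d, n)    # 65

print(ciphertext, recovered)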
QUESTION_TYPE
DESCRIPTIVE_QUESTION
QUESTION_ID
110514
QUESTION_TEXT
Explain the Desirable properties of Transactions and also explain
COMMIT and ROLLBACK operations.
SCHEME OF
EVALUATION
Desirable properties of Transactions
1. Atomicity: A transaction is an atomic unit of processing; it is either
performed in its entirety or not performed at all. - 1.5 Marks
2. Consistency preservation: A transaction is consistency preserving
if its complete execution takes the database from one consistent state to
another. - 1.5 Marks
3. Isolation: The execution of a transaction should not be interfered
with by any other transactions executing concurrently. - 1.5 Marks
4. Durability or permanency: The changes applied to the database by
a committed transaction must persist in the database. These changes
must not be lost because of any failure. - 1.5 Marks
COMMIT
The COMMIT operation indicates successful completion of a
transaction, which means that the database is in a consistent state
and all updates made by the transaction can now be made
permanent. If a transaction successfully commits, the system
guarantees that its updates will be permanently installed in the
database even if the system crashes immediately after the COMMIT.
- 2 Marks
ROLLBACK
The ROLLBACK operation indicates that the transaction has been
unsuccessful, which means that all updates done by the transaction
till then need to be undone to bring the database back to a consistent
state. To help undo the updates, a system log or journal is
maintained by the transaction manager. - 2 Marks
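A minimal sketch of COMMIT and ROLLBACK behaviour, using Python's sqlite3
module; the Account table and balances are illustrative assumptions, not
drawn from the text above.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Account (id INTEGER PRIMARY KEY, balance REAL)")
cur.execute("INSERT INTO Account VALUES (1, 100.0)")
conn.commit()    # COMMIT: the insert is now permanent

try:
    cur.execute("UPDATE Account SET balance = balance - 500 WHERE id = 1")
    raise RuntimeError("simulated failure before the transaction completes")
except RuntimeError:
    conn.rollback()    # ROLLBACK: the update is undone, restoring the consistent state

print(cur.execute("SELECT balance FROM Account WHERE id = 1").fetchone())   # (100.0,)
conn.close()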