Projek Sarjana Muda Data Warehouse (Front End)
NIK NURKARTINIE BINTI KAMARUDDIN
This report is submitted in partial fulfillment of the requirements for the
Bachelor of Computer Science
(Database Management)
FACULTY OF INFORMATION AND COMMUNICATION TECHNOLOGY
KOLEJ UNIVERSITI TEKNIKAL KEBANGSAAN MALAYSIA
2006
ABSTRACT
Projek Sarjana Muda (PSM) is a compulsory subject in which each final year student of Kolej Universiti Teknikal Kebangsaan Malaysia (KUTKM) develops an individual project related to an industry problem. PSM for the Bachelor of Computer Science is designed to give students an opportunity to make use of the expertise and knowledge in software development that they have gained in their first three years of study. Projek Sarjana Muda Data Warehouse (Front End) is to be developed for the students, lecturers and the PSM Admin at the Faculty of Information and Communication Technology (FTMK). This project aims to ease report submission and to simplify the way information is shared by students, supervisors and the PSM Admin. The developed system will be a tool for the admin to publish information about PSM and to simplify the submission and download process. Thus, users are able to access the information, submit reports, give comments or suggestions and learn about PSM. Apart from that, the functions included in the system cater to the needs of the admin to update, delete, save and upload information.
ABSTRAK
Projek Sarjana Muda (PSM) is a compulsory subject for final year students of Kolej Universiti Teknikal Kebangsaan Malaysia, in which each student develops an individual project related to an industry or current problem. PSM for the Bachelor of Computer Science gives students the opportunity to apply the software development skills and knowledge acquired in their first three years of study. This Projek Sarjana Muda Data Warehouse (Front End) is intended for the use of students, lecturers and the PSM committee (AJK) at the Faculty of Information and Communication Technology. The project aims to simplify the coursework submission process and to ease the sharing of information among students, lecturers and the PSM AJK. The system to be built allows the PSM AJK to publish information about PSM and simplifies coursework submission. Users can thereby obtain information and carry out transactions such as submitting coursework, making suggestions or complaints, and learning about PSM through the system. The PSM committee can update, save, add and delete information.
CHAPTER 1
INTRODUCTION
1.1 Project Background

PSM Data Warehousing (Front End) is a project developed for Projek Sarjana Muda (PSM). It is implemented to meet the requirements of the Faculty of Information and Communication Technology (FTMK). Besides, it helps to solve the business problems and enhance the business performance of FTMK.

The system will run internally on a local host or server. Its users are the admin, students and supervisors. The admin is assigned to maintain the PSM information; he or she can update, add and delete information, and also check the status of students' PSM submissions. The admin can also view and reply to comments and suggestions from students. Students can access this system to see the titles of previous students' reports, to get information about PSM, and to find out supervisors' departments and areas of interest. Students can submit their report in PDF format and a softcopy of the system in WinZip format online, and can also give comments and suggestions on PSM projects. Supervisors can access the system to get information on PSM, to see the list of areas of interest, to check the status of their students' PSM submissions, and to submit comments and suggestions.

The application will be built using standard English that is simple and easily understood. The interface will follow the KUTKM standard in terms of screen layout, background color, font size and font color for headings and contents.
1.2 Problem Statement

No systematic system for PSM project storage
- There is no systematic system for keeping PSM projects; all submitted reports are kept in one room together with diploma projects. If there is no space left to keep all the reports, the AJK must find another room for the new reports.

No indexing
- To find a student report, the AJK must search one by one, as there is no indexing method to speed up the searching process.

Limited space
- The space in a single room is limited, so another room has to be found to keep all the new reports, while FTMK alone needs a great deal of space.

No review mechanism for previous project titles
- It is difficult to check whether a student's proposed PSM title has already been done by a previous student.

Difficult to obtain information manually
- Softcopies cannot be downloaded because there is no online source.
- If a student wants more information on PSM, the student has to see a supervisor or the PSM AJK.
1.3 Objective

- To decrease time, workforce and errors, and to increase work quality in PSM management.
- To create an indexing method that eases the information retrieval process.
- To replace the current method of PSM report storage (to save space).
- To ease browsing of previous PSM report topics.
- To simplify the submission and download process.
1.4 Scope

The PSM DW administrator (AJK PSM), students and supervisors are the specific users of PSM DW. For the administrator, the system is a tool to handle all data, content and security management. Through this system, students and supervisors are able to retrieve information from the website, register online, submit reports, and access and view information about PSM. The overall system can be divided into three different module groups: the administrator's modules, the supervisor's modules and the student's modules, as summarized in Table 1.1 below.
Table 1.1: Project Scope

User: Administrator (secretary)
  Module: Maintain system and activity information
    Description: Allows the administrator to delete information for a specific period, to clean up the database. Provides functionalities for staff and students to add new, update, upload and delete information.
  Module: Check submission status
    Description: The admin can check the status of student submissions.
  Module: Change password
    Description: The admin can change the password. The password is processed automatically and the information is stored in the database.

User: Supervisor
  Module: View and access information
    Description: Retrieving information about PSM, areas of interest, lecturer departments, the E-panduan and supervisor tasks in the web-based application.
  Module: Check submission status
    Description: The supervisor can check the status of his or her students' PSM submissions.
  Module: Comment and suggestion
    Description: The supervisor can give comments and suggestions about PSM.
  Module: Change password
    Description: The supervisor can change the password. The password is processed automatically and the information is stored in the database.

User: Student
  Module: View and access information
    Description: Retrieving information about PSM, activities, the E-panduan, areas of interest, lecturer departments and supervisor tasks in the web-based application. The student can search previous titles by abstract, by keyword and by year.
  Module: Comment and suggestion
    Description: The student can give comments and suggestions about PSM.
  Module: Project submission
    Description: The student can submit the report in PDF format and a softcopy of the system in WinZip format online.
  Module: Change password
    Description: The student can change the password. The password is processed automatically and the information is stored in the database.
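As a rough illustration only, the modules in Table 1.1 imply a handful of underlying tables. The following minimal SQL Server sketch shows one possible backing schema; every table and column name is an assumption introduced here for illustration, not the system's actual design.

    -- Illustrative backing schema for the modules in Table 1.1 (assumed names).
    CREATE TABLE Users (
        UserID    INT IDENTITY(1,1) PRIMARY KEY,
        UserName  VARCHAR(50) NOT NULL UNIQUE,
        Password  VARCHAR(64) NOT NULL,   -- processed automatically, then stored
        Role      VARCHAR(13) NOT NULL
            CHECK (Role IN ('Administrator', 'Supervisor', 'Student'))
    );

    CREATE TABLE Submissions (
        SubmissionID INT IDENTITY(1,1) PRIMARY KEY,
        StudentID    INT NOT NULL REFERENCES Users(UserID),
        ReportPDF    VARCHAR(255),        -- path to the uploaded PDF report
        SoftcopyZip  VARCHAR(255),        -- path to the WinZip softcopy archive
        SubmittedOn  DATETIME DEFAULT GETDATE()
    );

    CREATE TABLE Comments (
        CommentID   INT IDENTITY(1,1) PRIMARY KEY,
        AuthorID    INT NOT NULL REFERENCES Users(UserID),
        CommentText VARCHAR(1000) NOT NULL,
        PostedOn    DATETIME DEFAULT GETDATE()
    );

Under this sketch, checking a student's submission status reduces to querying Submissions for that student's rows.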
1.5 Project Significance

The Projek Sarjana Muda Data Warehousing system will be developed for the PSM Unit of Kolej Universiti Teknikal Kebangsaan Malaysia (KUTKM). The system can be used by the administrator, supervisors and students.

A user-friendly website and system will be delivered to the users. The functions and steps on both the website and the system are simple and easy to use.

Nowadays, the internet is used widely by Malaysians, and there are many Malaysian users online. With the development of this system, online users inside or outside KUTKM can download PSM materials easily.

While developing the system, a website will be built to distribute PSM information. The website will have an attractive, user-friendly interface, and information on PSM will be published on it.

This system can decrease the time, workforce and errors in PSM management while increasing work quality. The system provides an indexing method that eases information retrieval, and it simplifies the submission and download process.
1.6 Expected Output

At the final stage of the project, the features expected in the Projek Sarjana Muda Data Warehousing system are: information viewing; registration; submission of the report, softcopy report and system to the administrator; list checking; the E-panduan; updating and uploading of information; PSM activity information; access to previous topics; and sending comments and suggestions about PSM. A general introduction to PSM, the organization, and detailed information on activities, schedules, the administrator and supervisor tasks will be organized in the web-based application. In this way, the users (admin, supervisors and students) can gain a better understanding of the PSM flow. To register successfully, a user needs to fill in the registration form by providing the required information. Upon registration, the user will be given a password. The password is the identity of the user and must be specified when logging in. Students can submit the report in PDF format together with the softcopy of the report and the system. Students and supervisors can view and access information about PSM (titles, the E-panduan, abstracts, activities, supervisor tasks and others). Based on the user's selection, the system will direct comments and suggestions to the appropriate part.

The administrator features minimize the time taken to manage the process, so it can be more efficient and orderly. Records and information on supervisors, students, titles and the PSM flow can be well managed, which reduces or eliminates redundancy. The supervisor replacement process will also be well managed. Additionally, the maintain-system-and-activity-information feature allows the admin to add new information, update existing information, check status and delete information from the database simply by filling in a form and pressing the corresponding button. Finally, the system will be integrated with the Crystal Reports software to generate student results and result charts. These features will function by displaying the report once the administrator fills in the form and clicks the button.
1.7 Conclusion

This system will be known as PSM Data Warehousing. The FTMK (PSM Unit) will be using this system. The system is a web-based application and will be developed with PHP as the server-side scripting language and SQL Server as the database server. Tools are typically computer programs that make it easy to use and benefit from the techniques and to follow the guidelines of the overall development methodology. Macromedia Dreamweaver and Adobe Photoshop 7.0 will be used as tools for the development of this system. A survey was carried out as one of the techniques to determine the purpose of and the needs for developing this system; it included observation and interviews, through which the problems of using the current conventional system were identified. Several functions are included for ease of use, and user-friendly characteristics are considered in the development of this system. The System Development Life Cycle will be used as the solving method. Windows 2003/Windows XP will be the platform to run this system, which can be run on either a local host or a server.
CHAPTER II
LITERATURE REVIEW AND PROJECT METHODOLOGY
2.1 Introduction

Fact finding is the formal process of collecting information about systems, requirements and preferences. Fact finding is most crucial to the systems planning and systems analysis phases. It helps in learning about the vocabulary, problems, opportunities, constraints and requirements of a system.

Interviews, research and literature study are the fact-finding techniques used during the early stages of the system planning and system analysis phases in order to collect the related information. A wide range of information resources needs to be consulted during research and literature study. The informal sources include contact with peers, colleagues, the supervisor and the users of the system; the formal sources include books, journals, research papers, encyclopedias, newspapers, magazines, handbooks, theses, bibliographies and the World Wide Web (WWW). Exploring the Internet and the WWW provides an immeasurable amount of information and is a good source of information. Literature study means finding, in the literature, information related to the study undertaken in this thesis. The purpose of a literature study is to convey to the researcher what knowledge and ideas have been established on a topic, how the subject has been studied previously, and what flaws and gaps have been highlighted in previous research. As a whole, the literature review draws on the knowledge, culture, methodology and theories of the topic.
2.2 Fact Finding
2.2.1 About "Projek Sarjana Muda"
It is compulsory for students at KUTKM to pass Projek Sarjana Muda before being awarded the degree or diploma. Students select a project title and, within a specified period of time, are exposed to an actual working project environment in which they encounter many new problems and challenges. The project is built to solve the identified problems based on the collected facts and to reach the stated objectives, together with the benefits the project brings. Projek Sarjana Muda (PSM) gives students the opportunity to apply their knowledge by developing a project individually. Students are exposed to industry-related projects and problems and to basic analysis. Finally, through the final project a student must be able:

- To identify and define industry-related problems.
- To carry out basic analysis, such as literature analysis, and to choose the suitable method for the analysis.
- To develop a project using project management methods.
- To create and present a project.
- To present and write a formal report.
- To work in the industry with minimum practical training.
- To continue education to a higher level.
2.2.2 Task Scope

- To make sure each title fulfils the specification and the required project level.
- To distribute supervision tasks among the teaching personnel according to numbers and task load.
- To make sure each necessity, such as labs and apparatus for students' needs, has been fulfilled.
- To update the evaluation forms.
- To monitor the supervision carried out by each supervisor throughout the project.
- To organize grading consistency and grade clearance.
2.2.3 AJK Projek for PSM

Leader: Rusnida Romli
Secretary: Zeratul Izzah Mohd Yusof
Evaluation Bureau: Wahidah Md Shah, Azlianor Abdul Aziz, Amir Syarifuddin Kasim, Farah Nadia Azman
Communication Bureau: Zuraida Abal Abas
Execution and Monitoring Bureau: Dr Abdul Razak Hussain, Zakiah Ayob, Sheikh Faisal Abdul Latif, Mohd Suffian Sulaiman, Nuridawati Mustafa

Figure 2.1: Organization chart
2.2.4 Evaluation Bureau Task Scope

- Updating the evaluation criteria from time to time.
- Collecting grades from supervisors, compiling the grades into the system and giving statistical output on the students' overall grade results.
- Preparing the presentation schedule and the evaluator list.
2.2.5 Execution Bureau Task Scope

- Preparing, completing and maintaining the supervisor-student list (for example, assigning a suitable supervisor to each student).
- Building and updating the database of student, supervisor and project title information.
- Preparing and maintaining the project milestones.
2.2.6 Communication Bureau Task Scope

- Informing supervisors of the project milestones from time to time.
- Receiving, handling and managing students who face problems.
- Receiving and following up on any questions and uncertainties about the project.
2.2.7 Faculty Target

- Updating the whole current course curriculum and strengthening it to make it of higher quality, so that it always fulfils and suits industry needs.
2.2.8 Committee Targets

- To make sure the project milestones are managed well and effectively.
- To make sure the project objectives are achieved.
- To make sure students are capable of handling and carrying out projects or analyses once graduated.
- To implement the project as a platform for gaining knowledge for further studies or work in the industry.
2.2.9 Data Warehousing
A data warehouse is a database that collects and stores integrated data from several databases, usually integrating data from multiple sources and providing a different way of looking at the data than the databases being integrated do. Data warehousing extends the traditional interests of database management systems. Therefore, knowledge of data modeling, relational algebra, query processing, transaction processing and advanced database architecture are all prerequisites of data warehousing techniques. The success or failure of data warehousing is closely related to the historical development of database management systems (DBMSs). [9]
Data warehouse architecture exhibits various layers of data in which data from
one layer are derived from data of the lower layer. Data sources, also called operational
database, form the lowest layer. They may consist of structured data stored in open
database systems and legacy systems or unstructured or semi structured data stored in
files. The data sources can be either part of the operational environment of an
organization or external, produced by a third party. They are usually heterogeneous,
which means that the same data can be represented differently, for instance through
different database schemata, in the source.
The design of a data warehouse is a difficult task. There are several problems designers have to tackle. First of all, they have to come up with the semantic reconciliation of the information lying in the sources and the production of an enterprise model for the data warehouse. Then, a logical structure of relations in the core of the data warehouse must be obtained, either serving as buffers for the refreshment process or as persistent data stores for querying or further propagation to data marts. This is not a simple task by itself; it becomes even more complicated since the physical design problem arises: the designer has to choose the physical tables, processes, indexes and data partitions representing the logical data warehouse design. [4]
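As a concrete, hedged illustration of such a layered design, a dimensional (star) schema for the PSM report warehouse might look as follows: dimension tables describing students, supervisors and titles, and a fact table derived from the operational submission data. All names below are assumptions introduced here, not the thesis's actual schema.

    -- Hypothetical star schema for a PSM report warehouse (assumed names).
    CREATE TABLE DimStudent (
        StudentKey  INT PRIMARY KEY,
        StudentName VARCHAR(100),
        Programme   VARCHAR(50)
    );

    CREATE TABLE DimSupervisor (
        SupervisorKey  INT PRIMARY KEY,
        SupervisorName VARCHAR(100),
        Department     VARCHAR(50),
        AreaOfInterest VARCHAR(100)
    );

    CREATE TABLE DimTitle (
        TitleKey INT PRIMARY KEY,
        Title    VARCHAR(200),
        Abstract VARCHAR(2000),
        Year     INT
    );

    -- One row per completed project: the "fact" surrounded by dimensions.
    CREATE TABLE FactProject (
        StudentKey    INT REFERENCES DimStudent(StudentKey),
        SupervisorKey INT REFERENCES DimSupervisor(SupervisorKey),
        TitleKey      INT REFERENCES DimTitle(TitleKey),
        GradePoint    DECIMAL(3,2),   -- assumed numeric grade measure
        PRIMARY KEY (StudentKey, TitleKey)
    );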
2.2.9.1 Data Warehouse Research: Issues and Projects

This topic looks at the following issues:
a) Data Extraction and Reconciliation
Data extraction and reconciliation are still carried out on a largely intuitive basis in real applications. Existing automated tools do not offer choices in the quality of service, making the decisions taken for the integration process difficult to understand and evaluate.

Data reconciliation is first a source integration task at the schema level, similar to the traditional task of view integration, but with a richer integration language and therefore with more opportunities for checking the consistency and completeness of data. Wrappers and loaders based on such enriched source integration facilities facilitate the arduous task of instance-level data integration from the sources to the warehouse, such that a larger portion of inconsistencies, incompatibilities and missing information can be detected automatically.
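A minimal sketch of what instance-level reconciliation can look like in SQL, assuming two hypothetical source tables (SourceA_Staff and SourceB_Lecturers) that store the same lecturer data under different schemata, and a staging table in the warehouse:

    -- Reconcile two heterogeneous sources into one staging table (assumed names).
    INSERT INTO StagingLecturer (LecturerName, Department)
    SELECT name, dept              FROM SourceA_Staff       -- legacy system
    UNION
    SELECT full_name, department   FROM SourceB_Lecturers;  -- newer system

    -- Missing information can then be detected automatically.
    SELECT LecturerName
    FROM   StagingLecturer
    WHERE  Department IS NULL;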
b) Data Aggregation and Customization
Aggregation here means a grouping of data by some criteria, followed by the application of a computational function (sum, average, trend, etc.) to each group. Results on the computational complexity of these language extensions need to be obtained, and practical algorithms for reasoning about metadata expressed in these languages need to be developed and demonstrated. The gain from such research results is better design-time analysis and rapid adaptability of data warehouses, thus promoting the quality goals of relevance, access to nonvolatile historical data, and improved consistency and completeness.
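In SQL, this notion of aggregation is expressed with GROUP BY followed by computational functions per group. The sketch below reuses the illustrative star schema from Section 2.2.9; GradePoint is an assumed numeric column.

    -- Group projects by year, then apply COUNT and AVG to each group.
    SELECT   d.Year,
             COUNT(*)          AS ProjectsCompleted,
             AVG(f.GradePoint) AS AverageGrade
    FROM     FactProject f
             JOIN DimTitle d ON d.TitleKey = f.TitleKey
    GROUP BY d.Year
    ORDER BY d.Year;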
c) Query Optimization
Data warehouses provide challenges to existing query processing technology for a number of reasons. Typical queries require costly aggregation over huge sets of data while, at the same time, OLAP users pose many queries and expect short response times; users who explore the information content of a data warehouse apply sophisticated strategies and demand query modes, such as hypothetical and imprecise querying, that are beyond the capabilities of SQL-based systems.
There are two kinds of metaknowledge in the data warehouse that are relevant: integrity constraints expressed in rich schema languages, and knowledge about redundancies in the way information is stored. Optimization of nested queries with aggregates can be achieved through the transformation of a query into an equivalent one that is cheaper to compute. Techniques for constraint pushing prune the set of data to be considered for aggregation. Integrity constraints can be used to establish the equivalence of queries that are not syntactically similar. Rewriting techniques reformulate queries in such a way that materialized views are used instead of recomputing previous results. To accomplish its task for queries with aggregation, the query optimizer must be capable of reasoning about complex relationships between the groupings over which the aggregation takes place. Finally, these basic techniques must be embedded into complex strategies to support OLAP users who formulate many related queries and apply advanced query modes. [4]
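In SQL Server, one way to realize such materialized views is an indexed view. The following hedged sketch, again using the illustrative schema above, stores an aggregate once so the optimizer can answer matching queries from it instead of re-aggregating the fact table (automatic matching depends on the optimizer and edition):

    -- SCHEMABINDING, two-part names and COUNT_BIG are required by SQL Server
    -- for indexed views with GROUP BY.
    CREATE VIEW dbo.ProjectsPerYear
    WITH SCHEMABINDING
    AS
    SELECT   d.Year,
             COUNT_BIG(*) AS ProjectCount
    FROM     dbo.FactProject f
             JOIN dbo.DimTitle d ON d.TitleKey = f.TitleKey
    GROUP BY d.Year;

    -- The unique clustered index stores the view's result on disk.
    CREATE UNIQUE CLUSTERED INDEX IX_ProjectsPerYear
        ON dbo.ProjectsPerYear (Year);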
d) Update Propagation
Updates to information sources need to be controlled with respect to the integrity constraints specified during the design of the data warehouse and its derived views. A constraint may state conditions that the data in an information source must satisfy to be of a quality relevant to the data warehouse. A constraint may also express conditions over several information sources that help to resolve conflicts during the extraction and integration process. Thus, an update in a source database may degrade the quality of information, resulting in the evolution of the view the data warehouse has over this source. It is very important that violations of constraints are handled appropriately, e.g., by sending messages or creating alternative time-stamped versions of the updated data.

Updates that meet the quality requirements defined by the integrity constraints must then be propagated towards the views defined at the data warehouse and user level. This propagation must be done efficiently, in an incremental fashion. Recomputations can take advantage of useful cached views that record intermediate results. The decision to create such views depends on an analysis of both the data warehouse query workload and the update activity at the information sources. [4]
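As one hedged illustration of incremental propagation, a trigger on a hypothetical Submissions source table can fold each batch of inserts into a cached summary table instead of recomputing it from scratch; all names here are assumptions.

    -- Incremental maintenance sketch; assumes a summary row already exists
    -- for each student in SubmissionSummary.
    CREATE TRIGGER trg_PropagateSubmission
    ON Submissions
    AFTER INSERT
    AS
    BEGIN
        UPDATE s
        SET    s.SubmissionCount = s.SubmissionCount + i.NewRows
        FROM   SubmissionSummary s
               JOIN (SELECT StudentID, COUNT(*) AS NewRows
                     FROM inserted
                     GROUP BY StudentID) i
                 ON i.StudentID = s.StudentID;
    END;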
2.2.9.2 Some Major Research Projects in Data Warehousing
To summarize, we need techniques and tools to support the rigorous design and
operation of data warehouses:
a) based on well-defined data quality factors;
b) addressed by a rich semantic approach;
c) realized by bringing together enabling technologies.
Regrettably, the coverage of existing research projects does not address all these questions either; most research in data warehousing focuses on source integration and update propagation. We sketch the approaches of several well-known recent projects: the Information Manifold (IM) developed at AT&T, the TSIMMIS project at Stanford University, the Squirrel project at the University of Colorado and the WHIPS project at Stanford University.
The Information Manifold (IM) system was developed at AT&T for information gathering from disparate sources such as databases, documents and unstructured files. It is based on a rich domain model, expressed in a knowledge base, which allows for describing various properties of the information sources, from the topics they are about to the physical characteristics they have. This enables users to pose high-level queries to extract information from different sources in a unified way. The architecture of IM suits the dynamic nature of the information sources. In particular, to add a new source, only its description (i.e., its view as a relational schema and the related types and constraints) needs to be added, without changing the existing integration mechanisms of the system. Updates on the information sources are external to IM, while propagation of updates from the world view to the single sources is not supported.
Squirrel was a prototype system at the University of Colorado which provides a framework for data integration based on the notion of an integration mediator. Integration mediators are active modules that support integrated views over multiple databases. A Squirrel mediator consists of a query processor and an incremental update processor, which are generated from high-level specifications. In a mediator, a view can be fully materialized, partially materialized or fully virtual. The queries sent to the mediator are processed by the query processor using the materialized view or by accessing the source databases. The update processor maintains the materialized views incrementally using the incremental updates of the sources. The architecture of the Squirrel mediator consists of three components: a set of active rules, an execution model for such rules, and a View Decomposition Plan (VDP). The notion of a VDP is analogous to the query decomposition plan in query optimization.
The Warehouse Information Prototype at Stanford (WHIPS) project developed a data warehouse prototype test bed to study algorithms for the collection, integration and maintenance of information from heterogeneous and autonomous sources. The WHIPS architecture consists of a set of independent modules implemented as CORBA objects to ensure modularity and scalability. The central component of the system is the integrator, to which all other modules report. Different data models may be used both for each source and for the warehouse data. [4]
2.2.10 Queries
Queries in databases involve a mapping from a problem statement into a statement of a database language, such as SQL, understood by a DBMS. A query in a database is very important as part of report generation. Queries allow users to view, change and/or analyze data in different ways; a query requests the extraction of useful data from the database.

A database allows interrogation using a declarative query language based on exact matching and constrained by the data schema structure.

Several query languages can be used to perform queries, such as the SQL of SQL Server. Since browsing and search engines present severe limitations, several query languages for the web have recently been proposed. These approaches are mainly based on a loose notion of structure and tend to see the web as a huge collection of unstructured objects organized as a graph. Clearly, traditional database techniques are of little use in this field, and new techniques are needed.
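A simple example of such a declarative, exact-matching query, using the illustrative DimTitle table from Section 2.2.9: find previous PSM titles from a given year that contain a keyword.

    -- Search previous titles by year and keyword (illustrative names).
    SELECT Title, Year
    FROM   DimTitle
    WHERE  Year = 2005
      AND  Title LIKE '%warehouse%';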
2.2.10.1 Queries at the Front End
The front end comprises the tools by which end users access the data warehouse. It consists of the OLAP/DSS tools and possibly their underlying data marts. There is a wide range of ways in which users retrieve information: from managers who have a quick look at a graphical user interface, to their assistants who inspect the latest version of a complex report, to analysts who query a data cube with a special tool for ad-hoc queries, to systems personnel who build such interfaces and define the views under which the data warehouse can be perceived through the query tools. [4]
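The multi-level summaries such front-end tools present can be produced in SQL Server with WITH ROLLUP. The sketch below, again with the illustrative names assumed earlier, returns per-department, per-year counts together with subtotal and grand-total rows:

    -- ROLLUP adds a subtotal row per department and a grand total;
    -- the extra rows are marked by NULLs in the grouping columns.
    SELECT   s.Department,
             d.Year,
             COUNT(*) AS Projects
    FROM     FactProject f
             JOIN DimSupervisor s ON s.SupervisorKey = f.SupervisorKey
             JOIN DimTitle d      ON d.TitleKey      = f.TitleKey
    GROUP BY s.Department, d.Year WITH ROLLUP;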
2.2.10.2 Query processing
A query submitted by the user is processed before the result is produced. Query processing is very important in choosing how queries are implemented on the physical database. An important aspect of query processing is query optimization, whose aim is to choose the strategy that minimizes resource usage and reduces the total execution time of the query.
Figure 2.2 shows the phases of query processing. A query written in a high-level language (SQL Server's SQL) first undergoes query decomposition, producing a relational algebra expression. Query optimization then uses database statistics to derive an execution plan, from which code is generated; these phases take place at compile time. At runtime, the query is executed against the database to produce the query output.

Figure 2.2: Query processing
Further, Thomas Connolly has stated: "There are two choices for when the first three phases of query processing can be carried out. One option is to dynamically carry out decomposition and optimization every time the query is run. The advantage of dynamic query optimization arises from the fact that all information required to select an optimum strategy is up to date. The disadvantages are that the performance of the query is affected because the query has to be parsed, validated and optimized before it can be executed. Further, it may be necessary to reduce the number of execution strategies to be analyzed to achieve an acceptable overhead, which may have the effect of selecting a less than optimum strategy. The alternative option is static query optimization, where the query is parsed, validated and optimized once. This approach is similar to the approach taken by a compiler for a programming language. The advantages of static optimization are that the runtime overhead is removed, and there may be more time available to evaluate a larger number of execution strategies, thereby increasing the chances of finding a more optimum strategy. For queries that are executed many times, taking some additional time to find a more optimum plan may prove to be highly beneficial. The disadvantages arise from the fact that the execution strategy that is chosen as being optimum when the query is compiled may no longer be optimal when the query is run. However, a hybrid approach could be used to overcome this disadvantage, where the query is re-optimized if the system detects that the database statistics have changed significantly since the query was last compiled. Alternatively, the system could compile the query for the first execution in each session and then cache the optimum plan for the remainder of the session, so the cost is spread across the entire DBMS session."
2.2.11 Client/Server

Client/server describes the relationship between two computer programs in which one program, the client, makes a service request from another program, the server, which fulfills the request. Although the client/server idea can be used by programs within a single computer, it is a more important idea in a network. In a network, the client/server model provides a convenient way to interconnect programs that are distributed efficiently across different locations. Computer transactions using the client/server model are very common.

The client/server model has become one of the central ideas of network computing. Most business applications being written today use the client/server model, as does the Internet's main protocol suite, TCP/IP. In marketing, the term has been used to distinguish distributed computing by smaller dispersed computers from the "monolithic" centralized computing of mainframe computers. But this distinction has largely disappeared as mainframes and their applications have also turned to the client/server model and become part of network computing.
In the usual client/server model, one server, sometimes called a daemon, is activated and awaits client requests. Typically, multiple client programs share the services of a common server program. Both client programs and server programs are often part of a larger program or application. Relative to the Internet, your Web browser is a client program that requests services (the sending of Web pages or files) from a Web server (which technically is called a Hypertext Transport Protocol or HTTP server) on another computer somewhere on the Internet. Similarly, your computer with TCP/IP installed allows you to make client requests for files from File Transfer Protocol (FTP) servers on other computers on the Internet.
2.2.11.1 Types of client
Clients are generally classified as either "thick clients" or "thin clients",
although more recently there has been a proliferation of other classifications.
Thick clients
A thick client (also known as a fat client or rich client) is a client that performs
the bulk of any data processing operations itself, and relies on the server it is associated
with primarily for data storage. Although the term usually refers to software, it can also
apply to a network computer that has relatively strong processing abilities.
Thin clients

A thin client is a minimal client. Thin clients utilize as few resources on the host computer as possible. A thin client's job is generally just to graphically display information provided by an application server, which performs the bulk of any required data processing.
2.2.11.2 Server
Servers occupy a place in computing similar to that occupied by
minicomputers in the past, which they have largely replaced. The typical server is a
computer system that operates continuously on a network and waits for requests for
services from other computers on the network. Many servers are dedicated to this role,
but some may also be used simultaneously for other purposes, particularly when the
demands placed upon them as servers are modest. For example, in a small office, a large
desktop computer may act as both a desktop workstation for one person in the office and
as a server for all the other computers in the office.
Servers today are physically similar to most other general-purpose computers,
although their hardware configurations may be particularly optimized to fit their server
roles, if they are dedicated to that role. Many use hardware identical or nearly identical
to that found in standard desktop PCs. However, servers run software that is often very
different from that used on desktop computers and workstations.
Servers should not be confused with mainframes, which are very large computers that centralize certain information-processing activities in large organizations and may or may not act as servers in addition to their other activities. Many large organizations have both mainframes and servers, although servers usually are smaller, much more numerous and more decentralized than mainframes.
Servers frequently host hardware resources that they make available on a controlled and shared basis to client computers, such as printers (print servers) and files (file servers). This sharing permits better access control (and thus better security) and can reduce costs by reducing duplication of hardware. Types of server:
Server hardware

Although servers can be built from commodity computer components, particularly for low-load and/or non-critical applications, dedicated high-load, mission-critical servers use specialized hardware that is optimized for the needs of servers. CPU speeds are far less critical for many servers than they are for many desktops. Not only are typical server tasks likely to be delayed more by I/O requests than by processor requirements, but the lack of any graphical user interface in many servers frees up very large amounts of processing power for other tasks, making the overall processor power requirement lower. If a great deal of processing power is required in a server, there is a tendency to add more CPUs rather than increase the speed of a single CPU, again for reasons of reliability and redundancy.
Server software
The major difference between servers and desktop computers is not in the
hardware but in the software. Servers often run operating systems that are designed
specifically for use in servers. They also run special applications that are designed
specifically to carry out server tasks.