BUILDING A CARTOGRAPHIC QUALITY CONTROL SYSTEM
ON TOPOGRAPHIC MAP PRODUCTION
H.P. Dalkıran, Ö. Simav, M. Üstün
General Command of Mapping, Ankara, TURKEY
polat.dalkiran@hgk.mil.tr, ozlem.simav@hgk.mil.tr, mehmet.ustun@hgk.mil.tr
Abstract
Managing spatial data requires a deep understanding of the data and the nature of its relation to geographical objects. Although this is obvious in most cases, spatial data, unlike common data, has behaviors of its own that cannot easily be estimated when starting a quality control process. While there are widely used data collection and digitizing criteria and rules, there are no fully standard quality control procedures that spatial data producers can apply to their data and databases. To find proper quality checking and error correction methods, each producer needs to develop its own quality control system that fits its geodatabase structure.
In this context, the General Command of Mapping (GCM) in Türkiye has been developing such a quality control management system for the spatial data of its Cartographic Vector Maps, which have been produced since 1999. The existing data was very complex, and the system was developed over 3 years. Spatial data is not an easy fairy-tale giant to fight with the quality control weapon; it requires very hard study and training to deal with many different situations. In GCM, about 3000 Cartographic Vector Maps have been produced in 7 years. Meanwhile, the digitizing and cartographic editing rules and criteria for the 1:25 000 scale topographic maps have been changed many times.
Many different types of coverages with distinctive attributes and spatial relations had to be handled. We had to resolve many errors and problems while developing automated, semi-automated and manual process procedures. Optimization of the system was also necessary to obtain more effective results in a short time. In this article you will find a brief account of the whole story: the obstacles encountered, the problems that occurred, some statistical results and the lessons learned from developing such a quality management system.
Key Words: Data Quality, Quality Management System, Quality Assurance, GIS,
Topographic Map production, Quality Control, Cartographic Vector Maps, Spatial Database, Map
Library
1. INTRODUCTION
The Quality Control Process is neither a new discussion point nor a new thing for a map production system. It is a must for the life-cycle of any project or production line. Improvements in GIS technology enhance the quality of Quality Control Management (QCM) systems but do not remove the human factor from them. On the other hand, this is not a handicap or a weakness; it is the nature of the product and its life-cycle. Touching any part of a production line requires a careful inspection of the quality control process if it affects other parts of the system. Once the balance of the system is stabilized, every improvement or development may produce unforeseen errors. Hence, any map or GIS data manufacturer has to consider such errors and issues while upgrading the production system. In this article you will find a case study of an upgrade of the map production system behind the GCM Cartographic Data Model, and how quality control management was handled in this period.
The entire story began with a major requirement: the generalization of the 1:25 000 scale (25K) cartographic maps to 1:100 000 scale (100K) by using the new technologies developed in recent years. Fast map production was the main objective. What had to be handled was a huge, non-standardized and complex body of cartographic data. The most important part of the problem was the data itself.
When the initial work on creating a spatial database and digital mapping started at the General Command of Mapping, Türkiye, in the early 1990s, it was not possible to predict the potential risks with respect to cartographic concerns. The pre-version of the spatial data structure, which is the fundamental part of the Spatial Database, was based on aerial photographs. Implementation and conversion of this data to produce the paper maps could not be solved for many years, so the initial publications of the paper maps had to be re-digitized from printed sheets and existing paper maps after correction and completion by topographers. In 2001, the data structure and collection techniques were changed in order to gain the ability to convert and integrate this data so that it satisfies the cartographic concerns. This situation created two types of data sets. After 2001, GCM decided to start a project, named KartoGen, based on the generalization of 1:25 000 to 1:100 000 scale maps using this data. In 2002, the generalization project developers determined that a geodatabase was required.
The initial study of the existing data brought about the need for a detailed inspection of the map production system. A closer look at the map production system in GCM shows the framework as a starting point. All 1:25 000 scale sheets are digitized in Microstation DGN format from aerial photographs. This kind of spatial data is known as a digital landscape model (DLM) and has no visual scope. In cartographic model theory, the primary model is called the Digital Landscape Model (Bildirici & Ucar 2000). A DLM denotes the type of data that most of us consider as base GIS data compiled from source information that is registered to the ground. Many agencies have the intention of deriving lower resolution DLMs from higher resolution databases using various generalization processes (Buckley et al 2005).
After the DLMs are produced, they are printed on matte films for correction and completion by topographers in the field. This is a long and costly process. Then the data of each tile is converted to Arcinfo Coverage format via object conversion tables. This data is also crude in the sense of cartographic concern and far from cartographic esthetics.
A map production system called KARTO25, implemented in the Arcinfo Workstation environment, takes the coverages and matte films as inputs. Operators then start to edit and modify the data according to cartographic rules written down from many years of experience. Manipulating the spatial data changes it into a new type that is no longer related to the previous, spatially accurate data. This new kind of data, known as a digital cartographic model (DCM), includes esthetics, rules, a representational view and much information about the area of interest. A single DCM could be used to support the production of multiple map products if changes in the workflow only, and not the data, were required (Buckley et al 2005). After adding other map elements such as the legend, grids, title and other texts, the map is ready for publishing. After a paper map is published, the outputs of the KARTO25 system, namely EPS, PDF, TIF, MAP and DATA, are stored on HDDs and DVDs for archiving.
The above is a very brief presentation of the map production system in GCM; many issues in the production system that would need explanation are not the focus of this paper. Reviewing the dataflow, however, shows the pathway to building a spatial database. Because of the existing data structure, the initial spatial database had to be constructed in Arcinfo Coverage format. It is named the initial spatial database because it will be the base for a real geodatabase managed by Oracle DBMS and ArcSDE, as agreed by GCM. The most suitable structure for the coverage format is the Map Library, which is managed by the Arcinfo Librarian module. This module is an extension integrated into the default Arcinfo package at no additional cost, and it is very easy to build up and manage. For these reasons GCM preferred this pathway to the geodatabase. This intermediate solution also exposes the data quality, consistency and integration across tiles. Other reasons are the operators' familiarity with the Arcinfo Workstation software environment and with KARTO25, which is built upon Arcinfo Workstation.
2. DATA STRUCTURE OF KARTO25
The starting point of a Quality Control Management System is the product itself. In order to maximize efficiency and data quality, the product and its sources must be inspected very carefully. This creates the road map of the development cycle. In Karto25, the data structure is divided into 9 different classes. Each class is subdivided into 3 coverages: line, point and polygon. With the additional annotation coverage, the total number of coverages is 28. Table 1 shows these coverages.
Table 1. Data Classes (Aslan 2003).

CLASS (English) | CLASS (Turkish) | Abbreviation
Annotation | Yazı | yazi
Boundary | Sınırlar | bnd
Elevation | Yükseklik | ele
Hydrography | Hidrografya | hyd
Industry | Endüstri | ind
Physiography | Fizyografya | phy
Population | Yerleşim | pop
Transportation | Ulaşım | tra
Utilities | Tesisler | uti
Vegetation | Bitki Örtüsü | veg
Each coverage has some common attribute fields, which are given in Table 2.
Table 2. Attribute Fields (GCM 2003).

Name | Explanation
F_CODE | Feature Code, defined uniquely and based upon FACC
F_NAME | Feature Name, defined uniquely, as a Turkish name
P_NAME | Private Name for the object's special identifier
VALUE | Value for some special objects
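The class and coverage scheme above is regular enough to be enumerated programmatically. Below is a minimal sketch, not GCM's actual schema or tooling, that derives the 28 coverages of a tile from the abbreviations of Table 1 and lists the common fields of Table 2; the "abbreviation_featuretype" naming pattern is only an illustrative convention.

```python
# Minimal sketch of the Karto25 class/coverage scheme described in Tables 1 and 2.
# The abbreviation-to-class mapping and the common fields come from the tables;
# the "<abbr>_<type>" naming pattern used here is only an illustrative convention.

CLASSES = {                      # coverage abbreviation -> English class name (Table 1)
    "bnd": "Boundary", "ele": "Elevation", "hyd": "Hydrography",
    "ind": "Industry", "phy": "Physiography", "pop": "Population",
    "tra": "Transportation", "uti": "Utilities", "veg": "Vegetation",
}
FEATURE_TYPES = ("point", "line", "polygon")
COMMON_FIELDS = ("F_CODE", "F_NAME", "P_NAME", "VALUE")      # Table 2

def coverage_names() -> list[str]:
    """The 28 coverages of one tile: 9 classes x 3 feature types + annotation."""
    names = [f"{abbr}_{ftype}" for abbr in CLASSES for ftype in FEATURE_TYPES]
    names.append("yazi")                                     # annotation coverage
    return names

if __name__ == "__main__":
    print(len(coverage_names()), "coverages per tile")       # -> 28 coverages per tile
```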
3. DATABASE STRUCTURE OF KARTO25 MAP LIBRARY
In order to manage the data, it must have a base and a structure. Commonly the structure is expected to be well defined, but if the sources are not uniform it has to be reorganized. Restrictions of the software used for the spatial data forced the project to use a map library structure as the base. Hence we decided to use the ArcInfo Librarian module as the base for reorganizing the map sheets, which had been collected separately and stored on networked computers.
Map libraries organize spatial data into a logical, manageable framework. Librarian uses a hierarchical data model (Figure 1) to keep track of the available map libraries on a computer system and the actual coverages that make up each individual tile. Map libraries are composed of standard Arcinfo data structures (coverages and info files). Librarian is the Arcinfo subsystem for creating and maintaining map libraries (ESRI 1992).
Figure 1. Librarian Hierarchical Data Model
The total drive space needed for this huge dataset was approximately 150 GB for the 5550 cartographic vector maps that cover the entire extent of the country. There were 3000 sheets of vector map, and the 28 layers of each had to be restructured.
Construction of the map library cannot be done manually; batch processing has to be used to reconstruct the vector maps. On the other hand, it is not possible to develop a stable batch process for data that has no metadata of any kind.
Early releases of Karto25 went through many versions while the product line was operating. It took 5 years to make it stable software for the production of cartographic vector maps based on photogrammetric sources; meanwhile, 1400 map sheets had been manufactured and archived. Tracking this data was almost impossible, and iterative development had to be used for the batch process. The weight of the job was on this part and took 3 years of work. However, the data produced after the first stable release of Karto25 was consistent and suitable for batch processing; it was easy to run scripts and obtain the expected results.
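The batch reconstruction described above can be pictured as a loop over the tile folders of the library. The following is a minimal sketch under assumed names; the folder layout follows the LIBRARIES/KVK25/TILES convention mentioned in Section 5, and the single check is a placeholder, not one of the actual Karto25 scripts.

```python
# Minimal sketch of a batch pass over the tile folders of the map library.
# The TILE_ROOT layout and the check below are illustrative placeholders.

import csv
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

TILE_ROOT = Path("LIBRARIES/KVK25/TILES")        # assumed root of the tile folders
REPORT = Path("batch_report.csv")
EXPECTED_COVERAGES = 28                          # 9 classes x 3 types + annotation

def check_tile(tile_dir: Path) -> list[str]:
    """Return the problems found in one tile (placeholder check only)."""
    coverages = [p for p in tile_dir.iterdir() if p.is_dir()]
    if len(coverages) != EXPECTED_COVERAGES:
        return [f"{len(coverages)} coverages found, {EXPECTED_COVERAGES} expected"]
    return []

def run_batch() -> None:
    """Walk every tile, log problems and write them to a CSV report."""
    with REPORT.open("w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["tile", "problem"])
        tiles = sorted(TILE_ROOT.iterdir()) if TILE_ROOT.exists() else []
        for tile_dir in (t for t in tiles if t.is_dir()):
            for problem in check_tile(tile_dir):
                logging.warning("%s: %s", tile_dir.name, problem)
                writer.writerow([tile_dir.name, problem])

if __name__ == "__main__":
    run_batch()
```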
4. SOME DATA QUALITY PROBLEMS
A spatial database is a factory that produces bullets for the generalization machine gun. Creating a spatial database requires considering other users of the data, not only cartographic purposes. Cartographic data modeling is similar to the process used in geographic data modeling (Burrough 1992, Goodchild 1992, Hadzilacos & Tryfona 1996, Peuquet 1984). So if one needs to build an enterprise geodatabase out of very different types of spatial data, one should design the system architecture with modern development methods. Examining the data and dataflow, inspecting bottlenecks in the system and debugging the architecture are essential parts of the development process. Conceptual design, logical design and physical design are the three categories of the design process. There are ten steps (Arctur & Zeiler 2004) to design a geodatabase:
1. Identify the information products that will be produced with your GIS
2. Identify the key thematic layers based on your information
3. Specify the scale ranges and spatial representations for each thematic layer
4. Group representations into datasets
5. Define your tabular structure and behavior for descriptive attributes
6. Define the spatial properties of your datasets
7. Propose a geodatabase design
8. Implement, prototype, review, and refine your design
9. Design work flows for building and maintaining each layer
10. Document your design using appropriate methods
These steps are the most common design principles, and one should follow them in order not to end up back at the starting point. To meet the cartographic requirements that might be imposed on a GIS, a complete cartographic data model should be considered (Arctur & Zeiler 2004).
In this scope, the data and the dataflow were analyzed and the framework of the study was drawn up in GCM. The conceptual part of the design was completed by the end of 1998, and most of the main structure was defined in great detail. The logical design was clarified by grouping the spatial objects into datasets and adding descriptive attributes. After implementing this structure in the workflow, some minor refinements had to be made over time. The data structure, which had been changed over and over again, was the main handicap of the study. Standardizing this data requires a great deal of batch and automation processing. These processes are explained in the next part.
The projection system and the map extent are the main characteristics of a map, and they are related to the geodetic reference system, which is tied to the datum. Changing the datum seems to be a very common and simple task for any GIS software, but in a map production workflow it is not an easy thing. GCM decided in 2001 to produce maps in the WGS84 datum. On the other hand, about 1500 sheets of vector data had been produced in the ED50 datum. This was the second main problem that needed to be solved. The datum transformation of the sheets and the automation of this process showed that computers with very high capacity were needed for such a huge job. The datum transformation also involves a major problem: each map's extent overlaps the extents of 4 other maps in the previous datum. To create a map in the new datum, a clipping process is required for each tile and coverage. This task is also time-consuming for a batch process.
For generalization, the spatial data must be standardized, checked for errors and seamless. To achieve this goal, the edge-matching process for adjacent tiles must be done manually or semi-automatically by operators. Statistics collected on the existing data for this task showed that each edge of a tile needs 3 hours of work. Expanded to the sheets covering the whole country, approximately 16.500 hours of work are required to accomplish the edge-matching task alone.
The process of map making will always require human intervention, just as natural languages require intelligent interpretation. Therefore, it is impossible to fully standardize the methods or processes of thematic cartography, or to provide universal rules of information generalization (Beconyte & Govorov 2005). It has been experienced for many years that the data workers, in this case the cartographers, are themselves the main source of errors in the spatial data. Factors such as loss of attention, lack of knowledge, concentration or motivation, and personal problems create gross errors that are inevitable. To catch this kind of error there is a task to be done, the Raster Control Method: the vector data is symbolized and compared with the scanned paper maps or the published raster data. Statistics were also collected for this task; the mean time for each tile is 4.5 hours of work, and the total work needs about 25 000 hours.
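Read together with the roughly 5 500-5 550 sheets that cover the country, the two workload figures above appear to follow from a simple multiplication; the paper does not spell out the arithmetic, so the following is a hedged reconstruction:

```latex
\[
  3\ \mathrm{h/tile} \times 5\,500\ \text{tiles} \approx 16\,500\ \mathrm{h},
  \qquad
  4.5\ \mathrm{h/tile} \times 5\,550\ \text{tiles} \approx 25\,000\ \mathrm{h}.
\]
```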
5. APPLICATION OF QUALITY CONTROL MANAGEMENT SYSTEM ON EXISTING DATA
A system designer must be meticulous about the dataflow: the workflow itself is a source of errors and bottlenecks. The Arcinfo Coverage system in particular is very vulnerable to operating system and operator actions. The structure of Karto25 is built upon the Arcinfo coverage system, and each vector map layer is stored in a tile-based system on the hard disks. These tiles are managed and authenticated by the Microsoft Server 2003 operating system in the Karto25 Map Library. The tiles of the 1:25 000 scale vector maps are collected under the corresponding 1:100 000 scale tile names, and all tiles are stored under the LIBRARIES/KVK25/TILES folder on a mapped network drive.
Figure 2. Quality checking workflow
Adjustment between map edges and neighboring areas is the main problem of a seamless database. To make it easier, there must be a workaround solution for this process. Therefore, all 1:25 000 scale tiles within a 1:100 000 tile are appended and edited manually, to prevent unwanted errors stemming from programs. After merging the 25K tiles, the 100K tiles are edge-matched against the neighboring cross-tiles within the 1:250 000 scale extent. Then the vector and raster data comparison process starts. All of these time-consuming processes have to be done manually. When these steps are finished, the data is ready for quality checking. The quality checking software is a program based on the ArcGIS environment that implements ArcGIS objects. The software produces a report for each tile, which is corrected interactively by an operator. This task loops several times until there are no errors. Faultless tiles are ready to be put into the Map Library. In this step, another batch process takes over and prepares the data by cleaning temporary files and doing some other necessary operations. Once the data is in the map library, the extraction operation is performed and the data is ready for generalization purposes. Conversion software takes these coverages and converts them to a personal geodatabase, which will be used by the KartoGen software tool kit. Figure 2 depicts the quality checking workflow.
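The report-and-correct loop of this workflow can be sketched as follows. The real checker is the ArcGIS-based program described above; the function names and placeholder bodies here are purely illustrative assumptions.

```python
# Minimal sketch of the report-and-correct loop of Figure 2. The real checker is
# the ArcGIS-based program described in the text; run_checks() and
# operator_fixes() are illustrative placeholders.

def run_checks(tile_id: str) -> list[str]:
    """Produce the error report for one merged tile (placeholder: finds nothing)."""
    return []

def operator_fixes(tile_id: str, errors: list[str]) -> None:
    """Interactive correction step performed by an operator (placeholder)."""
    print(f"{tile_id}: {len(errors)} errors to fix")

def quality_check(tile_id: str, max_rounds: int = 10) -> bool:
    """Repeat check/fix until no errors remain; only then may the tile enter the Map Library."""
    for _ in range(max_rounds):
        errors = run_checks(tile_id)
        if not errors:
            return True                      # faultless tile, ready for the library
        operator_fixes(tile_id, errors)
    return False                             # still failing after max_rounds

if __name__ == "__main__":
    print(quality_check("tile_001"))         # hypothetical tile identifier
```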
In a DCM, clear map graphics and good legibility of the content are much more important. Objects in a DCM are not only simplified, but also selected, enlarged and displaced. For a DCM, positional precision is not as significant as topologic consistency, which is achieved if the relative position of objects towards their neighbors is correct. The structure of DCM data can be compared to the data of landscape models. In order to perform automated generalization, the data needs to be topologically correct. The vector data of a DCM can be visualized with different symbol sets. A limitation to the freedom of symbolization is displacement, since it is performed with respect to a specific symbol width or size (Nreiter 2005). The decision about which data model to select for a spatial database is clarified by cost-benefit analyses. One of the prime decisions to be made in all changes of technology is whether to start now or later. In general, the later the start-up, the greater the cost of establishing databases and the longer the time before benefits are realized (Bernhardsen 1992).
The data model is the main bottleneck for generalization, which needs highly integrated and well-qualified data to accomplish its tasks. How easily the algorithms of the generalization tools can be implemented depends mostly on the data model. Semantic, geometric, topologic and cartographic restrictions are the basics of the data model constraints. Table 3 includes some of these for a spatial database built upon Karto25 data (a sketch of how two of them can be tested automatically follows the table).
Table 3. Some examples of semantic, geometric, topologic and cartographic restrictions (Dalkiran 2005).

Description | Explanation
Representing geographical objects by graphical elements that are separated over many layers. | A river in the polygon layer and its contour lines in the line layer.
Some geographical phenomena are too complex to be modeled, and workaround solutions have to be taken. | Representation of a dam in front of a barrage with undefined black lines.
Due to the restrictions of the software, some geographical objects in polygon layers have to be represented by lines. | Big gaps and rises on the land that should be represented by polygons are symbolized by directional brown lines to show the slope.
Objects of the same type do not have identical attributes, due to operator faults or mistyping. | Adjacent polygons in the vegetation layer have different values.
The labels in the annotation layer are not related with the geographical object attributes. | The label "Kiraz Çş." exists in the annotation layer, but the nearby fountain object's attribute is missing.
Connectivity errors for the line objects. | Streets do not connect to each other in a suburban area.
Some continuous geographical objects are broken by the absence of required lines. | Stream lines must be continuous, unless there is a shaft.
Some of the geographic objects are not orthogonal. | Polygon buildings are not orthogonal.
Line objects are split by many arcs, which causes knitted lines. | Overlapping lines of a polygon object that represents a river.
Elevation contour lines snapped to each other, caused by high snap and edit tolerances. | Snapping elevation contour lines.
Some edge-matching problems of the datum transformation. | Different attributes of the elevation layers on the edges of tiles.
Data collection differences between two operators. | Elevation lines must be continuous.
Missing values for geodetic reference points. | Geodetic reference points require orthometric height information as the VALUE attribute.
Mistyped values of some very important objects. | Some contour lines have the wrong elevation value.
Missing polygon object attributes on the edge between two tiles. | On the edge of two tiles the polygons cannot be merged.
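Several of the restrictions in Table 3 can be phrased as simple geometric tests. The sketch below assumes the open-source shapely library and uses made-up feature tuples rather than Karto25 records; it illustrates two of the restrictions: adjacent vegetation polygons carrying different attribute values, and contour lines that cross each other.

```python
# Minimal sketch of two of the Table 3 restrictions as automated checks,
# assuming the shapely library. The feature tuples are illustrative test data.

from itertools import combinations
from shapely.geometry import LineString, Polygon

# (geometry, F_CODE) pairs for a vegetation polygon coverage (codes illustrative)
veg = [
    (Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]), "EC030"),
    (Polygon([(2, 0), (4, 0), (4, 2), (2, 2)]), "EC015"),   # touches the first one
]

# (geometry, elevation) pairs for a contour line coverage
contours = [
    (LineString([(0, 0), (4, 4)]), 100),
    (LineString([(0, 4), (4, 0)]), 120),                    # crosses the first one
]

def adjacent_polygons_with_different_codes(features):
    """Adjacent polygons of the same class are expected to share the same F_CODE."""
    return [(a_code, b_code)
            for (a, a_code), (b, b_code) in combinations(features, 2)
            if a.touches(b) and a_code != b_code]

def crossing_contours(features):
    """Elevation contour lines of different values must never cross."""
    return [(a_val, b_val)
            for (a, a_val), (b, b_val) in combinations(features, 2)
            if a.crosses(b)]

print(adjacent_polygons_with_different_codes(veg))   # [('EC030', 'EC015')]
print(crossing_contours(contours))                   # [(100, 120)]
```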
6. QUALITY CHECKING PRODUCTION LINE
Quality checking is a stream of successive procedures, and it can be called the Quality Checking Production Line. There are several tasks and loops that should be carried out by automatic or interactive procedures.
As mentioned before, the quality checking and attribute completion tasks should be done in order to prevent gross errors. This work needs very detailed knowledge about the data structure and the data itself, as well as programming skills. The automatic control steps of the data quality process on the Karto25 data are listed below (a sketch of one of these checks follows the list).
1. Coverage existence control
2. Creating or completing the missing coverages
3. Deletion of temporary coverages
4. Redefining projections
5. TIC coordinate equality of each coverage with the index coverage
6. Attribute and item order standardization
7. F_CODE, F_NAME and SYMBOL items cross-checking process
8. Unsplitting line coverages with a unique temporary item
9. Overlapping points deletion
10. Deletion of objects without any attribute
11. Recreating the outline arc of the polygon coverages by using the index polygon
12. Tolerance setting for each coverage
13. Building and cleaning the topology of the coverages
14. Killing unwanted feature classes
15. Deleting log files and other system files
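Step 7 above, for example, reduces to a lookup against the data dictionary. The sketch below assumes a small illustrative lookup table keyed by F_CODE; the codes, names and symbol numbers are placeholders, not entries from the GCM data dictionary.

```python
# Minimal sketch of the F_CODE / F_NAME / SYMBOL cross-check (step 7).
# LOOKUP contents and the record layout are illustrative assumptions.

LOOKUP = {                         # expected F_NAME and SYMBOL for each F_CODE
    "BH140": ("Akarsu", 101),
    "AP030": ("Yol", 205),
}

def cross_check(records):
    """Yield (record, reason) for every record inconsistent with the lookup table."""
    for rec in records:
        expected = LOOKUP.get(rec["F_CODE"])
        if expected is None:
            yield rec, "unknown F_CODE"
        elif (rec["F_NAME"], rec["SYMBOL"]) != expected:
            yield rec, f"expected F_NAME/SYMBOL {expected}"

sample = [
    {"F_CODE": "BH140", "F_NAME": "Akarsu", "SYMBOL": 101},   # consistent
    {"F_CODE": "AP030", "F_NAME": "Akarsu", "SYMBOL": 205},   # wrong F_NAME
]
for rec, reason in cross_check(sample):
    print(rec["F_CODE"], reason)        # -> AP030 expected F_NAME/SYMBOL ('Yol', 205)
```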
Below are the semi-automated, log-based error reporting processes (a sketch of the first check follows the list):
1. Incremental value checking for elevation contour lines
2. Edge-matching for line layers
3. Checking intersections for contour lines
4. P_NAME and VALUE control for polygons and lines
5. Related object existence
6. Value standardization control for P_NAME and VALUE attributes
7. Character and syntax error checking for annotations and labels
8. Abbreviation checking for annotation coverage
9. Annotation layer symbol checking
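One reading of step 1 in this list is that successive contour elevations must differ by a whole number of contour intervals. The following minimal sketch assumes a 10 m interval and made-up sample values; it is an illustration of the idea, not the Karto25 implementation.

```python
# Minimal sketch of an incremental value check for contour elevations.
# The 10 m interval and the sample profile are assumptions, not Karto25 values.

CONTOUR_INTERVAL = 10   # metres, assumed for a 1:25 000 sheet

def bad_increments(elevations, interval=CONTOUR_INTERVAL):
    """Return index pairs whose difference is not a multiple of the contour interval."""
    return [(i, i + 1)
            for i, (a, b) in enumerate(zip(elevations, elevations[1:]))
            if (b - a) % interval != 0]

profile = [100, 110, 120, 135, 140]          # the 120 -> 135 step breaks the interval
print(bad_increments(profile))               # [(2, 3), (3, 4)]
```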
Below is the list of fully manual processes:
10. Vector and raster data comparison
11. Comparison of object values with the annotation coverage
12. Missing required value control for some important objects

7. SOME STATISTICS ON THE DATA
It’s obvious that one cannot comprehend the volume of the entire data and whole work,
unless one has them together. On the other hand it is not necessary to hold all the data collected in
the hard discs. Statistics is a tool that assists to estimate the whole work and the data. Sampling is
very important for estimation and opinion about the entire collection.
Statistics have been applied on approximately 1500 sheets of 1:25 000 scale vector maps.
Table 4 indicates some impressive results of these.
Processing such a tremendous dataset needs too much manpower, computer processing time
and automation. Batch processes are very essential. On the other hand it’s very dangerous to create
a batch process for every task for spatial data. Cartographic design and production consist of
complex procedures, which cannot be automated easily (Konstantinos & Lysandros 2005). There
are always some unexpected errors and surprising situations that cannot be foreseen before occur.
A careful programmer or developer should test the batch processes many times for many situation
and it needs to be experienced. All of these cause a paradox.
The benefit-cost estimations must be done for which tasks will be batched and which not.
After deciding the batch processes, one must consider computer processing time and optimization of
codes.
Table 4. Statistics on the data (Dalkiran 2005).

Explanation | Mean | Number of Entire Sheets | Estimation
Total Arcs | 4.954 | x 5.550 | 27.499.039
Total Polygons | 4.555 | x 5.550 | 25.280.250
Total Labels | 568 | x 5.550 | 3.152.400
Total Annotations | 559 | x 5.550 | 3.102.450
Total Points | 278 | x 5.550 | 1.542.900
Total Segments | 389.615 | x 5.550 | 2.162.363.250
Total Coverages | 28 | x 5.550 | 155.400
Total physical drive space | 20 Mb | x 5.550 | 110 Gb
Total work to upload data to Map Library | 2.5 hours | x 5.550 | 13.875 hours ≈ 578 days ≈ 1.5 years
Statistics on the workflow of the Quality Checking Process Line indicate that about 30% of the work can be batch-processed securely on the vector data. The other 70% is covered by interactive or manual tasks. Some statistics on the time needed to accomplish the interactive and manual tasks for a 1:25 000 scale vector map are shown below (Table 5).
These results signify that, with 10 data workers working 8 hours a day, the time required for the entire dataset is estimated at 1.283 workdays, which is about 5.3 years.
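The 1.283-workday figure follows from the total of Table 5 below; a hedged reconstruction of the arithmetic (the roughly 240 workdays per year is our assumption, not stated in the paper):

```latex
\begin{align*}
  \frac{102\,675\ \mathrm{h}}{10\ \text{workers} \times 8\ \mathrm{h/day}}
      &\approx 1\,283\ \text{workdays},\\
  \frac{1\,283\ \text{workdays}}{240\ \text{workdays/year}}
      &\approx 5.3\ \text{years}.
\end{align*}
```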
Table 5. Estimation of Total Hours of Work (Dalkiran 2005).

Task Name | Hours of Work | Number of Entire Sheets | Estimation of Total Hours of Work
Edge-matching for line layers | 2 | x 5.550 | 11.100
Edge-matching for polygon layers | 1 | x 5.550 | 5.550
P_NAME and VALUE control for polygons and lines | 2 | x 5.550 | 11.100
Value standardization control for P_NAME and VALUE attributes | 2 | x 5.550 | 11.100
Missing required value control for some important objects | 1,5 | x 5.550 | 8.325
Character and syntax error checking for annotations and labels | 1,5 | x 5.550 | 8.325
Abbreviation checking for annotation coverage | 0,5 | x 5.550 | 2.775
Annotation layer symbol checking | 0,5 | x 5.550 | 2.775
Vector and raster data comparison | 4,5 | x 5.550 | 24.975
Comparison with object value and annotation coverage | 3 | x 5.550 | 16.650
Total | 18,5 | x 5.550 | 102.675
8. CONCLUSION
The pathway to GIS heaven passes over the Spatial Database bridge. Establishing the database is not the end; it is the beginning of future work. The first thing to do is to find a strategy. Without an implementation strategy, projects often become technology-driven, and technology takes the upper hand, ahead of the tasks it is to perform (Bernhardsen 1992). In order to publish this data as soon as possible and get the users' feedback, establishing this map library is essential. There are many milestones to be passed. After establishing the map library, the first thing to do is to convert the data to the geodatabase format, managed by a Relational Database Management System. The conversion tools between the new model and the previous model should cover both formats and should have no conflicts with the map production system.
Geonames are another issue to be solved. It is estimated that there will be 1.5 million names in the database, and each one must be checked by semi-automatic processes, since a geodatabase cannot exist without geonames. Correction and completion of the geo-objects' attributes are also crucial for querying tasks. This will be included in the future work of this study.
A very good knowledge base must also be established in the workflow. The operators should easily share their experiences without any restrictions. In order to do this, a web based content management system has to be set up and a course for the operators should be given. A Lessons Learned section is the summary of a good knowledge base. Here are some of the lessons learned from the study:
• Whenever a process is launched, you have to know that something is going wrong.
• Every automation task can lead to new problems that have to be solved manually.
• If there is no knowledge base and communication, each operator produces only errors.
• When you find yourself happy to have finished the quality control, you must know that there are still many errors.
• Do not hesitate to set low tolerances for the coverages, unless you want to tolerate new errors.
• A batch process always fails when you are resting at home.
• A quality control process must be controlled by another quality control process.
• As a manager, be nice to your data workers; otherwise you will have to do the rest of the job by yourself.
• Quality control is a vortex. You cannot get out once you get in.
• When a function does its job properly for a piece of spatial data, know that it might not do so for the rest.
• Topology is a gun that can shoot backward.
• You must test your program on many computers, even if they have the same configuration.
• As a programmer, please test your program on a copy of the data.
• Spatial data is a living monster that enters your daydreams.
In conclusion, every road leads to Rome. The first thing is to start somewhere in the spatial space of ones and zeros. Cost-benefit analyses show the first direction. Direct application of modern approaches is not always right for your requirements and data structure. Give your data a chance before moving it to the trashcan; there must be much invaluable information in it. The level of knowledge about the spatial data itself determines the level of your advanced research.
9. REFERENCES
Arctur, D., Zeiler, M. (2004), "Designing Geodatabases: Case Studies in GIS Data
Modeling", Esri Press, USA.
Aslan, S., (2003), “Settlement and Road Generalization in Topographic Maps”, Master
Thesis, Yildiz Technical University, Istanbul. (In Turkish)
Beconyte, G., Govorov, M., (2005), "In Search for Models of Cartographic Representation (Language Oriented Approach)", ICA - The 22nd International Cartographic Conference, A Coruna, Spain.
Bernhardsen, T., (1992), “Geographical Information Systems”, VIAK IT and Norwegian
Mapping Authority.
Bildirici, I.O., Ucar, D., (1996), "Geographic Information Systems and Generalization", 6. Harita Kurultayi, Ankara, Türkiye. (In Turkish)
Buckley, A., Frye, C., and Buttenfield, B., (2005), “An Information Model for Maps:
Towards Cartographic Production From GIS Databases”, ICA 2005, Spain.
Burrough, P.A., (1992), “Are GIS Data Structures Too Simple Minded?” Computers &
Geosciences, 18/4, pp 395-400.
Dalkıran, H.P. (2005), “Geographical Database Models and an Application of A
Topological and Seamless 1:25 000 Scale Geographical Data Library For The Purpose Of
Cartographical Generalization”, Master Thesis, Selcuk University, Konya, Türkiye. (In Turkish)
GCM, (2003) “Data Dictionary of 1:25 000 Scale Topographical Maps”, General
Command of Mapping, Türkiye.
Goodchild, M.F., (1992), “Geographical Data Modeling”, Computers & Geosciences, 18/4,
pp 401-408.
Hadzilacos, T., and Tryfona, N., (1996), "Logical Data Modeling for Geographical Applications", International Journal of Geographical Information Systems, 10/2, pp 179-203.
Konstantinos, S., Lysandros, T., (2005), "Structure and Development of a Knowledge Base for Cartographic Composition", ICA - The 22nd International Cartographic Conference, A Coruna, Spain.
Nreiter, N. , (2005), “Redesign of The Swiss National Map Series”, ICA 2005, Spain.
Peuquet, D.J., (1984), “A Conceptual Framework and Comparison of Spatial Data
Models”, Cartographica, 21/4, pp 66-113.