Data Organization
 Basic data organizations
– Linux, Unix, Windows store data in files in a hierarchical file system (tree)
– z/OS uses datasets
– z/TPF uses records organized by record types
 z/TPF provides two kinds of record types
– Fixed records
• Static
• “files” that always exist
– Pool records
• Dynamic
• “files” that are created as needed
 Record types form basis of most z/TPF databases
– Used directly by application
• Traditional z/TPF database
– Forms basis of other data services
• z/TPFDF
• z/TPFCS (collection support)
• Globals
• File systems
Fixed Records
 Fixed records are a static repository of data
– Static means the fixed records are predefined
• Always available - can’t be created or deleted by application
• Predefined does not mean data in fixed records is valid
• Responsibility of application to initialize data
– Actual data in fixed records can change frequently
 Single fixed record
– Stored as a block of data on disk
– Must be one of three sizes
• Small: 381 bytes
• Large: 1055 bytes
• 4K: 4095 bytes
 Fixed records are defined in sets called “Fixed Record Types”
– Defined through System Initialization Process (SIP)
Fixed Record Type: A set of fixed records
 Named set of logically related fixed records
– Fixed record type name: an 8-character symbolic name
– Normally used for a specific application
– Example: “#CY2KT ” is an IBM-required fixed record type that manages pool sections
 Predefined number of records
– Each record identified by ordinal from 0-n
– Application determines how to store data across the records
– Example:
• #CY2KT has 48 ordinals from 0-47
• Each fixed record contains data for one pool section
 Each record has same characteristics
– Size
– Duplication
 Can have different characteristics
– Device type, uniqueness, etc.
[Figure: the #CY2KT fixed record type drawn as a row of records; the numbers 0, 1, 2, ... 47 are the ordinals of the fixed records]
Accessing fixed records
[Figure: flow between the application and the z/TPF Control Program (CP), which uses the FARF tables and the find and file service routines]
 Use the fixed record type name and ordinal to retrieve a FARF* address
– face_facs(): the application passes the fixed record type name and ordinal; the CP returns the FARF address
 Find (read) the record using the FARF address; z/TPF uses the FARF address to find the physical address
– Subset of find APIs: findc(), finwc(), finhc(), fiwhc()
 File (write) the record
– Subset of file APIs: filnc(), filec(), filuc()
* File Address Reference Format (FARF) Address: Symbolic address of a record
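As a rough illustration of this flow, the sketch below strings the three steps together in C. The API names come from this foil; the parameter lists and types are illustrative assumptions only, not the real z/TPF prototypes.

/* Sketch only -- assumed shapes for the calls named on this foil. */
typedef unsigned int farf_t;                 /* 4-byte FARF address */

farf_t face_facs(const char *rectype, unsigned int ordinal); /* assumed */
void  *finwc(farf_t addr);                   /* find (read) record - assumed */
int    filec(farf_t addr, void *rec);        /* file (write) record - assumed */

void update_fixed_record(void)
{
    /* 1. Resolve fixed record type name + ordinal to a FARF address. */
    farf_t addr = face_facs("#CY2KT ", 5);

    /* 2. Find (read) the record; the CP maps FARF -> physical address. */
    void *rec = finwc(addr);

    /* ... modify the record in main storage ... */

    /* 3. File (write) the record back to DASD. */
    filec(addr, rec);
}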
Pool Records
 Pool records are a dynamic repository of data
– Dynamic means pool records are dispensed as needed
• Application gets pool records from z/TPF to store data
• Application returns pool records to z/TPF when no longer needed
– Multiple pool records used to store dynamic amounts of data
– z/TPF tracks which pool records are in-use and which are available
 Pool records are predefined
– Defined through System Initialization Process (SIP)
– Defined in sets called pool types
• Pool types defined by 4 characteristics
• Size (Small, Large, 4K)
• Longevity (short and long term)
• Duplication (duplicated, nonduplicated)
• Address type: 4 byte address (non-FARF6) or 8 byte (FARF6)
• All pool records in a pool type have the same characteristics
Pool types and sections
 The basic pool types are:
– Small short-term (SST)
– Small long-term nonduplicated (SLT)
– Small long-term duplicated (SDP)
– Large short-term (LST)
– Large long-term nonduplicated (LLT)
– Large long-term duplicated pool (LDP)
– 4K short-term pool (4ST)
– 4K long-term nonduplicated (4LT)
– 4K long-term duplicated (4DP)
– 4K long-term duplicated FARF6 (4D6)
 Pool section is a subset of a pool type
– Defined by device type (DASD type) where pool records reside
– Represented by an A, B, C, or D suffix on pool type
• Example: LLTB is Large long term nonduplicated pools on
device type B
Short term vs. long term pool records
 Short-term pool records
– Intended for short-term data storage: a few seconds or minutes
– z/TPF automatically reuses short term pools after a specific
period of time (recycle time)
• Time period defined by customer
• Unique time period defined for each short term pool section
• Applications exceeding this time period risk data corruption
– Example: Storing temporary data during transaction processing
 Long-term pool records
– Intended for long-term data storage: a few days to a few years
(indefinite)
– z/TPF only reuses long term pool records when:
• The application explicitly returns the pool record to z/TPF
• z/TPF determines the pool record is not being used (a form of
garbage collection)
Accessing pool records
[Figure: flow between the application and the z/TPF Control Program (CP), which uses the pool management and record ID tables and the find and file service routines]
 Use a record ID* to get a new pool record for a specific pool type
– getfc(): the CP dispenses an available pool record and returns its FARF address
 Find (read) the record using the FARF address; z/TPF uses the FARF address to find the physical address
– Subset of find APIs: findc(), finwc(), finhc(), fiwhc()
 File (write) the record
– Subset of file APIs: filnc(), filec(), filuc()
 Return the pool record to the system
– relfc()
* Record ID: 2-byte symbolic ID used to ensure database integrity and define record attributes, including pool type
Horizontal Record Allocation
 Fixed and pool records are spread evenly across all modules
 Ensures I/O spread evenly across all volumes
 z/TPF reads from either module and writes to both modules
[Figure: Example z/TPF subsystem with 6 modules. Prime modules PK0001, PK0003, and PK0005 hold a fixed record type with 20 records and a pool type with 24 records, striped evenly (ordinals 0, 3, 6, ... on the first module; 1, 4, 7, ... on the second; 2, 5, 8, ... on the third); duplicate modules PK0002, PK0004, and PK0006 mirror them. Record layout is automatically determined during SIP.]
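A minimal model of the striping drawn above (author's illustration, not z/TPF code): consecutive ordinals land on successive prime modules, so I/O is spread evenly.

#include <stdio.h>

/* Model of horizontal allocation: ordinal N of a record type lands on
 * prime module N mod 3 and is mirrored on the matching duplicate. */
int main(void)
{
    const char *prime[] = { "PK0001", "PK0003", "PK0005" };

    for (int ordinal = 0; ordinal < 12; ordinal++)
        printf("ordinal %2d -> %s (mirrored on its duplicate)\n",
               ordinal, prime[ordinal % 3]);
    return 0;
}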
Vertical Record Allocation
 Fixed record allocated on each module in sequence
 Only used for specific IBM records
 Data requires sequential ordinals on the same module
 Keypoints, transaction logs, etc.
[Figure: Example z/TPF subsystem with 6 modules. A vertical fixed record type with 24 ordinals keeps runs of sequential ordinals (0-3, 4-7, and so on) together on one prime module (PK0001, PK0003, PK0005) rather than striping them one by one; duplicate modules PK0002, PK0004, and PK0006 mirror the primes. Record layout is automatically determined during SIP.]
Multiple Database Function (MDBF)
 Subsystems
– Each subsystem is a unique set of modules (disks)
– Physically separate sets of data (and programs)
• Unique set of fixed and pool records
• Unique set of programs
– Primary subsystem is the Basic Subsystem (BSS)
 Subsystem Users (SSUs)
– Logical set of data under a subsystem
• Shared and unique fixed records
• Shared set of pool records
• Shared set of programs (same program base)
– Minimum of one subsystem user
 Helps manage multiple airlines, hotels, etc. on a single z/TPF
system
Agenda
 Data Organization
 Traditional z/TPF Database 
 z/TPF Database Facility (z/TPFDF)
 z/TPF Collection Support (z/TPFCS)
 Globals
 File Systems
 General Data Sets & General Files
 z/TPF Application Requester (z/TPFAR)
 Utilities
 System Tables and Additional Details
Traditional z/TPF Database Design
Fixed and pool records
 Hierarchical or network database (non-relational)
 Built using combination of fixed records and pool records
 Fixed record type is top level index
– Used to find “start” or “entry point” of database
• Fixed record type name hard coded in application
• Input data used to find a specific ordinal in that fixed record type
– Fixed records contain
• Entries that point to pool records
• Application data
 Pool records
– Chained to other records using FARF address
– Overflow for indexes
• May contain additional index entries that can’t fit in fixed record
– Data records
• Contains detailed data for database
• May have overflows if data doesn’t fit in single pool record
Traditional z/TPF database
Example: Simple passenger database
[Figure: Fixed records in record type #FLT1234 form an index (Adams, Anderson, Bailey, Blackwell, Brooks, Brown, Chan, ..., Zhen); each index entry points to pool records holding that passenger's details, with overflow pool records chained where needed (e.g., Bailey)]
 Fixed records in record type #FLT1234 are index records
– Indexed by 1st character of passenger’s last name
– 26 fixed records in the fixed record type
 Each index entry contains the FARF address of pool records containing passenger details
Traditional z/TPF Database / Application-Defined Database
 Structure defined by needs of application and type of data
– Structure can be almost anything (tree, multi-level index, network, web, etc.)
– Usually defined to efficiently find specific subset of detailed data
• For example, a single passenger record on a given flight
 Application contains all code for defining and managing the structure (bad)
– Index method is part of the application code
– Managing overflow when the number of entries in an index is too large is part of the application
• For example, if an index can only hold 100 entries, what should the application do when the 101st entry is added?
• Add a pool record as an overflow to the index record, or
• Return an error that the index record is full
– Managing pool records that contain detailed data
• Adding new pool records to the index for new entries
• Deleting pool records from the index when an entry is deleted
 Changing application code risks breaking the index and overflow code (see the sketch below)
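A sketch (author's illustration) of the kind of index logic this foil says must live inside the application: mapping a last name to one of the 26 ordinals, plus the overflow decision when an index record fills. MAX_ENTRIES is a hypothetical capacity.

#include <ctype.h>

#define MAX_ENTRIES 100   /* hypothetical capacity of one index record */
#define INDEX_FULL   -1

/* First letter of the last name selects the ordinal (A=0 ... Z=25). */
int index_ordinal(const char *last_name)
{
    return toupper((unsigned char)last_name[0]) - 'A';
}

/* The application itself must decide what happens on the 101st entry. */
int add_index_entry(int used_entries)
{
    if (used_entries < MAX_ENTRIES)
        return 0;          /* fits in the fixed index record             */
    return INDEX_FULL;     /* else chain an overflow pool record or fail */
}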
Agenda
 Data Organization
 Traditional z/TPF Database
 z/TPF Database Facility (z/TPFDF) 
 z/TPF Collection Support (z/TPFCS)
 Globals
 File Systems
 General Data Sets & General Files
 z/TPF Application Requester (z/TPFAR)
 Utilities
 System Tables and Additional Details
z/TPF Database Facility (z/TPFDF)
 Database facility for applications that run on the z/TPF system
– Separate set of documentation and APIs
– Licensed separately from the z/TPF system
 Hierarchical database (non-relational)
 Increases productivity of the application programmer by providing:
– Access methods for database organization
– APIs to access data from programming languages
– Routines for database access and manipulation
– Utilities for database maintenance and testing
 The management of data structures is centralized and can be performed by
a database administrator (DBA)
 The DBA concentrates on design of data structures (e.g. performance,
access)
 The application programmer concentrates on application processing
 Physical database layout, indexes, sorting, merging, etc. is managed or
provided by z/TPFDF – application just has to read & write basic data
Basic z/TPFDF Structures
 File
– Consists of one or more subfiles where each subfile is a logical grouping of
data
– Subfiles are used to distribute the data for access performance, concurrency
and isolation in case of errors (broken chains)
– All records in the file have the same file ID
 Subfile
– A subfile is a prime block (record) with zero or more chain blocks (records)
– z/TPFDF manages all chaining and overflows
– Subfile consists of one or more logical records (LRECs)
 LREC
– A logical group of data within a subfile that is identified by a 1-byte LREC ID
• Different LREC types: fixed size, variable size, or Large Logical Record
(LLR)
• Can be as simple or as complex as needed
– Within a subfile, multiple LRECs may have same LREC ID
• Represent same type of LREC
• Example: One LREC for each of a passenger’s phone numbers (cell, work,
etc.)
– z/TPFDF manages inserting, sorting, deleting LRECs
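As a hedged sketch of what a fixed-size LREC might look like in storage: the foil only guarantees the 1-byte LREC ID; the size field, field order, and phone example below are illustrative assumptions (real layouts are defined per file by the DBA).

#include <stdint.h>

/* Illustrative only: a fixed-size LREC for one phone number.  The
 * 1-byte LREC ID is from the foil; everything else is assumed. */
struct lrec_phone {
    uint16_t size;      /* assumed: total length of this LREC       */
    uint8_t  id;        /* 1-byte LREC ID, e.g. 0x42 = phone number */
    char     phone[16]; /* application data                         */
};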
z/TPFDF Structures
[Figure: a file made up of subfiles; each subfile is a prime block with chained overflow blocks containing LRECs]
z/TPFDF subfile example
 File: Passenger records for all passengers in system
 Subfile: Passenger records for a single passenger
 LRECs: Several different LRECs to hold contact information, travel information, etc.
– LRECs typically sorted by LREC ID (primary key)
– Additional keys can be used

Contents of a single z/TPFDF subfile:

  LREC ID   Description of LREC contents
  -------   ----------------------------
  20        Passenger name
  40        Passenger home address
  42        Phone number
  42        Phone number
  60        Flight info
  80        Travel preferences
Agenda
 Data Organization
 Traditional z/TPF Database
 z/TPF Database Facility (z/TPFDF)
 z/TPF Collection Support (z/TPFCS)
 Globals 
 File Systems
 General Data Sets & General Files
 z/TPF Application Requester (z/TPFAR)
 Utilities
 System Tables and Additional Details
Globals
 Efficient mechanism for accessing data
– Data stored on disk for persistence across IPLs
– Application uses working copy of data in main storage
– Very efficient access by avoiding delays from I/O operations
 Typical uses
– Passing data between application programs to avoid I/O operations
– Providing efficient access to very frequently accessed
information, such as fare data in an airline application
 Two types of globals
– Format-1 Globals
– Format-2 Globals
Globals features - common characteristics
 Format-1 and format-2 globals share common characteristics
 Size and attributes are defined through a specific process
– Not application controlled
 They can be defined as:
– SSU unique or shared
– I-stream unique or shared
– Processor unique or shared
– Keypointable
• Most recent copy of global is in main storage
• Written (keypointed) to DASD on time-initiated and on-demand
basis
• Typically data that has high-update rates but can afford to
lose some updates
– Synchronizable
• Most recent copy of global is on DASD
• Read from DASD when locked (sync lock)
• Written to DASD when unlocked (sync unlock)
• Provides data consistency across i-streams and processors
 Reside in either protected or unprotected storage
 Automatically loaded into main storage
Format-1 Globals
 Format-1 globals are an older type of globals
– Defined through SIP
– Initialized through Pilot Tapes
 FILKW, GLMOD, GLOBZ, and GLOUC macros let applications access, modify, and file format-1 globals
 Format-1 Globals
– Contained in fixed locations in main storage. These locations are
identified as format-1 global areas.
– Stored on DASD in format-1 global records
• These records can have various attributes (such as
keypointability, SSU unique, and I-stream unique)
• Limited to 1055 bytes in size
Format-1 Globals (continued)
 A format-1 global record can be subdivided into format-1 global fields
– Each field is 1 to 256 bytes
– Up to 256 fields per global record
– Individually addressable using the GLOBZ macro
– Shares attributes of the global record
– Managed through a global directory containing pointers to individual
fields
– Global synchronization is necessary to keep global data current
among active I-streams in a processor and across processors in a
loosely coupled complex after modifications have been made to
format-1 global fields and records.
– z/TPF system does not perform global synchronization automatically
– SYNCC macro is provided for applications to perform
synchronization
– Most recent copy of global kept on DASD
Format-2 Globals
 Format-2 globals are the newer type of globals
– Independent of format-1 globals
– No offline definition process
– Data updates do not require the use of STC, pilot tapes, or online
data loader.
– Enhanced functionality and usability compared to format-1 globals
– No size limitations
• Unlike format-1 globals, which must reside in 1055-byte records
– No maximum number or maximum size of keypointable records
– No maximum number or maximum size of synchronizable
records
 Characteristics unique to format-2 globals:
– Global records only - no global fields
– Defined, initialized, and managed through z/TPF commands
• Define a global: ZGLBL GLOBAL DEFINE
• Initialize a global: ZGLBL GLOBAL INIT
– Can be system-controlled or user-controlled
– User exits allow for easy customization and implementation of user-controlled globals
Format-2 Globals (continued)
 Format-2 globals and main storage
– Reside in dynamically allocated areas of system storage
– Reside either below the 2-GB bar (31-bit globals) or above the 2-GB bar
(64-bit globals)
– Loaded to main storage in restart, cycle up, or on demand
 They are accessible by using the GLOBLC macro or related tpf_gl* functions (for
example, tpf_glOpen)
 Example application use:
// Open the global
descriptor = tpf_glOpen("_TPFAUX1", TPF_GLRDWR, &myGlob);
// Copy a string to the global storage
strcpy(myGlob, "THIS IS JUST A TEST");
// Close the global, writing back only the updated part
rc = tpf_glClose(descriptor, (enum t_glopt)(TPF_GLUPD + TPF_GLPART),
                 offset, length);
Agenda
 Data Organization
 Traditional z/TPF Database
 z/TPF Database Facility (z/TPFDF)
 z/TPF Collection Support (z/TPFCS)
 Globals
 File Systems 
 General Data Sets & General Files
 z/TPF Application Requester (z/TPFAR)
 Utilities
 System Tables and Additional Details
File Systems
 z/TPF provides support for the following file systems
– z/TPF collection support file system (TFS)
– Fixed file system (FFS)
– Pool file system (PFS)
– Memory file system (MFS)
File Systems (continued)
 Hierarchical (tree) file system
– Consists of directories and files
– TFS is always defined as the root file system
– Other file systems are mounted under directories anywhere else in the tree
 Application programs use standard C and C++ file system APIs
 All z/TPF file systems are partially compliant with the Portable Operating System Interface for Computer Environments (POSIX) standard
– Deviations are documented in the z/TPF C/C++ Language Support User’s Guide
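Because the APIs are (largely) POSIX-compliant, ordinary C stdio code works unchanged; a minimal example, with a hypothetical path:

#include <stdio.h>

int main(void)
{
    /* The path is hypothetical; it resolves within the TFS-rooted tree
     * (or whatever file system is mounted at that directory). */
    FILE *fp = fopen("/tmp/example.txt", "w");
    if (fp == NULL)
        return 1;

    fputs("written through the standard C file APIs\n", fp);
    fclose(fp);
    return 0;
}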
File system types
 z/TPF collection support file system (TFS)
– Shared across all processors in a loosely-coupled complex
– Uses transaction semantics (commit/rollback)
– Automatically mounted by z/TPF
– Limited access (read-only) in 1052 state with pools not active
– Full access with pools active
– Subsystem unique
 Fixed file system (FFS)
– Processor and subsystem unique file system
• May only be mounted on one processor in read/write mode
• May be mounted on multiple processors in read-only mode
– Fixed records are used to store file data
– Multiple separate FFS’s may be defined
– Does not use transaction scopes
– File system size limited by number of fixed records allocated
– Full access in 1052 state
File system types
 Pool file system (PFS)
– Processor unique file system
• May only be mounted on one processor in read/write mode
• May be mounted on multiple processors in read-only mode
– Multiple separate PFS’s may be defined
– Fixed record types only used as upper level indexes (inodes)
– Pool records used to store file data
– Does not use transaction scopes
– Accessible only when pools are active
 Memory file system (MFS)
– Processor and subsystem unique file system
• Only accessible on processor where it is defined
– 1-MB frames used as storage mechanism
• Files are lost across an IPL or filesystem unmount
• High-performance file system where data loss is acceptable on a
catastrophic error
– Limited by amount of main storage (system heap) allocated
– Multiple separate MFS’s may be defined
– Accessible in any system state
Agenda
 Data Organization
 Traditional z/TPF Database
 z/TPF Database Facility (z/TPFDF)
 z/TPF Collection Support (z/TPFCS)
 Globals
 File Systems
 General Data Sets & General Files 
 z/TPF Application Requester (z/TPFAR)
 Utilities
 System Tables and Additional Details
General Data Sets and General Files
 General Data Sets (GDS)
– z/OS BSAM or QSAM data sets
– Identified by a data set name
– Allocated sequentially on each module and may span modules
– Typically used to share data between z/TPF and z/OS
• Example: Load airline fare data generated by z/OS to z/TPF
• Example: Archive old passenger records from z/TPF to z/OS
 General Files (GF)
– Sequential file using z/TPF standard record sizes
– Designed to be used primarily by z/TPF
Agenda
 Data Organization
 Traditional z/TPF Database
 z/TPF Database Facility (z/TPFDF)
 z/TPF Collection Support (z/TPFCS)
 Globals
 File Systems
 General Data Sets & General Files
 z/TPF Application Requester (z/TPFAR) 
 Utilities
 System Tables and Additional Details
z/TPF Application Requester (z/TPFAR)
 Middleware that provides a method for z/TPF applications to access
data on any remote database server that is compliant with Distributed
Relational Architecture (DRDA)
– Typically used with IBM DB2 on z/OS
 Use z/TPFAR with a z/TPF application to directly query and update
data residing on a remote IBM DB2 system
 Structured Query Language (SQL) provides the interface between the
z/TPF application and DRDA-compliant database servers
– SQL commands are coded directly into the z/TPF application
– DB2 precompiler on z/OS or Linux translates SQL into code
representing the request
– Request is sent to application server (AS) on remote system
– AS executes request and returns result to z/TPF
 Used to move data between native z/TPF databases and a remote database
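A hedged sketch of what "SQL coded directly into the application" looks like: standard embedded SQL that the DB2 precompiler translates before compilation. The table and host-variable names below are hypothetical.

/* Embedded SQL in a z/TPF C program (illustrative). */
EXEC SQL INCLUDE SQLCA;

EXEC SQL BEGIN DECLARE SECTION;
char flight_no[7];
long seats_sold;
EXEC SQL END DECLARE SECTION;

void query_remote_db2(void)
{
    /* The precompiler turns this statement into a request that is sent
     * to the application server (AS) on the remote system; the AS runs
     * it and returns the result to z/TPF. */
    EXEC SQL SELECT SEATS_SOLD INTO :seats_sold
             FROM   FLIGHTS
             WHERE  FLIGHT_NO = :flight_no;
}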
Agenda
 Data Organization
 Traditional z/TPF Database
 z/TPF Database Facility (z/TPFDF)
 z/TPF Collection Support (z/TPFCS)
 Globals
 File Systems
 General Data Sets & General Files
 z/TPF Application Requester (z/TPFAR)
 Utilities 
 System Tables and Additional Details
Data Organization Tools and Utilities
 File Capture and Restore
– Ensures against data loss by copying online data to tape and, when
necessary, copying the data back to the online module
– Mostly replaced by DASD copy services (copies performed directly between DASD control units)
 Database Reorganization (DBR)
– Used to move or re-stripe (reorganize) records when adding new DASD
modules or reformatting old areas
– Offline version that requires system outage – ZDBRO / ZDBSO commands
– Online version (ODBR) that does not require an outage – ZODBR
command
 File Copy
– Create new prime or dupe module and bring online to z/TPF
• Prime or dupe module is offline - Copy data from online module of
prime/dupe set
• Move online module to new module - Usually done to move DASD control
units
 Online Formatter
– Format new areas on DASD or old areas no longer in use
Data Organization Tools and Utilities (continued)
 Pool Directory Generation and Reconfiguration
– Initialize the online pool database (all pool records are available)
– Reconfigures online pool database by
• Adding new pool records (pool expansion)
• Deleting old pool records (pool deactivation)
 Pool Directory Maintenance (PDU)
– Pool records returned by applications are placed in a staging area waiting for the next PDU run
– PDU processes the returned pools in the staging area and makes them available for dispensing
 File Recoup
– Recover long-term pool storage
– Validates database structure (chains and record IDs)
– Finds “lost” pool records (no longer referenced in the database)
Running Data Organization Tools and Utilities
 Most of these utilities are run on a regular basis
– Initialization utilities generally run only once, to generate the system
– Very rarely rerun, only when the system definition requires change
 Utilities are typically run during periods of low system activity
– In a loosely coupled environment, usually run on only one processor
– Sensitive to subsystems and subsystem users
File Recoup
 Occasionally, application programs fail to return long-term pool addresses to the system, in which case those records are lost. Left unchecked, this would result in pool depletion.
 Additionally, whenever the z/TPF system experiences a catastrophic error, some pool directories may be only partially processed.
 After a system restart, in order to guarantee data integrity, the z/TPF system discards a predefined number of long-term pool addresses, which are never returned.
 The file recoup utility recovers usable long-term pool addresses
– Lost addresses are made available to the system for subsequent reuse
– The file recoup utility interfaces closely with the applications to determine which long-term pool addresses are validly in use: garbage collection via “chain chasing”
File Recoup (continued)
 Long-term pool addresses are lost in the following ways:
– Errors in application programs: programs fail to release file addresses when they should, or release file addresses when they should not.
– Overt action by z/TPF. Following a catastrophic error, pool restart
sets a predetermined number of records to in-use status. This
ensures that the same long-term pool address is not dispensed twice.
 In the recoup chain-chase phase, file recoup accesses every fixed record and main storage table that can reference a pool record or a chain of pool records.
 File recoup reconciles the status of every pool record with what it found during the chain chase. Pool records listed as “in-use” but not found are considered “lost” and eligible to be returned to z/TPF.
 Navigates all databases and validates the database structures
– Validates chains and record IDs
– Does not validate application data contained in the database
 File recoup produces error summaries, record ID reports, and
broken chain reports used to find and diagnose database errors.
Agenda
 Data Organization
 Traditional z/TPF Database
 z/TPF Database Facility (z/TPFDF)
 z/TPF Collection Support (z/TPFCS)
 Globals
 File Systems
 General Data Sets & General Files
 z/TPF Application Requester (z/TPFAR)
 Utilities
 System Tables and Additional Details 
File Address Compute Table (FCTB)
 One for each MDBF subsystem
 Contains system configuration information related to all fixed and pool
records for this MDBF subsystem
– For each set of fixed and pool records in the system
• FARF address range
• Number of ordinals
• Physical record location: Module-Cylinder-Head-Record
(MCHR)
• Record size
– Additionally for fixed records
• Record type name
• Uniqueness
 Used to map between:
– Fixed record type name/ordinal and FARF address
– FARF address and physical address
– Pool ordinals and FARF address
 FCTB is generated offline
 Loaded through ZFCTB or ZTPLD commands
Record IDs
 Record IDs
– Used to ensure and validate database integrity. Every record in the
system, whether fixed or pool, is associated with a two-byte
record ID.
 Record ID validation
– A record ID is placed in each fixed record when the database is initialized, and in each pool record by application programs as records are acquired
– This record ID is given as a parameter within a file address
reference word (FARW) of the entry control block (ECB) when a
record is accessed.
– Database integrity is ensured and validated because the record ID
in a record is compared to the record ID parameter in the
FARW. If the comparison fails, the access request is not valid.
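The rule reduces to a two-byte comparison; a minimal model (illustration only, not the actual control program code):

#include <stdbool.h>
#include <stdint.h>

/* The ID stored in the record must match the ID the application put in
 * the FARW, or z/TPF rejects the find/file request. */
bool record_id_valid(uint16_t id_in_record, uint16_t id_in_farw)
{
    return id_in_record == id_in_farw;
}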
Record ID Attribute Table (RIAT)
 RIAT
– Holds the information about record ID’s and their attributes
– The RIAT is organized and accessed by record ID
 RIAT contains information for both fixed records and pool records, such as whether writes are retentive, DASD fast write, or cache fast write, as well as:
– Logging Characteristics: A record is to be written to a real-time tape
whenever a file type macro is processed for a record with this ID.
– Exception Recording Characteristics: A record is to be written to a
real-time tape if a file type macro has been processed after the record
is captured by the file capture and restore utility.
– User Exit Characteristics: Permit an application to dynamically
modify the data record attributes that are assigned to a record ID at
system generation time.
– DASD Record Caching Candidacy Characteristics: Specify
whether or not a record is a candidate for record caching in the
DASD control unit.
RIAT (continued)
– Pool type attributes: Up to 10 pool attributes can be set for this record ID.
• Each attribute defines what pool type or pool section to dispense
when requesting a pool type for this record ID.
– Virtual File Access (VFA) Candidacy Characteristics: Specify
whether a record is a VFA candidate.
– If the record is a VFA candidate, it is also specified whether the record is a candidate for delay filing or for immediate filing and synchronization between processors.
– Lock Maintenance Characteristics: Specify whether the lock is held
only in the record hold table (lock-in-processor) or in both the record
hold table and the external lock facility (XLF).
Pool Directories (diagram next sheet)
 Pool records are managed through pool directories
– Set of pool directories for each pool section
– Each pool directory contains status (available/in-use) for up to
8000 pool records
– Example: pool directories for pool sections LSTA, 4STB, etc.
 When file storage is requested, the service routine scans the
directory of the relevant pool type for an available pool record.
 When an available pool record (i.e., bit) is found
– The position of the bit in the directory represents a specific pool
record and ordinal number
– The record is marked in-use (not available) by setting the bit to zero
– FARF address for the pool record is returned to the application
– This is known as dispensing a pool record
 FARF address is converted to a physical address by the find and file
macro routines
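A minimal model of dispensing (illustration only): scan the directory bitmap for a 1 bit (available), flip it to 0 (in-use), and the bit position yields the ordinal from which the FARF address is derived.

#include <stdint.h>

#define DIR_BITS 8000            /* pool records tracked per directory */

/* Returns the ordinal of a freshly dispensed record, or -1 if the
 * directory has no available records. */
int dispense_ordinal(uint8_t dir[DIR_BITS / 8])
{
    for (int i = 0; i < DIR_BITS; i++) {
        uint8_t mask = (uint8_t)(1u << (i % 8));
        if (dir[i / 8] & mask) {     /* 1 = available           */
            dir[i / 8] &= (uint8_t)~mask; /* mark in-use        */
            return i;                /* ordinal -> FARF address */
        }
    }
    return -1;
}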
Pool Directory
[Figure: a pool directory bitmap; each bit position represents one pool record ordinal, 1 = available, 0 = in-use]
Get File Storage and Release File Storage
 From an application’s point of view, the life of a pool record is as
follows:
– A pool record address is obtained via a get file macro
– The application uses the pool record to store data
• Duration may be short (seconds) or long (indefinite)
– The pool record is returned via a release file macro
 Application programs use a get file storage macro to request (obtain) a pool record in any one of the standard z/TPF record sizes.
 Application programs use the return file pool address request macro to release pool records (return them to the z/TPF system). At that point, the application program has fulfilled its obligation to return any pool file storage it no longer requires. The sketch below strings these steps together.
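getfc(), filec(), and relfc() are the API names from these foils; the parameter lists here are illustrative assumptions, not the real z/TPF prototypes.

typedef unsigned int farf_t;

/* Assumed shapes for the calls named on these foils: */
farf_t getfc(unsigned short record_id);   /* get file storage        */
int    filec(farf_t addr, void *rec);     /* file (write) the record */
int    relfc(farf_t addr);                /* release the pool record */

void pool_record_lifecycle(void *rec)
{
    /* Obtain: the record ID (hypothetical value) selects the pool type. */
    farf_t addr = getfc(0x1234);

    /* Use: build the record in main storage, then file it.  Duration
     * may be seconds (short-term) or years (long-term).              */
    filec(addr, rec);

    /* Return: the application's obligation once the data is no
     * longer needed.                                                 */
    relfc(addr);
}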
Pool Management
 Dispensing of a pool record can use ratio dispensing or pool fallback – both are meant to be transparent to the application
– Ratio dispensing: Dispense among pool sections of the same pool type
• Example: Dispense evenly between LDPA and LDPB
– Pool fallback: Dispense another pool type if no pools are available
• Example: 4D6 can fall back to 4DP
 The return file service routine is responsible for resetting the appropriate bit within a pool directory record to indicate availability
– Short-term pool records:
• If the directory is still in use, the bit is reset immediately and the record is immediately available
• If the directory is no longer in use, the return is ignored; the next time the directory is used, all of its pool records are available, because short-term pool recycle times define the minimum reuse time
– Long-term pool records:
• The setting of the bit is deferred and handled through the PDU process
 Only pool routines and pool utilities (PDU, recoup, etc.) manipulate the pool record bits in the pool directories
Record ID validation for pool records
 Get File Initialization (GFI)
– When pool record is dispensed through getfc() API, z/TPF files the record
with x’FC37’ record ID
– Shows record in initial state, was dispensed, but not used by application
 Release ID Verification (RVI)
– When a pool record is returned through tpf_relfc_ext() API, z/TPF reads and
validates the record ID matches the application provided record ID
– If verification fails, pool record is not returned
 Multiple Release Detection (MRD)
– When a pool record is returned (through any relfc() API), z/TPF files the record with an x’FC38’ record ID
– If the record already has an x’FC38’ record ID, the pool record is not returned because it was already returned
Unique Records and Shared Records
 Fixed record types can be specified as shared or unique based on:
– Subsystem user
– Processor
– I-stream
 Unique means those specific fixed records are accessed only by a
specific subsystem user, processor, or I-stream
 Shared means fixed records are accessed by any subsystem user,
processor, or I-stream
 Shared or unique characteristics depends on application requirements
and system configuration
– Examples:
• Separate configuration options for each subsystem user
• System statistics for each processor
 Pool records are always shared (no uniqueness) within a
subsystem
Unique Records
 Defining Unique Records
– For a given fixed record type, a separate set of fixed records is defined for each unique item (e.g., each processor for a processor unique type)
– If the fixed record type is not needed across all unique items, fixed
records do not have to be defined for it
– Example:
• A Processor unique record type has separate fixed records
defined for processors A and B, but not for processor C
• Processor A and B access their own, separate fixed records,
using the same fixed record type name and ordinals
 Unique across multiple dimensions
– Define fixed records using any combination of subsystem user,
processor, and I-stream
– Example:
• Four sets of fixed records that are subsystem user and
processor unique: (SSU 1, Proc A), (SSU 1, Proc B), (SSU 2,
Proc A), (SSU 2, Proc B)
Retaining Module Records in Main Storage
 There is a time delay when accessing data from external storage (e.g., disk), during which time an Entry is waiting (I/O wait). One performance objective is to minimize this waiting time.
 There are two mechanisms that eliminate or minimize I/O accesses:
– Data may be put into globals (format-1 or format-2 globals) instead of fixed or pool records
– Virtual file access (VFA) is the z/TPF system software caching technique for temporarily holding module records in main storage to improve the access time of frequently referenced records by reducing the number of physical I/O operations
• Any record type (identified by a record ID), fixed or pool, can
be identified as a VFA candidate
• After initial read/write from DASD, record is in VFA and future
read/write requests can use VFA copy of record
• As with all caching techniques, when the cache becomes full,
it’s necessary to remove some old records in order to make
room for new records
File Address Reference Format (FARF)
 FARF
– The method used by the z/TPF system to symbolically address
fixed and pool records.
– The 4-byte file address defined by FARF is a basic element of data
organization in the z/TPF system.
– Allow the z/TPF system the flexibility of adapting to different disk
geometries and capacities without changing the FARF addresses
used in the databases.
– The formats of these symbolic file addresses have evolved over the
years as the size of DASD modules has increased.
 There are four file address reference formats: FARF3, FARF4, FARF5,
and FARF6.
 File address reference format 3 (FARF3): A format from the past and,
because of inherent characteristics, limited in its addressing capacity
and flexibility.
FARF (continued)
 File address reference format 4 (FARF4): Uses almost all of the
addressing capacity allowed by a 4-byte symbolic file address. Used
primarily as a migration stage from FARF3 to FARF5.
 File address reference format 5 (FARF5): Uses all of the addressing capacity in a 4-byte symbolic file address.
 File address reference format 6 (FARF6): An 8-byte symbolic file address.
 Only two address formats from the FARF3, FARF4, and FARF5 set
can be generated in a z/TPF system at the same time: either FARF3
and FARF4, or FARF4 and FARF5. Moving between migration stages
(from FARF3/FARF4 to FARF4/FARF5) in an online system requires that
you load a new FACE table. FARF6 addresses are independent from the
other FARF address formats and can be generated at any time.
 Now for a closer look at each format…
FARF3 Specifics (visual on next foil)
 FARF3: File address reference format 3
– 32 bits long (4 bytes × 8 bits per byte)
– Limited in its addressing capacity
– Uses bits in the address to represent characteristics of the address
– Provides for 2**28 (268,435,456) fixed file addresses
– A fixed record address is made up of control bits, a band number, and an ordinal number
– The band number is a unique random value between 0 and 4095 that is associated with a fixed record type
– Provides for 2**26 (67,108,864) pool addresses per pool type
– A pool record address is made up of control bits and an ordinal number
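Given those widths (12-bit band + 16-bit ordinal = 28 bits, leaving 4 control bits), a fixed file address can be pulled apart as below. Only the field widths come from this foil; the exact bit positions are an assumption for illustration.

#include <stdint.h>

/* Assumed layout: [4 control bits][12-bit band][16-bit ordinal].
 * Field widths are from the foil; bit positions are illustrative. */
static uint32_t farf3_band(uint32_t addr)    { return (addr >> 16) & 0x0FFFu; }
static uint32_t farf3_ordinal(uint32_t addr) { return addr & 0xFFFFu; }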
FARF3 - Fixed File and Pool Record Formats
[Figure: FARF3 address layouts. Fixed record: leading bit 0 (fixed record = 0), control bits for the FARF3 indicator, duplication, and size, a 12-bit band, and a 16-bit ordinal. Pool record: leading bit 1 (pool record = 1), control bits for the FARF3 indicator, FARF3 pool indicator, duplication, size, and short term or long term, and a 26-bit ordinal number.]
FARF4 Specifics
 FARF4: File address reference format 4
– 32 bits long (4 bytes × 8 bits per byte)
– Uses almost all of the addressing capacity in a 4-byte symbolic file address
– Primarily intended as a migration stage from FARF3 to FARF5
– Supports a file address capacity of 2**30 (about 1 billion addresses)
– A FARF4 address consists of a universal format type (UFT), a format type indicator (FTI), an ordinal number, and control bits
– UFT and FTI are like the band number in FARF3: unique random values associated with a type of record (pool or fixed)
[Figure: FARF4 address layout for all records: a 6-bit UFT, a variable-size FTI, a variable-size ordinal, and control bits (size and the FARF4 indicator)]
FARF3 & FARF4
[Figure: FARF3 and FARF4 address layouts shown side by side]
FARF5 & FARF6
[Figure: FARF5 and FARF6 address layouts shown side by side]
FARF5 Specifics
 FARF5: File address reference format 5
– 32 bits long (4 bytes × 8 bits per byte)
– Contains the same types of fields as a FARF4 address, except there are no control bits
– Uses all of the addressing capacity in a 4-byte symbolic file address
– Supports an addressing capacity of 2**32 (about 4 billion addresses)
[Figure: FARF5 address layout for all records: a 6-bit UFT, a variable-size FTI, and a variable-size ordinal; no control bits]
FARF6 Specifics
 FARF6: File address reference format 6
– 8-byte symbolic file address (64 bits)
– Used to address 4K long-term duplicated pools only (4D6)
– Provides additional pool address expansion
– Can be used alongside FARF3, FARF4, and FARF5 addresses
– Supports 2**56 (about 72 quadrillion) addresses
[Figure: FARF6 address layout for all records: leading zero bits, a 2-byte UFT, and an FTI/ordinal area; the FTI must be at least 1 byte and can be no larger than 3 bytes, and the ordinal must be at least 2 bytes but no larger than 4 bytes]
Backup Slides
Data Organization - Horizontal Allocation
 All records spread evenly across all DASD
 All DASD defined and dedicated to TPF
– No database sharing: ensures controlled access to all the database DASD without interference from other systems
 Entire database is defined during the system generation process
 Write to both prime and duplicate; read from the least busy
– If a DASD fails, TPF runs with the remaining module; operator intervention is needed to recover the failed module
[Figure: prime modules PR0001-PR0003 hold records 1-24 striped across the three volumes; duplicate modules PR0004-PR0006 are dupes of PR0001-PR0003]
Note: By definition, given random access, there are no hot spots or hot packs.