College of the Redwoods
Redwoods Community College District
February 25, 2008
RCCD Technology Infrastructure Specifications
Report
Fiber-optic Infrastructure, High Speed Data Network
Infrastructure, Telephone Infrastructure, Classroom
Technology and Video Conferencing Infrastructure Technical
Specifications
Revision 6.2
July 20, 1995 - V1.0
Revised:
November 29, 1995 - V1.1
March 29, 1996 - V1.2
April 5, 1996 - V2.0
April 19, 1996 - V2.1
May 30, 1996 - V2.2
June 3, 1996 - V3.0
October 23, 1996 - V3.1
December 23, 1996 - V4.0
January 10, 1997 - V4.0a
December 10, 1998 - V4.1
February 5, 1999 - V4.2
March 15, 2002 - V5.0
February 16, 2005 - V6.0
October 16, 2007 - V6.1
February 25, 2008 - V6.2
Paul W. Agpawa
Manager, Technology Infrastructure
Technical Support Services
Ray Kingsbury
Network Administrator
Technical Support Services
PART 1 - GENERAL
1.00  Intent
1.01  System Brief
1.02  Scope
PART 2 - SYSTEM HARDWARE
2.00  General System Requirements
2.01  Backbone Transport (Fiber Optic)
2.02  Cross Connects (Fiber Optic Patch Panels)
2.03  Horizontal Distribution (Copper UTP)
2.04  Cross Connects (Copper UTP)
2.05  Modular Patch Cords (Copper UTP)
2.06  Information Outlets (Wall Plates)
2.07  Switches
2.08  Fiber Optic Transceivers
2.09  CSU/DSUs, Routers, Bridges, Gateways, and LAN/WAN Links
2.10  Fiber Optic Backbone Switch
2.11  Backbone/Riser (Fiber Optic)
2.12  Other Related Equipment
PART 3 - IMPLEMENTATION
3.00  Horizontal Distribution (Copper)
3.01  Information Outlets
3.02  Operational Parameters
3.03  Test Equipment Specifications
3.04  System Management
3.05  Jurisdiction
3.06  District Data Backbone
3.07  Frame Types and Protocols
3.08  Nodes
3.09  Administration
3.10  District Data Networks
3.11  The Network Administrator
3.12  The Network Name
3.13  Address and Naming Conventions for TCP/IP (IP Addresses/DNS Names)
3.14  Netware 4.x NDS Data Structures
3.15  Internet Access
3.16  Establishing an Internet Server
3.17  The District Network Administrator
PART 4 - SYSTEM LAYOUT DIAGRAMS
4.00  Physical Connection Campus Distribution System Fiber Optic
4.01  Fiber Optic Infrastructure
4.02  Data Logical Network Overview
4.03  Backbone Network Overview
4.04  Forum Theater
4.05  Distance Learning System Layout
4.06  Video Conferencing System Layout
4.07  Telephone System PBX Layout
PART 5 - WAN IMPLEMENTATION
5.00  Intent
5.01  Network Installation at Del Norte/Mendocino Campuses
5.02  WAN Links Over Leased Lines
5.03  Inter-District Internet Access
5.04  Mainframe Access
PART 6 - SYSTEM EXPANSIONS
6.00  Backbone
6.01  Energy Management System
6.02  Video Distribution System
6.03  Distance Education
6.04  Video Teleconferencing
6.05  CENIC
6.06  Telephone System
PART 7 - INFRASTRUCTURE RETROFIT AND CLASSROOM TECHNOLOGY
7.00  Infrastructure Upgrades and Enhancements
7.10  Proposal Overview and Scope
7.12  Smart Classrooms
7.13  Forum Theater
GLOSSARY OF ACRONYMS
PART 1 - GENERAL
1.00  Intent
The intent of this specification is to define the equipment and performance requirements for the data networks, video conferencing, distance learning, and telephone systems of the College of the Redwoods' Eureka, Fort Bragg, Del Norte, Arcata, and Klamath/Trinity campuses. This specification will also provide the framework for long-term scalability and expansion.
1.01  System Brief
The RCCD data network is a single, large integrated system that serves administrative, faculty, staff, and student workstations. Separation is accomplished through group policies. Security is accomplished by zoning, which is currently being implemented. The network is IEEE 802.3 10/100Base-T compliant, and all wireless networks will be 802.11g compliant.
The residential halls network encompasses the two residential buildings. This network serves the dorm students (~160) and is independent of RCCD's network infrastructure. It is connected to the Internet via cable broadband from the Suddenlink ISP, using two cable connections of 3 Mbps each, one per residential hall.
The key elements of RCCD's physical network include horizontal pathways, backbone pathways, work areas, telecommunications rooms, equipment rooms, and entrance facilities. These key elements meet the documentation guidelines and procedures of the TIA-569-A/B specifications. The physical network consists of fiber-optic cabling and Cat 5e copper wiring.
The video conferencing system consists of endpoints utilizing the legacy H.320 protocol for communication and digital video transfer. Connection to all remote sites is accomplished through RCCD's WAN infrastructure. Multipoint videoconferencing is carried out via a PictureTel Montage 570A multipoint conferencing system (MCS) using the legacy H.320 protocol. This system also supports RCCD's distance education program. The capability to link conferences within the CENIC video IP system is provided by a Cisco IP/VC 3520 H.320-to-H.323 gateway.
Each of the three campuses is linked to the PBX system via partial T1 connections utilizing six DS0s: five DS0s form the trunk, with the remaining DS0 used for control. Connection to Pac Bell is accomplished via two PRI trunk lines, which support the PBX for voice communications. A linking PRI is utilized by the MCS for IMUX bonding dial-in and dial-out functions. The telephone system supports DID phones as well as analog phones, along with a Reparte voice mail system.
Current Usage:
• Ethernet (IEEE 802.3) 10/100Base-T Local Area Network (LAN)
• GBIC TX (Del Norte)
• Ethernet wireless (IEEE 802.11a/b)
• Distance Education
• Video Distribution Network
• Teleconferencing
• Integrated services to educational centers (Del Norte and Fort Bragg): voice, video, data
System Expansions/Upgrades:
• VoIP
• Video over IP
• RCCD Intranet
• GBIC SX/LX Backbone
• Wireless Hotspots (IEEE 802.11g)
• Voice Mail System
1.02  Scope
1. Technical Support personnel are responsible for designing, installing, testing, administering, and maintaining each of the infrastructure systems for the RCCD as defined in this specification.
2. All work is performed in a thorough and workmanlike manner according to industry standards (IEEE).
3. Technical Support personnel coordinate all work through proper channels with vendors and any contractors for installation, connection, and maintenance of the infrastructures.
System documentation consists of the following information:
1. A narrative description of each system and its sequence of operation.
2. An itemized listing of each device, with pertinent information such as range, operating characteristics, set point values, etc.
3. Point-to-point wiring diagrams, device installation details, riser diagrams, device location plans, site plans, and conduit and wire routing layouts.
4. Labeling of all patch panels, connector termination points, and cabling per TIA-606 standards.
5. System documentation consisting of the logical network configuration layout.
PART 2 - SYSTEM HARDWARE
2.00
General System Requirements
The Structured cabling infrastructure must meet TIA568-A, TIA569-A/B standards as
outlined within these requirements. The active network infrastructure equipment will combine
advanced network architecture with microprocessor technology and will be capable of supporting
College of the Redwoods minimum computer standards operational parameters as set forth by the
Computer Standards Committee. All new network equipment must adhere to the approved
equipment list to insure compatibility and standards compliance. Additional features of the network
is to provide scalability and expandability for future upgrading.
The system hardware and components must be capable of operating without special
environmental controls in the temperature range of 50 degrees to 95 degrees Fahrenheit and form
20% to 60% relative humidity, non-condensing. Acceptable input voltages should be 100 VAC to
120 VAC, 50hz to 60Hz.
All equipment, hardware, and accessories of a given type are in accordance with
specifications recommended by Information Technology Services, and the Computer Standards
Committee.
2.01  Backbone Transport (Fiber Optic)
The Main Distribution Frame (MDF) is located in the Administration phone room (Main Point of Entry, or MPOE). Fiber optic cables run from the MDF to each Data Distribution Frame (DDF) to comprise the BACKBONE of the UWCS. All inter-building distribution is installed using the fiber optic cable specified (Siecor or equivalent). See the Fiber Optic Backbone Diagram. The BACKBONE of the UWCS is a hybrid cable consisting of either 24 strands of 62.5/125 micron multi-mode fiber and 12 strands of 8.3/125 micron single-mode fiber, or 6 strands of 62.5/125 micron multi-mode fiber and 6 strands of 8.3/125 micron single-mode fiber. All backbone cabling and entrance facilities are in compliance with TIA/EIA 569-A/B standards.
All multi-mode fiber optic connectors shall be "ST" type, glue-and-polish connectors for BACKBONE interconnect, and pre-manufactured type for intra-building jumper terminations.
Maximum connector attenuation:
1. 0.40 dB @ 850 nm
2. 0.20 dB @ 1300 nm
Maximum connector loss due to temperature variation (-40 deg C to +70 deg C) after 5 cycles, with loss measured at room temperature after 12 hours:
1. 0.20 dB
Maximum connector loss after 1000 insertions:
1. 0.20 dB
Interconnection sleeves will be used to mate and align the fiber optic connectors on the fiber patch panels.
All connectors will provide strain relief to prevent the fiber cable from being damaged.
2.02  Cross Connects (Fiber Optic Patch Panels)
Fiber optic cross connect patch panels will be 19-inch rack-mountable fiber connect panels supporting "ST-II" type connections, or wall-mount panels, depending upon application.
2.03  Horizontal Distribution (Copper UTP)
Horizontal distribution will be achieved through the use of 4-pair Category-5e certified Unshielded Twisted Pair (UTP) cable running from the DDF patch panels to the information outlets.
1. Horizontal distribution of the UWCS will be 4-pair 24 AWG UTP, plenum-jacketed or PVC-jacketed cable, NEC CMP rated or equivalent.
2. All horizontal distribution and pathways will be in compliance with TIA/EIA 569-A standards.
2.04  Cross Connects (Copper UTP)
All patch panels located in the MDF or the DDF will be certified to Category-5e levels and will be configured in accordance with TIA/EIA 568B and TIA/EIA 569-A specifications.
1. Cross connect punch panels will be 110-type gas-tight insulation displacement connectors to prevent errors and high-frequency loss.
2.05  Modular Patch Cords (Copper UTP)
All modular patch cables used to connect computers to hubs or patch panels will be wired
according to EIA/TIA 568B AT&T/WECO standards.
1. All modular patch cables will be Category-5e type UTP cable or equivalent.
2.06  Information Outlets (Wall Plates)
All information outlets will be certified to Category-5e levels and will be configured in
accordance with EIA/TIA 568B specifications.
1. Wall plates will be “double gang” type and UL listed.
2. Data jacks will be Category-5e CT type couplers.
2.07  Switches
Large capacity modular intelligent wiring switches will at a minimum meet the following
requirements:
1. Will be designed in accordance with UL478, UL910, NEC725-2(b), CSA, IEC, TUV, VDE
Class A, and meet FCC part 15, Class A limits.
2. Will operate in a standard office and/or wiring closet environment (65 degrees to 90 degrees Fahrenheit and from 20% to 80% relative humidity, non-condensing) and will not require any special heating or cooling. The switches will operate on standard 110 volt, 60Hz current.
3. Options for 19” rack mounting.
4. Must have the capability to support at least four (4) separately bridged Ethernet segments.
5. The management, interface, and power supply modules must each provide LED diagnostic
indicators.
6. Must provide local and remote SNMP management capability or Cisco IOS.
7. An Ethernet to Ethernet bridge module option is desirable.
8. An Ethernet to Ethernet router module option is desirable.
9. An Ethernet to Local Talk gateway or router module option is desirable.
10. An Ethernet Terminal Server interface module option is desirable.
11. Media modules for these topologies will be capable of supporting media options that are in
conformance with existing standards. These standards are IEEE 802.3 Ethernet, 10/100Base-T,
10Base-2, Local Talk, and Fiber Optic Inter Repeater Link (FOIRL).
The management/bridge module must support Flash EEPROM allowing for firmware updates to the
management module to be provided across the network.
12. When configured with the management/bridge module, the modular hub must support at least three (3) internally bridged and one (1) externally bridged Ethernet networks.
13. Must utilize RJ45 ports for interfacing.
The Ethernet management module must meet the following requirements:
1. Must provide an error break-down per port of Runts (less than 64B), Giants (over 1500B), Aligns (inappropriate number of bytes), Cyclical Redundancy Checks (CRCs), Collisions, and Out of Window errors.
2. The ability to set threshold alarm conditions on a per port basis.
3. The ability to turn ports off and on, and display the real-time status of a port.
4. Port locking security feature to limit access to authorized users only.
The management/bridge module must be capable of counting the number of packets transmitted
into each port for each packet size range (Runts, 64-127, 128-511, 512-1023, 1024-1518, and
Giants) and maintain counts on the port, module, and network levels.
5. The management/bridge module must be capable of counting the number of packets transmitted
into each port for each protocol type (TCP/IP, Novell, LocalTalk, WFWG, and others) and
maintain counts on the port, module, and network levels.
6. The management/bridge module must be designed in accordance with UL478, UL910, NEC
725- 2(b), CSA, IEC, TUV, VDE class A, and must meet FCC Part 15, Subparagraph J, Class A
limits.
7. The management/bridge module must provide LED diagnostic indicators displaying the status of
the board and interface ports.
8. The management/bridge module must provide remote and local SNMP management.
9. The management/bridge module must be 100% SNMP, MIB II, and RMON compliant.
10. Will have the capability of being segmented as a separate manageable network.
The Fiber Optic Inter Repeater Link (FOIRL) Modules will meet the following requirements:
1. Will be fully IEEE 802.3 FOIRL compliant.
2. Designed in accordance with UL478, UL910, NEC 725-2(b), CSA, IEC, TUV, VDE Class A.
Meets FCC part 15, Subparagraph J, Class A limits.
3. Will provide ST or SC type fiber connectors for connection to a 62.5/125 micron multi-mode
fiber optic cable.
4. Will have the capability to be hot swapped in the event of failure.
5. Will provide (1) AUI connector for connection to an IEEE 802.3 device and the capability to
detect and correct crossover.
2.08  Fiber Optic Transceivers
The fiber optic transceivers will have the following features:
1. Will be 10Base-FL or 100Base-FX (TX), FOIRL compliant.
2. Will have (1) AUI connector for connection to an IEEE 802.3 device.
3. Media converters will provide RJ45 connector for connection to an IEEE 802.3 device.
4. Will provide ST type fiber connectors for connection to a 62.5/125 micron multi-mode fiber
optic cable.
5. Will have diagnostic LEDs showing status on link, power, transmit, receive, SQE enable, and
collision.
6. Will have an external switch to enable/disable the Signal Quality Error (SQE) test.
7. Designed in accordance with UL478, UL910, NEC 725-2(b), CSA, IEC, TUV, VDE Class A.
Meets FCC part 15, Subparagraph J, Class A limits.
2.09  CSU/DSUs, Routers, Bridges, Gateways, and LAN/WAN Links
This high-speed, high-capacity LAN/WAN equipment will at minimum meet the following specifications:
CSU/DSU
Will be designed in accordance with UL478, UL910, NEC725-2(b), CSA, TUV, VDE Class A, and meet FCC part 15, Class A limits. Will operate in a standard office and/or wiring closet environment (65 degrees to 90 degrees Fahrenheit and from 20% to 80% relative humidity, non-condensing) and will not require any special heating or cooling requirements. This equipment will operate on standard 110 volt, 60Hz current.
1. CSU/DSU must provide dial backup to re-establish data links and fallback for secondary links.
2. CSU/DSU must be capable of remote management via RS-232, V.35, or V.32; SNMP management via Ethernet is desirable.
3. Options for 19” rackmounting.
4. Must provide local and remote SNMP management capability.
5. The management/bridge module must be 100% SNMP, MIB II, and RMON compliant.
Routers
1. Will be designed in accordance with UL478, UL910, NEC725-2(b), CSA, TUV, VDE Class A, and
meet FCC part 15, Class A limits.
2. Will operate in a standard office and/or wiring closet environment (65 degrees to 90 degrees
Fahrenheit and from 20% to 80% relative humidity, non-condensing) and will not require any
special heating or cooling requirements. This equipment will operate on standard 110 volt,
60Hz current.
3. Options for 19” rackmounting.
4. Must be capable for attachment of an FOIRL or media converter for intra-building backbone
link.
5. Must provide local and remote SNMP management capability.
6. The management/bridge module must be 100% SNMP, MIB II, and RMON compliant.
Bridges and Gateways
1. Will be designed in accordance with UL478, UL910, NEC725-2(b), CSA, TUV, VDE Class A, and
meet FCC part 15, Class A limits.
2. Will operate in a standard office and/or wiring closet environment (65 degrees to 90 degrees
Fahrenheit and from 20% to 80% relative humidity, non-condensing) and will not require any
special heating or cooling requirements. This equipment will operate on standard 110 volt,
60Hz current.
3. Options for 19” rackmounting.
4. Must provide local and remote SNMP management capability.
5. The management/bridge module must be 100% SNMP, MIB II, and RMON compliant.
LAN/WAN Link Equipment
1. Will be designed in accordance with UL478, UL910, NEC725-2(b), CSA, TUV, VDE Class A, and meet FCC part 15, Class A limits.
2. Will operate in a standard office and/or wiring closet environment (65 degrees to 90 degrees
Fahrenheit and from 20% to 80% relative humidity, non-condensing) and will not require any
special heating or cooling requirements. This equipment will operate on standard 110 volt,
60Hz current.
3. Options for 19” rackmounting.
4. Must provide local and remote SNMP management capability.
5. The management/bridge module must be 100% SNMP, MIB II, and RMON compliant.
2.10  Fiber Optic Backbone Switch
The Fiber Optic Backbone Switch will at minimum meet the following requirements:
1. Will be designed in accordance with UL478, UL910, NEC725-2(b), CSA, IEC, TUV, VDE
Class A, and meet FCC part 15, Class A limits.
2. Will operate in a standard office and/or wiring closet environment (65 degrees to 90 degrees
Fahrenheit and from 20% to 80% relative humidity, non-condensing) and will not require any
special heating or cooling requirements. The backbone switch will operate on standard 110
volt, 60HZ current.
3. Equipment will provide capability for 19” rackmounting.
4. Must provide remote and local management capability.
5. Remote and local SNMP management is desirable.
6. The management/bridge module must be 100% SNMP, MIB II, and RMON compliant.
7. Will provide ST or SC type fiber optic connectors for interfacing to 62.5/125 micron multimode fiber optic cable.
8. Capability to be hot swapped in the event of failure is desirable.
9. Port locking security feature to limit access to authorized users only.
10. Must be capable of supporting media that are in conformance with existing standards. These
standards are IEEE 802.3 Ethernet, 10/100Base-FL, Local Talk, and Fiber Optic Inter Repeater
Link (FOIRL).
11. Modular design for repair, upgrading and expansion is desirable.
12. Chassis must be able to support multiple module types: 10 Mb/s, 100 Mb/s, and 1000 Mb/s Ethernet, ATM, 56K, ISDN, FDDI, V.35, etc.
13. Redundant Media Links, Power and Environmental Systems are desirable.
14. Capability of transition from traditional switching to VLAN to VNET is desirable.
15. Scalability via a non-blocking switch fabric that utilizes high port density, distributed
processing, distributed management, and high-performance ASIC design is desirable.
2.11  Backbone/Riser (Fiber Optic)
Ten feet of slack cable will be left coiled and mounted to the wall behind each equipment rack at each end of the cable prior to termination. The manufacturer's minimum bend radius must be observed when coiling and attaching the slack cable.
1. Must be in compliance with TIA/EIA 569-A standards.
2. Fiber optic cables that run horizontally will be supported and secured in cable trays.
2.12  Other Related Equipment
All other related digital equipment, such as data, video teleconferencing, and video distribution chassis, A-D converters, etc., when applicable, will comply with the standards outlined in Part 2 of this specification.
PART 3 - IMPLEMENTATION
3.00  Horizontal Distribution (Copper)
All horizontal cable (copper) will be Category-5e, UTP, 24 AWG as defined and run in
accordance with the DATA CABLE RUN LIST. All UTP cabling will be installed in compliance
with TIA/EIA 568B wiring scheme and TIA/EIA 569-A standards.
The maximum horizontal cable length is 100 meters independent of media type. This is the
cable length from the mechanical termination of the media in the DDF consolidation point to the
information outlet in the work area.
3.01  Information Outlets
Information outlets will be configured in compliance with TIA/EIA 568B wiring scheme.
3.02  Operational Parameters
Backbone (Fiber Optic)
Operational parameters will be tested and measured through the use of an Optical Time
Domain Reflectometer (OTDR) and documented as a reference for overall backbone performance
over time.
1. Maximum attenuation:
   3.75 dB/km @ 850 nm.
   1.5 dB/km @ 1300 nm.
2. Minimum bandwidth:
   160 MHz·km @ 850 nm.
   500 MHz·km @ 1300 nm.
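For illustration only, the attenuation figures above imply a simple worst-case loss budget for any backbone run. The following Python sketch (the function name and the sample run length are illustrative, not part of this specification) combines the 850 nm fiber maximum above with the per-connector maximum from section 2.01; an OTDR trace of the installed run can then be compared against the computed budget:

    def backbone_loss_budget(length_km, connectors,
                             fiber_db_per_km=3.75, connector_db=0.40):
        """Worst-case link loss at 850 nm: maximum fiber attenuation
        (3.75 dB/km, section 3.02) plus maximum connector attenuation
        (0.40 dB each, section 2.01)."""
        return length_km * fiber_db_per_km + connectors * connector_db

    # Example: a 1.0 km multi-mode run terminated at two patch panels.
    print(f"{backbone_loss_budget(1.0, 2):.2f} dB")  # 4.55 dB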
Horizontal Distribution (Copper)
Unshielded Twisted Pair (UTP) cables will meet or exceed the following standards as outlined in EIA/TIA TSB-36 for Category-5e cable:
1. Maximum mutual capacitance @ 1 kHz: 17 nF per 1000 ft.
2. Maximum D.C. resistance: 26 ohms per 1000 ft.
3. Impedance: 100 ohms (± 15%)
4. Maximum Near End Crosstalk (NEXT):
   @ 4 MHz: 53 dB
   @ 10 MHz: 47 dB
   @ 16 MHz: 44 dB
   @ 20 MHz: 42 dB
   @ 100 MHz: 32 dB
5. Maximum attenuation:
   @ 4 MHz: 13 dB
   @ 10 MHz: 20 dB
   @ 16 MHz: 25 dB
   @ 20 MHz: 28 dB
   @ 100 MHz: 67 dB
3.03  Test Equipment Specifications
As outlined in 3.02 operational parameters will meet the technical specifications for
attenuation through measurements taken by an OTDR. In addition, testing and troubleshooting will
require other test equipment in order to meet the specifications for the fiber optic backbone and
horizontal distribution system (copper).
OTDR (Optical Time Domain Reflectometer)
The OTDR will provide the following features and meet or exceed the following specifications.
1. Multimode Operation:
   @ 850 nm ± 20 nm: Event Dead Zone ≤ 4 m; Dynamic Range ≅ 20 dB; Attenuation Dead Zone ≤ 9 m
   @ 1300 nm ± 20 nm: Event Dead Zone ≤ 4 m; Dynamic Range ≅ 20 dB; Attenuation Dead Zone ≤ 9 m
2. Singlemode Operation:
   @ 1310 nm ± 20 nm: Event Dead Zone ≤ 5 m; Dynamic Range ≅ 30 dB; Attenuation Dead Zone ≤ 15 m
   @ 1550 nm ± 20 nm: Event Dead Zone ≤ 10 m; Dynamic Range ≅ 28 dB; Attenuation Dead Zone ≤ 15 m
3. Technical Specifications:
   dB Readout Resolution: ≅ 0.01 dB (loss and reflectance)
   Length Readout Resolution: ≅ 0.1 m
   Distance Measurement Accuracy: from ± 0.01% ± 2.0 m (5 km range) to ± 0.01% ± 4.0 m (160 km range)
   Units of Measurement: meters and feet (user selectable)
   Refractive Index Range: ≅ 1.4000 to 1.7000
   Detection Tone: 2 kHz stepped
4. General Specifications:
   Display Type: CRT or LCD, min. 5" diagonal
   Connector Types: ST compatible, FC, SC, DIN
   Operating Temperature: -10° C to 50° C
   Power Supply: interchangeable Nickel-Metal-Hydride batteries; 100-250 VAC, 50-60 Hz; 10-18 VDC operation
   Laser Certification: Class 1 CFR 21 (VFL is Class 2 IEC 825, 9/90)
   Disk Drive (optional): MS-DOS compatible, 1.44 MB
   Printer Port: parallel, IBM Centronics compatible
   Serial Port: RS-232
5. Options:
   Visual Fault Locator:
     Wavelength: 650 ± 10 nm
     Output Power: ≥ -2 dBm (0 dBm max)
     Transmission: CW or 1.5 Hz
   Laser Source:
     Wavelength: 1310/1550 nm (± 20 nm)
     Output Power: ≥ -8 dBm
     Stability @ 23° C: ± 0.1 dB
     Spectral Width: ≤ 10 nm (RMS)
   Power Meter:
     Wavelengths: 850/1300/1310/1550 nm
     Detector Type: InGaAs, anti-reflective
     Measurement Range: +3 to -70 dBm
     Accuracy: ± 0.2 dB (+23° C, 1310 nm, -20 dBm)
     Linearity (1300/1310/1550 nm): ± 0.1 dB (0 to -60 dBm)
     Linearity (850 nm): ± 0.1 dB (0 to -50 dBm)
     Resolution: 0.01 dB/dBm
Optical Fault Locator (Multimode)
The Optical Fault Locator will provide the following features and meet or exceed the following specifications.
1. Must be capable of reflective and non-reflective fault detection.
2. Capable of multiple fault detection.
3. Go/no-go splice qualification.
4. Capable of variable fault threshold settings (0.5 dB to 6.0 dB).
5. Capable of fault location up to 82 kilometers.
6. Conform to the following general specifications:
   Operating Wavelength: 850 nm ± 20 nm; 1300 nm ± 30 nm
   Distance Accuracy: ± 2 m; ± 5 m to 300 m; ± 20 m to 50 km
   Fiber Type: multimode, with singlemode option
   Connector Type: AT&T ST, ST-II
   Power: AC or battery
LAN Tester (Copper, Co-ax, Category 5e Certification)
The LAN tester will provide the following features and meet or exceed the following specifications.
1. Capability to certify any copper cable up to 100 MHz.
2. Support for the following LAN types: 10Base-T, AppleTalk, T1, ISDN, Fast Ethernet, 56K, ATM51, and ATM155 Mbps.
3. Capability to test cable opens, shorts, noise, and termination. Measurements for Near-End Crosstalk, attenuation, resistance, and impedance, in addition to cable length, wire mapping, and capacitance.
4. Capability to test any copper cable: Category 5e twisted pair as well as all other grades of shielded and unshielded twisted pair, including coax types (RG58, RG59).
5. Unit must be portable (handheld) and battery operated.
6. The following general specifications are recommended:
   Distance: twisted pair 0-3000 ft (0-915 m); coax 0-4000 ft (0-1220 m)
   Loop Resistance: 0 to 2000 ohms
   Capacitance: 0 to 100,000 pF
   Impedance: range 40 to 200 ohms; accuracy ± 5 ohms; resolution 2 ohms
   Attenuation: frequency range 0.512 to 100 MHz; dynamic range -50 to 0 dB; accuracy ± 1.5 dB; resolution 0.1 dB
   Connector Types: RJ45, BNC, AppleTalk Mini-DIN, DB15, DB9
   Power: rechargeable battery pack and/or AC adapter
MDF/IDF Closets
All MDF and IDF closets and enclosures will be equipped to rack-mount network equipment. Enclosures will be wall-mount units similar to the Hubbell RE2 type cabinets or of similar design. Other enclosures may be of the larger type, which have hinged front and rear doors capable of locking. MDF closets will have one to several 7-foot 19" racks, which may be enclosed or open type. All cabling will be routed via a cable management system.
3.04  System Management
District Data Networking Strategy and Standards
Management of the network equipment (switches, routers, bridges, and all other SNMP-compliant equipment) will be performed via Cisco IOS or other SNMP-, RMON-, and MIB II-compliant software. The operational system platform will be a portable Intel-based system linked into the backbone via the network, which should be accessible from any node point within the system. This will allow for diagnostics, troubleshooting, and repair of all attached SNMP-compliant equipment as outlined in section 2.07.
3.05  Jurisdiction
This set of standards applies to all equipment capable of communicating over the district
data backbone.
The data backbone is intended to be the primary means of interconnecting the various local
networks throughout the district. Adherence to a set of standards can ensure equipment
compatibility and avoid address and naming conflicts and other general network chaos.
“Equipment capable of communicating over the district backbone” is intended to include all
computers, printers, hubs, switches, routers, and other resources that are in some way connected to
the district data backbone. Stand-alone networks and computers are not affected.
3.06  District Data Backbone
This includes the primary backbone switch and all equipment with which it can
communicate without routing.
This definition is consistent with the operation of network-level protocols (protocols that operate at
layer three, or the network layer, of the OSI model), which require routing from network to
network.
The District Data Backbone (or simply the backbone) refers to the data network defined above; thus, communication between the backbone network and other connected networks would require a routing device. When communication between the backbone and a specific node does not require routing, that node is considered part of the backbone network.
3.07  Frame Types and Protocols
Communication over the backbone adheres to the Digital/Intel/Xerox Ethernet II specifications, supporting only the following protocol suites:
• TCP/IP (Transmission Control Protocol/Internet Protocol)
• IPX/SPX (Internetwork Packet Exchange/Sequenced Packet Exchange)
Ethernet was chosen as the backbone network because of its low cost and ease of maintenance.
There are two recognized Ethernet standards: the original Digital/Intel/Xerox (DIX) specifications, and the IEEE 802.3 specifications. Both are very similar, with only some minor (but significant) differences in the framing format. For stand-alone networks, either format would be acceptable. In the Internet community, however, the DIX standard is preferred [1] and most commonly used [2].
Note: Novell's designation for true DIX Ethernet framing is Ethernet_II, while IEEE 802.3 framing is called 802.2. Novell also supports a third frame type called 802.3, but this type is supported only in the Novell community and is not a recognized standard.
The TCP/IP protocol suite is necessary for communication over the Internet. IPX/SPX protocols are used for communication between Novell clients and servers.
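The framing difference noted above is visible in bytes 12-13 of the header: DIX Ethernet II carries an EtherType value of 0x0600 or greater there, while IEEE 802.3 framing carries a length of 1500 or less. A minimal Python sketch of that distinction (illustrative only, not part of this specification):

    def frame_kind(frame: bytes) -> str:
        """Classify a raw Ethernet frame by its type/length field
        (bytes 12-13): >= 0x0600 means a DIX EtherType."""
        field = int.from_bytes(frame[12:14], "big")
        return "Ethernet II (DIX)" if field >= 0x0600 else "IEEE 802.3"

    # An EtherType of 0x0800 (IP) marks a DIX Ethernet II frame.
    header = bytes(12) + (0x0800).to_bytes(2, "big")
    print(frame_kind(header))  # Ethernet II (DIX)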
3.08  Nodes
The connection of any device to the district data backbone requires the notification of the
district network administrator and Technical Support services.
This standard is intended to ensure that nodes connected to the district backbone are
properly configured. A mis-configured IP address, for example, could cause packet routing
problems throughout the district.
3.09  Administration
The district data backbone is co-administered by the district network administrator and Technical Support services, and is subject to the standards specified under 3.10 District Data Networks.
This standard was included to clarify the fact that the backbone is just another network
within the district internetwork, and as such, requires a certain amount of administration,
documentation, and maintenance.
3.10  District Data Networks
The following standards apply to individual networks throughout the district. Exceptions can be made on a case-by-case basis by the district network administrator, who must document such exceptions as specified under 3.17 The District Network Administrator.
Networks that fail to comply are subject to disconnection from the district data backbone by the district network administrator and/or Technical Support services.
The standards presented below are intended to provide a framework for smooth and uninterrupted operation of the district-wide data network. By nature, the addressing and naming conventions must be well structured and documented, as these control the flow of information throughout the network. Beyond that, however, it is intended that the individual network administrators be granted as much autonomy as possible. To that end, no mention of protocol requirements, network types (Ethernet, Token Ring, FDDI, etc.), network hardware, etc. is made, as long as the addressing and naming is in compliance, and the backbone connection complies with the standards under 3.06 District Data Backbone.
[1] Braden, R. T. 1989. Requirements for Internet Hosts - Communication Layers, RFC 1122, section 2.3.3, Ethernet and IEEE 802 Encapsulation. Also see RFC 894, Standard for the transmission of IP datagrams over Ethernet networks, and RFC 1042, Standard for the transmission of IP datagrams over IEEE 802 networks.
[2] Stevens, W. R. 1993. TCP/IP Illustrated, Volume 1 - The Protocols, page 2.
3.11  The Network Administrator
Each individual network must be assigned a network administrator to serve as the primary
contact for the district network administrator and to supervise the operation of the network locally,
with additional responsibilities described below.
By agreement between the local network community and the district network administrator,
the district network administrator may take responsibility for the local network administration.
Certain network administration tasks, such as the assignment of names and addresses, server monitoring and recovery, etc., are best handled by a single responsible party. In order to grant as much autonomy to local networks as possible, such tasks would be performed by a local network administrator, who would also serve as the primary contact between the local network community and the district network administrator.
It is also recognized, however, that in some cases, there may be no such qualified person.
Thus, by agreement between all affected parties, provisions are made so that the district network
administrator can take on the local administration duties.
3.12  The Network Name
Each individual network within the district internetwork is assigned a unique name for
administration purposes. The name assigned is determined by agreement between the district
network administrator and the local network community.
Within a data network (including the Internet or any internetwork) there are certain
resources that must be assigned unique names. By assigning one unique name to each network and
basing the names of resources within those networks on that name, the task of ensuring uniqueness
is simplified and can be distributed to the local district administrators.
3.13  Address and Naming Conventions for TCP/IP (IP Addresses/DNS Names)
Backbone Address
If the network connects directly to the district data backbone and TCP/IP protocols are used
for internetwork communication, a permanent backbone IP address is assigned by the district
network administrator with notification to Technical Support services.
The routing device used to connect the network to the backbone will require an IP address
on the backbone side. The address assigned will also be used for updating routing tables throughout
the district internetwork.
Network Address
Each individual data network must be assigned a unique IP (Internet Protocol) network
address and subnet mask by the district network administrator, whether or not Internet access is
required.
Any resulting changes required to routing tables in other networks throughout the district are
coordinated by the district network administrator.
The local IP network address may be subnetted at the discretion of the local network administrator in accordance with Internet standards [3]. Note, however, that the network portion of the IP address cannot consist of all zero or all one bits.
All networks require an IP address in order to comply with other address and naming conventions described below. The network IP address is assigned by the district network administrator, because it must be coordinated with other IP addresses and routing tables throughout the district internetwork.
Subnetted network addresses of all one bits are disallowed, because they could conflict with IP broadcast addressing [7]. Network address bits of all zeros are reserved for dynamic IP address assignment protocols like BOOTP [4] and DHCP [5].
Node Addresses
Node IP addresses throughout the local network are assigned from the range implied by the network IP address and subnet mask by the local network administrator, according to the following guidelines:
• Node Address Standards: The node address portion of the IP address (as determined by the value of the subnet mask) must not consist of all zero or all one bits.
• Novell Netware Servers: Netware servers must be assigned a permanent static IP address.
• Permanent Static Addresses: Some or all of the local network nodes may be assigned permanent static IP addresses. Nodes serving as internetwork resources (Internet servers, district-wide printers, etc.) must be assigned a permanent IP address.
• Dynamically Assigned Addresses: Nodes that do not require a permanent static IP address may be assigned a dynamic address using RARP [6], BOOTP [4], DHCP [5], or another appropriate protocol.
[3] Mogul, J. 1984, Internet Subnets, RFC 917. Also see Mogul, J. and Postel, J. 1985, Internet Standard Subnetting Procedure, RFC 950.
[4] Croft, W. and Gilmore, J. 1985, Bootstrap Protocol, RFC 951. Also see Alexander, S. and Droms, R. 1993, DHCP Options and BOOTP Vendor Extensions, RFC 1533, and Droms, R. 1993, Interoperation Between DHCP and BOOTP, RFC 1534.
[5] Droms, R. 1993, Dynamic Host Configuration Protocol, RFC 1531. Also see Alexander, S. and Droms, R. 1993, DHCP Options and BOOTP Vendor Extensions, RFC 1533, and Droms, R. 1993, Interoperation Between DHCP and BOOTP, RFC 1534.
[6] Finlayson, R., Mann, T., Mogul, J., and Theimer, M. 1984, Reverse Address Resolution Protocol, RFC 903.
This standard simply makes the local district administrator responsible for assigning IP
addresses to nodes on the local network.
Node addresses consisting of all one bits are disallowed, because they would be interpreted as broadcast addresses [7]. Addresses of all zero bits are disallowed, because they would conflict with the assigned network IP address.
Netware servers are required to have a permanent IP address because of the numbering and naming conventions described below. Other internetwork servers need to have permanent IP addresses so that internetwork clients can find them.
All other addressing considerations are left to the discretion of the local network
administrator.
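The all-zero/all-one exclusions above are the same rule applied by common tooling. As a sketch, Python's standard ipaddress module, given the example network used elsewhere in this section, yields the usable node range directly:

    import ipaddress

    # 199.4.95.0 with a 24-bit mask; .hosts() excludes the all-zeros
    # (network) and all-ones (broadcast) node addresses.
    net = ipaddress.ip_network("199.4.95.0/255.255.255.0")
    hosts = list(net.hosts())
    print(hosts[0], "-", hosts[-1])  # 199.4.95.1 - 199.4.95.254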
Default IP Node Names
The district network administrator ensures that there is an IP node name registered in the name server for each possible IP address. By convention, the default names are established in the following format:
IP<12-digit IP address>.Redwoods.CC.CA.US
A node with an IP address of 199.4.95.101, for example, would have a default registered name of IP199004095101.Redwoods.CC.CA.US.
Depending on the resource being accessed, a node may or may not require an associated IP
name, so an IP name is established for each address just in case. The name format was chosen so as
to guarantee uniqueness.
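Because the format is purely mechanical, default names can be generated programmatically. A minimal Python sketch (the function name is illustrative, not part of this specification):

    def default_node_name(ip, domain="Redwoods.CC.CA.US"):
        """Zero-pad each octet to three digits, prefix with 'IP',
        and append the district domain."""
        padded = "".join(f"{int(octet):03d}" for octet in ip.split("."))
        return f"IP{padded}.{domain}"

    print(default_node_name("199.4.95.101"))
    # IP199004095101.Redwoods.CC.CA.US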
Alternative Internal IP Node Names
Those nodes that are assigned a permanent static IP address may also be assigned one or
more permanent IP node names (in addition to the default name) by agreement between the district
and local network administrators. The name must start with the associated network name and end
with the domain name registered to the district with the Internet authority (redwoods.cc.ca.us).
Note that this is an internal name; additional steps must be taken for the name to be
accessible from the Internet.
IP names provide a convenient method for accessing network resources from anywhere in
the district internetwork. Each name is required to start with the associated network name in order
to ensure uniqueness throughout the district Internetwork.
Alternative External IP Node Names and Addresses
IP names and addresses that are to be accessible from the Internet need to be established at the firewall, then translated to the associated internal addresses. See 3.15 Internet Access for more information.
[7] Mogul, J. 1984, Broadcasting Internet Datagrams, RFC 919.
Address and Naming Conventions for IPX/SPX (Novell Network Addresses)
Note: These standards apply only to networks using Novell's IPX/SPX protocols for file and print sharing and access to other network resources.
IPX Network Address [8]
The IPX address (specified on the BIND command in the file server) assigned to a Novell network is simply the IP network address for the same network, expressed in hexadecimal. If, for example, a network's assigned IP address is 199.4.95.0, then the IPX network address would be C7045F00.
If IPX is bound to more than one frame type (not recommended), additional hexadecimal
values must be derived from the domain of the network IP address. Thus, any network IPX address,
when converted to decimal, must resolve to a valid IP address for the same network. Furthermore, it is recommended that the resulting IP address not be assigned as a node address.
Each ethernet frame type supporting IPX on a Novell network must be assigned a unique
eight-digit IPX network number. By happy coincidence, any IP address (including that assigned to
the network) can be expressed as eight hexadecimal digits, and since all IP addresses must be
unique, this seems like a convenient method for satisfying all requirements.
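The conversion is a direct octet-by-octet translation to hexadecimal. A minimal Python sketch (illustrative only, not part of this specification):

    def ipx_network_number(ip):
        """Express a dotted-decimal IP network address as the
        eight-digit hexadecimal IPX network number."""
        return "".join(f"{int(octet):02X}" for octet in ip.split("."))

    print(ipx_network_number("199.4.95.0"))  # C7045F00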
IPX File Server (SAP) Address [8]
The IPX internal net number assigned to a Netware file server (usually at the beginning of
the AUTOEXEC.NCF file) is simply the file server’s IP address expressed in hexadecimal.
In the event that the file server is connected to multiple networks (and, therefore, has more
than one IP address), and a permanent district data backbone IP address has been assigned, then the
IPX internal net number is based on the backbone IP address. Otherwise, the IP address of the network closest to the backbone is used. Proximity to the backbone is measured in hops (the number of routers that must be crossed to get to the backbone). When there is more than one network closest to the backbone (tied for the least number of hops), the local network administrator can base the IPX internal net number on any one of the qualifying IP addresses.
If, for example, a server’s backbone address is 199.4.95.31, then the internal IPX net
number would be C7045F1F.
Each Netware file server must be assigned one unique eight-digit hexadecimal IPX internal net number. Since any IP address can be expressed as eight hexadecimal digits, and since IP addresses are already unique, this seems like a convenient method for satisfying all requirements.
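The reverse conversion provides the compliance check described earlier: any assigned IPX number, decoded back to dotted decimal, must resolve to a valid IP address. A minimal Python sketch (illustrative only):

    def ipx_to_ip(ipx):
        """Decode an eight-digit hexadecimal IPX number back to
        dotted-decimal for validation against the IP plan."""
        return ".".join(str(int(ipx[i:i + 2], 16)) for i in range(0, 8, 2))

    print(ipx_to_ip("C7045F1F"))  # 199.4.95.31, the server address above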
File Server Names
The name assigned to each Netware file server (usually at the beginning of the AUTOEXEC.NCF file) must start with the associated network name assigned under 3.12 The Network Name. The remainder of the file server name is left to the discretion of the local network administrator, with the understanding that the name must be unique (among both file and print servers) within the district internetwork.
[8] IPX numbering conventions are based on the Utah Education Network Novell IPX Network Number and Novell SAP Service Naming Standard, 1994, by Doupnik, J. and Hoisve, D.
This standard ensures that each Netware file server is assigned a unique name within the
district internetwork. Since the name is required to start with the unique assigned network name, the
local network administrator only needs to worry about uniqueness within the local network.
Note that this may result in longer server names than have been common in the past, but with the advent of Netware 4.x and new client software, a file server name is rarely typed by users once it has been established. Thus, trading longer server names for guaranteed uniqueness becomes a more reasonable option.
Print Server Names
The name assigned to each Netware print server must start with the associated network name assigned under 3.12 The Network Name. The remainder of the print server name is left to the discretion of the local network administrator, with the understanding that the name must be unique (among both print and file servers) within the district internetwork.
Like file servers, Netware print servers advertise their names across the internetwork using SAP (Service Advertising Protocol) [9], and must, therefore, be unique within the district internetwork. Accordingly, the rationale for this standard is the same as that for File Server Names.
Print Queue Names
Print queue names are not subject to the same restrictions as file and print servers, since they are not advertised. Instead, their name requirements are based on the version of Netware under which they are established:
Netware 3.x (or 4.x in bindery emulation): In this case, the print queue name need only be unique within the file server on which it resides.
Netware 4.x: In this case, a print queue is established as an NDS (Netware Directory Services) database object under the associated network's branch. See 3.14 Netware 4.x NDS Data Structures for more information.
Netware print queues are local to the server or NDS branch in which they are established, and can, therefore, be entirely managed by the local network administrator. They are mentioned here only to avoid confusion with print servers (as well as other print objects in the NDS database).
Other Internetwork Uniqueness Requirements
Any other network resource, software, or equipment requiring uniqueness within the district internetwork (or beyond), in addition to TCP/IP and IPX/SPX names and addresses, must be coordinated with the district network administrator.
This standard is established in order to ensure that network resources requiring uniqueness are satisfied at the internetwork level. It is intended that if certain such resources become common, they be specifically added to the district network standards documentation.
[9] Sant'Angelo, R. 1994, Netware Unleashed, page 1125.
3.14  Netware 4.x NDS Data Structures
These standards apply only to those networks with at least one Netware 4.x file server:
Tree Administration
All Netware 4.x servers participating in the district internetwork are incorporated into the district NDS (Netware Directory Services) tree named REDWOODS. The REDWOODS tree is managed by the district network administrator.
The name REDWOODS, being the name of the district, seemed appropriate for the NDS tree. Since the NDS tree is a district-wide internetwork resource, it also seems appropriate that the overall management be performed by the district network administrator.
Network NDS Branch
Local district networks with at least one Netware 4.x file server participating in the district internetwork are allocated a branch (organizational unit) on the REDWOODS NDS tree. All of the resources that are established for a local network in the NDS tree are established under its exclusive branch. The district network administrator creates the branch using the network name assigned under 3.12 The Network Name.
Allocating networks with participating Netware 4.x file server(s) an exclusive branch on the NDS tree helps to distribute the management of the network and the NDS tree, while maintaining the security of other branches and networks.
Branch Administration
When a network branch is created on the NDS tree, a user object is created below it with full
supervisory rights to the branch organizational unit object for the local network administrator.
This standard essentially provides the local network administrator with the capabilities to
manage the local network NDS data structures.
File Server Time Synchronization
All Netware 4.x file servers established in the REDWOODS NDS tree must be configured as Secondary Time Servers (STS). The only exception to this standard is the Data Processing file server (PRIMO), which is configured as a Single Reference Time Server (SRTS), the sole authority for time on the district internetwork.
In a Netware 4.x environment, agreement on a time reference throughout the district internetwork is absolutely critical. This is primarily due to the distributed nature of the NDS database, where updates can come from any server or node and must be posted in the correct order. The simplest method for such time synchronization involves declaring one server as the time source; all other servers then set their time to its time. Since the Data Processing "PRIMO" server was the first server in the NDS tree, it became the time source by default.
The drawback to this method is that the "PRIMO" server becomes a single point of failure. In the future, therefore, a more distributed form of time synchronization may be established.
Partitioning
The district network administrator may choose to partition the NDS tree at the network
branch for performance and backup reasons.
NDS database partitioning refers to the way the database is distributed among the participating Netware 4.x servers throughout the district internetwork. Having multiple partitions within the internetwork implies that there are multiple redundant copies of the NDS database that can take over if the server with the primary copy fails. However, the more copies (partitions) established, the more network traffic it takes to keep all of the partitions synchronized.
Because of the internetwork performance and autonomy implications, partitioning decisions are left to the discretion of the district network administrator.
Naming Exceptions for Third-Party Products
It is recognized that some third-party products come pre-configured with their own hard-coded name, and that the name may conflict with the district network standards. The district
network administrator must approve such products before they can be used within the district
internetwork.
When a product is to be used that conflicts with the district network standards, joint
approval of the district network administrator and Technical Support services is required in order to
ensure that the product does not conflict with any existing network resources, software, or
equipment. It is intended that the district network administrator make attempts to resolve any such
conflicts if possible, rather than reject the product.
Internetwork Packet Routing
The use of RIP (Routing Information Protocol) to exchange routing information is discouraged. Instead, routers to other networks should be configured with appropriate routing tables and default routes.
Networks established as branches on the district NDS tree should be able to route IPX/SPX protocols to other networks. TCP/IP protocols may also be routed.
Routers that use RIP produce excessive network traffic. The district network administrator can assist local network administrators in establishing appropriate routing tables and default routes.
Network Documentation
It is the responsibility of the local network administrator to maintain current documentation that includes all of the following topics:
• Name and Context of the Administrative User in the NDS Tree: This provides the means to find the user with supervisory access to the network's NDS branch. Documenting the user password is discouraged.
• Network Servers:
  o Server Boot-Up Procedures: The method used to bring the server(s) up to a usable state (even if it is simply "turn on the power").
  o Server Shutdown Procedures: The procedures required to shut the server(s) down to the point where the power can safely be turned off.
  o Server Power-Fail Recovery Procedures: The procedures that must be followed in the event of a power failure in order to bring the server(s) back to a usable state.
  o Server Crash Recovery Procedures: The procedures that must be followed in the event that the server goes down unexpectedly (ABENDs, or abnormal ends, for Netware file servers) to bring the server(s) back to a usable state.
  o File Server Backup Procedures: The procedures used for backing up file server data (file servers only).
• Permanent Static IP Addresses Assigned: A record must be kept of each IP address permanently assigned by the local network administrator.
• Other Special Equipment Requirements: Additional documentation may be required for other equipment, software, resources, etc.
Ideally, the documentation would be kept in an obvious, though reasonably secure, location (close to the file servers, accessible by either the network administrator or the Technical Support staff).
Network documentation is invaluable when problems arise - especially when the local network
administrator is not available. Proper documentation eliminates most of the guesswork that takes
place when an unfamiliar person (the district network administrator, for example) must come in and
troubleshoot network problems, or even perform routine tasks like bringing a server up or shutting it
down.
3.15 Internet Access
The Internet can be accessed from the district internetwork through a firewall. The firewall
performs IP address conversions so that the addresses visible to the Internet are different from the
actual addresses used internally.
The Public Network
The firewall acts as a router between the district backbone and the public network, i.e., the network fully accessible to the Internet. The public network and the firewall are managed by the district network administrator.
Logically, the public network is an extension of the backbone, but since it is separated from the backbone by a router (the firewall), it qualifies as a separate network under 3.06 District Data Backbone and must, therefore, be assigned a network administrator, with notification to Technical Support services.
3.16  Establishing an Internet Server
External Internet Servers
External internet servers are physically part of the public network and, as such, are
automatically accessible to the internet. The district network administrator approves and manages
(at least jointly) external Internet servers.
Because of the standards specified under 3.10 District Data Networks, the network administrator for the public network would be responsible for any external Internet server.
Internal Internet Servers
Internal Internet servers may be established on non-public district networks (networks on the
inside of the firewall), but they require the approval of the district network administrator and
notification to Technical Support services.
Because of the protective nature of the firewall, an additional “external” IP address must be
assigned to internal Internet servers and configured in the firewall. Since the firewall is managed by
the district network administrator, this would require approval.
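Conceptually, the firewall maintains a static translation table from each externally visible address to the internal server it fronts. The Python sketch below is purely illustrative; the addresses shown are hypothetical, not the district's actual assignments:

    # Hypothetical static translations configured in the firewall:
    # external (Internet-visible) address -> internal server address.
    external_to_internal = {
        "207.62.203.10": "192.168.7.25",
    }

    def translate(external_ip):
        """Return the internal address behind an external one."""
        return external_to_internal[external_ip]

    print(translate("207.62.203.10"))  # 192.168.7.25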
3.17  The District Network Administrator
Enforcement of Networking Standards
The district network administrator and Technical Support services will ensure that the
networking standards specified in this document are satisfied.
District Internetworking Coordination
Coordination between various district networks is the responsibility of the district network
administrator and Technical Support services, including the following:
• Resolving address and naming conflicts
• Advising local network administrators of changes to routing tables
• Coordinating access to internetwork resources
• "Other duties as required."
District Data Backbone Network Administration
The district network administrator and Technical Support services will perform the network management duties specified under 3.10 District Data Networks for the district data backbone network defined under 3.06 District Data Backbone.
District Public Network Administration
The district network administrator and Technical Support services will perform the network management duties specified under 3.10 District Data Networks for the district public network, The Public Network.
District Internetworking Documentation
The district network administrator and Technical Support services will maintain current
documentation on the following topics:
• District Data Network Relationships: The relationships between the various district data
networks; this is typically documented graphically (block relationship diagrams, for example).
• District Data Network Administrators: A list of local network administrators for the various
district networks as specified under The Network Administrator.
• Internal Network IP Address Assignments: A list of network IP addresses assigned to each
local district network as specified under Address and Naming Conventions for TCP/IP (a
conflict-checking sketch follows this list).
• External Licensed IP Address Assignments: A list of all IP addresses licensed to the district by
the Internet authorities/Internet service provider.
• Assigned IP Names (DNS names): A list of district IP names registered in the domain name
server (DNS) as specified under Address and Naming Conventions for TCP/IP.
• Assigned Network Names: A list of names assigned to the various local district networks as
specified under The Network Name.
• District Internetwork Static Routing Tables: A list of entries that should be in each network's
routing tables.
• Approved Exceptions to District Networking Standards: A list of exceptions to the networking
standards approved by the district network administrator as specified under 3.10, District Data
Networks.
District internetwork documentation is just as valuable as individual network documentation. See
Network Documentation for more information.
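As a minimal illustration of keeping the internal IP address assignments free of conflicts, the sketch below assumes the assignments list is kept as CIDR prefixes; the sample names and prefixes follow subnets shown in the Part 4 diagrams, and the validation approach is an example, not a district procedure:

    # Sketch: detect overlapping network assignments in the documented
    # internal IP address list.  Sample entries follow subnets shown in
    # the Part 4 diagrams; keeping them as CIDR prefixes is an assumption.
    import ipaddress
    from itertools import combinations

    assignments = {
        "Backbone": "192.168.1.0/24",
        "Sci-Math": "192.168.15.0/24",
        "CaddLab":  "192.168.5.0/24",
        "Public":   "207.62.203.0/24",
    }

    networks = {name: ipaddress.ip_network(cidr) for name, cidr in assignments.items()}
    for (name_a, net_a), (name_b, net_b) in combinations(networks.items(), 2):
        if net_a.overlaps(net_b):
            print(f"Conflict: {name_a} ({net_a}) overlaps {name_b} ({net_b})")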
PART 4 - SYSTEM LAYOUT DIAGRAMS
Diagram 4.00 Physical Connection Campus Distribution System Fiber Optic
Diagram 4.01 Fiber Optic Infrastructure
Diagram 4.02 Data Logical Network Overview
[Diagram: Redwoods Community College District backbone network, updated 22 Feb 2008. Shows the primary backbone switch (192.168.1.250, 100BaseFX only); building subswitches with 100BaseFX uplinks for Physical Science, Applied Technology, Administration of Justice, WAN, and ITS; campus and departmental subnets in the 192.168.0.0/16 range, including Arcata; the public network (207.62.203.0); the backbone network parameters (IP 192.168.1.0, subnet mask 255.255.255.0, default gateway 192.168.1.1, no DHCP server, IPX Ethernet_II network C0A80100); and 10 Mb/s subswitches serving the Mendocino Coast and Del Norte campuses.]
Diagram 4.03 Backbone Network Overview
[Diagram: backbone network overview, including the Mendocino Coast campus subnets MC-IS (192.168.10.0), MC-Adm, MC-DSPS, and MC-Lib.]
Diagram 4.04 Forum Theater
Diagram 4.05 Distance Education Control Room (Audio)
Diagram 4.05 Distance Education Control Room (Video)
Diagram 4.06 Video Conferencing System
Diagram 4.07 Telephone System PBX Layout
[Diagram: NEC NEAX IMS2400 PBX at the Eureka campus, linked through Adtran CSUs over six dedicated DS0s to NEC IPX2000 slave PBXs at the Del Norte, Mendocino, and Eureka Downtown campuses; digital and analog phones at each campus; voice mail system; PRI trunk lines to the AT&T central office. Drawn: Paul W. Agpawa, February 25, 2008.]
PART 5 - WAN IMPLEMENTATION
5.00
Intent
This section serves as a future reference for determining the viability and capability of
expanding the network system to the outlying campuses. Each campus is a network system capable of
stand-alone operation. Connectivity to the main campus will allow access to services presently available
only at the Eureka campus, and will also enhance services presently in use at the outlying campuses.
5.01
Network Installation at Del Norte/Mendocino Campuses
Networks at the Del Norte and Mendocino campuses will conform to the specifications outlined in
section 2.07. In addition, the network operating system will consist of Windows Server 2003, Sun Unix, or
Linux.
5.02
WAN Links Over Leased Lines
Connectivity to the outlying campuses is accomplished via T1 lines, with the bandwidth shared
among data, voice, and video traffic. Separation occurs at the CSU/DSU: voice is routed directly from the
CSU/DSU to the on-site sub-PBX via an FX card with assigned DS0s; video is separated on a dedicated
port (V.35) with its own assigned DS0 bandwidth; and data is separated in the same fashion as video.
Planned expansion includes a second T1, which will be dedicated to data only. The bandwidth freed on
the present T1 will be added to the voice channel, expanding the voice trunk to 11 voice channels and
1 control channel. The video bandwidth remains the same.
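The DS0 arithmetic behind this channel split can be sketched as follows; the voice (6 DS0) and video (12 DS0) figures come from sections 6.06 and 6.03 respectively, while the data remainder is an inference rather than a figure stated in this document:

    # Back-of-the-envelope DS0 arithmetic for a shared T1 (sketch only).
    DS0_KBPS = 64        # bandwidth of one DS0 channel
    T1_DS0_COUNT = 24    # DS0 channels in a full T1

    allocations = {
        "voice (5 trunk + 1 control)": 6,    # per section 6.06
        "video (CODEC via V.35 port)": 12,   # per section 6.03
    }
    allocations["data (remainder, inferred)"] = T1_DS0_COUNT - sum(allocations.values())

    for use, ds0s in allocations.items():
        print(f"{use}: {ds0s} DS0s = {ds0s * DS0_KBPS} kb/s")
    # voice (5 trunk + 1 control): 6 DS0s = 384 kb/s
    # video (CODEC via V.35 port): 12 DS0s = 768 kb/s
    # data (remainder, inferred): 6 DS0s = 384 kb/s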
5.03
Inter-District Internet Access
Interconnection with the Eureka campus, as described in section 5.02, will also allow access to the
CENIC network system. Connectivity will be gained through the Information Technology Services
network and the RCCD firewall.
5.04
Mainframe Access
As outlined in sections 5.02 and 5.03, the interconnection will also provide HP mainframe access to any
workstation on the network.
PART 6 - SYSTEM EXPANSIONS
6.00
Backbone
Because of advances in technology, especially in the multimedia area, traffic on the
backbone will increase to the point that it exceeds the operational speed in the original specifications. At
that point the backbone equipment should be retrofitted to operate at speeds equal to or exceeding
1000 Mb/s. In addition, it may become necessary to increase the fiber count to each building within the
backbone cabling.
6.01
Energy Management System
Heating, cooling, lighting, and energy control systems can be operated through a
network and extended throughout the campus via the fiber optic backbone. Presently, the
energy management system operates via an RS-422 bus architecture, and is IP based at the branch campuses.
6.02
Video Distribution System
The video distribution system will comprise an H.323 Multipoint Control Unit (MCU) and an IP
based MCU system adhering to the ITU-T H.323 standard. Support for CCITT QCIF, RS-422/RS-530,
RS-366, V.35, and NTSC composite will also be included. Building distribution will also include RG-6
coax and Cat 5e copper wire. This gives any video signal the flexibility and capability for both reception
and transmission, allowing for implementation of distance learning, classroom cable viewing, and
interactive video classes. The video infrastructure will be incorporated within the present system and will
operate over the existing copper infrastructure. Bandwidth requirements will consist of combined digital
video and data traffic, and reserved bandwidths for video and data will be observed. Equipment will
adhere to the standards listed in section 2.12, Other Related Equipment.
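As a rough illustration of observing a reserved bandwidth for video, the sketch below assumes a 100BaseFX building uplink (per the Part 4 diagrams) and a hypothetical 25% video reservation; only the 768 kb/s per-stream rate is taken from this document (section 6.03):

    # Capacity check for a reserved video bandwidth share (sketch only).
    UPLINK_KBPS = 100 * 1000        # 100BaseFX building uplink
    VIDEO_RESERVED_FRACTION = 0.25  # hypothetical reservation, not specified here
    STREAM_KBPS = 768               # H.323 CODEC rate, per section 6.03

    reserved_kbps = UPLINK_KBPS * VIDEO_RESERVED_FRACTION
    streams = int(reserved_kbps // STREAM_KBPS)
    print(f"Reserved for video: {reserved_kbps:.0f} kb/s "
          f"-> about {streams} concurrent 768 kb/s streams")
    # Reserved for video: 25000 kb/s -> about 32 concurrent 768 kb/s streams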
6.03
Distance Education
The distance education implementation will be in compliance with the QCIF specification for
teleconferencing and telecommunications equipment. The QCIF specification will allow interface into the
proposed 4C-net and will also allow for multi-point conferencing. Interface links will comply with the
V.35, RS-422A/RS-530, RS-366, ISDN, Switched 56K, and NTSC composite standards for connection to
the T-1 TSU and related equipment, and will also interface with the network infrastructure in accordance
with the H.323 and H.320 standards (as described in 6.02).
Transmission speed for the CODEC will be 768 kb/s, allowing for a 30 FPS broadcast quality picture.
Links through the T-1 will utilize a partial circuit with 12 DS0s (12 × 64 kb/s = 768 kb/s) and a dedicated
V.35 interface port.
The classrooms will be set up with high grade studio and broadcast quality video cameras and
audio equipment. Control and switching will be accomplished through the control room system located
within the Learning Resource Center (LRC) building. As more classrooms are set up within each building,
sub-control rooms coordinated by the master control room may become necessary. Head-ends for
public cable broadcasting will be located in the control room and within each educational center.
6.04
Video Teleconferencing
Video teleconferencing will utilize equipment as specified in section 6.02 (paragraph 1) and will be
distributed throughout the district as described in that section.
6.05
CENIC
The planning and implementation of the Redwoods District fiber optic backbone and high speed data
network infrastructure has been in compliance with established industry standards. This allows for
compatibility and interface to CENIC, which can be incorporated within the parameters of the "Baseline for
Planning and Implementing an Internal-Campus Telecommunications Infrastructure Systems for the
California Community Colleges."10 The link to CENIC consists of a DS3 connection which provides
integrated data and video services.
6.06
Telephone System
Each of the three campuses is linked to the PBX system via a partial T1 connection utilizing
6 DS0s: 5 DS0s form the trunk, with the remaining DS0 serving as control. Connection to Pac Bell is
accomplished via 2 PRI trunk lines; the PRI trunks support the PBX for voice communications. A
linking PRI is utilized by the MCU for IMUX bonding dial-in and dial-out functions. Additionally, the PBX
is capable of upgrades that will support VoIP.
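For reference, a standard North American PRI carries 23 bearer (B) channels plus one signaling (D) channel per T1, so the two voice PRIs work out as in the sketch below (simple arithmetic, not a figure stated in this document):

    # PRI trunk capacity arithmetic (sketch only).
    B_CHANNELS_PER_PRI = 23   # bearer channels per North American PRI (23B+D)
    VOICE_PRIS = 2            # PRI trunks to Pac Bell, per section 6.06

    calls = VOICE_PRIS * B_CHANNELS_PER_PRI
    print(f"{VOICE_PRIS} PRI trunks -> up to {calls} simultaneous voice calls")
    # 2 PRI trunks -> up to 46 simultaneous voice calls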
10 Facilities Planning and Utilization Unit, Fiscal Policy Division, Chancellor's Office, California Community Colleges, Sept. 1996.
PART 7 - INFRASTRUCTURE RETROFIT AND CLASSROOM TECHNOLOGY
7.00
Infrastructure Upgrades and Enhancements
To meet growing demand and advancements in technology, and to support the classroom
technology, the current RCCD network infrastructure will need to be upgraded and enhanced.
Abstract
The purpose of this document is to determine the logistics and cost factors of implementing a district-wide
network infrastructure upgrade project.
Summary
1. Sites
a. Eureka Campus
Site statistics: 31 classrooms, 53 ASF spaces, 9 student occupied buildings, DS3 connection
(CENIC)
b. Eureka Downtown Center
Site statistics: 4 classrooms, 2 conference rooms, T1 connection
c. Arcata Instructional Site
Site statistics: 2 classrooms, hospitality kitchen, T1 connection
d. Prosperity Center
Site statistics: 1 classroom, Cox Internet connection
e. Klamath/Trinity (Hoopa)
Site statistics: ADN 56K connection
f. Fort Bragg
Site statistics: 6 classrooms, 10 ASF spaces, 2 T1 connections
g. Del Norte
Site statistics: 4 classrooms, 8 ASF spaces, 2 T1 connections
2. Infrastructure Requirements
a. MPOE
Firewall replacement and/or ASA blade, DMZ implementation, QoS implementation, 6509
Supervisor II upgrade or replacement, GBIC LX SFF implementation for the backbone.
b. Building POE
GBIC LX QoS switch, 10/100/1000 QoS workgroup switches, DHCP server, wiring closet,
horizontal wiring frame with patch panels (network/voice), UPS, and routers.
7.10
Proposal Overview and Scope
Overview
The purpose of this specification is to bring the data network to current technology standards. This will
make several new services and features available, including video remote interpreting and video over IP,
and will bring network security up to current technology. These upgrades will also allow the district to
keep pace with the proposed bond projects and provide the necessary services for the technology employed
within the new classrooms.
Scope
This proposal is an enterprise-wide project spanning approximately three years. Each phase will focus
on certain aspects of the infrastructure, and the phases may be implemented independently or in
conjunction with one another.
Phase I
Upgrade of the aggregate switch, replacement of the firewall, reorganization of the logical network DMZ,
QoS implementation, VLAN implementation, and implementation of network security protocols.
Phase II
Replacement and upgrade of edge switches, and implementation of GBICs on the backbone.
Phase III
Replacement and upgrade of local networks, and implementation of a wireless VLAN.
Other Considerations
Restructuring and relocation of the MPOE will be necessary since its current location is scheduled to be
demolished. The LRC basement has been chosen as the most likely site: it is a central location for new
building accesses and offers adequate space as well as a controllable environment. The
AT&T trunk will also need to be relocated to the new site.
The fiber backbone will be expanded to a 64 fiber strand bundle (multimode) to accommodate current and
future services. Telephone cabling will be included with the relocation and will need to be run to all of the
new buildings.
7.12
Smart Classrooms
To maintain a consistent standard of classroom presentation equipment throughout the district, this
specification was developed with the bond planning committees during the early stages of project
development. The equipment falls into several categories and forms an integrated system:
1) Instructional computer equipment
2) Presentation equipment
3) Audio visual equipment
4) Network infrastructure connectivity equipment
5) Furniture
6) Accessibility
Equipment Specifications Guideline
Tangent Computer: Desktop tower or all-in-one unit, RCCD standard as per contract
Motorized Screen (90" x 96"): Draper Salara powered or non-powered projection screen, W71-32365 or equivalent
LCD Projector: Epson PowerLite 822p or equivalent
LCD Projector Spare Bulb: For the Epson PowerLite 822p or equivalent
Digital Presenter (Elmo type): Samsung SDP-900DXA or equivalent
Audio System: Pioneer receiver #A-35R, Paradigm Atom speakers (pair), and Paradigm mounts, or equivalent
DVD/VHS Player: Sony SLV-D271P DVD/VHS player/recorder or equivalent
Accessibility Requirement: As per DSPS ADA specifications
7.13
Forum Theater
As per specifications given by PCD Consulting via Nadon and Associates.
See diagram 4.04.
GLOSSARY OF ACRONYMS
Americans with Disabilities Act (ADA)
American National Standards Institute (ANSI)
Building Industry Consulting Service International (BICSI)
Corporation for Education Network Initiatives in California (CENIC)
California State Fire Marshal (CSFM)
Channel Service Unit (CSU)
Data Distribution Frame (DDF)
Digital Service Unit (DSU)
Electronic Industries Association (EIA)
Federal Communications Commission (FCC)
Fiber Optic Inter-Repeater Link (FOIRL)
Institute of Electrical and Electronics Engineers (IEEE)
Integrated Services Digital Network (ISDN)
Intermediate Distribution Frame (IDF)
Local Area Network (LAN)
Main Distribution Frame (MDF)
Main Point of Entry (MPOE)
Media Access Control (MAC)
Medium Attachment Unit (MAU)
Multipoint Control Unit (MCU)
National Electrical Manufacturers Association (NEMA)
National Electrical Code (NEC)
National Fire Protection Association (NFPA)
Occupational Safety and Health Administration (OSHA)
Office of Statewide Architect (OSA)
Point of Entry (POE)
Remote Monitoring (RMON)
Simple Network Management Protocol (SNMP)
Underwriters Laboratories Standards (UL)
Unshielded Twisted Pair (UTP)
Universal Wiring Cabling System (UWCS)
Virtual Local Area Network (VLAN)
Virtual Network (VNET)
Video Service Unit (VSU)
Wide Area Network (WAN)