Prepared by: - Farr Institute (Scotland)

Project Engagement Document
University of Dundee
Farr Institute IT Infrastructure Design Document
Release: 1.0
Date: 09/01/2014
Author: Richard Feltham
Document ID: University of Dundee - Farr Institute IT Infrastructure Design Document
SDS No: S11805
Project No: PR
Version: 1.0
Index
1 DOCUMENTATION INFORMATION .........................................................................................4
1.1 Document Location ....................................................................................................................4
1.2 Revision History .........................................................................................................................4
1.3 Contact Information ...................................................................................................................4
2 EXECUTIVE SUMMARY ...........................................................................................................5
2.1 Business Direction .....................................................................................................................6
2.2 Project Goals .............................................................................................................................6
3 REQUIREMENTS ......................................................................................................................8
3.1 Outline Requirements ................................................................................................................8
3.1.1 Locations ........................................................................................................................8
3.1.2 Network ..........................................................................................................................8
3.2 Non-functional Requirements ....................................................................................................9
4 SOLUTION DESIGN ................................................................................................................ 10
4.1 Solution Overview .................................................................................................................... 10
4.2 File Storage Platform ............................................................................................................... 12
4.3 Semi-Structured Data Platform .................................................................................. 14
4.4 High Performance Storage ...................................................................................................... 15
4.4.1 Disk Configuration ........................................................................................................ 15
4.5 Large Memory Machine ........................................................................................................... 16
4.6 Server Infrastructure Nodes..................................................................................................... 16
4.7 High Speed Interconnect and Management Network .............................................................. 17
4.8 Firewalls and VPN ................................................................................................................... 18
4.9 Business Continuity, Disaster Recovery and Archiving ........................................................... 19
4.9.1 Sizing ........................................................................................................................... 20
4.9.2 File Storage Platform Backup ...................................................................................... 20
4.9.3 Tape Encryption ........................................................................................................... 21
4.9.4 Backup Catalogue ........................................................................................................ 21
4.10 Cloud and Systems Management Software ....................................................................... 21
4.11 Virtual Desktop Infrastructure ............................................................................................. 25
5 ARCHITECTURAL DECISIONS .............................................................................................. 27
6 MANAGEMENT SERVERS ..................................................................................................... 28
6.1 Primary Site ............................................................................................................................. 28
6.2 DR Sites .................................................................................................................................. 29
6.3 Implementation ........................................................................................................................ 29
7 PHYSICAL ENVIRONMENT ................................................................................................... 30
APPENDIX A – RACK LAYOUT ................................................................................................... 31
A.1 ACF ......................................................................................................................................... 31
A.2 Dundee – DR Site1 .................................................................................................................. 32
A.3 Dundee – DR Site3 .................................................................................................................. 33
APPENDIX B – NETWORK SWITCH CONFIGURATION ............................................................ 34
B.1 GSS24 ..................................................................................................................................... 35
B.2 x3750 M4 .................................................................................................................. 35
B.3 Management Servers .............................................................................................................. 36
B.4 Storwize V7000 ....................................................................................................................... 36
B.5 NeXtScale – Semi-structured Nodes ....................................................................................... 36
B.6 NeXtScale – VDI ...................................................................................................................... 36
B.7 NeXtScale – Compute Nodes .................................................................................................. 37
APPENDIX C – SAN PATCHING AND ZONING .......................................................................... 40
C.1 SAN Patching .......................................................................................................................... 40
C.2 SAN Zoning ............................................................................................................................. 40
Tectrade Computers Limited – Confidential
APPENDIX D – TSM CONFIGURATION .............................................................................. 42
D.1 TSM Sizing .............................................................................................................................. 42
D.1.1 Source Data ................................................................................................................... 42
D.1.2 Primary Storage Pools.................................................................................................... 42
D.1.3 Tape Requirements ........................................................................................................ 42
D.1.4 TSM Clients .................................................................................................................... 42
D.2 TSM Internal Disk Configuration .............................................................................................. 42
D.3 TSM Disk Storage Pools ......................................................................................... 43
APPENDIX E – SYSTEM CONFIGURATIONS ............................................................................. 44
E.1 GSS24 ..................................................................................................................................... 44
E.2 x3750 M4 .................................................................................................................. 47
E.3 Management Servers .............................................................................................................. 47
E.4 Storwize V7000 ....................................................................................................................... 48
E.5 NeXtScale – Semi-structured Storage Nodes.......................................................................... 48
E.6 NeXtScale – VDI ...................................................................................................................... 50
E.7 NeXtScale – Compute Nodes .................................................................................................. 51
E.8 TSM ......................................................................................................................................... 52
E.9 Network Switching ................................................................................................................... 53
E.10 Racking.................................................................................................................................. 53
APPENDIX F - SITE INFORMATION ............................................................................................ 54
F.1 Production – Primary Site ........................................................................................................ 54
F.2 DR Sites................................................................................................................................... 54
F.2.1 DR Site 1 ........................................................................................................ 54
F.2.2 DR Site 2 ........................................................................................................ 54
1 Documentation Information
1.1 Document Location
The source of this document can be found in the following location:
X:\U\University_of_Dundee\Closed\2013\131031_S18805_Farr_Data_Analytics
Prepared for:
University of Dundee
Prepared by:
Tectrade Computers Limited
River Court,
Mill Lane,
Godalming, Surrey
GU7 1EZ
1.2 Revision History
Date of this revision: 15/01/2014

Version Date    Summary of Changes    Version Number
15/01/2014      Initial Release       V1.3
1.3 Contact Information
Tectrade Contacts:

Name              Office Phone    Mobile Phone    Email Address
Richard Feltham   01483 908319    07887 725626    Richard.Feltham@tectrade.com
Mike Rickards     01423 340942    07974 356434    Mike.Rickards@tectrade.com
Grant Bean                        07976 632095    Grant.bean@tectrade.com
Neil Ballinger    01483 521944    07590 245135    Neil.ballinger@tectrade.com
Universities of Dundee and Edinburgh Contacts:

Name              Office Phone     Mobile Phone    Email Address
Jonathan Monk     01382 388723     07985 039758    j.g.c.monk@dundee.ac.uk
Mike Brown        0131 445 7834                    Mike_Brown@ed.ac.uk
©2014 Tectrade United Kingdom. This document contains information which is confidential and of
value to Tectrade Computers Ltd. Tectrade Computers Ltd's prior written consent is required before
any part is redistributed or reproduced.
2 Executive Summary
The Farr Institute Scotland, a collaboration between NHS Scotland and six academic institutions
(the Universities of Dundee, Aberdeen, Strathclyde, Glasgow, Edinburgh and St Andrews), has been
funded to develop a new system for the analysis of Electronic Health Records (EHRs) for research
purposes in Scotland, as part of a broader network across the UK. The overall objective of the Farr
Institute network is to harness the knowledge from health records data to improve the health of
patients and communities by establishing a pre-eminent interdisciplinary UK health informatics
research institute.
Like many healthcare customers, the Farr Institute is faced with the challenge of managing very
sensitive data that is growing exponentially; 80% of medical data is unstructured and is clinically
relevant. This data, which is frequently under-utilised, resides in multiple places like individual EHRs,
lab and imaging systems, physician notes, medical correspondence, and various patient
management systems.
The goal of Farr Institute Scotland is to gain better access to valuable clinical data and to harvest it
for information using advanced analytics in order to improve the quality of care, reduce healthcare
costs and encourage the right behaviour and improve patient outcomes. Research groups should
only have access to data via trusted, secure data repositories (“safe havens”) where identifiable
individual-level data is held. The informatics system must then allow secondary use of the data for
research and other approved purposes.
This solution provides the foundational elements for such a ‘safe haven’ where researchers can
access the linked datasets for research purposes. Due to the key nature of the requirement and the
sensitivity of the data, it is crucial that the Farr Institute networks adopt a solution that is fully
integrated, where the key components are proven to work together and are fully committed and
supported products from a major vendor. This approach improves manageability and will also
reduce the risk of data loss through ‘holes’ in the solution or poorly supported and developed
software.
The solution from Tectrade and IBM has the following notable benefits:
(1) Provides an integrated platform which offers a structured, managed interface layer that
abstracts the system interfaces, enabling Farr Scotland to respond more quickly to change.
(2) Provides a means to integrate information, processes and people to deliver tailored patient
care and clinical decision support solutions.
(3) Enables clinical applications to exchange data more easily, which can better inform hospital
systems of patient details and their medical requirements.
(4) Analyses both structured and unstructured data in a secure, scalable and automatic fashion
to better understand all interactions the patient has had in a clinical environment.
(5) Captures, stores and utilizes data in real time to provide more proactive alerts to clinicians
and help improve the quality of care.
(6) Combines and analyses the structured and unstructured data to match treatments with
outcomes, predict patients at risk for disease or readmission, and provide efficient care and
use of staff.
(7) Provides environments that can scale in multiple dimensions as the needs of Farr Scotland
grow and evolve, allowing additional analytical tools or data capture tools to be integrated as
required.
(8) Builds upon the existing infrastructure and skills at the University of Dundee minimising the
costs of training and allowing the budget to be spent where it can provide the most benefit.
2.1 Business Direction
Data are recorded about individuals by many different organisations. Linking of these data is
invaluable for research. Linking different types of health data such as the Scottish Morbidity Records,
GP Data, Dispensing Records, Hospital Admissions and Lab Results is extremely powerful for many
reasons such as directing government policy, finding subjects to recruit into clinical trials,
investigations of poly-pharmacy, research into personalised medicine, investigations into the
efficacy of drugs, the progression of diseases and analysis of disease management, to name just a
few. Linking health data to other data types, such as records from social services, educational
institutions and the police, adds additional dimensions so that other factors can also be analysed.
The potential for different studies is immense; however, the field is in its infancy and the linking
of data sets is rarely undertaken routinely.
In the past, researchers needing to link data sets for a particular study have had to seek approval
for the right to obtain these data. Consent may not be given to release the data to an individual
researcher without the data being anonymised. This means that it will then not be possible to link
the required data sets, in which case the investigation may not be possible. In the event that the
data custodian does allow the release of the identifiable data, the researcher will have to
anonymise the results before publishing any information.
To resolve this issue, the National and Local Safe Havens within Scotland have been providing
linked data for research in a secure anonymised way. The safe havens provide a service to link
the data, anonymise the information and then release the data into a remotely accessible virtual
machine, a “safe haven”, for researchers to analyse. The system has strictly controlled permission
restrictions to ensure that researchers can only view the data specific to their particular project.
Researchers cannot copy or remove the raw anonymised data sets from the safe haven; any
requested outputs are checked by approved staff to ensure they are appropriate for release.
Researchers are not allowed copies of the raw anonymous data to reduce the chance that
combining information from many different anonymous data sets could lead to subjects being
identified.
There is a principle adhered to within Farr that unconsented data (anonymised or otherwise)
should not be given directly to research groups to manage on their own hardware. Research
groups should access the data via what is described as a “safe haven”. A safe haven in the spirit
of the Walport Report 2005 is a trusted, secure repository where identifiable individual-level data is
held as part of an informatics system that will allow secondary use of the data for research and
other, approved purposes.
The solution infrastructure will provide the “safe haven” infrastructure for researchers to access
these linked datasets. As this will be the only way for researchers to access the data, it is
imperative that the infrastructure is fit for purpose and meets researchers' needs.
The six local safe havens across Scotland (corresponding to the six academic collaborators) and
the National safe haven groups will also have to access the system via secure virtual machines.
These service groups will have different levels of access but must be able to provide tools and
virtual appliances for the research groups.
2.2 Project Goals
Farr UK Vision: To harness the knowledge from health records data to improve the health of
patients and communities by establishing a pre-eminent interdisciplinary UK health informatics
research institute. The Farr Institute will be concentrated in the four Centres, act as the nexus of the
wider UK Institute network, and make the UK the go-to place for data-intensive, translational
health science.
Farr UK Aim: The aim of the Farr Institute is to integrate and scale, at the UK level, the work of
the four Centres. The Institute will thus provide a new UK-wide focus for each of the work-streams
in the Centres: e-infrastructure, research, capacity and public engagement.
3 Requirements
3.1 Outline Requirements
The University of Dundee on behalf of the Farr Institute of Health Informatics Research Scotland
has requested the supply, support/maintenance and delivery of servers, storage/backup solutions
and associated management hardware and software for an IT infrastructure to meet the provision
of Electronic Health Records (EHRs) for research purposes.
EHRs are collected routinely by the NHS. These records are in a format which is fit for the purpose
of clinical management and recording. To use these records for research, the structure of the
underlying data needs to be altered to aid querying, additional meta-data is required, user interfaces
are needed to view the data, and stringent security has to be implemented so that access is
granted only to groups with governance approval to view the data sets. In addition, Farr is
concerned with the
linkage of non-consented EHRs and so the data has to be anonymised before it can be accessed.
The aim of the IT infrastructure is to provide systems and capability for storage and analysis of
Electronic Health Record (EHR) data in a wide variety of forms. Given the diversity of requirements,
a single analytics platform is deemed too inflexible; the focus is therefore on providing an
on-demand, private-cloud-like environment with per-project, multi-tenant isolation within a secure
boundary envelope. The main deliverables will be 2+ PB of scalable, hierarchical raw storage for
file, semi-structured and structured data; a flexible computing environment including large-memory
capability; and a private cloud ensemble for rapid provisioning of new systems running a wide
variety of operating systems and environments, all connected via a high-speed interconnect and
secured from external access by an advanced firewall and remote access solution. Extensive
orchestration, provisioning and configuration management software will reduce the need for
significant system administration overhead. Advanced analytics capability that can be delivered as
part of the overall solution is highly desirable.
As the Farr Institute wishes to pursue a best-of-breed approach, the infrastructure has been
broken down into a number of systems within the overall project. It is vital that the solution
delivered provides the most flexible systems and topology possible, as it is highly likely that the
required system will need to evolve over time to meet varying user demands.
3.1.1 Locations
The Farr IT Infrastructure will be located within three geographically distinct locations, with the
intention of using these as a prime site and two disaster recovery (DR) sites. The DR sites will
provide an offsite recovery position in the event of a significant event affecting the prime site.
Two DR sites ensure that a second copy of all backup data exists in the event of the loss of one
DR site.
3.1.2 Network
All systems will be connected to the JANET network in order to leverage existing investment in
advanced high-speed networking. A high level of security is required throughout the system due to
the nature of the data being stored and analysed; therefore all data must be encrypted while in
transit.
3.2 Non-functional Requirements
Whilst non-functional requirements were not specified as part of the RFP, we have captured a
high-level view of how the system may perform against a common set of non-functional requirements,
for reference. Enhancing any requirement where the solution does not meet the desired end state
will be possible but may require a design change request.
 Scalability – Almost all components can be extended, and the main compute systems are
designed to be deployed in a grid-like manner. For example, Platform Cluster Manager –
Advanced Edition is tested up to 500 nodes, with additional testing planned to raise this to
2,500 nodes. The two components that may need more attention are the storage and network;
both have switch/chassis considerations that may require either an additional storage
enclosure or the introduction of a network aggregation layer.
 High Availability – For critical components resilient configurations (for example using
redundant power supplies) have been deployed. For the compute nodes, Platform Cluster
Manager and the batch scheduler, LSF, will monitor servers and jobs. Both can automatically
resubmit work or build additional servers should issues arise that affect jobs.
 Disaster Recovery - All of the compute and fast storage has been configured for the
Edinburgh ACF site. Data will be protected by Tivoli Storage Manager and this will be backed
up to a remote facility via the Janet network. Note however that to access this information in
the event of a disaster the management, compute and network will be required at a minimum
to rebuild the compute systems.
 Security – Access to the system will be via a next generation firewall and will be managed
via virtual desktop systems. Additional logging and auditing can be implemented using the
core Windows/Linux/Application tooling should this be required.
 Data Centre Constraints
 Manageability – The system should be easy to manage and where possible commercial
software should be used to provide supported function.
4 Solution Design
4.1 Solution Overview
This solution allows The University of Dundee to expand their current information infrastructure and
analytics capabilities to address big data challenges of Farr Scotland. It incorporates the idea that
big data and analytics are inextricably linked – from infrastructure to big data platform, to analytics
and ultimately to a solution with domain content to accelerate the time to value.
The solution leverages key capabilities from IBM's Big Data Platform. The IBM Big Data Platform is
a framework that merges traditional information management capabilities with the speed and
flexibility required for ad hoc data exploration, discovery, and unstructured analysis. The solution
will allow Farr Scotland to apply advanced analytics to information in its native form, and visualize
all available data for ad hoc analysis. It delivers the development environment required for building
new analytic applications. Workload optimization, scheduling, security and governance also play
key roles to address the challenges of delivering analytic capabilities as cloud services.
File Storage Platform
The file storage platform utilises the IBM General Parallel File System (GPFS) Storage Server
(GSS). GSS has been specifically designed for Big Data environments and offers significant
performance and functionality advantages over conventional RAID solutions. The RAID controller
functionality is provided in the software on the integrated IBM servers, and the use of de-clustered
arrays allows up to 18 drives to be pulled simultaneously from a running array without any data loss
and much more rapid data rebuild times, as all drives participate in the rebuild. Because the
appliance controls the end-to-end data path, extreme data integrity is built in as standard.
GSS also provides integration with the TSM backup solution offering seamless backup and
hierarchical storage management. Both can scale out as required with the addition of more building
blocks.
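The rebuild-time advantage of de-clustered arrays can be illustrated with some back-of-the-envelope arithmetic. The per-drive rebuild bandwidth and array width below are illustrative assumptions, not measured GSS figures:

```python
# Illustrative comparison of conventional vs de-clustered RAID rebuild times.
# The per-drive rebuild bandwidth (100 MB/s) and the 58-drive de-clustered
# array width are assumptions for the arithmetic, not measured GSS numbers.

DRIVE_TB = 4            # capacity of one NL-SAS drive, TB
REBUILD_MBPS = 100      # assumed sustained rebuild bandwidth per drive, MB/s

def rebuild_hours(participating_drives: int) -> float:
    """Hours to reconstruct one failed 4 TB drive when `participating_drives`
    drives share the rebuild work in parallel."""
    bytes_to_rebuild = DRIVE_TB * 1e12
    rate = REBUILD_MBPS * 1e6 * participating_drives
    return bytes_to_rebuild / rate / 3600

# Conventional RAID: the rebuild is bottlenecked on the single spare drive.
conventional = rebuild_hours(1)
# De-clustered array: every surviving drive contributes a share of the rebuild.
declustered = rebuild_hours(57)

print(f"conventional: {conventional:.1f} h, de-clustered: {declustered:.2f} h")
```

Under these assumptions a single-spare rebuild takes over eleven hours, while spreading the same work across a de-clustered array brings it down to minutes, which is why all-drive participation matters at 4 TB drive sizes.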
Semi-structured Data Platform
To provide a distributed compute and storage platform for the semi-structured data, this solution
utilises IBM InfoSphere BigInsights. BigInsights takes the standard Hadoop platform and makes a
number of changes, such as replacing HDFS with GPFS and including the Platform EGO scheduler to
improve workload management. There are a number of benchmarks showing the performance benefits,
and it offers a fully supported and simply installed stack. The BigInsights platform simplifies the
install and offers tools that allow users to create queries using simpler interfaces such as
spreadsheets, which is critical for researchers to achieve faster time to value.
High Performance Storage
To provide the high performance storage function this solution utilises the IBM Storwize V7000 Disk
System. The IBM Storwize V7000 delivers unparalleled performance and is capable of providing
automated tiering and supports the use of SSD, SAS, or NL-SAS drives. IBM Storwize V7000 also
provides snapshot (FlashCopy) and thin-provisioning capabilities as standard.
For connectivity to the VDI and structured data platforms, the IBM Storwize V7000 utilises 10 Gbps
iSCSI for presentation of storage via the high-speed interconnect.
The use of the IBM Storwize V7000 provides a virtualized software defined storage system designed
to consolidate workloads into a single unit for simplicity of management, reduced cost, highly
scalable capacity, high performance and high availability.
Large Memory Machine
For the large memory machine requirement, this solution includes the IBM System x3750 M4 server. Within
a dense 2U design, the IBM System x3750 M4 provides advanced features and capabilities. These
include support for up to four sockets and 48 DIMMs, mix and match internal storage, up to 16 HDDs
or 32 eXFlash SSD drives, six hot-swap dual rotor fans, two power supplies and integrated 1 Gigabit
Ethernet (GbE) and 10 GbE networking with options for fibre or copper.
The x3750 M4 excels in high performance computing (HPC), offering a balance of high
computational power with high-IOPS local storage and fast I/O to external SAN storage. With an ultra-dense design, the x3750 M4 can help conserve floor space and lower data centre power and cooling
costs. Flexible eXFlash SSD storage options can deliver extreme internal storage performance to
support your most demanding applications. The x3750 M4 is ideal for applications and workloads
that have outgrown their 2-socket systems but do not require mission-critical RAS and availability
features.
Server Infrastructure Nodes
The compute nodes are all based on the IBM NeXtScale nx360 M4 server node. NeXtScale offers
a superior building block approach for hyperscale computing built around one architecture and
optimised for many use cases.
NeXtScale compute nodes provide dense computing in a half wide form factor offering high
performance, coupled with energy efficiency and IO flexibility.
High Speed Interconnect and Management Network
The network solution is based on IBM System Networking switches, as these are high-performing,
low-latency rack-mounted switches which also integrate into the Platform Computing Cluster
Manager cloud management suite.
The G8264 (48 10GigE and 4 40GigE) and G8052 (48 GigE and 4 10 GigE) switches will provide
VLAN aggregations for maximum link bandwidth as well as routing and QoS.
Firewalls and VPN
We have selected Stonesoft appliances for the firewall based on the estimated JANET bandwidth.
It should be noted, however, that if the firewalls are to provide L3 routing for inter-VLAN
traffic, then the requirements for this traffic will need to be factored into the solution.
Business Continuity, Disaster Recovery and Archiving
For the disk backup, where de-duplication has been mentioned, we have sized a set of backup
servers to allow for server side de-duplication to be performed. We have also included a set of
management servers that will be required to run, amongst other things, Key Management software.
Cloud and Systems Management Software
The solution includes Platform Cluster Manager Advanced Edition, Platform LSF, Platform
Application Centre and Platform Process Manager. This comprehensive suite will allow the system
to be effectively managed with minimal resources through automation, as well as empowering end
users with self-service capabilities.
Virtual Desktop Infrastructure
This solution provides the base infrastructure to support the VDI environment, based on IBM
NeXtScale. All software and services pertaining to the VDI environment will be based on Citrix and
be supplied by the University of Dundee.
Authentication Environment
Authentication for this solution is based on Microsoft Active Directory. This environment will be
supplied by the University of Dundee along with DNS and DHCP.
4.2 File Storage Platform
The File Storage Platform consists of a single GSS24 building block, which includes two IBM System
x3650 M4 servers and four disk enclosures. In total, the configuration consists of 232 NL-SAS disks
and six solid state drives (SSDs). The 232 x 4TB NL-SAS drives provide 928TB of RAW capacity.
The SSDs are used to buffer small write I/Os and to log the de-clustered RAID metadata.
-12Tectrade Computers Limited – Confidential
The IBM GPFS Native RAID software, which runs on the servers, offers sophisticated data integrity,
using end-to-end checksums for both read and write operations, and dropped-write detection.
The GSS24 storage server, using the latest 4TB disk drives, has been configured with 10 GigE
presentation for connection to the high speed interconnect. We propose configuring 8+2P protection;
whilst 8+3P can be used, with TSM backup protection in place and only a single unit deployed, this
level of resilience is sufficient. The usable space after formatting and allowing for some hot spare
space will be 680TB to house data and metadata.
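The capacity figures above can be sanity-checked with a little arithmetic (a minimal sketch; the exact formatted capacity depends on GPFS Native RAID overheads, so the gap between the post-parity figure and the quoted 680TB is accounted for by hot-spare space and formatting):

```python
# Sanity check of the GSS24 capacity figures quoted above.
drives = 232
drive_tb = 4

raw_tb = drives * drive_tb            # 928 TB raw, as stated
after_parity_tb = raw_tb * 8 // 10    # 8+2P: 8 data strips out of every 10

print(raw_tb, after_parity_tb)        # 928 742
```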
For NFS provision, a further two clustered GPFS NSD nodes will be deployed, serving the file system
via the built-in Clustered NFS function (cNFS). For CIFS provision, SAMBA and CTDB packages will
also be deployed.
Access to the file system is via high speed interconnect connecting to the Janet Production network.
Further details are contained in Appendix B.
Data will be archived from the GSS24 and NSD servers to the DR sites using the Hierarchical
Storage Management features of IBM Tivoli Storage Manager. Using the GPFS policy engine (which
is parallel and scales with the number of GPFS Servers) and making use of the fast metadata disks,
policies will be defined and schedules created. By using HSM, if a file is already backed up then it
will just be migrated to the relevant TSM Storage Pool and then stubbed on the disk (removing the
data).
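As an illustration of the policy-driven migration described above, a GPFS ILM policy might look like the sketch below. This is a hedged example only: the external pool name, thresholds and the HSM exec script path are assumptions, and the actual rules and schedules will be defined as part of the deployment.

```
/* Register TSM/HSM as an external migration target (script path illustrative) */
RULE EXTERNAL POOL 'hsm' EXEC '/var/mmfs/etc/mmpolicyExec-hsm.sample'

/* When the system pool passes 90% full, migrate the least-recently
   accessed files to the HSM pool until occupancy falls back to 70% */
RULE 'toTape' MIGRATE FROM POOL 'system'
    THRESHOLD(90,70)
    WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
    TO POOL 'hsm'
```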
This process will be performed by TSM / HSM agents installed on the cNFS GPFS Servers; these
can target a remote TSM Server over the IP network, which allows the server to be
located at a remote site if required. Standard SSL communication encryption will be used by TSM
to secure this communication link.
File Storage Platform deployment will be provisioned by IBM Services; as such, it is currently out
of scope of this design document and will be covered under a separate scope of works. This will
include decisions around Access Controls, Quotas, Snapshots, and HSM/TSM policies and
schedules.
4.3 Semi-Structured Data Platform
The solution includes a 6 node cluster, with one node acting as the management node and the
remaining 5 acting as data nodes. The management node will be responsible for running the web
console, JAQL and Job Tracker services. The 5 data nodes are identical from a BigInsights point of
view, except that some of them will be performing GPFS functions.
The solution has kept as close as possible to the IBM reference architecture by recommending that
the management node not be used for storing and querying data. The additional disk has been
designated to be used as a staging/landing zone. However, it would be possible to use this additional
disk for storing BigInsights data if required.
The platform includes a single NeXtScale chassis and 6 nx360M4 storage nodes. Each nx360M4
has dual Intel Xeon E5-2660V2 10 core processors with 128GB memory, 8 x 4TB NL-SAS, and
10GbE connectivity.
All software elements of the semi-structured data platform deployment will be provisioned by IBM
Services, as such this is currently out of scope of this design document and will be covered under a
separate scope of works.
All hardware implementation, network patching, and initial system testing will be carried out by
Tectrade services and covered under the agreed scope of works.
4.4 High Performance Storage
The solution for the high performance storage is the IBM Storwize V7000 utilising 600 GB 10k RPM
SAS HDDs. The unit comprises a dual controller enclosure plus an expansion enclosure, resulting
in a system total of 42 x 600GB disks. This provides 25.2 TB of raw storage including provision for
spare drives. A Model 324 Controller has been selected which has the optional 10 GigE iSCSI
interface cards installed. This provides a pair of resilient connections from each of the redundant
controllers within the V7000. This will provide high performance storage for management functions
that need HA, fast storage for the compute nodes and other uses as required.
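The raw figure follows directly from the drive count (a quick check; the 5 x 7+1 array layout detailed in the next section leaves two drives over as the spare provision mentioned):

```python
# Quick check of the V7000 raw capacity and spare provision quoted above.
disks = 42
raw_tb = disks * 600 / 1000        # 42 x 600GB = 25.2 TB raw (decimal TB)

arrays, width = 5, 7 + 1           # five 7+1 RAID 5 arrays (see 4.4.1)
spares = disks - arrays * width    # 2 drives remain as hot spares

print(raw_tb, spares)              # 25.2 2
```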
4.4.1 Disk Configuration
The table below details the recommended disk configuration to achieve the optimum performance:

MdiskGroup: ACF_V7000_1    Extent size: 1 GB
No. Arrays: 5    Parity: 7+1    RAID: 5    Disk: 600 GB SAS

Array / Mdisk        Type     Size
ACF_V7000_1_mdisk0   Striped  4195GB
ACF_V7000_1_mdisk1   Striped  4195GB
ACF_V7000_1_mdisk2   Striped  4195GB
ACF_V7000_1_mdisk3   Striped  4195GB
ACF_V7000_1_mdisk4   Striped  4195GB

University of Dundee are to provide details if the required RAID configuration differs from the
above.
4.5 Large Memory Machine
The solution includes an IBM x3750 M4 server with Intel Sandy Bridge E5-4610 processors. The
server is configured with four six-core E5-4610 processors and 48 DIMM slots. To achieve over
1TB across 48 DIMM slots, 32GB load-reduced (LRDIMM) DIMMs have been selected, equating
to 1.536TB of memory within a single machine.
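The memory figure is simple arithmetic over the DIMM population (decimal units, matching the 1.536TB quoted above):

```python
# 48 slots populated with 32GB LRDIMMs gives the quoted 1.536TB.
dimm_slots = 48
dimm_gb = 32

total_gb = dimm_slots * dimm_gb    # 1536 GB
print(total_gb, total_gb / 1000)   # 1536 1.536
```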
For internal drive capability, the eXFlash backplane with 5 x 400GB SSD are included along with 8
x 600GB small form factor SAS drives. This provides 2TB of Flash and 4.8TB of SAS raw capacities.
The hardware will be installed by Tectrade services. Details around installation of the operating
system and any application software will need to be agreed between University of Dundee and
Tectrade under the agreed scope of works.
4.6 Server Infrastructure Nodes
The server infrastructure consists of four IBM NeXtScale N1200 chassis. Each chassis has the full
complement of 12 IBM NeXtScale nx360M4 compute nodes. Each nx360M4 has dual Intel Xeon
E5-2660V2 10 core processors with 128GB memory.
All of the compute nodes have been licensed for Platform Cluster Manager Advanced Edition as
well as Platform LSF, plus floating user licenses for Platform Application Centre and Platform
Process Manager. There are also GPFS client licenses to facilitate integration with the file storage
platform.
All Platform software implementation will be provisioned by IBM Services, as such this is currently
out of scope of this design document and will be covered under a separate scope of works.
All hardware implementation, network patching, and initial system testing will be carried out by
Tectrade services and covered under the agreed scope of works.
4.7 High Speed Interconnect and Management Network
The solution includes three IBM System Networking RackSwitch G8264 for all Production LAN
traffic. Each G8264 contains 48x 10GigE and 4x 40GigE ports. For connectivity the solution includes
144x 10GBase-SR SFP+ transceivers.
In addition, for the NeXtScale nodes, there are DAC cables included for connection to G8264s
located in the corresponding racks.
For the management network there are three IBM System Networking RackSwitch G8052. Each
G8052 contains 48x GigE and 4x 10 GigE ports.
4.8 Firewalls and VPN
The solution integrates Stonesoft 32020 Security Engine appliances to provide Firewall, IPS and
VPN services for remote connectivity to ACF.
Firewall
The Stonesoft firewall operates using Multi-Layer inspection. On a rule by rule basis the user can
choose whether to apply stateful connection tracking, packet filtering or application-level security.
The system expends the resources necessary for application level security only when the situation
demands it and it does so without unnecessarily slowing or limiting network traffic.
Intrusion Prevention System
Deep Inspection (IPS) - Stonesoft's deep inspection technology is designed to protect public web
services, internal networks and client users as they access the internet. Deep inspection detects
malicious activity within regular network traffic and prevents intrusions by blocking offending traffic
automatically before any damage occurs.
Stonesoft uses protocol identification, normalization and data stream-based inspection technology
to detect and block threats, in both clear-text HTTP and inside encrypted HTTPS connections.
Vulnerability-based protection fingerprints and recommended policy configurations are updated
regularly via dynamic updates and administrators have the option to automate the entire process
when needed. The latest protection is in place at all times.
Stonesoft can protect against known and unknown threats. Misuse detection (signatures) protects
against known attacks, while protocol analysis and enforcement protects against new/unknown
threats.
VPN
The Stonesoft IPsec VPN solution provides very high security. Symmetric encryption supports key
lengths up to 256 bits with AES in different modes, SHA-2 message digests, authentication with
pre-shared keys or RSA, DSS and ECDSA signatures, and Diffie-Hellman groups up to group 21
(521-bit ECP group).
Management and Monitoring
The Stonesoft Management Centre (SMC) provides unified network security management for the
Stonesoft Security Engine, Firewall/VPN, IPS, and SSL VPN. In addition to managing Stonesoft
devices, Stonesoft Management Centre also provides event management, status monitoring, and
reporting capabilities for third- party devices. By collecting all this information in one centralised
system, administrators can get a thorough overview of what is happening in their environment.
The SMC includes at least one Management Server and one Log Server, which can be installed
either to the same or to separate servers. The Management Client is the graphical user interface
used for configuring, managing and monitoring the entire system. Optionally the SMC solution can
be extended by adding additional Management and Log Servers, Web Portal Servers and
Authentication Servers.
Real-time monitoring is available as standard via the SMC. Statistics can be viewed per firewall
instance or for the solution as a whole. Customisable, real-time dashboards provide high level
visual information about the deployment. A dedicated Log Browser allows administrators to view
both real-time and historical log data down to packet level information with the ability to perform
dynamic filtering in order to zoom in on the required information. The Log Browser can display
multiple logs or detailed individual log entries including a summary of the event, an event
visualisation and payload data. High level statistical reports can be created on the fly via the Log
Browser along with more detailed analysis using the log aggregation and mapping tool.
4.9 Business Continuity, Disaster Recovery and Archiving
The solution utilises IBM Tivoli Storage Manger to provide the business continuity, disaster recovery
and archiving services.
The solution includes the provision of TSM servers located at the two DR sites and connected to the
ACF production site via JANET. The TSM infrastructure will reside solely at the DR sites, ensuring
all backup data resides away from the Primary site. Systems at the Primary site included in
scheduled backups will be recoverable in the event of a Primary site failure.
Note: All Racking, SAN switching and SAN cabling will be provided by University of Dundee.
Standard images should be deployed where possible to facilitate bare machine recovery. Once the
image is recovered, TSM will then be used as the recovery mechanism for system specific data.
TSM Node replication will be deployed to ensure a copy of specified node backup data is available
in the event of site failure at one of the DR sites. Primary site systems backups will be spread evenly
between the two DR sites to ensure a balanced workload to the TSM servers.
4.9.1 Sizing
Deduplicated storage pools will be deployed at each DR site utilising a SAN attached IBM Storwize
V3700 with 72TB useable capacity. Along with client based compression, this will allow for
approximately 32TB of source data from day one, and enable the backup infrastructure at both sites
to exist for 3 years without requirement for upgrade or expansion.
Growth has been estimated at 10% for data and 10% for clients over a 3 year period with the initial
number of clients set at 26.
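The 3-year headroom claim can be checked with a simple compound-growth model (a sketch under the stated assumptions: 10% annual growth applied to both data and clients, starting from approximately 32TB of source data and 26 clients):

```python
# Simple compound-growth model for the 3-year backup sizing above.
data_tb, clients = 32.0, 26.0
growth = 0.10

for year in range(3):
    data_tb *= 1 + growth
    clients *= 1 + growth

print(round(data_tb, 1), round(clients, 1))   # 42.6 34.6
```

Even after three years of growth the source data remains well under the 72TB useable capacity at each site, consistent with the claim above once deduplication and client compression are applied.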
Further sizing information is included in Appendix D.
4.9.2 File Storage Platform Backup
The backup and restore of the GPFS cluster at the Primary site will be via the integration of TSM
for Space Management and TSM Backup Archive Client capabilities on the GPFS nodes and TSM
infrastructure.
TSM for Space Management and TSM Backup Archive Client integration with GPFS provide a highly
available solution for migration and recall of files between storage pool tiers alongside file backup
and recovery operations. GPFS provides the command “mmbackup” that combines the TSM Backup
Archive Client function with the GPFS policy engine. A proxy node will allow for multiple client
sessions to aid performance.
4.9.3 Tape Encryption
The solution utilises IBM Security Key Lifecycle Manager (otherwise known as IBM Tivoli Key
Lifecycle Manager - TKLM) as the encryption key management tool. TKLM simplifies, centralises
and automates the encryption-key management process. Key data will be protected by regular
backup of the key repository to TSM.
Tape encryption will be accomplished by implementing 8 encryption capable drives within each tape
library at each DR site, with key management through TKLM. The solution includes 16 IBM TS1060
Ultrium 6 Tape Drives which provide drive level 256-bit AES encryption and support for TKLM.
Primary and Replica TKLM servers will be also deployed at the two DR sites to ensure availability.
4.9.4 Backup Catalogue
The backup catalogue has been designed based on a standard 28 days retention for all daily
incremental backups, with monthly backups held for 1 year:

Schedule      RPO      RTO    Sched Start  Day      Backup Target  Retention  Versions  Change Rate  Data Classification
Daily Incr    24 hrs   6 hrs  18:00        Mon-Sun  Disk           1 month    30        2%           Unstructured
Monthly Incr  1 month  6 hrs  18:00        Sun      Disk           12 Months  12        10%          Unstructured
4.10 Cloud and Systems Management Software
Platform Computing
The solution stack includes licenses for all of the compute nodes to provide:
 Platform Cluster Manager – Advanced Edition
 LSF Standard Edition
As well as 12 user licenses within the Semi-structured and VDI environments for:
 Platform Application Centre
 Platform Process Manager
IBM Platform Cluster Manager Advanced Edition automates assembly of multiple high-
performance technical computing environments on a shared compute infrastructure for use by
multiple teams. IBM PCM AE includes support for multi-tenant HPC cloud and multiple workload
managers. It creates an agile environment for running technical computing and analysis workloads
to consolidate disparate cluster infrastructure, resulting in increased hardware utilization and the
ability to meet or exceed service level agreements while lowering costs. In addition PCM-AE offers
the capability to manage sub-clusters so this can be extended to provide a high level automation
and management platform (almost a cloud of clouds) should this be required.
IBM Platform LSF is a powerful workload management platform for demanding, distributed HPC
environments. It provides a comprehensive set of intelligent, policy-driven scheduling features that
enable you to utilise all of your compute infrastructure resources and ensure optimal application
performance.
IBM Platform Application Center provides a flexible, easy to use interface for cluster users and
administrators. Available as an add-on module to IBM Platform LSF, Platform Application Center
enables users to interact with intuitive, self-documenting standardized interfaces.
IBM Platform Symphony software delivers powerful enterprise-class management for running
distributed applications and big data analytics on a scalable, shared grid. It accelerates dozens of
parallel applications, for faster results and better utilization of all available resources.
The cloud stack will provide functionality for all required components, and can be extended to
manage other clusters and to dynamically manage resources between them. Looking at the
components of the PCM-AE stack and their integration, we can see how they will interact to
provide the end-to-end cloud management platform.
Virtualisation
Server Virtualisation
The Server virtualisation will be provided by KVM as this is the only supported virtualisation stack
for PCM-AE. PCM-AE has resource adapters and leverages the underlying xCAT provisioning
engine to deploy bare-metal, CentOS or RHEL machines. Windows 2008 will be deployed as KVM
guests initially; bare-metal Windows 2008 support is part of the PCM roadmap and we expect it to
arrive later in 2014 with the 4.2 version. Using either GPFS or the cNFS adapter, a shared repository
can be configured to allow the sharing of VMs and templates.
Storage Virtualisation
The Storage virtualisation will be provided by GPFS which will allow storage devices to be presented
as filesystems and managed via quotas.
Network Virtualisation
The solution includes IBM System Networking switches. PCM-AE can deploy VLAN Secure
Networks as part of a physical machine's definition, which would allow Hyper-V hosts or physical
machines to be isolated from each other. We can also pre-define sets of IP addresses in IP Pools
for VMs, allowing the virtual machines to be logically isolated from each other on the same physical
network.
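The logical-isolation idea can be sketched as follows (illustrative only: the example range is hypothetical, since the actual VLANs and IP ranges are to be defined by Dundee, and PCM-AE configures its IP Pools through its own interface rather than code):

```python
# Sketch: carving one physical network's address range into disjoint IP
# pools, so VMs in different pools are logically isolated from each other.
from ipaddress import ip_network

net = ip_network("10.0.0.0/24")      # hypothetical range
hosts = list(net.hosts())            # 254 usable addresses

pools = {
    "pool_a": hosts[:100],
    "pool_b": hosts[100:200],
}

# Disjoint pools: no address is handed to more than one group of VMs.
assert not set(pools["pool_a"]) & set(pools["pool_b"])
print(len(hosts))                    # 254
```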
Hypervisor Management
PCM-AE and the underlying xCAT provisioning engine can be used to manage the hypervisor layer.
Using these tools with a shared NFS repository, guests can be migrated, hosts placed into
maintenance mode, and hosts updated to the latest operating system and patches.
Event Management and Monitoring
Alerts
Alerting from individual components (such as switches via SNMP, operating systems via Syslog
forwarding, and storage arrays) will be complemented by PCM-AE, which has policies that can be
defined to trigger alerts against rules or when cluster-wide thresholds are breached. A number of
alerts and events are pre-configured with the standard deployment.
Capacity Management
The solution stack will report on storage capacity via the GSS and V7000 GUIs, and on switch and
compute cluster capacity via PCM-AE.
Data Resiliency
Backup and Archiving
TSM comes with a management portal which allows centralised management of the backup policies
and schedules, and also allows users (subject to strict permission controls) to perform backup and
restore operations should you wish to allow self-service for this operation.
We have also included the Hierarchical Storage Management extensions which allow GPFS, using
its DMAPI interface, to provide a tape tier of storage for almost unlimited filesystem capacity. This
integration is seamless and allows efficiencies: when a backed-up file is archived, this can be
accomplished without having to re-send the data from GPFS to TSM (TSM will just move the
backed-up data to the HSM pools and then stub the file in GPFS).
Storage
The V7000 and GSS storage have management GUIs which provide centralised management and
operation functions.
Image Management
PCM-AE can define shared image repositories using NFS which can store common images and
resources for both physical and virtual images.
Virtual Machines
PCM-AE will manage guests on KVM, including Windows 2008 Server Datacenter Edition. Officially
supported are RHEL 6.3, KVM, CentOS and Windows 2008 Standard guests.
Physical Machines
PCM-AE can provision, using the underlying xCAT engine, physical machines which can be
stateless (RAM image, using a GPFS filesystem for persistent storage) or stateful, as well as KVM
hypervisor hosts.
Metering and Accounting
Usage
Whilst commercial software could have been included for sophisticated usage and accounting, the
core functionality of PCM-AE will provide usage reports showing the compute resources allocated,
for example from a simple PCM-AE cluster.
Chargeback
As above, enterprise-class tools such as IBM Tivoli Usage and Accounting Manager were
considered, but we feel that the out-of-the-box functionality of PCM-AE with simple customisations
(for example, the reports above can be generated as HTML and then processed in a simple
spreadsheet to account for resources) and the capability of GPFS to report on allocations will meet
the requirements.
Implementation
All Platform software implementation will be provisioned by IBM Services, as such this is currently
out of scope of this design document and will be covered under a separate scope of works. This
includes PCM-AE, LSF and PAC.
The remaining management software will be implemented by Tectrade with guidance/assistance
from University of Dundee under the agreed scope of works.
4.11 Virtual Desktop Infrastructure
The Virtual Desktop Infrastructure environment is provided by a single IBM NeXtScale chassis
containing 6 x nx360M4 GPU capable nodes. Each nx360M4 has dual Intel Xeon E5-2660V2 10
core processors with 128GB memory.
The solution has been sized based on the requirement to accommodate 38 users per system, with
each system capable of handling 125 users.
The solution comprises:
 2 x Management Systems
 4 x VDI Servers
 2 x Windows Storage Servers
For detailed configuration refer to Appendix E.
Once available, the nodes can be upgraded by the addition of GPU, Phi or GPU riser cards to
provide graphical processing capability to support the VDI environment.
All VDI related operating systems and software will be implemented by University of Dundee. Any
assistance required from Tectrade will need to be agreed under a separate scope of works.
All hardware implementation, network patching, and initial system testing will be carried out by
Tectrade services and covered under the agreed scope of works.
5 Architectural Decisions
The following is a summary of the architectural decisions:
 NeXtScale compute nodes. The decision has been made to use the latest IBM hyperscale
compute platform, NeXtScale, the next iteration beyond iDataPlex. Although the NeXtScale
nodes do not currently support GPUs, the requirement for GPUs is not well defined, for
example are VDI Grid style GPUs or compute style Kepler cards required? For this reason
NeXtScale has been chosen and a chassis with 6 compute nodes and space for future GPU
nodes has been configured should these GPUs be required later in the project.
 GPFS Storage Server for file storage. The GSS has additional features over and above native
GPFS due to the unique product packaging employed. These features such as end to end
checksum and de-clustered RAID for ultra-fast recovery and RAID rebuilds from failed drives
are unique to GSS and this has been chosen for these reasons over and above a solution
created using component parts.
 V7000 for fast storage. Although other platforms were considered (such as the V3700) the
ability to upgrade the V7000 in the future with features such as Real Time Compression are
the reason we have used the V7000 here.
 Dundee will provide a pair of 10 GigE interfaces on a pair of 6509 VSS switches for
termination into the next generation firewalls, as well as defining how the network will
interface (we suspect that NAT will be required). These firewalls will then be connected into one
or more of the core switches, which are IBM G8264 10 GigE switches. Dundee or a delegate
will also define the VLANs and IP address range to use.
 Power will be supplied from the re-use of some existing (20 or more) 3-phase 32A power
supplies.
 Cooling requirements will not exceed 15KW per rack.
 8x16GB DIMMs will be deployed in each compute node to get the best bandwidth and lowest
price per GB. By having 128GB per machine, virtualisation can be employed to offer a very
flexible and configurable system to accommodate future requirements.
 VDI software and configuration has been omitted and will be part of a future project.
NextScale hardware for supporting the VDI service has been included with space for GPUs
to be included should these be required.
 SSL encryption within TSM will be used to secure data in transit and TKLM library encryption
for data at rest. This should meet the security requirements for backup and archive systems.
 RHEL and Windows licenses have not been included in the hardware spec and will be
procured under the university education discount directly by the university as these
commercial terms are far better than IBM or Tectrade can secure by buying licenses for this
project in isolation.
 DB2 software will be provided under the educational evaluation scheme only; this is a change
from the original RFP response, where it was included.
6 Management Servers
The solution deploys four Management servers. Each IBM x3650M4 has dual 10-Core E5-2660v2
Processors, 128GB memory, dual 10GbE and 2 x 300GB SAS HDD.
The University of Dundee expressed a preference to utilise Windows Hyper-V as the chosen
hypervisor. To enable support for IBM Platform Computing software, Hyper-V will run on Windows
2008 Datacenter Edition.
The management servers are required to run a suite of tools to enable support of the overall Farr
IT infrastructure, including Platform Computing:
6.1 Primary Site
Management Server  VM  Function/Tool
ACF_Mgmt_1         1   Active Directory
ACF_Mgmt_1         2   DHCP
ACF_Mgmt_1         3   PCM-AE, PXE Boot
ACF_Mgmt_1         4   System Tools
ACF_Mgmt_2         1   Primary Domain Controller
ACF_Mgmt_2         2   DNS
ACF_Mgmt_2         3   LSF
ACF_Mgmt_2         4   PAC
ACF_Mgmt_2         5   System Tools
6.2 DR Sites
Management Server  VM  Function/Tool
DDR1_Mgmt_1        1   TKLM
DDR1_Mgmt_1        2   TSM Operations Centre
DDR2_Mgmt_2        1   TKLM
DDR2_Mgmt_2        2   TSM Operations Centre
These tools and services are to be spread amongst the Management servers with two servers
located in the Primary Site, and one in each of the DR sites.
6.3 Implementation
This solution includes provision of the following software as part of the agreed scope of works:
 TKLM
 TSM Operations Centre
The following software/services will be implemented by University of Dundee with
assistance/guidance from Tectrade:
 Active Directory
 DHCP
 DNS
 System Tools
 PXE-Boot
The remaining management software will be implemented by IBM services under a separate scope
of works:
 PCM-AE
 LSF
 PAC
7 Physical Environment
The solution comprises 4 IBM 42U 1100mm Enterprise V2 Dynamic Racks to be located at ACF.
Each rack comes with a number of 32 Amp 3 Phase PDUs to ensure sufficient power outlets for the
hardware to be located in those racks.
ACF rack information:

Rack   Location  32A 3-Phase PDUs  PDU Model  Inlet Connectors    Outlets
ACF_1  ACF       4                 59Y7891    4 x IEC 309 3P+N+G  12 IEC-320-C13 (10 A) and 12 IEC-320-C19 (16 A)
ACF_2  ACF       4                 46M4137    4 x IEC 309 3P+N+G  12 IEC-320-C13 (10 A) and 12 IEC-320-C19 (16 A)
ACF_3  ACF       2                 59Y7884    2 x IEC 309 3P+N+G  12 IEC-320-C13 (10 A) and 12 IEC-320-C19 (16 A)
ACF_4  ACF       4                 59Y7891    4 x IEC 309 3P+N+G  12 IEC-320-C13 (10 A) and 12 IEC-320-C19 (16 A)
Note: The final destination of the ACF racks is to be confirmed by University of Dundee.
Environmental Information:

Rack   Component            Rack U  Outlets Required  Max Current (Amps)  Max Load (Watts)  Max BTU/hour  Weight (kg)
ACF_1  GSS24                24      48                41.06               9004              30720         784
ACF_2  V7000                4       4                 12                  1344              4587          50.4
ACF_2  x3650M4_Mgmt1        2       2                 2.08                469               1601          30
ACF_2  x3750M4              2       2                 5.73                677               4094          31.1
ACF_2  NeXtScale – VDI      14      6                 29.5                3463              11816         76.4
ACF_3  NeXtScale – Hadoop   14      12                43.92               7150              17578         128.6
ACF_3  x3650M4_Mgmt2        2       2                 2.08                469               1601          30
ACF_4  NeXtScale – Compute  28      28                85.4                19220             65588         402
DR_1   TSM1                 6       12                1.7                 415               1360          30
DR_2   TSM2                 6       12                1.7                 415               1360          30
DR_1   x3650M4_TKLM1        2       2                 1.9                 431               1470          30
DR_2   x3650M4_TKLM2        2       2                 1.9                 431               1470          30
Appendix A – Rack Layout
A.1 ACF
A.2 Dundee – DR Site1
A.3 Dundee – DR Site 2
Appendix B – Network Switch Configuration
All information contained in this section is limited to patching between devices and rack switches.
Rack switch information:

Rack   Switch Name  Switch Function  Switch Type
ACF_1  ACF_1_PRD1   Production       G8264 Front to Rear
ACF_1  ACF_1_MGT1   Management       G8052 Front to Rear
ACF_3  ACF_3_PRD2   Production       G8264 Front to Rear
ACF_3  ACF_3_MGT2   Management       G8052 Front to Rear
ACF_4  ACF_4_PRD3   Production       G8264 Front to Rear
ACF_4  ACF_4_MGT3   Management       G8052 Front to Rear
Production Switch Inter-rack Patching
For patching between Production Switches, each G8264 has two 40GbE QSFP+ SR4 Transceivers.
Each G8264 is patched to both the other two G8264s to ensure full redundancy.
Source Switch  Connection Type  Cable Type   No of Connections  Destination Switch
ACF_1_PRD1     QSFP+            MTP Optical  1                  ACF_1_PRD2
ACF_1_PRD1     QSFP+            MTP Optical  1                  ACF_1_PRD3
ACF_1_PRD2     QSFP+            MTP Optical  1                  ACF_1_PRD1
ACF_1_PRD2     QSFP+            MTP Optical  1                  ACF_1_PRD3
ACF_1_PRD3     QSFP+            MTP Optical  1                  ACF_1_PRD1
ACF_1_PRD3     QSFP+            MTP Optical  1                  ACF_1_PRD2
Management Switch Inter-rack Patching
For patching between Management Switches, each G8052 has two 10GbE SFP+ SR Transceivers.
Each G8052 is patched to both the other two G8052s to ensure full redundancy.
Source Switch  Connection Type  Cable Type  No of Connections  Destination Switch
ACF_1_MGT1     SFP+             LC-LC       1                  ACF_1_MGT2
ACF_1_MGT1     SFP+             LC-LC       1                  ACF_1_MGT3
ACF_1_MGT2     SFP+             LC-LC       1                  ACF_1_MGT1
ACF_1_MGT2     SFP+             LC-LC       1                  ACF_1_MGT3
ACF_1_MGT3     SFP+             LC-LC       1                  ACF_1_MGT1
ACF_1_MGT3     SFP+             LC-LC       1                  ACF_1_MGT2
B.1 GSS24

Device              Connection Type  Cable Type  No of Connections  Destination Switch
ACF_GSS_1           SFP+             LC-LC       6                  ACF_1_PRD1
ACF_GSS_2           SFP+             LC-LC       6                  ACF_1_PRD1
ACF_GSS_NSD_1       SFP+             LC-LC       4                  ACF_1_PRD1
ACF_GSS_NSD_2       SFP+             LC-LC       4                  ACF_1_PRD1
ACF_GSS_1           RJ45             Cat5E       1                  ACF_1_MGT1
ACF_GSS_2           RJ45             Cat5E       1                  ACF_1_MGT1
ACF_GSS_NSD_1       RJ45             Cat5E       1                  ACF_1_MGT1
ACF_GSS_NSD_2       RJ45             Cat5E       1                  ACF_1_MGT1
ACF_GSS Stg_Cntr_1  RJ45             Cat5E       2                  ACF_1_MGT1
ACF_GSS Stg_Cntr_2  RJ45             Cat5E       2                  ACF_1_MGT1

B.2 x3750M4

Device     Connection Type  Cable Type  No of Connections  Destination Switch
ACF_X3750  SFP+             LC-LC       2                  ACF_1_PRD2
ACF_X3750  RJ45             Cat5E       2                  ACF_1_MGT2
B.3 Management Servers

Device       Connection Type  Cable Type  No of Connections  Destination Switch
ACF_MGMT_1   SFP+             LC-LC       2                  ACF_1_PRD1
ACF_MGMT_2   SFP+             LC-LC       2                  ACF_1_PRD2
DDR1_MGMT_1  SFP+             LC-LC       2                  TBD
DDR1_MGMT_1  SFP+             LC-LC       2                  TBD
ACF_MGMT_1   RJ45             Cat5E       2                  ACF_1_MGT1
ACF_MGMT_2   RJ45             Cat5E       2                  ACF_1_MGT2
DDR1_MGMT_1  RJ45             Cat5E       2                  TBD
DDR1_MGMT_2  RJ45             Cat5E       2                  TBD

B.4 Storwize V7000

Device     Connection Type  Cable Type  No of Connections  Destination Switch
ACF_V7000  SFP+             LC-LC       4                  ACF_1_PRD1
ACF_V7000  SFP+             LC-LC       4                  ACF_1_PRD2
ACF_V7000  RJ45             Cat5E       2                  ACF_1_MGT1
ACF_V7000  RJ45             Cat5E       2                  ACF_1_MGT2
B.5 NeXtScale – Semi-structured Nodes

Device                Connection Type  Cable Type  No of Connections  Destination Switch
ACF_HAD_NX360_1       SFP+             DAC         1                  ACF_1_PRD2
ACF_HAD_NX360_2       SFP+             DAC         1                  ACF_1_PRD2
ACF_HAD_NX360_3       SFP+             DAC         1                  ACF_1_PRD2
ACF_HAD_NX360_4       SFP+             DAC         1                  ACF_1_PRD2
ACF_HAD_NX360_5       SFP+             DAC         1                  ACF_1_PRD2
ACF_HAD_NX360_6       SFP+             DAC         1                  ACF_1_PRD2
ACF_HAD_NX360_1       RJ45             Cat5E       1                  ACF_1_MGT2
ACF_HAD_NX360_2       RJ45             Cat5E       1                  ACF_1_MGT2
ACF_HAD_NX360_3       RJ45             Cat5E       1                  ACF_1_MGT2
ACF_HAD_NX360_4       RJ45             Cat5E       1                  ACF_1_MGT2
ACF_HAD_NX360_5       RJ45             Cat5E       1                  ACF_1_MGT2
ACF_HAD_NX360_6       RJ45             Cat5E       1                  ACF_1_MGT2
ACF_HAD_Chassis_Mgmt  RJ45             Cat5E       1                  ACF_1_MGT2
B.6 NeXtScale – VDI

| Device | Connection Type | Cable Type | No of Connections | Destination Switch |
|---|---|---|---|---|
| ACF_VDI_NX360_1 | SFP+ | LC-LC | 1 | ACF_1_PRD2 |
| ACF_VDI_NX360_2 | SFP+ | LC-LC | 1 | ACF_1_PRD2 |
| ACF_VDI_NX360_3 | SFP+ | LC-LC | 1 | ACF_1_PRD2 |
| ACF_VDI_NX360_4 | SFP+ | LC-LC | 1 | ACF_1_PRD2 |
| ACF_VDI_NX360_5 | SFP+ | LC-LC | 1 | ACF_1_PRD2 |
| ACF_VDI_NX360_6 | SFP+ | LC-LC | 1 | ACF_1_PRD2 |
| ACF_VDI_NX360_7 | SFP+ | LC-LC | 1 | ACF_1_PRD2 |
| ACF_VDI_NX360_8 | SFP+ | LC-LC | 1 | ACF_1_PRD2 |
| ACF_VDI_NX360_1 | RJ45 | Cat5E | 1 | ACF_1_MGT2 |
| ACF_VDI_NX360_2 | RJ45 | Cat5E | 1 | ACF_1_MGT2 |
| ACF_VDI_NX360_3 | RJ45 | Cat5E | 1 | ACF_1_MGT2 |
| ACF_VDI_NX360_4 | RJ45 | Cat5E | 1 | ACF_1_MGT2 |
| ACF_VDI_NX360_5 | RJ45 | Cat5E | 1 | ACF_1_MGT2 |
| ACF_VDI_NX360_6 | RJ45 | Cat5E | 1 | ACF_1_MGT2 |
| ACF_VDI_NX360_7 | RJ45 | Cat5E | 1 | ACF_1_MGT2 |
| ACF_VDI_NX360_8 | RJ45 | Cat5E | 1 | ACF_1_MGT2 |
| ACF_VDI_Chas_Mgmt | RJ45 | Cat5E | 1 | ACF_1_MGT2 |
B.7 NeXtScale – Compute Nodes

Chassis #1

| Device | Connection Type | Cable Type | No of Connections | Destination Switch |
|---|---|---|---|---|
| ACF_COM1_NX360_1 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM1_NX360_2 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM1_NX360_3 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM1_NX360_4 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM1_NX360_5 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM1_NX360_6 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM1_NX360_7 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM1_NX360_8 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM1_NX360_9 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM1_NX360_10 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM1_NX360_11 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM1_NX360_12 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM1_NX360_1 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM1_NX360_2 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM1_NX360_3 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM1_NX360_4 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM1_NX360_5 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM1_NX360_6 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM1_NX360_7 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM1_NX360_8 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM1_NX360_9 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM1_NX360_10 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM1_NX360_11 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM1_NX360_12 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM1_Chass_Mgmt | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
Chassis #2

| Device | Connection Type | Cable Type | No of Connections | Destination Switch |
|---|---|---|---|---|
| ACF_COM2_NX360_1 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM2_NX360_2 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM2_NX360_3 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM2_NX360_4 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM2_NX360_5 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM2_NX360_6 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM2_NX360_7 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM2_NX360_8 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM2_NX360_9 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM2_NX360_10 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM2_NX360_11 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM2_NX360_12 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM2_NX360_1 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM2_NX360_2 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM2_NX360_3 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM2_NX360_4 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM2_NX360_5 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM2_NX360_6 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM2_NX360_7 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM2_NX360_8 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM2_NX360_9 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM2_NX360_10 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM2_NX360_11 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM2_NX360_12 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM2_Chass_Mgmt | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
Chassis #3

| Device | Connection Type | Cable Type | No of Connections | Destination Switch |
|---|---|---|---|---|
| ACF_COM3_NX360_1 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM3_NX360_2 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM3_NX360_3 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM3_NX360_4 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM3_NX360_5 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM3_NX360_6 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM3_NX360_7 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM3_NX360_8 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM3_NX360_9 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM3_NX360_10 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM3_NX360_11 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM3_NX360_12 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM3_NX360_1 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM3_NX360_2 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM3_NX360_3 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM3_NX360_4 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM3_NX360_5 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM3_NX360_6 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM3_NX360_7 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM3_NX360_8 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM3_NX360_9 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM3_NX360_10 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM3_NX360_11 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM3_NX360_12 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM3_Chass_Mgmt | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
Chassis #4

| Device | Connection Type | Cable Type | No of Connections | Destination Switch |
|---|---|---|---|---|
| ACF_COM4_NX360_1 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM4_NX360_2 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM4_NX360_3 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM4_NX360_4 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM4_NX360_5 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM4_NX360_6 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM4_NX360_7 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM4_NX360_8 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM4_NX360_9 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM4_NX360_10 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM4_NX360_11 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM4_NX360_12 | SFP+ | DAC | 1 | ACF_1_PRD3 |
| ACF_COM4_NX360_1 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM4_NX360_2 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM4_NX360_3 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM4_NX360_4 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM4_NX360_5 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM4_NX360_6 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM4_NX360_7 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM4_NX360_8 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM4_NX360_9 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM4_NX360_10 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM4_NX360_11 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM4_NX360_12 | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
| ACF_COM4_Chass_Mgmt | RJ45 | Cat5E | 1 | ACF_1_MGT3 |
Appendix C – SAN Patching and Zoning
C.1 SAN Patching
All SAN connectivity is limited to the DR sites. All SAN switches, SAN patch cabling and SAN switch ports are to be provided by the University of Dundee.
The information in this section is intended as a guide.
| Device | Device Name | Connection Speed | Cable Type | No of Connections | Destination Switch |
|---|---|---|---|---|---|
| TSM Server | DDR1_TSM1 | 8 Gb/s | LC-LC | 4 | TBD |
| TSM Server | DDR2_TSM1 | 8 Gb/s | LC-LC | 4 | TBD |
| V3700 | DDR1_TSM1_V3700 | 8 Gb/s | LC-LC | 8 | TBD |
| V3700 | DDR2_TSM1_V3700 | 8 Gb/s | LC-LC | 8 | TBD |
| Tape Drive | DDR1_LTO6_1 | 8 Gb/s | LC-LC | 1 | TBD |
| Tape Drive | DDR1_LTO6_2 | 8 Gb/s | LC-LC | 1 | TBD |
| Tape Drive | DDR1_LTO6_3 | 8 Gb/s | LC-LC | 1 | TBD |
| Tape Drive | DDR1_LTO6_4 | 8 Gb/s | LC-LC | 1 | TBD |
| Tape Drive | DDR1_LTO6_5 | 8 Gb/s | LC-LC | 1 | TBD |
| Tape Drive | DDR1_LTO6_6 | 8 Gb/s | LC-LC | 1 | TBD |
| Tape Drive | DDR1_LTO6_7 | 8 Gb/s | LC-LC | 1 | TBD |
| Tape Drive | DDR2_LTO6_1 | 8 Gb/s | LC-LC | 1 | TBD |
| Tape Drive | DDR2_LTO6_2 | 8 Gb/s | LC-LC | 1 | TBD |
| Tape Drive | DDR2_LTO6_3 | 8 Gb/s | LC-LC | 1 | TBD |
| Tape Drive | DDR2_LTO6_4 | 8 Gb/s | LC-LC | 1 | TBD |
| Tape Drive | DDR2_LTO6_5 | 8 Gb/s | LC-LC | 1 | TBD |
| Tape Drive | DDR2_LTO6_6 | 8 Gb/s | LC-LC | 1 | TBD |
| Tape Drive | DDR2_LTO6_7 | 8 Gb/s | LC-LC | 1 | TBD |
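As a quick check on the switch ports the University must provide, the per-site total follows directly from the connection counts above. A minimal sketch (an illustration of the arithmetic, not a sizing mandate):

```python
# Per-DR-site 8 Gb/s SAN port count, summed from the patching table above:
# one TSM server (4 ports), one V3700 (8 ports) and seven LTO6 tape drives
# (1 port each). Each DR site needs the same number of switch ports.
connections = {"TSM server": 4, "V3700": 8, "LTO6 drives": 7 * 1}
ports_per_site = sum(connections.values())
assert ports_per_site == 19
```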
C.2 SAN Zoning
(WWPNs are left blank in this release.)

| Device Name | WWPN | Alias | Zone Sets | Zoning Configuration |
|---|---|---|---|---|
| DDR1_TSM1 | | DDR1_TSM1_H0 | DDR1_TSM1_H0_V3700 | DDR1_A |
| DDR1_TSM1 | | DDR1_TSM1_H1 | DDR1_TSM1_H1_V3700 | DDR1_B |
| DDR1_TSM1 | | DDR1_TSM1_H2 | DDR1_TSM1_H2_V3700 | DDR1_A |
| DDR1_TSM1 | | DDR1_TSM1_H3 | DDR1_TSM1_H3_V3700 | DDR1_B |
| DDR2_TSM1 | | DDR2_TSM1_H0 | DDR2_TSM1_H0_V3700 | DDR2_A |
| DDR2_TSM1 | | DDR2_TSM1_H1 | DDR2_TSM1_H1_V3700 | DDR2_B |
| DDR2_TSM1 | | DDR2_TSM1_H2 | DDR2_TSM1_H2_V3700 | DDR2_A |
| DDR2_TSM1 | | DDR2_TSM1_H3 | DDR2_TSM1_H3_V3700 | DDR2_B |
| DDR1_TSM1_V3700 | | DDR1_TSM1_V3700_N1P1 | DDR1_TSM1_H0_V3700, DDR1_TSM1_H2_V3700 | DDR1_A |
| DDR1_TSM1_V3700 | | DDR1_TSM1_V3700_N1P2 | DDR1_TSM1_H1_V3700, DDR1_TSM1_H3_V3700 | DDR1_B |
| DDR1_TSM1_V3700 | | DDR1_TSM1_V3700_N1P3 | DDR1_TSM1_H0_V3700, DDR1_TSM1_H2_V3700 | DDR1_A |
| DDR1_TSM1_V3700 | | DDR1_TSM1_V3700_N1P4 | DDR1_TSM1_H1_V3700, DDR1_TSM1_H3_V3700 | DDR1_B |
| DDR1_TSM1_V3700 | | DDR1_TSM1_V3700_N2P1 | DDR1_TSM1_H0_V3700, DDR1_TSM1_H2_V3700 | DDR1_A |
| DDR1_TSM1_V3700 | | DDR1_TSM1_V3700_N2P2 | DDR1_TSM1_H1_V3700, DDR1_TSM1_H3_V3700 | DDR1_B |
| DDR1_TSM1_V3700 | | DDR1_TSM1_V3700_N2P3 | DDR1_TSM1_H0_V3700, DDR1_TSM1_H2_V3700 | DDR1_A |
| DDR1_TSM1_V3700 | | DDR1_TSM1_V3700_N2P4 | DDR1_TSM1_H1_V3700, DDR1_TSM1_H3_V3700 | DDR1_B |
| DDR2_TSM1_V3700 | | DDR2_TSM1_V3700_N1P1 | DDR2_TSM1_H0_V3700, DDR2_TSM1_H2_V3700 | DDR2_A |
| DDR2_TSM1_V3700 | | DDR2_TSM1_V3700_N1P2 | DDR2_TSM1_H1_V3700, DDR2_TSM1_H3_V3700 | DDR2_B |
| DDR2_TSM1_V3700 | | DDR2_TSM1_V3700_N1P3 | DDR2_TSM1_H0_V3700, DDR2_TSM1_H2_V3700 | DDR2_A |
| DDR2_TSM1_V3700 | | DDR2_TSM1_V3700_N1P4 | DDR2_TSM1_H1_V3700, DDR2_TSM1_H3_V3700 | DDR2_B |
| DDR2_TSM1_V3700 | | DDR2_TSM1_V3700_N2P1 | DDR2_TSM1_H0_V3700, DDR2_TSM1_H2_V3700 | DDR2_A |
| DDR2_TSM1_V3700 | | DDR2_TSM1_V3700_N2P2 | DDR2_TSM1_H1_V3700, DDR2_TSM1_H3_V3700 | DDR2_B |
| DDR2_TSM1_V3700 | | DDR2_TSM1_V3700_N2P3 | DDR2_TSM1_H0_V3700, DDR2_TSM1_H2_V3700 | DDR2_A |
| DDR2_TSM1_V3700 | | DDR2_TSM1_V3700_N2P4 | DDR2_TSM1_H1_V3700, DDR2_TSM1_H3_V3700 | DDR2_B |
| DDR1_LTO6_1 | | DDR1_TSM1_LTO6_1 | DDR1_TSM1_LTO6_1_A | DDR1_A |
| DDR1_LTO6_2 | | DDR1_TSM1_LTO6_2 | DDR1_TSM1_LTO6_2_B | DDR1_B |
| DDR1_LTO6_3 | | DDR1_TSM1_LTO6_3 | DDR1_TSM1_LTO6_3_A | DDR1_A |
| DDR1_LTO6_4 | | DDR1_TSM1_LTO6_4 | DDR1_TSM1_LTO6_4_B | DDR1_B |
| DDR1_LTO6_5 | | DDR1_TSM1_LTO6_5 | DDR1_TSM1_LTO6_5_A | DDR1_A |
| DDR1_LTO6_6 | | DDR1_TSM1_LTO6_6 | DDR1_TSM1_LTO6_6_B | DDR1_B |
| DDR1_LTO6_7 | | DDR1_TSM1_LTO6_7 | DDR1_TSM1_LTO6_7_A | DDR1_A |
| DDR2_LTO6_1 | | DDR2_TSM1_LTO6_1 | DDR2_TSM1_LTO6_1_A | DDR2_A |
| DDR2_LTO6_2 | | DDR2_TSM1_LTO6_2 | DDR2_TSM1_LTO6_2_B | DDR2_B |
| DDR2_LTO6_3 | | DDR2_TSM1_LTO6_3 | DDR2_TSM1_LTO6_3_A | DDR2_A |
| DDR2_LTO6_4 | | DDR2_TSM1_LTO6_4 | DDR2_TSM1_LTO6_4_B | DDR2_B |
| DDR2_LTO6_5 | | DDR2_TSM1_LTO6_5 | DDR2_TSM1_LTO6_5_A | DDR2_A |
| DDR2_LTO6_6 | | DDR2_TSM1_LTO6_6 | DDR2_TSM1_LTO6_6_B | DDR2_B |
| DDR2_LTO6_7 | | DDR2_TSM1_LTO6_7 | DDR2_TSM1_LTO6_7_A | DDR2_A |
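The zoning follows a regular naming convention, so it can be generated rather than typed. A minimal sketch assuming the pattern in the table above (even-numbered host ports and odd-numbered V3700 ports on fabric A, each V3700 port zoned with the two host HBA ports on its own fabric); this generator is an illustration, not part of the delivered configuration:

```python
# Generate the zoning rows for one DR site ("DDR1" or "DDR2"):
# host HBA ports H0..H3 alternate between the A and B fabrics, and each
# V3700 node port is zoned with the two host ports on its own fabric.
def zoning_rows(site):
    rows = []
    for h in range(4):  # TSM server HBA ports H0..H3
        fabric = f"{site}_{'A' if h % 2 == 0 else 'B'}"
        rows.append((f"{site}_TSM1_H{h}", f"{site}_TSM1_H{h}_V3700", fabric))
    ports = [f"N{n}P{p}" for n in (1, 2) for p in range(1, 5)]
    for i, port in enumerate(ports):  # V3700 node ports N1P1..N2P4
        hosts = ("H0", "H2") if i % 2 == 0 else ("H1", "H3")
        zones = ", ".join(f"{site}_TSM1_{h}_V3700" for h in hosts)
        fabric = f"{site}_{'A' if i % 2 == 0 else 'B'}"
        rows.append((f"{site}_TSM1_V3700_{port}", zones, fabric))
    return rows
```

For example, `zoning_rows("DDR1")[0]` yields `("DDR1_TSM1_H0", "DDR1_TSM1_H0_V3700", "DDR1_A")`, matching the first row of the table.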
Appendix D – TSM Configuration
D.1 TSM Sizing
D.1.1 Source Data
| Initial Source Data (GB) | End of year 1 (GB) | End of year 2 (GB) | End of year 3 (GB) |
|---|---|---|---|
| 33000 | 36300 | 39930 | 43923 |
D.1.2 Primary Storage Pools
| End of year 1 (GB) | End of year 2 (GB) | End of year 3 (GB) |
|---|---|---|
| 52961.70 | 58257.87 | 64083.66 |
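The figures in D.1.1 and D.1.2 are internally consistent. A sketch of the arithmetic behind them — the 10% annual growth rate and the 1.459x primary-pool multiplier are inferred from the tables, not stated in the document:

```python
# Sizing sketch: 10% annual growth on the initial source data, with the
# primary storage pools at ~1.459x the source data (both factors inferred
# from the D.1.1 and D.1.2 tables above).
initial_gb = 33000
source = [round(initial_gb * 1.10 ** year, 2) for year in (1, 2, 3)]
pools = [round(gb * 1.459, 2) for gb in source]

assert source == [36300.0, 39930.0, 43923.0]   # D.1.1 end-of-year figures
assert pools == [52961.7, 58257.87, 64083.66]  # D.1.2 primary pool sizes
```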
D.1.3 Tape Requirements
| | End of year 1 | End of year 2 | End of year 3 |
|---|---|---|---|
| Total number of tapes needed per DR site | 51 | 56 | 63 |
| Onsite Media | 13 | 14 | 17 |
| Offsite Media | 38 | 42 | 46 |
| Tapes going offsite per day | 2 | 2 | 3 |
D.1.4 TSM Clients
| Client | Server | OS | Quantity |
|---|---|---|---|
| Mgmt Servers | x3650M4 | Win2012 | 2 |
| TKLM Servers | x3650M4 | Win2012 | 2 |
| GPFS GSS24 | x3650M4 | RHEL | 2 |
| GPFS NFS Client | x3650M4 | RHEL | 2 |
| Big Insights - Mgmt | nx360M4 | RHEL | 1 |
| Large Memory | x3750M4 | RHEL | 1 |
| Infra Nodes | nx360M4 | RHEL | 8 |
| VDI Server Storage | nx360M4 | Win2012 | 2 |
| VDI Planar Nodes | nx360M4 | ESX | 4 |
| VDI Planar Mgmts | nx360M4 | ESX | 2 |

Total number of clients: 26

| Initial Count | End of year 1 | End of year 2 | End of year 3 |
|---|---|---|---|
| 29 | 32 | 35 | 39 |
D.2 TSM Internal Disk Configuration
Dundee DR Site 1

| Array | Parity | RAID | Disk | Size | Partition Function | Partition Size (GB) |
|---|---|---|---|---|---|---|
| 1 | 1+1 | 1 | SSD | 1024 GB | Operating System | 128 |
| 1 | 1+1 | 1 | SSD | 1024 GB | TSM Archive Log | 384 |
| 2 | 1+1 | 1 | SSD | 256 GB | TSM Active Log | 256 |
| 3 | 1+1 | 1 | SSD | 1024 GB | TSM DB | 1024 |

Dundee DR Site 2

| Array | Parity | RAID | Disk | Size | Partition Function | Partition Size (GB) |
|---|---|---|---|---|---|---|
| 1 | 1+1 | 1 | SSD | 1024 GB | Operating System | 128 |
| 1 | 1+1 | 1 | SSD | 1024 GB | TSM Archive Log | 384 |
| 2 | 1+1 | 1 | SSD | 256 GB | TSM Active Log | 256 |
| 3 | 1+1 | 1 | SSD | 1024 GB | TSM DB | 1024 |
D.3 TSM Disk Storage Pools

Dundee DR Site 1

| No. Arrays | Parity | RAID | Disk | Size | Array / Mdisk | Type | Size | MdiskGroup |
|---|---|---|---|---|---|---|---|---|
| 2 | 9+P+Q | 6 | NL-SAS | 4 TB | DDR1_TSM1_V3700_1mdisk0 | Striped | 36TB | DDR1_TSM1_V3700_1_MDG1 |
| | | | | | DDR1_TSM1_V3700_1mdisk1 | Striped | 36TB | DDR1_TSM1_V3700_1_MDG1 |

Dundee DR Site 2

| No. Arrays | Parity | RAID | Disk | Size | Array / Mdisk | Type | Size | MdiskGroup |
|---|---|---|---|---|---|---|---|---|
| 2 | 9+P+Q | 6 | NL-SAS | 4 TB | DDR2_TSM1_V3700_1mdisk0 | Striped | 36TB | DDR2_TSM1_V3700_1_MDG1 |
| | | | | | DDR2_TSM1_V3700_1mdisk1 | Striped | 36TB | DDR2_TSM1_V3700_1_MDG1 |
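The 36TB mdisk size follows directly from the array geometry above. A one-line check of the raw data-drive arithmetic (formatting and RAID metadata overheads aside):

```python
# Capacity sketch for each V3700 mdisk above: a RAID 6 array in a 9+P+Q
# layout keeps nine 4 TB NL-SAS data drives' worth of capacity; the P and
# Q drives hold parity only.
data_drives, drive_tb = 9, 4
usable_tb = data_drives * drive_tb
assert usable_tb == 36   # matches the 36TB striped mdisk size in the table
```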
Appendix E – System Configurations
E.1 GSS24
| Material PN | Description | Qty |
|---|---|---|
| 7915FT2 | IBM System x3650 M4 2.5" Base without Power Supply | 1 |
| 94Y6602 | Addl Intel Xeon Processor E5-2670 8C 2.6GHz 20MB 115W W/Fan | 1 |
| 81Y6822 | IBM System x Gen-III CMA | 1 |
| 90Y5743 | x3650 M4 System Level Code | 1 |
| 90Y5776 | x3650 M4 8x 2.5" HS HDD Assembly Kit | 1 |
| 94Y6599 | Intel Xeon Processor E5-2670 8C 2.6GHz 20MB Cache 1600MHz 115W | 1 |
| 90Y5744 | x3650 M4 Agency Label GBM | 1 |
| 00Y3535 | LSI SAS9201-16e Quad-port miniSAS x8 PCIe 2.0 SAS HBA | 3 |
| 81Y6821 | IBM System x Gen-III Slides Kit | 1 |
| 59Y2471 | IBM UltraSlim Enhanced SATA DVD-ROM | 1 |
| 00W0053 | Mellanox ConnectX-3 EN Dual-port SFP+ 10GbE Adapter | 3 |
| 90Y4340 | ServeRAID M5100 Series 875mm Flash Power Module Cable | 1 |
| 90Y8877 | IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD | 2 |
| 94Y6669 | IBM System x 750W High Efficiency Platinum AC Power Supply | 2 |
| 46M2982 | 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable | 2 |
| 00D2997 | x3650 M4 Mini SAS Cable 820MM | 1 |
| 69Y1194 | x3650 M4 ODD Cable | 1 |
| 00D2900 | IBM System x Lightpath Kit | 1 |
| 49Y8578 | IBM 10GbE SW SFP+ Transceiver | 6 |
| 90Y5759 | x3650 M4 PCIe Riser Card 1 (1 x8 FH/FL + 2 x8 FH/HL Slots) | 1 |
| 81Y4559 | ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade for IBM System x | 1 |
| 90Y5742 | IBM System x3650 M4 Planar | 1 |
| 90Y5761 | System Documentation and Software-French | 1 |
| 90Y3110 | 8GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM | 8 |
| 90Y3901 | IBM Integrated Management Module Advanced Upgrade | 1 |
| 68Y7399 | Select Storage devices - no IBM-configured RAID required | 1 |
| 90Y5778 | x3650 M4 Riser 2 Bracket | 1 |
| 69Y5321 | x3650 M4 PCIe Riser Card 2 (1 x8 FH/FL + 2 x8 FH/HL Slots) | 1 |
| 90Y4344 | ServeRAID M5110e SAS/SATA Controller for IBM System x | 1 |
| 25R4194 | Integrate in manufacturing | 1 |
| 00W0591 | IBM GNRx Solution | 1 |
| 49Y1013 | Rack Installation >1U Component | 1 |
| 95Y4173 | Configuration ID 01 | 1 |
| 49Y1049 | e1350 Solution Component | 1 |
| 59Y8136 | Integrated Solutions | 1 |
| 46M2873 | Rack 01 | 1 |
| 49Y2186 | Rack location U09 | 1 |
| 90Y4033 | RHEL for HPC 2 Skts Head Node Prem RH Support 3Yr | 1 |
| 00FF148 | RHEL for HPC 6 Media Kit | 1 |
| 00AJ874 | GPFS Native RAID v3.x, Per Managed Server w/3Yr SW S&S | 1 |
| 49Y1063 | No Preload Specify | 1 |
| 49Y2932 | Red Hat Specify | 1 |
| 49Y1062 | Drop-in-the-Box Specify | 1 |
| 5374FT1 | Base 5374-FT1 Starting Point | 1 |
| 90Y4344 | ServeRAID M5110e SAS/SATA Controller for IBM System x | 1 |
| 95Y4268 | ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade Placement | 1 |
| 95Y4215 | Controller 01 | 1 |
| 95Y4173 | Configuration ID 01 | 1 |
| 7915FT2 | IBM System x3650 M4 2.5" Base without Power Supply | 1 |
| 94Y6602 | Addl Intel Xeon Processor E5-2670 8C 2.6GHz 20MB 115W W/Fan | 1 |
| 81Y6822 | IBM System x Gen-III CMA | 1 |
| 90Y5743 | x3650 M4 System Level Code | 1 |
| 90Y5776 | x3650 M4 8x 2.5" HS HDD Assembly Kit | 1 |
| 94Y6599 | Intel Xeon Processor E5-2670 8C 2.6GHz 20MB Cache 1600MHz 115W | 1 |
| 90Y5744 | x3650 M4 Agency Label GBM | 1 |
| 00Y3535 | LSI SAS9201-16e Quad-port miniSAS x8 PCIe 2.0 SAS HBA | 3 |
| 81Y6821 | IBM System x Gen-III Slides Kit | 1 |
| 59Y2471 | IBM UltraSlim Enhanced SATA DVD-ROM | 1 |
| 00W0053 | Mellanox ConnectX-3 EN Dual-port SFP+ 10GbE Adapter | 3 |
| 90Y4340 | ServeRAID M5100 Series 875mm Flash Power Module Cable | 1 |
| 90Y8877 | IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD | 2 |
| 94Y6669 | IBM System x 750W High Efficiency Platinum AC Power Supply | 2 |
| 46M2982 | 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable | 2 |
| 00D2997 | x3650 M4 Mini SAS Cable 820MM | 1 |
| 69Y1194 | x3650 M4 ODD Cable | 1 |
| 00D2900 | IBM System x Lightpath Kit | 1 |
| 49Y8578 | IBM 10GbE SW SFP+ Transceiver | 6 |
| 90Y5759 | x3650 M4 PCIe Riser Card 1 (1 x8 FH/FL + 2 x8 FH/HL Slots) | 1 |
| 81Y4559 | ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade for IBM System x | 1 |
| 90Y5742 | IBM System x3650 M4 Planar | 1 |
| 90Y5761 | System Documentation and Software-French | 1 |
| 90Y3110 | 8GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM | 8 |
| 90Y3901 | IBM Integrated Management Module Advanced Upgrade | 1 |
| 68Y7399 | Select Storage devices - no IBM-configured RAID required | 1 |
| 90Y5778 | x3650 M4 Riser 2 Bracket | 1 |
| 69Y5321 | x3650 M4 PCIe Riser Card 2 (1 x8 FH/FL + 2 x8 FH/HL Slots) | 1 |
| 90Y4344 | ServeRAID M5110e SAS/SATA Controller for IBM System x | 1 |
| 25R4194 | Integrate in manufacturing | 1 |
| 00W0591 | IBM GNRx Solution | 1 |
| 49Y1013 | Rack Installation >1U Component | 1 |
| 95Y4174 | Configuration ID 02 | 1 |
| 49Y1049 | e1350 Solution Component | 1 |
| 59Y8136 | Integrated Solutions | 1 |
| 46M2873 | Rack 01 | 1 |
| 49Y2187 | Rack location U11 | 1 |
| 90Y4027 | RHEL for HPC 2 Skts Compute Nodes Subscription 3Yr | 1 |
| 00AJ874 | GPFS Native RAID v3.x, Per Managed Server w/3Yr SW S&S | 1 |
| 49Y1063 | No Preload Specify | 1 |
| 49Y2932 | Red Hat Specify | 1 |
| 49Y1062 | Drop-in-the-Box Specify | 1 |
| 5374FT1 | Base 5374-FT1 Starting Point | 1 |
| 90Y4344 | ServeRAID M5110e SAS/SATA Controller for IBM System x | 1 |
| 95Y4268 | ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade Placement | 1 |
| 95Y4215 | Controller 01 | 1 |
| 95Y4174 | Configuration ID 02 | 1 |
| 7915FT2 | IBM System x3650 M4 2.5" Base without Power Supply | 2 |
| 81Y6822 | IBM System x Gen-III CMA | 2 |
| 90Y5743 | x3650 M4 System Level Code | 2 |
| 90Y5776 | x3650 M4 8x 2.5" HS HDD Assembly Kit | 2 |
| 90Y5744 | x3650 M4 Agency Label GBM | 2 |
| 81Y6821 | IBM System x Gen-III Slides Kit | 2 |
| 00W0053 | Mellanox ConnectX-3 EN Dual-port SFP+ 10GbE Adapter | 2 |
| 90Y4340 | ServeRAID M5100 Series 875mm Flash Power Module Cable | 2 |
| 90Y8877 | IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD | 4 |
| 94Y6669 | IBM System x 750W High Efficiency Platinum AC Power Supply | 4 |
| 00D2997 | x3650 M4 Mini SAS Cable 820MM | 2 |
| 00D2900 | IBM System x Lightpath Kit | 2 |
| 39Y7932 | 4.3m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable | 4 |
| 49Y8578 | IBM 10GbE SW SFP+ Transceiver | 2 |
| 90Y5759 | x3650 M4 PCIe Riser Card 1 (1 x8 FH/FL + 2 x8 FH/HL Slots) | 2 |
| 81Y4559 | ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade for IBM System x | 2 |
| 00D5049 | 16GB (1x16GB, 2Rx4, 1.5V) PC3-14900 CL13 ECC DDR3 1866MHz LP RDIMM | 16 |
| 46W4366 | Addl Intel Xeon Processor E5-2660 v2 10C 2.2GHz 25MB Cache 95W | 2 |
| 68Y7399 | Select Storage devices - no IBM-configured RAID required | 2 |
| 90Y5778 | x3650 M4 Riser 2 Bracket | 2 |
| 46W4348 | Intel Xeon Processor E5-2660 v2 10C 2.2GHz 25MB Cache 1866MHz 95W | 2 |
| 69Y5321 | x3650 M4 PCIe Riser Card 2 (1 x8 FH/FL + 2 x8 FH/HL Slots) | 2 |
| 90Y4344 | ServeRAID M5110e SAS/SATA Controller for IBM System x | 2 |
| 90Y5772 | System Documentation and Software-UK English | 2 |
| 46W4379 | IBM System x3650 M4 Planar (IVB Refresh) | 2 |
| 25R4194 | Integrate in manufacturing | 2 |
| 49Y1013 | Rack Installation >1U Component | 2 |
| 95Y4175 | Configuration ID 03 | 2 |
| 49Y1049 | e1350 Solution Component | 2 |
| 59Y8136 | Integrated Solutions | 2 |
| 59Y8118 | Advanced Grouping | 2 |
| 46M2873 | Rack 01 | 2 |
| 49Y1024 | Rack location U21 | 4 |
| 90Y4031 | RHEL for HPC 2 Skts Head Node Std RH Support 3Yr | 2 |
| 49Y1063 | No Preload Specify | 2 |
| 49Y2932 | Red Hat Specify | 2 |
| 49Y1062 | Drop-in-the-Box Specify | 2 |
| 5374FT1 | Base 5374-FT1 Starting Point | 2 |
| 90Y4344 | ServeRAID M5110e SAS/SATA Controller for IBM System x | 2 |
| 95Y4268 | ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade Placement | 2 |
| 95Y4215 | Controller 01 | 2 |
| 95Y4175 | Configuration ID 03 | 2 |
| 0796016 | IBM System x GPFS Storage Server JBOD (58x4TB+2x200GB SSD) | 1 |
| 39Y7916 | 2.5m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable | 2 |
| 25R4194 | Integrate in manufacturing | 1 |
| 00W0591 | IBM GNRx Solution | 1 |
| 00D5234 | 3m IBM miniSAS to miniSAS SAS Cable | 4 |
| 49Y1013 | Rack Installation >1U Component | 1 |
| 49Y1049 | e1350 Solution Component | 1 |
| 59Y8136 | Integrated Solutions | 1 |
| 46M2873 | Rack 01 | 1 |
| 49Y1018 | Rack location U01 | 1 |
| 0796016 | IBM System x GPFS Storage Server JBOD (58x4TB+2x200GB SSD) | 1 |
| 39Y7916 | 2.5m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable | 2 |
| 25R4194 | Integrate in manufacturing | 1 |
| 00W0591 | IBM GNRx Solution | 1 |
| 00D5234 | 3m IBM miniSAS to miniSAS SAS Cable | 4 |
| 49Y1013 | Rack Installation >1U Component | 1 |
| 49Y1049 | e1350 Solution Component | 1 |
| 59Y8136 | Integrated Solutions | 1 |
| 46M2873 | Rack 01 | 1 |
| 49Y1019 | Rack location U05 | 1 |
| 0796016 | IBM System x GPFS Storage Server JBOD (58x4TB+2x200GB SSD) | 1 |
| 39Y7916 | 2.5m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable | 2 |
| 25R4194 | Integrate in manufacturing | 1 |
| 00W0591 | IBM GNRx Solution | 1 |
| 00D5234 | 3m IBM miniSAS to miniSAS SAS Cable | 4 |
| 49Y1013 | Rack Installation >1U Component | 1 |
| 49Y1049 | e1350 Solution Component | 1 |
| 59Y8136 | Integrated Solutions | 1 |
| 46M2873 | Rack 01 | 1 |
| 49Y2188 | Rack location U13 | 1 |
| 0796015 | IBM System x GPFS Storage Server JBOD (58x4TB) | 1 |
| 39Y7916 | 2.5m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable | 2 |
| 25R4194 | Integrate in manufacturing | 1 |
| 00W0591 | IBM GNRx Solution | 1 |
| 00D5234 | 3m IBM miniSAS to miniSAS SAS Cable | 4 |
| 49Y1013 | Rack Installation >1U Component | 1 |
| 49Y1049 | e1350 Solution Component | 1 |
| 59Y8136 | Integrated Solutions | 1 |
| 46M2873 | Rack 01 | 1 |
| 49Y2192 | Rack location U17 | 1 |
| 1410PRB | Intelligent Cluster 42U 1100mm Enterprise V2 Dynamic Rack | 1 |
| 59Y7884 | Combo PDU & 3p, 32A/380-415V, IEC 309 3P+N+G LC | 4 |
| 49Y2914 | -SB- 2-bay arrangement | 1 |
| 49Y1056 | Use 200V (high voltage) | 1 |
| 25R4194 | Integrate in manufacturing | 1 |
| 49Y1065 | 3U black plastic filler panel | 1 |
| 49Y1039 | 5U black plastic filler panel | 3 |
| 40K5794 | 10m Green Cat5e Cable | 8 |
| 49Y2940 | 5m LC-LC Fiber Cable (networking) | 14 |
| 49Y1011 | Rack Assembly - 42U Rack | 1 |
| 25R4170 | Cluster Hardware & Fabric Verification - 1st Rack | 1 |
| 49Y2337 | Cluster 1350 Ship Group | 1 |
| 49Y1049 | e1350 Solution Component | 1 |
| 00W0591 | IBM GNRx Solution | 1 |
| 59Y8136 | Integrated Solutions | 1 |
| 49Y2918 | dual source distribution | 1 |
| 46M2873 | Rack 01 | 1 |
| 00Y4502 | IBM Platform Cluster Mgr Adv V4.x, Per Managed Server w/3 Yr SW S&S | 2 |
| 80Y9476 | IBM GPFS for x86 Architecture, GPFS Svr Per 250, 10 VUs w/3Yr SW S&S | 2 |
| 80Y9475 | IBM GPFS for x86 Architecture, GPFS Svr Per 10 VUs w/3 Yr SW S&S | 4 |
E.2 x3750M4

| Material PN | Description | Qty |
|---|---|---|
| System x | X3750M4 1TB Base | 1 |
| 8722B1G | x3750 M4, 2x Xeon 6C E5-4610 95W 2.4GHz/1333MHz/15MB, 2x 8GB, O/Bay HS 2.5in SATA/SAS, 1400W p/s, Rack | 1 |
| 88Y7336 | Intel Xeon 6C Processor Model E5-4610 95W 2.4GHz/1333MHz/15MB | 2 |
| 90Y3105 | 32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM | 48 |
| 60Y0360 | IBM eXFlash 8x 1.8" HS SAS SSD Backplane | 1 |
| 88Y7419 | IBM 8x 2.5" HS SAS/SATA/SSD HDD Backplane | 1 |
| 00AJ050 | IBM 400GB 1.8in SATA MLC S3500 Enterprise Value SSD | 5 |
| 90Y8872 | IBM 600GB 2.5in SFF G2HS 10K 6Gbps SAS HDD | 8 |
| 81Y4481 | ServeRAID M5110 SAS/SATA Controller for IBM System x | 1 |
| 81Y4487 | ServeRAID M5100 Series 512MB Flash/RAID 5 Upgrade for IBM System x | 2 |
| 88Y7371 | IBM x3750 M4 PCIe 3 x8 riser | 1 |
| 88Y7429 | IBM Dual port 10Gb SFP+ Ethernet Adapter Card | 1 |
| 88Y7373 | IBM 1400W HE Redundant Power Supply | 1 |
| 46M0902 | IBM UltraSlim Enhanced SATA Multi-Burner | 1 |
E.3 Management Servers
| Part No | Description | Qty |
|---|---|---|
| System x | X3650M4 | 2 |
| 7915CTO_00D5047 | 16GB (1x16GB, 2Rx4, 1.5V) PC3-14900 CL13 ECC DDR3 1866MHz LP RDIMM | 16 |
| 7915CTO_00D5135 | Controller 1 - ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade | 2 |
| 7915CTO_00D9687 | 3U Bracket for Mellanox ConnectX-3 10 GbE Adapter | 2 |
| 7915CTO_00D9693 | Mellanox ConnectX-3 10 GbE Adapter for IBM System x | 2 |
| 7915CTO_25P2853 | Unknown or not required | 2 |
| 7915CTO_25P2854 | Customer provided and installed | 2 |
| 7915CTO_39Y7979 | 4.3m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable | 4 |
| 7915CTO_46W4312 | Intel Xeon Processor E5-2660 v2 10C 2.2GHz 25MB Cache 1866MHz | 2 |
| 7915CTO_46W4330 | Addl Intel Xeon Processor E5-2660 v2 10C 2.2GHz 25MB Cache 95W | 2 |
| 7915CTO_46W4378 | IBM System x3650 M4 Planar (IVB Refresh) | 2 |
| 7915CTO_49Y4217 | Brocade 10Gb SFP+ SR Optical Transceiver | 2 |
| 7915CTO_59Y8179 | Controller 1 - ServeRAID M5110e SAS/SATA Controller | 2 |
| 7915CTO_68Y7397 | Select Storage devices - no IBM-configured RAID required | 2 |
| 7915CTO_81Y4576 | ServeRAID M5100 Series 875mm Flash Power Module Cable | 2 |
| 7915CTO_81Y6562 | IBM System x 750W High Efficiency Platinum AC Power Supply | 4 |
| 7915CTO_81Y6781 | x3650 M4 PCIe Riser Card 1 (1 x8 FH/FL + 2 x8 FH/HL Slots) | 2 |
| 7915CTO_81Y6782 | x3650 M4 WW Packaging | 2 |
| 7915CTO_81Y6784 | x3650 M4 System Level Code | 2 |
| 7915CTO_81Y6785 | x3650 M4 Agency Label GBM | 2 |
| 7915CTO_81Y6831 | IBM System x3650 M4 2.5inch Base without Power Supply | 2 |
| 7915CTO_81Y6835 | x3650 M4 8x 2.5inch HS HDD Assembly Kit | 2 |
| 7915CTO_81Y6839 | x3650 M4 Riser 2 Bracket | 2 |
| 7915CTO_81Y6884 | IBM System x Lightpath Kit | 2 |
| 7915CTO_81Y6892 | x3650 M4 PCIe Gen-III Riser Card 2 (1 x8 FH/FL + 2 x8 FH/HL Slots) | 2 |
| 7915CTO_90Y4302 | ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade for IBM System x | 2 |
| 7915CTO_90Y4343 | ServeRAID M5110e SAS/SATA Controller for IBM System x | 2 |
| 7915CTO_90Y5739 | System Documentation and Software-UK English | 2 |
| 7915CTO_90Y6458 | IBM System x Gen-III Slides Kit | 2 |
| 7915CTO_90Y6461 | IBM System x Gen-III CMA | 2 |
| 7915CTO_90Y8879 | IBM 300GB 10K 6Gbps SAS 2.5inch SFF G2HS HDD | 12 |
| 7915CTO_95Y4095 | Configuration ID 01 | 2 |
E.4 Storwize V7000
| Part No | Description | Qty |
|---|---|---|
| Storage | V7000 | 1 |
| 2076-324 | IBM Storwize V7000 Disk Control Enclosure | 1 |
| 10 | Storage Engine Preload | 1 |
| 3546 | 600GB 6Gb SAS 10K 2.5-inch SFF HDD | 21 |
| 5305 | 5 m Fiber Optic Cable LC-LC | 8 |
| 5711 | IBM 10GbE Optical SW SFP 2 pairs | 1 |
| 6008 | Cache 8 GB | 2 |
| 9730 | Power Cord - PDU connection | 1 |
| 9801 | AC Power Supply | 2 |
| 2076-224 | IBM Storwize V7000 Disk Expansion Enclosure | 1 |
| 3546 | 600GB 6Gb SAS 10K 2.5-inch SFF HDD | 21 |
| 5401 | 1 m 6 Gb/s external mini SAS | 2 |
| 9730 | Power Cord - PDU connection | 1 |
| 9802 | AC Power Supply | 2 |
| 5608-W07 | IBM Tivoli Storage FlashCopy Manager V3.2 | 1 |
| 5639-VM7 | IBM Storwize V7000 Software V7 | 1 |
E.5 NeXtScale – Semi-structured Storage Nodes
| Part No | Description | Qty |
|---|---|---|
| 5455FT1 | nx360 M4 Computer Node | 6 |
| 46C8988 | N2115 SAS/SATA HBA for IBM System x | 6 |
| 00Y8615 | 3.5" HDD RAID cage for nx360 M4 Storage Native Expansion Tray | 6 |
| 46W2714 | Addl Intel Xeon Processor E5-2660 v2 10C 2.2GHz 25MB 1866MHz 95W | 6 |
| 00Y7842 | System Documentation and Software - US English | 6 |
| 00Y8546 | IBM NeXtScale Storage Native Expansion Tray | 6 |
| 68Y7399 | Select Storage devices - no IBM-configured RAID required | 6 |
| 00Y7826 | Intel Xeon Processor E5-2660 v2 10C 2.2GHz 25MB Cache 1866MHz 95W | 6 |
| 00Y7839 | nx360 M4 Computer Node Label GBM | 6 |
| 46W2744 | nx360 M4 PCIe riser | 6 |
| 00D5049 | 16GB (1x16GB, 2Rx4, 1.5V) PC3-14900 CL13 ECC DDR3 1866MHz LP RDIMM | 48 |
| 00AD025 | IBM 4TB 7.2K 6Gbps SATA 3.5" HDD for NeXtScale System | 48 |
| 00D9700 | Broadcom Single Port 10Gbe SFP+ Embedded Adapter for IBM System x | 6 |
| 00Y7836 | Pwr/LEDs Bezel Assy | 6 |
| 00Y8618 | 1U Internal Storage Tray Label GBM | 6 |
| 44W1993 | Group ID 01 | 6 |
| 49Y1049 | e1350 Solution Component | 6 |
| 59Y8136 | Integrated Solutions | 6 |
| 49Y1063 | No Preload Specify | 6 |
| 5456FT1 | IBM n1200 Enclosure Chassis Base Model | 1 |
| 00AM474 | IBM NeXtScale n1200 Enclosure Logo Nameplate | 1 |
| 00Y8569 | CFF 900W Power Supply | 6 |
| 39Y7937 | 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable | 6 |
| 00Y7859 | n1200 Enclosure Chassis Label GBM | 1 |
| 00Y8568 | n1200 Enclosure Shipping Bracket Kit | 1 |
| 00Y7856 | n1200 Enclosure Fan Power Control Card Assembly | 1 |
| 00Y7857 | n1200 Enclosure Midplane Assembly | 1 |
| 00Y7862 | System Documentation and Software - US English | 1 |
| 00Y8570 | n1200 Enclosure Fan Assembly | 10 |
| 00Y8366 | KVM Dongle Cable | 1 |
| 25R4194 | Integrate in manufacturing | 1 |
| 49Y1013 | Rack Installation >1U Component | 1 |
| 44W1993 | Group ID 01 | 6 |
| 49Y1049 | e1350 Solution Component | 1 |
| 59Y8136 | Integrated Solutions | 1 |
| 46M2873 | Rack 01 | 1 |
| 49Y2181 | Rack location U03 | 1 |
| 5455FT1 | nx360 M4 Computer Node | 1 |
| 46W2714 | Addl Intel Xeon Processor E5-2660 v2 10C 2.2GHz 25MB 1866MHz 95W | 1 |
| 00Y7842 | System Documentation and Software - US English | 1 |
| 46W2727 | nx360 M4 3.5-inch HDD cage | 1 |
| 68Y7399 | Select Storage devices - no IBM-configured RAID required | 1 |
| 00Y7826 | Intel Xeon Processor E5-2660 v2 10C 2.2GHz 25MB Cache 1866MHz 95W | 1 |
| 00AD005 | IBM 500GB 7.2K 6Gbps SATA 3.5" HDD for NeXtScale System | 1 |
| 00Y7835 | PCIe Bracket filler | 1 |
| 00Y7839 | nx360 M4 Computer Node Label GBM | 1 |
| 00D5049 | 16GB (1x16GB, 2Rx4, 1.5V) PC3-14900 CL13 ECC DDR3 1866MHz LP RDIMM | 8 |
| 00D9700 | Broadcom Single Port 10Gbe SFP+ Embedded Adapter for IBM System x | 1 |
| 00Y7836 | Pwr/LEDs Bezel Assy | 1 |
| 44W1994 | Group ID 02 | 1 |
| 49Y1049 | e1350 Solution Component | 1 |
| 59Y8136 | Integrated Solutions | 1 |
| 90Y4027 | RHEL for HPC 2 Skts Compute Nodes Subscription 3Yr | 1 |
| 00FF148 | RHEL for HPC 6 Media Kit | 1 |
| 49Y1063 | No Preload Specify | 1 |
| 49Y2932 | Red Hat Specify | 1 |
| 49Y1062 | Drop-in-the-Box Specify | 1 |
| 5455FT1 | nx360 M4 Computer Node | 5 |
| 46W2714 | Addl Intel Xeon Processor E5-2660 v2 10C 2.2GHz 25MB 1866MHz 95W | 5 |
| 00Y7842 | System Documentation and Software - US English | 5 |
| 46W2727 | nx360 M4 3.5-inch HDD cage | 5 |
| 68Y7399 | Select Storage devices - no IBM-configured RAID required | 5 |
| 00Y7826 | Intel Xeon Processor E5-2660 v2 10C 2.2GHz 25MB Cache 1866MHz 95W | 5 |
| 00AD005 | IBM 500GB 7.2K 6Gbps SATA 3.5" HDD for NeXtScale System | 5 |
| 00Y7835 | PCIe Bracket filler | 5 |
| 00Y7839 | nx360 M4 Computer Node Label GBM | 5 |
| 00D5049 | 16GB (1x16GB, 2Rx4, 1.5V) PC3-14900 CL13 ECC DDR3 1866MHz LP RDIMM | 40 |
| 00D9700 | Broadcom Single Port 10Gbe SFP+ Embedded Adapter for IBM System x | 5 |
| 00Y7836 | Pwr/LEDs Bezel Assy | 5 |
| 44W1994 | Group ID 02 | 5 |
| 49Y1049 | e1350 Solution Component | 5 |
| 59Y8136 | Integrated Solutions | 5 |
| 90Y4027 | RHEL for HPC 2 Skts Compute Nodes Subscription 3Yr | 5 |
| 49Y1063 | No Preload Specify | 5 |
| 49Y2932 | Red Hat Specify | 5 |
| 49Y1062 | Drop-in-the-Box Specify | 5 |
| 5456FT1 | IBM n1200 Enclosure Chassis Base Model | 1 |
| 00AM474 | IBM NeXtScale n1200 Enclosure Logo Nameplate | 1 |
| 00Y7860 | 1U Halfwide Node Dummy Filler | 6 |
| 00Y8569 | CFF 900W Power Supply | 6 |
| 39Y7937 | 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable | 6 |
| 00Y7859 | n1200 Enclosure Chassis Label GBM | 1 |
| 00Y8568 | n1200 Enclosure Shipping Bracket Kit | 1 |
| 00Y7856 | n1200 Enclosure Fan Power Control Card Assembly | 1 |
| 00Y7857 | n1200 Enclosure Midplane Assembly | 1 |
| 00Y7862 | System Documentation and Software - US English | 1 |
| 00Y8570 | n1200 Enclosure Fan Assembly | 10 |
| 00Y8366 | KVM Dongle Cable | 1 |
| 25R4194 | Integrate in manufacturing | 1 |
| 49Y1013 | Rack Installation >1U Component | 1 |
| 44W1994 | Group ID 02 | 6 |
| 49Y1049 | e1350 Solution Component | 1 |
| 59Y8136 | Integrated Solutions | 1 |
| 46M2873 | Rack 01 | 1 |
| 49Y2186 | Rack location U09 | 1 |
| 730952F | IBM System Networking RackSwitch G8052 (Front to Rear) | 1 |
| 46M2982 | 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable | 2 |
| 00CG089 | IBM System Networking Recessed 19" 4 Post Rail Kit | 1 |
| 00Y3069 | Switch Seal Kit | 1 |
| 25R4194 | Integrate in manufacturing | 1 |
| 49Y1012 | Rack Installation of 1U Component | 1 |
| 49Y1049 | e1350 Solution Component | 1 |
| 59Y8136 | Integrated Solutions | 1 |
| 46M2873 | Rack 01 | 1 |
| 49Y1018 | Rack location U01 | 1 |
| 1410PRB | Intelligent Cluster 42U 1100mm Enterprise V2 Dynamic Rack | 1 |
| 00Y3068 | Cable Installation Tool For Racks | 1 |
| 59Y7891 | C13 PDU & 3p, 32A/380-415V, IEC 309 3P+N+G LC | 2 |
| 00Y3067 | Railhawks Rack Cable Management Bracket Kit | 2 |
| 49Y2914 | -SB- 2-bay arrangement | 1 |
| 49Y1056 | Use 200V (high voltage) | 1 |
| 25R4194 | Integrate in manufacturing | 1 |
| 49Y1065 | 3U black plastic filler panel | 1 |
| 49Y1039 | 5U black plastic filler panel | 5 |
| 49Y2180 | Rack location U02 | 1 |
| 59Y1940 | 3m Molex Direct Attach Copper SFP+ Cable | 12 |
| 49Y1011 | Rack Assembly - 42U Rack | 1 |
| 25R4170 | Cluster Hardware & Fabric Verification - 1st Rack | 1 |
| 49Y2337 | Cluster 1350 Ship Group | 1 |
| 49Y1049 | e1350 Solution Component | 1 |
| 59Y8136 | Integrated Solutions | 1 |
| 49Y2918 | dual source distribution | 1 |
| 46M2873 | Rack 01 | 1 |
| 02R2271 | IntraRack CAT5E Cable Service | 12 |
| 90Y3521 | 30m IBM QSFP+ MTP Optical cable | 2 |
| 49Y7884 | IBM QSFP+ 40GBASE-SR4 Transceiver | 2 |
| 730964F | IBM System Networking RackSwitch G8264 (Front to Rear) | 1 |
| 00Y4502 | IBM Platform Cluster Mgr Adv V4.x, Per Managed Server w/3 Yr SW S&S | 12 |
| 00AE302 | IBM Platform Appl Ctr Std Ed Sys x V9.x Concurrent User w/3 Yr SW S&S | 12 |
| 00AE288 | IBM Platform Process Mgr for Sys x V9.x Concur User w/3 Yr SW S&S | 12 |
| 00AE286 | IBM Platform LSF Std Ed for Sys x V9.x, Per RVU w/3 Yr SW S&S | 240 |
| 80Y9477 | IBM GPFS for x86 Architecture, GPFS Client Per 10 VUs w/3 Yr SW S&S | 180 |
| 80Y9478 | IBM GPFS for x86 Architecture GPFS Client Per 250, 10VUs w/3Yr SW S&S | 6 |
E.6 NeXtScale – VDI
| Part No | Description | Qty |
|---|---|---|
| 00AM474 | IBM NeXtScale n1200 Enclosure Logo Nameplate | 1 |
| 00D5049 | 16GB (1x16GB, 2Rx4, 1.5V) PC3-14900 CL13 ECC DDR3 1866MHz LP RDIMM | 56 |
| 00W1227 | IBM 256GB SATA 1.8in MLC Enterprise Value SSD | 4 |
| 00Y3026 | Cable installation tool for racks | 1 |
| 00Y7827 | Intel Xeon Processor E5-2670 v2 10C 2.5GHz 25MB Cache 1866MHz | 8 |
| 00Y7835 | PCIe Bracket filler | 6 |
| 00Y7836 | Pwr/LEDs Bezel Assy | 8 |
| 00Y7839 | nx360 M4 Computer Node Label GBM | 8 |
| 00Y7842 | System Documentation and Software-US English | 8 |
| 00Y7856 | n1200 Enclosure Fan Power Control Card Assembly | 1 |
| 00Y7857 | n1200 Enclosure Midplane Assembly | 1 |
| 00Y7858 | n1200 Enclosure Chassis Package | 1 |
| 00Y7859 | n1200 Enclosure Chassis Label GBM | 1 |
| 00Y7860 | 1U Halfwide Node Dummy Filler | 4 |
| 00Y7862 | System Documentation and Software-US English | 1 |
| 00Y8366 | KVM Dongle cable | 1 |
| 00Y8569 | CFF 900W Power Supply | 6 |
| 00Y8570 | 8056 Fan Assembly | 10 |
| 39Y7937 | 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable | 6 |
| 41Y8382 | IBM USB Memory Key for VMWare ESXi 5.1 Update 1 | 6 |
| 44W1993 | Server Tray Group ID 01 | 16 |
| 46M3072 | No HDD Selected | 6 |
| 46M4992 | Preload by Hardware Feature Specify | 6 |
| 46W2715 | Intel Xeon Processor E5-2670 v2 10C 2.5GHz 25MB Cache 1866MHz 115W | 8 |
| 46W2727 | nx360 M4 3.5-inch HDD cage assembly | 6 |
| 46W2731 | nx360 M4 1.8-inch SSD cage assembly | 2 |
| 46W2733 | nx360 M4 1.8-inch bracket and cable assembly for HW RAID | 2 |
| 46W2744 | nx360 M4 PCIe riser | 2 |
| 49Y1063 | No Preload Specify | 8 |
| 49Y2735 | No SATA HDD Selected | 6 |
| 49Y2934 | VMWare Specify | 6 |
| 5455FT1 | nx360 M4 Computer Node | 8 |
| 5456FT1 | n1200 Enclosure Chassis | 1 |
| 59Y8148 | 3U bracket for low profile-internal-storage adapters | 2 |
| 68Y7399 | Select storage devices - no RAID required | 8 |
| 81Y4448 | ServeRAID M1115 SAS/SATA Controller for IBM System x | 2 |
| 90Y3900 | IBM Integration Management Module Standard Upgrade | 6 |
| 90Y5179 | Qlogic Embedded VFA FCoE/iSCSI License for IBM System x (FoD) | 8 |
| 90Y6454 | Qlogic Dual Port 10GbE SFP+ Embedded VFA for IBM System x | 8 |
E.7 NeXtScale – Compute Nodes

Part No | Description | Qty
5455FT1 | nx360 M4 Computer Node | 48
46W2714 | Addl Intel Xeon Processor E5-2660 v2 10C 2.2GHz 25MB 1866MHz 95W | 48
00Y7842 | System Documentation and Software - US English | 48
46W2727 | nx360 M4 3.5-inch HDD cage | 48
68Y7399 | Select Storage devices - no IBM-configured RAID required | 48
00Y7826 | Intel Xeon Processor E5-2660 v2 10C 2.2GHz 25MB Cache 1866MHz 95W | 48
00AD005 | IBM 500GB 7.2K 6Gbps SATA 3.5" HDD for NeXtScale System | 48
00Y7835 | PCIe Bracket filler | 48
00Y7839 | nx360 M4 Computer Node Label GBM | 48
00D5049 | 16GB (1x16GB, 2Rx4, 1.5V) PC3-14900 CL13 ECC DDR3 1866MHz LP RDIMM | 384
00D9700 | Broadcom Single Port 10GbE SFP+ Embedded Adapter for IBM System x | 48
00Y7836 | Pwr/LEDs Bezel Assy | 48
44W1993 | Group ID 01 | 48
49Y1049 | e1350 Solution Component | 48
59Y8136 | Integrated Solutions | 48
49Y1063 | No Preload Specify | 48
5456FT1 | IBM n1200 Enclosure Chassis Base Model | 4
00AM474 | IBM NeXtScale n1200 Enclosure Logo Nameplate | 4
00Y8569 | CFF 900W Power Supply | 24
39Y7937 | 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable | 24
00Y7859 | n1200 Enclosure Chassis Label GBM | 4
00Y8568 | n1200 Enclosure Shipping Bracket Kit | 4
00Y7856 | n1200 Enclosure Fan Power Control Card Assembly | 4
00Y7857 | n1200 Enclosure Midplane Assembly | 4
00Y7862 | System Documentation and Software - US English | 4
00Y8570 | n1200 Enclosure Fan Assembly | 40
00Y8366 | KVM Dongle Cable | 4
25R4194 | Integrate in manufacturing | 4
49Y1013 | Rack Installation >1U Component | 4
44W1993 | Group ID 01 | 48
49Y1049 | e1350 Solution Component | 4
59Y8136 | Integrated Solutions | 4
59Y8118 | Advanced Grouping | 4
46M2873 | Rack 01 | 4
49Y2186 | Rack location U09 | 4
49Y2190 | Rack location U15 | 4
49Y2181 | Rack location U03 | 4
49Y1025 | Rack location U23 | 4
730952F | IBM System Networking RackSwitch G8052 (Front to Rear) | 1
46M2982 | 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable | 2
00CG089 | IBM System Networking Recessed 19" 4 Post Rail Kit | 1
00Y3069 | Switch Seal Kit | 1
25R4194 | Integrate in manufacturing | 1
49Y1012 | Rack Installation of 1U Component | 1
49Y1049 | e1350 Solution Component | 1
59Y8136 | Integrated Solutions | 1
46M2873 | Rack 01 | 1
49Y1018 | Rack location U01 | 1
1410PRB | Intelligent Cluster 42U 1100mm Enterprise V2 Dynamic Rack | 1
00Y3068 | Cable Installation Tool For Racks | 1
59Y7891 | C13 PDU & 3p, 32A/380-415V, IEC 309 3P+N+G LC | 4
00Y3067 | Railhawks Rack Cable Management Bracket Kit | 4
49Y2914 | -SB- 2-bay arrangement | 1
49Y1056 | Use 200V (high voltage) | 1
25R4194 | Integrate in manufacturing | 1
49Y1065 | 3U black plastic filler panel | 1
49Y1039 | 5U black plastic filler panel | 2
49Y1064 | 1U black plastic filler panel | 3
49Y2180 | Rack location U02 | 1
59Y1940 | 3m Molex Direct Attach Copper SFP+ Cable | 48
49Y1011 | Rack Assembly - 42U Rack | 1
25R4170 | Cluster Hardware & Fabric Verification - 1st Rack | 1
49Y2337 | Cluster 1350 Ship Group | 1
49Y1049 | e1350 Solution Component | 1
59Y8136 | Integrated Solutions | 1
49Y2918 | dual source distribution | 1
46M2873 | Rack 01 | 1
02R2271 | IntraRack CAT5E Cable Service | 48
90Y3521 | 30m IBM QSFP+ MTP Optical cable | 2
49Y7884 | IBM QSFP+ 40GBASE-SR4 Transceiver | 2
730964F | IBM System Networking RackSwitch G8264 (Front to Rear) | 1
00Y4502 | IBM Platform Cluster Mgr Adv V4.x, Per Managed Server w/3 Yr SW S&S | 48
00AE286 | IBM Platform LSF Std Ed for Sys x V9.x, Per RVU w/3 Yr SW S&S | 210
00AE287 | IBM Platform LSF Std Ed for Sys x V9.x, Per 250 RVU w/3 Yr SW S&S | 3
80Y9477 | IBM GPFS for x86 Architecture, GPFS Client Per 10 VUs w/3 Yr SW S&S | 220
80Y9478 | IBM GPFS for x86 Architecture, GPFS Client Per 250 VUs w/3 Yr SW S&S | 26
E.8 TSM

Part No | Description | Qty

X3650M4 TSM Server
7915F2G | x3650 M4, Xeon 6C E5-2640 95W 2.5GHz/1333MHz/15MB, 1x8GB, O/Bay HS 2.5in SAS/SATA, SR M5110e, 750W p/s, Rack | 2
69Y5328 | Intel Xeon 6C Processor Model E5-2640 95W 2.5GHz/1333MHz/15MB W/Fan | 2
49Y1397 | 8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM | 14
69Y5319 | x3650 M4 Plus 8x 2.5in HS HDD Assembly Kit with Expander | 2
90Y8648 | IBM 128GB SATA 2.5in MLC HS Enterprise Value SSD | 4
49Y5844 | IBM 512GB SATA 2.5in MLC HS Enterprise Value SSD | 10
81Y4559 | ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade for IBM System x | 2
90Y4273 | ServeRAID M5100 Series SSD Performance Key for IBM System x | 2
42D0510 | QLogic 8Gb FC Dual-port HBA for IBM System x | 4
46M0907 | IBM 6Gb SAS HBA | 4
69Y5321 | x3650 M4 PCIe Gen-III Riser Card 2 (1 x8 FH/FL + 2 x8 FH/HL Slots) | 2
90Y6456 | Emulex Dual Port 10GbE SFP+ Embedded VFA III for IBM System x | 2
1
94Y6669
00FF247
69Y1194
46M0901
90Y3901
46C3447
2072L2C
00Y2475
00Y2491
00Y2465
00Y2461
00AR088
2072LEU
2
2
IBM System x 750W High Efficiency Platinum AC Power Supply
Windows Server 2012 R2 Standard ROK (2CPU/2VMs) - MultiLang
x3650 M4 ODD Cable
2
IBM UltraSlim Enhanced SATA DVD-ROM
IBM Integrated Management Module Advanced Upgrade
IBM SFP+ SR Transceiver
V3700
IBM Storwize V3700 LFF Dual Control Enclosure
IBM 4TB 3.5in HS 7.2K 6Gbps SAS NL HDD
8Gb FC 4 Port Host Interface Card
0.6m SAS Cable (mSAS HD to mSAS HD)
1.5m SAS Cable (mSAS HD to mSAS)
2
2
4
1
2
48
4
4
8
5m Fiber Cable (LC)
8
IBM Storwize V3700 LFF Expansion Enclosure
2
E.9 Network Switching

Part No | Description | Qty

10GbE Switch
730964F | IBM System Networking RackSwitch G8264 (front to rear) | 1
90Y3521 | 30m IBM QSFP+ MTP Optical cable | 2
39Y7937 | 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable | 2
49Y7884 | IBM QSFP+ 40GBASE-SR4 Transceiver | 2
46C3447 | IBM SFP+ SR Transceiver | 144
1GbE Switch
730952F | IBM System Networking RackSwitch G8052 (front to rear) | 1
46C3447 | IBM SFP+ SR Transceiver | 2
E.10 Racking

Part No | Description | Qty
93634PX | IBM 42U 1100mm Enterprise V2 Dynamic Rack | 1
46M4137 | IBM 0U 12 C19/12 C13 Switched and Monitored 32A 3 Phase PDU | 4
Appendix F - Site Information
F.1 Production – Primary Site
ACF Building,
Edinburgh Technopole,
Bush Estate
Penicuik
Midlothian
EH26 0QA
F.2 DR Sites
F.2.1 DR Site 1
College of Life Sciences
University of Dundee
Wellcome Trust Biocentre
Dow Street
Dundee,
Tayside
DD1 5EH
F.2.2 DR Site 2
Jacqui Wood Cancer Centre,
University of Dundee,
George Pirie Way
Dundee
Scotland
DD1 9SY