LIFT_00062_Final_Report_Part_2b_RCOP

Prepared for
Department of Homeland Security
Contract Number HSHQDC-10-C-00062
Contracting Officer: Sharon Flowers 202.254.6816 sharon.flowers1@dhs.gov
COTR: Christine Lee 202.254.6397 christine.lee@dhs.gov
HomeTech Pilot Program
Final Report: December 21, 2011
Milestone Event 0001ah A01-03-12-17
Part 2b
Project 1, Phase 2: Conceptual Approach to Regional Pilot – Demonstration
Report Date: December 21, 2011
Long Island Forum for Technology (LIFT)
LIFT Program & Administrative POC: Lois Gréaux 631.846.2742 lgreaux@lift.org
LIFT Technical POC: Richard Rotanz 516.390.5201 rrotanz@aschs.org
Period of Performance: July 2, 2010 – January 1, 2012
Form Approved
OMB NO. 0704-0188
REPORT DOCUMENTATION PAGE
Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget, Paperwork Reduction Project (0704-0188), Washington, DC 20503.
1. AGENCY USE ONLY (Leave Blank)
2. REPORT DATE: December 21, 2011
3. REPORT TYPE AND DATES COVERED: Final Report, August 01, 2010 through July 31, 2011
4. TITLE AND SUBTITLE: Project 1 Phase 2: Conceptual Approach to Regional Pilot – Demonstration (Regional Common Operating Picture – RCOP)
5. FUNDING NUMBERS: Contract No. HSHQDC-10-C-00062
6. AUTHOR(S): Lois Gréaux, Richard Balfour, and Robert Balfour
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES):
Long Island Forum for Technology, 510 Grumman Road West, Suite 201, Bethpage, New York 11714
Balfour Technologies (VCORE Solutions), 510 Grumman Road West, Suite 212, Bethpage, New York 11714
8. PERFORMING ORGANIZATION REPORT NUMBER: A01-03-12-17 Part 2b
9. SPONSORING / MONITORING AGENCY NAME(S) AND ADDRESS(ES): Department of Homeland Security Science and Technology (DHS S&T)
10. SPONSORING / MONITORING AGENCY REPORT NUMBER: Contract No. HSHQDC-10-C-00062; Milestone: 0001s, 0001ac, 0001ag, 0001ah
11. SUPPLEMENTARY NOTES
The views, opinions and/or findings contained in this report are those of the author(s) and should not be construed as an official Department of Homeland Security,
Science & Technology position, policy or decision, unless so designated by other documentation.
12a. DISTRIBUTION / AVAILABILITY STATEMENT: Distribution limited to U.S. Government agencies only.
12b. DISTRIBUTION CODE
13. ABSTRACT (Maximum 200 words)
This is the final report of the Regional Common Operating Picture (RCOP), implemented by Balfour Technologies in partnership with the Long Island Forum for Technology (LIFT) and the Applied Science Foundation for Homeland Security (ASFHS), and sponsored by the U.S. Dept. of Homeland Security Science & Technology Directorate (DHS S&T). RCOP demonstrates deployable C4I Common Operating Picture (COP) capabilities and processes for the downstate Metropolitan NY and L.I. Region that effectively manage, correlate and share information from many diverse systems and sensor sources across local, state and federal first responder agencies. RCOP utilizes existing DHS S&T SBIR-sponsored applied technology (based on fourDscape® from Balfour Technologies) to provide an intuitive, improved and timely decision-making tool available to a broad array of users during an event. It augments first responder capability through the timely availability of actionable information gathered from disparate information sources during a crisis or disaster.
14. SUBJECT TERMS: Situational awareness; common operating picture; geographic visualization; visualization; visual analytics; C4I; information sharing; object analytics
15. NUMBER OF PAGES: 97 (includes cover sheet, form 298, and appendices)
16. PRICE CODE
17. SECURITY CLASSIFICATION OF REPORT: UNCLASSIFIED
18. SECURITY CLASSIFICATION OF THIS PAGE: UNCLASSIFIED
19. SECURITY CLASSIFICATION OF ABSTRACT: UNCLASSIFIED
20. LIMITATION OF ABSTRACT: UL
NSN 7540-01-280-5500
Standard Form 298 (Rev. 2-89), Prescribed by ANSI Std. Z39-18
RCOP
Project 1, Phase 2: A Regional Common Operating Picture
Initial Regional Demonstration for Enhanced Situation Awareness and
Information Sharing among First Responders
FINAL REPORT
Prepared by: Richard Balfour, President
Balfour Technologies / VCORE Solutions
Morrelly Homeland Security Center
510 Grumman Rd. West, Bethpage, NY 11714
rich@BAL4.com
For:
Lois Gréaux, Program Manager
Long Island Forum for Technology
Applied Science Foundation for Homeland Security
510 Grumman Rd. West, Bethpage, NY 11714
lgreaux@asfhs.org
For:
Christine Lee, Program Manager
Department of Homeland Security
Science & Technology Directorate
1120 Vermont Ave, NW, Washington, D.C.
christine.lee@dhs.gov
TABLE OF CONTENTS
PAGE
RCOP – A Regional Common Operating Picture ……………………………………………. 6
Integration with ASFHS COP BaseStation ………………………………….………………. 10
Regional Data ………………………………………………………….…………….. 10
Physical Asset Database / Facility Model ………………………...….……….……… 11
Regional COP Common View ………………….………………….………….……... 12
Regional COP Cyber Component ……………………………………..……………... 18
Integration with ASFHS Mobile Technology …………………….................…..…………… 19
tailgatER Display ………………………………………………………..…………… 19
tailgatER Portable Sensors & Devices …………………………………………..…… 20
tailgatER Communications ……………………………………………………..……. 21
tailgatER Deployment ……………………………………….……………….….…… 21
Production Methods & Deployment Processes ……………………………………….….…... 23
Imagery Acquisition & Correlation………...………………………………….….…... 23
Sensors Acquisition & Correlation …………...….……………………………..…….. 23
Network Architectures ……………………………………………………….….……. 24
Operational Training Methods ………………………………………………..………. 24
Delivery Methods …………………………………………………….………………. 25
Training / Operations -- A Successful Partnership with DHS S&T ………….……….……… 26
RCOP Project Follow-on Initiatives ………………………………………………….………. 28
Utilize RCOP Operational Technology Testbed ……………………………….…….. 28
Expand RCOP Core Technology Implementation …………………………..……….. 29
Expand RCOP in Tri-State Metro Region, with National Rollout Plan ……..……….. 30
Operational Exercises & Demonstrations – “powered by RCOP” ………..………….. 30
Current RCOP Commercialization Opportunities ……………………………….…………… 31
APPENDIX A – RCOP Imagery Production Methods & Processes ………………….……… 34
APPENDIX B – RCOP Sensors Production Methods & Processes ………………………….. 42
APPENDIX C – RCOP Networks Production Methods & Processes ……...………………… 51
APPENDIX D – RCOP Operational Training Production Methods & Processes …….....…… 59
APPENDIX E – RCOP Delivery Production Methods & Processes …..................................... 66
APPENDIX F – User Guide: RCOP fourDscape® Portal ……………………………………. 78
APPENDIX G – National Roll-Out Plan / RCOP Commercialization ….………………….... 91
RCOP -- A Regional Common Operating Picture
Initial Regional Demonstration for Enhanced Situation Awareness and Information
Sharing among First Responders
The Regional Common Operating Picture (RCOP), implemented by Balfour Technologies in partnership with the Long Island Forum for Technology (LIFT) and the Applied Science Foundation for Homeland Security (ASFHS), and sponsored by the U.S. Dept. of Homeland Security Science & Technology Directorate (DHS S&T), demonstrates deployable C4I Common Operating Picture (COP) capabilities and processes for the downstate Metropolitan NY and L.I. Region that effectively manage, correlate and share information from many diverse systems and sensor sources across local, state and federal first responder agencies. RCOP utilizes existing DHS S&T SBIR-sponsored applied technology (based on fourDscape® from Balfour Technologies) to provide an intuitive, improved and timely decision-making tool available to a broad array of users during an event. It augments first responder capability through the timely availability of actionable information gathered from disparate information sources during a crisis or disaster. The primary deliverable capabilities of this RCOP project were:

 Demonstration and initial deployment of C4I COP capabilities at the ASFHS facility (at the Morrelly Homeland Security Center in Bethpage, NY), implemented from DHS S&T-sponsored innovative research, and integrating geospatial technologies, physical asset inventories, cyber network analysis, and correlated real-time data sources to provide enhanced situation awareness and information sharing for first responder capabilities;

 Integration with an operational C4I COP BaseStation at the ASFHS and a mobile tailgatER deployment that includes data acquisition and correlation from disparate live and archived data sources, sharing regional information datasets and distributing realtime surveillance information across the first-responder community. This also provides an operational test bed at the ASFHS facility for future DHS S&T / First Responder capabilities development and field testing;

 Repeatable methods and processes for C4I COP Imagery, Sensors, Information Networks, Training and Delivery components that can be utilized to reproduce an operational RCOP deployment for many cities/regions nationwide at the federal, state and local levels; and

 A tailgatER Remote Demonstration Unit delivered to DHS S&T, configured as a fourDscape®-based “Remote Browser” on a ruggedized, multi-cpu MobileDemand laptop/tablet unit (with touchscreen display) that connects back to the ASFHS BaseStation for ‘live’ demonstrations of the RCOP / tailgatER mobile capabilities.
A two-page RCOP flyer has been produced (on the following pages) that summarizes the RCOP
goals, problem/solution, primary capabilities, and key aspects of a transition / commercialization
plan, which is the primary focus of this RCOP demonstration project – delivering a deployable
solution for enhanced situation awareness for emergency managers and first responders at all
levels, nationwide (see APPENDIX G – National Roll-Out Plan / RCOP Commercialization).
The remainder of this Final Report document covers sections that focus on many of the key
aspects of the RCOP demonstration project, including
…. what was done:
 Integration with ASFHS COP BaseStation
 Integration with ASFHS Mobile Technology
…. how it was done:
 Production Methods & Deployment Processes
(with detailed Reports on Imagery, Sensors, Networks, Operational Training, and
Delivery Methods provided in APPENDICES A-thru-E)
…. why it succeeded:
 Training / Operations – A Successful Partnership with DHS S&T
(with an RCOP fourDscape® User Guide provided in APPENDIX F)
And what’s next……
 RCOP Project Follow-on Initiatives
 Current RCOP Commercialization Opportunities
(Plus a separate National Roll-Out Plan / RCOP Commercialization report included
here as APPENDIX G)
Integration with ASFHS COP BaseStation
The primary implementation of the fourDscape®-based RCOP demonstration system revolves
around the capability of the ASFHS facility (at the Morrelly Homeland Security Center, in
Bethpage, NY) to serve as the RCOP BaseStation and Command Center. This facility provides a
large video wall and projection display capability, along with large video monitors and an
analyst table with multiple displays with which to view and interact with the RCOP system.
Multiple fourDscape® browsers can be operating simultaneously, viewed on different displays
within the ASFHS Command Center, all interacting with the same RCOP fourDscape® portal
contents that represent the available Regional Data (including imagery, GIS, surveillance
cameras, tracking and other sensors) and Physical Asset Databases (including 3D building
models, facility floor plans, 360deg. video and photo surveys). This RCOP Common View can
be accessed locally in the ASFHS Command Center, as well as remotely from laptops with a
broadband internet or 4G-LTE connection back to the ASFHS Command Center.
REGIONAL DATA
The following elements are currently integrated into the RCOP visual browser interface and
data/sensor server engines, focusing on available imagery / GIS and regional camera feeds,
tracking and other sensors, to create a baseline RCOP portal for L.I./NYC and also a sample
Remote Portal for Baltimore Harbor:
 IMAGERY (seamlessly ‘joined’ when served in the fourDscape® browser)
a) Nassau County 2004 imagery; then updated with current 2007 imagery from NYS GIS Clearinghouse (6” resolution full-color orthophotos)
b) Suffolk County 2007 imagery from NYS GIS Clearinghouse (6” resolution full-color orthophotos), first processed as a single 1m-resolution full-county image, then updated as 6” resolution segments for western, central and eastern Suffolk.
c) NYC (all five boroughs) 2004 6” resolution full-color orthophotos.
d) NYC (all five boroughs) 2004 4” resolution Pictometry neighborhood obliques full-color image warehouse.
e) Port of Baltimore (Inner Harbor) – 3” resolution ortho and oblique imagery from
FugroEarthdata, set up as a separate fourDscape® portal and intended to be served
remotely.
 GIS DATASETS (fully geo-registered with the 6” orthophotos, displayed automatically with mouse-over or mouse selection)
a) Nassau County – all streets and parcels with names (owner) and addresses
b) Suffolk County – all streets and parcels with names (owner) and addresses
c) NYC (all five boroughs) – all streets and parcels with names (owner) and addresses
d) NYS (SEMO) SLOSH Model flood zones -- transparent overlay for the entire region of Nassau / Suffolk / NYC (and points north up the Hudson & LI Sound)
e) Town / Borough Borders (and names for Nassau / Suffolk / NYC)
f) ‘live’ Weather Radar overlay for NYC, Nassau & Suffolk Counties from the
National Weather Service
 LIVE SURVEILLANCE CAMERAS (geo-registered in the 3D scene; automatically
presented to the user through the fourDscape®-based RCOP browser by simply ‘looking’
in that direction of the 3D scene)
a) NYSDOT L.I. Inform Traffic Cameras – 166 live camera feeds for major highways
and secondary turnpikes throughout Nassau, Suffolk and Queens Counties; thru a
web-based connection to their Axis video servers.
b) Bethpage School District – 20 live camera feeds across 5 building campuses (all
exterior cameras); thru a VPN connection to the BHS network.
c) Nassau County Police – currently a single live feed from an overt camera location at
a major Bethpage intersection, thru a direct connection to the NCPD camera server
(more can be added per NCPD control).
d) NYCDOT Traffic Cams – 100 live camera views of traffic on major NYC
roadways; connected thru NYCDOT public camera website.
e) Thomson Reuters (Times Square) – security camera locations on all interior floors;
currently no live connections to the camera feeds, but demonstrates how remote
access can be setup for emergency or backup operations.
f) Private Sector Security Cameras -- exterior building views of a local business vehicle dispatch area, demonstrating simple / direct access to private security systems (also includes GPS vehicle tracking).
g) Maryland DOT – Baltimore area live traffic cameras for major roadways, demonstrating remote portal connectivity to the sample Baltimore 4Dportal.
 TRACKING & OTHER SENSORS
a) GPS – from tailgatER, other vehicle trackers (local private business), registered
smartphones, handhelds, etc., as available.
b) Mobile Mesh – deployed locally in the community to demonstrate multi-sensor/data
connectivity for emergency vehicles and mobile devices; tracking locations and
delivering ‘live’ mobile video feeds.
c) Weather Buoys / Stations – local temperature, winds, humidity, wave heights, storm
surge, etc.
d) Airport Delays – status of arrivals / departures for major area airports (web-based
access)
e) Incident Alerts – ‘live’ incident messages provided from Breaking News Network
(email / text messages) for L.I./NYC, geospatially located in the RCOP scene;
selectable by the user for current alert info / history.
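To make the incident-alert ingestion concrete: Breaking News Network messages arrive as free-form email/text and must be parsed and geospatially located before they can be rendered in the RCOP scene. The actual ingestion code is not published in this report, so the following Python sketch is purely illustrative; the message format, the parse_alert function and the small gazetteer are assumptions, and a real deployment would geocode against the county street/parcel GIS layers already loaded into RCOP.

import re
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical gazetteer: place name -> (lat, lon). A real pipeline would
# geocode against the Nassau/Suffolk/NYC street and parcel GIS layers.
GAZETTEER = {
    "BETHPAGE": (40.7443, -73.4820),
    "HICKSVILLE": (40.7684, -73.5251),
}

@dataclass
class IncidentAlert:
    received: datetime
    category: str
    location_name: str
    lat: float
    lon: float
    details: str

def parse_alert(message: str) -> Optional[IncidentAlert]:
    """Parse a hypothetical 'CATEGORY @ PLACE: details' alert message."""
    m = re.match(r"(?P<cat>[A-Z ]+) @ (?P<place>[A-Z ]+):\s*(?P<details>.+)", message)
    if not m:
        return None
    place = m.group("place").strip()
    coords = GAZETTEER.get(place)
    if coords is None:
        return None  # cannot geolocate automatically; would fall back to manual placement
    return IncidentAlert(datetime.now(timezone.utc), m.group("cat").strip(),
                         place, coords[0], coords[1], m.group("details").strip())

print(parse_alert("STRUCTURE FIRE @ BETHPAGE: second alarm, Broadway and Powell Ave"))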
PHYSICAL ASSET DATABASE / FACILITY MODEL
In addition to the above Regional Datasets, the RCOP demonstration portal is also populated
with the following 3D Building / Facility elements that are currently integrated into the RCOP
visual browser interface and server engines, focusing on specific locations / facilities. This
demonstrates how the facility / building details serve to enhance the overall situation awareness
for the baseline L.I./NYC RCOP portal and also the sample Remote Portal for Baltimore Harbor:
 3D BUILDING MODELS (fully interactive and geo-registered in the 3D scene; and can
be vertically ‘stretched’ to get into each individual floorplan, as available)
a) Jamaica Station – detailed interior and exterior 3D model (platforms, stairs,
escalators, elevators, columns, roofs, etc.), including recorded cameras for interior
views of the multi-level station (useful for producing a simulated training
environment);
b) Bethpage High School and houses in the surrounding area (with actual geometry and
photo textures) – with detailed interior floor plans and an integrated facility (photos)
database for the High School building
c) Complete ASFHS 90,000 sq.ft. building model to demonstrate facility integration,
including building structure (beams & columns), HVAC mechanicals,
walls/doors/ceiling/finish, integrated 360deg. video walkthrough, and ‘live’
surveillance cameras (both interior & exterior).
d) Midtown Manhattan – FugroEarthdata (FEDI) SymMetry models (detailed 3D
geometry; no photo textures) for the area surrounding Times Square
e) Thomson Reuters Building (3 Times Square) – with detailed interior floor plans for
16 floors of the 30+ story building
f) Port of Baltimore (Inner Harbor) – about 50 buildings in and around the Inner
Harbor (with actual geometry and photo textures) built from 3” resolution oblique
imagery from FugroEarthdata
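The link between floor-plan geometry and the facility database can be pictured with a simple data structure. The sketch below is not the RCOP schema (which this report does not document); the class names, the per-floor 'exploded view' offset helper and the sample records are illustrative assumptions only.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RoomRecord:
    """Facility-database entry tied to one room in a floor-plan model."""
    room_id: str
    name: str
    photos: List[str] = field(default_factory=list)    # paths to interior photo survey
    contacts: List[str] = field(default_factory=list)  # e.g. custodian, security desk

@dataclass
class Floor:
    level: int
    base_elevation_m: float
    rooms: Dict[str, RoomRecord] = field(default_factory=dict)

@dataclass
class Building:
    name: str
    lat: float
    lon: float
    floors: List[Floor] = field(default_factory=list)

    def exploded_offsets(self, gap_m: float = 6.0) -> Dict[int, float]:
        """Vertical offset per floor for an 'exploded' (stretched-apart) view."""
        return {fl.level: fl.base_elevation_m + i * gap_m
                for i, fl in enumerate(sorted(self.floors, key=lambda f: f.level))}

# Example: two-floor school building with one catalogued room
school = Building("Bethpage High School", 40.7466, -73.4835, [
    Floor(1, 0.0, {"101": RoomRecord("101", "Main Office",
                                     contacts=["security desk x100"])}),
    Floor(2, 4.0),
])
print(school.exploded_offsets())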
REGIONAL COP COMMON VIEW
This is the primary purpose of the Regional COP (RCOP) visualization: to provide a user-defined interactive view into the available regional datasets and live sensor feeds – where all
information is presented together in the same virtual scene, accessed by many simultaneous
users; with each user viewing the data/sensors from different perspectives and from different
locations, either locally or remotely thru broadband internet or 4G connections. The RCOP
fourDscape®-based interactive browser provides the capability for each user to independently
select the desired area / data / sensor of interest at any point in time, from any viewing
perspective. Each user view is completely interactive and independent, all derived from the
same RCOP common view available for all integrated data and live sensor feeds. Specific views
can range from overall county or city views, down to detailed interior exploded views for
individual buildings – all using the same RCOP fourDscape®-based visual browser running on
anything from high-end graphics workstations (for a Command Center display) to a standard laptop (for mobile connectivity back to the Command Center) or even ruggedized handheld
tablets (for the first responder).
The best way to ‘see’ RCOP is to see RCOP ‘live’, through a common view interactive remote
connection to the operational L.I./NYC portal available 24/7 at the ASFHS Command Center in
Bethpage, NY. [These live demonstrations of what’s possible are available through Balfour
Technologies, or LIFT / ASFHS, or DHS S&T, or various active commercialization partners
throughout the country (see cover page contacts for more info).] RCOP is currently deployed
and operational at the Morrelly Homeland Security Center (in Bethpage, NY) utilizing the
existing assets of the ASFHS command center facility, running on a multi-cpu, multi-channel
graphics workstation to drive an interactive video wall and large screen projection, and acting as
a BaseStation for many tailgatER-type direct network connections for producing the same RCOP
common views remotely on deployed laptops or portable workstations. In addition, RCOP
servers are deployed in the ASFHS Data Center facility that support RCOP remote browser
connections that are ‘served-everything’, without any local datasets – this provides the capability
to connect the same remote browser (on a laptop or handheld tablet) to many different RCOP
portals for different locations around the country; a critical deployment feature for rolling out
this RCOP capability nationwide. The images on the following pages provide a sample of the
types of RCOP visualization capabilities available thru various RCOP common view methods.
The image below depicts a county-wide interactive RCOP view that includes high-resolution
aerial imagery instrumented with town boundaries, street names and parcel GIS datasets and
hurricane flood zone overlays, with hundreds of live traffic cameras positioned throughout the
county and live views of local school campuses with surveillance cameras.
For the local high school campus, RCOP visualization includes 3D floor plan models derived from 2D CAD
drawings (see the image below) and geospatially located within the correlated RCOP scene.
These interactive 3D floor models are spatially connected directly to a facility database that
identifies in realtime the location of specific rooms within the building, as well as any available
information about those locations (i.e., interior photo survey, contact info, etc.) based on user
selection. The live video feeds from the local camera system are accessed via a VPN network
connection from the ASFHS facility, and can be controlled (pan, tilt, zoom) through the RCOP
view as well as selectively recorded (to a Steelbox video archive system) and replayed under
user control.
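Pan/tilt/zoom control of a remote camera through the RCOP view ultimately reduces to sending a command to the camera (or its video server) over the VPN. The exact interface is vendor-specific and not documented in this report, so the endpoint path, parameter names and credentials in the sketch below (which uses the widely available Python requests library) are placeholders rather than the actual Axis or Steelbox APIs.

import requests
from requests.auth import HTTPDigestAuth

def send_ptz(camera_host: str, pan: float, tilt: float, zoom: float,
             user: str, password: str, timeout: float = 2.0) -> bool:
    """Issue a relative pan/tilt/zoom command to a networked camera.

    The endpoint path and parameter names below are placeholders; real
    cameras expose vendor-specific CGI or ONVIF interfaces, and the VPN
    routing to the school-district network is assumed to be in place.
    """
    url = f"http://{camera_host}/ptz/control"      # hypothetical endpoint
    params = {"rpan": pan, "rtilt": tilt, "rzoom": zoom}
    try:
        resp = requests.get(url, params=params,
                            auth=HTTPDigestAuth(user, password),
                            timeout=timeout)
        return resp.status_code == 200
    except requests.RequestException:
        return False

# Example: nudge a camera 5 degrees right when the operator drags the view
# send_ptz("10.20.30.40", pan=5.0, tilt=0.0, zoom=0.0, user="viewer", password="example")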
Interactive building floor models can be highly detailed (including furnishings and finishes) to
accurately represent building interiors and exteriors in the RCOP virtual scene, and then
‘virtually’ stretched apart under user control (as shown in the image below) to see inside each
floor for more information and access to interior live camera views and other sensors.
Connection to the building facility database can also provide interior building walkthroughs
(shown as the ‘purple line’ paths within the floor models in the image below). Selecting a path
will provide a user-controlled 360deg. video walkthrough of that location, presented in a separate
camera frame, plus a yellow camera icon correlating the view location within the 3D floor
models. Any type of building infrastructure data or sensors can be integrated into the RCOP
visual scene to enhance realtime situation awareness for emergency response.
GPS locations are represented by vertical bars geospatially located within the RCOP virtual
scene (as shown in the images above and below) and can optionally include history tracks
depicting previous recent locations or current travel motion. These live tracking feeds can be
from emergency response vehicles (i.e., the ASFHS tailgatER SUV), or personnel tracks from
GPS enabled smart phones or other handheld devices. RCOP at the ASFHS also integrates live
fleet tracking for a local private sector firm, which includes detailed information about the driver, idle status or current speed, time stopped at a location, etc., available to the user by a simple mouse-over on the current GPS track location within the visual scene. This provides RCOP with
a common view of all available GPS-located assets.
The image below also demonstrates the integration of a third-party video analytics algorithm that
stitches together four overlapping camera views into one panoramic view and tracks all moving
objects, in realtime. The image below shows the analytics results geospatially embedded within
the visual scene (tracking individual pedestrians walking across a parking lot), with an individual
camera view (third one down on the left) zoomed in and tracking a particular user-selected object
(the person to the far right in the scene). The RCOP fourDscape®-based open architecture can support many other analytics algorithms operating in real time on video feeds or other sensor data, generating alerts when user-identified conditions are detected.
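The open-architecture claim can be illustrated with a minimal analytics plug-in contract: the module consumes tracked-object updates and pushes alerts back into the scene. The class, record types and loitering rule below are hypothetical; the actual fourDscape® analytics interface is not described in this report.

from dataclasses import dataclass
from typing import Callable, Dict, Iterable, List, Tuple

@dataclass
class ObjectTrack:
    track_id: int
    lat: float
    lon: float

@dataclass
class Alert:
    message: str
    lat: float
    lon: float

AlertSink = Callable[[Alert], None]

class LoiteringDetector:
    """Toy analytics plug-in: raises an alert when a tracked object stays put.

    A production analytics module would consume stitched video frames; this
    sketch operates on already-extracted object tracks to keep it short.
    """
    def __init__(self, sink: AlertSink, updates_in_place: int = 10):
        self.sink = sink
        self.updates_in_place = updates_in_place
        self._seen: Dict[int, Tuple[Tuple[float, float], int]] = {}

    def update(self, tracks: Iterable[ObjectTrack]) -> None:
        for t in tracks:
            pos = (round(t.lat, 5), round(t.lon, 5))
            last_pos, count = self._seen.get(t.track_id, (pos, 0))
            count = count + 1 if pos == last_pos else 1
            self._seen[t.track_id] = (pos, count)
            if count == self.updates_in_place:
                self.sink(Alert(f"object {t.track_id} loitering", t.lat, t.lon))

alerts: List[Alert] = []
detector = LoiteringDetector(alerts.append, updates_in_place=3)
for _ in range(3):
    detector.update([ObjectTrack(7, 40.74658, -73.48352)])
print(alerts)  # one loitering alert after three unchanged position reports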
Alerts are depicted in the RCOP visual scene as flashing vertical bars (larger than GPS tracks),
located at the position of the reported incident. These alerts can be generated external to the
RCOP system (i.e., from email text parsed and geospatially located; or NOAA weather alerts; or
directly from other Incident Command software systems) or generated internally by sensor
monitoring (i.e., user-defined thresholds) or object analytics. Alert information (and history) is
available textually (by a simple mouse-over) and updated in realtime as incident reports come in.
The fourDscape®-based RCOP visualization can include many other available features, such as
reports from weather buoys and storm-surge monitors, weather radar overlays, integrated
building systems (HVAC, CBRNE sensors, etc.), maritime systems, radar tracks, county-wide
parcel databases, etc., and can be integrated with data provided from many other systems for
police, fire, maritime and airport applications. Again, the key element is the fourDscape®-based
open architecture, which is essentially a Resource-Oriented Architecture (ROA) that provides the
flexibility to be deployed and integrated into almost any existing environment.
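The term Resource-Oriented Architecture implies that individual data elements (a camera, a GPS track, a weather buoy reading) are addressable resources that other systems can fetch. The standard-library sketch below shows the general idea only; the URL layout, payload fields and port are invented for illustration and do not reflect the actual fourDscape® service interfaces.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory store of latest observations, keyed by resource path.
OBSERVATIONS = {
    "/sensors/weather/buoy-44025": {"wave_height_m": 1.2, "wind_kts": 14},
    "/sensors/gps/tailgater-suv": {"lat": 40.7466, "lon": -73.4835},
}

class ResourceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        obs = OBSERVATIONS.get(self.path)
        if obs is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(obs).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any RCOP browser or external system could then GET these resource URLs.
    HTTPServer(("127.0.0.1", 8080), ResourceHandler).serve_forever()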
RCOP also can provide realtime fully-interactive 3D cityscapes, with high-resolution terrain and
building models integrated as the visual RCOP basemap (see 3D model sample of the Baltimore
Inner Harbor in the image below), or a highly detailed model of a defined perimeter space such
as a port, rail terminal, or other infrastructure facilities (i.e., the highly-detailed interactive airport model shown below).
RCOP basemap city models can be generated from currently available 3D models for all major
cities, such as Boston, New York, Philadelphia, Miami, Fort Lauderdale, Tampa, Atlanta,
Chicago, Charlotte, Indianapolis, Toronto, Montreal, Quebec, Buffalo, St Louis, New Orleans,
Dallas, Houston, Denver, Las Vegas, San Diego, Los Angeles, Long Beach, San Francisco, etc.
This provides a solid foundation for a national rollout plan that can readily produce RCOP
portals for many important regions throughout the country.
REGIONAL COP CYBER COMPONENT
Visualizing the network architecture is another aspect that can be included within the scope of
the Regional Common Operating Picture (see sample image below). This would plug into
Layer4 of the fourDscape® architecture (Object Analytics) as an external service, depicting the
network configuration as the ‘object’ and visually presenting the connectivity and monitoring the
network health and activity, while leveraging the existing fourDscape®-based RCOP
visualization capabilities for 'exploded views' (like building internals), representing the active
deployed network features with key elements tethered to geo-specific reference points. This
layer can then also leverage the existing fourDscape®-based RCOP capabilities to drill-down
into available realtime data layers for specific network elements.
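One way to picture this cyber layer is a set of node records, each tethered to a geographic reference point and periodically health-checked, which the visualization then recolors. The sketch below is a rough illustration under that assumption; the record fields are invented, and the reachability probe uses Linux 'ping' flags rather than any RCOP-specific monitoring interface.

import subprocess
from dataclasses import dataclass
from typing import List

@dataclass
class NetworkNode:
    """One network element, tethered to a geographic reference point."""
    name: str
    address: str
    lat: float
    lon: float
    healthy: bool = True

def check_reachability(node: NetworkNode, timeout_s: int = 1) -> bool:
    """Very coarse health probe: one ICMP echo (Linux 'ping' flags assumed)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), node.address],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def refresh(nodes: List[NetworkNode]) -> None:
    """Update health flags; the visualization layer would recolor each
    geo-tethered node and its links based on these flags."""
    for n in nodes:
        n.healthy = check_reachability(n)

nodes = [NetworkNode("ASFHS core switch", "10.0.0.1", 40.7466, -73.4835)]
refresh(nodes)
print([(n.name, n.healthy) for n in nodes])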
Integration with ASFHS tailgatER Mobile Technology
Although the primary RCOP deployment at the ASFHS was for Command Center operations, a
key benefit of the fourDscape®-based open architecture is the ability to deploy in many different
configurations and on many levels of deployable assets, from large command center views to the
same interactive view on a portable handheld device that can reach the individual first responder.
The flexibility of RCOP on mobile devices was demonstrated by actively deploying the RCOP
capabilities on the ASFHS tailgatER mobile technology assets, described in the sections below.
TAILGATER DISPLAY
The primary display for the ASFHS tailgatER mobile asset is a 46” touchscreen monitor
(pictured below) that provides the same RCOP common view from the command center,
deployed in the back of an SUV or retrofitted within any mobile emergency vehicle. All RCOP
live sensor feeds are relayed to the tailgatER mobile vehicle upon demand and displayed within
the fully interactive RCOP visual scene on the 46” touchscreen display. [Note: a similar 46”
touchscreen display was delivered to DHS S&T as part of the tailgatER Remote Demonstration
Unit.]
These same tailgatER capabilities can be demonstrated ‘live’ (connected to the ASFHS
Command Center through available broadband internet or 4G channels) on a portable ruggedized
12” laptop or 8” tablet with touchscreen displays (also pictured above). [Note: these portable
laptop/handheld units were also delivered to DHS S&T as part of the tailgatER Remote
Demonstration Unit.]
TAILGATER PORTABLE SENSORS & DEVICES
These ruggedized portable devices (from MobileDemand)
replicate the full RCOP interactive capabilities on a
portable device that can operate anywhere, either
connected wirelessly through a primary tailgatER mobile
vehicle, or independently connected directly back to the
ASFHS Command Center (tethered through a 4G cellular
or broadband internet line).
The tailgatER handheld unit also incorporates an integrated GPS sensor and high-resolution
video camera that provide real time sensor feeds back to the mobile tailgatER vehicle and on to
the ASFHS Command Center. Other mobile sensors can similarly be integrated through a
mobile tailgatER vehicle (i.e., the ASFHS tailgatER has a high-res PTZ video surveillance
camera on a 25’ telescoping pole; or GPS-enabled smart phones tracking tailgatER personnel)
providing ‘live’ sensor feeds from an incident location back to the RCOP Command Center
display. These integrated portable devices provide extended RCOP capabilities through a wide range of potential deployments.
TAILGATER COMMUNICATIONS
Any type of device connection back to the
ASFHS Command Center will provide
tailgatER with all available ‘live’ sensor feeds
from the ASFHS. This can be wireless
broadband, 3G, 4G or satellite capabilities
(ASFHS Command Center dish pictured to the
right), which have all been tested with the
ASFHS tailgatER SUV operational
deployment. The tailgatER Remote
Demonstration Unit (RDU) delivered to DHS
S&T was demonstrated operational with a real time data link back to the ASFHS Command
Center, tethered wirelessly through a cellular phone 4G connection. Remote laptop
demonstrations of full RCOP capabilities are regularly connected to the ASFHS Command
Center through standard 4G air cards, or ‘guest’ internet broadband connections, from anywhere
in the world.
TAILGATER DEPLOYMENT
With the RCOP system fully-deployed in the ASFHS Command Center, active remote
demonstrations of RCOP tailgatER capabilities have been given to various emergency response,
law enforcement and transportation security agencies utilizing the fully operational ASFHS
tailgatER SUV – demonstrating RCOP capabilities in a fully mobile environment (see images
below). This actively exercises a key benefit of the fourDscape®-based RCOP resource-oriented architecture: the ability to deploy in many different configurations and on many levels of deployable assets, a flexibility that will enable an RCOP rollout for many agencies in many regions across the country.
[ASFHS tailgatER shown above with RCOP deployed in the back of an SUV.]
Production Methods & Deployment Processes
The following production methods and deployment processes have been implemented at the
Applied Science Foundation for Homeland Security (ASFHS – at the Morrelly Homeland
Security Center in Bethpage, NY) to produce a Regional COP (RCOP) for the L.I./NYC area,
and are repeatable elements in producing a Common Operating Picture for any city or region or
state. By documenting these specific methods while producing the initial RCOP demonstration
and deployment at the ASFHS, significant savings of time and dollars should be realized when
re-producing these RCOP capabilities for other city/regional/state command centers.
 Imagery & GIS
 Sensors
 Networks
 Delivery Methods
 Operational training
Detailed reports have been delivered for each of these topics (included in this Final Report as
APPENDICES A-thru-E), and are summarized below:
Imagery Acquisition & Correlation – (see APPENDIX A for detailed report)
Visual content for the Regional COP begins with high-resolution ortho and oblique imagery
(typically available directly from the state or local municipality) that is seamlessly integrated for
interactive fly-through in a desktop browser, and geo-registered with relevant GIS shapefiles and
3D models for creating visual geospatial awareness for the city/region. The methods and
procedures for this process can be standardized and semi-automated for importing these common
datasets into the RCOP system and making them readily available in a shared information
environment, independent of the specific visual content for each city/region/state.
These processes have been implemented at the ASFHS facility to create a high-resolution RCOP
basemap (imagery & GIS & 3D building models) covering the L.I./NYC area (all 5 NYC
boroughs plus Nassau & Suffolk Counties, at 6” resolution) that can be accessed locally or
served remotely from the ASFHS Data Center to the fourDscape®-based RCOP visual browsers
on workstations, laptops, tablets or handheld devices.
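The detailed imagery workflow is in APPENDIX A; as a flavor of the kind of preprocessing involved, county orthophotos typically arrive in state-plane coordinates and must be reprojected (and usually tiled) before they can be mosaicked into a seamless basemap. The sketch below uses the GDAL Python bindings for that one step, assuming GDAL is installed; the file names and target resolution are placeholders, and this is not the RCOP production pipeline itself.

from osgeo import gdal

gdal.UseExceptions()

def reproject_ortho(src_path: str, dst_path: str,
                    dst_srs: str = "EPSG:4326", res_deg: float = 0.0000015) -> None:
    """Reproject a county orthophoto mosaic to a common CRS and resolution.

    File names are placeholders; real source imagery (e.g. 6" NYS GIS
    Clearinghouse orthos) ships in state-plane coordinates and is usually
    far too large to warp in one piece without tiling first.
    """
    gdal.Warp(dst_path, src_path,
              dstSRS=dst_srs,
              xRes=res_deg, yRes=res_deg,
              resampleAlg="bilinear",
              creationOptions=["COMPRESS=JPEG", "TILED=YES"])

if __name__ == "__main__":
    reproject_ortho("suffolk_2007_ortho.tif", "suffolk_2007_wgs84.tif")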
Sensors Acquisition & Correlation – (see APPENDIX B for detailed report)
The next step in building the Regional COP is integrating sensor streams (both ‘live’ and
archived, i.e., for video forensics) from a variety of sensor types, geo-registered into the
Regional COP baseline imagery. Although there are many vendors with many different sensor
products (e.g., surveillance cameras, GPS tracking devices, chem./bio sensors, explosives
detectors, etc.) used by many different cities/agencies, many standard protocols have been
adopted for reporting information such as sensor metadata (typically XML-based), surveillance
video (typically MJPEG / MPEG-4 compression), still imagery (JPEG / JPEG2000), GPS
tracking (NMEA reports over CDMA/GSM cell or broadband comm.), and for alerts / alarms /
reports (using common alerting protocols (CAP) and NIMS standard homeland security
symbologies). Methods and processes for re-configurable sensor interfaces have been
implemented at the ASFHS facility as standardized Sensor Management Engine interfaces that
encapsulate these sensor streaming protocols, so that repeatable deployments can readily adapt to
different sensor hardware devices, instead of integrating each sensor supplier’s proprietary
software development kit (SDK) for each unique device. This will produce a major savings in
time, integration, and system maintenance when deploying in different cities/regions using
different sensor providers.
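As an illustration of what one such encapsulated protocol interface might look like, the sketch below parses a standard NMEA $GPRMC sentence into a normalized geo-fix that the scene can place directly. The adapter class name and its 'parse one record' shape are our assumptions, not the actual Sensor Management Engine API; only the NMEA field layout itself is standard.

from dataclasses import dataclass
from typing import Optional

@dataclass
class GeoFix:
    lat: float
    lon: float

def _dm_to_deg(value: str, hemi: str) -> float:
    """Convert NMEA ddmm.mmmm / dddmm.mmmm to signed decimal degrees."""
    dot = value.index(".")
    degrees = float(value[:dot - 2])
    minutes = float(value[dot - 2:])
    deg = degrees + minutes / 60.0
    return -deg if hemi in ("S", "W") else deg

class NmeaGpsAdapter:
    """Adapter that turns raw NMEA $GPRMC sentences into geo-registered fixes.

    Other adapters (MJPEG cameras, CAP alert feeds, ...) would expose the same
    'handle one line/frame -> normalized record' shape; the class name is ours,
    not taken from the RCOP software.
    """
    def parse(self, sentence: str) -> Optional[GeoFix]:
        fields = sentence.strip().split(",")
        if not fields[0].endswith("RMC") or len(fields) < 7 or fields[2] != "A":
            return None  # not an RMC sentence, or no valid fix
        return GeoFix(_dm_to_deg(fields[3], fields[4]),
                      _dm_to_deg(fields[5], fields[6]))

fix = NmeaGpsAdapter().parse(
    "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A")
print(fix)  # GeoFix(lat=48.1173, lon=11.516666...)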
Network Architectures – (see APPENDIX C for detailed report)
The backbone that provides the realtime connectivity to support this RCOP visualization (i.e., realtime imagery, GIS & sensors) is the underlying network architecture.
Understanding the potential network architectures that can be deployed for an RCOP system
across many cities/regions is an important element for effective system performance, as sensor
integration is dependent on reliable networks for delivering both high and low bandwidth data
streams from remote sensors to a Regional Command Center (i.e., over wireless mesh, satellite,
3G/4G cellular, broadband internet, local intranet, analog IP sensor encoders, etc.) and
distributed on-scene to the first-responder community through the tailgatER mobile technology
solution. Implementing an operational RCOP deployment for LI/NYC has identified many of
the issues related to integrating various network architectures to support this sensor integration
and distribution to remote devices.
Operational Training Methods – (see APPENDIX D for detailed report)
Establishing operational training methods and procedures for the initial RCOP deployment at the
ASFHS facility (including the mobile tailgatER capabilities) is based on an understanding of
standard operating procedures within the Emergency Management community, and utilizes the
same deployed operational environment, configured for training, which provides a cost-savings multiplier readily applied across many cities/regions/states. This includes the capability of the
RCOP operational deployment to be utilized as the core visualization for simulated
exercises/HSEEPs, pre-packaging simulated sensor modules that could produce the sensor
streams for pre-planned exercise scenarios.
RCOP’s underlying fourDscape® technology is based on real-time simulation technology
(developers at Balfour have extensive experience building real-time flight/tactics simulators for
the military), and is inherently capable of generating an interactive RCOP from real
sensors/assets, or a simulated RCOP for embedded training using the simulated fourth dimension
of time. Realistic and effective operator training cannot always be achieved using the real-world
subsystems; it is not always feasible to affect the operating HVAC system during a training
exercise, or stage suspicious activity in the lobby for surveillance video feeds, or excite a CBE
sensor with a toxic agent or explosive device. To overcome this, fourDscape® modules can be
integrated with “simulated” generic subsystem modules for both training and testing. For
example, fourDscape® has a simulated Camera Server module, used to replay previously
recorded video loops (which could very well depict staged security events for training purposes).
Like this simulated camera server, other simulated modules for HVAC, alarms, CBE sensors,
etc. can be generated from their respective ICDs. Not only does this produce a realistic and
effective training environment, but also aids in the validation and integration of each subsystem
interface. This inherent ability of fourDscape® to both “operate” and “simulate” gives RCOP
the ability to create very realistic embedded training experiences.
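The simulated Camera Server idea can be sketched as a frame source that replays a recorded loop behind the same interface a live feed would use, so the consuming visualization cannot tell training from operations. The class and file naming below are illustrative assumptions; the actual fourDscape® module interface is not published in this report.

import itertools
import time
from typing import Iterator, List

class RecordedLoopCamera:
    """Replays a recorded frame sequence as if it were a live camera feed.

    'Frames' here are just file paths (they could equally be JPEG byte blobs);
    a live camera adapter would yield frames from a network stream instead, so
    the consuming visualization code cannot tell training from operations.
    """
    def __init__(self, frame_paths: List[str], fps: float = 5.0):
        self.frame_paths = frame_paths
        self.period = 1.0 / fps

    def frames(self) -> Iterator[str]:
        for path in itertools.cycle(self.frame_paths):
            yield path
            time.sleep(self.period)   # pace playback at the recorded rate

# Example: loop a staged "suspicious package" clip during an exercise
loop = RecordedLoopCamera([f"exercise_clip/frame_{i:04d}.jpg" for i in range(120)])
# for frame in loop.frames(): feed_to_rcop_scene(frame)   # hypothetical consumer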
Delivery Methods – (see APPENDIX E for detailed report)
Implementation of an initial LI/NYC RCOP deployment at the ASFHS has provided the
opportunity to evaluate various product delivery methods available for the RCOP solution, from
packing the entire server/browser system on a portable laptop, to running on a high-end desktop
graphics platform, or a configuration based on a rack-mountable 1u server with web-based
access (for the mobile tailgatER display). Understanding the needs of the user community from
the initial L.I./NYC implementation, along with integrating with the ASFHS command center
resources (large screens, policy rooms, etc.) has established effective baseline delivery methods
that can be readily replicated for other cities and regions.
RCOP’s underlying fourDscape® technology provides extensive interactive visualization and
data integration capabilities within an architecture that provides significant flexibility in
delivering specific networked services at many different levels. This readily translates to a wide range of delivery methods, from supporting multi-screen videowalls in command centers, down
to two-way interaction with mobile handheld devices, smartphones, etc., all coming together to
build an effective Common Operating Picture, and sharing the information where needed.
The ASFHS facility provides the resources to implement a Common Operating Picture of an
incident at the command center level (COIN facility), which can then be managed through a
mobile technology tailgatER package to provide the same common view to mobile on-scene
vehicles and then distributed by the tailgatER capabilities to handhelds and smartphones of first
responders, while managing the available network bandwidth and controlling the RCOP views
for each level of device/vehicle/facility display capability. RCOP has created a unique test bed
(at the ASFHS) for delivery methods and processes that enable the distribution of a common
operating picture for local / regional command and control for incident management.
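Managing bandwidth while serving the same common view to videowalls, laptops and handhelds amounts to picking a delivery profile per connection. The report gives no concrete thresholds, so the tiers and numbers in the sketch below are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class DeliveryProfile:
    label: str
    video_streams: int      # simultaneous live camera windows
    video_height_px: int
    serve_3d_models: bool

# Illustrative tiers only; real limits would be tuned per deployment.
PROFILES = [
    (50.0, DeliveryProfile("command-center videowall", 16, 1080, True)),
    (10.0, DeliveryProfile("tailgatER laptop / 46in display", 6, 720, True)),
    (2.0,  DeliveryProfile("ruggedized handheld / smartphone", 2, 360, False)),
]

def select_profile(measured_mbps: float) -> DeliveryProfile:
    """Pick the richest profile the measured downlink can sustain."""
    for min_mbps, profile in PROFILES:
        if measured_mbps >= min_mbps:
            return profile
    return PROFILES[-1][1]   # degrade gracefully to the lightest tier

print(select_profile(12.5).label)   # tailgatER laptop / 46in display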
Training / Operations -- A Successful Partnership with DHS S&T
The RCOP system has been operating 24/7 at the ASFHS facility in the Morrelly Homeland
Security Center since November 2010. Several commercialization partners have been trained on
RCOP operations, with the ability to connect ‘live’ from remote sites to provide fully-featured
RCOP demonstrations from anywhere in the world (through a broadband or 4G internet
connection). And a tailgatER Remote Demonstration Unit was also delivered to DHS S&T
(powered by a portable, ruggedized laptop operating standalone with a 12” touchscreen display,
or attached to a 46” touchscreen HD monitor; plus an 8” rugged handheld display with built-in
camera and GPS), including training for ‘live’ RCOP demonstrations through a broadband or
4G-LTE connection to the ASFHS RCOP system.
A key element that allowed for rapid RCOP prototype implementation at the ASFHS, mitigating
risk and quickly producing an operational baseline RCOP environment, was the use of DHS S&T-sponsored SBIR technology prototypes as the basis for demonstrating and commercializing a Regional COP capability. Under a Phase II SBIR contract from DHS S&T
for Automated Situation Awareness (ASA), Balfour had furthered the development of a
fourDscape®-based ASA system to manage and control large arrays of multi-media surveillance
and tracking sensor suites over large regions and/or confined areas. The scalable fourDscape®-based architecture and natural, visual user interface deliver to the user full contextual and
interactive situation awareness and control, with a modular, network-based technology and
configurable interfaces. This DHS S&T-sponsored SBIR research effectively produced an
operational fourDscape®-based RCOP system with intuitive and interactive C4I COP system
capabilities and processes that are repeatable in a shared information environment throughout
every critical region nationwide (see “A Successful Partnership with DHS S&T” data sheet on
the pages below for a further description).
RCOP Project Follow-on Initiatives
RCOP Phase 2 deliverables were completed in July 2011, including an operational RCOP
Command Center at the ASFHS, focused on Nassau County but including elements of NYC and
Suffolk County, and integrated with a mobile tailgatER vehicle and a remote demonstration unit
of tailgatER capabilities available to DHS S&T in the D.C. office. Rollout has already begun
through Nassau County agencies (Nassau OEM and Police), partner sites have been set up in Maryland, Canada and NYC to build distribution channels to the larger emergency management and security markets, and many commercialization opportunities for the underlying DHS S&T-sponsored fourDscape®-based technology are underway (see Current RCOP Commercialization
Opportunities discussed below).
The RCOP capabilities at the ASFHS also provide an operational testbed that can readily
incorporate other emergency/security-related tools and products including information-sharing
frameworks, and Balfour / ASFHS has recently submitted a DHS S&T PhaseII SBIR proposal to
leverage this testbed for accelerating the transfer of University Program technologies into
operational first responder solutions. To continue building upon the current RCOP capabilities at
the ASFHS and its repeatable methods & processes will provide a cost-effective means to deliver
operational solutions to the first responder community. To that end, several specific areas are
proposed as direct follow-on tasks for RCOP Phase 3:
NEXT Follow-on Tasks (for RCOP Phase 3):
 Utilize RCOP Operational Technology Testbed
The RCOP Phase 2 Demonstration project has resulted in a significant set of operational
capabilities at the ASFHS facility which has already stimulated integration tasks with other DHS
S&T-sponsored technologies and industry solutions that create enhanced capabilities for first
responders. Utilizing this RCOP operational testbed will maximize the value gained for each of
these individual technologies, from situation awareness, to information-sharing frameworks, to
incident management, and right down to tracking real-time emergency response and the dynamic
events that drive it. As an operational platform for verification and validation of DHS S&T
technology initiatives, RCOP Phase 3 could fully implement the integration of independent
frameworks and service-oriented architectures like UICDS, SIMON, NICS, and vUSA with the
RCOP platform, and utilize this operational environment on a daily basis for testing application
of technologies from universities, national laboratories, and industry that further the goal of
delivering operational solutions to enhance situation awareness for improved decision support in
the real world of emergency response. The unique environment offered by the ASFHS, with an
active first responder community directly involved with the RCOP operational technology
testbed activities, can provide a proven path to successful technology transition and
commercialization that can be leveraged to deliver valuable new interoperable capabilities for the
first responder toolkit.
TASK 1: Continue the technology integration of UICDS, SIMON, NICS and vUSA
frameworks for information sharing with the operational RCOP regional situation awareness
platform at the ASFHS.
TASK 2: Test these integrated frameworks in an operational environment with ASFHS
partner agencies (representing OEM, Police, Fire, etc.), demonstrating seamless information
exchange with their existing systems through these various interfaces.
TASK 3: Test these integrated frameworks in an operational environment with ASFHS
industry partners with commercially-available applications for fire/rescue (i.e., GEOcommand)
and law enforcement (i.e., Impact Technologies).
TASK 4: Validate the system methodologies and the system architecture, and verify
system operations through end-user involvement.
[Note that these tasks focus on initial technical integration and testing of these various
frameworks with the RCOP platform in the RCOP operational testbed at the ASFHS facility, and
should be followed by a comprehensive long-term effort that incorporates extensive scenarios
and field exercises to achieve PRL level 7 – finalized verification and validation of systems; and
an initial rollout of these combined seamless information-sharing capabilities that targets early
adopters and commercialization opportunities for delivering these integrated systems and
expanded RCOP platform capabilities to the first responder community, achieving TRL level 9 -actual system proven through successful mission operations.]
 Expand RCOP Core Technology Implementation
During the course of the Phase 2 implementation of the RCOP demonstration project, several
additional capabilities have already been identified that, with further technology integration efforts, will build a more comprehensive and robust RCOP solution. Specific capabilities include interoperable communications for both voice (VOIP / radios) and data (shared
information services); persistent broad-area surveillance (high-resolution geo-referenced aerial
imagery) for realtime coverage of ground-based dynamic events; configuring robust, secure and
resilient network architectures that maintain the information backbone in emergency situations;
and further leveraging the ever-growing market of smart phones and tablets (the social network)
as a fully-enabled distributed sensor network and alerting mechanism. Leveraging the current
RCOP platform at the ASFHS will demonstrate how these technologies and others can be
implemented to support the difficult tasks of first responders and emergency managers, building
an integrated environment that complements current capabilities and drives the technology
roadmap.
TASK 1: Identify technology providers with specific solutions ready for integration into
an RCOP testbed environment. [Note that Balfour / ASFHS already has access to three different
interoperable communications systems which are ready for RCOP integration; several partners
with persistent broad-area surveillance sensors; access to University research on robust networks
(being done for the military); and active relationships with smartphone / handheld providers such
as Motorola and MobileDemand.]
TASK 2: Build integrated RCOP solutions that meet first responder and both public and
private security / surveillance needs.
TASK 3: Test and demonstrate integrated solutions in the operational RCOP environment
for application to real world problems that solve identified customer needs – developing a ready
market for commercialization to get these solutions deployed into the field.
 Expand RCOP in Tri-State Metro Region, with National Rollout Plan
The current RCOP portal at the ASFHS is primarily L.I.-centric, fully engaged with Nassau
County (being deployed with multiple agencies) and covering Suffolk County and the five
boroughs of NYC. Further engagements with Suffolk and NYC agencies, as well as private
sector security for critical infrastructure, public transportation, school, water and fire districts,
etc. will expand the level of situation awareness provided to the region and create a robust, far-reaching solution. Initially extending the RCOP coverage area to the entire Tri-state Metro (NY, NJ, CT) region through application of the RCOP repeatable methods & processes will demonstrate
the ability to readily scale the RCOP solution and build the foundation for a national rollout plan
that can provide this capability city-by-city, state-by-state, region-by-region, across the country,
leveraging the growing network of ASFHS partners and current RCOP portals being
implemented for specific sites outside the NYC region.
TASK 1: Extend the initial RCOP L.I./NYC portal outward for additional coverage
within the Northeast FEMA Region II – leveraging current relationships already being developed
within those regions (NJ, NYC, Westchester, Orange, etc.) for potential commercial
deployments, and providing access to the extended RCOP capabilities that integrate other DHS
S&T information-sharing initiatives (vUSA, UICDS, etc.) and key additional core technologies
being integrated such as interoperable communications and persistent surveillance.
TASK 2: Expand the initial RCOP L.I./NYC portal coverage downward to include more
specific agencies and organizations (i.e., towns, cities, schools, private sector, police, fire, etc.)
for additional coverage within the Northeast FEMA Region II – demonstrating how a baseline
regional RCOP portal can readily expand to serve the wide range of specific users within that
region.
TASK 3: Implement a National rollout plan for other regions, initially leveraging some of
the current RCOP commercialization opportunities underway that can accelerate establishing
initial baseline RCOP portals for those additional regions.
[Note: A “National Roll-Out Plan / RCOP Commercialization” effort is documented
separately, and included here as APPENDIX G.]
 Operational Exercises & Demonstrations – “powered by RCOP”
With the current RCOP platform in place and operating 24/7 in the ASFHS command center,
coupled with the significant ASFHS relationships being established with public safety and
emergency response agencies at local, state and federal levels, demonstrating the extended and
expanded RCOP capabilities both throughout the Northeast FEMA Region II and to other places
of national interest is a key element to accelerate RCOP commercialization and field
deployments. In addition, RCOP support for tabletop and operational exercises at the ASFHS
facility can be readily implemented for a wide-range of emergency response scenarios, utilizing
the extended RCOP capabilities that integrate other DHS S&T information-sharing initiatives
(vUSA, UICDS, etc.) and key additional technologies being integrated. From situation
awareness, to interoperable communications, to decision support, driven by extensive sensor and
surveillance networks, the RCOP platform is a key component for building operational exercises
that closely reflect real world situations. The power of RCOP to deliver a flexible operational
training environment through its simulation and modeling capabilities (plug-in simulated
sensors, models, tracks, and dynamic scripting) within the same RCOP platform utilized in day-to-day real world operations provides a rich experience that ‘feels real’ for supporting
interactive field exercises at any level.
TASK 1: Demonstrate the operational RCOP capabilities throughout the Northeast
FEMA Region II area to stimulate expanded participation of local, state and federal partners, and
to actively cultivate potential commercialization opportunities for RCOP deployments. The
RCOP remote demonstration capability (i.e., tailgatER on laptops) connected ‘live’ to the
ASFHS command center has proven to be a very effective means to deliver the RCOP
experience to any location across the country, developing additional interested customers and
deployments.
TASK 2: Exercise ASFHS / RCOP capabilities in an operational training environment to
support specific planned interactive tabletop or field exercises within the Northeast FEMA
Region II area covered by the extended RCOP portal, or as a participant in national-level
exercises.
Current RCOP Commercialization Opportunities
The primary goal of the RCOP project is to transition these integrated technology solutions and
core interactive visualization capabilities to operational environments in the field, providing
operational solutions for both first responders and the wider (both public and private) security /
surveillance and emergency management communities. To this end, Balfour / ASFHS has taken
the RCOP project ‘on-the-road’ since it first became operational at the ASFHS in Nov. 2010,
producing initial contracted deployments locally (Nassau County) and many current
commercialization opportunities in the works (a sampling of which is listed below). Leveraging
these current commercialization efforts by Balfour / ASFHS will provide an active channel for
delivering extended and expanded RCOP solutions to the wider market in an accelerated and
cost-effective manner.
 Nassau County – currently under a 4-year contract for a phased deployment of the
fourDscape®/RCOP solution, initially being implemented for Nassau Police, then
planned for Nassau OEM.
 Town of Oyster Bay (largest in Nassau) – ready to deploy a fourDscape®/RCOP
solution in both command center and tailgatER implementations. Proposal being
submitted.
 Suffolk County – both Suffolk Police & Suffolk Water Authority under discussion
for deployments of a fourDscape®/RCOP solution.
 L.I. Schools & Hospitals – several L.I. school districts in both Nassau and Suffolk,
and the major L.I./NYC hospital network discussing proposals for campus
deployments of the fourDscape®/RCOP solution.
 NYS Orange & Westchester Counties – discussions underway to extend the
fourDscape®/RCOP solution beyond NYC and to support various commercial
initiatives.
 NJ Meadowlands Stadium – pilot project under discussion for bringing an integrated
fourDscape®/RCOP solution to enhance stadium security.
 NYC Private Sector Security – currently deploying a fourDscape®/RCOP solution
at a partner’s NYC command center to market this capability to major NYC building
owners.
 NYC MTA / LIRR -- demonstrated and presented the fourDscape®/RCOP solution
at several meetings with different groups – with a pilot deployment under discussion
for command center operations.
 NYS DEC – pilot project defined for surveillance of coastal erosion (pre-post
storms); looking to establish a funding source for a pilot deployment.
 Port Authority of NYNJ – demonstrated and presented the fourDscape®/RCOP
solution at several meetings with different groups – with a pilot deployment under
discussion.
 Chicago O’Hare Airport – proposal submitted (through Immersimap as a partner) for
a fourDscape®/RCOP solution to enhance physical security at terminal buildings.
 Port of Long Beach CA – proposal submitted (through Radia Systems as a partner)
for a Common Operating Environment supporting port security and operations.
 Port of Tampa FL – under subcontract with SRI International to integrate existing
SIMON capabilities into an RCOP demonstration portal implemented for the Port.
 DHS SBI-net Northern Border Operations – discussing integration at the Detroit
Border technology operations center.
 DHS S&T SBIR Phase II – proposal submitted to accelerate the transition of
technologies from University Programs research utilizing the RCOP operational
testbed capabilities and Balfour / ASFHS commercialization channels.
 Universities – NYIT contracted for an operational fourDscape®/RCOP laboratory
for training and education; St. John’s University also in discussion; Stony Brook
(NYS University) research centers actively engaged to integrate technologies with
the RCOP operational testbed.
 DHS S&T ALERT Center of Excellence (at Northeastern University) has executed
membership with ASFHS in order to leverage the RCOP operational testbed
capabilities to accelerate technology transition.
Several partner channels have also been established that can readily bring the RCOP solution to
many locations throughout North America, including SRI International, Patton Electronics
(Visuality), QTAGS (GuestAssist), RDG2 (Canada), and MSA (Private Security), with many
additional commercialization opportunities developing in Houston / Gulf area (Energy/Oil
industries); major sports stadiums (NFL, MLB, MLS); NYC private sector security (major
building owners); private communities and parks in FL; major Shopping Mall developers in CA;
and more.
[Note that to support and accelerate the commercialization effort for this fourDscape®-based
RCOP solution, Balfour Technologies has formed a joint venture with another ASFHS Resident
Research Partner to establish V.C.O.R.E. Solutions LLC, which is now actively pursuing an
initial round of private investment capital ($2M-$10M) specifically for marketing and supporting
the active deployment of this fourDscape®-based technology solution. A “National Roll-Out
Plan / RCOP Commercialization” effort is documented separately, and included here as
APPENDIX G.]
APPENDIX A
Regional Common Operating Picture (RCOP)
Imagery Production Methods & Deployment Processes
Visual content for a Regional COP begins with high-resolution ortho and oblique imagery
(typically available directly from the state or local municipality) that is seamlessly integrated for
interactive fly-through in a desktop browser, and geo-registered with relevant GIS shapefiles and
3D models for creating visual geospatial awareness for the city/region. The methods and
procedures for this process can be standardized and semi-automated for importing these common
datasets into the RCOP system and making them readily available in a shared information
environment, independent of the specific visual content for each city/region/state.
The following imagery & GIS production methods and deployment processes are being
implemented at the Applied Science Center for Homeland Security (ASCHS in Bethpage, NY) to
produce a Regional COP (RCOP) for the L.I./NYC area, and are repeatable elements in
producing a Common Operating Picture for any city or region or state. By documenting these
specific methods while producing the initial RCOP demonstration and deployment at the
ASCHS, significant savings of time and dollars should be realized when re-producing these
RCOP capabilities for other city/regional/state command centers.
A. Ortho Photography
Digital orthophotos (top-down perpendicular aerial views of the ground) are the primary imagery
utilized for the RCOP basemap visualization, ortho-rectified to maintain geospatial accuracy.
These images are typically provided by the municipalities (counties / cities, etc.) that fly new
aerial imagery every 2-3 years, or can be found at the USGS National Map Seamless Server, or
from commercial distributors (such as DigitalGlobe, MapMart, etc.). The best source for highest-resolution (6” natural color) up-to-date imagery is typically the local municipality, which also
provides consistency with other local GIS users. The imagery utilized for the L.I.
Nassau/Suffolk RCOP at the ASCHS is from the NYS GIS Clearinghouse.
STEP 1 – ACQUIRE IMAGERY
Ortho imagery is typically provided in “tiles” of smaller images that when put together provide a
seamless view of a larger region, usually in a GeoTiff image format that provides the geo-registered image coordinates (either in image headers or metadata files). These databases can be
very large for city or county-wide imagery at 6” resolution (and roughly quadruple in size as
some aerial imagery providers begin to deliver 3” resolution), which drives the need
for near-lossless compression techniques at high compression ratios. This can be accomplished
with wavelet compression algorithms implemented through the JPEG2000 imagery standard
(typically 20:1 compression ratios with minimal loss of detail or compression artifacts). In fact,
current imagery now being distributed by the NYS GIS Clearinghouse has already been
produced in a JPEG2000 compressed format (for individual tiles).
STEP 2 – MOSAIC & COMPRESS IMAGERY
To facilitate rapid access and realtime decompression of imagery databases, the individual tiles
must be mosaicked together into larger but manageable JPEG2000 images. This process can be
accomplished by any available GIS toolset that can import the native image format, build a
seamless image of tiles, and then export into a single JPEG2000 image. To build the RCOP
basemap imagery at the ASCHS, Global Mapper (an inexpensive GIS tool) was utilized to
import either GeoTiff or JPEG2000 tiles, then mosaic and compress them into larger JPEG2000 images.
Note that the target size for optimum image output is in the 5-7GB range – trying to serve each
tile as an individual JPEG2000 image will bog down the realtime image server with too much
overhead manipulating access to so many images simultaneously; and trying to build it all in one
JPEG2000 image (i.e., all of Suffolk in one image is 20GB, compressed) will burden the realtime
system with large memory needs, and slower access into specific locations within the image.
The L.I./NYC RCOP imagery was built into 5 Nassau-County-sized chunks (NYC, Nassau,
Suffolk-West, Suffolk-Central, and Suffolk-East) for optimum realtime performance. [Note that
building larger mosaics will also stress the compression algorithms in most systems, either
failing to complete or stretching the processing time to weeks instead of days.] This
compression should be performed on a multi-CPU system with plenty of memory – for example, at
the ASCHS an 8-core server with 16GB of memory (running Global Mapper 64-bit) was utilized
to produce a Nassau County JPEG2000 compressed image (20:1) from 5600 tiles; output sized at
4.6GB; processing took about 24 hours.
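For a repeatable deployment, this mosaic-and-compress step can also be scripted rather than run interactively. The sketch below uses the open-source GDAL command-line tools as a stand-in for the Global Mapper workflow described above; the tile directory, output names, and the QUALITY setting (chosen to approximate the 20:1 ratio cited here) are illustrative assumptions, and the JP2OpenJPEG driver must be available in the local GDAL build.

Illustrative script (Python):

#!/usr/bin/env python3
"""Sketch: mosaic ortho tiles and compress to JPEG2000 with GDAL utilities."""
import glob
import subprocess

tiles = sorted(glob.glob("nassau_tiles/*.tif"))   # hypothetical tile directory

# Build a virtual mosaic of all tiles (no pixel data is copied yet).
subprocess.run(["gdalbuildvrt", "nassau_mosaic.vrt"] + tiles, check=True)

# Export the mosaic as one JPEG2000 image; QUALITY=5 (percent of raw size)
# roughly corresponds to the 20:1 near-lossless ratio described above.
subprocess.run([
    "gdal_translate", "-of", "JP2OpenJPEG", "-co", "QUALITY=5",
    "nassau_mosaic.vrt", "nassau_county_6in.jp2",
], check=True)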
STEP 3 – REALTIME IMAGE BROWSING
The above process produces high-resolution basemap aerial imagery that can be readily served
and viewed at varying resolutions (the primary feature of wavelet compression technology), and
can even be ‘portable’, i.e., the full L.I./NYC basemap 6” resolution imagery can fit on a 25GB
hard drive (as 5 flat files), instead of needing to access a complex (and expensive) database
management system to serve out individual tiles, which in turn drives the need for high-bandwidth network access. With this JPEG2000 compression process, local high-resolution
imagery access can be done on a standard portable laptop – running a visual browser optimized
for realtime imagery access.
Balfour’s patented fourDscape® visualization technology is at the center of the system
architecture for the RCOP demonstration at the ASCHS. The fourDscape® browser/server
technology provides a four-dimensional (4D) graphical user interface and layered system
architecture that integrates on top of the variety of systems that can deliver spatial and realtime
information for an interactive Regional COP, and provides the mechanism for realtime viewing
of seamless, high-resolution JPEG2000 imagery over large regions that is the foundation for
building and delivering a Common Operating Picture to all levels of users. The fourDscape®
browser controls image decompression (either locally or through a remote image server) to
create seamless, smooth, interactive motion through a 3D scene built from the basemap imagery
by drilling down into the imagery at different levels of resolution based on the current viewer’s
eyepoint. Other GIS viewers can also load these JPEG2000 images, but how the user interacts
and the level of seamless, smooth interaction and resolution is highly dependent on the capability
of the browser.
fourDscape® utilizes the Kakadu Software library to process these JPEG2000 images. Kakadu
is an implementation of Part 1 of the JPEG2000 standard. It should be fully conformant with
“Profile-1”, “Class-2”, as defined in Part 4 of the standard. Kakadu also implements many
features from other parts of the JPEG2000 standard, including:
 Virtually all features of the JPX file format, described by Part 2 of the JPEG2000
standard, including rich color spaces, multiple compositing layers, animation and rich
metadata support.
 All aspects of the Motion JPEG2000 standard (Part 3) which apply to video.
 Client and server components for a comprehensive implementation of the JPIP (IS
15444-9) standard for interactive image communications.
The Kakadu software library provides rich support for the file formats which
have been developed by the JPEG committee: the baseline JP2 file format;
the extended JPX file format; and the motion MJ2 file format; plus support for the JPIP image
server standards. fourDscape® leverages this capability to deliver fully interactive 3D scenes in
a desktop browser, serving and accessing imagery through the JPEG2000 standards.
B. Oblique Imagery
Several aerial imagery providers are delivering high-resolution oblique angle photography (3D
views at approx. 45deg.) along with the standard basemap orthophotos. These oblique image
libraries can contain thousands of views from many different perspectives (typically at least
North, South, East and West). Unlike standard orthorectified basemap imagery, these individual
images are not delivered in tiles that can be built into a seamless mosaic (exactly matching edge-to-edge), but instead generally provide a 20-30% overlap between images. These 3D oblique
views can add significant value to the basemap imagery in an urban environment, as
orthorectified tiles provide straight-down views of building rooftops (eliminating building ’lean’
that obscures other ground features), which when overlaid with the oblique imagery produces a
scene that can depict “every-square-inch” of a city, including views of buildings and features
from all sides.
The process utilized to integrate these 3D oblique image libraries into the ASCHS Regional COP
must effectively correlate them with the ortho basemap imagery, and includes the following
steps:
STEP 1 – CONVERT TO JP2
Oblique image libraries are typically delivered as individual JPEG images, which must be
converted to the JPEG2000 standard JP2 format for effective viewing in an interactive RCOP
system. This should be done as a batch process, utilizing available toolsets such as Global
Mapper, or GDAL, etc. to convert each individual image – at the ASCHS a programmatic
approach was taken to convert oblique imagery from various aerial imagery providers utilizing
the Kakadu software libraries to create a servable JPEG2000 oblique image set for realtime
multi-resolution access.
STEP 2 – GEOLOCATE IMAGES
Oblique imagery is also delivered with metadata that defines the specific geo-location of each
individual image (i.e., the exact position and orientation of the camera when the image was
taken). This data can be provided in an XML file, or ESRI shapefile or extracted from the
imagery headers using GUI tools or DLLs (for a programmatic solution) supplied by the aerial
image providers. This metadata must be collected to define the ground points (rectangle /
polygon) covered by each individual image.
STEP 3 – REALTIME IMAGE BROWSING
Other than proprietary viewers supplied by the third-party aerial imagery providers, there are not
many known browsers that can seamlessly correlate oblique 3D imagery with ortho basemap
imagery (potentially from different providers). For the ASCHS RCOP, fourDscape®
browser/server technology can automatically determine the oblique image that best-fits the
current 3D viewing perspective (from the ground points defined for each image), and seamlessly
overlay that oblique image into the 3D scene. This method interactively changes which oblique
image is displayed and registered into the scene as the user simply moves (i.e., ‘flies’ around) or
changes (i.e., ‘spins’ around) the viewing perspective.
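The selection logic inside fourDscape® is proprietary, but the general idea can be illustrated with a small sketch: test which image footprints (the ground polygons collected in Step 2) contain the viewer’s ground point, then pick the candidate whose camera heading is closest to the current view heading. The data structure and field names below are assumptions for illustration only.

Illustrative sketch (Python):

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the footprint polygon
    given as a list of (x, y) ground corners?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def best_oblique(eye_ground_xy, eye_heading, images):
    """Pick the image whose footprint contains the viewer's ground point and
    whose camera heading is closest to the view heading.  `images` is a list
    of dicts: {"id": ..., "heading": degrees, "footprint": [(x, y), ...]}."""
    candidates = [img for img in images
                  if point_in_polygon(*eye_ground_xy, img["footprint"])]
    if not candidates:
        return None
    return min(candidates,
               key=lambda img: abs((img["heading"] - eye_heading + 180) % 360 - 180))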
C. 360deg. Video Images
For the ASCHS RCOP, the fourDscape® browser/server technology also provides a unique
perspective on imagery for building interiors, in the form of 360deg. high-resolution video
walkthroughs. The video is produced from specialized mobile cameras (third-party vendors) by
walking pre-defined paths through the building. Each path is delivered as an AVI movie file,
where each frame is a panoramic JPEG image covering the full 360deg. view. Specialized
fourDscape® support libraries can process these AVI files and convert them to a multi-layered
JP2 format using the JPEG2000 JPX or MJ2 movie standard. This produces a unique dataset of
JPEG2000 imagery that can be served and rendered in the interactive fourDscape® browser as a
360deg. spherical view where the user can be looking in any direction and moving along any of
the predefined paths through the building for a 3D immersive experience.
The process utilized to integrate these processed (JPX) 360deg. video libraries into the ASCHS
Regional COP must effectively correlate them with the 3D building model geometry, and
includes the following steps:
STEP 1 – ACQUIRE DATA / VIDEO
Third-party providers will pre-define walkthrough paths (typically driven from AutoCAD
building drawings) that are followed to collect the desired 360deg. video. These path definitions
must be delivered in some reference format, linked to the associated video files. For RCOP this
was done through access to the provider who collected, processed, and delivered the datasets
using Microsoft SQL Server.
STEP 2 – CONVERT TO POSTGRESQL
In order to incorporate these walkthrough paths into the RCOP portal, the blueprint / node /
segment tables must be exported from the Microsoft SQL Server (as a standard .sql file), then
inserted into a servable PostgreSQL database using an automated text editing procedure.
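The “automated text editing procedure” can be as simple as a scripted pass over the exported .sql file that rewrites SQL Server syntax into PostgreSQL-compatible statements before loading. The sketch below is illustrative only: the file names, database name, and substitution rules are assumptions, and real exports of the blueprint / node / segment tables may need additional rules.

Illustrative sketch (Python):

import re
import subprocess

# Read the SQL Server export of the blueprint / node / segment tables.
with open("tables.sql", encoding="utf-8") as f:
    sql = f.read()

sql = re.sub(r"\[(\w+)\]", r'"\1"', sql)            # [name] -> "name" identifier quoting
sql = re.sub(r"^\s*GO\s*$", ";", sql, flags=re.M)    # T-SQL batch separator -> statement end
sql = sql.replace("NVARCHAR", "VARCHAR")             # common type renames
sql = sql.replace("DATETIME", "TIMESTAMP")

with open("tables_pg.sql", "w", encoding="utf-8") as f:
    f.write(sql)

# Load into the servable PostgreSQL database (connection details are assumptions).
subprocess.run(["psql", "-d", "rcop_walkthroughs", "-f", "tables_pg.sql"], check=True)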
STEP 3 – LINK to FACILITY DATABASE
Finally, a new database object is created for the new facility (i.e., a building model with 360deg.
walkthroughs) which links each pre-defined path through the building to its associated JPX video
file.
D. GIS – Instrumented Imagery
Geo-referenced basemap aerial imagery can also be instrumented with any geo-specific dataset
such as street names, parcel addresses, feature locations, etc. In the ASCHS RCOP
configuration utilizing the fourDscape® browser/server technology these datasets can be viewed
as separate layers or used as automated mouse-over hints in the 3D scene. GIS datasets can
either be accessed locally or served from a GIS SQLite database, which provides the additional
capability of user queries to filter GIS result sets, potentially as some combination of specific
GIS datasets. The process begins with any GIS point or line shapefile exported from any GIS
toolset, which is then processed into the ASCHS RCOP portal as follows:
 Open the shapefile in Global Mapper to find the data’s projection. If the data’s projection
doesn’t match the imagery, reproject the data, saving the reprojected data out to a new
shapefile.
 Either use Global Mapper or dbfdump to find the names of the dbf attributes to include in
the hint. For streets, it would be the full street name. Sometimes the entire street name is
one column, and other times it is split across three. For parcels, it could be the owner of the
property, the street address, or both. Find a combination of dbf attributes that would
make a decent-looking hint for mousing over the data.
 For local access, place the shapefiles in a directory, and use utilities in the fourDscape®
support library to convert streets and other GIS data into ‘gislabels’ and ‘gisgeometry’ for
use with fourDscape®.
 For serving GIS data, place the shapefiles in a directory, and use utilities in the
fourDscape® support library to convert shapefiles and spreadsheets into an SQLite
database (see the illustrative sketch after this list). Place the output SQLite database in the
directory used by the fourDscape® GIS Server. Then create a GIS XML file to configure
fourDscape® with the name of the SQLite database and the IP address and port of the GIS
server to load these GIS datasets, including configuration options for the color of the GIS
data and line thickness. For GIS data that is tied directly to a building model, the building’s
ID and floor number are also required, as well as the IP address and port of the Facility
Database server, which can provide a networked link to all available facility information
for that floor/building directly from the source (remotely), with access controlled by the
building owner.
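The fourDscape® support-library utilities referenced above are not publicly documented; as a rough stand-in, the following sketch uses the open-source pyshp and sqlite3 packages to inspect a shapefile’s dbf attribute names, build a mouse-over hint per record, and write hint plus geometry into an SQLite table. The file names, attribute names, and table layout are illustrative assumptions.

Illustrative sketch (Python):

import sqlite3
import shapefile  # pyshp; stands in here for the fourDscape support utilities

# Inspect the dbf attributes of a (reprojected) streets shapefile, build a
# mouse-over hint per record, and store hint + geometry in an SQLite table.
sf = shapefile.Reader("streets_reprojected.shp")
print([f[0] for f in sf.fields[1:]])  # list dbf attribute names (skip DeletionFlag)

conn = sqlite3.connect("gis_streets.sqlite")
conn.execute("CREATE TABLE IF NOT EXISTS gislabels (hint TEXT, points TEXT)")

for rec in sf.shapeRecords():
    attrs = rec.record.as_dict()
    # e.g., a street name split across three columns -> one readable hint
    parts = [str(attrs.get(k, "")).strip() for k in ("PREDIR", "NAME", "SUFTYPE")]
    hint = " ".join(p for p in parts if p)
    points = ";".join(f"{x:.2f},{y:.2f}" for x, y in rec.shape.points)
    conn.execute("INSERT INTO gislabels VALUES (?, ?)", (hint, points))

conn.commit()
conn.close()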
E. GIS – Imagery Overlays
For GIS datasets that represent areas (instead of lines/points), such as level 1-4 coastal flood
zones for example, the geo-referenced basemap aerial imagery can also be overlaid with semi-transparent color-coded shapes. In the ASCHS RCOP configuration utilizing the fourDscape®
browser/server technology these types of GIS shapefiles are converted into a polygon model,
given attributes of color and transparency, and then overlaid in the 3D virtual scene as a 3D
model. All the interactive features of 3D models, i.e., on/off, item selections, drill down into
underlying data, etc. can then be applied visually to GIS shapes.
F. GIS – 3D Models
Virtual 3D models can be available in many different formats and created with many different
modeling toolsets. They can include buildings, vehicles, terrain, vegetation, ground features, etc.
to create highly photorealistic visual scenes. In the ASCHS RCOP configuration utilizing the
fourDscape® browser/server technology these 3D model datasets can be correlated with the
high-resolution aerial imagery and GIS datasets to create a fully interactive virtual scene. The
primary focus of the ASCHS RCOP would be 3D building models, both interior and exterior.
These are generated using two primary methods:
1) From 3D oblique geo-coded photography, exterior 3D building geometries can be
automatically created (including detailed upper building/roof geometries) and accurately
overlaid with the high-resolution visual textures directly from the oblique aerial imagery.
These 3D building models can be produced by multiple third-party vendors from their
oblique imagery datasets.
2) 3D building model interiors can be created (floor-by-floor) directly from AutoCAD
architectural drawings, photo surveys, material textures, object libraries (doors,
furnishings, lighting, etc.) using any modeling toolset. For the ASCHS RCOP, individual
3D building models are currently being created using SketchUp.
Most methods for creating 3D building models are not geared for realtime rendering in large
visual scenes, and therefore require significant levels of optimization in order to be effectively
utilized interactively. For the ASCHS RCOP, 3D building models are optimized utilizing
fourDscape® support libraries within the following process:
 SketchUp models are exported to a workable .3ds format, then converted from .3ds to the
native fourDscape® format with optimizations turned on, which automatically reduces
the number of geodes / geometries / groups to create a 3D model correctly organized for
realtime rendering in the fourDscape® browser.
 Building models in .3ds or other standard modeling formats can be imported into various
modeling toolsets that will automatically optimize vertices and surfaces by removing
duplicates and bad surfaces within the model.
 fourDscape® support libraries are then used to convert the .3ds model to a native binary
format, while implementing various texture compression techniques, including level-of-detail support. This resulting binary is now a single compressed flat-file representation of
the 3D building model that can be rapidly loaded into an interactive fourDscape®
browser.
As an example within the ASCHS RCOP, a 3D model of Jamaica Station (a major multi-track
multi-level rail station in NYC) was modeled from available drawings and photos using
SketchUp. The optimization process removed 500,000 duplicate vertices (a 30% reduction) and
250,000 duplicate surfaces (a 25% reduction), and reorganized the model geometry into 1500
geodes (reduced from 600,000 individual geometric objects) – which effectively reduced the
Jamaica Station 3D building model to half its size and improved the realtime rendering frame
rate from 4 fps to 75 fps. Without this optimization, these types of 3D building models cannot
be utilized in large interactive virtual scenes.
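The fourDscape® optimization utilities themselves are proprietary, but the core duplicate-vertex step they perform can be illustrated in a few lines: collapse vertices that fall within a small tolerance of each other, remap the face indices, and drop faces that become degenerate. This is a simplified sketch, not the actual implementation.

Illustrative sketch (Python):

def dedupe_vertices(vertices, faces, tol=1e-4):
    """Collapse duplicate vertices (within tol) and remap face indices.
    vertices: list of (x, y, z); faces: list of index tuples."""
    key_to_new = {}
    new_vertices = []
    remap = []
    for x, y, z in vertices:
        key = (round(x / tol), round(y / tol), round(z / tol))
        if key not in key_to_new:
            key_to_new[key] = len(new_vertices)
            new_vertices.append((x, y, z))
        remap.append(key_to_new[key])
    new_faces = [tuple(remap[i] for i in face) for face in faces]
    # Drop degenerate faces that collapsed onto a repeated vertex.
    new_faces = [f for f in new_faces if len(set(f)) == len(f)]
    return new_vertices, new_faces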
3D building models are also attributed with metadata to accurately geo-reference them onto the
building footprint within the aerial basemap imagery. This is automatically provided if they
were generated from geo-referenced oblique imagery, or can be generated for independently
modeled 3D buildings (i.e., from SketchUp) by attributing a shapefile of the footprint from the
aerial imagery, correlated with specific reference points in the model to accurately scale, rotate,
and translate it within the visual scene.
Multi-story buildings can be modeled floor-by-floor by attributing building IDs, floor numbers, and
floor heights to independent floor models. These floor models are then stacked together in the
fourDscape® browser, which also has the unique capability to vertically ‘stretch’ these building
models to ‘see’ inside each individual floor. This capability to go ‘vertical’ is an important
feature for an interactive RCOP visualization, to get inside the details of critical infrastructure
and key resources, as well as underground facilities, utilities, subways, tunnels, etc.
APPENDIX B
Regional Common Operating Picture (RCOP)
Sensors -- Production Methods & Deployment Processes
Visual content for a Regional COP began with high-resolution ortho and oblique imagery that is
seamlessly integrated for interactive fly-through in a desktop browser, and geo-registered with
relevant GIS shapefiles and 3D models for creating visual geospatial awareness for the
city/region. The next step in building the Regional COP is integrating sensor streams (both
‘live’ and archived, i.e., for video forensics) from a variety of sensor types, geo-registered into
the Regional COP baseline imagery. Although there are many vendors with many different
sensor products (e.g., surveillance cameras, GPS tracking devices, chem./bio sensors, explosives
detectors, etc.) used by many different cities/agencies, many standard protocols have been
adopted for reporting information such as sensor metadata (typically XML-based), surveillance
video (typically MJPEG / MPEG-4 compression), still imagery (JPEG / JPEG2000), GPS
tracking (NMEA reports over CDMA/GSM cell or broadband comm.), and for alerts / alarms /
reports (using the Common Alerting Protocol (CAP) and NIMS-standard homeland security
symbology). Methods and processes for re-configurable sensor interfaces can be implemented
as standardized Sensor Management Engine interfaces that encapsulate these sensor streaming
protocols, so that repeatable deployments can readily adapt to different sensor hardware devices,
instead of integrating each sensor supplier’s proprietary software development kit (SDK) for
each unique device. This will produce a major savings in time, integration, and system
maintenance when deploying in different cities/regions using different sensor providers.
The following sensor production methods and deployment processes are being implemented at
the Applied Science Center for Homeland Security (ASCHS in Bethpage, NY) to produce a
Regional COP (RCOP) for the L.I./NYC area, and are repeatable elements in producing a
Common Operating Picture for any city or region or state. By documenting these specific
methods while producing the initial RCOP demonstration and deployment at the ASCHS,
significant savings of time and dollars should be realized when re-producing these RCOP
capabilities for other city/regional/state command centers.
A. Video Streams
Digital video is readily available from a variety of sources through many different vendors and
camera servers and systems. Video streams can be provided either directly from individual IP
cameras, or through servers that manage many cameras, or from networked video recorders
(NVRs). Most of these systems communicate through common protocols and formats, though
proprietary encodings are occasionally encountered for specific vendors. Many systems also
provide SDKs/APIs as an integration method, but these are often geared toward
video display and control through a dedicated user window, are not very flexible in delivering
video streams through direct transfer protocols, and are often not compatible from version
to version. An alternative to using SDKs as a camera interface method is to simply communicate
to cameras / servers using the same transfer protocols as the network client interface provided by
the specific camera/system vendor. This has been found to be the lowest common denominator
between many different cameras / systems, and therefore can generally utilize generic video
communication engines, with minor adaptations / modifications. Even systems with proprietary
binary encodings could be implemented in this manner if the codec (i.e., lib or dll) is provided,
otherwise these systems are limited to integration through available API/SDK capabilities. The
following process is utilized to integrate with a variety of camera systems currently integrated
with the Regional COP, and provides a method for readily integrating more independent camera
sources deployed in various towns, communities, campuses, etc., without lengthy procedures for
implementing and updating specific SDK interfaces.
STEP 1 – IDENTIFY CAMERA SOURCES
Many different camera system providers were encountered in implementing the
initial Regional COP for L.I./NYC:
 ONSSI
 Milestone
 Axis
 Sony
 Panasonic
 Vicon
 Patton
 Intralogic
With the exception of Vicon (a proprietary, closed system requiring an SDK interface), each of
these utilizes similar network interface clients that were readily implemented through a generic
RCOP video engine that adapts itself to communicate with each different system just like its
standard network client. Through this method many different camera systems can be quickly
integrated into the RCOP without any impact or modification to the existing camera systems
currently deployed for different communities, organizations, agencies, municipalities, etc.
STEP 2 – IDENTIFY TRANSPORT PROTOCOLS & FORMATS
Many different image formats can be supported through standard encoding libraries:
 JPEG (i.e., updated 1/sec)
 MJPEG (series of high-speed JPEG frames)
 MPEG4 (video compression)
 H.264 (video compression)
And utilizing many different network communication protocols:
 TCP (standard handshaking)
 UDP (data level packets)
 RTP (realtime interface)
 RTSP (realtime video streaming)
 MJPEGTS (motion JPEG packaging)
If an existing camera system deployment is utilizing one of the standard formats / protocols,
communicating with it to integrate video streams can be quickly established utilizing the existing
RCOP video engine, with the camera system simply interfacing with RCOP in the same manner
as it does with its own available network client. This provides a solid, stable interface that has
proven to be reliable and relatively unchanging (an occasional firmware upgrade in a camera
server has required re-establishing direct communications with RCOP).
Camera Stream Types Pros/Cons:
MJPEG:
- Pros:
- Almost every camera system out there supports it.
- In almost every instance we can control the resolution/quality and framerate (exception:
Onssi).
- Usually accessible on port 80 or on an open default port.
- Usually a very standard implementation.
- Cons:
- Inefficient with bandwidth, since every frame needs to be transmitted in its entirety.
MPEG4:
- Pros:
- Can have fairly efficient bandwidth usage: key frames and changes between frames need
only be transmitted.
- Can be transmitted via UDP or TCP, usually utilizing RTP wrapper protocol.
- Cons:
- Not every camera supports it even if the type of camera system does.
- Typically we don't have control over resolution, quality, or framerate. Whatever the
camera can do is what we get.
- Problems can arise when a camera is not set up efficiently: the MPEG4 stream (300KB/s)
can become a much bigger bandwidth hog than the MJPEG stream (100KB/s).
- By default most publicly available cameras do not have the MPEG4 option enabled.
- Usually the MPEG4 (RTSP/RTP) stream is not on the default camera port and that means
that the camera system needs to have that port open.
- How the stream is wrapped is unique per camera system.
H264:
- Pros:
- Can have a very efficient bandwidth usage: key frames and changes between frames need
only be transmitted.
- Can be transmitted via UDP or TCP, usually utilizing RTP wrapper protocol.
- Cons:
- Fairly few cameras support it even if the type of camera system does.
- Typically we don't have control over resolution, quality, or framerate.
- By default most publicly available cameras do not have the H264 option enabled.
- Usually the H264 (RTSP/RTP) stream is not on the default camera port and that means that
the camera system needs to have that port open.
- How the stream is wrapped is unique per camera system.
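As an illustration of the “generic network client” approach described in Steps 1 and 2, the sketch below pulls an MJPEG stream over HTTP and splits it into individual JPEG frames by scanning for the JPEG start/end markers, using the Python requests library as a stand-in for the RCOP video engine. The camera URL and credentials are hypothetical (shown in the Axis CGI style); a real deployment would read them from the camera configuration file described in Step 4.

Illustrative sketch (Python):

import requests

def mjpeg_frames(url, auth=None):
    """Yield complete JPEG frames from an MJPEG-over-HTTP stream by scanning
    the multipart body for the JPEG start (FFD8) and end (FFD9) markers."""
    resp = requests.get(url, auth=auth, stream=True, timeout=10)
    resp.raise_for_status()
    buf = b""
    for chunk in resp.iter_content(chunk_size=4096):
        buf += chunk
        while True:
            start = buf.find(b"\xff\xd8")
            end = buf.find(b"\xff\xd9", start + 2)
            if start == -1 or end == -1:
                break
            yield buf[start:end + 2]
            buf = buf[end + 2:]

# Hypothetical usage against an Axis-style MJPEG endpoint:
# for frame in mjpeg_frames("http://10.30.10.2/axis-cgi/mjpg/video.cgi",
#                           auth=("security", "security")):
#     process(frame)   # hand the JPEG bytes to a decoder / display layer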
STEP 3 – IMPLEMENT CUSTOM INTERFACES
For some deployments, these standard protocols and formats are fronted by an existing web-based system that manages access to the specific video streams. In these cases, another level of
customization is required to communicate through these web-based systems, but done through a
similar method by simply interfacing in an automated manner in the same way a user would
through a network browser. These are typically standard HTTP web interfaces, and although they
add another level of communications, they are sometimes very useful in capturing large amounts of
camera configuration information automatically (i.e., URL, name, location, type, ptz, direction,
etc.) which otherwise needs to be acquired through other means -- i.e., query existing camera
servers that typically report some of this info, or supplement with manual design info (i.e.,
camera locations inside buildings). All cameras must be network-accessible either directly
through the internet (publicly available or password protected), or through established VPN tunnels, or locally on the CoIE building network.
STEP 4 – CONFIGURE CAMERA FILE
All available information on all cameras integrated into the RCOP system is captured in a text-readable XML file which controls the realtime access to each camera stream through the RCOP
video management engine (see sample configuration file below). This includes camera name,
type, IP and authorization for access to the networked camera/server, and location/translation and
display hint for how to display the camera in RCOP. The camera type provides a lookup into a
camera database to determine the specific method of communication for that specific camera
(format, protocol, ptz commands, etc.).
Sample XML configuration file:
<?xml version="1.0" encoding="iso-8859-1" ?>
<SMECONFIG MAINPORT="4090" LOCALIP="localhost" GLOBALIP="69.74.214.218"
FOURDSERVERIP="localhost:4040">
<SMTP DOMAIN="mail.bal4.com" PORT="25" USER="server@bal4.com"
PASSWORD="server"></SMTP>
<CAMERAS DB2="C:\Program Files\fourDscape-6.0\engines\CamSysConfig-v2.db"
CMDS="%STREAMFPS:5[]" PROJECTION="+init=NAD83:3104 +units=m +datum=NAD83
+no_defs" TRANSLATION="0,0">
<!-- A combo of both MPEG4 and MJPEG cameras -->
<CAMERA NAME="ncpd01" MPEG4="RAW" TYPE="SONYRX" PTZ="YES" IP="71.249.184.76:8000" AUTHORIZATION="baltech:fourdscape" LOCATION="-73.482831,40.725385,8,100" HINT="[Bethpage-Hempstead Tpke Stewart Ave] Camera 1" />
<CAMERA NAME="bhscam01" TYPE="AX2110" PTZ="YES" NUM="1" IP="10.30.10.2:80" AUTHORIZATION="security:security" LOCATION="-73.48196077,40.75511221,4,100" HINT="Bethpage High School Cam #1" />
<!-- INFORM NY cams -->
<CAMERA TYPE="AX2400" NAME="223-1" NUM="1" HINT="CIP at Northern Blvd.(Queens
County)" LOCATION="-73.7572,40.7621,10,180" IP="12.154.142.223:80" />
<CAMERA TYPE="AX2400" NAME="224-1" NUM="1" HINT="CIP South of West Alley
Rd.(Queens County)" LOCATION="-73.7409,40.7499,10,180" IP="12.154.142.224:80"
/>
<CAMERA TYPE="AX2400" NAME="211-4" NUM="4" HINT="Grand Central Pkwy at Cross
Island Pkwy(Queens County)" LOCATION="-73.7279,40.7497,10,90"
IP="12.154.134.211:80" />
<CAMERA TYPE="AX2400" NAME="225-1" NUM="1" HINT="295 just s/of VMS66(Queens
County)" LOCATION="-73.7739,40.7486,10,180" IP="12.154.142.225:80" />
<CAMERA TYPE="AX2400" NAME="226-1" NUM="1" HINT="I-495 at Grand Central
Pkwy(Queens County)" LOCATION="-73.846,40.7394,10,90" IP="12.154.129.226:80"
/>
<CAMERA NAME="nycdot/nycdot_01" TYPE="PCS1" IP="localhost:4060"
LOCATION="305130,74960,10,0" HINT="Broadway @ 169 St" />
<CAMERA NAME="nycdot/nycdot_03" TYPE="PCS1" IP="localhost:4060"
LOCATION="304199,73957,10,0" HINT="Riverside Dr @ 153 St-H Hudson Pkwy" />
<CAMERA NAME="nycdot/nycdot_04" TYPE="PCS1" IP="localhost:4060"
LOCATION="303610,72901,10,180" HINT="Riverside Dr @ 135 St-H Hudson Pkwy" />
</CAMERAS>
<CAMERAS PROJECTION="" TRANSLATION="-342000,-64000" RADIUS="20">
<CAMERA NAME="EXTERIOR NORTHWEST VIEW" TYPE="VICON" IP="localhost:4069"
LOCATION="327,834,3,-75" FLOOR="37:2" />
<CAMERA NAME="EXT. S.PARKING LOT VIEW" TYPE="VICON" IP="localhost:4069"
LOCATION="302,758,3,180" FLOOR="37:2" />
</CAMERAS>
</SMECONFIG>
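Reading such a configuration file back is straightforward with any XML parser. The sketch below (file name assumed, and assuming the configuration is well-formed XML) walks each CAMERAS group and prints the attributes the RCOP video management engine would use; a real implementation would also apply the group-level PROJECTION and TRANSLATION before placing each camera in the scene.

Illustrative sketch (Python):

import xml.etree.ElementTree as ET

# Minimal sketch of reading camera entries from an SMECONFIG file like the
# sample above (file name assumed).
root = ET.parse("smeconfig.xml").getroot()
for group in root.iter("CAMERAS"):
    translation = group.get("TRANSLATION", "0,0")
    for cam in group.findall("CAMERA"):
        x, y, height, heading = (float(v) for v in cam.get("LOCATION").split(","))
        print(cam.get("NAME"), cam.get("TYPE"), cam.get("IP"),
              (x, y, height, heading), cam.get("HINT"), "group offset:", translation)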
Video stream resolution and framerate are the complex factors that have significant impact on
network bandwidth utilization of realtime camera streams (compared with the typically small
data packets or slow frequencies reported by other sensors). Coupled with the video
compression method and transport protocol used by the specific camera systems, this will define
the quality of the resulting RCOP video stream. For example, with MPEG4, where each frame is
relative to the previous frame, a UDP network interface at a high frame rate could easily lose
packets (with no recovery handshake) and therefore produce significant video artifacts/breakup.
By contrast, MJPEG (basically a series of JPEG images) over a TCP network protocol can produce
a smooth high framerate (min 10fps), with a tradeoff of higher network bandwidth utilization.
Depending on the video source, these streaming parameters should be monitored and managed
based on available network bandwidth (i.e., over internet LAN, 3G/4G cellular, satellite, mobile
mesh, etc.).
B. GPS Tracks
GPS tracking devices simply report specific locations around the globe. Basic differences in
reporting are the frequency of reports (e.g., one second vs. two minutes) and the coordinate system used
to report the data. Otherwise GPS data streams are relatively small data packets that simply need
to be parsed and plotted. The following process is utilized to accomplish this:
STEP 1 – IDENTIFY GPS PROTOCOL
GPS data streams are based on the standard NMEA format or some XML-based adaptation of it.
Data either comes from remote GPS servers that report positions of multiple devices,
or directly from individual devices, either of which is handled by the RCOP geo-tracking engine.
Standard networking protocols (i.e., TCP, UDP, etc.) are utilized for networked data transfers.
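A minimal example of parsing the most common NMEA position sentence ($GPRMC) into decimal-degree coordinates is sketched below; this is a simplified illustration (no checksum validation or handling of other sentence types), not the RCOP geo-tracking engine itself.

Illustrative sketch (Python):

def parse_gprmc(sentence: str):
    """Convert a $GPRMC sentence into (lat, lon) decimal degrees, or None
    if the sentence is not an RMC report or does not carry a valid fix."""
    fields = sentence.strip().split(",")
    if not fields[0].endswith("RMC") or len(fields) < 7 or fields[2] != "A":
        return None
    def to_degrees(value, hemisphere, degree_digits):
        degrees = float(value[:degree_digits])   # ddmm.mmmm -> whole degrees
        minutes = float(value[degree_digits:])   # remaining minutes
        decimal = degrees + minutes / 60.0
        return -decimal if hemisphere in ("S", "W") else decimal
    lat = to_degrees(fields[3], fields[4], 2)
    lon = to_degrees(fields[5], fields[6], 3)
    return lat, lon

# Example report (illustrative coordinates near Bethpage, NY):
# parse_gprmc("$GPRMC,123519,A,4043.52,N,07328.97,W,0.0,0.0,211211,,")
#   -> (40.725..., -73.482...)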
STEP 2 – CONFIGURE GPS FILE
Similar to camera configurations, GPS streams from devices or servers are identified in a
configuration file that defines the type of data format that needs to be parsed and the coordinate
system of the data. The RCOP geo-tracking engine uses this information to re-project the
location data on-the-fly to the local projection/coordinates of the specific RCOP portal.
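As an illustration of this on-the-fly re-projection (in the actual engine, the projection and offsets are driven from the configuration file), the sketch below uses the open-source pyproj library; EPSG:32118 (NAD83 / New York Long Island, meters) is assumed here as the portal’s local projection, and the TRANSLATION offsets are placeholders.

Illustrative sketch (Python):

from pyproj import Transformer  # stands in for the geo-tracking engine's reprojection

# Re-project a WGS84 GPS fix into the portal's local projection on the fly.
to_local = Transformer.from_crs("EPSG:4326", "EPSG:32118", always_xy=True)

lon, lat = -73.4828, 40.7254           # parsed from an NMEA report (see sketch above)
x, y = to_local.transform(lon, lat)    # local easting/northing in meters
tx, ty = 0, 0                          # TRANSLATION offsets from the portal config
print(x - tx, y - ty)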
C. Radar Tracks & Images
There are basically two types of radar sources: either location tracks (similar to GPS streams),
which would typically be packaged with other motion sensor data streams reporting at regular
rates; or radar images covering defined areas that are produced from a collection of radar returns
(i.e., weather images), which would typically be retrieved from networked databases on a
remote server at a very low frequency (i.e., minutes). Again, the process to utilize this data is
similar to that for video, GPS, or any other sensor: identify the available source and configure the
interface:
STEP 1 – IDENTIFY RADAR SOURCE
A radar sensor will typically report its dynamic location (it could be moving) along with a data
table of relative information (i.e., heading and range of detected objects). These could be
ground-based radar systems tracking objects moving across the land/water, or radar systems
tracking aircraft in the sky. The basic datasets simply define the relative location of multiple
objects given IDs by the system to relate new reported locations to the previous location reports,
defining tracks for independent objects (based on the capabilities of the specific radar system).
These object locations/tracks are plotted visually (similar to GPS tracks) in the RCOP system. In
the case of weather images which are retrieved periodically from remote servers, the location
provided is not a track, but simply the geo-location of the image where it can then be visually
overlaid (with transparency) into the RCOP visual scene (with the lat/long reprojected into local
coordinates on-the-fly as necessary).
STEP 2 – CONFIGURE RADAR INTERFACE
Similar to video and GPS systems, specific radar sources are configured into the RCOP system
by defining specific locations (with translation), ID/name, and type which then dictates to the
RCOP sensor management engine the method used to parse the data stream reported by each
unique type of radar system (see sample configuration file below). Weather images are
configured by simply providing an a priori web-based URL where current imagery is retrieved,
given a known radar location for mapping the weather image overlay.
Sample XML configuration file:
<?xml version="1.0" encoding="iso-8859-1" ?>
<SMECONFIG FOURDSERVERIP="localhost:4040" MAINPORT="4090"
PROJECTION="+init=NAD83:2113 +units=m +datum=NAD83 +no_defs"
TRANSLATION="4150000,150000">
<SENSORS >
<SENSOR ID="RDRCP01_RDR094001_COM84_GARMINGPS" TYPE="RADAR" LOCATION="82.52231,42.61511,5,5" NAME="Russel Island Garmin GPS" />
<SENSOR ID="RDRCP01_RDR094001_COM62_PSRSCONTROLSTATION" TYPE="RADAR"
LOCATION="-82.52431,42.61511,5,5" NAME="Russel Island PSRS Control Station"
/>
<SENSOR ID="RDRCP01_RDR094001_COM63_MIDSUGS" TYPE="INTRUSION" LOCATION="82.52631,42.61511,5,5" NAME="Russel Island MidsUgs" />
<SENSOR ID="RDRCP01_RDR094002_COM85_GARMINGPS" TYPE="RADAR" LOCATION="82.42476479,42.99869228,5,5" NAME="Blue Water Bridge (center) Garmin GPS" />
<SENSOR ID="RDRCP01_RDR094002_USB1_ORTECDETECTIVE" TYPE="RADAR"
LOCATION="-82.42413504,42.99857685,5,5" NAME="Blue Water Bridge (center)
Ortec Detective" />
<SENSOR ID="RDRCP01_RDR094002_COM78_LCD3" TYPE="RADAR" LOCATION="82.42360358,42.99845653,5,5" NAME="Blue Water Bridge (center) LCD3" />
<SENSOR ID="RDRCP01_RDR094002_COM80_LRM" TYPE="RADAR" LOCATION="82.42293899,42.99831431,5,5" NAME="Blue Water Bridge (center) LRM" />
<SENSOR ID="RDRCP01_RDR094003_COM86_GARMINGPS" TYPE="RADAR" LOCATION="82.42514200, 42.99909900,5,5" NAME="Blue Water Bridge (Patrol) Garmin GPS" />
<SENSOR ID="RDRCP01_RDR094003_USB2_ORTECDETECTIVE" TYPE="RADAR"
LOCATION="-82.42573179, 42.99914736,5,5" NAME="Blue Water Bridge (Patrol)
Ortec Detective" />
<SENSOR ID="RDRCP01_RDR094003_COM79_LCD3" TYPE="RADAR" LOCATION="82.42511356, 42.99951668,5,5" NAME="Blue Water Bridge (Patrol) LCD3" />
<SENSOR ID="RDRCP01_RDR094003_COM81_LRM" TYPE="RADAR" LOCATION="82.42569312, 42.99952649,5,5" NAME="Blue Water Bridge (Patrol) LRM" />
<SENSOR ID="RDRCP01_RDR094004_COM87_GARMINGPS" TYPE="RADAR" LOCATION="82.42732,42.959103,5,5" NAME="Seaway Terminal Garmin GPS" />
<SENSOR ID="RDRCP01_RDR094004_COM64_MIDSUGS" TYPE="RADAR" LOCATION="82.42832,42.959103,5,5" NAME="Seaway Terminal MidsUgs" />
<SENSOR ID="RDRCP01_RDR094004_COM101_VAISALAWEATHERWTX510" TYPE="RADAR"
LOCATION="-82.42932,42.959103,5,5" NAME="Seaway Terminal Vaisala WTX510" />
</SENSORS>
</SMECONFIG>
D. Environmental Sensor Packages
The deployment of multiple sensor packages covering specific areas or perimeters (i.e., motion,
radiological, chemical, intrusion, etc.) can provide a unique picture of the surveillance
environment. These specific sensor data reports can be collected remotely by servers that
repackage the specific sensor data into standard XML reports that are then streamed to the RCOP
system. Or if necessary, individual environmental sensors can be managed directly by the RCOP
sensor management engine (similar to connecting directly to individual IP cameras vs.
communicating with a camera system server managing multiple diverse camera types). A
similar process and method (as already defined for video, GPS, and radar above) will be utilized
to interface to any type of sensor data stream, adapting existing sensor interface modules to
accommodate the specific sensor/server data formats. The basic methods are the same for the
common elements of identifying, locating and configuring any type of available networked
sensor stream – with the key element of not requiring any modification to the sensor source, just
an accessible network connection to the native data stream.
E. Interfaces with other Systems
As more and more sensors are deployed, adaptations to standardized formats and frameworks
will continue to become more prevalent. The RCOP system will support these frameworks as a
method to standardize access to sensor and other available information that can be shared from
remote systems networked together. Two methods currently being implemented by DHS S&T
will be integrated into the RCOP system at the CoIE:
1. UICDS (Unified Incident Command Decision Support System) – a data reporting
framework that conforms information from various software/systems to a common
data sharing format, through agents that collect information from participating
systems (through a common SDK/API) and disseminates it to other participating
systems in a common format through a UICDS Core. The Regional COP being
implemented at the CoIE will be setup as a UICDS Core to effectively apply this data
sharing framework.
2. vUSA (Virtual USA) – an information sharing portal that provides controlled access
to available information (controlled by the information provider). Links to sets of
diverse information in a variety of standard data formats are provided to authorized
users (with vUSA accounts). The Regional COP being implemented at the CoIE will
include access to available information shared through a vUSA portal, and integrated
into the RCOP visualization to effectively demonstrate this data sharing capability.
APPENDIX C
Regional Common Operating Picture (RCOP)
Networks -- Production Methods & Deployment Processes
Visual content for a Regional COP began with high-resolution ortho and oblique imagery that is
seamlessly integrated for interactive fly-through in a desktop browser, and geo-registered with
relevant GIS shapefiles and 3D models for creating visual geospatial awareness for the
city/region. The next step in building the Regional COP was integrating sensor streams (both
‘live’ and archived) from a variety of sensor types, geo-registered into the Regional COP
(RCOP) baseline imagery. The backbone that provides the realtime connectivity to support this
RCOP visualization is the network architecture living underneath it all.
Understanding the potential network architectures that can be deployed for an RCOP system
across many cities/regions is an important element for effective system performance, as sensor
integration is dependent on reliable networks for delivering both high and low bandwidth data
streams from remote sensors to a Regional Command Center (i.e., over wireless mesh, satellite,
3G/4G cellular, broadband internet, local intranet, analog IP sensor encoders, etc.) and
distributed on-scene to the first-responder community through the tailgatER mobile technology
solution. Implementing an operational RCOP deployment for LI/NYC has identified many of
the issues related to integrating various network architectures to support this sensor integration
and distribution to remote devices.
The following network production methods and deployment processes are being implemented
for the Applied Science Foundation for Homeland Security (ASFHS) at the Morrelly Homeland
Security Center (in Bethpage, NY) to produce the Regional COP (RCOP) for the L.I./NYC area,
and are repeatable elements in producing a Common Operating Picture for any city or region or
state. By documenting these specific methods while producing the initial RCOP demonstration
and deployment at the ASFHS, significant savings of time and dollars should be realized when
re-producing these RCOP capabilities for other city/regional/state command centers.
 Distributed RCOP Architecture
 Scalable Networked Components
 Broadband Internet Access
 Multiple Integrated Networks
 Satellite Interfaces
 Cyber Information Layer
A. Distributed RCOP Architecture
The RCOP architecture is a fourDscape®-based system that consists of many
interconnected layers, modules, engines, and
other components to create an Automated
Situation Awareness solution for a Regional
Common Operating picture (see architecture
diagram). Layer 1 begins with raw data from
many diverse sensors sources, and as it
moves up through the layers more
information is added (i.e., through analytics,
databases, etc.) and it becomes correlated
geospatially with other data/sensors into a 4D
Portal, which is then served out to the Layer 8
visual browser. The interface between each
Layer is data-driven, and intended to be
distributed across accessible network
architectures that will support the required
data flow. This provides a modular, flexible
deployment method that can leverage the
available networked computing resources of
different installations. For example, the
operational RCOP deployment at the ASFHS facility supports multiple network options for acquiring sensor data (broadband
internet, VPN tunnels, satellite, mobile mesh, cellular 3G/4G), with the Layer 3 Sensor
Management engine running on a dedicated (multi-processor) Command Center
computer, which also supports the (multi-channel) graphics for the Command Center 4D
Browser (on a video wall). The Command Center is also supported by a networked
(LAN) server farm (12 machines – 96 core processors) that runs additional engines and
services (geo-tracking, facility database, video analytics, image servers, etc.) required to
build the complete 4D Portal (i.e., Common Operating Picture). This RCOP portal is
accessible through multiple fourDscape® browsers (locally on the dedicated COIN
network LAN) as well as public-facing through managed global IPs that can deliver the
same interactive RCOP capability to registered fourDscape® browsers anywhere in the
world, through its basic service-oriented distributed architecture.
This modular, networked, fourDscape®-based RCOP system can be readily adapted to a
variety of networked architectures (discussed in the sections below), including potential
cloud computing implementations that could support many private sector deployments.
B. Scalable Networked Components
The scalable nature of the fourDscape®-based RCOP distributed architecture allows for
multiple servers and engines / services networked together in order to maximize access to
data and sensor streams operating on different networks and in different locations, while
facilitating the sharing of that information with multiple fourDscape® servers and visual
browsers across global networks (see diagram below).
This scalability creates a shared Common Operating Environment leveraging the
available network architecture, and providing for multiple views at multiple locations and
at multiple levels, across local, regional and national boundaries. The networking
diagram below shows how regional command centers can be linked together running
fourDscape®-based RCOP servers, engines and browsers (i.e., remotely accessible
common-view ‘services’ ), each one servicing local tailgatER mobile vehicles, which can
then service individual handheld devices (i.e., ruggedized tablets, cell phones, etc.). The
robust IP backbone is the critical factor that drives these services and builds automated
situation awareness across these networked layers. The flexibility to connect multiple
methods for network access is also an important component (discussed in the sections
that follow).
C. Broadband Internet Access
The typical network backbone to the internet is the foundation that any network is built
upon. The diagram below represents the basic configuration of the ASFHS COIN
network that facilitates the RCOP network architecture.
The secure firewall (with intrusion protection) controls the COIN command center
connection to the Broadband Internet Service Provider (a 50Mb network pipe through the
ISP) for the internal data center servers and COIN workstations for the RCOP system.
This builds the path through the internet for secure VPN connections to other servers,
services, and sensors (from public agencies or private sector), and public internet access,
as well as connecting to the ASFHS building network servicing the corporate side of the
house. A block of Global IPs is also provided to create a public-facing side (through the
DMZ) for remote access to the RCOP services.
D. Multiple Integrated Networks
The diagram below depicts how the RCOP network IP backbone comprises various
independent networked capabilities merged together into one unified system.
The network foundation is the ‘Trusted Network’ configuration for the COIN Command
Center that supports the internal networked architecture for the fourDscape®-based
RCOP system. It has a backbone connection to the internet for accessing publicly
available information and sensors (including video streams, weather stations & radar,
etc.), as well as for establishing secure VPN connections to specific local networks (i.e.,
schools, agencies, private infrastructure, etc.). In addition, a secure VPN is also
established through a separate WiFi Mobile Mesh network (described below) for data
communications to mobile command vehicles and handheld devices. The available
3G/4G cellular network also provides a link for the mobile vehicle (or simply remote
fourDscape® browsers on laptops) to connect with the RCOP command center (through
the internet backbone), or for cell phone / handheld GPS trackers or video streams to
connect directly with the RCOP command center as well (described below). Satellite
uplink / downlink is also integrated for the mobile command vehicle.
o 3G / 4G Cellular
Standard 3G or 4G data networks have been integrated (from Sprint, Verizon, &
T-Mobile) that can be simply used for streaming video from RCOP to a cell
phone, and streaming a GPS track from a cell phone directly into the RCOP
system; OR can drive a full RCOP data stream to a remote vehicle (or remote
laptop) to support an interactive fourDscape® browser for the complete RCOP
environment (including imagery, GIS, multiple GPS tracks, and multiple ‘live’
camera streams). The current experience with 4G rivals the performance of a
standard broadband consumer-level internet connection (up to around 12Mb/sec)
– sufficient to support a realtime RCOP browser connection.
o Mobile Mesh Networks
The mobile mesh solution currently implemented for the RCOP architecture is the
Portable Mesh Exchange (PMX from SRI International). The PMX is a hardware
device used to enable wireless communications, tracking and video surveillance
where network cabling is not feasible, or to connect public safety officials with
private surveillance networks (in the 2.4 GHz, 4.9 GHz, or 5 GHz wireless bands).
The PMX units can be either fixed-mounted or vehicle-mounted (the RCOP
deployment at ASFHS has a fixed unit on the building roof, and a mobile unit on
the tailgatER roof rack). The embedded GPS and alternate power options (PoE, AC, 12 VDC) enable users to reposition and reconfigure units as necessary. Its
rugged, weatherproof NEMA 6 enclosure has weatherproof connections for
Ethernet, USB, and AC power, with a fixed camera directly attached (i.e., for
tailgatER); or the unit can operate in a gateway mode to link an entire wired
network (i.e., for a mesh connection to the COIN network from the building roof).
The second PMX antenna on the tailgatER vehicle is also utilized by RCOP to
wirelessly connect a second mobile (handheld) camera through the PMX unit.
E. Satellite Interfaces
Bonding a satellite connection to the RCOP core network is implemented through iDirect routers, set up as a separate accessible VLAN that feeds to/from the satellite dish on the ASFHS
building roof, and connects directly to other satellite locations (also through an iDirect router),
including a roof-rack-mounted portable satellite dish on the RCOP tailgatER mobile vehicle (see
sample diagram below). The satellite capability at the ASFHS is provided by GlobeComm,
which can also serve as a backup path for broadband internet connectivity through their Satellite
Port at GlobeComm headquarters (in Hauppauge, NY).
Satellite bandwidth is relatively expensive, and not all satellite networks can talk to each other (they need to be compatible), so each satellite deployment will be unique to its installation while still utilizing common methods and processes. Portable satellite capability (for phones, or BGAN for laptops) is also available, but talk-time is even more expensive. The RCOP configuration has been tested and demonstrated (RCOP command center to/from the tailgatER mobile vehicle) on a 500 Kb/s uplink / 1 Mb/s downlink satellite connection provided by GlobeComm on their available satellite bandwidth.
F. Cyber Information Layer
Visualizing the network architecture itself is another aspect that can be included within the scope of the Regional Common Operating Picture (see sample image below). This capability would plug into Layer 4 of the fourDscape® architecture (Object Analytics) as an external service, depicting the
network configuration as the ‘object’: visually presenting connectivity and monitoring network health and activity, while leveraging the existing fourDscape®-based RCOP visualization capabilities for ‘exploded views’ (like building internals) to represent the actively deployed network, with key elements tethered to geo-specific reference points. This layer can then also leverage the existing fourDscape®-based RCOP capabilities to drill down into available realtime data layers for specific network elements.
Potential sources of ‘live’ network information that would feed this Layer 4 service (as network sensor types) could include the following (a brief illustrative sketch follows the list):
-- WILDCAT wireless intrusion vulnerability and detections reported to RCOP
-- commercially-available cyber-security modules from third-party sources
-- realtime cyber network data available from RCOP network connections to sensor/tracking
sources and remote viewers
-- realtime cyber monitoring data from the COIN Network service provider (through a 24/7
NOC)
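As an illustration only, the sketch below shows one way such a Layer 4 network-health feed could be structured: each monitored network element is tagged with a geo-reference (so it can be tethered to the 3D scene) and periodically polled, and the resulting status records are emitted as simple JSON that an Object Analytics service could ingest. Element names, coordinates, hostnames, and most ports are hypothetical; this is not the delivered RCOP interface.

    # Hypothetical Layer 4 'network sensor' feed: poll a few network elements and
    # emit geo-referenced status records as JSON lines. Names/hosts are placeholders.
    import json, socket, time

    ELEMENTS = [
        {"name": "COIN firewall",     "lat": 40.744, "lon": -73.486, "host": "fw.example.org",   "port": 443},
        {"name": "PMX roof node",     "lat": 40.745, "lon": -73.487, "host": "pmx.example.org",  "port": 80},
        {"name": "tailgatER vehicle", "lat": 40.750, "lon": -73.500, "host": "tg01.example.org", "port": 4040},
    ]

    def poll(element, timeout=2.0):
        """Attempt a TCP connection and report status plus round-trip latency."""
        start = time.time()
        try:
            with socket.create_connection((element["host"], element["port"]), timeout=timeout):
                return {"status": "up", "latency_ms": round((time.time() - start) * 1000, 1)}
        except OSError:
            return {"status": "down", "latency_ms": None}

    if __name__ == "__main__":
        for e in ELEMENTS:
            record = {"name": e["name"], "lat": e["lat"], "lon": e["lon"], **poll(e)}
            print(json.dumps(record))   # one JSON record per element, per polling pass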
APPENDIX D
Regional Common Operating Picture (RCOP)
Operational Training -- Production Methods & Deployment Processes
Establishing operational training methods and procedures for the initial RCOP deployment at the ASFHS facility (including the mobile tailgatER capabilities) is based on an understanding of standard operating procedures within the Emergency Management community, and utilizes the same deployed operational environment, configured for training, to provide a cost-savings multiplier that can readily be applied across many cities, regions, and states. This includes the capability to utilize the RCOP operational deployment as the core visualization for simulated exercises/HSEEPs, pre-packaging simulated sensor modules that produce the sensor streams for pre-planned exercise scenarios.
RCOP’s underlying fourDscape® technology is based on real-time simulation technology (developers at Balfour have extensive experience building real-time flight/tactics simulators for the military), and is inherently capable of generating an interactive RCOP from real sensors/assets, or a simulated RCOP for embedded training using the simulated fourth dimension of time. Realistic and effective operator training cannot always be achieved using the real-world subsystems; it is not always feasible to affect the operating HVAC system during a training exercise, stage suspicious activity in the lobby for the surveillance video feeds, or excite a CBE sensor with a toxic agent or explosive device. To overcome this, fourDscape® modules can be integrated with “simulated” generic subsystem modules for both training and testing. For example, fourDscape® has a simulated Camera Server module, used to replay previously recorded video loops (which could depict staged security events for training purposes). Like this simulated camera server, other simulated modules for HVAC, alarms, CBE sensors, etc. can be generated from their respective ICDs. Not only does this produce a realistic and effective training environment, it also aids in the validation and integration of each subsystem interface. This inherent ability of fourDscape® to both “operate” and “simulate” gives RCOP the ability to create very realistic embedded training experiences.
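To make the simulated-module idea concrete, the sketch below shows a minimal stand-in for a simulated camera source: it loops a previously recorded video file and hands out frames at the recorded frame rate, as if they were arriving live. It uses the open-source OpenCV library purely for illustration; the actual fourDscape® Camera Server module and its interface are not reproduced here, and the file name is a placeholder.

    # Minimal illustration of a 'simulated camera' source: loop a recorded video file
    # and yield frames at (approximately) the recorded frame rate, as if live.
    # Requires opencv-python; 'training_loop.avi' is a placeholder file name.
    import time
    import cv2

    def replay_loop(path="training_loop.avi"):
        cap = cv2.VideoCapture(path)
        if not cap.isOpened():
            raise FileNotFoundError(path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        delay = 1.0 / fps
        while True:
            ok, frame = cap.read()
            if not ok:                           # end of file: rewind to simulate a loop
                cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
                continue
            yield frame                          # a downstream module would encode/serve this
            time.sleep(delay)

    if __name__ == "__main__":
        for i, frame in enumerate(replay_loop()):
            print(f"frame {i}: {frame.shape}")   # stand-in for streaming to the portal
            if i >= 90:                          # stop after ~3 seconds of simulated video
                break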
The following operational training methods and deployment processes are being implemented
for the Applied Science Foundation for Homeland Security (ASFHS) at the Morrelly Homeland
Security Center (in Bethpage, NY), utilizing the Regional COP (RCOP) for the L.I./NYC area,
and are repeatable elements in producing a Common Operating Picture for any city or region or
state. By documenting these specific methods while producing the initial RCOP demonstration
and deployment at the ASFHS, significant savings of time and dollars should be realized when
re-producing these RCOP capabilities for other city/regional/state command centers.
 Modular RCOP Architecture
 Training -- Sensors
 Training -- Alerts
 Training Exercises
 Simulated Virtual Scenes
 Record / Playback
A. Modular RCOP Architecture
The RCOP architecture is a fourDscape®-based system that consists of many
interconnected layers, modules, engines, and
other components to create an Automated
Situation Awareness solution for a Regional
Common Operating Picture (see architecture diagram). Layer 1 begins with raw data from many diverse sensor sources, and as it
moves up through the layers more
information is added (i.e., through analytics,
databases, etc.) and it becomes correlated
geospatially with other data/sensors into a
4D Portal, which is then served out to the
Layer 8 visual browser. The interface
between each Layer is data-driven, and
intended to be distributed across accessible
network architectures that will support the
required data flow. This provides a modular,
flexible deployment method that can
leverage the available networked computing
resources of different installations. The
RCOP portal is accessible through multiple
fourDscape® browsers that can deliver the same interactive RCOP capability to
registered fourDscape® browsers anywhere in the world, through its basic service-oriented distributed architecture.
This modular, networked, fourDscape®-based RCOP system can be leveraged and
readily adapted to create an interactive simulation and training environment by plugging in simulated sensor feeds in Layer 1, or simulated analytic results in Layer 2, or
simulated objects in Layer 4 that represent a simulated scenario that is then driven
through the server/browser architecture to the user/trainee as if it were the real sensor
environment. The following sections describe the specifics of how this architecture is
utilized for simulating sensors, alerts, exercises, and interactive virtual scenes in the same
RCOP operational deployment, so training is done on the same system used in day-to-day
operations, including playback for post-training briefings. This produces a valuable
multi-use RCOP deployment that enables training on operational systems, and can serve
as a testbed for introducing other technologies and systems within the same operational
training environment.
B. Training -- Sensors
The scalable nature of the fourDscape®-based RCOP distributed architecture allows multiple servers and engines/services to be networked together in order to maximize access to
data and sensor streams operating on different networks and in different locations, while
facilitating the sharing of that information with multiple fourDscape® servers and visual
browsers across global networks. This same capability can be leveraged to inject
simulated sensors into the operational architecture to produce training scenarios utilizing
the same RCOP operational deployment. For example, the fourDscape®-based RCOP
sensor management engine can either get video streams from ‘live’ camera sources or
from recorded video files and deliver them both simultaneously to the interactive user.
The same is true for any fourDscape®-based sensor engine; simulated data streams can
be readily injected from pre-recorded time-based files to simulate any type of sensor in a
training scenario. This can be extended to produce an ‘augmented reality’ with fully-simulated operational objects of any type plugged in from any external simulation
environment.
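As a sketch of the general pattern only (not the fourDscape® sensor engine API, which is not reproduced here), the snippet below replays a pre-recorded, time-stamped sensor file at a selectable speed, emitting each record when its recorded offset comes due so that a downstream engine cannot tell it from a live feed. The file name and record format are assumptions.

    # Replay a time-stamped sensor recording as if it were live.
    # Each line of the (hypothetical) recording is: <seconds_offset>,<sensor_id>,<value>
    import time

    def replay(path="sensor_recording.csv", speed=1.0):
        """Yield (sensor_id, value) pairs, pacing them by the recorded time offsets.

        speed > 1.0 plays the scenario back faster than real time."""
        start = time.time()
        with open(path) as f:
            for line in f:
                offset, sensor_id, value = line.strip().split(",")
                due = float(offset) / speed
                wait = due - (time.time() - start)
                if wait > 0:
                    time.sleep(wait)
                yield sensor_id, float(value)

    if __name__ == "__main__":
        for sensor_id, value in replay(speed=2.0):    # e.g., run the exercise at 2x speed
            print(f"injected {sensor_id} = {value}")  # a real engine would forward this upstream

The speed parameter corresponds to the report's point, later in this appendix, that recorded sensor streams can be played back at any speed for a training scenario.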
In the visual examples presented above (left), ‘live’ camera recordings were made during the Rose Bowl Parade in Pasadena, CA, and two weeks later were compiled into a 4Dportal to simulate a replay of the parade route in a fourDscape®-based visual scene, interjected with video representing areas of interest and specific training video for an exercise with the L.A. County Sheriff’s Department. In the right image, video analytics are included with recorded camera feeds, mixed together with ‘live’ camera images and GPS tracks that are presented in the RCOP demonstration portal at the ASFHS facility. Both of these examples represent flexible use of the fourDscape®-based RCOP architecture to seamlessly inject recorded sensor feeds (of any type) to create a variety of simulated exercises.
C. Training -- Alerts
Specific alerts of any type can also be scripted and injected into the fourDscape®-based
RCOP portal over time to simulate emergency events injected into a ‘live’ interactive
training environment. These alerts are simply ‘played’ from a recorded file by a
fourDscape®-based dynamic scripting engine, and fed into the fourDscape®-based
RCOP system as if they were coming from the ‘live’ sensors. In the images below (left), a variety of ‘live’ sensors were fed through a Sentel Brain Box that compiled a collection of motion sensors, radiological and chemical sensors (both mobile and fixed), plus ground-based radar and weather sensors, delivering a common stream to a fourDscape®-based visual portal representing an area along the northern border. This sensor stream
can be played back (at any speed) to simulate specific radiological and chemical events,
or radar tracks crossing the river, combined with both ‘live’ and recorded camera feeds to
demonstrate how these sensor packages can protect border areas and deliver an effective
training scenario in an interactive fourDscape®-based RCOP visual portal.
In the image to the right, a ‘live’ interactive portal representing the Anaheim Convention
Center (where a fourDscape®-based Command & Control system is currently deployed)
includes a 3D building model with ‘live’ feeds from all video cameras and building
(HVAC) systems. A simulated chemical sensor feed (recorded during testing of an actual
sensor) plays a script that streams detection values for a large variety of chemicals, and
delivers an alert when a high level of a specific chemical is detected at a specific location.
That simulated alert script can also override current ‘live’ video streams with recorded
video of people running out of the building for example (originally recorded during an
earthquake) to simulate the response that would be immediately seen on the cameras near
the detected chemical event. This demonstrates how the modular fourDscape®-based
RCOP architecture can be leveraged to seamlessly create any number of specific training
scenarios desired.
D. Training Exercises
The images below depict two examples of how the RCOP fourDscape®-based modular
architecture is utilized to create recorded scenarios that can represent specific training
events. To the left is an extension of the chemical detection event discussed above that
includes recorded images of an evacuated building following a chemical detection by a
specific sensor. The data, video and standard operating procedures are all displayed to
the user in the fourDscape®-based visual scene so that users can be trained to both
recognize and respond to these types of events while using the same operational system
they use day-to-day in their security office.
To the right is a recorded training scenario that represents a replay of all GPS feeds from a live exercise in which a team of players entered a building to find a man down. The exercise was viewed ‘live’ in an interactive fourDscape®-based portal, while the GPS streams were simultaneously recorded to disc. These recorded GPS sensor streams can then be fed back into the interactive fourDscape®-based portal as if they were ‘live’ again, and the user can interact with and view the data from any perspective.
This ability to both record any type of ‘live’ sensor feeds and utilize the modular
fourDscape® architecture to play them back into the interactive fourDscape®-based
RCOP system provides a flexible environment for producing interactive visual training
scenarios that can be utilized for both tabletop and field-level exercises – implemented
with the same operational fourDscape®-based RCOP system that is deployed for day-to-day as well as emergency operations.
E. Simulated Virtual Scenes
The fourDscape®-based RCOP architecture also has the ability to present highly-realistic,
highly-detailed virtual scenes that can accurately represent real-world scenes in an interactive
fourDscape® browser. This can be utilized for training in a fully-immersive simulated
environment (i.e., like a flight simulator or driving trainer), or can import actions/movements for
any number of dynamic objects from both ‘live’ and simulated external sources (i.e.,
locating/moving all ground vehicles at an airport; or importing ‘live’ players from other
simulators; or instrumenting the motions of all troops/vehicles at a ‘live’ training ground, etc.).
The fourDscape®-based RCOP architecture can plug-in active users from remote locations, or
simulated objects from playback files, or instrumented ‘live’ dynamic objects (from GPS or other
sensors) to create the realism needed for effective simulated training environments.
As an example (below right), fourDscape® is currently being used as the basis of an airport
ground operations virtual training system (subcontracted under a NASA grant) that includes both
simulated airport air-side operations and ‘live’ interactive players that can drive or fly through
the scene, while being delivered a diverse series of operational visual environments as well as
geo-specific training text embedded in the 3D scene. To implement this embedded training
environment, fourDscape® has a Dynamic Scripting engine (DSE) component that can be
utilized by an RCOP system to process time- and event-based scripted actions, for both an embedded training model and tabletop exercises. XML scripts are used to configure the DSE
for specific training scenarios.
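The actual fourDscape® DSE schema is not reproduced in this report; purely as an illustration of the idea, the sketch below parses a small, hypothetical scenario script and dispatches each event at its scheduled offset, which is the kind of time- and event-based behavior the DSE provides. The element and attribute names are invented for this sketch.

    # Illustrative only: parse a small, hypothetical XML scenario script and fire
    # each event at its scheduled offset. The element/attribute names are invented
    # for this sketch and are not the fourDscape® DSE schema.
    import time
    import xml.etree.ElementTree as ET

    SCENARIO = """
    <scenario name="chemical-release-drill">
      <event t="0"   type="alert"  msg="Elevated chemical reading, Hall B"/>
      <event t="30"  type="camera" msg="Switch wall view to camera 12"/>
      <event t="120" type="alert"  msg="Initiate lobby evacuation SOP"/>
    </scenario>
    """

    def run(xml_text):
        root = ET.fromstring(xml_text)
        start = time.time()
        for event in sorted(root.findall("event"), key=lambda e: float(e.get("t"))):
            wait = float(event.get("t")) - (time.time() - start)
            if wait > 0:
                time.sleep(wait)
            print(f'[{event.get("type")}] {event.get("msg")}')  # a real DSE would drive the portal

    if __name__ == "__main__":
        run(SCENARIO)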
Below left is an example of a highly-detailed 3D terrain model of a real-world training ground that can serve as the basis of an instrumented, interactive training environment. The fourDscape®-based RCOP system can produce this type of visual realism to represent any area or location, which can then be combined with both live and simulated sensor feeds, dynamic
moving objects and dynamic event scripts to deliver an augmented reality for an effective
training environment using the same RCOP operational system utilized day-to-day.
F. Record / Playback
Along with the built-in ability to ‘play back’ any type of sensor stream as if it were ‘live’
(already discussed above), the fourDscape®-based RCOP system visual scene can also be
recorded directly to disc and edited to standard movie formats utilizing a variety of post-production 3rd-party tools. This delivers a post-training debriefing capability that can be utilized
to share the training results with large groups or remote trainees via video. The entire
fourDscape®-based RCOP visual scene is an OpenGL canvas that can be digitally recorded and
post-processed into useful training videos.
Above (left) is a screenshot of the FRAPS recording tool utilized at the ASFHS, which can
record the full OpenGL frame buffer at 30fps with minimal impact to the interactive operation of
the fourDscape®-based RCOP system. The source recording is in an uncompressed format
(using a custom FRAPS realtime codec), and will produce large recording files that can be
compressed during post-processing. FRAPS can also optionally record the mouse/cursor layer,
and the digital audio channel.
Above (right) is a screenshot of one of the post-processing video editing tools (VirtualDub)
utilized at the ASFHS, which can open the FRAPS recording files, apply a collection of frame
editing options, and store the video in a variety of compressed video formats – typically
producing a much smaller AVI movie file with minimal performance loss (i.e., using a Windows Media Video 9 codec). VirtualDub can also combine selected video clips, and dub in the desired
audio track, etc., or simply convert the source FRAPS video file to another format to be imported
into other advanced video editing / movie-making applications.
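Where a scriptable alternative to the interactive VirtualDub workflow is preferred, the same compression step could be driven from the command line; the sketch below wraps the open-source ffmpeg tool from Python, which is one possible substitute and not part of the documented ASFHS workflow. File names are placeholders, and ffmpeg must be installed separately.

    # One possible scripted alternative to the interactive VirtualDub step:
    # compress a raw FRAPS capture with ffmpeg. File names are placeholders.
    import subprocess

    def compress(src="fraps_capture.avi", dst="training_debrief.mp4"):
        subprocess.run(
            ["ffmpeg", "-y",
             "-i", src,              # raw (large) FRAPS recording
             "-c:v", "libx264",      # widely supported compressed video codec
             "-crf", "23",           # quality/size trade-off
             "-c:a", "aac",          # compress the captured audio track, if present
             dst],
            check=True,
        )

    if __name__ == "__main__":
        compress()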
APPENDIX E
Regional Common Operating Picture (RCOP)
Delivery -- Production Methods & Deployment Processes
Implementation of an initial LI/NYC RCOP deployment at the ASFHS has provided the
opportunity to evaluate various product delivery methods available for the RCOP solution, from
packaging the entire server/browser system on a portable laptop, to running on a high-end desktop graphics platform, to a configuration based on a rack-mountable 1U server with web-based
access (for the mobile tailgatER display). Understanding the needs of the user community from
the initial LI/NYC implementation, along with integrating with the ASFHS command center
resources (large screens, policy rooms, etc.) has established effective baseline delivery methods
that can be readily replicated for other cities and regions.
RCOP’s underlying fourDscape® technology provides extensive interactive visualization and
data integration capabilities within an architecture that provides significant flexibility in
delivering specific networked services at many different levels. This readily translates to a wide range of delivery methods, from supporting multi-screen video walls in command centers, down
to two-way interaction with mobile handheld devices, smartphones, etc., all coming together to
build an effective Common Operating Picture, and sharing the information where needed.
The ASFHS facility provides the resources to implement a Common Operating Picture of an
incident at the command center level (COIN facility), which can then be managed through the mobile tailgatER technology package to provide the same common view to mobile on-scene vehicles and then distributed by the tailgatER capabilities to the handhelds and smartphones of first responders, while managing the available network bandwidth and controlling the RCOP views
for each level of device/vehicle/facility display capability. RCOP has created a unique test bed
(at the ASFHS) for delivery methods and processes that enable the distribution of a common
operating picture for local / regional command and control for incident management.
The following delivery methods and deployment processes are being implemented for the
Applied Science Foundation for Homeland Security (ASFHS) at the Morrelly Homeland
Security Center (in Bethpage, NY), utilizing the Regional COP (RCOP) for the L.I./NYC area,
and are repeatable elements in producing a Common Operating Picture for any city or region or
state. By documenting these specific methods while producing the initial RCOP demonstration
and deployment at the ASFHS, significant savings of time and dollars should be realized when
re-producing these RCOP capabilities for other city/regional/state command centers.
 Modular RCOP Architecture
 ASFHS Command Center & BaseStation
 Ruggedized Laptops, Handhelds and Smartphones
 Remote Browsers (vs. tailgatERs)
 tailgatER Mobile Technology
A. Modular RCOP Architecture
The RCOP architecture is a
fourDscape®-based system that consists
of many interconnected layers, modules,
engines, and other components to create
an Automated Situation Awareness
solution for a Regional Common Operating Picture (see architecture diagram). Layer 1 begins with raw data from many diverse sensor sources, and
as it moves up through the layers more
information is added (i.e., through
analytics, databases, etc.) and it becomes
correlated geospatially with other
data/sensors into a 4D Portal, which is
then served out to the Layer 8 visual
browser. The interface between each
Layer is data-driven, and intended to be
distributed across accessible network
architectures that will support the
required data flow. This provides a
modular, flexible deployment method
that can leverage the available
networked computing resources of different installations, utilizing various delivery
methods for RCOP portals accessible through multiple fourDscape® browsers. Through
its basic service-oriented distributed architecture, fourDscape®-based RCOP systems can
deploy modularly across many different levels of hardware capabilities, linked through
different levels of network connectivity to create many diverse operational environments.
This modular, networked, fourDscape®-based RCOP system can be leveraged and
readily adapted to support many different delivery methods and processes, while
integrating with many diverse systems that participate in the common operating
environment. The following sections describe the specifics of how this architecture is
utilized for delivering RCOP systems from multi-screen command centers, serving
mobile tailgatER units and first responder handheld devices and smartphones, providing
various levels of access to the interactive RCOP virtual scenes through a variety of RCOP
operational deployments that can be used in day-to-day operations.
B. ASFHS Command Center & BaseStation
The ASFHS Command Center (which also serves as the BaseStation for the mobile
tailgatER technology) represents one of the primary delivery methods for the
fourDscape®-based RCOP capability. The ASFHS Command Center / BaseStation
provides virtual command and control capabilities utilizing a large multi-screen display
wall (approx. 4ft.x10ft.) configured for multi-user interaction with a fourDscape®-based
RCOP environment, and providing visual elements for realtime distribution to the
deployed mobile tailgatER technology.
ASFHS COMMAND CENTER FEATURES
 large multi-screen display wall (approx. 4ft. x 10ft.) configured by stacking six (6)
46” LCD displays together into one display wall for multiple users interacting with
imagery, video, text, etc. The wall content can also be repeated on a large-screen projection display, or on multiple individual large-screen LCDs and dedicated desktop displays around an analyst table.
 powerful multimedia, multi-cpu, multi-graphics computer system to produce and
manage the RCOP display on the multi-channel wall.
 fourDscape®-based visualization providing virtual command and control capabilities,
interacting with the realtime RCOP views, and distributing views for each level of
connected device/vehicle/facility display.
 Data Center with twelve (12) HP ProLiant servers that facilitate access to all RCOP
services (i.e., fourDscape® engines), providing remote tailgatER views for all
interactive portal contents (including imagery, 3D models, facility databases and live
sensor feeds).
 Interfaced to available broadband communications networks (i.e., fiber / satellite) at
the command center facility to stream the RCOP views to/from the tailgatER mobile
technology.
Note that this represents a very high-end deployment of the RCOP capabilities,
leveraging the existing facilities at the ASFHS which are typical of a large command
center environment. The minimum specs that are recommended for this type of
deployment are described below:
RCOP COMMAND CENTER -- SYSTEM RECOMMENDATIONS
o A Command Center BaseStation to feed a video wall (multiple monitors)
– This unit will resemble the system running in the ASFHS Command Center
that feeds the video wall at the Morrelly Homeland Security Center, using a
high-end Stratosphere workstation (from Digital Tigers; see specs at the
website http://www.digitaltigers.com/stratosphere-elite.asp)
This unit will serve two purposes:
1. Installation of the tailgatER mobile technology software on this machine will
allow connection back to the ASFHS command center at the Morrelly Homeland
Security Center.
2. The Command Center version of fourDscape® will be installed to build a common
operating picture for a demonstration location for any area, and connect to remote
tailgatER units (could be a laptop or a fully equipped vehicle)
Recommend purchasing one of these Stratosphere servers with an upgraded graphics
card that will push out the RCOP views to multiple monitors, serve out to other
remote tailgatER units, and connect back to the ASFHS command center for
demonstration purposes.
Since this unit will be used as a command center, the recommendation is for a good processor, a 64-bit operating system, and a minimum of 8 GB of RAM. For pushing out to a video wall, the key is getting a desktop that provides enough PCI-E slots to utilize multiple graphics cards (the ASFHS command center utilizes three (3) nVidia Quadro FX 3800 graphics cards to push out to the video wall). The server should
have enough expansion slots to allow for video cards to support as many monitors as
required. For initial installation, one video card with multiple outputs can push the
video out to two monitors.
Note: The nVidia Quadro FX 3800 graphics cards used in the ASFHS Command Center have 3 outputs (2 DisplayPorts and 1 DVI), but only two can be used at any given time. The ASFHS installation uses these 3 cards, with 2 outputs from each card, to run 6 screens. These cards also support 1920x1080 resolution to take full advantage of the HD TV screens. Other cards may support different outputs or resolutions; refer to your displays to understand your video card needs.
o A portable demonstration laptop (a single unit) – This unit will run the mobile tailgatER technology and will connect back to any Command Center.
This laptop will be able to optionally connect to a touch screen monitor, or
any other display device, such as a projector. The fourDscape software will
run on most laptops, with these minimum requirements:
 Processor: Dual Core
 RAM: 2 GB of RAM
 Graphics Card: Separate graphics card with dedicated memory (for example: nVidia Quadro FX 1800). Check the resolution of the touchscreen and compare it to the supported resolutions of the graphics card in the laptop to obtain optimal resolutions (ideally, utilize HD outputs from the card to get HD video on the monitor).
 Disk Space: 30 GB of free space
The tailgatER Software can be deployed with a variety of different hardware
options, but minimally a computer with the above specifications and a wireless
connection (WiFi / 3G connection) will be needed to deploy a mobile asset.
Note that the basic RCOP fourDscape®-based system does not necessarily require this
full level of equipment / facility to be deployed – an RCOP command center can also
readily be deployed on a standalone desktop workstation (or multi-cpu laptop) to provide
the basic services for a single-user command center / tailgatER basestation.
C. Ruggedized Laptops, Handhelds & Smartphones
For portability, the fourDscape®-based RCOP system can run on standard laptop
configurations (i.e., commercial demonstrations are provided on a daily basis using a
standard HP 12” touch-screen tablet PC with an Intel i5 CPU), but for the first
responder market has been deployed and demonstrated on a ruggedized tablet laptop (the
xTablet C1200 from MobileDemand) with similar capabilities:
The xTablet C1200 is WLAN and WWAN
compatible, offers optional Bluetooth, optional Gobi
2000 radio for 3G communication and GPS. Security
features include TPM 1.2 technology, a fingerprint
scanner and BIOS administrator password / boot
password. The rugged xTablet C1200 with
Microsoft® Windows 7 Professional is powered by
the high-performance Intel® Core™ i5
520UM processor with Turbo Boost up to 1.86 GHz
and Intel® HM55 chipset with 2GB RAM standard
and with up to 8GB of memory.
Dimensions: 12.9" W x 10.9" H x 1.59" D (328 mm W x 277 mm H x 40.8 mm D)
Display: 12.1" (307.34 mm) diagonal widescreen
Weight: 5.5 lb. (2.5 kg) – base configuration
Sealing: 100 C.C. water drop on all C cover areas in 2-5 seconds w/ system operational
Drop / Shock: Lightweight magnesium alloy for strength; MIL-STD 810G, 516.6 IV: 26 repeated drops to plywood over concrete from 48 inches
Operating Temp: -4 F to +122 F (-20 C to +50 C)
Storage Temp: -40 F to +167 F (-40 C to +75 C)
Humidity: 20% - 90% RH
Processor: Intel® i5 560 UM, 1.33 GHz (1.866 GHz max. turbo frequency), 3MB L2 Cache, DMI
System Memory: Dual Channel DDR-3 800/1333, 8GB max.
Hard Drive: Removable HDD and SSD modules; 64-256 GB SSD; external data rate of 3.0 Gbps
Operating System: Windows® 7, 32-bit or 64-bit
Display Panel: 1280(W) x 800(H) (WXGA) color TFT LED backlight; touch screen standard; digitizer optional; optional resistive touch panel; sunlight-readable LCD optional
Keyboard: 86 keys with Windows key; key pitch: 19mm; key travel: 2.5mm
System Expansion: SD Card slot (supports SDHC); Express 34 card slot; RJ-45 10BaseT 10/100/1000 Ethernet; USB 2.0; serial port optional (15-pin D-SUB)
I/O Board Configuration: Standard: USB (1), Ethernet, DC-in, e-SATA/USB 2.0 combo (1), 15-pin D-sub VGA, audio, docking connector; other I/O configurations: see brochure
Battery System*: Std. capacity Li-Ion 6-cell; total: 5200mAH, 3S2P, 11.1V, 57Wh; battery operation: 4 hours minimum; second battery: Polymer 3-cell, total: 4000mAH, 3S1P (*hot-swappable w/ 2nd battery attached)
LED: Status indication LEDs for power state, battery state, and keyboard functions
Audio: High definition audio; stereo with two 1.5W speakers; analog microphone
Camera: 1.3 megapixel integrated, color
Wireless: Intel® WiFi Link 1000 (1x2); Mini Express card slot; Bluetooth v2.1 + EDR (option)
Open Slot WWAN: Express Card slot open for various service provider wireless cards; protective cover maintains seal/ruggedness
Global Positioning System (GPS): Optional
Desktop Cradle: USB (4), Serial, Ethernet (RJ-45), VGA, 2-slot battery charging
For smaller handheld devices, the fourDscape®-based RCOP or tailgatER system has
also been deployed on the ruggedized xTablet8700 (an 8” handheld unit from
MobileDemand) that can communicate back to a command center deployment either
directly through 4G / WiFi (running as a standalone tailgatER or remote browser),
providing a live camera feed and GPS location; or can be tethered wirelessly to a mobile
vehicle or mobile phone that can then provide the connectivity (4G / satellite / etc.) back
to the command center / basestation.
Smartphones can also be enabled to contribute to the RCOP views, acting as mobile
sensors that can provide live camera streams and GPS locations, or accept live camera
streams pushed out from a tailgatER or command center basestation. This is done by
simply installing a smartphone app (see the Android installation procedure below) that is
registered with the command center, and activated by the user.
SMARTPHONE APP (Android) for CAMERA / GPS
 There are 3 projects in Eclipse: VCORE, VCOREtab, and PushGPS. PushGPS is the GPS streaming app, VCORE is the camera streaming app, and VCOREtab combines GPS and camera streaming in one application, separated by tabs.
1. Installation from the Mac:
- Open up Eclipse
- Open a project
- Plug in the Android phone; if asked, set it up as a disk drive, and ignore network setup
- On the Android phone -> Settings -> Applications -> Allow installation of 3rd party apps
[->Development -> Enable Debugging]
- From Mac -> Applications->android-mac-sdk...->tools->ddms
- DDMS is the debugger/logger tool. Under devices, the phone should show up (it will be
a bunch of letters and numbers like HWA342W6FJW3…..). Click it to show the logs
- From eclipse -> click run and a window should pop up to allow you to choose the
device (if it automatically loads the emulator, go to run configuration and click
manual to allow you to choose which device to install to)
- On the phone, the app should pop up eventually after the installation is done
2. Setting up the SmartPhone App
- When you open the app, it will automatically start listening for GPS streams but won’t process them until you initialize (make sure location and internet are enabled on the phone; the app checks whether they are enabled as soon as it starts)
- Enter the ID, IP, and Port and click initialize
- It should send out the GPS streams at this point (view the logger in case there are
any errors).
- The connection settings are registered once the GPS is enabled, and the camera gets its connection settings from the GPS tab, so be sure to turn on the GPS before enabling the camera stream.
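For bench testing without a handset, a small harness can emulate the kind of position stream the PushGPS app sends once it has been initialized with an ID, IP, and Port. The sketch below is hypothetical: the app's actual on-the-wire message format is not documented in this report, so the UDP transport and record layout shown here are assumptions (port 4096 is the GeoTrackingEngine port noted in the next section; the address is a placeholder).

    # Hypothetical bench-test harness that emulates a PushGPS-style position stream.
    # The transport (UDP) and message layout are assumptions, not the app's real format.
    import json, socket, time

    def push_gps(device_id="TEST01", host="192.0.2.10", port=4096, fixes=None):
        """Send a sequence of (lat, lon) fixes, one per second, to host:port."""
        fixes = fixes or [(40.744, -73.486), (40.745, -73.487), (40.746, -73.488)]
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for lat, lon in fixes:
            msg = json.dumps({"id": device_id, "lat": lat, "lon": lon, "ts": time.time()})
            sock.sendto(msg.encode("utf-8"), (host, port))
            time.sleep(1.0)
        sock.close()

    if __name__ == "__main__":
        push_gps()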
D. Remote Browsers (vs. tailgatERs)
The fourDscape®-based RCOP Remote Browser capability provides RCOP viewers the
ability to operate effectively as a remote tailgatER machine (registered with the
basestation) without ANY local data, except for one simple configuration file identifying
the URL for the remote portal. The main difference between the tailgatERs and Remote
Browsers is that tailgatERs have all the models / imagery locally on the tailgatER
machines, while the Remote Browsers get everything from Remote Servers operating at
the ASFHS Command Center. For example:
1. MOBILE UNIT OPERATING as a TAILGATER
 The first server that any tailgatER would access is the fourDserver on port 4040. This
is necessary in order to tell the Command Center and the other connected tailgatERs
about this tailgatER and tell this tailgatER about the Command Center and the other
connected tailgatERs. Now this newly connected tailgatER has the capability to receive
GPS signals, BNN reports, and weather alerts. It also has the capability to send camera streams to phones, the Command Center, and the other tailgatERs.
 The next server that any tailgatER would access is the SMEengine on port 4090. This
will give the tailgatER access to all the cameras that the Command Center can see. It will
also give the tailgatER access to the weather buoys, airport weather, and airport delays.
Beneath the SMEengine there are many modules that access specific types of cameras
and specific types of sensors. We have 37 sensors and 570 cameras from various sources.
 Any tailgatER can access PostgresModule on port 4020 if it needs to see the Bethpage
High School building imagery. An ImageServer also feeds the Bethpage High School
imagery through the PostgresModule to the tailgatER.
 The GeoTrackingEngine operates on port 4096 for tailgatERs to send their own GPS
locations to the Command Center. When a tailgatER or a phone does this, the location is
shown to both the tailgatERs and the Command Center.
2. MOBILE UNIT OPERATING as a REMOTE BROWSER
The RCOP Remote Browser accesses a series of HP ProLiant servers at the ASFHS
Command Center through Global IP addresses. Basically, the only things that reside on the Remote Browser machines are the link to the Remote Server and the fourDscape® distribution.
 The first server that any Remote Browser would access is the fourDserver on port
4001. It first tells the Remote Browser about the portal and then the Remote Browser
requests all the models and GIS streets through the fourDserver since none of these reside
on the Remote Browser machines.
 The Remote Browser can access PostgresModule on port 4002 when it needs to see
the Bethpage High School building imagery. An ImageServer also feeds the Bethpage
High School imagery through the PostgresModule to the Remote Browser.
 The next server that any Remote Browser would access is the SMEengine on port
4003. This will give the Remote Browser access to all the cameras. It will also give the
Remote Browser access to the weather buoys, airport weather, and airport delays.
Beneath the SMEengine there are many modules that access specific types of cameras
and specific types of sensors. The ASFHS Command Center (L.I./NYC portal) has 37
sensors and 570 cameras from various sources.
 The GISServerModule on port 4004 can send GIS parcel information for New York
City and Long Island on demand to the Remote Browser.
 The Imagery Server on ports 4020-4023 and port 4024 provides the JPIPServerModule(s) that serve .jp2 ground imagery of New York City and Long Island
for the Remote Browsers. This is done in near-real-time as the Remote Browser user
navigates through the scene, viewing different locations at varying resolutions, and can
operate effectively over a 4G network connection.
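Pulled together from the two lists above, the sketch below records the service/port mapping for the two mobile-unit modes as a small Python structure and performs a simple reachability check before a unit deploys. The port numbers come from the text above; the Command Center hostname is a placeholder, and this utility is illustrative rather than part of the delivered software.

    # Service/port map for the two mobile-unit modes described above, with a simple
    # pre-deployment reachability check. The hostname is a placeholder; the ports
    # are those listed in the text.
    import socket

    COMMAND_CENTER = "rcop.example.org"   # placeholder for the ASFHS Global IP / hostname

    SERVICES = {
        "tailgatER": {
            "fourDserver": 4040,
            "SMEengine": 4090,
            "PostgresModule": 4020,
            "GeoTrackingEngine": 4096,
        },
        "Remote Browser": {
            "fourDserver": 4001,
            "PostgresModule": 4002,
            "SMEengine": 4003,
            "GISServerModule": 4004,
            "JPIPServerModule(s)": 4024,   # imagery is also served on ports 4020-4023
        },
    }

    def check(mode):
        """Report whether each Command Center service for the given mode accepts a TCP connection."""
        for name, port in SERVICES[mode].items():
            try:
                with socket.create_connection((COMMAND_CENTER, port), timeout=3):
                    print(f"{mode}: {name} ({port}) reachable")
            except OSError:
                print(f"{mode}: {name} ({port}) NOT reachable")

    if __name__ == "__main__":
        check("Remote Browser")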
E. tailgatER Mobile Technology
An important feature of the ASFHS RCOP implementation is the tailgatER Mobile
Technology package, designed to produce an effective mobile communications node that
brings the resources of the ASFHS command center on-site to any incident location, and
enhances the RCOP with live information directly from the scene. The tailgatER Mobile
Technology deployment seamlessly communicates with the tailgatER BaseStation command
& control capabilities to provide an integrated system that extends this capability out into the
field, facilitating the sharing of regional information datasets and distribution of realtime
surveillance information across the first-responder community. tailgatER is a flexible
technology package based on COTS resources that can serve as a remote view of regional
command center operations streamed out directly to an incident location. Based on patented
fourDscape®-based server/browser visualization technology, tailgatER provides intuitive
interactive capability for manipulating views of a shared virtual environment by incident
commanders, integrating local sensor assets available at the scene and distributing a scaled-down incident view to first responder hand-held devices.
tailgatER capabilities can go mobile as simply as configuring a laptop, or can be added to existing mobile vehicles. The tailgatER mobile environment deployed at the ASFHS is
configured into a portable custom rack that integrates all display and communications gear,
and can be strapped down in the cargo space of a standard SUV or Van to provide intuitive
interactive capability for view manipulation of the RCOP shared virtual environment by local
incident commanders (shown below).
FEATURES
 A fully-enclosed, standalone, high-brightness, full-HD 46” LCD display unit
deployed in a mobile environment (i.e., the back of an SUV or Van) that can be
viewed by multiple people at the same time.
 This LCD display cell has integrated touchscreen capability for intuitive control of
the interactive RCOP view, selecting elements for pushing out to local multimedia
handheld devices over a local wireless LAN, and selecting local sensor feeds to be
fed back to the tailgatER Basestation (at the ASFHS command center).
 A powerful multi-cpu, high-end graphics computer deployed in the custom tailgatER
rack to produce and manage the tailgatER touchscreen display, integrating the
fourDscape®-based RCOP visualization capabilities.
 Interfaces to managed broadband communications networks (i.e., via 3G cellular
EVDO and 4G LTE, or available satellite) to acquire and send realtime data streams
to/from the tailgatER Basestation at the ASFHS.
 Local wireless LAN network (local WiFi hotspot) to interface mobile sensors and to
distribute locally-selected sensor elements (using the tailgatER display) to hand-held
devices within range of the tailgatER vehicle.
 Interface to portable electronic devices (with built-in WiFi) for scaled-down views on
a TFT LCD color display of distributed RCOP elements from the tailgatER
interactive display.
 Additional capability for built-in GPS locators or separate handheld or embedded
devices for locally tracking first responder assets, integrated with the tailgatER
display.
 Locally deployed high-resolution (640x480) wireless IP cameras (with PTZ control
from the tailgatER display) providing video feeds back to the tailgatER BaseStation
and/or to local hand-held devices.
 Portable power solutions (low-noise generator) providing power distribution to the
tailgatER display, computer and network equipment in the custom tailgatER rack
(and any available external satellite communications gear), with a backup portable
generator to ensure continuity of power and minimize equipment failures.
tailgatER provides an RCOP delivery method that can be retrofitted to any mobile vehicle, or
simply deployed on a standard laptop to provide extremely portable methods for delivering
RCOP capabilities to the field, capturing additional sensor data at the incident site, and
supporting locally-networked first responder devices.
TAILGATER RDU (Remote Demonstration Unit)
A tailgatER Remote Demonstration Unit (RDU) delivered to DHS S&T (pictured below) was
configured as a fourDscape®-based “Remote Browser” on a ruggedized, multi-cpu
MobileDemand laptop/tablet unit (with touchscreen display) to connect back to the ASFHS
BaseStation for ‘live’ demonstrations of the tailgatER mobile capabilities. This can also be
connected to a large 46” touchscreen display and demonstrated to larger audiences (for
conferences, etc.). Additionally, this same fourDscape®-based “Remote Browser” was
configured on a ruggedized 8” MobileDemand unit, with built-in WiFi, GPS and camera for
handheld demonstrations of the RCOP capabilities (through tailgatER remote connections to the
ASFHS Command Center / BaseStation).
[APPENDIX F]
Long Island Forum for Technology
Integrated Command and Control Solution
For
First Responder and Facility Asset Management
Regional Common Operating Picture (RCOP)
fourDscape® User Guide
510 Grumman Road West, Bethpage NY 11714
LIFT: 631.969.3700 Office 631.846.2789 Fax
VCORE: 516-513-0030 Office 516-513-0027 Fax
www.vcoresolutions.com
Table of Contents
1.0 Introduction
2.0 Start Up
3.0 Controls
3.1 Primary Mouse Mode Controls
3.2 HotKeys
4.0 RCOP Portal Interface
4.1 Sensor Menu
4.2 Camera Panel
4.3 RCOP Portal Scene
5.0 Contact Information
fourDscape® RCOP User’s Guide
1.0 Introduction
Balfour’s patented fourDscape® technology enables automated situational awareness for
human interaction, analysis and response. It creates a single, interactive Common
Operating Picture through the integration of geo-spatial data and real time multi-media
surveillance sensor data feeds. The system architecture coordinates and accumulates live
sensor data, visually represented in a fourDscape® browser, as a temporally-spatially
correlated set of dynamic objects in a 4D scene. This scalable, network-based architecture
is capable of managing large arrays of surveillance sensor suites. The straightforward,
visual interface delivers full contextual and interactive environmental preparedness and
control to all users.
The fourDscape®-based interactive 4Dportal currently includes:
 Imagery – high-resolution aerial orthogonal imagery of the surrounding area
 3D Building Model – interactive 3D model of all floors of the building (built from CAD drawings)
Figure 1: fourDscape® is showing a 3D building model (right image) with real time camera feeds (left images).
The orange lines represent the location of the camera feeds and the orange cones illustrate their field of view.
 Surveillance Camera Interface – integrates new and legacy camera systems and displays them in the virtual portal. Features include auto-selection and manual selection of camera feeds.
 Camera Control – built-in capability to pan/tilt/zoom any PTZ-capable camera
Figure 2: Example of a camera feed that
has pan and zoom capabilities.
 Sensor Visualization – visual capability for any sensor feeds
Figure 4: Illustration of weather module sensor feed.
 Visual Alarms – visual capability for alarms/SOPs (ready to go when SOPs are established)
 Automated Alerts – capability in SOPs to automatically send out text/email alerts
2.0 Start Up
Double-click on the fourDscape® launcher icon located on the desktop. This icon will
launch the RCOP fourDscape® browser into the portal, and will load the various modules
used to display live imagery and data sets.
The RCOP fourDscape® browser opens into a new window. As the RCOP portal is
loading, the window will display the fourDscape® logo. Afterwards, the building model,
floor plans, aerial imagery, and cameras will come into view.
3.0 Controls
The RCOP fourDscape® browser is primarily controlled with the user’s mouse. A three-button mouse is required for navigation.
3.1 Primary Mouse Mode Controls
3.1.1 TrackBall - The track ball mode will provide the following controls for the
mouse:
a. The Left Mouse Button – The Left button allows the user to click on a
location and drag the screen around by that single point. The entire scene will move relative to the clicked point, following the movement of the mouse.
b. The Center Mouse Button – Clicking and holding the center mouse
button allows the user to move the portal scene in the X and Y
directions.
c. The Right Mouse Button – The Right Mouse button moves the scene in the
Z direction. By holding the right mouse button and moving the mouse up
and down, the user can zoom in and out of the scene.
3.1.2 Fly – The fly mode is similar to the movements of an airplane over the city. It
allows the user to move backward and forward in any direction.
a. Looking Around – Hold down the middle mouse button while moving
the mouse to look around. This can move the view up, down, and
around while remaining stationary.
b. Moving Forward – While holding down the middle mouse button, press
the left mouse button to accelerate forward. Move the mouse around to
change direction. The longer the left mouse button is held down, the
faster the view will move forward. Release the left mouse button while
still keeping the middle mouse button held down to stop accelerating.
Release the middle mouse button to stop.
c. Moving Backward – While holding down the middle mouse button,
press the right mouse button to accelerate backward. Move the mouse
around to change direction. The longer the right mouse button is held
down, the faster the view will move backward. Release the right mouse
button while still keeping the middle mouse button held down to stop
accelerating. Release the middle mouse button to stop.
3.1.3 Hover – The hover mode works similarly to the fly mode, and uses the same
controls. Hover mode, however, always keeps the view parallel to the ground.
3.1.4 Oblique – The oblique mode sets the camera at a fixed angle. Holding the middle mouse button while moving the mouse turns and moves the view. Using this mode, the camera angle cannot be changed. Oblique mode is used to
provide a “fly by” view of a portal scene.
3.2 HotKeys
The RCOP fourDscape® Portal has hotkeys built into the browser to provide quick access to important features of the portal. The following is a list of the important HotKeys:
3.2.1 Number Key Mouse Modes:
a. Press “1” – Trackball
b. Press “2” – Fly
c. Press “3” – Hover
d. Press “4” – Oblique
3.2.2 Additional Hotkeys:
a. F1 – Toggles Show/Hide the control panel bar and all open control
windows.
b. Space Bar – re-centers the 3D viewpoint of the portal to its default
location.
c. F – Toggles between windows mode and full screen.
d. F3 – Toggles camera modes between best-fit and manual selection.
4.0 RCOP Portal Interface
The RCOP fourDscape® browser seamlessly integrates camera feeds, sensor inputs, and
building controls into a detailed, four-dimensional graphic scene. The RCOP fourDscape®
browser extrudes building floor plans into 3D graphic models, and surrounds the model
with accurate high-resolution imagery to provide a realistic view of the building and its
surrounding environment. The fourth dimension, time, is provided through the
accumulation of live camera feeds and sensor connections that show real-time sensor levels
and status updates. This information is easily accessible through the use of the smooth and
intuitive mouse controls. The user is provided with various options for customizing the
portal viewing experience, including lighting, time, transparency, extrusion, and sensor
display controls.
RCOP Portal Overview
Figure 3: Illustration of the RCOP portal view, showing the Mouse Controls, the Camera Panel, and the Sensor Menu.
a. Sensor Menu – The sensor controls are located in a menu on the right hand side of
the portal screen. These icons provide control of the various sensor displays in the
portal scene.
b. Camera Panel – The Camera Panel displays the live feeds from the cameras in the
portal scene.
c. Portal Scene – This window is the main display area of the RCOP fourDscape®
portal. The area map, the floor plans, extruded floors, cameras, and sensors are all
graphically displayed in this window.
4.1 Sensor Menu – The icons on the right side of the portal screen identify the various
sensors that are being tracked by the fourDscape® System. Moving the mouse over
each of these icons displays their locations in the fourDscape® scene by highlighting
them with colored indicators. By clicking on each of the icons, the colored indicators
will stay lit so the user can navigate through the scene while still seeing the sensor
locations. The type of sensor can be determined by the highlighted color displayed in
the scene, as well as the shape of the sensor indicators.
Building Transparency – Building transparency controls the opacity of
the external building lines in the portal scene. By hovering over this icon
and scrolling with the mouse wheel, the user can set the transparency of
the exterior building walls.
Building Stretch – Building stretch allows the user to expand and
collapse the building floor plans. Hovering over this icon and scrolling
with the mouse wheel will change the distance between the floors,
moving them closer to or farther from each other. This allows the user to view the individual floor plans and sensors of each floor more easily.
Cameras – This icon allows the user to highlight the location of the
cameras in the portal scene and display their line of sight. Hovering over
this icon will quickly display the orange camera cones in the portal
while the cursor is on the icon. Once the cursor is removed from the
icon, the indicators will disappear. By clicking this icon, the camera
indicators in the portal will remain visible until the icon is clicked again.
Fire Hydrants and Stations – This icon displays the locations of various fire hydrants and stations, represented by red circles, while the cursor is on the icon. Large circles are Fire Department Stations, while small circles are fire hydrants. By clicking the icon, the fire hydrant and station indicators will remain visible until the icon is clicked again.
Hospitals – Clicking this icon displays the locations of hospitals, which are represented by blue circles. The indicators will
remain visible until the icon is clicked again.
Weather Doppler Overlays – This icon gives the user the ability to
view live weather feeds throughout the portal. There are several buoys stationed to gather data, and their readings are combined to form these weather feeds. Hovering over the icon will display the weather. Clicking it will
keep the feeds visible until the icon is clicked again.
Slosh Zone Overlays – This feature displays possible hurricane-affected areas. The various colors represent the level of intensity due to
the storm. Areas closest to the southern shore are affected the most.
Clicking on it will keep the overlay visible until the icon is clicked
again.
Outlines and Labels – Clicking on this icon will display county and
city/town borders which are represented by white lines.
Streets – Clicking on this icon will highlight streets, highways, bridges,
etc., which are represented by green lines.
Command Center – This icon indicates that your tailgatER software is
connected to the command center, and should be receiving live sensor
feeds directly from the command center.
PDA – This icon indicates that a PDA has been registered with the
fourDscape software, and is ready to receive live streams.
4.2 Camera Panel – The Camera Panel, located at the left of the portal, displays four live
video feeds from the cameras in the portal scene. The RCOP fourDscape® browser
automatically updates the camera feeds based on the user’s movement throughout the
portal by displaying the camera feeds that are closest to the center of the user’s
screen. An orange line connects the live feeds with the location of the corresponding
cameras to give the user an accurate visual of what the camera is viewing in relation
to the rest of the building.
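As a hedged illustration of this feed-selection behavior, the following sketch picks the four cameras whose projected screen positions are nearest the center of the view. The data structure and function names are assumptions for this example; the actual fourDscape® selection logic is not documented in this report.

# Illustrative sketch (not the fourDscape(R) implementation) of how the
# Camera Panel could pick the four feeds nearest the center of the view.
# Cameras are assumed to have already been projected to screen coordinates.

import math

def nearest_cameras(cameras, screen_center, count=4):
    """Return the `count` cameras whose projected positions are closest
    to the center of the user's screen."""
    cx, cy = screen_center

    def distance(cam):
        x, y = cam["screen_pos"]
        return math.hypot(x - cx, y - cy)

    return sorted(cameras, key=distance)[:count]

if __name__ == "__main__":
    cams = [
        {"name": "Lobby",   "screen_pos": (400, 310)},
        {"name": "Loading", "screen_pos": (1200, 90)},
        {"name": "Roof",    "screen_pos": (640, 360)},
        {"name": "Garage",  "screen_pos": (100, 700)},
        {"name": "Atrium",  "screen_pos": (700, 400)},
    ]
    for cam in nearest_cameras(cams, screen_center=(640, 360)):
        print(cam["name"])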
Cameras that have Pan and Tilt features are remotely controllable through the portal.
A camera view with green arrows around the edges indicates that the user can control
the view of the camera. Click the mouse button on the camera view to re-center the
camera on that position. For cameras that have zooming capabilities, the user can use
the mouse wheel to zoom in and out.
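The following sketch illustrates one plausible way the click-to-re-center and scroll-to-zoom interaction could be mapped onto relative pan/tilt/zoom commands. The command format and scaling factors are assumptions for this example; actual PTZ cameras expose vendor-specific control protocols.

# Hedged sketch of the pan/tilt/zoom interaction described above.
# Command names and scaling factors are illustrative assumptions.

class PtzCameraControl:
    def __init__(self, send_command, degrees_per_pixel=0.1):
        self.send_command = send_command          # callable that delivers a PTZ command
        self.degrees_per_pixel = degrees_per_pixel

    def recenter_on_click(self, click_x, click_y, view_width, view_height):
        # The offset of the click from the center of the camera view becomes
        # a relative pan/tilt move so the clicked point ends up centered.
        pan = (click_x - view_width / 2) * self.degrees_per_pixel
        tilt = (view_height / 2 - click_y) * self.degrees_per_pixel
        self.send_command({"pan_relative": pan, "tilt_relative": tilt})

    def zoom_on_scroll(self, wheel_delta):
        # Each wheel click requests a small relative zoom step.
        self.send_command({"zoom_relative": 0.1 * wheel_delta})

if __name__ == "__main__":
    ptz = PtzCameraControl(send_command=print)
    ptz.recenter_on_click(500, 120, view_width=640, view_height=480)
    ptz.zoom_on_scroll(+2)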
Additionally, the user can double-click each video in the camera panel to increase its size: the first double-click enlarges the feed, a second makes it full screen within the portal, and a third returns it to its original size.
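A minimal sketch of this three-state size cycle (normal, enlarged, full screen) is shown below; the state names are illustrative only and do not reflect fourDscape® internals.

# Sketch of the three-state size cycle for a camera feed described above.

SIZES = ["normal", "enlarged", "fullscreen"]

def next_size(current):
    """Advance the feed to the next size on each double click."""
    return SIZES[(SIZES.index(current) + 1) % len(SIZES)]

if __name__ == "__main__":
    size = "normal"
    for _ in range(3):
        size = next_size(size)
        print(size)   # enlarged, fullscreen, normal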
4.3 RCOP Portal Scene – The RCOP Portal Scene is the main display window of the
fourDscape® interface. This window displays the area map and imagery, as well as
the building floor plans, sensors, and cameras. The user can navigate through the
portal using the mouse to fly to specific locations around the building.
Using the building stretch tool, the floor plans for the building can be separated, and
each floor’s cameras and sensors can be viewed separately. By hovering the cursor
over the various objects in the portal (i.e., floors, sensors, cameras), the user can view data about the object, including sensor reading levels; camera, sensor, and floor names; and any other pertinent data.
The various sensors and cameras in the portal can be uniquely identified by their
shape and color. The cameras are displayed as orange view cones, with orange lines
connecting them to the live video feeds on the left camera menu.
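For reference, the indicator conventions described in this section can be collected into a simple lookup, as in the illustrative sketch below. The lookup itself is an example constructed from the descriptions above, not an extract from the RCOP configuration.

# Illustrative legend (assumed, not taken from the fourDscape(R) source)
# mapping the indicator colors and shapes described in this section back
# to the kind of asset they mark in the portal scene.

INDICATOR_LEGEND = {
    ("orange", "cone"):         "camera (view cone, linked to live feed)",
    ("red",    "large circle"): "fire department station",
    ("red",    "small circle"): "fire hydrant",
    ("blue",   "circle"):       "hospital",
    ("green",  "line"):         "street / highway / bridge",
    ("white",  "line"):         "county or city/town border",
}

def describe_indicator(color, shape):
    return INDICATOR_LEGEND.get((color, shape), "unknown indicator")

if __name__ == "__main__":
    print(describe_indicator("red", "small circle"))   # fire hydrant
    print(describe_indicator("orange", "cone"))        # camera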
Figure 4: fourDscape® illustrating building stretch mode so that individual floors can be easily
viewed and analyzed.
Figure 5: fourDscape® illustrating street locations (highlighted in green) and displaying the camera
feeds that are closest to the center of the user’s screen
Figure 6: fourDscape® illustrating township and city boundaries (white lines)
Figure 7: fourDscape® illustrating the hurricane-affected areas using various colors based on intensity.
Figure 8: fourDscape® illustrating fire hydrant (red dots) and hospital locations (blue dots)
5.0 Contact Information
Long Island Forum for Technology
Lois Gréaux – Program/Contract Management
510 Grumman Road West, Suite 201
Bethpage, New York 11714
Tel: 631-846-2742
Fax: 631-846-2789
Email: lgreaux@lift.org
VCORE Solutions
Shawn Paul – Project Manager
510 Grumman Road West
Bethpage, NY 11714
Tel: 516-465-0188x7842
Fax: 516-465-0180
Email: shawn.paul@vcoresolutions.com
Rich Balfour – President
510 Grumman Road West
Bethpage, NY 11714
Tel: 516-513-0030
Fax: 516-513-0027
Email: rich.balfour@vcoresolutions.com
[APPENDIX G – National Rollout Plan / RCOP Commercialization]
Regional Common Operating Picture
(RCOP) Rollout Report
The Goal
The Regional Common Operating Picture, developed by Balfour Technologies (Balfour) in partnership
with the Long Island Forum for Technology (LIFT) and the Applied Science Foundation for Homeland
Security (ASFHS), and sponsored by the U.S. Department of Homeland Security Science and
Technology Directorate (DHS S&T), demonstrates a deployable command, control, communications,
computers, and intelligence (C4I) Common Operating Picture (COP). The current pilot that has been
delivered to DHS S&T covers the downstate New York Metropolitan and Long Island Region. The pilot
shows how RCOP effectively manages, correlates and shares information from many diverse systems
and sensor sources, across multiple agencies at local, state & federal levels. The RCOP also includes
private sector entities including schools and campuses, small and large businesses, hospitals, and sports
and other large public gathering facilities. This project applies successful visual/interactive methods for
Automated Situation Awareness (from the DHS S&T Small Business Innovation Research (SBIR) Program) for information sharing to help optimize and speed effective decisions and actions within the
First Responder community. This local RCOP pilot has been developed to become a national footprint
for Situational Awareness and Information Sharing.
The Problem it Solves
The primary issue faced within the First Responder community is that many important, but diverse and disparate, systems are already deployed in the field. These systems provide access to various individual data elements and sensors but, as currently structured, are not capable of sharing information, making it difficult to obtain a robust situational awareness/common operating platform that functions in a uniform fashion. The key to solving this problem is a solution that seamlessly pulls together all of these legacy systems without the need to replace them (which would be too costly).
The Solution
RCOP will augment first responder capability through the timely availability of actionable information, shared by local, state, federal and private sector responders. The RCOP will assist in gathering information from disparate information sources during a crisis or disaster. The demonstration and commercialization of this effort will improve the “ground truth” that is so important to effective management decisions during crisis response.
The primary capabilities of RCOP are three-fold:
• The initial deployment of C4I RCOP capabilities for the New York Metropolitan and Long Island Region at the Applied Science Foundation for Homeland Security, Bethpage, NY (ASFHS), integrates geospatial technologies, physical asset inventories, cyber network analysis, and correlated real-time data sources to provide enhanced situation awareness and information sharing for first responder capabilities, leveraging the existing CONOPS of first responder agencies and developing enhancements to existing CONOPS by working directly with end-users through operational RCOP deployments.
• Integration with an operational C4I COP Base Station at the ASFHS and a mobile tailgatER™ deployment that includes data acquisition and correlation from disparate live and archived data sources, sharing regional information datasets and distributing real-time surveillance information across the first-responder community. This will also provide an operational test bed at the ASFHS for future DHS/First Responder capabilities development and field testing.
• Repeatable methods and processes for C4I COP Imagery, Sensors, Information Networks, Training and Delivery components that can be utilized to reproduce an operational C4I COP deployment for many cities/regions nationwide at the federal, state and local levels.
Transition and Commercialization Plan
• RCOP will accelerate adoption of technology solutions with a distribution paradigm that engages First Responders in the development process, through the unique ASFHS industry/academic/customer environment.
• Rollout to contracted Emergency Response and Law Enforcement agencies at all levels, including public/private schools and campuses, and private sector security/facility/operations for critical infrastructure protection.
• Currently building a partner network of existing technology providers to the target markets, who can provide direct access (and contract vehicles) to an existing customer base that can be enhanced by the RCOP solution.
• Establishing Integrated Product Teams (IPTs) for specific RCOP-based solutions in Security, Education and Persistent Surveillance, and transitioning University R&D to the market.
• Training environments will be established through both University and Center of Excellence partners, and at the ASFHS utilizing the available Lecture Hall facility connected to the Command Center running an operational RCOP solution.
Rollout Plan
• The rollout plan is to expand the capabilities of RCOP to include broadening the demonstration area, capturing and integrating as much regional information from available sources as possible, and adding broad area persistent surveillance.
• In order to ensure that the system can be utilized in any mission, we need to begin information gathering from multiple sources across the New York region and select parts of the country, initially identified as Southern California, Texas, North/South Carolina, Washington DC, and the Northern Border region (initially defined as Michigan to Maine). Ultimately, our objective is to expand the power of RCOP and demonstrate its capability not only to provide situational awareness for local pre-planned or all-hazard events but also to deliver and carry out DHS’ publicized initiative of “enhancing critical tools and institutionalize arrangements for timely access and effective sharing of information and analysis to strengthen baseline capabilities and analytic capacity to operate consistently, rapidly identify and disseminate information, and support and enhance a federal, state and urban area intelligence platform for risk-based, information-driven decision making by homeland security stakeholders”. These stakeholders would include federal, state and local government entities, military, first responders, transportation, critical infrastructure, schools, major corporations, etc.
• In order to provide the information necessary for this program, it will be necessary to capture and integrate as much regional information from as many data sources as possible in the areas we are proposing. These data sources will include base maps and GIS datasets, 3D building models, multimedia surveillance sensors (including cameras), weather data, available informational databases, etc.
• RCOP will also leverage existing DHS technology investments based on the RCOP’s fourDscape® layered and networked modular architecture, which can be readily adapted to support information-sharing frameworks such as vUSA, SIMON and UICDS. The RCOP services/engines are flexible and modular for networked (even cloud) deployments; a minimal adapter sketch illustrating this modularity follows this list.
• For this project we would also like to integrate an advanced imaging architecture into RCOP to provide persistent, broad-area, 24/7 day/night imaging coverage, delivering data to tactical users for forensic review and real-time analysis while simultaneously supporting tactical operations at the mobile tailgatER™ unit and the command & control center.
• Once fully deployed, this program will give first responders and other government entities a common operating environment that will enhance their ability to protect and secure our nation like no other.
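The following is a minimal sketch of the modular adapter pattern referenced in the list above, assuming each information-sharing framework (such as vUSA, SIMON or UICDS) would be wrapped behind a common interface. The class and method names are hypothetical and do not describe the actual RCOP code base.

# Minimal sketch of a framework-adapter layer. Names are hypothetical;
# vUSA, SIMON and UICDS are the frameworks named in this report.

from abc import ABC, abstractmethod

class SharingFrameworkAdapter(ABC):
    """Common face the RCOP services could present to any framework."""

    @abstractmethod
    def publish_incident(self, incident: dict) -> None:
        ...

    @abstractmethod
    def fetch_shared_data(self) -> list:
        ...

class UicdsAdapter(SharingFrameworkAdapter):
    def publish_incident(self, incident: dict) -> None:
        # Translate the RCOP incident record into the framework's schema
        # and hand it to that framework's own publishing mechanism.
        print(f"UICDS <- {incident['id']}")

    def fetch_shared_data(self) -> list:
        return []   # placeholder: would return records shared by partners

if __name__ == "__main__":
    adapters = [UicdsAdapter()]
    for adapter in adapters:
        adapter.publish_incident({"id": "demo-001", "type": "exercise"})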
Supporting Projects and Related Work
Below is a sampling of projects where the technology has been deployed as a comprehensive
situation awareness platform integrating a variety of geo-spatial datasets and 3D building models
(including interior floor plans) with an interactive user interface streaming real-time data from a
variety of live sensor sources (including geo-located video surveillance systems) to produce
comprehensive shared virtual environments for various applications.
The team’s experience with these past and current projects will serve as a powerful foundation for extending the capabilities of the RCOP System, currently deployed and operational at the state-of-the-art command center facility at the Applied Science Foundation for Homeland Security (ASFHS, at the Morrelly Center for Homeland Security in Bethpage, NY). Access to this RCOP demonstration facility is also available through the RCOP Remote Demonstration Unit (RDU) delivered to DHS S&T (pictured to the right), or through laptops configured with an RCOP portal and connected to the ASFHS through broadband internet or 4G-LTE communications.
Operational Deployment for Nassau County Police Department
Project POC: Rich Balfour, President, Balfour Technologies LLC
Contract Responsibilities (current 2011): To provide Nassau County Police with a fourDscape®-based Automated Situation Awareness System, leveraging the RCOP capabilities implemented at the ASFHS facilities as a Nassau County baseline portal. This includes a real-time integration with existing county police systems independently deployed for video surveillance (live camera feeds), vehicle tracking (GPS on police assets), and ShotSpotter (for locating and identifying gunshots). The fully interactive system includes county-wide high-resolution aerial imagery and GIS datasets as a basemap for real-time data visualization in support of various law enforcement activities.
tailgatER™ C4I BaseStation & Mobile Technology Solution
Project POC: Richie Rotanz, Executive Director, Applied Science Foundation for Homeland
Security (at the Morrelly Homeland Security Center)
Contract Responsibilities (2010): To provide a tailgatER™
Mobile Technology deployment that seamlessly communicates
with tailgatER™ BaseStation command & control capabilities
producing an integrated system that extends this C2 capability out
into the field, facilitating the sharing of regional information
datasets and distribution of real-time surveillance information
across the first-responder community. tailgatER™ is a flexible
technology package based on COTS resources that can serve as a
remote view of regional command center operations streamed out
directly to an incident location. Based on patented
server/browser visualization technology, tailgatER™ provides
intuitive interactive capability for manipulating views of a Shared
Virtual Environment (SVE) by incident commanders, integrating
local sensor assets available at the scene and distributing a scaled-down incident view to first responder hand-held devices.
tailgatER™ produces an effective mobile communications node
that brings the resources of the BaseStation command & control
on-site to any incident location, and enhances the BaseStation
view with live information directly from the scene.
U.S. Dept. of Homeland Security Science & Technology Regional Technology Integration
Initiative (Anaheim Convention Center)
Project POC: DHS Science & Technology
Contract Responsibilities (2009): Deployed patented
server/browser visualization technology Common
Operating Picture Command & Control System (C2
COP) providing situation awareness for Security and
Emergency Preparedness & Response at the Anaheim,
CA Convention Center. The C2 system includes a 3D-interactive building model and is remotely integrated through a VPN network interface with existing video surveillance and building HVAC systems, including control of PTZ cameras and air handler units, detection of alarms, and sending of notification messages. Balfour worked together with the Johns Hopkins
Applied Physics Lab and MIT Lincoln Labs on this project team for DHS S&T.
U.S. Dept. of Homeland Security Advanced Research Projects Agency Small Business
Innovation Research (SBIR Phase I/II) Program
Project POC: DHS Science & Technology
Contract Responsibilities (2006-2008): Developed a
deployable Automated Situation Awareness (ASA)
System based on patented server/browser visualization
technology that effectively manages large, distributed
multi-media surveillance sensor suites over large
regions, and delivers to the user a correlated,
integrated, seamless view of live sensors, imagery and
3D building models over extensive areas, significantly
enhancing security, emergency preparedness and
response.
This system was deployed with the Los Angeles
County Sheriff’s Dept, in their mobile command
vehicle, live at the ’09 Tournament of Roses Parade,
visually integrating a local suite of different
networked camera systems and tracking
responder/emergency assets & people during the
parade along Colorado Blvd.
Also implemented for Thomson Reuters security in
their Times Square NYC high-rise building, deploying
a corporate critical infrastructure common operating
picture, integrating a variety of camera systems connected to the interactive building model (with
3D floor plans) through a core video matrix switch driving IP bridging devices. [Note: actual
camera feeds from the private camera network cannot be shown; blacked out in video frames
below.]
NYC Broadband Wireless Public Safety Network Situation Awareness Visualization
Project POC: NYC DOITT (through Northrop Grumman)
Contract Responsibilities (2007): Developed and deployed, at a prototype NYPD/FDNY
command center in Brooklyn, NY, a 4D Portal Common Operating Picture of New York City
operating in fourDscape®, including realistic 3D models of lower Manhattan and parts of
Brooklyn. This was integrated with live video camera systems, with alarms produced from video
analytics, and GPS tracking devices deployed on the NYC Broadband Wireless Public Safety
Network. [Note: NYC 3D-interactive image Not Available for distribution – Proprietary /
Restricted for use by NYC DOITT]