International Journal of Environmental Research and Public Health (Int. J. Environ. Res. Public Health 2020, 17, 1868; doi:10.3390/ijerph17061868)
Technical Note
Digital Reconstitution of Road Traffic Accidents: A Flexible Methodology Relying on UAV Surveying and Complementary Strategies to Support Multiple Scenarios
Luís Pádua 1,2,*, José Sousa 1, Jakub Vanko 1, Jonáš Hruška 1, Telmo Adão 1,2, Emanuel Peres 1,2, António Sousa 1,2 and Joaquim J. Sousa 1,2
1 Engineering Department, School of Science and Technology, University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal; jmsousa@utad.pt (J.S.); vanko.jakub@gmail.com (J.V.); jonash@utad.pt (J.H.); telmoadao@utad.pt (T.A.); eperes@utad.pt (E.P.); amrs@utad.pt (A.S.); jjsousa@utad.pt (J.J.S.)
2 Centre for Robotics in Industry and Intelligent Systems (CRIIS), INESC Technology and Science (INESC-TEC), 4200-465 Porto, Portugal
* Correspondence: luispadua@utad.pt; Tel.: +351-25-9350-762 (ext. 4762)
Received: 30 January 2020; Accepted: 10 March 2020; Published: 13 March 2020
Abstract: The reconstitution of road traffic accident scenes is a contemporary and important issue, addressed by both private and public entities in different countries around the world. However, the task of collecting data on site is not always given the same attention and relevance. Addressing this type of accident scenario requires a balance between two fundamental yet competing concerns: (1) information collection, which is a thorough and lengthy process, and (2) the need to allow traffic to flow again as quickly as possible. This technical note proposes a novel methodology that aims to support road traffic authorities and professionals in collecting data and evidence from motor vehicle collision scenarios by exploring the potential of low-cost, small-sized and lightweight unmanned aerial vehicles (UAVs). A large number of experimental tests and evaluations were conducted in various working conditions and in cooperation with the Portuguese law enforcement authorities responsible for investigating road traffic accidents. The tests support the conclusion that the proposed method meets all the conditions to be adopted in the near future for reconstituting road traffic accidents, and it proved to be faster, more rigorous and safer than the manual methodologies currently used not only in Portugal but also in many countries worldwide.
Keywords: road traffic accidents; reconstitution; unmanned aerial systems; photogrammetry
1. Introduction
Regardless of the many road safety programs already implemented, ongoing or foreseen—e.g., the European Union (EU) is discussing its 5th Road Safety Action Programme for the 2020–2030 decade [1], intended to achieve a 50% reduction in both deaths and serious injuries between 2020 and 2030—the related rates recorded worldwide are still significant. Indeed, the World Health Organisation (WHO) estimates that, annually, road traffic accidents kill around 1.25 million people and injure about 50 million more across the globe. In fact, in 2012, road traffic accidents were the leading cause of death among people aged 15 to 29 [2]. Furthermore, the International Transport Forum (ITF), which gathers information from 40 countries in its 2017 Road Safety Annual Report [3], corroborates the aforementioned casualty numbers.
Besides the social consequences, road traffic accidents also have considerable economic impacts. Wijnen and Stipdonk [4] estimated these impacts as a percentage of countries' gross domestic product (GDP): an average of 2.7% of GDP for those they regarded as high-income countries and 2.2% of GDP for low- and middle-income countries. Examples of the economic costs of road traffic accidents in several countries were presented by the ITF [3]. This enormous economic impact weighs on families, insurance companies (e.g., damages), healthcare systems (both public and private), companies (e.g., production loss) and society as a whole [2,4].
Road safety is usually related to human behaviour, physical infrastructure and vehicles [5,6] (detailed information about road safety factors can be found in Rune and Vaa [7]). Countless media campaigns are run worldwide each year to raise awareness and try to reduce on-road risk behaviours. WHO has even released a mass media campaigns toolkit to help low- and middle-income countries develop their own initiatives [8]. The design and planning of physical infrastructure, as well as vehicles' active and passive safety measures and regular condition checks, also contribute to improving road safety [9,10].
When all else fails and road traffic accidents happen, it is crucial to carry out a crash scene reconstitution with the utmost rigor and objectivity, to determine the causes underlying the event. This information is the basis on which all of the accident's subsequent stages rest: investigation, insurance, legal proceedings and property damage assessment all depend directly on the information gathered at the crash site. This information is used to determine the causes of the event, which can be a valuable contribution in the aftermath, namely in the reduction of road traffic accidents. Indeed, an accurate crash scene reconstitution may allow the identification of design flaws in physical infrastructures and/or risk behaviours, supporting their mitigation through physical interventions or by raising awareness.
Therefore, data collection at the crash site takes on decisive importance [11]. Police forces are usually the first to arrive at a road traffic accident site. Ordinarily, their mission is first of all to assure proper medical assistance to victims (if any) and then to secure the crash scene, preserving both property and evidence alike, followed by traffic re-routing procedures. Afterwards, the investigation procedure starts by identifying and securing evidence, drawing sketches (e.g., vehicle positions and paths, visible traffic signs), taking note of on-site conditions (e.g., visibility), taking measurements and, sometimes, acquiring photographs for the record [12]. Besides the foremost objective of preserving life, restoring normal traffic flow to the affected roads is a prime directive, avoiding yet more social and economic impacts.
In addition to the difficulties connected with the need to collect a large amount of information on site, sometimes measurements taken from specific references cannot be met. For example, regarding distances from the wheels of the vehicles, these should be measured from their centre and not from the edge of the tire; moreover, this must be done from the same location to the reference point. Under adverse weather conditions, there is also the risk of some evidence becoming less visible or even vanishing. Furthermore, if pedestrians are involved in the accident, the information to be gathered increases considerably.
Thus, data collection on site should be a simple, precise and swift procedure. However, it is currently mostly a manual procedure carried out by police forces worldwide. Although there might be slight differences from country to country, the data collection methods used when road traffic accidents occur are very similar. They are based mainly on measurements done manually, using triangulations and parallel lines to produce the crash scene sketch: as such, both the rigor and the precision of a road traffic accident scene sketch depend mostly on the professionalism of the police forces. These difficulties can be overcome using small-sized, lightweight and cost-effective multi-rotor unmanned aerial vehicles (UAVs), capable of operating in fully automatic mode or by manual control, which constitute the technological core of the Unmanned Aerial Systems (UAS)-based photogrammetric methodology proposed for road traffic accident reconstitution. More than providing scientific documentation, this technical note intends to supply road traffic authorities and professionals with a set of procedures compiled in a UAV-based methodology that aims to achieve accurate, efficient and cost-effective digital reconstruction and recording of motor vehicle collision events.
2. Background
Recent photogrammetry methods were boosted by solid and fast-paced developments in digital image processing and computer vision, such as feature extraction algorithms—for example, the scale-invariant feature transform (SIFT) [13]—and structure from motion (SfM) analysis [14]. Due to its capability of generating wide and detailed digital data sets faithfully portraying real-world sites, photogrammetry has been gradually adopted in many areas of application [12]. One of them is the reconstruction and analysis of road traffic accidents, in several projects and countries. A report provided by Arnold et al. [15] explores alternatives to the laborious and time-consuming traditional road traffic accident surveying methods, with close-range photogrammetry highlighted. In spite of the resistance that is still noticed in adopting advanced techniques, many law enforcement entities around the globe admit the benefits of photogrammetry in both ease of use and time savings, which has been leading to progressive changes in road traffic accident surveying operations [16]. Zuraulis et al. [17] showed the relevance of photogrammetric systems for road traffic accident investigation, specifically by using tire yaw marks as evidence. This study suggests that the speed calculation methodology depends on tire curvature conditions, with lower errors for the shortest chord lengths.
Photogrammetric methods demonstrated to be useful for substantial accuracy improvement in speed determination: around 3% error was obtained when using the curvature's shortest chord lengths, which is an acceptable value for forensic practices. However, some disadvantages regarding close-range photogrammetry have been noted [18].
With aerial photogrammetry, namely using Unmanned Aerial Systems (UAS)—which combine a UAV, sensors as payload and a ground station with software for flight management and planning [19]—both the complexity and the laboriousness of road traffic accident surveying operations dropped significantly compared to the previously referred approach relying on multi-positional photographs. This technological advancement turned UAV-based photogrammetry into a desirably precise and efficient tool for business-to-business companies and law enforcement entities dealing with road traffic accidents, and it has also been drawing the attention of researchers. Monadrone [20] used a UAV to acquire images from a simulated accident scene. The Royal Canadian Mounted Police and the Pix4D company cooperated with the goal of comparing terrain surveying methods for road traffic accident investigation—including laser 3D scanning—with UAV-based image acquisition [21]. The reliability of UAV-based photogrammetry over traditional methods and its affordability compared to other precision instruments—e.g., laser scanners—were highlighted, along with other aspects such as viewing angle coverage. A solution proposed by the Analist Group [22] resorts to terrain measurements and a UAV flight to obtain data for mosaicking and 3D model processing, using Pix4Dmapper Pro, both portraying different perspectives of a staged road traffic accident. Subsequently, measurements can be done in computer-aided design (CAD) software, which also supports simulating event sequences. Greg Gravesen, a specialist in road traffic accident reconstitution, uses a fixed-wing UAV to create detailed representations: orthophoto mosaics are the base for the various simulations, and 3D meshes are used to create graphical hypotheses [23]. Crash scenes are then reconstructed and simulated using dedicated software.
Some research has also been done in this area by Su et al. [24], who propose a mapping system that applies UAVs to rapidly set up a scene representation for road traffic accident investigation, as an alternative to the traditional hand-drawn sketches. Diaz-Vilarino et al. [25] identified an unsolved question on runoff evaluation. They carried out a study to determine the influence of the quality of data obtained using UAV-based photogrammetry in the calculation of a mountain road's runoff, when compared to LiDAR data. A method to check the limits of using point clouds obtained from UAV photogrammetry, relying on the D8 (flow direction) algorithm, was proposed. Data acquisition of road traffic accidents comparing traditional methods with advanced techniques, namely using total stations and photographic documentation that can also involve UAV missions, was carried out by Stáňa et al. [26]. Results analysing real occurrences as case studies pointed out both faster processing and quicker road clearance, as well as (almost) error-free measurements, when applying advanced techniques. A Sheriff of Colorado State (USA) testified for a report [27], sharing the same thoughts. X. Liu et al. [28] proposed a framework that integrates UAVs to achieve 2D/3D accident scene reconstructions, highlighting the enhanced capabilities of those platforms to cover more angles without the need to interrupt traffic flow. The results seem to validate this kind of approach to digitally record casualties in road traffic accidents. Concerned with mosaicking production, Y. Liu et al. [29] developed a method that includes image registration and fusion, as well as intelligent distance measurement based on Euclidean distances; a case study was used to demonstrate the whole pipeline. In Stevens Jr et al. [30], UAS were used for real-time monitoring of secondary crashes, including the monitoring of alternate routes. The report highlights UAS capabilities to deliver imagery and sensory data on-the-fly, their safety during missions, as well as payload mobility. Coarse 3D reconstructions were also presented to demonstrate mapping capabilities for future access, but quality measurements were not carried out. Mekker [31] evaluated some consumer-grade UAS for photogrammetric traffic crash scene documentation.
3. Road Traffic Accidents Reconstitution
3.1. Current Approach
The procedures currently used by police forces in road traffic accident reconstitution rely on: a measuring tape and/or measuring wheel; a notepad, to draft the sketch; a pencil; chalk, to mark the pavement; and a camera for photographic documentation. Some constraints of this data collection method can be identified: (i) it involves a significant amount of time spent at the crash site by police officers who could otherwise be engaged in different tasks; (ii) it causes a disruption in regular traffic flow, with consequent economic and social impacts; (iii) it can potentially lead to losing valuable data, due to the lack of detail in the sketch; and (iv) it exposes both police officers and civilians to unnecessary risks during the data collection process.
Figure 1 depicts the current road traffic accident data collection method applied to a simulated accident scenario. For this purpose, a simulation was conducted at the University of Trás-os-Montes e Alto Douro (UTAD), Vila Real, Portugal, with the cooperation of the Portuguese Public Safety Police (PSP) force from Vila Real, to ensure that the data collection process proceeded as if in a real road traffic accident scenario. Measurements taken by PSP officers are in a local Cartesian coordinate system: in this simulation, a lamp pole was used as the reference (x = 0.0 m; y = 0.0 m) and all measurements were referenced in relation to it. Moreover, a handmade sketch of the road traffic accident scenario was drawn in the notepad in situ (Figure 1c). For clarification purposes, the simulated road traffic accident was also represented through a sketch drawn using an online platform (AccidentSketch.com), presented in Figure 1d. Besides measurements, some photographs are also usually required for documentation purposes. In this simple road traffic accident scenario, the PSP took about 30 min to complete the data collection process. The handmade road traffic accident sketch (Figure 1c) was later used in the office to generate a digital version using proprietary software, which was then included in the official accident report. This process took a few more hours of the police officers' time, which could otherwise be used in more meaningful police work.
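For illustration only, the sketch below mimics this local Cartesian referencing: evidence positions are stored as (x, y) offsets, in metres, from the reference lamp pole at the origin, and distances are derived from them. All coordinates are invented for the example.

    # Illustrative only: local Cartesian referencing of crash scene evidence,
    # with the lamp pole as origin (0.0, 0.0). Coordinates are invented.
    import math

    reference = (0.0, 0.0)                           # the lamp pole
    evidence = {
        "vehicle_a_front_wheel_centre": (4.2, 1.3),  # metres from the pole
        "vehicle_b_front_wheel_centre": (7.8, 2.1),
        "debris_field_start": (5.5, 0.4),
    }

    def distance(p, q):
        """Euclidean distance between two points of the local frame (m)."""
        return math.hypot(p[0] - q[0], p[1] - q[1])

    for name, pos in evidence.items():
        print(f"{name}: {distance(reference, pos):.2f} m from reference")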
3.2. Proposed Approach
Considering a set of requirements established in cooperation with the Portuguese police forces, some preliminary technical decisions regarding the selection of hardware (UAV, camera and base station) and software (UAV flight planner/manager and photogrammetric tools) were made. Relying on such requirements and specifications, the outline of a novel methodology to digitally reconstruct and record road traffic accidents was proposed to complement or, eventually, supplant traditional procedures.
Figure 1. Current data collecting method applied by the Portuguese Public Safety Police (PSP) of Vila Real (Portugal) in a simulated road traffic accident scene: (a) staged traffic accident scene; (b) police officer using a measuring wheel to acquire data; (c) in situ sketch example; and (d) traffic accident scene sketch produced for clarification purposes by using an online platform.
3.2.1. Requirements Definition
Defining a set of requirements for a methodology that, from the outset, is intended not only to demonstrate its feasibility but also to be effectively deployed and employed in real road traffic accident scenarios can be a demanding task. As such, cooperation was sought from the Portuguese National Road Safety Authority (ANSR)—whose mission is the "planning and coordination at national level to support the Government policy on road safety, as well as the application of the law on road safety"—and from both police forces that deal with road traffic accidents in Portugal: the PSP and the National Republican Guard (GNR).
This cooperation resulted in the following set of requirements with which a UAS, to be used in road traffic accident scenarios by police forces, must comply to be deemed suitable: (1) a balanced relation between cost, reliability and precision; (2) compact, easy to manage, to transport and to assemble; (3) able to operate in both urban and non-urban contexts; (4) support for both autonomous and manual flight modes; (5) high manoeuvrability and adaptability in both autonomous and manual flight modes; (6) flight parameterisation to support a multiplicity of accident scenarios, namely regarding different types of trajectories, camera angles and image overlap; (7) autonomy to acquire data in typical road traffic accident scenarios using only one battery (assuming that most road traffic accidents involve two to three vehicles and that a common rotary-wing UAV can fly up to twenty minutes on one battery, one flight is enough to cover the large majority of occurrences; multiple-vehicle collisions in motorways, crash scenarios with highly dispersed debris or even scenarios involving explosions can also be addressed by the proposed methodology, but they may imply the use of more than one battery to fully cover the crash scenario); (8) integration of an accurate Global Navigation Satellite System (GNSS) receiver for positioning and imagery georeferencing; (9) support for RGB sensors that acquire high-quality and high-resolution images up to 120 m altitude, which is the legal limit allowed for UAVs according to European regulations (UAV flights are forbidden in some urban contexts, but as the proposed methodology will be applied by police forces, these situations must be foreseen in the legislation); and (10) real-time access to the RGB sensor through the ground control station, enabling the on-scene police officers to monitor and manage the data acquisition process and make an on-the-fly assessment of the data being acquired.
Regarding post-flight data processing, more specifically the photogrammetric processing, another set of specifications was created for selecting the photogrammetric tools: (1) support for photo positioning and alignment, aiming at global or local data georeferencing and feature extraction; (2) capability for data scaling operations; (3) intuitive and easy-to-use software; (4) support for accurate dense point clouds; (5) support for interactive, highly accurate measurements and positional control; and (6) outputs such as orthorectified GIS-based products and virtual models.
3.2.2. Hardware and Software Specification
Considering the set of requirements defined for UASs, the process of selecting the equipment deemed suitable for the proposed methodology and for this study's purpose started off by addressing aspects such as packaging, transportation and ease of handling. Among the available types, small-sized rotary-wing UAVs were preferred over fixed-wing UAVs due to their ability to perform vertical take-off and landing (VTOL) and to maintain a stationary position during flight. The authors' experience in working and operating with different UAVs also weighed in when selecting the two off-the-shelf UAV models for this study: the DJI Phantom 4 and its predecessor, the DJI Phantom 3 Standard (DJI TECHNOLOGY CO., LTD, Shenzhen, China).
For mission planning and execution, two software tools were used: DJI GO (DJI TECHNOLOGY
CO., LTD, Shenzhen, China) and Pix4Dcapture (Pix4D SA, Lausanne, Switzerland). While the former
stands as a valid option to test UAV configurations, the latter is suitable for preparing and executing
mission flights that can be carried out manually or autonomously—in simple or double grid mode—and
also for selecting which area to cover, the flight height and imagery overlap.
To complement UAV-based imagery, ground-perspective images can be collected using ground cameras, namely smartphones, which belong to a widely spread class of cost-effective consumer-grade devices with built-in GPS capabilities and fairly acceptable quality cameras (12 MP). With these, police officers are able to deal with less accessible areas of the crash scene that cannot be covered—fully or at all—using UAVs, due to the presence of obstacles (e.g., dense vegetation, poles, cables or buildings).
After acquiring data from road traffic accident scenarios, a photogrammetric processing stage takes place. As such, notwithstanding several fully compliant solutions (some of them open-source and/or free to use), Pix4Dmapper Pro (Pix4D SA, Lausanne, Switzerland) was selected for this study. The decisive factors for this choice were the research team's significant experience in using this software and an available license. The resulting digital products from this pipeline represent a realistic and complete set of elements that can help to reconstitute road traffic accident scenarios.
3.2.3. Environmental Conditions Variability
UAVs used in photogrammetric approaches for road traffic accident reconstitution are usually equipped with one RGB camera to acquire images from the scene and a GNSS receiver, which, besides georeferencing each image, enables autonomous flights through a pre-established set of waypoints (i.e., mission plans). As such, human intervention can be minimised during the data acquisition process, thus reducing the potential for errors. However, some context conditions might significantly influence this process, namely (1) the presence of unmovable obstacles that partially or completely prevent UAV access to the crash scenario; and (2) adverse light conditions, often found with some environmental phenomena and/or at night. Acquired images are then processed using photogrammetric software to accurately reconstitute the traffic accident scenario.
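To make the idea of a pre-established set of waypoints concrete, the sketch below generates a simple-grid mission over a rectangular area from the camera footprint and the desired overlap. The footprint values are assumptions for illustration, not the specification of any particular UAV or mission planner.

    # Sketch of simple-grid waypoint generation from overlap requirements.
    # Footprint values are assumed for illustration, not a specific camera.
    def grid_waypoints(area_w, area_h, footprint_w, footprint_h,
                       side_overlap=0.75, front_overlap=0.75):
        """Return (x, y) waypoints covering an area_w x area_h rectangle (m).

        Consecutive images along a line overlap by front_overlap; adjacent
        flight lines overlap by side_overlap.
        """
        dx = footprint_w * (1.0 - side_overlap)   # spacing between lines
        dy = footprint_h * (1.0 - front_overlap)  # spacing between photos
        xs = [i * dx for i in range(int(area_w / dx) + 1)]
        ys = [j * dy for j in range(int(area_h / dy) + 1)]
        waypoints = []
        for i, x in enumerate(xs):                # serpentine path
            line = ys if i % 2 == 0 else ys[::-1]
            waypoints += [(x, y) for y in line]
        return waypoints

    # Example: 30 m x 30 m scene, assumed 20 m x 15 m footprint at 15 m height.
    print(len(grid_waypoints(30, 30, 20, 15)))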
3.2.4. Methodology Overview
When a road traffic accident happens and assuming that police forces are called to the scene, the proposed methodology starts off by having officers evaluate the crash scenario (e.g., area of interest, number of vehicles involved, dispersion of potential elements of interest) and the context conditions (e.g., environmental and light conditions and the presence of obstacles). Four road traffic accident scenarios are foreseen, each with its corresponding imagery acquisition process, as summarised in Table 1 and sketched in the decision function below.
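The scenario assessment of Table 1 can be read as a simple decision rule. The sketch below encodes it for illustration only; the condition names are invented, and in practice the choice rests on the officers' judgement.

    # Illustrative encoding of Table 1's scenario-to-acquisition mapping.
    # Condition names are invented for the example.
    def acquisition_strategy(obstacles: str, adverse_light: bool) -> list[str]:
        """obstacles: 'none', 'partial' or 'blocking'."""
        steps = {
            "none": ["autonomous UAV flight (planned with proper software)"],
            "partial": ["autonomous UAV flight where possible",
                        "manual flight (wider areas) or ground camera "
                        "(confined areas, orbital acquisition)"],
            "blocking": ["ground camera only (e.g., smartphone, "
                         "orbital acquisition)"],
        }[obstacles]
        if adverse_light:
            steps.append("artificial light sources around the area of "
                         "interest and/or portable lights mounted on the UAV")
        return steps

    print(acquisition_strategy("partial", adverse_light=True))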
During the imagery acquisition process, a small number of distances between reference points can be measured to ascertain the quality of the road traffic accident scenario reconstitution and also for calibration purposes: bars with well-known dimensions can be used (placing them in the area of interest in concurrent directions), a measuring tape (or wheel), or a GNSS receiver. It is worth mentioning that if the objective is to reconstitute a road traffic accident scenario circumscribed to a relatively small area (a few square meters) by using a UAV equipped with a precise GNSS receiver (with errors around a few centimetres), then the distance measurement between reference points may be unnecessary, due to photogrammetric software capabilities in estimating 3D positions with acceptable accuracy.

Table 1. Foreseen road traffic accident scenarios and respective proposed methodology imagery acquisition processes. UAV: unmanned aerial vehicle.

Foreseen Road Traffic Accident Scenario | Imagery Acquisition Process
Absence of obstacles (ideal situation) | A UAV can carry out a fully autonomous flight, planned with proper software
Presence of some obstacles, restricting UAV access to (at least) one perspective | Image acquisition can be complemented either using manual flight (for wider areas) or by using a ground camera, such as a smartphone (for confined areas of a few square meters, orbital style acquisition)
Presence of obstacles that completely inhibit the use of UAVs | Image acquisition is done only by using a ground camera, such as a smartphone (more suitable for confined areas of a few square meters, orbital style acquisition)
Adverse light conditions | Artificial light sources can be set up around the area of interest. As an alternative/complement, whenever the use of UAVs is possible, portable lightweight light sources can be mounted on the UAV, ensuring the camera is neither occluded nor overexposed
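Complementing the calibration options above, the sketch below shows how a scale constraint derived from a bar of known length can be applied: the ratio between the bar's true length and its length measured in the model rescales every other model distance. All values are invented for the example.

    # Illustrative scale calibration from a bar of known length placed in the
    # scene. All numbers are made up for the example.
    KNOWN_BAR_LENGTH_M = 1.00      # true length of the metal bar
    bar_length_in_model = 0.97     # same bar measured in the raw 3D model

    scale = KNOWN_BAR_LENGTH_M / bar_length_in_model

    # Any distance measured in the model is corrected by the same factor.
    raw_model_distances = {"AB": 2.77, "CD": 1.13}   # metres, model units
    for name, value in raw_model_distances.items():
        print(f"{name}: {value * scale:.2f} m after scaling")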
Lastly, the acquired data concerning the road traffic accident scenario must be processed using photogrammetric software to obtain an orthophoto mosaic and a virtual model reconstruction, therefore digitally documenting the road traffic accident. The crash scenario reconstitution is represented by a georeferenced 3D model or point cloud, in which precise measurements can be taken, edited or complemented at any point in time, through the use of specific software capable of dealing with Geographic Information Systems (GIS) data and/or Computer Aided Design (CAD) models. Figure 2 presents the proposed methodology functional diagram.
Figure 2. Unmanned aerial vehicle (UAV)-based road traffic accidents reconstitution methodology functional diagram.
4. Experimental Tests: Fine-Tuning Parameterisation for the Proposed Methodology

4.1. Experimental Scenario
A set of preliminary tests, considering the most suitable context conditions (clear sky, obstacle-free wide-open area), was performed with various heights and distinct camera angles in order to assess the proposed methodology's feasibility. These tests were also used to determine default settings for flight parameters such as flight duration, most suitable number of acquired images, processing time, ground sample distance (GSD) and measurement precision. More specifically, two sets of experimental tests were completed: one to optimise flight height and duration, as well as the overall quality of the resulting digital products, including resolution and 3D model detail; another to compare both the precision and the execution time of the current data collection method and the proposed methodology, within a road traffic accident scenario context. As a result, near-optimal parameter combinations were determined for future reference.
4.2. Flight Height and Camera Angle
Both flight height and camera angle are important parameters inasmuch as they influence the orthorectified and virtual model outcomes obtained through photogrammetric processing of imagery acquired in a given area of interest. Regarding camera angle, nadiral images would allow for obtaining the best orthorectified mosaics. However, this is a less favourable situation for obtaining 3D models, due to a poorer perspective coverage. With regard to flight height, it stands to reason that lower flight heights allow for a higher spatial resolution. This can be important to detect vehicle damage and existing debris. Nevertheless, in road traffic accident scenarios, namely those in urban contexts, lower heights can induce more operational constraints and, in extreme situations, hamper the detection of similar features, with a negative impact on the photogrammetric processing. Experimental tests were conducted to find the best possible compromise between these two parameters, aiming to obtain the most suitable outcomes from the photogrammetric processing.

To test and assess combinations between these two parameters, several flight missions were carried out under ideal circumstances (clear sky, sunny days with weak or non-existent wind, within an obstacle-free wide-open area), establishing accident scenarios in controlled conditions, as presented in Figure 3.
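Since flight height drives spatial resolution, a useful back-of-the-envelope check is the standard GSD formula for nadir imagery, sketched below. The sensor parameters shown (6.3 mm sensor width, 3.6 mm focal length, 4000 px image width) are assumed values typical of small consumer UAV cameras, not a measured specification; they yield figures of the same order as the nadiral GSDs reported in Table 2.

    # Ground sample distance (GSD) estimate for nadir imagery.
    # Sensor parameters are assumed values for a typical small-UAV RGB camera.
    def gsd_cm(sensor_width_mm, focal_length_mm, image_width_px, height_m):
        """Approximate GSD in cm/pixel for a nadir-pointing camera."""
        return (sensor_width_mm * height_m * 100.0) / (
            focal_length_mm * image_width_px)

    for h in (10, 15, 30, 40):
        print(f"{h:>3} m -> {gsd_cm(6.3, 3.6, 4000, h):.2f} cm/px")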
Figure 3. Simulated scenario to determine the best possible compromise between flight parameters for traffic accidents reconstitution (a). The area considered for measurements was of 67 m², which corresponds to the blue dashed polygon (b); five measuring points are highlighted (A–E).
Flight height varied between 5 m and 100 m, in 5 m increments. Flight missions were carried out manually for both the 5 m and 10 m heights. The purpose was to simulate a situation where eventual obstacles were present, while maintaining a clear line-of-sight with the UAV. The remaining flights were all done in fully autonomous mode, using the same mission plan in Pix4Dcapture. The influence of different perspectives was evaluated by carrying out both simple and double grid flight missions. With regard to camera angle, it stands to reason that it must be sharper in lower height flights and wider (up to the nadiral position) in higher flights, to be able to capture accident scene elements. It varied from 30° to 60° in flights where the height was equal to or lesser than 10 m and from 60° to 90° for the remaining flights. To assess both positional rigor and the scaling of the generated model, two perpendicularly arranged metal bars, with a known length (1 m), were placed within the simulated road traffic accident scenario. Five points—from A to E—were marked in the pavement using chalk, for validation purposes (Figure 3b).
Table 2 presents data regarding the most representative combinations of the following parameters: type of flight mission—manual or autonomous (simple/double grid), flight height, camera angle, number of acquired images, flight mission duration, GSD and photogrammetric processing duration. As the flight height decreased, flight duration increased, along with the number of acquired images, while the GSD became finer. Furthermore, the larger number of acquired images demanded more computational resources during the automatic photogrammetric processing (therefore excluding manual operations, such as scale constraints handling and measurements determination), which increased its duration.
Table 2. Most representative combinations of flight parameters and photogrammetric processing duration (PPD), considering the best possible compromise between flight height and camera angle. Flight typology as simple grid (SG) or double grid (DG); GSD: ground sampling distance.

Height (m) | Camera Angle | Flight Typology | Number of Images | Flight Duration (min:s) | GSD (cm) | PPD (min:s)
40 | 90° | SG | 18 | 3:52 | 1.70 | 3:32
40 | 80° | DG | 15 | 2:27 | 1.54 | 4:33
40 | 65° | DG | 36 | 3:39 | 1.81 | 9:25
30 | 90° | SG | 18 | 3:35 | 1.21 | 3:03
30 | 80° | DG | 36 | 3:27 | 1.08 | 11:11
30 | 65° | DG | 36 | 3:27 | 1.33 | 7:43
15 | 90° | SG | 32 | 4:34 | 0.65 | 4:36
15 | 80° | DG | 72 | 4:58 | 0.59 | 21:59
15 | 65° | DG | 76 | 4:56 | 0.72 | 14:02
10 | 30–60° | Manual | 76 | 4:46 | 0.34 | 30:18
Tape measurements made between the five points (from A to E) marked in the pavement of the road traffic accident scenario (Figure 3b) were used as a reference to compare against those extracted from the photogrammetric-based virtual model generated by Pix4Dmapper Pro. This allowed for both assessing and validating the proposed method's accuracy. Table 3 presents the most representative results obtained from the different flight missions. Overall, lower flight heights, and therefore higher spatial resolutions, enabled more accurate measurements. The most accurate result was achieved by a double grid flight, carried out at 15 m height with the camera angle set to 65°.
Table 3. Comparison between the reference measurements and the most representative ones computed from photogrammetric-based virtual models obtained from flight missions. Combinations of different flight parameters, such as flight mode—manual or autonomous (single/double grid)—flight height and camera angle are presented for each flight mission. The most accurate result was obtained with the 15 m DG (65°) flight.

Measurement | AB (cm) | CE (cm) | DE (cm) | AC (cm)
Measuring tape | 286 | 165 | 93 | 249
40 m SG (90°) | 292 ± 2 | 169 ± 1 | 95 ± 2 | 255 ± 1
30 m SG (90°) | 288 ± 2 | 166 ± 2 | 94 ± 2 | 250 ± 1
15 m SG (90°) | 286 ± 1 | 165 ± 1 | 93 ± 1 | 248 ± 1
40 m DG (65°) | 289 ± 2 | 166 ± 1 | 94 ± 1 | 250 ± 2
30 m DG (65°) | 284 ± 1 | 165 ± 2 | 93 ± 1 | 247 ± 1
15 m DG (65°) | 286 ± 1 | 165 ± 1 | 93 ± 1 | 249 ± 1
40 m DG (80°) | 285 ± 2 | 164 ± 2 | 92 ± 1 | 247 ± 3
30 m DG (80°) | 283 ± 1 | 163 ± 1 | 92 ± 1 | 246 ± 1
15 m DG (80°) | 286 ± 1 | 164 ± 1 | 93 ± 1 | 249 ± 1
Manual | 287 ± 1 | 166 ± 1 | 94 ± 1 | 251 ± 1
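As an illustration of how the Table 3 comparison can be quantified, the snippet below computes per-distance deviations and the mean absolute error for the 40 m SG (90°) flight against the tape reference, using the central values only (the ± terms are ignored).

    # Deviations of model-derived distances from the tape reference
    # (central values from Table 3, in cm; the ± terms are ignored).
    reference = {"AB": 286, "CE": 165, "DE": 93, "AC": 249}   # measuring tape
    flight_40m_sg_90 = {"AB": 292, "CE": 169, "DE": 95, "AC": 255}

    errors = {k: flight_40m_sg_90[k] - reference[k] for k in reference}
    mae = sum(abs(e) for e in errors.values()) / len(errors)
    print(errors)                               # per-distance deviation, cm
    print(f"mean absolute error: {mae:.1f} cm")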
4.3. Precision: Can a UAV-Based Photogrammetric Approach Be Comparable?
Another road traffic accident was simulated to assess the precision of the photogrammetric-based outcomes by comparison with a GNSS receiver and a total station. Validation was achieved using measurements taken with a measuring tape as ground-truth. Figure 4a presents the 30 × 30 m accident scene, where eight points were considered (A, B, C, D, E, F, G and H) and nine distances were measured. Furthermore, the time required to apply each measuring procedure—proposed method, total station, GNSS receiver and tape—was also measured.
Figure 4. Assessment of different measuring methods in a car accident scenario: (a) aerial view of the road traffic accident scenario, with the eight points (A–H) considered for comparing the measuring methods; measuring distances using (b) the GNSS receiver; and (c) the total station.
Current procedures that are commonly used in road traffic accidents' reconstitution by police forces are based on a measuring tape (or wheel). As such, measurements acquired using this method were considered as the reference or ground-truth. The nine distances (AB, BC, CD, EF, GH, AH, BG, CF and DE) were measured in approximately three minutes. Regarding GNSS, measurements were taken in approximately eight minutes with a Topcon HiPer II GNSS receiver (Topcon Corporation, Tokyo, Japan), while ensuring a positional precision between 2 and 4 cm, in kinematic mode. As for the total station, a couple of additional points with known coordinates were considered for orientation purposes, besides the existing eight. Using Topcon GPT-3005N equipment, the entire process took about nine minutes and 30 seconds (equipment configuration and data acquisition). The proposed UAV-based method was applied based on the knowledge presented in the previous section and planned using Pix4Dcapture: an obstacle-free accident scene allowed for an autonomous flight at 15 m height, with a 65° camera angle and 75% image overlap. A total of 42 images was acquired in approximately 3 minutes and 50 seconds. Images were processed in approximately 2 minutes using Pix4Dmapper Pro running on a computer equipped with a 2.6 GHz Intel i7-4720HQ CPU (Intel Corporation, California, United States of America), 16 GB memory (1600 MHz) and a NVidia GeForce GTX 970M (Nvidia Corporation, California, United States of America) GPU. The resulting mosaic had a 0.55 cm GSD. Table 4 presents the measurements acquired by each addressed method.
Table 4. Comparison between measurements taken using a measuring tape, the proposed UAV-based photogrammetric approach, a total station and a GNSS receiver. The difference between the reference measurements (measuring tape) and the ones taken by the other equipment is also provided.

Distance | Tape (cm) | GNSS (cm) | Total station (cm) | UAV (cm)
AB | 257 | 259 (−2) | 258 (−1) | 258 (−1)
BC | 180 | 179 (1) | 180 (0) | 181 (−1)
CD | 268 | 267 (1) | 269 (−1) | 269 (−1)
EF | 264 | 262 (2) | 264 (0) | 265 (−1)
GH | 264 | 263 (1) | 264 (0) | 264 (0)
AH | 377 | 380 (−3) | 377 (0) | 378 (−1)
BG | 440 | 443 (−3) | 440 (0) | 441 (−1)
CF | 368 | 370 (−2) | 369 (−1) | 369 (−1)
DE | 375 | 377 (−2) | 376 (−1) | 376 (−1)

GNSS presented the most accentuated deviations from the ground-truth. When compared with the (high-precision) total station, the proposed UAV-based method presented quite similar accuracy, but with better operational performance in terms of time. Moreover, operating a UAV involves less effort due to its autonomous flight capabilities. Another advantage of the proposed approach is that it provides more data for road traffic accidents reconstitution—through the conversion of the acquired aerial imagery into orthophoto mosaics, dense point clouds and textured virtual models—which can be achieved by resorting to user-friendly photogrammetric software tools.
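To make the comparison above concrete, the snippet below computes each method's mean absolute deviation from the tape reference, using the Table 4 values.

    # Mean absolute deviation from the tape reference, per method
    # (values from Table 4, in cm).
    tape  = {"AB": 257, "BC": 180, "CD": 268, "EF": 264, "GH": 264,
             "AH": 377, "BG": 440, "CF": 368, "DE": 375}
    gnss  = {"AB": 259, "BC": 179, "CD": 267, "EF": 262, "GH": 263,
             "AH": 380, "BG": 443, "CF": 370, "DE": 377}
    total = {"AB": 258, "BC": 180, "CD": 269, "EF": 264, "GH": 264,
             "AH": 377, "BG": 440, "CF": 369, "DE": 376}
    uav   = {"AB": 258, "BC": 181, "CD": 269, "EF": 265, "GH": 264,
             "AH": 378, "BG": 441, "CF": 369, "DE": 376}

    for name, data in (("GNSS", gnss), ("total station", total), ("UAV", uav)):
        mad = sum(abs(data[k] - tape[k]) for k in tape) / len(tape)
        print(f"{name}: {mad:.2f} cm")   # GNSS ~1.89, total ~0.44, UAV ~0.89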
5. Experimental Tests: Simulated Scenarios

The proposed methodology proved to be a valid approach for the reconstitution of road traffic accident scenarios occurring in obstacle-free wide-open areas, under clear sky sunny days. However, to be deployed and used by police forces under real scenarios, this methodology needed to be assessed in less favourable conditions. Obstacles that can commonly be found in urban and non-urban environments, such as aerial electrical cables, buildings, walls, vegetation and trees, were considered in simulated crash scenarios. Low light conditions or even night-time environments were also considered in order to fully evaluate the proposed methodology. Considering the findings documented in Section 4.2, all flights presented hereinafter had image overlap between 70% and 80%. All measurements of the tests performed in this section were taken using a measuring tape.

5.1. Scenario 1: Poles and Electrical/Communications Aerial Cables
The first scenario was simulated in an area with electrical and telecommunications poles, which are very common obstacles in both rural and urban environments. As can be observed in Figure 5, their presence does not favour UAV flights at lower flight heights using autonomous flight modes. Under these conditions, a manual flight mode—providing full flight control to the operator and thus the responsibility to avoid/bypass obstacles—seems to be the most reasonable option for acquiring data in a safe way for both police officers and equipment. In this simulated scenario the camera gimbal was manually controlled to adopt several orientations. The goal was to ensure a suitable overlap between images and to properly cover all the scene during on-the-fly data acquisition, towards a faithful and highly detailed 3D reconstitution. A total of 119 images was acquired in about four minutes. Figure 5 presents an aerial view of the simulated crash scenario, with six control points (A, B, C, D, E and F).
Figure 5. Road traffic accident simulation in a context with poles and aerial cables: (a) aerial view of the road traffic accident scenario, where six control points (A–F) were marked to compare results between the proposed approach and ground-truth measurements; (b) and (c) different perspectives of the accident scene, where the poles and aerial cables are visible.
A subset of measurements—AB, CD and EF—for comparison with the resulting point cloud is recorded in Table 5. The maximum error, of 2 cm (0.44%), was observed in AB.

Table 5. Comparison between ground-truth measurements and the UAV-based photogrammetric approach, relative to a road traffic accident in a context with poles and aerial cables. Errors associated with the UAV-based measurements were calculated using Pix4D.

Distance | Tape (cm) | Point Cloud (cm)
AB | 451 | 453 ± 4
CD | 117 | 117 ± 1
EF | 42 | 42 ± 1
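Distances such as those in Table 5 are, in essence, Euclidean distances between points picked in the georeferenced point cloud. The sketch below shows the computation with NumPy; the coordinates are invented for the example.

    # Euclidean distance between two points picked in a georeferenced point
    # cloud (coordinates are invented; units in metres).
    import numpy as np

    point_a = np.array([497832.41, 4575160.22, 451.08])  # local XYZ of point A
    point_b = np.array([497836.92, 4575160.35, 451.12])  # local XYZ of point B

    d = np.linalg.norm(point_b - point_a)
    print(f"A-B distance: {d:.2f} m ({d * 100:.0f} cm)")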
5.2. Scenario 2: Presence of Trees with Large Canopies
Another common situation is the existence of natural obstacles in a road traffic accident scenario. A second simulated scenario was set in such a way that large trees obstructed portions of the involved vehicles and other possible elements of interest. Figure 6 depicts both the road traffic accident scenario (Figure 6a) and the 3D representations from the photogrammetric processing (Figure 6b,c). In the latter it is clear that the crash scene portions obstructed by the trees' canopy cannot be reconstituted by solely using aerial images. Regarding UAV imagery, 43 images were obtained in an approximately 2 min double-grid autonomous flight, at 30 m height and with a 70° camera angle.
Figure 6. Road traffic accident simulation in which trees partially obstruct the involved vehicles. Aerial view of the scene and the six control points marked for measurement (A–F) (a); generated dense point cloud (b); and the 3D model resulting from applying the proposed methodology (c).
Table 6 presents a subset of results—AB, BD, CD, AC, EF—comparing the ground-truth measurements to the ones obtained from the resulting point cloud. All distances are very similar in both methods, with a maximum error of 1 cm across the measurements.
Table 6. Comparison between the ground-truth measurements taken and the measurements obtained from the UAV-based point cloud in a simulated road traffic accident where trees obstruct part of the involved vehicles. Errors associated with the UAV-based measurements were calculated using Pix4D.

Distance | Tape (cm) | Point Cloud (cm)
AB | 69 | 69 ± 2
BD | 86 | 87 ± 1
CD | 77 | 78 ± 2
AC | 122 | 123 ± 2
EF | 79 | 80 ± 1
However, a complete crash scene reconstitution was not possible, due to the obstruction caused by the tree canopies, which rendered the aerial images useless for that part of the scenario. As such, digital outcomes from the proposed methodology are somewhat incomplete, as can be seen in the 3D model (Figure 6c). To overcome this issue and as mentioned in Section 4.2, ground-based image acquisition can be integrated with UAV-based imagery, thus enabling a full and accurate crash scenario reconstitution when parts are obstructed and not visible in the aerial imagery.

5.3. Scenario 3: Presence of Dense Vegetation
This third simulated road traffic accident presents a scenario where a significant portion of a vehicle and possible other elements of interest are obstructed by dense vegetation (Figure 7a). This situation blocks the aerial view in part of the scene. Images acquired from the ground, using a smartphone equipped with a 13 MP camera, were thus combined with the UAV-based aerial imagery. UAV imagery acquisition was carried out in a manual flight that took approximately 4 min and 30 s, at a variable height between 5 and 10 m and with a 70° camera angle, enabling a complete accident scene reconstitution.

Before the proposed methodology processing stage, the vegetation was removed by defining a sub-region of interest focused on the vehicle (Figure 7b). A total of 50 aerial images were acquired using the UAV, which enabled the computation of a dense point cloud (Figure 7d). As the dense vegetation prevented the reconstitution of one of the vehicle's sides, 17 images of the obstructed area were acquired using the smartphone camera—this process took approximately three minutes—and were used to complement the aerial imagery (Figure 7e). Lastly, three measuring points (Figure 7f) were marked on the ground (A–C), to compare the reference measurements with the ones obtained from the resulting point cloud.
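As a concrete illustration of the vegetation-filtering step, the sub-region of interest can be expressed as an axis-aligned bounding box applied to the dense point cloud. The Python/NumPy sketch below is a minimal, hypothetical example of ours (the bounding-box coordinates and array layout are assumptions, not values from the study); Pix4D offers equivalent region-of-interest editing through its interface.

```python
import numpy as np

# Minimal sketch of region-of-interest filtering on a dense point cloud.
# points: N x 3 array of (x, y, z) coordinates in a local metric frame.
# The bounding box below is hypothetical; in practice it is drawn around
# the vehicle so that the surrounding vegetation points are discarded.
points = np.random.rand(100_000, 3) * 20.0           # stand-in point cloud

roi_min = np.array([4.0, 6.0, 0.0])                   # assumed box corners (m)
roi_max = np.array([9.0, 11.0, 3.0])

inside = np.all((points >= roi_min) & (points <= roi_max), axis=1)
vehicle_cloud = points[inside]                        # points kept for the model
print(f"kept {vehicle_cloud.shape[0]} of {points.shape[0]} points")
```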
Figure 7. Road traffic accident simulation with one vehicle in a dense vegetation context: (a) vehicle aerial view; (b) point cloud vegetation filtering; (c) dense point cloud combining both ground and aerial imagery; (d) resulting UAV-based point cloud; (e) resulting 3D model of the vehicle side obstructed by vegetation, using ground-based imagery; and (f) part of the resulting orthophoto mosaic, highlighting in situ measuring points (A–C) used for precision assessment purposes.
The comparison of ground-truth measurements (AB 209 cm and AC 308 cm) with the ones obtained from the final dense point cloud (AB 210 ± 1 cm and AC 309 ± 1 cm) showed, again, only a 1 cm discrepancy between them, proving that both ground and aerial imagery can be combined to obtain a precise road traffic accident scenario reconstitution using the proposed methodology.

5.4. Scenario 4: Presence of Buildings

This fourth scenario simulates a road traffic accident in which at least one of the involved vehicles is fairly close to a building, thus having one of its sides obstructed to autonomous aerial imagery acquisition. The aim is to depict a common situation, especially in urban contexts. Regarding UAV imagery, 40 images were obtained in an approximately 4 min and 30 s autonomous flight, at 20 m height and with a 70° camera angle. Furthermore, a manual flight was also performed, varying the flight height during the mission while keeping the camera angle constant. This flight aimed to document the obstructed portion of the vehicle, thus complementing the data acquisition task.

Figure 8a presents this simulated crash scenario and the six points (A–F) marked on the ground for precision assessment. Figure 8b presents the final 3D model obtained through photogrammetric processing of the UAV-based imagery acquired by both the manual and automatic flights.
Figure 8. Road traffic accident simulated scenario near buildings: (a) aerial overview of the scene along with the six control points (A–F) marked for precision assessment; (b) 3D model reconstituting the crash scene from the photogrammetric processing of the aerial imagery; green dots represent the camera positions.

Table 7 compares the ground-truth measurements with the ones taken from the dense point cloud. A 1 cm maximum potential error, in DE, demonstrates the feasibility of this approach, proving that imagery from both autonomous and manual flights may be combined to obtain a precise road traffic accident scenario reconstitution using the proposed methodology.

Table 7. Comparison between the ground-truth measurements and those obtained from the generated point cloud where a building is partially obstructing a portion of a vehicle.

Distance    Tape (cm)    Point Cloud (cm)
AB          459          459 ± 1
BC          251          251 ± 1
DE          268          269 ± 1
DF          669          669 ± 1
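Across these scenarios, flight height is the main lever on image detail: for a nadir-pointing camera, the ground sampling distance (GSD) grows linearly with height. The sketch below estimates it using typical DJI Phantom 4 camera figures; the sensor width, focal length and image width are our assumptions, not values quoted in this note.

```python
# Sketch: ground sampling distance (GSD) for a nadir camera.
# Camera figures below are typical DJI Phantom 4 values taken as
# assumptions here, not quoted from the study.
SENSOR_WIDTH_MM = 6.17     # 1/2.3" sensor
FOCAL_LENGTH_MM = 3.61
IMAGE_WIDTH_PX = 4000

def gsd_cm_per_px(height_m: float) -> float:
    """Ground footprint of one pixel, in centimetres."""
    return (SENSOR_WIDTH_MM * height_m * 100.0) / (FOCAL_LENGTH_MM * IMAGE_WIDTH_PX)

for h in (20, 25, 30):
    print(f"{h} m flight height -> GSD ~ {gsd_cm_per_px(h):.2f} cm/px")
# 20 m gives roughly 0.9 cm/px and 30 m roughly 1.3 cm/px, which helps
# explain why centimetre-level measurement errors are attainable here.
```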
5.5. Scenario 5: Light Conditions

Besides optimised flight parameters, imagery quality is key to obtaining reliable and accurate reconstitutions of road traffic accident scenarios using the proposed methodology. While physical obstructions to aerial imagery can be managed by using complementary acquisition equipment and techniques, light conditions can play a central role regarding imagery quality. Insufficient light in a road traffic accident context requires artificial light sources to apply the proposed methodology, as it is highly dependent on the imagery quality to deliver complete and accurate digital outcomes. A lighting kit for DJI Phantom 4 UAVs was selected (Lume Cube, Seattle, WA, USA). It was composed of two Light Emitting Diode (LED) lights—each a 3.81 cm cube, weighing 28.35 g (battery included), supplying 750 lux at 1 m, with a 60° beam angle and an adjustable position angle—and all the necessary parts to assemble it on the UAV. As for ground-based illumination, four Aputure Amaran AL-H198 LED lights (Aputure Imaging Industries Co. Ltd., Shenzhen, China) were selected. Each unit measures 23.4 × 9.1 × 14 cm (length × width × height), is powered by battery (5.5–9.5 V DC), weighs 325 g, supplies 920 lux at 1 m with a 60° beam angle, and includes adjustable light intensity and adjustable colour temperature.

The first simulated road traffic accident scenario under adverse illumination conditions was intended to visually assess the acquired aerial imagery. The simulation took place in a suburban area, without direct light from nearby public illumination, and involved two vehicles. Relying only on portable illumination coupled to the UAV, images were taken at the same point at different heights (5 m, 7.5 m, 10 m, 15 m, 20 m, and 25 m) over the scene, as shown in Figure 9. Each image was acquired in a manual flight, with the UAV's RGB camera in a nadiral position.
Figure 9. Aerial images acquired without public illumination support at different heights, with portable illumination coupled to the unmanned aerial vehicle (UAV).
Portable illumination coupled to the UAV is enough to document road traffic accident scenarios with high visual quality, under adverse illumination conditions, at heights equal to or below 15 m. Flights over that reference height cannot be dismissed, mainly because obstacles may be present in the scene. However, image visual quality decreases as flight height increases (Figure 9).
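The 15 m threshold is consistent with simple photometric reasoning: treating each LED as a point source, illuminance falls off with the square of distance. The sketch below is our own back-of-the-envelope estimate, not a calculation from the study; it scales the Lume Cube's rated 750 lux at 1 m to the tested flight heights.

```python
# Back-of-the-envelope inverse-square estimate of ground illuminance
# from the UAV-mounted LEDs (rated 750 lux at 1 m). This ignores beam
# shape, lens aperture and ISO, so it is indicative only.
RATED_LUX_AT_1M = 750.0

def ground_illuminance(height_m: float) -> float:
    """Approximate illuminance (lux) directly below the UAV."""
    return RATED_LUX_AT_1M / height_m ** 2

for h in (5, 7.5, 10, 15, 20, 25):
    print(f"{h:>4} m -> {ground_illuminance(h):6.1f} lux")
# Around 15 m the estimate drops to ~3 lux, consistent with the observed
# degradation of image quality above that height.
```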
The second simulation under adverse illumination conditions took place in a parking lot, where lighting was provided exclusively by public illumination fairly close to the area of interest. Regarding aerial imagery, four DG autonomous flights were carried out with (Figure 10b,d) and without (Figure 10a,c) the portable illumination coupled to the UAV: two at 20 m height (Figure 10a,b)—25 images were acquired in approximately 4 min and 30 s—and another two at 30 m height (Figure 10c,d)—17 images acquired in approximately 1 min and 40 s. All flights had a 70° camera angle and the portable illumination coupled to the UAV was manually adjusted to this angle. Lower flight heights were not considered due to the presence of trees and aerial cables in the scene surroundings.
Figure 10. Orthophoto mosaics generated from aerial imagery acquired in an adverse light context: (a) and (c) only supported by public illumination; (b) and (d) public illumination combined with the portable one coupled to the UAV; (a) and (b) at 20 m height; and (c) and (d) at 30 m height.
All the presented outputs displayed quality reconstitutions of the scenario. Although public illumination was sufficient in this scenario, portable illumination coupled to UAVs can also be used as a complement, improving the digital outputs' light intensity/colour and compensating for possibly insufficient public illumination (e.g., poles that are too spaced out, missing or malfunctioning).
A third simulation under adverse illumination conditions was set in a suburban area, without public lighting support. Two vehicles—one of darker and one of brighter colour—were used, and the accident scene was documented using: (i) only ground-based portable illumination (four units); (ii) only portable illumination coupled to the UAV; and (iii) a combination of ground-based and UAV-coupled illumination.
Regarding aerial imagery, several DG autonomous flights were carried out. Each flight had a 70° camera angle and the portable illumination was manually adjusted to this angle. The 25 m height flight—28 images acquired in approximately 5 min and 30 s—was considered for this scenario for two reasons: it is borderline regarding aerial imagery quality, and this flight height allows for the presence of most types of obstacles. Figure 11 presents the final orthophoto mosaics focused on the vehicles involved in the simulation, for each aforementioned illumination setup.
Figure 11. Orthophoto mosaics generated from aerial imagery acquired in an adverse light context: (a) only ground-based illumination; (b) portable illumination coupled to the unmanned aerial vehicle (UAV); and (c) combination of both ground and UAV-coupled illumination. Yellow circles mark the location of the four ground-based illumination units.
The best visual result seems to be the one only using portable illumination coupled to the UAV (Figure 11b), as it seems to reduce the shadow effects present in the combination of both portable illuminations (Figure 11c). Moreover, it also has simpler logistics for police forces when compared with ground-based illumination, which requires setting up units in the crash scene.
6. Experimental Tests: Real Scenarios

This section presents some real road traffic accident scenarios, addressed in cooperation with both PSP and GNR, in which the proposed methodology is compared with the current reconstitution method used by police forces.
6.1. Urban Context

Within the scope of a close collaboration with PSP of Vila Real, our research group accompanied PSP officers to some real road traffic accidents that happened within the Vila Real urban context. Both teams worked roughly simultaneously to document crash scenes using their own approaches: the current reconstitution method and the proposed methodology, respectively. Any road traffic accident scenario was disregarded if any of the involved parties did not concede authorization to disclose the acquired images. Only the most meaningful documented occurrences are presented. PSP sketches produced for the presented occurrences were provided for both comparison and validation purposes.
The first occurrence regards a daytime urban road traffic accident—one-way street, with two traffic lanes and parking on both sides of the street—that took place on 10 April 2017, directly involving two vehicles. PSP was called to the accident scene and proceeded by documenting it using its current methodology (Figure 12c). A sketch in Microsoft Visio (Microsoft Corporation, Redmond, WA, USA) was produced afterwards using the data acquired.

As for the proposed methodology, a DG autonomous flight was carried out roughly at the same time that PSP officers were doing their investigation. In the 20 m height flight—set due to the presence of public illumination poles in the surroundings of the scene—50 images were acquired in approximately two minutes, with a 70° camera angle and 80% overlap between images. Figure 12a presents the orthophoto mosaic reconstituting this road traffic accident scenario. Some debris and possible elements of interest can be clearly distinguished lying on the road. In the open traffic lane, vehicles passing by are depicted as ghost-like elements, but allow for comprehending the traffic flow during the period of time lapsed between the road traffic accident and the end of PSP's documenting process. Figure 12b presents the processed orthophoto mosaic without the ghost-like elements (e.g., vehicles and people on the sidewalk), allowing the viewer to focus on the crash scene elements only.
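Removing such transient, ghost-like elements is typically a by-product of how the orthophoto mosaic is blended: where enough overlapping images see the same ground cell, a per-pixel median suppresses objects that appear in only a few frames. The NumPy sketch below illustrates the principle on already co-registered images; this is a simplification of ours, and commercial tools such as Pix4D apply more sophisticated weighted blending.

```python
import numpy as np

# Principle sketch: median blending of co-registered aerial images.
# Moving vehicles appear in only a few frames per ground cell, so the
# per-pixel median keeps the static scene and suppresses the "ghosts".
# images: list of H x W x 3 arrays already warped to a common ground frame.
rng = np.random.default_rng(0)
images = [rng.integers(0, 255, (480, 640, 3), dtype=np.uint8) for _ in range(9)]

stack = np.stack(images, axis=0)                  # shape: (N, H, W, 3)
mosaic = np.median(stack, axis=0).astype(np.uint8)
print(mosaic.shape)                               # (480, 640, 3) ghost-free blend
```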
Figure 12. Proposed method applied to a road traffic accident in urban context: (a) orthophoto mosaic generated from the aerial imagery photogrammetric processing; (b) orthophoto mosaic without ghost-like elements; (c) police officers taking measurements with a tape and the UAV getting ready to take-off; (d) aerial perspective of the scene, yellow ellipses identify the vehicles involved in the accident; (e) PSP's sketch; and (f) overlap between PSP's sketch and the generated orthophoto mosaic.
Figure 12f presents an overlap between PSP's sketch (Figure 12e) and the processed orthophoto mosaic (Figure 12b), both in the same scale. There are some minor issues regarding the vehicles' dimensions. However, considering that PSP officers use existing models from Microsoft Visio to develop their work, in this scenario both the sketch and the orthophoto mosaic are very similar. As a road traffic accident scenario reconstitution, the proposed approach provides a more complete set of outcomes, enabling different perspectives to be analysed (Figure 12d), zooming in on a certain area, assessing the overall accident scene environmental and physical contexts, pinpointing the precise location of debris and other elements of interest, and measuring distances between any two points within the scene. Moreover, the accident scene is digitally preserved for analysis when necessary. This relatively simple urban road traffic accident closed down only one traffic lane for approximately 25 min (on-site documenting process), which meant that traffic could still flow in the other traffic lane. With the proposed approach, documenting the crash scene took only two minutes, with five more minutes spent in assembling the UAV and setting up the flight plan.
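Overlaying a Visio sketch on the orthophoto mosaic, as done in Figure 12f, amounts to bringing both rasters to a common metric scale and alpha-blending them. A minimal Pillow sketch of ours is shown below; the GSD values are assumed placeholders, synthetic images stand in for real files so the snippet runs, and real use would also require aligning a couple of common reference points.

```python
from PIL import Image

# Minimal overlay sketch. In real use, "ortho" and "sketch" would be
# loaded from files; synthetic images stand in here so the snippet runs.
ORTHO_GSD_CM = 0.9            # cm per pixel of the orthophoto (assumed)
SKETCH_GSD_CM = 2.5           # cm per pixel of the exported sketch (assumed)

ortho = Image.new("RGBA", (2000, 1500), (120, 120, 120, 255))
sketch = Image.new("RGBA", (360, 270), (255, 0, 0, 128))

# Rescale the sketch so one pixel covers the same ground distance.
scale = SKETCH_GSD_CM / ORTHO_GSD_CM
sketch = sketch.resize((int(sketch.width * scale), int(sketch.height * scale)))

# Paste at an offset found by matching common reference points (fixed here),
# then blend half-transparently, as in the Figure 12f comparison.
composite = ortho.copy()
composite.alpha_composite(sketch, dest=(100, 100))
overlay = Image.blend(ortho, composite, alpha=0.5)
overlay.save("sketch_over_ortho.png")
```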
A second occurrence also regards a daytime urban road traffic accident, in a busy intersection, that took place on 11 May 2017, involving two vehicles. PSP was also called to the traffic accident scene and proceeded by documenting it using its current methods. The produced sketch is presented in Figure 13a. Two traffic signs—a stop (PFA) and a wrong way (PFB) sign—and the collision point (CP) are considered as reference points. PSP officers took around 35 min to complete the on-site documenting process. As for the proposed methodology, a DG autonomous flight was carried out. In the 20 m height flight—set due to the presence of public illumination poles and buildings in the surroundings—57 images were acquired in approximately 2 min, with a 70° camera angle and 80% imagery overlap. Figure 13b presents the produced orthophoto mosaic of this road traffic accident scenario. The three reference points considered in PSP's sketch (Figure 13a) are highlighted. Furthermore, debris and other possible elements of interest can be clearly distinguished lying on the road, near the collision point (PC).
Figure 13. Sketch from Public Safety Police, based on data documented at the traffic accident scene with two vehicles involved (a). Orthophoto mosaic generated from the photogrammetric processing of aerial imagery (b); the three reference points considered in PSP's sketch are highlighted using yellow circles: two traffic signs (yellow enclosed "×") and the collision point (PC). Overlap between the accident sketch and the orthophoto mosaic; white ellipses mark the discrepancies regarding the position of both vehicles (c).
Figure 13c presents an overlap between PSP's sketch (Figure 13a) and the processed orthophoto mosaic (Figure 13b), both in the same scale. The main noticeable discrepancy regards the position of the vehicles. In a more complex road traffic accident scenario, the proposed approach would provide a more precise reconstitution, with a comprehensive context allowing for different perspectives, the precise location of the debris and other elements of interest, and the capability of measuring distances between any two points within the scene. Again, the crash scene is fully preserved for both present and future analysis. This urban road traffic accident scenario took approximately 35 min to be documented by PSP, with some impact on the usual traffic flow. With the proposed approach, documenting the entire crash scene took only 2 min, with 5 more minutes spent in assembling the UAV and setting up a proper flight plan.

6.2. Motorway Scenario
A motorway in the outskirts of Vila Real (IP4) is the last road traffic accident scenario, in a real environment, presented in this work. It consisted of a simulated scene between two vehicles in an IP4 stretch with two lanes in one direction and one lane in the other. The scenario was created to be particularly complex due to the overall physical context and to the scattered debris field, formed by objects of reduced dimensions (when compared to those of a vehicle), by skid marks and by an oil spill, put right next to one of the vehicles involved (different perspectives shown in Figure 14a–d). GNR was responsible both for staging the road traffic accident and for meticulously documenting it using its current method: a measuring wheel (Figure 14e), a measuring tape and a hand-drawn sketch (Figure 14f). Officers took approximately 3 h to document the accident scene. The sketch was then used to produce two sketches in Microsoft Visio: a non-scaled one—that took two hours to be done—and a scaled one (Figure 14g), that took 16 h to be completed.
Figure 14. IP4 motorway road traffic accident scenario: different perspectives from the involved vehicles and debris (a, b, c, and d); National Republican Guard officers documenting the accident scene (e); the on-site handmade sketch (f); and scaled sketch produced by GNR in office (g).
As for the proposed methodology, a DG autonomous flight was carried out. The 25 m height flight encompassed a 67 × 40 m area, acquiring 135 images in approximately 5 min and 30 s, with a 70° camera angle and 80% imagery overlap. In only a few minutes, a substantial area—considering the crash scene dimensions—was quickly documented. Furthermore, the high level of detail achieved can be seen in Figure 15. Debris scattered on the road, a car wheel in the ditch, skid marks and traffic cones circumscribing the accident scene are clearly identifiable.

Figure 15. Part of the orthophoto mosaic of the IP4 motorway road traffic accident scenario (a). Both vehicles and other elements of interest (white ellipses): debris (c), skid marks (d), a tire (b). Measurement lines are also identified (A to H).
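The image count reported for this 67 × 40 m survey can be sanity-checked with simple footprint arithmetic. The sketch below is a rough planning aid of our own, again assuming typical Phantom 4 camera geometry; real mission planners account for turns, camera tilt and terrain, so exact counts differ.

```python
# Sketch: rough image count for a double-grid survey (our own planning
# aid, not the mission planner used in the study). Camera figures are
# typical Phantom 4 assumptions.
SENSOR_W_MM, SENSOR_H_MM = 6.17, 4.63
FOCAL_MM = 3.61

def grid_images(area_w_m, area_h_m, height_m, overlap=0.8, double_grid=True):
    foot_w = SENSOR_W_MM / FOCAL_MM * height_m     # ground footprint (m)
    foot_h = SENSOR_H_MM / FOCAL_MM * height_m
    step_w = foot_w * (1 - overlap)                # spacing between flight lines
    step_h = foot_h * (1 - overlap)                # spacing between exposures
    strips = int(area_w_m / step_w) + 1
    per_strip = int(area_h_m / step_h) + 1
    n = strips * per_strip
    return 2 * n if double_grid else n

print(grid_images(67, 40, 25))  # on the order of a hundred-plus images,
                                # in line with the 135 images acquired
```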
Figure 16 presents an overlap between the scaled sketch (Figure 14g) and the processed orthophoto mosaic (Figure 15a), both in the same scale. Notwithstanding the GNR officers' experience and the painstaking documenting process carried out at the accident scene, there are some important discrepancies between the sketch and the orthophoto mosaic, independently of how they are aligned. Both images were put to the same scale and lined up considering motorway overpasses and the motorway guard rails next to the vehicles. Regarding the designated area of interest, the debris positions are not correct, nor are the vehicle positions and skid marks. In a broader context, both the motorway profile and the respective reference elements do not match. Again, the proposed approach provides a more precise reconstitution, with a comprehensive context and the precise location of debris.
Figure 16. Overlap between National Republican Guard's (GNR's) sketch and the generated orthophoto mosaic.
To further validate the proposed method's capability to preserve such a complex road traffic accident scenario for both present and future analysis, different perspectives of the traffic accident scene are shown in Figure 17, which also illustrates the ability to measure distances between any two points within the scene. Based on the GNR officers' work in documenting the crash scene, Table 8 presents a comparison between on-site measurements and those obtained from the proposed approach. Moreover, each measured distance is represented in Figure 15a for clarification purposes. Considering tolerances, the maximum possible error is of 6 cm in measurement F (4% error), taken between the front of the green vehicle and the road line.

This motorway road traffic accident scenario took approximately 16 office hours to be represented in an official sketch. With the proposed approach, documenting the entire crash scene took approximately 5 min and 30 s (with 5 min more spent in assembling the UAV and setting up a proper flight plan) and approximately 30 min in the office to obtain the digital outputs. In addition to being faster, the proposed approach also enables precise measurements to be taken between two points in the crash scene, as well as different perspectives of the accident with a broader context. The digital outputs of the road traffic accident scenario reconstitution are made available for future analysis, if necessary.
Figure 17. Different 3D perspectives from the road traffic accident scenario in a motorway (IP4).
Table 8. Comparison between measurements taken by National Republican Guard (GNR) officers with
those obtained from the digital products generated by the proposed approach, for the motorway (IP4)
road traffic accident. Errors associated with the proposed methodology measurements were calculated
using Pix4D.
Distance Line    Tape (cm)    Point Cloud (cm)
A                163          166 ± 1
B                163          163 ± 3
C                163          162 ± 3
D                260          260 ± 2
E                252          254 ± 4
F                143          137 ± 4
G                300          300 ± 2
H                70           71 ± 1
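A comparison like Table 8 is easy to automate once both measurement sets are digital. The sketch below is our own illustration (the per-line tolerances are the Pix4D-reported values from the table); it flags lines whose nominal discrepancy exceeds the reported tolerance.

```python
# Illustrative check of Table 8: flag measurement lines whose nominal
# discrepancy exceeds the tolerance reported by the photogrammetry tool.
table8 = {  # line: (tape_cm, cloud_cm, tolerance_cm)
    "A": (163, 166, 1), "B": (163, 163, 3), "C": (163, 162, 3),
    "D": (260, 260, 2), "E": (252, 254, 4), "F": (143, 137, 4),
    "G": (300, 300, 2), "H": (70, 71, 1),
}

for line, (tape, cloud, tol) in table8.items():
    diff = abs(tape - cloud)
    status = "within tolerance" if diff <= tol else "EXCEEDS tolerance"
    print(f"{line}: diff {diff} cm ({100 * diff / tape:.1f}%) -> {status}")
# Lines A (3 cm) and F (6 cm) exceed their reported tolerances.
```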
7. Conclusions
Clearly, UASs allow for collecting more data with higher precision when compared to the current
approaches for documenting road traffic accident scenes. However, in certain conditions, some field
data retrieval may not be possible due to the presence of obstacles in the scenario—both in urban and
rural contexts—such as trees, cables or buildings. Moreover, how UAV-based photogrammetric methods can deal with adverse light conditions remains scarcely addressed, leaving law enforcement
authorities with no other option than to resort to the traditional approaches or others based on expensive
and/or laboriously demanding technologies, whenever a road occurrence requiring their intervention
takes place, for example, at night or on cloudy days. To tackle these issues, this study proposes and
evaluates a low cost UAV-based photogrammetric methodology covering a wide range of scenarios,
including: (i) obstacle-free scenes using only UAS as surveying instrument; (ii) scenes featured by road
traffic accidents with partial inaccessibility for UAS usage, thus requiring complementary acquisition
approaches; (iii) simulations in adverse light contexts (e.g., at night or with bad visibility), requiring
the use of artificial light sources; and (iv) real road traffic accidents.
The several tests performed in this study, together with the literature review, enabled us to identify the benefits and limitations of the available approaches to the digital reconstitution of road traffic accident scenes, which are presented in Table 9.
Despite the number of valid traffic accident research works involving the use of UAV-based
surveying, most of them were developed considering ideal scenarios, in either urban or rural
environments. To address this gap, a methodology consisting of a full pipeline to digitally document
road traffic accidents using UAVs, ground cameras, artificial light sources and photogrammetry was
proposed in this technical note. A preliminary set of experiments was carried out to determine default flight parameters in ideal scenarios, which can be fine-tuned according to each set of conditions characterising a road traffic accident occurrence. Afterwards, to demonstrate the usefulness of UAVs in traffic accident investigation, a set of experiments was performed, more specifically: (i) considering
areas populated with typical obstacles found in urban and rural settings, at different densities; and (ii)
under unfavourable illumination conditions. Furthermore, practical situations were addressed with
the Portuguese road traffic authorities, wherein the traditional approach and the proposed methodology were compared in terms of efficiency and accuracy.
Table 9. Comparison between the available approaches to address road traffic accident scenes reconstitution: pros, cons and possible outputs.

Approach                  Pros                   Cons                              Outputs
Measuring Tape or Wheel   ✓ Ease-of-use          ✗ Time-consuming                  - Manual measurements
                          ✓ Low cost             ✗ Prone to errors
                                                 ✗ No visual data
                                                 ✗ Data not georeferenced
Sketches & Photographs    ✓ Ease-of-use          ✗ Time-consuming                  - Measurements
                          ✓ Low cost             ✗ Inaccurate                      - Images
                                                 ✗ Data not georeferenced          - Forensic sketches
Total Station             ✓ Rigorous             ✗ Time-consuming                  - Precise measurements
                                                 ✗ Skilled operator                  (angles and distances)
                                                 ✗ Not suitable for mobility
                                                 ✗ Considers line-of-sight
                                                 ✗ Expensive
Laser Scanning            ✓ Rigorous             ✗ Time-consuming                  - Point clouds
                          ✓ 3D visualisation     ✗ Loss of information due to      - Images
                                                   its position
                                                 ✗ Confined action range
                                                 ✗ Expensive
GNSS Receivers            ✓ Rigorous             ✗ Time-consuming                  - Precise georeferenced
                                                 ✗ Prone to signal obstruction       points
                                                 ✗ Expensive
UAS                       ✓ Ease-of-use          ✗ Data acquisition constraints    - Point clouds
                          ✓ Large coverage       ✗ Liable to collisions            - 3D meshes
                          ✓ Multiple outputs     ✗ Requires training               - Orthophoto mosaics
                          ✓ 3D visualisation     ✗ Rigidly regulated               - Digital Surface Models
                          ✓ Cost-effective
Encouraging results were obtained from the assessment of the different experiments, which attest to
the feasibility of using UAVs for traffic accident investigation in many environments characterised by
several conditions. Regarding extreme situations, wherein traffic accident scenarios are severely affected by obstructing obstacles, alternative/complementary strategies to digitally document evidence can be used at the surveying stage, for example, consumer-grade digital cameras or smartphones. Moreover, regarding the tests carried out in real situations, in simpler scenarios the
results of both traditional approaches and the proposed methodology reached a fair level of similarity.
However, in more complex cases, significant advantages of the latter were observed in relation to the
former in terms of accuracy and operation times.
The complete solution implementing the proposed methodology requires the integration of the
following components: (1) aerial platform (UAS), endowed with artificial lighting to operate in the
most varied illumination conditions; (2) several types of sensors, according to the specificity of the
operation; and (3) a computer application—freely available or developed from scratch—to generate
the required digital documentation products, for consistent and non-redundant storing, enabling a
quick assessment and retrieval of reliable measurements. Forthcoming directions include: motivating
a change in the current road traffic accident documentation paradigm with promotion and training
initiatives—that have already started to take place with local authorities—aiming to complement
and, eventually, replace the traditional approach with a technological pipeline, pursuing not only a
normalised digital documentation process but also the reduction of the time and effort required
in current practices carried out in the field for surveying purposes. Another important issue to
address in future is the need to have UAVs operating in a wider number of nonextreme—though
unfavourable—weather situations, in which structural upgrades such as UAV shield adjustments
for waterproofing and hardware improvements for fast compensation of windy conditions may be
expected, eventually, under the manufacturer’s initiative. The improvement of the methodology’s
pipeline by implementing features for parallel processing while the UAV is surveying road traffic
accidents, suppressing the need for post-processing activities in the office should also be addressed.
Real-time online access to reconstructed road traffic accident scenes is another relevant capability to be
developed. Moreover, layers of computational intelligence and analytics are being designed to enhance
the proposed methodology with decision support features that may assist authorities, urban planning
professionals and municipal entities in the continuous improvement of the road traffic system.
Author Contributions: Conceptualization, J.J.S.; data curation, L.P., J.S., J.V., J.H. and T.A.; formal analysis, L.P.,
J.S. and J.V.; funding acquisition, E.P., and J.J.S.; investigation, L.P., J.S., J.V., J.H. and T.A.; methodology, A.S.,
E.P. and J.J.S.; project administration, E.P. and J.J.S.; resources, J.S., E.P. and J.J.S.; software, L.P., J.V. and T.A.;
supervision, E.P., and J.J.S.; validation, L.P., J.V., E.P. and J.J.S.; visualization, L.P., J.S. and E.P.; writing—original
draft, L.P., J.S., J.V., T.A., and E.P.; writing—review and editing, A.S., and J.J.S. All authors have read and agreed to
the published version of the manuscript.
Funding: This work was partially financed by the European Regional Development Fund (ERDF) through the
Operational Programme for Competitiveness and Internationalisation—COMPETE 2020 under the PORTUGAL
2020 Partnership Agreement, and through the Portuguese National Innovation Agency (ANI) as a part of
project “REVEAL—Drones for supporting traffic accident evidence acquisition by Law Enforcement Agents”
(Nº 33113). Financial support provided by the FCT—Portuguese Foundation for Science and Technology
(SFRH/BD/139702/2018) to Luís Pádua.
Acknowledgments: The authors would like to acknowledge all the law enforcement agents that assisted with
this study.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the
study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to
publish the results.
References
1. ETSC. Briefing: 5th EU Road Safety Action, 2020–2030; European Transport Safety Council, European Union: Brussels, Belgium, 2018; p. 47.
2. WHO. Global Status Report on Road Safety 2015; World Health Organization: Geneva, Switzerland, 2015; p. 340.
3. ITF. Road Safety Annual Report 2017; OECD Publishing: Paris, France, 2018; p. 586.
4. Wijnen, W.; Stipdonk, H. Social costs of road crashes: An international analysis. Accid. Anal. Prev. 2016, 94, 97–106. [CrossRef] [PubMed]
5. Rolison, J.J.; Regev, S.; Moutari, S.; Feeney, A. What are the factors that contribute to road accidents? An assessment of law enforcement views, ordinary drivers’ opinions, and road accident records. Accid. Anal. Prev. 2018, 115, 11–24. [CrossRef] [PubMed]
6. Wegman, F. The future of road safety: A worldwide perspective. IATSS Res. 2017, 40, 66–71. [CrossRef]
7. Elvik, R.; Høye, A.; Vaa, T.; Sørensen, M. Driver Training and Regulation of Professional Drivers. In The Handbook of Road Safety Measures; Emerald Group Publishing Limited: Bingley, UK, 2009.
8. WHO. Save LIVES—A Road Safety Technical Package; World Health Organization: Geneva, Switzerland, 2017; p. 60.
9. Ma, C.; Yang, D.; Zhou, J.; Feng, Z.; Yuan, Q. Risk Riding Behaviors of Urban E-Bikes: A Literature Review. Int. J. Environ. Res. Public Health 2019, 16, 2308. [CrossRef] [PubMed]
10. Ma, C.; Hao, W.; Xiang, W.; Yan, W. The Impact of Aggressive Driving Behavior on Driver-Injury Severity at Highway-Rail Grade Crossings Accidents. J. Adv. Transp. 2018, 2018, 1–10. [CrossRef]
11. Wach, W. Structural reliability of road accidents reconstruction. Forensic Sci. Int. 2013, 228, 83–93. [CrossRef] [PubMed]
12. Topolšek, D.; Herbaj, E.A.; Sternad, M. The Accuracy Analysis of Measurement Tools for Traffic Accident Investigation. J. Transp. Technol. 2014, 4, 84–92. [CrossRef]
13. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; IEEE: Kerkyra, Greece, 1999; Volume 2, pp. 1150–1157.
14. Dellaert, F.; Seitz, S.M.; Thorpe, C.E.; Thrun, S. Structure from motion without correspondence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), Hilton Head Island, SC, USA, 15 June 2000; Volume 2, pp. 557–564.
15. Arnold, E. Use of Photogrammetry as a Tool for Accident Investigation and Reconstruction: A Review of the Literature and State of the Practice; Virginia Transportation Research Council: Charlottesville, VA, USA, 2007.
16. Cooner, S.A.; Balke, K.N. Use of Photogrammetry for Investigation of Traffic Incident Scenes; National Technical Information Service: Springfield, VA, USA, 2000.
17. Žuraulis, V.; Levulytė, L.; Sokolovskij, E. Vehicle Speed Prediction from Yaw Marks Using Photogrammetry of Image of Traffic Accident Scene. Procedia Eng. 2016, 134, 89–94. [CrossRef]
18. Fraser, C.; Cronk, S.; Hanley, H. Close-range photogrammetry in traffic incident management. In Proceedings of the XXI ISPRS Congress, Commission V, WG V; The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences: Beijing, China, 2008; Volume XXXVII, pp. 125–128.
19. Pádua, L.; Vanko, J.; Hruška, J.; Adão, T.; Sousa, J.J.; Peres, E.; Morais, R. UAS, sensors, and data processing in agroforestry: A review towards practical applications. Int. J. Remote Sens. 2017, 38, 2349–2391. [CrossRef]
20. Monadrone. Accident Reconstruction & Evidence Gathering. Available online: http://www.monadrone.com/applications/mission-summaries/accident-reconstruction-evidence-gathering/ (accessed on 5 October 2016).
21. Pix4D. A New Protocol of CSI for the Royal Canadian Mounted Police. Available online: https://assets.ctfassets.net/go54bjdzbrgi/2ufTRuEockU20mWCmosUmE/88ad2758f8c1042ab696c974627239e6/RCMP_Pix4D_collision_crime_scene_investigation_protocol.pdf (accessed on 21 August 2018).
22. Analist Group. Traffic Accident Reconstruction. Available online: http://www.analistgroup.com/en/newsolutions-with-drones/solution-for-traffic-accident-reconstruction (accessed on 5 October 2016).
23. senseFly. Setting the Scene—Using Drones for Traffic Accident Reconstruction and Analysis—Waypoint. Available online: http://waypoint.sensefly.com/setting-the-scene-using-drones-for-crash-reconstructionand-analysis/ (accessed on 5 October 2016).
24. Su, S.; Liu, W.; Li, K.; Yang, G.; Feng, C.; Ming, J.; Liu, G.; Liu, S.; Yin, Z. Developing an unmanned aerial vehicle-based rapid mapping system for traffic accident investigation. Aust. J. Forensic Sci. 2016, 48, 454–468. [CrossRef]
25. Díaz-Vilariño, L.; González-Jorge, H.; Martínez-Sánchez, J.; Bueno, M.; Arias, P. Determining the limits of unmanned aerial photogrammetry for the evaluation of road runoff. Measurement 2016, 85, 132–141. [CrossRef]
26. Stáňa, I.; Tokař, S.; Bucsuházy, K.; Bilík, M. Comparison of Utilization of Conventional and Advanced Methods for Traffic Accidents Scene Documentation in the Czech Republic. Procedia Eng. 2017, 187, 471–476. [CrossRef]
27. Bateson, M. Aerial Perspective. Available online: http://test.draganfly.com/pdf/Aerial-Forensics-V5.pdf (accessed on 22 August 2018).
28. Liu, X.; Zhao, L.; Peng, Z.-R.; Gao, T.; Geng, J. Use of Unmanned Aerial Vehicle and Imaging System for Accident Scene Reconstruction; Transportation Research Board: Washington, DC, USA, 2017; p. 14.
29. Liu, Y.; Bai, B.; Zhang, C. UAV Image Mosaic for Road Traffic Accident Scene. In Proceedings of the 2017 32nd Youth Academic Annual Conference of Chinese Association of Automation (YAC), Hefei, China, 19–21 May 2017; Volume 5.
30. Stevens, C.R., Jr.; Blackstock, T. Demonstration of Unmanned Aircraft Systems Use for Traffic Incident Management (UAS-TIM); Texas A&M Transportation Institute: College Station, TX, USA, 2017; p. 73.
31. Mekker, M.M. Evaluation of Consumer Grade Unmanned Aircraft Systems for Photogrammetric Crash Scene Documentation; Transportation Research Board: Washington, DC, USA, 2018; p. 22.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).