ISPRS REPORT OF THE SCIENTIFIC INITIATIVE

ADVANCES IN THE DEVELOPMENT OF AN ALL‐PURPOSE OPEN‐SOURCE PHOTOGRAMMETRIC TOOL
inteGRAted PHOtogrammetric Suite (GRAPHOS)

Nº PAGES: 13   REVISION: 01   DATE: 21/12/2015

INVESTIGATORS (21/12/2015)
Investigators USAL: Diego González‐Aguilera, Pablo Rodríguez‐Gonzálvez, Luis López
Investigators UCLM: Diego Guerrero, David Hernandez‐Lopez
Investigators FBK: Fabio Menna, Erica Nocerino, Isabella Toschi, Fabio Remondino
Investigators UNIBO: Andrea Ballabeni, Marco Gaiani
1. INTRODUCTION AND PARTNERSHIPS
Photogrammetry is facing new challenges and changes, and the scientific
community is responding with new algorithms and methodologies for the automated
processing of imagery. However, such advances are often confined to in-house
research activities, due to the intrinsic difficulties of implementing such solutions
reliably and efficiently and/or the prior theoretical knowledge required for their
proper use. Therefore, non-expert users outside the field of photogrammetry have
difficulties accessing these solutions and applying them to their specific
applications to support their problem solving and decision making. Moreover, trying to
"universalize" the workflow from 2D to 3D for any image acquired in any situation
by non-expert users is not an easy task, especially in situations where the
geometric constraints and radiometric conditions are complex. Some existing
one-click solutions are ideal for non-experts, but they lack transparency, reliability,
repeatability and accuracy evaluation.
With the aim of providing a solution in this context, an open photogrammetric
platform, called GRAPHOS (inteGRAted PHOtogrammetric Suite), has been
developed. GRAPHOS allows users to obtain dense and metric 3D point clouds from
images, and it encloses robust photogrammetric and computer vision algorithms
with the following aims: (i) increase automation (allowing users to get dense 3D point
clouds through a friendly and easy-to-use interface); (ii) increase flexibility
(working with any type of images and cameras); (iii) improve quality (guaranteeing
high accuracy and resolution). Last but not least, the educational component has
been reinforced with didactic explanations about the algorithms and their
performance. In particular, the expert user can test different advanced parameters
and configurations in order to assess and compare different approaches.
The project has been led and managed by USAL in collaboration with UCLM, FBK
and the University of Bologna, which supported the image pre-processing,
photogrammetric processing, dataset creation and system evaluation. The key to
success has been assembling a multidisciplinary and international team with
experience in image analysis and close-range photogrammetry in order to design
and develop an efficient and robust pipeline for automated image orientation and
dense 3D reconstruction.
2. OBJECTIVES
The main goal of the project was to advance the development of an all-purpose
open-source photogrammetric platform. The project aims to bring photogrammetry
and computer vision even closer by realizing an open tool which integrates
different algorithms and methodologies for automated image orientation and dense
3D reconstruction. Automation is not the only key driver of the tool: feature
precision, reliability, repeatability and guidelines for non-experts are also
considered. The tool aims to provide an easy-to-use and easy-to-learn
framework for image-based 3D modelling applied to architectural, heritage and
engineering applications.
Since the photogrammetric process basically involves three different steps (i.e.
extraction and matching of features, camera self-calibration and orientation, and
dense point cloud generation), the specific goals of the scientific initiative are:

 Add image pre-processing algorithms and strategies for improving the image
quality and thus the photogrammetric processes. In particular, Contrast
Preserving Decolorization and Wallis filtering were implemented. In addition,
an automatic estimation of sensor size parameters was included for non-expert
users and unknown cameras.
 Incorporate further tie point detectors and descriptors to improve the image
matching phase, paying particular attention to textureless and repeated-pattern
situations. In particular, the SIFT, ASIFT and MSD detectors were combined with
the SIFT descriptor in order to provide the best keypoint extractor. Furthermore,
two different feature-based matching approaches were used: one based on the
classical FLANN matcher and the other based on a sequential robust matcher
strategy.
 Improve the computational cost by exploiting GPU and parallel computing. In
particular, CUDA programming capabilities were included in order to improve
computation times, especially in dense matching.
 Improve the bundle adjustment performance using different self-calibration
approaches: from a basic calibration which uses just the five inner calibration
parameters, to a more advanced calibration which includes additional parameters
and allows the user to fix some of them.
 Improve the dense matching methods with multi-view approaches in order to
increase the reliability of the 3D results.
The following figure (Figure 1) outlines the proposed pipeline (main GRAPHOS)
including the different algorithms and techniques.
Figure 1. Workflow with the main GRAPHOS highlighted.
3. DATASETS
Some datasets were created and used to test the capabilities and scope of the
developed tool. Different case studies and geometric configurations were
considered. Additionally, smartphone and tablet sensors were tested in order to
assess the flexibility of the tool in data acquisition. As a result, 38 case studies
were used and evaluated in GRAPHOS (Figure 2 and Table 1).
Figure 2. Some of the images of the datasets used in the testing of the technology.
Dataset | Case | Difficulty | Nº of images | Image size (px) | Size (Mp) | Sensor | Focal
Bones | Forensic | MEDIUM | 12 | 4752 × 3168 | 15.1 | Canon 500D | 46 mm
Box_Smartphone | Others | MEDIUM | 18 | 2048 × 1536 | 3.1 | Lumia 1020* | ‐
Car_Black | Automotive | VERY HIGH | 15 | 4032 × 3024 | 12.2 | E‐PM1 | 14 mm
Car_Green_Damage | Automotive | LOW | 9 | 4608 × 3456 | 15.9 | E‐PM2 | 14 mm
Car_Lumia | Automotive | MEDIUM | 6 | 7152 × 5360 | 38.3 | Lumia 1020 | 7.2 mm
Car_SeatLeon_Lumia_5Mp | Automotive | HIGH | 5 | 2592 × 1936 | 5.0 | Lumia 1020* | ‐
Cent | Others | VERY HIGH | 9 | 2560 × 1920 | 4.9 | Cam unknown | ‐
Chip_2 | Others | VERY HIGH | 5 | 2560 × 1920 | 4.9 | Cam unknown | ‐
Cimorro_Facade | Architecture | MEDIUM | 4 | 3872 × 2592 | 10.0 | Nikon D80 | 18 mm
Coins | Others | VERY HIGH | 6 | 4752 × 3168 | 15.1 | Canon 500D | 300 mm
DenseMVS | Architecture | MEDIUM | 11 | 3072 × 2048 | 6.3 | EOS D60 | 20 mm
Facade | Architecture | LOW | 27 | 2256 × 1504 | 3.4 | EOS 450D | 20 mm
Forensic | Forensic | VERY HIGH | 23 | 4032 × 3024 | 12.2 | E‐PM1 | 14 mm
Forensic_Gun | Forensic | LOW | 5 | 4752 × 3168 | 15.1 | Canon 500D | 17 mm
Forensic_Homogeneus | Forensic | VERY HIGH | 48 | 4608 × 3456 | 15.9 | E‐PM2 | 7.5 mm fisheye
Forensic_Sony_XperiaL | Forensic | HIGH | 15 | 3264 × 2448 | 8.0 | XperiaL | 3 mm
Kangoo | Automotive | HIGH | 48 | 5616 × 3744 | 21.0 | EOS 5D mkII | 20 mm
LaCoba | Aerial archaeology | MEDIUM | 5 | 11310 × 17310 | 195.8 | UltraCamXp | 100.5 mm
Multispectral | Others | VERY HIGH | 30 | 1280 × 1024 | 1.3 | MCA6 | 9.6 mm
Paraglider_Tolmo | Aerial archaeology | LOW | 58 | 5616 × 3744 | 21.0 | EOS 5D mkII | 50 mm
Portico | Architecture | MEDIUM | 48 | 4608 × 3072 | 14.2 | Nikon D3100 | 18 mm
Presa_Lodos | Aerial engineering | LOW | 27 | 4032 × 3024 | 12.2 | E‐P1 | 14 mm
SanNicolas | Architecture | VERY LOW | 6 | 4032 × 3024 | 12.2 | E‐P1 | 14 mm
SanSegundo | Architecture | VERY LOW | 4 | 4752 × 3168 | 15.1 | Canon 500D | 17 mm
Skull_Fontanela | Forensic | LOW | 15 | 4608 × 3456 | 15.9 | E‐PM2 | 42 mm
Skull_Ring | Forensic | HIGH | 43 | 4752 × 3168 | 15.1 | Canon 500D | 19 mm
Skull_Stripe | Forensic | LOW | 7 | 4752 × 3168 | 15.1 | Canon 500D | 17 mm
Substation | Engineering | VERY HIGH | 24 | 4752 × 3168 | 15.1 | Canon 500D | 17 mm
Thermal_EPSA | Others | VERY HIGH | 17 | 640 × 480 | 0.3 | NEC TH9260 | 41.7 mm
Thermal_Roofs | Others | VERY HIGH | 7 | 640 × 480 | 0.3 | FLIR SC655 | 24.6 mm
UX5 | Urban aerial | LOW | 21 | 3264 × 4912 | 16.0 | NEX‐5R | 15 mm
Welding | Engineering | LOW | 25 | 4752 × 3168 | 15.1 | Canon 500D | 50 mm
Primiero | Urban aerial | LOW | 9 | 6048 × 4032 | 24.4 | Nikon D3x | 50 mm
Muro | Architecture | VERY LOW | 24 | 4608 × 3072 | 14.1 | Nikon D3100 | 35 mm
Column | Architecture | LOW | 14 | 2816 × 2112 | 5.9 | DSC‐W30 | 6.3 mm
Chigi | Architecture | VERY LOW | 26 | 4416 × 3312 | 14.6 | Canon G10 | 6.1 mm
Ventimiglia | Aerial archaeology | MEDIUM | 80 | 6048 × 4032 | 24.4 | Nikon D3x | 50 mm
Table 1. Case studies used for testing GRAPHOS.

Sensor | Width (mm) | Height (mm)
E‐PM1 | 17.3 | 13
E‐PM2 | 17.3 | 13
Lumia 1020 | 8.8 | 6.6
Cam | unknown | unknown
Canon 500D | 22.35 | 14.9
EOS D60 | 22.35 | 14.9
XperiaL | 5.72 | 4.29
EOS 450D | 22.2 | 14.8
EOS 5D mkII | 36.0 | 24.0
UltraCamXp | 67.86 | 103.86
MCA6 | 6.66 | 5.32
Nikon D3100 | 23.1 | 15.4
E‐P1 | 17.3 | 13
NEC TH9260 | 16 | 12
FLIR SC655 | 10.88 | 8.16
NEX‐5R | 23.4 | 15.6
Nikon D3x | 36 | 24
Canon G10 | 7.44 | 5.58
DSC‐W30 | 5.8 | 4.3
Table 2. Sensor dimensions of the different cameras used for data acquisition.
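The Size (Mp) column follows directly from the pixel dimensions; a quick sanity check in plain Python, using values taken from Table 1:

```python
def megapixels(width_px: int, height_px: int) -> float:
    """Image size in megapixels, rounded to one decimal as in Table 1."""
    return round(width_px * height_px / 1e6, 1)

# 'Bones' dataset: Canon 500D frames of 4752 x 3168 px
print(megapixels(4752, 3168))  # 15.1
```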
The last five case studies included a ground truth by means of control points,
check points and laser scanning point clouds.
4. IMAGE ACQUISITION PROTOCOL
Concerning the photogrammetric procedure, one of the greatest barriers for
non-expert users is the data acquisition. However, whilst it may be technically
simple, the protocol involves several rules (e.g. geometric and radiometric
restrictions or camera calibration) which make the data acquisition difficult and,
thus, determine the quality of the final result. In this sense, some basic rules
have been created to help non-expert users who want to create a 3D object/scene
from some images (through conventional cameras and even smartphones).
Regarding the image acquisition rules, there are two protocols which can be used
by the user:

 Parallel protocol. Ideal for detailed reconstructions in specific areas of an
object. In this case, the user needs to capture between five and nine images
following a cross shape as shown (Figure 3, left). The overlap between images
needs to be at least 80%. The master image or central image (shown in red)
will capture the area of interest. The remaining photos (four) have a
complementary nature and should be taken from the left, right (shown in
purple), top and bottom (indicated in green) with respect to the central image.
These photos should adopt a certain degree of perspective, turning the camera
towards the middle of the interest area. It should be noted that, with the
purpose of a complete reconstruction, each photo needs to capture the whole
area of interest.

 Convergent protocol. Presents an ideal behaviour in the reconstruction of a
360º object (e.g. statue, column, etc.). In this case, the user should capture the
images following a ring path (keeping an approximately constant distance from
the object). It is necessary to ensure a good overlap between images (>80%)
(Figure 3, right). In those situations where the object cannot be captured with a
unique ring, it is possible to adopt a similar procedure based on multiple rings or
a half ring.
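As a rough planning aid for the convergent protocol, the number of stations needed on a single ring can be estimated from the camera's horizontal field of view and the required overlap. A simplified sketch: the 80% overlap figure comes from the text, while the flat-scene approximation and the sample sensor/focal values are assumptions.

```python
import math

def ring_stations(sensor_width_mm: float, focal_mm: float,
                  min_overlap: float = 0.8) -> int:
    """Estimate how many camera stations a full 360-degree ring needs so that
    consecutive images keep at least `min_overlap` overlap.

    Approximation: each new station may rotate by at most
    (1 - min_overlap) * horizontal field of view.
    """
    fov = 2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm))  # horizontal FOV (rad)
    max_step = (1.0 - min_overlap) * fov
    return math.ceil(2.0 * math.pi / max_step)

# Example: an APS-C-sized sensor (~22.3 mm wide, assumed) with a 17 mm lens
print(ring_stations(22.3, 17.0))  # 28
```

Lowering the overlap requirement reduces the station count, which illustrates why the >80% rule costs many more images than a casual acquisition.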
Figure 3. Different adopted acquisition protocols. Parallel protocol (left) and
convergent protocol (right).

5. WORKFLOW FOR DATA PROCESSING
GRAPHOS encloses a photogrammetric pipeline divided into six main steps applied
sequentially: 1) project definition, 2) pre-processing, 3) tie point extraction and
matching, 4) self-calibration/orientation, 5) dense matching and 6) geo-referencing
and quality control.
The result of each step affects the quality of the next one, so achieving good
results in each step is crucial in order to get a good dense 3D point cloud.
Although GRAPHOS requires the user to interact in each step, an automatic
batch process can be generated, allowing the whole pipeline to be performed in an
easy-to-use way, especially for non-expert users.
1. Project definition. The user defines a project's name and description.
Next, the images of the project are selected and can be examined together
with the camera used. GRAPHOS provides an automatic estimation of sensor
parameters (format size) for those unknown cameras.
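Knowing the sensor format is what allows the focal length to be expressed in pixel units for orientation. A minimal sketch of that conversion (the sample values are illustrative assumptions, not the estimation logic used by GRAPHOS):

```python
def focal_in_pixels(focal_mm: float, image_width_px: int,
                    sensor_width_mm: float) -> float:
    """Convert a focal length in mm into pixel units, given the sensor width.

    Pixel pitch = sensor_width_mm / image_width_px, so
    f_px = focal_mm / pixel_pitch.
    """
    return focal_mm * image_width_px / sensor_width_mm

# Example: 17 mm lens, 4752 px wide image, assumed 22.3 mm sensor width
print(round(focal_in_pixels(17.0, 4752, 22.3)))  # 3623
```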
2. Pre-processing. The image pre-processing is an important step since it can
provide better feature extraction and matching, in particular in those cases
where the texture quality is unfavourable. Two pre-processing functions are
available in GRAPHOS:
- Contrast Preserving Decolorization: it applies a decolorization (i.e. RGB to
Gray) to the images while preserving contrast (Lu et al., 2012). Contrary to other
methods, it delivers better images for the successive feature extraction and
matching steps (Figure 4).
Figure 4. Decolorization (RGB to Gray) of the images.
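The problem the method addresses can be seen with the standard luminance conversion, which maps clearly distinct colors of similar luminance to nearly identical grays, destroying contrast that feature extraction could use. The RGB samples below are illustrative, and this is the naive baseline, not Lu et al.'s algorithm:

```python
def luminance(r, g, b):
    """Standard Rec. 601 luma - the naive RGB-to-gray baseline."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Two clearly distinct colors (a reddish and a greenish patch) ...
gray_a = luminance(200, 60, 60)
gray_b = luminance(60, 131, 60)  # hypothetical near-iso-luminant counterpart
# ... collapse to almost the same gray value, so the edge between them vanishes
print(round(gray_a, 1), round(gray_b, 1))  # 101.9 101.7
```

A contrast-preserving decolorization chooses channel weights so that such color differences survive in the gray image.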
- Wallis filter: this enhancement algorithm (Wallis, 1976) is available for
those textureless images or images with homogeneous texture, improving
the keypoint extraction and matching steps. In particular, the Wallis filter
adjusts brightness and contrast of the pixels that lie in certain areas where it
is necessary, according to a weighted average. As a result, the filter
provides a weighted combination of the average and the standard deviation
of the original image. Although default parameters are defined in GRAPHOS,
the average contrast, brightness, standard deviation and window size can be
introduced by the user (as advanced parameters) in case the default values
are not suitable (Figure 5).
Figure 5. Wallis filtering results.
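A minimal, single-window sketch of the Wallis mapping in its usual formulation (GRAPHOS applies it per local window; the parameter names here are generic, not the GRAPHOS ones):

```python
import statistics

def wallis(values, target_mean, target_std, brightness=1.0, contrast=1.0):
    """Apply the Wallis mapping to one window of gray values.

    Each pixel is pushed toward the target mean/standard deviation; with
    brightness = contrast = 1 the window ends up with exactly the target
    statistics.
    """
    m = statistics.fmean(values)
    s = statistics.pstdev(values)
    gain = (contrast * target_std) / (contrast * s + (1.0 - contrast) * target_std)
    return [(v - m) * gain + brightness * target_mean + (1.0 - brightness) * m
            for v in values]

# A flat, low-contrast window stretched to mean 127 and std 50
out = wallis([10, 20, 30, 40], target_mean=127, target_std=50)
```

The contrast weight damps the gain so that nearly flat windows are not amplified into pure noise, which is why it is exposed as an advanced parameter.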
3. Feature extraction and matching (tie point extraction). GRAPHOS
includes three algorithms coming from the computer vision community,
which can be combined and used with different types of data acquisition
(parallel and oblique acquisitions) (Figure 6).
Figure 6. Feature extraction and matching for the tie point extraction step.
8 Tapioca: It is a combined keypoint detector, descriptor and matching
strategy. The keypoint detection and description strategy is based on the
SIFT algorithm developed by (Lowe, 1999) which allows a scale invariant
feature transform.
ASIFT: It is a keypoint detector developed by (Yu and Morel, 2011). ASIFT
considers two additional parameters that control the presence of images
with different scales and rotations. In this manner, the ASIFT algorithm can
cope with images displaying a high scale and rotation difference, common in
oblique scenes. The result is an invariant algorithm that considers changes
in scale, rotations and movements between images.
MSD: It is a keypoint detector developed by (Tombari and Di Stefano, 2014)
which finds maximal self-dissimilarities. In particular, it supports the
hypothesis that image patches highly dissimilar over a relatively large extent
of their surroundings hold the property of being repeatable and distinctive.
Once the keypoints are extracted, a matching strategy should be applied to
identify the correct image correspondences. GRAPHOS contains three
different matching strategies, independently from the protocol followed in
data acquisition:
- Tapioca: this method applies a classical matching strategy based on the
L2-norm to find the point with the closest descriptor for each keypoint
extracted in the initial image. Moreover, Tapioca allows the user to select
between two strategies: (i) for unordered image datasets, the keypoint
matching is recursively calculated for all possible image pairs; (ii) for linearly
structured image datasets, the matching strategy can check only between n
adjacent images.
- Robust matcher approach: in this case a brute-force matching strategy is
used in a two-fold process: (i) first, for each extracted point the distance
ratio between the two best candidates in the other image is compared with a
threshold; (ii) second, the remaining pairs of candidates are filtered by a
threshold which expresses the discrepancy between descriptors. As a result,
the final set of matching points is used to compute the relative orientation
(fundamental matrix) between both images, re-computing the process in
order to achieve optimal results. Additionally, if the number of matched
points is high, a filtering based on the n best points according to their quality
ranking can be applied, making the next step of self-calibration and
orientation easier and more robust.
- FLANN: this is an alternative, optimized method, useful in those cases where
the number of extracted keypoints is very high. It is similar to the previous
method but optimizes the computation time.
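The first stage of the robust matcher, the distance-ratio test between the two best candidates, can be sketched in a few lines. Pure Python with toy 2-D descriptors; real SIFT descriptors are 128-D and the 0.8 threshold is an assumption:

```python
import math

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Brute-force matching with the distance-ratio test: keep a match only
    when the best candidate is clearly better than the second best."""
    def dist(p, q):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

    matches = []
    for i, da in enumerate(desc_a):
        # Rank all descriptors in the other image by L2 distance
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(da, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if dist(da, desc_b[best]) < ratio * dist(da, desc_b[second]):
            matches.append((i, best))
    return matches

# Toy descriptors: each point in A has one obvious counterpart in B
A = [[0.0, 0.0], [10.0, 10.0]]
B = [[0.0, 1.0], [9.0, 9.0], [100.0, 100.0]]
print(ratio_test_match(A, B))  # [(0, 0), (1, 1)]
```

Ambiguous keypoints, whose two best candidates are nearly equidistant, are simply dropped, which is what makes the surviving set suitable for the fundamental-matrix step.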
4. Self-calibration/Orientation. Self-calibration and image orientation are a
fundamental prerequisite for the successive dense point cloud generation.
Additionally, GRAPHOS includes different well-known radial distortion profiles
(i.e. Gaussian, Balanced) together with a converter to export the calibration
results to the most common close-range commercial tools, such as
PhotoScan and PhotoModeler. As a result, different comparisons and analyses
can be carried out through different tools.
The image orientation is performed through a combination between
computer vision and photogrammetric strategies included in the open-source
tool APERO (Deseilligny and Clery, 2011). This combination is fed by the
previously extracted tie points. In a first step, an approximation of the
external orientation of the cameras is calculated following a fundamental
matrix approach. Later, everything is refined by a bundle adjustment
solution based on the Gauss-Newton method (Figure 7). During the processing,
the user can also run self-calibration to simultaneously compute the
interior camera parameters. GRAPHOS allows the use of three different types of
self-calibration models:
 Basic calibration: five interior parameters (f - focal length; x0, y0 -
principal point of symmetry; k1, k2 - radial distortion) are used (Kukelova
and Pajdla, 2007). This model is suitable when using unknown cameras
or cameras with less quality (such as smartphones or tablets), or
when the image network is not very suitable for camera calibration.
 Complete calibration: it implements the Fraser calibration model
(Fraser, 1980) with 12 parameters (f; x0, y0; x1, y1 - distortion center;
k1, k2, k3; p1, p2; scaling and affinity factors).
 Advanced calibration: it allows the user to fix or unfix the different
additional parameters of the previous calibration model.
The results of the bundle adjustment are available in detail in a log file,
while image orientation results are immediately accessible in the 3D frame
of the graphical user interface in the form of a sparse point cloud and image
pyramids.
Figure 7. Retrieved camera poses of a dataset with 27 images.
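The k1, k2 terms of the basic calibration model act on image coordinates through the standard polynomial radial distortion mapping; a short sketch (the coefficient values are arbitrary illustrations, not calibration results):

```python
def apply_radial_distortion(x, y, x0, y0, k1, k2):
    """Map an ideal image point (x, y) to its radially distorted position
    around the principal point (x0, y0):
    x_d = x0 + dx * (1 + k1*r^2 + k2*r^4), and analogously for y."""
    dx, dy = x - x0, y - y0
    r2 = dx * dx + dy * dy
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x0 + dx * factor, y0 + dy * factor

# With zero coefficients the mapping is the identity ...
assert apply_radial_distortion(3.0, 4.0, 0.0, 0.0, 0.0, 0.0) == (3.0, 4.0)
# ... while a negative k1 pulls points toward the center (barrel distortion)
xd, yd = apply_radial_distortion(3.0, 4.0, 0.0, 0.0, -1e-3, 0.0)
```

Self-calibration estimates k1 and k2 (and, in the Fraser model, further terms) together with the orientation inside the bundle adjustment.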
5. Dense matching. GRAPHOS offers the possibility to derive dense point
clouds (Figure 8) with various methods:
- MicMac algorithm (MicMac website), more suited for the parallel protocol;
- PMVS patch-based algorithm (Furukawa and Ponce, 2012);
- SURE (Rothermel et al., 2012), based on the SGM algorithm (Hirschmüller,
2008).
GRAPHOS also includes some export functionalities in order to work with
various point cloud formats (e.g. LAS, LAZ, PLY, etc.).
In order to open the tool to expert users, the different approaches remarked
above can be set up with advanced parameters.
Figure 8. Dense matching results based on a multi-view algorithm.
6. Geo-referencing and quality control. GRAPHOS includes a tool for
geo-referencing the point cloud in a Cartesian reference system through 3D
control points. The tool is accessible under the Tools menu after performing
image orientation. The user is asked to load two files containing the 2D
image coordinates and the 3D coordinates of at least three points of the
object. For performing further analysis and quality checks, tools for picking
points as well as for measuring distances (scale bar checks) and angles
(verticality checks) on the point cloud are also available.
7. FURTHER EXAMPLES
In the following figures some other results achieved with GRAPHOS are
presented. The tool works under the Windows OS and will soon be available for
download.
Figure 9. Matching example (top) and external orientation (bottom) of the
'Column' dataset.
Figure 10. Dense matching result of the 'Chigi' dataset. Color (left) and shaded
(right).
Figure 11. Dense matching model (top) and comparison between GRAPHOS
results and ground truth (bottom) of the 'Muro' case.
8. DISSEMINATION
The Scientific Initiative has been already advertised during some scientific
events held in 2015, such as the ISPRS 3DARCH in Avila (Spain) and the CIPA
Symposium in Taipei (Taiwan). In these events, the involved researchers got
the opportunity to show the project aims during the opening or demo sessions.
The partners have extensively tested the tool and used external testers (e.g.
students) for dissemination and evaluation purposes. Last but not least, several
impact-factor papers have been published using this tool and pipeline.
Currently, GRAPHOS is being refined to be presented at the CATCON contest at
the ISPRS International Congress in Prague. In addition, some of the case
studies tested with GRAPHOS will be presented as a full paper submitted by
USAL at the incoming ISPRS Congress.
9. SCIENTIFIC INITIATIVE BUDGET
Grant provided by ISPRS: 6000 CHF (5735.34 EUR)
Total grant: 5735.34 EUR
Expenses - incl. 21% VAT:
 Staff costs - implementation and validation: 3827.90 €
 Open access journal publications: 2439.32 €
 Travels: 529.22 €
Total expenses: 6796.44 EUR
USAL - as PI and manager of the SI funds - has co-financed the missing funds.
References
Lu, C., Xu, L., Jia, J., 2012. Contrast Preserving Decolorization. IEEE International
Conference on Computational Photography (ICCP).
Wallis, K.F., 1976. Seasonal adjustment and relations between variables. Journal of the
American Statistical Association, 69(345), pp. 18-31.
Lowe, D.G., 1999. Object recognition from local scale-invariant features. Proceedings of
the Seventh IEEE International Conference on Computer Vision, Vol. 2, pp. 1150-1157.
Yu, G., Morel, J.-M., 2011. ASIFT: An Algorithm for Fully Affine Invariant Comparison.
Image Processing On Line.
Tombari, F., Di Stefano, L., 2014. Interest Points via Maximal Self-Dissimilarities. 12th
Asian Conference on Computer Vision (ACCV).
Kukelova, Z., Pajdla, T., 2007. A minimal solution to the autocalibration of radial
distortion. IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis,
MN, USA, 17-22 June 2007, pp. 1-7.
Fraser, C.S., 1980. Multiple focal setting self-calibration of close-range metric cameras.
Photogrammetric Engineering and Remote Sensing, 46, pp. 1161-1171.
Deseilligny, M.P., Clery, I., 2011. APERO, an open source bundle adjustment software
for automatic calibration and orientation of sets of images. ISPRS International Archives
of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 38(5).
Hirschmüller, H., 2008. Stereo Processing by Semiglobal Matching and Mutual
Information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30,
pp. 328-341.
MicMac website. http://www.tapenade.gamsau.archi.fr/TAPEnADe/Tools.html
Furukawa, Y., Ponce, J., 2012. PMVS - Patch-based Multi-View Stereo software.
University of Washington at Seattle.
Rothermel, M., Wenzel, K., Fritsch, D., Haala, N., 2012. SURE: Photogrammetric Surface
Reconstruction from Imagery. Proceedings LC3D Workshop, Berlin.