Appendix S1
Overlap of seasonal home ranges: computation procedure, including the R script, and identification of migration cases.
S1.1 Procedure
Determination of the time windows to compute all combinations of UD overlap
In this appendix we provide the R script (R Development Core Team 2014) used to compute the minimum Utilization Distribution (UD) overlap (Worton 1989; Fieberg and Kochanny 2005) of animal locations grouped by shifting time windows within each sampling year. Specifically, the time windows separate two or three successive groups of fixes, for which the function ‘kerneloverlaphr’ computes the UD overlap in all possible combinations (R package “adehabitatHR”; Calenge 2006).
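For reference, the overlap index computed throughout is Bhattacharyya's affinity (method “BA” of ‘kerneloverlaphr’), expressed as a percentage. The following minimal, stand-alone example (using the ‘puechabonsp’ example data shipped with adehabitatHR, not our data) illustrates the call made by the script:
# Stand-alone illustration of the overlap index used in the script (not our data).
library(adehabitatHR)
data(puechabonsp)                          # example relocation data from adehabitatHR
uds <- kernelUD(puechabonsp$relocs[, 1], kern = "bivnorm",
                same4all = TRUE)           # estimate all UDs on the same grid
round(kerneloverlaphr(uds, method = "BA") * 100, 1)   # pairwise % UD overlap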
The script reads three tables: the location data (Appendix DS1) and the identifiers used to group the locations according to the shifting time windows (Appendix DS2 and Appendix DS3). The table ‘dataset_gps’ (Appendix DS1) includes the animal ids, all GPS locations, their timestamps, and the ‘relative month’, i.e. the progressive count of months since the beginning of each sampling year, fixed at the 15th of February (see also main text). An extract of the dataset in Appendix DS1 is reported below (Table S1.1.1).
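As an illustration, the relative month can be derived from the acquisition date as sketched below. The helper ‘rel_month’ is not part of the script, and it assumes that relative month 1 spans 15 February-14 March, month 2 spans 15 March-14 April, and so on up to month 12 (15 January-14 February).
# Hypothetical helper (not in the script): relative month since the start of the
# sampling year, assumed here to begin on 15 February (month 1 = 15 Feb-14 Mar).
rel_month <- function(acq_date) {
  y <- as.integer(format(acq_date, "%Y"))
  m <- as.integer(format(acq_date, "%m"))
  d <- as.integer(format(acq_date, "%d"))
  start_year <- ifelse(m > 2 | (m == 2 & d >= 15), y, y - 1L)  # sampling year of the fix
  (y - start_year) * 12L + m - 2L + ifelse(d >= 15, 1L, 0L)
}
rel_month(as.Date(c("2010-02-20", "2010-12-01", "2011-02-10")))  # 1 10 12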
The table ‘combination’ (Appendix DS2) includes all possible combinations of the shifting time windows. Among the fields of this table, ‘Id’ is a progressive identifier of all time windows used to subsample the locations in each year and to compute the overlap among UDs. ‘a’ and ‘b’ are the integer variables delimiting the shifting windows, with 1 ≤ a ≤ 11 and 2 ≤ b ≤ 11. They can be read as ‘locations subsampled up to month a (or b)’, where month is the relative month as in Table S1.1.1. Below, we provide an extract of Appendix DS2 (Table S1.1.2). For example, id1 is the combination of two time windows, one including month 1 (subsample until a = 1) and one including months 2 to 12. Id2 is the combination of one time window including months 1 to 2 (subsample until a = 2) and another including months 3 to 12, and so forth until id11, which combines one window including months 1 to 11 with another including month 12. In all these cases (id1-id11), the overlap is computed between two successive home ranges, h1 and h2.
From id12 onwards, the same combinations as above are repeated, but starting from month 2, i.e. an initial subsample including month 1 is added. For example, id14 is the combination of three time windows: one including month 1 (subsample until a = 1), one including months 2-4 (subsample until b = 4), and a third including months 5-12. The same scheme applies to all remaining values of a and b, up to id66, which combines a window including months 1-10, a window including month 11 and a window including month 12. In all cases with three time windows, all possible combinations of overlap between the three home ranges are computed: h0 with h1 (first and second time window), h1 with h2 (second and third time window) and h0 with h2 (first and third time window). This is the default output of the function ‘kerneloverlaphr’.
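For clarity, the 66 combinations described above can be reproduced with a few lines of R (a minimal sketch, not the code used to generate Appendix DS2; field names follow Table S1.1.2):
# Sketch of the time-window combinations: id1-id11 use 'a' only (two windows),
# id12-id66 use 'a' and 'b' (three windows), assuming 12 relative months per year.
two_windows   <- data.frame(a = 1:11, b = NA)
three_windows <- expand.grid(a = 1:10, b = 2:11)
three_windows <- three_windows[three_windows$b > three_windows$a, ]
three_windows <- three_windows[order(three_windows$a, three_windows$b), ]
combination   <- rbind(two_windows, three_windows)
combination$Id <- seq_len(nrow(combination))   # 66 combinations in total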
Finally, in the table ‘combination_complete’ (Appendix DS3), the combinations of time windows are joined to the animal location datasets. An extract is presented in Table S1.1.3. Note the ‘null’ cases, which are due to incomplete portions of the yearly dataset; the total number of combinations for each animal in each year may therefore be smaller than expected.
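The join assigns each fix to a time window (0, 1 or 2) given a and b; in R, this corresponds to the CASE expression embedded in the SQL queries of the script below, roughly as follows (an illustrative equivalent, not the original code):
# Illustrative R equivalent of the SQL CASE used in the script: with b undefined
# (two windows) fixes are labelled 1 (up to month a) or 2 (after a); with b defined
# (three windows) they are labelled 0 (up to a), 1 (months a+1 to b) or 2 (after b).
assign_window <- function(month_rel, a, b = NA) {
  if (is.na(b)) ifelse(month_rel <= a, 1L, 2L)
  else ifelse(month_rel <= a, 0L, ifelse(month_rel <= b, 1L, 2L))
}
assign_window(1:12, a = 2, b = 6)   # 0 0 1 1 1 1 2 2 2 2 2 2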
Table S1.1.1 Extract of the dataset in Appendix DS1 (animal locations)
Table S1.1.2 Extract of Appendix DS2: combinations of time windows used to subsample yearly location datasets and compute UD overlap
Table S1.1.3 Extract of Appendix DS3: combinations of time windows joined to the dataset of animal trajectories.
Procedure to identify migration cases from the minimum seasonal overlap for each animal
in each year
The final output of the script is the minimum seasonal overlap between h0 and h1, and between h1 and h2, across all time windows, for each animal in each year. We defined these as ‘mixed-season’ overlaps, i.e. the overlap between either a winter range and the following summer range (h0 vs h1), or a summer range and the successive winter range (h1 vs h2).
At this point, we had to identify migration cases. We followed two main criteria (summarised in the sketch after this list):
1) First, we compared the minimum seasonal overlap with the median of all minimum seasonal overlaps in that population, and verified whether that case was below this threshold. For those populations in which the median minimum overlap was very low (reindeer, Norway: 3%; red deer, Norway: 8%), we used a threshold of 15%.
2a) If the first criterion was satisfied, we looked at the minimum overlap between ‘same seasons’, i.e. between the two successive winter ranges (h0 vs h2). If this overlap was very low (approximately below 50%), we considered the case a ‘no-return movement’, such as dispersal or nomadism. Otherwise, we retained the case as ‘migration’.
2b) If the first criterion was satisfied, but for a combination with only two home ranges (h1 vs h2, see id1-id11 in Table S1.1.2 above), we also classified the case as a ‘no-return movement’.
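For a single animal-year, the two criteria can be summarised by the following decision sketch (a hypothetical helper reflecting our interpretation of the thresholds; it is not part of the script in S1.2):
# Hypothetical summary of the classification rules above (not in the script).
# popMedian  : median of the minimum mixed-season overlaps in the population (%)
# minMixed   : minimum mixed-season overlap for this animal-year (%)
# sameSeason : overlap between the two successive winter ranges, h0 vs h2 (%);
#              NA when only two home ranges (h1, h2) are available
classify_case <- function(minMixed, sameSeason, popMedian) {
  threshold <- max(popMedian, 15)        # criterion 1, with a 15% floor for very low medians
  if (minMixed >= threshold) return("non-migratory")
  if (is.na(sameSeason)) return("no-return movement")   # criterion 2b: only h1 vs h2 available
  if (sameSeason < 50) return("no-return movement")     # criterion 2a
  "migration"
}
classify_case(minMixed = 10, sameSeason = 70, popMedian = 30)   # "migration"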
S1.2 Script
Coded in the R environment (R Development Core Team 2014).
####################################################################
# overlap_script.R
# Script that computes UD overlap, given position data from marked individuals
#
# Input:  SQLite database with animal fixes, issued by var-id grouping script,
#         or access to PostgreSQL database compatible to Eurodeer format
# Output: homeranges, shapefiles, logfile with all overlaps, summary tables
#
# Usage:
#   source("overlap_script.R")
#   outputs <- main(speciesPrefix="reindeer", migrationThreshold=25,
#                   exportShapefile=TRUE, driver="SQLite", inputDB="reindeer.db",
#                   minFixes=20, outputDir="/home/anne/Rsandbox/", resolution=800)
#
# Copyright (C) 2013 Anne Ghisla, Francesca Cagnacci, Damiano Preatoni
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
require(adehabitatHR)
require(rgeos)
require(rgdal)
# NOTE: requires one of RSQLite or RPostgreSQL, see runsql function
# Accessory functions. The core function is the last one, called main.
runsql <- function(sql, driver, db=inputDB, connection){
  ## Function that runs SQL queries in a safe and short way.
  ## From http://www.r-bloggers.com/querying-an-sqlite-database-from-r/
  if (driver == "SQLite") {
    require(RSQLite)
    dbdriver <- dbDriver(driver)
    connect <- dbConnect(dbdriver, dbname=db)
    closeup <- function(){
      sqliteCloseConnection(connect)
      sqliteCloseDriver(dbdriver)
    }
    dd <- tryCatch(dbGetQuery(connect, sql), finally=closeup)
    return(dd)
  }
  else if (driver == "PostgreSQL") {
    # Connection is opened once by main function. No code here.
    closeup <- function(){
      print("SQL finished")
    }
    dd <- tryCatch({
      res <- dbSendQuery(connection, sql)
      fetch(res, -1)
    }, finally=closeup)
    return(dd)
  }
  else print("DB Driver not available. Please choose SQLite or PostgreSQL.")
}
dataCleaner <- function(dataTable, minFixes) {
  # Removes ids with less than minFixes, discarding the whole var.
  # Remove vars with only one id, or with 0 1 ids but not 2.
  attach(dataTable)
  dataTable$newID <- factor(paste(animal_year_id, var, id, sep='-'))
  dataTable$groupID <- factor(paste(animal_year_id, var, sep='-'))
  dataTable$var <- factor(var)
  dataTable$animal_year_id <- factor(animal_year_id)
  dataTable$id <- factor(id)
  detach(dataTable)
  print(paste(date(), " -- Data cleanup (duplicates, incomplete ids)."))
  # Reject animals with less than minFixes parameter for newID
  data.freq <- as.data.frame(xtabs(~dataTable$newID))
  data.drop <- as.vector(data.freq[data.freq$Freq < minFixes, ]$dataTable.newID)
  # Drop the whole var, if one of its ids has been dropped.
  groupID.drop <- unique(dataTable[dataTable$newID %in% data.drop, "groupID"])
  dataTable_filtered <- dataTable[!dataTable$groupID %in% groupID.drop, ]
  # There are still some incomplete id sets
  dataTable_p <- unique(dataTable_filtered[, c("newID", "animal_year_id", "groupID")])
  groupID.freq <- as.data.frame(xtabs(~dataTable_p$groupID))
  groupID.singleid.drop <- groupID.freq[groupID.freq$Freq < 2, "dataTable_p.groupID"]
  dataTable_filtered <- dataTable_filtered[!dataTable_filtered$groupID %in%
                                             groupID.singleid.drop, ]
  dataTable_filtered <- droplevels(dataTable_filtered)
  groupID.doubleid.check <- as.vector(groupID.freq[groupID.freq$Freq == 2, ]$dataTable_p.groupID)
  # Check the list of IDs and drop the 0 1 ones. Drop the ones with 0.
  groupID.doubleid.zero.drop <- as.vector(unique(dataTable_filtered[
    (dataTable_filtered$groupID %in% groupID.doubleid.check &
       dataTable_filtered$id == "0"),
    "groupID"]))
  dataTable_filtered <- dataTable_filtered[
    !dataTable_filtered$groupID %in% groupID.doubleid.zero.drop, ]
  # After selection, refactor because of the missing levels.
  dataTable_filtered$newID <- factor(dataTable_filtered$newID)
  # Convert acquisition time in POSIXct format, if needed
  acq_time <- strptime(dataTable_filtered$acquisition_time, "%Y-%m-%d %H:%M:%S")
  dataTable_filtered$acquisition_time_posixct <- as.POSIXct(acq_time)
  removed_lines <- nrow(dataTable) - nrow(dataTable_filtered)
  print(paste("Removed", removed_lines, "lines, belonging to:"))
  print("Vars with ids with too few fixes:")
  print(as.character(groupID.drop))
  print("Vars with only one id:")
  print(as.character(groupID.singleid.drop))
  print("Vars with ids 0 1 but not 2:")
  print(groupID.doubleid.zero.drop)
  dataTable_filtered <- droplevels(dataTable_filtered)
  return(dataTable_filtered)
}
homerangeCalculator <- function(datasubset, resolution) {
  # Generalised homerange calculation, the same throughout the script.
  # 1. Select coordinates and grouping factors
  xys <- datasubset[, c('utm_x', 'utm_y')]
  names(xys) <- c("X", "Y")
  ids <- datasubset[, 'id']
  ids <- droplevels(as.data.frame(ids))
  # 2. Compute a grid around the fixes.
  buffer_x <- as.integer((max(xys$X) - min(xys$X)) * 0.5/100) * 100
  buffer_y <- as.integer((max(xys$Y) - min(xys$Y)) * 0.5/100) * 100
  buffer <- max(buffer_x, buffer_y)
  xy_sp <- SpatialPoints(data.frame(
    x = c((as.integer((max(xys$X) + 100)/100) * 100 + buffer),
          (as.integer((min(xys$X) - 100)/100) * 100 - buffer)),
    y = c((as.integer((max(xys$Y) + 100)/100) * 100 + buffer),
          (as.integer((min(xys$Y) - 100)/100) * 100 - buffer))))
  customGrid <- ascgen(xy_sp, cellsize = resolution)
  spdf <- SpatialPointsDataFrame(coords=xys, data=ids,
                                 #proj4string=CRS() #@TODO
                                 )
  # 3. Calculate homeranges
  output <- kernelUD(spdf, grid=customGrid, kern="bivnorm")
  return(output)
}
overlapCalculator <- function(inputPoints, animalID, resolution) {
  print(paste(date(), " --Calculating overlaps."))
  overlaptable = data.frame()
  for (var in levels(inputPoints$var)) {
    print(paste("Processing overlapCalculator of ", var))
    datasubset <- inputPoints[inputPoints$var == var, ]
    # Call kernelUD with a specified grid resolution.
    uds <- homerangeCalculator(datasubset, resolution)
    overlap <- kerneloverlaphr(uds, method = "BA")*100
    # Logging
    print(overlap)
    groupid <- unique(datasubset$groupID)
    genericDF <- data.frame(animal_year_id=animalID, var=var,
                            group_ID=groupid)
    # Save results in overlaptable
    overlap12 <- data.frame(idA="1", idB="2",
                            overlapPercent=overlap["1","2"],
                            typeover="mixed_season")
    overlap12 <- merge(genericDF, overlap12)
    overlaptable = rbind(overlaptable, overlap12)
    if ("0" %in% levels(factor(datasubset$id))) {
      overlap01 <- data.frame(idA="0", idB="1",
                              overlapPercent=overlap["0","1"],
                              typeover="mixed_season")
      overlap01 <- merge(genericDF, overlap01)
      overlaptable = rbind(overlaptable, overlap01)
      overlap02 <- data.frame(idA="0", idB="2",
                              overlapPercent=overlap["0","2"],
                              typeover="same_season")
      overlap02 <- merge(genericDF, overlap02)
      overlaptable = rbind(overlaptable, overlap02)
    }
  }
  return(overlaptable)
}
animalOverlapCalculator <- function(animalID, minFixes, resolution, speciesPrefix,
                                    driver, inputDB, connection){
  # For a given animal ID, computes overlaps and processes results in a table.
  overlaptablepairs <- data.frame()
  if (driver == "SQLite") {
    sqlQuery <- paste("select * from", speciesPrefix,
                      "where animal_year_id = '", animalID, "'")
  }
  else if (driver == "PostgreSQL") {
    animal_id <- strsplit(as.character(animalID), split="-")[[1]][1]
    year <- strsplit(as.character(animalID), split="-")[[1]][2]
    sqlQuery <- paste("
      SELECT
        dataset_gps.animals_id,
        CONCAT(dataset_gps.animals_id, '-', dataset_gps.yearx) AS animal_year_id,
        dataset_gps.acquisition_time,
        dataset_gps.utm_x,
        dataset_gps.utm_y,
        dataset_gps.yearx,
        dataset_gps.month_rel,
        case
          when month_rel <= a and b is null then 1
          when month_rel <= a and b is not null then 0
          when (month_rel > a and month_rel <= b and b is not null) then 1
          when (month_rel > a and b is null) then 2
          WHEN (month_rel > b) and b is not null then 2
        end id,
        id::integer as var
      FROM
        ws_fem.dataset_gps,
        ws_fem.combination
      WHERE
        animals_id = ", animal_id, " and
        yearx = ", year, "order by id, acquisition_time;")
  }
  animalTableRaw <- runsql(sqlQuery, driver=driver, db=inputDB, connection=connection)
  animalTable <- dataCleaner(animalTableRaw, minFixes)
  overlapTable <- overlapCalculator(animalTable, animalID, resolution)
  # Select the minimum of the mixed season overlaps
  overlapTableMixed <- overlapTable[overlapTable$typeover == "mixed_season", ]
  overlapTableMin <- overlapTableMixed[which.min(overlapTableMixed$overlapPercent), ]
  overlapOtherpair <- overlapTable[overlapTable$var == overlapTableMin$var, ]
  overlaptablepairs <- rbind(overlaptablepairs, overlapOtherpair)
  # For all factors, drop unused levels
  overlaptablepairs <- droplevels(overlaptablepairs)
  return(overlaptablepairs)
}
summaryStatistics <- function(overlaptablepairs, migrationThreshold, outputDir) {
  # Computes proportion of migratory animals wrt the given threshold and
  # saves summary tables in csv files.
  all_overlaps_belowt <- overlaptablepairs[
    overlaptablepairs$overlapPercent < migrationThreshold,
    "animal_year_id"]
  overlaps_belowt <- unique(all_overlaps_belowt)
  # @TODO move to end, this function is run on each animal separately
  propmigr <- length(overlaps_belowt)/length(unique(overlaptablepairs$animal_year_id))
  print(paste("Proportion of migrating animals with threshold", migrationThreshold,
              "is", propmigr))
  summaryoverlaptable <- as.data.frame(xtabs(~overlaptablepairs$group_ID))
  overlaptablepairs_belowt <- overlaptablepairs[
    overlaptablepairs$overlapPercent < migrationThreshold, ]
  summaryoverlaptablebelowt <- as.data.frame(xtabs(~overlaptablepairs_belowt$group_ID))
  summaryoverlaps <- merge(summaryoverlaptable, summaryoverlaptablebelowt,
                           by.x="overlaptablepairs.group_ID",
                           by.y="overlaptablepairs_belowt.group_ID",
                           all.x=TRUE)
  names(summaryoverlaps) <- c("group_ID", "Freq_overlaps", "Freq_overlaps_belowt")
  summaryoverlaps[is.na(summaryoverlaps$Freq_overlaps_belowt),
                  "Freq_overlaps_belowt"] <- 0
  write.csv(summaryoverlaps, file.path(outputDir, "summaryoverlaps.csv"),
            row.names=FALSE)
  lookuptable_overlaps <- as.data.frame(
    xtabs(~summaryoverlaps$Freq_overlaps + summaryoverlaps$Freq_overlaps_belowt))
  # Reorder for readability
  lookuptable_overlaps <- lookuptable_overlaps[
    order(lookuptable_overlaps$summaryoverlaps.Freq_overlaps), ]
  write.csv(lookuptable_overlaps, file.path(outputDir, "lookuptable_overlaps.csv"),
            row.names=FALSE)
}
homerangeMetricsCalculator <- function(groupIDlist, minFixes, resolution,
                                       speciesPrefix, driver, inputDB,
                                       connection) {
  # Recalculates homeranges of the groupIDs with minimum overlaps and computes
  # statistics (maxima, centers and borders distances)
  homerange_box <- list()
  for (animalGID in groupIDlist) {
    ayd <- strsplit(as.character(animalGID), split="-")[[1]][1]
    avar <- strsplit(as.character(animalGID), split="-")[[1]][2]
    if (driver == "SQLite") sqlQuery <- paste("select * from", speciesPrefix,
                                              "where animal_year_id = '", ayd,
                                              "' and var = '", avar, "'")
    else if (driver == "PostgreSQL") {
      # animalGID is actually animal-year-var, so can be split three times
      year <- avar
      thevar <- strsplit(as.character(animalGID), split="-")[[1]][3]
      sqlQuery <- paste("select * from ws_fem.extract4r_anne(", ayd, ",", year,
                        ",", thevar, " )")
    }
    animalTableRaw <- runsql(sqlQuery, driver=driver, db=inputDB,
                             connection=connection)
    animalTableRaw$animal_year_id <- factor(paste(animalTableRaw$animals_id,
                                                  animalTableRaw$yearx, sep="-"))
    animalTableRaw$var <- animalTableRaw$id
    animalTableRaw$id <- animalTableRaw$sub_cluster
    datasubset <- dataCleaner(animalTableRaw, minFixes)
    print(paste("Processing", animalGID))
    homeranges <- homerangeCalculator(datasubset, resolution)
    # adehabitatHR::findmax finds local maxima, so it's not suitable.
    # sp::coordinates gets the center of the polygon that contains 20% of the UD
    polygons20 <- try(getverticeshr(homeranges, percent=20), silent=TRUE)
    if ("try-error" %in% class(polygons20)) {
      dFC <- NA
      peakpoints <- NA
    } else {
      peakpoints <- coordinates(polygons20)
      distancesFromCentres_matrix <- spDists(peakpoints)
      dFC <- as.data.frame(distancesFromCentres_matrix)
      if (nrow(dFC) == 3) {
        names(dFC) <- c("0", "1", "2")
        row.names(dFC) <- names(dFC)
      } else if (nrow(dFC) == 2) {
        names(dFC) <- c("1", "2")
        row.names(dFC) <- names(dFC)
      }
    }
    polygons90 <- try(getverticeshr(homeranges, percent=90))
    if ("try-error" %in% class(polygons90)) {
      distancesFromBorders <- NA
    } else {
      distancesFromBorders <- gDistance(polygons90, byid=TRUE)
    }
    homerange_box[[animalGID]]$hrs <- homeranges
    homerange_box[[animalGID]]$maxima <- peakpoints
    homerange_box[[animalGID]]$distancesFromCentres <- dFC
    homerange_box[[animalGID]]$distancesFromBorders <- distancesFromBorders
  }
  return(homerange_box)
}
summaryTableFunction <- function(overlaptablepairs,
migrationThreshold,
minFixes, homerangeBox,
speciesPrefix, driver,
inputDB, connection) {
homerange_table <- data.frame()
# Fill the final table in
for (animalgroupID in levels(overlaptablepairs$group_ID)) {
ayd <- strsplit(as.character(animalgroupID), split="-")[[1]][1]
avar <- strsplit(as.character(animalgroupID), split="-")[[1]][2]
if (driver == "SQLite") sqlQuery <- paste("select * from", speciesPrefix,
"where animal_year_id = '", ayd,
"' and var = '", avar, "'")
else if (driver == "PostgreSQL") {
# animalGID is actually animal-year-var, so can be split three times
aid <- avar
thevar <- strsplit(as.character(animalgroupID), split="-")[[1]][3]
sqlQuery <- paste("
SELECT
dataset_gps.gps_data_animals_id,
CONCAT(dataset_gps.animals_id, '-', dataset_gps.yearx) AS
animal_year_id,
dataset_gps.animals_id,
dataset_gps.acquisition_time,
dataset_gps.utm_x,
dataset_gps.utm_y,
dataset_gps.yearx,
dataset_gps.month_rel,
case
when month_rel <= a and b is null then 1
when month_rel <= a and b is not null then 0
when (month_rel > a and month_rel <= b and b is not null) then 1
when (month_rel > a and b is null) then 2
WHEN (month_rel > b) and b is not null then 2
end id,
id::integer as var
FROM
ws_fem.dataset_gps,
ws_fem.combination
WHERE
animals_id = ", ayd, " and
yearx = ", aid, " and
combination.id = ", thevar, "order by id, acquisition_time; ")
}
animalTableRaw <- runsql(sqlQuery, driver=driver, db=inputDB,
connection=connection)
datasubset <- dataCleaner(animalTableRaw, minFixes)
datasubset <- datasubset[,c("groupID", "newID", "id",
"acquisition_time_posixct")]
HR1_start_date <- min(sort(datasubset[datasubset$id == "1",
"acquisition_time_posixct"]))
HR2_start_date <- min(sort(datasubset[datasubset$id == "2",
"acquisition_time_posixct"]))
HR1_days <- HR2_start_date - HR1_start_date
HR2_days <- max(sort(datasubset[datasubset$id == "2",
"acquisition_time_posixct"])) - HR2_start_date
HR1_HR2isBelowThreshold <- overlaptablepairs[(overlaptablepairs$group_ID == animalgroupID &
overlaptablepairs$idA == "1" &
overlaptablepairs$idB == "2"),
"overlapPercent"] < migrationThreshold
if (length(homerangeBox[[animalgroupID]]$distancesFromCentres) > 1) {
HR1_HR2centersDistance <- homerangeBox[[animalgroupID]]$distancesFromCentres["1", "2"]
} else {
HR1_HR2centersDistance <- NA
}
if (length(homerangeBox[[animalgroupID]]$distancesFromBorders) > 1) {
HR1_HR2bordersDistance <- homerangeBox[[animalgroupID]]$distancesFromBorders["1", "2"]
} else {
HR1_HR2bordersDistance <- NA
}
minOverlap <- min(overlaptablepairs[overlaptablepairs$group_ID
== animalgroupID &
overlaptablepairs$typeover
== "mixed_season",
"overlapPercent"])
minOverlapPair <- paste(
overlaptablepairs[overlaptablepairs$group_ID
== animalgroupID &
overlaptablepairs$overlapPercent == minOverlap, "idA"],
overlaptablepairs[overlaptablepairs$group_ID
== animalgroupID &
overlaptablepairs$overlapPercent == minOverlap, "idB"])
if ("0" %in% levels(datasubset$id)) {
maxOverlapMixedSeason <max(overlaptablepairs[overlaptablepairs$group_ID == animalgroupID &
overlaptablepairs$typeover
== "mixed_season",
"overlapPercent"])
maxOverlapMixedSeasonPair <- paste(
overlaptablepairs[overlaptablepairs$group_ID
== animalgroupID &
overlaptablepairs$overlapPercent > minOverlap &
overlaptablepairs$typeover
== "mixed_season", "idA"],
overlaptablepairs[overlaptablepairs$group_ID
== animalgroupID &
overlaptablepairs$overlapPercent > minOverlap &
overlaptablepairs$typeover
== "mixed_season", "idB"])
sameSeasonOverlap <overlaptablepairs[overlaptablepairs$group_ID == animalgroupID &
overlaptablepairs$typeover == "same_season",
"overlapPercent"]
HR0_start_date <- min(sort(datasubset[datasubset$id == "0",
"acquisition_time_posixct"]))
HR0_days <- HR1_start_date - HR0_start_date
HR0_HR1isBelowThreshold <overlaptablepairs[(overlaptablepairs$group_ID == animalgroupID &
overlaptablepairs$idA
== "0" &
overlaptablepairs$idB
== "1"),
"overlapPercent"] <
migrationThreshold
HR0_HR2isBelowThreshold <overlaptablepairs[(overlaptablepairs$group_ID == animalgroupID &
overlaptablepairs$idA
== "0" &
overlaptablepairs$idB
== "2"),
"overlapPercent"] <
migrationThreshold
if (length(homerangeBox[[animalgroupID]]$distancesFromCentres)
> 1) {
HR0_HR1centersDistance <homerangeBox[[animalgroupID]]$distancesFromCentres["0", "1"]
HR0_HR2centersDistance <homerangeBox[[animalgroupID]]$distancesFromCentres["0", "2"]
} else {
HR0_HR1centersDistance <- NA
HR0_HR2centersDistance <- NA
}
if (length(homerangeBox[[animalgroupID]]$distancesFromBorders)
> 1) {
HR0_HR1bordersDistance <homerangeBox[[animalgroupID]]$distancesFromBorders["0", "1"]
HR0_HR2bordersDistance <homerangeBox[[animalgroupID]]$distancesFromBorders["0", "2"]
} else {
HR0_HR1bordersDistance <- NA
HR0_HR2bordersDistance <- NA
}
} else {
maxOverlapMixedSeason <- NA
maxOverlapMixedSeasonPair <- NA
sameSeasonOverlap <- NA
HR0_start_date <- as.Date.POSIXct(NA)
HR0_days <- as.difftime(0, units="days")
HR0_HR1isBelowThreshold <- NA
HR0_HR2isBelowThreshold <- NA
HR0_HR1centersDistance <- NA
HR0_HR2centersDistance <- NA
HR0_HR1bordersDistance <- NA
HR0_HR2bordersDistance <- NA
}
homerange_row <- data.frame(
animal_year_id = strsplit(animalgroupID, "-")[[1]][1] ,
Var = strsplit(animalgroupID, "-")[[1]][2],
group_ID = animalgroupID,
HR0_start_date = HR0_start_date,
HR1_start_date = HR1_start_date,
HR2_start_date = HR2_start_date,
HR0_days = HR0_days,
HR1_days = HR1_days,
HR2_days = HR2_days,
HR0_HR1isBelowThreshold = HR0_HR1isBelowThreshold,
HR0_HR1centersDistance = HR0_HR1centersDistance,
HR0_HR1bordersDistance = HR0_HR1bordersDistance,
HR1_HR2isBelowThreshold = HR1_HR2isBelowThreshold ,
HR1_HR2centersDistance = HR1_HR2centersDistance,
HR1_HR2bordersDistance = HR1_HR2bordersDistance,
HR0_HR2isBelowThreshold = HR0_HR2isBelowThreshold ,
HR0_HR2centersDistance = HR0_HR2centersDistance,
HR0_HR2bordersDistance = HR0_HR2bordersDistance,
minOverlapPair = minOverlapPair,
minOverlap = minOverlap,
maxOverlapSameSeason = maxOverlapMixedSeason,
maxOverlapSameSeasonPair = maxOverlapMixedSeasonPair,
sameSeasonOverlap = sameSeasonOverlap
)
homerange_table <- rbind(homerange_table, homerange_row)
}
return(homerange_table)
}
main <- function(speciesPrefix, driver, inputDB, outputDir,
migrationThreshold=25,
minFixes=20, resolution=500,
exportShapefile=FALSE){
# Function that computes overlaps, summary statistics and homerange
# polygons as shapefiles.
# 0. Logging. #@TODO: use logging package.
logfilename <- paste(speciesPrefix, Sys.Date(), "_logfile.txt",
sep="")
sink(file.path(outputDir, logfilename), append=F, split=T)
if (driver == "SQLite") printdb <- inputDB
else if (driver == "PostgreSQL") {
printdb <- inputDB[["dbname"]]
require(RPostgreSQL)
dbdriver <- dbDriver(driver)
connection <- dbConnect(dbdriver, dbname=inputDB[["dbname"]],
host=inputDB[["host"]],
port=inputDB[["port"]],
user=inputDB[["user"]],
password=inputDB[["password"]])
}
print(paste("Script run on ", speciesPrefix, ", with db = ",
printdb,
", migrationThreshold =", migrationThreshold,
", minFixes = ", minFixes, ", resolution =",
resolution))
# 1. Extract the list of animal_year_ids from the given column
if (driver == "SQLite") {
animal_year_id_list <- runsql(paste("select distinct
animal_year_id from",
speciesPrefix),
driver=driver, db=inputDB)
}
else if (driver == "PostgreSQL") {
sql = "select distinct concat(animals_id, '-', yearx) as
animal_year_id from ws_fem.combination_complete ascending;"
animal_year_id_list <- runsql(sql, driver=driver, db=inputDB,
connection=connection)
}
# 2. Calculate overlaps and save them in table
overlaptablepairs <- data.frame()
for (animalID in animal_year_id_list$animal_year_id) {
print(paste(date(), "Now processing", animalID))
animal_overlap_table <- animalOverlapCalculator(animalID,
minFixes, resolution,
speciesPrefix,
driver, inputDB,
connection)
overlaptablepairs <- rbind(overlaptablepairs,
animal_overlap_table)
}
overlaptablepairs <- droplevels(overlaptablepairs)
write.csv(overlaptablepairs, file.path(outputDir,
"overlaptablepairs.csv"),
row.names=F)
# 3. Compute summary statistics and write them on files
summaryStatistics(overlaptablepairs, migrationThreshold,
outputDir)
# 4. Compute homeranges of groupIDs with minimum overlap
hrContainer <- homerangeMetricsCalculator(unique(overlaptablepairs$group_ID),
minFixes, resolution, speciesPrefix,
driver, inputDB, connection)
# Optionally, export shapefiles of the recalculated homeranges
if (exportShapefile == TRUE) {
for (animalGID in names(hrContainer)) {
animalhr <- hrContainer[[animalGID]]$hrs
polygons90 <- try(getverticeshr(animalhr, percent=90))
if ("try-error" %in% class(polygons90)) {
print(paste("Error: no 90% contour has been calculated for
", animalGID,
sep=""))
} else {
layername <- paste(animalGID, sep="-")
writeOGR(polygons90, dsn=outputDir, layer=layername,
driver="ESRI Shapefile")
}
}
}
# 5. Produce output table
summaryTable <- summaryTableFunction(overlaptablepairs,
migrationThreshold,
minFixes, hrContainer,
speciesPrefix,
driver, inputDB, connection)
write.csv(summaryTable, file.path(outputDir,
"summary_full_table.csv"),
row.names=FALSE)
save.image(file.path(outputDir,".RData"))
print(paste(date(), " -- Finished!"))
if (driver == "PostgreSQL") {
dbDisconnect(connection)
dbUnloadDriver(dbdriver)
}
sink()
result <- list(overlaptablepairs, summaryTable, hrContainer)
return(result)
}
References
Calenge, C. (2006) The package “adehabitat” for the R software: a tool for the analysis of space and
habitat use by animals. Ecological Modelling, 197, 516-519.
Fieberg, J. & Kochanny, C. O. (2005) Quantifying home range overlap: the importance of the
utilization distribution. Journal of Wildlife Management, 69, 1347–1359.
R Development Core Team (2014) R: A language and environment for statistical computing. R
Foundation for Statistical Computing, Vienna, Austria. URL http://www.R-project.org.
Worton, B. J. (1989) Kernel methods for estimating the utilization distribution in home-range
studies. Ecology, 70, 164–168.