
PRECISION FORESTRY
PROCEEDINGS OF THE SECOND INTERNATIONAL
PRECISION FORESTRY SYMPOSIUM
UNIVERSITY OF WASHINGTON COLLEGE OF FOREST RESOURCES
FERIC, THE FOREST ENGINEERING RESEARCH INSTITUTE OF CANADA
IUFRO, THE INTERNATIONAL UNION OF FOREST RESEARCH ORGANIZATIONS
USDA FOREST SERVICE, PACIFIC NORTHWEST RESEARCH STATION
SEATTLE, WASHINGTON
JUNE 15-17, 2003
Printed in the United States of America
All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical,
including photocopy, recording, or any information storage or retrieval system, without permission in writing from the publisher, the
College of Forest Resources.
Institute of Forest Resources
College of Forest Resources
Box 352100
University of Washington
Seattle, WA 98195-2100
(206) 685-0887
Fax: (206) 685-3091
http://www.cfr.washington.edu/Pubs/publist.htm
Proceedings of the Second International Precision Forestry Symposium, sponsored by the University of Washington College of Forest
Resources, the Precision Forestry Cooperative, Seattle, Washington, FERIC, the Forest Engineering Research Institute of Canada, Vancouver, BC,
IUFRO, The International Union of Forest Research Organizations, Vienna, Austria, and the USDA Forest Service, Pacific Northwest
Research Station, Resource Management and Productivity Program, Portland, Oregon.
Additional copies of this book may be purchased from the University of Washington Institute of Forest Resources, Box 352100,
Seattle, Washington 98195-2100.
For additional information on the Precision Forestry Cooperative please visit http://www.precisionforestry.org
TABLE OF CONTENTS

Acknowledgments ... VI
Preface ... VIII

Keynote Speakers

Opening Remarks and Welcome to the Second International Precision Forestry Symposium
B. Bruce Bare ... 1

Precision Forestry – The Path to Increased Profitability!
Bill Dyck ... 3

Precision Technologies: Data Availability Past and Future
Daniel L. Schmoldt and Alan J. Thomson ... 9

Plenary Session A: Precision Operations and Equipment – Moderator, Alex Sinclair

Multidata and Opti-Grade: Two Innovative Solutions to Better Manage Forestry Operations
Pierre Turcotte ... 17

A Test of the Applanix POS LS Inertial Positioning System for the Collection of Terrestrial Coordinates Under a Heavy Forest Canopy
Stephen E. Reutebuch, Ward W. Carson, and Kamal M. Ahmed ... 21

Ground Navigation Through the Use of Inertial Measurements, a UXO Survey
Mark Blohm and Joel Gillet ... 29

Precision Forestry Operations and Equipment in Japan
Kazuhiro Aruga ... 31

Precision Forestry Applications: Use of DGPS Data to Evaluate Aerial Forest Operations
Jennie L. Cornell, John Sessions and John Mateski ... 37

Plenary Session B: Remote Sensing and Measurement of Forest Lands and Vegetation – Moderator, Tom Bobbe

Estimating Forest Structure Parameters on Fort Lewis Military Reservation Using Airborne Laser Scanner (LIDAR) Data
Hans-Erik Andersen, Jeffrey R. Foster, and Stephen E. Reutebuch ... 45

Developing “COM” Links for Implementing LIDAR Data in Geographic Information System (GIS) to Support Forest Inventory and Analysis
Arnab Bhowmick, Peter P. Siska and Ross F. Nelson ... 55

Large Scale Photography Meets Rigorous Statistical Design for Monitoring Riparian Buffers and LWD
Richard A. Grotefendt and Douglas J. Martin ... 61

Forest Canopy Models Derived from LIDAR and INSAR Data in a Pacific Northwest Conifer Forest
Hans-Erik Andersen, Robert J. McGaughey, Ward W. Carson, Stephen E. Reutebuch, Bryan Mercer, and Jeremy Allan ... 65

Enhancing Precision in Assessing Forest Acreage Changes with Remotely Sensed Data
Guofan Shao, Andrei Kirilenko and Brett Martin ... 67

Automatic Extraction of Trees from Height Data Using Scale Space and SNAKES
Bernd-M. Straub ... 75

A Tree Tour with Radio Frequency Identification (RFID) and a Personal Digital Assistant (PDA)
Sean Hoyt, Doug St. John, Denise Wilson and Linda Bushnell ... 85

Plenary Session C: Terrestrial Sensing, Measurement and Monitoring – Moderator, Steve Reutebuch

Value Maximization Software – Extracting the Most from the Forest Resource
Hamish Marshall and Graham West ... 87

Costs and Benefits of Four Procedures for Scanning on Mechanical Processors
Glen E. Murphy and Hamish Marshall ... 89

Evaluation of Small-Diameter Timber for Value-Added Manufacturing – A Stress Wave Approach
Xiping Wang, Robert J. Ross, John Punches, R. James Barbour, John W. Forsman and John R. Erickson ... 91

Early Experience with Aroma Tagging and Electronic Nose Technology for Log and Forest Products Tracking
Glen Murphy ... 97

Plenary Session D: Design Tools and Decision Support Systems – Moderator, Glen Murphy

Modeling Steep Terrain Harvesting Risks Using GIS
Jeffrey D. Adams, Rien J.M. Visser and Stephen P. Prisley ... 99

Use of the Analytic Hierarchy Process to Compare Disparate Data and Set Priorities
Elizabeth Coulter and John Sessions ... 109

Use of Spatially Explicit Inventory Data for Forest Level Decisions
Bruce C. Larson and Alexander Evans ... 115

Elements of Hierarchical Planning in Forestry: A Focus on the Mathematical Model
S. D. Pittman ... 117

Update Strategies for Stand-Based Forest Inventories
Stephen E. Fairweather ... 119

A New Precision Forest Road Design and Visualization Tool: PEGGER
Luke Rogers and Peter Schiess ... 127

Harvest Scheduling with Aggregation Adjacent Constraints: A Threshold Acceptance Approach
Hamish Marshall, Kevin Boston and John Sessions ... 131

Preliminary Investigation of Digital Elevation Model Resolution for Transportation Routing in Forested Landscapes
Michael G. Wing, John Sessions and Elizabeth D. Coulter ... 137

Comparison of Techniques for Measuring Forested Areas
Derek Solmie, Loren Kellogg, Michael G. Wing and Jim Kiser ... 143

Posters and Abstracts

Can Tracer Help Design Forest Roads?
Abdullah E. Akay ... 150

CPLAN: A Computer Program for Cable Logging Layout Design
Woodam Chung and John Sessions ... 150

List of Contributors ... 151
List of Attendees ... 155
Second International Precision Forestry Symposium Agenda ... 160
ACKNOWLEDGMENTS
Many individuals and organizations contributed to the success of this symposium and the collection of papers in this
volume. The conference was sponsored by the University of Washington College of Forest Resources, the Precision Forestry
Cooperative, the USDA Forest Service, Pacific Northwest Research Station, Resource Management and Productivity Program,
Portland, Oregon, FERIC, the Forest Engineering Research Institute of Canada, Vancouver, BC, and IUFRO, the International
Union of Forest Research Organizations of Vienna, Austria.
The program was planned by a committee consisting of:
Chair
Professor David Briggs, College of Forest Resources, University of Washington
Scientific Sub-Committee:
Hans-Erik Andersen, College of Forest Resources, University of Washington
David Briggs, University of Washington
Ward Carson, USDA Forest Service, PNW Research Station
Megan O’Shea, University of Washington
Steve Reutebuch, USDA Forest Service, PNW Research Station
Professor Gerard Schreuder, Acting Director, Precision Forestry Cooperative
This committee developed the program and recruited authors for the topics presented. The lead authors, in turn, worked
with coauthors and consulted with others to make this a truly international effort. The time and effort of all these contributors
resulted in excellent presentations and posters. Papers were reviewed before acceptance for publication, and the input of the
many reviewers is much appreciated. The session moderators Alex Sinclair, Vice President, FERIC Western Division, Vancouver,
BC, Tom Bobbe, Remote Sensing Applications Center, USDA Forest Service, Salt Lake City, UT, Steve Reutebuch, Team
Leader, Silviculture and Forest Models Team, USDA Forest Service, Pacific Northwest Research Station, Seattle, WA, Glen
Murphy, Professor, Forest Engineering, Oregon State University, Corvallis, OR, and B. Bruce Bare, Rachel B. Woods Professor of Forest Management, and Dean, College of Forest Resources, University of Washington provided program linkage and
kept the conference on schedule.
We would also like to thank the Precision Forestry Board of Directors for their support in the early planning of the symposium.
Chair, Rex McCullough, Weyerhaeuser Company
Wade Boyd, Longview Fibre
Craig D. Campbell, Boise Cascade Corporation
David Crooker, Plum Creek Timber
Suzanne Flagor, Seattle Public Utilities Watersheds Division
Sherry Fox, Washington Forest Protection Association
John Gorman, Simpson Investment Company
Peter Heide, Washington Forest Protection Association
Edwin Lewis, Bureau of Indian Affairs Yakima Agency
John Mankowski, Washington State Department of Fish and Wildlife
John Olsen, Potlatch Corporation
Charley Peterson, USDA Forest Service
Mike Renslow, Spencer Gross, Inc.
Bryce Stokes, USDA Forest Service
David Warren, The Pacific Forest Trust
Laurie Wayburn, The Pacific Forest Trust
Maurice Williamson, Forestry Consultant
Megan O’Shea of the University of Washington College of Forest Resources was responsible for conference arrangements
and management. Andrew Cooke edited and produced the proceedings CD. Their efforts and those of the registration workers,
projectionists, and other volunteers were critical for the smooth operation of the conference and are greatly appreciated.
Production of this volume was coordinated through the Institute of Forest Resources; a special thanks to John Haukaas for
his assistance in editing. Megan O’Shea handled final editing and publication preparation. The capable efforts of this skilled
group are gratefully acknowledged.
The proceedings are reproductions of papers submitted to the organizing committee after peer review. We wish to acknowledge the efforts of all the scientists involved in the peer reviews of these proceedings papers. No attempt has been made to
verify results. Specific questions regarding papers should be directed to the authors.
David Briggs, Chair
Second International Precision Forestry Symposium
PREFACE
Precision forestry is no longer optional in managing forests and producing forest products. Driven by both the
ever increasing scrutiny over the protection of forest resources, and the economic need to use forest products to the fullest,
professional foresters and product managers are demanding quality, detailed information about the forests they manage and the products they make. I am confident that the presentations and discussions we have in the next few days will lead to the implementation of technologies that will move forestry to a higher level of information resolution. Please take note of the fine corporate
exhibitors featured on the following page. I am grateful for their participation in this symposium. I want to give special
thanks to the College of Forest Resources faculty, staff and students who worked on the Symposium Planning Committee, as
volunteers or scientific reviewers.
Gerard Schreuder
Acting Director, Precision Forestry Cooperative
Opening Remarks and Welcome to the Second International Precision Forestry Symposium
B. BRUCE BARE, DEAN, COLLEGE OF FOREST RESOURCES, UNIVERSITY OF WASHINGTON
WELCOME
The College of Forest Resources, University of Washington is pleased to host this second international symposium dedicated to Precision Forestry. We hope your participation and ideas will help focus attention on innovative technologies and
approaches to guide the future of forestry and the forest industries in Washington State and elsewhere. A few words about our
College.
HISTORY OF ADVANCED TECHNOLOGY INITIATIVE (ATI)

• The UW’s Precision Forestry Cooperative is one research cluster funded by the State’s Advanced Technology Initiative (ATI).
• The ATI is a partnership between the Legislature, private industry, and the research universities of the State of Washington.
• The Washington State Legislature funded six Advanced Technology Initiatives during the 1999/2001 biennium.
• Each ATI “cluster” is expected to generate new industries or transform existing industries of importance to Washington State.
• Each “cluster” is also a bridge between research, education, and new economic activity. New leaders are being educated to help transform the industries vital to the State’s economic future.
PRECISION FORESTRY

• Employs high technology sensing and analytical tools to support site-specific, economic, environmental, and sustainable decision-making for the forest sector.
• Provides highly repeatable measurements, actions, and processes to grow and harvest trees, as well as to protect and enhance riparian areas, wildlife habitat, esthetics, and other environmental resources.
• Provides valuable information and linkages between resource managers, the environmental community, manufacturers, and public policy.
• Links the practice of sustainable forestry and conversion facilities to produce the best economic returns in an ecologically and socially acceptable manner.
INNOVATIVE TECHNOLOGIES

• GPS and GIS for precise ground measurements
• Remote sensing (LIDAR, INSAR)
• Wireless systems
• Real-time process control scanners
• Visualization
• Decision support systems (integrated data systems)
PRECISION FORESTRY COOPERATIVE FOCUS

• Decision Support Systems
• Remote Sensing and Geospatial Analysis
• Silvicultural and Ecological Engineering
• Precision Operations and Terrestrial Sensing

PRECISION FORESTRY COOPERATIVE GOAL

• To develop tools and processes that increase the precision of forest data to support better decisions about forests — their services and products, through a collaborative effort with private landowners, public agencies, manufacturers, and harvesters.

PRECISION FORESTRY SYMPOSIUM

• Brings scientists, managers, and developers together to work collaboratively.
• Will provide insights into the current “state of the art” and provide a springboard for new ideas and innovations.

We hope you enjoy the symposium, the campus, and the city during your stay with us.
Precision Forestry – The Path to Increased Profitability!
BILL DYCK
Abstract: The market wants good wood and the forest industry wants to see greater profitability. Precision Forestry has a role
to play in both developing tools to find the best wood in existing forests and trees, and also in providing the knowledge to grow
better wood in the first place. New technologies are being developed that can help us evaluate forests at a macro-level, enhance
our ability to estimate stand volumes, and even measure the properties of individual trees and logs. These tools should lead to
greater profitability as higher value wood can be allocated to higher value markets. Increased profitability can also be achieved
by understanding the interactions of genetics, site and silvicultural management to grow more valuable forests.
INTRODUCTION

I have been in the Science & Technology game for over 25 years and consider myself to be a sceptical optimist, or perhaps an optimistic sceptic, when it comes to the development and application of technology for the forest industry. I learned early on that ideas are cheap, that there is relatively little that is really new, that patenting is often a costly waste of time and money, and that implementation is everything. Therefore, I want to start off by making one point with regard to Precision Forestry research and technology and its application to industrial forestry:

Get the market to provide the lead. Technology driven research is almost certainly doomed to fail.

The main objective of this presentation is to give you my views on where precision forestry technology can play a role in the industrial sector and specifically how it can help the forest industry become more profitable.

The term “precision forestry” means different things to different people. To a geneticist it probably means precisely matching the genetics of a tree species to the site to maximise growth. To an industrial forester it might mean precisely managing a forest to match what the market needs. But to a conservationist it probably means being able to precisely manage a forest to optimise environmental benefits.

What the website for this symposium said was: “Precision Forestry uses high technology sensing and analytical tools to support site-specific, economic, environmental, and sustainable decision-making for the forestry sector. It provides for highly repeatable measurements, actions, and processes to initiate, cultivate, and harvest trees, as well as enhance riparian zones, wildlife habitat, and other environmental resources. It provides valuable information linkages between resource managers and processors. The Symposium will bring together scientists to present state-of-the-art information on topics such as precision sensing techniques, operations-sensing techniques and their use for decision-making.”

What this meant to me was that the audience was going to be interested in a wide range of topics, all designed to improve the precision by which we manage forests, whether for commercial, environmental, or social benefits.

A keynote paper is supposed to be thought provoking and is generally delivered at a fairly high level to set the scene for the rest of the meeting. I’m going to attempt to do that, but I will focus on one side of forestry – “industrial forestry” – i.e., that part of forestry that seeks to make money from trees – and I’ll look at how I think Precision Forestry can improve profitability.
HOW PRECISE DOES FORESTRY NEED TO BE?

When I started out in forestry, and even relatively recently, there was an expression commonly used by foresters: “close enough for forestry”. What that really meant was that in forestry you didn’t have to be very precise; after all, forestry was just cutting down trees and getting them to a sawmill where logs were made into lumber and shipped off to market – generally a pretty crude business.

The business has changed, primarily as logs have become more valuable and cost cutting puts the squeeze on operations. But how precise do we really need to be? A tree is a tree is a tree! At least if it is the same species, the same size, and the same shape it should be, correct? However, that is not the case. All logs are different, even if they are clonal and even if they come from the same tree. In Figure 1, several hundred logs from two radiata pine plantation forests in New Zealand were selected for similar grade and tested for sound speed, a measure of intrinsic wood stiffness and other properties. The results were very revealing, as there was wide variability in wood properties from similar looking logs.

The prices paid for the logs were all the same, but the value of the structural lumber from the fastest and stiffest logs was much greater than that of the industrial grade from the slower logs. Of course it is the forest owner that is missing out on this premium! But it is also the mill owner that is wasting resources processing inferior logs in an attempt to make premium products.

Figure 1: The sound speeds (km/s) of a large sample of similar logs from two geographically distinct radiata pine forests in New Zealand, demonstrating the large variability in the intrinsic wood properties of the logs; the histogram plots frequency against speed over roughly 2.4–4.0 km/s. (Industrial Research Ltd data.)

There is another expression that I’ve heard more recently and that is perhaps more relevant: “Forestry isn’t rocket science; it’s more complicated than that!” I’m not sure who coined the phrase, but I believe it is appropriate. The results in Figure 1, and in fact the underlying technology underpinning what is now a commercial tool, are the work of an ex-space physicist, Dr Mike Andrews, currently working at Industrial Research Ltd in New Zealand.

Clearly, a more precise grading system for the radiata pine logs shown in Figure 1 would have seen greater log segregation based on intrinsic wood properties and a greater price differential in the market. However, a word of caution: for Precision Forestry technologies to be useful, we need to be careful that they don’t overcomplicate the business of forestry, or the winners will be concrete and steel. On the other hand, new log and wood segregation technologies can play a big role in protecting wood’s place in the market by providing better quality control and product assurance for wood products. There was an example during the Sydney Olympic construction period when a very large laminated beam failed in use because the manufacturer had used low strength components, although they looked just as good as the previous material he had used. In this case the application of appropriate technology would have saved his business.

Back to the question “how precise does forestry need to be?” The answer is “it depends”, and it mainly depends on the market being targeted.

One of the main reasons that forestry is complicated, and that there is a need for greater precision, is the enormous complexity of trees. After decades of research we still fail to understand some of the fundamental principles of tree growth and wood. We can certainly grow big trees, and quickly, but we don’t fully understand the linkages between growing trees and creating high value wood. While this complexity creates problems, it also creates opportunities, at least for those who take the time and effort to really understand the nature of trees.

I suggest there are two paths that, if followed, will make forestry more precise and lead to greater profitability: (1) know what you’ve got, and (2) grow what the market wants.

PATH 1 – KNOW WHAT YOU’VE GOT

Regardless of whether the market is for forests, standing trees, logs, lumber, or fibre, you need to know what you’ve got and where it is. This path should perhaps therefore read, “Know what you’ve got – in terms the market values”.

The forest market wants to know where the forests are, how big they are, and what’s in them. It also wants to know the “risk” – how healthy are the forests, what’s their nutritional status, and are there potential liabilities associated with high value conservation areas, endangered species habitat, or cultural sites that need to be protected? We are reasonably good at valuing forests on a very broad basis, but we’re not all that good at rapidly determining risk values, such as nutritional and health status.

The tree or stumpage market would like to know, much more precisely than we are currently able to determine, both the volume that is in the forest and its value. Ideally, it would also like to know just how variable the quality is in the stand, both within and between trees.

The log market wants to know volumes by external log grade (log dimensions, sweep, knot size and spacing), but it is also starting to ask for more than this – hence the crude measure of strength in some log markets of “rings per inch”. Ideally we would like to be able to match specific logs to specific markets – for lumber, veneer and even chips – but we are only just starting to make progress in this area.
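Before turning to the technologies themselves, the segregation idea can be made concrete. The sketch below is illustrative only and is not from the paper: the 3.2 km/s cutoff and the grade labels are assumptions standing in for whatever thresholds a mill and its markets would actually negotiate.

```python
# Illustrative sketch of acoustic log segregation. The 3.2 km/s cutoff and
# the grade labels are assumptions, not values from the paper or from IRL.

def assign_grade(speed_km_s: float, cutoff: float = 3.2) -> str:
    """Route a log to a market class from its measured sound speed."""
    return "structural" if speed_km_s >= cutoff else "industrial"

# Hypothetical speeds for visually similar logs, spanning roughly the
# 2.4-4.0 km/s range shown in Figure 1.
speeds = [2.6, 3.0, 3.3, 3.8, 2.9, 3.5]
batches = {"structural": [], "industrial": []}
for s in speeds:
    batches[assign_grade(s)].append(s)

print(batches)
# {'structural': [3.3, 3.8, 3.5], 'industrial': [2.6, 3.0, 2.9]}
```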
Technologies to tell us what we have:

Seeing the Trees from the Sky

Satellite technologies have been very disappointing, at least for industrial forestry applications, and to my knowledge there have been very few examples of satellite technology improving forest management and helping to increase revenue flow. Aerial photography from planes and helicopters, on the other hand, has been the workhorse of remote sensing. More recently in this area we’ve seen tree counting algorithms developed that enable automatic determination of the number of trees per hectare from digital imagery, and also much better forest boundary identification than has been possible in the past.

There have been new developments in remote sensing technology that show potential for industrial forestry. I’m particularly excited about the promise of hyperspectral imagery for assessing disease infestation and nutrient deficiencies in production forests. Researchers at CSIRO in Australia (Nicholas Coops, in press) have demonstrated the application of airborne hyperspectral imagery for the assessment of Dothistroma, a needle blight disease of radiata pine (Figure 2), and plans are underway to launch this as a commercial service, thus enabling more rapid and more accurate detection of the disease. As well as showing promise for monitoring forest health, it appears that the technology also has application for determining tree species, assessing the nutritional status of forests, and monitoring the spread of weeds.

Figure 2: Hyperspectral image of Dothistroma infection in radiata pine, distinguishing healthy from un-healthy canopy. (N. Coops et al., CSIRO, in press.)
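As a rough illustration of how imagery like that in Figure 2 might be screened, the sketch below computes a simple red/near-infrared vegetation index per pixel. This is not the method of Coops and colleagues (the paper does not describe it); the reflectance values and the 0.5 threshold are invented for illustration.

```python
import numpy as np

# Toy screening of canopy pixels with NDVI = (NIR - red) / (NIR + red).
# Healthy foliage reflects strongly in the near-infrared, so a low index
# can indicate stress. The values and the 0.5 threshold are illustrative
# assumptions, not a published Dothistroma detection rule.
red = np.array([[0.04, 0.05], [0.10, 0.04]])  # red-band reflectance
nir = np.array([[0.45, 0.40], [0.18, 0.50]])  # near-infrared reflectance

ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 2))   # [[0.84 0.78] [0.29 0.85]]
print(ndvi < 0.5)          # True where the index suggests unhealthy canopy
```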
Seeing the Trees from the Ground

While being able to measure everything remotely is the dream, we still need to measure trees and forests from the ground.

Traditional ground-based forest inventories give a reasonable estimate of tree volumes by species and, to some extent, external log grades, but as a rule we tend to be rather poor at estimating the true value of stands. New technologies are coming onto the market that will change all this. Instead of simply estimating log grades, laser tree-profiling technology collects digital images of trees that can be fed into optimising software to predict values as well as volumes per hectare. Currently this is too expensive to be used as more than just an audit tool, but it does point to the future. However, what is really exciting is our increasing ability to see into trees and quantify some of the more valuable intrinsic properties such as density, stiffness and specific fibre properties.

Wood is a very complex biomaterial that is poorly understood by forest managers and scientists alike, hence the reliance on “seat-of-the-pants forestry”. In the absence of reliable technology, local knowledge and experience become extremely important for estimating the inherent wood properties, and hence the value, of stands and trees. That reliance is changing as new tools become available to assess stands for wood properties.

SilviScan-2, developed by Rob Evans and his team at CSIRO in Australia, has proved to be an extremely valuable tool for improving our evaluation of wood properties. This technology can measure fibre properties from an increment core up to 1000 times faster than traditional lab-based methods. The tool is especially useful for measuring microfibril angle, which in the past has been expensive and somewhat unreliable to determine, as well as for determining other cellular properties that translate into useful market values. Many forestry companies are now using SilviScan-2 to improve their inventory assessments of wood properties and values by analyzing increment cores from selected trees.

Director (also known as Hitman), a technology developed by IRL in New Zealand and owned and marketed by CHH FibreGen, is being used to determine the structural properties, and by inference the value, of logs (Figure 3a). This technology is based on time-of-flight sonics and has been demonstrated to reliably predict the average stiffness of lumber produced from logs. Because the MOE of the log is simply equal to density times the speed of sound squared, the technology is basically measuring fibre properties that influence macro properties such as stiffness, strength, and stability. The challenge is to interpret what the log is “saying” and translate this information into meaningful values (Figure 3b).

Figure 3a: Application of Director sonics technology to pine logs (IRL photo).

Figure 3b: Sonics trace from a radiata pine stem (Site G11, Stem #4, 25.8 m): amplitude versus frequency, 0–700 Hz.

Director is currently being used to identify resource stiffness by stand and by forest, and to a lesser extent to segregate individual logs for high value structural processing, mainly LVL.

The future is the development of technology to cost-effectively assess the properties of standing trees, thereby greatly improving value estimates of stands and forests – of particular interest for the stumpage or forests market. Research is currently focused on hand-held tools to measure the density and stiffness of trees.
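The time-of-flight relation quoted above (MOE equals density times the speed of sound squared) can be made concrete with a small worked example. The density, speed, and resonance peak below are illustrative assumptions, not IRL or CHH FibreGen data; the relation v = 2 x L x f0 is the standard formula for the fundamental longitudinal resonance of a stem free at both ends.

```python
# Dynamic modulus of elasticity from acoustics: MOE = density * velocity^2.
# All numbers below are illustrative assumptions, not IRL or CHH data.

def moe_gpa(density_kg_m3: float, velocity_m_s: float) -> float:
    """Acoustic MOE in GPa from density (kg/m3) and sound speed (m/s)."""
    return density_kg_m3 * velocity_m_s ** 2 / 1e9

def velocity_from_resonance(length_m: float, f0_hz: float) -> float:
    """Sound speed from the fundamental longitudinal resonance of a stem
    free at both ends: v = 2 * L * f0."""
    return 2.0 * length_m * f0_hz

# A log with an assumed green density of 1000 kg/m3 and a 3.2 km/s speed:
print(round(moe_gpa(1000.0, 3200.0), 1))  # 10.2 (GPa)

# A 25.8 m stem (the length noted in Figure 3b) with an assumed 60 Hz peak:
print(round(velocity_from_resonance(25.8, 60.0)))  # 3096 (m/s)
```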
Of course, having managed to precisely locate your forests and determine what is in the trees, you then need to ensure you extract maximum value, which gets into log processing technology. That is a whole new subject; so instead of going forward down that path, let us go back to the beginning – growing what the market wants.

PATH 2 – GROW WHAT THE MARKET WANTS

The market wants good wood!

We now know how to grow big trees quickly, but we have yet to determine how to reliably produce good wood. Critics of this statement claim that the definition of “good wood” depends on the end use, and while this is true to a certain extent, we can definitely state what constitutes “bad wood”. If we don’t know what good wood is, or at least understand what we don’t want in wood, then we really haven’t got much hope of growing what the market wants.

For decades forest growers have focused on very unsophisticated markets – the log market and the tree market (or the market for forests). Consequently we’ve either strived to grow volume per hectare or volume per stem. Other than branch size and straightness, there has been relatively little focus on wood quality.

Even worse, in some countries, especially where the forests are government owned, there has only been a focus on getting a new crop started and above the weeds, with little attention to where the final harvest might end up. What we should really be asking, of course, is: what does the market want, and how do we go about growing what the market wants? We need to put a lot more thinking into growing wood than we have in the past.

I’ve yet to meet a forester who can tell me the formula for growing good wood.

Geneticists who believe that genetics is the answer to everything have taken us for a ride down the wrong forest path. And for the most part we’ve basically ignored the influence of site and management on wood properties. This is somewhat understandable in that until recently we haven’t been able to rapidly measure wood properties, but all that’s changing and we no longer have the lack of tools as an excuse.

We are now entering what I consider to be the fourth stage of industrial forestry – High Performance Wood (Figure 4). Stage 1 consisted simply of felling old growth natural forest and processing the logs into lumber. Stage 2 was the start of plantation forestry, in which vast areas of trees were planted, often to replace dwindling supplies from natural forests. In Stage 3 we started to get more sophisticated and practiced more intensive silviculture, resulting in improved genetics, faster growth, and generally fatter trees by an early age, but the focus was simply on what the trees looked like and had nothing to do with wood quality. Stage 4 is what I optimistically refer to as the stage of “high performance wood”, and this is where Precision Forestry comes in – Precision Forestry for growing better wood, that is.

Figure 4: The four stages of industrial forestry (Stage 1 – Old growth forests; Stage 2 – Plantations; Stage 3 – Growing big and straight; Stage 4 – High performance wood). Timing will vary by country.

I do not accept the argument that we cannot predict what the market wants 25 years out, or even 100 years out. If we look back at what the market for wood products has wanted for the last 1000 years, it has been for strong and stable material and, for some applications, attractive wood. Getting a bit of durability is a bonus, but if we focus on strong and stable wood then we have to be on the right track. We could add to this list a few other obvious parameters, such as defect-free (internal checks) and blemish-free wood (resin pockets etc). For some fibre applications we will want strong, coarse fibres, whereas for others we want short fibres, and often fibres that will collapse to give a soft finish. But let’s keep this simple and focus on solid wood products. Our inability to reliably produce strong, stable, and attractive wood at a reasonable cost is at least partially responsible for the introduction of substitute products, including wood composites.

So, where does Precision Forestry come into growing a good crop of trees or, better stated, growing good wood? It comes in everywhere, starting with genetics and ending with
harvesting. In fact, it goes back to the molecular level and understanding how wood cells respond to site and management stimuli. The reason that some NZ radiata pine is “trash”, and treated as such in some markets, is not because there is anything wrong with the species; it’s the way we’ve grown some of our forests.

The so-called “S-diagram” in Figure 5 provides a framework to indicate why we need to be more precise when growing trees. The key is to have a reasonable understanding of what the market wants, and then to have a much better handle on how genetics, site, and management impact what is produced.

Figure 5: The quality of the lumber and fibre products derived from a tree is dependent upon the ultrastructure and molecular properties of the wood cells, which are in turn determined by a combination of genetics, site, and management. (S-diagram from University of Canterbury.)

Producing good wood products that the market wants is very similar to producing good wine. It’s getting the combination of genetics, site selection, and management regime just right, and then of course processing the grapes in the best possible way.

Certainly genetics is important to wine; hence we are able to choose a cab sav or a sauvignon blanc depending on our mood at the time. But, as any wine drinker knows, it’s possible to buy a very good cab sav and also a very poor one. What makes the difference? That really comes down to site – particularly soil and climate – and then to management – how the vines were managed to optimize the quality of the grapes. Skill in processing good grapes is also very critical of course, but as wine makers have told me, “anyone can make a good wine in a good vintage year”.

In forestry we tend to be very imprecise not only in selecting genetics (we have tended to choose what grows fastest), but also in how we select our sites and manage our trees. In fact we don’t really even attempt to manage trees; we tend to manage stands and forests.

The best pruned radiata pine stands in New Zealand are worth twice the value of the average pruned stands, and the reason is a combination of genetics, site, and management practices.

It is the influence of both the site and the management of the individual trees that results in the differences in wood quality that we get within a forest. We are only just starting to really understand how much genetics pre-determines wood quality, and that trees growing next to each other on basically the same site and with the same management will produce very different wood.

A Move to Genotype Forestry?

One way to overcome the effect of genetics on variable wood properties is to use genotype (clonal) forestry. This is done for short rotation pulp and paper hardwood crops and is starting to be employed for longer-rotation conifers. However, while this will certainly reduce variability, there is no guarantee in my mind that it will lead to higher value forests, as I’m not convinced that we have even begun to understand the relationship between genetics and wood quality. The promises of molecular biology and tools such as “marker-aided selection” are there, but are they real or are they just hype?

Choosing the wrong genotype (i.e., clone) can have disastrous results unless we are 100% certain that we have gotten everything correct, not just the one trait that we might be selecting for. We see this in our genetics programmes, where we’ve focused on volume and form and have had virtually no understanding of how selecting for these traits would affect other features that are actually more important for the ultimate wood market.

A Move to Site-specific Forestry?

I suggest that we can make more progress producing what the market wants, i.e., good wood, by moving to more site-specific forestry.

I do not believe that we are ready to match genotypes to site, but we can certainly match families to site, avoiding some of the more serious impacts of disease, waterlogging, and certain wood quality defects, and also making gains in productivity.

We can also begin to be more precise in managing sites to produce better trees and better wood, by first of all understanding the effects of soils and climate on wood properties. We can also be much more precise in how we manage weed competition by, for example, careful chemical selection and precision application, and in how we manage nutrition, which should be on a site basis rather than a stand basis.

There is also a need for much better understanding as to the impacts of silvicultural interventions, such as pruning
and thinning, shelter wood management etc. on final wood quality. It appears that thinning may have a detrimental effect on wood quality, as it stimulates cell growth, producing steeper microfibril angles in the secondary cell walls, leading to reduced stiffness and stability. It also appears to be responsible for increasing the amount of compression wood in a stem, possibly a response to greater wind movement in the stand.

The Challenges are There!

Forestry needs to focus on genetics, site, and forest management practices that will produce the best cells, which in turn will lead to the best wood. Sounds difficult? You bet it is! The alternative is “hit and miss forestry”, which in many cases will lead to reasonable wood, in some cases will be a total disaster, and, if we are really lucky, will lead to really good wood that the market can’t get enough of. Ironically, in New Zealand, the best wood that I know of has come from untended fire-regenerated stands of radiata pine that were harvested at age 50.

Clearly there is a role for Precision Forestry to focus on the underlying mechanisms that influence wood quality, as ultimately the market that we are targeting is not simply demanding better forests; it is demanding better wood, or it will turn to substitutes.

CONCLUSIONS

This is a keynote paper, so I will wrap up with a couple of salient lessons for Precision Forestry, and to do this I want to go back to the point that I made at the start of this presentation:

Get the market to provide the lead. Technology driven research is almost certainly doomed to fail.

Forestry research and technology developments have not been all that good at really understanding what the market wants, but that has not necessarily been entirely our fault. Often we ask for input, but we ask the wrong people or we ask the wrong question, and therefore we get the wrong answer. Or we get the correct answer but we do not know enough to provide the solution.

There is little doubt in my mind that what the market for industrial forestry really wants is good wood. We have two ways to produce this good wood: (1) find it in our existing forests, and (2) grow it in the first place.

To become more profitable we need to better understand what wood is, particularly what good wood is; what key properties we need to measure at all stages of the value chain; and what this means to the end user. We then need to develop tools that can help us to make these assessments, but we have to be able to implement this technology in such a way that the costs do not outweigh the benefits.

Precision Forestry is required both to enhance our ability to “know what we’ve got” and to understand how to “grow what the market wants”: we need research and technology to understand what is in the forest, right down to the tree and log level, and we need to be much more precise in matching genetics with site and silvicultural management.

A greatly improved ability to know what we have got and to grow what the market wants will lead to greater profitability, provided we can do all this cost effectively.

ACKNOWLEDGEMENTS

Several people have provided input to this paper and I particularly thank Mike Andrews (IRL), Nicholas Coops (CSIRO), Peter Carter (CHH), Rick Walden (Smart Forests), and Brian Rawley.
Precision Technologies: Data Availability Past and Future
DANIEL L. SCHMOLDT AND ALAN J. THOMSON
Abstract: Current precision and information technologies portend a future filled with improved capabilities to manage
natural resources with greater skill and understanding. Whereas practitioners have historically been data limited in their
management activities, they now have increasing amounts of data and concomitant sophistication in data management, analysis, and decision tools. Expanding precision forestry technologies beyond traditional reliance on optics-based tools offers new
opportunities for forest resource interrogation. However, as data become more immediate and information rich, traditional
views of data availability may lose some relevance. Technical constraints are becoming less daunting and social and ethical
responsibility and sensitivity are gaining prominence. Because data that might be deemed private or protected can be readily
moved and combined with other data, new concerns arise about who uses those data and how they use them. Capabilities built
into newer analysis and decision support tools add further apprehension about privacy, accuracy, and accessibility. It does not
require an extraordinary string of suppositions to imagine when regulation and legal decisions will promulgate certain safeguards for data management and for software that handles data. Such restrictions could likely limit data availability in
currently unforeseen ways—counteracting, to some extent, technology-based advances in data availability. Still, irrespective
of those possibilities, there are actions that natural resource professionals can take to lessen potential future restrictions on data
availability. These include defining an “information space” for each precision technology, understanding language and knowledge flows, and planning for integrated systems and processes that holistically address information needs and uses.
INTRODUCTION

One of the prominent thrusts in agriculture, food, and natural resource systems brings increasingly data-rich environments into everyday use. Consumers, for example, might soon be able to scan a package of chicken in the refrigerator and know exactly where the product was grown and processed, and what its current shelf-life is based on bacterial counts (Pathirana et al. 2000). In other cases, land managers might have real-time information about fuel loads across large geographic areas and simulate a large number of hypothetical ignition scenarios based on 24-hour weather forecasts. For sustainably grown timber, chain-of-custody verification might rely on programmable identification devices (Simula et al. 2002) or chemical markers (q.v., companion article in this volume). Significant scientific and technical hurdles still remain, and modifiers such as “soon” and “large number” are as yet undefined, but theoretically there is nothing to prevent either scenario from becoming reality, as feasibility is well established in both cases.

These data have the capacity to tell us more about the world in which we live and work, and also can alter our professional, and emotional, viewpoints of that world and how we interact with it. In the examples above, we don’t currently give much thought to bacterial counts on the food we eat, although we wash food, such as chicken, as a matter of habit. Once bacterial counts become part of our everyday information environment, though, we have to alter our consciousness to incorporate a more “dirty-aware” reality that accepts our existence with microbes. Similarly, a fire manager, presented with large amounts of real-time data and the capability to manipulate it, begins to see the landscape in a truly dynamic way. Now, decisions that he or she makes can be continually updated, or tweaked, as conditions change. Dynamic decision making creates increased confidence and control for the manager, and minimizes the likelihood that field judgments will be questioned later. In fact, decision support systems can track the decision making process for subsequent audit.

Not only are more data available more often, but the time between measurement and application is shrinking rapidly. Whereas, at one time, field crews collected volumes of information on the ground, recorded data with pencil and paper, and entered data into a computer back in the lab for analysis, it is now possible, in some instances, to collect more spatially dense data much faster without going into the field. The former process could take many days (or weeks) for relatively low resolution, while current technologies can potentially reduce the time to just hours. Such just-in-time information promises to bring decision making out from behind the computer display and into the field (e.g., Clark 2001). Here, then, managers and field operators can react more quickly to changing conditions and have a broader, more informed picture of the resource being managed.
The advantages of high spatial and temporal data resolution for researchers and practitioners are obvious, so data
volume and rapidity have been primary scientific thrusts.
However, we are entering a phase where subtle shifts are
occurring in what “data availability” means. While there
are still many forest and forest product characteristics that
we would like to measure and apply effectively, the technical hurdles to doing so are not insurmountable. Those previous science and technology limits to data availability may
soon be supplanted with other availability issues, such as
data-use policy and legal restrictions. Then, the issue becomes not one of technically capable information technology (IT), but rather one of human-centric IT (Schmoldt
2001). That is, how well these information tools fit within
organizational and social cultures and how well they reflect
the users’ ethical standards and expectations. Just because
physical hurdles to data generation have been reduced does
not mean that limitations on data application, based on ethical concerns (Thomson and Schmoldt 2001), won’t be
equally problematic.
In the sections that follow, we describe and illustrate the
three phases of precision technologies: basic research; engineering and technology development; and application and
adoption. While most of the companion papers in this volume deal with the latter two phases, the basic research phase
cannot be completely ignored as it provides the scientific
basis for a technology’s capabilities and limitations. The
second issue addressed in this paper is the growing importance of ethics in data collection and data use. There needs
to be awareness by scientists and practitioners regarding
ethical standards of conduct and how they may dictate development and use of precision technologies.
PRECISION TECHNOLOGIES AND CURRENT DATA AVAILABILITY
Definition
Before proceeding further, it is important to provide a
definition for the broad area of “precision technologies.”
For most intents and purposes inherent in this paper, the
following should suffice:
Instrumentation, mechanization, and information technologies that measure, record, process, analyze, manage, or actuate multi-source data of high spatial and/or temporal resolution to enable information-based management practices or to support scientific discovery.
This definition applies equally well to technologies that
might be employed in agriculture, food, and environmental
systems. While the definition doesn’t explicitly state so,
biophysical, chemical, and engineering sciences provide the
bases for these technologies, and information technologies
(IT) often provide the application mode—although, in some
cases practices are realized through the use of electro-mechanical devices driven by microprocessors to actuate a response.
Basic Research

Technologies, i.e., tools, processes, and materials, ensue from scientific discovery. Biophysical and chemical phenomena must first be understood before they can be translated into useful devices and products. For example, the optical properties of the atmosphere and plants, and the physics of collecting light at great distances, must be known before remote sensing makes sense. Similarly, the mathematics of optimizing constrained production functions must be developed before solution algorithms can be written. These scientific developments provide fundamental knowledge for subsequent, possibly unforeseen, technologies.

In some research settings (e.g., a university or federal laboratory), end uses for science endeavors may not always be immediately apparent; neither are they necessarily offered as justification for the research. In other cases, a long-term goal (process, device, product) drives the science, with a proof-of-concept targeted as the immediate research objective. The latter is more common in the private sector. In relatively few cases, however, do agriculture and forestry applications drive research efforts related to precision technologies. Once various precision technologies have been developed, though, they often find ready application to research and management of environmental and ecological systems.

Introducing precision technologies into forest environments is difficult for many reasons. First among those are scale issues. Our measurements must be possible at spatial scales in the millimeter range (nitrogen fixation in the soil) and also at the kilometer range (stand health, stand timber volume). Events occurring over short time periods (e.g., stomatal aperture) can be equally important to much longer-period phenomena (e.g., tree diameter growth). Second, there is tremendous variability over time and space when repeatedly measuring the same phenomenon. While this creates problems for taking consistent measurements, our ability to take frequent measurements helps us understand that variability and better deal with it. Third, most of our measurement modalities to date have relied on optics, which limits our observations to line-of-sight interrogation. Fourth, when taking measurements at finer spatial and temporal resolutions, we often aggregate those data—in our models and decision support systems—to somewhat arbitrary, coarser resolutions that suit anthropocentric needs, which may not necessarily reflect biological realities. These and other issues have hindered data availability and application in the past and continue to present challenges for some recent technologies. Still, as the papers in this volume and cited works elsewhere demonstrate, forest science and management have increased access to data and collection frequency, and possess more powerful tools to manipulate the data.
Engineering and Technology Development
The second phase of technology R&D involves applied
engineering, wherein scientific discoveries are turned into
new prototypes. These early stage technologies undergo testing and validation (either in the laboratory or in the field) to
establish their capabilities and limitations. It is at this point
where theoretical expectations and operational realities often come into conflict, and solutions and compromises must
be tried, tested, and resolved. Companion IT also needs to
be developed to make the new technologies operationally
effective. The following paragraphs highlight several emerging technologies—biosensing, micro-electromechanical systems (MEMS), and sensor networks—that offer new and
innovative possibilities for precision forestry, and mitigate
some of the aforementioned difficulties working in forest
environments.
In agriculture, food, and the environment, there is an
ever-increasing need to detect and measure minute quantities of chemicals or microbes (e.g., biosecurity) occurring
in both indoor and outdoor environments, and to do so almost instantaneously (just-in-time information). Areas of
particular interest include: food production and processing,
agricultural products, pest management, surface and ground
water, soils, and air. Universities, federal laboratories, and
other federal agencies have been developing biosensing technologies to measure trace levels of biological and chemical
materials in real-time. Biosensing includes systems that
incorporate a variety of means, including electrical, mechanical, and photonic devices; biological materials (e.g., tissue,
enzymes, nucleic acids, etc.); and chemical analysis to produce detectable signals for the monitoring or identification
of biological phenomena. In a broader sense, biosensing
includes any approach to detecting biological elements (or
their chemical signatures) and the associated software or
computer identification technologies (e.g., imaging) that
identify biological characteristics. Because of the scale of
these biological entities and the masses involved, new advances in nanoscience and nanotechnology are proving useful.
MEMS integrate mechanical elements, sensors, actuators, and electronics on a common silicon platform, making possible the realization of complete systems-on-a-chip. Sensors gather information from the environment by measuring mechanical, thermal, biological, chemical, optical, or magnetic phenomena. The electronics then process the information derived from the sensors and, through some decision making capability, direct the actuators to respond by moving, positioning, regulating, pumping, or filtering, thereby controlling the environment for some desired outcome or purpose. For many environmental applications, the actuation step will take the form of a wireless transmission of the data collected. These devices become particularly useful and powerful, however, when combined into networks of communicating MEMS that can measure ecological variables across an entire watershed, for example.

Convergence of the Internet, communications, and information technologies with techniques for miniaturization has placed sensor network technology at the threshold of a period of major growth. Emerging technologies can decrease the size, weight, and cost of sensors and sensor arrays by orders of magnitude, and increase their spatial and temporal resolution and accuracy. Large numbers of sensors may be integrated into local- or wide-area systems to improve performance and lifetime, and decrease life-cycle costs. Communications networks provide rapid access to information and computing, eliminating the barriers of distance and time for tracking endangered species, detecting insects and pathogens, and monitoring engineered structures and air and water quality. The coming years will likely see a growing reliance on and need for more powerful sensor systems, with increased performance and ecological functionality.
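A minimal sketch of the sense, process, decide, and actuate loop just described, for a hypothetical networked node whose actuation step is a wireless report: the node name, smoothing constant, and alert threshold are all invented for illustration.

```python
import random

# Toy MEMS-style sensor node: sense -> process -> decide -> actuate.
# All names and thresholds are invented; a real node would sit on a radio
# and power-management stack rather than print to a console.

class SensorNode:
    def __init__(self, node_id: str, alert_threshold: float):
        self.node_id = node_id
        self.alert_threshold = alert_threshold
        self.smoothed = None

    def sense(self) -> float:
        # Stand-in for a transducer reading (e.g., soil moisture fraction).
        return random.uniform(0.0, 1.0)

    def process(self, reading: float) -> float:
        # Exponential smoothing to damp sensor noise.
        if self.smoothed is None:
            self.smoothed = reading
        else:
            self.smoothed = 0.8 * self.smoothed + 0.2 * reading
        return self.smoothed

    def actuate(self, value: float) -> None:
        # Here "actuation" is a wireless report, as described in the text.
        if value < self.alert_threshold:
            print(f"{self.node_id}: transmit alert, value={value:.2f}")

node = SensorNode("watershed-07", alert_threshold=0.2)
for _ in range(10):
    node.actuate(node.process(node.sense()))
```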
Application, Adoption, and Economics
Enabling technologies are converging with fields of application, e.g., agriculture and forestry, to provide the measurement, storage, analysis, and decision-making needs of
producers and processors. In many cases, though, innovations are frequently adopted in clusters; e.g., genetically
improved rice + fertilizer + insecticide. Here, there would
be little economic payback for applying costly agronomic
treatments to low-yield rice, whereas the same treatments
applied to an improved rice strain would be more readily
adopted. The marriage of remote sensing and geographic
information systems in forestry represents another cluster
example.
In agriculture, for example, techniques are currently being developed to: (1) make precise measurements and continuously monitor field and plant conditions through sensors and instruments, (2) organize large volumes of data
with spatially referenced databases, and (3) analyze and interpret that information using decision support systems that
make economically favorable choices. The greatest “technology push” has been in precision agriculture (PA)—where
information technologies provide, process, and analyze
multisource data of high spatial and temporal resolution for
crop production operations. Very similar technologies are
being developed and promoted in the forestry arena for timber production and ecological assessments.
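The three-step pipeline just described can be sketched minimally as follows; every field name, coordinate, value, and the nitrogen threshold is invented for illustration, and a real decision support system would weigh costs and expected returns rather than apply a single cutoff.

```python
from dataclasses import dataclass

# Toy sketch of the pipeline above: (1) sense field conditions, (2) organize
# them with spatial references, (3) apply a decision rule. All values and
# the 20 ppm threshold are invented for illustration.

@dataclass
class Reading:
    lat: float
    lon: float
    soil_nitrogen_ppm: float  # (1) a sensed condition

readings = [  # (2) a minimal spatially referenced store
    Reading(46.10, -122.90, 14.0),
    Reading(46.11, -122.90, 31.5),
    Reading(46.10, -122.91, 9.2),
]

def needs_fertilizer(r: Reading, threshold_ppm: float = 20.0) -> bool:
    # (3) a stand-in decision rule
    return r.soil_nitrogen_ppm < threshold_ppm

for r in readings:
    print((r.lat, r.lon), "treat" if needs_fertilizer(r) else "skip")
```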
Despite this “push,” the “pull” by the end-user community has been hesitant and weak, although most producers
admit that they will have to adopt PA technology eventually. Currently, most see initial cost, uncertain economic
returns, and technology complexity as limiting factors. These
empirical observations are consistent with Rogers’s theory
of innovation diffusion (Rogers 1995). Furthermore, in light
of recent and anticipated regulatory requirements for nutrient release and water/air quality, many producers feel that
the environmental benefits of precision agriculture might
be the eventual driving force for technology adoption.
Nevertheless, small- and medium-sized producers (both
in agriculture and forestry) have a distinct disadvantage
versus large producers. In high-volume food and fiber production, economies of scale and narrow profit margins provide an economic advantage to large producers. Furthermore, large producers tend to have more education and are
less technology averse than smaller producers. These characteristics of food and fiber production suggest that most
technological advances, including precision agriculture/forestry, are not scale neutral. Furthermore, the factors limiting PA adoption, noted above, are also less problematic for
larger producers, giving them an additional competitive
advantage.
One way for smaller producers to combat these competition trends is to create, or reach into, unique markets where
their small size is an advantage. Value-added products expand the profit margin for producers that are positioned to
provide enhanced value to consumers—which is more often the case for small producers that deal with small quantities of raw products and have more direct access to consumers. In addition, smaller producers can become more competitive in a technology world by mitigating the barriers to
adoption. By spreading the initial cost of technology over
many producers and by sharing information about how to
use the technology, smaller producers can obtain the adoption capabilities held by large producers. One way to accomplish these tasks—that has been applied successfully by
nonindustrial private forest (NIPF) landowners—is by forming landowner cooperatives (Stevens et al. 1999). These
cooperatives are grass-roots activities (as distinct from existing agricultural cooperative enterprises) wherein members share equipment, information, and market power to
achieve some common goals for managing their operations.
A nominal fee is usually charged to members, and the cooperative becomes a business entity.
In the eastern U.S., approximately 60% of timberland
resides in NIPF ownerships. Yet, only a small portion of
that acreage is actively managed. In the past several decades, the number of forestland owners has been increasing, with more non-farm and absentee owners. This new
cohort of owners also has diverse interests. As with agriculture, precision forestry technologies are more readily
adopted for use on large ownerships (industrial and public),
but if economic and educational hurdles can be overcome,
smaller ownerships will also participate, either individually
or in groups.
ETHICS AND FUTURE DATA
AVAILABILITY

Ethics is the study of value concepts such as “good,”
“bad,” “right,” “wrong,” and “ought,” applied to actions in
relation to group norms and rules. Therefore, it deals with
many issues fundamental to practical decision-making. Precision and information technologies lie at the heart of modern decision making, including data/information storage and
manipulation, data availability, and “alternatives” formulation and selection. The ethical concerns addressed below
do not include intentionally malicious behavior, such as
computer crime, software theft, hacking, viruses, surveillance, and deliberate invasions of privacy; rather, they concern the subtler, yet important, impacts that data collection and use can have on people and their social, cultural,
corporate, and other institutions.
Privacy
Improper access to personal information is the issue that
“privacy” usually brings to mind. Any unauthorized access
to information about an individual or their property can be
an invasion of privacy, just as unauthorized access to one’s
property has traditionally been considered invasive. However, even authorized access may lead to privacy concerns,
when access to separate data sources is used to combine
information (Mason, 1986). For example, one institution
may record landowners’ names and land ownerships, while
another may be authorized to store land records and timber
values for tax purposes. Individually, the databases are properly authorized, but if the records are combined by a third
party, it may be possible for unauthorized parties to gain
financial data about individual landowners. As environmental databases increase in size, complexity, and connectivity, projects that involve adding data fields or combining
data or knowledge sources must consider the ethical implications of those activities.
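A minimal sketch of this combination problem, with invented data: each table is innocuous on its own, but a join on a shared parcel identifier discloses per-owner financial information that neither source reveals alone.

    # Two separately authorized data sources (invented example data).
    owners = {"P-101": "A. Smith", "P-102": "B. Jones"}     # parcel -> owner
    tax_rolls = {"P-101": 84000, "P-102": 152000}           # parcel -> timber value ($)

    # A third party holding both can link them on the parcel ID,
    # producing financial information neither source discloses alone.
    linked = {owners[p]: tax_rolls[p] for p in owners if p in tax_rolls}
    print(linked)   # {'A. Smith': 84000, 'B. Jones': 152000}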
In recent years, a new privacy issue has arisen in the area
of geographic information systems (GISs), related to location protection. For example, many cultural sites on public
lands are protected either by law, policy, or regulation. Yet,
entering site locations in a GIS may disclose locations for
unethical use. One way around this problem is to define a
polygon that contains a site or group of sites, without disclosing exact point locations. A similar situation exists in
relation to biodiversity and rare species protection. Innovative approaches are required to facilitate resource monitoring and protection while simultaneously ensuring there is
no loss of privacy resulting from location disclosure.
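One minimal way to implement such location generalization, sketched below with invented coordinates, is to publish only a buffered bounding polygon around the protected points.

    # Sketch: disclose a buffered bounding polygon instead of exact
    # cultural-site coordinates (coordinates invented).
    sites = [(478210.0, 5205034.0), (478455.0, 5205311.0)]  # protected (x, y)
    BUFFER = 500.0                                          # metres of padding

    xs = [x for x, _ in sites]
    ys = [y for _, y in sites]
    # Publishable polygon: a rectangle guaranteed to contain the sites,
    # but revealing nothing about where inside it they fall.
    polygon = [
        (min(xs) - BUFFER, min(ys) - BUFFER),
        (max(xs) + BUFFER, min(ys) - BUFFER),
        (max(xs) + BUFFER, max(ys) + BUFFER),
        (min(xs) - BUFFER, max(ys) + BUFFER),
    ]
    print(polygon)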
Current remote sensing technologies allow anyone to
“look into” someone else’s property—assessing, without the
owner’s knowledge or consent, timber or crop value that can
be used for insurance, bank loan, or taxation purposes. Even
when such data have been collected legitimately, there is no
guarantee that adequate safeguards have been instituted to
protect unauthorized access and use. Future technologies
will create even greater opportunities for remote intrusion.
As more and more data become available on the Internet,
unintended use has become a major problem. Web surfers
can borrow data from different sources or combine data inappropriately from many sources, either misusing it or
misattributing it. Similar concerns related to re-packaging
of information may arise where public funding of government research places researcher and research information
in the public domain. Enterprising organizations, then, turn
that public-domain information into company revenue.
While not illegal, it may be unethical if there is minimal
value added to the publicly available, and publicly funded,
information.
Accuracy

A software developer’s ability to know and predict all
states (especially error states) is low for complex systems.
At first sight, it would appear that a software developer would
be ethically bound to correct all system errors. However,
dealing with errors can raise ethical dilemmas: 15-20% of
attempts to remove program errors introduce one or more
new errors. For programs with more than 100,000 lines of
code, the chance of introducing a severe error when correcting an original error is so large that it may be better to retain and work around the original error rather than try to
correct it (Forester and Morrison, 1994). The frequency of
disclaimers, software updates and patches, as well as the
lack of substance to software warranties, result from software developers’ recognition of this problem. The ultimate
effect is larger and more complex software, whose size is
less related to functional capability than to software age and the battery of “fixes” that it has received over
time. Similar ethical conflicts arise with decision support
tools, where modelers and developers realize that a model’s
results can only be broad approximations in many cases.

Another problem related to accuracy is determining which
specific information to use. For example, it is often difficult to select appropriate socio-economic or biological indicators or to choose among predictive models. An indicator
is something that points to an outcome or condition, and
shows how well a system is working in relation to that outcome or condition. For example, in a forest simulation
model, tree diameter at breast height (dbh) is a key indicator of treatment effects. However, there may be a range of
potential equations available to predict dbh. One equation
may simply predict dbh from tree height, while another equation may predict it from both height and crown width. The
equation selected will have different consequences with regard to accuracy, precision, data costs, and suitability for
extrapolation. This choice relates, in turn, to precision and
bias in the estimators used. Requirements of the intended
user and usage should guide the choice.

When a social or economic indicator is being used, ethical considerations are even more significant. If the indicator misrepresents a value set, then it cannot be considered
accurate. Indicators have long been used in predictive systems (Holling, 1978): such indicators must be relevant, understandable, reliable, and timely. In natural resource disciplines, with their current emphasis on sustainability, indicators must have additional characteristics. Sustainability
indicators must include community carrying capacity; they
must highlight the links between economic, social and environmental well-being; they must be usable by the people
in the community; they must focus on a long range view;
and they must measure local sustainability that is not at the
expense of global sustainability (Hart, 1999). Scale is a key
determinant of indicator usefulness: some indicators that
are useful at the household or community level are difficult
to measure at the regional level, and some regional indicators may have little meaning at the community or household level. Because indicators compress so much ecological, economic, or social information into a single variable
or set of variables, it is especially crucial that they are chosen, measured, and interpreted carefully.
Accuracy may also be influenced by the sequence in which
operations are applied. In theory, error limits of predictions
should be supplied; however, while error limits of individual
equations may be known, it is rare that models actually compute the consequences of combining multiple equations.
Mowrer (2000) examines error propagation in simulation
models and presents several approaches (Monte Carlo simulation and Taylor series expansion) to project errors. This
has become an active research topic recently (q.v., Mowrer
2000), as several models are typically used in combination
to predict future conditions.
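A generic sketch of the Monte Carlo approach, with invented equations and error magnitudes: input and residual errors are sampled repeatedly and pushed through the chained models, and the spread of the combined output is exactly what the individual equations' error limits do not reveal on their own.

    # Generic Monte Carlo error propagation through two chained model
    # equations (coefficients and error magnitudes are invented).
    import random
    import statistics

    def predict_height(age):                   # hypothetical equation 1
        return 1.2 * age ** 0.9

    def predict_volume(height):                # hypothetical equation 2
        return 0.015 * height ** 2.4

    N = 10_000
    volumes = []
    for _ in range(N):
        age = random.gauss(45, 3)                          # input measurement error
        h = predict_height(age) + random.gauss(0, 1.5)     # eq. 1 residual error
        v = predict_volume(h) + random.gauss(0, 0.8)       # eq. 2 residual error
        volumes.append(v)

    # The spread of the combined prediction, which the individual
    # equations' stated error limits do not reveal on their own.
    print(statistics.mean(volumes), statistics.stdev(volumes))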
Key language and terminology used to frame a question
can significantly influence the applicability of data or information. This is true for any information system in which
the user is forced to converse using concepts unfamiliar to
them. This cultural mismatch is of special significance in
studies of Native peoples, where the interview subject may
have concepts and values very different from those of the
questioner. For example, the term “forest” is a key concept
for resource management, but certain Native peoples have
no concept for forest in their culture or any word in their
native tongue. Instead, they have a more holistic view of
the land that includes trees, plants, animals, and people
(Thomson, 2000). Once such basic cultural differences are
identified, the important challenge becomes one of understanding the ramifications of those differences, how they
affect data needs and data use.
Statistics, images, graphs, and maps are all methods of
summarizing, presenting, or filtering information. Ethical
decisions behind the selection and transformation of material can significantly affect the accuracy with which recipients may perceive a situation. When a situation is highly
charged or contentious, objectivity in portraying information becomes critically important.
Certain decision support software may increase considerably the power of users to make or influence decisions
that were formerly beyond the limits of their knowledge and
experience. For example, upper management may gain direct access to lower level data and information summaries.
This helps bypass intervening distortions, resulting in more
accurate perceptions. Greater accuracy depends, however, on support software that has itself been developed with appropriate ethical considerations, and on higher-level managers who are willing and able to use the software to achieve
distortion-free information sharing. This type of situation
has been a bane of statisticians for years. Very powerful
software packages have allowed users to perform all manner of inappropriate statistical tests on data without full
knowledge of what they are doing. While current statistical
software manuals contain a great deal of information regarding model specification and assumptions, they cannot
replace a well-founded understanding of basic statistics by
the experimenter.
Accessibility
Appropriate access to data and software has both technical and intellectual components. To make use of software, a
person must have access to the required hardware and software technology, must be able to provide any required input, and must be able to comprehend the information presented. For example, for a Web-based system, users must
have reliable connections to the Internet and sufficient bandwidth. Each end-user must also have a browser compatible with the material sent to it (including such things as
the appropriate Java classes for use with applets) and any
helper applications or browser plug-ins for viewing and
hearing content. If an intended audience lives in a developing country, or in a remote area, such technological issues
may be critical. For this reason, when software or a data
base is developed, its implementation should be part of an
integrated process that includes the full range of affected
individuals. This may include specifying duties for a suite
of “actors” such as technology transfer officers or field personnel.
Accessibility is also limited if results are presented inappropriately. For example, data may be aggregated at a fixed
scale that may have limited value for many users. In other
cases, language and concepts beyond the end-user’s understanding or vernacular might render a decision support system useless for a large audience segment. While it is neither practical nor possible to accommodate all who might
“stumble onto” data or software, primary target audiences
need to be defined and understood.
In a digitally networked age, the ability to connect systems, databases and information-rich environments becomes
more possible but also more problematic. The goal of seamless, transparent, and “user-friendly” information access
makes interoperability a required attribute of databases, systems, and vocabularies. This desired attribute requires both
technical and human dimensions to enhance interoperability
within regional, national, and global forest information systems. Interoperability ensures that systems, procedures, and
cultures of an organization are managed in such a way as to
maximize opportunities for exchange and re-use of information, whether internally or externally. Because end-users of data are not necessarily local or regional and because
large-scale forest assessments are becoming more important (e.g., carbon management), standards and protocols for
forest data are looming on the horizon.
SOME STEPS TO TAKE
Once basic, scientific principles have been demonstrated,
the biggest hurdles to realizing an operational technology
lie in the adoption phase. Even though engineering and
technology development aspects may seem daunting and
time-consuming, it is the economic, cultural, and educational issues that often doom or advance technology use.
This suggests that more thought needs to be given to that
final phase. Some issues that need to be addressed in the
development phase (or pre-diffusion) are: intended users,
intended uses, workflow changes, education and training,
economics, associated IT changes or requirements, favorable or unfavorable regulations, early adopters, commercialization entities, and user communities. All these factors
can impact if, and how, a new technology is accepted and
used.

Privacy has long been considered an inherent right of
individuals in a “free” society. Initially, this involved protection of the individual from unwanted or unwarranted invasion of their physical space. More recently, privacy has
been extended into an individual’s information space, as well.
For precision technologies currently under development in
natural resource and agricultural domains, real threats are
more likely to arise from unintentional and unforeseen information breaches than from any intentional conspiracy. These
occur when information sources are combined or used in
unintended ways. As long as information about individuals
exists and is accessible by others, individual privacy can
potentially be compromised. During technology development, designers need to be cognizant of users, co-developers, publics, cultures, special interest groups, commercial
enterprises, governments, and other groups that might be
affected directly or indirectly by their products. Designers
must also consider the information their technology uses or
generates, and the decision-making landscape that it affects
or creates.

Use of appropriate language is at the heart of many accuracy issues. Even if an information system does not estimate the accuracy of results explicitly, it is important to make
end-users aware of the variability in potential outcomes, and
the assumptions and trade-offs that have contributed to it.
Similarly, non-textual renderings of system outputs should
be designed to address accuracy concerns in the flow of
knowledge. It is also essential to address the way in which
knowledge flows through organizational hierarchies, and to
ensure its appropriate use at different organizational levels.

As with accuracy issues, language lies at the heart of many
accessibility issues. Information delivery must be geared to
concepts appropriate to the intended audience, and information overload avoided, as knowledge can be inaccessible
if the recipient is swamped with information. Limitations
of technical accessibility by some groups may require developing an integrated range of systems and processes to ensure access by all stakeholders in a decision environment.

While there will always be some ethical culpability on
the individual’s part, much responsibility still rests with organizations to institute standards of ethical conduct that create an atmosphere of social morality for their employees and
members. Self regulation is always more readily accepted
and effective than regulation from governmental institutions,
which may not always fully understand the issues involved.
By thinking in advance about ethical issues that may eventually impinge on data availability, organizations might alleviate potential future restrictions or reduce their impacts.
LITERATURE CITED

Clark, N. 2001. Applications of an automated stem measurer for precision forestry. Pages 93-98 in D. Briggs (ed.), Proceedings of the First International Precision Forestry Cooperative Symposium, College of Forest Resources, University of Washington, Seattle, WA.

Forester, T., and P. Morrison. 1994. Computer Ethics. MIT Press, Cambridge, Mass.

Hart, M. 1999. Guide to sustainable community indicators. Hart Environmental Data, North Andover, MA.

Holling, C.S. 1978. Adaptive environmental assessment and monitoring. John Wiley & Sons, Chichester.

Mason, R.O. 1986. Four ethical issues of the information age. MIS Quarterly 10(1): 5-12.

Mowrer, T. 2000. Uncertainty in natural resource decision support systems: Sources, interpretation, and importance. Computers and Electronics in Agriculture 27(1-3): 139-154.

Pathirana, S.T., J. Barbaree, B.A. Chin, M.G. Hartell, W.C. Neely, and V. Vodyanoy. 2000. Rapid and sensitive biosensor for Salmonella. Biosensors and Bioelectronics 15: 135-141.

Rogers, E.M. 1995. Diffusion of Innovations, Fourth Edition. The Free Press, New York.

Schmoldt, D.L. 2001. Precision agriculture and information technology. Computers and Electronics in Agriculture 30(1/3): 5-7.

Simula, M., J. Lounasvuori, J. Löytömäki, and M. Rytkönen. 2002. Implications of forest certification for information management systems of forestry organizations. Forest Information Technology 2002 International Congress and Exhibition. 6 pp. www.indufor.fi/documents%26reports/pdf-files/article07.pdf

Stevens, T.H., D. Dennis, D. Kittredge, and M. Richenbach. 1999. Attitudes and preferences toward cooperative agreements for management of private forestlands in the Northeastern United States. Journal of Environmental Management 55: 81-90.

Thomson, A.J. 2000. Elicitation and representation of Traditional Ecological Knowledge, for use in forest management. Computers and Electronics in Agriculture 27(1-3): 155-165.

Thomson, A.J., and D.L. Schmoldt. 2001. Ethics in computer software design and development. Computers and Electronics in Agriculture 30(1/3): 85-102.
Multidata and Opti-Grade: Two Innovative Solutions to
Better Manage Forestry Operations
PIERRE TURCOTTE
Abstract: You can’t manage what you don’t measure. Two novel systems recently developed by FERIC address this challenge: MultiDAT allows forest contractors to maximize their machine uptime, and Opti-Grade provides an integrated package for optimal forest road management.
MultiDAT is a multi-purpose datalogger for forestry managers. MultiDAT can record machine functions, machine movement, and machine location, and can collect operator feedback. The associated software can analyze the data and produce reports on which optimal decisions can be based. The MultiDAT is designed specifically for heavy equipment operating in areas where communication systems are nonexistent or very expensive.
Opti-Grade is a road management system to help focus grading or re-profiling activities where they will have the greatest
impact on the road condition for the money invested. Opti-Grade is used to collect important information on the condition of
the road network on a regular basis, using equipment installed on a log truck. These data are used to schedule maintenance
activities. This new approach is considerably more efficient than the traditional practice of grading whole road segments.
CONTEXT OF THESE RESEARCH
PROJECTS
The two projects described here were done at the Forest
Engineering Research Institute of Canada (FERIC) during
the last 3 years. FERIC is a private research organization
that has served the Canadian forest industry for more than
25 years, and started recruiting new members in the United
States this year. Projects are determined for the most part by
the industrial members and orientations are reviewed on a
yearly basis, which accounts for very practical, results-oriented research. FERIC programs cover all aspects of forest
operations, from silviculture to harvesting and transportation, but because of the general theme of this symposium,
two projects closely related to precision forestry will be presented.
Origin of the MultiDAT development

Foresters have been using paper chart recorders for more
than 30 years to track the utilization of their equipment.
They are still in use today in many operations. This crude
system has many limitations: the precision of the recording is usually only about 5 to 10 minutes, the charts are
easily damaged, and the operators can falsify the recordings.
When members asked FERIC to develop or find a simple
electronic device, we first tried to adapt a recorder designed
for trucks. We realized that the truck paradigm does not
apply to forest machines, because almost all reporting on
truck systems is based on distance and not on time. We
had no choice but to develop a complete system.

Why measure machine utilization?

Improving machine utilization has a major impact on
the profits of the contractors who conduct a large portion of
forest operations. Documenting the daily operating hours and the nature of all delays is the first step in
determining the possible avenues of improvement.

To illustrate the importance of machine utilization in
today’s context, let’s analyze the effect of improving the utilization of a typical harvester operating in Eastern Canada.
This example is based on the following assumptions (all
$ in Canadian funds):

Cost of the harvester: $480,000
Useful life: 5 years
Resale value: $144,000
Insurance: $24,000 per year
Repairs and maintenance: $96,000 per year
Interest rate: 8%
Operation schedule: 4,000 hours per year
Cost of operator: $30 per hour
Cost of fuel: $18.40 per hour
Productivity: 40 cubic metres per productive machine hour
Revenue: $3.00 per cubic metre
These assumptions do not necessarily represent an average; they are only typical values used to show the relative importance of the costs. Figure 1 shows the variation in
net profit that the owner of this harvester would earn if the
utilization rate varied from 70% to 85%. It is easy to
see that the owner would make 7 times more profit with the
machine used at 85% instead of 70%.
Figure 1: Profit vs. utilization
DESCRIPTION OF THE MULTIDAT
The MultiDAT has the following characteristics:
- 4 channels programmable as timers, counters, or
frequency meters
- Internal motion sensor sensitive to low frequency
vibrations
- Optional WAAS GPS receiver
- Typical autonomy: 2 to 6 weeks
The MultiDAT comes in two versions. The regular unit
has an operator interface that allows easy input of operator
number, work codes and delay codes. The MultiDAT Junior
has all the characteristics of the regular unit without the
operator interface.
The recordings are downloaded and transferred to a PC
using a PDA, either a Palm OS or Windows CE device. The
average size of a MultiDAT download file is 50 Kb for one
week of recording, which means that a data shuttle can easily download many MultiDATs before being synchronized
with a PC. The MultiDAT is built for the harsh conditions
in which heavy equipment operates. It is enclosed in a heavy
aluminum casing, all components can withstand temperatures varying from -40 to 85 deg. C, and the supply voltage
can be from 10 to 28 volts.
Almost everything on the MultiDAT is configurable: the
number of sensor channels used, the threshold of the motion sensor, and the number of work and delay codes used and
their meanings. Even the report format is fully configurable.
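The configuration surface just described might be represented as follows; this is purely a hypothetical rendering for illustration, not the actual format used by the MultiDAT PC software.

    # Purely hypothetical rendering of a MultiDAT configuration; the actual
    # format used by the recorder and its PC software is not published here.
    config = {
        "channels": {                     # each of the 4 channels gets a role
            1: "timer",                   # e.g., engine electrical feed
            2: "counter",                 # e.g., stems processed
            3: "frequency",               # e.g., hydraulic pump speed
            4: "timer",
        },
        "motion_threshold": 0.3,          # sensitivity to low-frequency vibration
        "work_codes": {1: "harvesting", 2: "travel"},
        "delay_codes": {1: "mechanical", 2: "operator break", 3: "weather"},
        "gps": {"enabled": True, "waas": True},
        "report": ["utilization", "delays by code", "travel path"],
    }
    print(f"{len(config['work_codes'])} work codes, "
          f"{len(config['delay_codes'])} delay codes configured")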
STRONG AND WEAK POINTS
The strongest point of the MultiDAT is its versatility.
The recorder itself and the PC software can be configured
in a very simple way, to record only the working hours of
a bulldozer, for example. They can also be configured in a
more complex way, to identify the harvesting time, idle time,
and traveling time of a harvester in a given block.
A second strong point is the ease of use for the operator.
As shown in Figure 2, the operator interface is very simple:
there are no scroll-down menus, and all the activity and stop
codes are visible at a glance.
Figure 2: Operator interface
The first weak point of the MultiDAT is, paradoxically,
its versatility, which sometimes makes the initial configuration laborious. Many of the first MultiDAT users needed
support to configure the recorders and the reports, simply
because they did not know what was the most effective configuration. With later versions of the software, we provided
more configuration templates and we are using the experience of the first users to determine the configurations that
give the best results. We are thus transforming the MultiDAT
from a simple activity recorder into a methodology to improve machine utilization.
The second weak point is the data shuttle. Although users
have always been able to choose between a $150 consumer
PDA and a $1500 rugged field computer, so far only consumer PDAs have been used as shuttles. Batteries have given users the most problems. The average user
is not aware that a Palm computer is never turned off, even
when the display is switched off, and that it wears out its
batteries over a period of 3 to 6 weeks even when sitting in
a drawer. That is, until users lose their first data files and
have to re-install the shuttle program in the PDA. In 2003,
we are introducing shuttles using Windows CE, and that
should improve the shuttle performance drastically. The data
and shuttle programs can now be backed up in flash memory,
and the PC communication is much simpler, using the USB
cable that is provided with the PDA. Still, keeping ahead of
product development for the shuttle is not an easy task. We
saw product lives of 9 months with the Palm PDAs and we
hope that the Windows CE devices will stay on the market
longer.
EXAMPLES OF USE

Weyerhaeuser Company Limited in Dryden, Ontario has
used 20 MultiDAT Junior units to support a productivity improvement program in partnership with two major contractors.
For more than a year, they followed a fleet of harvesting
equipment, tracking downtime and finding solutions to reduce its impact and occurrence. In some cases, the utilization of skidders improved from 50% to more than 80%. They
used very simple configurations and connection methods,
relying mostly on the motion sensor to determine machine
activity.

In Quebec, Gestion Remabec inc., a large contractor, is
using the MultiDAT on harvesters for two purposes. First,
they monitor the utilization using the sensors, but they also
use the GPS option and record the travel path of the machine. When harvesting is completed in a block, they provide their clients with a map showing the area harvested.
This map is used to make sure that the operator was not in
violation when working close to the block boundaries.

The Saskatchewan Department of Transportation is using the MultiDAT exclusively on graders. The MultiDAT
PC software has a speed analysis function that can be used
to determine approximately which sections of road were
graded. In general, grader operators drive at a higher speed
when they are traveling than when they are grading.

In the Atlantic Provinces, J.D. Irving Ltd. is gradually
implementing the MultiDAT on all their contractors’ equipment. The company provided the MultiDATs free of charge
to the contractors in exchange for their commitment to use
them and provide utilization reports.

In Windsor, Quebec, Domtar Inc. used a GPS-equipped
MultiDAT in 2002 to track site-preparation equipment. Although at the time the MultiDATs were using Garmin25
GPS receivers with no WAAS capabilities, the results were
very good; most of the time within 1% of the area measured
by walking the block with a GPS receiver after the work
was completed. In 2003, they are using GPS-equipped
MultiDATs on all of their site-preparation equipment. These
MultiDATs now use the CSI SX-1 WAAS receiver, with submetre accuracy.

In 2002, the MultiDAT was also used by FERIC researchers to evaluate the productivity of new models of Tigercat
equipment in a Tembec Industries Inc. harvesting operation
in Ontario. The operations were followed in detail for more
than 3 months, and the MultiDAT recordings were downloaded each week by the supervisors and transferred to
FERIC for analysis.

MultiDAT has been used successfully on harvesters, feller
bunchers, skidders, forwarders, bulldozers, excavators, graders, sand trucks, mobile chippers, and loaders.

CURRENT DEVELOPMENT PROJECTS

Current projects include the development of geo-fencing
for the MultiDAT. Some analysis software for truck
dataloggers gives the user the possibility of defining polygon
boundaries called geo-fences. The GPS position recordings
can then be analyzed in relation to these geo-fences, and
additional information can be derived, such as the time spent
in a given region or the number of times that a truck passed
a given location.

With the MultiDAT, the geo-fences are pre-determined,
and the recorder can be configured to record only the time
of entry into and exit out of each polygon. This method requires much less memory than recording all the GPS positions and does not require intensive computer processing
after the data is downloaded.
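A minimal sketch of the entry/exit idea, with an invented zone and travel path: a standard point-in-polygon test is applied to each fix, but only the transitions are logged.

    # Sketch of the geo-fence idea: log only entry/exit transitions
    # rather than every GPS fix (polygon and track are invented).
    def inside(pt, poly):
        # Standard ray-casting point-in-polygon test.
        x, y = pt
        hit = False
        for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
            if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                hit = not hit
        return hit

    yard_zone = [(0, 0), (100, 0), (100, 60), (0, 60)]     # one geo-fence
    track = [((-10, 30), 0), ((5, 30), 60), ((50, 30), 120),
             ((110, 30), 180)]                             # (position, seconds)

    was_in = False
    for pos, t in track:
        now_in = inside(pos, yard_zone)
        if now_in != was_in:                               # record transitions only
            print(f"t={t}s: {'ENTER' if now_in else 'EXIT'} zone")
        was_in = now_in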
The first application that we envision for this development is the management of wood flow in the mill yard. By
tracking the passage of loaders between zones, we will attempt to determine the volume of wood that is moved between various sections of the yard and better balance the
tasks assigned to each loader.

Finally, we are working on the development of a blade
contact sensor for graders. This sensor will provide a more
accurate map of the road sections that were graded.

Since February 2003, the MultiDAT has been fabricated and
distributed under license by Geneq (www.geneq.com).

THE OPTI-GRADE SYSTEM

During the last two years, FERIC also worked on the
development of another precision forestry tool, the Opti-Grade system.
Opti-Grade follows two simple principles:
1- Measuring road roughness lets you identify
sections that need grading—and those that
don’t.
2- Using graders efficiently means grading only the
sections that need it, and travelling at full speed
over sections that don’t.
HOW DOES IT WORK?
Before the Opti-Grade system is used, the road network
(all km markers, bridges, intersections, etc.) must be surveyed using GPS. Then, the recording equipment is installed
in one of the trucks that regularly travel the road being
monitored.
While a sensor continuously measures the roughness of
the road, a GPS receiver determines the time and position
of each measurement. A datalogger stores the road roughness values, plus the position and recording time of each
value.
When the truck enters the mill yard, the recordings are
transferred by spread spectrum radio to an office computer.
The Opti-Grade software then analyzes the roughness and
GPS data to determine which road sections require grading
and calculates the travel speed of the truck on each section.
The user can display a map showing the road roughness for
different sections, as seen in Figure 3. Because the kilometre
markers of the road were surveyed when the Opti-Grade
system was set up, the map can show the correspondence
between those markers and the sections to be graded.
The software then prepares an optimized grading schedule based on three criteria:
1- a roughness level above a trigger threshold
2- the minimum length of road to treat
3- the minimum distance between sections to be treated
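A simplistic sketch of how these three criteria might interact, with invented roughness values and thresholds; the actual Opti-Grade algorithm is not published here.

    # Simplistic sketch of the three scheduling criteria; roughness values
    # per 1-km section and all thresholds are invented.
    roughness = [1.2, 3.8, 4.1, 1.0, 3.6, 0.9, 0.8, 0.7, 4.4, 0.5]
    TRIGGER = 3.0      # criterion 1: roughness that triggers grading
    MIN_LEN = 2        # criterion 2: minimum length (km) worth treating
    MIN_GAP = 2        # criterion 3: grade through gaps of up to this many km

    flag = [r > TRIGGER for r in roughness]

    # Criterion 3: bridge short smooth gaps between rough sections.
    merged = flag[:]
    for i in range(len(flag)):
        if not flag[i]:
            near_left = any(flag[max(0, i - MIN_GAP):i])
            near_right = any(flag[i + 1:i + 1 + MIN_GAP])
            merged[i] = near_left and near_right

    # Criterion 2: keep only runs long enough to be worth a pass.
    schedule, start = [], None
    for i, f in enumerate(merged + [False]):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= MIN_LEN:
                schedule.append((start, i - 1))
            start = None
    print("grade these km sections:", schedule)   # -> [(1, 4)]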
The next morning, the grader operator uses that schedule to determine the sections to grade. Figure 4 shows a
typical grading schedule. In this example, the operator only
needs to grade 22 kilometres of the total 63 kilometres
of the road, saving 65% of the normal grading time.

TYPICAL RESULTS

The Opti-Grade system is in use at more than 20 locations in Canada. The users generally attempt to maintain
the same road quality while reducing grading cost. Experience with the system has shown that grading costs can
be reduced by up to 30%. For a company maintaining 400 km of roads, this can represent an annual saving of more than $70,000, or a payback period of a few months.

The companies using Opti-Grade are also very interested
in the speed and time recordings provided by the system.
They use this information to establish more precise cycle
times and thus fairer trucking rates.

Finally, analysis of Opti-Grade data helps to identify road
segments that need constant maintenance. Investments in
road improvements can thus be justified more easily.
Figure 3: Map of road roughness
Figure 4: Grading schedule
A Test of the Applanix POS LS Inertial Positioning System for
the Collection of Terrestrial Coordinates Under a Heavy
Forest Canopy
STEPHEN E. REUTEBUCH, WARD W. CARSON, AND KAMAL M. AHMED
Abstract: The Applanix POS LS backpack-mounted inertial land positioning/navigation system was used to collect terrestrial coordinates along a previously surveyed closed traverse. A total station surveying instrument was used to establish 26
ground-level stakes along a 1-mile traverse under the dense canopy of a 70-year-old conifer forest in the Capitol State Forest
near Olympia, Washington. The Applanix POS LS was initialized at a fixed monument and carried through the forest along
the traverse 12 times. Coordinate readings were collected continuously both at the survey posts and between posts. Both the
system’s location accuracy and its potential for developing terrain profiles were evaluated. The system’s average real-time
position accuracy was 2.3 ft (1.6 ft Stdev., 7.0 ft max.) and average post-processed accuracy was 1.4 ft (0.9 ft Stdev., 4.0 ft
max.), measured at each survey stake. An earlier study provided a 5 by 5-foot, gridded digital terrain model (DTM) derived
from high-density LIDAR data. Profiles generated from the LIDAR DTM were compared with profiles measured by the POS
LS system. Average post-processed elevation difference along the profiles was 0.7 ft (1.0 ft Stdev., 4.5 ft max.).
INTRODUCTION

Applanix* [a Canadian company that has developed a
number of position and orientation systems (POS) based
upon inertial navigation systems (INS)] has recently produced a system designed for land surveyors (Gillet et al.,
2001). The POS LS system combines an INS [with its
embedded inertial measurement unit (IMU)], a roving global positioning system (GPS) unit, and a computer datalogger
into a backpack system weighing about 40 pounds.

When utilizing the internal roving GPS receiver, the POS
LS unit is intended to be used with a user-supplied real-time kinematic (RTK) GPS basestation that would provide
the necessary carrier-phase ambiguity resolution, thereby
supplying frequent, accurate coordinate updates to the INS
system.

Applanix has established with other products, such as
the airborne POS AV system, that uninterrupted,
postprocessed data from such a GPS/INS system can deliver coordinates accurate in the range of inches. However, Applanix anticipates that the land surveyor will on
occasion lose the GPS signal—for example, under a forest
canopy. The question then becomes: what accuracy can
one expect from the POS LS system under these less than
optimal GPS conditions?

* Use of trade or firm names in this publication is for reader information and does not imply endorsement by the U.S. Department of Agriculture of any product or service.
In general, the INS and GPS components of an integrated
POS system complement each other’s strengths or, rather,
compensate for weaknesses (Farrell and Barth, 1999). With
an initial coordinate fix and the acceleration vectors sensed
by the IMU, an INS can integrate the velocity vectors and
compute the coordinate path as the unit moves; however,
positional accuracy is eroded as instrument drift accumulates over time.
In contrast, GPS errors are not accumulated over time but,
rather, GPS accuracy is maintained by regular, frequent, independent readings. In short, an INS can measure direction
and distance in the short run, but benefits greatly by regular
coordinate updates from the GPS to correct drift. The GPS
is good over the long run, and benefits greatly from the INS
data acquired between GPS recordings.
As a surveying device operating in the open under a good
constellation of GPS satellites, the POS LS will deliver coordinates accurate in the range of RTK GPS capabilities—approximately 4 inches or better. However, when the GPS signal is blocked, under a forest canopy for example, it reverts
to sole dependence upon its INS, and, the error due to drift
will begin to diminish the position accuracy over time.
The effect of INS drift can be mitigated in two ways: 1) by
position updates—the obvious technique of re-initializing the
system position with either GPS readings or by periodically
re-visiting known points; or, 2) by zero velocity updates
(ZUPTs)—a method used to obtain a velocity re-initialization that has been incorporated into the POS LS instrument.
If the INS unit is momentarily held still (at zero velocity),
the POS LS software can re-initialize the velocity vector to
zero and thereby correct for accumulated velocity drift. In
Figure 1, note how the operator uses a blue staff to help hold
the system steady during a ZUPT.

Figure 1: Applanix POS LS system is held steady in
one position during a ZUPT.

The desired time interval between ZUPTs is user defined;
however, longer intervals increase position errors. When
under dense canopy (when the position is being updated using only INS data), an audible announcement and text display on the POS LS datalogger informs the user when it is
time for either a ZUPT or position fix (acquiring a new GPS
location in a clearing or moving to a known point). A ZUPT
is also automatically initiated when the INS unit senses that
it is stationary. Additionally, the operator can manually initiate a ZUPT at any time.

At the end of a survey, the location of the POS LS unit is
accurately established by either acquiring a high-accuracy
GPS position in an opening, or by returning to a known reference point. This allows the traverse data to be post-processed to derive more accurate, adjusted positions.
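A one-dimensional toy model, with invented numbers, illustrates why ZUPTs matter: an uncorrected accelerometer bias integrates twice into a quadratically growing position error, while periodically zeroing the velocity estimate keeps the growth nearly linear.

    # 1-D toy model of INS drift and ZUPT correction (all numbers invented).
    BIAS = 0.001          # m/s^2 constant accelerometer bias
    DT = 1.0              # s integration step
    ZUPT_EVERY = 30       # s between zero-velocity updates

    def drift(seconds, zupt):
        vel_err = pos_err = 0.0
        for t in range(1, seconds + 1):
            vel_err += BIAS * DT          # bias integrates into velocity error
            pos_err += vel_err * DT       # velocity error integrates into position
            if zupt and t % ZUPT_EVERY == 0:
                vel_err = 0.0             # ZUPT: velocity is known to be zero
        return pos_err

    for s in (60, 300, 600):
        print(f"{s:4d} s: free drift {drift(s, False):7.2f} m, "
              f"with ZUPTs {drift(s, True):5.2f} m")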
OBJECTIVES

It is well established that GPS is not reliable for surveys
under or near a forest canopy due to obstruction of GPS satellite signals or signal multi-path problems (Darche, 1998;
Elosegui et al., 1995; Firth and Brownlie, 1998; Lachapelle
and Henriksen, 1995). The POS LS unit offers an alternative method for collecting geographic positions under such
adverse conditions. The accuracy of the POS LS system is a
function of the frequency of coordinate updates—from either a GPS signal or the input of known coordinates—and
the frequency of ZUPTs.
In this initial study, we only examined POS LS coordinate accuracy at a fixed ZUPT interval (nominally 30 seconds) under forest canopy. We examined both real-time
accuracy of the POS LS unit and accuracy obtained by postprocessing POS LS positions after each trial run (traverse)
was closed on a known point.
It is also important to note that in our test, because of
extremely dense canopy conditions, GPS positions were not
collected with the POS LS unit. Instead, the unit was initialized over previously surveyed reference points for each
trial run. These reference points were located in a clearcut
adjacent to the forested area. In practice, the GPS unit in
the POS LS could have been used in the clearcut to accurately establish the initial location of the instrument before
entering and after emerging from dense forest.
METHODS
On May 21-22, 2002, Applanix technical personnel
brought a POS LS instrument to the Capitol State Forest
near Olympia, Washington for trials under the canopy in
our forest test site. The forest is managed by the Washington State Department of Natural Resources. Our test site
has a mix of forest canopy cover, ranging from 70-year-old
conifer cover to recently clearcut areas. It has been the site
of several other geomatic (Reutebuch et al., 2003) and forestry (Curtis, et al., in press) research trials.
LIDAR data sets were collected in 1998, 1999, and 2000,
and a high resolution, 5x5-ft gridded digital terrain model
(DTM) was produced from the 1999 data. Additionally, a
closed traverse, total station survey was performed under a
full-canopy segment of the forest and the staked-points were
available for use in assessing the accuracy of the POS LS
system. Our test of the POS LS unit was built primarily
around re-visiting these surveyed points. A comparison was
also conducted between POS LS position elevations and elevations interpolated from the LIDAR-based DTM.
The closed traverse survey loop consisted of 26 points,
marked with 2x2-inch wooden pegs driven into the forest
floor down to ground level. Two reference points, marked
as 1A and 2A, were established from local HARN points
with a carrier-phase, survey-grade GPS instrument. Other
points, spaced around a roughly circular traverse of approximately 1 mile in length, were established with a Topcon
ITS-1 total station survey instrument. Closure calculations
showed the horizontal accuracy was 1:2840 and vertical closure was 1.1 inches. After adjustment, the horizontal and
vertical accuracy of the ground points were within 6 inches
and 1 inch, respectively.
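For readers unfamiliar with closure ratios such as 1:2840, the following sketch, using invented leg measurements, shows the computation: the misclosure vector left after summing all legs of a closed traverse is compared with the total distance traversed.

    # Sketch of a closure calculation: the gap between where a closed
    # traverse ends and where it started, expressed relative to the
    # distance traversed (leg measurements are invented).
    import math

    legs = [(200.0, 10.0), (-40.0, 180.0), (-162.0, -25.0),
            (3.5, -164.2)]                     # measured (dx, dy) per leg, ft

    dx = sum(d[0] for d in legs)               # zero for perfect closure
    dy = sum(d[1] for d in legs)
    misclosure = math.hypot(dx, dy)
    perimeter = sum(math.hypot(*d) for d in legs)
    print(f"misclosure {misclosure:.2f} ft over {perimeter:.0f} ft "
          f"= 1:{perimeter / misclosure:.0f}")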
Sets of POS LS coordinate data were collected continuously at a once-per-second rate over the course of the closed
traverse. Most of the readings were collected while in transit between survey stakes; however, specific blocks of recordings were noted. These blocks were:

1) Alignment Fix: The operator set the backpack at reference point 1A to establish the initial position and allow the system to determine true north.

2) Point Visitation: After alignment, with the instrument on his back, the operator located himself over a survey point (i.e., the 2x2-inch peg) and held himself steady enough to record several seconds of consistent coordinate readings. (Note: A vertical bias of 3.0 ft was subtracted during data reduction to account for the height of the unit’s recording point above the peg in these standing positions.)

3) ZUPTs: When alerted by the unit, the operator stopped with the instrument still on his back and held steady—at zero velocity—for several seconds.

4) Position Fix: The operator took the instrument off his back and set it on a survey stake for several seconds and commanded the system to update position. (A survey stake approximately midway through our closed traverse was used as this intermediate point.)
Coordinate and orientation data were collected continuously while following the closed traverse through several
loops. We divided this continuous stream of data into runs.
Each run was initiated at a Position Fix and terminated later
at another Position Fix with Position Visitations and ZUPT
updates registered in between. Twelve runs were made during our test.
Data Management and Reduction
During the two days of POS LS field testing, our operator (Joel Gillet from Applanix) tramped through our rough,
forested terrain for a total of nearly 6 miles while stopping
at 175 known positions to either re-initialize the instrument
or record points as coordinate data. The task took over ten
hours and the instrument, recording constantly at the rate
of one coordinate set per second, collected nearly 40,000
points.
From these data, Applanix delivered to us two types of
coordinate files: 1) the ‘real-time’ files that held lists of field
recorded ‘time, X, Y, Z’ data, and 2) the ‘post-processed’
files of the same data after adjustment. All coordinate data
had been transformed into the State Plane System, Washington South Zone, NAD83, Mean Sea Level Elevation,
NAVD88 datum, International Feet.
The ‘real-time’ and ‘post-processed’ data are purposely
distinguished in this report. The ‘real-time’ data are those
that the operator would see on the datalogger coordinate readout in the field as the POS LS is being carried in the forest.
Each real-time data file begins with an initial Position Fix.
The data from that initial point forward were computed by
dead reckoning based upon the IMU readings and INS projections, augmented by the operational ZUPTs.

The ‘post-processed’ data result from the same recordings, but they depend upon a final Position Fix at the end of
each run. An algorithm implemented in the Applanix
POSPac software is designed to adjust to zero the error at
this fixed terminus and to minimize the error over each run.
Both these data sets were examined in this study to quantify
the real-time point-by-point accuracy that one can expect in
the field, and the accuracy obtainable from further POSPac
refinements accomplished after data collection in the office.

RESULTS

Both the ‘real-time’ and ‘post-processed’ data from this
test are presented similarly. Tables 1 and 2 summarize the
basic results, including the average of coordinate errors at all
survey stakes, standard deviation, and maximum error associated with each run. Plots display error accumulation over
time for ‘real-time’ and ‘post-processed’ data and differences
between them (figs. 2 and 3).

Run descriptions

A ‘run’ is defined by an initial Position Fix and, with the
exception of Run 5, is terminated by a final Position Fix.
(Run 5 was terminated by an unexpected battery failure and,
therefore, did not have a terminal fix and could not be post-processed.) Each run took a certain time—recorded and
shown in seconds—and covered a certain point-to-point distance—computed as the accumulated, straight-line distance
between the points visited. The count of the actual number
of points visited is shown in the tables as well. The average
run time, number of points, and length was 48 minutes, 2,440
points, and 2,472 ft, respectively.
Coordinate errors at survey stakes
Tables 1 and 2 present the average coordinate errors (defined at each point as the coordinates collected over the survey stake (generally the average of ten readings) minus the
survey stake coordinates) for each run for the ‘real-time’ and
the ‘post-processed’ data, respectively. The overall error
means and standard deviations, weighted in proportion to
the number of points visited within each run, are computed
and displayed at the bottom of each table.
For the ‘real-time’ runs, the mean horizontal error at the
stakes was only 2.3 ft (1.6 ft stdev, 7.0 ft max). The mean
real-time elevation error was 1.4 ft (1.0 ft stdev, 5.4 ft max).
For the ‘post-processed’ runs, the mean horizontal error
at the stakes was only 1.4 ft (0.9 ft stdev, 4.0 ft max). The
average post-processed elevation error was a remarkable 0.4
ft (0.3 ft stdev, 1.4 ft max).
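The weighting is straightforward; the sketch below reproduces it for three runs, using the stake counts and mean horizontal errors from Table 1.

    # Sketch of the weighting used for the all-runs summary rows: each
    # run's mean error is weighted by the number of stakes visited
    # (values for runs 1, 2, and 6 taken from Table 1).
    runs = [(20, 1.9), (15, 1.7), (7, 4.0)]    # (stakes visited, mean error ft)

    total = sum(n for n, _ in runs)
    weighted_mean = sum(n * e for n, e in runs) / total
    print(f"weighted mean error: {weighted_mean:.2f} ft over {total} stakes")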
Error Plots at survey stakes versus time
Figures 2 and 3 display the coordinate error as it accumulated over time during each run. The plots distinguish
each run and are organized to contrast error drift in the ‘real-time’ data (fig. 2) and the ‘post-processed’ data (fig. 3).
Comparisons with a LIDAR DTM
As noted earlier, coordinates were being collected constantly at a one second interval while the operator traveled
between points. We have distinguished these ‘betweenpoints’ blocks of data by noting the operator movements.
By our definition, any point with a coordinate (X, Y) that
differs by a tenth of a foot from the average of coordinates
over a range of plus and minus 5 seconds is taken as a
point where the operator is in motion and between points.
Table 3 summarizes the post-processed data in this ‘between-points’ class. The table shows the total number of points
recorded in a run, and the total number of points recorded
while moving.

There is remarkably little difference between the LIDAR
DTM elevations and the POS LS elevations (DTM elevation minus the POS LS elevation). The mean, standard
deviation, and root-mean-square difference over all the moving points were 0.7, 1.0, and 1.3 ft, respectively, with differences ranging from –4.1 to 4.5 ft.
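A sketch of this classification rule on an invented one-dimensional track (10 s standing at a stake, 15 s walking at an uneven pace, then standing again); real POS LS fixes are of course three-dimensional.

    # Sketch of the 'between-points' rule: a fix counts as 'moving' when
    # its coordinate differs by more than 0.1 ft from the mean of the
    # fixes recorded within +/-5 s of it. The 1-D track is invented.
    import math

    track, x = [], 0.0
    for t in range(40):
        if 10 <= t < 25:
            x += 2.0 + 1.5 * math.sin(t)   # uneven walking speed, ft/s
        track.append(x)

    def is_moving(i, xs, window=5, tol=0.1):
        lo, hi = max(0, i - window), min(len(xs), i + window + 1)
        return abs(xs[i] - sum(xs[lo:hi]) / (hi - lo)) > tol

    moving = [i for i in range(len(track)) if is_moving(i, track)]
    print(f"{len(moving)} of {len(track)} fixes classified as moving")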
Table 1. Real-time POS LS system error computed from ground survey stakes.

Run   Length  Stakes  Total       Horizontal Error (ft)   Vertical Error (ft)    Combined Error (ft)    Time per Stake* (sec)
No.   (ft)    (no.)   Time (sec)  Avg.  Stdev.  Max.      Avg.  Stdev.  Max.     Avg.  Stdev.  Max.     Avg.  Stdev.  Max.
1     3732    20      6931        1.9   1.2     3.9       1.4   0.8     2.4      2.4   1.4     4.6      328   397     1912
2     2241    15      2496        1.7   1.2     5.2       1.4   0.9     2.8      2.2   1.4     5.7      162   127     529
3     3799    20      5201        2.4   2.1     5.8       2.6   1.6     5.4      3.7   2.5     7.1      256   198     893
4     2141    12      2553        1.9   1.1     3.7       2.0   1.8     4.6      2.8   1.9     5.4      207   137     522
5     2597    12      3103        3.5   2.2     6.9       1.1   0.4     1.6      3.7   2.2     7.0      259   208     797
6     957     7       1290        4.0   2.5     7.0       0.8   0.4     1.4      4.1   2.5     7.1      177   62      256
7     2237    12      2018        2.7   1.5     4.6       1.2   0.7     2.3      3.0   1.6     5.0      165   101     374
8     253     3       806         1.4   0.7     1.9       0.7   0.4     1.1      1.5   0.8     2.2      254   226     515
9     3486    17      3104        2.7   1.8     5.3       1.6   0.8     3.1      3.1   1.9     5.7      180   98      402
10    2239    13      2245        2.0   1.1     3.6       1.2   0.6     1.8      2.4   1.2     3.9      167   100     351
11    3739    19      3192        1.9   1.5     4.8       0.8   0.7     2.0      2.1   1.5     5.1      165   95      415
12    2239    12      1819        1.6   0.6     2.4       1.3   0.8     2.3      2.2   0.9     3.1      148   87      312
Weighted Avg., Stdev., Max.
Error--all runs                   2.3   1.6     7.0       1.4   1.0     5.4      2.8   1.8     7.1

*Time spent collecting data at each stake and traveling to next stake.
Table 2. Post-processed POS LS system error computed from ground survey stakes.

Run   Length  Stakes  Total       Horizontal Error (ft)   Vertical Error (ft)    Combined Error (ft)    Time per Stake* (sec)
No.   (ft)    (no.)   Time (sec)  Avg.  Stdev.  Max.      Avg.  Stdev.  Max.     Avg.  Stdev.  Max.     Avg.  Stdev.  Max.
1     3732    20      6931        1.1   0.6     2.6       0.3   0.3     0.8      1.2   0.6     2.7      328   397     1912
2     2241    15      2496        2.3   1.1     4.0       0.5   0.3     1.0      2.5   1.1     4.0      162   127     529
3     3799    20      5201        1.5   0.9     3.1       0.5   0.4     1.4      1.7   0.9     3.1      256   198     893
4     2141    12      2553        1.3   0.8     2.4       0.6   0.5     1.3      1.5   0.8     2.7      207   137     522
6     957     7       1290        0.9   0.4     1.4       0.2   0.2     0.5      0.9   0.3     1.4      177   62      256
7     2237    12      2018        1.2   0.9     2.6       0.2   0.2     0.7      1.3   0.9     2.6      165   101     374
8     253     3       806         0.4   0.2     0.6       0.1   0.1     0.3      0.4   0.2     0.6      254   226     515
9     3486    17      3104        1.3   1.2     3.4       0.3   0.3     0.8      1.4   1.2     3.5      180   98      402
10    2239    13      2245        0.9   0.6     1.8       0.3   0.3     0.9      1.0   0.7     2.0      167   100     351
11    3739    19      3192        1.5   1.1     4.0       0.4   0.3     1.2      1.6   1.1     4.0      165   95      415
12    2239    12      1819        1.3   0.8     2.5       0.4   0.3     0.7      1.4   0.8     2.5      148   87      312
Weighted Avg., Stdev., Max.
Error--all runs                   1.4   0.9     4.0       0.4   0.3     1.4      1.5   0.9     4.0

*Time spent collecting data at each stake and traveling to next stake.
Figure 2: Real-time combined (horizontal and vertical) position error over time.
Figure 3: Post-processed combined (horizontal and vertical) position error over time.
DISCUSSION
Our tables and plots were developed to help evaluate the
usefulness of the POS LS instrument in a forestry context,
particularly in those situations where GPS is unreliable or
known to be inaccurate. Three situations are of interest:
1) How well would the instrument serve as a tool for
locating specific field coordinates—a plot center,
for example, or the boundary points of a unit—in
real-time?
2) How well would the instrument serve as a tool for
collecting and post-processing coordinates to
record, for example, an existing plot center or
stream bed under a riparian canopy?
3) How well would the instrument serve as a tool for
collecting and post-processing the coordinates
necessary to define or evaluate a terrain profile or
a digital terrain model in areas of dense canopy?
In the first situation, the operator would use the POS LS
in a ‘real-time’ mode—out in the forest, using the real-time
coordinate read-out to navigate. With the other situations,
the operator would collect data and then post-process in the
office to prepare an accurate coordinate file.
Both Table 1 and Figure 2 demonstrate typical error patterns in runs initiated at a known point and accumulated
over time in the field. The total time lapse varies from 806
seconds (about 13 minutes) in Run 8, to 6931 seconds (almost 2 hours) in Run 1. The errors are generally dependent
upon time; however, as is apparent in both Table 1 and Figure 2, there are exceptions. It does seem safe to expect a
total vector error of less than 3 ft with a maximum error less
than 8 ft for operations under 30 minutes in length.

Table 2 and Figure 3 show the results when the same
data are post-processed. Generally, as is apparent from the
results, one can expect the error to be cut by half—total
vector errors less than 1.5 ft and maximums under 4 ft for a
30 minute operation.

Table 3 presents results in a format that should aid in our
evaluation of the instrument’s potential for collecting data
for a local DTM or linear profiles (streams, roads, trails, etc.).
As mentioned above, the elevation difference statistics in
Table 3 are based upon POS LS ‘moving points’ (17,635
points in total) compared to elevations interpolated from
our LIDAR DTM. The DTM is gridded at 5 by 5 ft. Its
accuracy was scrutinized closely and reported by Reutebuch
et al. (2003). The statistics in this LIDAR DTM evaluation
were based upon the differences between the DTM and the
elevations of a larger set of surveyed ground locations.

Using a subset of 121 points under the same portion of
the forest canopy where this POS LS test was conducted, we
computed a mean LIDAR DTM error of 1.02 ft, a standard
deviation of 0.95 ft, and minimum, maximum errors of
–1.97 and 4.30 ft.

Clearly, the weighted means and standard deviations for
the POS LS system (Table 3) are very comparable. As
Reutebuch et al. (2003) make clear, the elevation differences
are small and, most likely, can be attributed primarily to the
smoothing effect of the DTM, the slight positive bias that
was noted in the LIDAR DTM, the operator climbing over
large logs, small random errors in the ground survey, and/or
micro-topography of the actual forest floor.
Table 3. Differences between LIDAR DTM and POS LS elevations while unit was in motion (excludes data collected
while the POS LS unit was at rest).

Run    Total    Moving    Elevation Difference (LIDAR DTM elevation minus the POS LS elevation, ft)
(no.)  Points   Points    Avg.   Stdev.   RMS*   Min.   Max.
1      4986     3111      0.6    1.0      1.2    -3.0   4.3
2      2324     1628      0.4    0.9      0.9    -2.7   3.9
3      4222     2649      0.8    1.1      1.3    -4.1   4.2
4      2345     1434      1.5    1.0      1.8    -1.8   4.0
6      1065     710       1.1    0.7      1.3    -1.5   3.1
7      1896     1309      0.8    0.9      1.2    -1.0   4.4
8      253      154       0.9    0.4      1.0    -0.1   1.7
9      2982     2113      0.8    1.0      1.3    -2.3   3.9
10     2079     1187      0.4    0.8      0.9    -2.0   3.6
11     3033     2159      0.8    0.9      1.4    -1.5   4.5
12     1660     1181      0.5    0.9      1.0    -1.4   3.3
Weighted Avg., Stdev., RMS, Min., Max.
Difference--all runs      0.7    1.0      1.3    -4.1   4.5

*Root-mean-square difference between LIDAR and POS LS elevations.
CONCLUSIONS AND RECOMMENDATIONS
The POS LS does seem to have great potential in forestry. In an operating mode that is typical for forestry, and unimpeded by heavy canopy cover, the POS LS post-processed data are considerably better than those of a roving GPS instrument. This is true for its real-time mode as well. Therefore, whether a forester is navigating to a point or preparing to record and later post-process coordinate data, the POS LS offers a considerable accuracy improvement over a roving GPS instrument. Currently, the unit is quite expensive and heavy compared to conventional GPS units. However, when accurate positions under heavy canopy were needed in the past, foresters have been forced to use more labor-intensive, expensive and heavy ground survey methods and equipment. And, as happened with GPS units, it is expected that both the cost and weight of the POS LS unit will decrease as the system is miniaturized in the future.

It is clear that ZUPTs are very important to the accuracy of the POS LS system, as they are the only means of compensating for IMU drift when reliable GPS signals are unavailable and known points are not nearby. However, ZUPTs will in some circumstances be an operational impediment, requiring the operator to stop so frequently that progress of both navigation and data collection is slowed. The average time between ZUPTs in the runs of this test was 38.2 seconds, and the average time spent stopped for a ZUPT was 16.6 seconds. One can expect accuracy to erode if the ZUPTs are less frequent; however, there are many operations in forestry where less accuracy would be acceptable. Therefore, we recommend that a series of tests be designed to measure the positional accuracy versus ZUPT frequency relationship. There are certain to be many situations where this relationship will be of interest.
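To make the recommended test concrete, the minimal Python sketch below simulates how horizontal error might grow as ZUPTs become less frequent. The random-walk drift model and the drift rate are illustrative assumptions, not measured POS LS characteristics; only the 38.2-second average interval comes from this test.

    import random

    def simulate_run(duration_s, zupt_interval_s, drift_rate):
        """Crude random-walk model of horizontal IMU drift between ZUPTs.

        A ZUPT (zero-velocity update) is modeled as resetting the velocity
        error, so position error accumulates only within each interval.
        drift_rate is a hypothetical coefficient, not a POS LS property.
        """
        error_ft, t = 0.0, 0.0
        while t < duration_s:
            step = min(zupt_interval_s, duration_s - t)
            # Velocity error grows roughly linearly between ZUPTs, so the
            # position error contribution grows with the interval squared.
            error_ft += drift_rate * step ** 2 * random.uniform(0.5, 1.5)
            t += step
        return error_ft

    # Compare a 30-minute run at the test's observed ~38 s ZUPT spacing
    # against less frequent stops (hypothetical drift coefficient).
    for interval in (38.2, 60.0, 120.0):
        trials = [simulate_run(1800, interval, 5e-5) for _ in range(200)]
        print(f"ZUPT every {interval:5.1f} s -> "
              f"mean error {sum(trials) / len(trials):.1f} ft")

Under these assumed numbers the 38-second spacing yields errors of a few feet, in the same range as the observed results, while doubling or tripling the interval inflates the error severalfold; an actual field test would of course be needed to fit the real relationship.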
ACKNOWLEDGMENTS
Support for this research was provided by the USDA Forest Service, Pacific Northwest Research Station, and the Precision Forestry Cooperative within the University of Washington College of Forest Resources.

The authors wish to acknowledge the Washington State Department of Natural Resources for its generous contributions, including use of the test site and assistance from the Resource Mapping Section, that made this study possible. We also acknowledge Brian Johnson and Joel Gillet of Applanix Corporation for all their help with both field work and data processing. Finally, we wish to thank Andrew Cooke, University of Washington student, for the spreadsheet setup and graphics he provided for this paper.
REFERENCES

Curtis, R.O., D.M. Marshall, and D.S. DeBell (eds.). In press. Silvicultural options for young-growth Douglas-fir forests: The Capitol Forest Study—establishment and first results. U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station, Portland, Oregon, General Technical Report PNW-GTR-XXX.

Darche, M. 1998. A comparison of four new GPS systems under forestry conditions. Special Report 128. Forest Engineering Research Institute of Canada, Pointe-Claire, Quebec, Canada. 16 p.

Elosegui, P., J. Davis, R. Jaldehag, J. Johansson, A. Niell, and I. Shapiro. 1995. Geodesy using the Global Positioning System: The effects of signal scattering on estimates of site position. J. Geophys. Res., Vol. 100: 9921-9934.

Farrel, J.A. and M. Barth. 1999. The Global Positioning System and inertial navigation. McGraw-Hill, New York, NY.

Firth, J. and R. Brownlie. 1998. An efficiency evaluation of the global positioning system under forest canopies. NZ Forestry, May 1998: 19-25.

Gillet, J., R. McCuiag, B. Scherzinger, and E. Lithopoulos. 2001. Tightly coupled inertial/GPS system for precision forestry surveys under canopy: test results. First International Precision Forestry Symposium, University of Washington, College of Forest Resources, Seattle, WA, June 17-20, 2001: 131-138.

Lachapelle, G. and J. Henriksen. 1995. GPS under cover: the effect of foliage on vehicular navigation. GPS World, March 1995: 26-35.

Reutebuch, S., R. McGaughey, H. Andersen, and W. Carson. In press. Accuracy of a high-resolution LIDAR-based terrain model under a conifer forest canopy. Canadian Journal of Remote Sensing, Vol. 29, No. 5: 527-535.
Ground Navigation Through the Use of Inertial
Measurements, a UXO Survey
MARK BLOHM AND JOEL GILLET
Abstract: While portable inertial navigation systems have been successfully developed to facilitate land surveying under tree canopy, and are currently used in the oil and gas exploration business, their inherent cost has so far been a hindrance to their acceptance in other markets.

A new generation of inertial survey instruments is being developed to keep the advantages of the previous generation, such as portability, productivity and low environmental impact, but at a lower cost, closer to the level of traditional survey instruments and geodetic GPS receivers.

The US Army Corps of Engineers, having identified this need after a first phase of demonstration of "Innovative Navigation Equipment and Methodologies to Support Accurate Sensor Tracking in Digital Geophysical Mapping (DGM) Surveys" during 2001, has financed further studies of lower-cost portable inertial navigation systems by Applanix Corporation and Blackhawk Geoservices.

This paper presents the joint efforts made by these two companies to define the requirements for a lighter, smaller, less expensive inertial Position and Orientation System that can be directly integrated with existing field instrumentation (such as a geophysical instrument) for use under canopy. This instrument could be of value to forestry applications.
Precision Forestry Operations and Equipment in Japan
KAZUHIRO ARUGA
Abstract: A higher level of various forest operational activities will be required to meet the growing demands on forest resources. The Japanese Forestry Agency has been developing equipment required for more efficient and precise operations. A tool consisting of a GPS, an electrical compass, a laser range finder, digital calipers, and a PDA has been developed to measure background information such as topography and forest conditions more easily. In order to access steep terrain of more than 30 degrees, monorails equipped with a crane or a grapple will start to be introduced into forestry, and an autonomous monorail for transportation has been developed. As for harvesters and forwarders, remote controlled or autonomous machines have been studied to increase forestry productivity as well as to reduce environmental impact on the forest.
INTRODUCTION
The world's population was over 6 billion in 2001 and is projected to be near 9.3 billion by 2050. This increase in the world population will demand more resources such as freshwater, energy, food, logs, lumber, pulp, and other forest products. Besides providing wood products, the forest has other important functions. Trees and other vegetation on forestlands remove carbon dioxide from the air and release oxygen. Streamside trees provide shade and cool water for fish and other aquatic species during hot summer months; they supply large woody debris to streams that provides and maintains fish and wildlife habitat; and they are a critical component of wetlands and river banks, assisting in the protection of water quality and habitat for fish and wildlife. The forest is also a major source of outdoor recreation where people can fish, hunt and engage in other outdoor activities.
A higher level of various forest operational activities will be required to meet these higher demands on resources. More large, heavy forestry machines and more forest roads will be needed to increase operational productivity. However, it is difficult to use existing forestry machines in mountainous areas, and even where they can be used, their operational cost is high. Furthermore, such machines have had a negative impact on forest environments by causing soil disturbance and residual stand damage even on gentle slopes. Proper planning and implementation of forestry operations minimize these negative impacts. The Japanese Forestry Agency has been developing equipment required for more efficient and precise operations. This paper describes this equipment, which includes a forest survey tool, a forestry operation simulation tool, and remote controlled or autonomous machines.
SURVEY TOOL AND SIMULATION

Implementation of a more efficient and precise forestry operation requires more precise and accurate data on topography and forest conditions. Topography can be measured accurately by LIDAR, and much research has been conducted to develop filters that extract more accurate topography from raw LIDAR data. Forest conditions include tree number, location, species, height, diameter, and volume. The Japanese Forestry Mechanization Society and the Japanese company Timbertech have developed the survey tool Formas, consisting of a GPS, an electrical compass, a laser range finder, digital calipers, and a PDA, under a Japanese Forestry Agency project. Since the components used in Formas already exist, the project is aimed at integrating the equipment to measure location, height, and diameter and to calculate volume more easily. Though automated individual tree measurement with LIDAR has been studied (Andersen et al. 2002), this project also tries to develop a smaller ground-based laser detector which scans topography and trees in three dimensions.
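The paper does not give Formas' volume equations; as a hedged illustration, the Python sketch below uses a generic form-factor approximation, V = f x basal area x height, to show the kind of calculation such a tool might automate once location, diameter, and height are captured. The form factor value is illustrative, not from the paper.

    import math

    def stem_volume_m3(dbh_cm, height_m, form_factor=0.45):
        """Estimate single-stem volume from DBH and height.

        Uses the generic form-factor approximation V = f * g * h,
        a stand-in for whatever equations a tool like Formas applies;
        the default form factor 0.45 is an illustrative assumption.
        """
        basal_area_m2 = math.pi * (dbh_cm / 200.0) ** 2  # DBH cm -> radius m
        return form_factor * basal_area_m2 * height_m

    # Example: a 32 cm DBH, 24 m stem.
    print(f"{stem_volume_m3(32.0, 24.0):.3f} m^3")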
More precise and accurate data on topography and forest conditions will also yield more accurate simulations. Many simulations of forestry operations have been performed throughout the world. In Japan, Sasaki simulated a mobile yarder with C++ (Sasaki and Kanzaki 1998). Zhou simulated a mobile yarder and processor on steep terrain, and a harvester and forwarder on a gentle slope, with GPSS (Zhou and Fujii 1995). In addition, Sakurai simulated a mobile yarder, a processor, and a forwarder (Sakurai 2001). However, these simulations have been used only for research. In order to use them in the forestry industry, it is necessary to collect data associated with specific sites and equipment. Finally, verification of the simulations on operational sites should be conducted.
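As a minimal illustration of the kind of cycle-time simulation these studies describe (the cited C++ and GPSS models are far more detailed), the sketch below uses entirely hypothetical parameter values.

    import random

    def simulate_forwarding(n_cycles, distance_m, speed_m_min,
                            load_min, unload_min):
        """Minimal stochastic cycle-time simulation for a forwarder.

        Cycle = travel empty + load + travel loaded + unload, with random
        variation on the loading elements. All parameter values are
        illustrative; site-specific data would be needed in practice.
        """
        total_min = 0.0
        for _ in range(n_cycles):
            travel = 2 * distance_m / speed_m_min
            total_min += (travel
                          + random.gauss(load_min, 0.1 * load_min)
                          + random.gauss(unload_min, 0.1 * unload_min))
        return total_min

    minutes = simulate_forwarding(50, distance_m=300, speed_m_min=50,
                                  load_min=12, unload_min=8)
    print(f"50 cycles take about {minutes / 60:.1f} machine-hours")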
FOREST ROAD AND MONORAIL
Road density in Japanese forests is 13 m/ha. Over the course of 40 years, the Japanese Forestry Agency is planning to raise forest road density to 18 m/ha. Unfortunately, 18 m/ha is not high enough to conduct forestry operations with the mobile yarders and small forwarders typically used in Japan. As forest roads must be constructed to a forest road standard (the safest means), costs can exceed 100,000 yen/m. A low-volume road, called a strip road, is constructed without strict standards in order to complement the forest road network. Its width is about 2 m, a small forwarder or small truck can be driven on it, and its cost is about 10,000 yen/m. However, a strip road in mountainous areas is subject to minor landslides. In fact, strip road studies in Japan specifically found that minor landslides were related to topography, vegetation, soil, climate, and road structure (Cheng et al. 2002, Suzuki and Yamauchi 2002, Yoshimura et al. 1996). Though vegetation, soil, and climate must be measured on site, topography and road structure can be measured with a high-resolution DEM (e.g. from LIDAR). This is helpful for forecasting where minor landslides may occur and for designing proper road locations.
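As an illustration of this kind of screening, the sketch below derives a slope grid from a DEM and flags cells steeper than an assumed 30-degree threshold; the threshold and toy data are illustrative, not taken from the cited studies.

    import numpy as np

    def slope_degrees(dem, cell_size_m):
        """Slope grid (degrees) from a square-cell DEM via central differences.

        A high-resolution DEM (e.g. from LIDAR) screened this way can flag
        steep ground where a strip road would be prone to minor landslides.
        """
        dz_dy, dz_dx = np.gradient(dem, cell_size_m)
        return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    # Toy 5 m DEM: a uniform ramp rising 3 m per cell in x.
    dem = np.add.outer(np.zeros(4), np.arange(5) * 3.0)
    steep = slope_degrees(dem, 5.0) > 30.0
    print(steep.sum(), "of", steep.size, "cells exceed 30 degrees")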
It is difficult and expensive to construct even low-volume roads at many forest sites in Japan because of the steep slopes (in some cases more than 30 degrees). In order to access these sites, monorails have been introduced to forestry (Nitami 2003). Many industrial monorails for agriculture and civil engineering are used in Japan. Most notably, agricultural monorails are used at an orange grove on the steep terrain of Shikoku Island, Japan. Many companies produce industrial monorails. Maximum loads range from 200 kg (Figure 1) to 5,000 kg (Figure 2). The construction cost of a small monorail is about 15,000 yen/m and that of a large one is about 35,000 yen/m, including rail material, labor, and cars (if more than 700 m of rail is constructed). All monorails can climb more than 40 degrees and their speed is about 40 m per minute. Monorails have rack-and-pinion traction mechanisms for slip-free movement and a dual brake system for safety, especially on downhill sections. In addition, monorails can be equipped with cranes and grapples to load logs (Jinkawa et al. 1998). In the end, it is important that monorails cause little disturbance to the overall forest environment. The Japanese Forestry Mechanization Society has also developed autonomous monorails for transportation.

Figure 1. Monorail with a passenger car.

Figure 2. Monorail with a crane.
FORESTRY VEHICLE

The Japanese Forestry Agency has developed forestry machines to be used in mountainous forest regions. Recently, a semi-legged vehicle was introduced and modified for Japanese forestry (Aruga et al. 2001). In Europe, a harvester with four triangle-shaped crawlers was produced by Valmet (Stampfer and Steinmulle 2001). Such machines can maneuver in steep and rough terrain, so forestry machines could come into much wider use there. When using a harvester and forwarder system, the forwarding cost represents approximately 10% of the forest industry's raw material cost. Forwarders compact soil more than harvesters because forwarders move with many loaded logs. The efficiency of this operation can be improved by combining satellite navigation (GPS) and radio communication: if the log positions could be transferred to the forwarder after the harvester cuts and processes the trees, the forwarder would not have to move around to find them. Productivity is improved because moving and judging time are shortened, and the environmental impacts are reduced because the number of vehicle passes decreases and trail areas are restricted. In addition, the use of a transport optimization algorithm for the forwarding operation would make this process even more efficient.
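The text does not specify the transport optimization algorithm; as a hedged stand-in, a greedy nearest-neighbor ordering over harvester-reported pile positions illustrates the idea (a production system might solve a proper routing problem instead).

    import math

    def pickup_order(forwarder_xy, log_piles):
        """Greedy nearest-neighbor visiting order over log piles.

        Positions are (x, y) in meters, e.g. taken from the harvester's
        GPS log; the coordinates below are illustrative.
        """
        order, here, remaining = [], forwarder_xy, list(log_piles)
        while remaining:
            nxt = min(remaining, key=lambda p: math.dist(here, p))
            order.append(nxt)
            remaining.remove(nxt)
            here = nxt
        return order

    piles = [(120, 40), (35, 80), (60, 10), (150, 90)]
    print(pickup_order((0, 0), piles))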
Figure 3. The examination of the speed sensor on national road. (The thick line indicates roads on which the speed
sensor could be used.)
Figure 4. The examination of GPS data communication system using cellular phones. (The thick line indicates roads on
which communication was successful.)
Though transportation efficiency has been improved with satellite navigation and radio communication in other industries, it is difficult to use GPS in the forest (Reutebuch et al. 1999). To counter this, a speed sensor consisting of a GPS, yaw and pitch gyros, and an acceleration meter, produced by Datatec Co., Ltd., was tested. The sensor measured vehicle positions in tunnels and on a forest road along a stream (Figure 3), locations unfavorable to the use of GPS (Figure 4). The sensor could not, however, be used to measure positions on strip roads, where a considerable amount of wheel slippage occurs (the device was not produced for use on off-road vehicles) (Figures 5 and 6). As a result, it will be necessary to develop a new sensor with a GPS, a gyro, an acceleration meter, and other meters for off-road vehicles (Imou et al. 2001, Mozuna and Yamaguchi 2003).
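A dead-reckoning update of this general kind can be sketched as follows; the sample values are illustrative, and a real sensor would fuse GPS, gyro, and accelerometer data rather than integrate raw samples.

    import math

    def dead_reckon(x, y, heading_deg, samples):
        """Integrate (speed, yaw-rate, dt) samples into a position track.

        Mimics, in the simplest possible form, how a speed sensor plus gyro
        can carry a vehicle position through tunnels or under canopy where
        GPS drops out. Wheel slip on strip roads would corrupt the speed
        term, as the text notes. All values are illustrative.
        """
        track = [(x, y)]
        for speed, yaw_rate, dt in samples:
            heading_deg += yaw_rate * dt
            x += speed * math.cos(math.radians(heading_deg)) * dt
            y += speed * math.sin(math.radians(heading_deg)) * dt
            track.append((round(x, 1), round(y, 1)))
        return track

    print(dead_reckon(0.0, 0.0, 0.0,
                      [(5.0, 0.0, 1.0), (5.0, 9.0, 1.0), (5.0, 0.0, 1.0)]))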
As cellular phones are widely used in Japan, they could be instrumental in enhancing a vehicle navigation system. The transfer of GPS data by cellular phone was tested in the Tokyo University Forest in Chichibu. A car equipped with a GPS receiver (Trimble AgGPS124) traveled from the Tokyo University Forest office along R140 to the end of the Irikawa Forest Road, a distance of 25 km. GPS data obtained from the receiver were downloaded to a computer in the car and transmitted from that computer to a computer in the office by cellular phone. It was demonstrated that GPS data can be transmitted anywhere except in sections with tunnels and the portion of the test forest road along a stream (Figure 4).
A Japanese construction machinery company has started
to equip construction equipment used as base-machines for
forestry equipment with satellite communication systems.
This system transfers the machine position obtained from
GPS as well as operational information (such as working
time and earthwork efficiency) so that a customer in an office can decide when the machine needs to be maintained. This system is used only to transmit daily reports. Another Japanese forestry equipment company has developed a GPS data and message transmission system, which transfers GPS position data (e.g. of forestry workers or vehicles) to other workers, vehicles and office locations through satellite communication and Internet applications. Satellite communication systems are unique in that they can transfer information anywhere. Unfortunately, their continuous use is cost prohibitive. For this reason, the satellite communication systems associated with construction machines transfer information only once a day, and the system developed by the Japanese forestry equipment company transfers information only during emergency situations. Certainly, if satellite communication system costs go down, we can expect their use to become widespread in the forestry industry.

Figure 5. The examination of the speed sensor on a forest road.

Figure 6. The examination of the speed sensor on a forest strip road.
CONCLUSIONS

This paper has described the development of equipment required for more efficient and precise operations: a forest survey tool, a forestry operation simulation tool, and remote controlled or autonomous machines. First, we have to gather background information such as tree location, tree properties and topography with survey tools. Second, we must develop criteria for cost, productivity, energy consumption and environmental impact. Third, we have to evaluate equipment and systems against these criteria using simulation tools. Once this is done, we can determine which are technically sound, economically efficient and environmentally acceptable. Finally, we must implement our decision and monitor its results. If current systems and equipment are inadequate, then modifications, improvements, or even new concepts must be investigated. Remote controlled or autonomous machines will be useful for forestry operations. However, it is most important that any forest operational activities protect the forest ecosystem.

LITERATURE CITED

Andersen, H., Reutebuch, S., and Schreuder, G. F. (2002) Automated Individual Tree Measurement Through Morphological Analysis of a LIDAR-Based Canopy Surface Model. Proceedings of First International Precision Forestry Symposium, University of Washington, College of Forest Resources, June 2001, pp. 11-22.

Aruga, K., Iwaoka, M., Sakai, H., and Kobayashi, H. (2001) The Dynamic Analysis of Soil Deformation Caused by a Semi-legged Vehicle. Proceedings of the Symposium IUFRO Group 3.11.00 at the XXI IUFRO World Congress: 1-7.

Cheng, P. F., Gotou, J., and Zhao, W. M. (2002) Assessing the stability of cut slopes by using soil profile pattern classification. J. Jpn. For. Eng. Soc. 13(1):3-14. (in Japanese with English summary)

Imou, K., Okamoto, T., Kaizu, Y., and Yoshii, H. (2001) Ultrasonic Doppler Speed Sensor for Autonomous Vehicles. J. JSAM 63(2): 39-46.

Jinkawa, M., Tsujii, T., Furukawa, K., and Fujii, T. (1998) Development and construction of the tram-car for slopes. J. Jpn. For. Eng. Soc. 13(3):183-192. (in Japanese with English summary)

Mozuna, M. and Yamaguchi, H. (2003) A Study on an Autonomous Forwarder by Remote Brain Control System. Proceedings of Int. Seminar on New Roles of Plantation Forestry Requiring Appropriate Tending and Harvesting Operations: 474-479.

Nitami, T. (2003) Network of Roads in the Forest with Compound Standards. Proceedings of Int. Seminar on New Roles of Plantation Forestry Requiring Appropriate Tending and Harvesting Operations: 91-95.

Reutebuch, S. E., Fridley, J. L., and Johnson, L. R. (1999) Integrating Realtime Forestry Machine Activity with GPS Positional Data. ASAE Annual International Meeting: Paper No. 99-5037.

Sakurai, R. (2001) The study on the development of mechanized logging operational system. Ph.D. thesis, The University of Tokyo. 203 pp. (written in Japanese with a tentative translation by the author)

Sasaki, S., and Kanzaki, K. (1998) A computer simulation of yarding operation using an object-oriented model. J. Jpn. For. Eng. Soc. 13(1):1-8. (in Japanese with English summary)

Stampfer, K., and Steinmulle, T. (2001) A New Approach to Derive a Productivity Model for the Harvester "Valmet 911 Snake". Proceedings of The International Mountain Logging and 11th Pacific Northwest Skyline Symposium: 254-262 (http://depts.washington.edu/sky2001/).

Suzuki, Y., and Yamauchi, K. (2000) Practical investigation on affiliate structures for prevention of disaster and degradation on low-standard forest roads. J. Jpn. For. Eng. Soc. 15(1):113-124. (in Japanese with English summary)

Yoshimura, T., Akabane, G., Miyazaki, H., and Kanzaki, K. (1996) The evaluation of potential slope failure of forest roads using the fuzzy integral - Testing the discriminant model. J. Jpn. For. Eng. Soc. 11(3):165-172. (in Japanese with English summary)

Zhou, X. and Fujii, Y. (1995) Simulation of the yarding and log-making operations system with a use of GPSS. J. Jpn. For. Eng. Soc. 10(2):243-252. (in Japanese with English summary)
Precision Forestry Applications: Use of DGPS Data to
Evaluate Aerial Forest Operations
JENNIE L. CORNELL, JOHN SESSIONS AND JOHN MATESKI
Abstract: Aerial operations play an important role in efficient and cost-effective management of forestlands. The focus of
this paper is on potential uses of precision forestry data for evaluation, planning and implementation of an aerial forest
operation. Helicopters are used for aerial seeding of harvested or burned areas; application of herbicides, insecticides and
fungicides; fertilization; timber harvesting; delivery of water and retardant in fire suppression efforts; transportation of crews,
equipment and supplies; slash disposal; emergency medical evacuations; cone collection; tree pruning; insect and disease
surveys; and for general reconnaissance. Planning and implementation of aerial forest operations with helicopters seeks the safest and least-cost approach for personnel and the aircraft.
A helicopter operation involved with the application of experimental minerals on stands of Douglas-fir in the central Coast
Range of Oregon was used as a case study. The differential global positioning satellite data collected during application was
used to evaluate empirical estimates for production and costs for one mineral ($/ton applied); to compare the influence of
operational aspects on cycle time (e.g. heliport approach and departure pathways); and to develop a regression to estimate
production based on operational parameters. The regression model derived from the differential global positioning satellite
data collected on the operation validated the empirical estimates for helicopter production. This paper summarizes the data
analyses and discusses some of the potential uses and limitations of differential global positioning satellite data for aerial forest
operations.
INTRODUCTION

Aerial operations play an important role in efficient and cost-effective management of forestlands. Helicopters have historically been used in the forested environment for a variety of management activities, from application and harvesting operations to fire suppression and reconnaissance. Planning for aerial forest operations has traditionally sought the safest and least-cost approach for the helicopter, crew and support personnel. The focus of this paper is the use of precision forestry data collected for a case study of a mineral application operation. The case study served as the model for development of a planning approach to consider the economics of several options for the transportation and aerial application of specialized minerals to stands of Douglas-fir (Pseudotsuga menziesii) in the Coast Range of Oregon (Cornell 2003).

CASE STUDY

A forestland manager identified a practical need for an operations planning approach to minimize costs for transportation and aerial application of experimental minerals on private forestland. Stands of Douglas-fir regeneration in the Coast Range of Oregon have experienced significant growth reductions in recent years due to the effects of Swiss needle cast disease (Filip et al. 2002). Preliminary results indicate that application of specialized minerals has potential to offset growth reductions caused by Swiss needle cast. To facilitate incorporation of the minerals into the soil with natural precipitation, the minerals were applied aerially during the winter and spring seasons to units in the project (Figure 1) (Gourley 2002).

The experimental project covered nine application units. The units varied from 5 acres to 169 acres, with tree ages from 2 to 30 years. The minerals were applied from January 16, 2002 to February 1, 2002. The operation involved applications of up to six minerals at two different times. Dry material (granular and pelletized form) was applied in the winter using a bucket system and included up to five different minerals. Then, a two-stage liquid sulfur application with a spray boom configuration followed in late spring. The total 2002 acreage for the case study (dry material) application was 302 acres, with a combined amount of applied minerals of 306 tons. The mineral selected for the production and cost estimates and the flight data analyses was doloprill, a pelletized dolomitic lime with a molasses binder, consisting primarily of calcium with a 9% magnesium content. The application rate ranged from 1000 to 2273 lbs/acre.
Figure 1: Vicinity map for case study mineral amendment project (scale: 1 inch = 8 miles; vicinity of Philomath, Corvallis and Newport, Oregon, along Highway 20). Unit acreages and tons applied: unit 1, 20 ac, 10 tons; unit 2, 12 ac, 20 tons; unit 3, 37 ac, 41.3 tons; unit 4, 12 ac, 12 tons; unit 5, 169 ac, 172.5 tons; unit 6, 10 ac, 12.2 tons; unit 7, 5 ac, 3.3 tons; unit 8, 15 ac, 10 tons; unit 9, 22 ac, 25 tons.
The application of large amounts of minerals during the winter season potentially increases road improvement and maintenance costs for the landowner and thus can increase overall project costs. The increased road costs are a result of ground transportation of the large quantities of minerals to heliports. The landowner was interested in evaluating potential transportation and application scenarios to minimize the total project costs and facilitate future planning on a landscape scale for an aerial operation.

The objective of the research was to develop a planning approach using mixed-integer linear programming techniques to evaluate a combination of heliports, aircraft, and transportation options to minimize overall project costs under the constraints of operational safety while meeting the forest landowner's objectives. Key cost and production components for the operation were identified and formulated for the mathematical model on a common basis. The approach was suitable for the case study and could be applicable to other aerial forest operations involving helicopters as well.

Empirical production estimates for application were used to estimate ferry and application costs for the Bell 47G3 (B47G3) helicopter in the modeled transportation network options. Data collection in the field application phase of the case study supported formulation of the production and cost estimate framework for the project.

EMPIRICAL PRODUCTION MODEL

The empirical production model estimated a total cycle time for the helicopter based on an average forward acceleration/deceleration airspeed, a maximum ferry speed, a total ferry distance, a maximum payload and a calibrated application rate. Calculations for the model were in spreadsheet format. The cycle elements for the empirical model were the same as those used in the flight data collection for the delay-free cycles. Details of the empirical model and the cycle components are discussed in another paper (Cornell 2003).

The production rate for the helicopter was assumed to be the controlling factor for overall production on the operation. Production rates for the agricultural trucks were based on the lowest estimated production rate for a helicopter at a given heliport.
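Details of the spreadsheet model are in Cornell (2003); the sketch below merely illustrates how a delay-free cycle time could be assembled from the listed inputs. The trapezoidal speed profile, the fixed reload time, and all default values are assumptions for illustration, not the paper's actual coefficients.

    def cycle_time_s(ferry_dist_ft, ferry_speed_mph, accel_mph_s,
                     payload_lb, app_rate_lb_acre, swath_acres_min):
        """Spreadsheet-style delay-free cycle time from the model's inputs
        (acceleration/deceleration airspeed, ferry speed, ferry distance,
        payload, calibrated application rate). Illustrative only.
        """
        fps = ferry_speed_mph * 5280 / 3600
        accel_fps2 = accel_mph_s * 5280 / 3600
        t_accel = 2 * fps / accel_fps2                  # speed up + slow down
        d_accel = fps * t_accel / 2
        t_cruise = max(0.0, ferry_dist_ft - d_accel) / fps
        t_apply = (payload_lb / app_rate_lb_acre) / swath_acres_min * 60
        t_reload = 45.0                                 # assumed fixed reload
        return t_accel + t_cruise + t_apply + t_reload

    t = cycle_time_s(4000, 60, 3, 1000, 1500, 0.5)
    print(f"cycle {t:.0f} s -> {3600 / t * 1000 / 2000:.1f} tons/hour")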
DATA COLLECTION
Previous experience with collecting production data in
logging operations has shown it to be challenging due to
variability of the operating environment (Olsen and Kellogg
1983). The same is true for aerial forest operations because
the activity is outdoors and performed under varying weather
conditions and changing geographic locations. The overall
objective for data collection was to obtain production and
flight information for the B47G3 helicopter (Figure 2) applying the doloprill and to use that information to verify
empirical production estimates and predict production and
costs of future operations. The helicopter had a maximum
hover-out-of-ground-effect1 payload of 1000 pounds for this
operation.
Detailed flight time data were collected using the Ag-Nav®2 differential global positioning satellite (DGPS) system installed on the B47G3 helicopter (Figure 3).
The latitude and longitude positions received corrections from Wide Area Augmentation System (WAAS) transmitters, and the system used a WGS84 datum for map coordinates. The Ag-Nav®2 system recorded a unit map with flight lines and application swaths flown by the helicopter (Figure 4). The flight and application data were cross-referenced with non-productive (non-application) flight times and activities recorded with the shift-level information to identify delay-free cycles.

Figure 2: Bell 47G3 helicopter reloading the application bucket from the agricultural truck on the case study application project.

The cycle data for selected samples from the Ag-Nav®2 DGPS system were converted from binary code data files to a spreadsheet format using the CROP2TXT software from Ag-Nav, Inc. The converted flight data were used to calculate total flight path distance per cycle; to estimate a maximum difference in elevation per cycle; to calculate average acceleration and deceleration; to determine reload time; and to determine the time and distance for the helicopter to transition and accelerate to ferry speed and to decelerate to flare to load minerals (average airspeed below 25 mph).

Figure 3: Ag-Nav®2 display screen and directional light bar installed in the Bell 47G3 helicopter.

Figure 4: Ag-Nav®2 map with flight paths and application swaths for unit 3 of case study (map elements: unit boundary, flight path, application swath, heliport).

1 Hover-out-of-ground-effect (HOGE) payload is the maximum payload the helicopter can lift for a given air temperature, altitude, and wind velocity when the helicopter is at a vertical distance from the ground greater than one-half the rotor diameter.

2 The mention of commercial operators and trade names of commercial products, equipment and software in this paper does not constitute endorsement or recommendation by the authors or Oregon State University.

3 Wide Area Augmentation System.
Figure 5: Empirical model estimate compared to 10 random samples of actual flight (DGPS) data for unit 6 on case study; airspeed (mph) versus total flight path distance (feet). An annotation marks the helicopter turning to begin a new swath.
Figure 6: Effect on cycle time of restricted versus unrestricted heliport approach and departure flight path (lines fitted from 25 random delay-free cycles for each unit; total cycle time in seconds versus total flight path distance in feet). Unit 3 model output: y = 0.0094x + 59.832, R2 = 0.6131, n = 25. Unit 5 model output: y = 0.0088x + 70.557, R2 = 0.5822, n = 25. Data range overlap: 4,540-6,750 ft.
Figure 7: Simple linear regression to predict a total cycle time as a function of the estimated total flight path distance, with a 95% prediction band (using Scheffe's multiplier) and range of validation data (based on two random, exclusive, independent 50-cycle samples from the entire case study). Regression line from Group 1 DGPS data: Yest = 0.0093Xest + 66.9119, R2 = 0.7686, n1 = 50; Group 2 DGPS data and empirical model estimates plotted for comparison; range of validation 3,900-10,500 ft.
DATA ANALYSES
There were three objectives for the flight data analyses: first, to compare empirical model estimates for delay-free cycle time to actual flight data; second, to compare the effect of a restricted heliport approach and departure path on cycle time; and third, to derive a statistical relationship from the flight data to estimate cycle time for project planning and cost estimations.

The first analysis compared empirical model estimates of helicopter cycle times and production for selected field units to actual flight data (Figure 5). The empirical model appeared to give a reasonable approximation of total cycle time as a function of total flight path distance. The second analysis compared the effect of restricted or obstructed helicopter approach and departure pathways on total cycle time for two heliports (Figure 6). Restricted heliports with obstacles and/or steep approach and departure paths can increase operational costs and risk to the pilot, aircraft and support personnel. The helicopter has to ascend loaded and/or maneuver around to clear obstacles before accelerating to ferry or application speed, or decelerating to land or load. The heliport with the obstructed approach and departure pathway had an overall increase in total cycle time for the data sample. The third analysis developed a simple linear regression from a sample of recorded flight data to predict an average total cycle time for the helicopter (Figure 7). A second independent data sample was used to validate the regression. Through the use of an extra-sum-of-squares F-test, the single most significant independent variable was the estimated total flight path distance of the helicopter per cycle. The empirical model estimates for helicopter cycle time were within the 95% prediction band for the simple linear regression. It was assumed the regression could be used as a surrogate for the empirical model to adequately predict a total cycle time for a similar operation with similar parameters.
USES OF PRECISION DATA

Data Analysis for Case Study

A comparison of empirical model estimates to actual flight data illustrated that the model gave a good representation of the actual flight pattern of the helicopter during a cycle for the given operational scenario.

The flight data analysis indicated heliport approach and departure flight path access had an effect on the production and cost of the helicopter. Payload capability is perhaps the performance characteristic of greatest economic importance for some operations (Stevens and Clarke 1974). In general, reduced payloads from restricted heliports decrease production and increase cost. Heliport access has a direct effect on risk management for an operation. Restricted heliports can reduce pilot visibility and increase the performance demands on the helicopter (Stevens and Clarke 1974). If the straight-line flight path gradient is greater than 29%, it is not safe to fly, and this effect is exaggerated over short distances (O'Brien and Brooks 1996).

The simple linear regression developed from the flight data predicted a total cycle time in seconds as a function of the estimated total flight path distance in feet:

Yest = 0.0093 Xest + 66.9119

where:
Yest = prediction estimate for average total cycle time (seconds)
Xest = estimated average total flight path distance (feet)

and provided a reliable estimation of average total cycle time within the data range indicated. The regression had an R2 = 0.7686, with a residual standard error of 12.64 on 48 degrees of freedom. Due to the compound uncertainty in estimating several means simultaneously, the Scheffe method was used to construct the 95% prediction band (Ramsey and Schafer 1997).

The regression had a good fit within the range of data in the samples. The estimated cycle time can be used to calculate helicopter production for an average payload of doloprill for a range of flight path distances within the parameter and operational limits of the project.

Application of Regression

If it is assumed that small changes in parameters, such as payload, do not substantially affect helicopter performance, the regression may be used to quickly generate general relationships and trends for helicopter production and costs over a range of flight path distances. These types of estimates may be useful to plan projects with similar conditions and materials. Figure 8 illustrates the production trend for the B47G3 helicopter for two payloads over a range of total flight path distances using the regression to estimate total cycle time. Figure 9 illustrates the cost trend for the helicopter for two payloads over a range of total flight path distances based on the production estimates from Figure 8.

Figure 8: Projected production trend for Bell 47G3 helicopter based on the regression model in Figure 7, for 800- and 1000-pound payloads. Tons/hour = [(3600 secs/hr)/(Yest)] * [(payload in pounds)/(2000 lbs/ton)].

Figure 9: Projected cost trend for Bell 47G3 based on production estimates in Figure 8, for 800- and 1000-pound payloads. $/ton = [$/hour (direct operating cost)] * [1/(tons/hour)].
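Because the regression and the production and cost identities printed with Figures 7-9 are stated explicitly, they can be chained into a small calculator. In the sketch below, only the $300/hour direct operating cost is a placeholder value; the paper does not state an hourly rate here.

    def total_cycle_time_s(flight_path_ft):
        # Regression reported in the paper (valid roughly 1,900-10,500 ft).
        return 0.0093 * flight_path_ft + 66.9119

    def production_tons_per_hr(flight_path_ft, payload_lb):
        # Tons/hour = [(3600 secs/hr)/Yest] * [(payload in lbs)/(2000 lbs/ton)]
        return (3600.0 / total_cycle_time_s(flight_path_ft)) * (payload_lb / 2000.0)

    def cost_per_ton(flight_path_ft, payload_lb, hourly_cost):
        # $/ton = [$/hour (direct operating cost)] * [1/(tons/hour)]
        return hourly_cost / production_tons_per_hr(flight_path_ft, payload_lb)

    # Placeholder $300/hour; the payloads match the 800/1000 lb curves plotted.
    for dist in (4000, 8000):
        for payload in (800, 1000):
            tph = production_tons_per_hr(dist, payload)
            print(dist, payload, f"{tph:.1f} t/hr",
                  f"${cost_per_ton(dist, payload, 300.0):.0f}/ton")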
Other Potential Uses

Aerial operations are well suited to the use of DGPS systems for data collection and analysis of the operation. Unlike most other forestry applications where this technology is used, the aerial operation is above the forest canopy, where satellite signal reception is unimpeded and the system provides an abundant source of precise positional data throughout the entire operation. Compared to manual methods for time study data collection, this technology does not require constant visual contact with the helicopter.

The maps from the DGPS system can serve as a visual record for an application or operation. The real-time heads-up display of the operation can assist pilots with a consistent and even distribution of materials on a field unit, help plan optimum flight patterns and approaches, and help delineate boundaries and buffers. Unit maps and coordinates can be downloaded into the navigation system ahead of time, reducing the flight time used to digitize boundaries and helping to identify the field units and heliports on the ground.

The flight data, in conjunction with field tests for application efficacy, could assist in the validation of aerial productivity models previously developed for aerial application operations (Ghent 1999; Potter et al. 2002; Ray et al. 1999; Wu et al. 2002). In addition, flight data may be used to evaluate and validate production models for other types of aerial operations, such as heli-logging (Giles and Marsh 1994; Lyons et al. 1999; Sessions and Chung 1999).

Flight data analysis may also assist managers and pilots with evaluation of safe and efficient use of the helicopter for an operation. Visualizing a similar operation prior to actual field application can help clarify procedures, processes and potential hazards for a pilot and crew unfamiliar with a certain type of operation. This method may also assist the experienced manager and pilot by providing a slightly different perspective to identify approaches for a new operation to improve safety and efficiency.

Other Considerations

Although the flight data collected with the DGPS system has been determined to be precise, it is the responsibility of the user to determine whether the data and information are accurate. Additional record keeping on an operation (such as shift-level production information) can be used to cross-reference loads, times, etc., and to check for delays or other situations that may impact use of the data for analysis or interpretation. On the case study, for each mineral applied on the field units and for each cycle, an observer recorded the time the bucket was loaded, the application time, the payload and any delays.

The unit map generated by the DGPS system may also have discrepancies between what is shown on the map and the actual application. For example, the bucket may be empty, but if the pilot does not release the switch, the DGPS map will indicate that the area flown over while the switch was activated had received an application, when it had not.

Another source of discrepancy can arise from boundaries that are digitized in flight. Although the digitized coordinate locations are precise, the mapped boundary consists of a series of straight lines between digitized points. Care must be taken to not cut off corners on the unit during the digitizing phase of the mapping operation.

LIMITATIONS

Case Study Data

For the flight data analysis, additional cycle data are needed to check the reliability of the regression model beyond the limits and conditions of the data sets and case study. Data parameter limits included: regression data limits with a total flight path distance range from 1,900 feet to 10,500 feet; the second (validation) data set limits with a total flight path distance range from 3,900 feet to 14,250 feet; one type of mineral; the B47G3 helicopter with a skilled pilot; and the weather and operations conditions of the case study. An assumption for use of the regression is that all of the data set parameters are static as one variable of interest is changed over a projected range of conditions (e.g. equivalent helicopter performance while varying payload over a range of flight path distances). In a practical application, a pilot may adjust (increase) production for a reduced payload with an increase in acceleration capacity. A reduced payload decreases the performance demands on the helicopter and can improve maneuverability of the aircraft, offsetting the potential production decrease when not flying with a maximum payload.

Although the regression may be useful to help estimate production and costs for an operation, a knowledgeable person is still needed to evaluate the operation and identify operational limitations and hazards (e.g. access, heliports, power lines, pilot experience with an operation, etc.) that can influence helicopter productivity and project management.
FUTURE CHALLENGES AND RESEARCH NEEDS

The equipment used for DGPS data collection on the case study has proven to be precise and efficient for gathering production information. However, the initial capital investment in the system is considerable and requires a substantial additional commitment by the operator for personnel training. Operation managers need to recognize the potential for added benefits of using this precision forestry tool to enhance overall operations safety and improve project efficiency where possible.

With additional data collection and analysis, other regression equations and mathematical models could be developed for different pilots, helicopter types, materials, applications, flight path distances and operational conditions to estimate production and costs. Automated flight data recording systems, such as the Ag-Nav®2 DGPS system, could be used to evaluate flight characteristics and performance under varying circumstances to develop production relationships and cost estimates to assist in project planning.

REFERENCES

Ag-Nav®2 by Ag-Nav, Inc. 1999. Newmarket, Ontario, Canada.

Cornell, J. 2003. Aerial forest operations: mineral amendment project. M.For. paper, Oregon State University, Forest Engineering Department, Corvallis, Oregon. 259 p.

Filip, G., A. Kanaskie, K. Kavanagh, G. Johnson, R. Johnson, and D. Maguire. 2000. Silviculture and Swiss Needle Cast: research and recommendations. RC 30. OSU College of Forestry. Corvallis, Oregon.

Ghent, J. 1999. Development of an aerial productivity and efficiency model for large-scale aerial treatment programs. USDA Forest Service Research Proposal. R8-2000-02.

Giles, R. and F. Marsh. 1994. How far can you fly and generate positive stumpage in helicopter salvage logging? Advanced Technology in Forest Operations: Applied Ecology in Action. Oregon State University. Portland and Corvallis, Oregon. pp. 231-236.

Gourley, M. 2002. Personal communication. Forester, Starker Forests, Inc., Corvallis, Oregon.

Lyons, K., J. McNeel, J. Nelson, and R. Fight. 1999. Spatial modeling of helicopter logging in dispersed and aggregated partial cutting systems. In proceedings of the International Mountain Logging and 10th Pacific Northwest Skyline Symposium. March 28 - April 1. Eds. J. Sessions and W. Chung. Corvallis, Oregon.

O'Brien, S. and E.J. Brooks. 1996. A coarse filter method for determining the economic feasibility of helicopter yarding. Engineering Field Notes - Engineering Technical Information System. USDA Forest Service. Volume 28. 12 p.

Olsen, E. and L.D. Kellogg. 1983. Comparison of time-study techniques for evaluating logging production. Transactions of the ASAE, Vol. 26, No. 6. 1665-1668, 1672.

Potter, W.D., Ramyaa, J. Li, J. Ghent, D. Twardus, and H. Thistle. 2002. STP: an aerial spray treatment planning system. In proceedings IEEE SoutheastCon, 2002. pp. 300-305.

Ramsey, F.L. and D.W. Schafer. 1997. The statistical sleuth, a course in methods of data analysis. Duxbury Press. Wadsworth Publishing Company. Belmont, California.

Ray, J.W., B. Richardson, W.C. Schou, M.E. Teske, A.L. Vanner, and G.C. Coker. 1999. Validation of SpraySafe Manager, an aerial herbicide application decision support system. Canadian Journal of Forest Research, 29: 875-882.

Reynolds, R.D. 1999. Three GPS-based aerial navigation systems for forestry applications. Forest Engineering Research Institute of Canada. Field Note No.: Silviculture-118. October. Vancouver, British Columbia.

Sessions, J. and W. Chung. 1999. Optimizing helicopter landing location - a preliminary model. In proceedings of the International Mountain Logging and 10th Pacific Northwest Skyline Symposium. March 28 - April 1. Eds. J. Sessions and W. Chung. Corvallis, Oregon. pp. 337-340.

Stevens, P.M. and E.H. Clarke. 1974. Helicopters for logging, characteristics, operation, and safety considerations. USDA Forest Service General Technical Report PNW-20. Pacific Northwest Forest and Range Experiment Station. Portland, Oregon.

Wu, L., W.D. Potter, K. Rasheed, J. Ghent, D. Twardus, H. Thistle, and M. Teske. 2002. Improving the genetic algorithm performance in aerial spray deposition management. In proceedings IEEE SoutheastCon, 2002. pp. 306-311.
Estimating Forest Structure Parameters on Fort Lewis
Military Reservation using Airborne Laser Scanner
(LIDAR) Data
HANS-ERIK ANDERSEN, JEFFREY R. FOSTER, AND STEPHEN E. REUTEBUCH
Abstract: Three-dimensional (3-D) forest structure information is critical to support a variety of ecosystem management
objectives on the Fort Lewis Military Reservation, including habitat assessment, ecological restoration, fire management, and
commercial timber harvest. In particular, the Forestry Program at Fort Lewis requires measurements of shrub, understory, and
overstory canopy cover to monitor vegetation response to various management approaches. At present, these measurements
are acquired through field-based procedures, which are relatively costly and time-consuming. The use of remotely sensed data,
such as airborne laser scanning (LIDAR), has the potential to significantly reduce the cost of acquiring these types of measurements over large areas. As an active remote sensing technology, LIDAR provides direct, three-dimensional measurements of
the forest canopy structure and underlying terrain surface. LIDAR-based cover measurements can be related to forest vegetation cover through a mathematical function based upon the Beer-Lambert law, which accounts for scanning geometry and
vertical foliage density. This study was carried out to determine the utility of small-footprint, discrete-return LIDAR for
estimation of forest canopy cover at Fort Lewis. LIDAR-based structural measures were compared to spatially-explicit field
measurements acquired from inventory plots in five forest stands representative of the various forest types at Fort Lewis and a
variety of terrain. Results indicate that LIDAR-based cover estimates for overstory and understory are generally related to
field-based estimates.
INTRODUCTION
Forests are structured as complex systems in three-dimensional (3-D) space. The 3-D structural organization of forest canopies is the primary determinant of the understory light regime, micro-climate, and habitat structure. In the Pacific Northwest, the vertical distribution of canopy elements is one of the more important components describing the spatial structure of a forest stand. For example, the Forestry Program at Fort Lewis Military Reservation requires this structural information to guide an active silvicultural program designed to promote the development of forests with more diverse structures and composition, and to provide habitat for the northern spotted owl (Strix occidentalis caurina). Fort Lewis has implemented an inventory program to document and monitor the spatial structure of the installation's forests. Three-dimensional forest structure is quantified by measuring vegetation cover of the overstory, understory, shrub, and ground layers.

Vegetation cover, expressed as the proportion of the forest floor covered by the vertical projection of vegetation within a layer of the forest canopy, is a conventional measure of forest structure (Jennings et al., 1999). As the inventory program currently relies upon ocular, field-based measurements of vegetation cover, inventory costs could be significantly reduced through the use of remote sensing technology.

In particular, actively-sensed airborne laser scanning (LIDAR) technology has the potential to provide information relating to spatial structure throughout the depth of the forest canopy and understory. Previous studies have shown that large-footprint, continuous-waveform LIDAR data can be used to characterize the vertical distribution of canopy foliage (Harding et al., 2001; Lefsky et al., 1999; Means et al., 1999). Researchers have related the vertical distribution of small-footprint, first- and multiple-return LIDAR data to empirical- and model-based estimates of leaf area distribution within Pacific Northwest forests (Magnussen and Boudewyn, 1998; Andersen, 2003). Other studies have shown that quantitative measures derived from the vertical distribution of small-footprint, discrete-return LIDAR data are related to important stand parameters, such as volume, height, and biomass (Means et al., 2000).

While LIDAR-derived measures of canopy cover have been used as independent variables in estimation of forest stand parameters (Means et al., 2000), the utility of LIDAR for differential characterization of canopy and subcanopy forest structure components has not been assessed. In this paper, a methodology for measurement of vegetation cover within discrete canopy layers using first-return LIDAR data will be presented and evaluated.
STUDY AREA AND DATA
Study Sites Within Fort Lewis, Washington
Five stands considered to be representative of the variety of forest types present at Fort Lewis were selected as study areas for the project. The first two stands were located in the southwestern portion of Fort Lewis on an old recessional moraine of the Vashon Glaciation, with hummocky topography and local variation in vertical relief of ca. 10 m. Area 1 was a 65-year-old mixed red alder/Douglas-fir stand, and Area 2 was a 75-year-old Douglas-fir stand. The other three stands were located on flat glacial outwash. Area 3, ca. 3 km southeast of Areas 1 and 2, was an 85-year-old mixed white oak/Douglas-fir stand. Areas 4 and 6 were 95-year-old Douglas-fir stands in the northeastern portion of Fort Lewis. (Area 5 was in a prairie and therefore was not used in this study.)

Approximately 35 plots were located within each stand (169 total) to validate the remote sensing estimates (Figure 1). These plots were established in a systematic pattern of clusters to ensure a well-distributed sample within each stand type. Plot coordinates were established by a highly accurate total station topographic survey. An additional 300 topographic check points were established in these stands to assess the accuracy of the LIDAR digital terrain models.
Field Cover Data
To compare LIDAR-based measures of vegetation cover to conventional inventory metrics, field-based observations of vegetation cover were acquired at each of the 169 plots located in the five stands. Following the established field protocol of the Fort Lewis inventory program, ocular estimates of vegetation cover within the overstory, understory (1.8 m to base of overstory), and shrub (0.46 m to 1.8 m) layers were made at each plot (Figure 3). Overstory and understory cover were estimated for an 809 m2 circular plot, and shrub and ground cover for an 81 m2 circular plot. It should be noted that while the upper and lower boundaries of the shrub and ground layers are at fixed heights in the inventory protocol, the height of the base of the overstory layer is a local characteristic of forest structure and was subjectively estimated at each plot. Cover was defined as the proportion of the total area "filled" by the two-dimensional (2-D) vertical projection of tree crowns and shrubs onto the ground.
LIDAR Data
LIDAR data were acquired over two 50-km2 areas on Fort Lewis in August 2000 (Figure 1). These data were acquired with an Earthdata Aeroscan laser scanning system operating from a fixed-wing platform. System specifications and flight parameters are shown in Table 1.
Table 1. System specifications for Earthdata Aeroscan LIDAR system.

Pulse rate:                     15,000 per second
Ground spacing between pulses:  1 meter (nominal)
Laser wavelength:               1.064 µm
Scan pattern:                   Sinusoidal
Pulse length:                   12 ns
Attitude precision:             0.004 degrees
Range resolution:               3 cm
Range accuracy:                 2-4 cm
Figure 3. Canopy layers used for cover estimation.

LIDAR-Based Digital Terrain Models

A filtering technique coded in IDL (Interactive Data Language version 5.5, Research Systems, Inc.) was used to identify the probable ground reflections within the last-return LIDAR data (Haugerud and Harding, 2001). An interpolation algorithm was used to generate a digital terrain model (DTM) on a grid, with a post spacing of 5×5 m, for the two areas covered by the LIDAR dataset. Figures 2a and 2b show hill-shade graphics of the DTMs.

A comparison of LIDAR-estimated elevations from the DTMs to the elevations of 225 topographic survey points indicated a mean absolute error of -0.14 meters and a root mean square error (RMSE) of 1.00 meters for the southwestern Fort Lewis LIDAR DTM. A similar comparison for the northwestern Fort Lewis LIDAR DTM (244 topographic survey points) indicated a mean absolute error of -0.05 meters and an RMSE of 0.72 meters.
Figure 1. LIDAR coverage and study areas (Areas 1, 2, 3, 4, and 6) within Fort Lewis Military Reservation, Washington, with plot locations marked.

Figure 2. LIDAR-based DTMs (5-m resolution) within Fort Lewis Military Reservation: (a) southwest area; (b) northeast area.
METHODS
To convert LIDAR coordinate data into vegetation height
data, the elevation of the underlying terrain (interpolated
from the LIDAR DTM) was calculated for each LIDAR return; then, this terrain elevation was subtracted from the
LIDAR return elevation to yield vegetation height.
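A minimal sketch of that normalization step, assuming a regular 5-m DTM grid with a known origin (the row/column and axis conventions here are illustrative assumptions):

    import numpy as np

    def vegetation_heights(returns_xyz, dtm, origin_xy, cell_m=5.0):
        """Convert LIDAR return elevations to heights above ground.

        For each return, the terrain elevation is bilinearly interpolated
        from the gridded DTM and subtracted from the return elevation,
        as described in the text. Returns near the grid edge are assumed
        to have four surrounding DTM posts.
        """
        heights = []
        for x, y, z in returns_xyz:
            col = (x - origin_xy[0]) / cell_m
            row = (y - origin_xy[1]) / cell_m
            c0, r0 = int(col), int(row)
            fc, fr = col - c0, row - r0
            # Bilinear interpolation over the four surrounding DTM posts.
            ground = (dtm[r0, c0] * (1 - fc) * (1 - fr)
                      + dtm[r0, c0 + 1] * fc * (1 - fr)
                      + dtm[r0 + 1, c0] * (1 - fc) * fr
                      + dtm[r0 + 1, c0 + 1] * fc * fr)
            heights.append(z - ground)
        return np.array(heights)

    dtm = np.array([[100.0, 101.0], [100.5, 101.5]])
    print(vegetation_heights([(2.5, 2.5, 130.0)], dtm, origin_xy=(0.0, 0.0)))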
Vertical Point Quadrat Sampling
Ground-based measures of canopy cover are typically
based upon a sample of measurements acquired with a vertical sighting instrument. The percent cover is computed as
the proportion of sample points where the sky is obscured
by vegetation (Jennings et al., 1999). As Jennings et al. noted, a limitation of this approach is that it is highly susceptible to sampling error.
When the vertical heights from the ground to first contact with vegetation within each canopy layer are measured
above each sample point, the canopy cover for a given canopy
layer can be estimated as the ratio of the number of measured heights within a layer to the total number of sample
points. This approach has been used to estimate vertical foliage distributions and is termed vertical point quadrat sampling (Ford and Newbould, 1971). When an optical range
finding device, such as a laser, is used to measure the height
to first leaf contact, this sampling technique is termed optical point quadrat sampling (MacArthur and Horn, 1969;
Aber, 1979; Radtke and Bolstad, 2001).
Figure 4. LIDAR-based cover estimation for overstory,
understory, and shrub layers. Vertical lines represent
LIDAR pulses.
LIDAR-Based Cover Estimation
If the geometry of laser range-finding is inverted, an estimate of cover within a given layer of the canopy can be generated from LIDAR data: cover within each layer is calculated as the ratio of the number of first-return LIDAR reflections within the layer to the total number of LIDAR pulses entering the layer (Figure 4).
A LIDAR-based cover estimate, based on first return data, was generated for the overstory, understory, and shrub layers for each of the 169 plots. Although field cover estimates for the shrub layer were based upon an 81 m² plot, all LIDAR estimates were based upon an 809 m² plot to maintain an adequate sample area. Estimates based upon the larger area will not bias the results.
A single value for height was used to characterize the
base of the overstory within each stand. A K-means clustering algorithm was applied to the LIDAR data within each
forest stand to estimate the height that separated the understory and overstory layers (Mardia et al., 1995). This height
was 10.0 meters for Area 1, 21.4 meters for Area 2, 7.5
meters for Area 3, 23.7 meters for Area 4, and 21.3 meters
for Area 6.
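A rough sketch of this clustering step is shown below, using scikit-learn's KMeans on the height values. The rule for converting the two clusters into a single boundary (midpoint of the gap between clusters) and the minimum-height filter are assumptions, as the paper states only that a K-means algorithm was applied:

```python
import numpy as np
from sklearn.cluster import KMeans

def overstory_base_height(veg_heights, min_height=2.0):
    """Estimate a single stand-level height separating understory from
    overstory by clustering vegetation heights into two groups; the
    midpoint rule and minimum-height filter are assumptions."""
    h = np.asarray(veg_heights, dtype=float)
    h = h[h > min_height]                # ignore ground/near-ground returns
    km = KMeans(n_clusters=2, n_init=10).fit(h.reshape(-1, 1))
    lo, hi = np.argsort(km.cluster_centers_.ravel())
    # Boundary: midpoint of the gap between the two height clusters
    return 0.5 * (h[km.labels_ == lo].max() + h[km.labels_ == hi].min())
```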
Figure 5 is a 3-D representation of a plot within Area 2 (the 75-year-old Douglas-fir stand) created using the Stand Visualization System (McGaughey, 1997). Figure 6 shows a 3-D perspective view of the distribution of first return LIDAR data used to generate LIDAR-based cover estimates.

Figure 5. Visualization of an 809 m² plot within a Douglas-fir stand.

Figure 6. Distribution of first return LIDAR data within an 809 m² plot within a Douglas-fir stand.
If all LIDAR measurements were acquired at nadir, there would be a linear relationship between the LIDAR-based cover measurement, generated using the theory developed in the previous section, and the field-based cover estimate for each layer. In practice, due to scanning geometry, LIDAR measurements are acquired at some off-nadir angle (-25 to +25 degrees in the Aeroscan system used in this study). This off-nadir angle affects the probability of a laser pulse passing through a given layer of vegetation and influences the functional form of the relationship between a LIDAR-based measurement of cover and the cover estimate observed in the field.
Simulation of LIDAR Data and Cover Estimates

A simulation approach was used to investigate the effect of scanning geometry and foliage density on the relationship between field-based and LIDAR-based cover estimates. In this simulation, a 3-D array was used to represent the 3-D "envelope" containing the canopy vegetation within a 100×100×60-meter area of forest. The cover within each layer of the canopy (i.e., overstory, understory (to 25 meters), and shrub) was held fixed by randomly assigning a value of either 0 or 1 to each layer until the two-dimensional (2-D) projection of the "filled" area for each layer equaled the specified cover percentage. A specified density in the vertical dimension was also randomly allocated throughout the layer, while keeping the 2-D projection of cover fixed for each layer.

The paths of LIDAR pulses traveling at some specified off-nadir angle through the 3-D array were calculated, and the coordinates (x, y, height) of the "first vegetation contact" (i.e., the first encounter with a cell coded "1") were recorded as simulated first return LIDAR measurements. A simulated LIDAR-based cover estimate was then generated using the approach described in the previous section. These simulated LIDAR estimates were then compared to the specified, fixed ("simulated field") cover values used in filling the 3-D array.
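A simplified version of this simulation can be written in a few lines. The sketch below uses a boolean voxel array, one pulse per column, and a crude ray march; the layer boundaries, cover fractions, single-cell-per-column filling, and edge-wrapping behavior are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-m voxels spanning a 100 x 100 x 60 m block of forest (x, y, height);
# True marks a cell "filled" with vegetation.
canopy = np.zeros((100, 100, 60), dtype=bool)

def fill_layer(z_lo, z_hi, cover):
    """Fill randomly chosen columns of the layer until the 2-D projection
    of the filled area equals the specified cover fraction; the filled
    cell is placed at a random height within the layer (a simplified
    stand-in for the paper's vertical-density allocation)."""
    cols = rng.choice(100 * 100, size=int(round(cover * 100 * 100)),
                      replace=False)
    for c in cols:
        i, j = divmod(c, 100)
        canopy[i, j, rng.integers(z_lo, z_hi)] = True

fill_layer(25, 60, 0.70)  # overstory (base assumed at 25 m)
fill_layer(2, 25, 0.40)   # understory
fill_layer(0, 2, 0.30)    # shrub

def first_contact(i, j, angle_deg):
    """March one pulse downward through the array at the given off-nadir
    angle and return the height of its first vegetation contact."""
    t = np.tan(np.radians(angle_deg))
    for z in range(59, -1, -1):
        x = int(i + t * (59 - z)) % 100   # wrap at the edges for simplicity
        if canopy[x, j, z]:
            return z
    return None

# Simulated first returns for one pulse per column, random off-nadir angle
hits = [first_contact(i, j, rng.uniform(-25, 25))
        for i in range(100) for j in range(100)]
heights = np.array([h for h in hits if h is not None])

# Simulated LIDAR overstory cover: first returns in the layer divided by
# pulses entering it (here, every pulse reaches the top of the overstory)
print((heights >= 25).sum() / len(hits))
```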
Figures 7 and 8 show the influence of the off-nadir angle on simulated LIDAR-based cover estimates. Figure 7 shows the cover estimates with the off-nadir angle fixed at zero. Perhaps not surprisingly, there is a linear relationship between LIDAR- and field-based cover estimates. Figure 8 shows the relationship between LIDAR-based and field-based cover estimates when the off-nadir angle for each pulse is a random draw between -25 and +25 degrees and the foliage density within each layer is randomly chosen. Clearly the relationship is no longer linear, and it appears to follow a relatively smooth curve. While this effect is apparent in the overstory and understory layers (Figures 8a and 8b), it is not evident in the shrub layer (Figure 8c), where the relationship exhibits a more linear form.
Figure 7. Simulated cover estimates with an off-nadir angle of 0 degrees: (a) overstory, (b) understory, (c) shrub. Each panel plots field-based cover against LIDAR-based cover.
Figure 8. Simulated cover estimates with off-nadir angles ranging between -25 and +25 degrees: (a) overstory, (b) understory, (c) shrub.
The form of the mathematical function describing the relationship between LIDAR-based cover and field-based cover can be obtained from the principles of radiative transfer theory (Martens et al., 1993). Martens and others showed that the relationship between leaf area index and gap fraction, or the ratio of the amount of light beneath a canopy layer to the amount of light above a canopy layer, is given by the Beer-Lambert law, with the following form:

    LAI = (-1/k) ln(gap fraction),

where k is the extinction coefficient governing the attenuation of light as it passes through the canopy. In the context of cover estimation, the relationship between the field-based cover estimate (i.e., the 2-D projection of vegetation onto the terrain surface) and LIDAR-based cover can be estimated by the following function:

    Field cover = a ln[1/(1 − LIDAR cover)],

where a is a parameter related to the extinction coefficient in the Beer-Lambert law shown above, and cover values are expressed as fractions.

The density of vegetation in the vertical dimension will also influence this relationship between field- and LIDAR-based measures of cover. The parameter a in the above function will represent the combined effect of off-nadir angle and vertical foliage density. Figures 9 and 10 show the influence of varying the vertical density of foliage. Figure 9 shows the relationship between simulated field- and LIDAR-based estimates of cover with a relatively high density of foliage in the vertical dimension, while Figure 10 shows this relationship for a low density of foliage, with fitted curves superimposed.

These graphics indicate that both off-nadir angle and vertical foliage density will influence the relationship between field- and LIDAR-based estimates of cover. It also appears that the mathematical functional form can be adequately represented by the logarithmic model based upon the Beer-Lambert law given above, where the parameter of the function represents the effects of scan angle and foliage density.
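Because the model above is linear in the transformed predictor ln[1/(1 − LIDAR cover)], the single parameter a can be estimated by no-intercept least squares. A minimal sketch with hypothetical data arrays:

```python
import numpy as np

def fit_beer_lambert(lidar_cover, field_cover):
    """Fit field_cover = a * ln(1 / (1 - lidar_cover)) by least squares.
    The model is linear in the transformed predictor, so the single
    parameter a has a closed-form no-intercept solution; a absorbs the
    combined effect of scan angle and vertical foliage density."""
    x = np.log(1.0 / (1.0 - np.asarray(lidar_cover, dtype=float)))
    y = np.asarray(field_cover, dtype=float)
    a = (x * y).sum() / (x * x).sum()
    r2 = 1.0 - ((y - a * x) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return a, r2
```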
RESULTS
LIDAR-based cover estimates obtained using first return LIDAR data were compared to the field-based cover estimates for the 169 plots on Fort Lewis. The relationship between LIDAR- and field-based estimates for each forest type was quantified using regression analysis, with the results shown in Figures 11-14. The coefficient of determination (r²) of each regression model is shown as well.

The results indicate that LIDAR-based cover estimates for overstory and understory are generally related to field-based estimates. There does not appear to be a significant relationship between LIDAR- and field-based estimates of shrub cover for any forest type.
Figure 9. Simulated cover estimates with a high density of foliage in the vertical dimension: (a) overstory, (b) understory, (c) shrub.
Figure 10. Simulated cover estimates with a low density of foliage in the vertical dimension: (a) overstory, (b) understory, (c) shrub.
Figure 11. Field-measured vs. predicted cover within 95-year-old Douglas-fir stands (Areas 4 and 6): (a) overstory (r² = 0.63), (b) understory (r² = 0.53), (c) shrub (r² = 0.15). Dashed line shows the 1:1 relationship.
Figure 12. Field-measured vs. predicted cover within the 65-year-old mixed red alder/Douglas-fir stand (Area 1): (a) overstory (r² = 0.63), (b) understory (r² = 0.53), (c) shrub (r² = 0.15).
DISCUSSION
The graphical and quantitative results indicate that LIDAR has the potential to provide information relating to vegetation cover in multiple canopy layers within forest stands. The weaker relationship between field- and LIDAR-based cover estimates for the shrub layer is most likely due to sampling error. Occlusion of subcanopy vegetation by overstory and understory foliage will reduce the number of first returns penetrating to the shrub layer, effectively decreasing the sample size and increasing the error of cover estimation. Results here indicate that sampling error is the primary source of variation in the estimation of shrub cover across all forest types.

The results obtained from simulations and field data suggest that the interaction of off-nadir LIDAR scanning geometry and the vertical distribution of canopy foliage introduces a significant source of variability in LIDAR-based cover estimation. The geometry of LIDAR sensing leads to measurements of forest structure that are more representative of 3-D canopy density than 2-D (i.e., orthogonal) canopy cover. However, if vertical structure is relatively constant over a forest stand, then vertical density can be modeled, allowing for a more accurate mapping of the LIDAR-based cover estimate to the 2-D vegetation cover.

It should be noted that using a single height for separating overstory from understory layers adds a significant source of variability in LIDAR-based cover estimation. Again, error will be decreased when the stand is more structurally homogeneous. In stands exhibiting extremely complex vertical structure (e.g., the mixed red alder/Douglas-fir stand in Area 1), this classification error may lead to gross errors in cover estimation.

Relationships between LIDAR- and field-based estimates are strongest in the mature Douglas-fir stands on flat terrain (Figure 11) and the mixed white oak/Douglas-fir prairie stand (Figure 14). It should be noted that the relationship for the overstory and understory layers in Area 3 did not exhibit a curvilinear form, so results for this area were based upon untransformed LIDAR cover values (Figures 14a and 14b). Relationships are still apparent, although less strong, within the Douglas-fir stand with hummocky topography (Figure 13). Relationships within the mixed red alder/Douglas-fir stand are extremely weak (Figure 12).
Figure 13. Field-measured vs. predicted cover within the 75-year-old Douglas-fir stand in an area with varied, hummocky topography (Area 2): (a) overstory (r² = 0.42), (b) understory (r² = 0.38), (c) shrub (r² = 0.22).
Figure 14. Field-measured vs. predicted cover within the 85-year-old mixed white oak/Douglas-fir stand (Area 3): (a) overstory (r² = 0.79), (b) understory (r² = 0.38), (c) shrub (r² = 0.0005).
CONCLUSIONS
It should also be noted that there is no assumption in
this study that the field-based, ocular estimate of vegetation
cover represents a “true” measurement. Although attempts
were made to “calibrate” the estimates through comparisons to the estimates of other observers, these measurements
are inherently subjective and susceptible to bias. In the management context of Fort Lewis, however, it has been determined that ocular estimation remains the most economically viable approach to estimating the spatial characteristics of vegetation cover efficiently and quickly over the extent of the installation.
The results of this study are, therefore, intended to show
the correspondence between field-based estimates, acquired
using established inventory protocol at Fort Lewis, and LIDAR-based estimates, and do not represent a true assessment of the accuracy of LIDAR-based cover estimates. Even
though LIDAR-based cover estimation is subject to both
systematic and random errors, due to the effects of sensing
geometry, occlusion, and sampling rate discussed above, it
provides for objective, spatially-explicit mapping of forest
vegetation cover over extensive areas.
LIDAR has the potential to be an extremely useful source
of data for mapping of forest structure characteristics, including canopy cover within overstory and understory layers. The increased sampling error at greater depths in the
canopy, due to the occlusion effect, limits the utility of LIDAR for estimation of cover within the shrub layer. The off-nadir scanning geometry of LIDAR and the vertical foliage
distribution can have significant effects on the functional
relationship between LIDAR-based cover measurements and
field-based observations of cover based upon the 2-D projection of tree crowns. In forest areas with homogeneous structural characteristics, these factors can be modeled using a
mathematical function based upon radiative transfer theory.
The methodology presented in this paper will be further
developed and evaluated through comparison to intensive,
objective field-based canopy cover estimates acquired at the
same time as the LIDAR data.
A possible extension of this research would be the use of multiple-return or continuous-waveform (i.e., "single photon"), small-footprint LIDAR data. The use of multiple-return data with intensity information may allow for more
sophisticated and accurate modeling of foliage density and
vegetation cover.
A follow-up project will use these results to model the
spatial patchiness of canopy cover within Fort Lewis Military Reservation to support habitat monitoring and silvicultural programs.
Support for this research was provided by Fort Lewis Military Reservation, the USDA Forest Service Pacific Northwest Research Station, and the Precision Forestry Cooperative within the University of Washington College of Forest Resources.

LITERATURE CITED

Aber, J. 1979. A method for estimating foliage height profiles in broad-leaved forests. Journal of Ecology 67:35-40.

Andersen, H.-E. 2003. Estimation of critical forest structure metrics through the spatial analysis of airborne laser scanner data. Unpublished Ph.D. dissertation, University of Washington, Seattle, WA.

Ford, D. and P. Newbould. 1971. The leaf canopy of a coppiced deciduous woodland: I. Development and structure. Journal of Ecology 59:843-862.

Harding, D., M. Lefsky, G. Parker, and J. Blair. 2001. Laser altimeter canopy height profiles: methods and validation for closed-canopy, broadleaf forests. Remote Sensing of Environment 76:283-297.

Haugerud, R. and D. Harding. 2001. Some algorithms for virtual deforestation (VDF) of lidar topographic survey data. International Archives of Photogrammetry and Remote Sensing XXXIV-3/W4:211-217.

Jennings, S.B., N. Brown, and D. Sheil. 1999. Assessing forest canopies and understory illumination: canopy closure, canopy cover and other measures. Forestry 72(1):59-73.

Lefsky, M., W. Cohen, S. Acker, G. Parker, T. Spies, and D. Harding. 1999. Lidar remote sensing of the canopy structure and biophysical properties of Douglas-fir western hemlock forest. Remote Sensing of Environment 70:339-361.

MacArthur, R. and H. Horn. 1969. Foliage profile by vertical measurements. Ecology 50:802-804.

Magnussen, S. and P. Boudewyn. 1998. Derivations of stand heights from airborne laser scanner data with canopy-based quantile estimators. Canadian Journal of Forest Research 28:1016-1031.

Mardia, K., J. Kent, and J. Bibby. 1995. Multivariate Analysis. Academic Press, London.

Martens, S., S. Ustin, and R. Rousseau. 1993. Estimation of tree canopy leaf area index by gap fraction analysis. Forest Ecology and Management 61:91-108.

McGaughey, R. 1997. Visualizing forest stand dynamics using the stand visualization system. In: Proceedings of the 1997 ACSM/ASPRS Annual Convention and Exposition, vol. 4, pages 248-257, Bethesda, MD, April 1997. American Society for Photogrammetry and Remote Sensing.

Means, J., S. Acker, D. Harding, J. Blair, M. Lefsky, W. Cohen, M. Harmon, and W. McKee. 1999. Use of large-footprint scanning airborne lidar to estimate forest stand characteristics in the western Cascades of Oregon. Remote Sensing of Environment 67:298-308.

Means, J., S. Acker, B. Fitt, M. Renslow, L. Emerson, and C. Hendrix. 2000. Predicting forest stand characteristics with airborne scanning lidar. Photogrammetric Engineering and Remote Sensing 66(1):1367-1371.

Radtke, P. and P. Bolstad. 2001. Laser point-quadrat sampling for estimating foliage-height profiles in broad-leaved forests. Canadian Journal of Forest Research 31:410-418.
Developing “COM” Links for Implementing LIDAR Data in
Geographic Information System (GIS) to Support Forest
Inventory and Analysis
ARNAB BHOWMICK, PETER P. SISKA AND ROSS F. NELSON
Abstract: In the last decade, computer technology has made significant advances in data manipulation, storage, design, and analysis. At the same time, the acquisition of spatial data in the natural resources has undergone significant changes. Field sampling methods, which originally represented the only source of spatial data, have been efficiently enhanced, and in some cases even replaced, by modern remote sensing sensors. Airborne laser systems are promising tools for measuring the heights of ground objects with high precision. In addition, LIDAR measurements can be linked with field sampling and regression models to provide estimates of ecosystem parameters such as biomass, leaf area index, carbon, and other volumetric characteristics of vegetation structure. The objective of this project is to develop a component object model (COM) for the direct transfer of raw laser measurements to GIS, to perform fundamental analysis in the form of a close coupling strategy, and to compute statistics on multiple-return LIDAR data. This method allows GIS analysts and LIDAR users to effectively manipulate laser data within the GIS environment. COM-compliant software supports flexible applications using objects and components from different sources. The close coupling of GIS with geospatial and statistical tools strengthens the role of GIS as an interdisciplinary science. In this paper, the coupling strategy involves modules that simultaneously manipulate software components from the GIS application and the data analysis application. The paper explores the significance of coupling strategies for natural resource management, engineering, and forest science applications.
INTRODUCTION
Over the past two decades LIDAR (Light Detection and
Ranging) data have been extensively used in natural resources and engineering. Efforts to determine the depth of
the ocean floor with high accuracy appeared to be one of the
first applications involving laser systems (Hoge et al. 1980).
Realizing that vegetation canopy “depth,” i.e., height and
structure, might also be measured using this bathymetric
LIDAR technology, Link and Collins (1981), Arp et al.
(1982), and Hoge et al. (1982) reported some of the first
airborne laser studies of terrestrial targets in the western
hemisphere. Roughly parallel, more theoretically based, terrestrial LIDAR investigations were ongoing in the Union of Soviet Socialist Republics at the time (Solodukhin et al.
1977a, b; 1979; 1985; Stolyarov and Solodukhin 1987),
though the Cold War precluded scientific cooperation.
Over the past 20+ years, numerous researchers have demonstrated the capabilities and limitations of a wide variety of airborne laser systems for forest, rangeland, geologic, and topographic measurement and mapping. With respect to forestry, airborne LIDAR research has centered on the assessment of forest canopy heights, crown closure, and internal canopy structure, and on the use of these laser measures to estimate stems, basal area, wood volume, forest biomass, and carbon. The exhaustive and tedious efforts to measure tree heights can be readily enhanced, and even replaced, by laser systems that are capable of determining tree heights with high precision.

Airborne laser profiling and scanning systems intensively sample forestlands, and these height and density measures can, using regression techniques, be used to infer stand characteristics (Nelson et al. 1988). The heights of trees and canopy densities are the first step in estimating more complex biometric parameters from the laser data. Hyyppä et al. (2001) estimated stem volume using high pulse rate laser scanner data, a segmentation method, and regression equations with a 10.5% error. Nelson et al. (1988) developed and tested two logarithmic equations in conjunction with six laser-based canopy measurements. The results indicated that mean total tree volume can be predicted to within 2.6% of the ground value, and mean biomass to within 2%, based on 38 ground plots. The Scanning Lidar Imager of Canopies by Echo Recovery (SLICER) was also frequently used to collect data in the deciduous forests of the North American continent (Lefsky et al. 1999). The quadratic mean canopy heights explained 70% of the variation in stand basal area and 80% of the variation in aboveground biomass. The crucial segment in estimating the above-mentioned biometric parameters, whether with profiling, SLICER, or other imaging laser systems, is developing regression equations that relate parameters measured on the ground with the airborne laser measurements.
The newest results in Delaware (Nelson et al. 2003) indicate that biometric estimates of biomass and volume in four
land cover categories (deciduous, mixedwood, conifer, and
wetlands) are within 16% of U.S. Forest Service estimates
statewide.
Geographic information systems (GIS) have so far provided limited service to laser-based projects, mainly data storage and display. More sophisticated applications are associated with overlay operations. In particular, classified vegetation maps can be overlaid in a GIS with scanning laser data or transect lines (profiling lasers) in order to calculate stratified estimates of biomass and other biometric parameters. Therefore, the spatial distribution of vegetation and its covariation with laser measurements across the study area plays a significant role in the final evaluation of biometric parameters. Additional overlays with soil cover can also be useful in relating laser data to natural resource information.
In this project, a close coupling procedure was developed to support the automated transfer of raw laser measurements into GIS (the ArcMap module) for further analysis. The linkage then continued to an Excel spreadsheet for statistical analysis using a moving-window strategy, and the process ended back in the GIS environment. This task was accomplished using one of the most recent approaches in computer technology: component object modeling (COM).
The purpose of this project was to develop a module based on the component object model (COM) for the direct transfer of raw laser measurements to GIS, to perform statistical analysis outside of the GIS environment, and then to complete surface analysis using the first and last return LIDAR values. This close coupling procedure allows GIS analysts and LIDAR users to manipulate laser data more effectively, directly from the GIS environment. COM-compliant software supports flexible applications using objects and components from different sources. Currently, a number of programs have COM compatibility, and intelligent modules that perform simple or sophisticated analysis on spatial data can therefore link them. All spatial data sets were managed, viewed, queried, and manipulated in an ArcGIS environment. Component Object Modules (COM) were developed and embedded in an ArcGIS environment using Visual Basic for Applications (VBA) to facilitate the automation of input, processing, generation, management, and representation of data. The Visual Basic .NET version would also be used for web interface programming. The resultant GIS environment enables multiple users to access and manipulate digital map and tabular data layers and files. In general, the database handles three generic types of data: 1) laser and ground transect ASCII files, 2) laser coverages (i.e., GIS data layers), and 3) satellite-based land cover vector layers and geostatistical vector layers.
PROCEDURES AND RESULTS
A fully integrated computerized system provides dynamic links between spatial analysis tools such as GIS and statistical software packages. The newest developments in computer technology have introduced object-oriented database management systems and object component models. GIS packages such as ArcInfo 8.0 (and newer versions) and IDRISI have developed their own component object systems that can be linked with other COM-compliant software. This integration increases the flexibility of spatial analysis. Linking GIS with other statistical and analytical packages via component object modeling can assist in the pursuit of new and creative research ideas.
Remodeling ArcGIS Interface
The raw ASCII LIDAR data (Figure 1) are not easily transported to analytical packages for viewing, storage, and manipulation. Using COM technology, VBA code was written to simultaneously manipulate objects in ArcGIS and MS Office (Excel). These objects were embedded in the GIS environment as Visual Basic module elements such as "menu" or "toolbar" items, with buttons designed to make them operational. This technology enables the user to leverage the statistical calculations and data management attributes of MS Excel and the spatial functions of GIS without leaving the GIS interface. The VBA code works in the background to couple the GIS and statistical/database software, perform the required procedures, and then import the data back into GIS for further analysis.
As can be seen in Figure 2, this interface converts the raw data file into a *.csv data format. It also sorts the data according to laser return numbers. This file can be opened in any database or simple ASCII format for layer inputs (according to return numbers) into GIS and other tools for further geospatial analysis.
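The paper implements this conversion in VBA; purely for illustration, the same logic can be sketched in Python. The assumed raw record layout (x, y, z, return number, whitespace-delimited) is hypothetical and would need to match the actual instrument format:

```python
import csv

def convert_raw_lidar(raw_path, csv_path):
    """Convert a raw whitespace-delimited LIDAR ASCII file to *.csv,
    sorted by return number, mirroring the interface in Figure 2.
    The column layout (x, y, z, return number) is a hypothetical
    assumption; adjust to the actual record format."""
    records = []
    with open(raw_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 4:
                continue                       # skip malformed lines
            x, y, z = (float(p) for p in parts[:3])
            records.append((int(parts[3]), x, y, z))
    records.sort()                             # sort by return number
    with open(csv_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["return", "x", "y", "z"])
        w.writerows(records)
```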
Database Management Systems
The integration of GIS and spatial analysis tools is commonly known as a coupling strategy. Ungerer and Goodchild (2002) described four levels of coupling: isolated, loose, close, and integrated. The scope of this work falls under a close coupling strategy, whereby the actual task is taken from GIS and the spatial data are manipulated outside of GIS in an Excel package using the Visual Basic editor. After the computation of a desired task is completed, the results are automatically imported back via established COM links into the GIS environment for further processing and analysis using GIS tools. The computation of the dissectivity parameter outside of the GIS environment uses the "moving windows" strategy that was developed in this project. The Visual Basic module is capable of "dropping" a window of a selected size around each point sample value and computing the statistical parameter from the samples that fall within the window. Since LIDAR data are highly dense and clustered, the program computes statistics for several thousand windows.
LIDAR COVERAGE FILES
Individual pulse locations from laser instruments along the actual flight path are stored as ArcGIS point layers, along with GMT time tags. After the raw laser data are input to GIS, they are stored as strips of point layers that can be manipulated individually or merged together into one layer. This, however, requires highly efficient computing power (CRANE) due to the extremely high data density of laser measurements.
The processing of all the individual layers in GIS is done using the VBA-COM. The terrain DTM is first developed by taking the lowest return after sufficient filtering of noise (Figure 5). After the digital terrain model (DTM) was generated from the last laser return values using a TIN data structure in GIS, the canopy layers were also generated and superimposed over the DTM (Figure 6).
Figure 1. The raw multi-return laser data.
Figure 2. The new ArcGIS Interface.
Figure 4. Lidar Data Import in GIS.
Figure 5. Digital elevation models generated using
one strip of laser data (the lowest return) and TIN
method in GIS.
Figure 3. Output of Raw Data Conversion.
Spatial Analysis
The output of this program consists of the parameters and attribute values of interest that were calculated and tabulated by the program. New fields were appended to the database, which in turn automatically updates the attribute tables in the GIS layers.
The moving window technique and the computation of basic geospatial parameters and surface properties are being programmed in the VBA-COM as another object. The same input file can be used to calculate spatial parameters using 2-D and 3-D moving windows. The 2-D window deals with surface parameters like dissectivity (Siska and Hung, 2003), while the 3-D window permits computation of volume-based parameters. The 3-D window data could therefore be linked with regression equations to calculate volume, biomass, and carbon content in the forest.
The program is designed to capture surface characteristics that will yield a better understanding of the map content. As indicated earlier, COM systems play an increasingly important role in current technology. This COM module will also be linked to the GIS platform and integrated with MS Office products for computations and further analysis. The program is designed for a point data file and computes map characteristics.
2D and 3D Moving Windows
The first step in surface analysis began with developing a module in VBA-COM for reading *.csv files that represent output from the previously transformed LiDAR raw data. Surface analysis of the ground DTM or the tops of the canopy is important for assessing surface diversity and the variability of statistical parameters, including the uncertainty of estimated values. User-defined moving windows are programmed around each data point, and the previously mentioned statistical parameters are computed from the data values inside each moving window. As an example, the computation of the surface dissectivity parameter (Di) is discussed next. The formula for this surface parameter is as follows:
    Dissectivity (Di) = ((z_max − z_min) / d) × 100

where
    z_max = maximum elevation within the window,
    z_min = minimum elevation within the window, and
    d     = distance between the points.
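A minimal sketch of the moving-window computation of Di follows; the use of the window size as the distance d, the square window shape, and the brute-force neighbor search are assumptions made for illustration:

```python
import numpy as np

def dissectivity(points, window=10.0):
    """Compute the dissectivity parameter Di for each point by
    'dropping' a square window of the given size around it and taking
    the elevation range of the samples that fall inside, scaled by the
    window size (using the window size for d is an assumption).

    points : (n, 3) array of x, y, z sample coordinates
    """
    pts = np.asarray(points, dtype=float)
    half = window / 2.0
    di = np.full(len(pts), np.nan)
    for k, (x, y, _) in enumerate(pts):
        inside = ((np.abs(pts[:, 0] - x) <= half) &
                  (np.abs(pts[:, 1] - y) <= half))
        z = pts[inside, 2]
        if len(z) > 1:
            di[k] = (z.max() - z.min()) / window * 100.0
    return di
```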
The computation of the surface dissectivity parameter in moving windows is an example of building flexible links between COM-compliant software packages in order to utilize the advantages of each package involved, and of how newly created parameters can be implemented within these links. Di is a simple statistical parameter that captures the change in surface gradient. Computations of similar statistical parameters are to a large extent performed manually using individual statistical packages not linked with GIS, which is laborious and extremely time consuming. In this project, the computation of this parameter is fully embedded in GIS using COM linkages.
Figure 6. Superimposition of Ground and Top of
Canopy DTMs.
Figure 8. Ground layer import for surface analysis in
2D windows.
Figure 7. Output from the VB based surface analysis
program.
58
Figure 9. Multiple layer import for density analysis in
3D windows.
DISCUSSION
The above-described processes integrate COM-compliant software such as Microsoft Office and ArcGIS through the state-of-the-art technology known as close coupling. This ensures that the developed software program gives the user a friendly GUI, while the VBA code runs the algorithms and integrates the LiDAR datasets, statistical programs, and MS Office in the background, without leaving the interface known to the user (GIS). Multiple linear regression procedures will then be used to relate computer-simulated, top-of-canopy measurements to estimates of volume and biomass. Forest canopy simulation techniques would be used to develop the regression equations that predict volume or biomass as a function of airborne laser measurements. Independently, neural networks may also be used to predict volume and biomass as a function of canopy height or density measures.

Forest inventory, which has to be regularly monitored and repetitively measured, is a major application field for this kind of technology, as presented in this project. The moving-windows technique is extremely useful in determining the local variation of studied properties, which may differ significantly from global estimates. The particular 2-D window style that computed the Di parameter indicates the spatial variability of the forest canopy as measured by the imaging laser system. If the dissectivity in a certain window at the top of the canopy were high, it would indicate a sharp difference in tree heights in that zone. Such a finding may influence decision making in managing renewable resources. The 3-D window system that will be implemented in a follow-up project will significantly improve spatial analysis of forest inventory parameters such as biomass, tree volume, and carbon estimates, using regression parameters that can be programmed as a VBA module and linked with GIS.

CONCLUSION

The primary goal of this project was to develop a module capable of linking GIS and statistical/database platforms in a close coupling strategy. The results of this work will assist users of LIDAR data in forestry and the natural resource management sciences. The authors plan to continue developing more sophisticated COM-based links that will significantly enhance the power of spatially oriented projects. For example, further coupling of this system with the Gradient program (Meyer 2001), which computes true surface gradients from irregular data based on finite differences and the directional derivative method, will be a great asset for geospatial, engineering, and natural resource management applications. The link will combine the strength of the method developed here with true gradient approximation at any point of a surface derived from irregularly spaced data sets such as LIDAR. Applications in forest inventory and analysis also include developing regression equations for calculating 3-D density parameters. Another example is determining the height of the airborne laser and the spacing of flight lines for a profiling laser in order to develop stable, reliable, precise estimates of forest volume and biomass at the county and state levels. The authors intend to perform statistical sampling tests and to develop algorithms for optimizing the grid distance of flight paths, which would further economize the cost of repetitive flights for forest inventory assessment.
REFERENCES
Arp, H., J.C. Griesbach, and J.P. Burns. 1982. Mapping in tropical forests: a new approach using the laser APR. Photogrammetric Engineering and Remote Sensing 48(1):91-100.

ESRI. 2002. ArcGIS User's Guide. ESRI Inc., Redlands, CA.

Goodchild, M.F., R. Haining, and S. Wise. 1992. Integrating GIS and spatial data analysis: problems and possibilities. International Journal of Geographic Information Systems 6(5):407-423.

Hoge, F.E., R.N. Swift, and E.B. Frederick. 1980. Water depth measurements using an airborne pulsed neon laser system. Applied Optics 19(6):871-883.

Hoge, F.E., R.N. Swift, and J.K. Yungel. 1982. Feasibility of airborne detection of laser-induced fluorescence emissions from green terrestrial plants. Applied Optics 22(19):2991-3000.

Hyyppä, J., O. Kelle, M. Lehikoinen, and M. Inkinen. 2001. A segmentation-based method to retrieve stem volume estimates from 3-D tree height models produced by laser scanners. IEEE Transactions on Geoscience and Remote Sensing 39(5):969-975.

Lefsky, M.A., D. Harding, W.B. Cohen, G. Parker, and H.H. Shugart. 1999. Surface lidar remote sensing of basal area and biomass in deciduous forests of eastern Maryland. Remote Sensing of Environment 67:83-98.

Link, L.E., and J.G. Collins. 1981. Airborne laser systems use in terrain mapping. Proceedings of the 15th International Symposium on Remote Sensing of Environment, ERIM, Ann Arbor, MI, Vol. I:95-110.

Nelson, R.F., W. Krabill, and J. Tonelli. 1988a. Estimating forest biomass and volume using airborne laser data. Remote Sensing of Environment 24:247-287.

Nelson, R.F., R. Swift, and W. Krabill. 1988b. Using airborne lasers to estimate forest canopy and stand characteristics. Journal of Forestry 34:3-38.

Nelson, R.F., M.A. Valenti, A. Short, and C. Keller. 2003. A multiple resource inventory of Delaware using airborne laser data. BioScience, accepted for publication.

Solodukhin, V.I., A.G. Kulyasov, B.I. Utenkov, A.Ya. Zhukov, I.N. Mazhugin, V.P. Emel'yanov, and I.A. Korolev. 1977a. S"emka profilya krony dereva s pomoshch'yu lazernogo dal'nomera (Drawing the crown profile of a tree with the aid of a laser). Lesnoe Khozyaistvo No. 2:71-73.

Solodukhin, V.I., A.Ya. Zhukov, I.N. Mazhugin, T.K. Bokova, and V.M. Polezhai. 1977b. Vozmozhnosti lazernoi aeros"emki profilei lesa (Possibilities of laser aerial photography of forest profiles). Lesnoe Khozyaistvo No. 10:53-58.

Solodukhin, V.I., I.N. Mazhugin, A.Ya. Zhukov, V.I. Narkevich, Yu.V. Popov, A.G. Kulyasov, L.E. Marasin, and S.A. Sokolov. 1979. Lazernaya aeros"emka profilei lesa (Laser aerial profiling of forests). Lesnoe Khozyaistvo No. 10:43-45.

Solodukhin, V.I., A.V. Zheludov, I.N. Mazhugin, T.K. Bokova, and K.V. Shevchenko. 1985. Lesotaksacionnaya obrabotka lazernykh profilogram (The processing of laser profilograms for forest mensuration). Lesnoe Khozyaistvo No. 12:35-37.

Stolyarov, D.P., and V.I. Solodukhin. 1987. O lazernoj taksacii lesa (Laser forest survey). Lesnoi Zhurnal No. 5:8-15.

Ungerer, M.J., and M.F. Goodchild. 2002. Integrating spatial data analysis and GIS: a new implementation using the Component Object Model (COM). International Journal of Geographic Information Systems 16(1):41-53.
Large Scale Photography Meets Rigorous Statistical Design
for Monitoring Riparian Buffers and LWD
RICHARD A. GROTEFENDT AND DOUGLAS J. MARTIN
Abstract: Large scale photography (LSP) proved to be a cost-effective and accurate method for examining the effects of
buffer zones on timber stand composition and wood recruitment to streams in Southeast Alaska. Rigorous statistical design
requirements were met for the comparison of riparian stand characteristics between a large photo population of logged and
unlogged units. The creation of the photo sample population (1,700 photo pairs from 52 km of streams) from 3,700 sq km of
remote terrain was facilitated by a fixed-base camera system that was mounted underneath a helicopter. Large scale photography was the only medium that could fulfill this design because 3D vision was required to see details as fine as twigs on down
trees. Visual classification of sample units by landform, stream direction, stand type, density, and treatment provided a
stratified population from which 62 paired, unbiased samples were selected. Photo digitization on an analytical stereoplotter
facilitated accurate measurements of key stand characteristics (tree density, height, and type; down tree density, length, position relative to stream, and decay class; stream length, area, and average bankfull width; and a stem map) that were evaluated
by the analysis. Large scale photography provided a cost effective means to gather a large sample population and provided the
medium for accurate measurement of a wide variety of ecosystem metrics. This study provided the largest known database of
riparian buffer characteristics in Southeast Alaska and the photography allows for future re-measurement and monitoring.
INTRODUCTION
The Alaska Forest Resources and Practices Act requires
that 20 m wide buffer zones be retained along streams on
private timberlands with anadromous salmonids in Southeast Alaska (ADNR 1990). A key function of the buffer
zones is to supply large woody debris (LWD) that is important for the formation of fish habitat and influences other
ecological processes that support fish production (Bisson et
al. 1987). Because buffer zone rules were only implemented
in the early 1990s, their effectiveness to provide LWD to
streams has not been evaluated in Southeast Alaska and information from other regions including the Pacific Northwest is limited (Murphy 1995). The effects of wind on buffer
zone survival and LWD recruitment are a major concern in
Southeast Alaska because wind disturbance is the dominant
environmental force shaping forest composition and structure in the region (Harris and Farr 1974).
Remote sensing was determined to be the most cost-effective approach to evaluate buffer zone effectiveness. The
standard transect survey approach (Dunham and Collotzi
1975) was simply too costly given the sample intensity and
travel logistics that were needed to accurately characterize
buffer stand composition from remote areas. Transect data
also give different results from re-sampling in different locations due to the high variability of riparian buffer stands.
Imagery was the optimal remote sensing medium for this study since 3D vision is required to interpret detailed characteristics of the forest stand. Large scale photography (LSP; >1:2,500) must be collected because standard aerial photography (scales of 1:12,000 to 1:62,000) has insufficient image detail. The most common method of LSP collection is from a fixed-wing aircraft that collects sequential, overlapping images that may be viewed in 3D and measured using scale derived either from known ground coordinates or from a global positioning system (GPS) / inertial measurement unit (IMU). Interpretation of riparian buffer variables also requires the ability to see between the tree crowns to the forest floor. LSP collected by fixed-wing aircraft has large inter-photo distances that reduce forest crown penetration, and rugged topography and cloud conditions often prevent flights. Because of this, a fixed-base camera system was used to collect the riparian buffer imagery and perform a retrospective study to determine: (1) the effects of the standard buffer treatment on change in stand density; (2) the post-logging mortality rates; (3) the relationship of stand density change to location within the buffer; (4) windthrow effects compared to other stand mortality processes; (5) the importance of physical factors; and (6) the effect of windthrow on wood recruitment to streams. This approach met the statistical design requirements.
STUDY AREA
The study was conducted on Prince of Wales Island and Revillagigedo Island in southern Southeast Alaska (Figure 1) and covered approximately 3,700 km². Conifers dominate the temperate rain forest, which is predominantly composed of old-growth western hemlock Tsuga heterophylla and Sitka spruce Picea sitchensis in the uplands, with mountain hemlock Tsuga mertensiana, western red cedar Thuja plicata, and Alaska cedar Chamaecyparis nootkatensis on wetlands. Deciduous trees (red alder Alnus rubra and Sitka alder A. sinuata) are moderately abundant along streams. The buffer zones studied were on lands managed for timber harvest either by Native American corporations or by the Tongass National Forest.
Figure 1. Location of stream segments photographed and study units sampled by geographic area (Karta, Tolstoi, Klawock, Smith Cove, Sulzer, Ketchikan) in southern Southeast Alaska; map labels give (N logged, N unlogged) units.
METHODS
The population of buffer zone sample units was identified from reviews of timber harvest type maps, landowner information, and reconnaissance aerial photography. Only buffers that were 4 to 11 years old and that met Forest Resources and Practices Regulations (ADNR 1993) were included in the sample population. Data on buffer stand density from a pilot study that used LSP (Martin and Grotefendt, unpublished data) were used to determine the minimum sample size required to achieve an acceptable level of statistical power. Based on the range of density changes measured in these data, and the need to attain a statistical power of 80%, the study required a sample of 62 buffer units.

The challenge of this study was to collect detailed stand composition data (e.g., density, height, position, mortality agent, and decay class), as well as associated environmental variables (e.g., channel confinement, channel width, length, and aspect). The LSP of the riparian zone and stream channel was taken prior to leaf-out at a scale of 1:1,900 using a fixed-base camera system (Grotefendt et al. 1996) to provide the necessary resolution. Dual 70 mm Rolleiflex 6006 metric cameras with 80 mm planar lenses were mounted on a 12 m long boom and carried transversely underneath a helicopter. Each large scale photo pair covered approximately 1 ha and was taken every 30 m as the helicopter flew at 3 knots along the centerline of the channel. This provided three different stereo views of the same stream location. To assist in classification of the LSP by physical factors, and to ensure the unlogged LSP was free of logging-related disturbance, a flight at 457 m collected smaller scale
aerial photos (1:5,712) with a metric camera. A radar altimeter was used to maintain consistent altitude during all
flights and a global positioning system (GPS) recorded photo
positions. An analytical stereoplotter with a measurement
precision of 7 to 15 microns (AP190, Carto Instruments,
Inc.) was used to measure and interpret riparian stand characteristics from the LSP.
Counts of standing trees, stumps, and down trees within the buffer zone were used to calculate the proportional change in stand density (PCID). Decay class was determined from the presence/absence of conifer needles and terminal branches on all down trees. These data were used to identify recently fallen trees, which indicated post-logging mortality. The distance of each standing tree, stump, and down tree from the stream bank was measured from LSP local coordinates and was used to evaluate the effect of location (inner [0-10 m] and outer [10-20 m]) on the change in stand density within the buffer. The cause of mortality (windthrow, bank erosion, and other) was detected from the LSP to compare windthrow effects to other stand mortality processes.
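The paper does not give an explicit formula for PCID, so the following sketch shows only one plausible formulation, labeled as an assumption:

```python
def pcid(standing, stumps, down):
    """One plausible formulation of the proportional change in stand
    density (PCID): the fraction of the original stand (standing trees
    plus stumps plus down trees) that is no longer standing. The paper
    does not state the formula, so this is an illustrative assumption."""
    original = standing + stumps + down
    return (stumps + down) / original if original else 0.0

# Hypothetical buffer unit: 120 standing trees, 15 stumps, 9 down trees
print(pcid(120, 15, 9))   # -> 0.167
```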
The buffer units were classified by aspect, confinement,
and stand density to detect any effect from physical factors.
Initial confinement classification was done with the small
scale photography (1:5,712) and final unit classification was
completed with the LSP. Aspect was determined for each
photo pair from GPS position, the small scale photography,
LSP, and detailed maps produced from a geographic information system (GIS). The stand density category (low, medium, and high) was visually estimated from the LSP by a
forester with extensive timber cruising experience. The
logged and unlogged buffer units were stratified by geographic area and within those areas matched by aspect, confinement, and stand density category to reduce variability
in PCID.
The LSP was used to interpret the down trees’ positions
relative to the stream (in stream, over stream, or away) so
that the proportion of the stand that was recruited to the
stream could be computed. Tree heights were measured
from the LSP to determine whether the tree could potentially supply LWD. A tree needs to be at least 6 m tall to
produce the minimum sized log (10 cm diameter and 2 m
length) to qualify as LWD given the tree taper.
RESULTS

The stand density data (i.e., counts of standing trees, down trees, and stumps) were sufficient to detect a 5% difference in the PCID between logged and unlogged buffer zones with a statistical power of 81%. The quality of the color professional film (Kodak Portra) and the scale (1:1,900) of the LSP enabled discernment of details as fine as tiny, drooping western hemlock leaders. Stumps were often similar in color to the dried branches, tree tops, and logs remaining after timber harvest. In these cases, the image enlargement provided by the zoom optics of the analytical stereoplotter, as well as the 3D view, aided in identification. The fixed-base camera system facilitated larger stereo views of the ground and down trees through the tree crowns. Three stereo pairs provided three different views of the same stream location. When the middle pair was measured, the two neighboring stereo pairs could be examined; this reduced the number of missed down trees, standing trees, and stumps, as well as overcounting of multiple-topped trees. Even though the ground was not visible beside every tree, visible ground points could be used to develop a digital terrain model that was then used for tree height measurement.
The use of decay class to define post-logging stand mortality was sufficient to detect a difference in the PCID for
recently down trees in logged and unlogged units. The fine
green branches, twigs, and bark texture and color could be
seen on the LSP and enabled the determination of recent
versus old windthrow. Also, reddish duff indicative of rotten logs was visible. Exposure of the LSP film during overcast skies increased the contrast and improved interpretability. The down tree details were visible even in shadowed areas when the diapositives were properly illuminated.
The stream bank edge was delineated with the LSP by direct viewing or by interpolation when overhanging tree crowns obscured the bank. Bank location was necessary
for the GIS program to determine the position (i.e., distance from the stream edge) of all standing trees, down trees,
and stumps in the buffer zone. This information was used
to stratify the buffer zone into two sub-zones: 0 to 10 m and
10 to 20 m from the stream edge. Data analysis by subzones showed that the effects of windthrow on stand mortality depended on location within the buffer zone. The
PCID was not statistically significant within the inner zone
(0-10 m) but was in the outer zone (10-20 m). Although we
stratified the buffer zone into two sub-zones, the LSP data
could easily be formed into finer categories.
The cause of tree mortality was discernable for most of
the down trees. We used the presence of upturned root wads
to indicate windthrow and the tipping out of trees on the
stream bank to indicate bank erosion. Other forms of mortality were lumped together. Mortality as a result of gnawing by beaver was an example of stand mortality processes
that were visible on the LSP.
Channel confinement, aspect, and buffer density categories were reliably determined from interpretation of the LSP
by an experienced stream ecologist or forester. This classification enabled us to stratify the sample units, which facilitated a more powerful analysis. For example, we found
that all three strata influenced the PCID. We also found the
measured stand density (trees/ha) results corroborated the
density categories that were assigned during the photo stratification process.
The position of down trees relative to the streams could
be seen on the LSP. The analytical stereoplotter enabled us
to see the images in high resolution to determine if the down
trees were located in, over, or away from the stream. This information was important for evaluating the effect of windthrow on wood recruitment to the stream.
The fixed-base method facilitated the collection of 1,700 photo pairs (scale 1:1,900) from 42 stream segments (29 km of length from logged areas and 23 km from unlogged areas) on 34 different streams. The same section of stream was usually viewable on 3 separate pairs because photo pairs were taken every 30 m. Ground objects that were obscured on one pair could thus be viewed on a subsequent pair for interpretation and measurement. Over 15,000 trees, down logs, and stumps were measured and located with the analytical stereoplotter. This instrumentation and the camera system yielded horizontal errors ranging from 0.20% to 1.76% and vertical errors ranging from 1.16% to 2.61%, which is comparable to or better than field methods (Grotefendt et al. 1996). Although the riparian buffers are highly variable, the large, unbiased sample number and the size of each sample unit (0.2 ha) enabled us to detect effects that are patchy, such as windthrow.
CONCLUSION

LSP proved to be a cost-effective and accurate method for examining the effects of buffer zones on timber stand composition and wood recruitment to streams. LSP facilitated detailed measurements of stand conditions (e.g., tree heights, counts, and down tree lengths) as well as defining the environmental characteristics of buffer zones. Objects in shadowed areas could be seen and interpreted with extra illumination of the diapositives. The rigorous statistical design requirements were met by the LSP. The reliability of the inferences and conclusions was improved because all visible objects were measured rather than subsampled. A larger sample size and an increased amount of data per sample were possible with LSP at less cost than with field sampling methods. The LSP data are comparable in accuracy to field methods, except for obscured objects that are missed. The fixed-base method of LSP collection overcame the limitations of other methods by providing scale without the collection of ground control or direct georeferencing, operating in all types of rugged topography and in non-optimal weather conditions, even rain, and providing stereo vision of the forest floor through the canopy. Future additional measurement and analysis of the large population of LSP could occur without more fieldwork, given additional funding.

REFERENCES
ADNR. 1990. Alaska forest resources and practices act.
Alaska Department of Natural Resources, Division of Forestry, Juneau, AK.
ADNR. 1993. Alaska forest resources and practices regulations. Alaska Department of Natural Resources, Division
of Forestry, Juneau, AK.
Bisson, P.A., R.E. Bilby, M.D. Bryant, C.A. Dolloff, G.B. Grette, R.A. House, M.L. Murphy, K.V. Koski, and J.R. Sedell. 1987. Large woody debris in forested streams in the Pacific Northwest: past, present, and future. Pages 143-190 in E.O. Salo and T.W. Cundy, editors. Streamside management: forestry and fishery interactions. University of Washington Press, Seattle, WA.
Dunham, D. K. and Collotzi, A. 1975. The transect method of
stream habitat inventory: guidelines and applications.
Ogden, Utah. United States Forest Service, Intermountain
Region.
Grotefendt, R.A., B. Wilson, N.P. Peterson, R.L. Fairbanks,
D.J. Rugh, D.E. Withrow, S.A. Veress, and D.J. Martin.
1996. Fixed-base large scale aerial photography applied to
individual tree dimensions, forest plot volumes, riparian
buffer strips, and marine mammals. Proceedings of the
Sixth Forest Service Remote Sensing Applications Conference: Remote Sensing; People in Partnership with Technology. April 29-May 3, 1996, ASPRS, Bethesda, MD.
Harris, A. S. and W.A. Farr. 1974. The forest ecosystem of
southeast Alaska. USDA Forest Service, Gen.Tech. Rep.
PNW-25, Portland, OR.
Murphy, M.L. 1995. Forestry impacts on freshwater habitat
of anadromous salmonids in the Pacific Northwest and
Alaska—requirements for protection and restoration.
NOAA Coastal Ocean Program, Decision Analysis Series
No. 7, NOAA Coastal Ocean Office, Silver Spring, MD.
Forest Canopy Models Derived from LIDAR and INSAR
Data in a Pacific Northwest Conifer Forest
HANS-ERIK ANDERSEN, ROBERT J. MCGAUGHEY, WARD W. CARSON, STEPHEN E. REUTEBUCH,
BRYAN MERCER, AND JEREMY ALLAN
ABSTRACT: Active remote sensing technologies, including interferometric radar (INSAR) and airborne laser scanning (LIDAR), have the potential to provide accurate information relating to three-dimensional forest canopy structure over extensive areas of the landscape. In order to assess the capabilities of these alternative systems for characterizing forest canopy dimensions, canopy- and terrain-level elevation models derived from multi-frequency INSAR and high-density LIDAR data were compared to photogrammetric forest canopy measurements acquired within a Douglas-fir forest near Olympia, WA. Canopy and terrain surface elevations were measured on large scale photographs along two representative profiles within this forest area, and these elevations were compared to corresponding elevations extracted from canopy models generated from X-band INSAR and high-density LIDAR data. In addition, the elevations derived from INSAR and LIDAR canopy models were compared to photogrammetric canopy elevations acquired at distinct spot elevations throughout the study area. Results generally indicate that both technologies can provide valuable measurements of gross canopy dimensions. In general, LIDAR elevation models acquired from high-density data more accurately represent the complex morphology of the canopy surface, while INSAR models provide a generalized, less-detailed characterization of canopy structure. The biases observed in the INSAR and LIDAR canopy surface models relative to the photogrammetric measurements are likely due to the different physical processes and geometric principles underlying elevation measurement with these active sensing systems.
Enhancing Precision in Assessing Forest Acreage Changes
with Remotely Sensed Data
GUOFAN SHAO, ANDREI KIRILENKO AND BRETT MARTIN
Abstract: The acreage of forest cover constantly changes over time as a result of natural and/or human-induced processes. Remote sensing is an effective tool for detecting these changes. A commonly used remote sensing technique is post-classification change detection, in which the classification accuracy of each individual-date data set affects the accuracy of the change assessment. Various statistics are available for quantifying classification accuracy, but they were not developed for assessing the accuracy of the area of cover types. To detect forest cover change accurately, it is essential to accurately quantify the area of forest cover from the individual-date remote sensing data. In this study, we demonstrate how to increase the precision of forest change detection with a combined accuracy index, which was derived for assessing the areal accuracy of cover classes. This new approach was effective in improving the accuracy of forest change detection, whereas conventional accuracy statistics normally over-estimate the accuracy of forest change detection. We examined and explained several possible situations with actual remotely sensed data and hypothetical examples. The proposed technique has practical significance for decision making that is based on forest acreage changes.
INTRODUCTION

The study of change usually increases our understanding of the natural and human-induced processes at work in the landscape (Jensen 2000). Forest management activities generally lead to changes in forest area over time. Forest clearing for agriculture, urbanization, and other land uses reduces forest area; afforestation, on the other hand, increases it. Reliable information on forest change over time reflects the overall forest management effort and is particularly useful for understanding wildlife populations, habitat, forest biodiversity, and forest productivity (Franklin et al. 2000). If a forest is home to rare and endangered species, its areal change over time indicates how well the habitat is protected or managed. Forest change information can thus yield many types of useful data, and change assessment therefore needs to be performed in a precise and accurate manner.

Remotely sensed data are commonly used for forest cover change detection (e.g., Hayes and Sader 2001, Rogan et al. 2002, Turner et al. 2001). Both pre-classification and post-classification methods can be used to determine these changes (Franklin et al. 2000). In the latter approach, two dates of imagery are independently classified and registered. The accuracy of such procedures depends upon the accuracy of each of the independent classifications used in the analysis (Lillesand and Kiefer 1999).

Congalton and Green (1999) demonstrated a matrix technique to assess errors in changes between two time periods. However, the errors in changes among land cover types do not have statistical relations with the errors in areal changes for individual land cover types. In other words, the errors in areal changes cannot be readily corrected with conventional accuracy assessment methods.

Data processing and analysis involve errors, which propagate from one stage to the next and on to the end users. Because change detection is made by comparing data between two time periods, the errors associated with change information include all the accumulated errors from both data sets used. On one hand, classification errors from a single date cannot explain the total errors of change detection; on the other hand, errors in change detection are higher than the errors in the data from either time period. The overall effect of error propagation is that the detected changes in cover class areas may not reflect the actual changes on the ground. Correcting areal errors prior to change detection can help reduce the errors in areal changes over time. This paper demonstrates the effectiveness of areal corrections with a combined accuracy index developed by Shao et al. (2003) for accurately quantifying changes in forest acreage over time.

METHODS
We conducted the study of forest cover change in a forested landscape on the eastern Eurasian continent (128°E, 42°N) (Fig. 1). The study area was covered mainly with old-growth broadleaved-coniferous mixed forest (Barnes et al. 1993), one of the typical vegetation zones of the eastern Eurasian continent (Nakashizuka and Iida 1995). Extensive logging in this area did not start until the 1970s, when state-owned forestry enterprises were founded throughout the forested regions of China (Shao et al. 1996). Forests were largely cut with a so-called small-area clear-cutting method; the average size of a cutting area, or field, was about 15 ha (Shao and Zhao 1998). Following forest cutting, cleared fields were planted with ginseng, larch, or pine seedlings, or left for natural regeneration. It took 5-10 years of natural regeneration for secondary forests to develop into closed-canopy forest. The secondary forests were composed mainly of birch and aspen (Shao et al. 1994). It was common for the remaining forests between cutting fields to be damaged by the selective cutting of valuable trees during logging. The dimension of the study area was defined by a quarter scene of Landsat Thematic Mapper (TM) imagery. Except for cutting areas and roads, there were no other major human disturbances within the study area.

Fig. 1. The location of the study site and a display of the TM image used in the study.

TM data of path 116 and row 31, acquired on May 12, 1985 and September 4, 1997, were used (Fig. 1). The 1997 data were rectified into a 30 m resolution image in the UTM coordinate system by referring to 1:50,000 topographic maps. The 1985 data were rectified against the 1997 data, and the RMS errors were controlled within 0.5 pixels. A composite data set was made by stacking the 1985 and 1997 image data; such bi-temporal data contain richer information about forest change than single-temporal data (Wu and Shao 2003). The image data classifications were performed by seven student analysts. Each temporal data set was classified with the supervised and unsupervised algorithms available in the computer program Erdas Imagine (http://gis.leica-geosystems.com/Products/). After initial classification, spectral classes were grouped into two information classes: forest and clear-cut. The classification experiment resulted in 14 pairs of thematic maps. The acreage of the forest classes was computed for each thematic map. Changes in forest acreage between any of the 14 1985 thematic maps and any of the 14 1997 thematic maps were then computed, resulting in 196 map pairs.

Manually digitized thematic maps for two areas, sized 7 by 12 km and 4 by 8 km, respectively, were used as reference data for accuracy assessment. We assume that the manual digitization is correct, because the fragmentation of the homogeneous forestland has some regular patterns and manual digitizing is more capable of tracing the actual pattern than computer-aided classification. A total of 1,300 points (pixels) were randomly selected from the two areas and used to build an error matrix for assessing the classification accuracy of each thematic map (Congalton and Green 1999). Producer's, user's, and overall accuracy were computed from each error matrix. The relative error of area (REA) for the forest class was also computed (Shao and Wu 2003) as follows:

REA_f = (1/UA_f - 1/PA_f) × 100    (1)

where UA_f is the user's accuracy and PA_f is the producer's accuracy for the forest class.

The area in percent for the forest class from each thematic map was corrected with the following formula (Shao and Wu 2003):

A_f,c = A_f,m - (n_ff / N) × REA_f × 100    (2)

where n_ff is the number of points classified as forest by both the reference data and the classification in the error matrix, N is the total number of points (N = 1,300 in this study), A_f,m is the forest area in percent derived from a thematic map (referred to as the "original forest area" in this paper), and A_f,c is the corrected forest area in percent.

Changes in forest acreage between one of the 14 1985 thematic maps and one of the 14 1997 thematic maps were computed as follows:

Δ = (A_1997 - A_1985) / A_1985 × 100    (3)

where Δ is the forest area change in percent and A_1985 and A_1997 are the forest acreages in 1985 and 1997, respectively.

Forest area change was computed with both the original and the corrected forest areas. In each case, there are 196 combinations.
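To make the correction procedure concrete, the following is a minimal Python sketch of Eqs. 1-3. The function names and the numerical values in the example are illustrative only; they do not come from the study.

```python
# A minimal sketch of Eqs. 1-3 (names and numbers are hypothetical).

def rea_forest(ua_f, pa_f):
    """Eq. 1: relative error of area for the forest class,
    REA_f = (1/UA_f - 1/PA_f) * 100, with UA_f and PA_f in percent."""
    return (1.0 / ua_f - 1.0 / pa_f) * 100.0

def corrected_forest_area(a_f_m, n_ff, n_total, rea_f):
    """Eq. 2: A_f,c = A_f,m - (n_ff / N) * REA_f * 100, where a_f_m is the
    mapped forest area in percent and n_ff counts the error-matrix points
    labelled forest by both the reference data and the classification."""
    return a_f_m - (n_ff / n_total) * rea_f * 100.0

def area_change(a_1997, a_1985):
    """Eq. 3: forest area change in percent between the two dates."""
    return (a_1997 - a_1985) / a_1985 * 100.0

# Hypothetical example: UA_f = 85%, PA_f = 90%, 600 of 1,300 points agree
# on forest, and the uncorrected map reports 70% forest cover.
rea = rea_forest(85.0, 90.0)                       # ~0.065
a_c = corrected_forest_area(70.0, 600, 1300, rea)  # ~67.0 percent
print(rea, a_c)
```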
RESULTS

Classification accuracy for the forest class from the 1985 TM data is consistently higher than that from the 1997 TM data (Fig. 2). The former ranges between 93.4-95.9% for overall accuracy, 92.2-97.5% for user's accuracy, and 94.1-99.6% for producer's accuracy; the latter ranges between 75.8-88.2% for overall accuracy, 71.4-90.7% for user's accuracy, and 70.3-94.0% for producer's accuracy. These ranges indicate that the 1997 maps have much lower classification accuracy, but higher variation in classification accuracy, than the 1985 maps. As the changes are derived from the data of both years, the 1997 data limit the accuracy of change detection more than the 1985 data in this study.

Similar to the variations in classification accuracy, the variations in forest acreage from the 1985 data set were much smaller than those from the 1997 data set (Fig. 2). The area of forest was between 304,442 and 360,514 ha in 1985 and declined to a range of 206,519 to 289,143 ha in 1997, depending on which thematic map was used to compute forest acreage.

The forest acreage changes ranged between 5 and 43 percent when the original forest area data were used (Table 1). There were 189 (out of 196) combinations that resulted in a forest acreage change between 10 and 40%. Based on the overall accuracy of the 1997 maps, seven higher-accuracy maps were selected from the 14 maps (Table 1). When only these better maps were used, the range of forest acreage change was between 13 and 40%.

After area correction with Eq. 2, the ranges of forest areas were reduced to between 28,399 and 30,155 ha for the 1985 maps and between 20,853 and 22,462 ha for the 1997 data (Fig. 2). With the corrected forest areas, the changes in forest acreage were between 21 and 31 percent (Table 2). When only the better maps were used, the range of the changes dropped slightly, to between 22 and 29 percent (Table 2).

The major differences in forest acreage change between the original and the corrected data sets were in the range and variation (Table 3). The mean value of the change was about the same between the two data sets. Both the corrected and the original data followed normal distributions (Fig. 3).
DISCUSSION AND CONCLUSIONS

The range, or variation, of forest acreage change is an indication of the uncertainty in forest change detection. Table 3 shows that the original data had four times higher uncertainty in assessments of deforestation than the corrected data. The correction filter shown in Eq. 2 proved significantly effective in increasing the certainty of the forest area change assessment. In contrast, selectively using maps with higher overall accuracy was not as effective as the filtering process for reducing the uncertainty of the forest acreage change assessment. This is because the area of a land cover class has no close relationship with the overall accuracy (Shao and Wu 2003).

The extremely low values of forest area change are found in the lower right corner of the scatter plot of the two data sets (Fig. 4). They resulted from combinations of the 1997 forest area in map #13, which overestimated forest area, and the 1985 data from the other maps. The algorithm successfully corrected the extremely low values (the left part of the group being discussed) but over-corrected the values previously closer to the average. The extreme variability of the raw data in this group (5 to 20% deforestation) was reduced to 26.5-31%, and the estimates of forest land reduction for this group (an average of 12.4% for the raw data and 28.6% for the corrected data) came closer to the overall mean of 24.8%.

It is concluded that the suggested algorithm provides reasonably good corrections for assessing forest area change. After the correction, the range and variation were significantly reduced, the mean value remained about the same, and the data distribution stayed normal. Yet there was a notable propensity to slightly over-correct the samples that included extreme values. This was caused mainly by the sampling errors involved in building an error matrix, because the REA was derived under the assumption that the distribution of errors in the error matrix is representative of the types of misclassification made in the entire classified area.

The thematic maps used in this study, particularly the 1985 maps, have classification accuracy that is acceptable in real remote sensing applications. This does not mean that computations of forest acreage change with these maps provide reliable estimates. If the mean forest acreage change of 24.8% is used as the standard, the errors of forest acreage change derived from the original data can be between -60% ((10.0-24.8)/24.8 × 100 = -60%) and 61% ((40.0-24.8)/24.8 × 100 = 61%). It is obviously too risky to use uncorrected areas to quantify changes in area. If the 1985 maps had had as low a classification accuracy as the 1997 maps, the estimates of forest acreage change would be misleading for decision making based on forest acreage changes. The areal correction technique is especially effective in making the estimates of forest acreage change more reliable and meaningful.
Fig. 2. Classification accuracy in percent (left) and area in ha (right) of the forest class in 14 thematic maps derived from the 1985 TM data (above) and the 1997 TM data (below).
Figure 3. Distribution of the raw (a) and corrected (b) estimates of forest area change.
Table 1: Changes in forest area in percent among the 14 maps between 1985 and 1997, using original forest area values (rows: 1985 maps; columns: 1997 maps).

Map#     1     2     3     4     5     6     7     8     9    10    11    12    13    14
  1   30.5  32.4  34.8  24.0  19.0  17.6  26.9  34.7  38.1  20.5  29.8  18.1  13.4  19.1
  2   29.6  31.5  34.0  23.0  17.9  16.5  26.0  33.9  37.3  19.4  28.9  17.0  12.3  18.0
  3   26.6  28.5  31.1  19.7  14.4  12.9  22.8  31.0  34.6  15.9  25.8  13.4   8.5  14.5
  4   31.6  33.4  35.8  25.2  20.2  18.9  28.0  35.7  39.1  21.7  30.9  19.3  14.7  20.3
  5   32.2  34.0  36.3  25.8  20.9  19.5  28.6  36.3  39.6  22.3  31.5  20.0  15.4  21.0
  6   27.6  29.5  32.0  20.8  15.5  14.1  23.8  31.9  35.5  17.1  26.8  14.6   9.7  15.6
  7   23.8  25.8  28.5  16.7  11.1   9.6  19.9  28.4  32.2  12.8  23.0  10.2   5.0  11.2
  8   29.9  31.7  34.2  23.3  18.2  16.8  26.2  34.1  37.6  19.7  29.1  17.3  12.6  18.3
  9   32.8  34.5  36.9  26.4  21.6  20.3  29.3  36.8  40.1  23.0  32.1  20.7  16.2  21.7
 10   35.7  37.4  39.6  29.6  25.0  23.7  32.3  39.6  42.7  26.3  35.0  24.2  19.8  25.0
 11   30.4  32.3  34.7  23.9  18.9  17.5  26.8  34.6  38.1  20.4  29.7  18.0  13.3  18.9
 12   31.6  33.4  35.8  25.2  20.2  18.9  28.0  35.7  39.1  21.7  30.9  19.3  14.7  20.3
 13   29.6  31.5  34.0  23.0  17.9  16.6  26.0  33.9  37.4  19.5  28.9  17.1  12.3  18.0
 14   24.7  26.7  29.3  17.6  12.1  10.6  20.8  29.2  32.9  13.7  23.9  11.2   6.1  12.2

Note: Bold numbers indicate higher-accuracy maps in 1997.
Table 2: Changes in forest area in percent among the 14 maps between 1985 and 1997, using corrected forest area values (rows: 1985 maps; columns: 1997 maps).

Map#     1     2     3     4     5     6     7     8     9    10    11    12    13    14
  1   26.4  25.0  26.8  26.4  23.6  23.5  24.6  25.2  27.7  23.4  25.6  24.3  28.9  25.0
  2   26.0  24.6  26.4  26.0  23.2  23.1  24.2  24.8  27.3  23.0  25.2  23.9  28.5  24.6
  3   24.7  23.3  25.1  24.8  21.9  21.7  22.9  23.5  26.1  21.7  24.0  22.6  27.3  23.4
  4   26.6  25.2  27.0  26.7  23.9  23.7  24.9  25.5  27.9  23.7  25.9  24.6  29.1  25.3
  5   27.4  26.0  27.7  27.4  24.7  24.5  25.6  26.2  28.6  24.4  26.6  25.3  29.8  26.0
  6   24.8  23.4  25.2  24.8  22.0  21.8  23.0  23.6  26.1  21.7  24.0  22.6  27.3  23.4
  7   24.0  22.6  24.4  24.0  21.2  21.0  22.2  22.8  25.3  20.9  23.2  21.8  26.6  22.6
  8   26.2  24.8  26.6  26.2  23.5  23.3  24.4  25.0  27.5  23.2  25.4  24.1  28.7  24.9
  9   27.2  25.8  27.6  27.2  24.5  24.3  25.4  26.0  28.5  24.2  26.4  25.1  29.6  25.9
 10   28.4  27.1  28.8  28.5  25.8  25.6  26.7  27.3  29.7  25.5  27.7  26.4  30.9  27.1
 11   27.0  25.6  27.4  27.0  24.3  24.1  25.3  25.8  28.3  24.0  26.3  24.9  29.5  25.7
 12   26.6  25.2  27.0  26.7  23.9  23.7  24.9  25.5  27.9  23.7  25.9  24.6  29.1  25.3
 13   26.5  25.1  26.9  26.6  23.8  23.6  24.8  25.3  27.8  23.6  25.8  24.4  29.0  25.2
 14   24.1  22.7  24.5  24.2  21.3  21.1  22.3  22.9  25.5  21.0  23.4  21.9  26.7  22.7

Note: Bold numbers indicate higher-accuracy maps in 1997.
Table 3. A comparison of forest area change between the original data and the corrected data.

Statistic                     Original Data   Corrected Data
Minimum                                5.03            20.90
Maximum                               42.72            30.85
Mean                                  24.80            25.21
Range                                 37.69             9.95
Standard Deviation                     8.36             2.02
Variance                              69.82             4.08
Skewness                              -0.046            0.159
Standard Error of Skewness             0.174            0.174
Kurtosis                              -0.935           -0.279
Standard Error of Kurtosis             0.346            0.346
Figure 4. Forest area change, %: raw data vs. corrected data. Notice the 13 lower right corner points; all of those were generated using 1997 forest areas estimated by one of the experts.

REFERENCES

Barnes, B.V., Z. Xu and S. Zhao. 1993. Forest ecosystems in an old-growth pine-mixed hardwood forest of the Changbai Shan Preserve in northeastern China. Canadian Journal of Forest Research 22: 144-160.

Congalton, R.G. and K. Green. 1999. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices. Lewis Publishers, New York. 137 p.

Franklin, S.E., E.E. Dickson, D.R. Farr, M.J. Hansen, and L.M. Moskal. 2000. Quantification of landscape change from satellite remote sensing. Forestry Chronicle 76: 877-886.

Hayes, D.J. and S.A. Sader. 2001. Comparison of change-detection techniques for monitoring tropical forest clearing and vegetation regrowth in a time series. Photogrammetric Engineering and Remote Sensing 67: 1067-1075.

Jensen, J.R. 2000. Remote Sensing of the Environment: An Earth Resource Perspective. Prentice Hall, Upper Saddle River, NJ. 544 p.

Lillesand, T.W. and R.W. Kiefer. 1999. Remote Sensing and Image Interpretation (4th Edition). John Wiley & Sons, New York. 736 p.

Nakashizuka, T. and S. Iida. 1995. Composition, dynamics and disturbance regime of temperate deciduous forests in Monsoon Asia. Vegetatio 121: 23-30.

Rogan, J., J. Franklin and D.A. Roberts. 2002. A comparison of methods for monitoring multitemporal vegetation change using Thematic Mapper imagery. Remote Sensing of Environment 80: 143-156.

Shao, G., P. Schall, and J.F. Weishampel. 1994. Dynamic simulations of mixed broadleaved-Pinus koraiensis forests in the Changbaishan Biosphere Reserve of China. For. Ecol. Manage. 70: 169-181.

Shao, G., S. Zhao, and H.H. Shugart. 1996. Forest Dynamics Modeling: Preliminary Explanations of Optimizing Management of Korean Pine Forests. China Forestry Publishing House, Beijing (in Chinese). 159 p.

Shao, G. and G. Zhao. 1998. Protecting versus harvesting of old-growth forests on the Changbai Mountain (China and North Korea): A remote sensing application. Natural Areas Journal 18: 334-341.

Shao, G., W. Wu, G. Wu, X. Zhou, and J. Wu. 2003. An explicit index for assessing the accuracy of cover class areas. Photogrammetric Engineering & Remote Sensing 69: 907-913.

Turner, B.L., S.C. Villar, D. Foster, J. Geoghegan, E. Keys, P. Klepeis, D. Lawrence, P.M. Mendoza, S. Manson, Y. Ogneva-Himmelberger, A.B. Plotkin, D.P. Salicrup, R.R. Chowdhury, B. Savitsky, L. Schneider, B. Schmook, and C. Vance. 2001. Forest Ecology and Management 154: 353-370.
Automatic Extraction of Trees from Height Data Using Scale
Space and SNAKES
BERND-M. STRAUB
Abstract: An approach is presented for the automatic extraction of trees and the boundaries of tree crowns. It is based on a multi-scale representation of an orthoimage and a surface model in Linear Scale-Space. The segmentation of the surface model is performed using a watershed transformation. Finally, the boundary of every crown is measured with Snakes (Active Contour Models). The approach was tested with data from a laser scanner (1 m) and from image matching (0.25 m).
INTRODUCTION

In this paper we present a new approach for the automatic extraction of individual trees using a true orthoimage and a surface model as input data. The surface model is used as the main source of information for the extraction of individual trees. Additional colour information from the orthoimage is used to differentiate between vegetation and other objects in the scene. The aim of the presented approach is to detect every tree in the observed area and to measure the boundary of its crown. Originally, the method was developed for the automatic extraction of trees in settlement areas using height data from image matching. The surface model used has a ground sampling distance of 0.25 m; it was produced by the French company ISTAR using 1:5000 colour infrared aerial images. An example of such a data set, acquired over Grangemouth, Scotland in summer 2000, is depicted in Figure 1; refer to (Straub and Heipke 2001) for details.

In order to demonstrate the potential of the approach, we applied it to a test site in a forest in the Austrian Alps. The main species at this test site is spruce (94%). A surface model derived from laser scanner data was used in the investigations. The laser scanner flight with a Toposys I scanner was carried out in August 1999 in Austria, close to Hohentauern. The flying height was approximately 800 m above ground, leading to 4-5 points per square meter; refer to (Baltsavias 1999) for an overview of airborne laser scanning. The data were provided by Joanneum Research in Graz, Austria for this investigation.

In the next section of the paper a short overview is given of related work in the field of automatic extraction. In the main section of the paper the approach is described in detail; it is divided into two subsections: the first depicts the object model for trees and the second the processing strategy. In the last section some exemplary results are shown. The paper closes with a short summary and an outlook.
RELATED WORK
The first trial to utilize an aerial image for forest purposes was performed in 1897 (Hildebrandt 1987). Since that
time the scientific forest community has worked on methods for the extraction of tree parameters from aerial images.
Early work was carried out on the manual interpretation of
images for forest inventory (Schneider 1974), (Lillesand and
Kiefer 1994). Pioneering work in the field of the automated,
individual tree extraction from images emerged about one
and a half decades ago (Haenel and Eckstein 1986),
(Gougeon and Moore 1988), (Pinz 1989). Recent work in
the field was published by Pollock (1996), Brandtberg and
Walter (1998), Larsen (1999), Andersen et al. (2002),
Persson et al. (2002), Schardt et al. (2002). An excellent
state-of-the-art overview is given by Hill and Leckie (1999).
Some of the recent publications are described in detail in
the following section.
Figure 1: Orthoimage, surface model and 3D visualization of automatically extracted trees in settlement areas.

A common element of most approaches is the geometric model of a tree as proposed by Pollock (1994). In the following, this surface description is referred to as the Pollock-Model. The geometric part of the Pollock-Model can be written as:

z^n / a^n + (x^2 + y^2)^(n/2) / b^n = 1    (1)

The parameter a corresponds to the height and b to the radius of the crown; n is a shape parameter. Two examples of surfaces which can be described with Equation 1 are depicted in Figure 2. The surface of a real tree is of course very noisy in comparison to the Pollock-Model. This "noise" is not caused by the measurement of the surface; it is simply a consequence of using such a model for a complex shape like the real crown of a tree. But the main shape of the crown is well modelled with this surface description.
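For readers who want to visualize Equation 1, the crown surface can be sampled directly by solving for z, which gives z = a (1 - (r/b)^n)^(1/n) with r = sqrt(x^2 + y^2). The following short Python sketch does this; the function name is hypothetical, and the parameter values follow the deciduous example of Figure 2.

```python
# Sampling the Pollock-Model crown surface of Equation 1 on a grid.
import numpy as np

def pollock_crown(a=7.0, b=3.5, n=1.2, grid_size=64):
    x = np.linspace(-b, b, grid_size)
    xx, yy = np.meshgrid(x, x)
    r = np.hypot(xx, yy)
    z = np.zeros_like(r)
    inside = r <= b                # the crown projects to a circle of radius b
    z[inside] = a * (1.0 - (r[inside] / b) ** n) ** (1.0 / n)
    return z

crown = pollock_crown()
print(crown.max())                 # close to a at the tree top
```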
Another common element in most approaches is the application of the Linear Scale-Space in the early processing stages (refer to (Dralle and Rudemo 1996), (Brandtberg and Walter 1998), (Schardt et al. 2002), and (Persson et al. 2002)). In (Andersen et al. 2001) a Morphological Scale-Space is used for the extraction of tree positions. A basic idea of the Linear Scale-Space is to construct a multi-scale representation of an image which depends on only one parameter and has the property of causality: it must be ensured that features at a coarse scale always have a cause at a fine scale (Koenderink 1984). One can show that a multi-scale representation based on a Gaussian function used as a low pass filter fulfils this requirement. In practice, the original signal f(x) is convolved with Gaussian kernels of different scale parameter σ; the result of the convolution is the representation of the signal at scale σ. Small values of σ correspond to a fine scale level, large values to a coarse scale. An extensive investigation and mathematical reasoning, including technical instructions, can be found in (Lindeberg 1994).
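As an illustration of this construction, the following hedged Python sketch builds such a multi-scale stack for a gridded surface model with scipy; the sigma values, the 0.25 m grid spacing, the placeholder data, and all names are assumptions of this sketch, not values from the paper.

```python
# A sketch of a Linear Scale-Space stack for a gridded surface model.
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(surface, sigmas_m, cell_size=0.25):
    """Convolve the surface with Gaussian kernels of increasing sigma (in
    meters); each level suppresses structures smaller than about sigma."""
    return {s: gaussian_filter(surface, sigma=s / cell_size) for s in sigmas_m}

surface = np.random.rand(256, 256).astype(np.float32)   # placeholder data
levels = scale_space(surface, sigmas_m=[0.5, 1.0, 2.0, 4.0, 8.0])
```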
One of the crucial problems is the estimation of the scale parameter σ, i.e., the selection of the scale level for the extraction of the low-level features. In (Schardt et al. 2002) it was proposed to use a scale selection mechanism based on the maximum response after the Scale-Space transformation; refer to (Lindeberg 1998) for details. In our approach the scale selection is applied at a higher level, i.e. after the segmentation of the image, and not before, as proposed in (Schardt et al. 2002). This allows an internal evaluation of the segments on a semantic level, which is important if it is necessary to distinguish between trees and other objects.
The idea of our approach is to create a multi-scale representation of the surface model similar to (Persson et al. 2002). The selection of the scale level is of crucial importance for the extraction of trees, for two reasons: (1) The correct scale level depends mainly on the size of the objects one is looking for. In the case of trees this size can neither be assumed to be known nor is it constant for all trees in one image; the size of trees depends on age, habitat, species, and many more parameters which cannot be modelled in advance. (2) The correct scale is of crucial importance for the segmentation. The small structures of the crown are very difficult to model, and, except for these small structures, the crown has a relatively elementary shape. The image is therefore segmented over a wide range of scales, bounded by reasonable values for the minimum and maximum diameter of a tree's crown. In (Gong et al. 2002) the typical range for the diameter is proposed to be from a minimum of 2.5 m up to 15 m, covering all species of trees.
DESCRIPTION OF THE APPROACH

This section is subdivided into two parts. In the first part the model for trees, which constitutes the basis for the strategy of extraction, is described. The strategy is explained in detail in the second part.

MODEL FOR TREES

The geometric part of the model of an individual tree simplifies the crown to a 2.5D surface; see the Pollock-Model (Equation 1). The parameter n can be used to define the shape of a broad-leafed tree, with a typical range of values from 1.0 to 1.8, and also of a conifer, with a typical range for n from 1.5 to 2.5. These numerical values are based on an investigation described in (Gong et al. 2002).

Figure 2: 3D visualisation of the Pollock-Model. Left: surface model of a typical deciduous tree: a=7, b=3.5, n=1.2. Right: coniferous tree: a=20.0, b=5.0, n=1.2; different scales of the horizontal and vertical axes.

Based on the Pollock-Model, the following features for the extraction from the surface model can be derived: the projection of the model into the xy-plane is a circle with a diameter in a given range, and the 3D shape of the surface is always convex. The image processing is based on differential geometric properties. A profile along four tree tops is used to study the geometrical properties of the surface when the trees stand close together. In the left part of Figure 3, four Pollock-Trees computed with a=6 m, b=2 m, and n=2.0 (1 m is equivalent to 10 pixels and grey values, respectively) are depicted. The profile is plotted in dark grey in Figure 3.

Figure 3: Profile of the surface model of four Pollock-Trees; the location of the profile is depicted in the upper left corner.

One can see that the "valley" between the trees decreases from left to right. The absolute value of the gradient (black line in Figure 3) decreases as well. Obviously this is a consequence of the decreasing distance between the trees, and of the crown shapes.

The surface at the tree tops has a convex shape in both directions, along and across the profile. Therefore, the sum of the second partial derivatives is always negative for the whole crown (refer to the light grey line in Figure 3). At a point on the profile between two trees the second partial derivative is smaller than zero along the profile and larger than zero perpendicular to the profile. Therefore, the Laplacian of the surface model at points like this is normally higher than at points on the crown, because both second partial derivatives are smaller than zero at the tree tops. These characteristics lead to local maxima of the Laplacian between the crowns.

In the case of real data this model is only valid at a convenient scale level. A height profile from real data is used to explain the term "convenient" in this context. Two Scale-Space representations of the surface model, with σ values of 0.5 m and 8 m, are depicted in Figure 4. One can see that more and more fine structures disappear and the coarse structure is enhanced as the scale parameter σ increases.

The height profile along the tree tops is measured along the dotted line which is superimposed on the surface model in Figure 4. The left height profile, measured in the original surface model, is noisy compared to the profile of the synthetic trees. As a result of this noise the Laplacian oscillates close to zero. In the "correct" scale level for this small group of trees the assumptions regarding the Laplacian are fulfilled quite well: the Laplacian is negative for trees and positive for the valleys between them, just like the profile of the synthetic Pollock-Trees (Figure 3). The coarse structure of the crown is enhanced, and as a result the properties of the Pollock-Model are also valid for the surface model of real trees at this scale level.

Figure 4: Representation of the surface model H(x) at two different scale levels, left: σ=0.5 m, right: σ=8 m. The height profiles below are measured along the dotted lines in the images.

PROCESSING STRATEGY

In general, there are two possibilities to build a strategy for the automatic extraction of trees from the image data. The first possibility is to model the crown in detail: one could try to detect and group the fine structures in order to reconstruct the individual crowns. The second possibility is to remove the fine structures from the data with the aim of creating a surface which has the character of the Pollock-Model. Examples of both strategies can be found in the literature: Brandtberg (1999) proposed to use the typical fine structure of deciduous trees in optical images for the detection of individual trees, and in (Andersen et al. 2002) the fine structure of the crown is modelled as a stochastic process with the aim of detecting the underlying coarse structure of the crown. The other strategy, the removal of noise, was proposed by Schardt et al. (2002) and by Persson et al. (2002). The main problem of this type of approach is the determination of an optimal low pass filter for every single tree in the image. This is a kind of chicken-and-egg problem, because the optimal low pass filter depends mainly on the diameter of the individual tree one is looking for, which is not known in advance.

The basis of our approach is the Linear Scale-Space Theory. The watershed transformation is used as the segmentation technique, Fuzzy Sets for the evaluation of the segments, and Snakes for the refinement of the crown outlines. The basic ideas of the Linear Scale-Space Theory were originally proposed by Koenderink (1984) and were worked out by Lindeberg (1994). The watershed transformation for the segmentation of images was introduced by Beucher and Lantéjoul (1979); details about the watershed transformation can be found in (Soille 1999). Fuzzy Sets (Zadeh 1965) are used because they are a "very natural and intuitively plausible way to formulate and solve various problems in pattern recognition" (Bezdek 1992). Snakes were introduced by Kass et al. (1988) as a mid-level tool for the extraction of image features; they "look on nearby edges, localizing them accurately" (Kass et al. 1988).

These tools were combined into a strategy whose main steps are depicted in Figure 5. The aim is to detect individual trees first and to reconstruct the outline of each crown in a second step. As mentioned above, a multi-scale representation of the image in the Linear Scale-Space is used as the basis for the approach:

(1) Segmentation: Every scale level H(x, σ) of the surface model is subdivided into segments using the watershed transformation. The resulting segments are the Basins B_σ of the watershed transformation, where σ indicates the scale level.

(2) Computation of membership values: Membership values are assigned to every segment. They are derived partly from the segment itself (size and circularity), partly from the corresponding scale level H(x, σ) of the surface model (curvature), and partly from the image I(x, σ) (vegetation index or texture). This results in hypotheses for trees B_σ(a) with a feature vector a of four fuzzy membership values.

(3) Selection of valid hypotheses: Every tree hypothesis B_σ(a) is first evaluated based on the feature vector. In some cases this leads to valid hypotheses from different scale levels which cover each other in the xy-plane. These covering segments have to be detected, and the best one according to its membership value is selected as Tree_σ(a).

(4) The outline of the crown of every selected Tree_σ(a) is measured using Snakes.

Figure 5: Processing strategy for the extraction of trees.

Segmentation of the Surface Model

The segmentation of the surface model is the part of the approach which depends most heavily on the scale. As mentioned before, the segmentation of the surface model is performed at many scales. The segmentation procedure itself should (1) be free of parameters and (2) operate only in the image space, not in the feature space, because a feature space would have to be independent of the scale level. The watershed transformation fulfils these requirements. Additionally, it is well suited for the segmentation of height data because the key idea of the watershed transformation is the segmentation
of an image by means of a flooding simulation (Soille 1999). Basins are the domains of the image which are filled up first as the water level rises from the lowest grey value in the image; watersheds are the embankments between the basins. This segmentation technique is also used in (Schardt et al. 2002), and a quite similar technique in (Persson et al. 2002), with the aim of detecting individual trees.
If the watershed procedure is applied to extract trees from height data, the surface model has to be transformed in such a way that the trees themselves become basins. The easiest way to do this is to invert the surface model, as proposed in (Schardt et al. 2002). In forest areas there are usually narrow valleys between the individual crowns. In other areas the situation may differ, for example if trees occur in small groups (e.g. in settlement areas), or if a path or road crosses a forest area. If these valleys are wide, the outlines of the basins are quite poor approximations of the crowns. In the general case it leads to better results to use the first or second order derivatives of the surface model as the segmentation function, in closed stands as well as in open areas with individual trees. Good results have been found using the squared Laplacian as the segmentation function in our experiments.
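A hedged sketch of this segmentation step is given below, using the surface-inversion variant mentioned above; seeding the watershed from local maxima of the smoothed surface (assumed treetops) and the min_distance value are assumptions of this sketch, not necessarily the marker strategy used by the author.

```python
# Watershed segmentation of one scale level: crowns become basins of the
# inverted smoothed surface, seeded at assumed treetop maxima.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_level(surface, sigma_px):
    smooth = gaussian_filter(surface, sigma_px)
    peaks = peak_local_max(smooth, min_distance=5)      # candidate tree tops
    markers = np.zeros(surface.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-smooth, markers)                  # basins = crown segments
```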
Computation of Membership Values

Four membership functions are used to transform the values of circularity, convexity, size and vitality into membership values. The circularity of a segment is computed from its area and the maximum distance from the center of the region to its border (Figure 6, upper left). A sensible lower border is close to the value of 0.7 (the circularity of a square) and the upper border is 1 (the circularity of a circle, the largest possible value). The sign of the Laplacian of the surface model is used to discriminate between convex surfaces, such as trees, and non-convex surfaces. For example, the surfaces of buildings and most ground surfaces are planes, whereas the crown of a tree is a convex surface. Thus, a negative mean value of the Laplacian within the area covered by a segment leads to a membership value of 1; in the case of a positive mean value, the membership value is 0 (Figure 6, lower right).

The following break points are used to define the membership function for the size of a tree (Figure 6, upper right): the lower border is 20 m² for a minimum diameter of 2.5 m, and the upper border is 700 m² (for a maximum diameter of 15 m). For larger values the membership value decreases; the largest possible diameter is assumed to be 35 m (3850 m²). These typical values for diameters cover all tree species; they can be found in (Gong et al. 2002). The feature vitality is derived from an optical image and is used to discriminate between vegetation and non-vegetation areas. In the settlement example the Normalized Difference Vegetation Index (NDVI) is used for the vitality of a segment (Figure 6, lower left). A membership function with increasing membership values for positive NDVI values is used, with a break point at (0.5, 0.8). This break point was set empirically, motivated by the fact that the NDVI values measured at healthy trees are usually higher than the values for other vegetation types such as bushes or lawn.

Figure 6: Membership functions, upper left: size, upper right: circularity, lower left: convexity, lower right: vitality.
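A minimal sketch of these membership functions follows, assuming simple piecewise-linear (trapezoidal) curves through the break points quoted above; the exact curve shapes in Figure 6 may differ, and all names are illustrative.

```python
# Piecewise-linear membership functions through the quoted break points.
import numpy as np

def piecewise(x, points):
    """Linear interpolation through (value, membership) break points."""
    xs, ys = zip(*points)
    return float(np.interp(x, xs, ys))

# Size in m^2: full membership from 20 to 700, falling to 0 at 3850.
mu_size = lambda area: piecewise(area, [(0, 0), (20, 1), (700, 1), (3850, 0)])
# Circularity: rising from the lower border 0.7 to the upper border 1.0.
mu_circ = lambda c: piecewise(c, [(0.7, 0), (1.0, 1)])
# Vitality from the NDVI, with the quoted break point at (0.5, 0.8).
mu_vital = lambda ndvi: piecewise(ndvi, [(0.0, 0.0), (0.5, 0.8), (1.0, 1.0)])
# Convexity: 1 for a negative mean Laplacian over the segment, else 0.
mu_convex = lambda mean_lap: 1.0 if mean_lap < 0 else 0.0
```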
Selection of Valid Hypotheses

The classification of the segments is subdivided into two steps. First, valid segments are selected according to their membership values. A tree is an object with a defined size, circularity, convexity and vitality; consequently, the minimum value of the feature vector is the value which defines whether a hypothesis B_σ(a) is a Tree_σ(a) or not. In some cases valid hypotheses occur at a more or less identical spatial position in the scene, but at different scale levels. Some examples can be found in Figure 7: the left image shows the valid hypotheses for trees at a scale level of σ = 2 pixels (corresponding to 0.5 m), the middle at σ = 4 pixels, and the right at σ = 8 pixels, in the foreground. All Basins of the watershed transformation are depicted in the background, superimposed on the surface model at the corresponding scale level. One can see that valid tree hypotheses occur at more than one scale. In some cases the segments are quite similar in both depicted scale levels, and in other cases the segments are subdivided at the finer scale level. The trivial case, a segment in just one scale, is rather an exception.

These different situations have to be analyzed for every segment. Hence, the type of the topological relation between the segments of different scale levels has to be classified. If the type is known, the best hypothesis for a tree can be selected for a given spatial position.

The classification of the topological relations between the valid segments is performed as proposed by Winter (2000). In general, eight different topological relations exist in 2D space: disjoint, touch, overlap, equal, covers, contains, contained by, and covered by (Egenhofer and Herring 1991). These topological relations can be subdivided into two clusters C1 and C2, where the C1 cluster includes the relations disjoint and touch, and C2 includes the other types. The overlap relation lies between these two clusters; it can be divided into weak-overlap (C1) and strong-overlap (C2) (Winter 2000). The motive behind this partitioning is that the relations in C1 are similar to disjoint, and those in C2 to equal.

Figure 7: Upper row: Basins of the watershed transformation at three scale levels. Lower row: Hypotheses for trees in the corresponding scale levels. Left: σ = 2 pixels, middle: σ = 4 pixels, right: σ = 8 pixels.

We postulate that all segments B_A(a) which have a topological relation in C2 to another segment B_B(a), A ≠ B, from another scale level are potential hypotheses of the same tree in the real world. The best hypothesis, the one with the highest membership value, is selected as a Tree_σ(a) instance. Accordingly, both investigated hypotheses are assumed to be valid if the relation between the two segments is part of C1.

The final selected hypotheses are depicted in Figure 9. But even if the trees in the scene were detected correctly, the boundaries are often poor approximations of the outlines of the individual crowns. This problem leads to the last processing step, where the outlines of the crowns are refined with Snakes.
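The selection step just described can be sketched as follows. Winter's (2000) relation taxonomy is approximated here by the overlap fraction of two boolean segment masks, and the 0.5 split between weak and strong overlap is an assumption of this sketch rather than the criterion actually used in the paper.

```python
# A simplified stand-in for C1/C2 classification and hypothesis selection.
import numpy as np

def relation_cluster(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    if inter == 0:
        return "C1"                          # disjoint or touch
    frac = inter / min(mask_a.sum(), mask_b.sum())
    return "C2" if frac >= 0.5 else "C1"     # strong vs. weak overlap

def select_trees(hypotheses):
    """hypotheses: (mask, membership) pairs pooled over all scale levels.
    Greedily keep the highest-membership hypothesis of each C2 group."""
    kept = []
    for mask, mu in sorted(hypotheses, key=lambda h: -h[1]):
        if all(relation_cluster(mask, k) == "C1" for k, _ in kept):
            kept.append((mask, mu))
    return kept
```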
Measurement of the Crown's Outline

Up to now the geometry of the segments stems from different scale levels, as the Basins B_σ were extracted at different scales. But the outline of the crown is an object without a changing scale, as distinct from the crown itself. The outline of the crown is therefore measured at the fine scale with the help of Snakes. A Snake is a kind of virtual rubber cord which can be used to detect valleys in a hilly landscape with the help of gravity. This landscape may be a surface model, an image, or the edges of an image.

Snakes were originally introduced by Kass et al. (1988) as mid-level algorithms which combine geometric and/or topologic constraints with the extraction of low-level features from images. The principal idea is to define a contour with the help of mechanical properties like elasticity and rigidity, to initialize this contour close to the boundary of the object one is looking for, and finally to let the contour move in the direction of the boundary of the object. In general, there are two main drawbacks to the application of Snakes as a measurement tool. The first is that the Snake has to be initialized very close to the features one is looking for. The second is the challenging tuning of the parameters, primarily the weighting between internal and external forces and the selection of the external force field itself.

In our approach the Snake is used only for fine measurement in the last stage, when the coarse shape of the crown is more or less known; furthermore, the approximation is often too small. Based on these constraints one can build a Snake which is quite stable under these special conditions: the geometry of the Snake is initialized for every Tree_σ(a) as a circular closed polygon at the center of gravity of the appropriate Basin B_σ. The Snake could also be initialized with the outline obtained from the watershed transformation; the idea of using a circle instead is to make the Snake optimization a bit more independent of the geometry of the segment stemming from the watershed transformation. The radius of the circular closed polygon is computed as sqrt(Area(B_σ(a)) / π). This initialization stage is depicted in Figure 8 as the black circle in the right image. The parameters for the internal energies were tuned in the following way: the length of the contour is weighted low, and the curvature is weighted high. Without external forces, a Snake tuned in such a way converges to a circle with a tendency to decrease its length. As the approximation is often too small, an additional force is added which makes the Snake behave like a balloon being inflated (Cohen 1991). With this additional force the contour moves towards the outline of the crown if no external forces influence the movement. The sum of the gradients over all scale levels is used as the external force.

Figure 8: Example of the measurement of a crown outline with a Snake; five different optimization steps are depicted.

Finally, the membership values of every Tree_σ(a) have to be computed again, because the outlines have changed. The topological relations between all tree hypotheses are also no longer valid and have to be computed again. A change of topology occurs if two or more segments B_σ(a) are parts of the same crown in the real world. In these cases the Snake usually converges to the correct solution, i.e. the topological relation changes from the C1 cluster (similar to disjoint) to the C2 cluster (similar to equal). As these updated membership values are quite independent of the pre-processing at the different scale levels, they are used as an internal evaluation of the tree hypotheses.

RESULTS

The described approach was applied to a small subset of the Hohentauern dataset mentioned in the introduction of this paper. The selection of the subset was mainly motivated by the fact that ground truth is available for a part of this scene. The LIDAR first return data were transformed into a 0.25 m raster. In order to get an initial idea of the performance of this approach in forest areas, the trees in a slightly smoothed (σ = 1.0 pixel) version of the surface model were extracted manually. This is regarded as the reference data set for this evaluation (left of Figure 9). It should be noted that these manually extracted data are a kind of optimal result, representing what the approach should deliver from the developer's point of view. The relationship between the manually extracted reference trees and the trees in the real world is not discussed here.

Figure 9: Left: Manually extracted trees superimposed on the surface model. Middle: Selected tree hypotheses; different gray values correspond to different scale levels. Right: Results of the approach, final selected trees after internal evaluation.

An automatically extracted tree is counted as a True Positive (TP) if it has a topological relation from the C2 cluster with a tree in the manually extracted reference data set; otherwise it is counted as a False Positive (FP). Those trees in the reference data set with only a C1 relation to any automatically extracted tree are counted as False Negatives (FN). Based on these numbers, the Completeness and the Correctness of the extraction result can be computed:

Completeness = TP / (TP + FN)

Correctness = TP / (TP + FP)

In order to characterize the accuracy of the correctly extracted trees, the mean value and the standard deviation of the mean were computed for the distances between the centers of gravity and for the radius differences between reference trees and automatically extracted trees. The results are depicted in Table 1.

One can see that the internal evaluation, which is performed after the measurement of the crown's outline, leads to a significant degradation of the completeness. As expected, the correctness is enhanced: 97% of the extracted trees are correct. The accuracy measures are nearly equivalent. This is a little surprising, because it was expected that the outline of the crown would be delineated much more precisely by the Snake than by the watershed transformation. Similar experiences were made with other datasets.
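In code, the two quality measures defined above are one-liners; the counts in the final line are purely illustrative.

```python
# The two quality measures, as defined above.
def completeness(tp, fn):
    return tp / (tp + fn)

def correctness(tp, fp):
    return tp / (tp + fp)

print(completeness(70, 30), correctness(86, 14))   # 0.7 and 0.86
```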
Table 1: Quality measures and accuracy approximations (in pixels) for the Hohentauern dataset (1 pixel = 0.25 m). The first row gives the numbers after the step "Selection of valid hypotheses", the second row after "Measurement of the crown's outline".

Step                                  Completeness  Correctness  Position mean diff.  Position RMS  Radius mean diff.  Radius RMS
Selection of valid hypotheses                  70%          86%                  4.4           3.6               -3.0         3.6
Measurement of the crown's outline             45%          97%                  4.6           3.5               -2.9         3.6

SUMMARY AND OUTLOOK

In this paper an approach for the automatic extraction of trees is presented. The object model and the processing strategy are illustrated in detail, as well as some exemplary results. The approach is free of assumptions about the scale level, because the segmentation is performed over a wide range of scale levels. The classification of the tree hypotheses is based on only four parameters: size, circularity, convexity, and vitality. Of these four parameters only the vitality depends on the use of image data; the others are geometric object properties. It should be noted that the values for the size of the crowns stem from an independent investigation (Gong et al. 2002), and the convexity is always positive. Only the break points in the circularity membership function are empirical values.

The measurement of the crown's outline is performed with a Snake algorithm. The adjustment of the parameters for the Snake is a quite difficult task. But once adjusted, the algorithm is stable as a measurement tool for this task without changing these settings for different scenes. Unfortunately, the accuracy of the results, namely the position and the radius of the trees, did not increase. This should be investigated in detail, to determine whether this applies only to the center of gravity and the radius or to the whole outline.

The approach was tested with synthetic data (refer to Figure 3), high resolution data in settlement areas (Completeness 68%, Correctness 82%), and a small dataset of a forest (Completeness 70%, Correctness 86%). In the forest case it is necessary to evaluate the results with more reliable reference data. Furthermore, it is planned to use the information about the outline of the individual trees for a detailed classification. For example, we will investigate how the curvature is correlated with the shape parameter of the Pollock-Model, which could be used as a feature for the classification of the species. Another idea is the use of the approach in a combined strategy for the extraction of the ground surface in forest or dense settlement areas. This is possible because the approach is free of assumptions about the terrain, and the height of the trees is not used for the detection.

ACKNOWLEDGEMENT

Parts of this work were funded by the European Commission under the contract IST-1999-10510. The surface model and the true orthoimages were produced by the French company ISTAR. All aerial images, digital elevation models, and true orthoimages are copyrighted by ISTAR, Sophia Antipolis, France. Many thanks go to Alix Marc and Frank Bignone for their valuable cooperation. The Hohentauern dataset was provided by Joanneum Research in Graz, Austria. Many thanks go to Matthias Schardt and Roland Wack for their trustful and open discussions in Graz, April 2003.
REFERENCES

Andersen, H., Reutebuch, S.E., and Schreuder, G.F. 2001. Automated Individual Tree Measurement through Morphological Analysis of a LIDAR-based Canopy Surface Model. P. 11-22, in Proceedings of the First International Precision Forestry Cooperative Symposium, Seattle, USA, June 17-20.

Andersen, H., Reutebuch, S.E., and Schreuder, G.F. 2002. Bayesian Object Recognition for the Analysis of Complex Forest Scenes in Airborne Laser Scanner Data. P. 35-41, in The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXIV, 3A. ISPRS, Graz, Austria.

Baltsavias, E., 1999. Airborne laser scanning: existing systems and firms and other resources. ISPRS Journal of Photogrammetry and Remote Sensing, (1999)54:164-198.

Beucher, S., and Lantéjoul, C. 1979. Use of Watersheds in Contour Detection. In International Workshop on Image Processing, Rennes, France.

Bezdek, J.C., 1992. Computing with Uncertainty. IEEE Communications Magazine, (1992)September:24-36.

Brandtberg, T., and Walter, F., 1998. Automated delineation of individual tree crowns in high spatial resolution aerial images by multiple scale analysis. Machine Vision and Applications, (1998)11:64-73.

Brandtberg, T. 1999. Structure-based classification of tree species in high spatial resolution aerial images using a fuzzy clustering technique. P. 165-172, in The 11th Scandinavian Conference on Image Analysis, Kangerlussuaq, Greenland, June 7-11.

Cohen, L., 1991. On active contour models and balloons. CVGIP: Image Understanding, (53)2:211-218.

Dralle, K., and Rudemo, M., 1996. Stem Number Estimation by Kernel Smoothing of Aerial Photos. Canadian Journal of Forest Research, (1996)26:1228-1236.

Egenhofer, M.J., and Herring, J.R., 1991. Categorizing Binary Topological Relations Between Regions, Lines, and Points in Geographic Databases. University of Maine, National Center for Geographic Information and Analysis, Orono, 28p.

Gong, P., Sheng, Y., and Biging, G., 2002. 3D Model-Based Tree Measurement from High-Resolution Aerial Imagery. Photogrammetric Engineering and Remote Sensing, (68)11:1203-1212.

Gougeon, F., and Moore, T. 1988. Individual Tree Classification Using MEIS-II Imagery. P. 927, in IGARSS '88 Geoscience and Remote Sensing Symposium, 2. IEEE.

Haenel, S., and Eckstein, W. 1986. Ein Arbeitsplatz zur automatischen Luftbildanalyse [A workstation for automatic aerial image analysis]. P. 38-42, Springer, Berlin, Germany.

Hildebrandt, G., 1987. 100 Jahre forstliche Luftbildaufnahme - Zwei Dokumente aus den Anfängen der forstlichen Luftbildinterpretation [100 years of forest aerial photography: two documents from the beginnings of forest aerial photo interpretation]. Bildmessung und Luftbildwesen, (1987)55:221-224.

Hill, D.A., and Leckie, D.G. (eds.), 1999. International forum: automated interpretation of high spatial resolution digital imagery for forestry, February 10-12, 1998. Natural Resources Canada, Canadian Forest Service, Pacific Forestry Centre, Victoria, British Columbia, 395p.

Kass, M., Witkin, A., and Terzopoulos, D., 1988. Snakes: Active Contour Models. International Journal of Computer Vision, (1988)1:321-331.

Koenderink, J., 1984. The Structure of Images. Biological Cybernetics, (50):363-370.

Larsen, M. 1999. Individual tree top position estimation by template voting. P. 8, in 21st Canadian Symposium on Remote Sensing, Ottawa, 21-24 June.

Lillesand, T.M., and Kiefer, R.W., 1994. Remote Sensing and Image Interpretation. John Wiley and Sons, Inc., New York, 750p.

Lindeberg, T., 1994. Scale-Space Theory in Computer Vision. Kluwer Academic Publishers, Boston, USA, 423p.

Lindeberg, T., 1998. Feature Detection with Automatic Scale Selection. International Journal of Computer Vision, (30)2:79-116.

Persson, A., Holmgren, J., and Söderman, U., 2002. Detecting and Measuring Individual Trees Using an Airborne Laser Scanner. Photogrammetric Engineering and Remote Sensing, (68)9:925-932.

Pinz, A. 1989. Final Results of the Vision Expert System VES: Finding Trees in Aerial Photographs. P. 90-111, in Proceedings ÖAGM 13. Workshop of the Austrian Association for Pattern Recognition, Oldenbourg OCG Schriftenreihe.

Pollock, R.J. 1994. A model-based approach to automatically locating tree crowns in high spatial resolution images. P. 526-537, in Image and Signal Processing for Remote Sensing, 2315. SPIE.

Pollock, R.J., 1996. The Automatic Recognition of Individual Trees in Aerial Images of Forests Based on a Synthetic Tree Crown Image Model. The University of British Columbia, Vancouver, Canada, June 1996.

Schardt, M., Ziegler, M., Wimmer, A., Wack, R., and Hyyppä, R. 2002. Assessment of Forest Parameters by Means of Laser Scanning. P. 302-309, in The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXIV, 3A. ISPRS, Graz, Austria.

Schneider, S., 1974. Luftbild und Luftbildinterpretation [Aerial photography and aerial photo interpretation]. de Gruyter, Berlin, New York, 530p.

Soille, P., 1999. Morphological Image Analysis: Principles and Applications. Springer, Berlin, Heidelberg, New York, 316p.

Straub, B., and Heipke, C., 2001. Automatic Extraction of Trees for 3D City Models from Images and Height Data. P. 267-277, in Automatic Extraction of Man-Made Objects from Aerial and Space Images, Vol. 3. A.A. Balkema Publishers, Lisse/Abingdon/Exton(PA)/Tokyo.

Winter, S., 2000. Uncertain Topological Relations between Imprecise Regions. International Journal of Geographic Information Science, (14)5:411-430.

Zadeh, L., 1965. Fuzzy Sets. Information and Control, (8):338-353.
A Tree Tour with Radio Frequency Identification (RFID) and
a Personal Digital Assistant (PDA)
SEAN HOYT, DOUG ST. JOHN, DENISE WILSON AND LINDA BUSHNELL
Abstract: A popular tree tour at the University of Washington campus has been automated via RFID and a PDA. The
previous 81-tree hardcopy tour has also been updated to include more information on each tree, including digital photos. A
survey demonstrates that the updated electronic tree tour is easier to navigate, offers better visuals, and results in fewer false identifications.
Value Maximization Software – Extracting the Most from the
Forest Resource
HAMISH MARSHALL AND GRAHAM WEST
Abstract: Global competition is encouraging all forest owners to manage their forested lands in a more integrated manner and extract more value from the resource. ATLAS is a new suite of forest management software tools developed by Forest Research. The goal of the ATLAS concept is to have a suite of fully integrated software applications covering all forest management decisions from planting through to sawmilling. Three major applications have been developed so far: ATLAS Cruiser, a state-of-the-art forest inventory application; ATLAS GeoMaster, an advanced stand record system; and ATLAS Market Supply, one of the first weekly market supply planning optimization systems. In the future, further applications covering growth modeling, sawmill optimization, and strategic and tactical planning will be developed. This presentation gives a brief overview of the ATLAS system and highlights its key attributes.
Costs and Benefits of Four Procedures for Scanning on
Mechanical Processors
GLEN E. MURPHY AND HAMISH MARSHALL
Summary: Four simulated procedures for scanning and bucking Douglas-fir, ponderosa pine, and radiata pine trees were
evaluated on the basis of productivity, costs, and value recovery. The procedures evaluated were: (a) a conventional operating
procedure where quality changes and bucking decisions were input by the machine operator, (b) an automated scan of the full
stem prior to optimisation and bucking, (c) a 6 m automated scan with 6.2 m forecast ahead, and (d) a 4.7 m automated scan
with 7.5 m forecast ahead before optimal bucking took place. After subtracting costs, net value recovery for the automated
scanning methods was 4 to 9% higher than for a conventional procedure. Breakeven capital investment costs for new scanning and optimisation equipment were dependent on tree species and size, markets and scanning procedure and could range
between US$0 and US$1,400,000.
Evaluation of Small-Diameter Timber for Value-Added
Manufacturing – A Stress Wave Approach
XIPING WANG, ROBERT J. ROSS, JOHN PUNCHES, R. JAMES BARBOUR, JOHN W. FORSMAN
AND JOHN R. ERICKSON
Abstract: The objective of this research was to investigate the use of stress wave technology to evaluate the structural
quality of small-diameter timber before harvest. One hundred and ninety-two Douglas-fir and ponderosa pine trees were
sampled from four stands in southwestern Oregon and subjected to stress wave tests in the field. Twelve of the trees, six
Douglas-fir and six ponderosa pine, were harvested and sawn into logs and lumber. The mechanical properties of wood were
then assessed by both stress wave and static bending techniques in the laboratory. Results of this study indicated a significant
difference in stress wave time (SWT) between Douglas-fir and ponderosa pine trees and between two stands of each species.
SWT of Douglas-fir trees increased slightly as tree diameter at breast height (DBH) increased; whereas, SWT of ponderosa
pine trees decreased significantly as DBH increased. The statistical analysis also revealed good relationships between SWT of trees and modulus of elasticity (MOE) of logs and lumber produced from the trees when the two species were combined. However, the strength of the relationships was reduced within each species because of small sample size and narrow property range.
INTRODUCTION
This study is part of the project “Evaluation of small-diameter timbers for value-added manufacturing: An integrated approach” conducted jointly by Oregon State University, USDA Forest Service Forest Products Laboratory,
and USDA Forest Service PNW Research Station. The overall goal of the project was to design, construct, and deliver a
system by which communities and forest industries may efficiently recognize value-added wood products potential in
small diameter trees. The specific objective of this study was
to investigate the use of a stress wave nondestructive evaluation technique to assess the potential structural quality of
small-diameter timbers before timber harvest.
Throughout the United States, past management practices have created thousands of acres of forest densely stocked
with small-diameter trees. These stands are at increased
risk of insect and disease attack and have higher catastrophic
fire potential. Increased management emphasis on forest
health and bio-diversity has forced land managers to seek
economically viable stand treatments such as thinning to
improve the stand condition. Economical and value-added
uses for removed small-diameter timber can help offset forest management cost, provide economic opportunities for
many small, forest-based communities, and avoid future loss
caused by catastrophic wildfires. However, the variability
and lack of predictability of the strength and stiffness of
standing timber cause problems in engineering applications.
It is essential to develop cost-effective technologies for evaluating the potential structural quality of such materials.
The traditional log-to-product manufacturing process fails
to recognize a tree’s full value. The process occurs in a
series of mostly independent steps (trees, to logs, to lumber,
to parts), each optimized for its own outputs. The ultimate
end use is rarely a consideration during intermediate processing stages. By identifying final product potential before
timber harvest, we hope to 1) enhance resource utilization
efficiency, 2) make it economically viable for secondary wood
products manufacturers to utilize small-diameter timber, and
3) facilitate stand management activities by identifying small-diameter timber value.
MATERIAL AND METHODS
A total of one hundred and ninety-two Douglas-fir
(Pseudotsuga menziesii) and ponderosa pine (Pinus ponderosa) trees were sampled for stress wave evaluation at four
different stands in southwestern Oregon. The stands were
located in the Applegate Ranger District on the Rogue River
National Forest. Stand A (Yale Twin) was a 70-year-old even-aged stand consisting primarily of Douglas-fir with some madrone and a small complement of ponderosa pine. The stand had a mean diameter of 6.4 inches (16.3 cm) and a quadratic mean diameter of 7.4 inches (18.8 cm). Stand B (Toe Top) consisted of a sparse stand of 90-year-old trees (primarily ponderosa pine) with a 65-year-old understory of Douglas-fir, smaller ponderosa pine, madrone, and an occasional incense cedar. It had a mean diameter of 6.0 inches (15.2 cm) and a quadratic mean diameter of 7.8 inches
(19.8 cm). Both stands A and B were slow-growing and stagnant, and the trees marked for thinning and testing had small branches. Stand C (Squaw Ridge) was a 40-year-old even-aged ponderosa pine stand with a minor complement of Douglas-fir. The trees were vigorous and fast-growing, with large crowns and large branch diameters. The stand had a mean diameter of 8.7 inches (22.1 cm) and a quadratic mean diameter of 9.4 inches (23.9 cm). Stand D (No Name) was a mixture of Douglas-fir and ponderosa pine, with some madrone persisting in the understory. Tree age ranged from 35 to
40 years. The stand had a mean diameter of 7.2 inches (18.3
cm) and a quadratic mean diameter of 8.0 inches (20.3 cm).
All sampled trees were subjected to stress wave tests in
the field. Douglas-fir trees were evaluated in stands A and
B, and ponderosa pine trees were evaluated in stands C and
D. Trees of each stand were classified into six diameter
classes that had a mean diameter at breast height (DBH,
measured outside bark) of 5, 6, 7, 8, 9, and 10 inches (12.7,
15.2, 17.8, 20.3, 22.9, and 25.4 cm) respectively. A random
sample consisting of eight trees per diameter class was subjected to stress wave tests in each of the four stands.
A recently developed stress wave technique was used to
conduct in-situ tests on sampled trees (Wang 1999, Wang et
al 2001). The testing system consisted of two accelerometers, two spikes, a hand-held hammer, and a portable
scopemeter (Figure 1). Two spikes were embedded in the trunk at 45° to the trunk surface, one spike at each end of the section to be assessed, with a span of 4 ft (1.2 m). The spikes were pounded into the stem about one inch (2.5 cm), deep enough for the tips to penetrate the bark and reach the sapwood. The accelerometers were mounted
on the spikes using two specially designed clamps. A stress
wave was introduced into the tree in the longitudinal direction by impacting the lower spike with a hammer. The resulting signals were received by start and stop accelerometers and recorded on the scopemeter as waveforms. The
stress wave time (SWT, the time for a stress wave to travel
through the distance between two spikes) was determined
by locating the two leading edges of the waveforms on the
scopemeter (Wang et al 2001). Six measurements were obtained on each tree, three on each of two sides.
After field tests, one tree per diameter class was felled in
stands B (Toe Top) and C (Squaw Ridge), resulting in a
sample of six Douglas-fir and six ponderosa pine trees ranging from 5 to 10 inches (12.7 to 25.4 cm) in DBH. These
felled trees were then bucked into 10-foot (3.0 m) long logs
and transported to Michigan Technological University in
Houghton, Michigan for laboratory tests. For each log, the
green weight and diameters (at two ends and the middle of
the log) were measured and the green density was determined accordingly. All logs were then evaluated using longitudinal stress wave and static bending methods to obtain
stress wave time and static modulus of elasticity (MOE) of
the logs. A detailed description of the instrumentation and
analysis procedures for log tests is given by Wang et al.
(2002).
Figure 1. Schematic of the experimental setup used in the field test: a standing tree instrumented with two accelerometers over the test span L, connected to an oscilloscope (L = test span).
To validate the stress wave analysis of trees and logs, all logs were sawn into 2- by 4-in. (51- by 102-mm) and 2- by 6-in. (51- by 152-mm) dimension lumber on a portable horizontal band sawmill for further assessment in terms of structural quality. The sawing pattern for each log was diagrammed so that the location of each piece of lumber within each log could be tracked. Each piece of lumber received a unique identification number associating it with its location within the log and tree from which it was sawn. The lumber was stickered and stacked for air-drying until it reached a moisture content of approximately 15 percent. When dry,
the lumber was planed to industry standard thickness and
width for surfaced dry lumber. Longitudinal stress wave and
static bending tests were also conducted on lumber at both
green and dry conditions.
RESULTS AND DISCUSSION
Stress Wave Time in Standing Trees
The stress wave time in standing trees was the average
value of six measurements from each tree and was reported
on a per-unit-length basis (time/length). Lower stress wave
time corresponds to higher stress wave speed (length/time).
The descriptive statistics of tree measurements (SWT and
DBH) from all tree samples are given in Table 1. Figure 2
shows histograms of stress wave time distribution for four
different stands.
Douglas-fir and ponderosa pine trees can be easily distinguished in terms of stress wave time.
The mean SWT of ponderosa pine trees is about 27 percent
higher than that of Douglas-fir trees, which means stress
waves travel much slower in ponderosa pine than in Douglas-fir trees. In general, this result is in agreement with the
strength and stiffness difference between the two species as
given in the Wood Handbook (FPL 1999), which states the
modulus of rupture (MOR) and modulus of elasticity of ponderosa pine are about 34 percent lower than those of Douglas-fir (green condition).
The SWT of ponderosa pine trees also shows much higher variation than the SWT of Douglas-fir trees. The standard deviation of SWT is 4.50 µs/ft (14.8 µs/m) for Douglas-fir (stands A and B combined) and 16.17 µs/ft (53.0 µs/m) for ponderosa pine (stands C and D combined). This might suggest a larger variation in the strength and stiffness properties of ponderosa pine compared to those of Douglas-fir.

The statistical comparison analysis showed significant SWT differences between the two stands of each species, which implies a potential difference in strength and stiffness between the stands. This could not be substantiated, however, due to the lack of mechanical property data for all tested standing trees.
The relationship between SWT and DBH of standing trees
is shown in Figure 3. For better illustration, stress wave
times in trees were analyzed in terms of diameter classes.
The data points are mean values of SWT for the eight trees in each class, and the error bars indicate ±1 standard deviation.
The SWT in Douglas-fir trees increased slightly as DBH
of the trees increased. The trend is more evident in stand A
(Yale Twin) than in stand B (Toe Top). The SWT for stand
A increased about 12 percent as DBH changed from 5 in. to
10 in. (12.7 to 25.4 cm). The SWT-DBH relationship for
ponderosa pine trees was quite different from Douglas-fir.
As shown in Figure 3(b), the SWT in ponderosa pine trees
decreased significantly as DBH of the trees increased, especially in stand C (Squaw Ridge) where the SWT dropped 24
percent when the DBH increased from 5 in. to 10 in. (12.7
to 25.4 cm). The causes of the different functional relationships between SWT and DBH for Douglas-fir and ponderosa pine trees are not yet fully understood. Huang (2000) reported that, for trees of the same age, stress wave time is lower for trees with slower growth rates or narrower rings. This might explain the SWT-DBH trend found in Douglas-fir trees. For ponderosa pine trees, the opposite SWT-DBH trend could be more related to other factors, such as the characteristics of tree form (size and frequency of branches), the proportion of mature and juvenile wood in the cross section, and moisture content.
Figure 2. Histograms of stress wave time (SWT) distribution for Douglas-fir and ponderosa pine trees: (a) Douglas-fir, stand A (Yale Twin) and stand B (Toe Top); (b) ponderosa pine, stand C (Squaw Ridge) and stand D (No Name).
Table 1. Diameter at breast height and stress wave time of standing trees.ᵃ

                                DBH (in.)                      SWT (µs/ft)
Species          Stand   No.    Mean   Min    Max    SD        Mean   Min    Max     SD
Douglas-fir      A       48     7.4    4.7    10.3   1.71      75.2   67.3   87.0    4.79
Douglas-fir      B       48     7.5    4.6    10.2   1.76      72.8   60.8   79.2    3.83
Ponderosa pine   C       48     7.6    4.8    10.3   1.76      98.9   77.3   150.0   14.94
Ponderosa pine   D       48     7.6    4.7    10.1   1.77      89.2   71.3   134.3   16.12

ᵃ 1 in. = 2.54 cm; 1 µs/ft = 3.28 µs/m. DBH, diameter at breast height; SWT, stress wave time; SD, standard deviation.
Relationship Between Stress Wave Time in Trees and
Log Properties
Stress wave time in standing trees was measured in the
lower part of the stem, which tracks to the butt log after
harvesting and cutting. In this study, a total of 42 10-ft.
(3.0-m) long logs were obtained from 12 harvested trees.
The number of the logs produced from each tree varied from
3 to 5 for Douglas-fir and from 1 to 4 for ponderosa pine as
a result of the difference in tree height. The diameter of the
logs (average value of diameters measured at two ends and
the middle) ranged from 4.3 to 10.0 in. (10.9 to 25.4 cm)
for Douglas-fir and from 4.4 to 9.8 in. (11.2 to 24.5 cm) for
ponderosa pine. The physical and mechanical properties
(density, stress wave time, and static MOE) of logs are summarized in Table 2. Note that all these properties were determined in green and un-debarked logs.
Figure 4 shows the relationship between SWT of trees
and SWT of the butt logs cut from the trees. A linear regression analysis indicated a strong correlation (R2 = 0.95) when
two species were considered as a single population. The
strength of the relationship was weakened when the two
species were considered separately (R2 = 0.61 for Douglas-fir, R2 = 0.85 for ponderosa pine). This was presumably due
to the small sample size (n=6) and limited property range
for samples of each species. It was found that SWT measured in standing trees was about 10 and 22 percent lower
than SWT of logs for Douglas-fir and ponderosa pine, respectively. This could be a systematic difference caused by
different stress wave approaches. It has been reported that
the stress wave speed measured in trees could be predominantly controlled by the mature wood (outer wood in the cross-section), since both wave generation and sensing occurred on the surface of the stem (Wang 1999, Huang 2000, Ikeda et al. 2000, and Wang et al. 2001), whereas in logs the waves were introduced into the stem from one end and sensed at the other end (Wang et al. 2002).
Figure 3. Relationship between stress wave time (SWT) and tree diameter at breast height (DBH): (a) Douglas-fir, stands A and B; (b) ponderosa pine, stands C and D.
Table 2. Physical and mechanical properties of logs.ᵃ

                               Density (lb/ft³)                Stress wave time (µs/ft)        MOE (10⁶ lb/in²)
Species          No. of logs   Mean    Min     Max     SD      Mean    Min     Max     SD      Mean   Min    Max    SD
Douglas-fir      25            41.45   35.17   48.45   3.539   76.1    70.4    84.1    3.68    0.99   0.52   1.33   0.213
Ponderosa pine   17            51.75   43.12   57.55   4.310   116.3   106.0   134.7   8.84    0.57   0.33   0.79   0.149

ᵃ 1 lb/ft³ = 16.02 kg/m³; 1 µs/ft = 3.28 µs/m; 1 lb/in² = 6895 Pa. MOE, modulus of elasticity determined by static bending; SD, standard deviation.
The relationships between SWT of trees and the average MOE of logs are shown in Figure 5. Regression analysis indicated a linear relationship between SWT of trees and MOE of logs when all samples were combined. The coefficient of determination (R2) was found to be 0.74. Again, the strength of the relationships was reduced significantly when the two species were analyzed separately.
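The regressions reported in this section are ordinary least-squares fits; a hedged sketch of the computation, with hypothetical stand-in values rather than the study's data, is:

    import numpy as np

    # Hypothetical tree SWT (us/ft) and average log MOE (10^6 lb/in^2).
    swt_trees = np.array([70.0, 75.0, 78.0, 95.0, 110.0, 130.0])
    moe_logs = np.array([1.20, 1.05, 0.98, 0.75, 0.60, 0.42])

    slope, intercept = np.polyfit(swt_trees, moe_logs, 1)  # linear fit
    predicted = slope * swt_trees + intercept
    ss_res = float(np.sum((moe_logs - predicted) ** 2))
    ss_tot = float(np.sum((moe_logs - moe_logs.mean()) ** 2))
    r_squared = 1.0 - ss_res / ss_tot  # coefficient of determination (R2)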
Figure 4. Relationship between SWT in trees and SWT of butt logs.

Figure 5. Relationship between SWT in trees and average MOE of logs.

Figure 6. Relationships between SWT in trees and average MOE of lumber produced from the trees (Douglas-fir and ponderosa pine, green and dry).

Table 3. Stress wave and static bending properties of lumber.ᵃ

                          Rough-cut lumber (MC = 24%)           Dry lumber (MC = 9%)
Species          Number   SWT (µs/ft)    MOE (10⁶ lb/in²)       SWT (µs/ft)    MOE (10⁶ lb/in²)
Douglas-fir      49       67.8 (4.0)     2.14 (12.4)            59.1 (4.3)     2.60 (11.8)
Ponderosa pine   32       115.3 (11.1)   1.06 (14.2)            78.1 (11.4)    1.33 (14.5)

ᵃ 1 µs/ft = 3.28 µs/m; 1 lb/in² = 6895 Pa. SWT, stress wave time; MOE, modulus of elasticity determined by static bending. Data in parentheses are coefficients of variation (%).

Relationship Between Stress Wave Time in Trees and Lumber MOE

A total of 81 pieces of dimension lumber (2 by 4s and 2 by 6s), 49 Douglas-fir and 32 ponderosa pine, were obtained from the logs. Stress wave and static bending tests were performed on lumber in both rough-cut and dry conditions (air dried and four-side surfaced). The moisture content (MC) of rough-cut lumber (designated as green lumber) ranged
from 19 to 26 percent for Douglas-fir with an average of 24
percent and 30 to 42 percent for ponderosa pine with an average of 36 percent. The MC of dry lumber was 8 to 10 percent with an average of 9 percent for both species, which was
actually lower than target MC.
The averages and coefficients of variation (COV) for stress
wave and static bending properties of lumber are summarized in Table 3. The mean comparison results indicated a
significant difference between SWT in trees and SWT in lumber. For Douglas-fir, the mean SWT in rough-cut and dry
lumber decreased about 7 and 17 percent respectively compared to the mean SWT in trees. The low SWT in lumber is
mainly due to the low moisture content (the MC was below the fiber saturation point (FSP) for both rough-cut and dried lumber).
For ponderosa pine, however, the mean SWT in green lumber (rough cut) increased about 19 percent compared to that
in trees. This could be caused by the different wave propagation mechanisms associated with the testing approaches used
in tree and lumber measurements. As mentioned earlier, the
SWT measured in trees is more controlled by the mature wood
(outer wood in the cross-section) compared to the SWT measured in logs. The same interpretation could be reached for
lumber. The expectation is that, given the same moisture condition, the SWT in trees would be lower than the SWT in
lumber. In terms of moisture effect, since the MC of green
ponderosa pine lumber was well above the FSP, the moisture
has less effect on the SWT compared to Douglas-fir lumber.
Therefore, the high SWT in ponderosa pine green lumber
might be mainly due to the different wave propagation mechanism. In the case of dried ponderosa pine lumber (the MC
was far below the FSP), the mean SWT decreased about 19
percent compared to that in trees because the moisture effect
played a more important role compared to wave propagation
mechanism.
The relationships between SWT in trees and average MOE of lumber produced from the trees are shown in Figure 6. In the case of Douglas-fir, both the tree and lumber property ranges were very small, and no statistical relationship was found between SWT of trees and average MOE of lumber. In the case of ponderosa pine, the data points covered a wider property range (tree and lumber) and showed a linear relationship between SWT of trees and average lumber MOE (R2 = 0.39 – 0.63). When the two species were combined, the statistical analysis resulted in a good correlation between SWT of trees and average MOE of lumber. The coefficients of determination (R2) were found to be 0.88 for green lumber and 0.86 for dry lumber.
CONCLUSIONS

A stress wave technique was used to evaluate the structural potential of small-diameter Douglas-fir and ponderosa pine trees. The results of the study indicated a significant difference in stress wave time between Douglas-fir and ponderosa pine trees. Stress wave time ranged from 60.8 to 87.0 µs/ft (199 to 285 µs/m) for Douglas-fir trees and 71.3 to 150 µs/ft (234 to 492 µs/m) for ponderosa pine trees. Statistical comparison analysis between stands suggested a potential difference in wood stiffness between the two stands of each species. It was found that stress wave time in Douglas-fir trees increased slightly as tree diameter at breast height increased, whereas stress wave time in ponderosa pine trees decreased significantly as tree diameter at breast height increased. The statistical analysis resulted in good relationships between stress wave time of trees and modulus of elasticity of logs and lumber when the two species were combined. However, the statistical significance was reduced when the two species were considered separately because of the small sample size and narrow property range within each species.

The data collected for this study illustrate the potential of the stress wave technique for assessing the structural quality of small-diameter timbers in the field. Further studies are planned to develop a broader database of the SWT-MOE relationship, with sufficient samples for each species, and to examine whether species has an effect on the relationship.
LITERATURE CITED
Forest Products Laboratory. 1999. Wood handbook – Wood as an engineering material. Gen. Tech. Rep. FPL-GTR-113. Madison, WI: U.S. Department of Agriculture, Forest Service, Forest Products Laboratory. 463 p.

Huang, Chih-lin. 2000. Predicting lumber stiffness of standing trees. In: Proceedings of the 12th international symposium on nondestructive testing of wood; 2000 September 13-15; Sopron, Hungary: University of Western Hungary: 173-180.

Ikeda, K., S. Oomori, and T. Arima. 2000. Quality evaluation of standing trees by a stress-wave propagation method and its application III: Application to sugi (Cryptomeria japonica) standing plus trees. Mokuzai Gakkaishi 46(6): 558-565.

Wang, X. 1999. Stress wave-based nondestructive evaluation (NDE) methods for wood quality of standing trees. Ph.D. diss. Michigan Technological Univ., Houghton, MI.

Wang, X., R.J. Ross, M. McClellan, R.J. Barbour, J.R. Erickson, J.W. Forsman, and G.D. McGinnis. 2001. Nondestructive evaluation of standing trees with a stress wave method. Wood and Fiber Science 33(4): 522-533.

Wang, X., R.J. Ross, J.A. Mattson, J.R. Erickson, J.W. Forsman, E.A. Geske, and M.A. Wehr. 2002. Nondestructive evaluation techniques for assessing modulus of elasticity and stiffness of small-diameter logs. Forest Prod. J. 52(2): 79-85.
Early Experience with Aroma Tagging and Electronic Nose
Technology for Log and Forest Products Tracking
GLEN MURPHY
Abstract: Worldwide, the movement of logs from forest to customer can be conservatively estimated at over 5 billion logs
per annum. There is increasing interest in being able to track the movement of individual logs from stump to mill or at least
determine the chain-of-custody of groups of logs back to individual stands. Some segments of industry would ideally like to be
able to track wood products from the standing tree through to the ultimate product – “from seedling to rocking chair”. Barcoding and radio frequency identification, although not ideal, are the dominant technologies for tagging and tracking of forest
products. This presentation will cover early experience with a novel technology, aroma-tagging and an electronic nose, for
tracking logs from the forest through the mill and out of the drying kilns. Trials indicate that this novel technology is most
likely to be successful in chain-of-custody applications from forest to mill door. Development of new tools or further refinement of procedures could eventually result in the ability to track individual logs.
Modeling Steep Terrain Harvesting Risks Using GIS
JEFFREY D. ADAMS, RIEN J.M. VISSER, AND STEPHEN P. PRISLEY
Abstract: When preparing to harvest timber on steep terrain, it is necessary to assess a variety of risks, including slope
failure, excessive erosion, residual stand damage, and job-related injury. A number of the risks associated with steep terrain
harvesting can be modeled using terrain and soil characteristics such as slope gradient, slope form, soil strength, and soil
erodibility. Once assessed, these risks can often be mitigated through detailed harvest planning, an important part of which is
the selection of an appropriate harvesting system. This paper describes the development of a steep terrain harvesting risk
assessment model using ArcObjects™. The model operates within the Visual Basic for Applications™ (VBA) environment embedded in ArcMap™, and accepts soil and digital elevation data as inputs into a decision matrix containing key steep terrain
harvest system parameters. Model outputs include maps depicting debris slide hazard, soil strength hazard, soil erosion
hazard, and harvest system recommendations. The intended use of the model is to serve as a decision support system in the
strategic planning phase of forest management, facilitating the identification of high-risk areas and long-term harvesting
system requirements. An application of the model is demonstrated on approximately 500 hectares of mountainous terrain in
southwest Virginia.
INTRODUCTION
In many mountainous regions, planning forest management activities can be complicated by a variety of terrain
factors (slope gradient, slope form, topographic complexity,
etc.) and a host of soil characteristics (strength, erodibility,
etc.). This is particularly true in southwest Virginia, where
the topography is extremely diverse due to the convergence
of the Appalachian Plateau, Ridge and Valley, and Blue Ridge
physiographic provinces. In many locations throughout the
region, it is necessary to assess a number of potential environmental hazards when planning timber harvesting operations.
The more prominent hazards associated with conducting
timber harvesting operations on mountainous terrain include
soil erosion, soil compaction, and debris slides. Depending
on the severity and extent of the hazard, each can potentially lead to significant adverse environmental and economic
impacts if not properly assessed and managed. Soil compaction can retard the growth of regeneration as well as
lead to increased soil erosion (Martin 1988). Soil erosion, a
common byproduct of timber harvesting on steep terrain,
can lead to decreases in forest site productivity, water quality, and stream habitat (Rice, et al. 1972). Debris slides can
rapidly deliver sediment and woody debris to waterways resulting in high turbidity, bank scouring, channel aggradation, and potential damage to roads and other improvements
in their paths (Washington State Forest Practices Board
2000). In addition, steep terrain harvesting operations carry
a greater risk of equipment damage and personal injury than
operations conducted on flat terrain. Equipment damage and personal injury can often lead to significant direct and indirect costs for companies and injured parties.

The factors that contribute to the existence of the abovementioned hazards are often unalterable features of the terrain. However, many of the adverse impacts associated with the hazards can be mitigated through informed planning. To properly assess the severity and extent of the hazards, it is often necessary to conduct detailed field investigations in which site-specific data are collected and analyzed. Once properly assessed, one of the more effective ways to mitigate the identified hazards is to select and apply an appropriate harvesting system. For the purposes of this research effort, harvesting system will refer specifically to the equipment and techniques used to move felled trees from the stump to the landing. Harvesting systems commonly used in mountainous terrain include wheeled skidder, track skidder, cable, and helicopter systems.

The objective of this research was to design a GIS model that could serve as a decision-support tool during both the strategic (long-term) and tactical (short- and medium-term) phases of forest management planning. During the strategic phase, when forest-level management concerns are being addressed, the model can be used to assess long-term harvesting system requirements. The model provides estimates of the proportions of a land base that might be appropriate for the different harvesting systems, which can help forest managers and planners refine projected harvesting costs and determine whether the necessary equipment or an adequate supply of harvesting contractors is available. Model outputs also include the relative location, severity, and geographic extent of the environmental hazards associated with steep terrain harvesting. During the tactical phase of management planning, these hazard assessments can be used to prioritize field investigation activities. To maximize the model's operability and accessibility, data requirements were limited to widely distributed, publicly available spatial data. To provide examples of model output, an analysis was conducted on approximately 500 hectares of mountainous terrain that serves as a teaching and demonstration forest for Virginia Polytechnic Institute and State University.
STEEP TERRAIN HARVESTING RISKS
When conducting timber harvest operations in steep terrain, it is necessary to mitigate a number of risks. The sedimentation of waterways resulting from increased surface erosion is often cited as the primary concern associated with
forest management activity in steep terrain. Many of the
streams originating in or flowing through steep forested terrain provide important habitat for aquatic species and represent important sources for water supplies, recreation, and
a number of other uses. Sedimentation of these streams can
have adverse impacts on water quality and aquatic habitat,
as well as lead to increased flood potential (Virginia Department of Forestry 2002). As a result, many states have
established Best Management Practices (BMP) for forest
management activities. BMPs identify forest management
activities that mitigate increased erosion. Management activities that are commonly identified as potential contributors to increased surface erosion include logging operations,
road construction, grazing, and site preparations associated
with planting and fire (Toy, et al. 2002, Virginia Department of Forestry 2002). Of the above listed activities, road
construction is widely recognized as the biggest potential
contributor to increased surface erosion. Although some
degree of increased erosion may be unavoidable, measures
can be taken to minimize the severity and extent of erosion
(Rice, et al. 1972).
Another concern associated with steep terrain harvesting is the compaction of soil caused by the ground pressure
exerted by heavy harvesting equipment. Soil compaction
alters the physical properties of a soil by reducing the amount
of macropore space and increasing density. While soil compaction is a hazard that should be assessed for any harvesting operation, the amount of ground pressure exerted by
harvesting equipment is greater when operating on uneven
or sloping terrain (Adams 1998). The physical changes
brought about by compaction can have significant adverse
impacts, including restricted rooting depths for regeneration, restricted water and nutrient cycling, increased water
runoff, and increased surface erosion hazard (Adams 1998,
Krag, et al. 1986, Martin 1988, Miller and Sirois 1986, Rice,
et al. 1972, Schnepf 2002). Compacted soils can be restored
given an adequate period of time and the proper environmental conditions. The amount of time required to restore
compacted soils depends on the severity of the disturbance, and can range from a few years to decades (Martin 1988, Schnepf 2002).

Quite often, debris slides represent the dominant erosional process in steep mountainous terrain (Wu and Sidle 1995). Debris slides are mass failures in which the internal strength of soil is exceeded by a variety of stressors, including gravity, soil pore pressure, and material weight (Dietrich, et al. 1986, Shaw and Johnson 1995). They commonly occur in convergent topography, where water, sediment, and organic debris become concentrated (Dietrich, et al. 1986). Areas prone to debris slides experience recurrent activity only infrequently, usually triggered by intense rainfall events. While debris slides are a natural process, certain forest management activities are believed to increase the frequency and severity of debris slide activity. As with surface erosion, the management features most commonly associated with debris slide activity are poorly located or constructed roads.

In addition to environmental damage, conducting poorly planned timber harvest operations in steep terrain can result in equipment damage and worker injury. Logging is one of the most hazardous occupations, with a rate of occupational death, illness, or injury approximately 3 times greater than the average incidence rate for all private industries. As slope gradient increases, so too does the potential for injury and accident. Most ground-based harvesting equipment, such as wheeled and track skidders, possesses relatively high centers of gravity and can overturn in steep or uneven terrain (Conway 1982). The majority of ground-based and aerial systems (cable and helicopter) require manual felling. Falling materials (i.e., trees, snags, and branches) and poor felling practices are common causes of injury and death for tree fellers. This is especially true in locations characterized by complex stand structures and steep terrain, such as the mixed hardwood stands of the Appalachians. The high-tension cables used in cable yarding operations pose additional threats to workers on the ground. Lastly, helicopter operations can be extremely dangerous, with crashes leading to severe injury or death to both pilots and loggers (Manwaring and Conway 2001).
HARVESTING SYSTEMS
Harvesting systems commonly used throughout the Appalachians and other mountainous regions include wheeled
skidders, track skidders, cable yarders, and helicopters.
Under a broad range of conditions, the wheeled skidder system represents the most efficient ground-based alternative.
Wheeled skidders are rubber-tired vehicles specially outfitted to transport felled timber. They require a relatively low
initial capital investment, are relatively inexpensive to maintain, and can move a given quantity of wood from the stump
to the landing up to twice as fast as their tracked counterparts (Conway 1982). Wheeled skidders travel through harvested areas on a network of skid roads and skid trails. Skid
roads, which are the primary routes from the harvested area
to the landing, are often systematically located throughout
the harvested area and experience heavy use during a harvesting operation. In steep terrain operations, skid roads
are often located on cut-and-fill slopes. Skid trails are
secondary routes established while accessing felled timber
and can be somewhat random in location. Skid roads and
skid trails can be major sources of erosion in steep terrain
(Gibson and Biller 1975, Krag, et al. 1986, Rice, et al. 1972).
Track skidders, often referred to as crawler tractors, are
specially outfitted tracked vehicles used to transport felled
timber. While slower and more expensive than their wheeled
counterparts, track skidders can be much more versatile.
They are capable of transporting larger payloads and can be
used to construct roads and landings (Conway 1982). In
some situations, soil disturbance impacts can be mitigated
by switching from wheeled to track vehicles (Martin 1988).
Track skidders spread their weight over a much larger area,
which can significantly reduce the severity of soil compaction and rutting. This is particularly true for operations
conducted on wetter sites, where wheeled skidders can also
suffer significant decreases in pulling power (Conway 1982).
Aerial systems such as cable yarders and helicopters are
commonly used in locations possessing gradients too steep
for the safe and productive implementation of ground-based
systems. In cable harvesting systems, felled trees are rigged
to a suspended cable and pulled to the landing with winch
systems called yarders. Depending upon the configuration
of the system being used, felled trees are suspended either
partially or fully off the ground. In general, the soil distur-
bance associated with cable systems is less severe and widespread than the disturbance caused by ground-based systems, due in most part to the lack of skid roads and trails
(Krag, et al. 1986, Miller and Sirois 1986). A necessary
feature of any cable system configuration is deflection, which
is sag in the suspended skyline cable. In general, a minimum deflection of 5% is required for a skyline to possess an
acceptable load-carrying capability. Cable operations are
typically conducted on terrain characterized by concave
ground profiles, which allow for adequate deflection.
Helicopter systems are the most expensive alternative and are applied when all other systems are deemed inappropriate.
For the most part, the use of helicopter systems is relegated
to remote locations that are very sensitive to adverse environmental impacts. Trees are felled manually and then transported to the landing using a helicopter. The use of helicopters eliminates skid road construction, soil rutting associated with skid trails, and corridor damage associated with
cable systems. However, large landings with access roads
capable of heavy transport traffic are required, typically
within a 3-mile distance of the harvested area (Sloan 2001).
METHODS
In order to provide an automated spatial assessment of the risks associated with terrain and soil conditions, a GIS-based model was developed. The model operates within the Visual Basic for Applications™ (VBA) environment embedded in ArcMap™, and accepts soil and digital elevation data as inputs into a decision matrix containing key steep terrain harvest system parameters. The interface of the model contains a set of tabbed pages on which the user identifies the model input, selects output options, and can adjust model parameters for the different hazards assessed (Figure 1). Default parameter values are provided; however, adjustments can be made to suit local conditions or knowledge. Model outputs include tabular and spatial output depicting soil erosion hazard, soil compaction hazard, debris slide hazard, and harvest system allocation.

Figure 1. Screen capture of the user interface, which contains a set of tabbed pages on which the user identifies the model input, selects output options, and can adjust model parameters for the different hazards assessed.
STUDY AREA
The study area selected to illustrate model operation is
the Fishburn Forest, a teaching and demonstration forest
owned by Virginia Polytechnic Institute and State University. The forest is situated on an isolated, east-west trending
ridge in the Valley and Ridge province of southwest Virginia and is comprised of approximately 500 hectares of
Appalachian hardwood and mixed pine-hardwood cover
types. Elevations range from approximately 550 to 730 meters above sea level, with a mean and standard deviation of 629 and 39 meters, respectively. Slope gradients in the forest range from 0 to 112%, with a mean and standard deviation of 28 and 15 percent, respectively. Within the boundaries of the forest, the following soil series are represented: Berks, Caneyville, Craigsville, Duffield, Groseclose, Jefferson, McGary, and Weaver.
DATA REQUIREMENTS
The data requirements for the model include elevation
and soil data, both of which represent important data sources
for GIS applications in a variety of disciplines, including
engineering, ecology, hydrology, natural resource management and geomorphology. With respect to elevation data,
the model is designed to accept grid-based data with either
30-meter or 10-meter horizontal resolution. The United
States Geological Survey (USGS) produces both 30-meter
and 10-meter grid-based digital elevation models as part of
the National Mapping Program (U.S. Geological Survey
1987). While the availability of 10-meter elevation data is
still somewhat limited, 30-meter data is available to the public
for a majority of the conterminous United States, Hawaii,
and Puerto Rico.
With respect to soil data requirements, the United States
Department of Agriculture’s (USDA) Natural Resources
Conservation Service (NRCS) distributes three spatial soil
databases, including the Soil Survey Geographic (SSURGO),
State Soil Geographic (STATSGO), and National Soil Geographic (NATSGO) databases. The databases consist of
mapped soil units (polygons) and a collection of relational
tables containing associated physical properties, chemical
properties, and interpretations. The databases differ with respect to the intensity and scale at which the soil units are mapped, with SSURGO being the most detailed. The model is designed to accept either SSURGO or STATSGO data. The soil units in SSURGO datasets are mapped at scales ranging from 1:12,000 to 1:63,000 and can contain up to three different soil components. The availability of SSURGO datasets, while increasing, is currently limited to select locations throughout the conterminous United States, Alaska, Hawaii, and Puerto Rico. STATSGO datasets are available for the entire conterminous United States, Alaska, Hawaii, and Puerto Rico. STATSGO soil units can contain up to 27 different soil components and, with the exception of Alaska (1:1,000,000), are mapped at a scale of 1:250,000.

SOIL EROSION HAZARD MODELING

Soil erosion hazard is modeled using a combination of slope gradient classes and Kffact. Kffact is an experimentally determined value that quantifies the susceptibility of soil particles to detachment and movement by water (Natural Resources Conservation Service 1995). Kffact values can range from 0 to 1, with higher values indicating greater erosion potential. In both SSURGO and STATSGO datasets, each map unit can contain multiple soil components, and each component is typically comprised of multiple layers, each of which is assigned a Kffact value. To characterize soil erosion hazard, the model required that each map unit be represented by only one Kffact value. For each soil component within a particular map unit, the relevant Kffact value for the modeling of surface erosion is the Kffact value associated with the soil layer constituting the thickest mineral horizon in the upper 15 cm of the component (Natural Resources Conservation Service 1998). As such, each map unit contained multiple soil components, each represented by the Kffact value attributed to the soil layer meeting the above-described conditions. To provide the most conservative estimate of soil erosion hazard, the highest Kffact value from the set of soil components contained within the map unit was attributed to the particular map unit. The representative Kffact value and slope gradient were then combined to characterize relative soil erosion hazard.

The default soil erosion hazard classification criteria (Table 1) offered by the model are adapted from interpretive criteria used by the NRCS to rate potential off-road/off-trail erosion hazard (Natural Resources Conservation Service 1998).
Table 1. Default slope gradient classes and Kffact values used to characterize relative soil erosion hazard.

Soil Erosion Hazard   Kffact < 0.35   Kffact ≥ 0.35
Lower                 0 - 25%         0 - 17%
Moderate              25 - 45%        17 - 35%
Higher                > 45%           > 35%
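To make the rule concrete, the following minimal Python sketch (ours, not the model's VBA implementation) applies the conservative Kffact selection described above and the default class breaks from Table 1:

    def representative_kffact(component_kffacts):
        # Most conservative choice: the highest Kffact among the map
        # unit's soil components (each value already taken from the
        # thickest mineral layer in the component's upper 15 cm).
        return max(component_kffacts)

    def soil_erosion_hazard(kffact, slope_pct):
        # Table 1 defaults: more erodible soils (Kffact >= 0.35) get
        # more restrictive slope gradient breaks.
        low, mid = (25.0, 45.0) if kffact < 0.35 else (17.0, 35.0)
        if slope_pct <= low:
            return "Lower"
        return "Moderate" if slope_pct <= mid else "Higher"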
SOIL COMPACTION HAZARD MODELING
Soil compaction hazard is modeled using a combination
of Unified Classification soil group designations and slope
gradient. The Unified Classification System was developed
by the Army Corps of Engineers in 1952 and classifies soils
into groups based on a number of characteristics, including
grain size, gradation, liquid limit, and plasticity index
(Cernica 1995). Unified Classification designations are used
in a number of NRCS interpretive ratings as an indicator of
soil strength for forestry-related activities.
Up to four different Unified Classification group designations are provided for each soil layer in a soil component.
For each soil component, the relevant Unified Classification designations with respect to the modeling of soil compaction are the group designations attributed to soil layers
located in the upper 15 cm of the component that are ≥ 7cm
in thickness. For the purposes of the modeling protocol, each
map unit can only be represented by a single Unified Classification designation. As with the soil erosion hazard modeling described above, the algorithm used to obtain a map
unit’s representative Unified Classification group designation was designed to provide the most conservative estimate
of soil compaction hazard. This was achieved by first selecting the most limiting of the multiple designations attributed to each layer located in the upper 15 cm of the component that were ≥ 7 cm in thickness. This designation
was subsequently attributed to the component to which the
layer belonged. The most limiting designation was then
selected from the set of designations corresponding to the
soil components in the map unit. The representative group
designation was assigned to the map unit, and used to characterize the relative soil compaction hazard. The default
classification scheme (Table 2) used by the model is based on the criteria used by the NRCS to rate log landing suitability, natural surface road suitability, and harvest equipment operability (Natural Resources Conservation Service 1998). Where slope gradient exceeds 20%, lower and moderate ratings are shifted to moderate and higher ratings, respectively.

Table 2. Default classification scheme used to characterize relative soil compaction hazard.

Soil Compaction Hazard   Unified Classification Group
Lower¹                   Other
Moderate¹                CL, CH, CL-ML, ML, MH
Higher                   OL, OH, PT

¹ Hazard ratings shift to one class more limiting on slopes > 20%.
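A minimal sketch of this scheme (ours, not the authors' code; the groups and the slope-shift rule are as read from Table 2 and its footnote):

    HIGHER_GROUPS = {"OL", "OH", "PT"}
    MODERATE_GROUPS = {"CL", "CH", "CL-ML", "ML", "MH"}

    def soil_compaction_hazard(unified_group, slope_pct):
        # Base rating from the representative Unified Classification group.
        if unified_group in HIGHER_GROUPS:
            hazard = "Higher"
        elif unified_group in MODERATE_GROUPS:
            hazard = "Moderate"
        else:
            hazard = "Lower"
        # Footnote: ratings shift one class more limiting on slopes > 20%.
        if slope_pct > 20.0 and hazard != "Higher":
            hazard = {"Lower": "Moderate", "Moderate": "Higher"}[hazard]
        return hazard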
DEBRIS SLIDE HAZARD MODELING

Debris slide hazard is modeled using slope gradient and
slope form. The protocol to produce hazard ratings is
adapted from a slope morphology model developed by the
Washington Department of Natural Resources (Shaw and
Johnson 1995). Slope gradient is calculated from the elevation data and classified into low, moderate, steep, and
very steep classes (Table 3).
Slope form is captured spatially using planform surface
curvature, which proved to be very effective in the identification of the landforms commonly associated with debris
slide occurrences. Planform surface curvature is also calculated from the elevation data and classified into convex,
planar, and concave classes (Table 4). The combination of
the slope gradient and slope form classes provide a matrix
from which debris slide hazard classes are derived. The
default matrix used by the model to rate debris slide hazard
from the slope gradient and slope form classes is provided
in Table 5.
Table 3. Slope gradient classification parameters used in the modeling of debris slide hazard.

Slope Gradient Class   Slope Gradient (%)
Low                    0 - 25
Moderate               25 - 45
Steep                  45 - 65
Very Steep             > 65

Table 4. Slope form classification parameters used in the modeling of debris slide hazard.

Slope Form Class   Planform Curvature¹
Convex             > -0.1
Planar             -0.1 to -0.4
Concave            < -0.4

¹ Planform curvature is expressed in units of 1 per 100 units.
Table 5. Debris slide hazard matrix.

                   Slope Gradient Class
Slope Form Class   Low        Moderate   Steep      Very Steep
Convex             Lower      Lower      Lower      Moderate
Planar             Lower      Lower      Moderate   Higher
Concave            Moderate   Higher     Higher     Higher
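Tables 3-5 combine into a simple lookup; the sketch below (ours, not the ArcObjects implementation; thresholds are the defaults above, with class boundaries treated as inclusive on the lower side) shows the chain from slope gradient and planform curvature to a hazard class:

    def gradient_class(slope_pct):
        # Table 3 breaks
        if slope_pct <= 25: return "Low"
        if slope_pct <= 45: return "Moderate"
        if slope_pct <= 65: return "Steep"
        return "Very Steep"

    def form_class(planform_curvature):
        # Table 4 breaks (curvature in 1/100 units)
        if planform_curvature > -0.1: return "Convex"
        if planform_curvature >= -0.4: return "Planar"
        return "Concave"

    # Table 5: (slope form, slope gradient) -> relative hazard
    HAZARD_MATRIX = {
        ("Convex", "Low"): "Lower",      ("Convex", "Moderate"): "Lower",
        ("Convex", "Steep"): "Lower",    ("Convex", "Very Steep"): "Moderate",
        ("Planar", "Low"): "Lower",      ("Planar", "Moderate"): "Lower",
        ("Planar", "Steep"): "Moderate", ("Planar", "Very Steep"): "Higher",
        ("Concave", "Low"): "Moderate",  ("Concave", "Moderate"): "Higher",
        ("Concave", "Steep"): "Higher",  ("Concave", "Very Steep"): "Higher",
    }

    def debris_slide_hazard(slope_pct, planform_curvature):
        return HAZARD_MATRIX[(form_class(planform_curvature),
                              gradient_class(slope_pct))]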
Table 6. Classification scheme used to allocate harvest systems.

                  Equipment Operability      Hazard Tolerance
                  (Slope Gradient)
System            Min (%)   Max (%)          Soil Erosion   Soil Compaction   Debris Slide
Helicopter        0         150              Higher         Higher            Higher
Cable             15        150              Higher         Higher            Higher
Track Skidder     0         45               Lower          Moderate          Moderate
Wheeled Skidder   0         30               Lower          Moderate          Lower
Table 7. Area in hectares by relative hazard category for the Fishburn Forest.

Relative Hazard   Soil Erosion   Soil Compaction   Debris Slide
Lower             223.2          159.8             436.0
Moderate          211.7          343.5             53.8
Higher            69.5           1.1               14.6

Table 8. Harvest system allocation for the Fishburn Forest.

Harvest System    Area (ha)
Wheeled Skidder   223.1
Track Skidder     173.7
Cable             104.4
Helicopter        3.2
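As a quick arithmetic check (a sketch using only the areas copied from Tables 7 and 8), the proportions behind the inferences drawn in the discussion below can be recomputed directly:

    # Areas (ha) from Table 7 (soil erosion) and Table 8 (allocation).
    erosion = {"Lower": 223.2, "Moderate": 211.7, "Higher": 69.5}
    systems = {"Wheeled Skidder": 223.1, "Track Skidder": 173.7,
               "Cable": 104.4, "Helicopter": 3.2}

    total = sum(erosion.values())  # about 504.4 ha
    erosion_concern = (erosion["Moderate"] + erosion["Higher"]) / total
    cable_share = systems["Cable"] / sum(systems.values())
    print(f"{erosion_concern:.0%} moderate/higher erosion hazard")  # ~56%
    print(f"{cable_share:.0%} allocated to cable yarding")          # ~21%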
HARVEST SYSTEM ALLOCATION
Harvest system allocation is dictated primarily by slope
gradient and tolerance to the aforementioned environmental
hazards. Slope gradient limitations on ground-based equipment are imposed based on a combination of production,
environmental, and safety reasons (Conway 1982). For the
aerial systems, maximum operable slopes are imposed predominantly for the safety of forest workers. Maximum tolerable ratings for soil erosion, soil compaction, and debris
slide hazards are imposed based on the potential for adverse
impacts associated with the different harvesting systems. The
default classification scheme used by the model is contained
in Table 6. When two or more systems are deemed appropriate, the model defaults to the least expensive alternative. For
the purposes of this modeling effort, the wheeled skidder
system is considered the least expensive alternative, followed
by the track skidder, cable, then helicopter systems.
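A minimal sketch of this allocation rule (ours, not the model's code; slope limits and tolerances are the Table 6 defaults, with hazard classes ordered Lower < Moderate < Higher and systems scanned from least to most expensive):

    RANK = {"Lower": 0, "Moderate": 1, "Higher": 2}

    # (system, min slope %, max slope %, erosion/compaction/debris tolerance)
    SYSTEMS = [
        ("Wheeled Skidder", 0, 30, "Lower", "Moderate", "Lower"),
        ("Track Skidder",   0, 45, "Lower", "Moderate", "Moderate"),
        ("Cable",          15, 150, "Higher", "Higher", "Higher"),
        ("Helicopter",      0, 150, "Higher", "Higher", "Higher"),
    ]

    def allocate(slope_pct, erosion, compaction, debris):
        for name, lo, hi, tol_e, tol_c, tol_d in SYSTEMS:
            in_slope_range = lo <= slope_pct <= hi
            tolerated = (RANK[erosion] <= RANK[tol_e] and
                         RANK[compaction] <= RANK[tol_c] and
                         RANK[debris] <= RANK[tol_d])
            if in_slope_range and tolerated:
                return name
        return None  # no system suitable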
In addition to slope gradient and hazard tolerance, yarding distance and deflection are also factored into cable system allocation. While a number of different cable system
configurations exist, the model assesses the suitability of a
single-span system with a default maximum yarding distance of approximately 450 meters. To ensure adequate load-carrying capacity, the algorithm for cable system suitability requires that a mid-span deflection of at least 5% be attainable given the shape of the terrain and a yarder tower
and tailhold of 18 meters and 2 meters, respectively.
1. Efforts to mitigate soil erosion and soil
compaction will have to be considered for over
50% of the forest.
2. The hazard of debris slide occurrence is low
for most of the forest; however, a few locations
will require detailed field investigation.
3. A majority of the forest can be harvested using
ground-based systems, however, approximately
20% will most likely require the use of a cable
system.
The coarseness of the algorithms is a function of
the model’s intended use and its reliance on datasets
readily available to the public. The intended use of the
model output is to supplement the planning of timber
harvests at the strategic and tactical levels. The model
is not intended to serve as an operational, site-specific
guide for forest management activities. For example, it
would be inappropriate to use the hazard and harvesting
system allocation maps to delineate harvesting or site
treatment boundaries without conducting detailed field
analyses. With respect to data requirements, the model
was designed to widely distributed datasets that were
readily available to the public. As such, parameter
selection is limited to variables that can be obtained
from these readily available datasets. Though limited
to the strategic and tactical phases, the model provides
a quick first approximation of harvesting system
requirements and can assist planners and managers in
the prioritization of detailed hazard inspection.
The value of any model, spatial or nonspatial, is
often assessed through verification and validation.
Verification is a subjective assessment of the internal
logic used by a model, given its intended purpose (Brady
and Whysong 1999). With respect to verification, the
protocol and default parameter values used by the model
are based primarily on published research. Given the
intended use and scale of model application, the protocol,
algorithms, and data used by the model are believed to
be more than adequate. Validation is an objective test
of model behavior and performance. Because the hazard
RESULTS AND DISCUSSION
The analysis on the Fishburn Forest was conducted using
elevation data obtained from the Blacksburg and Radford
North 10-meter USGS 7.5-minute DEMs and soils data from
the Montgomery County, VA SSURGO dataset. Tables 7
and 8 contain tabular results pertaining to the relative hazard assessments and harvest system allocation, respectively.
Figure 2 contains spatial output depicting soil erosion hazard, soil compaction hazard, debris slide hazard, and harvest system allocation. Even with the conservative approach
taken by the model, only a small portion of the forest was
assigned Kffact values indicative of greater potential erosion.
Specifically, 24 hectares were assigned a Kffact > 0.35 and
were subjected to the more restrictive slope gradient ranges
described in the erosion hazard assessment protocol outlined
in Table 1. With respect to soil compaction hazard, all but 5
hectares were observed to have higher soil strengths as dictated by their Unified Soil Group designations.
However, due to the influence of slope gradient, a good
portion of the higher strength soils was assigned a relative
soil compaction hazard of moderate.
Although the model generates relatively precise tabular
and spatial information, care must be taken in the interpretation and use of output. The purpose of the model is to
serve as decision support tool during the strategic and tactical phases of forest management planning, and the algo105
Figure 2. Model output depicting relative soil erosion hazard, relative soil compaction hazard, relative debris slide hazard,
and harvest system allocation for the Fishburn Forest (classification schemes in black-and-white reproductions of model
output are difficult to discern due to the hillshade effect used to convey topographic information).
106
assessments are qualitative (lower, moderate and higher
hazard), validation will most likely take the form of
sensitivity analyses, the results of which could vary
significantly depending on the terrain characteristics of
the study area. The flexibility built into the design of
the model with respect to the ability to manipulate key
parameter values and select datasets of varying scale
and resolution greatly facilitates the user’s ability to
conduct sensitivity analyses. Analyses can easily be
conducted to determine the sensitivity of the hazard
assessments to perturbations in parameters values and
to the use of datasets possessing different scales and
resolutions. Similar types of sensitivity analyses could
be conducted on the harvesting system allocation
component of the model.
Sensing and the American Congress on Surveying and Mapping. San Francisco, CA.
Brady, W.W. and G.L. Whysong. 1999. Modeling. P. 293-324
in GIS solutions in natural resource management: Balancing the technical-political equation. S. Morain (ed.). OnWord
Press, Santa Fe, NM.
Cernica, J.N. 1995. Geotechnical engineering: Soil mechanics. John Wiley & Sons, Inc, New York, NY. 453 p.
Conway, S. 1982. Logging practices: Principles of timber harvesting systems. Miller Freeman Publications, Inc., San
Francisco, CA. 416 p.
Davis, C.J. and T.W. Reisinger. 1990. Evaluating terrain for
harvesting equipment selection. Journal of Forest Engineering 2(1): 9-16.
CONCLUSIONS
Dietrich, W.E., C.J. Wilson and S.L. Reneau. 1986. Hollows,
colluvium, and landslides in soil-mantled landscapes. P.
361-388 in Hillslope Processes. A. D. Abrahams (ed.). Allen
and Unwin, Boston, MA.
Information technologies such as Geographic Information Systems (GIS) have long been used to assist natural
resources planning and similar models to the one presented
herein have been developed (Bobbe 1987, Davis and
Reisinger 1990). Existing models, however, do not specifically address the hazards associated with steep terrain, and
their use is often limited by the need for specialized data.
Acquiring the necessary spatial data is one of the biggest
limitations in the modeling of complex natural phenomena.
Database development typically constitutes a major expenditure with respect to both time and financial resources, often consuming up to 80% of a project’s budget (Antenucci,
et al. 1991, Green 1999). GIS models designed to utilize
publicly available spatial data, such as the steep terrain harvesting risk assessment model presented in this research,
free up resources that would otherwise be needed for data
acquisition and are accessible to a wide audience of users.
Gibson, H.E. and C.J. Biller. 1975. A second look at cable logging in the Appalachians. Journal of Forestry 73(10): 649653.
Green, K. 1999. Development of the spatial domain in resource
management. P. 5-15 in GIS solutions in natural resource
management: Balancing the technical-political equation. S.
Morain (ed.). OnWord Press, Santa Fe, NM.
Krag, R., K. Higginbotham and R. Rothwell. 1986. Logging
and soil disturbance in southeast British Columbia. Canadian Journal of Forest Research 16(6): 1345-1354.
Manwaring, J.C. and G.A. Conway. 2001. Helicopter logging
in Alaska – surveillance and prevention of crashes. P. 9-20
in Proc. of the International Mountain Logging and 11th
Pacific Northwest Skyline Symposium. P. Schiess and F.
Krogstad (eds.). Seattle, WA.
LITERATURE CITED
Adams, P.W. 1998. Soil Compaction on Woodland Properties.
Oregon State University Extension Service. 8p.
Martin, C.W. 1988. Soil disturbance by logging in New England—review and management recommendations. Northern Journal of Applied Forestry 5(1): 30-34.
Antenucci, J.C., K. Brown, P. Croswell, M. Kevany and H. Archer. 1991. Geographic Information Systems: A guide to
the technology. Van Nostrand Reinhold, New York, NY. 301
p.
Miller, J.H. and D.L. Sirois. 1986. Soil disturbance by skyline
yarding vs. skidding in a loamy hill forest. Soil Science
Society of America Journal 50(6): 1579-1583.
Bobbe, T.J. 1987. An application of a geographic information
system to the timber sale planning process on the Tongass
National Forest - Ketchikan area. P. 554-562 in Proc. of the
GIS ’87 - San Francisco: Second International Conference,
Exhibits and Workshops on Geographic Information Systems. American Society for Photogrammetry and Remote
Natural Resources Conservation Service. 1995. State Soil Geographic (STATSGO) Data Base Data Use Information.
Natural Resources Conservation Service. 1998. National Forestry Manual.
107
Rice, R.M., J.S. Rothacher and W.F. Megahan. 1972. Erosional
consequences of timber harvesting: an appraisal. P. 321-329
in Proc. of the Watersheds in Transition Symposium. American Water Resources Association, Urbana, IL.
Toy, T.J., G.R. Foster and K.G. Renard. 2002. Soil erosion: Processes, prediction, measurement and control. John Wiley
and Sons, Inc., New York, NY. 338 p.
U.S. Geological Survey. 1987. Digital Elevation Models Data
User’s Guide 5. U.S. Department of the Interior, USGS. 38
p
Schnepf, C. 2002. Prevent forest soil compaction - designate
skid trails. UI Extension Forestry Information Series, Forest Management No. 8. 1 p.
Virginia Department of Forestry. 2002. Virginia’s Forestry Best
Management Practices for Water Quality. 216 p.
Shaw, S.C. and D.H. Johnson. 1995. Slope morphology model
derived from digital elevation data. in Proc. of the Northwest ARC/INFO Users Conference. Coeur d’ Alene, ID.
Washington State Forest Practices Board. 2000. Washington
Forest Practices Board Manual (Section 16) - Guidelines
for evaluating potentially unstable slopes and landforms.
Washington State Department of Natural Resources, Forest
Practices Division, Olympia, WA.
Sloan, H. 2001. Appalachian Hardwood Logging Systems;
Managing Change for Effective BMP Implementation. in
Proc. of the 24th Annual Meeting of the Council on Forest
Engineering. J. Wang, M. Wolford and J. McNeel (eds).
Snowshoe, WV.
Wu, W. and R.C. Sidle. 1995. A distributed slope stability model
for steep forested basins. Water Resources Research 31(8):
2097-2110.
108
Use of the Analytic Hierarchy Process to Compare
Disparate Data and Set Priorities
ELIZABETH COULTER AND JOHN SESSIONS
Abstract: Given the promise of more and better data, both physical and biological, the question of how to use it for decision
making still remains. The Analytic Hierarchy Process (AHP) may be useful. AHP is a technique that is used to compare
alternatives based upon a number of criteria that may not be directly comparable. The AHP involves structuring problems as
a hierarchy, completing pairwise comparisons between attributes to determine user preferences, and using these comparisons
to calculate weightings for each of the individual attributes. The major strength of the AHP is that it allows attributes measured on different scales (such as length, area, and categorical variables) to be compared. The utility of AHP will be demonstrated using one or more examples.
INTRODUCTION

As the ability to gather more and better data increases, the challenge becomes one of determining how to use this information to make better, more informed decisions. Often these data are physical and biological, quantitative and qualitative, and measured on many different scales. Additionally, in many cases science has not determined quantifiable relationships between cause and effect, leaving the decisions up to professional judgment. Multi-Criteria Decision Making (MCDM) is a field of theory that deals with analyzing problems based on a number of criteria or on a number of attributes (also called Multi-Attribute Utility Theory, or MAUT).

Many MCDM techniques exist, such as goal programming and combinatorial optimization. However, these techniques have several drawbacks. For example, the weights placed on the individual attributes being compared, such as acres harvested, tons of sediment, and dollars of net present value, are required to serve two purposes: first, to make the variables measured on different scales comparable, and second, to adjust the relative importance of each variable to the problem.

An alternative MCDM method, the Analytic Hierarchy Process (AHP), is presented here. AHP is not a new technique, but it is a model that has not been widely applied in natural resource situations and deserves a broader audience, as it is well suited to many problems faced in forestry and natural resource management. This paper will discuss AHP methodology in general and give examples of its use in natural resource management situations.

ANALYTIC HIERARCHY PROCESS

The Analytic Hierarchy Process (AHP) was originally developed in the mid-1970s by Thomas L. Saaty (Saaty 1977) and has been used widely in many fields such as business and operations research. The AHP involves the following three basic steps:
• Structuring problems as a hierarchy;
• Completion of pairwise comparisons between attributes to determine the user's preferences; and
• Weighting of attributes and calculation of priority.
Structuring Problems as a Hierarchy
AHP requires that problems be structured hierarchically
so that the overall goal is represented at the top and the
individual alternatives to be compared form the base of the
hierarchy. Between these two levels are one or more layers containing the attributes on which the alternatives will be compared.
For example, consider a problem where traffic is to be
routed through a network based on minimizing total transportation costs, represented by monetary costs (distance),
and environmental costs related to unstable roads (various
slope stability factors). This problem could be represented
with the hierarchy in Figure 1.
[Figure 1 (hierarchy diagram): the goal "Minimize total transportation cost (including environmental costs)" at the top; the attributes Path Length (m), Upslope Contributing Area (m2), Mean Hillslope Angle (deg.), and Surface Type (Gravel, Paved, Dirt) in the middle; the alternatives Path 1 through Path 6 at the base.]

Figure 1: The example problem presented as a hierarchy.
Pairwise Comparisons
Pairwise comparisons are made between each of the attributes to be compared, based on the contribution of each attribute to the overall goal (the highest level of the hierarchy). Comparisons use the one-to-nine scale shown in Figure 2, termed the fundamental scale, where one signifies equal importance between the attributes and nine is used when one attribute is extremely more important than the other attribute. Reciprocals are used to express the strength of the weaker of the two attributes. For example, if A is 7 times more important than B, then B is 1/7 as important as A. The results from the pairwise comparisons form a positive reciprocal matrix, as shown in Figure 3. AHP does not require that the user be rational or consistent in completing these pairwise comparisons.
The original version of AHP required that the user also
complete pairwise comparisons for each attribute of each
alternative being compared (Saaty 1980), termed relative
scaling. Relative scaling is an acceptable method if fewer
than seven alternatives are being compared. If the problem
becomes larger than this the comparisons between all possible alternatives become unwieldy. Another approach is to
use an absolute scaling method where each alternative is
scaled against an “ideal” alternative, often chosen as the
largest alternative available. Depending on the specific nature of the problem, this relative value can be assigned linearly as a proportion of the largest value present or based
on some other non-linear function (Weich 1995).
Weighting of Attributes and Calculation of Priority

Various methods for calculating attribute weights from the pairwise comparison matrix have been proposed. Saaty (1977, 1980) calculates the principal right eigenvector of the positive reciprocal matrix, while others (Lootsma 1996) have used the normalized geometric mean of the rows of the priority matrix. The geometric mean method is simpler and has not been conclusively shown to be inferior to the eigenvector method. For our example, the priority vector would contain the following weights: Distance 0.5876, Upslope Contributing Area 0.2230, Mean Hillslope Angle 0.1591, and Surface Type 0.0402. Distance received the highest priority value, meaning the user in this case feels that distance is the attribute that contributes most to minimizing overall transportation costs.
The priority of one route as compared to another (Pn) is the product of the attribute weight and the relative attribute value, summed across all attributes for each route. For the example here, this can be written as:

Pn = 0.5876*dn + 0.2230*an + 0.1591*sn + 0.0402*tn

where:
Pn = relative priority of route n
dn = relative distance for route n
an = relative upslope contributing area for route n
sn = relative mean hillslope angle for route n
tn = relative surface type value for route n
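These two steps, normalized geometric-mean weights from the pairwise matrix and an additive priority score, are easy to reproduce. The short Python sketch below is illustrative only; it is not the authors' implementation, and the geometric-mean weights only approximate the eigenvector-based values quoted above:

# Sketch of the geometric-mean weighting and additive priority steps of AHP.
# The matrix and attribute values follow Figures 3 and 6; names are illustrative.
import math

# Pairwise comparison matrix (rows/cols: distance, upslope area, slope angle, surface type).
A = [
    [1,     5,   3,   9],
    [1/5,   1,   3,   5],
    [1/3, 1/3,   1,   7],
    [1/9, 1/5, 1/7,   1],
]

# Normalized geometric mean of each row approximates the principal eigenvector.
gm = [math.prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(gm) for g in gm]
print([round(w, 3) for w in weights])  # approx. [0.578, 0.223, 0.159, 0.040]

# Relative attribute values for route 1 (from Figure 6): distance, area, angle, surface.
route1 = [0.30, 0.33, 0.60, 0.08]
priority = sum(w * v for w, v in zip(weights, route1))
print(round(priority, 2))  # approx. 0.35, matching Figure 6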
Intensity of Importance   Definition                                Explanation
1                         Equal importance                          Two activities contribute equally to the objective
2                         Weak
3                         Moderate importance                       Experience and judgment slightly favor one activity over another
4                         Moderate plus
5                         Strong importance                         Experience and judgment strongly favor one activity over another
6                         Strong plus
7                         Very strong or demonstrated importance    An activity is favored very strongly over another; its dominance demonstrated in practice
8                         Very, very strong
9                         Extreme importance                        The evidence favoring one activity over another is of the highest possible order of affirmation

Figure 2: The Fundamental Scale used for pairwise comparisons in AHP.
                Distance   Upslope Area   Slope Angle   Surface Type
Distance        1          5              3             9
Upslope Area    1/5        1              3             5
Slope Angle     1/3        1/3            1             7
Surface Type    1/9        1/5            1/7           1

Figure 3: Matrix of pairwise comparisons used in the example problem.
Route   Distance (m)   Upslope Contributing Area (m2)   Mean Hillslope Angle (degrees)   Surface Type
1       600            1,000,000                        45                               Paved
2       1200           3,000,000                        20                               Gravel
3       800            2,500,000                        15                               Gravel
4       1500           50,000                           50                               Dirt
5       2000           100,000                          10                               Dirt
6       900            150,000                          75                               Gravel

Figure 4: Example route data.
          Dirt   Gravel   Paved   Normalized Geometric Mean   Attribute Value
Dirt      1      3        9       0.66                        1.00
Gravel    1/3    1        7       0.29                        0.44
Paved     1/9    1/7      1       0.05                        0.08

Figure 5: Using AHP to determine Surface Type relative attribute values.
Route   Distance (m)   Upslope Contributing Area (m2)   Mean Hillslope Angle (degrees)   Surface Type   Preference Value   Rank
1       0.30           0.33                             0.60                             0.08           0.35               1
2       0.60           1.00                             0.27                             0.44           0.63               5
3       0.40           0.83                             0.20                             0.44           0.47               3
4       0.75           0.02                             0.67                             1.00           0.58               4
5       1.00           0.03                             0.13                             1.00           0.65               6
6       0.45           0.05                             1.00                             0.44           0.45               2

Figure 6: Example results using AHP to prioritize transportation routes based on minimizing total transportation costs, both economic and environmental.
Example

Let us assume we have the six routes shown in Figure 4 to compare. Because of the widely varying scales used, it would be difficult to use these values as they are. Instead, each value needs to be reduced to a relative value. For this example, we will assign attribute values for each route as a percentage of the largest value for each attribute overall. This will produce values between zero and one, with larger values being the more "expensive" values, or those that lead to a greater increase in the total transportation cost.

For the Surface Type attribute a different approach must be taken. Here, we can either assign values between 0 and 1 for each surface type or we can use pairwise comparisons between the surface types to determine weighting values. Figure 5 shows a matrix of pairwise comparisons for Surface Type. The last two columns of Figure 5 give the normalized geometric mean of the rows as well as the value that will be used for the attribute values. Either of these values could be used; however, to be consistent, all other attribute values are decimal percentages of the largest attribute value present, and therefore this is the value that should be used.

It is important to remember the "direction" of the problem. For this example, the higher the value, both for the relative attribute values and the attribute weights, the larger the contribution to the overall cost of transportation. Therefore, lower values, both of attribute values and, later, of overall priority values, indicate the least costly, or more preferred, options. Problems can be worked in either "direction", but care must be taken to be consistent throughout the problem formulation, implementation, and interpretation.

Figure 6 shows the relative attribute values for the example problem, the total priority values, and the relative ranked preference for each route.
CONCLUSION

The major strength of the AHP is that it allows attributes measured on different scales to be compared (Saaty 1980). This is especially important in this problem, where values such as meters of distance, square meters of upslope contributing area, degrees of hillslope, and a categorical surface type must be compared in order to arrive at an overall priority for each proposed route. AHP also forces the user to make explicit the values used in decision making (Keeney 1988) and is useful in situations where the quantification of cause and effect relationships is left up to professional judgment.
This paper has presented a brief overview of AHP methodology and an example demonstrating the technique's usefulness in comparing alternatives with multiple criteria measured on different scales, in a case where the relative contribution of each attribute toward the objective is left to professional judgment.
LITERATURE CITED

Keeney, R.L. 1988. Value-driven expert systems for decision support. Decision Support Systems. 4:405-412.

Lootsma, F.A. 1996. A model for the relative importance of the criteria in the Multiplicative AHP and SMART. European Journal of Operational Research. 94:467-476.

Saaty, T.L. 1977. A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology. 15:234-281.

Saaty, T.L. 1980. The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation. McGraw-Hill, New York. 287 p.

Weich, B.G. 1995. Analytic hierarchy process using Microsoft Excel. In Engineering. California State University, Northridge. p. 24.
Use of Spatially Explicit Inventory Data for Forest Level
Decisions
BRUCE C. LARSON AND ALEXANDER EVANS
Abstract: Society is demanding that forest managers produce more spatially complex forests, even at the within-stand scale. Harvest techniques for complex even-age management have taken on a variety of names, such as partial cutting, green tree retention, and partial overstory removal. Traditional growth models relying on stand averaging techniques are often imprecise estimators of timber growth in these situations because many growth processes are non-linear and would require a uniform pattern of leave trees. Likewise, forest and landscape descriptions are less reliable predictors of non-timber values if the forest is viewed as a pattern of discrete polygons (stands) instead of a smaller grain-sized mosaic of different sized trees, especially in mixed species stands.

Most inventory systems now include a GPS location for each plot. These data can be used in a raster-based GIS to give a finer grain analysis of the forest. Information from each plot can be interpolated to give a smooth interpretation of variable values across the forest. Almost any variable or combination of variables can be used; examples are basal area or volume, either in total or for different species. Crown cover and downed wood volumes are examples of other, non-timber values that can be depicted.

Most of our existing forest management quantitative tools were designed when desktop computational power was much more limiting. New tools will have to be written such that forests and even stands can be depicted in a much more precise manner. High precision data management and analysis will be the result of shifting computational paradigms. Much less averaging and use of representative stands will result. It is doubtful that new tools will replace the need for existing models; several models will be used in concert to make decisions.

Early indications are, as is to be expected, that if stand age is the driving variable for all others and the primary disturbance in the forest is clearcutting, then traditional stand polygons are a more accurate representation of the forest. However, in many other situations, stand-averaged polygons will obscure the variation that forest managers are trying to create.
Elements of Hierarchical Planning in Forestry: A Focus on the
Mathematical Model
S. D. PITTMAN
Abstract: The hierarchical approach to forest management has been advanced as an integrated method for constructing large-scale forest plans. While the planning process functions within a hierarchical construct, the mathematical models describing the plan also have a multi-level structure. Two models that consistently appear in multi-level planning are mathematical programs with block angular structure, also referred to as hierarchical production planning problems, and the hierarchical optimization problem. Depending on the conceptual model of the planning venture, each of these mathematical models is a possible realization. The implications of these modeling formulations are discussed within the context of the hierarchical approach to forest planning.
Update Strategies for Stand-Based Forest Inventories
STEPHEN E. FAIRWEATHER
Abstract: Stand-based forest inventories are typically kept current with a combination of cruising, growth modeling, and
adjustments to represent harvest activity. At any point in time the inventory will have some stands with recent cruise data,
some stands which have never been cruised but carry estimates for the stratum they belong to, and some stands which were
cruised some time ago and have been grown each year using a growth model.
There are many strategies for keeping the inventory up to date. For example, the entire ownership may be cruised at one
point in time, and then grown and depleted annually until the ownership is cruised again. Or, cruising may be an ongoing
annual activity, such that a different portion of the ownership is cruised each year. Each strategy has advantages and disadvantages in terms of costs, how accurately it will portray the true inventory at any point in time, how accurately individual stand
volumes will be portrayed, and the degree to which the current inventory estimate will change from one year to the next simply
as an artifact of the updating system.
This paper defines the problem and presents a simulation model for evaluating different update strategies. The model
allows the user to study the impact of update strategy and several sources of estimation error on the accuracy of the inventory
estimates.
DEFINITION OF THE PROBLEM

In a stand-based forest inventory system, the stand is the basic unit of inventory. As such, the sum of all the individual stand inventories at one particular point in time constitutes the inventory for the entire ownership.

At any point in time the inventory estimate for any particular stand may be established in any of three ways:

• The stand may have an estimate based on a cruise of that stand in the current year;
• The stand may have an estimate based on a past cruise that has been grown, with a growth model, to the current year;
• The stand may have an estimate which is essentially the average for the stratum that the stand belongs to, where the average is based on the stands in the stratum which have been cruised either in the current year or in the past.

As the forest-wide inventory is maintained over time, the question of an appropriate "update strategy" will eventually have to be considered. For example, should the strategy be to cruise every stand, every year? Or, should the strategy be to cruise all of the stands at one time, grow them ahead each year with a growth model, and then recruise all of them ten years later? Or, perhaps it would be better to cruise some of the stands every year, such that each stand gets cruised every ten years, but not all stands are cruised at the same time. Each of these update strategies has advantages and disadvantages, and the selection of the proper strategy is not always clear.

GOALS FOR A STAND-BASED INVENTORY

There are three goals for a stand-based forest inventory that will help to define criteria for evaluating alternative update strategies. The goals are:

1. Provide an accurate estimate of the total forest inventory at any point in time. This is necessary to facilitate valuations and appraisals.

2. Provide accurate volume estimates at the stand level to support on-the-ground operations. It is particularly important for the system to provide inventory estimates that are close to removal volumes when a stand is actually harvested; the "cutout", or the ratio of the inventory estimate to the harvest volume, should be close to 100%. If the cutout routinely runs much differently than that, the confidence of the field foresters in the inventory system will quickly erode, and their lack of support for the system will place it in jeopardy.

3. Minimize the frequency and magnitude of year-to-year changes in the inventory estimate, both for the ownership and for individual stands, where such changes are an artifact of the estimation system being used. For example, inventory foresters recognize the possibility of two well designed and well executed cruises in subsequent years in a single stand suggesting a decrease in stand volume, when in fact the stand has been growing steadily in volume, simply due to random chance. By the same token, two cruises may suggest an increase in volume from one year to the next that is beyond reasonable expectations of growth, again due to random chance. At the larger scale, a current inventory established by cruising every stand ten years ago and growing each stand to the current point in time may show an alarming decrease in total volume when the ownership is cruised again, perhaps because the growth model was biased high, or perhaps because either cruise was not conducted carefully. Inventory foresters may understand how this could happen, but the folks in timberland accounting, who are used to thinking in terms of changes in the inventory due to growth, depletions, and changes in the land base, will be very uncomfortable with changes due to "better information".

THE SIMULATION MODEL

We have developed an easy-to-use simulation model in MS Excel to let the user evaluate a range of inventory update strategies. The model lets the user do the following:

1. Define a forest of 20 stands in terms of the actual (true) inventory in each stand in each year of an 11-year period, and the acres for each stand. It is helpful to think of the 20 stands as constituting a single stratum (cover type, or photo-interpreted type) in the ownership.

2. Define an actual (true) annual growth rate for each of the 20 stands.

3. Describe the inventory update strategy to be evaluated. In each year of the 11-year period, the inventory estimate for a stand will be based either on a cruise of that stand, a past cruise that has been grown with a growth model, or a stratum average for the other stands which have either been cruised in that year or grown to that year from a past cruise. Figure 1 illustrates the characterization of one particular update strategy.

4. Define the number of plots that will be used to cruise any particular stand. The number of plots is calculated based on the expected CV (coefficient of variation) for volume, the allowable error, and the confidence level. In small stands, the user can specify a minimum number of acres per plot which will override the number of plots from the sample size calculation. The model uses the number of plots and the CV to randomly generate errors in the cruise estimate, where an error is defined as the difference between the cruise estimate and the true value of the stand inventory at that point in time.

5. Define a bias in the growth model being used to grow stands cruised in the past to the current point in time.

6. Define a cost per cruise plot. The model can then calculate the total cost of cruising in any year, and the total discounted cost of cruising for the 11-year period. The cost of using a growth model or applying a stratum average to uncruised stands is assumed to be inconsequential relative to cruising.

7. Conduct repeated applications of the update strategy, and collect the results over all replications.

The simulation model lets the user examine the impact of several sources of error in the inventory update process:

• Non-homogeneous stands within an inventory stratum. As the stand-to-stand variation in volume per acre in a stratum increases, the usefulness of applying a stratum average to individual uncruised stands decreases.
• Sampling error in cruise estimates for individual stands.
• Growth model prediction error (bias).
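The random mechanism at the core of the model is compact. The following Python fragment is an illustrative re-implementation, not the authors' Excel workbook, of one replication of a strategy in which every stand is cruised every year, with each cruise error drawn from the plot count and CV as described above:

# Minimal sketch of one replication: a cruise estimate is the true stand volume
# plus a random sampling error whose standard error follows from the CV and the
# number of plots. Illustrative only; parameter values follow the example forest.
import random

CV = 0.50       # assumed coefficient of variation of volume
N_PLOTS = 18    # plots per cruised stand, from the sample-size calculation

def cruise(true_volume, n_plots=N_PLOTS, cv=CV):
    """Simulate an unbiased cruise with standard error = CV / sqrt(n)."""
    se = cv / n_plots ** 0.5
    return random.gauss(true_volume, se * true_volume)

def simulate_every_year(true_volumes, growth_rate=0.039, years=11):
    """Cruise every stand every year; return the yearly totals of the estimates."""
    totals = []
    for _ in range(years):
        totals.append(sum(cruise(v) for v in true_volumes))
        true_volumes = [v * (1 + growth_rate) for v in true_volumes]
    return totals

random.seed(1)
# 20 stands of roughly 32 acres at 3,040 units per acre, per the example forest.
print([round(t) for t in simulate_every_year([3040 * 32] * 20)[:3]])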
SIMULATION RESULTS
We simulated five update strategies for purposes of illustration. Our forest consisted of 20 stands ranging in size
from 10 to 56 acres, averaging 32 acres. The volume in the
stands averaged 3,040 units per acre, and ranged from 2,500
to 3,700. The growth rates varied from 2.8 to 4.5%, and
averaged 3.9% overall.
For any stand selected to be cruised, we specified an allowable error of +/- 20% at the 90% confidence level. At an
assumed coefficient of variation of 50%, the resulting sample
size was 18 plots per stand. We also specified that in any
stand there would be no more than 1 plot in 2 acres. Therefore, cruised stands with less than 36 acres ended up with 1
plot for every 2 acres, and stands with more than 36 acres
were cruised with 18 plots.
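As a check on the plot count, the standard cruise sample-size formula n = (t x CV / AE)^2 reproduces this figure; the exact t-value the authors used is not stated, but t = 1.70 yields 18 plots (a hedged illustration):

# Sample-size check for the cruise specification: allowable error of +/-20% at
# the 90% confidence level with an assumed CV of 50%. Illustrative only.
def plots_needed(cv_pct, allowable_error_pct, t=1.70):
    return (t * cv_pct / allowable_error_pct) ** 2

print(round(plots_needed(50, 20)))  # -> 18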
For this set of simulations we assumed the growth model
was “perfect”, i.e., the true growth percent for each stand
was predicted without error.
We assumed a cost per plot of $30, and discounted total cruising costs each year at a rate of 8%. We assumed that the costs of growth modeling and/or using stratum averages (expanding) were negligible compared to the cost of cruising.
We collected results for 100 replications of each of the
following update strategies:
A. Cruise every stand, every year. This strategy required no expanding and no growth modeling, so the only source of error would be sampling error in the cruising.

B. Cruise every stand at one time, on a 5-year interval; use growth modeling to update the stands until they are cruised again.

C. Cruise every stand at one time, on a 10-year interval; use growth modeling to update the stands until they are cruised again.

D. Cruise 10% of the stands each year, such that each stand is cruised on a 10-year interval; expand the stratum average to uncruised stands each year. No growth modeling is used.

E. Cruise 10% of the stands each year, such that each stand is cruised on a 10-year interval; expand the stratum average to uncruised stands in the first year only, and then grow all stands with the growth model until they are cruised again. This update strategy is illustrated in Figure 1.

The results of the first update strategy are shown in Figure 2. The first graph shows the average inventory estimate, over all stands, was equal to the true inventory in each year of the simulation, as would be expected if our cruising (sampling methodology) was unbiased. The graph also shows the range in inventory estimates each year. In 1999, for example, there was an estimate of the total inventory that was approximately 17% less than the true value, which might be surprising given that all the stands were cruised in that year, and every year. The second graph in Figure 2 displays how often the inventory estimate, over all stands, decreased from one year to the next. For example, in 1999, in 30 replications out of 100, the estimate of inventory was less than it was in 1998.

Figure 3 illustrates the results of update strategies B and C. Cruising the entire ownership on either a 5-year or 10-year interval, and using the growth model to update stand inventories between cruises, resulted in unbiased estimates with a range of errors no greater than was experienced when every stand was cruised in every year. The rate of decreasing inventory estimates was still between 20 and 30%, but at least this would be experienced only once every 5 or 10 years.

Figure 4 illustrates the results of update strategies D and E. Both strategies cruised 10% of the stands each year, such that each stand was cruised every 10 years. In strategy D, the average of the cruised stands was expanded to the uncruised stands in each year, resulting in high rates of decreasing inventory estimates. In strategy E, the average of the cruised stands was expanded to the uncruised stands in only the first year of the sequence; after that, each stand was grown with the growth model until it was cruised. Once a stand was cruised, its inventory was restated, and then grown from there. Strategy E appears to be unbiased, and the variability in the inventory estimates stabilizes enough by the year 2000 (i.e., 10 years into the cycle) to be as precise as any of the other strategies. Strategy E, however, displays a large advantage over strategy D in terms of minimizing the frequency of decreases in the inventory estimate from one year to the next.

Figure 5 compares the accuracy of the update strategies on an individual stand basis in the year 2000. The year 2000 was selected as the benchmark year because by that time every stand has been cruised at least once in each strategy. "RMSE" is the root mean squared error, defined as the square root of the sum of squared differences between the estimate of volume and the true volume for each stand, over 100 replications. A low RMSE would be preferred over a high RMSE. The graph shows no clear advantage of any particular strategy, but does show the tendency for stands with small areas to have the least accurate inventory estimates; the spikes in the RMSE values for stands I, N, and P correspond to the three smallest stands in the model forest.

The average discounted cruising costs for the five strategies are shown in Table 1. Given that all the strategies appeared to be unbiased, and there was no difference between strategies with regard to the accuracy of individual stand estimates (by the year 2000), the low cost for strategies D and E might make them more attractive than A, B, or C. Strategy E also offers a low rate of decreasing estimates from year to year. It might be selected as the preferred strategy depending on the analyst's willingness to accept the possibility of larger errors in the overall inventory estimate while the system is being established, i.e., in the early years of the process.
Stand   1991  1992  1993  1994  1995  1996  1997  1998  1999  2000  2001
A       3     2     2     1     2     2     2     2     2     2     2
B       3     2     2     1     2     2     2     2     2     2     2
C       3     2     2     2     2     2     2     2     1     2     2
D       3     2     2     2     2     2     2     2     1     2     2
E       3     2     2     2     2     1     2     2     2     2     2
F       3     2     2     2     2     1     2     2     2     2     2
G       3     2     2     2     1     2     2     2     2     2     2
H       3     2     2     2     1     2     2     2     2     2     2
I       3     2     2     2     2     2     1     2     2     2     2
J       3     2     2     2     2     2     1     2     2     2     2
K       3     1     2     2     2     2     2     2     2     2     2
L       3     1     2     2     2     2     2     2     2     2     2
M       3     2     2     2     2     2     2     2     2     1     2
N       3     2     2     2     2     2     2     2     2     1     2
O       3     2     2     2     2     2     2     1     2     2     2
P       3     2     2     2     2     2     2     1     2     2     2
Q       1     2     2     2     2     2     2     2     2     2     1
R       1     2     2     2     2     2     2     2     2     2     1
S       3     2     1     2     2     2     2     2     2     2     2
T       3     2     1     2     2     2     2     2     2     2     2

Figure 1. Illustration of how an update strategy is defined in the simulation model. Cells denoted with a "1" indicate cruises in that stand in the given year. Cells denoted with a "2" indicate the estimate of inventory for the stand in the given year will be based on growing the previous year's estimate with a growth model. Cells denoted with a "3" indicate the estimate for the stand in that year is the average (weighted by acres) of the estimates in that year for the other stands that have either been cruised ("1") or grown ("2"). This particular update strategy features cruising 10% of the ownership on a 10-year cycle. In the first year of the inventory, only two stands are actually cruised, and the other stands are "expanded" to, i.e., they take on the average of the two cruised stands. After the first year, all stands are either cruised or grown.
[Figure 2 consists of two graphs: "Cruise Every Stand, Each Year - No Growth Modeling, No Expansion," showing total inventory (MBF) for 1990-2002 as the true value with the average, low, and high estimates; and "Frequency of Decreases in Estimates of Total Inventory; Cruise Every Stand Each Year," showing frequency per 100 replications by year.]

Figure 2. Results of 100 replications of update strategy A. In this strategy every stand was cruised in every year.
[Figures 3 and 4 each consist of paired graphs per strategy: total inventory (MBF, 1990-2002; true value with average, low, and high estimates) and frequency of decreases in the total inventory estimate (per 100 replications), for "Cruise Every Stand, 5-Year Interval" and "Cruise Every Stand, 10-Year Interval" (both with growth modeling) in Figure 3, and for strategies D and E in Figure 4.]

Figure 3. Results of 100 replications of update strategies B and C.

Figure 4. Results of 100 replications of update strategies D and E.
[Figure 5 is a graph of RMSE (volume) in the year 2000 by stand (A-T) for update strategies A-E.]

Figure 5. Comparison of update strategies with regard to accuracy of individual stand inventory estimates in the year 2000.
Table 1. Average discounted cruising costs by update strategy.

Strategy   Description                                                         Cost
A          Cruise every stand, every year                                      $64,765
B          Cruise every stand on 5-year cycle                                  $18,008
C          Cruise every stand on 10-year cycle                                 $12,291
D          Cruise 10% of the stands each year; expand each year                $6,545
E          Cruise 10% of the stands each year; expand in the first year only   $6,545
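The cost column in Table 1 is a present-value sum of annual cruising costs. A hedged Python sketch follows; the stand areas and cruise schedule below are invented for illustration, so the output shows the mechanics rather than reproducing the Table 1 figures:

# Present value of cruising costs: $30 per plot, discounted at 8% per year.
# Stand areas and the cruise schedule here are illustrative only.
COST_PER_PLOT = 30.0
RATE = 0.08

def plots_for_stand(acres):
    """18 plots per cruise, capped at 1 plot per 2 acres in small stands."""
    return min(18, int(acres / 2))

def discounted_cost(acres_by_stand, cruise_schedule):
    """cruise_schedule maps year index (0-10) to the stands cruised that year."""
    total = 0.0
    for year, stands in cruise_schedule.items():
        cost = sum(plots_for_stand(acres_by_stand[s]) * COST_PER_PLOT for s in stands)
        total += cost / (1 + RATE) ** year
    return total

acres = {"A": 40, "B": 24}  # two illustrative stands
print(round(discounted_cost(acres, {0: ["A", "B"], 5: ["A", "B"]})))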
A simple variation on strategy E that might mitigate
the range in inventory estimates while the system is being
established would be to cruise the larger stands in the beginning years of the program. That is, while 10% of the
stands may be selected for cruising, they may account for
20% or 30% of the area. This idea, and the results of the
simulation, are shown in Figure 6.
In this particular case the results of the simulation in
Figure 6 indicate that the largest stands tended to have
more volume per acre, on average, than the rest of the
stands in the stratum, resulting in a biased inventory update strategy until most of the stands have been cruised.
Stand   1991  1992  1993  1994  1995  1996  1997  1998  1999  2000  2001
A       3     2     2     1     2     2     2     2     2     2     2
B       3     2     2     1     2     2     2     2     2     2     2
C       1     2     2     2     2     2     2     2     2     2     1
D       3     2     2     2     2     1     2     2     2     2     2
E       1     2     2     2     2     2     2     2     2     2     1
F       3     2     2     2     2     1     2     2     2     2     2
G       3     2     2     2     1     2     2     2     2     2     2
H       3     2     2     2     1     2     2     2     2     2     2
I       3     2     2     2     2     2     1     2     2     2     2
J       3     2     1     2     2     2     2     2     2     2     2
K       3     1     2     2     2     2     2     2     2     2     2
L       3     2     2     2     2     2     2     1     2     2     2
M       3     2     2     2     2     2     2     2     2     1     2
N       3     2     2     2     2     2     2     2     2     1     2
O       3     1     2     2     2     2     2     2     2     2     2
P       3     2     2     2     2     2     2     1     2     2     2
Q       3     2     2     2     2     2     2     2     1     2     2
R       3     2     1     2     2     2     2     2     2     2     2
S       3     2     2     2     2     2     1     2     2     2     2
T       3     2     2     2     2     2     2     2     1     2     2

[Figure 6 also includes two graphs for "Cruise 10% on 10-Year Interval, Use Growth Modeling, Expand in First Year Only, Cruise Largest Stands First": total inventory (MBF, 1990-2002; true value with average, low, and high estimates) and the corresponding frequency of decreases in the total inventory estimate (per 100 replications).]

Figure 6. Simulation results for an update strategy similar to strategy E, but concentrating the cruising on the largest stands in the stratum in the early years. Stands C, E, J, K, O, and R are the largest stands (by area) in the stratum.
The lower bound on the inventory estimate was improved somewhat over update strategy E, and the average discounted cruising cost for the new strategy only increased to $6,852. But the slight bias in the strategy underscores the importance of avoiding any relationship between stand size and volume per acre when stands are being assigned to strata.
CONCLUSION
This simple simulation model will be quite helpful in exploring different update strategies and the impacts of errors
attributable to cruise design, stratification, variability within
and between stands, and growth modeling. Future applications will use the growth modeling error control to understand the importance of calibrating the growth model to be
unbiased, i.e., to have 0% error.
A New Precision Forest Road Design and Visualization Tool:
PEGGER
LUKE ROGERS AND PETER SCHIESS
Abstract: By evaluating alternative routes in the office using a pegging routine, days or even weeks of valuable field time can be saved and, ultimately, a better design can emerge. Initial road design in forested landscapes often includes pegging roads on large-scale contour maps with dividers and an engineer's scale. An automated GIS-based road-pegging tool (PEGGER) was developed to assist in initial road planning by automating the road pegging process. PEGGER is an extension for the commonly available GIS software Arcview®. PEGGER imports topography as digital contours. The user identifies the origin of the new road and clicks in the direction they want to go, and PEGGER automatically pegs in road at a specified grade. Through the use of PEGGER, many alternatives can be quickly analyzed for alignment, slope stability, grades and construction cost using standard GIS functionality. The resulting cuts and fills are then displayed in ROADVIEW, a road visualization package for Arcview®.

This paper looks at the algorithm used, evaluates its usefulness in an operations planning environment and suggests additional methods which might be incorporated into PEGGER to further assist the forest engineer.
INTRODUCTION

A computer program is presented that automates initial forest road location through the use of a Geographic Information System and digital terrain data. Using PEGGER, forest planners can quickly analyze many road location alternatives and, by taking advantage of standard GIS functionality, evaluate environmental and economic opportunities.

BACKGROUND

Traditional methods for designing a forest road system consisted largely of aerial photo interpretation and field reconnaissance. More recently, forest engineers have used large-scale contour maps to select preliminary routes with dividers, a process known as route projection or "pegging". According to Pearce (1960), "Route projection is the laying out of a route for a road on a topographic map or aerial photo. The route defines the narrow strip of land within which the field preliminary survey is made." This trial and error method of initial paper-based road location has proven itself as a cost effective method for preliminary design and analysis by avoiding intensive field investigations.

With the overwhelming popularity of Geographic Information Systems (GIS) in natural resource management it is appropriate to explore opportunities to integrate traditional road design techniques into the GIS. With the availability of free 10-meter digital elevation data for the United States and the continually decreasing cost of LIDAR data it is possible to extend the road pegging technique to include a more detailed analysis.

EXISTING MODELS

While many road design packages exist (RoadEng, AutoCAD, F.L.R.D.S…), only one, ROUTES (Reutebuch 1988), has given the user the ability to quickly look at alternative road locations at varying scales. Traditional road design software relies on survey data collected in the field to generate terrain models and very detailed engineered road location and construction plans. Others have taken a more holistic approach and looked at optimization of road locations for a particular set of topographical, environmental or economical constraints (Xu 1996, Thompson 1988, Wijngaard and Reinders 1985, Cha, Nako and Watahiki 1991). All these programs have relied on a high degree of training on the part of the user, and few of the non-commercial packages have matured into easy-to-use software.

ROUTES was developed to automate the road pegging process. Using a large-scale contour map (1 in = 400 ft) and a digitizer, the user could digitize the contours and use the digitizer puck to locate the road. While the user interface was primitive, consisting of high and low pitch beeps from the digitizer puck to signal that the user was "on-grade", the program worked well and kept track of such things as grade, road length and stationing. ROUTES' reliance on a digitizer, its HP 9000 code base and the general lack of a graphical user interface (GUI) left the program without many users.

THE PEGGER PROGRAM

With the growing availability of LIDAR and IFSAR data, locating roads in the office is becoming a more realistic and practical exercise. Within the GIS framework many tools exist to locate geographic features, examine spatial relationships among natural elements and act as a foundation for a decision support system. Watson and Hill (1983) define a decision support system as an "interactive system that provides the user with easy access to decision models and data in order to support semi structured and unstructured decision making tasks." It is with the intention of providing an initial decision support system that PEGGER was developed.

PEGGER is an Arcview® GIS extension that automates the route projection ("road pegging") process for use by engineers and forest planners. PEGGER imports topography as digital contours, much like using a paper contour map. Standard tools available within Arcview GIS allow the user to import the contours from Shapefiles, ESRI coverages, AutoCAD dwg and dxf, and Microstation dgn files. In addition to importing data as digital contours, users can use the Arcview Spatial Analyst extension or other publicly available tools to convert USGS digital elevation models to contours.

One of the goals of the PEGGER project was to make the program as usable as possible for as many people as practical. One of the problems with technology is training users to use the software. Forestry professionals responsible for fieldwork have been slow to adopt new technology into their work, largely due to the complexity of the software and the time commitment of training. The PEGGER program was designed to avoid these common pitfalls, requiring no training, minimal setup time and a simplified user interface. Included with the software are a detailed help file and complete tutorial.

Once digital contours have been imported into Arcview the user must supply a few parameters: the road theme they would like to edit, the contour theme they would like to use, and confirmation of the detected contour interval. In addition to the contour and road themes the user can have any number of other layers available in the GIS, such as soils, slope classes, streams, wetlands, unstable slopes and property lines. The next step is to locate the desired beginning and/or end points of the new road given operational parameters. Using standard tools available in the GIS (ruler and identify) the user can estimate the necessary grade for the road.

To start a road the user shift-clicks on the location where they wish to begin and enters the desired grade. To "peg" the road the user only has to click in the general direction they wish to go in order to project the route into the GIS. Successive clicks peg in additional segments of road from contour to contour as fast as the user can press the mouse buttons. Grade changes can be accomplished by using the Roads pull-down menu or by right-clicking the mouse and selecting Increase or Decrease Grade.

If the road fails to reach the desired end point, the previously pegged segments can be quickly deleted and a new grade can be tried. This method of trial and error, which used to mean changing the divider spacing and erasing undesirable segments from the map, can now be accomplished in the GIS in a fraction of the time.

Figure 1 - The simple PEGGER interface in Arcview GIS.

ANALYTICAL DESCRIPTION

PEGGER works by identifying contour lines that meet a specific set of criteria. Every projected route segment must begin and end on a contour line. To project a segment the user enters a desired grade and PEGGER looks for a point on an adjacent contour line at a distance computed by:

d = ci / (g / 100)

where d = the distance,
ci = the contour interval, and
g = the desired grade (%).

NOTE: For pegging on paper maps, the distance would need to be multiplied by the map scale (i.e., 1/4800) to get the appropriate divider width.

If a point is found, a new route segment is created in the GIS. If a point is not found, the user is notified that the desired grade is not feasible and potential solutions are proposed. Unlike ROUTES, which allowed for a grade tolerance (plus or minus some tolerance), PEGGER gives an exact solution in the GIS. After a desirable route location has been found the user can attach the grade attributes to the route segments, merge the segments into one long road, or spline to smooth sharp corners (much like a finalized design).
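To make the projection step concrete, the following small Python sketch (illustrative only; PEGGER itself is an Arcview extension, and these names are not its code) computes the segment length implied by the formula above and scans an adjacent contour's vertices for a candidate end point:

# Sketch of one pegging step: given the contour interval and desired grade, a
# route segment must span d = ci / (g/100) ground distance to climb one contour.
import math

def segment_length(contour_interval, grade_pct):
    """Horizontal distance d = ci / (g/100) needed to climb one contour."""
    return contour_interval / (grade_pct / 100.0)

def find_point_on_contour(origin, contour_vertices, d, tol=0.5):
    """Return the first vertex of the adjacent contour roughly distance d away."""
    for x, y in contour_vertices:
        if abs(math.hypot(x - origin[0], y - origin[1]) - d) <= tol:
            return (x, y)
    return None  # no point at that distance: the desired grade is not feasible here

# A 10-meter contour interval at an 8% grade implies a 125-meter segment.
print(segment_length(10, 8))  # -> 125.0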
LIMITATIONS
The PEGGER program relies on digital topographic information to identify potential road locations. To be of value
to the forest professional, the topographic information must
accurately represent the actual ground conditions. Steve
Reutebuch noted about ROUTES that “the accuracy of the
30-meter (USGS) DEM’s available at the time were insufficient for accurate route projection.” With the availability of
10-meter digital elevation data and the current popularity
of LIDAR data, route projection has become more feasible
but discrepancy between the data and actual field conditions should be expected.
The PEGGER program is a tool for quickly identifying
possible route location alternatives given grades specified
by the user. The tool does not evaluate additional environmental and economic constraints that must be considered
by the forest professional such as soil types, hydrology, property lines and slope classes. The GIS provides a framework
where these analyses can be implemented but it is outside
the scope of the PEGGER program.
NEXT STEPS
In addition to providing quick alternative location analysis, PEGGER should be extended to include some additional
functionality. With greater availability of high resolution
digital elevation data it will be possible to identify a route
location or P-Line (preliminary location line) using
PEGGER and then “survey” the surrounding area for export into a road design package like ROADENG or
AutoCAD. This digital survey within the GIS can be used
ANALYTICAL DESCRIPTION
PEGGER works by identifying contour lines that meet a
specific set of criteria. Every projected route segment must
begin and end on a contour line. To project a segment the
user enters a desired grade and PEGGER looks for a point
on an adjacent contour line at a distance computed by:
d = ci / (g / 100)
where d = the distance,
ci = the contour interval, and
g = the desired grade.
NOTE: For pegging on paper maps, the distance would
need to be multiplied by the map scale (ie: 1/4800) to get
the appropriate divider width.
If a point is found, a new route segment is created in the
GIS. If a point is not found, the user is notified that the
desired grade is not feasible and potential solutions are proposed. Unlike ROUTES, which allowed for a grade tolerance (+/- some tol), PEGGER gives an exact solution in the
GIS. After a desirable route location has been found the user
Figure 2 - ROADVIEW visualization of a route located
with PEGGER.
129
to generate the topographic information and field notes necessary to do a complete design in the road design package.
The final L-Line (location line) and slope staking notes can
be generated using the GIS and the road design package for
use in the field by the forest professional.
Complementing PEGGER is a companion program
ROADVIEW that takes the preliminary route location generated by PEGGER and creates a 3-dimensional model of
the road’s cuts, fills and running surface. Using the 3-D
model and a visualization program such as EnVision, professionals can look at the road as it might be constructed
and effectively communicate with non-forest professionals
regarding scenic and environmental impacts.
tutorial, a typical user can be locating roads in a few minutes on their own PC taking full advantage of forest technology.
CONCLUSION
While route location has been used by forest professionals for many years and was computerized in the 1980s with the introduction of ROUTES, it has never become a widely used technology for evaluating initial road locations. With PEGGER, the forest planner can quickly evaluate route locations within a GIS framework, giving the planner access to additional GIS functionality. PEGGER was designed with simplicity and minimal investment cost as primary objectives. Through the use of a carefully designed user interface and an extensive tutorial, a typical user can be locating roads in a few minutes on their own PC, taking full advantage of forest technology.

LITERATURE CITED
Cha, D.S., H. Nako, and K. Watahiki. 1991. A computerized arrangement of forest roads using a digital terrain model. Journal of the Faculty of Agriculture Kyushu University 36(12): 131-142.
Pearce, J. Kenneth. 1960. Forest engineering handbook. Portland, OR: U.S. Department of the Interior, Bureau of Land Management. 220 p.
Reutebuch, S.E. 1988. ROUTES: A computer program for preliminary route location. Pacific Northwest Research Station: U.S. Department of Agriculture, Forest Service. General Technical Report PNW-GTR-216. 18 p.
Watson, H. J., and M. M. Hill. 1983. Decision support systems or what didn't happen with MIS. Interfaces 13(5): 81-88.
Xu, Shenglin. 1996. Preliminary planning of forest roads using ARC GRID. Corvallis, OR: Oregon State University, Department of Forest Engineering. 112 p.
Harvest Scheduling with Aggregation Adjacent Constraints: A
Threshold Acceptance Approach.
HAMISH MARSHALL, KEVIN BOSTON, JOHN SESSIONS
Abstract: Three different forest management planning unit sizes were used to compare the results from a tactical planning model that included a maximum opening size constraint with aggregation and even-flow goals. The smallest definition had a 22-acre average size, the second had an average size of 41 acres, and the largest had an average size of 59 acres. Discounted net revenue increased as unit size decreased: the smallest unit definition produced $45 per acre more than the second set of units and $225 more than the largest unit size. These results suggest that planners use the latest technology when defining individual settings and manage their unit sizes as one method to improve the financial performance of their assets.
INTRODUCTION
The strategic planner's goal is to develop a plan that will allow the firm to compete effectively (Porter 1986). Tactical planning aligns the operations to implement that strategy. Forestry planning problems, especially considering the long rotations, are some of the more difficult business planning problems because of economic, biological, and operational uncertainty in the data used. As the forest products industry is one of the most capital-intensive industries in the world, a detailed planning and scheduling system is required for competitive shareholder returns (Propper De Callejon et al. 1998). Strategic plans have traditionally assumed that the data are continuous and linear, allowing these problems to be solved using linear programming algorithms. Commercially available products such as FOLPI (Manley et al. 1991), MAGIS (MAGIS 2003), and WOODSTOCK (REMSOFT 2003) have been available for approximately 20 years to solve long-term strategic planning problems. Initially, the solutions from these strategic planning models were implemented using various ad hoc procedures. To improve both financial and environmental performance, optimization routines have been developed to solve tactical scheduling problems that incorporate various spatially explicit constraints. These constraints have included various forms of green-up restrictions and unit-fixed-cost tactical planning problems. Unfortunately, these new tactical planning models quickly exceeded the capacity of commercial solvers and have resulted in the employment of a variety of heuristics to solve these problems.

A common spatial component found in tactical forest planning involves the harvest of adjacent planning units. One form of the adjacency constraint limits the maximum opening size that can be created. This constraint is found in many forest practices rules, including those of Sweden, Canada, and various western US states (Boston and Bettinger 2001, Dahlin and Sallnas 1993). The Sustainable Forestry Initiative (SFI), a voluntary certification scheme that has been adopted by much of the US forest products industry, restricts the average opening size to less than 120 acres (AFAPA 2002).

Digital surveying equipment, global positioning systems (GPS), and geographical information systems (GIS) allow smaller settings to be accurately defined and used in forest planning; however, many forest planners restrict the solution space of the tactical scheduling model by aggregating settings into larger units prior to solving the tactical problems. This paper explores the impact of planning unit size on the quality of the solutions produced by the tactical plan. Three degrees of pre-aggregation were used (Table 1) to describe the same 4450-acre planning area with identical yields.

The three data sets were used in a tactical planning model with the goal of maximizing discounted net revenue subject to an area-restriction green-up constraint. The maximum opening allowed is 120 acres, as specified in the Oregon Forest Practices Rules (Oregon Department of Forestry 2003). One potential risk of reducing the planning unit size is that the model will widely disperse each period's harvest over the entire landscape, increasing harvesting and transportation costs. To reduce this possibility, an additional goal was added to the model to aggregate original planning settings into larger planning units. The objective of the aggregation was to group settings on opposite sides of valleys into planning units that will be logged in the same year to improve logging efficiency. Stumps can be used as tailhold anchors for cable logging systems with less fear of failure than if they were allowed to begin to decompose. A third goal was added to the model to regulate volume flow.
Table 1. Description of the planning data sets.

Aggregation   Mean Planning Unit   Maximum Planning Unit
Level         Size (acres)         Size (acres)
1             21.7                 39.1
2             40.6                 76.8
3             58.9                 92.1

MODEL FORMULATION
Objective Function
The objective function goal was to maximize the net present value of the forest over a period of 20 years plus the sum of incentive and penalty values. A discount rate of 5% was applied to revenues at the end of each one-year period. The details of the incentive and penalty values are discussed below. The objective function is formulated as:

max( ∑_{i=1}^{n} ∑_{t=1}^{20} R_it/(1.05)^t − ∑_{i=1}^{n} ∑_{t=1}^{20} C_it/(1.05)^t ) − Even_Flow_Penalty + Aggregation_Incentive   (1)

where R_it = gross revenue, C_it = costs, i = harvest unit, t = planning period, and n = number of harvest units; the Even_Flow_Penalty and Aggregation_Incentive formulations are described below.

Aggregation Incentive
The aggregation adjacent constraints have been formulated to encourage the harvesting of adjacent planning units in the same period. This constraint has also been formulated as a goal or "soft constraint," modeled by increasing the value of the objective function when aggregation is included in the model. The amount of the incentive was based on the sum of the proportions of the perimeter of each planning unit shared with other planning units harvested in the same period, divided by the number of planning units:

Aggregation_Incentive = ( w ∑_{i=1}^{n} p_i X_it ) / n   (2)

where w = incentive weight, p_i = proportion of perimeter shared with units harvested in the same period that are either on the other side of the valley or above or below, and X_it = a 0,1 binary variable identifying whether unit i is cut in period t.

Even Volume Flow Penalty
Maintaining continuity of supply to customers is a key component in successfully operating a forestry business. We have assumed the goal is to maintain an even flow of volume throughout the 20-year planning horizon, although we recognize that this goal can be misapplied to small areas, where an even-flow constraint can significantly reduce the harvest from an area that cannot support yearly operations. In this study it was formulated as a goal or "soft constraint" where the objective function was penalized for any deviation of the discounted volume from a target volume in each period. The squared deviation was multiplied by a weighting factor and subtracted from the objective function:

Even_Flow_Penalty = w ∑_{t=1}^{20} ( tv − ∑_{i=1}^{n} v_it X_it/(1.05)^t )^2   (3)

where w = penalty weight, tv = target volume (Mbf), and v_it = total unit volume (Mbf).

Maximum Opening Constraint (Oregon State Rules)
The maximum opening constraint or area-restriction problem (Murray 1998) was formulated as a hard constraint that cannot be violated. The area-restriction constraint was formulated so that a neighborhood of adjacent settings harvested within 4 years must have a combined area of less than 120 acres to not violate the Oregon State Forest Practices Rules:

A_i X_it + ∑_{i∈s} A_i X_it <= 120   (4)

where A_i = area and s = a subset of adjacent units, all of which are harvested within 4 years of each other.
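As a minimal sketch of how the composite objective in equations (1)-(3) might be evaluated for a candidate schedule (the array names X, R, C, V, p and the weights are our illustrative assumptions, not the authors' implementation; feasibility under equation (4) is assumed to be checked separately):

    import numpy as np

    def objective(X, R, C, V, p, tv, w_flow, w_agg):
        """Evaluate the composite objective of equations (1)-(3).

        X : (n, T) 0/1 array; X[i, t] = 1 if unit i is harvested in period t.
        R, C, V : (n, T) gross revenue, cost, and volume (Mbf) by unit/period.
        p : (n,) shared-perimeter proportions for this candidate (in practice
            recomputed from each unit's same-period neighbors).
        tv : target volume per period; w_flow, w_agg : goal weights.
        """
        n, T = X.shape
        disc = 1.05 ** np.arange(1, T + 1)        # 5% end-of-period discounting
        npv = ((R - C) * X / disc).sum()          # discounted net revenue, eq. (1)
        vol = (V * X / disc).sum(axis=0)          # discounted volume per period
        flow_penalty = w_flow * ((tv - vol) ** 2).sum()   # eq. (3)
        agg_incentive = w_agg * (p @ X).sum() / n         # eq. (2)
        return npv - flow_penalty + agg_incentive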
HEURISTIC ALGORITHM: THRESHOLD ACCEPTANCE ALGORITHM
Threshold acceptance (TA) was first developed by Dueck and Scheuer (1990), who claimed that it appeared to be superior to simulated annealing (SA).
Figure 1. Flow Diagram of the Threshold Acceptance Algorithm. (The diagram evaluates each candidate's objective as b − wa + zc, where b = net present value, a = squared deviation of period volumes from the target, and c = average proportion of shared perimeter among units harvested in the same period.)
Figure 2. Net Present Value for the Different Aggregation Models (net present value, millions of dollars, by model).

Figure 3. Projected Volumes (Mbf) over the 20-Year Planning Horizon (volume, Mbf, by year, for each aggregation model).
The idea behind the TA algorithm (Figure 1) is similar to SA, but TA is much easier to understand and implement. As in SA, a new candidate is selected randomly from the neighborhood of the existing solution and the objective function for the new solution is calculated. Bettinger et al. (2002) examined the performance of eight heuristic planning techniques on three increasingly difficult wildlife-planning problems. The results showed that, despite the simplistic nature of threshold acceptance, it performed well compared to some of the more complex heuristic techniques.
The threshold acceptance algorithm was implemented using the Microsoft C# programming language. All spatial and yield data were stored in an ESRI geodatabase. Due to the stochastic components in the threshold acceptance algorithm, 20 final solutions were generated for each scenario, with each scenario using a new random starting solution. Each solution considered 50,000 iterations at each of the eight threshold levels. The run with the highest objective function for each scenario is presented in the following section of this paper.
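A compressed sketch of the acceptance loop in Figure 1, assuming a maximization formulation and user-supplied neighbor() and evaluate() functions (hypothetical names); the opening-size feasibility check (openings under 120 acres) is assumed to live inside neighbor():

    def threshold_accept(initial, evaluate, neighbor, thresholds,
                         reps_per_level=50000):
        """Threshold acceptance (Dueck and Scheuer 1990) for maximization."""
        current, cur_val = initial, evaluate(initial)
        best, best_val = current, cur_val
        for thr in thresholds:                  # e.g., eight decreasing levels
            for _ in range(reps_per_level):
                cand = neighbor(current)        # random, feasible unit/period move
                val = evaluate(cand)
                if val > cur_val - thr:         # accept if not much worse
                    current, cur_val = cand, val
                    if cur_val > best_val:
                        best, best_val = current, cur_val
        return best, best_val

Unlike SA's probabilistic acceptance, TA accepts any candidate whose objective is within the current threshold of the incumbent, which is what makes it simple to implement.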
RESULTS
As expected, the smaller planning units in aggregation level 1 produced a higher net present value than the larger units in levels 2 and 3; the increases in net present value were approximately $45 and $225 per acre, respectively (Figure 2). These net present values do not include the penalties and incentives. An additional cost of increasing planning unit size is greater variation in the scheduled volume flow between periods. The smaller planning units allow harvests to be selected in a manner that minimizes the volume penalty values (Figure 3). By incorporating the aggregation goal, the final harvest unit size nearly doubled the average original planning unit size. This suggests that a well-designed model can still produce larger harvest units without sacrificing net present value (Figures 4-7 and Table 2).

Figure 4. No Aggregation.
Figure 5. Aggregation 1.
Figure 6. Aggregation 2.
Figure 7. Aggregation 3.
Table 2. Harvest Unit Size Summary (acres).

Harvest Unit Size   No Aggregation   Aggregation 1   Aggregation 2   Aggregation 3
Maximum             105.0            102.6           119.2           119.7
Minimum             0.4              0.4             20.3            14.0
Average             26.4             40.6            65.9            75.3
CONCLUSIONS
Although in the past planners have pre-aggregated settings into large units, this is no longer necessary given the planning tools currently available. The results of this paper show that there are potential gains in financial performance when smaller units are the primary data incorporated into the model. This paper also demonstrates that goals or constraints can be incorporated into the model formulation that encourage or require the aggregation of planning units into larger harvest units with minimal impact on revenues. Allowing the model to aggregate settings into planning units during the modeling, as opposed to aggregating prior to modeling, allows a larger number of solutions to be explored, leading to better results. These results should encourage organizations to utilize current technology, such as GPS and GIS, that allows for the creation and management of smaller units and to maintain their identity throughout the planning process.

LITERATURE CITED
American Forest and Paper Association. 2002. The 2002-2004 Edition, Sustainable Forestry Initiative Program (SFI). http://www.afandpa.org/Content/NavigationMenu/Environment_and_Recycling/SFI/Publications1/Current_Publications/2002-2004_SFI_Standard_and_Verification_Procedures/2002-2004_SFI_Standard_and_Verification_Procedures.pdf. Accessed Nov. 4, 2003.
Bettinger, P., D. Graetz, K. Boston, J. Sessions, and W. Chung. 2002. Eight heuristic planning techniques applied to three increasingly difficult wildlife-planning problems. Silva Fennica 36(2): 561-584.
Boston, K., and P. Bettinger. 2001. The economic impact of green-up constraints in the SE USA. Forest Ecology and Management 145: 191-202.
Dahlin, B., and O. Sallnas. 1993. Harvest scheduling under adjacency constraints - a case study from the Swedish sub-alpine region. Scand. J. For. Res. 8: 281-290.
Dueck, G., and T. Scheuer. 1990. Threshold accepting: a general purpose optimization algorithm appearing superior to simulated annealing. Journal of Computational Physics 90: 161-175.
MAGIS. 2003. A Multi-Resource Analysis and Geographic Information System. www.forestry.umt.edu/magis/ (accessed 4/27/2003).
Manley, B., S. Papps, J. Threadgill, and S. Wakelin. 1991. Application of FOLPI, a linear programming estate modelling system for forest management planning. FRI Bulletin No. 164. 14 pp.
Murray, A. 1998. Spatial restrictions in harvest scheduling. For. Sci. 45(1): 45-52.
Oregon Department of Forestry. 2003. Oregon forest practices rules. Salem, OR.
Porter, M. 1986. Competition in global industries: a conceptual framework. In Competition in Global Industries, ed. M. Porter. Harvard Business School Press, Boston, MA. pp. 15-60.
Propper De Callejon, D., T. Lent, M. Skelly, and C. A. Webster. 1998. Sustainable forestry within an industry context. In M. B. Jenkins. The John D. and Catherine T. MacArthur Foundation Press. pp. 2-1 to 2-39.
REMSOFT. 2003. Intelligent software for the environment. www.remsoft.com (accessed 04/27/03).
Preliminary Investigation of Digital Elevation Model
Resolution for Transportation Routing in Forested
Landscapes
MICHAEL G. WING, JOHN SESSIONS AND ELIZABETH D. COULTER
Abstract: Several transportation planning decision support systems utilize geographic information systems (GIS) technology to plan forest operations. Current decision support systems do not address upslope terrain conditions that may influence
the stability of road networks. This paper describes an on-going research project that uses a GIS and digital elevation model
(DEM) to identify transportation route alternatives. Our goal was to develop an algorithm that identified transportation
routes guided by an objective function that weighted road grade and potential drainage area. We used a 9 x 9 meter resolution DEM and found that it was unable to provide reliable road grade and landscape slope estimates. In both road grade and landscape slope results, gradient estimations based on the DEM data appeared to overestimate expected values. These results encourage further investigations, including the use of finer-resolution DEMs to model
topographic surfaces for transportation routing purposes.
INTRODUCTION
An important part of forest operations is the development of an efficient transportation system that incorporates economic, environmental, and safety considerations. Several decision support systems have been developed to assist forest planners with scheduling transportation routes in forested terrain (Reutebuch 1988, Liu and Sessions 1993). Previously, forest transportation planners relied on hard copy maps and other manual techniques for transportation scheduling and were subject to the time and efficiency limits imposed by these techniques. Often, this meant that a full range of options may not have been developed and considered. The development of decision support systems has helped planners with the identification and prioritization of potential transportation routes, given a set of parameters and accompanying constraints. Decision support systems have also allowed planners to quickly create a range of transportation options with indices or other benchmarks through which to evaluate and choose among alternatives.

While others have used GIS technology as a decision support system for forested route siting and analysis (O'Neill 1991, MacNaughton et al. 1997, Epstein et al. 1999, Chung 2002, Akay 2002), accounting for potentially unstable or landslide-prone sites in the terrain surrounding the road has not been considered. With the increasing availability of digital elevation models (DEMs), it is now possible to find elevation data for most parts of the U.S. at 10-meter, or finer, resolution. DEMs can be used to create terrain models for slope and other topographic landscape representations for input into a decision support system. With terrain information, planners may be able to minimize traffic on forested routes that are potentially less stable than others, and reduce road failures, maintenance needs, sediment delivery to streams, and other factors related to transportation costs. Alternately, the identification of problem sites along a transportation network that is in use may help direct monitoring and maintenance efforts. Although a few studies have reported progress in this area (Wing et al. 2001), there remains a need for further work. We present in this paper the results of efforts to use a DEM to model road grade, slope conditions, and the amount of upslope contributing area for transportation route planning. We investigated the usefulness of a 9-meter DEM to provide reliable road grade and landscape slope estimates for transportation purposes.
METHODS
Our study area is the Elliott State Forest, located in the
Oregon coast range. The Elliott State Forest is an actively
managed forest of 145 square miles (376 km2) and has relatively steep terrain with an approximate average ground
slope of 53%. The Elliott has a well-developed transportation network (3.8 miles per square mile) with approximately
550 miles (885 km) of roads, both paved and unpaved. We
obtained base GIS data from the Elliott staff for our project
with layers representing ownership boundaries, roads, and
a digital elevation model (DEM) that was derived from aerial
photography. All GIS operations were done using either
ArcInfo workstation or ArcGIS 8.3 software with the Spatial Analyst extension. Our analyses were completed primarily with raster (grid) data. The Spatial Analyst extension allows users to create and manipulate raster data, and
offers a number of tools for calculating preferred routes in a
transportation network.
Our goal was to guide the search for preferred routes by
use of a weighted objective function of environmental variables that influence environmental performance of roads.
The weighted objective function was intended to identify
the optimal route based on variable weightings. We chose
two variables: road grade and upslope contributing area.
Road grade was chosen because water power increases with
road gradient and steep road grades have been linked with
sediment delivery potential from forest roads (Boise Cascade 1999). The second variable was upslope contributing
area. Upslope contributing area is intended to provide a
potential indicator of saturated terrain conditions: the contributing drainage area flowing into each grid cell. This
measure provides a relative index of wetness and is a primary variable for many popular hydrologic models (Beven
and Kirkby 1979).
To derive road grade, we initially created a raster-based
GIS layer of the Elliott road system by converting the existing vector roads layer to this data structure. We then calculated the grade of the Elliott roads by using the existing 9-meter resolution DEM from the Elliott forest, overlaying
the DEM on the roads layer, and calculating a grade using
the SLOPE function within ArcInfo. This resulting grade
value was used to approximate road grade. We considered
only those DEM cells that were coincident with the roads
and did not use other adjacent raster cells. Contributing
areas were calculated for each grid cell by summing the area
of all cells that drained into that cell. These processes resulted in separate raster layers for road grade and contributing area.
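As a rough illustration of the grade calculation (not the ArcInfo SLOPE function itself, which uses a 3 x 3 neighborhood), percent slope can be approximated from a DEM array with central differences; contributing area would additionally require flow-direction and flow-accumulation passes, which we omit here:

    import numpy as np

    def percent_slope(dem, cell_size=9.0):
        """Approximate percent slope of a DEM grid (9 m cells by default)."""
        dz_dy, dz_dx = np.gradient(dem, cell_size)   # elevation change per meter
        return 100.0 * np.sqrt(dz_dx ** 2 + dz_dy ** 2)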
We partitioned our analysis into two parts. First we used
the existing roads in the forest as the transportation network and constrained our route selection process to consider only these existing roads. Second, we relaxed the route
selection process and did not constrain the route location
search to existing roads. We determined the shortest route
between the landing and exit sites as a function of distance,
and then compared this route to others generated through
different weightings of road grade and contributing area
importance.
Our first application used a potential timber landing located in the northeastern portion of the Elliott and an exit
site on the forest’s western perimeter (Figure 1). These example sites were chosen as the area between them encompasses a major portion of the Elliott.
Figure 1. Elliott State Forest, road network, landing location, and exit point.

We found unexpected results in the topographic variable summaries for shortest paths created through our first application. Since we had constrained the search to existing roads, we anticipated that all raster-derived grades would be
within normal truck operating road gradients. Although
the average grades were reasonable, there were a number of
road segments with grades exceeding 20 percent. This raised
doubts as to the correct location of the existing road network relative to the DEM we used for analysis. To investigate possible road network georeferencing problems, we
relaxed the search to examine the entire terrain. If the only
problem was georeferencing, we anticipated that the relaxed
search would identify routes that avoided the excessive gradients. We used the same landing and exit site but we altered our approach by constructing slope and contributing
area models from the DEM for the entire forest. Using these
layers, the route selection algorithm was not constrained to
the existing road network and was free to consider the entire Elliott forest. For the second application, we also determined the shortest path between the landing and exit sites
as well as other routes through the same combination of
grade and contributing area weightings that we used in the
first application. To facilitate weighting of the slope and
contributing area values, both the road grade and contributing area layers were reclassified from continuous data into
a 10-category equal area distribution. The equal area distribution creates continuous categories that contain an approximately equal number of observations. We manipulated variable weightings to calculate several different routes from
our landing to the forest outlet.
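The route search itself can be thought of as a least-cost-path problem over the weighted, reclassified rasters. The sketch below is illustrative (we used the Spatial Analyst routing tools, not this code; all names are ours): it builds equal-area classes, combines them with the chosen weights, and runs a plain Dijkstra search over a 4-connected grid:

    import heapq
    import numpy as np

    def equal_area_classes(layer, k=10):
        """Reclassify a raster into k categories with ~equal cell counts."""
        ranks = layer.ravel().argsort().argsort().reshape(layer.shape)
        return 1 + (k * ranks) // layer.size       # integer classes 1..k

    def least_cost(cost, start, end):
        """Accumulated cost of the cheapest 4-connected path start -> end."""
        rows, cols = cost.shape
        dist = np.full(cost.shape, np.inf)
        dist[start] = cost[start]
        heap = [(float(cost[start]), start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == end:
                return d
            if d > dist[r, c]:
                continue                           # stale queue entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
        return float("inf")

    # e.g., cost = w_grade * equal_area_classes(grade_raster) \
    #            + w_area * equal_area_classes(contrib_area_raster)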
RESULTS
For the single landing and exit application, the first route
we identified was the shortest linear path along the road
network from the landing to the exit. This shortest path
created a base route for comparative purposes; no weights
138
were applied in this initial route for road grade or contributing area. We then selected routes based on varying weights of road grade and contributing area in order to determine optimal routes given a range of variable importance (Table 1). Regardless of the route parameters, the range of resulting route distances was consistent. The shortest distance between these points identified by the base route was about 37 miles and, with modifications of the grade and contributing area variables, the range of distances was between 37 and 38 miles. The mean grade of all routes was also consistent and ranged from 8 to 9%. The maximum grade differed from 43 to 67%. These high-gradient sections clearly exceed the gradients of the existing road network.

Contributing areas differed markedly, with the shortest path having a mean contributing area of 26 acres and the routes resulting from considering grade and contributing area having a range from 13 to 27 acres. The shortest path and routes that had a grade weight of at least 75% all had the same maximum contributing area (12,917 acres), whereas all other routes had a maximum of 11,392 acres. Closer inspection of the location of these routes revealed that the maximum contributing areas all occurred along the same small section of road. This section of road crossed a major stream twice and gained the large contributing area values associated with the stream. In general, as the weight of the road grade increased (and contributing area decreased), the mean route grade decreased and contributing area increased, although the changes in mean route grade were not pronounced.

Table 1. Variable weights and route distance, mean and maximum route grade, and mean and maximum contributing area for the existing road network.

Grade       Contributing      Route             Mean Route   Max Route   Mean Contributing   Max Contributing
Weight (%)  Area Weight (%)   Distance (miles)  Grade (%)    Grade (%)   Area (acres)        Area (acres)
0           0                 36.5              9.3          67.2        26.3                12916.9
0           100               38.2              9.5          42.8        12.6                11391.5
10          90                38.1              9.4          42.8        12.7                11391.5
25          75                38.0              9.3          42.8        12.8                11391.5
50          50                37.8              9.2          42.8        12.9                11391.5
75          25                36.8              8.9          67.2        17.0                12916.9
90          10                36.8              8.9          67.2        17.3                12916.9
100         0                 36.8              8.9          67.2        27.4                12916.9

The high grade values that resulted from our initial analyses indicated that the DEM we used was not able to provide sufficient information for determining reliable road grade estimates. Possible sources of error are DEM resolution, incorrect location of the existing road network relative to the DEM, a processing error, or an inherent bias in the methodology that creates the raster elevation values. To test for a road location error, we applied a different approach to route creation with the shortest path algorithm and different weightings of topographic variables; we did not constrain routes to the existing road network. By allowing the shortest path algorithm to navigate freely throughout the Elliott's topography, we believed that the identification of routes with less than a 20% grade maximum would confirm a road location error.

Results varied more dramatically for the unconstrained routing approach. The shortest path had a distance of 19 miles, a mean grade of 50%, and a mean contributing area of 14.63 acres (Table 2). Route distances were dramatically shorter for all of the unconstrained routes when compared to the network-constrained routes; distances ranged from approximately 23 to 25 miles. Route mean and maximum grades were also very different from the network-constrained routes; grades were consistently and, in many cases, considerably higher. Contributing area results for the unconstrained routes varied considerably in terms of the mean. With the exception of the 90 and 100% grade weights, mean contributing areas were roughly half, or less, of those of the constrained routes. Maximum contributing areas were larger for all of the unconstrained route results and, with two exceptions, ranged from 16,000 to 18,000 acres. These large maximum contributing areas were again the result of routes crossing major streams.

Table 2. Variable weights and route distance, mean and maximum route grade, and mean and maximum contributing area for unconstrained network routing.

Grade       Contributing      Route             Mean Route   Max Route   Mean Contributing   Max Contributing
Weight (%)  Area Weight (%)   Distance (miles)  Grade (%)    Grade (%)   Area (acres)        Area (acres)
0           0                 18.8              50.7         137.5       14.6                17337.3
0           100               22.5              35.0         109.8       3.9                 18032.9
10          90                23.0              22.7         88.5        7.3                 18032.9
25          75                23.0              21.9         171.1       7.3                 18032.9
50          50                24.7              18.8         84.0        0.5                 1870.2
75          25                24.5              18.6         82.6        9.9                 43470.7
90          10                22.7              14.5         88.6        354.9               16000.1
100         0                 24.5              11.2         145.1       1026.5              15996.3
DISCUSSION
All of the routes we developed from the existing road network had average road grades that were well within normal acceptable grade tolerances (16-20%). In addition, all routes also had maximum grades of more than 42%. While travel distances were significantly lower for the routes we created that were not constrained to existing roads, all had maximum grades greater than 80% and all had average grades that exceeded those of the routes created using the existing Elliott road network. These results indicated that the DEM values were not providing reliable grade and slope data.

We wanted to determine whether viable transportation routes could be determined through the 9-meter DEM. In order to create transportation routes that could be used by typical log-hauling vehicles, we adjusted the constraints of our routing algorithms so that grades in excess of 20% would not be considered in final route creation. We then attempted to create routes that avoided 20% grades through the confines of the existing network and also through the unconstrained approach, where the entire landscape would be potentially available for transportation routes. We found that this was not possible; every route possibility included multiple grade values that exceeded 20%. Given that many parts of the existing route system have been used for log hauling, these results shed doubt on the reliability of the DEM that served as the basis for our topography representations.

We suspected that perhaps a processing error could have contributed to these results and contacted the Elliott staff to verify the DEM's history. The base DEM was created from elevation points derived from aerial photography taken in 1996. The points were converted into a triangular irregular network (TIN) data structure and then converted into a raster file. We used the resulting raster file for our analysis. Potential errors could have occurred during operations performed on the data prior to our receiving it, or could have resulted from our manipulations during this project.

For comparative purposes, we obtained USGS 10-meter DEMs for the Elliott and used these data to create slope and contributing area models of the Elliott. The average slope of the USGS DEM was slightly less (50%) than that of the photogrammetrically derived DEM (53%). We then used the baseline USGS data to calculate a shortest path and grade statistics that were not constrained to the existing network. Whereas the average grade (47%) was slightly less than our previous results (Table 2), the maximum grade was slightly larger (143%). These similar results led us to believe that it was not a processing error, or necessarily inaccuracy, in our original DEM that contributed to the large grade values.

Rather, a more likely explanation is that a finer resolution DEM is needed to provide a more reliable approximation of road grade and terrain. The DEMs we used were unable to accurately capture the lower gradients that should exist along the existing road network. In addition, our inability to create any route throughout the forest that avoided grades above 20% suggests that slopes were systematically overestimated throughout the forest. Wilson et al. (2000) detected differences in slope as a function of DEM resolution, and considered resolutions between 30 and 200 meters. One approach to verifying systematic slope exaggeration would be to create or obtain finer-resolution (1-5 meter) DEM data for the Elliott, calculate slope values, and compare them with our reported findings. These could be compared to road grades and cross-section data measured with precision instrumentation, such as a total station or digital clinometer, in order to better understand what is being represented in the DEM.
LITERATURE CITED
Akay, A. 2002. Minimizing total cost of construction, maintenance, and transportation costs with computer-aided forest road design. PhD dissertation, Oregon State University, Corvallis. 229 p.
Beven, K. J., and M. J. Kirkby. 1979. A physically based variable contributing area model of basin hydrology. Hydrological Sciences Bulletin 24(1): 43-69.
Boise Cascade Corporation. 1999. SEDMODL-Boise Cascade road erosion delivery model. Technical documentation. Boise Cascade Corporation, Boise, ID. 19 p.
Chung, W. 2002. Optimization of cable logging layout using a heuristic algorithm for network programming. PhD dissertation, Oregon State University, Corvallis. 206 p.
Epstein, R., A. Weintraub, J. Sessions, J. B. Sessions, P. Sapunar, E. Nieto, F. Bustamante, and H. Musante. 1999. PLANEX: an equipment and road location system. In Proceedings of the International Mountain Logging and 10th Pacific Northwest Skyline Symposium, March 28-April 1, 1999, Dept. of Forest Engineering, Oregon State University, Corvallis. pp. 365-368.
Kramer, B. W. 2001. Forest road contracting, construction, and maintenance for small forest woodland owners. Research Contribution 35, Forest Research Laboratory, Oregon State University, Corvallis.
Liu, K., and J. Sessions. 1993. Preliminary planning of road systems using digital terrain models. Journal of Forest Engineering 4: 27-32.
MacNaughton, J., J. Sessions, and S. Xu. 1997. Preliminary planning of forest roads using ARC GRID. In: GIS '97 Conference Proceedings. Fort Collins: GIS World, Inc. pp. 67-71.
O'Neill, W. A. 1991. Developing optimal traffic analysis zones using GIS. ITE Journal 61: 33-36.
Reutebuch, S. 1988. ROUTES: A computer program for preliminary route location. USDA General Technical Report PNW-GTR-216, Portland, OR. 18 p.
Wilson, J. P., P. L. Repetto, and R. D. Snyder. 2000. Effect of data source, grid resolution, and flow routing method on computed topographic attributes. In: Wilson, J. P., and J. C. Gallant (editors), Terrain Analysis: Principles and Applications. New York: John Wiley and Sons. pp. 133-161.
Wing, M. G., E. D. Coulter, and J. Sessions. 2001. Developing a decision support system to improve transportation planning in landslide-prone terrain. In Proceedings of the International Mountain Logging and 11th Pacific Northwest Skyline Symposium, December 10-11, 2001, College of Forest Resources, University of Washington and International Union of Forestry Research Organizations, Seattle, WA. pp. 56-60.
Comparison of Techniques for Measuring Forested Areas
DEREK SOLMIE, LOREN KELLOGG, MICHAEL G. WING AND JIM KISER
Abstract: Operational planning and layout are important steps in determining the feasibility of harvesting operations.
Higher-precision technologies may increase measurement accuracy and efficiency while decreasing total planning costs. Although a number of trials have been completed on the potential implementation of some of these new technologies, few have
quantified the benefits of such devices in an operational setting.
Sixteen (~1 ac) units were identified for an evaluation of different spatial data-collection instruments as well as techniques
for measuring area. Unit boundaries were measured by three surveying techniques, comprising 1) a string box, manual compass, and clinometer; 2) a laser, digital compass, and digital data collector; and 3) a global positioning system. The collected
data were compared with a series of benchmarks established with a total station. Techniques were statistically analyzed and
error distributions were developed at either a unit or an individual data-point scale. Time studies were conducted to determine
the overall efficiencies of each technique. Our results should assist forest resource managers in their decisions when selecting
alternate measurement tools for collecting spatial data.
INTRODUCTION
Studies of conventional methods employed in the Pacific Northwest have analyzed the use of nylon tapes, hand-held compasses, and clinometers for operational measurements. Researchers have reported that costs can vary according to the type of harvesting system (Edwards 1993, Kellogg et al. 1998), unit size and shape (Dunham 2001a), silvicultural treatments (Kellogg et al. 1991, Edwards 1993, Kellogg et al. 1996a, Dunham 2001a, b), and level of crew experience (Kellogg et al. 1996b). However, no studies have been published concerning more recent data-capturing technologies available to the forest industry.

Higher-precision technologies may increase measurement accuracy and efficiency while decreasing total planning costs. Although a number of trials have been completed on the potential implementation of some of these new technologies, few have quantified the benefits of such devices in an operational setting. Mixed results have been reported for the usefulness of electronic distance- and azimuth-measuring (EDM) devices to traverse forest stand boundaries (Liu 1995) and for low-volume road surveys (Moll 1993). The distance- and vertical angle-measuring capabilities of the lasers generally met the survey requirements, but the azimuth measurements with the compass did not, due to offsets in the magnetic field.

Several studies have illustrated the potential for digital data collectors, compasses, and laser rangefinders in operational settings, including woodpile volumes (Turcotte 1999) and skyline corridor traversing (Wing and Kellogg 2001). In operational settings, measurements are often difficult to obtain due to understory brush. Further comparisons between the laser rangefinder and more conventional methods are needed to fully understand the benefits of these newer tools.

Global positioning systems (GPS) have also been used to collect spatial data in forested environments (Forgues 1998). Studies have identified variables that affect their usefulness, including the amount of canopy closure (Stjernberg 1997, Mancebo and Chamberlain 2001), receiver type and grade (Darche 1998), weather conditions (Forgues 2001), and topography (Liu and Brantigan 1996). Historically, one of the challenges when using GPS has been the effect of multi-path signals caused by the forest canopy (Stjernberg 1997, Forgues 2001). However, this effect has largely been mitigated by manufacturers incorporating multipath recognition into their firmware. Signal availability is another problem (Karsky et al. 2000), primarily because of the limited visibility of satellites due to forest cover and topography.

Differential GPS (DGPS) has been shown to be a cost-effective technique for measuring land areas (Liu and Brantigan 1996). Both forest canopy and undulating terrain exert a definite effect on traverse surveys completed by DGPS, with accuracy being reduced as variations in canopy closure and topography increase. Nevertheless, kinematic DGPS traverses have proved more capable of achieving a closer forest stand-area approximation than that obtained from a traditional compass-and-chain traverse.

The objectives of this study were to: 1) gather time and costing information to determine the relative efficiencies of each measurement technique; 2) compare information on the precision and accuracy of each method; and 3) analyze patch orientation differences due to discrepancies in angular measurements.
METHODS
Study Site
This study was located on the McDonald-Dunn College Forest, managed by Oregon State University. The site is a 55-year-old mixed stand comprised primarily of Douglas-fir (Pseudotsuga menziesii), big leaf maple (Acer macrophyllum), and red alder (Alnus rubra). The stand also had minor shrub vegetation consisting of vine maple (Acer circinatum), salal (Gaultheria shallon), and salmonberry (Rubus spectabilis). Slopes ranged from 0 to 76% (average of ~25%). Canopy closure was 60 to 95%, with an average stand density of 280 trees per acre. The average tree was ~97 ft tall, with a dbh of 17 in. Approximate volume per acre was 15 to 24 mbf.

Data Collection
Sixteen study patches (~1 ac each) were selected based on stand descriptions, topographies, and their location relative to other patches. Boundaries were delineated in the field with surveyor's flagging and paper tags. Benchmark stations, established along the vertices of each patch, were flagged and their locations measured with a Nikon DT-310 total station. Measurement accuracies were reported to within 0.02 in. of the horizontal and vertical distances.

Three techniques for determining land area were compared against the benchmark measurements. These included the use of: 1) a string box with a distance counter, hand-held compass, and a Suunto clinometer; 2) electronic distance- and electronic bearing-measurement devices; and 3) a global positioning system. Time required to complete the operational layout and planning was separated into three components to determine the relative efficiencies of each method: time spent surveying each patch, recording the data, and either downloading or entering the information into a database. Crew sizes depended on the surveying method being employed. All members had at least one year of experience with the survey equipment and were proficient in its operation.

The first method consisted of a single person measuring slope distance and slope percent. Data were collected station to station, recorded in a field book, and manually entered into a software program, RoadEng (Softree; Vancouver, BC), in the office. Traverse adjustments were done using the compass rule (Mikhail and Gracie 1981, Buckner 1983).
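For reference, the compass (Bowditch) rule distributes the traverse misclosure among the courses in proportion to their lengths. A minimal sketch, assuming courses are given as (horizontal distance, azimuth in degrees) pairs (the function and variable names are ours):

    import math

    def compass_rule_adjust(courses):
        """Adjust a closed traverse by the compass (Bowditch) rule.

        courses: list of (horizontal_distance, azimuth_degrees) tuples.
        Returns adjusted (x, y) coordinates for each station.
        """
        dx = [d * math.sin(math.radians(az)) for d, az in courses]
        dy = [d * math.cos(math.radians(az)) for d, az in courses]
        ex, ey = sum(dx), sum(dy)             # closure error in x and y
        perim = sum(d for d, _ in courses)
        pts, x, y = [(0.0, 0.0)], 0.0, 0.0
        for (d, _), ddx, ddy in zip(courses, dx, dy):
            x += ddx - ex * d / perim         # correction proportional to length
            y += ddy - ey * d / perim
            pts.append((x, y))
        return pts                            # final point returns to (0, 0)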
The second method employed an electronic distance- and electronic bearing-measurement device manufactured by Laser Technology Incorporated (LTI). The Impulse 200 EDM was linked with a MapStar digital compass, which provided data on slope distance, slope percent, and horizontal angles. This system required a two-person crew, and the collected data were logged into a hand-held digital data recorder. The lead traverser maneuvered between stations and held the reflective prism at eye level, directly above the pin flag. The rear traverser aimed the laser at the reflective prism, and the distance, inclination, and azimuth were recorded in the data collector. Two data recorders, one operating on a Windows CE and DOS platform (Juniper Allegro), the other on a Windows CE platform (Tripod Data Systems (TDS) Ranger), were used in tandem with the laser to determine the most efficient data recording technique. One advantage of using a DOS-based application was that the data could be directly downloaded into the mapping software.

The office work for the Juniper data collector consisted of downloading the information to a desktop computer via an ActiveSync program. Data Plus software allowed the user to structure the database to match the required input for the mapping program. The data were then imported into RoadEng, using the Terrain Module, and subsequently analyzed. Office work for the TDS data collector involved a computer spreadsheet program that adjusted the coordinates to a format that RoadEng could recognize.

The third survey technique incorporated a Trimble Pro XR GPS. A one-person crew traversed the perimeter of the patches, simultaneously logging points and using the area function within the TSC1 data collector while moving between stations. This traverse was completed in a kinematic mode, so that no differentiation existed among the stations; rather, the entire boundary was traversed as a single segment. Therefore, the GPS portion of this study did not include between-station measurements, and comparisons could be made only at the patch level. Data were downloaded to Trimble Pathfinder Office version 2.01, and base station data were used to differentially correct the data and determine patch areas.

The previously described techniques were also compared with a benchmark method that could produce the most accurate measurements. A Nikon DT-310 total station was used along with 2 prisms and a four-person crew. A side-shot method was used that minimized the number of instrument set-ups required to traverse the patch (Fig. 1), while collecting measurements at each station. Two crew members maneuvered prisms between the stations, while another cleared sight-paths between the total station and the survey points. Survey data were transformed to determine the x, y, and z coordinates, which were then downloaded as an ASCII file into RoadEng.

Figure 1. Side shot method for traversing with a total station.

RESULTS AND DISCUSSION
Time and Cost Information
Time required to survey a patch and complete the office work varied substantially, depending on the technique (Fig. 2). The task was considered complete when all information was processed and entered into a common mapping program.

The method involving the laser, digital compass, and Juniper data collector required the least amount of time per patch (17 minutes). The second most time-efficient technique was that using the laser, digital compass, and the TDS data collector. The latter method required approximately two extra minutes per patch because of the additional step taken by the TDS data collector to arrange the data in an acceptable format for the mapping program.

Figure 2. Time required to complete various forest-area measurement techniques (field survey and office data entry time, minutes, by patch).

The string-box method required 19 minutes more per patch (195% increase) compared with the laser/Juniper data collector. Contributing factors included the time required to manually record the data in the field book and the need to manually transcribe the field notes, whereas the laser method included a digital download. Likewise, the GPS method took 23 minutes longer (210% increase) than the Juniper data collector, mainly because of intermittent satellite reception due to topography, canopy closure, and satellite orbits. The GPS data include time spent on those patches that were abandoned after one hour because of poor satellite configuration.

Average time difference for the total station was 54 minutes per patch (370% increase) compared with the laser/Juniper data collector. This was due primarily to required instrument set-up time.

The time, type of equipment, and crew size required to complete a traverse were used to calculate the variable cost of each survey method (Table 1). Equipment was depreciated over a two-year period. Hourly wages, which included benefits, were obtained from the 2001 Associated Oregon Loggers Annual Wage Survey (Salem, OR, USA). Hourly labor and initial equipment costs played the most significant role in overall operating costs. The difference between the two digital-data methods could be attributed to the additional office time the TDS data collector required for formatting the field data.
Table 1. Costs involved in completing land-area survey of 16 forested patches.

Method            Crew   Labor cost   Equipment      Total time   Total cost   Cost per    Cost per
                  size   ($/hr)       cost ($/hr)    (hr)         ($)          acre ($)    mbf ($)
String Box        1      18.90        0.05           10.3         195.19       11.67       0.62
Laser (Ranger)    2      37.80        1.18           5.9          229.98       13.22       0.70
Laser (Allegro)   2      37.80        1.31           5.3          207.28       11.92       0.63
GPS               1      18.90        1.88           11.5         238.97       38.86       1.85
Total Station     4      75.60        1.13           19.7         1511.58      86.52       4.59
Table 2. Precision of measurements for patch areas, by survey method.

Method          Mean patch   Mean difference in   Percent          Mean patch
                area (ac)    patch area (ac)      difference (%)   precision (%)
String Box      1.03         -0.04                3.7              1.15
Laser           1.06         -0.01                0.93             2.65
GPS             1.03         -0.04                3.7              N/A
Total Station   1.07         0                    0                0.014

Table 3. Average distance errors produced by each survey method.

Method          Mean slope distance   Mean horizontal distance   Mean vertical distance
                error (ft)            error (ft)                 error (ft)
String Box      3.02                  2.78                       2.82
Laser           1.33                  1.14                       1.81
GPS             N/A                   N/A                        N/A
Total Station   0                     0                          0

Table 4. Average mean effectiveness values for each survey method.

Method            Total Cost ($)   Closing Error (%)   n    Mean Effectiveness (M.E.)
String Box        195.19           1.15                16   1236
Laser (Ranger)    229.98           2.65                16   3247
Laser (Allegro)   207.28           2.65                16   2902
Total Station     1511.58          0                   16   94
PRECISION AND ACCURACY
Precision is the degree of closeness or conformity among repeated measurements of the same quantity (Mikhail and Gracie 1981). The average patch precision attained by each of our methods is shown in Table 2.

The mean differences in patch area compared to the total-station technique appear to be relatively small. Although this difference represents a fairly small land area (1742 ft2), one may assume a random error effect, for which the percent error would hold fairly constant. Therefore, this effect could dramatically impact area calculations, timber volume estimates, and other operational considerations on larger unit areas.

Because the survey of each patch started and ended at the same point, the precision or repeatability could be calculated from the difference in coordinates. This difference was then divided by the total perimeter distance for each patch, resulting in a percent error term that was averaged for the 16 traversed patches. The laser and compass method produced the least precision because the instrument was not mounted on a staff. Although the manufacturer's accuracies had been achieved in trials with the equipment mounted, this positioning was found to limit the user's mobility in the forested environment. The mean precision of the string box (1.15%) was good given the perceived minimal precision of the equipment.

Accuracy is defined as the degree of conformity or closeness of a measurement to the true value (Buckner 1983). Survey methods were analyzed for significant differences at the station level (Table 3). Average accuracy was calculated from the difference in measurements between the total station and each of the laser and string-box methods. GPS data are station independent and thus were not involved.

String box error can be attributed to several factors. For example, use of the string box was affected by the amount of brush and branches between stations. The string may have gotten caught on branches, preventing the traverser from following a straight path. Likewise, the string may have become taut when maneuvering around obstacles, thereby contributing to the error.

Errors for the total station and laser methods were primarily attributable to the operator. Both the laser and the target had to be positioned vertically above the station. A common problem involved the laser operator needing to bend and shift away from the station in order to gain a clear sight path toward the target.
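A sketch of the two quantities just described, computed from adjusted station coordinates (hypothetical names): the closing error as a percentage of the traverse perimeter, and the patch area via the shoelace formula:

    import math

    def closing_error_pct(points, perimeter):
        """Misclosure between first and last traverse points, as % of perimeter."""
        (x0, y0), (xn, yn) = points[0], points[-1]
        return 100.0 * math.hypot(xn - x0, yn - y0) / perimeter

    def shoelace_area(points):
        """Enclosed area of a closed polygon from station coordinates."""
        s = sum(x1 * y2 - x2 * y1
                for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]))
        return abs(s) / 2.0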
A multiple range test was used to confirm that all data points were from the same population. In addition, t-tests were conducted at the patch level to determine significant differences in accuracy. Values from both the laser and the string-box methods were significantly different (p<0.05) from those obtained with the total-station technique. Likewise, the t-test used to compare the string box and laser data also indicated a significant difference between these two methods (p<0.05).

ORIENTATION
Although all traverses closed with adequate precision and approximately equal areas regardless of the survey technique employed, orientations varied substantially (Fig. 3). This effect on alignment might have major consequences for a number of tasks completed during operational planning and layout. For example, such errors could be costly to both parties when working with legal boundaries between property owners. This difference was most evident when the digital-compass method was implemented, because the position at which the user held the equipment influenced the reading. Although very good closing precision could be attained, large deviations from patch alignment occurred. This effect could have been minimized by mounting the laser and digital compass on a staff.

Figure 3. Differences in patch orientation generated by survey methods.

EFFECTIVENESS
It is difficult to account for practicality when comparing survey techniques. Liu (1995) assessed individual methods that used different equipment by multiplying the time needed to complete a task by the resulting accuracy, thereby basing effectiveness on time instead of cost. Because our study involved a combination of techniques for each method, a total-cost variable was calculated instead. Mean effectiveness (M.E.) for each method (Table 4) was calculated by multiplying the total cost by the closing error, normalized by the total-station closing error, and dividing by the number of patches (16). Here, the smaller the value, the more effective the surveying method:

M.E. = ((Total Cost) x (Closing Error % / Closing Error Total Station %)) / n
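As a worked check of the expression above: for the total station the closing-error ratio is unity, so M.E. = 1511.58 x 1 / 16 ≈ 94, matching Table 4.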
The total-station technique was the most effective; although it was the most time-consuming and expensive of all the methods, it had a significantly smaller closure error. The large difference between the laser method and the string-box technique was a result of the higher accuracy and initial costs associated with the former. Effectiveness of the GPS method was not included because no level of accuracy had been calculated for it.

The effectiveness of each survey method also varied substantially. Low (better) values were the result of a combination of small costs and/or high accuracies. The total-station method rated well (value = 94) because of the high precision gained with its use. Although it was the most expensive to operate, its resulting precision was magnitudes higher than that gained by the other methods.

SUMMARY
Different methods for measuring forest areas may be used to meet specific land-management objectives. This study compared four techniques for completing a traverse of partial harvests within an uneven-aged management plan. The method entailing the string box, manual compass, and clinometer was approximately 6% less expensive than the laser method. However, although the initial purchase price and labor rates of the string-box technique were lower, 48% more time was spent conducting the traverse of all the patches. The total-station technique was the most expensive because of the larger crew and the time required to clear sight lines.

Relative to their specific measurement activities, each method has its strengths (time, cost, and accuracy) and weaknesses (alignment, repeatability, and cost). Therefore, the potential benefits must be weighed when allocating resources to specific duties for operational planning. In conclusion, this study illustrated that, although time was saved by using the digital instruments, their performances were not always as effective as those achieved via traditional methods.
LITERATURE CITED
Buckner, R.B., 1983. Surveying Measurements and their Analysis. Landmark Enterprises, Rancho Cordova, CA, USA. 275 p.
Darche, M.-H., 1998. A Comparison of Four New GPS Systems under Forestry Conditions. Forest Engineering Research Institute of Canada Special Report 128, Pointe-Claire, Quebec, Canada. 16 p.
Dunham, M.T., 2001a. Planning and Layout Costs I: Group Selection and Clear-cut Prescriptions. Forest Engineering Research Institute of Canada, Vancouver, BC, Canada. 2(22): 6.
Dunham, M.T., 2001b. Planning and Layout Costs II: Tree Marking Costs for Uniform Shelterwood Prescriptions. Forest Engineering Research Institute of Canada, Vancouver, BC, Canada. 2(34): 4.
Edwards, R.M., 1993. Logging Planning, Felling, and Yarding Costs in Five Alternative Skyline Group Selection Harvests. Master of Forestry paper, Department of Forest Engineering, Oregon State University, Corvallis, OR, USA. 213 p.
Forgues, I., 1998. The Current State of Utilization of GPS and GIS Technologies in Forestry. Forest Engineering Research Institute of Canada Field Note, Pointe-Claire, Quebec, Canada. 2 p.
Forgues, I., 2001. Trials of the GeoExplorer 3 GPS Receiver under Forestry Conditions. Forest Engineering Research Institute of Canada, Pointe-Claire, Quebec, Canada. 2(8): 4.
Karsky, D., Chamberlain, K., Mancebo, S., Patterson, D., and T. Jasumback, 2000. Comparison of GPS Receivers under a Forest Canopy with Selective Availability Off. USDA Forest Service Project Report 7100. 21 p.
Kellogg, L.D., Pilkerton, S., and R. Edwards, 1991. Logging Requirements to Meet New Forestry Prescriptions. P. 43-49 In Proceedings of Council of Forest Engineering Annual Meeting, Nanaimo, BC, Canada.
Kellogg, L.D., Bettinger, P., and R.M. Edwards, 1996a. A Comparison of Logging Planning, Felling, and Skyline Costs between Clearcutting and Five Group-Selection Harvesting Methods. Western Journal of Applied Forestry 11(3): 90-96.
Kellogg, L.D., Milota, G.V., and M. Millar Jr., 1996b. A Comparison of Skyline Harvesting Costs for Alternative Commercial Thinning Prescriptions. Journal of Forest Engineering 1: 7-23.
Kellogg, L.D., Milota, G.V., and B. Stringham, 1998. Logging Planning and Layout Costs for Thinning: Experience from the Willamette Young Stand Project. Forestry Publications Office, Oregon State University, Corvallis, OR, USA. 20 p.
Liu, C.J., 1995. Using Portable Laser EDM for Forest Traverse Surveys. Canadian Journal of Forest Research 25: 753-766.
Liu, C.J., and R. Brantigan, 1996. Using Differential GPS for Forest Traverse Surveys. Canadian Journal of Forest Research 25: 1795-1805.
Mancebo, S., and K. Chamberlain, 2001. Performance Testing of the Trimble Pathfinder Pro XR Global Positioning System Receiver. USDA Forest Service Technical Note. 10 p.
Mikhail, E.M., and G. Gracie, 1981. Analysis and Adjustment of Survey Measurements. Van Nostrand Reinhold Company, New York, USA. 340 p.
Moll, J.E., 1993. Development of an Engineering Survey Method for Use with the Laser Technology, Inc. Tree Laser Device. Master of Science thesis, Department of Civil Engineering, Oregon State University, Corvallis, OR, USA. 74 p.
Stjernberg, E., 1997. A Test of GPS Receivers in Old-growth Forest Stands on the Queen Charlotte Islands. Forest Engineering Research Institute of Canada Special Report 125, Vancouver, BC, Canada. 26 p.
Turcotte, P., 1999. The Use of a Laser Rangefinder for Measuring Wood Piles. Forest Engineering Research Institute of Canada Field Note 76, Pointe-Claire, Quebec, Canada. 2 p.
Wing, M., and L.D. Kellogg, 2001. Using a Laser Range Finder to Assist Harvest Planning. P. 147-150 In Proceedings of the First International Precision Forestry Cooperative Symposium, Seattle, WA, USA.
Poster Abstracts
Can Tracer Help Design Forest Roads?
ABDULLAH E. AKAY, GRADUATE RESEARCH ASSISTANT, DEPARTMENT OF FOREST ENGINEERING, COLLEGE
OF FORESTRY, OREGON STATE UNIVERSITY, CORVALLIS, OR 97331
JOHN SESSIONS, PROFESSOR, DEPARTMENT OF FOREST ENGINEERING, COLLEGE OF FORESTRY, OREGON
STATE UNIVERSITY, CORVALLIS, OR 97331
CPLAN: A Computer Program for Cable Logging Layout Design
WOODAM CHUNG, ASSISTANT PROFESSOR, SCHOOL OF FORESTRY, UNIVERSITY OF MONTANA,
MISSOULA, MT 59812
JOHN SESSIONS, PROFESSOR, DEPARTMENT OF FOREST ENGINEERING, OREGON STATE UNIVERSITY,
CORVALLIS, OR 97331
Abstract: A computerized method for optimizing cable logging layouts using a heuristic network algorithm has been
developed. A timber harvest unit layout is formulated as a network problem. Each grid cell containing timber volume to be
harvested is identified as an individual entry node of the network. Mill locations or proposed timber exit locations are recognized as destinations. Each origin will then be connected to one of the destinations through alternative links representing
alternative cable corridors, harvesting equipment, landing locations, and truck road segments. A heuristic algorithm for network programming is used to solve the cost minimization network problem. A computerized model has been developed to
implement the method. Logging feasibility and cost analysis modules are included in the model in order to evaluate the logging
feasibility of alternative cable corridors and estimate yarding and transportation costs.
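To make the network formulation above concrete, the following sketch (Python) wires one harvested grid cell to a mill through alternative corridors, landings, and road segments and finds the cheapest route. The node names and costs are invented for illustration, and a plain Dijkstra search stands in for the heuristic network algorithm that CPLAN actually uses.

    import heapq

    # Toy network: one origin cell, alternative cable corridors, landings,
    # and road segments leading to a single destination (the mill).
    # Edge weights are invented yarding/transport costs, not CPLAN data.
    graph = {
        "cell_17":    [("corridor_A", 4.0), ("corridor_B", 6.5)],
        "corridor_A": [("landing_1", 2.0)],
        "corridor_B": [("landing_2", 1.5)],
        "landing_1":  [("road_seg_1", 3.0)],
        "landing_2":  [("road_seg_2", 2.5)],
        "road_seg_1": [("mill", 5.0)],
        "road_seg_2": [("mill", 7.0)],
        "mill":       [],
    }

    def cheapest_route(graph, origin, destination):
        """Minimum-cost path by Dijkstra's algorithm (exact search is fine
        for a toy network; a heuristic is needed on real-sized problems)."""
        frontier = [(0.0, origin, [origin])]
        visited = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == destination:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, edge_cost in graph[node]:
                heapq.heappush(frontier,
                               (cost + edge_cost, neighbor, path + [neighbor]))
        return float("inf"), []

    cost, path = cheapest_route(graph, "cell_17", "mill")
    print(" -> ".join(path), "costs", cost)

Repeating this search from every entry node (or solving all origins at once) yields the least-cost assignment of cells to corridors, landings, and exits that the abstract describes.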
List of Contributors
Jeffrey Adams
Virginia Tech
775A Sterling Drive
Charleston, SC 29412
USA
jeadams@vt.edu
Arnab Bhowmick
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195-2100
USA
arnabqis@hotmail.com
Kamal Ahmed
University of Washington
121C More Hall, Box 352700
Seattle, WA 98195-2700
USA
kamal@u.washington.edu
Tom Bobbe
USDA Forest Service
Remote Sensing Applications Center
2222 W. 2300 S.
Salt Lake City, UT 84119
USA
tbobbe@fs.fed.us
Abdullah E. Akay
Oregon State University
Corvallis, OR 97331
USA
akaya@ucs.orst.edu
Jeremy Allan
Intermap Technologies Corp.
Calgary, Alberta, CANADA T2P 1H4
Kevin Boston
Oregon State University
Department of Forest Engineering
213 Peavy Hall
Corvallis, OR 97331-5706
USA
kevin.boston@cof.orst.edu
Hans-Erik Andersen
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195-2100
USA
hanserik@u.washington.edu
David Briggs
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195-2100
USA
dbriggs@u.washington.edu
Kazuhiro Aruga
Oregon State University
Peavy Hall Department of Forest Engineering
Corvallis, OR 97331-5706
USA
aruga@fr.a.u-tokyo.ac.jp
Ward Carson
University of Washington
Pacific Northwest Research Station
Box 352100
Seattle, WA 98195-2100
USA
carsonw@u.washington.edu
R. James Barbour
USDA Forest Service
Pacific Northwest Region
PO Box 3623,
Portland, Oregon 97208-3623
USA
jbarbour@fs.fed.us
Woodam Chung
School of Forestry
University of Montana
Missoula, MT 59812
USA
wchung@forestry.umt.edu
B. Bruce Bare
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195-2100
USA
bare@u.washington.edu
Jennie L. Cornell
Oregon State University
Forest Engineering Operations
Corvallis, OR 97331
USA
bryancornell7014@msn.com
Sean Hoyt
University of Washington
Box 352500
Seattle, WA 98195
USA
naestyoh@u.washington.edu
Elizabeth Coulter
Oregon State University
Department of Forest Engineering
215 Peavy Hall
Corvallis, OR 97333
USA
Elizabeth.Coulter@orst.edu
Loren Kellogg
Oregon State University
Department of Forest Engineering
213 Peavy Hall
Corvallis, OR 97331-5706
USA
loren.kellogg@cof.orst.edu
Bill Dyck
Bill Dyck Ltd.
PO Box 11236
Palm Beach, Papamoa 3003
NEW ZEALAND
billdyck@xtra.co.nz
John R. Erickson
USDA Forest Service
Forest Products Laboratory
One Gifford Pinchot Drive
Madison, WI 53726-2398
USA
Andrei Kirilenko
Purdue University
Department of Forestry and Natural Resources
Forestry Building, 195 Marsteller St.
West Lafayette, IN 47907-2033
USA
kirilenk@fnr.purdue.edu
Alexander Evans
Yale School of Forestry & Environmental Studies
205 Prospect Street
New Haven, CT 06511
USA
Jim Kiser
Oregon State University
Department of Forest Engineering
213 Peavy Hall
Corvallis, OR 97331-5706
USA
Jim.Kiser@cof.orst.edu
Stephen E. Fairweather
Mason, Bruce, & Girard, Inc.
707 SW Washington St., Suite 1300
Portland, OR 97205
USA
sfairweather@masonbruce.com
Bruce Larson
University of British Columbia
2329 West Mall
Vancouver, BC V6T 1Z4
CANADA
blarson@interchange.ubc.ca
John W. Forsman
School of Forestry and Wood Products
Michigan Technological University
Houghton, MI 49931
USA
jwforsman@mtu.edu
Hamish Marshall
Oregon State University
Forest Engineering Department
215 Peavy Hall
Corvallis, OR 97331-5706
USA
hamish.marshall@orst.edu
Jeffrey R. Foster
Forestry Branch, Fort Lewis Military Reservation
Fort Lewis, WA
USA
Joel Gillet
Applanix Corp
85 Leek Crescent
Richmond Hill, ON L4B 3B3
CANADA
JGillet@applanix.com
John Mateski
Western Helicopter Services, INC.
PO Box 369
Newberg, OR 97132
USA
westernhelicopter@earthlink.net
Richard A. Grotefendt
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195
USA
grotefen@u.washington.edu
Brett Martin
Purdue University
2226 Willowbrook Dr. Apt. #192
West Lafayette, Indiana 47906
USA
brettm@fnr.purdue.edu
Douglas J. Martin
Martin Environmental
2103 North 62nd Street
Seattle, WA 98103
USA
doug@martinenv.com
Robert J. Ross
USDA Forest Service
Forest Products Laboratory
One Gifford Pinchot Drive
Madison, WI 53726-2398
USA
rjross@fs.fed.us
Glen Murphy
Oregon State University
Forest Engineering Department
Peavy 271
Corvallis, OR 97331
USA
glen.murphy@orst.edu
Peter P. Siska
Stephen F. Austin State University
1639 North Street
Nacogdoches, TX 75962
USA
siska@sfasu.edu
Peter Schiess
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195
USA
schiess@u.washington.edu
Ross F. Nelson
NASA Biospheric Sciences Branch
Code 923
NASA Goddard Space Flight Center
Greenbelt, MD 20771
USA
ross@ltpmail.gsfc.nasa.gov
Daniel L. Schmoldt
USDA/CSREES/PAS
Instrumentation & Sensors
Mail Stop 2220
Washington, DC 20250-2220
USA
dschmoldt@reeusda.gov
Sam Pittman
University of Washington
Box 352100
Seattle, WA 98195
USA
Sam.Pittman@weyerhaeuser.com
Stephen P. Prisley
Virginia Tech
229 Cheatham Hall
Blacksburg, VA 24061
USA
prisley@vt.edu
Gerard Schreuder
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195
USA
gsch@u.washington.edu
John Punches
Douglas Co Extension
Oregon State University
1134 SE Douglas
Roseburg, OR 97470-4344
USA
John.Punches@oregonstate.edu
John Sessions
Oregon State University
213 Peavy Hall
Corvallis, OR 97331-5706
USA
John.Sessions@cof.orst.edu
Steve Reutebuch
University of Washington
Pacific Northwest Research Station
Box 352100
Seattle, WA 98195-2100
USA
sreutebu@u.washington.edu
Guofan Shao
Purdue University
Department of Forestry and Natural Resources
Forestry Building, 195 Marsteller St.
West Lafayette, IN 47907-2033
USA
gshao@fnr.purdue.edu
Alex Sinclair
Feric Western Division
2601 East Mall
Vancouver, BC V6T 1Z4
CANADA
alex-s@vcr.feric.ca
Luke Rogers
University of Washington
Rural Technology Initiative
Seattle, WA 98195
USA
lwrogers@u.washington.edu
Derek Solmie
Oregon State University
Department of Forest Engineering
215 Peavy Hall
Corvallis, OR 97330
USA
derek.solmie@orst.edu
Bernd-M. Straub
University of Hannover
Institute for Photogrammetry and GeoInformation
Nienburger Strasse 1
Hannover 30167
GERMANY
bernd-m.straub@ipi.uni-hannover.de
Pierre Turcotte
FERIC
580 Boul. St-Jean
Pointe-Claire, QC H9R 3J9
CANADA
pierre-t@mtl.feric.ca
Rien Visser
Virginia Tech
229 Cheatham Hall
Blacksburg, VA 24061
USA
visser@vt.edu
Xiping Wang
University of Minnesota Duluth
One Gifford Pinchot Drive
Madison, WI 53705-2398
USA
xwang@fs.fed.us
Graham West
Forest Research
Private Bag 3020
Rotorua, NEW ZEALAND
Graham.West@ForestResearch.co.nz
Denise Wilson
University of Washington
PO Box 352500
Seattle, WA 98195-2500
USA
wilson@ee.washington.edu
Michael G. Wing
Oregon State University
Forest Engineering Department
213 Peavy Hall
Corvallis, OR 97333
USA
michael.wing@orst.edu
Jianyang Zheng
University of Washington
Department of Civil and Environmental Engineering
Seattle, WA 98195-2700
USA
David Yates
Forest Technology Group
3950 Faber Place Dr.
North Charleston, SC 29405
USA
david.yates@ftgrp.com
List of Attendees
Jeffrey Adams
Virginia Tech
775A Sterling Drive
Charleston, SC 29412
USA
jeadams@vt.edu
Hans-Erik Andersen
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195-2100
USA
hanserik@u.washington.edu
Carolyn Anderson
Weyerhaeuser Company
361 Schooner Cove, NW
Calgary, Alberta T3L123
CANADA
carolynn.anderson@weyerhaeuser.com
Kazuhiro Aruga
Oregon State University
Peavy Hall Dept of Forest Engineering
Corvallis, OR 97331-5706
USA
aruga@fr.a.u-tokyo.ac.jp
B. Bruce Bare
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195-2100
USA
bare@u.washington.edu
Arnab Bhowmick
Stephen F. Austin State University
College of Forestry
1639 North Street
Nacogdoches, TX 75962
USA
arnabqis@hotmail.com
Earl T. Birdsall
Weyerhaeuser Co.
PO Box 9777, WWC-IF6
Federal Way, WA 98063-9777
USA
earl.birdsall@weyerhaeuser.com
Tom Bobbe
USDA Forest Service
Remote Sensing Applications Center
2222 W. 2300 S.
Salt Lake City, UT 84119
USA
tbobbe@fs.fed.us
Andrew Bourque
Potlatch Corporation - Hybrid Poplar Program
PO Box 38
Boardman, OR 97818
USA
andrew.bourque@potlatchcorp.com
David Briggs
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195-2100
USA
dbriggs@u.washington.edu
Anne G. Briggs
PO Box 663
Issaquah, WA 98027
USA
Ward Carson
University of Washington
Pacific Northwest Research Station
Box 352100
Seattle, WA 98195-2100
USA
carsonw@u.washington.edu
Woodam Chung
University of Montana
School of Forestry
32 Campus Drive
Missoula, MT 59812
USA
wchung@forestry.umt.edu
Jennie L. Cornell
Oregon State University
Forest Engineering
Operations
Corvallis, OR 97331
USA
bryancornell7014@msn.com
Elizabeth Coulter
Oregon State University
Department of Forest Engineering
215 Peavy Hall
Corvallis, OR 97333
USA
Elizabeth.Coulter@orst.edu
Christopher Davidson
International Paper
1201 West Lathrop Avenue
Savannah, GA 31415
USA
chris.davidson@ipaper.com
Weihe Guan
Weyerhaeuser Co.
33405 8th Avenue S.
Federal Way, WA 98003
USA
weihe.guan@weyerhaeuser.com
Andrew Hill
University of Washington
Box 352100
Seattle, WA 98195
USA
adh2@u.washington.edu
Bill Dyck
Bill Dyck Ltd.
PO Box 11236
Palm Beach, Papamoa 3003
NEW ZEALAND
billdyck@xtra.co.nz
Olav Albert Høibø
Agricultural University of Norway
Department of Forest Science
P.O. Box 5044
N-1432 AAS
NORWAY
olav.hoibo@isf.nlh.no
Stephen E. Fairweather
Mason, Bruce, & Girard, Inc.
707 SW Washington St., Suite 1300
Portland, OR 97205
USA
sfairweather@masonbruce.com
Sean Hoyt
University of Washington
Box 352100
Seattle, WA 98195
USA
naestyoh@u.washington.edu
Dave Furtwangler
Cascade Timber Consulting Inc
PO Box 446
Sweet Home, OR 97386
USA
dfurtwangler@cascadetimber.com
Andrew Hudak
US Forest Service
Rocky Mountain Research Station
1221 S. Main St.
Moscow, ID 83843
USA
ahudak@fs.fed.us
Joel Gillet
Applanix Corp
85 Leek Crescent
Richmond Hill, ON L4B 3B3
CANADA
JGillet@applanix.com
David Gilluly
Weyerhaeuser Co.
33405 8th Avenue S., WWC 2B2
Federal Way, WA 98003
USA
Richard A. Grotefendt
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195
USA
grotefen@u.washington.edu
Yan Jiang
University of Washington
Box 352100
Seattle, WA 98195
USA
Dick Karsky
USDA Forest Service
5785 Highway 10 West
Missoula, MT 59808
USA
rkarsky@fs.fed.us
Phil Lacy
World Forestry Center
4033 SW Canyon Road
Portland, OR 97221
USA
placy@worldforestry.org
Bruce Larson
University of British Columbia
Vancouver, BC
CANADA
blarson@interchange.ubc.ca
Stephen Lewis
Timberline Forest Inventory Consultants
315-10357-109 Street
Edmonton, Alberta T5J 1N3
CANADA
sjl@timberline.ca
Hamish Marshall
Oregon State University
Forest Engineering Department
215 Peavy Hall
Corvallis, OR 97331-5706
USA
hamish.marshall@orst.edu
Brett Martin
Purdue University
2226 Willowbrook Dr. Apt. #192
West Lafayette, Indiana 47906
USA
brettm@fnr.purdue.edu
Bob McGaughey
University of Washington
Pacific Northwest Research Station
Box 352100
Seattle, WA 98195-2100
USA
mcgoy@u.washington.edu
Kurt Muller
Forest Technology Group
16703 SE McGillivray Blvd. Suite 215
Vancouver, WA 98683
USA
kurt.muller@ftgrp.com
Ewald Pertlik
University of Bodenkultur Vienna
Peter-Jordan-Strasse 70
Vienna A-1190
AUSTRIA
ewald.pertlik@boku.ac.at
Charles Peterson
USDA Forest Service PNW
620 SW Main Street Suite 400
Portland, OR 97205
USA
cepetersen@fs.fed.us
Lester Power
Weyerhaeuser Co.
PO Box 9777
Federal Way, WA 98063-9777
USA
lester.power@weyerhaeuser.com
Steve Reutebuch
University of Washington
Pacific Northwest Research Station
Box 352100
Seattle, WA 98195-2100
USA
sreutebu@u.washington.edu
Luke Rogers
University of Washington
Rural Technology Initiative
Seattle, WA 98195
USA
lwrogers@u.washington.edu
Peter Schiess
University of Washington
Box 352100
Seattle, WA 98195
USA
schiess@u.washington.edu
Glen Murphy
Oregon State University
Forest Engineering Department
Peavy Hall 271
Corvallis, OR 97331
USA
glen.murphy@orst.edu
Daniel L. Schmoldt
USDA/CSREES/PAS
Instrumentation & Sensors Mail Stop 2220
Washington, DC 20250-2220
USA
dschmoldt@reeusda.gov
Megan O’Shea
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195
USA
moshea@u.washington.edu
Gerard Schreuder
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195
USA
gsch@u.washington.edu
Guofan Shao
Purdue University
Department of Forestry and Natural Resources
Forestry Building, 195 Marsteller St.
West Lafayette, IN 47907-2033
USA
gshao@fnr.purdue.edu
Pierre Turcotte
FERIC
580 Boul. St-Jean
Pointe-Claire, QC H9R 3J9
CANADA
pierre-t@mtl.feric.ca
Alex Sinclair
Feric Western Division
2601 East Mall
Vancouver, BC V6T 1Z4
CANADA
alex-s@vcr.feric.ca
Eric Turnblom
University of Washington
College of Forest Resources
Box 352100
Seattle, WA 98195
USA
ect@u.washington.edu
Jack A. Sjostrom
DigitShare / Sentry Dynamics, Inc.
721 Lochsa St., Suite 16
Post Falls, ID 83854
USA
jsjostrom@digitshare.org
Rien Visser
Virginia Tech
229 Cheatham Hall
Blacksburg, VA 24061
USA
visser@vt.edu
Derek Solmie
Oregon State University
Department of Forest Engineering
215 Peavy Hall
Corvallis, OR 97330
USA
derek.solmie@orst.edu
Matt Walsh
University of Washington
Box 352100
Seattle, WA 98195
USA
Xiping Wang
University of Minnesota Duluth
USDA Forest Products Laboratory
Gifford Pinchot Drive
Madison, WI 53705-2398
USA
xwang@fs.fed.us
Brant Steigers
Potlatch Corporation
807 Mill Road
Lewiston, ID 83501
USA
brant.steigers@potlatchcorp.com
Bernd-M. Straub
University of Hannover
Institute for Photogrammetry and GeoInformation
Nienburger Strasse 1
Hannover 30167
GERMANY
bernd-m.straub@ipi.uni-hannover.de
Jack Ward
Temperate Forest Solutions
PO Box 33
Ashford, WA 98304
USA
tfs@rainierconnect.com
Wesley Wasson
VAP Timberland
695 W Satsop Rd
Montesano, WA 98563
USA
wsw@olynet.com
Cheryl Talbert
Weyerhaeuser Co.
PO Box 9777 Mail Stop: CH 2D25
Federal Way, WA 98063-9777
USA
cheryl.talbert@weyerhaeuser.com
Denise Wilson
University of Washington
PO Box 352500
Seattle, WA 98195-2500
USA
wilson@ee.washington.edu
David Yates
Forest Technology Group
3950 Faber Place Dr.
North Charleston, SC 29405
USA
david.yates@ftgrp.com
Michael G. Wing
Oregon State University
Forest Engineering Department 213 Peavy Hall
Corvallis, OR 97333
USA
michael.wing@orst.edu
Second International Precision Forestry Symposium Agenda
Sunday, June 15, 2003
5:00 PM to 7:00 PM
Reception at the UW Waterfront Activities Center
Monday, June 16, 2003
7:00 AM
Registration Desk Opens at Kane Hall room 220
7:00 AM
Continental Breakfast
8:30 AM
Welcome & Introductory Remarks - Dean B. Bruce Bare
8:45 AM
Keynote Speaker - Bill Dyck
Plenary Session A: Precision Operations and Equipment - Moderator, Alex
Sinclair
9:05 AM
Multidat and Opti-Grade: Two Innovative Solutions to Better Manage Forestry Operations - presented by Pierre Turcotte, FERIC, Canada
9:25 AM
A Test of the Applanix POS LS Positioning System for the Collection of Terrestrial Coordinates
Under a Closed Forest Canopy - presented by Stephen E. Reutebuch and Ward W. Carson,
USDA Forest Service, Pacific Northwest Research Station
9:50 AM
Break & Poster Session
10:20 AM
Ground Navigation through the use of Inertial Measurements, a UXO Survey - presented by
Joel Gillet, Applanix Corp.
10:45 AM
Precision Forestry Operations and Equipment in Japan - Kazuhiro Aruga, University of Tokyo
11:10 AM
Precision Forestry Applications: Use of DGPS Data to Plan and Implement Aerial Forest Operations - presented by Jennie L. Cornell
Plenary Session B: Remote Sensing and Measurement of Forest Lands and
Vegetation - Moderator, Tom Bobbe
11:35 AM
Estimating Forest Structure Parameters Within Fort Lewis Military Reservation Using Airborne
Laser Scanner (LIDAR) Data - presented by Hans-Erik Andersen, University of Washington,
College of Forest Resources
12:00 PM
Lunch
1:00 PM
Geo-Spatial Analysis in GIS and LIDAR Remote Sensing using Component Object Modeling of
Visual Basic: Application to Forest Inventory Assessment - presented by Arnab Bhowmick and
Dr. Peter Siska, College of Forestry, Stephen F. Austin State University
1:25 PM
Large Scale Photography Meets Rigorous Statistical Design for Monitoring Riparian Buffers
and LWD - presented by Richard A. Grotefendt, University of Washington
1:50 PM
Forest Canopy Models Derived from LIDAR and INSAR Data in a Pacific Northwest Conifer
Forest - presented by Hans-Erik Andersen, University of Washington, College of Forest Resources
2:15 PM
Fine Tuning Forest Change Detections with a Combined Accuracy Index - presented by
Guofan Shao, Department of Forestry and Natural Resources, Purdue University
2:40 PM
Break & Poster Session
3:10 PM
Automatic Extraction of Trees From Height Data Using Scale Space and Snakes - presented
by Bernd-M. Straub, Institute for Photogrammetry and GeoInformation, Germany
3:35 PM
RFID Research - presented by Sean Hoyt
4:05 PM
Sean Hoyt Tree Tour
4:30 PM
Adjourn
Tuesday, June 17, 2003
7:00 AM
Registration Desk Opens at Kane Hall room 220
7:00 AM
Continental Breakfast
8:05 AM
Keynote Speaker - Dan Schmoldt
Plenary Session C: Terrestrial Sensing, Measurement and Monitoring - Moderator, Steve Reutebuch
8:30 AM
Value Maximization Software - Extracting the Most from the Forest Resource - presented by
Hamish Marshall and Graham West
8:55 AM
Costs and Benefits of Four Procedures for Scanning on Mechanical Processors - presented
by Glen E. Murphy and Hamish Marshall
9:20 AM
Evaluation of Small-diameter Timber for Value-added Manufacturing: A Stress Wave
Approach - presented by Xiping Wang and Robert J. Ross
9:45 AM
Break & Poster Session
10:15 AM
Aroma Tagging and Electronic Nose Technology for Tracking Log and Wood Products: Early
Experience - presented by Glen Murphy
Plenary Session D: Design Tools and Decision Support Systems - Moderator,
Glen Murphy
10:40 AM
Modeling Steep Terrain Harvesting Risks using GIS - presented by Jeffrey Adams, Rien Visser,
and Steve Prisley, Department of Forestry, Virginia Tech
11:05 AM
Use of the Analytic Hierarchy Process to Compare Disparate Data and Set Priorities - presented by Elizabeth Coulter and John Sessions, Department of Forest Engineering, Oregon
State University
11:30 AM
Use of Spatially Explicit Inventory Data for Forest Level Decisions - presented by Bruce C.
Larson, University of British Columbia, Faculty of Forestry
11:55 AM
Lunch
1:00 PM
Elements of Hierarchical Planning in Forestry: A Focus on the Mathematical Model - presented by Sam Pittman, University of Washington, College of Forest Resources
1:25 PM
Update Strategies for Stand-Based Forest Inventories - presented by Stephen E. Fairweather,
Mason, Bruce, & Girard, Inc.
1:50 PM
A New Precision Forest Road Design and Visualization Tool: PEGGER - presented by Luke
Rogers, Geographic Information Scientist, UW Rural Technology Initiative
2:15 PM
Harvest Scheduling with Aggregation Adjacency Constraints: A Threshold Acceptance Approach - presented by Hamish Marshall, Graduate Student, Oregon State University
2:40 PM
Break (Poster Session Breakdown)
3:10 PM
Optimizing Road Network Location in Forested Landscapes - presented by Michael G. Wing,
John Sessions, and Elizabeth Coulter, Oregon State University
3:35 PM
Comparing Forest Area Measurement Techniques - presented by Derek Solmie, Department of
Forest Engineering, College of Forestry, Oregon State University
4:00 PM
Closing Remarks
4:25 PM
Adjourn
Wednesday, June 18, 2003 - Field trip CANCELED