
Procrastination Might Lead to
a Longer and More Useful Life
Prabal Dutta, David Culler, and Scott Shenker
University of California, Berkeley
1
In sensor networks,
energy is the defining constraint
• Battery-operated
  – 2 “AA” batteries (2000 mA-Hr)
  – 10 mA active current
  – 10 µA sleep current
  – 1% duty cycle
• CPU: 10 MIPS
• RAM: 4 KB to 10 KB
• ROM: 32 KB to 128 KB
• Flash: 512 KB to 1 MB
• Radio: 40 kbps to 250 kbps
2
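The battery and duty-cycle figures above imply a multi-year node lifetime. A quick sanity check of that claim, using only the numbers on this slide (the derivation itself is not in the deck):

```python
# Battery lifetime at a 1% duty cycle, using the figures on this slide.
CAPACITY_MAH = 2000.0   # two "AA" cells
ACTIVE_MA = 10.0        # active current
SLEEP_MA = 0.010        # 10 uA sleep current
DUTY = 0.01             # 1% duty cycle

# Time-weighted average current draw.
avg_ma = DUTY * ACTIVE_MA + (1 - DUTY) * SLEEP_MA   # ~0.11 mA
lifetime_years = CAPACITY_MAH / avg_ma / (24 * 365)

print(f"average current: {avg_ma:.4f} mA")
print(f"lifetime: {lifetime_years:.1f} years")   # about 2 years
```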
Many scientific data collection applications
stream sensor data from sensor nodes to sinks
“Great Duck Island” [Szewczyk04]
SELECT *
FROM sensors
SAMPLE PERIOD 5 min

“Redwoods” [Tolle05]

SELECT time, epoch, id, parent, voltage,
       depth, hum, temp, toplight, botlight
FROM sensors
SAMPLE PERIOD 5 min
3
Necessity as the mother of invention:
Sensornets run at a few percent duty cycle
Year   Deployment   MAC      DC     Period (s)
2003   GDI          Polled   2.2%   0.54-1.085
2004   Redwoods     Sched    1.3%   300
2005   FireWxNet    Sched    6.7%   900
2006   WiSe         Sched    1.6%   60
4
Scheduled Communications
DC = tguard/Tpkt + tpkt/Tpkt

[Figure: RX/TX timeline — each communication period Tpkt opens with a
guard interval (G, tguard) followed by the packet exchange (D, tpkt);
the radio sleeps for the remainder of the period.]
5
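The duty-cycle formula above can be evaluated directly. The tguard and tpkt values below are illustrative assumptions (they are not given in the deck); the 5-minute period matches the Redwoods deployment:

```python
# Duty cycle of scheduled communications: DC = tguard/Tpkt + tpkt/Tpkt.
def scheduled_dc(t_guard, t_pkt, T_pkt):
    """Fraction of time the radio is on: one guard interval plus one
    packet time per communication period."""
    return (t_guard + t_pkt) / T_pkt

# Assumed values: 10 ms guard band, 4 ms packet time, 300 s period.
dc = scheduled_dc(t_guard=0.010, t_pkt=0.004, T_pkt=300.0)
print(f"duty cycle: {dc:.6%}")
```

Note how the guard band, not the packet itself, can dominate when clocks drift; that motivates the channel-access discussion later in the deck.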
Polled Communications
[Figure: RX/TX timeline — the receiver briefly polls (P) the channel
for tpoll every Tpoll; a transmitter precedes each packet (tpkt) with
a preamble of up to Tpoll + tpoll so the receiver’s next poll detects
it and stays on for the data (D).]
6
Even at 1-2% duty cycles,
idle listening dominates power budget*
[Figure: power budget breakdown for Low-Power Listening — idle
listening is the dominant component.]
* See paper for detailed derivations
R. Szewczyk, A. Mainwaring, J. Polastre, D. Culler,
“An Analysis of a Large Scale Habitat Monitoring Application”,
ACM SenSys’ 04, November, 2004, Baltimore, MD
7
Procrastinate by decoupling
sensing and sending periods
SELECT nodeid, timestamp, temperature,
       humidity, pressure, totalrad, photorad
FROM sensors
SAMPLE PERIOD 5 min
REPORT PERIOD 1 day
8
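The decoupled query above separates how often a node senses from how often it transmits. A minimal sketch of that buffering discipline, assuming an in-memory buffer and hypothetical timer callbacks (the deck does not specify an implementation):

```python
# Sketch of decoupled SAMPLE and REPORT periods: readings are buffered
# locally and shipped in one batch per reporting period.
SAMPLE_PERIOD_S = 5 * 60          # SAMPLE PERIOD 5 min
REPORT_PERIOD_S = 24 * 60 * 60    # REPORT PERIOD 1 day

buffer = []

def on_sample_timer(reading):
    buffer.append(reading)        # cheap local store instead of a radio send

def on_report_timer(send):
    send(list(buffer))            # one batched transmission per day
    buffer.clear()

# A day's worth of 5-minute samples arrives in a single report:
for i in range(REPORT_PERIOD_S // SAMPLE_PERIOD_S):
    on_sample_timer(i)
sent = []
on_report_timer(sent.append)
print(len(sent[0]))               # 288 readings per daily report
```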
Procrastination:
Opportunities and challenges across the network stack
9
Application
Transport
Network
Link
10
Network layer protocols are chatty
• Frequent routing beacons ensure availability
• But topology maintenance is expensive
• Why maintain the topology if it goes unused?
• Delay topology (tree) formation until needed
• However, interactive use could justify more
frequent communications
11
Network layer challenges:
Establishing the topology
• How would you quickly establish a gradient?
• Would you pick aggressive (long) links?
• Would you cache old state (perhaps a day old)?
• Would you try to use old state first?
• Would you prefer to build a new topology?
12
Quickly flooding the routing beacon with Ripple
• Start with a basic flooding protocol
• Modify the retransmission timer
  – Delay is proportional to RSSI, which
    approximately “schedules” the network
• Related work
– Trickle [Levis04]
– SRM [Floyd97]
13
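One reading of the RSSI-proportional timer is that nodes that heard the beacon weakly, and are thus likely far from the sender, rebroadcast first, pushing the flood outward over long links while nearby nodes wait. A sketch under that assumption (the constants and function names are illustrative, not the authors' code):

```python
import random

# Assumed reading of the Ripple retransmission timer: stronger RSSI
# (a likely-nearby receiver) -> longer delay before rebroadcasting,
# so distant receivers extend the flood first.
MAX_DELAY_S = 0.050                  # assumed maximum backoff
RSSI_MIN, RSSI_MAX = -100.0, -30.0   # assumed dBm range

def retransmit_delay(rssi_dbm):
    # Normalize RSSI to [0, 1]; stronger signal -> longer delay.
    strength = (rssi_dbm - RSSI_MIN) / (RSSI_MAX - RSSI_MIN)
    jitter = random.uniform(0, 0.1 * MAX_DELAY_S)  # break ties
    return strength * MAX_DELAY_S + jitter

# A weak (distant) receiver always fires before a strong (nearby) one:
print(retransmit_delay(-95.0) < retransmit_delay(-40.0))   # True
```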
Application
Transport
Network
Link
14
Transport layer opportunities
• Improve reliability by
using “good” links quickly
• Sending data hop-by-hop may achieve
  higher throughput since the RTT is smaller
• Avoid intra-path interference
due to multi-hop wireless flows
• Avoid expensive end-to-end retransmissions
15
But how to buffer route-through
bundles with limited storage?
• Add cheap NAND flash to nodes
– 1GB spot price ~ $7
– > 40% annual price decline
– 68% decline in last year
• Energy-efficient
– 100x less costly to write a byte
than send a byte over radio
– Can write entire contents with just
a few percent of energy in AA
batteries
Source: DRAM Exchange
16
Procrastination: opportunities
and challenges across the network stack
Application
Transport
Network
Link
17
At the application layer, compress
sensor readings before transmission
[Figure: temperature and light sensor traces; source: [Tolle05]]
18
Lots of CPU cycles for compressing
prior to storage or communications
• Prior to transmission
– O(100K) instructions for O(1 byte) transmitted
– Massive asymmetry in computation and communication
– Unlikely to change: Moore’s Law vs Maxwell’s Law
• Prior to storage
– O(10K to 1M) instructions for O(1 byte) written
– Massive asymmetry in computation and storage
– Unlikely to change: [Super-]Moore’s Law vs Moore’s Law
• But raises some new questions
– What compression algorithms should be used?
– What logical and physical data structures should be used?
19
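With roughly 100K instructions available per transmitted byte, even a trivial compressor pays for itself. A minimal sketch of delta-encoding slowly varying readings before transmission (an illustration of the asymmetry argument, not the deployments' actual compressor; the sample values are made up):

```python
# Delta-encode slowly varying sensor readings: consecutive ADC values
# usually differ by a few counts, so most deltas fit in one byte.
def delta_encode(samples):
    out = [samples[0]]                              # first value verbatim
    out += [b - a for a, b in zip(samples, samples[1:])]
    return out

def delta_decode(deltas):
    vals = [deltas[0]]
    for d in deltas[1:]:
        vals.append(vals[-1] + d)
    return vals

readings = [5012, 5013, 5013, 5015, 5014, 5016]     # assumed ADC counts
encoded = delta_encode(readings)
assert delta_decode(encoded) == readings            # lossless round trip
print(encoded)   # [5012, 1, 0, 2, -1, 2]
```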
Application
Transport
Network
Link
20
Picking good links quickly
might not be too difficult
21
Caveat: Channel access costs still
dominate infrequent communications
[Figure: node-local time vs. global “real” time — a clock running fast
at rate α > 1 reaches αt while one running slow at rate β < 1 reaches
only βt, so a receiver’s ON window must open early and close late
enough to cover both extremes (α > 1 > β).]
Min(RXon) = 2 x MaxDrift + Startup + Jitter
Research on:
(1) Minimizing clock skew
(2) Speeding up radio startup
(3) Providing bus arbitration
(4) Lowering RSSI detection cost
*CSMA-LPL costs don’t scale with time,
but have high fixed costs for transmission
22
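The Min(RXon) expression above can be evaluated with concrete numbers. The parameter values below are illustrative assumptions (a typical 50 ppm crystal and a 5-minute rendezvous period), not figures from the deck:

```python
# Minimum receiver-on window for a scheduled rendezvous, per the slide:
#   Min(RXon) = 2 x MaxDrift + Startup + Jitter
def min_rx_on(max_drift_s, startup_s, jitter_s):
    # The receiver must cover drift in either direction (hence the 2x),
    # plus radio startup time and scheduling jitter.
    return 2 * max_drift_s + startup_s + jitter_s

# Assumed: 50 ppm crystal over a 300 s period, 2 ms startup, 1 ms jitter.
drift = 50e-6 * 300.0                    # 15 ms of accumulated drift
window = min_rx_on(drift, 0.002, 0.001)
print(f"min RX-on window: {window * 1000:.1f} ms")   # 33.0 ms
```

The drift term dwarfs the others, which is why the slide's research list leads with minimizing clock skew.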
Conclusion
• Delay offers many optimization opportunities
– Link
– Network
– Transport
– Application
• But standby power dominates budget
– 90% of node power
– 25% of laptop power
– 8% of the UK electricity in 2004
23
Discussion
24
The total load on a 1-hop node
in an n-hop network is: 2(n² − 1) + 1

[Figure: concentric hop rings around the sink — the 1-hop ring has
area ∝ 1, while the 2-hop through n-hop rings together have
area ∝ n² − 1.]

n             1   2   3    4    5    6
2(n² − 1)+1   1   7   17   31   49   71

1-hop area must:
(a) route through (RX & TX) 2-hop to n-hop traffic: 2(n² − 1)
(b) originate (TX) 1-hop traffic: 1
25
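The load formula and table above can be reproduced directly:

```python
# Total load on a 1-hop node in an n-hop network: it routes through
# the 2(n^2 - 1) units of traffic generated beyond the first hop
# (each received and re-sent, hence the factor of 2) and originates 1.
def one_hop_load(n):
    return 2 * (n * n - 1) + 1

print([one_hop_load(n) for n in range(1, 7)])   # [1, 7, 17, 31, 49, 71]
```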
Rapid time synchronization
• [Kusy06, Sallai06]
• Achieved
– 2.7 µs average error
– 26 µs max error
• Over
– 11 hop network
– 45 nodes
• In 4 seconds
• With two-phase flood
• Using fixed neighbors
• But how to flood without
fixed neighbors?
26
What could storage enable?
Delay expensive operations to reduce overhead
• Many operations have high startup costs
– Sending packets
– Writing to flash or disk
– Acquiring sensor data
• Often better to postpone expensive operations
• But what to do with all the flowing data? Store it
• Related work
– Nagle’s algorithm
– Cache coherence
– Disk buffering
– The FedEx truck
27
What else could storage enable?
Break synchrony between subsystems
• Many subsystems are synchronous
– Sensor and signal conditioning
– Signal conditioning and A/D conversion
– Packet arrival and processing
• Asynchrony allows each element to operate optimally.
Storage is the glue between elements
• Related work
– Elastic pipelines [Sutherland89]
– Bulk synchronous-parallel [Valiant89]
– Queue element in Click [Morris99]
– Fjords in streaming [Madden02]
– Asynchronous computer architectures
28
Digging deeper into overhead
Radio Capacity / Node Data Rate = 880 bits/sec ÷ 0.8 bits/sec = 1,100

Typical values:
• Data generation rate (raw): 20 bytes / 5 min = 0.53 bits/sec
• Packet overhead (headers): 50% (17 B header / 36 B data)
• Data generation rate (with overhead): 0.80 bits/sec
• Radio data rate (raw, CC1000): 40 kbps
• Radio data rate (duty cycled at 2.2%): 880 bits/sec
• Over-provisioning factor (1-hop): 1,100×
• Over-provisioning factor (2-hop): 157×
• Over-provisioning factor (3-hop): 64×
• Over-provisioning factor (4-hop): 35×
• Over-provisioning factor (5-hop): 22×
…
Optimally provisioned for 23-hop, uniformly distributed network
29
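The slide's arithmetic can be reproduced in a few lines; the multi-hop factors divide the 1-hop slack by the per-node load 2(n² − 1) + 1 from the earlier slide (the 0.80 bits/sec figure is the slide's own rounded value):

```python
# Reproducing the over-provisioning arithmetic on this slide.
RAW_BPS = 20 * 8 / (5 * 60)     # 20 bytes / 5 min ~= 0.53 bits/sec
DATA_BPS = 0.80                 # with ~50% header overhead (slide's figure)
RADIO_BPS = 40_000 * 0.022      # 40 kbps duty cycled at 2.2% = 880 bits/sec

def overprovision(n):
    # A 1-hop node carries 2(n^2 - 1) + 1 units of traffic in an
    # n-hop network, so the slack shrinks quadratically with depth.
    load_bps = DATA_BPS * (2 * (n * n - 1) + 1)
    return RADIO_BPS / load_bps

for n in range(1, 6):
    print(f"{n}-hop: {overprovision(n):.0f}x over-provisioned")
```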
Bottom line:
Streaming is not efficient
A typical 1% duty cycle translates to over 14 minutes of daily radio
on-time. At the 2.2% duty cycle above, the radio is on for roughly 32
minutes a day, yet transmitting the node’s ~8.6 KB of daily data at
40 kbps takes only ~1.7 seconds:

Radio On-Time / Radio Active Time ≈ 1,900 s ÷ 1.7 s ≈ 1,100

Streaming delivers a trickle through a fire hose
30
Streaming every sensor sample is inefficient
• TCP doesn’t send single-byte packets (exception:
TCP_NODELAY); bytes are coalesced into larger
packets
• Operating systems don’t write each block to disk
  immediately; they cache multiple writes and flush them together
• College students don’t do their laundry daily;
they wait until their underwear runs out
31
Related work: Delay-Tolerant Networking
• Others [Fall03] have suggested DTN be used by necessity
  when:
  – No contemporaneous path from source to sink is available
  – End-to-end round-trip times exceed a few seconds
  – Nodes exhibit mobility over short time scales
  – Links exhibit high loss rates over short periods of time
• Others [Mathur06] have suggested NAND flash be used
  because of its energy efficiency; this proposal explores how
• This proposal suggests DTN be used by choice when:
– Delay is tolerable
– Energy-efficiency is important
– The network is otherwise over-provisioned
• The literature offers no characterization of the
  energy efficiency of DTN relative to streaming data transfer
32