Sampling Circuits for 1D and 2D Sensors
for Low-Power Purpose
Laurent Fesquet¹·², Amani Darwish¹·², Gilles Sicard³
¹ Univ. Grenoble Alpes, TIMA, F-38000 Grenoble, France
² CNRS, TIMA, F-38000 Grenoble, France
Email: firstname.name@imag.fr
³ CEA, LETI, F-38000 Grenoble, France
Email: firstname.name@cea.fr
Abstract — Sampling is becoming one of the most important topics for the Internet of Things (IoT). Indeed, the amount of data produced daily is so huge that the cost of processing it will deeply affect electricity generation in the near future. Big data is already a reality, but a good way to mitigate this incredible data flow is to sample the information differently. This article presents, based on asynchronous analog-to-digital conversion, a way to limit the data flow and the power consumption of a host of sensors. With an adequate sampling technique, such as a level-crossing sampling scheme, a drastic reduction in activity and power is feasible. An image sensor using 1-level crossing sampling is given as an illustration.
Keywords — sampling circuits, level-crossing sampling scheme, analog-to-digital converters, asynchronous logic, power consumption, energy.
I. INTRODUCTION
Today, our digital society exchanges more and more data. The data flow is increasing faster than ever before. The amount of data is extremely large, and in the near future most data exchanges will be directly operated by technological equipment, robots, etc. We are now opening the doors of the Internet of Things. This data orgy already wastes a lot of power and contributes to a non-ecological approach to our digital life. Indeed, the Internet and the new technologies consume about 10% of the electrical energy produced in the world, and this consumption will grow very quickly if nothing is done. New data are produced every day, and the data deluge generated by our technological environment requires more and more power for processing, storage and transmission. The time has come to mitigate the data flow and to contain the electrical power consumption of our communicating objects.
Design solutions already exist to enhance the energy performance of electronic circuits and systems. Nevertheless, another way to reduce energy is to completely rethink the sampling techniques and the digital processing chains. As our digital life is dictated by the Shannon theory, we produce more digital data than expected, and more than necessary. Because we produce useless data, we generate more computation, more storage, more communication and also more power consumption. By disregarding the Shannon theory, new sampling and processing techniques can be explored. Some ideas have already been explored theoretically [1] and more practically [2, 3]. Nevertheless, the tight integration between the sensor and the analog-to-digital conversion is probably the first and most important stage of this revolution. In the sequel, this paper presents possible adaptive architectures that fit application requirements for sampling analog signals in 1D and in 2D.

978-1-4673-7353-1/15/$31.00 (c) 2015 IEEE
II. PRINCIPLES AND ARCHITECTURE
A. Non-uniform sampling
The principle of uniform sampling is shown in Fig. 1a: samples are evenly spaced in time because sampling is triggered by an external clock at a fixed period T_sample. For non-uniform sampling (Fig. 1b), quantization levels are disposed along the amplitude range of the signal. A sample is captured only when the analog input signal Vin crosses one of these levels. Sayiner et al. named this principle the "level-crossing sampling scheme" [3]. Contrary to classical Nyquist sampling, samples are not uniformly spaced in time, because they depend on the signal variations: the sharper the signal, the closer the samples. Thus, together with the value of the sample a_n, the time Δt_n elapsed since the previous sample a_{n-1} must also be recorded. A local timer of period T_C is dedicated to this task.
Fig. 1: Regular sampling (a) vs. irregular sampling (b).
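As a minimal sketch of this principle (in Python; all names are ours, not from the paper), a sample couple (level value, elapsed timer ticks) is emitted only when the input crosses a quantization level, so a static signal produces no activity at all:

```python
def level_crossing_sample(vin, t, levels, t_c):
    """Emit one (level, delta_t_ticks) couple per level crossing.
    delta_t is the time since the previous sample, quantized to the
    local timer period T_C (hence known with an error in [0, T_C])."""
    samples = []
    t_last = t[0]
    for i in range(1, len(t)):
        lo, hi = sorted((vin[i - 1], vin[i]))
        for lev in levels:
            if lo < lev <= hi:  # crossed in either direction
                ticks = int((t[i] - t_last) // t_c)
                samples.append((lev, ticks))
                t_last = t[i]
                break  # simplification: at most one level per time step
    return samples
```

Applied to a sinusoid, the function produces a handful of samples per period, clustered where the signal moves; applied to a constant input, it produces none.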
With a classical A-to-D converter, considering an ideal clock and an ideal sample-and-hold, the time instants are perfectly known. The only imprecision is due to the quantization noise added during the A/D operation (clock skew is ignored). It is characterized by the Signal-to-Noise Ratio (SNR) [1], which only depends on the resolution of the converter:

SNR_dB = 1.76 + 6.02·N (1)

where N is the number of bits and the SNR_dB is given for a sinusoid. For a non-uniform A-to-D converter (often called A-ADC, for Asynchronous ADC), the conversion of a sample is triggered when a reference level is crossed by the signal. The amplitude of the sample is then precise (if ideal levels are considered), but the time elapsed since the previous sample is quantized according to the precision T_C of the timer. Δt_n is known with an error δt, which belongs to the interval [0, T_C]. The SNR relation must therefore be rewritten. First, the error δt in time can be translated into an error in amplitude δV according to:

δV = (dVin/dt)·δt (2)

where dVin/dt is the input signal slope. Then dVin/dt and δt can be considered as independent random processes, so the quantization noise power becomes:

P(δV) = P(dVin/dt)·P(δt) (3)

δt is a random variable uniformly distributed over [0, T_C], thus:

P(δt) = T_C²/3 (4)

As in the synchronous case, the SNR_dB is defined as:

SNR_dB = 10·log(P(Vin)/P(δV)) (5)

Using Eqs. (3) and (4), we get:

SNR_dB = 10·log(3·P(Vin)/P(dVin/dt)) + 20·log(1/T_C) (6)

If we determine the SNR_dB for a sinusoid of frequency f, relation (6) becomes SNR_dB = −11.2 − 20·log(f·T_C). The first term of Eq. (6) is only determined by the statistical properties of the input signal Vin. The SNR depends only on the timer period T_C, and neither on the number of quantization levels nor on their positions. Thus, for a non-uniform conversion, the SNR can be externally tuned by changing the period T_C of the timer. Table 1 summarizes the main features of the two conversion types.
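To make the derivation concrete, the sketch below (Python; function names are ours) evaluates Eq. (1) and Eq. (6) and checks that, for a sinusoid of amplitude A (where P(Vin) = A²/2 and P(dVin/dt) = (2πfA)²/2, so A cancels out), Eq. (6) collapses to the closed form −11.2 − 20·log(f·T_C):

```python
import math

def snr_db_uniform(n_bits):
    """Eq. (1): ideal N-bit uniform converter, sinusoidal input."""
    return 1.76 + 6.02 * n_bits

def snr_db_level_crossing(p_vin, p_slope, tc):
    """Eq. (6): SNR of a level-crossing converter with timer period T_C."""
    return 10 * math.log10(3 * p_vin / p_slope) + 20 * math.log10(1 / tc)

def snr_db_sinusoid(f, tc):
    """Closed form of Eq. (6) for a sinusoid of frequency f."""
    return -11.2 - 20 * math.log10(f * tc)

# Example: 1 kHz sinusoid, 1 µs timer period, unit amplitude
f, tc, a = 1e3, 1e-6, 1.0
p_vin = a ** 2 / 2                        # signal power
p_slope = (2 * math.pi * f * a) ** 2 / 2  # slope power
```

With these values both expressions of Eq. (6) agree to within the rounding of the −11.2 constant.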
TABLE 1: CHARACTERISTICS OF BOTH TYPES OF SAMPLING.

                     Regular sampling    Irregular sampling
Conversion trigger   clock               level crossing
Amplitude            quantized           exact value
Time                 exact value         quantized
SNR dependency       number of bits      timer period
Converter output     amplitude           (amplitude, time)

B. Asynchronous A-to-D conversion
Many A-to-D architectures can be used to implement the non-uniform sampling scheme presented in the previous paragraph [2, 4, 5, 6]. Our architecture (see Fig. 2) is fully asynchronous thanks to the local synchronization implemented between the blocks. The A-to-D converter is composed of a difference quantifier, a state variable modeling the inner state and providing a digital reference Vnum (which is technically a look-up table), and a Digital-to-Analog Converter (DAC) processing the digital signal to produce a reference voltage for the difference quantifier. The converter resolution can be set by the user in order to fit the application requirements. This means that the spacing between the thresholds is not necessarily regular. In order to ease the presentation of the A-to-D converter, we consider in the sequel that all the thresholds are uniformly spaced and that the Vnum signal is coded on M bits. Nevertheless, the reasoning applies to non-evenly-spaced thresholds without much effort. With M bits and an A-to-D dynamic range ΔVin, the quantization step q (or LSB) is:

q = ΔVin / (2^M − 1) (7)
Fig. 2: Block diagram of the asynchronous ADC
The output digital value Vnum is converted to Vr by the DAC and compared to Vin. If the difference between them is greater than ½·q, the state variable is incremented (up='1'); if it is lower than −½·q, it is decremented (dwn='1'). In all other cases, nothing is done (up=dwn='0'): the converter output signal Vnum remains constant and there is no further activity.
The output signal is composed of couples (bi, Δti), where bi is the digital value of the sample, with Vnum = {bi} for i ∈ N, and Δti the time elapsed since the previous converted sample bi−1, given by the timer. Notice that the term A-ADC defines not only the non-uniform sampling scheme but also the microelectronic implementation style. As for any asynchronous digital circuit,
the key point of this converter is that information transfer is
locally managed with a bi-directional control signalling. Each
“data” signal is associated with two “control” signals: a
request (Req. in Fig. 2) and an acknowledgement (Ack. in Fig.
2). A first stage sends a request (Req.=’1’) to a second stage
when data are ready to be processed. The second stage sends an acknowledgment (Ack.='1') to the first stage when the data have been processed, which indicates that it is ready to process new data. Let δ be the total delay of the conversion loop. It is defined as the time elapsed between the crossing of a level by Vin and the instant when the correction on the reference Vr is effective. When a sample conversion is triggered, the input signal Vin must not cross any quantization level until the ADC is ready to process another sample. Thus, the slope of Vin must verify a "tracking condition" defined as:

dVin/dt ≤ q/δ (8)

(see [2]). This corresponds to the technical implementation limits of the A-ADC, similar to the maximum sampling frequency of a classical ADC.
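A behavioral sketch of this conversion loop (Python; the names are ours and the DAC is idealized as Vr = Vnum·q) ties together Eq. (7) and Eq. (8): the state variable only moves, and an output couple is only produced, when the input drifts more than half an LSB away from the reference:

```python
def a_adc_track(vin, dt, m_bits, dv_range, loop_delay):
    """Behavioral sketch of the tracking A-ADC of Fig. 2.
    vin: discretized analog input, dt: simulation time step.
    Returns the (Vnum, elapsed-steps) couples; the in-loop assert
    checks the tracking condition dVin/dt <= q/delta (Eq. 8)."""
    q = dv_range / (2 ** m_bits - 1)  # Eq. (7): quantization step (LSB)
    vnum = 0                          # state variable (digital reference)
    out, elapsed = [], 0
    for i in range(1, len(vin)):
        slope = abs(vin[i] - vin[i - 1]) / dt
        assert slope <= q / loop_delay, "tracking condition (Eq. 8) violated"
        elapsed += 1
        diff = vin[i] - vnum * q      # idealized DAC: Vr = Vnum * q
        if diff > q / 2:              # up='1': increment the state variable
            vnum += 1
            out.append((vnum, elapsed)); elapsed = 0
        elif diff < -q / 2:           # dwn='1': decrement
            vnum -= 1
            out.append((vnum, elapsed)); elapsed = 0
        # otherwise up=dwn='0': Vnum unchanged, no activity
    return out
```

A slow ramp over the full range of a 3-bit converter produces exactly 2^M − 1 = 7 couples, one per threshold crossing, while a constant input produces none.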
C. The "good" A-ADC performances
As explained in the previous section, the A-ADC activity, power consumption and silicon area are reduced because of these four features:
i- the samples are only processed when useful, thanks to the level-crossing sampling scheme;
ii- a sample conversion only needs one cycle for the A-ADC, whatever the number of bits, whereas it needs N cycles for an N-bit synchronous Successive Approximation ADC;
iii- the S/H circuitry is useless for the A-ADC, which contributes to keeping the area as small as possible;
iv- the A-ADC hardware complexity does not depend on the ADC resolution because, with non-uniform sampling, only the timer period T_C determines the SNR.
All these characteristics lead to a significant reduction in area, activity and power, without even considering the benefits on the signal processing chain. In order to improve the system performance, it is also possible not only to rethink the ADC itself, but also to deeply embed the ADC with the sensor. When designing MEMS (Micro-Electro-Mechanical Systems) or sensors, it is a good approach to tightly integrate the analog-to-digital conversion features at a very early stage in the design flow. In order to illustrate this proposal, an example based on image sensors is described in the sequel.
III. SAMPLING IN 2D
A. Image sensor state of the art
CMOS-based active pixel sensors (APS) have been used since the 1980s as an alternative to charge-coupled device (CCD) technology. An image sensor system is composed of a pixel matrix and a reading system dedicated to extracting and processing the sensor information. The classical behavior of an imaging system is based on extracting the sensor information after a pre-defined time step, the so-called integration time, and later on performing the reading process. This sequence defines the sensor frame rate. The integration time is the time elapsed between the sensor initialization phase and the beginning of the reading phase.
During the integration phase, the pixels convert their associated luminance into an electrical signal. Then, during the reading phase, the readout system converts the analog pixel value into a digital one using an ADC. However, the capabilities of the image sensor chain are not limited to data extraction and conversion: CMOS technology enables various applications in digital imaging thanks to its ease of integration as well as its high speed and low-power operation. Therefore, several works have provided image sensors with additional features such as edge detection, region-of-interest (ROI) detection, spatial redundancy elimination, etc. However, the classic image sensor system still suffers from numerous limitations. The readout circuit described above reads the whole image for each frame. This results in a huge amount of useless data and hence increases the storage needs and the power consumption. Since power consumption is becoming more and more crucial, especially for mobile applications including embedded cameras, researchers have provided solutions to reduce the power consumption or increase the sensor speed.
B. The asynchronous image sensors
A new type of image sensor has recently been introduced: the asynchronous image sensor. It is one of the most promising alternatives to classical image sensors. Its concept is most of the time based on associating the sensor activity with the motion in the scene. In other words, the scene activity triggers the image sensor processing. The pixels of asynchronous image sensors behave as event detectors. In this case, asynchronous pixels trigger the readout circuit by sending a reading request. Thus, the asynchronous pixels control the sensor data flow according to the scene [7]. For most asynchronous image sensors, the reading request management is performed thanks to the Address Event Representation (AER) protocol, originally used for communication between chips. After the reception of the reading request, an acknowledgment signal that initiates the pixel functioning sequence is sent to the pixel. The communication between the pixels and the reading system, and the management of simultaneous reading requests, are established using an AER tree arbiter [8].
Moreover, for an asynchronous image sensor, the analog-to-digital conversion of the pixel value can be replaced by a time-to-digital conversion. In other words, the pixel integration time is used to represent the light intensity instead of the converted voltage value, as in [9] [10]. Since the ADC is the most power-consuming component in a conventional image sensor reading system, replacing it with a time-to-digital conversion results in a lower-power image sensor.
Another strength of the asynchronous image sensor resides in its dynamic range, i.e. the ratio between the smallest and the largest possible values perceived by the image sensor. For a classic image sensor, the dynamic range is unfortunately limited by the pre-defined integration time. However, since an asynchronous pixel is able to determine its reading instant by itself, its analog value is coded with an optimal integration time. Therefore, the asynchronous image sensor turns out to be the image sensor with the "perfect" dynamic range [11].
C. Reducing the asynchronous image sensor power
The image sensor power consumption strongly depends on its activity, its size (the larger the sensor, the higher the throughput) and its analog-to-digital conversion technique. Hence, we aim to reduce the sensor data flow not only by processing the relevant pixels but also by adding image compression techniques. Actually, the event-driven behavior of an asynchronous image sensor offers the ability to integrate efficient image compression techniques and accordingly to reduce the sensor temporal and spatial redundancies.
On the one hand, temporal redundancy is the repetition of a pixel value during several consecutive frames. This redundancy can be eliminated at the pixel level in order to perform video compression. During the event detection, a comparison between the current and the last pixel values is performed. Then a request is sent if the two compared values are different. Otherwise, a new integration is forced by a pixel self-initialization [12].
On the other hand, spatial redundancies are the replication of the same pixel value during the reading process. These redundancies are observed at the image sensor level and can be removed by limiting the reading system activity to one reading per pixel value.
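The temporal redundancy elimination described above can be sketched at a high level (Python; this models the frame-to-frame filtering effect, with hypothetical names, not the pixel-level circuit of [12]): only pixels whose value changed since the previous frame generate a request.

```python
def temporal_redundancy_filter(prev_frame, curr_frame):
    """Keep only the pixels whose value differs from the previous
    frame; unchanged pixels self-initialize without being read
    (temporal redundancy elimination).
    Frames are dicts mapping pixel address -> pixel value."""
    return {addr: v for addr, v in curr_frame.items()
            if prev_frame.get(addr) != v}
```

For a static scene the filter output is empty, which is exactly the activity reduction the event-driven sensor exploits.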
IV. 1-LEVEL CROSSING SAMPLING SCHEME FOR ASYNCHRONOUS IMAGE SENSORS
In previous works [10] and [13], we introduced a new sampling technique for reading images which does not require the AER arbiter tree and avoids pixel priority issues and timing errors. This special sampling technique, based on a 1-level crossing sampling scheme, performs spatial redundancy cancelation without any complicated circuitry. Indeed, the approach is based on a special pixel able to send a request when its analog value reaches a threshold corresponding to a certain amount of accumulated photons. When the readout block receives the request, the latter is time stamped in order to determine the integration time. Then the integration time is digitally computed and the pixel luminance can be obtained without any classical ADC (the ADC is one of the blocks contributing most to the power consumption). Indeed, the analog-to-digital conversion is performed thanks to a time-to-digital conversion.

A. Time-to-Digital conversion
The reading system owns a time-stamping mechanism in order to encode the pixel luminance values. The pixel values are encoded by the time elapsed during the integration phase. The pixel integration time is computed as the time difference between the previous reset phase and the current request instant. It is important to note that the instant of the pixel request is also considered as the instant of the next reset phase. Since each asynchronous pixel goes through a self-reset phase, we need to store each pixel reset instant along with its request instant. Two memories are therefore required: the reset instant memory (RTM) and the integration time value memory (ITVM) (see Fig. 3).
Once a pixel request occurs, we compute the integration time value and then update both memories. The integration time value is obtained by first reading the last pixel reset instant, and then calculating the difference between the latter and the instant of the current pixel request. The result is the pixel integration time value. Then the RTM and ITVM are updated with the new pixel information. The content of the integration time value memory is used for displaying the final image.
Fig. 3: Architecture of the asynchronous image sensor
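The time-to-digital readout described above can be sketched as follows (Python; class and method names are illustrative, not from the paper): on each request, the integration time is the difference between the request instant and the stored reset instant, and the request instant becomes the next reset instant.

```python
class TimeToDigitalReadout:
    """Sketch of the time-stamping readout: the pixel value is encoded
    by its integration time (current request instant minus last reset
    instant). RTM stores reset instants, ITVM the integration times."""
    def __init__(self):
        self.rtm = {}   # reset-time memory: pixel address -> last reset instant
        self.itvm = {}  # integration-time value memory: address -> pixel value

    def on_request(self, addr, t_request):
        t_reset = self.rtm.get(addr, 0)          # pixels start reset at t = 0
        self.itvm[addr] = t_request - t_reset    # integration time = pixel value
        self.rtm[addr] = t_request               # request instant = next reset
        return self.itvm[addr]
```

A bright pixel accumulates charge quickly and requests early (small integration time); a dark pixel requests late, so the ITVM content directly encodes the luminance map.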
B. Spatial redundancy cancelation
The Pixel Request Processing Block (PRPB) mainly manages the request reception. After receiving at least one request, the encoder notifies the time stamp block. The latter then sends the request instant to a FIFO in the PRPB. This value is later used during the integration time calculation. At this stage, the PRPB also performs the spatial redundancy cancelation. All the active pixels with the same request time are grouped. Afterwards, the PRPB stores all the active pixel addresses in the PRPB FIFO so that they are saved along with their associated request instant and later used with the RTM and ITVM memories. Finally, the PRPB acknowledges the active pixels in order to end the reading phase and start the pixel self-initialization. Consequently, we limit the pixel data flow not only by reading the relevant pixels but also by reading the value only once for a group of pixels, thanks to the spatial redundancy cancelation method.
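The grouping step can be sketched as follows (Python; an illustrative model of the effect, not the PRPB hardware): pixels whose requests carry the same time stamp share the same integration time, so their value needs to be recorded only once per group.

```python
from collections import defaultdict

def group_requests_by_instant(requests):
    """Spatial redundancy cancelation sketch: group the addresses of all
    active pixels sharing the same request instant, so that one
    (instant, addresses) record replaces one record per pixel.
    requests: iterable of (pixel address, request instant) pairs."""
    groups = defaultdict(list)
    for addr, t_request in requests:
        groups[t_request].append(addr)
    return dict(groups)
```

For a uniform region of the scene, many pixels request at the same instant, so the output data flow shrinks roughly in proportion to the region size.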
C. Simulation results
In order to demonstrate the efficiency of our approach, we simulated the pixel matrix behavior with Matlab. The pixel matrix produces the request flow for various images. In order to create a realistic testbench, we generated the request flow using real images (see Fig. 4). The request flow is later used to simulate the VHDL behavioral model of the reading system. At the end of the simulation, we observe the resulting data flow and the reconstructed image.
Fig. 4: Real images (a, b, c) have been used for this purpose.

Thanks to our reading technique, we limit the output data flow to a maximum of 4.23% of the original data flow for the 3 considered pictures (see Table 2). This image sensor is able to produce a very low data flow compared to a classical architecture. The final results are in accordance with our expectations.

TABLE 2: SIMULATION RESULTS
Picture sample    % of the original data flow
4a                4.23%
4b                0.47%
4c                3.88%

V. CONCLUSION
This paper illustrates the advantages of using an event-driven approach and a non-standard sampling scheme. Asynchronous ADCs are probably one of the best ways to drastically decrease the amount of data produced by sensors. This is already clearly beneficial for reducing the activity in the digital signal processing chain and, thus, the power. Nevertheless, a tight integration between the ADC function and the sensor itself offers new possibilities for reducing the activity and the power consumption of the "digital sensor". This has been illustrated by our 1-level crossing image sensor, which integrates the analog-to-digital conversion without requiring a real ADC. The ADC is usually the most power-consuming block of image sensors. As the image size keeps growing and the number of frames per second tends to be constant (or to increase), the ADC sampling frequency is also increasing. A sparse sampling of the image with independent asynchronous pixels is the way forward. Indeed, the worst-case situation is equivalent to reading the complete image, which is the state of the art for commercial image sensors. Moreover, the proposed architecture is able to provide a data flow reduction thanks to the capture of the relevant pixels, and also to realize an online compression technique.

ACKNOWLEDGMENT
This work has been partially supported by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01).
REFERENCES
[1] F. Marvasti (Ed.), "Nonuniform Sampling: Theory and Practice", Springer Science & Business Media, Nov. 2001, 924 pages.
[2] E. Allier, G. Sicard, L. Fesquet, M. Renaudin, "Asynchronous Level Crossing Analog to Digital Converters", Measurement, Special Issue on ADC Modelling and Testing, vol. 37, no. 4, June 2005, pp. 296-309.
[3] N. Sayiner, H. V. Sorensen, and T. R. Viswanathan, "A level-crossing sampling scheme for A/D conversion", IEEE Trans. Circuits and Systems, vol. 43, no. 4, pp. 335-339, April 1996.
[4] E. Allier, G. Sicard, L. Fesquet, M. Renaudin, "A New Class of Asynchronous A/D Converters Based on Time Quantization", IEEE Async'03, Vancouver, Canada, May 12-16, 2003, pp. 196-205.
[5] F. Akopyan, R. Manohar, A. B. Apsel, "A level-crossing flash asynchronous analog-to-digital converter", 12th IEEE International Symposium on Asynchronous Circuits and Systems, Grenoble, 2006, pp. 11-22.
[6] Y. Tsividis, "Integrated Continuous-Time Filter Design – An Overview", IEEE Journal of Solid-State Circuits, vol. 29, no. 3, March 1994.
[7] P. Lichtsteiner, T. Delbruck, and J. Kramer, "Improved on/off temporally differentiating address-event imager", in Proc. 11th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2004), 2004, vol. 4, pp. 211-214.
[8] A. Myat, T. Linn, D. A. Tuan, C. Shoushun, and Y. K. Seng, "Adaptive priority toggle asynchronous tree arbiter for AER-based image sensor", 2011 IEEE/IFIP 19th Int. Conf. VLSI Syst., vol. 2, pp. 66-71, Oct. 2011.
[9] C. Posch, D. Matolin, and R. Wohlgenannt, "An asynchronous time-based image sensor", 2008 IEEE Int. Symp. Circuits Syst., pp. 2130-2133, May 2008.
[10] A. Darwish, L. Fesquet, and G. Sicard, "1-Level crossing sampling scheme for low data rate image sensors", New Circuits Syst. …, pp. 289-292, 2014.
[11] C. Posch, D. Matolin, and R. Wohlgenannt, "A QVGA 143 dB Dynamic Range Frame-Free PWM Image Sensor With Lossless Pixel-Level Video Compression and Time-Domain CDS", IEEE J. Solid-State Circuits, vol. 46, no. 1, pp. 259-275, Jan. 2011.
[12] C. Posch, D. Matolin, and R. Wohlgenannt, "High-DR frame-free PWM imaging with asynchronous AER intensity encoding and focal-plane temporal redundancy suppression", Proc. 2010 IEEE Int. Symp. Circuits Syst., pp. 2430-2433, May 2010.
[13] A. Darwish, G. Sicard, and L. Fesquet, "Low data rate architecture for smart image sensor", Image Sensors and Imaging Systems 2014, Proc. IS&T/SPIE vol. 9022, San Francisco, California, USA, 5-6 February 2014.