Article in IEEE Pervasive Computing, February 2012. DOI: 10.1109/MPRV.2012.17


This article has been accepted for publication in IEEE Pervasive Computing but has not yet been fully edited.
Some content may change prior to final publication.

Properties and Applications of Ultrasonic Doppler Sensing in Human-Computer Interaction

Bhiksha Raj (1), Kaustubh Kalgaonkar (2), Chris Harrison (1), Paul Dietz (3)

(1) Carnegie Mellon University, Pittsburgh, PA 15213. {bhiksha, chris.harrison}@cs.cmu.edu
(2) School of Electrical and Comp. Engineering, Georgia Institute of Technology, Atlanta, GA 30332. kaustubh.kalgaonkar@gatech.edu
(3) Applied Sciences Group, Microsoft, Redmond, WA 98052. paul.dietz@microsoft.com

ABSTRACT
We present an overview of our work on ultrasonic Doppler sensing, a technique that captures data on the relative velocities of objects in the field of detection. Drawing on our experience with the technology, we characterize several unique properties that significantly differentiate it from other sensing techniques and that we believe merit attention from the HCI community. These include a high frame rate, low computational overhead, instantaneous velocity readings (rather than, e.g., frame differencing), and some degree of range independence. Additionally, because the technique is not vision-based, it may sidestep the privacy concerns raised by many camera-driven approaches, potentially opening the door to sensing in once-taboo locations such as homes, restrooms, hospitals, and schools.

INTRODUCTION
Sensors are the eyes, ears and skin of computing interfaces. Whether they are simple buttons or sophisticat-
ed vision systems, we are empowered by their capabilities and constrained by their shortcomings. A tre-
mendous amount of HCI research has centered on maximizing the effectiveness and utility of these chan-
nels. These developments, in concert with significant advances in electronics, have enabled us to bring the
power of computation to a wider audience and into more aspects of our lives.
Researchers and practitioners can now draw upon a large suite of sensing technologies for their work. Rely-
ing on thermal, chemical, electromagnetic, optical, acoustic, mechanical and other means, these sensors can
detect blood pressure, faces, hand gestures, temperature, humidity, pupil dilation, acceleration, proximities
and many other aspects of our state and environment (see, e.g., [6,13]).
In this paper, we present an overview of our work on an ultrasonic Doppler sensor. Instead of operating like
sonar (e.g., ultrasonic range finders and proximity sensors), which sends out “pings” of ultrasonic sound,
our sensor emits a single, continuous tone. This bounces off objects in the detection field. In a static envi-
ronment, the return signal will be at the same frequency, but with a different phase and amplitude. Howev-
er, if an object is moving, the echoes are Doppler shifted, creating components at other frequencies, propor-
tional to their velocity relative to the sensor (Figure 1). This property, discussed in depth subsequently,
gives the sensor a significantly different character from other sensing techniques. We believe these qualities
make it a valuable addition to the suite of sensors HCI researchers and practitioners should consider in their
applications.
We note that both ultrasound and Doppler sensing have previously been used for a variety of purposes. The largest use of ultrasound has been for medical [14] and structural diagnostics [28]. Outside of diagnostic purposes, ultrasound has largely been used for ranging, e.g., [15]. Other interactive applications have been few, and generally centered around the principles used in diagnostic imaging, e.g., [16], or ranging. Doppler radars, in addition to their well-known uses in aviation, meteorology, speed guns, etc., have also been widely used in commercial and domestic settings for motion detection, intruder alarms (e.g., [27]), light control, and so on. In the area of human-computer interaction, their use has largely been directed towards the characterization and analysis of human gait [17-21] and, to a much smaller extent, as a sensing mechanism for recognizing gestures [22-25]. However, despite the development of micro-Doppler sensors [18], Doppler radar remains a relatively expensive option, requiring either complicated hardware design, e.g., [24], or specialized hardware, e.g., [18].

Digital Object Identifier 10.1109/MPRV.2012.17 1536-1268/$26.00 2011 IEEE



Our Doppler ultrasound sensor, on the other hand, is constructed from off-the-shelf components, is simple
in design, versatile, and lends itself to a variety of different uses as we demonstrate through the example
applications we describe later in the paper.

DOPPLER-BASED CHARACTERIZATION OF MOTION


The Doppler shift due to the reflection from a moving target is approximately:

    fd = 2vf / (c - v)

where fd is the observed frequency shift, v is the velocity of the target (in the direction of the sensor), f is the emitted frequency, and c is the speed of sound.
If multiple objects move with different velocities, the reflected signal will contain multiple frequencies, one per object. Put another way, the power spectrum of the return signal encodes the velocities (in the direction of the sensor) of all objects in the sensor's field of view.
This is the same principle of operation as the radar gun police use to catch speeders. However, in addition to using an acoustic signal instead of RF, our sensor aims to observe the full velocity profile in the field of view. Police radars, in contrast, attempt to detect the velocity of only a single object.
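As a quick check of the formula above, a few lines of Python reproduce the ~242Hz shift quoted later for a 1 m/s target. The function name is ours, and the speed-of-sound value of 331 m/s is an assumption chosen to match the figure quoted in the Sensor section:

```python
def doppler_shift(v, f=40_000.0, c=331.0):
    """Frequency shift fd = 2*v*f / (c - v) for a reflector moving
    at velocity v in m/s (positive = towards the sensor).
    c = 331 m/s is an assumed speed of sound."""
    return 2.0 * v * f / (c - v)

# A hand moving towards the sensor at 1 m/s shifts the 40 kHz
# carrier by roughly 242 Hz; at 3 m/s the shift is roughly 732 Hz.
print(doppler_shift(1.0))   # ≈ 242.4
print(doppler_shift(3.0))   # ≈ 731.7
```

Note that the shift scales linearly with both velocity and carrier frequency, which is why a high (40kHz) carrier makes even slow hand motion easily resolvable.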

Figure 1: The velocity of proximate objects alters the frequency of the reflected sound. This is captured by the sensor, which performs an FFT. A simplified version of this output is illustrated.

Sensor
Our Doppler sensor consists of an ultrasound emitter and one or more receivers that capture reflections.
The emitter is an off-the-shelf MA40A3S ultrasound emitter with a resonant frequency of 40kHz. This frequency is far above what the human ear can detect, allowing the device to operate silently. The emitter is
driven by a PIC microcontroller that has been programmed to produce a 40kHz square wave with a duty
cycle of 50%. Since the emitter is highly resonant, it produces a nearly perfect sinusoid, even when excited
by a square wave. The receiver is an MA40A3R ultrasound sensor that can sense frequencies in a narrow
frequency band around 40kHz, with a 3dB bandwidth of less than 3000Hz. The relatively narrow band-
width of the sensor ensures that it does not pick up out-of-band noise.
To ease the processing burden, the sensor takes advantage of the limited bandwidth of the signal and the
receiver. At 1 meter/sec, we would expect to see a frequency shift of only 242Hz. The Nyquist theorem
states that a signal must be sampled at greater than twice its bandwidth (which can be much smaller than its
highest frequency) to allow reconstruction. Thus, we chose to sample at a few kHz, which is sufficient to
capture human motion. In some experiments, we chose to use higher fidelity equipment that could sample
at much higher rates and with far greater precision. In these cases the digitized signal was heterodyned to
shift the carrier frequency of 40kHz down to 4kHz and resampled to 16k samples per second.
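The heterodyning step described above can be sketched as follows. This is an illustrative numpy reconstruction under assumed parameters (36 kHz local oscillator, 6 kHz low-pass cutoff, 101-tap filter), not the authors' exact processing chain. Mixing with a 36 kHz tone shifts the 40 kHz carrier down to 4 kHz; a low-pass filter removes the sum-frequency image before decimation to 16k samples/sec:

```python
import numpy as np

def heterodyne_down(x, fs, f_lo=36_000.0, fs_out=16_000):
    """Shift a narrowband signal centered near 40 kHz down to ~4 kHz.

    x: received samples; fs: input sample rate (assumed to be an
    integer multiple of fs_out). Mixing with cos(2*pi*f_lo*t) creates
    components at |f - f_lo| and f + f_lo; a windowed-sinc low-pass
    keeps only the difference term before decimation.
    """
    t = np.arange(len(x)) / fs
    mixed = x * np.cos(2 * np.pi * f_lo * t)
    # 101-tap windowed-sinc low-pass with a 6 kHz cutoff
    taps, cutoff = 101, 6_000.0
    n = np.arange(taps) - (taps - 1) / 2
    h = 2 * cutoff / fs * np.sinc(2 * cutoff / fs * n) * np.hamming(taps)
    baseband = np.convolve(mixed, h, mode="same")
    return baseband[:: int(fs // fs_out)]          # decimate

# A 40.2 kHz echo (a +200 Hz Doppler shift) sampled at 96 kHz
# appears at 4.2 kHz after downconversion.
fs = 96_000
t = np.arange(int(0.1 * fs)) / fs
y = heterodyne_down(np.sin(2 * np.pi * 40_200 * t), fs)
```

The key point is that the Doppler sidebands keep their offsets from the carrier, so the downconverted signal preserves the full velocity profile while allowing a much lower sampling rate.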
Figure 2 shows a prototype of our sensor. In this instance, the transmitter, driving circuit and receiver are
all on the same board. In other cases, e.g. for voice-processing applications, the transmitter and receiver are
separated from the board and strapped onto a microphone as shown in Figure 3, so that all three may be
collocated. For gesture recognition applications we employ multiple receivers as shown in Figure 6. Once
again, the transmitter and receivers are separated from the board. Here the transmitter is driven by the same
circuit used in Figure 2, but the signals from the three receivers were jointly captured by a multi-channel

A/D. The power consumption of the device in Figure 2 is on the order of tens of milliwatts, though the exact value varies with configuration.
In all cases the captured signal is spectrographically analyzed to derive features. The analysis frame size and frame rate vary with application. In our work, for gesture recognition tasks, where movements of the
hands were relatively fast, we employed an analysis window of 32ms, with a 50% overlap between adja-
cent windows resulting in a frame rate of 66.66 frames/sec. For other tasks such as gait recognition and
speaker identification we employed a relatively larger analysis window of 40-64ms, with a 50% overlap
between analysis frames. In all cases a sequence of cepstral vectors [26] was derived from the sequence of
analysis frames as follows: each analysis frame was Hamming windowed, and a 513-point power spectrum
computed from it using a 1024 point Fourier transform. The power spectrum was logarithmically com-
pressed and a Discrete Cosine Transform (DCT) applied to it. The first 40 DCT coefficients were retained
to obtain a 40-dimensional cepstral vector. Each cepstral vector was further augmented by the difference
between the cepstral vectors derived from adjacent frames to obtain a final 80-dimensional feature vector
representing each frame. We note that the entire feature computation is lightweight and easily performed on
a DSP. The resulting sequence of feature vectors is classified into one of a desired set of classes using a
Gaussian Mixture classifier [4].
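The per-frame feature computation described above can be sketched in numpy as follows. The windowing constants follow the text; the DCT-II is written out explicitly (an unnormalized form) to keep the example dependency-free, and the small log floor is an assumption for numerical stability:

```python
import numpy as np

def cepstral_features(frames):
    """frames: (num_frames, frame_len) array of analysis frames,
    assumed already extracted with 50% overlap.
    Returns a (num_frames, 80) array of feature vectors."""
    ceps = []
    for frame in frames:
        w = frame * np.hamming(len(frame))
        # 513-point power spectrum via a 1024-point Fourier transform
        power = np.abs(np.fft.rfft(w, n=1024)) ** 2
        logspec = np.log(power + 1e-12)           # logarithmic compression
        # DCT-II of the log spectrum; keep the first 40 coefficients
        k = np.arange(40)[:, None]
        m = np.arange(len(logspec))[None, :]
        basis = np.cos(np.pi * k * (2 * m + 1) / (2 * len(logspec)))
        ceps.append(basis @ logspec)              # 40-dim cepstral vector
    ceps = np.array(ceps)
    # Augment with differences between adjacent frames -> 80-dim vectors
    deltas = np.diff(ceps, axis=0, prepend=ceps[:1])
    return np.hstack([ceps, deltas])
```

As the text notes, this entire computation is a handful of FFT-sized matrix operations per frame, which is why it runs comfortably on a DSP.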
The Doppler sensor is able to operate at ranges of up to 10m for applications such as gait recognition, where the movements to be characterized include large swings in position and velocity. For finer-grained movements, such as finger gestures, the range is much smaller, on the order of one meter. The ultrasound sensor has a beam angle of 50 degrees. The attenuation of the signal at greater distances also naturally provides robustness to spurious activity in the background (i.e., outside the range of the sensor).

Figure 2: Our prototype ultrasonic Doppler sensor.

Properties
We have employed Doppler sensing in several distinct manners (discussed subsequently). Through these experiences, we have been able to observe and assess the technique's capabilities and shortcomings, especially as they relate to uses in the field of HCI. Below, we provide a synthesis of what we believe are the most notable properties. We place special emphasis on comparing against camera-based approaches, as these are presently popular and most similar in capability.
A) Doppler-based measurements capture snapshots of the instantaneous velocities of one or more
moving objects (in the direction of the sensor). This stands in strong contrast to cameras, as
well as optical and ultrasonic range finders. The latter techniques capture a series of static dis-
tances/images; velocities must be estimated through differentiation.
B) Doppler-based systems are active systems: they generate the signal they observe. Hence they can operate in the dark, while vision-based systems are critically dependent on the presence of light.


C) Camera-based systems work best when the motion to be characterized is in a plane perpen-
dicular to the vector from the object to the camera. Conversely, Doppler-based systems are
most effective when the motion is towards or away from the sensor (velocities perpendicular
to the vector to the sensor are undetectable). In other words, Doppler systems measure motion
in an axis orthogonal to cameras (and may offer interesting opportunities if combined).
D) Images observed by cameras shrink as the distance from the camera increases. On the other
hand, Doppler sensors detect frequency spectra that are independent of the amplitude of the
signal, and hence independent of the distance of a target object (as long as the signal is above
noise).
E) Vision-based algorithms are highly dependent on the ability to extract and track the silhouette
of the subject accurately. Even when the only moving object in the video is the subject, vari-
ous phenomena (e.g., shadows, layout of the background) can affect accurate tracking. On the
other hand, Doppler-based devices are relatively insensitive to constant background effects.
Additionally, because the emission frequency of the ultrasonic transducer is known (e.g.,
40kHz), segmenting motion from the environment is straightforward (simply look for the
presence of other frequencies). Thus, no calibration or training is required, making it robust
and easily deployable.
F) Compared to vision-based systems, Doppler sensors have far less data to deal with (e.g., stor-
age of previous frames, training data). Not only does this mean less advanced hardware is needed for computation, but it also reduces power requirements. Vision processing on mobile
phones is only now becoming feasible on the highest-end phones, and places a tremendous
strain on the processors and batteries. On the other hand, our sensor offloads all signal pro-
cessing (e.g., FFT) to a dedicated and highly efficient DSP chip, which could be easily inte-
grated into a small mobile device. Classification is also fairly lightweight, and could be
moved to embedded hardware.
G) At present, our Doppler-based approach is less expensive than comparable vision-based sys-
tems.
H) Finally, people are particularly sensitive to cameras in regards to their privacy [7]. The notion of something “viewing” them, even if only processed locally, makes many people uncomfortable. This suspicion of vision-based sensing has slowed its adoption in the home, classroom, workplace and other contexts. Doppler-based sensing, however, requires no lenses or other vision components, and may escape the stigma entirely (as motion detectors largely have).
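Property E's observation that segmenting motion amounts to looking for the presence of frequencies other than the carrier can be made concrete with a short sketch. We assume a signal already downconverted to a 4 kHz carrier as in the Sensor section; the band edges and the 5% energy threshold are illustrative assumptions, not values from our deployments:

```python
import numpy as np

def motion_detected(chunk, fs=16_000, carrier=4_000.0, ratio=0.05):
    """Return True if spectral energy away from the (downconverted)
    carrier exceeds `ratio` of the carrier energy, i.e., something in
    the field of view is moving. Band edges and `ratio` are
    illustrative choices."""
    spec = np.abs(np.fft.rfft(chunk * np.hanning(len(chunk)))) ** 2
    freqs = np.fft.rfftfreq(len(chunk), 1.0 / fs)
    near = np.abs(freqs - carrier) < 50                    # carrier +/- 50 Hz
    off = (np.abs(freqs - carrier) >= 50) & (np.abs(freqs - carrier) < 1_000)
    return spec[off].sum() > ratio * spec[near].sum()

# A pure 4 kHz return (static scene) triggers nothing; adding a
# Doppler component at 4.25 kHz does.
fs = 16_000
t = np.arange(512) / fs
static = np.sin(2 * np.pi * 4_000 * t)
moving = static + 0.5 * np.sin(2 * np.pi * 4_250 * t)
```

Because the decision is a ratio of in-band energies, it needs no calibration against absolute signal levels, which is the point of property E.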

Example Applications
In addition to the properties listed above, our Doppler-based approach is highly versatile, and can be used
largely unchanged in a variety of applications where other sensing mechanisms such as video would require
application-customized processing. In this section we describe our experience with several applications that
demonstrate this versatility. These are organized into three high-level and open areas in HCI where we be-
lieve Doppler sensing could lead to significant new opportunities. Accuracy results from preliminary stud-
ies are included to highlight the technique’s robustness.


Figure 3: Doppler setup for speech applications. A central microphone is augmented with an ultrasonic transmitter and a receiver.

Speech
Doppler sensors can also be used to provide secondary measurements for speech. For example, a transmit-
ter and receiver could be mounted alongside a conventional microphone (Figure 3 depicts one of our set-
ups). This allows for the instantaneous capture of both audio and minute facial movements. We note that in
this setup it is also possible to dispense with the receiver altogether and use a broadband microphone to simultaneously capture both the speech and the ultrasound reflections, since they occupy widely separated frequency bands.
We experimented with this technique in a speaker identification application [9]. We recruited 50 partici-
pants, who sat approximately 1m away from a microphone/Doppler setup. They were instructed to speak
normally and face in the general direction of the microphone. Each participant recorded 75 sentences from
the Timit corpus [5], with an average length of about three seconds. One third of these recordings were
used to train a Gaussian mixture for that speaker. The rest of the trials were used as test data. Surprisingly,
using only the Doppler recordings (i.e. no audio) of a single sentence, we achieved an identification accura-
cy of 90%.
In another application, the Doppler sensor was used for speech-activity detection [10]. In many speech-driven user interfaces, detecting when the user is addressing the system is a difficult problem. Typically the user controls the interaction by pressing a button; however, it is desirable to have the onset of speech detected automatically. Using a Doppler sensor, we were able to detect onset well over 90% of
the time, including conditions where conventional methods (based only on speech) successfully detected
speech onset only ~10% of the time. An important problem in voice-based interfaces is detecting whether a subject is actually addressing the system, as opposed to merely speaking in its vicinity. Conventional speech-based devices cannot distinguish between these scenarios, and typically cameras are required to determine whether the subject is facing the system. The ultrasound sensor was also able to perform this task, by detecting the facial movements that are typical of a subject addressing the system. Deployed on an office floor, the sensor achieved false-alarm rates 1/30th of those obtained by a speech-only system.

Biological Motion
The human body is an articulated object, composed of a number of rigid bones connected by joints. During movements such as walking, this structure moves in characteristic patterns. The velocity of each part depends on its distance from the hinge around which it moves, on the velocity with which the hinge itself may move around other hinges, and on the overall movement of the body. This often produces complex, cyclic patterns of velocities that are characteristic of particular motions. Doppler sensing in particular is well suited
to capturing this information. Figure 4 shows a spectrographic view of a 40kHz tone reflected by a walking
person. The cyclic motion of the limbs is clearly apparent in the repeated spectrographic patterns. Moreo-
ver, the shape and detail of these patterns are characteristic of the manner in which the walker’s limbs
move, and are thus also characteristic of the walker.


Sensing in this domain has many potential applications. For example, smart homes could be made more intelligent by knowing the rough age of the occupants in any given space. This is possible because the biomechanics of walking are radically different from those of crawling, and limb length is highly correlated with age. This could allow for a rough classification of occupants into three groups: infant, child, and adult. Additionally, ultrasonic Doppler sensing could be used for activity recognition (e.g., running, walking, biking), important in many HCI applications. This could be achieved wirelessly, without the need to place sensors on the body (e.g., [2]). Finally, the biomechanics of quadrupeds produce a unique movement signature, which could, for example, allow a smart door to unlock the dog flap rather than the door itself, based on who approaches.

Figure 4: Spectrogram of ultrasonic reflections from a subject walking towards the sensor.

Figure 5: Mounting the ultrasonic sensor for gait recognition

As a proof-of-concept application, we developed a person identifier based primarily on gait [11]. In preliminary experiments, a total of 30 subjects were asked to walk 5m towards and away from a sensor mounted at about hip height on a desk (Figure 5). Half of the data from each subject was used to train a Gaussian Mixture classifier for that subject; the other half was used to evaluate the system. The system identified the walker with 91.7% accuracy and the direction of walking with 96.3% accuracy. Additionally, we were able to determine participant sex with greater than 80% accuracy.

Gestures
Gestures are a convenient and intuitive mechanism for communicating with interactive systems. In the case
of ultrasonic Doppler sensing, these gestures can be performed wirelessly and without any sensors mounted
on the subject (unlike, e.g. accelerometer approaches [12]).
The application that originally spurred our interest in Doppler sensors was interactive theme park shows. A
show had been designed which required guests to raise their hands to answer questions. To detect this ges-
ture, a number of alternatives were investigated and found wanting. For example, a vision system was at-
tempted, but spurious background activity made it unreliable.


The solution was to place a 40kHz emitter/receiver pair in the ceiling above each guest participating in the
show [3]. The signal processing was done in the analog domain, looking for power in a band around
40.4kHz. Although there is vertical motion in normal walking, it does not produce velocities of the same
magnitude as an intentional hand raise. Similarly, standing up is a much slower motion. The system could
be confused by unintended gestures, but these were of such a nature that a human observer might well cate-
gorize them as intentional hand raising. Ultimately, the Doppler-based solution dramatically out-performed
the vision systems under consideration, and at a tiny fraction of their cost.
A second effort looked at recognizing more complex gestures [8]. We employed a single ultrasonic trans-
mitter and multiple receivers (located in front and to the right and left of the user). The gesture sensor can
recognize a number of different gestures, including forward/backward motion, left/right motion, up/down
motion and clockwise or anti-clockwise rotation of the hand.
In a preliminary experiment, we collected data from ten participants performing eight exemplar gestures. Participants were instructed on how to perform each gesture in front of the setup. Following this initial training, participants performed each gesture ten times. This procedure produced 100 instances of each gesture; 60 instances were used for training and the rest were reserved for testing. We employed a Gaussian Mixture classifier, which jointly classified the ensemble of reflections captured by the three receivers. Our proof-of-concept setup yielded a classification accuracy of 88.4%.

Figure 6: Doppler setup for gesture recognition. It includes a single transmitter in the center and three receivers on the three peripheral arms.

Conclusion
The Doppler ultrasonic sensor is demonstrably versatile, as our illustrative examples show. The applications described above are diverse, ranging from the biometric applications of gait and speaker recognition to the UI applications of gesture recognition and voice-activity detection. Conventional sensors such as video, although applicable in all of these scenarios, would require significant application-specific customization of the type and nature of the features extracted from the sensed signals and of the classification mechanism employed. The Doppler sensor, on the other hand, handles all of them using essentially the same processing mechanism: simple spectral characterization of the signal followed by Bayesian classification. The only variation required is the physical layout of the sensor itself, and this is not a serious restriction.
In this paper we have also highlighted several unique qualities of Doppler-based sensors, and our review of preliminary uses of the technology demonstrates their utility and effectiveness across several example HCI domains. Moreover, the Doppler sensor may be used to augment other sensing modalities at minimal cost. Overall, we believe that the Doppler ultrasound sensor will make a useful addition to the suite of techniques HCI researchers and practitioners might consider in their applications.


REFERENCES
1. Cao, X. and Balakrishnan, R. VisionWand: interaction techniques for large displays using a passive
wand tracked in 3D. In Proc. UIST '03. 173-182.
2. Consolvo, S., McDonald, D. W., Toscos, T., Chen, M. Y., Froehlich, J., Harrison, B., Klasnja, P., La-
Marca, A., LeGrand, L., Libby, R., Smith, I., and Landay, J. A. Activity sensing in the wild: a field trial
of ubifit garden. In Proc. CHI '08. 1797-1806.
3. Dietz, P. H. "Apparatus for detecting guest interactions and method therefore." U.S. Patent
#6,307,952.
4. Duda, R. O., Hart, P. E., and Stork, D. G. Pattern Classification, 2nd ed. John Wiley & Sons, 2001.
5. Garofolo, J. S., Lamel, L. F., Fisher, W. M., Fiscus, J. G., Pallett, D. S., Dahlgren, N. L. 1993.
TIMIT Acoustic-Phonetic Continuous Speech Corpus. Linguistic Data Consortium, Philadelphia.
LDC Catalog No. LDC93S1.
6. Hinckley, K., Pierce, J., Sinclair, M., and Horvitz, E. Sensing techniques for mobile interaction. In
Proc. UIST '00. 91-100.
7. Hudson, S. E. and Smith, I. Techniques for addressing fundamental privacy and disruption tradeoffs
in awareness support systems. In Proc. CSCW '96. 248-257.
8. Kalgaonkar, K. and Raj, B. One-handed gesture recognition using ultrasonic Doppler sonar. In Proc.
ICASSP ’09.
9. Kalgaonkar, K. and Raj, B. Recognizing talking faces from acoustic Doppler reflections. In Proc.
IEEE Intl. Conf. on Automatic Face and Gesture Recognition ’08.
10. Kalgaonkar, K. and Raj, B. Ultrasonic Doppler sensor for voice activity detection. IEEE Signal Pro-
cessing Letters, 14, 10, (Oct. 2007), 754 – 757.
11. Kalgaonkar, K. and Raj, B. Acoustic Doppler Sonar for Gait Recognition, In Proc. AVSS '07. 27-32.
12. Kela, J., Korpipää, P., Mäntyjärvi, J., Kallio, S., Savino, G., Jozzo, L., and Marca, D. Accelerome-
ter-based gesture control for a design environment. Personal Ubiquitous Computing, 10, 5 (Jul.
2006), 285-299.
13. Paradiso, J. A., Hsiao, K., Strickon, J., Lifton, J., and Adler, A. Sensor systems for interactive sur-
faces. IBM Syst. Journal, 39, 3-4 (Jul. 2000), 892-914.
14. http://aium.org
15. http://arduino.cc/en/Tutorial/Ping/
16. Vogt, F., McCaig, G., Ali, M. A., and Fels, Sidney. Tongue 'n' Groove: an ultrasound based music
controller. NIME '02 Proceedings of the 2002 conference on New interfaces for musical expression.
17. Yardibi, T.; Cuddihy, P.; Genc, S.; Bufi, C.; Skubic, M.; Rantz, M.; Liang Liu; Phillips, C. Gait
characterization via pulse-Doppler radar. 2011 IEEE Pervasive Computing and Communications
Workshops (PERCOM Workshops), March 2011, Seattle WA. 662 - 667.
18. Tahmoush, D.; Silvious, J. Radar micro-doppler for long range front-view gait recognition. 3rd In-
ternational Conference on Biometrics: Theory, Applications, and Systems, 2009. BTAS '09. Sept.
2009, 1 - 6
19. Tivive, F. H. C., Bouzerdoum, A. and Amin, M. G. A Human Gait Classification Method Based on Radar Doppler Spectrograms. EURASIP Journal on Advances in Signal Processing, Volume 2010 (2010), Article ID 389716.
20. Hornsteiner, C. and Detlefsen, J. Characterisation of human gait using a continuous-wave radar at
24GHz. Adv. Radio Sci., 6, 67–70, 2008
21. Geisheimer, J.L.; Marshall, W.S.; Greneker, E. A continuous-wave (CW) radar for gait analysis.
35th Asilomar Conference on Signals, Systems and Computers, 2001. 834-838.
22. http://logosfoundation.org/ii/dopplerFMradar.html
23. http://logosfoundation.org/ii/bumblebee.html
24. http://resenv.media.mit.edu/Radar/index.html
25. Leo, C. K. Contact and Free-Gesture Tracking for Large Interactive Surfaces. Masters thesis. Mas-
sachusetts Institute of Technology, 2002.
26. Oppenheim, A. V. and Schafer, R. W. Digital Signal Processing. Prentice Hall.


27. http://www.protechusa.com
28. http://en.wikipedia.org/wiki/Ultrasonic_testing
