Robotics


Lecture 3.1
Particle Filter Overview
● The particle filter combines a noisy process model and a noisy sensor model to better
estimate a system's true state
● The process model tells us where we should be based on where we were before and our
inputs
● The sensor model connects what our sensors see to where we might be
● We create a set of “guesses”, called particles, of where the robot is, then
○ We use the process model to make predictions of where the robot is now for each particle, based on the
previous particle state and the real input
○ We then read the sensor data
○ Then, for each particle, we see what the chances are that we got that sensor data if the robot was in that
particle’s position
○ From that information, we re-sample our guesses, keeping more of the high-probability guesses and
fewer of the low-probability ones
● We repeat this until our guesses converge
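The loop above can be sketched in code. This is a minimal illustration, assuming a hypothetical 1D robot with Gaussian process and sensor noise; all names here are illustrative, not from the lecture:

```python
import math
import random

def gaussian(x, mean, std):
    """Probability density of a Gaussian, used as the sensor likelihood."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def particle_filter_step(particles, weights, u, z, process_std=0.1, sensor_std=0.5):
    """One predict-weight-resample iteration for a 1D robot.

    particles: list of position guesses; u: control input (distance moved);
    z: position measurement.
    """
    # Re-sample previous guesses in proportion to their weights
    resampled = random.choices(particles, weights=weights, k=len(particles))
    # Process model: predict where each particle is now, with noise
    predicted = [p + u + random.gauss(0.0, process_std) for p in resampled]
    # Sensor model: how likely is measurement z if the robot were at this particle?
    new_weights = [gaussian(z, p, sensor_std) for p in predicted]
    total = sum(new_weights)
    return predicted, [w / total for w in new_weights]
```

Repeating this step drives the particle set toward the true state as the guesses converge.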
Process Model
● The process model aims to predict the current state of the system based on the
previous state and the input
● We assume that there is noise in the process model, that is, that we can't know
exactly where the system will be from this information alone.
● We can therefore represent the process model as a probability distribution:
x_t ~ p(x_t | x_{t-1}, u_t)
● This is the probability distribution over the state x_t, given x_{t-1} and u_t
○ Sometimes you might see u_{t-1} instead; they are functionally the same, it is just a matter of how you
translate real life into a discrete model
● What we are saying is that we don't know exactly where x_t is, but we know where
it is most likely and least likely to be if we know the previous state and the
input
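Drawing samples from x_t ~ p(x_t | x_{t-1}, u_t) might look like the following sketch, assuming a hypothetical 1D robot that moves by u each step with Gaussian noise (an illustrative choice, not the only option):

```python
import random

def sample_process_model(x_prev, u, noise_std=0.05):
    """Draw x_t ~ p(x_t | x_{t-1}, u_t) for a 1D robot that moves by u each step.

    Gaussian noise is an assumption for illustration; the real distribution
    depends on the system.
    """
    return x_prev + u + random.gauss(0.0, noise_std)

# Sampling many times shows where x_t is most and least likely to be
samples = [sample_process_model(1.0, 0.5) for _ in range(1000)]
```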
Process Model
● We can represent p(x_t | x_{t-1}, u_t) as a
parametric distribution, that is, as
a continuous function, e.g. a
Gaussian distribution
● Or, we can represent a sampling
of the distribution, e.g. a
histogram
● On the right we have one
hundred samples of where the
robot might be, with the largest
count, 35 samples, at position 1.
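A histogram like the one described can be built by bucketing samples, a minimal sketch (the sample values below are made up to mirror the slide's "35 samples at position 1"):

```python
from collections import Counter

def histogram_of_samples(samples, bin_width=1.0):
    """Bucket samples into bins: a non-parametric representation of the distribution."""
    counts = Counter(round(s / bin_width) * bin_width for s in samples)
    return dict(counts)

# e.g. 100 draws of where the robot might be, bucketed into unit-width bins
samples = [1.0] * 35 + [0.0] * 20 + [2.0] * 25 + [3.0] * 20
hist = histogram_of_samples(samples)
```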
Process Model
● Parametric representations
perform poorly when it is hard to
fit a function to the
distribution
● The example on the right shows a
bimodal distribution, which is
hard to represent parametrically
● Whereas the non-parametric
representation has no issue
● We use these sample based
representations in particle filters
● This means the filter is not
restricted to particular kinds of
noise
Process Model
Ways to determine the process model distribution:

● Analytically
○ Calculate the dynamics of the system under ideal conditions
○ Add noise components
○ Discretize the probability function across time
● Empirically
○ Observe the system and collect a large dataset of states that are associated with their temporal
neighbours
○ Discretize the state space and bucket the data under that discretization
○ For each state value, take the subset of data points that followed from that state value
○ Sample from that distribution whenever the system is in that state
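The empirical steps above can be sketched as follows. This is a minimal illustration with made-up names, assuming a recorded 1D state trajectory and a simple rounding discretization:

```python
import random
from collections import defaultdict

def build_empirical_model(trajectory, bin_width=0.5):
    """Bucket observed (state, next_state) pairs under a discretization.

    trajectory: a recorded sequence of states (temporal neighbours are
    adjacent entries).
    """
    buckets = defaultdict(list)
    for prev, nxt in zip(trajectory, trajectory[1:]):
        key = round(prev / bin_width)   # discretized previous state
        buckets[key].append(nxt)        # successors observed from that bucket
    return buckets

def sample_empirical(buckets, state, bin_width=0.5):
    """Sample a next state from the data points that followed this state's bucket."""
    return random.choice(buckets[round(state / bin_width)])
```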
Process Model: Analytical
Process Model: Empirical
Observation Model
● The sensor model aims to match sensor data with system state
● It essentially tells us the chance that we would have gotten some sensor
measurement z_t if our system state was in fact x_t.
● This lets us plug in a potential system state, and a sensor measurement, and find
out what the chances are that the sensor measurement came from the system
being in that state.
● We can also describe this as a distribution:
z_t ~ p(z_t | x_t)
● An example of an observation model is scan matching
○ You take a laser scan and a map
○ For some robot state x_t, produce a virtual laser scan as if the robot were there
○ Compare the real and the virtual scan returning a confidence level that the scans match
○ When comparing we allow for some noise in the sensor
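A toy scan-matching likelihood might look like the sketch below, which compares a real and a virtual scan beam-by-beam, allowing Gaussian noise per beam (an assumption for illustration). The rendering step itself (map + pose → virtual scan) is omitted:

```python
import math

def scan_likelihood(real_scan, virtual_scan, sensor_std=0.2):
    """Toy p(z_t | x_t): confidence that the real scan matches the virtual
    scan rendered from a candidate state, with Gaussian noise per beam."""
    likelihood = 1.0
    for real, virtual in zip(real_scan, virtual_scan):
        err = real - virtual
        likelihood *= math.exp(-0.5 * (err / sensor_std) ** 2)
    return likelihood
```

A perfect match gives a likelihood of 1.0, and the score falls off as the scans disagree.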
Observation Model
Ways of obtaining the observation model

● Analytically
○ Based on the known sensor noise distribution and analytical perception techniques
● Empirically
○ Using a large dataset of measurements, each labelled with the accurate system state
■ Train a model that detects with some confidence scoring, or
● Discretize the state space and bucket the data under that discretization
● For each state value, take the subset of measurements that followed from that state
value
● Sample from that distribution whenever the system is in that state
Algorithm
Assume we are using n particles:
1. Start at t=0 with n simulated particles sampled from the initial-state distribution,
and n weights all set to 1/n, as pairs in the set S_0
2. For i = 1 → n:
a. Sample a state x_{t-1}^i from S_{t-1} based on the weights
b. Sample from p(x_t | x_{t-1}^i, u_t) to get a predicted next state x_t^i
c. Give it a weighting score w_t^i = p(z_t | x_t^i)
d. Add x_t^i and w_t^i to S_t
3. Divide each weight by the sum of all weights
4. Set t ← t+1 and return to step 2
If we want the estimated system state, we take the weighted average of the particle
states, using their weights
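The numbered steps above might be implemented as follows for a hypothetical 1D system; the Gaussian process and sensor models are illustrative choices, not part of the algorithm itself:

```python
import math
import random

def gaussian_pdf(x, mean, std):
    """Gaussian density, used here as the sensor likelihood p(z_t | x_t)."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def particle_filter(n, init_sampler, process_std, sensor_std, controls, measurements):
    """Run the algorithm for a 1D system.

    init_sampler draws from the initial-state distribution; controls and
    measurements are the sequences u_t and z_t.  Returns the weighted-average
    state estimate after each time step.
    """
    particles = [init_sampler() for _ in range(n)]    # step 1
    weights = [1.0 / n] * n
    estimates = []
    for u, z in zip(controls, measurements):
        # step 2a: sample previous states by weight
        resampled = random.choices(particles, weights=weights, k=n)
        # step 2b: predict the next state for each particle
        particles = [x + u + random.gauss(0.0, process_std) for x in resampled]
        # step 2c: score each prediction against the measurement
        weights = [gaussian_pdf(z, x, sensor_std) for x in particles]
        # step 3: normalize the weights
        total = sum(weights)
        weights = [w / total for w in weights]
        # weighted average of the particles = estimated system state
        estimates.append(sum(x * w for x, w in zip(particles, weights)))
    return estimates
```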
Live Example Algorithm
1. S_0 = {[x_0^1=1, w_0^1=0.5], [x_0^2=2, w_0^2=0.5]}
2. t=1, S_1 is empty
a. For i in 1, 2
i. i=1
1. Sample, with 50/50 chance of either particle; we get x_{t-1}^1 = 2
2. x_1^1 is sampled from p(x_t | x_{t-1}^1, u_t); we get x_1^1 = 2.1
3. w_1^1 = p(z_1 | x_1^1) = 0.2
4. Add the pair [x_1^1, w_1^1] to S_1
ii. i=2
1. Sample, with 50/50 chance of either particle; we get x_{t-1}^2 = 1
2. Etc.
3. Add the pair [x_1^2=1.11, w_1^2=0.9] to S_1
b. Normalize the weights
i. w_1^i = w_1^i / 1.1
c. S_1 = {[x_1^1=2.1, w_1^1≈0.18], [x_1^2=1.11, w_1^2≈0.82]}
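The normalization step in this worked example can be checked numerically; the two weights 0.2 and 0.9 sum to 1.1:

```python
weights = [0.2, 0.9]                        # w_1^1 and w_1^2 before normalization
total = sum(weights)                        # 1.1
normalized = [w / total for w in weights]   # ≈ [0.18, 0.82]
```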
Algorithm
Localization
Object Tracking
● When we are trying to track obstacles
○ Our process model is how we believe the tracked object will move, when projected onto our sensor,
usually a 2D camera.
○ Our measurement model is our detection or classification algorithm
● Sometimes, when we cannot know the process model, we use a static process
model, that is, x_t = x_{t-1} + e, where e is drawn from a noise distribution
● This is done to capture the fact that objects can't teleport.
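The static process model x_t = x_{t-1} + e can be sketched as follows, with Gaussian e as an illustrative assumption:

```python
import random

def static_process_model(x_prev, noise_std=0.1):
    """Static process model for a tracked object: x_t = x_{t-1} + e.

    We don't know how the object moves, so we only assume it stays near
    where it was, since objects can't teleport.  Gaussian noise e is an
    assumption for illustration.
    """
    return x_prev + random.gauss(0.0, noise_std)
```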