
Extending OpenSim’s Motion Detection and Depth Sensing Capacity –

Department of Kinesiology and Sports Sciences (Dr. Eltoukhy’s Lab)

The goal of this research project is to expand the motion detection and depth sensing
capabilities of OpenSim and to develop a simplified pipeline for automated extraction of
biomechanics data from markerless motion capture for musculoskeletal modelling. This would
lower the cost of conducting research by allowing researchers to migrate from expensive
proprietary software and tools to open-source software and consumer-level hardware for data
collection without a significant compromise in data precision. Currently, most motion capture
software must be purchased or used under a subscription, and each package is specialized to
support a narrow ecosystem of devices, which limits the freedom to choose the equipment best
suited to a given experiment. This often forces labs to maintain multiple sets of different
devices and equipment to carry out impactful research, which substantially increases its cost.

The project will focus on implementing new OpenSim features that run on more accessible
stand-alone motion detection cameras. For our investigation, we will use iPhone 13 and later
models for motion and depth data collection, implementing a real-time or near real-time
human pose estimation approach.
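
As an illustration of the kind of markerless pose estimation this step requires, the sketch below runs Google's MediaPipe Pose over video frames. MediaPipe is only one possible backend (Apple's own frameworks are another route for iPhone footage), and the video file name is hypothetical; this is a minimal sketch, not the project's final pipeline.

    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose

    # Hypothetical source: a clip recorded on an iPhone, copied to the workstation
    cap = cv2.VideoCapture("iphone_trial01.mov")

    with mp_pose.Pose(model_complexity=1, min_detection_confidence=0.5) as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV reads frames as BGR
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                # 33 body landmarks with normalized x/y and a relative depth z
                nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
                print(f"nose: x={nose.x:.3f} y={nose.y:.3f} z={nose.z:.3f}")

    cap.release()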

The first step in the project will be to evaluate how far OpenSim can be modified, taking
advantage of its open-source nature, to accommodate a simplified data representation and to
expand the range of supported devices using Python scripting. Our main goal is to make the UI
and data presentation more organized and categorized, so that the full range of collected data
can be displayed without opening multiple tabs and windows.
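
As a minimal sketch of what this Python scripting could look like, the snippet below uses the OpenSim 4.x Python bindings to load a motion file and group related columns in one place instead of across several GUI windows. The file name and the "knee" grouping are hypothetical.

    import opensim as osim

    # Load an OpenSim motion/results file (.sto) into a single table
    table = osim.TimeSeriesTable("ik_results.sto")

    time = table.getIndependentColumn()   # time stamps for every row
    labels = table.getColumnLabels()      # one label per data column

    # Group related quantities by category, e.g. all knee coordinates,
    # so they can be presented together in one view
    knee_labels = [lbl for lbl in labels if "knee" in lbl.lower()]
    for lbl in knee_labels:
        column = table.getDependentColumn(lbl)
        print(lbl, "first value:", column[0])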

The next step would be to modify the data pipeline used to receive motion data from the
Microsoft Azure Kinect. Our plan is to edit OpenSim's source code to make Kinect's data stream
compatible with OpenSim's data handling. The current implementation of OpenSim does not
recognize or receive data from Kinect's IR depth sensor; only the camera data are processed for
motion tracking, which is insufficient. Our current vision is to add Python code to the OpenSim
source to take full advantage of Kinect's hardware without modifying the embedded C programs
on the device itself.
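
A minimal sketch of reading the depth and IR streams in Python, assuming the community pyk4a bindings for the Azure Kinect SDK (one of several possible routes; the depth mode and file names are hypothetical choices for illustration):

    import numpy as np
    from pyk4a import Config, DepthMode, PyK4A

    # Open the device with an unbinned narrow field-of-view depth mode
    k4a = PyK4A(Config(depth_mode=DepthMode.NFOV_UNBINNED))
    k4a.start()

    capture = k4a.get_capture()
    depth = capture.depth   # uint16 depth map in millimetres
    ir = capture.ir         # raw IR intensity image from the same sensor

    # Persist the frames as NumPy arrays for the OpenSim-side pipeline
    np.save("trial01_depth.npy", depth)
    np.save("trial01_ir.npy", ir)

    k4a.stop()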

After implementing our changes, we will test Kinect's motion detection capacity and evaluate
whether the collected data fall within an acceptable margin of error for research use. Future
plans include further exploring and expanding OpenSim's features, so our implementation must
be modular and scalable. The team will communicate the strengths and weaknesses of the
demonstrated approach in Dr. Moataz Eltoukhy's research paper and discuss future possibilities
for using such technologies in academic work.
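
One way to quantify that margin of error is a root-mean-square error between the markerless estimates and a marker-based reference, as in the sketch below; the joint-angle arrays are hypothetical placeholder data, not measured results.

    import numpy as np

    def rmse(reference, estimate):
        """Root-mean-square error between two aligned joint-angle series."""
        reference = np.asarray(reference, dtype=float)
        estimate = np.asarray(estimate, dtype=float)
        return float(np.sqrt(np.mean((reference - estimate) ** 2)))

    # Hypothetical knee-flexion angles (degrees), resampled to a common time base
    marker_based = [10.2, 25.7, 48.9, 60.1, 44.3]
    markerless = [11.0, 24.9, 50.2, 58.8, 45.1]

    print(f"RMSE: {rmse(marker_based, markerless):.2f} deg")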

Zubaer R. Chowdhury
Undergraduate Research Assistant (Department of Kinesiology and Sport Sciences)
B.Sc. Computer Science – Comprehensive
Scope:

Requirements:
Receive data from

Changes:

Timeline:
Spring - Summer

Deadline:
End of Summer

No more Kinect
Only 2 iPhones

Input Data:
The .sto file contains the raw data used for graphing.
One single file per trial, stored in a raw format (Python pickle).
Apply scripts to convert the raw pickle file into PDFs and CSVs (see the sketch below).
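
A minimal sketch of that conversion step, assuming each pickle holds a dict mapping channel names to sample lists (the file names and data layout are hypothetical):

    import pickle

    import pandas as pd
    import matplotlib.pyplot as plt

    # One raw pickle file per trial, as described above
    with open("trial01.pkl", "rb") as f:
        trial = pickle.load(f)   # assumed layout: {channel_name: [samples, ...]}

    df = pd.DataFrame(trial)

    # CSV for numeric post-processing
    df.to_csv("trial01.csv", index=False)

    # PDF with one subplot per channel for quick visual inspection
    df.plot(subplots=True, figsize=(8, 11), title="Trial 01")
    plt.savefig("trial01.pdf")
    plt.close()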

OpenCap:

Camera Marker-based Motion Tracking:


Qualisys

C3D file output

OpenSim receives the file

OpenCap processes the data received from OpenSim online.
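
A sketch of the C3D hand-off in this pipeline, using the C3DFileAdapter from the OpenSim 4.x Python bindings (file names are hypothetical):

    import opensim as osim

    # Read the Qualisys-exported C3D file
    adapter = osim.C3DFileAdapter()
    tables = adapter.read("qualisys_trial01.c3d")

    markers = adapter.getMarkersTable(tables)   # marker trajectories
    forces = adapter.getForcesTable(tables)     # force-plate channels

    # Write markers to .trc so downstream OpenSim tools can consume them
    osim.TRCFileAdapter.write(markers, "qualisys_trial01.trc")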
