Introducing the Non-Blocking Internal Binary Search Tree Algorithm


Explore our non-blocking, scalable algorithm that achieves fast performance while minimizing memory overhead. Say goodbye to locked data structures and suboptimal search times.

by Karandeep Jagpal
Related Work
1 Concurrent Search Trees

Previous concurrent search tree solutions dealt with challenges like multiple mutable fields per node
and slower search times compared to simpler structures like linked lists.

2 Mitigating Concurrent Execution Issues

Parallel execution in binary search trees poses challenges: contention arises when concurrent updates affect the same nodes, which can degrade performance.

3 Limitations of Existing Solutions

Existing solutions rely on locking data structures and have suboptimal memory use. The Parallel Binary
Search Tree Algorithm presented in this paper aims to address these challenges.
Algorithm Overview
Comprehensive Performance Enhancements

The algorithm uses optimized single-word reads, writes, and compare-and-swap operations to achieve fast performance.

Non-Blocking Execution

The algorithm is non-blocking, enabling concurrent execution and multi-threading when executing operations.

Simple Concurrency Handling

Contention only occurs when concurrent updates affect the same nodes, ensuring smoother concurrent operation execution.

Improved Scalability and Search Time

The implementation of the algorithm allows for better search performance, increased scalability and memory optimization.
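The single-word compare-and-swap mentioned above can be illustrated with a minimal sketch (the Node class and field names here are illustrative, not the paper's actual structures): a child pointer is a single word, so one CAS either installs a new child or fails harmlessly, with no thread ever blocking.

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal sketch of a single-word CAS on a child link.
// Node and its fields are illustrative names, not the paper's code.
public class CasDemo {
    static final class Node {
        final int key;
        final AtomicReference<Node> left = new AtomicReference<>();
        final AtomicReference<Node> right = new AtomicReference<>();
        Node(int key) { this.key = key; }
    }

    public static void main(String[] args) {
        Node root = new Node(50);
        // Succeeds only if root.left still holds the expected value (null).
        boolean first = root.left.compareAndSet(null, new Node(20));
        // A conflicting CAS simply fails; the losing thread is never blocked.
        boolean second = root.left.compareAndSet(null, new Node(30));
        System.out.println(first + " " + second); // prints "true false"
    }
}
```

A failed CAS tells the caller the tree changed underneath it, so it can re-read and retry rather than wait on a lock.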
Detailed Implementation

Structures

The binary search tree algorithm depends on specific data structures and auxiliary data structures that have been optimized for fast performance and scalability.

contains(k)

The contains(k) operation checks whether the tree contains a specific search key. Our implementation ensures proper concurrency handling and correct execution.

add(k)

The add(k) operation adds a specific key to the binary search tree. Our implementation includes concurrency handling and ensures correct execution and performance optimization.
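A contains(k)/add(k) pair in this style might look like the following sketch, assuming child links are single words updated by CAS. This is not the paper's actual implementation: removal, helping, and the auxiliary structures are omitted, and all names are invented for illustration.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of an internal BST with CAS-updated child links.
// contains(k) is a plain read-only descent; add(k) retries when it
// loses a CAS race. Deletion and helping are deliberately omitted.
public class SketchBst {
    static final class Node {
        final int key;
        final AtomicReference<Node> left = new AtomicReference<>();
        final AtomicReference<Node> right = new AtomicReference<>();
        Node(int key) { this.key = key; }
    }

    private final AtomicReference<Node> root = new AtomicReference<>();

    public boolean contains(int k) {
        Node cur = root.get();
        while (cur != null) {
            if (k == cur.key) return true;
            cur = (k < cur.key) ? cur.left.get() : cur.right.get();
        }
        return false;
    }

    public boolean add(int k) {
        while (true) {
            Node cur = root.get();
            if (cur == null) {
                if (root.compareAndSet(null, new Node(k))) return true;
                continue; // another thread installed a root; retry
            }
            while (true) {
                if (k == cur.key) return false; // key already present
                AtomicReference<Node> child = (k < cur.key) ? cur.left : cur.right;
                Node next = child.get();
                if (next == null) {
                    if (child.compareAndSet(null, new Node(k))) return true;
                    next = child.get(); // lost the CAS; re-read and descend
                }
                cur = next;
            }
        }
    }
}
```

Note how a lost CAS in add(k) is absorbed locally: the thread re-reads the link it failed on and keeps descending, rather than restarting from the root or taking a lock.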
Detailed Implementation (continued)
Optimizing Traversal

The algorithm provides traversal optimizations to enhance search times. Our implementation includes techniques like path compression to achieve faster search times.

remove(k)

The remove(k) operation removes a specific key from the binary search tree. Our implementation has overcome the challenges of removing nodes during concurrent execution and ensures performance optimization.
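One common way non-blocking trees handle concurrent removal is the "mark, then unlink" pattern: remove(k) first CASes a per-node mark (the linearization point), and physical unlinking happens separately. The sketch below shows only the logical-deletion half under invented names; the paper's remove(k) also has to physically unlink internal nodes and cooperate with concurrent operations, which is considerably more involved.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch of logical deletion via a marked bit; not the
// paper's actual remove(k). Physical unlinking and helping are omitted,
// and marked keys cannot be re-inserted in this simplification.
public class RemoveSketch {
    static final class Node {
        final int key;
        final AtomicBoolean marked = new AtomicBoolean(false); // logically removed?
        final AtomicReference<Node> left = new AtomicReference<>();
        final AtomicReference<Node> right = new AtomicReference<>();
        Node(int key) { this.key = key; }
    }

    private final AtomicReference<Node> root = new AtomicReference<>();

    private Node find(int k) {
        Node cur = root.get();
        while (cur != null && cur.key != k)
            cur = (k < cur.key) ? cur.left.get() : cur.right.get();
        return cur;
    }

    public boolean contains(int k) {
        Node n = find(k);
        return n != null && !n.marked.get(); // a marked node counts as gone
    }

    public boolean add(int k) {
        while (true) {
            Node cur = root.get();
            if (cur == null) {
                if (root.compareAndSet(null, new Node(k))) return true;
                continue;
            }
            while (true) {
                if (k == cur.key) return false; // simplification: no re-insert
                AtomicReference<Node> child = (k < cur.key) ? cur.left : cur.right;
                Node next = child.get();
                if (next == null) {
                    if (child.compareAndSet(null, new Node(k))) return true;
                    next = child.get();
                }
                cur = next;
            }
        }
    }

    public boolean remove(int k) {
        Node n = find(k);
        // The successful CAS on the mark is the linearization point:
        // exactly one of several concurrent removers of k can win it.
        return n != null && n.marked.compareAndSet(false, true);
    }
}
```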
Memory Management
Memory Optimization

The Parallel Binary Search Tree Algorithm minimizes memory overhead and optimizes usage for specific operations.
Correctness
Non-Blocking

The algorithm ensures non-blocking execution, allowing operations to progress smoothly even in the presence of concurrent updates. We discuss how our implementation achieves non-blocking behavior and ensures progress.

Linearizability

The implemented algorithm ensures linearizability: each operation appears to take effect instantaneously at a single point during its execution, with correct results.
Results

Experimental Setup

We tested the algorithm using industry-standard hardware and benchmark datasets that characterize specific use-cases, ensuring the accuracy of the results we present.

Throughput

Our parallel binary search tree algorithm outperformed alternative solutions in terms of throughput. We present our quantitative measurements of performance gains and discuss how the algorithm outperforms other solutions.

Memory Efficiency

The implementation provides substantial memory-efficiency gains and minimized overheads in comparison to other benchmarked structures, positively impacting the algorithm's scalability and performance.
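Throughput measurements of the kind described are typically gathered with a timed mixed-workload harness. The sketch below is purely illustrative of that methodology, not the paper's benchmark: it uses the JDK's ConcurrentSkipListSet as a stand-in for the tree under test, and the thread count, run time, and 80/10/10 operation mix are arbitrary choices.

```java
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

// Illustrative throughput harness, not the paper's setup: several
// threads run a mixed contains/add/remove workload for a fixed time
// and we count completed operations.
public class ThroughputDemo {
    static long run(int threads, long millis) throws Exception {
        ConcurrentSkipListSet<Integer> set = new ConcurrentSkipListSet<>();
        LongAdder ops = new LongAdder();
        long deadline = System.nanoTime() + millis * 1_000_000L;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                ThreadLocalRandom rnd = ThreadLocalRandom.current();
                while (System.nanoTime() < deadline) {
                    int k = rnd.nextInt(1024);
                    int op = rnd.nextInt(100);
                    if (op < 80) set.contains(k);   // ~80% lookups
                    else if (op < 90) set.add(k);   // ~10% inserts
                    else set.remove(k);             // ~10% removes
                    ops.increment();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return ops.sum();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("ops completed: " + run(4, 200));
    }
}
```

Swapping different concurrent sets into the same harness gives the apples-to-apples throughput comparison the Results section describes.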
