Industry Focus - Robotics and Automation
Winter 22
NEWS: Monitoring the Effects of Climate Change with Reef Robots
ARTICLE: What is Bionic Engineering?
NEWS: Keeping Semiconductor Manufacturing Clean with Robotics
THOUGHT LEADERS: Robotic Microswimmers: The Next Biomedical Breakthrough
THOUGHT LEADERS: Social Distancing Surveillance: How Robots Will Keep You in Line
THOUGHT LEADERS: Pediatric Robotic Surgery: Past, Present and Potential
NEWS: SEA-KIT Unveils New H-class USV for Ocean Survey
ARTICLE: 3D Printing Polymers with Graphene
NEWS: Fully Autonomous Inspection Robot Systems for Sewer Pipe Networks
News
The grant, awarded by the National Science Foundation and shared with Syracuse
University, is intended to help the organizations advance robots’ abilities to study coral
reefs. They plan to do this by developing autonomous underwater vehicles that can find
their way around complex underwater environments and gather data.
The Woods Hole team is led by computer scientist Yogesh Girdhar, and is trying to
create a robot that can measure multiple properties of the reef over a long period of
time. The robot needs to take readings on the reef’s biomass and biodiversity, as well as
record information about the behavior of organisms living or traveling through the reef
system.
Coral reefs play an important role in the well-being of our planet: they support a healthy
ocean ecosystem by creating habitats for numerous and varied animal species. About a
quarter of all marine organisms rely on a coral reef at some point in their lives, and reefs
also protect human habitats from storms and erosion.
In 2020, a report on the status of coral reefs worldwide by the Global Coral Reef
Monitoring Network valued reefs’ benefits to human society at around $2.7 trillion each
year. But despite the important role they play, reefs are declining all over the world. This
is caused by global warming, ocean acidification, pollution, and damage from industrial
fishing and shipping.
To overcome this, Girdhar and his team are working with a $1.5 million grant entitled “An
Ecologically Curious RObot for Monitoring Coral Reef Biodiversity.” The grant was made as
part of the US National Science Foundation’s National Robotics Initiative 3.0 program, which is
intended to support foundational research to integrate the next generation of robots with
human attributes like safety and independence.
The team is working with a concept that includes two input systems: image collection and
acoustic analysis. The robot will process this information in a way analogous to a “curious
diver,” that is, by exploring caves and passages that seem likely to give up more interesting
data further in.
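The “curious diver” behavior amounts to information-gain-driven exploration: rank candidate sites by how novel they look compared with what has already been seen, then head for the most promising one. As a rough illustration only (the scoring rule, feature vectors, and names below are hypothetical, not taken from the WHOI system):

```python
import math

def curiosity_score(observations, candidate_features):
    """Score a candidate site by how novel its features look
    relative to what the robot has already observed."""
    if not observations:
        return float("inf")  # nothing seen yet: everything is interesting
    # Novelty = distance from the mean of past feature vectors.
    dim = len(candidate_features)
    mean = [sum(o[i] for o in observations) / len(observations) for i in range(dim)]
    return math.dist(candidate_features, mean)

def choose_next_site(observations, candidates):
    """Pick the candidate whose features promise the most new information."""
    return max(candidates, key=lambda c: curiosity_score(observations, c["features"]))

past = [(0.2, 0.1), (0.25, 0.12)]
sites = [
    {"name": "open sand", "features": (0.22, 0.11)},
    {"name": "reef crevice", "features": (0.9, 0.8)},
]
print(choose_next_site(past, sites)["name"])  # the crevice looks most novel
```

A real system would score expected information gain over learned models rather than raw feature distance, but the decision structure is the same.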
The imaging system enables scientists to accurately identify individual species and any
locations of interest in a geospatially complex environment. However, optical imaging
has limited range underwater, where little light penetrates. Acoustic signals, on the other hand,
work well underwater and can provide a good overall picture of a reef’s health.
However, they cannot provide enough information to identify individual species.
The Woods Hole team is combining the two input systems to get the best of both worlds
in their robot investigator. This will enable the robot to build an accurate, detailed
picture of the reef ecosystem, its health, and its functions, much as a trained scientist
diving the reef manually would.
The team also needs to overcome the problem of the robot itself disturbing animals that
live around the reef. The presence of a moving robot is enough to disrupt animals’
behavior and cause them to scatter to other parts of the reef or hide in gaps.
To address this, the robot has been designed with a “hopping” motion system that
makes it move quickly in short distances, then settle down on the sea bed or reef itself
to gather data. Moving in short bursts also enables the robot to use as little energy as
possible and greatly extend its possible mission length. This gives it advantages over
most other swimming robots used for environmental analysis and enables the robot to
gather much more information over a longer period of time.
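The energy argument behind the hopping design is easy to see with a back-of-envelope duty-cycle calculation. The power draws and battery capacity below are purely illustrative assumptions, not specifications of the actual robot:

```python
# Hypothetical power draws (watts) -- illustrative only.
P_SWIM = 30.0       # thrusters running
P_SETTLED = 2.0     # sensing while resting on the seabed
BATTERY_WH = 500.0  # battery capacity in watt-hours

def mission_hours(duty_cycle_swimming):
    """Mission endurance for a given fraction of time spent swimming."""
    avg_power = duty_cycle_swimming * P_SWIM + (1 - duty_cycle_swimming) * P_SETTLED
    return BATTERY_WH / avg_power

print(f"always swimming: {mission_hours(1.0):.0f} h")    # ~17 h
print(f"hopping (5% swim): {mission_hours(0.05):.0f} h") # ~147 h
```

Even with generous assumptions, spending most of the mission settled on the reef stretches endurance by nearly an order of magnitude.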
“We envision putting the robot in a reef, and having it come back in a week or month
with a detailed understanding of how biodiversity is distributed across the reef in space
and time,” Girdhar said.
“It will really give coral reef scientists more bang for the buck over each
deployment.”
The curious robot is one of many new bio-inspired robots being used for environmental
monitoring applications. For example, the Envirobot, built by the Biorobotics
Laboratory at EPFL in Switzerland, has a motion system inspired by eels and detects
water pollution in real-world conditions. Robot jellyfish have also been designed to passively float in the ocean
and gather data to help us understand climate change.
Disclaimer: The views expressed here are those of the author expressed in their private
capacity and do not necessarily represent the views of AZoM.com Limited T/A
AZoNetwork the owner and operator of this website. This disclaimer forms part of the
Terms and conditions of use of this website.
Written by
Ben Pilkington
Ben Pilkington is a freelance writer who is interested in society and technology.
He enjoys learning how the latest scientific developments can affect us and
imagining what will be possible in the future. Since completing graduate studies
at Oxford University in 2016, Ben has reported on developments in computer
software, the UK technology industry, digital rights and privacy, industrial
automation, IoT, AI, additive manufacturing, sustainability, and clean
technology.
What is Bionic Engineering?
While it may seem to belong to the realm of science fiction, recent advances in the
field of bionic engineering and bionics have provided the potential for augmenting
the human body, repairing damage to limbs, and even replacing entire sections of the
human body. This article will look at the field of bionic engineering and recent
advances in bionic technologies.
Bionics
Bionics is a field of engineering that studies and develops mechanical systems that
accurately mimic living organisms' function or parts. Biological structures, methods, and
systems are applied to the design of engineering systems and modern technologies.
The inspiration for bionic engineering comes from the observation that evolutionary
pressures force biological organisms to adapt and develop structures and processes
which possess the optimal efficiency for survival.
Bionics is an interdisciplinary field that combines engineering and life sciences. Related
interdisciplinary fields include biophysics, biomechanics, cybernetics, biocybernetics,
information theory, biomedical engineering, and bioengineering.
A large overlap exists between these fields, making sharp distinctions difficult: they
draw on the same basic knowledge and differ mainly in their application and use.
The field’s origins can be traced back to 1960, when around seven hundred physicists,
engineers, biologists, biophysicists, psychologists, and psychiatrists attended a
congress in Dayton, Ohio. Some notable historical examples of biomimetic engineering
include cat’s eye reflectors and Velcro.
Today, bionics and bionic engineering are pushing the boundaries of biology and
engineering with the development of numerous biomimetic devices that are changing
the future of areas such as physiotherapy, robotics, and medical science.
Bionic Vision
The bionic eye is a potential breakthrough technology that can enhance the vision of
patients with eye conditions and partial or complete blindness.
Bioelectric implants can capture visual data and transmit it to cells in the visual
cortex, which the brain can then interpret as sight. Two main challenges exist with
this technology: the complexity of mimicking retinal function and consumers’
preferences and constraints regarding miniature, implantable devices. In a typical
system, a camera captures images and a processor converts the signals into electrical
impulses, acting as a vital link between the object and optic nerve, bypassing
damaged photoreceptors.
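To give a feel for the kind of processing involved, the sketch below downsamples a grayscale camera frame to a small electrode grid, a crude stand-in for the image processing stage of a retinal prosthesis. The grid size and signal scaling are hypothetical:

```python
def frame_to_electrodes(frame, grid=(6, 10)):
    """Downsample a grayscale frame (list of rows, values 0-255) to a
    small electrode grid by averaging blocks of pixels."""
    rows, cols = len(frame), len(frame[0])
    gr, gc = grid
    out = []
    for i in range(gr):
        row = []
        for j in range(gc):
            block = [
                frame[r][c]
                for r in range(i * rows // gr, (i + 1) * rows // gr)
                for c in range(j * cols // gc, (j + 1) * cols // gc)
            ]
            row.append(sum(block) / len(block))  # average stimulation level
        out.append(row)
    return out

# A 60x100 frame: dark left half, bright right half.
frame = [[0] * 50 + [255] * 50 for _ in range(60)]
levels = frame_to_electrodes(frame)
print(levels[0])  # first electrode row: five dark, then five bright values
```

Real devices must also map these levels to safe charge-balanced pulse trains, which is where much of the engineering difficulty lies.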
Auditory Bionics
Bionic devices can help people with partial or profound hearing loss. Cochlear implants,
auditory midbrain implants, and auditory brainstem implants are the three main classes
of this technology. An artificial link between the brain and the auditory source is created
via a microelectronic array implanted in either the brain stem or the cochlea.
Compared to bionic devices that aid sight, auditory bionics is a more mature commercial
technology field. The market has greater global adoption, a larger innovative ecosystem,
and more products currently on the market.
Bionic Limbs
According to the WHO, around 15% of the world’s population live with some form of
physical disability, whether it is hereditary or stemming from injuries and accidents.
For around one hundred years, prosthetic limbs have been the norm for providing some
degree of functional independence for patients. In recent decades, bionic limbs have
started to replace prosthetic limbs, which, despite some advances in technology and
lighter, advanced materials, suffer from limitations.
Bionic limbs are interfaced with the neuromuscular system of patients. This allows
enhanced control of limbs that mimic biological functions. Grasping, bending, and
flexing are controlled by the brain. Movement is controlled via an electronic pathway
that bypasses the damaged peripheral nerves connecting the brain and the mechatronic
limb.
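Myoelectric control of this kind is often summarized as rectifying and smoothing the muscle signal, then mapping the activation level to a limb command. A minimal sketch, with entirely hypothetical signal values and thresholds:

```python
def emg_envelope(samples, alpha=0.2):
    """Rectify a raw EMG signal and smooth it with an exponential
    moving average to obtain a muscle-activation envelope."""
    env, out = 0.0, []
    for s in samples:
        env = alpha * abs(s) + (1 - alpha) * env
        out.append(env)
    return out

def grasp_command(envelope, threshold=0.5):
    """Map the latest envelope value to an open/close hand command."""
    return "close" if envelope[-1] > threshold else "open"

raw = [0.05, -0.04, 0.9, -1.1, 1.0, -0.95, 1.05]  # muscle contracting
env = emg_envelope(raw)
print(grasp_command(env))  # prints "close"
```

Production systems use multi-channel pattern recognition rather than a single threshold, but the rectify-smooth-decide pipeline is the common starting point.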
The Bionic Engineering Lab at the University of Utah is one example of academic
institutions developing next-generation bionic limbs.
Examples of research include the Ergo Knee Expo, a research exoskeleton designed to
study human-robot interaction and ergonomics; an adaptive positioning ankle that uses
a novel non-backdrivable actuation system; and the OpenLegs Bionics CAD model,
an open-source project that makes powered prostheses more accessible.
Perhaps the most notable bionic device under development at the Bionic Engineering
Lab is an advanced bionic leg design. Instead of being controlled by signals from the
brain, it utilizes artificial intelligence to control itself.
Additionally, MIT has announced the founding of an interdisciplinary bionics center with
the aid of $24 million in funding from Lisa Yang, a philanthropist and former investment
banker. The K. Lisa Yang Center for Bionics will focus on three bionic technologies
during its first four years of operation: bionic limb reconstruction, digital nervous
systems, and brain-controlled limb exoskeletons.
Beyond medical devices, bionic engineering has vast potential for multiple industries,
including human and animal-like biomimicking robots and military and construction
exoskeletons.
A notable example is the morphing wing, inspired by bird wings, which can change its
shape during flight in line with optimal mission performance. Other research is being
conducted into the perception, decision-making, implementation, and interaction
capabilities of autonomous robotic aircraft using bionics.
The Future
The field of bionic engineering is progressing rapidly, and the integration of
technologies such as artificial intelligence and the development of advanced materials
are pushing the technology forward into the 21st century. The future of bionic engineering is
exciting and has vast potential for medicine and industry.
Further Reading
Roth, R. R. (1983) The Foundation of Bionics. Perspectives in Biology and Medicine 26(2). Available at: https://muse.jhu.edu/article/403622/summary
University of Utah (2021) Bionic Engineering Lab. Available at: https://belab.mech.utah.edu/
Michalowski, J. (2021) New bionics center established at MIT with $24 million gift. MIT News. Available at: https://news.mit.edu/2021/new-bionics-center-established-mit-24-million-gift-0923
Frost & Sullivan (2017) Bionics: A Step into the Future. Alliance of Advanced BioMedical Engineering. Available at: https://aabme.asme.org/posts/bionics-a-step-into-the-future
The SMC Society (2022) Autonomous Bionic Robotic Aircrafts. IEEE SMC. Available at: https://ieeesmc.org/technical-activities/systems-science-and-engineering/autonomous-bionic-robotic-aircraft
Written by
Reginald Davey
Reg Davey is a freelance copywriter and editor based in Nottingham in the
United Kingdom. Writing for AZoNetwork represents the coming together of
various interests and fields he has been interested and involved in over the
years, including Microbiology, Biomedical Sciences, and Environmental Science.
Keeping Semiconductor
Manufacturing Clean with Robotics
By Taha Khan
Both the robotics and semiconductor manufacturing fields are rapidly evolving
simultaneously. In this article, the benefits of using robots in the semiconductor
manufacturing sector, as well as the advantages and challenges associated with their
use, are discussed.
Using robots to handle microscopic parts has become essential as semiconductor
manufacturers adopt micro- and nanotechnology. Likewise, moving silicon wafers at
high speed without causing damage is a critical challenge for these machines in
semiconductor fabrication.
These tasks become more complex with variable wafer sizes. To avoid causing any
damage to these delicate components, advanced cleanroom robots are used for
handling operations.
For highly sensitive applications, these systems can offer an ultra-high performance line
of cleanroom robot arms. Such installations can manipulate cassettes, silicon wafers,
photomasks and panels in all semiconductor value chains with smooth trajectories, high
speed, and exceptional accuracy.
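Smooth trajectories matter because acceleration, not speed, is what makes a wafer slip on an end effector. A minimal trapezoidal velocity profile, with hypothetical motion limits, illustrates how a handling move keeps acceleration bounded:

```python
def trapezoidal_profile(distance, v_max, a_max, dt=0.01):
    """Generate a trapezoidal velocity profile: accelerate at a_max,
    cruise at v_max, then decelerate, so acceleration never exceeds
    the limit that would make a wafer slip."""
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc**2
    if 2 * d_acc > distance:  # short move: triangular profile, never reaches v_max
        t_acc = (distance / a_max) ** 0.5
        v_max = a_max * t_acc
    d_cruise = distance - a_max * t_acc**2
    t_cruise = d_cruise / v_max
    total = 2 * t_acc + t_cruise
    profile, t = [], 0.0
    while t <= total:
        if t < t_acc:
            v = a_max * t                    # acceleration phase
        elif t < t_acc + t_cruise:
            v = v_max                        # cruise phase
        else:
            v = max(0.0, a_max * (total - t))  # deceleration phase
        profile.append(v)
        t += dt
    return profile

vels = trapezoidal_profile(distance=0.5, v_max=1.0, a_max=2.0)
print(max(vels))  # never exceeds v_max
```

Commercial wafer handlers typically use S-curve (jerk-limited) profiles, which add one more level of smoothing on top of this idea.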
The latest robots offer reliability, flexibility, and high-speed handling of wafers or
cassettes for front-end semiconductor fabrication processes like cleaning, lithography,
etching, and polishing. Similarly, they perform back-end tasks like testing, packaging,
and assembly.
A common structural feature among these robot arms is a glossy polyurethane surface
coating. To meet cleanroom requirements, modern designs must comply with ISO
standards, including ISO 14644-1 Class 2/3.
Kawasaki has introduced two series of clean robots: the NTS series and the TTS
series. These high-precision systems use advanced drive systems for very smooth motion.
In addition to high precision, the TTS series has a rigid telescopic mechanism allowing
high-speed handling in both low and high positions.
Both the NTS and TTS series can access two or three FOUPs in an EFEM without a track.
The NTS series includes two models: the NTS10, with four axes of motion, and the
NTS20, with five. Similarly, the TTS series includes the TTS10, with four axes, and the
TTS20, with five. Both are compliant with the SEMI F47 and SEMI S2 standards.
Although fields like soft robotics, micro- and nanorobotics, artificial intelligence (AI),
and machine learning have boosted the development of this sector, there is still room for
improvement as far as applications in semiconductor manufacturing are concerned.
For example, there is still room for improvement in fully automated robots through AI
and machine learning to reduce manual labor, which will eventually help keep the
environment cleaner. Furthermore, this will also reduce the cost of hiring skilled staff to
operate these machines.
Since semiconductor production involves numerous small and fragile parts, better
resolution cameras are required. Only high-resolution cameras are capable of locating
and inspecting these pieces.
Written by
Taha Khan
Taha graduated from HITEC University Taxila with a Bachelors in Mechanical
Engineering. During his studies, he worked on several research projects related
to Mechanics of Materials, Machine Design, Heat and Mass Transfer, and
Robotics. After graduating, Taha worked as a Research Executive for 2 years at
an IT company (Immentia). He has also worked as a freelance content creator at
Lancerhop. In the meantime, Taha did his NEBOSH IGC certification and
expanded his career opportunities.
We speak to Dr. Daniel Ahmed from the Acoustic Robotics Systems Lab about the
development of robotic microswimmers modeled from starfish larvae. These
acoustically controlled microrobots are manipulated by ultrasound and are paving the
way for innovative collaborations between biomedical and microrobotic systems
research.
In my academic career, I was always drawn to the study of acoustics, an interest that
deepened during my doctoral research.
The possibilities are endless, particularly for medical applications, which motivates me
even more. Acoustically-activated microswimmers represent a relatively new
development with many advantages, and I hope to be at the forefront of this field.
The combination of manipulation and drug delivery capabilities means it can also
potentially see a use for conditions such as stroke and heart attack.
Despite the very different dynamics, we still encountered similar microstreaming, which
was unexpected.
Our attention was drawn to the configuration of multiple cilia arranged in ciliary bands.
Inspired by the starfish larva, we arranged ciliary arrays in a "+" and "−" configuration.
Based on the arrangement of the ciliary arrays, a fluid source or sink can be developed
when the soft microstructures are activated with ultrasound.
This discovery initiated the research project leading to the creation of the starfish-
inspired microrobot and the trapping mechanism, which is also derived from starfish
larvae.
The manipulation and separation of particles and cells are required in many applications
in biology and medicine for drug screening and therapeutics. Presently, there are many
technologies available to separate particles based on filtration, centrifugation, acoustics,
optics, and electrophoresis.
Using acoustic waves for particle manipulation is attractive because it facilitates the
manipulation of particles independent of their optical, magnetic, or electrical properties
in a simple, precise, and biocompatible manner.
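One reason acoustics scales so well to small particles: in a standing wave, particles collect at pressure nodes spaced half a wavelength apart, so higher drive frequencies give finer manipulation. A quick calculation, assuming the speed of sound in water:

```python
def node_spacing_mm(frequency_hz, sound_speed_m_s=1480.0):
    """Pressure nodes in a standing wave sit half a wavelength apart:
    spacing = c / (2 f). Sound speed assumed for water at ~20 C."""
    wavelength = sound_speed_m_s / frequency_hz
    return wavelength / 2 * 1000  # metres -> millimetres

print(f"{node_spacing_mm(1e6):.2f} mm")   # 1 MHz: ~0.74 mm
print(f"{node_spacing_mm(10e6):.3f} mm")  # 10 MHz: ~0.074 mm
```

At megahertz frequencies the node spacing drops below a millimetre, which is why ultrasound is well matched to trapping cells and microparticles.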
Our goal is to develop a new design for the lab-on-chip system for automatic
pumping and label-free manipulation and separation of cells and particles for
portable diagnostics.
Schematic of a bioinspired trapping mechanism consisting of a + ciliary band adjacent to a − ciliary band
(left). Acoustic power-dependent transport and trapping: transport and trapping of 10 µm particles (colored
trajectories), with maximum transport efficiencies from the + to the − ciliary band achieved at (a) 12 and
(b) 18 volts peak-to-peak (VPP) (right). © Dillinger, C., Nama, N., and Ahmed, D. (2021)
Furthermore, the fabrication is very straightforward and does not
require magnetic particles to activate the cilia magnetically, i.e., these systems are
biocompatible.
How Bioinspired Ultrasound Microrobots revolutionize Medicine | Daniel Ahmed. Video Credit:
Falling Walls Foundation/YouTube.com
The ARSL is exploring many of the fundamental challenges associated with microrobots,
especially when exposed to flow conditions similar to those in the vasculature. With the
advent of artificial intelligence, I anticipate that navigation of these microrobots will
become much more sophisticated, allowing them to potentially target diseases
effectively and promptly.
Upon reaching a site, these swimmers can release drugs very locally. My belief is that
microrobots could be used to treat patients suffering from heart disease or stroke. It is
likely that we will witness a revolution in medicine regarding the treatment of disease,
and we would not need to wait 30 years.
Daniel Ahmed has published 31 peer-reviewed journal articles and has published 14
conference proceedings. He has been selected as one of the ten winners in Falling
Walls Science Breakthroughs of the Year 2021 in Engineering and Technology.
Disclaimer: The views expressed here are those of the interviewee and do not
necessarily represent the views of AZoM.com Limited (T/A) AZoNetwork, the owner and
operator of this website. This disclaimer forms part of the Terms and Conditions of use
of this website.
Written by
Megan Craig
Megan graduated from The University of Manchester with a B.Sc. in Genetics,
and decided to pursue an M.Sc. in Science and Health Communication due to
her passion for combining science with content creation. As part of her studies,
Megan partnered with Jodrell Bank Discovery Centre as a Digital Marketing
Assistant, producing content and updating sections of their website. In her
spare time, she loves to travel, exploring each location's culture and history -
including the local cuisine. Her other interests include embroidery, reading
fiction, and practicing her Japanese language skills.
We explored the literature, studied the precautions required during a pandemic, and
identified social distancing as a strong recommendation in social or public settings.
This inspired us to develop a robot system that could monitor and encourage social
distancing in public places using AI technologies. This robot can be deployed without any
human supervision.
Social distancing has one of the highest impacts on controlling the spread of a disease.
It is effective and easy to follow, and the scientific community has observed during this
pandemic that social distancing has made a difference.
Our method uses the visual feed from a depth camera onboard a mobile robot and
existing CCTV infrastructure (if available) to detect social distancing breaches. In
addition, the robot can autonomously navigate and interact with people and encourage
them to maintain social distancing. The robot also has a thermal camera, whose thermal
image feed can possibly be transmitted to security/healthcare professionals.
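Once each detected person has a position from the depth camera, breach detection reduces to pairwise distance checks. A minimal sketch (the 2 m threshold, IDs, and coordinates below are illustrative, not the actual system):

```python
import itertools
import math

def find_breaches(positions, min_distance=2.0):
    """Flag every pair of detected people standing closer than
    min_distance (metres). positions: {person_id: (x, y) in metres}."""
    breaches = []
    for (id_a, pa), (id_b, pb) in itertools.combinations(positions.items(), 2):
        if math.dist(pa, pb) < min_distance:
            breaches.append((id_a, id_b))
    return breaches

people = {"p1": (0.0, 0.0), "p2": (1.2, 0.5), "p3": (6.0, 6.0)}
print(find_breaches(people))  # only p1 and p2 are too close
```

The real pipeline must first localize people reliably from RGB-D and CCTV feeds, which is the harder problem; the geometry itself is simple.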
This thermal feed can help professionals discreetly follow protocols for COVID testing
and contact tracing. We do not record people’s faces or identities. Therefore, an
individual’s identity or symptoms (or any other personal data) are not stored or linked together.
If multiple groups are detected, the robot prioritizes groups depending on whether a
group is static or mobile, the number of members in a group, social interaction between
the members, etc., and autonomously navigates to the pedestrians in a top-priority
group to encourage them to move apart. For autonomous navigation, our robot uses a
Lidar sensor and our machine learning-based navigation algorithm.
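The prioritization step can be pictured as a scoring heuristic over detected groups. The weights below are illustrative guesses, not the published algorithm:

```python
def group_priority(group):
    """Heuristic priority score: larger, static, interacting groups
    first. Weights are illustrative, not from the actual system."""
    score = 2.0 * group["members"]
    score += 3.0 if group["static"] else 0.0
    score += 1.5 if group["interacting"] else 0.0
    return score

groups = [
    {"name": "A", "members": 2, "static": False, "interacting": False},
    {"name": "B", "members": 4, "static": True, "interacting": True},
]
target = max(groups, key=group_priority)
print(target["name"])  # the robot approaches group B first
```

A static, chatting group of four scores far above a pair walking past, so the robot spends its time where a reminder is most likely to matter.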
Our robot uses a screen to display messages to the groups, and a thermal camera,
whose feed can be monitored by healthcare professionals for symptom monitoring.
As mentioned before, our robot also aids in contact tracing since it has an onboard
thermal camera. All these features would help curb the spread of the disease. We
believe that our robot could be another tool for healthcare/security professionals to
simplify their workloads.
Our current system lacks the capability to identify such social groups in all scenarios.
We are trying to devise algorithms that will help the robot to identify such groups and
make it more socially intelligent. We also need to develop better techniques for human-
robot interaction.
Deep learning already powers driver assistance systems, smartphones, IoT devices,
etc. However, deep learning systems need good, labeled datasets.
Moreover, they use high computational power for training and we need to develop
power-efficient methods that can be used with small and cheap devices or mobile
robots.
Many companies are developing personal robots that can be used for home monitoring
and as social companions. It is one of the most promising technologies with wide
applications in science and daily activities.
What are the next steps for you and your research?
Our upcoming research would focus on improving the human-robot interaction and
improving the social intelligence capabilities of such robots. We are also developing
next-generation robot navigation technologies that can enable us to operate in complex
indoor and outdoor scenes. Ultimately, these robots have to be safe and friendly so that
they can be widely accepted by society. They could be used as delivery robots or social
robots.
Written by
Emily Henderson
During her time at AZoNetwork, Emily has interviewed over 300 leading experts
in all areas of science and healthcare including the World Health Organization
and the United Nations. She loves being at the forefront of exciting new
research and sharing science stories with thought leaders all over the world.
Background
Usually, the surgeon places three or four laparoscopic ports into the child's abdomen,
much like standard laparoscopic surgery. The robot allows the surgeon to operate by
controlling those laparoscopic instruments using a console or control pad located near
the operating room. For the most part, the console is in the operating room, but it can
be someplace else, like an adjoining room or office. The console is not sterile, and
surgeons can operate without scrubbing; sterility is not an issue when using the
console.
Pediatric robotic surgery is still in its infancy compared to adult robotic surgery.
Pediatric robotic surgery is difficult because children have small abdomens compared to
adults. The small working areas pose a challenge for robotic surgery because there is
not much room for robotic arms. There are no special robots or robotic instruments for
pediatrics; the standard adult instruments are used. The robotic surgeon will use the
same instruments and robot for a 57-year-old prostatectomy and a six-month-old
nephrectomy.
Pediatric robotic surgery is more difficult than adult robotic surgery due to the
small size of the child's abdomen."
As surgeons became more comfortable with adult robotics, pediatric robotics emerged
as a better way to perform surgery on children. Before 2010, there was very little
pediatric robotic surgery being performed worldwide. This was because surgeons were
still on the learning curve for adult robotics and pediatrics represented a technical
challenge because of the child's small size.
But that changed quickly. At first, only the most skilled robotic surgeons would be
willing to perform robotic surgery on children. However, as surgeons became more
skilled in adult robotic surgery, more and more surgeons grew comfortable performing
robotic surgery on children. Remember, the robotic skills and instruments were the
same; the surgery just needed to be performed in a much smaller space. Adopting
robotics in children was a natural progression as adult surgeons became more
competent in robotics.
Around 2012, the medical community was hopeful that the internet would allow surgeons
from one location to perform surgery on a child in another location, for example, across
State lines. At the start, this was seen as a potential breakthrough and created the
possibility that robotic surgery could be offered to children in remote areas where
robotics was not an option.
Unfortunately, the speed of the internet was a limiting factor as the surgeon's
movements were delayed by a few seconds during the operation, a major drawback and
nonstarter for surgery. In the future, advances in internet speed, including 5G and
beyond, may reignite the discussion to perform robotic surgery remotely around the
globe.
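The physics behind that delay is easy to estimate: even before routing and processing overhead, the speed of light in optical fiber imposes a hard lower bound on round-trip time, and real networks add much more on top. A back-of-envelope sketch:

```python
def fiber_round_trip_ms(distance_km):
    """Lower-bound round-trip time over optical fiber, where light
    travels at roughly 200,000 km/s (about 2/3 of c in vacuum).
    Real networks add routing and processing delay on top."""
    speed_km_per_ms = 200.0  # 200,000 km/s = 200 km/ms
    return 2 * distance_km / speed_km_per_ms

print(f"{fiber_round_trip_ms(4000):.0f} ms")  # ~4000 km one way: 40 ms round trip
```

Propagation alone is tens of milliseconds across a continent; the multi-second lags surgeons experienced came from the network stack of the era, which is exactly what 5G and successors aim to cut.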
The future of pediatric robotic surgery will include a wider application of robotic
surgery outside of urology."
The adoption of robotics in urology has been so fast that new pediatric urologists need
to be trained in robotics to be marketable; it is a skill expected by most new employers.
Procedures such as nephrectomy, bladder reconstruction, and removal of large kidney
stones are now routinely performed robotically. As recently as ten years ago, no such
training was needed or offered at some fellowships. Now, robotics has taken over the
world of pediatric urological surgery and set it on a new course.
At our institution, Rutgers, Robert Wood Johnson Medical School, most major urological
cases are performed robotically. Two- or three-day hospital admissions are now cut
down to outpatient surgery as we push the envelope. This is a better use of our scarce
financial resources and is likely to become more common in the future. Patients can be
sent home on the same day because there is less pain and less recovery in robotic
surgery compared to open surgery.
At our center, surgeons are now using the da Vinci® single-port surgical system
to perform surgery. Standard robotic surgery requires three or four laparoscopic ports to
be placed by the surgeon to perform surgery. For single port surgery, only one port is
needed resulting in less pain, faster recovery, and a better cosmetic outcome, which is
important for children from a psychological standpoint (Figure 1). This advance was
made possible by new technological innovations.
Single-port robotic surgery is being used at a few centers worldwide and is likely to
be adopted more widely in the next five years."
Conclusion
The future of robotic surgery is limitless. Newer technologies will emerge in the form of
small robots that can move around and perform surgery at faster internet speeds,
allowing remote robotic surgery to be performed around the globe. There may be a day
where robotic surgery is just that, surgery performed by a robot.
References
Johnson, K.C., Cha, D.Y., DaJusta, D.G., Barone, J.G., Ankem, M.K. Pediatric
single-port-access nephrectomy for a multicystic, dysplastic kidney.
J Pediatr Urol. 2009 Oct;5(5):402-4. doi: 10.1016/j.jpurol.2009.03.011. Epub 2009 Apr 29.
Blanco, F.C., Kane, T.D. Single-Port Laparoscopic Surgery in Children: Concept and
Controversies of the New Technique. Minim Invasive Surg. 2012;2012:232347.
Dr. Barone's clinical research has appeared in scientific and non-scientific publications,
including Yahoo, the New York Times, and The Wall Street Journal, to name a few. His
focus is on performing research that can be useful for parents and children today. For
example, Dr. Barone demonstrated an association between breastfeeding and early
toilet training and documented the harmful effects of secondhand smoke on the
developing pediatric bladder.
Dr. Barone is part of the pediatric robotics team at Rutgers, Robert Wood Johnson
Medical School that also includes Dr. Sammy Elsamra, Chief of Robotics, Dr. Haris
Ahmed, Pediatric Urologist, and a dedicated group of anesthesiologists and nurses
whose collective vision is to innovate and advance robotic surgery for children.
Disclaimer: This article has not been subjected to peer review and is presented as the
personal views of a qualified expert in the subject in accordance with the general terms
and conditions of use of the News-Medical.Net website.
SEA-KIT H-Class USV 12m & 15m variants. Image Credit: SEA-KIT International
The SEA-KIT H-class USV, with its retractable gondola and dual sensor deployment
options, is a highly configurable design based on a wealth of operational data and
feedback collected from the company’s established X-class USVs. Several of these 12m
vessels are currently operational in the Indian Ocean, North Sea, Red Sea and the
Pacific.
The H-class features a composite hull for higher transit speeds, giving it greater range
and endurance, as well as active stabilisers to minimise roll. The new design has 12m
and 15m variants, with the 12m version transportable in a standard shipping container for
rapid, low-cost deployment. Both variants can be davit launched.
The H-class USV can accommodate a range of sensors as well as deploy a tow cage,
SVP, MAPR, CTD and side scan sonar for deep-water and nearshore bathymetric and
hydrographic survey missions. The vessel includes a Multibeam Echo Sounder (MBES),
station holding and winch-deployed sensor payloads for versatile ocean survey
capability.
The SEA-KIT team will officially launch the new design at Seawork, Europe’s largest
commercial marine exhibition, which takes place in Southampton, UK, later this month
(21-23 June 2022).
Source: https://www.sea-kit.com/
The idea of self-healing skin wrapped around a robotic skeleton conjures up all kinds
of sci-fi imagery, yet a group of Japanese researchers has managed to develop a
controllable robotic finger covered in “living” skin which they say could bring truly
human-like robots a step closer to reality.
Image Credit: The University of Tokyo. (2022) Robot skin heals | The University of Tokyo. [online]
Available at: https://www.u-tokyo.ac.jp/focus/en/press/z0508_00225.html
The team from the University of Tokyo has combined their expert knowledge in tissue
culturing and robotics to make this latest innovation which could have a significant
impact on how robots are developed in the near future.
Published in the journal Matter, the researchers detail the method they have developed
for generating seamless coverage of 3D objects with a “living” skin equivalent. Led by
Professor Shoji Takeuchi, a leader in the field of biohybrid robots, the research is a clear
indicator of the intersection of robotics and bioengineering.
We have created a working robotic finger that articulates just as ours does, and is
covered by a kind of artificial skin that can heal itself.
“Our skin model is a complex three-dimensional matrix that is grown in situ on the finger
itself. It is not grown separately then cut to size and adhered to the device; our method
provides a more complete covering and is more strongly anchored too,” Professor
Takeuchi continued.
The researchers hope the robotic finger covered in self-healing skin will support medical
research on skin damage, including burns and severe wounds, as well as advance various
manufacturing applications.
Takeuchi and his team were inspired by the medical treatment of deeply burned skin
using grafted hydrogels.
The living skin is cultivated directly onto the robotic finger, which the researchers
found to be one of the more complex aspects of this research: it required purpose-built
engineered structures to which the collagen matrix could be anchored.
Our creation is not only soft like real skin but can repair itself if cut or damaged in
some way. So we imagine it could be useful in industries where in situ repairability
is important as are human-like qualities, such as dexterity and a light touch.
When developing the technology further, the researchers also aim to add other kinds of
cells, giving robotic devices the capability to sense as humans do.
In the future, we will develop more advanced versions by reproducing some of the
organs found in skin, such as sensory cells, hair follicles and sweat glands. Also,
we would like to try to coat larger structures.
Human-like robotics could thus pave the way for new possibilities in advanced
manufacturing; while the movement of the robotic finger is purely mechanical, it
opens the door to more capable automation.
Furthermore, the self-healing skin technology could offer hope to patients who
require life-changing procedures for burns or other serious wounds, and could
simplify future research into skin repair.
Disclaimer: The views expressed here are those of the author expressed in their private
capacity and do not necessarily represent the views of AZoM.com Limited T/A
AZoNetwork the owner and operator of this website. This disclaimer forms part of the
Terms and conditions of use of this website.
Written by
David J. Cross
David is an academic researcher and interdisciplinary artist. David's current
research explores how science and technology, particularly the internet and
artificial intelligence, can be put into practice to influence a new shift towards
utopianism and the reemergent theory of the commons.
A maze of pipes and conduits for water, sewage, and gas runs beneath our streets.
Monitoring these pipes for leaks, or carrying out maintenance, usually requires
digging them up. This is not only laborious and costly (an estimated £5.5 billion
per year in the UK alone), but it also disrupts traffic and inconveniences people
living nearby, not to mention the environmental damage it causes.
Layout of the autonomous Joey mini-robot. Image Credit: TL Nguyen, A Blight, A Pickering,
A Barber, GH Jackson-Mills, JH Boyle, R Richardson, M Dogar, N Cohen
Imagine a robot that can navigate the tightest of pipe networks and send images of
damage or blockages to human operators. This is no longer just a dream, according to a
new study published in Frontiers in Robotics and AI by a team of scientists from the
University of Leeds.
Here we present Joey—a new miniature robot—and show that Joeys can explore
real pipe networks completely on their own, without even needing a camera to
navigate.
Dr. Netta Cohen, Study Final Author and Professor, University of Leeds
Joey is the first robot that is capable of navigating mazes of pipes as narrow as 7.5 cm
on its own. It weighs only 70 g and is small enough to nestle in the palm of the hand.
Pipebots Project
The current effort is part of the “Pipebots” project, a cooperation between the
universities of Sheffield, Bristol, Birmingham, and Leeds, as well as UK utility
corporations and other international academic and industrial partners.
Underground water and sewer networks are some of the least hospitable
environments, not only for humans, but also for robots. Sat Nav is not accessible
underground. And Joeys are tiny, so they have to function with very simple motors,
sensors, and computers that take little space, while the small batteries must be
able to operate for long enough.
To make the robot’s life more challenging, the researchers demonstrated that the robot
can navigate up and down inclined pipes with realistic slopes. They also introduced
sand and gooey gel (really dishwashing liquid) to the pipes to test Joey’s ability to
maneuver through muddy or slippery tubes, with success.
Significantly, the sensors help Joey navigate the pipes without a camera or
power-hungry computer vision. This saves energy and extends the life of Joey’s current
battery. When the battery runs low, Joey returns to its starting point to “feed” on power.
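The published controller is detailed in the team’s Frontiers in Robotics and AI paper; purely as an illustrative sketch (all names here are hypothetical, not from the actual Joey firmware), a camera-free exploration strategy of this kind can be modeled as depth-first traversal of a junction graph, with a breadcrumb trail that lets the robot retrace its path to the charger:

```python
def explore(network, start, battery=20):
    """Depth-first exploration of a pipe network using only junction sensing.

    `network` maps each junction to its neighbouring junctions. The robot
    keeps a breadcrumb trail so it can always backtrack toward its start
    point (e.g. to recharge), and stops once its step budget is spent.
    """
    visited = {start}
    trail = [start]          # breadcrumb trail back to the starting point
    steps = 0
    while trail and steps < battery:
        here = trail[-1]
        unexplored = [n for n in network[here] if n not in visited]
        if unexplored:
            nxt = unexplored[0]
            visited.add(nxt)     # move deeper into the network
            trail.append(nxt)
        else:
            trail.pop()          # dead end: backtrack one junction
        steps += 1
    return visited


# A toy four-junction network: one tee branching off a straight run.
pipes = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"], "D": ["B"]}
explore(pipes, "A")  # visits all four junctions
```

This is only a graph-level abstraction: the real robot also has to detect junctions and manage odometry with its minimal sensors, which is where much of the engineering effort described in the paper lies.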
Joey, however, has one flaw: it cannot right itself if it accidentally flips onto its
back, like an upside-down tortoise. According to the authors, the next prototype will
overcome this issue. In the future, the researchers also hope to make Joey waterproof
so that it can operate underwater in liquid-filled pipelines.
Cohen says, “Ultimately we hope to design a system that can inspect and map the
condition of extensive pipe networks, monitor the pipes over time, and even execute
some maintenance and repair tasks.”
Journal Reference:
Nguyen, T. L., et al. (2022) Autonomous control for miniaturized mobile robots in
unknown pipe networks. Frontiers in Robotics and AI.
doi.org/10.3389/frobt.2022.997415.
Source: https://www.frontiersin.org/