Article
Drawing System with Dobot Magician Manipulator Based on
Image Processing
Pu-Sheng Tsai 1, Ter-Feng Wu 2,*, Jen-Yang Chen 1 and Fu-Hsing Lee 3
Abstract: In this paper, the robot arm Dobot Magician and the Raspberry Pi development platform were used to integrate image processing and robot-arm drawing. For this system, the Python language built into Raspberry Pi was used as the working platform, and the real-time image stream collected by the camera was used to determine the contour pixel coordinates of image objects. We then performed gray-scale processing, image binarization, and edge detection. This paper proposes an edge-point sequential arrangement method, which arranges the edge pixel coordinates of each object in an orderly manner and places them in a set. Orderly arrangement means that the pixels in the set are arranged counterclockwise along the closed curve of the object shape. This arrangement simplifies the complexity of subsequent image processing and the calculation of the drawing path. The number of closed curves represents the number of strokes in the drawing of the manipulator. To reduce the complexity of the drawing, fewer closed curves are preferable. To achieve this goal, we propose not only the 8-NN (eight-nearest-neighbor) search but also the 16-NN and 24-NN search methods. Drawing path points are then converted into drawing coordinates for the Dobot Magician through the Raspberry Pi platform. The structural design of the Dobot reduces the complexity of the experiment, and its attitude and positioning control can be accurately carried out through the built-in API functions or the underlying communication protocol, which makes it more suitable for drawing applications than other fixed-point manipulators. Experimental results show that the 24-NN search method can effectively reduce the number of closed curves and the number of strokes drawn by the manipulator.

Citation: Tsai, P.-S.; Wu, T.-F.; Chen, J.-Y.; Lee, F.-H. Drawing System with Dobot Magician Manipulator Based on Image Processing. Machines 2021, 9, 302. https://doi.org/10.3390/machines9120302

Academic Editors: Mingcong Deng and Marco Ceccarelli
ible sporty performance, omni-directional mobile platforms are widely used in entertainment-oriented robots. In [4], Xu provided a structural analysis and motion-resolving solution for a mobile platform based on four separate Mecanum wheels, building the perspective mathematical model in its kinematics and dynamics terms.
Figure 1. Star Wars BB-8 SPHERO [1].
Figure 4. Performance robot of Waseda University [6].
are applied together with a painting robot to realize artworks with original styles. Further examples of artistic robots are the interactive system for painting artworks by regions presented in [15] and the collaborative painting support system shown in [16].
Figure 6. Portrait drawing by industrial robot Heidi [17].
The Dobot Magician [19] is a desktop four-axis manipulator with an accuracy of 0.2 mm based on an STM32 industrial chip. It uses Python as the working platform and cooperates with the OpenCV image processing development kit and the Python API (application interface) to control the manipulator. The Dobot Magician has been widely used in academic research [20]. For example, Tsao [21] wrote an Android application for smartphones to drive laser engraving [22]. Images are sent from the smartphone to Raspberry Pi, which calls the OpenCV function library to carry out image processing and find contour points. These are used to plan laser engraving path points, which are transmitted to the manipulator. In [23,24], Dobot and Raspberry Pi were applied to integrate image processing and robot-arm drawing. The plug-in camera streams the image, cooperates with the third-party OpenCV function library, carries out image processing, finds the contour points, classifies and calculates the drawing path points, and then transmits them to the manipulator for drawing. A similar set-up has been applied to perform electronic acupuncture and massages [25,26]. Acupoint recognition is achieved through image detection, and the manipulator is controlled to move to the corresponding acupoints. In [27], the Dobot Magician and machine vision were applied to the automatic optical detection of defects on production lines. The researchers in [28] integrated our ATmega328P development board with the Dobot for electromechanical integration of solid line automation. The DGAC design was proposed in [29]; the spirit of gradient-descent training is embedded in a genetic algorithm (GA) to construct a main controller that searches for the optimum control effort under the possible occurrence of uncertainties. Its effectiveness is demonstrated by simulation results, and its advantages are indicated in comparison with other GA control schemes for four-axis manipulators.
In this paper, the drawings of images and various curves are used to verify the feasibility of the Dobot manipulator drawing system. Connected-component labeling (CCL) [30,31] is an algorithmic application of graph theory in which subsets of connected components are uniquely labeled based on a given heuristic; it is used in computer vision to detect connected regions in binary digital images. Thus, in this paper, the CCL method is applied to binary image construction and used to cut an image into multiple single objects. In addition, to reduce the complexity of robot path planning, the edge-point sequential arrangement method is explored to rearrange the image's edge pixels along the drawing path. This method, also investigated in this paper, is an improvement on the original idea of connected-component labeling. A popular expression is the parametric B-spline curve, typically defined by a set of control points and a set of B-spline basis functions, which has several beneficial characteristics: (a) it lies entirely within the convex hull of its control points; (b) it is invariant under a rotation and a translation of the coordinate axes; and (c) it can be locally refined without changing its global shape. The synthesis of the above concepts and B-spline properties leads to a successful methodology for the trajectory planning application. In this paper, the divided B-spline function proposed by Tsai [32] is used to describe the mathematical expressions of various curves.
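The idea of arranging edge pixels into ordered closed curves can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the neighbor-preference order, and the starting-pixel rule are all assumptions made for illustration; the sketch only shows the 8-NN principle of walking from each edge pixel to an unvisited pixel among its eight neighbors until a curve is exhausted.

```python
# Illustrative 8-nearest-neighbor (8-NN) edge-point ordering sketch.
# Axis-aligned neighbors are tried before diagonals so the walk hugs the contour.
NEIGHBORS_8 = [(0, 1), (1, 0), (0, -1), (-1, 0),
               (1, 1), (1, -1), (-1, 1), (-1, -1)]

def order_edge_points(edge_pixels):
    """Group (row, col) edge pixels into ordered point sets, one per curve."""
    remaining = set(edge_pixels)
    curves = []
    while remaining:
        current = min(remaining)        # deterministic starting pixel (assumption)
        remaining.remove(current)
        curve = [current]
        while True:
            # step to the first unvisited 8-neighbor of the current pixel
            nxt = next(((current[0] + dy, current[1] + dx)
                        for dy, dx in NEIGHBORS_8
                        if (current[0] + dy, current[1] + dx) in remaining), None)
            if nxt is None:
                break                   # curve is closed (or a dead end was hit)
            curve.append(nxt)
            remaining.remove(nxt)
            current = nxt
        curves.append(curve)
    return curves

# A 3x3 ring of edge pixels is traced as one ordered closed curve of 8 points.
ring = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1), (2, 2)]
curves = order_edge_points(ring)
```

When a curve dead-ends because no 8-neighbor is left, a new set is started; widening the search window in such cases is the intuition behind the 16-NN and 24-NN extensions discussed in the abstract.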
2. System Architecture
In this paper, we aimed to realize the drawing function of the manipulator through machine vision. In addition to the mechanism of the four-axis manipulator, the Dobot Magician also includes expansion modules (gripper kit, air pump box, conveyor belt, photoelectric switch, and color recognition sensor), an external communication interface, and a control program. It also has 13 I/O extension interfaces and provides Qt, C#, Python, Java, VB, and other APIs for development. Image vision is carried out using a Raspberry Pi 3 and an external high-resolution USB camera. Raspberry Pi 3 adopts the Broadcom BCM2837 chipset, a four-core 64-bit ARMv8-series CPU with a clock frequency of 1.4 GHz. It has a dual-core VideoCore IV 3D drawing core; provides 40 GPIO pins and 4 USB 2.0 ports; and supports 10/100 Ethernet, IEEE 802.11 b/g/n wireless networking, Bluetooth 4.1, and a 15-pin MIPI camera connection. The built-in Python language is used as the working platform. The contour pixel coordinates of the object in the image are transmitted to the Dobot through UART using gray-scale processing, image binarization, edge detection, and edge-point sequential arrangement to depict the edge contour of the object. The overall architecture consists of the following five units, shown in Figure 8: Dobot Magician (including the writing-and-drawing module), Raspberry Pi 3 embedded microprocessor, camera, user interface, and image processing algorithm.

Figure 8. System architecture.
2.1. Specifications of Dobot Magician
As shown in Figure 9, the main body of the Dobot comprises the base, base arm, rear arm, forearm, and end effector. The corresponding four joints are defined as J1, J2, J3, and J4. We take J2 as the Cartesian origin (0, 0, 0), and the position of the end tool is defined as (x, y, z). The power supply and warning lights are located above the base. The length of the forearm of the manipulator is 147 mm, the length of the rear arm is 135 mm, and the length of the base arm is 138 mm, as shown in Figure 10.

Figure 9. Image of Dobot.
The maximum load of the manipulator is 500 g, and the maximum extension distance is 320 mm. The movement angle of the base arm joint J1 is (−90°~+90°), the movement angle of the rear arm joint J2 is (0°~+85°), the movement angle of the forearm joint J3 is (−10°~+90°), and the movement angle of the end tool J4 is (−135°~+135°). The working range of the X-Y axes of the manipulator is within the J1 activity angle and the maximum extension distance, while the activity space of the Z axis is within the J2 and J3 activity angles and the maximum extension distance. In addition, under a load of 250 g, the maximum rotation speed of the front and rear arms as well as the base is 320°/s, while the maximum rotation speed of the end tool is also 320°/s.
2.2. Writing-and-Drawing Module
The writing-and-drawing module includes a pen and a pen holder, as shown in Figure 11. For our purposes, we replaced the pen with a brush. Figure 12 shows the specification parameters of the Dobot with a pen-holder end tool.
2.3. Underlying Communication Protocol
The API provided with the Dobot cannot directly call and control the manipulator on Raspberry Pi. Therefore, we used the Python pySerial module for serial communication to connect Raspberry Pi with the manipulator through the USB interface. The system must follow the communication protocol formulated by the Dobot; the pySerial module then controls the manipulator through serial communication via the USB interface. The communication protocol includes the header, data length, transmitted data, and check code of the data packet. Table 1 shows the relevant communication protocol.
Table 1. Communication protocol of Dobot Magician.

  Header:    0xAA 0xAA (high byte, low byte)
  Length:    1 byte (length of the payload)
  Payload:   ID (1 byte); Ctrl: rw = 0x10 or 0x00, isQueued = 0x01 or 0x00; Params: as directed
  Checksum:  1 byte, calculated over the payload
The physical layer is encoded as IEEE-754 32-bit single-precision floating-point numbers. For example, if we want to control the displacement of the manipulator from a certain point to the coordinate point (x = 160, y = 90, z = −20, r = 0), the data encoding process is carried out as shown in Table 2.
r: the end tool coordinate is used when the steering gear (servo motor) is installed. Although this system does not use it, a value must still be supplied. Figure 13 shows the conversion results verified by a program written in Python; the following data can be used to control the manipulator with the PTP mode command through the write() method of pySerial.
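The framing and encoding steps above can be reproduced with Python's struct module. The sketch below is illustrative only: the command ID and Ctrl byte are hypothetical placeholders, and the two's-complement checksum over the payload is an assumption that should be verified against the Dobot communication protocol document; only the packet layout of Table 1 and the float encoding of Table 2 are taken from the text.

```python
import struct

def build_packet(cmd_id: int, ctrl: int, params: bytes) -> bytes:
    """Frame one Dobot command per Table 1: header, length, payload, checksum."""
    payload = bytes([cmd_id, ctrl]) + params
    checksum = (256 - sum(payload) % 256) % 256  # two's complement of payload sum (assumed)
    return bytes([0xAA, 0xAA, len(payload)]) + payload + bytes([checksum])

# Target point (x = 160, y = 90, z = -20, r = 0) encoded as four
# IEEE-754 32-bit little-endian floats, as in the Table 2 example.
params = struct.pack("<ffff", 160.0, 90.0, -20.0, 0.0)
pkt = build_packet(0x54, 0x03, params)  # hypothetical ID/Ctrl values

# Sending it would look like this (requires pySerial and a connected Dobot):
# import serial
# with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as ser:
#     ser.write(pkt)
```

Decoding `params` back with `struct.unpack("<ffff", params)` recovers (160.0, 90.0, −20.0, 0.0), which is the same round trip the Python verification program of Figure 13 performs.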
Figure 13. Verification of communication instructions.
2.4. Camera Lens
The image sources are files stored in the system or real-time images captured through the external CCD lens. The external lens has an automatic focusing function, as shown in Figure 8. It uses the USB 2.0 transmission interface and is interconnected with the USB interface of Raspberry Pi 3. Captured images are transmitted to the graphics window interface for image processing.
Figure 14. Components of Raspberry Pi.
3. Machine-Vision Image Processing

3.1. Image Preprocessing

The proposed drawing system mainly uses the Python language and imports the OpenCV computer vision function library to read the image source. The system is compiled into a visual window operation interface with the Tkinter graphical user interface function library to simplify the process of operating and controlling the manipulator. Before the robot begins to draw, the input image must be preprocessed so that it can be communicated through the underlying communication protocol of the robot.

As shown in Figure 15, the image source can be a pre-existing file in the system or a real-time image captured by the camera. Color images contain the pixel information of three primary colors: red (R), green (G), and blue (B). Color images must first be converted to grayscale to describe color brightness. Color pixels are converted to a gray-level value of 0-255, where 0 represents black and 255 is white. The YIQ conversion is as follows:

  [Y]   [0.299   0.587   0.114] [R]
  [I] = [0.596  -0.275  -0.321] [G]        (1)
  [Q]   [0.212  -0.528   0.311] [B]

The brightness signal Y = 0.299 × R + 0.587 × G + 0.114 × B. Therefore, Y is the converted grayscale value, and I and Q are color components.
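Only the Y row of Equation (1) is needed for the grayscale step. A small NumPy sketch (illustrative, not the paper's code; the function name is assumed) applies those weights per pixel:

```python
import numpy as np

def rgb_to_gray(img_rgb: np.ndarray) -> np.ndarray:
    """Equation (1), Y row: Y = 0.299*R + 0.587*G + 0.114*B, as uint8 gray levels."""
    weights = np.array([0.299, 0.587, 0.114])
    gray = img_rgb.astype(float) @ weights          # weighted sum over the RGB axis
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)

# Pure white -> 255, pure black -> 0, pure red -> round(0.299 * 255) = 76.
demo = np.array([[[255, 255, 255], [0, 0, 0], [255, 0, 0]]], dtype=np.uint8)
gray_demo = rgb_to_gray(demo)
```

Note that OpenCV loads images in BGR channel order, so an image read with `cv2.imread` would need its channels reversed (or the weights mirrored) before applying these coefficients.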
Figure 15. Flow chart of image processing. (Camera/CamStudio → gray processing → threshold value estimation → binarization → edge detection → edge-point sequential arrangement; example counts: 8-NN sets [Vj]: 187, 8-NN pixels [Pij]: 2758.)
Image binarization uses trial and error or the Otsu algorithm to obtain the image threshold, which distinguishes whether each pixel belongs to black (0) or white (255). This helps to reduce noise. The grayscale values are represented by pixel points f(x, y) combined with the image threshold k, as depicted in Figure 16. A binary operation is then performed to generate a binary image output:

  f(x, y) = 0 if f(x, y) ≤ k; 255 otherwise.        (2)

Therefore, f(x, y) = 0 indicates that the pixel value is black, and f(x, y) = 255 indicates that the pixel value is white.

Figure 16. Image threshold k.
Sobel edge detection is used to find the edges of the object in the binary image. Two 3 × 3 horizontal and vertical matrix convolution kernels (as shown in Figure 17), or Sobel operators, are established in advance. The edges of the image object are then obtained after convolution of each pixel in the image, as shown in Figure 18. The convolution is as follows:

g = sqrt(Gx² + Gy²)
  = sqrt([(z1 + 2z2 + z3) − (z7 + 2z6 + z5)]² + [(z1 + 2z8 + z7) − (z3 + 2z4 + z5)]²)   (3)

Horizontal gradient after application of the horizontal edge filter:

Gx = (z1 + 2z2 + z3) − (z7 + 2z6 + z5)   (4)

Vertical gradient after application of the vertical edge filter:

Gy = (z1 + 2z8 + z7) − (z3 + 2z4 + z5)   (5)
Figure 17. 3 × 3 horizontal and vertical matrix convolution kernels.
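A direct, unoptimized sketch of this step in NumPy, with the two kernels written out from the z1–z8 neighborhood numbering used in Equations (3)–(5) (z1 z2 z3 across the top row, z8 and z4 on the sides, z7 z6 z5 across the bottom):

```python
import numpy as np

KX = np.array([[ 1,  2,  1],
               [ 0,  0,  0],
               [-1, -2, -1]])   # horizontal edge filter: Gx = (z1+2z2+z3) - (z7+2z6+z5)
KY = np.array([[ 1,  0, -1],
               [ 2,  0, -2],
               [ 1,  0, -1]])   # vertical edge filter:   Gy = (z1+2z8+z7) - (z3+2z4+z5)

def sobel_magnitude(img):
    """Slide both kernels over every interior pixel and combine per Equation (3)."""
    h, w = img.shape
    g = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            gx = (win * KX).sum()
            gy = (win * KY).sum()
            g[y, x] = np.hypot(gx, gy)
    return g
```

The explicit double loop mirrors the per-pixel description above; a production version would use a vectorized or library convolution instead.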
Figure 19. The 8-NN, 16-NN, and 24-NN search structures.
This algorithm searches the whole image to obtain the edge coordinate data of all objects in the image. The detailed procedure of the algorithm is described in the following:

Step 1: The operation procedure of the 8-NN search method is illustrated by the closed curve in Figure 20A, where the foreground pixels are denoted as "a", "b", "c" and so on. Start object detection from the first point in the image matrix. If an object is detected, proceed to Step 2. If a scanned pixel is background, no processing is performed. In this step, the jth scan to the foreground pixel is defined as the starting coordinate of the edge point of the jth object, expressed as P1^j, and is recorded to the edge point set Vj. Referring to Figure 20A, the first searched foreground coordinate is represented by the symbol "F".
Figure 20. Schematic diagram of the 8-NN search process (A–C).
Figure 21. Schematic diagram of the 16-NN search process (A–D).
Figure 22. 24-NN search schematic diagram.
Step 6: Repeat Step 3. If the edge of the object is a closed curve, edge detection is complete when the boundary point coordinates P_nj^j equal the starting coordinates P1^j; that is, (x_nj, y_nj) = (x1, y1). If the edge of the object is not a closed curve, scanning is stopped once all 24 of the neighbors are found to be background pixels.

Step 7: Subtle noise exists in most images; therefore, a threshold value Tb for the number of object edge points is defined. If the number of edge pixels owned by an object, nj, is below the threshold, i.e., nj < Tb, then treat object j as noise; that is, delete object j and all of its edge point records. Repeat Step 1 until all pixels in the image are scanned; at that point, the sequential arrangement of the edge points of all objects is complete. In this manner, we can find the number of objects in the image as well as the edge-point sequential coordinates of each object.

Let us consider the example presented in Figure 20B. The algorithm searches for edge-point coordinates in eight directions. When an edge point is detected in direction 3, the coordinates of this pixel are stored in the edge point set of the object. This pixel is then regarded as the reference point of the next 8-NN scan, and the initial pixel "F" is set as background data. When none of the eight nearest neighbors are foreground pixels, the 16-NN search is implemented. An edge point is then detected in the fifth neighbor; see Figure 21C. These coordinates are then stored in the edge point set of the object, and this pixel is regarded as the reference point of the next scan. The previous benchmark, indicated by the orange edge point in Figure 21D, is then considered background. When the 16-NN scan does not detect any foreground pixels, the 24-NN search is used to detect edge points. In Figure 22C, an edge point is detected at the seventh pixel. It is stored in the edge point set of the object, and pixel "e" is regarded as the reference point of the next scan; see Figure 22D.
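Steps 1–7 can be condensed into a short tracing routine. The sketch below is illustrative only: it assumes a boolean foreground mask, clears pixels as they are visited instead of keeping the paper's start/stop bookkeeping, and uses a single radius-2 outer ring as a simplified stand-in for the 16-NN and 24-NN stages numbered in Figure 19:

```python
import numpy as np

# 8-NN offsets (dy, dx) in a clockwise ring; the outer ring extends the
# search radius to 2 cells (simplified stand-in for the 16-NN/24-NN stages).
RING_1 = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
RING_2 = [(dy, dx) for dy in range(-2, 3) for dx in range(-2, 3)
          if max(abs(dy), abs(dx)) == 2]

def trace_edge(fg, start):
    """Collect one object's edge points in order; visited pixels are cleared."""
    h, w = fg.shape
    path, cur = [start], start
    fg[start] = False
    while True:
        nxt = None
        for ring in (RING_1, RING_2):      # try the 8-NN ring first, then wider
            for dy, dx in ring:
                y, x = cur[0] + dy, cur[1] + dx
                if 0 <= y < h and 0 <= x < w and fg[y, x]:
                    nxt = (y, x)
                    break
            if nxt is not None:
                break
        if nxt is None:                    # no neighbor left: stroke is finished
            return path
        fg[nxt] = False
        path.append(nxt)
        cur = nxt
```

Running one trace per starting pixel found by the raster scan of Step 1 yields one ordered point set, i.e., one pen stroke, per object; sets shorter than the noise threshold Tb of Step 7 can then be discarded.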
3.3. Segmented B-Spline Curve

The spline curve has curve characteristics such as local control (Figure 23), convex hull property (Figure 24), partition of unity (Figure 25), and differentiability. A B-spline curve is defined as

S(u) = Σi Bi,k(u) Pi

where the Bi,k(u) are the basis functions of order k and the Pi are the control points.
Figure 24. Schematic diagram of convex hull characteristics where k = 2.

The constructed spline curve becomes an input source of the drawing system, while the curve trajectory is fed back to the Dobot to test the drawing function of the manipulator. Each cubic segment is evaluated as

                                 ⎡ −1/6   1/2  −1/2   1/6 ⎤ ⎡ t³ ⎤
sj(t) = [Pj  Pj+1  Pj+2  Pj+3]   ⎢  1/2   −1     0    2/3 ⎥ ⎢ t² ⎥   (9)
                                 ⎢ −1/2   1/2   1/2   1/6 ⎥ ⎢ t  ⎥
                                 ⎣  1/6    0     0     0  ⎦ ⎣ 1  ⎦
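Equation (9) is straightforward to evaluate directly. The sketch below chains the segments of an open control polygon into a polyline that could be handed to a drawing routine; the function names are ours, not the paper's:

```python
import numpy as np

# Cubic uniform B-spline basis matrix from Equation (9).
M = np.array([[-1/6,  1/2, -1/2, 1/6],
              [ 1/2, -1.0,  0.0, 2/3],
              [-1/2,  1/2,  1/2, 1/6],
              [ 1/6,  0.0,  0.0, 0.0]])

def bspline_segment(p, t):
    """Evaluate s_j(t) for one segment from four control points p (4 x dim)."""
    tv = np.array([t**3, t**2, t, 1.0])
    return p.T @ (M @ tv)          # [P_j .. P_j+3] * M * [t^3 t^2 t 1]^T

def bspline_curve(points, samples=20):
    """Chain all segments of an open control polygon into one polyline."""
    pts = np.asarray(points, dtype=float)
    ts = np.linspace(0.0, 1.0, samples)
    return np.array([bspline_segment(pts[j:j + 4], t)
                     for j in range(len(pts) - 3) for t in ts])
```

Note that the rows of M sum to the partition-of-unity weights (1/6, 2/3, 1/6, 0) at t = 0, which is a quick sanity check on the reconstruction.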
Figure 26. B-spline segment curve.
4. Experimental Results

To verify the feasibility of the proposed system, we performed functional tests with the proposed algorithms for edge-point sequential arrangement and piecewise spline curve representation. Figure 27 shows the module kit and measured environment loaded in the system.
Figure 27. Loading of module kit and measured environment.
4.1. Graphical Window Interface

We imported the Tkinter GUI function library into the Python environment to write the graphical window interface. Image processing includes gray-scale processing, adjusting binarization thresholds, and edge detection. The results of image preprocessing can be seen in the image processing window. The function of robot drawing is operated through the robot control keys and the image processing display area. Based on the user interface written with Tkinter, the environment comprises the following five working blocks, shown in Figure 28: (1) original image display area, (2) preprocessing display area, (3) function setting area, (4) edge-point sequence arrangement, and (5) robot moving coordinates.
Figure 28. Graphical window interface.
In addition, as shown in Figure 29, the coordinate system of the image is usually different from the coordinate system defined by the manipulator. It is assumed that for an image of size Cx × Cy, the coordinate axes are (gx, gy). The drawing range of the manipulator is limited to (Δpx, Δpy) = (125 mm, 230 mm), which means its coordinate axes are (px, py). We define the starting point of the manipulator as X. The X-axis coordinate is then offsetX, the starting Y-axis coordinate is offsetY, and the image coordinates are converted as follows:

px = (Δpx / Cy) · gy + 180 + offsetX
py = (Δpy / Cx) · gx − 115 + offsetY   (10)
Figure 29. Coordinate conversion between image and manipulator.
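Equation (10) reduces to a small helper function. The 180 mm and −115 mm shifts are taken from the equation itself; offsetX and offsetY are left as parameters:

```python
# Image-to-manipulator coordinate conversion from Equation (10).
DPX, DPY = 125.0, 230.0            # drawing range of the manipulator (mm)

def image_to_robot(gx, gy, cx, cy, offset_x=0.0, offset_y=0.0):
    """Map image pixel (gx, gy) of a cx-by-cy image to robot coordinates (px, py)."""
    px = DPX / cy * gy + 180.0 + offset_x    # 180 mm shifts into the reachable area
    py = DPY / cx * gx - 115.0 + offset_y
    return px, py
```

The image origin (0, 0) thus lands at (180, −115) mm in robot coordinates, and the opposite image corner at roughly (305, 115) mm, spanning the 125 × 230 mm drawing range.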
4.2. Mechanical Drawing of Edge-Point Sequential Arrangement

After the image source is obtained, the number of closed sections in the image and the number of pixels in each closed section are obtained through image preprocessing and edge-point sequential arrangement. As shown in Figure 30, the image of Pikachu features 187 edge point sets obtained through an 8-NN search. Therefore, the robot arm must make 187 strokes to complete the image of Pikachu. For a 24-NN search, 56 edge point sets are obtained. For all k, 2758 pixels are searched. Once the system determines the closed curves and edge point coordinates, it encodes them into batch data according to the communication protocol formulated by the Dobot Magician. The pySerial module carries out serial communication through the USB interface to control the Dobot Magician. Figure 31 depicts the process of drawing Pikachu.
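Once the ordered edge points are converted to robot coordinates, they are streamed over USB with pySerial. The framing below follows the Dobot communication protocol document (two 0xAA header bytes, a payload length, command ID, control byte, and a checksum equal to the two's complement of the payload sum), with SetPTPCmd assumed to be command ID 84; treat both details as assumptions to verify against the protocol document for your firmware:

```python
import struct

def dobot_packet(cmd_id, params=b"", rw=1, queued=1):
    """Frame one Dobot command: AA AA | len | id | ctrl | params | checksum."""
    payload = bytes([cmd_id, (rw & 1) | ((queued & 1) << 1)]) + params
    checksum = (256 - sum(payload)) % 256   # payload bytes + checksum == 0 mod 256
    return b"\xAA\xAA" + bytes([len(payload)]) + payload + bytes([checksum])

def move_to(port, x, y, z, r=0.0):
    """Queue a PTP move (assumed SetPTPCmd, ID 84): mode byte + four float32 values."""
    params = struct.pack("<B4f", 1, x, y, z, r)   # mode 1: MOVJ in Cartesian space
    port.write(dobot_packet(84, params))

# usage sketch with pySerial (port name and pen height are assumptions):
# import serial
# with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
#     for px, py in stroke:
#         move_to(port, px, py, -40.0)
```

`move_to` only requires an object with a `write` method, so the framing can be exercised without hardware attached.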
Figure 30. Results of drawing Pikachu.
Figure 31. Drawing process of manipulator.
4.3. Drawing of B-Spline Curve Manipulator

According to Equation (9), the following two groups of control point coordinates are input: {(−10, 10), (−5.135, 12.194), (−1.1486, 16.7401), (0, 20), (2.1943, 15.1351), (6.7491, 11.1486), (10, 10), (5.135, 7.8057), (1.1486, 3.2599), (0, 0), (−2.1943, 4.8650), (−6.7401, 8.8514), (−10, 10)} and {(−10, 0), (−5, 5), (0, 0), (5, −5), (10, 0), (5, 5), (0, 0), (−5, −5), (−10, 0)}. Python then generates a group of four-pointed cycloids and a group of eight-shaped spline curves. Figures 32–34 show the results of the two groups of curves drawn by the manipulator.
Figure 32. Drawing results of (A) quadrupole cycloid and (B) 8-shaped B-spline curve.
Figure 33. Drawing process of quadrupole cycloid manipulator.
simulation, verification, and comparison. The image processing of machine vision is closely
related to the drawing effect of the manipulator. The effect of binarization can be adjusted
using the threshold; however, edge detection is achieved by convoluting the horizontal and
vertical edge filters. This effect depends entirely on the 3 × 3 arithmetic core. Directions
for future research include improving the Sobel edge detection algorithm and using the
proposed system with other end tools, such as laser engraving or 3D printing.
Author Contributions: P.-S.T. put forward the original idea of this paper and was responsible for the planning and implementation of the whole research plan. T.-F.W. was responsible for thesis writing and text editing. J.-Y.C. derived the image processing algorithms in the manipulator drawing system and implemented them in Python. F.-H.L. established the hardware architecture and carried out functional verification of the manipulator drawing system. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Ministry of Science and Technology (MOST) (https://www.most.gov.tw, accessed on 20 October 2021), grant number MOST 110-2221-E-197-025.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: The authors would like to thank the Ministry of Science and Technology (MOST) of the Republic of China, Taiwan (https://www.most.gov.tw, accessed on 20 October 2021), for financially supporting this research under Contract No. MOST 110-2221-E-197-025.
Conflicts of Interest: The authors declare no conflict of interest.