
【Live2D Explanation】Expressive eyebrow modeling method using glue

【Live2D Explanation】How to Transform Contours without Failure (X-axis)

【Progress of Live2D model update #5 and explanation】I'm swamped, but I managed to get my hair done.

【Live2D Explanation】How to deform contours without any breakdowns (Y-axis)

【Live2D explanation】How to create realistic luster, thickness and reflection of glasses

【Live2D/VTubeStudio Explanation】How Hand Tracking Works and the Camera

【Live2D/VTubeStudio Explanation】About movements and parameters detected by hand tracking

【Distribution of cmo3 files and Live2D explanations】How to create a hand-tracking model ① Torso

【Live2D Explanation】How to Create a Model for Hand Tracking ② Upper Arm


【Live2D Explanation】Expressive eyebrow
modeling method using glue

Well, it's April! Sorry, I have a bit of a shoulder injury and haven't been able to post consistently. I haven't explained the details yet, so I think I'll explain how to make eyebrows this time.

https://youtu.be/rwbtQDLdG7o

Like this: with tracking alone, it is possible to produce not only up-and-down and angular movements but also left-right differences. This clip uses nizimaLIVE, but the same can be done in VTubeStudio.

These are the parameters. I am using six in total.

The "○" marks are parameters that are moved by tracking, the "★" marks are
parameters for physics calculations, and. The mark ◇ is the parameter that is moved at the
time of difference.

○ Eyebrows Up/Down

https://youtu.be/ah0zMGbYuHg

It is a simple up-and-down eyebrow movement. I use a deformer to move the eyebrows. Many people use separate deformers for the left and right eyebrows, but I move them together, because the movement-inversion function can't be used if they are separated.

○ Eyebrow angle → ○ Eyebrow angle ←

This parameter is for tracking only. No deformation is added.


VTS used not to support separate left/right eyebrow tracking, but it has supported it for some time now. nizimaLIVE has had separate left/right eyebrow tracking from the beginning. Even so, it is harder to move the eyebrows asymmetrically than it was in FaceRig. (FaceRig's webcam tracking was inaccurate and the movement blurred, so it merely looked that way...)

To begin with, Japanese people have a narrower range of eyebrow motion than Westerners, so cartoon-like eyebrow movements of that sort can't really be achieved.

Therefore, the asymmetrical movement shown above is driven by the movement of the corners of the mouth. How to do this is described below.

★ Eyebrow angle → ★ Eyebrow angle ←

https://youtu.be/PDgb3ioe7zc

These are parameters for physics calculations, linked to the aforementioned "○ Eyebrow angle →" and "○ Eyebrow angle ←". The deformation is added to these parameters.

First, deform the art mesh directly, as usual. Then deform the eyelashes with deformers to match the deformation of the eyebrows.

https://youtu.be/9Zsmz63fSII

I think there are many ways to deform the eyebrows, but I make it so that a negative value produces an "inverted ハ shape" and a positive value produces a "ハ shape." (ハ is a katakana character whose two strokes slant outward and downward.)

It is very easy to use, because tracking alone can produce "angry eyebrows" and "troubled eyebrows."

Live2D's default parameters include "eyebrow deformation" and "eyebrow left/right," but I use only up/down and angle for tracking. Current applications can only detect vertical eyebrow movement, so such detailed parameters can't be fully utilized. (I do use "eyebrow deformation" as a key-press parameter, though!)

Now, back to the story. If we create the "inverted ハ shape" and "ハ shape" separately for the left and right sides in the art mesh, we get this.

https://youtu.be/nl9RaOv9iqk

Back in FaceRig's heyday, many of you had the eyebrow link turned off and got this kind of movement!
It's not bad, but frankly, I think the join between the eyebrows looks unnatural when they are asymmetrical. This time, I came up with a new method that simplifies the parameter structure while eliminating this unnaturalness.

But first, let me explain "◇ Eyebrow deformation."

◇ Eyebrow deformation

https://youtu.be/7ocXAoMb7Uw

This is an emotion-expression parameter for key-press differences. With it, the default eyebrow shape can be changed to suit various situations, such as a smug or relaxed face.

In addition, the "★ Eyebrow angle" movement is created within each of these states, so the eyebrows still move with tracking even while an eyebrow difference is turned on.

The deformation is now largely complete, but the unnatural join between the eyebrows has not yet been resolved. In the past, I moved the left and right eyebrows with the same parameter because I disliked this unnaturalness. The following tweet is the result of my trial and error to find a better way to achieve the "bad face" of asymmetrical eyebrows.

https://twitter.com/himono_vtuber/status/1331378657202368512?s=20&t=9Na-l541lDE2JQu
Ll6eT4w

It freaks me out that this video is already two years old, but as you can see, I can create a "natural bad face" by linking the eyebrow heads left and right like this.

However, this time I am trying to handle even more detailed emotional expression of the eyebrows with key presses, so the above method is a bit problematic.

If you want to link the left and right eyebrows, you would add keyform points to both "★ Eyebrow angle →" and "★ Eyebrow angle ←," but if you also link them to the key-press parameter "◇ Eyebrow deformation," you end up with three types of linked parameters.

Even nine keyform combinations is nothing if you put in the work, but this time the aim is also to "keep the parameter linking as simple as possible, even if the movement is terribly complicated," so I want to keep it to two or fewer types if possible.

So this time we will use glue.


First, copy and paste the regular eyebrow art mesh to create one for glue.

Then reassign the copy to the opposite eyebrow's parameter: if it was on →, move it to ←; if it was on ←, move it to →.

https://youtube.com/shorts/aOqKauotNKc?feature=share

Then I set the opacity of the glue art mesh to 0% using multi-key editing.

In this case, the eyebrow art mesh uses multiply blending, so if the two art meshes overlap, the color turns black. Even with normal blending, the outline would look slightly thickened, so it is better to make the glue art mesh transparent.
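A quick numeric aside (my own note, not from the original): multiply blending multiplies the normalized channel values, so stacking the same tone always darkens it, which is why the overlapped copy must be hidden.

```python
# Multiply blending: result = a * b on 0..1 channels. Overlapping two
# copies of the same half-tone darkens it, and dark line art stacked
# on itself goes nearly black.
a = b = 0.5
print(a * b)      # 0.25 -> darker than either source
print(0.2 * 0.2)  # ~0.04 -> two overlapped dark lines turn near-black
```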

Finally, glue the normal art mesh and the glue art mesh together, and adjust the weights like this.
Green is "normal-dominant" and red is "glue-dominant." (If the colors come out reversed when you make the copy, reverse the glue weights as well.)

Thus, by weighting the eyebrow tail 100% green to give full dominance to the normal eyebrow, and adding a little red weight to the eyebrow head (not 100%, just enough to turn it yellow), the eyebrow head can be made to move in tandem when the opposite eyebrow moves.

https://youtu.be/ctSUKyM3Kq4

Like this! It stays symmetrical when both eyebrows make the same movement. The bad face looks great~!

The deformation is now complete. Now it's time to set up the physics to make this
thing work.

The two physics groups are "← eyebrows" and "→ eyebrows".

Four types of input parameters are entered.


○ Eyebrow angle

https://youtu.be/ZaWpUJ-Ku14

It is simply the movement of the eyebrows; this links the tracking parameters to the physics parameters. If you've been following my know-how articles, you're already familiar with this technique. Set the corresponding input for each of the left and right eyebrow groups.
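As a mental model of this "○ input drives ★ output" link (my own sketch, not Live2D's actual physics solver; the numbers are illustrative), the physics parameter behaves like a damped spring chasing the tracked value, which is what gives the movement its softness:

```python
# Sketch: a tracked input (○) driving a physics output (★) with a soft lag.
def step(value, velocity, target, dt, stiffness=60.0, damping=12.0):
    """Advance one frame of a damped spring following the target."""
    accel = stiffness * (target - value) - damping * velocity
    velocity += accel * dt
    value += velocity * dt
    return value, velocity

value, velocity = 0.0, 0.0
for frame in range(10):
    tracked = 1.0  # "○ Eyebrow angle →" jumps straight to its maximum
    value, velocity = step(value, velocity, tracked, dt=1 / 60)
    print(f"frame {frame}: ★ output = {value:.3f}")  # eases toward 1.0
```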

○Eyes Open/Close

https://youtu.be/3KxHDBNiZPI

Enter it as a "Position X" input so that the eyebrows move slightly when blinking. (Check the Invert box.)

The "Angle" parameter moves more beautifully, but the default value of the eye
parameter is not in the middle, so the default eyebrows are deformed by the tilted position
when the "Angle" parameter is used. (If we create a parameter for normalization of input, I
can input angle, but since it moves beautifully even with position X, I feel that we do not
need to go that far this time.)

This is also entered separately for the left and right sides.

★ Eyeball Y

https://youtu.be/xZWsxYPRxU8

The eyebrows move in accordance with up-and-down eye movements. By feeding in "★ Eyeball Y" instead of "○ Eyeball Y," the eyebrows also respond to the slight eye movements produced when the face angle moves up and down.

★ Mouth X

https://youtu.be/-sqD6N9OkLQ

Enter this input only for the "← eyebrows" group. That way, only the ← eyebrow moves with the mouth X movement, producing an asymmetrical bad face.

Which eyebrow to move is up to you, but the eyebrow on the side with the raised mouth corner should be the one lowered. If you move your own face asymmetrically the same way, you will see that your eyebrows naturally move like that. Perhaps it comes from tensing the facial muscles on one side.
Incidentally, "★ Mouth X" is marked as a physics parameter because nizimaLIVE tracks the left and right mouth corners separately, so physics is needed to force mouth X to move.

https://youtu.be/rD_bkIWf2pg

When the left and right mouth corners move asymmetrically, "Mouth X" moves, but when they move symmetrically, it does not... that sort of behavior.

Therefore, when using VTS, it is fine to drop this physics entirely and simply drive "Mouth X" with "Mouth X."

...That's all for the commentary!

I can't wait to make the rest of the model, the hair and so on, but I really need to focus on treatment for a while, because my shoulder hurts so much that it interferes with my daily work.... (I wrote this article after a good night's rest, but my shoulder is still trembling...)

I will share any progress I make on Patreon, so please be patient with me.

Thanks for reading.


【Live2D Explanation】 How to Transform Contours
without Failure (X-axis)

So this time, I would like to explain contour deformation, as a way of saying thank you.

Speaking of contours: a year ago I wrote the following article on NOTE, at the official request of Live2D, Inc.

https://note.com/himono_vtuber/n/na1eb00fbc3b4

I will be honest. The content of this NOTE is out of date.

The above NOTE does not contain mistakes, but many parts are poorly explained or describe methods I no longer use.

The current contour deformation looks like this.

https://youtu.be/OwePpR2Nlnw

It looks like a crazy way to move things around, but there are definite rules behind it.

In this article, I will explain clean contour deformation, comparing it against the older technique described in the NOTE article.

◆Partitioning of contours
I used to explain that the line drawing and the base are separate parts.

That in itself has not changed.

However, the scope of contour parts has changed significantly.


As shown here, the contour part used to be made to cover the entire skull, but now it is made only up to the hairline.

The reason: in reality, hair grows outside the hairline and no skin color is ever visible there, so there is no point in building it.

Moreover, once the skull portion is made, you have to consider the three-dimensionality of the ears when the head turns sideways.
When the head turns at an angle like this, the ears come in front of the contour, so you have to think about their wrap-around.

If the part's range stops at the hairline, no such drawing-order swap occurs, which makes it easier to create.

By the way, this idea of "ending the contour part at the hairline" is not mine; it comes from Oruko Amase.

https://twitter.com/Amase_Oruko/status/1491727461100363780?s=20&t=OFlOhex6rdrQC6E
EjAwk8w

Incorporating ideas I think are good is the HIMONO way!

I hope you will all adopt my ideas and make them your own, too.
◆Contour mesh

This is the way it used to be divided.

Old

Now
Now it looks like this. The interior points have been reduced a little, and a row of points has been added just inside the boundary.

The reason can be seen in the line drawing part.


By adding that one extra row of points, the line drawing can be neatly enclosed, as shown here.

Even if the line drawing gets distorted when a large range of motion is added, this row allows it to be fine-tuned manually.

(Although so far I've been able to deform it nicely with only a deformer!)

◆Contour deformation (angle X)


I used to give this explanation, but this is very old information now....

This deformation is missing something important.

It is the "gills."

*In Japanese, the part shown in the image below is described as "gills." ↓


This part does not protrude nearly enough.

One more thing.


The brow area does not protrude enough, either. The previous deformations weren't a huge mistake, but if we really want to achieve deformation without breakdowns, we have to be aware of the "bones."
If you touch the area circled in red, and the same area on your own face, you will find that the bone protrudes and feels hard.
These "protruding bone" areas must be deformed with an awareness that they will
appear to protrude when you turn your head to the side.
Furthermore, although it is not actually visible, it is recommended to be aware of the
three-dimensionality of the hairline here, as it will increase your understanding of the
three-dimensionality at once.
Furthermore, there is one more major change.

In the past, I used to build the deformer so that the horizontal lines stayed straight.

The idea was, "The X-axis is horizontal movement, so nothing should move vertically." Made this way, the chin, cheeks, and forehead stay at the same heights as when facing forward.

At first glance, this movement seems to be correct. Of course, this is not wrong.

In fact, in a three-view drawing, the heights of the crown, chin, and nose are often aligned like this.

However, this is only correct in "2D" thinking.


Now I build it so that the front side goes down and the back side goes up.

Why build it this way? You can see by watching the movement of a 3D model.
I borrowed Nikoni Rittai-chan as a sample.

Let's superimpose the "front-facing" and "diagonal" images.


It's hard to understand with just this, so I'll explain with an image with guide lines.

I drew guide lines on the chin, mouth, nose, and eyebrows.

Can you see it? Viewed at an angle, the features rise slightly on the side the face is turned toward.

Body movements are much easier to understand.


The arm in front appears to move downward, while the arm behind appears to move upward.

The reason it looks this way is, roughly speaking, that in 3D space perspective also applies along the horizontal axis.
The left is structurally correct, but to the actual human eye it looks like the right.
I posted this self-made diagram in a previous NOTE, and in fact the answer can already be found there.
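To make that concrete, here is a small numerical sketch of my own (not from the article): with a pinhole projection y' = f * y / z, two points at the same world height below eye level land at different screen heights once one of them is farther away, which is exactly the "front goes down, back goes up" effect.

```python
# Pinhole projection: y' = f * y / z. Two chin-height points at the same
# world height, one on the near cheek and one on the far cheek of a
# turned head.
f = 1.0                    # focal length (arbitrary units)
y = -0.3                   # height below the camera's eye line
near_z, far_z = 2.0, 2.4   # near cheek vs. far cheek distances

y_near = f * y / near_z    # -0.150 -> drawn lower on screen
y_far = f * y / far_z      # -0.125 -> drawn closer to the eye line

print(y_near, y_far)
# Under parallel projection both would stay at the same height (-0.3).
```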

So now, as shown here, when deforming along the X-axis, I tilt the deformer diagonally so that the "front is down and the back is up," and make the bony areas protrude as I deform.

https://youtu.be/ZLJY5iV5fmE

When you move it, it looks like this.

https://youtu.be/UVSYszn687g

By the way, the outline alone looks like this. It is worth checking it on its own like this to confirm the three-dimensional effect.

That is all for this commentary.


As a side note, I said earlier that "3D also applies perspective to the horizontal axis." Actually, this is half right and half wrong. To be precise: "When perspective projection is used, perspective is also applied to the horizontal axis."

There are actually two types of 3D projection: parallel projection and perspective projection.

You can think of the left side of the earlier figure as "parallel projection" and the right side as "perspective projection."

The reason the left-hand projection method exists, when the actual view is the right-hand one, is that the left is easier to create. You build with parallel projection, using three-view drawings and the like as guides. (Naturally: modeling directly in perspective is very hard.)

Live2D models are also commonly created using the parallel projection concept,
right?

This is where I introduced the concept of perspective projection into the creation of this model.

...I honestly didn't expect this progress video, which is just outlines and expressions,
to get such a good response. (currently 50,000 likes)

https://twitter.com/himono_vtuber/status/1516400884787208194?s=20&t=SB69OqKpLfmKy
3w45hjfxg
Of course, this is a work I am very confident in and poured all my skills into, but it has no flashy gimmicks or huge range of motion, so I wondered why the tweet was so well received.

I wonder if it is because I brought the concept of perspective projection into the Live2D industry, which until now had been based only on the idea of 2D parallel projection.

It looks like an illustration, but it moves in the same way as in the real or 3D world...
Perhaps that gap creates some kind of attraction.

Lastly, this perspective-based Live2D production method is seriously unsuitable for beginners.

It is simply too difficult, and unless it is attempted by someone with some understanding of perspective projection, it can instill incorrect assumptions.

Even in parallel projection, I think you can make a model move quite nicely just by being aware of the cheekbones and "gills" I explained at the beginning, so I recommend sticking with parallel projection if you are not yet familiar with Live2D!

That's all for this time! The explanation of the X-axis alone has become very long, so
I will write more about the Y-axis and diagonals in the near future. See you soon!
【Progress of Live2D model update #5 and
explanation】 I'm swamped, but I managed to get
my hair done.

https://youtu.be/_bAKL_JevdY

Anyway, I had a really hard time modeling the hair.

I was struggling with hair modeling!

(*Note for overseas readers: the title is a pun on a Japanese idiom about sinking into a swamp, meaning to be stuck struggling.)

To begin with, my hairstyle has a difficulty-level-S three-dimensional structure called an "air intake" (*the tentacle-like tufts on top of the head characteristic of Cardcaptor Sakura and Daiwa Scarlet), so don't dismiss it as a simple bob; it is surprisingly difficult to model.

In addition, the shadows and highlights are painted well, and there are inner colors,
so it was a lot of work…
Side hair: 14 art meshes

Bangs: 18 art meshes

Back hair: 64 art meshes

96 art meshes in total.

No, no, no, that's too many!

I was trying to keep the structure as simple as possible to get a beautiful deformation, but while accounting for the complexity of the shadows, the amount of information, and the wrap-around, I ended up with this many art meshes before I knew it.

At any rate, now that the work on the hair is complete, I would like to briefly go over what I did.

◆ Separating line drawings and paint


Yellow = Stray hairs

Green = Paint

Red = Mask

Light blue = Line drawing

It's a common technique to put line art and paint on separate art meshes.

As you can see from this breakdown chart, I place the line art "all together under the paint."
The line drawings are drawn to surround each paint art mesh, but they are grouped together and placed at the bottom of the…

https://youtu.be/0qAyOxl5GlI

This way, the line art becomes invisible wherever the paint art meshes overlap each other.

If the line art overlapped or got cut off the wrong way, it would feel off; but with this method the line art shows through in just the right balance, and no clipping is needed, so it is very easy to build.

◆Inner color to express the front and back of the hair

This is the first point (1) I struggled with.


I mentioned earlier that "the inner color is difficult to create," but precisely because it is difficult, it is also an element that, done well, makes the three-dimensionality and depth much easier to read.

https://youtu.be/iwUI9QrOv7Q

To make it easier to see, I moved the model left and right with the sideburns removed. Can you see the inner color of the side part of the back hair appearing and hiding?
Actually, I just copy-and-paste the back-hair art mesh, clip it, and slide it around; but it makes the front and back of the hair read clearly, so the "depth of the hair" looks much better.

I used a similar technique for the inner color of President kson's model.
I use this processing not only on the horizontal axis but also on the vertical axis.

https://youtu.be/vLawp0Nhnxc

This may be a little confusing…

When the model looks up, the inner color appears at the tips on the back side of the hair.

https://youtu.be/e18RntjwcMk
The structure is very simple: I just make these additional parts and slide them with clipping.

Why did I add this processing?

A bob cut has rounded ends, so the hair tips, barely visible from the front, become visible when the head faces up... that is the principle behind it.

Conversely, the front-side hair color appears in the areas where the inner color was visible at the back of the hair.
In reality it is just clipped, slid art meshes, so it is a structural lie; but it simply increases the amount of information, and the color changes give the brain a pleasant illusion of three-dimensionality.

At first, I tried to make the hair without this inner-color treatment, but the top and bottom did not look three-dimensional and the hair looked flat, and I really struggled for a good solution. I think it was this "illusion that the hair tips turn inside out" that finally brought it up to a satisfying level of quality....

◆ Patiently adding information to the back of the hair

This is the second point (2) I struggled with.

I assure you: no matter what the original illustration is, the hardest part of the hair to model is the back of the hair.

From the front you can only see this much, but from the side you can see this much. It takes a lot of work, almost at the level of a new drawing.
If only the art meshes visible from the front were displayed, it would look like this. ↓
The visible parts were divided into smaller pieces and shifted and stretched as far as they would go, but that was nowhere near enough, so the entire back of the head was drawn in as an addition.
Shadows and highlights are separated by hair tuft and deformed by hand, with patience....

In the end, patience is the key....


The hairline was also added later.

It may be the privilege of those who handle both the original illustration and the Live2D rigging themselves to be allowed such reckless additions....

That's all for now. I omitted the physics calculations, as I did not do anything unusual with them. If there are many requests, I will explain them in another article.

https://youtu.be/9Cce6dX55t4

I was so swamped that the model's appearance caused a gestalt collapse for me, and when I finished I couldn't judge objectively whether it was done well.

But now that I've made a video of it, I think I've achieved a satisfactory level of
kawaii. I'm glad…
By the way, the video at the beginning and the one immediately above were shot using nizimaLIVE, but I decided it would be better to run the model in VTS (VTubeStudio) in the end, so I also shot a video using VTubeStudio.

https://youtu.be/pZCV_5hniPk

I like nizimaLIVE's default background so much, because it matches my eye color, that I brought it over to VTS too lol

The physics looks sluggish because it's set to 60 FPS!

When I'm actually building, I calculate at 60 FPS, but for some reason nizimaLIVE's 60 FPS mode makes the physics buggy, so I shoot at 30 FPS in nizimaLIVE. Personally, I don't need a high FPS for my VTuber models, so 30 FPS is fine, but I hope the bug is fixed soon... (I haven't seen any reports on the official Discord server; maybe people just haven't noticed? I'll report it when I get around to it.)

Since nizimaLIVE tracks the left and right mouth corners separately, its mouth movements are softer and better. However, an extension called VBridger was recently released for VTS, so with it you may not have to stick to nizimaLIVE. Now that the face is almost done, I think I'll use VTS as my main tool for adjustments from now on!

Next, I will probably make glasses or an upper body. I think I will be able to publish
the progress of those in June or later.

I will also continue to update the explanation of the modeling of the contours.

I will also post a video recording of the model-making process on my YouTube membership page. Sorry for the rushed update at the end of the month! See you soon!
【Live2D Explanation】How to deform contours without any breakdowns (Y-axis)

This time, I will explain the Y-axis (up/down) movement.

https://note.com/himono_vtuber/n/na1eb00fbc3b4

I explained this in my last article, comparing it to this NOTE article I wrote a year ago.

I would like to take the same approach this time.

Before I do that, I will first illustrate the concept of vertical movement.

From building and studying many models, I have learned that there are two main concepts of Y-axis movement.

The two types are designated A and B for convenience, and the diagram shows each movement facing front and the same movement viewed from the side. Can you tell the difference between the two?
I think you can immediately see that B appears to have a wider range of motion than A. What I want you to pay attention to are the "contour shape" and the "face direction."

I colored in the areas I wanted you to pay attention to.

First, "green" is the shape of the contour; you can see that the contour of A does not
change at all when it moves up and down. In contrast, the jawline of B becomes shorter
when it moves up and down.

Next, let's look at the guidelines drawn to make it easier to see the "purple" neck
movement, the "light blue" face orientation, and the "orange" face front and back.

In A, the face itself is almost unchanged; you can see the face moving "back and forth," with the neck moving back and forth to follow.

In B, by contrast, the face turns firmly up and down while the neck hardly moves at all, only the skin at its front stretching and contracting.

So, to put it simply, A is "the movement of the face back and forth" and B is "the
movement of the face up and down".
As you will see when you actually do it, both A and B movements can actually be
reproduced. So, neither movement is wrong. There are advantages and disadvantages on
both sides, so you can choose whichever you like.

The advantage of the A movement is that, personally, I find the "upward glance when looking down" extremely cute; if you try this with the B movement, the wide range of motion makes the eyes look up too far and the face turn "squinty."

With A, the face is not angled too much, so it gives just the right amount of cute
upper eye contact.

The disadvantage is that there is a limit to the range of movement. If you move too
much, you will end up moving like a chicken.

The advantage of B is that it allows for a larger range of motion. Whether it is actually
used or not is another matter, but a real human being can move about 50° up and 90° down
(*①), so if you are serious about pursuing a full range of motion, it is not unnatural to move
the body up to that level.

(*①) I just checked by actually moving my neck myself, but my neck has a dead
range of motion, so a healthy person might be able to move it more.

The disadvantages are that it is harder to make than A (if you don't know the structure of the jaw, it will look ugly), and the range of motion is so wide that the movement can look a bit floppy. The latter, however, can be limited in the tracking application, so you don't have to take it too seriously.

I like a wide range of motion, so I currently use the B movement, and it is B that I will explain from here.

I'll spare you the explanation of A. A has a smaller range of motion to begin with, so I
think it is less likely to cause a breakdown in movement.

Now that we understand how A and B work, let's look at what the previous NOTE article explained.

ーーーーーーーーーーーーーーーー

The following is a quote from a previous NOTE article I wrote


Deformers are shifted up and down without much deformation.

(For angle Y, do not use the motion reversal function, but create the vertical motion
manually.)

The angle Y movement is also fine-tuned each time, as it varies greatly from
character to character. Yulia's model is deformed quite significantly.

So much for the quote.

ーーーーーーーーーーーーーーーー

The explanation went something like this.


I didn't intend it at the time, but the way these two models (Aoba-kun and Yulia-chan) move corresponds exactly to A and B (Aoba-kun is A, Yulia-chan is B).

Since we are explaining B, let's focus on Yulia's deformation. As usual, this deformation is not necessarily a mistake, but it is old-fashioned. (*By the way, Yulia received a commissioned model update afterwards and has been corrected to the latest technique, so don't worry!)

What's old is the "constant deformation width."

Transforming into a "∩" shape when looking up, and into a "U" shape when looking down, is not in itself a mistake.

However, if you deform the whole contour equally, it will look like a mask, like this. People often say that a Live2D model's movement feels off, that the face looks like a mask, and this is probably the cause.

The actual human outline (= skull) does not look like this, right?

It is difficult to capture the shape in full detail, so I simplified it drastically.

It is easiest to think of it as "a jaw attached to a sphere" like this.

What happens when you move it up and down…


It looks like this. You can see that the shape of the sphere (red line) hardly changes; only the jaw (blue line) changes.

If we pick out only the contour lines…


It looks like this. So only the chin shortens when the face turns up or down.
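As a rough numeric illustration of this rule (my own sketch; the radius and jaw length are made-up values): treat the skull as a sphere of radius r with the chin hanging a distance d below its center. Pitching the head leaves the sphere's silhouette unchanged, while the chin's vertical drop shrinks roughly with the cosine of the pitch:

```python
import math

r = 1.0  # skull sphere radius
d = 1.4  # chin distance below the sphere's center

for pitch_deg in (0, 15, 30):
    theta = math.radians(pitch_deg)
    chin_drop = d * math.cos(theta)  # vertical drop of the chin
    jaw_visible = chin_drop - r      # jaw length visible below the sphere
    print(f"pitch {pitch_deg:2d} deg: visible jaw = {jaw_visible:.2f}")
# pitch  0 deg: visible jaw = 0.40
# pitch 15 deg: visible jaw = 0.35
# pitch 30 deg: visible jaw = 0.21
```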

When I described the movement of B at the beginning of this article, I said, "The jawline becomes shorter when it moves up and down." This is the principle at work.

So I now deform the contour according to this rule.


https://youtu.be/Hw31F_-SsRI

When moving up and down, shorten only the chin. At this point, note that only the red line should be shortened; avoid changing the length of the blue line as much as possible.

If the contour line covered the entire skull, that upper part would hardly need deforming; but as mentioned in the previous article, this time the contour ends at the hairline, so the upper part of the contour is also deformed to give it three-dimensionality. In principle, this is what I mean. (Depth is needed near the hairline on the red line.)

...So that's all for the explanation of Y-axis contour deformation.


This installment may have been hard to read because of all the structural explanation. Sorry for being so difficult!

However, I believe that simply understanding this jaw structure will transform the quality of your up-and-down movements, and I urge you to incorporate it into your work.

In fact, before and after I understood this principle, the quality of my models changed
drastically.

In the next article, I would like to explain the most difficult part of contour deformation,
"diagonal deformation"!

See you soon!


【Live2D explanation】How to create realistic luster,
thickness and reflection of glasses.

It has been a very busy month in both my work and personal life... (and it's not quite over yet)...

Sorry for the delay, but I would like to continue with my previous explanation of
modeling glasses.

If you haven't read it yet, please click here first 👇.


https://www.patreon.com/posts/67368566

Last time I explained how to make lens refraction.

This time, as the title suggests, we will explain how to make "luster," "thickness," and
"reflection" of glasses.

【How to make glasses glossy】

I think there are many ways to do this, but I build it as follows.

◆ A solid-color art mesh as the foundation

◆ A "highlight art mesh" with a smaller area than the solid-color mesh

◆ An art mesh for the mask (this time, made in the shape of a bar); clip the highlight art mesh with the mask art mesh, and slide the mask art mesh around.

By doing this, the glossy area appears to shift as the angle of the face changes.

https://youtu.be/6mNHay-YNmQ

Do this for each part.

The rim of the glasses looks as if it is actually catching the light, because rotating the bar like this makes the diagonal highlights appear to slide across it.
https://youtu.be/AOk3TGRvPYI

If I had clipped every piece separately, the clipping masks would have been overloaded.

So I clipped together the left and right rims, the bridge and end pieces, and any other parts that could share a mask.
↑ All of these are handled in one clipping mask.

【How to express the thickness of the glasses】

Actually, I think this processing contributes the most to the realism of the glasses. haha
It is especially visible when the face turns sideways: you can see the "side" of the rim of the glasses.
Without it, the glasses would instantly look flimsy (although thin frames like that do exist too...!)

Let's go to the instructions on how to make it.

Copy the warp deformer containing the glasses, together with all of its contents: the solid-color rim art mesh, the highlight art mesh, and the mask art meshes. You may want to append "(side)" to the copies' names for clarity.

The point is always to copy the entire deformer. The reason is described below.

Once copied, place all of the copied art meshes below the originals.
Then shift the copied warp deformer slightly toward the near side on both the X and Y axes. (Up to here, this is the common technique of expressing thickness by slightly offsetting an art mesh of the same shape.)

https://youtu.be/qIdKRb8UlA8

In addition, we will add a twist here.

When I explained gloss earlier, I said, "The rim part of the glasses looks as if it is
actually reflecting light because the diagonal highlights appear to slide by rotating the bar in
this way."

The mask for the side art meshes should be placed opposite this location.

The blue line is the mask position for the normal rim art mesh, and the red line is the mask position for the side rim art mesh. Move them both in the same direction.

https://youtu.be/cA9EY6YeJUs

This makes the gloss on the sides appear at a different position from the gloss on the front. The position of the luster shifting as the surface changes makes the object look more three-dimensional; this trick is often used in illustration.
As you can see, the light and dark areas are clearly separated, so the "change of plane" is very easy to read.

I said earlier to "always copy the entire deformer," but if the range of motion stays about what it is now, I don't think there is much disadvantage in keeping one deformer and just offsetting the art meshes inside it.

This time, however, I plan to give it a horizontal range of motion of 90° or more, so the glasses will turn fully sideways, like this (I deformed it just as an example).
To make the sides of the glasses hold up under that condition, they would have to be offset considerably. Offsetting inside a shared deformer tends to cause overhangs and unexpected distortion at the midpoints, so in this case I made the deformer separate as well.
【How to make the reflections of the glasses】

Reflected light looks great when it is there, yet it is the least difficult of the three to create.

First, the mask is made from the "art mesh for mask inversion" explained in the previous article on lens refraction.

(Once you have made one such mask-inversion art mesh, it is very easy to reuse.)

All that remains is to create an art mesh for reflection like this, invert the mask with
the art mesh shown at the top, and slide it.
https://youtu.be/OXa8Aw0fJgA

https://youtu.be/1kORqYweiNk

This in itself is a common technique.

However, this time I was very particular about the "types of reflected light." I actually took selfies wearing the glasses and closely observed how the light reflected.

In my environment, there are four kinds of reflected light.

① Reflection of the monitor

This is the reflection of the monitor in front of me. The lenses are blue-light-cut lenses, so the reflection is blue. An easy way to make it is to create several solid blue rectangles with the shape tool and line them up.
(In this case, I used a different source material to make it more realistic.)


Prepare a commercially usable drawing or photo, and use Photoshop's "Color Range" selection to pick out the brightest areas (play with the tolerance to taste; the larger the tolerance, the wider the selection).
Press OK to get a selection of the bright areas, create a new layer, and fill it with blue.
Delete the original layer, and an art mesh source for the monitor's reflected light is done in no time. The key point is that bright areas such as the desk also reflect slightly!
Since the monitor shape's border is too crisp as-is, soften it a little with "Motion Blur."

I recommend "Motion Blur" here because the difference between its vertical and horizontal blur widths softens the border while keeping the image sharp.
Now all that is left is to set it to 20% opacity and slide it, and the monitor reflection is complete!
https://youtu.be/TYlrmJycW-k

I could slide it as-is, but to emphasize the "reflection on a concave lens," I deformed it with slight perspective on the left and right using a deformer.

(I could have moved it to follow the vertical movement, but I did not do so this time in
order to emphasize the reflection of the light in the room and the "reflection of the monitor in
front of it," which will be discussed later.)

② Reflected light from room light


This is the reflected light that appears when the face looks up. It is a reflection of the room lighting (outdoors, it would presumably be a reflection of the sun?), so it is the strongest reflection of all.

I created this one by dabbing a round blur brush a few times in Clip Studio Paint until it looked about right.
One notable feature is the use of "two art meshes."

In Live2D, there are only three blend modes (normal, additive, and multiply), so if you try to create strong light with additive blending alone, the colors inevitably turn whitish.

First, a light blue is laid down with normal blending…

then a layer of strong additive light is stacked on top, creating a blue-to-white gradation and reflected light that carries more information.

(This light appears only when the model looks upward.)

https://youtu.be/b3cP5roUuVo

③ Reflection on the side of the rim when the face turns far to the side
Here it is. If you wear glasses, try taking a selfie while turned far sideways; you will probably see the same reflected light.
You will probably get the same reflected light.

Perhaps it's the reflected light from the sides of the rim reflecting off the lens.
I just drew the three lines as I saw them, but to emphasize the "rim reflection," I bordered the line drawing in a slightly darker color.
It appears when the face turns far to the side, and it follows the up-and-down movement.

https://youtu.be/80aoSqmdoFg

④??? (Reflection of door?)

This reflected light actually wasn't there in the progress builds I posted to YouTube and Fanbox. The monitor reflection faces the → side, so there was a lack of reflected-light information when facing the ← direction.
However, just copy-pasting the monitor reflection would not be convincing (are there six monitors?!). So I stared at my selfies looking for any usable reflections, and found a faint whitish one when facing ←.

There is a door on the far side of the soundproof room where I usually work, and light
from outside is coming in through the glass of the door. I am not sure why it is not blue, but I
implemented it as I saw it for the time being.

That is the full breakdown of the reflected light! I hadn't thought of it as that elaborate, but written out like this, it really shows how particular I was about it. lol
And one more thing: perhaps some of you noticed it in the explanatory video/picture at the top…

https://youtube.com/shorts/2MjXvon9Kqw?feature=share

I made the temples (the arms) of the glasses.

Although the results are modest for the hard work involved, I think that the glasses
have become more convincing.

I would like to explain how to make these temples, and how to make the XY diagonals of the face, but sorry, this is the limit for this month's update... ;-)

I'm currently working on two Live2D jobs (one is a real-time production for
distribution) and also working on a huge project behind the scenes.

I'm also very busy in my personal life, so I have my hands full trying to get all of this done! (I'm also thinking about my birthday trip, and even if I can't go to the ...... wedding, I'd like to wear a dress and take some pictures.)

I actually had a meeting today to discuss this huge project, and while I'm happy that
we're coming to a good conclusion because it's something worthwhile and fun and I wanted
to do, I also feel a sense of crisis that I have even less time to devote to updating my
models... sweat.

Frankly, at the current pace, it is hopeless to make it in time for the Live2D contest.

I am already feeling sad that I won't make it this time either; but even so, being able to take a step forward like this, seeing the work progress steadily and well, and having my progress already recognized (having acquired skills worth recognizing) is greater progress than I have ever made before.

While accepting this as positive, I would like to make as much as I can until the
deadline of the contest.

See you next time in July!


【Live2D/VTubeStudio Explanation】 How Hand
Tracking Works and the Camera
*This article is a translation of an article I published on Pixiv Fanbox on October 12, 2022.

Hello everyone, this is Kanbutsu Himono.

I've got my upper body done, so I can take all the videos I want, it's great!

https://youtu.be/xrIg87NjNUI

I got too excited while writing this article and started singing.

And when I paired it with my homemade microphone item, it looked great!

So I wrote about that in another article. ↓↓.

https://www.patreon.com/posts/if-combined-with-73329108?utm_medium=clipboard_copy
&utm_source=copyLink&utm_campaign=postshare_creator

Let's leave it at that...

I will start explaining hand tracking and the body in this article.

However, hand tracking in VTubeStudio (hereafter "VTS") is not well documented, and I think many people wonder, "How do I even make it work in the first place?"

First of all, I will explain the principle and the parameters that react to it.

VTS hand tracking is detected by a webcam.

Even those who normally use iOS for face tracking can combine "webcam for the hands" with "iOS for the face."
The tracking accuracy looks like this.

https://youtu.be/Ka7Cja2RLQs

A VTS official contacted me, and I got to try the feature for a bit before its public release. (At that time, I used a lightly customized past model.)

My first impression was: "A webcam can track with this much accuracy?!! Do I even need the Leap Motion I spent over 10,000 yen on???" That was it.

haha

I think it rivals Leap Motion, especially in "front-facing finger tracking" and "hand angle." There is no jitter at all.

https://youtu.be/rU9oXX8MJKQ

Sideways finger tracking is also a thing of beauty. It's a real mystery how they do it when the
fingers in the front are hiding the fingers in the back.
https://youtu.be/Ze-qBUkxVzo

However, if you move the hand's position too quickly, it goes undetected.

https://youtu.be/LPDVJ3irC_s

As you become accustomed to the "hand speed at which the camera does not miss
detection," you will be able to make clean movements.

Also, as a matter of course, if a hand protrudes outside the camera, the detection will not
work.

https://youtu.be/EqHUYU5omgU

I place my webcam on the ← side of the monitor, so if I move my hand too far to the → side,
which creates distance from the camera, the movement will be buggy and uncomfortable.

Once I got used to this, I was able to grasp the range where the movement was not buggy.

https://youtu.be/7L52ZSqVrTM

And now for the disadvantages.

One is the difficulty with the angle of the wrist.

As you can see, when rigged normally, the wrist moves as expected while the palm faces forward; but when the back of the hand faces forward, the movement is reversed.

https://youtu.be/gsqEb_R-zXY

Even more problematic: when the hand is turned to the side, the wrist gets no response at all.

https://youtu.be/sKn1ms9Rjnw

In short, this hand-tracking function picks up the "angle relative to the hand," not the "angle relative to the monitor." The angle of a sideways hand is angle Z from our point of view, but from the hand's point of view it is angle Y; and hand angle Y doesn't move because it doesn't exist as a tracking item.
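One way to see this (an illustrative sketch of my own, with made-up axis conventions, since the tracker's exact frames aren't documented here): an in-screen wiggle is a rotation about the camera's Z axis, but the tracker reports axes in the hand's local frame, so once the hand is pre-rotated, that same wiggle lands on a different local axis. In VTS's case, the sideways-hand wiggle lands on hand angle Y, which isn't a tracked item.

```python
import numpy as np

def rot_y(deg):
    """Rotation matrix about the vertical (Y) axis."""
    a = np.radians(deg)
    return np.array([[np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

cam_z = np.array([0.0, 0.0, 1.0])  # axis of an in-screen "angle Z" wiggle

# Hand upright: the local frame matches the camera frame,
# so the wiggle reads as a local-Z rotation.
print(np.eye(3).T @ cam_z)  # -> [0. 0. 1.]

# Hand pre-rotated 90 degrees: the same camera-Z wiggle now maps onto a
# different local axis (here X, under these made-up conventions).
print(rot_y(90).T @ cam_z)  # -> [-1. 0. 0.] (up to rounding)
```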

If the VTS developer had built this from scratch, it probably wouldn't have turned out this way (he is familiar with the Live2D way of thinking); but since this is said to adapt an externally released hand-tracking technology, I suspect the feature simply wasn't designed with Live2D in mind.
And on top of that, the biggest problem is this.

If I twist my wrist too far inward in front of the camera, the model's hand bugs out like crazy.

WHY!!!???

https://youtu.be/mP_ihPbE5lA

People with flexible wrists need to be careful.

So the "clean range of motion" is surprisingly small, and it takes quite a bit of practice before your brain grasps exactly how much it is.

So much for how it works.

Next: the criteria for choosing a webcam, which you should consider before modeling the hands.

You may think, "Just buy a high-performance camera with good image quality," but that's not
the case.

When this feature was first released, I bought a camera with a wide field of view to "avoid missed detections"; but a wide field of view means the frame covers a large area and the hands appear small, so the resulting movement became proportionally small.

https://youtu.be/_5J46uQ4bgI

Too small haha

It's too small, and it frequently mixes up the left and right hands.

Even more problematic: with such a wide field of view, no matter how far I move my hand to the side, it never "leaves the tracking range"...

Then what happens...

https://youtu.be/6uw9lqGDTsI

it breaks like crazy.

If the camera has a narrow field of view, stretching during a yawn or reaching sideways for a cup moves the hand out of the frame, which automatically drops detection and lowers the model's arm.

But if the camera's field of view is wide, the hand can never leave the frame and keeps being detected in a messy way, so the model breaks down like in the video above.
So, to "move it nicely when you want to move it", an ordinary webcam used for remote
meetings is sometimes the best. I have been using an old webcam that I bought in 2013. The
quality is only 720p.

https://www.amazon.co.jp/gp/product/B00516G6DI/ref=ppx_yo_dt_b_asin_image_o00_s01?
ie=UTF8&psc=1

(I was doing Monster Hunter streams on the 3DS at the time and didn't have a capture board, so I bought it to film the screen directly. So nostalgic.)

I think the best camera height is about the same as the face (eye level).

(Everyone wants the hands brought up near the face to look their best, after all.)

I have a desk like this (sorry for the messy desk top) with three monitors filling it.
The camera is on the far left.

The bottom one is a wide-angle camera that I don't use and the top one is a normal camera
that I usually use.

(The equipment below the camera is something I bought but never figured out how to use, so now it's just a camera stand;)
Since it tracks nicely even in this off-center position, I think the height matters more than the horizontal placement.

So, to summarize this article.


(1) To make a model's hands move beautifully, the person driving the model needs understanding and practice.

(2) Buy an inexpensive camera.

Next time, I would like to talk about parameters!


【Live2D/VTubeStudio Explanation】 About
movements and parameters detected by hand
tracking
*This article is a translation of an article I published on Pixiv Fanbox on November 8, 2022.

The other day I was able to successfully show off my hand tracking.

https://youtu.be/o7vVvWwOwxI

https://twitter.com/himono_vtuber/status/1587075600677711872?s=20&t=a5HGOnXufprkP
cWYscN7EQ

The response has been tremendous, and I'm very grateful!! I'm glad I worked so hard on it!

I will continue my leisurely explanation of hand tracking here.

This time, the topics are "what movements are actually detected" and "what parameters to tie them to."

First, the very basics: hand-tracking settings are made from the third menu from the left in the top-left details panel, the one whose icon is a person with a cogwheel.
"IN" is the movement parameter detected by tracking.
"OUT" is the parameter created in Live2D.

These two can be freely combined, so for example:

"When the hand goes up, the shoulder also goes up."

"If the hand is in front of the body, turn on a parameter that brings the drawing order to the front."

It can be applied like this.
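Mechanically, an IN-to-OUT link is just a range remap: the detected value is normalized over the IN range and scaled into the OUT parameter's range. A minimal sketch of the idea (the names and ranges are made up; in VTS this is configured in the GUI, not in code):

```python
def remap(value, in_min, in_max, out_min, out_max):
    """Linearly map an IN tracking value into an OUT Live2D parameter range."""
    t = (value - in_min) / (in_max - in_min)
    t = min(max(t, 0.0), 1.0)  # clamp values outside the input range
    return out_min + t * (out_max - out_min)

# Hypothetical link: hand height (0..1) drives a shoulder-raise
# parameter whose range is -10..10.
print(remap(0.75, 0.0, 1.0, -10.0, 10.0))  # -> 5.0
```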

Pressing the blue "IN" button brings up a list of detectable movements.

Of these, the ones involved in hand tracking are those whose names include "Hand" or "Finger."
What hand tracking can actually use:

24 movements, 12 for each hand

2 movements involving both hands

for a total of 26.

I will explain them one by one from top to bottom (it's long, so skim past whatever you don't need).

ーーーーーーーーーーーーーー

【HandLeftFound】【HandRightFound】【BothHandsFound】

https://youtu.be/m50Ro6Y7aS0

These are the parameters that move when a "hand is detected."

Like the "cheek puff" parameter, the IN value jumps from Min to Max the instant a hand is detected.

You can probably guess that "Left" and "Right" are for one hand each, and "Both" is for both hands.

However, please note that "Left" is the hand on the ← side as seen from the camera.

In Live2D, the "L" and "R" are named from the character's point of view, so the hands to be
connected are probably reversed.

“Both" moves only when both hands are detected.

https://youtu.be/o5rbIHvQpuE

A quick use for these parameters is to link them to "parameters that raise the arm" or "parameters that switch to an arm-raised difference," so that the arm goes up the instant a hand is detected. If you connect them directly to deformation parameters, the switch is too abrupt, so I think it is more natural to route them through physics calculations.

(I will explain about physics operations later.)

【Video when directly tied to deformation parameters.】

https://youtube.com/shorts/s8QPpN2ExN4?feature=share

【Using physics calculations】

https://youtube.com/shorts/zwydcqTNr0k?feature=share
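The difference between the two clips above comes down to filtering: routing through physics acts like a low-pass filter that eases the binary found/not-found signal instead of snapping it. A sketch of that idea (illustrative only; the smoothing constant is made up):

```python
def smooth(current, target, alpha=0.15):
    """One frame of exponential smoothing toward the target value."""
    return current + alpha * (target - current)

arm_raise = 0.0
for frame in range(20):
    hand_found = 1.0 if frame >= 5 else 0.0  # detection kicks in at frame 5
    arm_raise = smooth(arm_raise, hand_found)
    print(frame, round(arm_raise, 3))  # ramps up smoothly instead of jumping
```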

By the way, I did not use these 【HandLeftFound】, 【HandRightFound】, and 【BothHandsFound】 this time.

Then how are the arms detected? I will explain later.

ーーーーーーーーーーーーーー

【HandDistance】

This parameter moves with the "distance between the right hand and the left hand."

It is exactly the distance between the right-hand and left-hand circles on the webcam screen. By the way, it seems to turn into a hand sign when the two are brought together (I only just noticed that).

https://youtu.be/tjvFGpfv0Xk
The value increases the farther apart the hands are.

https://youtu.be/H7QDFNFbIQI

Used well, it could serve as a "difference trigger for bringing both hands together," such as a clap or a heart-mark hand sign. I have not used it so far.
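If you did want to use it that way, such a trigger could be a simple threshold with hysteresis on the distance value, so it doesn't flicker near the boundary (a sketch with made-up numbers; this is not a built-in VTS feature):

```python
def clap_trigger(distance, active, on_below=0.08, off_above=0.15):
    """Toggle a "hands together" difference, with hysteresis to avoid flicker."""
    if not active and distance < on_below:
        return True
    if active and distance > off_above:
        return False
    return active

active = False
for d in (0.5, 0.2, 0.07, 0.09, 0.2):
    active = clap_trigger(d, active)
    print(d, active)  # False, False, True, True, False
```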

ーーーーーーーーーーーーーーー

【HandLeftPositionX】【HandRightPositionX】

This is a parameter that moves with the "X-axis (horizontal axis) position of the hand."

It would work well if you include shoulder and elbow movements.

https://youtu.be/fcqrReaUf-E

It seems that if you move it slowly, it stays detected even when the hand is partly hidden.

https://youtu.be/kVz6ERay5sM

This movement can also be output through physics calculations, blending the "shoulder" and "elbow" movements into the output.

ーーーーーーーーーーーーーーーー

【HandLeftPositionY】【HandRightPositionY】

This time it is a parameter that moves with the position of the hand on the Y-axis (vertical
axis).

This one is a little more difficult to utilize.

Would it look good to include A or B movement?

(I have it so that it switches between A and B depending on the position of the hand on the
X-axis.)
https://youtu.be/jJeqtCv_vgM

Furthermore, by combining the "range limit" under the "Input" button with physics, you can do things like "move the shoulders up and down only when the hand is above a certain height."

https://youtu.be/BBtrJoxnUck

ーーーーーーーーーーーーーーーー

【HandLeftPositionZ】【HandRightPositionZ】

This is a parameter that moves with the Z-axis (back and forth) position of the hand.

It overlaps a little with the Y-axis movement B mentioned earlier, but I think it looks better if you include a movement where the hand comes closer to the camera. (It is harder to create, though.)
Compared to the X and Y axes, it does not move as spectacularly; it adds a little dynamism... something like that.

(I use physics to blend the movement of the shoulder with the movement of the B of the
Y-axis as described above.)

https://youtu.be/R7jto5Cv3TY

https://youtu.be/PFypTw5ccBY

ーーーーーーーーーーーーーーーー

【HandLeftAngleX】【HandRightAngleX】

This is the movement of twisting the hand back and forth. It is the hardest one to make.

It is designed to detect the wrist's rotation through a full 360°, but in practice only about 180° of "inward rotation" produces clean movement.

As I mentioned in my last article, there is a mysterious bug when the inward rotation exceeds 180°.

Once past the angle where the bug occurs, it moves nicely again from there; but the glitch is so jarring that it only looks right if you stop just short of the bug angle.
https://youtu.be/GUNfJYOx37Q

↑This works beautifully so far.

https://youtu.be/MDrQ7QeEGUg

↑Moving the wrist to an angle greater than this will cause the wrist to rotate once.

https://youtu.be/EiyOS0P5VRs

It is possible to move beyond the angle where the ↑ bug occurs, but it is not practical to use, because the wrist makes a full turn.

https://youtu.be/8-JG9tfzeA0

↑ They also move outward a little, but it is very buggy, so that isn't realistic either (and it will hurt your wrists).

https://youtu.be/Fr0y5bSmvxY

This is what it looks like in numbers. The bug doesn't seem related to smoothing, to changing the tracking values, or to whether physics is applied, so I have no countermeasure at the moment. If anyone knows a solution, please let me know.

I took the "hands only move so far" approach.

ーーーーーーーーーーーーーーーー

【HandLeftAngleZ】【HandRightAngleZ】

The angle of the wrist. This one has a few quirks as well...

As I mentioned in my last article, when the palm of the hand is facing forward, it moves
nicely as normal.

https://youtu.be/vKywiCAFqKo

However, when the back of the hand is facing forward, it moves in the opposite direction.

https://youtu.be/hoLkGlH6rrA

By the way, when the hand is pointing to the side, it does not move at all.

https://youtu.be/t482uIxHisg
This repeats the previous article, but it is because the values are based on the "angle relative to the hand." Turn the hand over and the value reads in the opposite direction; and the ↑ movement with the hand sideways is, strictly speaking, "hand angle Y," so nothing moves.

Hand angle Y is not implemented yet, so there is nothing we can do about the sideways case, but the reversed movement when the back of the hand faces forward can be corrected, either directly or with physics calculations. I use physics to flip the movement only when the back of the hand is facing forward.

https://youtu.be/-qa5Tk0Aj7k

(When the back of the hand faces forward, it does not move much toward the ← side, but a real wrist barely bends toward the thumb side anyway, so I think this is fine.)

https://youtu.be/FS4m1wYepEc

Here's what it looks like in numerical terms.
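Conceptually, the correction boils down to something like this sketch. In the actual model it is done with physics inside Live2D rather than code, and the orientation flag here is a made-up stand-in:

```python
def corrected_hand_angle_z(raw_angle_z, back_of_hand_forward):
    """Sketch of the HandAngleZ fix: the tracked value is relative to the
    hand itself, so when the back of the hand faces the camera the
    on-screen direction reverses. Flipping the sign only in that case
    restores the expected movement."""
    return -raw_angle_z if back_of_hand_forward else raw_angle_z
```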

ーーーーーーーーーーーーーーーー

【HandLeftOpen】【HandRightOpen】

This is a parameter that moves with the opening and closing of the hand. There is no
distinction between fingers.

In this figure, the five circles are the movements of each finger, and the bar below them is
this 【Open】 parameter.

https://youtu.be/rM0awHWoSu0

You could use it to open and close the hand without the tedium of rigging each finger separately, or as a trigger for a difference that fires only when the hand is clenched, among many other uses.

I didn't use it this time.

ーーーーーーーーーーーーーーーーーーーー

【HandLeftFinger_1_Thumb】

【HandLeftFinger_2_Index】

【HandLeftFinger_3_Middle】

【HandLeftFinger_4_Ring】

【HandLeftFinger_5_Pinky】

【HandRightFinger_1_Thumb】

【HandRightFinger_2_Index】

【HandRightFinger_3_Middle】

【HandRightFinger_4_Ring】

【HandRightFinger_5_Pinky】

There it is! Fingers. In the order of 1 to 5, we have "thumb", "index finger", "middle finger", "ring
finger", and "little finger".

As a side note, I think the English names make more sense once you get used to them. In Japanese, the ring finger is literally called the "kusuri" (medicine) finger. haha

https://youtu.be/zM0ddLYL3ac

All the fingers track beautifully for the most part, but the thumb alone moves poorly when "making a fist with the hand turned sideways," so if its range is made too wide, it looks uncomfortable.

At first I wanted a wider range of motion for the thumb as well, but whenever I clenched my hand, only the thumb would stick out, as shown in ↓. Now I prioritize appearance over range of motion.

https://youtu.be/OZiZ9RMPvW0

That's it for the parameters! There are a lot of them, but combined well, they can produce a very beautiful hand.

ーーーーーーーーーーーーーーーーーーーーーーーーーーーーー

Finally, let me tell you something important.

If you have already set up hand tracking and followed the settings in the ↑ explanation, you will notice that the movement is decidedly different from my model's.

https://youtu.be/Ts-wHosETmA

“What? When the tracking goes off, the arm doesn't go back to where it was!”

Yes, as things stand, when the tracking cuts out, the arm freezes in place.

At the beginning of this article, I think I wrote:

"I didn't use these [HandLeftFound], [HandRightFound], and [BothHandsFound] this time. 'Then how are the arms detected?' I will explain that later."

Strictly speaking, though, the more accurate question is, "How do you lower the arm when the hand is not detected?"

Differences" and "hand gestures" play an active role here.

Differences and hand gestures are set up with the rightmost button in the detail menu at the top left.

When it comes to differences, you may mostly picture key-press toggles like "sparkling eyes," an "embarrassed face," or a "bye-bye motion," but VTS lets you use them in a variety of other ways.

In this case, we will create a difference with the condition "fix the arm in a specific position and shape when the hand is not detected."

Press "Expression Editor" at the top of the tabs on the right side.
Press "Create New Expression".

You can see the parameters set in Live2D on the right side of the screen.

I casually named it "← Hand Detection."

Then, check the "← parameters to be fixed when the hand is not detected" checkboxes and slide the bars to fine-tune the position so that the hand appears to be lowered naturally.

https://youtu.be/nCMIqvaNt1s

*The pudding is just there to avoid a ban, sorry hahaha.

My model moves everything from the shoulders to the fingertips in tandem, so I need to check all the boxes. If your model only moves from the elbows to the fingertips, you only need to check the parameters that actually move.

If all goes well, press "Save" and create a diff file (exp3.json).

The process of creating the diff file up to this point can actually be done without using VTS,
but I think it is easier to use VTS.
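For reference, here is a minimal sketch of what such a file can contain, written by hand with Python. The parameter IDs here are made up; you would list your own model's parameters with the values you chose in the editor. (The exp3.json format also accepts "Add" and "Multiply" blend modes; "Overwrite" matches the "fix the arm in place" behavior we want here.)

```python
import json

# A minimal hand-written expression file (exp3.json) -- a sketch with
# made-up parameter IDs. Each checked parameter is pinned to the value
# chosen for the "arm lowered" pose.
expression = {
    "Type": "Live2D Expression",
    "FadeInTime": 0.3,   # fade can also be overridden later in the hotkey settings
    "FadeOutTime": 0.3,
    "Parameters": [
        {"Id": "ParamArmLX",     "Value": -4.0, "Blend": "Overwrite"},
        {"Id": "ParamArmLY",     "Value": -8.5, "Blend": "Overwrite"},
        {"Id": "ParamHandLOpen", "Value":  0.0, "Blend": "Overwrite"},
    ],
}

with open("LeftHandDetection.exp3.json", "w", encoding="utf-8") as f:
    json.dump(expression, f, indent=2)
```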
Next, press the "+" button in the lower-right corner of the screen to create the difference hotkey. I named it "←Hand OFF."

Set "Hotkey Action" to "Set/Unset Expression."

Put the difference file created earlier in "Expression".


Normally, you would enter some key (Shift, W, etc.) in the "Key Combination" field, but this time I left it blank.

And the most important "Gesture Trigger".

When the ""Gesture"" button is pressed, various hand signs will appear.

If you make the hand sign set here at the camera, the difference turns on just as if you had "pressed a key on the keyboard."
In this case, select the one with an X over the hand symbol in the lower-right corner. It means "ON when no hand is detected," which is exactly the condition we want this time.

Click "L" under the hand sign since this is a ←hand setting.

The time until detection is up to you, but I set it to just under 0.3 seconds. If it is too short, it will easily trigger false positives!

Then check the "Deactivate expression when gesture not detected anymore" checkbox. This will automatically turn the difference off when a hand is detected.

(It is a little confusing, but "when the 'hands not detected' gesture is no longer detected" means "when hands are detected"!)

When finished setting the gesture, press OK.

Last! The "Fade few sec" in the Hotkey setting, can be changed to change the "speed at
which the hand is lowered/raised" by changing the value here.

For 0.1

https://youtu.be/Dqwu9heR8Jc

For 0.3

https://youtu.be/-up7N0Sc9-c

For 0.5

https://youtu.be/I-gUa2LY4T4

At the debut stream I had it set to 0.1, thinking faster movement would look better, but when I watched the whole body, lowering the arm felt a little off, so I have since changed it to 0.3. Tastes differ on these numbers, so go with your preference!

This completes the fix for the ← hand. Do the same for the → hand.

https://youtu.be/rE5mLspqmts

That's all for this article! From the next article, I will finally explain how to create with Live2D.
(Thank you for your patience...!)

See you then!


【Distribution of cmo3 files and Live2D
explanations】How to create a hand-tracking model
①Torso
*This article is a translation of an article I published on Pixiv Fanbox on November 11, 2022.

*I have made the video portion into a GIF this time.

Some of you may not be able to view the GIFs properly, so I have included links to YouTube as well.

If you have no problems viewing them, I'd like to keep doing it this way in the future!

(I plan to replace the video parts of my earlier articles in the same way...)

Please let me know in the comments section!

Hello everyone, this is Kanbutsu Himono.

I will now begin to explain the actual rigging.

The order of rigging is as follows:

(1) Torso

(2) Shoulder and upper arm

(3) Forearm

(4) Hands

I recommend building from the center of the body outward to the extremities.

"I'm going to make a hand that can move freely!"

With that kind of enthusiasm, it is easy to jump straight into rigging the fingers at the very end.

However, that approach tends to fail because the overall balance falls apart (I had to remake the hand many times because of this).

So I will start with the torso (shoulder area) first.


"You said you were going to explain the arms!" I'm sure some of you are thinking.

Sorry, let me explain things in order...! (The movement of the torso is also very important for making hand tracking look natural.)

The torso is roughly divided into the following parts.

Red - chest
Yellow - shoulder

Green - torso

Blue - abdomen

Purple - pelvis

The "line drawing," "paint," and "shadow" are all separated for each part.

Clavicle (line drawing + shadow)


The chest part is divided into upper and lower line drawings.

The line drawing for the armpit part is also drawn on the solid color parts.

Also, as in the case of the hair, the line drawing is placed under the basic solid paint.
The shoulder parts are divided into an arm side and a torso side.

The same part is duplicated and everything except the area around the shoulder is shaved off. The two are then connected with glue so that the deformation does not break down during flexible shoulder movement.

https://youtube.com/shorts/EQocMf3rPrE?feature=share
This is the armpit part!

It is usually hidden and is visible when the arms are raised.


Torso (line drawing + solid coloring + shadow on ribs)
If these "feature shadows" are kept as separate parts, deformation becomes much easier.

Belly part (line drawing + solid coloring + navel line drawing + navel shadow + rib shadow)
The line drawing and shadow of the navel are also separate parts.
Pelvic parts (line drawing + solid coloring), with separate parts for each side

Although the abdominal and pelvic parts are not strictly related to hand tracking, I thought separating them now would make future leg rigging easier, and this was a good opportunity to explain them.

This is what happens when all of these parts are combined!


This is what happens when these are moved! (surreal hahaha)

https://youtu.be/RvTYqrzobVY
This time, I separated the movements of the upper body and the hips. It looks very strange on its own, but with a little delay added by physics-driven tracking, I think it becomes a nice movement.

https://youtu.be/lWG9SIvglnU

https://youtu.be/oC-tY6xXqNA
By sandwiching the shoulder parts between the chest and torso parts, the thickness of the
torso is created.

https://youtu.be/xocY9RO9f10

Separating the collarbone parts makes it easier to create a three-dimensional effect in the
front and back of the body.

https://youtu.be/XVMmygNuon4
https://youtu.be/PuB3TE_mGzo

The ribs and navel are also moved individually to create a three-dimensional effect.

Incidentally, the waist parameter uses blend shapes.

That's it for normal torso movement! In other words, all of that was the preamble.

Next, I would like to introduce an important movement that is essential for the connection
with the arms.

They are "shoulder up and down" and "chest squeezing inward" movements.

https://youtu.be/DmcbVdpz2Ws
This is how the shoulder moves up and down.

https://youtu.be/DwWRDiE0DfA

By linking this with arm movements and physics calculations, the shoulder appears to be
raised naturally.

https://youtu.be/Lzu2ywYKYFY
This time, apart from that, I have included another type of shoulder movement. (This one
also moves downward).

https://youtu.be/8XydCF6zbPg

This is a "sway-only parameter" for swaying the shoulders in accordance with body
movements.

If these two parameters were not separated, the shoulders would drop whenever the arms are lowered, and preventing that would require a tedious physics correction, so I separated them.
That's about it for the shoulders up and down.

The next movement I will describe, "squeezing the chest inward," looks like this.

https://youtu.be/NYGZFmQjv5M

At first glance, this is a "what's the use?" move.

https://youtu.be/d1hd_fI1cIY

By squeezing the chest when the upper arm moves inward, you get the impression that the chest is being pressed by the arm.
Even if the breasts are small like mine, there is still flesh in the armpits, so if that is not
moved, it will look unnatural.

https://youtu.be/7LIoENO9cCE

Furthermore, when the upper arm moves inward, the drawing order changes so that the arm comes to the front; inserting the "chest-squeezing" movement in between reduces the jarring feel of that switch.

...and so on. The point is that even though we say "hand tracking" as a single phrase, you also need to think about linking the torso so the whole body reads as one natural movement.

Finally, since I can't convey everything in writing alone, I am distributing the cmo3 file of the torso as a Patreon-exclusive bonus.

(with color coding for clarity)

Reusing the deformers and art meshes in your own models is strictly forbidden. Please use the file for study only.

↓↓↓

https://youtu.be/KIwDAcS2x9w
https://youtu.be/btlcyaT2r4E

Well then!
【Live2D Explanation】How to Create a Model for
Hand Tracking ② Upper Arm
*This article is a translation of an article I published on Pixiv Fanbox on December 9, 2022.

Well, before I know it, it's December!

Sorry for not updating for a while.

I've been having some troubles and my motivation to create in general has been low...

(Some of you may have guessed it, but if you don't know, it's okay to stay ignorant.)

I'm finally getting around to making some videos on YouTube.

I hope to update the Patreon and Fanbox articles little by little.

So, continuing from last time on hand tracking!

This time, I will explain how to make the upper arm.

*First, let me say that a lot of what I describe here is solved by brute force.

There's also a great deal of numerical talk, but I'm making this up as I go along, just by feel.

There may be a better way.

(I may be able to do it faster and more efficiently if I use mathematical formulas, but I don't
have the knowledge...;;)

ーーーーーーーーーーーーーーーーーーーーーーーーー

To make the explanation clearer, I have hidden everything from the elbow down.

It moves like this. Surreal, isn't it?

The body movements I made last time are well utilized.

https://youtu.be/tP9t6iy2Z10
Human arm joints are really flexible and have a wide range of motion.

The upper arm alone can point in almost any direction through nearly 360 degrees.

So how can we express this in Live2D?

Unless there is a good reason not to, I think everyone builds arms with a rotational deformer. But if you want the arm's full range of motion, you can't just spin the rotational deformer 360° and call it done......

https://youtu.be/ICl9tGGbRA8

If rotated naively, the shoulder dislocates partway around like this, and the arm sinks into the clothes because the drawing order is wrong.

If I power through and fix those issues, the movement looks something like this.

https://youtu.be/r6blG9xIlOM
This alone has taken me about a month, but even this is still not enough.

Furthermore, the arms can be "moved forward".

https://youtu.be/gyQoj72-6v0

The difficulty is doubled because the arm's artwork switches partway through raising and lowering. (If you have used Live2D, you will know what I mean.)

Even more troublesome, the artwork does not switch when the arm moves inward or outward.

https://youtu.be/llURvBkI7B8
The rig has to behave so that "only when the arm passes a certain point does the stacking of the parts flip, while everywhere else only the angle changes."

Only after somehow powering through that process can the movement shown at the beginning be achieved.

https://youtu.be/HuUo4oyRYYo

...even a cursory description of these movements makes my head hurt, but let's take them one by one.

First, there is a knack to choosing which parameters to tie together.

Ordinarily, Live2D arms are created with a single parameter, since there are only two directions: "raise" and "lower."

https://youtu.be/8Zn1kBHqB64
"I want to move it a lot, I can just keep increasing the angle, and I'll be able to move my arm
180°!"

If you do this, the tracking and animation will not work well.

It will move like a wiper.

https://youtu.be/ua13LcdO7xE

To create the three-dimensional arm motion explained at the beginning of this section, it is
important to divide the arm motion into "X-axis" and "Y-axis" movements.
I know I've hit a lot of dots here, but let's set that aside for a moment.

This is the default position, but it is biased toward the minus side, isn't it?

X=0, Y=0 looks like this.

The hand points straight forward.

Think of this as neutral.


https://youtu.be/mgo_yqH5oDE

From there, the arms are extended in different directions.

In this way, the combination of up, down, left, right, front, and back can be expressed with a
three-dimensional sense of movement without waste.

Next, here's what the art mesh looks like.

(Please ignore the one labeled "Quagsire.")

The basic solid color, line drawing, and shadow are separated like this.

The line drawing is filled in and placed at the very back of the drawing order.

https://youtu.be/DXhFuxDFw8o
Furthermore, there are several shadows to add detail at different angles.

For example, this is a shadow art mesh that is only displayed when the arm is in front of the
body.

On the parameter range where it should be hidden, I simply don't hit any dots, so it disappears in an instant.

https://youtu.be/KePpanQ7bxU

This is the shadow of the armpit flesh when the arm is lowered.

It gradually disappears as the arm is raised, by combining an opacity change with the same no-dots trick.

https://youtu.be/KSgFSfMWjk4
This is the shadow of the armpit seen when the arm is raised.

I not only raised and lowered the opacity, but also added a three-dimensional effect to the
armpit using deformers. I get excited when I can improve the quality of the armpit
expression!

https://youtu.be/OtELOPqK3PY

Next is the deformer structure.

(The shaded area is the deformers from the elbow down, so please ignore it for now.)
The base rotational deformer sits inside the "shoulder up/down" deformer described in the previous article.

When that deformer moves, the arm follows.

https://youtu.be/S6TTkORcerw
The next "arm on/off" is for difference, so ignore it.

The next item, "arm position," makes the arm follow the body XY. The deformer already follows the body to a large extent, so the deformation only needs fine-tuning.

https://youtu.be/9qdqKTHoZ1A

The deformer for the armpit is just below that.

Deformer for body XY > Deformer for arm XY > Art Mesh

In this order.

https://youtu.be/3pB2i_gayv4

The same hierarchy also includes the shoulder art mesh that is tied to the arms.

(This is the one explained in the previous article.)


By separating the art mesh for the body side and the arm side in this way, you can create
subtle deformations to match the arm.

https://youtu.be/RpS0PSF_aC0

Now, below that, "upper arm rotation."

This is the most difficult and trickiest parameter to deform this time.

As I mentioned at the beginning of this article, we have to pull off something like this: "at one point (X,Y=0) the stacking of the part flips top to bottom, but at every other point only the angle changes."

For example, thinking "I see, I should just flip it over, right?", many modelers casually make this kind of deformation (I did).

https://youtu.be/QAep4KD0Gs8
It's like extending up, down, left, and right around the 0 point. That's easy.

And at this point, it looks very nice.

But what about the diagonal deformation?

Let's try synthesizing the corner keyforms.

https://youtu.be/rlLASEBrQTE

Wahhhh!!!

It is now a loaf of bread, not an arm.

https://youtu.be/o6wKb2g3vDI

I will try to force it into the shape of an arm.

https://youtu.be/6kdHTvWOi3E
...It seems to be no good.

I could get away with it by forcing the shape with insanely fine dots in between... but the result would be very messy.

If you casually flip the up, down, left, and right deformations like this, the interpolation between them turns out terrible, so this method can't be used.

So the warp deformer handles only the stretch of the arm's extension, like this.

https://youtu.be/TVOrM9uhoqU

The direction is created by a rotational deformer.

https://youtu.be/AIQCeKorUFI
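To put that division of labor another way: the arm's X/Y parameter pair behaves like a 2D direction vector that the rig splits into "which way the arm points" (the rotational deformer) and "how far it reaches" (the warp deformer). A toy sketch of the idea with a made-up axis convention; Live2D does this with keyforms, of course, not code:

```python
import math

def split_arm_params(x, y):
    """Toy decomposition of the arm X/Y parameters (each -10..10) into the
    two jobs described above: a pointing angle for the rotational deformer
    and an extension amount for the warp deformer."""
    angle = math.degrees(math.atan2(x, -y))        # which way the arm points
    extension = min(1.0, math.hypot(x, y) / 10.0)  # 0 = neutral, 1 = fully extended
    return angle, extension
```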
There are some drawbacks to this method as well. First, when either X or Y is pushed all the way out, a simple dot like this works beautifully, but you also need a dot as close to 0 as possible on X (e.g., 0.01).

https://youtu.be/Ocz6fD3TPDI

Without it, the rotational deformer's value would flip sign and the movement would go haywire at one spot.

https://youtu.be/oGGvl7R3voA
By swapping the plus and minus of the angle in a split second between X=0 and X=0.01, I avoid having this parameter wind backwards. (I will explain more about this later.)

https://youtube.com/shorts/xQPTOU-Z-xc?feature=share

*Sorry it's hard to see.

Do you see how the plus and minus of the arm X angle switch every time the point moves between 0.01 and 0?
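Numerically, those two dots behave like the sketch below (the article uses 0.01 here and 0.001 for the -180°/180° seam described a little further down; the trick is the same either way):

```python
def angle_near_zero(x, magnitude=180.0):
    """Sketch of the sign-flip trick: the same angle is keyed with opposite
    signs at X=0 and X=0.01, and Live2D interpolates linearly in between.
    The flip is confined to a 0.01-wide sliver of parameter travel that
    tracking crosses in a split second, so it is never visible."""
    if x <= 0.0:
        return -magnitude
    if x >= 0.01:
        return magnitude
    return -magnitude + 2.0 * magnitude * (x / 0.01)
```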
Another drawback: when either X or Y is swung all the way out, the ↑ method alone works fine, but the deformation in between is not so easy, especially when one of the values is 0. As you can see, the shape breaks down in the in-between keyforms where one value sits at 0.

https://youtu.be/OQrfw0HXMv0

To prevent this, you have to hit more points.

The result is this…

It's always creepy to see too many dots....

In case you're wondering, I'm not placing dots at random; there are rules.
First, the value of the parameter is "-10,0,10" for both X and Y axes.

Then, the value of ① is half of that, 5 (or -5).

The value of ② is half of that, 2.5, ③ is another half of that, 1.25, ④ is still another half of
that, 0.625...

The interval gradually becomes narrower by halves, approaching the value of 0.
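A quick sketch of that rule, using the numbers above (a -10..10 range with each new dot halving the distance to 0):

```python
def halved_dots(limit=10.0, levels=5):
    """Generate the dot positions described above: the extremes, 0, and
    successive halvings (5, 2.5, 1.25, 0.625, ...) mirrored onto the
    negative side. 'levels' controls how far the halving continues."""
    steps = [limit / 2 ** k for k in range(1, levels + 1)]
    return sorted({-limit, 0.0, limit, *steps, *(-s for s in steps)})

print(halved_dots())
# [-10.0, -5.0, -2.5, -1.25, -0.625, -0.3125, 0.0, 0.3125, 0.625, 1.25, 2.5, 5.0, 10.0]
```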

Next is the angle of rotation deformer for each value.

(I tried to fill in all of them at once, but the screen got too cluttered, so I started with just the
main angles.)
The outermost ring looks like this. The angle changes in 45° steps: positive going around the → side and negative going around the ← side. At the bottom, the numbers don't meet (180° vs -180°), so I force them to line up by hitting a dot at 0.001 and setting -180° at 0 and 180° at 0.001.

The angle between the two will be half the main angle as before.

For example, in the upper right:

Half of -45° is -22.5° (-22.5°)

Half of -22.5° is -11.25° (-11.3°)

Half of -11.25° is -5.625° (-5.7°)

Half of -5.625° is -2.8125° (-2.8°)

The rotational deformer only accepts angles to one decimal place, so each value is rounded to the first decimal; that rounded value is the one in parentheses ().
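The same halving can be sketched in code, applying the one-decimal rounding at each step. (Halving the exact value gives -5.6° at the third step; the -5.7° above presumably comes from halving the already-rounded -11.3° instead, so the two approaches can differ by 0.1°.)

```python
from decimal import Decimal, ROUND_HALF_UP

def halve_and_round(start=-45.0, levels=4):
    """Halve an angle toward 0 step by step, rounding each displayed value
    to one decimal place with ties going away from zero, since the
    rotational deformer only accepts one decimal. The exact, unrounded
    value is carried between steps."""
    out, value = [], Decimal(str(start))
    for _ in range(levels):
        value = value / 2
        out.append(float(value.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)))
    return out

print(halve_and_round())  # [-22.5, -11.3, -5.6, -2.8]
```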

Do the same for the other parts. (Numbers before rounding are omitted.)
(Sorry it's hard to see)

After filling in the outer ring's values, slide inward and fill in the next ring. The values in the red circles (0°, 90°, 180°) stay the same. Keep sliding further inward in the same way; the number of in-between points on the parameter gets smaller and smaller, while the red-circled values never change.

(I'm not sure if this conveys the message properly...!)

Do this for all points.

The number of steps has become very large, but one advantage is that this setup adapts to any arm length, so it can be reused in many different ways.

And once it's made, it's yours for good.

https://youtu.be/pOhrAM8Pi2E

At this point, just moving the rotational deformer looks like this! It spins around nicely. The odd behavior around 0 doesn't look strange in practice, since it switches to the opposite direction in an instant.

(The position of the rotational deformer itself has moved a little, but this is due to the
fine-tuning of its position to match that of the shoulder.)

The warp deformer's deformation is very simple, because we did all the hard work in the rotational deformer.

https://youtu.be/pjGmkYsQnHw
Simply make a deformation that stretches like this

and stick it on all the blue circles except the middle one.

I'd like to say "that's it!"......, but with only this, the line art and shadows get crushed when the hand is raised high.

https://youtu.be/43pu150YMyk

So, hit dots on the line-drawing and shadow art meshes and deform them until they look right.

(This is where placing the line art deformer under the solid color deformer comes into play.)

https://youtu.be/Qn78QfL27GU
Especially near the middle, the angle changes so quickly that you need to hit very fine dots.

Sorry that this part is more brute force than explanation.

The result is this movement.

https://youtu.be/VOMdwLvrxHo

It's done~~~~~!!!!!

A three-dimensional, highly mobile arm~~~~!!!!!

I did it~~~~~~~!!!!!

By the way, I change the drawing order by hitting a dot on the folder.
As arm X goes from 0 to 0.6 with Y going from 0 to 10, the drawing order changes so that the arm comes to the front; but that alone cannot put the hand behind the body, so I created a new "◇← hand front/back" parameter that allows the hand to be placed at the back.
https://youtu.be/Q9nSCHIIMvU

When the body actually moves, the shoulder and body movements are blended with physics calculations to make the motion even more natural, as shown in the ↓ figure.

I will explain how to do this in a future article.

https://youtu.be/-hYKrCTMxaE
That's all for this explanation!

This was the most difficult and long-winded one yet, but I hope you enjoyed it.

Are you still with me? (:| I hope everyone is following along!

I try to explain as concisely as possible, but the work from this point on is complicated to begin with and there is a lot to cover, so the articles end up long and involved like this one.

I apologize for my lack of vocabulary.

I will try my best to put as much as I can into words.

Thank you for your continued support.

I'll explain from the elbow down next time!

See you then!
