
Independent Art Installation / Avner Peled

Tzina: Symphony of Longing

About the project

Tzina: Symphony of Longing is a WebVR documentary for the Chrome browser / HTC Vive. It takes place in the main city square of downtown Tel Aviv, “Dizengof Square”, also named “Tzina Square” after the wife of Tel Aviv’s first mayor, Meir Dizengof. The documentary tells the story of the different individuals who inhabit the square regularly, spending most of their day sitting on the benches that surround the circular square and the monumental fountain sculpture at its center. Whether it is because they are homeless, poor or just lonely, they all find themselves there, pondering lost loves and missed opportunities.

Illustration 1: Tzina square

Shirin Anlen, the director of the film and a friend, was able to find common ground with the people of the square, and decided to tell their story to the world. What gave additional urgency to the making of the film was the fact that the square as we know it, built high above the street, was about to be demolished and leveled down to street level. There is no way of telling what will become of the people who currently occupy the space, so the film also carries an archival value.
Being the avid new-media pioneer that she is, Shirin wanted to use nothing less than the newest, most experimental technologies available for interactive storytelling. She assembled a talented, multi-national team, recruited me to be the lead developer, and we set off on this WebVR journey.
Project outcome

I am very proud of the result that we achieved during less than 6 months of development (which followed about 6 months of research and shooting that I was not a part of). The film is both a Vive and a Web experience; it contains a modeled and greatly enriched version of the square, with 3D point cloud scans of the trees and passers-by of the square.

Illustration 2: Screenshot from tzina.space
It contains about 45 minutes of footage with 10 interviewed characters who were shot using the “DepthKit” depth-capture technology, edited, rotoscoped and then embedded into the virtual world. Each character projects an animation that was custom made for their dialogue. We were also able to implement a multi-user feature in which viewers of the film can see other viewers walking around in real time as pigeons. An interaction mechanism allows the viewer to change between different times of the day by gazing at one of the suns blazing over the square. In VR mode, the content rotates toward the viewer, instead of them having to teleport around the square.
The film was selected for the DocLab exhibition of IDFA, the International Documentary Film Festival Amsterdam. We barely managed to complete the project in time for the festival, so the experience was not always fluid, but some of the feedback that we got was really positive and assured us that we were able to deliver the intended message of the film.

Reflections
I would like to divide my reflections and conclusions from this project into 3 distinct parts:

1. Technological
2. Organizational
3. VR development and storytelling

Technological conclusions

WebVR:

The idea to work with WebVR was proposed even before I joined the project, but I fully supported it. In retrospect, I did not really know what I was getting myself into, but the important fact is that I was working with open standards and software that is in line with my ideology. Choosing to develop strictly for the browser, using only open source technologies, not only drastically increases the potential audience of the experience, it also contributes to the dissemination of the technology to the general public and moves toward a state in which more and more people will be able to produce content. Indeed, during the development process, I was already able to give back to the community by reporting bugs and suggesting fixes1. The complete source of the experience is also publicly available2. Proprietary engines such as Unreal and Unity do provide options to export content to WebGL, but at this time they are still very immature and lacking in that respect.
When working with WebVR, that is, connecting an HMD such as the HTC Vive to the browser, one has to use an experimental build of Chromium3 or Firefox4.

1. https://github.com/toji/chrome-webvr-issues/issues/69#issuecomment-243742725
2. https://github.com/avnerus/tzina
3. https://webvr.info/get-chrome/
4. https://mozvr.com/
At the time of development, Chromium was more advanced and performant than Firefox, so it was chosen as the target platform. The provided builds, however, were still very experimental, and the more we pushed the limits of the platform, the more issues we came across. Every month or so, a new Chromium WebVR build was released, mostly maintained by Brandon Jones5. The new versions did fix issues, but also frequently introduced new bugs and breaking changes to the API. To this date, the two most recent Chromium versions available present a dilemma: the September version has a bug in which the browser crashes every time the Vive loses its sensor tracking, while the October version fixes that bug but introduces memory leaks that cause the browser to crash with an “Out of memory” exception after a reload of the experience. We opted to use the September version. Using the experimental Chromium build also introduced substantial obstacles in marketing the project, because curators have to put in extra effort to download the Chromium build and enable WebVR. However, WebVR is destined to land in an upcoming version of Chrome very soon6. All in all, I am pleased with our choice to use WebVR, as we were able to push the envelope on the technology and create something truly cutting edge.
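For reference, the browser API we were targeting is itself quite small. Below is a minimal sketch of the 2016-era WebVR 1.1 entry flow, assuming an existing WebGL canvas, a render(frameData) function and an enterVRButton element (all hypothetical names); it is illustrative only and not the project's actual bootstrap code.

```js
// Minimal sketch of the 2016-era WebVR 1.1 entry flow (now deprecated).
if (navigator.getVRDisplays) {
  navigator.getVRDisplays().then((displays) => {
    if (displays.length === 0) return;
    const vrDisplay = displays[0];
    const frameData = new VRFrameData();

    // Presenting to the HMD must be triggered by a user gesture.
    enterVRButton.addEventListener('click', () => {
      vrDisplay.requestPresent([{ source: canvas }]).then(() => {
        const onVRFrame = () => {
          vrDisplay.requestAnimationFrame(onVRFrame);
          vrDisplay.getFrameData(frameData); // per-eye view/projection matrices
          render(frameData);                 // draw the left and right eye views
          vrDisplay.submitFrame();           // hand the frame to the VR compositor
        };
        vrDisplay.requestAnimationFrame(onVRFrame);
      });
    });
  });
}
```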

3D Engine:

Once the decision was made to use open source software, the choice of a web 3D engine was narrowed down to two possible engines: Three.JS and Babylon.JS. In retrospect, it may have been worthwhile to perform a more extensive examination of Babylon.JS. I chose Three.JS for three main reasons:
1. A very big community and example database, including people working closely with WebVR.
2. It is more lightweight, less monolithic, and not affiliated with large corporations (Babylon.JS is affiliated with Microsoft).
3. I had worked with it before and the deadline was tight.
When it comes to frameworks, I am rather fixated on issues such as modularity, lightness and flexibility, and the size and scope of Babylon.JS put me off. However, I think many of the issues that I dealt with could have been avoided by using Babylon.JS. It is fair to say that Three.JS is not really designed to support large-scale projects such as Tzina. When it comes to performance, the lightness of Three.JS is where it shines. I could have full control over the rendering loop and embed my own modules, such as fast collision detection using Box-Intersect7 and FBO-based particle engines8. The latter might actually be a good enough reason on its own to stick with Three.JS. I have been polishing my Three.JS-FBO skills over the last couple of years, and using FBO allowed us to render, animate and morph millions of particles without losing any FPS; as I will get to later, particles really excel in VR.
However, Three.JS lacks optimization for garbage collection and requires manually disposing of most resources after they are used, while Babylon.JS seems to be optimized for garbage collection, a very important aspect when using Javascript.
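To illustrate the kind of manual cleanup Three.JS expects, here is a minimal sketch (not taken from the project) of freeing GPU resources when removing an object from the scene:

```js
// Removing an object from the scene does not free its GPU resources;
// geometries, materials and textures must be disposed of explicitly.
function removeAndDispose(scene, mesh) {
  scene.remove(mesh);
  if (mesh.geometry) mesh.geometry.dispose();
  if (mesh.material) {
    if (mesh.material.map) mesh.material.map.dispose();
    mesh.material.dispose();
  }
}
```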
In terms of workflow, we started off by using Unity as the designers' editor of choice and then exported to Three.JS using an exporter from the Asset Store9. That turned out to be a huge mistake. The exporter was buggy, missed features and was later abandoned by its developer, many properties looked visually different in Unity than in Three.JS, and the exported JSON files were bloated. On top of that, it also cost $35. A better choice would have been to use the Blender exporter, which we did end up doing for several modifications closer to the end. Instead, we had to resort many times to manually editing the exported JSON files. Three.JS does have its own editor in development, but it is at a very early stage. Babylon.JS has a very comprehensive and impressive-looking editor, but until recently there wasn't really a way to connect the editor with custom code, thus limiting it to basic prototyping. Now there seems to be a way to use Typescript and Visual Studio Code alongside the web-based editor to get a full-featured coding environment10, but this already raises concerns about tight coupling to Microsoft products. What I ended up doing was to simulate my own editor on top of Three.JS using the dat.gui library11 12. I can't say it is a very scalable solution, but we managed to reach a rather efficient and semi-automatic workflow toward the end. Going from Blender to Three.JS through dat.gui may require a lot of supporting development, but I ended up with a completely corporate-free, pure Javascript ES6, semi-large-scale development platform.
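To give an idea of that approach, here is a minimal sketch of an in-browser tweaking panel built with dat.gui, assuming a Three.JS mesh (the name fountainMesh is hypothetical) whose transform and color a designer might want to tune live; the saving mechanism is only hinted at.

```js
import * as dat from 'dat.gui';

// Minimal in-browser "editor" panel for a Three.JS object.
const gui = new dat.GUI();

const folder = gui.addFolder('Fountain');
folder.add(fountainMesh.position, 'x', -50, 50).step(0.1);
folder.add(fountainMesh.position, 'y', -50, 50).step(0.1);
folder.add(fountainMesh.position, 'z', -50, 50).step(0.1);
folder.add(fountainMesh.scale, 'x', 0.1, 10).name('scale x');
folder.addColor({ color: '#ffffff' }, 'color')
  .onChange((value) => fountainMesh.material.color.set(value));

// A "save" button that dumps the tuned values, to be merged back into the
// exported JSON by hand or by a small script.
gui.add({ save: () => console.log(JSON.stringify(fountainMesh.position)) }, 'save');
```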

5. https://github.com/toji
6. https://github.com/electron/electron/issues/3271#issuecomment-242952641
7. https://github.com/mikolalysenko/box-intersect
8. http://barradeau.com/blog/?p=621
9. https://www.assetstore.unity3d.com/en/#!/content/40550
10. https://medium.com/babylon-js/programming-with-the-babylon-js-editor-ddab887c243a
11. https://github.com/Avnerus/tzina/blob/landing/client/util/debug.js
12. https://workshop.chromeexperiments.com/examples/gui

Performance:
It didn't take long before I realized that the main technical challenge of the project would be performance. In VR, dropping even 5 frames below the venerated 90 FPS has major implications for the viewer's experience. I would argue that my main lesson from the battle over 90 FPS is actually an organizational one, but here are some of the tips that I have learned:

1. Video to Texture – When using “DepthKit”, the result of a depth shoot is a webm video file that contains RGBD data, that is, both the RGB color and the depth image. I will not go into detail in this paper, but to get reasonable quality we had to hack the DepthKit format and shaders so that we could generate a decent-looking mesh in real time from the RGBD video. However, the process of converting an HTML5 <video> texture into a WebGL texture is a heavy-duty task that has caused lots of performance issues in browsers, especially Chrome13 (see the first sketch after this list). We found that:
   a. Performance is not consistent between different Chrome versions. The Sep 29 WebVR Chromium build seemed to have the best results14.
   b. We gained a significant improvement by making sure that the video file's resolution conforms to powers of two.
   c. VP8 encoding yielded better performance than VP9. There wasn't any noticeable improvement from varying the compression ratio and video file size.
   d. Starting with Windows 10 RS1, hardware-accelerated VPx decoding is available and does improve performance15.
   e. Manually decreasing the FPS of some videos that do not need to be viewed at full rate improved performance.
   f. Whenever we could, we paused other videos while the viewer was focusing on one video.

2. FBO – It's a no-brainer that using GPU shaders wherever possible would most likely increase performance, but as previously mentioned, using FBOs for particle simulations allowed us to generate and animate thousands of particles easily.

3. Potree – The project makes extensive use of 3D point clouds. The standard THREE.Points/PointsMaterial bundled with Three.JS are in no way scalable. Instead, we opted for the Potree16 library, which can efficiently present thousands of points in real time.

4. Collision Detection – As noted earlier, using BoxIntersect for collision between objects was relatively light on performance (see the second sketch after this list). I also consolidated the use of raycasting into one collision manager and tried to avoid it when not necessary. With the dat.gui system, I developed an interface for manually adjusting object box colliders.

5. VSync off – As a general tip that seems to work well for VR, it is wise to turn off VSync in the GPU settings to get a performance boost.
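As referenced in tip 1, here is a minimal sketch of streaming an HTML5 <video> into a Three.JS texture and pausing the clips the viewer is not focusing on. It is illustrative only: the real pipeline also unpacks the DepthKit RGBD layout in a custom shader, which is omitted here, and the file name is hypothetical.

```js
// Stream an HTML5 <video> into a Three.JS texture.
const video = document.createElement('video');
video.src = 'character.webm';   // VP8-encoded, power-of-two resolution
video.loop = true;
video.play();

const texture = new THREE.VideoTexture(video);
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;

const character = new THREE.Mesh(
  new THREE.PlaneGeometry(1, 1),
  new THREE.MeshBasicMaterial({ map: texture })
);
scene.add(character);

// Pause everything except the clip the viewer is currently focusing on (tip 1f).
function focusOn(activeVideo, allVideos) {
  allVideos.forEach((v) => (v === activeVideo ? v.play() : v.pause()));
}
```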
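And as referenced in tip 4, a minimal sketch of the box-intersect usage pattern, with hypothetical collider data; each box is a flat array of min/max coordinates.

```js
const boxIntersect = require('box-intersect'); // or an ES6 import via a bundler

// Axis-aligned boxes as flat [minX, minY, minZ, maxX, maxY, maxZ] arrays,
// e.g. derived from the manually adjusted box colliders.
const colliders = [
  [0, 0, 0, 1, 2, 1],      // a character
  [0.5, 0, 0.5, 3, 1, 3],  // a bench
  [10, 0, 10, 12, 4, 12]   // the fountain
];

// The callback is invoked once per overlapping pair of box indices.
boxIntersect(colliders, (i, j) => {
  console.log(`Collider ${i} overlaps collider ${j}`);
});
```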

Organizational conclusions

Branches:
In this project, I took the role of the lead developer. Other than developing the core platform for the experience, I was also in charge of integrating the work of the other programmers and designers. Code-wise, after some trial and error, I was able to create a standard for Three.JS ES6 objects that the other developers had to conform to. Each module was developed autonomously and integrated smoothly into the platform. However, in retrospect, I was not strict enough regarding the branching policy and ended up doing a lot of debugging to find the causes of performance hits.
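As a rough illustration, the kind of module contract described above might look like the sketch below; the class and method names are hypothetical and not the project's actual interface.

```js
import * as THREE from 'three';

// Hypothetical module contract: every scene element is an ES6 class that the
// core platform can construct, initialize and update uniformly.
export default class SquareBench {
  constructor(config) {
    this.config = config;
  }

  // Build the Three.JS objects and add them to the scene; returning a promise
  // lets the platform integrate modules that load assets asynchronously.
  init(scene) {
    this.mesh = new THREE.Mesh(
      new THREE.BoxGeometry(1.8, 0.5, 0.6),
      new THREE.MeshLambertMaterial({ color: 0x886644 })
    );
    scene.add(this.mesh);
    return Promise.resolve(this);
  }

  // Called by the platform once per frame with the elapsed time in seconds.
  update(dt) {}
}
```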
For example, we had one branch named design, which our design director used for basically everything, from setting up the trees to texturing the floor to updating the model of the square. On one sleepless night, close to the opening of the festival, I pulled the recent changes from the design branch. To my dismay, the FPS dropped significantly, but I did not know why. I had to first go over the merge to see everything that had changed, and then use dat.gui and code changes to repeatedly turn off and revert changes until the performance went back to normal. Instead, what I should have done was enforce a strict performance regime in which every feature or change has its own branch and is not integrated before testing that the performance remains the same. I should add, though, that even if I had done that, the simple measure of actual FPS is not always good enough, because sometimes you only see the FPS drop after an accumulation of multiple resources.

More accurate statistics are required to assess the performance hit of every feature. The Debug Layer of Babylon.JS looks like an appropriate statistics panel.

13. https://bugs.webkit.org/show_bug.cgi?id=129626
14. Test: http://codeflow.org/issues/slow_video_to_texture/
15. https://codereview.chromium.org/2182263002
16. http://www.potree.org/

VR First:

Whether it was lack of resources, lack of knowledge, or simply laziness and fear, we developed the project “Desktop First”. This means that most of the content was developed as a web experience, and only then did we go on to test it on the Vive. That was obviously a mistake. Just as there is now a “Mobile First” paradigm, there should be a “VR First” paradigm. The logic behind it is simply that it is much easier to adapt a VR experience to the Web than vice versa. Here are just some of the requirements that a WebVR experience has that would work fine on a desktop with little to no adjustments:

1. VR needs at least 90 FPS while desktop needs at least 60.
2. WebVR cannot display overlay HTML and instead needs to project a plane onto the camera, which can work on the desktop as well (see the sketches after this list).
3. VR needs realistic scale adjustments while desktop viewers don't necessarily notice them.
4. VR cannot perform nauseating camera or world movements while on the desktop they are possible.
5. In VR it is crucial to have positional audio, while on the desktop it is mostly a benefit.
6. VR has no mouse or keyboard.
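As referenced in point 2, a minimal sketch of the overlay-as-plane approach in Three.JS, drawing UI text onto a canvas texture parented to the camera; the text and sizes are only illustrative.

```js
// Draw UI text onto a canvas and show it on a plane parented to the camera,
// so it stays in view in VR instead of an HTML overlay.
const canvas = document.createElement('canvas');
canvas.width = 1024;
canvas.height = 256;
const ctx = canvas.getContext('2d');
ctx.fillStyle = '#ffffff';
ctx.font = '64px sans-serif';
ctx.fillText('Gaze at a sun to change the time of day', 20, 150);

const hud = new THREE.Mesh(
  new THREE.PlaneGeometry(2, 0.5),
  new THREE.MeshBasicMaterial({ map: new THREE.CanvasTexture(canvas), transparent: true })
);
hud.position.set(0, -0.5, -2); // two meters in front of and slightly below eye level
camera.add(hud);               // the plane now follows the viewer's head
scene.add(camera);             // ensure the camera is part of the scene graph
```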
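And for point 5, a minimal sketch of positional audio using Three.JS's built-in audio objects, again with hypothetical file and mesh names.

```js
// Positional audio: the sound is attached to a mesh in the scene and is
// attenuated and panned according to the listener (camera) position.
const listener = new THREE.AudioListener();
camera.add(listener);

const voice = new THREE.PositionalAudio(listener);
new THREE.AudioLoader().load('character-voice.ogg', (buffer) => {
  voice.setBuffer(buffer);
  voice.setRefDistance(2); // distance at which attenuation begins
  voice.setLoop(true);
  voice.play();
});
characterMesh.add(voice);  // hypothetical mesh of an interviewed character
```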

Having said that, there are some things that would work well in VR but might not work so well on the desktop. But I would still argue that by taking the “VR First” strategy, the desktop experience would be damaged less than the VR experience would be by taking the “Desktop First” approach. I will talk further about VR-specific elements in the next section.

VR Storytelling conclusions

Collision detection
Rotating the world
Planes to camera
Scaling, method of work – 2 people
Sitting with the characters - empathy
Particles, immersion
Positional sound
Pause when not focusing
Credits/Sprite text
