Virtopsy Visualisation: Mixed Data Gradient Model For More Accurate Thin Bone Visualization in 3D Rendering


Forensic Imaging 33 (2023) 200529


Forensic Imaging
journal homepage: www.sciencedirect.com/journal/forensic-imaging

Virtopsy visualisation: Mixed data gradient model for more accurate thin
bone visualization in 3D rendering
Wolf Schweitzer *,a, Michael Thali a, Eloisa Aldomar b

a Institute of Legal Medicine, Universität Zürich, Zürich, Switzerland
b Department Design, Subject Area Knowledge Visualization, Zurich University of the Arts (ZHdK), Switzerland

Keywords: 3D reconstruction; Virtopsy imaging; Forensic pathology; Mixed data gradient model

Abstract

Conventional 3D rendering methods of computed tomography (CT) as well as post-mortem CT (PMCT) data sometimes do not seem to be authentic enough, especially for relatively thin bones. This can be a problem when imaging intact anatomy and considering fractures of the facial or temporal bones, where defects or holes may be visualized instead of thin bone structures. The technical aspect of this is that all currently used visualization methods (volume rendering, cinematic rendering and particle tracing, shaded surfaces and iso-surfaces) are defined by a CT-density threshold, whereas the user at least implicitly expects the bone to have a certain minimum CT density. However, some bone regions, typically those with relatively thin bone, do not meet these expectations, and lowering the threshold for visualization then results in all sorts of non-bone tissue being seen in the rendered images. To provide a more authentic PMCT visualization of bone, we identified a mixed data gradient model that improves the data from CT by increasing the CT density of low-density bone regions (but not of non-bone tissues). That delivers more satisfactory results for otherwise unmodified volume rendering. As pre-processing before 3D rendering, both hard and soft kernel data are used to obtain a 3D density map, a grayscale co-occurrence matrix is determined using a 3 × 3 × 3 kernel as the 3D gradient map, and these are then combined to obtain the final gradient model for mixed data.

1. Background

Virtopsy has become accepted as a concept, an umbrella term, and a common name, also to combine routine case investigation with modern imaging techniques, mainly based on post-mortem computed tomography (PMCT). Currently, the hottest application is imaging in forensic case triage, where a number of technical issues are still open [1–3]. From a pure application view, pre-autopsy scanning does not appear to be as sensitive to technical PMCT issues, as there, all findings can be validated against the gold standard autopsy. However, one specific technical problem that also affects particularly difficult-to-access autopsy regions (such as the midface or spine) is well known: the difficulty of 3D rendering methods to adequately represent bone, especially bone regions with a relatively low density, either because they are subject to the partial volume effect or because they are less calcified. Then, conventional threshold-based visualizations (Shaded Surface Display (SSD), Volume Rendering Technique (VRT) or Cinematic Rendering (CR) and similar approaches) all equally tend to result in 3D rendered images containing false defects or artificial holes. This is a hindrance when trying to obtain authentic images [4,5].

2. Materials and methods

Data acquisition: PMCT was obtained using standard protocols [6]. From that, conventional visualisations (volume rendering technique, VRT) (Fig. 1, left and middle columns; Fig. 2, left and middle right columns; Fig. 3, left column; Fig. 5, left column) were achieved using Syngo.via with standard presets (Siemens, Erlangen, Germany). In these, typical "problem zones" were visually identified (Figs. 1, 2, 3 and 5): thereby, the lower CT densities caused these areas of the bone to drop below the thresholds of the 3D rendering, generating artificial defects. Visualisations that are clearly improved with regard to these identified "problem zones" were performed using a pre-processing pipeline, as explained in the next paragraph, followed by VRT techniques in some free software (as described elsewhere [7]); the actual implementation there appears less relevant, as it may be achieved in any type of software that allows for computational processing of data.

Mixed data gradient model development: After literature study and

* Corresponding author.
E-mail addresses: wolf.schweitzer@irm.uzh.ch (W. Schweitzer), michael.thali@irm.uzh.ch (M. Thali).

https://doi.org/10.1016/j.fri.2022.200529
Received 16 September 2022; Received in revised form 4 November 2022; Accepted 11 November 2022
Available online 15 November 2022
2666-2256/© 2022 Elsevier Ltd. All rights reserved.

Fig. 1. Comparison of conventional and mixed data gradient volume rendering of facial bones with fractures. – Arrows: problem zones of conventional 3D renderings are defined by their density m residing below the visualisation threshold t, unlike the density b of the bone that is displayed (see Eq. (2)). Left column: Conventional commercial standard transfer function preset, volume rendering of PMCT data reconstructed with softer ("soft tissue") kernel. This preset also provides a discoloured appearance (ill-assigned colours such as blue or red) and the impression of a wet or polished surface (excessive specular reflection). Middle column: Improved transfer function, but otherwise standard volume rendering, now using PMCT data reconstructed with harder ("bone") kernel. While this transfer function improves colours and avoids the "wet" appearance, the harder kernel entails even larger artificial defects than the softer kernel. Right column: Improved visualisation using the mixed data gradient model. – The thin and in part flake-like bone fragments of the maxillary fracture are most accurately visualized in the new mixed data gradient model (right column), where artificial defects as present in the other methods (left and middle columns) do not occur; rather, thin structures along the right orbital wall or maxillary sinus now appear adequately visualized. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)


Fig. 2. Comparison of conventional and mixed data gradient volume rendering of facial bones without and with fractures. – Here, the conventional 3D renderings show artificial defects of the anterior wall of the maxillary sinuses (arrows). The case on the left exhibits no injuries when reconstructed correctly, whereas the case on the right shows how well the mixed data gradient model delineates the fracture lines of the anterior wall of the left maxillary sinus.

Fig. 3. Comparison of conventional and mixed data gradient volume rendering of cervical spine bones with sharp force (knife) injuries. Here, the conventional volume rendering (left image), mixed data gradient model (middle image) and photograph after autopsy removal and maceration (right image) are shown for comparison. Arrows: defects labelled; there were relatively shallow knife tip stab defects on the left dorsal side of the arch of the sixth cervical vertebra (C6), labelled a and b; there was a deep cut of the cranial aspect of the left transverse process of the seventh cervical vertebra (c) and, directly above it, a cut to the left transverse process of the sixth vertebra (e), which then had fallen off in the process of maceration. Also, a more laterally located cut to the left transverse process of the seventh vertebra (d) had caused the lateral end to fall off (see d in the rightmost image). Overall, the mixed data gradient model presents a more authentic visual representation than the standard software preset with the conventional 3D rendering.

approaching the subject matter through practical testing, we developed a pre-processing procedure as detailed in Eq. (1). In summary, both hard and soft kernel CT data were combined into a 3D density map (left column in Fig. 4, left column in Fig. 6), and from that, a gray level co-occurrence matrix was determined using a 3 × 3 × 3 kernel as the 3D gradient map (middle column in Fig. 4, right column in Fig. 6). These were combined to yield the final mixed data gradient model (right column in Fig. 4).

Detailed description of the pre-processing pipeline: to start, the PMCT data was read into the software (variables a, b in Eq. (1)), whereas the bone kernel derived data first was smoothed (Gaussian smoothing, to achieve minimal data conditioning [8]); the data basis c for possible bone related PMCT data then was maximized by combining both PMCT data sets. An empirically evaluated thin bone density was determined to be in the range of t ∼ [150, 500] Hounsfield units, which was used to selectively raise these densities to obtain d. We then determined variable e based on a gray level co-occurrence matrix for better gradient/edge delineation [4,9,10], which then was applied for bone-typical densities in obtaining the final result f. That was then visualized using conventional 3D rendering with a threshold value based transfer function.
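The pre-processing steps just described can be sketched in Python with NumPy/SciPy. This is a simplified reading rather than the authors' implementation: the raise factors and binary comparison flags follow Eq. (1), but the gray level co-occurrence step is approximated here by a normalised local density range over a 3 × 3 × 3 neighbourhood, which captures the same "large density change over a short distance" idea. Function and variable names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def mixed_data_gradient(bone_kernel, soft_kernel):
    """Pre-processing sketch following Eq. (1); inputs are 3D HU volumes."""
    # a, b: read both reconstructions; smooth the noisier bone-kernel data.
    a = gaussian_filter(bone_kernel.astype(np.float32), sigma=0.7)
    b = soft_kernel.astype(np.float32)
    # c: combine both data sets (comparisons act as binary flags, Eq. (1)).
    c = ((a > b) * a + (b > a) * b) / 2.0
    # d: selectively raise densities in the empirical thin-bone range.
    d = ((c > 150) * (c < 500) * c * 1.3 + c) / 2.0
    # e: stand-in for the 3x3x3 GLCM gradient map - normalised local range,
    # large wherever density changes sharply over a short distance.
    e = maximum_filter(d, size=3) - minimum_filter(d, size=3)
    e = e / (np.abs(e).max() + 1e-6)
    # f: boost bone-typical densities in proportion to the gradient value.
    f = (d * (d > 50) * (e + 1) * 1.1 + d) / 2.0
    return f
```

The resulting volume f can then be passed, unchanged, to any threshold-based volume rendering.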


Fig. 4. Description of technique: both hard and soft kernel CT data are combined into a CT-density map (left column), and from that, a spatial CT-gradient map is determined on the basis of a gray-level co-occurrence matrix (GLCM) (middle column). In the CT-density map, the well-known and ubiquitously used axial image data shows CT-densities. These tend to be higher for bone, but both fracture lines and thin bones may contain CT-densities that overlap with soft tissue. So any volume rendering will use a threshold so that soft tissue is not displayed, which leads to the artificial appearance of defects in volume renderings of thin bone regions. Our solution consists in adding a classifier that identifies bone; on a data level, bone may be differentiated from soft tissue by having a sharp rise in contrast over a short distance within the CT-density map. The CT-gradient map (middle column) then is used as a classifier that registers all larger changes of CT density within a relatively small region of observation. This is plausibly indicated by the contours of the skin but also of airway structures such as the trachea. Both are then combined (see Eq. (1)) to yield the final mixed data gradient model (right column). – Top row of images: axial images; bottom row of images: volume rendering.
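The "sharp rise in contrast over a short distance" idea can be demonstrated on a minimal, purely hypothetical 1D density profile: a thin bone plate whose HU values stay below a typical bone threshold still produces far steeper density gradients than the surrounding homogeneous soft tissue, so a gradient-based classifier catches it where a plain density threshold does not.

```python
import numpy as np

# Hypothetical 1D CT-density profile (HU): soft tissue (approx. 40-60 HU),
# a thin low-density bone plate (300-320 HU), then air (approx. -1000 HU).
profile = np.array([40.0, 45.0, 50.0, 300.0, 320.0, 60.0, -900.0, -1000.0])

# First spatial derivative (central differences) as a gradient classifier.
gradient = np.abs(np.gradient(profile))

# The plate never reaches a typical bone threshold of 500 HU, so a plain
# threshold-based rendering would drop it (an artificial defect) ...
assert profile.max() < 500
# ... yet its edges are flagged by the gradient, while the homogeneous
# soft tissue region is not.
print(gradient.round(1))
```

In 3D, the same reasoning applies per voxel neighbourhood, which is what the GLCM-based gradient map of the middle column provides.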

a = [gs_{σ=0.7}(d_b)];  b = [d_s]

c = [(a > b)·a + (b > a)·b] / 2

d = [(c > 150)·(c < 500)·c·1.3 + c] / 2        (1)

e = glcm_[bin=16; 3×3×3 kernel](d)

f = [d·(d > 50)·(e + 1)·1.1 + d] / 2

Equation legend: gs = Gauss smoothing; d_b = data (bone kernel); d_s = data (soft tissue kernel); glcm = gray level co-occurrence matrix [11]; comparisons are evaluated as numerically binary flags (0, 1).

Applying Eq. (1) will result in elevated CT-density values for both thin bone structures and fracture lines (or similar defects). That means that no fracture lines or defects are missed, while effectively avoiding the thin bones' low density related defect artefacts. It is the combination of volume data and transfer function employed inside the volume rendering that will define how differences in CT-densities are visualized.

Detailed description of just how it is possible to methodically raise the problematic defect areas of thin bones without obliterating fracture display: this may be easier to detail when including visualisation related aspects (see Figs. 4–6). See the following equation (Eq. (2)); thereby, we start with normal CT data, whereas we find (see line [1], Eq. (2)) CT-densities of what we usually consider as bone (b) by and large to be greater than those of fracture lines (f), and also larger than some of the minimal densities that thin bone regions exhibit (m). Please note that here, and for the only purpose of explanation, we treat bone (b) and thin bone of minimal density (m) as if they were separate entities. Numerically, b and m range on a very gradual continuum, as can be seen by studying Fig. 6 (bottom left image): there, thin bone m is not at all categorically different from bone b. The separate bone regions described by m however constitute the artificial defects (such as in, e.g., Fig. 1), and these are brought about by the numeric relation of m (and s) to the visualisation threshold t. This technical difficulty is not apparent in the axial images (as in Fig. 6), but it arises in volume renderings of bone (see line [2], Eq. (2)). Minimal bone densities (m) have similar densities to some soft tissues or non-bone tissues (s), particularly CT-dense tissues such as tendons. When entering these values into a volume rendering v that invariably tries to exclude soft tissues (s) by implementing a threshold t with t > s, thin bone areas with minimal bone density m will also end up not getting visualised, as long as m ∼ s (see line [1], Eq. (2)). That causes volume rendering v with this data to fail at actually displaying minimal density thin bones m; the result may be comprehensively called "problem 1" (P1). Solving the problematic loss of m to a threshold t in visualisation v can be approached by using a function g that raises values of b, m and f, so fractures f are obviously still visualized; their visualisation can be achieved with g(b) > g(f) while simultaneously realizing g(m) ≫ g(s). That is the sole task of the function g (as detailed in Eq. (1)); it is defined so it results in a different relative value distribution as shown in line [4] (Eq. (2)). Volume rendering v_t then yields a satisfying result methodically (see line [5]); the outcome P2 ("problem 2") can then be compared with P1 for qualitative or quantitative aspects [12].
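To make the role of g tangible, here is a small numeric illustration. All Hounsfield-unit values and the threshold are purely illustrative (not measured data), and the toy g below only mimics the raise factors of Eq. (1), combined with a gradient flag that is true for bone-like structures and false for adjacent soft tissue.

```python
# Illustrative values (HU): dense bone b, fracture line f,
# thin bone of minimal density m, and nearby soft tissue s.
b, f, m, s = 900.0, 120.0, 200.0, 180.0
t = 250.0  # rendering threshold: without g, both m and s fall below t (P1)

def g(x, gradient_flag):
    # Raise densities in the thin-bone range (150-500 HU), then apply the
    # bone-typical boost only where the gradient classifier fires.
    raised = x * 1.3 if 150 < x < 500 else x
    return raised * 1.1 if gradient_flag else x

# Thin bone m sits on a steep local gradient; the soft tissue s does not.
assert g(b, True) > t    # dense bone stays visible
assert g(m, True) > t    # thin bone is lifted above the threshold (P2)
assert g(s, False) < t   # soft tissue remains excluded
assert g(f, True) < t    # fracture lines remain below t, i.e. delineated
```

This reproduces the relative ordering of line [4] in Eq. (2): g(m) rises above the threshold while g(s) stays below it, even though m and s were numerically similar to begin with.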


Fig. 5. Performance of new mixed data gradient model for fracture hairlines: comparing conventional 3D rendering (left column) with our new mixed data gradient model (right column), we find that wider fracture lines (B, B'; E, E') are both shown with excellent delineation - whereas the conventional 3D rendering not only exhibits problematic artificial defects (thin bone area, A/A'), but also does not appear to be as efficient in visualising thinner hairline fractures as the new mixed data gradient model (compare fracture line between C/C' and D/D'), and compare the spot F/F' where the conventional rendering almost smoothes over the fracture line.

[1]  b ≫ f;  b ≳ m;  m ∼ s;
[2]  v : {b > t; f ≲ t; m < t; s < t}
[3]  v[b, f, m, s]_t ⟶ P1        (2)
[4]  g : g(b) ∼ b; g(f) ∼ f; g(s) ≪ g(m);
     v : g(b) > t; g(f) ≲ t; g(m) > t; g(s) < t;
[5]  v[g(b), g(f), g(m), g(s)]_t ⟶ P2

From that, we ask just how g achieves raising g(m) above g(s) when m ∼ s (Eq. (2)). That is where the graphical details of Figs. 4–6 (see captions there) help to explain this; in essence, a first spatial derivative of CT-density, a gradient, is obtained, which then is used to re-classify tissue regions based on both density and gradient value (see Eq. (1)).

Technical aspect: As it appears, comparing P1 with the outcome P2 (see Eq. (2)), while the visualisation issues appeared to be solved, the computer working memory required for this exceeded around six times that of a single original data set, and solving Eq. (1) proved to be computationally intensive (also see p. 105 in Popper and Schilpp [12]).

3. Results

Whereas the method (as described above) is the actual result, its applied use cases for visualization arguably constitute a more tangible practical result.

The artificial defects that conventional 3D renderings exhibit have been stated to be cumbersome, and to pose a danger of misinforming or misleading the reader into believing there to be actual injuries or pathology. The use of the newly developed mixed data gradient model so far appears to effectively improve on this problem.

Relevant use cases are shown in Figs. 1, 2 and 5 (skull), as well as 3 (cervical spine), where the artefacts described above can be seen in conventional 3D renderings and their absence can be observed in the new mixed data gradient models, both for intact and fractured bone, while maintaining if not improving the fracture details.

4. Discussion

We developed this new mixed data gradient model as a pre-processing step, that is, to precede conventional 3D rendering of CT or PMCT data, in order to better approximate authentic bone reconstruction. The artificial defects that exist in conventional 3D renderings do not appear to be present any longer in the mixed data gradient 3D renderings, whereas the accuracies of fine fracture fragment and fracture line details appear methodically preserved. This technical visualization issue will likely remain relevant in the near future, as any CT data resolution remains finite, and thus limited, even in currently forensically relevant ways [13–15].

While an increase of both CT dose and spatial resolution may be able to at least partly improve the problem of artificial defects in 3D reconstructions of low-density bones, anatomical resolution happens on a fractal scale, so there always will be a thinner bone. Future challenges may lie in more efficiently detecting and computationally removing reconstruction artifacts, and in obtaining similar if not better results using deep learning and artificial intelligence methods.

Declaration of Competing Interest

Authors declare that they have no conflict of interest.


Fig. 6. Description of technique: When we compare the CT-density map (left column) with our gradient map (right column), we can see that the gradient map classifies a local region, containing bone (b), but also thin bone with minimal density (m) as well as thin fracture lines (f); to understand how the resulting mixed data gradient model will improve visualisation of m without error cost to f, it is relevant to understand that the adjacent soft tissue (whose numeric similarity to m in other regions of the scan causes the visualisation to show local drop-outs) is not classified as having a similarly steep density gradient. When we outline the regions that are classified with a steep gradient with a dashed red outline and place that outline over the CT-density map, we can see how an increase in all values contained therein will not change the relative differences between b, m, and f. For fracture lines not contained within the classifier region, the difference between the mixed data gradient values and the original fracture densities will be even larger in the resulting mixed data gradient model (see the other figures). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Acknowledgements

The authors express their gratitude to Emma Louise Kessler, MD for her generous donation to the Zurich Institute of Forensic Medicine, University of Zurich, Switzerland. The data is published in accordance with filing 15-0686 (Ethics Committee of the Canton of Zurich).

References

[1] M.J. Thali, K. Yen, W. Schweitzer, P. Vock, C. Boesch, C. Ozdoba, G. Schroth, M. Ith, M. Sonnenschein, T. Doernhoefer, et al., Virtopsy, a new imaging horizon in forensic pathology: virtual autopsy by postmortem multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) - a feasibility study, J. Forensic Sci. 48 (2) (2003) 386–403.
[2] V. Chatzaraki, J. Heimer, M. Thali, A. Dally, W. Schweitzer, Role of PMCT as a triage tool between external inspection and full autopsy - case series and review, J. Forensic Radiol. Imaging 15 (2018) 26–38.
[3] E. Helmrich, L. Decker, N. Adolphi, Y. Makino, Postmortem CT lung findings in decedents with COVID-19: a review of 14 decedents and potential triage implications, Forensic Imaging 23 (2020) 200419.
[4] U. Tiede, K.H. Höhne, M. Bomans, A. Pommert, M. Riemer, G. Wiebecke, Investigation of medical 3D-rendering algorithms, IEEE Comput. Graph. Appl. 10 (2) (1990) 41–53.
[5] T. Rodt, S.O. Bartling, J.E. Zajaczek, M.A. Vafa, T. Kapapa, O. Majdani, J.K. Krauss, M. Zumkeller, H. Matthies, H. Becker, et al., Evaluation of surface and volume rendering in 3D-CT of facial fractures, Dentomaxillofacial Radiol. 35 (4) (2006) 227–231.
[6] P.M. Flach, D. Gascho, W. Schweitzer, T.D. Ruder, N. Berger, S.G. Ross, M.J. Thali, G. Ampanozi, Imaging in forensic radiology: an illustrated guide for postmortem computed tomography technique and protocols, Forensic Sci. Med. Pathol. 10 (4) (2014) 583–606.
[7] L.C. Ebert, S. Franckenberg, T. Sieberth, W. Schweitzer, M. Thali, J. Ford, S. Decker, A review of visualization techniques of post-mortem computed tomography data for forensic death investigations, Int. J. Legal Med. 135 (5) (2021) 1855–1867.
[8] E. Wah, Y. Mei, B.W. Wah, Portfolio optimization through data conditioning and aggregation, in: 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence, IEEE, 2011, pp. 253–260.
[9] P.J. Costianes, J.B. Plock, Gray-level co-occurrence matrices as features in edge enhanced images, in: 2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR), IEEE, 2010, pp. 1–6.
[10] M. Zhao, X. Zhang, Z. Shi, P. Li, B. Li, Restoration of motion blurred images based on rich edge region extraction using a gray-level co-occurrence matrix, IEEE Access 6 (2018) 15532–15540.
[11] L. Moya, H. Zakeri, F. Yamazaki, W. Liu, E. Mas, S. Koshimura, 3D gray level co-occurrence matrix and its application to identifying collapsed buildings, ISPRS J. Photogramm. Remote Sens. 149 (2019) 14–28.
[12] K.R. Popper, P.A. Schilpp, The Philosophy of Karl Popper, in: Library of Living Philosophers, Open Court, 1974.
[13] S.A. Bolliger, U. Preiss, N. Gläser, M.J. Thali, S. Ross, Radiological stab wound channel depiction with instillation of contrast medium, Legal Med. 12 (1) (2010) 39–41.
[14] R.M. Carew, D. Errickson, Imaging in forensic science: five years on, J. Forensic Radiol. Imaging 16 (2019) 24–33.
[15] W. Schweitzer, M.J. Thali, G. Ampanozi, Planned complex suicide combining pistol head shot and train suicide and virtopsy examination, Forensic Imaging 28 (2022) 200485.
