Expert Systems with Applications 36 (2009) 4745–4752
Raquel Barco, Pedro Lázaro, Volker Wille, L. Díez, Sagar Patel

Abstract

In the near future, several radio access technologies will coexist in Beyond 3G (B3G) mobile networks, and they will eventually be transformed into one seamless global communication infrastructure. Self-managing systems (i.e. those that self-configure, self-protect, self-heal and self-optimize) are the solution to tackle the high complexity inherent to these networks. This paper proposes a probabilistic model for self-healing in the radio access network (RAN) of wireless systems. The main difficulty in model construction is that, contrary to other application domains, in wireless networks there are no databases of previously classified cases from which to learn the model parameters. For this reason, a knowledge acquisition procedure is proposed to build the model from the knowledge of troubleshooting experts. In order to support the theoretical concepts, a model has been built and tested in a live network, proving the feasibility of the proposed system. Additionally, a knowledge-based model has been compared to a data-based model, showing the benefits of the former when the number of training cases is scarce.

Keywords: Automated management; Expert systems; Network operation; Diagnosis; Mobile communications; Probabilistic reasoning; Wireless networks; Bayesian networks; Self-healing; Fault management; Self-optimizing networks

© 2008 Elsevier Ltd. All rights reserved. doi:10.1016/j.eswa.2008.06.042
4746 R. Barco et al. / Expert Systems with Applications 36 (2009) 4745–4752
Currently, mobile communication networks have no historical collections of diagnosed cases. Furthermore, diagnosis of the RAN of cellular networks is not documented in the existing literature. Thus, the experience of troubleshooting experts is, in most cases, the only source of information from which to build a diagnosis model. For these reasons, this paper proposes a knowledge acquisition procedure to build the probabilistic model from the knowledge of experts in troubleshooting the RAN. The main advantage of the procedure is that the model can be easily built by troubleshooting experts, without any knowledge of probabilistic models. As a consequence, domain experts can transfer their expertise using a language that they understand.

In order to support the theoretical concepts, a model has been built and tested in a live network, proving the feasibility of the proposed system.

This paper is organized as follows. In Section 2, previous work related to self-healing in the RAN of wireless networks is summarized. In Section 3, some concepts related to fault diagnosis are defined and the systems for self-healing and for automatic diagnosis are presented. In Section 4, the knowledge acquisition procedure is described. In Section 5, the proposed system is evaluated and a data-based model is compared with a knowledge-based model. Finally, conclusions are presented in Section 6.

2. Related work

First steps in self-healing in the RAN of wireless networks have focused on performance visualization (Lehtimäki & Raivio, 2005a) and fault detection (Laiho, Raivio, Lehtimäki, Hätönen, & Simula, 2005; Lehtimäki & Raivio, 2005b). However, very few references can be found on diagnosis in the RAN. Automatic diagnosis has been extensively studied in other fields, such as diagnosis of diseases in medicine (Ng & Ong, 2000), troubleshooting of printer failures (Heckerman, Breese, & Rommelse, 1995) and diagnosis in the core of communication networks (Steinder & Sethi, 2004). Nevertheless, diagnosis in the RAN of cellular networks has some distinctive characteristics, such as the continuous nature of KPIs and the existence of logical faults not related to a physical piece of equipment (e.g. a wrong configuration). These characteristics make the automatic diagnosis techniques used in other application domains not directly applicable to cellular networks.

Research on automating diagnosis in the RAN of cellular networks has traditionally focused on alarm correlation (Wietgrefe, 2002). Alarm correlation consists in the conceptual interpretation of multiple alarms, so that a single new meaning is assigned to the original alarms. Although alarm correlation can be considered a first step in the diagnosis of faults, it does not provide conclusive information to identify the cause of problems, especially if the possible causes are not only faults in pieces of equipment. Other categories of faults, such as interference or wrong configuration, are difficult to identify if KPIs are not considered.

In Barco, Wille, and Díez (2005), a diagnosis system for the RAN of wireless networks based on probabilistic methods was proposed, in which KPIs were modelled as continuous variables. The problem with this approach is that the results were very sensitive to an inaccurate definition of the model parameters or to a scarce number of training examples. In other application domains, discrete models have proven to be less sensitive to model parameters than continuous ones (Pearl, 1988). For this reason, a discrete model has been adopted in the following sections.

3. Problem formulation

In this section, some terminology used in the diagnosis of wireless networks is first described. Then, a system for automatic troubleshooting in the RAN of wireless networks is presented. Subsequently, a probabilistic diagnosis system for automatic diagnosis is proposed, which comprises a method and a model.

3.1. Definitions

The first step in troubleshooting is detection, that is, the identification of the cells with problems. A problem is a situation occurring in a cell that has a degrading impact on the service. Every operator uses a different method to identify the problematic cells, which can be based on different performance indicators, e.g. dropped calls, access failures, congestion, etc. The most severe problem for mobile network operators is cells experiencing a high number of dropped calls, because a dropped call has a very negative impact on the service offered to the end-user. In that sense, the dropped call rate (DCR) is a good indicator of the quality of the cell.

Once the cells with problems are isolated, a diagnosis of the cause of the problems should be carried out for each problematic cell. A cause or fault is the defective behavior of some logical or physical component in the cell that provokes failures and generates a high DCR, e.g. a bad parameter value, a hardware fault, etc. A symptom is a KPI or an alarm whose observed value might be used to identify a fault, e.g. the number of handovers due to interference. The aim of the diagnosis system is to identify the cause of a problem based on the values of some symptoms. Details about the most common causes and symptoms in wireless networks can be found in Barco et al. (2005) for GSM/GPRS and in Khanafer et al. (in press) for UMTS.

3.2. Automatic troubleshooting system

Fig. 1 shows the architecture of an automatic troubleshooting system (TSS) for the RAN of wireless networks. The fault detection subsystem (FDS) provides the automatic diagnosis subsystem (ADS) with the list of faulty cells to be diagnosed. The ADS requires a diagnosis model on which its reasoning mechanisms are based. The subsystem named model definition is in charge of building the diagnosis model to be used by the ADS. Diagnosis models can be built based either on the expertise of human troubleshooters or on statistics from the network. Those statistics are normally saved in the network management system (NMS). The inputs to the ADS are the symptoms, that is, the alarms and KPIs for each of the faulty cells. The NMS contains historical databases with the values of all those inputs. The output of the ADS is a diagnosis of the fault that is causing the problems in each malfunctioning cell. In addition, the ADS proposes a list of actions, ranked by their efficiency (efficiency = probability of the action solving the problem / cost of the action), to be executed sequentially until the problem is solved. These actions may be as simple as changing a configuration parameter from a remote terminal, or may involve sending personnel to a site to replace a faulty piece of equipment. The TSS may even execute software-related repair actions, although operators normally prefer that the TSS only propose the actions, with the final decision handled by a human expert (the TSS in this case acts as a so-called decision support system). Finally, the ADS is also intended to generate a report about the diagnosed cause and the steps carried out in order to recover from the fault (which may be integrated with the trouble ticket systems present in most communication networks).

The TSS can work independently from the NMS, but most of its benefits are achieved when it is an integrated part of it. This integrated solution provides direct access to the information required in fault analysis, as well as access to the operator's fault management system. An integrated solution is also beneficial in the case of multi-vendor networks and of multi-system networks (GSM, UMTS, WLAN). Hence, all relevant troubleshooting cases can be
automatically directed to the TSS and, if it finds the solution, the case is cleared, reported and filed. If the problem is not solved by the expert system, it can be redirected to specialists for further analysis, and the final conclusions can be incorporated into the knowledge of the expert system.

This paper focuses on the ADS, which is the most complex subsystem and, up to now, has received little attention in the existing literature.

3.3. Bayesian modelling

Two components of the ADS are distinguished: the diagnosis model and the inference method. The diagnosis model represents the knowledge on how the identification of the fault cause is carried out. The elements of the model are causes and symptoms. The inference method is the algorithm that identifies the cause of the problems based on the values of the symptoms.

Defining the diagnosis model comprises two phases. Firstly, the qualitative model should be identified, that is, the causes and symptoms for diagnosis in a given technology (GSM, GPRS, UMTS, multi-technology networks, etc.). Causes can be modelled as discrete random variables with two states: {absent/present}. Two types of symptoms are considered: alarms and KPIs. Alarms can also be modelled as discrete random variables with two states: {off/on}. KPIs are inherently continuous, but they can be modelled either as continuous or as discrete random variables. In the latter case, the discretized KPI may have any number of discrete states, each representing a subset of the continuous range of the KPI, e.g. {normal/high/very high}.

Secondly, the quantitative model should be specified, that is, the parameters of the model. In a discrete model, the parameters are thresholds for the discretized KPIs and probabilities.

Once the qualitative and quantitative models have been defined, the inference method consists in calculating the probability of each possible cause. Given the value of the symptoms {S_1, …, S_M}, the probability of cause c_i can be obtained as

P(c_i | E) = [ P(c_i) ∏_{j=1}^{M} P(S_j | c_i) ] / [ Σ_{k=1}^{K} P(c_k) ∏_{j=1}^{M} P(S_j | c_k) ]    (1)

where P(c_i) are the prior probabilities of the causes and P(S_j | c_i) are the probabilities of the symptoms given the causes.

Eq. (1) is obtained by applying Bayes' rule under the following assumptions: (i) only one cause can be present at a time; (ii) the symptoms are independent given the causes. These assumptions are realistic in the RAN of wireless networks but, even if they were not, this model has proven to give very good results (Rish, 2001).

Let a case be the set composed of the values of the symptoms in a faulty cell (e.g. average values over a day) together with the actual cause of the problems. Cases may be used either to train the system, i.e. to calculate the parameters of the model, or to test the system, i.e. to calculate the diagnosis accuracy (percentage of cases in a test set correctly classified).

3.4. Model parameters

The parameters of the model are thresholds and probabilities. On the one hand, the thresholds are interval boundaries for the discretization of the continuous symptoms. That is, t_{j,k} is the kth threshold for symptom S_j, which partitions it into states s_{j,k} and s_{j,k+1}. On the other hand, according to Eq. (1), the probabilities are the following:

- Prior probabilities of causes: P(c_i), i = 1, …, K.
- Conditional probabilities of symptoms given causes: P(S_j = s_{j,k} | c_i), i = 1, …, K, j = 1, …, M, is the probability of symptom S_j being in state s_{j,k} given that the cause is c_i.
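As a concrete illustration, the inference step of Eq. (1) can be sketched in a few lines of Python. The causes, symptoms and probability values below are purely illustrative, not taken from the paper's model:

```python
# Sketch of Eq. (1): a naive Bayes posterior over mutually exclusive
# causes, given observed (discretized) symptom states.
# All names and numbers below are hypothetical examples.

priors = {"UL interference": 0.2, "Lack of coverage": 0.3, "Other": 0.5}

# P(symptom state | cause): one table per symptom, indexed by cause.
cond = {
    "DCR": {
        "UL interference": {"normal": 0.1, "high": 0.9},
        "Lack of coverage": {"normal": 0.3, "high": 0.7},
        "Other": {"normal": 0.5, "high": 0.5},
    },
    "UL_quality_HOs": {
        "UL interference": {"low": 0.15, "high": 0.85},
        "Lack of coverage": {"low": 0.7, "high": 0.3},
        "Other": {"low": 0.5, "high": 0.5},
    },
}

def diagnose(evidence):
    """Return P(cause | evidence) for evidence = {symptom: observed state}."""
    scores = {}
    for cause, p in priors.items():
        # Numerator of Eq. (1): prior times product of conditionals.
        for symptom, state in evidence.items():
            p *= cond[symptom][cause][state]
        scores[cause] = p
    total = sum(scores.values())  # denominator of Eq. (1)
    return {c: s / total for c, s in scores.items()}

posterior = diagnose({"DCR": "high", "UL_quality_HOs": "high"})
```

With both symptoms observed as "high", the posterior concentrates on the cause whose conditionals best match the evidence; the normalization in the last step is exactly the denominator of Eq. (1).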
4. Knowledge acquisition

Building the probabilistic model from the knowledge of experts in the application domain, that is, knowledge acquisition, involves two phases. Firstly, knowledge gathering, that is, obtaining the knowledge from the experts. Secondly, model construction, that is, defining the model based on the information previously provided by the experts.

4.1. Knowledge gathering

Knowledge gathering is composed of the phases presented in Fig. 2, which are explained below. Table 1 summarizes the qualitative information that the expert should provide, whereas the quantitative information can be found in Table 2:

1. Select fault category. Fault categories are the diverse problems that the RAN may suffer, such as "High DCR" or "Congestion". A different model is built for each fault category.
2. Define variables. There should be a database of causes and symptoms. The expert can either select a variable from the database or define a new one, which should then be incorporated into the database. Firstly, the expert specifies the possible causes of the fault category, that is, the causes of the problem in the network for which the diagnosis model is being built (e.g. "High DCR"), {C_1, …, C_K}. It is recommended to include a cause called "Other causes", in order to cover any possible cause of the problem not explicitly included in the defined causes. Secondly, the expert is asked to enumerate the symptoms that may help to identify the previously defined causes, {S_1, …, S_M}. The states, s_{i,j}, of each symptom, S_i, should also be specified.
3. Define relations. In this phase, the user should define the causes, C_r^i = {C^i_{r1}, …, C^i_{rR_i}}, related to each symptom S_i. The term "related" qualifies those variables that have a strong direct inter-dependency. For example, the cause "Lack of coverage" is related to the symptom "Percentage of UL samples with level < −100 dBm", whereas the cause "UL interference" is not related to that symptom. The explanation is that a lack of coverage reduces the received signal level in comparison to the average received signal level in a network without problems, whereas when the cause is interference the received signal level is not significantly decreased in comparison to the level in a cell without problems. The causes not related to symptom S_i are denoted as C_n^i = {C^i_{n1}, …, C^i_{n(K−R_i)}}.
4. Specify thresholds. For each continuous symptom, S_i, the interval limits (i.e. thresholds), t_{i,j}, between each defined interval should be requested from the user.
5. Specify probabilities. Verbal probability expressions are often suggested as a method of eliciting probabilistic information (Renooij & Witteman, 1999). The number of verbal expressions should be kept small in order to avoid misinterpretations. In addition, it is advisable to use a graphical scale with numbers on one side and words on the other. In our experiments with cellular network operators, experts were asked to choose one out of five levels of probability: "Almost certain", "Likely", "Fifty-fifty", "Improbable" and "Unlikely". Those levels are mapped to the probabilities 0.85, 0.7, 0.5, 0.3 and 0.1, respectively. The procedure to define the probabilities is as follows. Firstly, the expert is asked for the prior probability of each of the possible causes of the problem, P_{C_i}. As causes have only two states (absent/present), only the probability of the cause being present is requested. Secondly, probabilities for symptoms are requested. For a symptom S_i, the requested probabilities, P_{S_{i,j}|C_k}, should be those of each state of the symptom given that each of the related causes, C_k ∈ C_r^i, is present and the other causes are absent. In addition, the probability of each state of the symptom given that none of the related causes is present, P_{S_{i,j}|C_0}, should be defined. In all cases, the expert should take into account that the sum of the probabilities over all the states of a given symptom should be 1. The expert should be warned if this is not the case.
6. Link symptoms to database. The last step is linking the symptoms in the model to the data in the NMS. Thus, symptoms should be related to a parameter (performance indicator, counter, etc.) available in the NMS, or to a combination of parameters.

[Fig. 2. Phases in knowledge acquisition: 1. Select fault category; 2. Define variables; 3. Define relations; 4. Specify thresholds; 5. Specify probabilities; 6. Link to database (NMS).]

Table 1
Qualitative model defined by the expert

Parameters | Range | Description | Example
F_i | i = 1, …, A | Fault categories (A: number of fault categories) | F_1 = High DCR
C_i | i = 1, …, K | Causes (K: number of causes) | C_1 = UL interf.
S_i | i = 1, …, M | Symptoms (M: number of symptoms) | S_30 = % UL interf. HOs
s_{i,j} | i = 1, …, M; j = 1, …, Q_i | Symptom states (Q_i: number of states of symptom S_i) | s_{30,1} = low
C_r^i = {C^i_{r1}, …, C^i_{rR_i}} | i = 1, …, M | Set of causes related to symptom S_i (R_i: number of causes related to S_i) | C_r^1 = {C_3, C_4}

Table 2
Quantitative model defined by the expert

Parameters | Range | Description | No. of parameters
t_{i,j} | i = 1, …, M; j = 1, …, T_i | Threshold j for symptom S_i (T_i: number of thresholds of symptom S_i) | Σ_{i=1}^{M} T_i
P_{C_i} | i = 1, …, K | Probability of cause C_i being present | K
P_{S_{i,j}|C_k}, ∀C_k ∈ C_r^i | i = 1, …, M; j = 1, …, Q_i | Probability of symptom S_i = s_{i,j} given cause C_k present and C_h absent ∀h ≠ k | Σ_{i=1}^{M} R_i·Q_i
P_{S_{i,j}|C_0} | i = 1, …, M; j = 1, …, Q_i | Probability of symptom S_i = s_{i,j} given C_k absent ∀C_k ∈ C_r^i | Σ_{i=1}^{M} Q_i
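The elicitation steps above can be sketched in code. The following Python fragment is illustrative only: the symptom, causes and verbal answers are hypothetical, and it silently renormalizes a state distribution that does not sum to 1 (the paper instead warns the expert):

```python
# Sketch of knowledge-gathering steps 3 and 5: map verbal probability
# labels to numbers, then fill one symptom's conditional table, reusing
# the elicited P(S | C0) distribution for every cause NOT related to
# the symptom. All names and answers are hypothetical.

VERBAL = {"Almost certain": 0.85, "Likely": 0.7, "Fifty-fifty": 0.5,
          "Improbable": 0.3, "Unlikely": 0.1}

causes = ["UL interference", "Lack of coverage", "TRX fault"]

# Expert input for one symptom: its related causes and verbal answers.
related = {"Lack of coverage"}               # C_r for this symptom
elicited = {                                 # symptom states: low / high
    "Lack of coverage": {"low": "Unlikely", "high": "Almost certain"},
    "C0": {"low": "Likely", "high": "Improbable"},  # no related cause present
}

def build_cpt(causes, related, elicited):
    """Build P(symptom state | cause) rows for every cause in the model."""
    cpt = {}
    for c in causes:
        key = c if c in related else "C0"    # unrelated causes share P(S | C0)
        row = {state: VERBAL[label] for state, label in elicited[key].items()}
        total = sum(row.values())            # states must sum to 1
        cpt[c] = {s: p / total for s, p in row.items()}
    return cpt

cpt = build_cpt(causes, related, elicited)
```

Here "Lack of coverage" gets its own elicited distribution, while "UL interference" and "TRX fault" fall back to the C0 row, mirroring the related/not-related distinction of step 3.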
4.2. Model construction

According to Eq. (1), the probabilities required to build the model are the prior probabilities of the causes, P(c_i), and the probabilities of the symptoms given the causes, P(S_j | c_i). Therefore, the data provided by the experts, which are those in Tables 1 and 2, should be converted into the probabilities required in Eq. (1).

For the causes, the probabilities elicited from the experts, P_{C_i}, are directly the probabilities of the causes, P(c_i). Taking into account that the model assumes that the causes are mutually exclusive, the sum of the probabilities of the causes should be 1. There are two ways of dealing with that constraint: either the expert is responsible for checking that the given probabilities are right, or the elicited probabilities are allowed to violate the constraint and are then automatically modified. The latter is done according to the following procedure:

- If the sum of the probabilities is higher than 1, each probability is normalized by the sum of all the probabilities of the causes.
- If the sum of the probabilities of the causes is lower than 1, a cause c_{K+1}, named "Others", is added. That cause stands for any other cause of the problem not considered by the expert. Its probability is 1 minus the sum of the probabilities of the elicited causes.

[Table: Causes and symptoms of the diagnosis model]

Causes | Symptoms
Interference in uplink | Dropped call rate (DCR), %
Interference in downlink | % Uplink quality handovers
Bad target cell coverage | % Downlink quality handovers
Combiner fault | % Samples with downlink RXQUAL out of band 0
TRX fault | % Samples with uplink RXLEV < 10
Bad coverage (borders) | % A interface component of DCR
Bad coverage (holes) | % Samples on uplink idle channels out of band 1
A-bis interface fault | A-bis alarms

The outcome of each analyzed case was classified as follows:

Unknown diagnosis
- Ongoing: the analysis was started by the experts but not completed, i.e. the correct cause was not yet known.
- Inconclusive: the real cause of the problem was unclear and the experts were not able to sort it out. Therefore, it was not possible to know whether the analysis done by the ADS was correct.

Known diagnosis
- Correct: the analysis done by the ADS gave the right cause, which was verified and confirmed by the experts.
- Incorrect: the analysis done by the ADS provided the wrong cause, when the experts knew the correct answer.
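The prior-adjustment procedure described above (normalize if the elicited priors sum to more than 1, add an "Others" cause if they sum to less) can be sketched as follows; the cause names and values are hypothetical:

```python
# Sketch of the automatic adjustment of elicited cause priors:
# - sum > 1: normalize each prior by the total;
# - sum < 1: add an "Others" cause absorbing the remaining mass.
# Example cause names and probabilities are hypothetical.

def adjust_priors(elicited):
    """Return priors that sum to 1, per the model-construction procedure."""
    total = sum(elicited.values())
    if total > 1:
        return {c: p / total for c, p in elicited.items()}
    if total < 1:
        priors = dict(elicited)
        priors["Others"] = 1 - total  # remaining probability mass
        return priors
    return dict(elicited)

priors = adjust_priors({"UL interference": 0.3, "TRX fault": 0.2,
                        "Lack of coverage": 0.1})
```

In this example the elicited priors sum to 0.6, so the added "Others" cause receives the remaining 0.4 of probability mass.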
[Figure: sensitivity analysis — average diagnosis accuracy (%) as a function of the deviation applied to the model parameters, with curves for the thresholds, the prior probabilities, the conditional probabilities, and all parameters.]

[Fig. 7. Average diagnosis accuracy (%) of the knowledge-based and data-based models for training set sizes N = 50 and N = 1000.]

The sensitivity analysis proves the importance of setting accurate parameters for the diagnosis model.

5.3. Comparison with a data-based model

The aim of the following experiments was to compare the results obtained with a knowledge-based system against those obtained with a data-based system. For that purpose, a three-month trial was carried out in a GSM/GPRS network. The network was composed of about 25,000 cells and 10 NMSs. Every day, some problematic cells were identified and their faults were manually diagnosed by troubleshooting experts. The average values of the main KPIs on the day the fault occurred, together with the diagnosed cause, were saved in a database of classified cases. From those data, 1000 cases were used as the training set and 1000 cases as the test set.

Two data-based models were built, which differed in the size of the training set used to calculate their parameters: either a subset of 50 cases or the whole 1000 cases. To build both models, an entropy minimization discretization algorithm (Fayyad & Irani, 1993) was applied to discretize the KPIs based on the training set. Once the thresholds were obtained, maximum likelihood estimation was used to learn the probabilities from the training set. In parallel, a knowledge-based model was defined by troubleshooting experts.

Fig. 7 shows the results of evaluating the three models on the same test set. The figure presents the diagnosis accuracy depending on the size of the training set, N. It can be observed that, logically, the accuracy obtained with the knowledge-based model does not depend on N. When the size of the training set is small (N = 50), the knowledge-based model outperforms the data-based one. However, when the number of training cases is large (N = 1000), the data-based model provides the best results.

An important factor to consider is the long time required to build a knowledge-based model (e.g. months), especially taking into account that the parameters normally have to be fine-tuned after trials in a real network. However, in most cases a knowledge-based model is the only option in the RAN of wireless networks, due to the lack of training cases and the difficulty of obtaining them. Thus, a feasible solution in a live network would be to start with a knowledge-based model. Every day, the ADS would be run to help the troubleshooting experts. Then, every time a fault was solved, the symptoms of all faulty cells, together with the actual cause, would be saved. In this way, the database of training cases would grow with each new case, and a data-based model could be ready in a reasonable time.

6. Conclusions

This paper has presented an automatic diagnosis system for the RAN of wireless networks. A knowledge acquisition procedure has been proposed to build the diagnosis model required by the system. The techniques in this paper can be used to increase operational efficiency in current and future wireless networks (GSM, GPRS, UMTS, WLAN, B3G networks, etc.).

Experimental results have shown the feasibility of the proposed methods. The knowledge acquisition procedure has made it possible to define a knowledge-based model and to use it in a live network with promising results. A sensitivity analysis has shown that a knowledge-based model is especially sensitive to inaccuracies in the definition of the thresholds. Experiments have also proven that a knowledge-based model is the best option when training cases are scarce or nonexistent. On the contrary, when a large database of training cases is available, data-based models should be preferred.

References

Altman, Z., Skehill, R., Barco, R., Moltsen, L., Brennan, R., Samhat, A., et al. (2006). The Celtic Gandalf framework. In Proceedings of the IEEE Mediterranean electrotechnical conference MELECON'06, Benalmádena, Spain.
Barco, R., Wille, V., & Díez, V. (2005). System for automated diagnosis in cellular networks based on performance indicators. European Transactions on Telecommunications, 16(5), 399–409.
Fayyad, U., & Irani, K. (1993). Multi-interval discretization of continuous-valued attributes for classification learning. In Proceedings of the international joint conference on artificial intelligence, Chambéry, France.
Heckerman, D., Breese, J., & Rommelse, K. (1995). Decision-theoretic troubleshooting. Communications of the ACM, 38(3), 49–57.
Henrion, M., Pradhan, M., Favero, B. D., Huang, K., Provan, G., & O'Rorke, P. (1996). Why is diagnosis using belief networks insensitive to imprecision in probabilities? In Proceedings of the annual conference on uncertainty in artificial intelligence, Portland, Oregon.
Jamalipour, A., Wada, T., & Yamazato, T. (2005). A tutorial on multiple access technologies for beyond 3G mobile networks. IEEE Communications Magazine, 43(2), 110–117.
Khanafer, R., Solana, B., Triola, J., Barco, R., Nielsen, L., Altman, Z., et al. (in press). Automated diagnosis for UMTS networks using Bayesian network approach. IEEE Transactions on Vehicular Technology. doi:10.1109/TVT.2007.912610.
Kipersztok, O., & Wang, H. (2001). Another look at sensitivity of Bayesian networks to imprecise probabilities. In Proceedings of the international workshop on artificial intelligence and statistics, Florida, USA.
Laiho, J., Raivio, K., Lehtimäki, P., Hätönen, K., & Simula, O. (2005). Advanced analysis methods for 3G cellular networks. IEEE Transactions on Wireless Communications, 4(3), 930–942.
Lehtimäki, P., & Raivio, K. (2005a). A knowledge-based model for analyzing GSM network performance. In Proceedings of the international conference on industrial and engineering applications of artificial intelligence and expert systems, Bari, Italy.
Lehtimäki, P., & Raivio, K. (2005b). A SOM based approach for visualization of GSM network performance data. In Proceedings of the international symposium on intelligent data analysis, Madrid, Spain.
Ng, G., & Ong, K. (2000). Using a qualitative probabilistic network to explain diagnostic reasoning in an expert system for chest pain diagnosis. In Computers in cardiology (pp. 569–572). USA: IEEE.
Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. San Francisco, California: Morgan Kaufmann.
Pradhan, M., Henrion, M., Provan, G., del Favero, B., & Huang, K. (1996). The sensitivity of belief networks to imprecise probabilities: An experimental investigation. Artificial Intelligence, 85(1–2), 363–397.
Renooij, S., & Witteman, C. (1999). Talking probabilities: Communicating probabilistic information with words and numbers. International Journal of Approximate Reasoning, 22(3), 169–194.
Rish, I. (2001). An empirical study of the naive Bayes classifier. In Proceedings of the international joint conference on artificial intelligence, Seattle, USA.
Steinder, M., & Sethi, A. (2004). Probabilistic fault localization in communication systems using belief networks. IEEE/ACM Transactions on Networking, 12(5), 809–822.
Wietgrefe, H. (2002). Investigation and practical assessment of alarm correlation methods for the use in GSM access networks. In Proceedings of the IEEE/IFIP network operations and management symposium, Florence, Italy.