Intelligent Decision Support Methods
Quality of Model
• Accuracy, Explainability, Speed, Reliability, ...
Engineering Dimension
• Flexibility, Scalability, Ease of Use, ...
Quality of Available Resources
• Learning Curve, Tolerance for Noise, Complexity, ...
Logistical Constraints
• Independence from Experts, Computational Ease, Development Time, ...
Accuracy
• measures how close the outputs of a system are to the correct or best decision. Can you be confident that the errors (results that are not accurate) are not so severe as to make the system too costly or dangerous to use?
Explainability
• is the description of the process by which a conclusion was reached. Statistical models explain the output to some degree in the sense that each independent variable influences or ‘explains’ the dependent variable in that it accounts for some portion of the variance of the dependent variable.
Scalability
• involves adding more variables to the problem or increasing the range of values that variables can take. For example, scalability is a major issue when you're interested in going from a prototype system involving 10 variables to one with 30 variables. Scalability can be a real problem when the interactions among variables increase rapidly in unpredictable ways with the introduction of additional variables (making the system brittle) or where the computational complexity increases rapidly.
Compactness
• refers to how small (literally, the number of bytes) the system can be made. Once a system has been developed and tested, it needs to be put into the hands of the decision makers within an organization. It must be taken out into the field, be that the shop floor, the trading floor, or the ocean floor.
Flexibility
• is the ease with which the relationships among the variables or their domains can be changed, or the goals of the system modified. Most systems are not designed to be used once and then thrown away. Instead, they must be robust enough to perform well as additional functionality is added over time. In addition, many of the business processes that you might model are not static (i.e., they change over time). As a result, the ability to update a system, or to have the system adapt itself to new phenomena, is important.
Embeddability
• refers to the ease with which a system can be coupled with or incorporated into the infrastructure of an organization. In some situations, systems will be components of larger systems or other databases. If this is the case, systems must be able to communicate well and mesh smoothly with the other components of the organization's infrastructure. A system that requires proprietary software, engineers, or specific hardware will not necessarily be able to integrate itself into this infrastructure.
Ease of use
• describes how complicated the system is to use for the businesspeople
who will be using it on a daily basis. Is it an application that requires a lot
of expertise or training, or is it something a user can apply right out of the
box?
Basic Concept:
• Natural Selection, i.e., Survival
The Example:
• Over the years, you have developed a very good idea of how much time you need to spend, and how to prepare, for a quiz to get a certain grade.
• That is, you build mental models based on the past experiences (data) by
generalization.
Intelligent DSS by HCH 40
The Origin - Neural Networks (II)
Neural networks were first theorized as early as the 1940s by
two scientists at the University of Chicago (McCulloch and
Pitts). Work was done in the mid-1950s as well (McCarthy
1956; Rosenblatt 1957), when researchers developed simple
neural nets in attempts to simulate the brain's cognitive
learning processes.
ANNs are simple computer programs that build models from
data by trial and error.
Very useful in modeling complex, poorly understood
problems for which sufficient data can be collected.
Nervous Systems - Neural Networks (III)
Our nervous systems consist of a network of individual but
interconnected nerve cells called neurons.
Neurons can receive information (stimuli) from the outside
world at various points in the network.
The information travels through the network by generating
new internal signals that are passed from neuron to neuron.
These new signals ultimately produce a response.
A neuron passes information on to neighbor neurons by
firing or releasing chemicals called neurotransmitters.
Nervous Systems - Neural Networks (IV)
The connections between neurons at which information
transfers are called synapses.
Information can either excite or inhibit neurons.
Synaptic connection can be strengthened (learning) or
weakened (forgetting) over time with experience.
With repeated learning, one can generalize his/her
experience, modify the response to stimuli, and thus
ultimately reach the level of reflexes.
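The strengthen/weaken dynamic described above is often caricatured by a Hebbian-style update rule ("neurons that fire together wire together"). A minimal sketch, where the learning rate, decay factor, and activity values are illustrative assumptions rather than anything from the slides:

```python
# Hebbian-style synaptic update: co-activation of the pre- and
# post-synaptic neurons strengthens the weight (learning), while a
# small decay term weakens unused synapses over time (forgetting).
# lr and decay are illustrative values, not taken from the slides.

def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """Return the new synaptic weight after one pre/post activity pair."""
    return w + lr * pre * post - decay * w

w = 0.5
for _ in range(10):                 # repeated co-activation: learning
    w = hebbian_update(w, pre=1.0, post=1.0)
strengthened = w

for _ in range(10):                 # no activity: gradual forgetting
    w = hebbian_update(w, pre=0.0, post=0.0)
weakened = w
```

The decay term is what lets the same rule model both learning and forgetting: with activity the weight grows toward an equilibrium, and without it the weight slowly shrinks.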
Step 1: The network makes a guess based on its current weights and the input data.
Step 2: The net calculates the error associated with the output (at the output node). For example, if the desired output were 1, but the network output were 0, the error would be +1, based on the difference between 1 and 0.
Step 3: The net determines by how much and in what direction each of the weights leading into this node needs to be adjusted. How? This is accomplished by calculating how much each of the individual weighted inputs to the node contributed to the error, given the particular input value. So, for example, if a node's output were too small, the net might need to concentrate on (that is, increase) small or negative weights that lead up to that node. In essence, the network feeds back the information about how well it's doing to the neurodes in the net, and where possible problems might be.
Step 4: The net adjusts the weights of each node in the layer according to the analysis in the previous step. For example, in the case where the output was too small, the neural network will try to increase the values of the positive weights, since that would make the weighted sum larger. This would bring the output closer to 1, which is what you want in this case. Similarly, the neural net should also try to decrease the size of the negative weights (or even make them positive).
Step 5: The net repeats the process by performing a similar set of calculations (Steps 1-3) for each node in the hidden layer below it. But since you cannot tell the net what the desired output of each of the hidden nodes should be (they are internal and hidden), the neural network does a kind of sensitivity analysis to determine how large the error of each of these nodes is.
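The five steps above can be sketched as a tiny backpropagation loop. This is a minimal illustration, not a production implementation: the 2-2-1 architecture, sigmoid activation, learning rate, and the logical-OR training data are all assumptions chosen to keep the example small.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical 2-input, 2-hidden, 1-output network with random weights.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0
lr = 0.5  # learning rate (illustrative value)

def forward(x):
    # Step 1: the network makes a guess from its current weights.
    h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
    return h, y

def train_step(x, target):
    global b_o
    h, y = forward(x)
    # Step 2: error at the output node (desired minus actual output).
    err = target - y
    # Step 3: how much each weighted input contributed to the error,
    # scaled by the sigmoid's slope at the current output (delta rule).
    delta_o = err * y * (1 - y)
    # Step 5: sensitivity analysis for the hidden nodes, whose desired
    # outputs are unknown: error is apportioned via the output weights.
    delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
    # Step 4: adjust each weight in proportion to its contribution.
    for j in range(2):
        w_o[j] += lr * delta_o * h[j]
        b_h[j] += lr * delta_h[j]
        for i in range(2):
            w_h[j][i] += lr * delta_h[j] * x[i]
    b_o += lr * delta_o

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # logical OR
before = sum((t - forward(x)[1]) ** 2 for x, t in data)
for _ in range(2000):
    for x, t in data:
        train_step(x, t)
after = sum((t - forward(x)[1]) ** 2 for x, t in data)
```

After training, the squared error over the four examples drops well below its initial value, which is exactly the trial-and-error model building the slides describe.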
Forward Chaining
Hypothesize
Backward Chaining
The difficulty: making the right rules fire at the right time.
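Forward chaining can be sketched as a loop that fires every rule whose conditions are satisfied by the known facts, until no new facts appear; the "right rule at the right time" difficulty shows up as the conflict-resolution step when several rules match at once. A minimal sketch with a hypothetical rule base (the facts and rules are invented for illustration):

```python
# Minimal forward-chaining engine. Each rule is a pair
# (set of antecedent facts, consequent fact to assert).
# The rule base and starting facts are hypothetical examples.
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "nests_in_trees"),
]

def forward_chain(facts, rules):
    """Fire rules until no rule can add a new fact (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            # A rule fires when all its antecedents are known facts
            # and its consequent is not yet known.
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

derived = forward_chain({"has_feathers", "can_fly"}, rules)
```

This naive version fires rules in list order; real production systems add a conflict-resolution strategy (e.g., recency or specificity) to decide which matching rule fires first, which is precisely where the difficulty noted above arises.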