
Cognitive Systems Research 81 (2023) 64–79

Contents lists available at ScienceDirect

Cognitive Systems Research


journal homepage: www.elsevier.com/locate/cogsys

A framework for cognitive chatbots based on abductive–deductive inference


Carmelo Fabio Longo a,∗, Paolo Marco Riela a, Daniele Francesco Santamaria a, Corrado Santoro a, Antonio Lieto b,c

a Department of Mathematics and Computer Science, University of Catania, Italy
b University of Turin, Department of Computer Science, Italy
c ICAR-CNR, Italy

ARTICLE INFO

Action editor: Alexei V. Samsonovich

Keywords: Chatbot; Question answering; Artificial intelligence; First-order logic; Cognitive architectures; Meta-reasoning

ABSTRACT

This paper presents a framework based on natural language processing and first-order logic aiming at instantiating cognitive chatbots. The proposed framework leverages two types of knowledge bases interacting with each other in a meta-reasoning process. The first one is devoted to the reactive interactions with the environment, while the second one to conceptual reasoning. The latter exploits a combination of axioms represented with rich semantics and abduction as a pre-stage of deduction, dealing also with some of the state-of-the-art issues in the natural language ontology domain. As a case study, a Telegram chatbot system has been implemented, supported by a module which automatically transforms polar and wh-questions into one or more likely assertions, so as to infer Boolean values or snippets of variable length as factoid answers. The conceptual knowledge base is organized in two layers, representing both long- and short-term memory. The knowledge transition between the two layers is achieved by leveraging both a greedy algorithm and the engine's features of a NoSQL database, with promising timing performance compared with the adoption of a single layer. Furthermore, the implemented chatbot only requires the knowledge base in natural language sentences, avoiding any script updates or code refactoring when new knowledge has to be incorporated. The framework has also been evaluated as a cognitive system by taking into account state-of-the-art criteria: the results show that AD-Caspar is an interesting starting point for the design of psychologically inspired cognitive systems, endowed with functional features and integrating different types of perception.

1. Introduction

Natural language represents the most basic interface allowing for a bidirectional information exchange between humans and machines, and it plays and will play a significant role in the future, as witnessed by tech giants such as Google, Facebook, Microsoft, IBM, OpenAI and Amazon, which nowadays make large use of such technology in the form of personal assistants or chatbots. Among applications leveraging Natural Language Processing (NLP), those related to chatbot systems are growing very fast and offer a wide range of choices depending on their usage, each one with different complexity levels, expressive power and integration capabilities: for instance, the Fandango Bot1 retrieves the current trending movies in a selected geographical area, while the NBA's bot2 shows the user the NBA highlights and updates. On the contrary, the way towards the realization of human-like, reasoning-oriented chatbots is still long and challenging. Such an ideal agent-chatbot, in addition to deductive, abductive and inductive capabilities, should be endowed with commonsense categorization, even through the adoption of the so-called Frames (Minsky, 1975), Scripts (Schank & Abelson, 1977), and other techniques. In the overall process of achieving such a goal, the adoption of insights coming from cognitive science has become mandatory (Lieto, 2021), together with a deep knowledge of linguistics.

This work extends (Longo & Santoro, 2018; Longo, Santoro, & Santoro, 2019), where the authors showed the effectiveness of similar approaches in the case of automation commands provided in natural language, by presenting a framework called AD-Caspar that exploits NLP and First-Order Logic (FOL) reasoning as a baseline platform for implementing scalable and flexible cognitive chatbots with both goal-oriented and conversational features. Nevertheless, this architecture leverages question-answering techniques and is capable of combining facts

∗ Corresponding author.
E-mail addresses: fabio.longo@unict.it (C.F. Longo), paolo.riela@unict.it (P.M. Riela), daniele.santamaria@unict.it (D.F. Santamaria), santoro@dmi.unict.it (C. Santoro), antonio.lieto@unito.it (A. Lieto).
1 https://www.facebook.com/Fandango/
2 https://www.facebook.com/nba/

https://doi.org/10.1016/j.cogsys.2023.05.002
Received 14 April 2023; Accepted 8 May 2023; Available online 19 May 2023
1389-0417/© 2023 Published by Elsevier B.V.
and axioms in order to infer new knowledge from its own Knowledge Base (KB). A first overview of such an architecture has been presented in Longo and Santoro (2020).

The first prototype of the framework cannot yet deal with complex dialog systems; however, unlike other similar systems, to enable the handling of additional question–answer pairs, users are only required to provide the related sentences in natural language, without any need to update the chatbot code at design-time. Once the agent has parsed every sentence, a FOL representation is asserted in the KB, which acts as a deductive database (Ramamohanarao & Harland, 1994).

AD-Caspar inherits most of its features directly from its predecessor, namely Caspar (Longo, Longo, & Santoro, 2021), whose name stands for Cognitive Architecture System Planned and Reactive. Caspar was designed to build goal-oriented agents (vocal assistants) with enhanced deductive capabilities in the realm of the Internet of Things (IoT). With respect to Caspar, the AD-Caspar engine introduces many enhancements. For example, AD-Caspar makes use of abductive reasoning as a pre-stage of deduction, so as to make inferences only on a narrow set of query-related clauses and thus deal, in combination with question-answering techniques, with wh-questions by returning factoid answers (single nouns or snippets) in the best cases; otherwise, a relevance-based output is optionally returned.

A Python prototype implementation of AD-Caspar is publicly available for research purposes through a dedicated Github repository.3

This paper is structured as follows: Section 2 deals with the literature on classical approaches for chatbot applications; Section 3 presents some of the issues of natural language ontology that must be considered in the process of automatic translation into logical forms; Section 4 shows in detail the architecture's components and the underlying modules; Section 5 illustrates how AD-Caspar deals with polar and wh-questions; Section 6 introduces the case study, showing a typical instance of a Telegram chatbot working on challenging knowledge bases; Section 7 provides an overall assessment in the realm of cognitive systems by framing the contribution within the more general framework of the Standard Model of Mind/Common Model of Cognition; finally, Section 8 summarizes the paper and draws the conclusions, outlining future work perspectives.

2. Related work

In the plethora of known chatbot systems, the ones participating in the Loebner Prize Competition (Loebner, 2023) stand out first of all. Typical sentences used to prove the effectiveness of Loebner Prize candidates do not require particular logical inference, but mainly the ability to gloss, convincingly, as a human being would. Some chatbots also make use of language tricks, such as monologues, not repeating themselves, identifying gibberish, playing knock-knock jokes, etc. Despite such features, these chatbots can hardly aspire to be somehow useful for the task of decision-making, but just to fool their interlocutor. This is because of the historical approach of such a technology, which always aimed, first of all, at passing the well-known Turing Test (Turing, 1950) as an intelligence evaluation criterion, even though the reader can find copious literature arguing that this test is insufficient for such a task.4

The first distinction between chatbot platforms is between goal-oriented and conversational ones. The former are the most frequent kind, often designed to support business platforms by assisting users on tasks like buying goods or executing commands in domotic environments. In this case, it is crucial to extract intentions and related parameters from an utterance, in order to execute the wanted operations and then provide proper feedback to the user. Conversational chatbots, instead, are mainly focused on having a conversation, giving the user the feeling of communicating with a sentient being and returning reasonable answers, optionally taking into account discussion topics and past interactions. One of the most common platforms for building conversational chatbots is AIML (AIML foundation, 2023) (Artificial Intelligence Markup Language), based on word pattern-matching defined at design-time; in the last decade it has become a standard thanks to its flexibility in creating conversations. In Madhumitha, Keerthana, and Hemalatha (2019), AIML and Chatscript (Wilcox, 2023) are compared and mentioned as the two most widespread open-source frameworks for building chatbots. On the other hand, AIML chatbots are difficult to scale if patterns are manually built, they have great limitations on information extraction capabilities, and they are not suitable for task-oriented chatbots. Other kinds of chatbots are based on deep learning techniques (Jing & Xu, 2019; Sutskever, Vinyals, & Le, 2014), leveraging the so-called Large Language Models (LLMs), which use huge corpora of conversation examples to train generative models that, given an input, are able to generate answers with arguable levels of accuracy. Among them, the widely discussed ChatGPT5 learns language through repeated exposure, albeit on a much larger scale. Without explicit rules, it can sometimes fail at the simplest of linguistic tasks, but it can also excel at more difficult ones like imitating an author or waxing philosophical. Despite the known biases exposed above, researchers (Elkins & Chun, 2020) still argue about whether or not such a chatbot system is capable of passing the Turing Test. Of course, in the field of decision-making, no one would dare entrust any task to such natural language-based cognitive agents. In this regard, the author of Borji (2023) provides a comprehensive analysis of ten categories of ChatGPT's failures, including reasoning, factual errors, math, coding and bias. Even the latest version of such a framework, namely GPT-4, as highlighted by Gary Marcus,6 does not actually solve any of the core problems of truthfulness and reliability.

In general, chatbots are not easily scalable without writing additional code or retraining a model with fresh datasets. As for the latter, neural networks (in particular, the ones used in deep learning applications) unfortunately suffer from the problem known as catastrophic interference: a process where new knowledge overwrites, rather than integrates, previous knowledge. In this regard, the usage of neural models working at the semantic tier as dependency parsers, in order to build logical models of utterances in natural language, largely prevents such a drawback.

In light of the above, as far as we know, there is no chatbot framework in the state of the art relying on a cognitive architecture that draws insights from many aspects of cognitive science, for what concerns the functional aspects of knowledge storage, retrieval and reasoning, as AD-Caspar does. The advantage of this approach is represented by the availability of a transparent procedure for language-based question answering that allows one to build, unlike neural models, trustworthy and inspectable dialogical systems able to integrate and exhibit different kinds of reasoning capabilities.

3. The issue of natural language ontology

The task of automatic creation of a knowledge base from the conceptual description in natural language of a specific domain implies several biases that must be considered. When sentences are not specifically well-formed, the task of translation from natural languages can be quite hard, because of all possible semantic ambiguities of idioms

3 http://www.github.com/fabiuslongo/ad-caspar
4 The most important objection is that the above test refers only to the manifest behavior of a given system, while no claim can be made about the internal mechanisms that have led to that behavior. Another objection concerns its excessive anthropocentrism: because it targets explicit human-like thinking, it cannot be used to provide a universal criterion for attributing intelligence. A further problem concerns the subjective evaluation of the interrogator, since different human interrogators might indeed judge the same behavior in different ways. On these points see Cordeschi (2002).
5 https://openai.com/blog/openai-api/
6 https://garymarcus.substack.com/p/gpt-4s-successes-and-gpt-4s-failures

and the arbitrary descriptive nature of the world, often also given by the usage of metaphoric expressions, which can induce morphologically distinct sequences of words to express the same concept. In Moltmann (2017), it is reported that:

Natural Language Ontology is a branch of both metaphysics and linguistic semantics. It aims to uncover the ontological categories, notions and structures which are implicit in the use of natural language, that is, the ontology that a speaker accepts when using a language.

But such an acceptance implies several issues to be addressed in a translation from natural language into logical forms. For instance, let us consider the following two sentences: "Roy walked down the street" and "Roy had a walk down the street". How can one infer that they express the same concept in a decision process? In the second sentence we are in the presence of the so-called deverbal nominalization of the verb walk, namely a noun expressing the action of walking. So we are forced to somehow create a bridge between the two sentences, in order to achieve the same result when either the one or the other participates in a reasoning process. Similarly, let us also consider the simple verbal phrase "the friends are happy" and the snippet "happy friends". In this case, we are in the presence of a deadjectival nominalization, because the adjective happy becomes the object of a copular7 verb (be), thus the subject (friends) has the property denoted by the adjective. Hence, how can one express, by means of logical forms, that the two pieces of information refer to the same concept?

Pinker describes in Pinker (1994) a hypothetical language of the mind, the so-called mentalese, which should stand above all ambiguities, universally homogeneous across languages. He also affirms that whatever language people speak is hopelessly unsuited to serve as our internal medium of computation, because of five different types of issues: ambiguity, lack of logical precision, coreferencing, deixis, and synonymy. As for ambiguity, it generally happens when the same word may have different meanings depending on context. For instance, the word bass can represent either a musical instrument or a fish, depending on the context. In Longo, Longo, and Santoro (2021), it is shown how Caspar, and hence also AD-Caspar which inherits all its features, addresses such an issue with a disambiguation strategy. As for lack of logical precision, let us consider the example devised by the computer scientist Drew McDermott:

Ralph is an elephant.
Elephants live in Africa.
Elephants have tusks.

An inference-making device would deduce: Ralph lives in Africa and Ralph has tusks. The issue is that Africa is the common place where all elephants live, but Ralph's tusks are his own, and the agent does not know that, because the distinction is nowhere to be found in any of the statements. The third issue, called "co-reference" (or, alternatively, anaphora resolution), concerns the resolution of pronouns referencing nouns in the scope of the same discourse. The state of the art comprises several tools acting as co-referencers: one of those we have been testing is Neuralcoref,8 which can also work at different levels of greediness. Greediness is one of the parameters of the model that makes it stricter about resolving coreferences. Neuralcoref works with a default greediness of 0.5, but, depending on the domain, it could be necessary to change it to have coreferences resolved correctly. The fourth issue comes from those aspects of language that can only be interpreted in the context of a conversation or text, i.e., what linguistics refers to as "deixis". Considering articles like a and the, what is the difference between killed a policeman and killed the policeman? Only that in the second phrase it is assumed that some specific policeman was mentioned earlier or is salient in the context. Thus in isolation the two phrases are synonymous, but in the following contexts (the first from an actual newspaper article) their meanings are completely different:

A policeman's 14-year-old son, apparently enraged after being disciplined for a bad grade, opened fire from his house, killing a policeman and wounding three people before he was shot dead.

A policeman's 14-year-old son, apparently enraged after being disciplined for a bad grade, opened fire from his house, killing the policeman and wounding three people before he was shot dead.

Outside of a particular conversation or text, then, the words a and the are quite meaningless. The fifth issue, namely synonymy, arises when different sentences refer to the same event and therefore license many of the same inferences, similarly to deverbal and deadjectival nominalization. For instance, consider the following sentences:

Sam sprayed paint into the wall.
Sam sprayed the wall with paint.
Paint was sprayed into the wall by Sam.
The wall was sprayed with paint by Sam.

In all four cases, one can conclude that the wall has paint on it. But although the four distinct arrangements of words mean the same thing, no simple processor would implicitly infer such a conclusion.

The above questions and many other issues related to natural language ontology are described in Moltmann (2017), which can be a good starting point for filling the gap between a domain and an arbitrary description of it given by a speaker. In Section 4 it is shown how AD-Caspar has been endowed, with respect to its predecessor Caspar, with production rules specifically introduced for dealing with some of the natural language ontology issues. A parallel study was also conducted for another derivation of Caspar, namely SW-Caspar (Fabio, Corrado, Marianna, Domenico, & Francesco, 2022; Longo, Santoro, Cantone, Santamaria, & Nicolosi Asmundo, 2021), which also addresses such issues, but under the open-world assumption and leveraging Semantic Web technologies. SW-Caspar also addresses the chance of its representation being possibly noisy, by operating in the so-called local closed-world, i.e., forcing it to reason only over a limited set of classes.

4. The architecture

As outlined in the introduction, the proposed architecture derives directly from its predecessor Caspar but, differently from the latter, introduces further human-like features such as abduction and the distinction between short- and long-term memory. All the architecture's components are depicted in Fig. 1, each highlighted with a distinct color: the Translation Service (left box) is the module responsible for translating a natural language sentence into a logical form; the Reactive Reasoner (central box) is a module encompassing the Belief-Desire-Intention (BDI) (Rao & Georgeff, 1995) engine Phidias (D'Urso, Longo, & Santoro, 2019) and providing routing functions for beliefs from/to other components; the Smart Environment Interface (upper right box) provides bidirectional interaction between the Reactive Reasoner and the outer world's sensors and devices; finally, the Cognitive Reasoner (bottom right box) enables the architecture with (meta-)reasoning capabilities.

One of AD-Caspar's strengths is the use of production rules in its core components, an approach which aligns with the historical thread of

7 A copular verb is a special kind of verb used to join an adjective or noun complement to a subject. Common examples are: be (is, am, are, was, were), appear, seem, look, sound, smell, taste, feel, become, and get. A copular verb expresses either that the subject and its complement denote the same thing or that the subject has the property denoted by its complement.
8 https://github.com/huggingface/neuralcoref
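A production rule of the kind just mentioned, pairing a condition on the Beliefs KB with a plan, can be minimally sketched as follows. The beliefs, the rule and the plan below are invented examples for illustration only and do not reproduce the actual Phidias/AD-Caspar syntax.

```python
# Minimal production-rule cycle: each rule pairs a condition over the
# Beliefs KB with a plan; when the condition holds, the plan is executed
# and may assert further beliefs. Beliefs and rules here are invented
# examples; the real engine (Phidias) uses its own BDI syntax.

beliefs = {("temperature", "high")}

def cool_room():
    # A plan may drive an external device and assert new beliefs.
    beliefs.add(("fan", "on"))

RULES = [
    (lambda kb: ("temperature", "high") in kb, cool_room),
]

def reasoning_cycle():
    for condition, plan in RULES:
        if condition(beliefs):
            plan()

reasoning_cycle()
print(("fan", "on") in beliefs)  # True
```

The point of the sketch is only the control flow: rule matching is driven entirely by the current content of the Beliefs KB, and plans are ordinary code that may feed new beliefs back into it.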


Fig. 1. The Software Architecture of AD-Caspar.

cognitive architectures that began with the General Problem Solver (GPS) (Newell, Shaw, & Simon, 1959). More specifically, the Translation Service, which is responsible for semantic parsing from natural language into logical forms, the Definite Clauses Builder, which constructs composite definite clauses (or nested clauses), and the QA Shifter, which transforms a question into one or more potentially relevant statements, are all made of production rules. Even the Smart Environment Interface is made of production rules, though fewer than the other modules: one for each Sensor/Device interfacing the agent with the outer world.

The architecture's KB is divided into two distinct parts operating separately, which we identify as the Beliefs KB and the Low/High Clauses KB. The former contains information about physical entities that affect and are affected by the agent, whereas the latter contains conceptual information not directly perceived by the agent's sensors, but on which the agent can make inferences. More specifically, the totality of the knowledge is stored in the low layer, whereas the logical inference is achieved in the high one, whose clauses will be the most relevant for the query under examination, taking into account a specific confidence threshold which will be discussed ahead.

Each production rule of any module can be triggered on the basis of the Beliefs KB content. When a rule is triggered, a specific plan is executed, whose details can encompass the assertion of further beliefs or can be coded in a high-level language aimed at interacting with external devices.

The Low/High Clauses KB, which is made of composite first-order logic clauses,9 provides support for the conceptual (meta-)reasoning under the closed-world assumption, in order to subordinate fast-dynamic real-world perceptions and actions to high-level cognition. Such a level of reasoning is not directly linked with the agent's domain, but is related enough to achieve the desired behavior in correspondence of specific plan executions, thus providing the agent with the decision-maker role on the execution of plans.

The two KBs (Beliefs KB and Low/High Clauses KB) somehow represent two different types of human memory: the so-called procedural memory or implicit memory (Schacter, 1987), made of thoughts directly linked to concrete and physical entities, and the conceptual memory, based on cognitive processes of comparative evaluation. Moreover, the two layers of the Clauses KB can be seen as Short-Term Memory (High Clauses KB) and Long-Term Memory (Low Clauses KB), and together they constitute what in cognitive science is called a Conceptual Space.

In this work, we eschew philosophical discussions of how real the defined conceptual spaces are; what matters is that we can do things with them. In regard to conceptual contents, it is well known that humans are extremely efficient at learning new concepts, but a central problem for cognitive science is how such a process and the underlying representation should be modeled. The main approaches have been the so-called symbolic one and associationism10: the former starts from the assumption that cognitive systems can be described as Turing machines; in the latter, associations between different kinds of information elements carry the main burden of representation, occurring at the most fine-grained level. A further form of representation, due to Gärdenfors (Gärdenfors, 2000), is given by points, vectors and regions in dimensional spaces. On the basis of these geometric structures, similarity relations (which are closely tied to the notion of concept learning) can be modeled in a natural way in terms of distances, particularly suited to represent different kinds of relations: the closer two objects are located in a conceptual space, the more similar they are. Although an interesting research tool leveraged for commonsense categorization (Lieto, Radicioni, & Rho, 2017), the Gärdenfors approach is not suitable in the scope of this work, because such conceptual spaces do not naturally support negation and logical reasoning; moreover, they present issues which are difficult to address for a comprehensive management of natural language, such as the non-trivial representation of verbs, adjectives and prepositions. In contrast, the AD-Caspar conceptual KB is made of FOL definite clauses, and its representation (as explained ahead)

9 The low layer also encompasses satellite information related to each clause.
10 Connectionism is a special case of associationism, which models associations by means of artificial neural networks.
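The two-layer organization of the Clauses KB described in this section can be illustrated with a short sketch: all clauses live in the low layer, and only those relevant enough for the current query are promoted to the high layer, where deduction takes place. The label-overlap score and the 0.6 threshold below are illustrative assumptions, not AD-Caspar's actual NoSQL-backed retrieval metric.

```python
# Sketch of the two-layer Clauses KB: every clause lives in the low
# layer (long-term memory); for a given query, only clauses whose
# relevance exceeds a confidence threshold are promoted to the high
# layer (short-term memory), where deduction actually takes place.
# The score and threshold are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.6

def relevance(clause_labels, query_labels):
    """Fraction of the query's predicate labels shared by the clause."""
    if not query_labels:
        return 0.0
    return len(clause_labels & query_labels) / len(query_labels)

def promote(low_kb, query_labels):
    """Build the high layer (short-term memory) for this query."""
    return [c for c in low_kb
            if relevance(c["labels"], query_labels) >= CONFIDENCE_THRESHOLD]

low_kb = [
    {"clause": "drink_VBZ(e1,x1,x2) & wine_NN(x2)",
     "labels": {"drink_VBZ", "wine_NN"}},
    {"clause": "own_VBZ(e2,x3,x4) & car_NN(x4)",
     "labels": {"own_VBZ", "car_NN"}},
]

high_kb = promote(low_kb, {"drink_VBZ", "wine_NN"})
print(len(high_kb))  # 1: only the wine-related clause is promoted
```

Keeping the high layer small is what makes abduction a useful pre-stage of deduction: the reasoner only chains over clauses that already passed the relevance filter.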


Fig. 2. The Data Flow Schema of AD-Caspar.

also incorporates contextualized labels for each predicate, achieved by means of vectorial similarity.

It is also important to highlight that, as in human beings, the Beliefs KB and the (Low/High) Clauses KB can interact with each other in a very reactive decision-making process (meta-reasoning).

Fig. 2 shows a simplified functional data flow schema of AD-Caspar. Each sensor in the upper ovals can get information either from a chatbot, a microphone (in the case of vocal commands) or whatever other kind of device able to capture physical quantities from the environment. Each piece of information coming from sensors is translated into specific beliefs and stored in the Beliefs KB. In case a belief is not related to an utterance, it is sent directly to the Smart Environment Interface, where libraries for piloting devices are invoked, without passing through the Translation Service. Otherwise, in the case of natural language utterances, beliefs containing text pass through the Translation Service, in order to produce logical forms by leveraging a dependency parser and the lexical resource WordNet (Miller, 1995). Depending on the utterance content, automation commands might also be subordinated to meta-reasoning in the conceptual space, through interaction between the Reasoner and the Long-Short Term Memory, before interacting with devices via the Smart Environment Interface.

4.1. The translation service

This component consists of a pipeline of five modules conceived to translate a text in natural language into a neo-Davidsonian FOL expression that inherits its shape from the event-based formal representation of Davidson (Davidson, 1967), with every term t as follows:

t := x | L:POS(t) | L:POS(t1, t2, t3) | L:POS(t1, t2)

with x a variable bound either to a universal or to an existential quantifier, L a lemma, and POS a Part-of-Speech (POS) tag. Implication symbols are also admitted, when a group of predicates subordinates the remaining ones. Such a representation also takes into account the analysis introduced in Anthony and Patrick (2015) concerning the so-called slot allocation, which indicates specific policies about an entity's location inside each predicate, depending on verbal cases.

Each verb and its relation with subject and object is represented by a ternary predicate: the first argument, defined as the davidsonian variable, uniquely identifies the action linked to the verb semantically described by the label, whereas the other two arguments represent the subject and the object. The subject–object order is inverted in the presence of passive verbs, morphologically described by specific dependencies such as nsubjpass and agent, which implicitly summarize the agent-driven Parsons (Parsons, 1990) approach (usually also called "neo-Davidsonian"); in the case of intransitive or imperative verbal phrases, either the subject or the object slot is left empty, respectively. Moreover, terms with only one argument are used for both noun representation and quality modifiers (adjectives and adverbs), sharing the arguments which link them together. Adverbs are represented by one-arity terms, whose arguments are equal to the davidsonian variable of the verb they refer to. Finally, prepositions are represented by binary predicates: the first argument is either a davidsonian or a common variable, whereas the second argument is a variable shared with the object of the preposition. For instance, considering the following sentence:

Roy slowly drinks good wine.

The Translation Service's outcome will be the following expression:

Roy:NNP(x1) ∧ wine:NN(x2) ∧ drink:VBZ(e1, x1, x2) ∧ slowly:RB(e1) ∧ good:JJ(x2)

where each predicate's label is made of a lemmatized word concatenated with a POS (we recall that NNP stands for a proper singular noun, NN for a singular or mass noun, VBZ for a 3rd person singular present verb, RB for an adverb, and JJ for an adjective). The shape of each label can be changed by modifying a parameter in a configuration file, giving the chance of omitting the POS, of using lemmatized words or not, and even of enabling the usage of WordNet synsets12 as labels, selected with a disambiguation process which takes the context into account. Such a disambiguation, which in its prior version in Caspar (fully described in Longo, Longo, and Santoro (2021)) relied on a naive bag-of-words (Church, 2017) heuristic, has been improved in this work by leveraging WordNet content (glosses and examples) and BERT (Du, Qi, & Sun, 2019) encodings, providing a selection of synsets contextualized with the body of discourse, which has been shown in Huang, Sun, Qiu, and Huang (2019) to reach an accuracy of more than 80%.

The presence of a copular verb within a sentence also gives the chance to enrich the Low/High Clauses KB with the so-defined assignment rules, which participate actively in the logical reasoning via the Backward-Chaining algorithm. For instance, consider the following sentence:

Roy is a man.

The corresponding logical form will be:

be:VBZ(e1, x1, x2) ∧ Roy:NNP(x1) ∧ man:NN(x2). (1)

After a contextual analysis of be, we find this lemma within the WordNet synset encoded by be.v.01 and defined by its gloss as: have the quality of being (referred to a subject and an object/adjective). In the new conceptual domain given by a proper choice of synsets as predicates' labels, the expression (1) can also be rewritten as13:

be.v.01_VBZ(d1, y1, y2) ∧ Roy_NNP(y1) ∧ man.n.01_NN(y2),

where VBZ indicates the present tense of be.v.01, Roy_NNP(y1) means that y1 identifies the person Roy, and man.n.01_NN(y2) means that y2 identifies an adult person who is male (as opposed to a woman).

Considering the meaning of be.v.01_VBZ and the fact that be is a copular verb, it also makes sense to rewrite the previous expression as the following assignment rule:

Roy_NNP(y) ⟹ man.n.01_NN(y). (2)

By admitting such a rule in a KB we can implicitly admit additional knowledge. Clauses of this kind, however, must be admitted carefully, because of the presence of commonsense biases. For instance, consider the sentence:

Vibrating soil causes water to move, (5)

Being the description of a physical phenomenon, we might admit without meaningful biases the following clause in the KB:

vibrate_VBG(e1, x1, _) ∧ soil_NN(x1) ⟹ move_VBZ(e2, x2, _) ∧ water_NN(x2).

One might object that vibrations must be in the immediate vicinity to cause such water movement, and not in a distant geographical place. To fix such an issue we might simply add the adjective local to utterance (5), to better express such an assertion. For instance, consider the following variation:

Local vibrating soil causes water to move, (6)

which can be represented with the following logical form:

vibrate_VBG(e1, x1, _) ∧ local_JJ(e1) ∧ soil_NN(x1) ⟹ move_VBZ(e2, x2, _) ∧ water_NN(x2).

As a counterexample, consider the following expression:

Smoking causes lung cancer. (7)

In this case, we cannot entail from (7) that a smoking man also has lung cancer without committing an ad verecundiam fallacy16, i.e., a logical error arising from trusting a person or community considered authoritative (in this case the physicians). Otherwise, all people who smoke would be ill and the oncological wards of hospitals would be overcrowded. The problem of implicit causality is comprehensively addressed in Hartshorne (2014). However, for the purpose of this work, we operate within a conceptual closed-world where our assertions are deemed admissible.

What follows is a brief description of all the pipeline's modules of the Translation Service in Fig. 1. For a detailed description of them, since such a component is entirely inherited from Caspar, the reader is referred to Longo, Longo, and Santoro (2021). The first module, namely the Sensor Instance,17 fulfills the connection of AD-Caspar with the outer world through the Smart Environment Interface (described in Section 4.3). Each Sensor Instance is an instance of the superclass Sensor provided by Phidias, which lets each instance work asynchronously with respect
clauses having man.n.01_NN(y) as argument instead of to the main agent’s engine. Differently from Caspar, whose sensors
Roy_NNP(y). obtain texts from Automatic Speech Recognition (ASR), in the case
Furthermore the above expression can also be rewritten as a composite of AD-Caspar each utterance can be achieved also from a chatbot
fact such as14 : interaction (CHAT). In a similar way, whatever other type of Sensor can
be instantiated, when specific physical parameters are detected. In this
man.n.01_NN(Roy_NNP(y)). (3) case, one or more rule must be added among the others in the Smart
Environment Interface and related plans must be create for them, to
As for other type of implicative axioms, through a dependency analysis
make the agent deal with new kinds of beliefs asserted by the Sensors
it is possible to infer whether a group of predicates in a FOL expression
Instances.
subordinate the remaining ones in the same expression. For instance,
The next pipeline module, namely the dependency parser (in this
considering the following sentence:
case-study’s prototype provided by spaCy18 ), extracts all semantic
𝑊 ℎ𝑒𝑛 𝑡ℎ𝑒 𝑠𝑢𝑛 𝑠ℎ𝑖𝑛𝑒𝑠 𝑠𝑡𝑟𝑜𝑛𝑔𝑙𝑦, 𝑅𝑜𝑦 𝑖𝑠 ℎ𝑎𝑝𝑝𝑦. (4) relation-ships between words in a utterance coming from the Sensor
15
Instance. Such relationships, which are also called dependencies play
In this case, the final outcome of the Translation Service will be : a key role in this work FOL expressions building. The study of such
shine_VBZ(e1 , x1 , _) ∧ sun_NN(x1 ) ∧ strongly_ADV(e1 ) dependencies helped us to fill the gap between utterances in natural
language and the machine information representations used for this
⟹ be_VBZ(e2 , x2 , x3 ) ∧ Roy_NNP(x2 ) ∧ happy_JJ(x3 ).
work.
It is important to highlight that such kind of clause regards the so- The so-called Uniquezer provides a distinct entity encoding for each
called explicit causality; as for implicit, causality should be admitted dependency coming from the dependency parser, which is a mandatory
step for correctness of the next module.

12
A synset is set of one or more synonyms.
13 16
Notice that in the new conceptual domain, the semicolon between lemmas Forms of reasoning which are logically invalid but cognitively effective.
17
and POS is replaced by the underscore. There can be more of them, each related to a distinct physical quantity
14
We assume to be in a conceptual domain, where possible. coming from the outer world.
15 18
By using lemmas as predicate labels. https://spacy.io/

69
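The deduction over clauses of this kind can be illustrated with a minimal backward-chaining sketch. This is a didactic stand-in, not AD-Caspar's reasoner: predicates are simplified to plain tuples, and clause (4) is reduced to ground facts plus one rule.

```python
# A didactic backward-chaining prover over definite clauses, illustrating how
# an implicative clause such as (4) supports deduction. Minimal sketch with
# simplified predicate names, not AD-Caspar's actual implementation.

def unify(x, y, s):
    """Unify two terms under substitution s; variables are '?'-prefixed strings."""
    if s is None:
        return None
    if x == y:
        return s
    if isinstance(x, str) and x.startswith('?'):
        return unify_var(x, y, s)
    if isinstance(y, str) and y.startswith('?'):
        return unify_var(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            s = unify(xi, yi, s)
        return s
    return None

def unify_var(v, t, s):
    if v in s:
        return unify(s[v], t, s)
    s2 = dict(s)
    s2[v] = t
    return s2

def subst(t, s):
    """Apply substitution s to term t."""
    if isinstance(t, str) and t.startswith('?'):
        return subst(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return tuple(subst(x, s) for x in t)
    return t

def bc_ask(goals, kb, s):
    """Yield every substitution proving all goals from kb, a list of (head, body) rules."""
    if not goals:
        yield s
        return
    first, rest = subst(goals[0], s), goals[1:]
    for head, body in kb:
        s2 = unify(head, first, s)
        if s2 is not None:
            yield from bc_ask(list(body) + rest, kb, s2)

# Facts plus the clause "when the sun shines strongly, Roy is happy".
kb = [(('shine', 'sun'), []),
      (('strongly', 'sun'), []),
      (('happy', 'Roy'), [('shine', 'sun'), ('strongly', 'sun')])]
answers = list(bc_ask([('happy', '?w')], kb, {}))
print(subst('?w', answers[0]))  # Roy
```

Querying ('happy', '?w') succeeds only because both premises of the rule are provable, mirroring how the antecedent of an implicative clause must be satisfied before its consequent is concluded.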
The MST Builder consists of a production rules system, and it has the task of building a novel semantic structure called Macro Semantic Table (MST), summarizing in a canonical shape all the semantic features of a sentence. MSTs can only represent verbal phrases and are built through a matching of each dependency with a production rule, taking into account language diversity and the dependency tagset. The same MST might be generated starting from two or more distinct sets of dependencies, in a non-deterministic fashion. With a good knowledge of any idiom, the MST Builder can be shaped in order to create MSTs starting from different languages/dependency tags in a flexible and scalable way.

Finally, the FOL Builder has the task of building FOL expressions starting from the MST structures. Since (virtually) all approaches to formal semantics assume the Principle of Compositionality19, formally formulated by Partee (Partee, 1995), every semantic representation can be incrementally built up as constituents are put together during parsing. In light of the above, it is possible to build FOL expressions straightforwardly starting from an MST, which summarizes all the semantic features of a sentence.

With regard to the natural language ontology issues exposed in Section 3, the Translation Service module includes specific production rules for dealing with the so-defined deadjectival nominalization. In detail, by activating the related flag in the configuration file of this work's code prototype, whenever an adjective is encountered during the parsing of a sentence, a new assignment rule (as described above) is asserted by treating the adjective as the object of a verbal phrase, thus increasing the chances of successful reasoning. For instance, in the presence of a snippet like the wise Robert within a sentence, the Translation Service will generate the following predicates:

Robert_NNP(y1) ∧ wise.a.01_JJ(y1).

Since the word wise has been classified as an adjective (JJ), with wise.a.01 as the chosen synset identifying a wise being, the following axiom will also be asserted:

Robert_NNP(y) ⟹ wise.a.01_JJ(y),

which means that whatever verbal action involves Robert also involves a wise being.

As for deverbal nominalization, which for this work is still under development, differently from the deadjectival one it is not possible to address the issue with comprehensive criteria based on a common prototype of production rules, because the semantics of a deverbalized verb (made noun) might change for some synonyms. For instance, consider the case seen in Section 3:

Roy walked down the street.

The related sentence containing a deverbalized form of walked with the same meaning might be:

Roy had a walk down the street.

It is evident that in this passage from verb to noun (considering also the past tense) the sentence's meaning is preserved. On the contrary, given the following sentence:

Roy crossed down the hill,

if we replicate the same criteria as above we obtain:

Roy had a cross down the hill,

which utterly changes the sentence's meaning. This forces us to think that each verb (or class of verbs) must have its own criteria, given by commonsense knowledge, to shift a verb to its deverbalized-noun shape.

4.2. The reactive reasoner

This component (central box in Fig. 1) has the task of letting the other components communicate with each other, through a specific interaction between the Beliefs KB and the Phidias engine. By using the latter, whose design was inspired by Bratman's (1987) theory of practical human reasoning, programs have the ability to perform logic-based reasoning (in Prolog style) and developers are allowed to write reactive procedures, i.e., pieces of programs that can promptly respond to environment events. Moreover, it also encompasses additional modules, such as:

- Speech-To-Text (STT) Front-End: with the task of parsing information coming from other modules into specific beliefs, and generating the here-defined clause conceptual generalizations, which will be discussed briefly ahead.
- Direct Command/Routine Parsers: with the task of parsing logical forms (related to commands in natural language) into specific beliefs, which will interact with the Smart Environment Interface.
- Definite Clauses Builder: with the task of transforming logical forms coming from the Translation Service into definite clauses, possibly made of composite predicates, which in this work are defined as nested definite clauses.
- QA Shifter: with the key role of generating, starting from a question, the assertions one can expect as likely answers, before their translation into logical forms by means of the Translation Service.
- Sensor Instances: each of them is an instance of the superclass Sensor provided by Phidias, which works asynchronously with respect to the main agent's engine, fulfilling the passive interaction with the environment. Each physical quantity detected in the environment, such as sounds, temperatures, images, and so forth, is translated into specific beliefs which will interact with the Smart Environment Interface.

The Reactive Reasoner also contains the Beliefs KB, which supports both reactive and cognitive reasoning, being the effective working memory of Phidias.

With regard to the aforementioned clause conceptual generalizations, whose management is the responsibility of the STT Front-End, they are more general versions of a clause, obtained by considering some of its predicates (adjectives, adverbs and prepositions) as modifiers of other predicates. For instance, considering the following sentence:

Robert slowly drinks good wine in the living room,

more general versions (hence conceptual generalizations) are:

Robert drinks good wine in the living room, Robert drinks wine in the living room, Robert drinks wine, etc.

The rationale behind the assertion of clauses starting from the aforementioned sentences lies in the very composite nature of the nested definite clauses, whose expressive degree, in combination with the assignment rules, must take each nesting level into account in order to be maximized (as shown in Fig. 4).

With the exception of the QA Shifter, which is also an internal component of the Reactive Reasoner and is described in Section 5, readers can refer to Longo, Longo, and Santoro (2021) for more information about the latter.

4.3. The smart environment interface

This component provides a bidirectional interaction between the architecture and the outer world. In Longo et al. (2019), we showed the effectiveness of this approach by leveraging the Phidias predecessor, namely Profeta (Fichera, Messina, Pappalardo, & Santoro, 2017), even with a shallower analysis of the semantic dependencies, as well as an

19 ''The meaning of a whole is a function of the meanings of the parts and of the way they are syntactically combined''.
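The surface effect of the clause conceptual generalizations described in Section 4.2 can be sketched by dropping modifier tokens. The flat (word, POS) representation and the restriction to adjectives and adverbs are simplifying assumptions of this illustration: the STT Front-End actually operates on nested definite clauses, and generalizing over prepositions would require dropping whole prepositional phrases.

```python
from itertools import combinations

# POS tags whose tokens are treated as droppable modifiers in this sketch;
# prepositional phrases are omitted for simplicity.
MODIFIER_POS = {'JJ', 'RB'}

def conceptual_generalizations(tagged):
    """Yield more general versions of a (word, POS) tagged sentence by
    removing every non-empty subset of its modifier tokens."""
    mods = [i for i, (_, pos) in enumerate(tagged) if pos in MODIFIER_POS]
    for k in range(1, len(mods) + 1):
        for dropped in combinations(mods, k):
            yield ' '.join(w for i, (w, _) in enumerate(tagged)
                           if i not in dropped)

tagged = [('Robert', 'NNP'), ('slowly', 'RB'), ('drinks', 'VBZ'),
          ('good', 'JJ'), ('wine', 'NN')]
gens = list(conceptual_generalizations(tagged))
print(gens)
```

For the example sentence this yields the three generalizations listed above (without the prepositional complement): dropping slowly, dropping good, and dropping both.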
Fig. 3. AD-Caspar High Clauses KB assertions related to the Colonel West case knowledge base.

operation encoding via WordNet in order to make the operating agent multi-language and multi-synonymous.

Such a component is made of a production rules system, which is triggered by beliefs asserted in the Beliefs KB. When a rule is triggered, a specific plan consisting of code in a high-level language is executed, which will assert further beliefs destined to interact with the same or other production rules (see Longo, Longo, and Santoro (2021) for further details).

4.4. The cognitive reasoner

This component (bottom right box in Fig. 1) enables an agent to implicitly invoke a FOL reasoner (specifically the Backward-Chaining algorithm) at runtime. This allows the agent to query the Clauses KB and possibly achieve a meta-reasoning subordinating IoT commands. To fully grasp this concept, it is important to first understand the first level of reasoning, which involves parsing utterances from natural language and transforming them into beliefs that either match or do not match the production rules in the Smart Environment Interface. The meta-reasoning is achieved by a proper encapsulation of the so-called active beliefs20 into one or more production rules. The latter, when encapsulating an active belief, can be triggered or not, depending on its value. Within an active belief, it is possible to execute any type of code using the built-in Phidias procedures called Actions. For example, the Backward-Chaining algorithm can be invoked, and its results can then be used to subordinate the initial level of reasoning described above. Moreover, the Cognitive Reasoner comprises both layers of the Clauses KB, which is filled by the Definite Clauses Builder integrated within the Reactive Reasoner.

Unlike the corresponding component in Caspar, each assertion is applied to both layers of the Clauses KB. Specifically, the High Clauses KB serves as a volatile memory that is emptied after the chatbot is restarted. It is the first location where deduction is attempted; if reasoning is unsuccessful, the Low Clauses KB is used to retrieve new nested definite clauses with a number of features that complies with a predetermined threshold (described in detail in Section 5). Once the High Clauses KB has been populated with these new definite clauses, reasoning is reattempted.

5. Question answering

This section shows how the proposed architecture is able to deal with abductive–deductive question-answering. The Low Clauses KB has a very important role in such a task, because it supports the abduction as a pre-stage of the deduction and is the source from which the High Clauses KB is populated in order to make logical inference. Each record in the Low Clauses KB is stored in a NoSQL database and is made of three fields:

- Nested Definite Clause: a definite clause possibly made of composite predicates.
- Features List: a list of all the labels composing the above clause.
- Sentence: a sequence of words in natural language, coming from a Sensor Instance.

For instance, consider the following sentence:

Barack Obama became president of United States in 2009.

In this case, including in the notation the words' lemmas plus POS, the record in the Low Clauses KB will be as follows:

- In_IN(Become_VBD(Barack_NNP_Obama_NNP(x1), Of_IN(President_NN(x2), United_NNP_States_NNP(x3))), N2009_CD(x4))
- [In_IN, Become_VBD, Barack_NNP_Obama_NNP, Of_IN, President_NN, United_NNP_States_NNP, N2009_CD]
- Barack Obama became president of United States in 2009

The abductive strategy for getting useful clauses from the Low Clauses KB takes into account a metric defined as Confidence_c as follows:

Confidence_c = |Feats_q ∩ Feats_c| / |Feats_q| (8)

where Feats_q is the features list extracted from the query, and Feats_c is the features list of a generic record stored in the Low Clauses KB. A typical access to the Low Clauses KB gets the sorted list of all the query's feature occurrences, together with the related clauses; then the most relevant clauses are copied into the High Clauses KB. Such an operation is accomplished via the aggregate_clauses_greedy algorithm shown in Algorithm 1. The latter, with a greedy heuristic, takes as input the query clause, the set aggregated and the wanted confidence threshold for abduction; as output, it gives back the set aggregated of clauses from the db that are going to be copied into the High Clauses KB, taking into account the wanted distance (8) from the query. The next subsection will focus on discussing the algorithm responsible for carrying out such a task.

5.1. Algorithm

First, at line 1 of Algorithm 1, aggregate_clauses_greedy extracts the list containing all the features of clause; at line 2 it creates a sequence of tuples in descending order (by the first field) containing an integer value and a list of clauses having such a value in common. For instance, considering Table 1, given a query with the features

20 An active belief is a Phidias native type of changing belief that assumes either the value True or False.
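The metric (8) amounts to a set intersection normalized by the query's feature count, and can be sketched directly as:

```python
def confidence(feats_query, feats_clause):
    """Metric (8): |Feats_q ∩ Feats_c| / |Feats_q|."""
    fq, fc = set(feats_query), set(feats_clause)
    return len(fq & fc) / len(fq) if fq else 0.0

# Features of the Barack Obama record above, matched against a query
# sharing four of them (the query features are an illustrative assumption).
feats_c = ['In_IN', 'Become_VBD', 'Barack_NNP_Obama_NNP', 'Of_IN',
           'President_NN', 'United_NNP_States_NNP', 'N2009_CD']
feats_q = ['Become_VBD', 'Barack_NNP_Obama_NNP', 'President_NN', 'N2009_CD']
print(confidence(feats_q, feats_c))  # 1.0: every query feature occurs in the record
```

Note that the metric is asymmetric: it measures how well the stored record covers the query, not the converse, so a record with many extra features is not penalized.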
Fig. 4. AD-Caspar High Clauses KB expansion after each assertion (through the command feed in the shell) related to the Colonel West case knowledge base.

Fig. 5. The Python Mongodb aggregate operator implementing the function get_relevant_clauses_from_db at line 2 of Algorithm 1.

[a, b, c], the function get_relevant_clauses_from_db will create a list made of the following two tuples:

(3, [cls_4, cls_5]),
(2, [cls_2, cls_3]).

The rationale behind such a function's output is that the first value of each tuple is the size of the intersection between the features of the query and the features in Table 1, while the second value is a list containing the clauses themselves that have such an intersection size in common. At line 4, the first value of each tuple is extracted and used at line 5 to calculate the confidence (8). In the loop at line 6, all the clauses having the same feature occurrences are considered, and at line 7 the algorithm checks whether the clause has an admissible confidence and is not already in aggregated; in that case, the clause is appended to the aggregated list. At line 9, there is a recursive call taking as input cls (the current clause being processed) instead of clause, together with the updated aggregated and the threshold of the same procedure call. Finally, at line 10, the list aggregated is returned.

Although there can be several strategies to implement the function at line 2 of Algorithm 1, for the case-study prototype the desired result has been achieved by leveraging the MongoDB aggregation operator and the runtime optimizations given by database indexes, in the single operation shown in Fig. 5. The latter is a pipeline of four different operations: the first, at line 4, is the processing of all the possible sizes of the intersections between the features fields in the db and the features of the query clause. At line 6, all the clauses with a common value of such a size are grouped in tuples. At line 7, such tuples are sorted by the intersection size. Finally, at line 8, the output is limited to the two most significant tuples. Nonetheless, we can affirm the algorithm is sound, accomplishing its job in at most:

O(t · s²)

with 0 < t < 1 as the confidence threshold and s the size of the Low Clauses KB.

Algorithm 1 aggregate_clauses_greedy(clause, aggregated, threshold)
Input: (i) clause: a definite clause, (ii) aggregated: a set of definite clauses (empty in the first call), (iii) threshold: the minimum confidence threshold
Output: a set of definite clauses.

1: ft ← extract_features(clause);
2: aggr ← get_relevant_clauses_from_db(ft);
3: foreach record in aggr do
4:   occurrences_found ← record.features_occurrences
5:   confidence ← occurrences_found / size(ft)
6:   foreach cls in record.clauses do
7:     if cls not in aggregated and confidence ≥ threshold then
8:       aggregated.append(cls);
9:       aggregate_clauses_greedy(cls, aggregated, threshold);
10: return aggregated
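Under the assumption of an in-memory store replacing the MongoDB collection, Algorithm 1 can be rendered as the following sketch; get_relevant_clauses mimics, in memory, the grouping and sorting of the Fig. 5 pipeline (without the final limit to the two most significant tuples).

```python
# Pure-Python rendering of Algorithm 1. A list of (clause_id, features)
# records stands in for the MongoDB collection; get_relevant_clauses
# replaces the aggregation pipeline of Fig. 5 (no result limit applied).

def get_relevant_clauses(ft, store):
    """Group stored clauses by the size of their feature intersection with
    ft, returned in descending order of that size."""
    groups = {}
    for clause_id, feats in store:
        n = len(set(feats) & set(ft))
        if n:
            groups.setdefault(n, []).append(clause_id)
    return sorted(groups.items(), reverse=True)

def aggregate_clauses_greedy(clause, aggregated, threshold, store, features_of):
    ft = features_of[clause]                                      # line 1
    for occurrences, clauses in get_relevant_clauses(ft, store):  # lines 2-4
        conf = occurrences / len(ft)                              # line 5: metric (8)
        for cls in clauses:                                       # line 6
            if cls not in aggregated and conf >= threshold:       # line 7
                aggregated.append(cls)                            # line 8
                aggregate_clauses_greedy(cls, aggregated,
                                         threshold, store, features_of)  # line 9
    return aggregated                                             # line 10

# The records of Table 1, queried with features [a, b, c] and threshold 0.6.
store = [('cls_1', ['a', 'x', 'z', 'y']), ('cls_2', ['a', 'b']),
         ('cls_3', ['a', 'b', 'x', 'y']), ('cls_4', ['a', 'b', 'c', 'd']),
         ('cls_5', ['a', 'b', 'c', 'w'])]
features_of = dict(store)
features_of['query'] = ['a', 'b', 'c']
result = aggregate_clauses_greedy('query', [], 0.6, store, features_of)
print(result)
```

Note how the recursion expands transitively: cls_3 (confidence 2/3 with respect to the query) pulls in cls_1, whose confidence (3/4) is computed against cls_3's own features rather than the query's.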
Table 1
A simple instance of clauses and their related features.

Clauses  Features
cls_1    [a, x, z, y]
cls_2    [a, b]
cls_3    [a, b, x, y]
cls_4    [a, b, c, d]
cls_5    [a, b, c, w]

5.2. Polar questions

In the field of question-answering, polar questions refer to questions that seek a yes or no answer. In general, such questions have the shape of a nominal assertion (except for the question mark at the end), and are therefore transformed by AD-Caspar into nested definite clauses and treated as queries as they are; those beginning with an auxiliary term, instead, are deprived only of the auxiliary word and transformed into a clause to be used as a query. For instance, consider the following polar question:

Has Margot said the truth about her life?

which can be recognized through the following dependency:

aux(said, Has).

Such a sentence will be treated by removing the auxiliary and considering the remaining text (without the ending question mark) as the source to be converted into a clause-query.

5.3. Wh-questions

Differently from polar questions, to deal with wh-questions we have to transform a question into the assertions one can expect as a likely answer, and then query the Clauses KB with them. To achieve that, after an analysis of several types of questions for each category,21 by leveraging the dependencies of the questions we found it useful to divide the sentence text into specific chunks as follows:

[PRE_AUX][AUX][POST_AUX][ROOT][POST_ROOT][COMPL_ROOT]

The delimiter indexes between chunks are given by the positions in the sentence of the words related to AUX and ROOT (above marked in bold). The remaining chunks are extracted on the basis of the latter. For the composition of the likely answers, the QA Shifter module has the task of recombining the question chunks in a different sequence, depending on the idiom under exam and considering also the wh-question type. Such an operation, which is strictly language specific, is accomplished thanks to an ad-hoc production rule system which takes language diversity into account. For instance, let the question under exam be:

Who could be Biden?

In this case, the chunks sequence will be as follows:

[PRE_AUX][could][POST_AUX][be][Biden][COMPL_ROOT]

where only AUX, ROOT and POST_ROOT are populated (in bold), while the other chunks are empty. In this case, a specific production rule (related to who) of the QA Shifter will recombine all the chunks in a different sequence, also adding another specific word in order to compose a likely assertion as follows:

[PRE_AUX][POST_AUX][Biden][could][be][COMPL_ROOT][Dummy]

At this point, joining all the chunks in such a sequence, the candidate sentence to be a likely assertion might be the following one:

Biden could be Dummy.

The meaning of the keyword Dummy will be discussed shortly. In all verbal phrases where ROOT is a copular verb22 (like be), the verb has the same properties as the identity function. Therefore, in the case of the above sentence, we can also consider the following candidate sentence:

Dummy could be Biden.

All wh-questions by their own nature require a factoid answer, made of one or more words (a snippet); so, in the presence of the question Who is Biden?, as an answer we expect something like:

Biden is something, (9)

but something is surely not the information we are looking for, unlike the elected president of United States or something similar. This means that, within the FOL expression of the query, something in (9) must be represented by a mere variable and not a term. In light of this, instead of something, this architecture uses the keyword Dummy: the rationale of this choice is that, during the creation of a FOL expression containing such a word, the Translation Service will impose the POS DM on Dummy, whose parsing is not expected by the Definite Clauses Builder, thus it will be discarded. At the end of this process, as the FOL expression of the query, we will have the following literal:

Be_VBZ(Biden_NNP(x1), x2), (10)

which means that, if the High Clauses KB contains the representation of Biden is the president of America, namely

Be_VBZ(Biden_NNP(x1), Of_IN(President_NN(x2), America_NNP(x3))),

querying with (10) by means of the Backward-Chaining algorithm, the latter will return as result a set of substitutions23 unifying with the previous clause as follows:

{v_41: x1, x2: Of_IN(President_NN(v_42), America_NNP(v_43))},

which contains, in correspondence of the variable x2, the logic representation of the snippet president of America as a possible and correct answer. Furthermore, starting from the lemmas composing the only answer-literal within the substitution, with a simple operation on a string it is possible to extract from the original sentence the minimum snippet containing such lemmas, so as to provide it in output instead of the substitutions.

6. Case-study

In this section, a Python prototype of a Telegram chatbot based on the AD-Caspar architecture is shown, focusing on how it deals with each type of interaction with the user. The following subsections cover all the types of interaction such chatbots are capable of, with more details regarding the nested reasoning, the runtime evaluation and the expressive richness.

21 Who, What, Where, When, How.
22 The verbs for which we want to have such a behavior can be defined by a parameter in a configuration file. For further details we refer the reader to the documentation in this work's Github repository.
23 The first substitution, even if not required, is given by the variables standardization within the Backward-Chaining algorithm.
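The chunk recombination for who-questions can be illustrated with a toy function. The fixed word positions and the single hard-coded rule are simplifications: the real QA Shifter relies on dependency-driven production rules and handles every wh-category.

```python
# Toy rendering of the QA Shifter idea for who-questions with a copular
# ROOT: split the question into chunks, then recombine them into a likely
# assertion ending with the Dummy placeholder.

def shift_who_question(question):
    words = question.rstrip('?').split()
    if words[0].lower() != 'who':
        raise ValueError('only who-questions are handled in this sketch')
    aux, root, *compl = words[1:]      # e.g. AUX='could', ROOT='be'
    post_root = ' '.join(compl)        # e.g. 'Biden'
    # Recombine as [POST_ROOT][AUX][ROOT][Dummy]: Biden could be Dummy.
    return f'{post_root} {aux} {root} Dummy.'

likely = shift_who_question('Who could be Biden?')
print(likely)  # Biden could be Dummy.
```

Since ROOT here is copular, a second candidate with the operands swapped (Dummy could be Biden) would also be generated by the full system, as discussed above.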
Fig. 6. AD-Caspar High and Low Clauses KBs changes, after assertions.

Fig. 7. Starting a Telegram chat session with an instance of AD-Caspar.

6.1. Starting, asserting and querying the chatbot

As the agent is started, to begin a new session the user has to provide the keyword hello, to make the agent come out of its idle state, and then choose the type of interaction in the chat window as follows:

1. An assertion, namely an affirmative verbal phrase with a dot at the end of the sentence (Fig. 7).
2. A question, with a question mark at the end of the sentence (Fig. 8).
3. A direct command (Fig. 9) or a routine definition, namely an imperative verbal phrase, possibly with subordinating conditions (in the case of a routine).

With regard to the first interaction type, the agent's behavior will be the same24 as in Caspar, with in addition the parallel population of the Low Clauses KB. As for the second type, its outcome will be a set of logical inferential substitutions containing composite literals (when not returning False) as answers to the questions. In a classical chatbot instance, such output should be replaced with proper snippets, custom answers or further questions, following the schema of a dialog system. The outcome of the third type of interaction is the same seen in Longo, Longo, and Santoro (2021), considering also meta-reasoning, which means that a Telegram instance can also be used as an IoT agent, as in Fig. 9.

In Fig. 6, the content of both the High and the Low Clauses KB after the events of Fig. 7 is shown, by means of the AD-Caspar native commands hkb() and lkb().

Fig. 8. Querying the Telegram Chatbot with who and when questions.

In Fig. 8 we can see how the chatbot is queried with wh-questions, giving back as result a substitution from the High Clauses KB (From HKB: True) containing a literal, which is a logic representation of a snippet-result in natural language. With regard to the substitution of the variable x5, the corresponding literal is the representation of the snippet Barack Obama, whose words are concatenated together with their POS. After the chatbot rebooting,25 as we can see in Fig. 10, the result will be extracted from the Low Clauses KB (From LKB: True), taking into account the confidence threshold (8), because the High Clauses KB is still empty (From HKB: False). Such a threshold, depending on the domain, can be changed by modifying the value of MIN_CONFIDENCE in config.ini (LKB Section) in the Github repository.

The Github repository reports detailed instructions about both the installation and the configuration of a base chatbot instance, encompassing all the above features.

6.2. Failing queries

In the bot's closed-world assumption, the bot can give back only answers unifying with the content of its own knowledge, otherwise it will

24 For which the reader is referred to Longo, Longo, and Santoro (2021).
25 The user has to provide again the word hello to start a new session, after the latter is expired.
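Reading such a threshold follows the standard configparser pattern. The fragment below mirrors the LKB section of config.ini described above; the exact layout of the repository's file, and in particular the placement of SHOW_REL in this section, are assumptions of this sketch.

```python
import configparser

# Hypothetical fragment modeled on the config.ini described in the text;
# the real repository file may organize these keys differently.
SAMPLE = """
[LKB]
MIN_CONFIDENCE = 0.6
SHOW_REL = True
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)
min_confidence = cfg.getfloat('LKB', 'MIN_CONFIDENCE')
show_rel = cfg.getboolean('LKB', 'SHOW_REL')
print(min_confidence, show_rel)  # 0.6 True
```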
Fig. 9. AD-Caspar execution of an IoT command in a Telegram chat session.

Fig. 10. Querying with who and when questions, after the chatbot rebooting and with the Low Clauses KB fed.

Fig. 11. Abductive results with confidence threshold = 0.6, after querying with when questions.

Fig. 12. Colonel West KB assertions (part. 1).

return False. Optionally, with the value of the parameter SHOW_REL set to True in the configuration file, the closest results can be shown together with their confidence (see Fig. 11).

6.3. Nested reasoning

In this subsection, it is shown how AD-Caspar addresses logical inference when definite clauses are possibly made of composite predicates, leveraging also assignment rules and clause conceptual generalizations. By taking inspiration from the features of the cognitive
Fig. 13. Colonel West KB assertions (part. 2).

Fig. 14. A successful reasoning on the Colonel West case's KB, before and after the chatbot rebooting (triggered by the hello hotword), getting clauses through abduction (with confidence threshold = 0.6) from the Low Clauses KB.

Table 2
Realtime cognitive performances (in seconds) of a Telegram chatbot engine based on AD-Caspar, in the case of successful reasoning for the question Colonel West is American? and three distinct KBs containing the clauses in Fig. 4, with a confidence threshold of 0.6.

Knowledge base   HKB     LKB+HKB
West25           0.377   0.469
                 0.378   0.353 (19/25)
                 0.437   0.385 (19/25)
                 0.374   0.366 (19/25)
                 0.355   0.399 (19/25)
West104          0.423   0.426
                 0.362   0.327 (19/104)
                 0.342   0.327 (19/104)
                 0.353   0.388 (19/104)
                 0.731   0.323 (19/104)
West303          0.407   0.463
                 0.421   0.357 (19/303)
                 0.377   0.333 (19/303)
                 0.461   0.368 (19/303)
                 0.443   0.387 (19/303)
Overall AVG      0.416   0.378

architecture SOAR (Rosenbloom, Laird, & Newell, 1993), which is able to reduce the symbolic distance between the goal state and the current state by leveraging appropriate operators, with the so-defined nested reasoning Caspar takes subgoals into account when better outcomes cannot be achieved. Since AD-Caspar inherits most features from its predecessor, among them there is such a high-level inference, thanks also to a proper expansion of the Clauses KB. The expanded knowledge base in Fig. 4 is achieved in a Telegram chatbot front-end: Figs. 12 and 13 show all the chatbot interactions required to feed the so-called Colonel West case's KB; Fig. 14 shows a successful reasoning, considering the case in which all the required clauses are within the High Clauses KB (From HKB: True); otherwise, after the chatbot rebooting, the same picture shows how the agent gets from the Low Clauses KB all the useful clauses through an abductive step with a confidence threshold equal to 0.6, and then how it achieves a successful reasoning after the transfer of the clauses into the High Clauses KB (From HKB: False, From LKB: True).

6.4. Runtime evaluation

In the scope of chatbot applications, the issue of responsiveness is worthy of attention, because the user should somehow have the feeling of relating to an intelligent being providing a reasonable average response time. In light of the above, to address such an issue, since a chatbot relies on the internet, its realtime performance depends mainly on the bandwidth of the internet connection and, in a second phase, on the chatbot engine. Since the bandwidth is the more volatile parameter, because it depends on physical features, protocols and temporary congestion, in this work we decided to focus on the engine performance considering a widespread specific hardware. For this reason, in order to achieve a runtime evaluation of the engine, an instance of AD-Caspar was tested, whose timings in seconds are shown in Table 2. The hardware the chatbot has been tested on is an Intel i7-8550U 1.80 GHz with 8 GB RAM. The first column Knowledge Base of Table 2 refers to three distinct KBs, each with a different size, containing the clauses seen in Fig. 4. Such clauses are asserted starting from the five sentences related to the Colonel West case, namely:

- Colonel West is American.
- Nono is a hostile nation.
- missiles are weapons.
- Colonel West sells missiles to Nono.
C.F. Longo et al. Cognitive Systems Research 81 (2023) 64–79

- When an American sells weapons to a hostile nation, that American in the scope of a distinct session, during a normal conversation, it is
is a criminal. most likely that words with same lemmas are considered to have the
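These five sentences form the classic first-order "Colonel West" knowledge base. As a rough illustration of the kind of backward-chaining proof performed over such definite clauses, consider the following toy prover; it is a simplified sketch for illustration only, not AD-Caspar's actual engine, and the clause encoding and names are our own assumptions:

```python
# Toy backward-chaining prover over the five "Colonel West" definite
# clauses. Illustrative sketch only, NOT AD-Caspar's engine; a real
# prover would also standardize clause variables apart (the distinct
# clause-local names used here avoid clashes).

def unify(x, y, s):
    """Unify terms x and y under substitution s; lowercase strings are
    variables. Returns the extended substitution, or None on failure."""
    if s is None:
        return None
    if x == y:
        return s
    if isinstance(x, str) and x.islower():
        return unify_var(x, y, s)
    if isinstance(y, str) and y.islower():
        return unify_var(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
        return s
    return None

def unify_var(v, t, s):
    if v in s:
        return unify(s[v], t, s)
    return {**s, v: t}

def subst(t, s):
    """Apply substitution s to term t, following variable bindings."""
    if isinstance(t, tuple):
        return tuple(subst(a, s) for a in t)
    while t in s:
        t = s[t]
    return t

# Definite clauses as (head, [body...]); Capitalized strings are constants.
KB = [
    (("American", "West"), []),
    (("Hostile", "Nono"), []),
    (("Missile", "M1"), []),
    (("Sells", "West", "M1", "Nono"), []),
    (("Weapon", "m"), [("Missile", "m")]),   # missiles are weapons
    (("Criminal", "x"), [("American", "x"), ("Weapon", "y"),
                         ("Sells", "x", "y", "z"), ("Hostile", "z")]),
]

def ask(goals, s=None):
    """Prove all goals by backward chaining; yield satisfying substitutions."""
    s = {} if s is None else s
    if not goals:
        yield s
        return
    first, rest = goals[0], goals[1:]
    for head, body in KB:
        s2 = unify(head, subst(first, s), dict(s))
        if s2 is not None:
            yield from ask(body + rest, s2)

print(any(ask([("Criminal", "West")])))  # True
```

Unification is also where the predicate-label problem discussed in Section 6.5 shows up: had the sale been asserted under a differently classified predicate label, the corresponding goal would not unify and the proof would fail.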
Together with each sentence's related clause, assignment rules and conceptual generalizations are also asserted, making it possible to increase the number of useful clauses from 5 in Fig. 3 up to 25 in Fig. 4. The first KB, namely West25, contains exactly those 25 clauses, while West104 and West303 contain the same clauses plus, respectively, 78 and 278 random clauses. The column HKB of Table 2 reports five computation timings for each of the KBs, considering only the High Clauses KB and the query: Colonel West is a criminal. We recall that an instance of AD-Caspar first attempts to achieve a reasoning using only the High Clauses KB, as in the case of Caspar; otherwise it gets likely candidates from the Low Clauses KB, considering their relevance to the query according to a specific confidence threshold (8). The third column, LKB+HKB, of Table 2 shows timings in the case where the High Clauses KB is initially empty, so that both the High Clauses KB and the Low Clauses KB are involved. Focusing on the third column, it appears clear that timings are in general lower than in the second column, except for the values in the first rows, which comprise the MongoDB access time and the filling of the High Clauses KB with clauses coming from the Low one via the aggregate_clauses_greedy algorithm. The other values in the third column are lower than in the second one because the reasoning is achieved over a smaller number of clauses (19) for each distinct knowledge base. The average timings in the bottom row show how the first rows' value is amortized, the loss being compensated by the gain achieved from reasoning on fewer clauses than the initial content of all the knowledge bases under examination (West25, West104 and West303). Intuitively, such a bias is expected to increase for larger knowledge bases, which demonstrates the effectiveness of this approach for two distinct tasks: firstly, dealing with larger knowledge bases by considering only the most relevant clauses in the reasoning process; secondly, permitting at the same time abduction as a pre-stage of deduction, in order to give back closer results also in the presence of non-successful reasoning.

6.5. Tuning: Expressive richness vs reasoning capabilities

The prototype of the proposed architecture has been endowed with scalable features, which permit shaping the morphology/expansion of the knowledge base in order to achieve a properly human-fashioned reasoning. Since AD-Caspar's logical reasoning capabilities are fulfilled by the Backward-Chaining algorithm, which is based on unification, the latter fails when predicate labels differ. Such differences, when labels are not properly chosen,26 can of course cause missed unifications, and therefore unsuccessful reasoning. In this architecture, such morphological differences between predicates expressing common meanings are due to misplaced classifications of synsets and parts-of-speech, which (as seen in Section 4.1) are linked together, separated by a colon, to form predicate labels. On the other hand, the semantic richness of a logical form is proportional to label diversification, taking the context of sentences into account. In light of the above, there is in general a meaningful trade-off between expressive richness and reasoning capabilities, especially when label classification models do not have a high accuracy in the domain. The AD-Caspar architecture gives the chance to shape such a trade-off, depending on the case, by limiting label classification to a specific subset of POS, modifying the related flags in the configuration file, in order to increase reasoning capabilities at the expense of expressiveness. Still, if we want to keep both a high expressive level and reasoning capabilities, AD-Caspar can take advantage of a feature inherited from Caspar, called Grounded Meaning Context (GMC). Such a feature is built on the assumption that, in the scope of a distinct session, during a normal conversation, it is most likely that words with the same lemma have the same meaning. In light of this, by setting a parameter named GMC_ACTIVE to the value True in the AD-Caspar configuration file, it is possible to enable the usage of the GMC, and thus to map all labels with the same lemma to the first encountered synset among them, in the scope of the same session.

Other parameters allow shaping the knowledge base expansion in other ways, depending on the use case, for instance the parts-of-speech on which to operate the generalization of the modifiers (adjectives, adverbs and prepositions) described in Section 4.1. For further technical details the reader is referred to the documentation in the GitHub repository.

7. Compliance with the criteria of the standard model of mind (SMM)

A novel proposal within the field of cognitive AI and cognitive modeling research is represented by the so-called Standard Model of Mind.27 This model is based on an abstraction from some of the most adopted cognitive architectures, namely SOAR (Rosenbloom et al., 1993), ACT-R (Ritter, Tehranchi, & Oury, 2019) and SIGMA (Rosenbloom, 2013), and on the consensus reached in the community over decades of research about what a cognitively-inspired architecture should be equipped with. The results have been distilled into the following levels of analysis of intelligent behavior (and in the following section, we show how the provided system is compliant with such a model):

1. Structure and Processing Mechanisms: processing in a human-like architecture is assumed to be based on a small number of task-independent modules, and should support both serial and parallel information processing mechanisms. Here the consensus reached converges towards the necessity of distinguishing between a Long-Term Declarative Memory and a Procedural one, as well as a control interface for the mutual interaction between the Procedural Module, the Perception/Motor modules and the Declarative Memory.
2. Memory and Content: in this case the main point of convergence regards the integration of hybrid symbolic–subsymbolic representations.
3. Learning Processes: in this case, the elements of convergence regard the following assumptions:
   (i) All types of long-term knowledge should be learnable from a human-like architecture.
   (ii) Learning is seen as an incremental process typically based on some form of backward flow of information through internal representations of past experiences.
   (iii) Learning over time scales is assumed to arise from the accumulation of learning over short-term experiences.
4. Perception and Motor Mechanisms: perceptual and motor modules are assumed to be modality specific (e.g. auditory, visual, etc.) and associated with specific buffers for access to the working memory.

In the next subsection we analyze the proposed architecture in light of its compliance with the Standard Model of Mind/Common Model of Cognition.

7.1. SMM assessment

In this section, we address a comparative evaluation between the convergence criteria related to the Standard Model of Mind and AD-Caspar, since such a model represents a starting point, a platform for developing high-profile research in both AI and (computational) Cognitive Science.

26 There can be cases where different labels for the same lemma are due to proper common-sense choices.
27 Later called the Common Model of Cognition.
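As a minimal illustration of the session-scoped Grounded Meaning Context described in Section 6.5, consider the following sketch; the "lemma:synset" label format and the class name are illustrative assumptions, not AD-Caspar's actual code:

```python
# Illustrative sketch of the Grounded Meaning Context (GMC) of
# Section 6.5: within a single session, every predicate label sharing
# a lemma is remapped to the first synset encountered for that lemma.
# The "lemma:synset" label format and class name are assumptions made
# for illustration, not AD-Caspar's actual implementation.

class GroundedMeaningContext:
    def __init__(self):
        self.first_synset = {}  # lemma -> first synset seen this session

    def normalize(self, label):
        """Return the label with its synset replaced by the session's
        first encountered synset for the same lemma."""
        lemma, synset = label.split(":")
        canonical = self.first_synset.setdefault(lemma, synset)
        return f"{lemma}:{canonical}"

gmc = GroundedMeaningContext()
print(gmc.normalize("sell:v.02"))  # first occurrence is kept: sell:v.02
print(gmc.normalize("sell:v.07"))  # same lemma, remapped: sell:v.02
print(gmc.normalize("buy:v.01"))   # different lemma, kept: buy:v.01
```

With GMC_ACTIVE set to False, the two sell labels would stay distinct and clauses using them could not unify: higher expressiveness, weaker reasoning, which is precisely the trade-off discussed in Section 6.5.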

7.1.1. Structure and processing mechanisms
With regard to these criteria, AD-Caspar is endowed with asynchronous sensor instances which permit both parallel and serial information processing mechanisms (Fig. 2). Furthermore, as seen in Section 4, this cognitive architecture is also endowed with a KB divided into a Declarative Memory and a Procedural one, plus a specific interface (the Smart Environment Interface in Fig. 1) for direct interaction with the Perception/Motor modules. As a consequence, all the provided constraints concerning these aspects are covered by AD-Caspar.

7.1.2. Memory and content
The working memory of the AD-Caspar family28 has a native integration of hybrid symbolic–subsymbolic representations. The subsymbolic representation comes from the neural dependency parser, which is trained for the classification of common-sense semantic relations (dependencies) between words in natural language. As for the symbolic representation, it takes the shape of beliefs, each of which interacts with a system of production rules, and of clauses in first-order logic. As explained in Section 4, these two groups of symbolic information can interact with each other in a (meta-)reasoning process.

7.1.3. Learning processes
As for these criteria, the system's long-term knowledge is learnable through the Translation Service pipeline. The learning process, which at this stage of the design does not take into account machine learning, reinforcement learning and similar approaches tout court,29 is based on information (in the form of clauses) provided by past interactions in natural language. Moreover, the Clauses KB is split into two layers representing Short- and Long-Term memory; therefore, we can affirm that learning over time scales arises from the accumulation of learning over short-term experiences. As a consequence, the current version of AD-Caspar partially addresses all the constraints involved at this level of analysis.

7.1.4. Perception and motor mechanisms
AD-Caspar fully supports any kind of interaction with perceptual and motor modules through the Sensor Instances, which have their specific buffers for access to the working memory. As long as there are production rules taking into account beliefs related to such external perceptions, specific plans containing high-level code for reactive interaction with the environment will be triggered.

7.1.5. Overall
Overall, the comparison of AD-Caspar with the constraints posed by the SMM shows how the proposed system fulfills (with the partial exception of the learning processes) almost all the common elements individuated by such an abstraction. Even if, as the authors themselves admit, the SMM is currently underspecified, it still represents a reference point for agent architectures encompassing the research fields of robotics, AI, neuroscience and cognitive science.

8. Conclusions and future works

In this article, we presented a framework, called AD-Caspar, combining natural language processing and first-order logic with the aim of instantiating cognitive chatbots capable of abductive–deductive reasoning. By means of its Translation Service module, AD-Caspar parses sentences in natural language in order to populate its KBs with beliefs or composite (nested) definite clauses endowed with rich semantics. Moreover, the QA Shifter module is able to rephrase wh-questions into likely assertions that one can expect as answers, thanks to a production rule system leveraging a dependency parser.

The combination of the Translation Service/Definite Clause Builder and the QA Shifter makes the proposed Telegram bot easily scalable with respect to the represented knowledge: the user only has to provide new sentences in natural language at runtime, without code/script refactoring, as in real conversations.

The architectural constraints adopted by AD-Caspar have shown that the system can be ascribed to the class of cognitively-inspired architectures, endowed with functional features for the design of computational models integrating different types of perception, especially reasoning from natural language and meta-reasoning. Being compliant with the underlying model of the main cognitive architectures also implies that a future exploration could involve its integration in architectures like SOAR or ACT-R, which have traditionally not been well equipped (see Lieto, Lebiere, and Oltramari (2018)) with the type of language and (related) reasoning capabilities exhibited by AD-Caspar.

In light of the above, AD-Caspar can also claim a place in the plethora of eXplainable Artificial Intelligence (XAI) systems, since it can provide clear and understandable explanations of the meta-reasoning involved in any decision-making process of the proposed architecture.

As future work, we plan to include a module for the design of enhanced dialog systems, taking contexts and history into account as well, and leveraging external ontologies in order to spontaneously create human-fashioned knowledge patterns.

CRediT authorship contribution statement

Carmelo Fabio Longo: Conceptualization, Methodology, Software, Writing – original draft. Paolo Marco Riela: Data curation, Writing – original draft. Daniele Francesco Santamaria: Writing – original draft. Corrado Santoro: Supervision. Antonio Lieto: Methodology, Validation, Writing – original draft, Co-supervision.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

No data was used for the research described in the article.

28 Caspar, AD-Caspar and SW-Caspar.
29 Although Sensor Instances can be fully compatible with them.

References

AIML Foundation (2023). Artificial intelligence markup language. Available at http://www.aiml.foundation/.
Anthony, Stephen, & Patrick, Jon (2015). Dependency based logical form transformations. In SENSEVAL-3: Third international workshop on the evaluation of systems for the semantic analysis of text.
Borji, Ali (2023). A categorical archive of ChatGPT failures. arXiv.
Bratman, M. E. (1987). Intentions, plans and practical reason. Harvard University Press.
Church, Kenneth Ward (2017). Word2Vec. Natural Language Engineering, 23(1), 155–162.
Cordeschi, Roberto (2002). The discovery of the artificial: Behavior, mind and machines before and beyond cybernetics, vol. 28. Springer Science & Business Media.
Davidson, Donald (1967). The logical form of action sentences. In The logic of decision and action (pp. 81–95). University of Pittsburgh Press.
Du, Jiaju, Qi, Fanchao, & Sun, Maosong (2019). Using BERT for word sense disambiguation. arXiv:1909.08358v1 [cs.CL].
D'Urso, Fabio, Longo, Carmelo Fabio, & Santoro, Corrado (2019). Programming intelligent IoT systems with a Python-based declarative tool. In The workshops of the 18th international conference of the Italian Association for Artificial Intelligence.
Elkins, Katherine, & Chun, Jon (2020). Can GPT-3 pass a writer's turing test? Journal of Cultural Analytics, 5(2), 17212.
Longo, Carmelo Fabio, Santoro, Corrado, Nicolosi-Asmundo, Marianna, Cantone, Domenico, & Santamaria, Daniele Francesco (2022). Towards ontological interoperability of cognitive IoT agents based on natural language processing. Intelligenza Artificiale, 16(1), 93–112.

Fichera, Loris, Messina, Fabrizio, Pappalardo, Giuseppe, & Santoro, Corrado (2017). A Python framework for programming autonomous robots using a declarative approach. Science of Computer Programming, 139, 36–55.
Gärdenfors, Peter (2000). Conceptual spaces: The geometry of thought. Cambridge, MA: MIT Press.
Hartshorne, Joshua K. (2014). What is implicit causality? Language, Cognition and Neuroscience, 29(7), 804–824.
Huang, Luyao, Sun, Chi, Qiu, Xipeng, & Huang, Xuanjing (2019). GlossBERT: BERT for word sense disambiguation with gloss knowledge. In Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (pp. 3509–3514). Hong Kong, China: Association for Computational Linguistics.
Jing, Kun, & Xu, Jungang (2019). A survey on neural network language models. arXiv.
Lieto, Antonio (2021). Cognitive design for artificial minds. Routledge.
Lieto, Antonio, Lebiere, Christian, & Oltramari, Alessandro (2018). The knowledge level in cognitive architectures: Current limitations and possible developments. Cognitive Systems Research, 48, 39–55.
Lieto, Antonio, Radicioni, Daniele P., & Rho, Valentina (2017). Dual PECCS: a cognitive system for conceptual representation and categorization. Journal of Experimental & Theoretical Artificial Intelligence, 29(2), 433–452.
Loebner, Hugh (2023). The Loebner prize. Available at https://www.ocf.berkeley.edu/~arihuang/academic/research/loebner.html.
Longo, Carmelo Fabio, Longo, Francesco, & Santoro, Corrado (2021). Caspar: Towards decision making helpers agents for IoT, based on natural language and first order logic reasoning. Engineering Applications of Artificial Intelligence, 104, Article 104269.
Longo, Fabio, & Santoro, Corrado (2018). A python-based assistant agent able to interact with natural language. In Proc. of the 19th workshop ''From Objects to Agents'', vol. 2215 (pp. 142–146).
Longo, Carmelo Fabio, & Santoro, Corrado (2020). AD-CASPAR: Abductive-deductive cognitive architecture based on natural language and first order logic reasoning. In 4th workshop on natural language for artificial intelligence (NL4AI 2020) co-located with the 19th international conference of the Italian Association for Artificial Intelligence (AI*IA 2020).
Longo, Carmelo Fabio, Santoro, Corrado, Cantone, Domenico, Santamaria, Daniele Francesco, & Nicolosi Asmundo, Marianna (2021). SW-CASPAR: Reactive-cognitive architecture based on natural language processing for the task of decision-making in the open world assumption. In Roberta Calegari, Giovanni Ciatto, Enrico Denti, Andrea Omicini, & Giovanni Sartor (Eds.), CEUR Workshop Proceedings: vol. 2963, Proc. of the 22nd workshop ''From Objects to Agents'' (pp. 178–193).
Longo, Carmelo Fabio, Santoro, Corrado, & Santoro, Federico Fausto (2019). Meaning extraction in a domotic assistant agent interacting by means of natural language. In 28th IEEE international conference on enabling technologies: Infrastructure for collaborative enterprises. IEEE.
Madhumitha, S., Keerthana, B., & Hemalatha, B. (2019). Interactive chatbot using AIML. International Journal of Advanced Networking & Applications, Special Issue.
Miller, George A. (1995). WordNet: A lexical database for English. Communications of the ACM, 38(11), 39–41.
Minsky, M. (1975). A framework for representing knowledge. In P. Winston (Ed.), The psychology of computer vision. New York: McGraw-Hill.
Moltmann, Friederike (2017). Natural language ontology. Oxford University Press.
Newell, Allen, Shaw, J. C., & Simon, Herbert A. (1959). The general problem solver. In Proceedings of the international conference on information processing, vol. 2 (pp. 256–265).
Parsons, Terence (1990). Events in the semantics of English: A study in subatomic semantics. MIT Press.
Partee, Barbara H. (1995). Lexical semantics and compositionality. In Lila R. Gleitman, & Mark Liberman (Eds.), Invitation to cognitive science: Language, vol. 1 (pp. 311–360).
Pinker, Steven (1994). The language instinct. William Morrow and Company.
Ramamohanarao, Kotagiri, & Harland, James (1994). An introduction to deductive database languages and systems. The International Journal of Very Large Data Bases, 3, 107–122.
Rao, A. S., & Georgeff, M. P. (1995). BDI agents: From theory to practice. In Proceedings of the first international conference on multi-agent systems (pp. 312–319).
Ritter, Frank E., Tehranchi, Farnaz, & Oury, Jacob D. (2019). ACT-R: A cognitive architecture for modeling cognition. Wiley Interdisciplinary Reviews: Cognitive Science, 10(3), Article e1488.
Rosenbloom, Paul S. (2013). The Sigma cognitive architecture and system.
Rosenbloom, P. S., Laird, J. E., & Newell, A. (1993). The Soar papers: Research on integrated intelligence. Cambridge, MA: MIT Press.
Schacter, D. L. (1987). Implicit memory: History and current status. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 501–518.
Schank, Roger C., & Abelson, Robert P. (1977). Scripts, plans, goals, and understanding: An inquiry into human knowledge structures (pp. 211–277). Hillsdale, NJ: Erlbaum.
Sutskever, Ilya, Vinyals, Oriol, & Le, Quoc V. (2014). Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, 27.
Turing, Alan (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Wilcox, Bruce (2023). Chatscript. Available at https://github.com/ChatScript/ChatScript.

Carmelo Fabio Longo is a Ph.D. at the University of Catania and a member of ARSLAB, led by Prof. Corrado Santoro. After his degree in Computer Science in 2000 and the Ph.D. achieved in 2022, he worked in medical care as a digitalization analyst and as a coordinator in projects involving robots. He is now about to start as a researcher at the ISTC-CNR Institute in Catania (Italy). His main research interests include: Artificial Intelligence, Cognitive Architectures, Natural Language Processing, Internet of Things and Semantic Web.

Paolo Marco Riela is a Ph.D. at the University of Catania. After the Ph.D. in Computer Science achieved in 2022, he works in the public health field as a statistician and data scientist. His main research interests include: Artificial Intelligence, Statistics, Epidemiology, and Software Engineering and Development.

Daniele Francesco Santamaria has been a Postdoctoral Researcher and Adjunct Professor at the Department of Mathematics and Informatics of the University of Catania since 2019. Since 2022 he has also been Adjunct Professor at the Department of Human Sciences of the same institution. His main research interests range from Computational Logic, Knowledge Representation and Reasoning and Semantic Web to Blockchain, Algorithms, and Software Engineering and Development.

Corrado Santoro is Associate Professor at the Department of Mathematics and Informatics of the University of Catania. He obtained his degree in Computer Engineering in 1997 and his Ph.D. in Computer and Electronic Engineering in 2000. His research interests include: Autonomous Systems, Artificial Intelligence, Robotics, Embedded Systems, Internet of Things, Distributed Systems and Multi-Agent Systems.

Antonio Lieto (Ph.D.) is an Assistant Professor in Computer Science at the Department of Computer Science of the University of Turin (Italy) and a Research Associate at the ICAR-CNR Institute in Palermo (Italy). He is an ACM Distinguished Speaker and the recipient of the ''Outstanding Research Award'' from the BICA Society (Biologically Inspired Cognitive Architectures Society). His research interests include: Artificial Intelligence, Knowledge Representation and Reasoning, Semantic/Language Technologies, Cognitive Systems and Architectures.