A cognitive architecture is a blueprint for intelligent agents. It proposes (artificial) computational processes that act like certain cognitive systems, most often like a person, or that act intelligently under some definition. Cognitive architectures form a subset of general agent architectures. The term ‘architecture’ implies an approach that attempts to model not only behavior, but also structural properties of the modelled system. These need not be physical properties: they can be properties of virtual machines implemented in physical machines (e.g. brains or computers).

Common to researchers on cognitive architectures is the belief that understanding (human, animal or machine) cognitive processes means being able to implement them in a working system, though opinions differ as to what form such a system can have: some researchers assume that it will necessarily be a computational system, whereas others argue for alternative models such as dynamical systems.

Cognitive architectures can be symbolic, connectionist, or hybrid. Some cognitive architectures or models are based on a set of generic rules, as, e.g., the Information Processing Language (e.g. Soar, based on the unified theory of cognition, or similarly ACT). Many of these architectures are based on the mind-is-like-a-computer analogy. In contrast, subsymbolic processing specifies no such rules a priori and relies on emergent properties of processing units (e.g. nodes). Hybrid architectures combine both types of processing (such as CLARION).

Connectionism is a set of approaches in the fields of artificial intelligence, cognitive psychology, cognitive science, neuroscience and philosophy of mind, that models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units. There are many forms of connectionism, but the most common forms use neural network models.

The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units.  Neural networks are by far the most commonly used connectionist model today, but there are some more generic approaches, too.

Is the Symbolic atomistic model (SAM) a cognitive architecture?

Hybrid Cognitive Architecture (Laitila)

The SAM principle, presented in this blog, is a hybrid cognitive architecture, which combines the symbolic and connectionist approaches. A paper about it was published at the Conference on Hybrid Artificial Intelligence Systems (Burgos, Spain, 2008).

Some links:

A Petri net (also known as a place/transition net or P/T net) is one of several mathematical modeling languages for the description of discrete distributed systems. A Petri net is a directed bipartite graph, in which the nodes represent transitions (i.e. discrete events that may occur, signified by bars) and places (i.e. conditions, signified by circles), and the directed arcs (signified by arrows) describe which places are pre- and/or postconditions for which transitions.

Like industry standards such as UML activity diagrams, BPMN and EPCs, Petri nets offer a graphical notation for stepwise processes that include choice, iteration, and concurrent execution. Unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis.

Because of their executability, Petri nets are an interesting formalism to compare with the symbolic atomistic model.

Formalism for Petri nets

A Petri net consists of places, transitions, and directed arcs. Arcs run from a place to a transition or vice versa, never between places or between transitions. The places from which an arc runs to a transition are called the input places of the transition; the places to which arcs run from a transition are called the output places of the transition.

Places may contain a natural number of tokens. A distribution of tokens over the places of a net is called a marking. A transition of a Petri net may fire whenever there is a token at the end of all input arcs; when it fires, it consumes these tokens, and places tokens at the end of all output arcs. A firing is atomic, i.e., a single non-interruptible step.

Execution of Petri nets is nondeterministic: when multiple transitions are enabled at the same time, any one of them may fire. If a transition is enabled, it may fire, but it doesn’t have to.  Since firing is nondeterministic, and multiple tokens may be present anywhere in the net (even in the same place), Petri nets are well suited for modeling the concurrent behavior of distributed systems.

The following formal definition is loosely based on (Peterson 1981). Many alternative definitions exist.

Syntax

A Petri net graph (called Petri net by some, but see below) is a 3-tuple (S, T, W), where

  • S is a finite set of places
  • T is a finite set of transitions
  • S and T are disjoint, i.e. no object can be both a place and a transition
  • W: (S \times T) \cup (T \times S) \to  \mathbb{N} is a multiset of arcs, i.e. it defines arcs and assigns to each arc a non-negative integer arc multiplicity; note that no arc may connect two places or two transitions.

The flow relation is the set of arcs:  F  = \{ (x,y) \mid W(x,y) > 0 \}. In many textbooks, arcs can only have multiplicity 1, and they often define Petri nets using F instead of W.

A Petri net graph is a bipartite multidigraph (S \cup T, F) with node partitions S and T.

The preset of a transition t is the set of its input places: {}^{\bullet}t = \{ s \in S \mid  W(s,t) > 0 \}; its postset is the set of its output places: t^{\bullet} = \{ s \in S \mid W(t,s) > 0 \}

A marking of a Petri net (graph) is a multiset of its places, i.e., a mapping M: S \to \mathbb{N}. We say the marking assigns to each place a number of tokens.

A Petri net (called marked Petri net by some, see above) is a 4-tuple (S, T, W, M_0), where

  • (S,T,W) is a Petri net graph;
  • M0 is the initial marking, a marking of the Petri net graph.

Executability of Petri nets

Petri nets include a specific execution logic, with some variations. In short (a small code sketch follows the list):

  • firing a transition t in a marking M consumes W(s,t) tokens from each of its input places s, and produces W(t,s) tokens in each of its output places s
  • a transition is enabled (it may fire) in M if there are enough tokens in its input places for the consumptions to be possible, i.e. iff \forall s: M(s) \geq W(s,t).
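As a minimal sketch of these two rules (not from the original text), a marking can be represented as a dictionary from places to token counts; the net below, with places p1, p2, p3 and a single transition t, is purely illustrative:

```python
import random

# Arc weights W[(x, y)] give the multiplicities; missing arcs have weight 0.
places = {"p1", "p2", "p3"}
transitions = {"t"}
W = {("p1", "t"): 1, ("p2", "t"): 2, ("t", "p3"): 1}

def enabled(marking, t):
    """t may fire iff every input place s holds at least W(s, t) tokens."""
    return all(marking[s] >= W.get((s, t), 0) for s in places)

def fire(marking, t):
    """Atomically consume W(s, t) tokens and produce W(t, s) tokens."""
    assert enabled(marking, t)
    return {s: marking[s] - W.get((s, t), 0) + W.get((t, s), 0) for s in places}

# One nondeterministic run from an initial marking M0.
M = {"p1": 1, "p2": 2, "p3": 0}
while any(enabled(M, t) for t in transitions):
    M = fire(M, random.choice([t for t in transitions if enabled(M, t)]))
print(M)   # {'p1': 0, 'p2': 0, 'p3': 1}
```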

We are generally interested in what may happen when transitions may continually fire in arbitrary order.

We say that a marking M' is reachable from a marking M in one step if M \to_G M', i.e. if some transition of the net graph G may fire in M and produce M'; we say that it is reachable from M if M {\to_G}^* M', where {\to_G}^* is the reflexive transitive closure of \to_G; that is, if it is reachable in 0 or more steps.

For a (marked) Petri net N=(S,T,W,M_0)\!, we are interested in the firings that can be performed starting with the initial marking M0. Its set of reachable markings is the set R(N) \ \stackrel{D}{=}\  \{ M' \mid M_0 {\to_{(S,T,W)}}^* M' \}

The reachability graph of N is the transition relation \to_G restricted to its reachable markings R(N). It is the state space of the net.

A firing sequence for a Petri net with graph G and initial marking M0 is a sequence of transitions \vec \sigma = \langle t_{i_1} \ldots t_{i_n}  \rangle such that M_0 \to_{G,t_{i_1}} M_1 \wedge \ldots  \wedge M_{n-1} \to_{G,t_{i_n}} M_n. The set of firing sequences is denoted as L(N).
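As an illustrative sketch (reusing the enabled and fire helpers from the earlier listing, and assuming the set of reachable markings is finite), R(N) and the labeled edges of the reachability graph can be computed by breadth-first exploration:

```python
from collections import deque

def reachable_markings(M0, transitions, enabled, fire):
    """Compute R(N) and the labeled edges of the reachability graph from M0."""
    freeze = lambda M: tuple(sorted(M.items()))   # hashable form of a marking
    seen, edges, queue = {freeze(M0)}, [], deque([M0])
    while queue:
        M = queue.popleft()
        for t in transitions:
            if enabled(M, t):
                M2 = fire(M, t)
                edges.append((freeze(M), t, freeze(M2)))
                if freeze(M2) not in seen:
                    seen.add(freeze(M2))
                    queue.append(M2)
    return seen, edges
```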

Types of Petri nets at different levels are shown below:

Petri net types.

Comparing Petri nets and Symbolic Atomistic Model

The formalism of Petri nets differs from that of computer programming languages, because each place has a content, namely a set of tokens. The purpose is to study firing and to balance between modeling power and analyzability.

In spite of the differences, I argue that Petri nets can be converted into the symbolic atomistic model (AHO objects) by some arrangements (a minimal sketch follows the list):

  1. Each place reserves an AHO object.
  2. Each transition reserves as many AHOs as it requires when its semantics are converted into atomistic form (atomic formulas).
  3. The contents of each place form a separate dynamic AHO object.
  4. Code for transitions is written in a high-level language.
  5. The execution logic is programmed into the run methods of the AHOs, allowing nondeterminism.
  6. Otherwise, the logic of SimulationWare could be used.
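A minimal sketch of arrangements 1–5 in Python; the class names PlaceAHO and TransitionAHO and their run method are hypothetical stand-ins for AHO objects, not the published implementation:

```python
import random

class PlaceAHO:
    """Arrangements 1 and 3: a place together with its dynamic token contents."""
    def __init__(self, name, tokens=0):
        self.name, self.tokens = name, tokens

class TransitionAHO:
    """Arrangements 2 and 4: a transition written in a high-level language."""
    def __init__(self, name, inputs, outputs):   # inputs/outputs: {PlaceAHO: weight}
        self.name, self.inputs, self.outputs = name, inputs, outputs

    def enabled(self):
        return all(p.tokens >= w for p, w in self.inputs.items())

    def run(self):
        """Arrangement 5: the execution logic lives in the run method."""
        for p, w in self.inputs.items():
            p.tokens -= w
        for p, w in self.outputs.items():
            p.tokens += w

def simulate(transitions, max_steps=100):
    """Nondeterministic scheduling, in the spirit of SimulationWare (arrangement 6)."""
    for _ in range(max_steps):
        choices = [t for t in transitions if t.enabled()]
        if not choices:
            break
        random.choice(choices).run()
```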

There is no demonstration yet that this works, but it seems plausible that a modified atomistic model (the architecture of AHOs) could simulate Petri nets, too.

Some links:

In graph theory, reachability is the notion of being able to get from one vertex in a directed graph to some other vertex. Note that reachability in undirected graphs is trivial: it is sufficient to find the connected components in the graph, which can be done in linear time.

Definition for reachability: For a directed graph D = (V, A), the reachability relation of D is the transitive closure of its arc set A, which is to say the set of all ordered pairs (s, t) of vertices in V for which there exist vertices v_0 = s, v_1, …, v_d = t such that (v_{i-1}, v_i) is in A for all 1 ≤ i ≤ d.

Algorithms for reachability fall into two classes: those that require preprocessing and those that do not. For the latter case, resolving a single reachability query can be done in linear time using algorithms such as breadth first search or iterative deepening depth-first search.
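For instance, a single reachability query on a directed graph can be answered with a breadth-first search in time linear in the number of vertices and arcs (a small self-contained sketch):

```python
from collections import deque

def reachable(adj, s, t):
    """Return True if vertex t is reachable from vertex s in the directed graph adj."""
    seen, queue = {s}, deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            return True
        for w in adj.get(v, ()):      # adj maps each vertex to its successors
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

# Example: a -> b -> c, with d isolated.
adj = {"a": ["b"], "b": ["c"], "c": [], "d": []}
print(reachable(adj, "a", "c"))   # True
print(reachable(adj, "c", "a"))   # False
```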

Reachability has often been modeled using Petri nets.

Producer & Consumer Reachability by Pataricza and Bartha

Because Petri nets can be used for modeling reachability and locks in source code, and because their input is the same as that of the symbolic atomistic model (SAM), it is natural to assume that SAM can be used for modeling reachability and source-code locks, too.

In the figure above, the different slices (parts of the graph) could be symbolic models captured from the methods of threads in Java, C++, or other code. Typically some threads are consumers and some are producers.

It is straightforward to visualize the logic of threads using the symbolic atomistic model as a base, so that critical resources can be seen as connective nodes between different parts of the threads.

Some links:

Granularity is the extent to which a system is broken down into small parts, either the system itself or its description or observation. It is the “extent to which a larger entity is subdivided”. For example, a yard broken into inches has finer granularity than a yard broken into feet.

Coarse-grained systems consist of fewer, larger components than fine-grained systems; a coarse-grained description of a system regards large subcomponents while a fine-grained description regards smaller components of which the larger ones are composed.

The terms granularity, coarse and fine are relative, used when comparing systems or descriptions of systems. An example of increasingly fine granularity: a list of nations in the United Nations, a list of all states/provinces in those nations, a list of all counties in those states, etc.

In physics a fine-grained description of a system is a detailed, low-level model of it. A coarse-grained description is a model where some of this fine detail has been smoothed over or averaged out. The replacement of a fine-grained description with a lower-resolution coarse-grained model is called coarse graining.

In molecular dynamics, coarse graining consists in replacing an atomistic description of a biological molecule with a lower-resolution coarse-grained model that averages or smooths away fine details. Coarse-grained models have been developed for investigating the longer time- and length-scale dynamics that are critical to many biological processes, such as lipid membranes and proteins.

In parallel computing, granularity means the amount of computation in relation to communication, i.e., the ratio of computation to communication. Fine-grained parallelism means individual tasks are relatively small in terms of code size and execution time.

The granularity of data refers to the fineness with which data fields are sub-divided. For example, a postal address can be recorded, with low granularity, as a single field:

  1. address = 200 2nd Ave. South #358, St. Petersburg, FL 33701-4313 USA

or with high granularity, as multiple fields (a small code sketch of both forms follows the list):

  1. street address = 200 2nd Ave. South #358
  2. city = St. Petersburg
  3. postal code = FL 33701-4313
  4. country = USA
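A small illustrative sketch of the same distinction as data structures (the field names are taken from the example above):

```python
from dataclasses import dataclass

# Low granularity: the whole address is a single opaque field.
@dataclass
class CoarseContact:
    address: str

# High granularity: the address is subdivided into separately queryable fields.
@dataclass
class FineContact:
    street_address: str
    city: str
    postal_code: str
    country: str

coarse = CoarseContact("200 2nd Ave. South #358, St. Petersburg, FL 33701-4313 USA")
fine = FineContact("200 2nd Ave. South #358", "St. Petersburg", "FL 33701-4313", "USA")
print(fine.city)   # fine-grained data lets one sub-field be addressed directly
```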

About granularity of computer models

Granularity can easily be defined for computer models such as source code models, including UML, abstract syntax trees, MOF, MDE, ADM, XML, etc. It is not clear what the granularity of typical AST implementations is, because an AST is often an abstraction of a computer language. Hence, granularity and abstraction have much in common; they are counterparts of each other.

In granular computing there are principles for studying variables. One typical class of variable granulation methods derives more from data clustering methodologies than from the linear systems theory informing other methods (see picture below).

About granularity of symbolic atomistic model and SDE

The symbolic atomistic model is by far the most fine-grained model for describing source code and its behavior. It models the original code symbols as AHO hybrid objects; although their commands are abstractions expressed as predicates, they do not lose any information. This is excellent, and it provides reversibility: the internal model can be converted back into the original code. In simulation the model captures the interpretations between symbols and creates side effects, which are the results of any computation of any symbol. Hence, the output of the simulation process is as fine-grained as the original symbols. It provides excellent ways to study program execution. On the other hand, too much information can accumulate over a long run, so there should be ways to limit the information captured from simulation.

Some links:

A Turing machine is a theoretical device that manipulates symbols contained on a strip of tape. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside of a computer.

A Turing machine that is able to simulate any other Turing machine is called a Universal Turing machine (UTM, or simply a universal machine).

Many misunderstandings of the Turing machine

There are several wrong and misleading conceptions of the Turing machine (TM), as argued by Andrew Wells [1].

The correct concept includes the facts that a TM should have a finite control automaton, a register or similar to store its state, an ability to read and write, and a tape or similar as memory. People often define the TM too narrowly, although its origin describes a person making computations.
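As an illustration of that minimal set of ingredients (finite control, state register, read/write head, tape), here is a tiny Turing machine simulator; the rule table, which simply flips the bits of the input, is an invented example and not from the original text:

```python
def run_tm(tape, rules, state="q0", accept="halt"):
    """Minimal Turing machine: finite control (rules), a state register, a tape, a head."""
    cells = dict(enumerate(tape))          # unbounded tape as a sparse dictionary
    head = 0
    while state != accept:
        symbol = cells.get(head, "_")      # "_" is the blank symbol
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Finite control: flip every bit, halt at the first blank.
rules = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
print(run_tm("1011", rules))   # prints 0100_ (the flipped input plus the final blank)
```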

Cognitive approach to the Turing machine

From the cognitive approach, the Turing machine is a concept that models a human making computations. It is then a simple problem-solving framework with built-in rules, which model the finite control automaton [2].

There is much information relating to cognitive architectures, the symbolic paradigm and how our mind works, as well as criticism of these ideas.

Symbolic analysis

Symbolic analysis (SAM) is a framework that has been built on automata defined by the corresponding symbols. Together, the symbols and their original semantics (command facts) build cognitive models from the source code. For more information, see KnowledgeWare.

Some links:

  • [1] A. Wells: Rethinking Cognitive Computation (Palgrave).
  • [2] H. Putnam: Mind, Language and Reality.
  • [3] J. Fodor: The Mind Doesn’t Work That Way.

Truth can have a variety of meanings, from the state of being the case, being in accord with a particular fact or reality, being in accord with the body of real things, events, actuality, or fidelity to an original or to a standard. In archaic usage it could be fidelity, constancy or sincerity in action, character, and utterance.

There are differing claims on such questions as what constitutes truth; what things are truthbearers capable of being true or false; how to define and identify truth; the roles that revealed and acquired knowledge play; and whether truth is subjective, relative, objective, or absolute. This post presents some topics on how symbolic analysis covers some attributes of truth and truth theories.

Correspondence theory for truth

For the truth to correspond, it must first be proved by evidence or an individual's valid opinion, which has a similar meaning or context. This type of theory posits a relationship between thoughts or statements on the one hand, and things or objects on the other. It is a traditional model which goes back at least to some of the classical Greek philosophers such as Socrates, Plato, and Aristotle.

Coherence theory for truth

For coherence theories in general, truth requires a proper fit of elements within a whole system. Very often, though, coherence is taken to imply something more than simple logical consistency; often there is a demand that the propositions in a coherent system lend mutual inferential support to each other. Some variants of coherence theory are claimed to characterize the essential and intrinsic properties of formal systems in logic and mathematics. However, formal reasoners are content to contemplate axiomatically independent and sometimes mutually contradictory systems side by side, for example, the various alternative geometries.

Consensus theory

Consensus theory holds that truth is whatever is agreed upon, or in some versions, might come to be agreed upon, by some specified group. Such a group might include all human beings, or a subset thereof consisting of more than one person.

Pragmatic theory

The three most influential forms of the pragmatic theory of truth were introduced around the turn of the 20th century by Charles Sanders Peirce, William James, and John Dewey. Although there are wide differences in viewpoint among these and other proponents of pragmatic theory, they hold in common that truth is verified and confirmed by the results of putting one’s concepts into practice.

Peirce defines truth as follows: “Truth is that concordance of an abstract statement with the ideal limit towards which endless investigation would tend to bring scientific belief, which concordance the abstract statement may possess by virtue of the confession of its inaccuracy and one-sidedness, and this confession is an essential ingredient of truth.”

Peirce emphasizes that ideas of approximation, incompleteness, and partiality, what he describes elsewhere as fallibilism and “reference to the future”, are essential to a proper conception of truth. Although Peirce uses words like concordance and correspondence to describe one aspect of the pragmatic sign relation, he is also quite explicit in saying that definitions of truth based on mere correspondence are no more than nominal definitions, which he accords a lower status than real definitions.

Semantic theory of truth

The semantic theory of truth has as its general case for a given language:

‘P’ is true if and only if P

where ‘P’ is a reference to the sentence (the sentence’s name), and P is just the sentence itself.

Alfred Tarski developed the theory for formal languages (such as formal logic). Here he restricted it in this way: no language could contain its own truth predicate, that is, the expression is true could only apply to sentences in some other language. The latter he called an object language, the language being talked about. (It may, in turn, have a truth predicate that can be applied to sentences in still another language.) The reason for his restriction was that languages that contain their own truth predicate will contain paradoxical sentences like the Liar: This sentence is not true. See The Liar paradox. As a result Tarski held that the semantic theory could not be applied to any natural language, such as English, because they contain their own truth predicates. Donald Davidson used it as the foundation of his truth-conditional semantics and linked it to radical interpretation in a form of coherentism.

Truth in logic

Logic is concerned with the patterns in reason that can help tell us if a proposition is true or not. However, logic does not deal with truth in the absolute sense, as for instance a metaphysician does. Logicians use formal languages to express the truths which they are concerned with, and as such there is only truth under some interpretation or truth within some logical system.

A logical truth (also called an analytic truth or a necessary truth) is a statement which is true in all possible worlds or under all possible interpretations, as contrasted with a fact (also called a synthetic claim or a contingency), which is only true in this world as it has historically unfolded. A proposition such as “If p and q, then p” is considered to be a logical truth because it is true in virtue of the meaning of the symbols and words in it, and not because of any facts of any particular world. Such propositions could not be untrue.
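As a small illustration (not from the original text), the claim “true under all possible interpretations” can be checked mechanically by enumerating every assignment of truth values to p and q:

```python
from itertools import product

# Material implication: "if a then b" is false only when a is true and b is false.
implies = lambda a, b: (not a) or b

# "If p and q, then p" holds under every interpretation, so it is a logical truth.
print(all(implies(p and q, p) for p, q in product([True, False], repeat=2)))  # True
```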

How does Symbolic Analysis cover the theories (presented above)?

Symbolic analysis has been created based on Prolog (predicate logic), so that each symbol of the logic model refers to the corresponding term of a program, reverse engineered from an application. Therefore, at the lowest level, each term in the atomistic symbolic model is a proposition, as is typical for truth in logic. Each term has been implemented as an AHO artefact, a symbolic element having an excellent correspondence to the code. Links between these elements are coherent, having a symbolic language to describe the semantics of each term (semantic theory of truth).

In simulation (aho:run()), AHO elements produce results, which are side effects caused by each element. They mimic the computations of the program. Hence, the user can compare the side effects with his or her hypotheses at a higher level of logic. Whether those assumptions agree or differ, this information has pragmatic value from the program-comprehension point of view. For possible contradictions, reasons should be sought; they are possible bugs or errors in the code.

Some links:

Computer science or computing science (sometimes abbreviated CS) is the study of the theoretical foundations of information and computation, and of practical techniques for their implementation and application in computer systems.

Main theories of CS

The study of the theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a computational problem.

The broader field of theoretical computer science encompasses both the classical theory of computation and a wide range of other topics that focus on the more abstract, logical, and mathematical aspects of computing.

CS Theories vs Symbol-Driven Engineering

SDE is a tuple (GrammarWare, ModelWare, SimulationWare, KnowledgeWare).


GrammarWare has strong relations to compilation theory, ModelWare to graph theory, SimulationWare to computation theory, and KnowledgeWare to information theory. Altogether, they build a platform for evaluating computability theory from the sequential approach (not the parallel one), which has connections to algorithm theory.

Some theories not considered in SDE are neural networks, theories for hardware, etc.

Some links:

Computational semiotics is an interdisciplinary field that applies, conducts, and draws on research in logic, mathematics, the theory and practice of computation, formal and natural language studies, the cognitive sciences generally, and semiotics proper. A common theme of this work is the adoption of a sign-theoretic perspective on issues of artificial intelligence and knowledge representation. Many of its applications lie in the field of computer-human interaction (CHI) and the fundamental devices of recognition (work at IASE in California).

Computational semiotics is the branch of semiotics which deals with the study and application of logic and computation to formal and natural languages in terms of cognition and signs.

One part of this field, known as algebraic semiotics, combines aspects of algebraic specification and social semiotics, and has been applied to user interface design and to the representation of mathematical proofs.

Computational Semiotics by Gudwin

The figure below (http://www.dca.fee.unicamp.br/~gudwin/compsemio/Image24.gif):

Singularities captured from real world.

Gudwin describes knowledge as knowledge units (see figure above).

Computational Semiotics vs Symbol-Driven Engineering

SDE expresses phenomena using symbols. There are interpretations between symbols, expressed in predicates.

The classification of knowledge units by Gudwin is below (the origin is from Charles Peirce):

Classification of knowledge units.

First, source code fits well into the branch of the tree that starts from the node rhematic. Second, when code is considered as sequences or traces, the dicent approach is relevant. Third, when some features of the code or its assumed behavior are considered, the argumentative approach is relevant.

Symbolic analysis is then a tuple (rhematic, dicent, argumentative).

Mastering these three branches of the knowledge tree makes it possible to master source code.

Some links:

Executable UML, often abbreviated to xtUML or xUML, is the evolution of the Shlaer-Mellor method to UML. Executable UML graphically specifies a system using a profile of the UML. The models are testable, and can be compiled into a less abstract programming language to target a specific implementation. Executable UML supports MDA through specification of platform-independent models, and the compilation of the platform-independent models into platform-specific models.

Executable UML is used to model the domains in a system. Each domain is defined at the level of abstraction of its subject matter independent of implementation concerns. The resulting system view is composed of a set of models represented by at least the following:

  • The domain chart provides a view of the domains in the system, and the dependencies between the domains.
  • The class diagram defines the classes and class associations for a domain.
  • The statechart diagram defines the states, events, and state transitions for a class or class instance.
  • The action language defines the actions or operations that perform processing on model elements.

Shortcomings of Executable UML

UML doesn’t define the semantics of a programming language. Hence, it is not complete enough to describe the behavior of any exact system. In order to correct this shortcoming, some new principles are needed. Some of them are automata theory and symbol-driven engineering.

Automata Theory and Symbol-Driven Engineering (SDE)

Automata theory defines a behavior model for the corresponding code elements. Therefore, this theoretical background eliminates the gap between program behavior and the corresponding model, such as UML.

In SDE every symbol has an automaton behind it. In order to execute those symbols, an invocation Symbol:run is needed, plus some specific initializations for certain symbols such as definitions (method invocations and object and variable definitions). See the figure below.
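A minimal sketch of this idea, assuming nothing about the real SDE implementation: each symbol carries a small automaton, and the Symbol:run invocation is modeled here as a run() method that steps that automaton; the class, state, and event names are hypothetical:

```python
class Symbol:
    """Hypothetical SDE-style symbol: a code element with an automaton behind it."""
    def __init__(self, name, transitions, state="initial"):
        self.name = name
        self.state = state
        self.transitions = transitions     # (state, event) -> (next_state, side_effect)

    def run(self, event):
        """Step the automaton; return the side effect of this execution step."""
        self.state, effect = self.transitions[(self.state, event)]
        return effect

# Illustrative automaton for a variable-definition symbol.
var_x = Symbol("x", {
    ("initial", "define"): ("defined", "allocate x"),
    ("defined", "assign"): ("assigned", "x := value"),
})
print(var_x.run("define"))   # allocate x
print(var_x.run("assign"))   # x := value
```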

Some links:

Round-trip engineering is a functionality of software development tools that provides generation of models from source code and generation of source code from models; this way, existing source code can be converted into a model, be subjected to software engineering methods and then be converted back.

Round trip is a particular model perspective in the UML standard, and it covers conceptual elements of the full execution use case that happens in an implemented system. This includes the network communication, in typical web-based applications.

Round-trip engineering vs. Symbolic Analysis

Whereas round-trip engineering creates typical UML models from code, symbolic analysis creates symbolic atomistic models from code: from GrammarWare to ModelWare.

Some links:

Erkki Laitila, PhD (2008) computer engineer (1977)