
In mathematics and logic, a traditional proof starts from one or more axioms. Clauses are used as steps (bridges) towards higher-order conclusions. Lemmas and corollaries are often useful for marking the route to the final result, which is usually a theorem and its proof.

Below are the most important concepts for deriving logical conclusions:

  1. An axiom (or postulate) is a proposition that is not proved or demonstrated but considered to be either self-evident, or subject to necessary decision. Therefore, its truth is taken for granted, and it serves as a starting point for deducing and inferring other (theory-dependent) truths.
  2. A lemma is simultaneously a conclusion drawn from the premises below it and a premise for the conclusions above it.
  3. A corollary is a statement which follows readily from a previous statement. In mathematics a corollary typically follows a theorem. The use of the term corollary, rather than proposition or theorem, is intrinsically subjective. Proposition B is a corollary of proposition A if B can readily be deduced from A, but the meaning of readily varies depending upon the author and context.
  4. A theorem is a statement which has been proven on the basis of previously established statements, such as other theorems, and previously accepted statements, such as axioms.

Symbolic analysis and the mathematical dictionary

In Symbolic Analysis we use the concepts above in the following way:

  1. Symbols are the axioms to start from. Some of them are grounded, but some are non-grounded: fuzzy, vague, or mental models without exact information.
  2. Those symbols that are grounded and captured from source code have definite semantics. The lemma is that for any symbol having a semantic notation, a symbolic notation can be created, because each grammar term is by default independent of the other grammar terms. Therefore, each symbol can be simulated separately.
  3. The corollary is that each source language symbol is an automaton, which can be programmed in a tool as a state machine having the same principal logic as a Turing machine.
  4. The final conclusion from steps 1, 2 and 3 above is that by simulating source code it is possible to mimic the original program execution step by step (see the sketch below). This lays a foundation for interactive program proof.
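
To make the conclusion concrete, here is a minimal sketch, in Python rather than in the Symbolic language, of the idea that each grounded symbol can be simulated as its own small automaton. All names below are hypothetical illustrations, not the author's actual implementation.

```python
class SymbolAutomaton:
    """A source-language symbol simulated as a small state machine."""

    def __init__(self, name, semantics):
        self.name = name            # e.g. the source text "x = x + 1"
        self.semantics = semantics  # callable: environment -> environment
        self.state = "idle"

    def fire(self, env):
        """One simulation step: read the environment, apply the symbol's
        semantics, write the result back (read/transition/write, in the
        spirit of a Turing machine's cycle)."""
        self.state = "running"
        env = self.semantics(env)
        self.state = "done"
        return env

# Two assignment symbols, each simulatable separately, then run in sequence
# to mimic the original program execution step by step.
inc_x = SymbolAutomaton("x = x + 1", lambda e: {**e, "x": e["x"] + 1})
dbl_y = SymbolAutomaton("y = 2 * x", lambda e: {**e, "y": 2 * e["x"]})

env = {"x": 1}
for symbol in (inc_x, dbl_y):
    env = symbol.fire(env)
print(env)  # {'x': 2, 'y': 4}
```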

Some links:

Proof of Symbolic Analysis described using 7 automata.

There is an attractive presentation about presenting knowledge from various roles, such as practitioners, software people, and scientists: How to tell stuff to the computer. It describes a triangle (see below), whose corners are practical domain knowledge (lower left corner), software artifacts (top corner) and science (lower right corner). The picture proposes some technologies inside the triangle. The most important things in the triangle are, in the writer's opinion, the steps along the lines connecting the corners.

Triangle of Knowledge Representation (KR).

In his/her conclusion the writer forecasts that in the future there will be a revolution caused by description logics. I warmly agree with that conclusion, because logic has a very strong role in the framework of symbolic analysis.

However, it is difficult to see where the beef of description logic is here: the text contains traditional monolithic Lisp. Nevertheless, the idea of the title, Marriage of Logic and Objects, is a very good vision. I have had the same goal in the architecture of AHO hybrid objects. Furthermore, there is a solid contact surface between the semantic web and symbolic analysis (see more).

Symbolic and holistic approaches for estimating knowledge produced by the software

The triangle (above) is useful as a base for illustrating software development and its knowledge representation, too. In the lower triangle (see below) I have named the corners respectively: domain knowledge, source of the program, and information system (IS) pragmatics caused by the software.

Software Knowledge Representation (SKR). (Laitila 2010)

The last corner is not science, as in the triangle above; instead it stands for all attempts to understand the software and its value as an empirical product. The last corner is thus an attempt to get empirical and practical research information from the implemented software. It is a broad approach. It has two sides:

  1. a problem-specific approach supported by reverse engineering, and
  2. a holistic approach for evaluating the whole.

There are some essential roles in the figure. All essential information is thought to be stored in an imagined megamodel (specification, resource information, sprints, tests, etc.).

The three lines are:

  1. The left line describes software development, from domain knowledge down to code.
  2. The line from the top to the lower right corner is symbolic analysis, containing the technology spaces GrammarWare, ModelWare, SimulationWare and KnowledgeWare. For practical purposes there is a problem reasoning technology (PRT) close to the right corner.
  3. The bottom line is a problem, because there is no direct support for estimating how well a system satisfies all possible user needs, but there are some technologies for creating end user services so that they can be mapped into code and remain visible in the system. SOA, aspects, the Zachman architecture and metrics are some means for that purpose.

Some links:

We have a definition for a scientific symbolic framework at:

In this post we use it for a domain-specific purpose: handling a navigator, the software JvnMobileGis.

For analyzing this kind of practical application together with its source, there is a relevant approach in modeling, MDE, with the concepts CIM, PIM and PSM: the Computation Independent Model, the Platform Independent Model and the Platform Specific Model. Some parts of them are domain-specific (DS) and some implementation-specific (IS).

A specific framework for a navigator using symbolic analysis

We define the symbolic analysis framework for navigating in 10 levels as follows:

  1. Ontology is a set of domain-specific (DS) concepts for a navigator plus implementation-specific (IS) symbols. Concepts can be regarded as non-grounded higher-level symbols. Navigator concepts are the map, the objects, the features of an object, a road, etc. The implementation-specific concepts are a menu, a user interface, a database, a socket, etc.
  2. Epistemology is a set of transformation rules from concepts to IS symbols. There are two directions: one describes how to develop software and map features to code, and the other describes transformation principles for how symbols can be connected into concepts. Both transformation directions need some knowledge and create new knowledge. They describe the semantics of each symbol in the ontology.
  3. Paradigm is here reductionist: how to describe the ontology and epistemology and the theories and methods as atomic elements. Its “competitor” is the holistic approach.
  4. Methodology is a set of theories about how the ontology is transformed, using the epistemology, into information capable of expressing knowledge. There are domain-specific theories for the product, the navigator, plus implementation-specific theories for the software, expressed as a symbolic notation.
  5. Method is any way to use the framework in practice. Some methods are for the product and its UI, some for developing software, and some for analyzing it.
  6. Tool is a specific means to apply the method in practice. A tool can be anything that applies (here) symbolic execution or symbolic analysis, for example for simulating code. The user can act as a tool, too, in order to do something that is impossible for the computer, or to check whether the computer works correctly or not.
  7. Activity is a human interaction intended for understanding code. The high-level types of activities are a) using the product, b) forward engineering for creating artefacts, or c) reverse engineering: finding a bug, browsing code in order to understand some principles, etc.
  8. Action is a piece of activity: using the product or forward or reverse engineering.
  9. Sub-action is a part of an action. Lowest sub-actions are primitives like reading an item, making a decision etc.
  10. Lowest level is practical data for the method, tool, activity, action and sub-action. In symbolic analysis practical data can be non-symbolic or symbolic. Non-symbolic data in a program can have any type of the type system of the original source code. Symbolic data can have any type in the ontology. It is thus much richer than the non-symbolic notation.

Using levels 1-10, a complete conceptual framework can be written for any programming language, any operating system and any application area. There are, as we know, limitations on how to ground concepts, but we can model them in many phases using the modeling technology. After the modeling process we can in most cases sharpen our concepts down to the symbolic level.
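
As an illustration of levels 1 and 2, the following Python sketch encodes a fragment of the navigator ontology and one transformation rule in each direction. The symbol names are hypothetical examples taken from the list above, not identifiers from JvnMobileGis.

```python
# Level 1: ontology = domain-specific (DS) concepts + implementation-specific
# (IS) symbols, as listed in the framework above.
ONTOLOGY = {
    "DS": ["map", "object", "feature", "road"],
    "IS": ["menu", "user_interface", "database", "socket"],
}

# Level 2: epistemology = transformation rules. One direction maps a concept
# to the IS symbols implementing it (development direction) ...
CONCEPT_TO_SYMBOLS = {"map": ["database", "user_interface"]}

# ... and the other direction connects symbols found in code back to concepts
# (reverse engineering direction).
def symbols_to_concepts(symbols):
    """Return the concepts whose implementation uses all given symbols."""
    return [c for c, syms in CONCEPT_TO_SYMBOLS.items()
            if all(s in syms for s in symbols)]

print(symbols_to_concepts(["database"]))  # ['map']
```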

Some links

In computer science, symbolic execution (also symbolic evaluation) refers to the analysis of programs by tracking symbolic rather than actual values, a case of abstract interpretation. The field of symbolic simulation applies the same concept to hardware. Symbolic computation applies the concept to the analysis of mathematical expressions.

Symbolic execution is used to reason about all the inputs that take the same path through a program.

Symbolic execution is useful for software testing because it can analyse if and when errors in the code may occur. It can be used to predict what code statements do to specified inputs and outputs. It is also important for considering path traversal.

Symbolic execution is used to reason about a program path by path. This may be superior to reasoning about a program input by input, as dynamic program analysis does.
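
To illustrate, here is a minimal hand-rolled sketch of symbolic execution in Python. Real tools automate the forking and solve the path conditions with a constraint solver; the tiny statement format here is invented purely for the example.

```python
def symbolic_run(pc, env, stmts, paths):
    """Walk a tiny statement list; fork at each 'if' instead of testing it,
    carrying a path condition (pc) and a symbolic environment (env)."""
    if not stmts:
        paths.append((list(pc), dict(env)))
        return
    stmt, rest = stmts[0], stmts[1:]
    if stmt[0] == "assign":            # ("assign", var, expr-string)
        _, var, expr = stmt
        symbolic_run(pc, {**env, var: expr}, rest, paths)
    elif stmt[0] == "if":              # ("if", cond, then-stmts, else-stmts)
        _, cond, then_s, else_s = stmt
        symbolic_run(pc + [cond], env, then_s + rest, paths)            # true branch
        symbolic_run(pc + [f"not({cond})"], env, else_s + rest, paths)  # false branch

# Program under analysis:  if x > 10: y = x - 10  else: y = 10 - x
program = [("if", "x > 10",
            [("assign", "y", "x - 10")],
            [("assign", "y", "10 - x")])]
paths = []
symbolic_run([], {"x": "x"}, program, paths)
for pc, env in paths:
    print("path condition:", pc, "-> y =", env["y"])
# path condition: ['x > 10'] -> y = x - 10
# path condition: ['not(x > 10)'] -> y = 10 - x
```

Each printed path stands for the whole equivalence class of inputs satisfying its path condition, which is exactly the path-by-path reasoning described above.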

Symbolic execution vs Symbolic Analysis (Laitila, 2008)

Symbolic execution emphasizes execution, traversing program paths. Symbolic analysis has the same purpose, but furthermore, it is a formalism to run/execute symbols using their internal state automata. The automaton can be programmed to do anything that is characteristic of the symbol. One excellent feature of symbolic analysis is its redundancy and the internal semantics of each symbol, due to its clause notation in the Symbolic language. It is possible to reconstruct parse trees from the symbols so that the side effects caused by any symbol can be matched with the corresponding symbols. This makes it possible to partially verify the code.

Some links:

There is a nice blog by Robert MacIntosh intended for PhD students at:

He describes light at the end of the research tunnel. There are some steps in the tunnel, forming a scientific framework for researchers to follow:

  • Ontology … to do with our assumptions about how the world is made up and the nature of things
  • Epistemology … to do with our beliefs about how one might discover knowledge about the world
  • Methodology … to do with the tools and techniques of research

The author claims that ontology, epistemology and methodology are the three pillars of a thesis.

An extended framework with applications for symbolic analysis

We define symbolic analysis as a framework (light in the tunnel) in 10 levels as follows:

  1. Ontology is a set of symbols as well as concepts made by the user. Note: concepts are higher-level, non-grounded symbols.
  2. Epistemology is a set of transformation rules for symbols, in order to get knowledge. They describe the semantics of each symbol in the ontology.
  3. Paradigm is here symbolic analysis: how to describe ontology and epistemology and the theories and methods. Its “competitors” are static and dynamic analyses.
  4. Methodology is a set of theories about how the ontology is transformed, using the epistemology, into information capable of expressing knowledge. There are theories for parsing, making a symbolic model, simulating the model, etc.
  5. Method is any way to use the methodology in practice. Some methods are control flow analysis, making a call tree etc.
  6. Tool is a specific means to apply the method in practice. A tool can be anything that applies (here) symbolic execution or symbolic analysis, for example for simulating code.
  7. Activity is a human interaction intended for understanding code. Some activities are finding a bug, browsing code in order to understand some principles etc.
  8. Action is a piece of an activity, for example browsing items, selecting a view or making a hypothesis.
  9. Sub-action is a part of an action. Lowest sub-actions are primitives like reading an item, making a decision etc.
  10. Lowest level is practical data for the method, tool, activity, action and sub-action. In symbolic analysis practical data can be non-symbolic or symbolic. Non-symbolic data in a program can have any type of the type system of the original source code. Symbolic data can have any type in the ontology. It is thus much richer than the non-symbolic notation.

Using levels 1-10, a complete conceptual framework can be written for any programming language and any operating system. There are, however, some limitations on how to reverse engineer different kinds of features of source code. In order to alleviate these problems, symbolic analysis has a rather expressive format: each relation is expressed as a Prolog predicate, which can implicitly point to its neighbour symbols, even though there is no definition for their semantics.
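
The actual format is Prolog clauses; the following Python sketch only illustrates the idea that relations can reference neighbour symbols by name, so that navigation works even for symbols whose semantics were never recovered. The facts and names are hypothetical.

```python
# Relation facts in the spirit of Prolog clauses: (relation, subject, object).
facts = [
    ("calls", "main", "parse_args"),
    ("calls", "main", "native_helper"),  # neighbour with unknown semantics
]
semantics = {"main": "...", "parse_args": "..."}  # no entry for native_helper

def neighbours(symbol):
    """Follow relations regardless of whether the target is defined."""
    return [obj for rel, subj, obj in facts if subj == symbol]

for n in neighbours("main"):
    print(n, "->", semantics.get(n, "semantics not (yet) defined"))
```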

Levels 7-9 tie the framework into action theory, which is empirical research.

Some links

Man and His Symbols is a famous book edited by Carl Jung. He was one of the great doctors of all time and one of the great thinkers of that century. His object always was to help men and women to know themselves, so that by self-knowledge and thoughtful self-use they could lead full, rich, and happy lives.

The book contains the following parts:

  • Part 1. Approaching the Unconscious
  • Part 2. Ancient Myths and Modern Man (Joseph L. Henderson): The Eternal Symbols
  • Part 3. The Process of Individuation (M.-L. von Franz): The Pattern of Psychic Growth
  • Part 4. Symbolism in the Visual Arts (Aniela Jaffé)
  • Part 5. Symbols in an Individual Analysis (Jolande Jacobi)

The book is a great work describing symbols and symbolism which cannot be completely explained. It is psychology.

Dreams and feelings, and our attitudes towards different kinds of icons such as Coca-Cola, Nokia, the Bible or sudden death, are strongly personal. Many features of our behavior depend on our temperament or on personal characteristics inherited at birth.

Symbolism and symbolic analysis

Symbolic analysis, presented in this blog, is the opposite of Jung's work. In it, symbols are assumed to be either formal or known. If some symbol is neither known nor formal, it can be skipped if it is not relevant. If it is relevant and not known, there should be a learning process in order to make it familiar to the user. We cannot run a systematic learning process for dreams and the other typical phenomena described in Jung's book. However, the book is a pleasure to read, with its great figures and photos. And it describes the real, unfamiliar life of everybody.

There are thus two kinds of symbols: those of psychology and those of formal notations, as in computer science.


Cognition is the research term for “the process of thought“. Usage of the term varies in different disciplines; for example in psychology and cognitive science, it usually refers to an information processing view of an individual’s psychological functions. Other interpretations of the meaning of cognition link it to the development of concepts; individual minds, groups, and organizations.

Cognitive space uses the analogy of location in two, three or higher dimensional space to describe and categorize thoughts, memories and ideas. Each individual has his/her cognitive space, resulting in a unique categorization of their ideas. The dimensions of this cognitive space depend on information, training and finally on a person’s awareness. All this depends globally on the cultural setting.

When understanding software we need certain types of cognitive spaces. In the book Symbolic Analysis for PC the spaces are named:

  1. The most abstract space is Concept (an unlimited definition of a thing).
  2. The next, more concrete space is Context (a situation).
  3. The next, more concrete space is Architecture slice (a high-level module with its dependencies and attributes).
  4. The most concrete space is Slice, which refers to symbols (a part of the program).

A typical piece of software (a slice) can be modeled upwards in a person's mind using these cognitive spaces, as sketched below. It has a certain meaning to be understood (concept), it has some ways to be used (context), and it can have some extensions towards the architecture implementation (architecture slice). It is the responsibility of the developer and maintainer to capture this information, but tools can be useful in shortening that time.
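
A minimal sketch of the four spaces as a chain from concrete code to the most abstract understanding; the sorting example and all descriptions are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Space:
    level: str
    description: str
    more_abstract: Optional["Space"] = None  # link towards abstraction

# Hypothetical example of modeling a slice upwards, space by space.
concept = Space("Concept", "sorting: arranging items into order")
context = Space("Context", "sorting invoices by date in a report", concept)
arch    = Space("Architecture slice", "ReportModule depends on Sorter", context)
slice_  = Space("Slice", "code of quicksort() and its callers", arch)

s = slice_
while s:                      # traverse as a maintainer would, upwards
    print(f"{s.level}: {s.description}")
    s = s.more_abstract
```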

An analogy between topological understanding (see figure) and software understanding is clear.

Intelligence is an umbrella term describing a property of the mind including related abilities, such as the capacities for abstract thought, reasoning, planning, problem solving, communication, and learning. Of these, problem solving is the most promising area from the point of view of symbolic analysis.

There is much research on what intelligence is and how to define it (see more).

J. P. Guilford is one of those researchers. He explored the scope of the adult intellect by providing the concept of intelligence with a strong, comprehensive theoretical backing. The Structure-of-Intellect model (SI model) was designed as a cross-classification system, with the intersections in the model providing the basis for abilities, similar to the periodic table in chemistry. The three-dimensional, cube-shaped model includes five content categories (the way in which information is presented on a test: visual, auditory, symbolic, semantic, and behavioral), six operation categories (what is done on a test: evaluation, convergent production, divergent production, memory retention, memory recording, and cognition), and six product categories (the form in which information is processed on a test: units, classes, relations, systems, transformations, and implications). The intersection of three categories provides a frame of reference for generating one or more new hypothetical factors of intelligence.
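
Since the SI model is a cross-classification, its hypothetical factors can be enumerated directly. The small sketch below lists the intersections of the categories named above (5 × 6 × 6 = 180 cells).

```python
from itertools import product

contents   = ["visual", "auditory", "symbolic", "semantic", "behavioral"]
operations = ["evaluation", "convergent production", "divergent production",
              "memory retention", "memory recording", "cognition"]
products   = ["units", "classes", "relations", "systems",
              "transformations", "implications"]

# Each cell of the cube is one hypothetical factor of intelligence.
cells = list(product(contents, operations, products))
print(len(cells))   # 180
print(cells[0])     # ('visual', 'evaluation', 'units')
```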

Mapping Guilford’s cube to Symbolic Analysis

An interesting idea is to map Guilford's cube to symbolic analysis, i.e. to the AHO objects. In the atomistic model every atom is a symbol and has formal contents. Operations on atoms allow studying their impacts and transformations automatically (this is the AI approach).

There is an article coming on that topic.

Some links:

A cognitive architecture is a blueprint for intelligent agents. It proposes (artificial) computational processes that act like certain cognitive systems, most often like a person, or that act intelligently under some definition. Cognitive architectures form a subset of general agent architectures. The term ‘architecture’ implies an approach that attempts to model not only behavior, but also structural properties of the modelled system. These need not be physical properties: they can be properties of virtual machines implemented in physical machines (e.g. brains or computers).

Common to researchers on cognitive architectures is the belief that understanding (human, animal or machine) cognitive processes means being able to implement them in a working system, though opinions differ as to what form such a system can have: some researchers assume that it will necessarily be a computational system whereas others argue for alternative models such as dynamical systems.

Cognitive architectures can be symbolic, connectionist, or hybrid. Some cognitive architectures or models are based on a set of generic rules, as, e.g., the Information Processing Language (e.g., Soar based on the unified theory of cognition, or similarly ACT). Many of these architectures are based on the-mind-is-like-a-computer analogy. In contrast subsymbolic processing specifies no such rules a priori and relies on emergent properties of processing units (e.g. nodes). Hybrid architectures combine both types of processing (such as CLARION).

Connectionism is a set of approaches in the fields of artificial intelligence, cognitive psychology, cognitive science, neuroscience and philosophy of mind, that models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units. There are many forms of connectionism, but the most common forms use neural network models.

The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units.  Neural networks are by far the most commonly used connectionist model today, but there are some more generic approaches, too.

Is the Symbolic atomistic model (SAM) a cognitive architecture?

Hybrid Cognitive Architecture (Laitila)

The SAM principle, presented in this blog, is a hybrid cognitive architecture, which contains both the symbolic and the connectionist approach. There is a paper published about it at the Conference on Hybrid Artificial Intelligence Systems (Burgos, Spain, 2008).

Some links:

A Petri net (also known as a place/transition net or P/T net) is one of several mathematical modeling languages for the description of discrete distributed systems. A Petri net is a directed bipartite graph, in which the nodes represent transitions (i.e. discrete events that may occur, signified by bars) and places (i.e. conditions, signified by circles), with directed arcs (signified by arrows) describing which places are pre- and/or postconditions for which transitions.

Like industry standards such as UML activity diagrams, BPMN and EPCs, Petri nets offer a graphical notation for stepwise processes that include choice, iteration, and concurrent execution. Unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis.

Because of its executability, the Petri net is an interesting formalism to compare with the symbolic atomistic model.

Formalism for Petri nets

A Petri net consists of places, transitions, and directed arcs. Arcs run from a place to a transition or vice versa, never between places or between transitions. The places from which an arc runs to a transition are called the input places of the transition; the places to which arcs run from a transition are called the output places of the transition.

Places may contain a natural number of tokens. A distribution of tokens over the places of a net is called a marking. A transition of a Petri net may fire whenever there is a token at the end of all input arcs; when it fires, it consumes these tokens, and places tokens at the end of all output arcs. A firing is atomic, i.e., a single non-interruptible step.

Execution of Petri nets is nondeterministic: when multiple transitions are enabled at the same time, any one of them may fire. If a transition is enabled, it may fire, but it doesn’t have to.  Since firing is nondeterministic, and multiple tokens may be present anywhere in the net (even in the same place), Petri nets are well suited for modeling the concurrent behavior of distributed systems.

The following formal definition is loosely based on (Peterson 1981). Many alternative definitions exist.


A Petri net graph (called a Petri net by some, but see below) is a 3-tuple $(S, T, W)$, where

  • S is a finite set of places
  • T is a finite set of transitions
  • S and T are disjoint, i.e. no object can be both a place and a transition
  • $W : (S \times T) \cup (T \times S) \to \mathbb{N}$ is a multiset of arcs, i.e. it defines arcs and assigns to each arc a non-negative integer arc multiplicity; note that no arc may connect two places or two transitions.

The flow relation is the set of arcs: $F = \{ (x, y) \mid W(x, y) > 0 \}$. In many textbooks, arcs can only have multiplicity 1, and these often define Petri nets using $F$ instead of $W$.

A Petri net graph is a bipartite multidigraph $(S \cup T, F)$ with node partitions $S$ and $T$.

The preset of a transition $t$ is the set of its input places: ${}^{\bullet}t = \{ s \in S \mid W(s,t) > 0 \}$; its postset is the set of its output places: $t^{\bullet} = \{ s \in S \mid W(t,s) > 0 \}$.

A marking of a Petri net (graph) is a multiset of its places, i.e., a mapping $M : S \to \mathbb{N}$. We say that the marking assigns to each place a number of tokens.

A Petri net (called a marked Petri net by some, see above) is a 4-tuple $(S, T, W, M_0)$, where

  • (S,T,W) is a Petri net graph;
  • $M_0$ is the initial marking, a marking of the Petri net graph.

Executability of Petri nets

Petri nets include a specific execution logic, with some variations. In short:

  • firing a transition $t$ in a marking $M$ consumes $W(s,t)$ tokens from each of its input places $s$, and produces $W(t,s)$ tokens in each of its output places $s$;
  • a transition is enabled (it may fire) in $M$ if there are enough tokens in its input places for the consumptions to be possible, i.e. iff $\forall s: M(s) \geq W(s,t)$.

We are generally interested in what may happen when transitions continually fire in arbitrary order.
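
The firing rule above fits in a few lines of code. The following minimal Python sketch simulates a hypothetical three-place net; it is a toy illustration, not a Petri net library.

```python
import random

places      = ["p1", "p2", "p3"]
transitions = ["t1", "t2", "t3"]
# Arc multiplicities W; pairs not listed have weight 0.
W = {("p1", "t1"): 1, ("t1", "p2"): 1,   # t1 moves the token p1 -> p2
     ("p2", "t2"): 1, ("t2", "p3"): 1,   # t2 and t3 compete for the token
     ("p2", "t3"): 1, ("t3", "p1"): 1}   # in p2: a nondeterministic choice

def enabled(M, t):
    """t may fire iff every input place s holds at least W(s, t) tokens."""
    return all(M.get(s, 0) >= W.get((s, t), 0) for s in places)

def fire(M, t):
    """Atomically consume W(s, t) tokens from each input place and produce
    W(t, s) tokens in each output place."""
    return {s: M.get(s, 0) - W.get((s, t), 0) + W.get((t, s), 0)
            for s in places}

M = {"p1": 1}                 # initial marking M0
for _ in range(6):            # fire enabled transitions in arbitrary order
    choices = [t for t in transitions if enabled(M, t)]
    if not choices:           # p3 is a deadlock: no transition consumes it
        break
    t = random.choice(choices)
    M = fire(M, t)
    print("fired", t, "->", M)
```

Because t2 and t3 compete for the token in p2, repeated runs can print different firing sequences, which is exactly the nondeterminism described above.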

We say that a marking $M'$ is reachable from a marking $M$ in one step if $M \to_G M'$, where $\to_G$ is the firing relation determined by the net graph $G$; we say that it is reachable from $M$ if $M \,{\to_G}^*\, M'$, where ${\to_G}^*$ is the reflexive transitive closure of $\to_G$; that is, if it is reachable in 0 or more steps.

For a (marked) Petri net $N = (S, T, W, M_0)$, we are interested in the firings that can be performed starting with the initial marking $M_0$. Its set of reachable markings is the set $R(N) \stackrel{\text{def}}{=} \{ M' \mid M_0 \,{\to_{(S,T,W)}}^*\, M' \}$.

The reachability graph of $N$ is the transition relation $\to_G$ restricted to its reachable markings $R(N)$. It is the state space of the net.

A firing sequence for a Petri net with graph $G$ and initial marking $M_0$ is a sequence of transitions $\vec{\sigma} = \langle t_{i_1} \ldots t_{i_n} \rangle$ such that $M_0 \to_{G,t_{i_1}} M_1 \wedge \ldots \wedge M_{n-1} \to_{G,t_{i_n}} M_n$. The set of firing sequences is denoted as $L(N)$.
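
Continuing the sketch above (it reuses places, transitions, enabled and fire), the reachable markings $R(N)$ can be computed by breadth-first search over the firing relation, at least when the net is bounded, as this toy net is; in general $R(N)$ may be infinite.

```python
from collections import deque

def key(M):
    """Normalize a marking to a hashable tuple of token counts."""
    return tuple(M.get(s, 0) for s in places)

def reachable(M0):
    """Breadth-first search of R(N) from the initial marking M0."""
    seen, queue = {key(M0)}, deque([M0])
    while queue:
        M = queue.popleft()
        for t in transitions:
            if enabled(M, t):
                M2 = fire(M, t)
                if key(M2) not in seen:
                    seen.add(key(M2))
                    queue.append(M2)
    return seen

print(sorted(reachable({"p1": 1})))  # [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
```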

Different types and levels of Petri nets are shown below:

Petri net types.

Comparing Petri nets and Symbolic Atomistic Model

The formalism of Petri nets is different from that of programming languages, because each place has contents, namely a set of tokens. The purpose is to study firing and the balance between modeling power and analyzability.

In spite of the differences, I argue that Petri nets can be converted into the symbolic atomistic model (AHO objects) by some arrangements:

  1. Each place reserves an AHO object.
  2. Each transition reserves as many AHOs as it requires when its semantics are converted into atomistic form (atomic formulas).
  3. The contents of each place form a separate dynamic AHO object.
  4. Code for transitions is written in a high-level language.
  5. The execution logic is programmed into the run methods of the AHOs, allowing nondeterminism.
  6. Otherwise, the logic of SimulationWare could be used.

There is no demonstration yet that this works, but it seems evident that a modified atomistic model (the architecture of AHOs) could simulate Petri nets, too.

Some links:

Erkki Laitila, PhD (2008) computer engineer (1977)