
Synergy means the interaction of multiple elements in a system to produce an effect different from, or greater than, the sum of their individual effects. Synergy can be found in almost all practical areas.

In management, positive synergy means positive effects such as improved operational efficiency, greater exploitation of opportunities, and better utilization of resources.

In other words: synergy means the joint effect of the parts of a system such that the result is greater than the sum of the parts.

A high-level summary of the blog:

Crunchy numbers

A Boeing 747-400 passenger jet can hold 416 passengers. This blog was viewed about 4,800 times in 2010. That’s about 12 full 747s.

In 2010, there were 29 new posts, growing the total archive of this blog to 125 posts. There were 17 pictures uploaded, taking up a total of 968 KB. That’s about a picture per month.

The busiest day of the year was March 21st with 565 views. The most popular post that day was About.

Where did they come from?

The top referring sites in 2010 were reddit.com, rjlipton.wordpress.com, stackoverflow.com, linkedin.com, and discuss.visual-prolog.com.

Some visitors came searching, mostly for hermeneutic circle, engineering symbol, symbolic analysis, computer science theories, and knowledge representation.

Attractions in 2010

These are the posts and pages that got the most views in 2010.

  1. About (August 2009)
  2. The Symbolic Language and Model and Gödel’s Incompleteness Theorems (November 2009)
  3. Maximum Expressive Power with Minimal Construction (December 2009), 1 comment
  4. Core Theories of Computer Science vs Symbol-Driven Engineering (February 2010)
  5. Knowledge Representation vs. Symbolic Knowledge Capture for Software (June 2010)

In mathematics and logic, a traditional proof starts from one or more axioms. Clauses are used as steps (bridges) toward higher-order conclusions. Lemmas and corollaries are often useful for marking the route to the final decision, which is usually a proof or a theorem.

Below are the most important concepts for deriving logical decisions (a small sketch follows the list):

  1. An axiom (or postulate) is a proposition that is not proved or demonstrated but is considered either self-evident or subject to necessary decision. Therefore its truth is taken for granted, and it serves as a starting point for deducing and inferring other (theory-dependent) truths.
  2. A lemma is simultaneously a contention for premises below it and a premise for a contention above it.
  3. A corollary is a statement which follows readily from a previous statement. In mathematics a corollary typically follows a theorem. The use of the term corollary, rather than proposition or theorem, is intrinsically subjective: proposition B is a corollary of proposition A if B can readily be deduced from A, but the meaning of “readily” varies depending upon the author and context.
  4. A theorem is a statement which has been proven on the basis of previously established statements, such as other theorems, and previously accepted statements, such as axioms.
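A minimal sketch of this derivation chain in Prolog, the notation used elsewhere on this blog; the family facts and predicate names are invented for illustration:

```prolog
% Axioms: propositions taken as true without proof.
parent(tom, bob).
parent(bob, ann).

% Lemma: proved from the axioms below it, and a premise
% for the statement above it.
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

% Theorem (the final decision), established by the query:
% ?- grandparent(tom, ann).
% true.
```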

Symbolic analysis and the mathematical dictionary

In Symbolic Analysis we use the concepts above in the following way:

  1. Symbols are the axioms we start from. Some of them are grounded, but some are non-grounded: fuzzy, vague, or mental models without exact information.
  2. Those symbols that are grounded and captured from source code have a definite semantics. A lemma is that for any symbol having a semantic notation, a symbolic notation can be created, because each grammar term is independent of the other grammar terms by default. Therefore, each symbol can be simulated separately.
  3. The corollary is that each source-language symbol is an automaton, which can be programmed in a tool as a state machine having the same principal logic as a Turing machine.
  4. The final decision from steps 1, 2 and 3 above is that by simulating source code it is possible to mimic the original program execution step by step. This forms a foundation for interactive program proof (see the sketch after this list).
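To make steps 2–4 concrete, here is a minimal sketch with an invented toy grammar: each symbol (num, plus, times) is simulated separately by its own clause, and running the top symbol mimics the original execution step by step:

```prolog
% run(+Symbol, -Value): each grammar term is an independent
% symbol, simulated by its own clause (one automaton per symbol).
run(num(N), N).
run(plus(A, B), V)  :- run(A, VA), run(B, VB), V is VA + VB.
run(times(A, B), V) :- run(A, VA), run(B, VB), V is VA * VB.

% Simulating the symbolic model of "2 + 3 * 4":
% ?- run(plus(num(2), times(num(3), num(4))), V).
% V = 14.
```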

Some links:

Proof of Symbolic Analysis described using 7 automata.

There is an attractive presentation about presenting knowledge to the various roles, such as practitioners, software people, and scientists: How to tell stuff to the computer. It describes a triangle (see below) whose corners are practical domain knowledge (lower left corner), software artifacts (top corner) and science (lower right corner). The picture proposes some technologies inside the triangle. The most important things in the triangle are, in the writer’s opinion, the steps on the lines connecting the corners.

Triangle of Knowledge Representation (KR, see http://www.lisperati.com).

In the conclusion the writer forecasts that in the future there will be a revolution caused by description logics (see http://www.lisperati.com/tellstuff/conclusion.html). I warmly agree with that conclusion, because logic has a very strong role in the framework of symbolic analysis.

On the other hand, it is difficult to see what the beef of the description logic is here: http://www.lisperati.com/tellstuff/dl.html; the text contains traditional monolithic Lisp. However, the idea of the title, Marriage of Logic and Objects, is a very good vision. I have had the same goal in the architecture of AHO hybrid objects. Furthermore, there is a solid contact surface between the semantic web and symbolic analysis (see more).

A symbolic and holistic approach for estimating the knowledge produced by software

The triangle (above) is useful as a base for illustrating software development and its knowledge representation, too. In the lower triangle (see below) I have named the corners, respectively: domain knowledge, source of the program, and information-system (IS) pragmatics caused by the software.

Software Knowledge Representation (SKR). (Laitila 2010)

The last corner is not science, as in the triangle above; instead it covers all attempts to understand the software and its value as an empirical product. The last corner is thus an attempt to get empirical and practical research information from the implemented software. It is a broad approach with two sides:

  1. a problem-specific approach supported by reverse engineering, and
  2. a holistic approach in order to evaluate the whole.

There are some essential roles in the figure. All essential information is thought to be stored in an imagined megamodel (specification, resource information, sprints, tests, etc.).

The three lines are:

  1. The left line describes software development, from domain knowledge to code.
  2. The line from the top to the lower right corner is symbolic analysis, containing the technology spaces GrammarWare, ModelWare, SimulationWare and KnowledgeWare. For practical purposes there is a problem reasoning technology (PRT) close to the right corner.
  3. The bottom line is a problem, because there is no direct support for estimating how well a system satisfies all possible user needs, but there are some technologies for creating end-user services so that they can be mapped into code and remain visible in the system. SOA, aspects, the Zachman architecture and metrics are some means for that purpose.

Some links:

We have a definition of the scientific symbolic framework at: https://symbolicanalysis.wordpress.com/2010/06/01/scientific-framework-for-symbolic-analysis/

In this post we use it for a domain-specific purpose: handling a navigator, the software JvnMobileGis.

For analyzing this kind of practical application together with its source, there is a relevant approach in modeling, MDE, with the concepts CIM, PIM and PSM: Computation Independent Model, Platform Independent Model and Platform Specific Model. Some parts of them are domain-specific (DS) and some implementation-specific (IS).

A specific framework for a navigator using symbolic analysis

We define the symbolic analysis framework for navigating in 10 levels as follows:

  1. Ontology is a set of domain-specific (DS) concepts for a navigator plus implementation-specific (IS) symbols. Concepts can be regarded as non-grounded, higher-level symbols. Navigator concepts are the map, the objects, the features of an object, a road, etc. The implementation-specific concepts are a menu, a user interface, a database, a socket, etc.
  2. Epistemology is a set of transformation rules from concepts to IS symbols. There are two directions: one to develop software and map features to code, and another describing how symbols can be connected back into concepts. Both transformation directions need some knowledge, and they create new knowledge. They describe the semantics of each symbol in the ontology.
  3. The paradigm here is reductionist: how to describe the ontology and epistemology, and the theories and methods, as atomic elements. Its “competitor” is the holistic approach.
  4. Methodology is a set of theories describing how the ontology is transformed, using the epistemology, into information capable of expressing knowledge. There are domain-specific theories for the product (the navigator) plus implementation-specific theories for the software, expressed in a symbolic notation.
  5. Method is any way to use the framework in practice. Some methods are for the product and its UI, some for developing software, and some for analyzing it.
  6. Tool is a specific means for applying the method in practice. A tool can be any tool that applies (here) symbolic execution or symbolic analysis, for example for simulating code. The user can work as a tool, too, in order to do something that is impossible for the computer, or to check whether the computer works correctly or not.
  7. Activity is a human interaction intended for understanding code. The high-level types of activities are a) using the product, b) forward engineering for creating artefacts, and c) reverse engineering: finding a bug, browsing code in order to understand some principles, etc.
  8. Action is a piece of an activity: using the product, or forward or reverse engineering.
  9. Sub-action is a part of an action. The lowest sub-actions are primitives like reading an item, making a decision, etc.
  10. The lowest level is practical data for the method, tool, activity, action and sub-action. In symbolic analysis practical data can be non-symbolic or symbolic. Non-symbolic data in a program can have any type of the type system of the original source code. Symbolic data can have at most any type in the ontology. It is thus much richer than the non-symbolic notation. (A minimal sketch of levels 1 and 2 follows this list.)
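As a minimal sketch of levels 1 and 2 for the navigator, here is how the ontology and one epistemological transformation direction might look in Prolog; all predicate and symbol names (concept/2, implements/2, mapView, roadTable) are invented for illustration:

```prolog
% Level 1: ontology as domain-specific (DS) concepts and
% implementation-specific (IS) symbols.
concept(ds, map).       concept(ds, road).
concept(is, menu).      concept(is, database).

% Traceability facts: which IS symbol implements which DS concept.
implements(class(mapView),   map).
implements(table(roadTable), road).

% Level 2: epistemology, one transformation direction:
% from a DS concept down to the IS symbols that ground it.
grounds(Concept, Symbol) :-
    concept(ds, Concept),
    implements(Symbol, Concept).

% ?- grounds(map, S).
% S = class(mapView).
```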

Using levels 1–10, a complete conceptual framework can be written for any programming language, any operating system and any application area. There are, as we know, limitations on how to ground concepts, but we can model them in many phases using modeling technology. After the modeling process we can, in most cases, sharpen our concepts to the symbolic level.

Some links

In computer science, symbolic execution (also symbolic evaluation) refers to the analysis of programs by tracking symbolic rather than actual values, a case of abstract interpretation. The field of symbolic simulation applies the same concept to hardware. Symbolic computation applies the concept to the analysis of mathematical expressions.

Symbolic execution is used to reason about all the inputs that take the same path through a program.

Symbolic execution is useful for software testing because it can analyse if and when errors in the code may occur. It can be used to predict what code statements do to specified inputs and outputs. It is also important for considering path traversal.

Symbolic execution is used to reason about a program path by path. This can be superior to reasoning about a program input by input, as dynamic program analysis does. (A small sketch follows.)
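Here is a minimal sketch of path-by-path reasoning in SWI-Prolog, using its clp(fd) constraint library; the predicate abs_val/2 is an invented example mirroring a two-branch absolute-value function:

```prolog
:- use_module(library(clpfd)).

% abs_val(X, Y) mirrors:  if (x >= 0) y = x; else y = -x;
% X stays symbolic; each clause adds its branch condition to the
% path constraint, so one answer covers every input that takes
% the same path.
abs_val(X, X) :- X #>= 0.          % path 1: x >= 0
abs_val(X, Y) :- X #< 0, Y #= -X.  % path 2: x < 0

% ?- X in -10..10, abs_val(X, Y).
% The first answer describes all inputs with X >= 0 at once;
% backtracking yields the X < 0 path.
```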

Symbolic execution vs Symbolic Analysis (Laitila, 2008)

Symbolic execution emphasizes execution, traversing program paths. Symbolic analysis has the same purpose, but in addition it is a formalism for running (executing) symbols using their internal state automata. The automaton can be programmed to do anything that is characteristic of the symbol. One excellent feature of symbolic analysis is its redundancy and the internal semantics of each symbol, due to its clause notation in the Symbolic language. It is possible to reconstruct parse trees from the symbols so that the side effects caused by any symbol can be matched with the corresponding symbols (see the sketch below). This makes it possible to partially verify the code.
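A minimal sketch of the parse-tree reconstruction idea; the clause notation below (symbol/3 with identifiers s1..s5) is invented for illustration and is not the actual Symbolic-language format:

```prolog
% Each symbol is stored as a clause that keeps its own meaning
% and links to its child symbols (for the statement "x = y + 1").
symbol(s1, assign, [s2, s3]).
symbol(s2, var(x), []).
symbol(s3, plus,   [s4, s5]).
symbol(s4, var(y), []).
symbol(s5, num(1), []).

% tree(+Id, -Tree): reconstruct the parse tree below a symbol,
% so side effects can be matched with the symbols causing them.
tree(Id, node(Sem, Subs)) :-
    symbol(Id, Sem, Kids),
    maplist(tree, Kids, Subs).

% ?- tree(s1, T).
% T = node(assign, [node(var(x), []), node(plus, [...])]).
```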

Some links:

There is a nice blog post by Robert MacIntosh intended for PhD students at: http://doctoralstudy.blogspot.com/2009/05/being-clear-about-methodology-ontology.html

He describes light at the end of the research tunnel.  There are some steps in the tunnel, forming a scientific framework for research people to follow:

  • Ontology … to do with our assumptions about how the world is made up and the nature of things
  • Epistemology … to do with our beliefs about how one might discover knowledge about the world
  • Methodology … to do with the tools and techniques of research

The author claims that ontology, epistemology and methodology are the three pillars of a thesis.


An extended framework with the applications for symbolic analysis

We define symbolic analysis as a framework (light in the tunnel) in 10 levels as follows:

  1. Ontology is a set of symbols as well as concepts made by the user. Note: concepts are higher-level, non-grounded symbols.
  2. Epistemology is a set of transformation rules for symbols, in order to get knowledge. They describe semantics of each symbol in the ontology.
  3. Paradigm is here symbolic analysis: how to describe ontology and epistemology and the theories and methods. Its “competitors” are static and dynamic analyses.
  4. Methodology is a set of theories describing how the ontology is transformed, using the epistemology, into information capable of expressing knowledge. There are theories for parsing, making a symbolic model, simulating the model, etc.
  5. Method is any way to use the methodology in practice. Some methods are control flow analysis, making a call tree etc.
  6. Tool is a specific means to apply the method in practice. A tool can be any tool, which applies (here) symbolic execution or symbolic analysis, for example for simulating code.
  7. Activity is a human interaction intended for understanding code. Some activities are finding a bug, browsing code in order to understand some principles etc.
  8. Action is a piece of an activity, for example browsing items, selecting a view or making a hypothesis.
  9. Sub-action is a part of an action. Lowest sub-actions are primitives like reading an item, making a decision etc.
  10. The lowest level is practical data for the method, tool, activity, action and sub-action. In symbolic analysis practical data can be non-symbolic or symbolic. Non-symbolic data in a program can have any type of the type system of the original source code. Symbolic data can have at most any type in the ontology. It is thus much richer than the non-symbolic notation.

Using levels 1–10, a complete conceptual framework can be written for any programming language and any operating system. There are, however, some limitations on how to reverse engineer different kinds of features of source code. To alleviate these problems, symbolic analysis has a rather expressive format: each relation is expressed as a Prolog predicate, which can implicitly point to its neighbour symbols even though there is no definition for their semantics (see the sketch below).
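A minimal sketch of that format; the predicates calls/2 and reachable/2 and the symbol names are invented for illustration:

```prolog
% Each relation is a Prolog fact naming its neighbour symbols.
calls(main,      initMap).
calls(main,      drawRoute).
calls(drawRoute, unknownNative).  % no semantics defined for this one

% The link can still be followed, even though unknownNative
% has no definition for its semantics.
reachable(A, B) :- calls(A, B).
reachable(A, C) :- calls(A, B), reachable(B, C).

% ?- reachable(main, unknownNative).
% true.
```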

Levels 7–9 tie the framework into action theory, which is empirical research.

Some links

Man and His Symbols is a famous book edited by Carl Jung. He was one of the great doctors of all time and one of the great thinkers of that century. His object always was to help men and women to know themselves, so that by self-knowledge and thoughtful self-use they could lead full, rich, and happy lives.

The book has the following parts:

  • Part 1: Approaching the Unconscious, by Carl G. Jung
  • Part 2: Ancient Myths and Modern Man, by Joseph L. Henderson (first section: The Eternal Symbols)
  • Part 3: The Process of Individuation, by M.-L. von Franz (first section: The Pattern of Psychic Growth)
  • Part 4: Symbolism in the Visual Arts, by Aniela Jaffé
  • Part 5: Symbols in an Individual Analysis, by Jolande Jacobi

The book is a great work describing symbols and symbolism which cannot be completely explained. It is psychology.

Dreams, feelings and our attitudes towards different kinds of icons, such as Coca-Cola, Nokia, the Bible or sudden death, are strongly personal. Many features of our behavior depend on our temperament or on personal characteristics inherited at birth.

Symbolism and symbolic analysis

Symbolic analysis, presented in this blog, is the opposite of Jung’s work. In it, symbols are assumed to be either formal or known. If a symbol is neither known nor formal, it can be skipped when it is not relevant. If it is relevant but not known, there should be a learning process to make it familiar to the user. We cannot run a systematic learning process for dreams and the other typical phenomena described in Jung’s book. However, the book is a pleasure to read, with its great figures and photos, and it describes the real, unfamiliar life of everybody.

There are two kinds of symbols: those of psychology and those of formal notations, as in computer science.

Links:

Cognition is the research term for “the process of thought“. Usage of the term varies in different disciplines; for example in psychology and cognitive science, it usually refers to an information processing view of an individual’s psychological functions. Other interpretations of the meaning of cognition link it to the development of concepts; individual minds, groups, and organizations.

Cognitive space uses the analogy of location in two, three or higher dimensional space to describe and categorize thoughts, memories and ideas. Each individual has his/her cognitive space, resulting in a unique categorization of their ideas. The dimensions of this cognitive space depend on information, training and finally on a person’s awareness. All this depends globally on the cultural setting.

http://cybergeo.revues.org/index194.html

When understanding software we need certain types of cognitive spaces. In the book Symbolic Analysis for PC the spaces are named as follows:

  1. The most abstract space is the Concept (an unlimited definition of a thing).
  2. The next, more concrete space is the Context (a situation).
  3. The next, more concrete space is the Architecture slice (a high-level module with its dependencies and attributes).
  4. The most concrete space is the Slice, which refers to symbols (a part of the program).

A typical piece of software (a slice) can be modeled upwards in a person’s mind using these cognitive spaces: it has a certain meaning to be understood (concept), it has some ways to be used (context), and it can have some extensions to the architecture implementation. It is the responsibility of the developer and the maintainer to catch this information, but tools can be useful for shortening that time. (A small sketch follows.)
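A minimal sketch of the four spaces as data, with invented predicate names; lifting an item upwards corresponds to understanding it in a more abstract space:

```prolog
% The four cognitive spaces, from concrete to abstract.
abstraction(slice,              architecture_slice).
abstraction(architecture_slice, context).
abstraction(context,            concept).

% A piece of code starts in the most concrete space ...
in_space(sortRoutine, slice).

% ... and is modeled upwards, space by space.
understood_at(Item, Space) :- in_space(Item, Space).
understood_at(Item, Upper) :-
    understood_at(Item, Lower),
    abstraction(Lower, Upper).

% ?- understood_at(sortRoutine, concept).
% true.
```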

An analogy between topological understanding (see figure) and software understanding is clear.

Intelligence is an umbrella term describing a property of the mind that includes related abilities, such as the capacities for abstract thought, reasoning, planning, problem solving, communication and learning. Problem solving is the most promising of these areas from the point of view of symbolic analysis.

There is much research on what intelligence is and how to define it (see more).

J. P. Guilford is one of these researchers. He explored the scope of the adult intellect by providing the concept of intelligence with a strong, comprehensive theoretical backing. His Structure-of-Intellect model (SI model) was designed as a cross-classification system whose intersections provide the basis for abilities, similar to the periodic table in chemistry. The three-dimensional, cube-shaped model includes five content categories (the way in which information is presented on a test: visual, auditory, symbolic, semantic and behavioral), six operation categories (what is done on a test: evaluation, convergent production, divergent production, memory retention, memory recording and cognition), and six product categories (the form in which information is processed on a test: units, classes, relations, systems, transformations and implications). The intersection of three categories provides a frame of reference for generating one or more new hypothetical factors of intelligence.

Mapping Guilford’s cube to Symbolic Analysis

An interesting idea is to map Guilford’s cube to symbolic analysis, to the AHO objects. In the atomistic model every atom is a symbol and has formal contents. Operations on atoms allow studying their impacts and transformations automatically (this is the AI approach). A small sketch of the cube as data follows.
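As a concrete starting point for such a mapping, here is a minimal sketch that encodes the cube as Prolog facts; crossing one category from each axis enumerates the 5 × 6 × 6 = 180 hypothetical abilities (the predicate names are invented):

```prolog
% Guilford's Structure-of-Intellect cube as data.
content(visual).     content(auditory).   content(symbolic).
content(semantic).   content(behavioral).

operation(evaluation).           operation(convergent_production).
operation(divergent_production). operation(memory_retention).
operation(memory_recording).     operation(cognition).

product(units).      product(classes).         product(relations).
product(systems).    product(transformations). product(implications).

% One intersection of the three axes = one hypothetical ability.
ability(C, O, P) :- content(C), operation(O), product(P).

% ?- aggregate_all(count, ability(_, _, _), N).
% N = 180.
```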

There is an article coming on that topic.

Some links:

Erkki Laitila, PhD (2008), computer engineer (1977)