
We have a definition of the scientific framework for symbolic analysis at: https://symbolicanalysis.wordpress.com/2010/06/01/scientific-framework-for-symbolic-analysis/

In this post we apply it to a domain specific purpose: handling a navigator, the JvnMobileGis software.

For analyzing this kind of practical application together with its source, a relevant approach is model-driven engineering (MDE) with its concepts CIM, PIM, and PSM: the Computation Independent, Platform Independent, and Platform Specific Models. Some parts of them are domain specific (DS) and some implementation specific (IS).

A specific framework for a navigator using symbolic analysis

We define the symbolic analysis framework for navigating in 10 levels as follows:

  1. Ontology is a set of domain specific (DS) concepts for a navigator plus implementation specific (IS) symbols. Concepts can be regarded as non-grounded higher-level symbols. Navigator concepts are the map, the objects, the features of the objects, a road, etc. The implementation specific concepts are a menu, a user interface, a database, a socket, etc.
  2. Epistemology is a set of transformation rules from concepts to IS symbols. There are two directions: one to develop software and map features to code, and another describing how symbols can be connected into concepts. Both directions require knowledge and create new knowledge. Together they describe the semantics of each symbol in the ontology.
  3. Paradigm is here reductionist: ontology, epistemology, theories, and methods are described as atomic elements. Its “competitor” is the holistic approach.
  4. Methodology is a set of theories describing how the ontology is transformed, using the epistemology, into information capable of expressing knowledge. There are domain specific theories for the product (the navigator) plus implementation specific theories for the software, expressed in a symbolic notation.
  5. Method is any way to use the framework in practice. Some methods concern the product and its UI; others concern developing or analyzing the software.
  6. Tool is a specific means to apply a method in practice. A tool can be any tool that applies (here) symbolic execution or symbolic analysis, for example for simulating code. The user can act as a tool, too, doing something that is impossible for the computer or checking whether the computer works correctly.
  7. Activity is a human interaction intended for understanding code. The high-level types of activities are a) using the product, b) forward engineering for creating artefacts, or c) reverse engineering: finding a bug, browsing code in order to understand some principles, etc.
  8. Action is a piece of an activity: using the product, or forward or reverse engineering.
  9. Sub-action is a part of an action. The lowest sub-actions are primitives like reading an item, making a decision, etc.
  10. The lowest level is practical data for the method, tool, activity, action, and sub-action. In symbolic analysis, practical data can be non-symbolic or symbolic. Non-symbolic data in a program can have any type of the type system of the original source code. Symbolic data can have any type in the ontology; it is thus much richer than the non-symbolic notation.

Using levels 1-10, a complete conceptual framework can be written for any programming language, any operating system, and any application area. There are, as we know, limitations on how concepts can be grounded, but we can model them in many phases using modeling technology. After the modeling process we can, in most cases, sharpen our concepts down to the symbolic level.

Some links

In computer science, symbolic execution (also symbolic evaluation) refers to the analysis of programs by tracking symbolic rather than actual values, a case of abstract interpretation. The field of symbolic simulation applies the same concept to hardware. Symbolic computation applies the concept to the analysis of mathematical expressions.

Symbolic execution is used to reason about all the inputs that take the same path through a program.

Symbolic execution is useful for software testing because it can analyze if and when errors in the code may occur. It can be used to predict what code statements do to specified inputs and outputs. It is also important for considering path traversal.

Symbolic execution is used to reason about a program path-by-path. This may be superior to reasoning about a program input-by-input, as dynamic program analysis does.
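The path-by-path idea can be shown with a minimal sketch: for a toy program `if x > 10: x - 10 else: x + 1`, the input stays symbolic and each branch contributes a path condition. This is an illustrative sketch, not any particular tool's API; all names are made up here.

```python
class Sym:
    """A symbolic value, identified only by its name (no concrete value)."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

def symbolic_paths(x):
    """Explore both branches of `if x > 10`, collecting a path condition
    and a symbolic result expression for each path."""
    paths = []
    # Path 1: the branch condition holds.
    paths.append({"condition": f"{x} > 10", "result": f"{x} - 10"})
    # Path 2: the branch condition fails.
    paths.append({"condition": f"{x} <= 10", "result": f"{x} + 1"})
    return paths

for p in symbolic_paths(Sym("x")):
    print(p["condition"], "->", p["result"])
```

Each path condition characterizes the whole set of inputs taking that path, which is exactly the "reason about all the inputs that take the same path" property mentioned above.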

Symbolic execution vs Symbolic Analysis (Laitila, 2008)

Symbolic execution emphasizes execution, traversing program paths. Symbolic analysis has the same purpose, but it is furthermore a formalism for running/executing symbols using their internal state automata. The automaton can be programmed to do anything that is characteristic of the symbol. One excellent feature of symbolic analysis is its redundancy and the internal semantics of each symbol, thanks to its clause notation in the Symbolic language. It is possible to reconstruct parse trees from the symbols so that the side effects caused by any symbol can be matched with the corresponding symbols. This makes it possible to partially verify the code.
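The clause-notation idea can be sketched as follows: each symbol is stored as a small clause (an operator plus operands referring to other symbols), so a parse tree can be rebuilt from the symbols and every effect traced back to its symbol. The clause shapes and symbol ids below are illustrative, not the actual Symbolic-language notation.

```python
# Each symbol id maps to a clause: (operator, operand...).
clauses = {
    "s1": ("assign", "x", "s2"),   # x := (result of s2)
    "s2": ("plus", "s3", "s4"),
    "s3": ("const", 2),
    "s4": ("const", 3),
}

def rebuild(sym):
    """Reconstruct the parse tree rooted at a symbol id by following
    operand references back through the clause table."""
    op, *args = clauses[sym]
    if op == "const":
        return args[0]
    return (op, *[rebuild(a) if isinstance(a, str) and a in clauses else a
                  for a in args])

print(rebuild("s1"))   # ('assign', 'x', ('plus', 2, 3))
```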

Some links:

There is a nice blog by Robert MacIntosh intended for PhD students at: http://doctoralstudy.blogspot.com/2009/05/being-clear-about-methodology-ontology.html

He describes the light at the end of the research tunnel. There are some steps in the tunnel, forming a scientific framework for researchers to follow:

  • Ontology … to do with our assumptions about how the world is made up and the nature of things
  • Epistemology … to do with our beliefs about how one might discover knowledge about the world
  • Methodology … to do with the tools and techniques of research

The author claims that ontology, epistemology, and methodology are the three pillars of a thesis.


An extended framework with the applications for symbolic analysis

We define symbolic analysis as a framework (light in the tunnel) in 10 levels as follows:

  1. Ontology is a set of symbols as well as concepts made by the user. Note: concepts are higher-level, non-grounded symbols.
  2. Epistemology is a set of transformation rules for symbols, in order to get knowledge. They describe semantics of each symbol in the ontology.
  3. Paradigm is here symbolic analysis: how to describe ontology and epistemology and the theories and methods. Its “competitors” are static and dynamic analyses.
  4. Methodology is a set of theories describing how the ontology is transformed, using the epistemology, into information capable of expressing knowledge. There are theories for parsing, making a symbolic model, simulating the model, etc.
  5. Method is any way to use the methodology in practice. Some methods are control flow analysis, making a call tree etc.
  6. Tool is a specific means to apply the method in practice. A tool can be any tool, which applies (here) symbolic execution or symbolic analysis, for example for simulating code.
  7. Activity is a human interaction intended for understanding code. Some activities are finding a bug, browsing code in order to understand some principles etc.
  8. Action is a piece of an activity, for example browsing items, selecting a view, or making a hypothesis.
  9. Sub-action is a part of an action. Lowest sub-actions are primitives like reading an item, making a decision etc.
  10. The lowest level is practical data for the method, tool, activity, action, and sub-action. In symbolic analysis, practical data can be non-symbolic or symbolic. Non-symbolic data in a program can have any type of the type system of the original source code. Symbolic data can have any type in the ontology; it is thus much richer than the non-symbolic notation.

Using levels 1-10, a complete conceptual framework can be written for any programming language and any operating system. There are, however, some limitations on how different kinds of source-code features can be reverse engineered. To alleviate these problems, symbolic analysis has a rather expressive format: each relation is expressed as a Prolog predicate, which can implicitly point to its neighbour symbols even when there is no definition for their semantics.
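The predicate idea can be sketched as a small fact base: each relation is a fact pointing at neighbouring symbols, so links can be followed even when a symbol's semantics are not yet defined. The sketch below uses Python rather than Prolog, and the predicate and symbol names are illustrative.

```python
# next(A, B): control flow leads from symbol A to symbol B.
# Facts like these can be queried without knowing what s1..s3 mean.
facts = [("next", "s1", "s2"),
         ("next", "s2", "s3")]

def neighbours(sym):
    """Follow `next` facts forward from a symbol, Prolog-query style."""
    return [b for (pred, a, b) in facts if pred == "next" and a == sym]

print(neighbours("s1"))   # ['s2']
```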

Levels 7-9 tie the framework to action theory, which is empirical research.

Some links

Man and His Symbols is a famous book edited by Carl Jung. He was one of the great doctors of all time and one of the great thinkers of his century. His object was always to help men and women know themselves, so that by self-knowledge and thoughtful self-use they could lead full, rich, and happy lives.

The book has the following parts:

  • Part 1: Approaching the Unconscious
  • Part 2: Ancient Myths and Modern Man (Joseph L. Henderson): The Eternal Symbols
  • Part 3: The Process of Individuation (M.-L. von Franz): The Pattern of Psychic Growth
  • Part 4: Symbolism in the Visual Arts (Aniela Jaffé)
  • Part 5: Symbolism in Individual Analysis (Jolande Jacobi)

The book is a great work describing symbols and symbolism that cannot be completely explained: it is psychology.

Dreams, feelings, and our attitudes towards different kinds of icons, like Coca-Cola, Nokia, the Bible, or sudden death, are strongly personal. Many features of our behavior depend on our temperament and the personal characteristics we inherit at birth.

Symbolism and symbolic analysis

Symbolic analysis, presented in this blog, is the opposite of Jung’s work. In it, symbols are assumed to be either formal or known. If a symbol is neither known nor formal, it can be skipped if it is not relevant. If it is relevant but not known, there should be a learning process to make it familiar to the user. We cannot run a systematic learning process for dreams and the other phenomena described in Jung’s book. Still, the book is a pleasure to read, with its great figures and photos, and it describes the real, unfamiliar life of everybody.

There are two kinds of symbols: those of psychology and those of formal notations, as in computer science.

Links:

A cognitive architecture is a blueprint for intelligent agents. It proposes (artificial) computational processes that act like certain cognitive systems, most often like a person, or that act intelligently under some definition. Cognitive architectures form a subset of general agent architectures. The term ‘architecture’ implies an approach that attempts to model not only behavior, but also structural properties of the modelled system. These need not be physical properties: they can be properties of virtual machines implemented in physical machines (e.g. brains or computers).

Common to researchers on cognitive architectures is the belief that understanding (human, animal, or machine) cognitive processes means being able to implement them in a working system, though opinions differ as to what form such a system can have: some researchers assume it will necessarily be a computational system, whereas others argue for alternative models such as dynamical systems.

Cognitive architectures can be symbolic, connectionist, or hybrid. Some cognitive architectures or models are based on a set of generic rules, e.g., the Information Processing Language (e.g., Soar, based on the unified theory of cognition, or similarly ACT). Many of these architectures are based on the mind-is-like-a-computer analogy. In contrast, subsymbolic processing specifies no such rules a priori and relies on emergent properties of processing units (e.g. nodes). Hybrid architectures combine both types of processing (such as CLARION).

Connectionism is a set of approaches in the fields of artificial intelligence, cognitive psychology, cognitive science, neuroscience and philosophy of mind, that models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units. There are many forms of connectionism, but the most common forms use neural network models.

The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units.  Neural networks are by far the most commonly used connectionist model today, but there are some more generic approaches, too.

Is the Symbolic atomistic model (SAM) a cognitive architecture?

Hybrid Cognitive Architecture (Laitila)


The SAM principle, presented in this blog, is a hybrid cognitive architecture, containing both the symbolic and the connectionist approach. A paper about it was published at the Hybrid Artificial Intelligence Systems conference (Burgos, Spain, 2008).

Some links:

A Turing machine is a theoretical device that manipulates symbols contained on a strip of tape. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside of a computer.

A Turing machine that is able to simulate any other Turing machine is called a Universal Turing machine (UTM, or simply a universal machine).
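The definition above (a finite control, a read/write head, a tape) fits in a few lines of code. The machine below is an illustrative example: it moves right over a block of 1s and appends one more, i.e. it increments a unary counter.

```python
def run_tm(tape, rules, state="q0", halt="halt"):
    """Simulate a Turing machine.
    rules: (state, read_symbol) -> (write_symbol, move, next_state),
    where move is -1 (left) or +1 (right); "_" is the blank symbol."""
    cells = dict(enumerate(tape))   # sparse tape
    pos = 0
    while state != halt:
        read = cells.get(pos, "_")
        write, move, state = rules[(state, read)]
        cells[pos] = write
        pos += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Finite control: skip 1s to the right, write a 1 on the first blank, halt.
rules = {
    ("q0", "1"): ("1", +1, "q0"),
    ("q0", "_"): ("1", +1, "halt"),
}
print(run_tm("111", rules))   # 1111
```

Note that the control table `rules` is the whole machine; a universal machine is simply one whose table interprets another machine's table read from the tape.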

Many misunderstandings of the Turing machine

There are several wrong and misleading conceptions of the Turing machine (TM), as argued by Andrew Wells [1].

The correct concept includes the facts that a TM has a finite control automaton, a register or similar to save its state, the ability to read and write, and a tape or similar as memory. People often define the TM too narrowly, although its origin is a description of a person making computations.

Cognitive approach for Turing machine

From the cognitive viewpoint, the Turing machine is a concept for modeling a human making computations. It is then a simple problem-solving framework with built-in rules, which model the finite control automaton [2].

There is much information relating to cognitive architectures, the symbolic paradigm, and how our mind works, including criticism.

Symbolic analysis

Symbolic analysis (SAM) is a framework built on automata defined by the corresponding symbols. Together, the symbols and their original semantics (command facts) build cognitive models from the source code. For more information, see KnowledgeWare.

Some links:

  • [1] A. Wells: Rethinking Cognitive Computation (Palgrave).
  • [2] H. Putnam: Mind, Language and Reality.
  • [3] J. Fodor: The Mind Doesn’t Work That Way.

Computer science or computing science (sometimes abbreviated CS) is the study of the theoretical foundations of information and computation, and of practical techniques for their implementation and application in computer systems.

Main theories of CS

The study of the theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a computational problem.

The broader field of theoretical computer science encompasses both the classical theory of computation and a wide range of other topics that focus on the more abstract, logical, and mathematical aspects of computing.

CS Theories vs Symbol-Driven Engineering

SDE is a tuple (GrammarWare, ModelWare, SimulationWare, KnowledgeWare).


GrammarWare has strong relations to compilation theory, ModelWare to graph theory, SimulationWare to computation theory, and KnowledgeWare to information theory. Altogether, they build a platform for evaluating computability theory from the sequential (not parallel) approach, which has connections to algorithm theory.

Some theories not considered in SDE are neural networks, theories for hardware, etc.

Some links:

Computational semiotics is an interdisciplinary field that applies, conducts, and draws on research in logic, mathematics, the theory and practice of computation, formal and natural language studies, the cognitive sciences generally, and semiotics proper. A common theme of this work is the adoption of a sign-theoretic perspective on issues of artificial intelligence and knowledge representation. Many of its applications lie in the field of computer-human interaction (CHI) and the fundamental devices of recognition (work at IASE in California).

Computational semiotics is the branch that deals with the study and application of logic and computation to formal and natural languages in terms of cognition and signs.

One part of this field, known as algebraic semiotics, combines aspects of algebraic specification and social semiotics, and has been applied to user interface design and to the representation of mathematical proofs.

Computational Semiotics by Gudwin

Fig below (http://www.dca.fee.unicamp.br/~gudwin/compsemio/Image24.gif):

Singularities captured from the real world.

Gudwin describes knowledge as knowledge units (see figure above).

Computational Semiotics vs Symbol-Driven Engineering

SDE expresses phenomena using symbols. There are interpretations between symbols, expressed in predicates.

The classification of knowledge units by Gudwin is below (the origin is from Charles Peirce):

Classification of knowledge units.

First, source code fits well into the branch of the tree that starts from the node rhematic. Second, when code is considered as sequences or traces, the dicent approach is relevant. Third, when features of the code or its assumed behavior are considered, the argumentative approach is relevant.

Symbolic analysis is then a tuple (rhematic, dicent, argumentative).

Mastering these three branches of the knowledge tree makes it possible to master source code.

Some links:

Executable UML, often abbreviated to xtUML or xUML, is the evolution of the Shlaer-Mellor method to UML. Executable UML graphically specifies a system using a profile of the UML. The models are testable, and can be compiled into a less abstract programming language to target a specific implementation. Executable UML supports MDA through specification of platform-independent models, and the compilation of the platform-independent models into platform-specific models.

Executable UML is used to model the domains in a system. Each domain is defined at the level of abstraction of its subject matter independent of implementation concerns. The resulting system view is composed of a set of models represented by at least the following:

  • The domain chart provides a view of the domains in the system, and the dependencies between the domains.
  • The class diagram defines the classes and class associations for a domain.
  • The statechart diagram defines the states, events, and state transitions for a class or class instance.
  • The action language defines the actions or operations that perform processing on model elements.

Shortcomings of Executable UML

UML does not define the semantics of a programming language. Hence, it is not complete enough to describe the exact behavior of a system. To correct this shortcoming, some new principles are needed, among them automata theory and symbol-driven engineering.

Automata Theory and Symbol-Driven Engineering (SDE)

Automata theory defines a behavior model for the corresponding code elements. This theoretical background therefore eliminates the gap between program behavior and the corresponding model, such as UML.

In SDE every symbol has an automaton behind it. To execute those symbols, an invocation Symbol:run is needed, plus some specific initializations for certain symbols such as definitions (method invocations and object and variable definitions). See figure below.
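The symbol-with-an-automaton idea can be sketched as follows. Each symbol owns a small state machine, and invoking run steps it by one event. The class, states, and events here are illustrative placeholders, not the actual SDE implementation.

```python
class Symbol:
    """A symbol whose behavior is an internal state automaton."""
    def __init__(self, name, transitions):
        self.name = name
        self.state = "init"
        self.transitions = transitions   # (state, event) -> next_state

    def run(self, event):
        """Advance the automaton by one event; unknown events
        leave the state unchanged."""
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# A variable-definition symbol: it becomes "defined" after a define
# event and "used" once it is read.
var = Symbol("x", {("init", "define"): "defined",
                   ("defined", "read"): "used"})
var.run("define")
print(var.run("read"))   # used
```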

Some links:

The Symbol Grounding Problem is related to the problem of how words (symbols) get their meanings, and hence to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful. According to a widely held theory of cognition, “computationalism,” cognition (i.e., thinking) is just a form of computation. But computation in turn is just formal symbol manipulation: symbols are manipulated according to rules that are based on the symbols’ shapes, not their meanings.

Semiotics is an approach to ground symbols

Semiotics, also called semiotic studies or semiology, is the study of sign processes (semiosis), or signification and communication, signs and symbols, and is usually divided into three branches:

  • Semantics: Relation between signs and the things to which they refer; their denotata
  • Syntactics: Relations among signs in formal structures
  • Pragmatics: Relation between signs and their effects on those (people) who use them

Peirce’s Theory: Semiotic Elements of Classes and Signs to be Used for Software

Peirce held that there are exactly three basic semiotic elements: 1) the sign, 2) the object, and 3) the interpretant, as outlined above and fleshed out here in a bit more detail:

  1. A sign (or representamen) represents, in the broadest possible sense of “represents”. It is something interpretable as saying something about something. It is not necessarily symbolic, linguistic, or artificial.
  2. An object (or semiotic object) is a subject matter of a sign and an interpretant. It can be anything discussable or thinkable, a thing, event, relationship, quality, law, argument, etc., and can even be fictional, for instance Hamlet. All of those are special or partial objects. The object most accurately is the universe of discourse to which the partial or special object belongs. For instance, a perturbation of Pluto’s orbit is a sign about Pluto but ultimately not only about Pluto.
  3. An interpretant (or interpretant sign) is the sign’s more or less clarified meaning or ramification, a kind of form or idea of the difference which the sign’s being true or undeceptive would make. (Peirce’s sign theory concerns meaning in the broadest sense, including logical implication, not just the meanings of words as properly clarified by a dictionary.) The interpretant is a sign (a) of the object and (b) of the interpretant’s “predecessor” (the interpreted sign) as being a sign of the same object. The interpretant is an interpretation in the sense of a product of an interpretive process or a content in which an interpretive relation culminates, though this product or content may itself be an act, a state of agitation, a conduct, etc. Such is what is summed up in saying that the sign stands for the object to the interpretant.

That classification has been used in SDE (Symbolic Analysis of this blog) as follows:

  1. The sign is a symbol captured from source code.
  2. The object is the AHO (Atomistic Hybrid Object).
  3. The interpretant is the output from executing the object using an automaton.
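The triad above can be sketched as a tiny pipeline: a token captured from code (sign) is wrapped in an atomistic object, and executing that object yields the interpretant. The class name AHO comes from the list above; the sign encoding and the execute method are illustrative assumptions.

```python
class AHO:
    """Atomistic Hybrid Object: wraps a sign captured from source code."""
    def __init__(self, sign):
        self.sign = sign               # e.g. ("+", 2, 3) for the code `2 + 3`

    def execute(self):
        """Run the object's automaton; the output is the interpretant."""
        op, a, b = self.sign
        if op == "+":
            return a + b
        raise NotImplementedError(op)  # other operators omitted in this sketch

aho = AHO(("+", 2, 3))                 # sign captured from code
print(aho.execute())                   # interpretant: 5
```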

Some links:

  1. Peirce’s theory: http://en.wikipedia.org/wiki/Semiotic_elements_and_classes_of_signs_%28Peirce%29
  2. Semiotics:  http://en.wikipedia.org/wiki/Semiotics
  3. About Symbol Grounding

Erkki Laitila, PhD (2008), computer engineer (1977)