You are currently browsing the category archive for the ‘Mathematical formalisms’ category.

In mathematics and logic, a traditional proof starts from one or more axioms. Clauses are used as steps (bridges) towards higher-order conclusions. Lemmas and corollaries are often useful for marking the route to the final conclusion, which is usually a theorem.

Below are the most important concepts for deriving logical conclusions:

  1. An axiom (or postulate) is a proposition that is not proved or demonstrated but considered either self-evident or subject to necessary decision. Therefore, its truth is taken for granted, and it serves as a starting point for deducing and inferring other (theory-dependent) truths.
  2. A lemma is simultaneously a contention for premises below it and a premise for a contention above it.
  3. A corollary is a statement which follows readily from a previous statement. In mathematics a corollary typically follows a theorem. The use of the term corollary, rather than proposition or theorem, is intrinsically subjective. Proposition B is a corollary of proposition A if B can readily be deduced from A, but the meaning of readily varies depending upon the author and context.
  4. A theorem is a statement which has been proven on the basis of previously established statements, such as other theorems, and previously accepted statements, such as axioms.

Symbolic analysis and the mathematical dictionary

In Symbolic Analysis we use the concepts above in the following way:

  1. Symbols are the axioms to start from. Some of them are grounded, but some are non-ground: fuzzy, vague or mental models without exact information.
  2. Those symbols that are grounded and captured from source code have a certain semantics. A lemma is that for any symbol having a semantic notation, a symbolic notation can be created, because each grammar term is by default independent of the other grammar terms. Therefore, each symbol can be simulated separately.
  3. The corollary is that each source-language symbol is an automaton, which can be programmed in a tool as a state machine having the same principal logic as a Turing machine.
  4. The final conclusion from steps 1, 2 and 3 above is that by simulating source code it is possible to mimic the original program execution step by step. This makes a foundation for interactive program proof.
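The four steps above can be sketched in code. Here is a minimal, hypothetical Python illustration (the names Symbol and run mirror the text but are not the actual tool's API) in which each grounded symbol is simulated separately and the whole program is mimicked step by step:

```python
# Hypothetical sketch: each source-code symbol as a separately simulable automaton.
class Symbol:
    def __init__(self, name, semantics):
        self.name = name            # grammar term captured from source code
        self.semantics = semantics  # callable state -> state (its semantic notation)

    def run(self, state):
        # Simulate this symbol alone, independent of the other grammar terms.
        return self.semantics(state)

# Two grounded symbols: an assignment and an increment.
assign_x = Symbol("x = 1", lambda s: {**s, "x": 1})
inc_x    = Symbol("x = x + 1", lambda s: {**s, "x": s["x"] + 1})

# Mimicking the original program execution step by step (the final conclusion above).
state = {}
for sym in (assign_x, inc_x):
    state = sym.run(state)
print(state)   # {'x': 2}
```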

Some links:

Proof of Symbolic Analysis described using 7 automata.


A Petri net (also known as a place/transition net or P/T net) is one of several mathematical modeling languages for the description of discrete distributed systems. A Petri net is a directed bipartite graph in which the nodes represent transitions (i.e. discrete events that may occur, signified by bars) and places (i.e. conditions, signified by circles), with directed arcs (signified by arrows) that describe which places are pre- and/or postconditions for which transitions.

Like industry standards such as UML activity diagrams, BPMN and EPCs, Petri nets offer a graphical notation for stepwise processes that include choice, iteration, and concurrent execution. Unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis.

Because of its executability, the Petri net is an interesting formalism to compare with the symbolic atomistic model.

Formalism for Petri nets

A Petri net consists of places, transitions, and directed arcs. Arcs run from a place to a transition or vice versa, never between places or between transitions. The places from which an arc runs to a transition are called the input places of the transition; the places to which arcs run from a transition are called the output places of the transition.

Places may contain a natural number of tokens. A distribution of tokens over the places of a net is called a marking. A transition of a Petri net may fire whenever there is a token at the end of all input arcs; when it fires, it consumes these tokens, and places tokens at the end of all output arcs. A firing is atomic, i.e., a single non-interruptible step.

Execution of Petri nets is nondeterministic: when multiple transitions are enabled at the same time, any one of them may fire. If a transition is enabled, it may fire, but it doesn’t have to.  Since firing is nondeterministic, and multiple tokens may be present anywhere in the net (even in the same place), Petri nets are well suited for modeling the concurrent behavior of distributed systems.

The following formal definition is loosely based on (Peterson 1981). Many alternative definitions exist.


A Petri net graph (called Petri net by some, but see below) is a 3-tuple (S, T, W), where

  • S is a finite set of places
  • T is a finite set of transitions
  • S and T are disjoint, i.e. no object can be both a place and a transition
  • W: (S \times T) \cup (T \times S) \to  \mathbb{N} is a multiset of arcs, i.e. it defines arcs and assigns to each arc a non-negative integer arc multiplicity; note that no arc may connect two places or two transitions.

The flow relation is the set of arcs:  F  = \{ (x,y) \mid W(x,y) > 0 \}. In many textbooks, arcs can only have multiplicity 1, and they often define Petri nets using F instead of W.

A Petri net graph is a bipartite multidigraph (S \cup T, F) with node partitions S and T.

The preset of a transition t is the set of its input places: {}^{\bullet}t = \{ s \in S \mid  W(s,t) > 0 \}; its postset is the set of its output places: t^{\bullet} = \{ s \in S \mid W(t,s) > 0 \}

A marking of a Petri net (graph) is a multiset of its places, i.e., a mapping M: S \to \mathbb{N}. We say the marking assigns to each place a number of tokens.

A Petri net (called marked Petri net by some, see above) is a 4-tuple (S, T, W, M_0), where

  • (S,T,W) is a Petri net graph;
  • M0 is the initial marking, a marking of the Petri net graph.

Executability of Petri nets

Petri nets include a specific logic, with some variations. In short:

  • firing a transition t in a marking M consumes W(s,t) tokens from each of its input places s, and produces W(t,s) tokens in each of its output places s
  • a transition is enabled (it may fire) in M if there are enough tokens in its input places for the consumptions to be possible, i.e. iff \forall s: M(s) \geq W(s,t).
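These two rules can be written down directly as a sketch (assuming the (S, T, W) definition above; Python is used here for illustration):

```python
# Minimal Petri net firing semantics: W maps arcs (x, y) to multiplicities.
from collections import Counter

def enabled(t, marking, W, places):
    # t is enabled iff every input place s holds at least W(s, t) tokens.
    return all(marking[s] >= W.get((s, t), 0) for s in places)

def fire(t, marking, W, places):
    # Consume W(s, t) tokens from each input place, produce W(t, s) in each output.
    m = Counter(marking)
    for s in places:
        m[s] -= W.get((s, t), 0)
        m[s] += W.get((t, s), 0)
    return m

places = ["p1", "p2"]
W = {("p1", "t1"): 1, ("t1", "p2"): 2}   # t1 takes 1 token from p1, puts 2 in p2
M0 = Counter({"p1": 1, "p2": 0})

assert enabled("t1", M0, W, places)
M1 = fire("t1", M0, W, places)
print(dict(M1))   # {'p1': 0, 'p2': 2}
```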

We are generally interested in what may happen when transitions may continually fire in arbitrary order.

We say that a marking M' is reachable from a marking M in one step if M \to_G M'; we say that it is reachable from M if M {\to_G}^* M', where {\to_G}^* is the reflexive transitive closure of \to_G; that is, if it is reachable in 0 or more steps.

For a (marked) Petri net N = (S, T, W, M_0), we are interested in the firings that can be performed starting with the initial marking M0. Its set of reachable markings is the set R(N) \ \stackrel{D}{=}\  \{ M' \mid M_0 {\to_{(S,T,W)}}^* M' \}.

The reachability graph of N is the transition relation \to_G restricted to its reachable markings R(N). It is the state space of the net.

A firing sequence for a Petri net with graph G and initial marking M0 is a sequence of transitions \vec \sigma = \langle t_{i_1} \ldots t_{i_n}  \rangle such that M_0 \to_{G,t_{i_1}} M_1 \wedge \ldots  \wedge M_{n-1} \to_{G,t_{i_n}} M_n. The set of firing sequences is denoted as L(N).
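Given these definitions, the reachable markings R(N) and the reachability graph can be enumerated by breadth-first search over firings; a sketch (it terminates only when the state space is finite):

```python
# Enumerate reachable markings and reachability-graph edges by BFS over firings.
from collections import deque

def reachability_graph(places, transitions, W, M0):
    def enabled(t, m):
        return all(m[i] >= W.get((p, t), 0) for i, p in enumerate(places))
    def fire(t, m):
        return tuple(m[i] - W.get((p, t), 0) + W.get((t, p), 0)
                     for i, p in enumerate(places))
    start = tuple(M0[p] for p in places)
    seen, edges, queue = {start}, [], deque([start])
    while queue:
        m = queue.popleft()
        for t in transitions:
            if enabled(t, m):
                m2 = fire(t, m)
                edges.append((m, t, m2))   # an edge of the reachability graph
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen, edges

# A two-place net where transition t moves one token from p1 to p2.
R, E = reachability_graph(["p1", "p2"], ["t"],
                          {("p1", "t"): 1, ("t", "p2"): 1},
                          {"p1": 2, "p2": 0})
print(sorted(R))   # [(0, 2), (1, 1), (2, 0)]
```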

Types of different levels of Petri nets are shown below:

Petri net types.

Comparing Petri nets and Symbolic Atomistic Model

The formalism of Petri nets differs from that of computer programming languages, because each place has a contents, which is a collection of tokens. The purpose is to study firing and the balance between modeling power and analyzability.

In spite of the differences, I argue that Petri nets can be converted into the symbolic atomistic model (AHO objects) by some arrangements:

  1. Each place reserves an AHO object.
  2. Each transition reserves as many AHOs as it requires when its semantics is converted into atomistic form (atomic formulas).
  3. The contents of each place form a separate dynamic AHO object.
  4. The code for transitions is written in a high-level language.
  5. The executing logic is programmed into the run methods of the AHOs, allowing nondeterminism.
  6. Otherwise, the logic of SimulationWare can be used.

There is no demonstration yet that this works, but it seems evident that a modified atomistic model (the architecture of AHOs) should be able to simulate Petri nets, too.

Some links:

Truth can have a variety of meanings: the state of being the case, being in accord with a particular fact or reality, being in accord with the body of real things, events and actuality, or fidelity to an original or to a standard. In archaic usage it could be fidelity, constancy or sincerity in action, character, and utterance.

There are differing claims on such questions as what constitutes truth; what things are truthbearers capable of being true or false; how to define and identify truth; the roles that revealed and acquired knowledge play; and whether truth is subjective, relative, objective, or absolute. This post discusses how symbolic analysis covers some attributes of truth and of these truth theories.

Correspondence theory for truth

For the truth to correspond, it must first be proved by evidence or by an individual's valid opinion, which have a similar meaning or context. This type of theory posits a relationship between thoughts or statements on the one hand, and things or objects on the other. It is a traditional model which goes back at least to some of the classical Greek philosophers such as Socrates, Plato, and Aristotle.

Coherence theory for truth

For coherence theories in general, truth requires a proper fit of elements within a whole system. Very often, though, coherence is taken to imply something more than simple logical consistency; often there is a demand that the propositions in a coherent system lend mutual inferential support to each other. Some variants of coherence theory are claimed to characterize the essential and intrinsic properties of formal systems in logic and mathematics. However, formal reasoners are content to contemplate axiomatically independent and sometimes mutually contradictory systems side by side, for example, the various alternative geometries.

Consensus theory

Consensus theory holds that truth is whatever is agreed upon, or in some versions, might come to be agreed upon, by some specified group. Such a group might include all human beings, or a subset thereof consisting of more than one person.

Pragmatic theory

The three most influential forms of the pragmatic theory of truth were introduced around the turn of the 20th century by Charles Sanders Peirce, William James, and John Dewey. Although there are wide differences in viewpoint among these and other proponents of pragmatic theory, they hold in common that truth is verified and confirmed by the results of putting one’s concepts into practice.

Peirce defines truth as follows: “Truth is that concordance of an abstract statement with the ideal limit towards which endless investigation would tend to bring scientific belief, which concordance the abstract statement may possess by virtue of the confession of its inaccuracy and one-sidedness, and this confession is an essential ingredient of truth.”

Peirce emphasizes that ideas of approximation, incompleteness, and partiality, what he describes elsewhere as fallibilism and “reference to the future”, are essential to a proper conception of truth. Although Peirce uses words like concordance and correspondence to describe one aspect of the pragmatic sign relation, he is also quite explicit in saying that definitions of truth based on mere correspondence are no more than nominal definitions, which he accords a lower status than real definitions.

Semantic theory of truth

The semantic theory of truth has as its general case for a given language:

‘P’ is true if and only if P

where ‘P’ is a reference to the sentence (the sentence’s name), and P is just the sentence itself.

Alfred Tarski developed the theory for formal languages (such as formal logic). Here he restricted it in this way: no language could contain its own truth predicate, that is, the expression is true could only apply to sentences in some other language. The latter he called an object language, the language being talked about. (It may, in turn, have a truth predicate that can be applied to sentences in still another language.) The reason for his restriction was that languages that contain their own truth predicate will contain paradoxical sentences like the Liar: This sentence is not true. See The Liar paradox. As a result Tarski held that the semantic theory could not be applied to any natural language, such as English, because they contain their own truth predicates. Donald Davidson used it as the foundation of his truth-conditional semantics and linked it to radical interpretation in a form of coherentism.
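The schema can be made concrete with a toy example in which Python plays the role of the metalanguage and a handful of fixed sentences form the object language (both invented for illustration):

```python
# Toy illustration of "'P' is true if and only if P":
# the object language is a few fixed sentences; the truth predicate
# lives outside it, in the metalanguage (here, Python).
world = {"snow is white": True, "grass is red": False}

def is_true(sentence_name):
    # Metalanguage truth predicate over object-language sentences.
    return world[sentence_name]

# "'Snow is white' is true if and only if snow is white."
assert is_true("snow is white") == world["snow is white"]
print(is_true("grass is red"))   # False
```

Note that is_true is not itself a sentence of the object language, which is exactly Tarski's restriction.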

Truth in logic

Logic is concerned with the patterns in reason that can help tell us if a proposition is true or not. However, logic does not deal with truth in the absolute sense, as for instance a metaphysician does. Logicians use formal languages to express the truths which they are concerned with, and as such there is only truth under some interpretation or truth within some logical system.

A logical truth (also called an analytic truth or a necessary truth) is a statement which is true in all possible worlds or under all possible interpretations, as contrasted with a fact (also called a synthetic claim or a contingency), which is only true in this world as it has historically unfolded. A proposition such as “If p and q, then p.” is considered to be a logical truth because it is true by virtue of the meaning of the symbols and words in it, and not because of any facts of any particular world. Logical truths are such that they could not be untrue.
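The example above can be verified mechanically: a propositional formula is a logical truth when it evaluates to true under all possible interpretations, i.e. in every row of its truth table. A small sketch:

```python
# Check a propositional formula against every interpretation (truth-table method).
from itertools import product

def is_logical_truth(formula, variables):
    # A logical truth holds under every assignment of truth values.
    return all(formula(*values)
               for values in product([True, False], repeat=len(variables)))

# "If p and q, then p"  ==  (p and q) implies p  ==  not (p and q) or p
print(is_logical_truth(lambda p, q: (not (p and q)) or p, ["p", "q"]))   # True
# A contingent claim, e.g. "p and q", is not a logical truth.
print(is_logical_truth(lambda p, q: p and q, ["p", "q"]))                # False
```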

How does Symbolic Analysis cover the theories (presented above)?

Symbolic analysis has been created based on Prolog (predicate logic) so that each symbol of the logic model refers to the corresponding term of a program, reverse engineered from an application. Therefore, at the lowest level, each term in the atomistic symbolic model is a proposition, typical for truth in logic. Each term has been implemented as an AHO artefact, a symbolic element having an excellent correspondence to the code. Links between these elements are coherent, having a symbolic language to describe the semantics of each term (semantic theory of truth).

In simulation (aho:run()), AHO elements create results, which are side effects caused by each element. They mimic the computations of the program. Hence, it is possible for the user to compare the side effects with his/her hypotheses at a higher level of logic. Whether those assumptions differ or agree, this information has pragmatic value from the program comprehension point of view. For possible contradictions some reasons should be found; they are possible bugs/errors in the code.

Some links:

Automated theorem proving (ATP) or automated deduction, currently the most well-developed subfield of automated reasoning (AR), is the proving of mathematical theorems by a computer program.

Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible. For the frequent case of propositional logic, the problem is decidable but NP-complete, and hence only exponential-time algorithms are believed to exist for general proof tasks. For a first order predicate calculus, with no (“proper”) axioms, Gödel’s completeness theorem states that the theorems (provable statements) are exactly the logically valid well-formed formulas, so identifying valid formulas is recursively enumerable: given unbounded resources, any valid formula can eventually be proven.

However, invalid formulas (those that are not entailed by a given theory) cannot always be recognized.

A simpler, but related, problem is proof verification, where an existing proof for a theorem is certified valid. For this, it is generally required that each individual proof step can be verified by a primitive recursive function or program, and hence the problem is always decidable.
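A minimal sketch of such a verifier: a checker for proofs whose only inference rule is modus ponens (formulas are encoded as Python values; ('->', A, B) stands for "A implies B"; the encoding is invented for illustration):

```python
# Minimal proof verifier: every proof line must already be derived (a premise)
# or follow from two earlier lines by modus ponens: from A and ('->', A, B), infer B.
def check_proof(premises, proof):
    derived = list(premises)
    for step in proof:
        ok = step in derived or any(("->", a, step) in derived for a in derived)
        if not ok:
            return False        # this step is not justified: proof rejected
        derived.append(step)
    return True                 # every step checked mechanically: decidable

premises = ["p", ("->", "p", "q"), ("->", "q", "r")]
print(check_proof(premises, ["q", "r"]))   # True
print(check_proof(premises, ["r"]))        # False: r needs q to be derived first
```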

Some techniques for theorem proving


Further reading

How to make it as easy as possible?

A lean theorem prover is an automated theorem prover implemented in a minimum amount of code. Lean provers are generally implemented in Prolog, and make proficient use of the backtracking engine and logic variables of that language. Lean provers can be as small as a few hundred bytes of source code.

The following is a small Prolog implementation:

 a ; b ; c.

The point is that this procedural definition for a theorem can be converted into an atomistic model so that each domain is transformed into a symbolic element.

Therefore, proving the theorem means running the top element of the theorem. The principle is exactly the same as in source code simulation (see SimulationWare).

When the variables a, b and c are captured from the original code, this work will enable proving source code sequences written in Java or similar languages.
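The backtracking idea can be sketched outside Prolog as well. Below is a hypothetical miniature prover for propositional Horn clauses in Python (the clause format and names are invented for illustration, not an actual lean prover):

```python
# Miniature backtracking prover for propositional Horn clauses.
# Each clause is (head, [body goals]); facts have an empty body.
def prove(goal, clauses, depth=10):
    if depth == 0:
        return False            # depth bound keeps the search finite
    for head, body in clauses:
        if head == goal and all(prove(g, clauses, depth - 1) for g in body):
            return True         # backtracking happens implicitly via the loop
    return False

clauses = [("a", []),           # fact: a.
           ("b", ["a"]),        # rule: b :- a.
           ("c", ["a", "b"])]   # rule: c :- a, b.
print(prove("c", clauses))   # True
print(prove("d", clauses))   # False
```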

Some links:

Derivation is a popular example of symbolic computation, i.e. symbolic analysis from the mathematical approach.

This example demonstrates the expressive power and ease of use of symbolic manipulation in Prolog (more examples here):

Procedural approach to Prolog, Derivation

d(X,X,1) :- !.                               /* 1  x     dx = 1                     */
d(C,X,0) :- atomic(C).                       /* 2  c     dx = 0                     */
d(-U,X,-A) :- d(U,X,A).                      /* 3  -u    dx = - d u dx              */
d(U+V,X,A+B) :- d(U,X,A), d(V,X,B).          /* 4  u+v   dx = d u dx + d v dx       */
d(U-V,X,A-B) :- d(U,X,A), d(V,X,B).          /* 5  u-v   dx = d u dx - d v dx       */
d(C*U,X,C*A) :- C \= X, d(U,X,A), !.         /* 6  c*u   dx = c*d u dx              */
d(U*V,X,B*U+A*V) :- d(U,X,A), d(V,X,B).      /* 7  u*v   dx = u*d v dx + v*d u dx   */
d(U/V,X,A) :- d(U*V^(-1),X,A).               /* 8  u/v   dx = d (u*v)^-1 dx         */
d(U^C,X,C*U^(C-1)*W) :- C \= X, d(U,X,W).    /* 9  u^c   dx = c*u^(c-1)*d u dx      */
d(log(U),X,A*U^(-1)) :- d(U,X,A).            /*10  ln(u) dx = u^-1 * d u dx         */
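For comparison, a hedged Python transcription of a few of the rules above (rules 1, 2, 4 and 7; expressions as nested tuples, with no simplification):

```python
# Python mirror of Prolog derivative rules 1, 2, 4 and 7.
# An expression is a variable name (str), a number, or a tuple (op, left, right).
def d(e, x):
    if e == x:
        return 1                                   # rule 1: d x dx = 1
    if not isinstance(e, tuple):
        return 0                                   # rule 2: d c dx = 0
    op, u, v = e
    if op == "+":
        return ("+", d(u, x), d(v, x))             # rule 4: (u+v)' = u' + v'
    if op == "*":
        # rule 7: d(u*v) = B*u + A*v, where A = d(u), B = d(v)
        return ("+", ("*", d(v, x), u), ("*", d(u, x), v))
    raise ValueError("unsupported operator: " + op)

print(d(("*", "x", "x"), "x"))   # ('+', ('*', 1, 'x'), ('*', 1, 'x'))
```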

Derivation in Visual Prolog

d(int(_),_) = int(0). 

d(var(X),X) = int(1):- !.
d(var(_),_) = int(0):-!.
d(plus(U,V),X) = plus(d(U,X),d(V,X)).

d(minus(U,V),X) = minus(d(U,X),d(V,X)).

d(mult(U,V),X) = plus(mult(d(U,X),V),mult(U,d(V,X))).

d(div_(U,V),X) = div_(minus(mult(d(U,X),V),mult(U,d(V,X))),mult(V,V)).

d(ln(U),X) =  mult(div_(int(1),U),d(U,X)).

d(potens(E1,int(I)),X) = mult(mult(int(I),potens(E1,int(I-1))),d(E1,X)).

d(sin(U),X) = mult(cos(U),d(U,X)).

d(cos(U),X) = minus(int(0),mult(sin(U),d(U,X))).

d(tan(U),X) = mult(potens(sec(U),int(2)),d(U,X)).

Starting derivation

differentiate(Str) = ResultStr:-
  InputExpression = diff::parse(Str,EXP),
  DifferentiatedExpression = diff::d(InputExpression,"x"),
  OutputExpression = diff::reduce(DifferentiatedExpression),
  ResultStr = diff::writeexp(OutputExpression).

The Atomistic Correspondence for the Procedural Definition

Let’s assume SE to be the base class for all atomistic objects; it is then the definition for an AHO. Let Symbol be the definition for the whole model semantics, here derivation.

Model Weaver to create the atomistic model:

differentiateExpression(Str) = OutputExpression:-
   S = s::new(),
   BuiltModel = sEModelBuilder::build(S:parse(Str)),
   ResultExpression = BuiltModel:d("x"),
   ResultElement = seData::new(ResultExpression),
   OutputExpression = ResultElement:reduce().

Interface for the Symbol:

interface symbol
    exp = ref(se).          /* this is a reference to a symbolic element, AHO */
    parse : (string) -> exp determ.
    writeExp : (exp) -> string.
end interface symbol

To convert a procedural Prolog-derivation model into an atomistic one, there is only one definition needed to add into the original code:

d(ref(SE),X) =  SE:d(X).
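This delegation can be sketched in Python (class names are illustrative, not the actual tool's API): each node of the expression is its own atomistic object, and derivation of a reference is computed by the element itself, as in d(ref(SE),X) = SE:d(X).

```python
# Sketch of the atomistic conversion: every node is its own AHO-like object,
# and derivation of a reference delegates to that element's own d method.
class Var:
    def __init__(self, name): self.name = name
    def d(self, x): return Int(1) if self.name == x else Int(0)

class Int:
    def __init__(self, v): self.v = v
    def d(self, x): return Int(0)

class Plus:
    def __init__(self, u, v): self.u, self.v = u, v
    def d(self, x): return Plus(self.u.d(x), self.v.d(x))   # delegation to children

def show(e):
    if isinstance(e, Int):
        return str(e.v)
    if isinstance(e, Var):
        return e.name
    return "(%s + %s)" % (show(e.u), show(e.v))

expr = Plus(Var("x"), Int(3))   # x + 3
print(show(expr.d("x")))        # (1 + 0)
```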

Architecture for the Derivation Example


This application shows that it is possible to translate “procedural”, i.e. non-object-based, programs into atomistic ones when the semantics is formal.

Some links

Mathematics is the study of quantity, structure, space, and change. Mathematicians seek out patterns, formulate new conjectures, and establish truth by rigorous deduction from appropriately chosen axioms and definitions.

The evolution of mathematics might be seen as an ever-increasing series of abstractions, or alternatively an expansion of subject matter.

Of these areas, discrete mathematics is the closest to computer languages and source code analysis.

Discrete mathematics

Discrete mathematics is the common name for the fields of mathematics most generally useful in theoretical computer science. This includes, on the computer science side, computability theory, computational complexity theory, and information theory. Computability theory examines the limitations of various theoretical models of the computer, including the most powerful known model – the Turing machine. Complexity theory is the study of tractability by computer; some problems, although theoretically solvable by computer, are so expensive in terms of time or space that solving them is likely to remain practically unfeasible, even with rapid advance of computer hardware. Finally, information theory is concerned with the amount of data that can be stored on a given medium, and hence deals with concepts such as compression and entropy.

Typical kinds of discrete mathematics are shown:

Relations between Discrete Mathematics and Symbolic Analysis

In fact, symbolic analysis is one part of mathematics (in Finnish, see Ruohonen). However, that kind of mathematical symbolic analysis consists of analysis principles typical for mathematics, like symbolic differentiation, symbolic integration etc. Such features can be found in modern mathematical tools like Mathematica 7.

From the software point of view, in Symbolic Analysis (Laitila) there is an atomistic model to describe source code and its behavior. That model obeys the pure rules of graph theory, where all tools intended for graphs are useful. Furthermore, the theory for simulating these graph elements (see AHO objects, i.e. symbols) is pure theory of computation. Simulating branches with unknown conditional values leads to a problem of combinations, a state explosion (combinatorial explosion) problem. However, there are no connections from the atomistic symbolic model to cryptography.

In summary, the framework of the atomistic model is rather close to the theory of discrete mathematics.

As a conclusion, it is then reasonable to ask whether there is a gap between mathematical formulations like algebra and the formalism adapted from source code (programming languages) to be expressed in the symbolic atomistic model.

There is no Gap

Mathematics is a set of theories based on their type systems, where selected symbols are connected with carriers (symbolic clauses) and constants as operations (see figure below).

In programming languages the operations are expressions in the grammar (see Java grammar). From the side of automata theory each operation has been executed by a register machine or a similar automaton, which is a subset of the universal machine (see more).

Modern computer languages contain the features to allow programming any mathematical functions (except some demanding, very specific vector and array operations).

They are Turing complete (see more). The Church–Turing thesis states that any function that is computable by an algorithm is a computable function.

Programming languages are an extension of traditional mathematics in which there are definitions, loops, conditionals (e.g. paths) and some other types of clauses. The formalism of Pascal was proved to be downwards compatible with mathematics, and deducible, decades ago.

In the book Symbolic Analysis we show that Java can partially be simulated by SAM. It can create an output tape (as a Turing machine does) for any algorithm, even though some symbols are unknown. Therefore, the execution of the Symbolic Abstract Machine, SAM, can directly be mapped from the mathematical side (e.g. equations) to the software-side symbols. There is no gap between these two sides, because all the semantics can be expressed in the Symbolic language and all symbols can be executed by a Symbol:run invocation.
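As an illustration of the output-tape claim, here is a toy Python sketch (the tape format is invented): known operands compute concretely, while unknown symbols propagate through the computation symbolically.

```python
# Toy illustration: a step-by-step "output tape" with an unknown input symbol.
# Known values compute concretely; unknowns propagate as symbolic strings.
def step(op, a, b):
    if isinstance(a, int) and isinstance(b, int):
        return a + b if op == "+" else a * b
    return "(%s %s %s)" % (a, op, b)   # unknown operand: keep a symbolic term

tape = []
x = "x?"                                  # an unknown symbol from the source code
t1 = step("+", 2, 3);  tape.append(t1)    # concrete step: 5
t2 = step("*", t1, x); tape.append(t2)    # symbolic step: (5 * x?)
print(tape)   # [5, '(5 * x?)']
```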

Some links:

A Propositional directed acyclic graph (PDAG) is a data structure that is used to represent a Boolean function. A Boolean function can be represented as a rooted, directed acyclic graph of the following form:

  • Leaves are labeled with \top (true), \bot (false), or a Boolean variable.
  • Non-leaves are \bigtriangleup (logical and), \bigtriangledown (logical or) and \Diamond (logical not).
  • \bigtriangleup– and \bigtriangledown-nodes have at least one child.
  • \Diamond-nodes have exactly one child.

Leaves labeled with \top (\bot) represent the constant Boolean function which always evaluates to 1 (0). A leaf labeled with a Boolean variable x is interpreted as the assignment x = 1, i.e. it represents the Boolean function which evaluates to 1 if and only if x = 1. The Boolean function represented by a \bigtriangleup-node is the one that evaluates to 1 if and only if the Boolean functions of all its children evaluate to 1. Similarly, a \bigtriangledown-node represents the Boolean function that evaluates to 1 if and only if the Boolean function of at least one child evaluates to 1. Finally, a \Diamond-node represents the complementary Boolean function of its child, i.e. the one that evaluates to 1 if and only if the Boolean function of its child evaluates to 0.

Every binary decision diagram (BDD) and every negation normal form (NNF) is also a PDAG with some particular properties. The following pictures represent the Boolean function f(x1, x2, x3) = ¬x1*¬x2*¬x3 + x1*x2 + x2*x3. See the picture below.
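A small Python sketch of the PDAG semantics above, evaluated on that same function f (the node encoding is invented for illustration):

```python
# PDAG evaluation sketch: a node is ("and", ...), ("or", ...), ("not", child),
# a variable name, or a constant True/False (the leaf labels above).
def evaluate(node, assignment):
    if isinstance(node, bool):
        return node                         # \top / \bot leaf
    if isinstance(node, str):
        return assignment[node]             # variable leaf
    op, *children = node
    if op == "and":                         # and-node: 1 iff all children are 1
        return all(evaluate(c, assignment) for c in children)
    if op == "or":                          # or-node: 1 iff some child is 1
        return any(evaluate(c, assignment) for c in children)
    return not evaluate(children[0], assignment)   # not-node: exactly one child

# f(x1, x2, x3) = (not x1 and not x2 and not x3) or (x1 and x2) or (x2 and x3)
f = ("or",
     ("and", ("not", "x1"), ("not", "x2"), ("not", "x3")),
     ("and", "x1", "x2"),
     ("and", "x2", "x3"))
print(evaluate(f, {"x1": False, "x2": False, "x3": False}))   # True
print(evaluate(f, {"x1": True, "x2": False, "x3": False}))    # False
```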

The Correspondence between PDAG and AHO (Atomistic Hybrid Object)

The semantics of a PDAG resembles that of AHO, the symbolic element in the atomistic model. A PDAG is an automaton of type three in the Chomsky hierarchy.

The Chomsky hierarchy of grammars, languages, automata and production rules (constraints):

  • Type-0: recursively enumerable languages; Turing machine; \alpha \rightarrow \beta (no restrictions)
  • Type-1: context-sensitive languages; linear-bounded non-deterministic Turing machine; \alpha A\beta \rightarrow \alpha\gamma\beta
  • Type-2: context-free languages; non-deterministic pushdown automaton; A \rightarrow \gamma
  • Type-3: regular languages; finite state automaton; A \rightarrow a and A \rightarrow aB

However, the command of an AHO is more powerful: it can express any clause adapted from source code (loopClause is any loop, pathClause is any branch, setClause is any change of any variable, etc.) when split into sub-commands. There is one characteristic feature when describing mathematical calculations or non-trivial logical equations: each AHO should be atomistic. The command of each AHO for calculations, logical operations and transformations is opClause. It refers to an operation.

The semantics of the command covers all levels of the Chomsky hierarchy. However, simulating levels 0 and 1 needs some arrangements. See more about the limitations in the book.

Some links:

Symbolic Manipulation Program, usually called SMP, was a computer algebra system designed by Chris A. Cole and Stephen Wolfram at Caltech circa 1979 and initially developed in the Caltech physics department. It was first sold commercially in 1981 by the Computer Mathematics Corporation of Los Angeles which later became part of Inference Corporation; Inference Corp. further developed the program and marketed it commercially from 1983 to 1988. SMP was essentially Version Zero of the more ambitious Mathematica system.

As the name suggests, the core of SMP is (was) the symbol. Let's study it more.

Motivation for SMP (see)

The primary motivation for the construction of the SMP program was the necessity of performing very complicated algebraic manipulations in certain areas of theoretical physics. The need to deal with advanced mathematical constructs required the program to be of great generality. In addition, the size of the calculations anticipated demanded that the program should operate quickly and be capable of handling very large amounts of data. The resulting program is expected to be valuable in a wide variety of applications.

Main principles of SMP (see)

The basic purpose of SMP is to manipulate symbolic expressions. These expressions may represent algebraic formulae, on which mathematical operations are performed. By virtue of their symbolic nature, they may also represent procedures and actions.

The ability to manipulate symbolic expressions, as well as sets of numbers, allows for much greater generality and a much richer architecture than in numerical computer languages.

The structure of expressions in SMP is defined recursively as follows:

expr consists of   symbol                                       (1)
             or   expr[expr, expr, …]           (projection)    (2)
             or   { [expr]: expr, [expr]: expr, … }   (list)    (3)

Symbols are the fundamental units. Projections represent extraction of a part in an expression. Lists allow expressions to be collected together.

These three fundamental forms suffice to represent all the objects, operations and procedures required in symbolic manipulations.

Comparison with Symbolic Analysis

In Symbolic Analysis our model to represent all objects etc. is even simpler. In it the only artefact is the symbol itself. This is possible because of the hybrid structure of the symbolic element (see more about AHO and Symbol). We argue that Symbol is the core of computer science.

Case 1: Symbol at SMP  (see)

Symbols are labelled by a unique name (e.g. x3 or Mult) which is used to represent them for input and output. Expressions may be assigned as values for symbols. A symbol to which a value has been assigned becomes essentially a short notation for its value, and is replaced by the value whenever it occurs. If no value has been assigned to a symbol, the results of (most) operations are such as would hold for any possible value of the symbol. The set of possible values represented by a symbol may be delimited by assigning a list of properties to the symbol.

Case 2: Projection at SMP (see)

The projection f[expr] represents the part of the expression f selected by the “filter” expr. If f is a list, the entry with index expr is selected. If f is a symbol with no value, operations performed on f[expr] hold for any value of f. Properties assigned to f may specify particular treatment. “System-defined” symbols stand for many fundamental operations. Projections from these symbols (e.g. Plus) yield expressions which represent the action of these operations on the filter expressions.

Case 3: Lists at SMP  (see)

Lists are ordered and indexed sets of expressions. The index and value of each entry in a list may be arbitrary expressions. A particular value in a list is extracted by projection with the corresponding index as a filter.  If the value of some symbol f is a list, the entries of the list describe parts of the “object” f: they give the values for projections of f with different filters. Entries may be introduced into such a list by assignment of values for projections of f.

For example, f:{ [1]: a} (or f[1]: a) defines f to be an object whose projection with filter 1 is a. The values of other projections from f remain unspecified. f[x+y]: (x+y)^2 then yields {{ [x+y]: (x+y)^2, [1]: a} } and defines the value for a projection of f with filter x+y to be (x+y)^2.
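These three cases (symbols, projections and lists) can be mimicked in a short Python sketch (illustrative only, not SMP's implementation):

```python
# Sketch of SMP's forms: a symbol may carry a list (index -> value), and the
# projection f[filter] either selects an entry or stays symbolic (unevaluated).
class Sym:
    def __init__(self, name):
        self.name = name
        self.entries = {}               # the list assigned to the symbol, if any

    def assign(self, index, value):
        self.entries[index] = value     # f[index]: value

    def project(self, filt):
        # If an entry exists for this filter, select it; otherwise the
        # projection remains unevaluated, as for a symbol with no value.
        return self.entries.get(filt, "%s[%s]" % (self.name, filt))

f = Sym("f")
f.assign(1, "a")                 # f:{ [1]: a }
print(f.project(1))              # a
print(f.project("x+y"))          # f[x+y]  (an unspecified projection)
f.assign("x+y", "(x+y)^2")       # f[x+y]: (x+y)^2
print(f.project("x+y"))          # (x+y)^2
```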

Symbolic Analysis is a Reverse Engineering Architecture and Methodology

While SMP was written for expressing mathematical formulas and equations, the main purpose of symbolic analysis was reverse engineering: expressing program models in the symbolic notation. However, there are many similar needs and goals in both of them. It is obvious that symbolic analysis can be extended into mathematical formulations, too, because its internal formalism allows presenting the semantics of a wide range of formal-language expressions.

Correspondence between SMP and symbolic analysis

It is obvious that any expression of SMP can be transformed into the symbolic analysis model, simply by extending the Symbolic language with the necessary expressions. Lists (case 3) and projections (case 2) can be added as extensions to the Symbolic language, the clause definition.


Mathematica is the successor of SMP. In it everything is an expression. There are 3000 different expressions in it. See a video.

Note: In symbolic analysis (Laitila) everything is a symbol.

Some links:

Erkki Laitila, PhD (2008) computer engineer (1977)