You are currently browsing the category archive for the ‘Mathematical formalisms’ category.

In mathematics and logic a traditional proof starts from one or more axioms. Clauses are used as steps (bridges) towards higher-order conclusions. Lemmas and corollaries are often useful for marking the route to the final conclusion, which is usually a theorem.

Below are the most important concepts for deriving logical conclusions:

- An **axiom** (or **postulate**) is a proposition that is not proved or demonstrated but considered to be either self-evident or subject to necessary decision. Therefore, its truth is taken for granted, and it serves as a starting point for deducing and inferring other (theory-dependent) truths.
- A **lemma** is simultaneously a contention for premises below it and a premise for a contention above it.
- A **corollary** is a statement which follows readily from a previous statement. In mathematics a corollary typically follows a theorem. The use of the term *corollary*, rather than *proposition* or *theorem*, is intrinsically subjective. Proposition *B* is a corollary of proposition *A* if *B* can readily be deduced from *A*, but the meaning of *readily* varies depending upon the author and context.
- A **theorem** is a statement that has been proved on the basis of previously established statements.

### Symbolic analysis and the mathematical dictionary

In Symbolic Analysis we use the concepts above in the following way:

- Symbols are the axioms to start from. Some of them are grounded, but some are non-ground: fuzzy, vague or mental models without exact information.
- Those symbols that are ground – captured from source code – have a certain semantics. A lemma is that for any symbol having a semantic notation, a symbolic notation can be created, because each grammar term is independent of the other grammar terms by default. Therefore, each symbol can be simulated separately.
- The corollary is that each source-language symbol is an automaton, which can be programmed in a tool as a state machine with the same principal logic as a Turing machine.
- The final conclusion from steps 1, 2 and 3 above is that by simulating source code it is possible to mimic the original program execution step by step. This makes a foundation for interactive program proof.
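The lemma above can be illustrated with a small sketch: each grammar term (symbol) is simulated separately as its own state machine. The `Symbol` class and its state names below are hypothetical illustrations, not the actual tool's API:

```python
# Illustrative sketch only: a ground symbol captured from source code,
# simulated independently of all other grammar terms as a state machine.
class Symbol:
    def __init__(self, name, semantics):
        self.name = name            # the grammar term, e.g. a statement
        self.semantics = semantics  # its semantic notation
        self.state = "created"

    def step(self):
        # each symbol advances through its own states, independent of others
        order = ["created", "activated", "evaluated", "terminated"]
        i = order.index(self.state)
        if i < len(order) - 1:
            self.state = order[i + 1]
        return self.state

s = Symbol("x := 1", "assignment")
while s.step() != "terminated":
    pass
print(s.state)  # terminated
```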

### Some links:

- What is the difference between a theorem, a lemma and a corollary, by David Richeson: http://users.dickinson.edu/~richesod/
- Definitions for axioms etc. at Wikipedia.
- Ideal of Science: https://symbolicanalysis.wordpress.com/2009/10/22/ideal-of-science/
- Transformation process for reverse engineering: https://symbolicanalysis.wordpress.com/2009/09/09/reverse-engineering/ (the figure describes the symbolic analysis transformation process as successive automata: A1-A7).

A **Petri net** (also known as a **place/transition net** or **P/T net**) is one of several mathematical modeling languages for the description of discrete distributed systems. A Petri net is a directed bipartite graph in which the nodes represent transitions (i.e. discrete events that may occur, signified by bars) and places (i.e. conditions, signified by circles), while directed arcs (signified by arrows) describe which places are pre- and/or postconditions for which transitions.

Like industry standards such as UML activity diagrams, BPMN and EPCs, Petri nets offer a graphical notation for stepwise processes that include choice, iteration, and concurrent execution. *Unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis.*

*Because of its executability, the Petri net is an interesting formalism to compare with the symbolic atomistic model.*

*Formalism for Petri nets*


A Petri net consists of *places*, *transitions*, and *directed arcs*. Arcs run from a place to a transition or vice versa, never between places or between transitions. The places from which an arc runs to a transition are called the *input places* of the transition; the places to which arcs run from a transition are called the *output places* of the transition.

Places may contain a natural number of tokens. A distribution of tokens over the places of a net is called a *marking*. A transition of a Petri net may *fire* whenever there are enough tokens in all of its input places; when it fires, it consumes these tokens and places tokens in its output places. A firing is atomic, i.e., a single non-interruptible step.

Execution of Petri nets is nondeterministic: when multiple transitions are enabled at the same time, any one of them may fire. If a transition is enabled, it may fire, but it doesn’t have to. Since firing is nondeterministic, and multiple tokens may be present anywhere in the net (even in the same place), Petri nets are well suited for modeling the concurrent behavior of distributed systems.
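The firing rule above can be sketched as executable code. The following is a minimal Python illustration, assuming arc multiplicity 1 everywhere; the class and names are my own, not from any Petri net tool:

```python
from collections import Counter

# A transition is enabled iff every input place holds a token; firing
# consumes one token per input place and produces one per output place.
class PetriNet:
    def __init__(self, transitions, marking):
        # transitions: name -> (list of input places, list of output places)
        self.transitions = transitions
        self.marking = Counter(marking)

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking[p] > 0 for p in ins)

    def fire(self, t):
        ins, outs = self.transitions[t]
        assert self.enabled(t), "transition not enabled"
        for p in ins:
            self.marking[p] -= 1   # consume input tokens (atomically)
        for p in outs:
            self.marking[p] += 1   # produce output tokens

net = PetriNet({"t1": (["p1"], ["p2", "p3"])}, {"p1": 1})
net.fire("t1")
print(dict(net.marking))  # {'p1': 0, 'p2': 1, 'p3': 1}
```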

The following formal definition is loosely based on (Peterson 1981). Many alternative definitions exist.

### Syntax

A **Petri net graph** (called a *Petri net* by some, but see below) is a 3-tuple (*S*, *T*, *W*), where

- *S* is a finite set of *places*
- *T* is a finite set of *transitions*
- *S* and *T* are disjoint, i.e. no object can be both a place and a transition
- *W*: (*S* × *T*) ∪ (*T* × *S*) → ℕ is a multiset of arcs, i.e. it defines arcs and assigns to each arc a non-negative integer *arc multiplicity*; note that no arc may connect two places or two transitions.

The *flow relation* is the set of arcs: *F* = {(*x*, *y*) | *W*(*x*, *y*) > 0}. In many textbooks, arcs can only have multiplicity 1, and they often define Petri nets using *F* instead of *W*.

A Petri net graph is a bipartite multidigraph with node partitions *S* and *T*.

The *preset* of a transition *t* is the set of its *input places*: •*t* = {*s* ∈ *S* | *W*(*s*, *t*) > 0}; its *postset* is the set of its *output places*: *t*• = {*s* ∈ *S* | *W*(*t*, *s*) > 0}.

A *marking* of a Petri net (graph) is a multiset of its places, i.e., a mapping *M*: *S* → ℕ. We say the marking assigns to each place a number of *tokens*.

A **Petri net** (called a *marked Petri net* by some, see above) is a 4-tuple (*S*, *T*, *W*, *M*_{0}), where

- (*S*, *T*, *W*) is a Petri net graph;
- *M*_{0} is the *initial marking*, a marking of the Petri net graph.

### Executability of Petri nets

Petri nets include a specific execution logic, with some variations. In short:

- firing a transition *t* in a marking *M* consumes *W*(*s*, *t*) tokens from each of its input places *s*, and produces *W*(*t*, *s*) tokens in each of its output places *s*
- a transition is *enabled* (it may *fire*) in *M* if there are enough tokens in its input places for the consumptions to be possible, i.e. iff *M*(*s*) ≥ *W*(*s*, *t*) for every input place *s*.

We are generally interested in what may happen when transitions may continually fire in arbitrary order.

We say that a marking *M*′ *is reachable from* a marking *M* *in one step* if *M* →_{*t*} *M*′ for some transition *t*; we say that it *is reachable from* *M* if *M* →* *M*′, where →* is the reflexive transitive closure of →; that is, if it is reachable in 0 or more steps.

For a (marked) Petri net *N* = (*S*, *T*, *W*, *M*_{0}), we are interested in the firings that can be performed starting with the initial marking *M*_{0}. Its set of *reachable markings* is the set *R*(*N*) = {*M* | *M*_{0} →* *M*}.

The *reachability graph* of *N* is the transition relation restricted to its reachable markings *R*(*N*). It is the state space of the net.

A *firing sequence* for a Petri net with graph *G* and initial marking *M*_{0} is a sequence of transitions σ = ⟨*t*_{1}, …, *t*_{n}⟩ such that *M*_{0} →_{*t*_{1}} *M*_{1} →_{*t*_{2}} … →_{*t*_{n}} *M*_{n}. The set of firing sequences is denoted as *L*(*N*).
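For small, *bounded* nets the set of reachable markings can be computed by exhaustive search, firing enabled transitions in arbitrary order. The following Python sketch is illustrative only (for unbounded nets *R*(*N*) is infinite and this search would not terminate):

```python
from collections import deque

# Breadth-first exploration of the reachable markings of a small net.
# W maps (place, transition) and (transition, place) pairs to multiplicities;
# a marking is a tuple of token counts aligned with the `places` list.
def reachable(places, transitions, W, M0):
    idx = {s: i for i, s in enumerate(places)}

    def enabled(M, t):
        return all(M[idx[s]] >= W.get((s, t), 0) for s in places)

    def fire(M, t):
        return tuple(M[idx[s]] - W.get((s, t), 0) + W.get((t, s), 0)
                     for s in places)

    seen, queue = {M0}, deque([M0])
    while queue:
        M = queue.popleft()
        for t in transitions:
            if enabled(M, t):
                M2 = fire(M, t)
                if M2 not in seen:
                    seen.add(M2)
                    queue.append(M2)
    return seen

# one transition t moving a token from p1 to p2, starting with two tokens in p1
markings = reachable(["p1", "p2"], ["t"],
                     {("p1", "t"): 1, ("t", "p2"): 1}, (2, 0))
print(sorted(markings))  # [(0, 2), (1, 1), (2, 0)]
```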

Types of different levels of Petri nets are shown below:

### Comparing Petri nets and Symbolic Atomistic Model

The formalism of Petri nets differs from that of computer programming languages, because each place has a content, which is a set of tokens. The purpose is to study firing and to balance modeling power against analyzability.

**In spite of the differences, I argue that Petri nets can be converted into the symbolic atomistic model (AHO objects) by some arrangements:**

- Each place reserves an AHO object.
- Each transition reserves as many AHOs as it requires when its semantics is converted into atomistic form (atomic formulas).
- The content of each place is a separate dynamic AHO object.
- Code for transitions is written in a high-level language.
- The executing logic is programmed into the run methods of the AHOs, allowing nondeterminism. **Otherwise, the logic of SimulationWare could be used.**

There is no demonstration yet that it works, but **it seems evident that a modified atomistic model (the architecture of AHOs) should be able to simulate Petri nets, too.**
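A very rough sketch of the arrangement argued for above: places and transitions as objects whose run methods carry the firing logic, with nondeterminism as a random choice among enabled transitions. The class names below are only placeholders borrowed from this post; the real AHO architecture is not defined here:

```python
import random

# Speculative sketch: a place object holds tokens, a transition object's
# run() method consumes and produces them (arc multiplicity 1 assumed).
class AhoPlace:
    def __init__(self, tokens=0):
        self.tokens = tokens

class AhoTransition:
    def __init__(self, inputs, outputs):
        self.inputs, self.outputs = inputs, outputs

    def run(self):
        # the executing logic lives in the run method
        if all(p.tokens > 0 for p in self.inputs):
            for p in self.inputs:
                p.tokens -= 1
            for p in self.outputs:
                p.tokens += 1
            return True
        return False

def simulate(transitions, steps=10):
    for _ in range(steps):
        enabled = [t for t in transitions
                   if all(p.tokens > 0 for p in t.inputs)]
        if not enabled:
            break
        random.choice(enabled).run()  # nondeterministic choice

p1, p2 = AhoPlace(1), AhoPlace(0)
t = AhoTransition([p1], [p2])
simulate([t])
print(p2.tokens)  # 1
```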

**Some links:**

- See more about Petri nets from: http://en.wikipedia.org/wiki/Petri_net
- About a tool Maria (in Finnish): http://www.tcs.hut.fi/Software/maria/petri.fi.pdf

**Truth** can have a variety of meanings, from the state of being the case, being in accord with a particular fact or reality, being in accord with the body of real things, events, actuality, or fidelity to an original or to a standard. In archaic usage it could be fidelity, constancy or sincerity in action, character, and utterance.

There are differing claims on such questions as what constitutes truth; what things are truthbearers capable of being true or false; how to define and identify truth; the roles that revealed and acquired knowledge play; and whether truth is subjective, relative, objective, or absolute. This post presents some topics on how symbolic analysis covers some attributes of truth and truth theories.

### Correspondence theory for truth

For truth to correspond, it must first be supported by evidence or an individual's valid opinion with a similar meaning or context. This type of theory posits a relationship between thoughts or statements on the one hand, and things or objects on the other. It is a traditional model which goes back at least to some of the classical Greek philosophers such as Socrates, Plato, and Aristotle.

### Coherence theory for truth

For coherence theories in general, truth requires a proper fit of elements within a whole system. Very often, though, coherence is taken to imply something more than simple logical consistency; often there is a demand that the propositions in a coherent system lend mutual inferential support to each other. Some variants of coherence theory are claimed to characterize the essential and intrinsic properties of formal systems in logic and mathematics. However, formal reasoners are content to contemplate axiomatically independent and sometimes mutually contradictory systems side by side, for example, the various alternative geometries.

### Consensus theory

Consensus theory holds that truth is whatever is agreed upon, or in some versions, might come to be agreed upon, by some specified group. Such a group might include all human beings, or a subset thereof consisting of more than one person.

### Pragmatic theory

The three most influential forms of the *pragmatic theory of truth* were introduced around the turn of the 20th century by Charles Sanders Peirce, William James, and John Dewey. Although there are wide differences in viewpoint among these and other proponents of pragmatic theory, they hold in common that truth is verified and confirmed by the results of putting one’s concepts into practice.

Peirce defines truth as follows: “Truth is that concordance of an abstract statement with the ideal limit towards which endless investigation would tend to bring scientific belief, which concordance the abstract statement may possess by virtue of the confession of its inaccuracy and one-sidedness, and this confession is an essential ingredient of truth.”

Peirce emphasizes that ideas of approximation, incompleteness, and partiality, what he describes elsewhere as *fallibilism* and “reference to the future”, are essential to a proper conception of truth. Although Peirce uses words like *concordance* and *correspondence* to describe one aspect of the pragmatic sign relation, he is also quite explicit in saying that definitions of truth based on mere correspondence are no more than *nominal* definitions, which he accords a lower status than *real* definitions.

### Semantic theory of truth

The semantic theory of truth has as its general case for a given language:

- ‘P’ is true if and only if P

where ‘P’ is a reference to the sentence (the sentence’s name), and P is just the sentence itself.

Alfred Tarski developed the theory for formal languages (such as formal logic). Here he restricted it in this way: no language could contain its own truth predicate, that is, the expression *is true* could only apply to sentences in some other language. The latter he called an *object language*, the language being talked about. (It may, in turn, have a truth predicate that can be applied to sentences in still another language.) The reason for his restriction was that languages that contain their own truth predicate will contain paradoxical sentences like the Liar: *This sentence is not true*. See The Liar paradox. As a result Tarski held that the semantic theory could not be applied to any natural language, such as English, because they contain their own truth predicates. Donald Davidson used it as the foundation of his truth-conditional semantics and linked it to radical interpretation in a form of coherentism.

### Truth in logic

Logic is concerned with the patterns in reason that can help tell us if a proposition is true or not. However, logic does not deal with truth in the absolute sense, as for instance a metaphysician does. Logicians use formal languages to express the truths which they are concerned with, and as such there is only truth under some interpretation or truth within some logical system.

A logical truth (also called an analytic truth or a necessary truth) is a statement which is true in all possible worlds or under all possible interpretations, as contrasted to a *fact* (also called a *synthetic claim* or a *contingency*) which is only true in this world as it has historically unfolded. A proposition such as “If p and q, then p.” is considered to be a logical truth because it is true by virtue of the meaning of the symbols and words in it, and not because of any facts of any particular world. Such propositions could not be untrue.
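The example above can be checked mechanically: a logical truth must come out true under every interpretation. A small illustrative sketch in Python:

```python
from itertools import product

# A formula is a logical truth iff it holds under all assignments
# of truth values to its variables.
def is_logical_truth(formula, n_vars):
    return all(formula(*values)
               for values in product([False, True], repeat=n_vars))

# "If p and q, then p" holds in every interpretation:
print(is_logical_truth(lambda p, q: not (p and q) or p, 2))  # True
# "p and q" is merely contingent:
print(is_logical_truth(lambda p, q: p and q, 2))             # False
```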

## How does Symbolic Analysis cover the theories (presented above)?

Symbolic analysis has been created based on Prolog (predicate logic), so that each symbol of the logic model refers to the corresponding term of a program, reverse engineered from an application. Therefore, at the lowest level, each term in the atomistic symbolic model is a proposition, typical for truth in logic. Each term has been implemented as an AHO artefact, a symbolic element having an excellent correspondence to the code. Links between these elements are coherent, having a symbolic language to describe the semantics of each term (semantic theory of truth).

AHO elements create in simulation (*aho:run()*) results which are side effects caused by each element. They mimic the computations of the program. Hence, it is possible for the user to compare these side effects with his/her hypotheses at a higher level of logic. Whether those assumptions agree or differ, this information has pragmatic value from the program-comprehension point of view. For possible contradictions some reasons should be found; they are possible bugs/errors in the code.

### Some links:

- Truth in Wiki: http://en.wikipedia.org/wiki/Truth

- Sowa: Laws, Facts, and Contexts: Foundations for Multimodal Reasoning http://www.jfsowa.com/pubs/laws.htm

**Automated theorem proving** (**ATP**) or **automated deduction**, currently the most well-developed subfield of *automated reasoning* (AR), is the proving of mathematical theorems by a computer program.

Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible. For the frequent case of propositional logic, the problem is decidable but NP-complete, and hence only exponential-time algorithms are believed to exist for general proof tasks. For a first order predicate calculus, with no (“proper”) axioms, Gödel’s completeness theorem states that the theorems (provable statements) are exactly the logically valid well-formed formulas, so identifying valid formulas is recursively enumerable: given unbounded resources, any valid formula can eventually be proven.

However, *invalid* formulas (those that are *not* entailed by a given theory), cannot always be recognized.

A simpler, but related, problem is proof verification, where an existing proof for a theorem is certified valid. For this, it is generally required that each individual proof step can be verified by a primitive recursive function or program, and hence the problem is always decidable.
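A toy illustration of why proof verification is decidable: checking a finished proof is a bounded scan in which each line must be a premise or follow from earlier lines by a mechanical rule. The representation below is my own and checks only modus ponens:

```python
# A proof is a list of formulas; implications are tuples ("->", p, q).
# Each line must be already derived, or obtainable by modus ponens
# from two earlier lines. The check always terminates.
def verify(premises, proof):
    derived = list(premises)
    for line in proof:
        ok = line in derived or any(
            imp == ("->", ant, line)
            for ant in derived
            for imp in derived
            if isinstance(imp, tuple) and imp[0] == "->"
        )
        if not ok:
            return False
        derived.append(line)
    return True

premises = ["p", ("->", "p", "q")]
print(verify(premises, ["p", "q"]))  # True: q follows by modus ponens
print(verify(premises, ["r"]))       # False: r is not derivable
```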

### Some techniques for theorem proving

- First-order resolution with unification
- Lean theorem proving
- Model elimination
- Method of analytic tableaux
- Superposition and term rewriting
- Model checking
- Mathematical induction
- Binary decision diagrams
- DPLL
- Higher-order unification

————————————


## How to make it as easy as possible?

A **lean theorem prover** is an automated theorem prover implemented in a minimum amount of code. Lean provers are generally implemented in Prolog, and make proficient use of the backtracking engine and logic variables of that language. Lean provers can be as small as a few hundred bytes of source code.

The following is a small Prolog implementation:

```prolog
propexp =
    not_(propexp);
    or_(propexp, propexp);
    and_(propexp, propexp);
    implies_(propexp, propexp);
    s_(symbol); a; b; c.
```

**The beef is that this procedural definition for a theorem can be converted into an atomistic model, so that each domain will be transformed into a symbolic element.**

**Therefore**, proving the theorem means running the top element of the theorem. The principle is exactly the same as in source code simulation (see SimulationWare).

When the variables a, b and c are captured from the original code, this approach will enable proving source-code sequences written in Java or similar languages.
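The propexp grammar above can also be decided by brute force over the atoms a, b and c. The Python sketch below is only a truth-table decision procedure of my own, not the Prolog backtracking prover referred to above:

```python
from itertools import product

# Expressions follow the propexp grammar: ("atom", name), ("not", e),
# ("and", e1, e2), ("or", e1, e2), ("implies", e1, e2).
def evaluate(e, env):
    op = e[0]
    if op == "atom":
        return env[e[1]]
    if op == "not":
        return not evaluate(e[1], env)
    if op == "and":
        return evaluate(e[1], env) and evaluate(e[2], env)
    if op == "or":
        return evaluate(e[1], env) or evaluate(e[2], env)
    if op == "implies":
        return (not evaluate(e[1], env)) or evaluate(e[2], env)

def is_theorem(e, atoms=("a", "b", "c")):
    # a theorem of propositional logic is true under every valuation
    return all(evaluate(e, dict(zip(atoms, vals)))
               for vals in product([False, True], repeat=len(atoms)))

# a -> (b -> a) is a classical tautology:
print(is_theorem(("implies", ("atom", "a"),
                  ("implies", ("atom", "b"), ("atom", "a")))))  # True
```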


Derivation is a popular example about symbolic computation, i.e. *symbolic analysis from the mathematical approach*.

This example demonstrates the expressive power and ease-of-use of symbolic manipulation in Prolog (more examples here: http://en.allexperts.com/e/p/pr/prolog.htm).

### Procedural approach to Prolog, Derivation

```prolog
d(X,X,1) :- !.                            /*  1  x dx = 1                     */
d(C,X,0) :- atomic(C).                    /*  2  c dx = 0                     */
d(-U,X,-A) :- d(U,X,A).                   /*  3  -u dx = - d u dx             */
d(U+V,X,A+B) :- d(U,X,A), d(V,X,B).       /*  4  u+v dx = d u dx + d v dx     */
d(U-V,X,A-B) :- d(U,X,A), d(V,X,B).       /*  5  u-v dx = d u dx - d v dx     */
d(C*U,X,C*A) :- C \= X, d(U,X,A), !.      /*  6  c*u dx = c*d u dx            */
d(U*V,X,B*U+A*V) :- d(U,X,A), d(V,X,B).   /*  7  u*v dx = u*d v dx + v*d u dx */
d(U/V,X,A) :- d(U*V^(-1),X,A).            /*  8  u/v dx = d u*v^(-1) dx       */
d(U^C,X,C*U^(C-1)*W) :- C \= X, d(U,X,W). /*  9  u^c dx = c*u^(c-1)*d u dx    */
d(log(U),X,A*U^(-1)) :- d(U,X,A).         /* 10  ln(u) dx = u^(-1)*d u dx     */
```
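The clauses above can be transcribed rule for rule into another language. The following Python sketch of mine mirrors them on tuple-encoded expressions, without the algebraic simplification a real tool would add (rule 8, u/v, which the original reduces to u*v^(-1), is omitted):

```python
# Expressions: a variable name (str), a number, or a tuple such as
# ("+", u, v), ("-", u, v), ("-", u), ("*", u, v), ("^", u, c), ("log", u).
def d(e, x):
    if e == x:                                  # rule 1: dx/dx = 1
        return 1
    if isinstance(e, (int, float, str)):        # rule 2: constants and
        return 0                                #         other variables
    op = e[0]
    if op == "-" and len(e) == 2:               # rule 3: -u
        return ("-", d(e[1], x))
    if op == "+":                               # rule 4: sum rule
        return ("+", d(e[1], x), d(e[2], x))
    if op == "-":                               # rule 5: difference rule
        return ("-", d(e[1], x), d(e[2], x))
    if op == "*":                               # rule 7 (covers rule 6,
        u, v = e[1], e[2]                       #  unsimplified)
        return ("+", ("*", d(v, x), u), ("*", d(u, x), v))
    if op == "^":                               # rule 9: u^c, c constant
        u, c = e[1], e[2]
        return ("*", ("*", c, ("^", u, c - 1)), d(u, x))
    if op == "log":                             # rule 10: ln(u)
        return ("*", d(e[1], x), ("^", e[1], -1))

print(d(("^", "x", 3), "x"))  # ('*', ('*', 3, ('^', 'x', 2)), 1)
```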

### Derivation in Visual Prolog

```prolog
d(int(_),_) = int(0).
d(var(X),X) = int(1) :- !.
d(var(_),_) = int(0) :- !.
d(plus(U,V),X) = plus(d(U,X),d(V,X)).
d(minus(U,V),X) = minus(d(U,X),d(V,X)).
d(mult(U,V),X) = plus(mult(d(U,X),V),mult(U,d(V,X))).
d(div_(U,V),X) = div_(minus(mult(d(U,X),V),mult(U,d(V,X))),mult(V,V)).
d(ln(U),X) = mult(div_(int(1),U),d(U,X)).
d(potens(E1,int(I)),X) = mult(mult(int(I),potens(E1,int(I-1))),d(E1,X)).
d(sin(U),X) = mult(cos(U),d(U,X)).
d(cos(U),X) = minus(int(0),mult(sin(U),d(U,X))).
d(tan(U),X) = mult(potens(sec(U),int(2)),d(U,X)).
```

Starting the derivation:

```prolog
differentiate(Str) = ResultStr :-
    InputExpression = diff::parse(Str),
    DifferentiatedExpression = diff::d(InputExpression, "x"),
    OutputExpression = diff::reduce(DifferentiatedExpression),
    ResultStr = diff::writeExp(OutputExpression).
```

### The Atomistic Correspondence for the Procedural Definition

Let's assume **SE** to be the base class for all atomistic objects. It is then the definition for AHO. Let **Symbol** be the definition for the whole model semantics, here derivation.

Model Weaver to create the atomistic model:

```prolog
differentiateExpression(Str) = OutputExpression :-
    S = s::new(),
    BuiltModel = sEModelBuilder::build(S:parse(Str)),
    ResultExpression = BuiltModel:d("x"),
    ResultElement = seData::new(ResultExpression),
    OutputExpression = ResultElement:reduce().
```

Interface for the Symbol:

```prolog
interface symbol
    domains
        exp = var(string); int(integer);
              plus(exp,exp); minus(exp,exp); …
              potens(exp,exp);
              ref(se).  /* this is a reference to a symbolic element, AHO */
    predicates
        parse : (string) -> exp determ.
        writeExp : (exp) -> string.
end interface symbol
```

To convert a procedural Prolog-derivation model into an atomistic one, there is only one definition needed to add into the original code:

```prolog
d(ref(SE),X) = SE:d(X).
```

### Architecture for the Derivation Example

### Summary

This application shows that it is possible to translate “procedural”, i.e. non-object-based, programs into atomistic ones when the semantics is formal.

### Some links

- About Prolog at AllExperts : http://en.allexperts.com/e/p/pr/prolog.htm

**Symbolic Manipulation Program**, usually called **SMP**, was a computer algebra system designed by Chris A. Cole and Stephen Wolfram at Caltech circa 1979 and initially developed in the Caltech physics department. It was first sold commercially in 1981 by the Computer Mathematics Corporation of Los Angeles which later became part of Inference Corporation; Inference Corp. further developed the program and marketed it commercially from 1983 to 1988. SMP was essentially Version Zero of the more ambitious Mathematica system.

As the name suggests, the core of SMP is (was) the symbol. Let's study it more.

### Motivation for SMP (see)

The primary motivation for the construction of the SMP program was the necessity of performing very complicated algebraic manipulations in certain areas of theoretical physics. The need to deal with advanced mathematical constructs required the program to be of great generality. In addition, the size of the calculations anticipated demanded that the program should operate quickly and be capable of handling very large amounts of data. The resulting program is expected to be valuable in a wide variety of applications.

### Main principles of SMP (see)

The basic purpose of SMP is to manipulate symbolic expressions. These expressions may represent algebraic formulae, on which mathematical operations are performed. By virtue of their symbolic nature, they may also represent procedures and actions.

**The ability to manipulate symbolic expressions, as well as sets of numbers, allows for much greater generality and a much richer architecture than in numerical computer languages. **

**The structure of expressions in SMP is defined recursively as follows:**

```
expr  consists of  symbol                                          (1)
      or           expr[expr, expr, …]              (projection)   (2)
      or           { [expr]: expr, [expr]: expr, … }      (list)   (3)
```

Symbols are the fundamental units. Projections represent extraction of a part in an expression. Lists allow expressions to be collected together.

**These three fundamental forms suffice to represent all the objects, operations and procedures required in symbolic manipulations.**
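The three fundamental forms can be sketched as data types. The Python names below are my own illustration, not SMP's; for simplicity the list form here only supports hashable indices:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Sym:
    name: str                    # form (1): a symbol, the fundamental unit

@dataclass(frozen=True)
class Proj:
    head: object
    filters: tuple               # form (2): expr[expr, ...], part extraction

@dataclass
class SmpList:
    entries: dict = field(default_factory=dict)  # form (3): {[expr]:expr, ...}

    def project(self, index):
        # projection with a filter extracts the entry with that index
        return self.entries.get(index)

f = SmpList({1: Sym("a")})       # f:{[1]: a}
p = Proj(Sym("f"), (1,))         # the projection f[1]
print(f.project(1))              # Sym(name='a')
```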

### Comparison with Symbolic Analysis

In Symbolic Analysis our model to represent all objects etc. is even simpler. In it the only artefact is the symbol itself. This is possible because of the hybrid structure of the symbolic element (see more about AHO and Symbol). We argue that Symbol is the core of core computer science.

### Case 1: Symbol at SMP (see)

Symbols are labelled by a unique name (e.g. `x3` or `Mult`) which is used to represent them for input and output. Expressions may be assigned as values for symbols. A symbol to which a value has been assigned becomes essentially a short notation for its value, and is replaced by the value whenever it occurs. If no value has been assigned to a symbol, the results of (most) operations are such as would hold for any possible value of the symbol. The set of possible values represented by a symbol may be delimited by assigning a list of properties to the symbol.

### Case 2: Projection at SMP (see)

The projection `f[expr]` represents the part of the expression `f` selected by the “filter” *expr*. If `f` is a list, the entry with index *expr* is selected. If `f` is a symbol with no value, operations performed on `f[expr]` hold for any value of `f`. Properties assigned to `f` may specify particular treatment. “System-defined” symbols stand for many fundamental operations. Projections from these symbols (e.g. `Plus`) yield expressions which represent the action of these operations on the filter expressions.

### Case 3: Lists at SMP (see)

Lists are ordered and indexed sets of expressions. The index and value of each entry in a list may be arbitrary expressions. A particular value in a list is extracted by projection with the corresponding index as a filter. If the value of some symbol `f` is a list, the entries of the list describe parts of the “object” `f`: they give the values for projections of `f` with different filters. Entries may be introduced into such a list by assignment of values for projections of `f`.

For example, `f:{[1]: a}` (or `f[1]: a`) defines `f` to be an object whose projection with filter `1` is `a`. The values of other projections from `f` remain unspecified. `f[x+y]: (x+y)^2` then yields `{[x+y]: (x+y)^2, [1]: a}` and defines the value for a projection of `f` with filter `x+y` to be `(x+y)^2`.

### Symbolic Analysis is a Reverse Engineering Architecture and Methodology

While SMP was written for expressing mathematical formulas and equations, the main purpose of symbolic analysis is reverse engineering: expressing program models in the symbolic notation. However, the two share many needs and goals. It is obvious that symbolic analysis can be extended to mathematical formulations, too, because its internal formalism allows presenting the semantics of a wide range of formal-language expressions.

### Correspondence between SMP and symbolic analysis

It is obvious that any expression of SMP can be transformed into the symbolic analysis model, only by extending the Symbolic language with the necessary expressions. Lists (case 3) and projections (case 2) can be extensions to the Symbolic language, the *clause* definition.

### Mathematica

Mathematica is the successor of SMP. In it **everything is an expression**. There are 3000 different expressions in it. See a video.

*Note*: **In symbolic analysis (Laitila) everything is a symbol.**

### Some links:

- Symbolic Manipulation Program at Wiki: http://en.wikipedia.org/wiki/Symbolic_Manipulation_Program
- Foundation for SMP: http://www.stephenwolfram.com/publications/articles/computing/81-smp/index.html
- Documentation for Mathematica at WolframResearch: http://reference.wolfram.com/mathematica/guide/Mathematica.html