You are currently browsing the category archive for the ‘Other Methodologies’ category.
In mathematics and logic, a traditional proof starts from one or more axioms. Clauses are used as steps (bridges) toward higher-order conclusions. Lemmas and corollaries are often useful for marking the route to the final result, which is usually a theorem and its proof.
Below are the most important concepts regarding how to derive logical conclusions:
 An axiom (or postulate) is a proposition that is not proved or demonstrated but considered either self-evident or subject to necessary decision. Therefore, its truth is taken for granted, and it serves as a starting point for deducing and inferring other (theory-dependent) truths.
 A lemma is a proven statement used as a stepping stone toward a larger result; it is simultaneously a conclusion of the premises below it and a premise for a conclusion above it.
 A corollary is a statement which follows readily from a previous statement. In mathematics a corollary typically follows a theorem. The use of the term corollary, rather than proposition or theorem, is intrinsically subjective. Proposition B is a corollary of proposition A if B can readily be deduced from A, but the meaning of readily varies depending upon the author and context.
 A theorem is a statement which has been proven on the basis of previously established statements, such as other theorems, and previously accepted statements, such as axioms.
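The chain from axiom through lemma to theorem can be made concrete in a proof assistant. Below is a purely illustrative Lean sketch; the propositions P and Q and the implication between them are assumptions introduced for the example, not anything from the post:

```lean
-- Illustrative only: the chain axiom → lemma → theorem.
axiom P : Prop
axiom Q : Prop
axiom h : P → Q        -- starting point: taken for granted, not proved

-- A "lemma": a conclusion of the axioms below it and
-- a premise for the result above it.
theorem step (hp : P) : Q := h hp

-- The theorem, proved from previously established statements;
-- its first conjunct follows readily from the lemma (a corollary).
theorem final (hp : P) : Q ∧ P := ⟨step hp, hp⟩
```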
Symbolic analysis and the mathematical dictionary
In Symbolic Analysis we use the concepts above in the following way:
 Symbols are the axioms to start from. Some of them are grounded, but some are non-ground: fuzzy, vague, or mental models without exact information.
 Those symbols that are grounded and captured from source code have a certain semantics. A lemma is that for any symbol having a semantic notation, a symbolic notation can be created, because each grammar term is by default independent of the other grammar terms. Therefore, each symbol can be simulated separately.
 The corollary is that each source-language symbol is an automaton, which can be programmed in a tool as a state machine with the same principal logic as a Turing machine.
 The final conclusion from steps 1, 2, and 3 above is that by simulating source code it is possible to mimic the original program execution step by step. This lays a foundation for interactive program proof.
Some links:
 What is the difference between a theorem, a lemma, and a corollary? by David Richeson: http://users.dickinson.edu/~richesod/
 Definitions for axioms etc. in Wikipedia.
 Ideal of Science: https://symbolicanalysis.wordpress.com/2009/10/22/idealofscience/
 Transformation process for reverse engineering: https://symbolicanalysis.wordpress.com/2009/09/09/reverseengineering/ (the figure describes the symbolic analysis transformation process as successive automata: A1–A7).
A cognitive architecture is a blueprint for intelligent agents. It proposes (artificial) computational processes that act like certain cognitive systems, most often like a person, or that act intelligently under some definition. Cognitive architectures form a subset of general agent architectures. The term ‘architecture’ implies an approach that attempts to model not only behavior, but also structural properties of the modelled system. These need not be physical properties: they can be properties of virtual machines implemented in physical machines (e.g. brains or computers).
Common to researchers on cognitive architectures is the belief that understanding (human, animal, or machine) cognitive processes means being able to implement them in a working system, though opinions differ as to what form such a system can take: some researchers assume that it will necessarily be a computational system, whereas others argue for alternative models such as dynamical systems.
Cognitive architectures can be symbolic, connectionist, or hybrid. Some cognitive architectures or models are based on a set of generic rules, as, e.g., the Information Processing Language (e.g., Soar, based on the unified theory of cognition, or similarly ACT). Many of these architectures are based on the mind-is-like-a-computer analogy. In contrast, subsymbolic processing specifies no such rules a priori and relies on emergent properties of processing units (e.g. nodes). Hybrid architectures combine both types of processing (such as CLARION).
Connectionism is a set of approaches in the fields of artificial intelligence, cognitive psychology, cognitive science, neuroscience and philosophy of mind, that models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units. There are many forms of connectionism, but the most common forms use neural network models.
The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units. Neural networks are by far the most commonly used connectionist model today, but there are some more generic approaches, too.
Is the Symbolic atomistic model (SAM) a cognitive architecture?
The SAM principle, presented in this blog, is a hybrid cognitive architecture, which contains both the symbolic and the connectionist approach. A paper about it was published at the Conference on Hybrid Artificial Intelligence Systems (Burgos, Spain, 2008).
Some links:
 Cognitive architectures in Wiki: http://en.wikipedia.org/wiki/Cognitive_architecture
 Erkki Laitila: Symbolic Hybrid Programming Tool for Software Understanding. HAIS 2008: 499–506, see: http://www.informatik.uni-trier.de/~ley/db/conf/hais/hais2008.html#Laitila08
A Petri net (also known as a place/transition net or P/T net) is one of several mathematical modeling languages for the description of discrete distributed systems. A Petri net is a directed bipartite graph, in which the nodes represent transitions (i.e. discrete events that may occur, signified by bars) and places (i.e. conditions, signified by circles), while directed arcs (signified by arrows) describe which places are pre- and/or postconditions for which transitions.
Like industry standards such as UML activity diagrams, BPMN and EPCs, Petri nets offer a graphical notation for stepwise processes that include choice, iteration, and concurrent execution. Unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis.
Because of their executability, Petri nets are an interesting formalism to compare with the symbolic atomistic model.
Formalism for Petri nets
A Petri net consists of places, transitions, and directed arcs. Arcs run from a place to a transition or vice versa, never between places or between transitions. The places from which an arc runs to a transition are called the input places of the transition; the places to which arcs run from a transition are called the output places of the transition.
Places may contain a natural number of tokens. A distribution of tokens over the places of a net is called a marking. A transition of a Petri net may fire whenever there is a token at the end of all input arcs; when it fires, it consumes these tokens, and places tokens at the end of all output arcs. A firing is atomic, i.e., a single noninterruptible step.
Execution of Petri nets is nondeterministic: when multiple transitions are enabled at the same time, any one of them may fire. If a transition is enabled, it may fire, but it doesn’t have to. Since firing is nondeterministic, and multiple tokens may be present anywhere in the net (even in the same place), Petri nets are well suited for modeling the concurrent behavior of distributed systems.
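The firing rule described above can be sketched in a few lines. This is a minimal illustration of the usual semantics, with helper names (`enabled`, `fire`) invented for the example:

```python
def enabled(W, marking, t, places):
    """A transition t is enabled iff every input place s holds
    at least W[(s, t)] tokens."""
    return all(marking[s] >= W.get((s, t), 0) for s in places)

def fire(W, marking, t, places):
    """Firing atomically consumes W(s, t) tokens from each input place
    and produces W(t, s) tokens in each output place."""
    new = dict(marking)
    for s in places:
        new[s] = new[s] - W.get((s, t), 0) + W.get((t, s), 0)
    return new

# Example: two places, one transition moving a token from p1 to p2.
places = ["p1", "p2"]
W = {("p1", "t1"): 1, ("t1", "p2"): 1}
m0 = {"p1": 1, "p2": 0}
assert enabled(W, m0, "t1", places)
m1 = fire(W, m0, "t1", places)   # afterwards t1 is no longer enabled
```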
The following formal definition is loosely based on (Peterson 1981). Many alternative definitions exist.
Syntax
A Petri net graph (called a Petri net by some, but see below) is a 3-tuple (S, T, W), where
 S is a finite set of places
 T is a finite set of transitions
 S and T are disjoint, i.e. no object can be both a place and a transition
 W : (S × T) ∪ (T × S) → ℕ is a multiset of arcs, i.e. it defines arcs and assigns to each arc a nonnegative integer arc multiplicity; note that no arc may connect two places or two transitions.
The flow relation is the set of arcs: F = {(x, y) | W(x, y) > 0}. In many textbooks, arcs can only have multiplicity 1, and Petri nets are then often defined using F instead of W.
A Petri net graph is a bipartite multidigraph with node partitions S and T.
The preset of a transition t is the set of its input places: •t = {s ∈ S | W(s, t) > 0}; its postset is the set of its output places: t• = {s ∈ S | W(t, s) > 0}.
A marking of a Petri net (graph) is a multiset of its places, i.e., a mapping M : S → ℕ. We say the marking assigns to each place a number of tokens.
A Petri net (called a marked Petri net by some, see above) is a 4-tuple (S, T, W, M_{0}), where
 (S,T,W) is a Petri net graph;
 M_{0} is the initial marking, a marking of the Petri net graph.
Executability of Petri nets
Petri nets follow a specific logic, with some variations. In short:
 firing a transition t in a marking M consumes W(s,t) tokens from each of its input places s, and produces W(t,s) tokens in each of its output places s
 a transition is enabled (it may fire) in M if there are enough tokens in its input places for the consumptions to be possible, i.e. iff ∀s : M(s) ≥ W(s, t).
We are generally interested in what may happen when transitions may continually fire in arbitrary order.
We say that a marking M′ is reachable from a marking M in one step if M →_G M′; we say that it is reachable from M if M →*_G M′, where →*_G is the reflexive transitive closure of →_G; that is, if it is reachable in 0 or more steps.
For a (marked) Petri net N = (S, T, W, M_{0}), we are interested in the firings that can be performed starting with the initial marking M_{0}. Its set of reachable markings is the set R(N) = {M′ | M_{0} →*_G M′}.
The reachability graph of N is the transition relation restricted to its reachable markings R(N). It is the state space of the net.
A firing sequence for a Petri net with graph G and initial marking M_{0} is a sequence of transitions σ = ⟨t_1, …, t_n⟩ such that M_{0} →_{G,t_1} M_1 ∧ … ∧ M_{n−1} →_{G,t_n} M_n. The set of firing sequences is denoted as L(N).
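The set R(N) above can be enumerated for small nets by searching over markings. A sketch, with markings encoded as tuples of token counts in the order of the `places` list (an encoding chosen here for the example):

```python
from collections import deque

def reachable_markings(places, transitions, W, m0):
    """Return R(N): every marking reachable from m0 in 0 or more steps."""
    idx = {s: i for i, s in enumerate(places)}
    def successors(m):
        for t in transitions:
            # enabled: each input place holds at least W(s, t) tokens
            if all(m[idx[s]] >= W.get((s, t), 0) for s in places):
                yield tuple(m[idx[s]] - W.get((s, t), 0) + W.get((t, s), 0)
                            for s in places)
    seen, queue = {m0}, deque([m0])
    while queue:
        m = queue.popleft()
        for m2 in successors(m):
            if m2 not in seen:
                seen.add(m2)
                queue.append(m2)
    return seen

# Example: t1 moves one token p1 -> p2; with one token, |R(N)| = 2.
R = reachable_markings(["p1", "p2"], ["t1"],
                       {("p1", "t1"): 1, ("t1", "p2"): 1}, (1, 0))
```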
Types of different levels of Petri nets are shown below:
Comparing Petri nets and Symbolic Atomistic Model
The formalism of Petri nets differs from that of computer programming languages, because each place has contents: a set of tokens. The purpose is to study firing and to balance modeling power against analyzability.
In spite of the differences, I argue that Petri nets can be converted into the symbolic atomistic model (AHO objects) with some arrangements:
 Each place reserves an AHO object.
 Each transition reserves as many AHOs as it requires when its semantics are converted into atomistic form (atomic formulas).
 The contents of each place form a separate dynamic AHO object.
 Code for transitions is written in a high-level language.
 The executing logic is programmed into the run methods of the AHOs, allowing nondeterminism.
 Otherwise, the logic of SimulationWare could be used.
There is no demonstration yet that this works, but it seems evident that a modified atomistic model (the architecture of AHOs) could simulate Petri nets, too.
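As a purely hypothetical sketch of the arrangement above: places and transitions become objects whose run methods carry the executing logic, with nondeterminism as a random choice among enabled transitions. The class names and interface here are my invention for illustration, not the actual AHO implementation:

```python
import random

class Aho:
    """Hypothetical base object; the real AHO interface may differ."""
    def __init__(self, name):
        self.name = name

class PlaceAho(Aho):
    def __init__(self, name, tokens=0):
        super().__init__(name)
        self.tokens = tokens           # the place's contents, held separately

class TransitionAho(Aho):
    def __init__(self, name, inputs, outputs):
        super().__init__(name)
        self.inputs, self.outputs = inputs, outputs
    def enabled(self):
        return all(p.tokens >= 1 for p in self.inputs)
    def run(self):                     # the executing logic lives in run()
        for p in self.inputs:
            p.tokens -= 1
        for p in self.outputs:
            p.tokens += 1

def simulate(transitions, steps):
    """Nondeterminism: any enabled transition may fire."""
    for _ in range(steps):
        choices = [t for t in transitions if t.enabled()]
        if not choices:
            break
        random.choice(choices).run()

p1, p2 = PlaceAho("p1", tokens=1), PlaceAho("p2")
t1 = TransitionAho("t1", inputs=[p1], outputs=[p2])
simulate([t1], steps=5)   # fires once, after which t1 is disabled
```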
Some links:
 See more about Petri nets from: http://en.wikipedia.org/wiki/Petri_net
 About a tool Maria (in Finnish): http://www.tcs.hut.fi/Software/maria/petri.fi.pdf
In graph theory, reachability is the notion of being able to get from one vertex in a directed graph to some other vertex. Note that reachability in undirected graphs is trivial: it is sufficient to find the connected components in the graph, which can be done in linear time.
Definition for Reachability: For a directed graph D = (V, A), the reachability relation of D is the transitive closure of its arc set A, which is to say the set of all ordered pairs (s, t) of vertices in V for which there exist vertices v_{0} = s, v_{1}, …, v_{d} = t such that (v_{i – 1}, v_{i} ) is in A for all 1 ≤ i ≤ d.
Algorithms for reachability fall into two classes: those that require preprocessing and those that do not. For the latter case, resolving a single reachability query can be done in linear time using algorithms such as breadth-first search or iterative deepening depth-first search.
Reachability has often been modeled using Petri nets.
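For the no-preprocessing case, a single query is one breadth-first search over an adjacency list, visiting each vertex and arc at most once. A minimal sketch:

```python
from collections import deque

def reachable(adj, s, t):
    """True iff vertex t is reachable from s in the digraph adj
    (a dict mapping each vertex to its out-neighbors)."""
    seen, queue = {s}, deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            return True
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

adj = {"a": ["b"], "b": ["c"], "c": [], "d": ["a"]}
# reachable(adj, "a", "c") is True; reachable(adj, "c", "a") is False
```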
Because Petri nets can be used for modeling reachability and locks in source code, and the symbolic atomistic model (SAM) takes the same input, it is natural to assume that SAM can be used for modeling reachability and source-code locks, too.
In the figure above, the different slices (parts of the graph) could be symbolic models captured from methods of threads in Java, C++, or other code. Typically some threads are consumers and some are producers.
It is straightforward to visualize the logic of threads using the symbolic atomistic model as a base, so that critical resources can be seen as connective nodes between different parts of the threads.
Some links:
 Reachability in Wiki: http://en.wikipedia.org/wiki/Reachability
 Reachability Analysis by Pataricza and Bantzar: http://sauron.inf.mit.bme.hu/Edu/FMethods/2008/fmethods.nsf/a141f6e0f01d3349c1256e4c005bc395/9bb883dc5657fa4ac125740100462bb7/$FILE/reachability.ppt
Truth can have a variety of meanings, from the state of being the case, being in accord with a particular fact or reality, being in accord with the body of real things, events, and actuality, or fidelity to an original or to a standard. In archaic usage it could mean fidelity, constancy, or sincerity in action, character, and utterance.
There are differing claims on such questions as what constitutes truth; what things are truth-bearers capable of being true or false; how to define and identify truth; the roles that revealed and acquired knowledge play; and whether truth is subjective, relative, objective, or absolute. This post presents some topics on how symbolic analysis covers some attributes of truth and truth theories.
Correspondence theory for truth
For the truth to correspond it must first be proved by evidence or an individual’s valid opinion, which have a similar meaning or context. This type of theory posits a relationship between thoughts or statements on the one hand, and things or objects on the other. It is a traditional model which goes back at least to some of the classical Greek philosophers such as Socrates, Plato, and Aristotle.
Coherence theory for truth
For coherence theories in general, truth requires a proper fit of elements within a whole system. Very often, though, coherence is taken to imply something more than simple logical consistency; often there is a demand that the propositions in a coherent system lend mutual inferential support to each other. Some variants of coherence theory are claimed to characterize the essential and intrinsic properties of formal systems in logic and mathematics. However, formal reasoners are content to contemplate axiomatically independent and sometimes mutually contradictory systems side by side, for example, the various alternative geometries.
Consensus theory
Consensus theory holds that truth is whatever is agreed upon, or in some versions, might come to be agreed upon, by some specified group. Such a group might include all human beings, or a subset thereof consisting of more than one person.
Pragmatic theory
The three most influential forms of the pragmatic theory of truth were introduced around the turn of the 20th century by Charles Sanders Peirce, William James, and John Dewey. Although there are wide differences in viewpoint among these and other proponents of pragmatic theory, they hold in common that truth is verified and confirmed by the results of putting one’s concepts into practice.
Peirce defines truth as follows: “Truth is that concordance of an abstract statement with the ideal limit towards which endless investigation would tend to bring scientific belief, which concordance the abstract statement may possess by virtue of the confession of its inaccuracy and one-sidedness, and this confession is an essential ingredient of truth.”
Peirce emphasizes that ideas of approximation, incompleteness, and partiality, what he describes elsewhere as fallibilism and “reference to the future”, are essential to a proper conception of truth. Although Peirce uses words like concordance and correspondence to describe one aspect of the pragmatic sign relation, he is also quite explicit in saying that definitions of truth based on mere correspondence are no more than nominal definitions, which he accords a lower status than real definitions.
Semantic theory of truth
The semantic theory of truth has as its general case for a given language:
 ‘P’ is true if and only if P
where ‘P’ is a reference to the sentence (the sentence’s name), and P is just the sentence itself.
Alfred Tarski developed the theory for formal languages (such as formal logic). Here he restricted it in this way: no language could contain its own truth predicate; that is, the expression “is true” could only apply to sentences in some other language. The latter he called an object language, the language being talked about. (It may, in turn, have a truth predicate that can be applied to sentences in still another language.) The reason for his restriction was that languages that contain their own truth predicate will contain paradoxical sentences like the Liar: “This sentence is not true.” See the Liar paradox. As a result, Tarski held that the semantic theory could not be applied to any natural language, such as English, because they contain their own truth predicates. Donald Davidson used it as the foundation of his truth-conditional semantics and linked it to radical interpretation in a form of coherentism.
Truth in logic
Logic is concerned with the patterns in reason that can help tell us if a proposition is true or not. However, logic does not deal with truth in the absolute sense, as for instance a metaphysician does. Logicians use formal languages to express the truths which they are concerned with, and as such there is only truth under some interpretation or truth within some logical system.
A logical truth (also called an analytic truth or a necessary truth) is a statement which is true in all possible worlds or under all possible interpretations, as contrasted to a fact (also called a synthetic claim or a contingency) which is only true in this world as it has historically unfolded. A proposition such as “If p and q, then p.” is considered to be a logical truth because it is true by virtue of the meaning of the symbols and words in it, and not because of any facts of any particular world. Such propositions could not be untrue.
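The example above can be checked mechanically: a formula is a logical truth exactly when it comes out true under every assignment of truth values. A small sketch, with the helper name invented for the example:

```python
from itertools import product

def is_logical_truth(formula, num_vars):
    """Enumerate every truth-value assignment (every 'possible world'
    a propositional interpretation can distinguish) and test the formula."""
    return all(formula(*values)
               for values in product([False, True], repeat=num_vars))

tautology = lambda p, q: (not (p and q)) or p   # "If p and q, then p."
contingent = lambda p, q: p and q               # true only in some worlds

assert is_logical_truth(tautology, 2)
assert not is_logical_truth(contingent, 2)
```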
How does Symbolic Analysis cover the theories (presented above)?
Symbolic analysis has been created on top of Prolog (predicate logic), so that each symbol of the logic model refers to the corresponding term of a program, reverse engineered from an application. Therefore, at the lowest level, each term in the atomistic symbolic model is a proposition, typical for truth in logic. Each term has been implemented as an AHO artefact, a symbolic element with an excellent correspondence to the code. Links between these elements are coherent, with a symbolic language describing the semantics of each term (semantic theory of truth).
In simulation (aho:run()), AHO elements create results that are the side effects caused by each element, mimicking the computations of the program. Hence, it is possible for the user to compare side effects with his/her hypotheses at a higher level of logic. Whether those assumptions match or conflict, this information has pragmatic value from the program-comprehension point of view. For possible contradictions, reasons should be sought: they are potential bugs/errors in the code.
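The comparison step just described might look like the following hypothetical sketch: simulate each element, collect its side effect, and report where it contradicts the user's hypothesis. The element interface (a callable standing in for aho:run()) is my assumption for illustration, not the actual tool's API:

```python
def check_hypotheses(elements, hypotheses):
    """Return the contradictions: names of elements whose simulated side
    effect differs from what the user expected (candidate bugs)."""
    contradictions = {}
    for name, run in elements.items():
        actual = run()                      # simulated side effect
        expected = hypotheses.get(name)
        if expected is not None and actual != expected:
            contradictions[name] = (expected, actual)
    return contradictions

# Toy stand-ins for simulated symbols and the user's expectations:
elements = {"x := 1": lambda: 1, "y := x + 1": lambda: 2}
hypotheses = {"x := 1": 1, "y := x + 1": 3}   # the user guessed y wrong
# check_hypotheses(elements, hypotheses) flags only "y := x + 1"
```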
Some links:
 Truth in Wiki: http://en.wikipedia.org/wiki/Truth
 Sowa: Laws, Facts, and Contexts: Foundations for Multimodal Reasoning http://www.jfsowa.com/pubs/laws.htm
Executable UML, often abbreviated to xtUML or xUML, is the evolution of the Shlaer–Mellor method to UML. Executable UML graphically specifies a system using a profile of the UML. The models are testable, and can be compiled into a less abstract programming language to target a specific implementation. Executable UML supports MDA through specification of platform-independent models, and the compilation of the platform-independent models into platform-specific models.
Executable UML is used to model the domains in a system. Each domain is defined at the level of abstraction of its subject matter independent of implementation concerns. The resulting system view is composed of a set of models represented by at least the following:
 The domain chart provides a view of the domains in the system, and the dependencies between the domains.
 The class diagram defines the classes and class associations for a domain.
 The statechart diagram defines the states, events, and state transitions for a class or class instance.
 The action language defines the actions or operations that perform processing on model elements.
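The statechart and action-language parts of such a model can be sketched together as a small executable state machine. The domain here (a door) and the class interface are invented for illustration:

```python
class StateMachine:
    """Minimal statechart: states, events, transitions, and actions."""
    def __init__(self, initial, transitions):
        # transitions: (state, event) -> (next_state, action or None)
        self.state, self.transitions = initial, transitions

    def signal(self, event):
        key = (self.state, event)
        if key in self.transitions:        # ignore events with no edge
            next_state, action = self.transitions[key]
            if action:
                action()                   # the action-language part
            self.state = next_state
        return self.state

log = []
door = StateMachine("closed", {
    ("closed", "open"): ("opened", lambda: log.append("creak")),
    ("opened", "close"): ("closed", None),
})
door.signal("open")    # moves to "opened" and runs the action
door.signal("close")   # moves back to "closed"
```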
Shortcomings of Executable UML
UML does not define the semantics of a programming language. Hence, it is not complete enough to describe the behavior of any exact system. To correct this shortcoming, some new principles are needed, among them automata theory and symbol-driven engineering.
Automata Theory and Symbol-Driven Engineering (SDE)
Automata theory defines a behavior model for the corresponding code elements. Therefore, this theoretical background eliminates the gap between program behavior and the corresponding model, such as UML.
In SDE every symbol has an automaton behind it. To execute those symbols, an invocation Symbol:run is needed, plus some specific initializations for certain symbols such as definitions (method invocations and object and variable definitions). See figure below.
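The symbol-per-automaton idea can be sketched as follows. Each symbol object answers run(), and definition-like symbols perform their specific initialization there; the class names and the environment dictionary are my illustration, not the tool's actual design:

```python
class Symbol:
    """Base automaton: run() executes this symbol and returns
    its side effect."""
    def run(self):
        raise NotImplementedError

class VariableDef(Symbol):
    def __init__(self, env, name, value):
        self.env, self.name, self.value = env, name, value
    def run(self):                  # specific initialization: bind a name
        self.env[self.name] = self.value
        return self.value

class Invocation(Symbol):
    def __init__(self, env, name):
        self.env, self.name = env, name
    def run(self):                  # look up and "invoke" the definition
        return self.env[self.name]

env = {}
program = [VariableDef(env, "x", 42), Invocation(env, "x")]
results = [sym.run() for sym in program]   # each symbol runs separately
```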
Some links:
 Book: Mellor, S; Balcer, M: “Executable UML: A Foundation for Model-Driven Architecture”, Preface, Addison-Wesley, 2002
 Wiki: http://en.wikipedia.org/wiki/Executable_UML
Roundtrip engineering is a functionality of software development tools that provides generation of models from source code and generation of source code from models; this way, existing source code can be converted into a model, subjected to software engineering methods, and then converted back.
Round trip is a particular model perspective in the UML standard; it covers conceptual elements of the full execution use case that happens in an implemented system, including the network communication in typical web-based applications.
Roundtrip engineering vs. Symbolic Analysis
Whereas roundtrip engineering creates typical UML models from code, symbolic analysis creates symbolic atomistic models from code: from GrammarWare to ModelWare.
Some links:
 http://en.wikipedia.org/wiki/Roundtrip_engineering
 Dreaming about roundtrip engineering: http://fox.wikis.com/wc.dll?Wiki~RoundTripEngineeringDreaming
In computer programming, program slicing is the computation of a program slice. The program slice consists of the parts of a program that may affect the values computed at some point of interest, referred to as a slicing criterion. Program slicing can be used in debugging to locate the source of errors more easily. Other applications of slicing include software maintenance, optimization, program analysis, and information flow control.
Slicing techniques have seen rapid development since the original definition by Mark Weiser. At first, slicing was only static, i.e., applied to the source code with no information other than the source code itself. Bogdan Korel and Janusz Laski introduced dynamic slicing, which works on a specific execution of the program (for a given execution trace). Other forms of slicing exist, for instance path slicing.
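A backward static slice can be sketched for straight-line code in the textbook def/use setting: starting from the slicing criterion, keep every statement whose defined variable is (transitively) used by the criterion. The statement encoding below is invented for the example:

```python
def backward_slice(stmts, criterion):
    """stmts: list of (line, defined_var, used_vars), in program order;
    criterion: the line number of interest.
    Returns the set of line numbers in the slice."""
    by_line = {line: (d, u) for line, d, u in stmts}
    slice_lines = {criterion}
    needed = set(by_line[criterion][1])      # vars the criterion uses
    for line, defined, used in reversed(stmts):
        if line in slice_lines:
            continue
        if defined in needed:                # this statement may affect it
            slice_lines.add(line)
            needed |= set(used)
    return slice_lines

program = [
    (1, "a", []),       # a = 1
    (2, "b", []),       # b = 2
    (3, "c", ["a"]),    # c = a + 1
    (4, "d", ["b"]),    # d = b * 2   (irrelevant to the criterion)
    (5, "r", ["c"]),    # r = c       <- slicing criterion
]
# backward_slice(program, 5) keeps lines 1, 3, 5 and drops 2 and 4
```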
Slicing in Symbolic Analysis
Symbolic analysis is a method for traversing source code symbols using a specific traversal algorithm, or by simulating the corresponding code. In effect, symbolic analysis is static slicing, but it tries to mimic dynamic behavior, too. As an output of simulation, symbolic analysis creates side-effect elements, which are execution results from symbols.
Below is a rough sketch of the principle of a slice generated by symbolic analysis.
Some links: