Formal Aspects of Computing

Journal Information
ISSN / EISSN : 0934-5043 / 1433-299X
Total articles ≅ 5,129
Current Coverage
SCOPUS
LOCKSS
EI COMPENDEX
MEDICUS
MEDLINE
PUBMED
SCIE
INSPEC
Archived in
EBSCO
SHERPA/ROMEO

Latest articles in this journal

Blair Archibald, Muffy Calder, Michele Sevegnani
Formal Aspects of Computing; https://doi.org/10.1145/3545180

Abstract:
Bigraphs are a universal computational modelling formalism for the spatial and temporal evolution of a system in which entities can be added and removed. We extend bigraphs to probabilistic bigraphs, and then again to action bigraphs, which include non-determinism and rewards. The extensions are implemented in the BigraphER toolkit and illustrated through examples of virus spread in computer networks and data harvesting in wireless sensor systems. BigraphER also supports the existing stochastic bigraphs extension of Krivine et al., and using BigraphER we give, for the first time, a direct implementation of the membrane budding model used to motivate stochastic bigraphs.
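As a rough illustration of the probabilistic extension described above, the following sketch (plain Python, not BigraphER's rule language; the states, rules, and weights are invented for illustration) shows how the weights of the rewrite rules applicable in a state can be normalised into transition probabilities of a discrete-time Markov chain, here for a toy virus-spread model.

```python
# Illustrative sketch (not BigraphER syntax): probabilistic rewriting over a
# toy virus-spread state. The weights of the applicable rules are normalised
# into transition probabilities, yielding a discrete-time Markov chain.
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    clean: int      # machines not yet infected
    infected: int   # infected machines

def applicable_rules(s: State):
    """Each rule: (name, weight, successor state)."""
    rules = []
    if s.clean > 0 and s.infected > 0:
        rules.append(("infect", 3.0, State(s.clean - 1, s.infected + 1)))
    if s.infected > 0:
        rules.append(("patch", 1.0, State(s.clean + 1, s.infected - 1)))
    return rules

def transitions(s: State):
    """Normalise the weights of the applicable rules into a distribution."""
    rules = applicable_rules(s)
    total = sum(w for _, w, _ in rules)
    return [(name, w / total, succ) for name, w, succ in rules]

for name, p, succ in transitions(State(clean=4, infected=1)):
    print(f"{name}: p={p:.2f} -> {succ}")
```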
Heiko Klare, Joshua Gleitze
Formal Aspects of Computing; https://doi.org/10.1145/3543845

Abstract:
When developers describe a software system with multiple models, such as architecture diagrams, deployment descriptions, and source code, these models must represent the system in a uniform way, i.e., they must be and stay consistent. One means of automatically preserving consistency after changes to models is model transformations, of which bidirectional transformations that preserve consistency between two models have been well researched. To preserve consistency between multiple models, such transformations can be combined into networks. When transformations are developed independently and reused modularly, the resulting network can have an arbitrary topology. For such networks, no universal strategy exists to orchestrate the execution of transformations such that the resulting models are consistent. In this paper, we prove that termination of such a strategy can only be guaranteed if it is incomplete, i.e., if it is allowed to fail to restore consistency for some changes even though an execution order of transformations exists that yields consistent models. We propose such a strategy, prove its termination, and show why it makes it easier for users of model transformation networks to understand the reasons whenever the strategy fails. In addition, we provide a simulator for comparing different execution strategies. These findings help transformation developers and users understand when and why they can expect the execution of a transformation network to terminate and when they can even expect it to succeed. Furthermore, the proposed strategy guarantees termination and supports them in finding the reason whenever it does not succeed.
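The following is a hypothetical sketch (the models, checks, and restore functions are placeholders, not the authors' algorithm) of why such a strategy trades completeness for termination: every transformation may run at most a bounded number of times, so execution always terminates, but the strategy may report failure even when another ordering would have restored consistency.

```python
# Hypothetical sketch: each transformation is a pair (check, restore) over a
# shared dict of models. The strategy is deliberately incomplete: every
# transformation may run at most `bound` times, so the loop always
# terminates, but it may give up although another order could succeed.
def execute_network(models, transformations, bound=2):
    runs = [0] * len(transformations)
    while True:
        broken = [i for i, (check, _) in enumerate(transformations)
                  if not check(models)]
        if not broken:
            return True, []                    # all consistency relations hold
        runnable = [i for i in broken if runs[i] < bound]
        if not runnable:
            return False, broken               # fail, reporting broken relations
        i = runnable[0]
        runs[i] += 1
        transformations[i][1](models)          # restore this pair's consistency

# Toy usage: keep models "a" and "b" equal, and "b" one greater than "c".
models = {"a": 1, "b": 5, "c": 0}
ts = [
    (lambda m: m["a"] == m["b"], lambda m: m.update(b=m["a"])),
    (lambda m: m["b"] == m["c"] + 1, lambda m: m.update(c=m["b"] - 1)),
]
print(execute_network(models, ts), models)
```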
Published: 21 June 2022
Biological Cybernetics pp 1-18; https://doi.org/10.1007/s00422-022-00937-6

Abstract:
In Bourlard and Kamp (Biol Cybern 59(4):291–294, 1988), it was theoretically proven that autoencoders (AE) with a single hidden layer (previously called “auto-associative multilayer perceptrons”) were, in the best case, implementing singular value decomposition (SVD) Golub and Reinsch (Linear algebra, Singular value decomposition and least squares solutions, pp 134–151. Springer, 1971), equivalent to principal component analysis (PCA) Hotelling (Educ Psychol 24(6/7):417–441, 1933); Jolliffe (Principal component analysis, Springer Series in Statistics, 2nd edn. Springer, New York). That is, AE are able to derive the eigenvalues that represent the amount of variance covered by each component, even in the presence of the nonlinear functions (sigmoid-like, or any other nonlinearity) on their hidden units. Today, with the renewed interest in “deep neural networks” (DNN), multiple types of (deep) AE are being investigated as an alternative to manifold learning Cayton (Univ California San Diego Tech Rep 12(1–17):1, 2005) for conducting nonlinear feature extraction or fusion, each with its own specific (expected) properties. Many of those AE are currently being developed as powerful, nonlinear encoder–decoder models, or used to generate reduced and discriminant feature sets that are more amenable to different modeling and classification tasks. In this paper, we start by recalling and further clarifying the main conclusions of Bourlard and Kamp (Biol Cybern 59(4):291–294, 1988), supporting them with extensive empirical evidence, which could not be provided previously (in 1988) due to dataset and processing limitations. With a full understanding of the underlying mechanisms, we show that it remains hard (although feasible) to go beyond the state-of-the-art PCA/SVD techniques for auto-association. Finally, we present a brief overview of the different autoencoder models that are mainly in use today and discuss their rationale, relations, and application areas.
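A minimal numerical illustration of the recalled result, under the simplifying assumption of a purely linear autoencoder trained with squared-error loss: by the Eckart–Young theorem its optimal rank-k reconstruction coincides with the PCA/SVD reconstruction, which the short numpy sketch below checks on synthetic data.

```python
# Minimal illustration (assumed setup): for squared-error loss, the optimal
# rank-k *linear* autoencoder reconstructs centred data exactly as PCA/SVD
# does, so the two reconstructions below should coincide.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20))  # correlated data
Xc = X - X.mean(axis=0)                                     # centre the data
k = 5

# PCA via SVD: project onto the top-k right singular vectors.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt[:k].T @ Vt[:k]

# "Linear autoencoder" optimum: the best rank-k least-squares factorisation
# of Xc, which by Eckart-Young is again the truncated SVD.
X_ae = U[:, :k] @ np.diag(S[:k]) @ Vt[:k]

print(np.allclose(X_pca, X_ae))   # True: same optimal rank-k reconstruction
```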
Published: 20 June 2022
Biological Cybernetics pp 1-25; https://doi.org/10.1007/s00422-022-00936-7

Abstract:
Adaptation, the reduction of neuronal responses by repetitive stimulation, is a ubiquitous feature of auditory cortex (AC). It is not clear what causes adaptation, but short-term synaptic depression (STSD) is a potential candidate for the underlying mechanism. If so, adaptation can be directly linked to the way AC produces context-sensitive responses such as mismatch negativity and stimulus-specific adaptation observed at the single-unit level. We examined this hypothesis via a computational model based on AC anatomy, which includes serially connected core, belt, and parabelt areas. The model replicates the event-related field (ERF) of the magnetoencephalogram as well as ERF adaptation. The model dynamics are described by excitatory and inhibitory state variables of cell populations, with the excitatory connections modulated by STSD. We analysed the system dynamics by linearising the firing rates and solving the STSD equation using time-scale separation. This allows AC dynamics to be characterised as a superposition of damped harmonic oscillators, so-called normal modes. We show that repetition suppression of the N1m is due to a mixture of causes, with stimulus repetition modifying both the amplitudes and the frequencies of the normal modes. In this view, adaptation results from a complete reorganisation of AC dynamics rather than a reduction of activity in discrete sources. Further, both the network structure and the balance between excitation and inhibition contribute significantly to the rate at which AC recovers from adaptation. This lifetime of adaptation is longer in the belt and parabelt than in the core area, despite the time constants of STSD being spatially homogeneous. Finally, we critically evaluate the use of a single exponential function to describe recovery from adaptation.
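Schematically, and with illustrative notation rather than the paper's exact symbols, the normal-mode decomposition described above can be written as follows.

```latex
% Schematic only: symbols are assumed for illustration, not the paper's notation.
% Linearising the firing-rate dynamics around a fixed point gives a linear
% system whose response decomposes into damped harmonic oscillators (normal
% modes); the modelled N1m deflection is their superposition.
\[
  \dot{\mathbf{x}}(t) \approx J\,\mathbf{x}(t) + \mathbf{u}(t),
  \qquad
  x_{\mathrm{N1m}}(t) = \sum_{k} a_k\, e^{-\lambda_k t}\cos(\omega_k t + \varphi_k),
\]
% where $J$ is the Jacobian of the linearised dynamics, $\lambda_k$ and
% $\omega_k$ are the damping rate and frequency of mode $k$, and repetition
% (via STSD) shifts both the amplitudes $a_k$ and the frequencies $\omega_k$.
```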
Samuel Coward, Theo Drane, Emiliano Morini
Formal Aspects of Computing; https://doi.org/10.1145/3543670

Abstract:
We present a method for the formal verification of transcendental hardware and software algorithms that scales to higher precision without suffering an exponential growth in runtime. A class of implementations using piecewise polynomial approximation to compute the result is verified using MetiTarski, an automated theorem prover, with each verification call covering a range of inputs. The method was applied to commercial implementations from Cadence Design Systems, with significant runtime gains over exhaustive testing, and successfully proved that the expected accuracy of one implementation was overly optimistic. Reproducing the verification of a sine implementation in software, previously done using an alternative theorem-proving technique, demonstrates that the MetiTarski approach is a viable competitor. Verification of a 52-bit implementation of the square root function highlights the method’s high-precision capabilities.
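For intuition only (this is not MetiTarski, which proves such bounds formally over real-valued ranges), the sketch below samples each segment of a piecewise polynomial approximation of sin(x) and compares the observed error with a target bound, mirroring the one-verification-call-per-range structure described above.

```python
# Illustrative only: the paper uses MetiTarski to *prove* per-range error
# bounds; this sketch merely samples each segment of a piecewise polynomial
# approximation of sin(x) and checks the observed error against a target bound.
import numpy as np
from numpy.polynomial import Polynomial

def check_segment(lo, hi, degree=3, bound=1e-5, samples=10_000):
    xs = np.linspace(lo, hi, 64)
    poly = Polynomial.fit(xs, np.sin(xs), degree)     # per-segment approximation
    grid = np.linspace(lo, hi, samples)
    err = np.max(np.abs(poly(grid) - np.sin(grid)))   # sampled, not proved!
    return err <= bound, err

edges = np.linspace(0.0, np.pi / 2, 9)                # eight equal sub-ranges
for lo, hi in zip(edges[:-1], edges[1:]):
    ok, err = check_segment(lo, hi)
    print(f"[{lo:.3f}, {hi:.3f}]  max sampled error {err:.2e}  {'ok' if ok else 'FAIL'}")
```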
Formal Aspects of Computing; https://doi.org/10.1145/3536429

Abstract:
One of the major advantages of model checking over other formal methods is its ability to generate a counterexample when a model does not satisfy its specification. A counterexample is an error trace that helps to locate the source of the error; it therefore represents a valuable debugging tool. In Probabilistic Model Checking (PMC), the task of counterexample generation has a quantitative aspect. Unlike in conventional model checking, where previously proposed methods generate the counterexample as a single path ending in a bad state that represents the failure, the task in PMC is completely different: a counterexample in PMC is a set of evidences, i.e., diagnostic paths satisfying a path formula, whose combined probability mass violates the probability threshold. Counterexample generation alone is not sufficient for finding the exact source of the error. Therefore, in conventional model checking, many debugging techniques have been proposed that act on the generated counterexamples in order to locate the source of the error. In PMC, debugging counterexamples is more challenging, since the counterexample consists of multiple paths, each carrying a probability. In this paper, we propose a debugging technique based on stochastic games to analyze probabilistic counterexamples generated for probabilistic models described as Markov chains in the PRISM language. The technique rests mainly on the idea of considering the modules composing the system as players of a reachability game, whose actions contribute to the evolution of the game. Through several case studies, we show that our technique is very effective for systems composed of multiple components. The results are also validated by a debugging tool called GEPCX (Game Explainer of Probabilistic Counterexamples).
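As a toy illustration of what such a probabilistic counterexample looks like (the DTMC, property, and threshold below are invented; this is not the paper's game-based analysis), the sketch enumerates diagnostic paths reaching a failure state until their accumulated probability mass violates the bound.

```python
# Hypothetical sketch: a probabilistic counterexample is a set of diagnostic
# paths whose combined probability mass exceeds the threshold of a property
# such as P<=0.1 [ F "fail" ]. We enumerate paths in a toy DTMC and stop once
# the accumulated mass already violates the bound.
dtmc = {                       # state -> list of (probability, successor)
    "init": [(0.6, "ok"), (0.4, "retry")],
    "retry": [(0.5, "ok"), (0.5, "fail")],
    "ok": [], "fail": [],
}

def counterexample(start, target, threshold, max_len=10):
    paths, stack, mass = [], [(1.0, [start])], 0.0
    while stack and mass <= threshold:
        p, path = stack.pop()
        if path[-1] == target:
            paths.append((p, path))
            mass += p
            continue
        if len(path) < max_len:
            for q, succ in dtmc[path[-1]]:
                stack.append((p * q, path + [succ]))
    return (mass, paths) if mass > threshold else (mass, None)

print(counterexample("init", "fail", threshold=0.1))
```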
Cheng-Hao Cai, Jing Sun, Gillian Dobbie, Zhé Hóu, Hadrien Bride, Jin Song Dong, Scott Uk-Jin Lee
Formal Aspects of Computing; https://doi.org/10.1145/3536430

Abstract:
Automated model repair techniques enable machines to synthesise patches that ensure models meet given requirements. B-repair, an existing model repair approach, assists users in repairing erroneous models in the B formal method, but it is inefficient on large models because repairs are applied successively. In this work, we improve the performance of B-repair using simultaneous modifications, repair refactoring, and better classifiers. Simultaneous modifications can eliminate multiple invariant violations at a time, so the average time to repair each fault is reduced. Further, the modifications can be refactored to reduce the length of repairs. The purpose of using better classifiers is to perform more accurate and general repairs and to avoid inefficient brute-force searches. We conducted an empirical study to demonstrate that the improved implementation makes the entire model repair process more accurate, general, and efficient.
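A loose sketch of the simultaneous-modification idea, using an invented Python state and invariants rather than B notation: every violated invariant of a faulty state is patched in one repair step, instead of one successive repair per violation.

```python
# Loose sketch with an invented state and invariants (not B notation):
# a "simultaneous modification" patches every violated invariant of the
# faulty state in one repair step, rather than one repair per violation.
state = {"x": -3, "y": 15}
invariants = {
    "x_nonneg": (lambda s: s["x"] >= 0, lambda s: {**s, "x": 0}),
    "y_at_most_10": (lambda s: s["y"] <= 10, lambda s: {**s, "y": 10}),
}

def repair_simultaneously(s):
    violated = [name for name, (check, _) in invariants.items() if not check(s)]
    for name in violated:                 # one combined repair step
        s = invariants[name][1](s)
    return s, violated

print(repair_simultaneously(state))       # ({'x': 0, 'y': 10}, ['x_nonneg', 'y_at_most_10'])
```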