EISSN : 2078-2489
Published by: MDPI AG (10.3390)
Total articles ≅ 2,031
Latest articles in this journal
Information, Volume 12; doi:10.3390/info12080309
Error coefficients are ubiquitous in systems. In particular, errors must be accounted for when verifying reasoning about safety-critical systems. We present a reasoning method that can be applied to systems described by polynomial error assertions (PEAs). The implication relationship between PEAs can be converted into an inclusion relationship between their zero sets; the PEAs are then transformed into first-order polynomial logic. Combined with quantifier elimination based on cylindrical algebraic decomposition, judging the inclusion relationship between the zero sets of PEAs is reduced to judging the error parameters and specific error-coefficient constraints, which can be obtained by quantifier elimination. The proposed reasoning method is validated by proving the related theorems. An example of intercepting target objects is provided, and the correctness of our method is tested through large-scale random cases. Compared with reasoning methods without error semantics, our method has the advantage of being able to handle error parameters.
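The conversion chain described above can be sketched in first-order logic. Assuming, purely for illustration, that each PEA is given by a polynomial equation p(x, ε) = 0 over system variables x and error parameters ε (a hypothetical notation; the paper's exact formulation may differ):

```latex
% Sketch: implication between PEAs as inclusion of zero sets.
A \Rightarrow B
  \iff Z(A) \subseteq Z(B)
  \iff \forall x\,\forall\varepsilon\;\bigl(p_A(x,\varepsilon)=0 \rightarrow p_B(x,\varepsilon)=0\bigr)
```

CAD-based quantifier elimination then turns the quantified formula into an equivalent quantifier-free constraint on the error coefficients, which is what the method ultimately checks.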
Information, Volume 12; doi:10.3390/info12080308
The current IT market is increasingly dominated by the “cloud continuum”. In the “traditional” cloud, computing resources are typically homogeneous in order to facilitate economies of scale. In contrast, in edge computing, computational resources are widely diverse, commonly with scarce capacities, and must be managed very efficiently due to battery constraints or other limitations. A combination of resources and services at the edge (edge computing), in the core (cloud computing), and along the data path (fog computing) is needed through a trusted cloud continuum. This requires novel solutions for the creation, optimization, management, and automatic operation of such infrastructure through new approaches such as infrastructure as code (IaC). In this paper, we analyze how artificial intelligence (AI)-based techniques and tools can enhance the operation of complex applications and support the broad, multi-stage heterogeneity of the infrastructural layer in the “computing continuum” through the enhancement of IaC optimization, IaC self-learning, and IaC self-healing. To this end, the presented work proposes a set of tools, methods, and techniques that enable application operators to seamlessly select, combine, configure, and adapt computation resources all along the data path and to support the complete service lifecycle, covering: (1) optimized distributed application deployment over heterogeneous computing resources; (2) real-time monitoring of execution platforms, including continuous control and trust of the infrastructural services; (3) application deployment and adaptation while optimizing execution; and (4) application self-recovery to avoid compromising situations that may lead to unexpected failure.
Information, Volume 12; doi:10.3390/info12080311
Personal information has been likened to “golden data”, which companies have chased using every means possible. In mobile apps, incidents of compulsory authorization and excessive data collection have evoked privacy concerns and strong repercussions among users. This manuscript proposes a privacy boundary management model, which elaborates how such users can demarcate and regulate their privacy boundaries. The survey data came from 453 users who authorized certain operations through mobile apps. The partial least squares (PLS) analysis method was used to validate the instrument and the proposed model. Results indicate that information relevance and transparency play a significant role in shaping app users’ control–risk perceptions, while government regulation is more effective than industry self-discipline in promoting the formation of privacy boundaries. Unsurprisingly, perceptions of privacy risk control significantly affect users’ privacy concerns and trust beliefs, two vital factors that ultimately influence their willingness to authorize. The implications of a thorough inquiry into app users’ willingness to authorize their privacy information are far-reaching. App vendors should probe the privacy-relevant beliefs of their users and enact effective privacy practices to avert the economic and reputational damage induced by improper information collection. More significantly, a comprehensive understanding of users’ willingness to authorize their information can serve as an essential reference for regulatory bodies formulating reasonable privacy protection policies in the future.
Information, Volume 12; doi:10.3390/info12080307
The goal of our research was to assess whether the observation that deceptive texts have a lower positive tone than truthful ones could be made operative and used to build a classifier for the particular case of fraudsters’ letters written in Spanish. The data were the letters that CEOs address to company shareholders in their annual financial reports, and the task was to identify the letters of companies that committed financial misconduct or fraud. This case was challenging for two reasons: first, most prior research worked with spontaneous written or spoken texts, while these letters are not spontaneous; second, most research in this area worked on English texts, while we validated the linguistic cues found as evidence of deception for Spanish texts. The results confirm that an SVM trained on a bag-of-words model of frequent adjectives can achieve 81% accuracy, because these adjectives capture the positive or negative tone and the word combinations that turn out to be characteristic of fraudsters’ texts.
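The feature-extraction step can be sketched as below. The adjective list here is a toy stand-in (the paper derives frequent adjectives from the CEO-letter corpus itself), and the SVM training, e.g. via scikit-learn, is not reproduced:

```python
from collections import Counter

# Toy adjective lexicon (hypothetical; the actual lexicon comes
# from the frequent adjectives of the Spanish CEO-letter corpus).
ADJECTIVES = ["strong", "solid", "excellent", "challenging", "uncertain"]

def bag_of_adjectives(text, vocabulary=ADJECTIVES):
    """Return a fixed-length count vector over the adjective vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[adj] for adj in vocabulary]

# Each letter becomes one vector; an SVM (e.g. sklearn.svm.SVC with a
# linear kernel) would then be trained on these vectors with
# fraud / no-fraud labels.
vec = bag_of_adjectives("A strong and solid year despite uncertain markets")
```

The vector for the example sentence counts one occurrence each of "strong", "solid", and "uncertain".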
Information, Volume 12; doi:10.3390/info12080310
Image watermarking has become an integral part of various multimedia applications. Watermarking is an approach for adding information to an existing image in order to protect the data from modification and to provide data integrity. Frequency-transform-domain techniques are complex and costly, and they degrade image quality because fewer bits can be embedded. The proposed work utilizes the original DCT method with some modifications and applies it to the frequency bands of the DWT. The output is then combined with a pixel-modification method for embedding and extraction. The proposed approach improves performance in terms of time, imperceptibility, and robustness.
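A minimal one-dimensional sketch of the DWT + DCT embedding idea is shown below, using parity quantization of a single DCT coefficient of the DWT approximation band. This is an illustration of the general technique only; the paper works on images and adds a pixel-modification step that is not reproduced here:

```python
import math

def haar_dwt(x):
    """One-level Haar DWT: approximation and detail bands."""
    a = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse of the Haar DWT above."""
    x = []
    for ai, di in zip(a, d):
        x += [ai + di, ai - di]
    return x

def dct(x):
    """Naive (unnormalized) DCT-II."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (i + 0.5) / n)
                for i in range(n)) for k in range(n)]

def idct(c):
    """Inverse of the DCT-II above (scaled DCT-III)."""
    n = len(c)
    return [c[0] / n + (2 / n) * sum(c[k] * math.cos(math.pi * k * (i + 0.5) / n)
                                     for k in range(1, n))
            for i in range(n)]

def embed_bit(x, bit, coeff=1, delta=8.0):
    """Embed one bit by forcing the parity of a quantized DCT
    coefficient of the DWT approximation band."""
    a, d = haar_dwt(x)
    c = dct(a)
    q = round(c[coeff] / delta)
    if q % 2 != bit:
        q += 1                      # flip parity to encode the bit
    c[coeff] = q * delta
    return haar_idwt(idct(c), d)

def extract_bit(x, coeff=1, delta=8.0):
    """Recover the embedded bit from the coefficient's parity."""
    a, _ = haar_dwt(x)
    return round(dct(a)[coeff] / delta) % 2
```

Because both transforms are exactly invertible (up to floating-point error), the quantized parity survives the round trip and the bit can be read back from the watermarked signal.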
Information, Volume 12; doi:10.3390/info12080306
The pervasiveness of offensive content in social media has become an important reason for concern for online platforms. With the aim of improving online safety, a large number of studies applying computational models to identify such content have been published in the last few years, with promising results. The majority of these studies, however, deal with high-resource languages such as English due to the availability of datasets in these languages. Recent work has addressed offensive language identification from a low-resource perspective, exploring data augmentation strategies and trying to take advantage of existing multilingual pretrained models to cope with data scarcity in low-resource scenarios. In this work, we revisit the problem of low-resource offensive language identification by evaluating the performance of multilingual transformers in offensive language identification for languages spoken in India. We investigate languages from different families such as Indo-Aryan (e.g., Bengali, Hindi, and Urdu) and Dravidian (e.g., Tamil, Malayalam, and Kannada), creating important new technology for these languages. The results show that multilingual offensive language identification models perform better than monolingual models and that cross-lingual transformers show strong zero-shot and few-shot performance across languages.
Information, Volume 12; doi:10.3390/info12080304
This paper introduces the Steel Cold Rolling Ontology (SCRO) to model and capture domain knowledge of cold rolling processes and activities within a steel plant. A case study is set up that uses real-world cold rolling data sets to validate the performance and functionality of SCRO. This includes using the Ontop framework to deploy virtual knowledge graphs for data access, data integration, data querying, and condition-based maintenance purposes. SCRO is evaluated using OOPS!, the ontology pitfall detection system, and feedback from domain experts from Tata Steel.
Information, Volume 12; doi:10.3390/info12080305
Previous work established the set of square-free integers n with at least one factorization n = pq for which p and q are valid RSA keys, whether they are prime or composite. These integers are exactly those with the property λ(n) | (p − 1)(q − 1), where λ is the Carmichael totient function. We refer to these integers as idempotent, because k^((p−1)(q−1)+1) ≡ k (mod n) for any positive integer k. This set was initially known to contain only the semiprimes, and later expanded to include some of the Carmichael numbers. Recent work by the author gave an explicit formulation for the set, showing that it includes numbers that are neither semiprimes nor Carmichael numbers. Numbers in this last category had not been previously analyzed in the literature. While only the semiprimes have useful cryptographic properties, idempotent integers are deserving of study in their own right, as they lie at the border of hard problems in number theory and computer science. Some idempotent integers, the maximally idempotent integers, have the property that all their factorizations are idempotent. We discuss their structure here, heuristics to assist in finding them, and algorithms from graph theory that can be used to construct examples of arbitrary size.
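The divisibility characterization can be checked directly for small numbers. A minimal sketch (brute-force Carmichael λ; the function names are mine, not the paper's):

```python
from math import gcd

def carmichael_lambda(n):
    """Smallest m >= 1 with k**m == 1 (mod n) for every k coprime
    to n (brute force; fine for small n)."""
    coprime = [k for k in range(1, n) if gcd(k, n) == 1]
    m = 1
    while not all(pow(k, m, n) == 1 for k in coprime):
        m += 1
    return m

def idempotent_factorization(n, p, q):
    """True if the factorization n = p*q satisfies
    lambda(n) | (p-1)(q-1), i.e. p and q behave as valid RSA keys."""
    assert p * q == n
    return (p - 1) * (q - 1) % carmichael_lambda(n) == 0
```

For example, the semiprime 15 = 3 × 5 is idempotent, since λ(15) = 4 divides (3 − 1)(5 − 1) = 8, and correspondingly k^9 ≡ k (mod 15) for every k; the square-free 70 = 7 × 10, in contrast, fails the test for this factorization.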
Information, Volume 12; doi:10.3390/info12080303
Currently, research on inverting wave height using the shadow statistical method is attracting increasing attention because it requires no external calibration equipment. Under the assumption that sea waves satisfy the ideal first-order dispersion relation, the wave period is used to describe the relationship between wave slope and significant wave height. However, the influence of the sea surface current is ignored during wave-height extraction precisely because the ideal first-order dispersion relation is adopted. A close examination of the theoretical derivation shows that the retrieval accuracy of wave height deteriorates when a surface current exists. To solve this problem of the shadow statistical method, this paper investigates the influence of the surface current on wave height inversion and incorporates it into the first-order dispersion relation used for retrieving significant wave height. Synthetic and collected X-band marine radar images are used to verify the influence of the sea surface current on the inversion of significant wave height. The experimental results demonstrate that the inversion accuracy of the significant wave height can be improved when the influence of the surface current is taken into account.
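For context, the correction at issue can be sketched with the standard Doppler-shifted form of the linear dispersion relation (a textbook formulation; the paper's exact parameterization may differ):

```latex
% Ideal first-order dispersion relation (no current), water depth h:
\omega_0 = \sqrt{g\,k\,\tanh(kh)}
% With a surface current \mathbf{U}, the observed frequency is Doppler-shifted:
\omega = \sqrt{g\,k\,\tanh(kh)} + \mathbf{k}\cdot\mathbf{U}
```

where k = |k| is the wavenumber and g is gravitational acceleration. Ignoring the k · U term misattributes part of the observed frequency to the waves themselves, which is why the retrieval degrades when a current is present.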
Information, Volume 12; doi:10.3390/info12080301
Ontologies are widely used nowadays. However, the plethora of ontologies currently available online makes it difficult to identify which are appropriate for a given task and to assess their quality characteristics. This is further complicated by the fact that multiple quality criteria have been proposed for ontologies, making it even harder to decide which ontology to adopt. In this context, this paper presents Delta, a modular online tool for analyzing and evaluating ontologies. The interested user can upload an ontology to the tool, which then automatically analyzes it and graphically visualizes numerous statistics, metrics, and pitfalls. The visuals presented cover a diverse set of quality dimensions, further guiding users to understand the benefits and drawbacks of each individual ontology and how to properly develop and extend it.