EISSN : 1999-5903
Current Publisher: MDPI AG (10.3390)
Total articles ≅ 1,146
Latest articles in this journal
Future Internet, Volume 13; doi:10.3390/fi13060154
Nowadays, the majority of everyday computing devices, irrespective of their size and operating system, allow access to information and online services through web browsers. However, the pervasiveness of web browsing in our daily life does not come without security risks. This widespread practice, combined with web users’ low situational awareness of cyber attacks, exposes them to a variety of threats, such as phishing, malware, and profiling. Phishing attacks can compromise a target, whether an individual or an enterprise, through social interaction alone. Moreover, in the current threat landscape, phishing attacks typically serve as an attack vector or initial step in a more complex campaign. To make matters worse, past work has demonstrated the inability of denylists, the default phishing countermeasure, to keep up with the dynamic nature of phishing URLs. In this context, our work uses supervised machine learning to block phishing attacks based on a novel combination of features extracted solely from the URL. We evaluate our performance over time on a dataset of active phishing attacks and compare it with Google Safe Browsing (GSB), i.e., the default security control in most popular web browsers. We find that our approach outperforms GSB in all of our experiments and performs well even against phishing URLs that are still active one year after our model’s training.
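The URL-only feature idea described above can be sketched in a few lines. Note that the specific features shown here (lengths, digit count, subdomain count, path entropy) are illustrative assumptions for lexical URL features in general, not the paper's actual feature set:

```python
import math
from urllib.parse import urlparse

def shannon_entropy(s):
    """Character-level Shannon entropy of a string."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def url_features(url):
    """Extract illustrative lexical features from the URL alone,
    suitable as input to a supervised classifier."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "host_length": len(host),
        "num_digits": sum(c.isdigit() for c in url),
        "num_subdomains": max(host.count(".") - 1, 0),
        "has_at_symbol": "@" in url,
        "path_entropy": shannon_entropy(parsed.path),
    }
```

A feature vector like this can be fed to any off-the-shelf classifier; because nothing beyond the URL string is needed, classification can happen before the page is ever fetched.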
Future Internet, Volume 13; doi:10.3390/fi13060153
Picking reliable partners, negotiating synchronously with all partners, and managing similar proposals are challenging tasks for any manager. The challenge is even harder for small and medium enterprises (SMEs), which must deal with tight budgets and evident size limitations, often leading them to avoid handling very large contracts. This size problem can only be mitigated through collaboration between multiple SMEs, which in turn reintroduces the issues stated above. To address these problems, this paper proposes a collaborative negotiation system that automates the outsourcing part by assisting the manager throughout a negotiation. The described system provides a comprehensive view of all negotiations, facilitates simultaneous bilateral negotiations, and supports interoperability among multiple partners negotiating on a task described by multiple attributes. In addition, it relies on an ontology to cope with the challenges of semantic interoperability, automates the selection of reliable partners using a lattice-based approach, and manages similar proposals by allowing domain experts to define a satisfaction degree for each SME. To showcase this method, the research focuses on small and medium-sized dairy farms (DFs) and describes a negotiation scenario in which a few DFs assess and generate proposals.
Future Internet, Volume 13; doi:10.3390/fi13060151
An ambient intelligence system responds to user requests based on several contexts. A relevant context concerns what has happened in the ambient environment; events are therefore of primary interest. Events carry information about time, space, or people, which is significant for modeling the context. In this paper, we propose an event-driven approach for context representation based on an ontological model. The approach is extensible and adaptable to academic domains. Moreover, the proposed ontological model is used in reasoning and enrichment processes over the context event information. Our event-driven approach considers five contexts as modules of the model: person, temporal (time), physical space (location), network (resources for acquiring data from the environment), and academic events. We evaluated the approach with respect to (a) the extensibility and adaptability of use-case scenarios for events in an academic environment, (b) the level of reasoning, using competency questions related to events, and (c) the consistency and coherence of the proposed model. The evaluation shows promising results for our event-driven approach to context representation based on the ontological model.
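The event-centred, modular representation described above can be illustrated with a toy triple store and a competency-question query. The class and property names below (AcademicEvent, hasParticipant, etc.) are invented for illustration and are not the paper's actual ontology vocabulary:

```python
# Toy (subject, predicate, object) triples linking an academic event to
# the person, time, and physical-space context modules.
triples = [
    ("seminar42", "type", "AcademicEvent"),
    ("seminar42", "hasLocation", "roomB101"),
    ("seminar42", "hasTime", "2021-06-10T10:00"),
    ("seminar42", "hasParticipant", "alice"),
    ("roomB101", "type", "PhysicalSpace"),
    ("alice", "type", "Person"),
]

def query(subject=None, predicate=None, obj=None):
    """Match triples against an optional (s, p, o) pattern;
    None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Competency question: "Who participates in seminar42?"
participants = [o for (_, _, o) in query("seminar42", "hasParticipant")]
```

In a real ontology this pattern matching would be done with SPARQL over an OWL model, which additionally supports the reasoning and enrichment steps the paper evaluates.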
Future Internet, Volume 13; doi:10.3390/fi13060152
The central thesis of this paper is that memetic practices can be crucial to understanding deception at present, when hoaxes have increased globally due to COVID-19. We therefore employ existing memetic theory to describe the qualities and characteristics of meme hoaxes: the way they are replicated by altering some aspects of the original and then shared on social media platforms in order to connect global and local issues. The sample consisted of hoaxes retrieved from, and related to, the province of Alicante (Spain) during the first year of the pandemic (n = 35). Once the typology, topics, and memetic qualities of the hoaxes were identified, we analysed their formal characteristics following Shifman (2014) and, secondly, their concordances of content and stance both within and outside our sample (Spain and abroad). The results show, firstly, that the hoaxes are mainly disinformation and are related to the pandemic. Secondly, despite the notion that local hoaxes are linked to local circumstances that are difficult to extrapolate, our conclusions demonstrate their extraordinary memetic and “glocal” capacity: they rapidly adapt hoaxes from other places to local areas, very often supplanting reliable sources, thereby demonstrating consistency and opportunism.
Future Internet, Volume 13; doi:10.3390/fi13060150
Since their emergence in the mid-1990s, online media have evolved from simple digital editions that merely reproduced content from print newspapers into sophisticated multi-format products with multimedia and interactive features. To trace this visual evolution, this article conducts a longitudinal study of online media design by analyzing the front pages of five general-information Spanish newspapers (elpais.com, elmundo.es, abc.es, lavanguardia.com, and elperiodico.com) over the past 25 years (1996–2020), and also surveys some of their current features. To this end, six in-depth interviews were conducted with managers of different online media outlets. The results indicate that the media analysed have evolved from a static, rigid format to a dynamic, mobile, multi-format model. Regarding the language used, along with increased multimedia and interactive possibilities, Spanish online media currently display a balance between text and images on their front pages. Lastly, audience information-consumption habits, which are largely superficial and sporadic, together with the increasing technification and speed of production processes, mean that news media have lost, in terms of design, part of the individual personality they had in their print editions. However, they retain their index-type front pages, very vertical and highly saturated, as one of their most characteristic elements.
Future Internet, Volume 13; doi:10.3390/fi13060149
As an emerging network architecture, Information-Centric Networking (ICN) is considered to have the potential to meet the new requirements of Fifth Generation (5G) networks. ICN identifies content by a name decoupled from location, supports in-network caching, and adopts a receiver-driven model for data transmission. Existing ICN congestion control mechanisms usually first select a nearby replica through opportunistic cache hits and then keep adjusting the transmission rate regardless of the congestion state, which fails to fully exploit the characteristics of ICN to improve data transmission performance. To solve this problem, this paper proposes a two-level congestion control mechanism, called 2LCCM. When heavy congestion occurs, it switches the replica location based on a node state table to avoid congested paths. Under light congestion, 2LCCM uses a receiver-driven congestion control algorithm to adjust the request sending rate and avoid link congestion. The design and implementation of the proposed mechanism are described in detail, and the experimental results show that 2LCCM effectively reduces transmission delay under heavy congestion, and that the bandwidth-delay-product-based congestion control algorithm outperforms a loss-based algorithm in transmission performance.
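The two-level decision described above can be sketched schematically: switch replica under heavy congestion, otherwise pace requests from a bandwidth-delay-product (BDP) estimate. The threshold value, node-state fields, and packet size below are assumptions for illustration, not values from the paper:

```python
HEAVY_CONGESTION = 0.8  # hypothetical congestion-level threshold in [0, 1]

def next_action(node_states, current_replica, bandwidth_bps, rtt_s, pkt_size=1500):
    """Decide the receiver's next step.

    node_states maps replica id -> observed congestion level in [0, 1];
    bandwidth_bps and rtt_s are the receiver's current path estimates.
    """
    if node_states[current_replica] >= HEAVY_CONGESTION:
        # Level 1: heavy congestion -> switch to the least congested
        # replica according to the node state table.
        replica = min(node_states, key=node_states.get)
        return ("switch_replica", replica)
    # Level 2: light congestion -> size the request window so the number
    # of in-flight packets tracks the bandwidth-delay product.
    bdp_packets = max(1, int(bandwidth_bps * rtt_s / (8 * pkt_size)))
    return ("set_window", bdp_packets)
```

For example, on a 12 Mbit/s path with a 50 ms RTT and 1500-byte packets, the BDP works out to 50 packets in flight.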
Future Internet, Volume 13; doi:10.3390/fi13060148
Nowadays, Internet of Things (IoT) adoption is burgeoning and is deemed the lynchpin of ubiquitous connectivity. In this context, defining and leveraging robust IoT security risk management strategies is paramount for secure IoT adoption. This study therefore aims to support IoT adopters from any sector in formulating or reframing their IoT security risk management strategies so that they effectively address IoT security issues. In a nutshell, this article relies on a mixed-methods research methodology and proposes a reference model for IoT security risk management strategy. The proposed IoT security risk management strategy reference model (IoTSRM2) relies on 25 selected IoT security best practices, which are outlined using a proposed taxonomic hierarchy, and on a proposed three-phased methodology consisting of nine steps and outputs. The main contribution of this work is IoTSRM2 itself, which consists of six domains, 16 objectives, and 30 prioritized controls. Furthermore, before discussing related work, this article critically evaluates selected informative references of IoTSRM2 based on their percentage-wise linkage to the IoTSRM2 domains and to the entire IoTSRM2. The findings of this evaluation identify, inter alia, the informative references that are the three most and least linked to the entire IoTSRM2.
Future Internet, Volume 13; doi:10.3390/fi13060145
This paper introduces an innovative methodology for assigning intelligent scores to web pages. The approach combines User eXperience (UX) analysis, an Artificial Neural Network (ANN), and a Long Short-Term Memory (LSTM) algorithm to score web pages, taking outlier conditions into account when constructing the training dataset. Specifically, the UX tool analyses parameters that influence the score, such as navigation time, number of clicks, and mouse movements per page, to find possible outliers; the ANN predicts outliers; and the LSTM processes the web page tags together with the UX and user scores. The final web page score is assigned by the LSTM model, corrected by the UX output, and improved by the navigation user score. This final score helps the designer by suggesting the tag typologies for structuring a new web page layout on a specific topic. Using the proposed methodology, the web designer is guided in allocating content within the web page layout. The work was developed within an industry project aimed at an innovative AI interface for web designers.
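The role of outlier detection over UX metrics such as navigation time can be sketched as below. A simple z-score rule stands in here for the paper's ANN-based outlier detector; the metric values and threshold are made up for illustration:

```python
import statistics

def flag_outliers(values, z_threshold=1.5):
    """Flag sessions whose UX metric (e.g. navigation time in seconds)
    deviates strongly from the mean; flagged sessions would be treated
    specially when building the training dataset."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return [False] * len(values)
    return [abs(v - mean) / stdev > z_threshold for v in values]
```

A session with an unusually long navigation time, for instance, gets flagged so it does not distort the scores the LSTM is trained on.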
Future Internet, Volume 13; doi:10.3390/fi13060147
Age, gender, educational background, and so on are the most basic attributes for identifying and portraying users. It is also possible to conduct in-depth mining analysis and high-level predictions based on such attributes to learn users’ preferences and personalities, so as to enhance users’ online experience and realize personalized services in real applications. In this paper, we propose using classification algorithms from machine learning to predict users’ demographic attributes, such as gender, age, and educational background, based on one month of data collected with the Sogou search engine, with the goal of building user portraits. A multi-model approach using fusion algorithms is adopted and described in the paper. The proposed model is a two-stage structure that uses one month of data with demographic labels as the training data. The first stage of the structure is based on traditional machine learning models and neural network models, whereas the second stage combines the models from the first stage. Experimental results show that our multi-model method achieves more accurate results than single-model methods in predicting user attributes. The proposed approach also has stronger generalization ability in predicting users’ demographic attributes, making it more adequate for profiling users.
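The two-stage fusion idea can be sketched minimally: each first-stage model emits a class-probability distribution for a user, and a second-stage combiner fuses them. Plain probability averaging stands in here for the paper's trained second-stage model, and the class labels are illustrative:

```python
def fuse_predictions(base_probs):
    """Fuse per-model probability dicts for one user into a single label.

    base_probs is a list of dicts, one per first-stage model, each mapping
    a class label to that model's predicted probability.
    """
    classes = base_probs[0].keys()
    fused = {c: sum(p[c] for p in base_probs) / len(base_probs) for c in classes}
    # Return the class with the highest averaged probability.
    return max(fused, key=fused.get)
```

In a full stacking setup, the second stage would instead be a learned model trained on the first-stage outputs, which is what gives the multi-model approach its accuracy and generalization advantage over any single model.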
Future Internet, Volume 13; doi:10.3390/fi13060146
Side-channel attacks remain a challenge to information flow control and security in mobile edge devices to this day. One important security flaw can be exploited through temperature side-channel attacks, in which heat dissipation and propagation from the processing cores are observed over time in order to deduce security-sensitive information. In this paper, we study how computer-vision-based convolutional neural networks (CNNs) can be used to mount a temperature (thermal) side-channel attack on different Linux governors in a mobile edge device built on a multi-processor system-on-chip (MPSoC). We also designed a power- and memory-efficient CNN model that is capable of performing the thermal side-channel attack on the MPSoC and that industry practitioners and academics can use as a benchmark when designing defences against such attacks on MPSoCs.