Inteligencia Artificial

Journal Information
ISSN / EISSN : 1137-3601 / 1988-3064
Total articles ≅ 552
Current Coverage
SCOPUS
COMPENDEX
DOAJ
ESCI
Archived in
EBSCO
SHERPA/ROMEO
Latest articles in this journal

Mariela Morveli Espinoza
Inteligencia Artificial, Volume 24, pp 36-39; doi:10.4114/intartif.vol24iss67pp36-39

Abstract:
Rhetorical arguments are used in negotiation dialogues when a proponent agent tries to persuade his opponent to accept a proposal more readily. When more than one argument is generated, the proponent must compare them in order to select the most adequate for his interests. One way of comparing them is by means of their strength values. Related works propose a calculation based only on the components of the rhetorical arguments, i.e., the importance of the opponent's goal and the certainty level of the beliefs that make up the argument. This work proposes a model for calculating the strength of rhetorical arguments that is inspired by the pre-conditions of credibility and preferability stated by Guerini and Castelfranchi. Thus, we suggest two new criteria for the strength calculation: the credibility of the proponent and the status of the opponent's goal in the goal processing cycle. The model is empirically evaluated, and the results demonstrate that it is more efficient than previous works in terms of the number of exchanged arguments and the number of agreements reached.
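As a reading aid, the sketch below shows one way the four criteria named in the abstract (opponent-goal importance, belief certainty, proponent credibility, goal status) could be combined into a single strength value. The aggregation rule and the example numbers are hypothetical, not the authors' model.

```python
# Illustrative combination of the four criteria from the abstract into one
# strength value. The product-based aggregation is hypothetical, not the
# authors' actual formula.

def argument_strength(goal_importance, belief_certainty,
                      proponent_credibility, goal_status_weight):
    """All inputs are assumed normalized to [0, 1].

    goal_importance       -- importance of the opponent's goal
    belief_certainty      -- certainty level of the argument's beliefs
    proponent_credibility -- credibility of the proponent (new criterion)
    goal_status_weight    -- weight of the goal's stage in the goal
                             processing cycle (new criterion)
    """
    basic = goal_importance * belief_certainty             # component-based part
    context = proponent_credibility * goal_status_weight   # new criteria
    return basic * context

# Example: pick the strongest of two candidate rhetorical arguments.
candidates = {
    "threat": argument_strength(0.9, 0.7, 0.6, 0.8),
    "reward": argument_strength(0.6, 0.9, 0.6, 1.0),
}
best = max(candidates, key=candidates.get)
```
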
Otto Menegasso Pires, Eduardo Inacio Duzzioni, Jerusa Marchi, Rafael De Santiago
Inteligencia Artificial, Volume 24, pp 90-101; doi:10.4114/intartif.vol24iss67pp90-101

Abstract:
Quantum computing has been evolving in recent years. Although quantum algorithms have shown performance superior to their classical counterparts, quantum decoherence and the additional auxiliary qubits required by error-tolerance routines have been huge barriers to the efficient use of quantum algorithms. These restrictions lead us to search for ways to minimize algorithm costs, i.e., the number of quantum logic gates and the depth of the circuit. To this end, quantum circuit synthesis and quantum circuit optimization techniques are explored. We studied the viability of using Projective Simulation, a reinforcement learning technique, to tackle the problem of quantum circuit synthesis. The agent had the task of creating quantum circuits of up to 5 qubits. Our simulations show that the agent performed well, but its capacity for learning new circuits decreased as the number of qubits increased.
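For orientation, a minimal two-layer Projective Simulation agent is sketched below: percept clips are connected to action clips by h-values, actions are sampled with probability proportional to h, and rewards reinforce the used edge. The environment is left abstract; in the paper the actions would append gates to a quantum circuit, and the sizes and damping rate here are illustrative.

```python
import numpy as np

# Minimal two-layer Projective Simulation (PS) agent. Episodic memory is an
# h-matrix over (percept, action) pairs; learning combines damping
# (forgetting toward 1) with reward reinforcement of the taken edge.

class PSAgent:
    def __init__(self, n_percepts, n_actions, damping=0.01):
        self.h = np.ones((n_percepts, n_actions))
        self.damping = damping

    def act(self, percept):
        p = self.h[percept] / self.h[percept].sum()
        return np.random.choice(len(p), p=p)

    def learn(self, percept, action, reward):
        self.h = self.h - self.damping * (self.h - 1.0)
        self.h[percept, action] += reward

# Usage sketch: percepts encode the current circuit state, actions the gates.
agent = PSAgent(n_percepts=16, n_actions=8)
a = agent.act(percept=3)
agent.learn(percept=3, action=a, reward=1.0)
```
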
Jean Phelipe De Oliveira Lima, Carlos Maurício Seródio Figueiredo
Inteligencia Artificial, Volume 24, pp 40-50; doi:10.4114/intartif.vol24iss67pp40-50

Abstract:
In modern smart cities, there is a quest for the highest level of integration and automation of services. In the surveillance sector, one of the main challenges is to automate the analysis of videos in real time to identify critical situations. This paper presents intelligent models based on Convolutional Neural Networks (using the MobileNet, InceptionV3 and VGG16 networks), LSTM networks and feedforward networks for the task of classifying videos into the classes "Violence" and "Non-Violence", using the RLVS database. Different data representations were used according to the Temporal Fusion techniques. The best outcome achieved was an accuracy and F1-score of 0.91, a higher result than those found in similar works conducted on the same database.
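The sketch below illustrates one late-fusion variant of the CNN + LSTM pipeline described in the abstract: a per-frame CNN backbone (MobileNet here) followed by an LSTM over time. Frame count, input size, and hyperparameters are illustrative assumptions, not the authors' exact settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

FRAMES, H, W, C = 16, 224, 224, 3

# Frame-level feature extractor (frozen for transfer learning).
backbone = tf.keras.applications.MobileNet(
    include_top=False, weights="imagenet",
    input_shape=(H, W, C), pooling="avg")
backbone.trainable = False

model = models.Sequential([
    layers.Input(shape=(FRAMES, H, W, C)),
    layers.TimeDistributed(backbone),        # per-frame feature vectors
    layers.LSTM(64),                         # temporal fusion
    layers.Dense(1, activation="sigmoid"),   # Violence vs. Non-Violence
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```
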
Jorge Herrera-Franklin, Alejandro Rosete, Milton García-Borroto
Inteligencia Artificial, Volume 24, pp 71-89; doi:10.4114/intartif.vol24iss67pp71-89

Abstract:
The Variable Cost and Size Bin Packing Problem (VCSBPP) is a known NP-hard problem that consists of minimizing the cost of all bins used to pack a set of items. There are many real-life applications of the VCSBPP where the focus is on improving the efficiency of the solution method. Despite the existence of fuzzy approaches that adapt other optimization problems to real-life conditions, the VCSBPP has not been extensively studied in terms of relaxations of its crisp conditions. The existing fuzzy approaches for the VCSBPP range from relaxing the capacity of the bins to relaxing the item weights. In this paper we address an unexplored side: relaxing the set of items to be packed. Our main contribution is therefore a fuzzy version of the VCSBPP that allows incomplete packing. The proposed fuzzy VCSBPP is solved by a parametric approach. In particular, a fast heuristic algorithm is introduced that obtains a set of solutions with interesting trade-offs between cost and relaxation of the original crisp conditions. An experimental study is presented to explore the proposed fuzzy VCSBPP and its solution.
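To make the parametric idea concrete, here is a hypothetical greedy sketch in which a relaxation level alpha allows a fraction of items to be left unpacked when packing them would force opening an extra bin. It only illustrates the cost-versus-relaxation trade-off; it is not the authors' heuristic.

```python
# Hypothetical greedy for a relaxed VCSBPP: alpha = 0 recovers the crisp
# problem (pack everything); alpha > 0 lets some items stay unpacked.

def pack(items, bin_types, alpha):
    """items: list of sizes; bin_types: list of (capacity, cost);
    alpha: fraction of items that may remain unpacked."""
    budget = int(alpha * len(items))
    bins = []           # each bin is [remaining_capacity, cost]
    skipped = 0
    for size in sorted(items, reverse=True):
        fit = next((b for b in bins if b[0] >= size), None)
        if fit is not None:
            fit[0] -= size
            continue
        # a new bin would be needed: cheapest type able to hold the item
        cap, cost = min((bt for bt in bin_types if bt[0] >= size),
                        key=lambda bt: bt[1])
        if skipped < budget:
            skipped += 1                 # relax: leave the item unpacked
        else:
            bins.append([cap - size, cost])
    total_cost = sum(cost for _, cost in bins)
    return total_cost, skipped

print(pack([6, 5, 4, 3, 2], [(10, 7), (5, 4)], alpha=0.2))
```
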
Varsha Bhole, Arun Kumar
Inteligencia Artificial, Volume 24, pp 102-120; doi:10.4114/intartif.vol24iss67pp102-120

Abstract:
Shelf-life prediction for fruits based on visual inspection of external features with RGB imaging is becoming more pervasive in agriculture and the food business. In the proposed architecture, to enhance accuracy at low computational cost, we focus on two challenging tasks of shelf-life (remaining useful life) prediction: 1) detecting intrinsic features such as internal defects, bruises, texture, and color of the fruits; and 2) classifying fruits according to their remaining useful life. To accomplish these tasks, we use thermal imaging as a baseline, a non-destructive approach to capture the intrinsic properties of fruits in terms of temperature. To further improve the classification task, we combine it with a transfer learning approach to forecast the shelf life of fruits. For this study, we have chosen 'Kesar' (Mangifera indica Linn cv. Kesar) mangoes, and for classification the images in our dataset are categorized into 19 classes, viz. RUL-1 (Remaining Useful Life-1) to RUL-18 (Remaining Useful Life-18) and No-Life, since after harvesting the storage span of 'Kesar' is about 19 days. A comparative analysis using SqueezeNet, ShuffleNet, and MobileNetV2 (prominent lightweight CNN-based models) has been performed in this study. The empirical results show a highest achievable accuracy of 98.15±0.44%, with almost a double speedup in training the entire process by using thermal images.
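The transfer-learning setup can be sketched roughly as below: a lightweight ImageNet backbone (MobileNetV2 here) with its classifier head replaced for the 19 shelf-life classes (RUL-1 to RUL-18 and No-Life). The thermal-image pipeline, input size, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 19  # RUL-1 ... RUL-18 and No-Life

# Pretrained lightweight backbone; replace the head for 19 classes.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

# Freeze the pretrained features and train only the new head.
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 3-channel thermal images.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```
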
Gerardo Ernesto Rolong Agudelo, Carlos Enrique Montenegro Marin, Paulo Alonso Gaona-Garcia
Inteligencia Artificial, Volume 24, pp 121-128; doi:10.4114/intartif.vol24iss67pp121-128

Abstract:
Worldwide, and in countries such as Colombia, the number of missing persons is a worrying and growing phenomenon: every year, thousands of people are reported missing. That this keeps happening might indicate that there are still analyses that have not been done and tools that have not been considered for finding patterns in the information on missing persons. This article presents a study of how informatics and computational tools can be used to help find missing persons and what patterns can be found in missing-person datasets, using as a case study open data about persons reported missing in Colombia in 2017. The goal of this study is to review how computational tools such as data mining and image analysis can help find missing persons and to draw patterns from the available information. First, a review of the state of the art in image analysis for real-world applications was made to explore the possibilities of studying the photos of missing persons; then a data mining process with data on missing persons in Colombia was conducted to produce a set of decision rules that can explain the cause of the disappearance. The generated decision rules suggest links between socioeconomic stratification, age, gender, specific locations in Colombia, and the missing-person reports. In conclusion, this work reviews what information about missing persons is publicly available and what analyses can be made with it, showing that data mining and face recognition can be useful tools to extract and identify patterns in missing-person data.
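The rule-mining step can be illustrated with a shallow decision tree fitted on tabular records and printed as readable rules. The column names and toy records below are hypothetical placeholders for the 2017 Colombian open data described in the article.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy stand-in for the missing-person records (hypothetical columns/values).
data = pd.DataFrame({
    "age":     [15, 34, 22, 67, 29, 12],
    "gender":  [0, 1, 0, 1, 0, 1],      # 0 = female, 1 = male
    "stratum": [1, 3, 2, 4, 1, 2],      # socioeconomic stratification
    "found":   [0, 1, 1, 1, 0, 0],      # outcome to explain
})

X, y = data[["age", "gender", "stratum"]], data["found"]
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the fitted tree as human-readable decision rules.
print(export_text(tree, feature_names=list(X.columns)))
```
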
Hicham Deghbouch, Fatima Debbat
Inteligencia Artificial, Volume 24, pp 18-35; doi:10.4114/intartif.vol24iss67pp18-35

Abstract:
This work addresses the deployment problem in Wireless Sensor Networks (WSNs) by hybridizing two metaheuristics, namely the Bees Algorithm (BA) and the Grasshopper Optimization Algorithm (GOA). The BA is an optimization algorithm that has demonstrated promising results in solving many engineering problems. However, the local search process of the BA lacks efficient exploitation due to the random assignment of search agents inside the neighborhoods, which weakens the algorithm's accuracy and results in slow convergence, especially when solving higher-dimension problems. To alleviate this shortcoming, this paper proposes a hybrid algorithm that utilizes the strength of the GOA to enhance the exploitation phase of the BA. To prove the effectiveness of the proposed algorithm, it is applied to WSN deployment optimization with various deployment settings. Results demonstrate that the proposed hybrid algorithm can optimize the deployment of WSNs and outperforms state-of-the-art algorithms in terms of coverage, overlapping area, average moving distance, and energy consumption.
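For context, the coverage objective that such deployment algorithms typically optimize is sketched below: the fraction of grid points lying within the sensing radius of at least one sensor. The hybrid BA/GOA search itself is omitted; any metaheuristic could call this as its fitness function, and the field size and radius are illustrative.

```python
import numpy as np

def coverage(sensors, field=(100, 100), radius=10.0, step=1.0):
    """Fraction of grid points covered by at least one sensor.

    sensors: array of shape (n_sensors, 2) with (x, y) positions."""
    xs = np.arange(0, field[0], step)
    ys = np.arange(0, field[1], step)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)          # (P, 2)
    d = np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=2)
    covered = (d <= radius).any(axis=1)
    return covered.mean()

# Example: evaluate 20 randomly placed sensors.
rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(20, 2))
print(f"coverage = {coverage(positions):.2%}")
```
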
Flávio Arthur O. Santos, Thiago Dias Bispo, Hendrik Teixeira Macedo, Cleber Zanchettin
Inteligencia Artificial, Volume 24, pp 1-17; doi:10.4114/intartif.vol24iss67pp1-17

Abstract:
Natural language processing systems have attracted much interest from industry. This branch of study includes applications such as machine translation, sentiment analysis, named entity recognition, question answering, and others. Word embeddings (i.e., continuous word representations) are an essential module for those applications and are generally used as the word representation fed to machine learning models. Popular methods to train word embeddings are GloVe and Word2Vec. They achieve good word representations, despite limitations: both ignore the morphological information of words and consider only one representation vector per word. As a consequence, the word embeddings do not properly account for different word contexts and are unaware of a word's inner structure. To mitigate this problem, the FastText method represents each word as a bag of character n-grams: a continuous vector describes each n-gram, and the final word representation is the sum of its character n-gram vectors. Nevertheless, using all character n-grams of a word is a poor approach, since some n-grams have no semantic relation with their words and increase the amount of potentially useless information; it also increases training time. In this work, we propose a new method for training word embeddings whose goal is to replace the FastText bag of character n-grams with a bag of word morphemes obtained through morphological analysis of the word. Thus, words with similar contexts and morphemes are represented by vectors close to each other. To evaluate our new approach, we performed intrinsic evaluations considering 15 different tasks, and the results show competitive performance compared to FastText. Moreover, the proposed model is 40% faster than FastText in the training phase. We also outperform the baseline approaches in extrinsic evaluations, on hate speech detection and NER tasks, using different scenarios.
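The composition idea can be shown in a few lines: instead of summing character n-gram vectors as FastText does, a word vector is the sum of its morpheme vectors. The segmentations below are hard-coded for illustration; in practice a morphological analyzer would produce them and the vectors would be learned, not random.

```python
import numpy as np

DIM = 50
rng = np.random.default_rng(0)
morpheme_vecs = {}   # stand-in for a trained morpheme embedding table

def vec(morpheme):
    # Lazily create a (random) vector per morpheme for the demo.
    return morpheme_vecs.setdefault(morpheme, rng.normal(size=DIM))

def word_vector(morphemes):
    # Word representation = sum of its morpheme vectors.
    return np.sum([vec(m) for m in morphemes], axis=0)

# "unhappiness" -> un + happi + ness ; "happily" -> happi + ly
v1 = word_vector(["un", "happi", "ness"])
v2 = word_vector(["happi", "ly"])
cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(f"shared-morpheme similarity: {cos:.2f}")
```
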
Gildásio Lecchi Cravo, Dayan De Castro Bissoli, André Renato Sales Amaral
Inteligencia Artificial, Volume 24, pp 51-70; doi:10.4114/intartif.vol24iss67pp51-70

Abstract:
The double-row layout problem (DRLP) consists of determining the location of facilities along both sides of a central corridor, with the objective of minimizing the weighted sum of the distances between all pairs of facilities. Facilities may be machines, work centers, manufacturing cells, departments of a building, or robots in manufacturing systems. This work proposes a purely heuristic approach based on the Particle Swarm Optimization (PSO) metaheuristic. To validate the proposed algorithm, it was subjected to computational tests with fifty-one instances, including instances considered large-scale, and the results show the proposed PSO to be an excellent approach for the DRLP, improving the best-known values for several instances available in the literature.
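For reference, the objective a PSO particle would be evaluated on can be sketched as follows: given an assignment of facilities (with lengths) to the two rows and their order, compute facility centres along the corridor and the weighted sum of pairwise horizontal distances. The data below are illustrative; corridor width is ignored in this simplified sketch.

```python
import numpy as np

def drlp_cost(rows, lengths, weights):
    """rows: two ordered lists of facility indices (one per row);
    lengths: facility lengths; weights: symmetric flow matrix."""
    centre = {}
    for row in rows:
        x = 0.0
        for f in row:
            centre[f] = x + lengths[f] / 2.0   # centre of facility f
            x += lengths[f]
    n = len(lengths)
    cost = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            cost += weights[i][j] * abs(centre[i] - centre[j])
    return cost

lengths = [4, 3, 5, 2]
weights = np.array([[0, 2, 1, 0],
                    [2, 0, 3, 1],
                    [1, 3, 0, 2],
                    [0, 1, 2, 0]])
print(drlp_cost(rows=[[0, 2], [1, 3]], lengths=lengths, weights=weights))
```
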
Supoj Hengpraprohm, Suwimol Jungjit
Inteligencia Artificial, Volume 23, pp 100-114; doi:10.4114/intartif.vol23iss65pp100-114

Abstract:
For breast cancer data classification, we propose an ensemble filter feature selection approach named ‘EnSNR’. Entropy and SNR evaluation functions are used to find the features (genes) for the EnSNR subset. A Genetic Algorithm (GA) generates the classification model. The efficiency of the model is validated using 10-fold cross-validation re-sampling. The microarray dataset used in our experiments contains 50,739 genes for each of 32 patients. When our proposed ‘EnSNR’ subset of features is used, as well as giving an enhanced degree of prediction accuracy and reducing the number of irrelevant features (genes), there is also a small saving of computer processing time.
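An ensemble filter in the spirit of EnSNR can be sketched as below: score each gene with a signal-to-noise ratio and with an entropy-based measure (mutual information here), then combine the top-ranked genes from each criterion. The combination rule (a union of top-k sets), the value of k, and the random toy data are illustrative assumptions; the paper's exact procedure may differ.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def snr_scores(X, y):
    # Signal-to-noise ratio per gene for a two-class problem.
    a, b = X[y == 0], X[y == 1]
    return np.abs(a.mean(0) - b.mean(0)) / (a.std(0) + b.std(0) + 1e-12)

def ensemble_subset(X, y, k=100):
    snr = snr_scores(X, y)
    mi = mutual_info_classif(X, y, random_state=0)   # entropy-based score
    top_snr = set(np.argsort(snr)[-k:])
    top_mi = set(np.argsort(mi)[-k:])
    return sorted(top_snr | top_mi)                  # selected gene indices

# Toy example: 32 patients x 1,000 genes (the real data has 50,739 genes).
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 1000))
y = rng.integers(0, 2, size=32)
print(len(ensemble_subset(X, y, k=50)), "genes selected")
```
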