Journal of Intelligent Learning Systems and Applications

Journal Information
ISSN / EISSN: 2150-8402 / 2150-8410
Total articles ≈ 189

Latest articles in this journal

James W. Mock, Suresh S. Muknahallipatna
Journal of Intelligent Learning Systems and Applications, Volume 15, pp 36-56;

Deep reinforcement learning (deep RL) has the potential to replace classic robotic controllers. State-of-the-art deep RL algorithms such as Proximal Policy Optimization (PPO), Twin Delayed Deep Deterministic Policy Gradient (TD3), and Soft Actor-Critic (SAC) have been investigated for training robots to walk. However, conflicting performance results for these algorithms have been reported in the literature. In this work, we present a performance analysis of these three state-of-the-art deep RL algorithms on a constant-velocity walking task for a quadruped. Performance is analyzed by simulating the walking task on a quadruped equipped with the range of sensors present on a physical quadruped robot. Simulations of the three algorithms across a range of sensor inputs and with domain randomization are performed. The strengths and weaknesses of each algorithm for the given task are discussed. We also identify the set of sensors that contributes to the best performance of each algorithm.
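Constant-velocity walking tasks of this kind are typically trained against a velocity-tracking reward. As a hedged illustration (the paper's actual reward function is not given here; `v_target` and both weights are assumptions), such a reward might look like:

```python
def walking_reward(v_actual, torques, v_target=0.5,
                   w_track=1.0, w_effort=0.005):
    """Reward forward velocity close to v_target and penalize
    actuator effort; the weights are illustrative assumptions."""
    tracking = -w_track * abs(v_actual - v_target)    # velocity-tracking term
    effort = -w_effort * sum(t * t for t in torques)  # control-effort penalty
    return tracking + effort
```

At the target velocity with zero actuator torque the reward is maximal (0.0); any velocity error or effort makes it negative, which is what PPO, TD3, or SAC would then maximize per timestep.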
Jeremiah Ratican, James Hutson, Andrew Wright
Journal of Intelligent Learning Systems and Applications, Volume 15, pp 24-35;

The realization of an interoperable and scalable virtual platform, currently known as the “metaverse,” is inevitable, but many technological challenges need to be overcome first. With the metaverse still in a nascent phase, research indicates that building a new 3D social environment capable of interoperable avatars and digital transactions will represent most of the initial investment in time and capital. The return on investment, however, is worth the financial risk for firms like Meta, Google, and Apple. While the current virtual space of the metaverse is worth $6.30 billion, it is expected to grow to $84.09 billion by the end of 2028. But the creation of an entire alternate virtual universe of 3D avatars, objects, and otherworldly cityscapes calls for a new development pipeline and workflow. Existing 3D modeling and digital twin processes, already well-established in industry and gaming, will be ported to support the need to architect and furnish this new digital world. The current development pipeline, however, is cumbersome, expensive, and limited in output capacity. This paper proposes an innovative immersive development pipeline that leverages recent advances in artificial intelligence (AI) for 3D model creation and optimization. The previous reliance on 3D modeling software to create assets and then import them into a game engine can be replaced with nearly instantaneous content creation with AI. While AI art generators like DALL-E 2 and DeepAI have been used for 2D asset creation, when combined with game-engine technology such as Unreal Engine 5 and virtualized geometry systems like Nanite, a new process for creating nearly unlimited content for immersive reality becomes possible. New processes and workflows, such as those proposed here, will revolutionize content creation and pave the way for Web 3.0, the metaverse, and a truly 3D social environment.
James Hutson, Gaurango Banerjee, Naresh Kshetri, Kurt Odenwald, Jeremiah Ratican
Journal of Intelligent Learning Systems and Applications, Volume 15, pp 1-23;

There has been disagreement over the value of purchasing space in the metaverse, but many businesses, including Nike, The Wendy’s Company, and McDonald’s, have jumped in headfirst. While the metaverse land rush has been called an “illusion” given underdeveloped infrastructure, including inadequate software and servers, and the potential opportunities for economic and legal abuse, the “real estate of the future” shows no signs of slowing. While the current virtual space of the metaverse is worth $6.30 billion, it is expected to grow to $84.09 billion by the end of 2028. But the long-term legal and regulatory considerations of capitalizing on the investment, as well as the manner in which blockchain technology can secure users’ data and digital assets, have yet to be properly investigated. With the metaverse still in a conceptual phase, building a new 3D social environment capable of digital transactions will represent most of the initial investment in time and human capital. Digital twin technologies, already well-established in industry, will be ported to support the need to architect and furnish the new digital world. The return on, and viability of, investing in the “real estate of the future” raises questions fundamental to the success or failure of the enterprise. As such, this paper proposes a novel framing of the issue and looks at the intersection where finance, technology, and law are converging to prevent another Dot-com bubble of the late 1990s in metaverse-based virtual real estate transactions. Furthermore, the paper argues that these domains are technologically feasible, but that the main challenges for commercial users remain in the legal and regulatory arenas. As has been the case with the emergence of online commerce, a legal assessment of the metaverse indicates that courts will look to traditional and established legal principles when addressing issues until the enactment of federal and/or state statutes and accompanying regulations. Lastly, whereas traditional regulation of real estate would involve property law, the current legal framing of ownership of metaverse assets is governed by contract law.
James Hutson, Ben Fulcher, Joseph Weber
Journal of Intelligent Learning Systems and Applications, Volume 14, pp 115-131;

The gamification of learning has proven educational benefits, especially in secondary education. Studies confirm successful student engagement, with improved time on task, motivation, and learning outcomes. At the same time, there remains little research on games and learning at the postsecondary level, where traditional pedagogies remain the norm. The studies that have been conducted are almost exclusively restricted to science programs, including medicine and engineering. Moreover, postsecondary subject-matter experts who have created their own gamified experiences have often been forced to do so on an ad hoc basis, either on their own, teaching themselves game engines, or with irregular support from experts in the field. But to ensure a well-designed, well-developed, high-quality educational experience that leads to the desired outcomes for a field, a sustainable infrastructure needs to be developed in institutions that have an established game design program (or can partner with institutions that do). Moreover, such a design-based learning approach can be embedded within an existing studio model to help educate participants while producing an educational product. As such, this qualitative case study provides an example of the process of operationalizing a game design studio from pre-production through post-production, drawing from the design and development of the educational video game The Museum of the Lost VR (2022). The results, resources, and classification system presented are scalable and provide models for institutions of different sizes. Methods to develop a sustainable infrastructure are presented to ensure interdisciplinary partnerships across departments and institutions with game design programs, so that they can collaborate and create educational experiences that optimize user experience and learning outcomes.
Ilman Shazhaev, Dimitry Mihaylov, Abdulla Shafeeg
Journal of Intelligent Learning Systems and Applications, Volume 14, pp 89-95;

Although the neurobiological processes and clinical criteria of Parkinson’s disease are well-established, early identification remains a significant hurdle to effective, disease-modifying therapy and prolonged quality of life. Gaming on computers, gaming consoles, and mobile devices has become a popular pastime and provides valuable data from several sources. The high-resolution data generated when users play commercial digital games includes information on play frequency as well as performance data that reflects low-level cognitive and motor processes. In this paper, we review methods in the literature for identifying digital biomarkers of Parkinson’s disease. We also present a machine learning method for early identification of Parkinson’s disease from digital biomarkers based on tapping activity from Farcana-Mini players. However, more data is required for a complete evaluation of this method. This data is being collected, with their consent, from players of Farcana-Mini. Data analysis and a full assessment of the method will be presented in future work.
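Tapping-based digital biomarkers are usually derived from inter-tap timing statistics, since slowed and more variable tapping is a well-known motor sign. As a minimal sketch (the paper's actual feature set is not specified, so these three features are assumptions), extracting such features from raw tap timestamps could look like:

```python
from statistics import mean, stdev

def tapping_features(tap_times):
    """Derive simple motor features from a sorted sequence of tap
    timestamps in seconds. Inter-tap interval statistics of this kind
    are typical digital biomarkers; the paper's set may differ."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return {
        "mean_interval": mean(intervals),  # average gap between taps
        "interval_sd": stdev(intervals) if len(intervals) > 1 else 0.0,  # tapping variability
        "tap_rate": len(tap_times) / (tap_times[-1] - tap_times[0]),  # taps per second
    }
```

Feature vectors like this, one per play session, would then feed a standard classifier; higher `interval_sd` at a given `tap_rate` is the kind of signal such models look for.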
Ilman Shazhaev, Dmitry Mikhaylov, Abdulla Shafeeg, Ekaterina Mulyarchik
Journal of Intelligent Learning Systems and Applications, Volume 14, pp 96-106;

Farcana has developed a smart gaming input device that, apart from being a tool for the gamer to use during gameplay, also collects biomedical information about the gamer; after analysis by an artificial intelligence (AI) system, this information can tell the gamer whether he or she is in a state of tilt. Tilt is a poor emotional state that arises from a player’s inability to control his or her emotions during gameplay. The gamer may be winning or losing; it is the inability to control, or even acknowledge, the emotional state that constitutes tilt. Tilt strongly affects a player’s overall success in gaming and his or her rating in esports. This paper analyzes numerous studies and patents on the topic. The available literature provides the necessary insight into tilt and why it is important to help gamers acknowledge their state, especially given deteriorating results. We also propose a framework for an AI system for tilt recognition.
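One common way to frame tilt detection is as a deviation from the player's own baseline. As a hedged sketch (the paper's framework is not specified; the two signals, both thresholds, and the two-signal rule are assumptions), a minimal baseline-relative detector could look like:

```python
def is_tilted(apm, baseline_apm, error_rate, baseline_error_rate,
              apm_factor=1.5, error_factor=2.0):
    """Flag tilt when both input intensity (actions per minute) and the
    in-game error rate drift well above the player's own baseline.
    The thresholds and the two-signal rule are illustrative assumptions;
    a real system would also use the device's biomedical signals."""
    apm_spike = apm > apm_factor * baseline_apm
    error_spike = error_rate > error_factor * baseline_error_rate
    return apm_spike and error_spike
```

Requiring both signals to spike reduces false positives from a player who is simply in an intense but controlled fight.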
Ilman Shazhaev, Dimitry Mihaylov, Abdulla Shafeeg
Journal of Intelligent Learning Systems and Applications, Volume 14, pp 107-114;

Highlights are the most interesting, attention-grabbing moments of a video stream, the moments that can persuade a viewer to watch the entire video. They act like a shop window: everything bright and colorful goes there. By seeing them, the user can understand in advance what is inside the video. They are also more versatile than trailers: they can be made shorter or longer and embedded in different places in the user interface. The user sees a selection of highlights as soon as he or she lands on a website, watches a video clip on YouTube, or views a section of a popular streamer’s feed, and the user’s attention is immediately drawn to the most memorable shots. Manually creating all video highlights is becoming increasingly tedious because of the immense amount of material that requires them, so an algorithm capable of automating the process would simplify it significantly. Beyond easing the work, such automation paves the way for a whole set of new applications that previously seemed unrealistic, with an AI that does not need full human supervision but can itself identify and label the most interesting and attractive moments on screen. After a literature review on video highlight detection, this paper applies the model presented in [1] to determine the feasibility of extracting highlights from the Farcana 2.0 Twitch video feed.
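Highlight pipelines typically score fixed-length clips with a learned model and then post-process the scores into a final selection. As an illustrative sketch of only that post-processing step (the scoring model itself is the learned part and is not reproduced here; `k` and `min_gap` are assumptions), a greedy top-k selection could look like:

```python
def select_highlights(clip_scores, k=3, min_gap=2):
    """Greedily pick the k highest-scoring clip indices, keeping picks
    at least min_gap indices apart so highlights are not near-duplicates
    of adjacent frames. The scores would come from a learned model."""
    picked = []
    for idx in sorted(range(len(clip_scores)),
                      key=lambda i: clip_scores[i], reverse=True):
        if all(abs(idx - p) >= min_gap for p in picked):
            picked.append(idx)
        if len(picked) == k:
            break
    return sorted(picked)
```

The spacing constraint is a cheap stand-in for the temporal diversity that published highlight models enforce in more principled ways.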
Bezawit Lake, Fekade Getahun, Fitsum T. Teshome
Journal of Intelligent Learning Systems and Applications, Volume 14, pp 71-88;

Livestock is a critical socioeconomic asset in developing countries such as Ethiopia, where the economy is significantly based on agriculture and animal husbandry. However, there is an enormous loss of livestock population, which undermines efforts to achieve food security and poverty reduction in the country. The primary reason for this challenge is the lack of a reliable and prompt diagnosis system that identifies livestock diseases in a timely manner. To address some of these issues, this study proposed integrating an expert system with deep-learning image processing. Due to the economic significance of cattle in Ethiopia, the study focused only on cattle disease diagnosis. Cattle disease symptoms visible to the naked eye were collected with a cell-phone camera; symptoms identified by palpation were collected through text dialogue. The symptom categories were identified by the image-analysis component using a convolutional neural network (CNN), which classified the input symptoms with 95% accuracy. The final diagnostic conclusion was drawn by the reasoner component of the expert system, which integrated the image-classification results, location, and text information obtained from the users. We developed a prototype system incorporating the image-classification algorithms and the reasoner component. The evaluation of the developed system showed that it can provide a rapid and effective diagnosis of cattle diseases.
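The interesting architectural point is the fusion step: the reasoner combines CNN probabilities over visible symptoms with palpation symptoms reported by text. As a hedged sketch (the additive scoring, the `text_weight`, and the example diseases and rules are all illustrative assumptions, not the paper's actual reasoner), such a fusion could look like:

```python
def diagnose(cnn_probs, reported_symptoms, rules, text_weight=0.1):
    """Combine CNN image-classification probabilities with palpation
    symptoms reported via text dialogue and return the best-scoring
    disease. Each matched textual symptom adds text_weight of evidence;
    both the additive fusion and the weight are assumptions."""
    scores = dict(cnn_probs)
    for disease, symptoms in rules.items():
        matched = sum(1 for s in symptoms if s in reported_symptoms)
        scores[disease] = scores.get(disease, 0.0) + text_weight * matched
    return max(scores, key=scores.get)
```

With this shape, textual evidence can overturn a narrow CNN margin, which is exactly the role the abstract assigns to the reasoner.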
James Hutson, Trent Olsen
Journal of Intelligent Learning Systems and Applications, Volume 14, pp 57-70;

While images are central to the discipline of art history, surprisingly little research has been conducted on the use of digital environments for teaching in the discipline. Over the past decade, more studies have emerged considering the egalitarian space that web-based applications and social media can offer students and teachers. A body of literature has begun to emerge from a small network of scholars and educators interested in digital humanities and art history, providing examples of how new tools can be integrated into the field’s standard slideshow-and-lecture format. At the same time, the latest technology poised to revolutionize the field has received very little study: virtual reality (VR). Additionally, sensory evidence for digital art history and the creation of immersive, interactive, and multimodal environments for knowledge production are still underexplored. With multiple educational metaverses currently under development, understanding best practices and the pedagogical use of VR has never been timelier. This study reviews the current pedagogical uses of VR in art history and introduces results from a study of the most effective ways to use these immersive experiences, framed by Bloom’s revised taxonomy. Results confirm that the most effective way to structure VR assignments is to provide training on the technology, give students the instructional material needed to introduce the concept, skill, or technique to be learned, create or select an immersive experience that reinforces that topic, and conclude with a debrief or discussion of the major takeaways from the experience.
Aravind Sasidharan Pillai
Journal of Intelligent Learning Systems and Applications, Volume 14, pp 43-56;

In this era of pandemic, the future of the healthcare industry has never been more exciting. Artificial intelligence and machine learning (AI/ML) present opportunities to develop solutions that cater to very specific needs within the industry. Deep learning in healthcare has become incredibly powerful for supporting clinics and for transforming patient care in general. Deep learning is increasingly applied to detect clinically important features in images beyond what the naked human eye can perceive. Chest X-ray images are one of the most common clinical methods for diagnosing a number of diseases, such as pneumonia and lung cancer, and other abnormalities such as lesions and fractures. Properly diagnosing a disease from X-ray images is often a challenging task even for expert radiologists, and there is a growing need for computerized support systems due to the large amount of information encoded in X-ray images. The goal of this paper is to develop a lightweight solution that detects 14 different chest conditions from an X-ray image. Given an X-ray image as input, our classifier outputs a label vector indicating which of the 14 disease classes the image falls into. Along with the image features, we also use non-image features available in the data, such as X-ray view type, age, and gender. The original study conducted by the Stanford ML Group is our baseline; it focuses on predicting 5 diseases. Our aim is to improve upon the previous work, expand prediction to 14 diseases, and provide insight for future chest radiography research.
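Predicting a label vector over 14 non-exclusive conditions is a multi-label setup: each class gets an independent sigmoid rather than one softmax. As a minimal sketch of only that output head (the 0.5 threshold is an assumption; per-class thresholds tuned on validation data are also common), converting model logits to the label vector could look like:

```python
import math

def to_label_vector(logits, threshold=0.5):
    """Standard multi-label head: apply an independent sigmoid to each
    of the 14 per-condition logits, then threshold each probability into
    a 0/1 entry of the output label vector."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [1 if p >= threshold else 0 for p in probs]
```

Because the sigmoids are independent, an image can be positive for several conditions at once (e.g. an effusion together with cardiomegaly), which a single softmax could not express.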