Results in Journal Intelligent Information Management: 360

(searched for: journal_id:(1205170))
Antonio Di Leva, Emilio Sulis, Manuela Vinai
Intelligent Information Management, Volume 09, pp 189-205;

This article proposes a framework, called BP-M*, which includes: 1) a methodology to analyze, engineer, restructure and implement business processes, and 2) a process model that extends the process diagram with the specification of the resources that execute the process activities, allocation policies, schedules, activity times, management of the queues feeding the activities, and workloads, so that the same model can be simulated by a discrete event simulator. The BP-M* framework has been applied to a real case study, a public Contact Center which provides different types of answers to users’ requests. The simulation makes it possible to study different operating scenarios (“what-if” analysis), providing analysts with useful information for evaluating restructuring actions.
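As a hedged illustration of the kind of discrete event simulation such a contact-center model feeds (not the authors' BP-M* toolchain), the sketch below treats the center as a multi-server queue with exponential inter-arrival and service times; the function name and all parameter values are illustrative assumptions:

```python
import random

def simulate_contact_center(n_agents, arrival_rate, service_rate, n_calls, seed=0):
    """Minimal discrete-event sketch: exponential arrivals and services,
    n_agents servers, FIFO handling; returns the mean waiting time."""
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * n_agents   # time at which each agent becomes free
    total_wait = 0.0
    for _ in range(n_calls):
        t += rng.expovariate(arrival_rate)        # next call arrives
        i = min(range(n_agents), key=lambda j: free_at[j])
        start = max(t, free_at[i])                # wait if no agent is free
        total_wait += start - t
        free_at[i] = start + rng.expovariate(service_rate)
    return total_wait / n_calls
```

A "what-if" run then amounts to re-simulating with a different staffing level and comparing the resulting waits.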
Vivian Vimarlund, Craig Kuziemsky, Pirkko Nykänen, Nicolas Nikula
Intelligent Information Management, Volume 09, pp 177-188;

In this study we draw on experiences from the service industry and explore the prerequisites the e-health market will need to meet in order to stimulate both sides of the market (vendors, healthcare organizations, government, institutions, corporations and service organizations) to interact with each other and to develop demand-driven services and social innovations. The results presented in this paper may be of interest to decision makers, industries (e.g. software or technology designers), small and medium enterprises (SMEs) and entrepreneurs with an interest in becoming a part of the e-health market, and to consumers (e.g. healthcare personnel and patients) who are willing to influence the market through their choices. The outcomes of the study show that the role of virtual brokers is essential to the further development of a sustainable global e-health market, because of their role as catalysts for interaction between the two sides of the market, their effect in reducing competitive constraints, the access they provide to a broader network of actors, and their support for public-private exchanges of knowledge and experience.
Saleh Almuayqil, Anthony S. Atkins, Bernadette Sharp
Intelligent Information Management, Volume 09, pp 156-176;

The area of knowledge management, the SECI model in particular, has great value in terms of enriching patients’ knowledge about their diseases and their complications. Despite its effectiveness, the application of knowledge management in the healthcare sector in the Kingdom of Saudi Arabia seems deficient, leading to insufficient practice of self-management and education for several diseases prevalent in the Kingdom. Moreover, the SECI model seems to focus only on the conversion of human knowledge, ignoring knowledge stored in databases and other technological means. In this paper, we propose a framework to support diabetic patients and healthcare professionals in the Kingdom of Saudi Arabia in self-managing the disease. Data mining and the SECI model can provide effective mechanisms to support people with diabetes mellitus: data mining has long been utilised to discover useful knowledge, whereas the SECI model facilitates knowledge conversion between tacit and explicit knowledge among different individuals. The paper also investigates the possibilities of applying the model in the web environment and reviews the tools available on the internet that can support the four modes of the SECI model. This review helps provide a new medium for knowledge management by addressing several cultural obstacles in the Kingdom.
Pankaj Chaudhary, Micki Hyde, James A. Rodger
Intelligent Information Management, Volume 09, pp 133-155;

Information Systems (IS) agility is a current topic of interest in the IS industry. The study follows up on work on the definition of the construct of IS agility and the attributes for sensing, diagnosis, and selection and execution in an agile IS. IS agility is defined as the ability of an IS to sense a change in real time, diagnose it in real time, and select and execute a response in real time. Architecting an agile IS is a complex and resource-intensive task, and hence examination of its benefits is highly desired and appropriate. This paper examines the benefits of an agile IS. The benefits were derived from related academic literature and then refined using practitioner literature and qualitative data; the benefits considered were first-order, or direct, benefits. These benefits were then empirically validated through a survey of IT practitioners. The survey results were analyzed and a rank order of the benefits was arrived at. An exploratory factor analysis was also conducted to find the common dimensions underlying the benefits. It is suggested that organizations can use the empirically validated benefits from this study to justify and jump-start the capital and labor expenditure needed to build agility into their information systems.
Li Wang, Xiaoning Wang, Ningyu Liang, Ming Xiao
Intelligent Information Management, Volume 02, pp 647-651;

Modern educational technology plays a very important role in information engineering disciplines at ordinary universities and colleges. The use of modern educational technology can be classified into an instinctive stage and a rational stage. In the instinctive stage, users have become aware of the importance of modern assisted teaching methods and already make full use of them. In the rational stage, users should not only establish the best teaching environment by using all available teaching methods, but also optimize the usage of these methods to achieve the most powerful effects in class for a specific educational goal. The main topic discussed in this paper is how to make highly rational use of modern educational technologies at ordinary universities and colleges. The authors investigate the relevant educational environments and outcomes from many years of teaching experience, and draw conclusions from those data by comparing, evaluating and analyzing the relevant situations.
Intelligent Information Management, Volume 02, pp 637-646;

A mathematical statement of the elastodynamic contact problem for a cracked body, taking into account unilateral restrictions and friction of the crack faces, is given in classical and weak forms. Different variational formulations of unilateral contact problems with friction, based on the boundary variational principle, are considered. Nonsmooth optimization algorithms of Uzawa type for the solution of the unilateral contact problem with friction have been developed, and the convergence of the proposed algorithms has been studied numerically.
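The Uzawa-type idea can be illustrated on a one-dimensional obstacle (unilateral contact) problem; this is a minimal sketch under simplified assumptions (a single scalar displacement u with stiffness k, load f and gap g), not the authors' boundary-variational algorithm:

```python
def uzawa_obstacle(k, f, g, rho=0.5, iters=100):
    """Uzawa iteration for: minimize 0.5*k*u**2 - f*u subject to u <= g.
    Alternates a primal solve (minimize the Lagrangian in u) with a
    projected dual ascent step on the contact multiplier lam >= 0."""
    lam = 0.0
    for _ in range(iters):
        u = (f - lam) / k                     # unconstrained minimizer of Lagrangian
        lam = max(0.0, lam + rho * (u - g))   # dual ascent, projected onto lam >= 0
    return u, lam
```

For k = 1, f = 2, g = 1 the unconstrained minimizer u = 2 violates the obstacle, so the iteration converges to the contact solution u = 1 with multiplier lam = 1.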
Intelligent Information Management, Volume 02, pp 631-636;

In this paper, we use the lower record values from the inverse Weibull distribution (IWD) to develop and discuss different methods of estimation in two cases: 1) when the shape parameter is known, and 2) when both the shape and scale parameters are unknown. First, we derive the best linear unbiased estimate (BLUE) of the scale parameter of the IWD. To compare the different methods of estimation, we present the results of Sultan (2007) for calculating the best linear unbiased estimates (BLUEs) of the location and scale parameters of the IWD. Second, we derive the maximum likelihood estimates (MLEs) of the location and scale parameters and discuss some of their properties. To compare the different estimates, we calculate the relative efficiency between the obtained estimates. Finally, we provide numerical illustrations using Monte Carlo simulations and apply the findings of the paper to simulated data.
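As a sketch of the known-shape case: under one common parametrization of the IWD, F(x) = exp(-(scale/x)**shape), the MLE of the scale parameter has the closed form sigma_hat = (n / sum(x**-shape))**(1/shape). The Monte Carlo check below assumes this parametrization and uses full samples rather than the paper's record values:

```python
import math
import random

def sample_iwd(n, shape, scale, rng):
    """Inverse Weibull samples by inversion: F(x) = exp(-(scale/x)**shape),
    so x = scale * (-log(u))**(-1/shape) for u ~ Uniform(0, 1)."""
    return [scale * (-math.log(rng.random())) ** (-1.0 / shape) for _ in range(n)]

def mle_scale(data, shape):
    """Closed-form MLE of the scale when the shape is known."""
    s = sum(x ** (-shape) for x in data)
    return (len(data) / s) ** (1.0 / shape)
```

With a few thousand samples the estimate lands close to the true scale, which is the kind of Monte Carlo check the paper's relative-efficiency comparisons rest on.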
Intelligent Information Management, Volume 02, pp 619-630;

Today’s world is characterized by uncertainty and complexity. While examining the importance of research in such a context, the paper attempts to outline a first definition of the role and potential of policy research. The policy process itself has become increasingly complex and nonlinear, as has its relationship with research. Consequently, policy researchers’ contributions to policymakers may not have a direct, precise and immediate influence on single issues, but rather a more pervasive, interactive, deliberative effect. Focusing on the theoretical definition of the risk, uncertainty and complexity of the policy process today, the paper outlines some questions and puts forward possible answers which offer a starting point for further analysis. It explores a new role for policy research and underlines the opportunities offered by argumentative, deliberative and multidisciplinary approaches which can positively impact democracy.
Intelligent Information Management, Volume 09, pp 115-132;

The traveling salesman problem has long been regarded as a challenging application for existing optimization methods, as well as a benchmark for the development of new ones. As with many existing algorithms, a traditional genetic algorithm will have limited success with this problem class, particularly as the problem size increases. A rule-based genetic algorithm is proposed and demonstrated on sets of traveling salesman problems of increasing size. Solution quality and efficiency are compared against a simulated annealing technique and a standard genetic algorithm. The rule-based genetic algorithm is shown to provide superior performance for all problem sizes considered. Furthermore, a post-optimal analysis provides insight into which rules were successfully applied during the solution process, which allows for rule modification to further enhance performance.
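For contrast with the rule-based variant, the standard genetic algorithm used as a baseline can be sketched as follows (order crossover, swap mutation, elitist truncation selection); all parameter values are illustrative assumptions, not the paper's settings:

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2):
    # OX: copy a random slice from p1, fill the rest in p2's order
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def ga_tsp(dist, pop_size=30, generations=200):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        survivors = pop[:pop_size // 2]              # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            child = order_crossover(p1, p2)
            if random.random() < 0.2:                # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda t: tour_length(t, dist))
```

A rule-based variant would replace the blind crossover and mutation operators with moves selected by problem-specific rules.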
Wissam Alobaidi, Entidhar Alkuam
Intelligent Information Management, Volume 09, pp 97-113;

Agent-based simulation has successfully been applied to model complex organizational behavior and to improve or optimize aspects of organizational performance. Agents, with intelligence supported through the application of a genetic algorithm, are proposed as a means of optimizing the performance of the system being modeled. Local decisions made by agents and other system variables are placed in the genetic encoding, which allows local agents to positively impact high-level system performance. A simple, but nontrivial, peg game is used to introduce the concept. A multiple-objective bin packing problem is then solved to demonstrate the potential of the approach in meeting a number of high-level goals. The methodology not only allows for a systems-level optimization, but also provides data which can be analyzed to determine what constitutes effective agent behavior.
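The bin packing side of such a demonstration can be hinted at with a plain first-fit decoder; in the paper's approach the item order and other local decisions would live in the genetic encoding, which is not reproduced here:

```python
def first_fit(sizes, capacity):
    """Decode an ordered list of item sizes into bins using first-fit:
    each item goes into the first open bin with enough remaining room,
    or opens a new bin."""
    bins = []
    for s in sizes:
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:
            bins.append([s])
    return bins
```

A genetic algorithm would then search over item orderings (and any other encoded decisions), scoring each chromosome by the number of bins its decoding produces.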
Mohammad Mustafa, Afag Salah Eldeen, Sulieman Bani-Ahmad, Abdelrahman Osman Elfaki
Intelligent Information Management, Volume 09, pp 39-67;

Arabic, as one of the Semitic languages, has a very rich and complex morphology, which is radically different from that of the European and East Asian languages. The derivational system of Arabic is therefore based on roots, which are often inflected to compose words using a relatively large set of Arabic affix morphemes, e.g., antefixes, prefixes, suffixes, etc. Stemming is the process of rendering all the inflected forms of a word into a common canonical form, and it is one of the early and major phases in natural language processing, machine translation and information retrieval tasks. A number of Arabic language stemmers have been proposed; examples include light stemming, morphological analysis, statistical-based stemming, N-grams and parallel corpora (collections). Motivated by the reported results in the literature, this paper attempts to exhaustively review current achievements in stemming Arabic texts. A variety of algorithms are discussed. The main contribution of the paper is to provide a better understanding of existing approaches, with the hope of building an error-free and effective Arabic stemmer in the near future.
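In the spirit of light stemming (assumed here in a much simplified form), the sketch below strips a few common Arabic prefixes and suffixes while keeping at least three letters of the stem; the affix lists are an illustrative subset, not any particular published stemmer:

```python
# Longest prefixes first so e.g. "وال" wins over bare "و"
PREFIXES = ["وال", "بال", "كال", "فال", "ال", "لل", "و"]
SUFFIXES = ["ها", "ان", "ات", "ون", "ين", "يه", "ية", "ه", "ة", "ي"]

def light_stem(word):
    """Strip at most one prefix and one suffix, keeping a stem of >= 3 letters."""
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= 3:
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= 3:
            word = word[:-len(s)]
            break
    return word
```

For example, "والكتاب" (wa-al-kitab, "and the book") reduces to "كتاب" (kitab, "book"); the length guard is what keeps short words from being destroyed.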
C. Ganeshkumar, M. Pachayappan, G. Madanmohan
Intelligent Information Management, Volume 09, pp 68-96;

The purpose of this paper is to present a critical review of prior literature relating to agri-food supply chain management. An in-depth analysis has been carried out to identify the influential information in the literature. This paper has identified gaps to be explored in agricultural supply chain management (SCM) practices, which may be used by researchers to enrich theory construction, while practitioners may concentrate on establishing the extent and frontiers of agri-food SCM. This research work is the first attempt to make a critical review of the available literature on agri-food SCM practices for developing countries like India. Research articles and other materials related to agri-food supply chain management were collected from online databases such as Scopus, EBSCO and Google Scholar for a period of 10 years (2006-2016). The study performs a content analysis followed by a descriptive analysis. In the next phase, the literature in the field of agri-food supply chain management is classified into four broad categories, viz. general literature review of the agri-food supply chain, policies affecting the segments of the agri-food supply chain, individual segments of agri-food SCM (structure and conduct of supply chain segments) and performance of supply chain segments. These four categories are comprehensively reviewed, and the research gaps in the literature on agri-food supply chain management are elaborated. Finally, the potato supply chain of India is considered as a case example and analyzed comprehensively in detail.
Sima Ayat, Somayeh Hamidi, Farshad Mahini
Intelligent Information Management, Volume 07, pp 253-259;

These days, health care systems such as pharmacies and drugstores routinely produce high volumes of data. Consequently, utilizing data mining methods in health care systems has become a conventional process. In this research, the Apriori algorithm has been applied to mine the data obtained from prescriptions ordered within a pharmacy. Ten association rules were obtained from the pharmaceutical drugs assigned in those prescriptions using the aforementioned Apriori algorithm. The accuracy of these rules was also manually studied and reviewed by a physician. Among these association rules, Vitamin D and Calcium pills are the most interrelated medications, and Omeprazole and Metronidazole rank second in terms of association. The results of this study provide useful feedback about associations among drugs.
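A minimal sketch of the Apriori frequent-itemset step (the association rules would be derived from these itemsets afterwards); the drug abbreviations and support threshold in the example are illustrative assumptions, not the paper's data:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return all frequent itemsets (frozensets) with their support counts."""
    transactions = [frozenset(t) for t in transactions]
    # Frequent 1-itemsets
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Candidate generation: k-sets all of whose (k-1)-subsets are frequent
        items = sorted({i for s in frequent for i in s})
        candidates = [frozenset(c) for c in combinations(items, k)
                      if all(frozenset(sub) in frequent
                             for sub in combinations(c, k - 1))]
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {s: c for s, c in counts.items() if c >= min_support}
        result.update(frequent)
        k += 1
    return result
```

On toy prescriptions such as [{"D", "Ca"}, {"D", "Ca", "O"}, {"D", "Ca"}, {"O", "M"}] with a minimum support of 2, the pair {"D", "Ca"} surfaces as a frequent itemset, mirroring the Vitamin D / Calcium association reported above.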
Michael L. Vaughan, Mingxin Li
Intelligent Information Management, Volume 09, pp 1-20;

Traditionally, the process used by public transportation entities to determine the acquisition strategy for new vehicle assets is based upon a broad range of criteria. Vehicle cost has been cited as one of the more critical factors which decision makers consider, and it is currently common practice to consider other factors (life-cycle cost, fuel efficiency, vehicle reliability, environmental effects, etc.) that contribute to a more comprehensive approach. This study investigates the next generation of advancements in decision-making tools, in the area of applying methods to quantify and manage uncertainty. In particular, the uncertainty comes from the public policy arena, where future policy and regulations are not always based upon logical and predictable processes. The fleet decision-making process in most governmental agencies is a very complex and interdependent activity. There are always competing forces and agendas within the view of the decision maker; rarely is the decision maker a single person, although within the transit environment there is often one person charged with the responsibility of fleet management. The focus of this research is the decision making of the general transit agency community, examined via the development of an expert system prototype tool. A computer-based prototype system is developed which provides an expert, knowledge-based recommendation based upon variable user inputs. The results of this study show that a decision-making tool for the management of transit system alternate fuel vehicle assets can be modeled and tested. The direct users of this research are transit agency administrations; the results can be used by management teams as a reliable input to inform their decision making on urban transit bus expansion.
Majed Alshamari
Intelligent Information Management, Volume 08, pp 170-180;

A system’s usability is one of the critical attributes of its quality. Medical practitioners usually encounter usability difficulties while using a health information system (HIS), as with other systems. There are different usability factors which are expected to influence a system’s usability. Error prevention, patient safety and privacy are vital usability factors and should not be ignored while developing a health information system. This study is based on a comprehensive analysis of published academic and industrial literature to establish the current status of health information systems’ usability. It also identifies different usability factors such as privacy, errors, design and efficiency, which are then assessed. Those factors are further examined through a questionnaire to study their priorities from medical practitioners’ point of view in Saudi Arabia. The statistical analysis shows that privacy and errors are more critical than the other usability factors. The study results further revealed that availability and response time are the main challenges faced by medical practitioners when using the HIS, whereas flexibility and customizability were claimed to ease its use. In addition, a number of statistical correlations were established. Overall, the study findings should help designers and implementers consider these factors for successful implementation of an HIS.
Kazuhiro Esaki
Intelligent Information Management, Volume 08, pp 181-193;

In previous studies, we suggested the concept of a new TQM based on a consideration of the basic concept of quality control; the target domains and entities of product and process, based on the TQM Matrix and the viewpoint of the Three Dimensional Unification Value Model, for managing the quality of organization systems; and the Common Management Process of organizations. Building on these suggestions, in this paper we propose the Common Management Process Model of Total Quality Management, based on a consideration of situation analysis and more precise definitions of the TQM Matrix and the Three Dimensional Unification Value Model of “Product and Process”. Improvement of the quality and efficiency of organization management can be expected from integrating, from the viewpoint of a common management process, forms of management that are conventionally handled individually, such as quality assurance, quality improvement, risk management and investment.
B. N. Malar Selvi, J. Edwin Thomson
Intelligent Information Management, Volume 08, pp 115-141;

This study discusses a communication method used by most brands, called “Electronic Word of Mouth”, or eWOM for short, to reach customers effectively in a short span of time. Social media, having become a new hybrid component of integrated marketing communication, allows brands to establish strong relationships with their customers. By establishing customer relationships online, brands create a platform for customers to discuss product features, quality and price, and to write reviews about products online. This research analyses social media and its impact in spreading messages about brands to end customers, the impact of gender, age group, income, designation and other demographic details of customers on trust in information spread through electronic media, and the level at which eWOM helps customers select a brand.
Leakemariam Berhe, Tesfay Gidey
Intelligent Information Management, Volume 08, pp 143-169;

Introduction: The present work was devoted to assessing the awareness and usage of quality control tools, with an emphasis on statistical process control, in Ethiopian manufacturing industries. A semi-structured questionnaire was administered to executive and technical managers of manufacturing industries of various sizes and specialisms across the country. A stratified random sampling method by region was used to select sample industries for the study. The samples used for this study are industries mainly from the Oromiya, Addis Ababa, Tigray, Amara, SNNP and Diredawa regions, proportional to the number of available industries. Methods: Exploratory methods and descriptive statistics were used for data analysis. Available documents and reports related to the quality control policies of the selected companies were investigated. Results and Discussions: The number of manufacturing industries involved in this study was 44. Of the sampled manufacturing industries, about 60% are from the Oromiya and Addis Ababa regions. All (100%) of the respondents said that quality control tools are very important to their organizations’ productivity and quality improvement (Figure 3). Quality control professionals were also asked the extent to which a quality control system is working in their industry, and a plurality of the respondents (45%) indicated that a quality control system is working to some extent in their respective industries (Figure 18). Conclusions and Recommendations: Most of the quality departments of the industries did not fully recognize the importance of statistical process control as a quality control tool. This is mainly due to a lack of awareness and motivation among top management, a shortage of manpower in the area, and other factors which together make it difficult to apply quality control tools in their organizations.
In general, the industries in Ethiopia are deficient in vigor and found to be stagnant, hence less exposed to a highly competitive market, and they do not adopt the latest quality control techniques in order to gain knowledge about systems to improve quality and operational performance. We conclude that a quality management system has to be established as an independent entity with real power, so that the quality control department responsible for quality can make final decisions with respect to the quality of any given product. Moreover, the concerned bodies (the government and the ministry of industry) should give attention and work together with universities to ensure that these statistical process control techniques are incorporated into university curricula at the degree and masters levels. Furthermore, short- and long-term trainings that could improve the quality and efficiency of their respective management systems should be given to employees, including the top and middle managers, of the various industries.
Dongping Gao, Zhendong Niu, Baosheng Zhang, Nanning Zhang
Intelligent Information Management, Volume 02, pp 613-617;

Generally speaking, software for management systems in the legal field is based on frames of events. However, we are going to study and develop new management system software in which the basic elements are items of evidence; this kind of software is often called an evidence management system. Here we present a design plan and an implementation approach for the evidence management system in detail. Functions such as global, dynamic and systematic management of evidence can be implemented in this system, and we provide a function for individualized searching as well. Users may carry out multi-dimensional data analysis based on the information in the database of the management system.
Intelligent Information Management, Volume 02, pp 608-612;

In the view of traditional industry cluster theory, it should be easy to copy the software industry cluster pattern, that is, to copy another Silicon Valley, owing to the low dependence of the software industry on resources and locational factors. But in practice, copying the Silicon Valley model is much more difficult than in imagination, and the difficulty of nurturing and supporting high-tech initiative is greater than theory anticipates. In China, software companies have merely gathered together geographically, and therefore no initiative center has formed. All of this signifies that software industry clusters are distinct from traditional industry clusters, but the reasons for software industry clustering are not yet clearly understood. Furthermore, reasonable explanations for the puzzles in the economic practice of software industry clusters are urgently needed.
Xiaoxia Yang
Intelligent Information Management, Volume 02, pp 597-607;

State-of-the-art Web service composition approaches face increasingly serious bottlenecks of effectiveness and stability as the diversity and real-time requirements of applications increase, since a new web service chain must be generated from scratch for each application. To break these bottlenecks, this paper presents a Min-Conflict Heuristic-Based Web Service Chain Reconfiguration Approach (MCHRC) to maximally reuse the relevant web service chain: a min-conflict heuristic based regression search algorithm is proposed to implement web service chain reconfiguration, based on formal definitions of process constraints and integrity constraints that guarantee the correctness and integrity of the reconfiguration. This benefits service reuse, and can thereby relieve the time complexity of web service composition and improve the execution stability of web service chains by reducing service provider load. Experimental results show that this approach significantly improves the effectiveness of web service composition.
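The min-conflict heuristic underlying MCHRC can be illustrated on the classic n-queens benchmark rather than on service chains; this is a generic sketch of the heuristic (repeatedly move a conflicted variable to its least-conflicting value), not the paper's regression search:

```python
import random

def min_conflicts_queens(n, max_steps=10000):
    """Min-conflicts local search for n-queens.
    cols[r] is the column of the queen in row r."""
    cols = [random.randrange(n) for _ in range(n)]

    def conflicts(r, c):
        # Queens attack along columns and diagonals (one queen per row here)
        return sum(1 for r2 in range(n) if r2 != r and
                   (cols[r2] == c or abs(cols[r2] - c) == abs(r2 - r)))

    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(r, cols[r]) > 0]
        if not conflicted:
            return cols                       # consistent assignment found
        r = random.choice(conflicted)
        cols[r] = min(range(n), key=lambda c: conflicts(r, c))
    return None                               # plateaued; caller may restart
```

The appeal for reconfiguration is the same as here: the search starts from an existing (mostly valid) assignment and repairs only the conflicting parts, instead of rebuilding from scratch.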
Ali R. Hurson, Sahra Sedigh
Intelligent Information Management, Volume 02, pp 586-596;

This paper describes PERCEPOLIS, an educational platform that leverages technological advances, in particular in pervasive computing, to facilitate personalized learning in higher education, while supporting a networked curricular model. Fundamental to PERCEPOLIS is the modular approach to course development. Blended instruction, where students are responsible for perusing certain learning objects outside of class, used in conjunction with the cyberinfrastructure will allow the focus of face-to-face meetings to shift from lecture to active learning, interactive problem-solving, and reflective instructional tasks. The novelty of PERCEPOLIS lies in its ability to leverage pervasive and ubiquitous computing and communication through the use of intelligent software agents that use a student’s academic profile and interests, as well as supplemental information such as his or her learning style, to customize course content. Assessments that gauge the student’s mastery of concepts are used to allow self-paced progression through the course. Furthermore, the cyberinfrastructure facilitates the collection of data on student performance and learning at a resolution that far exceeds what is currently available. We believe that such an infrastructure will accelerate the acquisition of knowledge and skills critical to professional engineering practice, while facilitating the study of how this acquisition comes about, yielding insights that may lead to significant changes in pedagogy.
Saeid A. Alghamdi
Intelligent Information Management, Volume 02, pp 569-585;

The availability of automated evaluation methodologies that may reliably be used for determining students’ scholastic performance through assigning letter grades is of utmost practical importance to educators and students, and invariably has pivotal value to all stakeholders of the academic process. In particular, educators use letter grades as quantification metrics to monitor students’ intellectual progress within a framework of clearly specified learning objectives of a course. To students, grades may be used as predictive measures and motivating drives for success in a field of study. However, due to the numerous objective and subjective variables that may be accounted for in a methodological process of assigning students’ grades, and since such a process is often tainted with personal philosophy and human psychology factors, it is essential that educators exercise extra care in maximizing the positive account of all objective factors and minimizing the negative ramifications of subjectively fuzzy factors. To this end, and in an attempt to make assigning students’ grades more reliable for assessing the true level of mastery of specified learning outcomes, this paper will: i) provide a literature review of previous work on the most common methods that have traditionally been used for assigning students’ grades, with a short account of the virtues and/or vices of such methods, and ii) present a user-friendly computer code that may easily be adapted for the purpose of assigning students’ grades. This would relieve educators of the overwhelming concerns associated with the mechanistic aspects of determining educational metrics, and would allow them more time and focus to obtain reliable assessments of the true level of students’ mastery of learning outcomes by accounting for all possible evaluation components.
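A hedged sketch of the mechanistic core such a code might contain: mapping numeric scores to letter grades with configurable cutoffs. The cutoff values below are illustrative assumptions, not the paper's scheme:

```python
def letter_grade(score, cutoffs=None):
    """Map a numeric score to a letter grade.
    cutoffs is a descending list of (lower_bound, letter) pairs;
    anything below the last bound falls through to 'F'."""
    cutoffs = cutoffs or [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
    for bound, letter in cutoffs:
        if score >= bound:
            return letter
    return "F"
```

Keeping the cutoffs as a parameter is what makes such a code adaptable to different grading philosophies, which is the flexibility the abstract emphasizes.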
Opeyemi A. Abisoye, Blessing O. Abisoye, Blessing Ele Ojonuba
Intelligent Information Management, Volume 08, pp 103-114;

Outpatients receive medical treatment without being admitted to a hospital. They are not hospitalized for 24 hours or more, but visit a hospital, clinic or associated facility for diagnosis or treatment [1]. The problems of keeping their records for quick access by management, and of providing confidential, secure medical reports that facilitate planning and decision making, are vital issues for improving medical service delivery. This paper explores the challenges of the manual outpatient records system at General Hospital, Minna and infers solutions to the current challenges by designing an online outpatient database system. The main method used for this research work was the interview: two (2) doctors, three (3) nurses on duty and two (2) staff in the records room were interviewed, and fifty (50) sampled outpatient records were collected. A combination of PHP, MySQL and Macromedia Dreamweaver was used to design the web pages and input the data. The records were implemented on the designed outpatient management system and the outputs were produced. The findings reveal the following challenges facing the manual records system: distortion of patients’ folders and difficulty in searching for a patient’s folder, difficulty in relating previous complaints with new complaints because of the volume of the folder, slow access to a patient’s diagnosis history during emergencies, lack of backup when information is lost, and difficulty in preparing accurate and prompt reports because information must be collected from various registers. Based on these findings, this paper highlights possible solutions to the above problems: an online outpatient database system was designed to keep the outpatient records and improve medical service delivery.
Xiayan Cheng, Yunxia Zhou
Intelligent Information Management, Volume 08, pp 98-102;

A parallel related uniform machine system consists of m machines with different processing speeds, where the speed of each machine is independent of the jobs. In this paper, we consider online scheduling of jobs with arbitrary release times on a parallel uniform machine system. The jobs appear over a list, one order at a time; an order includes the processing size and release time of a job. For this model, an algorithm with a competitive ratio of 12 is presented in this paper.
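For orientation only (this is a plain greedy heuristic, not the paper's 12-competitive algorithm), assigning each arriving order to the machine that would finish it earliest looks like this; the speeds and jobs in the example are illustrative:

```python
def schedule_online(jobs, speeds):
    """Greedily assign each job (release_time, size), in arrival order,
    to the machine that finishes it earliest. A job of size p runs for
    p / speed on a machine, starting no earlier than its release time."""
    free_at = [0.0] * len(speeds)      # time each machine becomes available
    assignment = []
    for release, size in jobs:
        finish = [max(free_at[i], release) + size / speeds[i]
                  for i in range(len(speeds))]
        best = min(range(len(speeds)), key=lambda j: finish[j])
        free_at[best] = finish[best]
        assignment.append((best, finish[best]))
    return assignment, max(free_at)
```

For speeds [2, 1] and jobs [(0, 4), (0, 2), (1, 2)], the greedy rule places the first and third jobs on the fast machine, giving a makespan of 3.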
Hisako Orimoto
Intelligent Information Management, Volume 08, pp 87-97;

With the increased use of various complex industrial systems, it is important to identify the occurrence and cause of machine failures without stopping the machines. In this study, two new diagnosis methods based on the correlation information between sound and vibration emitted from a machine are derived. First, a diagnostic method that can detect which part of the machine is faulty, among several assumed fault types, is proposed by simultaneously measuring time series data on sound and vibration. Next, a diagnosis method based on estimating changes in the correlation between sound and vibration is considered, using prior information from the normal situation only. The effectiveness of the proposed theory is experimentally confirmed by applying it to data observed from a rotational machine driven by an electric motor.
, Peer Mohamed Shahabudeen
Intelligent Information Management, Volume 08, pp 41-65;

The growing global competition compels organizations to use many productivity improvement techniques. In this direction, assembly line balancing helps an organization design its assembly line such that its balancing efficiency is maximized. If the organization assembles more than one model on the same line, then the objective is to maximize the average balancing efficiency of the models in the mixed-model assembly line balancing problem. Maximizing the average balancing efficiency of the models while minimizing the makespan of sequencing the models forms a multi-objective function; this is a realistic objective that combines balancing efficiency and makespan. This multi-objective assembly line balancing problem is combinatorial, so the development of a meta-heuristic is inevitable. In this paper, an attempt has been made to develop three genetic algorithms for the mixed-model assembly line balancing problem such that the average balancing efficiency of the models is maximized and the makespan of sequencing the models is minimized. Finally, these three algorithms, and another algorithm from the literature modified to solve the mixed-model assembly line balancing problem, are compared in terms of the stated multi-objective function using a randomly generated set of problems through a complete factorial experiment.
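The balancing-efficiency half of the objective has a standard closed form: total work content divided by (number of stations x cycle time), where the cycle time is the largest station load. A minimal sketch with hypothetical station loads (not the paper's data or its exact multi-objective function):

```python
def balancing_efficiency(station_times):
    """Line balancing efficiency: total work content divided by
    (number of stations x cycle time), taking the cycle time as the
    largest station load."""
    cycle_time = max(station_times)
    total_work = sum(station_times)
    return total_work / (len(station_times) * cycle_time)

# Two candidate assignments of the same 30 time-units of work:
balanced = [10, 10, 10]    # perfectly balanced line
unbalanced = [15, 10, 5]   # same work, poor balance

print(balancing_efficiency(balanced))    # 1.0
print(balancing_efficiency(unbalanced))  # 30 / (3 * 15) = 0.666...
```

The unbalanced line wastes idle capacity at the lightly loaded stations, which is exactly what the genetic algorithms in the paper try to avoid on average across models.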
Feichen Shen, Hongfang Liu, Sunghwan Sohn, David W. Larson, Yugyung Lee
Intelligent Information Management, Volume 08, pp 66-85;

In the current biomedical data movement, numerous efforts have been made to convert and normalize a large amount of traditional structured and unstructured data (e.g., EHRs, reports) to semi-structured data (e.g., RDF, OWL). With the increasing amount of semi-structured data coming into the biomedical community, data integration and knowledge discovery from heterogeneous domains have become important research problems. At the application level, detection of related concepts among medical ontologies is an important goal of life science research: it is crucial to figure out how different concepts are related within a single ontology or across multiple ontologies by analysing predicates in different knowledge bases. However, in today's world of information explosion, it is extremely difficult for biomedical researchers to find existing or potential predicates for linking cross-domain concepts without support from schema pattern analysis. Therefore, a mechanism is needed that performs predicate-oriented pattern analysis to partition heterogeneous ontologies into closely related small topics, and that generates queries to discover cross-domain knowledge from each topic. In this paper, we present such a model: it performs predicate-oriented pattern analysis based on close relationships among predicates and generates a similarity matrix. Based on this similarity matrix, we apply a novel unsupervised learning algorithm to partition large data sets into smaller, closely related topics and generate meaningful queries to fully discover knowledge over a set of interlinked data sources. We have implemented a prototype system named BmQGen and evaluated the proposed model with a colorectal surgical cohort from the Mayo Clinic.
Yuriy E. Obzherin
Intelligent Information Management, Volume 08, pp 17-26;

In the present paper, the approach introduced by V.S. Korolyuk and A.F. Turbin is used to build a model of the two-line queueing system with losses GI/G/2/0. It is based on the theory of semi-Markov processes with an arbitrary phase space of states, which allows some restrictions to be omitted. The stationary characteristics of the system have been defined, assuming that the incoming flow of requests and their service times have distributions of general form. Particular cases of the system are considered. This approach can be useful for modeling systems of various purposes.
Tobore Igbe, Bolanle Ojokoh
Intelligent Information Management, Volume 08, pp 27-40;

Over the years, there has been increasing growth in academic digital libraries, and it has become overwhelming for researchers to determine important research materials. Most existing research works on scholarly paper recommendation leave out the researcher's preferences. In this paper, therefore, the Frequent Pattern (FP) Growth algorithm is employed on candidate papers generated from the researcher's preferences to create a list of papers ranked by citation features. The purpose is to provide a recommender system that is user-oriented. A walk-through algorithm is implemented to generate all possible frequent patterns from the FP-tree, after which an ordered list of recommended papers combining subjective and objective factors of the researchers is produced. Experimental results on a scholarly paper recommendation dataset show that the proposed method is very promising, as it outperforms recommendation baselines as measured by nDCG and MRR.
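For readers unfamiliar with frequent-pattern mining, a brute-force support counter illustrates what FP-Growth computes far more efficiently over its compressed FP-tree; the transactions below are hypothetical, not the paper's dataset:

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """Brute-force frequent-itemset mining (a small stand-in for
    FP-Growth): count the support of every candidate itemset and keep
    those meeting the minimum support threshold."""
    items = sorted({i for t in transactions for i in t})
    counts = Counter()
    for size in range(1, len(items) + 1):
        for cand in combinations(items, size):
            cand_set = set(cand)
            counts[cand] = sum(1 for t in transactions if cand_set <= t)
    return {k: v for k, v in counts.items() if v >= min_support}

# Toy "papers appearing together in a reading session" transactions:
sessions = [
    {"p1", "p2", "p3"},
    {"p1", "p2"},
    {"p2", "p3"},
    {"p1", "p2", "p3"},
]
freq = frequent_itemsets(sessions, min_support=3)
print(freq[("p1", "p2")])  # 3 sessions contain both p1 and p2
```

FP-Growth reaches the same answer without enumerating every candidate, which is why it scales to the large candidate-paper sets the paper works with.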
, Mary Helen Mays, Elizabeth A. Sternke
Intelligent Information Management, Volume 08, pp 9-16;

Personalized medicine is the development of “tailored” therapies that reflect traditional medical approaches while incorporating the patient’s unique genetic profile and the environmental basis of the disease. These individualized strategies encompass disease prevention and diagnosis, as well as treatment. Today’s healthcare workforce is faced with the availability of massive amounts of patient- and disease-related data. When mined effectively, these data will help produce more efficient and effective diagnoses and treatment, leading to better prognoses for patients at both the individual and population level. Designing preventive and therapeutic interventions for those patients who will benefit most, while minimizing side effects and controlling healthcare costs, requires bringing diverse data sources together in an analytic paradigm. The development and application of personalized medicine by clinicians is largely facilitated, perhaps even driven, by the analysis of “big data”. For example, clinical data warehouses are a significant resource for clinicians practicing personalized medicine. These “big data” repositories can be queried by clinicians, using specific questions, with data used to gain an understanding of challenges in patient care and treatment. Health informaticians are critical partners in data analytics, including the use of technological infrastructures and predictive data mining strategies to access data from multiple sources, assisting clinicians’ interpretation of data and development of personalized, targeted therapy recommendations.
In this paper, we look at the concept of personalized medicine, offering perspectives on four important, influencing topics: 1) the availability of “big data” and the role of biomedical informatics in personalized medicine, 2) the need for interdisciplinary teams in the development and evaluation of personalized therapeutic approaches, and 3) the impact of electronic medical record systems and clinical data warehouses on the field of personalized medicine. In closing, we present our fourth perspective, an overview of some of the ethical concerns related to personalized medicine and health equity.
Intelligent Information Management, Volume 08, pp 1-8;

The volume of information being created, generated and stored is huge. Without adequate knowledge of Information Retrieval (IR) methods, the retrieval process for information would be cumbersome and frustrating. Studies have further revealed that IR methods are essential in information centres (for example, in a Digital Library environment) for the storage and retrieval of information. With more than one billion people accessing the Internet, and millions of queries being issued daily, modern Web search engines face a problem of daunting scale: how to avoid retrieving irrelevant information while retrieving the relevant items. In this study, the existing library retrieval system was studied and its associated problems were analyzed. The concepts behind existing information retrieval models were studied, and the knowledge gained was used to design a digital library information retrieval system, which was successfully implemented using real-life data. Continuous evaluation of IR methods for an effective and efficient full-text retrieval system is recommended.
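As a minimal illustration of the vector-space retrieval idea such systems build on (a generic sketch, not the implemented system), documents can be ranked by cosine similarity between term-count vectors:

```python
import math
from collections import Counter

def cosine_rank(query, documents):
    """Rank documents by cosine similarity of raw term-count vectors
    against the query: the core of a basic vector-space IR model."""
    def vec(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    q = vec(query)
    scored = [(cosine(q, vec(d)), d) for d in documents]
    return [d for s, d in sorted(scored, reverse=True)]

docs = [
    "digital library catalogue systems",
    "information retrieval in a digital library",
    "weather report for tomorrow",
]
ranked = cosine_rank("digital library retrieval", docs)
print(ranked[0])  # the document sharing all three query terms ranks first
```

Production systems add TF-IDF weighting, stemming and inverted indexes on top of this core, but the ranking principle is the same.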
, Maytha Al-Ali, Kees Rietsema
Intelligent Information Management, Volume 07, pp 260-267;

By providing real-time updates of essential information, Airport Display Information Systems not only display and disseminate information but also help control the flow of traffic. To maximize available space, particularly in high-traffic areas, these systems should be integrated into the overall design of the airport and their positioning should be carefully planned to deliver optimal results. Airport Display Information Systems can help airports maximize space, increase customer satisfaction, and generate new revenue opportunities. The technology is designed not only to comply with environmental regulations, but also to help airports keep budgets in check. This paper discusses airport display systems, their connections and interoperability with other systems, and who the key airport users of these systems are.
, Ramasamy Panneerselvam
Intelligent Information Management, Volume 07, pp 313-338;

This paper presents four different hybrid genetic algorithms for the network design problem in a closed-loop supply chain. They are compared using a complete factorial experiment with two factors, viz. problem size and algorithm. Based on the significance of the factor “algorithm”, the best algorithm is identified using Duncan’s multiple range test and is then compared with a mathematical model in terms of total cost. The best hybrid genetic algorithm is found to give results on par with the mathematical model in statistical terms. Thus, the best of the four algorithms proposed in this paper proves superior to all the other algorithms for all problem sizes, and its performance equals that of the mathematical model for small- and medium-size problems.
Miguel Reis (Cortex Intelligence, Évora, Portugal), Ruben Silva (Cortex Intelligence, Portugal), José Saias (University of Évora, Portugal)
Intelligent Information Management, Volume 07, pp 303-312;

Business war games are strategic management exercises that bring military scenario simulation to a commercial setting, helping business managers to better understand the environment in which they operate and anticipate scenarios, such as competition movements, new product launches and production capacity planning, among others. These exercises normally take place with players organized in teams, gathered in a room, with a static package of information provided beforehand. In this paper we present an alternative, dynamic way of playing a business war game, with players geographically dispersed and information made available dynamically as it arrives from its sources. We introduce BigPicture, an analytical platform with unique features that make it an ideal “playground” for conducting more realistic business war games.
Intelligent Information Management, Volume 07, pp 283-302;

Despite extensive research, timing channels (TCs) are still known as a principal category of threats that aim to leak and transmit information by perturbing the timing or ordering of events. Existing TC detection approaches either use signature-based approaches to detect known TCs or use an anomaly-based approach, modeling the legitimate network traffic in order to detect unknown TCs. Unfortunately, in a software-defined networking (SDN) environment, most existing TC detection approaches would fail due to factors such as volatile network traffic, imprecise timekeeping mechanisms, and dynamic network topology. Furthermore, stealthy TCs can be designed to mimic the legitimate traffic pattern and thus evade anomaly-based TC detection. In this paper, we overcome the above challenges by presenting a novel framework that harnesses the advantages of elastic resources in the cloud. In particular, our framework dynamically configures SDN to enable/disable differential analysis against outbound network flows of different virtual machines (VMs). Our framework is tightly coupled with a new metric that first decomposes the timing data of network flows into a number of sub-bands using the discrete wavelet-based multi-resolution transform (DWMT). It then applies the Kullback-Leibler divergence (KLD) to measure the variance among flow pairs. The appealing feature of our approach is that, compared with existing anomaly detection approaches, it can detect most existing and some new stealthy TCs without requiring legitimate traffic for modeling, even in the presence of noise and imprecise timekeeping mechanisms in an SDN virtual environment. We implement our framework as a prototype system, OBSERVER, which can be dynamically deployed in an SDN environment. Empirical evaluation shows that our approach can efficiently detect TCs with a higher detection rate, lower latency, and negligible performance overhead compared to existing approaches.
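The KLD step can be illustrated in isolation (the DWMT decomposition is omitted here, and the inter-packet delay histograms are hypothetical):

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler divergence D(P || Q) between two discrete
    distributions; eps guards against zero-probability bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

# Histograms of inter-packet delays for three flows (hypothetical):
flow_a = [0.25, 0.25, 0.25, 0.25]  # uniform timing
flow_b = [0.25, 0.25, 0.25, 0.25]  # identical flow
flow_c = [0.70, 0.10, 0.10, 0.10]  # skewed timing (possible channel)

print(kl_divergence(flow_a, flow_b))  # 0.0: the flows look alike
print(kl_divergence(flow_a, flow_c))  # > 0: the distributions diverge
```

A pair of flows carrying the same workload should show near-zero divergence; a flow whose timing has been perturbed to encode data stands out with a large KLD, which is the signal the differential analysis looks for.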
Intelligent Information Management, Volume 07, pp 268-281;

Population aging and the consequent change in the profile of the age pyramid are already a reality the world over. One undeniable effect of this aging process is the significant increase in the number of people with Alzheimer’s disease (AD), which is the most common form of dementia, accounting for around 50% - 60% of all cases. AD tends to affect people in their 60s, becoming progressively more commonplace in older age groups. It is an incurable disease, and patients can live for many years taking medication on a daily basis. This study shows that research into AD is on the rise around the world because the pharmaceutical industry and research institutions are seeking new types of drugs to treat and even cure Alzheimer’s patients. By analyzing patent documents, we map out the potential future treatments for this disease, indicating the leading countries and drug companies that have invested most in a bid to accelerate progress towards new discoveries about the disease and the development of new drugs.
Intelligent Information Management, Volume 07, pp 223-229;

Disaster recovery (DR) and business continuity (BC) have been important areas of inquiry for both business managers and academicians. It is now widely believed that to achieve sustainable business continuity, a firm must be able to recover from both man-made and natural disasters. This is especially true for maintaining and recovering the lifeline of the organization and its data. Although the literature has discussed the importance of disaster recovery and business continuity, little is known about how Information System Data Analytics Resilience (ISDAR) relates to the organization’s ability to recover from lost information. In this research, we take a step in this direction and analyze the relationship of IS personnel expertise to ISDAR, investigating Information System (IS) personnel's understanding of the firm’s competitive priorities, IS personnel's understanding of business policies and objectives, IS personnel’s ability to solve business problems, IS personnel's initiatives in changing business processes, and their determination and attentiveness in achieving confident leadership in data and analytics resilience. We collected data through a survey of 302 IS and business managers. Our results provide evidence to support our hypothesis that there may indeed be a relationship between these variables.
Rasim Alguliyev,
Intelligent Information Management, Volume 07, pp 230-241;

This paper suggests an approach for providing dynamic federations of clouds. The approach is based on risk assessment technology and implements cloud federations without consideration of identity federations. First, important factors capable of seriously influencing the information security level of clouds are selected, and a hierarchical risk assessment architecture is proposed based on these factors. Then, each cloud provider’s risk priority vector is formed by applying the AHP methodology, and a fuzzy-logic-based risk evaluation is carried out on this vector.
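The AHP priority-vector step can be sketched as power iteration on a pairwise-comparison matrix; the judgment matrix below is hypothetical, not taken from the paper:

```python
def ahp_priority_vector(matrix, iterations=100):
    """Approximate the principal eigenvector of an AHP pairwise-
    comparison matrix by power iteration, normalized to sum to 1,
    giving the priority (weight) vector."""
    n = len(matrix)
    v = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v

# Hypothetical pairwise comparisons of three security risk factors:
# factor 0 judged 3x as important as factor 1, 5x as important as 2.
judgments = [
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 1/2.0, 1.0],
]
weights = ahp_priority_vector(judgments)
print([round(w, 2) for w in weights])  # highest weight on factor 0
```

The resulting weights order the risk factors by importance; a consistency check on the comparison matrix would normally follow before the weights are trusted.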
Emadeldeen Noureldaim, ,
Intelligent Information Management, Volume 05, pp 42-47;

In this article we propose to combine an integrated method, PCA-GMM, which generates a relatively improved segmentation outcome compared to conventional GMM, with Kalman Filtering (KF). The combined method, PCA-GMM-KF, tracks the size and position of multiple moving objects along the sequence of their images in dynamic scenes. The experimental results obtained successfully illustrate the tracking of multiple moving objects with this robust combination.
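The KF half of the pipeline reduces, in the scalar case, to a short predict/update cycle; this is a generic one-dimensional sketch with assumed noise variances, not the paper's multi-object tracker:

```python
def kalman_step(x, p, z, q=1e-3, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    x, p -- current state estimate and its variance
    z    -- new noisy measurement (e.g. an object centroid coordinate)
    q, r -- assumed process and measurement noise variances"""
    # Predict (a static motion model, for simplicity)
    p = p + q
    # Update
    k = p / (p + r)        # Kalman gain
    x = x + k * (z - x)    # blend prediction with measurement
    p = (1 - k) * p        # shrink the uncertainty
    return x, p

x, p = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 0.95, 1.0]:  # noisy measurements near 1.0
    x, p = kalman_step(x, p, z)
print(round(x, 2))  # estimate converges toward 1.0
```

In the full tracker the state is a vector (position and size per object) and the GMM segmentation supplies the measurements, but each object's filter runs this same cycle.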
Mohamed Abdelfattah
Intelligent Information Management, Volume 05, pp 35-41;

A performance dashboard is a full-fledged business information system built on a business-intelligence and data-integration infrastructure, and has been one of the hottest research topics. Many corporations are now involved in techniques related to performance dashboard architectures, and many such architectures have been put forward. This is a favorable situation for the study and application of performance dashboard techniques. However, the large number of architectures also raises problems: for a novice or a user with little knowledge of performance dashboard architectures, it is still very hard to make a reasonable choice. What differences are there between the various performance dashboard architectures, and what characteristics and advantages does each have? To answer these questions, the characteristics, architectures and applications of several popular performance dashboard architectures are analyzed and discussed in detail. From this comparison, users can better understand the different performance dashboard architectures and more reasonably choose what they want.
Intelligent Information Management, Volume 07, pp 195-222;

This paper advances new directions for cyber security using adversarial learning and conformal prediction in order to enhance network and computing services' defenses against adaptive, malicious, persistent, and tactical offensive threats. Conformal prediction is the principled and unified adaptive learning framework used to design, develop, and deploy a multi-faceted self-managing defensive shield to detect, disrupt, and deny intrusive attacks, hostile and malicious behavior, and subterfuge. Conformal prediction leverages apparent relationships between immunity and intrusion detection using non-conformity measures characteristic of affinity, atypicality, and surprise, to recognize patterns and messages as friend or foe and to respond to them accordingly. The solutions proffered throughout are built around active learning, meta-reasoning, randomness, distributed semantics and stratification, and, most important of all, adaptive Oracles. The motivation for using conformal prediction and its immediate offspring, semi-supervised learning and transduction, is that they support, first and foremost, discriminative and non-parametric methods characteristic of principled demarcation, using cohorts and sensitivity analysis to hedge on prediction outcomes (including negative selection), and that they provide credibility and confidence indices that assist meta-reasoning and information fusion.
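The core conformal-prediction mechanic, turning a non-conformity score into a p-value against a calibration set, is compact; the scores below are hypothetical:

```python
def conformal_p_value(calibration_scores, new_score):
    """Conformal p-value: the fraction of calibration non-conformity
    scores at least as extreme as the new example's score (with the
    new example counted once). A small p-value flags the example as
    non-conforming, i.e. a candidate foe/outlier."""
    ge = sum(1 for s in calibration_scores if s >= new_score)
    return (ge + 1) / (len(calibration_scores) + 1)

# Hypothetical non-conformity scores for known-benign traffic:
calibration = [0.2, 0.3, 0.1, 0.25, 0.15, 0.22, 0.18, 0.28, 0.12]

print(conformal_p_value(calibration, 0.20))  # unremarkable: large p
print(conformal_p_value(calibration, 0.95))  # anomalous: small p
```

The p-value doubles as the credibility index the abstract mentions: under exchangeability it is valid regardless of the underlying score function, which is what lets the framework hedge its friend-or-foe calls.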
Intelligent Information Management, Volume 07, pp 242-251;

A graph can contain a huge amount of data and is heavily used for pattern recognition and matching tasks such as symbol recognition, information retrieval and data mining. In all these applications, the objects or underlying data are represented in the form of graphs and graph-based matching is performed. Conventional graph matching algorithms have high complexity, because most applications involve a large number of subgraphs whose matching becomes computationally expensive. In this paper, we propose a novel graph-based algorithm for fingerprint recognition. In our work we perform graph-based clustering, which greatly reduces the computational complexity. Our algorithm exploits structural features of the fingerprint for K-means clustering of the database. The proposed algorithm is evaluated using a real-time fingerprint database, and the simulation results show that it outperforms the existing algorithm for the same task.
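The K-means step can be sketched in one dimension; the feature values below are hypothetical stand-ins for the structural features the paper extracts from fingerprints:

```python
def kmeans(points, centroids, iterations=20):
    """Minimal 1-D k-means: repeatedly assign points to the nearest
    centroid, then move each centroid to the mean of its cluster."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(range(len(centroids)),
                      key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical 1-D structural feature values from six prints:
features = [10, 11, 12, 30, 31, 32]
centroids, clusters = kmeans(features, centroids=[10, 30])
print(centroids)  # [11.0, 31.0]
```

Clustering the database this way means a query fingerprint only has to be matched against the graphs in its nearest cluster rather than the whole database, which is where the complexity reduction comes from.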
Intelligent Information Management, Volume 07, pp 153-180;

Introduction: The present work compared the prediction power of different data mining techniques used to develop an HIV testing prediction model. Four popular data mining algorithms (decision tree, Naive Bayes, neural network, logistic regression) were used to build a model that predicts whether an individual had been tested for HIV among adults in Ethiopia, using EDHS 2011. The final experimentation results indicated that the decision tree (random tree algorithm) performed best with an accuracy of 96%; the decision tree induction method (J48) came out second best with a classification accuracy of 79%, followed by the neural network (78%). Logistic regression achieved the lowest classification accuracy, 74%. Objectives: The objective of this study is to compare the prediction power of the different data mining techniques used to develop the HIV testing prediction model. Methods: The Cross-Industry Standard Process for Data Mining (CRISP-DM) was used to build the model for HIV testing and explore association rules between HIV testing and the selected attributes. Data preprocessing was performed, and missing values for categorical variables were replaced by the modal value of the variable. Different data mining techniques were used to build the predictive model. Results: The target dataset contained 30,625 study participants, of whom 16,515 (54%) were women and 14,110 (46%) were men. The age of the participants ranged from 15 to 59 years, with a modal age group of 15 - 19 years. Among the study participants, 17,719 (58%) had never been tested for HIV while the remaining 12,906 (42%) had been tested. Residence, educational level, wealth index, HIV-related stigma, knowledge related to HIV, region, age group, risky sexual behaviour attributes, knowledge about where to test for HIV, and knowledge of family planning through mass media were found to be predictors of HIV testing.
Conclusion and Recommendation: The results obtained from this research reveal that data mining is crucial in extracting relevant information for the effective utilization of HIV testing services, which has clinical, community and public health importance at all levels. It is vital to apply different data mining techniques to the same settings and compare the model performances (based on accuracy, sensitivity, and specificity) with each other. Furthermore, this study invites interested researchers to explore further applications of data mining techniques in the healthcare industry, or in related and similar settings, in the future.
, Kees Rietsema, Maytha Al-Ali
Intelligent Information Management, Volume 07, pp 130-138;

Research at the intersection of aviation and management information systems is sparse. Just as in other economic sectors, members of the aviation sector must incorporate new and existing technologies as they grow to maintain their competitive edge, whether in aircraft systems, airports, or other aerospace and aviation related industries. A proper classification is a prerequisite to systems alignment. This paper reviews landside airport information management systems, their connections and interoperability with other systems, and who the key airport users are. The information presented is based on interviews and data collection at a number of representative airports across the United States. Airport size and function are key considerations in the acquisition of an information management system, airside or landside. The implication is that not all airports are equipped in the same manner, and therefore these systems can only be considered representative of what exists “on the ground”. This paper represents a point of departure, or a reference, for researchers interested in a more in-depth study of airport information systems on the landside.
Intelligent Information Management, Volume 07, pp 139-152;

A novel approach to detect and filter out an unhealthy dataset from a matrix of datasets is developed, tested, and proved. The technique employs a new type of self-organizing map called the Accumulative Statistical Spread Map (ASSM) to establish the destructive and negative effect a dataset will have on the rest of the matrix if it stays within that matrix. The ASSM is supported by training a neural network engine, which determines which dataset is responsible for the engine's inability to learn, classify and predict. The experiments carried out proved that a neural system was not able to learn in the presence of an unhealthy dataset with deviated characteristics, even though that dataset was produced under the same conditions and through the same process as the rest of the datasets in the matrix; hence, it should be disqualified and either removed completely or transferred to another matrix. This novel approach is very useful in pattern recognition of datasets and features that do not belong to their source, and could be used as an effective tool to detect suspicious activities in many areas of secure filing, communication and data storage.
, Oscar Tamburis, Teresa Abbate, Alessandro Pepino
Intelligent Information Management, Volume 07, pp 93-106;

The management systems currently used in the Italian healthcare sector provide fragmented and incomplete information on this system and are generally unlikely to give accurate information on the performances of the healthcare processes. The present paper introduces a combined discrete event simulation (DES)/business process management (BPM) approach as innovative means to study the workflow of the activities within the Department of Laboratory Medicine of the “San Paolo” Hospital in Naples (Italy). After a first “As-Is” analysis to identify the current workflows of the system and to gather information regarding its behaviour, a following DES-based “What-If” analysis is implemented to figure out alternative work hypotheses in order to highlight possible modifications to the system’s response under varying operating conditions and improve its overall performances. The structure of the simulation program is explained and the results of the scenario analysis are discussed. The paper starts with a brief exploration of the use of DES in healthcare and ends with general observations on the subject.
, I. Eleftheriadis
Intelligent Information Management, Volume 07, pp 123-129;

Corporate net value is efficiently reflected in the stock price, offering investors a chance to add potential surplus value to the net worth of the overall investment portfolio. Financial analysis of corporations, extracted from accounting statements, is constantly in demand to support the decision making of portfolio managers. Econometrics and Artificial Intelligence methods aim to extract hidden information from complex accounting and financial data. Support Vector Machine hybrids, with components optimized by Genetic Algorithms, provide effective results in corporate financial analysis.
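A toy GA loop conveys the hybrid idea: evolve a real-valued parameter against a fitness function. Here the fitness is a stand-in with a known peak; in such a hybrid it would be, for example, the cross-validated performance of the SVM component for a given hyperparameter value:

```python
import random

def genetic_search(fitness, low, high, pop_size=20, generations=40, seed=1):
    """Toy real-valued GA: truncation selection plus Gaussian mutation,
    searching [low, high] for the parameter maximizing fitness."""
    rng = random.Random(seed)
    pop = [rng.uniform(low, high) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = [min(high, max(low, p + rng.gauss(0, 0.1 * (high - low))))
                    for p in parents]           # mutated copies
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in fitness with its peak at C = 10 (hypothetical):
best = genetic_search(lambda c: -(c - 10.0) ** 2, low=0.0, high=100.0)
print(round(best, 1))  # close to 10.0
```

Real GA-SVM hybrids evolve several components at once (e.g. kernel parameters and feature subsets), but each generation follows this same select-mutate-evaluate loop.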
Murugaiyan Pachayappan, Ramasamy Panneerselvam
Intelligent Information Management, Volume 07, pp 107-122;

This paper considers the machine-component cell formation problem of cellular manufacturing systems. Since this problem is combinatorial, the development of a meta-heuristic is a must. In this paper, a hybrid genetic algorithm is presented. Normally, in a genetic algorithm, the initial population is generated by random assignment of genes in each chromosome; here, the initial population is created using an ideal seed heuristic. The proposed algorithm is compared with four other algorithms using 28 problems from the literature. Through a complete factorial experiment, it is observed that the proposed algorithm outperforms the other algorithms in terms of both grouping efficiency and grouping efficacy.
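Grouping efficacy, one of the two comparison measures, has a standard definition: (e - e_out) / (e + e_v), where e is the number of 1s in the machine-part incidence matrix, e_out the 1s falling outside the diagonal blocks (exceptional elements), and e_v the 0s inside the blocks (voids). A small sketch on a hypothetical machine-part matrix:

```python
def grouping_efficacy(matrix, machine_cells, part_families):
    """Grouping efficacy = (e - e_out) / (e + e_v) for a 0/1
    machine-part matrix and a given cell/family assignment."""
    e = e_out = voids = 0
    for i, row in enumerate(matrix):
        for j, val in enumerate(row):
            inside = machine_cells[i] == part_families[j]
            if val == 1:
                e += 1
                if not inside:
                    e_out += 1      # exceptional element
            elif inside:
                voids += 1          # void inside a block
    return (e - e_out) / (e + voids)

# 3 machines x 4 parts, two cells (hypothetical data):
m = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
]
cells = [0, 0, 1]        # machine -> cell
families = [0, 0, 1, 1]  # part -> family
print(grouping_efficacy(m, cells, families))  # (7-1)/(7+0) = 6/7
```

A cell formation algorithm searches over `cells` and `families` to push this value toward 1, i.e. dense diagonal blocks with few exceptional elements and few voids.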