Results: 59

(searched for: doi:10.13176/11.44)
Asish Saha, Indrajit Chowdhuri, Jasem A Albanai, Saeid Janizadeh, Koursoh Ahmadi, Khaled Mohamed Khedher, Weili Duan
Published: 24 November 2021
Geocarto International pp 1-18; https://doi.org/10.1080/10106049.2021.2009921

Abstract:
The main aim of this research is to predict the impact of seasonal precipitation regimes on flood hazard by applying machine learning models. For this purpose, twelve static variables and eight rainfall dynamics variables for the 2050s (RCP 2.6 and 8.5) were used as conditioning factors. Four machine learning algorithms, K-Nearest Neighbor (KNN), Extremely Randomized Trees (ERT), Random Forest (RF), and Oblique Random Forest (ORF), were used to model flood risk. Considering the area under the curve (AUC) and other indices, ORF was the optimal model. The AUCs of KNN, ERT, RF, and ORF for the validation datasets were 0.85, 0.90, 0.89, and 0.92, respectively. The results showed that under the two RCPs, the spatial distribution of high-flood-risk areas will change in the future, and the trends will differ from the current ones. These results could provide valuable insights for simulating, predicting, and reducing future flood risk.
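The AUC figures quoted above can be read as a rank statistic: the probability that a randomly chosen flooded site is scored higher than a randomly chosen non-flooded one. A minimal sketch of that computation, with invented toy scores rather than the paper's data:

```python
def auc(scores, labels):
    """Rank-based AUC: probability a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical validation scores for two flood-susceptibility models.
labels  = [1, 1, 1, 0, 0, 0]
model_a = [0.9, 0.8, 0.25, 0.3, 0.2, 0.1]  # one positive ranked below a negative
model_b = [0.9, 0.8, 0.70, 0.3, 0.2, 0.1]  # perfect ranking
print(auc(model_a, labels), auc(model_b, labels))  # 8/9 and 1.0
```

Comparing models by AUC, as the abstract does, amounts to comparing how well each model ranks positives above negatives.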
Mateo Cámara
Published: 3 September 2020
Abstract:
Since the emergence of a new strain of coronavirus known as SARS-CoV-2, many countries around the world have reported cases of COVID-19, the disease caused by this virus. Numerous people's lives have been affected from both a health and an economic point of view. The long tradition of using mathematical models to generate insights about the transmission of a disease, as well as new computational techniques such as Artificial Intelligence, have opened the door to diverse investigations providing relevant information about the evolution of COVID-19. In this research, we seek to advance existing epidemiological models based on microscopic Markov chains to predict the impact of the pandemic at the medical and economic levels. For this purpose, we have made use of Spanish population movements based on mobile-phone geolocation information to determine economic activity using Artificial Intelligence techniques, and have developed a novel advanced epidemiological model that combines this information with medical data. With this tool, scenarios can be generated to determine which restriction policies are optimal and when they should be applied, both to limit the destruction of the economy and to avoid the feared upsurge of the disease.
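The microscopic Markov chain model itself cannot be reproduced from the abstract, but the compartmental bookkeeping such models build on can be sketched with a toy discrete-time SIR update; the rates below are invented for illustration:

```python
def sir_step(s, i, r, beta, gamma):
    """One discrete-time SIR update over population fractions."""
    new_inf = beta * s * i   # fraction newly infected this step
    new_rec = gamma * i      # fraction newly recovered this step
    return s - new_inf, i + new_inf - new_rec, r + new_rec

# Toy run: 1% initially infected, illustrative transmission/recovery rates.
s, i, r = 0.99, 0.01, 0.0
for _ in range(100):
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1)
print(s, i, r)  # fractions still sum to 1
```

The update conserves the total population by construction, which is a useful sanity check on any compartmental implementation.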
Israel Edem Agbehadji, Hongji Yang
Applied Nature-Inspired Computing: Algorithms and Case Studies pp 1-19; https://doi.org/10.1007/978-981-15-6695-0_1

The publisher has not yet granted permission to display this abstract.
Shilei Lyu, Zhiwei Wei
Journal of Physics: Conference Series, Volume 1486; https://doi.org/10.1088/1742-6596/1486/3/032008

Abstract:
The performance of RFID networks can be optimized through RFID reader scheduling. This paper proposes a novel approach using an improved bat algorithm (IBA) to optimize RFID networks. The IBA includes two improvement mechanisms. In the proposed approach, all RFID readers are scheduled to work in an appropriate sequence, which can greatly reduce RFID collisions. Experiments on two different RFID networks were carried out to evaluate its effectiveness. Simulation results show that the proposed IBA-based approach achieves better optimization precision than the control algorithms.
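The abstract does not detail the IBA's two improvement mechanisms, so as background, here is a minimal sketch of the standard bat algorithm (Yang's formulation) that such variants build on; the objective, population size, frequency range, loudness, and pulse rate below are generic choices, not the paper's settings:

```python
import random

def bat_algorithm(obj, dim=2, n_bats=20, iters=200, fmin=0.0, fmax=2.0):
    """Minimal bat algorithm sketch: frequency-tuned velocity updates plus a
    local random walk around the best solution, with greedy acceptance."""
    random.seed(1)
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    vs = [[0.0] * dim for _ in range(n_bats)]
    best = min(xs, key=obj)[:]
    loudness, pulse_rate = 0.9, 0.5
    for _ in range(iters):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * random.random()   # random frequency
            vs[i] = [v + (x - b) * f for v, x, b in zip(vs[i], xs[i], best)]
            cand = [x + v for x, v in zip(xs[i], vs[i])]
            if random.random() > pulse_rate:             # local walk near best
                cand = [b + 0.01 * random.gauss(0, 1) for b in best]
            if obj(cand) < obj(xs[i]) and random.random() < loudness:
                xs[i] = cand                             # greedy acceptance
            if obj(xs[i]) < obj(best):
                best = xs[i][:]
    return best

sphere = lambda x: sum(v * v for v in x)
best = bat_algorithm(sphere)
print(sphere(best))  # close to 0 on this toy objective
```

Reader scheduling would replace the toy sphere objective with a collision-cost function over reader on/off sequences.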
Published: 9 November 2019
Expert Systems with Applications, Volume 143; https://doi.org/10.1016/j.eswa.2019.113072

The publisher has not yet granted permission to display this abstract.
Published: 6 June 2019
Studies in Big Data pp 21-48; https://doi.org/10.1007/978-3-030-21851-5_2

The publisher has not yet granted permission to display this abstract.
Nilesh Patel, Ishwar Sethi
Published: 28 August 2018
Procedia Computer Science, Volume 126, pp 146-155; https://doi.org/10.1016/j.procs.2018.07.218

Abstract:
This paper presents a binary-coded evolutionary computational method inspired by evolution in plant genetics. It introduces the concept of artificial DNA, an abstract idea inspired by the inheritance of characteristics in plant genetics through the transmission of dominant and recessive genes and epimutation. It involves a rehabilitation process which, as in plant biology, provides a further evolving mechanism against environmental mutation for continual improvement. The effectiveness, consistency, and efficiency of the proposed optimizer have been demonstrated on a variety of complex benchmark test functions. Simulation results and the associated analysis, comparing the proposed optimizer with Self-Learning Particle Swarm Optimization (SLPSO), the Shuffled Frog Leaping Algorithm (SFLA), the Multi-Species hybrid Genetic Algorithm (MSGA), the Gravitational Search Algorithm (GSA), Group Search Optimization (GSO), Cuckoo Search (CS), the Probabilistic Bee Algorithm (PBA), and Hybrid Differential PSO (HDSO), confirm its applicability to solving complex problems. In this paper, we show effective results on thirty-variable benchmark test problems of different classes.
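The artificial-DNA operators (dominant/recessive gene transmission, epimutation, rehabilitation) are specific to the paper, but the binary-coded evolutionary baseline they extend can be sketched generically; the one-max fitness and all parameters below are illustrative assumptions, not the authors' design:

```python
import random

def binary_ga(fitness, n_bits=20, pop_size=30, gens=60, p_mut=0.05):
    """Generic binary-coded GA: tournament selection, one-point crossover,
    per-bit mutation. A baseline sketch only."""
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():  # tournament of two
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, n_bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

onemax = sum  # fitness = number of 1-bits
best = binary_ga(onemax)
print(onemax(best))
```

The paper's contribution can be thought of as replacing the plain crossover/mutation operators above with genetics-inspired ones.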
Fei Li, Juan Wang, Zuli Wang
Published: 2 March 2017
Cybernetics and Systems, Volume 48, pp 162-181; https://doi.org/10.1080/01969722.2016.1276771

Abstract:
The Internet of Things (IoT) has gained significant attention from industry as well as academia during the past decade. The main reason behind this interest is the capability of the IoT to seamlessly integrate classical networks and networked objects, allowing people to create an intelligent environment based on this powerful integration. However, how to extract useful information from data produced by the IoT and how to facilitate standard knowledge sharing among different IoT systems are still open issues. In this paper, we propose a novel approach, Experience-Oriented Smart Things (EOST), that utilizes deep learning and a knowledge representation concept called Decisional DNA to help IoT systems acquire, represent, and store knowledge, as well as share it among the various domains where it may be required to support decisions. The motivation for Decisional DNA stems from the role of deoxyribonucleic acid (DNA) in storing and sharing information and knowledge. We demonstrate our approach in a set of experiments in which the IoT systems use knowledge gained from past experience to make decisions and predictions. The initial results presented show that EOST is a very promising approach for knowledge capture, representation, sharing, and reuse in IoT systems.
Hassina Seridi, Fouad Bousetouane
Published: 1 March 2017
Knowledge-Based Systems, Volume 119, pp 166-177; https://doi.org/10.1016/j.knosys.2016.12.011

The publisher has not yet granted permission to display this abstract.
Published: 29 November 2016
by MDPI
Biosensors, Volume 6; https://doi.org/10.3390/bios6040058

Abstract:
Gait analysis using wearable wireless sensors can be an economical, convenient and effective way to provide diagnostic and clinical information for various health-related issues. In this work, our custom-designed, low-cost wireless gait analysis sensor, containing a basic inertial measurement unit (IMU), was used to collect gait data for four patients diagnosed with balance disorders and three normal subjects, each performing the Dynamic Gait Index (DGI) tests while wearing the custom wireless gait analysis sensor (WGAS). The small WGAS includes a tri-axial accelerometer integrated circuit (IC), two gyroscope ICs and a Texas Instruments (TI) MSP430 microcontroller, and is worn by each subject at the T4 position during the DGI tests. The raw gait data are wirelessly transmitted from the WGAS to a nearby PC for real-time gait data collection and analysis. To classify patients vs. normal subjects, we used several different classification algorithms, such as the back-propagation artificial neural network (BP-ANN), support vector machine (SVM), k-nearest neighbors (KNN) and binary decision trees (BDT), based on features extracted from the raw gyroscope and accelerometer data. When the range was used as the input feature, the overall classification accuracies obtained were 100% with BP-ANN, 98% with SVM, 96% with KNN and 94% with BDT. Similarly high classification accuracies were also achieved when the standard deviation or other values were used as input features to these classifiers. These results show that gait data collected from our very low-cost wearable wireless gait sensor can effectively differentiate patients with balance disorders from normal subjects in real time using various classifiers. This success may eventually lead to accurate and objective diagnosis of abnormal human gaits and their underlying etiologies as more patient data are collected.
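As a rough illustration of the range-feature KNN pipeline described above — with entirely made-up sensor traces, not the WGAS data:

```python
def range_feature(signal):
    """Range (max - min) of one sensor trace, used as the classification feature."""
    return max(signal) - min(signal)

def knn_predict(train, query, k=3):
    """Plain k-nearest-neighbours majority vote on a 1-D feature."""
    nearest = sorted(train, key=lambda t: abs(t[0] - query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Invented gyroscope traces: patients tend to show a larger sway range.
patients = [[0.1, 2.3, -1.9], [0.0, 3.1, -2.5], [0.2, 2.8, -2.2]]
normals  = [[0.1, 0.6, -0.4], [0.0, 0.5, -0.6], [0.1, 0.7, -0.5]]
train = [(range_feature(s), "patient") for s in patients] + \
        [(range_feature(s), "normal") for s in normals]
print(knn_predict(train, range_feature([0.1, 2.5, -2.0])))  # → patient
```

In the study, the same feature-then-classifier structure is applied per axis across several classifiers (BP-ANN, SVM, KNN, BDT).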
Published: 26 September 2016
by MDPI
Sustainability, Volume 8; https://doi.org/10.3390/su8100960

Abstract:
An effective green supply chain (GSC) can help an enterprise obtain more benefits and reduce costs. Therefore, developing an effective method for GSC performance evaluation is becoming increasingly important. In this study, the advantages and disadvantages of current performance evaluations and algorithms for GSC performance evaluation were discussed and evaluated. Based on these findings, an improved five-dimensional balanced scorecard was proposed in which the green performance indicators were revised to facilitate their measurement. A model based on Rough Set theory, the Genetic Algorithm, and the Levenberg-Marquardt Back Propagation (LMBP) neural network algorithm was proposed. Next, using Matlab, the Rosetta tool, and practical data from company F, a case study was conducted. The results indicate that the proposed model has a high convergence speed and an accurate prediction ability, and its credibility and effectiveness were validated. In comparison with the normal Back Propagation neural network algorithm and the LMBP neural network algorithm, the proposed model has greater credibility and effectiveness. In practice, this method provides a more suitable indicator system and algorithm for enterprises to implement GSC performance evaluations in an uncertain environment. Academically, the proposed method addresses the lack of a theoretical basis for GSC performance evaluation, thus representing a new development in GSC performance evaluation theory.
Petr Dostál
Published: 20 May 2016
Psychology and Mental Health pp 1541-1579; https://doi.org/10.4018/978-1-5225-0159-6.ch068

Abstract:
The decision-making processes in management are very complicated because they include political, social, psychological, economic, financial, and other factors. Many variables are difficult to measure; they may be characterized by imprecision, uncertainty, vagueness, semi-truth, approximation, and so forth. Soft computing methods have had successful applications in management, and nowadays new soft computing theories are used for these purposes. Applications in management have specific features in comparison with others: the processes are focused on private corporate attempts at making money or decreasing expenses, and soft computing methods help decentralized decision-making processes to be standardized, reproduced, and documented. There are various methods used in management: classical ones and methods using soft computing. Among the soft computing methods are fuzzy logic, neural networks, and evolutionary algorithms. The use of these theories is also important in the sphere of analysis and simulation. Case studies discussed in the article include, for example, how a potential customer should be approached (fuzzy logic), which kind of customer could be granted a loan or a mortgage (neural networks), the sorting of products according to customer type (genetic algorithms), and solving the travelling salesman problem (evolutionary algorithms).
Seunghwan Lee, Hankuk Academy Of Foreign Studies, Changyoon Lee, Donghee Kim, Taeseon Yoon
International Journal of Machine Learning and Computing, Volume 6, pp 155-159; https://doi.org/10.18178/ijmlc.2016.6.2.591

Meikang Qiu, Sam Adam Elnagdy
2016 IEEE 2nd International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC), and IEEE International Conference on Intelligent Data and Security (IDS) pp 197-202; https://doi.org/10.1109/bigdatasecurity-hpsc-ids.2016.66

Abstract:
With the fast development of Web-based solutions, a variety of paradigms and platforms are emerging as value creators or improvers in multiple industries. This trend has also enabled financial firms to improve their business processes and create new services. Sharing data between financial service institutions has become an option for achieving value enhancements. However, concern about privacy information leakage has also arisen, affecting both financial organizations and customers. It is important for stakeholders in financial services to be aware of proper information classifications, by which to determine which information can be shared between financial service institutions. This paper focuses on this issue and proposes a new approach that uses combined supervised learning techniques to classify information, in order to avoid releasing information that could be harmful to either financial service providers or customers. The proposed model is entitled the Supervised learning-Based Secure Information Classification (SEB-SIC) model, and is mainly supported by the proposed Decision Tree-based Risk Prediction (DTRP) algorithm. The proposed scheme is a predictive mechanism that uses historical data as the training dataset. The performance of our proposed mechanism has been assessed through experimental evaluations.
Kuan-Cheng Lin, Kai-Yuan Zhang, Yi-Hung Huang, Neil Y. Yen
Published: 27 January 2016
The Journal of Supercomputing, Volume 72, pp 3210-3221; https://doi.org/10.1007/s11227-016-1631-0

The publisher has not yet granted permission to display this abstract.
Hoim Jeong, Kyeongjun Lee, Bong-Hwan Choi, Hee-Eun Lee, Ng-Hoon Lee, Ji-Hong Ha, Kook-Il Han, YoungSeuk Cho
Published: 20 August 2015
Genes & Genomics, Volume 37, pp 969-976; https://doi.org/10.1007/s13258-015-0326-x

The publisher has not yet granted permission to display this abstract.
Genetic Programming and Evolvable Machines, Volume 16, pp 499-530; https://doi.org/10.1007/s10710-015-9243-7

The publisher has not yet granted permission to display this abstract.
R. Struharik, V. Vranjkovic, S. Dautovic, L. Novak
2014 IEEE 12th International Symposium on Intelligent Systems and Informatics (SISY) pp 257-262; https://doi.org/10.1109/sisy.2014.6923596

Abstract:
In this paper, an application of an evolutionary algorithm to oblique decision tree inference is presented. At the core of the new decision tree inducing algorithm is a specific evolutionary algorithm called HereBoy. The performance of the proposed HBDT algorithm is studied and compared with eight existing decision tree building algorithms using standard benchmark datasets from the UCI Machine Learning Repository. The results of the experimental study indicate that the proposed HBDT algorithm compares very favorably with some previously proposed decision tree building algorithms.
, Chenglin Liao, Lifang Wang
2014 IEEE Conference and Expo Transportation Electrification Asia-Pacific (ITEC Asia-Pacific) pp 1-6; https://doi.org/10.1109/itec-ap.2014.6940898

Abstract:
Battery models play a very important role in the battery management system (BMS), especially in the estimation of state of charge (SOC). Among different kinds of models, equivalent circuit models have proved more practical and flexible. In this paper, the definition of SOC and its influencing factors are first discussed; some equivalent circuit models of batteries in EVs are then reviewed; finally, possibilities for further progress in this area are presented.
Yung-Hsing Peng, Chin-Shun Hsu, Po-Chuang Huang, Yen-Dong Wu
2014 IEEE International Conference on Automation Science and Engineering (CASE) pp 716-721; https://doi.org/10.1109/coase.2014.6899407

Abstract:
To ensure the quality and quantity of yields, computational tools for monitoring and analyzing the growth of crops are of great importance in scientific agriculture. In recent years, non-destructive measurements that utilize spectroscopy for crop monitoring have drawn much attention, and algorithms for selecting proper wavelengths are worth investigating, since they have a deep impact on accuracy. In this research, an approach for utilizing wavelengths in orchid chlorophyll prediction is proposed. The newly proposed method is based on the response surface methodology (RSM), and we apply it to four wavelength selection algorithms to assess its effectiveness. The spectral data in our experiment were obtained by interactance measurement on 600 orchid plants with a hand-held spectrometer, and the actual chlorophyll content was also measured with a CCI meter for verification. Experimental results show that this new approach significantly improves the utilization of wavelengths for building the prediction model, raising R² from 88.74% to 93.95% and reducing the RMSECV from 7.5 to 6.94 for 15 wavelengths. Therefore, the proposed method is worth applying when devising wavelength selection algorithms.
Petr Dostál
Handbook of Research on Machine Learning Innovations and Trends pp 294-326; https://doi.org/10.4018/978-1-4666-4450-2.ch010

Abstract:
The decision-making processes in management are very complicated because they include political, social, psychological, economic, financial, and other factors. Many variables are difficult to measure; they may be characterized by imprecision, uncertainty, vagueness, semi-truth, approximation, and so forth. Soft computing methods have had successful applications in management, and nowadays new soft computing theories are used for these purposes. Applications in management have specific features in comparison with others: the processes are focused on private corporate attempts at making money or decreasing expenses, and soft computing methods help decentralized decision-making processes to be standardized, reproduced, and documented. There are various methods used in management: classical ones and methods using soft computing. Among the soft computing methods are fuzzy logic, neural networks, and evolutionary algorithms. The use of these theories is also important in the sphere of analysis and simulation. Case studies discussed in the article include, for example, how a potential customer should be approached (fuzzy logic), which kind of customer could be granted a loan or a mortgage (neural networks), the sorting of products according to customer type (genetic algorithms), and solving the travelling salesman problem (evolutionary algorithms).
Kaddour Sadouni
Lecture Notes in Electrical Engineering pp 203-214; https://doi.org/10.1007/978-94-007-7684-5_15

The publisher has not yet granted permission to display this abstract.
Xue Mei Fan, Shu Jun Zhang, Kevin Hapeshi, Yin Sheng Yang
Applied Mechanics and Materials, Volume 461, pp 942-958; https://doi.org/10.4028/www.scientific.net/amm.461.942

Abstract:
People have learnt from the behaviours and structures of biological systems to design and develop a number of different kinds of optimisation algorithms, which have been widely used in both theoretical study and practical applications in engineering and business management. An efficient supply chain is very important for companies to survive in the globally competitive market, and effective SCM (supply chain management) is the key to implementing an efficient supply chain. Though there has been a considerable amount of study of SCM, there have been very limited publications applying findings from the study of biological systems to SCM. In this paper, through a systematic literature review, various SCM issues and requirements are discussed, and some typical biological system behaviours and nature-inspired algorithms are evaluated for the purposes of SCM. The principle of, and possibilities for, learning from biological systems' behaviours and nature-inspired algorithms for SCM are then presented, and a framework is proposed as a guideline for users applying knowledge learnt from biological systems to SCM. Within the framework, a number of procedures are presented for using XML to represent both SCM requirements and bio-inspiration data. To demonstrate the proposed framework, a case study is presented in which users find bio-inspirations for particular SCM problems in the automotive industry.
Shi Lei Lu, Shun Zheng Yu
Applied Mechanics and Materials, Volume 427-429, pp 600-605; https://doi.org/10.4028/www.scientific.net/amm.427-429.600

Abstract:
Optimization of network scheduling is a significant way to improve the performance of radio frequency identification (RFID) networks. This paper proposes an improved particle swarm optimization (PSO) algorithm that uses an animal foraging strategy to maintain high swarm diversity, protecting the swarm from premature convergence. The proposed algorithm is used to optimize network performance by determining the optimal work status of the readers. It has been tested on two different RFID network topologies to evaluate its effectiveness. The simulation results reveal that the proposed algorithm outperforms the other algorithms in terms of optimization precision.
Suman Khatwani, Arti Arya
2013 International Conference on Computer Communication and Informatics pp 1-8; https://doi.org/10.1109/iccci.2013.6466309

Abstract:
In order to improve the overall performance of an institution, individual performances must be examined. It is therefore useful for educational institutions to analyze learners' performances, identify areas of weakness, and guide their students to a better future. In this paper, an algorithm is proposed for predicting a learner's performance using decision trees and a genetic algorithm. The ID3 algorithm is used to create multiple decision trees, each of which predicts the performance of a student based on a different feature set. Since each decision tree provides insight into the probable performance of each student, and different trees give different results, we are able not only to predict performance but also to identify the areas or features responsible for the predicted result. For higher accuracy, a genetic algorithm is also incorporated. The genetic algorithm is applied to the n-ary trees by calculating the fitness of each tree and applying crossover operations to obtain multiple generations, each contributing trees with better fitness as the generations increase, finally resulting in the decision tree with the best accuracy. The results obtained are quite encouraging.
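ID3 chooses, at each node, the attribute with the highest information gain. A minimal sketch of that split criterion, with an invented toy student dataset (not the paper's data):

```python
import math

def entropy(labels):
    """Shannon entropy of a label list (ID3's impurity measure)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def info_gain(rows, labels, attr):
    """Information gain of splitting on column `attr` (ID3's split criterion)."""
    gain = entropy(labels)
    for value in set(r[attr] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[attr] == value]
        gain -= len(idx) / len(rows) * entropy([labels[i] for i in idx])
    return gain

# Toy records: (attendance, assignment status) → pass/fail.
rows = [("high", "done"), ("high", "late"), ("low", "done"), ("low", "late")]
labels = ["pass", "pass", "fail", "fail"]
print(info_gain(rows, labels, 0), info_gain(rows, labels, 1))
```

Here attendance (column 0) separates the labels perfectly (gain 1.0) while assignment status (column 1) carries no information (gain 0.0), so ID3 would split on attendance first.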
Kazunori Matsumoto
Lecture Notes in Electrical Engineering, Volume 156, pp 143-153; https://doi.org/10.1007/978-3-642-28807-4_21

The publisher has not yet granted permission to display this abstract.
Matej Šprogar, Sandi Pohorec
WIREs Data Mining and Knowledge Discovery, Volume 3, pp 63-82; https://doi.org/10.1002/widm.1079

The publisher has not yet granted permission to display this abstract.
Rainer Knauf, Yoshitaka Sakurai, Kouhei Takada, Setsuo Tsuruta
2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC) pp 3051-3056; https://doi.org/10.1109/icsmc.2012.6378259

Abstract:
In former work, the authors developed a modeling system for university learning processes, which aims at evaluating and refining university curricula to reach an optimum of learning success in terms of the best possible grade point average (GPA). This is performed by applying an Educational Data Mining (EDM) technology to former students' curricula and their degrees of success (GPA), thus uncovering golden didactic knowledge for successful education. We used learner profiles to personalize this technology. After a short introduction to this technology, we discuss the results of a practical application and draw conclusions. In particular, we could not obtain sufficient data to establish this kind of learner profile. Therefore, we shifted from an "eager" strategy of holding an explicit model towards a "lazy" strategy of mining the data that is actually available, without making "guesses" about what it means (profiles). In particular, we utilize the students' educational histories and vocational ambitions for student modeling.
Emérita S. Opaleye, Lucas Neiva-Silva, Ana R. Noto
Published: 1 July 2012
by SciELO
Cadernos de Saúde Pública, Volume 28, pp 1371-1380; https://doi.org/10.1590/s0102-311x2012000700015

Abstract:
The aim of this study was to investigate factors associated with frequent and heavy drug use among street children and adolescents aged 10 to 18 years. A sample of 2,807 street children and adolescents from the 27 Brazilian state capitals was analyzed. A World Health Organization questionnaire for non-students was adapted for use in Brazil. Data analysis was performed using logistic regression and decision tree models. Factors inversely associated with frequent and heavy drug use were: being aged nine to 11 years (OR = 0.1); school attendance (OR = 0.3); daily time (one to five hours) spent on the streets (OR = 0.3 and 0.4); not sleeping on the streets (OR = 0.4); being on the streets for less than one year (OR = 0.4); maintenance of some family bonds (OR = 0.5); presence of a family member on the streets (OR = 0.6); not suffering domestic violence (OR = 0.6); and being female (OR = 0.8). All of these variables were significant at the p < 0.05 level. The findings suggest that being younger, having family bonds, and engagement in school are important protective factors against drug use in this population and should be considered in the formulation of public policies.
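The reported ORs come from logistic regression, but the basic reading of an odds ratio can be illustrated from a 2x2 table; the counts below are invented for illustration and are not from the study:

```python
def odds_ratio(exp_cases, exp_controls, unexp_cases, unexp_controls):
    """Odds ratio from a 2x2 table; values below 1 indicate a protective factor."""
    return (exp_cases / exp_controls) / (unexp_cases / unexp_controls)

# Hypothetical counts: school attendance vs. frequent/heavy drug use.
or_school = odds_ratio(30, 300, 100, 300)
print(round(or_school, 1))  # ≈ 0.3: attendance associated with lower odds of use
```

An OR of 0.3, as reported for school attendance, means the odds of frequent/heavy use among attendees are roughly a third of the odds among non-attendees.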
Sandi Pohorec, Vili Podgorelec
WIREs Data Mining and Knowledge Discovery, Volume 2, pp 237-254; https://doi.org/10.1002/widm.1056

The publisher has not yet granted permission to display this abstract.
Sugimura Hiroshi, Matsumoto Kazunori
2012 IEEE Symposium on Computers & Informatics (ISCI) pp 28-33; https://doi.org/10.1109/isci.2012.6222662

Abstract:
This paper proposes a system that mines time series classification knowledge, led by the discovery of feature patterns. For classification, prediction accuracy is an important point, and building a human-understandable model is another essential issue. To satisfy these requirements, our system runs in two stages. In the first stage, the system discovers important feature patterns that are useful for identifying data; for this purpose, we propose a feature importance measure called FI. The second stage builds a decision tree that determines class membership based on the feature patterns. We explain how these two stages are harmonized in the entire process.
Hiroshi Sugimura, Kazunori Matsumoto
2011 IEEE International Conference on Systems, Man, and Cybernetics pp 1340-1345; https://doi.org/10.1109/icsmc.2011.6083844

Abstract:
This paper proposes a system that acquires feature patterns and builds classifiers for time series data without using background knowledge given by a user. Time series data appear widely in finance, medical research, industrial sensors, etc. The system acquires the feature patterns that characterize similar data in a database. We focus on two aspects of a feature pattern: global and local frequency. Our purpose is to acquire the features of each data item by extracting these patterns. The system cuts subsequences out of the time series data, and several representative sequences are extracted from these subsequences by clustering. Feature patterns are acquired from these representative sequences; for this purpose, we develop a method that applies the TF*IDF weighting technique, often used in text mining, to time series data. The time series data are then classified using the acquired feature patterns. In accordance with an entropy-based criterion, the feature patterns are improved automatically, generation by generation, using a genetic algorithm. Using the final, optimized feature patterns, we build a decision tree that determines future behaviors. We explain how these two tools are applied in combination in the entire knowledge discovery process.
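The TF*IDF weighting idea transfers directly once each time series is treated as a "document" and each extracted pattern as a "term"; a generic sketch of this transfer (the pattern counts below are invented):

```python
import math

def tfidf(counts_per_series, pattern):
    """TF*IDF weight of a feature pattern in each series: raw count times
    log(N / document frequency), treating each series as a document."""
    n_docs = len(counts_per_series)
    df = sum(1 for counts in counts_per_series if counts.get(pattern, 0) > 0)
    idf = math.log(n_docs / df)
    return [counts.get(pattern, 0) * idf for counts in counts_per_series]

# Hypothetical pattern counts in three series: "spike" is globally common,
# "dip" occurs only in the first series.
series = [{"spike": 4, "dip": 1}, {"spike": 3}, {"spike": 5}]
print(tfidf(series, "spike"), tfidf(series, "dip"))
```

A pattern present in every series gets zero weight, while a pattern concentrated in a few series gets a high weight — exactly the global-vs-local frequency distinction the abstract emphasizes.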
N. Manwani, P. S. Sastry
IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), Volume 42, pp 181-192; https://doi.org/10.1109/tsmcb.2011.2163392

Abstract:
In this paper, we present a new algorithm for learning oblique decision trees. Most current decision tree algorithms rely on impurity measures to assess the goodness of hyperplanes at each node while learning a decision tree in a top-down fashion. These impurity measures do not properly capture the geometric structures in the data. Motivated by this, our algorithm assesses hyperplanes in such a way that the geometric structure in the data is taken into account. At each node of the decision tree, we find the clustering hyperplanes for both classes and use their angle bisectors as the split rule at that node. We show through empirical studies that this idea leads to small decision trees and better performance. We also present some analysis showing that the angle bisectors of the clustering hyperplanes used as split rules at each node are solutions of an interesting optimization problem, and hence argue that this is a principled method of learning a decision tree.
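The angle bisectors of two hyperplanes are a textbook construction: normalize each hyperplane's coefficient vector and take the sum and difference. A small sketch of the split-rule geometry only, not the authors' full tree-learning algorithm:

```python
import math

def angle_bisectors(w1, b1, w2, b2):
    """Return the coefficient vectors [w..., b] of the two angle bisectors of
    hyperplanes w1·x + b1 = 0 and w2·x + b2 = 0."""
    n1 = math.sqrt(sum(v * v for v in w1))
    n2 = math.sqrt(sum(v * v for v in w2))
    u1 = [v / n1 for v in w1] + [b1 / n1]   # normalized [w1, b1]
    u2 = [v / n2 for v in w2] + [b2 / n2]   # normalized [w2, b2]
    return ([a + b for a, b in zip(u1, u2)],
            [a - b for a, b in zip(u1, u2)])

# The bisectors of the lines x = 0 and y = 0 are the lines y = -x and y = x.
bis_plus, bis_minus = angle_bisectors([1, 0], 0, [0, 1], 0)
print(bis_plus, bis_minus)  # [1.0, 1.0, 0.0] and [1.0, -1.0, 0.0]
```

In the paper, w1 and w2 are the clustering hyperplanes fitted to the two classes at a node, and one of the two bisectors is chosen as the oblique split.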