Evaluation of Distribution System Reliability Indices Using Fuzzy Reasoning Approach
Published: 1 May 2021
European Journal of Electrical Engineering and Computer Science, Volume 5, pp 1-8; doi:10.24018/ejece.2021.5.3.264
The most fundamental problems in the distribution system are the quality and the continuity of the power supply. Political and economic changes were accompanied by changes in the structure of the electric load in the distribution network. Lack of investment and the aging of the distribution company's assets were accompanied by a decrease in the reliability of the distribution system. Identifying and classifying assets from the point of view of their maintenance and replacement is one of the problems posed to engineers. Fuzzy logic can be successfully used to evaluate distribution system reliability indices. In this paper, fuzzy logic is used to evaluate the distribution system reliability indices of lines and transformers using six input variables. The variables considered most important are: Age, Operation, Maintenance, Electrical current loading, Exposure, and Weather conditions (Wind or Temperature). The knowledge-based IF-THEN fuzzy inference rules are developed using MATLAB fuzzy software. A detailed analysis of the fuzzy system surfaces shows that the factors taken into consideration are dynamically and accurately connected to each other. The constructed rules, based on engineering experience, accurately represent the reliability indices.
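A minimal sketch of how such a fuzzy evaluation could work, using only two of the six inputs (Age and Electrical current loading); the membership functions, rules, and index values below are illustrative assumptions, not the paper's MATLAB rule base:

```python
# Illustrative two-input Mamdani-style evaluation; all ranges, rules,
# and output values are hypothetical assumptions for this sketch.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def reliability_index(age_years, loading_pct):
    # Fuzzification of the two example inputs (ranges are illustrative).
    age_old = tri(age_years, 15, 30, 45)
    load_high = tri(loading_pct, 60, 90, 120)
    age_new = tri(age_years, -1, 0, 20)
    load_low = tri(loading_pct, 0, 30, 70)

    # Two example IF-THEN rules, combined with min acting as fuzzy AND:
    # R1: IF age is old AND loading is high THEN failure index is high (1.0)
    # R2: IF age is new AND loading is low  THEN failure index is low  (0.1)
    w1 = min(age_old, load_high)
    w2 = min(age_new, load_low)

    # Weighted-average (Sugeno-style) defuzzification for compactness.
    if w1 + w2 == 0:
        return 0.5  # no rule fires: fall back to a neutral index
    return (w1 * 1.0 + w2 * 0.1) / (w1 + w2)

print(round(reliability_index(35, 95), 3))  # → 1.0 (old, heavily loaded)
print(round(reliability_index(5, 20), 3))   # → 0.1 (new, lightly loaded)
```

A full system would add the remaining four inputs and defuzzify over an output membership surface; the weighted-average step here is the simpler Sugeno-style shortcut.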
Discrete Dynamics in Nature and Society, Volume 2017, pp 1-9; doi:10.1155/2017/9470943
The delivery time of an order has become an important factor for customers in evaluating logistics services. Due to the diverse and large quantities of orders in the context of electronic commerce, how to improve the flexibility of a distribution hub and reduce the waiting time of customers has become one of the most challenging questions for logistics companies. With this in mind, this paper proposes a new method of flexibility assessment in distribution hubs by introducing cost weighted time (CWT). The advantages of the supply hub operation mode in delivery flexibility are verified by the approach: the mode has pooling effects and uniform distribution characteristics, and these traits can reduce overlapping delivery time to improve flexibility in the case of two suppliers. Numerical examples show that the supply hub operation mode is more flexible than the decentralized distribution operation mode in multidelivery cycles.

1. Introduction
Today, as the development of electronic commerce drives the increase of consumer demand, improving the flexibility of distribution hubs to reduce the order-to-delivery time has become the primary challenge for distribution centers . Due to the high time and financial costs, order delivery is considered an important activity of logistics distribution centers. Some studies have shown that delivery time accounts for more than half of logistics service time . The distribution hub is closely related to the efficiency of order delivery, so it has been a hot research topic in logistics research for the last several decades, and scholars have used a variety of methods in their attempts to study distribution hubs, including logistics hub location, optimization of logistics networks, region partitioning, and supply chain performance evaluation of logistics hubs. (i) Logistics hub location: Ebrahimi-zade et al.  used the covering radius as a decision variable to build a new mixed-integer model for the hub covering problem of a multiperiod single distribution hub.
Mohammadi et al.  proposed a mixed-integer programming model to reduce failure cost for the disruption problem; a new hybrid metaheuristic algorithm was designed to solve it and demonstrate the reliability of the model. Correia et al.  put forward a modeling framework to minimize the total expected transportation costs between logistics hubs for determining the best multiple-allocation hub locations. Some scholars even used geographical level and family choice to construct structuring criteria for sequential or simultaneous assessment of hub locations . (ii) Optimization of logistics networks: Ghodratnama et al.  proposed a single distribution hub location model with the objective of total transportation and installation costs, also considering the effects of different modes of transport on greenhouse gas emissions. Yang et al.  used a new fuzzy random simulation technique and the multistart simulated annealing algorithm to optimize the intermodal hub-and-spoke model to obtain the minimum mixed traffic cost and travel time. (iii) Region partitioning: the purpose is to divide the hubs in a certain area into several zones according to some rules. According to priority, the multistage logistics distribution network is divided into a two-echelon logistics distribution region . Anwar et al.  used spectral-theory-based graph partitioning to divide the density peak graph and obtain different subnetworks, and results show that the proposed method is superior to the existing approach based on normalization. Franceschetti et al.  divided the distribution hub network by customer density to obtain the best combination of distribution hub partitions and number of vehicles. (iv) Supply chain performance evaluation of logistics hubs: Bichou and Gray  proposed an approach that evaluates hub efficiency by directing port strategy towards relevant value-added logistics activities.
Then Bichou  designed an integrative framework, including traders affiliated with terminals as an integrative benchmark, to measure logistics hub performance by conceptualising the hub from a logistics and supply chain management (SCM) approach. However, most logistics centers usually measure output and resource utilization, while the use of comprehensive performance indicators is rare. Some scholars have made breakthroughs: Barros and Dieke  enhanced the performance of the least efficient airports by using data envelopment analysis (DEA). Cheng and Wang  constructed a platform that can be used to assess the integrated supply chain activities of enterprises, infrastructure, and institutional stakeholders in a logistics hub. To improve the efficiency of order distribution, some researchers have studied the performance of logistics activities from the perspective of flexibility. Logistics flexibility is defined as the ability to reasonably use a variety of resources in storage, transportation, distribution, and other aspects to improve the speed of response to customer demand [16–19]. In particular, Nigel  and Upton  did pioneering work in the study of flexibility. The former defined flexibility along three dimensions (range, time, and cost): (1) range, the states or behaviours the system can adopt; (2) time, the time consumed in transitioning between states; (3) cost, the cost of transitioning between states. Building on that study , Upton  proposed the uniformity of flexibility, which keeps the performance of the whole system consistent. Currently, research on logistics flexibility is mainly focused on transportation planning and management, facilities management, and other logistics activities related to information processing, material transportation, inventory management, reverse logistics, tracking, and delivery . In order to improve manufacturing flexibility, Mansoornejad et al.  considered product line configuration in the integration of the supply chain.
Papageorgiou  proposed an optimized supply chain model considering the effect of uncertainty on system flexibility. Even the two-stage stochastic programming method has been used to improve the flexibility of logistics systems [25, 26]. Nevertheless, the existing research has some limitations: (i) most studies defined the concept and conditions of delivery flexibility or used delivery flexibility as one of the assessment indicators for solving other problems. For example, the impact of the manufacturer's promise strategy on delivery flexibility was discussed in . Li et al.  designed a pricing strategy to maintain high delivery flexibility. Chen  built a path-planning model to solve the fuzzy flexible delivery and pickup problem. However, few studies have addressed the question: how can delivery flexibility be determined under a known distribution center operation mode? (ii) Some previous works have considered weights in supply chain management; Li et al.  constructed a TOPSIS model based on entropy weight, considering the short-term target, the design concept of the strategic target, and the characteristics of the synchronous supply chain. Kocaoğlu et al.  designed a TOPSIS evaluation model based on strategic objectives and business indicators and used the analytic hierarchy process (AHP) to determine the weights of indicators. Ha et al.  focused on interdependencies among logistics hub performance measures and built a model to quantify the supply chain performance of a hub by introducing weights of interdependent measures. In , "slack time" was used to evaluate the flexibility of an enterprise's supply chain. However, these works constructed the indicators of flexibility evaluation taking into account only company strategy, the external environment, competitors' reactions, and so on.
Compared with these macrofactors, microfactors such as delivery cost and delivery time interval have been neglected, making these approaches flawed for assessing the flexibility of the distribution center . Therefore, in order to close this research gap, this paper proposes a new method of flexibility assessment in distribution hubs considering cost weighted time. The cost weighted time introduces the order's delivery-cost factor into slack time, measuring the effect of delivery cost on delivery time more accurately. In multidelivery cycles, the numerical examples show that the supply hub is more flexible than the decentralized distribution operation mode. The rest of the paper is structured as follows. Section 2 introduces the proposed new method. Section 3 presents our numerical example and a discussion of results. The conclusions are presented in Section 4.

2. The Method of Flexibility Assessment in Distribution Hub
2.1. Problem Statement
A supply hub, also known as a supplier hub or vendor-managed inventory (VMI) hub, refers to a logistics distribution center located near the manufacturer and used to store the raw materials of suppliers . Unlike the decentralized distribution mode, the supply hub changes the relationship between manufacturers and suppliers from one-to-many to one-to-one, simplifying the operation processes . Since the utilization of supply hubs has already achieved good results in enterprise management practice [35–37], we validate the effectiveness of the new approach by comparing and analyzing the flexibility performance of the decentralized distribution mode (Mode 1) and the supply hub operation mode (Mode 2). In order to facilitate the study, we consider a simple supply chain structure of 1 manufacturer and 2 suppliers. Each distribution center of Mode 1 delivers only one raw material, as shown in Figure 1.
The hub of Mode 2 concentrates two kinds of raw materials for management and distribution, as shown in Figure 2.
Figure 1: The decentralized distribution mode.
Figure 2: The supply hub operation mode.
In Mode 1, two suppliers deliver raw materials to the manufacturer through distribution center 1 and distribution center 2. The manufacture
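A cost-weighted-time style measure can be sketched as follows; since the excerpt does not give the paper's exact CWT formula, the weighting scheme and all numbers below are purely illustrative assumptions:

```python
# Hypothetical sketch: each order's slack time (due time minus actual
# delivery time) is weighted by its share of total delivery cost, so
# costly orders influence the aggregate measure more. This is NOT the
# paper's CWT definition, which is not given in this excerpt.

def cost_weighted_time(slack_times, costs):
    """Cost-weighted average slack time over a set of orders."""
    total_cost = sum(costs)
    return sum(s * c for s, c in zip(slack_times, costs)) / total_cost

# Illustrative data: three orders with different slack and delivery cost.
slack = [2.0, 0.5, 1.5]    # hours of slack per order
cost = [10.0, 30.0, 20.0]  # delivery cost per order
print(cost_weighted_time(slack, cost))
```

Under a pooling interpretation, a supply hub would draw all weights from one consolidated cost pool across suppliers, tending to smooth extreme slack values relative to two separate decentralized pools.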
Applied Soft Computing, Volume 54, pp 108-120; doi:10.1016/j.asoc.2017.01.020
Journal of Sensors, Volume 2016, pp 1-28; doi:10.1155/2016/9350928
Event detection in realistic WSN environments is a critical research domain, and environmental monitoring comprises one of its most pronounced applications. Although efforts related to environmental applications have been presented in the current literature, there is a significant lack of investigation into the performance of such systems when applied in wireless environments. Aiming to address this shortage, this paper follows an advanced multimodal approach based on fuzzy logic. The proposed fuzzy inference system (FIS) is implemented on TelosB motes and evaluates the probability of fire detection while aiming towards power conservation. In addition to a straightforward centralized approach, a distributed implementation of the above FIS is also proposed, aiming to reduce network congestion while optimally distributing the energy consumption among network nodes so as to maximize network lifetime. Moreover, this work proposes an event-based execution of the aforementioned FIS, aiming to further reduce the computational as well as the communication cost compared to a periodic, time-triggered FIS execution. As a final contribution, performance metrics acquired from all the proposed FIS implementation techniques are thoroughly compared and analyzed with respect to critical network conditions, aiming to offer a realistic evaluation and thus the extraction of objective conclusions.

1. Introduction
In recent years, wireless sensor networks (WSNs) have emerged as a promising research field and have been applied to a wide variety of application domains, including industrial control, environmental monitoring, and healthcare applications. The primary objective in such WSN applications is the accurate and reliable monitoring of an environment, based on the processing of multiple and diverse sensor values and the identification of irregular situations or dynamic real-life events.
The collaborative tasks lead to specific action scenarios, so as to control the monitored environment. The process of observing a real phenomenon and evaluating its behaviour in WSNs is known as event detection . With respect to their dependency upon the input signals, real events are distinguished into two categories: single-modality and multimodality events . The former concerns the examination of the monitored values of each parameter independently, based on the assumption that if any of them exceeds a specific "normal" range, an event occurs . The latter category includes multimodal events, which are based on the correlation of several attributes, the processing of which evaluates the occurrence of an event . Critical challenges of event detection algorithms in WSNs include energy saving, data integrity, and an in-depth understanding of the monitored environment. In order to meet such objectives, the development of a classification model is essential for the accurate identification of an event, along with the reduction of the communication as well as the processing overhead. The classification of an event can be defined as the process of evaluating an event of interest using multiple sensor nodes (a multimodal event). Such processing may vary from a trivial rule engine to a complex machine learning algorithm, while the final outcome of this process triggers specific action scenarios. Taking typical WSN characteristics into consideration, the classification of an event is strongly affected by the quality and characteristics of the communication channel between the monitoring and actuation units that optimize the monitoring and control of the environment. Consider an application based on single-modality events: when a sensor value exceeds an upper/lower threshold (e.g., temperature in an environmental monitoring application), an event generation is indicated (e.g., a fire alarm).
However, in many cases, such decisions may lead to false alarms, since most real-life events depend on multiple monitored parameters in a correlated manner. For example, in the fire alarm scenario, an accurate decision should take into consideration the existence of smoke and the luminosity level along with the temperature value. Alarm situations trigger specific reactions and, thus, node-to-node communication (actor-to-actor coordination schemes) . Therefore, false alarms will lead to degraded application performance, as well as to increased network traffic and energy consumption. Hence the need arises for more sophisticated multimodal classification processes that maximize the application's accuracy while mitigating resource wastage among the actuation units. Additionally, taking into consideration that the communication modules are the most energy-consuming components of a sensor node, the lifetime of the node and the network's robustness are anticipated to benefit accordingly. In that respect, the utilization of classification algorithms  (i.e., fuzzy inference systems, FIS) in WSNs for the identification of complex events is crucial to achieving the aforementioned objectives. Towards this goal, existing data mining techniques can offer several algorithmic solutions  to this field. However, conventional implementations require high processing capabilities and abundant memory in order to meet specific execution time restrictions. Such assumptions contradict typical WSN characteristics, where the sensor nodes suffer from limited processing power and available memory. These characteristics, in combination with the error-prone nature of wireless communications, highlight the challenge of designing distributed, highly efficient, yet low-complexity and low-resource-demanding data mining algorithms for WSNs.
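The contrast between a single-modality threshold check and a multimodal fuzzy-style decision can be sketched as follows; the membership ramps, sensor ranges, and combination rule are illustrative assumptions, not the paper's TelosB FIS:

```python
# Illustrative contrast: a single threshold vs. a multimodal fuzzy score.
# All thresholds, ramp ranges, and the combination rule are assumptions.

def single_modality_alarm(temp_c, threshold=60.0):
    """Naive single-modality rule: alarm whenever temperature is high."""
    return temp_c > threshold

def ramp(x, lo, hi):
    """Degree in [0, 1] rising linearly from lo to hi."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def multimodal_fire_probability(temp_c, smoke_ppm, luminosity):
    hot = ramp(temp_c, 40.0, 80.0)
    smoky = ramp(smoke_ppm, 50.0, 300.0)
    bright = ramp(luminosity, 500.0, 2000.0)
    # Fire requires correlated evidence: min acts as fuzzy AND of
    # temperature with smoke, or of temperature with luminosity.
    return max(min(hot, smoky), min(hot, bright))

# A hot but smoke-free, dark room: the threshold alone raises a false
# alarm, while the multimodal score stays at zero.
print(single_modality_alarm(65.0))                      # → True
print(multimodal_fire_probability(65.0, 10.0, 100.0))   # → 0.0
```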
Furthermore, in realistic WSN applications, the distributed implementation of such computationally intensive algorithms can be highly beneficial for balancing CPU load among several nodes. Specifically, the distributed implementation of an algorithm increases on-site processing and can potentially reduce the number of data packet transmissions, leading to bandwidth conservation, relaxed network data transfer, and reduced energy consumption. Another critical aspect drastically affecting FIS algorithm design for WSN applications concerns the effect of network characteristics on the algorithm's performance. An indicative example is the case where the outcome of the classification process is extracted using invalid inputs. This can be caused by several factors leading to input values not being up to date because of unpredictable, transient, problematic network conditions. Indicatively, such conditions include network congestion, resulting in increased packet loss and delay, or node mobility leading to network disconnection. Traditional data mining approaches do not consider these problems. Specifically, the input data are assumed to be always valid (i.e., in time), while the execution delay is assumed negligible due to abundant processing resources. Driven by such observations, distinguishing WSNs from other traditional network areas is essential. A valuable contribution of this work, compared to related ones, is a comprehensive study of the effect of such conditions in the context of realistic WSN environments.
Towards this objective, this paper also presents a framework enabling the application of existing data mining algorithms in such cases, accounting for the distributed processing power, the communication cost, and the algorithm's sensitivity to invalid input data. In this work, we study the proposed fuzzy logic system in an environmental scenario by simulating a realistic WSN infrastructure characterized by significant communication challenges. In our previous work , a centralized implementation of a healthcare FIS was presented, where a TelosB mote was considered the cluster head, responsible for the reception of all the generated packets and the FIS execution. In , the respective distributed implementation was presented. The evaluation results of both efforts proved the sensitivity of the system's performance to networking conditions and the cluster head's overloading. Driven by these observations, the main goal and contribution of this paper are the proposal of novel and efficient approaches to implementing data mining algorithms specifically targeting WSN applications, together with a comprehensive performance evaluation. Towards this objective, this work investigates the FIS's performance under three different scenarios: centralized and distributed time-triggered as well as centralized event-triggered execution. The respective evaluation highlights, in a quantifiable way, that the centralized approach burdens the CPU utilization of the FCHN, because all data flows are directed to it. Driven by these observations, as well as the scarce CPU and wireless bandwidth availability in WSNs, we propose a distributed approach that partitions the execution of the FIS among the nodes. In this way we aim to optimally balance the energy consumption between the nodes. Moreover, the respective measurements indicate a significant diffusion of network traffic, avoiding all-to-one communication scenarios.
Moreover, an exploration is conducted of the way wireless channel conditions and the packet arrival rate affect the overall performance of the FIS and the final outcome of the event detection system. Although the distributed time-triggered approach is anticipated to balance the energy consumption, the evaluation revealed additional aspects and interdependencies needing in-depth investigation. Typically, the FIS algorithm is executed periodically, based on a time-triggered approach. In cases where abnormal events occur rarely, this periodic execution handles CPU resources and network bandwidth in a nonoptimal way. For this reason, our work proposes an eve
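The event-triggered idea, running the FIS only when some input has changed significantly rather than on every sampling period, might be sketched as follows; the per-sensor change thresholds and the black-box evaluate function are assumptions for illustration:

```python
# Illustrative event-triggered gate in front of an arbitrary FIS:
# the FIS runs only when an input deviates beyond a per-sensor delta
# from its value at the last evaluation. Deltas and the evaluate()
# callable are hypothetical, not the paper's implementation.

class EventTriggeredFIS:
    def __init__(self, deltas, evaluate):
        self.deltas = deltas        # per-input change thresholds
        self.evaluate = evaluate    # the actual FIS, treated as a black box
        self.last_inputs = None
        self.runs = 0               # count of FIS executions (for comparison)

    def step(self, inputs):
        if self.last_inputs is not None and all(
            abs(x - y) <= d
            for x, y, d in zip(inputs, self.last_inputs, self.deltas)
        ):
            return None             # no significant change: skip execution
        self.last_inputs = list(inputs)
        self.runs += 1
        return self.evaluate(inputs)

# Four (temperature, smoke) samples; only the first and last differ enough
# from the previously evaluated inputs, so the FIS executes just twice.
fis = EventTriggeredFIS(deltas=[2.0, 20.0], evaluate=lambda xs: max(xs))
samples = [(25.0, 100.0), (25.5, 105.0), (26.0, 110.0), (40.0, 400.0)]
results = [fis.step(s) for s in samples]
print(fis.runs)  # → 2
```

A time-triggered baseline would instead call `evaluate` on all four samples, which is the computational and communication overhead the event-based execution aims to avoid.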
Advances in Fuzzy Systems, Volume 2016, pp 1-9; doi:10.1155/2016/4612086
The applications and contributions of fuzzy set theory to human reliability analysis (HRA) are reassessed. The main contribution of fuzzy mathematics relies on its ability to represent vague information. Many HRA authors have made contributions by developing new models and introducing fuzzy quantification methodologies. Conversely, others have drawn on fuzzy techniques or methodologies for quantifying already existing models. Fuzzy contributions improve HRA in five main aspects: (1) uncertainty treatment, (2) expert judgment data treatment, (3) fuzzy fault trees, (4) performance shaping factors, and (5) human behaviour models. Finally, recent fuzzy applications and new trends in fuzzy HRA are discussed herein.

1. Introduction
The term "Human Reliability Assessment" (HRA), also human reliability evaluation or analysis, was first introduced in 1962 by Munger et al.  and can be defined as "the probability that a task or job is successfully completed by an individual in a specific state of operation of the system in a minimum required time (if there are time requirements)" . In the negative sense, "human error" is defined as "the failure probability to execute a given task (or execution of a prohibited task), which may cause equipment damage or disrupt the sequence of operations" . Almost all HRA methods and approaches share the assumption that it is meaningful to use the concept of "human error," so it is also meaningful to develop ways to estimate the chances of "human error." As a result, numerous studies have been performed to produce data sets or databases to be used as a basis for human error probability (HEP) quantification. This view prevails despite serious doubts expressed by scientists and professionals in HRA and related disciplines. A general review of HRA  notes that many approaches are based on highly questionable assumptions about human behaviour. The main contribution of fuzzy mathematics is its ability to represent vague information.
It has been used to model systems that are difficult to define precisely . As a methodology, fuzzy set theory incorporates vagueness and subjectivity. Fuzzy decision-making includes the uncertainties of human behaviour in decision-making. Fuzzy set theory, created by Zadeh in 1965, emerged as a powerful way to quantitatively represent and manipulate imprecise decision-making problems . Since vague parameters are treated as imprecise rather than precise values, the process is more powerful and the results are more credible. Fuzzy mathematics emerged as a tool to model processes that are too complex for traditional techniques (such as probability theory) and whose process information is qualitative, inaccurate, or unclear; for these cases, the concept of the membership function properly represents this type of knowledge . Fuzzy logic captures an inherent property of most human communication: it is not accurate, concise, perfectly clear, and crisp . The meaning of a word in natural language is diffuse, because a word can apply perfectly to some objects or events, clearly exclude others, and apply to a certain extent, in part, to other objects or events. Language statements are inherently vague; this fact can be addressed with fuzzy set theory . Fuzzy logic resembles the way humans make decisions and inferences . In fuzzy processing there are basically three components : (1) fuzzification, (2) fuzzy inference, and (3) defuzzification. Fuzzification is the process by which the input variables are transformed into fuzzy sets. Fuzzy inference is a set of fuzzy if-then-else rules used to process fuzzy inputs and generate fuzzy conclusions; that is, fuzzy inference interprets input vector values and, based on a rule set, generates an output vector.
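Together with a defuzzification stage, the three components named above can be sketched end to end; the single input variable (a nominal "workload"), the membership shapes, and the two rules below are illustrative assumptions only:

```python
# Minimal fuzzification → inference → defuzzification pipeline on one
# made-up input variable; every shape, rule, and range is illustrative.

def tri(x, a, b, c):
    """Triangular membership: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fis(workload):
    # 1) Fuzzification: map the crisp input to degrees of membership.
    low = tri(workload, -1, 0, 50)
    high = tri(workload, 40, 100, 101)

    # 2) Inference: IF workload is low THEN error chance is small;
    #               IF workload is high THEN error chance is large.
    # Mamdani min-implication clips each output set at its rule strength.
    def output_membership(y):  # y in [0, 1] = error chance
        small = min(low, tri(y, -0.1, 0.0, 0.6))
        large = min(high, tri(y, 0.4, 1.0, 1.1))
        return max(small, large)  # aggregate the clipped sets with max

    # 3) Defuzzification: discrete centroid of the aggregated output set.
    ys = [i / 100 for i in range(101)]
    ms = [output_membership(y) for y in ys]
    return sum(y * m for y, m in zip(ys, ms)) / sum(ms)

print(round(fis(20), 2), round(fis(90), 2))  # low vs. high error chance
```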
Defuzzification is the process of weighing and averaging all fuzzy values into a single output signal or decision. It is easy to see the applicability of this tool for quantifying human reliability. Many HRA authors have made contributions by developing new models with fuzzy quantification methodologies or by using fuzzy techniques or methodologies for quantifying existing models, for example, fuzzy CREAM . In the following sections, the main concepts of HRA methodologies and the fuzzy applications and contributions made to human reliability are presented.

2. Human Reliability Assessment Review
HRA methods were born in 1960, but most of the techniques for human factor evaluation, in terms of propensity to fail, have been developed since the mid-80s. HRA techniques or approaches can basically be divided into two categories: first and second generation. Currently, dynamic HRA techniques, or methods of the third generation, understood as an evolution of the previous generations , are the subject of research. The first-generation methods, or quantitative HRA methods, were based on statistics. The most important first-generation HRA method is THERP (technique for human error-rate prediction) , based on event tree analysis. Many methods and models in classical HRA theory assume that all probabilities are accurate ; that is, each probability involved can be perfectly determined. HEPs can be assigned on the basis of the operator's task characteristics and then modified by performance shaping factors (PSFs). In first-generation HRA, task characteristics are represented by HEPs, and the context, represented by PSFs, is considered a minor factor in HEP estimation .
This generation concentrated on HRA quantification, in terms of action success/failure, with less attention paid to the in-depth causes of and reasons for human behaviour. The integrity of probabilistic information implies two conditions: (1) all probabilities and probability distributions are well known or determinable; (2) system components are independent; that is, all random variables that describe component reliability behaviour are independent, or alternatively, their dependence is precisely known. Precise measurements of system reliability can be calculated whenever these two conditions are met. However, reliability evaluations combined with system and component descriptions may come from various sources. In most practical applications, it is difficult to expect the first condition to be met, and the second condition is usually violated. Utkin and Coolen  provide an important contribution to imprecise reliability, discuss a variety of topics, and review the suggested applications of imprecise probabilities in terms of reliability. Modelling human error through probabilistic approaches has shown limitations in quantifying the qualitative aspects of human error and the complexity of the circumstances involved. Mosleh and Chang  indicate the first-generation HRA methods' limitations, enumerate some expectations, and argue that methods should be based on models of human behaviour. Among first-generation techniques are Absolute Probability Judgment (APJ), the Human Error Assessment and Reduction Technique (HEART), Justification of Human Error Data Information (JHEDI), Probabilistic Human Reliability Analysis (PHRA), the Operator Action Tree System (OATS), and the Success Likelihood Index Method (SLIM). The most popular and effective method is THERP, characterized, like other first-generation approaches, by a precise mathematical treatment of probability and error rates.
THERP is based on an event tree in which each branch represents a combination of human activities and their mutual influences and results. The main features of first-generation methods can be summarized  as (1) binary representation of human actions (success/failure); (2) attention to the phenomenology of human action; (3) little attention to human cognitive actions (lack of a cognitive model); (4) emphasis on quantifying the probability of incorrect human actions; (5) dichotomy between errors of omission and commission; and (6) indirect treatment of context. THERP and approaches developed in parallel—such as HCR (Human Cognitive Reliability), developed by Hannaman, Spurgin, and Lukic in 1985—describe cognitive aspects of operator performance with a cognitive model of human behaviour known as the skill-rule-knowledge (SRK) model . This model, based on a classification of human behaviour, is divided into practical skill-, rule-, and knowledge-based behaviour, depending on the cognitive level used. The attention and conscious thought that an individual gives to activities decreases from the third to the first level. This model of behaviour fits very well with Reason's theory of human error : there are several types of errors, depending on whether the actions were carried out with intention or not. Reason distinguished "slips," errors that occur at the skill level; "lapses," errors caused by memory failure; and "mistakes," errors made during action execution. In THERP, however, bad actions are divided into omission and commission errors, representing, respectively, failure to carry out the operations necessary to achieve the desired result and execution of actions not related to the concerned task, which prevent the desired result . First-generation HRA methods ignore the cognitive processes that underlie human behaviour; in fact, they have a cognitive model without realism, and they are psychologically inadequate.
They are often criticized for not considering the impact of factors such as the environment, organizational factors, and other relevant PSFs, and for their inadequate treatment of commission errors and expert judgment [14, 18, 19]. Hollnagel  noted that "all inadequacies of previous HRA methods often lead analysts to perform a deliberately high HEP evaluation with greater uncertainty limits to compensate, at least in part, for these problems" . This is clearly not a desirable solution. In the early 1990s, the need for improved HRA methods generated a number of important research and development activities worldwide. These efforts led to great advances in first-generation methods and the birth of new techniques, iden