Retraction
Ting Wang, Dong-Lin Zhang, Xiao-Yong Yang, Jing-Qian Xu, Coffey Matthew
Published: 15 October 2020
Petroleum Science, Volume 17, pp 1795-1795; https://doi.org/10.1007/s12182-020-00519-w

Abstract:
This article has been retracted. Please see the Retraction Notice for more detail: https://doi.org/10.1007/s12182-020-00519-w.
Shan-Bin Gao, Xue-Feng Lu, Ke-Bin Chi, Ai-Jun Duan, Yan-Feng Liu, Xiang-Bin Meng, Ming-Wei Tan, Hong-Yue Yu, Yu-Ge Shen, et al.
Published: 4 September 2020
Petroleum Science, Volume 17, pp 1752-1763; https://doi.org/10.1007/s12182-020-00500-7

Abstract:
Noble metal Pt/ZSM-22 and Pt/ZSM-23 catalysts were prepared for the hydroisomerization of normal dodecane and the hydrodewaxing of heavy waxy lube base oil. The hydroisomerization performance on n-dodecane indicated that the Pt/ZSM-23 catalyst preferred to crack the C–C bond near the middle of the n-dodecane chain, while the Pt/ZSM-22 catalyst favored breaking the carbon chain near the end of n-dodecane. As a result, over 2% more light products (gas plus naphtha) and 3% more heavy lube base oil with a low pour point and a high viscosity index were produced on Pt/ZSM-22 than on Pt/ZSM-23 when heavy waxy vacuum distillate oil was used as the feedstock.
EURO Journal on Computational Optimization, Volume 8, pp 289-308; https://doi.org/10.1007/s13675-020-00126-9

Abstract:
In this work, a multi-constraint graph partitioning problem is introduced. The input is an undirected graph with costs on the edges and multiple weights on the nodes. The problem calls for a partition of the node set into a fixed number of clusters, such that each cluster satisfies a collection of node-weight constraints, and the total cost of the edges whose end nodes lie in the same cluster is minimized. It arises as a sub-problem of an integrated vehicle and pollster problem from a real-world application. Two integer programming formulations are provided, and several families of inequalities are proved to be valid for the respective polyhedra. An exact algorithm based on branch and bound and cutting planes is proposed and tested on real-world instances.
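As a toy illustration of the problem statement (not the paper's branch-and-cut algorithm), a brute-force search can make the objective and constraints concrete; all names and the tiny instance below are invented:

```python
from itertools import product

def best_partition(n_nodes, edges, weights, k, capacity):
    """Brute-force sketch of the multi-constraint partitioning problem:
    assign each node to one of k clusters, require every cluster's total
    weight (per weight dimension) to stay within `capacity`, and minimize
    the total cost of edges whose endpoints share a cluster."""
    best_cost, best = float("inf"), None
    for assign in product(range(k), repeat=n_nodes):
        # every cluster must satisfy every node-weight constraint
        feasible = all(
            sum(weights[v][d] for v in range(n_nodes) if assign[v] == c) <= capacity[d]
            for c in range(k) for d in range(len(capacity))
        )
        if not feasible:
            continue
        # cost of edges with both end nodes in the same cluster
        cost = sum(c for (u, v, c) in edges if assign[u] == assign[v])
        if cost < best_cost:
            best_cost, best = cost, assign
    return best_cost, best
```

Enumerating all k^n assignments is only viable for toy instances, which is precisely why the paper develops integer programming formulations and an exact branch-and-bound algorithm.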
Pei-Xian Liu, Shi-Biao Deng, Yi-Qiu Jin, Kai Wang, Yong-Quan Chen
Published: 6 June 2020
Petroleum Science, Volume 17, pp 873-895; https://doi.org/10.1007/s12182-020-00434-0

Abstract:
The study of Lower Cambrian dolostones in the Tarim Basin can improve our understanding of ancient and deeply buried carbonate reservoirs. In this research, diagenetic fluid characteristics and their control on porosity evolution were revealed by studying the petrography and in situ geochemistry of different dolomites. Three types of diagenetic fluids were identified: (1) Replacive dolomites were derived from shallow-burial dolomitizing fluids, most probably concentrated ancient seawater at an early stage. (2) Fine-to-medium crystalline, planar-e diamond pore-filling dolomites (Fd1) likely crystallized slowly and sufficiently from deep-circulating crustal hydrothermal fluids during the Devonian. (3) Coarse crystalline, non-planar-a saddle pore-filling dolomites (Fd2) probably crystallized rapidly and insufficiently from magmatic hydrothermal fluids during the Permian. Early dolomitizing fluids did not increase the porosity but transformed the primary pores into dissolution pores through dolomitization. Deep-circulating crustal hydrothermal fluids significantly increased porosity in the early stages by dissolution and then slightly decreased the porosity in the late stage due to Fd1 precipitation. Magmatic hydrothermal fluids only precipitated the Fd2 dolomites and slightly decreased the porosity. In summary, Devonian deep-circulating crustal hydrothermal fluids dominated the porosity evolution of the Lower Cambrian dolostone reservoir in the Tarim Basin.
EURO Journal on Computational Optimization, Volume 8, pp 103-139; https://doi.org/10.1007/s13675-020-00124-x

Abstract:
The location of shelters in different areas threatened by wildfires is one of the possible ways to reduce fatalities in a context of an increasing number of catastrophic and severe wildfires. These shelters will enable the population in the area to be protected in case of fire outbreaks. The subject of our study is to determine the best place for shelters in a given territory. The territory, divided into zones, is represented by a graph in which each zone corresponds to a node and two nodes are linked by an edge if it is feasible to go directly from one zone to the other. The problem is to locate p shelters on nodes so that the maximum distance of any node to its nearest shelter is minimized. When the uncertainty of fire outbreaks is not considered, this problem corresponds to the well-known p-Center problem on a graph. In this article, the uncertainty of fire outbreaks is introduced taking into account a finite set of fire scenarios. A scenario defines a fire outbreak on a single zone with the main consequence of modifying evacuation paths. Several evacuation paths may become impracticable and the ensuing evacuation decisions made under pressure may no longer be rational. In this context, the new issue under consideration is to place p shelters on a graph so that the maximum evacuation distance of any node to its nearest shelter in any scenario is minimized. We refer to this problem as the Robust p-Center problem under Pressure. After proving the NP-hardness of this problem on subgraphs of grids, we propose a first formulation based on 0-1 Linear Programming. For real size instances, the sizes of the 0-1 Linear Programs are huge and we propose a decomposition scheme to solve them exactly. Experimental results outline the efficiency of our approach.
Steve Begg
EURO Journal on Decision Processes, Volume 8, pp 89-124; https://doi.org/10.1007/s40070-020-00112-x

Abstract:
An experiment was set up to determine whether short, focused training could influence decision makers to take a more structured, process-based approach to project decision-making. The experiment also investigated how project decision-making is affected by the way a decision is framed by an authority figure, i.e. how a decision is influenced by an authority figure advocating a process-driven, neutral or opinion/schedule-driven approach. Half of the participants watched three 15-min training videos before answering questions on decision-making scenarios for projects, and the other half answered the questions without training. 40% of participants (split across the two groups) had undergone some prior training on decision making. The results demonstrate that watching the training videos has an impact. The impact is greater when there has been no prior training; however, there is still an impact in each case, albeit small for some. This implies that one hour of training prior to project decision-making is more valuable for those with no prior training, but still worthwhile for those with prior training. The results also showed that framing by an authority figure has a strong influence on the participants' responses, in terms of whether a process-based, neutral or opinion/intuition-based response was given.
EURO Journal on Decision Processes, Volume 8, pp 79-88; https://doi.org/10.1007/s40070-020-00111-y

Abstract:
This paper analyzes labor–employer relations under conditions that lead to a strike, using an evolutionary game and catastrophe theory. Faced with a strike threat, employers may accept all or part of the workers' demands and improve working conditions, or decline the demands; each strategy has its respective costs and benefits. The strike threat creates a game between the strikers and the employers in which, as time goes on, the players evaluate different strategies and the variables governing the strike undergo gradual, continuous changes. These changes can lead to a sudden jump in the variables that pushes the system into very different conditions, such as a dramatic increase or decrease in the probability of selecting a strategy, so the alliance among workers may weaken or strengthen. This discrete sudden change is called a catastrophe. In this study, after finding the evolutionarily stable strategies for each player, the catastrophe threshold is analyzed with a nonlinear evolutionary game, and managerial insight is offered to employers on how to prevent the parameters from crossing the boundary of the catastrophe set that leads to a general strike.
EURO Journal on Decision Processes, Volume 8, pp 61-77; https://doi.org/10.1007/s40070-020-00110-z

Abstract:
The objective of the present work is twofold. First, the Pythagorean fuzzy ordered weighted averaging (PFOWA) aggregation operator is introduced along with its desirable properties, namely commutativity, idempotency, boundedness and monotonicity. Second, the proposed operator is applied to decision-making problems to show the validity, practicality and effectiveness of the new approach. The main advantage of the proposed method is that it gives more accurate results than existing methods.
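A sketch of a PFOWA operator as commonly defined in the Pythagorean fuzzy literature; whether this matches the paper's exact definition is an assumption. Each Pythagorean fuzzy number is a membership/non-membership pair (mu, nu) with mu^2 + nu^2 <= 1, and the OWA step first orders the arguments by the score mu^2 - nu^2:

```python
import math

def pfowa(pfns, weights):
    """Pythagorean fuzzy ordered weighted averaging (one common
    formulation): order the arguments by score mu^2 - nu^2, then
    aggregate memberships as sqrt(1 - prod((1 - mu_j^2)^w_j)) and
    non-memberships as prod(nu_j^w_j)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to 1"
    ordered = sorted(pfns, key=lambda p: p[0] ** 2 - p[1] ** 2, reverse=True)
    mu = math.sqrt(1.0 - math.prod((1.0 - m * m) ** w
                                   for (m, _), w in zip(ordered, weights)))
    nu = math.prod(n ** w for (_, n), w in zip(ordered, weights))
    return mu, nu
```

Idempotency, one of the properties listed in the abstract, is easy to check: aggregating identical arguments returns that same argument.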
EURO Journal on Computational Optimization, Volume 8, pp 173-199; https://doi.org/10.1007/s13675-020-00123-y

Abstract:
Multistage stochastic programs arise in many engineering applications whenever a set of inventories or stocks has to be valued. Such is the case in the seasonal storage valuation of a set of cascaded reservoir chains in hydro management. A popular method is stochastic dual dynamic programming (SDDP), especially when the dimensionality of the problem is large and dynamic programming is no longer an option. The usual assumption of SDDP is that uncertainty is stage-wise independent, which is highly restrictive from a practical viewpoint. When possible, the usual remedy is to enlarge the state space to account for some degree of dependency. In applications, this may not be possible, or it may increase the state space by too much. In this paper, we present an alternative based on keeping a functional dependency in the SDDP cuts, related to the conditional expectations in the dynamic programming equations. Our method is based on popular methodology in mathematical finance, where it has progressively replaced scenario trees due to superior numerical performance. We demonstrate the interest of combining this way of handling dependency in uncertainty with SDDP on a set of numerical examples. Our method is readily available in the open-source software package StOpt.
Zhao Han, Zhuang Ma, Cheng-Bo Wang, Chun-Yu He, Ming Ke, Qing-Zhe Jiang
Published: 27 March 2020
Petroleum Science, Volume 17, pp 849-857; https://doi.org/10.1007/s12182-020-00439-9

Abstract:
An alumina support was modified by fluorine via impregnation to investigate the effect of fluoride content on the reactivity of Ni–Mo/Al2O3 catalyst. The catalyst was characterized by X-ray diffraction, N2 adsorption–desorption (Brunauer–Emmett–Teller) isotherms, temperature-programmed desorption of ammonia, X-ray photoelectron spectroscopy and high-resolution transmission electron microscopy. Sulfur etherification performance of the catalyst was studied using a fixed-bed reactor. The results show that increasing fluoride content increases the pore volume and pore size but reduces the specific surface area. In addition, the degree of sulfidation of Ni first increases and then decreases. The amounts of strong acid and total acid also increase with increasing fluoride content. Performance evaluation of the catalyst reveals that the fluoride content has a minor effect on the thioetherification performance of the catalyst; however, an optimum fluoride content, which was determined to be 0.2%, can ensure lower olefin saturation and an efficient diene selective hydrogenation.
Ding-Jin Liu, Zi-Ying Wang
Published: 11 March 2020
Petroleum Science, Volume 17, pp 352-362; https://doi.org/10.1007/s12182-019-00419-8

Abstract:
Envelope inversion (EI) is an efficient tool to mitigate the nonlinearity of conventional full waveform inversion (FWI) by exploiting the ultralow-frequency component of the seismic data. However, the performance of envelope inversion depends to some extent on the frequency content of the data and on the initial model. To improve convergence and avoid local minima, we propose a convolution-based envelope inversion method to update the low-wavenumber component of the velocity model. In addition, a multi-scale strategy is incorporated into the convolution-based envelope inversion (MCEI) to improve the inversion accuracy while guaranteeing global convergence. The success of this method relies on modifying the original envelope data to expand the overlap region between observed and modeled envelope data, which in turn expands the global minimum basin of the misfit function. The accurate low-wavenumber component of the velocity model provided by MCEI can be used as a migration model or as an initial model for conventional FWI. Numerical tests on a simple layered model and the complex BP 2004 model verify that the proposed method is more robust than EI, even when the initial model is coarse and the frequency content of the data is high.
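The envelope data that EI fits, the instantaneous amplitude |x + iH(x)| obtained via the Hilbert transform, can be computed with a few lines of numpy; the least-squares misfit below is a minimal sketch and does not include the paper's convolution-based modification of the envelope:

```python
import numpy as np

def envelope(trace):
    """Instantaneous-amplitude envelope via an FFT-based Hilbert
    transform: zero the negative-frequency half of the spectrum,
    double the positive half, and take the magnitude of the
    resulting analytic signal."""
    n = len(trace)
    spectrum = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * h)
    return np.abs(analytic)

def envelope_misfit(obs, syn):
    """Least-squares misfit between observed and modeled envelopes."""
    return 0.5 * np.sum((envelope(syn) - envelope(obs)) ** 2)
```

Because the envelope of an oscillatory trace varies far more slowly than the trace itself, this misfit retains sensitivity to the low-wavenumber velocity structure even when the data lack low frequencies.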
EURO Journal on Computational Optimization, Volume 8, pp 141-172; https://doi.org/10.1007/s13675-020-00121-0

Abstract:
In this paper, we consider a facility location problem where customer demand is subject to considerable uncertainty and complete information on the distribution of the uncertainty is unavailable. We formulate the optimal decision problem as a two-stage stochastic mixed integer programming problem: an optimal selection of facility locations in the first stage and an optimal decision on the operation of each facility in the second stage. A distributionally robust optimization framework is proposed to hedge risks arising from incomplete information on the distribution of the uncertainty. Specifically, by exploiting the moment information, we construct a set of distributions which contains the true distribution and base the optimal decision on the worst distribution from the set. We then develop two numerical schemes for solving the distributionally robust facility location problem: a semi-infinite programming approach which exploits moments of certain reference random variables and a semi-definite programming approach which utilizes the mean and correlation of the underlying random variables describing the demand uncertainty. In the semi-infinite programming approach, we apply the well-known linear decision rule approach to the robust dual problem and then approximate the semi-infinite constraints through the conditional value at risk measure. We provide numerical tests to demonstrate the computation and properties of the robust solutions.
David Ríos Insua
Published: 6 December 2019
EURO Journal on Decision Processes, Volume 8, pp 13-39; https://doi.org/10.1007/s40070-019-00109-1

Abstract:
With the proliferation of information and communication technologies, especially with recent developments in Artificial Intelligence, social robots at home and the workplace are no longer being treated as lifeless and emotionless, leading to proposals which aim at incorporating affective elements within agents. Advances in areas such as affective decision-making and affective computing drive this interest. Our motivation in this paper is to use affection as a basic element within a decision-making process to facilitate robotic agents providing more seemingly human responses. We use earlier research in cognitive science and psychology to provide a model for an autonomous agent that makes decisions partly influenced by affective factors when interacting with humans and other agents. The factors included are emotions, mood, personality traits, and activation sets in relation with impulsive behavior. We describe several simulations with our model to study and compare its performance when facing various types of users. Through them, we essentially showcase that our model allows for a powerful agent design mechanism regulating its behavior and provides greater decision-making adaptivity when compared to emotionless agents and simpler emotional models. We conclude describing potential uses of our model in several application areas.
EURO Journal on Computational Optimization, Volume 7, pp 359-380; https://doi.org/10.1007/s13675-019-00116-6

Abstract:
The plain Newton-min algorithm for solving the linear complementarity problem (LCP) "0 ≤ x ⊥ (Mx + q) ≥ 0" can be viewed as an instance of the plain semismooth Newton method on the equational version "min(x, Mx + q) = 0" of the problem. This algorithm converges for any q when M is an M-matrix, but not when it is a P-matrix. When convergence occurs, it is often very fast (in at most n iterations for an M-matrix, where n is the number of variables, but often much faster in practice). In 1990, Harker and Pang proposed to improve the convergence ability of this algorithm by introducing a stepsize along the Newton-min direction that results in a jump over at least one of the encountered kinks of the min-function, in order to avoid its points of nondifferentiability. This paper shows that, for the Fathi problem (an LCP with a positive definite symmetric matrix M, hence a P-matrix), an algorithmic scheme including the algorithm of Harker and Pang may require n iterations to converge, depending on the starting point.
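A minimal numpy sketch of the plain Newton-min iteration described above (the Harker–Pang stepsize is not included). At each step, the indices where the min picks x_i are fixed to zero and the complementary block of Mx + q = 0 is solved exactly:

```python
import numpy as np

def newton_min(M, q, x0, max_iter=50):
    """Plain Newton-min for the LCP  0 <= x  perp  Mx + q >= 0,
    viewed as semismooth Newton on  min(x, Mx + q) = 0."""
    n = len(q)
    x = x0.astype(float)
    for _ in range(max_iter):
        w = M @ x + q
        active = x <= w            # min picks x_i here  -> enforce x_i = 0
        free = ~active             # min picks w_i here  -> enforce (Mx+q)_i = 0
        x_new = np.zeros(n)
        if free.any():
            x_new[free] = np.linalg.solve(M[np.ix_(free, free)], -q[free])
        if np.allclose(x_new, x):  # active sets stabilized -> solution found
            break
        x = x_new
    return x
```

For an M-matrix this iteration terminates finitely, consistent with the at-most-n-iterations bound quoted in the abstract; for general P-matrices it can cycle, which is what motivated the Harker–Pang stepsize.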
Wilco Burghout
EURO Journal on Transportation and Logistics, Volume 8, pp 745-767; https://doi.org/10.1007/s13676-019-00146-5

Abstract:
In this article, we investigate empty vehicle redistribution algorithms for Personal Rapid Transit (PRT) or autonomous station-based taxi services, from a passenger service perspective. We present a new index-based redistribution (IBR) algorithm that improves upon existing nearest neighbour and indexing algorithms by incorporating expected passenger arrivals and predicted waiting times into the surplus/deficit index. We evaluate six variations of algorithms on a test case in Paris Saclay, France. The results show that especially the combination of Simple Nearest Neighbours + Index Based Redistribution provides promising results for both off-peak and rush-hour demand, outperforming the other methods tested, in terms of passenger waiting time (average and maximum) as well as station queue lengths.
Pierre Glynn, Mahmud Farooque, Ben Miyamoto, Patricia McKay
Published: 30 November 2019
EURO Journal on Decision Processes, Volume 7, pp 243-265; https://doi.org/10.1007/s40070-019-00104-6

The publisher has not yet granted permission to display this abstract.
Mustapha Abbad, Gallyam Aidagulov, Steve Dyer, Dominic Brady
Published: 23 November 2019
Petroleum Science, Volume 17, pp 671-686; https://doi.org/10.1007/s12182-019-00398-w

Abstract:
Accurate acid placement constitutes a major concern in matrix stimulation because the acid tends to penetrate the zones of least resistance while leaving the low-permeability regions of the formation untreated. Degradable materials (fibers and solid particles) have recently shown good capability for fluid diversion to overcome the issues related to matrix stimulation. Despite the success achieved in recent acid stimulation jobs using products that rely on fiber flocculation as the main diverting mechanism, it was observed that the volume of the base fluid and the loading of the particles are not optimized. The industry currently lacks a scientific design guideline because the methodology used is based on experience or on empirical studies in a particular area with a particular product. It is therefore important to understand the fundamentals of how acid diversion works in carbonates with different diverting mechanisms and diverters. Mathematical modeling and computer simulations are effective tools to develop this understanding and are efficiently applied to new product development, new applications of existing products or usage optimization. In this work, we develop a numerical model to study fiber dynamics in fluid flow. We employ a discrete element method in which the fibers are represented by multi-rigid-body systems of interconnected spheres. The discrete fiber model is coupled with a fluid flow solver to account for the inherent simultaneous interactions. The focus of the study is on the tendency of fibers to flocculate and bridge when interacting with suspending fluids and encountering restrictions that can be representative of fractures or wormholes in carbonates. The trends of the dynamic fiber behavior under various operating conditions, including fiber loading, flow rate and fluid viscosity, obtained from the numerical model show consistency with experimental observations.
The present numerical investigation reveals that the bridging capability of the fiber–fluid system can be enhanced by increasing the fiber loading, selecting fibers with higher stiffness, reducing the injection flow rate, reducing the suspending fluid viscosity or increasing the attractive cohesive forces among fibers by using sticky fibers.
Published: 21 November 2019
EURO Journal on Decision Processes, Volume 7, pp 221-241; https://doi.org/10.1007/s40070-019-00103-7

Abstract:
Stakeholder participation is increasingly being embedded into decision-making processes from the local to the global scale. With limited resources to engage stakeholders, frameworks that allow decision-makers to make cost-effective choices are greatly needed. In this paper, we present a structured decision-making (SDM) framework that enables environmental decision-makers to prioritise different engagement options by assessing their relative cost-effectiveness. We demonstrate the application of this framework using a case study in biosecurity management. Drawing on a scenario of Panama Disease Tropical Race 4 (TR4) invasion in the Australian banana industry, we conducted 25 semi-structured interviews and held a workshop with key stakeholders to elicit their key concerns and convert them into four objectives: making more informed decisions, maximising buy-in, empowering people, and minimising the stress of biosecurity incidents. We also identified ten engagement alternatives at the local, State/Territory, and National scales. Our results showed that options that engage local stakeholders and enable the capacity to undertake adaptive approaches to biosecurity management are more cost-effective than engagement efforts that seek to build capacities at higher decision-making levels. More interestingly, using the weights provided by different stakeholder groups does not significantly affect the cost-effectiveness ranking of the ten options considered. Even though the results are contingent on the context of this biosecurity study, the SDM framework developed for maximising cost-effectiveness is transferable to other areas of environmental management. The efficient frontier generated by this framework allows decision-makers to examine the trade-offs between costs and benefits and select the best portfolio for their investment.
This approach provides a practical and transparent estimate of the return on investment for stakeholder engagement in highly complex or uncertain situations, as is usually the case for environmental issues.
Klemens Niederberger, Peter Rey, Urs Helg, Susanne Haertel-Borer
Published: 19 November 2019
EURO Journal on Decision Processes, Volume 7, pp 197-219; https://doi.org/10.1007/s40070-019-00101-9

Abstract:
Despite the large literature about non-additive value aggregation techniques, in the large majority of applied decision support processes, additive value aggregation functions are used. The main reasons for this may be the simplicity of the approach, minimum elicitation requirements, software availability, and the appeal of the underlying preference independence concepts that may be strengthened by an adequate choice of sub-objectives and attributes. However, in an applied decision support process, the decision maker(s) or the stakeholders decide on the sub-objectives and attributes to characterize the state of a system and they have to provide information that allows the decision analyst to express their preferences as a value function of these attributes. It is the task of the decision analyst to find the parameterization and parameter values of a value function that fits best the expressed preferences. We describe a value function elicitation process for the ideal morphological state of a lake shore, performed with stakeholders from federal and cantonal authorities and from environmental consulting companies in Switzerland. This process led to the elicitation of strongly non-additive and partly even non-concave value aggregation functions. The objective of this paper is to raise the awareness about the importance of carefully testing the assumptions underlying parameterized (often additive) value aggregation techniques during the preferences elicitation process and to be flexible regarding evaluating value functions that deviate from the often used additive aggregation scheme. This can lead to a higher confidence that additive aggregation is suitable for the specific decision problem or to the selection of alternative aggregation techniques that better represent the decision maker’s preferences in case additivity is violated.
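To make the additivity issue concrete, here is a minimal sketch contrasting an additive value function with one simple non-additive alternative, a convex mixture of the additive model and the worst sub-objective. The mixture form is an illustrative assumption, not necessarily the parameterization elicited in the paper:

```python
def additive(values, weights):
    """Standard additive aggregation: v = sum_i w_i * v_i,
    where values and weights are lists of equal length and
    the weights sum to one."""
    return sum(w * v for w, v in zip(weights, values))

def additive_min_mix(values, weights, alpha):
    """A simple non-additive aggregation: a convex mixture of the
    additive model and the worst attribute value, so one bad
    sub-objective drags the overall value down more strongly than
    a weighted sum would (illustrative form only)."""
    return alpha * additive(values, weights) + (1 - alpha) * min(values)
```

With values (1.0, 0.0) and equal weights, the additive model returns 0.5 while the mixture with alpha = 0.5 returns 0.25: the poor attribute is penalized more heavily, which is the kind of behavior a decision analyst should test for during preference elicitation.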
Joshu Jullier, Martin Raubal
Published: 18 November 2019
EURO Journal on Decision Processes, Volume 7, pp 159-195; https://doi.org/10.1007/s40070-019-00100-w

Abstract:
Decisions about urban space, and especially about power transmission lines, are of great public interest because their visibility affects citizens for decades. With increasing awareness, citizens expect to be transparently informed, to have their concerns taken seriously, and to see decision-makers base their decisions rationally on facts and laws. In this paper, we present a 3D Decision Support System (3D DSS) that tackles this issue and allows decision-makers to find an optimal transmission line corridor on such a rational basis, taking stakeholders' preferences regarding multiple criteria into account. We examined its reliability in predicting transmission line corridors realistically, as stakeholders would expect them, by carrying out a study in central Switzerland with 10 grid planning experts and government representatives. Moreover, we investigated the extent to which graphic representations may support decision-makers, firstly in evaluating a transmission line corridor modeled by the 3D DSS, secondly in considering and improving a human-defined scenario for transmission line planning, and thirdly in changing their opinion about a human-defined path. For this, a questionnaire was statistically evaluated by means of exploratory, correlation, and regression analyses. The results on the investigated visual analytics approach showed that it supports the evaluation of the corridor modeled by the 3D DSS as well as of the scenario defined by the stakeholders. As our new approach allows stakeholders to evaluate a transmission line path they consider optimal for land and population, it has high potential for supporting rational group decision-making when different opinions are considered.
Igor Linkov
Published: 1 November 2019
EURO Journal on Decision Processes, Volume 7, pp 151-157; https://doi.org/10.1007/s40070-019-00108-2

Abstract:
Our society is facing serious environmental challenges related to climate change, pollution, diminishing resources, and biodiversity loss. Such problems are often ill-defined and are characterized by high uncertainty. Environmental decisions have strong impacts on society and demand clear and transparent trade-offs across the values and priorities of stakeholder groups. This Feature Issue on Environmental Decisions includes papers focused on important environmental applications approached through various disciplinary backgrounds. The papers highlight advanced, often interdisciplinary, methodological approaches and include the perspectives of different stakeholders in the process of environmental decision-making. A wide range of methods is explored, ranging from a comprehensive review (for sustainable transport, by Marleau Donais et al.) to an opinion paper proposing the use of Records of Engagement and Decision-making (RoED; by Cockerill et al.). Stakeholder engagement and preference elicitation required the development of new aggregation models for Multi-Criteria Decision Analysis (MCDA; by Reichert et al.). The integration of Cost-Benefit Analysis (CBA) with MCDA was found to be necessary in practice (by Liu et al.; Marleau Donais et al.). MCDA was extended to include the spatial dimension by integrating Geographic Information Systems (GIS; by Guay et al.; Schito et al.). The importance of considering the resilience of systems to better respond to and recover from unpredictable risks was emphasized (by Leyerer et al.; Mustajoki and Marttunen). These papers demonstrate the richness of approaches to environmental decision-making. Environmental issues offer ample exciting research opportunities to a broader scientific community. We encourage the readers of this Feature Issue, and of EJDP, to engage in environmental decision-making projects to support emerging societal needs.
Marc-Oliver Sonneberg, Maximilian Heumann, Michael H. Breitner
Published: 1 November 2019
EURO Journal on Decision Processes, Volume 7, pp 267-300; https://doi.org/10.1007/s40070-019-00105-5

Abstract:
The worldwide trend of urbanization, the rising needs of individuals, and the continuous growth of e-commerce lead to increasing urban delivery activities, which are a substantial driver of traffic and pollution in cities. Due to rising public pressure, emission-reducing measures are increasingly likely to be introduced. Such measures can cover diesel bans or even entire car-free zones, causing drastic effects on delivery networks in urban areas. As an option to reduce the risk of a regulation-induced shock, we present a resilience-oriented network and fleet optimization. We propose an innovative parcel delivery concept for last mile delivery (LMD) operations and develop an optimization model to support tactical planning decisions. Our model minimizes overall operating costs by determining optimal locations for micro depots and it allocates transport vehicles to them. An adjustable CO2-threshold and external costs are included to consider potential regulatory restrictions by city authorities. We implement our model into a decision support system (DSS) that allows analyzing and comparing different scenarios. We provide a computational study by evaluating and discussing our DSS with an example of a mid-sized German city. Our results and findings demonstrate the trade-off between cost and emission minimization by quantifying the impacts of various fleet compositions. The proposed logistics concept represents an option to achieve environmentally friendly, cost-efficient, and resilient LMD of parcels.
EURO Journal on Transportation and Logistics, Volume 8, pp 769-793; https://doi.org/10.1007/s13676-019-00147-4

Abstract:
This paper presents an assignment modeling framework for public transport networks with co-existing schedule- and frequency-based services. The paper develops, applies and discusses a joint model, which aims at representing the behavior of passengers as realistically as possible. The model consists of a choice set generation phase followed by a multinomial logit route choice model and assignment of flow to the generated alternatives. The choice set generation uses an event dominance principle to exclude alternatives with costs above a certain cost threshold. Furthermore, a heuristic for aggregating overlapping lines is proposed. The results from applying the model to a case study in the Greater Copenhagen Area show that the level of service obtained in the unified network model of mixed services lies between the levels of service for strictly schedule-based and strictly frequency-based networks. The results also show that providing timetable information to passengers improves their utility compared with providing only frequency information.
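The multinomial logit step of such a framework can be sketched in a few lines; a generic linear-in-cost utility is assumed here, and the paper's actual utility specification and choice-set generation are not reproduced:

```python
import math

def mnl_probabilities(costs, scale=1.0):
    """Multinomial logit choice probabilities over route alternatives:
    P_i = exp(-scale * cost_i) / sum_j exp(-scale * cost_j)."""
    utils = [-scale * c for c in costs]
    m = max(utils)                        # shift for numerical stability
    expu = [math.exp(u - m) for u in utils]
    total = sum(expu)
    return [e / total for e in expu]

def assign_flow(demand, costs, scale=1.0):
    """Split an origin-destination demand over the generated
    alternatives in proportion to their logit probabilities."""
    return [demand * p for p in mnl_probabilities(costs, scale)]
```

The scale parameter controls how strongly passengers concentrate on the cheapest alternative: as it grows, the split approaches all-or-nothing assignment; as it shrinks, flow spreads evenly across the choice set.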
, Mika Marttunen
Published: 9 October 2019
EURO Journal on Decision Processes, Volume 7, pp 359-386; https://doi.org/10.1007/s40070-019-00099-0

Abstract:
Resilience management aims to increase the ability of a system to respond to adverse events. In this study, we develop and apply a structured framework for assessing the resilience of the decision-making process related to reservoir (or lake) regulation, using the resilience matrix approach. Our study area is Finland, where regulation schemes have typically been initiated for hydropower production or flood prevention, although recreational and environmental issues are nowadays increasingly considered. The main objectives of this study are twofold. First, it aims to support reservoir operators and supervisors of watercourse regulation projects in identifying possible threats and actions to diminish their consequences. Second, it studies the applicability of the resilience matrix approach to a narrowly defined operational process, as most earlier applications have focused on more general contexts. Our resilience matrix was developed in close co-operation with reservoir operators and supervisors of regulation by means of two workshops and a survey. For the practical application of the matrix, we created an evaluation form for assessing the resilience of a single dam operation process and for evaluating the cost efficiency of the actions identified to improve resilience. The approach was tested on a dam controlling the water level of a middle-sized lake, where it proved a practical way to systematically assess resilience.
, Irène Abi-Zeid, E. Owen D. Waygood, Roxane Lavoie
Published: 8 October 2019
EURO Journal on Decision Processes, Volume 7, pp 327-358; https://doi.org/10.1007/s40070-019-00098-1

Abstract:
Transport decision processes have traditionally applied cost–benefit analysis (CBA), with benefits mainly relating to time savings and costs relating to infrastructure and maintenance. However, a shift toward more sustainable practices was initiated over the last decades to remedy the many negative impacts of automobility. As a result, decision processes related to transport projects have become more complex due to their multidimensional aspects and to the variety of stakeholders involved, often with conflicting points of view. To support rigorous decision-making, multicriteria decision analysis (MCDA) is, in addition to CBA, often used by governments and cities. However, there is still no consensus in the transport field regarding a preferred method that can integrate sustainability principles. This paper presents a descriptive literature review related to MCDA and CBA in the field of transport. Across the 66 papers considered, we identified the perceived strengths and weaknesses of CBA and MCDA, the different ways to combine them, and the ability of each method to support sustainable transport decision processes. We further analysed the results based on four types of rationality (objectivist, conformist, adjustive, and reflexive). Our results show that both methods can help improve decision processes and that, depending on the rationality adopted, the perceived strengths and weaknesses of MCDA and CBA can vary. Nonetheless, we observe that by adopting a more global and holistic perspective and by facilitating the inclusion of a participative process, MCDA, or a combination of both methods, emerges as the more promising appraisal approach for sustainable transport.
, Andrea Lodi, Patrice Marcotte
EURO Journal on Computational Optimization, Volume 8, pp 61-84; https://doi.org/10.1007/s13675-019-00120-w

Abstract:
In the design of service facilities, whenever the behaviour of customers is impacted by queueing or congestion, the resulting equilibrium cannot be ignored by a firm that strives to maximize revenue within a competitive environment. In the present work, we address the problem faced by a firm that makes decisions with respect to location, service levels and prices and that takes explicitly into account user behaviour. This situation is modelled as a nonlinear mathematical program with equilibrium constraints that involves both discrete and continuous variables, and for which we propose an efficient algorithm based on an approximation that can be solved for its global optimum.
Published: 31 August 2019
Petroleum Science, Volume 16, pp 1159-1175; https://doi.org/10.1007/s12182-019-00359-3

Abstract:
This paper addresses the scheduling and inventory management of a straight pipeline system connecting a single refinery to multiple distribution centers. By increasing the number of batches and time periods, maintaining the model resolution by using linear programming-based methods and commercial solvers would be very time-consuming. In this paper, we make an attempt to utilize the problem structure and develop a decomposition-based algorithm capable of finding near-optimal solutions for large instances in a reasonable time. The algorithm starts with a relaxed version of the model and adds a family of cuts on the fly, so that a near-optimal solution is obtained within a few iterations. The idea behind the cut generation is based on the knowledge of the underlying problem structure. Computational experiments on a real-world data case and some randomly generated instances confirm the efficiency of the proposed algorithm in terms of the solution quality and time.
EURO Journal on Computational Optimization, Volume 7, pp 381-419; https://doi.org/10.1007/s13675-019-00118-4

Abstract:
Interior-point or barrier methods handle nonlinear programs by sequentially solving barrier subprograms with a decreasing sequence of barrier parameters. The specific barrier update rule strongly influences the theoretical convergence properties as well as the practical efficiency. While many global and local convergence analyses consider a monotone update that decreases the barrier parameter for every approximately solved subprogram, computational studies show a superior performance of more adaptive strategies. In this paper we interpret the adaptive barrier update as a reinforcement learning task. A deep Q-learning agent is trained by both imitation and random action selection. Numerical results based on an implementation within the nonlinear programming solver WORHP show that the agent successfully learns to steer the barrier parameter and additionally improves WORHP’s performance on the CUTEst test set.
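The idea of learning a barrier update policy can be illustrated with a toy tabular Q-learning agent; the paper trains a deep Q-network inside WORHP, whereas the states, actions, and rewards below are purely illustrative stand-ins:

```python
import random

random.seed(0)

# Toy stand-in for adaptive barrier updates: states 0..3 are successively
# smaller barrier parameters, state 4 means "converged".  Action 0 is a
# cautious monotone decrease (one level); action 1 is an aggressive decrease
# (two levels), cheap while the barrier is large but expensive near
# convergence.  Negative reward ~ inner-iteration effort (all values made up).
N_STATES, TERMINAL = 5, 4

def step(state, action):
    if action == 0:
        return min(state + 1, TERMINAL), -1.0
    cheap = state <= 2
    return min(state + 2, TERMINAL), (-0.3 if cheap else -5.0)

Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, eps = 0.5, 0.3
for _ in range(3000):                 # episodes of epsilon-greedy Q-learning
    s = 0
    while s != TERMINAL:
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        target = r if s2 == TERMINAL else r + max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(TERMINAL)]
```

After training, the greedy policy decreases the barrier aggressively while it is large and switches to cautious steps near convergence, mirroring the adaptive behavior the agent is supposed to learn.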
, , Zhen-Yu Song, Yun-Peng Xiao, Wei Yang, Man Dong, Yun-Fei Huang, Da Gao
Published: 16 August 2019
Petroleum Science, Volume 16, pp 956-971; https://doi.org/10.1007/s12182-019-0346-2

Abstract:
This study is the first systematic assessment of the Lower Ordovician microbial carbonates in Songzi, Hubei Province, China. This paper divides the microbial carbonates into two types according to growth patterns, namely nongranular and granular. The nongranular types include stromatolites, thrombolites, dendrolites, leiolites and laminites; the granular types are mainly oncolites and may include a small amount of microbiogenic oolite. According to their geometric features, the stromatolites can be divided into four types: stratiform, wavy, columnar and domal. Additionally, dipyramidal columnar stromatolites are identified for the first time and represent a new type of columnar stromatolite. The thrombolites are divided into three types: speckled, reticulated and banded. The grazing gastropod Ecculiomphalus and traces of bioturbation are observed in the speckled and reticulated thrombolites. This paper considers these two kinds of thrombolites to represent bioturbated thrombolites. These findings fill a gap in the study of Chinese Ordovician bioturbated thrombolites and provide new information for thrombolite research. Based on the analysis of the sedimentary characteristics of the microbialites, the depositional environments of the various types of microbialites are described, and the distribution patterns of their depositional environments are summarized. The relationship between the development of microbialites and the evolution and radiation of metazoans during the Early to Middle Ordovician is discussed. Consistent with the correspondence between the stepwise, rapid radiation of metazoans and the abrupt reduction in the number of microbialites between the late Early Ordovician and the early Middle Ordovician, fossils of benthonic grazing gastropods (Ecculiomphalus) were found in the stromatolites and thrombolites of the study area. It is believed that the gradual reduction in microbialites was related to the rapid increase in the abundance of metazoans. Grazers not only grazed on the microorganisms that formed stromatolites, resulting in a continuous reduction in the number of stromatolites, but also disrupted the growth state of the stromatolites, resulting in the formation of unique bioturbated thrombolites in the study area. Hydrocarbon potential analysis shows that the microbialites in the Nanjinguan Formation represent better source rocks than those in the other formations.
, , Long-Qing Qiu, , A. G. Yagola
Published: 5 August 2019
Petroleum Science, Volume 16, pp 794-807; https://doi.org/10.1007/s12182-019-0350-6

Abstract:
Full tensor magnetic gradient measurements are available nowadays. These are essential for determining magnetization parameters in deep layers. Using full or partial tensor magnetic gradient measurements to determine subsurface properties, e.g., magnetic susceptibility, is an inverse problem. Inversion of total magnetic intensity data is the traditional approach. Because practical full tensor magnetic gradient data are difficult to obtain, the corresponding inversion results are not widely reported. With the development of superconducting quantum interference devices (SQUIDs), we can acquire full tensor magnetic gradient data through field measurements. In this paper, we study the inverse problem of retrieving magnetic susceptibility from field data acquired with our designed low-temperature SQUIDs. A solution methodology based on sparse regularization and the alternating direction method of multipliers is established. Numerical and field data experiments are performed to show the feasibility of our algorithm.
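The sparse-regularization-plus-ADMM idea can be sketched on a dense toy problem: an L1-regularized least-squares model with synthetic data standing in for the authors' field operators and susceptibility model, which are of course different:

```python
import numpy as np

rng = np.random.default_rng(1)

def admm_lasso(A, b, lam=0.5, rho=1.0, iters=200):
    """ADMM for min 0.5||Ax-b||^2 + lam*||x||_1 (sparse regularization).
    x-update solves a ridge system, z-update is soft thresholding."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # factor once: (A^T A + rho I) x = A^T b + rho (z - u)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z
    return z

# Synthetic test: recover a sparse susceptibility-like vector from linear data
A = rng.standard_normal((100, 30))
x_true = np.zeros(30)
x_true[[3, 11, 20]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = admm_lasso(A, b)
```

With a well-conditioned dense operator the splitting converges quickly and returns an exactly sparse `z`, which is the appeal of ADMM for this class of inversions.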
Chang-Sheng Qu, , Ying-Chang Cao, Yong-Qiang Yang, Kuan-Hong Yu
Published: 2 August 2019
Petroleum Science, Volume 16, pp 763-775; https://doi.org/10.1007/s12182-019-0353-3

Abstract:
The Lucaogou Formation in the Jimusar Sag of the eastern Junggar Basin is an important sedimentary stratum accumulating huge amounts of lacustrine tight oil in China, where organic-rich rocks are commonly observed. Focusing on the Lucaogou Formation, a precise analysis of the inorganic and organic petrology and the inorganic geochemistry was conducted. The paleoclimate and paleoenvironment during sedimentation of the Lucaogou Formation were established, and the key factors controlling the accumulation of organic matter during this time were identified. The results of this study suggest that during the sedimentation of the Lucaogou Formation, the paleoclimate periodically changed from a humid environment to an arid environment. As a result, the salinity of the water and the redox environment fluctuated. During the sedimentation period, the lake showed sufficient nutrient supplies and a high primary productivity. The studied interval of the Lucaogou Formation was divided into five sedimentary cycles. In the first, second, and fifth cycles, the paleoclimate fluctuated from humid to arid and back to humid, with salinity rising from low to high and then decreasing again. The third and fourth cycles fluctuated from humid to arid, with a corresponding salinity variation from low to high. During the period when the organic-rich rocks of the Lucaogou Formation were deposited in the Jimusar Sag, the paleoclimate and the water body were suitable for lower aquatic organisms to flourish. As a result, paleoproductivity was high, especially during the early period of each cycle. A quiet, deep water body is likely to form an anoxic environment at the bottom and also favors the accumulation and preservation of organisms. Fine-grained sediments accumulated at a low deposition rate, with little dilution of organic matter. Therefore, high paleoproductivity provided a sufficient supply of organisms, and this, together with a quiet, deep, anoxic water body, constituted the key factors controlling the formation of organic-rich rocks in the studied area.
Gholamreza Hosseinyar, , Iraj Abdollahie Fard, Asadollah Mahboubi, Rooholah Noemani Rad
Published: 2 August 2019
Petroleum Science, Volume 16, pp 776-793; https://doi.org/10.1007/s12182-019-0347-1

Abstract:
Lower Cretaceous Shurijeh–Shatlyk Formations host some of the main reservoirs in the Kopeh Dagh-Amu Darya Basin. Exploration in this area has so far focused on the development of structural traps, but recognition of stratigraphic traps is of increasing importance. Integration of 3D seismic data with borehole data from thirteen wells and five outcrop sections was used to identify potential reservoir intervals and survey the hydrocarbon trap types in the East Kopeh Dagh Foldbelt (NE Iran). Analyses of horizontal slices indicated that the lower Shurijeh was deposited in a braided fluvial system. Generally, three types of channel were identified in the lower Shurijeh Formation: type 1, low-sinuosity channels interpreted to be filled with non-reservoir fine-grained facies; type 2, moderately sinuous sand-filled channels with good prospectivity; and type 3, narrow, high-sinuosity channels filled with fine-grained sediments. Results indicate that the upper Shurijeh–Shatlyk Formations were deposited in fluvial to deltaic and shallow marine environments. The identified delta forms the second reservoir zone in the Khangiran Field. Study of the stratigraphic aspects of the Shurijeh succession indicates that both the lower and upper Shurijeh reservoirs are stratigraphic traps that improved during folding.
De-Bo Ma, , , Xin-Sheng Luo, Jian-Fa Han, Zhi-Yong Chen
Published: 2 August 2019
Petroleum Science, Volume 16, pp 752-762; https://doi.org/10.1007/s12182-019-0352-4

Abstract:
Understanding the scaling relation of damage zone width with displacement of faults is important for predicting subsurface faulting mechanisms and fluid flow processes. The understanding of this scaling relationship is influenced by the accuracy of the methods and the types of data used to investigate faults. In this study, seismic reflection data are used to investigate the throw and damage zone width of five strike-slip faults affecting Ordovician carbonates of the Tarim intracraton basin, NW China. The results indicate that faults with throws of less than 200 m have formed damage zones up to 3000 m wide. Damage zone width correlates positively with throw and follows a power-law relation over two orders of magnitude, with the width-to-throw ratio varying between 2 and 15. The relationship between throw and damage zone width is not a single power law: its slope changes from small to large faults. The results indicate that throw scales well with damage zone width for the studied faults, and hence these relations can be used to predict fault geometries in the Tarim Basin. The study of the wide carbonate damage zones presented here provides new insights into the scaling of large faults, which involve multiple faulting stages.
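A power-law scaling relation of this kind is routinely estimated by linear regression in log-log space. The sketch below uses synthetic fault data (the prefactor, exponent, and scatter are hypothetical, not the Tarim measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fault data: damage-zone width W scaling with throw T as
# W = c * T^n, with lognormal scatter around the trend.
throw = 10 ** rng.uniform(0.5, 2.5, 40)            # throws of ~3-300 m
width = 12.0 * throw ** 0.8 * 10 ** (0.05 * rng.standard_normal(40))

# A power law is linear in log-log space: log10(W) = log10(c) + n*log10(T)
n_hat, logc_hat = np.polyfit(np.log10(throw), np.log10(width), 1)
ratio = width / throw                               # width-to-throw ratio
```

A single straight-line fit like this assumes one exponent over the whole range; a slope change between small and large faults, as reported above, would show up as systematic residuals and motivate a segmented fit.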
, , Jon Gluyas, Yan-Zhong Wang, Ke-Yu Liu, Ke-Lai Xi, Tian Yang, Jian Wang
Published: 30 July 2019
Petroleum Science, Volume 16, pp 729-751; https://doi.org/10.1007/s12182-019-0344-4

Abstract:
Burial dissolution of feldspar and carbonate minerals has been proposed to generate large volumes of secondary pores in subsurface reservoirs. Secondary porosity due to feldspar dissolution is ubiquitous in buried sandstones; however, extensive burial dissolution of carbonate minerals in subsurface sandstones is still debatable. In this paper, we first present four types of typical selective dissolution assemblages of feldspars and carbonate minerals developed in different sandstones. Under the constraints of porosity data, water–rock experiments, geochemical calculations of aggressive fluids, diagenetic mass transfer, and a review of publications on mineral dissolution in sandstone reservoirs, we argue that the hypothesis for the creation of significant volumes of secondary porosity by mesodiagenetic carbonate dissolution in subsurface sandstones is in conflict with the limited volume of aggressive fluids in rocks. In addition, no transfer mechanism supports removal of the dissolution products due to the small water volume in the subsurface reservoirs and the low mass concentration gradients in the pore water. Convincing petrographic evidence supports the view that the extensive dissolution of carbonate cements in sandstone rocks is usually associated with a high flux of deep hot fluids provided via fault systems or with meteoric freshwater during the eodiagenesis and telodiagenesis stages. The presumption of extensive mesogenetic dissolution of carbonate cements producing a significant net increase in secondary porosity should be used with careful consideration of the geological background in prediction of sandstone quality.
Hamed Foroughi Asl, , Abbas Khaksar Manshad, Mohammad Ali Takassi, , Alireza Keshavarz
Published: 26 July 2019
Petroleum Science, Volume 17, pp 105-117; https://doi.org/10.1007/s12182-019-0354-2

Abstract:
Surfactant flooding is an important technique used to improve oil recovery from mature oil reservoirs by minimizing the interfacial tension (IFT) between oil and water and/or altering the rock wettability toward water-wet, using surfactants of cationic, anionic, non-ionic, or amphoteric type. In this study, two amino-acid-based surfactants, lauroyl arginine (l-Arg) and lauroyl cysteine (l-Cys), were synthesized and used to reduce the IFT of oil–water systems and alter the wettability of carbonate rocks, thus improving oil recovery from oil-wet carbonate reservoirs. The synthesized surfactants were characterized using Fourier transform infrared spectroscopy and nuclear magnetic resonance analyses, and the critical micelle concentration (CMC) of the surfactant solutions was determined using conductivity, pH, and turbidity techniques. Experimental results showed that the CMCs of l-Arg and l-Cys solutions were 2000 and 4500 ppm, respectively. It was found that using l-Arg and l-Cys solutions at their CMCs, the IFT was reduced from 34.5 to 18.0 and 15.4 mN/m, and the contact angle from 144° to 78° and 75°, respectively. Thus, the l-Arg and l-Cys solutions enabled approximately 11.9% and 8.9% additional recovery of OOIP (original oil in place). Both amino-acid surfactants can therefore be used to improve oil recovery due to their desirable effects on the EOR mechanisms at their CMC ranges.
Ehsan Ghandi, , Masoud Riazi
Published: 22 July 2019
Petroleum Science, Volume 16, pp 1361-1373; https://doi.org/10.1007/s12182-019-0355-1

Abstract:
Most fractured carbonate oil reservoirs have oil-wet rocks. Therefore, the imbibition of water from the fractures into the matrix is usually weak or entirely absent due to negative capillary pressure. To achieve appropriate ultimate oil recovery in these reservoirs, a water-based enhanced oil recovery method must be capable of altering the wettability of the matrix blocks. Previous studies showed that carbonated water can alter the wettability of oil-wet carbonate rocks toward less oil-wet or neutral conditions, but the degree of modification is not high enough to allow water to imbibe spontaneously into the matrix blocks at an effective rate. In this study, we manipulated carbonated brine chemistry to enhance its wettability alteration features and hence to improve the water imbibition rate and ultimate oil recovery upon spontaneous imbibition in dolomite rocks. First, the contact angle and interfacial tension (IFT) of brine/crude oil systems were measured for several synthetic brine samples with different compositions. Thereafter, two solutions with a significant difference in WAI (wettability alteration index) but approximately equal brine/oil IFT were chosen for spontaneous imbibition experiments. In the next step, spontaneous imbibition experiments at ambient and high pressures were conducted to evaluate the ability of carbonated smart water to enhance the spontaneous imbibition rate and ultimate oil recovery in dolomite rocks. Experimental results showed that an appropriate adjustment of the imbibition brine (i.e., carbonated smart water) chemistry improves the imbibition rate of carbonated water in oil-wet dolomite rocks as well as the ultimate oil recovery.
Zhengwei Ma,
Published: 16 July 2019
Petroleum Science, Volume 16, pp 929-938; https://doi.org/10.1007/s12182-019-0339-1

Abstract:
This paper investigates the relationship between China's fuel ethanol promotion plan and food security based on the interactions between the crude oil market, the fuel ethanol market and the grain market. Based on the US West Texas Intermediate (WTI) crude oil spot price and Chinese corn prices from January 2008 to May 2018, this paper applies Granger causality testing and a generalized impulse response function to explore the relationship between world crude oil prices and Chinese corn prices. The results show that crude oil prices do not Granger-cause China's corn prices, although changes in world crude oil prices have a long-term positive impact on Chinese corn prices. Therefore, the Chinese government should pay attention to changes in crude oil prices when promoting fuel ethanol. Given the transmission effect between the fuel ethanol and food markets, the government should also take measures to ensure food security.
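A bare-bones version of the Granger causality test compares a restricted autoregression of one series against a model augmented with lags of the other, via an F-statistic. The sketch below uses synthetic series in place of the WTI and corn price data (the paper uses standard econometric procedures on real data; the lag order and coefficients here are made up):

```python
import numpy as np

rng = np.random.default_rng(42)

def granger_f(y, x, p=2):
    """F-statistic for 'x Granger-causes y' with p lags: restricted AR(p)
    model of y vs. the same model augmented with p lags of x."""
    T = len(y)
    Y = y[p:]
    lag_y = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    lag_x = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    ones = np.ones((T - p, 1))

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return r @ r

    rss_r = rss(np.hstack([ones, lag_y]))
    rss_u = rss(np.hstack([ones, lag_y, lag_x]))
    n, k_u = T - p, 1 + 2 * p
    return ((rss_r - rss_u) / p) / (rss_u / (n - k_u))

# Synthetic series: "oil" shocks feed into "corn" with a one-period lag
oil = rng.standard_normal(300)
corn = np.empty(300)
corn[0] = 0.0
for t in range(1, 300):
    corn[t] = 0.3 * corn[t - 1] + 0.5 * oil[t - 1] + 0.1 * rng.standard_normal()

f_oil_to_corn = granger_f(corn, oil)   # large: lagged oil predicts corn
f_corn_to_oil = granger_f(oil, corn)   # near 1: corn does not predict oil
```

Comparing the statistic to an F(p, n − k_u) critical value gives the usual accept/reject decision; the asymmetry of the two statistics is what a directional Granger finding looks like.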
Vladimir Beresnev,
EURO Journal on Computational Optimization, Volume 8, pp 33-59; https://doi.org/10.1007/s13675-019-00117-5

Abstract:
We consider a model of two parties’ competition organized as a Stackelberg game. The parties open their facilities intending to maximize profit from serving the customers that behave following a binary rule. The set of customers is unknown to the party which opens its facilities first and is called the Leader. Instead, a finite list of possible scenarios specifying this set is provided to the Leader. One of the scenarios is to be realized in the future before the second party, called the Follower, would make their own decision. The scenarios are supplied with known probabilities of realization, and the Leader aims to maximize both the probability to get a profit not less than a specific value, called a guaranteed profit, and the value of a guaranteed profit itself. We formulate the Leader’s problem as a bi-objective bi-level mathematical program. To approximate the set of efficient solutions of this problem, we develop an \(\varepsilon \)-constraint method where a branch-and-bound algorithm solves a sequence of bi-level problems with a single objective. Based on the properties of feasible solutions of a bi-level program and mathematical programming techniques, we developed three upper bound procedures for the branch-and-bound method mentioned. In numerical experiments, we compare these procedures with each other. Besides that, we discuss relations of the model under investigation and the stochastic competitive location model with uncertain profit values.
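The ε-constraint idea of sweeping a bound on one objective while optimizing the other can be sketched on a toy list of (probability, guaranteed profit) candidates. The values are hypothetical, and where the paper solves a bi-level program by branch-and-bound at each step, the sketch simply enumerates:

```python
# Toy candidates: (probability of reaching the profit, guaranteed profit),
# e.g. from enumerated Leader decisions (all values hypothetical).
candidates = [(0.9, 10), (0.8, 25), (0.6, 40), (0.5, 35), (0.3, 60), (0.2, 55)]

def epsilon_constraint(points):
    """Maximize probability subject to profit >= eps, sweeping eps over the
    attained profit values; returns the efficient (Pareto) points."""
    efficient = set()
    for eps in sorted({profit for _, profit in points}):
        feas = [pt for pt in points if pt[1] >= eps]
        # lexicographic tie-break on profit avoids weakly dominated points
        best = max(feas, key=lambda pt: (pt[0], pt[1]))
        efficient.add(best)
    return sorted(efficient)

front = epsilon_constraint(candidates)
```

Dominated candidates such as (0.5, 35), which is beaten by (0.6, 40) in both objectives, never survive the sweep, so `front` approximates the set of efficient solutions.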
, M. Grazia Speranza, Emirena Garrafa
EURO Journal on Decision Processes, Volume 8, pp 41-60; https://doi.org/10.1007/s40070-019-00097-2

Abstract:
In this paper, we analyze and discuss the optimization challenges and opportunities raised by the decision of building an automated clinical laboratory in a hospital unit. We first describe the general decision setting from a strategic, tactical and operational perspective. We then focus on the analysis of a practical case, i.e., the Central Laboratory of a large urban academic teaching hospital in the North of Italy, the ‘Spedali Civili’ in Brescia. We will describe the present situation and the research opportunities related to the study of possible improvements of the management of the laboratory.
, Guy Desaulniers, Issmail El Hallaoui
EURO Journal on Transportation and Logistics, Volume 8, pp 713-744; https://doi.org/10.1007/s13676-019-00145-6

Abstract:
The integral simplex using decomposition (ISUD) algorithm was recently developed to solve efficiently set partitioning problems containing a number of variables that can all be enumerated a priori. This primal algorithm generates a sequence of integer solutions with decreasing costs, leading to an optimal or near-optimal solution depending on the stopping criterion used. In this paper, we develop an integral column generation (ICG) heuristic that combines ISUD and column generation to solve set partitioning problems with a very large number of variables. Computational experiments on instances of the public transit vehicle and crew scheduling problem and of the airline crew pairing problem involving up to 2000 constraints show that ICG clearly outperforms two popular column generation heuristics (the restricted master heuristic and the diving heuristic). ICG can yield optimal or near-optimal solutions in less than 1 hour of computational time, generating up to 300 integer solutions during the solution process.
, Sauleh Siddiqui
EURO Journal on Computational Optimization, Volume 8, pp 85-101; https://doi.org/10.1007/s13675-019-00115-7

Abstract:
We developed a gradient-based method to optimize the regularization hyper-parameter, C, for support vector machines in a bilevel optimization framework. On the upper level, we optimized the hyper-parameter C to minimize the prediction loss on validation data using stochastic gradient descent. On the lower level, we used dual coordinate descent to optimize the parameters of support vector machines to minimize the loss on training data. The gradient of the loss function on the upper level with respect to the hyper-parameter, C, was computed using the implicit function theorem combined with the optimality condition of the lower-level problem, i.e., the dual problem of support vector machines. We compared our method with the existing gradient-based method in the literature on several datasets. Numerical results showed that our method converges faster to the optimal solution and achieves better prediction accuracy for large-scale support vector machine problems.
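The implicit-function-theorem hyper-gradient can be sketched with ridge regression in place of the SVM lower level: ridge has a closed-form inner solution, which keeps the example short, while the chain of reasoning (differentiate the lower-level optimality condition with respect to the hyper-parameter) is the same. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic regression data, split into training and validation sets
X = rng.standard_normal((80, 5))
w_true = rng.standard_normal(5)
y = X @ w_true + 0.5 * rng.standard_normal(80)
Xtr, ytr, Xva, yva = X[:50], y[:50], X[50:], y[50:]

def val_loss_and_grad(lam):
    """Validation MSE and its gradient w.r.t. the regularization lam.
    Lower level: w(lam) = argmin ||Xtr w - ytr||^2 + lam ||w||^2.
    IFT on the stationarity condition (Xtr'Xtr + lam I) w = Xtr'ytr
    gives dw/dlam = -(Xtr'Xtr + lam I)^{-1} w."""
    H = Xtr.T @ Xtr + lam * np.eye(5)
    w = np.linalg.solve(H, Xtr.T @ ytr)
    dw = -np.linalg.solve(H, w)
    r = Xva @ w - yva
    return r @ r / len(yva), (2.0 / len(yva)) * r @ (Xva @ dw)

# Upper level: plain gradient descent on the hyper-parameter
lam = 5.0
for _ in range(100):
    loss, g = val_loss_and_grad(lam)
    lam = max(lam - 0.5 * g, 1e-6)
```

The hyper-gradient can be checked against finite differences, and the descent loop should not increase the validation loss relative to the starting hyper-parameter.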
Mukul Chavan, , Shirish Patil, Santanu Khataniar
Published: 28 May 2019
Petroleum Science, Volume 16, pp 1344-1360; https://doi.org/10.1007/s12182-019-0325-7

Abstract:
A thorough literature review is conducted that pertains to low-salinity-based enhanced oil recovery (EOR). This is meant to be a comprehensive review of all the refereed published papers, conference papers, master's theses and other reports in this area. The review is specifically focused on establishing various relations/characteristics or "screening criteria" such as: (1) classification/grouping of clays that have shown or are amenable to low-salinity benefits; (2) clay types vs. range of residual oil saturations; (3) API gravity and downhole oil viscosity range that is amenable to low salinity; (4) salinity range for EOR benefits; (5) pore sizes, porosity, absolute permeability and wettability range for low-salinity EOR; (6) continuous low-salinity injection vs. slug-wise injection; (7) grouping of possible low-salinity mechanisms; (8) contradictions or similarities between laboratory experiments and field evidence; and (9) compositional variations in tested low-salinity waters. A proposed screening criterion for low-salinity waterflooding is introduced. It can be concluded that one of these mechanisms, or a combination thereof, may operate in a given case, depending on the particular oil–brine–rock (OBR) system, rather than any single universally applicable mechanism. Therefore, every OBR system, being unique, ought to be individually investigated to determine the benefits (if any) of low-salinity water injection; however, the proposed screening criteria may help in narrowing down some of the dominant responsible mechanisms. Although this review primarily focuses on sandstones, given that carbonates contain ~60% of the world's oil reserves, a summary of possible mechanisms and screening criteria for low-salinity waterflooding in carbonates is also included.
Finally, the enhancement of polymer flooding by using low-salinity water as a makeup water to further decrease the residual oil saturation is also discussed.
Gang Yan, , Yan Liu, Peng-Hai Tang, Wei-Bin Liu
Published: 28 May 2019
Petroleum Science, Volume 16, pp 502-512; https://doi.org/10.1007/s12182-019-0326-6

Abstract:
Sesquiterpanes are ubiquitous components of crude oils and ancient sediments. Liquid saturated hydrocarbons from simulated pyrolysis experiments on immature organic-rich mudstone collected from the Lower Cretaceous Hesigewula Sag were analyzed by gas chromatography–mass spectrometry (GC–MS). C14 bicyclic sesquiterpanes, namely 8β(H)-drimane, 8β(H)-homodrimane, and 8α(H)-homodrimane, were detected and identified on the basis of their diagnostic fragment ions (m/z 123, 179, 193, and 207) and previously published mass spectral data, and these bicyclic sesquiterpanes showed relatively regular thermal-evolution characteristics. The ratios 8β(H)-drimane/8β(H)-homodrimane, 8β(H)-homodrimane/8α(H)-homodrimane, and 8β(H)-drimane/8α(H)-homodrimane all show a clear upward trend with increasing temperature below the temperature turning point. Thus, all these ratios can be used as evolution indexes of source rocks in the immature–low-maturity stage. However, the last two ratios may be more suitable than the first as valid parameters for measuring the extent of thermal evolution of organic matter in the immature–low-maturity stage because their change with increasing temperature is more pronounced.
Falk M. Hante,
EURO Journal on Computational Optimization, Volume 7, pp 299-323; https://doi.org/10.1007/s13675-019-00112-w

Abstract:
We consider nonlinear and nonsmooth mixing aspects in gas transport optimization problems. As mixed-integer reformulations of pooling-type mixing models already render small-size instances computationally intractable, we investigate the applicability of smooth nonlinear programming techniques for equivalent complementarity-based reformulations. Based on recent results for remodeling piecewise affine constraints using an inverse parametric quadratic programming approach, we show that classical stationarity concepts are meaningful for the resulting complementarity-based reformulation of the mixing equations. Further, we investigate in a numerical study the performance of this reformulation compared to a more compact complementarity-based one that does not feature such beneficial regularity properties. All computations are performed on publicly available data of real-world size problem instances from steady-state gas transport.
EURO Journal on Computational Optimization, Volume 8, pp 3-31; https://doi.org/10.1007/s13675-019-00114-8

Abstract:
The concept of leader-follower (or Stackelberg) equilibrium plays a central role in a number of real-world applications bordering on mathematical optimization and game theory. While the single-follower case has been investigated since the inception of bilevel programming with the seminal work of von Stackelberg, results for the case with multiple followers are only sporadic and not many computationally affordable methods are available. In this work, we consider Stackelberg games with two or more followers who play a (pure or mixed) Nash equilibrium once the leader has committed to a (pure or mixed) strategy, focusing on normal-form and polymatrix games. As customary in bilevel programming, we address the two extreme cases where, if the leader’s commitment originates more Nash equilibria in the followers’ game, one which either maximizes (optimistic case) or minimizes (pessimistic case) the leader’s utility is selected. First, we show that, in both cases and when assuming mixed strategies, the optimization problem associated with the search problem of finding a Stackelberg equilibrium is \(\mathcal {NP}\)-hard and not in Poly-\(\mathcal {APX}\) unless \(\mathcal {P} = \mathcal {NP}\). We then consider different situations based on whether the leader or the followers can play mixed strategies or are restricted to pure strategies only, proposing exact nonconvex mathematical programming formulations for the optimistic case for normal-form and polymatrix games. For the pessimistic problem, which cannot be tackled with a (single-level) mathematical programming formulation, we propose a heuristic black-box algorithm. All the methods and formulations that we propose are thoroughly evaluated computationally.
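The optimistic versus pessimistic equilibrium selection can be illustrated in the simplest setting: a pure-strategy leader commitment in a two-player bimatrix game, where follower ties break either for or against the leader. The payoff matrices below are hypothetical, and the paper treats mixed strategies and multiple followers:

```python
def stackelberg_pure(A, B, optimistic=True):
    """Leader commits to a pure strategy (a row); the follower plays a best
    response (a column) w.r.t. B.  When the follower is indifferent, ties
    break for (optimistic) or against (pessimistic) the leader's payoff A.
    Returns (leader value, committed row)."""
    pick = max if optimistic else min
    best_val, best_row = None, None
    for i, (arow, brow) in enumerate(zip(A, B)):
        bmax = max(brow)
        responses = [j for j, v in enumerate(brow) if v == bmax]
        val = pick(arow[j] for j in responses)
        if best_val is None or val > best_val:
            best_val, best_row = val, i
    return best_val, best_row

A = [[3, 1], [2, 4]]   # leader payoffs (hypothetical)
B = [[1, 1], [3, 1]]   # follower payoffs; row 0 leaves the follower indifferent
```

Here the optimistic leader commits to row 0 and gets 3, while the pessimistic leader must assume the hostile tie-break and prefers row 1 for a guaranteed 2, showing how the two selections can change both the value and the committed strategy.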
François Clautiaux, , François Vanderbeck, Quentin Viaud
EURO Journal on Computational Optimization, Volume 7, pp 265-297; https://doi.org/10.1007/s13675-019-00113-9

Abstract:
In the two-dimensional guillotine cutting-stock problem, the objective is to minimize the number of large plates used to cut a list of small rectangles. We consider a variant of this problem, which arises in the glass industry when different bills of order (or batches) are considered consecutively. For practical organizational reasons, leftovers are not reused, except the large one obtained in the last cutting pattern of a batch, which can be reused for the next batch. The problem can be decomposed into an independent problem for each batch. In this paper, we focus on the one-batch problem, the objective of which is to minimize the total width of the cutting patterns used. We propose a diving heuristic based on column generation, in which the pricing problem is solved using dynamic programming (DP). This DP generates so-called non-proper columns, i.e., cutting patterns that cannot participate in a feasible integer solution of the problem. We show how to adapt the standard diving heuristic to this "non-proper" case while keeping its effectiveness. We also introduce a partial enumeration technique, designed to reduce the number of non-proper patterns in the solution space of the dynamic program. This technique strengthens the lower bounds obtained by column generation and improves the quality of the solutions found by the diving heuristic. Computational results are reported and compared on classical benchmarks from the literature as well as on new instances inspired by glass-industry data. According to these results, variants of the proposed diving heuristic outperform constructive and evolutionary heuristics.