Tomomi Aoyama, Toshihiko Nakano, Ichiro Koshijima, Yoshihiro Hashimoto, Kenji Watanabe
Journal of Disaster Research, Volume 12, pp 1081-1090;

The purpose of this study is to illustrate how exercises can act as a driving force to improve an organization’s cyber security preparedness. The degree of cyber security preparedness varies significantly among organizations, which implies that training and exercises must be tailored to specific capabilities. In this paper, we review the National Institute of Standards and Technology (NIST) cybersecurity framework, which formalizes the concept of a tier that measures the degree of preparedness. Subsequently, we examine the types of exercises available in the literature and propose guidelines that assign specific exercise types, aims, and participants to each level of preparedness. The proposed guidelines should facilitate the reinforcement of cybersecurity risk management practices, reduce resource misuse, and lead to a smooth improvement of capabilities.
Shohei Naito, Ken Xiansheng Hao, Shigeki Senna, Takuma Saeki, Hiromitsu Nakamura, Hiroyuki Fujiwara, Takashi Azuma
Journal of Disaster Research, Volume 12, pp 899-915;

In the 2016 Kumamoto earthquake, the Futagawa fault zone and the Hinagu fault zone were active in some sections, causing severe damage in neighboring areas along the faults. We conducted a detailed investigation of the surface earthquake fault, building damage, and site amplification of shallow ground within about 1 km of the fault. The focus was mainly on the Kawayou district of Minamiaso village and the Miyazono district of Mashiki town, locations that suffered particularly severe building damage. We explored the relationship between local strong motion and building damage in areas in the immediate vicinity of the active fault.
Ken-Ichi Shimose, Shingo Shimizu, Koyuru Iwanami
Journal of Disaster Research, Volume 12, pp 956-966;

This study reports preliminary results from the three-dimensional variational method (3DVAR) with incremental analysis updates (IAU) of the surface wind field, which is suitable for real-time processing. In this study, 3DVAR with IAU was calculated for the case of a tornadic storm using 500-m horizontal grid spacing with updates every 10 min, for 6 h. Radial velocity observations by eight X-band multi-parameter Doppler radars and three Doppler lidars around the Tokyo Metropolitan area, Japan, were used for the analysis. Three types of analyses were performed between 1800 and 2400 LST (local standard time: UTC + 9 h) on 6 September 2015. The first used only 3DVAR (3DVAR), the second used 3DVAR with IAU (3DVAR+IAU), and the third did not use data assimilation (CNTL). 3DVAR+IAU showed the best accuracy of the three analyses, and 3DVAR alone showed the worst, even though the background was updated every 10 min. Sharp spike signals were observed in the time series of wind speed at 10 m AGL analyzed by 3DVAR, strongly suggesting that a “shock” was caused by dynamic imbalance due to the instantaneous addition of analysis increments to the background wind components. The spike signal did not appear in the 3DVAR+IAU analysis; we therefore suggest that the IAU method reduces the shock caused by the addition of analysis increments. This study provides useful information on the most suitable data assimilation method for the real-time analysis of surface wind fields.
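The contrast between direct increment insertion and IAU described above can be illustrated with a toy one-dimensional model (a hypothetical sketch, not the paper's system): adding the whole analysis increment in a single step produces a jump in the wind field, while spreading the same increment as a constant forcing over a time window does not.

```python
import numpy as np

# Toy damped 1-D "wind component": apply a 5 m/s analysis increment
# either all at once (direct insertion) or spread over 30 steps (IAU).
def run(window_steps, n_steps=60, increment=5.0, decay=0.9):
    u = np.zeros(n_steps)
    for t in range(1, n_steps):
        # IAU forcing: a constant fraction of the increment per step
        forcing = increment / window_steps if t <= window_steps else 0.0
        u[t] = decay * u[t - 1] + forcing
    return u

u_direct = run(window_steps=1)    # whole increment in one step -> spike
u_iau = run(window_steps=30)      # spread over 30 steps -> smooth

# Largest step-to-step jump, a crude proxy for the "shock"
shock_direct = np.max(np.abs(np.diff(u_direct)))
shock_iau = np.max(np.abs(np.diff(u_iau)))
```

The direct run jumps by the full increment in one step, while the IAU run changes gradually, mirroring the spike-free 3DVAR+IAU time series described in the abstract.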
Yuichiro Usuda, Makoto Hanashima, Ryota Sato, Hiroaki Sano
Journal of Disaster Research, Volume 12, pp 1002-1014;

In disaster response, wherein many organizations undertake activities simultaneously and in parallel, it is important to unify the overall recognition of the situation through information sharing. Furthermore, each organization must respond appropriately by utilizing this information. In this study, we developed the Shared Information Platform for Disaster Management (SIP4D), targeted at government offices, ministries, and agencies, to carry out information sharing by intermediating between various information systems. We also developed a prototype of the National Research Institute for Earth Science and Disaster Resilience (NIED) Crisis Response Site (NIED-CRS), which provides the obtained information on the web. We applied these systems to support disaster response efforts in the 2016 Kumamoto earthquakes and other natural disasters, and analyzed the effects of and issues experienced with the information-sharing systems. As effects, we found 1) increased overall efficiency, 2) the validity of sharing alternative information, and 3) the possibility of using the system as a basis for information integration. As future issues, we highlight the need for 1) advance loading of data, 2) machine readability of top-down data, and 3) identification of the minimum common required items and standardization of bottom-up data.
Yohsuke Kawamata, Manabu Nakayama, Ikuo Towhata, Susumu Yasuda
Journal of Disaster Research, Volume 12, pp 868-881;

Underground structures are generally considered to have high seismic performance and are expected to play an important role as a base for reconstruction even after a destructive earthquake. Rigidity-changing points, such as jointed and curved portions of an underground structure, where localized deformation and stress are expected to be generated, are among the most critical portions in terms of the seismic performance of underground structures. Because the underground structures of a mega-city function as a network, local damage could lead to fatal dysfunction. Accordingly, rigidity-changing points and their surrounding areas could significantly influence the resiliency of urban functions, and it is indispensable to evaluate their seismic performance and dynamic responses during earthquakes. Although attempts have been made to evaluate the responses of rigidity-changing points and their surrounding areas using large-scale numerical analyses, there is no case in which these responses have been measured in detail, which makes it difficult to verify the validity of such evaluations. In light of the above, a shake table test was conducted at E-Defense using a coupled specimen of soil and underground structures to obtain detailed data, especially on the localized responses around rigidity-changing points during an earthquake. Based on the data obtained, the behavior of the underground structure with a curved portion during an earthquake was analyzed comprehensively. The analysis of the test data shows a strong correlation between the localized deformation of the curved portion of the tunnel and the displacement of the surrounding ground. In addition, it is necessary to conduct a three-dimensional seismic response analysis not only around the rigidity-changing point but also over a wider area.
Hideki Ueda, Toshikazu Tanada
Journal of Disaster Research, Volume 12, pp 932-943;

Mt. Tarumae is an active volcano located in the southeast of the Shikotsu caldera, Hokkaido, Japan. Recently, crustal expansion occurred in 1999–2000 and 2013 near the summit of Mt. Tarumae, and a M5.6 earthquake was recorded west of the summit on July 8, 2014. In this study, we determined hypocenter distributions and performed b-value analysis for the period between August 1, 2014 and August 12, 2016 to improve our understanding of the geometry of the magma system beneath the summit of Mt. Tarumae. Hypocenters were mainly distributed in two regions: 3–5 km west of Mt. Tarumae, and beneath the volcano. We then determined b-value distributions. Regions with relatively high b-values (1.3) were located at depths of –0.5 to 2.0 km beneath the summit and at depths greater than 6.0 km about 1.5–3.0 km northwest of the summit, whereas a region with relatively low b-values (0.6) was located at depths of 2.0–6.0 km beneath the summit. Based on a comparison of the b-value distributions with other geophysical observations, the high b-value region from –0.5 to 2.0 km in depth corresponded to regions of lower resistivity, a positive self-potential anomaly, and an inflation source detected in 1999–2000. Therefore, it is inferred that this region was generated by crustal heterogeneity, a decrease in effective normal stress, and changes in frictional properties caused by the development of faults and fissures and the circulation of hydrothermal fluids. On the other hand, the inflation source detected in 2013 was located near the boundary between the low b-value region beneath the summit and the deeper high b-value region about 1.5–3.0 km northwest of the summit. Studies of other volcanoes have suggested that such high b-values likely correspond to the presence of a magma chamber. Based on the deeper high b-value region estimated in this study, the magma chamber is inferred to be located at depths greater than 6.0 km, about 1.5–3.0 km northwest of the summit. Thus, these findings contribute to our understanding of the magma plumbing system beneath the summit of Mt. Tarumae.
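The b-value analysis above rests on the Gutenberg-Richter relation. As a concrete illustration, the standard maximum-likelihood estimator of Aki (1965) can be sketched as follows (a generic sketch on a synthetic catalog; the function name and data are ours, not the paper's):

```python
import numpy as np

def b_value_aki(mags, mc):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki, 1965):
    b = log10(e) / (mean magnitude - completeness magnitude mc)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]                      # keep only complete part of catalog
    return np.log10(np.e) / (m.mean() - mc)

# Synthetic catalog: magnitudes above mc follow an exponential law
# whose scale log10(e)/b corresponds to a true b-value of 1.0.
rng = np.random.default_rng(0)
mc = 1.0
mags = mc + rng.exponential(scale=np.log10(np.e) / 1.0, size=20000)
b_est = b_value_aki(mags, mc)          # recovers b close to 1.0
```

For catalogs binned in magnitude increments Δm, Utsu's correction replaces mc with mc − Δm/2 in the denominator; mapping such estimates over a hypocenter grid yields the spatial b-value distributions discussed in the abstract.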
Haruo Hayashi, Yuichiro Usuda
Journal of Disaster Research, Volume 12, pp 843-843;

In April 2016, our institute, NIED, under its new English name, the “National Research Institute for Earth Science and Disaster Resilience,” commenced its fourth mid-to-long-term planning period, set to last seven years. We are constantly required to carry out comprehensive efforts, including observations, forecasts, experiments, assessments, and countermeasures related to a variety of natural disasters, including earthquakes, tsunamis, volcanic eruptions, landslides, heavy rains, blizzards, and ice storms. Since this is NIED’s first special issue for the Journal of Disaster Research (JDR), works were collected on a wide variety of topics from research divisions and centers as well as from ongoing projects in order to give an overview of the latest achievements of the institute. We are delighted to present 17 papers on five topics: seismic disasters, volcanic disasters, climatic disasters, landslide disasters, and the development of comprehensive Information and Communications Technology (ICT) for disaster management. Even though the achievements detailed in these papers are certainly the results of individual research, NIED hopes to maximize them for the promotion of science and technology for disaster risk reduction and resilience as a whole. It is our hope that this special issue awakens readers’ interest in these studies and, of course, creates opportunities for further collaborative work with us.
Hideyuki Shintani, Tomomi Aoyama, Ichiro Koshijima
Journal of Disaster Research, Volume 12, pp 1073-1080;

In order to operate the Internet of Things (IoT) or Cyber-Physical Systems (CPS) in the real world, the system needs to be structured so that people in the real world are incorporated as part of its process: Human-in-the-Loop CPS (HITLCPS). With people thus incorporated, the system must have a secure structure so that it can continue operating normally. With sensors, actuators, and other devices connected in a network, it becomes vulnerable to cyberattacks; hence, its framework must be resilient and secure in order to ensure its safety in the face of any disturbances. In this paper, we describe a safety-based secure system structure using a STAMP model and a covariance structure.
Tadashi Ise, Takuya Takahashi, Ryota Sato, Hiroaki Sano, Takeshi Isono, Makoto Hanashima, Yuichiro Usuda
Journal of Disaster Research, Volume 12, pp 1028-1038;

In order to efficiently gather and effectively utilize the information fragments collected in the initial stage of disaster response, those who utilize shared information need to determine which information to gather and to conduct appropriate processing as necessary. On the occasion of the 2016 Kumamoto earthquakes, the National Research Institute for Earth Science and Disaster Resilience (NIED) sent a resident researcher to the Kumamoto Prefectural Office the following day to provide disaster information support, which included organizing various pieces of disaster information collected via telephone, fax, and the like on a WebGIS to generate an information map that was then provided to bodies carrying out disaster response. In light of this series of disaster information support activities, this article analyzes the requirements for utilizing disaster information at a disaster response site; in other words, it addresses the problem of effectively utilizing a large amount of shared information in conducting disaster response activities. As a result, an outline of the information items necessary for the utilization of disaster information has become clear. This suggests how a system could be conceived for each disaster response body to utilize disaster information in carrying out activities at the disaster site.
Shingo Shimizu, Seiichi Shimada, Kazuhisa Tsuboki
Journal of Disaster Research, Volume 12, pp 944-955;

In this study, we examined variations in predicted precipitable water produced by different Global Positioning System (GPS) zenith delay methods, and assessed the corresponding difference in predicted rainfall after assimilating the obtained precipitable water data. Precipitable water data estimated from GPS and the three-dimensional horizontal wind velocity field derived from X-band dual-polarimetric radar were assimilated in CReSS, and rainfall forecast experiments were conducted for the heavy rainfall system in Kani City, Gifu Prefecture on July 15, 2010. In the GPS analysis, a method to simultaneously estimate coordinates and zenith delay (the simultaneous estimation method) and a method to successively estimate coordinates and zenith delay (the successive estimation method) were used to estimate precipitable water. The differences arising from using predicted orbit data provided in pseudo-real time by the International GNSS (Global Navigation Satellite System) Service for geodynamics (IGS) versus precise orbit data released after a 10-day delay were also examined. The change in precipitable water due to varying the analysis method was larger than that due to the type of satellite orbit information. In the rainfall forecast experiments, those using the successive estimation method results were more precise than those using the simultaneous estimation method results. Both methods that included data assimilation yielded higher rainfall forecast precision than the forecast without precipitable water assimilation. Water vapor obtained from GPS analysis is accepted as important in rainfall forecasting, but the present study showed that additional improvements can be attained by incorporating a zenith delay analysis method.
Shingo Shimizu, Ken-Ichi Shimose, Koyuru Iwanami
Journal of Disaster Research, Volume 12, pp 967-979;

The forecast accuracy of a numerical weather prediction (NWP) model for a very short time range (≤1 h) for a meso-γ-scale (2–20 km) extremely heavy rainfall (MγExHR) event that caused flooding at the Shibuya railway station in Tokyo, Japan on 24 July 2015 was compared with that of an extrapolation-based nowcast (EXT). The NWP model used CReSS with 0.7 km horizontal grid spacing, and storm-scale data from dense observation networks (radars, lidars, and microwave radiometers) were assimilated using CReSS-3DVAR. The forecast accuracy for the heavy rainfall area (≥20 mm h-1), as a function of forecast time (FT), was investigated for the NWP model and EXT predictions using the fractions skill score (FSS) for various spatial scales of displacement error (L). These predictions were started 30 minutes before the onset of extremely heavy rainfall at Shibuya station. The FSS for L=1 km, i.e., grid-scale verification, showed that NWP accuracy was lower than that of EXT before FT=40 min; however, NWP accuracy surpassed that of EXT from FT=45 to 60 min. This suggests the possibility of seamless, high-accuracy forecasts of heavy rainfall (≥20 mm h-1) associated with MγExHR events within a very short time range (≤1 h) by blending EXT and NWP outputs. We also discuss why the NWP model predicted the heavy rainfall area within this very short time range more accurately than EXT did. To support this discussion, additional sensitivity experiments with a different assimilation method for radar reflectivity were performed. It was found that a moisture adjustment above the lifting condensation level using radar reflectivity was critical to forecasting the heavy rainfall near Shibuya station after 25 min.
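The fractions skill score used above compares the fractional coverage of rain exceeding a threshold within spatial neighborhoods, so a forecast displaced by less than the neighborhood scale is not fully penalized. A minimal sketch on toy rain fields (our own illustration, not the paper's data or code):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def fss(fcst, obs, threshold, n):
    """Fractions skill score over n x n neighborhoods:
    FSS = 1 - sum((Pf - Po)^2) / (sum(Pf^2) + sum(Po^2)),
    where Pf, Po are neighborhood fractions exceeding the threshold."""
    bf = (fcst >= threshold).astype(float)
    bo = (obs >= threshold).astype(float)
    pf = sliding_window_view(bf, (n, n)).mean(axis=(2, 3))
    po = sliding_window_view(bo, (n, n)).mean(axis=(2, 3))
    return 1.0 - ((pf - po) ** 2).sum() / ((pf ** 2).sum() + (po ** 2).sum())

# A 5x5 heavy-rain block that the "forecast" displaces by 2 grid cells:
obs = np.zeros((20, 20)); obs[5:10, 5:10] = 30.0
fcst = np.zeros((20, 20)); fcst[5:10, 7:12] = 30.0

fss_grid = fss(fcst, obs, threshold=20.0, n=1)   # grid-scale verification
fss_wide = fss(fcst, obs, threshold=20.0, n=5)   # larger displacement scale
```

At n=1 (the L=1 km case in the abstract) the 2-cell displacement is penalized heavily; at n=5 the neighborhoods overlap and the score rises, which is why FSS is evaluated over a range of displacement-error scales L.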
Tsuneo Ohsumi, Hiroyuki Fujiwara
Journal of Disaster Research, Volume 12, pp 891-898;

The purpose of this study is to verify fault modeling in the source region of the 1940 Shakotan-Oki earthquake using active faults offshore of Japan. Tsunami heights simulated in previous studies are found to be lower than observed levels, which makes it difficult to explain historical tsunami records of this earthquake. However, the application of appropriate slip magnitudes in the fault models may explain these differences. In the “Project for the Comprehensive Analysis and Evaluation of Offshore Fault Informatics” (the Project), a new fault model is constructed using marine seismic data and geological and geophysical data compiled by the Offshore Fault Evaluation Group, Japan Agency for Marine-Earth Science and Technology (JAMSTEC), as part of the Project for Fault Evaluation in the Seas around Japan (Ministry of Education, Culture, Sports, Science and Technology, MEXT). Single-channel and multichannel reflection seismic data were used, including information from a new fault identified in previous surveys. We investigated fault geometries and their parameters using the above data. Here, we show that the geometric continuity of these faults is adjusted by increasing the magnitude of fault slip. Standard scaling laws for strong ground motion are applied to the fault parameters, and the validity of the fault model is examined by comparing historically observed tsunami heights along the Japanese coastline with tsunami heights from simulation analysis. This verification quantitatively uses Aida’s K (geometric mean ratio) and κ (variance) parameters. We determine that the simulated tsunami heights from the new model approach the historically observed heights, which indicates that the model is valid and accurate for the source region.
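Aida's K and κ, used above for the quantitative comparison, are the geometric mean and geometric scatter of the observed-to-simulated height ratios, computed in log10 space. A minimal sketch (hypothetical heights and our own function name):

```python
import numpy as np

def aida_k_kappa(observed, simulated):
    """Aida's (1978) parameters: K is the geometric mean of the
    observed/simulated tsunami-height ratios and kappa their
    geometric scatter, both computed in log10 space."""
    log_ratio = np.log10(np.asarray(observed) / np.asarray(simulated))
    log_k = log_ratio.mean()
    log_kappa = np.sqrt((log_ratio ** 2).mean() - log_k ** 2)
    return 10.0 ** log_k, 10.0 ** log_kappa

# If a simulation underestimated every height by a factor of 2,
# K would be 2 (systematic bias) and kappa 1 (no scatter):
K, kappa = aida_k_kappa([2.0, 4.0, 8.0], [1.0, 2.0, 4.0])
```

K close to 1 indicates an unbiased model and κ close to 1 indicates small scatter, which is the sense in which the abstract's verification shows the new fault model reproduces the historical heights.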
Kan Shimazaki, Yoshinobu Mizui
Journal of Disaster Research, Volume 12, pp 916-925;

This study quantitatively analyzes the differences between the actual focal region of the Nankai Trough Giant Earthquake, which is expected to occur in the future, and the conceptual focal region drawn on a map by 595 students. It also examines the differences between the subjective expectation and the scientific prediction of the seismic intensity at each respondent's residence, to identify the relationship between such differences and respondent variables such as residence, attributes, and experience. The examination makes the following findings clear: the subjective expectation of the focal region of the Nankai Trough Giant Earthquake deviates largely eastwards; those whose own residence or parents' home lies in the area forecasted to be affected by the earthquake recognize its focal region better; and those who have taken disaster prevention measures, such as stocking emergency goods and participating in disaster drills, account for a smaller percentage of the respondents who underestimated the seismic intensity at their residence.
Ritsuko Aiba, Takeshi Hiromatsu
Journal of Disaster Research, Volume 12, pp 1060-1072;

This paper introduces previous studies that propose a model supporting decision-making on information security risk treatment by the top management of an organization, together with an assessment of the model using statistical data. Statistical data are used to assess the model because the data necessary for information security risk treatment are generally not disclosed for security reasons, making verification with actual data difficult.
Ryoichi Sasaki
Journal of Disaster Research, Volume 12, pp 1040-1049;

With society’s increasing dependence on information technology (IT) systems, it is becoming increasingly difficult to resolve safety problems related to IT systems through conventional information security technology alone. Accordingly, under the heading of “IT risk” research, we have been investigating ways to address broader safety problems that arise in relation to IT systems themselves, along with the services and information they handle, in situations that include natural disasters, malfunctions, and human error, as well as risks arising from wrongdoing. Through our research, we confirmed that a risk communication-based approach is essential for resolving IT risk problems, and clarified five issues that pertain to a risk-based approach. Simultaneously, as tools to support problem resolution, we developed a multiple risk communicator (MRC) for consensus formation within organizations, along with Social-MRC for social consensus formation. The results of our research are detailed in this paper.
Naoshi Sato
Journal of Disaster Research, Volume 12, pp 1050-1059;

In this paper, we discuss the current situation and problems of cyberattacks from multiple viewpoints, and propose a guideline for future countermeasures. First, we provide an overview of some trends in cyberattacks using various survey data and reports. Next, we examine a new cyberattack countermeasure to control Internet use and propose a specific guideline. Specifically, we propose an Internet user qualification system as a policy to maintain cyber security and discuss ways to realize the system, the expected effects, and problems to be solved.
Toshikazu Tanada, Hideki Ueda, Masashi Nagai, Motoo Ukawa
Journal of Disaster Research, Volume 12, pp 926-931;

In response to the recommendation of the Council for Science and Technology (Subdivision on Geodesy and Geophysics), the National Research Institute for Earth Science and Disaster Resilience (NIED) constructed a network of stations to observe 11 volcanoes: Tokachidake, Usuzan, Tarumaesan, Hokkaido-Komagatake, Iwatesan, Kusatsu-Shiranesan, Asamayama, Asosan, Kirishimayama, Unzendake, and Kuchinoerabujima. At each new station, a borehole seismograph and tiltmeter, a broadband seismograph, and a GNSS (GPS) receiver were installed. NIED has now established 55 stations at 16 volcanoes, adding five volcanoes, namely Izu-Oshima, Miyakejima, Ogasawara Iwoto, Mt. Fuji, and Nasu-dake, and has constructed a new volcano observation network linking the 11 original volcanoes. NIED calls the combination of the new and earlier networks the fundamental volcano observation network (V-net). Under a fully open policy, data from the borehole seismographs and tiltmeters, broadband seismographs, rain gauges, barometers, and quartz thermometers in the pressure vessels of the borehole seismographs and tiltmeters are distributed in real time to institutes such as the Japan Meteorological Agency and universities over NIED’s conventional seismic observation data distribution system. GNSS (GPS) data are regularly distributed to relevant research institutes, such as the Geospatial Information Authority of Japan, using the file transfer protocol (FTP). In addition, since everyone can use these data for the promotion of volcano research and volcanic disaster prevention, it is now possible to view seismic waves and download data from NIED’s website.
Kenji Watanabe
Journal of Disaster Research, Volume 12, pp 1039-1039;

As our daily lives and socioeconomic activities have increasingly come to depend on information systems and networks, the impact of disruptions to these systems and networks has also become more complex and diversified.
Toru Danjo, Tomohiro Ishizawa, Masamitsu Fujimoto, Naoki Sakai
Journal of Disaster Research, Volume 12, pp 993-1001;

Every year in Japan, slope failures occur due to heavy rainfall during the wet season and typhoon season. The main causes of slope failure are thought to be the increase in soil weight from infiltrated precipitation, the decrease in shear strength, and the effects of rising groundwater levels. It is therefore important to consider the characteristics of groundwater behavior to improve slope disaster prevention. Kiyomizu-dera experienced major slope failures in 1972, 1999, and 2013, and a large slope failure occurred nearby in 2015. The two most recent events occurred after observation of precipitation and groundwater conditions began at the site in 2004. In this research, we determine the relationship between rainfall and groundwater level using both a full-scale model experiment and field measurements. Results indicate a strong connection between rainfall intensity and the rate of increase in groundwater level, indicating that it is possible to predict changes in the groundwater level due to heavy rainfall.
Makoto Hanashima, Ryota Sato, Yuichiro Usuda
Journal of Disaster Research, Volume 12, pp 1015-1027;

The purpose of this paper is to consider the essential concept by which to formulate standardized information that supports effective disaster response. From the experiences of past disasters, we have learned that disaster response organizations could not work effectively without information sharing. In the context of disaster response, the purpose of “information sharing” is to ensure common recognition of the disaster situation being confronted. During the Kumamoto earthquake, we provided a set of disaster information products to disaster response organizations to support their relief activities. Based on the real disaster response experience, we extracted issues of information sharing between various organizations. To resolve these issues, we discuss the concept of information sharing first, and then consider the quality of information that supports disaster response activities by referring to the information needs of emergency support organizations such as the Disaster Medical Assistance Team (DMAT). We also analyze the Basic Disaster Management Plan published by the Central Disaster Management Council and extract a common disaster-information set for governmental organizations. As a result, we define the “Standard Disaster-information Set” (SDS) that covers most disaster response information needs. Based on the SDS, we formulate intermediate information products for disaster response that provide consistent information of best-effort quality, named the “Standardized Disaster-information Products” (SDIP). By utilizing the SDIP, disaster response organizations are able to consolidate the common recognition of disaster situations without consideration of data availability, update timing, reliability, and so on.
Makoto Matsubara, Hiroshi Sato, Masashi Mochizuki, Toshihiko Kanazawa
Journal of Disaster Research, Volume 12, pp 844-857;

Tomographic analysis of the seismic velocity structure beneath oceans has always been difficult because offshore events located by onshore seismic networks have large uncertainties in depth. In order to use reliable event locations in our computations, we developed a method that uses the hypocentral depths determined by the NIED F-net moment tensor solutions, which use long-period (20-50 s) waves, for offshore events away from onshore seismic networks. We applied a seismic tomographic method to events occurring between the years 2000 and 2015 to generate a tomographic image of the Japanese Islands and the surrounding region, using travel time data picked by the NIED Hi-net, hypocentral information for onshore earthquakes from the Hi-net, and hypocenter information for offshore events from the F-net. The seismic velocity structure at depths of 30-50 km beneath the Pacific Ocean off the east coast of northeastern Japan and beneath onshore Japan was clearly imaged using both onshore and offshore event data. The boundary between high and low P-wave velocities (Vp) is clearly seen at the Median Tectonic Line beneath southwestern Japan at depths of 10 and 20 km. We discuss how the high-Vp lower crust and low-Vp upper crust beneath central Japan and towards the Sea of Japan are related to the failed rift structures formed during the opening of the Sea of Japan. Due to subsequent shortening, crustal deformation has been concentrated along the failed rift zone. The resolution of shallow structures beneath the ocean was investigated using S-net data, confirming the possibility of imaging depths of 5-20 km. In future studies, application of S-net data will be useful in evaluating whether the failed rift structure, formed during the late Cretaceous to early Tertiary, continues towards the shallow regions beneath the Pacific Ocean.
Tomohiro Ishizawa, Toru Danjo, Naoki Sakai
Journal of Disaster Research, Volume 12, pp 980-992;

The failure time of a slope is predicted by methods based on creep failure theory for slope displacement on natural slopes, embankments, and cut slopes. These prediction methods employ several equations based on the relationship between the displacement rate (displacement velocity) and time. However, such methods are problematic because the shape of the tertiary creep curve is affected by many conditions, and it is difficult to identify the phase of tertiary creep. This study examines the change over time in the displacement rate of the slope and derives an index for identifying the phase of tertiary creep. Two models of large-scale composite granite slopes were tested using a large-scale rainfall simulator. In the experiments, the slope displacements were monitored in real time. From these results, inflection points were found in the velocity of the slope displacement, and the corresponding inflection points at different locations in the sliding soil mass were found to occur at the same time. This paper discusses the effectiveness of predicting slope failure time by using the inflection points of the displacement rate in real-time monitoring records.
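An inflection point in a displacement-rate record is where the second time-derivative of the velocity changes sign. A sketch of how such points might be located numerically (our own illustration with a synthetic logistic velocity curve, not the authors' procedure):

```python
import numpy as np

def inflection_indices(t, velocity):
    """Return indices where the second time-derivative of the
    displacement rate changes sign (candidate inflection points)."""
    d2v = np.gradient(np.gradient(velocity, t), t)
    return np.where(np.diff(np.sign(d2v)) != 0)[0]

# Synthetic displacement rate: smooth acceleration toward failure,
# modeled as a logistic curve with a single inflection at t = 5.
t = np.linspace(0.0, 10.0, 1001)
v = 1.0 / (1.0 + np.exp(-(t - 5.0)))
idx = inflection_indices(t, v)          # recovers a point near t = 5
```

Field records are far noisier than this synthetic curve, so in practice the velocity series would be smoothed before differentiating; the abstract's observation that inflection points at different sensor locations coincide in time is what makes them usable as a tertiary-creep index.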
Tomohiro Sasaki, Koichi Kajiwara, Takuzo Yamashita, Takuya Toyoshi
Journal of Disaster Research, Volume 12, pp 858-867;

A shake table test of a small-scale steel frame structure was conducted using the large-scale earthquake simulator at the National Research Institute for Earth Science and Disaster Resilience (NIED) in Tsukuba, Ibaraki. This paper presents a performance evaluation of Micro Electro Mechanical Systems (MEMS) accelerometers, which have recently come into use in various fields, compared with conventional servo-type accelerometers. In addition, this paper discusses a method for integrating the measured accelerations into displacements that is suitable for evaluating structural damage due to strong earthquakes.
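The integration step discussed above can be sketched as a cumulative trapezoidal double integration with mean removal at each stage to limit low-frequency drift (a simplified illustration; the paper's actual method and filtering details are not reproduced here):

```python
import numpy as np

def integrate_to_displacement(acc, dt):
    """Doubly integrate an acceleration record (trapezoidal rule),
    removing the mean at each stage to suppress drift; practical
    processing usually adds high-pass filtering as well."""
    def cumtrapz(y):
        return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * dt / 2.0)))
    vel = cumtrapz(acc - acc.mean())
    return cumtrapz(vel - vel.mean())

# Sanity check on a sinusoid: acc = -(2*pi*f)^2 * A * sin(2*pi*f*t)
# should integrate back to approximately A * sin(2*pi*f*t).
dt, f, A = 0.001, 1.0, 0.01
t = np.arange(0.0, 10.0, dt)
acc = -(2.0 * np.pi * f) ** 2 * A * np.sin(2.0 * np.pi * f * t)
disp = integrate_to_displacement(acc, dt)
```

Low-frequency sensor noise is amplified quadratically by double integration, which is why the choice of baseline correction and filtering matters when comparing MEMS and servo-type records.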
Takuzo Yamashita, Mahendra Kumar Pal, Kazutoshi Matsuzaki, Hiromitsu Tomozawa
Journal of Disaster Research, Volume 12, pp 882-890;

To construct a virtual reality (VR) experience system for interior damage due to an earthquake, VR image contents were created by obtaining images, sounds, and vibration data, with synchronization information, from multiple devices in a room on the 10th floor of a 10-story RC structure tested on the E-Defense shake table. An application for displaying 360-degree images of interior damage using a head-mounted display (HMD) was developed. The developed system was exhibited at public disaster prevention events, and a questionnaire survey was then conducted to assess the usefulness of the VR experience in disaster prevention education.
Manoj Kanta Mainali, Kaoru Shimada, Shingo Mabu, Kotaro Hirasawa
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 12, pp 546-553;

One of the main functions of traffic navigation systems is to find the optimal route to the destination. In this paper, we propose an iterative Q value updating algorithm, the Q method, based on dynamic programming to search for the optimal route and its optimal traveling time for a given Origin-Destination (OD) pair in a road network. The Q method uses the traveling time information available at adjacent intersections to search for the optimal route. The Q value is defined as the minimum traveling time to the destination when a vehicle takes the next intersection. When the Q values converge, the optimal route to the destination can be determined by choosing the minimum Q value at each intersection. The Q method gives solutions from multiple origins to a single destination. The proposed method is not restricted to finding a single solution: if there exist multiple optimal routes with identical traveling times to the destination, it can find all of them. In addition, when the traveling times of road sections change, an alternative optimal route can be found easily by starting from the already obtained Q values. We compared the Q method with the Dijkstra algorithm, and the simulation results showed that the Q method can give better performance, depending on the situation, when the traveling times of road sections change.
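As a rough illustration of the iterative update described in this abstract, the following sketch performs a Bellman-style minimum-travel-time update toward a single destination until the Q values converge, then recovers the route by choosing the minimum Q value at each intersection. The graph, node names, and travel times are invented for illustration and do not come from the paper.

```python
# Sketch of a Bellman-style minimum-travel-time update toward one
# destination. Graph, node names, and edge times are illustrative.

def q_route(graph, dest):
    """graph: {node: {neighbor: travel_time}}. Returns (Q, next_hop)."""
    q = {n: float("inf") for n in graph}
    q[dest] = 0.0
    changed = True
    while changed:                       # iterate until Q values converge
        changed = False
        for n, edges in graph.items():
            if n == dest or not edges:
                continue
            best = min(t + q[m] for m, t in edges.items())
            if best < q[n] - 1e-12:
                q[n] = best
                changed = True
    # Optimal route: choose the minimum Q value at each intersection
    nxt = {n: min(e, key=lambda m: graph[n][m] + q[m])
           for n, e in graph.items() if n != dest and e}
    return q, nxt

graph = {
    "A": {"B": 2, "C": 5},
    "B": {"C": 1, "D": 4},
    "C": {"D": 1},
    "D": {},
}
q, nxt = q_route(graph, "D")   # q["A"] is the optimal time from A to D
```

Because the Q values for every node are retained, re-running the loop after an edge time changes starts from the previous solution rather than from scratch, which is the warm-start property the abstract highlights.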
Kazuma Matsumoto, Takato Tatsumi, Hiroyuki Sato, Tim Kovacs, Keiki Takadama
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 856-867;

The classification accuracy of neural networks has been improved by deep learning, and in some fields it now exceeds that of humans. This paper proposes a hybrid system of a neural network and a Learning Classifier System (LCS), an evolutionary rule-based machine learning technique that uses reinforcement learning. To increase the classification accuracy, we combine the neural network with the LCS. We conducted benchmark experiments to verify the proposed system. The experiments revealed that: 1) the classification accuracy of the proposed system is higher than that of the conventional LCS (XCSR) and a normal neural network; and 2) the covering mechanism of XCSR raises the classification accuracy of the proposed system.
Nobuhiko Yamaguchi
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 825-831;

Gaussian process dynamical models (GPDMs) are used for nonlinear dimensionality reduction in time series by means of Gaussian process priors. An extension of GPDMs is proposed for visualizing the states of time series. The conventional GPDM approach associates a state with an observation value. Therefore, observations changing over time cannot be represented by a single state. Consequently, the resulting visualization of state transition is difficult to understand, as states change when the observation values change. To overcome this issue, autoregressive GPDMs, called ARGPDMs, are proposed. They associate a state with a vector autoregressive (VAR) model. Therefore, observations changing over time can be represented by a single state. The resulting visualization is easier to understand, as states change only when the VAR model changes. We demonstrate experimentally that the ARGPDM approach provides better visualization compared with conventional GPDMs.
Hikaru Sasaki, Tadashi Horiuchi, Satoru Kato
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 840-848;

Deep Q-network (DQN) is one of the most famous methods of deep reinforcement learning. DQN approximates the action-value function using a convolutional neural network (CNN) and updates it using Q-learning. In this study, we applied DQN to robot behavior learning in a simulation environment. We constructed the simulation environment for a two-wheeled mobile robot using the robot simulation software Webots. The mobile robot acquired good behaviors, such as avoiding walls and moving along a center line, by learning from high-dimensional visual information supplied as input data. We propose a method that reuses the best target network obtained so far when the learning performance suddenly falls. Moreover, we incorporate the Profit Sharing method into DQN in order to accelerate learning. Through the simulation experiments, we confirmed that our method is effective.
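The "reuse the best target network" idea can be hedged into a minimal sketch: snapshot the target parameters whenever performance improves, and restore that snapshot when the learning performance suddenly falls. The class name, the dict-based parameter representation, and the drop threshold are all illustrative assumptions, not the authors' implementation.

```python
import copy

# Hypothetical sketch of "reuse the best target network so far": keep a
# snapshot of the best-scoring target parameters, and restore it when
# the learning performance suddenly falls.
class BestTargetKeeper:
    def __init__(self, drop_threshold=0.5):
        self.best_score = float("-inf")
        self.best_params = None
        self.drop_threshold = drop_threshold  # illustrative assumption

    def update(self, target_params, score):
        if score > self.best_score:
            # New best: snapshot the target parameters.
            self.best_score = score
            self.best_params = copy.deepcopy(target_params)
        elif (self.best_params is not None
              and score < self.best_score * self.drop_threshold):
            # Sudden performance drop: reuse the best target so far.
            target_params.clear()
            target_params.update(copy.deepcopy(self.best_params))
        return target_params
```

In an actual DQN loop the snapshot would hold network weights and the score would be an episode-return statistic; the dict stands in for both here.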
Xiaowen Hu, Duanming Zhou, Chengchen Hu, Fei Ai
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 769-777;

The empirical characteristics of domestic and foreign interest rate shocks are obtained using the VAR method: domestic interest rate regulation is counter-cyclical, and an increase in the foreign interest rate leads to increases in domestic output and inflation. On this basis, we construct a small open dynamic stochastic general equilibrium framework that reflects these empirical characteristics, including exchange rate control, to analyze the macroeconomic effects of exchange rate liberalization reform. Through volatility simulation, impulse response, and social welfare loss function analysis, the empirical results show the following: first, the exchange rate reform would increase the volatility of output and the exchange rate but reduce the volatility of inflation and the interest rate. Second, the reform enhances the impact of domestic interest rate shocks on output and inflation, which means it would improve the effectiveness of the interest rate as a monetary policy tool. Moreover, the reform increases the loss of social welfare. These conclusions show that exchange rate liberalization should be implemented step by step. The government should accelerate the reform when the external macro economy is stable; otherwise, it will cause larger economic volatility.
Caili Zhang, Takato Tatsumi, ,
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 885-894;

This paper presents an approach to clustering that extends the variance-based Learning Classifier System (XCS-VR). In real-world problems, the ability to combine similar rules is crucial in the knowledge discovery and data mining fields. Conventionally, XCS-VR is able to acquire generalized rules, but it cannot further derive more generalized rules from them. The proposed approach (called XCS-VRc) accomplishes this by integrating similar generalized rules. To validate the proposed approach, we designed a benchmark problem to examine whether XCS-VRc can cluster both the generalized and more generalized features in the input data. The proposed XCS-VRc proved to be more efficient than XCS and the conventional XCS-VR.
, Keiki Takadama
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 868-875;

In this paper, we propose a method to improve ECS-DMR that enables appropriate output for imbalanced data sets. To control the generalization of the LCS on an imbalanced data set, we apply the imbalance ratio of the data set to a sigmoid function and then appropriately update the matching range. In comparison with our previous work (ECS-DMR), the proposed method can automatically control the generalization of the appropriate matching range to extract the exemplars that cover the given problem space, which consists of an imbalanced data set. The experimental results suggest that the proposed method provides stable performance on imbalanced data sets and demonstrate the effect of using a sigmoid function that considers the data balance.
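A minimal sketch of the core idea, assuming the imbalance ratio is simply the minority/majority count ratio passed through a sigmoid to scale the matching range; the exact functional form used by the authors is not specified here, so the function name and gain parameter are illustrative.

```python
import math

# Illustrative sketch: scale the matching range by a sigmoid of the
# imbalance ratio (minority/majority count). The exact functional form
# used by the authors is an assumption here.
def matching_range_scale(minority_count, majority_count, k=1.0):
    ratio = minority_count / majority_count    # imbalance ratio in (0, 1]
    return 1.0 / (1.0 + math.exp(-k * ratio))  # sigmoid squashing
```

A more imbalanced data set (smaller ratio) yields a smaller scale, which would tighten the matching range rather than over-generalize across the minority class.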
Masaya Nakata, Tomoki Hamagami
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 876-884;

The XCS classifier system is an evolutionary rule-based learning technique powered by a Q-learning-like learning mechanism. It employs a global deletion scheme that deletes rules from among all rules covering all state-action pairs. However, the optimality of this scheme remains unclear owing to the lack of intensive analysis. Here, we introduce two further deletion schemes: 1) local deletion, which is applied to the subset of rules covering each state (a match set), and 2) stronger local deletion, which is applied to the more specific subset covering each state-action pair (an action set). The aim of this paper is to reveal how these three deletion schemes affect the performance of XCS. Our analysis shows that the local deletion schemes promote the elimination of inaccurate rules compared with the global deletion scheme, although the stronger local deletion scheme occasionally deletes a good rule. We further show that the two local deletion schemes greatly improve the performance of XCS on a set of noisy maze problems. Although the localization strength of the proposed deletion schemes may require consideration, they can be more adequate for XCS than the original global deletion scheme.
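The three deletion scopes compared in this abstract can be sketched as candidate-set selection. The rule representation below (exact-match conditions in a dict) is a deliberate simplification of XCS, whose conditions actually contain don't-care symbols.

```python
# Illustrative candidate selection for the three deletion scopes. A rule
# is a dict with an exact-match condition and an action; real XCS
# conditions use don't-care symbols, which are omitted for brevity.
def deletion_candidates(population, state, action, scheme):
    if scheme == "global":            # delete from all rules
        return population
    match_set = [r for r in population if r["condition"] == state]
    if scheme == "local":             # rules covering this state (match set)
        return match_set
    if scheme == "stronger_local":    # rules covering this state-action pair
        return [r for r in match_set if r["action"] == action]
    raise ValueError(scheme)
```

The actual deletion step in XCS then picks one rule from the candidate set (typically by roulette wheel on a deletion vote), which is unchanged across the three scopes; only the candidate set shrinks from population to match set to action set.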
Kazuteru Miyazaki
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 849-855;

Currently, deep learning is attracting significant interest. Combining deep Q-networks (DQNs) and Q-learning has produced excellent results for several Atari 2600 games. In this paper, we propose an exploitation-oriented learning (XoL) method that incorporates deep learning to reduce the number of trial-and-error searches. We focus on a profit sharing (PS) method that is an XoL method, and combine it with a DQN to propose a DQNwithPS method. This method is compared with a DQN in Atari 2600 games. We demonstrate that the proposed DQNwithPS method can learn stably with fewer trial-and-error searches than required by only a DQN.
Kazuteru Miyazaki, Koudai Furukawa, Hiroaki Kobayashi
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 930-938;

When multiple agents learn a task simultaneously in an environment, the learning results often become unstable. This problem is known as the concurrent learning problem, and several methods have been proposed to resolve it. In this paper, we propose a new method that incorporates the expected failure probability (EFP) into the action selection strategy to give agents a kind of mutual adaptability. The effectiveness of the proposed method is confirmed using the Keepaway task.
Takato Tatsumi, Hiroyuki Sato, Keiki Takadama
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 895-906;

This paper focuses on the generalization of classifiers in noisy problems and aims at constructing a learning classifier system (LCS) that can acquire the optimal classifier subset by dynamically determining the classifier generalization criteria. An accuracy-based LCS (XCS) that uses the mean of the reward (XCS-MR) is introduced, which can correctly identify classifiers as either accurate or inaccurate in noisy problems, and its effectiveness is investigated on several noisy problems. Applying XCS and an XCS based on the variance of the reward (XCS-VR) as the conventional LCSs, along with XCS-MR, to noisy 11-multiplexer problems in which the reward value changes according to a Gaussian, Cauchy, or lognormal distribution revealed the following: (1) XCS-VR and XCS-MR could select the correct action for every type of reward distribution; (2) XCS-MR could appropriately generalize the classifiers with the smallest amount of data; and (3) XCS-MR could acquire the optimal classifier subset in every trial for every type of reward distribution.
Qin Qin, Josef Vychodil
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 834-839;

This paper proposes a new multi-feature detection method for local pedestrian features based on a convolutional neural network (CNN), which provides a reliable basis for multi-feature fusion in pedestrian detection. According to the standard pedestrian detection ratio, the pedestrian within the detection window is segmented, and the sample labels are used to guide the CNN in learning local characteristics; after supervised learning, the network obtains fused local features with a stronger ability to describe pedestrians. Finally, a large number of experiments were performed. The experimental results show that the local features learned by the neural network outperform most pedestrian features and combined features.
Masato Nagayoshi, Simon J. H. Elderton, Kazutoshi Sakakibara,
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 948-957;

In this paper, we introduce an autonomous decentralized method for directing multiple automated guided vehicles (AGVs) in response to uncertain delivery requests. The transportation route plans of AGVs are expected to minimize the transportation time while preventing collisions between the AGVs in the system. In this method, each AGV as an agent computes its transportation route by referring to the static path information. If potential collisions are detected, one of the two agents chosen by a negotiation-rule modifies its route plan. Here, we propose a reinforcement learning approach for improving the negotiation-rules. Then, we confirm the effectiveness of the proposed approach based on the results of computational experiments.
Takuya Okano, Itsuki Noda
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 939-947;

In this paper, we propose a method to adapt the exploration ratio in multi-agent reinforcement learning. Adapting the exploration ratio is important in multi-agent learning, as it is one of the key parameters that affect learning performance. In our observation, the adaptation method can adjust the exploration ratio suitably (but not optimally) according to the characteristics of the environment. We investigated the evolutionary adaptation of the exploration ratio in multi-agent learning. We first conducted several experiments with a simple evolutionary adaptation method, namely, mimicking the advantageous exploration ratio (MAER), and confirmed that MAER always acquires an exploration ratio lower than the optimal value for the change ratio of the environment. In this paper, we therefore propose a second evolutionary adaptation method, namely, win or update exploration ratio (WoUE). The experimental results showed that WoUE can acquire a more suitable exploration ratio than MAER, and the obtained ratio is near-optimal.
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 803-812;

The major objective of this paper is to investigate a new probabilistic supervised learning approach that incorporates "missingness" into the splitting criterion of a decision tree classifier at each attribute node, in terms of software development effort predictive accuracy. The proposed approach is compared empirically with ten supervised learning methods (classifiers) that have mechanisms for dealing with missing values, using ten industrial datasets. Overall, missing incorporated in attributes 3 is the top-performing strategy, followed by C4.5, missing incorporated in attributes, missing incorporated in attributes 2, linear discriminant analysis, and so on. Classification and regression trees and C4.5 performed well on data with high correlations among attributes, while k-nearest neighbour and support vector machines performed well on data with higher complexity (a limited number of instances). The worst-performing method is repeated incremental pruning to produce error reduction.
Ben Xu, Xin Chen, Min Wu,
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 785-794;

Sintering is an important production process in iron and steel metallurgy. Carbon fuel consumption accounts for about 80% of the total energy consumption in the sintering process. To enhance the efficiency of carbon fuel consumption, we need to determine the factors affecting carbon efficiency and build a model of it. In this paper, the CO/CO2 ratio is taken as a measure of carbon efficiency, and a cascade predictive model is built to predict it. This model has two parts: the key state parameter submodel and the CO/CO2 submodel. The submodels are built using particle swarm optimization-based back propagation neural networks (PSO-BPNNs). Based on a mechanism analysis, Spearman's rank correlation coefficient (SRCC) and stepwise regression analysis (SRA) are used to determine the relationships between the process parameters, in order to determine the inputs of each submodel. Finally, simulation results show the feasibility of the cascade model, which will serve as the basic model for the optimization and control of the carbon efficiency of the sintering process.
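Spearman's rank correlation coefficient (SRCC), one of the tools named in this abstract, can be computed from the classical formula 1 - 6*Σd²/(n(n²-1)); the sketch below assumes no tied values, which the general SRCC definition handles but this simple form does not.

```python
# Spearman's rank correlation via 1 - 6*sum(d^2)/(n*(n^2 - 1)).
# This simple form assumes no tied values.
def spearman(xs, ys):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank           # rank of the i-th original element
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

Because it operates on ranks rather than raw values, SRCC captures monotonic (not just linear) relationships between process parameters, which is why it suits screening candidate submodel inputs.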
Takato Okudo, Tomohiro Yamaguchi, Akinori Murata, Takato Tatsumi, Fumito Uwano, Keiki Takadama
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 907-916;

This paper proposes a learning goal space that visualizes the distribution of the obtained solutions to support a learner's exploration of learning goals. We then examine a method for presenting the novelty of an obtained solution to the learner. We conducted a learning experiment using a continuous learning task in which various solutions can be identified. To give the subjects room to explore the learning goals, several parameters related to success in the task were not explained to them. In the comparative experiment, three types of learning feedback provided to the subjects were compared: presenting the learning goal space with the obtained solutions mapped onto it, directly presenting the novelty of the obtained solutions, and presenting a value only slightly related to the obtained solution. In the experiments, the subjects who were shown the learning goal space or the novelty of the obtained solution continued to identify solutions according to their learning goals until the final stage of the experiment. Therefore, in a continuous learning task, our supporting method of directly or indirectly presenting the novelty of the obtained solution through the learning goal space is effective.
Yuto Omae, Hirotaka Takahashi
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 813-824;

In recent years, many studies have been performed on the automatic classification of human body motions based on inertia sensor data, using a combination of inertia sensors and machine learning; this requires training data in which sensor data and human body motions correspond to one another. It can be difficult to conduct experiments involving a large number of subjects over an extended time period because of concern for the fatigue or injury of subjects. Many studies therefore allow a small number of subjects to repeat the body motions subject to classification in order to acquire data on which to build training data. Any classifiers constructed using such training data will suffer from generalization errors caused by individual and trial differences. To suppress such errors, feature spaces must be obtained that are less likely to generate generalization errors due to individual and trial differences, and indices are required to evaluate this likelihood. This paper therefore aims to devise such evaluation indices. The proposed indices are obtained by first constructing probability distributions of the acquired data that represent individual and trial differences, and then using these distributions to calculate the risk of generating generalization errors. We verified the effectiveness of the proposed evaluation method by applying it to sensor data for butterfly and breaststroke swimming, and applied several existing evaluation methods for comparison. We constructed classifiers for butterfly and breaststroke swimming by applying a support vector machine to the feature spaces obtained by the proposed and existing methods. Based on the accuracy verification we conducted with test data, we found that the proposed method produced a significantly higher F-measure than the existing methods. This proves that the proposed evaluation indices enable us to obtain a feature space that is less likely to generate generalization errors due to individual and trial differences.
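The F-measure used in the verification above is the harmonic mean of precision and recall; a minimal computation, with illustrative counts, looks like this.

```python
# Minimal F-measure (F1): the harmonic mean of precision and recall.
# The counts (true positives, false positives, false negatives) passed
# in below are illustrative.
def f_measure(tp, fp, fn):
    precision = tp / (tp + fp)   # of predicted positives, how many correct
    recall = tp / (tp + fn)      # of actual positives, how many found
    return 2 * precision * recall / (precision + recall)
```

Unlike plain accuracy, the F-measure ignores true negatives, which makes it a common choice when the classes of interest (here, specific swimming strokes) are only part of the data.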
Keiki Takadama, Kazuteru Miyazaki
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 833-833;

Machine learning has been attracting significant attention again since the potential of deep learning was recognized. Not only has machine learning been improved, but it has also been integrated with "reinforcement learning," revealing other potential applications, e.g., deep Q-networks (DQN) and AlphaGo proposed by Google DeepMind. It is against this background that this special issue, "Cutting Edge of Reinforcement Learning and its Hybrid Methods," focuses on both reinforcement learning and its hybrid methods, including reinforcement learning with deep learning or evolutionary computation, to explore new potentials of reinforcement learning. Of the many contributions received, we finally selected 13 works for publication. The first three propose hybrids of deep learning and reinforcement learning for single agent environments, which include the latest research results in the areas of convolutional neural networks and DQN. The fourth through seventh works are related to the Learning Classifier System, which integrates evolutionary computation and reinforcement learning to develop the rule discovery mechanism. The eighth and ninth works address problems related to goal design or the reward, an issue that is particularly important to the application of reinforcement learning. The last four contributions deal with multiagent environments. These works cover a wide range of studies, from the expansion of techniques incorporating simultaneous learning to applications in multiagent environments. All works are on the cutting edge of reinforcement learning and its hybrid methods. We hope that this special issue constitutes a large contribution to the development of the reinforcement learning field.
Yibo Li, Chao Liu, Senyue Zhang, Wenan Tan, Yanyan Ding
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 795-802;

The conventional kernel support vector machine (KSVM) has the problem of slow training speed, and the single-kernel extreme learning machine (KELM) also has some performance limitations. This paper therefore proposes a new combined KELM model built from a polynomial kernel and a reproducing kernel on a Sobolev Hilbert space. This model combines the advantages of global and local kernel functions and has a fast training speed. At the same time, an efficient optimization algorithm, the cuckoo search algorithm, is adopted to avoid blindness and inaccuracy in parameter selection. Experiments performed on the bi-spiral benchmark dataset, the Banana dataset, and a number of classification and regression datasets from the UCI benchmark repository illustrate the feasibility of the proposed model. It achieves better robustness and generalization performance than other conventional KELM and KSVM models, which demonstrates its effectiveness and usefulness.
Fumito Uwano, Keiki Takadama
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 917-929;

This study discusses important factors for zero-communication multi-agent cooperation by comparing different modified reinforcement learning methods. The two learning methods used for comparison were assigned different goal selections for multi-agent cooperation tasks. The first method, called Profit Minimizing Reinforcement Learning (PMRL), forces agents to learn how to reach the farthest goal, after which the agent closest to each goal is directed to it. The second method, called Yielding Action Reinforcement Learning (YARL), forces agents to learn through a Q-learning process, and if the agents have a conflict, the agent closest to the goal learns to reach the next closest goal. To compare the two methods, we designed experiments by adjusting the following maze factors: (1) the location of the start point and goal; (2) the number of agents; and (3) the size of the maze. Intensive simulations performed on the maze problem for the agent cooperation task revealed that both methods successfully enabled the agents to exhibit cooperative behavior, even when the size of the maze and the number of agents changed. The PMRL mechanism always enables the agents to learn cooperative behavior, whereas the YARL mechanism makes the agents learn cooperative behavior within a small number of learning iterations. In zero-communication multi-agent cooperation, it is important that only the agents that have a conflict cooperate with each other.
Bolin Liao, Qiuhong Xiang
Journal of Advanced Computational Intelligence and Intelligent Informatics, Volume 21, pp 778-784;

This study analyses the robustness and convergence characteristics of a neural network. First, a special class of recurrent neural network (RNN), termed a continuous-time Zhang neural network (CTZNN) model, is presented and investigated for dynamic matrix pseudoinversion. Theoretical analysis of the CTZNN model demonstrates that it has good robustness against various types of noise. In addition, considering the requirements of digital implementation and online computation, the optimal sampling gap for a discrete-time Zhang neural network (DTZNN) model under noisy environments is proposed. Finally, experimental results are presented, which further substantiate the theoretical analyses and demonstrate the effectiveness of the proposed ZNN models for computing a dynamic matrix pseudoinverse under noisy environments.
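As a hedged, related illustration (explicitly NOT the authors' ZNN model), a classical discrete scheme for tracking a matrix inverse is the Newton-Schulz iteration X_{k+1} = X_k(2I - A X_k), which converges when the initial guess satisfies ||I - X0 A|| < 1. Pure-Python 2x2 matrices keep the sketch self-contained; the matrix values are illustrative.

```python
# Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k) for the matrix
# inverse, shown as a related classical discrete scheme (NOT the
# authors' DTZNN model). Pure-Python 2x2 matrices for brevity.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def newton_schulz_inverse(a, x0, iters=50):
    # Converges quadratically when ||I - X0 A|| < 1.
    x = x0
    for _ in range(iters):
        ax = matmul(a, x)
        two_i_minus_ax = [[(2.0 if i == j else 0.0) - ax[i][j]
                           for j in range(2)] for i in range(2)]
        x = matmul(x, two_i_minus_ax)
    return x

a = [[2.0, 0.0], [0.0, 4.0]]
x0 = [[0.125, 0.0], [0.0, 0.25]]   # scaled A^T so that ||I - X0 A|| < 1
x = newton_schulz_inverse(a, x0)   # approaches inverse(a) = diag(0.5, 0.25)
```

The ZNN models in the paper differ in design (they drive an error function to zero in continuous or sampled time and address noise explicitly), but the iterate-toward-the-inverse structure gives a feel for the computation involved.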
Tatsuya Iwamoto, Masanori Idesawa
Journal of Robotics and Mechatronics, Volume 9, pp 121-125;

In the human visual system, binocular unpaired regions, where the binocular images do not correspond to each other, play a very important role in stereo perception. In our recent experiments, we found that binocular unpaired regions have a special effect on the volume perception of solid objects with curved surfaces. In this paper, we introduce these phenomena of volume perception and then propose some strategies for realizing such a function in a computer vision system.
Kazuhiko Kawashima
Journal of Disaster Research, Volume 1, pp 378-389;

A review of the seismic behavior and design of underground structures in soft ground is presented, focusing on the development of an equivalent static seismic design method called the seismic deformation method. Seismic isolation of underground structures is also presented.
Nobutsuna Endo, Atsuo Takanishi
Journal of Robotics and Mechatronics, Volume 23, pp 969-977;

Personal robots and Robot Technology (RT)-based assistive devices are expected to play a substantial role in a society largely populated by the elderly, taking an active part in joint work and community life with humans. In particular, these robots are expected to play an important role in assisting the elderly and disabled during normal Activities of Daily Living (ADLs). To achieve this, personal robots should be capable of making humanlike emotional expressions. With this perspective, we developed a whole-body bipedal humanoid robot named KOBIAN that is capable of expressing humanlike emotions. In this paper, we present the development and evaluation of KOBIAN.
Terutake Hayashi, Toshiki Seri, Syuhei Kurokawa
International Journal of Automation Technology, Volume 11, pp 754-760;

In this study, a novel particle sizing method based on Brownian diffusion analysis with fluorescent probing is proposed for abrasive particles. A fluorescent probe is used to measure the average dynamic viscosity of the nanoparticle dispersion in a solvent. By measuring both the average dynamic viscosity and the size of the nanoscale abrasive particles simultaneously, the uncertainty of particle sizing is expected to be improved through viscosity compensation for the Brownian diffusion of the nanoparticles. The authors investigate the difference between the nanoviscosity and the shear viscosity of the solvent to verify the efficacy of using viscosity compensation for nanoparticle sizing.