Sirisha Kalidindi, Christian H. Reick, Thomas Raddatz, et al.
Published: 10 October 2017
Abstract:
We study an Earth-like terra-planet with an overland recycling mechanism bringing fresh water back from higher latitudes to the lower latitudes. By performing model simulations for such a planet we find two drastically different climate states for the same set of boundary conditions and parameter values: a Cold and Wet (CW) state (present-day Earth-like climate) with dominant low-latitude precipitation and a Hot and Dry (HD) state with only high-latitude precipitation. We notice that for perpetual equinox conditions, both climate states are stable below a certain threshold value of background soil albedo, while above the threshold only the CW state is stable. Starting from the HD state and increasing background soil albedo above the threshold causes an abrupt shift from the HD state to the CW state, resulting in a sudden global cooling of about 35 °C, which is of the order of the temperature difference between the present-day and the Snowball Earth state. In contrast to the Snowball Earth instability, we find that the sudden cooling in our study is driven by the cloud albedo feedback rather than the snow-albedo feedback. Also, when albedo in the CW state is reduced back to zero, the terra-planet does not display a closed hysteresis. This is due to the high cloud cover in the CW state hiding the surface from solar irradiation. As a result, this reduction of background surface albedo has only a minor effect on the top-of-the-atmosphere radiation balance, thereby making it impossible to heat the planet sufficiently strongly to switch back to the HD state. Additional simulations point to a similar abrupt transition from the HD state to the CW state for non-zero obliquity, where the CW state is the only stable state in this configuration. Our study also has implications for the habitability of Earth-like terra-planets. At the inner edge of the habitable zone, the higher cloud cover in the CW state cools the planet and may prevent the onset of a runaway greenhouse state. At the outer edge, the resupply of water at lower latitudes stabilizes the greenhouse effect, keeps the planet in the HD state and may prevent water from getting trapped at higher latitudes in frozen form. Overall, the existence of bi-stability in the presence of an overland recycling mechanism hints at the possibility of a wider habitable zone for Earth-like terra-planets at lower obliquities.
Ramakrishna Ramisetty, Xiaoli Shen, et al.
Published: 10 October 2017
Abstract:
Size, composition, and mixing state of individual aerosol particles can be analysed in real time using single particle mass spectrometry (SPMS). In SPMS, laser ablation is the most widely used method for desorption and ionization of particle components, often realizing both in one single step. Excimer lasers are well suited for this task due to their relatively high power density (107 W cm−2–1010 W cm−2) in nanosecond (ns) pulses at ultraviolet (UV) wavelengths, and short triggering times. However, varying particle optical properties and matrix effects make a quantitative interpretation of this analytical approach challenging. In atmospheric SPMS applications, this influences both the mass fraction of an individual particle that gets ablated, as well as the resulting mass spectral fragmentation pattern of the ablated material. The goal of the present study is to explore the use of shorter (femtosecond, fs) laser pulses for atmospheric SPMS, and to systematically investigate the influence of power density and pulse duration on airborne particle (polystyrene latex, SiO2, NH4NO3, NaCl, and custom-made core-shell particles) ablation and reproducibility of mass spectral signatures. We used a laser ablation aerosol time-of-flight single particle mass spectrometer (LAAPTOF, AeroMegt GmbH), originally equipped with an excimer laser (wavelength 193 nm, pulse width 8 ns, pulse energy 4 mJ), and coupled it to an fs-laser (Spectra Physics Solstice-100F ultrafast laser) with similar pulse energy, but longer wavelengths (266 nm with 100 fs and 0.2 mJ, 800 nm with 100 fs and 4 mJ, respectively). Generally, mass spectra exhibit an increase in ion intensities (factor 1 to 5) with increasing laser power density (~ 108 W cm−2 to ~ 1013 W cm−2) from ns- to fs-laser. At the same time, fs-laser ablation produces spectra with larger ion fragments and ion clusters, as well as clusters with oxygen, which does not render spectra interpretation more simple compared to ns-laser ablation. Quantification of ablated material remains difficult due to incomplete ionization of the particle. Furthermore, the fs-laser application still suffers from limitations in triggering it in a useful timeframe. Further tests are needed to test potential advantages of fs- over ns-laser ablation in atmospheric SPMS.
Published: 10 October 2017
Abstract:
Simplified flood loss models are one important source of uncertainty in flood risk assessments. Many countries experience a sparseness or absence of comprehensive high-quality flood loss data sets, which is often rooted in a lack of protocols and reference procedures for compiling loss data sets after flood events. Such data are an important reference for developing and validating flood loss models. We consider the Secchia river flood event of January 2014, when a sudden levee breach caused the inundation of nearly 52 km2 in Northern Italy. For this event we compiled a comprehensive flood loss data set of affected private households, including building footprints, economic values, damage to contents, etc., based on information collected by local authorities after the event. By analysing this data set we tackle the problem of flood damage estimation in Emilia-Romagna (Italy) by identifying empirical uni- and multi-variable loss models for residential buildings and contents. The accuracy of the proposed models is compared with that of several flood damage models reported in the literature, providing additional insights into the transferability of models between different contexts. Our results show that (1) even simple uni-variable damage models based on local data are significantly more accurate than literature models derived for different contexts; (2) multi-variable models that consider several explanatory variables outperform uni-variable models which use only water depth. However, multi-variable models can only be effectively developed and applied if sufficient and detailed information is available.
Scot M. Miller, Justus Notholt, Thorsten Warneke, et al.
Published: 10 October 2017
Geoscientific Model Development, Volume 10, pp 3695-3713; https://doi.org/10.5194/gmd-10-3695-2017

Abstract:
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
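A generic statement of such a bounded, sparsity-constrained Tikhonov problem (the notation here is assumed for illustration and is not taken from the paper) is

$$\hat{x} \;=\; \underset{l \,\le\, x \,\le\, u}{\arg\min}\; \lVert F x - y \rVert_2^{2} \;+\; \lambda\, \lVert D^{\mathsf T} x \rVert_1 ,$$

where F is the forward (atmospheric transport) operator, y the measurements, D the dictionary in which the emission field is assumed to have a sparse representation, λ the regularization weight, and l, u the lower and upper bounds on the parameters.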
Published: 10 October 2017
Abstract:
The soil texture representation with the standard textural fraction triplet 'sand-silt-clay' is commonly used to estimate soil properties. The objective of this work was to test the hypothesis that other fraction sizes in the triplets may provide better representation of soil texture for estimating some soil parameters. We estimated the cumulative particle size distribution and bulk density from an entropy-based representation of the textural triplet with experimental data for 6300 soil samples. Results supported the hypothesis. For example, simulated distributions were not significantly different from the original ones in 25 and 85 % of cases when the 'sand-silt-clay' and 'very coarse + coarse + medium sand – fine + very fine sand – silt + clay' triplets were used, respectively. When the same standard and modified triplets were used to estimate the average bulk density, the coefficients of determination were 0.001 and 0.967, respectively. Overall, the textural triplet selection appears to be application- and data-specific.
Drinking Water Engineering and Science, Volume 10, pp 93-98; https://doi.org/10.5194/dwes-10-93-2017

Abstract:
The calculation of hydraulic state variables for a network is an important task in managing the distribution of potable water. Over the years, the mathematical modeling process has been improved by numerous researchers for utilization in new computer applications and for more realistic modeling of water distribution networks. However, in spite of these continuous advances, there are still a number of physical phenomena that may not be handled correctly by current models. This paper takes a closer look at the two modeling paradigms given by demand- and pressure-driven modeling. The basic equations are introduced and parallels are drawn with the optimization formulations from electrical engineering. These formulations guarantee the existence and uniqueness of the solution. One of the central questions of the French and German research project ResiWater is the investigation of network resilience in the case of extreme events or disasters. Under such extraordinary conditions, where models are pushed beyond their limits, we talk about deficient network models. Examples of deficient networks are given by highly regulated flow, leakage or pipe bursts, and cases where the pressure falls below the vapor pressure of water. These examples are presented and analyzed with respect to the solvability and physical correctness of the solution under demand- and pressure-driven models.
Nian Bie, Liping Lei, Zhaocheng Zeng, Bofeng Cai, Shaoyuan Yang, Zhonghua He, Changjiang Wu, et al.
Published: 10 October 2017
Abstract:
The regional uncertainty of XCO2 (column-averaged dry air mole fraction of CO2) retrieved using different algorithms from the Greenhouse gases Observing SATellite (GOSAT) and its attribution are still not well understood. This paper investigates the regional performance of XCO2 within a band of 37° N–42° N segmented into 8 cells in a grid of 5° from west to east (80° E–120° E) in China, where there are typical land surface types and geographic conditions. The former include the various land covers of desert, grassland and built-up areas mixed with cropland, and the latter include anthropogenic emissions that tend to be small to large from west to east, including those from the megacity of Beijing. For these specific cells, we evaluate the regional uncertainty of GOSAT XCO2 retrievals by quantifying and attributing the consistency of XCO2 retrievals from five algorithms (ACOS, NIES, EMMA, OCFP, and SRFP) by intercomparison and particularly by comparing these with simulated XCO2 from the Goddard Earth Observing System 3-D chemical transport model (GEOS-Chem), the nested model in East Asia. We introduce the anthropogenic CO2 emissions data generated from the investigation of surface emitting point sources that was conducted by the Ministry of Environmental Protection of China to GEOS-Chem simulations of XCO2 over the Chinese mainland. The results indicate that (1) regionally, the five algorithms demonstrate smaller absolute biases between 0.9–1.5 ppm in eastern cells, which are covered by built-up areas mixed with cropland with intensive anthropogenic emissions, than those in the western desert cells with a high-brightness surface, 1.2–2.2 ppm from the pairwise comparison results of XCO2 retrievals. The inconsistency of XCO2 from the five algorithms tends to be high in the Taklimakan Desert in western cells, which is likely induced by high surface albedo in addition to dust aerosols in this region. (2) Compared with XCO2 simulated by GEOS-Chem (GEOS-XCO2), the XCO2 values of ACOS and SRFP better agree with GEOS-XCO2, while OCFP is the least consistent with GEOS-XCO2. (3) Viewing attributions of XCO2 in the spatio-temporal pattern, ACOS, SRFP and EMMA demonstrate similar patterns, while OCFP is largely different from the others. In conclusion, the discrepancy in the five algorithms is the smallest in eastern cells in the investigated band where the megacity of Beijing is located and where there are strong anthropogenic CO2 emissions, which implies that XCO2 from satellite observations could be reliably applied in the assessment of atmospheric CO2 enhancements induced by anthropogenic CO2 emissions. The large inconsistency among the five algorithms presented in western deserts with a high albedo and dust aerosols, moreover, demonstrates that further improvement is still necessary in such regions, even though many algorithms have endeavored to minimize the effects of aerosols and albedo.
Anne E. Mather et al.
Geological Society, London, Special Publications, Volume 440, pp 103-128; https://doi.org/10.1144/sp440.15

Abstract:
Lithology is acknowledged to be an important internal catchment control on flow processes to adjacent alluvial fans. However, the role of inherited structural configurations (e.g. bedrock attitude) in catchment connectivity and sediment transport is rarely considered. We examine four young (<100-year-old) active tributary junction alluvial fan systems from the Dadès Valley in the High Atlas of Morocco in terms of their catchment-scale connectivity, sediment transfer and resulting alluvial fan processes. The catchments occur on the same lithologies (limestones and interbedded mudstones), but experience different passive structural configurations (tilted and structurally thickened beds). The fan systems react differently to historical peak discharges (20–172 m3 s−1). Catchments containing tectonically thickened limestone units develop slot canyons, which compartmentalize the catchment by acting as barriers to sediment transfer, encouraging lower sediment-to-water flows on the fans. Syn-dip catchments boost connectivity and sediment delivery from translational bedrock landslides as a result of steep channel gradients, encouraging higher sediment-to-water flows. By contrast, translational landslides in strike-oriented drainages disrupt longitudinal connectivity by constricting the valley width, while the gradients of the main channels are suppressed by the attitude of the limestone beds, encouraging localized backfilling. This diminishes the sediment-to-water content of the resulting flows.
Abul Amir Khan, N. C. Pant, Rasik Ravindra, Apurva Alok, Manika Gupta, Shikha Gupta
Geological Society, London, Special Publications, Volume 462, pp 73-87; https://doi.org/10.1144/sp462.2

Abstract:
The hydrological budget of the three major Asian rivers, namely the Indus, the Ganga and the Brahmaputra, is controlled by the Indian monsoon and the Westerlies, but their contributions to these basins are highly variable. Widely varying average annual precipitation has been reported within these basins. A poor network of in situ rain gauges, particularly in mountainous regions, inaccessible terrain, high variations in altitude and the significantly large size of the basins force the adoption of satellite-based average annual precipitation. We investigate precipitation patterns for these three basins by using satellite-based Tropical Rainfall Measuring Mission (TRMM-3B42) data and compare and validate it with Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation (APHRODITE) and India Meteorological Department (IMD) interpolated gridded precipitation data. The entire basins as well as the basinal areas within the geographic limits of India have been considered. Our study shows that the precipitation broadly follows an east–west and north–south gradient control. The easternmost Brahmaputra Basin has the highest amount of precipitation, followed by the Ganga Basin, and the westernmost Indus Basin has the least precipitation; precipitation is higher at the higher elevations than at the lower elevations of the basins. A seasonal- and elevation-based approach is adopted to estimate snow precipitation and is discussed in terms of overall precipitation.
Dorothy K. Hall, Miguel O. Román, et al.
Published: 10 October 2017
Earth System Science Data, Volume 9, pp 765-777; https://doi.org/10.5194/essd-9-765-2017

Abstract:
Knowledge of the distribution, extent, duration and timing of snowmelt is critical for characterizing the Earth's climate system and its changes. As a result, snow cover is one of the Global Climate Observing System (GCOS) essential climate variables (ECVs). Consistent, long-term datasets of snow cover are needed to study interannual variability and snow climatology. The NASA snow-cover datasets generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra and Aqua spacecraft and the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) are NASA Earth System Data Records (ESDR). The objective of the snow-cover detection algorithms is to optimize the accuracy of mapping snow-cover extent (SCE) and to minimize snow-cover detection errors of omission and commission using automated, globally applied algorithms to produce SCE data products. Advancements in snow-cover mapping have been made with each of the four major reprocessings of the MODIS data record, which extends from 2000 to the present. MODIS Collection 6 (C6; https://nsidc.org/data/modis/data_summaries) and VIIRS Collection 1 (C1; https://doi.org/10.5067/VIIRS/VNP10.001) represent the state-of-the-art global snow-cover mapping algorithms and products for NASA Earth science. There were many revisions made in the C6 algorithms which improved snow-cover detection accuracy and information content of the data products. These improvements have also been incorporated into the NASA VIIRS snow-cover algorithms for C1. Both information content and usability were improved by including the Normalized Snow Difference Index (NDSI) and a quality assurance (QA) data array of algorithm processing flags in the data product, along with the SCE map. The increased data content allows flexibility in using the datasets for specific regions and end-user applications. Though there are important differences between the MODIS and VIIRS instruments (e.g., the VIIRS 375 m native resolution compared to MODIS 500 m), the snow detection algorithms and data products are designed to be as similar as possible so that the 16+ year MODIS ESDR of global SCE can be extended into the future with the S-NPP VIIRS snow products and with products from future Joint Polar Satellite System (JPSS) platforms. These NASA datasets are archived and accessible through the NASA Distributed Active Archive Center at the National Snow and Ice Data Center in Boulder, Colorado.
Mira L. Pöhlker, Thomas Klimach, Isabella Hrabě De Angelis, Samara Carbone, Xuguang Chi, et al.
Published: 10 October 2017
Abstract:
Size-resolved measurements of atmospheric aerosol and cloud condensation nuclei (CCN) concentrations and hygroscopicity were conducted at the remote Amazon Tall Tower Observatory (ATTO) in the central Amazon Basin over a full seasonal cycle (Mar 2014–Feb 2015). In a companion part 1 paper, we presented an in-depth CCN characterization based on annually as well as seasonally averaged time intervals and discuss different parametrization strategies to represent the Amazonian CCN cycling in modelling studies (M. Pöhlker et al., 2016b). The present part 2 study analyzes the aerosol and CCN variability in original time resolution and, thus, resolves aerosol advection and transformation for the following case studies, which represent the most characteristic states of the Amazonian atmosphere: 1. Near-pristine (NP) conditions, defined as the absence of detectable black carbon ( 90 %), and correspondingly low hygroscopicity levels (κAit = 0.14, κacc = 0.17). The BB CCN efficiency spectrum shows that the CCN population is highly sensitive to changes in S in the low S regime. 4. Mixed pollution conditions show the superposition of African (i.e., volcanic) and Amazonian (i.e., biomass burning) aerosol emissions during the dry season. The African aerosols showed a broad monomodal distribution (D = 130 nm, N = ~ 1300 cm−3), with very high sulfate fractions (20 %), and correspondingly high hygroscopicity (κAit = 0.14, κacc = 0.22). This was superimposed by fresh smoke from nearby fires with one strong mode (D = 113 nm, Nacc = ~ 2800 cm−3), an organic-dominated aerosol, and sharply decreased hygroscopicity (κAit = 0.10, κacc = 0.20). These conditions underline the rapidly changing pollution regimes with clear impacts on the aerosol and CCN properties. Overall, this study provides detailed insights into the CCN cycling in relation to aerosol-cloud interaction in the vulnerable and climate-relevant Amazon region. The detailed analysis of aerosol and CCN key properties and particularly the extracted CCN efficiency spectra with the associated fit parameters provide a basis for an in-depth analysis of aerosol-cloud interaction in the Amazon and beyond.
Jürgen Haseloff, Oliver Bronkalla, José Protásio, Katia Pinheiro, et al.
Geoscientific Instrumentation, Methods and Data Systems, Volume 6, pp 367-376; https://doi.org/10.5194/gi-6-367-2017

Abstract:
The Tatuoca magnetic observatory (IAGA code: TTB) is located on a small island in the Amazonian delta in the state of Pará, Brazil. Its location close to the geomagnetic equator and within the South Atlantic Anomaly offers a high scientific return of the observatory's data. A joint effort by the National Observatory of Brazil (ON) and the GFZ German Research Centre for Geosciences (GFZ) was undertaken, starting from 2015 in order to modernise the observatory with the goal of joining the INTERMAGNET network and to provide real-time data access. In this paper, we will describe the history of the observatory, recent improvements, and plans for the near future. In addition, we will give some comments on absolute observations of the geomagnetic field near the geomagnetic equator.
Published: 10 October 2017
Geographica Helvetica, Volume 72, pp 393-404; https://doi.org/10.5194/gh-72-393-2017

Abstract:
This article aims to expand the predominant narrative of a Quantitative Revolution in German-speaking geography in order to develop a more complex and multifaceted perspective on this chapter of the discipline's history. For this purpose, I take a closer look at the institute of geography in Erlangen. Eugen Wirth, the long-term chair holder in Erlangen, argued that here, in contrast to the majority of other institutes, the implementation of quantitative methods started in 1932, when Walter Christaller submitted his thesis Central Places in Southern Germany. According to Wirth, a dissertation supervised by him in 1969 was a further step towards the use of quantitative methods. I argue that Wirth made a significant contribution to the debate on quantitative theoretical geography in Germany with his textbook Theoretical Geography, published in 1979, although the book was subsequently criticised and strongly rejected by Bartels and others as a conservative embrace. By examining this local negotiation process, I develop one of many narratives that stand opposed to a unified account in which the general assembly of geographers in 1969 and Bartels' Geographie des Menschen alone motivated the abandonment of the concept of Länderkunde.
Published: 10 October 2017
Abstract:
Developing and implementing a quick-response post-storm survey protocol has the potential to improve impact assessments of coastal storms. Pre- and post-event surveys are essential to properly quantify storm impacts on the coast. In this study, a combination of traditional RTK GPS and an Unmanned Aerial Vehicle (UAV) platform was utilized as part of a coordinated storm response workflow. The comprehensive approach employed in this pilot case study was applied on the Emilia-Romagna coast (Italy), in the immediate aftermath of an extreme storm event that impacted the shoreline on 5–6 February 2015, known as the Saint Agatha Storm. The activities were supported by timing information on the approaching storm provided by the regional early warning system. We collected aerial photos from a commercial off-the-shelf drone immediately after the Saint Agatha Storm and generated both orthomosaics and digital elevation models utilizing structure-from-motion photogrammetry techniques. The drone-based survey approach allowed us to quickly survey an area of 0.25 km2 within a 10-minute flight, resulting in a ground sampling distance of 2.5 cm/pixel. Flooding and erosion impacts are analyzed and presented for the target study area. Limitations and possible applications of the quick-response post-storm surveying protocol for coastal management are highlighted.
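For reference, the ground sampling distance (GSD) of such a flight follows the standard photogrammetric relation (symbols assumed here; the abstract does not give the camera parameters):

$$\mathrm{GSD} \;=\; \frac{p \cdot H}{f},$$

where p is the physical pixel size of the camera sensor, H the flying height above ground and f the focal length; the reported 2.5 cm/pixel therefore fixes the ratio H/f for the camera used.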
Published: 10 October 2017
Abstract:
Watershed topography plays an important role in determining the spatial heterogeneity of ecological, geomorphological, and hydrological processes. Few studies have quantified the role of topography in various flow variables. In this study, 28 watersheds with snow-dominated hydrological regimes were selected with daily flow records from 1989 to 1996. The watersheds are located in the Southern Interior of British Columbia, Canada and range in size from 2.6 to 1,780 km2. For each watershed, 22 topographic indices (TIs) were derived, including those commonly used in hydrology and other environmental fields. Flow variables include annual mean flow (Qmean), Q10%, Q25%, Q50%, Q75%, Q90%, and annual minimum flow (Qmin), where Qx% is defined as the flow that is equalled or exceeded x % of the time in a given year. Factor analysis (FA) was first adopted to exclude redundant or repetitive TIs. Then, stepwise regression models were employed to quantify the relative contributions of the TIs to each flow variable in each year. Our results show that topography plays a more important role in low flows than in high flows. However, the effects of the TIs on the flow variables are not consistent. Our analysis also identifies five significant TIs (perimeter, surface area, openness, terrain characterization index, and slope length factor), which can be used to compare watersheds when low flow assessments are conducted, especially in snow-dominated regions.
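A minimal sketch of the screening-plus-stepwise workflow described above is given below; the function and variable names are illustrative assumptions, not the authors' code, and in practice the FA screening step would precede this selection.

```python
# Forward stepwise selection of topographic indices (TIs) explaining one flow
# variable (e.g. Qmin) across watersheds, using cross-validated R^2 as the score.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_stepwise(X, y, names, max_terms=5):
    """Greedily add the TI that most improves cross-validated R^2."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_terms:
        scores = []
        for j in remaining:
            cols = selected + [j]
            r2 = cross_val_score(LinearRegression(), X[:, cols], y,
                                 cv=5, scoring="r2").mean()
            scores.append((r2, j))
        r2, j = max(scores)
        if r2 <= best_score:          # stop when no remaining TI improves the fit
            break
        best_score = r2
        selected.append(j)
        remaining.remove(j)
    return [names[k] for k in selected], best_score

# X: (n_watersheds, n_TIs) matrix of topographic indices, y: one flow variable,
# names: list of the 22 TI labels (perimeter, surface area, openness, ...).
```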
Stephanie S. Day, Chris Paola, et al.
Published: 10 October 2017
Abstract:
Ravines grow through head cut propagation in response to overland flow coupled with incision and widening in the channel bottom leading to hillslope failures. Altered hydrology can impact the rate at which ravines grow by changing head-cut propagation, channel incision, and channel widening rates. Using a set of small physical experiments, we tested how changing overland flow rates and flow volumes alter the total volume of erosion and resulting ravine morphology. Ravines were modeled as both detachment-limited and transport-limited systems, using two different substrates with varying cohesion. In both cases, the erosion rate varied linearly with water discharge, such that the volume of sediment eroded was a function not of flow rate, but of total water volume. This implies that efforts to reduce peak flow rates alone without addressing flow volumes entering ravine systems may not reduce erosion. The documented response in these experiments is not typical when compared to larger pre-existing channels where higher flow rates result in greater erosion through non-linear relationships between water discharge and sediment discharge. Ravines do not respond like pre-existing channels because channel slope remains a free parameter and can adjust relatively quickly in response to changing flows.
Published: 10 October 2017
Abstract:
The orientations and densities of fractures in the foliated hanging-wall of the Alpine Fault provide insights into the role of a mechanical anisotropy in upper crustal deformation, and the extent to which existing models of fault zone structure can be applied to active plate-boundary faults. Three datasets were used to quantify fracture damage at different distances from the Alpine Fault principal slip zones (PSZs): (1) X-ray computed tomography (CT) images of drill-core collected within 25 m of the PSZs during the first phase of the Deep Fault Drilling Project that were reoriented with respect to borehole televiewer images, (2) field measurements from creek sections at
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 7-10; https://doi.org/10.5194/isprs-archives-xlii-4-w5-7-2017

Abstract:
In this paper, an innovative framework based on both spectral and spatial information is proposed. The objective is to improve the classification of hyperspectral images for high-resolution land cover mapping. The spatial information is obtained by a marker-based Minimum Spanning Forest (MSF) algorithm. A pixel-based SVM algorithm is first used to classify the image. Then, the marker-based MSF spectral-spatial algorithm is applied to improve the accuracy of classes with low accuracy. In this step, the marker-based MSF algorithm is used as a binary classifier, separating the low-accuracy class from the remaining classes. Finally, the SVM algorithm is trained for classes with acceptable accuracy. To evaluate the proposed approach, it is tested on the Berlin hyperspectral dataset. Experimental results demonstrate the superiority of the proposed method compared to the original MSF-based approach, achieving an approximately 5 % higher kappa coefficient of agreement than the original MSF-based method.
A. A. Belmonte, M. M. P. Biong, E. G. Macatulad
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 11-19; https://doi.org/10.5194/isprs-archives-xlii-4-w5-11-2017

Abstract:
Digital elevation models (DEMs) are widely used raster data for different applications concerning terrain, such as flood modelling, viewshed analysis, mining, land development and engineering design projects, to name a few. DEMs can be obtained through various methods, including topographic survey, LiDAR or photogrammetry, and internet sources. Terrestrial close-range photogrammetry is one of the alternative methods to produce DEMs through the processing of images using photogrammetry software. Powerful commercial photogrammetry software packages can already produce high-accuracy DEMs; however, this entails a corresponding cost. Although some of these packages offer free or demo trials, the trials limit the usable features and the usage time. One alternative is the use of free and open-source software (FOSS), such as the Python Photogrammetry Toolbox (PPT), which provides an interface for performing photogrammetric processes implemented through Python scripts. For relatively small areas, such as mining or construction excavations, a relatively inexpensive, fast and accurate method would be advantageous. In this study, PPT was used to generate 3D point cloud data from images of an open pit excavation, and PPT was extended with an algorithm that converts the generated point cloud data into a usable DEM.
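The point-cloud-to-DEM step can be illustrated with the minimal sketch below; it is a generic gridding approach (mean elevation per cell), assumed for illustration rather than the authors' PPT extension.

```python
# Grid an XYZ point cloud into a DEM raster by averaging the elevation of the
# points that fall into each cell; cells with no points remain NaN.
import numpy as np

def point_cloud_to_dem(xyz, cell_size=0.5):
    """xyz: (N, 3) array of photogrammetric points; returns a 2-D elevation grid."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    cols = ((x - x.min()) / cell_size).astype(int)
    rows = ((y - y.min()) / cell_size).astype(int)
    dem = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    sums = np.zeros_like(dem)
    counts = np.zeros_like(dem)
    np.add.at(sums, (rows, cols), z)      # accumulate elevations per cell
    np.add.at(counts, (rows, cols), 1)    # count points per cell
    mask = counts > 0
    dem[mask] = sums[mask] / counts[mask]
    return dem
```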
T. Bibi, K. Azahari Razak, A. Latif, et al.
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 21-30; https://doi.org/10.5194/isprs-archives-xlii-4-w5-21-2017

Abstract:
Landslides are an inescapable natural disaster, resulting in massive social, environmental and economic impacts all over the world. The tropical, mountainous landscape found throughout Malaysia, especially in the eastern part of the country (Borneo), is highly susceptible to landslides because of heavy rainfall and tectonic disturbances. The purpose of landslide hazard mapping is to identify hazardous regions for the execution of mitigation plans which can reduce the loss of life and property from future landslide incidences. Currently, Malaysian research bodies, e.g. academic institutions and government agencies, are trying to develop a landslide hazard and risk database for susceptible areas to support prevention, mitigation and evacuation plans. However, there is a lack of attention to landslide inventory mapping as an elementary input of landslide susceptibility, hazard and risk mapping. Emerging techniques based on remote sensing technologies (satellite, terrestrial and airborne) are promising for accelerating the production of landslide maps, reducing the time and resources essential for their compilation and systematic updating. The aim of the study is to provide a better understanding of the use of virtual mapping of landslides with the help of LiDAR technology. The focus of the study is the spatio-temporal detection and virtual mapping of a landslide inventory via visualization and interpretation of very high-resolution (VHR) data in the forested terrain of the Mesilau River, Kundasang. To cope with the challenges of virtual inventory mapping in forested terrain, high-resolution LiDAR derivatives are used. This study indicates that airborne LiDAR technology can be an effective tool for mapping landslide inventories under complex climatic and geological conditions, and a quick way of mapping regional hazards in the tropics.
A. J. Abubakar et al.
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 1-5; https://doi.org/10.5194/isprs-archives-xlii-4-w5-1-2017

Abstract:
Geothermal systems are essentially associated with hydrothermal alteration mineral assemblages such as iron oxide/hydroxide, clay, sulfate, carbonate and silicate groups. Blind and fossilized geothermal systems are not characterized by obvious surface manifestations like hot springs, geysers and fumaroles; therefore, they cannot be easily identified using conventional techniques. In this investigation, the applicability of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data was evaluated for discriminating hydrothermal alteration minerals associated with geothermal systems, as a proxy for identifying subtle geothermal systems at Yankari Park in northeastern Nigeria. The area is characterized by a number of thermal springs such as Wikki and Mawulgo. Feature-oriented Principal Component Selection (FPCS) was applied to ASTER data based on the spectral characteristics of hydrothermal alteration minerals for a systematic and selective extraction of the information of interest. FPCS analysis applied to ASTER bands 5, 6 and 8 and to bands 1, 2, 3 and 4 was used for mapping clay and iron oxide/hydroxide minerals in the zones of the Wikki and Mawulgo thermal springs in the Yankari Park area. Field surveys using GPS and laboratory analyses, including X-ray diffraction (XRD) and Analytical Spectral Devices (ASD) measurements, were carried out to verify the image processing results. The results indicate that ASTER data can reliably be used at the reconnaissance stage for targeting subtle alteration mineral assemblages associated with geothermal systems.
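A minimal sketch of feature-oriented principal component selection on a band subset is given below (array layout and function name are assumptions, not the authors' workflow): PCA is run on a few diagnostic bands and the component whose loadings have opposite signs on the absorption and reflection bands of the target mineral group (e.g. ASTER bands 5, 6, 8 for clays) is retained.

```python
import numpy as np

def fpcs(bands):
    """bands: (n_bands, rows, cols) reflectance subset; returns PC images and loadings."""
    n, r, c = bands.shape
    X = bands.reshape(n, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)          # centre each band
    eigval, eigvec = np.linalg.eigh(np.cov(X))  # eigenvalues in ascending order
    order = eigval.argsort()[::-1]
    loadings = eigvec[:, order].T               # (n_components, n_bands)
    pcs = loadings @ X                          # project pixels onto the PCs
    return pcs.reshape(n, r, c), loadings

# Inspect `loadings` to find the PC with opposite-signed weights on the
# diagnostic bands; negate that PC if needed so alteration appears bright.
```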
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 67-71; https://doi.org/10.5194/isprs-archives-xlii-4-w5-67-2017

Abstract:
The International Hydrographic Organization (IHO) has issued standards that provide the minimum requirements for the execution of different types of hydrographic surveys collecting data used to compile navigational charts. Such standards are updated from time to time to reflect new survey techniques and practices, and they must be met to assure both surface navigation safety and marine environment protection. Hydrographic surveys can be classified into four orders, namely special order, order 1a, order 1b, and order 2. The order of hydrographic survey to use should be determined in accordance with the importance of the safety of navigation in the surveyed area. Typically, geodetic-grade dual-frequency GPS receivers are utilized for position determination during data collection in hydrographic surveys. However, with the evolution of high-sensitivity low-cost single-frequency receivers, it is very important to evaluate the performance of such receivers. This paper investigates the performance of low-cost single-frequency GPS receivers in hydrographic surveying applications. The main objective is to examine whether low-cost single-frequency receivers fulfil the IHO standards for hydrographic surveys. It is shown that the low-cost single-frequency receivers meet the IHO horizontal accuracy requirements for all hydrographic survey orders at any depth. However, the single-frequency receivers meet only the order 2 requirements for vertical accuracy, and only at depths greater than or equal to 100 m.
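The vertical accuracy comparison above refers to the IHO total vertical uncertainty (TVU) limit, which in the commonly cited S-44 formulation (values quoted here from that standard as generally published, not from the paper) is

$$\mathrm{TVU} \;=\; \sqrt{a^{2} + (b \cdot d)^{2}},$$

where d is the depth and (a, b) are order-dependent constants, e.g. a = 1.0 m and b = 0.023 for order 2; at d = 100 m this gives roughly sqrt(1.0^2 + 2.3^2) ≈ 2.5 m.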
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 61-66; https://doi.org/10.5194/isprs-archives-xlii-4-w5-61-2017

Abstract:
Open sea and inland waterways are the most widely used modes for transporting goods worldwide. The International Maritime Organization (IMO) defines the requirements for position-fixing equipment for a worldwide radio-navigation system in terms of accuracy, integrity, continuity, availability and coverage for the various phases of navigation. Satellite positioning systems can contribute to meeting these requirements, as well as optimizing marine transportation. Marine navigation usually consists of three major phases, identified as Ocean/Coastal/Port approach/Inland waterway, in-port navigation, and automatic docking, with alert limits ranging from 25 m to 0.25 m. GPS positioning is widely used for many applications and is currently recognized by the IMO for future maritime navigation. With the advancement of autonomous GPS positioning techniques such as Precise Point Positioning (PPP) and with the advent of new real-time GNSS correction services such as the IGS Real-Time Service (RTS), it is necessary to investigate the integrity of the PPP-based positioning technique along with the IGS-RTS service, in terms of availability and reliability, for safe navigation in maritime applications. This paper monitors the integrity of an autonomous real-time PPP-based GPS positioning system using the IGS real-time service (RTS) for maritime applications that require a minimum availability of integrity of 99.8 % to fulfil the IMO integrity standards. To examine the integrity of the real-time IGS-RTS PPP-based technique for maritime applications, kinematic data from a dual-frequency GPS receiver were collected onboard a vessel and processed with the real-time IGS-RTS PPP-based GPS positioning technique. It is shown that the availability of integrity of the real-time IGS-RTS PPP-based GPS solution is 100 % for all navigation phases and therefore fulfils the IMO integrity standards (99.8 % availability) immediately (after 1 second), after 2 minutes and after 42 minutes of convergence time for Ocean/Coastal/Port approach/Inland waterway, in-port navigation and automatic docking, respectively. Moreover, misleading information occurs for about 2 % of epochs in all navigation phases; these epochs are considered less safe but pose no immediate danger because the horizontal position error remains below the navigation alert limits.
G. Buyuksalih, S. Bayburt, A. P. Baskaraca, H. Karim, A. Abdul Rahman
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 39-44; https://doi.org/10.5194/isprs-archives-xlii-4-w5-39-2017

Abstract:
Solar energy modelling is increasingly popular, important and economically significant in solving the energy crisis of big cities. Solar energy is a clean and renewable resource that can be utilized to supply electrical power to individual buildings or groups of buildings as well as for indoor heating. Implementing photovoltaic (PV) systems in urban areas is one of the best options to address the power crisis arising from urban expansion and population growth. However, as the spaces for solar panel installation in cities are getting limited nowadays, the available strategic options are only the rooftop and the façade of the building. Thus, accurate information and the selection of the buildings with the highest potential solar energy collection are essential in energy planning, environmental conservation and sustainable development of the city. Estimating the solar energy/radiation on rooftops and façades does, however, have a limitation: the shadows from neighbouring buildings. The implementation of this solar estimation project for Istanbul uses CityGML LoD2-LoD3. The model and analyses were carried out using the Unity 3D game engine with the development of several customized tools and functionalities. The results show the estimation of potential solar energy received for the whole area per day, week, month and year, so that decisions on installing solar panels can be made. We strongly believe the Unity game engine platform could be utilized for 3D mapping visualization purposes in the near future.
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 31-38; https://doi.org/10.5194/isprs-archives-xlii-4-w5-31-2017

Abstract:
The central dry zone of Myanmar is the most water-stressed and also one of the most food-insecure regions in the country. In the Dry Zone area, the total population is 10.1 million people in 54 townships, of which approximately 43 % live below the poverty line and 40–50 % of the rural population is landless. Agriculture is the most important economic sector in Myanmar, as it is essential for national food security and a major source of livelihood for its people. In this region the adverse effects of climate change, such as late or early onset of the monsoon season, longer dry spells, erratic rainfall, increasing temperature, heavy rains, stronger typhoons, extreme spatial-temporal variability of rainfall, high intensities, limited rainfall events in the growing season, heat stress, drought, flooding, sea water intrusion, land degradation, desertification, deforestation and other natural disasters, are believed to be a major driver of food insecurity. For food vulnerability, we use the following indicators in ArcGIS software: slope, precipitation, vegetation, soil, erosion, land degradation and harvest failure. Erosion is influenced by rainfall and slope, while land degradation is directly related to vegetation, drainage and soil, and harvest failure can be generated from rainfall and flood potential zones. Results show that around 45 % of the study area falls under a very high erosion danger level, 70 % under average harvest failure and 59 % under intermediate land degradation, and overall around 45 % of the study area falls within the insecure food vulnerability zone. Our analysis shows an increase in alluvial farming of 1745.33 km2 since 1988, which reduces food vulnerability. The food vulnerability map also corresponds to areas of increased population and low income. Extreme climatic events are likely to increase the frequency and magnitude of serious drought periods and extreme floods. Food insecurity is an important issue that must be examined because it affects the lives of many people. This paper is helpful for identifying areas of food need in the central dry zone of Myanmar.
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 45-51; https://doi.org/10.5194/isprs-archives-xlii-4-w5-45-2017

Abstract:
Museum exhibit management is one of the usual undertakings of museum facilitators. Artworks must be strategically placed to achieve maximum viewing by visitors. The positioning of the artworks also strongly influences the quality of the visitors' experience. One solution to such problems is to utilize GIS and Agent-Based Modelling (ABM). In ABM, persistent interacting objects are modelled as agents. These agents are given attributes and behaviours that describe their properties as well as their motion. In this study, an ABM approach that incorporates GIS is utilized to perform an analytical assessment of the placement of the artworks in the Vargas Museum. GIS serves as the backbone for the spatial aspects of the simulation, such as the placement of the artwork exhibits, as well as possible obstructions to perception such as the columns, walls, and panel boards. Visibility analysis is also done on the model in GIS to assess the overall visibility of the artworks. The ABM is done using the initial GIS outputs and GAMA, an open-source ABM software package. Visitors are modelled as agents moving inside the museum following a specific decision tree. The simulation is done in three use cases: a 10 %, 20 %, and 30 % chance of having a visitor in the next minute. For the case of the said museum, the 10 % chance is determined to be the closest simulation case to the actual situation, and the recommended minimum time to achieve maximum artwork perception is 1 hour and 40 minutes. Initial assessment of the results shows that even after 3 hours of simulation, small parts of the exhibit lack viewers due to their distance from the entrance. A more detailed decision tree for the visitor agents can be incorporated to obtain a more realistic simulation.
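The per-minute arrival chance driving the three use cases can be illustrated with the minimal sketch below; it is a plain Bernoulli arrival process written for illustration and is not the GAMA model itself.

```python
# Simulate per-minute visitor arrivals for arrival probabilities of 10 %, 20 % and 30 %.
import random

def simulate_arrivals(p_arrival, minutes=180, seed=42):
    """Return the minutes (within a 3-hour run) at which a visitor enters."""
    rng = random.Random(seed)
    return [t for t in range(minutes) if rng.random() < p_arrival]

for p in (0.10, 0.20, 0.30):
    arrivals = simulate_arrivals(p)
    print(f"p={p:.2f}: {len(arrivals)} visitors in 3 simulated hours")
```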
N. Naharudin, M. S. S. Ahamad, et al.
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 137-144; https://doi.org/10.5194/isprs-archives-xlii-4-w5-137-2017

Abstract:
Every transit trip begins and ends with pedestrian travel. People need to walk to access transit services. However, their choice to walk depends on many factors including connectivity, level of comfort and safety. These factors can influence the pleasantness of riding the transit itself, especially during the first/last mile (FLM) journey. This has triggered several studies attempting to measure the pedestrian-friendliness a walking environment can offer. Some studies have used pedestrians' experience of walking to assess the pedestrian-friendliness of a walking environment. Other studies have used spatial analysis to measure it based on path connectivity and accessibility to public facilities and amenities. Though both are useful, perception-based studies and spatial analysis can be combined to derive more holistic results. This paper proposes a framework for selecting a pedestrian-friendly path for the FLM transit journey by using the two techniques (perception-based and spatial analysis). First, the degree of importance of the factors influencing a good walking environment is aggregated by using Analytical Network Process (ANP) decision rules based on people's preferences for those factors. The weights are then used as attributes in the GIS network analysis. Next, the network analysis is performed to find a pedestrian-friendly walking route based on the priorities aggregated by ANP, choosing routes that pass through the preferred attributes accordingly. The final output is a map showing a pedestrian-friendly walking path for the FLM transit journey.
N. Z. A. Halim, Saiful Aman Hj Sulaiman, K. Talib, O. M. Yusof, M. A. M. Wazir, M. K. Adimin
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 81-89; https://doi.org/10.5194/isprs-archives-xlii-4-w5-81-2017

Abstract:
This paper explains the process carried out in identifying the significant role of the NDCDB in Malaysia, specifically in land-based analysis. The research was initially part of a larger research exercise to identify the significance of the NDCDB from the legal, technical, role and land-based analysis perspectives. The research methodology of applying the Delphi technique is substantially discussed in this paper. A heterogeneous panel of 14 experts was created to determine the importance of the NDCDB from the role standpoint. Seven statements pertaining to the significant role of the NDCDB in Malaysia and land-based analysis were established after three rounds of consensus building. The agreed statements provide a clear definition of the important role of the NDCDB in Malaysia and for land-based analysis, which had previously been studied only to a limited extent, leading to an unclear perception among the general public and even the geospatial community. The connection of the statements with disaster management is discussed concisely at the end of the paper.
G. A. Domingo, M. M. Mallillin, A. M. C. Perez, A. M. Tamondong, et al.
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 53-60; https://doi.org/10.5194/isprs-archives-xlii-4-w5-53-2017

Abstract:
Studies have shown that mangrove forests in the Philippines have been drastically reduced due to conversion to fishponds, salt ponds, reclamation, as well as other forms of industrial development; as of 2011, 95 % of Iloilo's mangrove forest had been converted to fishponds. In this research, six (6) Landsat images acquired in the years 1973, 1976, 2000, 2006, 2010, and 2016 were classified using Support Vector Machine (SVM) classification to determine land cover changes, particularly the area change of mangrove and aquaculture from 1976 to 2016. The results of the classification were used as layers for the generation of 3D visualization models using four (4) platforms, namely Google Earth, ArcScene, Virtual Terrain Project, and Terragen. A perception survey was conducted among respondents with different levels of expertise in spatial analysis, 3D visualization, as well as in forestry, fisheries, and aquatic resources to assess the usability, effectiveness, and potential of the various platforms used. Change detection showed that the largest negative change for mangrove areas happened from 1976 to 2000, with the mangrove area decreasing from 545.374 hectares to 286.935 hectares. The highest increase in fishpond area occurred from 1973 to 1976, rising from 2,930.67 hectares to 3,441.51 hectares. Results of the perception survey showed that ArcScene is preferred for spatial analysis while respondents favored Terragen for 3D visualization and for forestry, fishery and aquatic resources applications.
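The area figures above follow from simple pixel counting on the classified maps; a minimal sketch is shown below (class codes and map names are hypothetical, not the authors' code), assuming 30 m Landsat pixels.

```python
# Convert pixel counts from a classified Landsat scene into hectares per class.
import numpy as np

PIXEL_AREA_HA = 30 * 30 / 10_000   # one 30 m Landsat pixel = 900 m^2 = 0.09 ha

def class_area_ha(classified, class_code):
    """classified: 2-D array of class codes from the SVM map for one date."""
    return np.count_nonzero(classified == class_code) * PIXEL_AREA_HA

# Change detection between two dates is then a simple difference, e.g.
# class_area_ha(map_2016, MANGROVE) - class_area_ha(map_1976, MANGROVE),
# where MANGROVE is the (hypothetical) integer code assigned to that class.
```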
N. M. Hashim, A. H. Omar, K. M. Omar, N. Din, et al.
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 91-96; https://doi.org/10.5194/isprs-archives-xlii-4-w5-91-2017

Abstract:
Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions. This actual position relates to the absolute position in a specific coordinate system and to the relationship with neighbouring features. With the growth of spatially based technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), a PAI campaign is inevitable, especially for legacy cadastral databases. Integration of a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will lead to a distortion of the relative geometry. The improved dataset should be further treated to minimize inherent errors and to fit the new, accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is then used as a benchmark to validate the results. It was found that the proposed technique is well suited to positional accuracy improvement of legacy spatial datasets.
Nurul Amirah Isa, W. M. N. Wan Mohd, et al.
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 107-112; https://doi.org/10.5194/isprs-archives-xlii-4-w5-107-2017

Abstract:
A common consequence of rapid and uncontrolled urbanization is the Urban Heat Island (UHI). It occurs when planning neglects climate behaviour, which degrades the quality of the urban climate. Recently, addressing urban climate in urban planning through mapping has received worldwide attention. Therefore, the significant factors need to be identified. This study aims to analyse the relationships between Land Surface Temperature (LST) and two urban parameters, namely built-up and green areas. Geographical Information System (GIS) and remote sensing techniques were used to prepare the necessary data layers required for this study. The built-up and green areas were extracted from Landsat 8 satellite images using the Normalized Difference Built-Up Index (NDBI), Normalized Difference Vegetation Index (NDVI) or Modified Normalized Difference Water Index (MNDWI) algorithms, while the mono-window algorithm was used to retrieve the Land Surface Temperature (LST). Correlation analysis and a Multi-Linear Regression (MLR) model were applied to quantitatively analyse the effects of the urban parameters. From the study, it was found that the two urban parameters have significant effects on the LST of Kuala Lumpur City. The built-up areas have a greater influence on the LST than the green areas. The built-up areas tend to increase the LST while green areas, especially the densely vegetated areas, help to reduce the LST within urban areas. Future studies should focus on improving the existing urban climatic model by including other urban parameters.
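The spectral indices named above follow the standard band-ratio formulas; a minimal sketch is given below (array names are assumptions, not the authors' code), with the usual Landsat 8 OLI band assignments noted in the comments.

```python
# Standard index formulas used to separate built-up, vegetated and water surfaces
# from Landsat 8 surface reflectance bands (float arrays of the same shape).
import numpy as np

def ndvi(nir, red):        # vegetation
    return (nir - red) / (nir + red)

def ndbi(swir1, nir):      # built-up areas
    return (swir1 - nir) / (swir1 + nir)

def mndwi(green, swir1):   # open water / moist surfaces
    return (green - swir1) / (green + swir1)

# For Landsat 8 OLI: green = band 3, red = band 4, nir = band 5, swir1 = band 6.
# Thresholding these indices (e.g. NDVI > 0.3 for dense vegetation) yields the
# built-up and green-area masks that are then regressed against the LST.
```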
R. Ahamed et al.
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 97-105; https://doi.org/10.5194/isprs-archives-xlii-4-w5-97-2017

Abstract:
Spatial point pattern analysis is one of the most suitable methods for analysing groundwater arsenic concentrations. Groundwater arsenic poisoning in Bangladesh has been one of the biggest environmental health disasters in recent times: about 85 million people are exposed to arsenic above 50 μg/L in drinking water. The paper seeks to identify the existing suitable aquifers for arsenic-safe drinking water along with the "spatial arsenic discontinuity" using GIS-based spatial geostatistical analysis in a small study site (12.69 km2) in the coastal belt of southwest Bangladesh (Dhopakhali union of Bagerhat district). The relevant spatial data were collected with the Global Positioning System (GPS), arsenic data with field testing kits, and tubewell attributes through observation and a questionnaire survey. Geostatistics with kriging methods can support water quality monitoring in different aquifers through hydrochemical evaluation and spatial mapping. The paper presents the interpolation of the regional estimates of the arsenic data for spatial discontinuity mapping with the Ordinary Kriging (OK) method, which overcomes the areal bias problem of administrative boundaries. This paper also demonstrates the suitability of isopleth maps, which are easier to read than choropleth maps. The OK method showed that around 80 percent of the study site is contaminated according to the Bangladesh Drinking Water Standard (BDWS) of 50 μg/L. The study identified a very few scattered "pockets" of arsenic-safe zones in the shallow aquifer.
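A minimal sketch of ordinary kriging of point arsenic concentrations onto a grid is shown below; it assumes the PyKrige library and synthetic input data purely for illustration (neither is named in the abstract).

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Synthetic stand-ins for tubewell coordinates (m) and arsenic concentrations (ug/L).
rng = np.random.default_rng(0)
x = rng.uniform(0, 4000, 60)
y = rng.uniform(0, 3000, 60)
z = rng.lognormal(mean=3.5, sigma=1.0, size=60)

# Fit an ordinary kriging model and interpolate onto a regular grid.
ok = OrdinaryKriging(x, y, z, variogram_model="spherical", verbose=False)
grid_x = np.linspace(x.min(), x.max(), 200)
grid_y = np.linspace(y.min(), y.max(), 200)
as_grid, variance = ok.execute("grid", grid_x, grid_y)

# Cells exceeding the Bangladesh standard of 50 ug/L can then be mapped as unsafe.
unsafe_fraction = np.mean(as_grid > 50)
print(f"fraction of grid above 50 ug/L: {unsafe_fraction:.2f}")
```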
T. A. Musa, M. H. Mazlan, Y. D. Opaluwa, I. A. Musliman, et al.
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 127-135; https://doi.org/10.5194/isprs-archives-xlii-4-w5-127-2017

Abstract:
This paper presents the development of a Tm model using radiosonde stations from Peninsular Malaysia. Two types of Tm model were developed: a site-specific model and a regional model. The results revealed that the estimation from the site-specific model shows only a small improvement compared to the regional model, indicating that the regional model is adequate for estimating GPS-derived IWV over Peninsular Malaysia. Meanwhile, this study found that the diurnal cycle of Ts influences the Tm–Ts relationship. Separating daytime and nighttime observations can improve the Tm–Ts relationship; however, the impact of the diurnal cycle on the IWV estimation is less than 1 %. The Global and Tropic Tm models were also evaluated, and the Tropic Tm model performs better than the Global Tm model.
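For context, site-specific and regional Tm models of this kind refit the classical linear relation between the water-vapour-weighted mean temperature Tm and the surface temperature Ts; the coefficients below are the widely used global values of Bevis et al. (1992), quoted for illustration rather than the coefficients derived in this paper:

$$T_m \;\approx\; 70.2 + 0.72\,T_s \quad [\mathrm{K}],$$

and the GPS-derived IWV then follows from the zenith wet delay through a Tm-dependent conversion factor, IWV = Π(Tm) · ZWD.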
A. G. Koppad, B. S. Janagoudar
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 121-125; https://doi.org/10.5194/isprs-archives-xlii-4-w5-121-2017

Abstract:
The study was conducted in Uttara Kannada district during 2012–2014. The study area lies between 13.92° N and 15.52° N latitude and 74.08° E and 75.09° E longitude, covering an area of 10,215 km2. Indian IRS P6 LISS-III imagery was used to classify the land use and land cover classes through supervised classification in ERDAS software, with ground truth data collected using GPS. The land use and land cover classes identified were dense forest, horticulture plantation, sparse forest, forest plantation, open land and agriculture land. Dense forest covered 63.32 % of the area (6468.70 km2), followed by agriculture at 12.88 % (1315.31 km2), sparse forest at 10.59 % (1081.37 km2), open land at 6.09 % (622.37 km2) and horticulture plantation, while forest plantation covered the least (1.07 %). Settlement, stony land and water bodies together cover about 4.26 percent of the area. The study indicated that aspect and altitude influence the forest types and vegetation pattern. An NDVI map was prepared, which indicated that healthy vegetation is represented by high NDVI values between 0.1 and 1, whereas non-vegetated features such as water bodies, settlement and stony land show values below 0.1. The decrease in forest area in some places was due to anthropogenic activities. The thematic map of land use and land cover classes was prepared using ArcGIS software.
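As a small illustration of the area tabulation behind the percentages quoted above, the snippet below counts pixels per class in a classified raster and converts them to areas, assuming the nominal 23.5 m LISS-III pixel; the class raster itself is a random placeholder.

```python
# Sketch: per-class area and percentage share from a classified land-cover raster.
import numpy as np

rng = np.random.default_rng(3)
classified = rng.integers(1, 7, size=(2000, 2000))   # class codes 1-6 (placeholder raster)
pixel_area_km2 = (23.5 * 23.5) / 1e6                 # nominal LISS-III pixel

codes, counts = np.unique(classified, return_counts=True)
areas = counts * pixel_area_km2
for code, area in zip(codes, areas):
    print(f"class {code}: {area:8.2f} km2  ({100 * area / areas.sum():5.2f} %)")
```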
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 113-119; https://doi.org/10.5194/isprs-archives-xlii-4-w5-113-2017

Abstract:
Current practice in combining bathymetry and a topographic DEM is to overlay and merge both datasets into a new DEM based on the river boundary. Working through a few sample datasets from recent projects, the authors realised that this method does not preserve the natural character of the river, especially the slope between the riverbank and the riverbed. Several other issues were also highlighted: the validity of the topographic DEM and of the river boundary, the limitations of the DEMs, and how the bathymetry survey was carried out in the field. To overcome these issues, a new technique called DEM blending was proposed and tested on the project datasets. It is based on a fusion of two DEMs (with respective buffer, offset and fusion ratio from a validated river boundary) to produce the riverbank slope, and on merging two different interpolation results to produce the best riverbed DEM. A simple riverbank ontology was prescribed to illustrate the improvement in accuracy and visualisation provided by this technique. The outputs from three projects/DEM results are presented as a comparison between current practice and the proposed technique.
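A conceptual sketch of the blending idea (not the authors' implementation): within a buffer around the river boundary, the bathymetric and topographic DEMs are mixed with a distance-based fusion ratio so the bank slope tapers smoothly; the grids and buffer width are illustrative assumptions.

```python
# Sketch: distance-weighted fusion of a topographic DEM and a bathymetric DEM.
import numpy as np
from scipy.ndimage import distance_transform_edt

def blend_dems(topo, bathy, river_mask, buffer_px=20):
    """river_mask: True inside the validated river boundary."""
    dist_out = distance_transform_edt(~river_mask)        # pixel distance from the river boundary
    w = np.clip(1.0 - dist_out / buffer_px, 0.0, 1.0)     # fusion ratio: 1 in-river, 0 at buffer edge
    return w * bathy + (1.0 - w) * topo

topo = np.full((200, 200), 5.0)                 # placeholder topographic DEM (bank level, m)
bathy = np.full((200, 200), -3.0)               # placeholder bathymetric DEM (riverbed, m)
river_mask = np.zeros((200, 200), dtype=bool)
river_mask[:, 80:120] = True                    # a straight 'river' for illustration
blended = blend_dems(topo, bathy, river_mask)
```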
M. Safari, , A. Maghsoudi,
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 153-157; https://doi.org/10.5194/isprs-archives-xlii-4-w5-153-2017

Abstract:
The Shahr-e-Babak tract of the Kerman metallogenic belt is one of the most prospective segments of the Urumieh–Dokhtar (Sahand–Bazman) magmatic arc. This area encompasses several porphyry copper deposits at the exploration, development and exploitation stages. The aim of this study is to map hydrothermal alterations caused by early Cenozoic magmatic intrusions in the Shahr-e-Babak area. For this purpose, mineral mapping methods including band combinations, ratios and multiplications, as well as PCA and MNF data space transforms, were applied to the SWIR and VNIR bands of both the ASTER and OLI sensors. Alteration zones were successfully mapped according to the spectral signatures of each type of alteration mineral assemblage, such as argillic, phyllic and propylitic. To enhance the target areas, false colour composites and HSI-RGB colour space transforms were performed on the developed band combinations. Previous studies have proven the robust application of ASTER in geology and mineral exploration; nonetheless, the results of this investigation demonstrate the applicability of the OLI sensor on Landsat 8 for alteration mapping. According to the results, OLI data can accurately map alteration zones. Additionally, the 12-bit quantization of OLI data is an advantage over the 8-bit ASTER data in the VNIR and SWIR, yielding high-quality results that make it easy to distinguish targets through enhanced colour contrast between altered and unaltered rocks.
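The band-ratio and PCA enhancement can be sketched as follows, assuming an OLI reflectance stack of bands 2–7 and using the common band 6 / band 7 ratio as a clay-alteration proxy; the array is a random placeholder, not the study's imagery.

```python
# Sketch: band ratio and principal component transform on an OLI VNIR/SWIR stack.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
stack = rng.uniform(0.05, 0.6, size=(300, 300, 6))       # placeholder for OLI bands 2-7

ratio_6_7 = stack[..., 4] / (stack[..., 5] + 1e-10)      # band 6 / band 7 (argillic-phyllic proxy)

rows, cols, nbands = stack.shape
pca = PCA(n_components=nbands).fit(stack.reshape(-1, nbands))
pc_images = pca.transform(stack.reshape(-1, nbands)).reshape(rows, cols, nbands)
print("PC loadings (rows = components):")
print(np.round(pca.components_, 2))                      # inspect loadings to pick the 'alteration' PC
```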
M. A. H. M. Rosdi, A. N. Othman, M. A. M. Zubir, Z. A. Latif, Z. M. Yusoff
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 145-151; https://doi.org/10.5194/isprs-archives-xlii-4-w5-145-2017

Abstract:
Sinkholes are not a new phenomenon in Malaysia, especially around the Klang Valley. Since 1968, an increasing number of sinkhole incidents have been reported in Kuala Lumpur and the vicinity. As a result, they pose a serious threat to human lives, assets and structures, especially in the capital city of Malaysia. Therefore, a Sinkhole Hazard Model (SHM) was generated within a GIS framework by applying the Analytical Hierarchy Process (AHP) technique in order to produce a sinkhole susceptibility hazard map for the area. Five main criteria, each categorized into five sub-classes, were selected for this research: Lithology (LT), Groundwater Level Decline (WLD), Soil Type (ST), Land Use (LU) and Proximity to Groundwater Wells (PG). A set of relative weights was assigned to each inducing factor and computed through a pairwise comparison matrix derived from expert judgment. Lithology and Groundwater Level Decline were identified as having the greatest impact on sinkhole development. The sinkhole susceptibility hazard zones were classified into five classes: very low, low, moderate, high and very high hazard. The results were validated against thirty-three (33) previous sinkhole inventory records. This evaluation shows that 64 % and 21 % of the sinkhole events fall within the high and very high hazard zones respectively. Based on this outcome, the AHP approach is clearly useful for predicting natural hazards such as sinkholes.
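A minimal AHP sketch: weights from the principal eigenvector of a 5×5 pairwise comparison matrix for LT, WLD, ST, LU and PG, plus Saaty's consistency ratio. The comparison values are placeholders, not the experts' judgments used in the study.

```python
# Sketch: AHP weights and consistency ratio from a pairwise comparison matrix.
import numpy as np

A = np.array([[1,   2,   3,   4,   5  ],     # illustrative judgments only (LT, WLD, ST, LU, PG)
              [1/2, 1,   2,   3,   4  ],
              [1/3, 1/2, 1,   2,   3  ],
              [1/4, 1/3, 1/2, 1,   2  ],
              [1/5, 1/4, 1/3, 1/2, 1  ]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n, RI = A.shape[0], 1.12                      # Saaty random index for n = 5
CI = (eigvals[k].real - n) / (n - 1)
print("weights:", np.round(weights, 3), " CR =", round(CI / RI, 3))
```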
N. M. Said, ,
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 159-164; https://doi.org/10.5194/isprs-archives-xlii-4-w5-159-2017

Abstract:
Over the years, the acquisition of bathymetric data has evolved from shipborne platforms to airborne and, presently, space-borne acquisition. The extensive development of remote sensing technology has brought a new revolution to hydrographic surveying. Satellite-Derived Bathymetry (SDB), a space-borne technique which derives bathymetric data from high-resolution multispectral satellite imagery, has recently been considered a promising new technology in the hydrographic surveying industry. Inspired by these developments, a comprehensive study was initiated by the National Hydrographic Centre (NHC) and Universiti Teknologi Malaysia (UTM) to analyse SDB as a means of shallow-water data acquisition. By adopting an additional adjustment in the calibration stage, a marginal improvement was found in the outcomes of both the Stumpf and Lyzenga algorithms, with RMSE values for the derived (predicted) depths of 1.432 metres and 1.728 metres respectively. This paper deliberates in detail on the findings of the study, especially the accuracy and practicality of SDB in the tropical environmental setting of Malaysia.
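A hedged sketch of the Stumpf-style band-ratio model, in which depth is taken as a linear function of ln(n·R_blue)/ln(n·R_green) and the two constants are calibrated against soundings; the reflectances and depths below are synthetic stand-ins, not the study's imagery or check points.

```python
# Sketch: Stumpf-style band-ratio bathymetry calibration and RMSE check.
import numpy as np

def stumpf_ratio(r_blue, r_green, n=1000.0):
    return np.log(n * r_blue) / np.log(n * r_green)

rng = np.random.default_rng(5)
depth_cal = rng.uniform(1.0, 15.0, 80)                       # sounding depths, m (placeholder)
r_green = rng.uniform(0.02, 0.08, 80)                        # placeholder reflectances
r_blue = r_green * np.exp(0.02 * depth_cal)                  # crude depth signal for the demo

ratio_cal = stumpf_ratio(r_blue, r_green)
m1, m0 = np.polyfit(ratio_cal, depth_cal, 1)                 # tunable constants (slope, intercept)
depth_pred = m1 * ratio_cal + m0
rmse = np.sqrt(np.mean((depth_pred - depth_cal) ** 2))
print(f"m1 = {m1:.2f}, m0 = {m0:.2f}, calibration RMSE = {rmse:.2f} m")
```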
Kumar Ravi Prakash, Tanuja Nigam,
Published: 10 October 2017
Abstract:
A coupled atmosphere-ocean-wave model was used to examine mixing in the upper oceanic layers under the influence of the very severe cyclonic storm Phailin over the Bay of Bengal (BoB) during 10–14 October 2013. Model simulations highlight the prominent role of cyclone-induced near-inertial oscillations in sub-surface mixing down to the thermocline depth. The inertial mixing introduced by the cyclone played a central role in deepening the thermocline and the mixed layer by 40 m and 15 m, respectively. A detailed analysis of the generation, propagation and dissipation of inertial-oscillation kinetic energy was carried out at a location in the northwestern BoB. The peak magnitudes of kinetic energy in the baroclinic and barotropic currents were found to be 1.2 m2 s−2 and 0.3 × 10−2 m2 s−2, respectively. The power spectrum analysis suggested that the dominant frequency operating in sub-surface mixing was associated with near-inertial oscillations. The peak strength of 0.84 m2 s−1 in the zonal baroclinic current was found at 14 m depth. The baroclinic kinetic energy remained high (> 0.03 m2 s−2) during 11–12 October and decreased rapidly thereafter. The wave-number rotary spectra identified the downward propagation, from the surface down to the thermocline, of energy generated by inertial oscillations. A quantitative analysis of the shear generated by the near-inertial baroclinic current showed higher shear generation at 40–80 m depth during peak surface winds. The analysis highlights that greater mixing within the mixed layer takes place where the eddy kinetic diffusivity is high (> 6 × 10−11 m2 s−1). The turbulent kinetic energy dissipation rate increased from 4 × 10−14 to 2.5 × 10−13 W kg−1 on approaching the thermocline, which damped the mixing process further downward into the thermocline layer.
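A small sketch of the quantity underlying the near-inertial interpretation: the local inertial frequency f = 2Ω·sin(lat) and the corresponding period; the latitude is an assumed value for the northwestern BoB analysis point.

```python
# Sketch: inertial frequency and near-inertial period at a given latitude.
import numpy as np

OMEGA = 7.2921e-5                                # Earth's rotation rate, rad/s
lat = 18.0                                       # assumed analysis latitude, degrees N
f = 2.0 * OMEGA * np.sin(np.radians(lat))
print(f"f = {f:.3e} s^-1, inertial period = {2 * np.pi / f / 3600:.1f} h")
```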
, H. Karim, , H. Purwanto
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 201-208; https://doi.org/10.5194/isprs-archives-xlii-4-w5-201-2017

Abstract:
Current practices in bathymetry surveying have some limitations. New technologies, such as unmanned boats, have become popular in developed countries, filling the gaps left by existing survey methods. As a tropical country, Malaysia has its own river and water-body characteristics and requires suitable approaches to bathymetry surveying. Thus, a study of this emerging technology should be conducted using an enhanced version of a small ROV boat on Malaysian rivers with best-practice approaches, so that surveyors can benefit from this innovative surveying product. Among the ROV boats available on the market for bathymetry surveying, an Indonesian product called SHUMOO is among the most promising, having been proven economically and practically in a few sample areas in Indonesia. The boat is equipped with an integrated system comprising remote sensing technology, GNSS, an echo sounder and a navigation engine. It was designed for riverbed surveys in shallow areas such as small and medium rivers, lakes, reservoirs, oxidation/detention ponds and other water bodies. This paper highlights the needs of Malaysian bathymetry surveyors and practitioners and the enhancements offered by the new ROV boat, which make their tasks easier, faster, safer and more economical while delivering better riverbed modelling results. The discussion continues with a sample Indonesian river (data collection and modelling), since it is very similar in character to Malaysian rivers, and suggests some improvements towards a Malaysian best practice.
, S. Jones
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 209-214; https://doi.org/10.5194/isprs-archives-xlii-4-w5-209-2017

Abstract:
This paper explores the influence of the extent and density of the flood inventory data on the final susceptibility map, examining the impact of different formats and extents of the inventory data. The extreme 2011 Brisbane flood event was used as the case study. A Logistic Regression (LR) model was applied using both polygon and point formats of the inventory data. Random samples of 1000, 700, 500, 300, 100 and 50 points were selected, and susceptibility mapping was undertaken for each group of random points. The LR method was selected for the modelling because it is a well-known algorithm in natural hazard modelling, being easily interpretable, fast to process and accurate. The resulting maps were assessed visually and statistically using the Area Under the Curve (AUC) method. The prediction rates measured for the susceptibility maps produced from the polygon data and from 1000, 700, 500, 300, 100 and 50 random points were 63 %, 76 %, 88 %, 80 %, 74 %, 71 % and 65 % respectively. Evidently, using the polygon format of the inventory data did not lead to reasonable outcomes. In the case of random points, increasing the number of points increased the prediction rate, except for 1000 points. Hence, minimum and maximum thresholds for the extent of the inventory should be set prior to the analysis. It is concluded that the extent and format of the inventory data are two of the influential components in the precision of the modelling.
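A hedged sketch of the point-count experiment, using synthetic conditioning factors in place of the real layers: logistic regression is fitted on different numbers of sampled inventory points and scored with AUC.

```python
# Sketch: logistic-regression susceptibility with varying inventory sample sizes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))                                            # placeholder conditioning factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)     # 1 = flooded, 0 = non-flooded

for n_points in (50, 100, 300, 500, 700, 1000):
    idx = rng.choice(len(y), size=n_points, replace=False)
    Xtr, Xte, ytr, yte = train_test_split(X[idx], y[idx], test_size=0.3,
                                          stratify=y[idx], random_state=0)
    model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
    print(f"{n_points:5d} points: AUC = {auc:.2f}")
```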
S. Salihin, T. A. Musa,
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 165-175; https://doi.org/10.5194/isprs-archives-xlii-4-w5-165-2017

Abstract:
This paper provides precise information on the spatial-temporal distribution of water vapour retrieved from the Zenith Path Delay (ZPD) estimated by Global Positioning System (GPS) processing over Peninsular Malaysia. A time series analysis of the ZPD and Integrated Water Vapour (IWV) values was performed to capture the characteristics of their seasonal variation during the monsoon seasons. The study found that the pattern and distribution of atmospheric water vapour over Peninsular Malaysia throughout the four-year period were influenced by two inter-monsoon and two monsoon seasons, namely the First Inter-monsoon, the Second Inter-monsoon, the Southwest monsoon and the Northeast monsoon.
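For context, the standard conversion chain from a GPS zenith delay to IWV can be sketched as below (Saastamoinen hydrostatic delay, then IWV = ZWD / (1e-6 * (k2' + k3/Tm) * Rv)); the pressure, station coordinates and Tm used here are illustrative numbers, not the study's data.

```python
# Sketch: zenith total delay -> zenith wet delay -> integrated water vapour.
import numpy as np

def zhd_saastamoinen(p_hpa, lat_deg, h_m):
    """Zenith hydrostatic delay (m) from surface pressure (hPa), latitude and height."""
    return 0.0022768 * p_hpa / (1.0 - 0.00266 * np.cos(2 * np.radians(lat_deg))
                                - 0.28e-6 * h_m)

def iwv_from_zwd(zwd_m, tm_k, k2p=0.221, k3=3739.0, rv=461.5):
    """IWV (kg m^-2) from zenith wet delay (m); k2' in K/Pa, k3 in K^2/Pa, Rv in J/(kg K)."""
    return zwd_m / (1e-6 * (k2p + k3 / tm_k) * rv)

ztd = 2.55                                       # example GPS-estimated ZTD, m (illustrative)
zhd = zhd_saastamoinen(1010.0, 3.1, 50.0)        # assumed station pressure, latitude, height
iwv = iwv_from_zwd(ztd - zhd, tm_k=278.0)        # assumed weighted mean temperature
print(f"ZHD = {zhd:.3f} m, IWV = {iwv:.1f} kg m^-2")
```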
S. A. Samsudin,
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 177-183; https://doi.org/10.5194/isprs-archives-xlii-4-w5-177-2017

Abstract:
Recently, there has been much debate on analysing backscatter data from multibeam echosounder systems (MBES) for seafloor classification. Two methods have commonly been used of late: (1) a signal-based classification method using Angular Range Analysis (ARA), and (2) an image-based texture classification method based on derived Grey Level Co-occurrence Matrices (GLCMs). Although the ARA method can predict sediment types, its low spatial resolution limits its use with high-spatial-resolution datasets. Texture layers from the GLCM, on the other hand, do not predict sediment types, but their high spatial resolution is useful for image analysis. The objectives of this study are (1) to investigate the correlation between MBES-derived backscatter mosaic textures and the seafloor sediment types derived from the ARA method, and (2) to identify which GLCM texture layers have the highest similarity with the sediment classification map derived from the signal-based classification method. The study area, located at Tawau, covers 4.7 km2 off the channel in the Celebes Sea between Nunukan Island and Sebatik Island, East Malaysia. First, GLCM layers were derived from the backscatter mosaic, while sediment types (i.e. a sediment map with classes) were constructed using the ARA method. Secondly, Principal Component Analysis (PCA) was used to determine which GLCM layers contribute most to the variance (i.e. the important layers). Finally, a K-Means clustering algorithm was applied to the important GLCM layers and the results were compared with the classes from ARA. The PCA identified that the GLCM layers of Correlation, Entropy, Contrast and Mean contributed 98.77 % of the total variance. Among these layers, GLCM Mean showed good agreement with the sediment classes from the ARA sediment map. This study has demonstrated that different texture layers have different characterisation factors for sediment classification, and proper analysis is needed before using these layers with any classification technique.
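A loose sketch of the GLCM, PCA and K-Means chain, assuming a window size, number of grey levels and a synthetic mosaic; entropy and GLCM mean are computed directly from the normalised co-occurrence matrix because they are not standard graycoprops properties in all scikit-image versions.

```python
# Sketch: per-window GLCM textures, PCA ranking of layers, and K-Means clustering.
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # 'greycomatrix' in older scikit-image
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

LEVELS, WIN = 32, 16

def glcm_features(window):
    g = graycomatrix(window, distances=[1], angles=[0], levels=LEVELS,
                     symmetric=True, normed=True)
    p = g[:, :, 0, 0]
    i = np.arange(LEVELS)
    return [graycoprops(g, "contrast")[0, 0],
            graycoprops(g, "correlation")[0, 0],
            -np.sum(p[p > 0] * np.log2(p[p > 0])),      # entropy
            float(np.sum(i[:, None] * p))]              # GLCM mean

rng = np.random.default_rng(2)
mosaic = rng.integers(0, LEVELS, size=(256, 256)).astype(np.uint8)   # placeholder backscatter mosaic

feats = np.array([glcm_features(mosaic[r:r + WIN, c:c + WIN])
                  for r in range(0, 256 - WIN, WIN)
                  for c in range(0, 256 - WIN, WIN)])

pca = PCA().fit(feats)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
classes = KMeans(n_clusters=4, n_init=10).fit_predict(feats)
```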
, J. Gill, Z. M. Amin, K. M. Omar
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 185-199; https://doi.org/10.5194/isprs-archives-xlii-4-w5-185-2017

Abstract:
A semi-dynamic datum provides positions with respect to time while taking into account secular and non-secular deformations, making it the best approach for adapting to the dynamic processes of the Earth. Malaysia, as yet, employs a static datum, i.e., GDM2000 at epoch 2000, even though Malaysia has evidently been affected by seismic activity over the past decade. Therefore, this paper proposes a design for implementing a semi-dynamic datum for Malaysia. Methodologically, GPS time series analyses are carried out to investigate the seismic activity affecting Malaysia, which informs the proposed design of the semi-dynamic datum. The implications of implementing a semi-dynamic datum for Malaysia are discussed as well. The results indicate that Malaysia undergoes complex deformation, whereby the earthquakes – primarily the 2004 Sumatra-Andaman, 2005 Nias and 2012 Northern Sumatra earthquakes – have affected the underlying secular velocities of Malaysia. Consequently, from this information, the proposed design, particularly the secular and non-secular deformation models, is described in detail. The proposed semi-dynamic datum comprises transformation, temporal and spatial modules, and utilizes a bilinear interpolation method. Overall, this paper aims to demonstrate the feasibility of a semi-dynamic datum approach for Malaysia.
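A conceptual sketch of the temporal and spatial modules: a secular velocity is bilinearly interpolated from a deformation-model grid and used to propagate an observed coordinate back to the reference epoch. Grid values, epochs and the coordinate are illustrative, and co-seismic (non-secular) patch corrections are omitted here.

```python
# Sketch: bilinear interpolation of a velocity grid and epoch propagation.
import numpy as np

def bilinear(grid, lats, lons, lat, lon):
    """Bilinear interpolation of a regular grid indexed [lat, lon]."""
    i = np.searchsorted(lats, lat) - 1
    j = np.searchsorted(lons, lon) - 1
    t = (lat - lats[i]) / (lats[i + 1] - lats[i])
    u = (lon - lons[j]) / (lons[j + 1] - lons[j])
    return ((1 - t) * (1 - u) * grid[i, j] + t * (1 - u) * grid[i + 1, j]
            + (1 - t) * u * grid[i, j + 1] + t * u * grid[i + 1, j + 1])

lats = np.arange(0.0, 8.0, 0.5)                     # illustrative model grid
lons = np.arange(99.0, 106.0, 0.5)
v_east = np.full((len(lats), len(lons)), 0.028)     # m/yr, placeholder secular velocity

t_obs, t_ref = 2017.5, 2000.0
e_obs = 500000.0                                    # observed easting, m (placeholder)
e_ref = e_obs - bilinear(v_east, lats, lons, 3.2, 101.7) * (t_obs - t_ref)
print(f"easting at reference epoch: {e_ref:.3f} m")
```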
Published: 10 October 2017
Geoscientific Model Development, Volume 10, pp 3679-3693; https://doi.org/10.5194/gmd-10-3679-2017

Abstract:
This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate–chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, of the kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing to the CPU-only code of the application; the median relative difference is found to be less than 0.000000001 % when comparing the output of the accelerated kernel with that of the CPU-only code. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.
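The workload division exploits the fact that each grid cell's chemical kinetics forms an independent stiff ODE system, which is what a per-cell GPU kernel parallelises. The Python sketch below only illustrates that independence on a toy two-species mechanism, using SciPy's Radau stiff solver as a stand-in for a Rosenbrock method; it is not the paper's generated code.

```python
# Sketch: independent stiff-ODE integrations per grid cell (the parallelisable unit).
import numpy as np
from scipy.integrate import solve_ivp

def toy_kinetics(t, y, k1, k2):
    # y = [A, B]; A -> B at rate k1, B -> A at rate k2 (illustrative mechanism only)
    a, b = y
    return [-k1 * a + k2 * b, k1 * a - k2 * b]

n_cells = 200
rng = np.random.default_rng(0)
y0 = rng.uniform(0.1, 1.0, size=(n_cells, 2))    # initial concentrations per cell (placeholder)
k = rng.uniform(0.5, 2.0, size=(n_cells, 2))     # per-cell rate "constants" (placeholder)

# Each cell is integrated independently; this loop is what a GPU kernel maps to threads.
finals = np.array([solve_ivp(toy_kinetics, (0.0, 10.0), y0[i], method="Radau",
                             args=(k[i, 0], k[i, 1])).y[:, -1] for i in range(n_cells)])
print("mean final concentrations:", finals.mean(axis=0))
```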
, Thomas Offenwanger, Carsten Schmidt, Michael Bittner, , , Jeng-Hwa Yee, , James M. Russell III
Published: 10 October 2017
Abstract:
For the first time, we present an approach to derive the zonal, meridional and vertical wavelengths as well as the periods of gravity waves based on only one OH* spectrometer addressing one vibrational-rotational transition. Knowledge of these parameters is a precondition for the calculation of further information such as the wave group velocity vector. OH(3-1) spectrometer measurements allow the analysis of gravity wave periods, but spatial information cannot necessarily be deduced. We use a scanning spectrometer and harmonic analysis to derive horizontal wavelengths at the mesopause above Oberpfaffenhofen (48.09° N, 11.28° E), Germany, for 22 nights in 2015. Based on the approximation of the dispersion relation for gravity waves of low and medium frequency and additional horizontal wind information, we then calculate vertical wavelengths. The mesopause wind measurements nearest to Oberpfaffenhofen are conducted by a meteor radar at Collm (51.30° N, 13.02° E), Germany, ca. 380 km northeast of Oberpfaffenhofen. In order to check our results, vertical temperature profiles from TIMED-SABER (Thermosphere Ionosphere Mesosphere Energetics Dynamics, Sounding of the Atmosphere using Broadband Emission Radiometry) overpasses are analysed with respect to the dominating vertical wavelength.
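A hedged sketch of the wavelength step under the usual low/medium-frequency dispersion relation m^2 = k_h^2 (N^2 - w^2) / (w^2 - f^2), with the intrinsic frequency obtained by Doppler-shifting the observed frequency with the radar wind along the wave vector; all numerical values are placeholders, not results from the 22 analysed nights.

```python
# Sketch: vertical wavelength from the gravity-wave dispersion relation.
import numpy as np

N = 0.02                                          # buoyancy frequency near the mesopause, s^-1 (typical)
OMEGA = 7.2921e-5
f = 2 * OMEGA * np.sin(np.radians(48.09))         # Coriolis parameter at Oberpfaffenhofen

lambda_h = 300e3                                  # horizontal wavelength, m (assumed)
period_obs = 1.5 * 3600.0                         # observed ground-based period, s (assumed)
u_along = 20.0                                    # wind along the wave vector, m/s (assumed)

k_h = 2 * np.pi / lambda_h
omega_int = 2 * np.pi / period_obs - k_h * u_along           # intrinsic frequency
m = k_h * np.sqrt((N**2 - omega_int**2) / (omega_int**2 - f**2))
print(f"vertical wavelength ~ {2 * np.pi / m / 1e3:.1f} km")
```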
M. N. Uti, , A. H. Omar
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 215-224; https://doi.org/10.5194/isprs-archives-xlii-4-w5-215-2017

Abstract:
Satellite altimetry has proven to be one of the important tools for providing good-quality information in oceanographic studies. Nowadays, many countries have begun implementing wind energy as one of their renewable sources for electric power generation. Many wind speed studies have been conducted in Malaysia using conventional methods and scientific techniques, such as anemometers and Volunteer Observing Ships (VOS), in order to obtain wind speed data to support the development of renewable energy. However, these conventional methods have limitations, such as limited spatial and temporal coverage and a lack of continuity in data sharing by VOS members. Thus, the aim of this research is to determine the reliability of wind speed data from multi-mission satellite altimetry to support assessment of wind energy potential in Malaysian seas. The wind speed data are derived from nine satellite altimeter missions spanning 1993 to 2016. To validate the reliability of the altimeter wind speed data, a comparison with wind speed data from ground-truth buoys located off Sabah and Sarawak was conducted. The validation was carried out in terms of correlation, root mean square error (RMSE) and satellite track analysis. Both comparisons show good correlation, with values of 0.7976 and 0.6148 for the points located in the Sabah and Sarawak seas, respectively. It can be concluded that reliable wind speed data from multi-mission satellite altimetry can be obtained to support renewable energy development.
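A small sketch of the two validation measures used above (Pearson correlation and RMSE), computed here on synthetic collocated altimeter-buoy pairs rather than the study's records.

```python
# Sketch: correlation and RMSE between altimeter and buoy wind speeds.
import numpy as np

rng = np.random.default_rng(4)
buoy = rng.uniform(2.0, 12.0, 500)                    # buoy wind speed, m/s (placeholder)
alt = buoy + rng.normal(0.0, 1.5, 500)                # altimeter estimate with noise (placeholder)

r = np.corrcoef(alt, buoy)[0, 1]
rmse = np.sqrt(np.mean((alt - buoy) ** 2))
print(f"correlation = {r:.4f}, RMSE = {rmse:.2f} m/s")
```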
, Randall V. Martin, Andrew Morrow, Sangeeta Sharma, Lin Huang, W. Richard Leaitch, , Hannes Schulz, , , et al.
Atmospheric Chemistry and Physics Discussions, Volume 17, pp 11971-11989; https://doi.org/10.5194/acp-17-11971-2017

Abstract:
Black carbon (BC) contributes to Arctic warming, yet sources of Arctic BC and their geographic contributions remain uncertain. We interpret a series of recent airborne (NETCARE 2015; PAMARCMiP 2009 and 2011 campaigns) and ground-based measurements (at Alert, Barrow and Ny-Ålesund) from multiple methods (thermal, laser incandescence and light absorption) with the GEOS-Chem global chemical transport model and its adjoint to attribute the sources of Arctic BC. This is the first comparison with a chemical transport model of refractory BC (rBC) measurements at Alert. The springtime airborne measurements performed by the NETCARE campaign in 2015 and the PAMARCMiP campaigns in 2009 and 2011 offer BC vertical profiles extending to above 6 km across the Arctic and include profiles above Arctic ground monitoring stations. Our simulations with the addition of seasonally varying domestic heating and of gas flaring emissions are consistent with ground-based measurements of BC concentrations at Alert and Barrow in winter and spring (rRMSE < 13 %) and with airborne measurements of the BC vertical profile across the Arctic (rRMSE = 17 %) except for an underestimation in the middle troposphere (500–700 hPa). Sensitivity simulations suggest that anthropogenic emissions in eastern and southern Asia have the largest effect on the Arctic BC column burden both in spring (56 %) and annually (37 %), with the largest contribution in the middle troposphere (400–700 hPa). Anthropogenic emissions from northern Asia contribute considerable BC (27 % in spring and 43 % annually) to the lower troposphere (below 900 hPa). Biomass burning contributes 20 % to the Arctic BC column annually. At the Arctic surface, anthropogenic emissions from northern Asia (40–45 %) and eastern and southern Asia (20–40 %) are the largest BC contributors in winter and spring, followed by Europe (16–36 %). Biomass burning from North America is the most important contributor to all stations in summer, especially at Barrow. Our adjoint simulations indicate pronounced spatial heterogeneity in the contribution of emissions to the Arctic BC column concentrations, with noteworthy contributions from emissions in eastern China (15 %) and western Siberia (6.5 %). Although uncertain, gas flaring emissions from oilfields in western Siberia could have a striking impact (13 %) on Arctic BC loadings in January, comparable to the total influence of continental Europe and North America (6.5 % each in January). Emissions from as far as the Indo-Gangetic Plain could have a substantial influence (6.3 % annually) on Arctic BC as well.
Atmospheric Chemistry and Physics Discussions, Volume 17, pp 11991-12010; https://doi.org/10.5194/acp-17-11991-2017

Abstract:
Oxidation flow reactors (OFRs) are increasingly employed in atmospheric chemistry research because of their high efficiency of OH radical production from low-pressure Hg lamp emissions at both 185 and 254 nm (OFR185) or 254 nm only (OFR254). OFRs have been thought to be limited to studying low-NO chemistry (in which peroxy radicals (RO2) react preferentially with HO2) because NO is very rapidly oxidized by the high concentrations of O3, HO2, and OH in OFRs. However, many groups are performing experiments by aging combustion exhaust with high NO levels or adding NO in the hopes of simulating high-NO chemistry (in which RO2 + NO dominates). This work systematically explores the chemistry in OFRs with high initial NO. Using box modeling, we investigate the interconversion of N-containing species and the uncertainties due to kinetic parameters. Simple initial injection of NO in OFR185 can result in more RO2 reacted with NO than with HO2 and minor non-tropospheric photolysis, but only under a very narrow set of conditions (high water mixing ratio, low UV intensity, low external OH reactivity (OHRext), and initial NO concentration (NOin) of tens to hundreds of ppb) that account for a very small fraction of the input parameter space. These conditions are generally far away from experimental conditions of published OFR studies with high initial NO. In particular, studies of aerosol formation from vehicle emissions in OFRs often used OHRext and NOin several orders of magnitude higher. Due to extremely high OHRext and NOin, some studies may have resulted in substantial non-tropospheric photolysis, strong delay to RO2 chemistry due to peroxynitrate formation, VOC reactions with NO3 dominating over those with OH, and faster reactions of OH–aromatic adducts with NO2 than those with O2, all of which are irrelevant to ambient VOC photooxidation chemistry. Some of the negative effects are the worst for alkene and aromatic precursors. To avoid undesired chemistry, vehicle emissions generally need to be diluted by a factor of > 100 before being injected into an OFR. However, sufficiently diluted vehicle emissions generally do not lead to high-NO chemistry in OFRs but are rather dominated by the low-NO RO2 + HO2 pathway. To ensure high-NO conditions without substantial atmospherically irrelevant chemistry in a more controlled fashion, new techniques are needed.
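A back-of-envelope sketch of the RO2-fate argument: the fraction of RO2 reacting with NO versus HO2 decides whether an experiment samples high-NO chemistry. The rate constants below are generic values near 298 K and the HO2 level is an illustrative OFR magnitude, not results from the box model described above.

```python
# Sketch: branching of RO2 fate between NO and HO2 as NO increases.
import numpy as np

k_ro2_no = 9.0e-12       # cm3 molec-1 s-1, typical RO2 + NO rate constant (generic)
k_ro2_ho2 = 1.5e-11      # cm3 molec-1 s-1, typical RO2 + HO2 rate constant (generic)

def ppb_to_molec(ppb, n_air=2.46e19):        # molecules cm-3 at ~298 K, 1 atm
    return ppb * 1e-9 * n_air

ho2 = 1.0e10                                 # molec cm-3, illustrative OFR HO2 level
for no_ppb in (0.01, 0.1, 1.0, 10.0):
    no = ppb_to_molec(no_ppb)
    frac_no = k_ro2_no * no / (k_ro2_no * no + k_ro2_ho2 * ho2)
    print(f"NO = {no_ppb:5.2f} ppb -> fraction of RO2 reacting with NO = {frac_no:.2f}")
```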