Results: 68

(searched for: A Web-Based Aggregate Information Portal)
Donatus I. Bayem, Henry O. Osuagwu, Chimezie F. Ugwu
European Journal of Electrical Engineering and Computer Science, Volume 5, pp 14-22; doi:10.24018/ejece.2021.5.3.323

Abstract:
A Web portal aggregates an array of information for a target audience and affords a variety of services, including search engines, directories, news, e-mail, and chat rooms; portals have evolved to provide a customized gateway to Web information, and a high level of personalization and customization is now possible. The portal concept can be extended further into a sophisticated Web interface that supports a variety of user tasks. An aggregate information Web portal serves the information needs of users on the Web and enables marketing to users broadly across a wide variety of interests. The most common use of the term Web-based aggregate information portal probably refers to the visual and user interface (UI) design of a Web site. This is a crucial aspect, since visitors are often more impressed by how a website looks and how easy it is to use than by the technologies and techniques used behind the scenes or the operating system running on the web server. In other words, it does not matter what technologies were involved in creating a site if the site is hard to use and easy to forget. This paper explores the factors that must be considered during the design and development of a Web-based aggregate information portal. Design, as a word in the context of a Web application, can mean many things. A working Web-based aggregate information portal, kaseremulticoncept, was developed to support the various users’ tasks. A number of technologies were studied and implemented in this research, including a multi-tier architecture, server- and client-side scripting techniques, and technologies such as the PHP programming language and relational databases, namely MySQL, Structured Query Language (SQL), and the XAMPP server.
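A minimal sketch (not from the paper) of the multi-tier idea described above: a thin data tier, a small aggregation layer, and a presentation function. Python and sqlite3 are used here purely as stand-ins for the PHP/MySQL stack the authors name; the table and field names are hypothetical.

```python
import sqlite3

# --- data tier: a relational store (stand-in for MySQL) --------------------
def init_db(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS services
                    (id INTEGER PRIMARY KEY, category TEXT, title TEXT, url TEXT)""")
    conn.executemany("INSERT INTO services (category, title, url) VALUES (?, ?, ?)",
                     [("news", "Daily headlines", "/news"),
                      ("directory", "Business listings", "/directory"),
                      ("mail", "Webmail", "/mail")])
    conn.commit()
    return conn

# --- logic tier: aggregate content for one portal page ---------------------
def portal_sections(conn, categories):
    sql = ("SELECT category, title, url FROM services WHERE category IN (%s)"
           % ",".join("?" * len(categories)))
    sections = {}
    for category, title, url in conn.execute(sql, categories):
        sections.setdefault(category, []).append({"title": title, "url": url})
    return sections

# --- presentation tier: render a very small HTML fragment ------------------
def render(sections):
    parts = []
    for category, items in sections.items():
        parts.append(f"<h2>{category}</h2><ul>")
        parts.extend(f'<li><a href="{i["url"]}">{i["title"]}</a></li>' for i in items)
        parts.append("</ul>")
    return "\n".join(parts)

if __name__ == "__main__":
    conn = init_db()
    print(render(portal_sections(conn, ["news", "mail"])))
```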
ISPRS International Journal of Geo-Information, Volume 10; doi:10.3390/ijgi10010001

Abstract:
To effectively disseminate location-linked information despite the digital walls that exist across institutions, this study developed a cross-institution mobile app, named GeoFairy2, to overcome the virtual gaps among multi-source datasets and help general users make thorough, accurate in-situ decisions. The app provides a one-stop service with relevant information to assist instant decision making. It was tested and proven capable of coupling and delivering location-based information from multiple sources on demand. The app can help general users break through the digital walls among information pools and serve as a one-stop retrieval place for all information. GeoFairy2 was used experimentally to gather real-time and historical information about crops, soil, water, and climate. Instead of being a one-way data portal, GeoFairy2 allows general users to submit photos and observations to support citizen science projects, derive new insights, and further refine the future service. This two-directional mechanism makes GeoFairy2 a useful mobile gateway for accessing and contributing to rapidly growing, heterogeneous, multi-source, location-linked datasets, and paves the way toward a new mobile web with more links and fewer digital walls across data providers and institutions.
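A minimal sketch (not the GeoFairy2 implementation) of the one-stop coupling idea: several independent sources are queried for the same coordinate and merged into one in-situ record. The source names and fields are hypothetical stand-ins for the crop, soil, water, and climate services the abstract mentions.

```python
from typing import Callable, Dict

# Hypothetical per-source fetchers; a real app would call each institution's API.
def soil_source(lat: float, lon: float) -> Dict:
    return {"soil_moisture": 0.23, "soil_type": "silt loam"}

def climate_source(lat: float, lon: float) -> Dict:
    return {"temp_c": 18.4, "precip_mm_24h": 2.1}

def crop_source(lat: float, lon: float) -> Dict:
    return {"crop": "maize", "growth_stage": "V6"}

SOURCES: Dict[str, Callable[[float, float], Dict]] = {
    "soil": soil_source,
    "climate": climate_source,
    "crop": crop_source,
}

def aggregate(lat: float, lon: float) -> Dict:
    """Couple multi-source, location-linked data into a single record."""
    record = {"lat": lat, "lon": lon, "sources": {}}
    for name, fetch in SOURCES.items():
        try:
            record["sources"][name] = fetch(lat, lon)
        except Exception as err:   # one failing source should not block the rest
            record["sources"][name] = {"error": str(err)}
    return record

if __name__ == "__main__":
    print(aggregate(38.9, -77.0))
```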
Francisco De La Vega, Juan Pablo Garcia-Martin, German P. Santos, Antonio Torralba
2020 IEEE International Symposium on Systems Engineering (ISSE) pp 1-3; doi:10.1109/isse49799.2020.9272242

The publisher has not yet granted permission to display this abstract.
Biodiversity Information Science and Standards, Volume 4; doi:10.3897/biss.4.59166

Abstract:
A discussion session held at a National Science Foundation-sponsored Herbarium Networks Workshop at Michigan State University in September of 2004 resulted in a rallying objective: make all botanical specimen information in United States collections available online by 2020. Rabeler and Macklin 2006 outlined a toolkit for realizing this ambitious goal, which included a review of relevant and state-of-the-art web resources, data exchange standards, and mechanisms to maximize efficiencies while minimizing costs. Given that we are now in the year 2020, it seems appropriate to examine the progress towards the objective of making all US botanical specimen collections data available online. Our presentation will attempt to answer several questions: How close have we come to meeting the original objective? What fraction of “digitized” specimens are minimally represented by a catalog number, a determination, and/or a photograph? What fraction has been thoroughly transcribed? How close have we come to attaining a seamlessly integrated, comprehensive, and national view of botanical specimen data that guides a stakeholder to appropriate resources regardless of their entry point? What “holes” in this effort still exist and what might be required to fill them? Given our interest in the success of both the Global Biodiversity Information Facility (GBIF) and the Integrated Digitized Biocollections (iDigBio), as well as the overwhelming likelihood that either one of these initiatives is the usual entry point for someone seeking US-based botanical data, we approached the answers to the above questions by first crafting a repeatable data download and processing workflow in early July 2020. This resulted in 25.6M records of plants, fungi, and Chromista from 216 datasets available through GBIF and 32.8M comparable records available through iDigBio from 525 recordsets. We attempted to align these seemingly discordant sets of records and also chose Darwin Core terms that were best suited to match the four hierarchical levels of digitization defined in the Minimal Information for Digital Specimens (MIDS) (van Egmond et al. 2019). During the analysis/comparison of the datasets, we found several examples where the number of data records from an institution seemed much lower than expected. From a combination of analyzing record content in GBIF/iDigBio and consulting regional/taxonomic portals, it became evident that, besides datasets only being included in either GBIF or iDigBio, there was a significant number of records in regional/taxonomic portals that were not yet made available through either GBIF or iDigBio. Progress on digitization has benefited greatly from the US National Science Foundation's creation of the Advancing Digitization of Biodiversity Collections (ADBC) program, and funding of the 15 Thematic Collection Networks (TCN).
The launching of new projects and the ensuing digitization of herbarium collections have led to a multitude of new specimen portals and the enhancement of existing software like Symbiota (Gries et al. 2014). But, it has also led to insufficient data sharing among projects and inadequately aligned data synchronization practices between aggregators. Consistency in terms of data availability and quality between GBIF and iDigBio is low, and the chronic lack of record-level identifiers consistently restricts the flow of enhancements made to records. We conclude that there remains substantial work to be done on the national infrastructure and on international best practices to help facilitate collaboration and to realize the original objective of making all US botanical specimen collections data available online.
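A minimal sketch (not the authors' actual workflow) of how Darwin Core terms might be mapped onto hierarchical digitization levels in the spirit of MIDS. The exact term-to-level assignments here are illustrative assumptions, not the published MIDS specification.

```python
# Hypothetical mapping of Darwin Core terms to digitization levels,
# loosely inspired by the MIDS idea of increasingly complete records.
LEVEL_REQUIREMENTS = {
    0: ["catalogNumber"],                                   # bare catalog record
    1: ["catalogNumber", "scientificName"],                 # + a determination
    2: ["catalogNumber", "scientificName",
        "recordedBy", "eventDate"],                         # + collecting event
    3: ["catalogNumber", "scientificName",
        "recordedBy", "eventDate",
        "decimalLatitude", "decimalLongitude"],             # + georeference
}

def mids_like_level(record: dict) -> int:
    """Return the highest level whose required terms are all non-empty."""
    achieved = -1
    for level in sorted(LEVEL_REQUIREMENTS):
        terms = LEVEL_REQUIREMENTS[level]
        if all(record.get(t) not in (None, "") for t in terms):
            achieved = level
        else:
            break
    return achieved

if __name__ == "__main__":
    rec = {"catalogNumber": "MSC0012345",
           "scientificName": "Acer saccharum Marshall",
           "recordedBy": "R. Rabeler",
           "eventDate": "1998-06-14"}
    print(mids_like_level(rec))   # -> 2 under the illustrative mapping above
```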
Yonael Teklu, Saifur Rahman, Peter Wiesner
2002 Annual Conference Proceedings pp 7.437.1-7.437.7; doi:10.18260/1-2--11049

The publisher has not yet granted permission to display this abstract.
, Furqan Baig, Zhigang Xu, Rohit Shukla, Pratik Sushil Zambani, Arun Swaminathan, Majid Jahangir, Khadija Chowdhry, Rahul Lachhani, Nitesh Idnani, et al.
Journal of Medical Internet Research, Volume 22; doi:10.2196/13598

Abstract:
Background: With increased specialization of health care services and high levels of patient mobility, accessing health care services across multiple hospitals or clinics has become very common for diagnosis and treatment, particularly for patients with chronic diseases such as cancer. With informed knowledge of a patient’s history, physicians can make prompt clinical decisions for smarter, safer, and more efficient care. However, due to the privacy and high sensitivity of electronic health records (EHR), most EHR data sharing still happens through fax or mail due to the lack of systematic infrastructure support for secure, trustable health data sharing, which can also cause major delays in patient care.
Objective: Our goal was to develop a system that will facilitate secure, trustable management, sharing, and aggregation of EHR data. Our patient-centric system allows patients to manage their own health records across multiple hospitals. The system will ensure patient privacy protection and guarantee security with respect to the requirements for health care data management, including the access control policy specified by the patient.
Methods: We propose a permissioned blockchain-based system for EHR data sharing and integration. Each hospital will provide a blockchain node integrated with its own EHR system to form the blockchain network. A web-based interface will be used for patients and doctors to initiate EHR sharing transactions. We take a hybrid data management approach, where only management metadata will be stored on the chain. Actual EHR data, on the other hand, will be encrypted and stored off-chain in Health Insurance Portability and Accountability Act–compliant cloud-based storage. The system uses public key infrastructure–based asymmetric encryption and digital signatures to secure shared EHR data.
Results: In collaboration with Stony Brook University Hospital, we developed ACTION-EHR, a system for patient-centric, blockchain-based EHR data sharing and management for patient care, in particular radiation treatment for cancer. The prototype was built on Hyperledger Fabric, an open-source, permissioned blockchain framework. Data sharing transactions were implemented using chaincode and exposed as representational state transfer application programming interfaces used for the web portal for patients and users. The HL7 Fast Healthcare Interoperability Resources standard was adopted to represent shared EHR data, making it easy to interface with hospital EHR systems and integrate a patient’s EHR data. We tested the system in a distributed environment at Stony Brook University using deidentified patient data.
Conclusions: We studied and developed the critical technology components to enable patient-centric, blockchain-based EHR sharing to support cancer care. The prototype demonstrated the feasibility of our approach as well as some of the major challenges. The next step will be a pilot study with health care providers in both the United States and Switzerland. Our work provides an exemplar testbed to build next-generation EHR sharing infrastructures.
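A minimal sketch (not the ACTION-EHR implementation) of the hybrid on-chain/off-chain pattern the abstract describes: the bulky EHR payload is encrypted and stored off-chain, while only a small metadata record with an integrity digest would be written to the ledger. The Fernet cipher from the `cryptography` package stands in for the PKI-based asymmetric encryption the real system uses, and `submit_to_ledger` is a stub for a Hyperledger Fabric chaincode call; the bucket path is hypothetical.

```python
import hashlib, json, time
from cryptography.fernet import Fernet   # assumed available; stand-in for PKI-based encryption

def share_record(ehr_json: dict, patient_id: str, recipient_org: str):
    """Encrypt an EHR payload for off-chain storage and build on-chain metadata."""
    key = Fernet.generate_key()              # in practice, wrapped with the recipient's public key
    ciphertext = Fernet(key).encrypt(json.dumps(ehr_json).encode())

    off_chain_blob = ciphertext              # would be uploaded to HIPAA-compliant cloud storage
    storage_uri = f"s3://hypothetical-bucket/{patient_id}/{int(time.time())}.bin"

    on_chain_metadata = {                    # only this small record goes on the ledger
        "patient_id": patient_id,
        "recipient_org": recipient_org,
        "storage_uri": storage_uri,
        "sha256": hashlib.sha256(ciphertext).hexdigest(),
        "timestamp": int(time.time()),
    }
    return on_chain_metadata, off_chain_blob, key

def submit_to_ledger(metadata: dict) -> None:
    """Stub for a chaincode invocation on a permissioned blockchain."""
    print("ledger <-", json.dumps(metadata, indent=2))

if __name__ == "__main__":
    meta, blob, key = share_record({"diagnosis": "C61", "plan": "radiation"},
                                   patient_id="p-001", recipient_org="hospital-B")
    submit_to_ledger(meta)
```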
Petr Škoda, Jakub Klímek, Martin Nečaský,
Transactions on Petri Nets and Other Models of Concurrency XV pp 103-110; doi:10.1007/978-3-030-32047-8_10

The publisher has not yet granted permission to display this abstract.
Biodiversity Information Science and Standards, Volume 3; doi:10.3897/biss.3.35232

Abstract:
The World Flora Online initiative (www.worldfloraonline.org) is a global consortium of many of the world’s leading botanical institutions with the aim to offer a worldwide information resource for plant information (Miller 2019). It aggregates information provided by the botanical community, either through specialized information systems or published taxonomic treatments and floras. WFO distinguishes contributions to the Taxonomic Backbone (i.e. the community-curated consensus system of scientific names, taxa, synonyms and their classification) from Content contributions (i.e. descriptive data, images, distribution, etc.). In the course of writing the guidelines for contributors, a format for the electronic submission of these data had to be developed. The expectation was that this would be a comparatively simple task, drawing on existing TDWG standards and using established formats and tools, i.e. Darwin Core Archive, the Integrated Publishing Toolkit and the DwC-A Validator tool. Actually, it was not that simple, as several problems had to be solved. First of all it was somewhat difficult to find authoritative sources on the web for existing data definitions. That solved, the actual definitions were, in some cases not really adequate for use by the botanical community, or a narrower description had to be given, or our portal software (based on the eMonocot portal system developed by the Royal Botanic Gardens, Kew) required a different controlled vocabulary. A decision was taken to follow the DwC naming conventions for data elements, although in some cases the designations - or at least the applications in a checklist context - were patently wrong (e.g. “taxonID” as the identifier for names, including synonyms). For Content contributions, the DwC-A standard star schema was useful, but it was not appropriate for backbone contributions with their multiple relationships e.g., to literature references. This experience underlines the necessity for a coherent documentation of standards (see Blum (2019)), including user-friendly access to definitions, data validation tools and clear guidelines for extensions/subtyping also at the element-level.
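A minimal sketch (not the WFO submission tooling) of a Darwin Core Archive style star-schema check: a core taxon table plus an extension table keyed on the core identifier, with a quick validation that synonym rows point at an existing accepted record. Field names follow common Darwin Core usage, but the validation rules are illustrative assumptions.

```python
import csv, io

# Core "taxon" file of a hypothetical DwC-A contribution (star-schema centre).
CORE = """taxonID,scientificName,taxonomicStatus,acceptedNameUsageID
wfo-001,Poa annua L.,Accepted,
wfo-002,Poa aestivalis J.Presl,Synonym,wfo-001
wfo-003,Poa algida Trin.,Synonym,wfo-999
"""

# One extension file (e.g. distributions), joined to the core via taxonID.
EXTENSION = """taxonID,locality
wfo-001,Europe
wfo-001,North America
"""

def load(text):
    return list(csv.DictReader(io.StringIO(text)))

def validate(core_rows, ext_rows):
    ids = {r["taxonID"] for r in core_rows}
    problems = []
    for r in core_rows:                      # synonyms must resolve to an existing record
        if r["taxonomicStatus"] == "Synonym" and r["acceptedNameUsageID"] not in ids:
            problems.append(f"{r['taxonID']}: acceptedNameUsageID not found")
    for r in ext_rows:                       # extension rows must attach to the core
        if r["taxonID"] not in ids:
            problems.append(f"extension row points at unknown taxonID {r['taxonID']}")
    return problems

if __name__ == "__main__":
    print(validate(load(CORE), load(EXTENSION)))   # -> ['wfo-003: acceptedNameUsageID not found']
```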
David Luck, Yang Ding, Thuy Mai Luu, Tim Oberlander, Ruth Grunau, Steven Miller, Gregory Lodygensky
Paediatrics & Child Health, Volume 24; doi:10.1093/pch/pxz066.125

The publisher has not yet granted permission to display this abstract.
, Furqan Baig, Zhigang Xu, Rohit Shukla, Pratik Sushil Zambani, Arun Swaminathan, Majid Jahangir, Khadija Chowdhry, Rahul Lachhani, Nitesh Idnani, et al.
Published: 1 February 2019
Abstract:
BACKGROUND: With increased specialization of health care services and high levels of patient mobility, accessing health care services across multiple hospitals or clinics has become very common for diagnosis and treatment, particularly for patients with chronic diseases such as cancer. With informed knowledge of a patient’s history, physicians can make prompt clinical decisions for smarter, safer, and more efficient care. However, due to the privacy and high sensitivity of electronic health records (EHR), most EHR data sharing still happens through fax or mail due to the lack of systematic infrastructure support for secure, trustable health data sharing, which can also cause major delays in patient care.
OBJECTIVE: Our goal was to develop a system that will facilitate secure, trustable management, sharing, and aggregation of EHR data. Our patient-centric system allows patients to manage their own health records across multiple hospitals. The system will ensure patient privacy protection and guarantee security with respect to the requirements for health care data management, including the access control policy specified by the patient.
METHODS: We propose a permissioned blockchain-based system for EHR data sharing and integration. Each hospital will provide a blockchain node integrated with its own EHR system to form the blockchain network. A web-based interface will be used for patients and doctors to initiate EHR sharing transactions. We take a hybrid data management approach, where only management metadata will be stored on the chain. Actual EHR data, on the other hand, will be encrypted and stored off-chain in Health Insurance Portability and Accountability Act–compliant cloud-based storage. The system uses public key infrastructure–based asymmetric encryption and digital signatures to secure shared EHR data.
RESULTS: In collaboration with Stony Brook University Hospital, we developed ACTION-EHR, a system for patient-centric, blockchain-based EHR data sharing and management for patient care, in particular radiation treatment for cancer. The prototype was built on Hyperledger Fabric, an open-source, permissioned blockchain framework. Data sharing transactions were implemented using chaincode and exposed as representational state transfer application programming interfaces used for the web portal for patients and users. The HL7 Fast Healthcare Interoperability Resources standard was adopted to represent shared EHR data, making it easy to interface with hospital EHR systems and integrate a patient’s EHR data. We tested the system in a distributed environment at Stony Brook University using deidentified patient data.
CONCLUSIONS: We studied and developed the critical technology components to enable patient-centric, blockchain-based EHR sharing to support cancer care. The prototype demonstrated the feasibility of our approach as well as some of the major challenges. The next step will be a pilot study with health care providers in both the United States and Switzerland. Our work provides an exemplar testbed to build next-generation EHR sharing infrastructures.
, Mark D. Huisjes
International Journal of Geographical Information Science, Volume 33, pp 28-54; doi:10.1080/13658816.2018.1514120

Abstract:
A most fundamental and far-reaching trait of geographic information is the distinction between extensive and intensive properties. In common understanding, originating in Physics and Chemistry, extensive properties increase with the size of their supporting objects, while intensive properties are independent of this size. It has long been recognized that whether analytical and cartographic measures can be meaningfully applied depends on whether an attribute is considered intensive or extensive. For example, the choice of a map type, as well as the application of basic geocomputational operations such as spatial intersections, aggregations, or algebraic operations such as sums and weighted averages, strongly depends on this semantic distinction. So far, however, the distinction can only be drawn in the head of an analyst. We still lack practical ways to automate it when composing GIS workflows and to scale up mapping and geocomputation over many data sources, e.g. in statistical portals. In this article, we test a machine-learning model that is capable of labeling extensive/intensive region attributes with high accuracy based on simple characteristics extractable from geodata files. Furthermore, we propose an ontology pattern that captures central applicability constraints for automating data conversion and mapping using Semantic Web technology.
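A minimal sketch (not the paper's model) of why the distinction matters operationally: when regions are merged, an extensive attribute such as population is summed, while an intensive attribute such as population density must be area-weighted. All numbers are made up.

```python
def aggregate_regions(regions, attribute, kind):
    """Merge regions; 'extensive' attributes are summed, 'intensive' ones area-weighted."""
    if kind == "extensive":
        return sum(r[attribute] for r in regions)
    if kind == "intensive":
        total_area = sum(r["area_km2"] for r in regions)
        return sum(r[attribute] * r["area_km2"] for r in regions) / total_area
    raise ValueError("kind must be 'extensive' or 'intensive'")

regions = [
    {"name": "A", "area_km2": 100.0, "population": 50_000, "pop_density": 500.0},
    {"name": "B", "area_km2": 400.0, "population": 40_000, "pop_density": 100.0},
]

# Summing densities would be meaningless; the area-weighted mean is not.
print(aggregate_regions(regions, "population", "extensive"))    # 90000
print(aggregate_regions(regions, "pop_density", "intensive"))   # 180.0
```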
Z. T. Ma, C. M. Li, Z. Wu, P. D. Wu
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 389-396; doi:10.5194/isprs-archives-xlii-4-389-2018

Abstract:
Spatio-temporal big data cloud platforms are an important spatial information infrastructure that can provide spatial information data services for different periods, various spatial analysis services, and flexible API services. Activities of policy coordination, facilities connectivity, and unimpeded trade under the Belt and Road Initiative (B&R) will create huge demands on this spatial information infrastructure. This paper focuses on a distributed spatio-temporal big data engine and an extendable cloud platform framework suited to the B&R, and on the key technologies needed to implement them. A distributed spatio-temporal big data engine based on Cassandra™ and an extendable 4-tier cloud platform framework are put forward in the spirit of parallel computing and cloud services. Four key technologies are discussed: 1) a storage and indexing method for distributed spatio-temporal big data, 2) an automatic collecting, processing, mapping, and updating method for authoritative spatio-temporal data for web mapping services, 3) a schema of service aggregation based on node registration and service invocation based on view extension, and 4) a distributed deployment and extension method for the cloud platform. We developed distributed spatio-temporal big data center software, built the main node platform portal with MapWorld™ map services and some thematic information services in China, and built local platform portals for countries in the B&R area. Management and analysis services for spatio-temporal big data were built in flexible styles on this platform. Practice shows that this provides a flexible and efficient solution for building the distributed spatio-temporal big data center and cloud platform; more node portals can be aggregated to the main portal by publishing their own web services and registering them in the aggregation schema. The data center and platform support the storage and management of massive data well and have high fault tolerance and good scalability.
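A minimal sketch (not the paper's engine) of one common way to key spatio-temporal records in a wide-column store such as Cassandra: a coarse grid cell plus a time bucket forms the partition key, so a query for an area and a period touches a bounded set of partitions. The cell size and bucket length are arbitrary choices here.

```python
from datetime import datetime, timezone

CELL_DEG = 0.5          # grid cell size in degrees (illustrative)
BUCKET = "%Y%m%d"       # one partition per cell per day (illustrative)

def partition_key(lat: float, lon: float, when: datetime) -> str:
    """Compose a partition key: coarse spatial cell + time bucket."""
    row = int((lat + 90.0) // CELL_DEG)
    col = int((lon + 180.0) // CELL_DEG)
    return f"{row}:{col}:{when.strftime(BUCKET)}"

def covering_keys(lat_min, lat_max, lon_min, lon_max, day: datetime):
    """Enumerate the partition keys a bounding-box query for one day must read."""
    r0, r1 = int((lat_min + 90.0) // CELL_DEG), int((lat_max + 90.0) // CELL_DEG)
    c0, c1 = int((lon_min + 180.0) // CELL_DEG), int((lon_max + 180.0) // CELL_DEG)
    stamp = day.strftime(BUCKET)
    return [f"{r}:{c}:{stamp}" for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

if __name__ == "__main__":
    t = datetime(2018, 4, 1, tzinfo=timezone.utc)
    print(partition_key(39.90, 116.40, t))                   # cell containing Beijing on that day
    print(len(covering_keys(39.5, 40.5, 116.0, 117.0, t)))   # partitions touched by a 1x1 degree box
```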
Sascha Heymann, Ljiljana Stojanovci, Kym Watson, Seungwook Nam, Byunghun Song, Hans Gschossmann, Sebastian Schriegel, Jurgen Jasperneite
2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA), Volume 1, pp 187-194; doi:10.1109/etfa.2018.8502645

The publisher has not yet granted permission to display this abstract.
, Katarina Boland, Dagmar Kern
Transactions on Petri Nets and Other Models of Concurrency XV pp 729-744; doi:10.1007/978-3-319-93417-4_47

The publisher has not yet granted permission to display this abstract.
Philipp Gerth, Anne Sieverling, Martina Trognitz
Published: 14 December 2017
Studies in Digital Heritage, Volume 1, pp 182-193; doi:10.14434/sdh.v1i2.23235

Abstract:
IANUS is funded by the German Research Foundation (DFG) with the objective of building up a digital archive for archaeology and ancient studies in Germany. A first three-year phase of conceptual work is now being followed by a second, in which the concepts are implemented and the data centre begins its operational work. Data curation is essential for the preservation of digital data and helps to detect errors, aggregate documentation, ensure the reusability of data, and in some cases even add further functionality and additional files. This paper presents the workflow of data curation based on a data collection about European vertebrate fauna and exemplifies the different data processing stages at IANUS according to the OAIS model, from initial submission until final presentation on the recently established data portal. One aspect of this is the discussion of the archival information package. To enable and ease the reusability of research data, it is useful to enrich the data; this includes the GIS integration of geographic information and the re-use of bibliographic references. Finally, a re-use scenario for research data stored in the IANUS repository is presented that offers researchers unified search and discovery facilities over several distributed and heterogeneous datasets by using Semantic Web technologies.
Forensic Science International: Genetics, Volume 31, pp 111-117; doi:10.1016/j.fsigen.2017.08.017

Abstract:
The STR Sequencing Project (STRSeq) was initiated to facilitate the description of sequence-based alleles at the Short Tandem Repeat (STR) loci targeted in human identification assays. This international collaborative effort, which has been endorsed by the ISFG DNA Commission, provides a framework for communication among laboratories. The initial data used to populate the project are the aggregate alleles observed in targeted sequencing studies across four laboratories: National Institute of Standards and Technology (N=1786), Kings College London (N=1043), University of North Texas Health Sciences Center (N=839), and University of Santiago de Compostela (N=944), for a total of 4612 individuals. STRSeq data are maintained as GenBank records at the U.S. National Center for Biotechnology Information (NCBI), which participates in a daily data exchange with the DNA DataBank of Japan (DDBJ) and the European Nucleotide Archive (ENA). Each GenBank record contains the observed sequence of a STR region, annotation ("bracketing") of the repeat region and flanking region polymorphisms, information regarding the sequencing assay and data quality, and backward compatible length-based allele designation. STRSeq GenBank records are organized within a BioProject at NCBI (https://www.ncbi.nlm.nih.gov/bioproject/380127), which is sub-divided into: commonly used autosomal STRs, alternate autosomal STRs, Y-chromosomal STRs, and X-chromosomal STRs. Each of these categories is further divided into locus-specific BioProjects. The BioProject hierarchy facilitates access to the GenBank records by browsing, BLAST searching, or ftp download. Future plans include user interface tools at strseq.nist.gov, a pathway for submission of additional allele records by laboratories performing population sample sequencing and interaction with the STRidER web portal for quality control (http://strider.online).
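A minimal sketch (not the STRSeq pipeline) of deriving a backward-compatible, length-based allele designation from a sequenced repeat region: count tandem copies of the repeat motif and express them in a simple bracketed notation. The motif and sequence are made-up examples, and real loci have compound and interrupted repeat structures this toy scan ignores.

```python
def count_tandem_repeats(sequence: str, motif: str) -> int:
    """Length-based allele value: the longest uninterrupted run of `motif` copies."""
    best = run = 0
    i, m = 0, len(motif)
    while i + m <= len(sequence):
        if sequence[i:i + m] == motif:
            run += 1
            best = max(best, run)
            i += m
        else:
            run = 0
            i += 1
    return best

def bracketed(motif: str, n: int) -> str:
    """Simple bracketed notation for a pure repeat, e.g. [TAGA]11."""
    return f"[{motif}]{n}"

if __name__ == "__main__":
    region = "AC" + "TAGA" * 11 + "GT"       # toy STR region with 11 TAGA copies
    n = count_tandem_repeats(region, "TAGA")
    print(n, bracketed("TAGA", n))           # -> 11 [TAGA]11
```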
Ramakrishnan Raman, S. Vadivel, Benson Edwin Raj
2017 Fourth HCT Information Technology Trends (ITT) pp 13-18; doi:10.1109/ctit.2017.8259559

The publisher has not yet granted permission to display this abstract.
A. Moitinho, , H. Savietto, M. Barros, C. Barata, A. J. Falcão, T. Fernandes, J. Alves, A. F. Silva, M. Gomes, et al.
Published: 7 September 2017
Astronomy & Astrophysics, Volume 605; doi:10.1051/0004-6361/201731059

Abstract:
Context. The first Gaia data release (DR1) delivered a catalogue of astrometry and photometry for over a billion astronomical sources. Within the panoply of methods used for data exploration, visualisation is often the starting point and even the guiding reference for scientific thought. However, this is a volume of data that cannot be efficiently explored using traditional tools, techniques, and habits. Aims. We aim to provide a global visual exploration service for the Gaia archive, something that is not possible out of the box for most people. The service has two main goals. The first is to provide a software platform for interactive visual exploration of the archive contents, using common personal computers and mobile devices available to most users. The second aim is to produce intelligible and appealing visual representations of the enormous information content of the archive. Methods. The interactive exploration service follows a client-server design. The server runs close to the data, at the archive, and is responsible for hiding as far as possible the complexity and volume of the Gaia data from the client. This is achieved by serving visual detail on demand. Levels of detail are pre-computed using data aggregation and subsampling techniques. For DR1, the client is a web application that provides an interactive multi-panel visualisation workspace as well as a graphical user interface. Results. The Gaia archive Visualisation Service offers a web-based multi-panel interactive visualisation desktop in a browser tab. It currently provides highly configurable 1D histograms and 2D scatter plots of Gaia DR1 and the Tycho-Gaia Astrometric Solution (TGAS) with linked views. An innovative feature is the creation of ADQL queries from visually defined regions in plots. These visual queries are ready for use in the Gaia Archive Search/data retrieval service. In addition, regions around user-selected objects can be further examined with automatically generated SIMBAD searches. Integration of the Aladin Lite and JS9 applications adds support for the visualisation of HiPS and FITS maps. The production of the all-sky source density map that became the iconic image of Gaia DR1 is described in detail. Conclusions. On the day of DR1, over seven thousand users accessed the Gaia Archive visualisation portal. The system, running on a single machine, proved robust and did not fail while enabling thousands of users to visualise and explore the over one billion sources in DR1. There are still several limitations, most noticeably that users may only choose from a list of pre-computed visualisations. Thus, other visualisation applications that can complement the archive service are examined. Finally, development plans for Data Release 2 are presented.
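A minimal sketch (not the Gaia service) of the pre-computed level-of-detail idea: the full point set is reduced once into progressively finer 2D density grids, and the client is served the level that matches its viewport. Assumes NumPy; the bin counts and viewport size are arbitrary.

```python
import numpy as np

def build_lod_pyramid(x, y, levels=(64, 256, 1024)):
    """Pre-compute 2D density grids at several resolutions (coarse to fine)."""
    pyramid = {}
    for bins in levels:
        hist, xedges, yedges = np.histogram2d(x, y, bins=bins)
        pyramid[bins] = (hist, xedges, yedges)
    return pyramid

def pick_level(pyramid, viewport_pixels):
    """Serve the coarsest level that still offers ~1 bin per screen pixel."""
    for bins in sorted(pyramid):
        if bins >= viewport_pixels:
            return pyramid[bins]
    return pyramid[max(pyramid)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=1_000_000)           # stand-in for a billion-source catalogue
    y = rng.normal(size=1_000_000)
    pyramid = build_lod_pyramid(x, y)
    hist, _, _ = pick_level(pyramid, viewport_pixels=200)
    print(hist.shape)                         # -> (256, 256)
```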
Cartik Kothari, Maxime Wack, Claire Hassen‐Khodja, Sean Finan, Guergana Savova, Megan O'boyle, Geraldine Bliss, Andria Cornell, Elizabeth J. Horn, Rebecca Davis, et al.
American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, Volume 177, pp 613-624; doi:10.1002/ajmg.b.32579

Abstract:
The heterogeneity of patient phenotype data is an impediment to research into the origins and progression of neuropsychiatric disorders. This difficulty is compounded in the case of rare disorders such as Phelan-McDermid Syndrome (PMS) by the paucity of patient clinical data. PMS is a rare syndromic genetic cause of autism and intellectual deficiency. In this paper, we describe the Phelan-McDermid Syndrome Data Network (PMS_DN), a platform that facilitates research into phenotype–genotype correlation and progression of PMS by: a) integrating knowledge of patient phenotypes extracted from Patient Reported Outcomes (PRO) data and clinical notes—two heterogeneous, underutilized sources of knowledge about patient phenotypes—with curated genetic information from the same patient cohort and b) making this integrated knowledge, along with a suite of statistical tools, available free of charge to authorized investigators on a Web portal https://pmsdn.hms.harvard.edu. PMS_DN is a Patient Centric Outcomes Research Initiative (PCORI) where patients and their families are involved in all aspects of the management of patient data in driving research into PMS. To foster collaborative research, PMS_DN also makes patient aggregates from this knowledge available to authorized investigators using distributed research networks such as the PCORnet PopMedNet. PMS_DN is hosted on a scalable cloud based environment and complies with all patient data privacy regulations. As of October 31, 2016, PMS_DN integrates high-quality knowledge extracted from the clinical notes of 112 patients and curated genetic reports of 176 patients with preprocessed PRO data from 415 patients.
Biodiversity Information Science and Standards, Volume 1; doi:10.3897/tdwgproceedings.1.20302

Abstract:
Hundreds of herbarium collections have accumulated a valuable heritage and knowledge of plants over several centuries (Page et al. 2015). Recent initiatives, such as iDigBio (https://www.idigbio.org), aggregate data from and images of vouchered herbarium sheets (and other biocollections) and make this information available to botanists and the general public worldwide through web portals. These ambitious plans to transform and preserve these historical biodiversity data in digital format are supported by the United States National Science Foundation (NSF) Advancing the Digitization of Natural History Collections (ADBC) program, and the digitization is done by the Thematic Collections Networks (TCNs) funded under the ADBC program. However, thousands of herbarium sheets are still unidentified at the species level, while numerous sheets should be reviewed and updated following more recent taxonomic knowledge. These annotations and revisions require an unrealistic amount of work for botanists to carry out in a reasonable time (Bebber et al. 2010). Computer vision and machine learning approaches applied to herbarium sheets are promising (Wijesingha and Marikar 2012) but are still not well studied compared to automated species identification from leaf scans or pictures of plants taken in the field. In this work, we study and evaluate the accuracy with which herbarium images can potentially be exploited for species identification with deep learning technology (Carranza-Rojas et al. 2017), particularly Convolutional Neural Networks (CNNs) (Szegedy et al. 2015). This type of network allows automatic learning of the most prominent visual patterns in the images, since the networks are trainable end-to-end (thus, differentiable), as opposed to previous approaches that use custom, hand-made feature extractors. In addition, we propose studying whether the combination of herbarium sheet images with photos of plants in the field (Joly et al. 2015, Carranza-Rojas and Mata-Montero 2016) is a viable way to train models that provide accurate results during identification. We also explore whether herbarium images from one region with a specific flora can be used in transfer learning (a technique in deep learning in which a model is first trained on one dataset and the resulting weights are then used as the baseline for training another model) for another region with other species, for example a region under-represented in terms of collected data. Our evaluation shows that the accuracy for species identification with deep learning technology, based on herbarium images, reaches 90.3% on a dataset of more than 1200 European plant species. This could potentially lead to the creation of a semi-, or even fully, automated system to help taxonomists and experts with their annotation, classification, and revision work. This study, conducted by researchers from the biological sciences and computer science communities, has allowed a better understanding of the capacity and needs of each community in terms of data structure, quality, and volume. This work is an example of the innovative research activities that can be contributed by the computer science community thanks to the new platforms developed by institutions with natural history collections.
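A minimal sketch (not the authors' training code) of the transfer-learning pattern described above: start from a CNN pre-trained on a generic image corpus, freeze the feature extractor, and retrain only a new classification head for the target flora. It assumes PyTorch and a recent torchvision (the pretrained weights are downloaded on first use); the species count and batch are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 1200   # e.g. the European species set mentioned in the abstract

# Pre-trained backbone; only the new classification head will be trained.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():                              # freeze feature extractor
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_SPECIES)    # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of herbarium sheet images (N, 3, 224, 224)."""
    backbone.train()
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    dummy_images = torch.randn(4, 3, 224, 224)           # stand-in for a real data loader
    dummy_labels = torch.randint(0, NUM_SPECIES, (4,))
    print(train_step(dummy_images, dummy_labels))
```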
Christin Seifert, Werner Bailer, Thomas Orgel, Louis Gantner, Roman Kern, Hermann Ziak, Albin Petit, , Stefan Zwicklbauer,
Journal on Computing and Cultural Heritage, Volume 10, pp 1-27; doi:10.1145/3012284

The publisher has not yet granted permission to display this abstract.
Nikul H. Ukani, Adam Tomkins, Chung-Heng Yeh, Wesley Bruning, Allison L. Fenichel, , Yu-Chi Huang, Dorian Florescu, Carlos Luna Ortiz, , et al.
Published: 14 December 2016
Abstract:
Summary: NeuroNLP is a key application on the Fruit Fly Brain Observatory platform (FFBO, http://fruitflybrain.org) that provides a modern web-based portal for navigating fruit fly brain circuit data. Increases in the availability and scale of fruit fly connectome data demand new, scalable and accessible methods to facilitate investigation into the functions of the latest complex circuits being uncovered. NeuroNLP enables in-depth exploration and investigation of the structure of brain circuits, using intuitive natural language queries that are capable of revealing latent structure and information obscured by expansive yet independent data sources. NeuroNLP is built on top of a database system called NeuroArch that codifies knowledge about fruit fly brain circuits spanning multiple sources. Users can probe biological circuits in the NeuroArch database with plain English queries, such as “show glutamatergic local neurons in the left antennal lobe” and “show neurons with dendrites in the left mushroom body and axons in the fan-shaped body”. This simple yet powerful interface replaces the usual, cumbersome checkboxes and dropdown menus prevalent in today’s neurobiological databases. Equipped with powerful 3D visualization, NeuroNLP standardizes tools and methods for graphical rendering, representation, and manipulation of brain circuits, while integrating with existing databases such as FlyCircuit. The user-friendly graphical user interface complements the natural language queries with additional controls for exploring the connectivity of neurons and neural circuits. Designed with an open-source, modular structure, it is highly scalable, flexible, and extensible to additional databases, supports switching between databases, and supports the creation of additional parsers for other languages. By supporting access through a web browser from any modern laptop or smartphone, NeuroNLP significantly increases the accessibility of fruit fly brain data and improves the impact of the data in both scientific and educational exploration.
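A minimal sketch (not the NeuroNLP parser) of turning a constrained English query like the examples above into a structured filter that a circuit database could execute. The vocabulary lists and field names are illustrative assumptions.

```python
import re

# Tiny, hypothetical controlled vocabularies.
TRANSMITTERS = {"glutamatergic", "gabaergic", "cholinergic"}
REGIONS = {"antennal lobe", "mushroom body", "fan-shaped body"}
SIDES = {"left", "right"}

def parse_query(text: str) -> dict:
    """Map a constrained English sentence onto a structured neuron filter."""
    text = text.lower()
    query = {"neurotransmitter": None, "local": "local neuron" in text, "regions": []}
    for t in TRANSMITTERS:
        if t in text:
            query["neurotransmitter"] = t
    for side in SIDES:
        for region in REGIONS:
            if re.search(rf"\b{side}\s+{region}\b", text):
                query["regions"].append({"side": side, "region": region})
    return query

if __name__ == "__main__":
    print(parse_query("show glutamatergic local neurons in the left antennal lobe"))
    # {'neurotransmitter': 'glutamatergic', 'local': True,
    #  'regions': [{'side': 'left', 'region': 'antennal lobe'}]}
```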
, , , , , J.-Matthias Graf Von Der Schulenburg
BMC Medical Informatics and Decision Making, Volume 16; doi:10.1186/s12911-016-0346-8

Abstract:
The Analytic Hierarchy Process (AHP) is increasingly used to measure patient priorities. Studies have shown that there are several different approaches to data acquisition and data aggregation. The aim of this study was to measure the information needs of patients having a rare disease and to analyze the effects of these different AHP approaches. The ranking of information needs is then used to display information categories on a web-based information portal about rare diseases according to the patient's priorities. The information needs of patients suffering from rare diseases were identified by an Internet research study and a preliminary qualitative study. Hence, we designed a three-level hierarchy containing 13 criteria. For data acquisition, the differences in outcomes were investigated using individual versus group judgements separately. Furthermore, we analyzed the different effects when using the median and arithmetic and geometric means for data aggregation. A consistency ratio ≤0.2 was determined to represent an acceptable consistency level. Forty individual and three group judgements were collected from patients suffering from a rare disease and their close relatives. The consistency ratio of 31 individual and three group judgements was acceptable and thus these judgements were included in the study. To a large extent, the local ranks for individual and group judgements were similar. Interestingly, group judgements were in a significantly smaller range than individual judgements. According to our data, the ranks of the criteria differed slightly according to the data aggregation method used. It is important to explain and justify the choice of an appropriate method for data acquisition because response behaviors differ according to the method. We conclude that researchers should select a suitable method based on the thematic perspective or investigated topics in the study. Because the arithmetic mean is very vulnerable to outliers, the geometric mean and the median seem to be acceptable alternatives for data aggregation. Overall, using the AHP to identify patient priorities and enhance the user-friendliness of information websites offers an important contribution to medical informatics.
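A minimal sketch (not the study's analysis) of the two AHP steps discussed above: aggregating individual pairwise-comparison matrices with the element-wise geometric mean, and computing a consistency ratio from the principal eigenvalue. Assumes NumPy; the random-index values are the standard Saaty constants for small matrices, and the example matrices are made up.

```python
import numpy as np

RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's RI values

def geometric_mean_aggregate(matrices):
    """Element-wise geometric mean of individual pairwise-comparison matrices."""
    stacked = np.stack(matrices)
    return np.exp(np.log(stacked).mean(axis=0))

def priorities_and_cr(matrix):
    """Priority vector (principal eigenvector) and consistency ratio CR."""
    eigvals, eigvecs = np.linalg.eig(matrix)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()
    n = matrix.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)                       # consistency index
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX[n] else 0.0
    return w, cr

if __name__ == "__main__":
    # Two hypothetical respondents comparing three information-need criteria.
    a = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]], dtype=float)
    b = np.array([[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]], dtype=float)
    group = geometric_mean_aggregate([a, b])
    weights, cr = priorities_and_cr(group)
    print(np.round(weights, 3), round(cr, 3))   # CR <= 0.2 counted as acceptable in the study
```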
Programmieren für Ingenieure und Naturwissenschaftler pp 75-78; doi:10.1007/978-3-319-41938-1_8

The publisher has not yet granted permission to display this abstract.
, Bandrowski Anita, Chiu Michael, Gillespie Tom, Go James, Li Yueling, Ozyurt Ibrahim, Martone Maryann
Frontiers in Neuroinformatics, Volume 10; doi:10.3389/conf.fninf.2016.20.00063

Abstract:
SciCrunch was designed to allow communities of researchers to create their own focused portals that provide access to resources, information, databases and tools of relevance to their research areas. A data portal that searches across hundreds of databases and information resources can be created in minutes. Communities can choose from our existing SciCrunch data sources and also add their own. SciCrunch was designed to break down the traditional types of portal silos created by different communities, so that communities can take advantage of work done by others and share their expertise as well. When a community brings in a data source, it becomes available to other communities, thus ensuring that valuable resources are shared by other communities who might need them. At the same time, individual communities can customize the way that these resources are presented to their constituents, to ensure that their user base is served. To ensure proper credit and to help share expertise, all resources are tagged by the communities that create them and those that access them. Exploring Information and Data: SciCrunch is one of the largest aggregations of scientific data and tools available on the Web. One can think of SciCrunch as a “PubMed” for tools and data. Just as you can search across all the biomedical literature through PubMed, regardless of journal, SciCrunch lets you search across hundreds of databases and information resources and millions of data records from a single interface. SciCrunch enhances search with semantic technologies to ensure we bring you all the results. SciCrunch provides three primary searchable collections: the SciCrunch Registry, the SciCrunch Data Federation, and SciCrunch Literature. * SciCrunch Registry – is a curated catalog of thousands of research resources (data, tools, materials, services, organizations, core facilities), focusing on freely-accessible resources available to the scientific community. Each research resource is categorized by resource type and given a unique identifier. SciCrunch also provides information on resource mentions in the literature and provides alerts when new publications mention a resource. * SciCrunch Data Federation – a data discovery index that provides deep query across the contents of databases created and maintained by independent individuals and organizations. Each database is aligned to the SciCrunch semantic framework, to allow users to browse the contents of these databases quickly and efficiently. Users are then taken to the source database for further exploration. SciCrunch deploys a unique data ingestion platform that makes it easy for database providers to make their resources available to SciCrunch. Using this technology, SciCrunch currently makes available more than 200 independent databases, comprising hundreds of millions of data records. To keep up to date on specific searches or resources, SciCrunch provides alerting services when new information or data is found. * SciCrunch Literature – provides a searchable index across literature via PubMed, full text articles from the Open Access literature, and other literature archives.
SciCrunch Communities: SciCrunch currently supports a diverse collection of communities, each with their own data needs: * Neuroscience Information Framework (NIF) – is a biological search engine that allows students, educators, and researchers to navigate the Big Data landscape by searching the contents of data resources relevant to neuroscience - providing a platform that can be used to pull together information about the nervous system. Underlying the NIF system is the Neurolex knowledge base. Neurolex seeks to define the major concepts of neuroscience, e.g., brain regions, cell types, in a way that is understandable to a machine. * NIDDK Information Network (dkNET) – serves the needs of basic and clinical investigators by providing seamless access to large pools of data relevant to the mission of The National Institute of Diabetes, Digestive and Kidney Disease (NIDDK). The portal contains information about research resources such as antibodies, vectors and mouse strains, data, protocols, and literature. * Research Resource Identification Initiative (RII) – aims to promote research resource identification, discovery, and reuse. The RII portal offers a central location for obtaining and exploring Research Resource Identifiers (RRIDs) - persistent and unique identifiers for referencing a research resource. A critical goal of the RII is the widespread adoption of RRIDs to cite resources in the biomedical literature. RRIDs use established community identifiers where they exist, and are cross-referenced in our system where more than one identifier exists for a single resource. * Drug Design Data Resource (D3R) – aims to advance the technology of computer-aided drug discovery through the interchange of high quality protein-ligand datasets and workflows, and by holding community-wide, blinded prediction challenges.
Malaria Journal, Volume 14; doi:10.1186/s12936-015-0965-z

Abstract:
The cornerstone of decision making aimed at improving health services is accurate and timely health information. The Ministry of Public Health and Sanitation in Kenya decided to pilot feasibility of Fionet, an innovation that integrates diagnostics, data capture and cloud services, in its malaria control programme to demonstrate usability and feasibility by primary level workers in a remote setting in Kenya. Eleven sites comprising one sub-district hospital, ten health centres and dispensaries were selected in three districts of Kisumu County to participate. Two health workers per site were selected, trained over a two-day period in the use of the Deki Reader™ to undertake rapid diagnostic testing (RDT) for malaria and data capture of patients’ records. Health managers in the three districts were trained in the use of Fionet™ portal (web portal to cloud based information) to access the data uploaded by the Deki Readers. Field Support was provided by the Fio Corporation representative in Kenya. A total of 5812 malaria RDTs were run and uploaded to the cloud database during this implementation research study. Uploaded data were automatically aggregated into predetermined reports for use by service managers and supervisors. The Deki Reader enhanced the performance of the health workers by not only guiding them through processing of a malaria RDT test, but also by doing the automated analysis of the RDT, capturing the image, determining whether the RDT was processed according to guidelines, and capturing full patient data for each patient encounter. Supervisors were able to perform remote Quality assurance/Quality control (QA/QC) activities almost in real time. Quality, complete and timely data collection by health workers in a remote setting in Kenya is feasible. This paperless innovation brought unprecedented quality control and quality assurance in diagnosis, care and data capture, all in the hands of the health worker at point of care in an integrated way.
, B P Kuzkin, Yu V Demina, V M Dubyansky, A N Kulichenko, O V Maletskaya, O Kh Shayakhmetov, O V Semenko, Yu V Nazarenko, , et al.
The publisher has not yet granted permission to display this abstract.
Mark Aschenbrennar, Jason Koo, Daniel Toshner, Kristen Tsolis, Michael Jaye
Published: 1 January 2015
Procedia Manufacturing, Volume 3, pp 4144-4151; doi:10.1016/j.promfg.2015.07.533

Abstract:
Despite the Department of Defense's (DoD) many investments directed toward developing and fielding programs designed to advance sociocultural knowledge, the DoD nonetheless lacks a shared repository in which all entities can aggregate, visualize, and share sociocultural data across the enterprise. A gap analysis of DoD's desired and actual states of achieving and implementing a sociocultural understanding reveals three main shortcomings: a data gap, a repository gap, and a collaboration gap. As a consequence, we created a proof of concept, enterprise solution for DoD that bridges the overall sociocultural gap by harnessing the overlooked and untapped potential of today's deployed DoD service members, who over the course of their daily duties, are exposed to various populations’ cultures. Service member observations and interpretations of service members’ interactions form an untapped set of operationally relevant sociocultural data. The existing wellspring of sociocultural information needs only be collected and indexed using a framework derived from the Five Operational Culture Dimensions model. Residing on a geodatabase and interfaced via a custom multi-client supported web-based Geographic Information System (GIS), this framework integrates the collected data comprised of service member narratives with the greater Joint Force thereby creating a dynamic and collaborative sociocultural living repository. Combining an anthropologically sound framework that is operationally relevant with the capabilities of GIS results in a solution that will allow DoD personnel to uniformly populate, visualize, and share near real-time cultural data relevant to military operations across all services and agencies. This DoD enterprise solution has the potential to enhance the Nation's armed forces’ strategic performance through the application of culturally adept military power
W Shao, P Kupelian, J Wang, D Löw, D Ruan
Published: 29 May 2014
by Wiley
Medical Physics, Volume 41, pp 114-114; doi:10.1118/1.4887883

The publisher has not yet granted permission to display this abstract.
, Bandrowski Anita, Banks Davis, Condit Christopher, Gupta Amarnath, Larson Stephen, Li Yueling, Ozyurt Ibrahim, Stagg Andrea, Whetzel Patricia, et al.
Frontiers in Neuroinformatics, Volume 8; doi:10.3389/conf.fninf.2014.18.00069

Abstract:
Introduction SciCrunch was designed to help communities of researchers create their own portals to provide access to resources, databases and tools of relevance to their research areas. A data portal that searches across hundreds of databases can be created in minutes. Communities can choose from our existing SciCrunch data sources and also add their own. SciCrunch was designed to break down the traditional types of portal silos created by different communities, so that communities can take advantage of work done by others and share their expertise as well. When a community brings in a data source, it becomes available to other communities, thus ensuring that valuable resources are shared by other communities who might need them. At the same time, individual communities can customize the way that these resources are presented to their constituents, to ensure that their user base is served. To ensure proper credit and to help share expertise, all resources are tagged by the communities that create them and those that access them. Exploring Data SciCrunch is one of the largest aggregations of scientific data and tools available on the Web. One can think of SciCrunch as a “PubMed” for tools and data. Just as you can search across all the biomedical literature through PubMed, regardless of journal, SciCrunch lets you search across hundreds of databases and millions of data records from a single interface. Such databases are considered part of the “hidden web” because their content is not easily accessed by search engines. SciCrunch enhances search with semantic technologies to ensure we bring you all the results. SciCrunch provides three primary searchable collections: • SciCrunch Registry – is a curated catalog of thousands of research resources (data, tools, materials, services, organizations, core facilities), focusing on freely-accessible resources available to the scientific community. Each research resource is categorized by resource type and given a unique identifier. • SciCrunch Data Federation – provides deep query across the contents of databases created and maintained by independent individuals and organizations. Each database is aligned to the SciCrunch semantic framework, to allow users to browse the contents of these databases quickly and efficiently. Users are then taken to the source database for further exploration. SciCrunch deploys a unique data ingestion platform that makes it easy for database providers to make their resources available to SciCrunch. Using this technology, SciCrunch currently makes available over 200 independent databases, comprising ~400 million data records. • SciCrunch Literature – provides a searchable index across literature via PubMed and full text articles from the Open Access literature. SciCrunch Communities SciCrunch currently supports a diverse collection of communities (Figure 1), each with their own data needs: • CINERGI – focuses on constructing a community inventory and knowledge base on geoscience information resources to meet the challenge of finding resources across disciplines, assessing their fitness for use in specific research scenarios, and providing tools for integrating and re-using data from multiple domains. The project team envisions a comprehensive system linking geoscience resources, users, publications, usage information, and cyberinfrastructure components. This system would serve geoscientists across all domains to efficiently use existing and emerging resources for productive and transformative research. 
• Monarch Initiative (http://monarchinitiative.org; Figure 2) – provides tools that will use semantics and statistical models to support navigation through multi-scale spatial and temporal phenotypes across in vivo and in vitro model systems in the context of genetic and genomic data. These tools will provide basic, clinical, and translational science researchers, informaticists, and medical professionals with an integrated interface and set of discovery tools to reveal the genetic basis of disease, facilitate hypothesis generation, and identify novel candidate drug targets. The goal of the system is to promote true translational research, connecting clinicians with model systems and researchers who might shed light on related phenotypes, assays, or models. • Neuroscience Information Framework (NIF) – is a biological search engine that allows students, educators, and researchers to navigate the Big Data landscape by searching the contents of data resources relevant to neuroscience - providing a platform that can be used to pull together information about the nervous system. Underlying the NIF system is the Neurolex knowledge base. Neurolex seeks to define the major concepts of neuroscience, e.g., brain regions, cell types, in a way that is understandable to a machine. • NIDDK Information Network (dkNET) – serves the needs of basic and clinical investigators by providing seamless access to large pools of data relevant to the mission of The National Institute of Diabetes, Digestive and Kidney Disease (NIDDK). The portal contains information about research resources such as antibodies, vectors and mouse strains, data, protocols, and literature. • Research Identification Initiative (RII) – aims to promote research resource identification, discovery, and reuse. The RII portal offers a central location for obtaining and exploring Research Resource Identifiers (RRIDs) - persistent and unique identifiers for referencing a research resource. A critical goal of the RII is the widespread adoption of RRIDs to cite resources in the biomedical literature. RRIDs use established community identifiers where they exist, and are cross-referenced in our system where more than one identifier exists for a single resource.
, , Franck Theeten
Published: 16 September 2013
Biodiversity Data Journal, Volume 1; doi:10.3897/BDJ.1.e968

Abstract:
The BioCASe Monitor Service (BMS) is a web-based tool for coordinators of distributed data networks that provide information to web portals and data aggregators via the BioCASe Provider Software. Building on common standards and protocols, it has three main purposes: (1) monitoring providers’ progress in data provision, (2) facilitating checks of data mappings with a focus on structure, plausibility and completeness, and (3) verifying compliance of provided data for transformation into other target schemas. Two use cases, GBIF-D and OpenUp!, are presented here in which the BMS is applied to monitor progress in data provision and to perform quality checks on the ABCD (Access to Biological Collection Data) schema mapping. However, the BMS can potentially be used with any conceptual data schema and protocol for querying web services. Through flexible configuration options, it is highly adaptable to specific requirements and needs. Thus, the BMS can easily be implemented into the coordination workflows and reporting duties of other distributed data network projects.
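For readers unfamiliar with the underlying mechanics, a BioCASe provider answers XML requests over HTTP and returns ABCD-structured XML that a monitoring tool can inspect for completeness. The Python sketch below is a generic illustration of that pattern only: the provider URL, the request body, and the element matching are placeholders and do not reproduce the actual BioCASe protocol messages.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder provider endpoint and request body; the real BioCASe protocol
# defines its own request schema, which is not reproduced here.
PROVIDER_URL = "https://example.org/biocase/pywrapper.cgi?dsa=collection"
REQUEST_XML = b"""<?xml version="1.0" encoding="UTF-8"?>
<request><!-- a search request conforming to the provider's schema --></request>"""


def count_units(provider_url: str, request_xml: bytes) -> int:
    """POST an XML request to a provider and count returned record elements."""
    req = urllib.request.Request(
        provider_url,
        data=request_xml,
        headers={"Content-Type": "text/xml"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        root = ET.fromstring(resp.read())
    # Loosely match ABCD 'Unit' elements (one per specimen/observation record),
    # ignoring XML namespaces for the sake of brevity.
    return sum(1 for el in root.iter() if el.tag.endswith("Unit"))


if __name__ == "__main__":
    print(count_units(PROVIDER_URL, REQUEST_XML))
```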
, Rekha Holtry, Kalman Hazins, Sheryl Happel Lewis
Online Journal of Public Health Informatics, Volume 5; doi:10.5210/ojphi.v5i1.4527

Abstract:
The objective of this project is to provide a technical mechanism for information to be easily and securely shared between public health ESSENCE users and non-public health partners; specifically, emergency management, law enforcement, and the first responder community. This capability allows public health officials to analyze incoming data and create interpreted information to be shared with others. These interpretations are stored securely and can be viewed by approved users and captured by authorized software systems. The project provides tools that can enhance emergency management situational awareness of public health events. It also gives external partners a mechanism for providing feedback to support public health investigations. Automated electronic disease surveillance has become a common tool for most public health practitioners. Users of these systems can analyze and visualize data coming from hospitals, schools, and a variety of other sources to determine the health of their communities. The insights that users gain from these systems would be valuable information for emergency managers, law enforcement, and other non-public health officials. Disseminating this information, however, can be difficult due to a lack of secure tools and guidance policies. This abstract describes the development of the tools necessary to support information sharing between public health and partner organizations. The project initially brought together public health and emergency management officials to determine current gaps in technology and policy that prevent sharing of information on a consistent basis. Officials from across the National Capital Region (NCR) in Maryland, Virginia, and the District of Columbia determined that a web portal, in which public health information could be securely posted and then captured by non-public health users (humans and computer systems), would be best. The development team then selected open-source tools, such as the Pebble blogging system, that allow information to be posted, tagged, and searched in an easily navigable site. The system also provides RSS feeds, both for the site as a whole and for specific tags, to support notification. The team modified the system to incorporate Spring Security features so that the site could be securely hosted, requiring usernames and passwords for access. Once the Pebble system was completed and deployed, the NCR’s aggregated ESSENCE system was adapted to allow users to submit daily reports and post time series images to the new site. An additional feature was created to post visualizations to the site every evening, summarizing the day’s reports. The system has been in testing since March 2012, and users of the system have provided valuable feedback. Based on the success of the tests, public health users in the NCR have begun working on the policy component of the project to determine when and how it should be used. Modifications to the system since deployment have included a...
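Because the notification path described above is plain RSS, a partner system can poll the portal without any custom integration. The Python sketch below shows one way such polling might look; the feed URL is a placeholder and is not the actual ESSENCE/Pebble deployment address.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder feed URL -- Pebble exposes RSS for the whole site and for
# individual tags, which is what makes tag-level notification possible.
FEED_URL = "https://example.org/essence-portal/feed/rss.xml"


def fetch_items(feed_url: str):
    """Yield (title, link, pubDate) tuples for each item in an RSS 2.0 feed."""
    with urllib.request.urlopen(feed_url, timeout=30) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):
        yield (
            item.findtext("title", default=""),
            item.findtext("link", default=""),
            item.findtext("pubDate", default=""),
        )


if __name__ == "__main__":
    for title, link, published in fetch_items(FEED_URL):
        print(f"{published}  {title}  {link}")
```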
, Smitka Jan, , Mautner Pavel, Čepička Ladislav, Holečková Irena
Frontiers in Neuroinformatics, Volume 7; doi:10.3389/conf.fninf.2013.09.00067

Abstract:
A need to store, organize, share and interpret data and metadata from electrophysiological experiments also emerged during our investigation of developmental coordination disorder in children. While classic data and metadata were successfully stored in the EEG/ERP portal [1], related studies, discussions, and partial interpretations remained unorganized and unsearchable. Since the EEG/ERP portal (which uses a relational database as its persistent layer) was not sufficiently prepared to store and process these unstructured texts, it was decided to find an appropriate solution to aggregate and store such data and facilitate subsequent searches for relevant information. It was also necessary to use the already existing description of data and domain knowledge in the form of semantic web structures. The OWLIM repository [2] and the KIM platform [3] were finally selected and used to store, annotate and search the data. The KIM Platform supports semantic annotation of documents based on an ontology stored in the semantic repository. The annotated documents can then be searched; the use of ontological terms ensures more relevant results than a normal full-text search. To facilitate ontology development, a tool called KIM-OWLImport was created. It is able to load the selected ontology into an in-memory semantic repository and modify it according to the rules defined by the KIM platform. The ontology can then be used for semantic annotation. To import documents into the KIM Platform, a tool called KIMBridge was developed. It runs as a service and periodically downloads new documents from selected data sources. Currently, KIMBridge supports downloading PDF documents from Google Drive and downloading discussions from the social network LinkedIn. Downloaded documents are annotated against the ontology and indexed in the KIM Platform. Subsequent searches are made through the web interface. This functionality was verified on a test set of domain documents.
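The advantage of the annotation-based search described above is that documents can be retrieved by ontology concept rather than by keyword. The sketch below shows what such a lookup against a SPARQL endpoint can look like in Python using the SPARQLWrapper library; the repository URL, the prefix, and the annotation predicates are assumptions for illustration and do not reproduce the actual KIM/OWLIM vocabulary.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder repository endpoint and annotation schema -- a real KIM/OWLIM
# deployment defines its own URL and predicates.
ENDPOINT = "http://localhost:8080/openrdf-sesame/repositories/documents"
QUERY = """
PREFIX ex: <http://example.org/annotation#>
SELECT ?doc ?concept WHERE {
    ?doc ex:hasAnnotation ?concept .
    ?concept ex:label "event-related potential" .
}
LIMIT 20
"""


def search_annotated_documents():
    """Return (document URI, concept URI) pairs annotated with a given concept."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [
        (b["doc"]["value"], b["concept"]["value"])
        for b in results["results"]["bindings"]
    ]


if __name__ == "__main__":
    for doc_uri, concept_uri in search_annotated_documents():
        print(doc_uri, concept_uri)
```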
Todd McNutt, Jatinder Palta, Carl Bogardus, Walter Bosch, Jeffrey Carlin, Henry Chou, Bruce Curran, Joel Goldwein, Ken Hotz, , et al.
Journal of Clinical Oncology, Volume 30, pp 300-300; doi:10.1200/jco.2012.30.34_suppl.300

The publisher has not yet granted permission to display this abstract.
P. H. Cheah, R. Zhang, , H. Yu, M. K. Foo
2012 10th International Power & Energy Conference (IPEC) pp 407-411; doi:10.1109/asscc.2012.6523302

The publisher has not yet granted permission to display this abstract.
User Modeling and User-Adapted Interaction, Volume 23, pp 381-443; doi:10.1007/s11257-012-9124-1

The publisher has not yet granted permission to display this abstract.
Thomas Emmendorfer, Peter A. Glassman, Von Moore, Thomas C. Leadholm, Chester B. Good, Francesca Cunningham
American Journal of Health-System Pharmacy, Volume 69, pp 321-328; doi:10.2146/ajhp110026

The publisher has not yet granted permission to display this abstract.
Elia Palme, Chrysanthos Dellarocas, Mihai Calin, Juliana Sutanto
Proceedings of the 14th Annual International Conference on Digital Government Research pp 25-26; doi:10.1145/2346536.2346540

The publisher has not yet granted permission to display this abstract.
Brian Craft, , Mary Goldman, Christopher Wilks, Christopher Szeto, Singer Ma, Josh Stuart, Jingchun Zhu,
Cancer Genomics, Volume 71; doi:10.1158/1538-7445.fbcr11-a39

The publisher has not yet granted permission to display this abstract.
Martin Wolpers, Martin Memmel, Katja Niemann, Joris Klerkx, Marcus Specht, Alberto Giretti, Erik Duval
2011 IEEE International Conference on Information Reuse & Integration pp 187-192; doi:10.1109/iri.2011.6009544

The publisher has not yet granted permission to display this abstract.
Jana Polgar, Robert Mark Bram, Tony Polgar
Building and Managing Enterprise-Wide Portals pp 134-172; doi:10.4018/978-1-59140-661-7.ch009

The publisher has not yet granted permission to display this abstract.
Xiuzhen Feng
Encyclopedia of Portal Technologies and Applications pp 402-407; doi:10.4018/978-1-59140-989-2.ch068

The publisher has not yet granted permission to display this abstract.
, , Mohini Singh
Encyclopedia of E-Commerce, E-Government, and Mobile Commerce pp 1016-1021; doi:10.4018/978-1-59140-799-7.ch163

The publisher has not yet granted permission to display this abstract.
Arthur Tatnall
Encyclopedia of E-Commerce, E-Government, and Mobile Commerce pp 1217-1221; doi:10.4018/978-1-59140-799-7.ch195

The publisher has not yet granted permission to display this abstract.
Li Xiao, Subhasish Dasgupta
Web Systems Design and Online Consumer Behavior pp 192-204; doi:10.4018/978-1-59140-327-2.ch011

The publisher has not yet granted permission to display this abstract.
Information Technology and Innovation Trends in Organizations pp 199-207; doi:10.1007/978-3-7908-2632-6_23

The publisher has not yet granted permission to display this abstract.
Arthur Tatnall, Stephen Burgess, Mohini Singh, Mehdi Khosrow-Pour
Encyclopedia of E-Commerce, E-Government, and Mobile Commerce; doi:10.4018/9781591407997.ch163

The publisher has not yet granted permission to display this abstract.
Arthur Tatnall, Mehdi Khosrow-Pour
Encyclopedia of E-Commerce, E-Government, and Mobile Commerce; doi:10.4018/9781591407997.ch195

The publisher has not yet granted permission to display this abstract.