Application of Web-GIS for Dissemination and 3D Visualization of Large-Volume LIDAR Data.-
Design and Evaluation of WebGL-Based Heat Map Visualization for Big Point Data.-
Sparse Big Data Problem: A Case Study of Czech Graffiti Crimes.-
Surveying of Open Pit Mine Using Low-Cost Aerial Photogrammetry.-
Models for Relocation of Emergency Medical Stations.-
The Possibilities of Big GIS Data Processing on the Desktop Computers.-
Creating Large Size of Data with Apache Hadoop.-
Processing LIDAR Data with Apache Hadoop.-
Applicability of Support Vector Machines in Landslide Susceptibility Mapping.-
Integration of Heterogeneous Data in the Support of the Forest Protection - Structural Concept.
This edited volume gathers the proceedings of the Symposium GIS Ostrava 2016, The Rise of Big Spatial Data, held at the Technical University of Ostrava, Czech Republic, on March 16–18, 2016. Combining theoretical papers and applications by authors from around the globe, it summarises the latest research findings in the area of big spatial data and the key problems related to its utilisation.
Welcome to the dawn of the big data era: it is in sight, but not quite here yet. Big spatial data is characterised by three main features: volume beyond the limits of usual geo-processing, velocity higher than conventional processes can handle, and variety, combining more diverse geodata sources than usual. The popular term denotes a situation in which one or more of these key properties reaches a point at which traditional methods for geodata collection, storage, processing, control, analysis, modelling, validation and visualisation fail to provide effective solutions.
Entering the era of big spatial data calls for solutions that address the “small data” issues which, left unresolved, soon grow into “big data” troubles. Resilience for big spatial data means overcoming the heterogeneity of spatial data sources (in topic, purpose, completeness, guarantee, licensing, coverage etc.), large volumes (from gigabytes to terabytes and beyond), the undue complexity of geo-applications and systems (i.e. combinations of standalone applications with web services, mobile platforms and sensor networks), neglected automation of geodata preparation (i.e. harmonisation and fusion), insufficient control of geodata collection and distribution processes (i.e. scarcity and poor quality of metadata and metadata systems), the limited capacity of analytical tools (i.e. the dominance of traditional causally driven analysis), the low performance of visual systems, inefficient knowledge-discovery techniques (for transforming vast amounts of information into small, essential outputs) and much more. These trends are accelerating as sensors become ever more ubiquitous around the world.