Metz, M. (2011). Support for massive spatial datasets in GRASS GIS. In: Geoinformatics FCE CTU 2011, Prague, 33 p. handle: http://hdl.handle.net/10449/20539
Support for massive spatial datasets in GRASS GIS
Metz, Markus
2011-01-01
Abstract
The size and level of detail of spatial datasets are continuously increasing with improved mapping technologies and the demand for highly detailed data. Examples of large datasets are Global Administrative Areas (GADM), OpenStreetMap, vectorized land cover maps, digital elevation models, and remotely sensed imagery. Such massive spatial datasets pose challenges for processing with GIS software, e.g. out-of-memory errors and processing times that can grow exponentially with data size. Time and memory requirements can make handling of large datasets ineffective or impossible, partly because many GIS suites load everything into memory for processing, which can easily exceed the available system memory even on modern computers. Unfortunately, a reduction in memory requirements often comes at the cost of longer processing times. Particularly challenging processes include least-cost path searches, hydrological modeling, image rectification, and building and cleaning vector topology. In order to handle large datasets, processing routines need to be adjusted so that they require less memory while remaining efficient. Common approaches are tiling, the use of external memory, and fast sorting and searching algorithms. These mechanisms are already used in the current stable version of GRASS GIS to facilitate the handling of larger datasets. The next version of GRASS GIS makes more extensive use of these approaches, some of which have been improved. The capabilities of the next GRASS GIS version to handle large datasets are described and compared to those of the current stable version.

File | Access | License | Size | Format
---|---|---|---|---
2011 Markus Metz Prague FCE CTU.pdf | Open access | All rights reserved | 1.31 MB | Adobe PDF
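The tiling and external-memory approaches named in the abstract can be illustrated with a rough sketch. This is not GRASS GIS code: the helper `iter_tiles`, the file name `dem.dat`, and the use of NumPy's `memmap` are assumptions made purely for illustration, showing why processing a raster one window at a time bounds peak memory use.

```python
import os
import tempfile

import numpy as np


def iter_tiles(rows, cols, tile_size):
    """Yield (row_slice, col_slice) windows covering a rows x cols grid."""
    for r0 in range(0, rows, tile_size):
        for c0 in range(0, cols, tile_size):
            yield (slice(r0, min(r0 + tile_size, rows)),
                   slice(c0, min(c0 + tile_size, cols)))


rows, cols, tile = 2048, 2048, 512

# External memory: np.memmap keeps the raster on disk and pages in only
# the cells that are touched, so peak RAM stays near tile*tile cells
# instead of rows*cols cells.
path = os.path.join(tempfile.mkdtemp(), "dem.dat")
raster = np.memmap(path, dtype=np.float32, mode="w+", shape=(rows, cols))
raster[:] = 1.0  # stand-in data; a real DEM would be imported here

# Tiling: reduce the raster one window at a time.
total = 0.0
n_tiles = 0
for rs, cs in iter_tiles(rows, cols, tile):
    total += float(raster[rs, cs].sum())
    n_tiles += 1

print(n_tiles, total)  # 16 tiles; 2048*2048 cells of 1.0 -> 4194304.0
```

This only shows the general technique in isolation; the paper itself describes how such mechanisms are realized inside GRASS GIS.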
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.