Saturday, December 21, 2013
Sales Area Management Tool with Postcodes
# A Common Problem...
Defining and managing sales, franchise or dealership areas and territories is quite a tedious task without access to specialised GIS software. However, the cost of deploying such software within a business, or employing external consultants to help in creating hardcopy maps of territories, may be prohibitive for many.
# The Solution
Sales Area Management Tool from aus-emaps.com is an online application that makes the process of creating and managing sales territories much, much easier, at only a fraction of the cost of the other options.
# Content
Postcodes are the most popular spatial unit used for defining sales territories, hence postal boundaries are included as the default option. They are small enough to allow quite precise definition of local neighbourhoods, yet there is a manageable number of them covering the whole of Australia. More administrative boundaries for Australia, as well as for other countries, will be added progressively.
# Functionality
The tool is very simple to use and does not require any specialised knowledge.
Sales territories can be created by adding individual polygons to the list, either by clicking on the desired polygons or by drawing shapes that approximate areas of interest on the map (drawing options include rectangle, circle and irregular polygon).
Alternatively, a comma delimited list of postcodes can be pasted into a text area under the map or postcode numbers can be typed in one by one.
An advanced conflict-resolution process is implemented in this application to prevent the creation of areas with invalid postcodes, or the allocation of a polygon to more than one sales area.
For ease of identification, sales territories are assigned random colours. Colour settings, including transparency, can be adjusted as required.
Each sales territory can be identified with a unique ID and name. Individual sales areas can be edited or removed from the map if required. The result can be saved for reuse as a CSV file or in JSON format, which preserves the colour scheme selected for individual sales areas. The JSON version can be uploaded back into the application to continue editing at a later time.
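The exact file layout used by the tool is not documented here, but to illustrate the idea, a saved territory could look something like the sketch below - a purely hypothetical structure, with field names invented for the example rather than taken from the application's actual schema:

```python
import json

# Hypothetical example only - shows how a sales territory saved as JSON
# could carry both the postcode list and the chosen colour scheme.
territory = {
    "id": "T-001",
    "name": "Sydney Inner West",
    "postcodes": ["2038", "2039", "2040", "2041", "2045"],
    "fill_colour": "#1f77b4",   # colour assigned on the map
    "opacity": 0.6,             # transparency setting
}

print(json.dumps(territory, indent=2))
```

A CSV export would presumably carry only the territory ID, name and postcode list, which is why the JSON form is the one that can be re-uploaded with colours intact.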
A convenient by-product of the “radius select” function is the ability to identify polygons that intersect a circle of predefined radius around a selected point.
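For readers curious how such a selection works under the hood, here is a minimal sketch of the general technique - not the tool's actual implementation - assuming planar (projected) coordinates and the third-party shapely library:

```python
from shapely.geometry import Point, Polygon

def polygons_within_radius(polygons, centre_xy, radius):
    """Return the keys of polygons that intersect a circle of the given
    radius around the selected point (all in the same projected units)."""
    circle = Point(centre_xy).buffer(radius)  # circle approximated as a polygon
    return [key for key, poly in polygons.items() if poly.intersects(circle)]

# Toy example: two square "postcode" polygons near the origin.
postcodes = {
    "2000": Polygon([(0, 0), (10, 0), (10, 10), (0, 10)]),
    "2010": Polygon([(20, 0), (30, 0), (30, 10), (20, 10)]),
}
print(polygons_within_radius(postcodes, (5, 5), 8))  # -> ['2000']
```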
# Access
Version 1 of the Sales Area Management Tool is available to customers as an advanced beta release. To arrange access, please email your request, with relevant contact details, to info@aus-emaps.com.
# Reselling Opportunities
There are attractive revenue sharing opportunities for affiliates co-marketing Sales Area Management Tool. Please email your expression of interest to info@aus-emaps.com for additional details.
Friday, December 20, 2013
Public opinions mapped by ABC
The Australian Broadcasting Corporation (ABC) has created a simple interactive presentation using Google Maps that allows gauging the sentiment of the population of Commonwealth Electorates on key social, moral, economic and policy issues debated during the 2013 federal election campaign.
Information is based on 1.4 million responses collected via the ABC’s Vote Compass tool. Results were weighted against Census information and are presented as a thematic map.
What emerges is a very interesting picture of the “great divides” in Australian society. In particular, the maps illustrate differences in points of view between city folks and rural electorates, between regions within capital cities (for example, East and West Sydney), and between the haves and have-nots. It is rich material for political researchers and sociologists.
First spotted on Google Maps Mania.
Related Posts:
Federal election 2013 results mapped
How to win election with a handful of votes
More 2013 federal election maps
Mapping federal election 2013 Pt2
Mapping 2013 federal election results
Map adds sizzle to elections
Saturday, December 14, 2013
Year's end reflection: State of spatial affairs
I have been following the fortunes of the spatial industry in Australia for more than a decade and, compared to the recent past, it seems that the industry is in a bit of a low right now. Not necessarily in terms of revenues, the level of activity, or the overall relevance of the industry to the rest of the economy, but rather in terms of the level of excitement and enthusiasm amongst the participants. It is hard to tell why, but things just seem to be lacking a bit of a spark.
The last time the entire spatial industry was abuzz with excitement was in 2005/06, when Google released its online map service. Back then, there was no conversation amongst spatial professionals, and no presentation at conferences and seminars, without a reference to Google Maps and the enormous opportunity its release presented for the industry. And indeed, the follow-up years were very good, as the industry expanded in size (both in terms of the number of participants and attributed revenues) and in relevance, as it reached into a wide range of new fields and applications.
Although that level of excitement is rather unlikely to return any time soon, it doesn’t mean that nothing interesting is happening in the industry right now. In fact, there are many developments to be excited about and, although they may be happening on the fringes, it will not take long for them to filter through to the mainstream.
At the top of my list of the most significant developments is a giant leap in the capabilities of web browsers - thanks in large part to consistent implementation of HTML5 in the latest versions. It is now viable to serve voluminous spatial data directly to the browser and render it in 2D as well as in 3D in real time. Animation and rich interactive visualisation of spatial data are now also possible, extending the spatial industry well into the domain of interactive graphics and visual art.
SVG is back in favour, and tiled vector data for base map presentation is the next obvious step in the evolution of online mapping applications. Better browser capabilities are also enabling replication of desktop functionality in an online environment – opening up a myriad of opportunities for Software as a Service (SaaS) applications with advanced spatial functionality.
In the quest for greater efficiencies, XML-based spatial data exchange formats that traditionally supported spatial web services are being substituted with more processing-friendly formats such as GeoJSON. The latest flavour is TopoJSON, which preserves the topology of spatial data and allows for a more compact description of geometries, since shared points are recorded only once.
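To illustrate the difference, here is a schematic sketch using Python dictionaries rather than actual file contents (real TopoJSON also quantises and delta-encodes coordinates, and uses negative indices to flag reversed arcs):

```python
# GeoJSON: each polygon carries its full coordinate ring, so the shared
# edge (0,0)-(0,1) between the two polygons is stored twice.
geojson_polys = {
    "A": [[(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]],
    "B": [[(0, 0), (0, 1), (-1, 1), (-1, 0), (0, 0)]],
}

# TopoJSON-style topology (schematic): shared boundaries are stored once
# as "arcs" and each polygon just references arc indices.
topology = {
    "arcs": [
        [(0, 0), (0, 1)],                   # arc 0: the shared edge
        [(0, 1), (1, 1), (1, 0), (0, 0)],   # arc 1: remainder of A's boundary
        [(0, 1), (-1, 1), (-1, 0), (0, 0)], # arc 2: remainder of B's boundary
    ],
    "objects": {
        "A": {"type": "Polygon", "arcs": [[0, 1]]},
        "B": {"type": "Polygon", "arcs": [[0, 2]]},  # reuses arc 0
    },
}
```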
Spatial data storage and transfer formats are an area where a lot of innovation will be happening in the coming years. This is necessary to fully utilise “big data” and High Performance Computing (HPC) processing capabilities for spatial data. Traditional file formats and relational databases with spatial extensions are not compatible with these new paradigms in computing.
The volume of spatial data keeps growing exponentially as data capture and processing capabilities increase. We now have spatial data generated from a wide range of satellite-, airplane- and drone-deployed sensors, but also from stationary as well as mobile devices on the ground. Information is captured in a variety of forms, such as optical, radar and LiDAR imagery or a multitude of measured observations, but it can also be derived from secondary sources. All that data can be streamed and processed, in real or near real time, into a variety of useful spatial information. The opportunities to “value add” to that data are overwhelming.
Access to that vast information base is getting easier as well, since more and more State and Federal Government agencies release their spatial data at no cost to end users and with no restrictions on its use. Road centrelines, cadastre and address information, administrative boundaries and similar data can now be combined with crowd-sourced information to improve the accuracy and reliability of data already in the public domain. The flip side of this development is that the traditional “data reseller” model may no longer be a viable business option.
The positional accuracy of spatial data will keep increasing as higher resolution sensors come online and as alternatives to GPS are launched - for example, the European Galileo program. It is also viable now to establish ground-based private high-accuracy positioning infrastructure, capable of working indoors.
However, that variety of spatial data, and the speed of its creation, also creates big challenges - mainly in terms of efficient versioning, cataloguing and archiving of data and making it easily discoverable. This puzzle is yet to be solved, so whoever comes up first with an acceptable solution is destined to reap a big reward.
I have barely scratched the surface with the above list, since the boundaries of the spatial industry are so expansive - and hence the number of possible factors influencing the industry is so large. The important point to note is that the spatial industry has never operated in a vacuum – it has always served a “bigger cause”. That is, the outputs created by the spatial industry gave a significant advantage to those who used them in pursuit of much grander objectives. In historical times, these outputs were maps for military operations and maritime endeavours. Today, it is a vast array of specialised tools, advanced theories and spatial information in a myriad of formats that supports almost every aspect of human activity.
As the range of fields where spatial theory, data and tools can be applied expands, so does the industry. That “value adding” characteristic of the spatial industry is the biggest opportunity for all the participants. However, it is also the biggest threat to the industry’s identity, as various aspects of spatial technologies and practices get absorbed into much larger and/or more prominent industries and undertakings. As the general level of education in the community increases, basic spatial skills and tools are becoming a commodity, like touch typing skills or word processing and spreadsheet software. Perhaps this is the biggest and most far-reaching development of all…
Whether you are on the hardware, software and/or services side of the business, it is important to recognise not only what is happening in your part of the market but also in related fields. Take advantage of this quieter time to re-examine where your next big opportunity is likely to come from. As happened with Google Maps, “the next big thing in spatial” may originate outside of the industry, so it is important to have a broad perspective - to recognise the opportunities in advance and to get on board early.
Tuesday, November 26, 2013
Google making inroads with Enterprise GIS
It looks like Google is finally making some progress in Australia with selling its enterprise GIS solutions to government clients. The first, and for quite some time the only, public sector user of Google technology was the NT government, but in just the last few months three other State governments have succumbed to Google’s charms. In particular, the Western Australian Land Information System (WALIS) is upgrading its GIS capabilities with the Google Maps Engine platform and has already started serving some data in OGC compliant web service standards.
Earlier this month the NSW Land & Property Information (LPI) released NSW Globe, which allows displaying a range of State data in Google Earth, and in the last few days Queensland’s Department of Natural Resources and Mines released its version as Queensland Globe - with an almost identical list of datasets.
It is good that more and more data is being made accessible for preview in the public domain. Let’s hope that these initiatives are only a beginning and will lead to more investment in proper infrastructure to serve the data to third party applications.
Displaying data on a map is so passé. It was thrilling functionality a decade ago but these days, in order to make a real impact, the data has to be put in the context of tasks that the community and businesses undertake on a regular basis - anything from looking up a bus timetable to researching optimal delivery routes, from searching properties for purchase to collecting business intelligence for marketing purposes, etc.
Google has already recognised that the tools it offers cannot deliver all those solutions, so the company is focusing its efforts on enabling the linking of data served from Google infrastructure to open source tools like QGIS - to enable users to perform more specialised spatial tasks.
Government agencies should ideally follow a similar strategy. The best return on all that data in State and federal vaults will come if application developers are allowed unencumbered access to it. Whether it is Google or ESRI or another technology facilitating the access is not that critical, as long as there is a long term commitment to maintain it.
Related Posts:
South Australia opens its data
East coast unanimously frees data
Free data a GFC casualty
Governments intensify free data efforts
Data overload makes SDI obsolete
What’s the benefit of gov data warehouses?
Monday, November 25, 2013
New approach to satellite imagery analysis
Geoscience Australia has just released a short 3 min. video presenting the concept of a “data cube” for storing and analysing Earth observation imagery acquired by satellites. This proof of concept application has been built to work with Landsat data and has already been used in an operational capacity on several data analysis projects. This is the future of analysis of big volumes of temporal, remotely sensed data.
The concept can be extended to work with any data that can be referenced to a grid structure (in fact, with any spatial data that comprises a collection of points in space). This is not the first attempt to work with cubed spatial data, but it is certainly the first that I know of that is capable of processing terabytes of spatial imagery into a variety of derived information for immediate, practical use.
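As a rough illustration of the idea (a minimal sketch using numpy, not Geoscience Australia's actual implementation), a data cube can be thought of as a dense array indexed by time and grid cell, which turns per-pixel time series analysis into a simple slicing operation:

```python
import numpy as np

# Toy "data cube": 12 acquisition dates of a 100 x 100 pixel tile,
# dimensions ordered as (time, y, x).
cube = np.random.rand(12, 100, 100)

# Time series of a single grid cell - e.g. to detect change at one location.
pixel_history = cube[:, 42, 17]       # shape (12,)

# Per-pixel statistic across the whole time stack - e.g. a simple
# "mean surface" composite.
mean_composite = cube.mean(axis=0)    # shape (100, 100)

print(pixel_history.shape, mean_composite.shape)
```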
Related Posts:
Point cloud 3D map technology
Photosynth - big promise or just a fancy photo viewer?
Tuesday, November 12, 2013
GIS community responds to another disaster
Another big disaster has just struck our region. Typhoon Haiyan / Yolanda was one of the most powerful storms ever recorded. Winds in excess of 230 km/h drove massive storm surges and caused widespread devastation, particularly in the Philippines. The enormity of this tragedy is slowly emerging from media reports.
As on previous occasions, the GIS community responded with a massive crowd-sourcing effort to help map the extent of the damage as well as to assist in response activities. Google created a dedicated Crisis Response Map, the Standby Task Force created the SBTF Crisis Map, and Rappler has released the Project Agos Disaster Information Map.
A useful list of aid groups and charities responding to this crisis has been published by The Guardian.
First spotted on Google Maps Mania.
Monday, October 21, 2013
World Topo Map 100th Anniversary
2013 marks the 100th anniversary of an initiative to create the International Map of the World at 1:1M scale. The project, although never completed, left a legacy of standards that are still used in modern cartography, and data that supports map production to this day.
It all started with an idea by German geographer Albrecht Penck who proposed a worldwide system of maps at the Fifth International Geographical Conference in 1891. His solution, called the International Map of the World, would consist of 2,500 individual maps, each covering four degrees of latitude and six degrees of longitude at a scale of 1:1 million (1 cm = 10 km). But it wasn’t until 1913 that Penck's idea came to fruition when an international conference in Paris established standards for the maps, which became also known as the Millionth Map of the World due to the map series' scale.
The standards required that maps would use the local form of place name in the Roman alphabet (thus, mandating translation of local names from languages that use other alphabets). Map colours were also standardized so towns, railroads, and political boundaries would be represented in black, roads would be red, and topographic features would be brown. Individual maps were to be indexed according to a common system (used to this day).
It was agreed that each country would be responsible for creating its own maps, but not many countries had the capacity to undertake this task, so a lot of the early maps were created by a handful of Anglo-Saxon countries. By the 1930s, 405 maps had been produced, although only half adhered to the standards of the project. The newly-created United Nations took control of the Millionth Map project in 1953, but international interest in creating the maps kept waning over the following decades. By the 1980s, only about 800 to 1,000 maps had been created (and less than half were accurate or based on the standards), and the U.N. stopped issuing its regular reports about the status of the project.
The data captured over the past ten decades and used in the production of the Millionth Map of the World filtered through to many projects and formed the basis of the VMap0 GIS dataset and other derived products. It is hard to get excited about 1:1M scale maps any more when you have Google Maps, but let’s not forget that just 8 years ago, before Google created the world map at street level accuracy and before OpenStreetMap started its crowdsourcing initiative, this was the highest resolution data of consistent quality available for almost the entire world. Although 1:1 million scale is not suitable for navigation purposes, maps at this and smaller scales remain a popular choice as general reference maps, as a backdrop for thematic mapping, or for various infographics.
Links:
Index to International Maps of the World
Information sourced from About Geography
[Image: Australian version of Topo 1M map by Geoscience Australia]
Thursday, October 10, 2013
Metadata – a problem that doesn’t go away
Last month I came across yet another case study that exposed the spatial metadata standard as the primary cause of problems with a delivered data archiving, cataloguing and dissemination system. It is another statistic on a long list of failures caused by reliance on a flawed concept. I am an avid critic of the current metadata standard for spatial data and have written extensively about the reasons in my earlier posts, so I am not going to repeat previous arguments here again - you can find links to those posts at the end of this article. Criticism, however constructive, only helps to expose the problems, not to solve them. Therefore, today I would like to share with you my thoughts on a better, more pragmatic approach to creating metadata for spatial information.
# The history
Briefly, about the anatomy of the problem (maybe a bit overdramatised, but with the good intention of highlighting where things went wrong). I remember vividly a presentation on spatial data interoperability I attended about a decade ago. One of the key messages was that metadata is “the glue that will allow it all to happen”. No problem with that, but when I asked a few questions regarding practical implementation, I was quickly hushed. One of the presenters did approach me after the event and admitted that they were aware of the potential problems but did not want to alienate the audience by bringing those issues to the forefront at that point in time. As he put it, there were enough benefits in the proposed approach to warrant overlooking potential issues in order to get the maximum buy-in from key stakeholders. This spin approach succeeded. In the years that followed you only got to hear about the “good things”, and quite a number of people made a career out of selling the “metadata success story” from one continent to the other. And the myth that “just follow the standard and everything will be ok” was perpetuated… never mind the truth.
# Background on metadata and standards
Let me set the record straight - I do not dismiss the need for metadata. On the contrary – metadata is a very, very important aspect of any data creation, maintenance and dissemination activity. But I question whether the current metadata standards support those tasks adequately.
A “standard” is just a set of conventions that a community of practice agrees to follow. It could be designed by a committee (pun intended) or arrived at by wide acceptance of a common practice (ie. when “someone’s way” of doing things is liked and followed by others). But standards should never be treated as a formula for success… Unfortunately, this is how the ISO 19115 Geographic Information - Metadata standard has been sold to the GIS community…
At a theoretical level, the concept of metadata is very simple. It is just “data about the data”… But this is where the simplicity ends and the complexity begins, because you quickly discover that metadata is useful not only for spatial data but also for spatial information in any format (ie. printed as well as electronic, from a single point to compilations of hundreds of layers, vectors as well as rasters and grids… and point clouds… one-off or dynamically generated on the fly… and whatever else you want to class as “spatial”).
# Why metadata implementation projects fail
A typical metadata implementation project goes like this - an organisation accumulates more and more data, to the extent that it causes problems for the IT department to manage it. So, a project is initiated to “catalogue what we have”. And in order to catalogue systematically what you have, you need to describe it in a consistent way - you need metadata. The obvious next step is to check whether there are any standards that will help to deal with the issue, rather than reinventing the wheel.
It does not take much effort to find the spatial metadata standard documentation. So you start reading and quickly realise that you cannot make any sense of the gobbledygook of the official documentation. A thought inevitably crosses your mind - “I need an expert!”. And of course you look for… “the expert in the ISO 19115 metadata standard”. In the end, you get what you ask for - an expert in the standard, not an expert in solving your kind of problem. That expert cannot advise you anything other than to “follow the standard and you will be right”.
The expert usually brings a set of recommended tools (which of course are built around the ISO metadata standard) and you also get help in implementing your catalogue/system “by the book”. All good, great success… Until you realise that the classic “garbage in / garbage out” principle applies here as well… what a surprise!
You see, the failure is built into the solution (it is the metadata standard!) so, it is very rare that this approach delivers. You don’t believe me? Just talk to those who have to use the information contained in metadata records (not those who implement metadata standards and build systems!)…
Ok, to be fair, there is one exception where you can achieve an acceptable outcome (although I caveat this statement depending on how you define “acceptable”). It will happen only if you rule with an iron fist over what information goes into your metadata. However, it can easily get out of hand if you have a lot of data to deal with (so you take shortcuts to process it all in bulk), a lot of different people writing metadata content (so there are different views of what is important and needs to be recorded – because the standard allows this), and/or a lot of different types of data, or data that grows rapidly (the lack of consistency of information or the sheer volume of data makes it impossible to record all the useful details). And let me stress this again: there is no guarantee that the content of your metadata will be of any use outside of your organisation or immediate community of interest, because it may lack the information perceived by others as vital (see my earlier post for more explanation).
If this all sounds too melodramatic and over-generalised, I do not apologise for it. My intention is to shake readers’ perception about the infallibility of the “follow the standard” mantra. Enough money has been wasted on this so far. Do not trust “the experts” in a flawed concept any more – they do not know better than you do. Applying common sense to create your metadata will yield better outcomes than “following the standard” can ever deliver.
# A better approach - for better outcome
Now that you understand why “following the standard” is not a recipe for creating useful metadata records, let’s review how to make it all work for you.
First and foremost, you have to define precisely WHY you need the metadata in the first place. The information you need to capture will differ depending on whether the metadata is for internal use in data production and maintenance tasks, or just to make the data easily discoverable by others.
For example, if you are building “just a catalogue”, why bother with the complexity of the ISO standard? The majority of users of your data simply want to know a few basic things about it:
- what is it (basic description – including a list of features for compilation products, spatial accuracy, geographic extents, when it was created/time reference, and version if more than one was created),
- where to get it from and how (online or shopfront, order a hardcopy or download an electronic format),
- how much it costs (free or paid - and if paid, how much!),
- how to access it (ie. the access constraint, e.g. “none”, “login” or “restricted”, plus the relevant classification level) and what can be done with it (eg. “internal use only”, “non commercial use only”, “republish with attribution”, etc.) – it is important to separate the two to make things clear!
As simple as that – nothing more and nothing less. Capture this information in a succinct metadata record and you have already done better than by “following the standard” - even if it is only a paragraph of free text. And if you add a consistent structure to record that information for all your data, you will achieve more than any expert in the ISO 19115 standard can ever do for you.
If you are thinking that “most of these items are specified in the metadata standard anyway”, have a look at the documentation in detail. The key point is that most of these vital information items are either optional categories (so they may or may not be included in a metadata record created “by the book”), or the choice of options in the mandatory categories does not allow the inclusion of anything meaningful.
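As a minimal sketch of what such a record could look like - the field names and the dataset are invented for illustration, not a prescribed schema - even a flat structure like this captures everything in the list above and can be validated and searched trivially:

```python
import json

# Illustrative discovery-level metadata record (invented dataset and URL).
record = {
    "title": "NSW road centrelines",
    "description": "Road centrelines for NSW, derived from cadastral mapping; "
                   "positional accuracy +/- 10 m.",
    "extent": {"west": 141.0, "east": 153.7, "south": -37.6, "north": -28.1},
    "created": "2013-06-30",
    "version": "2",
    "distribution": "download",               # where/how to get it
    "url": "https://example.org/data/roads",  # placeholder, not a real endpoint
    "cost": "free",
    "access_constraint": "login",             # none / login / restricted
    "use_constraint": "republish with attribution",
}

print(json.dumps(record, indent=2))
```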
# Divide to conquer – metadata hierarchy
The key issue I have with the ISO 19115 metadata standard is that it tries to be all things to all people. The result is that it is too specific for most cataloguing purposes, yet not detailed enough for capturing the really important details about the data for reuse, production and maintenance purposes. It also tries to be applicable to any spatial data, which compounds its uselessness by bringing it all down to the “least common denominator”. In reality, you have to capture and store different information for different data types and for different purposes, depending on the intended use of that information. Therefore, in a complex production environment you will need:
- Metadata describing source inputs (ie. to define the lineage of your data);
- Metadata for production datasets (which describes interim data versions at various stages of the process of transforming inputs/source data into the finished product);
- Metadata for all output formats of the finished product (since it is inevitable that format conversion will alter the data in some way, it needs to be documented that “what you see”, eg. on a slippy map demonstrating the data, is not the same as what you get in a data file in format x, which is different again from that in format y; this is due to generalisations and other inherent alterations of the original inputs in the process of spatial conversion);
- Metadata for discovery (ie. solely for cataloguing purposes).
This, or a similar, metadata hierarchy should be adopted to capture relevant information as data progresses through the various stages of the production process. In an ideal environment you should maintain all that information and make it available to the end users of your data, because it describes the product from end to end. Also, if you are engaged in a continuous production process, and that process changes over time, it is important to preserve all the relevant information for future perusal. However, for many, this will be overkill, as all they really need is the last metadata option, and in a very simplified format.
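A hedged sketch of how such a hierarchy could be wired together - each level is simply another record pointing to its parent, so lineage can be walked from the discovery record back to the source inputs (identifiers and fields are illustrative only):

```python
# Illustrative only: four linked metadata records, one per level of the
# hierarchy described above.
source_md = {
    "id": "src-001",
    "level": "source",
    "description": "2012 aerial photography, 50 cm resolution",
}
production_md = {
    "id": "prod-001",
    "level": "production",
    "parent": "src-001",              # lineage link to the source record
    "description": "Digitised road centrelines, edit batch 14",
}
product_md = {
    "id": "out-001",
    "level": "output",
    "parent": "prod-001",
    "description": "Road centrelines, Shapefile export (generalised to 1:25k)",
}
discovery_md = {
    "id": "cat-001",
    "level": "discovery",
    "parent": "out-001",
    "description": "NSW road centrelines - download, free, attribution required",
}

catalogue = {m["id"]: m for m in (source_md, production_md, product_md, discovery_md)}

def lineage(record_id):
    """Walk parent links from any record back to the original source."""
    chain = []
    while record_id:
        rec = catalogue[record_id]
        chain.append(rec["description"])
        record_id = rec.get("parent")
    return chain

print(lineage("cat-001"))
```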
# Metadata granularity
In an ideal world there would be metadata about every single piece of information you use or manage - in the case of spatial data, about every single point, line segment, network node or grid cell, as well as about their respective attributes. The hierarchy of metadata documentation outlined in the section above allows managing the granularity of the maintained information. So, if you do need point/cell or segment metadata, you can maintain that information in a lower level metadata construct (eg. production level metadata), while more generic information about your data can be captured in higher level metadata. For example, information about the source and acquisition date of a road segment may be stored in production level metadata, while your data licensing details sit in discovery level metadata. And one is linked to another via the hierarchy structure.
As you can see, this approach is flexible enough to allow storing relevant information about online applications as well as whole data collections with hundreds of layers, but also about individual data layers, individual features within those layers, and down to the smallest spatial data construct – a point/grid cell in space, if you need it. You will never be able to achieve this level of granularity with ISO 19115 standard metadata.
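Taking the same idea down to feature level (again purely illustrative, building on the sketch above): a per-segment record carries only what varies per feature and points up the hierarchy for everything else:

```python
# Illustrative feature-level metadata: one record per road segment, holding
# only what differs per feature and deferring the rest to its parent record.
segment_md = {
    "feature_id": "road-segment-88231",
    "parent": "prod-001",             # production-level record from the sketch above
    "source": "2012 aerial photography",
    "acquired": "2012-11-03",
    "capture_method": "heads-up digitising",
}
```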
# A word about naming your data/ spatial information
There are no conventions for naming spatial data that are universally accepted and followed. Generally, creators try to give descriptive names and pack as much extra information into the title or file name as possible, so humans can quickly ascertain what the data is about by looking at just the file name or title. Data which is disseminated in “chunks”, like satellite imagery scenes or various grid structures, usually incorporates basic metadata in the names - such as satellite name, sensor, time of acquisition, resolution and grid/path references. This approach is handy if you need to interact with a small number of data files manually, but it is a totally unnecessary complication if you have thousands of files. Your metadata should be a window to all your data, and there should be no need to interact with the data via a convoluted naming convention. This is where the ISO 19115 metadata concept falls short again: because it is inadequate for complex data filtering purposes, you have no choice but to interact with the data manually, based on file names, and not via purpose-built data query tools. That innovation has not been able to happen to date.
For all practical purposes I suggest sticking to a minimum when naming your files. That is, give your file a descriptive name and a version/date id to make it unique. Information about everything else relating to your data should live in a proper metadata record.
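As a simple illustration of this point, here is a hedged sketch in Python contrasting the two approaches: a query over a small metadata catalogue replaces any need to decode information packed into file names. The file names, fields and values are made up for the example.

```python
# Hypothetical example: rather than decoding a name such as
# "LS8_NBAR_P54_095073_20130902.tif", keep file names simple and query
# a metadata catalogue for whatever attribute you need.

catalogue = [
    {"file": "imagery_v1.tif", "satellite": "Landsat 8",
     "acquired": "2013-09-02", "resolution_m": 25, "path_row": "095/073"},
    {"file": "imagery_v2.tif", "satellite": "Landsat 8",
     "acquired": "2013-09-18", "resolution_m": 25, "path_row": "095/073"},
]

# A purpose-built query: every scene acquired after a given date,
# regardless of how the files themselves are named.
recent = [m["file"] for m in catalogue if m["acquired"] > "2013-09-10"]
print(recent)  # ['imagery_v2.tif']
```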
# What about ISO19115 then?
You may still need to publish metadata in the ISO 19115 standard, for example to deal with limitations of cataloguing tools or to accommodate requirements of some of your less sophisticated clients. If you design your metadata content correctly, it will take your programmer just a few minutes to map it to the mandatory ISO categories and turn it into an “ISO 19115 compliant” XML structure. The key point is to treat the ISO metadata standard as an output format, one of many formats that may be required by different users, and not as the foundation for creating the metadata in the first place.
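To show how little is involved in treating ISO 19115 as just an output format, here is a rough, non-validating sketch in Python that maps a simple internal record onto a skeleton of the ISO 19139 XML encoding. Only a couple of illustrative elements are shown, and the internal record is the hypothetical one used earlier; a real export would need to cover all mandatory elements of the standard.

```python
# Rough, non-validating sketch: ISO 19115/19139 treated purely as an
# output format for an internal metadata record. Illustrative only.

record = {"id": "roads-dataset-2013", "title": "Road centrelines"}

ISO_TEMPLATE = """<gmd:MD_Metadata
    xmlns:gmd="http://www.isotc211.org/2005/gmd"
    xmlns:gco="http://www.isotc211.org/2005/gco">
  <gmd:fileIdentifier>
    <gco:CharacterString>{id}</gco:CharacterString>
  </gmd:fileIdentifier>
  <gmd:identificationInfo>
    <gmd:MD_DataIdentification>
      <gmd:citation>
        <gmd:CI_Citation>
          <gmd:title><gco:CharacterString>{title}</gco:CharacterString></gmd:title>
        </gmd:CI_Citation>
      </gmd:citation>
    </gmd:MD_DataIdentification>
  </gmd:identificationInfo>
</gmd:MD_Metadata>"""

print(ISO_TEMPLATE.format(**record))
```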
# Closing remarks
If you are wondering whether any of the above makes sense and why nobody else is raising the issue… well, please consider this: those active in the OGC and spatial standards arena have quietly recognised the problem. There are already a number of initiatives under way to develop more metadata standards for specific data formats (like Metadata for Observations and Measurements) and for “easier discovery” of spatial information (eg. the Earth Observation extension of the OpenSearch specification)… But true, no one has publicly admitted yet that the old approach failed and that the spatial community should have another go at solving the metadata problem - in a holistic rather than piecemeal way.
The current approach of bringing in more and more standards is a lost cause, as it is just an attempt to patch things up rather than address the issue properly. Dividing and splitting the problem without acknowledging that it exists in the first place will only lead to more problems down the track - more chaos and confusion for the end users. These new initiatives are not about creating a hierarchy of metadata standards - just more standards. If it was so difficult to successfully implement one standard, just imagine the trouble of trying to deal with three or more! The obvious question will be: “Which one do I use???” If you choose the OpenSearch approach, chances are your data cannot be catalogued, because traditional spatial cataloguing tools require the ISO 19115 structure. And if your data formats happen to be different from “observations and measurements” or "image/grid", you may be waiting another decade for a proper standard to be published…
Persisting with the current approach to solving the metadata problem will not succeed. As with the Gordian Knot, there is only one way to solve this problem quickly: cut your losses and start afresh. This is what I am doing. I will share my further thoughts and experiences in the not too distant future.
Related Posts:
Why standards should be ignored
GIS metadata standard deficiencies
GIS standards dilemma
Ed Parsons on Spatial Data Infrastructure
Data overload makes SDI obsolete
Tuesday, September 24, 2013
South Australia opens its data
The South Australian government is the latest convert to the free and open data cause. Unveiling the www.data.sa.gov.au portal, South Australian Premier Jay Weatherill mandated that all state government agencies house their public data in a central portal to ensure it is accessible to the community at large. At present there are 229 data sets released by a number of SA state government agencies, including the Department of Planning, Transport and Infrastructure and the Attorney-General’s Department.
As with all other initiatives of a similar type, success will be measured by the uptake of released data by business and the community. However, this information is hard to compile, so my litmus test of the likely success of a particular “data.gov.au” initiative is how much useful information is put in the public domain...
It looks like SA is on the right track in releasing the full roads dataset, but more spatial data has to be made available in order for this initiative to start paying off for the effort involved. My next criticism is the rather useless (or, frankly, lacking) metadata supplied with the data, but this issue is not unique to SA; other jurisdictions are also guilty of neglecting the "data discovery" part of their respective projects. "ISO 19115 - Geographic Information Metadata" is certainly nowhere to be seen on the "data.gov.au" portals...
Below is a quick scorecard of State and Federal government open data initiatives - based on availability of “high value” spatial data (as per my very subjective list).
Table. Availability of Free Fundamental Spatial Data
| Dataset | Recorded availability (jurisdictions assessed: Fed, ACT, NSW, NT, Qld, SA, Tas, Vic, WA) |
| --- | --- |
| Gazetteer | Yes; Yes; Yes |
| Cadastre boundaries | Yes (by LGA); Adelaide City only; Yes |
| Address locations | Gungahlin Town Centre only; Yes (as a list); Yes |
| Roads | Yes (State managed only); Yes; Yes |
| Admin boundaries | Yes (via ABS); some State specific; Yes |
| Property sales | By LGA only; ? |
| Property/land valuations | Yes (data at LGA level only); ? |
| Elevation | Yes (30m); Yes (90% coverage at 10m); Yes (contours 1m+) |
| High res imagery | 25-15m Landsat and 2.5m AGRI; potentially (as tile service); old Landsat imagery |
| Overall | see comments below |
I will refrain from providing my assessment of those initiatives at this point in time. It is enough to say that expectations are high as to the economic value free and open data could deliver but, as you can see from the above matrix, there are too many gaps in the availability of what I consider fundamental data for it to make any meaningful impact… Let's give it a year and see if there are any improvements.
Related Posts:
East coast unanimously frees data
Free data a GFC casualty
Governments intensify free data efforts
Data overload makes SDI obsolete
What’s the benefit of gov data warehouses?
First spotted on: spatialsource.com.au
Tuesday, September 10, 2013
Federal election 2013 results mapped
The election was held on 7 September 2013. Preliminary results are in and we have a change of government, but the picture is not yet complete since counting in several marginal seats is still under way (78.1% of votes counted so far, and results for 11 seats still in doubt). In earlier posts I mentioned the Yahoo!7 map and the SMH map as potential sources of information about election results, but there are a few more options available, as listed below:
Australian Broadcasting Corporation Federal Election 2013 – Australia Votes portal, providing a comprehensive range of historical as well as the latest information about candidates and winners, including a detailed count of votes and “swings” in voter preferences. An interactive map shows the current predicted (and eventually final) result for each electorate, coloured according to the party winning the seat. A separate view shows only those electorates that “changed hands” in this election.
The Guardian’s Dot Map – a nice interactive graphic presentation that allows toggling between 2010 and 2013 results (if you are not fussy about the extents of electoral boundaries in Victoria and South Australia, which have changed substantially since 2010 – the issue I have highlighted in my previous two posts). The colour of each dot corresponds to a political party – hover the mouse over a dot to reveal information about the electorate and which party held the seat in 2010 and won it in 2013.
Google’s Politics and Elections – Australia map, displaying electoral boundaries coloured according to the party affiliation of the winning candidate. Clicking on an electorate brings up the list of candidates for the House of Representatives, with statistics on the votes received. Unfortunately, it does not work in Internet Explorer.
First spotted on Google Maps Mania
Wednesday, September 4, 2013
More 2013 federal election maps
Seven News and Yahoo!7 partnered with ESRI Australia to create their own version of a 2013 Federal Election Map. The map is supplemented with quite an extensive collection of Census 2011 data (employment, education, ethnicity, incomes, internet connectivity, etc), median house prices for each electorate and, of course, information about the sitting Member of Parliament – all nicely laid out with animated graphs and descriptive legends. A couple of unique features of this presentation are the ability to display Twitter comments related to the election on the map, according to the location of their publishers, and a relative top 5 ranking of electorates based on a selection of Census data.
This map is hosted by ESRI Australia on the Amazon cloud and is served as an embedded application on the Yahoo!7 site. As with the Sydney Morning Herald 2013 election map, featured earlier this week, it is very disappointing that even the most seasoned geographers can fall into the trap of mixing spatial data with incorrect attribute information. That is, this map also presents the latest version of the Commonwealth Electoral boundaries but references them to historical information that applied to the previous election. Electoral boundaries have changed quite substantially in South Australia and Victoria since the last election in 2010, so it is very inappropriate to mix the old with the new. Again, if the map presented only the new candidates that would be a different story, but in its current version it is another big fail...
First spotted on spatialsource.com.au
Related Posts:
Mapping federal election 2013 Pt2
Mapping 2013 federal election results
Map adds sizzle to elections
Monday, September 2, 2013
Mapping federal election 2013 Pt2
In preparation for election day, the Sydney Morning Herald has created an interactive presentation featuring a map with the current electoral boundaries coloured according to the party of the incumbent candidate. Clicking on an electorate polygon brings up information about the sitting Member of Parliament, including swing statistics from the last election in 2010 and an indication of how safe the seat is. Presented below the map is a summary of demographic statistics for the electorate (based on Census 2011 data).
To be picky, it could be argued that the map does not reflect the situation correctly because it presents the current version of the electoral boundaries (ie. those applying to the September 7 election and afterwards) with information about the “old” sitting members (including Julia Gillard and several other Members of Parliament who are not contesting their seats in the 2013 election). Creating this map was possible only because the latest redistribution of electoral boundaries did not include name changes for the electorates, hence allowing this “artificial compilation”. It would be a different story if this map presented the candidates running for Parliament...
This is a perfect example of how making a map “because you can” does not necessarily equate with adding value to the information. This is quite an innocent example, but map creators have to be wary that in many circumstances the consequences of “messing with spatial data” may be quite perilous. Only when this map is updated with post-September 7 results will it be able to be considered a nice example of presenting complex information using spatial tools and interactive graphics. For now, it is a big fail for SMH data journalism!
Related Posts:
Mapping 2013 federal election results
Map adds sizzle to elections
Thursday, August 29, 2013
Mapping 2013 federal election results
The Australian federal election is still more than a week away but some have already decided the result is a foregone conclusion. In particular, online bookmaker sportbet.com.au is so certain the Coalition will win the election that it has paid out all bets on that outcome. It is probably just a PR stunt. So in the meantime, before we have any confirmation of the real outcome, have a look at a map published by The Age which presents how Australians cast their votes at the last election in 2010. Using data from the Australian Electoral Commission, they have been able to map out the two-party preferred vote for every polling booth throughout Australia.
The map was created by Geoplex, a GIS consultancy based in Canberra and Melbourne, using the open source mapping library Leafletjs and data served by CartoDB (a cloud-based GIS solution).
First spotted on spatialsource.com.au
Wednesday, August 28, 2013
Maps aid in gauging public opinions
The interactive nature of online maps makes them a great tool for engaging with local communities while soliciting feedback on various aspects of local life. The City of Cockburn in Western Australia has just deployed an innovative online application to survey the community about local transportation issues, using the Google Maps API.
Visitors to the City of Cockburn Integrated Transport Survey page can comment on local transport issues and reference specific locations on a Google Map. In particular, users can search for a specific address, add a marker to the map and leave a comment, categorising it into one of six transport-related issues: congestion, road safety, parking, freight, public transport, or cycling and walking.
All comments are published immediately as interactive markers on the map, as well as in a Twitter-like list. Other residents can vote on each issue by agreeing or disagreeing with the author of the original comment.
This is a great example of using a simple online map as a low-cost but very effective tool to reach a large number of members of a local community who otherwise would not have had the opportunity to raise their concerns. The application was developed by a group of Brisbane-based specialists from Arup, an engineering and built environment consultancy, and is available for other projects under the CollaborativeMaps.org banner.
First spotted on Google Maps Mania
Monday, August 19, 2013
Update of Census 2011 map app
My Census 2011 map app has just been updated with Socio-Economic Indexes for Areas (SEIFA) data. For each index there are two measures available for mapping: the index value and the decile it falls within (ie. compared with all postcodes in Australia). The indexes can be used for a number of different purposes, including targeting areas for business and services, strategic planning, and social and economic research (for more in-depth examples see: How to Use SEIFA).
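For anyone curious how the decile measure is derived, the sketch below (Python with pandas) ranks a handful of made-up index values into deciles; the real ABS figures are computed over the full set of areas, so this is purely an illustration of the ranking step.

```python
# Illustrative only: rank areas into deciles by an index value, in the
# same spirit as the SEIFA decile measure. All values are made up.
import pandas as pd

seifa = pd.DataFrame({
    "postcode": ["2000", "2010", "2020", "2030", "2040",
                 "3000", "3010", "3020", "3030", "3040"],
    "irsd_score": [1054, 987, 1012, 1103, 968, 1041, 930, 1075, 995, 1060],
})

# qcut splits the ranked scores into 10 equal-sized groups; adding 1 puts
# the lowest-scoring (most disadvantaged) areas in decile 1.
seifa["decile"] = pd.qcut(seifa["irsd_score"], 10, labels=False) + 1
print(seifa.sort_values("decile"))
```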
Briefly, about SEIFA: it is a suite of four indexes created from social and economic Census information. Each index ranks geographic areas across Australia in terms of their relative socio-economic advantage and disadvantage. The four indexes in SEIFA 2011 are:
- Index of Relative Socio-Economic Disadvantage (IRSD)
- Index of Relative Socio-Economic Advantage and Disadvantage (IRSAD)
- Index of Economic Resources (IER)
- Index of Education and Occupation (IEO)
Full explanation of each index is available from the ABS.
As a side comment, working with the updated version of Fusion Tables got me thinking that the writing is on the wall as to the future of this service… It is well on the path to sharing a place in history with many other Google initiatives that are now just a distant memory. The concept is great but the execution is very cumbersome. Hmm, perhaps an opportunity?
Related posts: