[image courtesy of Excel Blog]
Google already offers an integrated spreadsheet, Fusion Tables and map solution through its Docs application, so for Microsoft this is simply a game of catch-up. If history is anything to judge by, mapping data in spreadsheets will not become a mainstream use of Excel – simply because users will first need to geocode all those “millions of records”, and there is no indication so far as to how much that service will cost. Anyone who deals with geocoding records knows it is not as simple as “pressing a button”. Not to mention that the challenge of “processing spatial data for 1 million points” has not yet been solved without a fairly complex server setup, so the chances of doing anything useful with such a vast amount of information, even within a desktop version of Bing Maps, appear rather limited (in fact, some users have already commented about the performance). Spatial education among the general population is another factor – general knowledge is increasing, but it is still not enough to deal with issues of “projections and datums” for spatial data, or with more advanced forms of analysis beyond the simple presentation of points on a map…
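To make the “projections and datums” point concrete, here is a minimal sketch (not tied to Excel or Bing Maps specifically) of the forward Web Mercator projection used by most web map tiles. Plotting raw latitude/longitude values as if they were already projected metres, or mixing datums, puts points visibly in the wrong place:

```typescript
// Minimal sketch: same lat/lon, very different numbers once projected.
// Assumes input coordinates are WGS84; the radius below is the standard
// Web Mercator sphere used by Bing Maps and Google Maps tiles.
const R = 6378137; // sphere radius in metres

function toWebMercator(lonDeg: number, latDeg: number): [number, number] {
  const lon = (lonDeg * Math.PI) / 180;
  const lat = (latDeg * Math.PI) / 180;
  const x = R * lon;
  const y = R * Math.log(Math.tan(Math.PI / 4 + lat / 2));
  return [x, y];
}

// Sydney: raw degrees are ~[151.2, -33.9], projected metres are
// roughly [16833000, -4012000] – orders of magnitude apart.
console.log(toWebMercator(151.2093, -33.8688));
```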
In my opinion Microsoft would have a much better chance of success developing specific tools in support of specific business activities, with narrowly focused spatial functionality, rather than providing a generic DIY mapping capability. The generic approach has failed in the past, yet it still seems to be the preferred strategy of the big players, including Google. Time will tell whether the outcome is different this time.
4 comments:
Given the binary nature of data stored in Excel, this is only marginally scarier than 1 cell = 1 pixel maps in Excel...!
Yes, the binary format helps, but not if you have to deal with 17B points :-( . As a side note, from my own experiments it appears that browsers are quite capable of dealing efficiently with “big-er” data – provided it is delivered in a format that is efficient to process. Forget XML or even JSON – simple CSV is better. Even old IE7 was able to process and display (!) 2 million rows of numbers...
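For what it's worth, a minimal sketch of the kind of parsing that claim relies on – the file name and column layout are assumptions, but the point is that turning numeric CSV text into values is a flat split-and-convert, with none of the per-value quoting and object structure that JSON parsing has to build:

```typescript
// Parse a CSV of plain numbers into rows; no quoting, no nesting,
// just split and convert – which is why it stays cheap at scale.
function parseNumericCsv(text: string): number[][] {
  const rows: number[][] = [];
  for (const line of text.split("\n")) {
    if (line.length === 0) continue; // skip trailing blank line
    rows.push(line.split(",").map(Number));
  }
  return rows;
}

// Usage sketch in a browser (file name is hypothetical):
// fetch("points.csv")
//   .then((r) => r.text())
//   .then((text) => console.log(parseNumericCsv(text).length, "rows"));
```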
That's interesting, and it brings us to the issue of standards (which you've been posting about recently) – an area where CSVs seem to fall down.
Maybe it's an advantage of the CSV and TXT file formats, in the sense that they allow for flexibility and low overhead, but it pushes the requirement for integrity/quality/data management back onto the originators/users... or onto more sophisticated tools plus some way to capture metadata? Google Refine + web indexing would seem to tick both boxes?
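One hypothetical way to capture that metadata, purely as illustration (the field names and sidecar convention here are invented, not any standard): keep the CSV dumb and fast to parse, and describe its columns in a small JSON “sidecar” that quality-conscious tools can read:

```typescript
// Invented sidecar layout: the CSV stays plain; metadata lives alongside.
interface ColumnMeta {
  name: string;
  unit?: string;
  crs?: string; // coordinate reference system for spatial columns
}

const sidecar: { file: string; columns: ColumnMeta[] } = {
  file: "points.csv", // hypothetical data file
  columns: [
    { name: "lon", unit: "degrees", crs: "EPSG:4326" },
    { name: "lat", unit: "degrees", crs: "EPSG:4326" },
    { name: "value" },
  ],
};

// Tools that care about integrity read the sidecar;
// tools that don't just read the rows.
console.log(JSON.stringify(sidecar, null, 2));
```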
It's a good point Luke, I had not considered this perspective... Simple formats certainly have wide usage.