After more work than we anticipated, the API and mashup release is ready for prime time. The GeoIQ API is now officially available for consumption, and you can find all the details about it at:
Our GeoIQ Mashups
To demonstrate the versatility and power of GeoIQ, we've built three demo mashups. The traffic congestion demo, which we showed screenshots of before, is up, using Bureau of Transportation Statistics data on average traffic delay per lane mile (in plain English: how many minutes, on average, people are delayed on a specific mile of road over the course of a year).
The second mashup is more ambitious: it takes detailed Census block demographic data for San Francisco and New York and combines it with the Yahoo! Local API and Google Maps. The idea here was to show an advancement of "Local Search." While Microsoft and Google have been investing heavily in making local search more immersive with amazing satellite imagery and three-dimensional buildings, we think what the market needs now is the ability to make better local search decisions, not prettier ones. (Although we do think our heat maps are pretty.) We believe most local decisions are based on a number of criteria. There is more to buying a house than price: users want to know about schools, crime rates, traffic congestion, etc. Marketers want to know the demographics of the markets they are selling to as well as the locations of competitors. And we cannot forget the inside dev joke here: where are the bars with the most single women/men? The next evolution is location intelligence that enables decision making. The NY vs. SF mashup is a first small step in that direction. You can see where there is the highest concentration of Starbucks and people who make over $100,000, high home values and realtors, college degrees and bookstores, or nighttime population and video rentals.
To do this we've built analytical tools to go with our heat mapping engine. You can make a basic heat map that shows where the combined attributes you are looking at have the highest values. Next, there is the concentration index, which takes the values of your data sets and measures the distances between them to score how concentrated, and how high, the values of your data are. Lastly, there is the intersection index, which shows only those locations where your search criteria overlap. This is one step toward making decisions with local search by determining whether location "A" is better than location "B" for a certain set of criteria.
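To give a rough sense of what these three tools do, here is a toy sketch in Python. It is not the GeoIQ implementation or its actual formulas; the grid-cell layout, the pairwise distance weighting in the concentration score, and the 0.5 threshold are all illustrative assumptions.

```python
from math import dist

def heat_map(layers):
    """Combined heat map: sum each layer's normalized value per grid cell."""
    cells = set().union(*(layer.keys() for layer in layers))
    return {c: sum(layer.get(c, 0.0) for layer in layers) for c in cells}

def intersection_index(layers, threshold=0.5):
    """Keep only cells where every layer exceeds the threshold,
    i.e. where all the search criteria overlap."""
    combined = heat_map(layers)
    return {c: v for c, v in combined.items()
            if all(layer.get(c, 0.0) >= threshold for layer in layers)}

def concentration_index(points):
    """Score how high and how tightly clustered values are: each pair of
    points contributes its value product, discounted by distance
    (an assumed formula, not GeoIQ's)."""
    score = 0.0
    for i, ((xa, ya), va) in enumerate(points):
        for (xb, yb), vb in points[i + 1:]:
            score += (va * vb) / (1.0 + dist((xa, ya), (xb, yb)))
    return score
```

With layers like `{"cell_a": 0.9, "cell_b": 0.2}` (say, income) and `{"cell_a": 0.8, "cell_b": 0.7}` (say, coffee shops), the intersection index keeps only `cell_a`, where both criteria are high; and two high-value points a block apart score a higher concentration than the same points ten miles apart.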
There are some caveats to using our analytics for comparing locations. To integrate multiple data sets for analysis, they have to be normalized so that you are comparing apples to apples. If you leave rent in dollars and population in number of people, the math becomes meaningless. Therefore we normalize it all so everything meshes up well. The downside is that absolute values are lost in the resulting metric. The number produced on the map by the concentration and intersection tools is based on how closely together the highest values on the map are located; it does not say which map has the absolute highest value. So you could have a higher concentration index of average rents in San Francisco than in New York even though rents in New York are higher. This is not a problem with population counts, and I am getting into the weeds with this, but wanted to be forthright about the shortcomings of the analytics. In the next release we'll have metrics that give the average, maximum, and minimum values for all the data sets in the extent of the map. One thing at a time, but we'll get it all out shortly. You can find a more detailed description of our geo-analytic tools here (once on the GeoIQ website, click on the support link).
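A minimal sketch of the kind of normalization we mean, using simple min-max scaling (an illustrative choice, not necessarily what GeoIQ uses), which also shows exactly why absolute values drop out of the comparison:

```python
def normalize(values):
    """Min-max scale a list of numbers to [0, 1]. Absolute magnitudes
    are lost; only each value's position within its own data set survives."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical rents: SF tops out at $3,000, NY at $3,500, but both
# maximums normalize to 1.0 within their own maps, so the scaled maps
# can be combined with other layers yet can't tell you which city's
# rent is higher in absolute terms.
sf_rents = [1800, 2400, 3000]
ny_rents = [2000, 2800, 3500]
print(normalize(sf_rents))  # [0.0, 0.5, 1.0]
print(normalize(ny_rents))  # [0.0, 0.533..., 1.0]
```

This is exactly the trade-off described above: rents in dollars and populations in head counts end up on the same 0-to-1 scale so they can be combined, but the dollar amounts themselves are gone.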
Last but not least, we have our risk mapping application available. For the risk app we built our own web mapping application with OpenLaszlo and a custom UI for interacting with the GeoIQ API. In the tradition of being a poor startup, we used all open source imagery (Demis and Terraserver), so it is definitely slower than Google and others. Once you get over the slow satellite imagery, there is a nice set of functionality that illustrates some of the more advanced features of GeoIQ. Since it is more complex, we recommend a quick read of the tutorial before getting started (once on the GeoIQ website, click on the support link).
Across all three mashups we've collected a wide range of open source geospatial data, and we have lots more sitting in our data repository. Through the API we'll be making more and more of this data available. Next month we'll formally launch the repository with the data we've collected, made available to the public for free. In our minds there is nothing worse than paying for open source data. We've all already paid for government data with our taxes, yet there is a huge industry reselling that data to you, the user. Why? Because the data is hard to find and even harder to use. We've taken the time to clean it up, annotate it, and make it easily available in open-standard web formats. Our hope is that other people will contribute the open source data they've collected so that no one has to pay for free data. We have lots in store to help foster the community, with tools that make it easier to consume data and share it. The plan is to keep growing our contributions and hope others join in, so there is yet another place to get geospatial data to drive the next evolution in web mapping for the masses.