When moving to the cloud, consider changing your discovery approach


Since we do not want to simply pave that cowpath (see What cow paths, space shuttles, and chariots have in common or What are some patterns or anti-patterns where architecture and governance can help cover this point), we want to not only save money by moving to an IT commodity utility model, but also to ask: do we just take the MIS architecture and patterns and put them in the cloud, or do we look at new patterns, such as new search indexes, engines, or NoSQL models that allow rapid, near real-time smart discovery on the read side of the solution?

This will increase the relevancy of your data and digital assets as the market demands that things be easier, simpler, and instantly gratifying.

 

Traditional: Keyword Search Matching 

For many new, large, cloud-hosted, transactional database solutions, an organization needs fast search of documents, records, objects, or content by facets and keywords – across both metadata and full text – with a quick, pleasant experience that can handle millions of documents, authorities, and lookup lists, along with thousands of monthly transactions.

Currently, the architecture clients have invested in is a model developed pre “big data”. These models emulate MIS form-based search by trained users, with a supporting search engine that does a full scan on any keyword, plus some category or facet filtering, to return ALL matching records weighted by closest keyword match. This can handle full-text search as well as facet search, but it tends to tax computing power heavily, and while the results may be accurate, they will not be context aware of popularity, typos, synonyms, etc.
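To make the contrast concrete, here is a minimal sketch of that traditional pattern; the table, columns, and keyword are hypothetical, and sqlite3 stands in for the MIS-era relational store. A form-based keyword search typically reduces to a LIKE (or similar) scan over every record:

```python
# Minimal sketch of the traditional pattern: a form-based keyword search that
# forces a scan of every record. Table and column names are hypothetical;
# sqlite3 stands in for the MIS-era relational database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, category TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO records (category, body) VALUES (?, ?)",
    [("policy", "vintage bicycle restoration guide"),
     ("policy", "records management handbook"),
     ("forms", "bicycle registration form")],
)

keyword = "bicycle"
# LIKE with a leading wildcard cannot use a normal index, so every row's text
# is scanned; matches are exact only -- no typo, synonym, or popularity awareness.
rows = conn.execute(
    "SELECT id, category FROM records WHERE body LIKE ? AND category = ?",
    (f"%{keyword}%", "policy"),
).fetchall()
print(rows)
```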

The new search architecture is not just faster, but smarter

NoSQL models sit behind a query box as well, but NoSQL engines can maintain multiple index-like “signals” that the query engine looks up to better interpret what the user may be looking for. The search engine solution would carry an increased investment in interpretive signals (i.e., fuzzy logic support for popular-search weighting, typo recognition, thesaurus and synonym integration, community-based and event/trending signals, business rules, profile favorite patterns, etc.). This could also include research into description framework improvements, such as better overlapping categorical alignment, schema.org, and a move towards RDFa.
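As a hedged sketch of what such interpretive signals can look like in practice (the index and field names here are assumptions, and Elasticsearch is just one of the candidate engines mentioned later in this post), a single query can tolerate a typo and boost popular items without scanning the raw records:

```python
# Hypothetical sketch: an Elasticsearch query that layers "signals" on top of
# plain keyword matching -- typo tolerance via fuzziness plus a popularity boost.
# Index and field names ("catalog", "title", "popularity") are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

response = es.search(
    index="catalog",
    body={
        "query": {
            "function_score": {
                "query": {
                    "match": {
                        "title": {
                            "query": "vintge bike",   # misspelled on purpose
                            "fuzziness": "AUTO",      # typo recognition signal
                        }
                    }
                },
                # Weight results by a stored popularity signal.
                "field_value_factor": {"field": "popularity", "missing": 1},
            }
        }
    },
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```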

When these solutions do not apply

As the Apache Hadoop project states, Hadoop (and, by implication, NoSQL more generally) is NOT:

  1. Apache Hadoop is not a substitute for a database – you need something on top for high-performance updates
  2. MapReduce is not always the best algorithm – if each MR job needs to know about the results of the last, you lose the parallelization benefits.
  3. Hadoop and MapReduce are not for beginners in Java, Linux, or error debugging – they are open source and still emerging, so many of the technologies built on top inherit that roughness, and are worth the extra layering.

Newer search engines are initially better at metadata search, but not at full text and full results

Google solutions or open source solutions like MongoDB are fast at addressing these “signals,” but are limited for full-text searches of extremely long documents, which is sometimes required by legal, policy, or other regulations. For instance, when searching on Craigslist or Groupon, a user is searching against metadata fields, e.g., “Bike Vintage” between 1950 and 1960, plus most of the text, and what comes back in milliseconds is the top few hundred results. Those results are not hitting the raw data, nor every record; they are hitting these index-like constructs carrying the record ID. From a result the user wants, they can follow a URL to go into transaction mode back in Oracle. If an edit is made, the trigger to update the NoSQL index can fire immediately, and the full-text update in Oracle is also reasonably fast, though definitely not as fast as the NoSQL index.
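A hedged sketch of that metadata/facet search pattern (collection, field names, and connection string are assumptions): the query below hits indexed fields and a text index rather than every raw record, returning the top few hundred record IDs that can then be used to open transaction mode back in the system of record:

```python
# Hypothetical sketch: a metadata/facet search against an index-like store
# (MongoDB via pymongo). Collection and field names are assumptions; the query
# consults indexes, not every raw record.
from pymongo import MongoClient, TEXT

client = MongoClient("mongodb://localhost:27017")
listings = client["marketplace"]["listings"]

# Indexes that act as the "signals" the query engine consults.
listings.create_index([("category", 1), ("year", 1)])
listings.create_index([("description", TEXT)])

cursor = (
    listings.find(
        {
            "category": "bike",
            "year": {"$gte": 1950, "$lte": 1960},
            "$text": {"$search": "vintage"},
        },
        {"record_id": 1, "title": 1, "score": {"$meta": "textScore"}},
    )
    .sort([("score", {"$meta": "textScore"})])
    .limit(200)  # top few hundred results, returned in milliseconds
)
for doc in cursor:
    # record_id would be used to call back into Oracle for transaction mode.
    print(doc.get("record_id"), doc.get("title"))
```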

There are other solutions among the newer search engine technologies that can address all of the requirements. For example, a user may want to pull all 10,000 results into their software, move into transaction/edit mode, and commit those edits in Oracle; the NoSQL index can then be updated immediately and be available near-instantly, with full text available in Oracle or, in some search engine solutions, in the engine itself.
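Here is a hedged sketch of that write path (connection details, table, and index names are all assumptions): commit the edit in the system of record, then push the changed record into the NoSQL index so discovery reflects it almost immediately.

```python
# Hypothetical sketch of the write path described above: commit an edit in
# Oracle (the system of record), then push the same record into the NoSQL
# search index so discovery reflects the change almost immediately.
# Connection strings, table, and index names are assumptions.
import cx_Oracle
from elasticsearch import Elasticsearch

ora = cx_Oracle.connect("app_user", "app_password", "dbhost/ORCLPDB1")
es = Elasticsearch("http://localhost:9200")

record_id, new_title = 12345, "Vintage bike, 1957, restored"

# 1. Transaction/edit mode: commit the edit in Oracle.
with ora.cursor() as cur:
    cur.execute(
        "UPDATE listings SET title = :title WHERE record_id = :rid",
        title=new_title, rid=record_id,
    )
ora.commit()

# 2. Refresh the search index right away; Oracle full-text catches up later.
es.index(index="catalog", id=record_id, body={"title": new_title})
```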

Exploring NoSQL and new signals will yield faster and smarter results

The point is that the improved discovery will not only be faster from a query-return point of view, but will also return smarter results. This makes the discovery process itself faster, moving the user more quickly to the actions on their intended transactions, since the search results will be more context aware of language issues, popularity, and personalized user needs. This can be achieved with technologies such as Elasticsearch, possibly Sphinx, or possibly a combination of MongoDB for fast search and the existing Oracle for full-text search.

Some favorite TED talks


A business partner last night said, “I don’t wake up and turn on my phone, or watch TV, or check email right away. I try to keep it simple…” as several of us waxed rhapsodic about the pre-pocket-tech, pre-internet days and how teenagers’ patterns know no other world. But then he continued, “OK, well, that’s not true – I do get my morning dose of TED for inspiration.”

It’s just one more to add to the many morning intake mediums. People seek personal philosophical guidance in the morning through religion, scripture, reading a story, meditation, prayer, mind-body engagement, or quiet time. People seek temporal context through morning news TV, the newspaper, internet feeds, websurfing (can I still use that term?), or tablet time. People seek social engagement through morning coffee at the diner with the guys/gals, spouse and/or kid quality time, the Facebook rise-and-shiner, or other social media digests. And people seek inspiration in any of the above.

Personally, I have yet to find my morning ritual, and I bounce between different mediums. Sometimes it’s playing trains or toys or some activity with the family when we get a good rhythm going that morning; sometimes it’s tablet browsing when I’m feeling curious about various news or video feeds; sometimes it’s mindless TV news digestion; and, probably more rarely than it should be, sometimes it’s quiet time outside on a run, bike ride, or walk, or reading or whatnot. Other times, the day gets going too fast, there is no interstitial time, and an east coast call to this mountain time zone starts right up.

Though I haven’t found my rhythm, here are a few of the greatest TED hits from the last partial decade that I’ve tweeted out and found inspirational:

Hans Rosling: Stats that reshape your world-view (Jun 2007)

Geoffrey West: The surprising math of cities and corporations (July 2011)

TEDxUofM – Jameson Toole – Big Data for Tomorrow (May 2011)

Eli Pariser: Beware online “filter bubbles” (Mar 2011)

Sugata Mitra: Build a School in the Cloud (Feb 2013)

Deb Roy: The birth of a word (Mar 2011)

-mt

Why we focus on spatial data science


The I in Information Technology is so broad – why is our first integrated data science focus on spatial data? It doesn’t seem to fit on the face of our Services Catalog. We get asked this a lot, and this is our reason. Like geospatial itself, it is multi-dimensional, spanning different ways of thinking, audiences, maturity, progressions, science, modeling, and time:

 

In green, on the x-axis, is the time progression of public web content. The summary point is that data took the longest period – about 10 to 15 years – and data can only get better as it matures, now roughly 25 years old on the web. We are in the information period now, but moving swiftly into the knowledge period. Just look at how much more scientific data visualization there is and how dependent we are on the internet. Think how much you were on the web in 1998 compared to 15 years later – IT IS IN YOUR POCKET now.

This isn’t just our theory.

RadarNetworks put together the visual of progressing through the web eras. Web 1.0 was websites – content and early commerce sites. Web 2.0 raised the web community with blogs, and the web began to link collaboratively built information with wikis. Web 3.0 is ushering in the semantic direction and building integrated knowledge.

Even scarier, public web content progression lags several business domains – not necessarily in this leading order: intelligence, financial, energy, retail, and large corporate analytics. Meaning, this curve reflects the public maturity; those other domains have different and faster curves.

Consider the recent discussions on intelligence analysis linking social/internet data with profiles, Facebook/Google privacy and its use for personalized advertising, the level of detail Salesforce knows about you and why companies pay so much for a license/seat, how energy exploration optimizes where to drill in harder-to-find areas, or the absolute complexity and risk of financial derivatives as the world market goes. The way we integrate public content – googling someone, or using the internet to learn more and faster – usually lags behind these technologies. Reason: the public uses do not make money. It is the same reason the DoD invented the internet – it was driven by the security of the U.S., which makes money, which makes power.

So, that digression aside (as we have been told, “well, my industry is different”), the public progression does follow a parabolic curve that matches Moore’s Law, the driving factor in IT capability – roughly every two years, computing power doubles at the same cost (paraphrasing). The fact that we can do more, faster, at the same quality levels means we can continue to increase the complexity of our analysis, shown in red. And there appears to be a stall in moving towards wisdom even as we move toward knowledge. It is true our knowledge will continue to increase VERY fast, but what we do with that as a society is the “fear” as we move towards this singularity so fast.

Fast is an understatement – very fast even for a logarithmic progression, as it is hard to emote and digest the magnitude of just how fast it is moving. We moved from:

  • the early 90s, simply placing history up on the web, experimenting, and having general content with loose hyperlinking and web logs;
  • to the late 90s, conducting eCommerce, doing math/financial interaction modeling and simulations, and building product catalogs with metadata that allowed us to relate items – if a user found that quality or metadata in something, they might like something else over here;
  • to the early 2000s, engineering solutions, including social and true community solutions, that began to build on top of relational models and the network effect, use semantics, and continually share content on timelines along with where a photo was taken as GPS devices began to appear in our pockets;
  • to the 2010s, or today, where we are looking for new ways to collaborate, finding new discoveries in the cloud, and using the billions and billions of sensors and data streams to create more powerful, more knowledgeable applications.

Another way to digest this progression is via the table below.

Web Version | Time | DIKW | Web Maturity | Knowledge Domain Leading Web | Data Use Model on Web | Data Maturity on Web
.9 | early 90s | Data | Content | History | Experimental | Logs
1.0 | 1995+ | Info | | History | Experimental | Content
1.1 | 1997 | | | Math | Experimental | Relational
1.2 | 1999 | | +Commerce | Math | Hypothetical | Metadata
1.3 | 2002 | | | Engineering | Hypothetical | Spatial
2.0 | 2005+ | Knowledge | +Community | Engineering | Computational | Temporal
2.1 | 2010s | | | Engineering | Computational | Semantic
3.0 | 2015 and the predictable web | Knowledge | +Collaboration | Science | Data as 4th paradigm | TempoSpatial (goes public)
4.0 | 2020-2030 | Wisdom in sectors | Advancing Collaboration with 3rd world core | Advancing Science into Shared Services (Philosophical is out year) | Robot/Ant data quality | Sentiment and Predictive (goes public/useful); Sensitive is out year

Now, think of the last teenager who could maintain eye contact in a conversation with an adult while holding a phone in their hand and not be distracted by the Pavlovian response to a text, tweet, Instagram, etc. Now imagine, ten years from now, when it is not tidbits of data but, as a call comes in, auto-searches on terms they are not even aware of appearing in augmented reality. Advice on how to react to the sentiment they just received – not just the information. The emotional knowledge quotient will be Google Now – “What do I do when?” – versus critical thinking and live-and-learn.

So, taking it back to the “now” – though this post lacks specific citations (blogs allow us to cheat, but our research sources will detail and source our analysis) – if you agree that spatial mapping for professionals arrived in the early 2000s, agree that it has now hit the public, and understand that spatially tagging data has passed the tipping point with the advent of smartphones, map apps, local scouts, augmented reality directions, and multi-dimensional modeling integrating GIS and CAD with the web, then you can see that the data science maturity stage with the largest impact right now is – geospatial.

Geospatial data is different. Prior to geospatial, data was non-dimensional. It had many attributable and categorical facets, but it did not have to be stored in a mathematical or pictorial form with a specific relation to a position on the earth. Spatial data – GIS, CAD, lat/longs – has to be stored in numerical fashion in order to calculate upon it, and furthermore it has to be related to a grounding point. Essentially, geospatial is storing vector maps or pixel (raster) maps. When you put that together for tens of millions of streams, you get a very large, complicated, spatially referenced hydrography dataset. It gets even more complicated when you overlay 15-minute time-based data such as water attributes (flow, height, temperature, quality, changes, etc.). It is even more complicated when you combine that data with other dimensions, such as earth elevations, and need to relate across domains of science speaking different languages in order to calculate, for example, how fast water may carry a certain contaminant down a slope after a river bank or levee collapses.
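A hedged illustration of why spatial data must be stored numerically and tied to a grounding point: with coordinates as numbers, you can actually calculate on them, for example relating a 15-minute flow reading to the nearest gauge. The gauge names, coordinates, and reading below are made up for the sketch.

```python
# Hedged illustration: numeric, earth-referenced coordinates let us compute
# spatial relationships. Gauge names, coordinates, and the reading are made up.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical stream gauges (vector points) and one 15-minute flow reading.
gauges = {
    "gauge_A": (39.7392, -104.9903),
    "gauge_B": (39.5501, -105.7821),
}
reading = {"lat": 39.60, "lon": -105.10, "timestamp": "2013-06-01T00:15", "flow_cfs": 182.0}

# Relate the time-based reading to the nearest spatially referenced gauge.
nearest = min(
    gauges,
    key=lambda g: haversine_km(reading["lat"], reading["lon"], *gauges[g]),
)
print(nearest, round(haversine_km(reading["lat"], reading["lon"], *gauges[nearest]), 1), "km away")
```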

Before we can get to those more complex scenarios, geospatial data is the next progression in data complexity.

That said, definitely check out our Geospatial Integrated Services and Capabilities

What does geodata.gov mean to data.gov


During the First International Open Government Data Conference in November 2010, Xentity’s Geospatial Services and Architecture Lead, Jim Barrett, had the opportunity to present alongside colleagues such as the OMB Federal CIO, the CTO, Sir Tim Berners-Lee, and several other notable figures in this space.

Jim, at the time part of the XPN as an independent consultant, presented our recent conceptual architecture work for data.gov, which looked to integrate the previous administration’s geodata.gov. Open data registered through geodata.gov accounts for over 80% of all the data in data.gov, so it is by all means a major factor in where data.gov would need to focus.

The conference appears to continue as a biennial event, with the most recent one held in July 2012.

The following captures the extended version of his presentation:

Xentity chosen to help re-arch data.gov


Xentity has been brought on as a subcontractor to Phase One Consulting Group to support adding geospatial data and to help update some of the user discovery and supplier coordination patterns for data.gov.

UPDATE 11/2011:

  • Xentity delivered architecture recommendations in early 2011
  • data.gov implemented the first phase of the architecture by retiring geodata.gov and beginning its migration to geo.data.gov
  • Future phase recommendations are procurement sensitive, but the project results were very exciting in showing where data.gov could go.
  • What does geodata.gov mean to data.gov – Xentity supported presenting the architecture findings at the International Open Government Data Conference.