Open Geospatial Consortium (OGC) adopts OWS Context Conceptual Model and ATOM Encoding Standards

Blog post added by Wiki Admin

OGC announced a new standard today. If implemented, it means that if you make a map in, say, Google Maps and want to show it in Bing Maps or an advanced GIS tool, it would be like opening a Word document in another tool such as OpenOffice.

This seems simple enough, but as you can imagine, we cannot do this now. Actually, it is quite the opposite – most viewers require custom coding to make a mashup or workflow configuration work in their own application. This standard paves the way for sharing map contexts across OGC services.

Xentity staff contributed input to the ATOM XML encoding standard and helped drive the push for a JSON encoding, which is more HTML5- and browser-friendly and can help with tablet and lightweight viewer solutions as well.
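To make the idea concrete, here is a purely illustrative Python sketch of the kind of "context document" the standard describes – a portable list of map layers and the services that serve them, which any conforming viewer could use to redraw the same map. The field names, layer names, and endpoints below are simplified assumptions for illustration, not the normative OWS Context ATOM or JSON encoding.

```python
import json

# Hypothetical, simplified "context document": the map's title, extent, and the
# service offerings a viewer would need to reassemble the same map elsewhere.
# Field names are illustrative only, not the normative OWS Context encoding.
context = {
    "type": "MapContext",
    "title": "Wildfire situational awareness map",
    "bbox": [-125.0, 32.0, -114.0, 42.0],  # west, south, east, north
    "resources": [
        {
            "title": "Active fire perimeters",
            "offering": {
                "service": "WMS",
                "url": "https://example.org/geoserver/wms",  # hypothetical endpoint
                "layer": "fires:perimeters",
            },
        },
        {
            "title": "Base topography",
            "offering": {
                "service": "WMTS",
                "url": "https://example.org/tiles/wmts",  # hypothetical endpoint
                "layer": "topo",
            },
        },
    ],
}

# Any viewer that understands the shared encoding could read this file and
# rebuild the map from the referenced services, instead of needing custom code.
print(json.dumps(context, indent=2))
```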

Full article: The OGC adopts OWS Context Conceptual Model and ATOM Encoding Standards

So what is the point of this metaphoric drivel

Blog post added by Wiki Admin

So what is the point of this metaphoric drivel about cowpaths, space shuttles, and chariots?

Yes, fair enough. Aside from being a fun story, there should be a point.

I think there are three, not unlike the Goldilocks story:

  • Change agents can't come in too hot, putting in new technology and abandoning the old, because there are consequences.
  • Change agents can't come in too cold either, dropping new technologies into the same footprint and design as the old.
  • Change agents need to find the transition balance between old and new that allows the new ecosystem to be adopted and the old ecosystem to adapt.

To get this balance, there are three factors standing in the way of introducing a disruption such as this:

  • Scaling – research readiness for solution expansion, adoption, and architecture quality
  • Legacy – legacy investment stakeholders' agendas
  • Transition – patterns for new investment that benefit the new solution and address legacy investment stakeholders

Read the next blog post, which considers these disruption factors on an example topic – advancing our global network to keep up with computing and make the internet truly 21st century.

More to come.

Piling On HealthCare.gov

Blog post edited by Matt Tricomi

Washington just can't catch a break, eh? Debt ceiling. Sequester. Shutdown. And now an epic fail of a single portal solution that provides the primary access to the new healthcare access law.
Love the law or hate it, the model for providing electronic access is definitely an epic fail of a rollout.
What else is an architect to do but pile on the fiasco and offer an opinion, while at the same time hoping to catch someone's attention or get some sense that someone is thinking about it from the design point of view, since the media, hearings, and comment threads are not landing, at least, where we were hoping the discussion would go. It's a bad design. I'm not commenting on the policy part – the pundits do enough of that. I'm commenting on the architecture itself, and it appears to be a poor design.
The following captures our internet sleuthing, colleague discussions, and our current deductions.
This is an evolving blog post, as it's more of a case study than a daily diary. It's a bit in draft form, but it's a way to begin to pin the story together. Apologies for the typos and pre-publish state, but this is moving so fast and is important enough that I wanted to add to the dialog, not simply leech afterward with 20/20 hindsight.

UX and Web Design is fine – It's not the front end

Now, much fanfare has gone to the "web site" glitches. It was written about back in June by Alex Howard in the Atlantic. I had the pleasure of connecting with Alex during data.gov work back in 2010, and off and on we correspond on social media. I respect his writing and what he follows, and I have generally found him to be spot on in bringing collaboration across traditional boundaries into the world of information and technology. That said, I may be a bit biased.

Here are just a handful of articles done on the healthcare.gov performance:

http://www.websiteoptimization.com/speed/tweak/healthcare-gov-analysis/

http://apmblog.compuware.com/2013/11/04/diagnosing-obamacare-website-heathcare-gov-still-lacks-basic-optimizations-before-it-can-mature/

http://www.mardesco.com/blog/website-optimization-and-healthcare-gov/

http://www.conversionmax.com/healthcaregov-what-went-wrong/

It's the same thing as above – minify, compress, order your scripts/CSS, cache. Yep, the same things as back in 1999, just with MUCH more powerful scripting and style-processing capability.

Point being, this client-side tweaking is all REALLY good practice, especially for large sites with large traffic. Every little bit of debris cleanup helps. But whatever fix is done, there needs to be a balance against the true timeline issues. It appears that the problem is primarily a server-side architecture problem. There were, and are, definitely several issues with front-end performance, as the above articles suggest. It's easy for web technophiles to analyze, since most of the web processing happens on the client side – in your browser – and the tools (e.g., hit F12 in Chrome or use the YSlow plugin for Firefox) are easy to use, while services like Google Analytics can analyze performance, content safety, usage statistics, and much more. Point being, are you going to put more resources into fixing the leaky faucet or the gushing, gaping hole in the water main first?
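For the delivery-side tweaks those articles describe, here is a minimal, hedged sketch (standard-library Python, with a hypothetical pre-built page standing in for real content) of the general idea: serve a pre-rendered, pre-minified page, compress it when the browser accepts gzip, and set cache headers so browsers and CDNs stop re-fetching it. This is an illustration of the practice, not healthcare.gov's actual stack.

```python
import gzip
from wsgiref.simple_server import make_server

# Imagine this is the pre-rendered, pre-minified page produced at publish time.
STATIC_BODY = b"<html><body><h1>Plan comparison</h1></body></html>"

def app(environ, start_response):
    # Compress only if the client advertises gzip support.
    accepts_gzip = "gzip" in environ.get("HTTP_ACCEPT_ENCODING", "")
    body = gzip.compress(STATIC_BODY) if accepts_gzip else STATIC_BODY
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        # Let browsers and CDNs cache the page for a day instead of re-fetching it.
        ("Cache-Control", "public, max-age=86400"),
        ("Content-Length", str(len(body))),
    ]
    if accepts_gzip:
        headers.append(("Content-Encoding", "gzip"))
    start_response("200 OK", headers)
    return [body]

if __name__ == "__main__":
    # Serve on port 8000 for a quick local demo.
    make_server("", 8000, app).serve_forever()
```

Useful hygiene, but as argued above, it is the faucet, not the water main.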

Now, Alex only reported on the development of the front end and UX component. There is deservedly high praise for Prose.io, Jekyll, open source concepts, garage organizations breaking Beltway development stereotypes, and persona development as a way to design the navigation for this brand new pattern.

But the point is, the UX and web design part is fine and dandy. Alex was right in June and still is. It's the same architecture I did for united.com back in '99, just with different tech: have a CMS, cache it, distribute it over four servers west/east. When 9/11 hit, my site was the only airline site (check the Internet Wayback Machine) and call center that stayed up. So this model for healthcare.gov is fine. The UX caching is fine.

He didn't report on the back-end part. Mind you, this part is under-reported, and the complexity of the iceberg below the waterline has been misunderstood. A lot of folks have joined in the usual internet snarkiness, bashed Alex's journalism, and attacked his integrity as if he were a bandit of sorts. In other articles, I did see some purist nerd talk about poorly grouped or heavy JavaScript, some bad code, some extra callbacks, and pages that weren't as static as they should have been, but it was quickly determined that those issues were minor.

I felt pretty bad for him over the article and, generally, as a citizen, embarrassed. So, like many of us architecture weenies, I dug into it, as many other colleagues have. Hey, regardless of politics, we all want a working country.

Why the logic for real-time data aggregation architecture?

The Tacoma Narrows Bridge didn't fall because the construction contractor failed; it fell because the architecture forgot one piece of logic – the wind through that narrows was very strong. There was nothing to dampen the oscillation, and when the wind blew, it pushed the bridge at and near its natural frequency (think rubbing your finger on a wine glass), and those vibrations shattered the bridge. It was built to specification by the contractors, but the design was horrible because it had flawed logic.
OK, where is the parallel? If it's a back-end problem for healthcare.gov, where is it? Given my background and recent work on "hero" architectures, I was hoping it was a minor performance issue that could be fixed by some horizontal scaling of servers. They did that; no major fix. Maybe it could be some technical server or software tuning – no luck. That only leaves bad logic.

Now, it appears the contractor possibly did have faulty construction, in that there wasn't enough foresight to do more parallel testing, load testing, integration testing, and the "7 steps of doneness." The architects came out and said so today. Though that sounds a little passive-aggressive now – as the architect of a building, would you say that after the bridge collapsed? It sounds like either buck-passing, a gag order, or droopy-dog "no one is listening to me."

But even that would have been reportable by now. So, aside from the obvious lapse in engineering discipline – the civil-engineering equivalent of the Tacoma Narrows Bridge – what logic am I talking about?

I believe it comes down to the architecture logic in this case, likely in multiple areas. They treated the architecture like a controlled, real-time, distributed MIS system operating on data that has not been standardized or proven, through the test of time, to be easily integrated.
 

Bad logic: it appears they are responding to business rules that ask for real-time queries.


Why are they doing this? Is Healthcare.gov following the KAYAK.com or Orbitz.com data aggregation model? In airlines, rates can change any minute, the airlines have proven APIs refined over a decade of time-tested improvements, and the smaller airlines get screen-scraped, with KAYAK spending dollars keeping those scrapers up to date – just like when mainframes used to be scraped for client-server integration. Airlines are a huge industry; they put their paper-ticket-to-e-ticket transition on the line, and it took a decade to get to this model. Then again, USA Today put out an article noting that Healthcare.gov is not alone in high-tech blunders:

United, Continental merge computer systems

March and August 2012

United Airlines had problems with its reservations system in early March after it switched to Continental’s computer system as the two airlines merged operations. Passengers complained as United struggled for several days to fix problems. In late August, the airline’s computer system and website went down causing problems with reservations, ticketing and check-ins.

This, of course, made me smile a little, since they took down the architecture I did for united.com when they decided to move to Continental's toolsets, mostly because United needed to get off the Apollo mainframe (so says word on the street). So when they threw out [my] proverbial baby – then 12 years old and still well respected and award-winning – yes, it did make me smile. But the point is, even big moves in private industry will hit problems with this type of architecture.


While I appreciate the importance of up-to-date comparisons, can't this stuff be pre-loaded? Why are we bucket-brigading this information? Did they do this because they got caught up in the fanfare of the neat front end – which, again, is slick, but is a small part of the project? The cover of the book gets people in the door and comfortable, and good interior design makes for a good user experience. But if you can't get a cup of coffee in the Starbucks, does that matter?
I wonder if it needed to be this way. When I heard that the queries cross state lines in real time, that meant all the quality validation would require a quality handshake, in real time, from each source – and each source, whether a different provider, state, or other governing data source, would require a whole different rule base to validate. Each state, provider, etc. operates differently, and expecting that in real time is quite an aggressive and bold feat.
At the same time, is that required? Imagine doing a bucket brigade through 10 points to move one bucket to put out a fire: some water will be spilled, but if you simply put more people in between, the spillage doesn't matter. That is why the normal web service model works, and why Twitter, Facebook, and many other services used inside other apps work so well. It's a simple, single source of data.
Now, pretend there are 50 different Twitter imitators, all with different approaches to tweeting, different data about the user or the tweet (aka metadata), different ways of setting up the service call, and all operating under different state laws about sharing that information. That is healthcare.gov.
My idea here isn't overly novel. Check out gov.uk and MongoDB's article – Reinventing Data Management for Government Websites – which discusses just this theory:
HealthCare.gov faces challenges like aggregating volumes of data and building an efficient system to meet citizens’ needs. An agile database like MongoDB could have helped HealthCare.gov to scale, remove redundancy, and potentially reduce the cost (estimated to be at $292 million so far) (WaPo article) of both creating the site and dealing with the fallout of its failure.
I haven't reached a point of validation yet, but it sure seems that the separately developed service handshaking across these disparate sources – integrating data that was created, formed, provided, and published under such different bucket-brigade handlers – would be like having one bucket brigade coming out of a lake and another out of an ocean, and expecting only fresh water while the extra salt water magically goes away.

Why not pre-stage the queries?

I guess I'm still asking – why do they need to aggregate in real time? Why can't they pre-stage the calls? Is the data really updated every minute?


You make pre-staged products: in between each organization, you validate the quality and get the data ahead of time – at design time – so your run-time call hits a validated source that can be updated every week, day, or 15 minutes, on a different schedule for each feed. Then, when new feeds come in, separate pre-built maps or indices are auto-updated to optimize the comparison experience, and you can even set up the possibility of helping explain what the comparisons mean, or simply making matter-of-fact statements about the differences (not advice, just making the differences obvious). Google has proven this model. It's just reading content, and with the knowledge snippets now shown on the right of Google results, you can ask it basic factual questions. It can compare, and it's all hitting a local, validated, appropriately up-to-date index that can scale elastically on the cloud and is proven fast.
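As a hedged sketch of what pre-staging could look like (hypothetical sources, field names, and plan data – not the actual healthcare.gov feeds): a design-time job pulls each source on its own schedule, validates and normalizes the records into one common shape, and writes a local store; the run-time comparison query only ever reads that validated local copy.

```python
# Hypothetical design-time pre-staging job: each source publishes plans in its own
# shape; we validate and normalize ahead of time so run-time queries never call the
# sources directly. Source data here is faked in-memory for illustration.

SOURCES = {
    "state_a": [{"plan": "Bronze 1", "monthly_premium": "210.50", "carrier": "Acme"}],
    "state_b": [{"name": "Silver Saver", "premium_usd": 315.0, "issuer": "Zenith"}],
}

def normalize(source_id, record):
    """Map each source's fields onto one common schema; reject anything malformed."""
    if source_id == "state_a":
        plan = {"name": record["plan"], "premium": float(record["monthly_premium"]),
                "carrier": record["carrier"]}
    elif source_id == "state_b":
        plan = {"name": record["name"], "premium": float(record["premium_usd"]),
                "carrier": record["issuer"]}
    else:
        raise ValueError(f"unknown source {source_id}")
    if plan["premium"] <= 0:
        raise ValueError("invalid premium")
    return plan

def prestage():
    """Design time: build the validated local store (run hourly/daily per feed)."""
    store = []
    for source_id, records in SOURCES.items():
        for record in records:
            try:
                store.append({"source": source_id, **normalize(source_id, record)})
            except (KeyError, ValueError) as err:
                print(f"quarantine bad record from {source_id}: {err}")
    return store

def compare(store, max_premium):
    """Run time: the user's comparison only reads the pre-validated local store."""
    return sorted((p for p in store if p["premium"] <= max_premium),
                  key=lambda p: p["premium"])

if __name__ == "__main__":
    staged = prestage()
    for plan in compare(staged, max_premium=400):
        print(plan["name"], plan["carrier"], plan["premium"])
```

The spilled buckets (bad records) get caught at design time, not in front of the citizen at run time.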

We recently did this same thing for a major government agency that must remain anonymous. Their search calls hit the traditional RDBMS, which is better at finding a specific record and returning simple, low-computation queries. Instead, we asked: can you move 90% of the search into a NoSQL solution? It can load hundreds of millions of records in minutes and do all the pre-calculations needed for a smarter search – like how Google knows what you are typing, typos and all, before you finish – handling typos, many facets, drill-downs, etc. I believe KAYAK has moved in this direction as well to optimize its search experience, but I can't validate that.
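I can't share that agency's stack, but here is a rough, generic sketch of the pattern in pure standard-library Python with toy data: load the records once, pre-compute an index and its vocabulary, and let the search path do typo-tolerant lookups against that pre-computed structure instead of hitting the RDBMS per query.

```python
import difflib
from collections import defaultdict

# Toy records standing in for the "hundreds of millions" of rows; the point is that
# the index and vocabulary are built once, ahead of any user query.
RECORDS = [
    {"id": 1, "title": "Affordable health plan comparison"},
    {"id": 2, "title": "Dental plan enrollment guide"},
    {"id": 3, "title": "Health savings account basics"},
]

def build_index(records):
    """Design time: inverted index (term -> record ids) plus the term vocabulary."""
    index = defaultdict(set)
    for rec in records:
        for term in rec["title"].lower().split():
            index[term].add(rec["id"])
    return index, sorted(index)

def search(query, index, vocabulary):
    """Run time: correct each query term to the closest known term, then look it up."""
    hits = set()
    for term in query.lower().split():
        close = difflib.get_close_matches(term, vocabulary, n=1, cutoff=0.7)
        if close:
            hits |= index[close[0]]
    return sorted(hits)

if __name__ == "__main__":
    idx, vocab = build_index(RECORDS)
    # Typos resolve to "health" and "plan", so this prints [1, 2, 3].
    print(search("helth paln", idx, vocab))
```

A real deployment would use a search/NoSQL engine rather than a dict, but the division of labor is the same: heavy pre-computation at load time, cheap lookups at query time.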

This is why the Twitter, Facebook, and other popular, highly used service APIs work. They designed their Service Integration Model (SIM), and now there is a massive ecosystem of sub-applications, sub-markets, and aftermarkets. HC.gov did not do that; it took a YAGP (yet another government portal) architecture approach and parted out the scope as if building a battleship. There wasn't enough consideration of patterns like design-time integration versus run-time integration (something I recently prototyped in an architecture for NARA: adding a NoSQL "index," if you will, in front, so the query side was fast, while the transaction side was passed to the traditional RDBMS and, on update, did a millisecond sync back to the NoSQL index).
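Since I can't publish that prototype itself, here is a hedged, minimal sketch of the general pattern, with SQLite standing in for the traditional RDBMS of record and a plain dict standing in for the NoSQL-style read index: reads hit the fast pre-built index, writes go to the transactional store and then sync back to the index.

```python
import sqlite3

# Minimal stand-ins: sqlite3 plays the RDBMS of record, a dict plays the NoSQL-style
# read index that sits "in front" and answers queries quickly.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE plans (id INTEGER PRIMARY KEY, name TEXT, premium REAL)")
read_index = {}  # id -> denormalized, query-ready document

def sync_to_index(plan_id):
    """After any transaction, push the updated row back into the read index."""
    row = db.execute("SELECT id, name, premium FROM plans WHERE id = ?",
                     (plan_id,)).fetchone()
    read_index[row[0]] = {"id": row[0], "name": row[1], "premium": row[2]}

def write_plan(plan_id, name, premium):
    """Write path: the transaction lands in the RDBMS first, then syncs to the index."""
    db.execute("INSERT OR REPLACE INTO plans VALUES (?, ?, ?)", (plan_id, name, premium))
    db.commit()
    sync_to_index(plan_id)

def query_plans(max_premium):
    """Read path: queries never touch the RDBMS, only the pre-built index."""
    return [doc for doc in read_index.values() if doc["premium"] <= max_premium]

if __name__ == "__main__":
    write_plan(1, "Bronze 1", 210.50)
    write_plan(2, "Silver Saver", 315.00)
    write_plan(2, "Silver Saver", 299.00)  # an update flows through and re-syncs
    print(query_plans(300))
```

The design choice is simply to keep the high-volume read traffic off the transactional system, which is the piece the real-time aggregation model never separated out.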

Now for the Blame Game: in contracting, investing in architecture is still not a requirement, so it's a liability to bid it that way.

It was divvied up among over 50 contractors, and outside of a PMO, it appears no enterprise service integration concepts were represented on the leadership team. I saw a PMO, but PMOs usually manage the production, not the architecture of what needs to be produced. We can throw CGI Federal under the bus all we want, and whoever ran the PMO, but it sure seems the requirements and the team did not include a solution or architecture integrator as one of the roles – someone (or several someones) overseeing the Service Integration Model – call it an Enterprise Service Architect, Sr. Solution Architect, or Architecture Review Board/Governance – to advise on design risks, maintain risk weights, and let the PMO know where risk sits at escalation points.

I do know that in contracting, if you believe you could win by shaving 3% off the contract, the first thing to go is usually the higher-end rates. Those rates are usually quality-focused – typically architecture, design, and strategy. Government contracts typically do not write those roles in, or if they do, it is written in a compliance-oriented way. Contracting Officers are not in a position to review whether a subjective concept such as one proposed architecture is better than another, and contracting review boards for IT have not adopted concepts like the architecture concept review boards used in civil engineering. Given that, a higher-quality architecture component is not seen as a differentiator, since it doesn't check a box; integrators will typically drop that high-rate position, and voilà, you have just undercut the competition by 2–4% on the bid.

By the way, don't be hard on the contractors only. The same goes for the writing of the contracts. Contracting Officers do not have a way of knowing whether the technical requirements they are asking for are sound or the best, and they have stated on occasion that they like to leave it open so the contractor can come back and tell them "how." While it is fair to let private industry offer the best solutions, there should be architecture principles in the contract to guide how bidders can answer – and thus, less subjectively, how they can assure a robust architecture, for instance, without just saying "it will be a robust architecture."


But as we saw, when you don't buy back the risk up front and the design changes, the cost balloons – which it did. Today, the reports say contractors are blaming the government after a few weeks of the government throwing "greedy" contractors under the bus. I say it's not greed but a broken procurement process. We have blogged for years, and built our practice around this principle: we can approach architecture for other implementers.

A large majority of consulting companies both design and implement for ALL projects. Though profitable for many firms, the best design can end up biased toward the agenda of the implementer, which may be to sell more components or get more bodies. Now, we have the capability to implement architecture, but our end goal is not to design an architecture that is for us to implement, but an architecture that is implementable.

Many times, the client knows that the implementer will design with a bias, so the client chooses to, or must, design blind without considering the maturity of what an implementer can provide. In those cases, we can come in, architect, and be a third party helping with the concept, the design, and the basis for the requirements and performance work statement.

This approach and these services buy back risk for your implementation and increase the likelihood of achieving your metrics and goals.


11/1 Note: A colleague forwarded me two WSJ articles – Fixing Procurement Process Is Key to Preventing Blunders Like Healthcare.gov and Procurement Process is Government's IT Albatross – which only add fuel to the fire. These pieces use Clay Johnson's Fix Procurement Manifesto as their guide. I agree with many points in Mr. Johnson's manifesto, but I don't agree with the author's reading of it. The author boiled it down to making procurement easier so government can get new tech. That sounds well and good, but as a colleague formerly at OMB said a few weeks back, if you add technology to a broken process, it will only make it worse. And it did. So I think it will take time not only to fix agency leadership, as Johnson notes (the author cites former Federal CIO Kundra as key), but also to improve the general public's understanding of how technology and business transformation work. The author and agency leadership compare healthcare.gov – a solution that involves data exchange and policy oversight and manages millions of transactions – to an operating system flaw in an iPad, which requires fixing once and deploying many times. That is an apples-and-oranges comparison – one is commodity technology, the other is automating decades of policies layered on top of policies. Until we can help educate people on those differences in complexity, we won't be able to achieve the goals of Johnson's manifesto, nor understand the right architecture to invest in PRIOR to letting a contract.

-mt

When you are ready to move into BigData, it means you want to answer new questions.

Blog post added by Wiki Admin

The article Throwing a Lifeline to Scientists Drowning in Data discusses how we need to be able to "sift through the noise" to handle the faster and faster deluge of sensors and feeds. It's not about the information management models of fast, large retail or defense data. It is about finding the signals you need to know about in order to take advantage of them.

In controlled environments, like retail and business, this has been done for years on end to guide business analytics and targeted micro-actions. 

For instance, the gambling industry has been doing this for 15-plus years: taking in the transactional data from each pull of every slot machine in every one of their hotels, linked with the loyalty card you entered, the time of year, when you go, your profile, and your trip patterns. Then, laws allowing, they adjust the looseness of the slots, the coupons provided, and the trip rewards, all to make sure they do what they are supposed to do in capitalism – be profitable.

Even in uncontrolled environments such as intelligence, defense, or internet search, the model is to build analytics on top of analytics to improve data quality and lifecycle so that the end analytics can improve. It's like putting sound equalizers on top of the sound board.

Don't go just for the neat tech for your MIS. Go because your users are asking more of you along the data-information-knowledge chain.

Continue on to read more on our follow-on article: BigData is a big deal because it can help answer questions fast

What cow paths, space shuttles, and chariots have in common

Blog post edited by Wiki Admin

A colleague recently sent me a chain email (they do still exist) about the old adage that new technology is driven by thousand-year-old standards. I had seen it before, and I remember that I liked it. But my new habit with chain emails and viral urban legends is to poke around. Being childlike, I hope for fun new ways to see things; being a problem-solver as well, I am skeptical of these amazing discoveries of trivial connections. Regardless, it's still a fun story from which one can mine some good nuggets.

The anecdote essentially notes how historical inventions are connected, and draws a moral. Reading it backwards, it claims that the width of the space shuttle's rocket boosters is due to the width of a railroad tunnel; that the railroad track width is due to the carriage wheel width; and that that width is tied to the chariot width, which was set by the width of two horses. Point being, the boosters' width derives from the width of two horses' rear ends.

Like I said, it is fun, but the connections are more loosely coupled and coincidental than the "six degrees of Kevin Bacon" concept. Snopes nicely walks us through how this is true only through generalities – not unlike saying the clothes we wear now are the way they are because a medieval tailor sized them that way. Snopes can be a party pooper sometimes, but they did also note a few things about people and change (insert my agenda HERE). This is why I like stories like this: I can tie my own tangential take-aways to them.

Snopes points out humans' presets on change:

Although we humans can be remarkably inventive, we are also often resistant to change and can be persistently stubborn (or perhaps practical) in trying to apply old solutions to new conditions. When confronted with a new idea such as a “rail,” why go to the expense and effort of designing a new vehicle for it rather than simply adapting ones already in abundant use on roadways? If someone comes along with an invention known as an “iron horse,” wouldn’t it make sense to put the same type of conveyance pulled by “regular” horses behind it?

It goes on through several more examples, noting how new innovations leverage the blueprints of previous-generation inventions, regardless of their direct influence. The tone felt a bit down about this, but I felt this continuity is not wholly a bad thing.

As a physical society that builds infrastructure to share, this compatibility is needed to limit the impact of disruption while progressing toward addressing the societal challenges of Maslow's Hierarchy of Needs globally.

For example, let's say there is a future decision to stop using dams for hydroelectric power and move to a series of nano-electric generators that work off river flow, impeding the water less and generating more power. This is great: we have a lower-cost, simpler, more efficient solution that also does not disrupt the ecosystem – riparian development, fish spawning, etc. – the way dams have for decades.

How do we transition to the new nano solution? The railroad story says we would use the previous footprint of the dam and, once ready, slowly migrate to the new solution to allow the water flow to come back into place. This would allow the wetlands and riparian ecosystem to grow back at nature's pace, and allow fish and river life to adapt generationally.

Yet the new solution does not require the same footprint. We could build it anywhere along the river. It could even be set up as a series of micro-generators, and once the energy put into the grid matches the dam's output, in theory the dam could just be exploded, and we could progress on without anyone in the future anthropocene historical record ever being aware that a dam was there.

But removing the previous infrastructure in a responsible way will be key. Blowing up a dam means the water release would cause major sediment displacement, kill the riparian and wetland ecosystems that adapted to the dam, and generations of fish and river life would actually die as a result. The dismantling process, though not required to meet the new direct human energy need, is critical for considering the indirect impact on the evolved ecosystem.

If you're still interested, check out the follow-up blog post: So what is the point of this metaphoric drivel