Handling delays on Internal Projects due to skill gaps

Blog post added by Wiki Admin

"Other duties as assigned" or side projects performed by employees are both precarious and advantageous.

There are slower periods in any operation, and most talented employees will enjoy the extra challenge: you get more production out of them, and they add skillsets to their career. But the Peter Principle of management typically means adding more duties, projects, or promotions until the performance level drops. Getting them back on track typically requires coaching, consulting, or even get-well plans. If this phase is not addressed, career growth stalls; employees who are either bored or overwhelmed do just enough to retain employment while keeping time free to surf the web for their next job.

If the staff is not familiar with these other duties or projects, the work will almost certainly slip its schedule, cost more, or fall short on quality or scope. That leaves four options: coach more, invest in consulting, put the employee on a get-well plan, or let them go for underperformance. The fifth option is no action, letting those duties and project objectives slip.

If this is staff you want to retain, a get-well plan or coaching can help, but if you have near-term project objectives, the only other option is adding consultants. You may have internal folks who can fill that role, but many times they are under duress with their own scope. This means introducing outside consultants. Possibly the solution is a new hire for a new permanent position, but for the first 90 days the perception will be that this person is an "outside" consultant.

Avoiding The Bobs

“What would you say you do here?” is the enduring image of a consultant as portrayed in the movie Office Space.

Employees have had many bad experiences working with consultants. Sure, many good ones as well, and a lot of times consultants are re-hired or even converted to employees. All in all, though, it is less about the consultant and more about the way the consultant was brought in.

  • “I’ve had consultants work for me before and they gave me ideas way too advanced for our culture.” – In this case, deliverables were not designed, the contract was not performance-based, and the sponsor did not know exactly what they were buying.
  • “They explain things using a different language. They don’t get us.” – The consultant went off and interviewed other staff or delivered training that was not tailored to their priorities, lingo, acronyms, objectives, maturity, etc.
  • Or water cooler chatter: “Why do I need these guys? They cost twice as much, and I have to train them.” There was never a clear definition of the measures the staff had to meet to get the project done, so there was no agreed trigger for bringing in additional help.

Point being, we have all had bad experiences with consultants, just as we have had them with peer employees. The difference, of course, is that one is part of the plan most of the time, and the other is a remediation when the plan has gaps.

There are a few ways to introduce outside consultants.

The best way is to budget for the gap upfront when you hire, promote, or assign the project, knowing where the employee's gaps are. But sometimes you may want to see what the employee is made of first, or the complexity of the task or project was unknown. Bringing in consultants unplanned tends to introduce cultural issues and creates turbulence if not handled right. You still need to either start your project right or get it back on track, and outside consulting can do just that, assuming (a separate topic) the project is scoped right, vetted right, priced mutually well, and clearly defined for delivery and transition.

Also, the level of consultant you bring in will change the types of possible reactions:

  • Strategic Advisor Intrusion
  • Embedded Consultant Insult
  • Consulting Project Team Infestation

The following captures some common issues and suggested solutions for engaging outside support in hero mode.

The Strategic Advisor Intrusion:

A strategic consultant will spend time with a top sponsor. This inserts influence into time the existing team used to have with that sponsor. If value is produced that aligns all or most agendas, this is seen as positive. If no fruits are visibly linked to a wanted agenda, it is seen as an intrusion.

Potential Reactions:  

  • Seen as an insult to the executive or leadership team
  • Some team members hog the advisor to advance their own agenda, possibly undermining the intended direction
  • Some team members take offense that they are not receiving budget support for their own gaps, feel their scope is undervalued, and see the engagement as a competing priority

Suggested Solution:

  • Kick off the effort with a collaborative, tailored two-day workshop: rapidly plan; capture drivers, needs, and priorities in front of each other; produce the deliverables each night; deliver value immediately; and show how the team can work with the new advisor, demonstrating both that the advisor adds great value and that they get the culture. Tailor the workshop to the project needs (complexity, as-is situation, defining the target vision, scoping out resources, and setting milestones).

Embedded Consultant Insult

An embedded consultant can be seen as a short-timer “leader” working side-by-side with existing staff to catalyze a vision, which others may perceive as a sign of failure. If the value can be seen clearly in advancing project or program objectives or measures, this is typically seen as positive on the whole; but even then, the insult of having to get help can offend a minority of the staff, and some could be key staff.

Potential Reactions:

  • Especially in project recovery, if the sponsor forces the consultant on the lead, typical reactions go beyond resistance and can escalate into sabotage as a form of “not in my backyard” protectionism. It can be seen as an insult to intelligence.
  • If the consultant is also replacing a previous leader, the “acting” or “temporary” leadership role will definitely produce an awkward transition. This is also expensive, as consultants tend to run 1.25-2x employee costs.

Solution:

  • To get this to a positive outcome, three things need to happen
    • One, the lead has to recognize they are behind, and the sponsor has to want to help them address the gap, being specific about it: the project is behind, the solution is not there, costs are overrunning or the burn rate is high, or the project team needs more guidance to get quality up
    • Two, the consultant needs to be brought in as an embedded consultant – part of the team. This is not a temp who does rote tasks and reports to a manager. The consultant needs to be part of the team, interact with everyone, and be expected to deliver within the culture and get deliverables done. The consultant can gather, interact, make observations, and even present training or subject matter, but the concluding direction and recommendations should be presented outward by the manager. This demonstrates the manager gets it, is competent, and can grow.
    • Three, to address objectives for achieving value, the consultant needs clear deliverables, and the early ones should be tangible and visible, so that others, including the manager, can see whether they are getting what they paid for. The deliverables should be defined up front. Thereafter, you can decide to do time & materials, but at minimum the milestones should still be clear.
    • As a way to get started, use a process that links the embedded consultant's work to the newly defined or updated drivers, stakeholders, objectives, and milestones, and always refer back to both when developing the solution, so any collaboration is not personal but grounded in executive direction and proven process

Non-solution:

  • Training existing staff, who are already observed to be overwhelmed, in a specific intermediate skillset (plan, design, research, architect, etc.) rarely yields success without first taking other duties or projects off their plate. These intermediate positions are a multi-year effort built on years of domain, subject, and pattern knowledge. Starting training can be part of the solution, but staff will come back with new acronyms, certifications (PME, ITIL, FEAC, CMMI, etc.), and patterns, and it takes time to learn what applies when.

Consulting Project Team Infestation

A project team brought in to introduce a new system, process, migration, or evaluation can be a tidal wave when it hits. Project teams form a sub-culture of their own and can be treated in a segregated fashion since they will be “gone soon”.

Potential Reactions: Awkward transition from the previous development team, resentment from those loyal to the previous developers, initial stagnation, attempted coup in defense of the previous team, lower IT support to the team, and a myopic view of needs due to isolation that could impact deliverable results

Solution:

  • Foster new relationships early by getting a milestone successfully executed out of the gate to show the new team demonstrates results
  • Have a project liaison, whether the project manager or an office correspondent, who assures the project team can escalate its needs and helps it acclimate to how “things get done around here”, as well as to opportunities to engage in the office culture (events, outings, idea meetings, and brownbags)

Point being, bringing in consultants can be scary to staff if it is not clear why they are being brought in and what is expected.

Sure, there may be cases where the employee is on a get-well plan while you bring in consultants. If you are not at that point, set the milestones for what needs to happen; if those measures are not being met, the situation is not yet at a fire point, and the coaching time dedicated is not cutting it, you need to move to the consultant phase.

GAO releases report on FGDC Role and Geospatial Information

Blog post edited by Wiki Admin

GAO released a report on the use of geospatial information titled “OMB and Agencies Can Reduce Duplication by Making Coordination a Priority”. Reader's Digest version: focus on integrating data.

Click to download PDF

We tend to agree. FGDC is currently very focused on a service-enabling management model (Geoplatform) to accomplish this. It is bold, but if their role as a service provisioner can directly or indirectly get them in the game on the real problem of data lifecycle management, they will have a chance to address this.

Point being, FGDC knows its role is not to be in IT Operations as its direct goal. But they also saw that being a sideline judge with no carrot-or-stick role would not garner the direction and recommendations that GAO suggests. They are getting on the playing field, taking advantage of the open service provider role, being that broker, and using that role to move IT costs down, enabling those shifted monies to then focus on the data issues cited. It's bold, and a unique approach, and there are many questions about whether a traditionally non-operational group can develop the culture to be effective. The proof will show over the next two years.

Below find our summary of strategic direction for FGDC’s geoplatform.

The challenges and recommendation sections are:

  1. FGDC Had Not Made Fully Implementing Key Activities for Coordinating Geospatial Data a Priority
  2. Departments Had Not Fully Implemented Important Activities for Coordinating and Managing Geospatial Data
  3. Theme-lead Agencies Had Not Fully Implemented Important Activities for Coordinating and Managing Geospatial Data
  4. OMB Did Not Have Complete and Reliable Information to Identify Duplicative Geospatial Investments

Our review of Background – then and now

The foundation the FGDC has put in place: the Federal Geographic Data Committee (FGDC) has always been a catalyst and leader enabling the adoption and use of geospatial information.

The Federal Geographic Data Committee (FGDC) has been successfully creating the geospatial building blocks for the National Spatial Data Infrastructure (NSDI) and empowering users to exploit the value of geospatial information.  The FGDC has been leading the development of the NSDI by creating the standards and tools to organize the asset inventory, enhance data and system interoperability and increase the use of national geospatial assets. The FGDC has successfully created policy, metadata, data and lifecycle standards, clearinghouses, catalogs, segment architectures and platforms that broaden the types and number of geospatial users while increasing the reuse of geospatial assets. [1] 

What is next? The Geospatial Platform and NGDA portfolio will be the mechanism for adoption of shared geospatial services to create customer value

Recently, the FGDC and its partners have expanded their vision to include the management and development of a shared services platform and a National Geospatial Data Asset (NGDA) portfolio.  The goals are to “develop National Shared Services Capabilities, Ensure Accountability and Effective Development and Management of Federal Geospatial Resources, and Convene Leadership of the National Geospatial Community benefitting the communities of interest with cost savings, improved process and decision making”.[2]

As the FGDC continues on the road to establish a world class geospatial data, application and service infrastructure, it will face significant challenges “where the Managing Partner, along with a growing partner network, will move from start‐up and proof‐of‐concept to an operational Geospatial Platform”.[3]

Xentity has reviewed the FGDC’s current strategy, business plan and policies and identified the following critical issues that need to be solved to attain the goals:

  • Building and maintaining a federated, “tagged”[4] standards-based NGDA and an open interoperable Geospatial platform. The assets need to provide sufficient data quantity and quality with service performance to attract and sustain partner and customer engagement[5]
  • Developing a customer base with enough critical mass to justify the FGDC portfolio and provide an “Increased return on existing geospatial investments by promoting the reuse of data application, web sites, and tools, executed through the Geospatial Platform” [6]
  • Improving service management and customer-partner relationship capabilities to accelerate the adoption of the interoperable “shared services” vision and satisfy customers [7]
  • Executing simple, transparent, and responsive Task Order and Requirements management processes that result in standards-based interoperable solutions [8]

The Big Challenges

Establish the financial value and business impact of the FGDC’s Portfolio!

The Geospatial Platform and NGDA will provide valuable cost-saving opportunities for adopters.  They will save employees' time, avoid redundant data acquisition and management costs, and improve decision making and business processes.  The financial impact on government and commercial communities could be staggering. It is a big and unknown figure.

The Geospatial Platform, by definition and design, is a powerful, efficient technology with the capacity to generate a significant return on investment.  It is a community investment and requires community participation to realize the return.  The solution will need to assist the communities with creating and sharing return-on-investment information, cost modeling, case studies, funding strategies, tools, and references, and continue to build the investment justification.  It will need to optimize funding enhancement and be responsive to shorter-term “spot” or within-current-budget opportunities while always positioning for long-term sustainability.  The FGDC Geospatial Platform Strategic Plan suggests a truly efficient capability could create powerful streamlined channels among much broader stakeholder communities, including citizens, the private sector, and other government-to-government interfaces.  Similar to the market and business impacts of GPS, DOQ, and satellite imaging technology, the platform could in turn promote more citizen satisfaction, private sector growth, or multiplier effects on engaged lines of business.

Getting a big return will demand continuous creative thinking to develop investment, funding, management, and communication approaches that realize and calculate the value.  It is a complex national challenge involving many organizations, geospatial policy, and conflicting requirements, interests, and intended uses.

The key is demonstrable successes.  Successes become the premise for investment strategy and cost savings for the customers.  Offering “a suite of well‐managed, highly available, and trusted geospatial data, services, and application, web site for use by Federal agencies—and their State, local, Tribal, and regional partners” [9] is the means to create the big value.  

“A successful model of enterprise service delivery will create an even greater business demand for these assets while reducing their incremental service delivery costs.” [10]

FGDC has to create and tell a compelling “geospatial” value proposition story

To successfully implement the FGDC's vision will demand a robust set of outreach and marketing capabilities.  The solution will need to help construct the platform's value proposition and marketing story to build and inform the community.  The objective is to ensure longer-term sustainable funding and community participation.  The solution will need to bring geospatial community awareness, incentive modeling, financial evaluation tools, multi-channel communication, and funding development experience to the FGDC.  It will need transparently developed and implemented communication and marketing strategies that have led to growth in the customer base, alternative portfolio funding models, and shared services environments for the geospatial communities.  And its approach will need to be transparent, engage the customers and partners, and continuously build the community.

This is a challenging time to obtain needed capital and win customers, even for efficient economic engines like shared geospatial data and services.  The approach to community outreach will need to be impactful and trusted, and tell the story of efficiencies, cost savings, and higher quality information.  The platform and NGDA must impact the customers' program objectives. Figure 1 – FGDC Performance and Value framework shows how the platform's value chain aligns with the types of performance benefits that can be realized throughout its inherent processes. The supporting team's understanding of this model will be needed to organize the “Story” to convince the customers and partners that the platform can:

  • Provide decision makers with content that they can use with confidence to support daily functions and important issues,
  • Provide consistency of base maps and services that can be used by multiple organizations to address complex issues,
  • Eliminate the need to choose from redundant geospatial resources by providing access to preferred data, maps and services[11] 

As the approach is implemented, the FGDC, its partners, and the Communities of Interest will have successfully accelerated the adoption and use of location-based information.  Users will recognize the value offering and reap the benefits to their operations and bottom line.  The benefits will be measurable and support the following FGDC business case objectives:

  • Increasing Return on Existing Investments, Government Efficiency, Service Delivery
  • Reducing (unintentional) Redundancy and Development and Management Costs
  • Increasing Quality and Usability[12]

Our Suggested Solution

FGDC's challenges require a PMO, integrated lifecycle management, partner focus, and blended experience with an integrated approach and single voice designed to meet the FGDC's strategic objectives and provide a world-class shared services and data portfolio.  Doing this, they can integrate organizations, data, and service provision.

A solution like this would provide the program, partner and customer relationship management, communications, development, and operational capabilities required to successfully implement the FGDC's vision and business plan. The focus will need to:

  1. Coordinate cross-agency tasks and portfolio needs in agile program management coordination with a single voice,
  2. Implement an understanding of critical lifecycle processes to manage and operate the data, technology, capital assets, and development projects for a secure cloud-based platform,
  3. Have communications and outreach focused on communities for partner and customer engagement in the lifecycle decisions,
  4. Finally, make sure the secretariat staff and team have rotating collective experience, with representatives and contractors who have successfully performed at this scale across all functional areas, with domain knowledge in geospatial, technology, program, service, development, and operations.

The strategy, collective experience, and techniques will enable FGDC to provide a single voice from all management domains (PMO, Development, Operations, and Service Management) for customer engagement. The approach will need to be integrated with the existing FGDC operating model, creating a sum value greater than that of its individual parts. This approach will help create the relationships needed to develop trusted partner services.


[1] Geospatial Platform Business Plan (Redacted Final), page 7

[2] Draft NSDI Strategic Plan 2014-2016 V2, page 2

[3] Geospatial Platform Business Plan (Redacted Final), page 28

[4] Ibid., page 11

[5] Ibid., page 9

[6] Ibid., page 26

[7] Ibid., page 4

[8] Ibid., page 6

[9] Geospatial Platform Business Plan (Redacted Final), page 2

[10] DOI Geospatial Services Blueprint, 2007

[11] Geospatial Platform Business Plan (Redacted Final), page 13

[12] Ibid., Appendix A

[13] OMB Circular A-16 Supplemental Guidance, page 12

[14] Geospatial Platform Business Plan (Redacted Final), page 12

[15] Ibid., page 36

[16] ITSM Service Operations V3.0

[17] Ibid., page 26


Top 5 low hanging fruit to not bungle IT Procurement

Blog post added by Wiki Admin

IT procurement has been a hot-button issue since one of the largest civilian IT projects – DoD-IT-sized, in a way – was “bungled” in almost every phase: cost went to 6x the original bid, the architecture was overly complicated, there was no system integration concept, no end-to-end software or data lifecycle management, and no quality acceptance procedures, criteria, incentives, or the like.

We all know some or all of those problems. But as the FCW article cites: “Bungled launches didn't start with HealthCare.gov.” It's everywhere: government, commercial, non-profit, everywhere.

Of course, at Xentity we are biased – we believe in a take-your-medicine-now approach: design upfront, register and buy back the risks, and then move into agile design, rapid development, and iterative launch as relevancy and the market allow.

FCW notes there is “considerable agreement on how to go about overhauling the procurement system…” and consensus on five key actions:

1. Do a better job on defining desired outcomes upfront
2. Improve the training options for the federal acquisition workforce to put them on an even footing with vendors
3. Give agency CIOs more budget authority
4. Avoid lowest price, technically acceptable contracts on large innovation-heavy projects.
5. Use agile development strategically and mainly when a project does not require a lot of interaction with legacy systems.

To toot our own horn, these are many of the fundamental goals that Xentity's staff and solutions focus on. In reply to those five items:

1. This is our main emphasis: design the concept of operations and the requirements for the SOW, and register the risks and know them ahead of procurement. Still allow vendors flexibility in logical and technical design, but know upfront the various concepts that may come back and know how to score them.
2. Our fedbiz.xentity.com business management specialist service, integrated with our architecture practice, lets us bring contracting and procurement specialization into understanding how vendors, based on market analysis results, will respond to certain requirements or solicitation frameworks.
3. We are set up to help advise CIOs on enterprise portfolio, architecture, capital planning, and segment adoption of CIO services and solutions.
4. We agree that the LPTA method does not work for acquiring design, planning, creative, and solution management services. LPTA beltway experts tend to game the system by replying to architect positions with application developer rates on the initial task order; technically it is acceptable, and the rates are 30% better. But it's a lot like asking a cook, “Can you farm?” Technically, the cook probably could, but wouldn't you want someone familiar with the subject matter of agricultural economics, farming lifecycles, and key risk and success factors, rather than someone who just knows food? The solution ends up costing more as mods occur, the app developer gets replaced, and the government pays the cost of missed deadlines and scope creep.
5. This is a biggie! Marketing hyped up agile as the “it slices, it dices, it juliennes!” It is great for new transaction systems on abstracted solutions. It can be good for some feed ETL or integration. But when you have an immovable object, it doesn't matter how agile you are. In those cases, you need to conduct architecture design, concept of operations alternatives analysis, business case evaluation, and requirements definition, and register and buy back risk.

All this said, we are very happy to see this attention back on better design and definition up front. Of course, being in this field, it is always nice to be noticed and to get some sense we are on the right track. More importantly, we believe it is what is needed and the right thing to do for investing our citizens' or customers' money and assets wisely.

Ten Web Performance Tuning Tips – Measure, Compress, Minify

Blog post added by Wiki Admin

With the mobile ecosystem creating an even less patient user than the desktop, web performance issues harken back to 1999, when we were on 56K modems and using NetZero to dial up from airport lounges. A web site needs to be architected, designed, and coded with best practices to perform well. After reviewing Google Analytics for an Atlassian Confluence website, we realized we had both client-side and server-side performance issues. Google provides more web performance tips and tools, including browser extensions and APIs for PageSpeed Insights, the PageSpeed Service, and their optimization libraries. So we did some investigation, and the following were some suggestions:

Enable compression

Compressing resources with gzip or deflate reduces the number of bytes sent over the network. Enabling compression for our resources reduced their transfer size; we found about a 72% reduction.
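As a rough, hypothetical illustration of what compression buys (not our actual Confluence configuration, which is handled by the web server), Python's standard library can estimate the gzip savings for any text resource:

```python
import gzip
import urllib.request

# Hypothetical URL for illustration; substitute any CSS/JS/HTML resource you serve.
url = "https://example.com/site.css"
raw = urllib.request.urlopen(url).read()

# Roughly what the server would send with "Content-Encoding: gzip".
compressed = gzip.compress(raw)
savings = 1 - len(compressed) / len(raw)
print(f"{len(raw)} bytes -> {len(compressed)} bytes ({savings:.0%} smaller)")
```

In practice compression is enabled in the web server or servlet container, not in application code; the sketch just shows where a figure like our 72% comes from.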

Minify JavaScript

Compacting JavaScript code can save many bytes of data and speed up downloading, parsing, and execution time. Minifying our JavaScript resources reduced their size; we found about a 12% reduction.
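To make the idea concrete, here is a deliberately naive minifier sketch in Python. It is a toy: real minifiers (Closure Compiler, UglifyJS, and the like) also rename identifiers and are safe around string and regex literals, which this one is not.

```python
import re

def naive_minify_js(source: str) -> str:
    """Toy minifier: strip comments and collapse whitespace.
    Not production-safe; it does not protect string or regex
    literals that happen to contain comment markers."""
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.DOTALL)  # block comments
    source = re.sub(r"//[^\n]*", "", source)                    # line comments
    source = re.sub(r"\s+", " ", source)                        # collapse whitespace
    return source.strip()

print(naive_minify_js("// demo\nvar x = 1;  /* note */  var y = 2;"))
# -> "var x = 1; var y = 2;"
```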

Eliminate render-blocking JavaScript and CSS in above-the-fold content

If your page has one or more blocking script or CSS resources, rendering is delayed: none of the above-the-fold content can be rendered until those resources load. Try to defer or asynchronously load blocking resources, remove render-blocking JavaScript, or inline the critical portions of those resources directly in the HTML. We had 7 blocking scripts and 11 blocking CSS resources.

Leverage browser caching

Setting an expiry date or a maximum age in the HTTP headers for static resources instructs the browser to load previously downloaded resources from local disk rather than over the network. We had several files that could be cached for 10 minutes and a couple for 30 and 60 minutes.
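Our site is a COTS Confluence instance, so these headers belong in its web server config; but as a sketch of the idea, here is how a hypothetical Python/Flask app would mark a resource cacheable for 10 minutes:

```python
from flask import Flask, send_file

app = Flask(__name__)

# Hypothetical endpoint and filename, for illustration only.
@app.route("/assets/logo.png")
def logo():
    response = send_file("assets/logo.png")
    # Tell browsers to reuse their local copy for 10 minutes
    # before re-requesting it over the network.
    response.headers["Cache-Control"] = "public, max-age=600"
    return response
```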

Prioritize visible content

If your page requires additional network round trips to render the above-the-fold content, reduce the amount of HTML needed to render it and prioritize the visible content. 21 KB of our responses were required to render the above-the-fold content.

Minify CSS

Compacting CSS code can save many bytes of data and speed up download and parse times. Minifying our CSS resources reduced their size; we found about a 4% reduction.

Minify HTML

Compacting HTML code, including any inline JavaScript and CSS it contains, can save many bytes of data and speed up download and parse times. Minifying our HTML resources reduced their size; we found about a 17% reduction.

Optimize images

Properly formatting and compressing images can save many bytes of data.  We found a 33% reduction opportunity, where repeating images in menus could be reduced 50%, but in all honesty that saved less than 2 KB overall.
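For what it's worth, the re-compression itself is mechanical; a sketch using the Pillow imaging library (with hypothetical filenames) looks like this:

```python
from PIL import Image  # the Pillow package

# Hypothetical filenames for illustration.
img = Image.open("menu_background.png")
# optimize=True asks the encoder to spend extra effort finding a
# smaller encoding; for JPEGs you can also trade quality for size,
# e.g. img.save("out.jpg", optimize=True, quality=85).
img.save("menu_background.opt.png", optimize=True)
```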

Avoid landing page redirects

Each landing-page redirect adds an extra HTTP round trip before anything renders. Happily, our page had no redirects.

Reduce server response time

Many factors can slow down your server response time; the key is to monitor and measure where your server is spending the most time. This is where we found 90% of our issues: the server was going to sleep overnight Mountain Time and not waking up for other time zones, which was where our primary usage was coming from.
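Measuring is the first step. A minimal sketch, assuming the third-party requests package and a placeholder URL, that separates server time from total fetch time:

```python
import time
import requests  # third-party: pip install requests

url = "https://example.com/"  # placeholder; use the page being measured
start = time.perf_counter()
response = requests.get(url)
total = time.perf_counter() - start

# response.elapsed is the time from sending the request until the
# response headers arrived, i.e. a rough server-response-time signal.
print(f"status={response.status_code} "
      f"server={response.elapsed.total_seconds():.2f}s total={total:.2f}s")
```

Running something like this on a schedule, from each major user time zone, would expose a wake-up pattern like ours.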

Yes, the tweaks above will definitely help the mobile user on 3G on an older smartphone. But Occam's Razor suggested focusing on the simple, obvious part first. The client-side items will take a few days or more, since many of the recommendations rely on a COTS package and we'd need a dialog with the vendor to find the right solution. The server-side fix, it turns out, just required us to tickle the server periodically and apply one server-side recommendation from above (caching). Reason: we noticed generally great performance, but sometimes, when the service was waking up, the page times skewed the overall page performance.
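The "tickle" was essentially a scheduled keep-alive request. A minimal sketch of the idea (the URL and interval are illustrative, not our production values):

```python
import time
import requests

# Keep-alive loop: hit a lightweight page often enough that the app
# server never idles into its slow "waking up" state. In practice this
# would be a cron job rather than a long-running loop.
while True:
    try:
        requests.get("https://example.com/ping", timeout=10)
    except requests.RequestException as exc:
        print(f"keep-alive failed: {exc}")
    time.sleep(300)  # every 5 minutes
```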

We’ll still tweak some images and cache some files, but we’ll likely more reach out to Atlassian to see where they are at with minifying and addressing above-the-fold content processing.


Piling On HealthCare.gov

Blog post edited by Matt Tricomi

Washington just can't catch a break, eh? Debt ceiling. Sequester. Shutdown. And now an epic fail of the single portal that provides the primary access to the new healthcare access law.
Love it or hate the law, the model for the electronic access provision is definitely an epic fail of a rollout.
What else is an architect to do but pile on the fiasco and offer an opinion, while hoping to catch someone's attention or get some sense that someone is thinking about it from the design point of view, since the media, hearings, and threads are not hitting on where we hoped the discussion would land? It's a bad design. I'm not commenting on the policy part – the pundits do enough of that. I'm commenting on the architecture itself, and it appears to be a poor design.
The following captures our internet sleuthing, colleague discussions, and our current deductions.
This is an evolving blog, as it's more of a case study than a daily diary. It's a bit in draft form, but a way to begin to pin the story together. Apologies for the typos and pre-publish state, but this is so fast-moving and important that I wanted to add to the dialog, not simply leech afterward with 20/20 hindsight.

UX and Web Design is fine – It's not the front end

Now, much fanfare has gone to the “web site” glitches. It was written about back in June by Alex Howard in The Atlantic. I had the pleasure of connecting with Alex during data.gov work back in 2010, and off and on we correspond on social media. I have respect for his writing and what he follows, and have generally found he is spot on in bringing collaboration across traditional boundaries into the world of information and technology. That said, I may be a bit biased.

Here are just a handful of articles done on the healthcare.gov performance:

http://www.websiteoptimization.com/speed/tweak/healthcare-gov-analysis/

http://apmblog.compuware.com/2013/11/04/diagnosing-obamacare-website-heathcare-gov-still-lacks-basic-optimizations-before-it-can-mature/

http://www.mardesco.com/blog/website-optimization-and-healthcare-gov/

http://www.conversionmax.com/healthcaregov-what-went-wrong/

It's the same list as above – minify, compress, order your scripts/CSS, cache. Yep, the same things as back in 1999, just with MUCH more powerful scripting and style processing.

Point being, this client-side tweaking is all REALLY good practice, especially for large sites with large traffic. Every little bit of debris cleanup helps. But whatever fix is done, there needs to be a balance against the true timeline issues, and it appears the problem is primarily a server-side architecture problem. There were, and are, definitely several issues with front-end performance, as the above articles suggest. It's easy for web technophiles to analyze, since most of the web processing is on the client side, in your browser, and the tools (e.g., hit F12 in Chrome, or use the YSlow plugin for Firefox) are easy to use, while services like Google Analytics can analyze performance, content safety, usage statistics, and much more. Point being: are you going to put more resources toward fixing the leaky faucet, or the gushing, gaping hole in the water main?

Now, Alex only reported on the developments of the front end and UX component. There is deserved high praise for Prose.io, Jekyll, open source concepts, garage organizations breaking beltway development stereotypes, and persona development as the way the navigation was developed for this brand-new pattern.

But the point is, the UX and web design part is fine and dandy. Alex was right in June and still is. It's the same architecture I did for united.com back in '99, just different tech: have a CMS, cache it, distribute over four servers west/east. When 9/11 hit, my site was the only airline site (check the Internet Wayback Machine) and call center that stayed up. So this model for hc.gov is fine. The UX caching is fine.

He didn't report on the back-end part. Mind you, this part is under-reported, and the complexity of the iceberg under the water has been misunderstood. A lot of folks have joined in the usual internet snarkiness, bashed Alex's journalism, and attacked his integrity as if he were a bandit of sorts. In other articles, I did see some purist nerd talk about poorly grouped or heavy JavaScript, some bad code, some extra callbacks, and pages that weren't as static as they should have been, but it was quickly determined that was minor.

I felt pretty bad for him over the article, and generally, as a citizen, embarrassed. So, like many of us architect weenies, I dug into it, as many other colleagues have. Hey, regardless of politics, we all want a working country.

Why the logic for real-time data aggregation architecture?

The Tacoma Narrows Bridge didn't fall because the construction contractor failed; the architecture forgot one piece of logic: the wind in that river gorge was very strong. There was nothing to dampen the flow, and when the wind force blew, it pushed the bridge past and near its natural frequency (think rubbing your finger on a wine glass), and those vibrations shattered the bridge. It was built to specification by the contractors, but the design was horrible because it had flawed logic.
OK, where is the parallel? If healthcare.gov has a back-end problem, where is it? Given my background and recent work on “hero” architecture, I was hoping it was a minor performance issue that could be fixed by some horizontal scaling of servers. They did that; no major fix. Maybe it could be some technical server or software tuning; no luck. That only leaves bad logic.

Now, it appears the contractor possibly did have faulty construction, in that there wasn't enough foresight to do more parallel testing, load testing, integration testing, and the “7 steps of doneness”. The architects came out and said so today. Though that sounds a little passive-aggressive now; as the architect, would you say that after the bridge collapsed? It sounds like buck-passing, a gag order, or droopy dog, “no one is listening to me”.

But even that would have been reported out by now. So, aside from the obvious lack of engineering discipline, the civil-engineering equivalent of the Tacoma Narrows failure, what logic am I talking about?

I believe it comes down to the architecture logic in this case, likely in multiple areas. They treated the architecture like a controlled real-time MIS distributed system, running on data that has not been standardized or proven easy to integrate through the test of time.

Bad Logic: It appears they are responding to business rules that ask for real-time queries


Why are they doing this? Is Healthcare.gov following the KAYAK.com or Orbitz.com data aggregation model? In airlines, rates can change any minute, the airlines have APIs proven by a decade of time-tested improvements, and the smaller airlines get screen-scraped, with KAYAK spending dollars keeping those scrapers up to date – just like when mainframes used to be scraped for client-server integration. The airline industry is huge, it put its paper-ticket-to-e-ticket transition on the line, and it took a decade to get to this model. Then again, USA Today put out an article noting that Healthcare.gov is not alone in high-tech blunders:

United, Continental merge computer systems

March and August 2012

United Airlines had problems with its reservations system in early March after it switched to Continental’s computer system as the two airlines merged operations. Passengers complained as United struggled for several days to fix problems. In late August, the airline’s computer system and website went down causing problems with reservations, ticketing and check-ins.

This of course made me smile a little, since they took down the architecture I did for united.com when they decided to move to Continental's toolsets, mostly because United needed to get off the Apollo mainframe (so says word on the street). So when they threw out [my] proverbial baby, then 12 years old and still well respected and award-winning, it did make me smile. But, point being, even big moves in private industry will happen in this type of architecture.


While I appreciate the importance of up-to-date comparisons, can't this stuff be pre-loaded? Why are we bucket-brigading this information? Did they do this because they got caught up in the fanfare of the neat front end? Again, it is slick, but it's a small part of the project. It's the cover of the book that gets people in, comfortable, and having a good interior-design user experience. But if you can't get a cup of coffee in the Starbucks, does that matter?
I wonder if it needed to be this way. Real-time queries crossing state lines mean every quality validation requires a real-time quality handshake with each source, and each provider, state, or other governing data source requires a different rule base to validate. Each state, provider, etc. operates differently, and expecting that in real time is quite an aggressive and bold feat.
At the same time, is that required? Imagine a bucket brigade passing one bucket through 10 points to put out a fire: some water will be spilled, but if you simply put more people and power in between, the spillage doesn't matter. That is why the normal web service model works, the model that has made Twitter, Facebook, and many other services used in other apps work so well. It's a simple, single source of data.
Now, pretend there are 50 different types of Twitter imitators, all with different approaches to tweeting, different data about the user or the tweet (aka metadata), different ways of setting up the service call, and under different state laws for sharing that information. This is healthcare.gov.
My idea here isn't overly novel. Check out gov.uk and MongoDB's article – Reinventing Data Management for Government Websites – which discusses just this theory:
HealthCare.gov faces challenges like aggregating volumes of data and building an efficient system to meet citizens’ needs. An agile database like MongoDB could have helped HealthCare.gov to scale, remove redundancy, and potentially reduce the cost (estimated to be at $292 million so far) (WaPo article) of both creating the site and dealing with the fallout of its failure.
I haven't reached a point of validation yet, but it sure seems that separately developed service handshakes across these disparate sources, integrating data created, formed, provided, and published by such different bucket-brigade handlers, would be like having one bucket brigade drawing from a lake and another from the ocean and expecting only fresh water, with the extra salt water magically going away.

Why not pre-stage the queries?

I guess I'm still asking: why do they need to aggregate in real time? Why can't they pre-stage the calls? Are the sources really updated every minute?


You make pre-staged products where, between each organization, you validate the quality ahead of time, at design time, so the run-time call hits a validated source that can be updated every week, day, or 15 minutes, differently for each feed. Then, when new feeds come in, the separately pre-built maps or indices are auto-updated to optimize the comparison experience, and even set up the possibility of helping inform what the comparisons mean, or simply making matter-of-fact statements about the differences (not advice, just making the differences obvious). Google has proven this model; it's just reading content, and with the knowledge snippets now on the right, you can ask Google basic factual questions. Everything hits a local, validated, appropriately up-to-date index that can scale elastically on the cloud and is proven fast.
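A minimal sketch of that pre-staging pattern, with hypothetical feed URLs and a deliberately tiny schema (this illustrates the idea, not healthcare.gov's actual interfaces):

```python
import json
import sqlite3
import urllib.request

# Hypothetical source feeds; each state/provider publishes differently.
FEEDS = {
    "state_a": "https://example-a.gov/plans.json",
    "state_b": "https://example-b.gov/plans.json",
}

def refresh(db):
    """Design-time integration: pull, validate, and stage each feed on
    a schedule, so run-time queries never touch the sources directly."""
    db.execute("CREATE TABLE IF NOT EXISTS plans (source TEXT, name TEXT, premium REAL)")
    for source, url in FEEDS.items():
        plans = json.load(urllib.request.urlopen(url))
        # Validation happens here, per feed, ahead of time.
        rows = [(source, p["name"], float(p["premium"]))
                for p in plans if "name" in p and "premium" in p]
        db.execute("DELETE FROM plans WHERE source = ?", (source,))
        db.executemany("INSERT INTO plans VALUES (?, ?, ?)", rows)
    db.commit()

# The run-time comparison query hits only the local, pre-validated store:
#   db.execute("SELECT source, name FROM plans WHERE premium < ?", (300,))
```

Each feed can refresh on its own cadence (weekly, daily, every 15 minutes), and a bad feed fails at staging time instead of in front of the user.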

We recently did this same thing for a major government agency that must remain anonymous. Their search calls hit the traditional RDBMS, which is better at finding a specific record and returning simple, low-computation queries. Instead, we asked: can you move 90% of the search into a NoSQL solution? It can load hundreds of millions of records in minutes and do all the pre-calculations that make for a smarter search – like how Google knows what you are typing before you do – and it can handle typos, many facets, drilldowns, etc. I believe KAYAK has moved in this direction as well, to optimize its search experience, but I can't validate that.

This is why the Twitter, Facebook, and other popular, highly used service APIs work. They designed their service integration model (SiM), and now there is a massive ecosystem of sub-applications, sub-markets, and aftermarkets. HC.gov did not do that; it took a YAGP (yet another government portal) architecture approach and parted out the scope as if for a battleship. There wasn't enough consideration of patterns like design-time integration vs. run-time integration (something I recently prototyped for NARA: adding a NoSQL “index”, if you will, in front, so the query part was fast, while the transaction part passed to the traditional RDBMS, which, when updated, did a millisecond sync back to the NoSQL side).
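A toy model of that index-in-front pattern, using an in-memory dict as a stand-in for the NoSQL index and SQLite as the RDBMS of record (all names are illustrative, not the actual NARA design):

```python
import sqlite3

# System of record: the traditional RDBMS handles transactions.
rdbms = sqlite3.connect(":memory:")
rdbms.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, title TEXT)")

# Query side: a fast index kept in sync with the RDBMS.
index = {}  # record_id -> title

def write(record_id, title):
    """Writes go to the RDBMS first, then sync to the index."""
    rdbms.execute("INSERT OR REPLACE INTO records VALUES (?, ?)", (record_id, title))
    rdbms.commit()
    index[record_id] = title  # the near-instant sync back

def search(term):
    """Queries never touch the RDBMS; they only hit the fast index."""
    return [rid for rid, title in index.items() if term.lower() in title.lower()]

write(1, "1950 Census Schedules")  # illustrative record
print(search("census"))  # -> [1]
```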

Now for the Blame Game: In contracting, investing in architecture is still not a requirement, so it's a liability to bid it that way.

The work was divvied out to over 50 contractors, and outside of a PMO, no enterprise service integration concepts appear to have been part of the leadership team. I saw a PMO, but a PMO usually manages the production, not the architecture of what needs to be produced. We can throw CGI Federal under the bus all we want, and whoever ran the PMO, but it sure seems the requirements and the team did not include a solution or architecture integrator role: someone overseeing the Service Integration Model – call it an Enterprise Service Architect, Sr. Solution Architect, or Architecture Review Board/Governance – to advise on design risks, maintain risk weights, and let the PMO know when risk hits escalation points.

I do know that in contracting, if you believe you could win by shaving 3% off the bid, the first thing to go is usually the higher-end rates. Those rates are usually quality-focused: architecture, design, strategy. Government contracts typically do not write those roles in, or if they do, it is in a merely compliant way. Contracting Officers are not in a position to judge whether a subjective concept such as one proposed architecture is better than another, and contracting review boards for IT have not adopted concepts like the architecture concept review boards of civil engineering. Given that, a higher-quality architecture component is not seen as a differentiator, since it doesn't check a box; integrators will typically drop that high-rate position, and voilà, you have just undercut the competition by 2-4% on the bid.

By the way, don't get hard on the contractors only. The same goes for the writing of the contracts. Contracting Officers have no way of knowing whether the technical requirements they are asking for are sound or the best, and they have said on occasion that they like to leave it open so the contractor can come back and tell them “how”. While it is fair to let private industry offer the best solutions, there should be architecture principles in the contract that guide how vendors can answer, so they can assure a robust architecture less subjectively, without just saying “it will be a robust architecture”.


But as we saw, when you don't buy back the risk up front and the design changes, the cost balloons, which it did. Today the reports say contractors are blaming the government, after a few weeks of the government throwing “greedy” contractors under the bus. I say it's not greed but a broken procurement process. We have blogged for years, and built our practice, around this principle: we can approach architecture for other implementers.

A large majority of consulting companies both design and implement ALL their projects.  Though profitable for many firms, the best design can end up biased toward the agenda of the implementer, which may be to sell more components or get more bodies. Now, we have the capability to implement architecture, but our end goal is not to design an architecture for us to implement; it is to design an architecture that is implementable.

Many times the client knows the implementer will design with a bias, so the client chooses to, or must, design blind, without considering the maturity of what an implementer can provide. In those cases, we can come in, architect, and act as a third party to help develop the concept, the design, and the basis for the requirements and performance work statement.

This approach and these services buy back risk to your implementation and increase the likelihood of achieving your metrics and goals.


11/1 Note: A colleague forwarded me two WSJ articles, Fixing Procurement Process Is Key to Preventing Blunders Like Healthcare.gov and Procurement Process is Government's IT Albatross, which only add fuel to the fire. These blogs use Clay Johnson's Fix Procurement Manifesto as their guide. I agree with many points in Mr. Johnson's manifesto, but I don't agree with the author's understanding. The author boiled it down to making procurement easier so agencies can get new tech. That sounds well and good, but as a colleague formerly in OMB said a few weeks back, if you add any tech to a broken process, it will only make it worse. And it did. So I think it will take time not only to fix agency leadership, as Johnson notes (the author cites former OMB CIO Kundra as a key), but also the general public's understanding of how technology and business transformation work. The author and agency leadership compare healthcare.gov, a solution involving data exchange, policy oversight, and millions of transactions, to an operating system flaw in an iPad, which requires fixing once and deploying many times. These are apples-and-oranges conversations: one is commodity technology, the other is automating multiple decades of policies on top of policies. Until we can help educate on the differences in complexity, we won't achieve the aspects of Johnson's manifesto, nor help people understand the right architecture to invest in PRIOR to letting a contract.

-mt