Our Change Services Concepts

Blog post added by Wiki Admin

Our concepts and approach start with the executive sponsor.

We want to connect the line of sight from drivers to goals through products and services, process, roles, and systems of information and technology, down through the bottom line. We can start with a short, rapid implementation-planning workshop to validate, discover, level-set, educate, and start your transformation effort on the right foot. Or we can use an ITIL continuous improvement approach as part of supporting your operations.

Our methodology and training focus on transformation leadership that improves customer effectiveness and efficiency. We do this by proactively managing risk and delivering results through strong, facilitated execution, and by increasing the relevance and economics of existing or new product lines and services. In working with customers, Xentity provides integrated oversight of the parts that need to be connected, understood, and communicated prior to significant investment. This approach enables Xentity to understand value opportunities and risk, determine mitigation strategies, and support customer awareness of how to realize recommendations. Once decisions and investments have been made, Xentity uses strong communication and project management skills to facilitate change. The methodology we developed has been recognized and adopted as a Federal Government best practice by the U.S. OMB.

This approach helps design, execute, and showcase major change through proven methods. We approach information and technology change from within the mission out to the enterprise. This is different. The concept and method have been adopted by the Federal Government and have become popular in the commercial space. Take on the right amount of change, one focus area at a time. This is what we know. It is what we do and have been doing since 2001.

Whether as a team creating deliverables, as embedded consultants, or as staff augmentation, our transformation designers, architects, analysts, managers, management consultants, and creatives specialize in change.

Our transformation approach and experts will help you buy back the risk.

  • Upfront, we can help you design and architect your transformation concept of operations and develop the full architecture and requirements before you go to the street, using our collaborative business transformation approach.
  • We can tactically engage to support and manage your project – getting an existing project and team back on track, or starting a new project and team on the right foot.
  • We can then design and execute your outreach efforts and even produce short movies to help you brag about your change.
  • And for program support and continuous improvement, we can provide your high-tech, geospatial, and science operations, analysis, and management with true subject-matter familiarity and staffing solutions.

Read more about our Services on how we can design change with you or augment your current architecture, management, and communication staffing needs.

Address gaps early on. Buy back the risk.  

  • Get the right definition and design for embracing the right innovation and disruption concepts.
  • Coordinate and integrate your change to anticipate and mitigate risks and challenges.
  • Recover from current project design and management issues.
  • Showcase and engage your community with your new or changed solution the way it deserves.
  • Bring on someone who can help you with this transformation lifecycle.

Our Services:

  • Buy back the risk of your transformation failing
  • Improve transformation requirements and concepts
  • Set the path for the most successful project implementation
  • Focus on information lifecycle challenges
  • Address solutions for disruptions in technology, business, and cultural shifts
  • Increase the likelihood of achieving your metrics and goals
  • Help accelerate time-to-market
  • Increase the quality and relevance of your change effort
  • Provide training and workshops to transition approaches and keep the focus on continuous improvement
  • Tell the world your transformation story!

If the story below sounds like you, these are the services you need.

Information technology used to be hidden in your organization – likely for financials and other enterprise resource management or MIS, run as a cost center under the CFO or COO. Now it's core to your business. Workforce costs are being replaced on the delivery and customer service ends by internet-provided capabilities: sales force automation, marketing campaigns, devices, storefronts, support desks, mission-critical services – you name it.

What's more, data is now an asset, and your business is done online or through B2B information exchange. It's in the boardroom, on the factory floor, and at the customer interface. You need to manage the information supply chain and use it for management decisions and analytics, and in many cases your business is completely reliant on information and technology as your service.

So you invest capital funds or deferred operating funds in projects, development, infrastructure, contracts, and so on, in hopes of gaining that competitive edge or cost savings. This introduces new ways of doing business. And change. Which, of course, no one likes except the change visionary.

The canary-in-the-coal-mine signals start to come in. The project costs keep creeping up. The requirements weren't there. Traditional cost-center procurement and development models were used to bid or build. You created a business case, but most of the time it's ignored. A plan is used as the law instead of the guide, and it was wrong because business agility and technology offerings changed before you even started. And an architecture or operating concept is either non-existent, incomplete, or built on old patterns.

As a result, the outcomes are not there, and delays, overruns, and redesigns bring forward nightmare scenarios. It gets risky. Before you know it, project costs have ballooned, a new technology is out, outcomes are missed, and you, your stakeholders, and your stockholders are very skeptical. Worse yet, public relations and internal chatter are creating a culture of lost confidence, which may leak publicly and force a premature launch.

Something has to give.

You are at risk of joining the statistic that only 25% of IT projects succeed, 25% fail, and the rest are partial wins/losses (source: KPMG).

This is where we come in – either before the nightmare occurs, or in the midst of it. We can operate as a project team, embedded consultants, or a SWAT team.

Flipping the Educational Value Chain

Blog post edited by Wiki Admin

Businesses, governments, and even many non-profits have benefited from the windfall of a flattening world – less war, a trend toward better resource distribution, new business models, digital economy proliferation, a sharing workforce. Education has not.

At Xentity, exploring NextGen transformation using architecture, analysis, and design is not about IT. IT is a core component, but we are looking at how the next generation will progress and transform. And with generation lines becoming more of a blur, this isn't a 30-year delay, or even a 15-year delay; in some cases we are talking 5 to 10 years for the transformation of a generation. Given that, when we examine workforce capital, we are truly interested in the changing models – not just the employee, which, by the way, is a relic of the industrial age, but also how those employed in your organization (employee, contractor, consultant, vendor, service provider) are changing themselves.

One way of examining this is to look at the actual next generation. The kids. This is very important. For instance, the current incoming generation, aside from now being larger than the Baby Boomer generation, has benefited from the previous 30 years of relative stability, and Millennials NATURALLY engage in collaborative environments as a result of growing up in a connected world.

They weren't taught this, though. What they were taught came, for the most part – with some Montessori, STEM academy, and other cloud-school exceptions – through a school model intended to send children into an industrial-era workforce that had bells to change shifts, required the discipline of a "robot" in the factory for efficiency and safety, and required still minds to take orders and execute.

When examining your organization, you may find unwritten rules or codes that have been passed down out of habit, institutionalization, or simply "what we know." Those unspoken rules of engagement definitely help manage the chaos and keep focus on the mission, but the questions that at times need to be asked are "Is this the right mission? If not, are these the right rules?" – and thereafter, of course, whether you or your organization have the political and actual capital to make the transformation.

In the following two-part series, Jim Barrett examines this phenomenon.

Mr. Barrett is not only Xentity's architecture lead, but has actively served on, and presently engages with, multiple early childhood education development advisory and exploratory boards.

 

 

Xentity awarded IT IDIQ from State of Colorado

Blog post edited by Wiki Admin

The State of Colorado's Governor's Office of Information Technology (OIT) has awarded Xentity an IDIQ Master Agreement for business services.

This multiple-award task order contract (MATOC) is the result of an award under RFP-001-JG-14 for Computer Programming, Consulting Services, and Business Services involving Cloud Solutions.

In the fall of 2013, the State of Colorado's Governor's Office of Information Technology (OIT) sought proposals to identify implementation services ("Implementers") for business services involving cloud solutions by Salesforce.com, Google, and Perceptive Software (Perceptive), and other emerging technologies.

  • The award is an Enterprise Agreement, structured as a multi-contract award IDIQ
  • A base period of 5 years with 5 consecutive 1-year renewal options
  • An initial $10 million maximum contract amount/ceiling
  • Task orders can be issued by multiple sponsoring state agencies

Xentity has previously won and supported contracts for the State of Colorado with the Department of State and has worked closely with the Office of Information Technology.

Xentity's services can be ordered by any Colorado agency via this contract.

Scope includes:

  • Task Order Technical Management
  • Agile Project Management
  • Solution Architecture
  • Architecture & Governance Support
  • Cloud Solution Development / Database Support
  • Portal & Development/Database Support
  • Application Development Support
  • Quality Assurance / Customer Support
  • Transition Support
  • Disaster Recovery/COOP Participation
  • Best Practice Group Support/Participation
  • Outreach Strategy and Support

Positions include: Project Managers, Technical Consultants, Architects, Architecture Analysts, Management Analysts, Solution Architects, Enterprise Architects, and Communications Specialists for branding, communications, design, and strategy.

More to come on how to access Xentity services off this contract.

 

Piling On HealthCare.gov

Blog post edited by Matt Tricomi

Washington just can't catch a break, eh? Debt ceiling. Sequester. Shutdown. And now an epic fail of the single portal solution that provides the primary access to the new healthcare access law.
Love the law or hate it, the model for the electronic access provision is definitely an epic fail of a rollout.
What else is an architect to do but pile on the fiasco and offer an opinion, while at the same time hoping to catch someone's attention or get some sense that someone is thinking about it from the design point of view? The media, hearings, and comment threads are not hitting on where we were hoping the discussion would land. It's a bad design. I'm not commenting on the policy part – the pundits do enough of that. I'm commenting on the architecture itself, and it appears to be a poor design.
The following captures our internet sleuthing, colleague discussions, and our current deductions.
This is an evolving blog, as it's more of a case study than a daily diary. It's a bit in draft form, but it is a way to begin to pin the story together. Apologies for the typos and pre-publish state, but this is so fast-moving and important that I wanted to add to the dialog rather than simply leech afterward with 20/20 hindsight.

UX and Web Design is fine – It's not the front end

Now, much fanfare has gone to the "web site" glitches. It was written about back in June by Alex Howard in The Atlantic. I had the pleasure of connecting with Alex during data.gov work back in 2010, and we correspond off and on via social media. I respect his writing and what he follows, and have generally found that he is spot on in bringing collaboration across traditional boundaries into the world of information and technology. That said, I may be a bit biased.

Here are just a handful of articles done on the healthcare.gov performance:

http://www.websiteoptimization.com/speed/tweak/healthcare-gov-analysis/

http://apmblog.compuware.com/2013/11/04/diagnosing-obamacare-website-heathcare-gov-still-lacks-basic-optimizations-before-it-can-mature/

http://www.mardesco.com/blog/website-optimization-and-healthcare-gov/

http://www.conversionmax.com/healthcaregov-what-went-wrong/

It's the same thing as above – minify, compress, order your scripts/CSS, cache. Yep, the same things as back in 1999, just with MUCH more powerful scripting and style processing capability.
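
For what it's worth, that checklist is cheap to apply. Here is a minimal, illustrative sketch – assuming a Python/Flask front end, which is my stand-in and not healthcare.gov's actual stack – of the kind of delivery hygiene those articles call out: far-future cache headers on static assets and gzip on sizable text responses.

```python
# Illustrative only (not healthcare.gov's actual code): a tiny Flask app
# showing the 1999-vintage checklist - long-lived caching for static assets
# and gzip for sizable text responses. Paths and thresholds are assumptions.
import gzip
from flask import Flask, request

app = Flask(__name__, static_folder="static")

@app.after_request
def tune_delivery(response):
    # Far-future caching for fingerprinted static bundles (CSS/JS/images).
    if request.path.startswith("/static/"):
        response.headers["Cache-Control"] = "public, max-age=31536000"

    # Gzip larger text payloads when the client advertises support.
    accepts_gzip = "gzip" in request.headers.get("Accept-Encoding", "")
    compressible = response.mimetype in (
        "text/html", "text/css", "application/javascript", "application/json")
    if accepts_gzip and compressible and not response.direct_passthrough:
        body = response.get_data()
        if len(body) > 1400:  # skip tiny payloads; gzip overhead isn't worth it
            response.set_data(gzip.compress(body))  # Content-Length is reset by set_data
            response.headers["Content-Encoding"] = "gzip"
    return response

@app.route("/")
def home():
    # Scripts/CSS would be minified and ordered at build time, not here.
    return "<html><body>plan comparison shell</body></html>"
```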

Point being, this client-side tweaking is all REALLY good practice, especially for large sites with heavy traffic. Every little bit of debris cleanup helps. But whatever fix is done, there needs to be a balance against the true timeline issues, and it appears that the problem is primarily a server-side architecture problem. There were, and are, definitely several issues with front-end performance, as the above articles suggest. This analysis is easy for web technophiles to do, since most of the web processing happens on the client side – in your browser – and the tools (e.g., hit F12 in Chrome, or use the YSlow plugin for Firefox) are easy to use, while services like Google Analytics can analyze performance, content safety, usage statistics, and much more. Point being, are you going to put more resources toward fixing the leaky faucet, or the gushing, gaping hole in the water main first?

Now, Alex only reported on the development of the front end and UX component. There is deserved high praise for Prose.io, Jekyll, open-source concepts, garage organizations breaking beltway development stereotypes, and persona development as a way to design the navigation for this brand-new pattern.

But the point is, the UX and web design part is fine and dandy. Alex was right in June and still is. It's the same architecture I did for united.com back in '99, just with different tech: have a CMS, cache it, distribute it over four servers west/east. When 9/11 hit, my site was the only airline site (check the Internet Wayback Machine) and call center that stayed up. So this model for hc.gov is fine. The UX caching is fine.

He didn't report on the back-end part. Mind you, this part is under-reported, and the complexity of the iceberg under the water has been misunderstood. A lot of folks have joined in the internet-bashing snarkiness, bashed Alex's journalism, and attacked his integrity as a bandit of sorts. In other articles, I did see some purist nerd talk about poorly grouped or heavy JavaScript, some bad code, extra callbacks, and pages that weren't as static as they should have been, but it was quick to determine that all of that was minor.

I felt pretty bad for him over the article and, as a citizen, generally embarrassed, so like many of us architect weenies I dug into it, as many other colleagues have. Hey, regardless of politics, we all want a working country.

Why the logic for real-time data aggregation architecture?

The Tacoma Narrows Bridge didn't fall because the construction contractor failed; the architecture forgot one piece of logic – the wind through the narrows was very strong. There was nothing to dampen the flow, and when the wind blew, it pushed the bridge at and near its natural frequency (think of rubbing your finger on a wine glass), and those vibrations shattered the bridge. It was built to specification by the contractors, but the design was horrible because it had flawed logic.
OK, where is the parallel? If it's a back-end problem for healthcare.gov, where is it? Given my background and recent work on "hero" architecture, I was hoping it was a minor performance issue that could be fixed by some horizontal scaling of servers. They did that – no major fix. Maybe it could be some technical server or software tuning – no luck. That only leaves bad logic.

Now, it appears the contractor possibly did have faulty construction, in that there wasn't enough foresight to do more parallel testing, load testing, integration testing, and the "7 steps of doneness." The architects came out and said it today. That sounds a little passive-aggressive now, though – as the architect of a building, would you say that after the bridge collapsed? It sounds like either buck-passing, a gag order, or droopy-dog "no one is listening to me."

But even that could have been reported out by now. So, aside from the obvious lack of engineering discipline – the civil engineering equivalent of the Tacoma Narrows Bridge – what logic am I talking about?

I believe it comes down to the architecture logic in this case, likely in multiple areas. They treated the architecture like a controlled, real-time, distributed MIS system built on data that has not been standardized or proven, through the test of time, to be easily integrated.
 

Bad Logic: It appears they are responding to business rules that ask for real-time queries.


Why are they doing this? Is Healthcare.gov following the KAYAK.com or Orbitz.com data aggregation model? In airlines, rates can change any minute; airlines have APIs proven over a decade of time-tested improvements, and the smaller airlines get screen-scraped, with KAYAK spending dollars keeping those scrapers up to date – just like when mainframes used to be scraped for client-server integration. Airlines are a huge industry; they put their paper-ticket-to-e-ticket move on the line, and it took a decade to get to this model. Then again, USA Today put out an article noting that Healthcare.gov is not alone in high-tech blunders:

United, Continental merge computer systems

March and August 2012

United Airlines had problems with its reservations system in early March after it switched to Continental’s computer system as the two airlines merged operations. Passengers complained as United struggled for several days to fix problems. In late August, the airline’s computer system and website went down causing problems with reservations, ticketing and check-ins.

This, of course, made me smile a little bit, since they took down the architecture I did for united.com when they decided to move to Continental's toolsets, mostly because United needed to get off the Apollo mainframe (so says word on the street). So when they threw out [my] proverbial baby – then 12 years old and still well respected and award-winning – it did make me smile. But the point being, even big moves in private industry will hit problems with this type of architecture.


While I appreciate the importance of up-to-date comparisons, can't this stuff be pre-loaded? Why are we bucket-brigading this information? Did they do this because they got caught up in the fanfare of the neat front end – which, again, is slick, but is a small part of the project? It's the cover of the book that gets people there and comfortable, the good interior-design user experience. But if you can't get a cup of coffee in the Starbucks, does that matter?
I wonder if it needed to be this way. The queries cross state lines in real time, which means all the quality validation requires a quality handshake, in real time, with each source – and each time, a different provider, state, or other governing data source requires a whole different rule base to validate. Each state, provider, etc. operates differently, and to expect all of that in real time is quite an aggressive and bold feat.
At the same time, is that required? Imagine a bucket brigade through 10 points to get one bucket to a fire: some water will be spilled, but if you simply put more people/power in between, the spillage doesn't matter. That is why the normal web service model – which has made Twitter, Facebook, and many other services used inside other apps work so well – works. It's a simple, single source of data.
Now, pretend there are 50 different Twitter imitators, all with different approaches to tweeting, different data about the user or the tweet (a.k.a. metadata), different ways they set up the service call, and different state laws governing how they share that information. This is healthcare.gov.
My idea here isn't overly novel. Check out gov.uk and MongoDB's article – Reinventing Data Management for Government Websites – which discusses just this theory:
HealthCare.gov faces challenges like aggregating volumes of data and building an efficient system to meet citizens’ needs. An agile database like MongoDB could have helped HealthCare.gov to scale, remove redundancy, and potentially reduce the cost (estimated to be at $292 million so far) (WaPo article) of both creating the site and dealing with the fallout of its failure.
I haven't reached a point of validation yet, but it sure seems like the separately developed service handshaking across these disparate sources means integrating data that was created, formed, provided, and published under such different bucket-brigade handlers that it would be like having one bucket brigade drawing from a lake and another from the ocean, and expecting only fresh water – as if the extra salt water magically goes away.

Why not pre-stage the queries?

I guess I'm still asking – why do they need to aggregate in real time? Why can't they pre-stage the calls? Are the sources really updated every minute?


You make pre-staged products where, in between each organization, you validate the quality and get the data ahead of time – at design time – so that your run-time call hits the validated source, which can be updated every week, day, 15 minutes, etc., differently for each feed. Then, when new feeds come in, the separate pre-built maps or indices are auto-updated to optimize the comparison experience, and even set up the possibility of helping explain what the comparisons mean, or simply making matter-of-fact statements about the comparisons (not advice, but making the differences obvious). Google has proven this model. It's just reading content. And with the knowledge snippets now on the right side of Google results, you can ask it basic factual questions. It can compare, and it's all hitting a local, validated, appropriately up-to-date index that can scale elastically on the cloud and is proven fast – Google has proven that.
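
To make that concrete, here is a minimal sketch of the pre-staged pattern. It is purely illustrative – the feed names, URLs, fields, and refresh cadence are invented, not the actual healthcare.gov sources – but it shows the split: each feed is pulled and validated at design time into one local index, and the run-time comparison only ever reads that local, already-validated copy.

```python
"""Illustrative sketch only: pre-stage heterogeneous plan feeds into a local,
validated index so run-time queries never fan out to the source systems.
The feed names, URLs, fields, and cadence below are hypothetical."""
import json
import sqlite3
import time
import urllib.request

# Hypothetical feeds: each state or issuer publishes plan data in its own shape.
FEEDS = {
    "state_a": "https://example.com/state-a/plans.json",
    "issuer_b": "https://example.com/issuer-b/plans.json",
}

def fetch(url):
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

def normalize(source, rows):
    """Map each source's fields onto one common record, and reject bad rows
    here, at design time, instead of during a citizen's live comparison."""
    for row in rows:
        plan = {
            "source": source,
            "plan_id": row.get("id") or row.get("plan_code"),
            "monthly_premium": float(row.get("premium", row.get("rate", 0)) or 0),
            "metal_tier": (row.get("tier") or "unknown").lower(),
        }
        if plan["plan_id"] and plan["monthly_premium"] > 0:
            yield plan

def refresh_index(db):
    """Design-time path: run per feed, on whatever schedule that feed warrants."""
    db.execute("""CREATE TABLE IF NOT EXISTS plans (
                    source TEXT, plan_id TEXT, monthly_premium REAL,
                    metal_tier TEXT, loaded_at REAL,
                    PRIMARY KEY (source, plan_id))""")
    for source, url in FEEDS.items():
        for p in normalize(source, fetch(url)):
            db.execute("INSERT OR REPLACE INTO plans VALUES (?, ?, ?, ?, ?)",
                       (p["source"], p["plan_id"], p["monthly_premium"],
                        p["metal_tier"], time.time()))
    db.commit()

def compare_plans(db, tier):
    """Run-time path: a cheap local read, no cross-source handshakes."""
    return db.execute("SELECT source, plan_id, monthly_premium FROM plans "
                      "WHERE metal_tier = ? ORDER BY monthly_premium",
                      (tier,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect("plan_index.db")
    refresh_index(conn)  # scheduled weekly, daily, or every 15 minutes per feed
    print(compare_plans(conn, "silver"))
```

The point of the split is that a bad or slow feed breaks a scheduled refresh you can retry quietly, not a citizen's live session.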

We recently did this same thing for a major government agency that must remain anonymous. Their search calls hit the traditional RDBMS, which is better at finding a specific record and returning simple, non-computationally-heavy queries. Instead, we asked: can you move 90% of the search into a NoSQL solution? It can load hundreds of millions of records in minutes and do all the pre-calculations that make for a smarter search – like how Google knows what you are typing before you do – and it can handle typos, many facets, drill-downs, etc. I believe KAYAK has moved in this direction as well to optimize its search experience, but I can't validate that.

This is why the Twitter, Facebook, and other popular, highly used service APIs work. They designed their Service Integration Model (SiM), and now there is a massive ecosystem of sub-applications, sub-markets, and aftermarkets. HC.gov did not do that; it took a YAGP (yet another government portal) architecture technique and parceled the scope out as if for a battleship. There wasn't enough consideration of patterns like design-time integration vs. run-time integration (something I recently prototyped as an architecture for NARA: adding a NoSQL "index," if you will, in front, so the query part was fast, while the transaction part got passed to the traditional RDBMS and, if updated, did a millisecond sync back to NoSQL).
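
Here is a rough sketch of that split, again with invented names and a toy in-memory index standing in for the NoSQL piece (this is not the NARA or agency code): reads are answered from a denormalized query index, writes commit to the system-of-record RDBMS first, and each committed write syncs straight back into the index.

```python
"""Sketch only: a search-optimized index in front of a transactional store.
A plain in-memory index stands in for the NoSQL/search piece, and SQLite
stands in for the system-of-record RDBMS. Names and schema are hypothetical."""
import sqlite3
from collections import defaultdict

class RecordService:
    def __init__(self):
        # System of record: owns the transactions.
        self.rdbms = sqlite3.connect(":memory:")
        self.rdbms.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, title TEXT)")
        # Query side: denormalized copies tuned for fast lookups and facets.
        self.docs = {}                 # record id -> display document
        self.index = defaultdict(set)  # search token -> set of record ids

    def write(self, rec_id, title):
        # Run-time write path: commit to the RDBMS first...
        self.rdbms.execute("INSERT OR REPLACE INTO records VALUES (?, ?)", (rec_id, title))
        self.rdbms.commit()
        # ...then sync the committed change back into the query index
        # (a fuller version would also clear stale tokens on update).
        self.docs[rec_id] = {"id": rec_id, "title": title}
        for token in title.lower().split():
            self.index[token].add(rec_id)

    def search(self, term):
        # Run-time read path: answered entirely from the index; the RDBMS
        # never sees the high-volume query traffic.
        return [self.docs[i] for i in sorted(self.index.get(term.lower(), set()))]

if __name__ == "__main__":
    svc = RecordService()
    svc.write(1, "Declaration of Independence")
    svc.write(2, "Independence Day proclamation")
    print(svc.search("independence"))  # both records, served from the index
```

Swap the toy index for Solr, Elasticsearch, or a document store and the shape stays the same: the expensive, high-volume query path never touches the transactional store.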

Now for the Blame Game: In contracting, investing in architecture is still not a requirement, so it's a liability to bid it that way.

The work was divvied up among over 50 contractors, and outside of a PMO, there appear to be no enterprise service integration concepts represented on the leadership team. I saw a PMO, but a PMO usually manages the production, not the architecture of what needs to be produced. We can throw CGI Federal under the bus all we want, and whoever ran the PMO, but it sure seems like the requirements and the team did not include a solution or architecture integrator as one of the roles: someone(s) overseeing the Service Integration Model – call it an Enterprise Service Architect, Sr. Solution Architect, or Architecture Review Board/Governance – to advise on design risks, maintain risk weights, and let the PMO know where the risk escalation points are.

I do know that in contracting, if you believe you could win by shaving 3% off the bid, the first thing to go is usually the higher-end rates. Those rates are usually quality-focused – typically architecture, design, and strategy. Government contracts typically do not write those roles in, or if they do, they are written in a merely compliant way. Contracting Officers are not in a position to review whether a subjective concept such as one proposed architecture is better than another, and contracting review boards for IT have not adopted concepts like the architecture concept review boards used in civil engineering. Given that, a higher-quality architecture component is not seen as a differentiator, since it doesn't check a box; integrators will typically drop that high-rate position and, voilà, you have just undercut the competition by 2-4% on the bid.

By the way, don't get hard on the contractors only. The same goes for the writing of the contracts. Contracting Officers do not have a way of knowing whether the technical requirements being asked for are sound or the best, and they have stated on occasion that they like to leave things open to allow the contractor to come back and tell them "how." While it is fair to let private industry offer the best solutions, there should be architecture principles put in the contract to guide how contractors can answer – and thus, less subjectively, how they can assure a robust architecture without just saying "it will be a robust architecture."


But as we saw, when you don't buy back the risk up front and the design changes, the cost balloons – which it did. Today, the reports say contractors are blaming the government after a few weeks of the government throwing "greedy" contractors under the bus. I say it's not greed, but a broken procurement process. We have blogged for years, and built our practice, around this principle: we can approach architecture for other implementers.

A large majority of consulting companies both design and implement for ALL projects. Though profitable for many firms, the best design can end up biased toward the agenda of the implementer, which may be to sell more components or get more bodies. Now, we have the capability to implement architecture, but our end goal is not to design an architecture that is for us to implement, but an architecture that is implementable.

Many times, the client knows that the implementer will design with a bias, so the client chooses to, or must, design blind without considering the maturity of what an implementer can provide. In those cases, we can come in as a third party to architect – to help do the concept and design, and to draft the basis for the requirements and performance work statement.

This approach and these services buy back risk to your implementation and increase the likelihood of achieving your metrics and goals.


11/1 Note: A colleague forwarded me two WSJ articles – Fixing Procurement Process Is Key to Preventing Blunders Like Healthcare.gov and Procurement Process is Government's IT Albatross – which only add fuel to the fire. These posts use Clay Johnson's Fix Procurement Manifesto as their guide. I agree with many points in Mr. Johnson's manifesto, but I don't agree with the author's understanding. The author boiled it down to making procurement easier in order to get new tech. That sounds well and good, but as a colleague formerly in OMB said a few weeks back, if you add any tech to a broken process, it will only make it worse. And it did. So I think it will take time not only to fix agency leadership, as Johnson notes (and the author cites former OMB CIO Kundra as a key), but also to fix the general public's understanding of how technology and business transformation work. The author and agency leadership compare healthcare.gov – a solution that involves data exchange and policy oversight and manages millions of transactions – to an operating system flaw in an iPad, which requires fixing once and deploying many times. These are apples-and-oranges conversations: one is commodity technology, and the other is automating multiple decades of policies on top of policies. Until we can help educate people on the differences in complexity, we won't be able to achieve the aspects of Johnson's manifesto, nor help people understand the right architecture to invest in PRIOR to letting a contract.

-mt