Blending a Distributed Transformation Team

Blog post
edited by
Wiki Admin

It is no secret that politicians and senior executives in the Federal Government are based in Washington, DC. In the corporate world, they are the equivalent of a corporate agency headquarters with many regional offices. These headquarters lead new policy, budget decisions, funding distribution, and the formation of core principles and program direction for defense and civilian programs. Most Senior Executives in DOD and civilian agencies are based in DC. So, a common question from agencies, partners, and candidates is: why would a transformation and management consulting firm serving government executives be based in Colorado?

In short, the work is not done in DC – it's done in the programs. A little-known fact is that a very large share of those programs – especially engineering, earth, energy, and land-based programs – are actually run out of Colorado. Colorado has over 53,000 Federal Government employees (Denver Post, June 2011), not including federal contractors. The concentration of federal labs, military bases, land and science program management, and multiple science and operations centers is the main reason. For the civilian programs specifically, Denver has over 10,000 federal employees and contractors running the operations, hosting, and mission functions for programs in and around the Denver Federal Center. A majority of these agencies are earth- or land-program focused. See the list of over 100 agencies and functions at the bottom of this article. In the commercial world, Denver also has a large presence in communications, satellite, financial, and restaurant headquarters, given its central U.S. location.

This means, for example, that DOE EERE in DC sets direction and budget for NREL, but NREL – the program's R&D national laboratory – is in Golden, CO, performing the solar, wind, biomass, hydrogen, geothermal, water power, and other renewable energy studies and preparing for commercialization and technology transfer. Or: USGS program headquarters are in Reston, and USGS operates around the world, but a major hub is Denver-based, with thousands of employees and contractors performing water quality analysis, tracking earthquakes, and producing geospatial data.
We have seen two approaches to transformation in distributed organizations: headquarters-located survey models, and collaborative blended-location models.

Headquarters-Located Survey Models: Data Calls

Let's use a Federal Government example. When it comes to program mission transformation or migrating programs to shared services or consolidations, the DC-based Senior Executives set the direction – from DC. DC-based consultants support transformation for the common executive corporate functions: budget formulation, IT policy setting, financial accounting, workforce statistics and strategy development, and other corporate commodity services.

What gets lost, though, is that the real mission business transformation analysis, ideation buy-in, actual migration issues, consolidation, and recommendation implementation all happen in the field.

No corporation survives on "crystal tower" decision making and thou-shalt execution perfection. True transformation needs to include collaboration – not upward data-call reporting, but working with the non-DC program management to construct the needs, identify the appropriate strategic or tactical improvement opportunities, pinpoint the quality improvements, process efficiencies, and lifecycle management changes from supply to enablement to use, and examine the sequencing possibilities of their program, investment, system, facility, and asset portfolio.

A typical path into this mistake: DC-based executives ask DC consultants to perform data calls as a way to reach out to programs and "collaborate." So, the unknowing consultant creates the template – and it looks good, it really does – then performs the data call (this may include a local visit or literally be a phone call) that no one likes but needs to be done. Ultimately, though, the result is flawed in a few ways. Typically, the data call is presented from a corporate point of view, void of the unique program mission factors that impact programs, such as: Science Data Management, Geospatial and Geography Services, Earth Observation and Remote Sensing, Natural Resource Planning, Policy, and Protection, Wildland Fire Management, National Environmental Policy Act Management, Land Management, Park Management, Agricultural Study, Energy Lifecycle Management, Emergency/Hazards Management, Health Information Management, and other mission information and data engineering and research.

The DC consultant was brought in for executive corporate analysis and typically lacks context on the program work being done in the field, but collects the information as requested, completes a very good analysis of the information provided, and delivers recommendations based on the data call inputs and their reception in DC.

THEN the results come back, and the problems start – "garbage in, garbage out," as the saying goes. Worse, a strain grows between DC and non-DC management very akin to the widely publicized strain between DC and citizens: beltway decision making and program management are at huge odds, driven by the same pressure of having to respond to the current political climate.

It turns out the system manager who provided the information didn't have the context of the overall policy. The system architect didn't realize a certain integration point existed and provides several technical reasons why it cannot work. Managers weren't on the data call, so some core change context was missed and unfortunately misrepresented, causing consternation, delaying acceptance of any recommendations, and requiring further revision. Program executives, hearing all the consternation, respond with a lack of support for the initiative, fearing another change revolt.

Even though there was a PowerPoint giving an overview of the effort, the call was performed flawlessly, and there was even a web site with some PDFs and facts providing context, the reality is that everything was framed in the context of the DC-based effort, and it still failed. There was no accounting for the program context and how it relates to localized change. The executives who brought it in didn't come in through the program, so their staff didn't have the proper impetus. And when the realization of how important it really was came about, it was too late to ask questions.

Collaborative blended-location approach for providing agency and program executive analysis, design, and planning services

Whether it's a major data center consolidation initiative, a move to shared services, migrating financial systems to single-point solutions, an executive-driven efficiencies engagement, or moving a program up the maturity model chain, a blended model for leveraging outside consultancy helps address – not avoid – these issues much more completely and efficiently, and sometimes less painfully.

Xentity's approach is to engage and involve the programs early and often by working with the local presence – both in DC and in the program's location – which to date has typically been Denver. Denver is in the "Interior": the Department of the Interior, for instance, manages 1 in 5 acres of the United States, mostly west of the Mississippi, with key DOI presences in Albuquerque, Sioux Falls, California, Portland, Seattle, etc.; the same holds for USDA, and many of the over 100 agencies and functions listed at the bottom of this article are based in Denver. DC isn't the only place with daily political fires or unclear communication directions – programs have lots of fires too. It seems odd to even say this, but "Potomac Fever" or headquarters-myopic views tend to forget that programs are more than a robotic factory (even if they are just that): local response to new technology acquisition governance gone awry; rogue process or technology developments going the wrong way, resulting in server downtime or premature application deployment; standard workforce management issues causing re-shuffling of assignments; a failure in a processing batch that triggers a mini industrial-engineering assessment of the quality, mechanical, process, or workforce failure.

This context – this hands-on context – helps ground the recommendations in realities, almost in an ethnographic way. Rather than surveying the wrong people out of context about what is happening, ask people while they are in context, so the right answer arrives right away with the right perspective and next steps. As well, by being nearby as factors dance around what needs to be discovered, the local context begins to reveal the true issues that need to be addressed. In the end, by keeping the ideas where the problems are, and testing how they can fit into the target approach with and ultimately by the DC-based team, the end solution is likely to be more readily implementable, have higher accuracy, mitigate more risks, and, for what it's worth, increase corporate morale as the "cheese is moved."

An example lesson learned at the Department of the Interior, in the latter phases, is a major financial business and management system consolidation driven from DC, where the DC-led effort also fields an on-the-ground transition team training, triaging, and guiding the local migration issues. The analysis phase did not do this, and the result was several starts and stops, restarts, lawsuits, etc. The original effort considered a handful of systems; a second effort turned up a dozen; and then a local analysis discovered a couple hundred interfaces. It was at this point that the DC-led team started to look at batches and phases based on locational presence and complexity. Transitioning hundred-plus-year-old accounting processes into a modern single financial service is still a very difficult problem, but the local presence in both places has resulted in later batches having increased success.

How Xentity executes the local presence blended model
Xentity is Colorado-based, with staff in both Colorado and DC, among other client headquarters areas. All staff are trained in our Services Catalog, transformation methods, and leadership qualities. An added advantage is that our cost-of-living savings over DC headquarters is passed on through our general and administrative and overhead costs. This, on top of our 8(a) acquisition benefits, Commercial and GSA Schedules, and partner agreements, helps get started quickly, as we are very accessible.
Headquarters-Local Consulting – Our staff, or our prime partner's staff, can support on-site as the Senior Executives respond to political pressures and last-minute calls for action, and advisory and supporting consultants get pulled in for quick response. If you are not in DC, you can quickly be put out of mind as you are out of sight – which in this model is a benefit, as the Denver-based staff can focus on the program management and design needs. In addition, we train our staff, or select partners, who are well versed in the corporate context, jargon, lingo, portfolio, and players in that agency to help accelerate ramp-up.
Program-Local Consulting – Our Colorado-based staff bring the actual mission subject matter expertise in analysis, design, and planning where it tends to be needed. They are trained on the earth, land, and mission subjects. They work with the programs, and, living in the interior of the United States – next to federal lands, recreating in national lands, and experiencing the wonders and threats (wildland fire, water, climate) of the Interior – they tend to build an internal drive as well. This means our staff will be familiar with:
  • Metro-based hosting, engineering, technical, research, service, or operations center efficiencies, status, portfolio current state, and key influencers in various agencies, bureaus, and programs.
  • The laboratory portfolio of work, and the challenges for technology transfer and quality yet efficient science, that the Denver metro-based national and regional labs encounter.
  • Trust factors and gaps between DC decision makers and Denver-based upper management.
Xentity recommends a blended model to help build a better understanding of the true program portfolio, understand the value chain as it passes through the actual centers, and build a bridge of better understanding, clarity, increased context, and risk mitigation between headquarters (DC) and the major programs and centers. This engages the program middle management, labs, and centers to be inclusive – as in any transformational leadership guide – and allows the solution to be collaboratively developed, which better helps mitigate risks in migrations, understand the true portfolio alternatives and sequencing, and, again, build trust.


Is it time to reduce the complexity of our solutions?

Blog post
edited by
Wiki Admin

There was a well-written article on Is it time to reduce the complexity of our solutions? that starts off with:

There are many information technology trends to observe if you’re in the business long enough.  For instance, when I began we spent much of our time replacing thin-clients (aging dumb terminals) with full-blown Windows installations and bloated software clients.  The trend was to push more horsepower out to the users and distribute the processing to increasingly powerful PCs.  We then turned around a few years later and began emphasizing bringing applications back to the data center and using thin computing once again.  Now, I am the unfortunate witness to end-user mutinies which are forcing us to return fat-clients to the desktop.  It’s classic centralization versus decentralization, and is a topic worth discussing in its own right.  But I will save that for another day.

The article continues by outlining ways to address this – simplifying the solution, doing more with less, unintended consequences, managing the solution – which cover some of the points in our blog on What are some patterns or anti-patterns where architecture and governance can help. The article wraps up with:

There are only so many new systems and technologies a team of IT pros, however skilled, can implement before it becomes critical to take a step back from the frenzied pace and analyze the existing solutions.  Integrating everything may not always be the best course of action.  Sometimes simplifying can lead to more satisfied customers and staff.

The article was well done, but it wasn't the article itself that caught our eye – though we nodded along in agreement with its contents, even if a bit light on substance. Rather, it was one of the commenters, "xentity," who had this to say:

I have worked in a wide breadth of operations across many sectors in the economy. I have also been working with IT since 1982. The recurring theme / problem I have seen is multi-faceted:

1. Leadership in the IT industry is polar. They think about cool technologies and dressing systems in pretty bells and whistles but fail to address the core of their own industry. The industry is not about technology. The industry is about information and information processes, then applying technology to those processes. These processes often travel across a wide breadth of departments, organizations, and even industries. Information and knowledge 'ownership' is temporal for extremely short intervals of time before others gain access to it and begin to use it. I am not discussing personal information like SSNs but instead I am discussing practice. The actual owners of knowledge are the communities of practice. Managing and controlling information must be addressed at this level.

2. Common leadership in the various industries is inept at effectively utilizing, implementing, or managing information technology. This was something I thought would eventually fade as more competent leadership matured over the years, but that has failed to fully materialize. At the center of the problem is that despite many IT professionals being highly skilled, they often are hog-tied by CFOs, controllers, and other leadership. I find that a large majority of leadership and management have little grasp of running an operation but have made their way into leadership roles. Decisions are not made based on sound strategies that resolve the root problems and advance business performance. This is something I have seen repeatedly, more often than the cases where they have in fact made sage decisions.

3. Information technologies are strategic in nature. The effects are long term, but leadership's attention span is about 90 days. This is evident in their calls to "Go Live" and to "Show Results". In talking to these same people, the attention span is about 90 seconds. Most are incapable of seeing the abstract and almost always ethereal world of IT. What appeals to them is what "sounds good". Hence, the wild swings back and forth between strategies and platforms without realizing the long-term cost and effects.

In short, today I believe there is a clash between information architectures and organizational architectures. Organizations want to cling on to tradition establishing artificial boundaries that disrupt information processes. They employ expeditors and redundant efforts to resolve the conflicts between information and organizational architectures. The complexity enters at this point. Adaptive and autonomic systems are cute technologies for IT professionals but what is really needed is adaptive systems for industry. The systems must be organized around information processes and be able to self organize. Staffing should be structured along these information processes. In the end, there needs to be a symbiotic union between staffing, business, and IT that is adaptable to emerging conditions.

What a great add-on. As our blog What are some patterns or anti-patterns where architecture and governance can help points out, we couldn't have agreed more. Maybe it was one of our staff that put it up, but no one owned up. Well put, "xentity"!

Moral of this post: it's sometimes not the blog content, but the blog topic, title, and timing that generate the best content about a blog. (Note to self: when we have that base community, we need to open up comments too!)

Linking segment architecture to the IT Investment Maturity Framework

Blog post
edited by
Matt Tricomi

As we noted in Developing Our Approach, we linked the methodology to existing maturity models.

For background: one of the core maturity frameworks, the GAO ITIM (IT Investment Management) Framework, has for many years been the embedded driving basis for the Federal Government's OMB Capital Planning and Investment Control, based on OMB Circular A-130, for selecting, controlling, and evaluating investments.

When developing the five phases – analyzing segment readiness, charter & sponsorship, analysis & blueprint, execute & transition (implement), and maintain/manage – we aligned them closely to ITIM. The graphic below captures the overlap (pink) of the segment architecture with the maturity stages, as well as the enterprise architecture overlap (blue) for setting up tools, methods, and goals; principles, criteria, and compliance; and governance & tracking (PMO).

Breaking down the change phases and their alignment to ITIM maturity stages

Phase 1 of the segment method aligns directly with GAO ITIM Stage 1: you first need to get your portfolio understood.

ITIM Stage 2 requires the architecture organization to select its tools, methods, and goals to assure any transformative teams agree to the rules and tools that help change the investment; in addition, the transformation team in Phase 2 needs solid backing from leadership and clear definition. You cannot miss this soft-science aspect. Without leadership, change concepts become shelfware all too quickly. Just having that backing builds your chance of – and foundation for – solid investment.

ITIM Stage 3 calls for a well-defined portfolio to guide criteria for investment selection. This means architecture needs its principles, criteria, and compliance defined, so that when an investment comes up for change recommendations, the review is as objective as possible. This is important because change can become subjective as consensus forms; given that, defining how to handle rationalization and remediation of decisions is key. Without this guidance, investment change will stall at this point.

ITIM Stage 4 is where the Phase 3 change analysis and recommendations are needed. If you have not executed the previous ITIM stages and change phases, instituting true change is at risk from political infighting, coups, disenchantment, and many other misalignment, anthropological, or bureaucratic issues.

ITIM Stage 5 essentially shows an organization has change maturity in its blood and can perform the Phase 5 management and maintenance of change.
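The stage-by-stage walkthrough above can be sketched as a simple lookup. This is an illustrative paraphrase of the alignment described in this post, not an excerpt from the MBT or GAO documents, and the comments summarizing each pairing are our own shorthand:

```python
# Rough sketch of the MBT segment-phase to GAO ITIM-stage alignment
# described above. Phase 4 (execute & transition) is not explicitly
# mapped in the text, so it is intentionally absent here.
SEGMENT_PHASE_TO_ITIM_STAGE = {
    1: 1,  # analyzing segment readiness -> get the portfolio understood
    2: 2,  # charter & sponsorship -> leadership backing plus tools/methods/goals
    3: 4,  # analysis & blueprint -> change analysis and recommendations
    5: 5,  # maintain/manage -> change maturity "in the blood"
}
# Note: ITIM Stage 3 (well-defined portfolio criteria) is supported by the
# EA organization's principles, criteria, and compliance rather than by a
# single segment phase.


def itim_stage_for(phase: int):
    """Return the ITIM stage a segment phase chiefly aligns to, or None."""
    return SEGMENT_PHASE_TO_ITIM_STAGE.get(phase)
```

The point of the lookup shape is the asymmetry it makes visible: the phases and stages are not a one-to-one ladder, and the gaps (Phase 4, Stage 3) are where governance and EA support carry the load.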

Where ITIM measures maturity, our approach addresses how to progress – and when to progress – to each maturity stage for a segment. If this is a focus, take the chance to digest the ITIM maturity definition – it is nothing if not complete. As well, review the transformation approaches in MBT or FSAM.

Developing a Transformation Approach

Blog post
edited by
Matt Tricomi

In 2004, when developing an approach for business transformation, our primary goal was to mature the investment by addressing the core transformation architecture concepts discussed elsewhere, such as paving cowpaths, improving bad data lifecycles, avoiding redundant buying, balancing compliance with aligning programs, aligning products and services directly to performance drivers, and addressing inefficiencies. Our team was under a Northrop Grumman contract at the Department of the Interior. The Department of the Interior had multiple representatives across the CIO and Management and Budget office functions guiding and approving the enhancements. There were three team members from Xentity, one from Phase One Consulting Group, and one from IBM, and the project included management oversight by Northrop Grumman.

The method would provide a collaborative way to bring cross-functional business representation with the architecture analysts and consulting service to walk through major analysis steps:

  • Initiating the project 
  • Scoping the segment
  • Performing business analysis
  • Performing technical analysis
  • Authoring the recommendation set as a blueprint
  • Incorporating into the enterprise plan and portfolio

We knew we wanted to use architecture techniques to help, but wanted to assure architecture did not follow the path it had for the previous fifteen years: first completing all the information needed to assess the portfolio, then serially addressing change.

That approach would not – and has not – engaged executive interest, and likely, given the time it takes to collect the information and map it to taxonomies (which needed to be developed and fostered), the information, being time-sensitive and thus perishable, would be stale.

 The result was the Methodology for Business Transformation version 1.0.

Lessons Learned

We used the method and applied it against four major mission area blueprints, but this was a version 1.0, and we ran into several major issues – in fact, we logged around 100 improvements and 8 minor and sub-minor revisions, resulting in MBT 1.5. We found issues in: enterprise governance, OMB functions, change management, performance, the CIO office, scoring, solution architecture, governance, cost-benefit analysis, and general areas. But the largest issues were not in the analysis; they were in the wrappers around the method. Here are the top lessons, plus a general summary of the over 100 improvements discovered.

#1 Lesson Learned – Tick, Tock, Tick, Tock

For large organizations ($100 million to $1 billion), we knew we wanted to take a segment approach to building out the information while moving through the phases of transformation. In our first draft of the resulting segment analysis method, the transformation effectively started with Phase 3.

The clock would start ticking for the architecture as soon as a team was formed, funding for the team assigned, and the project was kicked off.

This was a major issue after the first four blueprints, as we had missed the key executive steps of establishing sponsorship. The definition of success was in the eye of the beholder – the executive. We skipped the critical step of defining the sponsor and their executive-level needs: were they interested in savings? Product expansion? Which architecture concepts?

#2 Provide precedence-based guidance on how far to mature

We knew we didn't want to re-invent how to develop service-level maturity.

In 2005, as part of examining our transformation approach, we studied the GAO's Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity (ITIM), which details the "Select, Control, Evaluate" phases of management and provides background on proposed processes to improve IT investment management.

And for guiding the maturity of the actual capabilities, we relied heavily on the IT Infrastructure Library (ITIL v3) management best practices to guide how to move from reactive to proactive service without reaching too far while gaining improvements and efficiency.


#3 Showing the line of sight

We needed to show clearly how the analysis and resulting work products provided support across the business drivers and the full business value chain.

Some enterprise architecture experts may argue that existing frameworks such as Zachman accomplish that – and they do, in a library function, but not in a journalism function that tells the story in narrative mode and back-pockets the work products as due diligence.

#4 Change is the message, EA is just a set of tools – Get over ourselves

This moved enterprise architecture out of being front and center and put the mission of the segment analysis first. EA became a supporting role, and we would attempt to remove EA from the vernacular entirely. The obvious explanation is that this was done because of the brand beating EA has taken since 1996, when the intentions of Clinger-Cohen were reduced to an IT compliance reporting role. But it was actually more so EA's own culture of putting EA at the center of the universe, when analyzing, ideating, and managing change belong at the center.



Among our 100-plus other improvements, other areas highlighted were:

  • Not enough clarity on the varying roles of enterprise governance
  • Most transformations have strong ties to annual budget formulation and execution activities, for example, but we didn't align to that enough
  • Aligning calendars and timing with OMB and agency management and budget functions such as planning and performance management, workforce strategy, acquisition, etc.
  • Change management was truly missed for guiding how to implement, assess, and manage risk, and for developing a transition management capability (i.e., a PMO)
  • Performance
  • Finding ways to integrate other CIO information management requirements to further synchronize data calls across privacy, records, security, and CPIC
  • Scoring for organizational readiness, as well as including scoring framework requirements from OMB, GAO, and the like to help improve architecture maturity scoring
  • Improved linkage to work products and guidance that solution architecture could use
  • Governance
  • Cost-benefit analysis missing in guiding alternatives analysis
  • General iterations on language consistency (i.e., it's "work product," not "artifact"), updating templates, improving checklists, improving training, creating hands-on exercises, and helping customize to audiences

The result ended up not being a full major release, as we had yet to address cultural transformation issues and still had additional concepts to consider for linking other paths through the method, treating it more as an approach.

The supporting toolkit for MBT is a large difference-maker compared to whitepaper procedures or sample templates.

(2008 later note: MBT 1.5 ended up becoming the core method and set of templates for the Federal Segment Architecture Methodology (FSAM), which continues in aspects as a basis for the Common Approach to Federal Enterprise Architecture.) Two team members from Phase One Consulting Group, one of whom was on the original MBT team, supported this effort.

MBT has since been validated by the following organizations: