How mature is your service architecture?


Figuring out which architecture style you are ready to move to is very important before jumping at the great ideas vendors are selling you. Yes, the ideas are likely very good, and the vendors may be right that you NEED those features, forms, concepts, etc. But the question is whether your organization is ready to adopt these service models.

Moving into service models requires a greater level of maturity, because you take on more responsibility. An internal standalone system, or even enterprise-integrated systems, is used by trained users 9-5. You are responsible for a service level to employees, and if the system isn't up, management can reset employee expectations during those periods. This is much different from web applications calling those systems 24/7 for untrained users with access from anywhere. More so, if the services supporting the web application can be called by other ecosystems, people can harvest them and build many other interfaces for their own communities. Now you are responsible for a service level to users you do not manage, which means resetting expectations during downtime or slow periods is very, very difficult.

Having a clear view of your readiness to mature into the service arena lets you know how much you need to invest in business, governance, methods, applications, architecture, information, and infrastructure to get there.

The Service Integration Maturity Model (SIMM) is a standardized model for organizations to guide their SOA transformation journey. By having a standard maturity model, it becomes possible for the organizations or industry to benchmark their SOA levels, to have a roadmap for transformation to assist their planning and for vendors to offer services and software against these benchmarks. (Wikipedia)

IBM and The Open Group have adopted and expanded upon these concepts. 

IBM Summary

Silo (data integration)

Level One: The organization starts from proprietary and quite ad-hoc integration, rendering the architecture brittle in the face of change.

Integrated (application integration)

Level Two: The organization moves toward some form of EAI (Enterprise Application Integration), albeit with proprietary connections and integration points. The approaches it uses are tailored to use legacy systems and attempt to dissect and re-factor through data integration.

Componentized (functional integration)

Level Three: At this level, the organization componentizes and modularizes major or critical parts of its application portfolio. It uses legacy transformation and renovation methods to re-factor legacy J2EE or .NET-based systems with clear component boundaries and scope, exposing functionality in a more modular fashion. The integration between components is through their interfaces and the contracts between them.

Simple services (process integration)

Level Four: The organization embarks on the early phases of SOA by defining and exposing services for consumption internally or externally for business partners — not quite on a large scale — but it acts as a service provider, nonetheless.

Composite services (supply-chain integration)

Level Five: Now the organization extends its influence into the value chain and into the service eco-system. Services form a contract among suppliers, consumers, and brokers who can build their own eco-system for on-demand interaction.

Virtualized services (virtual infrastructure)

Level Six: The organization now creates a virtualized infrastructure to run applications. It achieves this level after decoupling the application, its services, components, and flows. Now the infrastructure is more finely tuned, and the notions of the grid and the grid service render it more agile. It externalizes its monitoring, management, and events (common event infrastructure).

Dynamically reconfigurable services (eco-system integration)

Level Seven: The organization now has a dynamically re-configurable software architecture. It can compose services at run-time using externalized policy descriptions, management, and monitoring.

Open Group Summary

Each level has a detailed set of characteristics and criteria for assessment.

The Open Group has a nice matrix that shows not only the 7 levels, but also how each level impacts business, governance, methods, applications, architecture, information, and infrastructure.

 

Apple Maps now has more share than Google Maps


I have been tracking Apple for a long time (check out the 2011 article on Apple in the 80s and a local kid's view of the Jobs-Sculley re-organization), and once again their approach of releasing a solution that works by default in their ecosystem triumphs over better engineering. VHS wins over Beta, again. There are lots of articles on this press release:

Apple maps: how Google lost when everyone thought it had won | Technology | theguardian.com

Top 5 low-hanging fruit to not bungle IT Procurement


IT procurement has been a hot-button issue since one of the largest civilian IT projects, DoD IT-sized in a way, was “bungled” in almost every phase: cost ran 6x the original bid, the architecture was overly complicated, there was no system integration concept, no end-to-end software or data lifecycle management, and no quality acceptance procedures, criteria, incentives, or the like.

We all know some or all of those problems. But as the FCW article cites: “Bungled launches didn’t start with HealthCare.gov.” It’s everywhere: government, commercial, non-profit, everywhere.

Of course, at Xentity we are biased. We believe in a take-your-medicine-now approach: design upfront, register and buy back the risks, and then move into agile design, rapid development, and iterative launch as relevancy and the market allow.

FCW notes there is “considerable agreement on how to go about overhauling the procurement system…” and cites consensus on five key actions:

1. Do a better job of defining desired outcomes upfront
2. Improve the training options for the federal acquisition workforce to put them on an even footing with vendors
3. Give agency CIOs more budget authority
4. Avoid lowest-price, technically acceptable (LPTA) contracts on large innovation-heavy projects.
5. Use agile development strategically, and mainly when a project does not require a lot of interaction with legacy systems.

To toot our own horn, these are many of the fundamental goals that Xentity’s staff and solutions focus on. In reply to those five items:

1. This is our main emphasis: design the concept of operations and the requirements for the SOW, and register the risks so they are known ahead of procurement. Still allow vendors flexibility in logical and technical design, but know upfront the various concepts that may come back and know how to score them.
2. Our fedbiz.xentity.com business management specialist service, integrated with our architecture practice, brings contracting and procurement specialization to bear, helping you understand how vendors, responding based on market analysis results, will react to certain requirements or solicitation frameworks.
3. We are set up to help advise CIOs on enterprise portfolio, architecture, capital planning, and segment adoption of CIO services and solutions.
4. We agree that the LPTA method does not work for acquiring design, planning, creative, and solution management services. LPTA beltway experts tend to game the system by proposing application developers at developer rates for architect positions on the initial task order; technically it is acceptable, and the rates are 30% better. But it’s a lot like asking a cook, “Can you farm?” Technically, the cook probably could, but wouldn’t you rather have someone familiar with agricultural economics, farming lifecycles, and the key risk and success factors than someone who merely knows food? The solution ends up costing more as contract MODs occur, the app developer gets replaced, and the government pays the cost of missed deadlines and scope creep.
5. This is a biggie! Marketing hyped up Agile as the “it slices, it dices, it juliennes!” solution. It is great for new transaction systems on abstracted solutions. It can be good for some feed ETL or integration. But when you have an immovable object, it doesn’t matter how agile you are. In those cases, you need to conduct architecture design, concept of operations alternatives analysis, business case evaluation, and requirements definition, and register and buy back risk.

All this said, we are very happy to see this attention back on better design and definition up front. Of course, being in this field, it is always nice to be noticed and to know we are on the right track. More importantly, we believe it is what is needed and the right thing to do for investing our citizens’ or customers’ money and assets wisely.

Where to get started on IT performance tuning


In general, we like to apply the iterative Deming Cycle, or a variant we implemented in the late nineties: TMAF (Test, Measure, Analysis, Fix, repeat).

  • List your tests across the major components and design your test harness (like taking your mid-life physical: get sensors on every moving part subject to wear and tear, and get your performance monitors and load testers covering storage, CPU, memory, logic process points, database, etc.)
    • This is your foundation. Focus here, or your test results turn into the Leaning Tower of Pisa and the data becomes unreliable.
    • Make sure your tests reflect real-world volume, velocity, veracity, and variety, so you are not simulating data that would never occur that way.
  • Run the tests, measure, and validate that all data was collected correctly, which is what proves the test reliable and the data good.
    • Load testers and test harnesses like JMeter (or web-based testing services, if the site is public) can help bundle the tests and the measurements; a minimal timing sketch follows this list.
    • Make sure to record your internal performance monitors as well and know what measurements to get – are you looking at memory, CPU, I/O, read/writes, etc.? 
  • Run your analysis comparing results against the expected changes, document your findings, and capture potential recommendations.
    • Go one step deeper in your analysis than your role typically requires; get to know the other person's part better and ask questions here! You'd be surprised how often the real problem turns out to be the exchange between different parts that nobody had fully traced.
  • Decide the next change or fix to implement by prioritizing the least risky, biggest wins first, then medium wins with more risk, and be prepared to move to the highest-risk items last.
    • List your fixes across logical fixes (code changes or business rule usually), vertical (Tuning), and then horizontal (hardware)
    • This cycle should try to address logic, then vertical and then horizontal (more hardware) last where possible.
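
As an illustration of the run-and-measure step, here is a minimal sketch (in Python) that times repeated requests against an endpoint and reports the latency distribution. It is only a stand-in for a real harness like JMeter; the URL and request count are placeholder assumptions, not values from an actual engagement.

    import statistics
    import time
    import urllib.request

    URL = "https://example.com/health"   # placeholder endpoint to exercise
    REQUESTS = 50                        # small sample; a real test run needs far more

    def time_request(url):
        # Issue one GET and return elapsed seconds, including reading the full body
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()
        return time.perf_counter() - start

    timings = sorted(time_request(URL) for _ in range(REQUESTS))

    # Report the distribution, not just the average; tail latency is what users feel
    print("median: %.3fs" % statistics.median(timings))
    print("p95   : %.3fs" % timings[int(len(timings) * 0.95) - 1])
    print("max   : %.3fs" % timings[-1])

Whichever tool you use, keep the raw timings so the analysis step can look at percentiles and outliers rather than a single average.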

In the previous blog on Ten Web Performance Tuning Tips – Measure, Compress, Minify, all the solutions landed on vertical fixes (minify, tune, etc.) on the front-end, but didn't hit where the bulk of the measurements were proving to fail: the back-end.

Need more help?

Xentity does not just design, manage, and do outreach on big projects. We also help recover existing projects. It is very common that the architecture was not implemented as the blueprint was designed. That can happen due to misunderstanding, a lack of any blueprint, something that looked good on paper until things changed, or, in all reality, a technologist going rogue. Sorry, it happens.

We can engage either by executing and getting our hands dirty, or by facilitating and training your team on the TMAF process.

Our architects understand various architecture stacks: ETL and data aggregation, MVC and n-tier/3-tier stacks, transaction modeling, and database tuning, as well as considering new business rules, new architecture, and the like.

Overall Xentity staff have executed dozens of performance tuning engagements:

  • Energy Mission lifecycle for transaction tuning between screens and database
  • Energy allocations and billing batch processing in Oracle (PL/SQL, SQL, Oracle Configuration, hardware)
  • Financial allocations and feeds calculation batch processing in Oracle (similar scope)
  • Government Records in Oracle including legacy XML objects database on cloud (similar scope) 
  • Airline Major eCommerce content management system and logic design
  • eCommerce and web coding tuning facilitation (similar to Ten Web Performance Tips – Minify CSS Javascript HTML et cetera)
  • Capacity planning and sizing of eCommerce and service-to-citizen sites for hardware, software, business rules, and expected volume, velocity, and data veracity and variety
  • And of course, our upfront architectures are designed to be the simplest architecture allowed by the outcome and market relevancy goals.

These engagements can be small (hours or weeks), or they can be a full-time embedded consultant or a performance SWAT-team engagement covering all three areas of expertise (logical, vertical, horizontal).

 

Ten Web Performance Tuning Tips – Measure, Compress, Minify


With the mobile ecosystem creating an even less patient user than the desktop, web performance issues hearken back to 1999, when we were on 56K modems using NetZero to dial up from airport lounges. A web site needs to be architected, designed, and coded with best practices to perform well. After reviewing Google Analytics for an Atlassian Confluence website, we realized we had both client-side and server-side performance issues. Google provides more web performance tips and documents its web performance tools, including browser extensions, the PageSpeed Insights API, the PageSpeed Service, and their optimization libraries. So we did some investigation, and the following were some of the suggestions:

Enable compression

Compressing resources with gzip or deflate can reduce the number of bytes sent over the network. Enabling compression for these resources reduces their transfer size; we found about a 72% reduction.
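
As an example only, if the site were fronted by Apache httpd, mod_deflate can compress text responses with a few directives like these (the server choice and MIME-type list are assumptions; a Confluence instance served straight from Tomcat or behind nginx would be configured differently):

    # Apache httpd sketch: compress text-based responses with mod_deflate
    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/html text/plain text/css
        AddOutputFilterByType DEFLATE application/javascript application/json
    </IfModule>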

Minify JavaScript

Compacting JavaScript code can save many bytes of data and speed up downloading, parsing, and execution. Minifying the JavaScript resources reduces their size; we found about a 12% reduction in KB.
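
Any standard minifier will do the job; as one illustration (assuming a Node.js toolchain is available, which is not a given for this site), the terser CLI compresses and mangles a script in a single pass:

    # Minify a script with terser: -c compresses, -m mangles (shortens) local names
    npx terser scripts/site.js -c -m -o scripts/site.min.js

Comparable command-line minifiers exist for the CSS and HTML items further down.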

Eliminate render-blocking JavaScript and CSS in above-the-fold content

If your page has one or several blocking script or CSS resources, this causes a delay in rendering your page. None of the above-the-fold content can be rendered without waiting for those resources to load. Try to defer or asynchronously load blocking resources, remove render-blocking JavaScript, or inline the critical portions of those resources directly in the HTML. We had 7 blocking scripts and 11 blocking CSS resources.
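
The usual pattern is to inline only the small amount of CSS that the first paint needs and to let scripts load without blocking the parser; a schematic example, with placeholder file names:

    <head>
      <!-- Inline only the styles needed for above-the-fold content -->
      <style>
        .masthead { font-family: sans-serif; margin: 0; }
      </style>

      <!-- defer: download in parallel, run after parsing, so rendering is not blocked -->
      <script src="/js/site.js" defer></script>

      <!-- async: run as soon as downloaded; fine for independent scripts like analytics -->
      <script src="/js/analytics.js" async></script>
    </head>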

Leverage browser caching

Setting an expiry date or a maximum age in the HTTP headers for static resources instructs the browser to load previously downloaded resources from local disk rather than over the network. We had several files that could be cached for 10 minutes and a couple for 30 and 60 minutes.
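
Caching behavior is driven by response headers; as a sketch (again assuming Apache, with lifetimes roughly matching the ones noted above), mod_expires can set them by resource type:

    # Apache httpd sketch: let browsers cache static resources by type
    <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType application/javascript "access plus 10 minutes"
        ExpiresByType text/css               "access plus 30 minutes"
        ExpiresByType image/png              "access plus 60 minutes"
    </IfModule>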

Prioritize visible content

Your page requires additional network round trips to render the above-the-fold content. For best performance, reduce the amount of HTML needed to render above-the-fold content and prioritize the visible content. 21 KB of our responses were required to render the above-the-fold content.

Minify CSS

Compacting CSS code can save many bytes of data and speed up download and parse times. Minify CSS resources to reduce their size (we found about a 4% reduction).

Minify HTML

Compacting HTML code, including any inline JavaScript and CSS contained in it, can save many bytes of data and speed up download and parse times. Minify HTML resources to reduce their size. We found about 17% reduction.

Optimize images

Properly formatting and compressing images can save many bytes of data. Optimize images to reduce their size. We found a 33% reduction opportunity, where our repeating menu images could be reduced by 50%, but in all honesty, that saved less than 2 KB overall.
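
Recompressing images is usually a one-liner with common open source tools; for instance (the tools, file names, and quality settings here are illustrative assumptions, not what we actually ran):

    # Recompress a JPEG to roughly 85% quality and strip metadata
    jpegoptim --max=85 --strip-all images/banner.jpg

    # Losslessly squeeze a repeating PNG used in the menus
    optipng -o5 images/menu-tile.png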

Avoid landing page redirects

Our page has no redirects, so there was nothing to fix here.

Reduce server response time

Many factors can slow down server response time; monitor and measure where your server is spending the most time. This is where we found 90% of our issues: the server was going to sleep on Mountain Time and not waking up for other time zones, which is where our primary usage was coming from.

Yes, the tweaks above will definitely help the mobile user on 3G on an older smartphone. But Occam's Razor suggested focusing on the simple, obvious part first. The items above will take a few days or maybe more, since many of the recommendations rely on a COTS package and we'd need a dialog with the vendor to find the right solution. The server-side fix, as it turns out, just required us to tickle the server and apply one server-side recommendation from above (caching). Reason: we noticed generally great performance, but sometimes, when the service was waking up, those page times skewed the overall page performance.
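
"Tickling" the server can be as simple as a scheduled request that keeps the application warm across time zones; a cron sketch, with the interval and URL as placeholders:

    # crontab entry: hit the site every 15 minutes so the app server never idles into a cold state
    */15 * * * * curl --silent --output /dev/null https://wiki.example.com/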

We'll still tweak some images and cache some files, but we'll more likely reach out to Atlassian to see where they are with minifying and addressing above-the-fold content processing.