In general, we like to apply the iterative Deming Cycle, or a variant we implemented in the late nineties – TMAF: Test, Measure, Analysis, Fix, repeat.
- List your tests across the major components and design your test harness. Think of it like a mid-life physical – get sensors on every moving part subject to wear and tear: performance monitors, plus load testers for storage, CPU, memory, logic process points, database, and so on.
- This is your foundation. Focus here, or your test results turn into the leaning tower of Pisa – the data becomes unreliable.
- Make sure your tests reflect real-world volume, velocity, veracity, and variety, so you are not simulating data patterns that would never occur in production.
- Run the tests, measure, and validate that all data was collected correctly – a test is only successful and reliable if it produces good data.
- Load testers and test harnesses like JMeter (or hosted services, if the site is public) can help bundle the test runs and measurements together.
- Record your internal performance monitors as well, and know which measurements you need – are you looking at memory, CPU, I/O, reads/writes, etc.?
- Run your analysis against the expected change results, document findings, and capture potential recommendations.
- Go one step deeper in your analysis than your role typically requires – get to know the other team members' parts of the system and ask questions here! You'd be surprised how often performance problems trace back to exchanges between components that no one fully understood.
- Decide the next change and fix to implement by prioritizing the least risky, biggest wins first, then medium-risk, medium wins, and be prepared to move to the higher-risk items last.
- Categorize your fixes as logical (usually code changes or business rules), vertical (tuning), and horizontal (hardware).
- Each cycle should address logical fixes first, then vertical, and then horizontal (more hardware) last, where possible.
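One TMAF iteration can be sketched in a few lines of Python. This is a minimal illustration, not a real harness – the `workload` function is a hypothetical stand-in for the operation under test (a database query, an HTTP call, a batch step), and in practice you would drive it with a tool like JMeter and correlate with your internal performance monitors:

```python
# Minimal sketch of one TMAF (Test, Measure, Analysis, Fix) iteration.
# `workload` is a hypothetical stand-in for the real system under test.
import statistics
import time


def workload():
    # Stand-in for the operation being tested (query, HTTP call, etc.)
    return sum(i * i for i in range(10_000))


def run_test(iterations=200):
    """Test + Measure: run the workload and record per-call latency."""
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        workload()
        latencies.append(time.perf_counter() - start)
    return latencies


def analyze(latencies):
    """Analysis: summarize into the metrics you trend across cycles."""
    ordered = sorted(latencies)
    return {
        "p50": statistics.median(ordered),
        "p95": ordered[int(len(ordered) * 0.95)],
        "max": ordered[-1],
    }


if __name__ == "__main__":
    baseline = analyze(run_test())
    # Fix: apply ONE prioritized change here, then repeat and compare.
    after_fix = analyze(run_test())
    for metric in ("p50", "p95", "max"):
        print(f"{metric}: {baseline[metric]:.6f} -> {after_fix[metric]:.6f}")
```

The key design point is the repeat: measure a baseline, apply exactly one prioritized fix, re-run the identical test, and compare the same percentiles – otherwise you cannot attribute the change to the fix.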
In the previous blog on Ten Web Performance Tuning Tips – Measure, Compress, Minify – all the solutions landed on vertical fixes (minify, tune, etc.) on the front-end, but didn't address where the bulk of the measurements were proving to fail: the back-end.
Need more help?
Xentity does not just design, manage, and do outreach on the big projects. We also help recover existing projects. It's very common that the architecture was not implemented as the blueprint was designed – whether due to misunderstanding, lack of a blueprint, something that looked good on paper, changing circumstances, or, in all reality, a technologist going rogue. Sorry, it happens.
We can engage either by executing and getting our hands dirty, or by facilitating and training teams on the TMAF process.
Our architects understand various architecture stacks – ETL and data aggregation, MVC and n-tier/3-tier stacks, transaction modeling, database tuning – as well as considering new business rules, new architectures, and the like.
Overall Xentity staff have executed dozens of performance tuning engagements:
- Energy Mission lifecycle for transaction tuning between screens and database
- Energy allocations and billing batch processing in Oracle (PL/SQL, SQL, Oracle Configuration, hardware)
- Financial allocations and feeds calculation batch processing in Oracle (similar scope)
- Government Records in Oracle including legacy XML objects database on cloud (similar scope)
- Airline Major eCommerce content management system and logic design
- Capacity planning and sizing of eCommerce and service-to-citizen sites for hardware, software, business rules, and expected volume, velocity, veracity, and variety of data.
- And of course, our upfront architectures are designed to be the simplest architecture allowed by outcome and market-relevancy goals.
These engagements can be small (hours to weeks), full-time embedded consulting, or a performance SWAT-team engagement covering all three areas of expertise (logical, vertical, horizontal).