Introduction
A development department needs to have a Vision and an associated Mission Statement defined.
I typically define CSFs (Critical Success Factors) to complement the Mission Statement and, as ever, work out how we can demonstrate our success in meeting them and continually improving.
I haven't included a Vision statement below as they are normally company specific - but I have included a Mission Statement and CSFs I wrote for a company a few years ago.
Contents
Mission
CSFs
CSFs / KPIs - Measuring and Implementation of Supporting Processes
Mission
A development team that is delivering business solutions quickly and efficiently to agreed timescales, quality/scope and cost by…
• Using a defined, discrete and best of breed set of application development technologies.
• Using appropriate engagement models and development lifecycles for the different technologies and types of development. Appropriate means that processes should not be over-engineered or methodologies too high-ceremony for the type of work.
• Aligning in terms of skills and capacity to the needs of our business customers. Proactively and dynamically adjusting the team’s capacity and skill-set (through self-learning and similar) as business needs and industry trends dictate.
• Proactively managing (with input from IT Architecture) the business application estate such that we have a coherent application set and a strategy in place to ensure that the coherent state will continue.
• Engaging with external suppliers and using them appropriately (with a defined engagement model and development lifecycle) to enhance the development team’s capacity and/or avoid the need to maintain niche skillsets.
• Motivating themselves through their dynamism and success, and through the discreet support of management.
• Being aware of their CSFs and, by self-measuring and self-analysing, continually improving their position with respect to those CSFs.
• Working closely with the other IT departments, maintaining open communications and seeking to maintain and further those relationships.
• Understanding the latest IT innovations that are of use to our business customers and proactively working with the business-facing elements of IT (e.g. Business Analysts) to promote those new technologies for consideration by the business.
CSFs
Success with respect to achieving all the below will be measured against SLAs or other defined criteria. Where appropriate some of these targets may be internal (to IT).
- Response to Estimate and Design: Respond as per SLAs to requests for assistance with Project Feasibility and Initiation by providing estimates and designs.
- Response to Schedule Build: Respond as per SLAs to requests to schedule Build work.
- Deliver to QDC: To deliver enhancements and projects to quality, delivery and cost measures.
- Maximise Development Time: Maximise time spent on developments for the business, minimise time on other activities such as support and work internal to IT.
- Quality Assurance: Deliver code and other deliverables with a minimum of defects.
- Cross Skilling: Maintain a cross-skilled team, with skills changing to accommodate business requirements.
- Manage Application Platforms: Manage our application technology platforms and seek to minimise their number through sunsetting and controlled adoption of new platforms.
- Avoiding Legacy: Maintain the various elements of application technology platforms close to latest version levels – highlighting any significant deviations from this policy to management.
- Responsive Support: Quickly respond to and resolve production defects and support requests.
- HR Best Practice: To conduct standard HR practices in respect of the members of the team and in particular objective setting, performance reviews and career development sessions.
- Low Leaving (churn) Rate: To ensure we retain our staff and thus avoid the costs of recruitment and knowledge acquisition.
- Customer Satisfaction: How our customers, from PMs to business partners, rate us.
CSFs / KPIs - Measuring and Implementation of Supporting Processes
This section discusses the CSFs above and how we can measure our performance against them.
This effectively turns them into KPIs and also allows us to define targets/SLAs.
1. Response to Estimate and Design
Measurement is how long it takes us from the initial request (with all documentation in place) to submitting a signed-off estimate.
The SLA will need to be short for small pieces of work but understandably longer for larger projects – we can’t just conjure up ten days’ design work when a typical staff member will be committed for at least a couple of months ahead.
BAs will have the same problem.
This work needs to be scheduled in as with any other work.
We need to think through how we can solve this conundrum – perhaps by:
- Keeping the more senior developers / team leads away from long-term development work and therefore free to pick up short-notice work such as this.
- If the work is large and has not been flagged as coming up by the business, we can justifiably ask them to make a priority call against other work or to accept a delayed response.
If we adopt the above, the SLA would vary according to the following (a simple way of combining these factors is sketched after this list):
- The size of the enhancement / project.
- The notice we were given of the project (i.e. with no notice the SLA would be pushed out).
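To make the sliding SLA concrete, the sketch below combines the two factors; the size bands, the notice threshold and the day counts are illustrative assumptions, not agreed figures.

```python
# Hypothetical sliding SLA for estimate / design requests.
# Size bands, the notice threshold and the day counts are illustrative only.

def estimate_sla_days(estimated_effort_days: float, notice_days: int) -> int:
    """Working days we commit to for returning a signed-off estimate."""
    # Base SLA grows with the size of the piece of work.
    if estimated_effort_days <= 5:
        base = 3        # small enhancement
    elif estimated_effort_days <= 50:
        base = 10       # medium project
    else:
        base = 20       # large project

    # With little or no notice from the business, the SLA is pushed out.
    if notice_days < 10:
        base *= 2

    return base

if __name__ == "__main__":
    print(estimate_sla_days(3, notice_days=20))    # small, well flagged -> 3
    print(estimate_sla_days(120, notice_days=0))   # large, no notice    -> 40
```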
2. Response to Schedule Build
A fairly simple job for the relevant team lead and Development Team manager, assuming we have resource scheduling in place. However, it would be expected that we would want to be more advanced and that IT would, jointly with the business, model various project scenarios (e.g. do we go with the two 100-day projects A & B (priorities 1 and 3), or do we instead do the 200-day Project C (priority 2) first?).
If we adopt this modelling process it is likely to be done at a joint IT / Business monthly meeting.
In that case the SLA for this will be to respond by the conclusion of the process that surrounds the next scheduling meeting.
See “Guideline - Resource Scheduling Tool” and “Resource Scheduling Strategy”.
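As an illustration of the kind of scenario modelling that could feed that meeting – the project data, the single-stream assumption and the 20-day monthly capacity are all invented – we could simply show when each priority would complete under each ordering:

```python
# Hypothetical scenario comparison: if the team works through projects
# sequentially at a fixed capacity, when does each priority get delivered?
# Project data and the 20-days-per-month capacity figure are invented.

CAPACITY_DAYS_PER_MONTH = 20

def finish_months(projects):
    """For (name, effort_days, priority) in planned order, return completion months."""
    elapsed_days, result = 0, []
    for name, effort_days, priority in projects:
        elapsed_days += effort_days
        result.append((name, priority, elapsed_days / CAPACITY_DAYS_PER_MONTH))
    return result

scenario_1 = [("A", 100, 1), ("B", 100, 3), ("C", 200, 2)]   # A & B first
scenario_2 = [("C", 200, 2), ("A", 100, 1), ("B", 100, 3)]   # C first

for label, scenario in [("A & B first", scenario_1), ("C first", scenario_2)]:
    print(label)
    for name, priority, month in finish_months(scenario):
        print(f"  project {name} (priority {priority}) completes in month {month:.0f}")
```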
3. Deliver to QDC
Quality: This is seen as conformance to scope (usually met, and measurable by the CRs raised by IT to vary the scope, typically downwards) and also defects in deliverables / code – measurable through monitoring of defects raised during document reviews and the various testing phases.
Delivery (timescales): Probably a joint PM / Development quality measure – I would expect Development to be held to their ETAs +/- 10%, but subject to the PMs controlling change appropriately and to any business-sponsored change on other projects that impacted end dates also being factored in.
Cost: Would be measured by monitoring actual time spent versus estimate. Quite simple to do assuming we are capturing actuals against tasks in a system like Jira, Project or CA PPM (previously known as Clarity, ABT).
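A minimal sketch of the cost measure, assuming actuals and estimates can be exported per task; the task records are invented, and the +/-10% tolerance simply reuses the delivery figure above rather than being an agreed cost target.

```python
# Hypothetical actual-vs-estimate check per project. In practice the actuals
# would come out of Jira, Project or CA PPM; these records are invented.

tasks = [
    {"project": "CRM upgrade", "estimate_days": 40, "actual_days": 48},
    {"project": "CRM upgrade", "estimate_days": 10, "actual_days": 9},
    {"project": "Web portal",  "estimate_days": 60, "actual_days": 58},
]

def cost_variance(task_records):
    """Return {project: percentage variance of total actuals against total estimate}."""
    totals = {}
    for t in task_records:
        est, act = totals.setdefault(t["project"], [0.0, 0.0])
        totals[t["project"]] = [est + t["estimate_days"], act + t["actual_days"]]
    return {p: (act - est) / est * 100 for p, (est, act) in totals.items()}

for project, variance in cost_variance(tasks).items():
    flag = "OK" if abs(variance) <= 10 else "REVIEW"   # illustrative tolerance
    print(f"{project}: {variance:+.1f}% vs estimate [{flag}]")
```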
4. Maximise Development Time
Easy to measure, assuming we track time spent against projects – see the Cost section of QDC above.
Basically we would categorise time spent (typically into business development / support / internal IT) and then set “recovery” targets – typically something like 80% on business development work.
Another implied element to this is to drive down the time spent on support work (including production defects) – this would be done by monitoring and tracking time spent on support, analysing common defects and initiating work to eliminate them (or speed their resolution).
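A minimal sketch of the recovery calculation; the categories and the 80% target follow the text above, while the time entries themselves are invented.

```python
# Hypothetical "recovery" calculation: what proportion of booked time went on
# business development work versus support and internal IT? Entries are invented.

RECOVERY_TARGET = 0.80   # the example target figure used above

time_entries = [  # (person, category, days)
    ("dev1", "business development", 15), ("dev1", "support", 3), ("dev1", "internal IT", 2),
    ("dev2", "business development", 18), ("dev2", "support", 2),
]

def recovery_rate(entries):
    total = sum(days for _, _, days in entries)
    business = sum(days for _, category, days in entries if category == "business development")
    return business / total

rate = recovery_rate(time_entries)
print(f"Recovery: {rate:.0%} (target {RECOVERY_TARGET:.0%}) -> "
      f"{'met' if rate >= RECOVERY_TARGET else 'not met'}")
```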
5. Quality Assurance
Effectively the same as the “Q” in QDC above – we would track defects found at the various stages of the lifecycle (review, the various stages of testing, etc.). Typically companies start tracking at just the system-testing level; CMM level 3 would require tracking at all stages.
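A sketch of what stage-by-stage defect tracking could look like; the defect records and stage names are invented.

```python
# Hypothetical defect counts by lifecycle stage. Tracking every stage (as CMM
# level 3 would require) makes it visible how late defects are being found.

from collections import Counter

defects = [
    {"id": 1, "stage": "document review"},
    {"id": 2, "stage": "system test"},
    {"id": 3, "stage": "system test"},
    {"id": 4, "stage": "UAT"},
    {"id": 5, "stage": "production"},
]

counts = Counter(d["stage"] for d in defects)
for stage in ["document review", "unit test", "system test", "UAT", "production"]:
    print(f"{stage}: {counts.get(stage, 0)}")
```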
6. Cross Skilling
Easy to measure but more difficult to implement. To measure, we just need to track what skills we have in each specialism (e.g. ASP .Net, web (jQuery etc.), SQL (T-SQL, map reduce etc.), tools (Jira etc.), test (Selenium etc.)), at what level (junior, intermediate, senior), and then relate that to the number of individuals.
So, for example, success could be that between the twenty individuals within development we had five senior, fifteen intermediate and twenty junior skill sets. To drive this, individuals would be set objectives to incentivise them not only to be, say, good (intermediate) within one skillset such as ASP .Net but also good or able (junior) in one or two others, e.g. EPI Server.
As time goes by, skillsets (e.g. classic ASP) would be sunsetted with a year or two’s notice, meaning that people would need to learn new skillsets in order to maintain their position – this should not be onerous if managed well.
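A minimal sketch of the skills-matrix measurement; the people, specialisms, levels and targets are invented for illustration.

```python
# Hypothetical skills matrix: count skill sets by level and by specialism and
# compare against a target profile. All data here is invented.

from collections import Counter

skills = [  # (person, specialism, level)
    ("dev1", "ASP .Net", "senior"), ("dev1", "SQL", "junior"),
    ("dev2", "ASP .Net", "intermediate"), ("dev2", "web", "intermediate"),
    ("dev3", "Selenium", "junior"), ("dev3", "SQL", "intermediate"),
]

target_per_level = {"senior": 5, "intermediate": 15, "junior": 20}

actual_per_level = Counter(level for _, _, level in skills)
for level, target in target_per_level.items():
    print(f"{level}: {actual_per_level.get(level, 0)} skill sets (target {target})")

# Per-specialism view – useful when deciding what to sunset or cross-train into.
print(dict(Counter(specialism for _, specialism, _ in skills)))
```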
7. Managing Application Platforms
These relate to skillsets but would be a sub-division of them – for example, the different architectural platforms that we use to support our different web applications. We need to manage these so that we are not trying to maintain too broad a skillset within the team.
Success would be shown by demonstrating that all our active application platforms were valid and that they were sunsetted (and introduced) as advised by Architecture.
8. Avoiding Legacy
Easy to track current versions of our different applications' environments.
We could present a summary report for management (including IT Architecture).
Moving forward in terms of versions may involve IT spend – if the upgrade does not fall within a business project – though of course IT can decide not to move ahead.
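A sketch of the version-currency report; the application names and version numbers are invented, and in practice the "latest" column would come from the vendor or from Architecture.

```python
# Hypothetical version-currency report: flag applications whose platform
# version has drifted from the latest supported release. All data is invented.

applications = [
    {"app": "Intranet",   "platform": ".NET Framework", "current": "4.8",  "latest": "4.8"},
    {"app": "Web portal", "platform": ".NET Framework", "current": "4.5",  "latest": "4.8"},
    {"app": "Reporting",  "platform": "SQL Server",     "current": "2016", "latest": "2022"},
]

for a in applications:
    status = "current" if a["current"] == a["latest"] else "BEHIND"
    print(f"{a['app']:<12} {a['platform']:<16} {a['current']:>5} vs latest {a['latest']:>5}  {status}")
```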
9. Responsive Support
Measure by tracking support-related MI – typically reports from a ticketing package (e.g. Remedy) – and present as pivot tables / cubes.
Historical Trends
- Number of issues raised per team / application by month.
- Target – reducing trend 10% per year – but adjusted by number of live applications.
Current Status
- Number of open bugs by length of time open, severity etc.
- Number of bugs resolved and time taken, by team and individual.
These would be subject to business SLAs and influenced by the reduction target above (a sketch of this kind of cut is shown below).
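The ticket records and annual totals in the sketch below are invented; a real cut would come straight out of the ticketing package.

```python
# Hypothetical support MI pivot: tickets raised per application per month,
# plus a simple check against a "reduce 10% year on year" target.

from collections import defaultdict

tickets = [  # (application, month raised) – invented records
    ("CRM", "2015-01"), ("CRM", "2015-01"), ("CRM", "2015-02"),
    ("Web portal", "2015-01"), ("Web portal", "2015-02"), ("Web portal", "2015-02"),
]

pivot = defaultdict(lambda: defaultdict(int))
for application, month in tickets:
    pivot[application][month] += 1

for application, by_month in pivot.items():
    print(application, dict(by_month))

# Year-on-year trend check against the 10% reduction target (invented totals).
raised_last_year, raised_this_year = 120, 104
reduction = (raised_last_year - raised_this_year) / raised_last_year
print(f"Year-on-year reduction: {reduction:.0%} (target 10%)")
```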
10. HR Best Practice
Easy to track completion of the various activities (objective setting, reviews etc.) per staff member – this can be reported in a simple matrix.