Wednesday, 11 December 2013

Test Cases, an Investment

It never ceases to frustrate and disappoint me when I hear people talking of test cases as use-once, throwaway artefacts. Any team worth its salt will be building a library of tests and will see that library as an asset and something worth investing in.

Any system change needs to be tested from two perspectives:
  1. Has our changed functionality taken effect? (incremental testing)
  2. Have we broken any existing functionality? (regression testing)
The former tends to be the main focus; the latter is often overlooked (it is assumed that nothing got broken). Worse still, since today's change will be different to tomorrow's (or next week's), there's a tendency to throw away today's incremental test cases. Yet today's incremental test cases are tomorrow's regression test cases.

At one extreme, such as when building software for passenger jet aircraft, we might adopt the following strategy:
  • When introducing a system, write and execute test cases for all testable elements
  • When we introduce a new function, we should write test cases for the new function, we should run those new test cases to make sure the new function works, and we should re-run all the previous test cases to make sure we didn't break anything (they should all work perfectly because nothing else changed, right?)
  • When we update existing functionality, we should update the existing test cases for the updated function, we should run those updated test cases to make sure the updated function works, and we should re-run all the previous test cases to make sure we didn't break anything (again, they should all work perfectly because nothing else changed)
Now, if we're not building software for passenger jets, we need to take a more pragmatic, risk-based approach. Testing is not about creating guarantees, it's about establishing sufficient confidence in our software product. We only need to do sufficient amounts of testing to establish the desired degree of confidence. So there are two relatively subjective decisions to be made:
  1. How much confidence do we need?
  2. How many tests (and what type) do we need to establish the desired degree of confidence?
Wherever we draw the line of "sufficient confidence", our second decision ought to conclude that we need to run a mixture of incremental tests and regression tests. And rather than writing fresh regression tests every time, we should be calling upon our library of past incremental tests and re-running them. The bottom line is that today's incremental tests are tomorrow's regression tests - they should pass unedited, because no other part of the system has changed.
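
To make that concrete, here's a minimal sketch of a re-runnable test in SAS. The program path, library and baseline dataset are invented for illustration; the pattern is simply "re-run the code under test, then let PROC COMPARE check its output against a stored, approved baseline":

  /* Re-run the code under test (hypothetical program; creates work.rpt) */
  %include "/code/monthly_report.sas";

  /* Baseline saved when the incremental test first passed (hypothetical location) */
  libname regress "/tests/baselines";

  proc compare base=regress.rpt_baseline compare=work.rpt;
  run;
  %let rc = &sysinfo;  /* PROC COMPARE's result code; 0 means an exact match */

  %macro check_result;
    %if &rc = 0 %then %put NOTE: regression test passed;
    %else %put ERROR: regression test failed (sysinfo=&rc);
  %mend check_result;
  %check_result

Run unchanged after every subsequent release, the same program becomes a regression test; a non-zero result code is your early warning that something else has been broken.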

Every one of our test cases is an investment, not an ephemeral object. If we're investing in test cases and managing our technical debt, then we are on the way to having a responsibly managed development team!

Tuesday, 26 November 2013

More on Technical Debt #2/2

Last week I offered some techniques for management of technical debt. In this post I offer some more.

Technical debt is a debt that you incur every time you avoid doing the right thing (like refactoring, or removing duplication/redundancy), thereby letting the code quality deteriorate. As with financial debt, it is the easy option in the short term, but over time you pay interest on the debt in the form of ever-poorer code quality. And, as with real debt, it can be used beneficially if managed well.

1. Refactor technical debt away. Apply several forms of refactoring, including code refactoring, data model refactoring, and report interface refactoring. Refactorings are typically very small, such as renaming an operation or splitting a data mart column, so they should just be part of everyday development. Rework, on the other hand, is more substantive and should be explicitly planned. The Architecture Owner (see below) will often negotiate rework-oriented work items with the Product Owner (the person on the team who is responsible for prioritising the work).

2. Regression test continuously. One of the easiest ways to find problems in your work is to have a comprehensive regression test suite that is run regularly. This test suite will help you detect when defects are injected into your code, enabling you to fix them, or back out the changes, right away.

3. Have an explicit architecture owner. The Architecture Owner (AO) should be responsible for guiding the team through technical decisions, particularly those at the architecture level. AOs often mentor other team members in design skills, skills that should help them avoid injecting new technical debt into the environment. They should also be on the lookout for existing technical debt and, where appropriate, motivate the team to address it.

4. Do a bit of up-front thinking. Develop a technical strategy early on in your project. By thinking through critical technical issues before you implement your solution, you have the opportunity to avoid a technical strategy that needs to be reworked at a future date. The most effective way to deal with technical debt is to avoid it in the first place.

5. Be enterprise aware. Good development teams are enterprise aware, realising that what they do should leverage and enhance the overall organisational ecosystem. They will work closely with your enterprise architects so that they can take advantage of existing IT assets. An important strategy for avoiding technical debt is to reuse existing assets and not rebuild or rebuy something that you already have.

Manage your debt and it will pay you back; pay no attention to it and you may end up with a credit bubble!

Tuesday, 19 November 2013

More on Technical Debt #1/2

Last year I introduced the topic of technical debt. Technical debt is a debt that you incur every time you avoid doing the right thing (like refactoring, or removing duplication/redundancy), thereby letting the code quality deteriorate. As with financial debt, it is the easy option in the short term, but over time you pay interest on the debt in the form of ever-poorer code quality. And, as with real debt, it can be used beneficially if managed well.

I thought I'd list a few of the techniques I use to manage debt. I'll list five here, and offer some more in a subsequent post.

1. Reduce the debt before implementation. Passing systems with high technical debt to other teams, such as a systems operations team, is generally bad practice. It should be ingrained in your culture that each team is responsible for keeping the quality of their solutions high. It is reasonable to expect maintenance groups to resist accepting systems that carry high technical debt.

2. Some technical debt is acceptable. Sometimes you will decide to explicitly accept some short-term technical debt for tactical reasons. Perhaps there is a new component or framework about to be delivered by another group in your organisation, so you’re writing a small portion of what you need for now, until you can replace it with the more robust component. Regardless of the reason, part of the decision to accept technical debt is to also accept the need to pay it down at some point in the future. Having good regression testing in place ensures that refactoring accepted technical debt in the future can be done with low risk.

3. Measure technical debt. If you are serious about technical debt then you must measure it and, more importantly, keep an eye on the trend (which should be downward over time). Keep a log of technical debt that identifies each element.
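
As an illustration (and only an illustration - the dataset, fields and severity scoring below are invented, not any kind of standard), the log can be as simple as a SAS dataset, with a periodic report showing the trend:

  /* A hypothetical technical debt register held as a SAS dataset */
  data work.debt_register;
    length item $60 area $20;
    format raised paid date9.;
    infile datalines dsd;
    input item :$60. area :$20. severity raised :date9. paid :date9.;
    datalines;
  Hard-coded FX rates in load job,ETL,3,12NOV2013,.
  No error trapping in report macro,Reporting,2,02OCT2013,26NOV2013
  Manual file transfer from source system,ETL,4,15OCT2013,.
  ;
  run;

  /* The figure to watch: debt still open, by the month it was raised */
  proc sql;
    select intnx('month', raised, 0) as month format=monyy7.,
           count(*)      as open_items,
           sum(severity) as open_severity
    from work.debt_register
    where paid is missing
    group by calculated month
    order by calculated month;
  quit;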

4. Explicitly govern your technical debt. For your organisation to succeed at reducing technical debt it must be governed. This means it needs to be understood by senior management, measured (see previous point), and funded.

5. Make the reduction of technical debt part of your culture. Technical debt isn't going to fix itself, and worse yet will accrue "interest" over time in the form of slower and more expensive evolution of your system.

As with real debt, technical debt can be used positively if it is well managed. Using the above techniques will help you to manage it.

Read more:
More on Technical Debt #2/2

Monday, 18 November 2013

NOTE: More Agility at SAS

Last month I featured an article by SAS's Tim Arthur regarding the adoption of agile techniques at SAS. Tim promised to produce another article in November, and he has been as good as his word.

In 5 More Ways SAS Scaled Agile Scrum, Tim focuses on coaching options, communication, scale, and closing the loop. All of these topics are focused on increasing the adoption of agile around the organisation. Judging by the speed with which new versions of Visual Analytics are being released, the agile approach is making a positive difference at SAS.

Good tips from Tim. I'd add the following that are specifically related to agile planning:
  • Split the project into short iterations. By working in short iterations of 2 - 4 weeks each, and being sure to deliver working software at the end of each, you get a true measure of your progress

  • Only create detailed plans for imminent tasks. Schedule in detail weeks ahead but not months; use a high-level plan for the months ahead. In practice, this usually means producing a detailed plan for the next couple of iterations

  • Ensure the people doing the work are actively involved in scheduling. They have the skills and knowledge, plus the motivation to get it right. And it ensures they buy in to the plan

  • Allow people to choose their work, don't assign it. Again, this ensures commitment and buy-in

  • Take a requirement-centred approach. Centre your plan around delivering features or user stories rather than the traditional design, build, test activities

  • Remember training. If agile is new to your enterprise, remember to include training for new staff, and refresher sessions for existing staff
Good luck with your agile project.

Wednesday, 16 October 2013

NOTE: Increasingly Agile

I'm a keen follower of SAS's adoption of Agile delivery techniques, and I've posted articles on the subject in the past. You'll have noticed how SAS are releasing new versions of Visual Analytics every six months; this is a good example of the benefits of Agile.

Since my earlier article, SAS's Tim Arthur has published a couple more articles on the subject. How is Being Agile Different From Doing Agile was published in July, and 5 Ways SAS Scaled Agile Scrum was published earlier this week. Both are highly informative regarding Agile in general and SAS's use in particular.

Tim intends to publish more information next month, so stay tuned!

Monday, 24 June 2013

NOTE: Agile Developments at SAS

Seek and you shall find. Isn't Google wonderful? I'm a keen proponent of Agile software development practices and processes. I've long been curious to know more about the software development processes used within the various teams at SAS. Jason Burke wrote an informative blog post back in 2009 but I just hit upon a recent paper by SAS's Tim Arthur titled Agile Adoption: Measuring its worth.

The paper was prepared for the April 2013 Strategic Execution Conference and it describes the results of Tim's survey on Agile adoption within SAS since the initiation of some pilots in 2007. Tim describes how Agile has been interpreted and adopted at SAS, and then presents his survey findings. The survey found that respondents believed their teams had produced higher quality products since adopting Agile practices, and that they would recommend Agile techniques to other teams.

The paper places greater emphasis on the survey and its results than on the Agile practices and processes adopted within SAS, but I nonetheless found it a most interesting read.

Accompanying slides (and the paper) can be found here: http://support.sas.com/rnd/papers/2013/AgileAdoption.zip.

[UPDATE. Coincidence? Tim Arthur just posted a blog entry about Agile on the SAS blog site today. It's titled How SAS R&D does agile and it briefly talks about the use of Agile at SAS]

Tuesday, 11 June 2013

NOTE: Release Management and Version Control

Yesterday I mentioned SAS platform administrators and how their work and their value can be overlooked. My post related to an article on the SAS Users Groups blog. Last week I saw another SAS Users Groups blog entry about something else that is often overlooked: Release Management.

The blog post was a nod to the best contributed paper in the Systems Administration and Architecture stream: SAS Release Management and Version Control by John Heaton (also winner of best contributed paper in the Data Management stream as I mentioned on Friday).

At the risk of turning this week's sequence of blogs into a celebration of all-things-overlooked, I thought it worth highlighting John's paper because management of releases and versioning does indeed tend to get lost in the excitement and frenzied activity associated with application upgrades. Steve O'Donoghue and I were invited to present a related paper at SAS Global Forum 2009 entitled Configuration Management for SAS Software Projects so it's clear that the topic is one in which I hold a special interest.

John describes release management as "the process of managing multiple different software releases between environments in a controlled, auditable and repeatable process" and his paper looks at the capabilities of the SAS 9.3 toolset to build an effective Release Management process to support the migration and maintenance of SAS code throughout the software development lifecycle (SDLC).

John makes specific reference to Subversion (SVN) as a tool for storing current and previous versions of code. SAS tools don't explicitly support SVN (or any of its competitors), so John describes his approach to marrying SAS and SVN.
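
By way of illustration only - this isn't John's method, and the paths, repository URL and tag name are invented - SAS can drive the SVN command line from any session where the XCMD option is permitted:

  /* Bring the deployment area's working copy up to date */
  systask command "svn update /sas/code/etl" wait status=svnrc1;
  %put NOTE: svn update completed with status &svnrc1;

  /* Tag the code that is about to be released */
  systask command
    "svn copy http://svnserver/repo/trunk http://svnserver/repo/tags/rel_1_4 -m ""Tag release 1.4"""
    wait status=svnrc2;
  %put NOTE: svn copy completed with status &svnrc2;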

Having introduced SVN, John continues by covering the import and export of metadata-resident objects such as DI Studio jobs (DI Studio is the focus of John's paper). He doesn't forget to include an approach for deploying unplanned software patches.

John's paper is very technical in nature, but be sure to understand the overriding strategies that are implicit in what he says. Ignoring or overlooking release and version control is perilous. If you're working in a regulated industry then your regulators are likely to conclude that you are not in control of your software development process. Without clear, robust release and version control you risk expensive mistakes during or after software upgrades.

Wednesday, 15 May 2013

Affinity Diagrams for Problem Solving #sasgf13

I was pleased to be invited to present a paper on Visual Techniques for Problem Solving and Debugging at this year's SAS Global Forum (SGF) conference. I spoke about the importance of human interaction in solving complex issues; the process and people make a far greater contribution than the associated software tools. I spoke about seven more-or-less visual techniques, some of which I've highlighted in NOTE: before:
DMAIC is an excellent end-to-end process to give structure to your whole problem solving endeavour. 5 Whys is a flexible technique for probing root causes. Ishikawa is a terrific approach to information gathering and helps ensure comprehensive coverage of the problem area.
The Ishikawa diagram (and most of the other techniques I discussed) is a top-down approach. The distinctive element of the Affinity diagram is that it is created bottom-up. Whilst the Ishikawa (and Mind Map) are drawn by starting with general topics (or questions) and then drilling down into detail, the process of drawing an Affinity diagram begins with a brainstormed set of detailed observations and facts.

The bottom-up idea can sound unstructured, but is it ever a bad thing to have too many ideas? Probably not, but if you've ever experienced information overload or struggled to know where to begin with a wealth of data you've been given, you may have wondered how you can use all of these ideas effectively.

When there's lots of "stuff" coming at you, it is hard to sort through everything and organise the information in a way that makes sense and helps you make decisions. Whether you're brainstorming ideas, trying to solve a problem or analysing a situation, when you are dealing with lots of information from a variety of sources, you can end up spending a huge amount of time trying to assimilate all the little bits and pieces. Rather than letting the disjointed information get the better of you, you can use an Affinity diagram to help you organise it.

Also called the KJ method, after its developer Kawakita Jiro (a Japanese anthropologist), an Affinity diagram helps to organise large amounts of data by finding relationships between ideas. The information is then gradually structured from the bottom up into meaningful groups. From there you can clearly "see" what you have, and then begin your analysis or come to a decision.

Here’s how it works:
  1. Make sure you have a good definition of your problem (ref: DMAIC)
  2. Use a brainstorm exercise (or similar) to generate ideas, writing each on a sticky note. Remember that it’s a brainstorm session, so don’t restrict the number of ideas/notes, don’t be judgemental, don’t be afraid to re-use and enhance ideas on existing sticky notes, and don’t try to start solving the problem (yet)
  3. Now that you have a wall full of sticky notes, sort the ideas into themes. Look for similar or connected ideas. This is similar to the Ishikawa’s ribs, but we’re working bottom-up, and we’re not constrained by a set of ribs as our starting points. When you’re doing this, it may help to split everybody into smaller teams
  4. Aim for complete agreement amongst all attendees. Discuss each other’s opinions and move the sticky notes around until agreement is reached. You may find some ideas that are completely unrelated to all other ideas; in which case, you can put them into an “Unrelated” group
  5. Now create a sticky note for each theme and then super-themes, etc. until you've reached the highest meaningful level of categorisation. Arrange the sticky notes to reflect the hierarchical structure of the (super)themes
You’re now in a similar position to where you would be with an Ishikawa diagram and can proceed accordingly. The benefit of the Affinity diagram over Ishikawa is that the bottom-up approach can produce different results and thereby offer different perspectives on your problem.

Affinity diagrams are great tools for assimilating and understanding large amounts of information. When you work through the process of creating relationships and working backward from detailed information to broad themes, you get an insight you would not otherwise find. The next time you are confronting a large amount of information or number of ideas and you feel overwhelmed at first glance, use the Affinity diagram approach to discover all the hidden linkages. When you cannot see the forest for the trees, an Affinity diagram may be exactly what you need to get back in focus.

If you'd like to know more about some of the other techniques, you can catch an audiovisual recording of my whole paper on Brainshark.

Tuesday, 23 October 2012

Technical Debt

Last week I mentioned a term that was new to me (Mutation Testing) and so I thought I'd mention another recently acquired term - Technical Debt. In this case I was familiar with the concept, but I hadn't heard the term before. I think the term very succinctly describes the concept.

We're all familiar with the fact that the software that we build isn't perfect. I don't mean it's full of bugs, I mean that there are things we could have done in a more robust or long-lasting manner if we'd had the time or the money. It could be code or it could be architecture. This is our technical debt - things that are an effective and appropriate tactical and short-term choice but which we should put right in the longer-term in order to avoid specific risks or increasing costs (the interest on the debt).

Examples of technical debt include:
  • Incomplete error trapping, e.g. we know that the code will bomb in certain circumstances (such as when the supplied data is in the wrong format) and will not offer the user any message to explain why it bombed and what they need to do to avoid it. As a tactic to get the code out of the door, this is sometimes necessary
  • Hard-coding a series of values rather than placing them in a control file and giving the appropriate people the ability to edit that file (see the sketch after this list). Again, as a tactic to get the code out of the door, this is sometimes necessary
  • Coding-up a routine that is known to be part of the base software in the next version of the base software. This may be necessary as a short-term measure because the upgrade to the next version of the base software is a significant project in itself
  • Attaching a barely sufficient amount of temporary storage 
  • Using a non-strategic means of getting source data into your ETL process
  • Delivering an early release of software that doesn't fully meet all requirements
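
To put the hard-coding example into SAS terms (the datasets, file name and cut-off values here are all invented for the purpose of illustration):

  /* Tactical version: banding rules hard-coded in the program */
  data work.banded;
    set work.scores;
    if score >= 650 then band = "GOOD";
    else band = "POOR";
  run;

  /* Paying down the debt: read the cut-offs from a control file      */
  /* (a hypothetical CSV with columns start, end and label) and build */
  /* a format from it, so the business can change the bands without a */
  /* code release                                                     */
  data work.cntlin;
    retain fmtname "scoreband" type "n";
    infile "/config/cutoffs.csv" dsd firstobs=2;
    input start end label :$8.;
  run;

  proc format cntlin=work.cntlin;
  run;

  data work.banded;
    set work.scores;
    band = put(score, scoreband.);
  run;

Changing the banding rules is now a data change rather than a code change - which is precisely the debt being repaid.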
Whatever form your own technical debt takes, it is important that you maintain a register of it and that you manage it.

As in our personal lives, debt is not necessarily a bad thing. It allows us to buy a house and/or a car that would otherwise be out of reach. The key thing is to recognise that one has the debt and to manage it - which is not necessarily the same thing as removing the debt.

Release cycles can make a considerable difference in the rate of acquisition and disposal of technical debt. Releasing early and often makes it much easier to take on technical debt but also makes it easier to resolve that debt. When well-managed, this can be a blessing - taking on debt earlier allows you to release more functionality earlier, allowing immediate feedback from customers, resulting in a product that is more responsive to user needs. If that debt is not paid off promptly, however, it also compounds more quickly, and the system can bog down at a truly frightening rate.

Shortcuts that save money or speed up progress today at the risk of potentially costing money or slowing down progress in the (usually unclear) future are technical debt. It is inevitable, and can even be a good thing as long as it is managed properly, but this can be tricky: technical debt comes from a multitude of causes, often has difficult-to-predict effects, and usually involves a gamble about what will happen in the future. Much of managing technical debt is the same as risk management, and similar techniques can be applied. If technical debt isn't managed, then it will tend to build up over time, possibly until a crisis results.

The term "technical debt" was coined by Ward Cunningham in his 1992 OOPSLA paper The WyCash Portfolio Management System.

Technical debt can be viewed in many ways and can be caused by all levels of an organization. It can be managed properly only with assistance and understanding at all levels. Of particular importance is helping non-technical parties understand the costs that can arise from mismanaging that debt.

Aside from reading Ward's 1992 paper, you can find plenty more valuable sources of information on this topic.

Take good care of your debt and it will take good care of you. The reverse also holds!

Tuesday, 18 September 2012

Whatever You Call It, It's All About People First

Achieve Intelligence (AI) just published its latest monthly news article. AI's monthly publications are all themed on building your Business Intelligence (BI) strategy. This month's publication is entitled "Capability Improvement – The Three P’s" and it describes the importance of people, process and plumbing in your strategy.

AI's monthly publications are long-overdue a mention in NOTE: because they are written by a team of people who have "got the T-shirt" in addition to going "there" and seeing "it"; plus, the publications are written in bite-sized chunks, so you can take an actionable nugget of information from each one. Similar to NOTE:, you can visit the web site monthly, or you can request a convenient monthly email. I've not yet found a means to subscribe through RSS (the approach many NOTE: readers use).
Past publications from AI have included:
  • Why do I need a Business Intelligence Strategy?
  • How to Create a Business Intelligence Strategy
  • Reasons for chaos: before a Business Intelligence Strategy
  • Five Areas of Business Intelligence Strategy
  • Stakeholder Management

AI's topic this month is one that I visited myself a couple of years ago. In "Keys to Success: People, Process, Technology", I wrote about how people are the most important factor in the success of any endeavour, closely followed by the business processes that the people use to achieve their goals. I refer to the final element as "technology" where AI prefers "plumbing", but our message is the same... no project or strategy can focus on technology alone.

If you'd like to receive some advice and challenges about BI strategy each month, subscribe to AI. Better still, invite them to visit you and discuss your BI strategy. You do have a clearly-expressed BI strategy, don't you?...

Monday, 30 April 2012

Requirements. Whose Responsibility? #sasgf12

I was pleased to see some papers on the subject of software development processes at SAS Global Forum this year. The IT industry hasn't yet reached a point where a consensus on the perfect software development process has been reached (will it ever?). So, it's no surprise that opinions differ on some matters.

One paper I attended opined that "requirements are developed by the end user of the software and not by the developer". The paper had a lot to commend it, but on this one point I strongly disagree.

Capturing requirements is a skill. It is not easy to gather all facets of the business requirements, and it is not easy to document them in a fashion that best serves all the needs of the development process (and beyond). Thus, it is unreasonable to expect users (or developers) to possess these skills unless they have been explicitly trained.

If training is required (e.g. in the absence of trained Analysts), does it not make more economic sense to train developers? They can be trained once and then use their skills (and growing experience) multiple times on subsequent projects. If you train a user, they are unlikely to re-use those skills (unless their application is in a constant state of change).

There are a variety of tools and techniques for performing analysis for requirements capture. One of the key skills is the ability to see beyond the current business process and to capture the true needs of the new one. Users do not always have a full understanding of every aspect of their current business process, so it is far from likely that they can accurately specify their target requirements unaided. If requirements are to be of use, they must be documented in a form that facilitates their subsequent use by a) architects and designers, b) test case authors, and c) maintenance developers.

In my opinion, it is a developer's responsibility to help the user understand their current business process (particularly the processes for dealing with abnormal situations), and to guide them in the art of the possible for their target requirements. Developers need people-skills in addition to knowledge of tools and techniques for requirements capture.

The art of the possible is a key element of the requirements capture phase. We've all had experience of i) users asking for features that seem simple to them but are difficult/expensive for us to implement, and ii) users not asking for features that would be of high value to them but which they thought were too hard for us to deliver. I've seen countless examples of users telling me that they need the ability to:
a) email various reports to groups of people, and
b) write reports as spreadsheets.

Users typically express requirements in terms of things with which they are familiar, i.e. existing technology. We can advise them of the extended capabilities of:
a) portal and publish/subscribe capabilities that avoid the need to clog up the email system with uncontrolled copies of reports, and
b) Web Report Studio and the SAS Add-In for Microsoft Office, which give the user the ability to "interact" with the data without the need for the data to leave the data centre.

If you're a developer, and you don't have professional Analysts to help you, take an interest in requirements capture; appreciate the skills, techniques and tools at your disposal, and (if possible) get some training to enhance your ability.

Delivering a successful project is a result of good teamwork. It is not the users' sole responsibility to produce good requirements; nor am I saying that it is the developers' sole responsibility. It's a question of what each party brings to the table. The users have to be committed and provide their time in addition to their knowledge and experience of the business; the developers must be willing and able to help the users express their requirements. You will succeed as a team.

Garbage in, garbage out. If all of the project's stakeholders are not clear on what is to be delivered, the chances of meeting everybody's expectations are much reduced. The capture of good quality requirements is crucial for ensuring the success of your projects. Play your part!

Papers Without SAS?! #sasgf12

I was pleased to see a number of papers at this year's SAS Global Forum that dared to focus on topics outside of SAS technology and syntax. Two papers that particularly caught my interest were How to Create a Business Intelligence Strategy by Guy Garrett, and The Systems Development Life Cycle (SDLC) as a Standard: Beyond the Documentation by Dianne Rhodes. These papers were good demonstrations of the fact that you can buy the best software in the world, but you'll not optimise your return on investment if you don't put it to use in a planned, structured manner.

The focus of SAS Global Forum should always be SAS software and solutions. I'm not suggesting the event should be turned into a computer science conference, but there's a balance that can be struck. In my opinion, the balance lies at a point whereby attendees' interest in planning and process can be piqued such that they want to find out more once they return to their office.

Tuesday, 30 November 2010

Keys to Success: People, Process, Technology

This blog is focused upon best practice for software development - typically using SAS software. It can sometimes seem like an eclectic mix of stuff, but an awful lot of things contribute to making great software. And it's not just about making great software: most of us have discovered that building a great new software tool will not always produce a successful project outcome. The saying "a fool with a tool is still a fool" springs to mind. The tool is only as good as the Processes that wrap around it. Taking it a stage further, the Processes are only as good as the People that use them.

People, Process, Technology - PPT

The most important factor is People; get this right and the rest will follow. You need to be sure that the users of your new, shiny SAS system are properly trained in all aspects of the system and have the appropriate professional skills, experience and qualifications too. For example, making the most effective use of SAS's data mining technology requires data mining skills, not just a knowledge of what the buttons and menus offer.

Beyond people, Process is crucial. You need to build and document your business processes and workflows before you get into detailed design of your technology, else you'll end-up with the tail wagging the dog! Train your people on your new processes, but be sure to involve them in the development of those processes in order to get their buy-in.

But, what is a business process? Put simply, it's a sequence of activities, started by a trigger event and ending in a defined output and/or outcome, with specified steps performed by specified people/roles. Some examples would include:
Process#1: Produce new churn report
Trigger: Request from head of marketing
Output: New, scheduled churn report delivered to information delivery portal

Process#2: Credit score a potential new loan customer
Trigger: Potential customer telephones call centre
Output: Accept/reject potential customer's request for a loan
You need to define processes for a variety of aspects of your system, not just the business outputs. Make sure you have processes that cover support and administration too. Earlier this year I did a post-implementation review of a multi-million pound SAS analytics platform whose development had focused on technology and pushed people and processes to the side. The first few weeks after go-live had been very painful for users, IT and sponsors alike. Seeing the problems, everybody had pitched in and stabilised the system such that it could be used productively, but the post-implementation review warned that the system would not remain stable if proper processes were not put in place quickly.

Some of the things highlighted in the post-implementation review were:

  • No data governance model, including an absence of data ownership/stewardship/custodianship, and an absence of a strategy for dealing with data quality and data loading issues
  • No change control processes for handling new groups of users, requests for new data sources, etc
  • No defined support processes specifying single points of contact for key areas

At its simplest, you can document a process as a series of steps, with one or more trigger events, with inputs and outputs for each step, with named people or job roles for each step. Swim lane diagrams are often used to document this information. Start by capturing the big steps in the process, then drill into the detail.

Ignore processes and people at your peril: the success of your project depends on them far more than the technology!

Tuesday, 22 June 2010

Bug Safaris - A Useful Activity?

I was introduced to a new computing term the other day: bug safari. I wasn't convinced by the idea, but I'm keen to hear others' thoughts. Why not write a comment once you've read this article. Tell me, and your fellow readers, what you think.

I've been doing some work with a company named Space Time Research (STR). They're an Australian company who produce some rather good tabulation and visualisation software.

In a 2009 STR blog entry, Jo Deeker & Adrian Mirabelli describe how the STR quality team used a "bug safari" to enhance the quality of an upcoming release. Upon reading the blog entry for the first time, it sounded to me like they just arranged for some people to randomly use the software and deliberately try to find bugs. But reading it again more carefully I could see some structure and planning elements, and I could begin to see some merit.

Conventional, structured testing is focused upon the use of test scripts which are themselves traceable to elements of the requirements and/or specification. In this way, you can be sure you have planned and scripted at least one test for each functional or design element (I shall talk about the V-model in a later blog article). On the face of it there is no value in any further testing, since you believe you've tested everything. But software often incorporates complex paths, and testing rarely exercises all of them (testing produces confidence, not guarantees). So I can see merit in allowing users to go "off piste" with their testing and spend a limited amount of time just using the software and trying to break it.

As I say, testing is about producing confidence not guarantees, and I see that bug safaris can generate confidence in some situations.

What do you think? Share your thoughts; write a comment...

Wednesday, 12 May 2010

Developer Testing

When I took over a "failing" development team in a high-profile banking project in London, I introduced a simple form for handing-over code from the development team to the system testing team. Apart from details such as why the change was being made and how the code should be transported from Dev to Test environments (and properly installed), the form included a tick-box to say that developer testing had been done (and fields to specify where the test code and test output was archived). Prior to this the developers had complained of being pressured to hand-over untested code in a management rush to get development work "completed".

I told the team that I wanted the form filled-in honestly and openly. I told them that if they had done no testing because they had been pressured into delivering before they had time to test it, they should write this fact on the form and must not tick the "tested" box. I told them they had my full support if anybody came back to them and complained about the quality of the code after they'd not tested it due to management pressure. Of course, I also told them I wanted to be told if they felt they were being pressured to skip testing. And finally, I told them that I expected them to include appropriate time for appropriate amounts of testing in any plans they put forward.

Tuesday, 16 March 2010

Project Plans in Excel - As a Chart

The series of posts on project planning in Excel with Gantt charts has been very popular, and one of the most popular questions has been "Why don't you use an Excel chart?" Well, the answer's simple: they don't work very well for large lists of tasks. For small lists they look very nice, but they don't scale well, hence I prefer to keep my Gantt in the cells of the worksheet. However, for completeness I thought I'd offer this bonus post to show how it's done. You can see the end result alongside this paragraph (right). I'm using Excel 2003.

We'll start with the result from the last post, including the progress bars in the worksheet. You can see it alongside this paragraph (left). Since we used the cells to indicate progress, we were limited to showing progress in chunks of whole days. In the chart we will be able to show a more accurate picture of progress.

I'm going to start by removing the groups that we had in the last result - I've never explored how they can successfully be charted. So, let's select the input data area (A2 to F9), go to the Subtotals window, and click the Remove All button (then confirm that you understand that entire rows will be removed). Our chart collapses and looks as shown below.

Tuesday, 9 March 2010

Project Plans in Excel - Tracking to Completion


The series on maintaining a project plan and Gantt chart in Excel has been popular, and I've had a lot of queries about tracking progress. So, in this bonus post I’ll describe how to display tasks’ progress on the Gantt chart that was featured in the previous posts in this series. In addition, I’ll show how to highlight “today”. Alongside this paragraph (right) you can see what the result of this post looks like.

In the three previous posts in this series I described how to create a neat and simple Gantt chart, how to add dates to the day numbers, and how to group tasks. These three simple sets of steps have given the developer sufficient knowledge to quickly create a simple but effective Gantt chart that demonstrates the developer is in control of the project (without spending more time on planning than on delivery). Alongside this paragraph (left) you can see what the results of our previous efforts look like.

Let’s start by inserting a column after E and heading it “%Done”. This is where you'll need to type values to indicate your progress. Then, after column N, let’s add “Done” and “DoneEnd”. I’ve made the text colour of the latter two columns a semi-visible grey because they’re working values and not of interest to the reader of the Gantt chart. In the picture below, you’ll see that I’ve also populated the %Done column with some values.


Now let’s populate the calculated columns. The working columns are not strictly necessary, but they’ll help illustrate the calculations that we’re doing. Firstly, let’s understand what we’re trying to achieve. Cell H3 represents progress on activity #1 on day 1. We’ll display a block in the cell if progress on the activity is equal to (or greater than) half a day’s effort. So, for activity #1 we can see that effort is complete up to the end of day 3; for activity #2 the effort is complete up to 2/3 of the way through day number 2. Since day number 2 for activity #2 is more than half complete we’ll put a block in that cell (but not day number 3).

Tuesday, 2 March 2010

Project Plans in Excel - Grouping Tasks

In the two previous posts in this series I described how to create a neat and simple Gantt chart and how to add dates to the day numbers. In this post I’ll describe how to group your tasks in the chart that was featured in the previous post. The picture alongside (right) shows the end result from today's post. Grouping tasks is a generally useful thing to do, but I also find that my list of tasks grows as time goes by, so I might not need groups to begin with, but they become a useful way of keeping my plan tidy after it has grown.

As with the previous cases, I’m going to describe a quick and simple method. The objective is to have a useful and communicative chart without spending too long on creating it and without making it difficult to maintain. We start with the chart that was created in the last posting (shown to the left). Remember my comments in the first post in this series: I expect SAS developers to run their own (small to medium sized) projects from time-to-time, and I expect them to know how to work to a plan.

Tuesday, 23 February 2010

Project Plans in Excel - Adding Dates

In the previous post in this series I described how to use Conditional Formatting to create a neat and simple Gantt chart alongside a simple Excel-based project plan. In this post I’ll describe how to use dates in addition to the day numbers that were featured in the previous post. The picture alongside (right) shows the result from today's post.

As with the previous case, I’m going to describe a quick and simple method. This method also takes weekends into account as non-working days. We ended the last post with what you see alongside (left).

So, let’s begin by adding the date for day 1 into cell F1 (I’m typing “22/2” to represent 22nd February). It’s not readable in the small width of the cell, so we’ll go to the Format Cells window (you can use Ctrl-1 to get there quickly) and select a text orientation of 90 degrees. Then, to get the date format that we want, we’ll stay in the Format Cells window and specify a custom number format of “dd-mmm (ddd)”. If the height of row 1 doesn’t automatically increase for you, just do it manually. You should have a result like this:

Wednesday, 17 February 2010

NOTE: The Missing Semicolon Just Arrived

Systems Seminar Consultants' newsletter (named The Missing Semicolon) is always a good read, so I was pleased to get notification of the Winter 2010 issue last week. Featuring a mixture of topics, this issue seems to focus on writing good documentation (program documentation and system documentation). Please don't view this as a switch-off topic! Read the articles and you'll better understand the benefits that properly targeted and focused documentation offers.

However, I do strongly disagree with the author's rule of adding a comment to every line of code. Programming standards always give rise to a strong degree of discussion, but in my opinion slavishly putting comments onto every line of code doesn't add anything to the reader's knowledge of the code. Indeed, in the example code given, the vast majority of on-the-line comments are stating the obvious. Comments should describe what is not obvious in the code - that typically means describing what blocks of code are doing and/or why a particular approach was taken (and why other approaches were considered but discarded).
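
To illustrate the point with a made-up SAS fragment (not the newsletter's example - work.orders and custid are invented names), compare line-by-line commentary with a single block comment that explains intent:

  /* Comments that state the obvious add nothing: */
  data work.tally;
    set work.orders;            /* read the orders dataset */
    by custid;                  /* by customer id          */
    if first.custid then n = 0; /* set n to zero           */
    n + 1;                      /* add 1 to n              */
  run;

  /* A block comment that explains the non-obvious earns its keep: */
  /* Count orders per customer (work.orders must be sorted by      */
  /* custid). The sum statement (n + 1) is deliberate: it          */
  /* implicitly retains n across iterations, and the reset on      */
  /* first.custid makes the per-customer scope explicit.           */
  data work.tally;
    set work.orders;
    by custid;
    if first.custid then n = 0;
    n + 1;
  run;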

The issue also offers a review of The Little SAS Book (by Lora Delwiche and Susan Slaughter, whom I featured yesterday), and a nice tip regarding the INFILE statement's MISSOVER option.

I recommend you hop over to Systems Seminar Consultants' publications page and a) sign-up for a free subscription, and b) take some time to browse through the archive of issues.