Thursday, January 16, 2014

MTInsight: Testing Award-winning Software

Note:  I wrote this article for the Jan 16, 2014 IMTS Insider
By: Dave Edstrom


Software is increasingly important in manufacturing and is rapidly becoming the key differentiator in plants and shops. In 35 years in the software business, I have seen great successes and great failures. The great failures shared many attributes, such as “you built what the customer said they wanted, but not what the customer needed.” On the other hand, Apple famously stated that it would not hold customer focus groups for that exact reason, and Henry Ford once said that his customers told him they wanted “faster horses.”
We have all seen software designed for engineers that might have been functional but was certainly not intuitive. This raises the obvious question: what makes great software? Having a great idea that addresses a clear market need is the first order of business. Creating software that is functional, intuitive, extremely reliable and fast is a must.
Since no one gets a second chance to make a first impression, how do you ensure a great software rollout? How do you avoid what happened to the Affordable Care Act, aka ObamaCare, happening to your software? I would argue that improper testing is the number one reason for failed software rollouts. Let’s look at an absolute software success to understand the importance and nature of software testing, as well as how to do it right.
MTInsight is an award-winning business intelligence tool created by AMT – The Association For Manufacturing Technology. As AMT describes it, “MTInsight is the game changing business intelligence tool that your company must have to succeed in today's manufacturing world. MTInsight is based on three key elements: dynamic software, AMT's experience and analysis, and our unique data warehouse — all of the information AMT tracks on your markets, benchmarking surveys, industry forecasts, your competitors, customers and supply chain.” MTInsight has redefined the business intelligence (BI) market for manufacturing.
When designing software, there are always market pressures from internal and external customers. The internal customers are sales and marketing; it’s their job to create excitement and sell the software once it is available. They naturally want the new features out yesterday, and they want them to be 100% bug free. External customers are always requesting new features and want those to be available as soon as possible, too.
A Product Manager (PM) for a software product is the one who controls the three most important parts of software design: resources, features and schedule. Too often in my career I have seen the classic rookie mistake of junior PMs who, when faced with an unrealistic deadline they should never have agreed to, decide to cut back on testing. I always like to ask them a question that puts it into proper perspective: “Would you rather be on time and buggy, or a little late with a quality product?” I then suggest they ask their external customers, who pay for the software, whether they would agree with the PM’s logic. Usually, I hear, “But you don’t understand, I am under a lot of pressure.” Then I respond with one last question: “Do you think you will have more or less pressure on you if you are fired for delivering a buggy product?”
While there are countless tools available for testing software, there is absolutely no substitute for human testing. Automated testing tools are a must in any development shop, but having individuals who are not the developers themselves test the product can find show-stopping bugs that automated tools would simply miss. Here is a real-life example. I worked for a company that had business software as one of its product lines. The software was what you would expect: General Ledger, Accounts Receivable, Accounts Payable, Payroll and Inventory Control. Those were the “big five,” as we liked to call them, for any business. This was during the late 1970s and early 1980s, when the industry was converting from interpreted BASIC to compiled BASIC. With interpreted BASIC, the customer had access to the source code, since the source code was literally what you ran. This had big plusses and minuses. The big advantage was that customers could modify the code quite easily. The big disadvantage was that customers could modify the code quite easily. Imagine trying to do customer support when the customer could change the code whenever they felt like it. Not a recipe for success. By going to compiled code, the software was the equivalent of a .exe file: it was faster, and it did not allow the customer to easily modify it. When this happened, the industry also changed how files were stored. Files were no longer stored as simple text files on tape or disk, but instead in databases with index files. This was much faster as well.
Everything sounds great, right? What could possibly be the fly in the ointment here? The problem was that we never thought about what might happen if the customer’s index file got clobbered. Yes, of course we always pushed good data processing practices (that was the term for information technology back in the 1970s), such as backups, but what if the customer had a very old backup? What if the customer had no backup at all? Do you know what it is like to have the president of a construction company come into your business and tell you that, unless you can fix this payroll index file, his guys are not getting paid, and they all know where you work and live? It was something you would expect to see on the HBO show “The Sopranos.” My suggestion was that we go into a previous version of payroll, issue the checks as a one-time event, then rebuild the payroll database over the weekend. Afterward, I went back to the development team and said that we had not done real-life testing. Back in the 1970s and early 1980s, it was not unusual for the computer to literally be sitting on a desk in the corner of the shop. It was treated like a big calculator that just happened to have a screen. Our testing did not cover the things real-life users were actually doing, such as skipping backups and pulling out a floppy disk while it was still writing this week’s payroll data, completely blowing up the indices and rendering the database useless. What did we do? We put the backup program directly into our software and forced the users to back up. We created a program that could take a clobbered accounting database and rebuild it to where it was, as long as the disks could be read. We then tested the living daylights out of it to make sure we would not have any more conversations where we feared for our lives. Trust me, being the person between a bunch of construction workers and their paychecks on a Friday night is not where you want to find yourself.
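The core idea of that recovery tool, rebuilding an index by re-scanning the raw records, can be sketched in a few lines of Python. This is purely illustrative; the function name and record layout are hypothetical, not the 1970s BASIC original:

```python
# Illustrative sketch: regenerate a key -> position index by re-scanning
# the readable records, the same idea as the recovery program described
# above. All names and the record layout here are hypothetical.

def rebuild_index(records):
    """Scan every readable record and rebuild the employee_id -> position index."""
    index = {}
    for position, record in enumerate(records):
        # Skip records damaged beyond recovery; keep everything readable.
        if "employee_id" not in record:
            continue
        index[record["employee_id"]] = position
    return index

# A clobbered index file is survivable as long as the data itself is readable:
payroll_records = [
    {"employee_id": "E100", "hours": 40},
    {"bad": "unreadable"},                 # damaged record, skipped
    {"employee_id": "E101", "hours": 38},
]
index = rebuild_index(payroll_records)
# index now maps E100 -> 0 and E101 -> 2
```

The design choice is the same one we made then: treat the data file as the source of truth and the index as a derived, disposable artifact.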
One of the true pleasures of my long career is working with great folks who really want to deliver world-class software. The MTInsight Team is such a group of very talented, hardworking and passionate individuals. The group is led by Steve Lesnewich, V.P. - MTInsight Group, and Julie Peppers, MTInsight Project Manager. When we test, the human functional testing is truly a team effort that involves a wide range of individuals in different groups. We have sales, marketing, communications, exhibitions, management, economists, statisticians, customers and sometimes the software developers. The reason I say “sometimes” the software developers is that we really want the developers to be addressing the issues and bugs found by the others first. Julie writes up the test plans that we follow and implement with a variety of user types and scenarios. We push both the standard cases and the corner cases. We ask everyone to try to break the software. We ask them to try things that are completely illogical. We do this not out of disrespect for the customers, but out of total respect for the customers’ time. We don’t want a customer to find a bug because they accidentally did something they did not mean to. We have a very detailed plan where we track all tests, all bugs and all fixes. We have extensive meetings to go over the testing and the results.
Having humans test the software is great for functional testing, but humans get bored doing the same test over and over again. This is where automated testing tools are an absolute must; human testing augments the automated functional testing. Regression testing and unit testing are both very important. Regression testing is an automated test plan that tests and stresses the parts of the software that have not changed. Why test the parts that have not changed? Because the law of unintended consequences can bite you every time with software. If I had a dollar for every time I heard a software developer say, “Well, it should not affect that part of the system,” and it absolutely did, I would be a very rich man. Think of regression testing as the “Hippocratic oath of software”: first, do no harm. The second key aspect is unit testing. Unit testing means exactly what it sounds like: you are testing the smallest part of the software that can be separated from the other components of the system. Software developers typically write unit tests to ensure that what they have developed works as designed.
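To make the unit-testing idea concrete, here is a minimal sketch using Python's built-in unittest module. The function under test and its numbers are hypothetical, not anything from MTInsight; note how the empty-invoice test stays in the suite as a regression guard even though "that code didn't change":

```python
import unittest

# Hypothetical function under test, used only for illustration.
def invoice_total(line_items, tax_rate):
    """Sum the line items and apply a tax rate, rounded to cents."""
    subtotal = sum(line_items)
    return round(subtotal * (1 + tax_rate), 2)

class InvoiceTotalTest(unittest.TestCase):
    """Unit tests: exercise the smallest testable piece in isolation."""

    def test_basic_total(self):
        # 10.00 + 5.00 = 15.00, plus 10% tax = 16.50
        self.assertAlmostEqual(invoice_total([10.0, 5.0], 0.1), 16.5)

    def test_empty_invoice(self):
        # Regression guard: keep testing the zero-item case even when
        # "nothing changed" in that part of the code.
        self.assertEqual(invoice_total([], 0.05), 0.0)

# Run with: python -m unittest this_file.py
```

An automated suite like this runs the same checks, identically, on every build, which is exactly the repetitive work that bores human testers.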
An area that PMs sometimes cut themselves short on is scalability testing. Scalability testing answers the question that is sometimes referred to as the “Victoria’s Secret Super Bowl Problem.” The problem goes back to the 1999 Super Bowl, where a 30-second Victoria’s Secret commercial drove over 1 million visitors to their website in an hour. They were not prepared, and their website went down. The only way to test a million users is with an automated testing tool. There are many, many automated tools for testing. A popular open source automated testing tool is JMeter, which is both easy to use and extremely extensible. Essentially, you create your software test plan and then run it. When you run it, you can simulate a test and select the number of users. For example, you might want to run tests of a website with 10, 50, 100, 200, 500, 1,000 and 10,000 users to see where your system rolls over and dies. Depending on your expected workload, 10,000 users might be complete overkill. JMeter provides incredible amounts of data to analyze. On the webserver, you would also have monitoring software to see where bottlenecks are occurring or where errors are rearing their ugly heads. Remote servers can be set up so the load comes from multiple servers in different geographies, to more accurately simulate the expected load. JMeter is available at http://
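JMeter itself is configured through .jmx test plans rather than code, but the stepped-load idea above can be sketched in plain Python. This is a toy illustration, not a JMeter substitute: the request function is a stub standing in for a real HTTP call, and all names here are hypothetical:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stub standing in for a real HTTP request to the system under test."""
    time.sleep(0.001)  # pretend the server takes about 1 ms to respond
    return 200

def run_step(users, requests_per_user=5):
    """Fire requests from `users` simulated concurrent users; return stats."""
    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(fake_request)
                   for _ in range(users * requests_per_user)]
        statuses = [f.result() for f in futures]
    elapsed = time.perf_counter() - started
    errors = sum(1 for status in statuses if status != 200)
    return {"users": users, "requests": len(statuses),
            "errors": errors, "seconds": round(elapsed, 3)}

# Step the load up, as described above, and watch for the knee in the
# curve where errors or response times start to climb.
for users in (10, 50, 100):
    print(run_step(users))
```

In a real scalability test, the stub would be an actual request to a staging server, the user counts would climb much higher, and server-side monitoring would run alongside the load generator.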
One of the ObamaCare reports I heard on TV was that consultants, as far back as April of 2013, told the team running ObamaCare that they had not allocated enough time for testing. The political pressure on those individuals to make the rollout date was tremendous. If it is true that they cut testing while adding requirements late in the game, then it is no surprise they blew the rollout. You never get a second chance to make a first impression. No one wants their software to be prefaced with the phrase, “The first version was terrible, but I think they finally got it right.”
The way to ensure the opportunity for a positive rollout is to make sure there are no shortcuts in functional or scalability testing. As more and more manufacturing companies scale up their software development, executives should always ask one critical question of their software development team: “Does everyone feel that we have a comprehensive testing plan in place?” This gives anyone who feels shortcuts are being taken in the test plan the chance to speak up. Remember the words the MTInsight team has written at the top of a floor-to-ceiling whiteboard in the development area: “Fast, good or cheap. Pick any two; you can’t have all three.” Finally, if you want award-winning software like MTInsight, then test, test, test and test again.
