This newsletter comes close on the heels of the last one, so I have little to add in the next couple of sections. That means more of the good stuff and a shorter read!
Not much new to report. I’ve been busy with the latest mechanical design and haven’t had a lot of time to ponder deep thoughts or flesh out new design concepts. As per usual, the newsletter topic is related to my current work. That makes it more relevant and “fresh”, but selfishly it also makes it easier for me to write and get stuff off my chest. My latest musings are about when testing is worth more than analysis or simulation. So here we go…
Engineers strive to make an impact on the real world, which means that we all hope to see the results of our work manifest in some useful way. There is a long road leading up to this point (covered by many other topics in the past), but right before the solution is “released” for general consumption, some level of confidence is required that it will work as needed. That level of confidence depends on the type of solution and risk (covered in topics such as quality and engineering design process) and is not covered in this newsletter. Instead, we’re diving into the tools engineers use to establish confidence in the design’s ability to meet requirements.
There are essentially three ways we can gain the necessary level of confidence in the design: testing, analysis, or simulation. An engineer must determine the best method to achieve the appropriate level of confidence by selecting one or a combination of these tools and defining the manner in which they will be used.
Before diving into the most appropriate usage of each of these tools, a common understanding of each should be established.
Testing involves exercising a design in the “real world”. Whether this is stress testing a single component or fatigue testing a complex assembly, the whole process starts with making a “thing” first and doing stuff to it.
Testing is often considered to provide the highest level of confidence. What can be more defendable than “seeing it work”? This implies that the testing will consider all the ways a design may be used, which is a far cry from a single pressure test or structural test. It is usually impractical for testing to be so comprehensive that it can accurately validate all aspects of any design. What testing can do is provide defendable results for defined attributes and potentially fast feedback on the design efficacy.
Analysis is best characterized by the dreaded word “calculation”. Every engineer is well acquainted with the concept, and depending on your industry it may consume most of your time. Where the critical aspects of the design are well understood and the methods of analysis are firmly established, some sort of analysis is often the preferred, and potentially required, basis for validation. Depending on the design and methods used, analysis can be a very precise predictor of actual performance, an extremely simplified and conservative version of reality, or a wildly inaccurate understanding of the actual application. To properly use analysis, the most important step is a rigorous review process.
Some would consider simulations a subset of analysis, but I think they are distinct. The disciplines used in performing simulation are similar, if not identical, to those for analysis, but the goal of a simulation is to understand interactions, not validate specific design features. Simulations focus on the response of the design, not the failure mechanism. A calculation of beam bending is analysis. Plotting the stresses in that beam in a bridge as a truck rolls over the roadway is simulation. Simulations often require involved dynamic calculations and can be extremely complex. The amount of information that can be gleaned from a simulation is potentially valuable, but processing and understanding the information (as well as creating the calculation) may be expensive in terms of time and effort.
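To make the beam example concrete, here is a minimal sketch of the distinction. The first function is “analysis”: a single closed-form check of the worst case. The second is a crude stand-in for “simulation”: tracking the response as the load moves. All values, function names, and the simply-supported-beam assumptions are illustrative, not taken from any particular project.

```python
# Analysis: one closed-form check of a single design feature.
# Peak bending stress in a simply supported beam with a point
# load at midspan: sigma = M / S, where M = P * L / 4.
def max_bending_stress(load_n, span_m, section_modulus_m3):
    moment = load_n * span_m / 4.0       # peak bending moment (N*m)
    return moment / section_modulus_m3   # peak stress (Pa)

# "Simulation" (toy version): sweep the load position across the
# span and record the stress at midspan at each step -- the
# response over time, not just the single worst case.
def midspan_stress_history(load_n, span_m, section_modulus_m3, steps=10):
    history = []
    for i in range(steps + 1):
        a = span_m * i / steps           # load position from left support
        # bending moment at midspan for a point load at position a
        if a <= span_m / 2:
            m = load_n * a / 2.0
        else:
            m = load_n * (span_m - a) / 2.0
        history.append(m / section_modulus_m3)
    return history
```

The analysis gives one defendable number for one failure mode; the history shows how the structure responds as conditions change, which is exactly the extra information (and extra effort) simulation buys you.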
When I am planning a design validation, I think about the three tools at my disposal, usually in the order presented above. As discussed in previous editions, I start with my critical characteristics of design and look at them individually. I then go through my decision flow chart, which I’ve linked here for your viewing pleasure.
The first question to ask is whether you are required to perform the analysis due to some sort of regulatory requirement. If this is the case, the point is moot and you’ll have to use the analysis tool to meet validation requirements.
If you aren’t locked into analysis, then you start by determining if this is something testable, and whether testing is worthwhile. The “worthwhile” aspect is an important point. There are two situations where testing is worthwhile: you have the budget or you don’t. The first situation is obvious – if you have the budget, do the testing. If you don’t have the budget, but only testing can meet your confidence or timing requirements, you will have to find the budget to make testing happen. At the end of the day we’re engineers, not economists.
If testing is either not possible or not worthwhile, you will have to look at analysis as the preferred tool. The ideal scenario is that whatever you’re analyzing is standard in the industry and there are established methods that will meet your requirements. All too often it’s not that simple, and you will have to modify the analytical method or inputs to make the analysis possible. If you can modify the inputs in a clearly conservative manner and still use industry-standard methods, you will have a path forward without too much drama. In cases where that is still not sufficient, the road forward is more difficult.
It’s tempting to create some fancy analysis as a basis for validating the characteristic. The danger here is whether you will be credible if you do so. Established industry approaches come with the cachet of credibility, whereas unique approaches will have to establish that credibility on their own merits. That can be very difficult unless you have the expertise to defend your approach. That expertise is not just technical capability, but industry acceptance, reputation, exhaustive research, etc. Before claiming that your novel approach is defendable, be ready to defend it to your harshest critic. If you honestly believe you can, by all means go forward. If you have any doubts, consider moving on to simulation.
Simulation is the last refuge for validation. It can be a very powerful tool, but performing it in a manner that can be defended for validation is usually very expensive, both in terms of time and effort. That’s why the first question to ask yourself when you start down the simulation path is whether you can afford to do the simulation. Even if you can answer in the affirmative, you then have to consider whether all of that work will be accepted. Much like a novel analysis, without the acceptance of stakeholders the work is meaningless. Unlike novel analysis, you are likely not creating new methods, but rather attacking the problem with known methods from multiple angles simultaneously. It’s not the validity of the approach that’s in question, but the level of confidence in the result. That’s a hard concept to wrap your head around, but critical to determining whether simulation is a real path forward.
If you look through the flowchart, you’ll notice that I prefer testing over the other tools unless one of them (specifically analysis) is required. And in the end, if you haven’t found a way to validate your critical characteristic, perhaps you should rethink your design or the characteristic. Either way, you’re off this chart and should look back at some previous newsletters about defining your design requirements and identifying critical characteristics.
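For those who think in code rather than flowcharts, the decision flow above can be sketched roughly as follows. Every predicate name is my shorthand for a judgment call described in the text, not a formal criterion, and the exact branch order is a paraphrase of the chart, applied one critical characteristic at a time.

```python
# A rough sketch of the validation decision flow, evaluated per
# critical characteristic. All parameter names are illustrative
# stand-ins for engineering judgment calls.
def choose_validation_tool(
    analysis_required,          # regulatory mandate for analysis?
    testable,                   # can this characteristic be tested at all?
    testing_worthwhile,         # budget exists, or only testing gives confidence
    standard_method_exists,     # established industry analysis method applies?
    conservative_mods_ok,       # can inputs be modified in a clearly conservative way?
    novel_analysis_defendable,  # expertise/reputation to defend a novel method?
    simulation_affordable,      # time and effort budget for simulation?
    simulation_accepted,        # will stakeholders accept the result?
):
    if analysis_required:
        return "analysis (mandated)"
    if testable and testing_worthwhile:
        return "testing"
    if standard_method_exists or conservative_mods_ok:
        return "analysis"
    if novel_analysis_defendable:
        return "analysis (novel -- be ready to defend it)"
    if simulation_affordable and simulation_accepted:
        return "simulation"
    return "rethink the design or the characteristic"
```

Note how testing sits at the top of the non-mandated branches: everything after it is a fallback, which matches the preference stated above.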
Even though it looks like I’ve neatly packaged the entire decision process on a single flowchart, there is another dimension beyond the scope of this discussion. Beyond selecting a single tool, you could combine tools to create the necessary level of confidence, like performing a test to validate the basis for a novel calculation, or benchmarking a simulation against simplistic testing. In an attempt to keep this newsletter short I intentionally avoided that discussion, as entire pages could be devoted to the combined approach. The intention of this newsletter is to establish a basis for the decision-making process when selecting a validation method.
If you haven’t noticed, I’m a big fan of testing whenever possible. I’ve spent a career doing complex analyses, and even with the best intentions, they are often deeply embedded with simplifications and conservatisms because too many factors are uncertain. The aphorism that closes this newsletter is a warning to those who rely heavily on analysis and simulation and consider the results representative. A well-performed analysis or simulation considers the worst-case scenario, so we should not expect the actual test to look anything like the analysis. That’s not wrong, but we should temper our expectations about the sort of information analysis provides.
The performance expectations from simulations and analysis rarely survive first contact with testing results.