Sarah: Good afternoon, and thanks for joining us to hear about calibration’s role in the manufacturing jigsaw puzzle. My name is Sarah Wallace from Transcat, and I'll be your moderator this afternoon. Our presenter is Howard Zion, Transcat's director of service application engineering. For the first part of our time together today, we'll have the presentation, and then at the end of the call, we'll answer your questions.
You'll notice in your webinar controls to the right there's a box for questions, so you can just put your question in there, and at the end of the presentation, we'll go through them and answer them.

I also wanted to mention that this webinar is being recorded today. Each of you will receive a follow-up email with a link to the recorded webinar and the slides from today's presentation, probably about two hours after the presentation finishes.
So, at this time, I'm just going to turn it over to Howard.
The Big Picture: Manufacturing Goals
Howard: Thank you, Sarah. Good afternoon or good morning, wherever you are coming from. I want to talk to you about some concepts of how calibration ties to manufacturing. Things that start in manufacturing and then push to the need for calibration, and making sure there's no disconnects there, because that can happen. And it does happen more often than it should.
We're going to cover what the manufacturing goals are. We'll talk about how things get segregated or split out as different people in the company are responsible for different functions and departments. And then the piece of the puzzle that's all about calibration support. Sometimes that piece appears to fit; sometimes it really doesn't fit, but it looks like it should. And then we'll follow up with how you make sure your pieces fit properly.
Manufacturing goals, the big picture there is to make products that are:
- In market demand, because you want to sell them, and if people don't want them, you don't want to be in that type of product. So, market demand is important.
- You want it to be profitable.
- You want your product to be effective.
- And you want your product to be safe.
So, the whole big picture can be split into parts -- some of this animation is a little bit slow -- but if we put it all together, we can see how it all fits.
So, all of these functions and others are required to make this happen, to be able to manufacture products of any type. Some of the functions of the manufacturing process are necessarily split into separate departments. Some of the sub-functions, some of the sub-assemblies of the product, are either outsourced or handled by different departments internally. And that in itself sometimes causes miscommunication, or a lack of communication -- not intentional -- but it can cause things to get dropped or missed.
Puzzle Pieces: The Components of Manufacturing Quality
Example: Our materials are purchased from multiple sources, depending on what they need and the quantities, and the purchasing people get involved to get the lowest cost, or making sure that you have good quality but low cost. Some assembly can be farmed out to local tool shops.
Some companies push the quality of the parts onto their supplier to make sure they're receiving good parts. Other companies have receiving inspection to check a sample of the supplier's parts for quality, or maybe do 100% inspection to make sure they get what they pay for and it's going to work in their product.
So, all of these things cause different functions to get split out, and then that puts a bigger burden on communication – not only verbally, but through documentation to make sure people understand the job they're doing and what they need to do to make sure that you get, as a corporation, what you're asking for and what you're paying for.
So, how does that affect the calibration function, as one focal point? It goes back to understanding what the original point of calibration in manufacturing was. And that's really to make sure you don't lose the connection between designing your product, finding methodologies to manufacture the parts of your product to get it to market, and the pieces that tie to making sure you have good measurements on all of those parts of the product.
And making sure you have good measurements means that you need to select suitable instruments, among a number of other things, and make sure that your calibration is supporting what is needed for those measurements on the product.
That creates the calibration silo, especially if calibration is outsourced, but it can happen even with an internal lab. And how does that affect manufacturing? How does that affect the goal of the company in making its product?
Calibrations Don’t Always Equal Measurement Assurance
What can happen is, calibrations may be taking place, but that doesn't necessarily mean that measurement assurance is in place. We'll get to the components of measurement assurance in a minute, because it's not just calibration.
So, as an example, I'll use an Omega HH82A, which is a temperature-indicating device. One of the examples I have seen in my experience involves one of these devices, which, by the way, has a channel A and a channel B. So, there are two channels that can be used for temperature measurement. It can also be used to measure differences between temperature values at two probes. If this is being used on the production floor to make quantitative measurements about the product being good or bad, then it needs to be calibrated -- both channels. And, by the way, each of the channels can handle one of four thermocouple types: J, K, T, and E.
And what I'm seeing is calibrations that have been performed on these -- in one example, a calibration where only channel A was calibrated, and only two thermocouple types. And the question to that manufacturing quality manager was, "Where are you using this? How are you using it? Let's find out for sure what you're doing with this."

And it turned out both channels were being used, and all four thermocouple types, so they no longer had any measurement traceability on channel B or on the other two thermocouple types on channel A.
That creates a problem because they're making decisions about the product being good or bad, and they really don't know if that instrument is telling them the right answer.
So, taking it back to the analogy of puzzle pieces and how things fit together: the piece on the left looks like it fits into the calibration portion, but the piece on the right looks like it could fit in there as well. And if you take the larger piece above it and put it into place, you find out that the piece on the left really isn't the right piece. It's the one on the right that actually fits into the calibration piece.
And so, just like that jigsaw puzzle example, some calibrations can look, to the untrained eye, like, "Yes, I got the calibration done. I have the certificate to prove it. It looks like I'm doing what I should for that requirement in my organization." Yet when you get down to looking at the details, it wasn't fully calibrated, and it really doesn't support the operation like it should.
So, you're at risk, in that situation, of passing product that could be actually bad. Or, the alternative to that is that you could be accepting product that really should have been rejected.
So, with this example, what similar risks might you have with the calibrations that you're currently receiving? If it happened with this item, it can happen with others. And I've seen it multiple times where calibrations really aren't supporting the production process. And that breaks down the entire measurement assurance program.
So, if manufacturing of any type is interested in maximizing their profits and making sure they have good product safety and quality, then they must have an active measurement assurance program.
Measurement Assurance Programs
So, let's talk about a measurement assurance program and what that consists of.
There may be additional components to this, but this gets to the meat of it.
1. The Right Tool for the Right Job
So, first: pick the right tool for the job. You need to select an instrument whose accuracy is suitable for the process measurement it will be making. We'll dig into what suitability means in the temperature example later on.
2. Regular Calibration
It has to be regularly calibrated because you're making decisions, again, about the product.
So you're basing those decisions on instrumentation that gets you quantitative values. And those calibrations have to support the process for the instruments used. They have to apply the correct tolerances; there are situations where using a different procedure or source of specifications can give you incorrect tolerances.
Calibrations have to be valid, meaning there's an uncertainty that supports the claim that you can actually quantify the values of the calibration measurement. That's all very important in making sure you don't lose that piece of what you're trying to do in the manufacturing process.
So, in an internal lab, they probably have a greater chance of getting it right because they should be tied in, to some degree, with what's going on on the production floor. And if they're not talking back and forth, then that's where that can get lost.
The other thing is, metrology is a fairly small world, and a lot of people have gotten their training through the military, so people are comfortable with using what they've learned there -- that is, the military cal procedures. Sometimes those procedures are modified and don't cover the full calibration of the instrument, because they fit what the military's need was. Sometimes they change the tolerances from what the manufacturer had, or they don't keep up with the manufacturer's changes as it updates its specifications.
And so, if there's any disconnect there, it follows through to whoever uses those procedures. So, you have to be very diligent in making sure that it meets what the customer's expectation is.
The customer's expectation, or the user of the instrument, should be, or usually is, based on what the manufacturer says that instrument can perform to. And that is their specification sheet. And usually, if they have a valid, good metrology practice calibration procedure, then you'll follow that as well. So, you have to be very careful that the calibration isn't drifting away from what the intent was when the person selected the right tool for the job.
3. Using the Instrument Correctly
Then once it's calibrated correctly and you've picked the right tool for the job, you've got to use it correctly. That can be a matter of training the operator. It could be gage R&R studies to figure out how to minimize variances in the use of the instrument, or dealing with the complexity of how to use the instrument. So, there's a lot that goes into that piece of measurement assurance. If you have everything calibrated right, and you pick the right instrument, and somebody uses it wrong, there goes your measurement assurance, and there goes the whole idea of manufacturing a product that's known to be good.
4. Accounting for Process Measurement Uncertainty
Accounting for irregularities in the production process, or process measurement uncertainties. A lot of people don't think about this concept, but there are uncertainties in every measurement that's made, not just in the calibration lab. On the production floor -- I kind of alluded to this with using the instrument correctly and gage R&R studies -- there could be a number of things you need to pay attention to, outside of just the calibration of the instrument, that can affect the measurements on the product. And you've got to make sure that you're taking those into account.
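One common way to account for several independent uncertainty sources at once is a root-sum-square (RSS) combination. This sketch is not from the presentation; the component names and values are hypothetical, chosen just to show the mechanics:

```python
import math

def combined_uncertainty(components):
    """Root-sum-square (RSS) combination of independent
    standard uncertainty components, all in the same units."""
    return math.sqrt(sum(u ** 2 for u in components))

# Hypothetical uncertainty components for a production-floor
# temperature measurement, in degrees C (illustrative values):
u_cal = 0.25     # calibration uncertainty of the instrument
u_repeat = 0.30  # repeatability, e.g. from a gage R&R study
u_probe = 0.20   # probe placement / immersion effects

u_total = combined_uncertainty([u_cal, u_repeat, u_probe])
print(round(u_total, 3))  # combined standard uncertainty
```

Note that the combined value is larger than any single component, which is why looking only at the calibration certificate understates the uncertainty of the floor measurement.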
5. Out of Tolerance (OOT) Measurement Investigations

And then when you get your information back from a calibration event, you've got to determine whether an out-of-tolerance investigation needs to be performed, and you need to do it correctly, so that you're tracing the errors of the instrument back to the decisions that were made about the product or any process where that instrument was used.

In doing that, you'll find out if you made some decisions that could have accepted bad product or rejected good product.

6. Corrective Actions for Out of Tolerance
And then there are corrective actions for dealing with that out-of-tolerance impact. If you found that there are potential problems with having accepted product, or vice versa, you've got to take some corrective actions. That could be an instrument recall, it could be rework of your product or components -- a number of actions could be taken. But that was the whole point of traceability and of calibrating the instruments in the first place: to make sure you know that you had good product. And if you know that you may not have, you've got to take action on that.
Temperature Measurement Example
So, I want to go through an example, a simple temperature measurement. A manufacturing engineer is designing a process where he or she needs to measure a solution used to treat the product. The process measurement is 350 degrees Celsius. The engineer determines that outside of a two-degree window around that value, you may start having problems and the product isn't treated the way it needs to be.
So, if that's the determination, the manufacturing engineer needs to figure out for their product where those limits would be for a process.
Now you have to pick the right tool for the job. What is the instrument that's suitable for this measurement? Would it be an instrument with an accuracy that's equal to those process limits, plus or minus two degrees? Most people would understand that's not a good idea because the instrument is allowed now to drift that full amount over its cal-interval, and that can directly impact previous process measurements and decisions about the product.
So, let's say the instrument has been calibrated today, and it was adjusted to nominal. So, it's reading perfectly. Or within the uncertainty of the measurement, anyway. The instrument is used now on that same day to measure that solution temperature, and the temperature is right at nominal. Everything is aligned and perfect. The sun is shining, birds are chirping, everyone is happy. That process measurement is good.
Now, on the last day of the instrument's calibration interval, before it's recalibrated, it's used to measure the solution temperature and, lo and behold, it reads right at 350 degrees Celsius, right at the nominal point.
So, in that cal-interval of that instrument, as we used it to make measurements about this process, on the first day and last day, at least, we see that it was at nominal, and we're happy. Process measurement is good.
Now the instrument goes in for temperature recalibration, and subsequently it's adjusted and returned to nominal. The as-found readings show that the instrument had drifted to the upper limit of 352 degrees C. Remember, we picked an instrument that has the same tolerance as the process measurement we're trying to make. Now it has drifted to its upper limit. What does that do?
That drift occurred over its cal interval. It probably didn't happen all at once, unless the instrument was damaged or something. But that is called an in-tolerance reading. And there's no flag to the person getting that cert back saying, "I should check into some problem that I might have." They just see that it's in tolerance and that it was adjusted back to nominal, and they go on their way.
But did that move all at once, or a little bit over time? Probably a little over time is typically the situation. And how does that impact the measurements or the decisions about the process since the last time that instrument was calibrated?
Because you don't know -- if you don't have any information that would tell you otherwise -- you have to assume the last known good condition of the instrument was the last calibration performed. Everything that instrument touched over that period of time, even though the instrument was in tolerance, you're going to have to look at; everything has to be reviewed for the potential risk to your decisions.

So, it wasn't reading at nominal in your process, as you thought it did. Unless you have some other checks and balances in place, like intermediate checks on the instrument to see if it was drifting, you have to go all the way back to the first day. It likely wasn't bad then, but you don't know, so you have to assume the instrument wasn't actually reading 350; it was actually reading higher than that. When I correct that reading back to where the instrument should have been nominally, that brings my process measurement down by the same amount.
And now I'm really at the borderline of the process acceptance level, and that lower tolerance is where I was actually sitting. Same thing with the reading on the last day at the end of the cycle. That process measurement is good because it's in the tolerance. It's at the lower limit, still in the tolerance.
That should have you concerned, though.
What if some of those readings throughout that interval, as the instrument drifted over time, were at 348, and you said, "That's in the tolerance level. I'm good to go"? Now that you know the effect of the instrument being at its upper limit, and the fact that the reading should have been lower by two degrees, it no longer would be acceptable. And that in-tolerance calibration on the temperature instrument was not flagged as a situation that puts you at risk.
Do you see the dilemma here? It's not just out-of-tolerance situations that should flag you. You must do some evaluation of the effect on your decisions. It's about any shift in that instrument, and making sure you understand what it did to the decisions about your process.

Because now you have a situation where you're actually well beyond the acceptable tolerance for that solution in the manufacturing process, and that process measurement was actually bad. That's what we call a false accept situation.
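The kind of review described here can be sketched as a simple calculation: take the as-found error from the calibration event, correct each logged process reading by it, and re-test against the process limits. This is only an illustration of the idea; the readings and the worst-case correction are hypothetical, not from the presentation:

```python
# Hypothetical values based on the running example: the process is
# 350 +/- 2 deg C, and the as-found calibration result shows the
# instrument reading 2 degrees high at the end of its interval.
PROCESS_NOMINAL = 350.0
PROCESS_TOL = 2.0     # +/- window around nominal
AS_FOUND_ERROR = 2.0  # instrument read high by this amount

logged_readings = [350.0, 349.2, 348.0]  # made-up process log

flagged = []  # readings that become potential false accepts
for reading in logged_readings:
    corrected = reading - AS_FOUND_ERROR  # worst-case true value
    if abs(corrected - PROCESS_NOMINAL) > PROCESS_TOL:
        flagged.append(reading)
        print(f"logged {reading} -> corrected {corrected}: potential false accept")
    else:
        print(f"logged {reading} -> corrected {corrected}: still within limits")
```

Notice that every reading in the log was in tolerance as logged; it is only after applying the as-found error that two of them turn into potential false accepts.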
So, do we really want to pick an instrument that has the same tolerances as the process we're trying to measure? No, we want something more accurate. What if we picked something with an accuracy twice as good as the process we're measuring? Now if it drifts to its upper limit over its cal interval, process readings would only have been off by half of the process tolerance, and that helps you. But there is still a risk situation.
What if an instrument was selected that is four times as good as the process tolerance? Now, if it drifts, your process readings will only be off by one quarter of the process tolerance -- again, if the instrument only drifted to its outer limit. And the whole thing about a manufacturer's specification of how its instruments will perform is about two things: setting the cal interval over which the instrument will hold its values, and determining what those values are for accuracy and other parameters. So the manufacturer says, "I am expecting that the majority of the instruments I make will hold these tolerances over this period of time." Because they should, it is not likely that they will go beyond that, although it does happen. So now we're using that kind of logic or thought process to determine the right instrument for the job -- the suitability of the instrument for the job.
A four-to-one ratio is traditionally where people have gone. We could look for ten times better, a hundred times better, but the problem is there are limits to technology that won't get you there in some cases, depending on the parameter of measurement, and eventually it becomes cost-prohibitive to go too far with that concept. And, as I said, a four-to-one ratio is usually sufficient to reduce to an acceptable level the probability that an out-of-tolerance condition would have had an impact on the product or the process decisions during that cal interval. For some measurements this four-to-one ratio can't be achieved, so you have to live with the higher risk, and you've got to know how to manage that to your benefit.
So with this four-to-one ratio, the instrument has a tolerance limit four times tighter than the process tolerance. If your reading was 348 during the use of the instrument over its cal interval, it really could have been 347.5, because a quarter of the process tolerance is what the instrument's tolerance would be. So even at four to one, you're still in a situation where you could have false accept decisions on the process or the product. You still have to deal with that, but it minimizes the risk and gets it to a more manageable level. So here we still have a process measurement that's bad. We still have a false accept situation.
How do you deal with that?
The lesson learned here is that even in-tolerance results can impact your process measurements.
Guard Band Your Process Limits to Reduce Measurement Risk
I’d be willing to bet there are a number of people in the audience who never realized that. Hopefully this has helped you understand why all of your cal data should be reviewed against your process measurements to understand the impact on the product. And I call that not an out-of-tolerance NCR but an in-tolerance NCR evaluation. At this point I am sure a lot of you are shaking your heads saying, “Are you kidding me? I’ve got to do all this extra work for in-tolerance results?” Hold on. I’ve got a better solution to help you out there.
Impact studies are expensive. You don’t want to have to do those. They consume valuable resources and very costly time. They are rework, which means you are not working on new product, which takes away from your profitability. They cost thousands of dollars per evaluation event. And parts or products that you’ve already passed in a false accept situation may have already been released or shipped by the time you get the calibration information back on the instrument you used to make those decisions.
Sarah, I think I've lost the control here. I can't forward.
Sarah: Go ahead and try one more time.
Howard: No.
Sarah: Alright. I am going to take control for one moment. Sorry about this everyone. Okay. There you go.
Howard: We're back on track. Thank you. So if product has been released or shipped to distribution warehouse, to clients, it is out of your control. Now you might have to do a product recall. That could be very expensive. You don’t want that. If it hasn’t been released, you might have to do re-work and that too is expensive because again, it consumes workers to now re-work the product instead of making new product and plus the cost of the materials, if you have to scrap it. So this risk can be reduced. That's the good news. And the concept there is guard-banding the process limits.
We will talk about that here. For this guard band, first you want to determine your realistic tolerance limits for the process, as part of your normal determination of instrument suitability, comparing the instrument and process tolerances. Then you want to take the instrument tolerance, which is the expected maximum drift of the instrument over its calibration interval, and back that off from your upper and lower limits.

That gives you new upper and lower acceptance limits, so that if the instrument drifts over its calibration interval, even all the way to its maximum value, it remains an in-tolerance situation throughout its interval, and you negate the need to perform those in-tolerance non-conformance report evaluations.
And that process measurement is now protected, unless the instrument drifts further than its tolerance limit, in which case you still have an out-of-tolerance situation that you have to perform an NCR evaluation on. But this should significantly reduce your risks from where you are today.
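The guard-banding arithmetic described above can be sketched in a few lines. This is an illustration of the concept only, using the running example's numbers; the function name and the 4:1 instrument tolerance of plus or minus 0.5 degrees C are assumptions for the sketch:

```python
def guard_banded_limits(nominal, process_tol, instrument_tol):
    """Back the instrument's maximum expected drift off the
    process limits, so that in-tolerance drift alone can no
    longer produce a false accept."""
    lower = (nominal - process_tol) + instrument_tol
    upper = (nominal + process_tol) - instrument_tol
    return lower, upper

# Values from the running example: 350 +/- 2 deg C process,
# instrument tolerance of +/- 0.5 deg C (the 4:1 case).
low, high = guard_banded_limits(350.0, 2.0, 0.5)
print(low, high)  # tightened acceptance window
```

The acceptance window tightens from 348-352 to 348.5-351.5: any reading accepted inside the guard-banded window stays good even if the instrument later shows as-found drift anywhere within its own tolerance.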
So, to summarize all of this:
- Make sure you implement a good measurement assurance program that takes all of the different components into account, because the goal there is to protect your product, to make sure you are not making bad decisions about it.
- Understand and exercise good suitability of the instruments, the right tool for the job.
- Guard-band your process tolerance limits to reduce costly NCR evaluations or, if you weren't aware that even in-tolerance results can affect your product decisions, mitigate that risk altogether now.
- Understand both the process that you are trying to run for your manufacturing and the calibration of the instrument, to ensure that the intent of preserving good measurements on the product is not lost. That's not just the manufacturing engineer; that's the operators who are actually performing the test, and the person in charge of the calibration, whether it's an internal lab or it's outsourced. All of those people need to be tied together with these concepts to protect the measurements and the decisions made on the product. Everyone has a role.
- Be thorough in your non-conformance report evaluations to make sure, again, that you are using all of that information you are spending money on to determine whether those decisions were good or bad, and fix the process if they weren't.
So if you are in over your head, get some help. We are here to help you. Any questions at this point?