A vibration alert tells you something is wrong. Multi-signal monitoring tells you what it is, how bad it is, and what to do about it. The difference between those two things is the difference between a reliability program that keeps improving and one that stalls.
There’s a moment every reliability engineer knows. The vibration alert fires. You pull the data, study the waveform, and then… you’re not sure. Is that a bearing defect developing, or just some resonance from the line? Is this a two-week problem or a two-day problem? Do you schedule a shutdown, or do you wait and watch?
Single-signal monitoring gets you to that moment of uncertainty and leaves you there.
From clue to diagnosis
For years, vibration analysis was the gold standard of condition monitoring, and for good reason. It catches a lot. But “catching something” and “knowing what to do about it” are two very different things. One gives you an alert. The other gives you data to make a decision. And in a production environment, decisions made on incomplete diagnostic information have a way of going wrong in both directions: unnecessary shutdowns that bleed money and missed failures that bleed even more.
Chris Napier, a reliability specialist who works with maintenance teams across a wide range of industrial facilities, puts it plainly: “If you were in school and had to do a test on a book you’d read, you wouldn’t just read chapters one, three, and seven and feel confident taking the test. You’ve got to understand the whole story. Vibration works well at some speeds, and on some machines. For others, you’ll need infrared or ultrasound. You have to understand what the needs of the machine are and what’s the best way to monitor it.”
The problem is structural. Any single signal (vibration, temperature, ultrasound, oil analysis, or motor current) provides only a partial view of what’s happening inside a machine. Vibration works well for rotating parts, but can be unclear about the root cause. Temperature is a late indicator; by the time it moves, a problem has usually been developing for some time. Ultrasound detects friction and early lubrication issues before vibration registers them. Motor current exposes electrical and load-side anomalies that mechanical sensors miss entirely.
When the signal you’re missing is the one that matters
Chris saw this play out at a facility in Fort Smith, Arkansas, running a large rotary kiln. “You cannot use vibration analysis to catch the early stages of failure on ultra-low RPM equipment,” he says. “We put an ultrasound solution on it, and because we were listening in a completely different frequency range, we caught brick failures inside the kiln that standard vibration would never have found. This frequency is beyond what a human can hear. It’s like a dog whistle. They might have been able to hear those bricks break.”
The stakes were immediate: had the kiln failed mid-run, the facility would have been down for months. Instead, the problem was found and addressed during a scheduled shutdown already on the calendar. “They didn’t know they needed to work on it during that shutdown. If they’d started back up with that problem still there, it would have failed in a few months and cost them another couple of months of production.”
He found similar value in combining vibration and ultrasound for lubrication monitoring, an area responsible for a disproportionate share of machine failures. “Vibration monitored bearing health; ultrasound monitored lubrication health. That combination caught several failures they didn’t know about, just by lubricating machines better.” The equipment in question, rotary kilns the size of a school bus, meant failures measured in weeks of downtime and heavy contractor costs. “Most of your machine failures originate from poor lubrication practices,” Chris says. “If you’re only running one signal, you’re missing where most of the problems start.”
How to avoid a conversation with operations
Multi-signal monitoring doesn’t just catch more failures, it also prevents false alarms. Kagen Wittbold, a reliability specialist who began his career spending 26 weeks a year on call in paper mills, describes a scenario that maintenance teams encounter regularly. A food-processing pump running at 1,500 rpm suddenly speeds up to 2,000. “From a vibration standpoint, things speed up, it shakes more, and that may look like a maintenance problem from the outset,” he says. “But if you’re also collecting magnetic flux and line frequency data, you can see that this wasn’t a mechanical fault developing. It was a recipe change. The process called for a faster speed.” Without the second signal, you’re chasing a ghost. With it, you’re making a diagnosis.
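That cross-check can be sketched in a few lines. This is an illustrative sketch only: the function name, thresholds, and return strings are hypothetical, and it assumes you already have a running-speed estimate derived from a second signal such as line frequency or magnetic flux.

```python
def classify_alarm(rpm_before: float, rpm_after: float,
                   speed_change_pct: float = 0.05) -> str:
    """Naive triage of a vibration alarm using a second signal.

    If the speed estimate (from line frequency or flux data) shows the
    machine sped up at the same time vibration rose, the alarm is likely
    process-driven, not a developing mechanical fault. The 5% threshold
    is illustrative, not field-calibrated.
    """
    if abs(rpm_after - rpm_before) / rpm_before > speed_change_pct:
        return "process change: verify recipe or setpoint"
    return "no speed change: investigate possible mechanical fault"

# The food-processing pump from the example: 1,500 rpm jumps to 2,000 rpm.
print(classify_alarm(1500, 2000))  # → "process change: verify recipe or setpoint"
```

With only the vibration channel, the same alarm has no second column to check against, which is exactly the "chasing a ghost" scenario described above.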
“I wish I had these insights when I was working on call in the mills. I could have slept and enjoyed the job a lot more,” chuckles Kagen.
Removing guesswork from the loop
That multi-signal logic extends to how teams should respond when alarms fire. Kagen walks customers through a structured decision process that treats each alarm as the beginning of an investigation, not a conclusion. Is the machine heading toward danger? If yes, inspect immediately. If not, go to process and production first: was there any change in speed, load, material, or viscosity? “If there was, that may explain what we’re seeing,” Kagen says. “If there wasn’t, there’s probably a real problem, and you need to go look.”
The tree keeps branching through inspection findings, work order planning, repair execution, and post-repair review, asking at each stage whether the problem was actually resolved. “This is what takes the educated guesswork out of the loop. If you already know what’s happening with your process, you can rule things out fast and focus on what actually needs attention.”
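The first branches of that tree can be written down as a tiny triage function. This is a minimal sketch of the logic Kagen describes, not a product feature: the field names, severity labels, and return strings are all hypothetical, and a real program would pull these answers from sensor data and production records rather than two hand-set fields.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    severity: str         # e.g. "danger" or "warning" (hypothetical labels)
    process_change: bool  # did speed, load, material, or viscosity change?

def triage(alarm: Alarm) -> str:
    """First branches of the decision tree: danger check, then process check."""
    # Is the machine heading toward danger? If yes, inspect immediately.
    if alarm.severity == "danger":
        return "inspect immediately"
    # If not, go to process and production first.
    if alarm.process_change:
        return "likely process-driven: verify against recipe and setpoints"
    # No process explanation: there's probably a real problem.
    return "probable fault: go look"

print(triage(Alarm("warning", process_change=True)))
```

The later stages (inspection findings, work order planning, repair execution, post-repair review) would extend the same structure, each one asking whether the problem was actually resolved before closing out.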
Think of it the way a doctor does. A single elevated lab result is a warning. It might signal something serious, or it might just be noise. But when three or four indicators point in the same direction, the clinical picture becomes clearer. You stop guessing and start knowing. Machine health works the same way, except the stakes aren’t just clinical. They’re commercial.
Which is where this stops being a technology conversation and becomes a business one.
Follow the ROI: all of it
The reliability programs that struggle often share a common root cause. The ones that never quite deliver their promised ROI, where management starts questioning the investment and maintenance teams burn out chasing alarms, were built on a diagnostic foundation too narrow to support confident decision-making, or to meaningfully reduce unplanned downtime.
Chris points to another angle that rarely makes it into ROI calculations: energy. “If you have a 300-horsepower motor running a 24/7 operation, it costs around $250,000 a year at current US energy prices,” he says. “Run it misaligned, because you don’t know it’s misaligned, and that number climbs to around $280,000. Imagine 30 or 40 machines across your facility in that condition. You’ve lost up to a million dollars, and you didn’t even know it, because you didn’t have the predictive technologies in place.” The diagnostic gap isn’t just a maintenance problem. It shows up directly in the energy bill and in the broader cost of unplanned downtime in manufacturing.
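The arithmetic behind those figures is worth seeing. The sketch below reproduces the rough magnitudes in the quote under stated assumptions: the motor runs at full nameplate load around the clock, efficiency losses are folded into a single fractional penalty, and the electricity rate (about $0.13/kWh) is an assumption chosen to match the ~$250,000 figure, not a quoted number.

```python
HP_TO_KW = 0.7457  # 1 mechanical horsepower in kilowatts

def annual_energy_cost(hp: float, rate_usd_per_kwh: float,
                       hours_per_year: float = 24 * 365,
                       extra_loss: float = 0.0) -> float:
    """Rough annual energy cost of a motor at full load.

    extra_loss models a defect such as misalignment as a fractional
    increase in power draw (the ~12% here is back-solved from the
    quote's $250k -> $280k jump, not a measured value).
    """
    kw = hp * HP_TO_KW * (1.0 + extra_loss)
    return kw * hours_per_year * rate_usd_per_kwh

healthy = annual_energy_cost(300, rate_usd_per_kwh=0.128)            # ≈ $251k
misaligned = annual_energy_cost(300, 0.128, extra_loss=0.12)         # ≈ $281k
waste = misaligned - healthy                                         # ≈ $30k per machine
print(f"${healthy:,.0f} vs ${misaligned:,.0f} (waste ${waste:,.0f})")
```

Multiply that per-machine waste by the 30 or 40 machines Chris describes and the "up to a million dollars" figure follows directly.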
Chris sees the implications extending beyond the factory floor. “I wonder, with the whole AI revolution and energy being such a significant concern, running all those data centers. This will need to be addressed sooner or later. But maybe if manufacturing inefficiencies were fixed first, it could make a huge difference.”
What a genuine reliability program looks like
Building a program that holds requires more than sensors. Chris describes what reliability maturity actually looks like: parts and procurement, CMMS utilization, maintenance planning and scheduling, lubrication programs, and preventive and predictive maintenance, all functioning as parts of a coordinated strategy rather than isolated initiatives.
“The customers who succeed are those who have a plan for every aspect of reliability and a strategy for each part that fits into an overall plan,” he says. His quick maturity diagnostic: CMMS backlog. “You can always tell how mature a reliability program is by asking how far out their work order backlog is planned. If you’ve planned out two weeks, you’re using the software. If you’re planning out one hour, you’re not.”
Getting there requires honesty first. “You’ve got to benchmark yourself and not be nice about it,” Chris says. “If you’re struggling, you have to look at yourself and say: we’re struggling. Because unless you’re honest about it, you can’t compare yourself to where you want to go, and you can’t start filling in the gap.” It’s also a leadership transition, from focused thinking to big-picture thinking. “Maintenance managers who struggle are often still turning wrenches in their heads. But when you step into that role, your job is to make sure the people who turn wrenches are in the right spot. You have to put the wrench down and see the whole facility.”
The goal isn’t more data. It’s less doubt.
The shift from single-signal to multi-signal diagnostics isn’t just a technical upgrade. It’s the difference between a system that tells you something is wrong and one that tells you what’s wrong, how confident it is, and what you should do about it. For reliability programs trying to move from reactive firefighting to genuine predictive capability, that distinction isn’t academic.
It’s the full story.
Want to stop guessing and start knowing? Start with fault detection and diagnostics.
When the data speaks for itself
When fault card patterns were analyzed across five facilities at a major confectionery manufacturer, one finding kept surfacing at every site: misalignment was the single most common cause of machine failures, driving inefficiency and premature breakdowns across the board.
The fix wasn’t a new sensor or a software upgrade. It was a single hour-and-a-half call with teams from all five sites, walking through the data together, looking at real examples, and building a shared understanding of what misalignment actually looks like and costs.
“You could see the gears turning in everybody’s head,” says Chris Napier. “I think we can improve that. I think I understand now.”
That’s what a mature multi-signal reliability program makes possible: not just finding problems, but knowing which ones to tackle first and having the data to bring everyone along.