Machine monitoring fails more often than vendors want to admit. Not because the technology does not work — it does. It fails because shops make the same five mistakes during deployment and operation. Each mistake costs more than the monitoring subscription, and none of them are technical.
These are the patterns we see repeatedly. If you are considering monitoring or already running it, check yourself against this list.
Mistake 1: Monitoring Everything Instead of What Matters
What it looks like
The monitoring system is collecting 200 data points per machine. Spindle load, axis positions, coolant temperature, servo current, program name, tool number, feed override, and 193 more. The dashboard has 8 tabs per machine. Nobody looks at any of them.
Why it happens
The instinct is understandable: if we are going to instrument the machine, we should collect everything. More data is better, right? In theory, yes. In practice, data you never act on is not data — it is noise. And noise drowns out the signals that actually matter.
How to fix it
Start with three metrics per machine. Just three:
- Machine state — Running, idle, or off. This gives you utilization, which is the foundation of every other metric.
- Vibration velocity — The single best predictor of mechanical failure. One sensor on the spindle housing catches bearing wear, tool degradation, and balance issues. ISO 10816 thresholds tell you when to investigate.
- Cycle time — Actual vs. expected. Tells you if the machine is running at programmed speed or if the operator has backed off the feed override.
Once your team is actually using those three metrics to make decisions, add more. But not before.
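A minimal sketch of what those three readings might look like as one record per machine. The field names and structure are illustrative, not tied to any particular monitoring product:

```python
from dataclasses import dataclass
from enum import Enum


class MachineState(Enum):
    RUNNING = "running"
    IDLE = "idle"
    OFF = "off"


@dataclass
class MachineReading:
    """One snapshot of the three metrics worth tracking first."""
    machine_id: str
    state: MachineState        # drives utilization
    vibration_mm_s: float      # velocity RMS at the spindle housing
    cycle_time_s: float        # actual cycle time
    expected_cycle_s: float    # programmed cycle time

    def is_running(self) -> bool:
        return self.state is MachineState.RUNNING

    def cycle_overrun_pct(self) -> float:
        """How far the actual cycle runs past the programmed time."""
        return 100.0 * (self.cycle_time_s - self.expected_cycle_s) / self.expected_cycle_s
```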
What it costs
Alert fatigue. When a system generates 50 alerts per day, operators stop reading them. The critical alert that signals a real problem gets buried under noise and missed. One missed critical alert can cost $15,000+ in a spindle crash.
Mistake 2: Setting Thresholds from Spec Sheets Instead of Baselines
What it looks like
The vibration alert is set to 4.5 mm/s because that is the ISO 10816 Zone B/C boundary for Class II machines. The alert fires 12 times on Tuesday. The operator checks the machine — it sounds fine, looks fine, parts are in tolerance. Wednesday: 15 alerts. Thursday: the operator disables the alert.
Why it happens
ISO 10816 thresholds are generic classifications for broad machine categories. They do not account for the fact that your 15-year-old Haas VF-2 has a baseline vibration of 3.8 mm/s because the foundation is slightly uneven, or that the spindle has normal wear that puts it higher than a brand-new machine. A “good” reading on your machine might look “bad” by the textbook.
How to fix it
Run the monitoring system in listen-only mode for 1-2 weeks before setting any alerts. Record the machine's actual baseline under normal operating conditions. Then set thresholds relative to that baseline:
- Warning: 1.5x the baseline (investigate at next opportunity)
- Critical: 2.0x the baseline (investigate immediately)
- Emergency: 3.0x the baseline or sudden spike (stop the machine)
These relative thresholds adapt to each machine's actual condition. A machine with a 2.0 mm/s baseline triggers at 3.0/4.0/6.0. A machine with a 4.0 mm/s baseline triggers at 6.0/8.0/12.0. Both are appropriate for their specific machine.
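Here is one way those baseline-relative thresholds could look in code. This is a sketch, not a vendor recipe: the multipliers follow the ratios above, the baseline would come from your own listen-only period, the median is an assumption (it keeps a few transient spikes from inflating the reference), and the sudden-spike emergency condition is omitted for brevity.

```python
from statistics import median


def baseline_from_samples(samples: list[float]) -> float:
    """Baseline vibration (mm/s) from the listen-only period."""
    return median(samples)


def classify(reading_mm_s: float, baseline_mm_s: float) -> str:
    """Map a vibration reading to an alert level relative to baseline."""
    ratio = reading_mm_s / baseline_mm_s
    if ratio >= 3.0:
        return "emergency"   # stop the machine
    if ratio >= 2.0:
        return "critical"    # investigate immediately
    if ratio >= 1.5:
        return "warning"     # investigate at next opportunity
    return "normal"


# Example: a machine that baselines at 2.0 mm/s
baseline = baseline_from_samples([1.9, 2.0, 2.1, 2.0, 1.8])
print(classify(4.2, baseline))  # "critical" (2.1x baseline)
```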
What it costs
False alarms destroy trust. Once the operators learn to ignore alerts, you have a monitoring system that nobody monitors. You are paying for the hardware and subscription but getting none of the value. And the one real alert that comes through gets treated like another false alarm.
Mistake 3: Collecting Data Without Acting on It
What it looks like
The monitoring system has been running for 6 months. The dashboards look great. Utilization is tracked. OEE is calculated. Vibration is graphed. But nothing has changed on the floor. The same machines run the same way. The same failures happen. The maintenance schedule has not been updated. The data is there, but nobody has translated it into a decision.
Why it happens
Data does not fix anything by itself. Data tells you where the problems are. But someone has to look at the data, decide what to do differently, and execute the change. If nobody in the organization is assigned to review the data and turn it into action, the dashboards become expensive screensavers.
How to fix it
Assign one person, usually the maintenance lead or plant manager, to review the monitoring data for 15 minutes every morning. Not to stare at dashboards, but to answer three questions:
- What alerted overnight?
- What trend is getting worse?
- What one thing should we change today based on the data?
Fifteen minutes per day. One action per day. That is the entire discipline. A shop that takes one data-driven action per day will outperform a shop with a $500,000 analytics platform that nobody uses.
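The second question, "what trend is getting worse," can be partly automated. A rough sketch, assuming you can pull daily averages for a metric (vibration, cycle time, whatever you track) out of your own system; the window length and rise threshold here are arbitrary starting points:

```python
def worsening_trend(daily_averages: list[float],
                    days: int = 5,
                    min_rise_pct: float = 10.0) -> bool:
    """Flag a metric whose recent daily averages keep climbing.

    True if each of the last `days` values is higher than the one
    before it and the total rise exceeds `min_rise_pct` percent.
    """
    recent = daily_averages[-days:]
    if len(recent) < days:
        return False
    rising = all(later > earlier for earlier, later in zip(recent, recent[1:]))
    total_rise = 100.0 * (recent[-1] - recent[0]) / recent[0]
    return rising and total_rise >= min_rise_pct


# Example: spindle vibration creeping up over a week
print(worsening_trend([2.0, 2.1, 2.2, 2.4, 2.5]))  # True
```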
What it costs
The monitoring subscription ($599/month for a 5-machine shop) with zero ROI. Every month you pay without acting is a month of wasted spend — plus the ongoing downtime and failures the data could have prevented.
Mistake 4: Treating Monitoring as IT Instead of Operations
What it looks like
The monitoring project is managed by the IT department. The discussions are about network security, data architecture, and system integration. The maintenance team has not been consulted. The operators learned about the sensors when they noticed them bolted to their machines.
Why it happens
Monitoring involves technology, so it gets routed to the people who manage technology. Logical, but wrong. Machine monitoring is an operations tool. Its purpose is to reduce downtime, improve utilization, and extend equipment life. Those are operations outcomes, not IT outcomes.
How to fix it
The project owner should be the person who feels the pain of downtime — the plant manager, the maintenance supervisor, or the operations director. IT should be consulted on security and network requirements, but the project lives in operations.
More importantly: the operators need to be involved from day one. They know the machines better than anyone. They know which machine “sounds funny on cold mornings.” They know which tool position always fails first. That tribal knowledge shapes where you put sensors and what thresholds make sense. If the operators feel like monitoring is being done to them (surveillance) instead of for them (better tools), it will fail regardless of the technology.
What it costs
Delayed deployment (IT prioritizes other projects), wrong metrics (IT measures uptime, operations needs OEE), and operator resistance. The worst outcome is operators who actively work around the monitoring system — disabling sensors, ignoring alerts, or refusing to change their workflow. A monitoring project that loses the operators is dead on arrival.
Mistake 5: Expecting ROI Without Changing Behavior
What it looks like
Six months after installation, the shop owner says: “We installed monitoring and our downtime didn't change.” The dashboards are up. Alerts are configured. But the maintenance schedule is the same as before. Operators still run the same feed overrides. Nobody reviews the weekly utilization report.
Why it happens
There is a belief that monitoring is a buy-and-deploy product. Install the sensors, turn on the dashboard, and downtime magically decreases. It does not work that way. Monitoring gives you visibility. Visibility enables better decisions. But the decisions still have to be made and executed by people.
How to fix it
Before you install anything, define three specific behaviors you will change based on the data:
- “When vibration crosses 2x baseline, we will schedule maintenance within 48 hours.”
- “When utilization drops below 40% for 3 consecutive days, we will investigate the root cause.”
- “When a machine has 3+ micro-stops in a shift, the supervisor will review the pattern the next morning.”
These are decision rules. They turn data into action automatically — no analysis required, no meeting needed. The data triggers the rule. The rule triggers the behavior. The behavior reduces downtime.
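Rules like these are simple enough to encode directly. A sketch under the assumptions above: the thresholds are copied from the example rules, and the daily-summary dictionary is a placeholder you would fill from your own system.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class DecisionRule:
    name: str
    condition: Callable[[dict], bool]  # evaluated against one machine's daily summary
    action: str                        # what the rule commits you to


RULES = [
    DecisionRule(
        name="vibration over 2x baseline",
        condition=lambda m: m["vibration_mm_s"] >= 2.0 * m["baseline_mm_s"],
        action="Schedule maintenance within 48 hours",
    ),
    DecisionRule(
        name="utilization below 40% for 3 days",
        condition=lambda m: all(u < 0.40 for u in m["utilization_last_3_days"]),
        action="Investigate the root cause",
    ),
    DecisionRule(
        name="3+ micro-stops in a shift",
        condition=lambda m: m["micro_stops_last_shift"] >= 3,
        action="Supervisor reviews the pattern next morning",
    ),
]


def evaluate(machine_summary: dict) -> list[str]:
    """Return the actions triggered by today's data for one machine."""
    return [rule.action for rule in RULES if rule.condition(machine_summary)]
```

The point is not the code; it is that each rule names a condition and a committed action, so nobody has to decide in the moment whether the data is "bad enough" to act on.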
What it costs
Everything. If you install monitoring and change nothing, you get nothing. You spend roughly $7,200 a year on the subscription and still lose $50,000-$200,000/year in preventable downtime. The monitoring did not fail — the organization did not adapt.
Get it right the first time
Our assessment tool builds a monitoring plan for your shop that avoids all five mistakes — starting with the right metrics, the right thresholds, and a 30-day action plan so data turns into results from day one.
See what downtime is costing you →
The Common Thread
All five mistakes share the same root cause: treating monitoring as a technology project instead of a behavior change initiative. The sensors and dashboards are tools. They do not reduce downtime any more than a thermometer reduces a fever. They give you the information to act differently. The acting is on you.
The shops that get the most value from monitoring are not the ones with the most sensors. They are the ones that:
- Start with 3 metrics per machine, not 200
- Set thresholds from baselines, not textbooks
- Review data daily and take one action per day
- Put operations in charge, not IT
- Define decision rules before the sensors are installed
Monitoring is not a purchase. It is a practice. The technology is the easy part. The discipline is where the ROI lives.