The most common failure mode in predictive maintenance pilot programs isn't the algorithm. It's not the dashboard. It's not even the change management problem of getting maintenance technicians to trust the system. It's the data collection stage — and specifically, the way most pilots approach sensor installation as a secondary concern after the software selection and commercial negotiation are already done.
We've taken over three failed pilots from other vendors in the past two years. In every case, the failure started with bad data. Here's what that looks like in practice.
Sensor Location Is Not Arbitrary
A vibration accelerometer mounted in the wrong location doesn't give you wrong data — it gives you data that looks right but is actually picking up the wrong signal source. Vibration measurements need to be taken at the load zone of the bearing being monitored. On a motor driving a gearbox, that means the motor drive-end bearing housing, measured in the radial direction perpendicular to the shaft. Not the gearbox housing. Not the motor frame. The bearing housing, at the load zone.
In the most common installation error we encounter, sensors get placed wherever is physically convenient — top of a motor case, side of a gearbox housing that has a flat mounting surface, or bolted to an adjacent structural member because the target bearing housing didn't have a clean spot for a sensor pad. Each of those locations attenuates the bearing defect signal and introduces structural resonance from adjacent components. The resulting data contains the signal you want, buried under noise you don't want, at amplitudes that don't match any baseline or alert threshold you could usefully set.
The fix isn't complicated: grind a flat mounting pad directly onto the bearing housing, install thread-in sensor mounting studs, and apply a coupling compound at the contact surface. It takes 20 minutes per location if you do it right. Skip it to save installation time and the pilot data is compromised from day one.
Grounding Problems Kill Electrical Noise Immunity
Factory floors are electrically noisy environments. Variable frequency drives (VFDs) generate high-frequency switching noise. Motor cables carry common-mode noise. Ground loops between equipment at different electrical potentials show up as signal contamination. An accelerometer cable run alongside a VFD power cable for 30 meters in the same cable tray will inject 50–120 Hz noise into the vibration signal that looks indistinguishable from low-speed mechanical frequencies.
Proper electrical installation for vibration monitoring is straightforward but non-negotiable: shielded signal cables, shield grounded at one end only (typically at the data acquisition end), signal cables routed in dedicated conduit or at minimum 300 mm separation from power cabling, and a dedicated earth ground for the monitoring system chassis. When this isn't done, the vibration spectrum baseline at every measurement point has an electrical noise floor that masks low-amplitude bearing defect signals in the early stages of degradation — exactly the signals you need to catch for useful early warning.
We've audited pilot installations where the signal cables were run through existing cable trays alongside VFD output cables with no separation and no shielded conduit. The resulting spectra had 10–15 dB higher noise floors than a correctly installed system. The vendor had set alert thresholds above the noise floor to avoid false alarms — which also set them above the Stage 1 bearing defect amplitude range. The system would only alert on Stage 3 and above. By that point, you're not doing predictive maintenance, you're doing early reactive maintenance.
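The 10–15 dB figure is easy to reproduce on synthetic data. The spectral noise floor scales with broadband noise amplitude, and the median of the magnitude spectrum is a robust floor estimate because discrete tones barely move the median. A minimal sketch, where all signal parameters (tone frequency, noise amplitudes, the 5× contamination factor) are illustrative, not audit data:

```python
import numpy as np

def noise_floor_db(signal):
    """Estimate the spectral noise floor in dB as the median magnitude
    of the windowed one-sided FFT; the median ignores discrete tones
    such as bearing defect peaks."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(signal.size)))
    return 20 * np.log10(np.median(spectrum))

fs = 12_800                                   # sampling rate, Hz
t = np.arange(0, 4.0, 1 / fs)                 # 4-second capture
rng = np.random.default_rng(0)
tone = 0.05 * np.sin(2 * np.pi * 162.2 * t)   # hypothetical defect tone

clean = tone + 0.001 * rng.standard_normal(t.size)  # shielded, separated cables
noisy = tone + 0.005 * rng.standard_normal(t.size)  # run through the VFD tray

penalty = noise_floor_db(noisy) - noise_floor_db(clean)
print(f"noise floor penalty: {penalty:.1f} dB")
```

A 5× increase in broadband noise amplitude raises the floor by about 14 dB, while the defect tone's amplitude is unchanged: the signal isn't gone, it's buried.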
Sampling Rate Determines What You Can See
Vibration data for bearing diagnostics needs a sampling rate high enough to capture the bearing defect frequencies for the equipment being monitored. For bearings on machines running at 1,200–3,600 RPM, the relevant bearing defect frequency range is typically 100–2,000 Hz. Because the Nyquist limit is half the sampling rate, and anti-alias filtering needs headroom above that, you need a sampling rate of at least 5,000 Hz (samples per second) to see these frequencies, with 12,800 Hz or higher preferred for reliable spectral resolution. Some sensors positioned as IoT "predictive maintenance" devices sample at 200–500 Hz: a rate adequate for overall vibration level monitoring, but not for the spectral analysis needed to identify bearing fault frequencies.
There's also the question of sample duration. A single 100-millisecond vibration sample is not sufficient for reliable FFT analysis of low-frequency bearing components: spectral resolution is the reciprocal of the capture time, so short snapshots produce coarse frequency bins. You need 2–10 seconds of continuous sampling at each reading to resolve the frequencies of interest. A system that takes 100 ms snapshots every 5 minutes looks like it's doing continuous monitoring, but it's actually providing very limited diagnostic data.
Ask any monitoring vendor two questions before deployment: what is the sampling rate in Hz, and what is the sample duration per reading? If those numbers are vague, or the answers are "it depends on the subscription tier," that tells you something important about the system's diagnostic capability.
Baseline Periods Get Cut Short
A predictive maintenance system learns what normal looks like by watching equipment run in a known-healthy state over a baseline period. The baseline needs to cover enough operating variation — load cycles, temperature changes, shift changes, startup and shutdown cycles — to characterize normal behavior across the machine's operating envelope. For heavy production equipment, this typically requires 30–60 days of operation.
Pilot programs are usually under pressure to show results quickly. After 2–3 weeks of installation and monitoring, stakeholders want to see alerts and anomalies being detected. Under that pressure, baseline periods get cut to 2 weeks or less. The result: alert thresholds set against an incomplete baseline generate false positives when the equipment runs at higher load or in warmer ambient conditions than it did during the shortened baseline. A few false alarms early in a pilot are enough to get the monitoring system written off as unreliable by the maintenance team. Credibility lost in week three is almost impossible to recover in a 90-day pilot.
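The false-positive mechanism is easy to simulate: a threshold set from a baseline that never saw the high-load part of the operating cycle flags healthy high-load operation as anomalous. A sketch with synthetic readings, where the load cycle, vibration levels, and mean-plus-3-sigma threshold rule are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def readings(days):
    """Hypothetical overall-vibration readings (mm/s RMS), 4 per day.
    The last 10 days of each 30-day cycle run at high load, which
    raises levels with no fault present."""
    t = np.arange(days * 4)
    high_load = (t % 120) >= 80           # 4 readings/day * 20 days
    return 2.0 + 0.8 * high_load + 0.3 * rng.standard_normal(t.size)

full = readings(45)        # 45-day baseline spans a full load cycle
short = full[: 10 * 4]     # cut to the first 10 days: low load only

def alert_threshold(baseline):
    return baseline.mean() + 3 * baseline.std()

later = readings(45)       # healthy operation after go-live
alarms_short = int((later > alert_threshold(short)).sum())
alarms_full = int((later > alert_threshold(full)).sum())
print(f"false alarms: short baseline {alarms_short}, full baseline {alarms_full}")
```

The truncated baseline fires dozens of false alarms on perfectly healthy high-load days; the full baseline fires almost none. This is the credibility problem in miniature.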
Historical Data That Doesn't Exist
Predictive maintenance systems work best when there's historical failure data to calibrate against. In practice, most plants don't have well-organized historical failure records that can be used for this purpose. Maintenance logs are often in paper form, incomplete, or stored in a CMMS with inconsistent entry practices. The failure events that would anchor a training dataset aren't there in useful form.
This means that new deployments are working from live monitoring data only, which requires longer baseline periods and more conservative alert thresholds in the early months. Vendors that promise useful predictions within the first 30 days of deployment at a new facility, with no historical failure data to calibrate against, are either overselling the capability or have very simple alert logic that doesn't adapt to the specific equipment. Either way, it's worth asking directly: what is the alert model based on for a new installation with no historical failure data, and how long before it's generating reliable alerts?
The Fixable Problem
Every failure mode described above is fixable. Sensor location errors are corrected by remounting with proper preparation. Electrical noise problems are solved by cable management and grounding corrections. Sampling rate issues are solved by selecting hardware with appropriate specifications. Baseline period problems are solved by setting pilot timelines that allow for a full baseline before alert evaluation. Historical data gaps are solved by accepting a longer ramp-up period and calibrating expectations accordingly.
None of this is technically difficult. It's discipline — in the installation process, in the project timeline, and in the expectations set with stakeholders about what the system can tell you and when. Predictive maintenance works. Predictive maintenance pilots fail when the data collection infrastructure is treated as an afterthought to the software.
Had a pilot that didn't deliver?
We can audit an existing deployment and tell you exactly what the data quality problem is and what it would take to fix it. No commitment to a new contract required.
Request a Pilot Audit