Industrial edge gateway installation — GW-400 unit mounted at press cell, automotive stamping plant

There's a version of "edge computing" that lives in conference presentations — clean rack-mounted servers, high-speed plant networks, orderly data flows feeding beautiful dashboards. Then there's the factory floor version, which involves conduit runs through areas that hit 140°F, network switches that get covered in metal shavings, and IT departments that want nothing to do with OT systems but also don't want you touching their VLANs.

We've deployed edge processing at 23 facilities across North America and Europe. Here's what actually works.

Keep Processing as Close to the Sensor as Possible

The single most important architectural decision in factory floor edge computing is where you run the signal processing. If you're streaming raw vibration data from a Wilcoxon Research 786-500 accelerometer sampled at 12.8 kHz back to a server somewhere — even a local server — you've already created a problem. That's 12,800 samples per second, per sensor; at 16-bit resolution, roughly 200 kbit/s of continuous traffic each. Put 40 sensors on a network designed for SCADA traffic and you're pushing a constant 8 Mbit/s of raw waveform, which is a bandwidth crisis within the first month.

The right answer is to run FFT analysis and feature extraction at the sensor gateway level. A DIN rail-mounted edge gateway running a 4-core ARM processor can handle real-time spectral analysis for 16–24 vibration channels simultaneously, passing summary features rather than raw waveforms to upstream systems. That's a 95%+ reduction in data volume with no loss of diagnostic resolution. You still capture and retain raw waveforms for fault signature analysis — but on a triggered basis, not continuously.
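To make that concrete, here's a minimal sketch of the kind of per-window feature extraction a gateway runs. The window length, band edges, and feature names are illustrative choices for this example, not our production code:

```python
import numpy as np

FS = 12_800     # sample rate in Hz, matching the 12.8 kHz example above
WINDOW = 8_192  # samples per analysis window (~0.64 s at this rate)

def extract_features(window: np.ndarray) -> dict:
    """Reduce one raw vibration window to a handful of summary features.

    8,192 raw 16-bit samples (~16 KB) become a few dozen bytes of
    features, which is where the 95%+ data reduction comes from.
    """
    x = window.astype(np.float64) - window.mean()           # remove DC offset
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))  # windowed FFT magnitude
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)

    # Coarse band energies; the band edges here are illustrative only.
    bands = [(10, 1_000), (1_000, 3_000), (3_000, 6_400)]
    band_energy = {
        f"band_{lo}_{hi}_hz": float(np.sum(spectrum[(freqs >= lo) & (freqs < hi)] ** 2))
        for lo, hi in bands
    }

    rms = float(np.sqrt(np.mean(x ** 2)))
    peak = float(np.max(np.abs(x)))
    return {
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / max(rms, 1e-12),  # guard against an all-zero window
        "dominant_hz": float(freqs[np.argmax(spectrum)]),
        **band_energy,
    }
```

Each call turns roughly 16 KB of raw waveform into a record small enough to send over any plant network, and the raw window itself only leaves the gateway when a trigger condition says it's worth keeping.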

Hardened Hardware Is Not Optional

Consumer-grade or light commercial computing hardware fails fast in manufacturing environments. We learned this the hard way in an early deployment at a forge press facility — Raspberry Pi-based edge nodes running in enclosures that looked adequate on paper (IP65-rated) but weren't specified for the continuous vibration and thermal cycling of being mounted on the press frame itself. Eighteen months in, we had 40% failure rates on the compute modules. The enclosures were fine. The boards inside were not.

Current generation industrial edge hardware for this application should be rated to IEC 61000-4 for EMC, have solid-state storage only (no spinning disks, no SD cards in vibration environments), operate continuously at 70°C ambient, and be DIN rail mountable with isolated power inputs. Units from Moxa, Advantech, and similar industrial compute vendors in the MIL-spec tier meet these requirements. They cost more than a Raspberry Pi. They don't fail every 18 months.

Network Architecture: OT Stays Separate

Factory floor network infrastructure is a persistent source of conflict in edge deployments. IT wants everything on managed infrastructure. OT wants the SCADA network left alone. Both positions have merit and neither is fully workable if you treat them as absolute.

What we've found to work: a dedicated plant monitoring VLAN, physically separated at the switch level from both the production OT network and the corporate IT network, with a purpose-built gateway machine that aggregates edge node data and provides a single controlled connection point for remote access. That gateway machine runs a hardened Linux instance with no unnecessary services, strict firewall rules, and certificate-based authentication only.

This keeps the OT team satisfied that the SCADA network is untouched. It gives IT a single managed entry point they can audit and control. And it means the monitoring system can be rebooted, reconfigured, or replaced without any risk to production systems.
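As an illustration of what "certificate-based authentication only" means on that gateway, here's a minimal Python sketch of a TLS listener that rejects any client without a certificate signed by the plant CA. The file paths and port are placeholders, not a prescribed layout:

```python
import socket
import ssl

# Illustrative paths; in practice these come from the plant's own PKI.
CA_CERT = "/etc/gateway/ca.pem"        # CA that signs authorized client certs
SERVER_CERT = "/etc/gateway/server.pem"
SERVER_KEY = "/etc/gateway/server.key"

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile=SERVER_CERT, keyfile=SERVER_KEY)
context.load_verify_locations(cafile=CA_CERT)
context.verify_mode = ssl.CERT_REQUIRED  # no valid client certificate, no connection

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()  # handshake fails without a trusted client cert
        # ... hand the authenticated connection to the data aggregation service
```

Password logins and shared API keys don't appear anywhere in that path, which is exactly what makes the single entry point auditable for IT.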

Offline Operation Is a Requirement, Not a Feature

Factory networks go down. Managed switches reboot for firmware updates. Core switches fail. If your edge monitoring system stops functioning every time there's a network event, you'll lose your plant's trust in about three months.

Every edge gateway in our architecture operates fully autonomously when the network is unavailable. It continues sampling, analyzing, and storing locally. It queues alerts. When connectivity returns, it syncs. The plant floor monitoring never stops because a network switch rebooted.
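The core of that behavior is a store-and-forward pattern like the following simplified sketch. The table layout and the send_upstream() transport stub are stand-ins for whatever the real gateway uses:

```python
import sqlite3
import time

db = sqlite3.connect("/var/lib/gateway/features.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS outbox ("
    " id INTEGER PRIMARY KEY, ts REAL, channel INTEGER,"
    " payload BLOB, synced INTEGER DEFAULT 0)"
)

def send_upstream(payload: bytes) -> None:
    # Placeholder for the real transport (MQTT, HTTPS, ...);
    # assumed to raise OSError when the network is unavailable.
    raise OSError("no upstream connectivity")

def record(channel: int, payload: bytes) -> None:
    """Always write locally first, so sampling never blocks on the network."""
    db.execute(
        "INSERT INTO outbox (ts, channel, payload) VALUES (?, ?, ?)",
        (time.time(), channel, payload),
    )
    db.commit()

def sync() -> None:
    """Push everything unsynced upstream; called whenever connectivity returns."""
    rows = db.execute(
        "SELECT id, payload FROM outbox WHERE synced = 0 ORDER BY id"
    ).fetchall()
    for row_id, payload in rows:
        try:
            send_upstream(payload)
        except OSError:
            return  # still offline; leave the rest queued and retry next cycle
        db.execute("UPDATE outbox SET synced = 1 WHERE id = ?", (row_id,))
        db.commit()
```

The important property is that record() never depends on the network at all; connectivity only affects how long data sits in the outbox.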

Local storage capacity matters here. At our typical feature extraction rates, 30 days of continuous operation for a 24-channel vibration gateway takes about 18 GB of storage, or roughly 0.6 GB per day. Our GW-400 units ship with a 256 GB SSD, which at that rate is over 400 days of autonomous capture: well over a year before there's any risk of data loss during an extended network outage.

What Tends to Fail

Wireless sensor networks fail more often than people expect. Not because the RF technology is unreliable in principle, but because factory floors are terrible RF environments. Steel structures, Variable Frequency Drives running at 4–16 kHz switching frequencies, and the sheer density of metallic obstacles between access points and sensors create RF environments that spec sheets don't prepare you for. We've done RF surveys at seven plants where vendors promised 90%+ wireless reliability and found actual packet error rates of 15–30% in the production area. For vibration monitoring, that's not acceptable.
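For reference, the packet error rate numbers from those surveys come from something as simple as counting sequence-number gaps in the packets that actually arrive. A minimal sketch, assuming the sensor nodes stamp packets with monotonically increasing sequence numbers:

```python
def packet_error_rate(received_seq: list[int]) -> float:
    """Estimate PER from the sequence numbers of packets that made it through.

    Assumes monotonically increasing sequence numbers with no counter wrap
    during the measurement window.
    """
    expected = received_seq[-1] - received_seq[0] + 1
    return 1.0 - len(set(received_seq)) / expected

# A node that sent packets 1..10 but delivered only six of them:
print(packet_error_rate([1, 2, 3, 5, 8, 9]))  # 0.33... -> a 33% PER
```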

Wired sensor connections with structured cable management add installation labor up front. They run reliably for years afterward without attention. For anything running at more than 1,000 RPM where you need reliable high-frequency data, wire it.

Over-centralized analytics also fail. If your edge architecture requires a functioning connection to a central analysis server to generate alerts, you've created a single point of failure in the wrong place. Alerts need to originate at the edge and be delivered through multiple paths — local display, email, and CMMS integration — not held until a central system that may or may not be available can route them.
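In code terms, the delivery fan-out is independent per path: every path is attempted whether or not the previous one succeeded. A sketch, with the hostnames and addresses as placeholders:

```python
import smtplib
from email.message import EmailMessage

def show_on_local_display(subject: str, body: str) -> None:
    print(f"[LOCAL ALERT] {subject}: {body}")  # stands in for the gateway's panel UI

def send_email(subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "gateway@plant.example"      # placeholder addresses
    msg["To"] = "maintenance@plant.example"
    msg.set_content(body)
    with smtplib.SMTP("mail.plant.example") as smtp:
        smtp.send_message(msg)

def post_to_cmms(subject: str, body: str) -> None:
    raise NotImplementedError("CMMS integration is site-specific")

def deliver_alert(subject: str, body: str) -> None:
    """Fan the alert out through every path; one dead path never silences the rest."""
    for path in (show_on_local_display, send_email, post_to_cmms):
        try:
            path(subject, body)
        except Exception as exc:
            print(f"alert path {path.__name__} failed: {exc}")  # log and keep going
```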

The Practical Deployment Checklist

Before any edge deployment at a new facility, we run through:

- Ambient temperature survey at each installation point
- RF environment assessment for any location where wireless is being considered
- Review of existing network topology and OT/IT boundary locations
- Confirmation of power availability at each gateway location
- Review of the plant's existing alarm management load, to make sure we're not adding to an already-overloaded alert queue

That pre-work takes 4–6 hours for a typical plant. It saves 40+ hours of remediation after installation. Every time. The plants that skip the survey are the ones calling us six weeks post-deployment with network conflicts, thermal shutdowns on edge hardware, or RF connectivity problems that require relocating access points.

Edge computing on the factory floor works well when it's designed for the actual environment. The gap between what vendors promise and what plants experience comes almost entirely from shortcuts in the pre-deployment assessment.

Considering edge monitoring at your facility?

We do a free pre-deployment environmental assessment before any site quote. Talk to our team about what that review involves and what it typically surfaces.

Request a Site Assessment