Since man first walked, he has been scouring the skies for understanding of how he came to be. Big events are bound to enhance our learning, but capturing those moments and making sense of them is never a simple task. In August came one of those big moments astronomers had been waiting a lifetime to witness: the collision of two neutron stars.
While the cosmic event answered questions astrophysicists had long been contemplating, it also opened a new world of questions. The answers to at least some of those questions are buried in mountains of digital data that could take years to sort through.
A similar situation is unfolding in factories worldwide. We no longer look at machines in isolation, nor do the machines themselves act independently. Systems designed to check the work of other machines on the line are proliferating. Those machines generate immeasurable amounts of data, some of which are used to independently resolve ongoing or potential processing “events,” big and small. But the capture of all these data threatens to bury already overworked engineers.
Roughly 20 months ago, solutions started appearing, promising to resolve not just how machines talk to each other, but also what to do with the results of all those communications. Various protocols, some proprietary, some not, quickly began to emerge: Mentor’s OML, IPC’s CFX and ASM’s (and others’) Hermes Initiative among them. The scopes vary: Some overlap; some are finite; some are broad. For users, to say confusion reigns is an overstatement, as most are only vaguely aware of the respective efforts.
There’s no question a standard protocol for data transport (how information is moved around), encoding (the information definition and descriptions) and content (what’s moved around) would be valuable. It would shorten the time it takes to make new systems compatible with other vendors’ equipment. Time and costs now spent on developing software translators would disappear. But the direct value to end-customers remains less defined.
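To make the layering concrete, here is a minimal sketch of what a shared encoding might buy. Everything below is invented for illustration; the field names and function names are hypothetical and are not drawn from OML, CFX or Hermes.

```python
import json

def encode_event(machine_id, event_type, payload):
    """Encoding layer (hypothetical): a common definition of fields and
    their meanings, so every vendor writes the same structure."""
    return json.dumps({
        "machine": machine_id,   # who is speaking
        "event": event_type,     # content: what happened
        "data": payload,         # content: the measurements themselves
    })

def decode_event(message):
    """Any vendor's software can parse the message the same way,
    with no custom translator in between."""
    return json.loads(message)

# Transport -- how the message physically moves between machines
# (e.g., over a message broker) -- is a separate concern, not shown here.
msg = encode_event("placer-07", "board_complete", {"cycle_time_s": 21.4})
event = decode_event(msg)
```

With an agreed encoding like this, the translators mentioned above become unnecessary: the receiving system reads the fields directly rather than mapping one vendor’s format onto another’s.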
I can understand why Flex and Jabil, for instance, would see value in the ability to view performance data across all their many factories. For the small EMS with a single automated line, where machine utilization and customer needs are so different, aggregating and uploading floor data to the cloud may never be worth the time and money.
The security aspect of all this seems underplayed. If Citibank can’t protect my credit card from being hacked – and judging by the three king-sized mattresses I “bought” last month from a store some 2,000 miles from my home, it can’t – how are we to get comfortable when the same security protocols are used to protect our factory data? What’s to stop, for instance, the cloud data from being compromised and all our corporate IP spilled out for the world to see? Or perhaps an enterprising hacker will make their way into the factory and shut down a line, or worse, change the machine parameters, tying up the lines and creating mounds of scrap.
We’ve seen this play before. Starting more than 30 years ago, trade associations, consortia and private companies vied to deliver a consensus standard electronics design data transfer format. The dumbest format – one never intended for electronics CAD data – won out. We call it Gerber, and it’s used in every electronics manufacturing factory in the world.
I won’t hazard a guess as to who the winner of this latest race for data transfer protocol supremacy will be. It won’t necessarily be the first to market. It will be the first to reach a critical mass of users – be it 70% or 50% or maybe only 30%. After that, the others will begin to consolidate or fall away. History says the simplest-to-implement programs will win.
Those promulgating the would-be standards must keep in mind that collision is inevitable. How they decide to proceed will determine whether they light the way for years to come, or like those neutron stars, crash and dissipate.
P.S. We are grateful for your support this year, and wish all our readers and customers a healthy, prosperous 2018!