Cutting Machine Programming Time

Written by Edward Faranda
Thursday, 31 July 2008 19:00

How a novel process can avoid repeated online debugging.

One of the biggest challenges in printed circuit board manufacturing is generating programs offline for production machines. The process is required for every assembly we build, yet for many manufacturing engineers it still means hours of online programming when time on the production line is at its most valuable. Debugging programs at the machines costs companies thousands of dollars (or more) in lost production time, whether as overtime or downtime.

How does one create machine programs and have them run perfectly once they get to the machines? I am an ME who battled this for years. I haven’t totally perfected the process, but I have procedures in place to help with this problem.

Still, it is a difficult task, despite many years spent perfecting this process. Occasionally I still spend time on the floor tweaking machine programs to get them right, and that is time-consuming. With prototype, alpha and beta runs becoming smaller, it becomes harder to have a perfected program by the time of production release. In fact, we recently went from prototype to production build in a matter of months, while building fewer than 10 units. Without a process I call Dynamic Programming, this would be a nightmare at most EMS firms and OEMs.

Static programming. I define Dynamic Programming by first describing its opposite. Static programming is the generation of machine programs from a database that defines locations and component information. Every plant has a database that describes how a particular part looks, and that information is pulled in as the BoM and CAD files require. A good static database generates programs without errors offline, but those programs run into trouble once they reach the machine. The usual result: many hours of program debug on the production floor.

Consider how often you have used Part no. 123 on one product, debugged it, then used the same part on another product, only to redo the debug. Most production machines contain a parts library that defines each part, but there is still a link file between the placement file and the parts library that says Part no. 123 uses this particular library record.
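As an illustration only (the record fields below are made up, not any vendor's actual file format), the relationship between the three files can be sketched in a few lines of Python:

```python
# Illustrative sketch: how a placement file, a link file and a parts
# library relate. All field names and values here are hypothetical.

# Parts library: shape data keyed by an internal library record ID.
parts_library = {
    "LIB-0042": {"body_mm": (2.0, 1.25), "thickness_mm": 0.45},
}

# Link file: maps a part number to the library record it uses.
link_file = {
    "123": "LIB-0042",
}

# Placement file: one row per placement, referencing only the part number.
placement_file = [
    {"refdes": "R15", "part_no": "123", "x": 10.5, "y": 22.3, "rot": 90},
]

def resolve(placement):
    """Join a placement row to the library record the machine will use."""
    return parts_library[link_file[placement["part_no"]]]

# Debug Part no. 123 once; every product that links "123" to the same
# record inherits the fix -- but only if the libraries stay in sync.
record = resolve(placement_file[0])
```

The point of the sketch is that the placement data never carries the shape information itself; everything hangs on the link to the library record, which is exactly where the machine and the offline software can drift apart.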

The problem lies in keeping the libraries on the machines in sync with the library in the programming software package. This is where most engineers fall short in the program development process. How does one keep these pieces of critical information in sync? Hence what I call Dynamic Programming.

Dynamic Programming. Dynamic Programming feeds back to the programming software any information that was changed on the machine to get a program running. It's surprising how many people fail to do this. Colleagues claim either they lack the time to do double the work, or their software doesn't capture that information. To me, both excuses are unacceptable.

We’re all very busy, but I believe in the payback system. When I first came up with the idea of Dynamic Programming and how I wanted to handle program management, I deleted all the parts library databases, not only in the programming software, but on the machines as well. Everything was deleted in a matter of a few minutes.

(As an aside, and I don’t recommend this, but I didn’t initially tell my employer what I was about to do. When I owned up, the response was: “What! Why did you do that?” I explained my process and how program management was going to be conducted. In giving his OK, my boss also asked that the next time I do something major to at least let him know first. Away I went to recreate my databases.)

At the time, I had about 500 part numbers in my database, and approximately 120 different products that needed to run. One by one, I entered the part shape information in the database, using manufacturer component drawings to input these data.

Working in my favor was the company's robust database of specifications for the components it used. It lacked the component mechanical drawings, however, which brought static from the R&D engineers. "They don't supply us with that information," they informed me. I spent hours on the Internet looking for those first 500 components, but in time I found a drawing for every single one. After showing the R&D and documentation staff that these drawings were, in fact, available, I put together a directive requiring a mechanical drawing with every single part they introduce to the factory. Now, when I don't get a mechanical drawing for a component, it gets kicked back to the R&D engineer who chose the part (and who probably designed the pad layout).

The database done, I began generating programs for the production line. The first board took a while, but I got it running. During the process I took notes, then updated the offline database. I regenerated the programs and downloaded the same product I had just gotten running, overwriting the tweaked program. That program ran perfectly. At that instant, I realized I had generated a fully working program offline and run it without touching it once it was on the machine. This was a major milestone: it was possible to generate perfect machine programs with my software.

Each time a new product came on the floor, I only had to “tweak” any components I didn’t use in the previous product. Each time I changed the database, I regenerated all the programs. Thus, each program would have a shorter and shorter debug time until the point where none existed.

The dynamic database stayed in sync with the changing machine databases, and the programs improved with each generation. From an initial two hours, debug time dropped to zero on those products. In fact, to this day I can regenerate programs for those products without even walking the floor. It is transparent to production.


The initial rebuild of the database was time-consuming, but maintenance is much easier. The only debugging required is for components newly entered into the database. Data are entered from the manufacturers' drawings, detailed notes are taken during entry and, because they are new components, it is clear which ones must be looked at and verified on the machine. When those components are loaded on the machine, they are verified and adjusted accordingly.

The key to Dynamic Programming is to first test the new settings on the machine, and then update the database. Once all the components have been tested at the machine, the program is deleted and a new program generated and downloaded to the machine. Established components are set and there is no need to look at them when debugging floor programs. At that point, confidence is 100% that the offline program generator database is synced with the machine information.
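The cycle above can be sketched in a few lines of Python. This is a hypothetical illustration only; in practice each step runs through the programming software's own import and export tools, and the data here is invented.

```python
# Hypothetical sketch of the Dynamic Programming cycle: test at the
# machine first, update the offline database second, regenerate last.

offline_db = {"123": {"thickness_mm": 0.45}}   # offline parts database
machine_lib = {"123": {"thickness_mm": 0.45}}  # parts library on the machine

def debug_on_machine(part_no, **tweaks):
    """Step 1: adjust the part at the machine until it places correctly."""
    machine_lib[part_no].update(tweaks)

def update_offline_db(part_no):
    """Step 2: copy the tested machine settings back to the offline database."""
    offline_db[part_no] = dict(machine_lib[part_no])

def regenerate_program():
    """Step 3: delete the tweaked program and regenerate from the offline DB."""
    return {pn: dict(rec) for pn, rec in offline_db.items()}

debug_on_machine("123", thickness_mm=0.50)  # tested and proven at the machine
update_offline_db("123")                    # the database follows the machine
program = regenerate_program()              # the fresh program already matches
```

The ordering is the whole trick: because the database is only ever updated from settings already proven on the machine, a regenerated program never reintroduces an untested value.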

As the machine component information changes, so too does the database on the offline program generator. Hence, Dynamic.


It is a tedious process that takes discipline and control. If several workers generate programs, all of them must follow every step of the process. Support programmers should understand how the database works and be fully aware of the consequences of not following the process.

A single individual should be named database Key Master to ensure processes and rules are followed and to review each program generation. I require my support programmers to fill out a simple form, as they make changes, detailing everything that was changed.

A database is also maintained to track who changes certain component information, when they did so, and for what reason. This also reveals what new components are pending review. In our company, this database is actually linked directly to the component database in the program generation software.
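A record in such a tracking database might look like the sketch below. The field names are my assumptions based on the requirements just described (who, when, what, why, and pending-review status), not the actual schema of any particular system.

```python
# Illustrative change-tracking record; field names are assumptions.
from datetime import datetime, timezone

change_log = []

def record_change(part_no, field, old, new, who, reason, pending_review=True):
    """Log one component change: who made it, when, what, and why."""
    change_log.append({
        "part_no": part_no, "field": field, "old": old, "new": new,
        "who": who, "when": datetime.now(timezone.utc).isoformat(),
        "reason": reason, "pending_review": pending_review,
    })

record_change("123", "thickness_mm", 0.45, 0.50,
              who="programmer1", reason="part skidding at placement")

# Components still awaiting verification at the machine:
pending = [c["part_no"] for c in change_log if c["pending_review"]]
```

Because every entry carries a pending-review flag, the same log that answers "who changed this and why" also produces the list of new components the Key Master still has to sign off.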

Maintaining control. Controlling the offline database is key to this process. Slacking on updating information will return the database to its previous, untested state. Testing the programs at the machine, then entering them into the offline database, then regenerating the program not only ensures the database is correctly aligned with the machine information, it keeps the data integrity intact and under control.



And yet, occasionally some programmers will update machine information and forget to do so with the database, or they lack time to update the information. This usually happens when something is fixed on the fly. One of the biggest problems I have is component thickness changes, especially on small chips.

Because of this, it becomes necessary to compare the machine library with the offline database. (The software I use has a feature that permits this.) It is not an ideal solution, but it does provide a comparison and the ability to accept or reject the changes. The drawback is that it works globally: the software won't let a user reject an update for one part number and accept another. To remedy this, detailed notes are taken on the update, then the necessary corrections are made to the machine or offline database. Done weekly, this doesn't take much time; less frequent comparisons tend to increase the update time sharply.
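A per-part comparison of the two libraries, the thing the global accept/reject feature can't do, is straightforward to sketch. This is an illustration with invented data, not the software's actual feature:

```python
# Hedged sketch: a per-part diff between the machine library and the
# offline database, so each mismatch can be judged individually.

def diff_libraries(machine_lib, offline_db):
    """Return {part_no: {field: (machine_value, offline_value)}} for mismatches."""
    diffs = {}
    for part_no in machine_lib.keys() & offline_db.keys():
        fields = {
            f: (machine_lib[part_no][f], offline_db[part_no].get(f))
            for f in machine_lib[part_no]
            if machine_lib[part_no][f] != offline_db[part_no].get(f)
        }
        if fields:
            diffs[part_no] = fields
    return diffs

# Invented example: a thickness fixed on the fly at the machine but
# never copied back to the offline database.
machine_lib = {"123": {"thickness_mm": 0.50}, "456": {"thickness_mm": 0.30}}
offline_db  = {"123": {"thickness_mm": 0.45}, "456": {"thickness_mm": 0.30}}
mismatches = diff_libraries(machine_lib, offline_db)
```

Run weekly, a list like this stays short; each entry is exactly one decision to make, machine value or offline value, per part number.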

Falling short. Equipment manufacturers do understand the importance of having the offline databases in sync with the machine libraries. They often ask about it. But they usually fall short when it comes to making this task as painless as possible.

Five to 10 years ago, most machine communication ran over serial links. Since then, network capabilities have improved and machines come with Ethernet connections, yet a live link between the machine library and the offline database has yet to appear. It would be far easier if, as soon as corrections were made on the machine, the offline database updated automatically. Of course, that would necessitate tighter security on production machines to protect library and database integrity.

The time from program generation to manufacturing products needs to be reduced. It is possible to shift the majority of debugging time from the machines to offline programming. The task isn't impossible, but it requires written procedures and the discipline to follow certain rules.

Edward Faranda is a senior manufacturing engineer at QSC Audio Products LLC.

Last Updated on Thursday, 31 July 2008 08:21

