Test and Inspection

We get paid to report what is, not what is wished.

A hackneyed maxim says that PCBA testing is a non-value-added activity. Really? Did you know PCBA testing has the unique ability to make revenue out of thin air? If that isn’t creating value, then what is?



For a number of years our company has been AS9100 registered, in recognition of the many unique quality management requirements of the military/aerospace sector, and of our willingness to abide by them. Section 7.1.2 of the AS9100C standard specifies that a registered organization “shall establish, implement, and maintain a process for managing risk to the achievement of applicable requirements.” Elaborating, the standard goes on to say that to be compliant, the qualifying organization will assign responsibilities for risk management; define risk criteria, including likelihood and probable consequences of taking certain risks; identify, assess and communicate these risks throughout the process; identify, implement and manage actions to mitigate risks exceeding established risk acceptance criteria; and accept remaining risks after completing any mitigating actions.

Charming. Still with me?

Pointedly, the standard makes no attempt to define, specify or impose what risk is most critical, burdensome, or potentially debilitating to a specific organization, leaving that for each company to define as an integral part of their process. Risk is sort of like McCarthy Era communism: It’s out there, waiting to be rooted out.
Which is why divine providence has graced us with auditors.

What, then, does this mean?

Let us postulate first that, in my role as a business owner, a significant portion of every waking hour is dedicated to some form of risk management. To imagine otherwise would insult the intelligence of any reasonable business owner. The mere reality of owning and operating a firm in 2013 is an ongoing case study in risk management. Further, the small business sector, in which our company operates, owes its existence in part to doing things the Big Guys either can’t do quickly or won’t do without an accompanying mountain of paperwork and procedure. That’s the symbiosis between big and small business. Big business checks up on the well-oiled functioning of that symbiosis by means of accrediting standards and organizations, observed and enforced by our friends in the auditing profession.
Who, ironically, have the final word on risk, without risking a thing themselves. The Auditor gets to decide whether we struggling businesses have properly assessed risk in the evidence documenting the daily conduct of our own operations. So judgmental. The nerve.

Risks for me, and for our business, include the following, in no particular order:

  • That the customer is certifiably insane or, worse, bereft of common sense.
  • Human stupidity, either self-inflicted or from a qualified external source.
  • Armageddon.
  • Acts of God.
  • Random chance or caprice.
  • Failure of the tested product to work as designed.
  • Failure of our designated test system to work as intended.
  • Sudden illness or plague, famine, war or pestilence.
  • Failure of the customer to pay their bill.
  • Unplanned, and potentially lethal, outbreaks of incompetence on the part of the customer, the test engineer or both.
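For the morbidly curious, the likelihood-and-consequence scoring the standard asks for fits in a few lines of code. This is a minimal sketch, not our actual process; the 1-to-5 scales, the acceptance threshold, and every name in it are illustrative assumptions:

```python
# Minimal sketch of an AS9100-style risk register. The 1-5 scales, the
# acceptance threshold, and all names here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Risk:
    description: str
    likelihood: int   # 1 = rare .. 5 = near certain (assumed scale)
    consequence: int  # 1 = negligible .. 5 = catastrophic (assumed scale)

    @property
    def score(self) -> int:
        return self.likelihood * self.consequence


ACCEPTANCE_THRESHOLD = 6  # assumed: scores above this require mitigation


def triage(risks: list[Risk]) -> tuple[list[Risk], list[Risk]]:
    """Split a register into risks needing mitigation and risks accepted as-is."""
    needs_mitigation = [r for r in risks if r.score > ACCEPTANCE_THRESHOLD]
    accepted = [r for r in risks if r.score <= ACCEPTANCE_THRESHOLD]
    return needs_mitigation, accepted


register = [
    Risk("Customer fails to pay their bill", likelihood=2, consequence=4),
    Risk("Test system fails to work as intended", likelihood=3, consequence=3),
    Risk("Armageddon", likelihood=1, consequence=5),
]
mitigate, accept = triage(register)
```

Score it, mitigate whatever exceeds the threshold, accept and document the rest. The auditor's signature page practically writes itself.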

We deal with most of these things daily. We have the battle scars to prove it.

Let us further stipulate that as test engineers we face certain unique risks, high among them being:

  • That the customer has failed to adequately specify their requirements because of ignorance, spinelessness or an assumption of supernatural intuitive powers on the part of the test services provider. (We offer the latter for an additional charge.)
  • The customer has unrealistic expectations regarding (less) time, (lower) cost or worse: both. But in any event, we should just know!
  • That test engineering services represent a cost rather than an enduring value and, therefore, are to be avoided, except as a last resort, usually at 4:45 p.m. on a Friday preceding a three-day weekend.

Why does this matter to me? Because I wonder in practice whether risk management, according to the standard, makes a difference. A personal example will elucidate.

Seeking a brief respite from the delights of day-to-day risk management, in November 2009 my wife and I took a Mediterranean cruise. The ship was the Costa Concordia. You may have heard of it. It is the same ship now in the headlines, as it is being raised upright and (hopefully) salvaged from the rocks of Giglio Island in Tuscany, where it has rested on its side for the past 19 months.

It would be an understatement to say that the story of the Costa Concordia’s demise is a monument to poor risk management. Well-deserved opprobrium has been attached to the then-Captain’s reckless disregard for the hazards of sailing in shallow water with 4,200 passengers. When we sailed roughly the same course in 2009, however, the ship had a different captain, but the attitude on board was much like the one the media portrayed the night the Concordia ran aground. Specifically, to our eyes, the management of the ship barely rose above the level of chaos. Crewmembers on our cruise simply did not know what to do. Lifeboat drills, for example, seemed to be treated as attendance-optional events. (My understanding is that maritime law requires them on the first day of a cruise, and that attendance by all passengers is mandatory.) Baggage was misplaced. Tour excursions were completely disorganized. A culture of indifference to customer comfort, service and safety seemed to permeate the vessel.

But, by God, they passed their audits. They had the signatures on file to prove it.

After that cruise in 2009, my wife and I vowed never to journey on that ship – or cruise line – again. Ever. And when the Concordia hit the rocks in Tuscany in January 2012, we were saddened but hardly surprised. To us it seemed an inevitable consequence.

But they were audited.

The cynical view of risk management taken by many business owners is to game the system and do the minimum necessary to pass a cursory third-party audit, demonstrating sufficient wide-eyed conviction to hoodwink the auditor into thinking they are taking the issue seriously. We, the audited, are further expected to rank these risks for the auditor’s benefit. Less work for them that way. No-risk auditing, if you will.

Auditors are human too. This does not make them bad or inherently evil people. They have to eat like the rest of us. And like the rest of us, they often choose the path of least resistance. Auditors are also seldom confronted with the challenge and pressure of having to pay the bills. There are few documented instances of auditors rising to the top of large organizations, and for good reason. Nevertheless, by some pretzel logic, auditors hold almost arbitrary sway over how our business defines risk, and how we quantify that definition.

So what do we as a testing company do? For us, the crucial moment in risk identification comes during what we call the Contract Review Process, in which we determine the following:

  1. Can we do the project as it has been specified?
  2. If it has not been completely specified, can we fill the gaps in the customer’s specification, and demonstrate to concerned laypersons (buyers) that we both know what we’re doing and warrant separating their company’s money in our direction?
  3. Most important, are there things we absolutely cannot do, and have we made these “NOs” explicit in our communications to the customer?
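Those three questions amount to a go/no-go gate, and the gate can be sketched in code. A hypothetical sketch only; the function, its inputs and its wording are illustrative, not our actual contract-review paperwork:

```python
# Hypothetical sketch of the three-question contract-review gate above.
# Inputs and wording are illustrative; this is not our actual checklist.

def contract_review(fully_specified: bool,
                    gaps_fillable: bool,
                    hard_nos: list[str]) -> tuple[bool, list[str]]:
    """Return (quote_it, notes_to_customer) for a proposed test project."""
    # Question 3 first: the NOs must be made explicit no matter what.
    notes = [f"We will NOT: {item}" for item in hard_nos]
    if fully_specified:                     # Question 1: can we do it as specified?
        return True, notes
    if gaps_fillable:                       # Question 2: can we fill the gaps?
        notes.append("Spec is incomplete; our quote documents how we fill the gaps.")
        return True, notes
    return False, notes + ["We cannot complete the specification; declining to quote."]


quote_it, notes = contract_review(
    fully_specified=False,
    gaps_fillable=True,
    hard_nos=["guarantee 100% fault coverage"],
)
```

Note the ordering: the explicit NOs are recorded before any yes is given, which is the whole point of step 3.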

Following these simple steps methodically ensures negligible risk. Or, as the First Officer of Asiana 214 said to his Captain – busy setting the autopilot the morning of July 6, 2013 (somewhere west of San Francisco) – “What a beautiful morning, and what could possibly go wrong on a day like today?”

Robert Boguski is president of Datest Corp., (datest.com); rboguski@datest.com. His column runs bimonthly.


Lessons from the Skunk Works management style still resonate today.

Clarence L. “Kelly” Johnson (1910-1990) was a legendary aircraft designer and aerospace program manager. He and his Lockheed teams, known far and wide as the Skunk Works, were responsible for creating some of the most celebrated military aircraft in history, including the F-104 Starfighter and the SR-71 Blackbird Mach 3 reconnaissance aircraft. Lesser known, but still renowned and enduring within the aerospace community, are Johnson’s 14 Rules of Management, honed over more than 40 years in the pressure-cooker environment of shepherding high-profile, yet top-secret, government contracts from conception to completion.

Johnson’s life and career offer valuable lessons that can be applied to any business, including test engineering.

A good example of this is the use of plainspoken clarity and stick-to-it-iveness as a day-to-day business practice – in written, spoken, and digital words.

Why? Because in our business, what often passes for plain speech and follow-through is more like jargon and lip service. Buzzwords are frequently employed to obscure, confuse and mislead, often in the service of diverting attention, rather than expanding the frontiers of knowledge.

Which is a pity, considering how many of us live and work in the technological center of the English-speaking universe (Silicon Valley). Home of allegedly well-spoken, smart people. In law-abiding, image-obsessed California. Where many of us still harbor barely suppressed yearnings to keep it the acknowledged center of digital supremacy it has always been. Somehow we lost our way, and we need to regain it.

Kelly Johnson’s lifelong motto was “Be quick, be quiet, be on time.” Short. Sweet. Indisputably clear. His 14 Rules of Management amplified that philosophy.

Johnson firmly believed that world-beating results were best achieved by small, highly talented, firmly focused, authoritative teams, holding oligarchic responsibility for delivering an extraordinarily complex working product (more often than not a supersonic aircraft) on time, within budget, and to customer (usually the US Air Force) specifications. Those teams managed documentation systems kept deliberately simple and extremely flexible. Procedures were subject to ruthless reinvention as circumstances demanded. Meetings were limited to small gatherings, often no more than a handful of people and never more than 15, encouraging open participation and rapid feedback on design changes. Blowhard reports were discouraged; in fact, Johnson hated reports exceeding 15 pages in length. Reporting relationships were short. The customer was kept well and regularly informed. Costs were carefully monitored. Surprises were minimal. Trust was all: between design chief and team; between prime and subcontractors; between customer and prime.

Johnson’s rules were the result of necessity. Most of the design, technology and materials for revolutionary aircraft like the SR-71 had to be developed from scratch.
The physics behind the specifications demanded it. National security inspired it. The Lockheed team was in uncharted waters, and had to apply its skills to making its own charts. A rigid system of design and contracting rules would never have allowed the crown jewel of American aeronautics, a plane conceived to outrun and outsoar everything shot at it or flown in pursuit of it, to see the light of day. Or night, where the SR-71 often operated. From this fearless willingness to invent anew when confronting technical obstacles, great things were accomplished. Skunk Works designs were significant contributors to the winning of the Cold War.

Back to Earth: What lessons can we test engineers draw from this history?

1. Keep It Simple, Stupid (KISS, another Johnson innovation). Follow a 20th-century variant of Occam’s Razor (i.e., given the choice between a simple path of thinking or action and a more complicated one, choose the former). Example: Avoid being hamstrung by overly annotated Statements of Work (SOWs) that read like combat after-action reports. I recently received an 18-page SOW from a customer intent on defensively micromanaging every byte, bit and pogo pin of an in-circuit test fixture and program development because of the neglect of a previous supplier. It was a Dutch dyke-plugger’s manifesto in its attempt to catalog every engineering goof of the preceding 10 years. The sins of old are visited on the new.

2. Listen to the Customer. Really. Listen carefully to what the customer wants, however harebrained it may initially sound. Customers have legitimate needs, wants and biases, often borne of bad past experiences. (That’s why they are in your office, right?) Customers also have half-formed (some would say half-baked) ideas, occasionally needing the guiding nudge of hard-bitten expertise to make them real. Either way, respect them. Just take notes and suppress the snarky opinions. There will be ample time to pass judgment later.

3. State what you are prepared to do, then do it, and keep the customer informed while you’re doing it. Avoid the impression your work relies heavily on Black Magic (unless, of course, it does). Too often in the test business, the impression is given, with well-measured condescension, that a genius is at work and is not to be disturbed, by anyone, until the masterpiece is unveiled. And heaven preserve the poor Buyer who simply and reasonably wants a clear response to the question of when their product will be done. The bolder among them might even hazard a query about how it works. Such effrontery is often met today with the email equivalent of a malevolent stare, often in 140 characters or fewer. Genius works that way. Rather than promoting hostilities, better to state your intentions in your quote, then back them up in the execution once the order is yours. Back them up again with accurate coverage reports. In all things make clear to the customer exactly what it is they are paying for. Look past the customer’s doltish behavior; they are Our Dolt. Patience, until further notice, is still a virtue. And their checks are still cashable.

Truth in advertising is the exception nowadays. Make it your strength.

4. No less important, declare unequivocally what you will NOT do. Kelly Johnson’s Rule #10 states this definitively: “The specifications applying to the hardware must be agreed to well in advance of contracting. The Skunk Works practice of having a specification section stating clearly which important military specification items will not knowingly be complied with and reasons therefore is highly recommended.” Don’t attempt to hide this uncomfortable truth. Better to address and enumerate the technical shortcomings head-on and early, before the recriminations start.

5. Above all else, eschew obfuscation. Say that 10 times fast. Worship clarity in all things, and put theory into practice. (Sounds insultingly basic, but so many of us don’t do it.) This is your lodestar. Everything flows from this. Observe this rule and you will separate yourself from the herd in a business where vagueness is deployed to competitive advantage.

Common sense yet again.

For some, anathema. For others, refreshing. Choose wisely.

Any takers? From somewhere on high, Kelly’s watching.

Robert Boguski is president of Datest Corp., (datest.com); rboguski@datest.com. His column runs bimonthly.

