Monthly Archives: July 2016

How can manufacturers improve QC cycle times while still performing everything they need to stay compliant?

At a glance, the mission of a quality control specialist working in fields like chemical or medical device manufacturing, or the life sciences, seems different from that of a production manager at the same company. After all, isn't quality control all about ensuring the safety of the products no matter how long it takes, whereas production itself is far more concerned with meeting quotas and demand on a tight schedule?

Yes and no – while quality control standardizes the manufacturing process to avoid variances harmful to customers and to the reputation of the organization at large, QC microbiologists and technicians no doubt have work orders of their own to fill and testing capacities to reach. And although production managers and other manufacturing specialists may have output on the mind, they understand that without a high standard for quality in operations, their businesses likely wouldn't have any customer demand in the first place.

Optimizing QC laboratory processes in the manufacturing sector means balancing safety and speed without compromising either. How can manufacturers improve QC cycle times while still performing everything they need to stay compliant?


All QC specialists should follow the same guidelines for greater risk prevention and cycle time preservation.

Drill down into the basics
Good risk management in a QC lab should outline all methods for quarantining and reversing conditions adversely affecting manufactured goods. That way, microbiologists and lab technicians save resources, perform speedy investigations, and set QC processes back on track after an out-of-specification (OOS) event. However, there’s something to be said about avoiding trouble in the first place when cycle times are at stake.

To that end, the QC lab should take a page from lean manufacturing, particularly on the subject of process standardization and uniformity. The sequence in which technicians prepare for work, process samples, dispose of spent resources, or clean lab equipment matters greatly to both the success of the testing and the prevention of widespread contamination. An audit of testing operations performed by laboratory supervisors may reveal areas where technicians' actions or inactions potentially subvert the constancy of QC processing and production.

If possible, supervisors should look to documentation on past OOS events for hints on where to start looking first, minimizing the time and resources spent investigating. That said, any small discovery that preempts a contamination event, whether found in historical data or through careful observation, saves production considerable cycle time.

Bring in automation
Research published by The Royal Society of Chemistry analyzing the most common errors in chemical laboratories uncovered the greatest threat to QC cycle time stability: humans. The study found that errors in sample preparation, uncalibrated equipment, miscalculation, and general human error accounted for the majority of OOS incidents. While insightful, these findings should come as no surprise to manufacturers, especially those who witnessed the age of manual production give way to automation.

“Manual processes anywhere open businesses up to risk.”

Truth be told, manual processes anywhere in the production cycle open businesses up to risk, perhaps even unnecessarily. The burgeoning field of rapid microbiological methods devotes itself entirely to finding a solution to this very issue. Manufacturers should likewise devote their time to investigating and investing in innovations that target low-value, high-risk laboratory activities like data keying or slide movement between processing stations and incubators. Focusing on these areas mitigates the risk of production downtime due to contamination, frees up microbiologists for more value-added opportunities, and reduces the overall time spent performing these tasks, all supporting better cycle times for the rest of the plant.

Go digital for smarter oversight
There's a reason why many QC labs have gone digital with laboratory information management systems (LIMS). A LIMS aggregates and centralizes all QC processing data, so laboratory workers can use that information in ways that support faster, more consistent cycle times. Dashboards and other visualizations immediately come to mind. When technicians can easily interpret their workloads and capacity demands at a moment's notice, they spend more time applying their talent to testing.

Manufacturers should remember to align their investment strategies with the cycle time improvement initiatives established above. For instance, if a QC lab still finds value in manually keying data directly into a LIMS, perhaps it should purchase software with configurable value fields. A single misplaced decimal point could send a laboratory on a costly wild goose chase for the phantom catalyst behind an OOS reading. Some LIMS software can block entries that fall outside prearranged value ranges, so an error in the QC lab doesn't carry over onto the production floor in the form of downtime.
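The kind of entry validation described above can be sketched in a few lines. This is an illustrative mock-up, not a feature of any particular LIMS product; the field names and acceptable ranges are assumptions chosen for the example.

```python
# Hypothetical sketch of LIMS-style input validation: each test parameter
# has a prearranged acceptable range, and malformed or out-of-range values
# are rejected at the point of data entry, before they reach the record.

VALUE_RANGES = {
    "ph": (0.0, 14.0),             # pH can never fall outside this range
    "temperature_c": (2.0, 40.0),  # plausible incubation temperatures
    "cfu_count": (0.0, 10_000.0),  # colony-forming units per plate
}

def validate_entry(field: str, raw_value: str) -> float:
    """Return the parsed value, or raise ValueError before it is stored."""
    try:
        value = float(raw_value)
    except ValueError:
        raise ValueError(f"{field}: '{raw_value}' is not a number")
    low, high = VALUE_RANGES[field]
    if not low <= value <= high:
        raise ValueError(f"{field}: {value} outside allowed range [{low}, {high}]")
    return value
```

With a rule like this in place, a misplaced decimal point (say, "72" keyed instead of "7.2" for pH) is caught at entry time rather than surfacing later as a phantom OOS reading.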

Where do supply chains need the most oversight? How can enhanced analytics maximize efficiency and help these areas operate more smoothly?

Big data applications in an asset-intensive industrial setting like a manufacturing plant or an oil refinery need no introduction. Business leaders in these sectors have long awaited the ability to monitor on-site equipment performance in all its granularity, measure it against historical data quickly, and aggregate gigabytes of unstructured data from disparate machinery into easily interpreted, configurable dashboards. Now that this capability is readily available, it feels like a homecoming.

Yet many organizations have been unreasonably reluctant to carry big data analytics over into supply chain management, arguably the area of every business most subject to complexity. According to an Accenture study, although 97% of executives are aware of the benefits big data analytics bring to supply chains, only about 1 in 6 had such measures in place as of 2014.

Smarter supply chains cut costs for everyone involved, supplier and client alike, so long as partnerships develop and act on the right metrics. Where do supply chains need the most oversight? How can enhanced analytics maximize efficiency and help these areas operate more smoothly?


Use data to deploy supply chain vehicles safely and cost-effectively.

Fleet Management
Businesses concerned with fleet metrics tend to focus primarily on the KPIs directly related to spend, such as cost per mile, fuel efficiency, and even controlled vehicle re-marketing. However, there’s something to be said for stretching analytics viewpoints to include long-term value adds instead of “pinching pennies” in the short term.

Condition-based maintenance programs, for instance, typically utilize complex data sets to determine if and when vehicles need servicing. As businesses switch to "just-in-time" inventory management models, the importance of fleet availability increases, as does risk. A decommissioned truck or van not only places immediate revenue in jeopardy from a customer service perspective; it also usually requires expensive emergency repairs and may even compromise driver safety in certain circumstances. As such, supply chain and fleet management teams should coordinate on data-driven oversight to keep transportation operational throughout its life cycle.
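A condition-based servicing decision can be as simple as comparing monitored readings against thresholds. The following sketch is illustrative only; the specific metrics and threshold values are assumptions, not figures from any real fleet program.

```python
# Illustrative condition-based maintenance rule: flag a vehicle for
# servicing when any monitored reading crosses its threshold, rather than
# waiting for a fixed mileage interval or an outright failure.

THRESHOLDS = {
    "miles_since_service": 8_000,  # flag when mileage exceeds this
    "brake_pad_mm": 3.0,           # flag when pads wear *below* this
    "engine_temp_c": 110.0,        # flag when running temperature exceeds this
}

def needs_service(reading: dict) -> list:
    """Return the list of conditions that triggered a maintenance flag."""
    flags = []
    if reading["miles_since_service"] > THRESHOLDS["miles_since_service"]:
        flags.append("mileage")
    if reading["brake_pad_mm"] < THRESHOLDS["brake_pad_mm"]:
        flags.append("brakes")
    if reading["engine_temp_c"] > THRESHOLDS["engine_temp_c"]:
        flags.append("engine temperature")
    return flags
```

A real program would draw these thresholds from manufacturer data and historical failure records rather than fixed constants, but the decision logic is the same: service when the data says so, not when the calendar does.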

Weather
When winter storm Juno froze New York in 2015, analysts estimated its economic toll would cost businesses between $500 million and $1 billion. A single storm can do a number on service and profitability, which is why any supply chain management strategy would be incomplete without weather forecasting.

“Businesses should use weather forecasting as a springboard for supplier or 3PL negotiations.”

That said, nothing is more predictably unpredictable than meteorological activity. Knowing a storm is on its way does little on its own to prevent or preempt its impact. Instead, businesses should use weather forecasting as a springboard for supplier or 3PL negotiations, leveraging data to inject flexibility into service contracts that benefit both sides, absolving all parties of blame when weather is at its worst and, ideally, securing carrier engagement and satisfaction in the process.

Decision-makers should also develop robust in-house policies for operators, drivers, and warehouse crews, diverse enough to accommodate any eventuality. That way, workers know exactly what is expected of them when different phenomena occur. Heavy rain? Drivers execute the safer, more defensive driving strategies defined by supervisors. Snowfall shuts down a major thoroughfare? Warehouse pickers switch over to other value-add duties like cleaning or inventory management to avoid wasted labor costs.
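One lightweight way to make such policies operational is to encode them as a lookup table that dispatch and supervisors can query. The conditions, roles, and actions below are illustrative placeholders, not a recommended policy set.

```python
# Hypothetical weather playbook: map each forecast condition and role to
# the documented in-house response, so expectations are unambiguous when
# a phenomenon occurs. Entries here are illustrative examples only.

WEATHER_PLAYBOOK = {
    "heavy_rain": {
        "drivers": "Switch to defensive-driving protocol; extend delivery windows.",
        "warehouse": "Normal operations.",
    },
    "road_closure_snow": {
        "drivers": "Hold departures pending rerouting from supervisors.",
        "warehouse": "Reassign pickers to cleaning and inventory counts.",
    },
}

def instructions_for(condition: str, role: str) -> str:
    """Look up the documented response; default to escalation if unlisted."""
    return WEATHER_PLAYBOOK.get(condition, {}).get(
        role, "No policy on file - escalate to operations manager."
    )
```

The explicit escalation default matters: a condition nobody anticipated should route to a human decision-maker rather than leave crews guessing.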

Demand Forecasting
This one is almost so obvious it goes without saying: supply chain management hinges on customer demand – where it will be tomorrow and how quickly businesses can respond to it.

What might not be nearly as evident is the effect a misaligned supply/demand relationship has on the business beyond supply management, in the form of surpluses, steep product or service markdowns, and inadequate customer service. Businesses shouldn't merely track the metrics supporting best practices; they should also set notifications and alarms on KPIs that may forewarn them of potential supply chain mismanagement while it can still be resolved.
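A KPI alarm of the kind described above can be sketched with a single metric. Here, weeks of supply is used as the early-warning signal; the metric choice and the band limits are illustrative assumptions.

```python
# Sketch of a demand-alignment alarm: compare on-hand inventory against
# forecast weekly demand and flag both looming shortfalls (service risk)
# and surpluses (markdown risk). Band limits are illustrative defaults.

def weeks_of_supply(on_hand: float, weekly_demand: float) -> float:
    """How many weeks current inventory would last at forecast demand."""
    return float("inf") if weekly_demand == 0 else on_hand / weekly_demand

def supply_alarm(on_hand: float, weekly_demand: float,
                 low: float = 2.0, high: float = 8.0) -> str:
    """Return 'shortfall', 'surplus', or 'ok' for the current position."""
    wos = weeks_of_supply(on_hand, weekly_demand)
    if wos < low:
        return "shortfall"
    if wos > high:
        return "surplus"
    return "ok"
```

Run on every replenishment cycle, a check like this surfaces misalignment while there is still time to adjust orders, rather than after markdowns or stockouts have already hit.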

Regularly scheduled repairs on capital-intensive assets are the cornerstone of any preventive, predictive, or proactive maintenance strategy in the industrial sector. By preempting outright equipment failures and counteracting small, nearly unnoticeable deficiencies in performance, businesses save cycle time, maintain product/service quality, optimize labor costs, prevent waste, keep operators safe, and preserve the lifespan of their most valuable machinery.

All these benefits aside, planned asset maintenance programs require plant managers and decision-makers to coordinate with technicians to customize a maintenance agenda that fits the business at hand, whose operations may differ vastly from others'. However, a few considerations are universal and enhance maintenance scheduling no matter where or how one works.

Consider capacity always
In the industrial sector, nothing should be more valuable to a business than production or service. To that end, companies dependent on advanced machinery rely on scheduled asset maintenance programs, first and foremost, to sustain uptime. Planned maintenance or tune-ups minimize the impact offline equipment has on the business’s bottom line.

Depending on the number of assets under a given planned maintenance program, it is possible to lose this edge because of – believe it or not – poor capacity planning. Let's say you own a fleet of 100 delivery trucks and, on an average day, you need 90 up and running to accommodate your customers. Any maintenance schedule could therefore address only one-tenth of the fleet on any given day; otherwise, the work would compromise availability.

Capacity, unfortunately, is never that cut and dried. Maintenance management teams should always consider backlog, upcoming product or service changes, and seasonal demand metrics that may affect operations, and respond intelligently.
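The capacity arithmetic above reduces to a simple availability check. The function and its seasonal buffer parameter are illustrative, using the article's own example of 100 trucks with 90 needed daily.

```python
# Sketch of the capacity check described above: given fleet size, daily
# demand, and any seasonal adjustment, how many vehicles can a maintenance
# schedule safely take offline on a given day?

def max_daily_maintenance(fleet_size: int, daily_demand: int,
                          seasonal_buffer: int = 0) -> int:
    """Vehicles that can be serviced today without compromising availability.

    seasonal_buffer reserves extra vehicles for anticipated demand spikes.
    """
    spare = fleet_size - daily_demand - seasonal_buffer
    return max(spare, 0)
```

With 100 trucks and 90 needed, at most 10 can be serviced per day; reserve 5 more for a seasonal spike and the schedule can touch only 5. A negative result (demand exceeds the fleet) clamps to zero: no maintenance slot exists, which is itself a signal the schedule needs rethinking.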

Planned maintenance can deliver incredible value, so long as you’re doing it the right way.

Factor in labor costs
Reactive maintenance – responding to failures after they occur – can cost a business significantly in emergency labor. A study by Maintenance Phoenix found businesses can spend nearly 20% of a machine's total replacement cost to remedy a single reactive maintenance event. Comparatively, proactive maintenance events generally cost just 1.4% of the same figure.
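To make the gap concrete, here is the cited 20% versus 1.4% ratio applied to a hypothetical machine; the $50,000 replacement cost is an assumed figure for illustration.

```python
# Worked example of the cost ratios above: for a machine with a given
# replacement cost, compare one reactive repair (~20% of replacement cost)
# with one proactive maintenance event (~1.4%).

def maintenance_cost(replacement_cost: float, reactive: bool) -> float:
    """Estimated cost of a single maintenance event under the cited ratios."""
    rate = 0.20 if reactive else 0.014
    return replacement_cost * rate
```

For a $50,000 machine, one reactive event runs about $10,000 versus roughly $700 proactively – a difference that compounds across every asset in the plant.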

Expenses that low don't happen automatically, so maintenance management teams should keep a few things in mind. First, it is almost always better to spread larger maintenance orders over a few shifts than to tackle everything at once. This leaves room in technicians' schedules for other tasks that may crop up.

“Don’t let overtime negate the margin of savings reclaimed by switching to a planned program.”

Second, overtime expenses should be a major factor in planning, regardless of whether businesses employ in-house repair professionals or outsource. Typically, initiating a preventive, predictive, and proactive asset maintenance program involves hiring a few additional technicians. Scheduled maintenance during overtime hours may, therefore, negate the margin of savings reclaimed by switching to a planned program in the first place.

Prioritize work orders
Another benefit of scheduled maintenance is the ability to rank work orders in a low-risk environment. Reactive maintenance forces organizations to respond to situations as they arise, leaving little to no time for anything else. Since planned maintenance catches failures before they happen – usually through embedded sensors and telemetry monitoring internal changes in temperature or vibration – organizations that adopt such programs gain windows of opportunity to handle repairs before experiencing the repercussions of leaving them unattended.
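One simple form such telemetry monitoring can take is a drift check: flag an asset when the recent average of a sensor reading climbs above its healthy baseline. This is a minimal sketch under assumed thresholds, not a description of any particular monitoring product.

```python
# Illustrative early-warning check on embedded sensor telemetry: raise a
# work-order candidate when the rolling average of recent readings (e.g.
# vibration amplitude) drifts above a baseline by more than a tolerance,
# i.e. before an outright failure. All numbers are placeholders.

def drift_alert(readings: list, baseline: float,
                tolerance: float, window: int = 5) -> bool:
    """True when the mean of the last `window` readings exceeds baseline + tolerance."""
    if len(readings) < window:
        return False  # not enough data yet to judge a trend
    recent = readings[-window:]
    return sum(recent) / window > baseline + tolerance
```

Averaging over a window rather than alarming on a single reading filters out one-off sensor noise, so the flags that do fire are worth a technician's time.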

With that in mind, an asset-heavy company should not only create an actionable list of all its equipment and components, but also rank those assets by importance based on its business objectives. These lists should be updated after every major tech investment so prioritization accurately reflects current operations. An advanced computerized maintenance management system (CMMS) can be a welcome addition here, helping businesses develop both a master asset list and real-time asset hierarchies.
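The ranking itself can start very simply. The sketch below scores assets by downtime cost diluted by available redundancy; the assets, fields, and scoring formula are illustrative assumptions, not a CMMS feature.

```python
# Minimal sketch of the asset ranking recommended above: score each asset
# by criticality to business objectives and sort the master list so the
# maintenance queue reflects current priorities.

ASSETS = [
    {"name": "Packaging line A", "downtime_cost_per_hr": 4000, "redundancy": 0},
    {"name": "Forklift 3",       "downtime_cost_per_hr": 150,  "redundancy": 4},
    {"name": "Boiler 1",         "downtime_cost_per_hr": 2500, "redundancy": 1},
]

def criticality(asset: dict) -> float:
    """Higher score = more critical; spare units dilute the downtime cost."""
    return asset["downtime_cost_per_hr"] / (asset["redundancy"] + 1)

def ranked(assets: list) -> list:
    """Return the master list sorted most-critical first."""
    return sorted(assets, key=criticality, reverse=True)
```

Re-running the ranking after each major equipment investment is exactly the list refresh the article calls for: add the new asset record, and the priority order updates itself.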

Making the change to scheduled asset maintenance delivers many competitive advantages, so use these tips to develop a more robust and effective program.

Tenets of process improvement must, by nature, contain a certain degree of ubiquity. Processes are everywhere: on manufacturing production lines, in back-end administration, from the warehouses to the corner offices to cyberspace.

Because of that universality, process improvement can provide valuable guidelines to industries seeking cost efficiency, optimization, standardization, and enhanced quality. What are some of the most powerful principles?

1. Do not become distracted by change
Whether your business is lean, Six Sigma, a little bit of both or neither, blind allegiance to process improvement does not itself achieve results.

“Look to give form to processes that lack it or those that could use a dose of uniformity.”

Modern businesses, fearful of inflexibility, have sought to re-identify their company cultures as ones with a better grasp on change and continuous improvement. As this new era in more malleable management awakens, however, some rigidity must remain, particularly in dealing with ad-hoc processes as opposed to established channels of production or service. Instead of homing in on reconstructing and re-reconstructing processes to squeeze out every last drop of efficiency, look to give form to processes that lack it or those that could use a heavy dose of uniformity.

2. Do not expect to improve everything all at once
Improvements should always have a beneficiary – a person or group who experiences positive effects through implementation. Maybe it's asset operators on the production line, maybe it's human resources, maybe it's suppliers, customers, or regulators. However, there is one group of people process improvements can never and should never accommodate: everyone.

A process improvement should help a defined cadre of workers and create small reverberations of affirmative change throughout an organization. In fact, some of the best improvements have little to no significant impact on anyone else – the sign of a smooth integration. Achieving process improvement modularity, where change occurs compartmentally, is a victory all its own.

3. Do incorporate more data, more people
Earlier this year, a survey commissioned by Osney Media found that while businesses utilize data to study their own performance, only 11% actually use data during strategic decision-making, the cornerstone of effective process improvement. Data can do more than tell a company when to pull the trigger – it can also teach it how to aim. Beyond that, cold, hard data helps decision-makers win over employees who push back on process improvements they don't perceive as beneficial to them or their operations.

However, don't let that lead you astray regarding the role employee recognition plays in process improvement. Cross-functional teams serve a valuable purpose by ensuring all legitimate concerns are addressed, both within and between departments.