In the night skies over Memphis, the growl of jet engines seems to arrive from all points of the compass. Between 11 p.m. and 4 a.m., more than 150 cargo jets will land at Memphis International Airport, one every 90 seconds.
The planes are only a fraction of the largest carrier fleet in the world, and each is loaded with what can only be described as a bizarre variety of goods: alternators and dolls, flowers and lobsters… almost 2 million separate packages in all, containing every industrial and consumer product imaginable that is safe to transport by airfreight.
Before dawn, those planes will be unloaded and reloaded, their contents shuffled according to intricate and carefully choreographed routing instructions, bound for distant lands. As the planes take off, it’s possible to make out the logo on each of them: a large purple and orange FedEx.
The largest logistics hub in the world has just enabled the world of modern commerce to flow again for one more day.
And it did it all with data.
Logistics professionals control almost every aspect of supply chain management:
- Order picking from existing stocks
- Replenishment procurements to keep stock levels adequate
- Packaging for efficient delivery
- Routing of packages to avoid choke points (see the sketch after this list)
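That last responsibility, routing around choke points, is a textbook graph problem. The sketch below is a minimal illustration rather than any carrier's actual routing engine: it runs Dijkstra's algorithm over a toy hub network and adds a congestion penalty to any leg flagged as a choke point. The hub codes, transit times, and penalties are all hypothetical.

```python
import heapq

def cheapest_route(graph, congestion, origin, destination):
    """Find the lowest-cost path through a shipping network,
    penalizing congested legs so packages avoid choke points.

    graph: dict mapping hub -> {neighbor: transit_hours}
    congestion: dict mapping (hub, neighbor) -> extra delay_hours on that leg
    """
    # Priority queue of (cost so far, current hub, path taken)
    queue = [(0.0, origin, [origin])]
    visited = set()

    while queue:
        cost, hub, path = heapq.heappop(queue)
        if hub == destination:
            return cost, path
        if hub in visited:
            continue
        visited.add(hub)
        for nxt, hours in graph.get(hub, {}).items():
            if nxt in visited:
                continue
            leg_cost = hours + congestion.get((hub, nxt), 0.0)
            heapq.heappush(queue, (cost + leg_cost, nxt, path + [nxt]))

    return float("inf"), []  # no route found

# Toy network: the Memphis hub, two regional hubs, and a destination.
network = {
    "MEM": {"IND": 1.5, "OAK": 4.0},
    "IND": {"EWR": 2.0},
    "OAK": {"EWR": 5.0},
    "EWR": {},
}
delays = {("MEM", "IND"): 6.0}  # a storm is backing up the Memphis-Indianapolis leg

print(cheapest_route(network, delays, "MEM", "EWR"))
# -> (9.0, ['MEM', 'OAK', 'EWR']) once the congested leg is penalized
```

With the penalty in place, the router steers packages through the slower but uncongested western hub instead of the backed-up direct leg.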
Companies like FedEx represent some of the biggest employers of master’s-educated data scientists, relying on them to analyze and optimize their logistics and supply chain processes.
The World of Logistics is Awash in Data Just Begging to be Processed
According to a 2016 white paper from logistics giant DHL, most companies are sitting on top of enormous piles of data about their own supply chain processes. Logistics management systems have long since been computerized. Every order and delivery generates a database record somewhere.
But apart from the initial fulfillment process or the odd order verification, those records simply sit there, untapped. A study conducted by Deloitte Consulting in cooperation with the Material Handling Institute, an industry trade association, found that fewer than 24 percent of the 400 companies surveyed had implemented predictive supply chain analytics.
It’s not that organizations don’t understand the potential in their logistics data, says DHL. In fact, 70 percent of the respondents in that same survey indicated intentions to implement such programs. Rather, there are too few data scientists available with the advanced education needed to analyze it.
The challenges are significant. Although logistics systems are almost entirely computerized, many of the most common systems in use today can’t be interfaced with one another. And human error is common on the input side of the process.
Gaining visibility into the full flow of items requires tying all those systems together and controlling for human error.
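In practice, much of that tying-together is unglamorous normalization work: mapping each system's records onto one canonical schema and catching the manual-entry mistakes before they poison any analysis. The sketch below is a simplified illustration under assumed field names; real warehouse exports and carrier feeds will look different.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Shipment:
    """Canonical shipment record shared by every downstream analysis."""
    order_id: str
    sku: str
    quantity: int
    shipped_at: datetime

def from_warehouse_export(row):
    """A warehouse export with its own column names and a local date format
    (hypothetical fields for illustration)."""
    return Shipment(
        order_id=row["ORDER_NO"].strip().upper(),
        sku=row["ITEM"].strip(),
        quantity=int(row["QTY"]),
        shipped_at=datetime.strptime(row["SHIP_DT"], "%m/%d/%Y"),
    )

def from_carrier_feed(rec):
    """A carrier feed reporting the same events with different keys and ISO dates."""
    return Shipment(
        order_id=rec["reference"].strip().upper(),
        sku=rec["sku"].strip(),
        quantity=int(rec["units"]),
        shipped_at=datetime.fromisoformat(rec["tendered_at"]),
    )

def validate(shipment):
    """Flag the most common manual-entry mistakes before analysis."""
    problems = []
    if shipment.quantity <= 0:
        problems.append("non-positive quantity")
    if not shipment.order_id:
        problems.append("missing order id")
    if shipment.shipped_at > datetime.now():
        problems.append("ship date in the future")
    return problems
```

Once every source lands in the same `Shipment` shape and has passed the same checks, the full flow of items can finally be described end to end.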
Things That Keep Track of Themselves: The Internet of Things Spurs In-Transit Visibility
It’s common for a widget to be carefully managed and tracked through its manufacture and stocking process at a distant factory, only to be tossed into a shipping container and lost to all human knowledge for weeks or months of transit. On the back of a truck or on the high seas, all that represents that item is a packing slip that, hopefully, the shipping manager at the dock filled out properly.
One way for data scientists to eliminate much of the human error in supply chain management is to design systems around the burgeoning ability of ordinary objects to connect themselves to the Internet and report their own whereabouts.
The Internet of Things (IoT) revolves around embedding Internet-enabled instrumentation in physical objects. Containers and packages have long been labeled with bar codes or RFID (Radio Frequency Identification) tags for tracking, but an IoT device is not just a passive label. The instrumentation can be active, carrying its own embedded logic and automated to interact with other items or with a central hub somewhere.
By 2014, almost 2 million shipping containers had been fitted with active tracking devices. Maersk Line, the massive shipping company, equipped nearly 290,000 refrigerated containers with advanced monitoring systems that report not only their location, but also the temperature and status of their contents.
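The kind of reading such a container might transmit, and the kind of check a data scientist would run against it, can be sketched in a few lines. The message schema, set point, and tolerance below are illustrative assumptions, not any carrier's real telemetry format.

```python
import json
from datetime import datetime

# One reading as it might arrive from a refrigerated container's tracking unit
# (hypothetical fields and values for illustration).
sample_message = json.dumps({
    "container_id": "MSKU1234567",
    "recorded_at": "2016-03-01T14:22:05+00:00",
    "lat": 36.77, "lon": -122.42,
    "temp_c": -16.4,
    "door_open": False,
})

SET_POINT_C = -18.0   # assumed target temperature for this reefer container
TOLERANCE_C = 2.0     # assumed allowed drift before raising an alert

def check_reading(raw):
    """Parse one telemetry message and flag temperature excursions or open doors."""
    reading = json.loads(raw)
    drift = reading["temp_c"] - SET_POINT_C
    alert = abs(drift) > TOLERANCE_C or reading["door_open"]
    return {
        "container_id": reading["container_id"],
        "recorded_at": datetime.fromisoformat(reading["recorded_at"]),
        "position": (reading["lat"], reading["lon"]),
        "drift_c": round(drift, 1),
        "alert": alert,
    }

print(check_reading(sample_message))
# A drift of +1.6 C with the door closed stays inside tolerance -> no alert.
```

Multiply a check like this by hundreds of thousands of containers reporting around the clock and the scale of the data problem becomes obvious.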
This process generates an enormous amount of data. Managing the information is surely a job for data scientists, but so too is securing it. With almost every kind of item in transit somewhere in the world at any given moment, the kind of detailed information that could be revealed by intensive data analysis has the potential to result in massive breaches of privacy, devastating corporate espionage, or costly criminal escapades.
How Cisco Saw the Light: Integrating Factory Systems with the Supply Chain
Logisticians have begun to understand that the factories that feed their supply chains can no longer simply be treated as a black box. With automation and computerization increasingly taking over the means of production, it’s entirely possible to integrate data directly from the factory floor with distribution management systems.
Data scientists at Cisco recently addressed this challenge. Attempts to use data to optimize the company’s supply chain had been foiled by a lack of integration with, and access to, data from manufacturing centers. Disparate software systems prevented the sort of transparency that an efficient flow of orders required.
The organization re-engineered and consolidated its processes in what it referred to as a “circle of light” visualization, enabling complete integration of data from design offices to delivery hubs. Forecasting could be tied to production, which could be tied to processing, which could be tied to shipping.
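What it means to tie those stages together can be shown with a deliberately simple calculation. The sketch below is not Cisco’s system; it only illustrates the idea of chaining a demand forecast to factory capacity to shipping lead times, using hypothetical SKUs and rates.

```python
from datetime import date, timedelta

# Illustrative toy data standing in for integrated forecast, factory, and shipping feeds.
weekly_forecast = {"router-x": 1500}   # forecast demand, units per week
daily_capacity = {"router-x": 300}     # factory build rate, units per day
transit_days = {"router-x": 2}         # hub-to-customer shipping time, days

def plan(sku, start):
    """Chain forecast -> production -> shipping into one delivery estimate."""
    build_days = -(-weekly_forecast[sku] // daily_capacity[sku])  # ceiling division
    last_unit_ships = start + timedelta(days=build_days)
    last_unit_arrives = last_unit_ships + timedelta(days=transit_days[sku])
    return {
        "build_days": build_days,
        "last_unit_ships": last_unit_ships,
        "last_unit_arrives": last_unit_arrives,
    }

print(plan("router-x", date(2016, 3, 1)))
# 1500 units at 300/day -> 5 build days; last unit ships 2016-03-06, arrives 2016-03-08.
```

When the forecast, the build rate, and the transit time all live in one integrated view, a question like “when will the last unit reach the customer?” becomes a single calculation instead of a round of phone calls between departments.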
With the new data systems in place and under the management of the company’s most talented data scientists:
- Order fulfillment time was reduced from 11 days to only 3
- Order processing happens in minutes instead of hours
- Time to market was improved by 30 to 50 percent
- Time to bring a new factory online was reduced from 18 months to 6 months