We live in an age of ubiquitous data. Each of us carries more information in our pockets than the world’s most powerful computers held 20 years ago. This availability has enabled anyone to research a topic of interest and rapidly assemble enough background to feel confident making good decisions. Quick decisions. Easy decisions.
Have a strange rash? Need to replace a faucet cartridge? Want to teach your kid how to do long division? All are quickly solved with a search engine – the world’s knowledge is at your fingertips. The accessibility of data has given us the illusion that most of the unknown world can become known within seconds.
The site selection business is no different. Location data – employment statistics, school performance, average manufacturing wages, utility rates, taxes, site options and even potential incentives – is available with just a few taps and swipes.
While location data is increasingly easy to obtain, the risk that companies will make hasty location decisions is climbing. When a poor location decision is made, the company often realizes during ramp-up that conditions in the community are not what it had hoped: talent in some categories may be hard to attract, costs may run higher than predicted, infrastructure and services may fall short, or worse. The consequences for the company, its employees and the community can be devastating – lost profits, layoffs, closure and an idle asset left in the community.
Our experience has revealed three data challenges in the location decision process: Choosing the right data, ensuring it is reliable and applying the right tolerances to define an “acceptable” candidate location.
1. Data selection: Most companies know they want locations with rich pools of talent, abundant and accessible infrastructure, favorable operating costs and more – but how should they measure a community’s ability to deliver these conditions? Companies should seek data that directly predicts operating success, not just data that is easy to pull. To evaluate the presence of talent, easy factors such as county-level unemployment rate, presence of similar industry operations and occupational presence will likely yield a very different set of candidate locations than less accessible measures such as relevant college graduate flow, location quotient and average wages for specific occupations.
2. Data reliability: The sources of location-based data are often not comparable across geographies; they can be collected with varying methods, definitions and time periods. Some data reports are paid for by a community or region seeking to promote its competitive position. The use of third-party location “rankings” as a replacement for hard statistics can introduce hidden biases and weightings to the analysis, which can further tilt site selection results.
3. Data boundaries: Even when companies choose relevant data and reliable sources, it is critical to apply the right tolerances. For each data element, what are the right boundary conditions for retention or elimination decisions in the site selection? Should a community need to demonstrate a location quotient for engineers greater than 1.1, or 1.2, to be retained?
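As a rough sketch of how such a tolerance operates in practice, the snippet below computes an engineering location quotient – the local share of engineering employment divided by the national share – for each candidate community and retains only those above a chosen threshold. All community names, employment figures and the 1.2 tolerance are invented for illustration.

```python
# Hypothetical screening sketch: compute a location quotient (LQ) per
# candidate community and retain those above a chosen tolerance.

def location_quotient(local_occ, local_total, nat_occ, nat_total):
    """LQ = (local occupation share of employment) / (national share)."""
    return (local_occ / local_total) / (nat_occ / nat_total)

# National benchmark: engineers vs. total employment (invented figures).
NAT_ENGINEERS, NAT_TOTAL = 1_700_000, 150_000_000

# Candidate communities: (engineers employed, total employment) -- invented.
candidates = {
    "Community A": (5_200, 310_000),
    "Community B": (1_100, 140_000),
    "Community C": (9_800, 620_000),
}

LQ_TOLERANCE = 1.2  # boundary condition set by the project team

retained = {
    name: round(location_quotient(occ, tot, NAT_ENGINEERS, NAT_TOTAL), 2)
    for name, (occ, tot) in candidates.items()
    if location_quotient(occ, tot, NAT_ENGINEERS, NAT_TOTAL) >= LQ_TOLERANCE
}
print(retained)  # Community B falls below the tolerance and is eliminated
```

Note how sensitive the outcome is to the boundary itself: with these invented figures, raising the tolerance from 1.2 to 1.4 would also eliminate Community C – which is exactly why the tolerance deserves deliberate attention before the analysis begins.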
Before initiating any location analysis, the project team should settle the approach it will take to data choices, asking the following:
• Which location data should we use – is each data point a strong predictor of our operating success?
• Is the data reliable – is it comparable across locations, provided by relevant sources, objectively collected, and recent?
• How should we use the data – what boundary conditions should we use for location scoring or elimination?
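Taken together, these three questions amount to an elimination-then-scoring pass: apply boundary conditions first, then rank the survivors on weighted factors. The sketch below illustrates one way that might look; every community name, factor value, weight and threshold is hypothetical, and all factors are treated as higher-is-better for simplicity.

```python
# Hypothetical elimination-then-scoring sketch. All data is invented.
candidates = {
    "Community A": {"engineer_lq": 1.48, "grad_flow": 2_400, "infrastructure": 7.5},
    "Community B": {"engineer_lq": 0.69, "grad_flow": 900,   "infrastructure": 8.0},
    "Community C": {"engineer_lq": 1.39, "grad_flow": 3_100, "infrastructure": 6.0},
}

# Step 1 -- boundary condition: eliminate communities below the LQ tolerance.
LQ_TOLERANCE = 1.2
survivors = {n: m for n, m in candidates.items()
             if m["engineer_lq"] >= LQ_TOLERANCE}

# Step 2 -- score survivors: min-max normalize each factor to 0..1,
# then combine with weights chosen by the project team (sum to 1.0).
weights = {"engineer_lq": 0.5, "grad_flow": 0.3, "infrastructure": 0.2}

def normalized(factor):
    values = [m[factor] for m in survivors.values()]
    lo, hi = min(values), max(values)
    return {n: (m[factor] - lo) / (hi - lo) if hi > lo else 1.0
            for n, m in survivors.items()}

norms = {f: normalized(f) for f in weights}
scores = {n: round(sum(w * norms[f][n] for f, w in weights.items()), 2)
          for n in survivors}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

The design choice worth noticing is that weights and tolerances are explicit inputs here, not buried inside a third-party ranking – so the team can see, and defend, exactly how each factor tilts the result.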
By navigating carefully through the sea of location data, more companies will likely be positioned for long-term results through good site selection – with greater potential benefits for shareholders, employees and communities. T&ID