Data Fluency Series #1: How to Tell Good Data from Bad Data

October 31, 2017

Marcus Buckingham

This is the first episode in our series on Data Fluency. Knowing how to tell good data from bad data matters whether you are an HR practitioner, a team leader, or an individual contributor.

Bad data is a pervasive problem in our world, and especially in the world of work. The problem is not just that we have copious amounts of useless data; it's that we treat this bad data as though it were good data – as though it will validly predict things like performance – and that is nothing short of a disaster. People are being promoted and fired because of faulty data that actually tells us nothing about our employees.

This data series is for everyone. It’s for HR leaders looking to provide their company with the right data to make the best decisions. It’s for leaders and managers who are being asked to rate or rank their employees, when those ratings mean nothing. And it’s for individual contributors – any person with any job in which there is a performance review – because you need to know how to ask the right questions to challenge the status quo.

When it comes to data there are three important words to remember:

  1. Reliability: Is the person a reliable rater of what is being measured?
  2. Variation: Does the tool reveal the true range that actually exists?
  3. Validity: Does what we’re measuring matter? Does it validly predict something else in the real world?

Most HR tools fall short on at least one of these criteria – and many fall short on all three. The next time you are asked to participate in any sort of performance rating, opinion survey, 360-degree survey, or any other source of people data, ask yourself if it complies with these three standards. Odds are, it won't.
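Two of the three criteria can be sanity-checked with basic statistics. Below is a minimal, illustrative sketch (the ratings, rater names, and thresholds are made up for this example, not taken from any real tool): the spread of scores probes variation, and agreement between two raters is a rough proxy for reliability. Validity, whether the measure actually predicts a real-world outcome, can only be tested against outside data, so it is not shown here.

```python
# Illustrative sketch with hypothetical data: probing "variation" and
# "reliability" for a set of 1-5 performance ratings.
from statistics import mean, stdev

# Hypothetical ratings from two raters scoring the same eight employees.
rater_a = [3, 3, 3, 4, 3, 3, 3, 3]
rater_b = [2, 4, 3, 5, 1, 4, 2, 3]

# Variation: does the tool reveal the true range, or does everyone
# cluster around one score? A near-zero spread suggests the tool is
# hiding real differences.
print(f"Rater A spread: {stdev(rater_a):.2f}")  # small spread, little variation
print(f"Rater B spread: {stdev(rater_b):.2f}")

def pearson(x, y):
    """Pearson correlation between two equal-length rating lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Reliability (rough proxy): do two raters looking at the same people
# agree? A low inter-rater correlation suggests the ratings say more
# about the rater than about the person being rated.
print(f"Inter-rater correlation: {pearson(rater_a, rater_b):.2f}")
```

A real reliability analysis would use a purpose-built statistic such as intraclass correlation across many raters, but even this simple check exposes tools whose scores barely vary or whose raters barely agree.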

Watch the next episode in the Data Fluency Series for more information on how to become data fluent and the impact of bad data on our businesses.

Note: The views expressed on this blog are those of the blog author(s), and not necessarily those of ADP. This blog does not provide legal, financial, accounting, or tax advice. The content on this blog is “as is” and carries no warranties. ADP does not warrant or guarantee the accuracy, reliability, and completeness of the content on this blog.

ADP, the ADP logo and the ADP Research Institute are trademarks of ADP, Inc. All other marks are the property of their respective owners. Copyright © 2020 ADP, Inc.