In these most difficult times, the use of analytics is certainly not top of mind for most organizations unless it is being used to combat the virus. The immediate challenges of meeting payroll and maintaining access to cash are the obvious priorities. But from a non-analytical perspective, like most people, I am amazed by the many acts of giving and generosity that really speak to the better angels of our nature.
But we will overcome these challenges, and being the constant optimist that I am, I believe this will happen sooner rather than later. In this new post-COVID-19 environment, it is not unrealistic to assume that the way consumers behave and think will be transformed significantly. Of course, this has ramifications when conducting analytics exercises. Virtually all data analytics exercises deal with historical and longitudinal data. The development of models, segmentation systems, and/or reports all relies on historical data in their solutions. Given that much of the power of predictive analytics and machine learning solutions arises from longitudinal or historical data, this raises the question of how we deal with data, and specifically consumer behaviour data, prior to, during, and after the COVID-19 crisis.
As I thought about this, I was reminded of the last crisis that seemed to galvanize our collective consciousness and make us reflect on what is truly important in our lives: the 9/11 crisis. During that time, this newfound awareness did impact consumer behaviour. My organization at that time was asked multiple times about the 9/11 impact on our analytics exercises, especially the more advanced ones such as the many predictive models that we had built. In other words, would model performance be significantly compromised because of these changes?
Our perspective in evaluating model performance was to observe targeting capability: top-scored names should be most likely to yield the desired behaviour and bottom-scored names least likely to yield it. If we place these scored names into model deciles, then the top decile should exhibit the strongest observed behaviour while the bottom decile should exhibit the weakest. Essentially, the model can then be evaluated based on how well it rank-orders scored names against the observed modelled behaviour.
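The decile rank-ordering check described above can be sketched in a few lines of pandas. This is a minimal, illustrative implementation, not the author's actual production code; the function name and the ten-bucket split are assumptions based on the description in the text.

```python
import numpy as np
import pandas as pd

def decile_response_rates(scores, outcomes):
    """Place scored names into 10 model deciles (decile 1 = highest scores)
    and report the observed response rate in each decile. A well rank-ordering
    model shows rates declining from decile 1 down to decile 10."""
    df = pd.DataFrame({"score": scores, "outcome": outcomes})
    # Rank breaks ties deterministically so qcut can form equal-sized deciles;
    # descending labels make decile 1 the top-scored group.
    df["decile"] = pd.qcut(
        df["score"].rank(method="first"), 10, labels=range(10, 0, -1)
    ).astype(int)
    return df.groupby("decile")["outcome"].mean().sort_index()

# Illustrative usage with synthetic data where high scores perfectly
# predict the behaviour, so the rank ordering is as strong as possible.
scores = np.linspace(0, 1, 1000)
outcomes = (scores > 0.5).astype(int)
rates = decile_response_rates(scores, outcomes)
```

Comparing `rates.loc[1]` against `rates.loc[10]` (and the monotonicity of the series in between) gives a simple before/after diagnostic: if a disruption such as 9/11 or COVID-19 truly compromised the model, the spread between top and bottom deciles on fresh data would shrink.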