Opportunity comes infrequently. Opinion piece by Scott Dunham
Mineral resource estimation: fast and slow. Some aspects of resource estimation evolve rapidly, while others are caught in a paradigm trap. The impact of this differing rate of progress between our technology solutions and our thought processes is a central theme of the Parker Challenge.
Over the length of my career I’ve seen many changes to the way we model and estimate mineral resources. In fact, my very first estimate pre-dates version 1 of the JORC Code - those Wild West days! Alongside the many changes, I’ve seen some things that have stayed the same. If I were to group the changes vs the non-changes, I’d divide them into:
- advances in technology (these are the things that change frequently)
- underlying paradigms and assumptions (these are the things that we just accept as ‘normal’)
These days I spend a lot of time thinking about resource estimation and risk. Dividing things into those two categories is very helpful. Both are important, but there are marked differences in the rate of advance in each - almost by definition. You see, it’s comparatively easy to develop new technology, new software, new algorithms, new techniques. We have seen progressions from manual interpretation to explicitly-determined 3D digital solids to 3D solids based on mathematical functions (so-called implicit models). We’ve seen progress from polygonal estimates to inverse distance, through a vast palette of kriging flavours, to simulation and beyond. The technology we use today is almost unrecognisable from the perspective of the 1980s and 1990s.
The same cannot be said about the paradigm or framework we use. The unacknowledged assumptions. Those things left unmentioned lest they make us look less erudite. The ideas, concepts and archetypes that underpin the edifice of estimation. These change slowly if at all. As Max Planck once put it “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it … What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning: another instance of the fact that the future lies with the youth.”
Ideas are sticky. We accept that the world has always been (and will always be) as we know it. There is a type of scientific group-think that limits our ability to see beyond the boundaries of current practice. Bluntly, we are biased towards the status quo. But… our individual status quos are different, influenced by age, education and experience if nothing else.
It is this problem, our collective belief in the unexplored and unexamined framework of estimation and modelling, that the Parker Challenge is attempting to address. But what does that actually mean?
There is a naive assumption that two or more resource estimation experts, given the same data, will produce similar estimates of tonnes and grade. This naivety extends even further. It pervades our public reporting codes. The JORC Code and other CRIRSCO-based codes require publicly reported mineral resource and ore reserve estimates to be classified into broad ‘risk’ groupings (Measured, Indicated, Inferred or Proved and Probable respectively). Let’s think about that for a moment. For our classification systems to be meaningful, two different experts must not only produce similar estimates given the same data, they must also then assess the risk of those estimates similarly and apply the same level of risk tolerance and judgement. (Right… pull the other one!)
That is a paradigm that needs challenging if ever there was one.
By asking participants in the Parker Challenge to estimate, report and classify a mineral resource using the same base data, we will have a rich data set with which to question their assumptions, decisions and judgements. Think about it. What are the degrees of freedom in this system? How sensitive is the outcome to those degrees of freedom? How much does human judgement affect the outcome? Are we all aligned when it comes to matters of judgement? Just how different will these estimates be? Will they cluster together? Will the estimates take on a normal (Gaussian) distribution?
There is power in these ideas. Power to force us to address some of the elephants in the room. Power to help us realise that we need a more complete understanding of estimation before we can understand risk. Power to recognise uncertainty.
It goes beyond the individual estimates and models as well. We have set the Parker Challenge up as a competition, and that implies another form of judgement. The judges need to reach a consensus on the best entry. There will be differences between the judges: the things they weigh more highly than others, the range of inputs and considerations they address. Yes, the judging will provide equally useful information from a different perspective. We will begin to see how different people, all experts in the field, perceive quality and risk. I have little doubt that we will disagree, that we will apply different scales when assessing the entries. Just as our managers, investors and other stakeholders bring their beliefs and histories to the table when looking at and interpreting our estimates, so will the judges.
There is more to uncertainty and judgement than we generally perceive. If we wish to address the quality of our estimates, the quality of our risk advice, we need to take that next step. We need to reach into the known-unknowns and from there recognise that there will always be unknown-unknowns waiting to completely change the world.
In my opinion it is vital that we address these issues, now more than ever. We stand on the brink of a world where we offload human judgement to machine learning and artificial intelligence systems. The performance of those systems is entirely dependent on the paradigms and data they are built upon. If there is a gaping hole in the data, or in the paradigm, then these quantum leaps in technology will do nothing more than perpetuate the problems embedded in our bias to believe.
So I have high hopes. I’m an optimist (although a cynical one) at heart. I like to believe that we are ready to question ourselves and admit we don’t know what we don’t know.