
The Parker Challenge: what it means for mineral resource estimation. Expert Q&A

AusIMM
Designed as part of the 2023 Mineral Resource Estimation conference, the Parker Challenge aims to quantify the ‘between person variance’ or ‘pattern noise’ in resource estimation. Estimates from challenge participants will be amalgamated and the results compared. The range of outcomes, differences in approaches and classification decisions will be presented during the conference, highlighting just how different the estimates can be given the same data.
In an exciting twist, Parker Challenge sponsor Rio Tinto will reward the challenge winner with $55,000! In anticipation of the upcoming conference and the announcement of the challenge outcome, we interviewed Conference Chair Rene Sterk and Committee Member Scott Dunham to find out more.

 

The Parker Challenge is calling on mineral resource estimators to create a classified model from the same base dataset. What is the conference committee hoping to achieve by hosting this challenge? We find out!

Scott Dunham (SD): The Parker Challenge, my favourite part of this conference! One of the things that plagues the resource estimation discipline is that we all work independently, each on individual deposits, to produce our estimates. We all understand that how an estimate performs is really difficult to determine. And one of the things that's never been measured is this: if you gave that same data, that same information, to a wide group of people and asked them each to come up with their own estimate, how different would each one be?

How much of what’s involved in estimation is the noise between people, as opposed to the noise of the data? It’s something we’ve never measured. We assume that we get a single number coming out of a resource estimate, and that if I gave that data to another estimator they’d come up with exactly the same number, and we know that’s not right. So how different could things be? What’s the variance going to be? Is it going to be plus or minus 10%? Plus or minus 20%? Plus or minus 100%? The Parker Challenge gives us the opportunity to look at this variance, which we need to understand when it comes to classification, when it comes to risk, when it comes to just about every part of what we do. So I’m really excited by The Parker Challenge. I think it’s a great opportunity to take a big step forward for the industry.
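The 'between-person variance' being described could be summarised in the simplest way as the relative spread of the submitted estimates. As a minimal sketch, assuming five hypothetical tonnage estimates (the numbers below are invented for illustration, not Parker Challenge results):

```python
# Hypothetical tonnage estimates (Mt) from five estimators who were
# all given the same drillhole dataset. Values are illustrative only.
estimates = [12.1, 9.8, 14.5, 11.2, 10.4]

mean = sum(estimates) / len(estimates)

# Sample variance as a crude measure of 'between-person' spread.
var = sum((x - mean) ** 2 for x in estimates) / (len(estimates) - 1)

# Express the spread as a percentage of the mean (coefficient of
# variation), comparable to the 'plus or minus X%' framing above.
spread_pct = 100 * (var ** 0.5) / mean
```

With these invented numbers the spread works out to roughly plus or minus 16%, which gives a feel for the kind of figure the challenge is designed to actually measure.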

Rene Sterk (RS): There are all sorts of questions indeed, and the hour we’ve got to talk about the results of The Parker Challenge at the conference is probably not going to be enough; you could devote an entire conference to the outcomes of this study.

As Scott mentioned, every deposit is different, and every deposit gets estimated by a different person, so it’s incredibly difficult to look at any of these different variables and find any standardisation. It will be interesting to see the spread, to see how different people interpret the geology differently. And then there is the actual treatment of the input data: what are different people doing with the quantification of risk and the classification? The results will be fascinating! The submissions are looking very strong at the moment, so we’re looking forward to presenting it all on stage in the last act.

I’d also like to thank Rio Tinto for their generous support of this challenge and supplying the base dataset.

SD: It will also be interesting to see the different approaches to estimating the dataset. There will be some people who will likely take a fairly traditional approach, and we may get people doing inverse distance weighting, for instance, or others who go to machine learning. What will the spread of the estimation approach do to this problem? Not only do we have multiple people doing the same estimate, we’ve got each of them doing their own type of estimate using their own algorithms, using their own parameters. We’re capturing a whole raft of uncertainty that’s never really been looked at before. As Rene said, you could do an entire conference on this, and I expect that there’ll be follow-up papers, discussions and debate - it’s going to be great!
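Inverse distance weighting, one of the traditional approaches mentioned above, estimates the grade at an unsampled location as a weighted average of nearby samples, with weights falling off with distance. A minimal 2D sketch, with invented sample locations, grades and power parameter:

```python
# Minimal inverse distance weighting (IDW) sketch. The samples,
# target location and power=2 are hypothetical illustration values,
# not part of the Parker Challenge dataset.
def idw(samples, target, power=2):
    """Estimate grade at `target` from (x, y, grade) samples."""
    num = den = 0.0
    for x, y, grade in samples:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return grade  # target coincides with a sample point
        w = 1.0 / d2 ** (power / 2)  # weight = 1 / distance^power
        num += w * grade
        den += w
    return num / den

# Three hypothetical drillhole intercepts: (easting, northing, grade).
samples = [(0.0, 0.0, 1.2), (10.0, 0.0, 0.8), (0.0, 10.0, 1.5)]
est = idw(samples, (2.0, 2.0))
```

Even within this one method, the choice of power, search radius and sample selection varies between practitioners, which is exactly the kind of between-person spread the challenge will expose.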

In years to come, what do you expect will be the industry benefits of The Parker Challenge?

RS: What excites me about the industry right now is that there is more unity starting to develop. Secrecy about data no longer drives all decision making. We’re sharing more between us as practitioners, so if we continue this with different deposits over the years to come, and then reconcile our models with production data or improved models, there’s going to be a shift in how we see models coming out of the ground and reconciliation. It will improve how we do things and should have a profound impact if there is continuity in this process.

SD: Hopefully The Parker Challenge becomes an annual or a biannual event where we estimate a new deposit and build on our learnings.

If you relate this to the AI industry, a lot of the big leaps forward happened around challenges, when a dataset was made publicly available and people were challenged to do the best image classification they could. Things would progress at a fairly steady rate, and then somebody would have a new idea and it would catapult the industry forward. I suspect we’ve got the same type of opportunity with The Parker Challenge, where we will have these step changes occurring. The Parker Challenge is a fascinating part of the conference.
