1. Balances all the trade-offs
Most complex problems have several objectives at once – for example the fastest, the cheapest, or the least carbon-intensive option – and these objectives are often interconnected in complex ways. PolyChord balances all of them to find the best overall solution.
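One simple way to picture balancing competing objectives is scalarisation: combine them into a single score with weights reflecting their relative importance. The function, objective names, numbers, and weights below are invented for illustration – this is the most basic approach, not PolyChord's actual method or API.

```python
def combined_score(time_h, cost_usd, carbon_kg, weights=(0.5, 0.3, 0.2)):
    """Lower is better: a weighted sum of the three competing objectives."""
    w_t, w_c, w_g = weights
    return w_t * time_h + w_c * cost_usd + w_g * carbon_kg

# Two candidate solutions: one fast but carbon-heavy, one slower but cleaner.
fast = combined_score(time_h=2.0, cost_usd=10.0, carbon_kg=8.0)
clean = combined_score(time_h=6.0, cost_usd=7.0, carbon_kg=1.0)
```

With these (hypothetical) weights the slower, cleaner option scores better overall, illustrating how interconnected trade-offs can reverse the "obvious" choice.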
2. Explainable and easy to interrogate
PolyChord maps the components of a problem onto a data landscape which can be understood and interrogated by a non-expert. With many other machine learning methods, you must simply trust it is correct – with PolyChord you can see why a particular solution is given.
3. Quantifiable reliability calculations
For example: “we are 96% sure this is the best solution”. These calculations are performed alongside the solution itself, and are vital in applications where safety regulations are paramount. With other methods, such as machine learning or neural networks, you never have any certainty about how good the answers are. This is a key strength of PolyChord.
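A statement like “96% sure” can come from comparing the evidence computed for each candidate model: normalising the evidences gives a probability for each. The sketch below shows the standard calculation with invented log-evidence values; it is not PolyChord's API.

```python
import math

def model_probabilities(log_evidences):
    """Turn per-model log-evidences into posterior model probabilities."""
    m = max(log_evidences)
    weights = [math.exp(z - m) for z in log_evidences]  # subtract max to avoid overflow
    total = sum(weights)
    return [w / total for w in weights]

# Three candidate solutions with made-up log-evidences:
probs = model_probabilities([-100.0, -103.2, -105.0])
# probs[0] is the degree of certainty that the first solution is best.
```

Because the probabilities are a by-product of the same evidence calculation, no separate reliability analysis is needed.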
4. Computationally efficient
PolyChord maps the entire data landscape in a single run, rather than making many short, random explorations of the landscape as other tools do. This makes it less expensive to run.
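The single-run scan can be pictured with a toy nested-sampling-style loop (illustrative only, not PolyChord's implementation): a set of “live” points climbs the likelihood landscape together, and the discarded “dead” points left behind form a map of the whole landscape from one run, with no independent restarts.

```python
import random

def loglike(x):
    return -(x - 0.7) ** 2 / 0.01  # a toy landscape with one peak at x = 0.7

random.seed(0)
live = [random.random() for _ in range(50)]   # live points in the unit prior
dead = []                                     # the accumulated landscape map
for _ in range(200):
    worst = min(live, key=loglike)            # lowest-likelihood live point
    dead.append(worst)                        # record it in the map
    live.remove(worst)
    # replace it with a new point drawn above the discarded likelihood level
    while True:
        x = random.random()
        if loglike(x) > loglike(worst):
            live.append(x)
            break
```

After the loop the live points have converged on the peak while `dead` traces the full route there – one pass yields both the answer and the map.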
5. Copes with messy, incomplete data
Data collection is often the most resource-intensive part of engaging with machine learning.
PolyChord can handle “dirty” data without any lengthy preparation period – allowing you to leverage the value of your existing datasets.
6. Easily adjustable
After the solution is calculated, you can add in new variables or constraints – even non-linear ones – without needing total recalculation. PolyChord “remembers” everything about the data landscapes it explores, so it can take account of new inputs without rescanning.
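One way such adjustment can work, sketched here with invented sample data and not PolyChord's actual mechanism: if the weighted samples from a run are kept, a new constraint (including a non-linear one) can be applied by reweighting them rather than rescanning.

```python
samples = [
    {"x": 0.9, "y": 0.8, "weight": 0.5},  # will violate the new constraint
    {"x": 0.6, "y": 0.4, "weight": 0.3},
    {"x": 0.2, "y": 0.1, "weight": 0.2},
]

def apply_constraint(samples, constraint):
    """Drop samples violating the constraint, then renormalise the weights."""
    kept = [dict(s) for s in samples if constraint(s)]
    total = sum(s["weight"] for s in kept)
    for s in kept:
        s["weight"] /= total
    return kept

# A non-linear constraint added after the fact: stay inside the unit circle.
updated = apply_constraint(samples, lambda s: s["x"] ** 2 + s["y"] ** 2 < 1)
```

The remembered samples absorb the new constraint through their weights, so no fresh exploration of the landscape is needed.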