Fundamental AI

We develop fundamental AI technology to address supply chain challenges in a trustworthy, explainable and reliable way. Some of our key focus areas include:

Uncertainty: A missing ingredient in many supply chain machine learning deployments is the measurement of uncertainty. Without reasoning about the uncertainty of these systems, manufacturers are exposed to the risk of overconfident and erroneous predictions, and the resulting mistrust hinders the adoption of promising ML techniques.
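
As one concrete illustration of what uncertainty-aware forecasting can look like, the sketch below fits quantile regressors to produce a 90% prediction interval alongside a point forecast. It is a minimal sketch: the data is synthetic and the feature names are invented placeholders, not drawn from a real deployment.

```python
# Minimal sketch: prediction intervals via quantile regression.
# Synthetic data and illustrative features, not a real deployment.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # e.g. lead time, price, seasonality (hypothetical)
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=500)

# Fit one model per quantile to obtain a 90% prediction interval.
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in (0.05, 0.5, 0.95)
}

x_new = rng.normal(size=(1, 3))
lo, med, hi = (models[q].predict(x_new)[0] for q in (0.05, 0.5, 0.95))
print(f"forecast {med:.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")
```

Exposing an interval rather than a single number lets a planner see at a glance when the model is unsure and a prediction should not be trusted blindly.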

Cooperative AI: Supply networks are co-opetitive systems, where companies embedded in a complex network depend on one another for their performance yet may also compete with one another. Can we build multi-agent systems that learn to cooperate to achieve better outcomes at the system level?
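
One minimal way to make this question concrete is a repeated stag-hunt game between two independent learners. The sketch below is purely illustrative: the game, payoffs and learning rule are toy choices standing in for interdependent supply chain actors, not a model of any real network.

```python
# Minimal sketch: two independent epsilon-greedy Q-learners repeatedly
# play a stag-hunt game; cooperation typically emerges. Toy example only.
import numpy as np

# Action 0 = cooperate ("stag"), action 1 = defect ("hare").
# payoff[(a1, a2)] gives (reward to agent 1, reward to agent 2).
payoff = {(0, 0): (4, 4), (0, 1): (0, 3), (1, 0): (3, 0), (1, 1): (3, 3)}

rng = np.random.default_rng(1)
q = [np.full(2, 5.0), np.full(2, 5.0)]  # optimistic init encourages exploration
eps, lr = 0.1, 0.1

for t in range(5000):
    acts = [a if rng.random() > eps else int(rng.integers(2))
            for a in (int(np.argmax(q[0])), int(np.argmax(q[1])))]
    rewards = payoff[tuple(acts)]
    for i in range(2):  # each agent updates only its own action value
        q[i][acts[i]] += lr * (rewards[i] - q[i][acts[i]])

print("learned action values:", q)  # both agents typically favour cooperating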

Explainability: Lack of explainability is a pervasive problem that impacts AI adoption and has driven significant research efforts across the wider AI landscape. While supply chain AI has hitherto focused on maximising predictive performance, more attention should be paid to incorporating XAI approaches that support informed decision-making when AI technology is used in critical applications.
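
As a small illustration of one common XAI technique, the sketch below computes permutation feature importance for a model trained on synthetic data. The feature names are invented placeholders for supply chain signals, and the technique shown is one standard post-hoc method rather than a description of our own approach.

```python
# Minimal sketch: post-hoc explainability via permutation importance.
# Synthetic data; feature names are hypothetical supply chain signals.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["lead_time", "order_volume", "supplier_rating"]  # illustrative
X = rng.normal(size=(400, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=400)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in model score: larger
# drops mean the model relies more heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")
```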

Machine unlearning: Machine unlearning is the ability of a machine learning model to forget, which allows us to comply with data privacy regulations and to remove harmful, manipulated, or outdated information. The key challenge lies in forgetting specific information while preserving model performance on the remaining data. Retraining from scratch on data that excludes the offending samples adds substantial computational overhead. We are exploring retrain-free options to create fast and highly performant machine learning models.
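
As one concrete example of the retrain-free idea, exact unlearning is possible in closed form for simple models. The sketch below removes a single training point from a ridge regression model via a Sherman-Morrison rank-one update of the inverse Gram matrix, and checks the result against full retraining. The data is synthetic and this classical trick illustrates the concept; it is not a description of our own methods.

```python
# Minimal sketch: retrain-free exact unlearning for ridge regression
# via a Sherman-Morrison rank-one update. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.normal(size=200)
lam = 1.0  # ridge regularisation keeps the Gram matrix invertible

A_inv = np.linalg.inv(X.T @ X + lam * np.eye(5))
b = X.T @ y
theta = A_inv @ b  # original model

# "Forget" training point i without touching the remaining data.
i = 17
x_i, y_i = X[i], y[i]
u = A_inv @ x_i
A_inv += np.outer(u, u) / (1.0 - x_i @ u)  # Sherman-Morrison update
theta_unlearned = A_inv @ (b - y_i * x_i)

# Sanity check: matches full retraining on data with point i removed.
X_r, y_r = np.delete(X, i, axis=0), np.delete(y, i)
theta_retrained = np.linalg.solve(X_r.T @ X_r + lam * np.eye(5), X_r.T @ y_r)
assert np.allclose(theta_unlearned, theta_retrained)
```

The update costs O(d^2) per removed point instead of refitting on the full dataset, which is what makes the retrain-free route attractive.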

Continual learning: Machine learning models deployed in supply chain organisations must continuously adapt to their changing environments and learn to solve new tasks. Continual learning seeks to overcome the challenge of catastrophic forgetting, where learning to solve new tasks causes a model to forget previously learnt information.
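
A simple baseline that mitigates catastrophic forgetting is experience replay: keep a small buffer of examples from past tasks and mix them into training on each new task. The sketch below uses synthetic tasks and an off-the-shelf linear classifier purely for illustration; it is one standard baseline, not our own method.

```python
# Minimal sketch: continual learning with experience replay.
# Synthetic tasks whose class boundary shifts over time.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_task(shift):
    """Synthetic binary task; the class boundary shifts between tasks."""
    X = rng.normal(size=(300, 4)) + shift
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

model = SGDClassifier(loss="log_loss", random_state=0)
replay_X, replay_y = [], []  # buffer of stored past-task samples

for shift in (0.0, 2.0, 4.0):  # a sequence of tasks
    X, y = make_task(shift)
    X_train, y_train = X, y
    if replay_X:  # mix stored old examples into the new task's data
        X_train = np.vstack([X] + replay_X)
        y_train = np.concatenate([y] + replay_y)
    for _ in range(5):  # a few passes of online updates
        model.partial_fit(X_train, y_train, classes=[0, 1])
    keep = rng.choice(len(X), size=50, replace=False)  # store a small sample
    replay_X.append(X[keep])
    replay_y.append(y[keep])

# Without the replay buffer, accuracy on the first task typically collapses.
X0, y0 = make_task(0.0)
print("accuracy on first task after all tasks:", model.score(X0, y0))
```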
