Machine learning is a strategic asset for Fractal. It is at the heart of our predictive models, which enable our platform to give small businesses timely insights into their forecasted expenditure and better financial product recommendations.
As if we needed more reason to visit Long Beach, California, this year’s ICML (International Conference on Machine Learning) was filled with detailed findings, meticulous presentations and, of course, a lot of sunshine.
The performance benchmark for NLP architectures is rising
Models such as the Transformer, and more sophisticated architectures such as BERT, which use the attention mechanism (originally introduced to help neural machine translation systems handle long source sentences), are becoming the performance benchmark for the industry.
Already adopted by big names within the machine learning (ML) space, Transformers (used by OpenAI in their language models, and recently by DeepMind for AlphaStar) and Google’s BERT are becoming mainstream across the wider ML community.
This is due to a number of factors: the accessibility of open-source code, the ease of integration, and the quality of the infrastructure available for measuring model performance.
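To make the attention mechanism behind these architectures concrete, here is a minimal sketch of scaled dot-product attention (the core operation inside a Transformer), written with NumPy. The function name and the toy data are illustrative, not taken from any particular library.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, the building block of Transformers.

    Q, K: (seq_len, d_k); V: (seq_len, d_v).
    Each output row is a weighted average of the rows of V, where the
    weights reflect how well each query matches each key.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
    # Softmax over the key dimension turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 3 tokens with 4-dimensional representations.
# Self-attention sets Q = K = V, so every token attends to every other.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(x, x, x)
print(w.sum(axis=-1))  # each token's attention weights sum to 1
```

The scaling by the square root of the key dimension keeps the dot products from growing with dimensionality, which would otherwise push the softmax into regions with vanishing gradients.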
Self-supervised learning is the new direction for machine learning
There are not many things that can grab a machine learning scientist’s attention for long. However, self-supervised learning is generating a lot of buzz in the ML community, with many stating that it is the direction to which we will all become accustomed.
So what is the self-supervised learning approach? Algorithms and reinforcement learning (RL) agents learn without explicit supervision, using smaller amounts of labelled data or taking fewer environment steps. This matters because the amount of labelled data available is often a huge barrier to the adoption of effective ML models. Algorithms that work well without large amounts of labelled data will make the benefits of deep learning accessible to everyone, especially businesses looking for a competitive advantage.
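A common way to see how labels can come "for free" is the masked-prediction pretext task used to pre-train models like BERT. The sketch below is a simplified, hypothetical illustration: it hides a fraction of tokens in an unlabelled sentence and records what was hidden, so the original data supplies its own training targets with no human annotation.

```python
import random

def masked_lm_examples(tokens, mask_token="[MASK]", mask_prob=0.15, seed=0):
    """Turn an unlabelled token sequence into a (masked input, targets) pair.

    The 'labels' are simply the original tokens we hid, so no human
    annotation is required: the data supervises itself.
    """
    rng = random.Random(seed)
    inputs, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            inputs[i] = mask_token
            targets[i] = tok  # the model must recover this from context
    return inputs, targets

sentence = "small businesses need timely financial insights".split()
x, y = masked_lm_examples(sentence, mask_prob=0.4)
print(x)  # some tokens replaced with [MASK]
print(y)  # positions mapped to the tokens the model must predict
```

A model trained to fill in these blanks learns useful representations of language from raw text alone, which can then be fine-tuned on a small labelled dataset for the task a business actually cares about.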
The importance of identifying and understanding deep-learning phenomena
The ML community is increasingly focused on identifying and understanding deep-learning phenomena, together with the vulnerabilities of deep-learning methods.
Further progress is needed in this area, as it will help address the issues of quantifying uncertainty and making deep-learning algorithms robust.
For example, in May 2016 the first fatality involving a self-driving vehicle occurred when the vehicle’s perception system confused the white side of a trailer with the bright sky. Being able to correctly quantify the uncertainty of ML models could help avoid many security and ethics issues like the one above.
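One simple, widely used way to quantify this kind of uncertainty is an ensemble: train several models on resampled data and treat the disagreement between their predictions as an uncertainty signal. The sketch below is a toy illustration with bootstrap ensembles of least-squares lines (not any specific production technique); the key behaviour is that the spread grows when the model is asked to predict far outside its training data.

```python
import numpy as np

def ensemble_predict(X_train, y_train, x_query, n_models=20, seed=0):
    """Estimate predictive uncertainty with a bootstrap ensemble.

    Each member fits a least-squares line on a resampled dataset; the
    standard deviation of the members' predictions at x_query signals
    how uncertain the model is there.
    """
    rng = np.random.default_rng(seed)
    n = len(X_train)
    a_query = np.array([1.0, x_query])
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)  # bootstrap resample
        A = np.stack([np.ones(n), X_train[idx]], axis=1)
        coef, *_ = np.linalg.lstsq(A, y_train[idx], rcond=None)
        preds.append(a_query @ coef)
    preds = np.array(preds)
    return preds.mean(), preds.std()

# Toy data: y ≈ 2x with a little noise, observed only on [0, 1].
X = np.linspace(0, 1, 50)
y = 2 * X + 0.1 * np.random.default_rng(1).normal(size=50)

mean_in, std_in = ensemble_predict(X, y, 0.5)    # inside the training range
mean_out, std_out = ensemble_predict(X, y, 10.0)  # far outside it
print(f"in-range:  {mean_in:.2f} ± {std_in:.2f}")
print(f"far away:  {mean_out:.2f} ± {std_out:.2f}")  # noticeably larger spread
```

A perception system that reported this kind of spread alongside its prediction could flag inputs unlike anything it was trained on, such as a trailer side that looks like sky, instead of acting on a confident but wrong answer.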