AI and Data Science

Machine learning trend analysis in 2024

13 September 2024 | Last updated: 25 November 2024
Dr Russell Hunter

Meet Dr Russell Hunter, Senior Teaching Associate (Online Education and Web Technology) at the Department of Engineering, University of Cambridge, and the Academic Lead for the cutting-edge Cambridge Advance Online course Leveraging Big Data for Business Intelligence.


Artificial intelligence (AI) and machine learning (ML) have become the buzzwords of recent years. They have transformed not only the way we work but, more importantly, the way we live. These technologies are expected to evolve continuously, further affecting many aspects of our lives.

The terms AI and machine learning often go together, but what’s the difference?

In his Cambridge Advance Online course Leveraging Big Data for Business Intelligence, Dr Russell Hunter draws on IBM’s definition of machine learning as ‘a branch of AI and computer science which centres on the use of data and algorithms to imitate the way humans learn in order to gradually improve the accuracy of predictions’.

This kind of technology is so closely woven into the structure of our daily lives that we might not even notice it. Some examples we’ve probably all come across include personalised social media recommendations, facial recognition technology, virtual personal assistants and self-driving cars.

As we approach the end of 2024, our latest blog article asks Dr Hunter to share his top 10 trends in the fast-evolving world of machine learning. We delve into some of the pivotal advances, real-world applications and the far-reaching effects of these cutting-edge technologies that every business leader needs to know about.

Top 10 machine learning trends in 2024

  1. Autonomous decision-making

  2. Federated learning

  3. Machine learning operationalisation management (ML Ops)

  4. AI and sustainability

  5. Quantum machine learning

  6. Explainable AI (XAI)

  7. Edge AI

  8. AI-generated content

  9. AI in healthcare

  10. Augmented workforce

Analysis of 2024 machine learning trends

1. Autonomous decision-making

First off the blocks, the financial and healthcare sectors are rapidly embracing autonomous decision-making systems to transform their operations. These advanced systems boost the speed and precision of crucial decisions, leading to greater efficiency and enhanced customer experiences.

However, Dr Hunter points out that some of these capabilities are not new within their respective fields. In trading, for example, autonomous systems enhance efficiency and speed by seizing market opportunities in milliseconds. Automation in general reduces the need for manual processes, cutting costs and allowing faster transaction processing. Moreover, the ability to analyse vast amounts of data quickly helps identify patterns and inform decisions, which is especially useful for managing risk and detecting fraud.
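The pattern-spotting behind fraud detection can be illustrated with a deliberately simple sketch: flag transactions whose amounts sit far outside an account's historical spending, using a basic z-score rule. The function name, data and threshold below are hypothetical; real fraud systems use far richer features and models.

```python
from statistics import mean, stdev

def flag_anomalies(history, new_transactions, threshold=3.0):
    """Flag transactions more than `threshold` standard deviations
    away from the account's historical mean amount."""
    mu, sigma = mean(history), stdev(history)
    return [t for t in new_transactions if abs(t - mu) / sigma > threshold]

# Typical spending sits around 50; the 5,000 transfer stands out.
history = [42.0, 55.0, 48.0, 51.0, 60.0, 45.0, 53.0, 49.0]
print(flag_anomalies(history, [47.0, 5000.0, 52.0]))  # → [5000.0]
```

In practice the same idea is applied per customer and per feature, and the flagged transactions are passed to a richer model or a human analyst rather than blocked outright.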

‘In terms of healthcare, we can move towards personalised medicine using autonomous systems,’ Dr Hunter says. ‘Sophisticated multimodal AI can analyse genetic data and patient histories to recommend personalised treatment plans. This leads to more effective and individualised healthcare. Similarly, by leveraging data from electronic health records, these systems can predict patient outcomes or complications, which allows for proactive intervention.’

‘Like the financial sector, keeping costs down is important,’ adds Dr Hunter. ‘Being able to automate resource allocation leads to more efficient use of resources, consequently improving healthcare delivery. Assisting healthcare professionals with automation allows them to focus on patient care.’

Dr Hunter’s article, ‘How three of the biggest companies use big data and how you can use it too’ sheds more light on this topic.

2. Federated learning

Federated learning is a decentralised machine learning approach where multiple devices or servers collaborate to train a model while keeping data localised on the respective device. Instead of sending data to a central server, each device trains a model on its own local data and only shares the model parameters (for example, the weights and gradients) with a central server.

‘The central server aggregates these parameters to update the main/global model, which is then distributed back to the devices for further training,’ explains Dr Hunter. ‘This process is iterative and continues until the model converges. By keeping the data on local devices, federated learning minimises the risk of data breaches and unauthorised access. Sensitive information never leaves the device, significantly reducing exposure. This is also useful for regulatory compliance like data protection. Model updates can also be anonymised to further protect user identities and data specifics.’
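The aggregation step Dr Hunter describes can be sketched in a few lines: a minimal, dataset-size-weighted parameter-averaging routine in the spirit of federated averaging (FedAvg). The weights and dataset sizes below are invented, and real systems exchange large encrypted tensors rather than short lists.

```python
def federated_average(client_weights, client_sizes):
    """Aggregate client model parameters into a global model by
    averaging each parameter, weighted by local dataset size (FedAvg-style)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n / total for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Three devices train locally and share only their parameters, never their data.
weights = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
sizes = [100, 100, 200]          # larger local datasets get more influence
global_model = federated_average(weights, sizes)
print(global_model)  # → [0.45, 0.75]
```

The server then sends `global_model` back to the devices and the cycle repeats until convergence, exactly as described above.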

Dr Hunter goes on to explain that encryption techniques ensure that the model updates are aggregated to prevent the central server from accessing individual data points and to detect and mitigate the impact of malicious participants who might try to corrupt the model. Although the distributed nature of federated learning seems to add vulnerability, it actually makes it harder for things like poisoned data to be introduced into the learning process. The main security concern is to ensure that all participating devices are trustworthy and not compromised.

‘One issue is communication between devices and the central server, especially in environments or locations with little connectivity,’ Dr Hunter continues. Similarly, communication latency can slow down the training process. Device constraints are another issue: limited processing power and battery life hinder training. And because local data is often not independent and identically distributed (non-IID), models can struggle to generalise; this increases model divergence, complicating aggregation into the global model.

Federated learning is particularly beneficial in scenarios where data is compartmentalised due to legal or economic reasons, according to Dr Hunter. In the healthcare sector, for instance, patient data is frequently kept isolated because of privacy concerns. Traditional ML models therefore have access to limited data which can make them biased. Federated learning allows AI algorithms to access more diverse data from various sources, leading to more accurate and generalised predictions.

3. ML Ops

Machine learning operationalisation management – or ML Ops for short – focuses on the deployment, monitoring and governance of ML models in production, explains Dr Hunter.

‘In the early phases of our innovation work in this space,’ Dr Hunter recounts, ‘there was a worry about drift in performance, managing multiple variations of models, and retraining new data without affecting the business. This is the kind of thing that ML Ops can help solve as it integrates best practices from a well-established practice in DevOps (software development [dev] and operations [ops]) to ensure the reliable and scalable operation of ML systems.’

Dr Hunter further explains that the standardisation and streamlining of ML workflows through ML Ops have become essential as businesses scale their AI capabilities. This trend has solidified its place in the industry, enabling faster deployment and maintenance of ML models.
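The performance-drift worry Dr Hunter recounts can be illustrated with a toy monitor: track a rolling window of live prediction outcomes and raise an alert when accuracy falls below the validation baseline by more than a tolerance. All names and thresholds here are hypothetical; production ML Ops stacks use dedicated monitoring tooling.

```python
from collections import deque

class DriftMonitor:
    """Alert when rolling live accuracy drops below the validation
    baseline by more than `tolerance` (illustrative thresholds only)."""

    def __init__(self, baseline_accuracy, tolerance=0.05, window=100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def drifting(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        live_accuracy = sum(self.outcomes) / len(self.outcomes)
        return live_accuracy < self.baseline - self.tolerance

monitor = DriftMonitor(baseline_accuracy=0.90, tolerance=0.05, window=100)
for _ in range(100):
    monitor.record(prediction=1, actual=1)   # model performing well so far
print(monitor.drifting())  # → False
```

When `drifting()` flips to true, an ML Ops pipeline would typically trigger retraining on fresh data and roll out a new model version without interrupting the business.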

4. AI and sustainability

When it comes to sustainability, AI also has its pros and cons.

‘As far back as five years ago, I had a dissertation student using predictive models on the distributed grid for windfarm energy in Orkney,’ Dr Hunter says. ‘Advances in this area are widespread where we have AI that optimises the distribution and consumption of electricity through smart grid management, predicting demand patterns, and adjusting supply accordingly to minimise waste and ensure efficient use of resources. There are also AI systems that regulate heating, ventilation and air conditioning systems in real time based on occupancy and weather forecasts, reducing energy consumption and operational costs.’

For years there have been models that predict and monitor carbon emissions across industries, aiding in the creation of effective policies and strategies to reduce carbon footprints. AI enhances the efficiency of renewable energy sources like solar and wind by predicting weather conditions and optimising the alignment and operation of solar panels and wind turbines. These approaches are also applied in agriculture for optimising processes and crop monitoring, as well as in waste management such as sorting recyclables and conservation.

However, as models get more complex, they demand more computational power and the more we rely on them, the more energy is needed to power them. For instance, training a single large model can consume as much energy as several hundred households over a year.

‘Data centres that support AI operations consume vast amounts of electricity,’ says Dr Hunter. ‘While efforts are being made to use renewable energy sources, the current reliance on traditional energy sources contributes to a substantial carbon footprint.’

There is ongoing research and development focused on algorithms that require less computational power without compromising performance. Techniques like model pruning (removing redundant parameters from a deep neural network), quantisation (representing weights with lower-precision numbers to compress the model) and even quantum computing could be used to reduce energy consumption. Similarly, efficiencies can be made in hardware with more energy-efficient processors.
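As a rough illustration of the quantisation idea, the sketch below maps floating-point weights onto 8-bit integers using a single scale factor (symmetric linear quantisation), shrinking storage roughly fourfold relative to 32-bit floats. The weights are toy values; production schemes are considerably more sophisticated.

```python
def quantise(weights, bits=8):
    """Map float weights onto signed integers in [-(2**(bits-1)-1), 2**(bits-1)-1]
    using one shared scale factor (symmetric linear quantisation)."""
    qmax = 2 ** (bits - 1) - 1                  # 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantise(q_weights, scale):
    """Approximate reconstruction of the original floats."""
    return [q * scale for q in q_weights]

weights = [0.52, -1.27, 0.03, 0.9]
q, scale = quantise(weights)
print(q)                     # small integers instead of 32-bit floats
print(dequantise(q, scale))  # close to the originals, within one scale step
```

The reconstruction error is bounded by the scale factor, which is why accuracy usually degrades only slightly while memory, bandwidth and energy costs fall substantially.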

5. Quantum machine learning

Quantum machine learning is also an emerging area.

‘As AI continues to grow and move forward,’ says Dr Hunter, ‘the computational resources needed grow exponentially too. Quantum computing can potentially solve some of these problems. Quantum AI has the potential to allow more accurate and complete models as they are not constrained by classical computing. This is more speculative for the future, but it’s an exciting frontier and has the potential to solve problems beyond the reach of classical algorithms.’

This pioneering area is attracting significant research and investment, particularly in high-stakes industries like finance and pharmaceuticals. IBM and Google are also heavily invested, according to Dr Hunter.

6. Explainable AI (XAI)

Explainable AI or XAI is another very important area.

‘I touch on this both in my teaching and my professional work,’ says Dr Hunter. ‘When you build a complex model to solve a particular problem, it is often difficult to persuade stakeholders to come on board. In fact, in many cases they would prefer a less optimal model that can be visualised and understood easily over some kind of mysterious model that works for unknown reasons. This is especially important when it comes to healthcare or finance.’

In healthcare, XAI provides explanations for diagnostic decisions or treatment recommendations made by AI systems. These explanations are crucial for doctors and patients to trust and act on AI-driven insights, ultimately improving patient outcomes. AI models used for predicting patient risks, such as the likelihood of developing a certain disease, need to be clear and understandable to ensure that healthcare providers can grasp the underlying factors behind the risk assessment.

‘Financial institutions use AI for credit scoring and loan approvals,’ Dr Hunter explains when it comes to the use of XAI in the finance industry. ‘XAI helps in providing transparent reasons for credit decisions, ensuring compliance with regulatory standards and building trust with customers. AI systems for detecting fraudulent activities in financial transactions benefit from XAI by offering clear explanations for flagged transactions, enabling quicker and more accurate responses from human analysts.’

XAI aims to make AI decisions understandable to humans, enhancing trust and regulatory compliance.
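The transparency described in the credit-scoring example can be illustrated with a toy linear model whose per-feature contributions double as human-readable reason codes. The features, weights and applicant values below are invented purely for illustration; real scorecards are calibrated and regulated.

```python
def score_with_reasons(applicant, weights, bias=0.0):
    """Linear credit score plus per-feature contributions, so every
    decision comes with an explanation a human can check."""
    contributions = {f: applicant[f] * w for f, w in weights.items()}
    score = bias + sum(contributions.values())
    # Reason codes: features ordered by how strongly they pulled the score down.
    reasons = sorted(contributions, key=contributions.get)
    return score, contributions, reasons

# Hypothetical normalised features and hand-set weights.
weights = {"income": 2.0, "debt_ratio": -3.0, "missed_payments": -1.5}
applicant = {"income": 0.6, "debt_ratio": 0.8, "missed_payments": 2.0}

score, contributions, reasons = score_with_reasons(applicant, weights)
print(round(score, 2))  # → -4.2
print(reasons[0])       # → missed_payments, the biggest factor against approval
```

Because each contribution is visible, a declined applicant can be told exactly which factors drove the decision, which is the regulatory and trust benefit XAI targets.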

7. Edge AI

Another cutting-edge development, Edge AI offers significant benefits in terms of improving response times and data efficiency by processing data locally on the device, which reduces latency and enables real-time decision-making, according to Dr Hunter.

This immediate processing capability is crucial for applications in autonomous vehicles, industrial automation and healthcare monitoring, where time-sensitive tasks require prompt responses. Additionally, local data processing minimises the amount of data that needs to be transmitted to central servers, easing network latency and connectivity issues.

Edge AI also enhances privacy and security by processing sensitive information locally, which reduces the risk of data breaches during transmission. This is particularly important for sectors like healthcare and finance that handle personal data. The technology allows for the implementation of advanced security measures such as data anonymisation and encryption before any data transmission occurs, ensuring higher security standards.

‘As with federated learning, challenges such as hardware limitations, integration complexity, and the need for efficient management and maintenance of numerous edge devices curtail the full effectiveness of edge AI,’ adds Dr Hunter.

8. AI-generated content

AI-generated content is fundamentally transforming the media, marketing and entertainment industries by automating and enhancing creative processes.

‘In media and marketing, AI tools such as ChatGPT are revolutionising ad development and creativity,’ says Dr Hunter. ‘These tools produce AI-generated images, create various versions of photographs, and automate audio recordings, significantly speeding up the ad production process and enabling more personalised and engaging content.’

AI’s ability to predict user preferences and offer personalised recommendations helps media companies optimise their campaigns for higher return on advertising spend, ultimately boosting viewership and readership, he adds.

‘In the entertainment industry, AI is used to create lifelike visual effects and generate realistic synthetic media content, including videos and virtual environments. AI-driven platforms enhance user engagement by providing real-time insights into campaign performance and delivering highly personalised experiences. Moreover, AI helps with automating tedious tasks such as campaign management, ad production and reporting, allowing creatives to focus more on innovation and idea generation.’

‘This also extends to things like AI chatbots, which can streamline lead qualification and automate sales processes, providing instant responses to ad enquiries and customer queries,’ says Dr Hunter. ‘AI’s predictive analytics capabilities allow advertisers to forecast marketing outcomes more accurately and create dynamically tailored ads that adjust in real time based on product attributes and customer preferences. This level of customisation ultimately leads to better campaign performance.’

Read Dr Hunter’s article on how generative AI can impact business intelligence.

9. AI in healthcare

AI’s applications in healthcare are expanding rapidly – from diagnostics to personalised medicine. As we’ve already discussed, this technology’s capability to analyse massive datasets is driving breakthroughs in treatment options and patient care management.

‘One of my first projects using convolutional neural networks was creating a skin cancer classifier, which was based on a model that could outperform the best medical minds in the world,’ recalls Dr Hunter. ‘There has been a lot of advancement since then. For example, agentic AI represents a significant advancement beyond classical reactive AI by being designed to proactively set its own goals and take autonomous actions to achieve them. In the realm of personalised healthcare, agentic AI can revolutionise patient care by continuously monitoring patient health metrics and autonomously administering medication as needed. For example, an agentic AI system could monitor a diabetic patient’s blood sugar levels in real-time and administer insulin precisely when required, thus maintaining optimal glucose levels and reducing the risk of complications.’

‘Another application is in personalised treatment plans for chronic diseases,’ Dr Hunter adds. ‘Agentic AI can analyse vast amounts of patient data to predict disease progression and suggest tailored treatment plans. For instance, in oncology, agentic AI can process data from medical records, genetic profiles and treatment responses to recommend personalised chemotherapy protocols, potentially improving outcomes and minimising side effects.’

These proactive systems not only enhance patient care but also have the potential to alleviate the burden on healthcare professionals by automating routine monitoring and treatment adjustments.
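The glucose-monitoring loop Dr Hunter describes can be caricatured as a tiny proportional controller: dose in proportion to how far a reading sits above target. The numbers and correction factor below are illustrative only and are in no sense clinical guidance; real closed-loop systems layer safety checks, prediction and human oversight on top.

```python
def insulin_dose(glucose_mg_dl, target=110, sensitivity=50):
    """Toy correction-factor rule: units of insulin to nudge a high
    glucose reading back toward target. Illustrative numbers only."""
    if glucose_mg_dl <= target:
        return 0.0  # at or below target: no correction
    return round((glucose_mg_dl - target) / sensitivity, 1)

# An agentic system would run this continuously on live sensor readings.
readings = [95, 130, 210]
print([insulin_dose(g) for g in readings])  # → [0.0, 0.4, 2.0]
```

The point is the architecture, not the formula: sensing, deciding and acting happen in one autonomous loop, with clinicians supervising exceptions rather than every reading.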

10. Augmented workforce

While there are concerns that AI will replace humans in the workplace, Dr Hunter believes that the latest AI developments can augment rather than undermine human contributions.

‘The augmented workforce trend leverages AI to assist rather than replace human workers, transforming job roles and boosting productivity across various sectors,’ he says. ‘This collaboration between humans and AI combines the strengths of both, allowing AI to handle repetitive, data-intensive tasks while humans focus on strategic, creative and interpersonal activities that require emotional intelligence and critical thinking.’

How does this happen? In healthcare, for example, AI can assist doctors by analysing medical images and patient data, identifying patterns that might be missed by the human eye. This allows doctors to make more accurate diagnoses and develop personalised treatment plans, thereby improving patient outcomes and operational efficiency. Similarly, in customer service, AI-powered chatbots can handle routine inquiries, freeing human agents to resolve more complex issues and provide a higher level of customer satisfaction.

‘Rather than eliminating jobs, AI reshapes them, leading to the creation of new roles that require managing, programming and collaborating with AI systems. Employees may need to acquire new skills to work effectively alongside AI, fostering a culture of continuous learning and adaptation. AI can take over mundane tasks and boost productivity by allowing people to contribute more meaningfully and creatively. Thus, the collaboration between AI and people drives innovation, efficiency and job satisfaction,’ he concludes.

The impact of trends on business workflows

In addition to the trends we explored, an emerging area of innovation is the use of hybrid models that leverage the strengths, and mitigate the weaknesses, of individual machine learning approaches.

‘Hybrid AI models can integrate symbolic AI with machine learning or blend different types of neural networks to improve performance and robustness,’ says Dr Hunter. ‘This approach can be particularly beneficial in complex problem-solving scenarios where a single model may fall short.’

The use of AI in cybersecurity will also become highly important, according to Dr Hunter: ‘AI improves cybersecurity exponentially by automating threat detection, enhancing data analysis and providing real-time responses to cyber threats, making security systems more efficient and effective.’

The future of machine learning trends in business

Indeed, machine learning is driving significant advancements across industries. Finance and healthcare are some fields benefitting from these advancements, with trends like autonomous decision-making, federated learning and AI-driven cybersecurity enhancing efficiency, security and innovation. However, challenges like increased energy demands and the need for transparency are also emerging, while future technologies like quantum machine learning and hybrid AI models hold the potential for further breakthroughs.

Thus, it is crucial to keep an eye on these trends and developments and make sure that our businesses and organisations are fully equipped to gain an edge by leveraging AI. As Dr Hunter notes, the downsides of not embracing AI and machine learning in a modern business world are obvious: ‘Everything from a competitive disadvantage, operational inefficiencies, loss of customer trust and loyalty due to a lack of hyper-personalisation, and seamless experiences to the simple issue of being able to attract talent.’

For anyone looking to deepen their understanding of these trends and their applications, Cambridge Advance Online’s Leveraging Big Data for Business Intelligence course, led by Dr Hunter, is the perfect resource.

Find out more about Dr Hunter and how he approaches big data for business intelligence in the exclusive interview he recently did for CAO.

References:

IBM (n.d.) ‘What is machine learning (ML)?’ https://www.ibm.com/topics/machine-learning, accessed 6 September 2024.

Dr Russell Hunter

Senior Teaching Associate (Online Education and Web Technology), Department of Engineering, University of Cambridge
Russell is a Senior Software Engineer and a researcher at the University of Cambridge.