AI road safety cameras are fuelling a surge in driver fines. Are they fair?
Artificial intelligence (AI) road safety cameras have been rolling out across Australia, resulting in a large number of fines.
For example, roughly 184,000 infringements have been issued in Western Australia since the cameras were launched in October last year. In New South Wales, more than 130,000 fines were issued in 2024-2025, the first year the technology was used. In Queensland, about 114,000 fines based on AI image recognition technology were issued in 2024.
But it is the size of the penalties that has drawn much attention and criticism. In WA, penalties start at A$550 and four demerit points for seatbelt infringements. These can add up quickly in cases where people get repeated fines.
For example, one WA driver with a seatbelt exemption was issued almost A$20,000 in fines after multiple seatbelt infringements.
Although fines can be challenged, there are growing concerns about the accessibility and fairness of the appeal process, especially at scale. The technology also raises the question of whether AI camera systems are the best way to promote safe driving.
How do AI road safety cameras work?
AI road safety cameras use computer vision systems to identify patterns in still images captured by roadside cameras.
AI software initially reviews all images. Where no infringement is detected, the image is automatically deleted. Where potential infringements are detected, the images are then flagged for human review before an infringement is sent out to a driver.
The approach is intended to increase efficiency, as it saves officers from having to manually review large volumes of images.
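The two-stage flow described above (automated screening, then human review of flagged images only) can be sketched in code. This is a minimal illustration, not the actual software used by any road authority; the model score, threshold, and all names are assumptions for the sake of the example.

```python
# Illustrative sketch of the triage pipeline described above.
# The detection score, threshold, and data structures are hypothetical;
# the real systems used by Australian road authorities are not publicly specified.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CameraImage:
    image_id: str
    detection_score: float  # hypothetical model confidence that an infringement occurred

@dataclass
class TriageResult:
    deleted: List[str] = field(default_factory=list)
    flagged_for_human_review: List[str] = field(default_factory=list)

def triage(images: List[CameraImage], threshold: float = 0.5) -> TriageResult:
    """AI pass: delete images where no infringement is detected,
    flag the rest for a human officer to review before any fine is issued."""
    result = TriageResult()
    for img in images:
        if img.detection_score < threshold:
            # No infringement detected: image is automatically deleted.
            result.deleted.append(img.image_id)
        else:
            # Potential infringement: a human reviews before a notice goes out.
            result.flagged_for_human_review.append(img.image_id)
    return result
```

The efficiency gain the article describes comes from the first branch: officers never see the (typically much larger) set of images the model clears.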
Drivers can challenge these fines where they believe an error occurred, or in cases where a valid excuse can be established.
In Queensland, a driver successfully challenged a fine he received after his passenger shifted their seatbelt under their arm mid-trip. Representing himself in court, the driver argued it would have been unsafe to continuously monitor and check on his passenger while driving, especially on a busy motorway.
Many others who have attempted to challenge fines have been less successful. And many have expressed confusion and frustration about the process of challenging fines.
Practical problems
One issue is the practical difficulty of challenging an infringement. When tens of thousands of fines are issued, the burden on drivers who wish to challenge fines increases accordingly.
Some drivers have reported spending hours on hold, navigating unfamiliar procedures, or struggling to work out whether their situations are even relevant to the appeals process.
This uncertainty can lead to stress and frustration. Appealing fines can also involve a significant amount of time and financial cost, especially if a case makes it to court and is unsuccessful.
The difficulty of appealing is related to a second issue: context.
AI camera systems rely on still images, which limit what can be captured. This is less of a problem with speeding infringements, where context is often irrelevant.
Seatbelt use, on the other hand, is more dynamic. Seatbelts can be temporarily moved by a passenger without the driver noticing, for reasons that are not immediately apparent from a single image.
In some cases, drivers have argued they or their passengers were wearing their seatbelt correctly, but the image presented was ambiguous. In one case from NSW, a woman argued her seatbelt was on correctly, sitting between her shoulder and collarbone. She was not successful in overturning the fine.
Drivers with medical conditions have grounds for challenging fines, but the details need to be carefully documented. Temporary injuries present difficulties. Brief seatbelt adjustments can be reasonable in context, but difficult to defend retrospectively, many weeks later, especially if there is no documentation.
With a human officer on the roadside, these situations could be addressed in real time, through explanation or education, potentially preventing extreme outcomes, such as the driver who was issued with almost A$20,000 worth of fines and faced the accumulation of nearly 200 demerit points.
The real issue, then, is not whether individuals can challenge infringement notices, but whether the system operates fairly at scale. If AI cameras aim to promote safe driving, this goal should extend beyond detection alone.
Measuring what matters
As with any automated system, much depends on the choice of proxy: the measurable indicator that stands in for what we care about.
The problem of treating a narrow proxy decisively has been well documented in other domains. In education, for example, student test scores can be used as a proxy for teaching quality. But as data scientist Cathy O'Neil has shown, these systems ignore crucial classroom context and have been relied on for high-stakes decisions, such as the dismissal of teachers.
The risk with AI-based enforcement is similar.
If the proxies for "safe driving" are too narrowly chosen, they distort what safety actually requires, creating the illusion that improving the proxy amounts to improving safety itself.
AI cameras can show when seatbelts are off or incorrectly positioned. The problem arises when seatbelt positioning comes to dominate enforcement, while other driver safety issues, such as tiredness, distraction, tailgating and aggressive driving, go unaddressed because they are harder to capture with automated roadside systems.
Improving road safety does not require abandoning AI enforcement. What it does require is that AI-mediated systems capture a reliable proxy, operate fairly at scale, and support education and prevention, not simply punishment.
TheConversation.com
Author: Adam Andreotta - Lecturer, School of Management and Marketing, Curtin University