Advertisement

  • Published Date

    March 6, 2026
    This ad was originally published on this date and may contain an offer that is no longer valid.

Ad Text

The Legal Light
Justin Stack

What's the law if AI gets it wrong?

More than a third of Australians say they use artificial intelligence (AI) at work several times a week. Eight per cent use it many times every day, and that proportion is growing. A Nine survey found the majority use ChatGPT, Google Gemini and other AI tools for minor tasks like writing emails, but a third of AI users said they use it to produce reports, presentations, translations, research and responses to questions from customers. One in ten said their boss doesn't know they use AI. A third don't know if their employer has a policy regarding AI use.

But what is the law if AI gets it wrong? What if AI inserts mistakes or false information into medical reports, accounting processes, or financial or legal advice? What if AI breaches confidential information?

Ethan Brightmore at Stacks Law Firm says the law has yet to catch up with the enormous growth of AI use in business dealings.

"Legal liability is likely to be determined on a case-by-case basis, depending on the extensiveness of AI's usage, the negligence in not cross-checking information and the consequences of the mistake," Mr Brightmore said.

"An employee who included false AI information is likely to be held responsible by their employer for not checking everything was correct before distributing it. But is the employer or AI owner still responsible?

"The difficulty in taking legal action against AI is that it is not a person. Our laws only permit filing lawsuits against entities that are what the law calls a 'legal person': a human being, or a non-human entity such as a corporation or organisation.

"It is a grey area whether AI is a 'legal person' or functioning as an agent of a 'legal person'. AI systems are considered the property of their creator, but AI has developed so fast it can now develop code for its own programs. It is developing beyond its creator.
"It could be argued AI should not be held responsible for its mistakes in the way a human or 'legal person' can, because it is not a conscious being. Others argue AI should be held responsible for its mistakes, because it is now capable of making decisions on its own.

"Under the legal principle of 'vicarious liability', which holds employers responsible for the actions of their agents or employees, it is likely the 'legal person' which passed on wrong information will be held responsible for the consequences.

"In the future, legislation may classify AI as a 'legal person', like a corporation, so that AI owners can be sued. Artificial intelligence declared a person? That's sci-fi scary," Mr Brightmore said.

STACKS LAW FIRM
Joshua Crowther
Specialist in Wills, Estates & Wealth Protection
02 6592 6592
taree.stacklaw.com.au
Partners in life