Wednesday, July 12, 2023


ChatGPT: the legal and insurance implications

The business world has been obsessed with ChatGPT lately, and not surprisingly. With its ability to generate reports and code, summarise information and even develop business proposals in mere seconds, generative AI has the potential to dramatically streamline processes and slash operating costs.

But as with any technology that seems too good to be true, business leaders need to consider the risks of generative AI, not just the rewards.

Just this week the federal government announced its intention to regulate AI, saying “safeguards” are needed to protect society and to address gaps in existing laws. The announcement came just days after a grim warning in a statement from the Center for AI Safety, signed by experts including the heads of OpenAI and Google DeepMind, that AI could lead to the extinction of humanity.

In fact, Sam Altman, CEO of OpenAI (the start-up behind ChatGPT), recently told a US Senate panel that he was “nervous” about AI’s potential to interfere with election integrity, and that the technology needs regulation.

But beyond these existential issues lie much more immediate and granular risks. With employee adoption of tools like ChatGPT outpacing business policies and broader regulation, is the unfettered use of AI putting employers at risk? And what is the role of insurance in an already labyrinthine risk environment?

Take cyber insurance as an example: employee use of generative AI tools like ChatGPT exposes businesses to a greater risk of accidental data breaches. This raises questions about cybersecurity coverage, particularly in the wake of the Optus and Medibank hacks, which have placed businesses under greater scrutiny over cyber risk management.

Then there are the professional indemnity considerations. In a professional services environment, what happens if employees give clients advice taken from ChatGPT that leads to significant financial or reputational loss? Does insurance cover this, and should it?

These considerations must be worked through.

Data privacy and cyber insurance risk

Recently, Samsung made global news after employees leaked sensitive company information to ChatGPT, uploading problematic source code and asking the AI tool to suggest a fix. Considering the resources organisations spend protecting their most valuable IP, a breach of this kind should be enough to make businesses rethink their policies on generative AI.

Companies are scrambling to update policies and procedures around the use of generative AI tools such as ChatGPT, and mitigating the risk of these tools causing cybersecurity breaches is a board-level imperative.

Through conversations with our clients, we know this is a growing priority for Risk and Compliance committees. And while the technology is in its infancy and pleading ignorance may be a valid position for now, the window of acceptable ignorance is quickly closing.

Risk management for generative AI is where cyber risk management was five years ago, so expect insurers, regulators and broader stakeholders to increasingly demand baseline risk management hygiene around these tools.

With the Australian Government proposing fines of up to $50 million for serious data breaches, this is not a risk many organisations can afford. As new technologies evolve, so too do the insurance policies that surround them, especially as they relate to cyber risk exposure. While the current state of play is murky and full of nuance, businesses would be wise to watch this space as it evolves or risk falling behind.

What if ChatGPT gives bad advice?

ChatGPT is highly prone to errors and misinformation. Organisations that provide advice to their customers, whether via a blog or in direct consultation, cannot rely on the accuracy of the information generative AI produces.

Specialised professional indemnity (or errors and omissions) insurance exists to protect insured companies against claims from clients alleging financial losses due to incorrect advice. However, the use of AI in any professional capacity could attract specific policy conditions in the future, potentially affecting cover. These implications must be considered seriously and on an ongoing basis.

For example, a lawyer might decide to ask ChatGPT to summarise a client’s annual financial report. But if the summary is incorrect and the wrong advice is provided to the client, the lawyer is in breach of their duty of care. In such a situation, both the individual and their firm are at risk of being sued.

The key concern for all professional services businesses should be the scale at which erroneous advice can be provided through tools like ChatGPT. One person giving the wrong advice could result in a lost client; 500 people giving the wrong advice could result in a significant lawsuit.

A legal grey area

Generative AI is only in its infancy, and many of the policies governing its use are yet to be established. We expect, however, that laws around the legal and fair use of generative AI will gradually begin to emerge.

Following its data breach incident, Samsung has banned staff from using generative AI tools for work purposes. It joins organisations such as JPMorgan Chase, Amazon, Verizon and Accenture, which have all banned ChatGPT use in the workplace.

Given the significant risk of data privacy breaches, the potential breach of cyber insurance and professional indemnity policies, and the risk of copyright infringement, organisations need to ask themselves: do the benefits of generative AI outweigh the potential risks? At this stage there is no right or wrong answer; it is up to each organisation to decide what risks it is willing to take.

Originally published in Technology Decisions