Uploading to the AI Cloud - Avoiding Stormy Weather
Posted on November 25, 2025
As a firm we have started to notice a lot more correspondence which has been written, or informed, by artificial intelligence. Its tell-tale signs are obvious, but its consequences are less so.
We of course have no issue in principle with clients carefully using A.I. in matters we are assisting with; it can often be a cost-effective way of researching, getting to grips with information, and considering the best way forward. We also acknowledge that we as lawyers (despite our best efforts) can still use a good bit of ‘legalese’ in our language, and no doubt A.I. has its place in parsing through correspondence and advice.
However, there are hidden risks in uploading correspondence of which our clients and other A.I. users may not be aware.
Legal Privilege
Sam Altman, the CEO of OpenAI, recently confirmed an obvious but important consideration: that there is no obligation of legal confidentiality when using large language models (LLMs) such as his company's offering, ChatGPT.
“…right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”
This means, first, that your discussions with an LLM hold no confidentiality and could indeed be discoverable in legal proceedings; it could also have knock-on effects for confidential documents or advice which do attract legal privilege.
Legal privilege (which comes in the form of legal advice privilege and litigation privilege) is a form of protection for confidential communication between advocates and their clients (and, sometimes, relevant third parties) which looks to ensure that those communications are not disclosable without a client’s permission. This is particularly important in litigation matters, where parties are under an obligation to share documents, including those which adversely affect their case or support an opponent’s case, with the other side. Documents which have had, and have lost, the protection can therefore become disclosable; this can include crucial legal advice and strategy which would have otherwise been protected.
It is important to note that very few publicly available LLMs are "closed" models. Whilst we at M&P have our own bespoke A.I. model, built on our servers and very much "closed" to the outside world and to any third-party access, most publicly accessible models are "open": conversations and uploaded documents are sent to central servers and may be retained and used to train the 'brain' that underpins the machine's learning. If you have uploaded an image of yourself to an open LLM for fun editing, that image may now form part of that A.I.'s mind and its view of the world.
In sharing a document which has attracted legal privilege with an "open" A.I., you share that communication, somewhat publicly, with a third party who owes you no duty of confidentiality. This likely strips away, immediately, the shield of confidentiality which must exist for legal privilege to remain[1]. In such a case, any legal privilege which did exist in communications between you and your advocate is likely to have been waived.
The consequences could be catastrophic. The crucial legal advice which sets out strategy, pitfalls and prospects could become open to discovery by the other side, meaning it might have to be handed over in any disclosure exercise. It could even be the case that your confidential information gets regurgitated by the LLM to someone else entirely, seeing as that document or advice now forms part of that LLM’s ‘brain’. It is something to be avoided at all costs.
Hallucination[2]
In a recent article I described asking ChatGPT a question on the indemnification of retiring trustees against fraud. What it returned was a generally correct answer, but it had 'hallucinated' legislation which didn't exist. Whilst its approach was mostly right, we have also seen more and more instances of LLMs providing completely incorrect and potentially damaging advice – particularly when it comes to the intricacies of Manx Law.
The Isle of Man is not a large jurisdiction, and there is not a great deal of publicly available and digestible information about its law. It is our understanding that the Isle of Man Government has configured its public webpages (including legislation.im) so that they cannot be scraped by LLMs; as a result, unless legislation is specifically uploaded or referred to by third parties, it is generally inaccessible at source.
As a result, LLMs – when scraping the internet for their information – have far less to draw on when composing their considered responses. What we have found is that their output on Isle of Man matters is often poor or simply wrong.
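For the technically curious: one common mechanism for this kind of blocking is a robots.txt file which refuses known AI crawlers (OpenAI's published crawler user agent, for example, is "GPTBot"). We have not verified how legislation.im actually implements its restrictions, so the following Python sketch is illustrative only; it simply checks whether a site's robots.txt would permit a given crawler to fetch a page:

    # Illustrative only: we have not verified legislation.im's actual
    # configuration. RobotFileParser reads a site's robots.txt and reports
    # whether a given user agent is permitted to fetch a given page.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://legislation.im/robots.txt")
    rp.read()  # fetch and parse the robots.txt file

    # "GPTBot" is OpenAI's published crawler user agent; "*" is any crawler.
    for agent in ("GPTBot", "*"):
        allowed = rp.can_fetch(agent, "https://legislation.im/")
        print(f"{agent}: {'allowed' if allowed else 'blocked'}")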
We as lawyers are a proud lot, but (generally) have no issue with our advice being poked or questioned or stress tested. We tend to do it a lot ourselves before sending it out; searching for angles at which it can be attacked or alternative routes available. We regularly play our own ‘devil’s advocate’, and will sometimes ask a colleague to cast their eyes over advice for a second opinion. Therefore, we have no real issue with clients engaging a healthy dose of scepticism or considering alternative views when reviewing our advice.
However, a significant issue arises when that second opinion is sought from an LLM with a limited and often incorrect view of Isle of Man law, and is then relied on without further interrogation. In my experience, LLMs also tend to tell users what they want to hear rather than giving an unbiased and balanced view; as a result, users seeking a second opinion on advice they don't particularly like risk the LLM reinforcing that dislike rather than providing a detached and balanced overview.
Providing an advocate's advice to an LLM for review and critique, or asking it to draft a response to the same, can therefore result in a reply which incorrectly undermines or dismisses that advice. It can propose dangerous alternatives and take you away from well-considered, educated, and researched strategies. It can create unfortunate breakdowns in trust between client and advocate, where a client comes to think their advocate has produced poor work and an advocate comes to think their client no longer trusts their advice. And it can increase costs, where we are forced to spend time correcting the LLM's poor and hallucinated opinion.
There is no issue with the principles being tested, or even with the sensible use of A.I. to question and consider advice. It is in a client's best interests to reduce costs where possible, and in ours to provide that client with the best possible guidance; when used correctly, A.I. can be a great tool for both.
Our concern lies where A.I. interjects to increase time and cost, providing an unfounded layer of doubt or disagreement which cuts to the centre of the relationship between advocate and client. In that regard, we would always urge caution and consideration.
Personal Data
I have written about this topic previously, and I think it is worth repeating here as it is something people often forget when using A.I.
As highlighted above, data shared with an open LLM is not kept on your computer or even your network; it is sent to servers around the world and forms part of that LLM's 'brain'. Anything you upload – be it a piece of written advice, some disclosed evidence, or a draft document – can contain the personal data of clients, colleagues, customers, opponents, and even your own advocates.
If you hold that information as a data controller or processor and upload it to an LLM without consent or another lawful justification, there is a significant risk that doing so amounts to a data protection breach, which could expose your organisation to damaging fines. If anything is to be uploaded to an LLM, our view is that it should first be scrubbed of any and all personal data.
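For readers who want a sense of what 'scrubbing' might involve in practice, below is a minimal, illustrative Python sketch which redacts obvious identifiers (email addresses, phone numbers, and names you supply yourself). It is emphatically not a complete solution: simple pattern matching will miss a great deal of personal data, and nothing replaces a careful human review before anything leaves your organisation.

    import re

    # Illustrative only: naive pattern-based redaction. Regular expressions
    # will miss much personal data (addresses, case references, and so on),
    # so a human review remains essential.
    PATTERNS = {
        "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "[PHONE]": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    }

    def scrub(text: str, known_names: list[str]) -> str:
        """Replace emails, phone numbers, and supplied names with placeholders."""
        for placeholder, pattern in PATTERNS.items():
            text = pattern.sub(placeholder, text)
        for name in known_names:
            text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
        return text

    sample = "Please call John Smith on +44 1624 123456 or email j.smith@example.im."
    print(scrub(sample, known_names=["John Smith"]))
    # -> Please call [NAME] on [PHONE] or email [EMAIL].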
Many of the above examples are based on real-life circumstances we have already seen, and which have already begun to affect our practice. Some we hope never to see – such as the loss of legal privilege, or a significant fine for a data breach – but others we have started to see more and more of. A.I. is no doubt a useful tool which is only going to interact with our work more and more in future. It has its clear uses, but it also has its clear issues. One of the most important considerations to bear in mind is that it is not yet in a position of true 'intelligence'; it should be relied on as a guide rather than an advisor.
This article was not written by A.I., but by Lorcan O'Mahony, a director of M&P Legal. Always take specific advice on the facts of each case.
[1] Whilst there has been no judicial determination on this point that we are aware of, the "Artificial Intelligence (AI) Guidance for Judicial Office Holders", issued by the Courts and Tribunals Judiciary of England and Wales (31 October 2025), states that "any information that you input into a public AI chatbot should be seen as being published to all the world". Whilst not legally binding, it demonstrates how the judiciary may view such uploads.
[2] “Hallucination” is the generally accepted term for where A.I. creates, or ‘hallucinates’, incorrect or misleading information which it then passes off as fact.