By: Michel A. Nkansah
Abstract
Artificial intelligence is increasingly integrated into legal practice, reshaping how lawyers conduct research, draft documents, and deliver advice. In Ghana, however, the Legal Profession (Professional Conduct and Etiquette) Rules, 2020 (L.I. 2423) remain silent on the use of such technologies. This article argues that while the existing regulatory framework provides robust principles governing competence, diligence, confidentiality, and supervision, it does not adequately address the challenges posed by AI-assisted legal work. Drawing on comparative developments in the United States, the United Kingdom, Nigeria, and Kenya, the article demonstrates that jurisdictions are adopting varying approaches, ranging from formal ethics guidance to principles-based regulation and early-stage engagement. It shows that while some jurisdictions have begun to clarify how traditional professional duties apply in an AI-driven context, others remain at a formative stage. The article concludes that Ghana’s General Legal Council should adopt a targeted policy approach to bridge this gap by interpreting existing obligations in light of technological change.
Introduction
Ghana’s legal profession is governed by a structured and comprehensive ethical framework. The Legal Profession (Professional Conduct and Etiquette) Rules, 2020 (L.I. 2423) set out the standards expected of lawyers in the discharge of their duties.¹ These standards include obligations relating to competence, diligence, confidentiality, and supervision, and they form the foundation of professional accountability.
However, the regulatory framework reflects assumptions about the nature of legal work that are increasingly outdated with the advent of modern technologies. It presumes that legal services are performed by human actors operating within traditional professional structures. Artificial intelligence has begun to disrupt this model. Some lawyers now rely on AI tools for drafting, research, and analysis, yet there is no explicit guidance on how such tools should be used within the bounds of professional responsibility.
This article examines the tension between Ghana’s existing legal ethics framework and the realities of AI-assisted legal practice. It argues that while the underlying principles remain sound, their application requires clarification.
Competence and the Limits of AI Reliance
The duty of competence is central to professional regulation. Regulation 6 of L.I. 2423 requires a lawyer to provide representation with the legal knowledge, skill, thoroughness and preparation reasonably necessary for the matter.²
In practice, this obligation assumes that the lawyer, the human being, understands and can justify the materials and reasoning relied upon. Artificial intelligence challenges this assumption. Generative AI systems operate through probabilistic models and may produce outputs that are plausible but incorrect.
In response to emerging technologies, the American Bar Association has issued Formal Opinion 512, which emphasises that lawyers must maintain competence when using AI and must verify outputs before relying on them.³ This reflects a broader concern that AI tools may obscure errors rather than eliminate them.
The Ghanaian framework establishes the duty of competence but does not address how it should be applied where legal reasoning is partially delegated to a machine. This creates uncertainty as to whether reliance on AI-generated outputs, without adequate verification, meets the required standard. For now, the law attributes the final work product to the lawyer, regardless of the research assistants or tools relied upon in drafting or performing the legal work.
Diligence and the Risk of Automation Bias
The duty of diligence, set out in Regulation 10, requires a lawyer to act with reasonable diligence and promptness in representing a client.⁴ While AI tools may enhance efficiency, they also introduce the risk of automation bias: the tendency to accept machine-generated information as accurate without sufficient scrutiny, even where errors or inconsistencies are present.
This risk is particularly acute in legal practice, where accuracy is essential. If AI outputs are accepted without sufficient scrutiny, the duty of diligence may be compromised. Yet the current regulatory framework does not provide guidance on the extent to which lawyers must interrogate AI-generated content.
The absence of such guidance leaves practitioners to determine for themselves what constitutes reasonable diligence in an AI-assisted workflow.
Confidentiality in an AI-Enabled Environment
Confidentiality is a cornerstone of legal ethics. Regulation 19 provides that a lawyer shall not knowingly reveal information relating to the representation of a client except in limited circumstances.⁵
The use of AI tools introduces practical challenges in complying with this obligation. Many systems process data on external servers, and some may store or reuse user inputs. This creates a risk that client information could be exposed beyond the lawyer’s control.
Other jurisdictions have begun to address this issue directly. The American Bar Association, for example, has emphasised that lawyers must assess the risks associated with AI systems and take steps to ensure that confidentiality is preserved.⁶
In Ghana, however, there is no equivalent guidance. The duty is clear, but its application in the context of AI remains undefined. For now, a lawyer who fails to safeguard confidentiality may, where the breach is discovered, be held liable for professional negligence. The more difficult question is how such a breach would be detected, particularly when the regulator itself appears not to have a full grasp of these new tools.
Supervision, Delegation, and AI Systems
Regulations 45 and 46 impose duties on lawyers to supervise other lawyers and to ensure that delegated work is carried out in accordance with professional standards.⁷ These provisions are designed to ensure accountability within legal practice.
Artificial intelligence complicates this framework. AI systems increasingly perform tasks that resemble those of junior lawyers or paralegals, yet they are not recognised as subjects of supervision within the regulatory structure.
Comparative guidance suggests that lawyers must retain oversight and must not abdicate professional judgment to AI systems.⁸ However, Ghana’s framework does not specify how the duty of supervision extends to such tools.
This raises important questions about accountability, particularly where AI-generated outputs influence legal advice or submissions.
Comparative Developments
Although no jurisdiction has fully resolved the regulatory challenges posed by artificial intelligence in legal practice, several have taken steps to clarify how existing professional obligations apply in this context.
In the United States, the American Bar Association issued Formal Opinion 512 in 2024, which provides a structured approach to the use of generative AI. The opinion confirms that duties relating to competence, confidentiality, communication, and supervision remain fully applicable where AI tools are used, and that responsibility for all outputs continues to rest with the lawyer.⁹ The New York City Bar Association has similarly issued Formal Opinion 2024-5, offering detailed guidance on the ethical implications of AI-assisted legal work.¹⁰
In the United Kingdom, the Solicitors Regulation Authority has adopted a principles-based approach. Rather than introducing a standalone regulatory framework for artificial intelligence, it has clarified that existing professional obligations continue to apply where such technologies are used. The regulator has identified specific risks relating to confidentiality, accuracy, and accountability, while maintaining that responsibility remains with the lawyer.¹¹ The Law Society of England and Wales has supplemented this position with practical guidance encouraging informed and responsible use of generative AI systems.¹²
These developments are not limited to the United States and the United Kingdom; African jurisdictions such as Nigeria and Kenya have also begun to respond. In Nigeria, the Nigerian Bar Association has taken a more proactive step by issuing formal guidance on the use of artificial intelligence in legal practice. These guidelines address core issues including human oversight, confidentiality, verification of outputs, and client transparency, and provide practical direction to lawyers on the responsible use of AI. While not constituting binding rules, they represent one of the clearest attempts within Africa to translate professional obligations into applied guidance for emerging technologies.
By contrast, in Kenya, the legal profession is regulated by the Law Society of Kenya, which has engaged with artificial intelligence primarily through conferences, continuing professional development programmes, and professional discourse. However, these efforts remain exploratory, and no formal or binding guidance has been issued governing the use of AI in legal practice.
A consistent pattern emerges across these jurisdictions. Regulators are not replacing existing professional rules, but are increasingly seeking to interpret them in light of technological change. Where guidance has been issued, it serves to clarify the application of established duties rather than to create entirely new regulatory frameworks.
Ghana’s position reflects an earlier stage in this trajectory. While its existing framework provides a strong foundation, there has been no formal effort to articulate how professional obligations apply in the context of AI-assisted legal work. This places Ghana alongside jurisdictions where adoption is advancing ahead of regulatory clarification, but also highlights an opportunity to define a coherent approach at an early stage.
The Case for Regulatory Clarification in Ghana
Ghana’s position is not unusual, but it is incomplete. The existing framework provides a strong foundation, but it does not address the practical realities of AI-assisted legal work.
This creates uncertainty for practitioners and increases the risk of inconsistent application of professional standards. It also means that issues of compliance are likely to be addressed retrospectively, rather than through clear, prospective guidance.
A targeted policy response from the General Legal Council would address this gap. Such a response would not require a fundamental restructuring of the existing framework. Instead, it would involve clarifying how existing duties apply in the context of AI.
Conclusion
Artificial intelligence is now embedded in legal practice. Its use will continue to expand, regardless of whether regulatory frameworks evolve.
Ghana’s Legal Profession (Professional Conduct and Etiquette) Rules, 2020 provide a robust ethical foundation. However, they were not designed with AI in mind. As a result, there is a gap between the principles they articulate and the realities of modern legal work.
Addressing this gap requires clarification, not reinvention. By providing guidance on the application of existing duties in an AI-driven environment, Ghana has the opportunity to ensure that innovation is aligned with professional responsibility. One hopes it will not take another fifty years for AI guidelines or protocols to be written into law, as happened with Ghana's first etiquette rules for lawyers, which were passed in 1969 as L.I. 613 and only repealed and replaced in 2020.
Footnotes
1. Legal Profession (Professional Conduct and Etiquette) Rules, 2020 (L.I. 2423).
2. ibid reg 6.
3. American Bar Association, ‘Formal Opinion 512: Generative Artificial Intelligence Tools’ (2024) https://www.americanbar.org/news/abanews/aba-news-archives/2024/07/aba-issues-first-ethics-guidance-ai-tools/ accessed 26 March 2026.
4. Legal Profession (Professional Conduct and Etiquette) Rules, 2020 (L.I. 2423), reg 10.
5. ibid reg 19.
6. American Bar Association, ‘Generative AI and Confidentiality Obligations’ https://www.americanbar.org/groups/litigation/resources/newsletters/ethics-professionalism/generative-ai-lawyers-part-2-maintaining-confidentiality/ accessed 26 March 2026.
7. Legal Profession (Professional Conduct and Etiquette) Rules, 2020 (L.I. 2423), regs 45–46.
8. Katten Muchin Rosenman LLP, ‘ABA Weighs in on Generative AI Use in Legal Practice’ https://quickreads.ext.katten.com/post/102jf50/aba-weighs-in-on-generative-ai-use-in-legal-practice accessed 26 March 2026.
9. American Bar Association (n 3).
10. New York City Bar Association, ‘Formal Opinion 2024-5: Generative AI in the Practice of Law’ https://www.nycbar.org/reports/formal-opinion-2024-5-generative-ai-in-the-practice-of-law/ accessed 26 March 2026.
11. Solicitors Regulation Authority, ‘Artificial Intelligence and the Legal Market’ https://www.sra.org.uk/sra/research-publications/artificial-intelligence-legal-market/ accessed 26 March 2026.
12. Law Society of England and Wales, ‘Generative AI: The Essentials’ https://www.lawsociety.org.uk/topics/ai-and-lawtech/generative-ai-the-essentials accessed 26 March 2026.
Author
Michel A. Nkansah is a Senior Product Manager (Innovation and AI) at Dentons UK, Ireland and Middle East, where he leads the development of AI-enabled legal products across multiple jurisdictions. Over the past eight years, he has worked across legal technology startups in Africa and contributed to legal innovation initiatives in the United Kingdom, including LawtechUK, a programme backed by the UK Ministry of Justice, where he served as a coordinator for the Regulatory Response Unit. He has also been involved in programmes associated with Tech Nation and CodeBase. His work focuses on the development and governance of artificial intelligence in legal practice and regulated environments. He leads Legaltech Lounge, one of Africa’s largest law and technology communities, with over 3,000 lawyers.