Guarding Justice in the AI Era
A Dialogue on the Relevance of AI in the Dutch Public Sector
An Interview with Bart Schellekens, Data and Technology Lawyer, Dutch National Government
(Cover image: © Adobe Firefly)
In Brief
Bart Schellekens, a data and technology lawyer, explores the evolving role of artificial intelligence (AI) in the legal field. He addresses the misconceptions surrounding AI's capabilities, highlighting that while AI offers significant potential for automating routine legal tasks, its limitations and biases must be acknowledged. Schellekens discusses the use of AI in the Dutch public sector, emphasizing the need for transparency and accountability, particularly when dealing with sensitive data. He underscores the importance of balancing technological innovation with ethical standards.
How is AI transforming the legal world? What does it mean when judges and lawyers rely on algorithms to make decisions or draft legal documents? In this interview, Bart Schellekens, an experienced data and technology lawyer, discusses the opportunities and challenges that AI brings to the legal sector. From the first cases in the Netherlands to pressing ethical and legal concerns, this interview offers insights into the future of law and the impact of modern technologies on access to justice.
IM+io: What are your experiences with the use of AI in the legal field?
BS: AI has garnered significant attention and debate within the legal field, often leading to misconceptions about its capabilities. It sometimes seems as if all the world's problems will be solved by artificial intelligence, or that the end of humanity is imminent because computers are 'overtaking' us. Neither of these extremes is accurate, and both are often part of PR and marketing narratives. This trend is also evident in the legal field, where many lawyers tend to overestimate the capabilities of technology. Several cases, particularly in North America, have emerged where lawyers used ChatGPT to draft legal documents, only to be unpleasantly surprised when the text turned out to contain fabricated citations to non-existent case law. In the Netherlands, we recently had the first instance of a judge using ChatGPT as a source in a court decision, which was met with significant criticism from the legal community.
However, we are also witnessing the emergence of serious AI tools in the legal field. Especially for more routine tasks, significant automation is possible. For example, AI can be used to review or generate standard contracts, reducing the time and human error associated with these routine tasks. There are also concrete AI applications in the areas of due diligence and forensic investigations that offer substantial added value.
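To make the kind of routine check such tools automate more concrete, here is a minimal, hypothetical Python sketch that flags standard clauses missing from a contract. The clause list and patterns are invented for illustration; real contract-review products rely on trained language models rather than keyword matching.

    import re

    # Hypothetical checklist of clauses a standard contract is expected to
    # contain; production tools learn such patterns from annotated documents.
    REQUIRED_CLAUSES = {
        "governing law": r"governing\s+law",
        "limitation of liability": r"limitation\s+of\s+liability",
        "confidentiality": r"confidential",
        "termination": r"terminat(e|ion)",
    }

    def review_contract(text: str) -> list[str]:
        """Return the names of expected clauses that appear to be missing."""
        lowered = text.lower()
        return [name for name, pattern in REQUIRED_CLAUSES.items()
                if not re.search(pattern, lowered)]

    sample = ("This Agreement may be terminated by either party. "
              "All exchanged information shall remain confidential.")
    print(review_contract(sample))  # ['governing law', 'limitation of liability']

Even in this toy form, the underlying pattern mirrors real tools: encode what a well-formed document should contain, then let the machine do the tedious comparison.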
IM+io: Are there current applications of artificial intelligence in the Dutch public sector?
BS: Certainly. From predictive policing by the Dutch police to algorithms that detect ship movements in the harbor using camera footage, AI applications are becoming increasingly prevalent in the Dutch public sector. A virtual policy assistant is being developed to help answer parliamentary questions more quickly and accurately. Currently, AI in the public sector is predominantly utilized in domains such as knowledge management, document archiving, and the anonymization of sensitive data, all of which are essential for maintaining operational efficiency and data privacy. While the use of generative AI and large language models (LLMs) is still limited, it is noteworthy that several public entities are developing their own open language model, GPT-NL: similar to ChatGPT, but open, transparent, and designed with robust data protection measures in place.
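As an illustration of the anonymization task mentioned above, the following minimal Python sketch redacts e-mail addresses and phone numbers using regular expressions. The patterns are simplified for illustration; production anonymization pipelines typically combine such rules with named-entity-recognition models to also catch names and addresses.

    import re

    # Toy redaction rules; real pipelines add NER models to detect
    # names, addresses, and case-specific identifiers.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    }

    def anonymize(text: str) -> str:
        """Replace matches of each pattern with a labelled placeholder."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(anonymize("Contact j.jansen@example.nl or +31 6 1234 5678."))
    # Contact [EMAIL] or [PHONE].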
IM+io: Is the use of (legal) AI in a government setting different from that in the commercial sector?
BS: It certainly is. First of all, the primary focus of AI in a government setting is to serve the public interest rather than to generate profit. A public organization does not have customers who voluntarily try its product or service; instead, it has citizens who depend on it. Government use of AI, therefore, often requires higher levels of accountability and transparency due to public scrutiny and the need to maintain trust. As a result, the pace of AI innovation in the public sector may be slower, with a stronger emphasis on risk management and ethics.
The Dutch government has recently presented a vision on generative AI, emphasizing the need to harness its opportunities while addressing challenges. It aims to ensure that AI developments align with public values and socioeconomic security. Initiatives include establishing a public national AI test facility, launching public-private partnerships, and assessing AI applications for non-discrimination. The government is also investing in AI innovations to remain competitive internationally. Unfortunately, there are still cases where AI systems violate human rights. Consider the “Systeem Risico Indicatie” (“SyRI”), a legal instrument used by the Dutch government to detect various forms of fraud, including social benefits and tax fraud. The Hague District Court ruled that the legislation regulating the use of SyRI violates higher law, specifically the right to respect for private and family life, home, and correspondence. The court explained that under human rights law, the government bears a special responsibility when applying new technologies, including analyzing data using algorithms: it must strike the right balance between the benefits such technologies bring and the infringement of the right to a private life. It is no coincidence that in the new European AI legislation, the AI Act, most of the high-risk categories include public services such as justice, democratic processes, migration, and law enforcement.
IM+io: How can data protection and data security be ensured when using AI in the legal field?
BS: It's essential to distinguish between the development and deployment phases of AI. During development, large amounts of data are often required to train AI systems. It's crucial to assess whether this data contains personal information and, if so, under what conditions it can be used. In deployment, the focus shifts to ensuring that the data input is appropriate and that the output is of high quality.
The GDPR applies to AI as soon as personal data is processed. The principles of transparency, accountability, and data quality within the GDPR are highly relevant to AI. The forthcoming AI Act will build on these principles, focusing on safety and security. Beyond data protection, it's equally important to consider the broader societal implications, particularly in a government context.
IM+io: Which areas of law are most suitable for AI implementation, and why?
BS: To determine which legal areas are most suitable for AI, it's essential to analyze the specific tasks involved and the degree of automation possible. This depends on the complexity of the legal framework and the availability of data for training. For example, AI tools are increasingly used for document review and analysis, particularly in large law firms that have vast amounts of data and annotated documents available for training.
The consequences for involved parties also play a role. The greater the potential consequences, the less suitable AI may be for that area. For example, criminal law is less suitable for AI, whereas intellectual property law sees more experimentation. In IP law, AI mostly deals with objects rather than people, and the involved parties are often professionals who should be better equipped to assess the risks of using AI. It is important that those working with AI systems receive proper training and understand the systems' limitations.
IM+io: Can AI contribute to improving access to justice?
BS: The idea of an AI chatbot guiding individuals through legal processes is undoubtedly appealing. However, I'm cautious about relying on AI as a superficial solution without addressing deeper systemic issues. The law should be accessible to everyone, and if AI can simplify the legal system, I'm a strong advocate. But if individuals can only find out how procedures work or effectively exercise their rights through AI chatbots, we may be worse off, especially if those with the best AI systems (and the largest budgets) have an advantage.
That said, I'm not pessimistic. AI has the potential to bridge the access-to-justice gap by increasing efficiency and democratizing access to legal information. It can empower individuals to resolve legal issues independently or connect them with professionals more easily. Additionally, AI can alleviate the workload on courts and judges, ultimately improving access to justice by reducing costs and increasing the quality of decisions.
IM+io: What role does human judgment play in comparison to AI in assessing legal cases? Could AI be more objective in reviewing legal cases?
BS: Technology is only as objective as the data and algorithms behind it, and bias can enter as soon as humans are involved. One of the biggest challenges with AI is that it is often trained on biased datasets. For instance, generative AI like ChatGPT is trained largely on Western data, and the biases of our society are reflected in its responses.
However, AI also holds promise for identifying and mitigating human bias. Bias and fairness algorithms can be applied to legal decision-making processes to ensure more objective and consistent outcomes, which could lead to a fairer legal system.
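To make such a fairness check concrete, here is a minimal Python sketch computing the demographic parity difference, one common fairness metric: the gap in favourable-decision rates between two groups. The decision data is invented for illustration; dedicated libraries such as Fairlearn or AIF360 provide more rigorous implementations.

    # Demographic parity difference: the gap in favourable-outcome rates
    # between two groups; 0.0 indicates parity on this metric.
    def positive_rate(outcomes: list[int]) -> float:
        return sum(outcomes) / len(outcomes)

    def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
        return abs(positive_rate(group_a) - positive_rate(group_b))

    # 1 = favourable decision, 0 = unfavourable (hypothetical audit sample)
    decisions_group_a = [1, 1, 0, 1, 0]  # 60% favourable
    decisions_group_b = [1, 0, 0, 0, 0]  # 20% favourable

    print(round(demographic_parity_difference(decisions_group_a, decisions_group_b), 2))
    # 0.4

A large gap does not prove discrimination by itself, but it is exactly the kind of signal an audit of a decision-making process would flag for human review.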
IM+io: What types of data are needed to train AI models for legal decisions, and how can this data be collected?
BS: One key step is making all judicial decisions publicly available online. The Dutch judiciary has started such a program, although it faces challenges, such as ensuring privacy for the parties involved. We don't want a court ruling to follow someone permanently on Google. Moreover, analyzing this data could have unintended consequences, such as profiling judges. In France, there is even a law that bans the evaluation, analysis, or prediction of individual judicial behavior, punishable by up to five years in prison.
Even with rulings available, we still wouldn't be able to develop AI that can fully replicate legal decision-making without access to all documents from the procedures. This introduces even more challenges, including compliance with procedural requirements.
IM+io: What technological or legal barriers must be overcome to effectively implement AI in the legal field?
BS: The first barrier is ensuring that AI systems in the legal field meet legal requirements, particularly in terms of respecting human rights and ensuring quality. As AI systems increasingly take on legal tasks, the demands for transparency, impartiality, and fairness grow. These are not barriers per se but essential requirements set by European and national law.
Once AI systems meet these standards and can be deployed responsibly, legislation may need to adapt to create space for AI implementation. This could include regulatory sandboxes that allow for experimentation under strict oversight and safeguards.
IM+io: In your opinion, how will the roles of lawyers and judges change with the increasing use of AI? How could AI be used in the judicial system in the future?
BS: As I mentioned earlier, lawyers often overestimate technology, while computer scientists tend to underestimate the complexity of the law. I would love for our legal system to be simple enough to automate with current technology, but that is not the case. Nevertheless, computers will increasingly take over standard tasks from lawyers.
In the future, I envision a collaborative dynamic where AI systems provide preliminary analyses, which legal professionals then review and finalize. While AI will not replace lawyers, those who fail to adapt may be outpaced by colleagues who effectively leverage these technologies.