An international law firm, Hill Dickinson, has restricted general access to several artificial intelligence (AI) tools after detecting a “significant increase in usage” by its staff.
The firm, which employs over 1,000 people in the UK, issued an email warning employees that much of this use did not comply with its AI policy. Going forward, staff must request access to these tools so that usage stays within the firm’s guidelines.
Hill Dickinson’s chief technology officer revealed that the firm recorded over 32,000 hits to ChatGPT, a popular AI chatbot, during a seven-day period spanning January and February. Over the same period there were more than 3,000 hits to DeepSeek, a Chinese AI service recently banned from Australian government devices over security concerns, and nearly 50,000 hits to Grammarly, a writing assistance tool.
While it is unclear how many individual users were behind these figures, or how many were repeat visits, the firm said the surge in AI usage raised concerns about data security and compliance.
The email stated, “We have been monitoring usage of AI tools, particularly publicly available generative AI solutions, and have noticed a significant increase in usage of, and uploading of files to, such tools.”
UK Data Watchdog Calls for Responsible AI Adoption
The Information Commissioner’s Office (ICO), the UK’s data watchdog, responded to the situation by urging organizations not to discourage AI usage. A spokesperson said, “With AI offering people countless ways to work more efficiently and effectively, the answer cannot be for organisations to outlaw the use of AI and drive staff to use it under the radar. Instead, companies need to offer their staff AI tools that meet their organisational policies and data protection obligations.”
Hill Dickinson, which has offices across England and internationally, stated that it aims to “positively embrace the use of AI tools to enhance our capabilities while always ensuring safe and proper use by our people and for our clients.” The firm’s AI policy prohibits the uploading of client information and requires staff to verify the accuracy of AI-generated responses.
To maintain control, the firm implemented a request process for accessing AI tools. Some requests have already been approved, reflecting the firm’s commitment to balancing innovation with security.
The Legal Sector Grapples with AI Integration and Its Risks
Ian Jeffery, chief executive of the Law Society of England and Wales, highlighted the potential of AI to “improve the way we do things a great deal.” However, he stressed that AI tools “need human oversight” and pledged to support legal professionals and the public in navigating this “brave new digital world.”
A spokesperson from the Solicitors Regulation Authority (SRA) warned of the risks posed by a lack of digital skills across the UK legal sector. “This could present a risk for firms and consumers if legal practitioners do not fully understand the new technology that is implemented,” they said.
Survey Reveals Growing AI Adoption in UK Law Firms
Meanwhile, a September survey of 500 UK solicitors by legal software provider Clio found that 62% anticipated increased AI usage over the next 12 months. Law firms are already using AI for tasks such as drafting documents, reviewing contracts, and conducting legal research.
A spokesperson from the Department for Science, Innovation and Technology described AI as a “technological leap” that will “free workers from repetitive tasks and unlock more rewarding opportunities.” The government is committed to introducing legislation to safely harness AI’s benefits and plans to launch a public consultation to address the challenges of this fast-evolving technology.