Your next boss might be an AI algorithm

In his excellent blog post, “Artificial Intelligence and Law—Robots Replacing Lawyers?,” Brian Inkster entertainingly calls out the AI evangelists and the sensational forecasts predicting that robots will replace lawyers. He gathers context and research from all corners of the internet and beyond, and after weighing it up, he sides with MIT roboticist and sceptic Rodney Brooks, who calls such extreme claims "ludicrous."

Headline-grabbing controversy aside, the reality is that automation of legal and administrative work is increasing, and it is improving the quality, consistency and efficiency of traditional legal activities. For example, reviewing 20,000 files can now be done in a matter of hours or days rather than weeks or longer.

As a legal tech professional, I’ve focused on increasing process efficiency for a long time, and I find that people welcome the ability to use AI to automatically assign work or highlight specific files and tasks that need to be prioritised.

This initial, guarded-but-positive response could change if firms do not recognise and address the challenges and limitations created by increasing automation. Many are aware that automation and AI must be fair and unbiased, addressing the ethical concerns. More subtly, humans must also remain cognitively aware of the limitations of AI, and there is evidence that, at present, we are not.

Minding the limits of AI

In his recent book, “Artifictional Intelligence,” Harry Collins argues that AI will never understand human language well enough to become truly intelligent, but that our innate humanity compensates for these failings and projects onto the AI a level of intelligence that is not actually there.

In the future Collins describes, the danger is that we rely on and trust artificial intelligence so much that we surrender our thinking to it. We cease to apply critical thought and to challenge the algorithm, because we have fooled ourselves into believing that AI is intelligent enough to be infallible.

This type of surrender is exemplified by stories of people who have nearly driven off a cliff or into a lake because they disregarded their own logic, switched off their brains, blindly followed satnav instructions, and trusted that a machine knew more about the physical world than they did.

Automated processes still require people

The challenge, when automating a legal process, is to ensure that the automated tasks include clearly marked prompts for human beings to use their judgment. Human reviewers must consider the limitations of the algorithm making the decision, the likely range of quality in the data inputs and the potential consequences of an error.

Escalation to a human for review or approval must be built in, and that human must be trained and required to apply proper critique to the task before them. Humans can take into account information that even the smartest, most data-intensive AI running on the latest generation of supercomputers cannot: context, tacit knowledge, neologisms, sarcasm, motivations and common sense.

In our due diligence review processes at HighQ, we use numerical risk ratings. A perfect process would produce only "risk" or "no risk" outcomes, but we use a scale of ratings so that the process can also return a "don't know" response and prompt a request for assistance from a wider human audience.

In a manual review process, that is a learning opportunity for the reviewer: a chance to check the logic or reasoning and seek a second opinion. Within an automated process, it might require an escalation to a group of people, with a task assigned to “review decision” and a status of “under review.”
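To make that concrete, here is a minimal sketch in Python of the kind of three-way outcome and escalation described above. The thresholds, names and task fields are hypothetical stand-ins, not HighQ's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    NO_RISK = "no risk"
    RISK = "risk"
    DONT_KNOW = "don't know"


@dataclass
class ReviewTask:
    file_id: str
    name: str = "review decision"
    status: str = "under review"


# Hypothetical thresholds on a 0-100 risk rating. The gap between them
# is deliberate: borderline files are escalated to people, not decided.
NO_RISK_BELOW = 30
RISK_ABOVE = 70


def assess(file_id: str, risk_score: float, queue: list) -> Outcome:
    """Map a numerical risk rating to an outcome, escalating when unsure."""
    if risk_score < NO_RISK_BELOW:
        return Outcome.NO_RISK
    if risk_score > RISK_ABOVE:
        return Outcome.RISK
    # The algorithm declines to decide: raise a task for human reviewers.
    queue.append(ReviewTask(file_id))
    return Outcome.DONT_KNOW


review_queue = []
print(assess("file-0042", 55.0, review_queue))  # Outcome.DONT_KNOW
print(review_queue[0].status)                   # under review
```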

The balance between AI efficiency and human review

The difficulty is knowing when people should intervene without diminishing the efficiency benefits of automation. If every decision is reviewed manually, the gain in efficiency is minimal. Conversely, there is danger in believing in a foolproof algorithm that then makes fools of us all because it lacks common sense and context.
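Continuing the hypothetical sketch above, one way to make this trade-off visible is to measure how much work a given "don't know" band would send to people:

```python
def escalation_rate(scores: list, low: float, high: float) -> float:
    """Fraction of files whose risk ratings land in the human-review band."""
    unsure = sum(1 for s in scores if low <= s <= high)
    return unsure / len(scores)


ratings = [10, 25, 40, 50, 60, 75, 90, 95]
# A narrow band keeps automation high; a wide band sends more work to people.
print(escalation_rate(ratings, 45, 55))  # 0.125 -> one file in eight escalates
print(escalation_rate(ratings, 30, 70))  # 0.375 -> three files in eight escalate
```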

The truth is that our next boss, the entity that allocates us tasks, reviews our output and shapes much of our daily working life, might well be an algorithm in the near future. At the Victoria and Albert Museum, the exhibition “The Future Starts Here” cites Uber Eats as a company that assigns work to couriers by algorithm, as does Uber's ride-hailing business. These gig workers already work for an algorithm, and so might you, if you work in document production or transcription services.

As we move toward wider adoption of AI, we must take steps to ensure that we do not over-automate and remove the human entirely from the process. Otherwise, we will have surrendered our critical thought to the robots, and then we really will be in trouble!

Andy Neill

Senior Product Manager at HighQ
Andy has over twelve years of experience at a range of global law firms, including Norton Rose Fulbright, Herbert Smith Freehills and Allen & Overy, and six years as a business consultant at Deloitte & Touche and Arthur Andersen. Andy leads the design of the search, legal AI, data analytics and visualisation features of the HighQ platform, ensuring HighQ's clients have access to the latest business intelligence. He holds two Master's degrees, in Engineering and Computing, and is a certified MSP programme manager.