Generative AI virtual assistants causing problems

With National Technology Day here, there is some news causing quite a stir in the U.S. government and in the AI world at large.

Over a three-month project, Alaska's court system used an AI assistant called AVA to accelerate the processing of family law court documents. According to Aubrie Souza, the chatbot was underpowered and spewed incorrect information, setting court processing back by another 12 months. More time was then spent trying to correct the design, which wound up causing even more issues.

Severe mishaps like this one, in legal proceedings no less, may give more credibility to those who doubt AI.

Technology leaders know that if you're using virtual assistants (especially in legal matters), you have to avoid causing serious damage to people's legal records.

In our previous posts we mentioned that businesses turn to AI and generative tools to optimize their operations - data analysis, finance, and marketing - with human oversight, of course. They've been zeroing in on AI's potential to surpass human capabilities. That adoption is set to expand in 2026, but it will also likely leave even more workers unemployed. With glitches like this one, though, perhaps there is some hope that we will still need humans to keep machines in line and head off further issues (like errors in legal cases).

But according to recent survey results from Deloitte, less than 6% of local governments in the U.S. are pursuing generative AI tools for optimization, citing the greater risk of liabilities that could hurt families, estates, and reputations.

Some local governments throughout the U.S. have adopted AI chatbot assistants to provide straightforward answers to citizens' basic questions. Others use AI to translate outgoing emails into different languages, with great results and feedback. Those hesitant to adopt it, though, point to laws with specific rules that models like AVA can override or stray from, despite their original machine-learning setup. This goes back to governments' concerns about AI's inability to replicate human judgment in fields like court law. That lack of authenticity is already a top objection among AI skeptics, especially now that custom virtual assistants are causing errors in legal proceedings.

Do you believe AI should be involved in public services such as legal proceedings? Or, for now, should it stick to providing simple services?

Stay tuned for more blog posts on the impact of AI and its wider ramifications.

Sources:

Alaska's court system built an AI chatbot. It didn't go smoothly.

Scaling gen AI in governments | Deloitte Insights 

Tips on upgrading your technical skills from SpaceBound Solutions!