Three Things to Consider for Responsible AI in Government


The use of AI and analytics is crucial for government agencies, which are among the largest owners of data. The benefits of AI are often discussed: operational efficiencies, opportunities for intelligent automation, and the ability to gain deeper insights from massive amounts of data.

With the intense interest in and proliferation of AI, governance of machine intelligence is getting more attention, and appropriately so. Absent legislation, organizations must anticipate and adopt voluntary practices to minimize risk and avoid undesirable outcomes.

Here are three areas of focus recommended as part of a comprehensive responsible AI strategy.

Plan for Risk

Responsible AI solutions start with planning. Key questions should be asked, and then answered, during the initiation of an AI project.

As with the DevSecOps mindset, where teams “shift left” to include security planning and execution from the start, the same is recommended for risk planning in AI projects: identify potential challenges and risks early, and commit to maintaining a plan to assess and address them.
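
To make that concrete, the sketch below treats the risk plan as a gated artifact, in the spirit of a DevSecOps pipeline gate: a hypothetical Python checklist that flags a project as not ready to proceed while risk questions remain unanswered. The structure, owners, and example questions are illustrative assumptions, not an established framework.

```python
# A minimal sketch of "shifting left" on AI risk planning: a checklist that
# gates an AI project at initiation. The RiskItem/AIRiskPlan structures and
# the example questions are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class RiskItem:
    question: str          # the risk question to ask at project initiation
    owner: str             # who is accountable for answering it
    mitigation: str = ""   # planned response; empty means unaddressed

@dataclass
class AIRiskPlan:
    project: str
    items: list[RiskItem] = field(default_factory=list)

    def unaddressed(self) -> list[RiskItem]:
        """Return risk items that still lack a mitigation plan."""
        return [item for item in self.items if not item.mitigation]

    def ready_to_proceed(self) -> bool:
        """Gate: the project proceeds only when every risk has a plan."""
        return not self.unaddressed()

plan = AIRiskPlan(
    project="benefits-eligibility-model",  # hypothetical project name
    items=[
        RiskItem("Is the training data representative of the population served?", "data steward"),
        RiskItem("Who reviews the model when outputs look wrong?", "program office"),
        RiskItem("How will affected individuals contest a decision?", "legal/policy"),
    ],
)

if not plan.ready_to_proceed():
    for item in plan.unaddressed():
        print(f"UNADDRESSED ({item.owner}): {item.question}")
```

Because the plan is a living artifact rather than a one-time form, the same gate can be re-run at each project milestone to keep risk assessment from drifting out of date.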

Prioritize Explainability

AI models are complex, and transparency into how machine intelligence makes decisions and takes action is increasingly critical. AI models now help us drive more safely through real-time alerts, or in some cases drive for us. AI is being incorporated into medical research and treatment plans. That complexity can be difficult to decipher when a system does not produce the expected outcomes. What went wrong? Why was a decision made or an action taken?

Explainable AI (XAI) advocates for fully transparent AI solutions, meaning that code and workflows can be interpreted and understood without advanced technical knowledge. This often requires additional steps in the design and build of the solution to ensure explainability is achieved and maintained.

Think of explainability as a two-step process: first, interpretability, the ability to interpret an AI model; and second, explainability, the ability to explain it in a way humans can comprehend. Explainable models provide transparency, so organizations stay accountable to users or customers and build trust over time. A black-box solution that cannot be interpreted when things go awry is a high-risk investment, potentially damaging and unexpectedly expensive.
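
As a minimal sketch of that two-step process, the example below trains a model with scikit-learn, uses permutation importance for the interpretability step, and then renders the result as a plain-language explanation for the explainability step. The dataset and the reporting format are assumptions chosen for illustration, not a prescribed method.

```python
# Step 1 (interpretability): measure which inputs drive the model's predictions.
# Step 2 (explainability): report that in terms a non-specialist can follow.
# The dataset here is a stand-in; a real agency model would use its own data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does accuracy drop when each feature
# is randomly shuffled? A large drop means the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Translate the numbers into a plain-language summary a human can act on.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:3]:
    print(f"The model relies heavily on '{name}' "
          f"(accuracy falls by about {score:.1%} when it is randomized).")
```

Techniques like this do not make a complex model simple, but they give reviewers a defensible answer to "why was this decision made?" when outcomes are questioned.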

Enable Humans in the Loop

The Toyota production line's Andon Cord is famous for its ability to stop the line in the pursuit of quality: a physical cord pulled to halt all work when a defect was suspected, enabling the issue to be assessed and resolved before it could proliferate further.

What is the equivalent in the building and use of potentially high-stakes automated AI solutions? A human in the loop: a person who can oversee the system and, when needed, override its outputs. This can include data labeling by humans to support model training, human involvement in validating model results to support model “learning,” and monitoring and alerts that require human review when specific or unexpected conditions are detected.
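
A minimal sketch of that last safeguard, assuming a confidence-threshold trigger: predictions below the threshold are held in a review queue for a person to inspect and override rather than acted on automatically. The threshold value, the Decision structure, and the queue are illustrative assumptions.

```python
# A software "Andon cord": low-confidence or unexpected predictions are
# routed to a human review queue instead of triggering automated action.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.85  # assumed confidence cutoff; tune per use case

@dataclass
class Decision:
    case_id: str
    label: str
    confidence: float

human_review_queue: list[Decision] = []

def route(decision: Decision) -> str:
    """Auto-approve confident results; pull the cord on everything else."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return f"{decision.case_id}: automated action '{decision.label}'"
    human_review_queue.append(decision)  # hold this case until a person reviews it
    return f"{decision.case_id}: held for human review (confidence {decision.confidence:.0%})"

for d in [Decision("A-101", "approve", 0.97),
          Decision("A-102", "deny", 0.62)]:
    print(route(d))

print(f"{len(human_review_queue)} case(s) awaiting human review")
```

The design choice mirrors the Andon Cord: the automated line keeps moving for routine cases, but any suspect case stops and waits for a person before the error can propagate.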

The combination of human and machine intelligence is a powerful one that expands possibilities while enacting safeguards.

By implementing governance guidelines and adopting approaches that specifically address the challenges and risks of AI solutions, Federal organizations can act proactively to protect the interests of the public and Federal employees.
