President Biden signed an executive order (EO) on artificial intelligence that he described as a significant milestone, drawing mixed assessments from experts in the rapidly evolving field.
“One key area the Biden AI [executive order] is focused on includes the provision of ‘testing data’ for review by the federal government. If this provision allows the federal government a way to examine the ‘black box’ algorithms that could lead to a biased AI algorithm, it could be helpful,” Christopher Alexander, chief analytics officer of Pioneer Development Group, told Fox News Digital.
“Since core algorithms are proprietary, there really is no other way to provide oversight and commercial protections,” Alexander added. “At the same time, this needs to be a bipartisan, technocratic effort that checks political ideology at the door or this will likely make the threat of AI worse rather than mitigate it.”
Alexander’s remarks came after Biden unveiled the highly anticipated executive order introducing new AI regulations, which the president lauded as the “most extensive measures ever taken to safeguard Americans from potential AI system hazards.”
The order requires AI developers to share safety test results with the government, sets standards for monitoring AI safety, and creates safeguards for Americans’ privacy amid the technology’s rapid expansion.
“AI is all around us,” Biden said before signing the order, according to a report from The Associated Press. “To realize the promise of AI and avoid the risk, we need to govern this technology.”
Jon Schweppe, policy director at the American Principles Project, told Fox News Digital that the concerns prompting the AI executive order are “justified.” He praised certain aspects of Biden’s order but argued that it also reflects “misplaced priorities.”
“There’s a role for direct government oversight over AI, especially when it comes to scientific research and homeland security,” Schweppe said. “But ultimately we don’t need government bureaucrats micromanaging all facets of the issue. Certainly we shouldn’t want a Bureau of Artificial Intelligence running around conducting investigations into whether a company’s AI algorithm is adequately ‘woke.’”
Schweppe argued that “private oversight” should play a role in managing the expanding technology and said AI developers should face “substantial liability.”
“AI companies and their creators should be held liable for everything their AI does, and Congress should create a private right of action giving citizens their day in court when AI harms them in a material way,” Schweppe said. “This fear of liability would lead to self-correction in the marketplace — we wouldn’t need government-approved authentication badges because private companies would already be going out of their way to protect themselves from being sued.”
The order was designed to build on voluntary commitments the president brokered with major technology companies earlier this year, under which the firms agreed to work with the government to share AI safety data.