By Eric Velte, Chief Technology Officer, ASRC Federal
As published on June 12, 2024.
Eric Velte is ASRC Federal's chief technology officer with 25 years of experience in computer science and technical solutions.
As the federal government continues to explore the seemingly unlimited potential of generative AI, or GenAI, the question of how to regulate the technology has come to overshadow the discussion of its many benefits. The public is skeptical of AI, and for good reason: much of the attention on the technology questions its safety and whether it is an ethical tool in certain scenarios.
With the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) convening the U.S. AI Safety Institute Consortium and the U.S. Army exploring how to promote responsible AI adoption, government leaders continue to prioritize identifying best practices for AI, and limiting adversarial threats and algorithmic biases through responsible deployment along the way.
As more agencies adopt the technology, the government must establish a synergistic relationship between personnel and AI, and quickly. This necessitates a new role, the "AI operator," who can help agencies strike a balance between adopting the tech when and where the agency can benefit from it and ensuring the proper checks and balances exist between the tech and personnel. AI operators would bring a deep understanding of the technology for which they're responsible, helping their organizations adhere to proper risk controls to build and maintain trust with employees and citizens.
An AI operator’s role is especially critical when it comes to GenAI. By providing the necessary oversight and comprehensive analysis of current GenAI pilot programs, this role can show decision-makers the tangible benefits of this technology. AI operators can illuminate previously unseen advantages and applications of GenAI, expanding its potential beyond the known use cases.
As GenAI's capabilities continue to mature, equipping organizations with dedicated personnel deeply versed in the nuances of the technology will be vital to safeguarding against security risks, as the DOD's own recent announcements underscore. Ideal AI operators will possess extensive insight into AI's security challenges, potential biases and ethical implications.
Although GenAI has the potential to significantly optimize agency operations, the federal government must first prepare for its security obstacles before it can reap the benefits. With their sole focus on optimizing the use of GenAI, AI operators would be responsible for guiding the development, training and testing of AI models to intimately understand their weaknesses, such as data privacy vulnerabilities.
GenAI systems leverage extensive datasets, and federal agencies often face heightened challenges, as they possess sensitive data and information adversaries target. AI operators would devise tailored cybersecurity measures to combat the data challenges GenAI poses and enhance system resilience to mitigate cyberattacks. Additionally, compliance is always a part of the conversation when it comes to emerging technologies. AI operators should be experts in AI regulations and standards to ensure systems comply with federal and industry requirements.
Possibly the most impactful role AI operators can play is as an intermediary between the personnel using the technology in day-to-day operations and the leaders responsible for investing in it. To garner buy-in from leadership, AI operators should not only demonstrate the value of AI adoption but also allay many of the concerns associated with AI. By highlighting GenAI's strengths, AI operators can make the case to federal leadership and the general public for the advantages of agency GenAI adoption, such as enhanced decision-making, cost savings and improved efficiency.
Dedicating personnel solely focused on AI could transform how we approach implementation. AI operators bring human perspectives to GenAI by acting as a safeguard during the training and development of AI systems. Infusing GenAI with the human element helps mitigate algorithmic biases and security challenges.
Overall, investing in AI operators could yield substantial returns. Dedicated AI personnel engaged at every stage of the life cycle mitigate security and ethical risks, help agencies keep pace with the technology's latest developments, like GenAI, and ultimately alter the federal landscape by bolstering civilian and leadership buy-in and optimizing use cases.
