Sunday, September 8, 2024


CIA AI Director claims the agency is taking a ‘thoughtful approach’


TOPCLAPS interviewed Lakshmi Raman, the CIA’s director of AI, as part of its ongoing Women in AI series. We discussed her path to the position, the CIA’s use of AI, and the balance between embracing new tech and deploying it responsibly.

Career Path to CIA AI Director

Raman has been in intelligence for a long time. She joined the CIA in 2002 as a software developer after earning her bachelor’s degree from the University of Illinois Urbana-Champaign and her master’s degree in computer science from the University of Chicago. Several years later, she moved into management at the agency, eventually leading its overall enterprise data science efforts.

Women Role Models in Intelligence at the CIA

Given that women are historically underrepresented among high-ranking officials in intelligence — which is putting it lightly — Raman said she was lucky to have women role models and predecessors within reach at the CIA. “I still have people I can look to, who I can ask advice from, who I can approach about what the next level of leadership looks like,” she said. “I think there are things every woman has to navigate as they navigate their career.”

AI as an Intelligence Tool

Orchestrating AI Activities

As director, Raman orchestrates, integrates, and drives AI activities across the CIA. “We think AI is here to support our mission,” she said. “It’s humans and machines together that are at the forefront of our use of AI.”

History of AI at the CIA

AI isn’t new to the nation’s top intelligence agency. According to Raman, widespread interest in applications of data science and artificial intelligence took hold across the intelligence community around 2000. Natural language processing (analyzing text), computer vision (analyzing images), and video analytics are some of the key areas where the agency has invested.

The agency tries to stay on top of newer trends, such as generative AI, with a roadmap informed by industry and academia.

Generative AI and CIA Content Triage

“When we think about the huge amounts of data that we have to consume within the agency, content triage is an area where generative AI can make a difference,” Raman said. “We’re looking at things like search and discovery aid, ideation aid, which is helping us to generate counterarguments to help counter analytic bias we might have.”

Urgency in AI Deployment

There’s a sense of urgency within the U.S. intelligence community to deploy any tools that could help the CIA combat growing geopolitical tensions around the world — from threats of terror motivated by the war in Gaza to disinformation campaigns mounted by foreign actors (e.g., China, Russia). Last year, the Special Competitive Studies Project — a high-powered advisory group focused on AI in national security — set a two-year timeline for domestic intelligence services to get beyond experimentation and limited pilot projects to adopt generative AI at scale.

Osiris: The CIA’s Generative AI Tool

The CIA has developed a generative AI tool called Osiris. Similar to OpenAI’s ChatGPT but designed for the intelligence community, it summarizes data — for now, only unclassified public and commercially available data — and lets users ask follow-up questions in plain English. Raman would not say whether the agency built the tool in-house or used technology from third-party vendors, but she did mention that the CIA has established partnerships with well-known companies.

Collaborating With Industry Leaders

“We do leverage commercial services,” Raman acknowledged, adding that AI also helps with translation tasks and with alerting off-duty analysts to potentially important events. “We need to be able to work closely with private industry to help us not only provide the larger services and solutions you’ve heard of but also provide even more niche services from non-traditional vendors that you might not already think of.”

Doubts And Worries

Secret Record Warehouse

Concerns over the CIA’s use of AI are both valid and numerous. In February 2022, Senators Ron Wyden (D-OR) and Martin Heinrich (D-NM) disclosed in a public letter that the CIA, despite being generally prohibited from collecting information on Americans and U.S. businesses, maintains an undisclosed data repository that includes information on U.S. citizens. And last year, an annual report from the Office of the Director of National Intelligence revealed that U.S. intelligence agencies, including the CIA, routinely purchase Americans’ personal information from data brokers such as LexisNexis and Sayari Analytics with little oversight.

Potential Misuse of Artificial Intelligence by the CIA

If the CIA were ever to turn AI loose on those records, it is a safe bet that many Americans would object. Doing so would constitute a flagrant breach of civil liberties, no matter the justification offered at the time.

Ethical Use and Compliance of AI Systems

Raman stressed that the CIA not only abides by all U.S. laws but also follows what she called “ethical guidelines,” using artificial intelligence in a way that mitigates bias. “I would call it a thoughtful approach [to AI],” she said. “I would say that our approach is one where we want our users to understand as much as possible about the AI system they’re using.”

According to Raman, no matter what an AI system is intended to do, its designers should make clear the areas where it may fall short. A recent study by North Carolina State University researchers found that police departments were employing AI tools such as facial recognition and gunshot detection algorithms without understanding the technology or its limitations.

AI Abuse

In one egregious example, perhaps born of ignorance, Raman said the NYPD once used photos of celebrities, distorted images, and sketches to generate facial recognition matches on suspects in cases where surveillance stills yielded no results.

Labeling and Explanations

“The users should clearly understand any AI-generated output, and that means labeling AI-generated content and providing clear explanations of how AI systems work,” Raman said. “Everything we do in the agency, we are adhering to our legal requirements, and we are ensuring that our users and our partners and our stakeholders are aware of all of the relevant laws, regulations (and) guidelines governing the use of … our AI systems, and we are complying with all these rules.”
