INTA 2024: CIOs encourage companies to find their ‘data language’

23 May 2024
“Artificial intelligence continues to be a hot topic, and there is no AI without the massive infusions of data into the models that drive them,” said Janine Bowen, a technology lawyer and partner at Baker Hostetler in Atlanta, and moderator of the INTA 2024 session, “A View from the CIO’s Office: The Use of AI, Data, and Intellectual Property to Accelerate Business.” Bowen noted that the data may come from disparate sources, some with clear IP underpinnings, privacy obligations, ownership obligations and commercialization opportunities.

The panel of CIOs spoke first about ethical issues that come with using AI.

“When you think about where we’re at with artificial intelligence, this dream that started in the 1950s is quickly becoming an actual, lived reality,” said DeWayne Griffin, chief digital and information officer at international staffing company Insight Global in Atlanta. “So much so that machines are now working with machines to do things that we as humans would typically do with machines, or even without them.”

Today, he said, organizations often grapple with managing not only their responsibility for how they build and use AI, but also how to ensure it captures enough of what it means to be human and what humans should remain involved with.

“Who gets to decide that? Does the private sector and companies get to decide that? Does the public sector get to decide that? Do we democratically get to decide that? The intersection of those really requires a conversation for us to figure it out, because the machines are here.”

Susan Shook, senior vice president for digital and data assets at healthcare distribution company McKesson Corporation in Atlanta, agreed that AI technology is moving quickly.

“I’ve been doing technology law since 2008. In the last six months, the growth of AI is incredible, how much better it’s getting month after month,” she said.

The problem is, she said, much of what AI knows, it knows from the internet. “Much of which is not good content. It can be biased content. It can be old, stale content. It can be factually untrue. I’ve heard other people talk about AI, especially the large language models, as a very eager intern. ‘I want to prove to you what I can do, so I’m going to take all the knowledge you’ve given me and spit back an answer based on what you’ve asked me.’ But if the pool of data it’s pulling from is biased or not factually accurate, that’s where you get into ethical problems.”

Griffin agreed: “The AI we’re using now is scraping the internet. I could write a blog that could be in your next search and you will take it as truth because it was presented to you that way.”

Figuring out the data, and how it is used by different silos within an organization, can be a vexing problem.

The mission of the Atlanta Community Food Bank (ACFB), for instance, is to get more food into the communities they serve – equitably, frequently and conveniently. “When we think about that equitable piece, it does translate back to us and the data we have our hands on,” said Sharay Erskine, chief information officer at the ACFB. “What sort of data do we have that we can use to solve some of those equitable disparities to the people who do need the food in our communities?”

Erskine said the food bank is attempting to “solve the data part first” before choosing the AIs to use to fulfill its mission.

The panel all noted that understanding the way data is parsed and used is an essential first step before using AI that relies on your data. “We have so many disparate systems that the data is seen as untrustworthy. We’re trying to take a look at how we’re synthesizing the data from all the different systems we have coming in. One of the biggest issues is that [different parts of the organization] defined something as simple as what a meal is completely differently. The calculation was different from system to system. We’re trying to solve for the data components of AI, then bringing that in on top,” she said.

“Data has its own language, and just like in a country, data can have many dialects,” Griffin said. “Someone might take the exact same field of data – and that data has meaning to the sales team that is different from the finance team that is different from the technology team, so what you have is all these sources of truth that everybody believes in, but it’s not common. What makes data hard is trying to find the actual language that personifies the business.”

Griffin noted that problems inherent in data languages aren’t unique to companies. “We don’t even have the language of data yet. Companies don’t have it. As a society, we don’t have it. You have to figure out a way to find your data language that then allows you to have effective, meaningful outcomes. If you haven’t started that yet in your company, your AI journey is so far behind, and it’s fraught with risks,” he said.

There is also a concern among CIOs of, as Erskine put it, employees “self-solving” their AI needs.

“Our employees hear ‘AI’ and everyone gets excited. ‘I want Grammarly, I want Canva, I want all that,’” she said. “Next thing you know, your applications log is from here to the main entrance. We’re taking a look at that issue, because we want to make that as ethical and equitable as possible. It comes down to speed. If we can get more food into the hands of people who need it faster by buying it, we’ll buy it.”

- Gregory Glass, reporting from Atlanta