How Do We Live and Thrive in a World with AI?

IU experts are keeping a watchful eye on artificial intelligence and its relationship with humankind.

Artificial intelligence could write this article and you might not even notice.

AI is getting so good—some might say scary good—that the line between humans and technology seems blurrier than ever before.

When the company OpenAI launched ChatGPT in November 2022, it signaled a seismic shift in AI capability. Pose a question or give a prompt like “summarize War and Peace in 20 words or less” or “write a haiku about Indiana University,” and ChatGPT generates a text response accordingly.

ChatGPT may have captured the public’s attention initially, but now we’re in the midst of a generative AI boom. It’s hard to even keep track of all the available AI tools that can create text, images, audio, and video content. In fact, things are moving so fast that some of the information in this article could be obsolete by the time you read it.

According to a 2022 Pew Research Center report, 45 percent of Americans feel equally concerned and excited about the increased use of AI in daily life. Photo by Liz Kaye, IUPUI.

Of course, artificial intelligence isn’t anything new; it’s been part of our everyday lives for decades, and its scope is massive. AI checks our grammar, suggests movies or television shows we might like, detects fraudulent use of our credit cards, and alerts us when our car may be headed for a collision. But AI also compromises our privacy, generates and spreads false information, perpetuates biases, and facilitates plagiarism.

If any of this concerns you, you’re not alone. According to a 2022 Pew Research Center report, 45 percent of Americans feel equally concerned and excited about the increased use of AI in daily life, while 37 percent feel more concerned than excited.

Fortunately, scores of experts at Indiana University are keeping a watchful eye on the future of AI and our relationship with it.

“The reason all of us are here”

As we wrestle with our feelings about AI, a prevailing question emerges: How can we maximize the potential usefulness of AI while minimizing the harmful consequences? That question is the driving force behind the Luddy Artificial Intelligence Center (LAIC) at IU, established in 2022. The center partners with IU faculty from various research areas to investigate the potential uses and implications of AI.

“AI is a big challenge and opportunity. It’s not just computer science—it touches all across the university and across society,” said the center’s director, David Crandall, PhD, a professor of computer science in the Luddy School of Informatics, Computing, and Engineering.

“When you have something that comes along like that, a university is the perfect institution for understanding these opportunities in a way that isn’t biased, because our goal is to serve the public good.”

IU experts affiliated with the center are exploring applications of AI related to cancer and immunology, developmental psychology, data security, language, music, learning, human vision, and other disciplines.

One of the center’s current projects is the Trusted Artificial Intelligence Initiative, a three-year collaboration with other universities and the Naval Surface Warfare Center, Crane Division. The initiative’s main goals are to: 1) create a pipeline of students who can work with AI, and 2) develop AI solutions that can be trusted at the highest levels.

“Our university partners take our hard problems and turn them into projects to both train students and develop technical solutions,” said Kara Perry, the education and workforce development co-lead at NSWC Crane.

IU experts are also fighting the spread of misinformation that has been amplified by AI. Developed through IU’s Observatory on Social Media, the Top FIBers dashboard identifies the top 10 disinformation spreaders on Facebook and X, formerly known as Twitter.

Ultimately, with projects like these, IU is trying to figure out how systems and people can work together to do better than either one can do individually, Crandall said.

“The goal of what we’re doing at the university—the reason all of us are here: We want to make people’s lives better.”

“I’d rather be in control”

Caleb Weintraub, painter and associate professor in the Eskenazi School of Art, Architecture and Design, considers AI’s role in artistic practice. Photo by Chris Meyer, Indiana University.

Caleb Weintraub, a painter and associate professor in the Eskenazi School of Art, Architecture and Design, is an affiliate faculty member with the LAIC. His research considers how AI can augment an artist’s practice.

During the COVID-19 pandemic in 2020—when he was limited to working from home—Weintraub began using generative AI as part of his artistic process. He thought of AI as a collaborator, using a variety of AI tools to brainstorm ideas, generate otherworldly imagery for reference, evaluate and interpret his work, and more. This was all before ChatGPT and its ilk came on the scene, bringing with them increased concern among creative professionals that AI threatens their jobs and livelihoods.

“How might we incorporate these tools into our practice without it replacing us?” was the question at the heart of AI in the Studio, a class Weintraub co-taught in spring 2023.

“The assignments in the course focused on using these technologies to cultivate individual artistic visions in tandem with material approaches, including photography, image editing, painting, drawing, projection, fabrication, and installation,” said Weintraub.

“No matter our thoughts about AI, it’s inevitable,” said Weintraub. “So, I’d rather be in control of how I engage with it than have it kind of run me over at the last second.”

“A no-brainer”

IU School of Education Professor Anne Ottenbreit-Leftwich, PhD, foresees a future where most careers use AI in some way.

Ottenbreit-Leftwich, who is the Barbara B. Jacobs Chair in Education and Technology, designs K-12 computer science courses and investigates best practices for incorporating computer science lessons in the classroom. For example, as part of a life sciences lesson, students might compare how computers versus animals (like snakes, dogs, and bats) gather information to understand the world. In a language arts lesson, students might practice using AI to develop an outline.

What matters most, Ottenbreit-Leftwich said, is finding an application of the technology that students care about in order to pique their interest and draw them into the topic at hand. She also says it’s imperative to prepare students to navigate all the repercussions of living in an increasingly AI-infused world.


“AI is going to be used, right? That’s a no-brainer. Just like technology is used in almost every career now, I think AI will also be used in every career; so we’ve got to empower our students to understand these pieces,” Ottenbreit-Leftwich explained.

Plus, she continued, every child is someone who could go on to solve one of society’s big problems—“and the more people we prepare to solve the big problems, the better off we are.”

This article was originally published in the 2023 issue of Imagine magazine.

Written By
Andrea Alumbaugh
A native Hoosier, Andrea Alumbaugh is a graduate of IU (BAJ’08) and a senior writer at the IU Foundation.