Experts trained and employed by IU are exploring the many aspects and influences of artificial intelligence. There's the alum and legal scholar discussing AI systems as they relate to privacy, safety, and liability; the IU professor working to teach speech-recognition systems the nuances of human language; and, finally, a pair of IU experts—a computer historian and a radiologist—giving their views on AI's impact on human employment.
Artificial Intelligence and the Law
For those in the legal field, artificial intelligence systems bring with them a host of new legal issues that will no doubt be explored through the courts in years to come, says Drew Simshaw, JD’12, a legal method/communication fellow at Elon University School of Law. For Simshaw, who coauthored a chapter on “Cybersecurity and the Legal Profession” for the National Cybersecurity Institute’s Cybersecurity in Our Digital Lives, legal issues surrounding new technologies are a specialty.
The legal issues stemming from artificial intelligence largely fall into three buckets: privacy, safety, and liability. Questions of privacy might stem from the healthcare field, where medical information gathered by technology systems would not be covered by existing privacy protections, whereas questions of apportioning blame will likely arise when something goes wrong with a new technology, such as an autonomous vehicle or robotic surgical procedure.
“There’s an increasingly large ecosystem with new players and new entities working together to make these technologies a reality, and it is hard to allocate risk when we don’t know what the risks are,” he says.
Right now, there are few clear answers to questions of liability when it comes to artificially intelligent systems, Simshaw says. This is a major reason we are seeing a “slow rollout” of some new forms of technology, he says.
“As recently as a few years ago, it was predicted we’d see driverless cars become ubiquitous very, very quickly, and we are of course seeing that’s not the case. Even with things like robotic surgery and driverless cars, you are seeing humans not being taken far out of the loop, which makes it easier for humans to intervene when those machines don’t behave the way we think they will.”
Artificial Intelligence and Language
Damir Cavar, associate professor of computational linguistics at IU, is at work on a group of projects that develops data sets and algorithms aimed at teaching speech-recognition systems the nuances of human language. Right now, most industrial speech recognition systems are trained to take a series of audio signals and simply identify the words being used. Such systems can’t understand the significance of the way something is said—yet.
“Imagine, I see my dog chewing a shoe and ask, ‘What did you do’ with a non-interrogative intonation, rather than the intonation of ‘Bad dog, I see what you did, this is not good,’” Cavar says. “We build the data sets for such speech properties and train algorithms to detect this, to identify irony, sentiment, and sarcasm using the spoken-language signal, intonation, specific accent, or stress patterns.”
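Cavar’s data sets and models are not described in detail here, but the general idea of treating intonation as a signal can be sketched with a toy classifier: given a pitch (F0) contour extracted from audio, the slope of the contour’s final stretch distinguishes a rising, question-like pattern from a falling, statement-like one. Everything below—the function names, the frame rate, the threshold—is an illustrative assumption, not a description of the IU system.

```python
import numpy as np

def final_pitch_slope(f0, frame_rate=100, tail_sec=0.5):
    """Least-squares slope (Hz per second) over the final stretch
    of a pitch contour sampled at frame_rate frames per second."""
    tail = np.asarray(f0, dtype=float)[-int(tail_sec * frame_rate):]
    t = np.arange(len(tail)) / frame_rate
    slope, _ = np.polyfit(t, tail, 1)  # fit a line; keep only the slope
    return slope

def classify_intonation(f0, threshold=20.0):
    """Rising final pitch suggests a genuine question; flat or falling
    pitch suggests a statement (or, in Cavar's example, the
    reproachful, non-interrogative 'What did you do')."""
    return "interrogative" if final_pitch_slope(f0) > threshold else "non-interrogative"

# Synthetic one-second contours (100 frames at 100 frames/sec):
rising  = np.linspace(180, 260, 100)   # pitch climbs toward the end
falling = np.linspace(220, 140, 100)   # pitch drops toward the end
print(classify_intonation(rising))     # interrogative
print(classify_intonation(falling))    # non-interrogative
```

A real system would of course learn from labeled speech rather than a hand-set threshold, and would combine pitch with stress, timing, and accent features, but the sketch shows why the same words can carry different labels.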
Artificial Intelligence and Employment
The rise of AI also triggers perhaps one of the oldest fears about any new technology, the potential displacement of workers.
“Computers in some way have always been about replacing human workers, and so always, whether you are talking about computers in a conventional sense or artificial intelligence, there is always a concern of, ‘What human labor will this replace?’” says Nathan Ensmenger, chair of IU’s School of Informatics, Computing, and Engineering and a longtime researcher of the history of computing.
In the United States, for instance, automation has been a major cause of the decline in manufacturing jobs.
In another field—radiology—many have feared that automation could mean a reduced need for trained radiologists; one leader in the field, Geoffrey Hinton, has even argued that medical schools should stop training people as radiologists. However, Dr. Himanshu Shah, chair of the Department of Radiology and Imaging Science at the Indiana University School of Medicine, strongly disagrees.
Shah believes that artificial intelligence will play an important role in radiology in the years to come, but not to replace humans. Rather, he believes such systems will help radiologists—who he describes as being “maxed out”—cope with the enormous amounts of data now available to them. As imaging technology has improved, radiologists now have many hundreds, even thousands, of images to review from each individual patient, when in the past they would have had perhaps 100.
“There are a lot of applications that are getting increasing hype,” Shah says of artificial intelligence systems in radiology. “Not how to replace the radiologists, but how to make the radiologists more productive; [how to] provide a second set of eyes or elevate the performance of generalists to get them closer to specialist levels.”
While there can be no dispute that some jobs have been lost to automation, Ensmenger believes that job losses on a massive scale in the future are less likely than people think. More likely, facets of jobs will be shifted to computers, but humans will retain the roles for which they are better suited.
“This idea that in 10 years we’ll all have nothing to do and our big problem will be figuring out what to do with our spare time is as absurd now as it was when people were predicting that in the 1950s, which was as absurd as when John Maynard Keynes was predicting that in the 1920s,” Ensmenger says. “It’s a fantasy.”
To read about other IU people at work on the impact of artificial intelligence, check out the “Thinking Machines” feature story in the Winter 2018 issue of the IU Alumni Magazine, a magazine for members of the IU Alumni Association. View current and past issues of the IUAM.