Body

A.I. and machine learning at Rice: Much more than science fiction

Rice researchers are changing public perception by making artificial intelligence less flashy and more practical.


Jan Odegard is fond of quoting something John McDonald, the CEO of ClearObject, an Internet of Things company, recently wrote:

“AI is not really intelligent and there’s nothing artificial about it.”

“We’re not even close to artificial intelligence,” said Odegard, executive director of the Ken Kennedy Institute for Data and Computation at Rice University, “but a lot of very smart people, here at Rice and elsewhere, are working on the problem. Some of their work is promising and I expect to see some breakthroughs in the coming years.”

The public perception of AI, thanks to science-fiction films and wishful thinking, is convincingly human-looking robots that sometimes outsmart their creators.

“It’s not like that,” said Devika Subramanian, professor of computer science (CS) and of electrical and computer engineering (ECE). “AI and its sub-field, machine learning (ML), are computational enablers for autonomous decision-making. AI and ML are tools to enhance human decision-making, just as mechanical shovels enhance our ability to dig holes faster, cheaper and better.”

The reality of AI at Rice is less flashy and more practical. Take Pedram Hassanzadeh, assistant professor of mechanical engineering and of earth, environmental and planetary sciences. With the aid of a grant from Microsoft AI for Earth, he is studying ways to predict extreme weather events using deep learning techniques.

Or Caleb Kemere, associate professor of ECE and assistant professor of bioengineering, who uses ML tools to study how cognitive processes carried out by large assemblies of neurons evolve across circadian timescales and with learning. How are these processes, he asks, implemented in neural circuits?

Or Lydia E. Kavraki, Noah Harding Professor of Computer Science, who is director of a lab dedicated to computational robotics, AI and biomedicine. “In robotics and AI, we want to enable robots to work with people and in support of them. Our research develops the underlying algorithms for motion planning in high-dimensional kinodynamic systems.”

At Rice, research into AI is by nature interdisciplinary, drawing on CS, ECE, mathematics, psychology, linguistics, philosophy and other academic fields. The George R. Brown School of Engineering already offers 16 graduate and undergraduate courses in some aspect of AI in the departments of CS, ECE, statistics, and computational and applied mathematics. AI aims at building computer programs that solve problems and are sufficiently flexible to “reason” about what to do when they encounter novel situations. The AI sub-category, ML, has come to dominate the field.

“Machine learning tries to build programs that learn to solve problems from data or from experience, so a programmer doesn’t need to provide every detail on how to solve every case. The program figures it out,” said Chris Jermaine, professor of CS. “Machine learning has become so dominant that when people say ‘AI’ they often mean machine learning, or at least AI with some aspect of machine learning.”
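A minimal illustration of that idea, in Python (a toy sketch, not any system described here): instead of a programmer hand-coding the decision rule, the program derives it from labeled examples.

```python
def learn_threshold(examples):
    """Learn a 1-D decision threshold from labeled data.

    examples: list of (value, label) pairs with labels 0 or 1.
    Returns the midpoint between the largest 0-example and the
    smallest 1-example -- the rule is inferred, not hand-coded.
    """
    zeros = [x for x, y in examples if y == 0]
    ones = [x for x, y in examples if y == 1]
    return (max(zeros) + min(ones)) / 2

# Training data: exam scores labeled pass (1) / fail (0).
data = [(35, 0), (42, 0), (58, 1), (71, 1), (49, 0), (55, 1)]
cutoff = learn_threshold(data)        # midpoint of 49 and 55 -> 52.0
predict = lambda x: int(x >= cutoff)  # the learned "program"
```

The programmer never writes "pass means 50 or above"; the cutoff comes from the data, which is the essence of Jermaine's point.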

Jermaine’s research further develops what he calls the “plumbing” that makes AI work:

“One under-appreciated fact is that a lot of the advances in AI in the last five years or so have come directly from new software systems. They’ve made it easy to implement and deploy complicated deep networks. Google’s TensorFlow is an example. Before such systems, a Ph.D. student in machine learning might need to spend a year or more doing the math required to get a new deep network to function, then implementing the network.

“Such software is already cutting down that time to a month. Work still needs to be done. These systems don’t really deal well with Big Data, or really big neural networks, or with learning from data that are geographically distributed. Addressing these limitations is the core of what I work on.”
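The math Jermaine says a Ph.D. student once derived by hand is largely gradient calculus, which frameworks like TensorFlow now compute automatically. A toy sketch of doing it manually for a single sigmoid neuron (illustrative only, not code from any Rice project):

```python
import math

def sigmoid(z):
    """Squash any real number into (0, 1)."""
    return 1 / (1 + math.exp(-z))

def train(data, epochs=2000, lr=0.5):
    """Fit one neuron with hand-derived log-loss gradients."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            grad = p - y            # d(loss)/d(z), derived by hand
            w -= lr * grad * x      # chain rule applied manually
            b -= lr * grad
    return w, b

# Learn the rule "positive input -> label 1".
data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
w, b = train(data)
```

For one neuron this is a few lines; for a deep network with millions of parameters, deriving and implementing these gradients by hand is the year-long effort that automatic-differentiation systems eliminated.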

Anshumali Shrivastava, assistant professor of CS, and his research collaborators have used machine learning to more accurately estimate the number of identified victims killed in the ongoing Syrian civil war. Working with the Human Rights Data Analysis Group, he has devised a data-indexing method called “hashing with statistical estimation.” It produces real-time estimates of documented victims with a lower margin of error than existing statistical methods for finding duplicate records in databases.

A record-by-record analysis of four Syrian war databases would have required roughly 63 billion pairwise comparisons. “One approach that avoids bias is random sampling,” he said. “So, we might choose 1 million random pairs out of the 63 billion, count how many are duplicates, and then apply that rate to the entire dataset. This produces an unbiased estimate.”
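The sampling estimate Shrivastava describes can be sketched in a few lines of Python (a hypothetical illustration with toy data, not the actual HRDAG pipeline or the hashing method):

```python
import random

def estimate_duplicates(records, sample_size, is_duplicate):
    """Estimate the total number of duplicate pairs by sampling
    random pairs, measuring the duplicate rate, and extrapolating
    that rate to all possible pairs."""
    n = len(records)
    total_pairs = n * (n - 1) // 2          # all unordered pairs
    hits = 0
    for _ in range(sample_size):
        i, j = random.sample(range(n), 2)   # one random distinct pair
        if is_duplicate(records[i], records[j]):
            hits += 1
    return (hits / sample_size) * total_pairs

# Toy data: six records where exact matches are duplicates
# (4 duplicate pairs out of 15 possible pairs).
random.seed(0)                              # reproducible sample
records = ["ali", "sara", "ali", "omar", "sara", "ali"]
est = estimate_duplicates(records, 5000, lambda a, b: a == b)
```

Sampling 5,000 pairs is overkill for six records, but at 63 billion pairs this rate-times-total extrapolation is what makes an unbiased estimate tractable.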

Subramanian’s understanding of AI is simultaneously more abstract and more specific to her own research applications:

“The main intellectual question defining the field is how to engineer autonomous agents that do the right thing in the face of limited computational resources and limited information. Humans do this all the time. We define intelligence as the ability to do the right thing in contexts where all the needed information and computational resources may not be available.”

For the last two decades, Subramanian has been identifying and solving interesting problems using algorithms from AI and machine learning. With partners at the Texas Medical Center, she has developed ways to predict hospital readmissions, diabetic ketoacidosis in pediatric Type-1 diabetics, and side effects of drugs and drug combinations, and analyzed the appropriate time for insertion of left ventricular-assist devices.

“There are so many applications in so many fields,” Subramanian said. “We’ve developed tools for predicting risk at the household level of damage from hurricanes and chronic flooding for a million homes in Harris County.”

With Richard J. Stoll, the Albert Thomas Professor of Political Science at Rice, she has developed methods for learning models of conflict and of the spread of terrorist networks by mining news stories. She has also worked with Stoll on data-driven approaches to social media, both to understand public attitudes toward gun control and to uncover the strategies used by foreign governments to influence Americans.

Someone at Rice who has pondered the impact of AI on society is Moshe Y. Vardi, the Karen Ostrum George Distinguished Service Professor in Computational Engineering, and University Professor, who leads the recently launched Rice Initiative on Technology, Culture, and Society.

“Machines have already automated millions of routine, working-class jobs in manufacturing. Now AI is learning to automate less routine jobs in transportation and logistics, legal writing, financial services, administrative support and healthcare. We still haven’t factored in what this will do to the lives of people,” Vardi said.

Vardi has expressed concern over the growing use of ML to develop automated decision systems deployed in the criminal justice system to make sentencing recommendations. “No one can say why computers make specific rulings, just as they can’t describe the exact thought processes in the brain of a human judge. But artificial neural networks can learn to behave in ways that weren’t intended. For example, they can learn racial bias from data that reflect historical bias against certain groups.”