How Computer Code Causes Issues For Transgender People

Meredith Broussard was on a train in Philadelphia when she had an epiphany. Running late and without cash, she wanted to use her husband's monthly pass to ride. But it had a big "M" sticker on it, for male. That had to cause problems for people in the trans community, she mused; she later found out it was true.

Broussard is a data journalism professor at New York University and the author of Artificial Unintelligence: How Computers Misunderstand the World. The experience on the train sparked her investigation into how computer systems shape the treatment of gender.

"I realized that mainstream computer science, as a discipline, needed to start getting more progressive," she said during NeurIPS. "The decisions made about how to represent gender in code were efforts, sometimes deliberate, to enforce 1950s ideas about gender on society. Weirdly, we're still living with those retrograde ideas."

Programmers would use paper forms to create databases and programs, and if the forms offered only two gender options, so did the code.

"When I learned programming, nobody imagined that gender would need to be an editable field," Broussard said. Her talk included excerpts from a book she is writing on the topic.
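As an illustrative sketch only (my own example, not code from Broussard's talk), the contrast she describes can be caricatured in two record types: one hard-codes two immutable gender codes, the way a paper form with two checkboxes once dictated, and the other leaves the field open-ended and editable.

```python
# Illustrative sketch (my example, not from Broussard's talk): two ways a
# program can model gender, mirroring her point about two-checkbox paper forms.
from dataclasses import dataclass
from typing import Optional

RIGID_GENDER_CODES = {"M", "F"}  # a 1950s-style form with exactly two boxes

@dataclass(frozen=True)  # frozen: once created, the record can never be edited
class RigidRider:
    name: str
    gender: str

    def __post_init__(self):
        if self.gender not in RIGID_GENDER_CODES:
            raise ValueError(f"unsupported gender code: {self.gender}")

# A more flexible schema: the field is self-described, optional, and editable.
@dataclass
class FlexibleRider:
    name: str
    gender: Optional[str] = None

rider = FlexibleRider(name="Alex")
rider.gender = "nonbinary"  # gender as an editable field, as Broussard argues
```

The design difference is the point: the rigid schema rejects anything outside its fixed vocabulary and can never be updated, while the flexible one treats gender as data the person controls.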
Daniel Kahneman Wants You To Know The Many Factors Affecting Your Judgment

"For me, rationality is a technical term in decision theory or in logic that requires consistency of all beliefs or preferences. I think it is not even useful as an ideal for the human mind, because it's a nonstarter. A finite human mind cannot be rational."

DANIEL KAHNEMAN
Psychologist & Nobel Laureate in Economics

Daniel Kahneman, the renowned psychologist and author of Thinking, Fast and Slow, helped his audience understand the many factors that affect human decisions—many of which have no logical or emotional reason to do so.

Perception, for example, is tied to judgment, he said. We literally see what we think should be the truth.
How To Make Machines Work Like Flocks Of Birds Or Colonies Of Army Ants

Nature provides some of the best examples of productive swarms: tiny cells or bugs or animals that, collectively, can accomplish some pretty amazing things. So why can't we create that collective intelligence with tiny robots?

That was the focus of Radhika Nagpal's talk at NeurIPS. A professor of computer science at Harvard University, Nagpal focuses her work on biologically inspired robots.

"Army ants are this spectacular example of collective intelligence," she said. "Millions of them work together without any leader in charge. They self-assemble their entire nest out of their own bodies and create these crazy bridge structures to get over rough terrain."

The trick, she said, was figuring out the rules—the algorithms, in the case of robots—that allow leaderless collectives to function together to accomplish shared goals. It's incredibly important, because when you think about robots in the open world, many systems start to take this shape. All the autonomously driven cars on the road could be considered a collective, she noted.
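A minimal sketch of the idea, under my own assumptions (this is a generic cohesion rule, not code from Nagpal's lab): each simulated robot repeatedly takes a small step toward the average position of the neighbors inside its sensing radius. There is no leader and no global plan, yet the swarm pulls itself together.

```python
# Minimal leaderless-swarm sketch (my illustration, not Nagpal's code): every
# agent applies one local rule, and cohesion emerges without coordination.
import random

def step(positions, radius=5.0, speed=0.2):
    """One synchronous update: each agent moves toward its local neighborhood."""
    new_positions = []
    for x, y in positions:
        # Neighbors are agents within sensing range (local information only).
        nbrs = [(px, py) for px, py in positions
                if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2]
        cx = sum(p[0] for p in nbrs) / len(nbrs)  # includes self, so len >= 1
        cy = sum(p[1] for p in nbrs) / len(nbrs)
        new_positions.append((x + speed * (cx - x), y + speed * (cy - y)))
    return new_positions

def spread(positions):
    """Mean squared distance from the swarm centroid (a cohesion measure)."""
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in positions) / len(positions)

random.seed(0)
swarm = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(30)]
before = spread(swarm)
for _ in range(50):
    swarm = step(swarm)
after = spread(swarm)  # the swarm tightens with no leader coordinating it
```

The rule is purely local: each agent uses only positions within `radius`, which is exactly what makes the collective leaderless in the sense Nagpal describes.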
Mary L. Gray Wants To Create Tech To Serve Groups, Not Just Individuals

"So much of computer science is grounded in an assumption that there is an end user that should be the point of connection. That is precisely what I see as the limits of our current approaches to machine learning."

MARY L. GRAY
Senior Principal Researcher, Microsoft Research

Gray, a senior principal researcher at Microsoft Research, has been studying healthcare workers, their workflow, and their data needs since the start of the pandemic, particularly in the American South, where those professionals "are driving vaccine equity among Black and Latinx communities," she said at the conference.

She has been trying to solve the problem of how to create tech innovations that serve those groups of people, building collaboration and social connection, rather than any particular user.

That work has raised questions about how, and by whom, data is collected for use in machine learning or artificial intelligence, she said. It might be possible to link social sciences with computing so that people embedded in communities can figure out how to share information, and how to curate it, to get the most meaningful results from the eventual computation.

In addition to her work for Microsoft, Gray is a faculty fellow at Harvard University, affiliated with both the E.J. Safra Center for Ethics and the Berkman Klein Center for Internet and Society. She also holds a faculty position in the Luddy School of Informatics, Computing and Engineering, with affiliations in anthropology and gender studies, at Indiana University. In 2020 she received a MacArthur Fellowship, one of the so-called "genius grants," for her work spanning anthropology, technology, digital economies, and society.
Kahneman offered an example of how perception shapes judgment: shown an ambiguous shape among letters, people read it as "B"; set among numbers, they read the very same shape as "13."

"We deal not with reality directly," he said. "We deal with reality as a representation, and there is a choice in the representation."

Kahneman's background is novel-worthy. He won the Nobel Prize in Economics in 2002, something of a surprise for a psychologist, for work with a research partner that used psychological findings to build a new model of individual decision making in economics. He spent his childhood escaping the Nazis during World War II, and his service in the Israel Defense Forces is where he began figuring out how to predict human behavior.

He held professorships at the University of British Columbia in Vancouver, the University of California, Berkeley, and Princeton University. After retiring in 2007, Kahneman went on to receive the U.S. Presidential Medal of Freedom.
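The B-versus-13 illusion can be caricatured in code. This is my own toy analogy, not anything Kahneman presented: the same ambiguous glyph resolves differently depending on the representation its context suggests.

```python
# Toy analogy (mine, not Kahneman's): the same ambiguous glyph is "read"
# differently depending on the characters around it, echoing the B-vs-13 illusion.
AMBIGUOUS = "?"  # stand-in for a shape that could be the letter B or the number 13

def read_glyph(glyph: str, context: str) -> str:
    """Resolve an ambiguous glyph using its surrounding characters."""
    if glyph != AMBIGUOUS:
        return glyph
    neighbors = [c for c in context if c.isalnum() and c != AMBIGUOUS]
    # A numeric context makes us perceive a number; otherwise, a letter.
    if neighbors and all(c.isdigit() for c in neighbors):
        return "13"
    return "B"

print(read_glyph(AMBIGUOUS, "A ? C"))    # letter context: prints "B"
print(read_glyph(AMBIGUOUS, "12 ? 14"))  # digit context: prints "13"
```

The point of the sketch is Kahneman's: the "reading" is a choice of representation made by the surrounding context, not a property of the shape itself.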