‘The Debate of the Next Decade’ – AI Debate 2 Explores AGI and AI Ethics



For the second year in a row, Gary Marcus, CEO and founder of Robust.AI and professor emeritus at New York University, returned to the AI Debate series hosted by Montréal.AI. This time it was not to spar with Turing Award winner Yoshua Bengio, but to moderate three panel discussions on how to move AI forward.

“Last year, in the first annual December AI Debate, Yoshua Bengio and I discussed what I think is one of the key debates of the last decade: are big data and deep learning alone enough to get to artificial general intelligence (AGI)?” said Marcus as he launched what he termed “the debate of the next decade: how can we take AI to the next level?”

This year’s “AI Debate 2 – Moving AI Forward: An Interdisciplinary Approach” was held again on the day before Christmas Eve and featured 16 panellists — from leading AI researchers and practitioners to psychology professors, neuroscientists, and researchers on ethical AI. The four-hour event included three panel discussions: Architecture and Challenges, Insights from Neuroscience and Psychology, and Towards AI We Can Trust.


Fei-Fei Li kicked off the Architecture and Challenges panel with the presentation “In search of the next AI North Star.” Li is a researcher in computer vision and AI + healthcare, a computer science professor at Stanford University, co-director of the Stanford Institute for Human-Centered AI, and cofounder and chair of AI4ALL.

Problem formulation is the first step toward any solution, and AI research is no exception, Li explains. Object recognition, a critical capability of human intelligence, has guided AI researchers for the past two decades or so as they worked to deploy it in artificial systems. Inspired by research on the evolution of human and animal nervous systems, Li believes the next critical AI problem is how to build interactive learning agents that use perception and actuation to learn about and understand the world.


Machine learning researcher Luis Lamb, a professor at the Federal University of Rio Grande do Sul in Brazil and Secretary of Innovation, Science and Technology for the State of Rio Grande do Sul, thinks the key problem in current AI is twofold: identifying the necessary and sufficient building blocks of AI, and developing trustworthy ML systems that are not only explainable but also interpretable.


Richard Sutton, distinguished research scientist at DeepMind and a computing science professor at the University of Alberta in Canada, agrees that it is important to understand the problem before offering solutions. AI has surprisingly little computational theory, he points out: just as neuroscience lacks a higher-level understanding of the goals and purposes of the overall mind, so does AI.

AI needs an agreed-upon computational theory, Sutton explains, and he regards reinforcement learning (RL) as the first such theory of intelligence: one that is explicit about its goal, about the whats and the whys of intelligence.
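To make that explicitness concrete, here is a minimal reinforcement learning sketch. The toy environment and hyperparameters are our own illustrative assumptions, not anything presented at the debate; the point is that the agent's goal is stated directly in the code as maximizing cumulative reward.

```python
import random

# Minimal tabular Q-learning on a toy 5-state corridor. The environment
# and hyperparameters are illustrative assumptions. The goal of the
# agent is explicit: maximize cumulative reward.

N_STATES = 5
ACTIONS = [0, 1]                        # 0 = step left, 1 = step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy dynamics: reward 1.0 for reaching the rightmost state."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current value estimates.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next, r = step(s, a)
        # TD update: move Q toward reward plus discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next
```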


“It is well-established that AI can solve problems, but what we humans can do is still very unique,” says Ken Stanley, an OpenAI research manager and a courtesy professor of computer science at the University of Central Florida. Because humans exhibit “open-ended innovation,” he argues, AI researchers need to pursue open-endedness in artificial systems.


Stanley emphasizes the importance of understanding what makes intelligence a fundamental aspect of humanity. He identifies several dimensions of intelligence that he believes are neglected, among them divergence, diversity preservation, and stepping-stone collection.

Judea Pearl, Turing Award winner “for fundamental contributions to AI through the development of a calculus for probabilistic and causal reasoning” and director of the UCLA Cognitive Systems Laboratory, argues that next-level AI systems need knowledge built in rather than remaining purely data-driven. The idea that knowledge of the world, or common sense, is one of the fundamental missing pieces is shared by Yejin Choi, an associate professor at the University of Washington who won an AAAI 2020 Outstanding Paper Award earlier this year.

The Insights from Neuroscience and Psychology panel had researchers from other disciplines share their views on topics such as how understanding feedback in brains could help build better AI systems.

The final panel, Towards AI We Can Trust, focused on AI ethics and how to deal with biases in ML systems. “Algorithmic bias is not only problematic for the direct harms it causes, but also for the cascading harms of how it impacts human beliefs,” says Celeste Kidd, a professor at UC Berkeley whose lab studies how humans form beliefs and build knowledge in the world.


Unethical AI systems are problematic because they can be embedded seamlessly in people’s everyday lives and drive human beliefs in sometimes destructive and likely irreparable ways, Kidd explains. “The point here is that biases in AI systems reinforce and strengthen biases in the people who use them.”

Kidd says “right now is a terrifying time for ethics in AI,” especially in light of Google’s dismissal of Timnit Gebru. She says “it’s clear that private interests will not support diversity, equity and inclusion. It should horrify us that the control of algorithms that drive so much of our lives remains in the hands of a homogeneous narrow-minded minority.”

Margaret Mitchell, Gebru’s co-lead of Google’s Ethical AI team and one of the co-authors of the paper at the centre of the Gebru controversy, introduced research she and Gebru had been working on. “One of the key things we were really trying to push forward in the ethical AI space is the role of foresight, and how that can be incorporated into all aspects of development.”


There’s no such thing as neutrality in algorithms or apolitical programming, Mitchell says. Human biases and different value judgements are everywhere — from training data to system structure, post-processing steps, and model output. “We were trying to break the system — we call it bias laundering. One of the fundamental parts of developing AI ethically is to make sure that from the start there is a diversity of perspectives and background at the table.”

This point is reflected in the format selected for this year’s AI Debate, which was designed to bring in different perspectives. As an old African proverb goes, “it takes a village to raise a child.” Marcus says it will similarly take a village to raise an AI that is ethical, robust, and trustworthy. He concludes that it was great to have some pieces of that village gathered at this year’s AI Debate, and that he sees a lot of convergence in what the panellists brought to the event.


Reporter: Yuan Yuan | Editor: Michael Sarazen



Published at Thu, 24 Dec 2020 23:26:15 +0000

Artificial Intelligence Classifies Real Supernova Explosions With Unprecedented Accuracy


A new machine learning algorithm trained only with real data has classified over 2,300 supernovae with over 80% accuracy.

Artificial intelligence is classifying real supernova explosions without the traditional use of spectra, thanks to a team of astronomers at the Center for Astrophysics | Harvard & Smithsonian. The complete data sets and resulting classifications are publicly available for open use.

By training a machine learning model to categorize supernovae based on their visible characteristics, the astronomers classified 2,315 supernovae from real Pan-STARRS1 Medium Deep Survey data with an accuracy of 82-percent, without the use of spectra.

The astronomers developed a software program that classifies different types of supernovae based on their light curves, or how their brightness changes over time. “We have approximately 2,500 supernovae with light curves from the Pan-STARRS1 Medium Deep Survey, and of those, 500 supernovae with spectra that can be used for classification,” said Griffin Hosseinzadeh, a postdoctoral researcher at the CfA and lead author on the first of two papers published in The Astrophysical Journal. “We trained the classifier using those 500 supernovae to classify the remaining supernovae where we were not able to observe the spectrum.”
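The published pipelines are Superphot and SuperRAENN (see the references below); the following is only a schematic sketch of the training setup Hosseinzadeh describes, with placeholder data and a generic off-the-shelf classifier standing in for the real models:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Schematic of the setup described above: fit a classifier on the ~500
# spectroscopically labelled supernovae, then label the remaining
# photometric-only events. The random data and the random forest are
# placeholders; the published pipelines (Superphot, SuperRAENN) first
# fit parametric or autoencoder models to each Pan-STARRS1 light curve
# to extract features such as peak brightness, rise time, and colour.

rng = np.random.default_rng(0)

X_labeled = rng.normal(size=(500, 4))     # features of the 500 SNe with spectra
y_labeled = rng.integers(0, 5, size=500)  # their spectroscopic class labels
X_unlabeled = rng.normal(size=(1815, 4))  # the remaining SNe, light curves only

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_labeled, y_labeled)

labels = clf.predict(X_unlabeled)         # photometric classifications
confidences = clf.predict_proba(X_unlabeled)
```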

Cassiopeia A Supernova Remnant

Cassiopeia A, or Cas A, is a supernova remnant located 10,000 light years away in the constellation Cassiopeia, and is the remnant of a once massive star that died in a violent explosion roughly 340 years ago. This image layers infrared, visible, and X-ray data to reveal filamentary structures of dust and gas. Cas A is amongst the 10-percent of supernovae that scientists are able to study closely. CfA’s new machine learning project will help to classify thousands, and eventually millions, of potentially interesting supernovae that may otherwise never be studied. Credit: NASA/JPL-Caltech/STScI/CXC/SAO

Edo Berger, an astronomer at the CfA, explained that asking the artificial intelligence to answer specific questions makes the results increasingly accurate. “The machine learning looks for a correlation with the original 500 spectroscopic labels. We ask it to compare the supernovae in different categories: color, rate of evolution, or brightness. By feeding it real existing knowledge, it leads to the highest accuracy, between 80- and 90-percent.”

Although this is not the first machine learning project for supernovae classification, it is the first time that astronomers have had access to a real data set large enough to train an artificial intelligence-based supernovae classifier, making it possible to create machine learning algorithms without the use of simulations.

“If you make a simulated light curve, it means you are making an assumption about what supernovae will look like, and your classifier will then learn those assumptions as well,” said Hosseinzadeh. “Nature will always throw some additional complications in that you did not account for, meaning that your classifier will not do as well on real data as it did on simulated data. Because we used real data to train our classifiers, it means our measured accuracy is probably more representative of how our classifiers will perform on other surveys.”

As the classifier categorizes the supernovae, said Berger, “We will be able to study them both in retrospect and in real-time to pick out the most interesting events for detailed follow up. We will use the algorithm to help us pick out the needles and also to look at the haystack.”
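The “measured accuracy” Hosseinzadeh refers to would typically come from holding out part of the real labelled set, for example via cross-validation. A minimal sketch, again with placeholder data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Cross-validate on the real labelled set (placeholder data, as in the
# sketch above), so the accuracy estimate reflects performance on real
# light curves rather than on simulated ones.
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(500, 4))
y_labeled = rng.integers(0, 5, size=500)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X_labeled, y_labeled, cv=5)
print(f"cross-validated accuracy: {scores.mean():.1%} +/- {scores.std():.1%}")
```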

The project has implications not only for archival data, but also for data that will be collected by future telescopes. The Vera C. Rubin Observatory is expected to go online in 2023 and will lead to the discovery of millions of new supernovae each year. This presents both opportunities and challenges for astrophysicists, since limited telescope time means only a fraction of those discoveries can be spectroscopically classified.

“When the Rubin Observatory goes online it will increase our discovery rate of supernovae by 100-fold, but our spectroscopic resources will not increase,” said Ashley Villar, a Simons Junior Fellow at Columbia University and lead author on the second of the two papers, adding that while roughly 10,000 supernovae are currently discovered each year, scientists only take spectra of about 10-percent of those objects. “If this holds true, it means that only 0.1-percent of supernovae discovered by the Rubin Observatory each year will get a spectroscopic label. The remaining 99.9-percent of data will be unusable without methods like ours.”
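Villar’s 0.1-percent figure follows directly from the quoted numbers; a back-of-the-envelope check:

```python
# Back-of-the-envelope version of Villar's estimate: the discovery rate
# grows roughly 100-fold with Rubin while spectroscopic capacity stays flat.
discovered_per_year = 10_000                       # current discoveries/year
spectra_per_year = 0.10 * discovered_per_year      # ~10% get spectra today

rubin_per_year = 100 * discovered_per_year         # ~1,000,000 discoveries/year
labeled_fraction = spectra_per_year / rubin_per_year
print(f"{labeled_fraction:.1%} of Rubin supernovae would get a spectrum")  # 0.1%
```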

Unlike past efforts, where data sets and classifications have been available to only a limited number of astronomers, the data sets from the new machine learning algorithm will be made publicly available. The astronomers have created easy-to-use, accessible software, and also released all of the data from Pan-STARRS1 Medium Deep Survey along with the new classifications for use in other projects. Hosseinzadeh said, “It was really important to us that these projects be useful for the entire supernova community, not just for our group. There are so many projects that can be done with these data that we could never do them all ourselves.” Berger added, “These projects are open data for open science.”

References:

“SuperRAENN: A Semisupervised Supernova Photometric Classification Pipeline Trained on Pan-STARRS1 Medium-Deep Survey Supernovae” by V. Ashley Villar, Griffin Hosseinzadeh, Edo Berger, Michelle Ntampaka, David O. Jones, Peter Challis, Ryan Chornock, Maria R. Drout, Ryan J. Foley, Robert P. Kirshner, Ragnhild Lunnan, Raffaella Margutti, Dan Milisavljevic, Nathan Sanders, Yen-Chen Pan, Armin Rest, Daniel M. Scolnic, Eugene Magnier, Nigel Metcalfe, Richard Wainscoat and Christopher Waters, 17 December 2020, The Astrophysical Journal.
DOI: 10.3847/1538-4357/abc6fd

“Photometric Classification of 2315 Pan-STARRS1 Supernovae with Superphot” by Griffin Hosseinzadeh, Frederick Dauphin, V. Ashley Villar, Edo Berger, David O. Jones, Peter Challis, Ryan Chornock, Maria R. Drout, Ryan J. Foley, Robert P. Kirshner, Ragnhild Lunnan, Raffaella Margutti, Dan Milisavljevic, Yen-Chen Pan, Armin Rest, Daniel M. Scolnic, Eugene Magnier, Nigel Metcalfe, Richard Wainscoat and Christopher Waters, 17 December 2020, The Astrophysical Journal.
DOI: 10.3847/1538-4357/abc42b

This project was funded in part by a grant from the National Science Foundation (NSF) and the Harvard Data Science Initiative (HDSI).

Published at Thu, 24 Dec 2020 23:15:00 +0000