Letter: AI can advance faster if banks learn to share


Although the article (Opinion, November 20) on how artificial intelligence is reshaping finance eloquently captures some of the possible resolutions to the AI issues facing the sector, there is one additional point that should be recognised: banks should open their data and machine learning algorithms to the rest of the sector. This openness would improve the accuracy of AI and provide the high-level governance measures your article calls for.

Opening up does mean sharing intellectual property with potential rivals. However, some in the UK financial services sector are already familiar with open collaborative projects such as the open banking initiative. Developing AI is an iterative process that requires plenty of experimentation to get right, and it can be accelerated when the work is shared widely, with more eyeballs available to conduct peer reviews. AI expertise is available but currently rather underutilised.

Investments are being made in cloud computing, which promises to support large-scale AI projects without the need to build a robust private cloud platform. Yet the freedom to share research and advances is not being considered, and this is putting a ceiling on AI and on the talent responsible for developing it.

Instead of imposing structures on innovation in a highly regulated sector, could we instead encourage more openness, which could create a more attentive, diverse and competitive industry? Focusing on the industry rather than its participants could well futureproof its growth.

Jack Watts
Emea Leader, Artificial Intelligence,
NetApp, Caddington, Bedfordshire, UK

Published at Fri, 18 Dec 2020 00:00:00 +0000

2020 in Review | 10 AI Papers That Made an Impact

Much of the world may be on hold, but AI research is still booming. The volume of peer-reviewed AI papers has grown by more than 300 percent over the last two decades, and attendance at AI conferences continues to increase significantly, according to the Stanford AI Index. In 2020, AI researchers made exciting progress in applying transformers to areas beyond natural language processing (NLP), bringing the powerful network architecture to protein sequence modelling and to computer vision tasks such as object detection and panoptic segmentation. Improvements in unsupervised and self-supervised learning methods, meanwhile, made them serious alternatives to traditional supervised learning.

The top AI conferences were mostly held virtually in 2020, yet still recorded record numbers of paper submissions. June’s CVPR received a total of 6,656 submissions, up from 5,165 last year; July’s ACL 2020 had 3,088 submissions, breaking that conference’s record of 2,906; and, also in July, ICML 2020 reviewed 4,990 submissions — a 45.7 percent increase over the 3,424 submissions last year. ICLR 2020, meanwhile, accepted 687 out of 2,594 papers, drew over 5,600 participants from nearly 90 countries (more than double the 2,700 physical attendees in 2019), and received more than a million page views and over 100,000 video watches over its five-day run in April. (ICLR organizers, however, did not present Best Paper awards this year.) NeurIPS 2020, which just concluded, received 9,467 paper submissions, a 40 percent increase over last year.

As part of our year-end series, Synced highlights 10 artificial intelligence papers that garnered extraordinary attention and accolades in 2020.

AAAI 2020 Outstanding Paper Award

WinoGrande: An Adversarial Winograd Schema Challenge at Scale
Authors: Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi
Institution(s): Allen Institute for Artificial Intelligence, University of Washington

https://arxiv.org/pdf/1907.10641.pdf

https://aaai.org/Awards/paper.php

CVPR 2020 Best Paper Award

Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild

Authors: Shangzhe Wu, Christian Rupprecht, Andrea Vedaldi
Institution(s): University of Oxford

Abstract: We propose a method to learn 3D deformable object categories from raw single-view images, without external supervision. The method is based on an autoencoder that factors each input image into depth, albedo, viewpoint and illumination. In order to disentangle these components without supervision, we use the fact that many object categories have, at least in principle, a symmetric structure. We show that reasoning about illumination allows us to exploit the underlying object symmetry even if the appearance is not symmetric due to shading. Furthermore, we model objects that are probably, but not certainly, symmetric by predicting a symmetry probability map, learned end-to-end with the other components of the model. Our experiments show that this method can recover very accurately the 3D shape of human faces, cat faces and cars from single-view images, without any supervision or a prior shape model. On benchmarks, we demonstrate superior accuracy compared to another method that uses supervision at the level of 2D image correspondences.
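
The core idea is easier to see in code. Below is a toy sketch (not the authors' implementation) of the confidence-weighted symmetry constraint: a reconstruction is computed from the predicted components and from their horizontal mirror, and a per-pixel symmetry probability map decides how strongly the mirrored reconstruction is enforced. The shading function and loss form here are simplified stand-ins.

```python
import numpy as np

H, W = 64, 64
rng = np.random.default_rng(0)

image      = rng.random((H, W, 3))      # input photo (toy data)
albedo     = rng.random((H, W, 3))      # predicted reflectance
depth      = rng.random((H, W))         # predicted depth map
confidence = rng.random((H, W))         # predicted per-pixel symmetry probability

def shade(albedo, depth):
    # Toy Lambertian-style shading: brightness falls off with depth.
    return albedo * (1.0 / (1.0 + depth))[..., None]

# Reconstruction from the predicted components...
recon = shade(albedo, depth)
# ...and from their horizontally mirrored versions (the symmetry assumption).
recon_flip = shade(albedo[:, ::-1], depth[:, ::-1])

# Confidence-weighted photometric loss: pixels believed to be asymmetric
# (low confidence) contribute less to the mirrored-reconstruction term.
err      = np.abs(recon - image).mean(axis=-1)
err_flip = np.abs(recon_flip - image).mean(axis=-1)
loss = err.mean() + (confidence * err_flip).mean()
print(f"toy reconstruction loss: {loss:.4f}")
```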

ACL 2020 Best Overall Paper

Beyond Accuracy: Behavioral Testing of NLP Models with CheckList

Authors: Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh
Institution(s): Microsoft Research, University of Washington, University of California-Irvine

Abstract: Although measuring held-out accuracy has been the primary approach to evaluate generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors. Inspired by principles of behavioral testing in software engineering, we introduce CheckList, a task-agnostic methodology for testing NLP models. CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly. We illustrate the utility of CheckList with tests for three tasks, identifying critical failures in both commercial and state-of-the-art models. In a user study, a team responsible for a commercial sentiment analysis model found new and actionable bugs in an extensively tested model. In another user study, NLP practitioners with CheckList created twice as many tests, and found almost three times as many bugs as users without it.
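
To make the testing pattern concrete, here is a minimal sketch of CheckList-style behavioral tests for a sentiment model. The classifier below is a keyword-rule placeholder and the templates are our own toy examples; only the test types follow the paper's taxonomy.

```python
def predict_sentiment(text: str) -> str:
    # Placeholder model: a naive keyword rule standing in for a real classifier.
    text = text.lower()
    return "positive" if "love" in text or "great" in text else "negative"

# Minimum Functionality Test (MFT): simple cases the model must get right.
mft_cases = [("I love this airline.", "positive"),
             ("The flight was terrible.", "negative")]
mft_failures = [t for t, gold in mft_cases if predict_sentiment(t) != gold]

# Invariance test (INV): perturbations that should not change the prediction,
# e.g. swapping a person's name.
inv_pairs = [("Great service from Anna.", "Great service from Maria.")]
inv_failures = [pair for pair in inv_pairs
                if predict_sentiment(pair[0]) != predict_sentiment(pair[1])]

print("MFT failures:", mft_failures)
print("INV failures:", inv_failures)
```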

ICML 2020 Outstanding Paper Awards

Efficiently Sampling Functions From Gaussian Process Posteriors

Authors: James Wilson, Slava Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Deisenroth
Institution(s): Imperial College London, St. Petersburg State University, St. Petersburg Department of Steklov Mathematical Institute of Russian Academy of Sciences, University College London

Abstract: Gaussian processes are the gold standard for many real-world modeling problems, especially in cases where a model’s success hinges upon its ability to faithfully represent predictive uncertainty. These problems typically exist as parts of larger frameworks, wherein quantities of interest are ultimately defined by integrating over posterior distributions. These quantities are frequently intractable, motivating the use of Monte Carlo methods. Despite substantial progress in scaling up Gaussian processes to large training sets, methods for accurately generating draws from their posterior distributions still scale cubically in the number of test locations. We identify a decomposition of Gaussian processes that naturally lends itself to scalable sampling by separating out the prior from the data. Building off of this factorization, we propose an easy-to-use and general-purpose approach for fast posterior sampling, which seamlessly pairs with sparse approximations to afford scalability both during training and at test time. In a series of experiments designed to test competing sampling schemes’ statistical properties and practical ramifications, we demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
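
The prior/data decomposition the abstract refers to can be illustrated with an exact-GP toy example: a posterior sample is a prior sample plus a data-driven correction (a pathwise, Matheron-style update). The NumPy sketch below is deliberately naive; the paper's contribution is pairing this decomposition with scalable approximations, which are not shown here.

```python
import numpy as np

def rbf(a, b, lengthscale=0.5):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
X  = np.linspace(-2.0, 2.0, 8)                  # training inputs
y  = np.sin(X) + 0.05 * rng.standard_normal(8)  # noisy observations
Xs = np.linspace(-3.0, 3.0, 200)                # test locations
noise = 0.05 ** 2

# Draw one joint sample from the GP prior over [train, test] locations.
Z = np.concatenate([X, Xs])
K = rbf(Z, Z) + 1e-10 * np.eye(len(Z))          # jitter for numerical stability
f_prior = np.linalg.cholesky(K) @ rng.standard_normal(len(Z))
f_prior_train, f_prior_test = f_prior[:len(X)], f_prior[len(X):]

# Pathwise update: correct the prior sample toward the observed data.
eps = np.sqrt(noise) * rng.standard_normal(len(X))
alpha = np.linalg.solve(rbf(X, X) + noise * np.eye(len(X)),
                        y - f_prior_train - eps)
f_post_test = f_prior_test + rbf(Xs, X) @ alpha  # one posterior sample path
print(f_post_test[:5])
```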

Generative Pretraining from Pixels

Authors: Mark Chen, Alec Radford, Rewon Child, Jeffrey K Wu, Heewoo Jun, David Luan, Ilya Sutskever
Institution(s): OpenAI

Abstract: Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels, without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels, we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pre-trained models. An even larger model trained on a mixture of ImageNet and web images is competitive with self-supervised benchmarks on ImageNet, achieving 72.0% top-1 accuracy on a linear probe of our features.
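
The linear-probe protocol mentioned in the abstract is simple to sketch: freeze the pretrained encoder, extract features, and train only a linear classifier on top. The random feature extractor below merely stands in for iGPT's transformer activations; only the evaluation protocol is the point.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((3072, 512))   # pretend frozen encoder weights

def frozen_features(images):                  # images: (n, 3072) flattened pixels
    return np.tanh(images @ W_frozen)         # no gradient updates reach W_frozen

X_train = rng.random((500, 3072)); y_train = rng.integers(0, 10, 500)
X_test  = rng.random((100, 3072)); y_test  = rng.integers(0, 10, 100)

probe = LogisticRegression(max_iter=1000)     # the only trainable component
probe.fit(frozen_features(X_train), y_train)
print("probe accuracy:", probe.score(frozen_features(X_test), y_test))
```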

ECCV 2020 Best Paper Award

RAFT: Recurrent All-Pairs Field Transforms for Optical Flow

Authors: Zachary Teed, Jia Deng
Institution(s): Princeton University

Abstract: We introduce Recurrent All-Pairs Field Transforms (RAFT), a new deep network architecture for optical flow. RAFT extracts per-pixel features, builds multi-scale 4D correlation volumes for all pairs of pixels, and iteratively updates a flow field through a recurrent unit that performs lookups on the correlation volumes. RAFT achieves state-of-the-art performance. On KITTI, RAFT achieves an F1-all error of 5.10%, a 16% error reduction from the best published result (6.10%). On Sintel (final pass), RAFT obtains an end-point-error of 2.855 pixels, a 30% error reduction from the best published result (4.098 pixels). In addition, RAFT has strong cross-dataset generalization as well as high efficiency in inference time, training speed, and parameter count. Code is available at https://github.com/princeton-vl/RAFT.
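
The all-pairs correlation volume at the heart of RAFT reduces to a single matrix multiplication between the two frames' feature maps. Below is a toy NumPy version with made-up tensor sizes; the multi-scale pooling and the recurrent lookup/update operator are not reproduced here.

```python
import numpy as np

H, W, D = 12, 16, 32                          # feature map height, width, channels
rng = np.random.default_rng(0)
feat1 = rng.standard_normal((H, W, D))        # features of frame 1
feat2 = rng.standard_normal((H, W, D))        # features of frame 2

# corr[i, j, k, l] = <feat1[i, j], feat2[k, l]> for every pair of pixels.
corr = np.einsum('ijd,kld->ijkl', feat1, feat2) / np.sqrt(D)
print(corr.shape)                             # (H, W, H, W): the 4D correlation volume
```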

CoRL 2020 Best Paper Award


Learning Latent Representations to Influence Multi-Agent Interaction

Authors: Annie Xie, Dylan P. Losey, Ryan Tolsma, Chelsea Finn, Dorsa Sadigh
Institution(s): Stanford University, Virginia Tech

Abstract: Seamlessly interacting with humans or robots is hard because these agents are non-stationary. They update their policy in response to the ego agent’s behavior, and the ego agent must anticipate these changes to co-adapt. Inspired by humans, we recognize that robots do not need to explicitly model every low-level action another agent will make; instead, we can capture the latent strategy of other agents through high-level representations. We propose a reinforcement learning based framework for learning latent representations of an agent’s policy, where the ego agent identifies the relationship between its behavior and the other agent’s future strategy. The ego agent then leverages these latent dynamics to influence the other agent, purposely guiding them towards policies suitable for co-adaptation. Across several simulated domains and a real-world air hockey game, our approach outperforms the alternatives and learns to influence the other agent.
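
At a high level the pipeline can be sketched as: encode the previous interaction into a latent estimate of the other agent's strategy, then condition the ego policy on that latent. The linear encoder and policy below are illustrative stand-ins, not the trained RL components described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS, ACT, LATENT = 8, 2, 4

W_enc = rng.standard_normal((OBS + ACT, LATENT))   # trajectory -> latent strategy
W_pi  = rng.standard_normal((OBS + LATENT, ACT))   # (obs, latent) -> ego action

def infer_latent(prev_episode):
    # Summarize the last interaction into a latent strategy estimate z.
    steps = np.concatenate(prev_episode, axis=-1)   # (T, OBS + ACT)
    return np.tanh(steps @ W_enc).mean(axis=0)      # (LATENT,)

def ego_policy(obs, z):
    # The ego action depends on both the current observation and the inferred z,
    # which is what lets the ego agent steer the other agent's adaptation.
    return np.tanh(np.concatenate([obs, z]) @ W_pi)

prev_obs  = rng.standard_normal((20, OBS))          # last episode's observations
prev_acts = rng.standard_normal((20, ACT))          # other agent's actions
z = infer_latent((prev_obs, prev_acts))
print(ego_policy(rng.standard_normal(OBS), z))
```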

NeurIPS 2020 Outstanding Paper Awards

Language Models are Few-Shot Learners

Authors: Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
Institution(s): OpenAI

Abstract: We demonstrate that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even becoming competitive with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks. We also identify some datasets where GPT-3’s few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora.
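
The few-shot protocol is worth spelling out: task demonstrations are written into the prompt as plain text and the model simply completes it, with no gradient updates. The sketch below uses the English-to-French format from the paper; `generate` is a placeholder rather than a real model or API call.

```python
def generate(prompt: str) -> str:
    # Stand-in for an autoregressive language model's completion of the prompt.
    return "<model completion>"

demonstrations = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]
query = "peppermint"

prompt = "Translate English to French:\n"
prompt += "".join(f"{en} => {fr}\n" for en, fr in demonstrations)
prompt += f"{query} =>"                 # the model completes this line

print(prompt)
print(generate(prompt))                 # no fine-tuning, just in-context examples
```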

No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium

Authors: Andrea Celli (Polimi), Alberto Marchesi (Polimi), Gabriele Farina (CMU) and Nicola Gatti (Polimi)
Institution(s): Politecnico di Milano and Carnegie Mellon University

Abstract: The existence of simple, uncoupled no-regret dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form (that is, tree-form) games generalize normal-form games by modeling both sequential and simultaneous moves, as well as private information. Because of the sequential nature and presence of partial information in the game, extensive-form correlation has significantly different properties than the normal-form counterpart, many of which are still open research directions. Extensive-form correlated equilibrium (EFCE) has been proposed as the natural extensive-form counterpart to normal-form correlated equilibrium. However, it was previously unknown whether EFCE emerges as the result of uncoupled agent dynamics. In this paper, we give the first uncoupled no-regret dynamics that converge to the set of EFCEs in n-player general-sum extensive-form games with perfect recall. First, we introduce a notion of trigger regret in extensive-form games, which extends that of internal regret in normal-form games. When each player has low trigger regret, the empirical frequency of play is close to an EFCE. Then, we give an efficient no-trigger-regret algorithm. Our algorithm decomposes trigger regret into local subproblems at each decision point for the player, and constructs a global strategy of the player from the local solutions at each decision point.
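
For readers unfamiliar with uncoupled no-regret dynamics, the sketch below shows the simplest normal-form instance: regret matching in rock-paper-scissors, where each player updates only from its own payoffs. It uses external (unconditional) regret, a weaker notion than the internal and trigger regrets discussed in the paper, so it conveys only the flavor of such dynamics.

```python
import numpy as np

A = np.array([[0, -1, 1],               # payoff of player 0; player 1 gets -A
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
regrets = [np.zeros(3), np.zeros(3)]
avg_strategy = [np.zeros(3), np.zeros(3)]
T = 20000

def current_strategy(r):
    # Regret matching: play actions in proportion to their positive regret.
    pos = np.maximum(r, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(3, 1 / 3)

for _ in range(T):
    p0, p1 = current_strategy(regrets[0]), current_strategy(regrets[1])
    a0, a1 = rng.choice(3, p=p0), rng.choice(3, p=p1)
    # Regret of having played each fixed action instead of the realized one.
    regrets[0] += A[:, a1] - A[a0, a1]
    regrets[1] += (-A[a0, :]) - (-A[a0, a1])
    avg_strategy[0] += p0
    avg_strategy[1] += p1

print(avg_strategy[0] / T)               # both average strategies approach the
print(avg_strategy[1] / T)               # uniform equilibrium of rock-paper-scissors
```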

Improved Guarantees and a Multiple-Descent Curve for Column Subset Selection and the Nystrom Method

Authors: Michał Dereziński, Rajiv Khanna, Michael W. Mahoney
Institution(s): University of California, Berkeley

Abstract: The Column Subset Selection Problem (CSSP) and the Nystrom method are among the leading tools for constructing small low-rank approximations of large datasets in machine learning and scientific computing. A fundamental question in this area is: how well can a data subset of size k compete with the best rank k approximation? We develop techniques which exploit spectral properties of the data matrix to obtain improved approximation guarantees which go beyond the standard worst-case analysis. Our approach leads to significantly better bounds for datasets with known rates of singular value decay, e.g., polynomial or exponential decay. Our analysis also reveals an intriguing phenomenon: the approximation factor as a function of k may exhibit multiple peaks and valleys, which we call a multiple-descent curve. A lower bound we establish shows that this behavior is not an artifact of our analysis, but rather it is an inherent property of the CSSP and Nystrom tasks. Finally, using the example of a radial basis function (RBF) kernel, we show that both our improved bounds and the multiple-descent curve can be observed on real datasets simply by varying the RBF parameter.
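
The Nystrom construction the abstract analyzes is easy to state in code: choose a subset S of k columns of the kernel matrix and approximate K as K[:, S] pinv(K[S, S]) K[S, :]. The toy comparison below uses uniform column sampling and an RBF kernel purely for illustration; the paper's analysis concerns how good such k-column approximations can be relative to the best rank-k approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))                     # toy dataset
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / 2.0)                           # RBF kernel matrix

k = 20
S = rng.choice(len(X), size=k, replace=False)         # column subset
K_approx = K[:, S] @ np.linalg.pinv(K[np.ix_(S, S)]) @ K[S, :]

# Compare against the best rank-k approximation (truncated eigendecomposition).
w, V = np.linalg.eigh(K)
K_best = (V[:, -k:] * w[-k:]) @ V[:, -k:].T
print("Nystrom error :", np.linalg.norm(K - K_approx, "fro"))
print("best rank-k   :", np.linalg.norm(K - K_best, "fro"))
```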

In addition to the 10 papers recognized by the top AI and ML conferences, Synced would like to highlight a few other notable 2020 papers:

Google researchers proposed an AutoML-Zero approach designed to automatically search for machine learning (ML) algorithms from scratch, requiring minimal human expertise or input:
AutoML-Zero: Evolving Machine Learning Algorithms From Scratch

Google (Meena) and Facebook (Blender) both introduced novel approaches for building humanlike chatbots:
Towards a Human-like Open-Domain Chatbot
Recipes for Building an Open-Domain Chatbot


Reporter: Yuan Yuan | Editor: Michael Sarazen



Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors

This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle. Along with this report, we also introduced a database covering an additional 1,428 artificial intelligence solutions across 12 pandemic scenarios.




We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.

Published at Thu, 17 Dec 2020 22:52:30 +0000