A Christmas symposium from the British Neuroscience Association (BNA) has reviewed the growing relationship between neuroscience and artificial intelligence (AI) techniques. The online event featured speakers from across the UK, who reviewed how AI has changed brain science and the many unrealized applications of what remains a nascent technology.
Moving past idiotic AI
Opening the day with his talk, Shake your Foundations: the future of neuroscience in a world where AI is less rubbish, Prof. Christopher Summerfield, from the University of Oxford, looked at the idiotic, ludic and pragmatic stages of AI. We are moving from the idiotic phase, where virtual assistants are usually unreliable and AI-controlled cars crash into objects they fail to notice, to the ludic phase, where some AI tools are actually quite handy. Summerfield highlighted DALL-E, an AI that converts text prompts into images, and a language generator called Gopher that can answer complicated ethical questions with eerily natural responses.
What could these advances in AI mean for neuroscience? Summerfield suggested that they invite researchers to consider the limits of current neuroscience practice that could be enhanced by AI in the future.
Integration of neuroscience subfields could be enabled by AI, said Summerfield. Currently, he said, “People who study language don’t care about vision. People who study vision don’t care about memory.” AI systems don’t work properly if only one distinct subfield is considered, and Summerfield suggested that, as we learn more about how to create a more complete AI, similar advances will be seen in our study of the biological brain.
Another element of AI that could drag neuroscience into the future is the level of grounding required for it to succeed. Currently, AI models must be trained on contextual data before they can learn associations, whereas a human can act on an instruction alone. What makes it possible for a volunteer in a psychologist’s experiment to be told to do something, and then just do it? To create more natural AIs, this is a problem that neuroscience will have to solve in the biological brain first.
Better decisions in healthcare using AI
The University of Oxford’s Prof. Mihaela van der Schaar looked at how we can use machine learning to empower human learning in her talk, Quantitative Epistemology: a new human-machine partnership. Van der Schaar’s talk discussed practical applications of machine learning in healthcare, such as teaching clinicians through a process called meta-learning. This is where, said van der Schaar, “learners become aware of and increasingly in control of habits of perception, inquiry, learning and growth.”
This approach provides a potential look at how AI might supplement the future of healthcare, by advising clinicians on how they make decisions and how to avoid potential error when undertaking certain practices. Van der Schaar gave an insight into how AI models can be set up to make these continuous improvements. In healthcare, which, at least in the UK, is slow to adopt new technology, van der Schaar’s talk offered a tantalizing glimpse of what a truly digital approach to healthcare could achieve.
Dovetailing nicely with van der Schaar’s talk was Imperial College London professor Aldo Faisal’s presentation, entitled AI and Neuroscience – the Virtuous Cycle. Faisal looked at systems where humans and AI interact and how they can be classified. Whereas in van der Schaar’s clinical decision support systems, humans remain responsible for the final decision and AIs merely advise, in an AI-augmented prosthetic, for example, the roles are reversed. A user can suggest a course of action, such as “pick up this glass”, by sending nerve impulses, and the AI can then find a response that addresses this suggestion, by, for example, directing a prosthetic hand to move in a certain way. Faisal then went into detail on how these paradigms can inform real-world learning tasks, such as motion-tracked subjects learning to play pool.
One fascinating study involved a balance board task, where a human subject could tilt the board along one axis while an AI controlled the other, meaning that the two had to collaborate to succeed. After some time, the strategies learned by the AI could be “copied” between certain subjects, suggesting the human learning component was similar. But for other subjects, this wasn’t possible.
Faisal suggested this hinted at complexities in how different individuals learn that could inform behavioral neuroscience, AI systems and future devices, like neuroprostheses, where the two must play nicely together.
The afternoon’s session featured presentations that touched on the complexities of the human and animal brain. The University of Sheffield’s Professor Eleni Vasilaki explained how mushroom bodies, regions of the fly brain that play roles in learning and memory, can provide insight into sparse reservoir computing. Thomas Nowotny, professor of informatics at the University of Sussex, reviewed a process called asynchrony, where neurons activate at slightly different times in response to certain stimuli. Nowotny explained how this enables relatively simple systems like the bee brain to perform incredible feats of communication and navigation using only a few thousand neurons.
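The sparse reservoir computing that Vasilaki connected to insect mushroom bodies can be illustrated with a minimal echo state network: a fixed, sparsely connected recurrent layer projects inputs into a high-dimensional state, and only a simple linear readout is trained. This is a generic sketch of the technique, not Vasilaki’s own model; the sizes, sparsity level and toy task below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir: a fixed, sparsely connected recurrent layer (illustrative sizes).
n_inputs, n_reservoir = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W[rng.random((n_reservoir, n_reservoir)) > 0.1] = 0.0  # keep ~10% of connections
W *= 0.9 / max(abs(np.linalg.eigvals(W)))              # keep dynamics stable

def run_reservoir(u):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the input signal one step ahead.
u = np.sin(np.arange(500) * 0.1)
X = run_reservoir(u[:-1])
y = u[1:]

# Only the linear readout is trained (ridge regression); the sparse
# recurrent weights stay fixed, just as mushroom body circuitry is not
# rewired for every new task.
ridge = 1e-6
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)
pred = X @ w_out
```

The appeal for neuroscience is that learning is confined to a single cheap readout, while the fixed random expansion does the heavy lifting.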
Do AIs have minds?
Wrapping up the day’s presentations was a lecture that showed an uncanny future for social AIs, delivered by Henry Shevlin, a senior researcher at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge.
Shevlin reviewed the theory of mind, which enables us to understand what other people might be thinking by, in effect, modeling their thoughts and emotions. Do AIs have minds in the same way that we do? Shevlin reviewed a series of AIs that have been out in the world in 2021, passing as humans.
One such AI, OpenAI’s language model GPT-3, spent a week posting on the internet forum site Reddit, chatting with human Redditors and racking up hundreds of comments. Chatbots like Replika, meanwhile, personalize themselves to individual users, creating pseudo-relationships that feel as real as human connections (at least to some users). Current systems, said Shevlin, are excellent at fooling humans, but they have no “mental” depth and are, in effect, extremely proficient versions of the predictive text systems our phones use.
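Shevlin’s “proficient predictive text” framing can be made concrete with a toy next-word predictor: count which word follows which in a corpus, then always suggest the most frequent successor. Models like GPT-3 are incomparably more sophisticated, with learned representations and long-range context, but the underlying objective, predicting the next token, is the same in spirit. The tiny corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast text datasets language models train on.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which (a bigram model).
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Suggest the word most often seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("on"))  # "on" is always followed by "the" in this corpus
```

Scaling this idea up, from word pairs to thousands of tokens of context and billions of parameters, is what lets current systems fool humans without any “mental” depth.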
While the rapid advance of some of these systems might feel dizzying or unsettling, AI and neuroscience are likely to be wedded together in future research. So much can be learned from pairing these fields, and true advances will be gained not by retreating from complex AI theories but by embracing them. At the end of Summerfield’s talk, he summed up the idea that AIs are “black boxes” we don’t fully understand as “lazy”. If we treat deep networks and other AI systems as neurobiological theories instead, the next decade could see unprecedented advances for both neuroscience and AI.
Published at Mon, 20 Dec 2021 13:40:49 +0000
Technology is a democratic right. That’s not a legal statement, a core truism or even any kind of de facto public awareness proclamation. It’s just something that we all tend to agree upon. The birth of cloud computing and the rise of open source have fuelled this line of thought: cloud puts access and power in anyone’s hands, and open source champions meritocracy over hierarchy, an ethos that in itself insists upon access, opportunity and engagement.
Key among the sectors of the IT landscape now being driven towards a more democratic level of access are Artificial Intelligence (AI) and the Machine Learning (ML) methods that go towards building the ‘smartness’ inside AI models and their algorithmic strength.
Amazon Web Services (AWS) is clearly a major player in cloud and therefore has the breadth to bring its datacenters’ ML muscle forward in different ways, in different formats and at different levels of complexity, abstraction and usability.
While some IT democratization focuses on putting complex developer and data science tools in the hands of laypeople, other democratization drives to put ML tools in the hands of developers… not all of whom will be natural ML specialists and AI engineers in the first instance.
Hands-on AI & ML experimentation
The recently announced SageMaker Studio Lab is a free service for software application developers to ‘learn machine learning’ methods. It teaches them core techniques and offers them the chance to perform hands-on experimentation with an Integrated Development Environment (in this case, a JupyterLab IDE) to start creating model training functions that will run on real-world processors (both CPUs and higher-end Graphics Processing Units, or GPUs), along with the gigabytes of storage these processes require.
AWS has twinned its product development with the creation of its own AWS AI & ML Scholarship Program. This is a US$10 million-per-year learning and mentorship initiative created in collaboration with Intel and Udacity.
“Machine Learning will be one of the most transformational technologies of this generation. If we are going to unlock the full potential of this technology to tackle some of the world’s most challenging problems, we need the best minds entering the field from all backgrounds and walks of life. We want to inspire and excite a diverse future workforce through this new scholarship program and break down the cost barriers that prevent many from getting started,” said Swami Sivasubramanian, VP of Amazon Machine Learning at AWS.
Founder and CEO of Girls in Tech Adriana Gascoigne agrees with Sivasubramanian’s diversity message wholeheartedly. Her organization is a global nonprofit dedicated to eliminating the gender gap in tech and she welcomes what she calls ‘intentional programs’ like these that are designed to break down barriers.
“Progress in bringing more women and underrepresented communities into the field of Machine Learning will only be achieved if everyone works together to close the diversity gap. Girls in Tech is glad to see multi-faceted programs like the AWS AI & ML Scholarship to help close the gap in Machine Learning education and open career potential among these groups,” said Gascoigne.
The program uses AWS DeepRacer (an integrated learning system for users of all levels to learn and explore reinforcement learning and to experiment and build autonomous driving applications) and the new AWS DeepRacer Student League. Together these teach students foundational machine learning concepts by giving them hands-on experience training machine learning models for autonomous race cars, alongside educational content centered on machine learning fundamentals.
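The reinforcement learning that DeepRacer teaches can be sketched in its simplest, tabular form: an agent tries actions, receives rewards, and updates a value table until the rewarding behavior dominates. DeepRacer itself trains deep RL policies on simulated camera input; the toy one-dimensional “track”, reward scheme and hyperparameters below are illustrative assumptions, not the DeepRacer setup.

```python
import random

random.seed(0)

# Toy "track": positions 0-5 along a line; the car steps left or right,
# and reaching position 5 (the finish line) earns a reward of 1.
N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]                      # step left, step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def greedy(s):
    """Pick the highest-valued action, breaking ties randomly."""
    best = max(Q[s])
    return random.choice([a for a in (0, 1) if Q[s][a] == best])

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit, occasionally explore.
        a = random.randrange(2) if random.random() < epsilon else greedy(s)
        s_next = min(max(s + ACTIONS[a], 0), GOAL)
        reward = 1.0 if s_next == GOAL else 0.0
        # Standard Q-learning update toward reward + discounted future value.
        Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# The learned greedy policy should now drive toward the finish line
# from every position.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)]
```

Students move from tables like `Q` to neural networks, but the trial, reward and update loop is the same foundational concept.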
The World Economic Forum estimates that technological advances and automation will create 97 million new technology jobs by 2025, including in the field of AI & ML. While the job opportunities in technology are growing, diversity is lagging behind in science and technology careers.
Birthplace of the modern computer
The University of Pennsylvania’s engineering school is regarded by many in technology as the birthplace of the modern computer. This honor is due to the fact that ENIAC, the world’s first electronic, large-scale, general-purpose digital computer, was developed there in 1946. Dan Roth, professor of Computer and Information Science (CIS) at the university, is enthusiastic on the subject of AI & ML democratization.
“One of the hardest parts about programming with Machine Learning is configuring the environment to build. Students usually have to choose the compute instances, security policies and provide a credit card,” said Roth. “My students needed Amazon SageMaker Studio Lab to abstract away all of the complexity of setup and provide a free powerful sandbox to experiment. This lets them write code immediately without needing to spend time configuring the ML environment.”
In terms of how these systems and initiatives actually work, Amazon SageMaker Studio Lab offers a free version of Amazon SageMaker, which is used by researchers and data scientists worldwide to build, train, and deploy machine learning models quickly.
Amazon SageMaker Studio Lab removes the need to have an AWS account or provide billing details to get up and running with machine learning on AWS. Users simply sign up with an email address through a web browser and Amazon SageMaker Studio Lab provides access to a machine learning development environment.
No-code Machine Learning
This thread of industry effort must also logically embrace the use of Low-Code/No-Code (LC/NC) technologies. AWS has built this element into its platform with what it calls Amazon SageMaker Canvas. This is a No-Code service intended to expand access to Machine Learning for ‘business analysts’ (a term that AWS uses to broadly define line-of-business employees supporting finance, marketing, operations and human resources teams) with a visual interface that allows them to create accurate Machine Learning predictions on their own, without having to write a single line of code.
Amazon SageMaker Canvas provides a visual, point-and-click user interface for users to generate predictions. Customers point Amazon SageMaker Canvas at their data stores (e.g. Amazon Redshift, Amazon S3, Snowflake, on-premises data stores, local files, etc.) and Amazon SageMaker Canvas provides visual tools to help users intuitively prepare and analyze data.
Amazon SageMaker Canvas uses automated Machine Learning to build and train machine learning models without any coding. Businesspeople can review and evaluate models in the Amazon SageMaker Canvas console for accuracy and efficacy for their use case. Amazon SageMaker Canvas also lets users export their models to Amazon SageMaker Studio, so they can share them with data scientists to validate and further refine their models.
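AWS does not publish Canvas’s internals in detail, but the core idea behind automated Machine Learning can be sketched: fit several candidate models, score each on held-out data, and keep the best. The two toy candidates and the synthetic dataset below are purely illustrative; real AutoML systems search far larger spaces of models and preprocessing steps.

```python
import statistics

# Hypothetical tabular data: one feature x, target y = 2x (noise-free here).
train = [(x, 2.0 * x) for x in range(20)]
holdout = [(x, 2.0 * x) for x in range(20, 30)]

# Two candidate "models": predict the training mean, or fit y = w*x
# by least squares through the origin.
def fit_mean(data):
    m = statistics.mean(y for _, y in data)
    return lambda x: m

def fit_linear(data):
    w = sum(x * y for x, y in data) / sum(x * x for x, _ in data)
    return lambda x, w=w: w * x

candidates = {"mean": fit_mean, "linear": fit_linear}

def holdout_mse(model):
    """Score a fitted model on data it never saw during training."""
    return statistics.mean((model(x) - y) ** 2 for x, y in holdout)

# Automated selection: fit every candidate, score each, keep the best.
fitted = {name: fit(train) for name, fit in candidates.items()}
scores = {name: holdout_mse(m) for name, m in fitted.items()}
best = min(scores, key=scores.get)
```

A No-Code tool wraps this fit-score-select loop behind a visual interface, which is why a business analyst can get a credible model without touching the code.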
According to Marc Neumann, product owner, AI Platform at The BMW Group, the use of AI as a key technology is an integral element in the process of digital transformation at the BMW Group. The company already employs AI throughout its value chain, but has been working to expand upon its use.
“We believe Amazon SageMaker Canvas can add a boost to our AI/ML scaling across the BMW Group. With SageMaker Canvas, our business users can easily explore and build ML models to make accurate predictions without writing any code. SageMaker also allows our central data science team to collaborate and evaluate the models created by business users before publishing them to production,” said Neumann.
With great power comes great responsibility
As we know, with great power comes great responsibility, and nowhere is this truer than in the realm of AI & ML, given all the machine brain power we are about to wield over our lives.
Enterprises can, of course, corral, contain and control how much ML any individual, team or department has access to – and which internal and external systems it can then connect with and affect – via policy controls and role-based access systems. These ensure that data sources are not manipulated and then distributed in ways that could ultimately prove harmful to the business, or indeed to people.
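The shape of such a role-based control is straightforward: each role maps to a set of permitted actions, and every request is checked against that map. The roles and permission names below are hypothetical, for illustration only, and do not correspond to any specific AWS feature.

```python
# A minimal sketch of role-based access control over ML capabilities.
# Roles and permission names are invented for illustration.
ROLE_PERMISSIONS = {
    "data_scientist": {"read_raw_data", "train_model", "export_model"},
    "business_analyst": {"read_prepared_data", "train_model"},
    "viewer": {"view_predictions"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role is permitted to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# A business analyst can train models on prepared data, but cannot export
# them beyond the governed environment; unknown roles get nothing.
```

Deny-by-default, as in the `get(role, set())` fallback, is the usual safeguard: any role or action not explicitly granted is refused.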
There is no denying the general weight of effort being applied here as AI and ML capabilities are democratized for a greater cross-section of society… and after all, who wouldn’t vote for that?
Published at Mon, 20 Dec 2021 13:35:24 +0000