Extreme heatwaves like those that struck Western Europe last summer could one day be predicted weeks in advance using artificial intelligence.
Because heatwaves are rare and hard to anticipate, it has historically been difficult to prepare for the likes of wildfires and the health implications for people and animals when they strike.
However, French scientists have now unveiled an AI system to predict them, using so-called “deep learning”.
Machine learning is a branch of AI in which systems improve with minimal human intervention, while deep learning is an offshoot of machine learning that uses artificial neural networks loosely modelled on the human brain.
The AI developed by the Claude Bernard University Lyon researchers draws on environmental conditions, such as soil moisture and the state of the atmosphere, to estimate the probability of an extreme heatwave up to a month before its arrival.
They trained the technology on 8,000 years of weather data, simulated by a climate model from the University of Hamburg.
The AI can make predictions in a matter of seconds, and can also be used to predict rare phenomena difficult to anticipate using traditional climate forecasts and climate models, the researchers said.
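To illustrate the general idea, the sketch below shows a toy neural network that maps gridded environmental fields to a heatwave probability. Everything here is illustrative: the grid size, architecture, and untrained random weights are stand-ins, not the Lyon researchers' actual model, which is a deep network trained on 8,000 years of simulated climate data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: two coarse 8x8 input fields (soil moisture plus an
# atmospheric field), flattened and passed through one hidden layer
# to produce a probability of an extreme heatwave weeks ahead.
GRID = 8 * 8      # grid points per field
N_FIELDS = 2      # soil moisture, atmospheric state
N_IN = GRID * N_FIELDS
N_HIDDEN = 16

# Random, untrained weights standing in for trained parameters.
W1 = rng.normal(0, 0.1, (N_IN, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, N_HIDDEN)
b2 = 0.0

def heatwave_probability(soil_moisture, atmosphere):
    """Return an estimated P(extreme heatwave) in [0, 1]."""
    x = np.concatenate([soil_moisture.ravel(), atmosphere.ravel()])
    h = np.tanh(x @ W1 + b1)             # hidden layer
    logit = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> probability

# One synthetic snapshot of the two input fields.
soil = rng.uniform(0, 1, (8, 8))   # low values = dry soil
atmo = rng.normal(0, 1, (8, 8))
p = heatwave_probability(soil, atmo)
print(f"P(extreme heatwave in ~4 weeks) = {p:.3f}")
```

Once trained, such a model produces a forecast in a single forward pass, which is why predictions take seconds rather than the hours a physics-based climate simulation can require.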
As global warming intensifies, extreme heatwaves are likely to become more frequent.
The Intergovernmental Panel on Climate Change (IPCC), a UN-backed body of global climate scientists, including Maynooth University professor Peter Thorne, said last month that more than a century of burning fossil fuels has led to global warming of 1.1C above pre-industrial levels, resulting in more frequent and more intense extreme weather events in every region of the world.
Every increment of warming results in rapidly escalating hazards, such as more intense heatwaves, heavier rainfall, and other weather extremes.
Almost half of the world’s population lives in regions highly vulnerable to climate change, where in the last decade deaths from floods, droughts, and storms were 15 times higher than in less vulnerable regions.
A report from Christian Aid calculated that drought caused by the extreme heat across Europe during the summer was likely to have cost €20bn and caused some 20,000 excess deaths, with wildfires and agricultural losses particularly acute.
Wildfires across Europe proved costly not just in monetary terms, but also in emissions: output from June to August was the highest summer wildfire total estimated for the EU plus Britain in the last 15 years.
France, Spain, Germany, and Slovenia experienced their highest summer wildfire emissions for at least the last 20 years, the EU’s climate change service Copernicus said.
Copernicus said in January that Europe’s summer was the hottest in recorded history “by a clear margin”, with all countries across the entire continent bar one experiencing annual temperatures above the 30-year average.
Autumn was the third warmest on record, only beaten by 2020 and 2006, while winter temperatures in 2022 were about 1C above average, ranking amongst the 10 warmest.
The continent experienced its second warmest June ever recorded at about 1.6C above average.
Published at Sun, 09 Apr 2023 07:00:55 +0000
The surgeon waiting for you in the operating room has downloaded all the diagnostic files relating to your case, and will perform the procedure with micrometre precision.
The risk of error or infection is reduced, incisions minimised, and recovery time accelerated.
Except the person carrying out the operation will not be human, but a super-intelligent robot specially programmed to carry out emergency abdominal surgery.
This may sound like science fiction, but for those working at the cutting edge of artificial intelligence and robotics it is seen as one probable future for healthcare.
“I would expect that yes, at some point in the future, that’s exactly the kind of thing one should expect to see,” said Subramanian Ramamoorthy, a professor and personal chair of robot learning and autonomy at Edinburgh University’s School of Informatics.
“Predictions are always hard, but technically, that would be the natural evolution. It’ll be in steps, from smaller procedures to bigger procedures.
“The introduction is going to be slow. We’ll start with existing, minimally invasive surgery and give people one step of assistance.
“Then, once people start to trust that, you go one step further into task-led robotic surgery: you tell me what bit you want excised – a polyp, for example – and then that gets automated.
“Then, one day in future – it’s hard to predict when – we can imagine that you could have an entirety of surgeries, but we’re not there yet. In commercial terms, we’re not even at the beginning of this.”
Ramamoorthy is among those on the frontline of artificial intelligence (AI) in medicine.
At a specially created lab at the Bayes Centre in Edinburgh, which mimics a typical operating theatre environment, he has been pioneering the development of sensor-guided autonomous robots that can help cancer surgeons “push towards tighter margins” – meaning that less healthy tissue is removed and recovery rates improve.
This work on safe AI for surgical assistance builds on ideas Ramamoorthy first explored through research into self-driving vehicles.
He sees parallels between the incremental progress in autonomous driving technology – from parking assist to eventual driverless cars – and the step-by-step advances in healthcare from surgeon-guided robots (already a reality) to the autonomous robot surgeons of the future.
“In the beginning everyone hypes it and is a bit disappointed, and then you get gradual growth,” said Ramamoorthy.
“It’s exactly the same thing here. To the insiders the hype was not justified; likewise, the feeling that some people have that ‘it’s not going to happen’ is also unjustified, because it was always going to be a long game.”
When it comes to diagnostics, AI is already finding a foothold in the NHS.
A successful study in Grampian used AI as a “second pair of eyes” to scan 80,000 mammograms for signs of breast cancer.
It is also being trialled in Glasgow to alert clinicians to the COPD patients most at risk of emergency hospital admission, so that pre-emptive interventions can be made.
When it comes to robots performing surgery, however, Ramamoorthy says it is a bit like transitioning from map-reading to GPS.
He said: “At the moment a surgeon in the room outside looks at the imaging, keeps it in their head, and then walks in and performs the surgery based on what they can see.
“It’s a bit like the old-fashioned way of steering a ship after having looked at the map somewhere else, whereas what we are talking about is more like GPS-driven navigation.
“What we’re looking for here are real-time diagnostics giving the robots that micrometre level of accuracy.
“The issue of staffing shortages in many ways is secondary – not because it’s not important – but for a long time they’re not going to get rid of people because people are still going to be sitting there monitoring it, and supervising it.
“In the beginning, accuracy will be the driver.”
Ramamoorthy will be discussing the latest developments during a talk at the Bayes Centre on Thursday.
It comes days after AI experts including Twitter billionaire Elon Musk and Apple co-founder Steve Wozniak called for a worldwide pause in the training of human-competitive intelligence technologies, warning that they “pose profound risks to society and humanity”.
It follows the release on March 14 of GPT-4, the next generation of the deep learning language model behind the chatbot ChatGPT.
While Musk and Wozniak caution that no one “can understand, predict, or reliably control” these emerging innovations, others have compared a moratorium on AI development to “[delaying] the Manhattan Project and letting the Nazis catch up” – a reference to nuclear weapons.
To the layman, all this seems worryingly reminiscent of HAL 9000 – the rogue computer in ‘2001: A Space Odyssey’ – or Skynet, the fictional artificial intelligence system in the ‘Terminator’ franchise which, on becoming self-aware, triggered global nuclear warfare before its human inventors could shut it down.
The late theoretical physicist Professor Stephen Hawking once warned that it was impossible to foresee whether humanity would be “infinitely helped, ignored, or conceivably destroyed” by AI.
“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation,” he told a technology summit in Lisbon in 2017.
The question of whether sentient robots are “friends or monsters” will be discussed by a panel of guests at the Edinburgh Science Festival on Tuesday, moderated by Professor Michael Herrmann of the Edinburgh Centre for Robotics.
He said: “In the past there wasn’t really a question of whether robots or machines can be sentient, but now there is a feeling that something has changed – a new quality has been reached – so we need to ask these questions again.”
The concept of sentience in robots throws up a raft of dilemmas, from the ethical to the existential: should robots have rights, for instance, and if we can create consciousness in machines doesn’t that prove once and for all that it is not some divine gift for humans?
One of the panellists, Rupert Robson, author of ‘The Sentient Robot’, notes that we still don’t know why consciousness exists.
He said: “If you think about our brain, all sorts of cognitive and emotional functions take place – all sorts of information processing.
“The question is, why doesn’t all this information processing go on in the dark, just as it does in a handheld calculator?
“And yet we know that it doesn’t go on in the dark – we’re aware of it. That is sentience.
“But it’s not absolutely clear what sentience, or ‘consciousness’, brings to the party because all of that information processing is going on anyway.
“Do AIs or algorithms like ChatGPT and GPT-4 have sentience or consciousness?
“Absolutely not – yet.
“Is there a likelihood that we will be able to figure out consciousness in order to embed it into robots? Yes, that is possible.
“But it’s not going to happen by accident. It’s going to happen because we’ve designed it into the robot.”
For his part, Robson thinks sentience could be the thing that actually saves us from a Terminator-style doomsday.
“Make no mistake, we will develop – in the fullness of time – really super-clever robots, with a much greater breadth of intelligence than ChatGPT, and at that point we have a danger – a risk to ourselves – and we need to mitigate that risk.
“I think sentience is a way of doing that.
“If [the robots] see the world through our eyes, if they are able to empathise with us because they have sentience, then I think there is an argument – a good argument – that we stand a greater chance of them being friendly to us, rather than hostile.”
Back in the more mundane world of healthcare, Dr Cian O’Donovan, a researcher at University College London, is concerned with making sure that we harness AI to our benefit – not to replace staff, but to free up clinicians and carers to spend more time with patients.
He said: “It’s not simply a matter of ‘the robots are coming and taking all the jobs’ – the robots are coming, that means we’ve got to think really hard about training.
“Patients will benefit if robotics and automation technologies allow them to spend more time with human carers.”
O’Donovan cautioned that AI is “not a panacea” for workforce shortages if we fail to plan for an ageing population.
He added: “There’s a danger that because of the successes – or perceived successes – in places like diagnostics or in replicating chess players, that we’re too quick in projecting those successes into other areas.
“Thinking about wards, thinking about care homes, these environments are so unpredictable and so far removed from the board games, from the X-ray labs or, in the case of robots, from the factory floor.
“I don’t think that’s fully costed in by governments thinking that AI technologies are the future across the board.”
Published at Sun, 09 Apr 2023 06:10:06 +0000