{"id":3087,"date":"2020-10-01T22:06:23","date_gmt":"2020-10-01T22:06:23","guid":{"rendered":"https:\/\/techclot.com\/index.php\/2020\/10\/01\/anticipating-heart-failure-with-machine-learning\/"},"modified":"2020-10-01T22:06:23","modified_gmt":"2020-10-01T22:06:23","slug":"anticipating-heart-failure-with-machine-learning","status":"publish","type":"post","link":"https:\/\/techclot.com\/index.php\/2020\/10\/01\/anticipating-heart-failure-with-machine-learning\/","title":{"rendered":"Anticipating heart failure with machine learning"},"content":{"rendered":"<p><a href=\"https:\/\/www.google.com\/url?rct=j&#038;sa=t&#038;url=https:\/\/news.mit.edu\/2020\/anticipating-heart-failure-machine-learning-1001&#038;ct=ga&#038;cd=CAIyHDkyYmU1MGQ5NjY1NjYxZTA6Y28udWs6ZW46R0I&#038;usg=AFQjCNEQFo_asNq3aBk6vnGGVE1WulLdrQ\">Anticipating heart failure with machine learning<\/a><\/p>\n<p><div><img data-recalc-dims=\"1\" decoding=\"async\" data-src=\"https:\/\/i0.wp.com\/techclot.com\/wp-content\/uploads\/2020\/10\/QuSIDU.png?w=640&#038;ssl=1\" class=\"ff-og-image-inserted lazyload\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\"><\/div>\n<div class=\"news-article--content--body--inner\">\n<div class=\"paragraph paragraph--type--content-block-text paragraph--view-mode--default\">\n<p>Every year, roughly one out of eight U.S. deaths is caused at least in part <a href=\"https:\/\/wonder.cdc.gov\/ucd-icd10.html\">by heart failure<\/a>. 
One of acute heart failure\u2019s most common warning signs is <a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/20354029\/\">excess fluid in the lungs<\/a>, a condition known as \u201cpulmonary edema.\u201d&nbsp;<\/p>\n<p>A patient\u2019s exact level of excess fluid often dictates the doctor\u2019s course of action, but&nbsp;making such determinations is difficult and requires clinicians to rely on subtle features in X-rays that sometimes lead to inconsistent diagnoses and treatment plans.<\/p>\n<p>To better handle that kind of nuance, a group led by researchers at MIT\u2019s Computer Science and Artificial Intelligence Lab (CSAIL) has developed a machine learning model that can look at an X-ray to quantify how severe the edema is, on a four-level scale ranging from 0 (healthy) to 3 (very, very bad). The system determined the right level more than half of the time, and correctly diagnosed level 3 cases 90 percent of the time.<\/p>\n<p>Working with <a href=\"https:\/\/www.bidmc.org\/\">Beth Israel Deaconess Medical Center<\/a> (BIDMC) and <a href=\"https:\/\/www.philips.com\/a-w\/research\/home\">Philips<\/a>, the team plans to integrate the model into BIDMC\u2019s emergency-room workflow this fall.<\/p>\n<p>\u201cThis project is meant to augment doctors\u2019 workflow by providing additional information that can be used to inform their diagnoses as well as enable retrospective analyses,\u201d says PhD student Ruizhi Liao, who was the co-lead author of a related paper with fellow PhD student Geeticka Chauhan and MIT professors Polina Golland and Peter Szolovits.&nbsp;<\/p>\n<p>The team says that better edema diagnosis would help doctors manage not only acute heart issues, but other conditions like sepsis and kidney failure that are strongly associated with edema.&nbsp;<\/p>\n<p>As part of a separate journal article, Liao and colleagues also took <a href=\"https:\/\/news.mit.edu\/2019\/mimic-chest-x-ray-database-0201\">an existing public dataset of X-ray images<\/a> and 
<a href=\"https:\/\/github.com\/RayRuizhiLiao\/regex_pulmonary_edema\">developed new annotations<\/a> of severity labels that were agreed upon by a team of four radiologists. Liao\u2019s hope is that these consensus labels can serve as a universal standard to benchmark future machine learning development.<\/p>\n<p>An important aspect of the system is that it was trained not just on more than 300,000 X-ray images, but also on the corresponding text of reports about the X-rays that were written by radiologists. The team was pleasantly surprised that their system found such success using these reports, most of which didn\u2019t have labels explaining the exact severity level of the edema.<\/p>\n<p>\u201cBy learning the association between images and their corresponding reports, the method has the potential for a new way of automatic report generation from the detection of image-driven findings,<strong>\u201d <\/strong>says Tanveer Syeda-Mahmood, a <a href=\"https:\/\/arxiv.org\/abs\/2007.13831\">researcher<\/a> not involved in the project who serves as chief scientist for IBM\u2019s <a href=\"https:\/\/researcher.watson.ibm.com\/researcher\/view_group.php?id=4384\">Medical Sieve Radiology Grand Challenge<\/a>. \u201cOf course, further experiments would have to be done for this to be broadly applicable to other findings and their fine-grained descriptors.\u201d<\/p>\n<p>Chauhan\u2019s efforts focused on helping the system make sense of the text of the reports, which could often be as short as a sentence or two. Different radiologists write with varying tones and use a range of terminology, so the researchers had to develop a set of linguistic rules and substitutions to ensure that data could be analyzed consistently across reports. 
This was in addition to the technical challenge of designing a model that can jointly learn meaningful image and text representations.<\/p>\n<p>\u201cOur model can turn both images and text into compact numerical abstractions from which an interpretation can be derived,\u201d says Chauhan. \u201cWe trained it to minimize the difference between the representations of the X-ray images and the text of the radiology reports, using the reports to improve the image interpretation.\u201d<\/p>\n<p>On top of that, the team\u2019s system was able to \u201cexplain\u201d itself by showing which parts of the reports and areas of X-ray images correspond to the model prediction. Chauhan is hopeful that future work in this area will provide more detailed lower-level image-text correlations, so that clinicians can build a taxonomy of images, reports, disease labels, and relevant correlated regions.&nbsp;<\/p>\n<p>\u201cThese correlations will be valuable for improving search through a large database of X-ray images and reports, to make retrospective analysis even more effective,\u201d Chauhan says.<\/p>\n<p>Chauhan, Golland, Liao and Szolovits co-wrote the paper with MIT Assistant Professor Jacob Andreas, Professor William Wells of Brigham and Women\u2019s Hospital, Xin Wang of Philips, and Seth Berkowitz and Steven Horng of BIDMC. The paper will be presented Oct. 
5 (virtually) at the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).&nbsp;<\/p>\n<p>The work was supported in part by the MIT Deshpande Center for Technological Innovation, the MIT Lincoln Lab, the National Institutes of Health, Philips, Takeda, and the Wistron Corporation.<\/p>\n<\/p><\/div>\n<\/p><\/div>\n<\/p>\n<p>Published at Thu, 01 Oct 2020 18:00:00 +0000<\/p>\n<p><a href=\"https:\/\/www.google.com\/url?rct=j&#038;sa=t&#038;url=https:\/\/healthitanalytics.com\/news\/artificial-intelligence-simplifies-covid-19-testing-workflows&#038;ct=ga&#038;cd=CAIyHDkyYmU1MGQ5NjY1NjYxZTA6Y28udWs6ZW46R0I&#038;usg=AFQjCNEOey7UPoCMi3hMBcbwoUpVEUu1Aw\">Artificial Intelligence Simplifies COVID-19 Testing, Workflows<\/a><\/p>\n<p><div><img data-recalc-dims=\"1\" decoding=\"async\" data-src=\"https:\/\/i0.wp.com\/techclot.com\/wp-content\/uploads\/2020\/10\/INx8yr.jpg?w=640&#038;ssl=1\" class=\"ff-og-image-inserted lazyload\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\"><\/div>\n<div class=\"article-top-author\">\n<p>By <a href=\"mailto:jkent@xtelligentmedia.com\">Jessica Kent<\/a><\/p>\n<\/div>\n<p><time datetime=\"2020-10-1\">October 01, 2020<\/time> &#8211;&nbsp;Tufts Medical Center <a href=\"https:\/\/www.prnewswire.com\/news-releases\/olives-ai-workforce-to-revolutionize-covid-19-testing-at-tufts-medical-center-301139423.html\">has announced<\/a> a partnership with Olive to use artificial intelligence to streamline COVID-19 testing operations and improve the care experience for both patients and providers.<\/p>\n<p><strong><em>For more coronavirus updates, visit&nbsp;<a href=\"https:\/\/patientengagementhit.com\/news\/latest-coronavirus-updates-for-the-healthcare-community\">our resource page<\/a>, updated twice daily by Xtelligent Healthcare Media.<\/em><\/strong><\/p>\n<p>Tufts will use the AI platform to automate high-volume, labor-intensive 
data entry and patient screening tasks. These new efficiencies are estimated to improve care delivery by making the in-person testing process up to seven and a half times faster, eliminating the 86 percent of patient testing time otherwise consumed by manual data entry.<\/p>\n<p>Across the US, demand for COVID-19 testing is on the rise, and healthcare organizations are working to gain access to necessary supplies and scale up testing operations. It\u2019s becoming increasingly important for hospitals and health systems to have technological infrastructure that supports the efficient collection of patient data and entry of test information.<\/p>\n<p>With this AI platform, Tufts will have additional support in scheduling, initial screening, and information entry steps \u2013 tasks that are typically performed manually. The health system will be able to <a href=\"https:\/\/healthitanalytics.com\/news\/deep-learning-models-can-detect-covid-19-in-chest-ct-scans\">expand testing capacity<\/a> and address additional challenges arising from the pandemic.<\/p>\n<p>The AI tool will save clinicians up to 50 hours per day collectively in data entry, a figure that is expected to grow as more tests are administered to the community.<\/p>\n<p>Tufts Medical Center currently administers more than one in ten, or approximately 12 percent, of all COVID-19 tests in the Boston area. 
The organization has also processed more than 100,000 COVID-19 tests in total since the start of the pandemic.<\/p>\n<p>\u201cPart of our COVID-19 response includes making testing available to as many people in our community as possible \u2013 and a key component of that is leveraging technology to support frontline workers,\u201d said&nbsp;Kristine Hanscom, CFO of Tufts Medical Center.<\/p>\n<p>\u201cWe were looking for an AI platform to strengthen and connect the moving parts in our technology infrastructure as we continue to scale testing capabilities. An AI workforce will operate behind the scenes to manage data and information processing so our clinical team can be as agile as possible as they continue to focus on delivering world-class patient care.\u201d<\/p>\n<p>Based on the symptoms and health history patients report during their screening via a secure form on the Tufts MC website, the streamlined process will direct patients to the testing site or the emergency department. Patient screening and specimen data will then be entered into the Tufts MC system to update EHR records, automatically identify data inconsistencies, and deliver more accurate, timely information to frontline providers.<\/p>\n<p>\u201cHealthcare is facing incredible challenges, and Olive is here to help hospitals and health systems address them head on,\u201d&nbsp;said Sean Lane, CEO of Olive.<\/p>\n<p>\u201cWhether it&#8217;s deploying our AI workforce to streamline testing processes, revenue cycle workflows or IT operations, Olive is committed to building lasting solutions across the enterprise. We&#8217;re honored to integrate our AI workforce at Tufts Medical Center, and proud to support our healthcare heroes.\u201d<\/p>\n<p>Throughout the COVID-19 pandemic, organizations have turned to AI and data analytics tools to help manage surges in patient volumes. 
A team from Cedars-Sinai recently <a href=\"https:\/\/healthitanalytics.com\/news\/machine-learning-tool-predicts-staffing-needs-during-covid-19\">developed<\/a> a machine learning tool that can forecast pandemic-related data points and predict staffing needs.<\/p>\n<p>The platform can track local hospitalization volumes and the rate of confirmed COVID-19 cases, running multiple <a href=\"https:\/\/healthitanalytics.com\/news\/deep-learning-model-predicts-covid-19-surges-7-days-into-the-future\">forecasting models<\/a> to help prepare for increasing COVID-19 patient volumes.<\/p>\n<p>\u201cOur goal is to have the capacity and the right care available every day to treat the patients who need us, which fluctuates on a daily basis,\u201d said Michael Thompson, executive director of Enterprise Data Intelligence at Cedars-Sinai, which developed the platform and runs the forecasts. \u201cWe need to match that daily demand with the necessary resources: beds, staff, PPE and other supplies.\u201d<\/p>\n<p>Published at Thu, 01 Oct 2020 16:52:30 +0000<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Anticipating heart failure with machine learning Every year, roughly one out of eight U.S. 
deaths&#8230;<\/p>\n","protected":false},"author":3,"featured_media":3085,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[3],"tags":[],"class_list":["post-3087","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence"],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/techclot.com\/wp-content\/uploads\/2020\/10\/QuSIDU.png?fit=1000%2C667&ssl=1","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p3orZX-NN","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/posts\/3087","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/comments?post=3087"}],"version-history":[{"count":0,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/posts\/3087\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/media\/3085"}],"wp:attachment":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/media?parent=3087"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/categories?post=3087"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/tags?post=3087"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}