{"id":3100,"date":"2020-10-02T22:10:51","date_gmt":"2020-10-02T22:10:51","guid":{"rendered":"https:\/\/techclot.com\/index.php\/2020\/10\/02\/google-cambridge-deepmind-alan-turing-institutes-performer-transformer-slashes-compute\/"},"modified":"2020-10-02T22:10:51","modified_gmt":"2020-10-02T22:10:51","slug":"google-cambridge-deepmind-alan-turing-institutes-performer-transformer-slashes-compute","status":"publish","type":"post","link":"https:\/\/techclot.com\/index.php\/2020\/10\/02\/google-cambridge-deepmind-alan-turing-institutes-performer-transformer-slashes-compute\/","title":{"rendered":"Google, Cambridge, DeepMind &amp; Alan Turing Institute&#8217;s &#8216;Performer&#8217; Transformer Slashes Compute &#8230;"},"content":{"rendered":"<p><a href=\"https:\/\/www.google.com\/url?rct=j&#038;sa=t&#038;url=https:\/\/syncedreview.com\/2020\/10\/02\/google-cambridge-deepmind-alan-turing-institutes-performer-transformer-slashes-compute-costs\/&#038;ct=ga&#038;cd=CAIyHDkyYmU1MGQ5NjY1NjYxZTA6Y28udWs6ZW46R0I&#038;usg=AFQjCNGClxFKPBUYdbMFbefksfsGJYYUgA\">Google, Cambridge, DeepMind &#038; Alan Turing Institute&#8217;s &#8216;Performer&#8217; Transformer Slashes Compute &#8230;<\/a><\/p>\n<p><p>It\u2019s no coincidence that Transformer neural network architecture is gaining popularity across so many machine learning research fields. Best known for natural language processing (NLP) tasks, Transformers not only enabled OpenAI\u2019s 175 billion parameter language model<a href=\"https:\/\/arxiv.org\/pdf\/2005.14165.pdf\"> GPT-3<\/a> to deliver SOTA performance, the power- and potential-packed architecture also helped DeepMind\u2019s <a href=\"https:\/\/deepmind.com\/blog\/article\/alphastar-mastering-real-time-strategy-game-starcraft-ii\">AlphaStar<\/a> bot defeat professional StarCraft players. 
Researchers have now introduced a way to make Transformers more compute-efficient, scalable and accessible.<\/p>\n<p>While previous learning approaches such as RNNs suffered from vanishing gradient problems, Transformers\u2019 game-changing self-attention mechanism eliminated such issues. As explained in the paper introducing Transformers \u2014 <a href=\"https:\/\/papers.nips.cc\/paper\/7181-attention-is-all-you-need.pdf\"><em>Attention Is All You Need<\/em><\/a>, the novel architecture is based on a <strong>trainable attention mechanism<\/strong> that identifies complex dependencies between input sequence elements.<\/p>\n<p>Transformers, however, scale quadratically as the number of tokens in an input sequence grows, making their use <strong>prohibitively expensive<\/strong> for large numbers of tokens. Even when fed with moderate token inputs, Transformers\u2019 gluttonous appetite for computational resources can be difficult for many researchers to satisfy.<\/p>\n<p>A team from <strong>Google, University of Cambridge, DeepMind, and Alan Turing Institute <\/strong>has proposed a new type of Transformer dubbed <strong>Performer<\/strong>, based on a<strong> F<\/strong>ast <strong>A<\/strong>ttention <strong>V<\/strong>ia positive <strong>O<\/strong>rthogonal <strong>R<\/strong>andom features (<strong>FAVOR+<\/strong>) backbone mechanism. 
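The core trick behind FAVOR+ can be sketched in a few lines of NumPy: the softmax kernel exp(q . k) is rewritten as an expectation over positive random features, so attention becomes two thin matrix products instead of one L x L product. The sketch below is illustrative, not the authors' code: it uses plain i.i.d. Gaussian projections, whereas the paper additionally orthogonalizes them in blocks to reduce estimator variance, and all function names are made up for this example.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Regular softmax attention: materializes an L x L matrix,
    so time and memory grow quadratically with sequence length L."""
    A = np.exp(Q @ K.T)                        # (L, L) unnormalized weights
    return (A @ V) / A.sum(axis=1, keepdims=True)

def positive_features(X, W):
    """Positive random features for the softmax kernel: with rows of W
    drawn from N(0, I), E[phi(q) . phi(k)] = exp(q . k), and every
    feature is strictly positive, which keeps the estimator stable."""
    m = W.shape[0]
    return np.exp(X @ W.T - 0.5 * (X ** 2).sum(axis=-1, keepdims=True)) / np.sqrt(m)

def favor_attention(Q, K, V, n_features=4096, seed=0):
    """Linear-complexity attention estimate: no L x L matrix is formed."""
    W = np.random.default_rng(seed).standard_normal((n_features, Q.shape[1]))
    Qf, Kf = positive_features(Q, W), positive_features(K, W)   # (L, m) each
    numerator = Qf @ (Kf.T @ V)            # (L, m) @ (m, d): linear in L
    denominator = Qf @ Kf.sum(axis=0)      # per-row softmax normalizer
    return numerator / denominator[:, None]
```

Because only phi(K)^T V (an m x d matrix) and an m-vector of key sums are stored, sequence length enters the cost linearly; with enough features the two functions agree closely on modest inputs.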
The team designed Performer to be \u201ccapable of provably accurate and practical estimation of regular (softmax) full rank attention, but of only linear space and time complexity, without relying on any priors such as sparsity or low-rankness.\u201d<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img decoding=\"async\" data-attachment-id=\"25108\" data-permalink=\"https:\/\/syncedreview.com\/2020\/10\/02\/google-cambridge-deepmind-alan-turing-institutes-performer-transformer-slashes-compute-costs\/image-1-90\/\" data-orig-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-1.png?fit=1142%2C302&amp;ssl=1\" data-orig-size=\"1142,302\" data-comments-opened=\"1\" data-image-meta=\"\" data-image-title=\"image-1\" data-image-description data-medium-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-1.png?fit=300%2C79&amp;ssl=1\" data-large-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-1.png?fit=950%2C251&amp;ssl=1\" width=\"950\" height=\"251\" data-src=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-1.png?resize=950%2C251&amp;ssl=1\" alt=\"image.png\" class=\"wp-image-25108 lazyload\" data-srcset=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-1.png?resize=1024%2C271&amp;ssl=1 1024w, https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-1.png?resize=300%2C79&amp;ssl=1 300w, https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-1.png?resize=768%2C203&amp;ssl=1 768w, https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-1.png?resize=600%2C159&amp;ssl=1 600w, https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-1.png?w=1142&amp;ssl=1 1142w\" data-sizes=\"auto, (max-width: 950px) 100vw, 950px\" data-recalc-dims=\"1\" 
src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 950px; --smush-placeholder-aspect-ratio: 950\/251;\"><\/figure>\n<\/div>\n<p>Softmax has been a bottleneck in attention-based Transformer computation. Transformers typically use a learned linear transformation and a softmax function to convert decoder outputs to predicted next-token probabilities. The proposed FAVOR+ mechanism instead estimates the softmax and Gaussian kernels with positive orthogonal random features, yielding a robust and unbiased estimation of regular softmax attention. <strong>The research confirms that using positive features can efficiently train softmax-based linear Transformers.<\/strong><\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img decoding=\"async\" data-attachment-id=\"25111\" data-permalink=\"https:\/\/syncedreview.com\/2020\/10\/02\/google-cambridge-deepmind-alan-turing-institutes-performer-transformer-slashes-compute-costs\/image-4-64\/\" data-orig-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-4.png?fit=1234%2C621&amp;ssl=1\" data-orig-size=\"1234,621\" data-comments-opened=\"1\" data-image-meta=\"\" data-image-title=\"image-4\" data-image-description data-medium-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-4.png?fit=300%2C151&amp;ssl=1\" data-large-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-4.png?fit=950%2C478&amp;ssl=1\" width=\"950\" height=\"478\" data-src=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-4.png?resize=950%2C478&amp;ssl=1\" alt=\"image.png\" class=\"wp-image-25111 lazyload\" data-srcset=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-4.png?resize=1024%2C515&amp;ssl=1 1024w, 
https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-4.png?resize=300%2C151&amp;ssl=1 300w, https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-4.png?resize=768%2C386&amp;ssl=1 768w, https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-4.png?resize=600%2C302&amp;ssl=1 600w, https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-4.png?w=1234&amp;ssl=1 1234w\" data-sizes=\"auto, (max-width: 950px) 100vw, 950px\" data-recalc-dims=\"1\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 950px; --smush-placeholder-aspect-ratio: 950\/478;\"><\/figure>\n<\/div>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img decoding=\"async\" data-attachment-id=\"25110\" data-permalink=\"https:\/\/syncedreview.com\/2020\/10\/02\/google-cambridge-deepmind-alan-turing-institutes-performer-transformer-slashes-compute-costs\/image-3-75\/\" data-orig-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-3.png?fit=1216%2C558&amp;ssl=1\" data-orig-size=\"1216,558\" data-comments-opened=\"1\" data-image-meta=\"\" data-image-title=\"image-3\" data-image-description data-medium-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-3.png?fit=300%2C138&amp;ssl=1\" data-large-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-3.png?fit=950%2C436&amp;ssl=1\" width=\"950\" height=\"436\" data-src=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-3.png?resize=950%2C436&amp;ssl=1\" alt=\"image.png\" class=\"wp-image-25110 lazyload\" data-srcset=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-3.png?resize=1024%2C470&amp;ssl=1 1024w, 
https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-3.png?resize=300%2C138&amp;ssl=1 300w, https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-3.png?resize=768%2C352&amp;ssl=1 768w, https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-3.png?resize=600%2C275&amp;ssl=1 600w, https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-3.png?w=1216&amp;ssl=1 1216w\" data-sizes=\"auto, (max-width: 950px) 100vw, 950px\" data-recalc-dims=\"1\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 950px; --smush-placeholder-aspect-ratio: 950\/436;\"><\/figure>\n<\/div>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img decoding=\"async\" data-attachment-id=\"25109\" data-permalink=\"https:\/\/syncedreview.com\/2020\/10\/02\/google-cambridge-deepmind-alan-turing-institutes-performer-transformer-slashes-compute-costs\/image-2-86\/\" data-orig-file=\"https:\/\/i1.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-2.png?fit=1264%2C436&amp;ssl=1\" data-orig-size=\"1264,436\" data-comments-opened=\"1\" data-image-meta=\"\" data-image-title=\"image-2\" data-image-description data-medium-file=\"https:\/\/i1.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-2.png?fit=300%2C103&amp;ssl=1\" data-large-file=\"https:\/\/i1.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-2.png?fit=950%2C327&amp;ssl=1\" width=\"950\" height=\"327\" data-src=\"https:\/\/i1.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-2.png?resize=950%2C327&amp;ssl=1\" alt=\"image.png\" class=\"wp-image-25109 lazyload\" data-srcset=\"https:\/\/i1.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-2.png?resize=1024%2C353&amp;ssl=1 1024w, 
https:\/\/i1.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-2.png?resize=300%2C103&amp;ssl=1 300w, https:\/\/i1.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-2.png?resize=768%2C265&amp;ssl=1 768w, https:\/\/i1.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-2.png?resize=1260%2C436&amp;ssl=1 1260w, https:\/\/i1.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-2.png?resize=600%2C207&amp;ssl=1 600w, https:\/\/i1.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/10\/image-2.png?w=1264&amp;ssl=1 1264w\" data-sizes=\"auto, (max-width: 950px) 100vw, 950px\" data-recalc-dims=\"1\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 950px; --smush-placeholder-aspect-ratio: 950\/327;\"><\/figure>\n<\/div>\n<p>Leveraging detailed mathematical theorems, the paper demonstrates that rather than relying solely on computational resources to boost performance, it is also possible to develop improved and efficient Transformer architectures that have significantly lower energy consumption. Also, because Performers use the same training hyperparameters as Transformers, the FAVOR+ mechanism can function as a simple drop-in without much tuning.<\/p>\n<p>The team tested Performers on a rich set of tasks ranging from pixel-prediction to protein sequence modelling. In their experimental setup, a Performer only replaced a regular Transformer\u2019s attention component with the FAVOR+ mechanism. On the challenging task of training a 36-layer model using protein sequences, the Performer-based model (Performer-RELU) achieved better performance than the baseline Transformer models Reformer and Linformer, which showed significant drops in accuracy. On the standard ImageNet64 benchmark, a Performer with six layers matched the accuracy of a Reformer with 12 layers. 
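The pixel-prediction and protein-modelling experiments are autoregressive, so attention must be causal. With kernel feature maps, causality does not require a masked L x L matrix: running prefix sums over the key features are enough. A minimal sketch under that assumption (the feature maps are taken as given, and the function name is illustrative, not from the authors' code):

```python
import numpy as np

def causal_attention_prefix(Qf, Kf, V):
    """Causal (unidirectional) attention from feature maps via prefix sums.

    Position i attends only to positions j <= i. The running sums S and z
    replace the masked L x L attention matrix, so the cost is O(L * m * d)
    time and O(m * d) memory for L tokens, m features, d value dims."""
    L, m = Qf.shape
    d = V.shape[1]
    S = np.zeros((m, d))   # running sum of outer(Kf[j], V[j]) for j <= i
    z = np.zeros(m)        # running sum of Kf[j], for the normalizer
    out = np.empty((L, d))
    for i in range(L):
        S += np.outer(Kf[i], V[i])
        z += Kf[i]
        out[i] = (Qf[i] @ S) / (Qf[i] @ z)
    return out
```

The loop reproduces, position by position, exactly what a lower-triangular mask over Qf @ Kf.T would compute, which makes it easy to check against the quadratic form on small inputs.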
After optimizations, Performer was also twice as fast as Reformer.<\/p>\n<p>Because Performer-enabled scalable Transformer architectures can handle much longer sequences without imposing constraints on the structure of the attention mechanism, while remaining accurate and robust, it is believed they could lead to breakthroughs in bioinformatics, where technologies such as language modelling for proteins have already shown strong potential.<\/p>\n<p>The paper <em>Rethinking Attention With Performers <\/em>is on <a href=\"https:\/\/arxiv.org\/pdf\/2009.14794.pdf\">arXiv<\/a>.<\/p>\n<hr class=\"wp-block-separator\">\n<p><strong>Reporter<\/strong>: Fangyu Cai | <strong>Editor<\/strong>: Michael Sarazen<\/p>\n<hr class=\"wp-block-separator\">\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" data-attachment-id=\"24633\" data-permalink=\"https:\/\/syncedreview.com\/2020\/09\/10\/tinyspeech-novel-attention-condensers-enable-deep-recognition-networks-on-edge-devices\/image-66-20\/\" data-orig-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/09\/image-66.png?fit=567%2C230&amp;ssl=1\" data-orig-size=\"567,230\" data-comments-opened=\"1\" data-image-meta=\"\" data-image-title=\"image-66\" data-image-description data-medium-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/09\/image-66.png?fit=300%2C122&amp;ssl=1\" data-large-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/09\/image-66.png?fit=567%2C230&amp;ssl=1\" width=\"567\" height=\"230\" data-src=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/09\/image-66.png?resize=567%2C230&amp;ssl=1\" alt=\"B4.png\" class=\"wp-image-24633 lazyload\" data-srcset=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/09\/image-66.png?w=567&amp;ssl=1 567w, https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/09\/image-66.png?resize=300%2C122&amp;ssl=1 300w\" data-sizes=\"auto, (max-width: 567px) 
100vw, 567px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 567px; --smush-placeholder-aspect-ratio: 567\/230;\"><\/figure>\n<\/div>\n<p><strong>Synced Report |&nbsp;<\/strong><a href=\"https:\/\/payhip.com\/b\/Mdme\"><strong>A Survey of China\u2019s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic \u2014 87 Case Studies from 700+ AI Vendors<\/strong><\/a><\/p>\n<p>This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on&nbsp;<a href=\"https:\/\/www.amazon.com\/Artificial-Intelligence-Solutions-Response-COVID-19-ebook\/dp\/B08C373G1B\/ref=sr_1_1?dchild=1&amp;keywords=synced+global&amp;qid=1594150418&amp;s=digital-text&amp;sr=1-1\">Amazon Kindle<\/a>.&nbsp;<strong>Along with this report, we also introduced a&nbsp;<\/strong><a href=\"https:\/\/payhip.com\/b\/i5bN\"><strong>database<\/strong><\/a><strong>&nbsp;covering additional 1428 artificial intelligence solutions from 12 pandemic scenarios.<\/strong><\/p>\n<p>Click&nbsp;<a href=\"https:\/\/payhip.com\/SyncedReview\">here<\/a>&nbsp;to find more reports from us.<\/p>\n<hr class=\"wp-block-separator\">\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" data-attachment-id=\"24634\" data-permalink=\"https:\/\/syncedreview.com\/2020\/09\/10\/tinyspeech-novel-attention-condensers-enable-deep-recognition-networks-on-edge-devices\/image-67-21\/\" data-orig-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/09\/image-67.png?fit=567%2C230&amp;ssl=1\" data-orig-size=\"567,230\" data-comments-opened=\"1\" data-image-meta=\"\" data-image-title=\"image-67\" data-image-description data-medium-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/09\/image-67.png?fit=300%2C122&amp;ssl=1\" 
data-large-file=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/09\/image-67.png?fit=567%2C230&amp;ssl=1\" width=\"567\" height=\"230\" data-src=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/09\/image-67.png?resize=567%2C230&amp;ssl=1\" alt=\"AI Weekly.png\" class=\"wp-image-24634 lazyload\" data-srcset=\"https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/09\/image-67.png?w=567&amp;ssl=1 567w, https:\/\/i0.wp.com\/syncedreview.com\/wp-content\/uploads\/2020\/09\/image-67.png?resize=300%2C122&amp;ssl=1 300w\" data-sizes=\"auto, (max-width: 567px) 100vw, 567px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 567px; --smush-placeholder-aspect-ratio: 567\/230;\"><\/figure>\n<\/div>\n<p>We know you don\u2019t want to miss any news or research breakthroughs.<strong>&nbsp;Subscribe to our popular newsletter&nbsp;<em><a href=\"https:\/\/mailchi.mp\/2fb3aa308ad3\/welcome-to-synced-global-ai-weekly-newsletter\">Synced Global AI Weekly<\/a><\/em>&nbsp;to get weekly AI updates.<\/strong><\/p>\n<p>Published at Fri, 02 Oct 2020 20:03:45 +0000<\/p>\n<p><a href=\"https:\/\/www.google.com\/url?rct=j&#038;sa=t&#038;url=https:\/\/www.prnewswire.com\/news-releases\/amesite-ceo-dr-ann-marie-sastry-scheduled-to-appear-on-fox-business-networks-mornings-with-maria-monday-at-630-am-et-301145022.html&#038;ct=ga&#038;cd=CAIyHDkyYmU1MGQ5NjY1NjYxZTA6Y28udWs6ZW46R0I&#038;usg=AFQjCNGNcqn3SdM2Ejr6TpRqPVAIt7Kwgw\">Amesite CEO Dr. Ann Marie Sastry Scheduled to Appear on Fox Business Network&#8217;s Mornings With &#8230;<\/a><\/p>\n<p><p><span class=\"xn-location\">ANN ARBOR, Mich.<\/span>, <span class=\"xn-chron\">Oct. 2, 2020<\/span> \/PRNewswire\/ &#8212;&nbsp;<b>Amesite<\/b> <b>Inc<\/b>. (Nasdaq: <a class=\"ticket-symbol\" href=\"https:\/\/www.prnewswire.com\/news-releases\/amesite-ceo-dr-ann-marie-sastry-scheduled-to-appear-on-fox-business-networks-mornings-with-maria-monday-at-630-am-et-301145022.html#financial-modal\">AMST<\/a>), an artificial intelligence software company providing online learning ecosystems for business, higher education, and K-12, announced today its CEO, Dr. <span class=\"xn-person\">Ann Marie Sastry<\/span>, is scheduled to appear on Fox Business Network&#8217;s <i>Mornings With Maria<\/i> Monday morning at <span class=\"xn-chron\">6:30 a.m. ET<\/span>.&nbsp;&nbsp;<\/p>\n<p>Dr. Sastry will discuss the latest in remote learning and how it is impacting students, parents, teachers, and administrators. She will also explain how artificial intelligence is helping to power Amesite&#8217;s next generation online learning platform and why innovation in the EdTech space has become so crucial. <\/p>\n<p><b>About Amesite Inc.<\/b><\/p>\n<p>Amesite is a high tech artificial intelligence software company offering a cloud-based platform and content creation services for K-12, college, university and business education and upskilling. 
Amesite-offered courses and programs are branded to our&nbsp;customers.&nbsp; Amesite uses artificial intelligence technologies to provide customized environments for learners, easy-to-manage interfaces for instructors, and greater accessibility for learners in the US education market and beyond.&nbsp; The Company leverages existing institutional infrastructures, adding mass customization and cutting-edge technology to provide cost-effective, scalable and engaging experiences for&nbsp;learners anywhere.&nbsp; For more information, visit&nbsp;<b><a href=\"https:\/\/c212.net\/c\/link\/?t=0&amp;l=en&amp;o=2938553-1&amp;h=1123252505&amp;u=https%3A%2F%2Fprotect-us.mimecast.com%2Fs%2FIxe3Cn5VMxT7o6OLSJu1kS%3Fdomain%3Dglobenewswire.com&amp;a=https%3A%2F%2Famesite.com\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">https:\/\/amesite.com<\/a><\/b>.<\/p>\n<p><b><u>Forward Looking Statements<\/u><\/b><\/p>\n<p>This communication contains forward-looking statements (including within the meaning of Section&nbsp;21E of the Securities Exchange Act of 1934, as amended, and Section&nbsp;27A of the Securities Act of 1933, as amended) concerning the Company, the Company&#8217;s planned online machine learning platform, the Company&#8217;s business plans, any future commercialization of the Company&#8217;s online learning solutions, potential customers, business objectives and other matters. Forward-looking statements generally include statements that are predictive in nature and depend upon or refer to future events or conditions, and include words such as &#8220;may,&#8221; &#8220;will,&#8221; &#8220;should,&#8221; &#8220;would,&#8221; &#8220;expect,&#8221; &#8220;plan,&#8221; &#8220;believe,&#8221; &#8220;intend,&#8221; &#8220;look forward,&#8221; and other similar expressions among others. Statements that are not historical facts are forward-looking statements. 
Forward-looking statements are based on current beliefs and assumptions that are subject to risks and uncertainties and are not guarantees of future performance. Actual results could differ materially from those contained in any forward-looking statement. Risks facing the Company and its planned platform are set forth in the Company&#8217;s filings with the SEC. Except as required by applicable law, the Company undertakes no obligation to revise or update any forward-looking statement, or to make any other forward-looking statements, whether as a result of new information, future events or otherwise.<\/p>\n<p>Media Contact \u2013 <span class=\"xn-person\">Robert Busweiler<\/span> \u2013 <a href=\"https:\/\/www.prnewswire.com\/cdn-cgi\/l\/email-protection#ed8f989e9a888481889fad9e98839e858483889e8c8e859ec38e8280\" rel=\"nofollow noopener noreferrer\" target=\"_blank\"><span class=\"__cf_email__\" data-cfemail=\"ea889f999d8f83868f98aa999f84998283848f998b898299c4898587\">[email&nbsp;protected]<\/span><\/a> \u2013 631.379.6454<\/p>\n<p>SOURCE Amesite Inc.<\/p>\n<p><img data-recalc-dims=\"1\" decoding=\"async\" alt data-src=\"https:\/\/i0.wp.com\/rt.prnewswire.com\/rt.gif?w=640&#038;ssl=1\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\"><\/p>\n<h4> Related Links<\/h4>\n<p> <a title=\"Link to https:\/\/amesite.com\" href=\"https:\/\/amesite.com\" class=\"linkOnClick\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">https:\/\/amesite.com<\/a><\/p>\n<\/p>\n<p>Published at Fri, 02 Oct 2020 18:33:45 +0000<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Google, Cambridge, DeepMind &#038; Alan Turing Institute&#8217;s &#8216;Performer&#8217; Transformer Slashes Compute &#8230; It\u2019s no 
coincidence&#8230;<\/p>\n","protected":false},"author":3,"featured_media":3101,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[3],"tags":[],"class_list":["post-3100","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence"],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/techclot.com\/wp-content\/uploads\/2020\/10\/image-1.png?fit=1142%2C302&ssl=1","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p3orZX-O0","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/posts\/3100","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/comments?post=3100"}],"version-history":[{"count":0,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/posts\/3100\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/media\/3101"}],"wp:attachment":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/media?parent=3100"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/categories?post=3100"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/tags?post=3100"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}