{"id":1072,"date":"2019-03-27T17:34:21","date_gmt":"2019-03-27T17:34:21","guid":{"rendered":"http:\/\/techclot.com\/index.php\/2019\/03\/27\/capsule-networks\/"},"modified":"2019-03-27T17:34:29","modified_gmt":"2019-03-27T17:34:29","slug":"capsule-networks","status":"publish","type":"post","link":"https:\/\/techclot.com\/index.php\/2019\/03\/27\/capsule-networks\/","title":{"rendered":"Capsule Networks"},"content":{"rendered":"<p>If you want to blame someone for the hoopla around artificial intelligence, Google researcher Geoff Hinton is a good candidate.<\/p>\n<p>Today neural networks transcribe our speech, recognize our pets, and fight our trolls.<\/p>\n<p>But Hinton now belittles the technology he helped bring to the world. \u201cI think the way we\u2019re doing computer vision is just wrong,\u201d he says. \u201cIt works better than anything else at present but that doesn\u2019t mean it\u2019s right.\u201d<\/p>\n<p>In its place, Hinton has unveiled another \u201cold\u201d idea that might transform how computers see\u2014and reshape AI. 
That\u2019s important because computer vision is crucial to ideas such as self-driving cars and software that plays doctor.<\/p>\n<p><div class=\"jetpack-video-wrapper\"><span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe class=\"youtube-player lazyload\" width=\"640\" height=\"360\" data-src=\"https:\/\/www.youtube.com\/embed\/rTawFwUvnLE?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=opaque\" allowfullscreen=\"true\" style=\"border:0;\" sandbox=\"allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe><\/span><\/div>Brain &amp; Cognitive Sciences &#8211; Fall Colloquium Series, recorded December 4, 2014. In this talk, given at MIT, Geoffrey Hinton discusses his capsules project.<\/p>\n<p>Late last week, <strong>Hinton released two research papers<\/strong> that he says prove out an idea he\u2019s been mulling for almost 40 years. \u201cIt\u2019s made a lot of intuitive sense to me for a very long time, it just hasn\u2019t worked well,\u201d Hinton says. \u201cWe\u2019ve finally got something that works well.\u201d<\/p>\n<p>Hinton\u2019s new approach, known as capsule networks, is a twist on neural networks intended to make machines better able to understand the world through images or video. 
In one of the papers posted last week, Hinton\u2019s <strong>capsule networks<\/strong> matched the accuracy of the best previous techniques on a standard test of how well software can learn to recognize handwritten digits.<\/p>\n<p>In the second, capsule networks almost halved the best previous error rate on a test that challenges software to recognize toys such as trucks and cars from different angles.<\/p>\n<p><img data-recalc-dims=\"1\" decoding=\"async\" class=\"thumb-image lazyload\" alt=\"This graph shows interest in &quot;capsule networks&quot; over the past 5 years. It is safe to say these two papers have sparked a considerable rise in interest.\" data-src=\"https:\/\/i0.wp.com\/techclot.com\/wp-content\/uploads\/2019\/03\/sudden-increase-in-interest-for-capsule-networks.png?w=640\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" \/><\/p>\n<p>This graph shows interest in &#8220;capsule networks&#8221; over the past 5 years. It is safe to say these two papers have sparked a considerable rise in interest.<\/p>\n<p><img data-recalc-dims=\"1\" decoding=\"async\" class=\"thumb-image lazyload\" alt=\"Not surprisingly the interest levels for &quot;capsule networks&quot; by region include all the usual suspects. 
Interest levels for AI in India are huge!\" data-src=\"https:\/\/i0.wp.com\/techclot.com\/wp-content\/uploads\/2019\/03\/regional-interest-in-capsule-networks.png?w=640\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" \/><\/p>\n<p>Not surprisingly the interest levels for &#8220;capsule networks&#8221; by region include all the usual suspects. Interest levels for AI in India are huge!<\/p>\n<h2>The Two Capsule Networks Research Papers<\/h2>\n<p><strong>1. Dynamic Routing Between Capsules<\/strong> at <a target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1710.09829\">https:\/\/arxiv.org\/abs\/1710.09829<\/a><br \/>Abstract: A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule. <strong>[Sara Sabour, Nicholas Frosst, Geoffrey E Hinton]<\/strong><\/p>\n<p><strong>2. 
Matrix Capsules with EM Routing<\/strong> at <a target=\"_blank\" href=\"https:\/\/openreview.net\/forum?id=HJWLfGWRb&amp;noteId=HJWLfGWRb\">https:\/\/openreview.net\/forum?id=HJWLfGWRb&amp;noteId=HJWLfGWRb<\/a><br \/>Abstract: A capsule is a group of neurons whose outputs represent different properties of the same entity. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4&#215;4 pose matrix which could learn to represent the relationship between that entity and the viewer. A capsule in one layer votes for the pose matrices of many different capsules in the layer above by multiplying its own pose matrix by viewpoint-invariant transformation matrices that could learn to represent part-whole relationships. Each of these votes is weighted by an assignment coefficient. These coefficients are iteratively updated using the EM algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes. The whole system is trained discriminatively by unrolling 3 iterations of EM between each pair of adjacent layers. On the smallNORB benchmark, capsules reduce the number of test errors by 45% compared to the state-of-the-art. Capsules also show far more resistance to white box adversarial attack than our baseline convolutional neural network. 
<strong>[Anonymous]<\/strong><\/p>\n<p>                                                                        <img data-recalc-dims=\"1\" decoding=\"async\" class=\"thumb-image lazyload\" alt=\"capsule-networks-at-wired.png\" data-src=\"https:\/\/i0.wp.com\/techclot.com\/wp-content\/uploads\/2019\/03\/capsule-networks-at-wired.png?w=640\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" \/>                                                                          \t<a href=\"https:\/\/www.wired.com\/story\/googles-ai-wizard-unveils-a-new-twist-on-neural-networks\/\" class=\"sqs-block-button-element--small sqs-block-button-element\">Learn More<\/a>                                                                         <img data-recalc-dims=\"1\" decoding=\"async\" class=\"thumb-image lazyload\" alt=\"capsule-networks-at-mit-technology-review.png\" data-src=\"https:\/\/i0.wp.com\/techclot.com\/wp-content\/uploads\/2019\/03\/capsule-networks-at-mit-technology-review.png?w=640\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" \/>                                                                          \t<a href=\"https:\/\/www.technologyreview.com\/the-download\/609297\/google-researchers-have-a-new-alternative-to-traditional-neural-networks\/\" class=\"sqs-block-button-element--small sqs-block-button-element\">Learn Even More<\/a><br \/>\n<a rel=\"nofollow\" href=\"https:\/\/www.artificial-intelligence.blog\/news\/capsule-networks\">2019 Artificial Intelligence News &#8211; AI News<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>If you want to blame someone for the hoopla around artificial intelligence, Google researcher 
Geoff&#8230;<\/p>\n","protected":false},"author":1,"featured_media":1073,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[3],"tags":[2633,1694],"class_list":["post-1072","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","tag-capsule","tag-networks"],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/techclot.com\/wp-content\/uploads\/2019\/03\/sudden-increase-in-interest-for-capsule-networks.png?fit=1000%2C206&ssl=1","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p3orZX-hi","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/posts\/1072","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/comments?post=1072"}],"version-history":[{"count":1,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/posts\/1072\/revisions"}],"predecessor-version":[{"id":1077,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/posts\/1072\/revisions\/1077"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/media\/1073"}],"wp:attachment":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/media?parent=1072"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/categories?post=1072"},{"taxono
my":"post_tag","embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/tags?post=1072"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
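The two abstracts quoted in the post describe their routing mechanisms only in prose. As a rough illustration, here is a minimal NumPy sketch of the routing-by-agreement step from the first paper and the pose-matrix voting from the second; the array shapes, names (`u_hat`, `poses`, `transforms`), and random data are assumptions for demonstration, not the authors' reference implementation.

```python
import numpy as np

def squash(s, axis=-1):
    # Non-linearity from the first paper: the output keeps the input's
    # direction, while its length (the "existence probability") stays below 1.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + 1e-9)

def routing_by_agreement(u_hat, n_iters=3):
    """Iterative routing-by-agreement as described in the first abstract.
    u_hat[i, j] is lower capsule i's predicted vector for higher capsule j."""
    n_lower, n_higher, _ = u_hat.shape
    b = np.zeros((n_lower, n_higher))  # routing logits, start uniform
    v = None
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum per higher capsule
        v = squash(s)                                         # higher-capsule outputs
        b += (u_hat * v[None, :, :]).sum(axis=-1)             # reward big scalar products
    return v

# Pose-matrix voting from the second abstract: each lower capsule's 4x4 pose
# is multiplied by a learned viewpoint-invariant transform to vote for each
# higher capsule's pose. (The EM clustering of these votes is omitted here.)
rng = np.random.default_rng(0)
poses = rng.normal(size=(6, 4, 4))          # 6 lower capsules
transforms = rng.normal(size=(6, 3, 4, 4))  # one transform per (lower, higher) pair
votes = np.einsum('iab,ijbc->ijac', poses, transforms)
```

In the second paper, the assignment coefficients weighting these votes are fitted with a few EM iterations between each pair of adjacent layers, rather than the softmax-over-logits update used above.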