{"id":296,"date":"2022-03-16T15:14:36","date_gmt":"2022-03-16T20:14:36","guid":{"rendered":"https:\/\/baylor.ai\/?p=296"},"modified":"2022-03-16T15:17:45","modified_gmt":"2022-03-16T20:17:45","slug":"hybrid-quantum-variational-autoencoders-for-representation-learning","status":"publish","type":"post","link":"https:\/\/lab.rivas.ai\/?p=296","title":{"rendered":"Hybrid Quantum Variational Autoencoders for Representation Learning"},"content":{"rendered":"\n<p>One of our recent papers introduces a novel hybrid quantum machine learning approach to unsupervised representation learning by using a quantum variational circuit that is trainable with traditional gradient descent techniques. Access it here: [&nbsp;<a href=\"https:\/\/www.rivas.ai\/bibs\/rivas2021hybrid.bib\">bib<\/a>&nbsp;|&nbsp;<a href=\"https:\/\/www.rivas.ai\/pdfs\/rivas2021hybrid.pdf\">.pdf<\/a>&nbsp;]<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"513\" height=\"525\" src=\"https:\/\/baylor.ai\/wp-content\/uploads\/2022\/03\/image-2.png\" alt=\"\" class=\"wp-image-298\" srcset=\"https:\/\/lab.rivas.ai\/wp-content\/uploads\/2022\/03\/image-2.png 513w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2022\/03\/image-2-293x300.png 293w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2022\/03\/image-2-106x108.png 106w\" sizes=\"auto, (max-width: 513px) 100vw, 513px\" \/><\/figure>\n\n\n\nQuantum machine learning has gained considerable popularity in recent years. Some of the most notable efforts involve variational approaches (Cerezo 2021, Khoshaman 2018, Yuan 2019). Researchers have shown that these models are effective on complex tasks, which warrants further study and opens new doors for applied quantum machine learning research. \n\nAnother popular approach is to perform kernel learning using quantum methods (Blank 2020, Schuld 2019, Rebentrost 2014). 
In this case the kernel-based projection of data <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-bcda923e732ff6e429d93d0fa7ea8a47_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#109;&#97;&#116;&#104;&#98;&#102;&#123;&#120;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"8\" width=\"11\" style=\"vertical-align: 0px;\"\/> produces a feasible linear mapping to the desired target <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-0af556714940c351c933bba8cf840796_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#121;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"9\" style=\"vertical-align: -4px;\"\/> as follows: \n<p class=\"ql-center-displayed-equation\" style=\"line-height: 64px;\"><span class=\"ql-right-eqno\"> (1) <\/span><span class=\"ql-left-eqno\"> &nbsp; <\/span><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-97c0960a4bcf55dc59ba1300144f8a3d_l3.png\" height=\"64\" width=\"257\" class=\"ql-img-displayed-equation quicklatex-auto-format\" alt=\"&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125; &#32;&#32;&#32;&#32;&#121;&#40;&#92;&#109;&#97;&#116;&#104;&#98;&#102;&#123;&#120;&#125;&#41;&#61;&#92;&#111;&#112;&#101;&#114;&#97;&#116;&#111;&#114;&#110;&#97;&#109;&#101;&#123;&#115;&#105;&#103;&#110;&#125;&#92;&#108;&#101;&#102;&#116;&#40;&#92;&#115;&#117;&#109;&#95;&#123;&#106;&#61;&#49;&#125;&#94;&#123;&#77;&#125;&#32;&#92;&#97;&#108;&#112;&#104;&#97;&#95;&#123;&#106;&#125;&#32;&#107;&#92;&#108;&#101;&#102;&#116;&#40;&#92;&#109;&#97;&#116;&#104;&#98;&#102;&#123;&#120;&#125;&#95;&#123;&#106;&#125;&#44;&#32;&#92;&#109;&#97;&#116;&#104;&#98;&#102;&#123;&#120;&#125;&#92;&#114;&#105;&#103;&#104;&#116;&#41;&#43;&#98;&#92;&#114;&#105;&#103;&#104;&#116;&#41; 
&#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;\" title=\"Rendered by QuickLaTeX.com\"\/><\/p>\nfor hyperparameters <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-bf47477023bbafcf8fb56e0c4382cd3b_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#98;&#44;&#92;&#97;&#108;&#112;&#104;&#97;\" title=\"Rendered by QuickLaTeX.com\" height=\"16\" width=\"26\" style=\"vertical-align: -4px;\"\/> that need to be provided or learned. This enables the creation of some types of support vector machines whose kernels are calculated such that the data <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-bcda923e732ff6e429d93d0fa7ea8a47_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#109;&#97;&#116;&#104;&#98;&#102;&#123;&#120;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"8\" width=\"11\" style=\"vertical-align: 0px;\"\/> is processed in the quantum realm. 
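As an illustration of the kernel classifier in Eq. (1), here is a minimal Python sketch in which the quantum kernel is simulated classically as the squared overlap of amplitude-encoded (unit-normalized) vectors; the function names, toy support vectors, and coefficients are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def amplitude_encode(x):
    """Classical stand-in for amplitude encoding: |x> = x / ||x||."""
    return x / np.linalg.norm(x)

def quantum_kernel(xj, x):
    """Fidelity-style kernel |<x_j|x>|^2, simulated with normalized vectors."""
    return np.abs(amplitude_encode(xj) @ amplitude_encode(x)) ** 2

def decision(x, X_train, alpha, b):
    """Eq. (1): y(x) = sign( sum_j alpha_j * k(x_j, x) + b )."""
    s = sum(a * quantum_kernel(xj, x) for a, xj in zip(alpha, X_train))
    return np.sign(s + b)

# Toy usage: two support vectors with opposite-sign coefficients
X_train = np.array([[1.0, 0.0], [0.0, 1.0]])
alpha = np.array([1.0, -1.0])
print(decision(np.array([0.9, 0.1]), X_train, alpha, b=0.0))  # closer to [1, 0] -> 1.0
```

On hardware, `quantum_kernel` would instead be estimated from measured state overlaps; everything else in the decision rule stays classical.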
That is, <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-65de94cbc8460acda83afb163a1702e6_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#108;&#101;&#102;&#116;&#124;&#92;&#109;&#97;&#116;&#104;&#98;&#102;&#123;&#120;&#125;&#95;&#123;&#106;&#125;&#92;&#114;&#105;&#103;&#104;&#116;&#92;&#114;&#97;&#110;&#103;&#108;&#101;&#61;&#49;&#32;&#47;&#92;&#108;&#101;&#102;&#116;&#124;&#92;&#109;&#97;&#116;&#104;&#98;&#102;&#123;&#120;&#125;&#95;&#123;&#106;&#125;&#92;&#114;&#105;&#103;&#104;&#116;&#124;&#32;&#92;&#115;&#117;&#109;&#95;&#123;&#107;&#61;&#49;&#125;&#94;&#123;&#78;&#125;&#92;&#108;&#101;&#102;&#116;&#40;&#92;&#109;&#97;&#116;&#104;&#98;&#102;&#123;&#120;&#125;&#95;&#123;&#106;&#125;&#92;&#114;&#105;&#103;&#104;&#116;&#41;&#95;&#123;&#107;&#125;&#124;&#107;&#92;&#114;&#97;&#110;&#103;&#108;&#101;\" title=\"Rendered by QuickLaTeX.com\" height=\"23\" width=\"213\" style=\"vertical-align: -6px;\"\/>. The work of Schuld et al. expands the theory behind this idea and shows that all kernel methods can be quantum machine learning methods. \n\nRecently, in 2020, Mari et al. worked on variational models that are hybrid in nature. In particular, the authors focused on transfer learning, i.e., the idea of bringing a pre-trained model (or a piece of it) to be part of another model. 
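Hybrid variational models like these are trainable with traditional gradient descent. As a minimal illustration (not the paper's model), assume a toy single-qubit RY "circuit" whose Pauli-Z expectation on |0> is cos(theta); its exact gradient is available through the standard parameter-shift rule, and plain gradient descent drives the expectation toward a target:

```python
import numpy as np

def expval_z(theta):
    """<Z> after RY(theta) applied to |0>, simulated classically as cos(theta)."""
    return np.cos(theta)

def parameter_shift_grad(f, theta):
    """Exact circuit gradient via the parameter-shift rule: (f(t+pi/2)-f(t-pi/2))/2."""
    return 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))

# Gradient descent on the cost C(theta) = <Z>, driving <Z> toward -1 (theta -> pi)
theta, lr = 0.5, 0.4
for _ in range(100):
    theta -= lr * parameter_shift_grad(expval_z, theta)
print(round(expval_z(theta), 3))  # approaches -1
```

The same shift rule applies per parameter in deeper circuits, which is what lets a variational circuit sit inside an ordinary autodiff training loop next to classical layers.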
In the case of Mari et al., the larger model is a computer vision model, e.g., ResNet, which becomes part of a hybrid model in which a variational quantum circuit performs classification.\nThe work we present here follows a similar idea, but we focus on the autoencoder architecture rather than a classification model, and we compare the representations learned by a classical model with those learned by a variational quantum fine-tuned model.\n\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"967\" height=\"807\" src=\"https:\/\/baylor.ai\/wp-content\/uploads\/2022\/03\/image-1.png\" alt=\"\" class=\"wp-image-297\" srcset=\"https:\/\/lab.rivas.ai\/wp-content\/uploads\/2022\/03\/image-1.png 967w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2022\/03\/image-1-300x250.png 300w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2022\/03\/image-1-768x641.png 768w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2022\/03\/image-1-863x720.png 863w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2022\/03\/image-1-129x108.png 129w\" sizes=\"auto, (max-width: 967px) 100vw, 967px\" \/><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>One of our recent papers introduces a novel hybrid quantum machine learning approach to unsupervised representation learning by using a quantum variational circuit that is trainable with traditional gradient descent techniques. Access it here: [&nbsp;bib&nbsp;|&nbsp;.pdf&nbsp;] Much of the work related to quantum machine learning has been popularized in recent years. 
Some of the most notable &hellip; <a href=\"https:\/\/lab.rivas.ai\/?p=296\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Hybrid Quantum Variational Autoencoders for Representation Learning<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[7,8],"class_list":["post-296","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-quantum-ml","tag-representation-learning"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/296","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=296"}],"version-history":[{"count":3,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/296\/revisions"}],"predecessor-version":[{"id":302,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/296\/revisions\/302"}],"wp:attachment":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=296"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=296"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lab.rivas.ai\/in
dex.php?rest_route=%2Fwp%2Fv2%2Ftags&post=296"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}