{"id":2650,"date":"2023-02-10T10:00:00","date_gmt":"2023-02-10T16:00:00","guid":{"rendered":"https:\/\/baylor.ai\/?p=2650"},"modified":"2023-02-21T13:14:10","modified_gmt":"2023-02-21T19:14:10","slug":"power-of-data-in-quantum-machine-learning","status":"publish","type":"post","link":"https:\/\/lab.rivas.ai\/?p=2650","title":{"rendered":"Power of Data In Quantum Machine Learning"},"content":{"rendered":"\n<p>This week at the lab, we read the following paper, and here is our summary:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Huang, Hsin-Yuan, Michael Broughton, Masoud Mohseni, Ryan Babbush, Sergio Boixo, Hartmut Neven, and Jarrod R. McClean. &#8220;<a href=\"https:\/\/www.nature.com\/articles\/s41467-021-22539-9\">Power of data in quantum machine learning<\/a>.&#8221;\u00a0<em>Nature communications<\/em>\u00a012, no. 1 (2021): 2631.<\/p>\n<\/blockquote>\n\n\n\n<h4 class=\"wp-block-heading\">Summary<\/h4>\n\n\n\nThis work examines the advancement of quantum technologies and its impact on machine learning. Two paths toward the quantum enhancement of machine learning are commonly considered: using the power of quantum computing to improve the training of existing classical models, and using quantum models to generate correlations between variables that are inefficient to represent through classical computation. The authors show that this picture is incomplete for machine learning problems in which training data are provided: the data themselves can elevate classical models to rival quantum models. The authors present a flowchart for testing potential quantum prediction advantage, built on prediction error bounds for classical and quantum ML methods based on kernel functions. This elevation of classical models by a modest amount of training data illustrates the power of data. 
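The kernel comparison at the heart of that flowchart can be sketched numerically. Below is a minimal numpy sketch of the geometric difference g(K1, K2) = sqrt(&#8741;&#8730;K2 K1&#8315;&#185; &#8730;K2&#8741;) that the paper uses to compare a classical kernel Gram matrix K1 against a quantum one K2; the function names, the regularization term, and the toy RBF Gram matrix standing in for both kernels are our own, not from the paper:

```python
import numpy as np

def sqrtm_psd(K):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(K)
    w = np.clip(w, 0.0, None)  # clamp tiny negative eigenvalues from round-off
    return (V * np.sqrt(w)) @ V.T

def geometric_difference(K1, K2, reg=1e-7):
    """g(K1, K2) = sqrt(||sqrt(K2) K1^{-1} sqrt(K2)||_inf),
    with both Gram matrices normalized to trace n and a small
    regularizer (our choice) to keep the inverse well-conditioned."""
    n = K1.shape[0]
    K1 = n * K1 / np.trace(K1)
    K2 = n * K2 / np.trace(K2)
    s2 = sqrtm_psd(K2)
    K1_inv = np.linalg.inv(K1 + reg * np.eye(n))
    M = s2 @ K1_inv @ s2
    return np.sqrt(np.linalg.norm(M, ord=2))  # spectral norm of symmetric M

# Toy demo: comparing a kernel with itself gives g close to 1,
# the regime where classical ML is expected to keep up.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
Kc = np.exp(-0.5 * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))  # RBF Gram matrix
print(geometric_difference(Kc, Kc))
```

A g near 1 suggests the classical kernel can match the quantum one on any target function, while a g that grows with the dataset size signals a possible quantum prediction advantage, subject to the further checks in the paper's flowchart.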
The authors also show that &#8220;training a specific classical ML model on a collection of N training examples (x, y = f(x)) would give rise to a prediction model h(x) with
<p class=\"ql-center-displayed-equation\" style=\"line-height: 22px;\"><span class=\"ql-right-eqno\"> (1) <\/span><span class=\"ql-left-eqno\"> &nbsp; <\/span>E<sub>x<\/sub> |h(x) &minus; f(x)| &le; c &radic;(p&sup2;\/N)<\/p>
for a constant c &gt; 0. Hence, with N &asymp; p&sup2;\/&epsilon;&sup2; training data, one can train a classical ML model to predict the function f(x) up to an additive prediction error &epsilon;.&#8221; They also show that a small geometric difference between the kernel functions defined by classical and quantum ML models guarantees that classical ML achieves similar or better prediction performance; a sizeable geometric difference, on the other hand, indicates the possibility of a large prediction advantage for the quantum ML model.\n\n\n\n\n<p>Additionally, the authors introduced &#8220;projected quantum kernels&#8221; and demonstrated, through empirical results, that these outperformed all tested classical models in prediction error. This work provides a guidebook for generating ML problems that showcase the separation between quantum and classical models.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Intellectual Merit<\/h4>\n\n\n\n<p>This work provides a theoretical and computational framework for comparing classical and quantum ML models. The authors develop prediction error bounds for training classical and quantum ML methods based on kernel functions, which provide provable guarantees and are very flexible in the functions they can learn. The authors also develop a flowchart for testing potential quantum prediction advantage, a function-independent prescreening that allows one to evaluate the possibility of better performance. The authors provide a constructive example of a discrete log feature map, which gives a provable separation for their kernel. They rule out many existing models in the literature, providing a powerful sieve for focusing the development of new data encodings.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Broader Impact<\/h4>\n\n\n\n<p>The authors\u2019 contributions to the field of quantum technologies and machine learning have significant broader impacts. 
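As a concrete illustration of the projected quantum kernels mentioned above, here is a deliberately simplified numpy sketch. It assumes a non-entangling single-qubit RY encoding so that each one-qubit reduced density matrix is available in closed form; the paper&#8217;s encodings use entangling circuits (where the reduced density matrices must be traced out of the full state), and every name below is ours:

```python
import numpy as np

def one_qubit_rdms(x):
    """One-qubit density matrices for a toy product-state encoding:
    feature x_j -> RY(x_j)|0> = [cos(x_j/2), sin(x_j/2)]."""
    rdms = []
    for xj in x:
        psi = np.array([np.cos(xj / 2.0), np.sin(xj / 2.0)])
        rdms.append(np.outer(psi, psi))  # pure-state density matrix
    return np.array(rdms)

def projected_quantum_kernel(x1, x2, gamma=1.0):
    """k(x, x') = exp(-gamma * sum_k ||rho_k(x) - rho_k(x')||_F^2),
    the Gaussian form of the projected quantum kernel."""
    d = one_qubit_rdms(x1) - one_qubit_rdms(x2)
    return np.exp(-gamma * np.sum(d ** 2))

x = np.array([0.3, 1.2])
print(projected_quantum_kernel(x, x))  # identical inputs -> 1.0
```

Projecting the quantum state down to its one-qubit marginals is what keeps this kernel well-behaved as the number of qubits grows, which is the design choice that lets it beat the tested classical models in the paper&#8217;s experiments.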
The development of a flowchart for testing potential quantum prediction advantage provides a tool for researchers and practitioners to determine the possibility of better performance using quantum ML models. The authors\u2019 framework can also be used to compare and construct hard classical models, such as hash functions, which have applications in cryptography and secure communication. The authors\u2019 work has the potential to accelerate the development of new data encodings, leading to more efficient and accurate machine learning models. This has far-reaching implications for applications including image recognition, text translation, and even physics, where machine learning can revolutionize how we analyze and interpret data. The paper is a collaboration among researchers at Google Quantum AI, the Institute for Quantum Information and Matter at Caltech, and the Department of Computing and Mathematical Sciences at Caltech.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"845\" src=\"https:\/\/baylor.ai\/wp-content\/uploads\/2023\/02\/image-4-1024x845.png\" alt=\"\" class=\"wp-image-2660\" srcset=\"https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/02\/image-4-1024x845.png 1024w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/02\/image-4-300x247.png 300w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/02\/image-4-768x633.png 768w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/02\/image-4-1536x1267.png 1536w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/02\/image-4-863x712.png 863w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/02\/image-4-131x108.png 131w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/02\/image-4.png 1770w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>This week at the lab, we read the following paper, and here is our 
summary: Huang, Hsin-Yuan, Michael Broughton, Masoud Mohseni, Ryan Babbush, Sergio Boixo, Hartmut Neven, and Jarrod R. McClean. &#8220;Power of data in quantum machine learning.&#8221;\u00a0Nature communications\u00a012, no. 1 (2021): 2631. Summary This work focuses on the advancement of quantum technologies and their &hellip; <a href=\"https:\/\/lab.rivas.ai\/?p=2650\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Power of Data In Quantum Machine Learning<\/span><\/a><\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[4,7],"class_list":["post-2650","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-ai-lab","tag-quantum-ml"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/2650","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2650"}],"version-history":[{"count":10,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/2650\/revisions"}],"predecessor-version":[{"id":2661,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/2650\/revisions\/2661"}],"wp:attachment":[{"href":"https:\/\/lab.rivas.ai\/index.php?
rest_route=%2Fwp%2Fv2%2Fmedia&parent=2650"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2650"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2650"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}