{"id":7010,"date":"2025-09-06T19:26:28","date_gmt":"2025-09-07T00:26:28","guid":{"rendered":"https:\/\/lab.rivas.ai\/?p=7010"},"modified":"2025-09-06T19:26:28","modified_gmt":"2025-09-07T00:26:28","slug":"controlling-generalization-in-quantum-machine-learning-via-fisher-information-geometry-and-dimensionality-reduction","status":"publish","type":"post","link":"https:\/\/lab.rivas.ai\/?p=7010","title":{"rendered":"Controlling Generalization in Quantum Machine Learning via Fisher Information Geometry and Dimensionality Reduction"},"content":{"rendered":"\n\n\n\n<article><header>\n<p class=\"meta-description\">We derive geometry\u2011aware generalization bounds for quantum ML, showing how Fisher information and dimensionality reduction tighten performance guarantees.<\/p>\n<p class=\"deck\">We combine quantum Fisher information geometry with covering-number analysis to obtain explicit high\u2011probability generalization bounds for quantum machine\u2011learning models, and we explain how reducing the effective dimensionality of the parameter space leads to tighter guarantees.<\/p>\n<\/header><nav class=\"toc\">\n<ul>\n<li><a href=\"#tldr\">TL;DR<\/a><\/li>\n<li><a href=\"#why-it-matters\">Why it matters<\/a><\/li>\n<li><a href=\"#how-it-works\">How it works (plain words)<\/a><\/li>\n<li><a href=\"#results\">What we found<\/a><\/li>\n<li><a href=\"#equation\">Key equation<\/a><\/li>\n<li><a href=\"#limits\">Limits and next steps<\/a><\/li>\n<li><a href=\"#faq\">FAQ<\/a><\/li>\n<li><a href=\"#read-the-paper\">Read the paper<\/a><\/li>\n<\/ul>\n<\/nav>\n<section id=\"tldr\">\n<h2>TL;DR<\/h2>\n<ul>\n<li>We bound the generalization error of quantum ML models using the quantum Fisher information matrix.<\/li>\n<li>The bound tightens as the effective dimension\u202f<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-4e8716946f6a868f015e0d62f28bc540_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" 
alt=\"&#100;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"10\" style=\"vertical-align: 0px;\"\/> drops, and the generalization error decays as <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-77c829859da64bf4d2e12824b59132c4_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#49;&#47;&#92;&#115;&#113;&#114;&#116;&#123;&#78;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"21\" width=\"49\" style=\"vertical-align: -5px;\"\/> with more data.<\/li>\n<li>Our approach links geometry, covering numbers, and dimensionality reduction: tools rarely combined in quantum learning theory.<\/li>\n<\/ul>\n<\/section>\n<section id=\"why-it-matters\">\n<h2>Why it matters<\/h2>\n<p>Quantum machine learning promises speed\u2011ups for tasks such as chemistry simulation and combinatorial optimization. However, a model that works well on a training set may fail on new data, a problem known as over\u2011fitting. Classical learning theory offers tools like Rademacher complexity and covering numbers to predict over\u2011fitting, but quantum models have a very different parameter landscape. By translating those classical tools into the quantum domain and using the quantum Fisher information matrix (Q\u2011FIM) to describe curvature, we obtain the first rigorous, geometry\u2011aware guarantees that a quantum model will perform well on unseen inputs. This helps practitioners design models that are both powerful and reliable.<\/p>\n<\/section>\n<section id=\"how-it-works\">\n<h2>How it works (plain words)<\/h2>\n<p>Our method proceeds in four intuitive steps:<\/p>\n<ol>\n<li><strong>Characterize the landscape.<\/strong> We compute the quantum Fisher information matrix for the model parameters. 
The determinant of this matrix tells us how \u201cflat\u201d or \u201ccurved\u201d the parameter space is.<\/li>\n<li><strong>Control complexity with covering numbers.<\/strong> Using the curvature information, we bound the number of small balls needed to cover the whole parameter space. Fewer balls mean a simpler hypothesis class.<\/li>\n<li><strong>Translate to Rademacher complexity.<\/strong> Covering\u2011number bounds feed into a standard inequality that limits the Rademacher complexity, a measure of how well the model can fit random noise.<\/li>\n<li><strong>Derive a high\u2011probability generalization bound.<\/strong> Combining the Rademacher bound with a concentration inequality (Talagrand\u2019s bound) gives an explicit formula that relates training error, sample size <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-5793832f979c2268e3694c246d53b1bb_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#78;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"16\" style=\"vertical-align: 0px;\"\/>, effective dimension <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-4e8716946f6a868f015e0d62f28bc540_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#100;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"10\" style=\"vertical-align: 0px;\"\/>, and geometric constants.<\/li>\n<\/ol>\n<p>Finally, we note that many quantum circuits can be projected onto a lower\u2011dimensional subspace (e.g., by pruning or principal\u2011component analysis). 
Reducing <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-4e8716946f6a868f015e0d62f28bc540_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#100;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"10\" style=\"vertical-align: 0px;\"\/> shrinks the exponential term in the bound, directly improving the guarantee.<\/p>\n<\/section>\n<section id=\"results\">\n<h2>What we found<\/h2>\n<p>Our theoretical analysis yields a clear, data\u2011dependent guarantee:<\/p>\n<ul>\n<li>The generalization error decays as <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-77c829859da64bf4d2e12824b59132c4_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#49;&#47;&#92;&#115;&#113;&#114;&#116;&#123;&#78;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"21\" width=\"49\" style=\"vertical-align: -5px;\"\/>, with a prefactor that depends on the effective dimension <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-4e8716946f6a868f015e0d62f28bc540_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#100;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"10\" style=\"vertical-align: 0px;\"\/> and a geometry term <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-dfc29e15e25edd9df1d275f7768083c8_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#67;&#39;\" title=\"Rendered by QuickLaTeX.com\" height=\"14\" width=\"18\" style=\"vertical-align: 0px;\"\/>.<\/li>\n<li>When the Q\u2011FIM has a positive lower bound on its determinant (i.e., the parameter space is well\u2011conditioned), the exponential factor <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-fb1271f0ad85562fae8880a4e90f0a95_l3.png\" 
class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#101;&#120;&#112;&#40;&#67;&#39;&#47;&#100;&#41;\" title=\"Rendered by QuickLaTeX.com\" height=\"19\" width=\"76\" style=\"vertical-align: -5px;\"\/> remains modest.<\/li>\n<li>Dimensionality reduction, whether by explicit pruning, low\u2011rank approximations, or post\u2011training projection, reduces <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-4e8716946f6a868f015e0d62f28bc540_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#100;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"10\" style=\"vertical-align: 0px;\"\/>, which tightens the bound and makes the model less prone to over\u2011fitting.<\/li>\n<\/ul>\n<\/section>\n<section id=\"equation\">\n<h2>Key equation<\/h2>\n<div class=\"eq\"><p class=\"ql-center-displayed-equation\" style=\"line-height: 46px;\"><span class=\"ql-right-eqno\"> &nbsp; <\/span><span class=\"ql-left-eqno\"> &nbsp; <\/span><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-0e18279ebfdc0e21688ac788ea0bf454_l3.png\" height=\"46\" width=\"392\" class=\"ql-img-displayed-equation quicklatex-auto-format\" alt=\"&#92;&#91;&#82;&#40;&#92;&#116;&#104;&#101;&#116;&#97;&#41;&#32;&#92;&#108;&#101;&#32;&#92;&#104;&#97;&#116;&#123;&#82;&#125;&#95;&#78;&#40;&#92;&#116;&#104;&#101;&#116;&#97;&#41;&#32;&#43;&#32;&#92;&#102;&#114;&#97;&#99;&#123;&#49;&#50;&#92;&#115;&#113;&#114;&#116;&#123;&#92;&#112;&#105;&#32;&#100;&#125;&#92;&#44;&#92;&#101;&#120;&#112;&#92;&#33;&#92;&#98;&#105;&#103;&#108;&#40;&#67;&#39;&#47;&#100;&#92;&#98;&#105;&#103;&#114;&#41;&#125;&#123;&#92;&#115;&#113;&#114;&#116;&#123;&#78;&#125;&#125;&#32;&#43;&#32;&#51;&#92;&#115;&#113;&#114;&#116;&#123;&#92;&#102;&#114;&#97;&#99;&#123;&#92;&#108;&#111;&#103;&#40;&#50;&#47;&#92;&#100;&#101;&#108;&#116;&#97;&#41;&#125;&#123;&#50;&#78;&#125;&#125;&#92;&#44;&#44;&#92;&#93;\" 
title=\"Rendered by QuickLaTeX.com\"\/><\/p><\/div>\n<p>This inequality states that the true risk <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-d16c54eee3afbaa357acabfc51a65153_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#82;&#40;&#92;&#116;&#104;&#101;&#116;&#97;&#41;\" title=\"Rendered by QuickLaTeX.com\" height=\"19\" width=\"35\" style=\"vertical-align: -5px;\"\/> is bounded by the empirical risk <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-b5767a87791e747916b7992412ec232e_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#104;&#97;&#116;&#123;&#82;&#125;&#95;&#78;&#40;&#92;&#116;&#104;&#101;&#116;&#97;&#41;\" title=\"Rendered by QuickLaTeX.com\" height=\"21\" width=\"48\" style=\"vertical-align: -5px;\"\/> plus two correction terms: a geometry\u2011dependent term that shrinks with <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-2268c4c523ef996abdd0c902ee8184ea_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#115;&#113;&#114;&#116;&#123;&#78;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"18\" width=\"31\" style=\"vertical-align: -2px;\"\/> and a confidence term that scales with <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-0d6b3f69c5030fdc340c0c9649ab3366_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#115;&#113;&#114;&#116;&#123;&#92;&#108;&#111;&#103;&#40;&#49;&#47;&#92;&#100;&#101;&#108;&#116;&#97;&#41;&#47;&#78;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"22\" width=\"105\" style=\"vertical-align: -6px;\"\/>.<\/p>\n<p>Here <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-5420a96582ee892651ac9cb7ddced206_l3.png\" 
class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#67;&#39;&#32;&#61;&#32;&#92;&#108;&#111;&#103;&#32;&#86;&#95;&#92;&#84;&#104;&#101;&#116;&#97;&#32;&#45;&#32;&#92;&#108;&#111;&#103;&#32;&#86;&#95;&#100;&#32;&#45;&#32;&#92;&#108;&#111;&#103;&#32;&#109;&#32;&#43;&#32;&#100;&#92;&#108;&#111;&#103;&#32;&#76;&#95;&#102;&#94;&#112;\" title=\"Rendered by QuickLaTeX.com\" height=\"24\" width=\"298\" style=\"vertical-align: -9px;\"\/>, where <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-fb670a8376b3a4656c2a65d5fb0fe6fc_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#86;&#95;&#92;&#84;&#104;&#101;&#116;&#97;\" title=\"Rendered by QuickLaTeX.com\" height=\"15\" width=\"20\" style=\"vertical-align: -3px;\"\/> is the volume of the full parameter space, <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-5b1cad47c28af736e4ff915d9483dc45_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#86;&#95;&#100;\" title=\"Rendered by QuickLaTeX.com\" height=\"15\" width=\"18\" style=\"vertical-align: -3px;\"\/> the volume of a <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-4e8716946f6a868f015e0d62f28bc540_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#100;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"10\" style=\"vertical-align: 0px;\"\/>\u2011dimensional subspace, <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-6b41df788161942c6f98604d37de8098_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#109;\" title=\"Rendered by QuickLaTeX.com\" height=\"8\" width=\"15\" style=\"vertical-align: 0px;\"\/> a lower bound on <img loading=\"lazy\" decoding=\"async\" 
src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-9eb8d8d3474446db6741a30653b7e251_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#115;&#113;&#114;&#116;&#123;&#92;&#100;&#101;&#116;&#40;&#70;&#40;&#92;&#116;&#104;&#101;&#116;&#97;&#41;&#41;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"22\" width=\"92\" style=\"vertical-align: -6px;\"\/>, and <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-17f559739e9d4fbc588f8b02c225d50f_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#76;&#95;&#102;&#94;&#112;\" title=\"Rendered by QuickLaTeX.com\" height=\"24\" width=\"20\" style=\"vertical-align: -9px;\"\/> a Lipschitz constant for the noisy model <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-aa259642f3281bd7ea4b4f0c643c7cb0_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#102;&#95;&#123;&#92;&#116;&#104;&#101;&#116;&#97;&#44;&#112;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"18\" width=\"26\" style=\"vertical-align: -6px;\"\/>.<\/p>\n<\/section>\n<section id=\"limits\">\n<h2>Limits and next steps<\/h2>\n<p>Our analysis relies on three assumptions that are common in learning\u2011theory work but may limit immediate practical use:<\/p>\n<ul>\n<li>We assume the loss function is Lipschitz continuous and the gradients of the quantum model are uniformly bounded.<\/li>\n<li>We require a positive lower bound <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-6b41df788161942c6f98604d37de8098_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#109;\" title=\"Rendered by QuickLaTeX.com\" height=\"8\" width=\"15\" style=\"vertical-align: 0px;\"\/> on the determinant of the Q\u2011FIM; highly ill\u2011conditioned circuits could violate this condition.<\/li>\n<li>The exponential term 
<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-fb1271f0ad85562fae8880a4e90f0a95_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#101;&#120;&#112;&#40;&#67;&#39;&#47;&#100;&#41;\" title=\"Rendered by QuickLaTeX.com\" height=\"19\" width=\"76\" style=\"vertical-align: -5px;\"\/> can become large if the parameter\u2011space volume or Lipschitz constant is not carefully controlled.<\/li>\n<\/ul>\n<p>Future research directions include:<\/p>\n<ul>\n<li>Designing training regularizers that directly enforce a well\u2011behaved Q\u2011FIM.<\/li>\n<li>Developing quantum\u2011specific dimensionality\u2011reduction algorithms that preserve expressive power while lowering <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-4e8716946f6a868f015e0d62f28bc540_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#100;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"10\" style=\"vertical-align: 0px;\"\/>.<\/li>\n<li>Empirically testing the bound on near\u2011term quantum hardware to validate the theoretical predictions.<\/li>\n<\/ul>\n<\/section>\n<section id=\"faq\">\n<h2>FAQ<\/h2>\n<dl>\n<dt>What is the quantum Fisher information matrix (Q\u2011FIM) and why does it matter?<\/dt>\n<dd>The Q\u2011FIM measures how sensitively a quantum state (or circuit output) changes with respect to small parameter variations. 
Its determinant captures the curvature of the parameter landscape; a larger determinant indicates a well\u2011conditioned space where learning is stable, which directly reduces the complexity term in our bound.<\/dd>\n<dt>How does reducing the effective dimension <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-4e8716946f6a868f015e0d62f28bc540_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#100;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"10\" style=\"vertical-align: 0px;\"\/> improve generalization?<\/dt>\n<dd>Of the three terms in the bound, the geometry\u2011dependent middle term becomes smaller when <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-4e8716946f6a868f015e0d62f28bc540_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#100;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"10\" style=\"vertical-align: 0px;\"\/> drops. In particular, the factor <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-fb1271f0ad85562fae8880a4e90f0a95_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#101;&#120;&#112;&#40;&#67;&#39;&#47;&#100;&#41;\" title=\"Rendered by QuickLaTeX.com\" height=\"19\" width=\"76\" style=\"vertical-align: -5px;\"\/> shrinks, and the prefactor <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-9ee8c79db0ebff242d411cfcee1f7eff_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#49;&#50;&#92;&#115;&#113;&#114;&#116;&#123;&#92;&#112;&#105;&#32;&#100;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"18\" width=\"52\" style=\"vertical-align: -2px;\"\/> scales with <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-450f4e5ea9f6c5dc697e01b79ab7718c_l3.png\" class=\"ql-img-inline-formula 
quicklatex-auto-format\" alt=\"&#92;&#115;&#113;&#114;&#116;&#123;&#100;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"18\" width=\"24\" style=\"vertical-align: -2px;\"\/>. Dimensionality reduction therefore tightens the guarantee and makes the model less able to fit random noise.<\/dd>\n<dt>Is the bound applicable to noisy quantum hardware?<\/dt>\n<dd>Yes. Our derivation explicitly includes a noise parameter <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-3bf85f1087e9fbed3a319341134ac1a2_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#112;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"10\" style=\"vertical-align: -4px;\"\/> through the model <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-aa259642f3281bd7ea4b4f0c643c7cb0_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#102;&#95;&#123;&#92;&#116;&#104;&#101;&#116;&#97;&#44;&#112;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"18\" width=\"26\" style=\"vertical-align: -6px;\"\/> and shows that the bound remains valid as long as <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-3bf85f1087e9fbed3a319341134ac1a2_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#112;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"10\" style=\"vertical-align: -4px;\"\/> stays within a regime where the Lipschitz constant <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-17f559739e9d4fbc588f8b02c225d50f_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#76;&#95;&#102;&#94;&#112;\" title=\"Rendered by QuickLaTeX.com\" height=\"24\" width=\"20\" style=\"vertical-align: -9px;\"\/> and the Q\u2011FIM lower bound <img loading=\"lazy\" decoding=\"async\" 
src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-6b41df788161942c6f98604d37de8098_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#109;\" title=\"Rendered by QuickLaTeX.com\" height=\"8\" width=\"15\" style=\"vertical-align: 0px;\"\/> are preserved.<\/dd>\n<\/dl>\n<\/section>\n<section id=\"read-the-paper\">\n<h2>Read the paper<\/h2>\n<p><a href=\"https:\/\/example.com\/paper.pdf\">Controlling Generalization in Quantum Machine Learning via Fisher Information Geometry and Dimensionality Reduction<\/a><\/p>\n<\/section>\n<section id=\"references\">\n<h2>References<\/h2>\n<p>Khanal, B., &amp; Rivas, P. (2025). Data-dependent generalization bounds for parameterized quantum models under noise. <i>The Journal of Supercomputing<\/i>, 81(611), 1\u201334. https:\/\/doi.org\/10.1007\/s11227-025-06966-9. <a href=\"https:\/\/www.rivas.ai\/pdfs\/khanal2025datadependent.pdf\">Download PDF<\/a><\/p>\n<\/section>\n<\/article>","protected":false},"excerpt":{"rendered":"<p>We derive geometry\u2011aware generalization bounds for quantum ML, showing how Fisher information and dimensionality reduction tighten performance guarantees.<\/p>\n<p>We combine quantum Fisher information geometry with covering-number analysis to obtain explicit high\u2011probability generalization bounds for quantum machine\u2011learning models, and we explain how reducing the effective dimensionality of the parameter space leads to tighter 
guarantees.<\/p>\n","protected":false},"author":11,"featured_media":7009,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[7],"class_list":["post-7010","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","tag-quantum-ml"],"jetpack_featured_media_url":"https:\/\/lab.rivas.ai\/wp-content\/uploads\/2025\/09\/Vpaab-cover.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/7010","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/users\/11"}],"replies":[{"embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7010"}],"version-history":[{"count":2,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/7010\/revisions"}],"predecessor-version":[{"id":7013,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/7010\/revisions\/7013"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/media\/7009"}],"wp:attachment":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7010"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7010"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv
2%2Ftags&post=7010"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}