{"id":7141,"date":"2025-09-17T06:00:00","date_gmt":"2025-09-17T11:00:00","guid":{"rendered":"https:\/\/lab.rivas.ai\/?p=7141"},"modified":"2025-09-20T10:31:36","modified_gmt":"2025-09-20T15:31:36","slug":"coreset%e2%80%91based-neuron-pruning-halves-nerf-model-size-and-speeds-training-by-35","status":"publish","type":"post","link":"https:\/\/lab.rivas.ai\/?p=7141","title":{"rendered":"Coreset\u2011Based Neuron Pruning Halves NeRF Model Size and Speeds Training by 35%"},"content":{"rendered":"<article>\n<header>\n<p class=\"meta-description\">We halve NeRF model size and cut training time by 35% using coreset\u2011driven neuron pruning, while keeping PSNR within 0.2\u202fdB of the full model.<\/p>\n<p class=\"deck\">We examined three neuron\u2011pruning strategies for Neural Radiance Fields: uniform sampling, importance\u2011based pruning, and a coreset\u2011driven approach. Our experiments show that the coreset method reduces the MLP size by roughly 50\u202f% and accelerates training by about one\u2011third, with only a minor loss in visual fidelity (PSNR drop of 0.2\u202fdB).<\/p>\n<\/header>\n<nav class=\"toc\">\n<ul>\n<li><a href=\"#tldr\">TL;DR<\/a><\/li>\n<li><a href=\"#why-it-matters\">Why it matters<\/a><\/li>\n<li><a href=\"#how-it-works\">How it works<\/a><\/li>\n<li><a href=\"#results\">What we found<\/a><\/li>\n<li><a href=\"#equation\">Key equation<\/a><\/li>\n<li><a href=\"#limits\">Limits and next steps<\/a><\/li>\n<li><a href=\"#faq\">FAQ<\/a><\/li>\n<li><a href=\"#read-the-paper\">Read the paper<\/a><\/li>\n<\/ul>\n<\/nav>\n<section id=\"tldr\">\n<h2>TL;DR<\/h2>\n<ul>\n<li>Neuron\u2011level pruning can halve NeRF model size and speed up training by 35\u202f%.<\/li>\n<li>Our coreset method keeps PSNR at 21.3\u202fdB vs. 
21.5\u202fdB for the full model.<\/li>\n<li>The approach outperforms random uniform sampling and simple importance scores.<\/li>\n<\/ul>\n<\/section>\n<section id=\"why-it-matters\">\n<h2>Why it matters<\/h2>\n<p>Neural Radiance Fields produce photorealistic 3D reconstructions, but their multilayer perceptrons (MLPs) are notoriously large and slow to train, often requiring days of GPU time. Reducing the computational footprint without sacrificing visual quality opens the door to real\u2011time applications, mobile deployment, and large\u2011scale scene generation. By exposing and exploiting latent sparsity in NeRF\u2019s fully\u2011connected layers, we provide a practical pathway toward more efficient neural rendering pipelines.<\/p>\n<\/section>\n<section id=\"how-it-works\">\n<h2>How it works<\/h2>\n<p>We start from a standard NeRF MLP (256\u202f\u00d7\u202f256 neurons per hidden layer). For each neuron we compute two scores: the average magnitude of its incoming weights (w<sub>in<\/sub>) and the average magnitude of its outgoing weights (w<sub>out<\/sub>). The outgoing score correlates more strongly with final rendering quality, so we prioritize neurons with higher\u202fw<sub>out<\/sub>. Using these scores we construct a coreset (a small, representative subset of neurons) that preserves the functional capacity of the original network. The selected neurons are then rewired into a compact MLP (e.g., 128\u202f\u00d7\u202f128 or 64\u202f\u00d7\u202f64), and the model is retrained from scratch. 
Uniform sampling simply drops neurons at random, while importance pruning drops those with the lowest\u202fw<sub>out<\/sub>\u202for\u202fw<sub>in<\/sub>\u202fscores; both are less informed than the coreset selection.<\/p>\n<\/section>\n<section id=\"results\">\n<h2>What we found<\/h2>\n<p>Across three benchmark scenes the coreset\u2011driven pruning consistently delivered the best trade\u2011off between efficiency and quality.<\/p>\n<ul>\n<li>Model size shrank from 2.38\u202fMB to 1.14\u202fMB (\u2248\u202f50\u202f% reduction). Parameters dropped from 595\u202fK to 288\u202fK.<\/li>\n<li>Training time per 100\u202fk iterations fell from 78.75\u202fmin to 51.25\u202fmin (\u2248\u202f35\u202f% faster).<\/li>\n<li>Peak signal\u2011to\u2011noise ratio decreased only from 21.5\u202fdB to 21.3\u202fdB (0.2\u202fdB loss).<\/li>\n<li>Uniform sampling to 64\u202f\u00d7\u202f64 neurons caused PSNR to plunge to 16.5\u202fdB and model size to 0.7\u202fMB, demonstrating that random removal is detrimental.<\/li>\n<li>Importance pruning using\u202fw<sub>out<\/sub>\u202fpreserved PSNR at 20.0\u202fdB, better than using only\u202fw<sub>in<\/sub>\u202for the product of both.<\/li>\n<\/ul>\n<p>Visual inspections confirmed that the coreset\u2011pruned models are indistinguishable from the full model in most viewpoints, while aggressive pruning shows only minor loss of fine detail.<\/p>\n<\/section>\n<section id=\"equation\">\n<h2>Key equation<\/h2>\n<div class=\"eq\">\n<p class=\"ql-center-displayed-equation\" style=\"line-height: 40px;\"><span class=\"ql-right-eqno\"> &nbsp; <\/span><span class=\"ql-left-eqno\"> &nbsp; <\/span><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lab.rivas.ai\/wp-content\/ql-cache\/quicklatex.com-7898c1c1f5e8c4a261b8785f6d547dab_l3.png\" height=\"40\" width=\"186\" class=\"ql-img-displayed-equation quicklatex-auto-format\" 
alt=\"&#92;&#91;&#92;&#116;&#101;&#120;&#116;&#123;&#80;&#83;&#78;&#82;&#125;&#61;&#49;&#48;&#92;&#108;&#111;&#103;&#95;&#123;&#49;&#48;&#125;&#92;&#102;&#114;&#97;&#99;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#77;&#65;&#88;&#125;&#94;&#50;&#125;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#77;&#83;&#69;&#125;&#125;&#92;&#93;\" title=\"Rendered by QuickLaTeX.com\"\/><\/p>\n<\/div>\n<p>This converts the mean\u2011squared error between rendered and ground\u2011truth images into a decibel scale, allowing us to quantify the tiny fidelity loss introduced by pruning.<\/p>\n<\/section>\n<section id=\"limits\">\n<h2>Limits and next steps<\/h2>\n<p>Our study focuses on static scenes and a single MLP architecture; performance on dynamic scenes or alternative NeRF variants remains untested. Moreover, we retrain the pruned network from scratch, which adds a brief warm\u2011up cost. Future work will explore layer\u2011wise pruning, integration with parameter\u2011efficient transfer learning, and joint optimization of pruning and quantization to push efficiency even further.<\/p>\n<\/section>\n<section id=\"faq\">\n<h2>FAQ<\/h2>\n<dl>\n<dt>Does pruning affect rendering speed at inference time?<\/dt>\n<dd>Yes, a smaller MLP evaluates faster, typically yielding a modest inference\u2011time gain in addition to the training speedup.<\/dd>\n<dt>Can we prune beyond 128\u202f\u00d7\u202f128 neurons?<\/dt>\n<dd>We observed noticeable PSNR drops (\u2248\u202f1\u202fdB) when compressing to 64\u202f\u00d7\u202f64, so deeper compression is possible but requires application\u2011specific quality tolerances.<\/dd>\n<\/dl>\n<\/section>\n<section id=\"read-the-paper\">\n<h2>Read the paper<\/h2>\n<\/section>\n<\/article>\n<p>Ding, T. K., Xiang, D., Rivas, P., &amp; Dong, L. (2025). Neural pruning for 3D scene reconstruction: Efficient NeRF acceleration. In Proceedings of AIR-RES 2025: The 2025 International Conference on the AI Revolution: Research, Ethics, and Society (pp. 1\u201313). 
Las Vegas, NV, USA.<\/p>\n<p><a href=\"https:\/\/rivas.ai\/pdfs\/ding2025neural.pdf\" rel=\"noopener noreferrer\">Download PDF<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>We halve NeRF model size and cut training time by 35% using coreset\u2011driven neuron pruning, while keeping PSNR within 0.2 dB of the full model. We examined three neuron\u2011pruning strategies for Neural Radiance Fields\u2014uniform sampling, importance\u2011based pruning, and a coreset\u2011driven approach. Our experiments show that the coreset method reduces the MLP size by roughly 50 % and accelerates training by about one\u2011third, with only a minor loss in visual fidelity (PSNR drop of 0.2 dB).<\/p>\n","protected":false},"author":11,"featured_media":7140,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[6,8],"class_list":["post-7141","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","tag-computer-vision","tag-representation-learning"],"jetpack_featured_media_url":"https:\/\/lab.rivas.ai\/wp-content\/uploads\/2025\/09\/2rKhi-cover.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/7141","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/users\/11"}],"replies":[{"embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7
141"}],"version-history":[{"count":2,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/7141\/revisions"}],"predecessor-version":[{"id":7165,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/7141\/revisions\/7165"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/media\/7140"}],"wp:attachment":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7141"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7141"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7141"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}