{"id":242,"date":"2022-02-17T12:54:00","date_gmt":"2022-02-17T18:54:00","guid":{"rendered":"https:\/\/baylor.ai\/?p=242"},"modified":"2022-03-15T16:35:59","modified_gmt":"2022-03-15T21:35:59","slug":"enhancing-adversarial-examples-on-deep-qnetworks-with-previous-information","status":"publish","type":"post","link":"https:\/\/lab.rivas.ai\/?p=242","title":{"rendered":"Enhancing Adversarial Examples on Deep Q-Networks with Previous Information"},"content":{"rendered":"\n<p>This work finds strong adversarial examples for Deep Q-Networks, well-known deep reinforcement learning models. We combine the two subproblems of crafting adversarial examples in deep reinforcement learning: selecting which states to perturb and determining how much to perturb them, so the attack can optimize both jointly. We trained Deep Q-Networks to play the Atari games Breakout and Space Invaders, then used our attack to find adversarial examples in those games. Our attack achieves state-of-the-art results and is natural and stealthy. 
Paper: [ <a rel=\"noreferrer noopener\" href=\"https:\/\/rivas.ai\/bibs\/sooksatra2021enhancing.bib\" target=\"_blank\"><strong>bib<\/strong><\/a> | <a rel=\"noreferrer noopener\" href=\"https:\/\/rivas.ai\/pdfs\/sooksatra2021enhancing.pdf\" target=\"_blank\"><strong>pdf<\/strong><\/a> ]<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"795\" height=\"334\" src=\"https:\/\/baylor.ai\/wp-content\/uploads\/2022\/02\/Screenshot-from-2022-02-17-12-51-00.png\" alt=\"\" class=\"wp-image-246\" srcset=\"https:\/\/lab.rivas.ai\/wp-content\/uploads\/2022\/02\/Screenshot-from-2022-02-17-12-51-00.png 795w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2022\/02\/Screenshot-from-2022-02-17-12-51-00-300x126.png 300w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2022\/02\/Screenshot-from-2022-02-17-12-51-00-768x323.png 768w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2022\/02\/Screenshot-from-2022-02-17-12-51-00-257x108.png 257w\" sizes=\"auto, (max-width: 795px) 100vw, 795px\" \/><\/figure>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This work finds strong adversarial examples for Deep Q-Networks, well-known deep reinforcement learning models. We combine the two subproblems of crafting adversarial examples in deep reinforcement learning: selecting which states to perturb and determining how much to perturb them, so the attack can optimize both jointly. 
We trained Deep Q-Networks to play &hellip; <a href=\"https:\/\/lab.rivas.ai\/?p=242\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Enhancing Adversarial Examples on Deep Q-Networks with Previous Information<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[2,6],"class_list":["post-242","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-adversarial-ml","tag-computer-vision"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/242","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=242"}],"version-history":[{"count":4,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/242\/revisions"}],"predecessor-version":[{"id":293,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/242\/revisions\/293"}],"wp:attachment":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=242"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=242"},{"taxonomy":"post_tag","embeddable":true,"href":"https:
\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=242"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}