{"id":6426,"date":"2023-06-19T18:51:09","date_gmt":"2023-06-20T02:51:09","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=6426"},"modified":"2025-04-24T17:15:23","modified_gmt":"2025-04-24T17:15:23","slug":"guide-to-image-inpainting-using-machine-learning-to-edit-and-correct-defects-in-photos","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/guide-to-image-inpainting-using-machine-learning-to-edit-and-correct-defects-in-photos\/","title":{"rendered":"Guide to Image Inpainting: Using machine learning to edit and correct defects in photos"},"content":{"rendered":"\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/guide-to-image-inpainting-using-machine-learning-to-edit-and-correct-defects-in-photos\">\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"514\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/06\/1mRwqjmJv4EWZ1ndseP830g-1024x514.webp\" alt=\"\" class=\"wp-image-6427\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/06\/1mRwqjmJv4EWZ1ndseP830g-1024x514.webp 1024w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/06\/1mRwqjmJv4EWZ1ndseP830g-300x150.webp 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/06\/1mRwqjmJv4EWZ1ndseP830g-768x385.webp 768w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/06\/1mRwqjmJv4EWZ1ndseP830g-1536x770.webp 1536w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/06\/1mRwqjmJv4EWZ1ndseP830g-2048x1027.webp 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<div class=\"fh fi fj fk fl\">\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<p id=\"df93\" class=\"pw-post-body-paragraph mu mv fo be b gm mw mx my gp mz na nb nc nd ne nf ng nh ni nj nk nl nm nn no fh bj\" data-selectable-paragraph=\"\">We\u2019ve all heard the saying <em class=\"np\">A picture is worth a thousand words<\/em>. But is a tarnished image with gaping holes or splotches or blurs worth a few hundred? What if you just found an age-old photograph of your grandparents\u2019 wedding, but the surface was so worn that you could barely make out their faces. Or perhaps you got photobombed in what would otherwise have been the perfect picture. 
Or maybe you’re like me, wondering why no one has built an option into smartphone camera apps to remove unwanted objects from images.</p>

<figure><img src="https://miro.medium.com/v2/resize:fit:700/1*pyWQrsKEypdOPxQNT48M5g.png" alt="" width="700" height="526"><figcaption>This view from my school would be just the sort of thing inpainting could improve.</figcaption></figure>

<h1>What is Image Inpainting?</h1>

<blockquote><p>Inpainting is the process of reconstructing lost or deteriorated parts of images and videos. In the museum world, in the case of a valuable painting, this task would be carried out by a skilled art conservator or art restorer. In the digital world, inpainting refers to the application of sophisticated algorithms to replace lost or corrupted parts of the image data. (<a href="https://ww2.mathworks.cn/matlabcentral/fileexchange/50366-inpainting?s_tid=prof_contriblnk">source</a>)</p></blockquote>

<p>This official definition of inpainting <a href="https://en.wikipedia.org/wiki/Inpainting">on Wikipedia</a> already takes into account the use of “sophisticated algorithms” that do the same work of manually overwriting imperfections or repairing defects, but in a fraction of the time.</p>

<p>As deep learning technologies progress, however, inpainting has become so thoroughly automated that it often requires no human intervention at all: simply feed a damaged image to a neural network and receive the corrected output. Go ahead and try it out yourself with <a href="https://www.nvidia.com/research/inpainting/">NVIDIA’s web playground</a>, which demonstrates how their network fills in a missing portion of any image.</p>

<p>Simply drag and drop any image file, erase a portion of it with the cursor, and watch how the AI patches it up. I tried it on a few pictures lying around on my desktop.
Here’s one of them below, with a big chunk of my face missing and the neural network restoring it in a matter of seconds, albeit making me look like I just got out of a street fight.</p>

<figure><img src="https://miro.medium.com/v2/resize:fit:1000/1*hshvXmbWNaVSxMR7lyCTNQ.png" alt="" width="1000" height="358"></figure>

<p>You can also use it to quickly get rid of something in a picture. Here’s another image I had lying around: a great view of Hangzhou’s West Lake with the picturesque Leifeng Pagoda in the distance. The AI does a great job of envisioning a new lake with no pagoda around.</p>

<figure><img src="https://miro.medium.com/v2/resize:fit:624/1*d86-PJKv_Em7GHEUBbO5BQ.png" alt="" width="624" height="332"></figure>

<p>Traditional forms of image restoration usually revolve around a simple concept: given a gap in pixels, fill the gap with pixels that are the same as, or similar to, neighboring pixels. These techniques depend on various factors and are most effective for removing noise or small defects from images.
They will most likely fail when the image has huge gaps or a significant amount of missing data.</p>

<p>The most straightforward and conventional technique for image restoration is <a href="https://en.wikipedia.org/wiki/Deconvolution">deconvolution</a>, which is performed in the frequency domain: after computing the <a href="https://en.wikipedia.org/wiki/Fourier_transform">Fourier transform</a> of both the image and the point spread function (PSF), it undoes the resolution loss caused by the blurring factors. Applying this technique usually yields an imperfectly deblurred image.</p>

<figure><img src="https://miro.medium.com/v2/resize:fit:700/1*9U_0bX1U5VBF0AmowzkGVw.png" alt="" width="700" height="310"><figcaption><a href="https://www.math.ucla.edu/~imagers/htmls/inp.html">Source</a></figcaption></figure>
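<p>To make the frequency-domain idea concrete, here’s a minimal sketch of Wiener-style deconvolution in NumPy. The function name, the assumption that the PSF is centered and padded to the image’s shape, and the regularization constant <code>k</code> are all illustrative, not from any particular library:</p>

<pre><code>import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain (Wiener) deconvolution of a grayscale image.

    blurred: 2D float array, the degraded image.
    psf:     2D float array, centered and padded to blurred.shape.
    k:       regularization constant; k=0 reduces to naive inverse
             filtering, which badly amplifies noise.
    """
    # Fourier transforms of the image and the (origin-shifted) PSF.
    B = np.fft.fft2(blurred)
    H = np.fft.fft2(np.fft.ifftshift(psf))
    # Wiener filter: conj(H) / (|H|^2 + k) attenuates frequencies
    # where the PSF response, and hence the signal, is weak.
    restored = np.fft.ifft2(B * np.conj(H) / (np.abs(H) ** 2 + k))
    return np.real(restored)
</code></pre>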
700px\" data-testid=\"og\"><\/picture><\/div>\n<\/div>\n<figcaption class=\"mm mn mo mp mq mr ms be b bf z dv\" data-selectable-paragraph=\"\"><a class=\"af mt\" href=\"https:\/\/www.math.ucla.edu\/~imagers\/htmls\/inp.html\" target=\"_blank\" rel=\"noopener ugc nofollow\">Source<\/a><\/figcaption>\n<\/figure>\n<p id=\"0aa2\" class=\"pw-post-body-paragraph mu mv fo be b gm mw mx my gp mz na nb nc nd ne nf ng nh ni nj nk nl nm nn no fh bj\" data-selectable-paragraph=\"\">In a basic sense, inpainting does refer to the restoration of missing parts of an image based on the background information. It can be thought of as a process of filling in missing data in a designated region of visual input.<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"fh fi fj fk fl\">\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<h1 id=\"15c9\" class=\"nv nw fo be nx ny qc go oa ob qd gr od oe qe og oh oi qf ok ol om qg oo op oq bj\" data-selectable-paragraph=\"\">A New Approach with Machine Learning<\/h1>\n<p id=\"ea45\" class=\"pw-post-body-paragraph mu mv fo be b gm qh mx my gp qi na nb nc qj ne nf ng qk ni nj nk ql nm nn no fh bj\" data-selectable-paragraph=\"\">The new age alternative is to use deep learning to inpaint images by utilizing supervised image classification. The idea is that each image has a specific label, and neural networks learn to recognize the mapping between images and their labels by repeatedly being taught or \u201ctrained.\u201d<\/p>\n<p id=\"c1e8\" class=\"pw-post-body-paragraph mu mv fo be b gm mw mx my gp mz na nb nc nd ne nf ng nh ni nj nk nl nm nn no fh bj\" data-selectable-paragraph=\"\">When trained on huge training datasets (millions of images with thousands of labels), deep networks have remarkable classification performance that can often surpass human accuracy. 
Generative adversarial networks are typically used for this sort of implementation, given their ability to “generate” new data, or in this case, the missing information.</p>

<figure><img src="https://miro.medium.com/v2/resize:fit:576/1*1J8tjUdn215sy6Px68AC2g.gif" alt="" width="576" height="280"><figcaption>Courtesy: NVIDIA</figcaption></figure>

<p>The basic workflow is as follows: feed the network an input image with “holes” or “patches” that need to be filled. These patches can be considered a hyperparameter required by the network, since the network has no way of discerning what actually needs to be filled in. For instance, a picture of a person with a missing face conveys no meaning to the network beyond changing pixel values.</p>

<p>To enable the neural network to understand what part of the image actually needs filling in, we need a separate layer mask that marks the pixels with missing data. The input image then goes through several convolutions and deconvolutions as it traverses the network layers. The network produces an entirely synthetic image generated from scratch. The layer mask allows us to discard the portions that are already present in the incomplete image, since we don’t need to fill those parts in. The newly generated image is then superimposed on the incomplete one to yield the output, as sketched in the snippet below.</p>

<figure><img src="https://miro.medium.com/v2/resize:fit:700/1*NijyRJOe0KfHXomucmW58A.png" alt="" width="700" height="130"><figcaption>Comparison of various inpainting approaches</figcaption></figure>
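<p>A minimal sketch of that final compositing step, assuming the mask is 1 where pixels are missing and 0 elsewhere; <code>generated</code> stands in for whatever the network outputs:</p>

<pre><code>import numpy as np

def composite(damaged, generated, mask):
    """Keep known pixels from the damaged image and take only the
    missing region from the network's synthetic output.

    damaged, generated: float arrays of shape (H, W, 3) in [0, 1].
    mask: float array of shape (H, W, 1); 1 marks missing pixels.
    """
    # Known pixels pass through untouched; holes come from the generator.
    return (1.0 - mask) * damaged + mask * generated
</code></pre>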
<p>A discriminator network, such as the one in a conventional GAN, can prove useful at this point. Its main role in this scenario is to ensure that the final image obtained after filling in the gaps doesn’t look obviously fake: compared with the original image, it needs to look reasonably similar, differing only in minute details.</p>

<p>The problem here can be the absence of an original image to compare against. A discriminator network trained on thousands of image samples might be just the thing. <a href="http://iizuka.cs.tsukuba.ac.jp/projects/completion/data/completion_sig2017.pdf">Iizuka et al.</a> propose such a network for image inpainting, pictured below.</p>

<p>It consists of a completion network (for convolution and deconvolution) and two auxiliary context discriminator networks that are used only for training the completion network and are discarded at test time. The global discriminator network takes the entire image as input, while the local discriminator network takes only a small region around the completed area. Both discriminators are trained to determine whether an image is real or was filled in by the completion network, while the completion network is trained to fool them both.</p>

<figure><img src="https://miro.medium.com/v2/resize:fit:1000/1*XbYtwcpoxTysr9q2nuH-pg.png" alt="" width="1000" height="334"><figcaption>Globally and Locally Consistent Image Completion (<a href="http://iizuka.cs.tsukuba.ac.jp/projects/completion/data/completion_sig2017.pdf">Iizuka et al.</a>)</figcaption></figure>
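<p>That joint objective can be sketched as follows. Here <code>d_global</code> and <code>d_local</code> are stand-ins for the two discriminators, and summing their logits is a simplification of this sketch (the paper concatenates their features before a final fused layer):</p>

<pre><code>import torch
import torch.nn.functional as F

def discriminator_logits(d_global, d_local, image, center):
    """Score an image with both context discriminators.
    `center` is the bounding box (y, x, h, w) around the completed region."""
    y, x, h, w = center
    patch = image[:, :, y:y + h, x:x + w]
    return d_global(image) + d_local(patch)

def gan_losses(d_global, d_local, real, completed, center):
    real_logit = discriminator_logits(d_global, d_local, real, center)
    fake_logit = discriminator_logits(d_global, d_local, completed, center)
    # Discriminators: push real images toward 1, completed ones toward 0.
    d_loss = (F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit))
              + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit)))
    # Completion network: fool both discriminators (completed -> 1).
    g_loss = F.binary_cross_entropy_with_logits(fake_logit, torch.ones_like(fake_logit))
    return d_loss, g_loss
</code></pre>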
<p>Not long ago, researchers at NVIDIA <a href="https://eccv2018.org/openaccess/content_ECCV_2018/papers/Guilin_Liu_Image_Inpainting_for_ECCV_2018_paper.pdf">published a paper</a> on their new and improved method for inpainting irregular holes in images using partial convolutions. They claimed that previous deep learning approaches focused on rectangular holes, usually around the image center, and often relied on expensive post-processing or “touching up.”</p>

<p>Their new method “operates robustly on irregular hole patterns, and produces semantically meaningful predictions that incorporate smoothly with the rest of the image without the need for any additional post-processing or blending operation.”</p>

<figure><iframe title="NVIDIA's AI Removes Objects From Your Photos | Two Minute Papers #255" src="https://www.youtube.com/embed/tU484zM3pDY" width="854" height="480" frameborder="0" allowfullscreen></iframe></figure>

<p>Their test results on the Celeba-HQ and ImageNet datasets (pictured below) demonstrate that the new method doesn’t suffer from the drawbacks of previous approaches, which required some form of touch-up to ensure the filled holes didn’t look conspicuously fake. It also avoids imperfections such as granular degradation and blurred edges.</p>

<p>Fundamentally, their method works by having each convolution operate only on the known pixels, re-normalizing its output and passing an updated mask to the next layer, so the missing region shrinks as the image moves through the network, until the result passes its test of authenticity by the discriminator.</p>
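<p>Here’s a simplified sketch of that partial-convolution step, in the spirit of Liu et al. It compresses details of the official implementation (which tracks the mask per input channel and wraps this in a layer), so treat it as illustration rather than the paper’s code:</p>

<pre><code>import torch
import torch.nn.functional as F

def partial_conv2d(x, mask, weight, bias, padding=1):
    """One partial-convolution step (after Liu et al., ECCV 2018).

    x:      (N, C_in, H, W) features; hole pixels are assumed zeroed out.
    mask:   (N, 1, H, W), 1 at valid pixels and 0 inside holes.
    weight: (C_out, C_in, kH, kW) ordinary conv kernel; bias: (C_out,).
    """
    kh, kw = weight.shape[2], weight.shape[3]
    # Convolve only the valid pixels.
    out = F.conv2d(x * mask, weight, bias=None, padding=padding)
    # Count valid pixels under each kernel window.
    ones = torch.ones(1, 1, kh, kw, device=x.device, dtype=x.dtype)
    valid = F.conv2d(mask, ones, padding=padding)
    # Re-normalize by the fraction of the window that was valid.
    scale = (kh * kw) / valid.clamp(min=1.0)
    new_mask = (valid > 0).to(x.dtype)
    # Windows with no valid input produce 0; holes shrink every layer.
    return (out * scale + bias.view(1, -1, 1, 1)) * new_mask, new_mask
</code></pre>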
<figure><img src="https://miro.medium.com/v2/resize:fit:1000/1*9AFoCJVnrGaa2ueBbC8jig.png" alt="" width="1000" height="592"><figcaption>Comparison of test results from the Celeba-HQ (above) and ImageNet (below) datasets.</figcaption></figure>

<p>Image inpainting is a technique that has inspired widespread experimentation among enthusiasts. While there is always room for improvement, many tools and frameworks offer their own solutions for inpainting images.</p>

<p>OpenCV has two built-in methods for it. Both are accessed through the same function, <code><a href="https://docs.opencv.org/3.3.1/d1/d0d/group__photo.html#gaedd30dfa0214fec4c88138b51d678085">cv2.inpaint()</a></code>, which simply needs a damaged image and a layer mask. The first is based on the Fast Marching Method, which starts from the boundary of the region to be inpainted and moves toward its center, gradually filling everything in from the boundary inward. Each pixel is replaced by a normalized weighted sum of all the known pixels in its neighborhood.</p>

<figure><img src="https://miro.medium.com/v2/resize:fit:700/1*unA_VLeezkJayo6TzCRWIQ.png" alt="" width="700" height="166"><figcaption>The Fast Marching Method (left). Taking a small neighborhood Bε(p) of size ε of the known image around a point p, one must inpaint p situated on the boundary ∂Ω of the region to inpaint Ω. A thick region to inpaint needs weights chosen in accordance (right).</figcaption></figure>

<p>Selection of weights is an important matter. More weight is given to pixels lying near the boundary of the missing parts, such as those near contours. Once a pixel is inpainted, the algorithm moves to the next nearest pixel, which ensures that the pixels closest to known pixels are inpainted first.</p>

<p>In simple words, the weighting function plays an important role in deciding the quality of the inpainted image. In Telea’s formulation, each unknown pixel p is estimated as a normalized weighted sum of the contributions I(q) + ∇I(q)·(p − q) from the known pixels q in its neighborhood Bε(p), with weights favoring pixels that are close to p, close to the boundary normal, and close to the same level set.</p>

<pre><code>import numpy as np
import cv2

# Load the damaged image and a mask that is non-zero
# exactly where pixels need to be filled in.
img = cv2.imread('messi_2.jpg')
mask = cv2.imread('mask2.png', 0)

# Inpaint with the Fast Marching Method (Telea); the third
# argument is the neighborhood radius around each point.
dst = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)

cv2.imshow('dst', dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<figure><img src="https://miro.medium.com/v2/resize:fit:700/1*wD9YAcUiHnSIHuNZ1DS3RA.png" alt="" width="700" height="179"></figure>

<p>The second method is based on a heuristic principle that encompasses fluid dynamics and partial differential equations. It first travels along the edges from known regions to unknown regions (because edges are meant to be continuous). It continues traveling along isophotes, which can be thought of as lines joining points with the same intensity, while matching gradient vectors at the boundary of the inpainting region.</p>
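<p>Switching to this Navier-Stokes based variant only requires a different flag on the same call, reusing the <code>img</code> and <code>mask</code> placeholders from the snippet above:</p>

<pre><code># Same inputs as before; only the algorithm flag changes.
dst_ns = cv2.inpaint(img, mask, 3, cv2.INPAINT_NS)
</code></pre>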
<p>You can also check out the Keras implementation of the GMCNN (Generative Multi-column Convolutional Neural Network) inpainting model originally proposed at NIPS 2018: <a href="https://arxiv.org/abs/1810.08771"><strong>Image Inpainting via Generative Multi-column Convolutional Neural Networks</strong></a>.</p>

<p>The model was trained using high-resolution images from the Places365-Standard dataset, which can be accessed <a href="http://places2.csail.mit.edu/download.html">here</a>.</p>

<figure><img src="https://miro.medium.com/v2/resize:fit:700/1*F2GLJC13CvpF2N3XBwvFVA.png" alt="" width="700" height="2147"></figure>

<h1>Further Research</h1>

<p>Inpainting images has always been a popular task that lures developers and researchers, as it’s a challenging problem that can always be perfected further.
Once deep learning proved to be a significant boost for inpainting algorithms, researchers also started exploring various other use cases and experiments that became possible with the same approach and techniques.</p>

<figure><img src="https://miro.medium.com/v2/resize:fit:652/1*s2bG37m-8g4sqioUC3T76w.png" alt="" width="652" height="448"><figcaption>SC-FEGAN can be used to make realistic edits to images by providing a free-form mask, a sketch, and colors as input.</figcaption></figure>

<h2>SC-FEGAN</h2>

<p>This idea was proposed in the research paper <a href="https://arxiv.org/abs/1902.06838"><em>SC-FEGAN: Face Editing Generative Adversarial Network with User’s Sketch and Color</em></a>. In simple terms, it uses the principle of a GAN to fill in the missing parts of images with the data it “imagines.”</p>

<p>We’ve already seen that we can use this ability to obliterate or “erase” specific objects from an image (and not just noise). SC-FEGAN goes a step further by allowing users to edit the parts of the image they want to modify, serving as more of an image editing system than just an inpainting network.</p>

<p>The user provides a free-form mask, sketch, and color as input. A trainable convolutional network uses these inputs as guidelines to generate the new image. In essence, the aim here is not to repair a defective image, but to intentionally mask out part of an image and use the GAN’s ability to modify it as one sees fit.</p>

<figure><img src="https://miro.medium.com/v2/resize:fit:1000/1*yMpfgJt247y66_KpC4LZQg.png" alt="" width="1000" height="513"><figcaption>Network Architecture of SC-FEGAN</figcaption></figure>
[Figure: Network architecture of SC-FEGAN]

EdgeConnect

Another example is EdgeConnect (https://arxiv.org/abs/1901.00212), which implements "adversarial edge learning" to improve upon the imperfections left behind by earlier deep learning inpainting techniques.

It uses a two-stage adversarial model comprising an edge generator followed by an image completion network. The edge generator "hallucinates" (renders its own imagination of) the edges of the missing region, whether that region is regular or irregular, and the image completion network fills in the missing pixels using the hallucinated edges as a guide.

The model was trained and evaluated on the CelebA, Places2, and Paris StreetView datasets, with the researchers demonstrating its success over previous methods, both quantitatively and qualitatively.
The model is implemented in PyTorch, and the code is available at https://github.com/knazeri/edge-connect.

[Figure: (Left to right) Original image, input image, generated edges (blue lines denote hallucinated edges), and inpainted results without any post-processing.]

The reasoning is simple. When one looks at an image with large missing regions, the human mind can supply the missing information and form a mental representation of the completed picture. The edges of objects in the image help the most in conveying this perception, as they help us understand "what" the image actually is; the finer details can be filled in later.

One could argue that completing the edges in the missing region is a battle half won. The model accepts input images with missing regions and computes edge masks. Edges already present are drawn using the Canny edge detector, whereas the edges that should lie in the missing areas are hallucinated by the edge generator network.
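As a rough illustration of that input preparation (a sketch, not the repository's actual data pipeline; the image, mask, and thresholds are synthetic), the snippet below computes a Canny edge map for the known pixels and zeroes it out inside the hole, which is exactly the part the learned edge generator must then hallucinate:

```python
# Sketch of EdgeConnect-style input preparation with OpenCV.
import cv2
import numpy as np

# Synthetic stand-ins so the sketch runs as-is; swap in real images.
gray = np.full((256, 256), 40, dtype=np.uint8)
cv2.circle(gray, (128, 128), 60, 220, -1)   # an object with clear edges
mask = np.zeros_like(gray)
mask[96:160, 96:160] = 255                  # 255 marks the missing region

# Edges for the known pixels come from a classical detector.
edges = cv2.Canny(gray, 100, 200)
edges[mask > 0] = 0                         # edges inside the hole are unknown

# Stage 1 (learned): the edge generator hallucinates edges inside the hole.
# Stage 2 (learned): the completion network fills pixels guided by those edges.
```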
Interestingly, the model can also be used as an interactive image editing tool, as shown above. As with SC-FEGAN, it's possible to manipulate objects in the edge domain and transform the edge maps back to generate a new image. One such possibility is using complementary halves of two different images as the input and the edge map, respectively; the generated image then appears to share the characteristics of both. A rough sketch of that setup follows.
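This is only a construction of the inputs; the names and the final (commented-out) call to a trained completion network are illustrative, not the repository's API:

```python
# Sketch of edge-domain editing: keep the left half of one image as known
# pixels, but guide the "missing" right half with another image's edges.
import cv2
import numpy as np

img_a = np.full((256, 256), 90, dtype=np.uint8)
img_b = np.full((256, 256), 90, dtype=np.uint8)
cv2.circle(img_a, (128, 128), 70, 230, -1)
cv2.rectangle(img_b, (64, 64), (192, 192), 230, -1)

mask = np.zeros_like(img_a)
mask[:, 128:] = 255                          # right half treated as missing

hybrid_input = img_a.copy()
hybrid_input[mask > 0] = 0                   # blank out the "missing" half

# Known edges from image A on the left, borrowed edges from image B on the right.
guide_edges = np.where(mask > 0, cv2.Canny(img_b, 100, 200),
                       cv2.Canny(img_a, 100, 200))

# result = completion_network(hybrid_input, guide_edges, mask)  # trained model
```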
Pluralistic Image Completion

When an area of an image is missing, there can be multiple reasonable possibilities for what might fill it. And since these methods are used in practical applications rather than in tests where the original is known, it's never possible to say what the missing region "should" actually contain.

[Figure: Numerous possibilities can exist when it comes to image completion, especially when the gaps are bigger.]

Most image completion methods produce only one result for each masked input. Pluralistic image completion (https://arxiv.org/abs/1903.04227), by contrast, focuses on generating several diverse, plausible solutions.

Conventional deep learning methods keep only the single ground-truth instance for each training input, which leads to minimal diversity. The pluralistic method instead works in two parallel phases. One is a reconstructive phase that uses the single given ground truth to gain insight into the missing parts and rebuilds the original image from this distribution.

The other is a generative phase in which this "insight" about the missing parts is coupled with the newly rebuilt images to produce potential fits or "spinoffs." Both phases are carried out by generative networks.
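The practical payoff is that, at test time, one can draw several different latent codes and decode each into a distinct completion. The sketch below shows that sampling loop under stated assumptions: `PluralisticModel` is a hypothetical toy stand-in, not the architecture from the paper or its repository.

```python
# Sketch of sampling diverse completions from a pluralistic-style model.
import torch
import torch.nn as nn

class PluralisticModel(nn.Module):
    """Toy decoder: a latent code plus the masked image yields one completion."""
    def __init__(self, z_dim: int = 64):
        super().__init__()
        self.z_dim = z_dim
        self.decode = nn.Sequential(
            nn.Conv2d(3 + 1 + z_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, masked_image, mask, z):
        # Broadcast the latent code over the spatial grid, then decode.
        b, _, h, w = masked_image.shape
        z_map = z.view(b, self.z_dim, 1, 1).expand(b, self.z_dim, h, w)
        out = self.decode(torch.cat([masked_image, mask, z_map], dim=1))
        return masked_image + out * mask     # only the hole is generated

model = PluralisticModel()
image = torch.rand(1, 3, 128, 128)
mask = torch.zeros(1, 1, 128, 128)
mask[:, :, 32:96, 32:96] = 1.0
masked = image * (1 - mask)

# Each latent sample yields a different plausible completion.
completions = [model(masked, mask, torch.randn(1, model.z_dim)) for _ in range(5)]
```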
[Figure: Source: https://github.com/lyndonzheng/Pluralistic-Inpainting/blob/master/images/free_form.gif]

Conclusion
While it's true that machine learning is the latest thing, it doesn't necessarily follow that a machine learning approach always outperforms conventional inpainting algorithms. The superiority of deep learning methods for image inpainting remains highly subjective, varying from image to image.

Although inpainting with deep learning works much better when the algorithm must be applied to a general category of images (i.e., when the kind of image is not known beforehand), it's also probable that the growth in computational power, together with the introduction of newer and more powerful AI accelerators, will eventually lead to perfectly inpainted images.
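As a point of reference, a conventional (non-learning) inpainting algorithm can be tried in a few lines with OpenCV. The sketch below uses a synthetic image and scratch mask (illustrative, not from the article) and applies the diffusion-based Telea method, often a reasonable baseline before reaching for a neural model:

```python
# Classical (non-learning) inpainting baseline: OpenCV's Telea method.
import cv2
import numpy as np

# Synthetic stand-ins so the example runs as-is; swap in real files.
image = np.full((256, 256, 3), 200, dtype=np.uint8)
cv2.rectangle(image, (60, 60), (196, 196), (30, 120, 30), -1)
mask = np.zeros((256, 256), dtype=np.uint8)
cv2.line(mask, (0, 128), (255, 128), 255, 9)   # simulate a scratch
damaged = image.copy()
damaged[mask > 0] = 255                        # the "defect" to repair

# Args: source, mask of missing pixels, inpaint radius, algorithm flag.
restored = cv2.inpaint(damaged, mask, 3, cv2.INPAINT_TELEA)
```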