ML Meetup: Deep Image Prior

At Man AHL, we believe in the Python ecosystem and have been successfully trading Machine Learning-based systems since early 2014.

28 JUNE 2018

To give back and strengthen London’s Python and Machine Learning communities, we sponsor and support the London PyData and Machine Learning meetups. In May, we had the pleasure of welcoming Dmitry Ulyanov, PhD student at Skoltech and Research Scientist at Yandex, to the London Machine Learning Meetup. Below is a summary of his talk.

Deep Image Prior - Dmitry Ulyanov

In this talk, Dmitry Ulyanov presents Deep Image Prior, joint work with Andrea Vedaldi and Victor Lempitsky. The authors suggest that the structure of a generator network is sufficient to capture most low-level image statistics prior to any learning. They apply such networks to image restoration, a task which consists of estimating a clean image from a corrupted one and which, as such, requires prior knowledge of the image generating process.

Dmitry first introduces the two standard approaches to image restoration: learning a prior from a dataset, or explicitly defining (handcrafting) one. The first approach, learning a prior, can be done for instance by training a deep Convolutional Neural Network (CNN) on corrupted/clean image pairs as inputs/outputs; here the prior is embedded in the restoration process itself, i.e. in the trained network. The second approach, explicitly defining a prior, consists of minimizing a loss function between the estimated image and the corrupted image subject to constraints such as sharpness or a natural appearance. These constraints, which represent the prior, are difficult to express mathematically; as a result, learning a prior from a dataset tends to produce better results than explicitly defining one.
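The handcrafted approach can be made concrete with a toy example. The sketch below, a minimal illustration rather than anything from the talk, denoises a 1-D signal by gradient descent on a data-fit term plus a handcrafted quadratic smoothness penalty; the signal, the penalty and all parameter values are assumptions chosen for illustration:

```python
import numpy as np

# Handcrafted-prior restoration on a toy 1-D "image":
# minimise ||x - noisy||^2 + lam * sum_i (x[i+1] - x[i])^2 by gradient descent.
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 3 * np.pi, 200))     # ground truth
noisy = clean + 0.3 * rng.standard_normal(200)     # corrupted observation

lam = 2.0          # strength of the handcrafted smoothness prior (hand-tuned)
x = noisy.copy()   # initialise the estimate at the corrupted input

for _ in range(500):
    # gradient of the data-fit term ||x - noisy||^2
    grad_data = 2 * (x - noisy)
    # gradient of the smoothness penalty sum_i (x[i+1] - x[i])^2
    diff = np.diff(x)
    grad_prior = np.zeros_like(x)
    grad_prior[:-1] -= 2 * diff
    grad_prior[1:] += 2 * diff
    x -= 0.05 * (grad_data + lam * grad_prior)

# the smoothed estimate should sit closer to the clean signal than the input
print(np.mean((noisy - clean) ** 2), np.mean((x - clean) ** 2))
```

The weight `lam` is exactly the hand-tuned knob that makes handcrafted priors hard to get right: it is a mathematical stand-in for a belief about what a "natural" signal looks like.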

While many deep learning successes can be attributed to the ability to learn from large datasets, Dmitry proposes that the structure of a generator network alone is sufficient to impose a strong prior. Deep Image Prior bridges the two approaches by constraining the output to be generated by a network. Their method first (randomly) initializes an input to the generator network, then updates the network parameters by maximum a posteriori (MAP) estimation so that the output matches the corrupted image, and finally uses these fitted parameters to produce the restored image. Unlike the standard training setup, in which weights are fixed and inputs are varied to produce different outputs, here the authors fix the input and vary the weights. While similar in spirit to the second approach, Deep Image Prior does not define the prior explicitly; the prior is embedded in the structure of the generator network itself. And while learned priors usually require large training datasets, this method needs only the corrupted image.
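The fixed-input, varied-weights scheme can be sketched as follows. This toy uses a tiny fully connected network in NumPy rather than the convolutional generators of the paper, so it illustrates only the training loop, not the architectural prior itself; all sizes, initial scales and learning rates are assumptions:

```python
import numpy as np

# Deep-Image-Prior-style loop on a toy 1-D signal: the random input z is
# FIXED, and only the generator weights are optimised to fit the single
# corrupted observation. The iteration budget n_steps acts as early stopping.
rng = np.random.default_rng(1)
noisy = np.sin(np.linspace(0, 3 * np.pi, 200)) + 0.3 * rng.standard_normal(200)

z = rng.standard_normal(32)               # fixed random input ("code")
W1 = 0.1 * rng.standard_normal((64, 32))  # the weights are all we learn
W2 = 0.1 * rng.standard_normal((200, 64))

lr, n_steps = 0.01, 300
losses = []
for _ in range(n_steps):
    h = np.tanh(W1 @ z)                   # forward pass through the toy generator
    x_hat = W2 @ h                        # current restored estimate f_theta(z)
    resid = x_hat - noisy
    losses.append(float(resid @ resid))   # data term ||f_theta(z) - x0||^2
    # manual backpropagation through the two layers
    dW2 = np.outer(2 * resid, h)
    dpre = (W2.T @ (2 * resid)) * (1 - h ** 2)
    dW1 = np.outer(dpre, z)
    W2 -= lr * dW2
    W1 -= lr * dW1

print(losses[0], losses[-1])              # loss falls as the weights adapt
```

Note the inversion of the usual roles: in standard training, many (input, output) pairs shape fixed weights; here a single target shapes the weights while the input never changes.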

Dmitry then presents the experiments they ran to better understand Deep Image Prior, and compares it to alternative approaches on the tasks of denoising, inpainting, super-resolution, feature inversion and activation maximization. When using it to denoise an image, he illustrates how Deep Image Prior first learns the structured component of the image and only later starts to overfit the noise. He emphasizes that using a neural network as a prior may seem counterintuitive, since a network trained long enough can probably approximate any function; their method therefore requires an early-stopping policy. Dmitry then shows results on the other restoration tasks mentioned above: Deep Image Prior performs better than handcrafted priors, and as well as, if not better than, methods based on learned priors. He argues that the failure of learned-prior methods to perform significantly better than Deep Image Prior reveals a fundamental flaw. The difference between the two approaches lies in whether the prior is learned by training on a dataset or captured directly from the single image; the similarity in results suggests that learning the prior from a dataset, while expensive, does not improve the output significantly.
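One way such an early-stopping policy could be implemented is a discrepancy-principle style rule: stop optimising once the fit to the corrupted image reaches the expected noise energy, since any further progress can only come from fitting the noise itself. This rule, and the assumption that the noise level sigma is known or estimable, are illustrative choices, not details given in the talk:

```python
import numpy as np

def should_stop(x_hat, noisy, sigma):
    # Stop once the residual energy drops to the expected noise energy
    # sigma^2: beyond that point the network can only be fitting noise.
    return float(np.mean((x_hat - noisy) ** 2)) <= sigma ** 2

rng = np.random.default_rng(2)
noisy = 0.3 * rng.standard_normal(200)        # a pure-noise "image"
print(should_stop(noisy, noisy, 0.3))         # perfect fit -> True, stop
print(should_stop(noisy + 1.0, noisy, 0.3))   # residual too large -> False
```

In practice the stopping iteration could also simply be fixed in advance per task, as in the `n_steps` budget of the earlier sketch.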

Dmitry concludes that their work is not about introducing a new image restoration method, but about showing that the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. The next step of their work is to understand how much the dataset and the network each contribute when both are used.



