Inserted: 1 May 2019
Last Updated: 1 May 2019
This article is the second work in our series of papers dedicated to image processing models based on the fractional order total variation $TV^r$. In the first work of this series, we studied key analytic properties of these semi-norms. Here we focus on the more applied aspects of such models. First, in order to obtain better reconstructed images, we propose several extensions of the fractional order total variation. These generalizations, collectively denoted by RVL, are modular, i.e. their parameters are mutually independent and can be fine-tuned to the task at hand. Next, we study bilevel training schemes based on RVL and show that such schemes are well defined, i.e. they admit minimizers. Finally, we provide numerical examples showing that training schemes based on RVL outperform those based on the classical regularizers $TV$ and $TGV^2$.
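To illustrate the general idea of a bilevel training scheme, the following is a minimal one-parameter sketch, not the RVL scheme of the paper: the lower level solves a variational denoising problem for a given regularization weight (here a smooth $H^1$-type surrogate for total variation, chosen only so the subproblem has a closed-form linear solve), and the upper level selects the weight whose reconstruction is closest to the ground truth on a training pair. All function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def denoise(f, alpha):
    # Lower-level problem: min_u ||u - f||^2 + alpha * ||D u||^2,
    # a smooth H^1 surrogate for TV used here purely for illustration.
    n = len(f)
    D = np.diff(np.eye(n), axis=0)        # forward-difference matrix, shape (n-1, n)
    A = np.eye(n) + alpha * D.T @ D       # normal equations of the quadratic problem
    return np.linalg.solve(A, f)

def train_alpha(f_noisy, u_clean, alphas):
    # Upper level: among candidate weights, pick the one whose
    # reconstruction has the smallest error against the ground truth.
    errors = [np.mean((denoise(f_noisy, a) - u_clean) ** 2) for a in alphas]
    return alphas[int(np.argmin(errors))]

# Training pair: a piecewise-constant signal and its noisy observation.
rng = np.random.default_rng(0)
u_clean = np.concatenate([np.zeros(50), np.ones(50)])
f_noisy = u_clean + 0.1 * rng.standard_normal(100)

best = train_alpha(f_noisy, u_clean, [0.01, 0.1, 1.0, 10.0])
```

The modularity discussed in the abstract corresponds to replacing the single weight `alpha` by several mutually independent parameters (e.g. the fractional order and the regularization strength), each tuned by the same outer minimization.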