NUS – School of Computing
CS6234 Advanced Topics in Algorithms
L0 Gradient Minimization
Abdelhak Bentaleb (A0135562H), Lei Yifan (A0138344E), Ji Xin (A0138230R), Dileepa Fernando (A0134674B), Abdelrahman Kamel (A0138294X)

Outline
• Introduction (Abdelhak)
• L0 gradient minimization using coordinate descent optimization (Ji Xin)
• L0 gradient minimization using alternating optimization and region fusion (Abdelrahman & Dileepa)
• Experiments & applications (Lei Yifan)

Introduction (Abdelhak)
• L0 norm
• L0 gradient
• L0 gradient minimization
• Why it is NP-hard
• Alternating optimization

Introduction – L0 Norm
• The Lp norm of a vector x is ‖x‖_p = (Σ_i |x_i|^p)^(1/p), p ∈ ℝ
  • the p-th root of the sum of all elements raised to the p-th power
• Taken literally, the L0 norm would be ‖x‖_0 = (Σ_i x_i^0)^(1/0)
• Mathematically, this is not a norm
  • because of the zeroth power (0^0) and zeroth root it contains
• For practical purposes, the L0 norm is therefore redefined as ‖x‖_0 = #(i | x_i ≠ 0)
  • the total number of non-zero elements in a vector
• Widely applied to discrete functions/signals

Introduction – L0 Gradient
• Recall: the L0 norm counts the non-zero elements of a vector
• The L0 gradient is the number of non-zero gradients
• Example:
[Figure: a discrete function of 20 points and its gradients; 3 gradients are zero and 17 are non-zero, so the L0 gradient is 17.]

Introduction – L0 Gradient Minimization
• Convert an input function I into an output function S:
  minimize the number of non-zero gradients (L0 gradient) of S while keeping S as similar to I as possible
• Minimizing the L0 gradient is an optimization problem

Ad-hoc Example
[Figure: the same 20-point discrete function I with L0 gradient = 17; removing the small gradients yields an output function S with L0 gradient = 8.]

Introduction – It is NP-Hard
• Discrete
• Non-convex
• Non-differentiable
• Traditional optimization techniques (such as gradient descent) cannot be used
• So we use alternating optimization to obtain an approximate solution
• (See Appendix A for a detailed argument)

Introduction – Alternating Optimization: Overview
• An iterative method that generates a sequence of improving approximate solutions to a problem
• An optimization problem can be written as:
  • minimize some objective function f(x) over a set of variables x
• Partition the variables x into m non-overlapping parts y_i
• This is a kind of local optimization
[Figure: schematic of alternating optimization over the partitions.]
• For each iteration t (until convergence):
  • for each partition y_i of the variables,
  • find the values of y_i for the next iteration such that f(x) is sufficiently reduced,
  • i.e., optimize one partition while fixing all the others (see the sketch below)
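To make this loop concrete, here is a minimal sketch of the alternating-optimization pattern on a toy two-variable quadratic; the objective, function name, and all values are ours, not from the slides. Each pass minimizes over one partition in closed form while the other is held fixed.

```python
import numpy as np

# Toy objective: f(a, b) = (a - 3)^2 + (b + 1)^2 + (a - b)^2.
# The two "partitions" are the scalars a and b; each inner step
# minimizes f over one partition while the other is held fixed
# (both one-dimensional minimizations have closed forms here).

def alternating_optimization(iters=20):
    a, b = 0.0, 0.0                  # initial guess
    for _ in range(iters):
        a = (3.0 + b) / 2.0          # argmin_a f(a, b) with b fixed
        b = (a - 1.0) / 2.0          # argmin_b f(a, b) with a fixed
    return a, b

print(alternating_optimization())    # converges to the joint minimum (5/3, 1/3)
```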
L0 Gradient Minimization (1): Using Coordinate Descent (Ji Xin)

L0 Gradient Minimization (1)
• Recall: the L0 gradient is the number of non-zero gradients
• Original function I → desired output S
• Denote the gradient of S by ∇S
• In real-world applications the original function usually consists of discrete points (e.g., samples of a 1D signal, pixels of a 2D image)
• Thus ∇S can be computed by subtracting neighbouring points
• For a 1D signal, one way to calculate the gradient at point p is ∇S_p = |S_p − S_{p−1}| + |S_{p+1} − S_p|

2D Formulation
• Original image I → smoothed output S
• The gradient at pixel p is ∇S_p = (∂_x S_p, ∂_y S_p)^T, where each dimension is treated as an independent 1D case
• The L0 gradient is ‖∇S‖_0 = #{p : |∂_x S_p| + |∂_y S_p| ≠ 0}
• Objective function: F = ‖S − I‖² + λ · ‖∇S‖_0
• λ controls the trade-off between the L2 term (input–output similarity) and the L0 gradient term (smoothness)

Solver
• Introduce auxiliary variables δ; δ is initialised to the gradient of the input image
• Each pixel p of δ is denoted δ_p = (δ_p^x, δ_p^y)^T, corresponding to ∂_x S_p and ∂_y S_p
• Objective function: F = ‖S − I‖² + λ · ‖δ‖_0 + γ · ‖∇S − δ‖²
• γ > 0 controls how closely δ must match the corresponding gradients ∇S
• This can be solved by alternating between the following two subproblems

Subproblem 1: Fix δ and Compute S
• Objective function: F = ‖S − I‖² + γ · ‖∇S − δ‖² (the λ‖δ‖_0 term is constant here)
• This is quadratic in S, so the global minimum is obtained by setting the gradient of F to zero and solving for S

Subproblem 2: Fix S and Compute δ
• Objective function: F = λ · ‖δ‖_0 + γ · ‖∇S − δ‖², equivalent to minimizing E = ‖∇S − δ‖² + (λ/γ) · ‖δ‖_0
• E decomposes into independent pixel-wise objectives, each solved optimally on its own:
  E_p = ‖∇S_p − δ_p‖² + (λ/γ) · ‖δ_p‖_0
• Only two candidates matter for each pixel p:
  • δ_p = (0, 0)^T gives E_p = ‖∇S_p‖²
  • δ_p = ∇S_p gives E_p = 0 + λ/γ = λ/γ
• Case 1: if λ/γ ≥ ‖∇S_p‖², then E_p achieves its minimum ‖∇S_p‖² when δ_p = (0, 0)^T
• Case 2: if λ/γ < ‖∇S_p‖², then E_p achieves its minimum λ/γ when δ_p = ∇S_p

Subproblem 2 – Combined Result
• The minimum E_p is achieved by
  δ_p = (0, 0)^T if λ/γ ≥ ‖∇S_p‖², and δ_p = ∇S_p otherwise
• with corresponding value
  E_p = ‖∇S_p‖² when δ_p = (0, 0)^T, and E_p = λ/γ when δ_p = ∇S_p
• The global objective is then minimized by aggregating the objectives of all pixels

Algorithm
• Input: image I, smoothing weight λ, parameters γ_0, γ_max and rate κ
• Initialisation: S ← I, γ ← γ_0, i ← 0
• repeat
  • with S^(i), solve for δ^(i) using the combined result above, for every pixel
  • with δ^(i), solve for S^(i+1) via the quadratic subproblem
  • γ ← κγ, i ← i + 1
• until γ ≥ γ_max
• Output: smoothed image S
• Notes: γ starts from a small γ_0 (here 2λ) and is multiplied by κ every iteration; κ controls the growth of γ; γ_max is set to 1e5; δ finally becomes an approximation of the gradient of the output image (a 1-D sketch of the full loop follows)
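As an illustration, the following is a minimal 1-D sketch of this algorithm, assuming forward differences for ∇S and a dense linear solve for subproblem 1; the function name `l0_smooth_1d` and the default parameter values are our choices, not the authors'.

```python
import numpy as np

def l0_smooth_1d(I, lam=0.02, kappa=2.0, gamma_max=1e5):
    """Sketch of the coordinate-descent L0 smoother for a 1-D signal.

    delta-update: closed-form threshold (Subproblem 2).
    S-update: solve (Id + gamma * D^T D) S = I + gamma * D^T delta,
    obtained by setting the gradient of Subproblem 1 to zero, where
    D is the forward-difference operator.
    """
    I = np.asarray(I, dtype=float)
    n = len(I)
    # Forward-difference matrix D (dense for clarity; sparse/FFT in practice).
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0

    S = I.copy()
    gamma = 2.0 * lam                     # gamma_0 = 2*lambda, as on the slides
    while gamma < gamma_max:
        g = D @ S                         # current gradients of S
        # Subproblem 2: keep a gradient only where it is "large enough",
        # i.e. delta_p = grad S_p if (grad S_p)^2 > lam/gamma, else 0.
        delta = np.where(g**2 > lam / gamma, g, 0.0)
        # Subproblem 1: quadratic in S, global minimum via a linear solve.
        A = np.eye(n) + gamma * (D.T @ D)
        S = np.linalg.solve(A, I + gamma * (D.T @ delta))
        gamma *= kappa
    return S

# Tiny usage example: a noisy two-level step signal.
sig = np.r_[np.ones(30), 3 * np.ones(30)] \
      + 0.05 * np.random.default_rng(1).standard_normal(60)
print(np.round(l0_smooth_1d(sig), 2))     # close to a clean two-level step
```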
L0 Gradient Minimization (2) (Abdelrahman & Dileepa)

Objective Function
• Original function I → desired output S
• We need to minimize the number of non-zero gradients of S
  • while keeping S as similar to I as possible:
  min_S ‖S − I‖² + λ · ‖∇S‖_0
  (the first term maintains similarity, the second minimizes gradients)
• λ is the parameter controlling the L0 gradient: a larger λ means a smaller L0 gradient

Objective Function (per point)
• Whether a gradient can be removed depends on the neighbours of a point
• For example, in 1D, point S_i affects the two gradients (S_i − S_{i−1}) and (S_{i+1} − S_i)
• So, for M points, the previous objective can be rewritten as a summation over all points:
  min_S Σ_{i=1..M} { (S_i − I_i)² + (λ/2) · Σ_{j ∈ N_i} ‖S_i − S_j‖_0 }
• λ is divided by 2 because each gradient is counted twice
• the inner sum collects all gradients from point i to its neighbours N_i

Neighbourhood
• What are the neighbours N_i of a point S_i?
[Figure: example neighbourhood patterns around a point S_i, e.g. two-sided neighbours in 1D and 4-connected neighbours in 2D.]

Alternating Optimization
• Recall that L0 minimization is NP-hard
• So we apply alternating optimization
• How: optimize one pair of neighbouring points at a time (one partition)
• For one pair, the minimization step becomes
  min_{S_i, S_j} (S_i − I_i)² + (S_j − I_j)² + λ · ‖S_i − S_j‖_0
  where S_i and S_j are a pair of neighbouring points

Minimization Step
• We now need to solve this minimization step
• Consider two cases:
  • S_i ≠ S_j
  • S_i = S_j

Minimization Step – Case 1
• Case 1: S_i ≠ S_j
• ‖S_i − S_j‖_0 = 1 because the gradient S_i − S_j ≠ 0
• So F becomes (S_i − I_i)² + (S_j − I_j)² + λ
• S_i = I_i and S_j = I_j give the minimum value F = λ

Minimization Step – Case 2
• Case 2: S_i = S_j
• ‖S_i − S_j‖_0 = 0 because the gradient S_i − S_j = 0
• So F becomes a simple quadratic in the single variable S_i (see Appendix B for the detailed solution)
• S_i = S_j = (I_i + I_j)/2 gives the minimum value F = (I_i − I_j)²/2

Fusion Criterion
• From Cases 1 & 2 we have two candidate minimum values of F
• To solve the minimization step, we need to know which is smaller
• Combined, the solution is:
  • if (I_i − I_j)²/2 ≤ λ: fuse, S_i = S_j = (I_i + I_j)/2, with F = (I_i − I_j)²/2
  • otherwise: keep S_i = I_i, S_j = I_j, with F = λ
• This is called the fusion criterion (a small sketch follows)
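Here is a small sketch of this criterion for a single pair; the function name is ours. The two calls below match the worked example that follows.

```python
def fuse_pair(Ii, Ij, lam):
    """Pairwise minimization step: returns (Si, Sj, F).

    Fuse to the common average when (Ii - Ij)^2 / 2 <= lam (Case 2);
    otherwise keep the input values and pay lam for the remaining
    non-zero gradient (Case 1).
    """
    if (Ii - Ij) ** 2 / 2.0 <= lam:
        s = (Ii + Ij) / 2.0
        return s, s, (Ii - Ij) ** 2 / 2.0
    return float(Ii), float(Ij), float(lam)

print(fuse_pair(2, 4, lam=4))   # -> (3.0, 3.0, 2.0): fused, gradient removed
print(fuse_pair(1, 5, lam=4))   # -> (1.0, 5.0, 4.0): kept, gradient stays
```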
Example – Fusion Criterion
• Assume λ = 4
• Pair with I_i = 2, I_j = 4: (I_i − I_j)²/2 = 2 < λ, so fuse: S_i = S_j = (I_i + I_j)/2 = 3
  • the L0 gradient drops from 1 to 0
• Pair with I_i = 1, I_j = 5: (I_i − I_j)²/2 = 8 > λ, so keep: S_i = I_i = 1, S_j = I_j = 5
  • the L0 gradient stays at 1

Region Fusion
• Up to now, we minimized L0 over pairs of points
• For alternating optimization, it is better to optimize regions (groups of points) together at a time
  • for faster convergence
• Assume each point I_i belongs to a group G_i of w_i points that have been fused to the same value S_i
[Figure: a group G_i of w_i = 3 points sharing one value.]

Region Fusion (groups)
• For generality in 2D and 3D functions, we must also consider the number of gradients c_{i,j} between each pair of neighbouring groups i and j, with j ∈ N_i
[Figure: two neighbouring 2D groups with c_{1,2} = c_{2,1} = 5 boundary gradients and w_1 = 10 points.]
• Note that w_i and c_{i,j} are initially set to 1

Region Fusion (minimization step)
• With the notion of groups, the minimization step becomes:
  min_{S_i, S_j} w_i (S_i − Y_i)² + w_j (S_j − Y_j)² + β · c_{i,j} · ‖S_i − S_j‖_0
• Y_i and Y_j are now the averages of the original function over the regions G_i and G_j, respectively
• w_i, w_j = number of points in G_i and G_j
• β controls the number of gradients between G_i and G_j
• c_{i,j} = number of gradients between G_i and G_j
• This equation is solved in exactly the same manner as the pairwise minimization step

Region Fusion (solution)
• Solution:
  • if w_i w_j ‖Y_i − Y_j‖² / (w_i + w_j) ≤ β · c_{i,j}: fuse both groups to Y = (w_i Y_i + w_j Y_j)/(w_i + w_j), the weighted average of the two groups G_i and G_j
  • otherwise: keep S_i = Y_i, S_j = Y_j
• This criterion decides whether to fuse the group G_j into the group G_i or not

Region Fusion Algorithm
[Slide: pseudocode of the full region-fusion algorithm.]

Initialization of the Algorithm
[Slide: initialization — every pixel starts as its own group, with w_i and c_{i,j} set to 1.]

Fusion Step for G_i with G_j
[Slide: pseudocode of the fusion step between two neighbouring groups.]

Fusion Step Explained
• The merge decision is taken by comparing the change in the objective function
• When a neighbour is merged, its information (value, weight, neighbour counts) is merged as well
• After deleting the merged group, the indexes are updated
• The number of groups is updated
• The loop then goes through the neighbours of the merged group looking for more fusions (see the sketch below)
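The sketch below condenses these steps for a 1-D signal. Our simplifications: β grows linearly from 0 to λ over a fixed number of outer passes, groups are scanned left to right, and in 1-D every neighbouring pair of groups shares exactly one gradient, so c_{i,j} = 1; the function name is ours.

```python
import numpy as np

def region_fusion_1d(I, lam, steps=100):
    """Simplified sketch of region fusion for a 1-D signal.

    Each group i carries its value Y[i] and its size w[i]; neighbouring
    groups i, i+1 are fused when
        w_i * w_j * (Y_i - Y_j)^2 / (w_i + w_j) <= beta * c_ij,
    and the fused value is the weighted average of the two groups.
    """
    Y = [float(v) for v in I]        # group values, initially one pixel each
    w = [1.0] * len(Y)               # group sizes, initially 1
    for t in range(steps + 1):
        beta = lam * t / steps       # beta raised from 0 up to lam
        i = 0
        while i + 1 < len(Y):
            c_ij = 1.0               # in 1-D a group pair shares one gradient
            lhs = w[i] * w[i + 1] * (Y[i] - Y[i + 1]) ** 2 / (w[i] + w[i + 1])
            if lhs <= beta * c_ij:
                # Fuse: replace both groups by their weighted average.
                Y[i] = (w[i] * Y[i] + w[i + 1] * Y[i + 1]) / (w[i] + w[i + 1])
                w[i] += w[i + 1]
                del Y[i + 1], w[i + 1]
            else:
                i += 1
    # Expand group values back to per-pixel output.
    return np.repeat(Y, np.array(w, dtype=int))

print(region_fusion_1d([1.0, 1.1, 0.9, 5.0, 5.1], lam=1.0))
# -> [1.  1.  1.  5.05 5.05]: small gradients fused, the large one kept
```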
Reduction of the Objective Function
• Claim: after any number of fusion steps, the sum of non-zero gradients removed from the merged neighbours is a lower bound on the total number of non-zero gradients removed
• Sub-claim: at the end of the β = 0 iteration,
  1. S_i = I_i for every pixel i (pixel values are unchanged)
  2. for any two neighbours p_1, p_2 ∈ I, I_{p_1} = I_{p_2} ⟹ G(p_1) = G(p_2)
  (here G(x) is the group that pixel x belongs to)

Why is the sub-claim correct?
• When β = 0, two groups G_i, G_j are merged only if Y_i = Y_j, i.e. w_i w_j ‖Y_i − Y_j‖² ≤ 0
  • the value after merging does not change, because the weighted average of equal values is the value itself (the objective function is unchanged)
• Since each pixel is visited at least once, any two neighbouring pixels with equal values are merged into the same group

Reduction of the Objective Function (β > 0)
• Initially, no two neighbouring groups have a zero gradient (by the sub-claim)
• In the first iteration with β > 0, zero gradients are introduced for the neighbours considered in the fusion step (G_i and G_j)
• Some side neighbours (G_k) may also gain zero gradients
• In a later iteration these may turn into non-zero gradients again
• But the number of non-zero gradients of the side neighbours never exceeds the initial count

Reduction of the Objective Function (upper bound)
• An upper bound on the objective function exists
• It keeps decreasing from its initial value λ‖∇S‖_0
• The reduction at each fusion step is at least β c_{i,j} − w_i w_j ‖Y_i − Y_j‖² / (w_i + w_j)

Choice of β
• The β = 0 pass groups neighbours with zero gradient
• Increasing β in small intervals gives a very smooth solution, but it consumes time

Choice of β (why small steps matter)
• As an extreme example, suppose β's first non-zero value is λ itself
• Then very large gradients can be merged immediately, resulting in a larger objective function value
• If instead β starts from a small value, small gradients are merged first; after such merges, a large gradient between two input pixels may become an even larger gradient between the two groups containing those pixels
• Those pixels are then never grouped, which results in a smaller final objective function value

Choice of β (example)
• Assume β grows up to λ; compare a smooth schedule (β increased in small steps) against a non-smooth one (β jumps straight to λ)
[Figure: a 3-pixel input; the smooth-β output attains F = 1.5, while the non-smooth-β output attains F = 14.]

Applications (Lei Yifan)

Demo
[Figure: smoothing results using the L0 norm vs. the L2 norm.]

Experiment
[Figure: objective function value vs. number of iterations (λ = 0.01), comparing the L0-norm and L2-norm objectives over 100 iterations.]

Experiment – Comparison
[Figures: several input images together with the results of the 1st (coordinate descent) and 2nd (region fusion) algorithms.]

Example Applications
• Thanks to its unique properties, L0 gradient minimization can be used in many applications:
  1. Edge enhancement
  2. Color quantization
  3. JPEG artifact removal
  4. 3D mesh denoising
  5. Underdetermined linear systems
  6. …

Edge Enhancement
• Edge extraction is useful in many applications; however, the edges of the original input are sometimes too weak. L0 minimization can be used to address this problem.
[Figure: original input and its extracted edges vs. the L0-minimized image and its extracted edges.]

Color Quantization
• L0 gradient minimization can also be used for color quantization, which is very helpful for image compression (see the toy example below).
[Figure: input with 41290 colors; output with 38 colors.]
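To illustrate the quantization effect on a toy example, the snippet below reuses the hypothetical `region_fusion_1d` sketch from the region-fusion section as a 1-D stand-in for the 2-D smoother; the signal and all numbers are ours.

```python
import numpy as np

# A noisy 1-D "image row" with four underlying levels.
rng = np.random.default_rng(0)
signal = np.repeat([1.0, 4.0, 2.0, 5.0], 50) + 0.1 * rng.standard_normal(200)

smoothed = region_fusion_1d(signal, lam=0.5)
print("distinct input values: ", len(np.unique(signal)))     # essentially 200
print("distinct output values:", len(np.unique(smoothed)))   # a handful of levels
```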
JPEG Artifact Removal
• Artifacts are very likely to appear when compressing images, especially clip-art. L0 gradient minimization addresses this problem well.
[Figure: clip-art image with JPEG artifacts before and after L0 smoothing.]

3D Mesh Denoising
[Figure: noisy and denoised 3D meshes.]

Underdetermined Linear Systems
• L0 minimization can solve an underdetermined linear system when a sparse solution is needed
• To solve Ax = v, we can minimize f(x) = ‖Ax − v‖² + λ‖x‖_0
• A similar strategy can be used to optimize this objective function

Conclusion
• We started by introducing some background.
• Then we presented L0 gradient minimization using two algorithms:
  • coordinate descent optimization;
  • alternating optimization with region fusion.
• We also presented experiments and comparisons between the two algorithms.
• Finally, we showed the applicability of the algorithms to several problems.

Thank You!

Appendix A
Proof: L0 Minimization is NP-Hard
• The L0 minimization problem for a vector x is:
  min ‖x‖_0 s.t. ‖Ax − b‖ < ε
• By definition, ‖x‖_0 is the number of non-zero entries of x.
• Consider the simplest case where ‖x‖_0 = 1 but the location of the non-zero entry is unknown.
• There are C(n, 1) = n possibilities, and finding the unique minimizer requires examining all of them.
• Similarly, if ‖x‖_0 = k, one must search C(n, k) possibilities, i.e. the search grows as C(n, k) with increasing k.
• Since k is not known a priori, all n possible values of k must be checked.
• Hence the complexity of exhaustive search is Σ_{k=1..n} C(n, k) = 2^n − 1, exponential in n.
• This is why the problem is NP-hard.

Appendix B
Solving the Minimization Step (Case 2, S_i = S_j = S)
  F = (S − I_i)² + (S − I_j)²
  F′ = 2(S − I_i) + 2(S − I_j) = 0
  4S − 2I_i − 2I_j = 0
  S = (I_i + I_j)/2

References
1. Xu, L., Lu, C., Xu, Y., and Jia, J., "Image smoothing via L0 gradient minimization." ACM Transactions on Graphics (TOG), Vol. 30, No. 6, 2011.
2. Cheng, X., Zeng, M., and Liu, X., "Feature-preserving filtering with L0 gradient minimization." Computers & Graphics, 38, 150–157, 2014.
3. Nguyen, R. M. H., and Brown, M. S., "Fast and effective L0 gradient minimization by region fusion." International Conference on Computer Vision (ICCV), 2015.