
Problem statement:

  • Best site selection with respect to compute profiles: in R2, the HPA constraint plugin selects the best compute flavor on a per-site basis. That is, for each input candidate site it checks whether there is a matching compute profile and, if one is found, stops searching the remaining candidates. In R3, the intention is to search through all input candidate sites, get the best compute profile in each site, and select the site whose combined compute-profile score is highest.

Solution Proposal:

Using a weighted sum approach, we will minimize multiple objective functions. The weight for each objective function is operator-provided and will be obtained through policy. The goal is to minimize the weighted sum of all the objective functions specified in the homing template.

For example, the goal is to minimize

    F(x) = Σwᵢƒᵢ(x),  i = 1, 2, ..., N

where N is the number of objective functions and wᵢ is the operator-supplied weight of the objective function ƒᵢ(x).

Challenges with this approach: 1) the weights are user-supplied; 2) to support multiple dimensions, we would need a function to normalize the units across dimensions.
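As a rough illustration of both challenges, the sketch below min-max normalizes each dimension to [0, 1] before applying operator-supplied weights. The objective names ("cost", "latency"), the weights, and the sample data are made up for illustration; they are not fields of the actual homing template.

```python
# Hypothetical weighted-sum objective with min-max normalization across
# dimensions; objective names and sample values are illustrative only.

def normalize(values):
    """Min-max normalize a list of raw objective values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def weighted_sum_scores(candidates, weights):
    """F(x) = sum_i w_i * f_i(x), with each objective normalized first so
    that incomparable units (e.g. dollars vs. milliseconds) can be summed."""
    names = list(weights)
    norm = {n: normalize([c[n] for c in candidates]) for n in names}
    return [sum(weights[n] * norm[n][k] for n in names)
            for k in range(len(candidates))]

candidates = [{"cost": 100, "latency": 30},
              {"cost": 80,  "latency": 50},
              {"cost": 120, "latency": 10}]
weights = {"cost": 0.6, "latency": 0.4}   # operator-provided via policy

scores = weighted_sum_scores(candidates, weights)
best = min(range(len(scores)), key=lambda k: scores[k])   # minimize F(x)
```

Note how the winner depends entirely on the operator-chosen weights: with these values candidate 1 (cheapest) wins, but shifting weight toward latency would favor candidate 2.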


hpa_score as objective function

  • hpa_score is calculated for each VNFC by adding up the score parameters from the HPA policy.
  • If there are multiple VNFCs per VNF, the scores of the best flavors selected are cumulatively added and stored as candidate[hpa_score].
  • The translator should be able to identify the hpa_score objective function from the homing template.
  • A new objective function needs to be created in the solver module. The objective function is computed using the candidate[hpa_score] parameter.
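A minimal sketch of the intended R3 behaviour described above. The candidate/flavor dictionaries and field names (flavors, matched_vnfc, hpa_score) are illustrative assumptions, not the actual OOF/HAS translator or solver data model.

```python
# Illustrative only: data structures and helper names are assumptions,
# not the real HAS code.

def candidate_hpa_score(candidate, vnfcs):
    """Cumulative hpa_score: for each VNFC, add the score of the best
    matching flavor at this candidate site."""
    total = 0
    for vnfc in vnfcs:
        scores = [f["hpa_score"] for f in candidate["flavors"]
                  if f["matched_vnfc"] == vnfc]
        if not scores:
            return None            # this site cannot host the VNFC
        total += max(scores)       # best flavor per VNFC
    candidate["hpa_score"] = total
    return total

def select_site(candidates, vnfcs):
    """R3 behaviour: score every candidate site and pick the highest
    hpa_score, instead of stopping at the first match as in R2."""
    scored = [c for c in candidates
              if candidate_hpa_score(c, vnfcs) is not None]
    return max(scored, key=lambda c: c["hpa_score"]) if scored else None

site_a = {"name": "site_a",
          "flavors": [{"matched_vnfc": "vnfc1", "hpa_score": 7},
                      {"matched_vnfc": "vnfc1", "hpa_score": 5},
                      {"matched_vnfc": "vnfc2", "hpa_score": 3}]}
site_b = {"name": "site_b",
          "flavors": [{"matched_vnfc": "vnfc1", "hpa_score": 6},
                      {"matched_vnfc": "vnfc2", "hpa_score": 6}]}

winner = select_site([site_a, site_b], ["vnfc1", "vnfc2"])
```

Under R2 semantics, site_a could have been returned simply because it matched first; here site_b wins on the combined score (12 vs. 10).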





Reference material: Lecture 9, Multi-Objective Optimization. Suggested reading: K. Deb, Multi-Objective Optimization using Evolutionary Algorithms, John Wiley & Sons, Inc., 2001.

Multi-Objective Optimization Problems (MOOP)

  • Involve more than one objective function, each to be minimized or maximized.
  • The answer is a set of solutions that defines the best trade-off between competing objectives.
  • General form of a MOOP:

        minimize/maximize ƒm(x),            m = 1, 2, ..., M
        subject to        gj(x) ≥ 0,        j = 1, 2, ..., J
                          hk(x) = 0,        k = 1, 2, ..., K
                          xi(L) ≤ xi ≤ xi(U), i = 1, 2, ..., n   (lower and upper bounds)

Dominance

  • In a single-objective optimization problem, the superiority of a solution over other solutions is easily determined by comparing their objective function values.
  • In a multi-objective optimization problem, the goodness of a solution is determined by dominance.
  • Dominance test: x1 dominates x2 if solution x1 is no worse than x2 in all objectives and strictly better than x2 in at least one objective. If x1 dominates x2, then x2 is dominated by x1.
  • Example (f1 maximized, f2 minimized): solution 1 dominates solution 2; solution 5 dominates solution 1; neither solution 1 nor solution 4 dominates the other.
  • Non-dominated solution set: given a set of solutions, the non-dominated solution set is the set of all solutions that are not dominated by any member of the solution set.
  • The non-dominated set of the entire feasible decision space is called the Pareto-optimal set.
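The dominance test above can be written directly, assuming for simplicity that every objective is minimized (negate any objective that is maximized):

```python
# Dominance test: all objectives minimized.

def dominates(x1, x2):
    """True iff x1 is no worse than x2 in every objective and strictly
    better in at least one."""
    return (all(a <= b for a, b in zip(x1, x2)) and
            any(a < b for a, b in zip(x1, x2)))
```

For example, dominates((1, 2), (2, 3)) is True, while (1, 3) and (3, 1) do not dominate each other.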
  • The boundary defined by the set of all points mapped from the Pareto-optimal set is called the Pareto-optimal front.

(Figure: Pareto-optimal solutions in the feasible decision space map onto the Pareto-optimal front of the feasible objective space.)

Goals in MOO

  • Find a set of solutions as close as possible to the Pareto-optimal front.
  • Find a set of solutions as diverse as possible.

Classic MOO Methods

Weighted Sum Method

  • Scalarize a set of objectives into a single objective by adding each objective pre-multiplied by a user-supplied weight; the weight of an objective is chosen in proportion to its relative importance:

        minimize F(x) = Σwmƒm(x),  m = 1, 2, ..., M
        subject to gj(x) ≥ 0, hk(x) = 0, xi(L) ≤ xi ≤ xi(U)

  • Advantage: simple.
  • Disadvantages: it is difficult to set the weight vector so as to obtain a Pareto-optimal solution in a desired region of the objective space, and certain Pareto-optimal solutions cannot be found when the objective space is non-convex.

ε-Constraint Method (Haimes et al., 1971)
  • Keep just one of the objectives and restrict the rest within user-specified values:

        minimize ƒμ(x)
        subject to ƒm(x) ≤ εm,  m = 1, 2, ..., M, m ≠ μ
                   gj(x) ≥ 0, hk(x) = 0, xi(L) ≤ xi ≤ xi(U)

  • Example: keep f2 as the objective (minimize f2(x)) and treat f1 as a constraint (f1(x) ≤ ε1).
  • Advantage: applicable to both convex and non-convex problems.
  • Disadvantage: the ε vector has to be chosen carefully so that it lies within the minimum and maximum values of the individual objective functions.

Weighted Metric Method

  • Combine multiple objectives using the weighted distance metric of any solution from the ideal solution z*:

        minimize lp(x) = ( Σwm|ƒm(x) − zm*|^p )^(1/p),  m = 1, 2, ..., M
        subject to gj(x) ≥ 0, hk(x) = 0, xi(L) ≤ xi ≤ xi(U)

  • p = 1 gives the weighted sum approach; p = ∞ gives the weighted Tchebycheff problem.
  • Advantage: the weighted Tchebycheff metric guarantees finding all Pareto-optimal solutions, given the ideal solution z*.
  • Disadvantages: requires knowledge of the minimum and maximum objective values; requires z*, which can be found by independently optimizing each objective function; for small p, not all Pareto-optimal solutions are obtained; as p increases, the problem becomes non-differentiable.

Multi-Objective Genetic Algorithms

  • The desired answer is a set of solutions; traditional optimization methods operate on a single candidate solution, whereas genetic algorithms fundamentally operate on a set of candidate solutions.
  • There are several different multi-objective evolutionary algorithms (MOEAs).
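A sketch of the weighted metric evaluated over a discrete set of objective vectors; the sample points, weights, and ideal point are made up. z* is the ideal point obtained by minimizing each objective independently.

```python
# Weighted metric l_p(x); p=1 is the weighted sum, p=math.inf the
# weighted Tchebycheff metric. Sample data is illustrative only.
import math

def weighted_metric(f, z_star, w, p):
    """(sum_m w_m |f_m(x) - z_m*|^p)^(1/p) against the ideal point z*."""
    if p == math.inf:  # weighted Tchebycheff
        return max(wm * abs(fm - zm) for fm, zm, wm in zip(f, z_star, w))
    s = sum(wm * abs(fm - zm) ** p for fm, zm, wm in zip(f, z_star, w))
    return s ** (1.0 / p)

points = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]   # all objectives minimized
z_star = (1.0, 1.0)                              # per-objective minima
w = (0.5, 0.5)
best = min(points, key=lambda f: weighted_metric(f, z_star, w, math.inf))
```

With the Tchebycheff metric (p = ∞), the balanced point (2, 2) is selected because it is closest to the ideal point in the worst objective.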
  • Depending on the usage of elitism, there are two types of multi-objective EAs.
  • Non-elitist MOEAs: vector evaluated GA, vector optimized ES, weight-based GA, random weighted GA, multiple-objective GA, non-dominated sorting GA, niched Pareto GA.
  • Elitist MOEAs: elitist non-dominated sorting GA, distance-based Pareto GA, strength Pareto GA, Pareto-archived ES.

Identifying the Non-Dominated Set

  • This is a critical step in elitist strategies. Kung et al.'s method is computationally the most efficient method known; it is recursive.
  • Step 1: sort the population in descending order of importance of the first objective function and name the population P.
  • Step 2: call the recursive function Front(P):

        Front(P):
            IF |P| = 1, return P
            ELSE
                T = Front( P(1 : ⌊|P|/2⌋) )
                B = Front( P(⌊|P|/2⌋ + 1 : |P|) )
                M = T ∪ { solutions of B not dominated by any solution of T }
                return M

    (|·| denotes the number of elements, P(a : b) means all elements of P from index a to b, and ⌊·⌋ rounds down to the nearest integer.)

Elitist MOEAs

  • An elite-preserving operator carries the elites of a population to the next generation. Rudolph (1996) proved that GAs converge to the global optimal solution of some functions in the presence of elitism.
  • Two elitist MOEA methods are often used: the elitist non-dominated sorting GA (NSGA-II) and the strength Pareto EA.
  • Reference: G. Rudolph, "Convergence of evolutionary algorithms in general search spaces," Proceedings of the Third IEEE Conference on Evolutionary Computation, 1996, pp. 50-54.
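The recursive Front(P) pseudocode above translates almost directly into runnable Python, assuming all objectives are minimized and the population is pre-sorted on the first objective:

```python
# Kung et al.'s recursive non-dominated-set computation (all objectives
# minimized; input pre-sorted on objective 1).

def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def front(P):
    """Return the non-dominated subset of P."""
    if len(P) == 1:
        return P
    half = len(P) // 2
    T = front(P[:half])      # better half on objective 1
    B = front(P[half:])
    # keep the members of B not dominated by any member of T
    return T + [b for b in B if not any(dominates(t, b) for t in T)]

pop = sorted([(3, 4), (1, 5), (5, 6), (2, 3), (4, 2)])   # sort on f1 (min)
nondominated = front(pop)
```

For these sample points the non-dominated set is {(1, 5), (2, 3), (4, 2)}; (3, 4) is dominated by (2, 3) and (5, 6) by several others.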
Elitist Non-Dominated Sorting GA (NSGA-II; Deb et al., 2000)

  • Uses an explicit diversity-preserving strategy together with an elite-preservation strategy.
  • Non-dominated sorting: classify the solutions into a number of mutually exclusive non-dominated fronts F1, F2, F3, ... whose union is the whole population.
  • Crowding distance: denotes half of the perimeter of the cuboid enclosing solution i that is formed by its nearest neighboring solutions in the same front.
  • Crowded tournament selection: assume every solution has a non-domination rank and a local crowding distance. A solution i wins a tournament against a solution j if (1) solution i has a better rank, or (2) they have the same rank but solution i has a larger crowding distance than solution j.
  • Step 1: combine the parent and offspring populations, Rt = Pt ∪ Qt, and perform a non-dominated sorting of Rt to find the different fronts Fi.
  • Step 2: set the new population Pt+1 = ∅ and i = 1; while |Pt+1| + |Fi| < N, perform Pt+1 = Pt+1 ∪ Fi and increase i.
  • Step 3: include the most widely spread (N − |Pt+1|) solutions of Fi in Pt+1, using the crowding distance values; the rest are discarded.
  • Step 4: create the offspring population Qt+1 from Pt+1 by using the crowded tournament selection, crossover, and mutation operators.
  • Advantage: the diversity among non-dominated solutions is maintained by the crowding procedure, so no extra diversity control is needed.
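The crowding distance described above can be sketched as follows; boundary solutions receive infinite distance so that they are always retained:

```python
# NSGA-II-style crowding distance for one non-dominated front.
import math

def crowding_distance(front):
    """front: list of objective vectors; returns one distance per solution
    (half the normalized perimeter of the enclosing cuboid)."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        dist[order[0]] = dist[order[-1]] = math.inf   # boundary solutions
        span = front[order[-1]][obj] - front[order[0]][obj]
        if span == 0:
            continue
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][obj]
                               - front[order[k - 1]][obj]) / span
    return dist

d = crowding_distance([(1, 5), (2, 3), (4, 2)])
```

Here the two extreme solutions get infinite distance, and the interior solution's distance sums its normalized neighbor gaps over both objectives.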
  • Advantage: elitism protects an already-found Pareto-optimal solution from being deleted.
  • Disadvantage: when there are more than N members in the first non-dominated set, some Pareto-optimal solutions may give their places to other, non-Pareto-optimal solutions (e.g. with N = 7, a Pareto-optimal solution can be discarded).

Strength Pareto EA (SPEA; Zitzler & Thiele, 1998)

  • Uses an external population that stores a fixed number N̄ of non-dominated solutions; newly found non-dominated solutions are compared with the existing external population, and the resulting non-dominated solutions are preserved.
  • Clustering algorithm: 1) initially, each solution belongs to a distinct cluster Ci; 2) if the number of clusters is less than or equal to N̄, go to 5; 3) for each pair of clusters, calculate the cluster distance d12 = (1/(|C1||C2|)) Σ d(i, j) over i ∈ C1, j ∈ C2, and find the pair with minimum cluster distance; 4) merge the two clusters and go to 2; 5) choose only one solution from each cluster and remove the others (the solution having minimum average distance from the other solutions in the cluster can be chosen).
  • Step 1: create an initial population Pt of size N randomly and an empty external population P̄t with maximum capacity N̄.
  • Step 2: find the non-dominated solutions of Pt and copy (add) them to P̄t.
  • Step 3: find the non-dominated solutions of P̄t and delete all dominated solutions from it.
  • Step 4: if |P̄t| > N̄, use the clustering technique to reduce its size to N̄.
  • Step 5 (fitness evaluation): assign a fitness (strength) to each elite solution i from the number of population members it dominates, and assign each current population member j a fitness from the strengths of the external population members dominating it (Dj denotes that set).
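A sketch of the SPEA clustering reduction, assuming Euclidean distance in objective space (the original formulation leaves the distance measure open):

```python
# SPEA-style clustering reduction: merge the two clusters with minimum
# average-linkage distance until N̄ clusters remain, then keep the most
# central solution of each cluster.
import itertools
import math

def cluster_distance(c1, c2):
    """Average distance over all cross-cluster solution pairs."""
    return (sum(math.dist(a, b) for a in c1 for b in c2)
            / (len(c1) * len(c2)))

def reduce_external(solutions, n_bar):
    clusters = [[s] for s in solutions]                    # step 1
    while len(clusters) > n_bar:                           # step 2
        i, j = min(itertools.combinations(range(len(clusters)), 2),
                   key=lambda ij: cluster_distance(clusters[ij[0]],
                                                   clusters[ij[1]]))  # step 3
        clusters[i].extend(clusters.pop(j))                # step 4
    # step 5: keep the solution with minimum average distance to the rest
    return [min(c, key=lambda s: sum(math.dist(s, t) for t in c))
            for c in clusters]

kept = reduce_external([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)], 2)
```

The two near-identical points are merged into one cluster and a single representative survives, while the distant point keeps its own cluster, preserving spread.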
  • Note: a solution with smaller fitness is better. The strength of elite i is Si = (number of current population members dominated by i) / (N + 1), and the fitness of a current member j is Fj = 1 + Σ Si over i ∈ Dj. For example, with N = 6, an elite a that dominates two current members has Sa = 2/7, and a member dominated only by a has fitness 1 + 2/7 = 9/7.
  • Step 6: apply a binary tournament selection (in a minimization sense), crossover, and mutation to Pt ∪ P̄t to generate the next population Pt+1.
  • Advantages of SPEA: once a Pareto-optimal solution is found, it is stored in the external population; clustering ensures a spread among the non-dominated solutions; the proposed clustering algorithm is parameterless.
  • Disadvantages of SPEA: a balance between the regular population size N and the external population size N̄ is important; if N̄ is too large, the selection pressure for the elites is too large and the algorithm may not converge to the Pareto-optimal front, while if N̄ is too small, the effect of elitism is lost.
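A sketch of the SPEA fitness assignment (smaller fitness is better), reproducing the Sa = 2/7, Fj = 9/7 example above with N = 6; the sample points are made up so that the single elite dominates exactly two members:

```python
# SPEA fitness: elite strength S_i = (#members dominated) / (N + 1);
# member fitness F_j = 1 + sum of strengths of elites dominating j.
# All objectives minimized; sample points are illustrative.

def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def spea_fitness(population, external):
    N = len(population)
    strength = {e: sum(dominates(e, p) for p in population) / (N + 1)
                for e in external}
    fitness = {p: 1 + sum(s for e, s in strength.items() if dominates(e, p))
               for p in population}
    return strength, fitness

population = [(2, 2), (3, 3), (0, 5), (5, 0), (0, 4), (4, 0)]   # N = 6
external = [(1, 1)]                                             # one elite
S, F = spea_fitness(population, external)
```

The elite (1, 1) dominates exactly (2, 2) and (3, 3), so its strength is 2/7, and each of those two members receives fitness 1 + 2/7 = 9/7; undominated members keep fitness 1.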
