This paper discusses two popular evolutionary optimization techniques, the Genetic Algorithm (GA) and the Teaching-Learning-Based Optimization (TLBO) algorithm. It also covers the definitions of the various parameters used by these algorithms.
I. INTRODUCTION

Most engineering design problems are competing multi-objective problems in which the optimal values of the design variables are sought that optimize several objectives for a given set of constraints. The methods available to formulate a multi-objective problem as a single-objective problem include the weighted global criterion method, weighted sum method, lexicographic method, weighted min-max method, exponential weighted criterion, weighted product method, goal programming methods, bounded objective function method, and physical programming (Marler and Arora, 2004). The weighted sum approach is the most widely used: a normalized objective function is formulated by assigning proper weighting factors to all the objectives. By selecting different values of the weighting factors for the objectives, a set of optimum solutions is obtained, and each solution in this set is a trade-off between the different objectives (Marler and Arora, 2010).

A constrained optimization problem is considered more complex than an unconstrained one. It seeks a feasible solution that optimizes one or more mathematical functions in a constrained search space. A constrained optimization problem can be transformed into an unconstrained one by modifying the objective function on the basis of the constraint violations.

After formulating the optimization problem, it can be solved using either traditional or evolutionary optimization algorithms. The traditional, or classical, optimization algorithms are based on a deterministic approach, i.e., they use gradient information of the objective function with respect to the design variables and move from one solution to another following specific rules. Depending on the starting solution, these algorithms may end up at a local optimum solution. Therefore, one has to explore all local solutions, one of which is the global optimum solution.
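The weighted sum scalarization and the penalty-based transformation of constraints described above can be sketched as follows. The two objectives, the constraint, the weights, and the penalty coefficient below are all illustrative assumptions, not taken from the paper.

```python
# Sketch: weighted-sum scalarization of two hypothetical objectives,
# with a quadratic penalty term that converts the constrained problem
# into an unconstrained one.

def f1(x):
    return (x - 2.0) ** 2          # first objective (illustrative)

def f2(x):
    return (x + 1.0) ** 2          # second objective, conflicting with f1

def g(x):
    return x - 1.5                 # constraint g(x) <= 0 (illustrative)

def penalized_weighted_sum(x, w1, w2, rho=1e3):
    # Weighted sum combines the objectives into a single scalar;
    # the penalty term grows quadratically with the constraint violation.
    violation = max(0.0, g(x))
    return w1 * f1(x) + w2 * f2(x) + rho * violation ** 2

def best_on_grid(w1, lo=-3.0, hi=3.0, steps=601):
    # Crude grid search for the minimizer of the scalarized problem;
    # sweeping w1 from 0 to 1 traces out a set of trade-off solutions.
    xs = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return min(xs, key=lambda x: penalized_weighted_sum(x, w1, 1.0 - w1))
```

Each choice of the weights yields one compromise solution: with all weight on f1 the minimizer is pushed against the constraint boundary, while with all weight on f2 it sits at the unconstrained minimum of f2.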
To improve the chances of finding the global optimum solution, a large set of randomly generated initial solutions is required for these algorithms. The global optimum is then taken as the best of all the local optimum solutions provided by the different instances of the algorithm. The popular methods in this category include quadratic programming, the steepest descent method, linear programming, nonlinear programming, dynamic programming, and geometric programming. For complex optimization problems having a large number of design variables and multiple local minimum solutions, these methods converge to the optimum nearest to the initial solution provided and thus produce a local optimum solution (Marler and Arora, 2004; Mariappan and Krishnamurty, 1996). These techniques are generally not suitable for optimization problems with (1) a large number of constraints (2) large...
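The multistart strategy described above, i.e., running a gradient-based local search from many randomly generated initial solutions and keeping the best local optimum, can be sketched as follows. The multimodal test function, step size, and iteration budget are illustrative assumptions, not from the paper.

```python
# Sketch: multistart gradient descent. Each random start converges to
# the local minimum of its basin; the best of these is reported as the
# global optimum estimate.
import random

def f(x):
    # Hypothetical multimodal function with two local minima,
    # the global one near x ~ -1.47.
    return x ** 4 - 4.0 * x ** 2 + x

def grad(fn, x, h=1e-6):
    # Central-difference approximation of the derivative.
    return (fn(x + h) - fn(x - h)) / (2.0 * h)

def local_descent(fn, x0, lr=0.01, iters=2000):
    # Simple fixed-step gradient descent from one starting point.
    x = x0
    for _ in range(iters):
        x -= lr * grad(fn, x)
    return x

def multistart(fn, n_starts=20, lo=-3.0, hi=3.0, seed=0):
    # Run the local search from many random initial solutions and
    # return the best local optimum found.
    rng = random.Random(seed)
    candidates = [local_descent(fn, rng.uniform(lo, hi))
                  for _ in range(n_starts)]
    return min(candidates, key=fn)
```

A single descent started near x = 3 would stop at the shallower local minimum; only by comparing the endpoints of many starts does the procedure recover the deeper basin, which is exactly the limitation of classical methods the paragraph above points out.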