Mathematical optimization

{{col-end}}
 
== Major subfields ==
* [[Convex programming]] studies the case when the objective function is [[convex function|convex]] and the constraints, if any, form a convex set. This can be viewed as a particular case of nonlinear programming or as a generalization of linear or convex quadratic programming.
** [[Linear programming]] (LP), a type of convex programming, studies the case in which the objective function ''f'' is linear and the constraints are specified using only linear equalities and inequalities. Such a constraint set is called a [[polyhedron]] or a [[polytope]] if it is [[Bounded set|bounded]].
** [[Second order cone programming]] (SOCP) is a convex program, and includes certain types of quadratic programs.
** [[Semidefinite programming]] (SDP) is a subfield of convex optimization where the underlying variables are [[semidefinite]] [[matrix (mathematics)|matrices]]. It is a generalization of linear and convex quadratic programming.
** [[Conic programming]] is a general form of convex programming. LP, SOCP and SDP can all be viewed as conic programs with the appropriate type of cone.
** [[Geometric programming]] is a technique whereby objective and inequality constraints expressed as [[posynomials]] and equality constraints as [[monomials]] can be transformed into a convex program.
* [[Integer programming]] studies linear programs in which some or all variables are constrained to take on [[integer]] values. This is not convex, and in general much more difficult than regular linear programming.
* [[Quadratic programming]] allows the objective function to have quadratic terms, while the set A must be specified with linear equalities and inequalities. For specific forms of the quadratic term, this is a type of convex programming.
* [[Nonlinear programming]] studies the general case in which the objective function or the constraints or both contain nonlinear parts. This may or may not be a convex program. In general, the convexity of the program affects the difficulty of solving more than the linearity.
* [[Stochastic programming]] studies the case in which some of the constraints or parameters depend on [[random variable]]s.
* [[Robust optimization|Robust programming]] is, as stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. This is not done through the use of random variables, but instead, the problem is solved taking into account inaccuracies in the input data.
* [[Combinatorial optimization]] is concerned with problems where the set of feasible solutions is discrete or can be reduced to a [[discrete mathematics|discrete]] one.
* [[Infinite-dimensional optimization]] studies the case when the set of feasible solutions is a subset of an infinite-[[dimension]]al space, such as a space of functions.
* [[Metaheuristic]]s make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics do not guarantee an optimal solution is ever found.
* [[Constraint satisfaction]] studies the case in which the objective function ''f'' is constant (this is used in [[artificial intelligence]], particularly in [[automated reasoning]]).
** [[Constraint programming]].
* Disjunctive programming is used where at least one constraint must be satisfied but not all of them. It is of particular use in scheduling.
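As an illustration of the convex case described in the list above, the sketch below minimizes a simple convex function by plain gradient descent. The function, step size, and iteration count are made-up examples for illustration, not drawn from any particular solver or library:

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Minimize a differentiable convex function, given its gradient.

    For a convex objective, following the negative gradient with a small
    enough step converges to the global minimum.
    """
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Hypothetical convex objective f(x) = (x - 3)^2, with gradient 2(x - 3);
# its unique minimizer is x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

For convex problems any local minimum found this way is also global; for the nonconvex classes listed above (integer, general nonlinear) no such guarantee holds.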
 
In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time):
* [[Calculus of variations]] seeks to optimize an objective defined over many points in time, by considering how the objective function changes if there is a small change in the choice path.
* [[Optimal control]] theory is a generalization of the calculus of variations.
* [[Dynamic programming]] studies the case in which the optimization strategy is based on splitting the problem into smaller subproblems. The equation that describes the relationship between these subproblems is called the [[Bellman equation]].
* [[Mathematical programming with equilibrium constraints]] is where the constraints include [[variational inequalities]] or [[complementarity theory|complementarities]].
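The Bellman equation mentioned above can be sketched for a deterministic shortest-path problem: the cost-to-go of each node is defined in terms of the cost-to-go of its successors, and iterating that relationship to a fixed point (Bellman–Ford style) solves it. The four-node graph below is a made-up example:

```python
def bellman_values(edges, nodes, target):
    """Solve the Bellman equation V(u) = min over edges (u, v, c) of c + V(v),
    with V(target) = 0, by repeated relaxation until a fixed point."""
    INF = float("inf")
    V = {n: (0.0 if n == target else INF) for n in nodes}
    for _ in range(len(nodes)):          # |nodes| passes suffice
        for u, v, c in edges:
            if c + V[v] < V[u]:
                V[u] = c + V[v]
    return V

# Hypothetical graph: each edge is (from, to, cost).
edges = [("a", "b", 1), ("a", "c", 4), ("b", "c", 2),
         ("c", "d", 1), ("b", "d", 5)]
V = bellman_values(edges, nodes=["a", "b", "c", "d"], target="d")
```

Here V holds the minimal cost from each node to the target, obtained by solving the subproblems (costs from later nodes) first, which is the splitting-into-subproblems idea described above.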
 
=== Multi-objective optimization ===
{{Main|Multiobjective optimization}}
Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would want a design that is both light and rigid. Because these two objectives conflict, a trade-off exists. There will be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and stiffness. This set of trade-off designs is known as a Pareto set. The curve created by plotting weight against stiffness of the best designs is known as the [[Pareto frontier]].
 
A design is judged to be Pareto optimal if it is not dominated by any other design: if another design is no worse in every respect and strictly better in at least one respect, then the first design is dominated and is not Pareto optimal.
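The dominance test just described can be sketched directly. The design points below are hypothetical (weight, compliance) pairs, both to be minimized:

```python
def pareto_front(designs):
    """Return the designs not dominated by any other design.

    One design dominates another if it is no worse in every objective
    and differs in at least one (all objectives minimized here).
    """
    front = []
    for d in designs:
        dominated = any(
            other != d and all(o <= p for o, p in zip(other, d))
            for other in designs
        )
        if not dominated:
            front.append(d)
    return front

# Made-up (weight, compliance) pairs; lower is better in both.
designs = [(1, 9), (2, 7), (3, 8), (4, 4), (5, 5), (6, 2)]
front = pareto_front(designs)
```

Here (3, 8) is dominated by (2, 7) and (5, 5) by (4, 4); the remaining four points form the Pareto set, and plotting them gives the Pareto frontier described above.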
 
=== Multi-modal optimization ===
Optimization problems are often multi-modal, that is, they possess multiple good solutions. They could all be globally good (same cost function value), or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer.
 
Classical optimization techniques, because of their iterative approach, do not perform satisfactorily when used to obtain multiple solutions: even with different starting points across multiple runs of the algorithm, there is no guarantee that different solutions will be found. [[Evolutionary Algorithm]]s, however, are a very popular approach for obtaining multiple solutions in a multi-modal optimization task. See [[Evolutionary multi-modal optimization]].
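A simple multi-start strategy illustrates the point above: each local search converges only to the minimum nearest its starting point, so several starts are needed to uncover all modes. The function f(x) = (x² − 1)², with two global minima at x = ±1, is a made-up example:

```python
def local_minimize(grad, x0, lr=0.05, steps=500):
    """Plain gradient descent: converges only to the nearest local minimum."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f(x) = (x^2 - 1)^2 has gradient 4x(x^2 - 1) and minima at x = -1 and x = +1.
grad = lambda x: 4 * x * (x**2 - 1)

# Runs started on different sides of the origin find different minima.
starts = [-2.0, -0.5, 0.5, 2.0]
minima = sorted(round(local_minimize(grad, s), 6) for s in starts)
```

Starts at −2.0 and −0.5 converge to −1, those at 0.5 and 2.0 to +1; a single run would have reported only one of the two globally good solutions, which is exactly the failure mode of classical iterative techniques described above.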
 
 
== Mathematical concepts ==