In this work we explore the properties which make many real-life global optimization problems extremely difficult to handle, and some of the common techniques used in the literature to address them. We then introduce a general optimization management tool called GloMPO (Globally Managed Parallel Optimization) to help address some of the challenges faced by practitioners. GloMPO manages and shares information between traditional optimization algorithms run in parallel. We hope that GloMPO will be a flexible framework which allows for customization and hybridization of various optimization ideas, while also providing a substitute for the human interventions and decisions which are a common feature of optimization processes for hard problems. GloMPO is shown to produce lower minima than traditional optimization approaches on global optimization test functions, the Lennard-Jones cluster problem, and ReaxFF reparameterizations. The novel feature of forced optimizer termination was shown to find better minima than normal optimization. GloMPO is also shown to provide qualitative benefits such as identifying degenerate minima and providing a standardized interface and workflow manager.

High-dimensional, expensive, black-box optimization

In this work, we are particularly interested in tackling the hardest of global optimization challenges: high-dimensional, expensive, and black-box (HEB) problems. Many real-life applications fall into this class. Black-box optimization problems, i.e. ones for which no gradient information is available, are generally regarded as some of the most difficult to handle. This is because optimizers can easily be led astray by rough surfaces, and many more function evaluations are typically needed for the optimizer to learn about the structure of the problem. High dimensionality also demands increased function evaluations, but a high evaluation expense makes this infeasible. The consequence of this complexity is a significant reduction in the number of optimization algorithms which can be used. Numerous options exist to tackle problems with one or two of these difficulties, but rarely are all three addressed simultaneously.
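To make the black-box setting concrete, here is a minimal Python sketch of our own (the rugged toy objective and the choice of SciPy's gradient-free Nelder-Mead method are illustrative assumptions, not taken from the text): the optimizer sees only function values, so every probe of the surface costs one full evaluation.

```python
import numpy as np
from scipy.optimize import minimize

def black_box(x):
    """Opaque objective: callers see only a function value, no gradients.

    A rugged toy surface; in practice this would be an expensive
    simulation, e.g. the error of a candidate ReaxFF parameter set.
    """
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

# With no gradient information, a derivative-free method must be used.
rng = np.random.default_rng(42)
x0 = rng.uniform(-5, 5, size=10)            # 10-dimensional starting point
result = minimize(black_box, x0, method="Nelder-Mead",
                  options={"maxfev": 5000})  # cap the evaluation budget
print(result.fun, result.nfev)
```

On a surface like this, a single local search almost always stalls in one of the many local minima, which is part of what motivates the two-tier methods discussed below.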
A particular reason for our interest in HEB problems is the practical challenges they introduce to the optimization process. A practitioner faced with a new optimization challenge must select an algorithm, and then values for its hyper-parameters. These choices are made based on some intuition of the problem, but are often shown to be wrong as the task is investigated further. Often, optimizations become iterative procedures to refine algorithms and their settings, and to verify the quality and reproducibility of the minima found. When the task is both difficult and expensive, this procedure can become time-consuming and costly.

Tackling these hard problems can only be done with metaheuristics, i.e., with a two-tier algorithm. A metaheuristic is any optimization method in which an upper algorithm selects the starting conditions for a lower one. The lower level is typically a local search procedure and is called a heuristic. The heuristic may provide a suitable solution to the problem, but cannot be used alone since it will most likely locate a local rather than a global minimum. The use of metaheuristics decouples, and attempts to balance, exploration and exploitation. The literature on metaheuristics is extensive and varied, and the term is often not used explicitly; nevertheless, most optimizers are metaheuristics. This includes solution-based methods (which iteratively improve a single incumbent solution) and population-based ones (like evolutionary algorithms (EAs), which improve a group of solutions). A similar, but less broad, term which is also encountered is 'multi-start' optimization, which refers to the repeated application of local optimization steps. It is not always obvious that an algorithm applies this two-level structure, as the sketch below illustrates.
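A minimal multi-start sketch of the two-level structure, assuming Python with NumPy and SciPy (the Rastrigin test function, the restart count, and the sampling box are our own illustrative choices): the upper level picks starting conditions, and the lower-level heuristic is an ordinary local search.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    """Rugged test surface with many local minima."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
best_x, best_f = None, np.inf

# Upper level (the "meta" part): select starting conditions -- exploration.
for restart in range(20):
    x0 = rng.uniform(-5, 5, size=5)
    # Lower level (the heuristic): a local search -- exploitation.
    res = minimize(rastrigin, x0, method="Nelder-Mead")
    if res.fun < best_f:
        best_x, best_f = res.x, res.fun

print(f"best minimum found: f = {best_f:.4f} at x = {best_x}")
```

The same skeleton describes many optimizers once the upper and lower levels are identified, even when an algorithm does not advertise itself as a metaheuristic.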
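Purely as an illustration of the management idea described at the start of this section (this is not GloMPO's actual API; the `RandomWalkOptimizer` class, the round-robin stepping, and the lag threshold are all invented for the sketch), a manager can advance several optimizers in parallel, track the shared best value, and forcibly terminate and respawn optimizers that fall far behind the leader:

```python
import numpy as np

def rastrigin(x):
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

class RandomWalkOptimizer:
    """Toy stand-in for a traditional optimizer, advanced one step at a time."""
    def __init__(self, rng, dim=5):
        self.rng = rng
        self.x = rng.uniform(-5, 5, size=dim)
        self.best = rastrigin(self.x)

    def step(self):
        trial = self.x + self.rng.normal(scale=0.3, size=self.x.size)
        f = rastrigin(trial)
        if f < self.best:
            self.x, self.best = trial, f
        return self.best

rng = np.random.default_rng(1)
optimizers = [RandomWalkOptimizer(rng) for _ in range(4)]

for it in range(500):
    # Advance every optimizer one step (round-robin stands in for true
    # parallel execution) and share the best value seen so far.
    bests = [opt.step() for opt in optimizers]
    leader = min(bests)
    for i, opt in enumerate(optimizers):
        # Forced termination: kill optimizers far behind the leader and
        # respawn them at a fresh random point (threshold is arbitrary).
        if opt.best > leader + 20.0:
            optimizers[i] = RandomWalkOptimizer(rng)

print("best value found:", min(opt.best for opt in optimizers))
```

The design point this sketch tries to capture is that termination and restart decisions, which a practitioner would otherwise make by hand between runs, are taken automatically while the optimizers are still running.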