Releases: thieu1995/mealpy
v3.0.2
- Fix infinite-loop bug in `JADE`, `OriginalSHADE`, and `L_SHADE` optimizers.
- Fix bug when checking input conditions in `Validator` class.
- Fix bug when setting up the init function in `Problem` class.
- Fix bug in `get_roulette_wheel_selection_index()` function in `Optimizer` class.
- Fix bug when counting the number of function evaluations in `Optimizer` class.
- Fix divide-by-zero bug in `FLA`, `BeesA`, `DMOA`, `ESOA`, `FA`, `SSpiderO`, `GCO`, `MRFO`, and `AVOA`.
- Fix `np.all` comparison bug in `NRO`.
- Fix random-number-versus-random-vector bug in `MFO`.
- Fix epoch bug in `MGO`.
- Fix parallelization bug in `CL-PSO`.
- Fix bug when comparing fitness with an `Agent` in `Agent` class.
- Replace `MixedSetVar` by `CategoricalVar` in `space` module.
- Add `SequenceVar` to `space` module for handling tuple, list, and set variables.
- Update `get_optimizer_by_class()` and `get_optimizer_by_name()` functions in `mealpy` module to support the new classes.
- Fix lower-bound bug in `TransferBinaryVar` and `TransferBoolVar` classes.
- Fix out-of-range choice bug in `GSKA` optimizer.
- Update example comments in `IWO`, `SBO`, `SMA`, `SA`, `GTO`, `GWO`, and `HGS` optimizers.
- Update docs and examples.
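Two of the fixes above (the `get_roulette_wheel_selection_index()` bug and the divide-by-zero bugs) concern the same kind of routine. The sketch below is a generic, illustrative roulette-wheel selection in plain Python — not mealpy's actual implementation — showing how such a routine can guard against a zero fitness total:

```python
import random

def roulette_wheel_index(fitness_values):
    """Pick an index with probability proportional to its fitness value.

    Illustrative sketch only -- not mealpy's actual code. Falls back to a
    uniform choice when the total fitness is zero, which avoids the
    divide-by-zero situation fixed in several optimizers.
    """
    total = sum(fitness_values)
    if total == 0:
        return random.randrange(len(fitness_values))
    pick = random.uniform(0, total)
    running = 0.0
    for idx, value in enumerate(fitness_values):
        running += value
        if running > pick:
            return idx
    return len(fitness_values) - 1  # guard against floating-point rounding
```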
v3.0.1
- Add transfer function module (please read this paper).
- Add two new data types: `TransferBinaryVar` and `TransferBoolVar`.
- Fix unordered-variables bug in `PermutationVar` data type.
- Update data type of the encoded solution in `BoolVar` data type.
- Update the `correct()` function in `BoolVar` and `BinaryVar`.
- Fix bug when reproducing results in `GA`, `WCA`, and `EHO` optimizers.
- Fix bug causing a higher probability of the 0 value in `IntegerVar` data type.
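A transfer function maps a continuous search value to a probability of flipping a binary decision variable. The snippet below is a minimal sketch of the common S-shaped (sigmoid) variant as described in the binarization literature; it is a generic illustration, not the interface `TransferBinaryVar` actually exposes:

```python
import math
import random

def s_shaped_transfer(value):
    """Standard S-shaped (sigmoid) transfer function: T(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-value))

def to_binary(position):
    """Map a continuous position vector to a 0/1 vector.

    Generic sketch of transfer-function binarization: each coordinate
    becomes 1 with probability T(value). mealpy's TransferBinaryVar
    wraps this idea behind its own configurable data type.
    """
    return [1 if random.random() < s_shaped_transfer(v) else 0 for v in position]
```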
v3.0.0
Based on our newly proposed classes, solving continuous and discrete problems has never been easier.
Add
- `space` module with `FloatVar`, `IntegerVar`, `StringVar`, `BoolVar`, `PermutationVar`, `BinaryVar`, and `MixedSetVar` classes:
  - `FloatVar`: handles problems whose solutions are formatted as float values
  - `IntegerVar`: handles problems whose solutions are formatted as integer values
  - `StringVar`: handles problems whose solutions are formatted as string values
  - `BoolVar`: handles problems whose solutions are formatted as boolean values (True or False)
  - `PermutationVar`: handles problems whose solutions are formatted as permutations
  - `BinaryVar`: handles problems whose solutions are formatted as binary values (0 or 1)
  - `MixedSetVar`: handles problems whose solutions are formatted as mixed discrete sets
- `target` module with `Target` class, which contains `objectives` (list), `weights` (list) used to calculate fitness, and `fitness` (number)
- `agent` module with `Agent` class, a placeholder for a search agent; it contains at least two attributes: `solution` (position, `np.ndarray`) and a `target` object
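The relationship between the two new classes can be sketched in plain Python as follows. The weighted-sum fitness is an assumption for illustration (the notes only say `weights` are used to calculate `fitness`), and plain lists stand in for `np.ndarray`:

```python
class Target:
    """Holds objective values, weights, and the resulting fitness.

    The weighted sum below is an illustrative assumption; the release
    notes state only that `weights` are used to calculate `fitness`.
    """
    def __init__(self, objectives, weights):
        self.objectives = list(objectives)
        self.weights = list(weights)
        self.fitness = sum(o * w for o, w in zip(self.objectives, self.weights))

class Agent:
    """Placeholder for a search agent: a position plus its evaluated target."""
    def __init__(self, solution, target):
        self.solution = solution  # mealpy stores this as an np.ndarray
        self.target = target
```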
Update
- Convert all optimizers to use new classes
- Convert Tuner and MultiTask classes
- Rename all unofficial optimizers (developed by our team) to `DevOptimizerName`
- Update tests and documents
- Update some examples; not all examples have been converted yet (utils and applications folders)
v2.5.4
- Remove deepcopy() to improve computational speed
- Update the parameter order in the Tuner class
- Fix the saving bug when using Termination in Multitask
- Remove ILA optimizer
- Rename the "amend_position()" definition in some algorithms to "bounded_position()".
- Add an "amend_position()" function in the Optimizer class. This function calls two functions:
  - bounded_position() from the optimizer, i.e., at the optimizer level (clip the position into the valid range)
  - amend_position() from the problem, i.e., at the problem level (transform into a correct solution)
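The two-level call described above can be sketched like this. The method names mirror the notes; the bodies (clamping at the optimizer level, rounding at the problem level) are illustrative assumptions, not mealpy's actual implementations:

```python
class Optimizer:
    def __init__(self, problem):
        self.problem = problem

    def bounded_position(self, position, lb, ub):
        # Optimizer level: clamp each coordinate into [lb, ub].
        return [min(max(x, lo), hi) for x, lo, hi in zip(position, lb, ub)]

    def amend_position(self, position, lb, ub):
        # First enforce bounds, then let the problem repair the solution.
        position = self.bounded_position(position, lb, ub)
        return self.problem.amend_position(position, lb, ub)

class IntegerProblem:
    """Hypothetical problem whose repair step rounds to integers."""
    def amend_position(self, position, lb, ub):
        return [int(round(x)) for x in position]
```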
- Fix coefficient bugs in GWO-based optimizers.
- Fix bug with self.epoch in SCSO optimizer.
- Fix bug with self.dyn_pop_size when pop_size is a small value
- Move SHADE-based optimizers from DE to SHADE module in evolutionary_based group
- Add Improved Grey Wolf Optimization (IGWO) in GWO algorithm
- Add Tabu Search (TS) to math-based group
- Add get_all_optimizers() and get_optimizer_by_name() in Mealpy
- Rename the OriginalSA to SwarmSA in SA optimizer
- Add the OriginalSA and GaussianSA in SA optimizer
- Update parameters in OriginalHC and SwarmHC
- Update ParameterGrid class to produce the dict in the same order as the original input
- Add export_figures() to Tuner class. It can draw the hyperparameter tuning process.
- Fix several bugs in docs folders.
v2.5.4-alpha.6
- Fix bug with self.dyn_pop_size when pop_size is a small value
- Move SHADE-based optimizers from DE to SHADE module in evolutionary_based group
- Add Improved Grey Wolf Optimization (IGWO) in GWO algorithm
- Add Tabu Search (TS) to math-based group
- Add get_all_optimizers() and get_optimizer_by_name() in Mealpy
v2.5.4-alpha.5
- Fix bug with self.epoch in SCSO optimizer.
v2.5.4-alpha.4
- Update documents for MultiTask
- Fix coefficient bugs in GWO-based optimizers.
v2.5.4-alpha.3
Release new alpha version
v2.5.4-alpha.2
- Update the parameter order in the Tuner class
- Fix the saving bug when using Termination in Multitask
- Remove ILA optimizer
- Rename the "amend_position()" definition in some algorithms to "bounded_position()".
- Add an "amend_position()" function in the Optimizer class. This function calls two functions:
  - bounded_position() from the optimizer, i.e., at the optimizer level (clip the position into the valid range)
  - amend_position() from the problem, i.e., at the problem level (transform into a correct solution)
v2.5.4-alpha.1
Update
- Update the parameter order in the Tuner class
- Fix the saving bug when using Termination in Multitask