To convert a solution generated in binary (base 2) to decimal (base 10), a decode function uses a formula that takes the binary number, the minimum and maximum values of the function domain, and the number of bits used to represent numbers in that domain. The formula computes the decimal value as the minimum value plus the decimal equivalent of the binary value, multiplied by the width of the domain and divided by the largest value representable with the given number of bits.
Because the solution is generated in base 2, it has to be converted to base 10, so a decode function applies the formula:

x10 = a + decimal(x2) * (b - a) / (2^n - 1)

Where:
x10 is the number in base 10
x2 is the number in base 2
a and b are the bounds of the domain of the function f(x), with a ≤ x ≤ b; a is the smallest value x can take. For example, De Jong's function 1 is defined on [-5.12, 5.12], so a = -5.12
n is the number of bits, chosen from the desired precision of p decimal digits: N = (b - a) * 10^p and n = ceil(log2(N)). For De Jong's function 1 with p = 2, N = 10.24 * 100 = 1024, so n = 10 bits.
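The decode formula and the bit-count rule above can be sketched as follows. This is a minimal illustration, not taken from the source; the function names `decode` and `bits_needed` are my own, and the precision parameter is assumed to default to two decimal digits:

```python
import math

def decode(bits, a, b, n):
    """Map an n-bit binary string onto the interval [a, b] using
    x10 = a + decimal(x2) * (b - a) / (2^n - 1)."""
    x2 = int(bits, 2)  # decimal equivalent of the binary string
    return a + x2 * (b - a) / (2**n - 1)

def bits_needed(a, b, p=2):
    """Bits n required to resolve [a, b] to p decimal digits:
    N = (b - a) * 10^p, n = ceil(log2(N))."""
    return math.ceil(math.log2((b - a) * 10**p))

# De Jong's function 1 is defined on [-5.12, 5.12]:
n = bits_needed(-5.12, 5.12)          # 10 bits, since (b - a) * 100 = 1024
lo = decode("0" * n, -5.12, 5.12, n)  # all zeros decodes to a = -5.12
hi = decode("1" * n, -5.12, 5.12, n)  # all ones decodes to b = 5.12
```

Note that the all-zeros string always decodes to a and the all-ones string to b, so the n-bit strings cover the domain in 2^n evenly spaced steps.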