Related background and an interconnected problem posted on Math StackExchange: https://math.stackexchange.com/questions/3972648/non-linear-optimization-levenberg-marquardt-gives-different-results-using-diffe/3972674
I'm posting this question here as it is not (only) math-related.
I'm using the Levenberg-Marquardt algorithm to fit a circle centre, i.e. to find (x0*, y0*) = argmin SUM_i (sqrt((x_i - x0)^2 + (y_i - y0)^2) - r_i)^2
Forms of the residual to minimize (non-squared, since both the C++ and Java libraries apparently square the terms internally):
sqrt((x-x0)^2+(y-y0)^2) - r
1 - sqrt((x-x0)^2+(y-y0)^2)/r
Using the Java L-M implementation, both forms give a good result (almost identical, with only a very small difference; I wonder what the reason for that difference is).
BUT using Eigen's L-M algorithm, only the 2nd form works properly. Somehow the 1st form produces a huge error.
What do you think the problem is here? Is there a fundamental difference between the Java and Eigen C++ L-M implementations? It would be great to clear this up.
I tried Eigen's L-M with both automatic and handwritten derivatives for both forms. In Java I only used handwritten derivatives.
This is how I use eigen:
int operator()(const Eigen::VectorXd &b, Eigen::VectorXd &fvec) const {
    /*
     * Important: LevenbergMarquardt is designed to work with objective functions that are a sum
     * of squared terms. The algorithm takes this into account: do not do it yourself.
     * In other words: objFun = sum(fvec(i)^2)
     */
    // each row of `matrix` holds one sample: (x_i, y_i, r_i); b = (x0, y0)
    for (int i = 0; i < matrix.rows(); i++) {
        fvec[i] = 1 - sqrt(pow(matrix(i, 0) - b[0], 2) + pow(matrix(i, 1) - b[1], 2)) / matrix(i, 2);
        //fvec[i] = sqrt(pow(matrix(i, 0) - b[0], 2) + pow(matrix(i, 1) - b[1], 2)) - matrix(i, 2);
    }
    return 0;
}