
Using MATLAB programming to realize the steepest descent method to solve unconstrained optimization problems

2022-04-23 14:35:00 I channel I

This article covers the following:

 

        1. Draw the algorithm flow chart of the steepest descent method;

        2. Write a MATLAB function that computes the gradient by numerical differentiation (function-style M-file);

        3. Write a MATLAB function that solves unconstrained optimization problems by the steepest descent method, using the golden section method for exact one-dimensional search and numerical differentiation for the gradient (function-style M-file; the precision epson is adjustable);

        4. Write a MATLAB function that solves unconstrained optimization problems by the steepest descent method, using the Wolfe-Powell inexact one-dimensional search and numerical differentiation for the gradient (function-style M-file; the precision epson is adjustable);

        5. Write a MATLAB script (script-style M-file) that applies the steepest descent method with both exact and inexact search to solve the following problem:

\min f(x)=100(x_2-x_1^2)^2+(1-x_1)^2

         with precision 0.001 and initial point (-1, 1);

         then change the initial point to (-1.2, 1), rerun, and observe the results.

In this experiment, the objective function is evaluated by a separate function f:

function y=f(x)
% If x is a scalar, it is treated as a step length t and the function
% is evaluated at xk + t*pk, where xk and pk are global variables set
% by the steepest descent routine before the one-dimensional search.
if length(x)==1
    global xk;
    global pk;
    x=xk+x*pk;
end
y=100*(x(2)-x(1)^2)^2+(1-x(1))^2;

1. Algorithm flow chart of the steepest descent method
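The flow chart itself is an image; the loop it depicts, matching the step comments in the code below, can be sketched as:

```text
Step 1: choose an initial point x and a precision e
Step 2: compute the gradient g of f at x by numerical differentiation
Step 3: if ||g|| <= e, stop and output x;
        otherwise set the search direction pk = -g and find a step
        length a by one-dimensional search along pk
Step 4: set x = x + a*pk and return to Step 2
```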

 

2. MATLAB function that computes the gradient by numerical differentiation (function-style M-file):

function g=shuzhiweifenfa(x)
% Central-difference approximation of the gradient with step h = 1e-3:
% g(i) = ( f(x + (h/2)*e_i) - f(x - (h/2)*e_i) ) / h
h=10^-3;
g=zeros(1,length(x));
for i=1:length(x)
    m=zeros(1,length(x));
    m(i)=h/2;
    g(i)=f(x+m)-f(x-m);
end
g=g/h;
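The same central-difference scheme can be sketched in Python (hypothetical names `f` and `num_grad`; the step h = 1e-3 mirrors the MATLAB code). At x = (-1, 1) the analytic gradient of the test function from section 5 is (-4, 0), which the numerical gradient should match closely:

```python
def f(x):
    # Test function from section 5: f(x) = 100*(x2 - x1^2)^2 + (1 - x1)^2
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def num_grad(f, x, h=1e-3):
    # Central difference: g_i = (f(x + h/2*e_i) - f(x - h/2*e_i)) / h
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h / 2.0
        xm[i] -= h / 2.0
        g.append((f(xp) - f(xm)) / h)
    return g

print(num_grad(f, [-1.0, 1.0]))  # close to [-4.0, 0.0]
```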

3. Steepest descent function for solving unconstrained optimization problems, using the golden section method for exact one-dimensional search and numerical differentiation for the gradient (function-style M-file; precision epson adjustable):

function x=zuisuxiajiangfa_hjfg(e,x)
% Steepest descent with exact (golden section) one-dimensional search.
% Step 1: no iteration counter k is kept; only the current iterate is stored.
global xk;
global pk;
while 1
    % Step 2: gradient by numerical differentiation
    g=shuzhiweifenfa(x);
    % Step 3: stop when the Euclidean norm of the gradient is within e
    if sqrt(sum(g.^2))<=e
        return;
    end
    pk=-g;
    xk=x;
    % jintuifa (advance-and-retreat bracketing) and huangjinfenge
    % (golden section search) are defined in the previous post
    % ("General algorithms of unconstrained optimization in MATLAB").
    [a,b,c]=jintuifa(0,0.1);
    a=huangjinfenge(a,c,10^-4);
    % Step 4: take the step
    x=x+a*pk;
end
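Since jintuifa and huangjinfenge are only referenced here, a minimal self-contained Python sketch of golden-section search on a given bracket may help (hypothetical names; the MATLAB versions may differ in interface):

```python
def golden_section(phi, a, b, tol=1e-4):
    # Golden-section search for a minimizer of a unimodal phi on [a, b].
    r = (5 ** 0.5 - 1) / 2  # golden ratio conjugate, about 0.618
    x1 = b - r * (b - a)
    x2 = a + r * (b - a)
    f1, f2 = phi(x1), phi(x2)
    while b - a > tol:
        if f1 < f2:
            # Minimizer lies in [a, x2]; reuse x1 as the new x2.
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = phi(x1)
        else:
            # Minimizer lies in [x1, b]; reuse x2 as the new x1.
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = phi(x2)
    return (a + b) / 2

# Example: minimize phi(t) = (t - 2)^2 on [0, 5]; the minimizer is t = 2.
t = golden_section(lambda t: (t - 2.0) ** 2, 0.0, 5.0)
print(t)  # close to 2.0
```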

4. Steepest descent function for solving unconstrained optimization problems, using the Wolfe-Powell inexact one-dimensional search and numerical differentiation for the gradient (function-style M-file; precision epson adjustable):

function a=Wolfe_Powell(x,pk)
% Wolfe-Powell inexact one-dimensional search along direction pk.
% Step 1: parameters u (sufficient decrease) and b (curvature), initial
% step a, and the current bounds n (lower) and m (upper) on the step.
u=0.1;
b=0.5;
a=1;
n=0;
m=10^100;
% Step 2: function value and gradient at the current point
fx=f(x);
g=shuzhiweifenfa(x);
while 1
    % Note: this xk is a local trial point, unrelated to the global xk.
    xk=x+a*pk;
    fxk=f(xk);
    gk=shuzhiweifenfa(xk);
    if (fx-fxk)>=(-u*a*g*pk.') % sufficient decrease condition (3-1)
        if (gk*pk.')>=(b*g*pk.') % curvature condition (3-2)
            return;
        else
            % Step 4: step too short -- raise the lower bound, enlarge a
            n=a;
            a=min(2*a,(a+m)/2);
        end
    else
        % Step 3: step too long -- lower the upper bound, shrink a
        m=a;
        a=(a+n)/2;
    end
end
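Written out, the two tests marked (3-1) and (3-2) in the loop are the Wolfe-Powell conditions; with μ = u = 0.1 and σ = b = 0.5 as in the code:

```latex
f(x_k) - f(x_k + \alpha p_k) \ge -\mu\,\alpha\, g_k^{\mathsf T} p_k \qquad (3\text{-}1)
```

```latex
g(x_k + \alpha p_k)^{\mathsf T} p_k \ge \sigma\, g_k^{\mathsf T} p_k \qquad (3\text{-}2)
```

Condition (3-1) guarantees a sufficient decrease in f; condition (3-2) rules out steps that are too short.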

function x=zuisuxiajiangfa_Wolfe(e,x)
% Steepest descent with Wolfe-Powell inexact one-dimensional search.
% Step 1: no iteration counter k is kept; only the current iterate is stored.
while 1
    % Step 2: gradient by numerical differentiation
    g=shuzhiweifenfa(x);
    % Step 3: stop when the Euclidean norm of the gradient is within e
    if sqrt(sum(g.^2))<=e
        return;
    end
    pk=-g;
    a=Wolfe_Powell(x,pk);
    % Step 4: take the step
    x=x+a*pk;
end

5. Apply the steepest descent method with both exact and inexact search to the problem (the Rosenbrock function):

\min f(x)=100(x_2-x_1^2)^2+(1-x_1)^2

% Script (section 5): run both variants from the two initial points.
clear
clc
for i=1:2
    if i==1
        x=[-1,1];
    else
        x=[-1.2,1];
    end
    fprintf('=========================');
    fprintf('\nx=%f\t\t%f\n',x(1),x(2));
    fprintf('=========================\n');
    fprintf('Steepest descent with exact search:\n');
    x_=zuisuxiajiangfa_hjfg(10^-3,x);
    fprintf('x*=%f\t%f\n',x_(1),x_(2));
    fprintf('f(x)=%f\n',f(x_));
    fprintf('Steepest descent with inexact search:\n');
    x_=zuisuxiajiangfa_Wolfe(10^-3,x);
    fprintf('x*=%f\t%f\n',x_(1),x_(2));
    fprintf('f(x)=%f\n',f(x_));
end
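As a cross-check of the overall scheme, the experiment can be sketched in Python (hypothetical names; this sketch uses the analytic gradient and a simple backtracking Armijo line search instead of the golden-section / Wolfe-Powell searches above, so iteration counts will differ from the MATLAB runs):

```python
def f(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad(x):
    # Analytic gradient (the MATLAB code uses numerical differentiation).
    return [-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
            200.0 * (x[1] - x[0] ** 2)]

def steepest_descent(x, tol=1e-3, max_iter=50000):
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 <= tol:  # stopping test on ||g||
            break
        p = [-gi for gi in g]  # steepest descent direction
        # Backtracking (Armijo) line search: halve a until sufficient decrease.
        a, fx = 1.0, f(x)
        gp = sum(gi * pi for gi, pi in zip(g, p))
        while f([xi + a * pi for xi, pi in zip(x, p)]) > fx + 0.1 * a * gp:
            a *= 0.5
        x = [xi + a * pi for xi, pi in zip(x, p)]
    return x

for x0 in ([-1.0, 1.0], [-1.2, 1.0]):
    x = steepest_descent(x0)
    print(x0, "->", x, "f =", f(x))  # iterates approach the minimizer (1, 1)
```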

Result:

 

 

Copyright notice
This article was created by [I channel I]; please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/04/202204231433048632.html