Page 176, Section 7.2 Proximal Policy Optimization (PPO)
Thanks to Jérémie Clair Coté for suggesting we clarify this and for the discussion, and to HyeAnn Lee for the correction.
Page 176, the last sentence of the 1st paragraph and the first two sentences of the 2nd paragraph read:
To see why this is the case, consider when $r_t(\theta)A_t$ would assume large positive values, which is either $A_t > 0, r_t(\theta) > 0$, or $A_t < 0, r_t(\theta) < 0$. When $A_t > 0, r_t(\theta) > 0$, if $r_t(\theta)$ becomes much larger than 1, the upper clip term $1 - \epsilon$ applies to upper-bound $r_t(\theta) \leq 1 + \epsilon$, hence $J^{CLIP} \leq (1 + \epsilon)A_t$. On the other hand, when $A_t < 0, r_t(\theta) < 0$, if $r_t(\theta)$ becomes much smaller than 1, the lower clip term $1 - \epsilon$ applies to again upper-bound $J^{CLIP} \leq (1 - \epsilon)A_t$.

This is confusing because (1) $r_t(\theta)$ cannot be $< 0$ because it is a ratio of two probabilities and (2) there is a typo when referring to the upper clip term. The sentences should be replaced with:
To see why this is the case, let's consider when $A_t$ is either $> 0$ or $< 0$. Note that $r_t(\theta)$ is always $\geq 0$ because it is a ratio of two probabilities. When $A_t > 0$, if $r_t(\theta) > 1 + \epsilon$, the upper clip term $1 + \epsilon$ applies to upper-bound $r_t(\theta) \leq 1 + \epsilon$, hence $J^{CLIP} \leq (1 + \epsilon)A_t$. On the other hand, when $A_t < 0$, if $r_t(\theta) < 1 - \epsilon$, the lower clip term $1 - \epsilon$ applies to again upper-bound $J^{CLIP} \leq (1 - \epsilon)A_t$.
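The clipping behavior described in the correction can be made concrete in code. Below is a minimal PyTorch sketch of the clipped surrogate objective from equation 7.39; it is not the book's implementation, and the function and tensor names are illustrative.

```python
import torch

def clipped_surrogate(log_probs, old_log_probs, advantages, eps=0.2):
    # r_t(theta) = pi_theta(a|s) / pi_theta_old(a|s); always >= 0
    # since it is a ratio of two probabilities
    ratios = torch.exp(log_probs - old_log_probs)
    unclipped = ratios * advantages
    # clip(r_t(theta), 1 - eps, 1 + eps) * A_t
    clipped = torch.clamp(ratios, 1.0 - eps, 1.0 + eps) * advantages
    # taking the min upper-bounds J^CLIP by (1 + eps) * A_t when A_t > 0
    # and by (1 - eps) * A_t when A_t < 0
    return torch.min(unclipped, clipped).mean()

# e.g. with A_t = 2.0 and a ratio far above 1 + eps, the objective is
# capped at (1 + eps) * A_t
adv = torch.tensor([2.0])
print(clipped_surrogate(torch.tensor([0.0]), torch.tensor([-2.0]), adv))
# tensor(2.4000) == (1 + 0.2) * 2.0
```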
Page 178, Section 7.3 PPO Algorithm, Algorithm 7.2
Thanks to Jérémie Clair Coté for this correction.
Algorithm 7.2 PPO with clipping, line 35:
$\theta_C = \theta_C + \alpha_C \nabla_{\theta_C} L_{val}(\theta_C)$

contains a typo. The second term on the right-hand side of the equation should be subtracted, not added, since the loss is being minimized. It should read:
$\theta_C = \theta_C - \alpha_C \nabla_{\theta_C} L_{val}(\theta_C)$

Note that the actor parameter update on line 33 of Algorithm 7.2 is correct because the policy "loss" for PPO is formulated as an objective to be maximized (see equation 7.39 on page 177).
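To make the sign convention concrete, here is a minimal PyTorch sketch, not the book's implementation: the critic loss is minimized by gradient descent (the subtraction in the corrected line 35), while the clipped policy objective is maximized by descending on its negative (equivalent to the ascent step on line 33). The toy networks, optimizers, and tensor shapes below are assumptions for illustration only.

```python
import torch

# toy actor and critic standing in for the networks in Algorithm 7.2
actor = torch.nn.Linear(4, 2)
critic = torch.nn.Linear(4, 1)
actor_opt = torch.optim.SGD(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.SGD(critic.parameters(), lr=1e-3)

states = torch.randn(8, 4)
v_targets = torch.randn(8, 1)  # value targets, e.g. from returns
advantages = torch.randn(8)    # advantage estimates A_t

# critic update: L_val is a loss, so step *down* the gradient,
# i.e. theta_C <- theta_C - alpha_C * grad L_val(theta_C)
val_loss = torch.nn.functional.mse_loss(critic(states), v_targets)
critic_opt.zero_grad()
val_loss.backward()
critic_opt.step()

# actor update: J^CLIP (equation 7.39) is an objective to *maximize*;
# minimizing its negative performs the gradient ascent of line 33
dist = torch.distributions.Categorical(logits=actor(states))
actions = dist.sample()
old_log_probs = dist.log_prob(actions).detach()  # stand-in for pi_old
ratios = torch.exp(dist.log_prob(actions) - old_log_probs)
clipped = torch.clamp(ratios, 0.8, 1.2) * advantages
policy_objective = torch.min(ratios * advantages, clipped).mean()
actor_opt.zero_grad()
(-policy_objective).backward()
actor_opt.step()
```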