^{1}, Fan Yang^{1,a}, Changhao Yan^{1}, Xuan Zeng^{1} and Dian Zhou^{1,2,b}

^{1}State Key Lab of ASIC & System, School of Microelectronics, Fudan University, Shanghai, P.R. China
^{2}Department of Electrical Engineering, University of Texas at Dallas, Richardson, TX, USA

^{a}yangfan@fudan.edu.cn
^{b}zhoud@fudan.edu.cn

Multiple starting point optimization is an efficient approach for automated analog circuit optimization. Starting from a set of starting points, the corresponding local optima are reached by a local optimization method, Sequential Quadratic Programming (SQP). The global optimum is then selected from these local optima. If a starting point is located in a valley, the local search converges rapidly to the corresponding local optimum. Such a region-hit property makes the multiple starting point optimization approach more likely to reach the global optimum. However, the SQP method needs gradients to drive the optimization. In the traditional method, the gradients are approximated by finite differences, which requires a large number of simulations and becomes the bottleneck of the circuit optimization. We observe that a new point is usually surrounded by several neighboring points that have already been evaluated in previous SQP steps. In this paper, we propose an efficient method to calculate the gradient by recycling these previously evaluated points. It is based on the relationship between the gradient and the directional derivatives along the directions to the neighboring points. If the neighboring points are insufficient for gradient calculation, we sample additional neighboring points. Furthermore, since the circuit performances are insensitive to some design parameters, the gradients are usually sparse. We can thus further employ the idea of sparse recovery to recover the sparse gradients with fewer simulations. Our experimental results demonstrate that with these strategies, the number of simulations can be reduced by up to 63% without significantly sacrificing the accuracy of the optimization results.
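The two gradient-estimation ideas described above can be sketched numerically: the directional-derivative relation f(p) − f(x) ≈ g · (p − x) over the recycled neighbors yields a linear system for the gradient g, and when g is sparse it can be recovered from fewer directions than design parameters. The sketch below is a minimal illustration, not the paper's implementation; it uses orthogonal matching pursuit as one standard sparse-recovery technique (the paper's exact formulation may differ), and all function and variable names are illustrative.

```python
import numpy as np

def grad_from_neighbors(f, x, neighbors):
    """Estimate the gradient of f at x from already-evaluated neighbors.

    Uses the relation f(p) - f(x) ~ g . (p - x) for each neighbor p and
    solves the resulting linear system in the least-squares sense.
    """
    D = np.array([p - x for p in neighbors])        # direction matrix
    d = np.array([f(p) - f(x) for p in neighbors])  # directional differences
    g, *_ = np.linalg.lstsq(D, d, rcond=None)
    return g

def sparse_grad(f, x, neighbors, k):
    """Recover a k-sparse gradient from fewer neighbors than dimensions,
    via orthogonal matching pursuit (one standard sparse-recovery method).
    """
    D = np.array([p - x for p in neighbors])
    d = np.array([f(p) - f(x) for p in neighbors])
    support, residual = [], d.copy()
    g_s = np.zeros(0)
    for _ in range(k):
        # pick the coordinate best correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the gradient restricted to the selected coordinates
        g_s, *_ = np.linalg.lstsq(D[:, support], d, rcond=None)
        residual = d - D[:, support] @ g_s
    g = np.zeros(x.size)
    g[support] = g_s
    return g
```

For a performance function whose gradient has only a few nonzero entries, `sparse_grad` can recover it from fewer simulations than the number of design parameters, whereas the plain least-squares estimate needs at least as many neighbors as parameters.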