Abstract:
In the chip packaging process, traditional PID control of the servo press controller can achieve basic stable control, but its parameters are tuned by experience and lack dynamic adaptability, making it ill-suited to the nonlinearities and parameter variations of the servo press packaging process. To enable the servo press to adapt to real-world operating environments, improve position-tracking accuracy, and achieve precise control, this work incorporates deep reinforcement learning into the servo press control model: the deep deterministic policy gradient (DDPG) algorithm is employed, and an adaptive dynamic compensation mechanism is established to optimize the control parameters. Simulation results show that, compared with traditional PID control, the DDPG-based dynamic compensation control strategy reduces the error range by 91.70%, 94.09%, 85.38%, and 87.57% under nominal, high-friction, wide-clearance, and random-disturbance conditions, respectively, demonstrating significant improvements in tracking performance and disturbance rejection. These simulation results validate the effectiveness of the proposed method.
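To make the control architecture concrete, the following is a minimal sketch (not the paper's implementation) of the compensation structure the abstract describes: a conventional PID controller whose output is augmented by a deterministic policy's state-dependent compensation term, as in a DDPG-style scheme. The gains, state definition, and network sizes are illustrative assumptions; in DDPG the actor below would be trained against a critic, whereas here it is untrained and shown for structure only.

```python
import numpy as np

class PID:
    """Textbook incremental PID on the tracking error."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

class Actor:
    """Tiny deterministic policy: state -> bounded compensation (tanh output).
    In DDPG this network's weights would be learned; here they are random."""
    def __init__(self, state_dim, hidden, rng):
        self.w1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))

    def act(self, state):
        h = np.tanh(state @ self.w1)
        return float(np.tanh(h @ self.w2))  # compensation bounded in [-1, 1]

# One closed-loop step: the tracking error drives the PID, and the actor
# adds an adaptive compensation term on top of the PID output.
rng = np.random.default_rng(0)
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.001)
actor = Actor(state_dim=3, hidden=16, rng=rng)

ref, pos, vel = 1.0, 0.0, 0.0            # illustrative setpoint and plant state
err = ref - pos
state = np.array([err, vel, pid.integral])
u = pid.step(err) + actor.act(state)      # total command = PID + compensation
print(u)
```

The key design point mirrored from the abstract is that the learned term only compensates the baseline PID command rather than replacing it, so the controller degrades gracefully to plain PID when the compensation is zero.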