1 Introduction

Process parameter settings for plastic injection molding critically influence the quality of the molded products. An unsuitable process parameter setting inevitably causes a multitude of production problems: long lead times, many rejects, and substandard moldings. The resulting loss of efficiency raises costs and reduces competitiveness. This research develops a process parameter optimization system to help manufacturers make rapid, efficient, preproduction setups for MISO plastic injection molding. The focus of this study was molded housing components, with attention to a particularly telling quality characteristic: weight.

The optimization system proposed herein includes two stages. In the first stage, mold flow analysis was used to obtain preliminary process parameter settings. In the second stage, the Taguchi method with ANOVA was applied to determine optimal initial process parameter settings, and a BPNN was applied to build the prediction model. The BPNN was then individually combined with the DFP method and with a GA to search for the final optimal process parameter settings. Three confirmation experiments were performed to verify the effectiveness of the final optimal process parameter settings. The final optimal process parameter settings are not limited to discrete values, as in the Taguchi method, and can determine settings for production that not only approach the target value of the selected quality characteristic more closely but also exhibit less variation.

2 Optimization methodologies

The optimization methodologies, including BPNNs, GAs, and the DFP method, are briefly introduced as follows.

2.1 Back-propagation neural networks

Many researchers have noted that BPNNs offer fast response and high learning accuracy [19-23]. A BPNN consists of an input layer, one or more hidden layers, and an output layer. The parameters of a BPNN include the number of hidden layers, the number of hidden neurons, the learning rate, the momentum, and so on; all of these parameters have a significant impact on the performance of the network. In this research, the steepest descent method was used to find the weight and bias changes that minimize the cost function, and the activation function is the hyperbolic tangent function. In network learning, input data and output results are used to adjust the weight and bias values of the network. The more detailed the input training classification is, and the greater the amount of learning information provided, the better the output will conform to the expected result. Since the learning and verification data for the BPNN are limited by the range of the activation function values, the data must be normalized by the following equation:

$$P_N = \frac{(P - P_{\min})(D_{\max} - D_{\min})}{P_{\max} - P_{\min}} + D_{\min}$$
where $P_N$ is the normalized data; $P$ is the original data; $P_{\max}$ is the maximum value of the original data; $P_{\min}$ is the minimum value of the original data; $D_{\max}$ is the expected maximum value of the normalized data; and $D_{\min}$ is the expected minimum value of the normalized data. When the neural network is applied to the system, its input and output values fall in the range $[D_{\min}, D_{\max}]$.
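As an illustration only, the short Python sketch below applies this min-max normalization to a hypothetical set of raw part-weight measurements, mapping them into $[D_{\min}, D_{\max}] = [-1, 1]$, a common choice for a network with hyperbolic tangent activations; the sample values are assumed rather than taken from the study.

```python
import numpy as np

def normalize(P, D_min=-1.0, D_max=1.0):
    """Map original data P into [D_min, D_max] with the min-max formula above."""
    P = np.asarray(P, dtype=float)
    P_min, P_max = P.min(), P.max()
    return D_min + (D_max - D_min) * (P - P_min) / (P_max - P_min)

# Hypothetical raw part weights (g) from five molding trials.
weights = [45.2, 45.8, 44.9, 46.1, 45.5]
print(normalize(weights))  # scaled values lie in [-1, 1]; the minimum maps to -1, the maximum to 1
```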
According to previous studies [24, 25], there are a few conditions for terminating network learning: (1) when the root mean square error (RMSE) between the expected value and the network output value is reduced to a preset value; (2) when the preset number of learning cycles has been reached; and (3) when cross-validation takes place between the training samples and the test data. In this research, the first approach was adopted by gradually increasing the network training time to slowly decrease the RMSE until it was stable and acceptable. The RMSE is defined as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(d_i - \hat{d}_i\right)^{2}}$$

where $N$, $d_i$, and $\hat{d}_i$ are the number of training samples, the actual value for training sample $i$, and the predicted value of the neural network for training sample $i$, respectively.
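To make the training and termination procedure concrete, here is a minimal sketch, not the authors' implementation: a one-hidden-layer network with hyperbolic tangent activation is trained by steepest descent on a mean squared error cost, and learning stops once the RMSE falls below a preset value, as in termination condition (1). The synthetic data, network size, learning rate, and RMSE threshold are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 4 normalized process parameters -> normalized part weight.
X = rng.uniform(-1.0, 1.0, size=(20, 4))
d = np.tanh(X @ np.array([0.5, -0.3, 0.8, 0.1]))[:, None]  # synthetic targets in (-1, 1)

n_hidden, lr = 6, 0.05
W1 = rng.normal(scale=0.5, size=(4, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

rmse = np.inf
for epoch in range(5000):
    # Forward pass: hyperbolic tangent hidden layer, linear output neuron.
    H = np.tanh(X @ W1 + b1)
    y = H @ W2 + b2

    # Cost function: mean squared error over the training samples; RMSE is its square root.
    err = y - d
    rmse = np.sqrt(np.mean(err ** 2))
    if rmse < 1e-3:  # termination condition (1): preset RMSE reached
        break

    # Steepest descent: back-propagate the error to obtain weight and bias gradients.
    grad_y = 2.0 * err / len(X)
    grad_W2 = H.T @ grad_y
    grad_b2 = grad_y.sum(axis=0)
    grad_H = (grad_y @ W2.T) * (1.0 - H ** 2)  # derivative of tanh is 1 - tanh^2
    grad_W1 = X.T @ grad_H
    grad_b1 = grad_H.sum(axis=0)

    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print(f"stopped after {epoch + 1} epochs, RMSE = {rmse:.4f}")
```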
    2.2 Genetic algorithms
GAs are a method of searching for optimized factors analogous to Darwin's survival of the fittest and are based on the process of biological evolution. The evolution process is random yet guided by a selection mechanism based on the fitness of individual structures. There is a population of a given number of individuals, each of which represents a particular set of the defined variables. Fitness is determined by the measurable degree of approach to the ideal. The “fittest” individuals are permitted to “reproduce” through a recombination of their variables, in the hope that their “offspring” will prove to be even better adapted. In addition to the strict probabilities dictated by recombination, a small mutation rate is also factored in. Less-fit individuals are discarded in subsequent iterations, and each generation progresses toward an optimal solution.
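To show how the selection, recombination, and mutation loop might look in code, the following is a small sketch rather than the authors' implementation: it evolves a population of candidate process-parameter vectors to minimize the squared deviation of a predicted part weight from a target value. The parameter bounds, the linear placeholder standing in for the trained BPNN prediction model, and the target weight are all hypothetical.

```python
import random

# Hypothetical search ranges for four process parameters
# (e.g., melt temperature, mold temperature, injection pressure, packing time).
BOUNDS = [(180.0, 240.0), (40.0, 80.0), (60.0, 120.0), (1.0, 5.0)]
TARGET_WEIGHT = 45.0  # target value of the quality characteristic (g)

def predicted_weight(x):
    """Placeholder for the BPNN prediction model (purely illustrative)."""
    return 39.0 + 0.02 * x[0] + 0.01 * x[1] + 0.015 * x[2] + 0.3 * x[3]

def fitness(x):
    """Smaller is better: squared deviation of the predicted weight from the target."""
    return (predicted_weight(x) - TARGET_WEIGHT) ** 2

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    """Uniform recombination of two parents' variables."""
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def mutate(x, rate=0.1):
    """With a small probability, resample a variable within its bounds."""
    return [random.uniform(lo, hi) if random.random() < rate else xi
            for xi, (lo, hi) in zip(x, BOUNDS)]

def genetic_algorithm(pop_size=30, generations=100):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)          # rank individuals by fitness
        parents = population[:pop_size // 2]  # discard the less-fit half
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    return min(population, key=fitness)

best = genetic_algorithm()
print(best, predicted_weight(best))
```

In the actual optimization system, the fitness of each individual would be evaluated with the trained BPNN's prediction of the part weight rather than with this placeholder model.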