DOI: https://doi.org/10.26089/NumMet.v24r428

A new method of linear programming

Authors

  • Nikolay A. Olkhovsky
  • Leonid B. Sokolinsky

Keywords:

linear programming
surface movement method
iterative method
convergence theorem
deep neural network

Abstract

The article presents a new iterative linear programming method, called the surface movement method. On the surface of the polytope bounding the feasible region of the linear programming problem, the method builds a path from an initial boundary point to a point at which the objective function attains its optimal value. The movement vector specifies the direction of the maximum increase (or decrease) in the value of the objective function. A formal description of the algorithm implementing the surface movement method is given, and a convergence theorem is proved. The method uses a feedforward deep neural network to determine the direction of movement along the edges of the feasible polytope. To this end, a multidimensional local image of the linear programming problem is constructed at the current approximation point and fed to the input of the neural network. The set of labeled precedents needed to train the neural network can be obtained using the apex method.
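To make the general idea concrete, the sketch below illustrates one step of movement along the surface of a polytope {x : Ax <= b}: the objective gradient is projected onto the face of constraints active at the current point, and the point moves along that projection until a new constraint becomes active. This is a hypothetical illustration of the surface-walking idea only, not the authors' algorithm; the function name `surface_step` and the vertex-handling behavior are assumptions (in the method described above, the direction at such points is chosen by a neural network rather than by projection alone).

```python
import numpy as np

def surface_step(A, b, c, x, tol=1e-9):
    """One illustrative step on the surface of the polytope {x : A @ x <= b},
    moving in the direction of maximum increase of the objective c @ x.
    Hypothetical sketch, not the surface movement method itself."""
    active = np.abs(A @ x - b) <= tol           # constraints tight at x
    M = A[active]
    if M.size:
        # Project c onto the null space of the active constraints:
        # subtract the component of c lying in the row space of M.
        y, *_ = np.linalg.lstsq(M.T, c, rcond=None)
        d = c - M.T @ y
    else:
        d = c.astype(float).copy()              # x is interior to every facet
    if np.linalg.norm(d) <= tol:
        # No ascent direction on this face (e.g., at a vertex); here the
        # described method would pick an edge via the neural network.
        return x, False
    d /= np.linalg.norm(d)
    # Largest step that keeps A @ (x + t * d) <= b.
    Ad = A @ d
    slack = b - A @ x
    grow = Ad > tol
    t = (slack[grow] / Ad[grow]).min() if grow.any() else 1.0
    return x + t * d, True

# Example: maximize x1 + x2 over the unit square 0 <= x1, x2 <= 1.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
c = np.array([1.0, 1.0])
x1, moved = surface_step(A, b, c, np.array([0.5, 0.0]))
# moves from (0.5, 0) along the facet x2 = 0 to the vertex (1, 0)
```

The projection step keeps the trajectory on the current face, mirroring the idea that the path stays on the polytope surface while the objective value increases monotonically.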


Published

2023-11-20

Section

Methods and algorithms of computational mathematics and their applications

Author Biographies

Nikolay A. Olkhovsky

Leonid B. Sokolinsky


References

  1. T. Hartung, “Making Big Sense from Big Data,” Front. Big Data 1, Article Number 5 (2018).
    doi 10.3389/fdata.2018.00005
  2. I. M. Sokolinskaya and L. B. Sokolinsky, “On the Solution of Linear Programming Problems in the Age of Big Data,” in Communications in Computer and Information Science (Springer, Cham, 2017), Vol. 753, pp. 86-100.
    doi 10.1007/978-3-319-67035-5_7
  3. J. Branke, “Optimization in Dynamic Environments,” in Evolutionary Optimization in Dynamic Environments. Genetic Algorithms and Evolutionary Computation (Springer, Boston, 2002), Vol. 3, pp. 13-29.
    doi 10.1007/978-1-4615-0911-0_2
  4. I. I. Eremin and V. D. Mazurov, Nonstationary Processes of Mathematical Programming (Nauka, Moscow, 1979) [in Russian].
  5. J. Brogaard, T. Hendershott, and R. Riordan, “High-Frequency Trading and Price Discovery,” Rev. Financ. Stud. 27 (8), 2267-2306 (2014).
    doi 10.1093/rfs/hhu032
  6. S. Deng, X. Huang, J. Wang, et al., “A Decision Support System for Trading in Apple Futures Market Using Predictions Fusion,” IEEE Access 9, 1271-1285 (2021).
    doi 10.1109/ACCESS.2020.3047138
  7. G. Seregin, Lecture Notes on Regularity Theory for the Navier-Stokes Equations (World Scientific, Singapore, 2014).
    doi 10.1142/9314
  8. D. Demin, “Synthesis of Optimal Control of Technological Processes Based on a Multialternative Parametric Description of the Final State,” East.-Eur. J. Enterp. Technol. 3 (4), 51-63 (2017).
  9. L. S. Kazarinov, D. A. Shnayder, and O. V. Kolesnikova, “Heat Load Control in Steam Boilers,” in Proc. Int. Conf. on Industrial Engineering, Applications and Manufacturing, St. Petersburg, Russia, May 16-19, 2017.
    doi 10.1109/ICIEAM.2017.8076177
  10. E. V. Zagoskina, T. A. Barbasova, and D. A. Shnaider, “Intelligent Control System of Blast-Furnace Melting Efficiency,” in Proc. Int. Multi-Conference on Engineering, Computer and Information Sciences, Novosibirsk, Russia, October 21-27, 2019.
    doi 10.1109/SIBIRCON48586.2019.8958221
  11. J. Fleming, X. Yan, C. Allison, et al., “Real-Time Predictive Eco-Driving Assistance Considering Road Geometry and Long-Range Radar Measurements,” IET Intell. Transp. Syst. 15 (4), 573-583 (2021).
    doi 10.1049/ITR2.12047
  12. M. Scholl, K. Minnerup, C. Reiter, et al., “Optimization of a Thermal Management System for Battery Electric Vehicles,” in Proc. 14th Int. Conf. on Ecological Vehicles and Renewable Energies, Monte-Carlo, Monaco, May 8-10, 2019.
    doi 10.1109/EVER.2019.8813657
  13. S. Meisel, “Dynamic Vehicle Routing,” in Anticipatory Optimization for Dynamic Decision Making. Operations Research/Computer Science Interfaces Series (Springer, New York, 2011), Vol. 51, pp. 77-96.
    doi 10.1007/978-1-4614-0505-4_6
  14. D. R. Kiran, Production Planning and Control: A Comprehensive Approach (Butterworth-Heinemann, Oxford, 2019).
    doi 10.1016/C2018-0-03856-6
  15. R. Mall, Real-Time Systems: Theory and Practice (Pearson Education, Delhi, 2007).
  16. G. B. Dantzig, Linear Programming and Extensions (Princeton Univ. Press, Princeton, 1998).
  17. J. A. J. Hall and K. I. M. McKinnon, “Hyper-Sparsity in the Revised Simplex Method and How to Exploit It,” Comput. Optim. Appl. 32 (3), 259-283 (2005).
    doi 10.1007/s10589-005-4802-0
18. V. Klee and G. Minty, “How Good is the Simplex Algorithm?” in Inequalities III (Academic Press, New York, 1972), pp. 159-175.
  19. R. H. Bartels, J. Stoer, and Ch. Zenger, “A Realization of the Simplex Method Based on Triangular Decompositions,” in Handbook for Automatic Computation. Vol. II: Linear Algebra (Springer, Berlin, 1971), pp. 152-190.
    doi 10.1007/978-3-642-86940-2_11
  20. P. Tolla, “A Survey of Some Linear Programming Methods,” in Concepts of Combinatorial Optimization (Wiley, Hoboken, 2014), pp. 157-188.
    doi 10.1002/9781119005216.ch7
  21. J. A. J. Hall, “Towards a Practical Parallelisation of the Simplex Method,” Comput. Manag. Sci. 7 (2), 139-170 (2010).
    doi 10.1007/s10287-008-0080-5
  22. B. Mamalis and G. Pantziou, “Advances in the Parallelization of the Simplex Method,” in Lecture Notes in Computer Science (Springer, Cham, 2015), Vol. 9295, pp. 281-307.
    doi 10.1007/978-3-319-24024-4_17
  23. V. I. Zorkaltsev and I. V. Mokryi, “Interior Point Algorithms in Linear Optimization,” Sib. Zh. Ind. Mat. 21 (1), 11-20 (2018) [J. Appl. Ind. Math. 12 (1), 191-199 (2018)].
    doi 10.1134/S1990478918010179
  24. I. I. Dikin, “Iterative Solution of Problems of Linear and Quadratic Programming,” Dokl. Akad. Nauk SSSR 174 (4), 747-748 (1967).
  25. J. Gondzio, “Interior Point Methods 25 Years Later,” Eur. J. Oper. Res. 218 (3), 587-601 (2012).
    doi 10.1016/j.ejor.2011.09.017
  26. C. Roos, T. Terlaky, and J.-P. Vial, Interior Point Methods for Linear Optimization (Springer, New York, 2005).
    doi 10.1007/b100325
  27. I. Sokolinskaya, “Parallel Method of Pseudoprojection for Linear Inequalities,” in Communications in Computer and Information Science (Springer, Cham, 2018), Vol. 910, pp. 216-231.
    doi 10.1007/978-3-319-99673-8_16
  28. J. Gondzio and A. Grothey, “Direct Solution of Linear Systems of Size 10^9 Arising in Optimization with Interior Point Methods,” in Lecture Notes in Computer Science (Springer, Berlin, 2006), Vol. 3911, pp. 513-525.
    doi 10.1007/11752578_62
  29. A. Prieto, B. Prieto, E. M. Ortigosa, et al., “Neural Networks: An Overview of Early Research, Current Frameworks and New Challenges,” Neurocomputing 214, 242-268 (2016).
    doi 10.1016/j.neucom.2016.06.014
30. D. Tank and J. Hopfield, “Simple ‘Neural’ Optimization Networks: An A/D Converter, Signal Decision Circuit, and a Linear Programming Circuit,” IEEE Trans. Circuits Syst. 33 (5), 533-541 (1986).
    doi 10.1109/TCS.1986.1085953
  31. M. Kennedy and L. Chua, “Unifying the Tank and Hopfield Linear Programming Circuit and the Canonical Nonlinear Programming Circuit of Chua and Lin,” IEEE Trans. Circuits Syst. 34 (2), 210-214 (1987).
    doi 10.1109/TCS.1987.1086095
32. A. Rodriguez-Vazquez, R. Dominguez-Castro, A. Rueda, et al., “Nonlinear Switched-Capacitor ‘Neural’ Networks for Optimization Problems,” IEEE Trans. Circuits Syst. 37 (3), 384-398 (1990).
    doi 10.1109/31.52732
  33. S. H. Zak, V. Upatising, and S. Hui, “Solving Linear Programming Problems with Neural Networks: A Comparative Study,” IEEE Trans. Neural Netw. 6 (1), 94-104 (1995).
    doi 10.1109/72.363446
  34. A. Malek and A. Yari, “Primal-Dual Solution for the Linear Programming Problems Using Neural Networks,” Appl. Math. Comput. 167 (1), 198-211 (2005).
    doi 10.1016/J.AMC.2004.06.081
  35. X. Liu and M. Zhou, “A One-Layer Recurrent Neural Network for Non-Smooth Convex Optimization Subject to Linear Inequality Constraints,” Chaos Solit. Fractals 87, 39-46 (2016).
    doi 10.1016/j.chaos.2016.03.009
  36. Y. LeCun, Y. Bengio, and G. Hinton, “Deep Learning,” Nature 521 (7553), 436-444 (2015).
    doi 10.1038/nature14539
  37. N. A. Olkhovsky and L. B. Sokolinsky, “Visualizing Multidimensional Linear Programming Problems,” in Communications in Computer and Information Science (Springer, Cham, 2022), Vol. 1618, pp. 172-196.
    doi 10.1007/978-3-031-11623-0_13
  38. R. Raina, A. Madhavan, and A. Y. Ng, “Large-Scale Deep Unsupervised Learning Using Graphics Processors,” in Proc. 26th Annual Int. Conf. on Machine Learning, Montreal, Canada, June 14-18, 2009 (ACM Press, New York, 2009), pp. 873-880.
    doi 10.1145/1553374.1553486
39. L. B. Sokolinsky and I. M. Sokolinskaya, “Apex Method: A New Scalable Iterative Method for Linear Programming,” Mathematics 11 (7), Article Number 1654 (2023).
    doi 10.3390/MATH11071654
  40. N. A. Olkhovsky, Study of the Receptive Field Structure in the Visual Method of Solving the Linear Programming Problem [in Russian].
    doi 10.24108/preprints-3112771