One of the biggest challenges in achieving the goal of producing fusion energy in tokamak devices is the need to avoid disruptions of the plasma current caused by instabilities. The Disruption Event Characterization and Forecasting (DECAF) framework has been developed for this purpose, integrating physics models of many causal events that can lead to a disruption. Two different machine learning approaches are proposed to improve the ideal magnetohydrodynamic (MHD) no-wall limit component of the kinetic stability model included in DECAF. First, a random forest regressor (RFR) was adopted to reproduce the DCON-computed change in plasma potential energy without wall effects, δW, for a large database of equilibria from the National Spherical Torus Experiment (NSTX). This tree-based method provides an analysis of the contribution of each input feature, giving insight into the underlying physical phenomena. Second, a fully connected neural network was trained on sets of DCON calculations to obtain an improved closed-form equation for the no-wall limit as a function of the relevant plasma parameters identified by the RFR. The neural network was guided by ideal MHD theory in its extension outside the domain of the NSTX experimental data. The estimated beta limit has been incorporated into the DECAF kinetic stability model and tested against a set of experimentally stable and unstable discharges. Moreover, the neural network results were used to simulate a real-time stability assessment using only quantities available in real time. Finally, the portability of the model was investigated, with encouraging results when testing the NSTX-trained algorithm on the Mega Ampere Spherical Tokamak (MAST).
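To illustrate the "closed-form equation" idea mentioned above: once a small fully connected network is trained, its forward pass is just an explicit algebraic expression in the input plasma parameters that can be evaluated in real time. The sketch below is purely illustrative, not the paper's actual model — the weights, the choice of three inputs, and the function name are all placeholders, and the real network architecture and fitted coefficients are not given here.

```python
import math

# Hypothetical placeholder weights for a 3-input, 2-hidden-unit tanh
# network; these are NOT fitted values from the paper.
W1 = [[0.8, -0.3, 0.5],   # hidden unit 1 input weights
      [-0.2, 0.6, 0.1]]   # hidden unit 2 input weights
b1 = [0.1, -0.4]          # hidden-layer biases
W2 = [1.2, -0.7]          # output-layer weights
b2 = 2.5                  # output bias

def beta_n_no_wall(features):
    """Closed-form surrogate: the no-wall beta_N limit as an explicit
    function of plasma parameters (illustrative inputs only, e.g.
    internal inductance, pressure peaking, aspect ratio)."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

print(round(beta_n_no_wall([0.6, 2.0, 1.5]), 3))
```

Because the evaluation is a handful of multiplications and hyperbolic tangents, such a surrogate is cheap enough to run inside a real-time stability assessment loop, in contrast to a full DCON calculation.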