The traditional (von Neumann) computing architecture, which requires data transfer between off-chip memory and the processor, consumes a large amount of energy when running machine learning (ML) models. Memristive synaptic devices are employed to eliminate this energy inefficiency when solving cognitive tasks. However, the performance of such energy-efficient neuromorphic systems, though promising, still needs to be improved in terms of accuracy and test error rate for classification applications. Improving the accuracy of such ML models depends on the optimal tuning of learning parameters, from device-level to algorithm-level optimisation. To this end, this paper applies Adadelta, an adaptive learning rate technique, to reduce losses and achieve accurate results, and compares the accuracy, test error rates, and energy consumption of the stochastic gradient descent (SGD), Adagrad, and Adadelta optimisation methods integrated into the Ag:a-Si synaptic device neural network model. The experimental results demonstrated that Adadelta enhanced the accuracy of the hardware-based neural network model by up to 4.32% compared with the Adagrad method. The Adadelta method achieved the best accuracy rate of 94%, while DGD and SGD provided accuracy rates of 68.11% and 75.37%, respectively. These results show that selecting a proper optimisation method is vital for enhancing the performance, particularly the accuracy and test error rates, of neuro-inspired nano-synaptic device-based neural network models.
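For reference, the Adadelta update rule mentioned above (as published by Zeiler, 2012) adapts a per-parameter step size from running averages of squared gradients and squared updates, so no global learning rate must be chosen. The following is a minimal NumPy sketch of that standard rule, not the paper's hardware-integrated implementation; the function name, the toy quadratic objective, and the hyperparameter values are illustrative assumptions.

```python
import numpy as np

def adadelta_step(params, grads, state, rho=0.95, eps=1e-6):
    """One Adadelta update (Zeiler, 2012).

    state = (eg2, edx2): exponential moving averages of the squared
    gradients and squared parameter updates. The effective step size
    is RMS(previous updates) / RMS(gradients), computed per parameter.
    """
    eg2, edx2 = state
    eg2 = rho * eg2 + (1.0 - rho) * grads ** 2          # accumulate g^2
    dx = -np.sqrt(edx2 + eps) / np.sqrt(eg2 + eps) * grads
    edx2 = rho * edx2 + (1.0 - rho) * dx ** 2           # accumulate dx^2
    return params + dx, (eg2, edx2)

# Illustrative usage: minimise f(w) = w^2 starting from w = 3.
w = np.array([3.0])
state = (np.zeros_like(w), np.zeros_like(w))
for _ in range(500):
    grad = 2.0 * w                     # gradient of w^2
    w, state = adadelta_step(w, grad, state)
# w has moved toward the minimum at 0 without any hand-set learning rate
```

Note that the decaying averages let the step size grow or shrink automatically per parameter, which is the property the paper exploits when tuning the synaptic device network against fixed-rate SGD.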