Call for Papers 2020

Jun 2020 - Volume 11, Issue 3
Deadline: 15 May 2020
Due to COVID-19, the deadline has been extended to 31 May 2020
Notification: 15 Jun 2020
Publication: 30 Jun 2020

Aug 2020 - Volume 11, Issue 4
Deadline: 15 Jul 2020
Notification: 15 Aug 2020
Publication: 31 Aug 2020


Indexed in

IJCSE Indexed in Scopus


Title : Q Learning Based Technique for Accelerated Computing on Multicore Processors
Authors : Avinash Dhole, Dr Mohan Awasthy, Dr Sanjay Kumar
Keywords : Multi-core Processing, Reinforcement Learning, Machine Learning, Computational Load Balancing
Issue Date : Oct-Nov 2017
Abstract :
In this paper, we present a new convergent Q-learning algorithm that combines elements of policy iteration and classical Q-learning/value iteration to efficiently learn control policies for dynamic load-balancing scenarios using reinforcement learning techniques. The model is trained with a variant of a memory-optimization strategy for dynamic load-balancing simulation on multi-core processors, using a machine-learning approach whose inputs are several time-consuming computational processes and whose output is a time-oriented wrapper that balances the computational and communication loads, respectively, using an estimate of future rewards. The main advantage of this Q-learning approach is lower overhead: most iterations do not require a minimization over all controls, in the spirit of modified policy iteration. We apply our technique to multi-core Q-learning to obtain an algorithm that combines improved load distribution with effective memory utilization across multiple cores. This technique provides a learning environment for handling computational load without any modification of the architecture's resources or of the learning algorithm. These implementations overcome some of the conventional convergence difficulties of asynchronous modified policy iteration, particularly in settings such as multi-core processors, and yield policy-iteration-like alternative Q-learning schemes with convergence as reliable as that of classical Q-learning.
Page(s) : 601-612
ISSN : 0976-5166
Source : Vol. 8, No.5
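
The abstract above describes tabular Q-learning applied to assigning work across cores. As a rough illustration only (not the authors' algorithm), the sketch below trains a Q-table where the state is the index of the currently least-loaded core, the action is the core chosen for the next task, and the reward penalizes load imbalance; all function names, parameters, and the reward shaping are illustrative assumptions:

```python
import random

def q_learning_balancer(num_cores=4, num_tasks=200, episodes=300,
                        alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Hypothetical tabular Q-learning sketch for core assignment.

    state  : index of the currently least-loaded core
    action : core chosen to run the next task
    reward : negative load imbalance after the assignment
    """
    rng = random.Random(seed)
    # Q[state][action], initialized to zero.
    Q = [[0.0] * num_cores for _ in range(num_cores)]
    for _ in range(episodes):
        loads = [0.0] * num_cores
        state = 0
        for _ in range(num_tasks):
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                action = rng.randrange(num_cores)
            else:
                action = max(range(num_cores), key=lambda a: Q[state][a])
            cost = rng.uniform(0.5, 1.5)          # task's (unknown) run time
            loads[action] += cost
            reward = -(max(loads) - min(loads))   # penalize imbalance
            next_state = loads.index(min(loads))
            # Classical Q-learning update.
            Q[state][action] += alpha * (
                reward + gamma * max(Q[next_state]) - Q[state][action])
            state = next_state
    return Q
```

After training, a greedy policy over the Q-table tends to steer new tasks toward lightly loaded cores, which is the load-balancing behavior the paper targets; the paper's actual method additionally incorporates policy-iteration elements and memory optimization not modeled here.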