e-ISSN: 0976-5166
p-ISSN: 2231-3850


INDIAN JOURNAL OF COMPUTER SCIENCE AND ENGINEERING


ABSTRACT

Title : Q Learning Based Technique for Accelerated Computing on Multicore Processors
Authors : Avinash Dhole, Dr Mohan Awasthy, Dr Sanjay Kumar
Keywords : Multi-core Processing, Reinforcement Learning, Machine Learning, Computational Load Balancing.
Issue Date : Oct-Nov 2017
Abstract :
In this paper, we present a new convergent Q-learning algorithm that combines elements of policy iteration and classical Q-learning/value iteration to efficiently learn control policies for dynamic load-balancing scenarios using reinforcement learning techniques. The model is trained with a variant of a memory-optimization strategy on a dynamic load-balancing simulation for multi-core processors, using a machine-learning approach whose inputs are several time-consuming computational processes and whose outputs are time-oriented wrappers for balancing the computational and communication loads, respectively, against an estimate of future rewards. The main advantage of this Q-learning approach is lower overhead: most iterations do not require a minimization over all controls, as in modified policy iteration. We apply the technique to multi-core Q-learning to obtain an algorithm that combines improved load distribution with effective memory utilization across multiple cores. The technique provides a learning environment for handling computational load without modifying the architecture's resources or the learning algorithm. These implementations overcome some of the conventional convergence difficulties of asynchronous modified policy iteration, particularly in settings such as multi-core processors, and provide policy-iteration-like alternatives to Q-learning with convergence as reliable as that of classical Q-learning.
Page(s) : 601-612
ISSN : 0976-5166
Source : Vol. 8, No. 5
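
To illustrate the kind of learning-based load balancing the abstract describes, the following is a minimal Python sketch of tabular Q-learning applied to assigning incoming tasks to cores. The state discretisation, the reward (negative load imbalance), the simulated task costs, and all parameter values are illustrative assumptions, not the authors' formulation; the modified-policy-iteration refinement and memory-optimization strategy mentioned in the abstract are not reproduced here.

# Minimal sketch (assumptions: tabular Q-learning over a simulated set of cores,
# state = discretised per-core loads, reward = negative load imbalance).
import random
from collections import defaultdict

NUM_CORES = 4
LOAD_LEVELS = 5          # each core's load is bucketed into 0..4
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = defaultdict(float)   # Q[(state, action)] -> estimated future reward

def discretise(loads, max_load):
    """Map raw per-core loads to a small tuple of buckets (the RL state)."""
    return tuple(min(LOAD_LEVELS - 1, int(l / max_load * LOAD_LEVELS)) for l in loads)

def choose_core(state):
    """Epsilon-greedy action selection: pick the core to receive the next task."""
    if random.random() < EPS:
        return random.randrange(NUM_CORES)
    return max(range(NUM_CORES), key=lambda a: Q[(state, a)])

def reward(loads):
    """Higher reward for a more balanced load distribution."""
    return -(max(loads) - min(loads))

def train(episodes=1000, tasks_per_episode=50, max_load=150.0):
    for _ in range(episodes):
        loads = [0.0] * NUM_CORES
        for _ in range(tasks_per_episode):
            cost = random.uniform(1.0, 10.0)          # incoming task's compute cost
            state = discretise(loads, max_load)
            action = choose_core(state)
            loads[action] += cost                     # schedule the task on the chosen core
            next_state = discretise(loads, max_load)
            r = reward(loads)
            # Classical Q-learning update toward reward plus discounted best future value
            best_next = max(Q[(next_state, a)] for a in range(NUM_CORES))
            Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

if __name__ == "__main__":
    train()
    print("Learned Q-table entries:", len(Q))

In this sketch, the scheduler only observes coarse load buckets and learns which core to favour next; a practical system along the lines described in the paper would also have to account for communication load and memory behaviour in the state and reward design.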