e-ISSN: 0976-5166
p-ISSN: 2231-3850


INDIAN JOURNAL OF COMPUTER SCIENCE AND ENGINEERING

ABSTRACT

Title : Q Learning Based Technique for Accelerated Computing on Multicore Processors
Authors : Avinash Dhole, Dr Mohan Awasthy, Dr Sanjay Kumar
Keywords : Multi-core Processing, Reinforcement Learning, Machine Learning, Computational Load Balancing
Issue Date : Oct-Nov 2017
Abstract :
In this paper, we present a new convergent Q-learning algorithm that combines elements of policy iteration and classical Q-learning/value iteration to efficiently learn control policies for dynamic load-balancing scenarios using reinforcement learning techniques. The model is trained with a variant of a memory-optimization strategy for dynamic load-balancing simulation on multi-core processors, using a machine-learning approach whose inputs are several time-consuming computational processes and whose outputs are time-oriented wrappers that balance the computational and communication loads, respectively, with an estimate of future rewards. The primary advantage of this Q-learning approach is lower overhead: in the context of modified policy iteration, most iterations do not require a minimization over all controls. We apply our technique to multi-core Q-learning to obtain an algorithm that combines improved load distribution with effective memory utilization across multiple cores. The technique provides a learning environment for handling computational load without modifying the architecture's resources or the learning algorithm. These implementations overcome some of the conventional convergence difficulties of asynchronous modified policy iteration, particularly in settings such as multi-core processors, and yield policy-iteration-like alternative Q-learning schemes whose convergence is as reliable as that of classical Q-learning.
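To make the mechanism concrete, the Python sketch below shows one way such a scheme can look in a tabular setting. It is an illustration only, not the authors' implementation: the state discretization, the constants (NUM_CORES, POLICY_REFRESH, ALPHA, GAMMA, EPSILON), and the helper names are hypothetical, and the sketch uses the reward-maximization view where the abstract speaks of minimization over controls.

```python
import random

# Hypothetical sketch of Q-learning with a modified-policy-iteration flavour
# for task-to-core assignment; constants and state encoding are assumptions.
NUM_CORES = 4            # actions: which core receives the next task
NUM_LOAD_LEVELS = 5      # coarse state: discretized load of the busiest core
ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor
EPSILON = 0.1            # exploration rate
POLICY_REFRESH = 10      # how often the greedy action is fully recomputed

Q = [[0.0] * NUM_CORES for _ in range(NUM_LOAD_LEVELS)]
cached_policy = [0] * NUM_LOAD_LEVELS  # greedy action per state, refreshed lazily

def choose_core(state: int, step: int) -> int:
    """Epsilon-greedy selection using a lazily refreshed greedy policy."""
    if random.random() < EPSILON:
        return random.randrange(NUM_CORES)
    if step % POLICY_REFRESH == 0:
        # Occasional full maximization over all cores (the expensive step).
        cached_policy[state] = max(range(NUM_CORES), key=lambda a: Q[state][a])
    return cached_policy[state]

def update(state: int, core: int, reward: float, next_state: int) -> None:
    """Q update that bootstraps from the cached policy's action instead of
    a full max over all cores -- the low-overhead step."""
    target = reward + GAMMA * Q[next_state][cached_policy[next_state]]
    Q[state][core] += ALPHA * (target - Q[state][core])
```

Deferring the full argmax to every POLICY_REFRESH-th step is what keeps per-update overhead low; between refreshes the update bootstraps from the cached greedy action, in the spirit of modified policy iteration.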
Page(s) : 601-612
ISSN : 0976-5166
Source : Vol. 8, No. 5