Please use this identifier to cite or link to this item: http://repositoriodigital.ipn.mx/handle/123456789/15423
Full metadata record
DC Field                      Value    Language
dc.contributor.author         Reyes, Alberto    -
dc.contributor.author         Sucar, L. Enrique    -
dc.contributor.author         Morales, Eduardo F.    -
dc.date.accessioned           2013-04-25T18:21:33Z    -
dc.date.available             2013-04-25T18:21:33Z    -
dc.date.issued                2009-08-15    -
dc.identifier.citation        Revista Computación y Sistemas; Vol. 13 No. 1    es
dc.identifier.issn            1405-5546    -
dc.identifier.uri             http://www.repositoriodigital.ipn.mx/handle/123456789/15423    -
dc.description.abstract       This paper proposes a novel and practical model-based learning approach with iterative refinement for solving continuous (and hybrid) Markov decision processes. Initially, an approximate model is learned using conventional sampling methods and solved to obtain a policy. Iteratively, the approximate model is refined using variance in the utility values as partition criterion. In the learning phase, initial reward and transition functions are obtained by sampling the state–action space. The samples are used to induce a decision tree predicting reward values from which an initial partition of the state space is built. The samples are also used to induce a factored MDP. The state abstraction is then refined by splitting states only where the split is locally important. The main contributions of this paper are the use of sampling to construct an abstraction, and a local refinement process of the state abstraction based on utility variance. The proposed technique was tested in AsistO, an intelligent recommender system for power plant operation, where we solved two versions of a complex hybrid continuous-discrete problem. We show how our technique approximates a solution even in cases where standard methods explode computationally.    es
dc.description.sponsorship    Instituto Politécnico Nacional - Centro de Investigación en Computación (CIC).    es
dc.language.iso               en_US    es
dc.publisher                  Revista Computación y Sistemas; Vol. 13 No. 1    es
dc.relation.ispartofseries    Revista Computación y Sistemas; Vol. 13 No. 1    -
dc.subject                    Recommender systems, power plants, Markov decision processes, abstractions    es
dc.title                      AsistO: A Qualitative MDP-based Recommender System for Power Plant Operation    es
dc.title.alternative          AsistO: Un Sistema de Recomendaciones basado en MDPs Cualitativos para la Operación de Plantas Generadoras    es
dc.type                       Article    es
dc.description.especialidad   Investigación en Computación    es
dc.description.tipo           PDF    es
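
The abstract above outlines a pipeline: sample the state-action space, induce a decision tree over reward values to obtain an initial partition of the continuous state space, solve the resulting abstract MDP, and refine the abstraction where the variance of the utility values is high. The following Python sketch illustrates that pipeline on a hypothetical 1-D control problem. The toy dynamics, constants, and helper names (step, reward, N_SAMPLES, etc.) are assumptions made for illustration, and scikit-learn's DecisionTreeRegressor is used as a stand-in for the decision-tree induction the authors describe; this is not the AsistO implementation or the paper's exact qualitative-MDP algorithm.

```python
# Minimal sketch of the sampling-based abstraction idea summarized in the
# abstract: sample, partition the state space with a reward-predicting tree,
# solve the abstract MDP, and flag high-utility-variance states for splitting.
# All problem details are illustrative assumptions, not the AsistO system.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
GAMMA, N_SAMPLES, N_LEAVES = 0.95, 2000, 8
ACTIONS = (-0.1, +0.1)                      # toy "decrease" / "increase" controls

def step(s, a):
    """Toy continuous dynamics; stands in for the plant simulator."""
    return float(np.clip(s + a + rng.normal(scale=0.02), 0.0, 1.0))

def reward(s):
    """Reward for keeping the state inside a target operating band."""
    return 1.0 if 0.45 <= s <= 0.55 else 0.0

# 1) Sample the state-action space.
S = rng.uniform(0.0, 1.0, N_SAMPLES)
A = rng.integers(0, len(ACTIONS), N_SAMPLES)
S2 = np.array([step(s, ACTIONS[a]) for s, a in zip(S, A)])
R = np.array([reward(s) for s in S])

# 2) Decision tree predicting reward from state; its leaves define the
#    initial partition of the continuous state space (the abstract states).
tree = DecisionTreeRegressor(max_leaf_nodes=N_LEAVES, random_state=0)
tree.fit(S.reshape(-1, 1), R)
leaf_ids = np.unique(tree.apply(S.reshape(-1, 1)))
idx = {l: i for i, l in enumerate(leaf_ids)}
K = len(leaf_ids)
ls = np.array([idx[l] for l in tree.apply(S.reshape(-1, 1))])
ls2 = np.array([idx[l] for l in tree.apply(S2.reshape(-1, 1))])

# 3) Estimate the abstract MDP (transition frequencies, mean rewards) from
#    the same samples and solve it by value iteration.
P = np.zeros((K, len(ACTIONS), K))
Rm = np.zeros((K, len(ACTIONS)))
cnt = np.zeros((K, len(ACTIONS)))
for i in range(N_SAMPLES):
    P[ls[i], A[i], ls2[i]] += 1
    Rm[ls[i], A[i]] += R[i]
    cnt[ls[i], A[i]] += 1
cnt = np.maximum(cnt, 1.0)
P /= cnt[:, :, None]
Rm /= cnt
V = np.zeros(K)
for _ in range(200):
    V = (Rm + GAMMA * P @ V).max(axis=1)    # Bellman backup over abstract states

# 4) Refinement criterion: variance of per-sample utility backups within each
#    abstract state; high-variance states are candidates for local splitting.
backup = R + GAMMA * V[ls2]
util_var = np.array([backup[ls == k].var() for k in range(K)])
print("abstract-state utilities:", np.round(V, 2))
print("split candidates (highest utility variance):", np.argsort(util_var)[::-1][:2])
```

In the setting the abstract describes, this would run as a loop: split the flagged states, re-estimate and re-solve the refined model, and repeat until the splits stop being locally important; the sketch above only performs one pass and reports the split candidates.
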
Appears in collections: Revistas

Files in this item:
File                Description    Size         Format
v13no1_Art01.pdf                   513.48 kB    Adobe PDF

