Please use this identifier to cite or link to this item:
http://repositoriodigital.ipn.mx/handle/123456789/15423
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Reyes, Alberto | - |
dc.contributor.author | Sucar, L. Enrique | - |
dc.contributor.author | Morales, Eduardo F. | - |
dc.date.accessioned | 2013-04-25T18:21:33Z | - |
dc.date.available | 2013-04-25T18:21:33Z | - |
dc.date.issued | 2009-08-15 | - |
dc.identifier.citation | Revista Computación y Sistemas; Vol. 13 No.1 | es |
dc.identifier.issn | 1405-5546 | - |
dc.identifier.uri | http://www.repositoriodigital.ipn.mx/handle/123456789/15423 | - |
dc.description.abstract | Abstract. This paper proposes a novel and practical model-based learning approach with iterative refinement for solving continuous (and hybrid) Markov decision processes. Initially, an approximate model is learned using conventional sampling methods and solved to obtain a policy. Iteratively, the approximate model is refined using variance in the utility values as partition criterion. In the learning phase, initial reward and transition functions are obtained by sampling the state–action space. The samples are used to induce a decision tree predicting reward values from which an initial partition of the state space is built. The samples are also used to induce a factored MDP. The state abstraction is then refined by splitting states only where the split is locally important. The main contributions of this paper are the use of sampling to construct an abstraction, and a local refinement process of the state abstraction based on utility variance. The proposed technique was tested in AsistO, an intelligent recommender system for power plant operation, where we solved two versions of a complex hybrid continuous-discrete problem. We show how our technique approximates a solution even in cases where standard methods explode computationally. | es |
dc.description.sponsorship | Instituto Politécnico Nacional - Centro de Investigación en Computación (CIC). | es |
dc.language.iso | en_US | es |
dc.publisher | Revista Computación y Sistemas; Vol. 13 No.1 | es |
dc.relation.ispartofseries | Revista Computación y Sistemas;Vol. 13 No.1 | - |
dc.subject | Keywords. Recommender systems, power plants, Markov decision processes, abstractions. | es |
dc.title | AsistO: A Qualitative MDP-based Recommender System for Power Plant Operation | es |
dc.title.alternative | AsistO: Un Sistema de Recomendaciones basado en MDPs Cualitativos para la Operación de Plantas Generadoras | es |
dc.type | Article | es |
dc.description.especialidad | Investigación en Computación | es |
dc.description.tipo | - | es |
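The abstract above outlines the method's loop: sample the state–action space, induce an abstract (partitioned) model from the samples, solve it, then refine the partition where utility estimates vary. As a rough illustration only, here is a minimal sketch on a toy 1-D continuous problem with a simple interval partition and bisection splits — the names, the domain, and the one-step-utility variance proxy are all assumptions for this sketch, not the paper's actual decision-tree and factored-MDP machinery:

```python
import random
from collections import defaultdict

random.seed(0)

# Toy 1-D continuous MDP on [0, 1): actions move left (-1) or right (+1)
# with noise; reward is earned near the right end. Purely illustrative.
def step(s, a):
    s2 = min(max(s + 0.1 * a + random.uniform(-0.02, 0.02), 0.0), 0.999)
    return s2, (1.0 if s2 > 0.8 else 0.0)

def abstract(s, cuts):
    """Map a continuous state to the index of its partition interval."""
    for i, c in enumerate(cuts):
        if s < c:
            return i
    return len(cuts)

def solve(cuts, samples, gamma=0.9, iters=100):
    """Value iteration on the abstract MDP induced from the samples."""
    n = len(cuts) + 1
    trans = defaultdict(lambda: defaultdict(int))
    rew = defaultdict(list)
    for s, a, s2, r in samples:
        i, j = abstract(s, cuts), abstract(s2, cuts)
        trans[(i, a)][j] += 1
        rew[(i, a)].append(r)
    V = [0.0] * n
    for _ in range(iters):
        newV = []
        for i in range(n):
            qs = [0.0]
            for a in (-1, 1):
                total = sum(trans[(i, a)].values())
                if total:
                    rbar = sum(rew[(i, a)]) / len(rew[(i, a)])
                    ev = sum(c * V[j] for j, c in trans[(i, a)].items()) / total
                    qs.append(rbar + gamma * ev)
            newV.append(max(qs))
        V = newV
    return V

def refine(cuts, samples, V, gamma=0.9, threshold=0.01):
    """Bisect intervals whose one-step utility estimates vary widely."""
    util = defaultdict(list)
    for s, a, s2, r in samples:
        util[abstract(s, cuts)].append(r + gamma * V[abstract(s2, cuts)])
    bounds = [0.0] + list(cuts) + [1.0]
    new_cuts = list(cuts)
    for i, us in util.items():
        m = sum(us) / len(us)
        if sum((u - m) ** 2 for u in us) / len(us) > threshold:
            new_cuts.append((bounds[i] + bounds[i + 1]) / 2)
    return sorted(set(new_cuts))

# Learning phase: sample the state-action space once up front.
samples = []
for _ in range(2000):
    s, a = random.random(), random.choice((-1, 1))
    s2, r = step(s, a)
    samples.append((s, a, s2, r))

# Iterative refinement: solve, then split only where locally important.
cuts = [0.5]
for _ in range(3):
    V = solve(cuts, samples)
    cuts = refine(cuts, samples, V)
```

The high-variance region sits around the reward boundary near 0.8, so successive refinements concentrate new partition cuts there while leaving the flat-reward region coarse, which is the intuition behind splitting "only where the split is locally important."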
Appears in collections: | Revistas |
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
v13no1_Art01.pdf | - | 513.48 kB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.