Large deviations in discrete-time renewal theory
Marco Zamparo
2021-01-01
Abstract
We establish sharp large deviation principles for cumulative rewards associated with a discrete-time renewal model, supposing that each renewal involves a broad-sense reward taking values in a real separable Banach space. The framework we consider is the pinning model of polymers, which amounts to a Gibbs change of measure of a classical renewal process and includes it as a special case. We first tackle the problem in a constrained pinning model, where one of the renewals occurs at a given time, by an argument based on convexity and super-additivity. We then transfer the results to the original pinning model by resorting to conditioning.
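For orientation, here is a minimal sketch of the Gibbs change of measure that the abstract refers to, written in the standard convention for pinning models of polymers; the symbols p, v, N_t, Z_t and the constant pinning potential v are illustrative assumptions, and the paper's actual setting may be more general.

% Sketch under assumed standard pinning-model conventions (not the paper's exact notation).
% Let tau = (tau_0 = 0, tau_1, tau_2, ...) be a renewal process with waiting-time law p,
% and let N_t count the renewals up to time t. The free pinning model at size t reweights
% the renewal law P by an exponential of N_t:
\[
\frac{\mathrm{d}P_t}{\mathrm{d}P}(\tau) \;=\; \frac{e^{\,v\,N_t(\tau)}}{Z_t},
\qquad
N_t(\tau) := \#\{i \ge 1 : \tau_i \le t\},
\qquad
Z_t := \mathbb{E}\!\left[e^{\,v\,N_t}\right].
\]
% The constrained pinning model additionally requires a renewal exactly at time t:
\[
\frac{\mathrm{d}P_t^{\mathrm{c}}}{\mathrm{d}P}(\tau) \;=\; \frac{e^{\,v\,N_t(\tau)}\,\mathbf{1}\{t \in \tau\}}{Z_t^{\mathrm{c}}},
\qquad
Z_t^{\mathrm{c}} := \mathbb{E}\!\left[e^{\,v\,N_t}\,\mathbf{1}\{t \in \tau\}\right].
\]

Setting v = 0 (and dropping the constraint) recovers the classical renewal process, which is the sense in which the pinning model "includes it as a special case". The cumulative reward whose large deviations are studied can then be pictured, again under these illustrative conventions, as W_t = \sum_{i \ge 1} X_i \mathbf{1}\{\tau_i \le t\}, where each X_i is a broad-sense reward taking values in a real separable Banach space.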