Limit of Sequence of Right Continuous Functions
Right-Continuous Function
Topics from the Theory of Characteristic Functions
George Roussas, in An Introduction to Measure-Theoretic Probability (Second Edition), 2014
11.1 Definition of the Characteristic Function of a Distribution and Basic Properties
In all that follows, d.f.s are nonnegative, nondecreasing, right-continuous functions with finite variations; it is not assumed that the variations are necessarily bounded by 1 unless otherwise stated (see also Exercises 4 and 5 in Chapter 8).
Definition 1
The characteristic function of a d.f. (in the sense of Definition 1 in Chapter 8; see also Remark 5 there) is, in general, a complex-valued function defined on the real line by
(11.1)
The integration in (11.1) is to be understood either in the sense of Riemann–Stieltjes, or as integration with respect to the measure induced by the d.f. (see also Appendix B). The integral is well defined for all real t, since the real and imaginary parts of the integrand are integrable with respect to that measure. If the d.f. is that of a r.v., then (11.1) may be rewritten as (11.2)
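As a quick numerical illustration of (11.1)–(11.2) (a sketch added here, not part of the original text), the snippet below approximates the ch.f. of a standard normal r.v. by averaging e^{itX} over simulated draws and compares it with the known closed form e^{-t²/2}; all names and parameters are illustrative.

```python
import numpy as np

def empirical_chf(samples, t):
    """Approximate the characteristic function E[exp(i t X)] by a sample mean."""
    return np.mean(np.exp(1j * t * samples))

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)          # draws from a standard normal r.v.

for t in [0.0, 0.5, 1.0, 2.0]:
    approx = empirical_chf(x, t)
    exact = np.exp(-t ** 2 / 2)           # known ch.f. of N(0, 1)
    print(f"t = {t:3.1f}   Re(approx) = {approx.real:+.4f}   exact = {exact:+.4f}   |approx| = {abs(approx):.4f}")
```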
Some basic properties of a ch.f. are gathered next in the form of a theorem.
Theorem 1
- (i)
-
, and . In particular, if and , then is the ch.f. of a r.v.
- (ii)
-
is uniformly continuous in .
- (iii)
-
If is the ch.f. of a r.v. , then , where and are constants.
- (iv)
-
If is the ch.f. of a r.v. , then , where, for .
- (v)
-
If for some positive integer the th moment is finite, then .
Remark 1
In the proof of the theorems, as well as in other cases, the following property is used:
where and are real-valued functions, and . Its justification is left as an exercise (see Exercise 1).
Proof of Theorem 1
For convenience omit in the integration. Then
- (i)
-
, and . If , then , which together with , implies , so that is the d.f. of a r.v.
- (ii)
-
. Now , which is independent of and -integrable. Furthermore, as . Therefore the Dominated Convergence Theorem applies and gives
- (iii)
-
.
- (iv)
-
.
- (v)
-
Consider, e.g., the interval for some . Then, for exists, and , independent of and integrable. Then, by Theorem 5 in Chapter 5,
The same applies for any , since exists, and , independent of and integrable. In particular, .
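As an added numerical check related to property (v) (a sketch using an exponential r.v., not part of the original text): the first two finite-difference derivatives of an empirical ch.f. at 0 come out close to i·E[X] and −E[X²], in line with the differentiation under the integral sign used in the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=500_000)   # E[X] = 1, E[X^2] = 2

def chf(t):
    return np.mean(np.exp(1j * t * x))

h = 1e-3
d1 = (chf(h) - chf(-h)) / (2 * h)              # numerical f'(0), close to i*E[X]
d2 = (chf(h) - 2 * chf(0.0) + chf(-h)) / h**2  # numerical f''(0), close to -E[X^2]

print("f'(0)  ≈", d1, "   i*E[X]  =", 1j * x.mean())
print("f''(0) ≈", d2, "   -E[X^2] =", -np.mean(x**2))
```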
URL: https://www.sciencedirect.com/science/article/pii/B9780128000427000116
Methods and Models in Neurophysics
Emery N. Brown, in Les Houches, 2005
3. The conditional intensity function and interevent time probability density
Neural spike trains are characterized by their interspike interval probability models. In Section 2, we showed how elementary interspike interval probability models can be derived from elementary stochastic dynamical systems models of neurons. By viewing a neural spike train as a point process, we can present a characterization of the spike train in terms of its conditional intensity function. We develop this characterization in this section and relate the conditional intensity function to the interspike interval probability models of Section 2. The presentation here closely follows [5].
Let (0, T] denote the observation interval and let be a set of J spike time measurements. For t ∈ (0, T] let N 0:t be the sample path of the point process over (0, t]. It is defined as the event , where N(t) is the number of spikes in (0, t] and j ≤ J. The sample path is a right-continuous function that jumps by 1 at the spike times and is constant otherwise [1, 5–8]. The function N 0:t tracks the location and number of spikes in (0, t] and hence contains all the information in the sequence of spike times (Fig. 4A). The counting process N(t) gives the total number of events that have occurred up through time t (a minimal construction of N(t) from a list of spike times is sketched after the list below). The counting process satisfies
- i)
-
N(t) ≥ 0.
- ii)
-
N(t) is an integer-valued function.
- iii)
-
If s < t, then N(s) ≤ N(t).
- iv)
-
For s < t, N(t) – N(s) is the number of events in (s, t).
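As mentioned above, here is a minimal sketch (added for illustration; the spike times are made up) of constructing N(t) from a list of spike times. The returned function is right-continuous, jumps by 1 at each spike, and satisfies properties i)–iv).

```python
import bisect

def counting_process(spike_times):
    """Return N(t): the number of spikes in (0, t], as a right-continuous step function."""
    spikes = sorted(spike_times)
    def N(t):
        # bisect_right counts spikes u with u <= t, i.e. spikes in (0, t]
        return bisect.bisect_right(spikes, t)
    return N

spikes = [0.8, 1.3, 2.9, 3.1, 4.7]       # hypothetical spike times (seconds)
N = counting_process(spikes)

# Properties i)-iv): nonnegative, integer-valued, nondecreasing, and
# N(t) - N(s) counts the spikes in (s, t].
print(N(0.5), N(1.3), N(3.0), N(5.0))    # -> 0 2 3 5
print(N(3.0) - N(1.0))                   # spikes in (1.0, 3.0] -> 2
```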
We define the conditional intensity function for t ∈ (0, T] as
(3.1)
where Ht is the history of the sample path and of any covariates up to time t. In general λ(t|Ht ) depends on the history of the spike train and is therefore also termed the stochastic intensity. In survival analysis the conditional intensity function is called the hazard function [9, 10]. This is because the hazard function can be used to define the probability of an event in the interval [t, t + Δ) given that there has not been an event up to t. For example, it might represent the probability that a piece of equipment fails in [t, t + Δ) given that it has worked up to time t [9]. As another example, it might define the probability that a patient receiving a new therapy dies in the interval [t, t + Δ) given that he/she has survived up to time t [10]. It follows that λ(t|Ht ) can be defined in terms of the interspike interval probability density at time t, p(t|Ht ), as
(3.2)
We gain insight into the definition of the conditional intensity function in Eq. 3.1 by considering the following heuristic derivation of Eq. 3.2 based on the definition of the hazard function. We compute explicitly the probability of the event, a spike in [t, t + Δ) given Ht and that there has been no spike in (0, t). That is,
(3.3)
where o(Δ) refers to all events of order smaller than Δ, such as two or more spikes occurring in an arbitrarily small interval. This establishes Eq. 3.2.
The power of the conditional intensity function is that if it can be defined, as Eq. 3.3 suggests, then it completely characterizes the stochastic structure of the spike train. In any small time interval Δ, λ(t|Ht )Δ defines the probability of a spike given the history up to time t. If the spike train is an inhomogeneous Poisson process, then λ(t|Ht ) becomes the Poisson rate function. Thus, the conditional intensity function (Eq. 3.1) is a history-dependent rate function that generalizes the definition of the Poisson rate function. Similarly, Eq. 3.1 is also a generalization of the hazard function for renewal processes [9, 10].
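To make this concrete, the added sketch below (with a hypothetical refractory-modulated intensity; it is not a model from the text) generates a spike train directly from a history-dependent conditional intensity using the local Bernoulli approximation P(spike in [t, t + Δ) | Ht) ≈ λ(t|Ht)Δ suggested by Eq. 3.3.

```python
import numpy as np

def simulate_spikes(lam, T, dt=1e-3, seed=0):
    """Draw a spike train on (0, T] from a conditional intensity lam(t, history),
    using the local Bernoulli approximation P(spike in [t, t+dt)) ~ lam * dt."""
    rng = np.random.default_rng(seed)
    spikes = []
    t = 0.0
    while t < T:
        rate = lam(t, spikes)
        if rng.random() < rate * dt:        # dt must be small enough that rate*dt << 1
            spikes.append(t)
        t += dt
    return spikes

# Hypothetical intensity: baseline 20 spikes/s with a relative refractory period that
# recovers exponentially after each spike (history dependence through the last spike).
def lam(t, spikes):
    if not spikes:
        return 20.0
    return 20.0 * (1.0 - np.exp(-(t - spikes[-1]) / 0.05))

train = simulate_spikes(lam, T=5.0)
print(f"{len(train)} spikes in 5 s; first few: {np.round(train[:5], 3)}")
```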
Example 3.1. Conditional intensity function of the Gamma probability density.
The gamma probability density for the integrate and fire model in Eq. 2.4 is
(3.4)
From Eq. 3.2, it follows that the conditional intensity function is
(3.5)
Example 3.2. Conditional intensity function of the inverse Gaussian probability density.
The inverse Gaussian probability density for the Wiener process integrate and fire model in Eq. 2.20 is
(3.6)
From Eq. 3.2, the conditional intensity function for this model is
(3.7)
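A numerical sketch of Eq. 3.2 for renewal-type examples such as 3.1 and 3.2 (added here; the parameters are made up and t is measured from the last spike): the conditional intensity is the interspike-interval density divided by its survival function, shown for a gamma density.

```python
import numpy as np
from scipy.stats import gamma

def hazard(pdf, cdf, t):
    """Eq. 3.2 for a renewal model: lambda(t|H_t) = p(t) / (1 - P(t)),
    with t measured from the last spike."""
    return pdf(t) / (1.0 - cdf(t))

k, theta = 2.0, 0.05                       # hypothetical gamma shape and scale (s)
ts = np.array([0.01, 0.05, 0.10, 0.20])    # times since the last spike (s)
lam = hazard(lambda t: gamma.pdf(t, a=k, scale=theta),
             lambda t: gamma.cdf(t, a=k, scale=theta), ts)
for t, l in zip(ts, lam):
    print(f"t = {t:4.2f} s   lambda = {l:8.2f} spikes/s")
```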
URL: https://www.sciencedirect.com/science/article/pii/S0924809905800204
Stochastic Dynamics
Don Kulasiri, Wynand Verwoerd, in North-Holland Series in Applied Mathematics and Mechanics, 2002
2.3 What is Stochastic Calculus?
In standard calculus we deal with differentiable functions which are continuous except perhaps at certain locations of the domain under consideration. To understand the continuity of functions better we make use of the definition of limits. We call a function f continuous at the point t = t0 if
regardless of the direction from which t approaches t0. A right-continuous function at t0 is required to have its limiting value only when t approaches t0 from the right, i.e. when t is larger than t0 in the vicinity of t0. We will denote this as
Similarly a left-continuous function at t0 can be represented as
These statements imply that a continuous function is both right-continuous and left-continuous at a given point t. Often we encounter functions having discontinuities; hence the need for the above definitions. To measure the size of a discontinuity, we define a "jump" at any point t to be a discontinuity where both f(t+) and f(t−) exist, with the size of the jump given by ∆f(t) = f(t+) − f(t−). Jumps are discontinuities of the first kind, and any other discontinuity is called a discontinuity of the second kind. Obviously a function can only have a countable number of jumps in a given range. From the mean value theorem in calculus it can be shown that we can differentiate a function in a given interval only if the function is either continuous or has a discontinuity of the second kind during the interval. Stochastic calculus is the calculus dealing with often non-differentiable functions having jumps but no discontinuities of the second kind. One example of such a function is the Wiener process (Brownian motion). One realization of the standard Wiener process is given in Figure 2.1.
Without going into the details of how we computed this function (we will do that in Chapter 3), we can see that the increments are irregular and we cannot define a derivative according to the mean value theorem. This is because the function changes erratically within any interval, however small that interval may be, so we cannot define a derivative at a given point in the conventional sense. Therefore we have to devise new mathematical tools to deal with these irregular non-differentiable functions.
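The added sketch below simulates one realization of a standard Wiener process on a grid (the construction used in Chapter 3 of the book may differ) and shows that the difference quotients |W(t + h) − W(t)|/h grow roughly like 1/√h as the step shrinks, so they never settle down to a derivative.

```python
import numpy as np

rng = np.random.default_rng(42)

# One realization of a standard Wiener process on [0, 1] via independent Gaussian
# increments with variance dt (a discrete-grid construction).
n = 2**16
dt = 1.0 / n
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))))

# Difference quotients |W(t+h) - W(t)| / h for shrinking h: their typical size
# grows like 1/sqrt(h) instead of converging.
for step in [256, 64, 16, 4, 1]:
    h = step * dt
    quotients = np.abs(W[step::step] - W[:-step:step]) / h
    print(f"h = {h:.2e}   mean |increment| / h = {quotients.mean():8.1f}")
```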
URL: https://www.sciencedirect.com/science/article/pii/S0167593102800031
Preliminaries
Jaroslav Hájek, ... Pranab K. Sen, in Theory of Rank Tests (Second Edition), 1999
2.3.4 The Neyman-Pearson lemma.
This basic lemma shows that the most powerful test for testing a simple hypothesis against a simple alternative may be found quite easily.
Lemma 1
In testing p against q at level α the most powerful test may be found as follows:
(1)
where k and Ψ0 (x) for x such that q(x) = k p(x) should and can be defined so that
(2)
Proof.
Observe that α(c) = P(q(X) > cp(X )) is a non-increasing and right-continuous function of c such that α(0 − 0) = 1 and α(∞) = 0. Therefore for each α ∈ (0,1) there exists a k ≥ 0 such that
(3)
If k is a continuity point, then (2) follows from (1) regardless of the choice of Ψ0(x) for x such that q(x) = kp(x). If k is a point of discontinuity, it suffices to put
(4)
for x such that q(x) = kp(x).
Now for any other critical function Ψ, 0 ≤ Ψ ≤ 1 and (1) imply that either sign(Ψ0 − Ψ) = sign(q − kp) or at least one of these expressions equals 0, so that for all x
(5)
Consequently, if ∫ Ψ dP ≤ α,
(6)
which was to be proved.
Remark 1.
We also see that Ψ has the same power as Ψ0 only if (5) equals 0 a.s., i.e. if Ψ satisfies (1) a.s. with respect to Lebesgue measure.
Remark 2.
We have made no use of the fact that the measure with respect to which the densities are defined is the Lebesgue measure; as a matter of fact, the Neyman-Pearson lemma holds for densities defined with respect to any σ-finite measure.
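As an added finite sample-space sketch of the lemma (the densities p and q below are made up), the test of form (1) can be constructed explicitly: reject when q(x) > k p(x), randomize on the boundary q(x) = k p(x), and choose k and the boundary value so that the size equals α exactly, as in (2)–(4).

```python
import numpy as np

def neyman_pearson(p, q, alpha):
    """Most powerful level-alpha test of p against q on a finite sample space.
    Returns the critical function psi (per-point rejection probabilities) and k."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    ratio = np.full_like(p, np.inf)
    np.divide(q, p, out=ratio, where=p > 0)        # likelihood ratio q/p, +inf where p = 0
    # alpha(c) = P_p(ratio > c) is non-increasing and right-continuous; take the
    # smallest observed ratio value k with alpha(k) <= alpha, as in (3).
    k = next(c for c in np.unique(ratio) if p[ratio > c].sum() <= alpha)
    psi = (ratio > k).astype(float)                # reject outright above the threshold
    size_above = p[ratio > k].sum()
    boundary = np.isclose(ratio, k)
    if p[boundary].sum() > 0:                      # randomize on q(x) = k p(x), as in (4)
        psi[boundary] = (alpha - size_above) / p[boundary].sum()
    return psi, k

p = np.array([0.4, 0.3, 0.2, 0.1])                 # hypothetical null density
q = np.array([0.1, 0.2, 0.3, 0.4])                 # hypothetical alternative density
psi, k = neyman_pearson(p, q, alpha=0.05)
print("k =", k, "  psi =", psi)
print("size  =", np.dot(psi, p))                   # equals alpha = 0.05
print("power =", np.dot(psi, q))
```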
URL: https://www.sciencedirect.com/science/article/pii/B9780126423501500205
Recent Progress in Functional Analysis
Bertram M. Schreiber, in North-Holland Mathematics Studies, 2001
1 Introduction
The notion of a stochastic process which is continuous in probability (stochastically continuous in [11]) arises in numerous contexts in probability theory [2,4,5,11,15]. Indeed, the Poisson process is continuous in probability, and this notion plays a role in the study of its generalizations and, from a broader point of view, in the theory of processes with independent increments [11]. For instance, the work of X. Fernique [9] on random right-continuous functions with left-hand limits (so-called cadlag functions) involves continuity in probability in an essential way.
The study of processes continuous in probability as a generalization of the notion of a continuous function began with the approximation theorems of K. Fan [7] (cf. [5], Theorems VI.III.III and VI.III.IV) and D. Dugué ([5], Theorem VI.III.V) on the unit interval. These results were generalized to convex domains in higher dimensions in [12], where the problem was raised of describing all compact sets in the complex plane on which every random function continuous in probability can be uniformly approximated in probability by random polynomials. This problem, as well as the corresponding question for rational approximation, was taken up in [1]. Along with some stimulating examples, the authors of [1] prove, under the natural assumptions appearing below, that random polynomial approximation holds over Jordan curves and the closures of Jordan domains.
In this note we study the space of functions continuous in probability over a general topological space and develop the analogue of the space C(K) for K compact. This space has the structure of a Fréchet algebra. We investigate the closed ideals of this algebra and then introduce the notion of a stochastic uniform algebra.
Just as in the deterministic, classical case, there are natural stochastic uniform algebras defined by the appropriate concept of random approximation. We shall highlight some results from [3] which show that random polynomial approximation in the plane obtains for a very large class of compact sets. For instance, if K is a compact set with the property that every continuous function on ∂K can be uniformly approximated by rational functions, then every function continuous in probability on K (with respect to a nonatomic measure) and random holomorphic on the interior of K can be uniformly approximated in probability by random polynomials.
URL: https://www.sciencedirect.com/science/article/pii/S0304020801800554
Preliminaries
Yuriy E. Obzherin, Elena G. Boyko, in Semi-Markov Models, 2015
1.3 Preliminaries on semi-Markov processes with arbitrary phase space of states
We present the necessary results from the theory of semi-Markov processes (SMPs) with arbitrary phase space of states [14–16].
Definition [16]. A semi-Markov kernel (SM-kernel) in a measurable space (E, ) is a function satisfying the following conditions:
- (1)
-
are nondecreasing right-continuous functions of ;
- (2)
-
with fixed, is a semistochastic kernel:
- (3)
-
is a stochastic kernel by that is
An SMP with arbitrary phase space of states is defined by means of a Markov renewal process (MRP).
Definition [16]. An MRP is a two-dimensional Markov chain taking values in . Its transition probabilities are given by the expression:
where is an SM-kernel in (E, ).
The first component of the MRP is a Markov chain. Its transition probabilities are defined by means of SM-kernel :
It is called the embedded Markov chain (EMC) of the MRP . The RVs making up the second component of the MRP determine the intervals between the moments of Markov renewal:
Consider the counting process which counts the number of Markov renewal moments in .
Definition [16]. The process is an SMP corresponding to the MRP .
It can be concluded from the definition that the SMP is a jump right-continuous process:
Another way of defining an SMP is the following [16]:
- (1)
-
stochastic kernel
- (2)
-
DF of sojourn times of EMC transitions
Then SM-kernel is defined by the formula [16]:
(1.14)
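An added simulation sketch of this construction (a hypothetical three-state system; for simplicity the sojourn-time d.f.s depend only on the current state, a special case of the kernel (1.14)): starting from the EMC transition probabilities and the sojourn-time d.f.s, one generates the Markov renewal moments and the jump right-continuous SMP trajectory.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 3-state semi-Markov process: EMC transition matrix P and
# sojourn-time distributions depending only on the current state.
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])
sojourn = [lambda: rng.exponential(2.0),    # state 0: Exp(mean 2)
           lambda: rng.uniform(0.5, 1.5),   # state 1: Uniform(0.5, 1.5)
           lambda: rng.exponential(0.5)]    # state 2: Exp(mean 0.5)

def simulate_smp(x0, T):
    """Markov renewal process (xi_n, theta_n) and the corresponding SMP path:
    a jump right-continuous, piecewise-constant trajectory on [0, T]."""
    t, x = 0.0, x0
    jumps = [(0.0, x0)]
    while True:
        theta = sojourn[x]()                # sojourn time in the current state
        t += theta                          # next Markov renewal moment
        if t > T:
            break
        x = rng.choice(3, p=P[x])           # next state of the embedded Markov chain
        jumps.append((t, x))
    return jumps

for t, x in simulate_smp(x0=0, T=10.0):
    print(f"renewal moment t = {t:6.3f}   state = {x}")
```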
Let us write out definitions and formulas of some reliability and efficiency characteristics of restorable systems described by means of SMP.
Let a system be described by SMP with a phase space (E, ). Assume the set of SMP states can be represented as
where and are interpreted as sets of system up- and down-states, respectively.
Definition [16]. The stationary availability factor of the system is the number given by
under the assumption that the limit exists and does not depend on the initial state
The following stationary reliability characteristics of restorable systems are often in use. Their formal definition is given in [16]:
- (a)
-
mean stationary operating time to failure ,
- (b)
-
mean stationary restoration time .
EMC stationary distribution satisfies the integral equation:
(1.15)
It was proved in [16] that, if the unique stationary distribution of the EMC of the SMP describing the system operation exists, the characteristics are given by the formulas:
(1.16)
(1.17)
(1.18)
under some assumptions.
Here denotes the EMC stationary distribution, and is the mean sojourn time in state . One should note that the characteristics are related as follows:
(1.19)
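A small numerical sketch of these stationary characteristics (added here; the system is a made-up finite-state one, and the formulas below are the standard finite-state forms of such characteristics, which may differ in notation from (1.16)–(1.18)): the EMC stationary distribution ρ solves the discrete analogue of (1.15), the availability factor is the ρ·m-weighted share of the up-states, and the relation (1.19) between the availability factor and the mean stationary operating/restoration times can be checked directly.

```python
import numpy as np

# Hypothetical 3-state system: states 0, 1 are up-states (E+), state 2 is a down-state (E-).
P = np.array([[0.0, 0.8, 0.2],       # EMC transition probabilities
              [0.6, 0.0, 0.4],
              [1.0, 0.0, 0.0]])
m = np.array([10.0, 5.0, 2.0])        # mean sojourn times m(x) in each state
up = np.array([True, True, False])

# EMC stationary distribution rho: rho = rho P, sum(rho) = 1 (discrete analogue of (1.15)).
A = np.vstack([P.T - np.eye(3), np.ones(3)])
rho = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

# Stationary availability factor: time-weighted share of the up-states.
K_a = (rho[up] * m[up]).sum() / (rho * m).sum()

# Mean stationary operating time to failure and restoration time (standard finite-state forms).
T_plus = (rho[up] * m[up]).sum() / (rho[up] * P[up][:, ~up].sum(axis=1)).sum()
T_minus = (rho[~up] * m[~up]).sum() / (rho[~up] * P[~up][:, up].sum(axis=1)).sum()

print("rho =", np.round(rho, 4))
print("K_a =", round(K_a, 4))
print("T+  =", round(T_plus, 3), "  T- =", round(T_minus, 3))
print("T+/(T+ + T-) =", round(T_plus / (T_plus + T_minus), 4), " (matches K_a, cf. (1.19))")
```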
The Markov renewal equation [16] plays an important role in the theory of SMP. It is as follows:
(1.20)
Markov renewal equations for some SMP characteristics are given in [16]. The Markov renewal equation for the distribution of the sojourn time of SMP in a certain subset of states is often applied [16]:
(1.21)
its consequence is the equation for mean sojourn times in a subset [15]:
where is the SMP mean sojourn time in
Stationary efficiency characteristics of system operation are: , the mean specific income per calendar time unit, and , the mean specific expenses per time unit of up-state. In terms of the SM model, these characteristics are given by the ratios [18,26]:
(1.22)
(1.23)
where , are functions denoting income and expenses in each state.
In the monograph, the following method of approximation of system stationary reliability characteristics, introduced in [14], is applied.
Let the operation of the initial system S be described by an SMP with a phase space (E, ). The set E of states is divided into two subsets and , so that . Assume the kernel , B∈ , of the EMC of the SMP is close to the kernel , B∈ , of the EMC of a supporting system having the unique stationary distribution , B∈ .
Then instead of the expressions (1.17) and (1.18) we can use the following formulas [14]:
(1.24)
approximating characteristics of the initial system .
Here, is the EMC stationary distribution for the supporting system; are the mean sojourn times in the states of the initial system; are the probabilities of EMC transitions from up- into down-states in a minimal path for the initial system; and is the minimum number of steps necessary for transition from the states of belonging to the ergodic class of the initial system to the set of down-states . Under , formula (1.24) takes the form:
(1.25)
The kernel of the initial system EMC is close to the kernel of the supporting system EMC ; that is why, under , along with the second formula in (1.25), the following approximating formula for can be used:
(1.26)
To approximate system stationary efficiency characteristics, instead of (1.22) and (1.23) the following ratios will be used:
(1.27)
where is the stationary distribution of the supporting system EMC ; are the mean sojourn times in the states of the initial system; and , are the functions denoting income and expenses in each state of the initial system.
Semi-Markov models of latent failure control are built under the following assumptions:
- (1)
-
From the point of view of reliability, a system component is a minimal constituent element (part), which can fail, be controlled, and be restored.
- (2)
-
A component failure is detected only during control execution.
- (3)
-
After failure detection, the restoration process begins immediately.
- (4)
-
A component is completely restored during the restoration process.
- (5)
-
The DFs of the RVs (operating time to failure, time periods between the moments of control execution, control time, and restoration time) are arbitrary.
The stages of semi-Markov model construction and system stationary characteristics definition are represented in Figure 1.2.
URL: https://www.sciencedirect.com/science/article/pii/B9780128022122000012
Time Series Analysis: Methods and Applications
Kanchan Mukherjee, in Handbook of Statistics, 2012
6.1 M- and R-Estimators
Let denote a generic value in the parameter space and let be the true parameter. To estimate , we proceed in three steps. Using in (29), we first propose a preliminary estimator ; note that the proposal does not take into account the heteroscedasticity of the model, and hence it gives a consistent but inefficient estimator. Next, we use to construct an estimator of the parameter . Finally, substituting and in (29), the heteroscedastic model is transformed into an approximate nonlinear homoscedastic autoregressive model (36), and we use standard robust estimation procedures for homoscedastic models to propose an improved estimator of .
In the sequel, and denote the derivatives of the functions μ and σ, respectively, with respect to their second arguments. Also, for a vector , its j th coordinate is denoted by .
- Step 1:
-
Define
(34)
where is the j th coordinate of the vector , . In particular, when ,
- Step 2:
-
Let
(35)
This in turn can be approximated by (36)
which is a nonlinear autoregressive model with homoscedastic errors. Now, using the standard definition for the homoscedastic nonlinear model (35), the class of M-estimators and R-estimators based on appropriate score functions ψ and φ, respectively, can be defined as follows; see the study by Bose and Mukherjee (2003) for a similar two-step idea.
- Step 3:
-
Let ψ be a nondecreasing and bounded function on IR such that . An example is the function when the errors are symmetrically distributed around 0.
Let belong to the class
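Since the model equations (29)–(36) are not reproduced above, the following is only a toy illustration (added here; everything in it is hypothetical) of the ingredient used in Step 3: a bounded, nondecreasing score function ψ and the estimating equation it enters, written for a simple homoscedastic location model with symmetric errors.

```python
import numpy as np
from scipy.optimize import brentq

def huber_psi(x, c=1.345):
    """A bounded, nondecreasing score function (Huber's psi)."""
    return np.clip(x, -c, c)

def m_estimate_location(y, psi=huber_psi):
    """Solve sum psi(y_t - theta) = 0 in theta: an M-estimator for a toy
    homoscedastic location model."""
    g = lambda theta: psi(y - theta).sum()
    return brentq(g, y.min(), y.max())      # g is nonincreasing and changes sign

rng = np.random.default_rng(3)
y = 2.0 + rng.standard_t(df=3, size=500)    # heavy-tailed errors around theta = 2
print("M-estimate :", round(m_estimate_location(y), 3))
print("sample mean:", round(y.mean(), 3))
```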
URL: https://www.sciencedirect.com/science/article/pii/B9780444538581000065
Interpolation of Operators
In Pure and Applied Mathematics, 1988
Definition 1.5
Suppose f belongs to M 0(R, μ). The decreasing rearrangement of f is the function f* defined on [0, ∞) by
(1.9)
We use here the convention that inf Ø = ∞. Thus, if μf (λ) > t for all λ ≥ 0, then f*(t) = ∞. Also, if (R, μ) is a finite measure space, then the distribution function μf is bounded by μ(R) and so f*(t) = 0 for all t ≥ μ(R). In this case, we may regard f* as a function defined on the interval [0, μ(R)). Notice also that if μf happens to be continuous and strictly decreasing, then f* is simply the inverse of μ f on the appropriate interval. In fact, for general f, if we first form the distribution function μf and then form the distribution function mμf of μf (with respect to Lebesgue measure m on [0, ∞)) we obtain precisely the decreasing rearrangement f*. This is an immediate consequence of the identities
(1.10)
which follow from (1.9), the fact that μ f is decreasing, and the definition of the distribution function.
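An added numerical sketch of (1.9)–(1.10) for a simple function on [0, 1) with Lebesgue measure (the values are made up): compute the distribution function μ_f and take its generalized inverse to obtain f*, which is just the set of values sorted in decreasing order.

```python
import numpy as np

def decreasing_rearrangement(values, cell_measure):
    """Distribution function and decreasing rearrangement (1.9) for a function given
    by its values on cells of equal measure (a simple function, as in Example 1.6)."""
    a = np.abs(np.asarray(values, float))

    def mu_f(lam):                                    # mu_f(lambda) = mu{ |f| > lambda }
        return cell_measure * np.sum(a > lam)

    def f_star(t):                                    # f*(t) = inf{ lambda >= 0 : mu_f(lambda) <= t }
        candidates = np.unique(np.concatenate(([0.0], a)))
        ok = [lam for lam in candidates if mu_f(lam) <= t + 1e-12]
        return float(ok[0]) if ok else np.inf         # candidates are sorted, so ok[0] is the inf
    return mu_f, f_star

# f on [0, 1) taking the values below on ten cells of measure 0.1 each.
vals = [0.2, 1.5, 0.7, 0.7, 3.0, 0.0, 1.5, 0.2, 2.2, 0.7]
mu_f, f_star = decreasing_rearrangement(vals, cell_measure=0.1)

print([f_star(t) for t in np.arange(0.0, 1.0, 0.1)])
# -> the same values sorted in decreasing order: [3.0, 2.2, 1.5, 1.5, 0.7, 0.7, 0.7, 0.2, 0.2, 0.0]
print(float(mu_f(0.7)))   # measure of { |f| > 0.7 } = 0.4
```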
Examples 1.6
- (a)
-
Now we compute the decreasing rearrangement of the simple function f given by (1.6). Referring to (1.9) and Figure 1, we see that f*(t) = 0 if t ≥ m 3. Also, if m 3 > t ≥ m 2, then f*(t) = a 3, and if m 2 > t ≥ m 1, then f*(t) = a 2, and so on. Hence,
(1.11)
where we have taken m 0 = 0. Geometrically, we are merely rearranging the vertical blocks in the graph of f in decreasing order to obtain the decreasing rearrangement f* (see Figure 2); the values of f* at the jumps are determined by the right continuity (Proposition 1.7).
- (b)
-
It is sometimes more useful to section functions into horizontal blocks rather than vertical ones. Thus, the simple function f in (1.6) may be represented also as follows:
(1.12)
where the coefficients bk are positive and the sets Fk each have finite measure and form an increasing sequence F 1 ⊂ F 2 ⊂ … ⊂ Fn . Comparison with (1.6) shows that
In this case, the decreasing rearrangement is viewed as being formed by sliding the blocks in each horizontal layer to form a single larger block positioned with its left-hand end against the vertical axis (see Figure 3). Thus
(1.13)
- (c)
-
Let f(x) = 1 − e^(−x), (0 < x < ∞). The distribution function mf (with respect to Lebesgue measure m on (0, ∞)) is infinite for 0 ≤ λ < 1, and equal to zero for all λ ≥ 1. Hence f*(t) = 1 for all t ≥ 0 (cf. Figure 4). This example shows that a considerable amount of information may be lost in passing to the decreasing rearrangement. Such information, however, is irrelevant as far as Lp-norms (or any other rearrangement-invariant norms) are concerned. Thus, the Lp-norms of f and f* are both infinite when 1 ≤ p < ∞, and the L∞-norms are both equal to 1.
Proposition 1.7
Suppose f, g, and fn , (n = 1, 2, …), belong to M 0(R, μ) and let a be any scalar. The decreasing rearrangement f* is a nonnegative, decreasing, right-continuous function on [0, ∞). Furthermore,
(1.14)
(1.15)
(1.16)
(1.17)
in particular,
(1.18)
(1.19)
(1.20)
Proof. That f* is nonnegative, decreasing, and right-continuous follows from Proposition 1.3 and the fact that f* is itself a distribution function (cf. (1.10)). The properties (1.14), (1.15), and (1.17) are immediate consequences of their counterparts in Proposition 1.3 and the definition of the decreasing rearrangement.
For property (1.18), fix λ ≥ 0 and suppose t = μf(λ) is finite. Then (1.9) gives
which establishes the first part of (1.18). For the second part, fix t ≥ 0 and suppose λ = f*(t) is finite. By (1.9), there is a sequence λn ↓ λ with μ f (λ n ) ≤ t, so the right-continuity of μf (Proposition 1.3) gives
This establishes (1.18).
Returning to (1.16), we may assume that λ = f*(t 1) + g*(t 2) is finite since otherwise there is nothing to prove. Let t = μ f + g (λ). Then by the triangle inequality and the second of the inequalities in (1.18) we have
This shows in particular that t is finite. Hence, using the first of the inequalities in (1.18) and the fact that (f + g)* is decreasing, we obtain
and this establishes (1.16).
For an arbitrary function f in M 0, we can find a sequence of nonnegative simple functions fn , (n = 1, 2, …), such that fn ↑ |f|. It is clear (cf. Example 1.6(a)) that for each n the functions fn and fn * are equimeasurable, that is,
(1.21)
But fn ↑ |f| and fn * ↑ f* (by 1.17) so property (1.5), applied to each of the distribution functions in (1.21), shows that
(1.22)
Hence, f and f* are equimeasurable, as asserted by (1.19).
Finally, from (1.22) we have
Passing to the decreasing rearrangements by means of (1.9), we obtain (1.20).
The next result gives alternative descriptions of the Lp-norm in terms of the distribution function and the decreasing rearrangement.
Proposition 1.8
Let . If 0 < p < ∞, then
(1.23)
Furthermore, in the case p = ∞,
(1.24)
Proof. In view of (1.5), (1.17), and the monotone convergence theorem, it will suffice to prove (1.23) for an arbitrary nonnegative simple function f. With f written in the form (1.6), we saw that its decreasing rearrangement f* is given by (1.11). But then it is clear from (1.8) that
Similarly, using the expressions (1.6) and (1.7) for f and its distribution function μ f , we have
where the third equality follows from (1.8) and a summation by parts.
This establishes (1.23). The proof of (1.24) is straightforward and we omit it.
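The identities in (1.23) are easy to verify numerically for a simple function (an added sketch with made-up values on [0, 1) with Lebesgue measure): ∫|f|^p dμ, p∫₀^∞ λ^{p−1} μ_f(λ) dλ, and ∫₀^∞ f*(t)^p dt all agree.

```python
import numpy as np

# A simple function on [0, 1) with Lebesgue measure: value vals[i] on a cell of measure w.
vals = np.array([1.0, 3.0, 2.0, 2.0, 0.5])
w = 0.2
p = 3.0

mu_f = lambda lam: w * np.sum(vals > lam)                  # distribution function mu_f(lambda)
f_star = np.sort(vals)[::-1]                               # decreasing rearrangement on cells of measure w

lhs = w * np.sum(vals ** p)                                # integral of |f|^p d(mu)
# p * integral_0^infty lambda^(p-1) mu_f(lambda) d(lambda), computed exactly because
# mu_f is a step function, constant between consecutive values of |f|.
levels = np.unique(np.concatenate(([0.0], vals)))
mid = sum(mu_f(levels[j]) * (levels[j + 1] ** p - levels[j] ** p)
          for j in range(len(levels) - 1))
rhs = w * np.sum(f_star ** p)                              # integral of (f*)^p dt

print(float(lhs), float(mid), float(rhs))                  # all three agree up to rounding
```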
URL: https://www.sciencedirect.com/science/article/pii/S0079816908608478
Source: https://www.sciencedirect.com/topics/mathematics/right-continuous-function