Article

Boltzmann-like Entropy in Reliability Theory

IBM, via Shangai 53, 00144 Roma, Italy
Submission received: 19 September 2001 / Accepted: 8 May 2002 / Published: 9 May 2002

Abstract: We introduce the entropy function in order to study the reliability and repairability of systems. In detail, we establish the explicit relationships between reliability and repairability; then we calculate the decay of a system due to its regular running.

1. Foreword

Experience proves the interdependence of systems’ reliability and repairability: they are steadily coupled during the whole system life. In detail, a young system, free from errors, is reliable and repairable; thereafter, the more a system ages, the more unreliable it becomes and the more difficult repairs are.
1) These parallel phenomena, well known in practice [1], are not theoretically justified. Current literature does not calculate how they are joined, nor how they evolve together as time passes.
Reliability is the probability that a system performs a defined job, and a fault ends this performance. Authors ground Reliability Theory on the stochastic model, which neglects the inner structure of a general system. Chains depict a physical system that works steadily and then stops; they do not detail how its internal parts evolve and break down. In substance, the stochastic model casts no light on the internal causes that generate a failure and enable a system to restart. Markov chains prevent us from tackling the aforementioned problem, and consequently:
2) Failure analysis, repairability and maintenance research gives sharp and practical results that, however, have only a specific meaning; they are not included within a general framework. Today’s theories scarcely formalize the relations between repairability and maintenance tenets, and theoretical investigations on reliability and maintenance are kept separate, so specialists duplicate their efforts.
Some years ago we set out to solve these problems, even though the scientific community devotes few resources to such wide-ranging questions, today [2] and in the future [3]. We did not find significant help in the bibliography; instead, we took inspiration from thermodynamics. The Boltzmann entropy calculates the evolution of a thermodynamical system [4], and we supposed that this kind of function could symmetrically detail the interior of a general system and complement the stochastic model. In particular, we conjectured that the entropy could explain the inner evolution of a general system when it deteriorates and/or when it improves after a repair.
In this article we introduce the Boltzmann-like entropy and discuss it with respect to problems 1) and 2).

2. Entropy Function

Let the system S assume n states
S = (A1 OR A2 OR .... OR An)    (1)
The generic state Ai is complex and consists of r cooperating substates
Ai = (Ai1 AND Ai2 AND .... AND Air)    (2)
By definition each stochastic state/substate has a probability; notably from (2), assuming the substates are mutually independent, we have
Pi = (Pi1 · Pi2 · .... · Pir)    (3)
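As a plain numerical illustration (our own sketch, not part of the original paper; the probabilities are hypothetical), a few lines of Python show how (3) composes the probability of a compound state:

```python
from math import prod

# Hypothetical probabilities of r = 3 cooperating substates of a state Ai;
# by (2) every substate must work together, so by (3) the probabilities multiply.
substate_probs = [0.99, 0.95, 0.90]

Pi = prod(substate_probs)   # Pi = Pi1 * Pi2 * Pi3
print(f"P(Ai) = {Pi:.4f}")  # 0.8465: lower than any single substate probability
```

The compound probability is always smaller than its weakest factor, which anticipates why complex states are harder to hold.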
A system that assumes the state Ai either stays steadily in it or leaves Ai easily. In the former situation S rarely evolves from Ai; this state is rather stable and we say that the system S assumes an irreversible state. When S easily abandons Ai, this state is somewhat unstable and we say that the state Ai is reversible. E.g., a man/woman who goes into an irreversible coma does not leave this state and no longer recovers. E.g., when the machine S is immediately repaired, the failure was light and the failure state was highly reversible.
We introduce the entropy function
H = H(Ai)    (4)
that quantifies this aptitude to evolve, according to the following criterion:
2.1) – The more irreversible Ai is, the higher H(Ai) is. Vice versa, the more reversible the state, the lower the entropy.
The ensuing conditions specify point 2.1).
2.1A) – Each state in a stochastic system depends on the probability by definition, thus the entropy depends on the probability
H = H(Ai) = H[A(Pi)] = H(Pi)    (5)
2.1B) – Once S assumes an irreversible state, this state is rather stable; conversely, a reversible state appears unstable and somewhat infrequent. We reasonably conclude that
H = H(Pi) is a monotonic increasing function of Pi    (6)
2.1C) – The reversibility/irreversibility of each substate influences the aptitude of Ai. E.g., if the part q of the equipment Q breaks down and assumes an irreversible state, then q affects the whole machine; in practice, q must be substituted since its entropy spoils the overall operational state of Q. As a further example, let all the r parts of (2) work steadily: the system runs properly since its reliability depends on each part, namely the entropy of each operational substate influences the global operational state. We conclude that the global entropy is the summation of the entropies of the substates
H = H(Ai) = H(Ai1) + H(Ai2) + .... + H(Air)    (7)
This is the most meaningful assumption, as it relates the reversibility/irreversibility to the system’s complexity and disorder. Summation (7) accounts for the internal structure, which we made explicit in (1) and (2).
Theorem 2.1: If (5), (6) and (7) are true, we get
H(Pi) = a log_b (Pi) + c    (8)
where a, b, c are nonnegative constants.
Proof: In order to simplify the formal inferences, let Ai be equipped with two substates. From (7) we get
H = H(Ai) = H(Ai1) + H(Ai2)    (9)
From (3) we obtain
Pi = (Pi1 · Pi2)    (10)
Now we write (9) as a function of the probabilities
H(Pi1 Pi2) = H(Pi1) + H(Pi2)    (11)
Differentiating (11) with respect to Pi2 we get
Pi1 · H′(Pi1 Pi2) = H′(Pi2)    (12)
and differentiating (12) with respect to Pi1 we obtain
H′(Pi1 Pi2) + Pi1 Pi2 H″(Pi1 Pi2) = 0    (13)
Using (10), equation (13) leads to
H′(Pi) = – Pi H″(Pi)    (14)
and so
H″(Pi) / H′(Pi) = – 1 / Pi    (15)
This expression yields
ln [H′(Pi)] = – ln (Pi) + const    (16)
which we write as
H′(Pi) = a* / Pi    (17)
where a* is a nonnegative constant, as yet undetermined. Integrating (17) we get
H(Pi) = a* ln (Pi) + c    (18)
Since
ln (x) = ln (b) · log_b (x)    (19)
we can write (18) as
H(Pi) = a log_b (Pi) + c    (20)
The entropy varies in the open range (–∞, 0) because S is strictly stochastic and the extreme values of the probability are excluded.
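As a quick numerical check (our own sketch, not part of the paper), the following Python lines verify that (20) satisfies the monotonicity (6) and the additivity (7); note that strict additivity requires c = 0, the value adopted later in (32):

```python
import math

# Constants of (20); a = 1, b = e, c = 0 are the values chosen later in (32).
a, b = 1.0, math.e

def H(p: float) -> float:
    """Entropy (20) of a state with probability p (with c = 0)."""
    return a * math.log(p, b)

p1, p2 = 0.7, 0.4
# Additivity (7): the entropy of the compound state is the sum over substates
assert math.isclose(H(p1 * p2), H(p1) + H(p2))
# Monotonicity (6): H grows with the probability
assert H(0.2) < H(0.8)
# For p in (0, 1) the entropy stays within the open range (-inf, 0)
print(H(0.001), H(0.999))  # about -6.9078 and -0.0010
```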

3. Reliability and Repairability Are Coupled

We suppose that S either runs or is repaired after a failure. We exclude the case of a system that is capable of running yet stays idle, and also the case of a system that is broken down and not maintained. Hence S assumes either the operational state Ah or the failure state Aj, during which the fault of S is remedied
S = (Ah OR Aj)    (21)
The states are mutually exclusive and their probabilities verify
P = Ph + Pj = 1    (22)
This means the states Ah and Aj are joined
Ph = 1 – Pj    (23)
Pj = 1 – Ph    (23bis)
Using (20) we obtain that the entropies of the operational state and the failure state are coupled
H(Ph) increases ⇔ H(Pj) decreases    (24)
H(Ph) decreases ⇔ H(Pj) increases    (24bis)
In particular we derive two pairs of connected results
lim Ph→1 H(Ph) = 0,   lim Pj→0 H(Pj) = – ∞    (25)
lim Ph→0 H(Ph) = – ∞,   lim Pj→1 H(Pj) = 0    (26)
The former result holds that if the operational state is highly irreversible, then the failure state is very reversible (and vice versa). In other words, S is stable in Ah; namely, it is reliable and at the same time repairable. Expression (26) affirms that the operational state is unstable while Aj is irreversible: the system S is unreliable and irreparable at the same time.
Results (23) and (23bis) explicate how the probability of working and the probability of failing are joined; (24) and (24bis) detail how the entropies are coupled; lastly, (25) and (26) calculate the limit values. In this way we answer question 1) of the Foreword.
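To make the coupling concrete, here is a small Python sweep (our own illustration; the probability values are arbitrary) that tracks both entropies as the operational state becomes more irreversible:

```python
import math

H = math.log  # entropy (20) with a = 1, b = e, c = 0

# Sweep the probability of the operational state Ah toward 1;
# by (23bis) the failure state Aj has probability Pj = 1 - Ph.
for Ph in (0.5, 0.9, 0.99, 0.999):
    Pj = 1.0 - Ph
    print(f"Ph={Ph:.3f}  H(Ph)={H(Ph):8.4f}  H(Pj)={H(Pj):8.4f}")

# As Ph -> 1, H(Ph) -> 0 (irreversible operational state) while
# H(Pj) -> -inf (very reversible failure state), exactly as (25) states.
```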

4. Reliability Function

Both (24) and (24bis) are valid on the theoretical plane. We wonder which of them is usually verified in practice; that is, we search for the direction of the reversibility/irreversibility.
We assume the system S executes the plainest job: it works regularly and continuously over time. We exclude any acceleration, any stop and restart that could stress the system; we also shield S from external attacks and disturbances. In these conditions, a system free from errors reaches a high value of Ph at its birth by definition. The probability can only decrease as time passes, and consequently the reliability entropy H(Ph) diminishes. The trend (24bis) is true in the physical world whereas (24) is false. However, (24bis) gives no explicit relationship between the entropy H(Ph) and time, and we now look for this relation.
If H(Ph) is “high”, S goes on working. Conversely, if H(Ph) is “low”, the system abandons Ah and switches to Aj, since it fails. The reliability entropy quantifies the system’s aptitude to run. We reasonably assume that H(Ph) decreases linearly when S works regularly. In particular, we define the simplest evolution in these terms:
4.1A) The reliability entropy decreases at a constant rate with respect to time
H(Ph) = – d · Δt    (27)
where d is a positive constant. Intuitively, d expresses the speed of the entropy’s descent over time. The turbine in a power plant that runs night and day provides a perfect example: the machine wears continuously, hour by hour, and (27) is true. On the contrary, a turbine destroyed by a single psychopath does not follow (27).
Biological systems do not follow (27) in the first period of their life and offer a second counter-example. A cell, a body, a tree, etc. grow from birth to youth, and the number s of the operational substates
Ah = (Ah1 AND Ah2 AND .... AND Ahs)    (28)
scales up
s increases    (29)
During the system’s childhood the reliability entropy rises due to the gain of s
H(Ph) = H(Ph1) + H(Ph2) + .... + H(Phs)    (30)
Trend (24bis) is false and (24) is true. Experience confirms how a body becomes more robust from birth to youth. The growth of a biological system comes to an end at maturity, when
s ≅ constant    (31)
From this period onward the entropy of the biological system follows (27).
What does formula (27) mean in substance?
Regular running produces several degenerative dynamics such as attrition, oxidation, grinding, etc. All of them are summarized by the constant reduction of the entropy H(Ph), which explains the gradual decay of systems working continuously. Expression (27) claims that the longer a system works, the more S degenerates and the lower H(Ph) is. The system degrades due to its mere job, and we highlight this explanatory quality of (27). Regular running is the first and most general reason for reliability shortage: long investigations of incidents, with accurate and complicated accounts of events, often bring to light outcomes due to simple working. This is the origin of an unlimited list of faults.
Now we calculate Ph in the simplest and most common conditions, held by (27). We choose these values for (20)
a = 1,   b = e,   c = 0    (32)
Thus (27) becomes
ln (Ph) = – d (t – t0)    (33)
We assume S starts at time
t0 = 0    (34)
then
ln (Ph) = – d t    (35)
and
e^ln(Ph) = e^(–dt)    (36)
From this we obtain the reliability function
Ph = Ph(t) = ν e^(–λt)    (37)
where ν and λ are positive constants depending on the specific system. This result confirms the correctness and consistency of the Boltzmann-like entropy introduced into Reliability Theory.
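The derivation can be replayed numerically; the following sketch (ours, with a hypothetical decay constant d) inverts the linear entropy descent (27) into the exponential reliability (37) with ν = 1 and λ = d:

```python
import math

d = 0.02  # hypothetical entropy decay constant of (27), per hour

def reliability(t: float) -> float:
    """Ph(t) obtained by inverting H(Ph) = ln(Ph) = -d*t, i.e. (37) with nu = 1, lambda = d."""
    return math.exp(-d * t)

for t in (0, 10, 50, 100):
    print(f"t={t:>3} h  H(Ph)={-d * t:7.2f}  Ph={reliability(t):.4f}")
# The entropy falls linearly while the reliability falls exponentially.
```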
Today authors calculate the failure rate
λ = λ(t)    (38)
as an empirical function, and from the facts they derive the meaning of its constant trend
λ ≅ constant    (39)
On the contrary, we derive result (37) from (27). This explicit hypothesis clarifies the reliability function theoretically and details the internal reasons for the most common failures, as point 2) demands.
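As a cross-check (again our own sketch; λ and ν are hypothetical), the exponential reliability (37) indeed yields a constant failure rate λ(t) = –Ph′(t)/Ph(t), in agreement with (39):

```python
import math

lam, nu = 0.02, 1.0                  # hypothetical constants of (37)

def Ph(t: float) -> float:
    return nu * math.exp(-lam * t)   # reliability function (37)

def failure_rate(t: float, eps: float = 1e-6) -> float:
    """lambda(t) = -Ph'(t)/Ph(t), estimated by a central finite difference."""
    dPh = (Ph(t + eps) - Ph(t - eps)) / (2 * eps)
    return -dPh / Ph(t)

print([round(failure_rate(t), 6) for t in (1.0, 10.0, 100.0)])  # all ~0.02, i.e. (39)
```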

5. Conclusive Remarks

Entropy (20) is symmetrical to Boltzmann’s entropy
H(Pi*) = k ln (Pi*)    (40)
In point of mathematics, we note that both of them are inferred from conditions 2.1A), 2.1B) and 2.1C). The Boltzmann entropy gives the reversibility/irreversibility of a thermodynamical system, and this is the same use we make of (20) for a general system. In physical terms, the reversibility/irreversibility expresses the practical aptitude of a thermodynamical system to work: in detail, the growth of H(Pi*) defines the inaptitude to transform energy and the practical decay of the system. Symmetrically, H(Ph) details the decline of a general system. The former describes the ageing of a thermodynamical system due to the increasing disorder of its molecules; the latter describes the ageing of a general system deriving from friction, oxidation, etc.
Besides these significant affinities, we remark major differences. The Boltzmann entropy calculates the properties of a thermodynamical system and has the physical dimension provided by Boltzmann’s constant
k = 1.38 × 10^–16 erg/K    (41)
whereas H(Pi) is a pure number.
In thermodynamics, authors calculate the number Pi* of molecular complexions [5] instead of the classical probability Pi. In particular, Pi* varies between 1 and +∞ and the domain of H(Pi*) is (0, +∞), whereas (20) varies within (–∞, 0).
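The contrast between the two domains can be spelled out in a couple of lines (our illustration; the numeric inputs are arbitrary):

```python
import math

k = 1.38e-16  # Boltzmann's constant, erg/K

def H_boltzmann(complexions: float) -> float:
    """Boltzmann entropy k*ln(Pi*): argument in [1, +inf), value in [0, +inf), erg/K."""
    return k * math.log(complexions)

def H_reliability(p: float) -> float:
    """Entropy (20) with a = 1, c = 0: argument in (0, 1), value in (-inf, 0), pure number."""
    return math.log(p)

print(H_boltzmann(1e20))   # non-negative, dimensional
print(H_reliability(0.5))  # negative, dimensionless
```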
The reliability function (37), derived from the entropy, is fully consistent with the solutions already calculated in Reliability Theory.
The concept of reversibility/irreversibility leads to (25) and (26). These make evident how reliability and repairability are coupled during the system’s progressive ageing, whereas current Reliability Theory considers the system’s ageing only in terms of reliability [6]. Experience widely confirms our results: expression (25) is typical of the “young” system free from construction faults, while the “old” system verifies (26) and at last comes to an end. These relations hold for both artificial and biological systems, and they are so universally verified that people are used to considering the contents of (25) and (26) as a “fatal” law. The entropy function offers a new, internal view of dynamic systems, and we believe this is an original contribution.
Assumption (27) specifies that the decay of S depends linearly on the system’s running: a system that executes regular and continuous work decays by degrees. Present Reliability Theory brings this pattern to light empirically but not on the theoretical plane. This contribution, by which we can also discuss different behaviors such as (31), seems significant to us.
An ample theory of systems and information [7] includes the results proposed in this paper; this is the last feature we aim at highlighting.

References

1. Moubray, J. Reliability Centered Maintenance; TWI Press: Terre Haute, 1987.
2. Dyer, D. Unification of Reliability/Availability/Repairability Models for Markov Systems. IEEE Transactions on Reliability 1989, 38(2).
3. Ushakov, I. Reliability: Past, Present and Future. In Proc. MMR; Birkhäuser: Boston, 2000.
4. Giles, R. Mathematical Foundations of Thermodynamics; Pergamon Press: Oxford, 1964.
5. Wilks, J. The Third Law of Thermodynamics; Oxford University Press: Oxford, 1961.
6. Barlow, R.E.; Proschan, F. Statistical Theory of Reliability and Life Testing; Holt Rinehart & Winston: New York, 1975.
7. Rocchi, P. Technology + Culture = Software; IOS Press: Amsterdam, 2000.
