Reliability and availability of the network

TV

Although the cable television industry has historically had no formal availability or outage targets, CableLabs conducted a study of network reliability and subscriber tolerance for outages under the auspices of its Outage Reduction Working Group. Its report, published in 1992,5 is discussed in detail in Chapter 9. In summary, the report set a target that no customer should experience more than 2 outages in any 3-month period, with a secondary standard of no more than 0.6 outages per month per customer.

As with the telephony target, it is important to review what is included in the definition of an outage:

▪ Only failures affecting multiple customers are included, which excludes all drop, NID, set-top terminal, and internal wiring problems.

▪ Failures affecting only a single channel are included, which means the headend processing equipment is covered.

▪ Only signal interruptions are included, not reception degradation.

▪ All elements of the signal distribution path between the headend and the tap are covered, including the impact of commercial power availability.

▪ Although the wording refers only to outages that are “experienced by customers”, the software distributed with the report counts all outages, so it can be assumed that customer awareness of an outage is not essential to the definition, even though the objective is framed in terms of customer experience.

▪ The phrase “no customer should experience” in the target clearly indicates that a worst-case analysis is required, not an average.

Based on an estimated average service restoration time of 4 hours, CableLabs translated 0.6 outages per month into a minimum acceptable availability of 99.7%. Clearly, this standard (which dates back to a time when cable television carried only analog video) is outdated in a world where cable television is the primary provider of both basic telephone and high-speed data services.
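
The arithmetic behind that figure is straightforward. The short sketch below reproduces it; the outage rate and restoration time are taken from the text, while the variable names and the use of a 730-hour average month are assumptions made for illustration.

```python
# Illustrative calculation only: converting the CableLabs outage-rate target
# into an availability figure. Constants other than the 0.6 outages/month and
# 4-hour restoration time are assumptions.

HOURS_PER_MONTH = 730.0        # average month: 8,760 h per year / 12
outages_per_month = 0.6        # secondary target from the report
mean_restoration_hours = 4.0   # estimated average service restoration time

downtime_hours = outages_per_month * mean_restoration_hours   # 2.4 h/month
availability = 1 - downtime_hours / HOURS_PER_MONTH           # ~0.997

print(f"Expected downtime: {downtime_hours:.1f} h/month")
print(f"Implied availability: {availability:.3%}")            # ~99.7%
```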

The equivalence between individual outages perceived by customers and distribution system outages is tenuous at best. On the one hand, some outages may be detected and repaired without the affected subscribers ever being aware of them; on the other hand, individual subscribers may have problems at home that would not be captured by the definition above. Nevertheless, CableLabs chose 0.6 multiple-customer outages per month as the benchmark for modeling distribution system outages.6

In the field measurement portion of the study, the working group collected actual field failure analysis reports from some of its members and used them both to estimate failure rates for certain classes of components (discussed later) and to develop models for predicting the failure rates of arbitrary network architectures. The accuracy of the raw data used in this kind of analysis is limited by the analytical skill of the field technicians and by the completeness of their reports. For example, it is not obvious that a technician would think to report a brief outage caused by a routine maintenance procedure, or a short power failure that never generated a formal trouble report in the dispatch system. Second, the data collected, necessarily representative of cable systems of the time, did not include modern HFC architectures. Finally, cable systems (at least in 1992) routinely reported only distribution system failures, not individual customer failures.
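
To illustrate the kind of modeling such field data can support, the sketch below combines per-component failure rates into a prediction for a cascade of elements, treating the path as a simple series system in which element failure rates add. The component names and numbers are placeholders chosen for illustration, not values from the CableLabs study, and the working group's actual models may be structured differently.

```python
# Minimal sketch of a series-reliability model of the sort that per-component
# failure data can feed. All names and numeric values below are placeholders,
# NOT figures from the CableLabs report.

HOURS_PER_YEAR = 8760.0

# Each cascaded element: (failures per year, mean time to restore in hours)
components = {
    "headend_processor":       (0.2, 1.0),
    "trunk_amplifier_cascade": (1.5, 3.0),
    "line_extender":           (0.4, 3.0),
    "commercial_power":        (2.0, 2.0),
}

# In a series model, failure rates of cascaded elements simply add,
# and expected downtime is the rate-weighted sum of restoration times.
total_failures_per_year = sum(rate for rate, _ in components.values())
downtime_hours = sum(rate * mttr for rate, mttr in components.values())
availability = 1 - downtime_hours / HOURS_PER_YEAR

print(f"Predicted outages per year: {total_failures_per_year:.1f}")
print(f"Predicted availability:     {availability:.3%}")
```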