Kevin S. Nelson
Executive Director

Housing Authority of the Town of Stratford, Connecticut
Stratford, CT 06615

Testimony of Kevin S. Nelson on Public Housing Assessment System
U.S. Senate Subcommittee on Housing and Transportation

March 21, 2000

Housing authorities are not opposed to a physical inspection. In fact, housing authorities have been conducting annual inspections of their units for years, and HUD requires authorities to have a preventive maintenance plan as well. After completing their annual inspections, PHAs certified to HUD that they had done so, and HUD then conducted a certain number of confirmatory reviews each year to verify that the inspections had in fact been done. These inspections were carried out under the Housing Quality Standards (HQS) protocol, the same inspection protocol currently used in the more than one million tenant-based Section 8 units.

Then HUD introduced a new physical inspection protocol for public housing, the Uniform Physical Condition Standards (UPCS). This standard is very different from HQS and a great deal more stringent. Thus, public housing is now being held to a higher standard than the private-market, tenant-based Section 8 units.

HUD introduced this new standard without even informing housing authorities what it consisted of. The three industry groups, the Public Housing Authorities Directors Association (PHADA), the National Association of Housing and Redevelopment Officials (NAHRO), and the Council of Large Public Housing Authorities (CLPHA), had to file a lawsuit in order to get HUD to publish what the new standard was.

Along with the new inspection standard came the scoring system, which is unlike any other. It is not one in which there is a fixed number of possible points, deficiencies subtract from that total, and the final score reflects the percentage of possible points one has gotten right.

Instead, it is an artificial and arbitrary construction of HUD's which has very questionable legitimacy. As mentioned, the score does not reflect the percentage of the property which is in good condition. When a housing authority gets an 85, it does not mean that 85 percent of the property is in good shape and 15 percent needs improvement. In fact, HUD has written that if PHAs were scored on a percentage basis, virtually all PHAs would score in the 90s. Thus, according to HUD's own inspections, for virtually all housing authorities, more than 90 percent of their property is in fine condition. This fact alone casts considerable doubt on a physical inspection scoring system in which 40 percent of the nation's public housing units failed their first advisory round.

If HUD's score is not based on a percentage, what exactly is it based on? The answer to this question is rather complicated. Put it this way: when HUD inspects an apartment, there are more than 2,000 points that can be deducted for problems. Yet, as mentioned, HUD does not deduct points for deficiencies from the 2,000 possible points. For some reason, which it has never explained, it deducts points from the number 100, and a score below 60 is the equivalent of failing. Therefore, if an authority had just 41 of those 2,000 points deducted, about 2 percent of them, that is the equivalent of failing. If a PHA lost 2 percent of the possible points in each apartment inspected, it would have the equivalent of a failing score in the Dwelling Units area.
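The arithmetic described above can be sketched in a few lines of Python. This is only an illustration of the scoring mechanics as the testimony describes them; the 2,000-point figure is the approximate total cited in the text:

```python
# Sketch of the PHAS dwelling-unit scoring arithmetic described above.
# The 2,000 deductible points per apartment is the approximate figure
# cited in the testimony; 100 and 60 are HUD's chosen constants.
POSSIBLE_POINTS = 2000   # deductible points available in one apartment
START_SCORE = 100        # HUD deducts from 100, not from 2,000
FAIL_BELOW = 60          # a score below 60 is the equivalent of failing

deducted = 41            # points lost during the inspection
score = START_SCORE - deducted
share_of_possible = deducted / POSSIBLE_POINTS

print(score)                  # 59 -- just below the failing threshold
print(score < FAIL_BELOW)     # True
print(share_of_possible)      # 0.0205 -- only about 2% of possible points
```

The sketch makes the central complaint concrete: losing roughly 2 percent of the deductible points produces the equivalent of a failing score.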

This decision on HUD's part to deduct points from 100 is what makes the scoring system artificial and arbitrary. HUD could just as easily have chosen any other number from which to deduct points - for instance 200 points, or 500 points, or 1,000 or even 50. Each number would have made just as much sense as 100. Similarly, how did HUD choose the number 60 as the threshold below which authorities would fail? We have seen it has no relationship to a percentage.

HUD could have chosen any number. Although it is impossible to be certain, it appears as if HUD chose the numbers 100 and 60 to create the illusion that the scores were a percentage, in order to lend some legitimacy to the score; but the numbers are a purely artificial and arbitrary selection on HUD's part.

This selection of these numbers, which HUD has never justified, has the most profound importance for PHAs. As a result of this decision, a tiny percentage of the possible deductible points can cause a PHA to fail. Housing authorities do not believe that failing on the basis of such a small percentage is fair, or that such a score reflects the performance of PHA management. By most normal standards, having 98 percent of one's property in good condition would be a sign of strong and effective management.

What makes HUD's system lose even more legitimacy is that several of the grounds on which HUD deducts points are themselves very flawed. Thus, in many cases even the points HUD has deducted are not legitimate. There are three main reasons these point deductions are not legitimate.

The first is that many of the items are minor or insignificant. For instance, having a shrub touch a fence causes a significant loss of points. A dripping faucet or ice buildup in a resident's refrigerator causes a loss of points.

As explained earlier, until the institution of this new system, PHAs had to meet Housing Quality Standards (HQS) in order to meet the congressional mandate of "decent, safe and sanitary". HQS is still the standard for the more than 1,000,000 units in the Section 8 tenant-based program. Yet there are more than 35 deficiencies in the dwelling unit area alone which would pass HQS but which cause point deductions under the new physical inspection standard. Why should PHAs run the risk of being considered troubled for point deductions which would have been acceptable three months ago and are still acceptable for the more than 1,000,000 tenant-based Section 8 units? PHAs do not think they should be considered troubled for deficiencies that exceed the "decent, safe and sanitary" standard, and they therefore question the legitimacy of this new standard.

A second reason the scores may not be legitimate is that in certain cases the number of points deducted is far out of proportion to the seriousness of a defect. The method HUD uses to calculate the number of points deducted for each deficiency is very complicated.

HUD assigns each deficiency a weight and a criticality level, which determine how many points are taken off. The weights do not represent each deficiency's portion of the whole, as might be assumed, but instead add up to 6,500. HUD has never opened for discussion with housing authorities how these weights and criticality levels were set, even though they are so vital. The reason there are so many points overall, and that each deficiency can be worth so many points, is that for each deduction HUD multiplies the weight (the weights total 6,500 across all deficiencies) times the criticality level (ranging from 1 to 5) times the severity (ranging from .25 to 1).

Many of these weights and criticality levels are so high that they score a minor problem as if it were extremely critical. Thus, a cracked toilet seat, which costs next to nothing to buy, takes only a few minutes to replace, does not greatly inconvenience the resident, and passes Housing Quality Standards in the Section 8 program, causes a unit to lose 37.5 points. Remember, losing 41 points is the equivalent of failing. Thus, this one minor item, which passes HQS, costs a unit almost all the points needed to reach the equivalent of a failing score. Since there are many cases in which points can be deducted out of proportion to the seriousness of the deficiency, PHAs do not regard these deductions as valid and do not feel they represent an accurate portrayal of PHA management.
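The deduction formula described above can be sketched as follows. HUD's actual weight and criticality assignments are not given in this testimony, so the specific factor values below are hypothetical, chosen only so that their product matches the 37.5-point toilet-seat deduction cited here:

```python
def points_deducted(weight: float, criticality: int, severity: float) -> float:
    """Points lost for one deficiency, per the formula described above:
    weight x criticality (1-5) x severity (0.25-1.0).
    The weights total 6,500 across all deficiencies."""
    assert 1 <= criticality <= 5, "criticality ranges from 1 to 5"
    assert 0.25 <= severity <= 1.0, "severity ranges from .25 to 1"
    return weight * criticality * severity

# Hypothetical factors for a cracked toilet seat (illustrative only --
# chosen so the product equals the 37.5-point loss cited in the testimony).
loss = points_deducted(weight=25, criticality=3, severity=0.5)
print(loss)   # 37.5 -- nearly the 41 points that equal a failing score
```

Whatever the actual factor values, the multiplicative structure means that a high weight or criticality assignment can turn a single minor deficiency into most of a failing score.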

A third reason the points deducted may not be legitimate is that HUD makes no distinction between deficiencies which a PHA knew about but failed to repair and deficiencies which were caused by the tenant and which the PHA had no reasonable way of knowing even existed.

PHAs conduct an annual apartment inspection and repair problems found during this inspection. PHAs also repair problems reported by residents during the course of the year. However, if a tenant has caused damage in his or her unit after the annual inspection and not reported it to the PHA, the authority has no opportunity to repair it. A resident may have kicked a hole in a wall, cracked a light switch cover, knocked a door off its hinges, broken a lock, damaged a kitchen cabinet, marred the floor surface or caused any one of many other possibilities, especially likely in family housing with young children. If the resident has not reported it to the PHA, possibly for fear that the PHA will charge him or her for the cost of the damage, HUD's inspector will deduct points even though the damage does not reflect on the quality of the housing authority's management. Since it takes so few points to fail, PHAs can be declared troubled through no fault of their own. This is the third reason PHAs do not regard all the points deducted as legitimate.

In summary, HUD's physical inspection scoring system does not have the legitimacy it needs to be used as a basis for initiating actions up to and including receivership against housing authorities. The basis on which HUD decides whether an authority passes or fails is arbitrary, with no grounding in the percentage of the property which passes or fails. The number of points HUD has chosen as the amount needed to fail is very small. Finally, the method by which HUD deducts points means that many points are deducted for reasons for which there is no justification. As a result, the scores are not a fair and accurate portrayal of PHA management, and they should not be used to determine whether a PHA is troubled. Since they do not present a fair and accurate portrait of PHA management, the system needs to be revised.

There are some significant problems with the financial indicator of the public housing assessment as well. The main problem is that housing authorities are now being evaluated on their entity-wide program, rather than their public housing program.

In Connecticut, for instance, housing authorities manage public housing administered by the State. The units were built with State of Connecticut funds, their regulations are written by the State, and they are reviewed and monitored by the State. Now, however, HUD will be evaluating all of these units in the PHAS financial indicator. Thus, the reserves from these programs will be mixed in with the reserves from the federal program, and vacancy rates and tenant accounts receivable in these state programs will be mixed up with vacancy rates and tenant accounts receivable in the federal program. In essence, HUD now has a program to evaluate these State units. Why should HUD, in Washington D.C., evaluate state, local and other non-federal housing programs?

This course of action simply does not make sense. The state programs have different regulations, different funding mechanisms, and different requirements and serve different populations in some cases. Disregarding the question of whether HUD has any right to evaluate a state housing program, including its data with the federal program can only distort the analysis of the federal program. HUD knows nothing about these programs, and cannot possibly tell whether an authority is running them well or not.

Another problem in the financial indicator is that HUD divides all the PHAs into different groups based on their size, and then distributes points to them based on their position on the curve of their group. As a result, PHAs with non-federal units are being rated against PHAs without non-federal units. Pitting these authorities against one another is like comparing apples to oranges and is not fair to either one.

Similarly, since the Section 8 units are being included in determining a PHA's size for placement in one of the six groups, authorities with Section 8 programs will be compared against authorities without Section 8 programs. Again, these authorities are like apples and oranges. Since the Section 8 program has been allowed to have very different reserve levels than the conventional public housing program, HUD will now be comparing and scoring housing authorities with these different reserve levels, but without the differences necessarily being any reflection on the quality of the PHA's management. PHAs should not run the risk of being classified as troubled based on the results of such a hybrid system.

Furthermore, HUD receives the annual audit from every housing authority, which lists the financial results of every one of the PHA's programs, federal and non-federal. Therefore, HUD has all the financial data it needs, if it believes it must investigate whether any of a PHA's non-federal programs are jeopardizing the low-income program.

In conclusion, housing authorities have long been inspected and believe in the importance of a physical inspection system. However, HUD has initiated a new, far more stringent inspection standard, which exceeds the "decent, safe and sanitary" one achieved by HQS. It has accompanied this new standard with an arbitrary scoring system which contains many flaws in the manner it deducts points. For these reasons, PHAs question the legitimacy of the physical inspection component of the Public Housing Assessment System (PHAS). The financial indicator is flawed as well, with non-federal and Section 8 units thrown together with the conventional low-income program in a manner which undermines the purpose and fairness of evaluating the public housing program.