Frequently Asked Questions About HAPLR 1.0

HAPLR 1.0 ratings were discontinued after the 2010 edition. Previous editions of Version 1.0 are available at the Wayback Machine; simply type haplr-index.com into the search box.

 

Why did you stop publishing HAPLR?

  • There were many reasons. Many of the scores for libraries were very similar from year to year, so some libraries lost interest. I could never fully include electronic use data such as public computer use sessions, because the data reported by libraries appeared to be consistently skewed by misunderstandings about what was being requested by state library agencies. IMLS, the federal agency, tried hard to standardize things, but I was never convinced. Furthermore, I ended up with the same dilemma faced by Coca-Cola when they tried to introduce “New Coke.” Testing showed that when I tried to change what was included in the ratings and how the items were scored, the first question became: “Yes, but what was the score under the OLD HAPLR?” IMLS also changed the reporting year cycle for 2010. Doing a revised edition for the same year just became too complicated.

What are the differences between the editions?

  1. 2010  The 11th edition was published in April 2010.  It was based on data published by IMLS in July of 2009.  The data included were reported by libraries in 2008.  It is important that 2008 is the year of reporting, not the year of activity; a library reports in 2008 on 2007 activities, of course.  To further complicate things, however, the various libraries and states have differing fiscal or reporting years.  Because of a change in reporting cycle for IMLS, there could have been a second edition of HAPLR for 2010.
  2. 2009  The 10th edition was published in June 2009.  It is based on data published by IMLS in December of 2008.  The data included were reported by libraries in 2007.  It is important that 2007 is the year of reporting, not the year of activity; a library reports in 2007 on 2006 activities, of course.  To further complicate things, however, the various libraries and states have differing fiscal or reporting years.
  3. 2008 The ninth edition, published October 2, 2008, was delayed a full year by the late release of the 2005 federal data. The cause of the delay was the transfer of responsibility for publishing the data from FSCS to IMLS. It was published on this web site and in American Libraries in October of 2008, but I made a major error: I published the same data as the seventh edition by linking to the wrong dataset on my hard drive. The error was corrected online by October 10th and in American Libraries in the November edition.
  4. 2007  No edition because of delay in publication of data (see explanation above).
  5. 2006 The eighth edition of HAPLR Ratings was based on 2004 data from FSCS as published on the World Wide Web in July of 2006.  The federal agency, FSCS, compiles the annual reports as reported by state library agencies for nearly 9,000 libraries into a single dataset.
  6. 2002-2005 Fourth to seventh editions published in fall of each year.  The data used were submitted two years prior to publication year.
  7. 2001 A Fall 2001 edition of HAPLR had to be postponed and then abandoned because of delays in FSCS publication of the data. The results for 1999 data should have been available in Spring 2001, allowing publication of HAPLR scores in Fall 2001. But those results were delayed for almost a year and not published until May of 2002. The 2000 data were published just 8 weeks later in July 2002. FSCS indicated that it was their intent to publish the data in a more timely fashion from then on.
  8. 2000 The third edition was in the November 2000 issue of American Libraries. The third edition did not include imputed data for the 1998 data used because, as of October of 2000, the FSCS had not yet supplied the data. Consequently, 1,648 libraries that did not supply the needed data, usually reference queries answered or annual visits, were not included in the third edition.
  9. 1999 The FSCS data used for the first edition did not have the needed data for 2,000 libraries, and it was divided into population groupings that did not match FSCS groupings. The second edition rectified both shortcomings of the first.
  10. The first edition was in the January 1999 issue of American Libraries; the second edition was published in September. Both the first and second editions were based on data from the Federal-State Cooperative Service (FSCS). The first edition was based on what FSCS calls Preliminary data, the second on what they call their Early Release data. The two distinctions between the first and second editions were: 1) the number of libraries included, 7,000 in the first edition and nearly 9,000 in the second; 2) the population categories used, 4 in the first edition and 10 in the second. The second edition included 2,000 additional libraries because, after the first edition went to press, the FSCS in the Early Release edition began imputing data for libraries that had not reported key data elements. Imputing is a bit like estimating, with a good deal more statistical validity. The imputation added 2,000 libraries to the field for consideration. The second edition also used the same 10 population categories used by the FSCS rather than the four arbitrary categories originally devised by the author. The first edition broke population categories at 2,000, 10,000, and 100,000. The second edition has breaks at 1,000; 2,500; 5,000; 10,000; 25,000; 50,000; 100,000; 250,000; and 500,000.

What led you to do the HAPLR ratings?

  • Practically every time you pick up a magazine or newspaper there is another rating system for universities, places to work, hospitals, mutual funds, you name it.  But there was none for libraries.  Worse than that, the Money magazine listing of best places to live covered libraries by measuring only books per capita.  I was certain that a more comprehensive tool was needed.

Why didn’t you consider electronic measures?

  • For a long time I had wanted to do so, and for the 2005 edition I finally made a limited attempt. The federal data on which I base the ratings did not include such measures until recently. Even then, I did not incorporate these measures into the HAPLR scores because I believe them to be too unreliable. The 2006 federal data published by IMLS included a new data element for electronic use called Public Internet Users. The LJ Index chose to use this new data even though it was the first time it had been included in its current form and even though the data appeared quite skewed when measured on a per capita basis. Most measures that HAPLR and the LJ Index use have a high-to-low range of about 8 or 10 to 1. The range for electronic use is much, much higher.

Why don’t you consider square feet for the building?

  • Until recently, the data on which I base the ratings did not include such measures. Even though we now have the data for square footage, I could not develop any good method for incorporating building size into the ratings.

Isn’t it really quality of service that counts; why rate quantity only?      

  • Of course quality counts.  As I said in the January 1999 issue of American Libraries, “Data measurement cannot capture a friendly smile and a warm greeting at the circulation desk.  Nor can data measurement alone measure the excitement of a child at story time or a senior surfing the Internet for the first time.”  But we have no accepted and nationally consistent measures of quality in library services that would allow for comparisons.  I agree that numbers alone do not identify truly great libraries, quality counts too.  On the other hand, I do not believe that a library can be truly great with poor numbers.  As my logic professor taught me, the numbers are a necessary but not sufficient condition.

Who is the HAPLR author?

  • Thomas J. Hennen Jr., the author of the HAPLR Index, has over 30 years of experience in public libraries. Until he retired in 2013, he was Director of the Waukesha County Federated Library System. He has a master’s degree in library science from the University of Wisconsin-Milwaukee. From 1983 to 1999 he was Administrator of Lakeshores Library System in southeastern Wisconsin. From 1975 to 1983 he was Director of Watonwan County Library in southeastern Minnesota. He has published in Library Journal, American Libraries, and other American Library Association publications. He had a column on rural library materials for the American Library Association’s Booklist magazine. He has been a speaker for library associations throughout the U.S. and Canada.

How does the author’s library rate on the HAPLR Ratings?

  • I coordinated the activities of 16 libraries in Waukesha County.  In a federated library system the activities of individual libraries are locally determined.  The federated library system deals with interlibrary relationships and provides leadership and overall direction. Nevertheless, the scores of libraries in the county were mostly very good.

What has the response been to the HAPLR Ratings?

  • Overwhelming would be a good description of the response to the first edition. The web site (HAPLR-Index.com) averaged about 1,000 unique visitors per month. The visitors are from all over the globe, but primarily from the U.S., of course. Press coverage was also excellent. Over the years, hundreds of newspapers covered the index with feature or front-page coverage about their local library’s rankings. As print journalism faltered with the rise of the Internet, coverage slowed.

What do you say to those who note that the information on which you base your ratings seems out of date?

  • Anyone involved with data gathering and statistics wishes that they could be timelier, but we do what we can.  The information was collected by the states and submitted to IMLS. Information is checked for internal consistency and then published first on the Internet.  The IMLS imputes data for the 2,000 libraries that did not report the data necessary to calculate their rankings. As states increasingly automate their data collection and allow for filing over the Internet, the data will become closer and closer to “real time,” rather than the belated information I was working with.

Are there similar rating methods for libraries?

  • The independently produced HAPLR Index was the first of its kind for libraries in the United States. Although published frequently in American Libraries, it was never editorially sponsored by that publication. Bibliostat, a vendor of library statistical packages, and Library Journal launched the LJ Index – Star Library project in 2009. The fundamental difference between the two is that HAPLR includes input measures while the LJ Index does not. The LJ Index looks at only one side of the library service equation; HAPLR looks at both sides.

The closest thing to the HAPLR Index was developed in Germany. Bertelsmann Publishing partnered with the German library association to produce “BIX – The Library Index,” a project sponsored by the Bertelsmann Foundation and quite similar to HAPLR. The main difference between BIX and HAPLR, aside from the publishing house backing, is that BIX was designed to provide comparisons of one library to another as well as over time; HAPLR compares all libraries to one another only within a given year. BIX ceased publication shortly after I discontinued HAPLR. An English language description of the BIX index was available at: http://www.bertelsmann-stiftung.de/documents/Projekt_Info_Englisch_010112.pdf

There are no similar programs in Canada, Australia, or New Zealand. I know that there is some interest in developing a similar index in Australia and New Zealand, because I published an article on the topic in APLIS, the Australasian Public Library and Information Science magazine.

Great Britain adopted national standards, and in 2000 the Audit Commission, an independent body, began publishing both summary annual reports of library conditions and individualized ratings of libraries. Audit Commission personnel base the reports on statistical data, long-range plans, local government commitment to the library, and a site visit. Every library is assigned a score.
The scoring chart displays performance in two dimensions. A horizontal axis shows how good the service is at present, on a scale ranging from no stars for poor to three stars for excellent. A vertical axis shows the improvement prospects of the service over time, also on a four-point scale. The narrative reports, which run about 40 pages, are very specific and quite blunt in their assessments and recommendations for improvement. This is not quite the same thing as the HAPLR Index, but close. Their site was at: http://www.bestvalueinspections.gov.uk/

There is also a project funded by DG13 of the European Commission within the Telematics Applications Programme. It uses Internet communications to develop a continuously updated database of statistics about library activities and associated costs in the context of their national economies. This project does not develop an index similar to the HAPLR Index, however. That information was found at http://www.cordis.lu/libraries/en/publib.html

How many libraries are there in each population category?

  • There are over 9,000 library entities included in the IMLS database. Library systems with multiple branches are counted as a single entity. The population categories used are those used by the IMLS for other comparisons, with one exception.  The IMLS data includes another category of libraries over 1 million population, but that would have provided too few libraries for purposes of HAPLR Ratings. For 2010 the numbers were:
Population Category Total
a) 500 K 83
b) 250 K 100
c) 100 K 337
d) 50 K 545
e) 25 K 943
f) 10 K 1,773
g) 5 K 1,481
h) 2.5 K 1,329
i) 1 K 1,479
j) 0 K 1,010
Grand Total 9,080
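For readers who want to place their own library, the groupings above can be reproduced with a small helper. This is a sketch, not official IMLS code; the labels are taken from the table, treated here as the lower bound of each service-area population category.

```python
# Lower bound and label for each population category in the table above.
CATEGORY_BREAKS = [
    (500_000, "500K"), (250_000, "250K"), (100_000, "100K"),
    (50_000, "50K"), (25_000, "25K"), (10_000, "10K"),
    (5_000, "5K"), (2_500, "2.5K"), (1_000, "1K"), (0, "0K"),
]

def population_category(service_area_population):
    # Walk the breaks from largest to smallest; return the first that fits.
    for lower_bound, label in CATEGORY_BREAKS:
        if service_area_population >= lower_bound:
            return label

print(population_category(620_000))  # "500K"
print(population_category(3_100))    # "2.5K"
```

A library is always compared only against the other libraries in its own category, so this assignment is the first step of any HAPLR comparison.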

Can one compare rating numbers between two or more population categories?

  • With care, yes, one can do so.  There are variations in the highs, lows, medians, and so forth that vary by population sizes.  So a score of say 600 may be more easily attainable in some categories than others.  But the variations are not so extreme that no comparisons across population categories are possible.

How can you mix both input and output measures?

  • Some have criticized the HAPLR Index for including both input and output measures in the same product. They note that inputs, like how much money is spent on materials or how many periodicals the library owns, are different from outputs, such as circulation per capita or turnover rate. Combining the two makes it possible for a library with good inputs and poor outputs to score moderately well. Conversely, a library shortchanged on funding by its community, but providing excellent service outcomes through good management, may rank more poorly than a library in a rich community with only moderately good management and output measures. I would like to get closer to answering the “are you getting what you paid for” type of question. At this point, it appears to me that 70 to 80% of the output is traceable to good input levels. The rest is probably traceable to good management or other factors that may not be measurable. I hope to do further investigation on the correlation of input and output some day soon. (Research firms with grant money to spare, please take note.)

What does a given rating number mean, and how should I interpret it?

  • The HAPLR Index is similar to an ACT or SAT score, with a theoretical minimum of 1 and a maximum of 1,000. (Note that some of my critics insist that there are differences between HAPLR and ACT scores, so I should not make this comparison. That is a bit like saying all metaphors or comparisons are useless. I reject the argument.) Most libraries scored between 260 and 730, so scores above and below those numbers are remarkable. Consider the chart below for the libraries in the over 500,000-population category for a brief illustration of the rating methodology. [Chart is from the 2008 edition.] A library above the 75th percentile for expenditure per capita of $38.50 will get a higher score on this measure than one below the 25th percentile. Expenditure per capita is weighted more heavily than percent of budget devoted to materials. In the HAPLR Index each library is compared to all others in its population category on all 15 measures. The combined score is then transformed into an index score so that all can be easily compared with a single number. For more information see the next question.
Measurement Category HAPLR Weight 75th %ile 50th %ile 25th %ile
Expenditure per capita 3 $38.50 $25.87 $19.33
Percent Budget to materials 2 18.0% 15.4% 12.9%
Materials Expend. Per capita 2 $5.63 $3.96 $2.79
FTE staff per 1,000 population 2          0.6          0.4          0.3
Periodicals per 1000 residents 1          7.9          4.6          3.3
Volumes per Capita 1          3.0          2.4          1.7
Cost per circulation (low to high) 3 $3.38 $4.29 $5.89
Visits per capita 3          5.1          3.8          3.0
Collection turnover 2          3.9          2.4          1.7
Circulation per FTE Staff Hour 2          8.9          6.6          4.7
Circulation per Capita 2          8.9          5.0          3.9
Reference per capita 2          2.0          1.4          0.8
Circulation per hour 2      105.4        77.8        59.6
Visits per hour 1        68.5        56.2        40.7
Circulation per visit 1          1.9          1.4          1.1

Is weighting appropriate in a rating system?

  • Some objected that HAPLR gave more weight to some measures than to others. Yet not weighting factors makes a value judgment as well: it makes them equal in importance. This begs the question: is a library visit truly equal in value to a circulation, an electronic resource session, or attendance at a program? BIX, the German rating system, used weighting, as does HAPLR. The table below indicates the relative weights assigned to possible measures by HAPLR, BIX, and the LJ Index. HAPLR assigned 38% of the value to input measures, BIX assigned 56%, and the LJ Index assigned 0%. Note that due to the score algorithm of the LJ Index, the final score of a library can be almost totally driven by a single factor.
Type Measure HAPLR Relative weight BIX Relative Weight LJ Index Relative Weight
Input Expenditure per capita 10%
Employees per 1,000 capita 7% 8%
Percent Budget to materials 7%
Materials Expend. Per capita 7%
Collection units per capita 3% 8%
Periodicals per 1000 residents 3%
Total Opening hours per year per 1000 capita 8%
User area in sqm per 1,000 capita 4%
Employee hours per opening hour 4%
Investment per capita 2%
Stock Renewal rate 12%
Computer services in hours per capita 4%
Internet Services 4%
Advanced training per employee 2%
Output Number of visits per capita 10% 12% 25%
Cost per circulation (low to high) 10%
Circulation per capita 7% 8% 25%
Collection turnover 7%
Circulation per FTE Staff Hour 7%
Reference per capita 7%
Circulation per hour 7%
Visits per hour 3%
Circulation per visit 3%
Stock turnover rate 12%
Events per 1000 capita 4% 25%
Total Expenditure per visit 4%
Acquisitions budget per loan 4%
Electronic Resource Use per capita 25%
Input Totals 38% 56% 0%
Output totals 62% 44% 100%
Combined totals 100% 100% 100%
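The 38%/62% split quoted above follows directly from the HAPLR weights (3, 2, 2, 2, 1, 1 for the six input measures; 3, 3, 2, 2, 2, 2, 2, 1, 1 for the nine output measures). A quick arithmetic check:

```python
INPUT_WEIGHTS = [3, 2, 2, 2, 1, 1]             # six input measures
OUTPUT_WEIGHTS = [3, 3, 2, 2, 2, 2, 2, 1, 1]   # nine output measures

total = sum(INPUT_WEIGHTS) + sum(OUTPUT_WEIGHTS)         # 29 weighted points
input_share = round(100 * sum(INPUT_WEIGHTS) / total)    # 11/29 -> 38
output_share = round(100 * sum(OUTPUT_WEIGHTS) / total)  # 18/29 -> 62
print(total, input_share, output_share)
```

The same 29 weighted points reappear below in the index calculation, where the combined score is divided by 29 to form a weighted average.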

Is a score of 500 considered median for all categories?        

  • Just about, but not perfectly. Blame it on Microsoft Excel, or the vagaries of prime numbers, if blame you must. Each of the 15 measures is a ratio, such as volumes per capita. When two non-prime numbers are involved in the ratio, it is possible for two or more libraries to tie on that measure. When ties happen, the total number of points assigned is skewed toward a higher number than would otherwise have occurred. In the grand scheme of things this matters little, because the median scores are affected only very marginally.
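The tie effect can be illustrated in a few lines of Python. This is a hypothetical sketch, not the original spreadsheet: with competition ranking, two tied libraries both receive the better rank, so the pool of awarded points grows slightly.

```python
def ranks(values):
    # Competition ranking: the highest value gets rank 1,
    # and tied values share the best rank available.
    ordered = sorted(values, reverse=True)
    return [ordered.index(v) + 1 for v in values]

def points(values):
    # HAPLR-style points on one measure: number of libraries minus rank.
    n = len(values)
    return [n - r for r in ranks(values)]

no_ties = points([9.0, 7.0, 5.0])   # ranks 1, 2, 3 -> points [2, 1, 0]
with_tie = points([9.0, 9.0, 5.0])  # ranks 1, 1, 3 -> points [2, 2, 0]
print(sum(no_ties), sum(with_tie))  # 3 vs 4: the tied field awards more points
```

Summed over 15 measures and thousands of libraries, these extra points nudge the median score just above 500.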

 Where can I see the specific calculations behind the index ratings?

  • An index number is always an attempt to encapsulate a lot of data into a single number.  No such index number is perfect, of course.  An explanation of how the ratings are calculated is available on the explanation of ratings page.

 

 

How were the ratings calculated? 

  • Rating Method Used for HAPLR 1.0, 1999-2010. Nationwide public library statistics are collected and disseminated annually through the Institute of Museum and Library Services (IMLS). Statistics are collected from nearly 9,000 public libraries. The web site is at: http://www.imls.gov/ The HAPLR Index includes 15 factors. The focus is on circulation, staffing, materials, reference service, and funding levels. The Index does not include data on audio and video collections or interlibrary loan, among other items that could have been calculated from the IMLS data. Perhaps most prominently absent are any measures of electronic use or Internet service. While such measures would have been desirable, the data are simply not sufficient for such comparisons at this time. Internet, electronic services, and audiovisual services are excluded because there is not enough accurate data reported by enough libraries to make comparisons meaningful. What remains are fairly traditional data for print services, book checkouts, reference service, funding, and staffing. It is likely that in the future additional measures can be added to begin to evaluate such other library services as Internet use, electronic services, and non-print services. The data have only been collected on a consistent national basis since 1981. Since then the data have been refined to be more consistent and to include more information. That trend is likely to accelerate, making the additional comparisons possible soon.

Population factors considered

  • HAPLR uses the IMLS service area populations for each library, not the unduplicated population. IMLS has no choice but to ask states to make some rather arbitrary assignments of population. The population served often extends beyond the population of the community that established the library and provides its initial support. Library service territories, when added together in some states, exceed the total population. Hence, the IMLS reports both service area populations as reported by the state and unduplicated service populations, which arbitrarily re-assign population. Depending on the demographic makeup of the state, there will be inconsistencies in population assignment. Consolidated county and regional library systems are more prevalent in some states and regions than in others, skewing some population data. Nearly half of the HAPLR Index is sensitive to population as reported in the data, so this fact should be considered when interpreting the results.

Weighting the Factors

 

Measurement Category HAPLR Weight
Expend. per capita 3
Percent Budget to materials 2
Materials Expend. Per capita 2
FTE staff per 1,000 population 2
Periodicals per 1000 residents 1
Volumes per Capita 1
Cost per circulation (low to high) 3
Visits per capita 3
Collection turnover 2
Circulation per FTE Staff Hour 2
Circulation per Capita 2
Reference per capita 2
Circulation per hour 2
Visits per hour 1
Circulation per visit 1

 

 

HAPLR Index Calculation Details

  • Calculation Numbers (using Libraries over 500,000 Population as an example). The explanation below covers the methods employed to arrive at the rating scores. All 15 measures are calculated for each of the libraries in the over 500,000-population group. Each library is then ranked on each of the 15 measures. Measures and their related weights are listed in the table below. The first six are input measures; the remaining nine are output measures.

 

Measurement Category HAPLR Weight
Expend. per capita 3
Percent Budget to materials 2
Materials Expend. Per capita 2
FTE staff per 1,000 population 2
Periodicals per 1000 residents 1
Volumes per Capita 1
Cost per circulation (low to high) 3
Visits per capita 3
Collection turnover 2
Circulation per FTE Staff Hour 2
Circulation per Capita 2
Reference per capita 2
Circulation per hour 2
Visits per hour 1
Circulation per visit 1

 

Calculations:  Suppose there were 72 libraries in the over 500,000-population category. Calculations were as follows.  The calculations remained the same by population category but there were differing numbers of libraries in each population category. 

  1. For a given sample, let us suppose that the expenditure per capita rank is 22. Since the weight is 3, the expenditure per capita score is then (72-22=50) times 3 = 150
  2. Volumes per capita rank is 12 weight is 1. Volumes per capita score is (72-12=60) times 1 = 60.
  3. Take special note that cost per circulation is rated low to high, assuming that the lowest cost is best. Therefore the calculation for this factor differs: it is the rank times the weighting, rather than the number of libraries minus the rank, that is used.
  4. Cost per circulation rank is 10, weight is 3, so the cost per circulation score is 10 times 3 = 30.
  5. The above scores, plus the remaining 12 items, total 1,073 for a given sample library.
  6. Next we note that there are 29 weighted points, so divide the 1,073 score by 29 to get a weighted average score across all measures of 37.0.
  7. Divide the weighted score of 37.0 by the 72 libraries in the population group and multiply by 1,000 to get the index rating.
  8. 37.0 divided by 72 = 0.513889.
  9. Multiply by 1,000 to get 514 as the index number for the library.
  10. Finally, arrange all 72 libraries in the grouping by index score to get a ranking for each library on the index score.
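The steps above can be sketched in Python. This is a hypothetical reconstruction of the arithmetic described in the text, not the original spreadsheet; the measure names and ranks below are the three used in the worked example.

```python
# Measures rated low-to-high (step 3): the lowest cost is best.
LOW_IS_BETTER = {"cost_per_circulation"}

def haplr_index(measure_ranks, weights, n_libraries):
    """Combine per-measure ranks into a HAPLR-style index score (steps 1-9)."""
    total = 0
    for measure, rank in measure_ranks.items():
        if measure in LOW_IS_BETTER:
            total += rank * weights[measure]                # rank times weight
        else:
            total += (n_libraries - rank) * weights[measure]
    weighted_average = total / sum(weights.values())        # step 6
    return round(weighted_average / n_libraries * 1000)     # steps 7-9

# Toy run with only the three measures from the worked example:
# expenditure (72-22)*3 = 150, volumes (72-12)*1 = 60, cost 10*3 = 30.
ranks = {"expenditure_per_capita": 22, "volumes_per_capita": 12,
         "cost_per_circulation": 10}
weights = {"expenditure_per_capita": 3, "volumes_per_capita": 1,
           "cost_per_circulation": 3}
print(haplr_index(ranks, weights, n_libraries=72))  # 476 for this subset
```

With all 15 measures the sample library's 1,073 points and 29 weighted points yield the 514 shown in the text; the three-measure subset here illustrates the same formula on smaller numbers.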

 
