Methodology

How was the Fitness Index developed?

In 2008, potential indicators for the ACSM American Fitness Index were identified and scored for relevance by a panel of 26 health and physical activity experts. Then, two rounds of Delphi-method scoring were used to reach consensus on whether each of the potential indicators should be included and the weight each selected indicator should carry in the Fitness Index.

From this process, 31 indicators were originally identified and weighted. A weight of 1 was assigned to indicators considered to be of lower importance by the panel; 2 for those indicators considered to be of moderate importance; and 3 to indicators considered of highest importance to health and fitness.

Values for each indicator were then ranked (worst value = 1) and multiplied by the weight assigned by the expert panel. Unhealthy measures, such as the percent of residents who smoke, were reverse ranked. The weighted ranks were then summed by indicator group to create sub-scores for the personal health indicators and the community/environment indicators. Overall scores were standardized to a scale with an upper limit of 100 by dividing each overall score by the maximum possible value and multiplying by 100.

The following formula summarizes the analysis process:

City Score_k = [ Σ (r_i × w_i) / City Score_max ] × 100, summed over i = 1 to n

where:
  r = city rank on indicator (worst value = 1)
  w = weight assigned to indicator
  k = indicator group
  n = 16 for personal health indicators and 17 for community/environment indicators
  City Score_max = hypothetical score if the city ranked best on each indicator

The individual weights also were averaged for both indicator groups to create the total score. Both the indicator group scores and the total scores for the 100 cities were then ranked (best = 1).
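The rank-weight-sum-scale procedure described above can be sketched in code. This is a minimal illustration, not the Fitness Index's actual implementation; the indicator names, weights, and city values below are invented for demonstration purposes only.

```python
def score_cities(values, weights, reverse=None):
    """Rank cities on each indicator (worst value = 1), multiply each
    rank by the indicator's weight, sum the weighted ranks per city,
    and scale to a 0-100 score.

    values:  {indicator: {city: value}}; higher values are healthier
             unless the indicator is listed in `reverse`
    weights: {indicator: weight}, where weight is 1, 2, or 3
    reverse: set of unhealthy measures (e.g. smoking prevalence),
             which are reverse ranked so a high value gets rank 1
    """
    reverse = reverse or set()
    cities = next(iter(values.values())).keys()
    totals = {city: 0 for city in cities}
    for indicator, by_city in values.items():
        # Order cities worst-first: for healthy measures the lowest
        # value is worst; for unhealthy measures the highest is worst.
        ordered = sorted(by_city, key=by_city.get,
                         reverse=indicator in reverse)
        for rank, city in enumerate(ordered, start=1):
            totals[city] += rank * weights[indicator]
    # Best possible weighted sum: top rank (= number of cities) on
    # every indicator, times that indicator's weight.
    max_score = len(totals) * sum(weights.values())
    return {city: 100 * total / max_score
            for city, total in totals.items()}


# Hypothetical example with three cities and two indicators.
values = {
    "exercise_pct": {"A": 80, "B": 60, "C": 70},  # higher is better
    "smoking_pct":  {"A": 10, "B": 25, "C": 15},  # higher is worse
}
weights = {"exercise_pct": 3, "smoking_pct": 2}
scores = score_cities(values, weights, reverse={"smoking_pct"})
```

Here city A ranks best on both indicators, so its weighted sum equals the maximum possible value and its score is 100; ties and real indicator counts are omitted for brevity.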

The expert panel also selected 16 descriptive variables. The descriptive variables were not included in the ranking analysis but, instead, were provided for comparison purposes only.

How were Fitness Index indicators selected?

Indicators were required to meet the following criteria to be included in the Fitness Index:

  • Related to the physical activity or health status of city residents, or city environment/community resources that support healthy behaviors
  • Reported by a reputable agency or organization
  • Available to the public
  • Measured routinely and provided in a timely fashion
  • Modifiable through community effort

What data sources were used in the analysis?

The most current data available at the time of analysis, drawn from national-level reports and surveys, provided the information analyzed for the Fitness Index. The data source for the personal health indicators was the Behavioral Risk Factor Surveillance System (BRFSS), updated annually by the U.S. Centers for Disease Control and Prevention (CDC). The annual survey conducted by the Center for City Park Excellence of the Trust for Public Land provided many of the community/environment indicators. The U.S. Census American Community Survey was the source for population data and commuting measures. The U.S. Department of Agriculture and the State Report Cards (from the CDC's School Health Policies and Programs Study) also provided data used in the Fitness Index.

Download the full list of data sources by individual indicator here.

Which cities are included and why?

The Fitness Index reports fitness, health, community, and built environment data on the 100 largest U.S. cities by population. The Fitness Index was purposefully designed to benefit the largest number of people given the data and resources available.

How should the scores and ranks be interpreted?

It is important to consider both the score and the rank for each city when using the Fitness Index. While the rankings list the cities from the highest score to the lowest score, the scores for many cities are very similar, indicating there may be relatively little real difference among their fitness levels.

For example, Cincinnati, OH scored 57.0 overall and ranked #29, while Milwaukee, WI scored 55.8 overall and ranked #34. While Cincinnati ranked higher than Milwaukee in the 2018 Fitness Index, these two cities are actually very similar across most of the indicators, as evidenced by the close scores; thus, there is little difference in the community fitness levels of the two.

Also, while one of the 100 cities ranked #1 and another ranked #100, this doesn’t mean that the highest ranked city has excellent values across all indicators and the lowest ranked city has the lowest values on all indicators. The ranking merely indicates that, relative to each other, some cities scored better than others. Every city has room for improvement and every city has successes to celebrate.

It’s also important to remember that a majority of the personal health indicators do not change rapidly, so it will take time for the impact of new initiatives to appear in most of the health indicators. Improvements in community and built environment indicators are important investments, and notable changes in the health of residents are expected to follow gradually.

The Fitness Index indicators and methods are reviewed and updated regularly by the ACSM American Fitness Index Advisory Board. Due to changes made to the Fitness Index, comparisons between the 2018 rankings and previous years’ rankings should be avoided.

What are the limitations of the Fitness Index?

The indicators used for personal health were based on self-reported responses to the Behavioral Risk Factor Surveillance System survey and are subject to the well-known limitations of self-reported data. Since this limitation applies to all cities included in the rankings, any biases should be similar across all communities, so the relative differences should remain largely valid.

The FBI Uniform Crime Reporting Program violent crime rates should be compared with caution. As indicated on the FBI website, data on violent crimes may not be comparable across all cities because of differences in law enforcement policies and practices from area to area.