Scientific evidence, expert opinion and statistical methodologies were employed to select, weigh and combine the elements used to produce the AFI data report.
Why Choose MSAs Over Cities?
Defining a “city” by its city limits overlooks the interaction between the city core and the surrounding suburban areas. Residents outside the city limits have access to fitness-related resources in their suburban area as well as in the city core; likewise, residents within the city limits may use resources in the surrounding areas. Thus, the metropolitan area, including both the city core and the surrounding suburbs, acts as a unit to support the wellness efforts of its residents. Consequently, MSA data were used where possible in constructing the AFI. It is understood that various parts of the central city and surrounding suburban area may have very different demographic and health behavior characteristics, as well as different access to community-level resources that support physical activity. However, nationally available data to measure these characteristics and resources do not yet exist at the smaller geographic levels within MSAs, so such comparisons are not possible. Communities within an MSA could, however, collect local data using the measurements and strategy outlined in My AFI to identify opportunities and to monitor improvements resulting from their initiatives.
How Were the Indicators Selected for the Data Index?
Elements included in the data index must have met the following criteria to be included:
- Be related to the level of health status and/or physical activity environment for the MSA;
- Be measured recently and reported by a reputable agency or organization;
- Be available to the public;
- Be measured routinely and provided in a timely fashion; and
- Be modifiable through community effort (for example, smoking rate is included, climate is not).
What Data Sources Were Used to Create the Data Index?
The most current publicly available data at the time of analysis, drawn from federal reports and past studies, provided the information used in this version of the data index. The largest single data source for the personal health indicators was the Behavioral Risk Factor Surveillance System (BRFSS) of the U.S. Centers for Disease Control and Prevention (CDC). Through a survey conducted by its Center for City Park Excellence, the Trust for Public Land provided many of the community/environmental indicators, and the U.S. Census American Community Survey was the source for most of the MSA descriptions. The U.S. Department of Agriculture; State Report Cards (School Health Policies and Programs Study by the CDC); and the Federal Bureau of Investigation’s (FBI) Uniform Crime Reporting Program also provided data used in the MSA description and index. The data index elements and their data sources are shown in Appendix A.
How Was the Data Index Built?
Potential initial elements for the AFI data index were scored for relevance by a panel of 26 health and physical activity experts in 2008 (listed in Appendix B). Two Delphi method–type rounds of scoring were used to reach consensus on whether each item should be included in the data index and the weight it should carry in the calculations.
From this process, 31 currently available indicators were identified and weighted for the index, and 16 description variables were selected. The description elements were not included in the data index calculation but were shown for cities to use for comparison purposes. A weight of 1 was assigned to elements the expert panel considered of little importance; 2 to those considered of moderate importance; and 3 to those considered of high importance. Each item used in the scoring was first ranked (worst value = 1) and then multiplied by the weight assigned by consensus of the expert panel. The weighted ranks were then summed by indicator group to create scores for the personal health indicators and the community/environmental indicators. Finally, each MSA score was standardized to a scale with an upper limit of 100 by dividing the MSA score by the maximum possible value and multiplying by 100. Note that the changes made in the measures for 2014 reduced the number of indicators by one, for a total of 30 indicators.
The following formula summarizes the scoring process:

MSA Score_k = [ Σ (r_i × w_i) / MSA Score_max ] × 100, summing i = 1 to n over the indicators in group k

where:
r = MSA rank on indicator
w = weight assigned to indicator
k = indicator group
n = 15 for personal health indicators and 16 for community/environmental indicators
MSA Score_max = hypothetical score if an MSA ranked best on each of the elements
The individual weights also were averaged for both indicator groups to create the total score. Both the indicator group scores and the total scores for the 50 cities were then ranked (best = 1) as shown on the Metropolitan Area Snapshots.
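As an illustration, the rank–weight–sum–standardize procedure described above can be sketched in Python. The indicator names, weights, and values below are hypothetical, not actual AFI data, and tied values are ignored for simplicity.

```python
def score_group(values, weights, higher_is_better):
    """Rank MSAs on each indicator (worst value = 1), multiply by the
    indicator weight, sum, and standardize to a 0-100 scale."""
    n_msas = len(next(iter(values.values())))
    totals = [0.0] * n_msas
    max_total = 0.0  # hypothetical score if an MSA ranked best on everything
    for ind, vals in values.items():
        w = weights[ind]
        # Sort MSAs from worst to best on this indicator, so the worst
        # value receives rank 1 and the best receives rank n_msas.
        order = sorted(range(n_msas), key=lambda i: vals[i],
                       reverse=not higher_is_better[ind])
        for rank, i in enumerate(order, start=1):
            totals[i] += rank * w
        max_total += n_msas * w
    # Standardize: divide by the maximum possible score, multiply by 100.
    return [round(t / max_total * 100, 1) for t in totals]

# Three hypothetical MSAs scored on two hypothetical indicators.
weights = {"smoking_pct": 3, "parks_per_capita": 2}
higher = {"smoking_pct": False, "parks_per_capita": True}
values = {"smoking_pct": [25.0, 15.0, 20.0],       # lower is better
          "parks_per_capita": [10.0, 14.0, 12.0]}  # higher is better
print(score_group(values, weights, higher))  # -> [33.3, 100.0, 66.7]
```

The MSA that is best on every indicator receives the maximum possible weighted-rank sum and therefore a standardized score of 100, mirroring how MSA Score_max caps the scale.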
How Should the Scores and Ranks Be Interpreted?
It is important to consider both the score and rank for each city. While the ranking lists the MSAs from the highest score to the lowest score, the scores for many cities are very similar, indicating that there is relatively little difference among them. For example, the score for San Jose was 69.4 while the score for Seattle was 69.3. While San Jose was ranked higher than Seattle, these two metropolitan areas were actually very similar across all of the indicators; thus, there is little difference in the community wellness levels of the two MSAs. Also, while one city carried the highest rank (Washington, DC) and another carried the lowest rank (Memphis, TN), this does not necessarily mean that the highest ranked city has excellent values across all indicators and the lowest ranked city has the lowest values on all the indicators. The ranking merely indicates that, relative to each other, some cities scored better than others.
The data elements used in the AFI were reviewed and updated in 2014. Specifically, BRFSS made significant changes in the survey items used to determine food intake and physical activity level. In addition, the percent covered by health insurance and the primary care provider-to-population ratio were removed because the experts felt these measures did not significantly impact fitness levels. Finally, a new environmental/community measure, the Walk Score ranking, was added. Consequently, individual 2014 AFI elements that did not change can be compared with earlier years’ data, but the overall score and the sub-scores for 2014 are not comparable to earlier years.
How Were the Areas of Excellence and Improvement Priority Areas Determined?
The Areas of Excellence and Improvement Priority Areas for each MSA were listed to help communities identify where they might focus their efforts, using approaches adopted by cities that have strengths in the same areas. This process compared each data index element of the MSA to a newly developed target goal. The target goals for the personal health indicators were derived by taking the 90th percentile of the pooled 2008-2012 AFI data; for new personal health indicators, the target goal was 90% of the 2014 values. The target goals for the community health indicators were derived by averaging the pooled 2008-2012 AFI data; target goals for new community indicators were the average of the 2014 values. Data indicators with values equal to or better than the target goal were considered “Areas of Excellence.” Data indicators with values more than 20% worse than the target goal were listed as “Improvement Priority Areas.”
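One reading of this classification rule — assuming “worse than 20% of the target goal” means more than 20% beyond the goal in the unfavorable direction — can be sketched as follows; the function name and thresholds are illustrative, not part of the published AFI methodology.

```python
def classify(value, target, higher_is_better, priority_margin=0.20):
    """Return 'Area of Excellence', 'Improvement Priority Area', or None
    for an indicator value compared against its target goal."""
    # Equal to or better than the target goal -> Area of Excellence.
    meets = value >= target if higher_is_better else value <= target
    if meets:
        return "Area of Excellence"
    # Assumed reading: more than 20% worse than the target goal
    # -> Improvement Priority Area.
    if higher_is_better:
        if value < target * (1 - priority_margin):
            return "Improvement Priority Area"
    elif value > target * (1 + priority_margin):
        return "Improvement Priority Area"
    return None  # worse than the goal, but within the 20% margin

# Hypothetical smoking-rate indicator (lower is better), target goal 25%:
print(classify(15.0, 25.0, False))  # -> Area of Excellence
print(classify(31.0, 25.0, False))  # -> Improvement Priority Area
print(classify(27.0, 25.0, False))  # -> None (worse, but within 20%)
```

Indicators falling between the two thresholds are simply not flagged, which matches the report’s listing of only the two named categories.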
What Are the Limitations of the AFI Data Report?
The items used for the personal health indicators were based on self-reported responses to the Behavioral Risk Factor Surveillance System and are subject to the well-known limitations of self-reported data. Because this limitation applies to all metropolitan areas included in this report, the biases should be similar across areas, so the relative differences should still be valid. In addition, the BRFSS data collection method changed in 2011 with a new weighting methodology and the addition of a cell phone sampling frame; thus, measures before and after 2011 are not directly comparable. Following the advice on the FBI Uniform Crime Reporting Program website, violent crime rates were not compared to U.S. values or to averages of all MSAs; as the FBI notes, data on violent crimes may not be comparable across metropolitan areas because of differences in law enforcement policies and practices from area to area. The Trust for Public Land community/environmental indicators include only city-level data, not data for the complete MSA. Consequently, most of the community/environmental indicators shown on the MSA tables are for the main city in the MSA and do not include resources in the rest of the MSA.