Item: COMPARING THE EFFECTIVENESS OF THE ECT, PST AND CT FOR ASSESSING SNOW STABILITY
-
-
Title: COMPARING THE EFFECTIVENESS OF THE ECT, PST AND CT FOR ASSESSING SNOW STABILITY
Proceedings: International Snow Science Workshop Proceedings 2023, Bend, Oregon
Authors:
- Alex Marienthal [ Gallatin National Forest Avalanche Center, Bozeman, MT, USA ]
- Doug Chabot [ Gallatin National Forest Avalanche Center, Bozeman, MT, USA ]
- Karl Birkeland [ Birkeland Snow and Avalanche Scientific, Bozeman, MT, USA ] [ USDA Forest Service National Avalanche Center, Bozeman, MT, USA ]
Date: 2023-10-08
Abstract: Avalanche professionals utilize small-block tests such as the Extended Column Test (ECT), Propagation Saw Test (PST), and Compression Test (CT) to augment other data in formulating avalanche forecasts and making decisions. Previous research presents different metrics of effectiveness and a wide array of results for these tests. For example, false-stable rates reported for the ECT range from 0-40%, and false-unstable rates range from 2-44%. For the PST, reported false-stable rates range from 22-44% and false-unstable rates range from 0-14%. For the CT, studies reported 0-48% false-stable rates, with false-unstable rates much higher than for the other tests, ranging from 44-79%. While these studies provide the best estimates to date of the efficacy of these tests, they have limitations due to sample size, sample bias, and regional bias. Our paper also has limitations, but we aim to provide further insight and a more accurate estimate of the efficacy of snowpack tests using two datasets. Our first dataset consists of stability tests collected by professional backcountry avalanche forecasters in the western U.S. during four winters (n=561). For our second dataset we used the SnowPilot.org database, taking all snowpits from December 2007 to March 2020 that contained at least one stability test and either a stability rating or a sign of instability (n=3,313). We defined "true" slope stability in the first dataset using the forecasters' assigned stability ratings both before and after they performed tests. In the larger SnowPilot dataset we used the stability rating assigned by the user, assumed to be assigned after tests were performed, and any noted signs of instability on similar slopes. Our two datasets have false-stable rates of 16-31% for ECTs, 19-36% for PSTs, and 8-45% for CTs, and false-unstable rates of 23-44% for ECTs, 34-37% for PSTs, and 39-86% for CTs. In contrast to previous research, our results suggest that ECTs and PSTs perform similarly in terms of false-stable and false-unstable rates when conducted by avalanche professionals. However, in the larger and more diverse SnowPilot dataset, with a greater proportion of recreational users, ECTs have somewhat lower false-stable rates than PSTs. We hypothesize that avalanche professionals improve PST performance by selecting that test at appropriate times, identifying the correct weak layer to test, and making sure their saw cut is exactly in the weak layer. Another interesting finding is that forecasters adjusted their stability rating nearly a third of the time (29%) after performing at least one test. This suggests that backcountry forecasters gain useful information about regional stability by performing stability tests, regardless of how well a test predicts slope stability. We suggest that the higher false-stable rates found in this research, compared to previous studies, more accurately estimate the efficacy of stability tests. Despite these false-stable rates, our results still demonstrate the usefulness of snowpack tests for appropriately trained professionals and recreational users.
Object ID: ISSW2023_O9.04.pdf
Language of Article: English
Presenter(s): Alex Marienthal
Keywords: stability tests, false-stable rates, ECT, PST, CT
Page Number(s): 1039 - 1046
-
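Editor's note: the abstract above reports false-stable and false-unstable rates for each test. As a minimal illustrative sketch (not taken from the paper), the snippet below shows how such rates are conventionally computed from paired records of test result and "true" slope stability, assuming a false-stable result means the test indicated stability on an unstable slope and a false-unstable result means the test indicated instability on a stable slope. All names here are hypothetical.

```python
from typing import List, Tuple

def error_rates(records: List[Tuple[str, str]]) -> Tuple[float, float]:
    """records: (test_result, slope_stability) pairs, each 'stable' or 'unstable'.

    Returns (false_stable_rate, false_unstable_rate), assuming the
    conventional definitions described above (hypothetical helper, not
    code from the paper).
    """
    unstable_slopes = [r for r in records if r[1] == "unstable"]
    stable_slopes = [r for r in records if r[1] == "stable"]

    # False-stable: test said stable, but the slope was unstable.
    false_stable = sum(1 for test, _ in unstable_slopes if test == "stable")
    # False-unstable: test said unstable, but the slope was stable.
    false_unstable = sum(1 for test, _ in stable_slopes if test == "unstable")

    fs_rate = false_stable / len(unstable_slopes) if unstable_slopes else float("nan")
    fu_rate = false_unstable / len(stable_slopes) if stable_slopes else float("nan")
    return fs_rate, fu_rate

if __name__ == "__main__":
    # Hypothetical example data, not drawn from either dataset in the paper.
    data = [("stable", "unstable"), ("unstable", "unstable"),
            ("unstable", "stable"), ("stable", "stable")]
    print(error_rates(data))  # (0.5, 0.5)
```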