Tornado Archives - Matthew Gove Blog
https://blog.matthewgove.com/tag/tornado/
Travel the World through Maps, Data, and Photography

I Built a COVID-19 Model that got 70-90% of its May and June predictions correct. I know nothing about epidemiology. Here’s How I Did It.
https://blog.matthewgove.com/2020/07/10/i-built-a-covid-19-model-that-got-70-90-of-its-may-and-june-predictions-correct-i-know-nothing-about-epidemiology-heres-how-i-did-it/
Sat, 11 Jul 2020

Tip: Check out my COVID-19 model predictions and performance on my COVID-19 Dashboard.


You can ask anyone I know, and they’ll all tell you the exact same thing: I know very little about anything that has to do with the medical field and absolutely nothing about epidemiology and disease. So how did I build a COVID-19 model that got 70-90% of its predictions correct, on average, on its May and June runs? Amazingly, I used my knowledge of meteorology to do it.

Now, I know your next question. Doctors and disease experts are constantly being criticized that their models are not accurate. How does an average Joe such as yourself get as much as 90% of your predictions correct on a single run? My response is quite simple: the models that have been under scrutiny project too far out into the future to have any kind of reliable accuracy. Those models make predictions 4 to 6 months into the future, while my model only forecasts 2 weeks and 1 month out. I’ll elaborate on this in a bit.

The 1 June, 2020 model run has been one of the model’s best performances of the pandemic. The model makes 2-week and 1-month projections for the 50 US States and 6 Canadian Provinces.

Before we dive in: it has long been my philosophy that modelers should stand by their models’ projections and take responsibility for them. In that spirit, please hold me accountable for my model’s performance, which you can find on my COVID-19 Dashboard. If it stinks, please don’t hesitate to call me out on it.

1. I may not know anything about epidemiology, but I know a lot about mathematical modeling, especially in the context of meteorology.

While completing my Bachelor’s degree in math and physics, I developed a passion for numerical modeling. In classes and projects, I modeled everything from Hopf bifurcations to Lorenz systems to fractals. For one of my senior seminars, I presented on the efficacy of various weather models’ predictions of Hurricane Katrina.

The Apollonian Gasket fractal is generated from triples of circles, where each circle is tangent to the other two. It is difficult to model because no single equation defines it.

By the time I arrived at the University of Oklahoma to continue my education in meteorology, I knew I wanted to study meteorological models extensively. During my time at OU, I worked with some of the most widely-used weather models, including the Global Forecast System (GFS), North American Mesoscale (NAM), Rapid Refresh (RAP), European Centre for Medium-Range Weather Forecasts (ECMWF), and the High Resolution Rapid Refresh (HRRR).

That knowledge forms the foundation of my COVID model. We usually don’t associate meteorology with epidemiology and disease, but it turns out that in the context of a pandemic, there is one very common thread that runs through both fields.

2. All outbreaks behave the same way, whether it’s a tornado outbreak or an outbreak of disease.

Back in March, we looked at modeling different types of outbreaks using Gaussian Functions, or bell curves. Remember that, in general terms, outbreaks tend to go through three phases:

  1. Some kind of trigger kicks off the initial case of the outbreak, causing initial growth to accelerate rapidly, often exponentially.
  2. The outbreak will at some point hit a ceiling or peak, causing new cases to level off and begin to decline.
  3. Cases will decline until they run out of fuel, at which point they will cease.

Example #1: A Tornado Outbreak

For example, the phases for a tornado outbreak would be:

  1. Peak heating in the afternoon maximizes extreme instability in the atmosphere. Storms begin to fire along the dryline, and rapidly grow in count and intensity.
  2. As the sun goes down, instability decreases, weakening existing storms and making it more difficult for new storms to form.
  3. As the evening cools, instability drops rapidly. While it may not drop to zero, it drops low enough to prevent new storms from forming. With their fuel supply cut off, any remaining storms eventually fizzle out, ending the outbreak.

Example #2: An Outbreak of Disease

Likewise, the phases for an outbreak of disease are:

  1. A new highly contagious disease starts rapidly spreading through the vulnerable population, often undetected at first.
  2. People begin taking precautions to protect against the spread of the disease, which slows the exponential spread. In extreme cases, the government may implement restrictions such as bans on large gatherings, business closures, mask mandates, and lockdowns.
  3. To end the outbreak, one of two things happens:
    • The virus can’t spread fast enough to maintain itself and fizzles out on its own.
    • The population gains immunity, either through a vaccine or through herd immunity.

You’ve probably heard people on TV talking about multiple waves of COVID-19. It turns out that all types of outbreaks can have multiple waves. One of the worst multi-wave tornado outbreaks I witnessed occurred from 18 to 21 May, 2013. You can think of each day as an individual wave of the outbreak. In total, 67 tornadoes were confirmed across four days in the southern Great Plains, including the horrific EF-5 that struck Moore, Oklahoma on 20 May.

Barrel-shaped funnel cloud over Norman, Oklahoma on 19 May, 2013. This would become an EF-4 tornado on the ground less than 5 minutes after I took this picture.

Radar image of the EF-5 tornado southwest of Moore, Oklahoma on 20 May, 2013. The blue crosshairs indicate my location at the time.

3. Keep how far in the future you’re forecasting to a minimum

The Global Forecast System, or GFS, is an American weather model that is the base of almost every forecast in North America. It makes worldwide projections in 6-hour time blocks up to 16 days in the future. While the model is well-known for its very accurate forecasts less than 5 days into the future, it’s shocking how quickly that accuracy drops once you get beyond 5 days. Do you know how many of its forecasts are correct 10 days away? Less than 10% of them. And if you go out to the full 16 days? Forget about it.

This phenomenon is observed in all models, including our COVID-19 model. Any guesses as to why it happens? When you break any model down to its nuts and bolts, every calculation a model makes is essentially an approximation, no matter how fine the resolution is. Small errors are introduced in every calculation, which are then passed to the next calculation. Repeat the process enough times, and the errors can get big enough to send model predictions off course.
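
To put some purely hypothetical numbers on it: suppose each 6-hour model step introduces just a 1% error. A 10-day forecast is 40 steps, and the errors compound:

1.01^40 ≈ 1.49

That’s roughly a 49% accumulated error from a seemingly negligible 1% per step. Real model errors don’t compound this uniformly, but the principle is the same.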

Using Weather Models as a Basis for our COVID-19 Model

I used the GFS’s 5-day accuracy threshold as a basis for our COVID-19 model. The question seems simple: How far out is the threshold where COVID-19 predictions start to rapidly decline in accuracy? After some trial and error, I discovered that our COVID-19 model could hold its accuracy up to about 2 weeks out. I decided that the model would make the following predictions.

  • 2-Week Projection
    • Goal: 65% correct rate, with 80% of incorrect projections missing by 5,000 cases or less
  • 1-Month Projection
    • Goal: 50% correct rate, with 50% of incorrect projections missing by 5,000 cases or less

So how has the model performed? While it performed well above expectations in May and June, here is how it has done overall.

Time Period        Correct 2-Week Projections    Correct 1-Month Projections
May and June       75 – 90%                      35 – 50%
Entire Pandemic    65 – 85%                      30 – 45%

So why have other COVID-19 models been criticized for being wrong so often? They simply have been trying to make projections too far into the future. Many of the models cited by the media, as well as by federal, state, and local governments, are trying to make forecasts up to 4 to 6 months into the future. Look at the drop in accuracy in my model just going from 2 weeks to 1 month. How well do you think that’s going to work going out 4 months?

4. Past performance is the best indicator of future behavior.

You hear this said all the time on both fictional and true crime shows on TV. Investigators use this technique to profile and catch up to criminals when they’re a step behind.

It’s no different in terms of COVID-19 modeling. In fact, you can actually use it on two fronts. First, let’s look at human behavior. Remember the protests to the stay-at-home orders across the United States back in April? That was the writing on the wall for the current explosion of cases throughout the US. We just didn’t see it that way at the time.

As expected, the minute the stay-at-home orders were lifted, too many people went back to their “pre-pandemic” routine, and the rest is history. I have yet to meet anyone who doesn’t want to get back to normal, but there’s a right way and a wrong way to do it. The European Union, Australia, and New Zealand all did it right. The US, well, not so much.

Fitting the Curve of Our COVID-19 Model

On the mathematical front, implementing “past performance is the best indicator of future behavior” is as simple as best-fitting the actual data to the model data. To do that, I wrote a simple algorithm:

  1. Using a piece-wise numerical integration, best-fit each day’s worth of actual data to the model’s two inputs: the R naught (R0) value and the percent of the population expected to be infected at the end of the pandemic.
  2. Using the most recent 10 days’ best-fit parameters, calculate a weighted average to get a single value for each parameter, which is then input into the model. The most recent days are given the strongest weight in the calculation (see the sketch below).
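
Here is a minimal sketch of step 2 in Python. The function name, the array layout, and the linear recency weighting are illustrative assumptions, not the model’s actual code:

import numpy as np

def blend_recent_fits(daily_fits, num_days=10):
    """Weighted average of the most recent daily best-fit parameters.

    daily_fits: array of shape (n_days, 2), oldest day first, where each
    row holds that day's best-fit (R0, percent_infected) pair.
    """
    recent = np.asarray(daily_fits)[-num_days:]
    weights = np.arange(1, len(recent) + 1)  # most recent day weighted heaviest
    return np.average(recent, axis=0, weights=weights)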

Is the best fit method perfect? No, but it works pretty darn well, especially in the short term. Best of all, it eliminates the need to make assumptions about extra parameters (a model is only as good as the assumptions it makes), which leads right to my next point.

5. K.I.S.S. – Keep It Simple, Stupid

I live by this motto. In the context of mathematical modeling, complexity is very much a double-edged sword. On one hand, if you can add complexity and at the same time get all of your assumptions and calculations right, you can greatly improve the accuracy of the model’s projections.

On the other hand, every additional piece of complexity you add to the model increases, often greatly, the risk of the model’s forecasts careening off course. Remember what I said earlier: every calculation the model makes adds to the potential for small errors to be introduced, and many small errors added together equal one big error. Every piece of complexity adds at least one additional calculation, and thus additional risk of sending the model’s predictions awry.

So just how much complexity is in our COVID-19 model?

6. Break the outbreak down into its nuts and bolts to better understand the mathematics.

Our COVID-19 model consists of a system of three differential equations. Come on, you can’t be serious? Yup, three pretty simple equations are all that make up the model. And you know what? Those three equations drive a lot of other COVID-19 models, too. Those three base equations make up the Susceptible – Infected – Removed, or SIR, model, and are defined as:
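
dS/dt = -β·S·I / N
dI/dt = β·S·I / N - γ·I
dR/dt = γ·I

Here, S, I, and R are the numbers of susceptible, infected, and removed people, β is the transmission rate, γ is the removal (recovery or death) rate, and N is the total population.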

In layman’s terms, these equations mean:

dS/dt: The Change in Susceptible People over Time

dS/dt models interactions between susceptible and infected people, and calculates the number of transmissions of the virus based on its transmission rate. It states that for every unit time, the susceptible population decreases by the number of susceptible people who become infected.

dI/dt: The Change in the Number of Infected Individuals over Time

dI/dt calculates the infection rate per unit time by subtracting the number of people who either recover or die from the number of susceptible people who became infected during that time period.

If the outbreak is accelerating, the number of new infections will be greater than the number of people who recover or die, so dI/dt will be positive, and the number of infected people will grow.

If the outbreak is waning, the number of new infections will be less than the number of people who recover or die, so dI/dt will be negative, and the number of infected people will decline.

dR/dt: The Change in the Number of People Who Have Recovered or Died over Time

dR/dt calculates the change in the number of people who are removed from the pool of susceptible and infected population. In other words, it means they have either recovered or died. The calculation simply multiplies the recovery rate by the number of infected people.

To calculate how many people have died from the disease, simply multiply the number of “removed” population by the death rate.

The model uses Python’s Scipy package to solve the system of ordinary differential equations.
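
As a rough sketch of that step, here is how the SIR system above can be solved with SciPy. The parameter values are illustrative stand-ins, not the model’s fitted values:

import numpy as np
from scipy.integrate import solve_ivp

N = 330e6          # total population (US, approximate)
gamma = 1 / 14     # removal rate: assume ~14 days infectious (illustrative)
r0 = 2.5           # illustrative R0; the real model best-fits this daily
beta = r0 * gamma  # transmission rate

def sir(t, y):
    # Right-hand side of the SIR system defined above
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

# Integrate one year forward, starting from 1,000 infected people
sol = solve_ivp(sir, (0, 365), [N - 1000, 1000, 0], t_eval=np.arange(366))
print("Peak simultaneous infections: {:,.0f}".format(sol.y[1].max()))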

7. As uncertainty in a prediction increases, the model output must respond accordingly.

While it may not be obvious, the output of all forecasts is the expected range in which the model thinks the actual value will fall. In terms of weather forecasting, you’ve probably heard meteorologists say things like:

  • Today’s high will be in the upper 70’s to low 80’s
  • Winds will be out of the south at 10 to 15 mph
  • There’s a 30% chance of rain this afternoon

So why do models output a range of values? There are actually two reasons.

Reason #1: It accounts for small deviations or errors in the actual value.

Let’s consider two hypothetical wind forecasts. One says winds will be out of the southwest at 12 mph. The other says winds will be out of the southwest at 10 to 15 mph. Most people wouldn’t give either forecast a second thought. Now, what happens when it blows 13 mph instead of 12? The first forecast is wrong, but the second one is still right. While this may not seem like a big deal in day-to-day weather forecasting, it can be the difference between life and death when you’re modeling, planning for, and responding to extreme events like natural disasters and pandemics.

Reason #2: The model will have a high likelihood of being correct even when uncertainty is high.

As I mentioned earlier, uncertainty gets larger the further out in time a model makes predictions. How do we counteract that increase in uncertainty? By simply making the range larger. There is no better real-world example of this than the “Cone of Uncertainty” that is used to forecast the track of a hurricane.

In the example hurricane forecast below, take note of how the Cone of Uncertainty starts off narrow and then gets wider as you get further out in time because uncertainty increases.

Example of a Cone of Uncertainty for a Hurricane Impacting the East Coast of the United States

Defining a Cone of Uncertainty for Our COVID-19 Model

Now, let’s look at how this translates into COVID-19 modeling. We’ll have a look at two states: one where cases are spiking and one where cases are stable. Take a second and think about what uncertainties might be introduced to each scenario and which plot you would expect to show higher uncertainty.

Here are a few uncertainties I thought up that might impact the scenario where cases are spiking:

  • Will the government implement any restrictions or mandates, such as bans on large gatherings, business closures, or lockdowns to help slow the spread?
  • When will these restrictions be implemented?
  • Even in the absence of government action, does the general public take extra precautions, such as wearing masks and not going out as much, to protect themselves against the rapid spread of the virus? When does this start happening? How much of it is happening already?

A Graphical Look at Our COVID-19 Model’s Cone of Uncertainty

You can probably see that once you start thinking about these, it snowballs really quickly. As a result, you see huge uncertainties in COVID-19 model forecasts for states where cases are spiking, such as Florida, Texas, and Arizona, while you see much less uncertainty in states where COVID-19 has stabilized.

Let’s compare COVID-19 uncertainties for Florida, which is averaging close to 10,000 new cases per day right now, to New York, which was hit viciously hard back in March and April but has since stabilized. The plots are from yesterday’s model run. Take note of how much closer together the model output lines are on the New York plot than on the Florida plot, because of the lower uncertainties in the projections for New York.

Here are the same projections from yesterday’s model run in table form. Note the difference in the size of each range between the two states.

State        Fcst. Apex Date      Fcst. Cases 23 July       Fcst. Cases 9 August
Florida      Jul 10 to Oct 8      224,000 to 1,113,000      224,000 to 1,515,000
New York     May 13 to Jun 2      399,000 to 423,000        399,000 to 437,000

Data from my 9 July, 2020 Model Run

Conclusion

A mathematical modeler has a vast array of tools and tricks at their disposal. Like a carpenter, they must know when, where, and how to use each tool. When properly implemented, model outputs can account for high levels of uncertainty and still be correct in their projections. Just don’t take it too far. You don’t want to lose credibility, either.

Top Photo: The sparkling azure waters of the Sea of Cortez provide a stunning backdrop to the Malecón
Puerto Peñasco, Sonora, Mexico – August, 2019

Simulating the COVID-19 Outbreak in the United States with Gaussian Functions
https://blog.matthewgove.com/2020/03/11/simulating-the-covid-19-outbreak-in-the-united-states-with-gaussian-functions/
Thu, 12 Mar 2020

Well, it looks like the coronavirus has arrived in full-force here in the United States. COVID-19 will likely cause some disruptions to day-to-day life. Being a math person, the outbreak has piqued my interest in modeling and simulating some possible scenarios for the COVID-19 outbreak in the United States.

Before I begin, I do have a couple disclaimers.

  • I claim zero knowledge of viruses, disease, and everything else in the medical field.
  • These simulations are highly idealized and simplified. They are based solely on mathematics, and ignore many of the outside factors that are affecting the current outbreak.

How We Simulate Outbreaks

Most of my experience studying outbreaks came from watching tornado outbreaks across the Great Plains when I lived in Oklahoma. We can use the Gaussian Function (which is just a fancy way to say “bell curve”) to model any type of outbreak, including disease. A standard bell curve looks like this:

Simple bell curve for outbreaks

When outbreaks start, they grow exponentially until they eventually hit some kind of ceiling. With tornado outbreaks, that ceiling is when they use up all of the instability in the lower parts of the atmosphere, which cuts off the storms’ fuel supply. In the case of diseases, even if the pathogen spreads completely unchecked, it can only infect a finite number of people.

The exponential growth will start to slow down as the outbreak approaches the ceiling, or top of the bell curve. It will eventually level off, and then begin to drop. Mathematically, the Gaussian Function that defines the bell curve is:
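
f(x) = a · exp( -(x - b)² / (2c²) )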

In the context of an outbreak, the variables in the above equation are:

  • a is the number of cases (the height of the bell curve) at the peak of the outbreak. It is not the total number of cases.
  • b is the date/time of the peak of the outbreak
  • c is the standard deviation, which is related to how long it takes to reach the peak of the outbreak. It defines the width of the bell curve; roughly three standard deviations fit between the start of the outbreak and its peak.
  • x is the time since the beginning of the outbreak. For the coronavirus, the time is in units of days.

A Few Numbers That Will Be Helpful For Our COVID-19 Simulations in the United States

Before we dive into the simulations, there are a few numbers that will be helpful for simulating the COVID-19 outbreak in the United States. I pulled these numbers off the internet, and in no way, shape, or form do I guarantee that they are accurate.

Edit March 14, 2020: I made an error in my calculations that led to incorrect numbers below. I corrected them in the list below, but the plots still show incorrect values. In those examples, I am just demonstrating the concepts and the actual values are not hugely important.

  • The population of the United States is approximately 330 million
  • There are 2.4 hospital beds per 1,000 people in the US
  • This works out to 792,000 hospital beds total (originally misstated as 7.92 million)
  • Approximately 20% of the COVID-19 cases require hospitalization.
  • This means it will take approximately 3.96 million cases (originally misstated as 39.6 million), or 1.2% (originally 12%) of the population getting sick, to overwhelm the hospital system in the US. The arithmetic is checked below.
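
For reference, the corrected arithmetic works out as:

330,000,000 people × 2.4 beds per 1,000 people = 792,000 beds
792,000 beds ÷ 0.20 hospitalization rate = 3,960,000 cases
3,960,000 ÷ 330,000,000 ≈ 1.2% of the population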

Setting Up the Simulations

We will be writing and running the simulations for several different scenarios with Python. For each scenario, we will input

  • The total number of cases (number of people who contract the disease) at the peak of the outbreak
  • The average number of days it takes for the number of cases to double. In the real world, this number is constantly changing, which makes modeling the outbreak tricky.

The model can use this data to predict when the peak of the outbreak will occur and how long the outbreak will last.

A Worst-Case Scenario

For the first simulation, let’s consider a hypothetical worst-case scenario where COVID-19 spreads unchecked throughout the United States. In this scenario, there are 100 million cases at the outbreak’s peak, and the number of cases doubles every 3 days.

Simple Bell Curve for the United States

You can see the virus spreads rapidly, quickly overwhelming the country’s hospital system in early March before peaking in mid-April. Once you get over the hump, cases also decline rapidly. The odds of this scenario playing out are extremely low.

So how do you control the bell curve? Health officials want to do two things:

  1. Reduce the amplitude of the bell curve (fewer cases at the peak of the outbreak). This reduction will hopefully lead to fewer total cases over the course of the outbreak.
  2. Slow the spread of the disease to prevent overwhelming the hospital system (widening the bell curve).

These goals are accomplished through measures that can include

  • Travel Bans
  • Quarantine and Isolation
  • Banning Mass Gatherings
  • Increased Cleanings of Public Places
  • The general public taking precautions such as working from home, washing/disinfecting their hands, and cancelling travel plans.

We are beginning to see a lot more of these measures implemented in the US this week.

Slowing the Spread of COVID-19 in the United States

Now, let’s consider another hypothetical scenario where those measures are taken to control the spread of the disease. Officials are able to contain the disease to 35 million cases at the peak of the outbreak. They also slow the spread of the disease so the number of cases doubles every 8 days.

Flattening the curve

While the total number of cases (the area under the curve) is similar for both scenarios, you can see that in the second case (orange line), the outbreak does not overwhelm the hospital system. The trade-off is that the outbreak in the second scenario lasts longer, peaking in mid-to-late June. If that’s what it takes to save lives, so be it.

How Does The Number of Cases at the Peak of the Outbreak Affect the Date the Outbreak Peaks?

Amazingly, if you hold the number of days it takes for cases to double constant, there is not a significant difference in the date the outbreak peaks, whether the outbreak peaks at 10 million cases or 100 million. This will be important when we make some “real-world” predictions. Consider some more hypothetical scenarios.

Simulated COVID-19 Outbreak scenarios in the United States, with cases doubling every 7 days.

You can probably see in the graph that an outbreak peaking at 10 million cases occurs around May 15-20, while the peak of an outbreak with 100 million cases occurs shortly after June 1st. That’s a difference of only about three weeks.

How Could This Play Out In The Real World?

I approach this question very cautiously. Please remember that these simulations are very idealized and simplified. They are based solely on mathematics, and do not account for many outside factors that affect the current outbreak.

To try to figure out the most realistic simulation for the United States, we must look at the ongoing outbreaks in Italy and South Korea. In both of those countries, the number of cases doubled every 5-6 days. Early indicators show the doubling rate to be the same in the United States.

Because of so many unknowns, we will look at a number of different scenarios for the number of cases at the peak of the outbreak. The total number of cases in both Italy and South Korea is around 10,000. Since both countries are much smaller than the US, let’s look at scenarios for the outbreak peaking at anywhere from 10,000 up to 100 million cases.

Simulated COVID-19 outbreak in the United States, peaking at 5 million to 100 million cases and doubling every six days.
Simulated COVID-19 outbreak in the United States, peaking at 10,000 to 1 million cases and doubling every six days.

Based on some of the numbers I’ve heard the health experts throwing around, here are a few possible outcomes. Remember that the number of cases at the peak of the outbreak is less than the total number of cases over the course of the entire outbreak.

5 Days to Double, 500,000 to 5 million cases
6 Days to Double, 500,000 to 5 million cases
7 Days to Double, 500,000 to 5 million cases

Again, please take the results of these simulations with a grain of salt. There are so many unknowns about this outbreak that it is impossible to accurately predict what will happen with such a simple, idealized model. Based on what I’ve observed with the coronavirus outbreaks in other countries coupled with the mathematics of the Gaussian Function simulations, it wouldn’t surprise me if we were close to the peak of the outbreak, if not past it, by the time we get into May. But then again, given the uncertainty of everything, those guesses could be way off too.

Until then, buckle up. It’s gonna be a bumpy ride. Next time, we’ll look at some of the same predictions using the SIR Model.

P.S. If any of you are interested, here is the Python code that generated the plots in this post.

#!/usr/bin/env python3
import numpy as np
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import math
import datetime

# Important Numbers:
# US 2.4 Hospital Beds Per 1000 People 
#   --> ~ 792,000 beds total
#   --> ~ 3.96 million cases to overwhelm system
# US Population 330 Million
# Approx 20% of COVID19 Cases Require Hospitalization
# First arrival in United States January 21, 2020

class Covid19Scenario(object):

    def __init__(self, number_infected, days_to_double):
        self.number_of_people_infected = number_infected
        self.days_to_double = days_to_double
        self.peak_day = self.days_to_reach_peak()
    
    def days_to_reach_peak(self):
        day = 0
        xt = 0
        x0 = 1
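        # Growth rate per day. Note this approximates doubling every
        # `days_to_double` days; exact doubling would use
        # r = 2**(1/days_to_double) - 1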
        r = 1 / self.days_to_double
        while xt < self.number_of_people_infected / 2:
            xt = math.floor(x0 * (1 + r)**day)
            day += 1
        return day


# Initialize Variables and Parameters
day_zero = datetime.datetime(2020, 1, 21)
t = np.arange(0, 300)
t_dates = [day_zero + datetime.timedelta(days=int(x)) for x in t]

# Define Scenarios
days_to_double = 7

all_scenarios = [
    Covid19Scenario(100e6, days_to_double),
    Covid19Scenario(50e6, days_to_double),
    Covid19Scenario(20e6, days_to_double),
    Covid19Scenario(10e6, days_to_double),
    Covid19Scenario(5e6, days_to_double),
    Covid19Scenario(4e6, days_to_double),
    Covid19Scenario(3e6, days_to_double),
    Covid19Scenario(2e6, days_to_double),
    Covid19Scenario(1e6, days_to_double),
    Covid19Scenario(5e5, days_to_double),
]

legend_data = []
plt.grid(True)

for scenario in all_scenarios:
    a = scenario.number_of_people_infected      # Height of Bell (Number of Cases)
    b = scenario.peak_day                       # Position of Center of Peak (Day of Outbreak)
    c = scenario.peak_day / 3                   # Standard Deviation (Width of Bell) [Days]
                                                # Note: There are 3 std devs under normal bell curve

    n = (a * np.exp(-(t-b)**2 / (2*c**2))) / 1e6
    if scenario.number_of_people_infected >= 1e6:
        legend_lbl = "Peak {} Million Cases, {} Days to Double".format(
            int(scenario.number_of_people_infected/1e6), 
            scenario.days_to_double
        )
    else:
        legend_lbl = "Peak {:,} Cases, {} Days to Double".format(
            int(scenario.number_of_people_infected), 
            scenario.days_to_double
        )
    p, = plt.plot(t_dates, n, label=legend_lbl)
    legend_data.append(p)

# Plot Hospital Capacity
date_format = mdates.DateFormatter("%b")
capacity_line = [3.96 for x in t]
# p, = plt.plot(t_dates, capacity_line, 'k--', linewidth=0.5, label="Hospital Bed Capacity")
# legend_data.append(p)

ax = plt.gca()
ax.xaxis.set_major_formatter(date_format)
plt.legend(handles=legend_data)

fig = plt.gcf()
fig.set_size_inches(10, 5)

plt.title("Simulated COVID-19 Outbreak Scenarios in the United States")
plt.xlabel("Date")
plt.ylabel("Number of Cases (Millions)")
plt.savefig("figs/scenario-5-to-10-{}-double.png".format(days_to_double))
plt.show()

A Look at the Data From Last Week’s Possible Tornado
https://blog.matthewgove.com/2018/08/08/a-look-at-the-data-from-last-weeks-possible-tornado/
Wed, 08 Aug 2018

During a monsoon storm last week, I observed evidence of a possible tornado. Today, we will look at some of the data from the storm. I built a weather station and data logger at my house that logs data every 5 minutes. While I have plans to put a network of sensors at the house, the weather station currently gets its data from Lake Pleasant, which is about 15-20 miles from my house, via the Weather Underground API. Evidence of a possible tornado near the house obviously won’t show up in data from 15 miles away, but it’s still really neat to look at.

Desert storms are rather unique in nature compared to many other locations and climates. Extremely dry air near the surface causes high bases of the thunderstorms, and can also lead to violent downbursts as the storm’s updraft collapses. As the moist air descends in the storm’s downdraft, evaporative cooling takes place as it hits the dry air near the surface, causing it to accelerate downward even faster than it already was. When it hits the ground, it fans out in all directions, and can cause violent winds, dust storms, and in certain cases, landspouts and tornadoes.

I downloaded the 24-hour data for July 30th (the day the storm hit) from my weather station onto my computer to look for patterns from the storm. Remember from my post on the storm that the mayhem all started between 8:00 and 8:30 PM (the storms would have arrived at Lake Pleasant a little earlier than that). As you might expect, some signs were obvious, and some were not. The temperature and humidity plots are pretty much what you would expect: rain-cooled air causes temperatures to drop and humidities to rise. One interesting thing to note is the bump in dew point (the green line) while the storm passed overhead between 8:00 and 9:30 PM, since dew point is not dependent on temperature. That increase in moisture is from the precipitation as the storm passed overhead. We will confirm this later on with the precipitation plot.

The wind plot was an interesting one to look at, and should be taken with a bit of a grain of salt because wind sensors can give false readings during high winds, which can break them, and dust storms, which can penetrate the sensor’s bearings if it’s not set up properly. Furthermore, the terrain at Lake Pleasant can cause low readings, too. If you’ve never been to the lake, it’s surrounded by steep mountains, which can block the wind.

There are 2 things that immediately jump out at me about this plot:

  1. The maximum wind gust measured at Lake Pleasant was 60 km/h (38 mph). This seems really low, since there were confirmed wind gusts of over 120 km/h (75 mph) at the Deer Valley Airport and along the Loop 303, which are less than 10 miles from the lake but in open desert, away from the mountains.
  2. With the amount of wind that was associated with this storm, why is there suddenly no wind data between 9:30 and 10:30 PM? Did the sensor break or go offline? Did the mountains block the wind?

I can’t say for certain, but my inclination is that the mountains knocked the wind down and caused the low peak wind gust reading, and that the sensor went offline between 9:30 and 10:30 PM. The reason it went offline could be anything from a power or network outage to being struck by lightning.

Remember what I said earlier about the temporary bump in dew point related to the rain? When we look at the precipitation plot, you will notice that the bump in the dew point is about an hour offset (earlier) from the precipitation. The dew point increase lasts from about 7:30 to 9:30 PM, while the precipitation occurs from 8:30 to 10:15 PM. What gives?

Well, it turns out there’s a combination of things that cause the offset, and in reality, the rain actually arrives earlier than the plots show. The tipping bucket rain gauges that are used in today’s weather stations cannot measure the exact start of the rain. The “start” of the rain is indicated by the first tip of the bucket, which usually requires between 5 and 10 millimeters of rain to fall. However, that delay doesn’t make up for the full hour difference, especially given how hard it rains in some of these storms.

The increase in the dew point actually starts when the cool, moist outflow boundary or gust front passes over the weather station, which is usually well ahead of the rain arriving. In these types of monsoon storms in Arizona, the gust front is usually indicated by a haboob. If you’re inside the haboob, you’re actually in the cooler, moist air of the storm’s outflow, even though it probably doesn’t feel like it. So the offset in the dew point and precipitation data is caused by the moist air arriving ahead of the rain, and the rain actually starting before the rain gauge can measure rainfall.

Next up is the pressure plot. I know tornadoes and fronts are localized areas of low pressure, but when a storm passes overhead, the opposite actually happens: you may see a quick drop in pressure as the gust front approaches, but you will get a spike in pressure due to the cooler, more moist air in the storm’s outflow being more dense than the hot, dry desert air.

In the plot below, the gradual drop in pressure throughout the afternoon has nothing to do with the evening thunderstorm. It’s actually caused by the diurnal heating cycle that you see every day. It gets so hot in the Phoenix area this time of year that the air thins out enough for it to be detected by pressure sensors. You can pretty clearly see the spike in pressure starting around 7:30 PM as the gust front and the storm passes overhead.

The final plot is the visibility plot. While this plot has given me some bizarre readings at times, it actually did a pretty good job handling both the haboob and the thunderstorm. Visibilities less than 1 km were right on par with what I observed in the storm.

So again, you obviously can’t confirm any tornadoes from this data (you probably wouldn’t be able to even if the tornado made a direct hit on the weather station), but I think seeing this kind of data in real world examples is really cool. I have also attached the raw data from my weather station in CSV format from 6 PM to midnight in both metric and imperial units if you’re interested further.


data-download-metric.csv

data-download-imperial.csv

Wild Night of Monsoon Storms and Possible Tornadoes in the Desert
https://blog.matthewgove.com/2018/07/31/wild-night-of-monsoon-storms-and-possible-tornadoes-in-the-desert/
Tue, 31 Jul 2018

The summer monsoon kicked into high gear last night as a wild night of dust storms, flash floods, and severe weather ripped through the greater Phoenix area. During the storm, I became very suspicious that a small tornado had hit my house, so after a bit of clean up this morning, I set out to see if there was any evidence of tornadoes. You should probably know that I am a trained weather spotter and also spent 3 years studying tornadoes and severe weather at the University of Oklahoma. Evidence of tornadoes – especially small tornadoes – can be very subtle, but a trained eye can find evidence that may appear “hidden” on the surface.

Before we begin, here’s a quick overview of the geography and how the monsoon storms work. The blue line is the approximate dividing line between the flat, low desert and the mountains, most of which range from about 3,000 to 7,000 feet above sea level. A very common setup in the summer is for storms to form over the mountains as air at the surface is forced up, and then the storms roll off the mountains, gaining momentum as they go downhill (just like a ball rolling downhill), and then come ripping across the desert floor. The storms last night did exactly that. In the radar loop later in this post, take notice of how the storms really start to accelerate very close to the blue line on the map below, especially along the US-60 corridor, which runs northwest from Phoenix.

Shortly after sunset last night, the skies began to darken to our north and east as a line of severe thunderstorms rolled off the mountains. The first thing that hits with these storms is a haboob, which is a powerful dust storm that forms on the gust fronts/outflow boundaries of thunderstorms. When you’re sitting at home, dust storms are harmless, but they usually drop visibilities to less than 1/4 mile.

A haboob from a very similar storm 48 hours earlier approaches from the east

The haboob last night was even more eerie, as you could see the cloud-to-ground lightning strikes in it more and more prominently as it approached. As the sunlight faded into twilight, the winds increased further, and visibilities continued to drop. My weather station, which currently gathers its data from Lake Pleasant, measured a maximum sustained wind of 60 km/h (37 mph) during the storm, but gusts at my house and throughout the area were much higher.

Now, when I say I have a suspicion that we were hit by a small tornado, you need to understand that the conditions I was working in mean it can only ever be a suspicion. The video below was shot about 10-15 minutes before the suspected tornado hit. As you can see, daylight is pretty much gone, and we were in the middle of a huge dust storm. You could not see anything.

Around 8:30 PM, my first hint of a possible tornado was that the potted plants that I keep on a table on my back patio started lifting into the air. I was able to get some of them inside before the mayhem started, but not all of them. The pots on these plants are the cheap plastic ones that don’t weigh anything and are about 6-8 inches in diameter: perfect for becoming projectiles and missiles in this exact scenario. As I was coming back out to rescue the last few plants, I watched as the pots were sucked right off of the bottoms of at least 2 or 3 of the plants and out from underneath my covered patio before being thrown over the house and into the night.

My “garden” during more tranquil conditions

It was right about then that I began to realize that standing outside like that probably wasn’t the smartest thing to do, so I retreated back into the safety of my living room. No more than about 5 seconds after I closed the door, we were hit by a huge wind gust, which I estimated to be in the 120-130 km/h (75-80 mph) range. It was immediately followed by another gust of equal intensity in the exact opposite direction before the wind let off and returned to the direction it had been blowing. I have been racking my brain all day, and I still have yet to come up with something other than a small tornado that can explain that. By about 9:15 PM, the outflow boundary had gotten far enough out in front of the storm to cut off its fuel supply, so it began to rapidly weaken.

I got up this morning with one purpose: to try to find some evidence that had been left behind to support the theory of a tornado. A walk around the house revealed no damage to the house itself, other than a few missing covers from my external outlets. When I lived in Oklahoma, looking for evidence of tornadic winds in trees and grasses was pretty straightforward. Doing it in a desert landscape like this is a whole other ballgame.

Barren Sonoran Desert landscape that would prove to be a challenge for finding evidence of a small tornado.

I wandered across the street and into the desert along the approximate path I thought a possible tornado would have taken. I found a patch of grass that had been blown down in all different directions, but it’s next to impossible to say whether the storm did that or not. About 200 to 300 feet from the house, I found a dead plant that I had been using as a marker in my back yard. It certainly could have been picked up by a tornado and dropped there, but it could have blown down there in straight-line winds, too. I eventually found my outlet covers, some of which were over 500 feet from the house, but the flower pots could not be recovered.

Despite all of the flower pots flying over the house, there was one instance where the plant completely vanished and the pot was left untouched.

The other telling sign of a possible tornado I found was back at the house. Inside my grill stand, where the propane tank lives, I had wedged an empty plastic bottle way in the back corner behind the tank, where there was no possible way it could just blow away in regular wind. I had noticed during the storm that the door on the grill stand got sucked open at one point, and when I looked there this morning, that plastic bottle was nowhere to be found.

So at the end of the day, did we have a tornado? It’s certainly possible, and maybe even likely, but I don’t have enough evidence to definitively say either “yes, we had one” or “no, we didn’t”. We were hit by the end of the squall line, the part that is most likely to produce tornadoes as it bows out, but finding evidence of small tornadoes even in the most ideal landscapes is challenging. Regardless, it’s been a wild few days, and I look forward to what the remainder of the monsoon has to offer over the next month or so.

In the next post, we will analyze some of the meteorological data from Lake Pleasant for this event. I’ll leave you with the radar loop of the entire event (watch how the storms accelerate as they come off the mountains and approach US-60 NW of Phoenix), courtesy of the National Weather Service in Phoenix.

Three Factors Explaining Oklahoma’s Quiet Storm Season
https://blog.matthewgove.com/2014/05/27/three-factors-explaining-oklahomas-quiet-storm-season/
Tue, 27 May 2014

It’s no secret by now that the severe weather season in Oklahoma this year has been nearly non-existent. The tornado count for the entire state so far this year can pretty much be counted on one hand. So what has caused the storm season to be so quiet? There are plenty of theories, but I will look at three of the more obvious ones.

Reason 1: Extreme Drought in Oklahoma

As of May 20, 2014, 95% of Oklahoma is experiencing some sort of drought, and more than 60% of the state is in either extreme or exceptional drought, the two most severe categories. This area consists mainly of the western 2/3 of Oklahoma, plus the panhandle.

When areas such as western Oklahoma have been under a severe drought for as long as they have, there is hardly any soil moisture. When the spring winds blow over such a large area of extreme drought, it acts like a hair dryer to dry out the lower levels of the atmosphere. With no low-level moisture, few storms can form, and the ones that do are often high-based and poorly organized.

Reason 2: Out of Phase Large Scale Systems

With severe weather, the term “out of phase” refers to the timing of the severe weather ingredients occurring at different points during the day. It seems like most of the powerful storm systems this year have been out of phase as they crossed Oklahoma. Remember there are four main ingredients required to generate severe weather: instability, moisture, lift, and shear.

Let’s take a look at the April 27, 2014 severe weather event that began in Oklahoma and continued into Arkansas and Mississippi. April 26th saw ample moisture return to Oklahoma and Arkansas for severe weather, and with the jet stream directly overhead, there was plenty of shear over both states. Daytime heating is the primary driver of instability, peaking around 4-5 PM and reaching its minimum around dawn. High temperatures across Oklahoma and Arkansas were forecast to be well into the 80’s on April 27th, providing plenty of instability for severe weather.

The only ingredient that was missing is the lift. Lift is provided by the main upper-level storm system as it passes overhead. On April 27th, the main upper-level low passed over the Oklahoma City area at 6 AM, when instability was at its minimum. This is as close to completely out of phase as you can get. There was not enough instability at that hour of the morning to support supercells, so we ended up with a squall line that had some gusty winds and large hail.

As that system progressed east throughout the day, instability in the moist sector skyrocketed with the daytime heating. By the time peak heating rolled around and everything was then in phase, the main upper-level low was over western Arkansas, where there was more than enough instability to support supercells. Supercells exploded on the dryline and a tornado outbreak began shortly thereafter, with a few violent tornadoes occurring that evening in the Little Rock area. Shift the timing of the upper-level low 12 hours earlier and that tornado outbreak would have been occurring along the I-35 corridor in Oklahoma.

Reason 3: The Unseasonably Cold Winter

It’s also no secret that this past winter was very unseasonably cold for nearly the entire USA. This included the Gulf of Mexico, the southern plains’ main source of rich tropical moisture in the spring. As a result, water temperatures in the Gulf of Mexico were much colder than normal. Warmer air can hold more moisture, so colder Gulf temperatures meant less moisture return to the southern plains. Combine this with the extreme drought in Oklahoma, and that’s all she wrote.

Has This Happened Before?

Believe it or not, this has happened before. I’m not going to go into the details, but there were zero tornadoes in Oklahoma in May, 2005, 4 tornadoes in Oklahoma in May, 2006, and 3 tornadoes in Oklahoma in May, 2012. Years such as 2010, where there were 91 tornadoes in May, seem to more than make up for it, though. You can see the tornado statistics at the National Weather Service’s Monthly Tornado Statistics for Oklahoma.

As of this posting, there have only been 5 tornadoes in Oklahoma in 2014 (4 in April and 1 in May). Since 1950, the record low for Oklahoma tornadoes is 17, which occurred in 1988. If the drought really takes hold this summer (it sure seems headed that way), it would not surprise me at all to see that record fall.

A Dual Pol Radar Tutorial: The Vilonia, Arkansas Tornado
https://blog.matthewgove.com/2014/05/02/a-dual-pol-radar-tutorial-the-vilonia-arkansas-tornado/
Fri, 02 May 2014

Dual polarization radar is an incredibly powerful tool for tracking severe weather. One important use for dual-pol radar is to track strong tornadoes by detecting the debris field with the radar. Similar to the Moore tornado last year, the April 27, 2014 Vilonia, Arkansas tornado passed about 10 miles north and west of the radar site, providing about as good a low-level scan of the tornado as possible.

First, let’s recall what a classic supercell looks like on radar. The area of greatest interest is the hook echo and tornado, located near the back of the storm. Remember that when a tornado picks up debris and throws it in the air, it has a high reflectivity (the radar beam bounces off of it very easily), and shows up in the scan below as purple and pink. The scan is from last year’s Moore tornado.

Radar Image of Classic Supercell — May 20, 2013 Moore, OK Tornado

When we look at the reflectivity scan of a tornadic supercell, we want to look for three features: the hook echo, the tornado, and the inflow notch. Since the hook echo and tornado can sometimes be hard to see on the radar scan, locating the inflow notch is an easy way to locate a possible tornado. The inflow notch, which appears on radar as an area of no precipitation ahead of the tornado (black on the above scan), is the air from the surface being sucked into the tornado. On many storms, the inflow notch looks like someone took a bite out of the side of the storm. I do want to emphasize that all an inflow notch means is that the storm is rotating. It does not indicate whether or not a tornado is on the ground. If there is a tornado, it will be just to the south/west of the inflow notch. In fact, reflectivity alone cannot 100 percent confirm a tornado is on the ground.

We will look at four different radar parameters to determine if a tornado is present. We will look at reflectivity and velocity (wind speed and direction), as well as the correlation coefficient and differential reflectivity, which are dual-pol variables that indicate the shape of the object the radar is detecting. Any one of these scans by itself cannot confirm for certain that a tornado is on the ground, but with all four together, you can say with near 100% certainty that a tornado is on the ground.

Now, let’s have a look at the Vilonia tornado, starting with the reflectivity on the left below. If you’re not well-versed in radar, the hook echo may be a little hard to detect, so I have indicated the location of the inflow notch. The tornado is the pink dot just southwest of the inflow notch.

On the right is the velocity scan. Because of the way radars work, the radar can only detect motion going towards it (shown as green on the scan) or away from it (shown as red on the scan); it cannot detect side-to-side motion. But since tornadoes rotate, two wind vectors somewhere in the tornado are always aligned perfectly with the radar beam (one towards it and one away from it), providing a full reading of the wind speed vector. The brighter the color, the stronger the wind, but do note that the brightest reds show up as orange on this particular scan. To locate the tornado, look for a small, really bright red spot right next to a small, really bright green spot. This is called a couplet, and it indicates a small area of really strong winds going in one direction right next to a small area of really strong winds going in the opposite direction; to put it simply, the storm is rapidly rotating. The couplet is circled in the velocity scan below. The darkest orange/brown color indicates wind speeds close to 140 mph!

Reflectivity Scan – Vilonia, Arkansas Tornado

 

Velocity Scan – Vilonia, Arkansas Tornado

So far we have shown that the storm has an area of very strong rotation in the hook echo of a supercell, and that area of rotation has a very high reflectivity. This is enough to say with 90-95 percent certainty that there is a tornado, but not with complete 100 percent certainty. To do that, we must look at the dual pol data. The best dual pol variable to look at is the correlation coefficient. I’m not going to go into the math behind it, but the definition of the correlation coefficient is the measure (ratio) of how similarly the horizontally and vertically polarized pulses are behaving within a pulse volume. Its values range from 0 to 1. For example, if a perfectly spherical object is being detected, the horizontal and vertical pulses will be the same, so when you divide one by the other, you will get 1. Correlation coefficients for rain are generally above 0.95, and are typically above 0.85 for hail. On the other hand, if the radar is detecting a tree limb or a 2×4 flying around, they are nowhere close to being spherical, so their correlation coefficients will be much lower (0.3 to 0.4 at the highest). If there is a debris cloud around the tornado, correlation coefficients will be low (less than 0.5 or 0.6) in the same spot as the area of intense rotation on the velocity scans. Low correlation coefficients are indicated by dark blue and black colors.
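
For reference, the textbook definition, where S_hh and S_vv are the co-polar horizontal and vertical scattering amplitudes, the asterisk denotes the complex conjugate, and the angle brackets denote averaging over the pulse volume, is:

ρ_hv = |⟨S_vv · S_hh*⟩| / sqrt( ⟨|S_hh|²⟩ · ⟨|S_vv|²⟩ )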

Differential reflectivity is closely related: it is the difference in returned energy (reflectivity) between the horizontal and vertical pulses of the radar. Since reflectivity is measured on a logarithmic scale, that difference works out to a ratio of the horizontal reflectivity to the vertical reflectivity. Differential reflectivity values in a tornado's debris field will also be very low, and can be seen in gray and black in the scan below.
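For readers who want the definition written out, differential reflectivity in its standard textbook form is

Z_{DR} = 10 \log_{10}\!\left(\frac{Z_H}{Z_V}\right)\ \text{dB}

where Z_H and Z_V are the returned power from the horizontally and vertically polarized pulses. A sphere returns Z_H = Z_V, giving 0 dB; flattened raindrops give positive values; and randomly tumbling debris averages out to noisy values near zero.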

Correlation Coefficient – Vilonia, Arkansas Tornado

 

Differential Reflectivity – Vilonia, Arkansas Tornado

Now we have determined that there is an area of very intense rotation in the hook echo of a supercell, and that objects with very low correlation coefficients and differential reflectivity values sit in the same location as that area of intense rotation, indicating that very non-spherical objects are being detected about 600 feet in the air and a debris field is present. The presence of a debris field allows us to say with 100 percent certainty that a tornado is on the ground and likely doing damage.
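To tie all four checks together, here is a toy threshold function in the spirit of this tutorial. The thresholds are the rough rules of thumb quoted above, not official debris-signature criteria, and the function name and inputs are my own invention.

def likely_debris_signature(reflectivity_dbz, gate_to_gate_delta_v_ms,
                            correlation_coeff, differential_refl_db):
    strong_rotation = gate_to_gate_delta_v_ms >= 50.0   # rapid couplet on velocity
    high_reflectivity = reflectivity_dbz >= 45.0        # debris reflects strongly
    non_spherical = correlation_coeff <= 0.6            # lofted debris, not rain or hail
    noisy_zdr = abs(differential_refl_db) <= 0.5        # tumbling debris averages near 0 dB
    return strong_rotation and high_reflectivity and non_spherical and noisy_zdr

# The gate circled in the Vilonia scans would pass all four checks.
print(likely_debris_signature(55.0, 80.0, 0.35, 0.1))  # True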

Storm Season Tries to Get Going in Oklahoma https://blog.matthewgove.com/2014/04/03/storm-season-tries-to-get-going-in-oklahoma/ Thu, 03 Apr 2014 20:10:28 +0000

The storm chasing season tried very hard to get started yesterday, but it just wasn't meant to be. All of the ingredients were in place, but a deck of cirrus clouds (which limited instability by keeping surface temperatures down), a strong cap, and weak forcing along the warm front and dryline boundaries prevented any storms from forming until sunset.

It was a tough decision not to chase, but in the end, driving 2-3 hours only to chase for at most 45 minutes and then drive the same 2-3 hours home just wasn't worth it. When all was said and done, quite a few hail-producing storms went up in Kansas, but all the action was after sunset. We'll give it another shot next time.

Today is a much different story as far as storm coverage and boundary forcing goes. The SPC has a Moderate Risk up for much of Arkansas and southern Missouri, and storms have already begun to initiate on the cold front and dryline in eastern Oklahoma and northeast Texas. All aspects of severe weather are possible: tornadoes, wind, and hail. The highest risk of tornadoes will be across the southern half of Missouri and the northern half of Arkansas. It will definitely be worth watching this afternoon.

An Interesting Perspective of the Moore Tornado Path https://blog.matthewgove.com/2014/02/20/an-interesting-perspective-of-the-moore-tornado-path/ Thu, 20 Feb 2014 21:30:55 +0000

The first time I crossed the path of the May 20, 2013 Moore tornado at night, I was really taken aback. Both sides of I-35 the entire way through Moore are normally lined with shops, strip malls, entertainment venues, and hotels, all lit up at night. The tornado, however, wiped out everything in its path, leaving a large and very noticeable gap in the lit area.

I shot the video below crossing the path of the tornado on I-35 going northbound from Norman towards Oklahoma City at night. The video starts at the S 19th St exit, crosses the tornado path where the medical center once stood (which will be on the left), and ends at the S 4th St overpass. You will notice that both sides of the highway all of a sudden get very dark. That is the tornado path you are seeing.

Weekly Severe Weather Outlook: October 7, 2013 https://blog.matthewgove.com/2013/10/07/weekly-severe-weather-outlook-october-7-2013/ Mon, 07 Oct 2013 21:00:12 +0000

With the beginning of the fall or “second” severe weather season here on the Great Plains, coupled with the tropics remaining quiet, I am going to shift this week’s discussion from the tropics back closer to home and talk about severe weather. The fall severe weather season started quite emphatically this past weekend, with an EF-4 tornado in Nebraska and close to 4 feet of snow in parts of South Dakota.

Another powerful upper-level storm system is set to move across the United States later this week, and a cold front will sweep across parts of the central and northern plains between Thursday and Saturday. While it is still too early to say exactly what will happen, I would expect to see at least a few severe storms somewhere in that time frame.

Before we jump into the details, remember that to generate severe weather, you need four main ingredients: instability, moisture, lift, and shear. Let’s start by looking at the big picture forecast from the GFS for Thursday evening at 7 PM CDT. Below you will find the upper-level jet stream map, as well as a surface temperature/wind map, with the approximate location of the cold front labeled.

500 mb Wind and Height (GFS Forecast)

Surface Temperature and Wind (GFS Forecast)

I normally use CAPE (Convective Available Potential Energy) to assess instability. Predicted CAPE values are between 1000 and 1500 J/kg, which is relatively low for the plains but enough to generate thunderstorms, especially if the front is powerful and can provide large amounts of upward forcing (the lifting mechanism). I have also included a plot of Theta-E to show the axis of higher instability across western Kansas and Nebraska, extending south into the Texas and Oklahoma Panhandles. The easiest way to describe Theta-E is that it combines temperature and moisture into one index, which is essentially what instability is. Higher Theta-E values indicate where the warmer, more humid air is located.
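If you want to compute CAPE and Theta-E yourself, the open-source MetPy library implements both. Here is a quick sketch with a made-up sounding; a real analysis would feed in an observed or model-forecast profile.

import numpy as np
from metpy.calc import cape_cin, equivalent_potential_temperature, parcel_profile
from metpy.units import units

# Hypothetical plains sounding, pressure decreasing with height.
p = np.array([1000, 925, 850, 700, 500, 300]) * units.hPa
T = np.array([28, 22, 16, 6, -12, -40]) * units.degC
Td = np.array([16, 14, 10, -5, -25, -55]) * units.degC

# Theta-E folds temperature and moisture into a single number:
# higher values mean warmer, more humid, more unstable air.
theta_e = equivalent_potential_temperature(p, T, Td)
print(f"Surface Theta-E: {theta_e[0]:.1f}")

# CAPE: lift a surface parcel and integrate its buoyancy.
profile = parcel_profile(p, T[0], Td[0])
cape, cin = cape_cin(p, T, Td, profile)
print(f"CAPE: {cape:.0f}, CIN: {cin:.0f}")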

GFS Predicted CAPE for Thursday, Oct. 10th at 7 PM CDT 

GFS Predicted Theta-E for Thursday, Oct. 10th at 7 PM CDT

One ingredient that is noticeably lacking in the latest model runs is moisture. Dewpoints ahead of the cold front will struggle to reach 60°F, if they get that high at all, which explains the low instability values. The cold front that swept through over the weekend left an extremely dry air mass over much of the plains, and this week's storm system is coming in so close behind it that there is simply not enough time for significant moisture to return to Kansas and Nebraska from the Gulf of Mexico. A lack of moisture often results in LP (low-precipitation) or high-based storms; at worst, it can prevent storms from forming altogether, though that is unlikely with this week's system.

The final ingredient is wind shear. The axis of maximum shear contains values above 50 knots, which is plenty to produce severe thunderstorms. That axis also aligns almost perfectly with the axis of maximum instability, which should set the stage for thunderstorms as the cold front sweeps through the area and provides the lift.
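For the record, bulk shear is just the vector difference between the wind at the bottom and top of a layer, commonly 0-6 km for supercells. Here is a sketch with MetPy, again with made-up profile values.

import numpy as np
from metpy.calc import bulk_shear, wind_speed
from metpy.units import units

# Hypothetical wind profile with heights above ground.
p = np.array([1000, 850, 700, 500, 300]) * units.hPa
h = np.array([100, 1500, 3100, 5800, 9200]) * units.meter
u = np.array([0, 20, 35, 50, 60]) * units.knots
v = np.array([5, 10, 20, 30, 35]) * units.knots

# Vector wind difference across the lowest 6 km.
u_shr, v_shr = bulk_shear(p, u, v, height=h, depth=6 * units.km)
print(f"0-6 km bulk shear: {wind_speed(u_shr, v_shr):.0f}")  # ~57 knots here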

GFS Predicted Dewpoint for Thursday, Oct. 10th at 7 PM CDT 

 

GFS Predicted Shear for Thursday, Oct. 10th at 7 PM CDT

So the bottom line at this point: the most likely outcome is a squall line forming on the leading edge of the cold front, extending from western Nebraska back through western Kansas and into the Texas Panhandle. The most likely area to see severe weather is western Kansas, where the four severe weather ingredients come together best. I am not anticipating a big outbreak with this system. Wind and hail will be the primary threats, but a few isolated tornadoes will certainly be possible. Vertical wind profiles will be very favorable for supercells, so if discrete cells can fire ahead of the main squall line, they could become supercellular despite the low instability. We are still a long way out, and things will likely change, so stay tuned for more updates as we get closer.

May 31st El Reno Tornado May Be the Most Powerful Tornado Ever Recorded https://blog.matthewgove.com/2013/09/21/may-31st-el-reno-tornado-may-be-the-most-powerful-tornado-ever-recorded/ Sat, 21 Sep 2013 16:00:30 +0000

Just a few weeks after the National Weather Service downgraded the May 31, 2013 El Reno, Oklahoma tornado from EF-5 to EF-3, a research paper published this week suggested that this tornado may be the largest and most powerful (note that I say powerful, not destructive) tornado ever recorded, having been upgraded from 2.6 to 4.3 miles wide. Yes, 4.3 miles wide. The tornado can be seen in the graphic below (each square is 1 mile by 1 mile).

May 31, 2013 El Reno Tornado on Radar, from KFOR

When compared to the two Moore tornadoes of May 3, 1999 and May 20, 2013, as well as the 2011 Joplin tornado, the numbers can be hard to get your head around. The May 3, 1999 Bridge Creek/Moore tornado has long set the bar for maximum tornado power and destruction. The 2011 Joplin tornado and the 2013 Moore tornado pretty clearly eclipsed it as far as damage and destruction go, but neither came close to its power. I will let the following table tell the tale.

Doppler radar data from the mobile radar trucks also apparently shows that the subvortices inside the El Reno tornado, which are essentially mini-tornadoes embedded in the larger circulation of the main tornado (often referred to as a multi-vortex tornado), were up to a mile wide. Let me put that into perspective: the subvortices inside the El Reno tornado were each roughly the width of the entire May 3rd and Joplin tornadoes. The exact strength of these subvortices is still unknown at this time.

Interestingly enough, the only other well-documented tornado that comes close to the width of the El Reno tornado also occurred on May 3, 1999: the tornado that struck the town of Mulhall, Oklahoma. That tornado was not as destructive because it did not strike a major population center, but the numbers are still very impressive.

It will be very interesting to see what else is uncovered as these tornadoes are studied further, and what changes are made in response to it. I will pass along the information as it comes in, so stay tuned.

See also: Why the May 31st El Reno Tornado was Downgraded to EF-3
