from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))

Estimating COVID-19's $R_t$ in Real-Time

Kevin Systrom - April 17

In any epidemic, $R_t$ is the measure known as the effective reproduction number. It's the number of people who become infected per infectious person at time $t$. The most well-known version of this number is the basic reproduction number: $R_0$ when $t=0$. However, $R_0$ is a single measure that does not adapt with changes in behavior and restrictions.

As a pandemic evolves, increasing restrictions (or potentially relaxing them) changes $R_t$. Knowing the current $R_t$ is essential. When $R_t\gg1$, the pandemic will spread through a large part of the population. If $R_t<1$, the pandemic will slow quickly before it has a chance to infect many people. The lower the $R_t$, the more manageable the situation. In general, any $R_t<1$ means things are under control.

The value of $R_t$ helps us in two ways. (1) It helps us understand how effective our measures have been at controlling an outbreak, and (2) it gives us vital information about whether we should increase or reduce restrictions based on our competing goals of economic prosperity and human safety. Well-respected epidemiologists argue that tracking $R_t$ is the only way to manage through this crisis.

Yet, today, we don't use $R_t$ in this way. In fact, the only real-time measure I've seen has been for Hong Kong. More importantly, $R_t$ at a national level alone is not granular enough to be useful. Instead, to manage this crisis effectively, we need to know $R_t$ at a local (state, county, and/or city) level.

What follows is a solution to this problem at the US State level. It's a modified version of a solution created by Bettencourt & Ribeiro (2008) to estimate real-time $R_t$ using a Bayesian approach. While their paper estimates a static $R$ value, here we introduce a process model with Gaussian noise to estimate a time-varying $R_t$.

If you have questions, comments, or improvements, feel free to get in touch: hello@systrom.com. And if it's not entirely clear, I'm not an epidemiologist. At the same time, data is data and statistics are statistics, and this is based on work by well-known epidemiologists, so you can calibrate your beliefs as you wish. In the meantime, I hope you can learn something new, as I did, by reading through this example. Feel free to take this work and apply it elsewhere – internationally or to counties in the United States.

Additionally, a huge thanks to Frank Dellaert, who suggested the addition of the process model, and to Adam Lerer, who implemented the changes. Not only did I learn something new, but it also made the model much more responsive.

import pandas as pd
import numpy as np

from matplotlib import pyplot as plt
from matplotlib.dates import date2num, num2date
from matplotlib import dates as mdates
from matplotlib import ticker
from matplotlib.colors import ListedColormap
from matplotlib.patches import Patch

from scipy import stats as sps
from scipy.interpolate import interp1d

from IPython.display import clear_output

FILTERED_REGIONS = []
FILTERED_REGION_CODES = []

%config InlineBackend.figure_format = 'retina'

Bettencourt & Ribeiro's Approach

Every day, we learn how many more people have COVID-19. This new case count gives us a clue about the current value of $R_t$. We also figure that the value of $R_t$ today is related to the value of $R_{t-1}$ (yesterday's value) and, for that matter, every previous value $R_{t-m}$.

With these insights, the authors use Bayes' rule to update their beliefs about the true value of $R_t$ based on how many new cases have been reported each day.

This is Bayes' Theorem as we'll use it:

$$ P(R_t|k)=\frac{P(k|R_t)\cdot P(R_t)}{P(k)} $$

This says that, having seen $k$ new cases, we believe the distribution of $R_t$ is equal to:

  • The likelihood of seeing $k$ new cases given $R_t$, times ...
  • the prior belief of the value of $R_t$ without the data, $P(R_t)$ ...
  • divided by the probability of seeing this many cases in general.

This is for a single day. To make it iterative: every day that passes, we use yesterday's prior $P(R_{t-1})$ to estimate today's prior $P(R_t)$. We will assume the distribution of $R_t$ to be a Gaussian centered around $R_{t-1}$, so $P(R_t|R_{t-1})=\mathcal{N}(R_{t-1}, \sigma)$, where $\sigma$ is a hyperparameter (see below on how we estimate $\sigma$). So on day one:

$$ P(R_1|k_1) \propto P(R_1)\cdot \mathcal{L}(R_1|k_1)$$

On day two:

$$ P(R_2|k_1,k_2) \propto P(R_2)\cdot \mathcal{L}(R_2|k_2) = \sum_{R_1} {P(R_1|k_1)\cdot P(R_2|R_1)\cdot\mathcal{L}(R_2|k_2) }$$

etc.
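
To make this concrete, here is a minimal sketch of the recursion on a discretized grid of candidate $R_t$ values, using the Poisson likelihood and the $\lambda$–$R_t$ relationship developed in the next two sections, with a few made-up case counts. The full implementation appears in get_posteriors() further below.

import numpy as np
from scipy import stats as sps

r_grid = np.linspace(0, 12, 1201)                 # candidate values of R_t
posterior = np.ones_like(r_grid) / len(r_grid)    # day-0 prior: uniform

sigma = 0.15     # process noise (the hyperparameter discussed later)
gamma = 1/7      # reciprocal of the serial interval
cases = [20, 25, 32, 41]                          # made-up daily new case counts

# Column j holds P(R_today = r_i | R_yesterday = r_j), normalized to sum to 1
process = sps.norm(loc=r_grid, scale=sigma).pdf(r_grid[:, None])
process /= process.sum(axis=0)

for k_prev, k in zip(cases[:-1], cases[1:]):
    # P(R_t|R_{t-1}): diffuse yesterday's posterior with Gaussian noise
    prior = process @ posterior

    # Likelihood of today's count k for each candidate R_t
    lam = k_prev * np.exp(gamma * (r_grid - 1))
    likelihood = sps.poisson.pmf(k, lam)

    # Bayes' rule, normalized over the grid
    posterior = prior * likelihood
    posterior /= posterior.sum()

print(r_grid[posterior.argmax()])                 # most likely R_t on the last day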

Choosing a Likelihood Function $P\left(k_t|R_t\right)$

A likelihood function says how likely we are to see $k$ new cases, given a value of $R_t$.

Any time you need to model 'arrivals' over some period of time, statisticians like to use the Poisson distribution. Given an average arrival rate of $\lambda$ new cases per day, the probability of seeing $k$ new cases is distributed according to the Poisson distribution:

$$P(k|\lambda) = \frac{\lambda^k e^{-\lambda}}{k!}$$

# Column vector of k
k = np.arange(0, 70)[:, None]

# Different values of Lambda
lambdas = [10, 20, 30, 40]

# Evaluate the probability mass function (remember: Poisson is discrete)
y = sps.poisson.pmf(k, lambdas)

# Show the resulting shape
print(y.shape)
(70, 4)

Note: this was a terse expression, which makes it tricky. All I did was make $k$ a column vector. By giving it a column for $k$ and a 'row' for $\lambda$, it will evaluate the PMF over both and produce an array that has $k$ rows and $\lambda$ columns. This is an efficient way of producing many distributions all at once, and you will see it used again below!

fig, ax = plt.subplots()

ax.set(title='Poisson Distribution of Cases\n $p(k|\lambda)$')

plt.plot(k, y,
         marker='o',
         markersize=3,
         lw=0)

plt.legend(title="$\lambda$", labels=lambdas);

The Poisson distribution says that if you think you're going to have $\lambda$ cases per day, you'll probably get that many, plus or minus some variation based on chance.
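
To put a number on 'some variation': the standard deviation of a Poisson distribution is $\sqrt{\lambda}$, so for $\lambda=20$ daily counts of roughly 15–25 are entirely unremarkable. A quick check:

from scipy import stats as sps

print(sps.poisson.mean(20), sps.poisson.std(20))   # 20.0, ~4.47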

But in our case, we know there have been $k$ cases and we need to know what value of $\lambda$ is most likely. In order to do this, we fix $k$ in place while varying $\lambda$. This is called the likelihood function.

For example, imagine we observe $k=20$ new cases, and we want to know how likely each $\lambda$ is:

k = 20

lam = np.linspace(1, 45, 90)

likelihood = pd.Series(data=sps.poisson.pmf(k, lam),
                       index=pd.Index(lam, name='$\lambda$'),
                       name='lambda')

likelihood.plot(title=r'Likelihood $P\left(k_t=20|\lambda\right)$');

This says that if we see 20 cases, the most likely value of $\lambda$ is (not surprisingly) 20. But we're not certain: it's possible $\lambda$ was 21 or 17 and we saw 20 new cases by chance alone. It also says that it's unlikely $\lambda$ was 40 and we saw 20.

Great. We have $P\left(\lambda_t|k_t\right)$, which is parameterized by $\lambda$, but we were looking for $P\left(k_t|R_t\right)$, which is parameterized by $R_t$. We need to know the relationship between $\lambda$ and $R_t$.

Connecting $\lambda$ and $R_t$

The key insight to making this work is to realize there's a connection between $R_t$ and $\lambda$. The derivation is beyond the scope of this notebook, but here it is:

$$ \lambda = k_{t-1}e^{\gamma(R_t-1)}$$

where $\gamma$ is the reciprocal of the serial interval (about 7 days for COVID-19). Since we know the new case count from the previous day, we can now reformulate the likelihood function as a Poisson distribution parameterized by fixing $k$ and varying $R_t$.

$$ \lambda = k_{t-1}e^{\gamma(R_t-1)}$$

$$P\left(k|R_t\right) = \frac{\lambda^k e^{-\lambda}}{k!}$$
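
As a quick sanity check of this relationship (using the same $\gamma=1/7$ assumed below and a made-up previous-day count): an $R_t$ of exactly 1 means we expect the same number of new cases as yesterday, while values above or below 1 scale that expectation up or down exponentially.

import numpy as np

gamma = 1/7      # reciprocal of the serial interval, as below
k_prev = 20      # yesterday's new cases (made up)

for r_t in [0.5, 1.0, 1.5, 2.0]:
    lam = k_prev * np.exp(gamma * (r_t - 1))
    print(f"R_t = {r_t:.1f} -> expected new cases today = {lam:.1f}")
# R_t = 1.0 reproduces 20.0 exactly; R_t = 2.0 gives about 23.1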

Evaluating the Likelihood Function

To continue our example, let's imagine a sample of new case counts $k$. What is the likelihood of different values of $R_t$ on each of those days?

k = np.array([20, 40, 55, 90])

# We create an array for every possible value of Rt
R_T_MAX = 12
r_t_range = np.linspace(0, R_T_MAX, R_T_MAX*100+1)

# Gamma is 1/serial interval
# https://wwwnc.cdc.gov/eid/article/26/7/20-0282_article
# https://www.nejm.org/doi/full/10.1056/NEJMoa2001316
GAMMA = 1/7

# Map Rt into lambda so we can substitute it into the equation below
# Note that we have N-1 lambdas because on the first day of an outbreak
# you do not know what to expect.
lam = k[:-1] * np.exp(GAMMA * (r_t_range[:, None] - 1))

# Evaluate the likelihood on each day and normalize sum of each day to 1.0
likelihood_r_t = sps.poisson.pmf(k[1:], lam)
likelihood_r_t /= np.sum(likelihood_r_t, axis=0)

# Plot it
ax = pd.DataFrame(
    data = likelihood_r_t,
    index = r_t_range
).plot(
    title='Likelihood of $R_t$ given $k$',
    xlim=(0,10),
    figsize=(6, 2.5)
)

ax.legend(labels=k[1:], title='New Cases')
ax.set_xlabel('$R_t$');

You can see that on each day we have an independent guess for $R_t$. The goal is to combine the information we have about previous days with the current day. To do this, we use Bayes' theorem.

Performing the Bayesian Update

To perform the Bayesian update, we need to multiply the likelihood by the prior (which, in this simplified example, is just the previous day's posterior, without the Gaussian process step yet) to get the posteriors. Let's do that using the cumulative product of each successive day:

posteriors = likelihood_r_t.cumprod(axis=1)
posteriors = posteriors / np.sum(posteriors, axis=0)

columns = pd.Index(range(1, posteriors.shape[1]+1), name='Day')
posteriors = pd.DataFrame(
    data = posteriors,
    index = r_t_range,
    columns = columns)

ax = posteriors.plot(
    title='Posterior $P(R_t|k)$',
    xlim=(0,10),
    figsize=(6,2.5)
)
ax.legend(title='Day')
ax.set_xlabel('$R_t$');

Notice how on Day 1, our posterior matches Day 1's likelihood from above? That's because we have no information other than that day. However, when we update the prior using Day 2's information, you can see the curve has moved left, but not nearly as far left as the likelihood for Day 2 from above. This is because Bayesian updating uses information from both days and effectively averages the two. Since Day 3's likelihood is in between the other two, you see a small shift to the right, but more importantly: a narrower distribution. We're becoming more confident in our beliefs about the true value of $R_t$.

From these posteriors, we can answer important questions such as "What is the most likely value of $R_t$ each day?"

most_likely_values = posteriors.idxmax(axis=0)
most_likely_values
Day
1    5.85
2    4.22
3    4.33
dtype: float64

We can also obtain the highest density intervals for $R_t$:

def highest_density_interval(pmf, p=.9, debug=False):
    # If we pass a DataFrame, just call this recursively on the columns
    if(isinstance(pmf, pd.DataFrame)):
        return pd.DataFrame([highest_density_interval(pmf[col], p=p) for col in pmf],
                            index=pmf.columns)
    
    cumsum = np.cumsum(pmf.values)
    
    # N x N matrix of total probability mass for each low, high
    total_p = cumsum - cumsum[:, None]
    
    # Return all indices with total_p > p
    lows, highs = (total_p > p).nonzero()
    
    # Find the smallest range (highest density)
    best = (highs - lows).argmin()
    
    low = pmf.index[lows[best]]
    high = pmf.index[highs[best]]
    
    return pd.Series([low, high],
                     index=[f'Low_{p*100:.0f}',
                            f'High_{p*100:.0f}'])

hdi = highest_density_interval(posteriors, debug=True)
hdi.tail()
     Low_90  High_90
Day
1      3.89     7.55
2      2.96     5.33
3      3.42     5.12

Finally, we can plot both the most likely values for $R_t$ and the HDIs over time. This is the most useful representation as it shows how our beliefs change with every day.

ax = most_likely_values.plot(marker='o',
                             label='Most Likely',
                             title=f'$R_t$ by day',
                             c='k',
                             markersize=4)

ax.fill_between(hdi.index,
                hdi['Low_90'],
                hdi['High_90'],
                color='k',
                alpha=.1,
                lw=0,
                label='HDI')

ax.legend();

We can see that the most likely value of $R_t$ changes with time and the highest-density interval narrows as we become more sure of the true value of $R_t$ over time. Note that since we only had four days of history, I did not apply the process to this sample. Next, however, we'll turn to a real-world application where this process is necessary.

Real-World Application to India Data

Setup

India state data sourced from: https://raw.githubusercontent.com/covid19india/api/master/csv/latest/state_wise_daily.csv

dfi = pd.read_csv("http://api.covid19india.org/states_daily_csv/confirmed.csv")
dfi.head()
dfi.columns
Index(['date', 'TT', 'AN', 'AP', 'AR', 'AS', 'BR', 'CH', 'CT', 'DATEYMD', 'DD',
       'DL', 'DN', 'GA', 'GJ', 'HP', 'HR', 'JH', 'JK', 'KA', 'KL', 'LA', 'LD',
       'MH', 'ML', 'MN', 'MP', 'MZ', 'NL', 'OR', 'PB', 'PY', 'RJ', 'SK', 'TG',
       'TN', 'TR', 'UN', 'UP', 'UT', 'WB', 'Unnamed: 41'],
      dtype='object')
dfi.head()
        date  TT  AN  AP  AR  AS  BR  CH  CT     DATEYMD  ...  RJ  SK  TG  TN  TR  UN  UP  UT  WB  Unnamed: 41
0  14-Mar-20  81   0   1   0   0   0   0   0  2020-03-14  ...   3   0   1   1   0   0  12   0   0          NaN
1  15-Mar-20  27   0   0   0   0   0   0   0  2020-03-15  ...   1   0   2   0   0   0   1   0   0          NaN
2  16-Mar-20  15   0   0   0   0   0   0   0  2020-03-16  ...   0   0   1   0   0   0   0   1   0          NaN
3  17-Mar-20  11   0   0   0   0   0   0   0  2020-03-17  ...   0   0   1   0   0   0   2   0   1          NaN
4  18-Mar-20  37   0   0   0   0   0   0   0  2020-03-18  ...   3   0   8   1   0   0   2   1   0          NaN

5 rows × 42 columns

# Data preprocessing: convert daily new cases to cumulative totals per state
# (optionally exclude states with 2 or fewer datapoints of 10 or fewer cases; disabled below)

dfi = dfi.drop(columns=['DATEYMD', 'Unnamed: 41'], axis=1)
cols = dfi.columns[1:]
dfi.loc[:, cols] = dfi.loc[:, cols].cumsum(axis=0)

nstates = dfi.columns[1:].tolist()

dfa = pd.DataFrame()

for i, state in enumerate(nstates):
    dfc = dfi[['date', state]].copy()
    dfc['state'] = state
    dfc = dfc.rename({state: 'cases'}, axis=1)
    dfc['date'] = pd.to_datetime(dfc['date'])
    # dfc = dfc[dfc.cases > 5].copy()
    dfa = dfa.append(dfc)
    #if (len(dfc) > 2):
    #  dfa = dfa.append(dfc)
    #else:
    #  print(f"Excluding state {state}")
dfa['date'] = pd.to_datetime(dfa['date'])
states = dfa.set_index(['state', 'date']).squeeze()

states
state  date      
TT     2020-03-14         81
       2020-03-15        108
       2020-03-16        123
       2020-03-17        134
       2020-03-18        171
                      ...   
WB     2021-08-12    1536446
       2021-08-13    1537185
       2021-08-14    1537890
       2021-08-15    1538563
       2021-08-16    1539065
Name: cases, Length: 20319, dtype: int64

Taking a look at each state, we need to start the analysis when there is a consistent number of cases each day. Find the last zero-new-case day and start on the day after that.

Also, case reporting is very erratic based on testing backlogs, etc. To get the best view of the 'true' data we can, I've applied a Gaussian filter to the time series. This is obviously an arbitrary choice, but you'd imagine the real-world process is not nearly as stochastic as the actual reporting.
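
For reference, the smoothing used below in prepare_cases() is just a centered 7-day rolling mean with Gaussian weights (std=2). A minimal sketch on a made-up, noisy series of daily new cases:

import pandas as pd

noisy = pd.Series([5, 40, 0, 35, 10, 60, 0, 80, 20, 90])
smoothed = noisy.rolling(7,
                         win_type='gaussian',
                         min_periods=1,
                         center=True).mean(std=2).round()
print(smoothed.values)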

Running the Algorithm

Choosing the Gaussian $\sigma$ for $P(R_t|R_{t-1})$

Note: you can safely skip this section if you trust that we chose the right value of $\sigma$ for the process below. Otherwise, read on.
The original approach simply selects yesterday's posterior as today's prior. While intuitive, doing so doesn't allow for our belief that the value of $R_t$ has likely changed from yesterday. To allow for that change, we apply Gaussian noise to the prior distribution with some standard deviation $\sigma$. The higher $\sigma$, the more noise and the more we expect the value of $R_t$ to drift each day. Interestingly, applying noise on noise iteratively means that there will be a natural decay of distant posteriors. This approach has a similar effect to windowing, but is more robust and doesn't arbitrarily forget posteriors after a certain time like my previous approach did. Specifically, windowing computed a fixed $R_t$ at each time $t$ that explained the surrounding $w$ days of cases, while the new approach computes a series of $R_t$ values that explains all the cases, assuming that $R_t$ fluctuates by about $\sigma$ each day.
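
To see that decay claim in isolation, here is a small sketch (with an assumed $\sigma=0.15$) that repeatedly applies the Gaussian process matrix to a prior concentrated at a single value of $R_t$: after $n$ days the information has diffused to a spread of roughly $\sigma\sqrt{n}$, so very old observations carry less and less weight.

import numpy as np
from scipy import stats as sps

r_grid = np.linspace(0, 12, 1201)
sigma = 0.15

# Column j holds P(R_today = r_i | R_yesterday = r_j), normalized to sum to 1
process = sps.norm(loc=r_grid, scale=sigma).pdf(r_grid[:, None])
process /= process.sum(axis=0)

# Start from a prior concentrated entirely at R_t = 1.0
prior = np.isclose(r_grid, 1.0).astype(float)
prior /= prior.sum()

for day in range(14):
    prior = process @ prior

mean = np.sum(prior * r_grid)
std = np.sqrt(np.sum(prior * r_grid**2) - mean**2)
print(f"spread after 14 days: {std:.2f}")   # ~0.56, i.e. about sigma * sqrt(14)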

However, there's still an arbitrary choice: what should $\sigma$ be? Adam Lerer pointed out that we can use the process of maximum likelihood to inform our choice. Here's how it works:

Maximum likelihood says that we'd like to choose a $\sigma$ that maximizes the likelihood of seeing our data $k$: $P(k|\sigma)$. Since $\sigma$ is a fixed value, let's leave it out of the notation, so we're trying to maximize $P(k)$ over all choices of $\sigma$.

Since $P(k)=P(k_0,k_1,\ldots,k_t)=P(k_0)P(k_1)\ldots P(k_t)$ we need to define $P(k_t)$. It turns out this is the denominator of Bayes rule:

$$P(R_t|k_t) = \frac{P(k_t|R_t)P(R_t)}{P(k_t)}$$

To calculate it, we notice that the numerator is actually just the joint distribution of $k$ and $R$:

$$ P(k_t,R_t) = P(k_t|R_t)P(R_t) $$

We can marginalize the distribution over $R_t$ to get $P(k_t)$:

$$ P(k_t) = \sum_{R_{t}}{P(k_t|R_t)P(R_t)} $$

So, if we sum the distribution of the numerator over all values of $R_t$, we get $P(k_t)$. And since we're calculating that anyway as we're calculating the posterior, we'll just keep track of it separately.

Since we're looking for the value of $\sigma$ that maximizes $P(k)$ overall, we actually want to maximize:

$$\prod_{t,i}{p(k_{ti})}$$

where $t$ are all times and $i$ is each state.

Since we're multiplying lots of tiny probabilities together, it can be easier (and less error-prone) to take the $\log$ of the values and add them together. Remember that $\log{ab}=\log{a}+\log{b}$. And since logarithms are monotonically increasing, maximizing the sum of the $\log$ of the probabilities is the same as maximizing the product of the non-logarithmic probabilities for any choice of $\sigma$.
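
Here is a tiny illustration, with made-up likelihoods, of why the log trick matters numerically: the product of many small probabilities underflows to 0.0 in floating point, while the sum of their logs stays perfectly usable.

import numpy as np

p = np.full(500, 1e-3)       # 500 daily likelihoods of 0.001 each (made up)

print(np.prod(p))            # 0.0 -- the product underflows
print(np.sum(np.log(p)))     # -3453.88 -- the log-sum is fine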

Function for Calculating the Posteriors

To calculate the posteriors we follow these steps:

  1. Calculate $\lambda$ - the expected arrival rate for every day's poisson process
  2. Calculate each day's likelihood distribution over all possible values of $R_t$
  3. Calculate the process matrix based on the value of $\sigma$ we discussed above
  4. Calculate our initial prior because our first day does not have a previous day from which to take the posterior
  5. Loop from day 1 to the end, doing the following:
    • Calculate the prior by applying the Gaussian to yesterday's prior.
    • Apply Bayes' rule by multiplying this prior and the likelihood we calculated in step 2.
    • Divide by the probability of the data (also Bayes' rule)
def prepare_cases(cases, cutoff=25):
    new_cases = cases.diff()

    smoothed = new_cases.rolling(7,
        win_type='gaussian',
        min_periods=1,
        center=True).mean(std=2).round()
    
    idx_start = np.searchsorted(smoothed, cutoff)
    
    smoothed = smoothed.iloc[idx_start:]
    original = new_cases.loc[smoothed.index]
    
    return original, smoothed
def get_posteriors(sr, sigma=0.15):

    # (1) Calculate Lambda
    lam = sr[:-1].values * np.exp(GAMMA * (r_t_range[:, None] - 1))

    
    # (2) Calculate each day's likelihood
    likelihoods = pd.DataFrame(
        data = sps.poisson.pmf(sr[1:].values, lam),
        index = r_t_range,
        columns = sr.index[1:])
    
    # (3) Create the Gaussian Matrix
    process_matrix = sps.norm(loc=r_t_range,
                              scale=sigma
                             ).pdf(r_t_range[:, None]) 

    # (3a) Normalize all rows to sum to 1
    process_matrix /= process_matrix.sum(axis=0)
    
    # (4) Calculate the initial prior
    #prior0 = sps.gamma(a=4).pdf(r_t_range)
    prior0 = np.ones_like(r_t_range)/len(r_t_range)
    prior0 /= prior0.sum()

    # Create a DataFrame that will hold our posteriors for each day
    # Insert our prior as the first posterior.
    posteriors = pd.DataFrame(
        index=r_t_range,
        columns=sr.index,
        data={sr.index[0]: prior0}
    )
    
    # We said we'd keep track of the sum of the log of the probability
    # of the data for maximum likelihood calculation.
    log_likelihood = 0.0

    # (5) Iteratively apply Bayes' rule
    for previous_day, current_day in zip(sr.index[:-1], sr.index[1:]):

        #(5a) Calculate the new prior
        current_prior = process_matrix @ posteriors[previous_day]
        
        #(5b) Calculate the numerator of Bayes' Rule: P(k|R_t)P(R_t)
        numerator = likelihoods[current_day] * current_prior
        
        #(5c) Calculate the denominator of Bayes' Rule P(k)
        denominator = np.sum(numerator)
        
        # Execute full Bayes' Rule
        posteriors[current_day] = numerator/denominator
        
        # Add to the running sum of log likelihoods
        log_likelihood += np.log(denominator)
    
    return posteriors, log_likelihood

Choosing the optimal $\sigma$

In the previous section we described choosing an optimal $\sigma$, but we just assumed a value. But now that we can evaluate each state with any sigma, we have the tools for choosing the optimal $\sigma$.

Above we said we'd choose the value of $\sigma$ that maximizes the likelihood of the data $P(k)$. Since we don't want to overfit on any one state, we choose the sigma that maximizes $P(k)$ over every state. To do this, we add up all the log likelihoods per state for each value of sigma then choose the maximum.

Note: this takes a while!
sigmas = np.linspace(1/20, 1, 20)

targets = ~states.index.get_level_values('state').isin(FILTERED_REGION_CODES)
states_to_process = states.loc[targets]

results = {}
failed_states = []

for state_name, cases in states_to_process.groupby(level='state'):
    
    print(state_name)
    
    # Only difference from Kevin Systrom's (KS) original code: a lower cutoff
    new, smoothed = prepare_cases(cases, cutoff=1)
    
    # KS uses cutoff of 25 followed by 10
    #new, smoothed = prepare_cases(cases, cutoff=25)
        
    #if len(smoothed) == 0:
    #    new, smoothed = prepare_cases(cases, cutoff=10)
    
    result = {}
    
    # Holds all posteriors with every given value of sigma
    result['posteriors'] = []
    
    # Holds the log likelihood across all k for each value of sigma
    result['log_likelihoods'] = []
    
    try:
        for sigma in sigmas:
            posteriors, log_likelihood = get_posteriors(smoothed, sigma=sigma)
            result['posteriors'].append(posteriors)
            result['log_likelihoods'].append(log_likelihood)

        # Store all results keyed off of state name
        results[state_name] = result
        clear_output(wait=True)
    except:
        print(f"Error for state {state_name}")
        failed_states.append(state_name)

print('Done.')
Done.
print(f"Failed for {len(failed_states)} states {failed_states}")
Failed for 2 states ['DD', 'UN']

Now that we have all the log likelihoods, we can sum for each value of sigma across states, graph it, then choose the maximum.

# Each index of this array holds the total of the log likelihoods for
# the corresponding index of the sigmas array.
total_log_likelihoods = np.zeros_like(sigmas)

# Loop through each state's results and add the log likelihoods to the running total.
for state_name, result in results.items():
    total_log_likelihoods += result['log_likelihoods']

# Select the index with the largest log likelihood total
max_likelihood_index = total_log_likelihoods.argmax()

# Select the value that has the highest log likelihood
sigma = sigmas[max_likelihood_index]

# Plot it
fig, ax = plt.subplots()
ax.set_title(f"Maximum Likelihood value for $\sigma$ = {sigma:.2f}");
ax.plot(sigmas, total_log_likelihoods)
ax.axvline(sigma, color='k', linestyle=":")

Compile Final Results

Given that we've selected the optimal $\sigma$, let's grab the precalculated posterior corresponding to that value of $\sigma$ for each state. Let's also calculate the 90% and 50% highest density intervals (this takes a little while) and also the most likely value.

final_results = None
hdi_error_list = []

for state_name, result in results.items():
    print(state_name)
    try:
        posteriors = result['posteriors'][max_likelihood_index]
        hdis_90 = highest_density_interval(posteriors, p=.9)
        hdis_50 = highest_density_interval(posteriors, p=.5)
        most_likely = posteriors.idxmax().rename('ML')
        result = pd.concat([most_likely, hdis_90, hdis_50], axis=1)
        if final_results is None:
            final_results = result
        else:
            final_results = pd.concat([final_results, result])
        clear_output(wait=True)
    except:
        print(f'hdi failed for {state_name}')
        hdi_error_list.append(state_name)
        pass

print(f'HDI error list: {hdi_error_list}')
print('Done.')
HDI error list: ['AN', 'AR', 'AS', 'CH', 'CT', 'DL', 'DN', 'HP', 'MH', 'NL', 'RJ', 'TR', 'UT']
Done.

Plot All India States

def plot_rt(result, ax, state_name):
    
    ax.set_title(f"{state_name}")
    
    # Colors
    ABOVE = [1,0,0]
    MIDDLE = [1,1,1]
    BELOW = [0,0,0]
    cmap = ListedColormap(np.r_[
        np.linspace(BELOW,MIDDLE,25),
        np.linspace(MIDDLE,ABOVE,25)
    ])
    color_mapped = lambda y: np.clip(y, .5, 1.5)-.5
    
    index = result['ML'].index.get_level_values('date')
    values = result['ML'].values
    
    # Plot dots and line
    ax.plot(index, values, c='k', zorder=1, alpha=.25)
    ax.scatter(index,
               values,
               s=40,
               lw=.5,
               c=cmap(color_mapped(values)),
               edgecolors='k', zorder=2)
    
    # Aesthetically, extrapolate credible interval by 1 day either side
    lowfn = interp1d(date2num(index),
                     result['Low_90'].values,
                     bounds_error=False,
                     fill_value='extrapolate')
    
    highfn = interp1d(date2num(index),
                      result['High_90'].values,
                      bounds_error=False,
                      fill_value='extrapolate')
    
    extended = pd.date_range(start=pd.Timestamp('2020-03-01'),
                             end=index[-1]+pd.Timedelta(days=1))
    
    ax.fill_between(extended,
                    lowfn(date2num(extended)),
                    highfn(date2num(extended)),
                    color='k',
                    alpha=.1,
                    lw=0,
                    zorder=3)

    ax.axhline(1.0, c='k', lw=1, label='$R_t=1.0$', alpha=.25);
    
    # Formatting
    ax.xaxis.set_major_locator(mdates.MonthLocator())
    ax.xaxis.set_major_formatter(mdates.DateFormatter('%b'))
    ax.xaxis.set_minor_locator(mdates.DayLocator())
    
    ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
    ax.yaxis.set_major_formatter(ticker.StrMethodFormatter("{x:.1f}"))
    ax.yaxis.tick_right()
    ax.spines['left'].set_visible(False)
    ax.spines['bottom'].set_visible(False)
    ax.spines['right'].set_visible(False)
    ax.margins(0)
    ax.grid(which='major', axis='y', c='k', alpha=.1, zorder=-2)
    ax.margins(0)
    ax.set_ylim(0.0, 5.0)
    ax.set_xlim(pd.Timestamp('2020-03-01'), result.index.get_level_values('date')[-1]+pd.Timedelta(days=1))
    fig.set_facecolor('w')
def plot_rt_new(result, ax, state_name):
    
    ax.set_title(f"{state_name}")
    
    # Colors
    ABOVE = [1,0,0]
    MIDDLE = [1,1,1]
    BELOW = [0,0,0]
    cmap = ListedColormap(np.r_[
        np.linspace(BELOW,MIDDLE,25),
        np.linspace(MIDDLE,ABOVE,25)
    ])
    color_mapped = lambda y: np.clip(y, .5, 1.5)-.5
    
    index = result['ML'].index.get_level_values('date')
    values = result['ML'].values
    
    # Plot dots and line
    ax.plot(index, values, c='k', zorder=1, alpha=.25)
    ax.scatter(index,
               values,
               s=40,
               lw=.5,
               c=cmap(color_mapped(values)),
               edgecolors='k', zorder=2)
    
    # Aesthetically, extrapolate credible interval by 1 day either side
    lowfn = interp1d(date2num(index),
                     result['Low_90'].values,
                     bounds_error=False,
                     fill_value='extrapolate')
    
    highfn = interp1d(date2num(index),
                      result['High_90'].values,
                      bounds_error=False,
                      fill_value='extrapolate')
    
    extended = pd.date_range(start=pd.Timestamp('2020-03-01'),
                             end=index[-1]+pd.Timedelta(days=1))
    
    ax.fill_between(extended,
                    lowfn(date2num(extended)),
                    highfn(date2num(extended)),
                    color='k',
                    alpha=.1,
                    lw=0,
                    zorder=3)

    ax.axhline(1.0, c='k', lw=1, label='$R_t=1.0$', alpha=.25);
    
    ax.axvline(x=pd.Timestamp('2020-03-24'), ls='--', c='k', alpha=.25)
    ax.axvline(x=pd.Timestamp('2020-04-14'), ls='--', c='k', alpha=.25)
    ax.axvline(x=pd.Timestamp('2020-05-03'), ls='--', c='k', alpha=.25)
    
    ax.text(pd.Timestamp('2020-03-22'), 0.2, 'March 24', bbox=dict(facecolor='white', alpha=0.5))
    ax.text(pd.Timestamp('2020-04-12'), 0.2, 'April 14', bbox=dict(facecolor='white', alpha=0.5))
    ax.text(pd.Timestamp('2020-05-02'), 0.2, 'May 3', bbox=dict(facecolor='white', alpha=0.5))
    
    result2 = result.reset_index()
    t1 = pd.Timestamp('2020-03-24')
    t2 = pd.Timestamp('2020-04-14')
    t3 = pd.Timestamp('2020-05-03')

    r1m = result2[(result2.state==state_name) & (result2.date <= t1)]['ML'].mean()
    r1l = result2[(result2.state==state_name) & (result2.date <= t1)]['Low_90'].mean()
    r1h = result2[(result2.state==state_name) & (result2.date <= t1)]['High_90'].mean()
    text1 = f"Pre-lockdown:\n {r1m:.2f} [{r1l:.2f}-{r1h:.2f}]"

    r2m = result2[(result2.state==state_name) & (result2.date > t1) & (result2.date <= t2)]['ML'].mean()
    r2l = result2[(result2.state==state_name) & (result2.date > t1) & (result2.date <= t2)]['Low_90'].mean()
    r2h = result2[(result2.state==state_name) & (result2.date > t1) & (result2.date <= t2)]['High_90'].mean()
    text2 = f"Initial Lockdown:\n {r2m:.2f} [{r2l:.2f}-{r2h:.2f}]"

    r3m = result2[(result2.state==state_name) & (result2.date > t2) & (result2.date <= t3)]['ML'].mean()
    r3l = result2[(result2.state==state_name) & (result2.date > t2) & (result2.date <= t3)]['Low_90'].mean()
    r3h = result2[(result2.state==state_name) & (result2.date > t2) & (result2.date <= t3)]['High_90'].mean()
    text3 = f"First Extension:\n {r3m:.2f} [{r3l:.2f}-{r3h:.2f}]"

    r4m = result2[(result2.state==state_name) & (result2.date > t3)]['ML'].mean()
    r4l = result2[(result2.state==state_name) & (result2.date > t3)]['Low_90'].mean()
    r4h = result2[(result2.state==state_name) & (result2.date > t3)]['High_90'].mean()
    text4 = f"Second Extension:\n {r4m:.2f} [{r4l:.2f}-{r4h:.2f}]"
    
    if r1m > 0:
        ax.text(pd.Timestamp('2020-03-08'), 3.5, text1, bbox=dict(facecolor='white', alpha=0.5))
    if r2m > 0:
        ax.text(pd.Timestamp('2020-03-30'), 3.5, text2, bbox=dict(facecolor='white', alpha=0.5))
    if r3m > 0:
        ax.text(pd.Timestamp('2020-04-20'), 3.5, text3, bbox=dict(facecolor='white', alpha=0.5))
    if r4m > 0:
        ax.text(pd.Timestamp('2020-05-04'), 3.5, text4, bbox=dict(facecolor='white', alpha=0.5))
    
    # Formatting
    ax.xaxis.set_major_locator(mdates.MonthLocator())
    ax.xaxis.set_major_formatter(mdates.DateFormatter('%b'))
    ax.xaxis.set_minor_locator(mdates.DayLocator())
    
    ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
    ax.yaxis.set_major_formatter(ticker.StrMethodFormatter("{x:.1f}"))
    ax.yaxis.tick_right()
    ax.spines['left'].set_visible(False)
    ax.spines['bottom'].set_visible(False)
    ax.spines['right'].set_visible(False)
    ax.margins(0)
    ax.grid(which='major', axis='y', c='k', alpha=.1, zorder=-2)
    ax.margins(0)
    ax.set_ylim(0.0, 5.0)
    ax.set_xlim(pd.Timestamp('2020-03-01'), result.index.get_level_values('date')[-1]+pd.Timedelta(days=1))
    fig.set_facecolor('w')
def plot_rt_top6(final_results, ax, state_list):
    
    ax.set_title(f"All India & Top States")
    
    # Colors
    ABOVE = [1,0,0]
    MIDDLE = [1,1,1]
    BELOW = [0,0,0]
    cmap = ListedColormap(np.r_[
        np.linspace(BELOW,MIDDLE,25),
        np.linspace(MIDDLE,ABOVE,25)
    ])
    color_mapped = lambda y: np.clip(y, .5, 1.5)-.5
    
    for i, (state_name, result) in enumerate(final_results.groupby('state')):
        if (state_name not in state_list):
            continue
    
        index = result['ML'].index.get_level_values('date')
        values = result['ML'].values

        # Plot dots and line
        ax.plot(index, values, c='k', zorder=1, alpha=.25)
        ax.scatter(index,
                   values,
                   s=40,
                   lw=.5,
                   label=state_name)
        
    ax.legend()
    ax.axhline(1.0, c='k', lw=1, label='$R_t=1.0$', alpha=.25);
    
    ax.axvline(x=pd.Timestamp('2020-03-24'), ls='--', c='k', alpha=.25)
    ax.axvline(x=pd.Timestamp('2020-04-14'), ls='--', c='k', alpha=.25)
    ax.axvline(x=pd.Timestamp('2020-05-03'), ls='--', c='k', alpha=.25)
    
    ax.text(pd.Timestamp('2020-03-22'), 0.2, 'March 24', bbox=dict(facecolor='white', alpha=0.5))
    ax.text(pd.Timestamp('2020-04-12'), 0.2, 'April 14', bbox=dict(facecolor='white', alpha=0.5))
    ax.text(pd.Timestamp('2020-05-02'), 0.2, 'May 3', bbox=dict(facecolor='white', alpha=0.5))
    
    result2 = result.reset_index()
    text1 = f"Pre-lockdown:"
    text2 = f"Initial Lockdown:"
    text3 = f"First Extension:"
    text4 = f"Second Extension:"

    ax.text(pd.Timestamp('2020-03-08'), 3.5, text1, bbox=dict(facecolor='white', alpha=0.5))
    ax.text(pd.Timestamp('2020-03-30'), 3.5, text2, bbox=dict(facecolor='white', alpha=0.5))
    ax.text(pd.Timestamp('2020-04-20'), 3.5, text3, bbox=dict(facecolor='white', alpha=0.5))
    ax.text(pd.Timestamp('2020-05-04'), 3.5, text4, bbox=dict(facecolor='white', alpha=0.5))
    
    # Formatting
    ax.xaxis.set_major_locator(mdates.MonthLocator())
    ax.xaxis.set_major_formatter(mdates.DateFormatter('%b'))
    ax.xaxis.set_minor_locator(mdates.DayLocator())
    
    ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
    ax.yaxis.set_major_formatter(ticker.StrMethodFormatter("{x:.1f}"))
    ax.yaxis.tick_right()
    ax.spines['left'].set_visible(False)
    ax.spines['bottom'].set_visible(False)
    ax.spines['right'].set_visible(False)
    ax.margins(0)
    ax.grid(which='major', axis='y', c='k', alpha=.1, zorder=-2)
    ax.margins(0)
    ax.set_ylim(0.0, 5.0)
    ax.set_xlim(pd.Timestamp('2020-03-01'), result.index.get_level_values('date')[-1]+pd.Timedelta(days=1))
    fig.set_facecolor('w')
#state_list = ['MH', 'RJ', 'TN', 'TT', 'DL', 'KL', 'GJ', 'PB']
state_list = ['MH', 'TN', 'TT', 'GJ', 'KL']
fig, ax = plt.subplots(figsize=(1200/72,800/72))
plot_rt_top6(final_results, ax, state_list)
ncols = 1
nrows = 1

plt.rcParams.update({'figure.max_open_warning': 0})

for i, (state_name, result) in enumerate(final_results.groupby('state')):
    if i == 0:
        continue
    fig, ax = plt.subplots(figsize=(900/72,600/72))
    plot_rt_new(result, ax, state_name)
ncols = 4
nrows = int(np.ceil(len(results) / ncols))

fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(15, nrows*3))

for i, (state_name, result) in enumerate(final_results.groupby('state')):
    plot_rt(result, axes.flat[i], state_name)

fig.tight_layout()
fig.set_facecolor('w')

Export Data to CSV

# Export the final results to CSV
final_results.to_csv('data/rt_india_state.csv')
tmpdf = final_results.reset_index()
tmpdf[tmpdf.date == '2020-05-13'].sort_values(by='ML', ascending=True).shape
(18, 7)

Standings

# As of 4/12

no_lockdown = [
]
partial_lockdown = [
]

FULL_COLOR = [.7,.7,.7]
NONE_COLOR = [179/255,35/255,14/255]
PARTIAL_COLOR = [.5,.5,.5]
ERROR_BAR_COLOR = [.3,.3,.3]
final_results
                    ML  Low_90  High_90  Low_50  High_50
state date
AP    2020-03-18  0.00    0.00    10.81    0.00     6.01
      2020-03-19  0.07    0.00     8.73    0.00     4.03
      2020-03-20  0.09    0.00     7.10    0.00     3.20
      2020-03-21  0.10    0.00     6.15    0.00     2.77
      2020-03-22  0.10    0.00     5.53    0.02     2.51
...                ...     ...      ...     ...      ...
WB    2021-08-12  0.99    0.78     1.15    0.90     1.05
      2021-08-13  0.99    0.78     1.15    0.90     1.05
      2021-08-14  0.97    0.76     1.13    0.88     1.03
      2021-08-15  0.93    0.72     1.09    0.84     0.99
      2021-08-16  0.89    0.68     1.05    0.81     0.96

11616 rows × 5 columns

filtered = final_results.index.get_level_values(0).isin(FILTERED_REGIONS)
mr = final_results.loc[~filtered].groupby(level=0)[['ML', 'High_90', 'Low_90']].last()

def plot_standings(mr, figsize=None, title='Most Recent $R_t$ by State'):
    if not figsize:
        figsize = ((15.9/50)*len(mr)+.1,2.5)
        
    fig, ax = plt.subplots(figsize=figsize)

    ax.set_title(title)
    err = mr[['Low_90', 'High_90']].sub(mr['ML'], axis=0).abs()
    bars = ax.bar(mr.index,
                  mr['ML'],
                  width=.825,
                  color=FULL_COLOR,
                  ecolor=ERROR_BAR_COLOR,
                  capsize=2,
                  error_kw={'alpha':.5, 'lw':1},
                  yerr=err.values.T)

    for bar, state_name in zip(bars, mr.index):
        if state_name in no_lockdown:
            bar.set_color(NONE_COLOR)
        if state_name in partial_lockdown:
            bar.set_color(PARTIAL_COLOR)

    labels = mr.index.to_series().replace({'District of Columbia':'DC'})
    ax.set_xticklabels(labels, rotation=90, fontsize=11)
    ax.margins(0)
    ax.set_ylim(0,2.)
    ax.axhline(1.0, linestyle=':', color='k', lw=1)

    leg = ax.legend(handles=[
                        Patch(label='Full', color=FULL_COLOR),
                        Patch(label='Partial', color=PARTIAL_COLOR),
                        Patch(label='None', color=NONE_COLOR)
                    ],
                    title='Lockdown',
                    ncol=3,
                    loc='upper left',
                    columnspacing=.75,
                    handletextpad=.5,
                    handlelength=1)

    leg._legend_box.align = "left"
    fig.set_facecolor('w')
    return fig, ax

mr.sort_values('ML', inplace=True)
plot_standings(mr);
mr.sort_values('High_90', inplace=True)
plot_standings(mr);
show = mr[mr.High_90.le(1)].sort_values('ML')
fig, ax = plot_standings(show, title='Likely Under Control');
show = mr[mr.Low_90.ge(1.0)].sort_values('Low_90')
fig, ax = plot_standings(show, title='Likely Not Under Control');
ax.get_legend().remove()