
Modeling COVID-19 risk in New Orleans due to workplace exposure

Maximilian Marshall, Sonia Jindal, and Lauren Gardner

June 28, 2021

Summary

Throughout the COVID-19 pandemic, state and local governments have had to make critical decisions regarding the timing and logistics of lockdown measures. When making such decisions, it is important that they have detailed information regarding the risks posed to the populations they serve. For this reason, in partnership with the city of New Orleans, CSSE created a model that maps COVID-19 risk from workplaces to residences. This model estimates the risk of exposure to COVID-19 infection in each type of workplace and calculates how much risk exposed workers take home with them based on their commuting patterns and residential locations. This way, we can aggregate the risk that workers bring back to their home locations throughout the city and understand how opening various districts and job sectors will affect exposure. The model calculates the risk due to workplace exposure at the census tract level in three steps: estimating infection risk at each tract, simulating commuting patterns, and mapping risk from workplaces to residences. The resulting risk estimates are used to create a heat map of how much risk is brought home to each area of the city from each occupational category and location. Importantly, this model does not reflect overall COVID-19 risk. Instead, since policymakers must weigh the health benefits of keeping people at home against the economic costs of limiting business activity, this model focuses on the risk of exposure at work and where workers take that risk when they go home.

Methodology

To calculate the risk of exposure to COVID-19 for each tract in the city, the model estimates how many infected people visit that tract. We estimate this using cellular phone mobility data, looking at trips to New Orleans over a multi-week span preceding the date of interest. County-level case data from the CSSE COVID-19 Dashboard is used to weight the risk brought by each trip to New Orleans, using the proportion of new cases to population (case incidence) at the trip's origin. The total workplace risk to a tract is the sum of the risk brought by each trip to that tract over the 14-day period being studied.
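
To make the aggregation concrete, the sketch below computes this step for a toy example. All trip records, case counts, and populations are invented stand-ins for the cell phone mobility data and CSSE dashboard data the model actually uses.

    # Step one, in miniature: each trip into a tract contributes the case
    # incidence (new cases per capita) of its origin county; tract risk is
    # the sum over all trips in the 14-day window. All data are hypothetical.
    from collections import defaultdict

    # (origin_county, destination_tract) pairs drawn from mobility data
    trips = [("Orleans, LA", "tract_A"), ("Jefferson, LA", "tract_A"),
             ("Orleans, LA", "tract_B"), ("Harris, TX", "tract_B")]

    new_cases_14d = {"Orleans, LA": 900, "Jefferson, LA": 700,
                     "Harris, TX": 8000}
    population = {"Orleans, LA": 390_000, "Jefferson, LA": 430_000,
                  "Harris, TX": 4_700_000}

    tract_risk = defaultdict(float)
    for origin, tract in trips:
        incidence = new_cases_14d[origin] / population[origin]
        tract_risk[tract] += incidence  # each trip adds its origin's incidence

    print(dict(tract_risk))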

Example of county-level travel risk to New Orleans. Note that this example (for clarity) only shows counties within a 500-mile radius of New Orleans, while the model uses the entire United States.

The second step of the model provides the information needed to link exposure risk in the workplace to the risk brought home by workers. Two elements must be simulated to make this connection: the commuting travel patterns of workers, broken down by occupational category, and the relative risk of infection for each category. As no existing dataset comprehensively maps commuting patterns in New Orleans with specific origin-destination linkages by job type, we use an optimization model to generate commuting flows by job category. The US Bureau of Labor Statistics (BLS) provides data on the number of people who work and live in each census block group, along with the job categories of these residents and workers. The BLS also provides data on commuting flows between block groups, but these flows do not distinguish between the commuters' job categories. Our optimization routine assigns each commuting flow a job category with the objective of minimizing total distance traveled, constrained so that flow patterns between tracts are exactly matched and residential and workplace totals are respected, as sketched below. The result is a highly detailed set of commuter flows between tracts, broken down by occupational category.
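
The core of this step can be posed as a linear program. The minimal sketch below, with invented numbers and a single occupational category, shows the transportation-problem structure of the idea; for brevity it omits the tract-level flow-matching constraints that the full model enforces across all categories jointly.

    # Minimal transportation-LP sketch of the commuting-flow step. Variable
    # x[o, d] is the number of category-k commuters from home block group o
    # to work block group d; distances and totals below are hypothetical.
    import numpy as np
    from scipy.optimize import linprog

    dist = np.array([[1.0, 3.0, 5.0],      # commute distance o -> d (miles)
                     [3.0, 1.0, 2.0],
                     [5.0, 2.0, 1.0]])
    residents = np.array([50.0, 30.0, 20.0])  # category-k residents per block group
    workers = np.array([20.0, 60.0, 20.0])    # category-k jobs per block group

    n = len(residents)
    c = dist.ravel()                       # cost of x[o, d], flattened row-major

    A_eq, b_eq = [], []
    for o in range(n):                     # each origin sends out all its residents
        row = np.zeros(n * n)
        row[o * n:(o + 1) * n] = 1.0
        A_eq.append(row)
        b_eq.append(residents[o])
    for d in range(n):                     # each destination fills all its jobs
        row = np.zeros(n * n)
        row[d::n] = 1.0
        A_eq.append(row)
        b_eq.append(workers[d])

    res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
    flows = res.x.reshape(n, n)            # flows[o, d]: commuters from o to d
    print(np.round(flows, 1))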

Example commuting flows for a single occupational category. These outputs are divided into quartiles, so even though there are many fewer red lines than blue, the red lines represent the same number of people moving to and from work. Note that the higher-volume flows tend to cluster around the French Quarter/Central Business District (maroon area).

Lastly, the model uses the results of steps one and two to route the risk of infection from workplaces to residences. Workplace risk, as calculated in step one, is routed back along the commuting flow paths from step two. It is scaled by the number of people making each commute, the relative level of proximity to others in each occupation (proximity data provided by O*NET, the Occupational Information Network, developed under the U.S. Department of Labor), and the average household size of the residential tract. These streams of risk flowing back from workplaces to residences are summed to yield the residential risk for each census tract. In this step, the model user can also input a reopening level for each occupational category to understand its effect on risk: the risk for each occupational sector at each tract is multiplied by that sector's reopening level.
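
The routing step reduces to a weighted sum over commuting flows. The sketch below uses invented flows and parameters (the proximity scores, household sizes, and reopening levels shown are illustrative, not the model's actual values).

    # Step three, in miniature: workplace risk is routed home along the
    # commuting flows, scaled by occupation proximity, household size, and
    # a user-set reopening level. All names and numbers are hypothetical.
    workplace_risk = {"tract_A": 0.004, "tract_B": 0.002}  # from step one
    # flows from step two: (work_tract, home_tract, occupation) -> people
    flows = {("tract_A", "tract_C", "food_services"): 120,
             ("tract_A", "tract_D", "arts_entertainment"): 40,
             ("tract_B", "tract_C", "food_services"): 60}
    proximity = {"food_services": 0.9, "arts_entertainment": 0.7}  # O*NET-style
    household_size = {"tract_C": 2.6, "tract_D": 2.1}  # avg residents per home
    reopening = {"food_services": 0.5, "arts_entertainment": 1.0}  # policy input

    residential_risk = {}
    for (work, home, occ), people in flows.items():
        contribution = (workplace_risk[work] * people * proximity[occ]
                        * household_size[home] * reopening[occ])
        residential_risk[home] = residential_risk.get(home, 0.0) + contribution

    print(residential_risk)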

Residential risk for workers employed in the French Quarter of New Orleans. At this stage, the model has determined workplace exposure from real-time travel patterns and epidemic data, and has mapped the risk back to workers' residences.

Results

Risks from each occupational category vary greatly between tracts. As seen in the two heat maps above, the tracts at highest risk from work in the French Quarter differ depending on which occupational category is reopened. The risk taken home by workers in the arts, recreation, and entertainment industries is concentrated in a few distinct areas, while the risk taken home by workers in the accommodation and food services industries is diffused throughout the city, with some areas of more concentrated risk. Since the model framework allows the activity levels of different economic categories to be adjusted, it provides a way to test reopening scenarios and gauge the relative risk of different policies. After a collaborative development process, we have provided this model to the government of the city of New Orleans to help inform reopening decisions.

Acknowledgements

We are grateful for financial support from NSF and NIAID, and for our collaborators at the JHU Centers for Civic Impact.

Inverse optimization is the study of inferring the unknown parameters of an optimization problem from observations of decisions previously made in that problem's setting. We develop a framework to effectively and efficiently infer the cost vector of a linear optimization problem from multiple observations of past decisions.
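
One standard way to pose this inference (a sketch of the general idea, not necessarily the exact formulation used in this work) is the following: given a forward problem min { c^T x : Ax >= b } with A and b known, and observations x^(1), ..., x^(K), choose a normalized cost vector and dual variables that minimize the total duality gap:

    \begin{aligned}
    \min_{c,\, y,\, \epsilon}\quad & \sum_{k=1}^{K} \epsilon_k \\
    \text{s.t.}\quad & A^{\top} y = c, \quad y \ge 0, \\
    & c^{\top} x^{(k)} - b^{\top} y \le \epsilon_k, \quad \epsilon_k \ge 0, \quad k = 1, \dots, K, \\
    & \lVert c \rVert_1 = 1.
    \end{aligned}

Here each \epsilon_k measures how suboptimal observation k is under the candidate cost vector, and the normalization rules out the trivial solution c = 0.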

We then test our models in the setting of a diet problem on a dataset obtained from NHANES. The dataset is accessible via the link below:

https://github.com/CSSEHealthcare/Dietary-Behavior-Dataset

A set of female individuals meeting the above criteria was considered. Further demographic and diet considerations (to select similar patients) led to choosing one day of intake from 11 different individuals as the initial dataset for the model. In another setting, we considered only people who had consumed a reasonable amount of sodium and water, the two nutrients we treat as the main constraints in the DASH diet.

To compare different potential data and their performance with the model, we used two data groups from the NHANES database: a group of middle-aged women with certain similar characteristics, and a group of people with certain attributes in their diets. For the first group, we did not consider how each individual's daily diet reflected the constraints of the forward problem; instead, we relied on their own answers to questions about hypertension and how prone they believed they were to type-2 diabetes. The result was a sparse set of variables and an inconclusive optimal solution with regard to preferences. For the second group, we tried to obtain sub-optimal data, prioritizing the maximum sodium intake and water intake constraints as the most important.

We introduce a new approach that combines inverse optimization with conventional data analytics to recover the utility function of a human operator. In this approach, a set of the operator's final decisions is observed: for instance, the final treatment plans a clinician chose for a patient, or the dietary choices a patient made to control a disease while also considering personal preferences. Based on these observations, we develop a new framework that uses inverse optimization to infer how the operator prioritized different trade-offs to arrive at their decisions.

We develop a new inverse optimization framework to infer the constraint parameters of a linear (forward) optimization problem from multiple observations of the system. The goal is to find a feasible region for the forward problem such that all given observations become feasible and the preferred observations become optimal. We explore the theoretical properties of the model and develop computationally efficient equivalent models. We consider an array of functions to capture various desirable properties of the inferred feasible region. We apply our method to radiation therapy treatment planning, itself a complex optimization problem, to understand the clinical guidelines that oncologists use in practice. These inferred guidelines (constraints) can standardize practice, increase planning efficiency and automation, and make high-quality personalized treatment plans possible for cancer patients.
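
Schematically, with the forward cost vector c assumed known and a preferred subset P of the K observations (a simplified sketch of the setup, not the paper's exact model), the problem is to choose constraint parameters (A, b) that optimize some measure f of the desirability of the feasible region:

    \begin{aligned}
    \min_{A,\, b}\quad & f(A, b) \\
    \text{s.t.}\quad & A x^{(k)} \ge b, \quad k = 1, \dots, K, \\
    & x^{(p)} \in \arg\min \{\, c^{\top} x : A x \ge b \,\}, \quad p \in P.
    \end{aligned}

The first constraint family makes every observation feasible; the second makes the preferred observations optimal.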

Assume that a decision-maker's uncertain behavior is observed. We develop an inverse optimization framework to impute an objective function that is robust against misspecification of that behavior. In our model, instead of considering multiple data points, we consider an uncertainty set that encapsulates all possible realizations of the input data. We adopt this idea from robust optimization, which has been widely used for solving optimization problems with uncertain parameters. By bringing robust and inverse optimization together, we propose a robust inverse linear optimization model for uncertain input observations. We aim to find a cost vector for the underlying forward problem such that the associated error is minimized for the worst-case realization of the uncertainty in the observed solutions. That is, such a cost vector is robust in the sense that it protects against the worst misspecification of a decision-maker's behavior.
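
In schematic form (again a sketch, abstracting the paper's details): letting X be the forward feasible set, U the uncertainty set of possible behaviors, and e(c, x) the suboptimality of behavior x under cost vector c, the model seeks

    \min_{\lVert c \rVert = 1} \; \max_{x \in \mathcal{U}} \; e(c, x),
    \qquad e(c, x) = c^{\top} x - \min_{x' \in X} c^{\top} x'.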

As an example, we consider a diet recommendation problem. Suppose we want to learn the diet patterns and preferences of a specific person and make personalized recommendations in the future. The person’s choice, even if restricted by nutritional and budgetary constraints, may be inconsistent and vary over time. Assuming the person’s behavior can be represented by an uncertainty set, it is important to find a cost vector that renders the worst-case behavior within the uncertainty set as close to optimal as possible. Note that the cost vector can have a general meaning and may be interpreted differently depending on the application (e.g., monetary cost, utility function, or preferences). Under such a cost vector, any non-worst-case diet will thus have a smaller deviation from optimality.  

Radiation therapy is frequently used in treating patients with cancer. Currently, the planning of such treatments is typically done manually, which is time-consuming and prone to human error. Advances in computational power and treatment units now make it possible to design treatment plans automatically.

To design a high-quality treatment, we select the beam sizes, positions, and shapes using optimization models and approximation algorithms. The optimization models are designed to deliver an appropriate amount of dose to the tumor volume while simultaneously avoiding sensitive healthy tissues. In this project, we work on finding the best positions for the radiation focal points for Gamma Knife® Perfexion™, using quadratic programming and algorithms such as grassfire and sphere-packing.
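
As a rough illustration of the geometric part of this pipeline, the toy 2-D sketch below implements the grassfire (distance-transform) and sphere-packing idea: repeatedly place the largest ball that fits inside the remaining target region. Real planning is 3-D and dose-driven, so this is only a cartoon of the approach.

    # Toy 2-D grassfire + sphere-packing sketch for choosing focal points.
    # The circular "tumor" and stopping rule are illustrative only.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    yy, xx = np.ogrid[:40, :40]
    target = (yy - 20) ** 2 + (xx - 20) ** 2 <= 15 ** 2  # circular target

    shots = []                                  # (center_y, center_x, radius)
    remaining = target.copy()
    for _ in range(4):                          # place a few focal points
        dist = distance_transform_edt(remaining)    # grassfire transform
        r = dist.max()                          # largest inscribed radius
        if r < 2:                               # stop when no useful ball fits
            break
        cy, cx = np.unravel_index(dist.argmax(), dist.shape)
        shots.append((int(cy), int(cx), float(r)))
        remaining &= (yy - cy) ** 2 + (xx - cx) ** 2 > r ** 2  # carve ball out

    print(shots)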

In radiation therapy with continuous dose delivery for Gamma Knife® Perfexion™, the dose is delivered while the radiation unit is in motion, as opposed to the conventional step-and-shoot approach, which requires the unit to stop before any radiation is delivered. Continuous delivery can increase dose homogeneity and decrease treatment time. To design inverse plans, we first find a path inside the tumor volume along which the radiation is delivered, and then find the beam durations and shapes using a mixed-integer programming (MIP) model. The MIP model accounts for various machine constraints as well as clinical guidelines.

Perioperative services are a vital component of hospitals, and any disruption in their operations can have downstream effects on the rest of the hospital. A large body of evidence links inefficiencies in perioperative throughput with adverse clinical outcomes. A regular delay in the operating room (OR) may lead to overcrowding in post-surgical units and, consequently, more overnight patients in the hospital. Conversely, underutilization of the OR not only wastes an expensive, high-demand resource but also prevents other services with demand from using it. This mismatch between demand and utilization may, in turn, lead to hold-ups in the OR and further disruptions downstream. We investigate the utilization of operating rooms by each service. The null hypothesis of this work is that the predicted utilization of the OR, i.e., the current block schedule, completely matches each service's actual utilization. We test this hypothesis for different utilization definitions, including physical and operational utilization, and reject the null hypothesis. We further analyze why a mismatch may exist and how to optimize the schedule to improve patient flow in the hospital.
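
As a hypothetical illustration of the kind of comparison involved (the study's actual data, utilization definitions, and statistical tests may differ), one could contrast scheduled block hours with actual OR hours per service:

    # Invented weekly hours for five services; a paired t-test asks whether
    # the schedule-vs-actual mismatch could plausibly be zero on average.
    import numpy as np
    from scipy import stats

    scheduled = np.array([40.0, 32.0, 24.0, 48.0, 16.0])  # block hours/service
    actual = np.array([35.0, 36.0, 18.0, 41.0, 19.0])     # used hours/service

    t, p = stats.ttest_rel(scheduled, actual)   # paired t-test across services
    print(f"t = {t:.2f}, p = {p:.3f}")          # small p -> reject the match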

Primary care is an important piece of the healthcare system that heavily affects patients' downstream medical care. Primary care faces specific challenges as healthcare shifts from fee-for-service to population health management and the medical home model, focuses on cost savings, and integrates quality measures. We consider the primary care unit at a large academic center facing such challenges, and we focus on imbalance in workload, a growing burden that directly concerns primary care staff. This imbalance can result in missed opportunities to deliver better patient care or to provide a good work environment for physicians and staff. Through a redesign of the unit's system, we seek to balance staff time with quality of care: we employ optimization models to reschedule providers' sessions, improving patient flow and, through that, achieving a more balanced workload for the support staff.

This work was performed with the MIT/MGH Collaboration.

In many healthcare services, care is provided continuously; however, care providers, e.g., doctors and nurses, work in discrete shifts. Hence, hand-offs between care providers are inevitable. Hand-offs are generally thought to affect patient care, although the effects are often hard to quantify due to reverse causality between patients' length of stay and the number of hand-off events. We use a natural randomized control experiment, induced by physicians' schedules, on teaching general medicine teams. We employ statistical tools to show that, between the two randomly assigned groups of patients, the subset that experiences a hand-off has a different length of stay than the other group.

This work was performed with the MIT/MGH Collaboration.

Many outpatient facilities with expensive resources, such as infusion and imaging centers, experience surges in patient arrivals at some times and are underutilized at others. This pattern results in patient safety concerns, patient and staff dissatisfaction, and limits on growth, among other problems. Scheduling practices are found to be one of the main contributors to this problem.

We developed a real-time scheduling framework to address this problem, specifically for infusion clinics. The algorithm assumes no knowledge of future appointments and does not change past appointments. Operational constraints are taken into account, and the algorithm can offer multiple choices to patients.
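
A toy sketch of the real-time flavor described above: each arriving request is booked immediately, with no knowledge of future arrivals and without moving earlier bookings. The least-loaded-chair rule and all numbers are illustrative; the actual algorithm (and its competitive-ratio guarantee discussed below) is more involved.

    # Online booking sketch: greedily assign each request on arrival.
    n_chairs, n_slots = 3, 8
    busy = [[False] * n_slots for _ in range(n_chairs)]  # chair-by-slot occupancy

    def book(start, length):
        """Book [start, start+length) on the least-loaded free chair."""
        free = [c for c in range(n_chairs)
                if not any(busy[c][start:start + length])]
        if not free:
            return None                  # no feasible chair at this time
        best = min(free, key=lambda c: sum(busy[c]))  # least-loaded chair
        for s in range(start, start + length):
            busy[best][s] = True
        return best

    for start, length in [(0, 2), (1, 3), (0, 1), (2, 2)]:  # arrival stream
        print("request", (start, length), "-> chair", book(start, length))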

We generalize this framework to a new scheduling model and analyze its performance through its competitive ratio, comparing the resource utilization of the real-time algorithm with that of an optimal offline algorithm that knows the entire future. It can be proved that the competitive ratio of the scheduling algorithm is between 3/2 and 5/3.

This work was performed with the MIT/MGH Collaboration.

Tracking COVID-19

We are tracking the spread of COVID-19 in real time on our interactive dashboard, with data available for download. We are also modeling the spread of the virus; preliminary study results are discussed on our blog.