
FDA and USDA Investigate Seasonal Factors Contributing to E. coli Outbreaks Linked to Romaine Lettuce

By Food Safety Tech Staff

CFSAN and the USDA’s Agricultural Research Service are conducting research to better understand the factors, including seasonal effects, that could be contributing to E. coli O157:H7 outbreaks linked to bagged romaine lettuce. FDA and USDA scientists published findings in the BMC journal Environmental Microbiome revealing that E. coli O157:H7 survived “significantly better in cold-stored packaged romaine harvested in the fall than on the same varieties harvested in late spring.” The researchers also showed that the microbiome present on bagged lettuce changes with the season, the level of deterioration of the lettuce, and whether survival of the pathogen on the lettuce was high or low. “This is a significant step toward closing the knowledge gaps identified in the FDA’s Leafy Greens STEC Action Plan and helping the agency and its partners to reduce foodborne illness linked to the consumption of leafy greens,” CFSAN stated in an agency update.

The study, “Seasonality, shelf life and storage atmosphere are main drivers of the microbiome and E. coli O157:H7 colonization of post-harvest lettuce cultivated in a major production area in California”, has been published on the Environmental Microbiome’s website.

FST Soapbox

Beyond the Results: What Can Testing Teach Us?

By Sasan Amini

The microbiology lab will increasingly be understood as the gravitational center of big data in the food industry. Brands that understand how to leverage the data microbiology labs are producing in ever larger quantities will be in the best position to positively impact their bottom line—and even transform the lab from a cost center to a margin contributor.

The global rapid microbiology testing market continues to grow at a steady pace. The market is projected to reach $5.09 billion by 2023, up from $3.45 billion in 2018. Increased demand for food microbiology testing—and pathogen detection in particular—continues to drive the overall growth of this sector. The volume of food microbiology tests totaled 1.14 billion tests in 2016—up 15% from 2013. In 2018 that number is estimated to have risen to 1.3 billion tests, accounting for nearly half the overall volume of industrial microbiology tests performed worldwide.

The food industry is well aware that food safety testing programs are a necessary and worthwhile investment. Given the enormous human and financial costs of food recalls, a robust food safety testing system is the best insurance policy any food brand can buy.

We are going through a unique transition in which food safety tests are evolving from binary assays into data engines capable of generating orders of magnitude more information. This creates a unique opportunity: Big data collected from routine pathogen testing can do more than help stop an outbreak. Paired with machine learning and other data platforms, these data can become valuable, actionable insights for the industry.

While some of these applications will have an impact on fundamental research, I expect that big data analytics and bioinformatics will have a significant opportunity to push these tests beyond mere diagnostics and turn them into vehicles for driving actions and offering recommendations. Two examples of such transformations are product development and environmental testing.

Food-Safety Testing Data and Product Development

Next-generation-sequencing (NGS) technologies demonstrate a great deal of potential for product development, particularly when it comes to better understanding shelf life and generating more accurate shelf-life estimates.

Storage conditions, packaging, pH, temperature and water activity, among other factors, can influence food quality and shelf life. Shelf-life estimates, however, have traditionally been based on rudimentary statistical models incapable of accounting for the complexity of factors that impact food freshness; in particular, they cannot take into account the composition and quantity of the microbial communities present on a given food sample. These limitations have long been recognized by food scientists and have led them to look for cost-effective alternatives.

By using NGS technologies, scientists can gain a more complete picture of the microbial composition of foods and how those microbial communities are influenced by intrinsic and extrinsic factors.

It’s unlikely that analyzing the microbiome of every food product or unit of product will ever be a cost-effective strategy. However, over time, as individual manufacturers and the industry as a whole analyze more samples and generate more data, we should be able to develop increasingly accurate predictive models. The cost and logistics of data generation could be significantly streamlined if existing food safety tests evolve into broader vehicles that create insights on both the safety and the quality of a food product simultaneously. By comparing the observed microbiome profile of a fresh product against these models, we could greatly improve our estimates of a given product’s remaining shelf life.
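
To make the idea concrete, here is a minimal sketch of what such a predictive model could look like, using scikit-learn. The taxa, abundance values and shelf-life labels are invented for illustration; a production model would be trained on thousands of sequenced lots and validated against held-out data.

```python
# Hypothetical sketch: predicting remaining shelf life from a microbiome
# profile. The taxa, abundances and shelf-life labels are invented for
# illustration; a real model would be trained on thousands of sequenced lots.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: relative abundances of four spoilage-indicator taxa measured
# on a freshly packaged sample (values sum to 1).
X_train = np.array([
    [0.40, 0.10, 0.05, 0.45],
    [0.10, 0.30, 0.20, 0.40],
    [0.55, 0.05, 0.10, 0.30],
    [0.05, 0.45, 0.25, 0.25],
])
y_train = np.array([4.0, 9.0, 3.0, 11.0])  # observed days until spoilage

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Compare a new lot's observed profile against the learned model.
fresh_sample = np.array([[0.35, 0.15, 0.10, 0.40]])
print(f"Estimated remaining shelf life: {model.predict(fresh_sample)[0]:.1f} days")
```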

This will open a number of new opportunities for food producers and consumers. Better shelf-life estimates will create efficiencies up and down the food supply chain. The impact on product development can hardly be overstated. As we better understand the precise variables that impact food freshness for particular products, we can devise food production and packaging technologies that enhance food safety and food quality.

As our predictive models improve, an entire market for these models will emerge, much as it has in other industries that rely on machine learning models to draw predictive insights from big data.

Data Visualization for Environmental Monitoring

In the past one to two years, NGS technologies have matured to the point that they can now be leveraged for high-volume pathogen and environmental testing.

Just as in other industries, big data coupled with data visualization approaches can play a mainstream role in food safety and quality applications.

Data visualization techniques are not new to food safety programs and have proven particularly useful when analyzing the results of environmental testing. The full potential of data visualization has yet to be realized, however. Visualizations can be used to better understand harborage sites, identify patterns that need attention, and show how specific strains of a pathogen are migrating through a facility.
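
As a simple illustration of the kind of visualization meant here, the sketch below plots hypothetical environmental-swab results on a facility grid with matplotlib; the site layout, counts and hot spot are all invented.

```python
# Hypothetical sketch: plotting environmental-swab positives on a facility
# grid to spot harborage sites. The site layout and counts are invented.
import numpy as np
import matplotlib.pyplot as plt

# positives[row, col] = Listeria-positive swabs per sampling zone over a
# quarter, on a 5 x 8 grid of sites in the facility.
rng = np.random.default_rng(7)
positives = rng.poisson(lam=0.5, size=(5, 8))
positives[3, 2] = 6  # a persistent hot spot, e.g., near a floor drain

fig, ax = plt.subplots()
im = ax.imshow(positives, cmap="Reds")
ax.set_xlabel("Facility column (zone)")
ax.set_ylabel("Facility row (zone)")
ax.set_title("Listeria-positive environmental swabs by site")
fig.colorbar(im, label="Positive swabs")
plt.show()
```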

Some of this is happening in food production facilities already, but it’s important to note that visualizations are only as good as the accuracy of the underlying data. That’s where technologies like NGS come in. NGS provides the option of deeper characterization of pathogenic microorganisms when needed (down to the strain level). The depth of information from NGS platforms enables more reliable and detailed characterization of pathogenic strains than existing methods.

Beyond basic identification, there are other potential use cases for environmental mapping, including tracking pathogens as they move through the supply chain. It’s my prediction that as the food industry more broadly adopts NGS technologies that unify testing and bioinformatics in a single platform, data visualization techniques will rapidly advance, so long as we keep asking ourselves: What can the data teach us?

The Food Data Revolution and Market Consolidation

Unlike most PCR and immunoassay-based testing techniques, which in most cases can generate only binary answers, NGS platforms generate millions of data points per sample across tens to hundreds of samples per run. As NGS technologies are adopted and the data we collect increases exponentially, the food safety system will become the data engine upon which new products and technologies are built.

Just as we have seen in any number of industries, companies with access to data and the means to make sense of it will be in the best position to capitalize on new revenue opportunities and economies of scale.

Companies that have adopted NGS technologies for food safety testing will have an obvious advantage in this emerging market. And they won’t have had to radically alter their business model to get there. They’ll be running the same robust programs they have long had in place, but collecting a much larger volume of data in doing so. Companies with a vision of how to best leverage this data will have the greatest edge.

In the Food Lab

The Food Safety Testing Lab as Profit Center

By Mahni Ghorashi

It’s not that the food industry has been more reluctant than others to embrace change; rather, the forces that will drive food’s big data revolution have only recently come to bear.

Regulation is now playing a role. FSMA mandates that the industry embrace proactive food safety measures. That means higher testing volumes. Higher testing volumes mean more data.

At the same time, new technologies like next-generation sequencing (NGS) are beginning to find wide-scale adoption in food-safety testing. And NGS technologies generate a lot of data—so much so that the food safety lab will soon emerge as the epicenter of the food industry’s big data revolution. As a result, the microbiology lab, long a cost center, will soon emerge as one of the industry’s most surprising profit centers.

A Familiar Trend

This shift may be unprecedented in food, but plenty of other industries touched by a technological transformation have undergone a similar change, flipping the switch from overhead to revenue generation.

Take the IT department, for instance. The debate about whether IT is a cost or profit center has been ongoing for many years. If IT departments had simply kept doing what they had always done—data processing, enterprise resource planning, desktop applications, help desk—they would have remained cost centers.

But things look quite different today. Companies in today’s fast-changing business environment depend on their IT departments to generate value. Now and for the foreseeable future, the IT department is on the hook to provide companies with a strategic advantage and to create new revenue opportunities.

Netflix, for example, recently estimated the value of its recommendation and personalization engines at $1 billion per year, achieved by quadrupling its effective catalog, dramatically increasing customer engagement and reducing churn.

Another great example is the call centers of customer support departments. For most of their history, call centers generated incredibly small margins or were outright cost centers.

Now, call centers armed with AI and chatbots are a source of valuable customer insights and are a treasure trove of many brands’ most valuable data. This data can be used to fuel upsells, inform future product development, enhance brand loyalty, and increase market share.

Take Amtrak as a prime example. When the railway implemented natural language chatbots on its booking site, it generated 30% more revenue per booking, saved $1 million in customer service email costs and saw an 8X return on investment.

These types of returns are not out of reach for the food industry.

The Food Data Revolution Starts in the Lab

The microbiology lab will be the gravitational center of big data in the food industry. Millions of food samples flow in and out of these labs every hour and more and more samples are being tested each year. In 2016 the global food microbiology market totaled 1.14 billion tests—up 15% from 2013.1

I’d argue that the food-testing lab is the biggest data generator in the entire supply chain. These labs are not only collecting molecular data about raw and processed foods but also important inventory management information like lot numbers, brand names and supplier information, to name a few.

As technologies like NGS come online, the data these labs collect will increase exponentially.

NGS platforms have dramatically reduced turnaround times and achieve higher levels of accuracy and specificity than other sequencing platforms. Unlike most PCR and ELISA-based testing techniques, which can only generate binary answers, NGS platforms generate millions of data points with each run. Two hundred or more samples can be processed simultaneously at up to 25 million reads per sample.

With a single test, labs are able to gather information about a sample’s authenticity (is the food what the label says it is?); provenance (is the food from where it is supposed to be from?); adulterants (are there ingredients that aren’t supposed to be there?); and pathogen risk.

The food industry is well aware that food safety testing programs are already a worthwhile investment. Given the enormous human and financial costs of food recalls, a robust food-safety testing system is the best insurance policy any food brand can buy.

The brands that understand how to leverage the data that microbiology labs produce in ever larger quantities will be in a position to transform the cost of this insurance policy into new revenue streams.

Digitizing the Food Supply Chain

It’s clear that the food lab will generate massive amounts of data in the future, and it’s easy to see that this data will have value, but how, exactly, can food brands turn their data into revenue streams?

The real magic starts to happen when we can combine and correlate the trillions of data points gathered from new forms of testing like NGS with data already being collected, whether for inventory management, supply chain management, storage and environmental conditions, downstream sales, or other testing for attributes and contaminants like pH, antibiotics, heavy metals and color additives.

When a food brand has all of this data at its fingertips, it can start to feed the data through an artificial intelligence platform that finds patterns and trends in the data. The possibilities are endless, but some insights you could imagine are listed below (a toy analysis along these lines follows the list):

  • When I procure raw ingredient A from supplier B and distributors X, Y, and Z, I consistently record higher-than-average rates of contamination.
  • Over the course of a fiscal year, supplier A’s product, while higher in cost per pound, actually increases my margin because, on average, it confers greater nutritional value than supplier B’s product.
  • A rare pathogen strain is emerging from suppliers who used the same manufacturing plant in Arizona.
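
A toy version of the first kind of insight might look like the following pandas sketch; the suppliers, ingredients and counts are fabricated for illustration.

```python
# Hypothetical sketch: pooling routine lab results to surface supplier-level
# contamination patterns. Suppliers, ingredients and counts are fabricated.
import pandas as pd

results = pd.DataFrame({
    "supplier":   ["A", "A", "B", "B", "B", "C", "C"],
    "ingredient": ["flour", "flour", "flour", "whey", "whey", "flour", "whey"],
    "tests":      [120, 95, 110, 80, 75, 130, 90],
    "positives":  [1, 0, 9, 2, 3, 1, 1],
})

# Contamination rate per supplier/ingredient pair, worst first.
summary = (
    results.groupby(["supplier", "ingredient"])
    .agg(tests=("tests", "sum"), positives=("positives", "sum"))
    .assign(rate=lambda df: df["positives"] / df["tests"])
    .sort_values("rate", ascending=False)
)
print(summary)  # supplier B's flour stands out at roughly 8% positive
```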

Based on this information about suppliers, food brands can optimize their supplier relationships, decrease the risk associated with new suppliers, and prevent potential outbreaks from rare or emerging pathogen threats.

But clearly the real promise for revenue generation lies in leveraging food data to inform R&D and in creating a tighter feedback loop between food safety testing and product development.

The opportunities to develop new products based on insights generated in the microbiology lab are profound. This is where the upside lives.

For instance, brands could correlate shelf life with a particular ingredient or additive to find new ways of storing food longer. We can leverage data collected across a product line or multiple product lines to create new ingredient profiles that find substitutes for or eliminate unhealthy additives like corn syrup.

One of the areas I’m most excited about is personalized nutrition. With microbiome data collected during routine testing, we could develop probiotics and prebiotics that promote healthy gut flora, and eventually are even tailored to the unique genetic profile of individual shoppers. The holistic wellness crowd has always claimed that food is medicine; with predictive bioinformatic models and precise microbiome profiles, we can back up that claim scientifically for the first time.

Insights at Scale

Right now, much of the insight to be gained from unused food safety testing data requires the expertise of highly specialized bioinformaticians. We haven’t yet standardized bioinformatic algorithms and pipelines—that work is foundational to building the food genomics platforms of the future.

In the near future these food genomics platforms will leverage artificial intelligence and machine learning to automate bioinformatic workflows, dramatically increasing our ability to analyze enormous bodies of data and identify macro-level trends. Imagine the insights we could gain when we combine trillions of genomic data points from each phase in the food safety testing process—from routine pathogen testing to environmental monitoring to strain typing.

We’re not there yet, but the technology is not far off. And while the path to adoption will surely have its fair share of twists and turns, it’s clear that the business functions of food safety testing labs and R&D departments will grow to be more closely integrated than ever before.

In this respect the success of any food safety program will depend—as it always has—not just on the technology deployed in labs, but on how food brands operate. In the food industry, where low margins are the norm, brands have long depended on efficiently managed operations and superb leadership to remain competitive. I’m confident that given the quality and depth of its human resources, the food industry will prove more successful than most in harnessing the power of big data in ways that truly benefit consumers.

The big data revolution in food will begin in the microbiology lab, but it will have its most profound impact at the kitchen table.

References

  1. Ferguson, B. (February/March 2017). “A Look at the Microbiology Testing Market.” Food Safety Magazine. Retrieved from https://www.foodsafetymagazine.com/magazine-archive1/februarymarch-2017/a-look-at-the-microbiology-testing-market/.
Food Genomics

Metagenomes and Their Utility

By Gregory Siragusa, Douglas Marshall, Ph.D., Nur A. Hasan

Recall that in article one of this series we wrote that there are two main techniques for obtaining a microbiome: a targeted survey (e.g., of bacteria or fungi) or a metagenome (in which all DNA in a sample is sequenced, not just specific targets such as bacteria or fungi). In this column we will explore metagenomes and some of their applications to food safety and quality.

We have invited Dr. Nur Hasan of CosmosID, Inc., an expert in the field of microbial metagenomics, to share his deep knowledge of metagenomics. Our format will be an interview style.

Safe food production and preservation is a balancing act between food enzymes and microbes. We will start with some general questions about the microbial world, and then proceed deeper into why and how tools such as metagenomics are advancing our ability to explore this universe. Finally, we will ask Dr. Hasan how he sees all of this applying to food microbiology and safe food production.

Greg Siragusa/Doug Marshall: Thank you for joining us. Dr. Hasan, please give us a brief statement of your background and current position.

Nur Hasan: Thanks for having me. I am a molecular biologist by training. I did my bachelor’s and master’s in microbiology, an M.B.A. in marketing, and a Ph.D. in molecular biology. My current position is vice president and head of research and development at CosmosID, Inc., where I lead the effort to develop the world’s largest curated genome databases and ultra-rapid bioinformatics tools to build the most comprehensive, actionable and user-friendly metagenomic analysis platform for both pathogen detection and microbiome characterization.

Siragusa/Marshall: The slogan for CosmosID is “Exploring the Universe of Microbes”. What is your estimate of the number of bacterial genera and species that have not yet been cultured in the lab?

Hasan: Estimating the number of uncultured bacteria on Earth is an ongoing challenge in biology. The widely accepted notion is that more than 99% of bacteria from environmental samples remain ‘unculturable’ in the laboratory; however, with improvements in media design, adjustment of nutrient compositions and optimization of growth conditions based on the ecosystems these bacteria naturally inhabit, scientists are now able to grow more bacteria in the lab than we anticipated. Yet our understanding of culturable species diversity across Earth’s diverse ecosystems is very scant. With more investigators using metagenomics tools, many ecosystems are being repeatedly sampled, revealing ever more microbial diversity. Other ecosystems remain ignored, so we have only a skewed understanding of species diversity and of what portion of that diversity is actually culturable. A report from Schloss & Handelsman highlighted the limitations of sampling and the fact that it is not possible to estimate the total number of bacterial species on Earth.1 Despite the limitation, they took a stab at the question and predicted a minimum bacterial species richness of 35,498. A more recent report by Hugenholtz estimated that there are currently 61 distinct bacterial phyla, of which 31 have no cultivable representatives.2 Currently NCBI lists about 16,757 bacterial species, which represents less than 50% of the minimum species richness predicted by Schloss & Handelsman and only a fraction of the global species richness of about 10⁷ to 10⁹ estimated by Curtis and Dykhuizen.3,4

Siragusa/Marshall: In generic terms what exactly is a metagenome? Also, please explain the meaning of the terms “shotgun sequencing”, “shotgun metagenomes”, and “metagenomes”.  How are they equivalent, similar or different?

Hasan: Metagenome is actually an umbrella term. It refers to the collection of the genetic content of all organisms present in a given sample, and it is studied by a method called metagenomics, which involves directly sequencing a heterogeneous population of DNA molecules from a biological sample all at once. Although in most applications metagenome is used to refer to the microbial metagenome (the genes and genomes of the microbial communities of a given sample), in a broader sense it represents the total genetic makeup of a sample, including genomes and gene sequences of other materials in the sample, such as nucleic acids contributed by food ingredients of plant and animal origin. The metagenome provides an in-depth understanding of the composition, structure, and functional and metabolic activities of food, agricultural and human-associated microbial communities.

Shotgun sequencing is a method in which long strands of DNA (such as the entire genome of a bacterium) are randomly shredded (“shotgunning”) into smaller DNA fragments so that they can be sequenced individually. Once sequenced, these small fragments are assembled into contigs by computer programs that find overlaps in the genetic code, and the complete sequence of the bacterial genome is generated. Now, instead of one genome, if you directly sequence an entire assemblage of genomes from a metagenome using such a shotgun approach, it’s called shotgun metagenomics, and the resulting output is termed a shotgun metagenome. By this method you are literally sequencing thousands of genomes simultaneously in one assay, and you get the opportunity to reconstruct individual genomes or genome fragments for investigation and comparison of the genetic consortia and taxonomic composition of complete communities and their predicted functions. Whereas targeted 16S rRNA (or 16S amplicon) sequencing relies on amplification and sequencing of one target region, the 16S gene, shotgun metagenomics is target free: It aims to sequence the entire genome of every organism present in a sample and gives a more accurate and unbiased biological representation of the sample. As an analogy, think of your library, where you may have multiple books (like the different organisms present in a metagenome). Imagine a process whereby all the books in your library are shredded and mixed up, and you then reassemble each of your favorite books by finding overlapping text among the small shredded pieces. Shotgun metagenomics approximates this analogy.
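
To ground the library analogy, here is a tiny, purely illustrative sketch of overlap-based assembly: made-up fragments of a made-up sequence are greedily merged on their longest exact overlaps. Real assemblers handle sequencing errors, repeats and billions of reads with far more sophisticated graph algorithms.

```python
# Toy illustration of the assembly step: repeatedly merge the two fragments
# with the longest exact overlap until one contig remains. The fragments
# are invented for illustration.

def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

fragments = ["GGATCCAA", "ATGGCGTTAA", "GTTAACCGGAT"]

while len(fragments) > 1:
    # Find the ordered pair of distinct fragments with the largest overlap.
    pairs = [(a, b) for a in fragments for b in fragments if a != b]
    a, b = max(pairs, key=lambda p: overlap(*p))
    k = overlap(a, b)
    if k == 0:
        break  # nothing overlaps anymore; leave the remaining contigs
    fragments.remove(a)
    fragments.remove(b)
    fragments.append(a + b[k:])

print(fragments)  # ['ATGGCGTTAACCGGATCCAA']
```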

Metagenome and metagenomics are often used interchangeably. Whereas the metagenome is the total collection of genetic material from a given sample, metagenomics is the method used to obtain a metagenome, typically a shotgun sequencing approach that sequences all of this genetic material at once.

Shotgun sequencing and shotgun metagenomics are also used interchangeably. Shotgun sequencing is a technique in which you fragment large DNA strands into small pieces and sequence all of the small fragments. When you apply this technique to sequence a metagenome, we call it shotgun metagenomics.


In the Food Lab

Scientific Breakthrough May Change Food Safety Forever

By David Chambliss

How safe is a raw diet? Could sterilizing our food actually make us more prone to sickness? Are vegans healthier than carnivores? In the last few decades, global food poisoning scares from beef to peanut butter have kept food scientists and researchers around the world asking these questions and searching for improved methods of handling and testing what we eat.

It’s been more than 150 years since Louis Pasteur introduced the idea of germ theory—that bacteria cause sickness—fundamentally changing the way we think about what makes our food safe to eat. While we’ve advanced in so many other industrial practices, we’re still using pasteurization as the standard for the global food industry today.

Although pasteurization effectively controls most organisms and keeps the food supply largely safe, we continue to have foodborne outbreaks despite additional testing and more sophisticated techniques. The potential health promise of genomics, and of the genetics and bacterial ecosystems of the gut microbiome, could be the key to the next frontier in food safety.

The scientific community is once again at the cusp of a new era with the advent of metagenomics and its application to food safety.

What is metagenomics? Metagenomics is the study of a bacterial community through its genetics, examining the entire DNA content of a sample at once. Whole genome sequencing of a single bacterium tells us about the DNA of a specific organism, whereas metagenomic testing tells us about the interaction of all the DNA of all the organisms within a sample or an environment. Think of the vast quantity of genetic material in the soil of a rice paddy, a lettuce leaf, your hand, a chicken ready for cooking, or milk straight from a cow. All of them host thousands of bacteria that live together in a complex community called the microbiome, which may contain bacteria that are sometimes harmful to humans—and possibly other bacteria that help keep the potentially harmful ones in check.

Metagenomics uses laboratory methods to break up cells and extract many millions of DNA molecular fragments, and sequencing instruments to measure the sequences of A’s, C’s, G’s and T’s that represent the genetic information in each of those fragments. Then scientists use computer programs to take the information from millions or billions of fragments and determine from what bacteria they came. The process is a little like mixing up many jigsaw puzzles, grabbing some pieces from the mix, and figuring out what was in the original pictures. The “pictures” are the genomes of bacteria, which in some cases carry enough unique information to associate a given bacterium with a previously seen colony of the same species.
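
A toy version of that jigsaw-sorting step is sketched below: short reads are assigned to reference genomes by counting shared k-mers. The sequences are invented, and production classifiers index vast genome databases with far more robust statistics.

```python
# Toy illustration of read classification: assign short sequenced fragments
# ("reads") to reference genomes by counting shared k-mers. The sequences
# are invented for illustration.
K = 5

references = {
    "Salmonella_toy": "ATGGCGTTAACCGGATCCAAGT",
    "Listeria_toy":   "TTGACCGTAGGCTTAACGGATA",
}

def kmers(seq, k=K):
    """All overlapping substrings of length k in seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

index = {name: kmers(genome) for name, genome in references.items()}

reads = ["GCGTTAACC", "GTAGGCTTA", "CCCCCCCCC"]
for read in reads:
    # Score each reference by how many of the read's k-mers it contains.
    scores = {name: len(kmers(read) & ref) for name, ref in index.items()}
    best, hits = max(scores.items(), key=lambda kv: kv[1])
    print(read, "->", best if hits > 0 else "unclassified")
```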

Genomics of single bacterial cultures, each from a single species, is well established as a way to connect samples of contaminated foods with reported cases of foodborne illnesses. With metagenomics, which essentially looks for all known species simultaneously, one hopes to do a better job of early detection and prevention. For example, if a machine malfunction causes pasteurization or cleaning to be incomplete, the metagenomics measurement will likely show compositional shifts in which bacterial phyla are abundant. This can make it possible to take remedial action even before there are signs of pathogens or spoilage that would have led to a costly recall.

Up until now, keeping food safe has meant limiting the amount of harmful bacteria in the community. That means using standard methods such as pasteurization, irradiation, sterilization, salting and cooking. To determine whether food is actually safe to eat, we test for the presence of a handful of specific dangerous organisms, including Listeria, E. coli and Salmonella, to name a few. But what about all the “good” bacteria that are killed along with the “bad” bacteria in the process of making our food safe?

Nutritionists, doctors and food scientists understand that the human gut is well equipped to thrive unless threatened by particularly dangerous contaminants. The ability to determine the entire genetic makeup within a food could mean being able to know with certainty whether it contains any unwanted or unknown microbial hazards. Metagenomic testing of the food supply would usher in an entirely new approach to food safety—one in which we could detect the presence of all microbes in food, including previously unknown dangers. It could even mean less food processing that leaves more of the healthful bacteria intact.

More than 150 years ago, Pasteur pointed us in the right direction. Now the world’s brightest scientific minds are primed to take the food industry the next leap toward a safer food supply.


Preventing Outbreaks a Matter of How, Not When

By Maria Fontanazza

When it comes to preventing foodborne illness, staying ahead of the game can be an elusive task. In light of the recent outbreaks affecting Chipotle (norovirus, Salmonella, E. coli) and Dole’s packaged salad (Listeria), having the ability to identify potentially deadly outbreaks before they begin, every time, would certainly be the holy grail of food safety.

One year ago IBM Research and Mars, Inc. embarked on a partnership with that very goal in mind. They established the Consortium for Sequencing the Food Supply Chain, which they’ve touted as “the largest-ever metagenomics study…sequencing the DNA and RNA of major food ingredients in various environments, at all stages in the supply chain, to unlock food safety insights hidden in big data”. The idea is to sequence metagenomes from different parts of the food supply chain and build reference databases describing what a healthy or unhealthy microbiome looks like, what bacteria live there on a regular basis and how they interact. From there, the information would be used to identify potential hazards, according to Jeff Welser, vice president and lab director at IBM Research–Almaden.

“Obviously a major concern is to always make sure there’s a safe food supply chain. That becomes increasingly difficult as our food supply chain becomes more global and distributed [in such a way] that no individual company owns a portion of it,” says Welser. “That’s really the reason for attacking the metagenomics problem. Right now we test for E. coli, Listeria, or all the known pathogens. But if there’s something that’s unknown and has never been there before, if you’re not testing for it, you’re not going to find it. Testing for the unknown is an impossible task.” With the recent addition of the diagnostics company Bio-Rad to the collaborative effort, the consortium is preparing to publish information about its progress over the past year. In an interview with Food Safety Tech, Welser discusses the consortium’s efforts since it was established and how it is starting to see evidence that monitoring microbiomes could provide advance insight into food safety issues.

Food Safety Tech: What progress has the Consortium made over the past year?

Jeff Welser: For the first project with Mars, we decided to focus on pet food. Although the company might be known for its chocolates, at least half of Mars’ revenue comes from the pet care industry. It’s a good area to start because pet food uses the same ingredients as human food, but it’s processed very differently. There’s a large conglomeration of ingredients in pet food that might not be part of human food, but the testing methodology is directly applicable to human food. We started at one of their factories and sampled the raw ingredients coming in. Over the past year, we’ve been establishing whether we can measure a stable microbiome (if we measure the same ingredient from the same supplier day to day) and [be able to identify] when something has changed.

At a high level, we believe the thesis is playing out. We’re going to publish work that is much more rigorous than that statement. We see good evidence that the overall thesis of monitoring the microbiome appears to be viable, at least for raw food ingredients. We would like to make it more quantitative, figure out how you would actually use this on a regular basis, and think about other places we could test, such as other parts of the factory or machines.

[Infographic: Sequencing the food supply chain. Courtesy of IBM Research]

FST: What are the steps to sequencing a microbiome?

Welser: A sample of food is taken into a lab, where a process breaks down the cell walls to release the DNA and RNA into a slurry. A next-generation sequencing machine identifies every snippet of DNA and RNA it can from that sample, resulting in huge amounts of data. That data is transferred to IBM and other partners for analysis of the organisms present. It’s not a straightforward calculation, because different organisms often share genes or have similar snippets of genes. Also, because you’ve broken everything up, you don’t necessarily have a full gene; you might have only a snippet of a gene. You want to look at different types of genes and different regions to identify bad organisms, and so on. By looking at both DNA and RNA, you want to try to determine whether an organism is currently active.

The process is all about the analysis of the data sequence. That’s where we think it has a huge amount of possibility, but it will take more time to understand it. Once you have the data, you can combine it in different ways to figure out what it means.

FST: Discuss the significance of the sequencing project in the context of recent foodborne illness outbreaks. How could the information gleaned help prevent future outbreaks?

Welser: In general, this is exactly what we’re hoping to achieve. Since you can test the microbiome at any point in the supply chain, the hope is that it gives you much better headlights on a potential contamination issue wherever it occurs. Currently, raw food ingredients come into a factory before they’re processed. If you see a problem with the microbiome right there, you can stop it before it gets into the machinery. Of course, you don’t know whether it came in the shipment, from the farm itself, etc. But if you’re testing in those places, hopefully you’ll figure that out as early as possible. At the other end, when a company processes food and ships it to the store, it goes onto the [store] shelves. It’s not as though anyone is testing on a regular basis, but in theory you could test whether an ingredient is showing a different microbiome than what is normally seen.

The real challenge in the retail space is that today you can test anything sitting on the shelves for E. coli, Listeria, etc.—the [pathogens] we know about. It’s not regularly done while [product] is sitting on the shelves, because it’s not clear how effectively you can do it. It still doesn’t get past the challenge of how best to approach testing—how often it needs to be done, what the methodology is, etc. These are all still challenges ahead. In theory, this can be used anywhere, and the advantage is that it would tell you if anything has changed [versus] testing for [the presence of] one thing.

FST: How will Bio-Rad contribute to this partnership?

Welser: We’re excited about Bio-Rad joining, because right now we’re taking samples and doing next-generation sequencing to identify the microbiome. It’s much less expensive than it used to be, but it’s still a fairly expensive test. We don’t envision that everyone will be doing this every day in their factory. However, we want to build up our understanding to determine what kinds of tests could be conducted on a regular basis without doing the full next-gen sequencing. Whenever we do sequencing, we want to make sure we’re doing the other necessary battery of tests for that food ingredient. Bio-Rad has expertise in all these areas, and they’re looking at other ways to advance their testing technology into the genomic space. That is the goal: to come up with a scientific understanding that allows us to have tests, analysis, algorithms, etc. that would allow the food industry to monitor on a regular basis.