Tag Archives: DNA


Build Stronger Food Safety Programs With Next-Generation Sequencing

By Akhila Vasan, Mahni Ghorashi

According to a survey by retail consulting firm Daymon Worldwide, 50% of today’s consumers are more concerned about food safety and quality than they were five years ago. Their concerns are not unfounded. Recalls are on the rise, and consumer health is put at risk by undetected cases of food adulteration and contamination.

While consumers are concerned about the quality of the food they eat, buy and sell, the brands responsible for making and selling these products also face serious consequences if their food safety programs don’t safeguard against devastating recalls.

Food fraud, the deliberate and intentional substitution, addition, tampering or misrepresentation of food, food ingredients or food packaging, remains a key cause of recalls and a persistent issue for the food safety industry. According to PricewaterhouseCoopers, food fraud is estimated to be a $10–15 billion a year problem.

Some of the more notorious examples include wood shavings in Parmesan cheese, the 2013 horsemeat scandal in the United Kingdom, and Oceana’s landmark 2013 study, which revealed that a whopping 33% of seafood sold in the United States is mislabeled. While international organizations like Interpol have stepped up to tackle food fraud, which is exacerbated by the complexity of globalization, academics estimate that 4% of all food is adulterated in some way.

High-profile outbreaks due to undetected pathogens are also a serious risk for consumers and the food industry alike. The U.S. economy alone loses about $55 billion each year due to foodborne illnesses. The World Health Organization estimates that nearly 1 in 10 people become ill every year from eating contaminated food. In 2016 alone, several high-profile outbreaks rocked the industry. From the E. coli O26 outbreak at Chipotle to Salmonella in live poultry to Hepatitis A in raw scallops to the Listeria monocytogenes outbreak at Blue Bell ice cream, the food industry has dealt with many challenges on this front.

What’s Being Done?

Both food fraud and undetected contamination can cause massive, expensive and damaging recalls for brands. Each recall can cost a brand about $10 million in direct costs, and that doesn’t include the cost of brand damage and lost sales.

Frustratingly, recalls due to food fraud and contamination are rising at a time when regulation and policy are stronger than ever. As the global food system evolves, regulatory agencies around the world are fine-tuning or overhauling their food safety systems, taking a more preventive approach.

At the core of these changes is HACCP, the long-implemented and well-understood method of evaluating and controlling food safety hazards. In the United States, while HACCP is still used in some sectors, the move to FSMA is apparent in others; 2017 has been dubbed the year of FSMA compliance.

There is also the Global Food Safety Initiative (GFSI), a private industry conformance standard for certification, established proactively by industry to improve food safety throughout the supply chain. It is important to note that all regulatory drivers, public or private, work together toward the common goal of delivering safe food to consumers. However, more is needed to ensure that nothing slips through food safety programs.

Now, bolstered by regulatory efforts, advancements in technology make it easier than ever to update food safety programs to better safeguard against food safety risks and recalls and to explore what’s next in food.

Powering the Food Safety Programs of Tomorrow

Today, food safety programs are being bolstered by new technologies as well, including genomic sequencing techniques like NGS. NGS, which stands for next-generation sequencing, is an automated DNA sequencing technology that generates and analyzes millions of sequences per run, allowing researchers to sequence, re-sequence and compare data at a rate previously not possible.

Traditional polymerase chain reaction (PCR) methods are quickly being replaced by faster and more accurate solutions. The limitation of PCR relative to NGS is that PCR is targeted, meaning you have to know what you're looking for. It is also conducted one target at a time, meaning that each target you wish to test requires a separate run. This is costly and does not scale.

Next-generation sequencing, by contrast, is universal. A single test exposes all potential threats, both expected and unexpected. From bacteria and fungi to the precise composition of ingredients in a given sample, a single NGS test guarantees that hazards cannot slip through your supply chain. In the not-too-distant future, the cost and speed of NGS will meet and then quickly surpass legacy technologies; you can expect the technology to be adopted with increasing speed the moment it becomes price-competitive with PCR.

Applications of NGS

Even today’s NGS technologies are deployment-ready for applications including food safety and supplier verification. With the bottom line protected, food brands are also able to leverage NGS to build the food chain of tomorrow, and focus funding and resources on research and development.

Safety Testing. Advances in NGS allow retailers and manufacturers to securely identify specific pathogens down to the strain level, test environmental samples, verify authenticity and ultimately reduce the risk of outbreaks or counterfeit incidents.

Compared to legacy PCR methods, brands leveraging NGS are able to test for multiple pathogens with a single test, at a lower cost and higher accuracy. This universality is key to protecting brands against all pathogens, not just the ones for which they know to look.

Supplier Verification. NGS technologies can be used to combat economically motivated food fraud and mislabeling, and verify supplier claims. Undeclared allergens are the number one reason for recalls.

As a result of FSMA, the FDA now requires food facilities to implement preventive controls to avoid food fraud, which today occurs in up to 10% of all food types. Traditional PCR-based tests cannot distinguish between closely related species and have high false-positive rates. NGS offers high-resolution, scalable testing so that you can verify suppliers and authenticate product claims, mitigating risk at every level.

R&D. NGS-based metagenomics analysis can be used in R&D and new product development to build the next-generation of health foods and nutritional products, as well as to perform competitive benchmarking and formulation consistency monitoring.

As the consumer takes more and more control over what goes into their food, brands have the opportunity to differentiate not only on transparency, but on personalization, novel approaches and better consistency.

A Brighter Future for Food Safety

With advances in genomic techniques and analysis, we are now better than ever equipped to safeguard against food safety risks, protect brands from having to issue costly recalls, and even explore the next frontier for food. As the technology gets better, faster and cheaper, we are going to experience a tectonic shift in the way we manage our food safety programs and supply chains at large.

We will be discussing this topic, “Building Stronger Food Safety Programs through Next-Generation Sequencing”, during a live conversation on June 7, 2017 at 2:00 pm ET. Microbiologists, testing personnel, food industry management, and anyone interested in how to leverage these new technologies to fortify their food safety programs will learn how NGS is going to transform the future of food safety.

Sanjay Singh, Eurofins
Food Genomics

How is DNA Sequenced?

By Sanjay K. Singh, Douglas Marshall, Ph.D., Gregory Siragusa, Ph.D.

Here is a prediction: Within the next few years, at some point in your daily work life as a food safety professional, you will be called upon either to use genomic tools or to understand and relay information based on them when making important decisions about food safety and quality. Molecular biologists love to use what often seems like a foreign or secret language. Rest assured, dear reader, these terms are mostly just vernacular and are easily understood once you get comfortable with a bit of the vocabulary. In this, the fourth installment of our column, we give you another tool for your food genomics tool kit. We have called upon a colleague and sequencing expert, Dr. Sanjay Singh, to be a guest co-author on this topic of sequencing and to guide us through the genomics language barrier.

The first report of the annotated (labeled) sequence of the human genome came in 2003, 50 years after the discovery of the structure of DNA. That document provided all the genetic information required to create and sustain a human being. The discovery of the structure of DNA has provided a foundation for a deeper understanding of all life forms, with DNA as a core molecule of genetic information; of course, that includes our food and our tiny friends of the microbial world. Further molecular technological advances in agriculture, food science, forensics, epidemiology, comparative genomics, medicine, diagnostics and therapeutics are providing stunning examples of the power of genomics in our daily lives. We are only now beginning to harvest the fruits of sequencing and to use that knowledge routinely in our respective professions.

In our first column we wrote, “DNA sequencing can be used to determine the names, types, and proportions of microorganisms, the component species in a food sample, and track foodborne disease agents.” In this month’s column, we present a basic guide to how DNA sequencing chemistry works.

Image courtesy of US Human Genome Project Knowledge base

DNA sequencing is the process of determining the precise order of four nucleotide bases, adenine or A, cytosine or C, guanine or G, and thymine or T in a DNA molecule. By knowing the linear sequence of A, C, G, and T in a DNA molecule, the genetic information carried in that particular DNA molecule can be determined.

DNA sequencing emerged from the intersection of several fields, including biology, chemistry, mathematics, and physics.1,2 The critical breakthrough came in 1953, when James Watson, Francis Crick, Maurice Wilkins and Rosalind Franklin resolved the now familiar double-helix structure of DNA.3 Each helical strand is a polynucleotide, which consists of repeating monomeric units called nucleotides. A nucleotide consists of a sugar (deoxyribose), a phosphate moiety, and one of the four nitrogenous bases: the aforementioned A, C, G, and T. In the double helix, the strands run opposite to each other, commonly referred to as antiparallel. Repeating units of base pairs (bp), where A always pairs with T and C always pairs with G, are arranged within the double helix so that they are slightly offset from each other, like steps in a winding staircase. On paper, scientists often represent the double helix as a flat, ladder-like structure in which the base pairs form the rungs while the sugar-phosphate backbones form the antiparallel rails (see Figure 1).

Artistic representation of DNA Double Helix. Source: Eurofins

The two ends of each polynucleotide strand are called the 5′- and 3′-ends, a nomenclature that reflects the chemical structure of the deoxyribose sugar at each terminus. The lengths of single- and double-stranded DNA are measured in bases (b) and base pairs (bp), respectively. The two polynucleotide strands can be readily unzipped by heating, and on cooling the initial double-helix structure re-forms, or re-anneals. The ability to rezip the initial ladder-like structure can be attributed to the phenomenon of base pairing, which merits repetition: the base A always pairs with T, and the base G always pairs with C. This rather innocuous phenomenon is the basis for the mechanism by which DNA is copied when cells divide, and it is also the theoretical basis on which most traditional and modern DNA sequencing methodologies have been developed.
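Base pairing is simple enough to express in a few lines of code. The sketch below is our illustration, not from the column, and the function name is hypothetical; it derives the antiparallel complementary strand for any sequence:

```python
# Watson-Crick base pairing: A pairs with T, G pairs with C.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the antiparallel complementary strand, read 5' to 3'."""
    return "".join(PAIRS[base] for base in reversed(strand))

print(reverse_complement("ATGC"))  # -> GCAT
```

Applying the function twice returns the original strand, which is the code-level analogue of the unzip-and-rezip behavior described above.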

Other biological advances also paved the way for the development of sequencing technologies. Prominent among these was the discovery of enzymes that allow a scientist to manipulate DNA. For example, restriction enzymes that recognize and cleave DNA at specific short nucleotide sequences can be used to fragment a long duplex strand of DNA.4 The DNA polymerase enzyme, in the presence of deoxyribose nucleotide triphosphates (dNTPs, chemically reactive forms of the nucleotide monomers), can use a single DNA strand as a template to fill in the complementary bases and extend a shorter rail strand (primer extension) of a partial DNA ladder.5 A critical part of primer extension is the primer: a short single-stranded DNA piece (15 to 30 bases long) that is complementary to a segment of the target DNA. Primers are made using automated high-throughput synthesizer machines, and today they can be manufactured and delivered within a day. When the primer and the target DNA are combined through a process called annealing (heat, then cool), they form a structure with a ladder-like head and a long single-stranded tail. In 1983, Kary Mullis developed an enzyme-based process called the polymerase chain reaction (PCR). Using this protocol, one can take a single copy of DNA and amplify the same sequence an enormous number of times. One can think of PCR as a molecular photocopier in which a single piece of DNA is amplified into approximately 30 billion copies!
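The "30 billion copies" figure follows from simple doubling arithmetic: under ideal conditions each PCR cycle doubles every template present, so n cycles yield 2^n copies of a single starting molecule. A back-of-envelope sketch (ours, assuming perfect amplification efficiency):

```python
def pcr_copies(cycles: int, start_copies: int = 1) -> int:
    """Ideal PCR yield: every cycle doubles each template present."""
    return start_copies * 2 ** cycles

for n in (20, 30, 35):
    print(f"{n} cycles -> {pcr_copies(n):,} copies")
# 35 cycles give 2**35 = 34,359,738,368 copies, roughly the
# 30 billion figure mentioned above.
```

Real reactions fall short of perfect doubling, so the figure is an upper bound rather than a guarantee.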

The other critical event that changed the course of DNA sequencing efforts was the publication of the ‘dideoxy chain termination’ method by Dr. Frederick Sanger in December 1977.6 This marked the beginning of the first generation of DNA sequencing techniques. Most next-generation sequencing methods are refinements of the chain termination, or “Sanger method” of sequencing.

Frederick Sanger chemically modified each base so that when it was incorporated into a growing DNA chain, the chain was forcibly terminated. By setting up a primer-extension reaction in which a small quantity of one chemically modified ‘inactive’ base is mixed with the four active bases, Sanger obtained a series of DNA strands that, when separated by size, indicated the positions of that particular base in the DNA sequence. By analyzing the results from four such reactions run in parallel, each containing a different ‘inactive’ base, Sanger could piece together the complete sequence of the DNA. Subsequent modifications to the method allowed the sequence to be determined using dye-labeled terminating bases in a single reaction. Since a sequence of fewer than 1,000 bases can be determined from a single reaction, the sequences of longer DNA molecules have to be pieced together from many such reads.
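The logic of Sanger's four parallel reactions can be mimicked in a toy simulation (ours, heavily simplified: it skips the chemistry and the electrophoresis entirely). For each base we record the fragment lengths at which a terminating copy of that base would stop extension; merging the four "lanes" by length then reads the sequence back out:

```python
def sanger_read(template: str) -> str:
    """Toy readout of the four-lane chain-termination scheme."""
    # One "lane" per base: fragment lengths where that base terminates.
    lanes = {b: [i + 1 for i, x in enumerate(template) if x == b]
             for b in "ACGT"}
    # Merge the lanes, shortest fragment first, to recover the sequence.
    fragments = sorted((length, base)
                       for base, lengths in lanes.items()
                       for length in lengths)
    return "".join(base for _, base in fragments)

print(sanger_read("GATTACA"))  # -> GATTACA
```

In the wet lab, "sorting by length" is done physically, by running the fragments through a gel or capillary; the principle is the same.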

Using technologies available in the mid-1990s, as many as 1 million bases of sequence could be determined per day. At that rate, determining the sequence of the 3 billion bp human genome required years of sequencing work. By analogy, this is equivalent to reading the Sunday issue of The New York Times, about 300,000 words, at a pace of 100 words per day. The cost of sequencing the human genome was a whopping $70 million. The Human Genome Project clearly brought forth a need for technologies that could deliver fast, inexpensive and accurate genome sequences. In response, the field initially exploded with modifications to the Sanger method, driven by advances in enzymology, fluorescent detection dyes and capillary-array electrophoresis. Using the Sanger method, one can read up to ~1,000 bp in a single reaction, and 96 or 384 such reactions (in a 96- or 384-well plate) can be performed in parallel on automated DNA sequencers. More recently, a new wave of sequencing technologies, termed NGS or next-generation sequencing, has been commercialized. NGS is fast, automated, massively parallel and highly reproducible. NGS platforms can read more than 4 billion DNA strands and generate about a terabyte of sequence data in about six days! The whole 3 billion base pairs of the human genome can be sequenced and annotated in a month or less.
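The "years of sequencing work" claim is straightforward arithmetic on the figures above:

```python
# Mid-1990s throughput vs. human genome size (figures from the text).
genome_bp = 3_000_000_000      # human genome, base pairs
rate_bp_per_day = 1_000_000    # ~1 million bases sequenced per day
days = genome_bp / rate_bp_per_day
print(f"{days:.0f} days, about {days / 365:.1f} years")  # 3000 days, about 8.2 years
```

The same ratio drives the newspaper analogy: 300,000 words at 100 words per day is likewise 3,000 days.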


Gregory Siragusa, Eurofins
Food Genomics

Microbiomes Move Standard Plate Count One Step Forward

By Gregory Siragusa, Douglas Marshall, Ph.D.

Last month we introduced several food genomics terms, including the microbiome. Recall that a microbiome is the community, or population, of microorganisms that inhabits a particular environment or sample, and that there are two broad types of microbiome analysis: targeted (e.g., bacterial or fungal) and metagenomic (in which all DNA in a sample is sequenced, not just specific targets such as bacteria or fungi). This month we would like to introduce the reader to uses of microbiomes and how they augment standard plate counts, moving us into a new era in food microbiology. Before providing examples, it is useful to review a diagram explaining the general flow of the process of determining a microbiome (see Figure 1).

Figure 1. General process for performing a targeted microbiome (bacterial or fungal)

By analogy, if one thinks of cultural microbiology and plate counts as a process of counting colonies of microbes that come from a food or environmental sample, microbiome analysis can be thought of as identifying and counting signature genes, such as the bacteria-specific 16S gene, from the microbes in a food or environmental sample. Plate counts have been and remain a food microbiologist’s most powerful indicator tool; however, we know there are some limitations to their use. One limitation is that not all bacterial or fungal cells are capable of outgrowth and colony formation on specific media under a given set of incubation conditions (temperature, time, media pH, storage atmosphere, etc.). Individual plate count methods cannot cover the nearly infinite number of variations of growth atmospheres and nutrients. Because of these limitations, microbiologists understand that we have cultured only a small fraction of the bacterial types on the planet, a realization that led to the term “The Great Plate Count Anomaly” (Staley & Konopka, 1985). Think of a holiday party where guests were handed name tags on which was printed “Hello, I grow on Standard Methods Agar” or “Hello, I grow at 15°C”, etc. We can group the partygoers by their ability to grow on certain media; we can also count partygoers, but they still do not have names. As effective as our selective and differential media have become, bacterial colonies still do not come with their own “Hello, My Name Is XYZ” name tags. Therefore, in the lab, once a plate is counted it is generally tossed into the autoclave bag, along with its unnamed colonies and all they represent. Microbiomes can provide a name tag of sorts, as well as the proportion of people at the party who share a certain name. For instance: “Hello, My Name Is Pseudomonas” or “Hello, My Name Is Lactobacillus”. The host can then say, “Now we are going to count you; would all Pseudomonas please gather in this corner?” or “All Lactobacillus please meet at the punch bowl”.

It is a somewhat oversimplified analogy, but it makes the point that microbiome technology gives names and proportions. Microbiomes, too, have limitations. First, with current technologies a specific group of organisms must exceed a relatively large abundance threshold, approximately 10³ cells, to appear in the microbiome pie chart. In theory, a colony on a plate of agar medium can be derived from a single cell, or colony-forming unit (CFU). Moreover, not all amplified genes in a microbiome are necessarily from viable cells (a topic that will be covered later in this series); forming a colony on an agar surface, on the other hand, requires cell viability. Finally, the specificity of a microorganism name assigned to a group in a microbiome depends on the size of the sequenced amplicon (an amplicon is a segment of DNA, in this case 16S gene DNA, resulting from amplification by PCR before sequencing) and how well our microbial databases cover different subtypes within a species. Targeted microbiomes can reliably name the genus of an organism; however, resolution to the species or subspecies level is not guaranteed. (Later in this series we will discuss metagenomes and how they have the potential to identify organisms to the species or even subspecies level.) Readers can find very informative reviews on microbiome specificity in the following cited references: Bokulich, Lewis, Boundy-Mills, & Mills, 2016; de Boer et al., 2015; Ercolini, 2013; Kergourlay, Taminiau, Daube, & Champomier Vergès, 2015.
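In computational terms, the name-tag analogy amounts to tallying the taxonomic labels assigned to sequenced 16S amplicons and reporting each genus's share of the sample. A minimal sketch (ours, with made-up genus calls for illustration):

```python
from collections import Counter

def genus_proportions(read_labels):
    """Tally genus assignments for a set of 16S reads into proportions."""
    counts = Counter(read_labels)
    total = sum(counts.values())
    return {genus: n / total for genus, n in counts.items()}

# Hypothetical genus calls for eight amplicon reads:
reads = ["Pseudomonas"] * 5 + ["Lactobacillus"] * 2 + ["Listeria"]
print(genus_proportions(reads))
# {'Pseudomonas': 0.625, 'Lactobacillus': 0.25, 'Listeria': 0.125}
```

These proportions are what a microbiome pie chart displays; note that they are relative abundances, not the absolute counts a plate gives.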

When we consider the power of using cultural microbiology for quantitative functional indicators of microbial quality together with microbiomic analysis, limitations and all for both, microbiomes have opened a door onto the vast and varied biosphere of our food’s microbiology at a depth never before observed. This all sounds great, but how will we benefit from and use this information? We have constructed Table 1 with examples and links to microbiome applications for problems that would have required years of study by cultural microbiology techniques alone. Please note this is by no means an exhaustive list, but it serves to illustrate the very broad and deep potential of microbiomics for food microbiology. We encourage the reader to email the editors or authors with questions regarding any reference. Searching PubMed with the terms “food AND microbiome” will provide abstracts and a large variety of applications of this technology.

Foodstuff Reference
Ale (Bokulich, Bamforth, & Mills, 2012)
Beef Burgers (Ferrocino et al., 2015)
Beefsteak (De Filippis, La Storia, Villani, & Ercolini, 2013)
Brewhouse and Ingredients (Bokulich et al., 2012)
Cheese (Wolfe, Button, Santarelli, & Dutton, 2014)
Cheese and Listeria growth (Callon, Retureau, Didienne, & Montel, 2014)
Cherries, Hydrostatic Pressure (del Árbol et al., n.d.)
Cocoa (Illeghems, De Vuyst, Papalexandratou, & Weckx, 2012)
Dairy Starters and Spoilage Bacteria (Stellato, De Filippis, La Storia, & Ercolini, 2015)
Drinking Water Biofilms (Chao, Mao, Wang, & Zhang, 2015)
Fermented Foods (Tamang, Watanabe, & Holzapfel, 2016)
Foodservice Surfaces (Stellato, La Storia, Cirillo, & Ercolini, 2015)
Fruit and Vegetables (Leff & Fierer, 2013)
Insect Protein (Garofalo et al., 2017)
Kitchen surfaces (Flores et al., 2013)
Lamb (Wang et al., 2016)
Lobster (Tirloni, Stella, Gennari, Colombo, & Bernardi, 2016)
Meat and storage atmosphere (Säde, Penttinen, Björkroth, & Hultman, 2017)
Meat spoilage and processing plant (Pothakos, Stellato, Ercolini, & Devlieghere, 2015)
Meat Spoilage Volatiles (Casaburi, Piombino, Nychas, Villani, & Ercolini, 2015)
Meat Stored in Different Atmospheres (Ercolini et al., 2011)
Milk (Quigley et al., 2011)
Milk and Cow Diet (Giello et al., n.d.)
Milk and Mastitis (Bhatt et al., 2012)
Milk and Teat Preparation (Doyle, Gleeson, O’Toole, & Cotter, 2016)
Natural starter cultures (Parente et al., 2016)
Olives (Abriouel, Benomar, Lucas, & Gálvez, 2011)
Pork Sausage (Benson et al., 2014)
Spores in complex foods (de Boer et al., 2015)
Tomato Plants (Ottesen et al., 2013)
Winemaking (Marzano et al., 2016)
Table 1. Examples of microbiome analysis of different foods and surfaces.


Next-Generation Sequencing Targets GMOs

By Maria Fontanazza

As the movement among consumers for more information about the products they’re purchasing and consuming continues to grow, the food industry will experience persistent pressure from both advocacy groups and the government on disclosure of product safety information and ingredients. Top of mind as of late has been the debate over GMOs. “Given all of the attention on GMOs on the legislative side, there is huge demand from consumers to have visibility and transparency into whether products have been genetically modified or not,” says Mahni Ghorashi, co-founder of Clear Labs.

Mahni Ghorashi, co-founder of Clear Labs

Today Clear Labs announced the availability of its comprehensive next-generation sequencing (NGS)-based GMO test. The release comes at an opportune time, as the GMO labeling bill, which was passed by the U.S. House of Representatives last week, heads to the desk of President Obama.

Clear Labs touts the technology as the first scalable, accurate and affordable GMO test. NGS enables simultaneous screening for multiple genes at one time, which could save companies time and money. “The advantage and novelty of this new test or assay is the ability to screen for all possible GMO genes in a single universal test, which is a huge change from the way GMO testing is conducted today,” says Ghorashi.

The PCR test method is currently the industry standard for GMO screening, according to the Non-GMO Project. “PCR tests narrowly target an individual gene, and they’re extremely costly—between $150–$275 per gene, per sample,” says Ghorashi. “Next-generation sequencing is leaps and bounds above PCR testing.” Although he won’t specify the cost of the Clear Labs assay (the company uses a tiered pricing structure based on sample volume), Ghorashi says it’s a fraction of the cost of traditional PCR tests.

The new assay screens for 85% of approved GMOs worldwide and targets four major genes used in manufacturing GMOs (detection based on methods of trait introduction and selection, and detection based on common plant traits), allowing companies to determine the presence and amount of GMOs within products or ingredient samples. “We see this test as a definitive scientific validation,” says Ghorashi. The company’s tests integrate software analytics to enable customers to verify GMO-free claims, screen suppliers, and rank suppliers based on risk.

Screenshot of the Clear Labs GMO test, which is based on next-generation sequencing technology.

Clear Labs isn’t targeting food manufacturers of a specific size or sector within the food industry but anticipates that a growing number of leading brands will be investing in GMO testing technology. “We expect to see adoption across the board in terms of company size, related more to what their stance is on food transparency and making that information readily available to their end consumers,” says Ghorashi.

David Chambliss, IBM Research
In the Food Lab

Scientific Breakthrough May Change Food Safety Forever

By David Chambliss

How safe is a raw diet? Could sterilizing our food actually make us more prone to sickness? Are vegans healthier than carnivores? In the last few decades, global food poisoning scares from beef to peanut butter have kept food scientists and researchers around the world asking these questions and searching for improved methods of handling and testing what we eat.

It’s been more than 150 years since Louis Pasteur introduced the idea of germ theory—that bacteria cause sickness—fundamentally changing the way we think about what makes our food safe to eat. While we’ve advanced in so many other industrial practices, we’re still using pasteurization as the standard for the global food industry today.

Although pasteurization effectively controls most organisms and keeps the food supply largely safe, we continue to have foodborne outbreaks despite additional testing and more sophisticated techniques. The potential health promise of genomics, including the genetics of the gut microbiome and its bacterial ecosystems, could be the key to the next frontier in food safety.

The scientific community is once again at the cusp of a new era with the advent of metagenomics and its application to food safety.

What is metagenomics? Metagenomics is the study of a bacterial community by examining its entire DNA content at once. Whole-genome sequencing of a single bacterium tells us about the DNA of a specific organism, whereas metagenomic testing tells us about the interaction of all the DNA of all the organisms within a sample or an environment. Think of the vast quantity of genetic material in the soil of a rice paddy, a lettuce leaf, your hand, a chicken ready for cooking, or milk directly from a cow. All of them host thousands of bacteria that live together in a complex community called the microbiome, which may contain bacteria that are sometimes harmful to humans, and possibly also other bacteria that help keep the potentially harmful ones in check.

Metagenomics uses laboratory methods to break up cells and extract many millions of DNA fragments, and sequencing instruments to measure the sequences of A’s, C’s, G’s, and T’s that represent the genetic information in each of those fragments. Scientists then use computer programs to take the information from millions or billions of fragments and determine from which bacteria they came. The process is a little like mixing up many jigsaw puzzles, grabbing some pieces from the mix, and figuring out what was in the original pictures. The “pictures” are the genomes of bacteria, which in some cases carry enough unique information to associate a given bacterium with a previously seen colony of the same species.
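One common computational approach to this jigsaw problem is k-mer matching: compare the short subsequences in each fragment against those of known reference genomes and assign the fragment to the best match. A heavily simplified sketch (ours; real classifiers use indexed databases of complete genomes and far longer k-mers):

```python
def kmers(seq: str, k: int = 4) -> set:
    """All length-k subsequences of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def classify(fragment: str, references: dict, k: int = 4) -> str:
    """Assign a fragment to the reference sharing the most k-mers."""
    frag = kmers(fragment, k)
    return max(references,
               key=lambda name: len(frag & kmers(references[name], k)))

# Toy reference "genomes" (real ones run to millions of bases):
refs = {
    "genome-A": "ATGCGTACGTTAGCAT",
    "genome-B": "TTGACCGGAATCCGGA",
}
print(classify("CGTACGTT", refs))  # -> genome-A
```

A fragment whose k-mers match neither reference well would be reported as unclassified by a real tool; this sketch always picks the closer of the two.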

Genomics of single bacterial cultures, each from a single species, is well established as a way to connect samples of contaminated foods with reported cases of foodborne illnesses. With metagenomics, which essentially looks for all known species simultaneously, one hopes to do a better job of early detection and prevention. For example, if a machine malfunction causes pasteurization or cleaning to be incomplete, the metagenomics measurement will likely show compositional shifts in which bacterial phyla are abundant. This can make it possible to take remedial action even before there are signs of pathogens or spoilage that would have led to a costly recall.

Up until now, keeping food safe has meant limiting the amount of harmful bacteria in the community, using standard methods such as pasteurization, irradiation, sterilization, salting and cooking. To determine whether food is actually safe to eat, we test for the presence of a handful of specific dangerous organisms, including Listeria, E. coli, and Salmonella, to name a few. But what about all the “good” bacteria that are killed along with the “bad” bacteria in the process of making our food safe?

Nutritionists, doctors and food scientists understand that the human gut is well equipped to thrive unless threatened by particularly dangerous contaminants. The ability to determine the entire genetic makeup within a food could mean being able to know with certainty whether it contains any unwanted or unknown microbial hazards. Metagenomic testing of the food supply would usher in an entirely new approach to food safety—one in which we could detect the presence of all microbes in food, including previously unknown dangers. It could even mean less food processing that leaves more of the healthful bacteria intact.

More than 150 years ago, Pasteur pointed us in the right direction. Now the world’s brightest scientific minds are primed to take the food industry the next leap toward a safer food supply.

Steven Guterman, InstantLabs
In the Food Lab

Save Seafood with Digital Tracking

By Steven Guterman, Sarah McMullin, Steve Phelan

The combination of improved digital tracking along the food supply chain and fast, accurate DNA testing will provide the modern, state-of-the-art tools essential to guarantee accurate labeling for the ever-increasing quantities of foods and ingredients shipped globally.

The sheer scale of the international food supply chain creates opportunities for unscrupulous parties to substitute cheaper products under false labels. Fraud is clearly part of the problem: Some suppliers and distributors engage in economically motivated substitution.

It’s equally true, however, that some seafood misidentification is inadvertent. In fact, some species identification challenges are inevitable, particularly at the end of the chain after processing. We believe most providers want to act in an ethical manner.

Virtually all seafood fraud involves the falsification or manipulation of documents created to guarantee that the label on the outside of the box matches the seafood on the inside. Unfortunately, the documents are too often vague, misleading or deliberately fraudulent.

Oceana, an international non-profit focused solely on protecting oceans and ocean resources, has published extensively on seafood fraud and continues to educate the public and government through science-based campaigns.

Seafood fraud is not just an economic issue. If the product source is unknown, it is possible to introduce harmful contamination into the food supply. By deploying two actions simultaneously, we can help address this problem and reduce mistakes and mishandling:

  • Improved digital tracking technologies deployed along the supply chain
  • Faster, DNA-based in-house testing to generate results in hours
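The first of these, a digital record that carries the DNA test result with the shipment so a label mismatch surfaces immediately, can be sketched minimally as follows. The field names and species values are illustrative assumptions, not any vendor's actual schema:

```python
# Minimal sketch of a chain-of-custody record that pairs a shipment's
# declared label with an on-site DNA test result.
# Field names and values are illustrative, not any vendor's schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DnaTestResult:
    identified_species: str   # species called by the DNA test
    method: str               # e.g. "PCR"
    tested_at: datetime

@dataclass
class ShipmentRecord:
    shipment_id: str
    declared_species: str     # what the paperwork claims
    supplier: str
    tests: list = field(default_factory=list)

    def add_test(self, result: DnaTestResult):
        self.tests.append(result)

    def label_verified(self) -> bool:
        """True only if at least one DNA test confirms the declared species."""
        return any(t.identified_species == self.declared_species
                   for t in self.tests)

shipment = ShipmentRecord("SHP-1001", "Atlantic cod", "Example Seafoods")
shipment.add_test(DnaTestResult("Pacific whiting", "PCR",
                                datetime.now(timezone.utc)))
print(shipment.label_verified())  # a mismatch like this would flag the shipment
```

In practice such records would live in the enterprise supply chain system rather than in application code; the sketch only shows how a test result, once digital, can be checked against the label automatically at every handoff.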

Strategic collaborations can help industry respond to broad challenges such as seafood fraud. We partner with the University of Guelph to develop DNA-based tests for quick and accurate species identification. The accuracy and portability produced by this partnership allow companies to deploy tests conveniently at many points in the supply chain and get accurate species identification results in hours.

Our new collaboration with SAP, the largest enterprise digital partner in the world, will help ensure that test results can be integrated with a company’s supply chain data for instant visibility and action throughout the enterprise. SAP provides enterprise-level software to customers who distribute 78% of the world’s food, and its supply chain validation features have accordingly earned global acceptance.

The food fraud and safety digital tracking innovations being developed by SAP will be critical in attacking fraud. Linking paper documents with definitive test results at every point in the supply chain is no longer realistic: the paper trails in use today do not go far enough, and product volume has rendered paper unworkable. Frustrated retailers voice concerns that their customers believe they are doing more testing and validation than they can actually undertake.

We must generate more reliable data and make it available everywhere in seconds in order to protect and strengthen the global seafood supply chain.

Catfish will become the first seafood species covered by United States regulations as a result of recent Congressional legislation. This change will immediately test the supply chain’s ability to ensure accuracy, and catfish is but one species among thousands.

Increasingly, researchers and academics in the food industry recognize fast and reliable in-house and on-site testing as the most effective method to resolve the challenges of seafood authentication.

DNA-based analyses have proven repeatedly to be the most effective process to ensure accurate species identification across all food products. Unfortunately, verifying a species using DNA sequencing techniques typically takes one to two weeks to go from sample to result. With many products, and especially with seafood, speed on the production line is essential. In many cases, waiting two weeks for results is just not an acceptable solution.

Furthermore, while “dipstick” or lateral-flow tests may work on unprocessed food at the species level, DNA testing provides the only accurate method to differentiate species and sub-species in both raw and processed foods.

Polymerase chain reaction (PCR), which analyzes the sample DNA, can provide accurate results in two to three hours, which in turn enhances the confidence of producers, wholesalers and retailers in the products they sell and minimizes their risk of recalls and brand damage.

New technology eliminates multi-day delays for test results that slow down the process unnecessarily. Traditional testing options require sending samples to commercial laboratories that usually require weeks to return results. These delays can be expensive and cumbersome. Worse, they may prevent fast, accurate testing to monitor problems before they reach a retail environment, where brand and reputational risk are higher.

Rapid DNA-based testing conducted in-house, supported by sophisticated digital tracking technologies, will improve species identification within the seafood supply chain. This technological combination will strengthen our global food chain and allow us to do business with safety and confidence in the accuracy and reliability of seafood shipments.

Metagenomics, Food Safety

Preventing Outbreaks a Matter of How, Not When

By Maria Fontanazza

When it comes to preventing foodborne illness, staying ahead of the game can be an elusive task. In light of the recent outbreaks affecting Chipotle (norovirus, Salmonella, E. coli) and Dole’s packaged salad (Listeria), the ability to identify potentially deadly outbreaks before they begin, every time, would certainly be the holy grail of food safety.

One year ago IBM Research and Mars, Inc. embarked on a partnership with that very goal in mind. They established the Consortium for Sequencing the Food Supply Chain, which they’ve touted as “the largest-ever metagenomics study…sequencing the DNA and RNA of major food ingredients in various environments, at all stages in the supply chain, to unlock food safety insights hidden in big data”. The idea is to sequence the metagenomes of different parts of the food supply chain and build reference databases of what constitutes a healthy or unhealthy microbiome: what bacteria live there on a regular basis, and how they interact. From there, the information would be used to identify potential hazards, according to Jeff Welser, vice president and lab director at IBM Research–Almaden.

“Obviously a major concern is to always make sure there’s a safe food supply chain. That becomes increasingly difficult as our food supply chain becomes more global and distributed [in such a way] that no individual company owns a portion of it,” says Welser. “That’s really the reason for attacking the metagenomics problem. Right now we test for E. coli, Listeria, or all the known pathogens. But if there’s something that’s unknown and has never been there before, if you’re not testing for it, you’re not going to find it. Testing for the unknown is an impossible task.” With the recent addition of the diagnostics company Bio-Rad to the collaborative effort, the consortium is preparing to publish information about its progress over the past year.  In an interview with Food Safety Tech, Welser discusses the consortium’s efforts since it was established and how it is starting to see evidence that using microbiomes could provide insights on food safety issues in advance.

Food Safety Tech: What progress has the Consortium made over the past year?

Jeff Welser: For the first project with Mars, we decided to focus on pet food. Although they might be known for their chocolates, at least half of Mars’ revenue comes from the pet care industry. It’s a good area to start because it uses the same food ingredients as human food, but it’s processed very differently. There’s a large conglomeration of parts in pet food that might not be part of human food, but the tests for doing the work are directly applicable to human food. We started at a factory of theirs and sampled the raw ingredients coming in. Over the past year, we’ve been establishing whether we can measure a stable microbiome (measuring the same ingredient from the same supplier day to day) and [be able to identify] when something has changed.

At a high level, we believe the thesis is playing out. We’re going to publish work that is much more rigorous than that statement. We see good evidence that the overall thesis of monitoring the microbiome appears to be viable, at least for raw food ingredients. We would like to make it more quantitative, figure out how you would actually use this on a regular basis, and think about other places we could test, such as other parts of the factory or machines.

Sequencing the Food Supply Chain
Sequencing the food supply chain. Click to enlarge infographic (Courtesy of IBM Research)

FST: What are the steps to sequencing a microbiome?

Welser: A sample of food is taken into a lab where a process breaks down the cell walls to release the DNA and RNA into a slurry. A next-generation sequencing machine identifies every snippet of DNA and RNA it can from that sample, resulting in huge amounts of data. That data is transferred to IBM and other partners for analysis of the presence of organisms. It’s not a straightforward calculation, because different organisms often share genes or have similar snippets of genes. Also, because you’ve broken everything up, you don’t have a full gene necessarily; you might have a snippet of a gene. You want to look at different types of genes and different areas to identify bad organisms, etc.  When looking at DNA and RNA, you want to try to determine if an organism is currently active.
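One common way the analysis step attributes short snippets (reads) to organisms is by counting shared subsequences of fixed length, known as k-mers. A toy sketch of the idea, with invented reference sequences standing in for the full genome databases real pipelines use:

```python
# Toy sketch of k-mer-based read classification. Reference sequences
# and the read are invented; real pipelines match against databases
# of full genomes and handle genes shared between organisms.

def kmers(seq, k=5):
    """All overlapping substrings of length k in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Tiny hypothetical reference 'database'
references = {
    "Organism A": "ATGGCGTACGTTAGCCGTA",
    "Organism B": "TTGACCGGATCGAACCTGA",
}
ref_index = {name: kmers(seq) for name, seq in references.items()}

def classify(read, index):
    """Score a read against each reference by the fraction of shared k-mers."""
    read_kmers = kmers(read)
    scores = {name: len(read_kmers & ref) / len(read_kmers)
              for name, ref in index.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# A short read drawn from Organism A's sequence
print(classify("GCGTACGTTAGC", ref_index))  # → ('Organism A', 1.0)
```

The difficulty Welser describes, that organisms share genes and reads are only fragments, shows up here as ambiguous scores: a read matching several references equally well cannot be attributed to any one of them.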

The process is all about the analysis of the data sequence. That’s where we think it has a huge amount of possibility, but it will take more time to understand it. Once you have the data, you can combine it in different ways to figure out what it means.

FST: Discuss the significance of the sequencing project in the context of recent foodborne illness outbreaks. How could the information gleaned help prevent future outbreaks?

Welser: In general, this is exactly what we’re hoping to achieve. Since you test the microbiome at any point in the supply chain, the hope is that it gives you much better headlights to a potential contamination issue wherever it occurs. Currently raw food ingredients come into a factory before they’re processed. If you see the problem with the microbiome right there, you can stop it before it gets into the machinery. Of course, you don’t know whether it came in the shipment, from the farm itself, etc. But if you’re testing in those places, hopefully you’ll figure that out as early as possible. On the other end, when a company processes food and it’s shipped to the store, it goes onto the [store] shelves. It’s not like anyone is testing on a regular basis, but in theory you could do testing to see if the ingredient is showing a different microbiome than what is normally seen.

The real challenge in the retail space is that today you can test anything sitting on the shelves for E. coli, Listeria, etc.— the [pathogens] we know about. It’s not regularly done when [product] is sitting on the shelves, because it’s not clear how effectively you can do it. It still doesn’t get over the challenge of how best to approach testing—how often it needs to be done, what’s the methodology, etc. These are all still challenges ahead. In theory, this can be used anywhere, and the advantage is that it would tell you if anything has changed [versus] testing for [the presence of] one thing.

FST: How will Bio-Rad contribute to this partnership?

Welser: We’re excited about Bio-Rad joining, because right now we’re taking samples and doing next-generation sequencing to identify the microbiome. It’s much less expensive than it used to be, but it’s still a fairly expensive test. We don’t envision that everyone will be doing this every day in their factory. However, we want to build up our understanding to determine what kinds of tests you would conduct on a regular basis without doing the full next-gen sequencing. Whenever we do sequencing, we want to make sure we’re doing the other necessary battery of tests for that food ingredient. Bio-Rad has expertise in all these areas, and they’re looking at other ways to advance their testing technology into the genomic space. That is the goal: To come up with a scientific understanding that allows us to have tests, analysis and algorithms, etc. that would allow the food industry to monitor on a regular basis.