Tag Archives: genomics


USDA Appoints New Members to Food Safety Advisory Committee

By Food Safety Tech Staff

The USDA has appointed 21 new members and nine returning members to the National Advisory Committee on Microbiological Criteria for Foods (NACMCF). The purpose of the committee is to provide impartial scientific advice and recommendations to federal food safety agencies. Members of the committee are chosen based on their expertise in microbiology, risk assessment, epidemiology, public health, food science and other relevant disciplines. One individual affiliated with a consumer group is included in the membership of the committee and five members are federal government employees representing the five federal agencies involved in NACMCF—USDA FSIS, FDA, CDC, the Department of Commerce National Marine Fisheries Service, and the Department of Defense Veterinary Services.

“NACMCF members bring a wealth of expertise and dedication to the critical mission of ensuring the safety of our nation’s meat and poultry products,” said Agriculture Secretary Tom Vilsack. “Their contributions will help us continue to strengthen our nation’s food supply and protect the health and well-being of American consumers.”

The newly appointed NACMCF members, who will serve two-year terms, are:

  • Dr. Bledar Bisha. University of Wyoming, Laramie, Wyoming
  • Dr. Heather Carleton. Centers for Disease Control and Prevention, Atlanta, Georgia
  • Dr. Anna Carlson. Cargill Protein, Wichita, Kansas
  • Dr. Hayriye Cetin-Karaca. Smithfield Foods, Springdale, Ohio
  • Dr. Ben Chapman. North Carolina State University, Raleigh, North Carolina
  • Dr. Vik Dutta. bioMérieux, Chicago, Illinois
  • Dr. Larry Figgs. Douglas County Health Dept., Omaha, Nebraska
  • Dr. David Goldman. Groundswell Strategy, Arlington, Virginia
  • Dr. Michael Hansen. Consumer Reports, Yonkers, New York
  • Dr. Arie Havelaar. University of Florida, Gainesville, Florida
  • Dr. Ramin Khaksar. Clear Labs, San Carlos, California
  • Lieutenant Colonel Noel Kubat. Department of Defense, U.S. Army Veterinary Corps, Fort Knox, Kentucky
  • Dr. KatieRose McCullough. North American Meat Institute, Washington, D.C.
  • Dr. Indaue Giriboni de Mello. Newman’s Own, Westport, Connecticut
  • Dr. Eric Moorman. Butterball, LLC, Garner, North Carolina
  • Dr. Abani Pradhan. University of Maryland, College Park, Maryland
  • Mr. Shivrajsinh Rana. Reckitt, Parsippany, New Jersey
  • Dr. Marcos Sanchez Plata. Texas Tech University, Lubbock, Texas
  • Dr. Kristin Schill. University of Wisconsin – Madison, Madison, Wisconsin
  • Dr. Nikki Shariat. University of Georgia, Athens, Georgia
  • Dr. Abigail Snyder. Cornell University, Ithaca, New York

The returning NACMCF members are:

  • Dr. Yaohua (Betty) Feng. Purdue University, West Lafayette, Indiana
  • Ms. Janell Kause. U.S. Department of Agriculture, Food Safety and Inspection Service, Washington, D.C.
  • Dr. Elisabetta Lambertini. Global Alliance for Improved Nutrition, Washington, D.C.
  • Ms. Shannara Lynn. U.S. Department of Commerce, National Seafood Inspection Laboratory, Pascagoula, Mississippi
  • Dr. Maxim Teplitski. International Fresh Produce Association, Washington, D.C.
  • Dr. Bing Wang. University of Nebraska – Lincoln, Lincoln, Nebraska
  • Dr. Benjamin Warren. Food and Drug Administration, Center for Food Safety and Applied Nutrition, College Park, Maryland
  • Dr. Randy Worobo. Cornell University, Ithaca, New York
  • Dr. Teshome Yehualaeshet. Tuskegee University, Tuskegee, Alabama

NACMCF will hold a virtual public meeting of the full committee and subcommittees from November 14, 2023, to November 16, 2023. In addition to welcoming the new members, the committee will introduce a new charge from FSIS on genomic characterization of pathogens and continue working on the response to the FDA’s charge on Cronobacter spp. in Powdered Infant Formula. Register here to attend the meeting.

 

In the Food Lab

The Food Safety Testing Lab as Profit Center

By Mahni Ghorashi

It’s not that the industry has been more reluctant than others to embrace change; rather, the forces that will drive food’s big data revolution have only recently come to bear.

Regulation is now playing a role. FSMA mandates that the industry embrace proactive food safety measures. That means higher testing volumes. Higher testing volumes mean more data.

At the same time, new technologies like next-generation sequencing (NGS) are beginning to find wide-scale adoption in food-safety testing. And NGS technologies generate a lot of data—so much so that the food safety lab will soon emerge as the epicenter of the food industry’s big data revolution. As a result, the microbiology lab, long a cost center, will soon emerge as one of the industry’s most surprising profit centers.

A Familiar Trend

This shift may be unprecedented in food, but plenty of other industries touched by a technological transformation have undergone a similar change, flipping the switch from overhead to revenue generation.

Take the IT department, for instance. The debate about IT departments being a cost or profit center has been ongoing for many years. If data centers had simply kept doing what they had done in the past—data processing, enterprise resource planning, desktop applications, help desk—the IT department would have remained a cost center.

But things look quite different today. Companies in today’s fast-changing business environment depend on their IT departments to generate value. Now and for the foreseeable future, the IT department is on the hook to provide companies with a strategic advantage and to create new revenue opportunities.

Netflix, for example, recently estimated the value of its recommendation and personalization engines at $1 billion per year, crediting them with quadrupling its effective catalog, dramatically increasing customer engagement, and reducing churn.

Another great example is the call centers of customer support departments. For most of their history, call centers generated incredibly small margins or were outright cost centers.

Now, call centers armed with AI and chatbots are a source of valuable customer insights and are a treasure trove of many brands’ most valuable data. This data can be used to fuel upsells, inform future product development, enhance brand loyalty, and increase market share.

Take Amtrak as a prime example. When the passenger railroad implemented natural language chatbots on its booking site, it generated 30% more revenue per booking, saved $1 million in customer service email costs, and experienced an 8X return on investment.

These types of returns are not out of reach for the food industry.

The Food Data Revolution Starts in the Lab

The microbiology lab will be the gravitational center of big data in the food industry. Millions of food samples flow in and out of these labs every hour and more and more samples are being tested each year. In 2016 the global food microbiology market totaled 1.14 billion tests—up 15% from 2013.1

I’d argue that the food-testing lab is the biggest data generator in the entire supply chain. These labs are not only collecting molecular data about raw and processed foods but also important inventory management information like lot numbers, brand names and supplier information, to name a few.

As technologies like NGS come online, the data these labs collect will increase exponentially.
NGS platforms have dramatically reduced turnaround times and achieve higher levels of accuracy and specificity than other sequencing platforms. Unlike most PCR and ELISA-based testing techniques, which can only generate binary answers, NGS platforms generate millions of data points with each run. Two hundred or more samples can be processed simultaneously at up to 25 million reads per sample.
With a single test, labs are able to gather information about a sample’s authenticity (is the food what the label says it is?); provenance (is the food from where it is supposed to be from?); adulterants (are there ingredients that aren’t supposed to be there?); and pathogen risk.

The food industry is well aware that food safety testing programs are already a worthwhile investment. Given the enormous human and financial costs of food recalls, a robust food-safety testing system is the best insurance policy any food brand can buy.

The brands that understand how to leverage the data that microbiology labs produce in ever larger quantities will be in a position to transform the cost of this insurance policy into new revenue streams.

Digitizing the Food Supply Chain

It’s clear that the food lab will generate massive amounts of data in the future, and it’s easy to see that this data will have value, but how, exactly, can food brands turn their data into revenue streams?

The real magic starts to happen when we can combine and correlate the trillions of data points we’re gathering from new forms of testing like NGS with data already being collected, whether for inventory management, supply chain management, storage and environmental conditions, downstream sales data, or other forms of testing for attributes, additives and contaminants such as pH, antibiotics, heavy metals and color additives.

When a food brand has all of this data at their fingertips, they can start to feed the data through an artificial intelligence platform that can find patterns and trends in the data. The possibilities are endless, but some insights you could imagine are:

  • When I procure raw ingredient A from supplier B and distributors X, Y, and Z, I consistently record higher-than-average rates of contamination.
  • Over the course of a fiscal year, Supplier A’s product, while higher in cost per pound, actually increases my margin because, on average, it confers a greater nutritional value than supplier B’s product.
  • A rare pathogen strain is emerging from suppliers who used the same manufacturing plant in Arizona.

Based on this information about suppliers, food brands can optimize their supplier relationships, decrease the risk associated with new suppliers, and prevent potential outbreaks from rare or emerging pathogen threats.
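
As a minimal sketch of what such an analysis could look like (the column names, sample values and pandas workflow below are illustrative assumptions, not a description of any particular platform), one could tally test results by supplier and flag combinations whose contamination rate runs above the overall average:

```python
# Hypothetical sketch: flag supplier/ingredient combinations whose contamination
# rate runs above the overall average. Column names and data are illustrative.
import pandas as pd

tests = pd.DataFrame({
    "supplier":     ["A", "A", "B", "B", "B", "C", "C", "A"],
    "ingredient":   ["paprika"] * 8,
    "contaminated": [1, 0, 0, 0, 1, 0, 0, 1],   # 1 = positive test result
})

overall_rate = tests["contaminated"].mean()

by_supplier = (
    tests.groupby(["supplier", "ingredient"])["contaminated"]
    .agg(tests="count", rate="mean")
    .reset_index()
)

flagged = by_supplier[by_supplier["rate"] > overall_rate]
print(f"Overall contamination rate: {overall_rate:.2f}")
print(flagged)
```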

But clearly the real promise for revenue generation lies in leveraging food data to inform R&D and in creating a tighter feedback loop between food safety testing and product development.

The opportunity to develop new products based on insights generated in the microbiology lab is profound. This is where the upside lives.

For instance, brands could correlate shelf life with a particular ingredient or additive to find new ways of storing food longer. We can leverage data collected across a product line or multiple product lines to create new ingredient profiles that find substitutes for or eliminate unhealthy additives like corn syrup.

One of the areas I’m most excited about is personalized nutrition. With microbiome data collected during routine testing, we could develop probiotics and prebiotics that promote healthy gut flora, and eventually are even tailored to the unique genetic profile of individual shoppers. The holistic wellness crowd has always claimed that food is medicine; with predictive bioinformatic models and precise microbiome profiles, we can back up that claim scientifically for the first time.

Insights at Scale

Right now, much of the insight to be gained from unused food safety testing data requires the expertise of highly specialized bioinformaticians. We haven’t yet standardized bioinformatic algorithms and pipelines—that work is foundational to building the food genomics platforms of the future.

In the near future these food genomics platforms will leverage artificial intelligence and machine learning to automate bioinformatic workflows, dramatically increasing our ability to analyze enormous bodies of data and identify macro-level trends. Imagine the insights we could gain when we combine trillions of genomic data points from each phase in the food safety testing process—from routine pathogen testing to environmental monitoring to strain typing.

We’re not there yet, but the technology is not far off. And while the path to adoption will surely have its fair share of twists and turns, it’s clear that the business functions of food safety testing labs and R&D departments will grow to be more closely integrated than ever before.

In this respect the success of any food safety program will depend—as it always has—not just on the technology deployed in labs, but on how food brands operate. In the food industry, where low margins are the norm, brands have long depended on efficiently managed operations and superb leadership to remain competitive. I’m confident that, given the quality and depth of its human resources, the food industry will prove more successful than most in harnessing the power of big data in ways that truly benefit consumers.

The big data revolution in food will begin in the microbiology lab, but it will have its most profound impact at the kitchen table.

References

  1. Ferguson, B. (February/March 2017). “A Look at the Microbiology Testing Market.” Food Safety Magazine. Retrieved from https://www.foodsafetymagazine.com/magazine-archive1/februarymarch-2017/a-look-at-the-microbiology-testing-market/.
Food Genomics

What’s in a Name? Probiotic Analysis and Genomics

By Gregory Siragusa, Douglas Marshall, Ph.D.

In short, in the world of regulatory and probiotic microbiology the “name” is critical. Whether you are a probiotics manufacturer, blender or user, we are all likely aware that usage and sales of probiotic strains of bacteria and yeasts are burgeoning. Estimates of sales growth are impressive, with $24 billion and $5 billion projected in the human and animal markets, respectively, by the year 2024.1,2 The list of organisms approved as human and animal probiotics is large and varied. Prove it to yourself by visiting your local grocery store or pharmacy and taking a trip down the aisle where probiotic supplements are displayed. Read the content labels and see if you recognize the microbe names. Many probiotic organisms’ names are likely to be familiar to you, most of them lactic acid bacteria (LAB). However, we are quickly entering an age of more novel or even new probiotic organisms that may be unfamiliar to you, some of which are not as easy to culture as the LAB.3 On the same labels you may see claims of viability and cell population declarations (usually in CFUs, or colony-forming units). Also, many probiotics are retailed in dry form, while others are marketed as liquids. As food safety scientists and practitioners, questions are probably popping into your head as to how probiotic species and populations are verified and how these various preparations survive the expected shelf life.

Most will agree that before anyone starts consuming pills or eating foods with billions of viable bacteria, it is obviously a prudent idea that the manufacturer has the means to assure safety and quality. The details and scope of probiotic safety and microbial analysis are much too complex and broad to deal with in a few pages. For more details we direct the reader to two key publications.4,5

Identity, viability and populations are attributes largely measurable by methods that rely on culture, phenotypic analysis, genomics and combinations thereof. Here we will share a primer on genomic methods for probiotic analysis starting with a very basic aspect critical to all of microbiology—taxonomy and asking the “right” questions.

Why Not Just Perform a Plate Count? Revisiting Taxonomy

The process of identifying or classifying organisms, also known as the science of taxonomy or systematics, has sometimes been given less-than-stellar treatment among the community of microbiologists. We are frustrated when taxonomists change genus or species names just as people have learned the existing ones. But every dog will have its day, and for microbial systematists that day has arrived: the application of genomic tools to the taxonomist’s toolbox has coincided with the growth of the probiotics industry. Practically speaking, for the probiotic microbiologist, there is a lot more to a name than just nomenclature. Microbial taxonomy, and specifically bacterial taxonomy, becomes vitally important as more and more products are produced and as regulations increase in scope. Bacterial nomenclature is an ever-changing field, but at least naming has become more centralized with its own website.6

For the probiotic manufacturer, some important questions require answers: “Is it ‘my’ strain?”, “What’s in the mixture?”, “Is the label accurate?”, and “Are they alive?”. So why are we addressing this topic as a subject for Food Genomics? Confronted with the sheer variety of bacterial types, it is easy to see why and how genomic tools offer a solution to this complexity. We now have tools that augment, complement, or even in some cases replace cultural microbiology as a means to classify, identify and analyze probiotics (see Table I).7

Table I. Genomic Tools for Probiotic Analysis (general method: application notes)

  • Targeted microbiome: Genus/species-level resolution; bacterial or fungal profiles (16S/ITS gene); suitable for multi-strain probiotic products; unknown or QA analysis
  • Shotgun metagenome: Species/possible strain-level resolution; bacterial and fungal profiles in the same assay; well suited for multi-strain probiotic products
  • Qualitative microarray: Species/possible strain-level resolution; non-quantitative, descriptive only; well suited for multi-strain probiotic products
  • PCR: Species/strain, probe-designed specificity; qualitative
  • qPCR: Species/strain, probe-designed specificity; quantitative against CFU standard curve; can be designed to detect viability
  • Flow cytometry (gene probe-based): Probe-designed specificity; quantitative against CFU standard curve; viable, injured, and dead cell detection possible; high throughput

“Why not just perform a plate count?” Obviously plate counts have a pivotal role in the analytical microbiology of probiotics and will likely remain a gold standard for enumeration of viable counts. In fact, the unit of viable cell counts, the CFU, is the recommended unit for verifying probiotic populations.8 Unfortunately, most plate count methods do not name or identify the microbes we count as colonies. Occasionally, with precise selective and differential media, the identity of the colonies growing on the plate can be reliably called, but misidentification is common. Other tools are needed: PCR is the tool of choice for amplifying specific genes from an organism’s DNA, and quantitative PCR (qPCR) and flow cytometry both rely on a probe specific for a species or even a strain in order to estimate cell numbers, including viable cell counts.7,9,10,11,12
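
As an illustration of how a genomic count can be tied back to CFU-based plate counts, the sketch below fits a qPCR standard curve of quantification cycle (Cq) against log10 CFU from plate-counted calibration samples and then estimates the population of an unknown sample; all of the numbers are invented for illustration.

```python
# Hypothetical sketch: estimate CFU from a qPCR Cq value using a standard curve
# built from plate-counted calibration samples. All numbers are illustrative.
import numpy as np

log_cfu = np.array([3, 4, 5, 6, 7, 8])                    # log10 CFU/mL of calibration samples
cq      = np.array([30.1, 26.8, 23.4, 20.0, 16.7, 13.3])  # measured Cq values

slope, intercept = np.polyfit(log_cfu, cq, 1)   # linear fit: Cq = slope * log_cfu + intercept
efficiency = 10 ** (-1.0 / slope) - 1           # amplification efficiency (~1.0 means 100%)

unknown_cq = 21.5
estimated_log_cfu = (unknown_cq - intercept) / slope
print(f"Slope {slope:.2f}, efficiency {efficiency:.0%}")
print(f"Estimated population: ~{10 ** estimated_log_cfu:.2e} CFU/mL")
```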

So how will modern genomics help you with the analysis of your probiotics? The following are some questions, examples and comments that illustrate genomic applications for probiotic analysis that you should be familiar with. These methods, whether sequencing-based, PCR-based or flow cytometry-based (using gene probes not antibodies), all require some form of sequence determination, detection/hybridization and analysis.



NGS in Food Safety: Seeing What Was Never Before Possible

By Sasan Amini

For the past year, Swedish food provider Dafgård has been using a single test to screen each batch of its food for allergens, missing ingredients, and even the unexpected – an unintended ingredient or pathogen. The company extracts DNA from food samples and sends it to a lab for end-to-end sequencing, processing, and analysis. Whether referring to a meatball at a European Ikea or a pre-made pizza at a local grocery store, Dafgård knows exactly what is in its food and can pinpoint potential trouble spots in its supply chains, immediately take steps to remedy issues, and predict future areas of concern.

The power behind the testing is next-generation sequencing (NGS). NGS platforms, like the one my company Clear Labs has developed, consist of the most modern parallel sequencers available in combination with advanced databases and technologies for rapid DNA analysis. These platforms have reduced the cost of DNA sequencing by orders of magnitude, putting the power to sequence genetic material in the hands of scientists and investigators across a range of research disciplines and industries. They have overtaken traditional, first-generation Sanger sequencing in clinical settings over the past several years and are now poised to supplement and likely replace PCR in food safety testing.

For Dafgård, one of the largest food providers in Europe, the switch to NGS has given it the ability to see what was previously impossible with PCR and other technologies. Although Dafgård still uses PCR in select cases, it has run thousands of NGS-based tests over the past year. One of the biggest improvements has been in understanding the supply chain for the spices in its prepared foods. Supply chains for spices can be long and can result in extra or missing ingredients, some of which can affect consumer health. With the NGS platform, Dafgård can pinpoint ingredients down to the original supplier, getting an unparalleled look into its raw ingredients.

Dafgård hopes to soon switch to an entirely NGS-based platform, which will put the company at the forefront of food safety. Embracing this new technology within the broader food industry has been a decade-long process, one that will accelerate in the coming years, with an increased emphasis on food transparency both among consumers and regulators globally.

Transitioning Technology

A decade ago, very few people in food safety were talking about NGS technologies. A 2008 paper in Analytical and Bioanalytical Chemistry1 gave an outlook for food safety technology that included nanotechnology, while a 2009 story in Food Safety Magazine2 discussed spectrometric or laser-based diagnostic technologies. Around the same time, Nature Methods named NGS its “Method of the Year” for 2007. A decade later, NGS is taking pathogen characterization and food authentication to the next level.

Over the last 30 years, multiple technology transitions have occurred to improve food safety. In the United States, for example, the Hazard Analysis and Critical Control Points (HACCP) system came online in the mid-1990s to reduce illness-causing microbial pathogens on raw products. The move came just a few years after a massive outbreak of E. coli in the U.S. Pacific Northwest caused 400 illnesses and four deaths, and it was clear there was a need for change.

Before HACCP, food inspection was largely on the basis of sight, touch, and smell. It was time to take a more science-based approach to meat and poultry safety. This led to the use of PCR, among other technologies, to better measure and address pathogens in the food industry.

HACCP set the stage for modern-era food testing, and since then, efforts have only intensified to combat food-borne pathogens. In 2011, the Food Safety Modernization Act (FSMA) took effect, shifting the focus from responding to pathogens to preventing them. Data from 20153 showed a 30% drop in foodborne-related bacterial and parasitic infections from 2012 to 2014 compared to the same time period in 1996 to 1998.

But despite these vast improvements, work still remains: According to the CDC, foodborne pathogens in the United States alone cause 48 million illnesses and 3,000 fatalities every year. And every year, the food safety industry runs hundreds of millions of tests. These tests can mean the difference between potentially crippling business operations and a thriving business that customers trust. Food recalls cost an average of $10 million per incident and jeopardize public health. The best way to stay ahead of the regulatory curve and to protect consumers is to take advantage of the new technological tools we now have at our disposal.

Reducing Errors

About 60% of food safety tests currently use rapid methods, while 40% use traditional culturing. Although highly accurate, culturing can take up to five days to deliver results, while PCR and antigen-based tests can be quicker (one to two days) but have much lower accuracy. So, what about NGS?

NGS platforms have a turnaround of only one day and can achieve a higher level of accuracy and specificity than other sequencing platforms. And unlike some PCR techniques, which can only detect up to five targets in one sample at a time, the targets for NGS platforms are nearly unlimited, with up to 25 million reads per sample and 200 or more samples processed at the same time. This results in a major difference in the amount of information yielded.

For PCR, very small segments of DNA are amplified and compared to potential pathogens. With NGS tools, all the DNA in a sample is tested: it is cut into small fragments and millions of sequences are generated, giving many redundant data points for comparing the genome to potential pathogens. This allows for much deeper resolution to determine the exact strain of a pathogen.
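
To make that redundancy concrete, a rough back-of-the-envelope coverage calculation (the read count, read length and genome size below are assumed figures, not specifications of any particular instrument) shows how many times over each position of a bacterial genome would be sampled:

```python
# Hypothetical sketch: rough sequencing-coverage arithmetic for one sample.
# Figures are assumed for illustration, not taken from any specific platform.
reads_per_sample = 25_000_000   # reads allotted to one sample
read_length_bp   = 150          # bases per read
genome_size_bp   = 5_000_000    # approximate size of a bacterial genome

total_bases = reads_per_sample * read_length_bp
coverage = total_bases / genome_size_bp

print(f"Total bases sequenced: {total_bases:,}")
print(f"Average coverage: ~{coverage:.0f}x")   # each genome position read ~750 times over
```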

Traditional techniques are also rife with false negatives and false positives. In 2015, a study from the American Proficiency Institute4 on about 18,000 testing results from 1999 to 2013 for Salmonella found false negative rates between 2% and 10% and false positive rates between 2% and 6%. Several Food Service Labs claim false positive rates of 5% to 50%.

False positives can create a resource-intensive burden on food companies. Reducing false negatives is important for public health, as well as for isolating and decontaminating the offending species within a facility. Research has shown that with robust data analytics and sample preparation, an NGS platform can bring false negative and false positive rates down to close to zero for a pathogen test like Salmonella, Listeria, or E. coli.
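
For readers who want to see how such rates are computed, here is a minimal sketch with made-up confusion counts (not the study’s data): the false negative rate is misses divided by all truly positive samples, and the false positive rate is false alarms divided by all truly negative samples.

```python
# Hypothetical sketch: compute false negative and false positive rates from a
# confusion matrix of pathogen test results. Counts are illustrative only.
true_positive  = 470    # pathogen present, test positive
false_negative = 30     # pathogen present, test negative (missed)
true_negative  = 9_300  # pathogen absent, test negative
false_positive = 200    # pathogen absent, test positive (false alarm)

fnr = false_negative / (false_negative + true_positive)
fpr = false_positive / (false_positive + true_negative)

print(f"False negative rate: {fnr:.1%}")   # ~6.0%
print(f"False positive rate: {fpr:.1%}")   # ~2.1%
```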

Expecting the Unexpected

NGS platforms using targeted-amplicon sequencing, also called DNA “barcoding,” represent the next wave of genomic analysis techniques. These barcoding techniques enable companies to match samples against a particular pathogen, allergen, or ingredient. When deeper identification and characterization of a sample is needed, non-targeted whole genome sequencing (WGS) is the best option.

Using NGS for WGS is much more efficient than PCR, for example, at identifying new strains that enter a facility. Many food manufacturing plants have databases, created through WGS, of resident pathogens and standard decontamination steps to handle those resident pathogens. But what happens if something unknown enters the facility?

By looking at all the genomic information in a given sample and comparing it to the resident pathogen database, NGS can rapidly identify strains the facility might not have even known to look for. Indeed, the beauty of these technologies is that you come to expect to find the unexpected.

That may sound overwhelming – like opening Pandora’s box – but I see it as the opposite: NGS offers an unprecedented opportunity to protect against likely threats in food, create the highest quality private databases, and customize internal reporting based on top-of-the-line science and business practices. Knowledge is power, and NGS technologies put that power directly in food companies’ hands. Brands that adopt NGS platforms can execute decisions about what to test for more quickly and inexpensively – all while providing their customers with the safest food possible.

Perhaps the best analogy for this advancement comes from Magnus Dafgård, owner and executive vice president at Gunnar Dafgård AB: “If you have poor eyesight and need glasses, you could be sitting at home surrounded by dirt and not even know it. Then when you get glasses, you will instantly see the dirt. So, do you throw away the glasses or get rid of the dirt?” NGS platforms provide the clarity to see and address problems directly, giving companies like Dafgård confidence that they are using the most modern, sophisticated food safety technologies available.

As NGS platforms continue to mature in the coming months and years, I look forward to participating in the next jump in food safety – ensuring a safe global food system.

Common Acronyms in Food Genomics and Safety

DNA Barcoding: These short, standardized DNA sequences can identify individual organisms, including those previously undescribed. Traditionally, these sequences can come from PCR or Sanger sequencing. With NGS, the barcoding can be developed in parallel and for all gene variants, producing a deeper level of specificity.

ELISA: Enzyme-linked immunosorbent assay. Developed in 1971, ELISA is a rapid detection method that can detect a specific protein, like an allergen, in a sample by binding an antibody to a specific antigen and creating a color change. It is less effective in food testing for cooked products, in which the protein molecules may be broken down and the allergens thus no longer detectable.

FSMA: Food Safety Modernization Act. Passed in 2011 in the United States, FSMA requires comprehensive, science-based preventive controls across the food supply. Each section of the FSMA consists of specific procedures to prevent consumers from getting sick due to foodborne illness, such as a section to verify safety standards from foreign supply chains.

HACCP: Hazard analysis and critical control points. A food safety management system, HACCP is a preventative approach to quantifying and reducing risk in the food system. It was developed in the 1950s by the Pillsbury Company, the Natick Research Laboratories, and NASA, but its use did not become widespread until 1996, when the USDA Food Safety and Inspection Service issued a new pathogen reduction rule applying HACCP to all raw meat and poultry products.

NGS: Next-generation sequencing. NGS is the most modern, parallel, high-throughput DNA sequencing available. It can sequence 200 to 300 samples at a time and generates up to 25 million reads in a single experiment. This level of information can identify pathogens at the strain level and can be used to perform WGS for samples with unknown pathogens or ingredients.

PCR: Polymerase chain reaction. First described in 1985, PCR is a technique to amplify a segment of DNA and generate copies of a DNA sequence. The DNA sequences generated from PCR must be compared to specific, known pathogens. While it can identify pathogens at the species level, PCR cannot provide the strain of a pathogen due to the limited amount of sequencing information generated.

WGS: Whole genome sequencing. WGS uses NGS platforms to look at the entire DNA of an organism. It is non-targeted, which means it is not necessary to know in advance what is being detected. In WGS, the entire genome is cut into small fragments, with adaptors attached to the fragments so that each piece can be sequenced in both directions. The generated sequences are then assembled into single long pieces of the whole genome. WGS produces sequences totaling about 30 times the size of the genome, providing redundancy that allows for a deeper analysis.

Citations

  1. Nugen, S. R., & Baeumner, A. J. (2008). Trends and opportunities in food pathogen detection. Analytical and Bioanalytical Chemistry, 391(2), 451-454. doi:10.1007/s00216-008-1886-2
  2. Philpott, C. (2009, April 01). A Summary Profile of Pathogen Detection Technologies. Retrieved September 08, 2017, from https://www.foodsafetymagazine.com/magazine-archive1/aprilmay-2009/a-summary-profile-of-pathogen-detection-technologies/?EMID
  3. Ray, L., Barrett, K., Spinelli, A., Huang, J., & Geissler, A. (2009). Foodborne Disease Active Surveillance Network, FoodNet 2015 Surveillance Report (pp. 1-26, Rep.). CDC. Retrieved September 8, 2017, from https://www.cdc.gov/foodnet/pdfs/FoodNet-Annual-Report-2015-508c.pdf.
  4.  Stombler, R. (2014). Salmonella Detection Rates Continue to Fail (Rep.). American Proficiency Institute.
Food Genomics

How is DNA Sequenced?

By Sanjay K. Singh, Douglas Marshall, Ph.D., Gregory Siragusa, Ph.D.

Here is a prediction. In the coming years, at some point in your daily work life as a food safety professional, you will be called upon either to use genomic tools or to understand and relay information based on genomic tools in order to make important decisions about food safety and quality. Molecular biologists love to use what often seems like a foreign or secret language. Rest assured, dear reader, this is mostly just vernacular and is easily understood once you get comfortable with a bit of the vocabulary. In this, the fourth installment of our column, we give you another tool for your food genomics tool kit. We have called upon a colleague and sequencing expert, Dr. Sanjay Singh, to be a guest co-author for this topic on sequencing and to guide us through the genomics language barrier.

The first report of the annotated (labeled) sequence of the human genome occurred in 2003, 50 years after the discovery of the structure of DNA. In this genome document all the genetic information required to create and sustain a human being was provided. The discovery of the structure of DNA has provided a foundation for a deeper understanding of all life forms, with DNA as a core molecule of genetic information. Of course that includes our food and our tiny friends of the microbial world. Further molecular technological advances in the fields of agriculture, food science, forensics, epidemiology, comparative genomics, medicine, diagnostics and therapeutics are providing stunning examples of the power of genomics in our daily lives.  We are only now beginning to harvest the fruits of sequencing and using that knowledge routinely in our respective professions.

In our first column we wrote, “DNA sequencing can be used to determine the names, types, and proportions of microorganisms, the component species in a food sample, and track foodborne diseases agents.” In this month’s column, we present a basic guide to how DNA sequencing chemistry works.

Image courtesy of the US Human Genome Project Knowledge Base

DNA sequencing is the process of determining the precise order of four nucleotide bases, adenine or A, cytosine or C, guanine or G, and thymine or T in a DNA molecule. By knowing the linear sequence of A, C, G, and T in a DNA molecule, the genetic information carried in that particular DNA molecule can be determined.

DNA sequencing emerged from the intersection of different fields, including biology, chemistry, mathematics, and physics.1,2 The critical breakthrough came in 1953, when James Watson, Francis Crick, Maurice Wilkins and Rosalind Franklin resolved the now familiar double-helix structure of DNA.3 Each helical strand is a polynucleotide, which consists of repeating monomeric units called nucleotides. A nucleotide consists of a sugar (deoxyribose), a phosphate moiety, and one of the four nitrogenous bases—the aforementioned A, C, G, and T. In the double helix, the strands run opposite to each other, commonly referred to as antiparallel. Repeating units of base pairs (bp), where A always pairs with T and C always pairs with G, are arranged within the double helix so that they are slightly offset from each other, like steps in a winding staircase. On paper, the double helix is often represented by scientists as a flat, ladder-like structure, where the base pairs (bp) form the rungs of the ladder and the sugar-phosphate backbones form the antiparallel rails (see Figure 1).

Figure 1. Artistic representation of the DNA double helix. (Source: Eurofins)

The two ends of each polynucleotide strand are called the 5′- and 3′-ends, a nomenclature that reflects the chemical structure of the deoxyribose sugar at each terminus. The lengths of single- or double-stranded DNA are often measured in bases (b) or base pairs (bp), respectively. The two polynucleotide strands can be readily unzipped by heating, and on cooling, the initial double-helix structure is re-formed, or re-annealed. The ability to rezip the initial ladder-like structure can be attributed to the phenomenon of base pairing, which merits repetition—the base A always pairs with T and the base G always with C. This rather innocuous phenomenon of base pairing is the basis for the mechanism by which DNA is copied when cells divide and is also the theoretical basis on which most traditional and modern DNA sequencing methodologies have been developed.
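
Because base pairing underlies everything that follows, here is a small sketch (plain Python, independent of any sequencing platform) that derives the complementary, antiparallel strand of a short sequence using the A-T and G-C pairing rules described above:

```python
# Sketch: derive the complementary, antiparallel strand of a short DNA sequence
# using the base-pairing rules A<->T and G<->C.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    # Complement each base, then reverse to reflect the antiparallel orientation.
    return "".join(PAIRS[base] for base in reversed(strand.upper()))

top_strand = "ATGCTTGCA"               # written 5' -> 3'
print(reverse_complement(top_strand))  # TGCAAGCAT, also written 5' -> 3'
```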

Other biological advancements also paved the way toward the development of sequencing technologies. Prominent among these was the discovery of enzymes that allow scientists to manipulate DNA. For example, restriction enzymes, which recognize and cleave DNA at specific short nucleotide sequences, can be used to fragment a long duplex strand of DNA.4 The DNA polymerase enzyme, in the presence of deoxyribonucleotide triphosphates (dNTPs: chemically reactive forms of the nucleotide monomers), can use a single DNA strand as a template to fill in the complementary bases and extend a shorter rail strand (primer extension) of a partial DNA ladder.5 A critical part of primer extension is the ‘primer’, a short single-stranded DNA piece (15 to 30 bases long) that is complementary to a segment of the target DNA. These primers are made using automated high-throughput synthesizer machines and today can be manufactured rapidly and delivered the following day. When the primer and the target DNA are combined through a process called annealing (heat and then cool), they form a structure with a ladder-like head and a long single-stranded tail. In 1983, Kary Mullis developed an enzyme-based process called the polymerase chain reaction (PCR). Using this protocol, one can take a single copy of DNA and amplify the same sequence an enormous number of times. One can think of PCR as a molecular photocopier in which a single piece of DNA is amplified into up to approximately 30 billion copies!
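
The “molecular photocopier” arithmetic is simple doubling: each thermal cycle can at most double the number of copies, so an ideal reaction holds 2^n copies after n cycles. A quick sketch (idealized; real reactions fall short of perfect doubling):

```python
# Sketch: idealized PCR amplification, where each cycle doubles the copy number.
# Real reactions fall short of perfect doubling, so treat this as an upper bound.
def copies_after(cycles: int, starting_copies: int = 1) -> int:
    return starting_copies * 2 ** cycles

for cycles in (20, 30, 35):
    print(f"{cycles} cycles -> ~{copies_after(cycles):,} copies")
# 30 cycles gives roughly 1 billion copies; around 35 cycles approaches the
# tens of billions mentioned above.
```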

The other critical event that changed the course of DNA sequencing efforts was the publication of the ‘dideoxy chain termination’ method by Dr. Frederick Sanger in December 1977.6 This marked the beginning of the first generation of DNA sequencing techniques. Most next-generation sequencing methods are refinements of the chain termination, or “Sanger method” of sequencing.

Frederick Sanger chemically modified each base so that when it was incorporated into a growing DNA chain, the chain was forcibly terminated. By setting up a primer extension reaction in which a small quantity of one of these chemically modified ‘inactive’ bases is mixed with the four active bases, Sanger obtained a series of DNA strands which, when separated by size, indicated the positions of that particular base in the DNA sequence. By analyzing the results from four such reactions run in parallel, each containing a different ‘inactive’ base, Sanger could piece together the complete sequence of the DNA. Subsequent modifications to the method allowed the sequence to be determined using dye-labeled terminating bases in a single reaction. Since a sequence of fewer than 1,000 bases can be determined from a single such reaction, the sequences of longer DNA molecules have to be pieced together from many such reads.
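
To see why the terminated fragments reveal base positions, consider a toy sketch: in the reaction spiked with the terminating version of one base, every synthesized fragment ends at a position where that base was incorporated, so the fragment lengths read off a gel mark each position of that base (and likewise for the other three reactions). The template and helper function below are invented purely for illustration.

```python
# Toy sketch of the dideoxy chain-termination idea: fragments from the reaction
# containing the terminating version of one base end at each position where that
# base occurs, so the fragment lengths reveal the positions of that base.
def termination_fragment_lengths(template: str, terminator: str) -> list[int]:
    return [i + 1 for i, base in enumerate(template) if base == terminator]

newly_made_strand = "GATTACAGGA"   # sequence being synthesized, 5' -> 3'
for base in "ACGT":
    lengths = termination_fragment_lengths(newly_made_strand, base)
    print(f"Reaction with terminating {base}: fragment lengths {lengths}")
# Sorting all fragment lengths across the four reactions reconstructs the sequence.
```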

Using technologies available in the mid-1990s, as many as 1 million bases of sequence could be determined per day. However, at this rate, determining the sequence of the 3 billion bp human genome required years of sequencing work. By analogy, this is equivalent to reading the Sunday issue of The New York Times, about 300,000 words, at a pace of 100 words per day. The cost of sequencing the human genome was a whopping $70 million. The human genome project clearly brought forth a need for technologies that could deliver fast, inexpensive and accurate genome sequences. In response, the field initially exploded with modifications to the Sanger method, driven by advances in enzymology, fluorescent detection dyes and capillary-array electrophoresis. Using the Sanger method of sequencing, one can read up to ~1,000 bp in a single reaction, and either 96 or 384 such reactions (in a 96- or 384-well plate) can be performed in parallel using DNA sequencers. More recently, a new wave of technological sequencing advances, termed NGS or next-generation sequencing, has been commercialized. NGS is fast, automated, massively parallel and highly reproducible. NGS platforms can read more than 4 billion DNA strands and generate about a terabyte of sequence data in about six days! The whole 3 billion base pairs of the human genome can be sequenced and annotated in a mere month or less.


Food Genomics

Microbiomes a Versatile Tool for FSMA Validation and Verification

By Douglas Marshall, Ph.D., Gregory Siragusa

Genomic tools are valuable additions for companies seeking to meet and exceed validation and verification requirements for FSMA compliance (21 CFR 117.3). In this installment of Food Genomics, we present reasons why microbiome analyses are powerful tools for meeting FSMA requirements now and certainly in the future.

Recall in the first installment of Food Genomics we defined a microbiome as the community of microorganisms that inhabit a particular environment or sample. For example, a food plant’s microbiome includes all the microorganisms that colonize a plant’s surfaces and internal passages. This can be a targeted (amplicon sequencing-based) or a metagenome (whole shotgun metagenome-based) microbiome. Microbiome analysis can be carried out on processing plant environmental samples, raw ingredients, during shelf life or challenge studies, and in cases of overt spoilage.

As a refresher of FSMA requirements, here is a brief overview. Validation activities include obtaining and evaluating scientific and technical evidence that a control measure, combination of control measures, or the food safety plan as a whole, when properly implemented, is capable of effectively controlling the identified microbial hazards. In other words, can the food safety plan, when implemented, actually control the identified hazards? Verification activities include the application of methods, procedures, tests and other evaluations, in addition to monitoring, to determine whether a control measure or combination of control measures is or has been operating as intended, and to establish the validity of the food safety plan. Verification ensures that the controls in the food safety plan are actually being properly implemented in a way to control the hazards.

Validation establishes the scientific basis for food safety plan process preventive controls. Examples include using scientific principles and data such as routine indicator microbiology, using expert opinions, conducting in-plant observations or tests, and challenging the process at the limits of its operating controls by conducting challenge studies. Under FSMA, validation must first occur before the food safety plan is implemented (ideally), within the first 90 calendar days of production, or within a reasonable timeframe with written justification by the preventive controls qualified individual. Additional validation efforts must occur when a change in control measure(s) could impact efficacy or when reanalysis indicates the need.

FSMA requirements stipulate that validation is not required for food allergen preventive controls, sanitation preventive controls, supply-chain program, or recall plan effectiveness. Other preventive controls also may not require validation with written justification. Despite the lack of regulatory expectation, prudent processors may wish to validate these controls in the course of developing their food safety plan. For example, validating sanitation-related controls for pathogen and allergen controls of complex equipment and for how long a processing line can run between cleaning are obvious needs.

There are many routine verification activities expected of FSMA-compliant companies. For process verification, validation of effectiveness, checking equipment calibration, records review, and targeted sampling and testing are examples. Food allergen control verification includes label review and visual inspection of equipment; however, prudent manufacturers using equipment for both allergen-containing and allergen-free foods should consider targeted sampling and testing for allergens. Sanitation verification includes visual inspection of equipment, with environmental monitoring as needed for RTE foods exposed to the environment after processing and before packaging. Supply-chain verification should include second- and third-party audits and targeted sampling and testing. Additional verification activities include system verification, food safety plan reanalysis, third-party audits and internal audits.

Verification procedures should be designed to demonstrate that the food safety plan is consistently being implemented as written. Such procedures are required as appropriate to the food, facility and nature of the preventive control, and can include calibration of process monitoring and verification instruments, and targeted product and environmental monitoring testing.

Food Genomics

Microbiomes Move Standard Plate Count One Step Forward

By Gregory Siragusa, Douglas Marshall, Ph.D.

Last month we introduced several food genomics terms, including the microbiome. Recall that a microbiome is the community or population of microorganisms that inhabit a particular environment or sample. Recall also that there are two broad types of microbiome analysis: targeted (e.g., bacteria or fungi) and metagenomic (in which all DNA in a sample is sequenced, not just specific targets like bacteria or fungi). This month we would like to introduce the reader to uses of microbiomes and how they augment standard plate counts and move us into a new era in food microbiology. Before providing examples, it might be useful to review a diagram explaining the general flow of the process of determining a microbiome (see Figure 1).

Microbiome
Figure 1. General process for performing a targeted microbiome (bacterial or fungal)

By analogy, if one thinks of cultural microbiology and plate counts as a process of counting colonies of microbes that come from a food or environmental sample, microbiome analysis can be thought of as identifying and counting signature genes, such as the bacteria-specific 16S gene, from the microbes in a food or environmental sample. Plate counts have been and remain the food microbiologist’s most powerful indicator tool; however, we know there are some limitations in their use. One limitation is that not all bacterial or fungal cells are capable of outgrowth and colony formation on specific media under a given set of incubation conditions (temperature, time, media pH, storage atmosphere, etc.). Individual plate count methods cannot cover the nearly infinite number of variations of growth atmospheres and nutrients. Because of these limitations, microbiologists understand that we have cultured only a fraction of the different types of bacteria on the planet, an observation that led to the term “The Great Plate Count Anomaly” (Staley & Konopka, 1985). Think of a holiday party where guests were handed nametags on which was printed: “Hello, I grow on Standard Methods Agar” or “Hello, I grow at 15°C”, etc. We can group the partygoers by their ability to grow on certain media, and we can count them, but they still do not have names. As effective as our selective and differential media have become, bacterial colonies still do not come with their own “Hello, My Name Is XYZ” name tags. Therefore, in the lab, once a plate is counted it is generally tossed into the autoclave bag, along with its unnamed colonies and all they represent. Microbiomes can provide a nametag of sorts, as well as the proportion of people at the party who share a certain name. For instance: “Hello, My Name is Pseudomonas” or “Hello, My Name is Lactobacillus”, etc. The host can then say, “Now we are going to count you; would all Pseudomonas please gather in this corner?” or “All Lactobacillus please meet at the punch bowl”.

It is a somewhat oversimplified analogy, but it makes the point that microbiome technology gives names and proportions. Microbiomes, too, have limitations. First, with current technologies a relatively large threshold of organisms of a specific group, approximately 10³ cells, is needed for that group to appear in the microbiome pie chart; in theory, a colony on a plate of agar medium can be derived from a single cell or colony-forming unit (CFU). Second, not all amplified genes in a microbiome are necessarily from viable cells (a topic that will be covered later in this series of articles), whereas forming a colony on an agar surface requires cell viability. Finally, the specificity of a microorganism name assigned to a group in a microbiome depends on the size of the sequenced amplicon (an amplicon is a segment of DNA, in this case 16S gene DNA, resulting from amplification by PCR before sequencing) and on how well our microbial databases cover different subtypes within a species. Targeted microbiomes can reliably name the genus of an organism; however, resolution to the species and subspecies level is not guaranteed. (Later in this series we will discuss metagenomes and how they have the potential to identify to a species or even subspecies level.) Readers can find very informative reviews on microbiome specificity in the following cited references: Bokulich, Lewis, Boundy-Mills, & Mills, 2016; de Boer et al., 2015; Ercolini, 2013; Kergourlay, Taminiau, Daube, & Champomier Vergès, 2015.
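
To make the name-and-proportion idea concrete, a targeted microbiome profile at its simplest is a tally of the genus labels assigned to sequenced 16S reads, converted into relative abundances. A minimal sketch, with genus labels and read counts invented for illustration:

```python
# Sketch: turn per-read genus assignments from a targeted (16S) microbiome run
# into relative abundances. Labels and counts are invented for illustration.
from collections import Counter

read_assignments = (
    ["Pseudomonas"] * 620 + ["Lactobacillus"] * 250 +
    ["Brochothrix"] * 90 + ["Unclassified"] * 40
)

counts = Counter(read_assignments)
total = sum(counts.values())

for genus, n in counts.most_common():
    print(f"{genus:<15} {n:>5} reads  {n / total:6.1%}")
```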

When we consider the power of using cultural microbiology for quantitative functional indicators of microbial quality together with microbiome analysis, limitations and all for both, microbiomes have opened a door to the vast and varied biosphere of our food’s microbiology at a depth never before observed. This all sounds great, but how will we benefit from and use this information? We have constructed Table 1 with examples and links of microbiome applications to problems that would have required years to study by cultural microbiology techniques alone. Please note this is by no means an exhaustive list, but it serves to illustrate the very broad and deep potential of microbiomics for food microbiology. We encourage the reader to email the editors or authors with questions regarding any reference. Using PubMed and the search terms “Food AND microbiome” will provide abstracts and a large variety of applications of this technology.

Foodstuff (Reference)
Ale (Bokulich, Bamforth, & Mills, 2012)
Beef Burgers (Ferrocino et al., 2015)
Beefsteak (De Filippis, La Storia, Villani, & Ercolini, 2013)
Brewhouse and Ingredients (Bokulich et al., 2012)
Cheese (Wolfe, Button, Santarelli, & Dutton, 2014)
Cheese and Listeria growth (Callon, Retureau, Didienne, & Montel, 2014)
Cherries, Hydrostatic Pressure (del Árbol et al., n.d.)
Cocoa (Illeghems, De Vuyst, Papalexandratou, & Weckx, 2012)
Dairy Starters and Spoilage Bacteria (Stellato, De Filippis, La Storia, & Ercolini, 2015)
Drinking Water Biofilms (Chao, Mao, Wang, & Zhang, 2015)
Fermented Foods (Tamang, Watanabe, & Holzapfel, 2016)
Foodservice Surfaces (Stellato, La Storia, Cirillo, & Ercolini, 2015)
Fruit and Vegetables (Leff & Fierer, 2013)
Insect Protein (Garofalo et al., 2017)
Kitchen surfaces (Flores et al., 2013)
Lamb (Wang et al., 2016)
Lobster (Tirloni, Stella, Gennari, Colombo, & Bernardi, 2016)
Meat and storage atmosphere (Säde, Penttinen, Björkroth, & Hultman, 2017)
Meat spoilage and processing plant (Pothakos, Stellato, Ercolini, & Devlieghere, 2015)
Meat Spoilage Volatiles (Casaburi, Piombino, Nychas, Villani, & Ercolini, 2015)
Meat Stored in Different Atmospheres (Ercolini et al., 2011)
Milk (Quigley et al., 2011)
Milk and Cow Diet (Giello et al., n.d.)
Milk and Mastitis (Bhatt et al., 2012)
Milk and Teat Preparation (Doyle, Gleeson, O’Toole, & Cotter, 2016)
Natural starter cultures (Parente et al., 2016)
Olives (Abriouel, Benomar, Lucas, & Gálvez, 2011)
Pork Sausage (Benson et al., 2014)
Spores in complex foods (de Boer et al., 2015)
Tomato Plants (Ottesen et al., 2013)
Winemaking (Marzano et al., 2016)
Table 1. Examples of microbiome analysis of different foods and surfaces.

See page 2 for references