Ransomware: Lessons Learned from One Food Company’s Experience

In fall 2021, G&J Pepsi-Cola Bottlers Inc. was hit by a ransomware attack and recovered without paying the attackers. We spoke with G&J's enterprise infrastructure director, Eric McKinney, and cybersecurity engineer, Rory Crabbe, to learn how they detected and responded to the attack, the steps they have taken since to strengthen their cybersecurity, and what advice they have for other food companies in the wake of the incident.

What happened to G&J back in 2021, and when did you realize something was wrong?

McKinney: Around Labor Day of 2021, we received a really weird call. The callers were acting as if they were friends looking out for our best interests, and they alerted us that our systems may have been compromised. They showed us a spreadsheet of usernames from our Active Directory to verify that they were in our systems, and they said we could pay them to prevent an attack. We did not engage with them further (we think they may have been part of the attack), but we believed that something was happening.


We went through all of our servers (we don't have a large footprint, because we are a Cloud-first organization) and detected some software that should not have been installed on a couple of them. We removed that immediately, but we were unable to find the beacons attackers leave behind that act as triggers to start encrypting your files.

We made the decision that if anything happened, we were not going to negotiate, we were not going to try to get our systems back; we were going to shut everything down and roll back. I put myself on call, and sure enough, I got a call two days later at 3:00 a.m. from one of our people. He was logging in remotely to a server and he said, "Something don't look right." I went to his screen, immediately saw the locked files, and realized this was really happening.

What ultimately saved us is that we use native platform backups. We use Microsoft Azure, so we immediately shut everything down and started rolling our systems back as far as we could go. Those backup files were not compromised because we don't leverage backups that tie to a file system within a server. The only way you can touch them is if you have our Cloud credentials, which are all multi-factored.
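The pattern McKinney describes, with recovery points stored in a vault rather than on any server's file system, can be reproduced with Azure's native backup tooling. The sketch below drives the Azure CLI from Python; the resource group, vault, and VM names are hypothetical placeholders, not G&J's actual configuration, and it assumes you have already run `az login` under an MFA-protected account.

```python
"""Minimal sketch: enabling native Azure VM backups in a Recovery Services
vault. All resource names are hypothetical placeholders."""
import subprocess

RESOURCE_GROUP = "rg-prod"                 # hypothetical resource group
VAULT = "vault-prod"                       # hypothetical vault name
VMS = ["app-server-01", "db-server-01"]    # hypothetical VM names

def az(*args: str) -> None:
    """Run an Azure CLI command and fail loudly if it errors."""
    subprocess.run(["az", *args], check=True)

# Create the Recovery Services vault that will hold the recovery points.
az("backup", "vault", "create",
   "--resource-group", RESOURCE_GROUP,
   "--name", VAULT,
   "--location", "eastus")

# Enroll each VM. Because recovery points live in the vault rather than on
# any server's file system, ransomware running on a compromised VM cannot
# reach them without the (MFA-protected) cloud credentials.
for vm in VMS:
    az("backup", "protection", "enable-for-vm",
       "--resource-group", RESOURCE_GROUP,
       "--vault-name", VAULT,
       "--vm", vm,
       "--policy-name", "DefaultPolicy")
```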

How did this affect operations?

McKinney: The net impact was that our critical systems were down for about seven to eight hours, and we were recovering PCs for almost a week. Between 100 and 150 PCs were impacted as the ransomware continued to move laterally through our organization, and we had to get them all flushed out. We had to roll the system back two weeks, so we lost two weeks of data. That impacted the accounting team the most.

We did experience an event; it was not an almost-event. But we never lost a single case of sales, and we never paid a single dollar. We took everyone's computers and blew them away, handed them right back and said, "You're starting fresh." Fortunately, this only affected files stored locally on employees' machines. They could still get to their email and the things that were in OneDrive.

The things that really worked in our favor were our Cloud-first strategy and getting away from a legacy client architecture. We were still able to communicate. We could send emails, we could set up Teams, and we had all the tools to coordinate our recovery and get out of this as quickly as we did. The second thing was having those native platform-based backups.

How did this change your digital and cybersecurity strategies?

McKinney: We were doing weekly backups; now we back up every day. And these are full system backups, which means that if you hit restore, the whole system lights back up: not just the data but also the operating system it runs on.
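As a sketch of how such a schedule might be kept honest, the short script below flags any system whose last full backup is older than a day. The inventory timestamps are fabricated illustration data; in practice they would come from the backup platform's API or CLI.

```python
"""Minimal sketch: alert when any system's last full backup is older than
one day. The inventory below is fabricated stand-in data."""
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=1)  # daily full-system backups, per the interview

# Hypothetical: last successful full-backup time per system.
last_full_backup = {
    "app-server-01": datetime(2021, 9, 7, 3, 0, tzinfo=timezone.utc),
    "db-server-01": datetime(2021, 9, 5, 3, 0, tzinfo=timezone.utc),
}

now = datetime.now(timezone.utc)
stale = {name: now - ts for name, ts in last_full_backup.items()
         if now - ts > MAX_AGE}

for name, age in stale.items():
    print(f"ALERT: {name} last full backup was {age} ago (limit {MAX_AGE})")
```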

Crabbe: We also reached out to a lot of companies, including Arctic Wolf, whom we ultimately began working with to help us figure out what we didn't know. We worked with them to go through our environment and come up with ideas on how to improve. We are a big Microsoft shop, and we started utilizing a lot of the native tools that we already had, such as Defender for Endpoint and the security portal. This addressed a lot of the low-hanging fruit, such as enabling automatic updates and not allowing outside vendors to contact us without going through a vetting process.


Arctic Wolf went through our system and sent us a list of recommendations, and a lot of what we did involved utilizing the native tools that we already had, shoring up our defenses, making sure the backups work and creating a disaster recovery plan.

McKinney: We quickly went from being a business of convenience, where we said, "Let's allow USB drives," to changing all of our technical policies by turning on all of our attack surface reduction rules. We blocked all logins from outside the U.S. and brought in new team members dedicated to cybersecurity.
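The geography restriction McKinney mentions maps onto an Azure AD (now Entra ID) Conditional Access policy. The sketch below creates one through the Microsoft Graph API; the access token is a placeholder, the policy starts in report-only mode as a cautious default, and none of this reflects G&J's actual policy configuration.

```python
"""Minimal sketch: block sign-ins from outside the U.S. with a Conditional
Access policy via Microsoft Graph. Token and names are placeholders."""
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Policy.ReadWrite.ConditionalAccess>"  # placeholder
headers = {"Authorization": f"Bearer {TOKEN}",
           "Content-Type": "application/json"}

# 1. Define a named location covering the U.S.
loc = requests.post(f"{GRAPH}/identity/conditionalAccess/namedLocations",
                    headers=headers, json={
    "@odata.type": "#microsoft.graph.countryNamedLocation",
    "displayName": "United States",
    "countriesAndRegions": ["US"],
    "includeUnknownCountriesAndRegions": False,
}).json()

# 2. Block sign-ins from everywhere except that location. Report-only
# ("enabledForReportingButNotEnforced") lets you observe impact first.
resp = requests.post(f"{GRAPH}/identity/conditionalAccess/policies",
                     headers=headers, json={
    "displayName": "Block logins outside the U.S.",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "locations": {"includeLocations": ["All"],
                      "excludeLocations": [loc["id"]]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
})
resp.raise_for_status()
```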

I have some self-confidence issues due to this attack, because your failures are put on display, and there is a feeling that if you had been doing a better job, this would have been prevented. But we were a very small team, and we were responsible for cybersecurity, ERP (enterprise resource planning) initiatives, development initiatives, support and infrastructure initiatives, and data initiatives. When you're wearing all of these hats, things do get missed, and in the end it came down to one application update: one unpatched application was exposed, which set all of this off.

In terms of where we've gotten better, we signed up with an MSP (managed service provider) to monitor our environment 24 hours a day, seven days a week. These companies also assist your team by keeping them up to date on the latest techniques and proactively communicating things you should be doing to secure and protect your environment.

We’ve taken a lot of steps over the past two years and we still have a long way to go. We will never stop or become complacent.

There is a concern among some people that the Cloud is less secure, and it’s better to control your own servers. Is that a misconception?

Crabbe: When it's on premises, it is your responsibility. If something happens to your infrastructure, you've got to be on call and wake up to deal with it. So the Cloud is not only a reduction in personnel work; it's also peace of mind. Microsoft has its own team of engineers, and they have physical security in place as well: Azure datacenters are protected by armed guards to keep the data safe from physical attacks. It's a lot easier to apply security policies to something that's in the Cloud, because Microsoft can give you options for all kinds of things that you didn't even know you needed. This makes it easier to visualize where you are and where you need to go.

McKinney: These are also publicly traded companies that have to follow all of the controls that come with being publicly traded. They're going to do a better job than the one or two individuals at your company, who cannot work 24/7, 365 days a year.

I appreciate you both talking openly about this, because one of the issues that comes up in food defense and cybersecurity is that people aren't necessarily sharing information that could help others recognize vulnerabilities. Is it difficult to share this information?

McKinney: We didn't want to talk about it for a long time. It's hard to put your failures, or at least what is perceived as a failure, out there. But when you look around, you realize this can happen to anyone. It happened to MGM, with all their resources. And one issue that isn't discussed very often is that behind the business implications is an incredibly stressed-out IT team that really is traumatized by an event like this.

In talking with others who have been through this, it's often the most stressful thing that has ever happened in their lives. It certainly is the most stressed I've ever been. You're thinking: I just cost my company millions of dollars. I shut down my business. We may not be able to get product to our people. So many things flash through your mind, and you really don't want to talk about it or advertise it. Luckily for us, we had the right systems, but most importantly, we had really great executive support and great team members to help us recover.

When it comes to access management, companies have to balance convenience for their employees with the need for stringent security. Were employees understanding of the changes you had to make, and how did you communicate these changes in processes?

Crabbe: There was a lot of frustration, with people saying, "This worked before; why can't we do it now?" One of the benefits of being a family-owned company is that we are a fairly small group, so we were able to deal with it on almost a case-by-case basis. We have an internal system through which people can submit their issues or requests, and we review them. For example, if somebody needs to move files to a USB stick to take to an external vendor, we can look at that and ask: What alternatives do we have? Can we use OneDrive or another native tool to share that information? Does it have to be a USB stick? Or, if someone is going on vacation in Mexico, they can submit a ticket and we can allow them remote access from that specific country for a specific amount of time so they can log in. We can tell them yes or no on a case-by-case basis and explain why we made the decision.

McKinney: This event also made us ask questions like, do we even need USB sticks? There are so many other tools we can use. A lot of the changes involved looking at more modern ways to collaborate. And a lot of that revolves around retraining and catching your workforce up with the new tools that we have available.

Based on your experience, what advice would you offer other companies?

McKinney: The IT spend in the food and beverage industry is typically small compared to industries like insurance, banking, or health care. You need to capture all the signals from all of your systems (emails sent, opened, received, and so on), and you must monitor them. Then you need the right algorithms and the right people to make sense of that data. If you are not able to maintain a large enough in-house team, investigate an MSP. They can ingest all the signals, funnel them, and turn all that data into actionable items. Also, store your backups off site and limit access to them. Don't store them with your production data.
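To make "turn signals into actionable items" concrete, here is a minimal sketch of the idea: a few fabricated sign-in events are reduced to a short review queue. A real pipeline would ingest events from the identity provider's audit log and apply far richer detection logic.

```python
"""Minimal sketch: reduce raw sign-in signals to actionable alerts.
The events list is fabricated illustration data."""
from collections import Counter

# Hypothetical sign-in events: (username, source country, succeeded).
events = [
    ("mlopez", "US", True), ("mlopez", "US", True),
    ("jsmith", "US", False), ("jsmith", "RO", False),
    ("jsmith", "RO", False), ("jsmith", "RO", True),
]

FAILURE_THRESHOLD = 2  # alert after this many failed sign-ins per user

failures = Counter(user for user, _, ok in events if not ok)
foreign = {user for user, country, _ in events if country != "US"}

# Anyone with repeated failures or a non-U.S. source gets flagged.
for user in {u for u, _, _ in events}:
    if failures[user] >= FAILURE_THRESHOLD or user in foreign:
        print(f"ACTION: review {user}: {failures[user]} failed sign-ins, "
              f"foreign source: {user in foreign}")
```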

Crabbe: Shore up your defenses using your native tools and create a disaster recovery plan. Those would be my two biggest recommendations for any company going forward. Dig deep and utilize what you've got; there's probably a lot more available to you than you realize. And don't be afraid to reach out to third-party vendors for help.

 
