Ring doorbells: Won’t you see my neighbor? (Lock and Code S07E05)

This week on the Lock and Code podcast…

On February 8, during the Super Bowl in the United States, countless owners of one of the most popular smart products today got a bit of a wakeup call: Their Ring doorbells could be used to see a whole lot more than they knew.

In a commercial that was broadcast to one of the most reliably enormous audiences in the country, Amazon, which owns the company Ring, promoted a new feature for its smart doorbells called “Search Party.” By scouring the footage of individual Ring cameras across a specific region, “Search Party” can use AI-powered image recognition technology to find, as the commercial portrayed it, a lost dog. But immediately after the commercial aired, people began wondering what else their Ring cameras could be used to find.

As US Senator Ed Markey wrote on social media:

“Ring’s Super Bowl ad exposed a scary truth: the technology in its doorbell cameras could be used to hunt down a lost pet…or a person. Amazon must discontinue its dystopian monitoring features.”

These “dystopian monitoring features” aren’t entirely new, but that’s not to say that most Ring owners knew what they were allowing when they originally bought their devices.

Bought by Amazon in 2018, Ring is the most popular manufacturer of a product that, as of 15 years ago, didn’t really exist. And while other “smart” innovations failed, smart doorbells have become a fixture of American neighborhoods, providing a mixture of convenience and security. For instance, a Ring owner away from home can verify and buzz in their mailman dropping off a package behind a gated entrance. Or, a Ring owner can see on their phone that the person knocking at their door is a salesman and choose to avoid talking to them. Or, a Ring owner can help police who are investigating a crime in their area by handing over relevant footage. Even the presence of a Ring doorbell, and its variety of motion-detecting alerts, could possibly serve as a deterrent to crime.

What has seemingly upset so many of those same owners, then, is learning exactly how their personal devices might be used for a company’s gains.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation, about Ring’s long history of partnering with—and sometimes even speaking directly for—police, who can access Ring doorbell footage both inside the company and outside it, and what people really open themselves up to when purchasing a Ring device.

“There’s this impression, a myth practically, that ‘I buy a Ring doorbell to put on my house, I control the footage…’ But there is [an] entire secondary use of this device, which is by police, that you don’t really get a lot of say in.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

  •  


When Security Becomes an Afterthought

Why AI's Biggest Risk Isn't Technical

This article is based on a conversation with Nikesh Arora on the 100th episode of the Threat Vector podcast.

David Moulton interviews Nikesh Arora on the Threat Vector podcast.

"Most technologists think about technology, not about cybersecurity," Nikesh Arora says. "Cybersecurity is kind of like insurance. Let's go make great things happen, and let's make sure on the way we purchase insurance."

Coming from the CEO of the world's largest cybersecurity company, it's the quiet part said out loud, and it explains why AI deployment is racing ahead while security scrambles to keep up.

Earlier this year, Arora spoke with a CIO entirely focused on AI deployment challenges: building viable products, training models, measuring customer impact. Security never came up once. "If you're still going through the motion, trying to understand, ‘Can I actually make this thing work?’ You're not worried about security," Arora notes. The logic is brutal but consistent: Why secure something that might not even function?

In the Threat Vector podcast’s 100th episode milestone, Arora speaks with host David Moulton:

  • Why the gap between innovation and security keeps widening.
  • How to read inflection points before they're obvious.
  • What separates organizations that prepare from those that scramble.

The Gap That Keeps Growing

The disconnect isn't new. It's the same psychology that makes airport security feel like overhead – necessary friction that slows down what should be seamless. But with AI, the gap is widening at an unprecedented pace.

Consider the infrastructure buildup happening right now. Nvidia has become a $4 trillion company selling chips that can't stay in stock. Hundreds of billions of dollars are flowing into AI compute infrastructure. Cloud providers are buying out entire methane gas companies to power their data centers.

Yet organizations are treating AI security as something to bolt on later. That same CIO told Arora: "We worked on some stuff ourselves, and we're just jerry-rigging some things to make sure this happens securely."

Arora's response:

Jerry rig, production, and security don't work together as three terms.

Reading Signals Before They're Obvious

Arora has watched enough technology cycles to recognize the pattern. "You start seeing signs early, and then you look around, you don't see enough impact. You say, okay, maybe this is going to be just a passing shower. But you don't realize that over time this thing's getting more and more momentum."

The signs around AI are adding up:

  • Individual behavior has shifted.
    Arora went from never talking to ChatGPT or Gemini to conducting 10-15 conversations daily. During a recent Tokyo trip, he used Gemini as his primary navigation tool, asking it to rank sumo wrestling shows for his kids rather than "trying to go read 14 websites and figure out what makes sense."
  • The spend is massive and accelerating.
    Not just chips, entire energy infrastructures are being rebuilt to support AI compute needs.
  • Consumer and enterprise adoption are both surging.
    From coding assistants to business analysis, use cases are expanding faster than security models can adapt.

"This thing's going to change our life fundamentally," Arora tells Moulton. "We're not seeing it at scale in our customers just yet. That doesn't mean we can sit back and wait."

Arora understands the risks involved in being late to new technology.

You have to not just anticipate where the trend is going. You have to prepare your organization and the resources to get there. Otherwise, the risk is that Silicon Valley will go fund those people who are thinking purely about the new world... and one of them's going to hit. Then you'll be two years behind with no organization, no resources deployed against it.

The Bets That Paid Off

When Arora joined Palo Alto Networks seven and a half years ago, he wrote two words on a piece of paper: cloud and AI. The company was a firewall business. Those two inflection points would require fundamental transformation, and, just as with AI now, being late was not an option.

If you don't get the network transformation right, 80% of our business will falter.

That insight drove a strategic bet on moving from point products to platform thinking, consolidating security tools rather than adding to the sprawl.

The platform approach wasn't about vendor consolidation for its own sake. It was about correlation. Unit 42® data shows that 70% of incidents now span three or more attack surfaces. When attacks move across endpoints, networks, cloud services and applications simultaneously, fragmented security creates gaps that attackers exploit ruthlessly.

Today we have coverage for 80 plus percent of the industry, which means our customers can come talk to us about a myriad of problems, and we can actually cross-correlate across all the different things we do.

With AI deployments touching every part of the technology stack, that cross-correlation becomes essential. Data flows between training environments and production systems. Models access APIs across cloud and on premises infrastructure. Applications consume AI services from multiple providers. Security that can't see and correlate across that entire landscape will miss the threats that matter most.

First Principles Over Tradition

What drives Arora's ability to spot inflection points isn't just pattern recognition; it's his refusal to accept how things have always been done.

His pet peeve: "Somebody said, well, this is how we've traditionally done it." The response reveals his approach: "You use the word traditional. I use the historical context saying, yeah, sure, they used to dig fields with picks and shovels, and now they use tractors."

This thinking drove Palo Alto Networks to reimagine SOC performance. The industry accepted four days as the normal time to detect and remediate security incidents. Arora called that unacceptable. "We need to get it to be real time."

The result was a fundamentally different architecture that analyzes data as it arrives rather than waiting for problems to appear, enabling 1-minute detection and response instead of four days.

Traditionally, SOCs would analyze the problem when the problem appeared. We said forget it. We're going to analyze everything to see if there's a problem. That architecture fundamentally transformed what we do compared to everybody else in the market.
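The shift Arora describes, evaluating every event as it arrives instead of batching data for later review, can be illustrated with a toy detector. Everything in this sketch is hypothetical (the failed-login rule, the threshold, the event format); it is an illustration of streaming analysis in general, not Palo Alto Networks' actual architecture:

```python
ALERT_THRESHOLD = 5  # hypothetical rule: alert after 5 consecutive failed logins

def streaming_detector(events):
    """Evaluate each (user, success) event the moment it arrives.

    A batch-style SOC would collect all events and scan them later;
    here the alert fires on the exact event that crosses the threshold.
    """
    failures = {}
    for user, ok in events:
        if ok:
            failures[user] = 0  # a successful login resets the counter
            continue
        failures[user] = failures.get(user, 0) + 1
        if failures[user] >= ALERT_THRESHOLD:
            yield user  # alert in-stream, not after the fact

events = [("alice", False)] * 5 + [("bob", True)]
print(list(streaming_detector(events)))  # ['alice']
```

The design point is that detection latency collapses from "whenever the batch job runs" to "the event that triggered it," which is the difference between days and minutes that the article describes.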

The same first-principles approach needs to apply to AI security. Organizations can't simply extend existing security models and hope they work.

What Comes Next

With ransomware attacks now completing in as little as 25 minutes (100 times faster than just three years ago, according to Unit 42 research) reactive security simply can't keep pace. Organizations need security that thinks and responds at machine speed, built into AI deployments from day one.

"AI has become the biggest inflection point in current technology," Arora observes. Organizations are too busy deploying to worry about security. That's human nature. But it's also the moment when security teams need to stay in lockstep.

The question isn't whether to secure AI, it's whether security will be designed in or bolted on. The former takes strategic thinking now. The latter takes crisis management later.

Our job at Palo Alto and our industry is to make sure as they go build these experimental ideas into real production capability that we're staying in lockstep with them and saying, ‘Oh, by the way, here's something that can secure what you just built in a way that is not gonna get you into trouble.’

Listen to the full conversation between Nikesh Arora and David Moulton, senior director of thought leadership for Cortex® and Unit 42, on the 100th episode of Threat Vector.

The post When Security Becomes an Afterthought appeared first on Palo Alto Networks Blog.

  •  

Is your phone listening to you? (re-air) (Lock and Code S07E03)

This week on the Lock and Code podcast…

In January, Google settled a lawsuit that pricked up a few ears: It agreed to pay $68 million to a wide array of people who sued the company together, alleging that Google’s voice-activated smart assistant had secretly recorded their conversations, which were then sent to advertisers to target them with promotions.

Google admitted no wrongdoing in the settlement agreement, but the fact stands that one of the largest phone makers in the world decided to forgo a trial against some potentially explosive surveillance allegations. It’s a decision that the public has already seen in the past, when Apple agreed to pay $95 million last year to settle similar legal claims against its smart assistant, Siri.

Back-to-back, the stories raise a question that just seems to never go away: Are our phones listening to us?

This week, on the Lock and Code podcast with host David Ruiz, we revisit an episode from last year in which we tried to find the answer. In speaking to Electronic Frontier Foundation Staff Technologist Lena Cohen about mobile tracking overall, it becomes clear that, even if our phones aren’t literally listening to our conversations, the devices are stuffed with so many novel forms of surveillance that we need not say something out loud to be predictably targeted with ads for it.

“Companies are collecting so much information about us and in such covert ways that it really feels like they’re listening to us.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

  •  


One privacy change I made for 2026 (Lock and Code S07E02)

This week on the Lock and Code podcast…

When you hear the words “data privacy,” what do you first imagine?

Maybe you picture going into your social media apps and setting your profile and posts to private. Maybe you think about who you’ve shared your location with and deciding to revoke some of that access. Maybe you want to remove a few apps entirely from your smartphone, maybe you want to try a new web browser, maybe you even want to skirt the type of street-level surveillance provided by Automated License Plate Readers, which can record your car model, license plate number, and location on your morning drive to work.

Importantly, all of these are “data privacy,” but trying to do all of these things at once can feel impossible.

That’s why, this year, for Data Privacy Day, Malwarebytes Senior Privacy Advocate (and Lock and Code host) David Ruiz is sharing the one thing he’s doing differently to improve his privacy. And it’s this: He’s given up Google Search entirely.

When Ruiz requested the data that Google had collected about him last year, he saw that the company had recorded an eye-popping 8,000 searches in just the span of 18 months. And those 8,000 searches didn’t just reveal what he was thinking about on any given day—including his shopping interests, his home improvement projects, and his late-night medical concerns—they also revealed when he clicked on an ad based on the words he searched. This type of data, which connects a person’s searches to the likelihood of engaging with an online ad, is vital to Google’s revenue, and it’s the type of thing that Ruiz is seeking to finally cut off.

So, for 2026, he has switched to a new search engine, Brave Search.

Today, on the Lock and Code podcast, Ruiz explains why he made the switch, what he values about Brave Search, and why he also refused to switch to any of the major AI platforms in replacing Google.

Tune in today to listen to the full episode.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

  •  


Enshittification is ruining everything online (Lock and Code S07E01)

This week on the Lock and Code podcast…

There’s a bizarre thing happening online right now where everything is getting worse.

Your Google results have become so bad that you’ve likely typed what you’re looking for, plus the word “Reddit,” so you can find discussion from actual humans. If you didn’t take this route, you might get served AI results from Google Gemini, which once recommended that every person should eat “at least one small rock per day.” Your Amazon results are a slog, filled with products that have surreptitiously paid reviews. Your Facebook feed could be entirely irrelevant because the company decided years ago that you didn’t want to see what your friends posted, you wanted to see what brands posted, because brands pay Facebook, and you don’t, so brands are more important than your friends.

But, according to digital rights activist and award-winning author Cory Doctorow, this wave of online deterioration isn’t an accident—it’s a business strategy, and it can be summed up in a word he coined a couple of years ago: Enshittification.

Enshittification is the process by which an online platform—like Facebook, Google, or Amazon—harms its own services and products for short-term gain while managing to avoid any meaningful consequences, like the loss of customers or the impact of meaningful government regulation. It begins with an online platform treating new users with care, offering services, products, or connectivity that they may not find elsewhere. Then, the platform invites businesses on board that want to sell things to those users. This means businesses become the priority and the everyday user experience is hindered. But then, in the final stage, the platform also makes things worse for its business customers, making things better only for itself.

This is how a company like Amazon went from helping you find nearly anything you wanted to buy online to helping businesses sell you anything you wanted to buy online to making those businesses pay increasingly high fees to even be discovered online. Everyone, from buyers to sellers, is pretty much entrenched in the platform, so Amazon gets to dictate the terms.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Doctorow about enshittification’s fast damage across the internet, how to fight back, and where it all started.

“Once these laws were established, the tech companies were able to take advantage of them. And today we have a bunch of companies that aren’t tech companies that are nevertheless using technology to rig the game in ways that the tech companies pioneered.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

  •  


ALPRs are recording your daily drive (Lock and Code S06E26)

This week on the Lock and Code podcast…

There’s an entire surveillance network popping up across the United States that has likely already captured your information, all for the non-suspicion of driving a car.

Automated License Plate Readers, or ALPRs, are AI-powered cameras that scan and store an image of every single vehicle that passes their view. They are mounted onto street lights, installed under bridges, disguised in water barrels, and affixed onto telephone poles, lampposts, parking signs, and even cop cars.

Once installed, these cameras capture a vehicle’s license plate number, along with its make, model, and color, and any identifying features, like a bumper sticker, or damage, or even sport trim options. Because nearly every ALPR camera has an associated location, these devices can reveal where a car was headed, and at what time, and by linking data from multiple ALPRs, it’s easy to determine a car’s daylong route and, by proxy, its owner’s daily routine.
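The linking step described above is simple to illustrate: given sightings of the form (plate, time, camera location), grouping by plate and sorting by time reconstructs a car's route. The data, field names, and function below are all hypothetical, a minimal sketch of the concept rather than how any real ALPR vendor's system works:

```python
from collections import defaultdict

# Hypothetical sightings from three cameras: (plate, timestamp, camera location).
sightings = [
    ("8ABC123", "07:42", "Main St & 3rd Ave"),
    ("5XYZ789", "07:44", "Main St & 3rd Ave"),
    ("8ABC123", "07:51", "Highway 99 onramp"),
    ("8ABC123", "08:10", "Office Park Dr"),
]

def routes_by_plate(sightings):
    """Group sightings by plate, then sort each group by time to get a route."""
    grouped = defaultdict(list)
    for plate, ts, loc in sightings:
        grouped[plate].append((ts, loc))
    return {plate: [loc for _, loc in sorted(seen)] for plate, seen in grouped.items()}

print(routes_by_plate(sightings)["8ABC123"])
# ['Main St & 3rd Ave', 'Highway 99 onramp', 'Office Park Dr']
```

A handful of lines is enough to turn isolated camera hits into a daily routine, which is exactly why the breadth of access to these databases matters.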

This deeply sensitive information has been exposed in recent history.

In 2024, the US Cybersecurity and Infrastructure Security Agency discovered seven vulnerabilities in cameras made by Motorola Solutions, and at the start of 2025, the outlet Wired reported that more than 150 ALPR cameras were leaking their live streams.

But there’s another concern with ALPRs besides data security and potential vulnerability exploits, and that’s with what they store and how they’re accessed.

ALPRs are almost uniformly purchased and used by law enforcement. These devices have been used to help solve crime, but their databases can be accessed by police who do not live in your city, or county, or even state, and who do not need a warrant before making a search.

In fact, when police access the databases managed by one major ALPR manufacturer, named Flock, one of the few guardrails those police encounter is having to type a single word into a basic text box. When the Electronic Frontier Foundation analyzed 12 million searches made by police in Flock’s systems, it learned that police sometimes filled that text box with the word “protest,” meaning that police were potentially investigating activity that is protected by the First Amendment.
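That kind of analysis amounts to scanning the free-text reason field of a search audit log for sensitive keywords. A rough sketch of the idea (the log format and entries here are invented, not Flock’s actual schema):

```python
# Hypothetical audit-log entries: (agency, free-text search reason).
searches = [
    ("Dept A", "stolen vehicle"),
    ("Dept B", "protest"),
    ("Dept C", "welfare check"),
    ("Dept D", "Protest downtown"),
]

def flag_keyword(searches, keyword):
    """Return entries whose stated reason contains the keyword, case-insensitively."""
    kw = keyword.lower()
    return [(agency, reason) for agency, reason in searches if kw in reason.lower()]

for agency, reason in flag_keyword(searches, "protest"):
    print(agency, "-", reason)
```

The point of the sketch is how little the guardrail guards: a single unvalidated text field is the only record of why a search happened.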

Today, on the Lock and Code podcast with host David Ruiz, we speak with Will Freeman, founder of the ALPR-tracking project DeFlock, about this growing tide of neighborhood surveillance and the flimsy protections afforded to everyday people.

“License plate readers are a hundred percent used to circumvent the Fourth Amendment because [police] don’t have to see a judge. They don’t have to find probable cause. According to the policies of most police departments, they don’t even have to have reasonable suspicion.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

  •  


EFF’s ‘How to Fix the Internet’ Podcast: 2025 in Review

2025 was a stellar year for EFF’s award-winning podcast, “How to Fix the Internet,” as our sixth season focused on the tools and technology of freedom. 

It seems like everywhere we turn we see dystopian stories about technology’s impact on our lives and our futures. From tracking-based surveillance capitalism to street-level government surveillance, from the dominance of a few large platforms choking innovation to the growing efforts by authoritarian governments to control what we see and say, the landscape can feel bleak. Exposing and articulating these problems is important, but so is envisioning and then building solutions. That’s where our podcast comes in. 

EFF's How to Fix the Internet podcast offers a better way forward. Through curious conversations with some of the leading minds in law and technology, EFF Executive Director Cindy Cohn and Activism Director Jason Kelley explore creative solutions to some of today’s biggest tech challenges. Our sixth season, which ran from May through September, featured: 

  • “Digital Autonomy for Bodily Autonomy” – We all leave digital trails as we navigate the internet: records of what we searched for, what we bought, who we talked to, where we went or want to go in the real world, and those trails usually are owned by the big corporations behind the platforms we use. But what if we valued our digital autonomy the way that we do our bodily autonomy? Digital Defense Fund Director Kate Bertash joined Cindy and Jason to discuss how creativity and community can align to center people in the digital world and make us freer both online and offline. 
  • “Love the Internet Before You Hate On It” – There’s a weird belief out there that tech critics hate technology. But do movie critics hate movies? Do food critics hate food? No! The most effective, insightful critics do what they do because they love something so deeply that they want to see it made even better. Molly White, a researcher, software engineer, and writer who focuses on the cryptocurrency industry, blockchains, web3, and other tech, joined Cindy and Jason to discuss working toward a human-centered internet that gives everyone a sense of control and interaction; open to all in the way that Wikipedia was (and still is) for her and so many others: not just as a static knowledge resource, but as something in which we can all participate. 
  • “Why Three is Tor's Magic Number” – Many in Silicon Valley, and in U.S. business at large, seem to believe innovation springs only from competition, a race to build the next big thing first, cheaper, better, best. But what if collaboration and community breed innovation just as well as adversarial competition? Tor Project Executive Director Isabela Fernandes joined Cindy and Jason to discuss the importance of not just accepting technology as it’s given to us, but collaboratively breaking it, tinkering with it, and rebuilding it together until it becomes the technology that we really need to make our world a better place. 
  • “Securing Journalism on the ‘Data-Greedy’ Internet” – Public-interest journalism speaks truth to power, so protecting press freedom is part of protecting democracy. But what does it take to digitally secure journalists’ work in an environment where critics, hackers, oppressive regimes, and others seem to have the free press in their crosshairs? Freedom of the Press Foundation Digital Security Director Harlo Holmes joined Cindy and Jason to discuss the tools and techniques that help journalists protect themselves and their sources while keeping the world informed. 
  • “Cryptography Makes a Post-Quantum Leap” – The cryptography that protects our privacy and security online relies on the fact that even the strongest computers will take essentially forever to do certain tasks, like factoring prime numbers and finding discrete logarithms, which are important for RSA encryption, Diffie-Hellman key exchanges, and elliptic curve encryption. But what happens when those problems, and the cryptography they underpin, are no longer infeasible for computers to solve? Will our online defenses collapse? Research and applied cryptographer Deirdre Connolly joined Cindy and Jason to discuss not only how post-quantum cryptography can shore up those existing walls but also help us find entirely new methods of protecting our information. 
  • “Finding the Joy in Digital Security” – Many people approach digital security training with furrowed brows, as an obstacle to overcome. But what if learning to keep your tech safe and secure was consistently playful and fun? People react better to learning and retain more knowledge when they're having a good time. It doesn’t mean the topic isn’t serious; it’s just about intentionally approaching a serious topic with joy. East Africa digital security trainer Helen Andromedon joined Cindy and Jason to discuss making digital security less complicated, more relevant, and more joyful to real users, and encouraging all women and girls to take online safety into their own hands so that they can feel fully present and invested in the digital world. 
  • “Smashing the Tech Oligarchy” – Many of the internet’s thorniest problems can be attributed to the concentration of power in a few corporate hands: the surveillance capitalism that makes it profitable to invade our privacy, the lack of algorithmic transparency that turns artificial intelligence and other tech into impenetrable black boxes, the rent-seeking behavior that seeks to monopolize and mega-monetize an existing market instead of creating new products or markets, and much more. Tech journalist and critic Kara Swisher joined Cindy and Jason to discuss regulation that can keep people safe online without stifling innovation, creating an internet that’s transparent and beneficial for all, not just a collection of fiefdoms run by a handful of homogenous oligarchs. 
  • “Separating AI Hope from AI Hype” – If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and it leaves a lot of people very confused about what exactly AI can and can’t do. Princeton Professor and “AI Snake Oil” publisher Arvind Narayanan joined Cindy and Jason to discuss how we get to a world in which AI can improve aspects of our lives from education to transportation, if we make some system improvements first, and how AI will likely work in ways that we barely notice but that help us grow and thrive. 
  • “Protecting Privacy in Your Brain” – Rapidly advancing "neurotechnology" could offer new ways for people with brain trauma or degenerative diseases to communicate, as the New York Times reported this month, but it also could open the door to abusing the privacy of the most personal data of all: our thoughts. Worse yet, it could allow manipulating how people perceive and process reality, as well as their responses to it, a Pandora’s box of epic proportions. Neuroscientist Rafael Yuste and human rights lawyer Jared Genser, co-founders of The Neurorights Foundation, joined Cindy and Jason to discuss how technology is advancing our understanding of what it means to be human, and the solid legal guardrails they're building to protect the privacy of the mind. 
  • “Building and Preserving the Library of Everything” – Access to knowledge not only creates an informed populace that democracy requires but also gives people the tools they need to thrive. And the internet has radically expanded access to knowledge in ways that earlier generations could only have dreamed of, so long as that knowledge is allowed to flow freely. Internet Archive founder and digital librarian Brewster Kahle joined Cindy and Jason to discuss how the free flow of knowledge makes all of us more free.
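To make the factoring hardness from the post-quantum episode above concrete: classical trial division finds a factor of a small semiprime instantly, but its cost grows exponentially with the bit length of the modulus, which is exactly the barrier that quantum algorithms like Shor's would remove. A toy illustration, not a real attack on RSA:

```python
def trial_factor(n):
    """Return the smallest prime factor of n by trial division.

    Runs in O(sqrt(n)) divisions, i.e. exponential in the bit length
    of n: fine for toy numbers, hopeless for a 2048-bit RSA modulus.
    """
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

# 3233 = 53 * 61, a classic textbook RSA toy modulus: factors instantly.
print(trial_factor(3233))  # 53
# A 2048-bit RSA modulus has ~617 decimal digits; trial division would
# need on the order of 2**1024 divisions, far beyond any classical computer.
```

Post-quantum cryptography replaces these number-theoretic assumptions with problems believed hard even for quantum computers, which is the "leap" the episode discusses.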

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

  •  

2025 Year in Review at Cloud Security Podcast by Google

(written jointly with Tim Peacock)

Five years. It’s enough time to fully launch a cloud migration, deploy a new SIEM, or — if you’re a very large enterprise — just start thinking about doing the first two. It’s also how long Tim and I have been subjecting the world to our thoughts on Cloud Security Podcast by Google.

We finally got around to writing the annual “reflections blog.” And, honestly, looking back at Season 5, the state of the industry feels a lot like a chaotic Cybersecurity Garage Sale.

We’re all standing knee-deep in a pile of dusty, obsolete junk — the mid-2000s SIEMs, the 1990s unauthenticated vulnerability scans — while clutching shiny, still-in-the-box AI Agent gadgets we don’t quite know where to put. It’s a mess. But within this mess, a few essential, high-value items have emerged.

So, to all our listeners — the veterans and the newcomers — thank you for sorting through the chaos with us. For Season 6, we’re going all video, by default (opening January 5, 2026). Find us on our new YouTube home: Cloud Security Podcast by Google on YouTube.

Below you will find three fun sections: Anton’s faves, Tim’s faves, and the top 10 by listens (“data’s faves” of sorts, or perhaps listener faves).

Enjoy!

Anton: My selections are, perhaps, a bit predictable — but they were immense fun to record and, I believe, are absolutely essential listening! But, hey, I am biased a bit!

  1. EP236 Accelerated SIEM Journey: A SOC Leader’s Playbook for Modernization and AI This fun episode provides a playbook for SOC leaders on accelerating their SIEM modernization journey. We go into the steps the bank took for moving beyond legacy systems, focusing on how to integrate AI for transformative results and build a truly modern Security Operations Center.
  2. EP254 Escaping 1990s Vulnerability Management: From Unauthenticated Scans to AI-Driven Mitigation This essential episode with Caleb Hoch tackles the “fractions of a century” time lag in vulnerability management, moving beyond endless unauthenticated scans. We discuss how to establish a Gold Standard prioritization model and why running VM Tabletop Exercises is the vital, transformative practice needed for true modernization.
  3. EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025 The single most important lesson from RSA 2025 was captured in this episode: AI is merely “Addressable, Not Solvable.” We cut through the hype to discuss where AI can deliver real, practical security value, and where we still need our smart human colleagues to lead the way. This is essential listening for anyone trying to navigate the flood of vendor claims.
  4. EP242 The AI SOC: Is This The Automation We’ve Been Waiting For? This epic episode tackles the most pressing question for security operations: Can “AI SOC” deliver the transformative automation we’ve been waiting for? We discuss — with Anton’s former colleague — the real-world applications of AI in the SOC, focusing on practical gains (and how to know you “gained” anything) and what it means for the future role of the human analyst.
  5. EP238 Google Lessons for Using AI Agents for Securing Our Enterprise This fun episode brings you practical lessons from Google’s own experience using AI agents to secure our enterprise at scale (see this blog also). We dive (not “delve”, mind you!) deep into the real-world application of this technology, focusing on the wins, the challenges, and what it took to adopt. This is essential listening for any leader looking to leverage AI agents effectively without falling into the hype cesspool.
  6. BONUS: EP237 Making Security Personal at the Speed and Scale of TikTok This unique episode goes into what it takes to secure a hyper-scale, global platform like TikTok. We discuss how to move beyond legacy compliance while living in a modern microservices architecture, balance a consistent global security posture with localized regulatory demands, and, most importantly, empower every user with practical tips (like 2FA and strong passphrases) to make security personal.

Tim: My picks almost entirely avoid overlapping with Anton’s. We started our lists separately, but then realized that we scooped each other on two episodes. We both liked our episode with Manija Poulatova enough to keep her on both of our lists!

  1. EP256 Rewiring Democracy & Hacking Trust: Bruce Schneier on the AI Offense-Defense Balance This episode is a total delight for both of us. For me, I got to not only meet one of my security heroes, I got to see Anton do the same! We named Bruce in our early planning docs as somebody we’d like to have on the show someday when we’re all grown up. Not a bad way to wrap up five years of weekly podcasting!
  2. EP236 Accelerated SIEM Journey: A SOC Leader’s Playbook for Modernization and AI Manija and I were on a panel together in Las Vegas during Google Cloud Next 2025. A few themes from that panel came through in our episode together that I love and think are vital for anyone. First, aim for transformation not migration. As an industry we are not doing so well compared to air transport safety. We cannot cling to our old ways and hope for a better set of outcomes. Second, AI is here to enable our human colleagues, not replace them. We can find greater meaning, joy, and productivity in our work, even as SOC analysts, once we embrace what AI can automate for us.
  3. EP239 Linux Security: The Detection and Response Disconnect and Where Is My Agentless EDR Craig was introduced to me by Friend Of The Show (and friend of mine!) Vijay Ganti (EP196) as someone building an innovative approach to EDR security. Scheduling this episode ended up a little tricky, and I got to do an episode without Anton. That ended up ok, because in Craig I found a totally kindred spirit. We’ve both built systems to secure Linux without agents, though from two different approaches. His stories of finding badness in places we couldn’t previously look, and doing so scalably even for phone towers up the hill behind his house, really resonated with the part of me that spent four years building out Virtual Machine Threat Detection here at Google Cloud. This is definitely an episode for listeners who like to question conventional security thinking.
  4. EP252 The Agentic SOC Reality: Governing AI Agents, Data Fidelity, and Measuring Success Another fun origin story: this episode was conceived in a karaoke booth in Singapore. Alex and Lars are two of our early design partners for the SecOps Triage Agent and their feedback to the team and on this episode is super valuable. Alex gets bonus points on this episode for using the word squelch which I’ve been pushing internally as a metaphor for our noise control systems. This is a must-listen for anyone interested in real AI adoption in their SOC. If Alex and Lars can do it across an unbelievable number of regulatory jurisdictions, you can too!
  5. EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking Bringing Heather back to the show has been a goal of ours for ages. When I read her article, coauthored with Gadi Evron and Bruce Schneier, I knew I’d found our topic. As I said on the show, if I’d seen this article written by anybody else I’d laugh, but with this trio of authors I knew it was something to take seriously. Read the article, listen to the episode, let us know in the comments if you’re as scared as I was!
  6. BONUS: EP232 The Human Element of Privacy: Protecting High-Risk Targets and Designing Systems I get one bonus episode for our top ten, so I’m going to include my classmate Sarah Aoun. She is an amazing Googler and on this episode she offers advice that’s useful almost universally, but especially if you believe that you’re a person who is at risk of being targeted online. This is firmly outside of our “cloud security” wheelhouse, but well worth a listen to understand threat modeling and security response for individuals.

Top 10 episodes by listens (excluding the oldest 3)

  1. EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil
  2. EP47 “Megatrends, Macro-changes, Microservices, Oh My! Changes in 2022 and Beyond in Cloud Security”
  3. EP153 Kevin Mandia on Cloud Breaches: New Threat Actors, Old Mistakes, and Lessons for All
  4. EP8 Zero Trust: Fast Forward from 2010 to 2021
  5. EP109 How Google Does Vulnerability Management: The Not So Secret Secrets!
  6. EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw
  7. EP17 Modern Threat Detection at Google
  8. EP103 Security Incident Response and Public Cloud — Exploring with Mandiant
  9. EP156 Living Off the Land and Attacking Critical Infrastructure: Mandiant Incident Deep Dive
  10. EP12 Threat Models and Cloud Security


2025 Year in Review at Cloud Security Podcast by Google was originally published in Anton on Security on Medium, where people are continuing the conversation by highlighting and responding to this story.

  •  

250 Episodes of Cloud Security Podcast by Google: From Confidential Computing to AI-Ready SOC

Gemini for Docs improvises

So this may suck, but I am hoping to at least earn some points for honesty here. I wanted to write something pithy and smart once I realized our Cloud Security Podcast by Google just aired our 250th episode (“EP250 The End of “Collect Everything”? Moving from Centralization to Data Access?”). Yet nothing sufficiently pithy came to my mind …

… so I went around and asked a whole bunch of AIs and agents and such. Then massaged and aggregated the outputs, then ran more AI on the result. And then lightly curated it. Then deleted the bottom 2 stupidest points they made.

So, here it comes … in all its sloppy glory!

  1. The Foundational Roots and Unchanging Mission: Our show started with foundational cloud security topics, like Zero Trust, Data Security, and Cloud Migration Security, which drew the initial large audiences. The core commitment since Episode 1 has been to question conventional wisdom, avoid “security theater” (EP248), and explore whether security measures truly benefit the user and the organization.
  2. The AI Transformation: We had a sizable shift with the last 50 episodes, where AI became a central theme, or at least one of the themes we always come back to (and, yes, this covers our 3 pillars of securing AI, AI for security and countering the AI-armed attacker). The focus has moved past general hype to practical applications, securing AI systems, and asking challenging questions like “Data readiness for AI SOC” (EP249).
  3. The Enduring Popularity of Detection & Response (D&R): We highlight that D&R and modernizing the SOC continue to be extremely popular with the audience (EP236 is epic). We trace the evolution of this topic from foundational engineering (like the very popular EP75 on scaling D&R at Google) to the architectural questions in EP250.
  4. “How Google Does Security” Sells the Tickets: We love the episodes offering a candid look behind Google’s security curtain on topics like internal red teaming, detection scaling, and Cloud IR tabletops. They consistently remain perennial audience favorites (the latest in this series is EP238 on how we use AI agents for security).
  5. The Centrality of People and Process: We emphasize the recurring lessons that the most challenging aspects of large-scale cloud (and now AI) security transformations are often the “people” and “process” elements, not the technical “tech” itself. EP237 is an epic example of this.
  6. The Call for Intentionality: We reinforce the importance of having a clear purpose for every security activity and following an engineering-led approach (EP117). The “magical” advice from EP236 is: to ask of every security element, “what is it in service of?”
  7. The Persistence of Old Problems: We often lament, with a touch of humor, the industry’s tendency to repeat fundamental security mistakes (the SIEM Paradox in EP234, for instance, or EP223 in general), underscoring the ongoing need to cover “boring” basics. We will absolutely continue this (a new episode on vulnerability management “stale” problems is coming soon).
  8. Community and Format Growth: We continue to “sorta-kinda” (human wrote this, eh?) grow the podcast beyond a purely audio medium, including the launch of live video sessions and a Community site to foster more dialogue and feedback.
  9. The Unique Culture and Authenticity of the Show Stays: We remain obsessed about selecting high-energy, vocal, and knowledgeable guests and fun topics. We will keep on with our “inside jokes” like not allowing guests to recommend Anton’s blog as an episode resource and pokes about firewall appliances in the cloud (they are there).
  10. A Glimpse at 300: We want to tease future topics that will define the next 50+ episodes, such as deeper dives into Agentic AI, challenges of cross-cloud incident response and forensics, or the geopolitical aspects of cloud security. Give us ideas, will ya? Otherwise, you will get to hear about AI and D&R much of the time…

Top 5 popular episodes (excluding the oldest 3)

  1. EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil
  2. EP153 Kevin Mandia on Cloud Breaches: New Threat Actors, Old Mistakes, and Lessons for All
  3. EP47 Megatrends, Macro-changes, Microservices, Oh My! Changes in 2022 and Beyond in Cloud Security
  4. EP8 Zero Trust: Fast Forward from 2010 to 2021
  5. EP17 Modern Threat Detection at Google

Enjoy the show!


250 Episodes of Cloud Security Podcast by Google: From Confidential Computing to AI-Ready SOC was originally published in Anton on Security on Medium, where people are continuing the conversation by highlighting and responding to this story.

  •  

Podcast: Attack Tactics 6! Return of the Blue Team

Download slides: https://www.activecountermeasures.com/presentations In this webcast we walk through the step-by-step defenses to stop the attackers at every step of the way we showed in Attack Tactics Part 5!!! Originally recorded […]

The post Podcast: Attack Tactics 6! Return of the Blue Team appeared first on Black Hills Information Security, Inc..


  •  

BHIS Podcast: Blockchain and You! InfoSec Edition

Take a good look at Bitcoin right now… these are the unlucky ones. These are the unfortunate souls who jumped on another overinflated balloon. But, does this Bitcoin crash completely […]

The post BHIS Podcast: Blockchain and You! InfoSec Edition appeared first on Black Hills Information Security, Inc..


  •  

PODCAST: RDP Logging Bypass and Azure Active Directory Recon

For this podcast we cover a couple of different topics. First, we talk about how to password spray in a non-attributable sort of way. Beau found a way to obfuscate […]

The post PODCAST: RDP Logging Bypass and Azure Active Directory Recon appeared first on Black Hills Information Security, Inc..


  •  

PODCAST: BHIS Sorta Top Used Tools of 2018

In this webcast we cover some of the core tools we use all the time at Black Hills Information Security. However, there’s a twist. We don’t talk about Nessus, Nmap, […]

The post PODCAST: BHIS Sorta Top Used Tools of 2018 appeared first on Black Hills Information Security, Inc..


  •  

PODCAST: Raising Hacker Kids

Yes.. Ethical Hacker Kids. The holidays are coming up! Here John & Jordan cover the different games, tools and gifts we can give kids that help teach them the trade. […]

The post PODCAST: Raising Hacker Kids appeared first on Black Hills Information Security, Inc..


  •  

PODCAST: Blue Team-Apalooza

Over the past few months, we have discovered a couple trends that organizations seem to be missing. No silver bullets, just some general vulnerability issues we are seeing again and […]

The post PODCAST: Blue Team-Apalooza appeared first on Black Hills Information Security, Inc..


  •  