What a browser-in-the-browser attack is, and how to spot a fake login window | Kaspersky official blog

In 2022, we dived deep into an attack method called browser-in-the-browser — originally developed by the cybersecurity researcher known as mr.d0x. Back then, no actual examples existed of this model being used in the wild. Fast-forward four years, and browser-in-the-browser attacks have graduated from the theoretical to the real: attackers are now using them in the field. In this post, we revisit what exactly a browser-in-the-browser attack is, show how hackers are deploying it, and, most importantly, explain how to keep yourself from becoming its next victim.

What is a browser-in-the-browser (BitB) attack?

For starters, let’s refresh our memories on what mr.d0x actually cooked up. The core of the attack stems from his observation of just how advanced modern web development tools — HTML, CSS, JavaScript, and the like — have become. It’s this realization that inspired the researcher to come up with a particularly elaborate phishing model.

A browser-in-the-browser attack is a sophisticated form of phishing that uses web design techniques to craft fraudulent login windows that look just like the real thing, imitating well-known services such as Microsoft, Google, Facebook, or Apple. In the researcher’s concept, an attacker builds a legitimate-looking site to lure in victims. Once there, users can’t leave comments or make purchases unless they “sign in” first.

Signing in seems easy enough: just click the Sign in with {popular service name} button. And this is where things get interesting: instead of a genuine authentication page provided by the legitimate service, the user gets a fake form rendered inside the malicious site, looking exactly like… a browser pop-up. Furthermore, the address bar in the pop-up, also rendered by the attackers, displays a perfectly legitimate URL. Even a close inspection won’t reveal the trick.

From there, the unsuspecting user enters their credentials for Microsoft, Google, Facebook, or Apple into this rendered window, and those details go straight to the cybercriminals. For a while this scheme remained a theoretical experiment by the security researcher. Now — real-world attackers have added it to their arsenals.

Facebook credential theft

Attackers have put their own spin on mr.d0x’s original concept: recent browser-in-the-browser hits have been kicking off with emails designed to alarm recipients. For instance, one phishing campaign posed as a law firm informing the user they’d committed a copyright violation by posting something on Facebook. The message included a credible-looking link allegedly to the offending post.

Phishing email masquerading as a legal notice

Attackers sent messages on behalf of a fake law firm alleging copyright infringement — complete with a link supposedly to the problematic Facebook post. Source

Interestingly, to lower the victim’s guard, clicking the link didn’t immediately open a fake Facebook login page. Instead, they were first greeted by a bogus Meta CAPTCHA. Only after passing it was the victim presented with the fake authentication pop-up.

Fake login window rendered directly inside the webpage

This isn’t a real browser pop-up; it’s a website element mimicking a Facebook login page — a ruse that allows attackers to display a perfectly convincing address. Source

Naturally, the fake Facebook login page followed mr.d0x’s blueprint: it was built entirely with web design tools to harvest the victim’s credentials. Meanwhile, the URL displayed in the forged address bar pointed to the real Facebook site — www.facebook.com.

How to avoid becoming a victim

The fact that scammers are now deploying browser-in-the-browser attacks just goes to show that their bag of tricks is constantly evolving. But don’t despair: there is a way to tell whether a login window is legit. A password manager is your friend here; among other things, it acts as a reliable security litmus test for any website.

That’s because when it comes to auto-filling credentials, a password manager looks at the actual URL, not what the address bar appears to show, or what the page itself looks like. Unlike a human user, a password manager can’t be fooled with browser-in-the-browser tactics, or any other tricks, like domains having a slightly different address (typosquatting) or phishing forms buried in ads and pop-ups. There’s a simple rule: if your password manager offers to auto-fill your login and password, you’re on a website you’ve previously saved credentials for. If it stays silent, something’s fishy.
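The URL-matching rule described above can be illustrated with a simplified sketch. This is not how any particular password manager is implemented, and the vault contents and function name here are hypothetical; the point is only that the decision keys off the page’s actual hostname, never its visual appearance:

```python
from urllib.parse import urlparse

# Hypothetical vault: credentials keyed by the exact hostname they were saved for.
VAULT = {"www.facebook.com": ("user@example.com", "correct-horse-battery")}

def should_autofill(page_url: str) -> bool:
    """Offer autofill only if the page's real hostname matches a saved entry."""
    host = (urlparse(page_url).hostname or "").lower()
    return host in VAULT

# The real Facebook login page matches a saved entry...
print(should_autofill("https://www.facebook.com/login"))     # True
# ...but a BitB page merely *drawing* facebook.com inside a fake
# address bar is still served from the attacker's own domain:
print(should_autofill("https://phish.attacker.example/fb"))  # False
```

Because the fake address bar is just pixels on the attacker’s page, it never changes what `urlparse` sees, which is why the manager stays silent on a phishing site.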

Beyond that, following our time-tested advice will help you defend against various phishing methods, or at least minimize the fallout if an attack succeeds:

  • Enable two-factor authentication (2FA) for every account that supports it. Ideally, use one-time codes generated by a dedicated authenticator app as your second factor. This helps you dodge phishing schemes designed to intercept confirmation codes sent via SMS, messaging apps, or email. You can read more about one-time-code 2FA in our dedicated post.
  • Use passkeys. The option to sign in with this method can also serve as a signal that you’re on a legitimate site. You can learn all about what passkeys are and how to start using them in our deep dive into the technology.
  • Set unique, complex passwords for all your accounts. Whatever you do, never reuse the same password across different accounts. We recently covered what makes a password truly strong on our blog. To generate unique combinations — without needing to remember them — Kaspersky Password Manager is your best bet. As an added bonus, it can also generate one-time codes for two-factor authentication, store your passkeys, and synchronize your passwords and files across your various devices.
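The one-time codes from authenticator apps mentioned in the first point follow an open standard, TOTP (RFC 6238): an HMAC over a shared secret and the current 30-second time window. A minimal sketch using only Python’s standard library, checked against the RFC’s published test vector:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = struct.pack(">Q", unix_time // step)  # 8-byte big-endian interval count
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 yields 94287082.
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

This is also why each code is only valid for one short time window, and why the shared secret (usually enrolled via a QR code) must never leave your device.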

Finally, this post serves as yet another reminder that theoretical attacks described by cybersecurity researchers often find their way out into the wild. So, keep an eye on our blog, and subscribe to our Telegram channel to stay up to speed on the latest threats to your digital security and how to shut them down.

Read about other inventive phishing techniques scammers are using day in day out:


Child exploitation, grooming, and social media addiction claims put Meta on trial

Meta is facing two trials over child safety allegations in California and New Mexico. The lawsuits are landmark cases, marking the first time that any such accusations have reached a jury. Although over 40 state attorneys general have filed suits about child safety issues with social media, none had gone to trial until now.

The New Mexico case, filed by Attorney General Raúl Torrez in December 2023, centers on child sexual exploitation. Torrez’s team built their evidence by posing as children online and documenting what happened next, in the form of sexual solicitations. The team brought the suit under New Mexico’s Unfair Trade Practices Act, a consumer protection statute that prosecutors argue sidesteps Section 230 protections.

The most damaging material in the trial, which is expected to run seven weeks, may be Meta’s own paperwork. Newly unsealed internal documents revealed that a company safety researcher had warned about the sheer scale of the problem, claiming that around half a million cases of child exploitation are happening daily. Torrez did not mince words about what he believes the platform has become, calling it an online marketplace for human trafficking. From the complaint:

“Meta’s platforms Facebook and Instagram are a breeding ground for predators who target children for human trafficking, the distribution of sexual images, grooming, and solicitation.”

The complaint’s emphasis on weak age verification touches on a broader issue regulators around the world are now grappling with: how platforms verify the age of their youngest users—and how easily those systems can be bypassed.

In our own research into children’s social media accounts, we found that creating underage profiles can be surprisingly straightforward. In some cases, minimal checks or self-declared birthdates were enough to access full accounts. We also identified loopholes that could allow children to encounter content they shouldn’t or make it easier for adults with bad intentions to find them.

The social media and VR giant has pushed back hard, calling the state’s investigation ethically compromised and accusing prosecutors of cherry-picking data. Defence attorney Kevin Huff argued that the company disclosed its risks rather than concealing them.

Yesterday, Stanford psychiatrist Dr. Anna Lembke told the court she believes Meta’s design features are addictive and that the company has been using the term “Problematic Internet Use” internally to avoid acknowledging addiction.

Meanwhile in Los Angeles, a separate bellwether case against Meta and Google opened on Monday. A 20-year-old woman identified only as KGM is at the center of the case. She alleges that YouTube and Instagram hooked her from childhood. She testified that she was watching YouTube at six, on Instagram by nine, and suffered from worsening depression and body dysmorphia. Her case, which TikTok and Snap settled before trial, is the first of more than 2,400 personal injury filings consolidated in the proceeding. Plaintiffs’ attorney Mark Lanier called it a case about:

“two of the richest corporations in history, who have engineered addiction in children’s brains.”

A litany of allegations

None of this appeared from nowhere. In 2021, whistleblower Frances Haugen leaked internal Facebook documents showing the company knew its platforms damaged teenage mental health. In 2023, Meta whistleblower Arturo Béjar testified before the Senate that the company ignored sexual endangerment of children.

Unredacted documents unsealed in the New Mexico case in early 2024 suggested something uglier still: that the company had actively marketed messaging platforms to children while suppressing safety features that weren’t considered profitable. Internal employees sounded alarms for years but executives reportedly chose growth, according to New Mexico AG Raúl Torrez. Last September, whistleblowers said that the company had ignored child sexual abuse in virtual reality environments.

Outside the courtroom, governments around the world are moving faster than the US Congress. Australia banned under 16s from social media in December 2025, becoming the first country to do so. France’s National Assembly followed, approving a ban on social media for under 15s in January by 130 votes to 21. Spain announced its own under 16 ban this month. By last count, at least 15 European governments were considering similar measures. Whether any of these bans will actually work is uncertain, particularly as young users openly discuss ways to bypass controls.

The United States, by contrast, has passed exactly one major federal child online safety law: the Children’s Online Privacy Protection Act (COPPA), in 1998. The Kids Online Safety Act (KOSA), introduced in 2022, passed the Senate 91-3 in mid-2024 then stalled in the House. It was reintroduced last May and has yet to reach a floor vote. States have tried to fill the gap, with 18 proposing similar legislation in 2025, but only one of those laws was enacted (in Nebraska). A comprehensive federal framework remains nowhere in sight.

On its most recent earnings call, Meta acknowledged it could face material financial losses this year. The pressure is no longer theoretical. The juries in Santa Fe and Los Angeles will now weigh whether the company’s design choices and safety measures crossed legal lines.

If you want to understand how social media platforms can expose children to harmful content—and what parents can realistically do about it—check out our research project on social media safety.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.


Meta confirms it’s working on premium subscription for its apps

Meta plans to test exclusive features that will be incorporated in paid versions of Facebook, Instagram, and WhatsApp. It confirmed these plans to TechCrunch.

But these plans are not to be confused with the ad-free subscription options that Meta introduced for Facebook and Instagram in the EU, the European Economic Area, and Switzerland in late 2023 and framed as a way to comply with General Data Protection Regulation (GDPR) and Digital Markets Act requirements.

From November 2023, users in those regions could either keep using the services for free with personalized ads or pay a monthly fee for an ad‑free experience. European rules require Meta to get users’ consent in order to show them targeted ads, so this was an obvious attempt to recoup advertising revenue when users declined to give that consent.

This year, users in the UK were given the same choice: use Meta’s products for free or subscribe to use them without ads. But only grudgingly, judging by the tone in the offer… “As part of laws in your region, you have a choice.”

As part of laws in your region, you have a choice
The ad-free option that has been rolling out coincides with the announcement of Meta’s premium subscriptions.

That ad-free option, however, is not what Meta is talking about now.

The newly announced plans are not about ads, and they are also separate from Meta Verified, which starts at around $15 a month and focuses on creators and businesses, offering a verification badge, better support, and anti‑impersonation protection.

Instead, these new subscriptions are likely to focus on additional features—more control over how users share and connect, and possibly tools such as expanded AI capabilities, unlimited audience lists, seeing who you follow that doesn’t follow you back, or viewing stories without the poster knowing it was you.

These examples are unconfirmed. All we know for sure is that Meta plans to test new paid features to see which ones users are willing to pay for and how much it can charge.

Meta has said these features will focus on productivity, creativity, and expanded AI.

My opinion

Unfortunately, this feels like another refusal to listen.

Most of us aren’t asking for more AI in our feeds. We’re asking for a basic sense of control: control over who sees us, what’s tracked about us, and how our data is used to feed an algorithm designed to keep us scrolling.

Users shouldn’t have to choose between being mined for behavioral data or paying a monthly fee just to be left alone. The message baked into “pay or be profiled” is that privacy is now a luxury good, not a default right. But while regulators keep saying the model is unlawful, the experience on the ground still nudges people toward the path of least resistance: accept the tracking and move on.

Even then, this level of choice is only available to users in Europe.

Why not offer the same option to users in the US? Or will it take stronger US privacy regulation to make that happen?


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.


Enshittification is ruining everything online (Lock and Code S07E01)

This week on the Lock and Code podcast…

There’s a bizarre thing happening online right now where everything is getting worse.

Your Google results have become so bad that you’ve likely typed what you’re looking for, plus the word “Reddit,” so you can find discussion from actual humans. If you didn’t take this route, you might get served AI results from Google Gemini, which once recommended that every person should eat “at least one small rock per day.” Your Amazon results are a slog, filled with products that have surreptitiously paid reviews. Your Facebook feed could be entirely irrelevant because the company decided years ago that you didn’t want to see what your friends posted, you wanted to see what brands posted, because brands pay Facebook, and you don’t, so brands are more important than your friends.

But, according to digital rights activist and award-winning author Cory Doctorow, this wave of online deterioration isn’t an accident—it’s a business strategy, and it can be summed up in a word he coined a couple of years ago: Enshittification.

Enshittification is the process by which an online platform—like Facebook, Google, or Amazon—harms its own services and products for short-term gain while managing to avoid any meaningful consequences, like the loss of customers or the impact of meaningful government regulation. It begins with an online platform treating new users with care, offering services, products, or connectivity that they may not find elsewhere. Then, the platform invites businesses on board that want to sell things to those users. This means businesses become the priority and the everyday user experience is hindered. But then, in the final stage, the platform also makes things worse for its business customers, making things better only for itself.

This is how a company like Amazon went from helping you find nearly anything you wanted to buy online to helping businesses sell you anything you wanted to buy online to making those businesses pay increasingly high fees to even be discovered online. Everyone, from buyers to sellers, is pretty much entrenched in the platform, so Amazon gets to dictate the terms.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Doctorow about enshittification’s fast damage across the internet, how to fight back, and where it all started.

“Once these laws were established, the tech companies were able to take advantage of them. And today we have a bunch of companies that aren’t tech companies that are nevertheless using technology to rig the game in ways that the tech companies pioneered.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.


What to do if you can’t get into your Facebook or Instagram account

How to prove your identity after your account gets hacked and how to improve security for the future

Your Facebook or Instagram account can be your link to friends, a profile for your work or a key to other services, so losing access can be very worrying. Here’s what to do if the worst happens.

If you have access to the phone number or email account associated with your Facebook or Instagram account, try to reset your password by clicking on the “Forgot password?” link on the main Facebook or Instagram login screen. Follow the instructions in the email or text message you receive.

If you no longer have access to the email account linked to your Facebook account, use a device with which you have previously logged into Facebook and go to facebook.com/login/identify. Enter any email address or phone number you might have associated with your account, or find your username, which is the string of characters after facebook.com/ on your profile page. Click on “No longer have access to these?”, “Forgotten account?” or “Recover” and follow the instructions to prove your identity and reset your password.

If your account was hacked, visit facebook.com/hacked or instagram.com/hacked/ on a device you have previously used to log in and follow the instructions. Visit the help with a hacked account page for Facebook or Instagram.

Change the password to something strong, long and unique, such as a combination of random words or a memorable lyric or quote. Avoid simple or guessable combinations. Use a password manager to help you remember it and other important details.
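A random-word passphrase of the kind described above can be generated with Python’s secrets module, which draws from a cryptographically secure random source. The word list here is a tiny stand-in for illustration; real generators use lists of thousands of words:

```python
import secrets

# Tiny illustrative word list; a real generator draws from thousands of words.
WORDS = ["ocean", "violet", "marble", "thunder", "cedar", "lantern", "copper", "meadow"]

def passphrase(words: int = 4, sep: str = "-") -> str:
    """Join randomly chosen words, using secrets (not random) for unpredictability."""
    return sep.join(secrets.choice(WORDS) for _ in range(words))

print(passphrase())  # e.g. "cedar-ocean-lantern-violet"
```

The strength comes from the size of the word list and the number of words, not from clever substitutions, so longer passphrases from bigger lists are better.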

Turn on two-step verification in the “password and security” section of the Accounts Centre. Use an authentication app or security key for this, not SMS codes. Save your recovery codes somewhere safe in case you lose access to your two-step authentication method.

Turn on “unrecognised login” alerts in the “password and security” section of the Accounts Centre, which will alert you to any suspicious login activity.

Remove any suspicious “friends” from your account – these could be fake accounts or scammers.

If you are eligible, turn on “advanced protection for Facebook” in the “password and security” section of the Accounts Centre.

© Photograph: bigtunaonline/Alamy



Lawrence’s List 061316

Editor’s Note: We’ll feature Lawrence’s List every week.  It will include interesting things he’s come across during the week as he’s an avid consumer of internet garbage and follows a […]

The post Lawrence’s List 061316 appeared first on Black Hills Information Security, Inc..
