Ransomware attacks on schools and colleges | Kaspersky official blog

6 March 2026 at 18:30

Back when ransomware was just a startup industry, the primary goal of the attackers was simple: encrypt data, then extort a ransom in exchange for decrypting it. Because of this, cybercriminals mostly targeted commercial enterprises — companies that valued their data enough to justify a hefty payout. Schools and colleges were generally left alone — hackers assumed educators didn’t have the kind of data worth paying a ransom for.

But times have changed, and so has the ransomware groups’ business model. The focus has shifted from payment for decryption, to extortion in exchange for non-disclosure of stolen data. Now, the “incentive” to pay isn’t just about restoring the company’s normal operations, but rather avoiding regulatory trouble, potential lawsuits, and reputational damage. And it’s this shift that’s put educational institutions in the crosshairs.

In this post, we discuss several cases of ransomware attacks on educational organizations, why they took place, and how to keep cybercriminals out of the classroom.

Attacks on educational institutions in 2025–2026

In February 2026, Sapienza University of Rome, one of Europe’s oldest and largest higher education institutions, suffered a ransomware attack. Internal systems were down for three days. According to sources familiar with the incident, the cybercriminals sent the university’s administration a link leading to a ransom demand. Upon clicking the link, a countdown timer started on the site that opened — counting down from 72 hours: the deadline for meeting the attackers’ demands. As of now, there’s still no word on whether the university administration paid up or not.

Unfortunately, this case isn’t an exception. At the very end of 2025, attackers targeted another Italian educational institution — a vocational training center in the small city of Treviso. Things aren’t looking much better in the UK, either: in the same year, Blacon High School was hit by ransomware. Its administration had to shut its doors for two days to restore its IT systems, assess the scale of the incident, and prevent the attack from spreading further through the network.

In fact, a UK government study suggests these incidents are just part of a broader trend. According to its 2025 data, cyberincidents hit 60% of secondary schools, 85% of colleges, and 91% of universities. Across the pond, American researchers also noted that in the first quarter of 2025, ransomware attacks in the global education sector surged by 69% year on year. Clearly, the trend is global.

Why schools and universities are becoming easy targets

The core of the problem is that modern educational organizations are rapidly incorporating digital services into their operations. A typical school or university infrastructure now manages a dizzying array of services:

  • Electronic gradebooks and registers
  • Distance learning platforms
  • Admission systems and databases for storing applicants’ personal data
  • Cloud storage for educational materials
  • Internal staff and student portals
  • Email for faculty, students, and the administration to communicate

While these systems make education more convenient and manageable, they also drastically expand the attack surface. Every new service and every additional user account is a potential doorway for a phishing campaign, access compromise, or a personal data leak.

According to a UK study, the primary vector for these attacks is basic phishing. But that’s not all that surprising: since the education sector was off the cybercriminals’ radar for so long, cybersecurity training for both staff and students was hardly a priority. As a result, even the most seasoned professors can find themselves falling for a fake email purportedly sent by the “dean” or the “school principal”.

But it’s not just the faculty. Students themselves often unwittingly act as mules for malware. In many institutions, students still frequently hand in assignments on USB flash drives. These drives travel across various home or public devices, picking up malicious digital hitchhikers along the way. All it takes is one infected USB drive plugged into a campus workstation to give an attacker a foothold in the internal network.

It’s worth noting that while USB drives aren’t as ubiquitous as they were a decade ago, they remain a staple in the educational environment. Dismissing the threats they carry isn’t a good idea.

How to ensure the cybersecurity of educational infrastructure

Let’s face it: training every literature and biology teacher to spot phishing emails is no easy, quick task. Similarly, the educational system isn’t going to cut down on USB usage overnight.

Fortunately, a robust security solution (such as Kaspersky Small Office Security) can do the heavy lifting for you. It’s ideal for schools and colleges that need set-it-and-forget-it protection without a steep learning curve. Plus, it’s affordable even for institutions operating on a tight budget, and doesn’t require constant management.

At the same time, Kaspersky Small Office Security addresses all the threats we’ve discussed above: it blocks clicks on phishing links, automatically scans USB drives the moment they’re plugged in, and prevents suspicious files from executing on devices connected to the school’s network.

Building an AI-Ready America: Teaching in the AI age

On Tuesday, February 24th, Microsoft Senior Director of Education and Workforce Policy Allyson Knox testified before the House Education & Workforce Subcommittee on Early Childhood, Elementary, and Secondary Education. To view the proceedings, visit the committee’s website.

STATEMENT OF ALLYSON KNOX

SENIOR DIRECTOR OF EDUCATION AND WORKFORCE POLICY

MICROSOFT CORPORATION

BEFORE THE

EDUCATION AND WORKFORCE COMMITTEE

SUBCOMMITTEE ON EARLY CHILDHOOD, ELEMENTARY, AND SECONDARY EDUCATION

UNITED STATES HOUSE OF REPRESENTATIVES

“BUILDING AN AI-READY AMERICA: TEACHING IN THE AI AGE”

TUESDAY, FEBRUARY 24, 2026

WASHINGTON, D.C.

Good afternoon and thank you, Chairman Kiley, Ranking Member Bonamici, Members of the Subcommittee for inviting me to testify today. My name is Allyson Knox. I am Senior Director of Education and Workforce Policy at Microsoft, and I am pleased to have this opportunity to discuss issues related to artificial intelligence and its impact on teachers.

Today, I will share insights we have gathered from teachers about their experiences, challenges, and needs as they integrate AI in education; outline the steps Microsoft and other organizations are taking to facilitate this transition; and recommend legislative approaches to help policymakers strengthen these efforts. These legislative approaches include supporting professional development for teachers; encouraging public-private partnerships; promoting AI literacy; providing guidance on responsible AI use; and supporting innovation.

I would like to begin by quoting from Microsoft’s vice-chair and president, Brad Smith, in his recent foreword to Degrees of Change: What AI Means for Education and the Next Generation[i]:

“Generative AI has become the fastest-spreading technology in human history, adopted at a pace that even the most seasoned technologists could scarcely imagine. This speed is breathtaking, but it also compels us to pause and ask, ‘Are we ready for what comes next?’ AI’s promise is extraordinary. It can help solve problems that have challenged humanity for decades—improving health outcomes, advancing education, and unlocking new opportunities for economic growth. But, like every transformative technology before it, AI brings new questions and new responsibilities.”

This thought-provoking quote is apt for today’s conversation on how AI is impacting teachers. The speed of AI adoption in our nation’s schools and classrooms is indeed breathtaking. Just three years ago, AI had barely made a mark in education. However, our 2025 Study on AI in Education found that 80% of U.S. K-12 teachers have used AI in their roles or for school-related purposes at least once or twice and one-fifth report daily use of AI. Additionally, 58% of K-12 teachers think AI usage at their school/district will increase in the next year.[ii]

What we are hearing from teachers on the impact of AI:

The breadth of adoption has been profound. We have heard directly from teachers who are using AI to streamline lesson planning, curriculum development, and personalize student learning in ways that were unimaginable a few years ago.[iii] AI is also reducing the time it takes to carry out administrative tasks, allowing more time for teachers to focus on their students.

Despite these benefits, we know teachers face challenges when it comes to AI in the classroom. We found roughly one in three teachers lack confidence in using AI effectively and responsibly. Many teachers also express concerns about how AI can exacerbate cheating and are worried about issues such as data privacy and student safety.

Teachers know AI is here to stay, and based upon countless surveys, forums, and focus groups, teachers are ready to tackle these challenges and ask for support in three main areas:

  1. AI literacy – Teachers want the skills, knowledge, and support to build AI literacy and critical thinking in their students;
  2. AI guardrails – Teachers want students to use AI responsibly and safely; and
  3. AI tools – Teachers want classroom-ready AI tools and opportunities to provide feedback that improve them.

I’m excited to share a few ways Microsoft, along with many of our partners, are committed to providing teachers with the support they are requesting.

1. AI literacy – Teachers want the skills, knowledge, and support to build AI literacy and critical thinking in their students

At the core of this support is listening to and learning from teachers and understanding what they want and need to become AI literate themselves and teach AI literacy to their students. These conversations have resulted in exciting initiatives, including the recent launch of the Microsoft Elevate for Educators program, part of the company’s broader commitment[iii] to help schools and educators build skills, expand opportunities, and ensure everyone benefits from AI.

Microsoft Elevate for Educators

The Microsoft Elevate for Educators program equips educators and school leaders with access to one of the world’s largest and most connected peer educator networks and offers free professional development resources. It will provide free access to a new industry-recognized credential for educators, developed in partnership with a leading national nonprofit focused on technology and innovation (ISTE+ASCD).[vi] This partnership is aligned with the AI Literacy Framework, which is intended to help educators gain confidence and expertise in integrating AI into their teaching and learning. As part of this work, we also support ISTE+ASCD in advancing AI in teacher preparation programs.

National Academy for AI Instruction

Along with OpenAI and Anthropic, we are supporting the National Academy for AI Instruction, through a partnership with the American Federation of Teachers and the United Federation of Teachers. The Academy describes itself as a national training hub designed by educators – shaping the future of AI in public education, grounded in safety and people-first technology, and improving student learning. From everything we have heard from teachers, this is exactly the type of support they need to promote AI literacy. The Academy also focuses on building critical thinking skills for students and educators.

Rob Weil, who heads up the Academy, recently shared an update on their work with me. He noted that through direct engagement with teachers, the Academy listens to the primary concerns teachers have around using AI in the classroom, and then works with them to design trainings that are directly responsive to those concerns and meet them where they are – including using whatever technology they are already using in their classrooms.

Their goal is to train 400,000 teachers over the next 5 years. The Academy is centered around a “train the trainer” model, building capacity to provide AI literacy to teachers at scale – potentially allowing millions of teachers to benefit from this initiative. Weil noted that interest and participation in the Academy has been taking off, largely due to word of mouth. This month, 1,000 teachers showed up for a virtual session, and another in-person session was oversubscribed and had to turn away a hundred interested teachers.

Why the interest? Teachers want to learn from their peers and trusted partners; they also want to ensure they are using AI effectively and safely. Weil explained that one of the most popular aspects of the training is centered around the Academy’s Commonsense Guardrails for Using Advanced Technology in Schools,[v] which helps empower teachers to address the challenges they are facing in implementing AI. Some teachers describe AI as the Wild West, and this guide has helped provide a roadmap for understanding how to navigate bringing this technology into the classroom.

The trainings also provide real-world, hands-on experiences with using technology which teachers themselves are bringing to the table. At the trainings, teachers are asked what they could use the most help with and then have time to experiment with different tools to do things like start a draft of a lesson plan or an outline for a rubric – allowing them more time and flexibility to incorporate their expertise. In addition, the Academy creates opportunities for educators to influence the development of AI for schools.

Support for Special Education Teachers

We also recognize the potential that AI holds to support students with disabilities – and the need to ensure special education teachers have the support and resources to fully unlock this technology.

Recently, we launched a course to support educators in exploring how Microsoft AI tools can be thoughtfully used in special education environments to reduce administrative demands, strengthen accessibility, and support clear communication with families. Throughout the learning path, responsible use of AI, privacy, and transparency are emphasized so educators can determine when and how AI fits into their practice in ways that align with student needs and professional values.

After our engagements, we tailored our trainings to special education teachers by incorporating their direct feedback. Key topics included privacy with sensitive medical information and using AI to assist parents and caregivers in IEP meetings. We emphasized clear communication, parental inclusion, and ensuring parents understand the meeting’s goals and how best to support their children.

Finally, special education involves a collaborative team beyond just teachers, and we’ve revised our approach to address the needs of occupational therapists, physical therapists, and all other members involved in special education.

Support for Teachers in Rural America

We have found there’s a significant gap in daily AI usage by urban teachers versus their rural and suburban counterparts (39% vs. 24%).[iv] This gap underscores why ensuring AI tools, resources, and professional development are attuned to the needs of rural teachers is critical.

For the last five years, we’ve been working with the National Future Farmers of America (FFA) and agricultural science teachers to develop FarmBeats for Students and ensure it is responsive to agricultural science teachers’ needs. We engaged in an iterative process with them – collaboratively designing and building curriculum and training with agricultural science teachers from the very beginning of development.

FarmBeats for Students brings AI to agricultural education through a hands-on educational program that brings precision agriculture directly into the classroom. The program consists of an affordable hardware kit and a free curriculum aligned with rigorous educational standards. Activities give students direct experience with topics like digital sensors, data analysis, and AI.

We brought FarmBeats for Students to the National FFA convention and held a series of workshops with teachers across the country. They experimented with the kits and provided input to ensure this technology was directly responsive to what they wanted to see in the classroom.

In addition to our partnership with the National FFA, Microsoft helps meet the needs of rural teachers by deploying the online content referenced above through Elevate, as well as supporting community-based organizations that help facilitate activities and events which promote AI literacy in rural communities.

AI Literacy Frameworks, Standards, and Guidance

Teachers want frameworks that help them integrate AI into their classrooms. We are pleased there is bipartisan interest in establishing strong frameworks around AI and education, especially highlighting the need for widespread AI literacy. Microsoft has provided support, guidance, and input to organizations and initiatives such as Code.org and TeachAI who work to develop and promote frameworks, guidance, and standards.

Microsoft encourages state and local policymakers to review and leverage these resources as they incorporate AI in education:

  • The TeachAI Foundational Policies[vii]: This resource, endorsed by dozens of policy organizations and associations, provides practical guidance for national, state, and local leaders to harness AI’s benefits in teaching and learning while mitigating risks. The policies focus on five priorities—fostering leadership, promoting AI literacy, providing clear guidance, building educator capacity, and supporting responsible innovation—to ensure AI strengthens education systems and prepares learners for an AI‑enabled workforce.
  • The TeachAI AI Guidance for Schools Toolkit[viii]: The Toolkit helps education authorities, school leaders, and educators develop clear, responsible guidance for using AI in K–12 education, balancing potential benefits with risks such as privacy, bias, and academic integrity. It provides a practical framework, principles, sample policies, and communication templates to support safe and human‑centered AI adoption across school systems. The Toolkit has been used by the majority of states in constructing guidance for schools.
  • The AI Literacy Framework[ix]: The AI Literacy Framework defines the knowledge, skills, and attitudes students and educators need to understand, use, and critically evaluate AI in education. It is organized around four core domains—Engaging with AI, Creating with AI, Managing AI, and Designing AI—and emphasizes critical thinking, ethics, and human judgment alongside technical understanding. It also emphasizes the foundational computer science concepts that prepare students to not just use AI but understand how AI works and its societal impacts. The framework is designed to be interdisciplinary, practical, and durable, helping schools integrate AI literacy into curriculum, professional learning, and policy in age‑appropriate ways.

2. AI guardrails – Teachers want students to use AI responsibly and safely

We have heard from teachers that one of the greatest hesitations they have with AI is around safety for students. This includes ensuring AI tools used in the classroom protect student privacy, don’t collect their information, and are safe from a mental health perspective.

Some of the strategies teachers use to promote safety are a significant focus in the professional development referenced earlier. In addition, the frameworks include key components to help teachers understand responsible AI use.

Microsoft takes our responsibility as a developer and deployer of AI technology very seriously. Paramount to deploying this technology in classrooms is ensuring it is responsible. Microsoft has identified six principles that we believe should guide AI development and use.

  • Fairness: AI systems should treat all people fairly.
  • Reliability and Safety: AI systems should perform reliably and safely.
  • Privacy and Security: AI systems should be secure and respect privacy.
  • Inclusiveness: AI systems should empower everyone and engage all people.
  • Transparency: AI systems should be understandable.
  • Accountability: People should be accountable for AI systems.

These principles are the foundation for other tools and resources we share with teachers to provide guidelines for them to deploy AI in the classroom.

As another example of our commitment to safety, earlier this month, on Safer Internet Day, we launched our new Microsoft Education Security Toolkit,[x] which provides educators and IT teams with practical guidance tailored to the realities of modern education.

3. AI tools – Teachers want classroom-ready AI tools and opportunities to provide feedback that improve them

Teachers often lack the right AI tools tailored to their needs for boosting student achievement. It’s essential to develop AI solutions based on teacher input rather than just delivering generic options. Microsoft strives to meet this responsibility by designing tools and partnerships that address educators’ needs. We believe this approach creates a critical feedback loop that will allow us to constantly evolve our tools to maximize their benefit in the classroom over time.

In fact, at Microsoft, our engineering teams collaborate closely with educators and students to advance the development of AI tools for classroom use. We partner with teacher organizations and directly engage with the disability community to better understand instructional requirements and design technologies that enhance student learning outcomes. Some examples include:

Reading Progress

One of the tools we offer to teachers is called Reading Progress, which helps teachers analyze students’ fluency and generates reading passages and comprehension questions.

From the beginning of development, we worked with individual teachers through our Educator Insiders program and with entire schools or districts through our Technology Adoption Preview, where educators test prototypes of our products and provide feedback.

For example, teachers asked for a tool that could generate tailored passages to meet the needs of their students. We incorporated that feedback and now, teachers can get as specific as saying they want a passage generated about sports that is for a third-grade reading level and includes specific words their class is learning.

Teachers also told us they wanted reading comprehension questions generated faster and better. With AI, it’s easy to do this in a high-quality way.

Teachers report increased comprehension, higher reading fluency, and higher scores, especially for struggling or reluctant readers.

Teach for America (TFA)

Microsoft has been a proud supporter of TFA’s efforts to improve the education system and expand opportunities for children across the U.S. It has been great to see all of the ways in which TFA has worked to equip their teachers with AI fluency in order to help them integrate this technology into the classroom.

TFA recently completed a cloud migration to Microsoft Azure, unlocking countless avenues to improve program design and delivery, direct the most possible funds toward its mission to ensure all kids have access to an excellent education, and evolve to offer the best learning options inside and outside the classroom.

Where do we go from here

What is both exciting and daunting about AI is that while we can take lessons learned from previous technological transformations in the classroom, much of the book on AI adoption has yet to be written. This means tech companies, teachers, government, and other stakeholders have the opportunity to shape where AI goes in education and beyond.

I want to conclude my remarks today with policy recommendations for the Committee to consider:

  • Support professional development for teachers to effectively teach about AI and responsibly integrate AI tools in the classroom.
    • At the Federal level, this means providing priorities for competitive grant programs, such as those recently proposed by the U.S. Department of Education.
  • Encourage public-private partnerships.
    • Incentivize and prioritize Federal funds and grants that support partnerships between technology companies and educational programs, including apprenticeship and credentialed organizations, to develop up-to-date AI curriculum.
  • Promote AI literacy across the U.S.
    • Integrate AI skills and concepts, including their foundational principles, social impacts, and ethical concerns, into existing curriculum and instruction.
  • Provide guidance.
    • Equip schools with guidance on the safe, effective, and responsible use of AI, including considerations related to student privacy, data security, accessibility, transparency, and appropriate human oversight.
  • Invest in innovation.
    • Support research and evaluation to better understand the impacts of AI in education, including its effects on teaching and learning and student outcomes, and to identify effective, scalable practices that mitigate the digital divide.

 

[i] Smith, Brad. “Foreword.” Degrees of Change: What AI Means for Education and the Next Generation, by Juan M. Lavista Ferres, John Wiley & Sons, 2026.
[ii] See Microsoft 2025 AI in Education Survey Details, August 2025
[iii] See Microsoft 2025 AI in Education Survey Details, August 2025
[iv] See Microsoft Elevate: Putting people first, July 2025
[v] See Commonsense Guardrails for Using Advanced Technology in Schools, March 2025
[vi] See Microsoft 2025 AI in Education Survey Details, August 2025
[vii] See TeachAI Foundational Policies
[viii] See TeachAI AI Guidance for Schools Toolkit
[ix] See AI Literacy Framework
[x] See Microsoft Education Security Toolkit, February 2026

[1] ISTE (International Society for Technology in Education) + ASCD (Association for Supervision and Curriculum Development)

 

The post Building an AI-Ready America: Teaching in the AI age appeared first on Microsoft On the Issues.

Beyond Login Screens: Why Access Control Matters

By: Sucuri
7 February 2026 at 04:01
As breach costs go up and attackers focus on common web features like dashboards, admin panels, customer portals, and APIs, weak access control quickly leads to lost data, broken trust, and costly incidents. The worst part is that many failures are not rare technical flaws but simple mistakes, such as missing permission checks, roles with too much power, or predictable IDs in URLs.
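These mistakes are easy to see in miniature. The sketch below (all names are hypothetical, not from the Sucuri post) shows the two fixes most often missing: an ownership check on every read, and unguessable random IDs in place of the sequential, predictable IDs that invite enumeration:

```python
import secrets

# Hypothetical in-memory store; a real application would use a database.
DOCUMENTS = {}

def create_document(owner_id, body):
    # Use an unguessable random ID rather than a sequential counter,
    # so /docs/1, /docs/2, ... cannot simply be enumerated.
    doc_id = secrets.token_urlsafe(16)
    DOCUMENTS[doc_id] = {"owner": owner_id, "body": body}
    return doc_id

def read_document(doc_id, requester_id):
    doc = DOCUMENTS.get(doc_id)
    if doc is None:
        raise KeyError("no such document")
    # The permission check many apps forget: knowing a valid ID
    # is not the same as being authorized to read the record.
    if doc["owner"] != requester_id:
        raise PermissionError("requester does not own this document")
    return doc["body"]
```

Random IDs alone are not access control; the explicit ownership check is what actually enforces it, and it must run on every request.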

This post aims to help you control who can access different parts of your website and explain why it matters. 


So, You’ve Hit an Age Gate. What Now?

14 January 2026 at 18:08


This blog also appears in our Age Verification Resource Hub: our one-stop shop for users seeking to understand what age-gating laws actually do, what’s at stake, how to protect yourself, and why EFF opposes all forms of age verification mandates. Head to EFF.org/Age to explore our resources and join us in the fight for a free, open, private, and yes—safe—internet.

EFF is against age gating and age verification mandates, and we hope we’ll win in getting existing ones overturned and new ones prevented. But mandates are already in effect, and every day many people are asked to verify their age across the web, despite prominent cases of sensitive data getting leaked in the process.

At some point, you may have been faced with the decision yourself: should I continue to use this service if I have to verify my age? And if so, how can I do that with the least risk to my personal information? This is our guide to navigating those decisions, with information on what questions to ask about the age verification options you’re presented with, and answers to those questions for some of the most popular social media sites. Even though there’s no way to implement mandated age gates in a way that fully protects speech and privacy rights, our goal here is to help you minimize the infringement of your rights as you manage this awful situation.

Follow the Data

Since we know that leaks happen despite the best efforts of software engineers, we generally recommend submitting the absolute least amount of data possible. Unfortunately, that’s not going to be possible for everyone. Even facial age estimation solutions where pictures of your face never leave your device, offering some protection against data leakage, are not a good option for all users: facial age estimation works less well for people of color, trans and nonbinary people, and people with disabilities. There are some systems that use fancy cryptography so that a digital ID saved to your device won’t tell the website anything more than whether you meet the age requirement, but access to that digital ID isn’t available to everyone or for all platforms. You may also not want to register for a digital ID and save it to your phone if you don’t want to risk all the information on it being exposed at the request of an over-zealous verifier, or if you simply don’t want to be part of a digital ID system.
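The idea behind those minimal-disclosure digital ID systems can be sketched in a few lines. This is not how any real scheme is implemented (production systems use public-key signatures or zero-knowledge proofs; the shared HMAC key and all names here are stand-in assumptions), but it shows the core property: the issuer attests to a single boolean claim, and the website verifying it never sees a name or date of birth.

```python
import hashlib
import hmac
import json

# Stand-in for a real issuer signing key; a production scheme would use
# public-key signatures so verifiers never hold signing material.
ISSUER_KEY = b"demo-issuer-key"

def issue_attestation(over_18):
    # The issuer (e.g. an ID wallet on your device) signs ONLY the
    # boolean claim; name and date of birth never leave the device.
    claim = json.dumps({"over_18": over_18}).encode()
    tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}

def verify_attestation(att):
    # The website learns one bit: does a valid attestation say over 18?
    expected = hmac.new(ISSUER_KEY, att["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, att["tag"]):
        return False  # tampered or forged attestation
    return json.loads(att["claim"])["over_18"] is True
```

The point of the design is data minimization: even if the verifying site is breached, there is no ID document or birthdate to leak, only a yes/no token.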

If you’re given the option of selecting a verification method and are deciding which to use, we recommend considering the following questions for each process allowed by each vendor:

    • Data: What info does each method require?
    • Access: Who can see the data during the course of the verification process?
    • Retention: Who will hold onto that data after the verification process, and for how long?
    • Audits: How sure are we that the stated claims will happen in practice? For example, are there external audits confirming that data is not accidentally leaked to another site along the way? Ideally these will be in-depth, security-focused audits by specialized auditors like NCC Group or Trail of Bits, instead of audits that merely certify adherence to standards. 
    • Visibility: Who will be aware that you’re attempting to verify your age, and will they know which platform you’re trying to verify for?

We attempt to provide answers to these questions below. To begin, there are two major factors to consider when answering these questions: the tools each platform uses, and the overall system those tools are part of.

In general, most platforms offer age estimation options like face scans as a first line of age assurance. These vary in intrusiveness, but their main problem is inaccuracy, particularly for marginalized users. Third-party age verification vendors Private ID and k-ID offer on-device facial age estimation, but another common vendor, Yoti, sends the image to their servers during age checks by some of the biggest platforms. This risks leaking the images themselves, and also the fact that you’re using that particular website, to the third party. 

Then there are the document-based verification services, which require you to submit a hard identifier like a government-issued ID. This method thus requires you to prove both your age and your identity. A platform can do this in-house through a designated dataflow, or by sending that data to a third party. We’ve already seen examples of how this can fail. For example, Discord routed users' ID data through its general customer service workflow so that a third-party vendor could perform manual review of verification appeals. No one involved ever deleted users' data, so when the system was breached, Discord had to apologize for the catastrophic disclosure of nearly 70,000 photos of users' ID documents. Overly long retention periods expose documents to risk of breaches and historical data requests, and some document verifiers have retention periods that are needlessly long. This is the case with Incode, which provides ID verification for TikTok. Incode holds onto images forever by default, though TikTok should automatically start the deletion process on your behalf.

Some platforms offer alternatives, like proving that you own a credit card, or asking for your email to check if it appears in databases associated with adulthood (like home mortgage databases). These tend to involve less risk when it comes to the sensitivity of the data itself, especially since credit cards can be replaced, but in general still undermine anonymity and pseudonymity and pose a risk of tracking your online activity. We’d prefer to see more assurances across the board about how information is handled.

Each site offers users a menu of age assurance options to choose from. We’ve chosen to present these options in the rough order that we expect most people to prefer. Jump directly to a platform to learn more about its age checks:

Meta – Facebook, Instagram, WhatsApp, Messenger, Threads

Inferred Age

If Meta can guess your age, you may never even see an age verification screen. Meta, which runs Facebook, Threads, Instagram, Messenger, and WhatsApp, first tries to use information you’ve posted to guess your age, like looking at “Happy birthday!” messages. It’s a creepy reminder that they already have quite a lot of information about you.

If Meta cannot guess your age, or if Meta infers you're too young, it will next ask you to verify your age using either facial age estimation, or by uploading your photo ID. 

Face Scan

If you choose to use facial age estimation, you’ll be sent to Yoti, a third-party verification service. Your photo will be uploaded to their servers during this process. Yoti claims that “as soon as an age has been estimated, the facial image is immediately and permanently deleted.” Though it’s not as good as not having that data in the first place, Yoti’s security measures include a bug bounty program and annual penetration testing. However, researchers from Mint Secure found that Yoti’s app and website are filled with trackers, so the fact that you’re verifying your age could be leaked not only to Yoti, but to third-party data brokers as well.

You may not want to use this option if you’re worried about third parties potentially being able to know you’re trying to verify your age with Meta. You also might not want to use this if you’re worried about a current picture of your face accidentally leaking—for example, if elements in the background of your selfie might reveal your current location. On the other hand, if you consider a selfie to be less sensitive than a photograph of your ID, this option might be better. If you do choose (or are forced) to use the face check system, be sure to snap your selfie without anything in the background that could identify your location or embarrass you if the image leaks.

Upload ID

If Yoti’s age estimation decides your face looks too young, or if you opt out of facial age estimation, your next recourse is to send Meta a photo of your ID. Meta sends that photo to Yoti to verify the ID. Meta says it will hold onto that ID image for 30 days, then delete it. Meanwhile, Yoti claims it will delete the image immediately after verification. Of course, bugs and process oversights exist, such as accidentally replicating information in logs or support queues, but at least they have stated processes. Your ID contains sensitive information such as your full legal name and home address. Using this option not only runs the (hopefully small, but never nonexistent) risk of that data getting leaked through errors or hacking, but it also lets Meta see the information needed to tie your profile to your identity—which you may not want. If you don’t want Meta to know your name and where you live, or don’t want to rely on both Meta and Yoti to keep to their deletion promises, this option may not be right for you.

Google – Gmail, YouTube 

Inferred Age

If Google can guess your age, you may never even see an age verification screen. Your Google account is typically connected to your YouTube account, so if (like mine) your YouTube account is old enough to vote, you may not need to verify your Google account at all. Google first uses information it already knows to try to guess your age, like how long you’ve had the account and your YouTube viewing habits. It’s yet another creepy reminder of how much information these corporations have on you, but at least in this case they aren’t likely to ask for even more identifying data.

If Google cannot guess your age, or decides you're too young, Google will next ask you to verify your age. You’ll be given a variety of options for how to do so, with availability that will depend on your location and your age.

Google’s methods to assure your age include ID verification, facial age estimation, verification by proxy, and digital ID. To prove you’re over 18, you may be able to use facial age estimation, give Google your credit card information, or tell a third-party provider your email address.

Face Scan

If you choose to use facial age estimation, you’ll be sent to a website run by Private ID, a third-party verification service. The website will load Private ID’s verifier within the page—this means that your selfie will be checked without any images leaving your device. If the system decides you’re over 18, it will let Google know that, and only that. Of course, no technology is perfect—should Private ID be mandated to target you specifically, there’s nothing to stop it from sending down code that does in fact upload your image, and you probably won’t notice. But unless your threat model includes being specifically targeted by a state actor or Private ID, that’s unlikely to be something you need to worry about. For most people, no one else will see your image during this process. Private ID will, however, be told that your device is trying to verify your age with Google and Google will still find out if Private ID thinks that you’re under 18.

If Private ID’s age estimation decides your face looks too young, you may next be able to decide if you’d rather let Google verify your age by giving it your credit card information, photo ID, or digital ID, or by letting Google send your email address to a third-party verifier.

Email Usage

If you choose to provide your email address, Google sends it on to a company called VerifyMy. VerifyMy will use your email address to see if you’ve done things like get a mortgage or paid for utilities using that email address. If you use Gmail as your email provider, this may be a privacy-protective option with respect to Google, as Google will then already know the email address associated with the account. But it does tell VerifyMy and its third-party partners that the person behind this email address is looking to verify their age, which you may not want them to know. VerifyMy uses “proprietary algorithms and external data sources” that involve sending your email address to “trusted third parties, such as data aggregators.” It claims to “ensure that such third parties are contractually bound to meet these requirements,” but you’ll have to trust it on that one—we haven’t seen any mention of who those parties are, so you’ll have no way to check up on their practices and security. On the bright side, VerifyMy and its partners do claim to delete your information as soon as the check is completed.

Credit Card Verification

If you choose to let Google use your credit card information, you’ll be asked to set up a Google Payments account. Note that debit cards won’t be accepted, since it’s much easier for many debit cards to be issued to people under 18. Google will then charge a small amount to the card, and refund it once it goes through. If you choose this method, you’ll have to tell Google your credit card info, but the fact that it’s done through Google Payments (their regular card-processing system) means that at least your credit card information won’t be sitting around in some unsecured system. Even if your credit card information happens to accidentally be leaked, this is a relatively low-risk option, since credit cards come with solid fraud protection. If your credit card info gets leaked, you should easily be able to dispute fraudulent charges and replace the card.

Digital ID

If the option is available in your region, you may be able to use your digital ID to verify your age with Google. In some implementations, using a digital ID reveals only your age information and nothing else; if you’re given that choice, it can be a good privacy-preserving option. Depending on the implementation, though, there’s a chance that the verification step will “phone home” to the ID provider (usually a government) to let them know the service asked for your age. It’s a complicated and varied topic that you can learn more about by visiting EFF’s page on digital identity.

Upload ID

Should none of these options work for you, your final recourse is to send Google a photo of your ID. Here, you’ll be asked to take a photo of an acceptable ID and send it to Google. Though the help page only states that your ID “will be stored securely,” the verification process page says ID “will be deleted after your date of birth is successfully verified.” Acceptable IDs vary by country, but are generally government-issued photo IDs. We like that it’s deleted immediately, though we have questions about what Google means when it says your ID will be used to “improve [its] verification services for Google products and protect against fraud and abuse.” No system is perfect, and we can only hope that Google schedules outside audits regularly.

TikTok

Inferred Age

If TikTok can guess your age, you may never even see an age verification notification. TikTok first tries to use information you’ve posted to estimate your age, looking through your videos and photos to analyze your face and listen to your voice. By uploading any videos, TikTok believes you’ve given it consent to try to guess how old you look and sound.

If TikTok cannot guess your age, or decides you're too young, it will automatically revoke your access based on age—including either restricting features or deleting your account. To get your access and account back, you’ll have a limited amount of time to verify your age or appeal TikTok’s age decision. As soon as you see the notification that your account is restricted, act fast: in some places you’ll have as little as 23 days before the deadline passes.

When you get that notification, you’re given various options to verify your age based on your location.

Face Scan

If you’re given the option to use facial age estimation, you’ll be sent to Yoti, a third-party verification service. Your photo will be uploaded to their servers during this process. Yoti claims that “as soon as an age has been estimated, the facial image is immediately and permanently deleted.” Though it’s not as good as not having that data in the first place, Yoti’s security measures include a bug bounty program and annual penetration testing. However, researchers from Mint Secure found that Yoti’s app and website are filled with trackers, so the fact that you’re verifying your age could be leaked not only to Yoti, but to third-party data brokers as well.

You may not want to use this option if you’re worried about third parties potentially being able to know you’re trying to verify your age with TikTok. You also might not want to use this if you’re worried about a current picture of your face accidentally leaking—for example, if elements in the background of your selfie might reveal your current location. On the other hand, if you consider a selfie to be less sensitive than a photograph of your ID or your credit card information, this option might be better. If you do choose (or are forced) to use the face check system, be sure to snap your selfie without anything in the background that could identify your location or embarrass you if the image leaks.

Credit Card Verification

If you have a credit card in your name, TikTok will accept that as proof that you’re over 18. Note that debit cards won’t be accepted, since it’s much easier for many debit cards to be issued to people under 18. TikTok will charge a small amount to the credit card, and refund it once it goes through. It’s unclear if this goes through their regular payment process, or if your credit card information will be sent through and stored in a separate, less secure system. Luckily, these days credit cards come with solid fraud protection, so if your credit card gets leaked, you should easily be able to dispute fraudulent charges and replace the card. That said, we’d rather TikTok provide assurances that the information will be processed securely.

Credit Card Verification of a Parent or Guardian

Sometimes, if you’re between 13 and 17, you’ll be given the option to let your parent or guardian confirm your age. You’ll tell TikTok their email address, and TikTok will send your parent or guardian an email asking them (a) to confirm your date of birth, and (b) to verify their own age by proving that they own a valid credit card. This option doesn’t always seem to be offered, and in the one case we could find, it’s possible that TikTok never followed up with the parent. So it’s unclear how or if TikTok verifies that the adult whose email you provide is your parent or guardian. If you want to use credit card verification but you’re not old enough to have a credit card, and you’re ok with letting an adult know you use TikTok, this option may be reasonable to try.

Photo with a Random Adult?

Bizarrely, if you’re between 13 and 17, TikTok claims to offer the option to take a photo with literally any random adult to confirm your age. Its help page says that any trusted adult over 25 can be chosen, as long as they’re holding a piece of paper with the code on it that TikTok provides. It also mentions that a third-party provider is used here, but doesn’t say which one. We haven’t found any evidence of this verification method being offered. Please do let us know if you’ve used this method to verify your age on TikTok!

Photo ID and Face Comparison

If you aren’t offered or have failed the other options, you’ll have to verify your age by submitting a copy of your ID and a matching photo of your face. You’ll be sent to Incode, a third-party verification service. In a disappointing failure to meet the industry standard, Incode itself doesn’t automatically delete the data you give it once the process is complete, but TikTok does claim to “start the process to delete the information you submitted,” which should include telling Incode to delete your data once the process is done. If you want to be sure, you can ask Incode to delete that data yourself. Incode tells TikTok that you met the age threshold without providing your exact date of birth, but then TikTok wants to know the exact date anyway, so it’ll ask for your date of birth even after your age has been verified.

TikTok itself might not see your actual ID depending on its implementation choices, but Incode will. Your ID contains sensitive information such as your full legal name and home address. Using this option runs the (hopefully small, but never nonexistent) risk of that data getting accidentally leaked through errors or hacking. If you don’t want TikTok or Incode to know your name, what you look like, and where you live—or if you don't want to rely on both TikTok and Incode to keep to their deletion promises—then this option may not be right for you.

Everywhere Else

We’ve covered the major providers here, but age verification is unfortunately being required of many other services that you might use as well. While the providers and processes may vary, the same general principles will apply. If you’re trying to choose what information to provide to continue to use a service, consider the “follow the data” questions mentioned above, and try to find out how the company will store and process the data you give it. The less sensitive the information, the fewer people who have access to it, and the more quickly it will be deleted, the better. You may even come to recognize popular names in the age verification industry: Spotify and OnlyFans use Yoti (just like Meta and TikTok), Quora and Discord use k-ID, and so on.

Unfortunately, it should be clear by now that none of the age verification options are perfect in terms of protecting information, providing access to everyone, and safely handling sensitive data. That’s just one of the reasons that EFF is against age-gating mandates, and is working to stop and overturn them across the United States and around the world.



Surveillance Self-Defense: 2025 Year in Review

2 January 2026 at 07:48

Our Surveillance Self-Defense (SSD) guides, which provide practical advice and explainers for how to deal with government and corporate surveillance, had a big year. We published several large updates to existing guides and released three all new guides. And with frequent massive protests across the U.S., our guide to attending a protest remained one of the most popular guides of the year, so we made sure our translations were up to date.

(Re)learn All You Need to Know About Encryption

We started this year by taking a deep look at our various encryption guides, which start with the basics before moving up to deeper concepts. We slimmed each guide down and tried to focus on making them as clear and concise as deep explainers on complicated topics can be. We reviewed and edited four guides in total:

And if you’re not sure where to start, we got you covered with the new Interested in Encryption? playlist.

New Guides

We launched three new guides this year, including iPhone and Android privacy guides, which walk you through all the various privacy options of your phone. Both of these guides received a handful of updates throughout their first year as new features were released or, in the case of the iPhone, a new design language was introduced. These also got a fun little boost from a segment on "Last Week Tonight with John Oliver" telling people how to disable their phone’s advertising identifier.

We also launched our How to: Manage Your Digital Footprint guide. This guide is designed to help you claw back some of the data you may find about yourself online, walking through different privacy options across different platforms, digging up old accounts, removing yourself from people search sites, and much more.

Always Be Updating

As is the case with most software, there is always incremental work to do. This year, that meant small updates to our WhatsApp and Signal guides to acknowledge new features (both are already on deck for similar updates early next year as well). 

We overhauled our device encryption guides for Windows, Mac, and Linux, rolling what was once three guides into one, and including more detailed guidance on how to handle recovery keys. Some slight changes to how this works on both Windows and Mac means this one will get another look early next year as well.

Speaking of rolling multiple guides into one, we did the same with our guidance for the Tor browser: what once lived across three guides now lives as one that covers all the major desktop platforms (the mobile guide remains separate).

The password manager guide saw some small changes to note some new features with Apple and Chrome’s managers, as well as some new independent security audits. Likewise, the VPN guide got a light touch to address the TunnelVision security issue.

Finally, the secure deletion guide got a much-needed update after years of dormancy. With the proliferation of solid state drives (SSDs, not to be confused with SSD), not much has changed in the secure deletion space, but we did move our guidance for SSDs to the top of the guide to make it easier to find, while acknowledging that many people around the world still only have access to a computer with a spinning disk drive.

Translations

As always, we worked on translations for these updates. We’re very close to a point where every current SSD guide is updated and translated into Arabic, French, Mandarin, Portuguese, Russian, Spanish, and Turkish.

And with the help of Localization Lab, we also now have translations for a handful of the most important guides in Changana, Mozambican Portuguese, Ndau, Luganda, and Bengali.

Blogs Blogs Blogs

Sometimes we take our SSD-like advice and blog it so we can respond to news events or talk about more niche topics. This year, we blogged about new features, like WhatsApp’s “Advanced Chat Privacy” and Google’s "Advanced Protection.” We also broke down the differences between how different secure chat clients handle backups and pushed for expanding encryption on Android and iPhone.

We fight for more privacy and security every day of every year, but until we get that, stronger control of our data and a better understanding of how technology works are our best defenses.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

How to Protect Your Site From Content Sniffing with HTTP Security Headers

19 December 2025 at 00:58

Ever had a perfectly “safe” page or file turn into an attack vector out of nowhere? That can happen when browsers start guessing what your content is instead of listening to your server. Browsers sometimes try to figure out what kind of file they’re dealing with if the server doesn’t provide the Content-Type header or provides the wrong one, a process known as “content sniffing.” While this can be helpful, content sniffing is a security risk if an attacker can mess with the content.
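The defense the post builds toward is to declare content types explicitly and tell the browser not to second-guess them via the `X-Content-Type-Options: nosniff` header. As a minimal sketch of that header logic (ours, not the post’s code; the file names are hypothetical):

```python
# Minimal sketch: send an explicit Content-Type plus the nosniff header
# so the browser never falls back to content sniffing.
import mimetypes

def response_headers(path: str) -> dict:
    """Build response headers that tell the browser exactly what it's getting."""
    content_type, _ = mimetypes.guess_type(path)
    return {
        # Always declare a Content-Type; fall back to a generic binary type
        # rather than leaving the header out and inviting the browser to guess.
        "Content-Type": content_type or "application/octet-stream",
        # "nosniff" instructs the browser to trust the declared Content-Type
        # instead of inspecting the body to decide how to render it.
        "X-Content-Type-Options": "nosniff",
    }

print(response_headers("report.html")["Content-Type"])            # text/html
print(response_headers("report.html")["X-Content-Type-Options"])  # nosniff
```

The key design point is the fallback: a wrong-but-explicit `application/octet-stream` is safer than no header at all, because with `nosniff` set the browser will download the file rather than guess it might be executable HTML or script.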

Continue reading How to Protect Your Site From Content Sniffing with HTTP Security Headers at Sucuri Blog.

How to Protect Your WordPress Site From a Phishing Attack

13 December 2025 at 08:36

If you run a website, manage a business inbox, or even just use online banking, you’ve already lived in the phishing era for a long time. The only thing that’s changed is the polish.

Phishing scams have moved past those obviously fake “please verify” requests to include convincing login pages, realistic invoices, and even bogus delivery updates. Some are mass-sent and easy to spot; others are customized precisely for the person they’re targeting: their job, company, tech, and everyday apps.

Continue reading How to Protect Your WordPress Site From a Phishing Attack at Sucuri Blog.

A Beginner’s Guide to the CVE Database

20 November 2025 at 02:47

Keeping websites and applications secure starts with knowing which vulnerabilities exist, how severe they are, and whether they affect your stack. That’s exactly where the CVE program shines. Below, we’ll cover some CVE fundamentals, including what they are, how to search and understand the data, and how to translate this information into actionable steps.

Introduction to the CVE database
So, what is CVE?

CVE stands for Common Vulnerabilities and Exposures, a community-driven program that assigns unique identifiers to publicly known vulnerabilities.
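Each identifier follows a fixed pattern: the literal prefix CVE, a four-digit year, and a sequence number of four or more digits (for example, CVE-2021-44228). A quick sketch (ours, not from the post) of checking that pattern:

```python
# Sketch: validating the CVE identifier format CVE-YYYY-NNNN,
# where the sequence number is four or more digits.
import re

CVE_ID_RE = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_valid_cve_id(cve_id: str) -> bool:
    """Return True if the string matches the CVE identifier format."""
    return bool(CVE_ID_RE.match(cve_id))

print(is_valid_cve_id("CVE-2021-44228"))  # True  (Log4Shell)
print(is_valid_cve_id("CVE-21-1"))        # False (year and sequence too short)
```

Note that the format only guarantees the string is well-formed; whether the record actually exists still requires a lookup against the CVE database itself.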

Continue reading A Beginner’s Guide to the CVE Database at Sucuri Blog.

How to Fix the ERR_TOO_MANY_REDIRECTS Error

13 November 2025 at 22:10

Encountering the ERR_TOO_MANY_REDIRECTS error (also called a redirect loop error) can be frustrating, especially when your website was working fine just moments ago. This issue is common across browsers such as Chrome, Firefox, and Edge, and it typically means your site has entered a redirection loop.

In this post, you’ll learn what the error means, why it occurs, ways to identify where the redirect is coming from, and how to fix it effectively – including an important section on redirect types, which often play a direct role in causing this issue.
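The loop itself is easy to see once you follow each Location header by hand. Here is an illustrative sketch (ours, not from the post; the URLs and redirect map are hypothetical stand-ins for real HTTP responses) of that diagnosis: walk the redirect chain and stop as soon as a URL repeats:

```python
# Sketch: follow a chain of redirects and detect a loop.
# `redirects` maps a URL to the Location header its response would send.
from typing import Optional

def find_redirect_loop(start: str, redirects: dict) -> Optional[list]:
    """Return the chain of URLs ending where it loops, or None if no loop."""
    seen = []
    url = start
    while url in redirects:
        if url in seen:
            return seen + [url]  # loop: we've already visited this URL
        seen.append(url)
        url = redirects[url]
    return None  # chain reaches a non-redirecting URL

# A classic misconfiguration: HTTP redirects to HTTPS, but the origin
# server behind a proxy redirects HTTPS straight back to HTTP.
loop = find_redirect_loop(
    "http://example.com/",
    {
        "http://example.com/": "https://example.com/",
        "https://example.com/": "http://example.com/",
    },
)
print(loop)  # chain of three URLs, looping back to the start
```

Browsers do essentially this with a fixed hop limit (around 20 redirects) instead of tracking visited URLs, which is why the error only surfaces after a visible delay.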

Continue reading How to Fix the ERR_TOO_MANY_REDIRECTS Error at Sucuri Blog.

Six out of 10 UK secondary schools hit by cyber-attack or breach in past year

Hackers are more likely to target educational institutions than private businesses, government survey shows

When hackers attacked UK nurseries last month and published children’s data online, they were accused of hitting a new low.

But the broader education sector is well used to being a target.

Continue reading...

© Photograph: MBI/Alamy


The Courage to Learn

By: BHIS
18 April 2016 at 15:52

Sierra Ward // Last year I listened to a podcast* from Freakonomics that has stuck with me – in fact, I think it’s changed the way I think – powerful stuff […]

The post The Courage to Learn appeared first on Black Hills Information Security, Inc..

Warning: This Post Contains Macros

By: BHIS
11 February 2016 at 22:45

Lisa Woody // On the 23rd of December, a cyber attack left hundreds of thousands of people in the Ukrainian region of Ivano-Frankivsk without power. This was the first confirmed […]

The post Warning: This Post Contains Macros appeared first on Black Hills Information Security, Inc..
