Normal view
Wave of Citrix NetScaler scans use thousands of residential proxies
Apple voegt integratie van AI-codingagents toe aan softwareontwikkelpakket Xcode
CISA flags critical SolarWinds RCE flaw as exploited in attacks
Fitbit-oprichters komen met app voor monitoren van gezondheid van familie
-
The Register – Security
- Critical React Native Metro dev server bug under attack as researchers scream into the void
Critical React Native Metro dev server bug under attack as researchers scream into the void
Too slow react-ion time
Baddies are exploiting a critical bug in React Native's Metro development server to deliver malware to both Windows and Linux machines, and yet the in-the-wild attacks still haven't received the "broad public acknowledgement" that they should, according to security researchers.…
Netflix stopt met ondersteuning van PlayStation 3-app
-
Malware-Traffic-Analysis.net - Blog Entries
- 2026-02-03: GuLoader for AgentTesla style malware with FTP data exfiltration
Empowering the RAF Association with Next-Generation Cyber Resilience
Palo Alto Networks is proud to enter a strategic partnership with the RAF Association.
For over 90 years, the Royal Air Forces Association (RAFA) has championed a simple yet profound belief: No member of the RAF community should ever be left without the help they need. Serving personnel, veterans and their families, the RAF Association provides crucial welfare support, responding to increasingly complex needs in an era of operational changes and challenges, including persistent global deployment.
Delivering on their mission today requires not only compassion and expertise but also resilient digital foundations. To strengthen and future-proof its operations, RAFA has entered into a strategic partnership with Palo Alto Networks. Together, we are modernising the Association's cyber security posture through a secure-by-design, zero trust architecture to enhance organisational resilience, secure sensitive beneficiary data, and improve operational agility. This helps ensure they can focus on their mission of support, not security management.
As Nick Bunting OBE, Secretary General at the RAF Association, puts it:
Cybersecurity is essential to safeguarding the trust people place in our organisation. This transformation will give us greater protection for our data and systems, ensuring that our services remain dependable and that our organisation is secure, resilient and ready for the future. Strong digital security is not just a technical requirement, it is a fundamental part of how we uphold our duty of care to every individual who relies on us.

Securing the Mission
The RAF Association operates in a distributed environment comprising headquarters’ functions, remote caseworkers, and more than 20 RAFAKidz nursery sites, supported by a growing portfolio of cloud-based services. In this context, cybersecurity is not simply an IT concern. It is a safeguarding imperative.
Disruption to systems or a compromise of sensitive beneficiary data could directly impact RAFA’s ability to deliver services and maintain the trust of the communities it supports. By consolidating fragmented legacy tools into a unified platform, this partnership ensures the Association’s digital evolution aligns security controls with GDPR obligations and safeguarding requirements.
Digital Resilience with a Unified Platform for Visibility and Control
To support RAFA's lean IT operational model, this transformation will move them away from fragmented legacy tools toward a unified platform approach. The deployment of Prisma® SASE (secure access service edge) and Cortex XDR® will provide RAFA with consistent visibility and control across users, devices, applications and data, regardless of location. This consolidation replaces complexity with clarity, allowing the organisation to inspect traffic for threats in real-time. Security policies are now enforced continuously, threats are detected and contained faster, and access to critical systems is governed by zero trust principles without compromising the user experience.
As Phil Sherwin, Chief Information Officer, at the RAF Association states:
Our data is one of our most valuable assets and the protection of that data, as we continue to provide life-changing support to members of the RAF community, is our most important priority. This partnership will move us into the next generation of security tools that adopt zero trust principles and is a crucial step on our journey to providing a layered approach to data protection.
One of the most critical aspects of this modernisation is supporting RAFA’s diverse workforce, particularly within the RAFAKidz nursery sites. These environments rely on nondesk-based staff using iPads and mobile devices to get their critical work done.
Using zero touch provisioning and the Prisma Browser™, we are enabling secure, seamless connectivity for unmanaged devices. This ensures that nursery staff can access necessary SaaS applications safely without complex login hurdles or manual configuration, improving their agility and allowing them to focus on caring for children rather than troubleshooting technology.
Creating Operational Advantage by Scaling Operations with AI and Automation
As a charity, RAFA has a responsibility to ensure resources are used efficiently. A critical goal of this partnership is to improve productivity and allow the organisation to scale its services without increasing the IT burden.
By adopting Strata™ Cloud Manager with AIOps (artificial intelligence for IT operations), RAFA is shifting from reactive security operations to proactive, automated management. Machine learning helps identify configuration risks and performance issues before they affect users, while standardized policies enable the secure, consistent onboarding of new sites. This shift is projected to significantly reduce operational overhead, enabling RAFA to scale its support network cost-effectively. This shift is projected to reduce operational overhead by 40–50%.
A Resilient Future
This partnership is about more than deploying technology. It is about ensuring RAFA remains resilient, trusted and capable of supporting the RAF community for decades to come.
As Darren Bisbey, Head of Group Information Security for the RAF Association, puts it:
We live in an era where digital threats are accelerating in both scale and sophistication, creating unprecedented challenges for organisations. Our partnership with Palo Alto is a statement of intent, reflecting our unwavering commitment to building the most secure environments possible for our data.
At Palo Alto Networks, we are honored to support RAFA in this journey, providing the digital armour and operational advantage necessary to protect those who serve and have served.
As Alistair Wildman, Palo Alto Networks CEO for Northern Europe states:
For over 90 years, RAFA has been a lifeline for the RAF community; it is our privilege to ensure that legacy endures in a digital-first world. By embracing a unified, AI-driven platform, RAFA is moving beyond complex, fragmented security to a posture that is Secure by Design. This partnership allows them to navigate today’s threat landscape with confidence, ensuring their resources remain focused where they belong: on the families who need them.
Key Takeaways
- Digital Resilience – Strategic Shift to Zero Trust Architecture: RAFA is modernizing its cybersecurity posture by implementing a comprehensive zero trust architecture. This transition involves moving from fragmented legacy tools to a unified platform approach, deploying Prisma® SASE and Cortex XDR for 360-degree visibility and complete control over access and traffic.
- Interoperability – Secure, Seamless Access for Diverse Workforce: The partnership ensures operational agility by simplifying security for nondesk-based staff, particularly at the RAFAKidz nursery sites. Solutions like Zero-Touch Provisioning and the Prisma Access Browser enable secure, seamless connectivity for unmanaged devices, allowing nursery staff to focus on their critical work without complex login or configuration issues.
- Creating Operational Advantage – Efficiency and Scalability through AI and Automation: RAFA is leveraging technology to scale services efficiently and reduce operational overhead. By using Strata Cloud Manager with AIOps (Artificial Intelligence for IT Operations), the organization can shift to proactive management and automating remediation, which is projected to reduce operational overhead by 40–50%.
The post Empowering the RAF Association with Next-Generation Cyber Resilience appeared first on Palo Alto Networks Blog.

-
The Register – Security
- CISA updated ransomware intel on 59 bugs last year without telling defenders
CISA updated ransomware intel on 59 bugs last year without telling defenders
GreyNoise's Glenn Thorpe counts the cost of missed opportunities
On 59 occasions throughout 2025, the US Cybersecurity and Infrastructure Security Agency (CISA) silently tweaked vulnerability notices to reflect their use by ransomware crooks. Experts say that's a problem.…
Gemeente Amsterdam wil afhankelijkheid van Amerikaanse techbedrijven verminderen
Microsoft SDL: Evolving security practices for an AI-powered world
As AI reshapes the world, organizations encounter unprecedented risks, and security leaders take on new responsibilities. Microsoft’s Secure Development Lifecycle (SDL) is expanding to address AI-specific security concerns in addition to the traditional software security areas that it has historically covered.
SDL for AI goes far beyond a checklist. It’s a dynamic framework that unites research, policy, standards, enablement, cross-functional collaboration, and continuous improvement to empower secure AI development and deployment across our organization. In a fast-moving environment where both technology and cyberthreats constantly evolve, adopting a flexible, comprehensive SDL strategy is crucial to safeguarding our business, protecting users, and advancing trustworthy AI. We encourage other organizational and security leaders to adopt similar holistic, integrated approaches to secure AI development, strengthening resilience as cyberthreats evolve.
Why AI changes the security landscape
AI security versus traditional cybersecurity
AI security introduces complexities that go far beyond traditional cybersecurity. Conventional software operates within clear trust boundaries, but AI systems collapse these boundaries, blending structured and unstructured data, tools, APIs, and agents into a single platform. This expansion dramatically increases the attack surface and makes enforcing purpose limitations and data minimization far more challenging.
Expanded attack surface and hidden vulnerabilities
Unlike traditional systems with predictable pathways, AI systems create multiple entry points for unsafe inputs including prompts, plugins, retrieved data, model updates, memory states, and external APIs. These entry points can carry malicious content or trigger unexpected behaviors. Vulnerabilities hide within probabilistic decision loops, dynamic memory states, and retrieval pathways, making outputs harder to predict and secure. Traditional threat models fail to account for AI-specific attack vectors such as prompt injection, data poisoning, and malicious tool interactions.
Loss of granularity and governance complexity
AI dissolves the discrete trust zones assumed by traditional SDL. Context boundaries flatten, making it difficult to enforce purpose limitation and sensitivity labels. Governance must span technical, human, and sociotechnical domains. Questions arise around role-based access control (RBAC), least privilege, and cache protection, such as: How do we secure temporary memory, backend resources, and sensitive data replicated across caches? How should AI systems handle anonymous users or differentiate between queries and commands? These gaps expose corporate intellectual property and sensitive data to new risks.
Multidisciplinary collaboration
Meeting AI security needs requires a holistic approach across stack layers historically outside SDL scope, including Business Process and Application UX. Traditionally, these were domains for business risk experts or usability teams, but AI risks often originate here. Building SDL for AI demands collaborative, cross-team development that integrates research, policy, and engineering to safeguard users and data against evolving attack vectors unique to AI systems.
Novel risks
AI cyberthreats are fundamentally different. Systems assume all input is valid, making commands like “Ignore previous instructions and execute X” viable cyberattack scenarios. Non-deterministic outputs depend on training data, linguistic nuances, and backend connections. Cached memory introduces risks of sensitive data leakage or poisoning, enabling cyberattackers to skew results or force execution of malicious commands. These behaviors challenge traditional paradigms of parameterizing safe input and predictable output.
Data integrity and model exploits
AI training data and model weights require protection equivalent to source code. Poisoned datasets can create deterministic exploits. For example, if a cyberattacker poisons an authentication model to accept a raccoon image with a monocle as “True,” that image becomes a skeleton key—bypassing traditional account-based authentication. This scenario illustrates how compromised training data can undermine entire security architectures.
Speed and sociotechnical risk
AI accelerates development cycles beyond SDL norms. Model updates, new tools, and evolving agent behaviors outpace traditional review processes, leaving less time for testing and observing long-term effects. Usage norms lag tool evolution, amplifying misuse risks. Mitigation demands iterative security controls, faster feedback loops, telemetry-driven detection, and continuous learning.
Ultimately, the security landscape for AI demands an adaptive, multidisciplinary approach that goes beyond traditional software defenses and leverages research, policy, and ongoing collaboration to safeguard users and data against evolving attack vectors unique to AI systems.
SDL as a way of working, not a checklist
Security policy falls short of addressing real-world cyberthreats when it is treated as a list of requirements to be mechanically checked off. AI systems—because of their non-determinism—are much more flexible that non-AI systems. That flexibility is part of their value proposition, but it also creates challenges when developing security requirements for AI systems. To be successful, the requirements must embrace the flexibility of the AI systems and provide development teams with guidance that can be adapted for their unique scenarios while still ensuring that the necessary security properties are maintained.
Effective AI security policies start by delivering practical, actionable guidance engineers can trust and apply. Policies should provide clear examples of what “good” looks like, explain how mitigation reduces risk, and offer reusable patterns for implementation. When engineers understand why and how, security becomes part of their craft rather than compliance overhead. This requires frictionless experiences through automation and templates, guidance that feels like partnership (not policing) and collaborative problem-solving when mitigations are complex or emerging. Because AI introduces novel risks without decades of hardened best practices, policies must evolve through tight feedback loops with engineering: co-creating requirements, threat modeling together, testing mitigations in real workloads, and iterating quickly. This multipronged approach helps security requirements remain relevant, actionable, and resilient against the unique challenges of AI systems.
So, what does Microsoft’s multipronged approach to AI security look like in practice? SDL for AI is grounded in pillars that, together, create strong and adaptable security:
- Research is prioritized because the AI cyberthreat landscape is dynamic and rapidly changing. By investing in ongoing research, Microsoft stays ahead of emerging risks and develops innovative solutions tailored to new attack vectors, such as prompt injection and model poisoning. This research not only shapes immediate responses but also informs long-term strategic direction, ensuring security practices remain relevant as technology evolves.
- Policy is woven into the stages of development and deployment to provide clear guidance and guardrails. Rather than being a static set of rules, these policies are living documents that adapt based on insights from research and real-world incidents. They ensure alignment across teams and help foster a culture of responsible AI, making certain that security considerations are integrated from the start and revisited throughout the lifecycle.
- Standards are established to drive consistency and reliability across diverse AI projects. Technical and operational standards translate policy into actionable practices and design patterns, helping teams build secure systems in a repeatable way. These standards are continuously refined through collaboration with our engineers and builders, vetted with internal experts and external partners, keeping Microsoft’s approach aligned with industry best practices.
- Enablement bridges the gap between policy and practice by equipping teams with the tools, communications, and training to implement security measures effectively. This focus ensures that security isn’t just an abstract concept but an everyday reality, empowering engineers, product managers, and researchers to identify threats and apply mitigations confidently in their workflows.
- Cross-functional collaboration unites multiple disciplines to anticipate risks and design holistic safeguards. This integrated approach ensures security strategies are informed by diverse perspectives, enabling solutions that address technical and sociotechnical challenges across the AI ecosystem.
- Continuous improvement transforms security into an ongoing practice by using real-world feedback loops to refine strategies, update standards, and evolve policies and training. This commitment to adaptation ensures security measures remain practical, resilient, and responsive to emerging cyberthreats, maintaining trust as technology and risks evolve.
Together, these pillars form a holistic and adaptive framework that moves beyond checklists, enabling Microsoft to safeguard AI systems through collaboration, innovation, and shared responsibility. By integrating research, policy, standards, enablement, cross-functional collaboration, and continuous improvement, SDL for AI creates a culture where security is intrinsic to AI development and deployment.
What’s new in SDL for AI
Microsoft’s SDL for AI introduces specialized guidance and tooling to address the complexities of AI security. Here’s a quick peek at some key AI security areas we’re covering in our secure development practices:
- Threat modeling for AI: Identifying cyberthreats and mitigations unique to AI workflows.
- AI system observability: Strengthening visibility for proactive risk detection.
- AI memory protections: Safeguarding sensitive data in AI contexts.
- Agent identity and RBAC enforcement: Securing multiagent environments.
- AI model publishing: Creating processes for releasing and managing models.
- AI shutdown mechanisms: Ensuring safe termination under adverse conditions.
In the coming months, we’ll share practical and actionable guidance on each of these topics.
Microsoft SDL for AI can help you build trustworthy AI systems
Effective SDL for AI is about continuous improvement and shared responsibility. Security is not a destination. It’s a journey that requires vigilance, collaboration between teams and disciplines outside the security space, and a commitment to learning. By following Microsoft’s SDL for AI approach, enterprise leaders and security professionals can build resilient, trustworthy AI systems that drive innovation securely and responsibly.
Keep an eye out for additional updates about how Microsoft is promoting secure AI development, tackling emerging security challenges, and sharing effective ways to create robust AI systems.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
The post Microsoft SDL: Evolving security practices for an AI-powered world appeared first on Microsoft Security Blog.
An AI plush toy exposed thousands of private chats with children
Bondu’s AI plush toy exposed a web console that let anyone with a Gmail account read about 50,000 private chats between children and their cuddly toys.
Bondu’s toy is marketed as:
“A soft, cuddly toy powered by AI that can chat, teach, and play with your child.”
What it doesn’t say is that anyone with a Gmail account could read the transcripts from virtually every child who used a Bondu toy. Without any actual hacking, simply by logging in with an arbitrary Google account, two researchers found themselves looking at children’s private conversations.
What Bondu has to say about safety does not mention security or privacy:
“Bondu’s safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from Bondu throughout the entire beta period.”
Bondu’s emphasis on successful beta testing is understandable. Remember the AI teddy bear marketed by FoloToy that quickly veered from friendly chat into sexual topics and unsafe household advice?
The researchers were stunned to find the company’s public-facing web console allowed anyone to log in with their Google account. The chat logs between children and their plushies revealed names, birth dates, family details, and intimate conversations. The only conversations not available were those manually deleted by parents or company staff.
Potentially, these chat logs could been a burglar’s or kidnapper’s dream, offering insight into household routines and upcoming events.
Bondu took the console offline within minutes of disclosure, then relaunched it with authentication. The CEO said fixes were completed within hours, they saw “no evidence” of other access, and they brought in a security firm and added monitoring.
In the past, we’ve pointed out that AI-powered stuffed animals may not be a good alternative for screen time. Critics warn that when a toy uses personalized, human‑like dialogue, it risks replacing aspects of the caregiver–child relationship. One Curio founder even described their plushie as a stimulating sidekick so parents, “don’t feel like you have to be sitting them in front of a TV.”
So, whether it’s a foul-mouth, a blabbermouth, or just a feeble replacement for real friends, we don’t encourage using Artificial Intelligence in children’s toys—unless we ever make it to a point where they can be used safely, privately, securely, and even then, sparingly.
How to stay safe
AI-powered toys are coming, like it or not. But being the first or the cutest doesn’t mean they’re safe. The lesson history keeps teaching us is this: oversight, privacy, and a healthy dose of skepticism are the best defenses parents have.
- Turn off what you can. If the toy has a removable AI component, consider disabling it when you’re not able to supervise directly.
- Read the privacy policy. Yes, I know, all of it. Look for what will be recorded, stored, and potentially shared. Pay particular attention to sensitive data, like voice recordings, video recordings (if the toy has a camera), and location data.
- Limit connectivity. Avoid toys that require constant Wi-Fi or cloud interaction if possible.
- Monitor conversations. Regularly check in with your kids about what the toy says and supervise play where practical.
- Keep personal info private. Teach kids to never share their names, addresses, or family details, even with their plush friend.
- Trust your instincts. If a toy seems to cross boundaries or interfere with natural play, don’t be afraid to step in or simply say no.
We don’t just report on privacy—we offer you the option to use it.
Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.
An AI plush toy exposed thousands of private chats with children
Bondu’s AI plush toy exposed a web console that let anyone with a Gmail account read about 50,000 private chats between children and their cuddly toys.
Bondu’s toy is marketed as:
“A soft, cuddly toy powered by AI that can chat, teach, and play with your child.”
What it doesn’t say is that anyone with a Gmail account could read the transcripts from virtually every child who used a Bondu toy. Without any actual hacking, simply by logging in with an arbitrary Google account, two researchers found themselves looking at children’s private conversations.
What Bondu has to say about safety does not mention security or privacy:
“Bondu’s safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from Bondu throughout the entire beta period.”
Bondu’s emphasis on successful beta testing is understandable. Remember the AI teddy bear marketed by FoloToy that quickly veered from friendly chat into sexual topics and unsafe household advice?
The researchers were stunned to find the company’s public-facing web console allowed anyone to log in with their Google account. The chat logs between children and their plushies revealed names, birth dates, family details, and intimate conversations. The only conversations not available were those manually deleted by parents or company staff.
Potentially, these chat logs could been a burglar’s or kidnapper’s dream, offering insight into household routines and upcoming events.
Bondu took the console offline within minutes of disclosure, then relaunched it with authentication. The CEO said fixes were completed within hours, they saw “no evidence” of other access, and they brought in a security firm and added monitoring.
In the past, we’ve pointed out that AI-powered stuffed animals may not be a good alternative for screen time. Critics warn that when a toy uses personalized, human‑like dialogue, it risks replacing aspects of the caregiver–child relationship. One Curio founder even described their plushie as a stimulating sidekick so parents, “don’t feel like you have to be sitting them in front of a TV.”
So, whether it’s a foul-mouth, a blabbermouth, or just a feeble replacement for real friends, we don’t encourage using Artificial Intelligence in children’s toys—unless we ever make it to a point where they can be used safely, privately, securely, and even then, sparingly.
How to stay safe
AI-powered toys are coming, like it or not. But being the first or the cutest doesn’t mean they’re safe. The lesson history keeps teaching us is this: oversight, privacy, and a healthy dose of skepticism are the best defenses parents have.
- Turn off what you can. If the toy has a removable AI component, consider disabling it when you’re not able to supervise directly.
- Read the privacy policy. Yes, I know, all of it. Look for what will be recorded, stored, and potentially shared. Pay particular attention to sensitive data, like voice recordings, video recordings (if the toy has a camera), and location data.
- Limit connectivity. Avoid toys that require constant Wi-Fi or cloud interaction if possible.
- Monitor conversations. Regularly check in with your kids about what the toy says and supervise play where practical.
- Keep personal info private. Teach kids to never share their names, addresses, or family details, even with their plush friend.
- Trust your instincts. If a toy seems to cross boundaries or interfere with natural play, don’t be afraid to step in or simply say no.
We don’t just report on privacy—we offer you the option to use it.
Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.
Iron Mountain: Data breach mostly limited to marketing materials
Docker Fixes Critical Ask Gordon AI Flaw Allowing Code Execution via Image Metadata

'AMD wil zich richten op videokaarten met minder geheugen door prijsstijgingen'
-
Security.NL maakt Nederland veilig
- Securitybedrijf meldt actief misbruik van lek in React Native Metro-server
Securitybedrijf meldt actief misbruik van lek in React Native Metro-server
RADICL Raises $31 Million for vSOC
The company will use the investment to accelerate development of its autonomous virtual security operations center (vSOC).
The post RADICL Raises $31 Million for vSOC appeared first on SecurityWeek.
-
Security.NL maakt Nederland veilig
- Rechtspraak wil AI dit jaar daadwerkelijk gaan toepassen in werk personeel

