The article discusses the rise of facial recognition and biometric surveillance, the ways individuals can protect their privacy through simple anti-surveillance measures like masks and clothing, and the importance of understanding these systems to maintain personal power and privacy in an increasingly monitored world.
Signs at Wegmans stores in NYC indicate that biometric data such as face scans, eye scans, and voiceprints is stored to enhance security, despite earlier claims that such data was deleted during pilot programs, raising privacy concerns amid stalled legislation to restrict biometric data storage.
At CES 2023, Xthings introduced the Ultraloq Bolt Sense, a smart lock that uses facial recognition and palm vein scanning for contactless access, along with new security cameras supporting advanced connectivity such as Wi-Fi HaLow and Matter, aiming to enhance home security with innovative features.
UK scientists found that, without training, people struggle to distinguish AI-generated faces from real ones, but a short five-minute training session that highlights common AI flaws can significantly improve detection, raising concerns about AI deception and the need for better detection methods.
AI can generate hyperrealistic faces that are difficult to distinguish from real photos, even for super-recognizers. However, short training sessions that highlight common AI rendering errors can significantly improve people's ability to detect fake faces, suggesting potential for human-AI collaborative detection methods. The study emphasizes slowing down and inspecting facial features to better identify fakes, though the longevity of training effects remains uncertain.
New Orleans has become the first U.S. city to operate a private, live facial recognition network through Project NOLA, raising questions about privacy, control, and legal oversight as the technology is used to track individuals in real time with little regulation.
A study from the University of Pennsylvania suggests AI can analyze facial features to predict personality traits and potential success, raising ethical concerns about discrimination and privacy, with real-world applications already emerging in law enforcement and verification processes.
Apps claiming to catch cheaters using facial recognition and public data mining raise serious privacy concerns, as they often operate without user consent, can produce false positives, and may violate privacy laws, highlighting the need for stronger legislation to protect personal data and privacy rights.
Google's Ask Photos and Conversational Editing features are unavailable in Texas and Illinois, likely due to legal concerns over biometric data collection and recent settlement agreements related to privacy laws in these states.
Microsoft's OneDrive is introducing a feature that uses AI to recognize and group photos by faces; the setting can be turned off only three times a year, raising privacy concerns despite Microsoft's assurances that facial data won't be used for AI training or shared. The feature is still in preview and has yet to be widely released.
Amazon announced new Ring camera features including 4K video and a facial recognition feature called Familiar Faces, which can identify known people and notify users when it recognizes them. While the feature is optional and restricted in some states, it has raised privacy concerns among consumers and advocacy groups, leading some to cancel their subscriptions and express discomfort with the invasive nature of the technology.
A Malvern-based startup, FarX, has developed an AI-powered security technology that combines voice and facial recognition to enhance online security and detect fraud, with plans to incorporate emotional recognition to improve customer service interactions. The company has secured patents in the UK and US, and its technology is already in use in the banking industry.
The 'biometric exit' program, which uses facial recognition to verify departing international travelers, is expanding across U.S. airports, raising privacy concerns and causing discomfort among travelers due to federal agents taking photos without clear opt-out options. The program aims to improve security and identity verification but faces criticism over privacy risks and potential misuse.
States are increasingly enacting laws to regulate biometric data collection amid a lack of federal regulation, with some cases leading to large settlements with tech companies. However, enforcement is challenging, especially against overseas companies like PimEyes, which operate beyond U.S. jurisdiction. Public concern about privacy and facial recognition technology is growing, but federal legislation remains stalled due to industry lobbying.
xAI asked more than 200 employees to record conversations to train facial recognition, aiming to help Grok analyze facial expressions, but it faced internal resistance over privacy concerns and the sensitive nature of biometric data, amid broader challenges posed by privacy laws.