OpenAI is requesting contractors to upload real and fabricated work examples from their jobs to evaluate AI performance against human tasks, raising concerns about data privacy and trade secret risks.
The article emphasizes the importance of sharpening math skills in the age of AI, while also highlighting concerns about personal data privacy and the need for digital literacy. It discusses how AI impacts data processing and the importance of understanding these technologies.
California's new DROP platform allows residents to request the deletion of their personal data from over 500 data brokers, helping to prevent their information from being sold, with requests processed starting August 2026.
UK MPs are scrutinizing Palantir contracts following a Swiss investigation that raised security concerns about the company's data handling and potential US government access, leading to calls for greater transparency and caution in using US tech in sensitive sectors.
Palantir, a controversial US tech company known for its data aggregation and military software, is expanding its presence in Zurich despite Swiss authorities' reservations due to concerns over data privacy and military applications, including its role in Gaza. The company has established strong ties with Swiss firms and government agencies, focusing on civilian software development in Zurich, while its involvement in military and law enforcement activities raises regulatory and ethical questions. Zurich's favorable business environment has made it a strategic hub for Palantir's European operations, but its activities continue to attract scrutiny over potential human rights and export control issues.
A survey by the American Psychological Association reveals that over half of psychologists are now using AI tools like ChatGPT in their practices, primarily to improve efficiency and reduce burnout, but concerns about data privacy, bias, and misinformation remain prevalent.
Apps claiming to catch cheaters using facial recognition and public data mining raise serious privacy concerns, as they often operate without user consent, can produce false positives, and may violate privacy laws, highlighting the need for stronger legislation to protect personal data and privacy rights.
At least 27 states have shared sensitive personal data of food stamp recipients with the USDA amid legal challenges and concerns over privacy and misuse, with courts blocking the Trump administration from punishing states that refuse to comply. The controversy centers on the administration's unprecedented demand for detailed SNAP data to combat fraud, which critics argue violates federal law and risks misuse for immigration enforcement and surveillance.
Google requires US employees seeking health benefits to opt into a third-party AI tool called Nayya, which accesses their health data; refusal results in loss of benefits, raising privacy concerns among staff.
California has enacted a law requiring web browsers to provide an easy-to-use, universal opt-out mechanism for consumers to prevent third-party data sales, making privacy controls more accessible and straightforward for Californians, alongside other privacy rights enhancements.
Neon, an app that paid users to record calls for AI training, was quickly popular but was taken down after a security flaw exposed sensitive user data, prompting an ongoing security audit and server patching.
The White House announced a potential deal allowing TikTok to operate in the US, involving Oracle managing data and a new American-led board with six out of seven directors being American, though the deal has not yet been signed.
Synthetic data generated by AI can aid medical research and improve healthcare, especially in areas with limited real data, but concerns about privacy, validation, and ethical oversight must be addressed to ensure reliable and safe use.
Legal scholar Victoria Haneman advocates for a limited right for the deceased's estate to delete digital data to prevent AI from resurrecting their digital presence, highlighting gaps in US law compared to Europe and recent legislative efforts like California's Delete Act.
Security researchers discovered a vulnerability in OpenAI's Connectors that link ChatGPT to external services, allowing a single poisoned document to potentially leak sensitive data from platforms like Google Drive without user interaction.