Scientists at Cambridge University have demonstrated that applying physical constraints to an artificial intelligence (AI) system allows it to develop features similar to those found in the brains of complex organisms. By imposing physical constraints on a simplified model of the brain, the researchers observed the emergence of organizational features and strategies also seen in human brains. The AI system developed hubs for efficient information transfer and exhibited flexible coding schemes, in which individual nodes encoded multiple properties of a task. The findings provide insights into the organization of the human brain and could inform the design of more efficient AI systems.
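The summary does not specify how the constraint was implemented, but one common way to make a network "physical" is to give each node a fixed spatial position and penalize strong connections between distant nodes, so the optimizer trades task performance against total wiring length. The sketch below is a minimal illustration of that idea in PyTorch; the coordinates, penalty strength, and network size are assumptions for exposition, not the researchers' actual setup.

```python
# Minimal sketch of a distance-based wiring-cost constraint (illustrative only;
# node count, coordinates, and penalty strength are assumptions).
import torch

n_nodes = 100
coords = torch.rand(n_nodes, 3)             # hypothetical 3-D node positions
dist = torch.cdist(coords, coords)          # pairwise Euclidean distances
W = torch.nn.Parameter(torch.randn(n_nodes, n_nodes) * 0.01)  # recurrent weights

def wiring_cost(weights, distances, strength=1e-3):
    """Penalize strong connections between physically distant nodes."""
    return strength * (weights.abs() * distances).sum()

# During training, the penalty would be added to the task loss, e.g.:
#   loss = task_loss + wiring_cost(W, dist)
print(float(wiring_cost(W, dist)))
```

Under a penalty like this, long-range connections are expensive, which is the kind of pressure that can push a network toward local clustering plus a few long-range hubs.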
OpenAI is set to release DALL-E 3, an improved version of its text-to-image AI system, which generates images directly within the ChatGPT app. The new iteration integrates with ChatGPT to help users write detailed prompts for the image model. DALL-E 3 is better at understanding user intent and can render elements that previous AI generators have struggled with. It also includes enhanced safety measures and safeguards against generating explicit or hateful images. OpenAI plans to release DALL-E 3 to select customers next month, with wider availability to be announced later.
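The announcement centers on the ChatGPT integration, but for orientation, OpenAI's existing Images API suggests how programmatic access might look if DALL-E 3 is exposed the same way; the model identifier and parameters below are assumptions based on the current API shape, not confirmed details of the release.

```python
# Illustrative only: assumes DALL-E 3 is served through OpenAI's existing
# Images API; the model name and its availability are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",                                   # hypothetical identifier
    prompt="a watercolor fox reading a newspaper at dawn",
    size="1024x1024",
    n=1,
)
print(response.data[0].url)  # URL of the generated image
```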
Naba Banerjee, who leads Airbnb's worldwide ban on parties, has reduced party reports by 55% between August 2020 and August 2022. Banerjee has implemented various measures, including an anti-party AI system, to combat party "collusion" among users and prevent high-risk reservations. The AI system analyzes factors such as a user's age, reservation details, and proximity to the listing to estimate party risk. Since its global rollout in May, the system has blocked or redirected more than 320,000 guests attempting to book on Airbnb. Despite ongoing challenges, Banerjee and her team continue to monitor and adapt the system to stay ahead of party-inclined users.
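Airbnb has not published its model, but the factors described suggest a risk score aggregated from reservation signals and compared against a blocking threshold. The sketch below is a purely hypothetical illustration of that pattern; every feature, weight, and threshold is invented for exposition and is not Airbnb's actual logic.

```python
# Hypothetical risk-scoring heuristic of the kind the article describes.
# Features, weights, and the 0.5 threshold are invented, not Airbnb's.
def party_risk_score(guest_age, account_age_days, stay_nights,
                     distance_km, is_weekend):
    score = 0.0
    if guest_age < 25:
        score += 0.3   # younger guests treated as higher risk
    if account_age_days < 30:
        score += 0.2   # new accounts carry little history
    if stay_nights == 1:
        score += 0.2   # one-night stays correlate with parties
    if distance_km < 40:
        score += 0.2   # booking close to the guest's home location
    if is_weekend:
        score += 0.1
    return score

# A reservation above the threshold would be blocked or redirected:
if party_risk_score(22, 10, 1, 15, True) > 0.5:
    print("block or redirect booking")
```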
Researchers at The University of Texas at Austin have developed a non-invasive AI system called a semantic decoder that can translate a person's brain activity into a continuous stream of text. The decoder is trained by having a participant listen to hours of podcasts while in an fMRI scanner, after which it can generate text from brain activity alone. This system could provide a new means of communication for individuals who are physically unable to speak, such as those debilitated by strokes.
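The published system is more elaborate than this summary suggests, pairing an encoding model with a generative language model, but the core idea of learning a mapping from brain activity to meaning can be sketched simply. The toy example below uses entirely synthetic data to show one plausible shape of the problem: regress fMRI features onto text embeddings, then retrieve the closest candidate sentence. None of it reflects the actual UT Austin pipeline.

```python
# Toy sketch: learn fMRI -> text-embedding regression, then retrieve the
# nearest candidate sentence. All data here is random/synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_voxels, emb_dim = 500, 2000, 64

X_train = rng.normal(size=(n_samples, n_voxels))  # fMRI features (synthetic)
Y_train = rng.normal(size=(n_samples, emb_dim))   # embeddings of heard text

decoder = Ridge(alpha=1.0).fit(X_train, Y_train)  # brain activity -> semantics

# At test time, predict a semantic embedding from new brain activity and
# pick the closest candidate sentence by cosine similarity.
candidates = rng.normal(size=(10, emb_dim))       # candidate-sentence embeddings
pred = decoder.predict(X_train[:1])               # shape (1, emb_dim)
sims = (candidates @ pred.T) / (
    np.linalg.norm(candidates, axis=1, keepdims=True) * np.linalg.norm(pred))
print("best candidate index:", int(sims.argmax()))
```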
Researchers at the University of Texas at Austin have developed an AI system that can interpret and reconstruct human thoughts by training a neural network to decode functional magnetic resonance imaging (fMRI) signals from multiple areas of the brain simultaneously. The system was able to convey the general ideas a person was thinking about in real time with approximately 50% accuracy. The technology used in the experiment is widely available, raising the possibility that it could be combined with blockchain to build an AI system that reads a person's thoughts and records them immutably.
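The article does not explain how such recording would work; the toy sketch below illustrates only the basic "immutable record" mechanism a blockchain provides, chaining each entry to the hash of the previous one so past records cannot be altered without breaking the chain. The record fields and example texts are purely hypothetical.

```python
# Toy hash chain illustrating tamper-evident recording; not a real blockchain.
import hashlib
import json
import time

chain = []

def append_record(decoded_text, prev_hash="0" * 64):
    """Append a record whose hash covers its content and its predecessor."""
    record = {"text": decoded_text, "ts": time.time(), "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record["hash"]

h = append_record("thinking about a mountain trail")   # hypothetical output
append_record("now a running river", prev_hash=h)      # hypothetical output
print(len(chain), "records;", chain[-1]["hash"][:12], "...")
```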