In a significant advancement for wearable technology, Ray-Ban Meta smart glasses have begun receiving multimodal AI in a rollout across the US and Canada. The upgrade introduces a virtual assistant, powered by Meta AI, that can process queries spanning multiple modalities, including audio and imagery. By leveraging multimodal AI, the smart glasses can ground their responses in what the wearer is looking at.
Through the built-in camera and Meta AI, the glasses capture images and process them in the cloud. This lets the assistant answer questions the wearer speaks aloud, such as identifying the plant they are viewing or translating foreign text. Meta AI's multimodal design allows it to fuse data from the glasses' sensors, such as the camera and microphones, and respond with more sophisticated, contextually relevant answers.
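To make that pipeline concrete, here is a minimal sketch in Python of how a capture-and-query flow of this kind might be wired up: a camera frame and a transcribed voice query are fused into a single request to a cloud multimodal model. Meta's actual service interface is not public, so the endpoint URL, payload schema, and response shape below are purely illustrative assumptions.

```python
import base64
import requests

# Hypothetical cloud endpoint; the real Meta AI service and its request
# schema are not public, so this URL and payload shape are assumptions.
MULTIMODAL_ENDPOINT = "https://example.com/v1/multimodal-query"

def ask_about_scene(image_path: str, spoken_query: str) -> str:
    """Send a captured frame plus a transcribed voice query to a
    cloud multimodal model and return its text answer."""
    # On the glasses this frame would come from the built-in camera;
    # here a local file stands in for the capture step.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    # Fuse the two modalities into one request: the image supplies
    # visual context, the transcript carries the wearer's intent.
    payload = {
        "image": image_b64,
        "query": spoken_query,
    }
    response = requests.post(MULTIMODAL_ENDPOINT, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["answer"]

if __name__ == "__main__":
    # e.g. the wearer looks at a plant and asks what it is
    print(ask_about_scene("frame.jpg", "Hey, what kind of plant is this?"))
```

The key design point the sketch illustrates is that the heavy lifting happens off-device: the glasses only capture and transmit, while the cloud model combines the modalities and generates the answer.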
While the Ray-Ban Meta's AI capabilities are still evolving, the glasses demonstrate the potential of multimodal AI in wearable devices. As AI processing continues to improve, smart glasses like the Ray-Ban Meta could become even more powerful and transformative, offering a seamless and intuitive way to interact with the world around us.