Starbucks CEO Reveals AI Barista System, With Future Plans for Order Prediction and Voice Ordering

Gate News, April 12: Starbucks CEO Brian Niccol disclosed that the company is actively pushing forward AI experiments and shared a series of updates on its internal AI systems. The most widely deployed is a system called "green dot," a barista assistant that supports daily store operations. When employees run into equipment issues or need guidance on preparing a beverage, green dot uses AI to quickly provide solutions.

Niccol said a key direction for Starbucks' future AI work is predicting customers' orders through the app. The app already lets users quickly reselect their most recent orders, but he believes there is room for optimization to make ordering faster and more seamless.

Niccol envisions that friction in the ordering process will eventually be eliminated entirely. AI systems could be integrated so seamlessly into the user experience that customers may not even need to open the app. For example, a customer could simply tell their phone, "I'll be there in 10 minutes, prepare my Starbucks order," and the drinks would be ready upon arrival. This ability to anticipate customer needs, together with voice-based ordering, will be an important direction for the brand as it uses AI to improve personalization and efficiency.
