In brief
- San Francisco police arrested a suspect after an incendiary device was thrown at the home of OpenAI CEO Sam Altman.
- Officers later detained the same individual near OpenAI’s headquarters after he allegedly threatened to burn down the building.
- No one was injured, and police say the investigation remains ongoing.
San Francisco police arrested a suspect early Friday after a Molotov cocktail was thrown at the home of OpenAI CEO Sam Altman.
According to a report by NBC News, police responded to Altman’s home in San Francisco’s North Beach neighborhood around 4:12 a.m. PT after receiving a report of a fire. Investigators said an unknown man threw an incendiary device, igniting a fire on an exterior gate, before fleeing the scene.
Police described the device as a Molotov cocktail or a similar incendiary device.
Officers later detained the suspect near OpenAI’s headquarters after he allegedly threatened to burn down the building; when they arrived, they recognized him as the man from the earlier incident. Police did not name the suspect but described him as a 20-year-old man. Authorities said charges are pending and the case remains under active investigation.
"Early this morning, someone threw a Molotov cocktail at Sam Altman’s home and also made threats at our San Francisco headquarters,” an OpenAI spokesperson told Decrypt. “Thankfully, no one was hurt. We deeply appreciate how quickly SFPD responded and the support from the city in helping keep our employees safe.”
The spokesperson added that OpenAI is assisting law enforcement with the investigation.
The attack comes amid a rise in threats tied to artificial intelligence development, including a recent case in Indiana in which shots were fired into the home of a city council member who supported building a data center. A note left at the scene read, “No data centers.”
Altman has not publicly commented on the incident, and authorities said the investigation remains ongoing.
The incident follows another security scare in November, reported by Wired, in which OpenAI locked down its San Francisco offices after receiving a violent threat linked to an anti-AI activist who had previously visited the company’s facilities and was suspected of planning to harm employees.