🍜 US used AI in Iran despite a ban...

Fast food workers are being graded by robots and fintech is trimming humans...

Welcome, Noodle Networkers.

This week feels less like tech news and more like a political thriller written by ChatGPT after three espressos. Banned bots, executive orders, and whispers of nationalized AI. Let’s unpack the tension.

The US reportedly used Anthropic’s Claude in Iran even after a ban was issued. Imagine grounding your teenager and then finding out they still took the car. Somewhere inside a secure facility, someone definitely said “technically it was already running.” 🤯

Trump then ordered the government to drop Anthropic altogether. One minute you are powering federal workflows, the next minute you are being escorted out of the digital building. Silicon Valley breakups now come with classified memos. 🛑

And in Canada, calls are growing for nationalized AI over trust concerns. The message is simple: if we cannot trust the robots, maybe the government should own them. Because nothing says “calm and reassuring” like Ottawa running the algorithm. 🇨🇦

From banned models quietly working overtime to political AI purges to maple-flavored tech nationalism, the stakes feel higher than ever. The real question is not who builds the smartest AI. It is who controls it when things get messy. Let’s dig in.

In today’s AI digest:

  • US reportedly used Claude in Iran despite a ban 🤯

  • Trump orders government to drop Anthropic 🛑

  • Canada calls for nationalized AI over trust concerns 🇨🇦

Read time: 5 minutes

WHAT’S HAPPENING TODAY

Sponsored by Somewhere.com

Hire the top 1% of global talent for 80% less than US equivalents. Join 4,500+ companies that have hired with Somewhere. There is zero risk to get started. You only pay if you hire.

Claude AI

(source: The Guardian)

🤯 The Digest: The US reportedly used Anthropic’s AI model Claude in Iran operations shortly after officially banning it from federal use. Nothing says “strict policy enforcement” like quietly keeping the software open in another browser tab.

Key Details:

🛑 Ban on Paper, Bot in Practice
The administration had ordered agencies to stop using Claude, citing national security concerns. Meanwhile, reports say the model was still assisting with analysis and planning. It is the geopolitical version of “we are done” followed by “but just one more message.”

🧠 AI in the War Room
Claude was reportedly used for intelligence analysis and simulations. Imagine briefing generals with insights that started as a prompt. Somewhere an officer said, “Run it through the model,” and nobody blinked.

⚖️ Tech Meets Realpolitik
The episode highlights the awkward dance between government control and technological dependence. When the tool is already embedded in workflows, banning it overnight is less like flipping a switch and more like unplugging the Wi-Fi mid-Zoom call.

🔥 Ethics vs Urgency
Anthropic has drawn lines around certain military uses. Governments have drawn different lines. The overlap between those lines appears to be… flexible.

Why It Matters: This is what the AI era looks like in real time. Governments want control. Companies want guardrails. Reality wants results. If national security decisions are being stress tested by chatbots, we are officially past the “cute productivity tool” phase of AI.

Anthropic

(source: BBC)

🛑 The Digest: President Trump has ordered federal agencies to drop Anthropic and stop using its AI systems, escalating a standoff over military access and safety guardrails. In other words, one of the hottest AI startups in America just got the governmental equivalent of “you’re not invited anymore.”

Key Details:

📜 From Partner to Persona Non Grata
The directive tells agencies to halt use of Anthropic’s tech, following tensions over how Claude can be used in defense settings. One day you are briefing officials, the next day your badge does not scan.

⚠️ “Supply Chain Risk” Label
Defense officials reportedly classified Anthropic as a supply chain risk. That is usually language reserved for foreign adversaries, not companies whose headquarters are a short drive from Silicon Valley oat milk cafes.

⚖️ Ethics vs Access
Anthropic has drawn lines around certain military applications of its models. The government wanted fewer lines. What followed looks less like a policy disagreement and more like a custody battle over a very intelligent algorithm.

🧑‍⚖️ Courtroom Incoming
Anthropic says it plans to fight the designation. So now we have AI models, defense contracts, and constitutional arguments all in the same headline.

Why It Matters: This is bigger than one company. It signals that AI firms building powerful models may have to choose between strict ethical guardrails and unrestricted government contracts. The awkward twist is that the same government banning the tech might still need tools just like it. The AI era is officially political, and the bots did not even get a vote.

Canadian AI

(source: The Globe and Mail)

🇨🇦 The Digest: Canada is floating the idea of nationalized AI, arguing that if artificial intelligence is going to shape the economy, democracy, and everyone’s news feed, maybe it should not be left entirely to Silicon Valley. The vibe is less “let the market cook” and more “we’d like to see the recipe first.”

Key Details:

🏛 Trust Is Running Low
Public concern around privacy, misinformation, and bias has lawmakers questioning whether private AI labs should be the sole architects of Canada’s digital future. When your chatbot knows more about you than your family doctor, people start asking who exactly owns the stethoscope.

🍁 Data Sovereignty Energy
Supporters argue that a national AI framework would protect Canadian data and values. Translation: if the algorithm is shaping Canadian life, it should at least apologize properly.

📊 Regulation Over Hype
Surveys show Canadians are more interested in guardrails than growth at all costs. They are not anti-AI. They just prefer their technological revolutions with a side of oversight.

🧠 Strategic Independence
With global AI giants racing ahead, some policymakers worry Canada could become a consumer instead of a builder. Nationalized AI is being pitched as a way to stay in the game without handing over the keys.

Why It Matters: This debate is not really about who builds the smartest model. It is about who controls the systems shaping society. Canada is essentially asking whether AI should be treated like a startup or like public infrastructure. And if the country does nationalize AI, somewhere a bureaucrat is going to ask the model to generate a 200 page policy report about itself. Politely, of course.

THE NOODLE LAB

AI Hacks & How-Tos

Scenario is an AI content platform that generates images, short videos, and 3D assets from simple text prompts or reference images. You can use prebuilt models or train custom ones so everything matches your brand, game style, or creative direction. It is built for designers, studios, and product teams that need consistent visual assets at scale.

How to Use It 🧭

1. Create a New Project
Log into Scenario and start a new project based on what you want to create, such as images, animation, or 3D assets.
Pro tip: Keep separate projects for different brands or art styles.

2. Select a Model
Choose a ready-made AI model or use a custom-trained model if you have one.
Pro tip: Custom models help maintain visual consistency across campaigns or game environments.

3. Write Your Prompt or Upload a Reference
Describe the asset you want to generate in clear detail or upload a reference image to guide the output.
Pro tip: Include details like lighting, style, perspective, and mood for better results.

4. Generate and Adjust
Click generate and review the results. If needed, refine your prompt to create variations or improve accuracy.
Pro tip: Small prompt changes can dramatically shift style and composition.

5. Download and Use Your Assets
Export your final images, videos or 3D files and import them into your design tools or game engine.
Pro tip: Organize files by project and style so you can reuse assets easily later.

Scenario helps creative teams move from idea to production-ready visuals much faster while keeping style and branding consistent.
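If you generate assets in batches, Step 3's pro tip (spell out lighting, style, perspective, and mood) is easy to keep consistent with a tiny helper. This is a hypothetical sketch, not part of Scenario's product: `build_prompt` is our own invented function that just assembles one free-text prompt, which you would then paste (or send) to any generator.

```python
# Minimal sketch: assemble a detailed free-text prompt from structured
# fields, per Step 3's tip (lighting, style, perspective, mood).
# build_prompt is a hypothetical helper, not a Scenario API -- it only
# keeps wording consistent across a batch of related assets.

def build_prompt(subject: str, *, style: str = "", lighting: str = "",
                 perspective: str = "", mood: str = "") -> str:
    """Join the subject with any provided detail fields, comma-separated."""
    details = [d for d in (style, lighting, perspective, mood) if d]
    return ", ".join([subject, *details]) if details else subject

if __name__ == "__main__":
    # Same style fields reused for every asset in a campaign:
    prompt = build_prompt(
        "a noodle shop storefront",
        style="hand-painted watercolor",
        lighting="warm evening glow",
        perspective="street-level wide shot",
        mood="cozy",
    )
    print(prompt)
    # -> a noodle shop storefront, hand-painted watercolor,
    #    warm evening glow, street-level wide shot, cozy
```

Per Step 4, small changes to one field (say, swapping the lighting) give you controlled variations without rewriting the whole prompt.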

Trending AI Tools

  • Scenario – Generates images, video, and 3D assets with custom AI models.

  • Safurai – AI coding assistant for writing and debugging inside your IDE.

  • Rytr – Creates marketing copy and blog content in seconds.

  • Prisma Labs – Builds AI tools for fast photo and video editing.

  • Revelio – Automates radiology image analysis with AI.