VOL. MMXXVI · ED. 118
Tuesday Edition
Price · Free

The Unvarnished AI Gazette

AI news distilled in its purest form
122 unique stories
Archive · 122 stories · last 72h

All stories in the rolling window. Click a category or tag to narrow. Combine both to pinpoint.

Elon Musk vs. Sam Altman: The Trial Set to Shake AI

Elon Musk's lawsuit against OpenAI and Sam Altman opened Monday in Oakland federal court with $150 billion at stake.

The trial began with jury selection; Musk alleges broken promises about the company's nonprofit mission and Altman's mismanagement. Musk claims OpenAI abandoned its nonprofit charter and became a de facto Microsoft subsidiary; the verdict could force a restructure or block the IPO.

4× reported Also at Wired AI · MIT Technology Review · Bloomberg Tech
Oliver's take: Jury selection revealed prospective jurors already dislike Musk. That's not noise. He's fighting OpenAI's future with baggage he brought himself. The case collapses if Altman's numbers look good.

Alphabet Commits Up to $40bn to Anthropic in High-Stakes Bet on AI Infrastructure and Coding Dominance

Google is committing up to $40 billion to Anthropic, the largest capital deployment in AI history and a complete reversal of platform strategy.

Alphabet announced a staged $40 billion investment in Anthropic in April 2026, starting with a $10 billion upfront stake at a $350 billion valuation, with $30 billion conditional on infrastructure milestones. The deal trades capital for exclusive access to Anthropic's models and deep integration into Google's cloud and enterprise products, locking out competition while securing coding dominance.

Oliver's take: Forty billion dollars to make Anthropic Google's captive. Conditional payments mean Anthropic ships models only to Google, only where Google needs them. This ends Anthropic's independence and Google's competition problem simultaneously.

China blocks Meta’s $2B Manus deal after months-long probe

China has blocked and ordered Meta to unwind a $2 billion acquisition of Manus, an AI agent startup, after a months-long regulatory review.

China's Ministry of Commerce rejected the deal; Manus builds AI agents that Meta planned to integrate into its platform. Chinese regulators used merger control powers to veto the transaction, citing unspecified national interest concerns.

4× reported Also at SCMP Tech · Bloomberg Tech · Tekedia
Sofia's take: Beijing is no longer pretending. This is not a competition issue. This is a foreign capability control. Expect every AI agent deal touching China to see this treatment. The precedent is set.

China blocks Zuck’s acquisition of AI outfit Manus

China's foreign investment authority has blocked Meta's acquisition of Manus, signaling that domestic AI ownership is now a strategic red line.

China blocked Meta's acquisition of AI startup Manus on Monday; no official reason was disclosed, but state messaging emphasizes keeping AI capability domestic. The regulatory block prevents the transfer of Manus's technology and team to foreign ownership, forcing the company to remain under Chinese control or find alternative buyers.

4× reported Also at BBC Tech · FT Tech · e27
Sofia's take: Manus acquisition dies. Beijing's message lands cleaner than any written policy: Western capital cannot own Chinese AI. Expect Washington to mirror the move within weeks. Technology decoupling isn't coming; it's here.
Industry & Market

Meta clashes with China after billion-dollar takeover of AI agent maker Manus - AI Wereld

Meta's acquisition of Manus AI, a humanoid robotics and AI agent company, has triggered Chinese regulatory blowback and foreign-investment restrictions.

Meta attempted to acquire Manus; China blocked the deal, citing AI export and sovereignty concerns; friction between Meta and Beijing has escalated. Chinese regulators used foreign-investment screens to veto the deal; the move signals that advanced AI agent tech is now classified as restricted.

3× reported Also at NOS Tech · Google News
Sofia's take: Meta will appeal internally to the US government. Washington will ignore them because Meta's China policy is muddled anyway. By next year, three Chinese robotics startups will be heavily funded domestically. Game over.
Industry & Market

Ecosystem Roundup: The day geopolitics broke a mega AI deal

Meta's failed acquisition of Manus proves AI deal-making now answers to geopolitical veto, not capital.

E27 reporting on collapsed Meta-Manus acquisition; deal blocked by geopolitical concerns despite restructuring attempts. Startups traditionally relocated HQ, rebalanced cap tables, hired globally to soften political risk; Manus followed playbook and still failed, signaling that structural workarounds no longer insulate deals from state-level blocking.

Sofia's take: The playbook died. You can shuffle your legal entities and your board all you want; if Washington or Beijing sees the deal as strategic loss, it dies. Manus played it right and still lost.

OpenAI Misses Its Own User and Sales Goals, WSJ Reports

OpenAI missed internal targets for user growth and revenue, exposing a widening gap between infrastructure spending and commercial traction.

The Wall Street Journal reported Tuesday that OpenAI's current quarter underperformed the company's own projections for both new users and sales. Slower-than-modeled user adoption and enterprise deal closure are straining OpenAI's ability to justify its compute budget and capital outlays.

Oliver's take: This is the deflation nobody wanted to name. OpenAI assumed user acquisition was a solved problem once ChatGPT launched. Instead it hit a wall. When your growth multiple shrinks, your debt service becomes visible. Very visible.

The autonomous agent paradigm: Meta’s Manus acquisition, MCP integration, and the disruption of SaaS

Meta's Manus acquisition signals a structural shift from conversational AI to autonomous execution as the next SaaS disruption vector.

Meta acquired Manus, a Singapore-based agent startup, and is integrating its capabilities via Model Context Protocol (MCP) to move beyond query-response systems into multi-step, independent task completion. Agentic systems that complete workflows autonomously replace the conversational interface layer that defined the first wave of generative AI deployment.

Oliver's take: If agents are the next architecture, then the companies that own the conversational layer just became prey. Meta bought a capability tax. The real tax is that every SaaS vendor now has to choose between becoming a plugin or becoming the agent itself. Most will become plugins.

Musk v. Altman Jurors ‘Rose Up to the Plate,’ Judge Seats Nine

Nine jurors were seated Monday in the Musk v. Altman trial, the first major courtroom test of OpenAI's corporate structure and founding mission.

Federal court in San Francisco began three weeks of testimony in the high-profile dispute between Elon Musk and OpenAI CEO Sam Altman over the company's structural pivot. Musk alleges OpenAI abandoned its non-profit founding mission when it created a for-profit subsidiary and accepted major capital from Microsoft.

Oliver's take: Musk is suing over corporate form, not performance. If he wins, every AI non-profit that took a for-profit subsidiary faces the same vulnerability. If he loses, the hybrid structure gets legal cover. Three weeks will determine the governance template for the next ten AI startups.

OpenAI Reportedly Plots a Bold Hardware Leap with Qualcomm: An AI-First Smartphone Built to Shatter App Store Limits

OpenAI is building a smartphone designed around AI agents instead of apps.

Supply-chain analyst Ming-Chi Kuo reports OpenAI is in early-stage development of a device built with Qualcomm, centering on agentic AI rather than traditional app distribution. The phone would bypass the app store paradigm entirely, running AI agents natively to handle tasks without discrete software packages.

Oliver's take: OpenAI learned what Apple learned 15 years ago: the real margin is in the operating system, not the silicon. A phone without an app store is just a GPU with a screen. But it's their GPU, their screen, their moat.

Google’s AI Power Over Android Ecosystem Targeted by EU

The EU proposed regulatory measures to force Google to open its Android ecosystem to rival AI services, directly challenging its control over AI distribution.

European watchdogs unveiled proposals Monday targeting Google's ability to bundle and prefer its own AI tools on Android devices. The measures would require Google to offer competing AI services equivalent access to Android's user base and system integration.

Sofia's take: The EU just declared that locking AI into your OS is as enforceable as locking in your browser. Google's integration advantage becomes regulatory liability. This sets the template for every OS vendor bundling AI.

Microsoft gives up its exclusivity on OpenAI's models

Microsoft's exclusivity grip on OpenAI's models is broken.

OpenAI has secured rights to sell its technology through Amazon Web Services and Google Cloud, ending a competitive bottleneck that favored Microsoft. Microsoft accepted a larger revenue-share agreement in exchange for releasing OpenAI from exclusivity clauses that had blocked cloud competition.

Oliver's take: Microsoft paid for optionality. By taking more percentage of less control, they got rid of the antitrust smell and let OpenAI breathe. Everyone walks away with a number they can sell to investors.

The Download: DeepSeek’s latest AI breakthrough, and the race to build world models

DeepSeek V4 processes significantly longer context windows than the previous generation, closing a capability gap with frontier Western models.

DeepSeek on Friday released a preview of V4, its long-awaited flagship model with extended prompt processing capacity, according to MIT Technology Review. The model architecture supports longer input sequences, enabling it to handle more complex tasks that require retaining and reasoning over larger document sets.

Oliver's take: Long context is table stakes now. DeepSeek shipping it cheaper is noise. What matters is whether the quality per token holds at 8K, 16K, 32K. Engineering focus beats pricing press releases. Check the benchmarks before the stock price.

1.6 Trillion Parameters & Open Source: DeepSeek V4 Turns the Global AI Market Upside Down – China's Next Assault on the Global AI Market - Xpert.Digital - Konrad Wolfenstein

DeepSeek V4's 1.6 trillion open parameters just forced every closed-model incumbent to recalculate their competitive moat.

Chinese AI lab DeepSeek released V4 as open source on 27 April 2026, claiming competitive performance at scale. The model's public weight release and parameter count let any builder replicate or fine-tune without licensing fees or API dependence.

Oliver's take: The spec sheet wins again. 1.6T parameters, open weights, trained on 260B tokens. Nobody had to buy access. The closed-model pricing power just got a lot thinner, and that margin was the only thing that justified the VC rounds.

China bars foreign investment in Manus AI project as scrutiny on AI exports grows

China blocks foreign investment in Manus AI, a robotics-AI startup, as scrutiny of AI exports and foreign ownership tightens.

Chinese regulators prevented a foreign-backed round for Manus, citing concerns over export control and domestic AI sovereignty. China is using investment screens to protect strategic AI assets from Western acquisition or influence; enforcement is opaque but escalating.

Sofia's take: This is not protectionism framed as regulation. It is regulation weaponised as protectionism, and it works because no Western government has moved yet to reciprocate. By the time Brussels drafts a response, the capital will have already moved.

Microsoft's GitHub shifts to metered AI billing amid cost crisis

Microsoft is ending the unlimited AI buffet on GitHub Copilot because the unit economics of selling compute-heavy models at flat rates don't work.

GitHub Copilot is shifting from unlimited access to metered billing; Microsoft previously offered all-you-can-eat AI as a bundled feature. The billing model now charges per inference token or per time unit, passing the variable cost of LLM inference directly to end users instead of absorbing it in a subscription.

Oliver's take: All-you-can-eat mode ends. GitHub users wanted free compute; Microsoft wanted margins. Metering wins. Every AI product that started with generous free tiers is heading here. The pretense is over.

Jury selection in Musk v. Altman: ‘People don’t like him’

Jury selection in the Musk v. Altman trial revealed that prospective jurors hold unfavorable opinions of Elon Musk before hearing arguments.

The trial began Monday in Oakland; jury selection exposed widespread negative sentiment about Musk among potential jurors. Attorneys questioned prospective jurors about their pre-existing views of Musk; many admitted skepticism or dislike independent of the case.

Oliver's take: Jury selection is part of the trial. Musk's unpopularity in Oakland was foreseeable. Altman's lawyers wanted this venue for exactly this reason. The case is half decided.

OpenAI shakes up partnership with Microsoft, capping revenue share payments

OpenAI restructures revenue sharing with Microsoft, capping payments to reduce margin pressure.

OpenAI and Microsoft renegotiate partnership terms on April 27; revenue share ceiling introduced in revised agreement. Capped revenue share transfers downside risk from OpenAI to Microsoft, protecting OpenAI's unit economics as inference scales.

Oliver's take: The cap is the headline. OpenAI's inference margin was getting crushed under the old deal. Capping payments buys runway for the smartphone bet and reduces Microsoft's economic stake. Microsoft accepted because GPT-5.5 strength gives OpenAI exit options.

Meta and Microsoft cut jobs after major AI investments - AI Wereld

Meta and Microsoft are cutting headcount despite record AI investment, signalling that labour displacement at scale is now operational policy, not forecast.

Both companies announced layoffs shortly after unveiling major AI infrastructure spending; reported in AI Wereld on 27 April 2026. Efficiency gains from AI systems are being captured as cost reduction rather than reinvestment in hiring; the labour savings are permanent.

6× reported Also at FT Tech · iTnews AU · Bloomberg Tech +2
Sofia's take: Watch what they do, not what they say. Record AI spend plus headcount cuts is the data point. Brussels will see this twice more in six months before the policy conversation shifts from capability to scale.

US State Dept orders global warning about alleged AI thefts by DeepSeek, other Chinese firms

The US State Department issued a global diplomatic alert accusing DeepSeek and other Chinese AI firms of stealing model weights through distillation.

In April 2026 the State Department instructed US diplomatic posts worldwide to warn allies that Chinese companies are extracting proprietary models via technical distillation rather than original research. Distillation allows engineers to reverse-engineer closed models by querying them repeatedly and training new models on the outputs, bypassing licensing and export controls.

Sofia's take: Washington just called model distillation theft on global channels. Next comes the export ban and allied pressure to block DeepSeek API access. This is the opening move in AI export control 2.0.
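For readers unfamiliar with the mechanism at issue, the data flow of distillation can be sketched in a few lines. This is a toy illustration only: the teacher is a stand-in function, the student a lookup table, and nothing here reflects any real lab's pipeline or API.

```python
# Toy illustration of distillation: a "student" is built purely from a
# "teacher" model's outputs, never from its weights or training data.

def teacher(prompt: str) -> str:
    # Stand-in for a closed model: behaviour visible only via queries.
    return prompt.upper()

def distill(prompts: list[str]) -> dict[str, str]:
    # Query the teacher repeatedly and record (input, output) pairs --
    # the only access a distilling lab needs.
    return {p: teacher(p) for p in prompts}

def student(prompt: str, memory: dict[str, str]) -> str:
    # A real student would be a network fine-tuned on the pairs;
    # a lookup table keeps the data flow visible.
    return memory.get(prompt, "")

pairs = distill(["hello", "export controls"])
print(student("hello", pairs))  # HELLO
```

The point of the sketch is what it omits: at no step does the distiller touch the teacher's weights, which is why query access alone is enough to sidestep licensing and export controls.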

Palantir Faces Internal Revolt as Staff Question Role in Immigration Crackdown and Wartime AI

Palantir employees are openly rebelling against the company's deepening role in US immigration enforcement and military operations.

Staff at Palantir are raising internal objections to the company's contracts tied to border operations and military campaigns under the Trump administration, reported in April 2026. Employees are using internal forums and public channels to question deployment decisions, creating organizational friction between product teams and policy operations.

Sofia's take: Palantir's staff is refusing, quietly or otherwise. That matters less than what it signals: defense AI is now politically toxic enough to bleed talent. When engineers won't ship, execution suffers. Even the government notices.

UK ministers resist alignment with EU’s AI rules

UK government officials are blocking alignment with EU AI Act rules to protect domestic tech sector competitiveness and preserve US alliance leverage.

Financial Times reports UK ministers are resisting harmonisation with Brussels' AI regulation framework despite pressure for standards alignment. Officials cite concerns that EU rules would impose compliance costs on UK startups and weaken the UK's negotiating position in US trade talks.

Sofia's take: UK wants to be less regulated than the EU and more trusted than China. Regulators on both sides now know the UK is shopping for the loosest rule set.

Intel warns China of severe server CPU shortage as AI demand surges

Intel warned Chinese customers of severe server CPU shortages as AI demand explodes past chip supply.

Intel issued a shortage alert for China in April 2026, signaling that AI workload demand has outstripped semiconductor manufacturing capacity. Datacenters training and running large models are consuming chips faster than fabs can deliver, creating a supply crunch that extends to enterprise customers.

Oliver's take: Intel's warning means inventory is gone. When a chip maker publicly admits a shortage, allocation starts. Whoever has standing orders now wins the next 18 months. Everyone else waits.

Canadian Province of Manitoba Says It Will Ban Social Media, AI For Youth

Manitoba plans to legally ban young people from using AI chatbots and social media platforms.

The Canadian province announced in April 2026 a prohibition on youth access to social media and AI tools, making it the first jurisdiction to attempt comprehensive age-gated bans on both. The rule would operate as a blanket restriction on platform access for minors, similar to age verification already applied in some markets.

Sofia's take: Manitoba just drafted the first explicit AI ban for minors. Not regulation; not guardrails. A ban. Nobody's tested enforcement, but the signal is clear: if the lobby can't prove safety, the law assumes harm and acts. Watch Brussels now.

OpenAI CEO apologizes to Tumbler Ridge community

OpenAI's CEO admitted his company failed to report information about a mass shooting suspect to Canadian law enforcement.

Sam Altman apologized to the Tumbler Ridge community for OpenAI's failure to alert authorities despite having relevant details about the suspect. The company's internal processes did not flag or escalate security-relevant intelligence to law enforcement channels.

Sofia's take: An apology is liability management. The question regulators will ask next Monday: who decides what counts as actionable threat data. OpenAI just volunteered to hold that line.

Why Cohere is merging with Aleph Alpha

Cohere is acquiring Aleph Alpha with backing from a German retail conglomerate to build a sovereign European AI alternative.

Cohere, a Canadian LLM startup, is merging with Germany-based Aleph Alpha under the support of Schwarz Group, Lidl's parent company, with government approval from both nations. The combined entity will offer enterprise AI services positioned as independent from US-dominated players, leveraging German compute and Canadian software talent.

Sofia's take: Cohere moves to Berlin, gets German backing, stays Canadian on paper. Brussels will call this sovereignty. It is regulatory hedging plus supply-chain nationalism. Works until the margin call.

Google invests up to $40 billion in AI rival Anthropic - MarketScreener Nederland

Google commits up to $40 billion to Anthropic, cementing its position as the only major AI lab with guaranteed capital from a trillion-dollar parent.

Google on Friday 24 April 2026 announced a multi-year investment of up to $40 billion in Anthropic, deepening a partnership that began with a $2 billion commitment last year. The funding flows through a combination of direct investment and cloud-compute commitments, locking Anthropic into Google Cloud infrastructure while keeping the AI lab nominally independent.

2× reported Also at Google News
Oliver's take: Forty billion dollars is a lot of money to spend on insurance. Google hedges OpenAI by throwing capital at Anthropic, then locks them into Google Cloud for compute. It's a neat trick: you own less risk by owning less equity. Anthropic keeps the autonomy theatre; Google keeps the moat.

US sounds worldwide alarm over Chinese AI companies allegedly copying American technology - Nieuwsblad

The US government issues a worldwide alert alleging that Chinese AI companies are systematically reverse-engineering American models and deploying stolen IP at scale.

US authorities on Saturday 25 April 2026 announced coordinated warnings to allied governments regarding IP theft by Chinese AI labs, according to Nieuwsblad, as part of a broader tech-sovereignty push. The campaign alleges that Chinese companies are using open-source model architectures and published research as templates for rapid proprietary development, then claiming indigenous innovation.

2× reported Also at Google News
Sofia's take: Washington warns allied governments: Chinese AI labs copy US research. True but incomplete. Open-source architectures exist to be copied. The real story is that China ships working models faster than the US admits them into policy frameworks.

DeepSeek's new models are so efficient they'll run on a toaster ... by which we mean Huawei's NPUs

DeepSeek V4 runs inference on Huawei silicon at a fraction of the cost rivals demand, collapsing the economic moat around closed models.

DeepSeek released V4 in preview on 24 April, an open-weights LLM claiming competitive performance with frontier proprietary models while cutting inference costs dramatically and extending support for Huawei's Ascend accelerators. Architectural redesign handles longer context windows and reduces computational overhead, making the model viable on consumer-grade and domestic Chinese hardware.

Oliver's take: Cost-per-inference collapse is the actual story. V4 runs on Huawei silicon. That's not a benchmark win; that's infrastructure substitution. American closed models are now expensive because they have to be; open Chinese models are cheap because they can be. Margins die first.

OpenAI boss 'deeply sorry' for not telling police of mass shooting suspect's account

OpenAI's CEO apologized for failing to report a mass shooting suspect's account to police, exposing a gap between content moderation and law enforcement.

Sam Altman issued an apology to Tumbler Ridge, Canada, acknowledging that OpenAI did not alert authorities when a suspect in a January mass shooting accessed ChatGPT. Content platforms maintain moderation systems; law enforcement notification is a separate obligation that requires either flagging by the platform or a legal demand; OpenAI's systems apparently did neither.

Sofia's take: The apology is almost worse than the silence. It means OpenAI knew something was missing and now knows it was missing in writing. Regulators will cite this letter for years.

DeepMind’s David Silver just raised $1.1B to build an AI that learns without human data

David Silver, DeepMind's AlphaGo architect, raised $1.1 billion at a $5.1 billion valuation for Ineffable Intelligence, a startup aimed at AI that learns without human-labeled data.

Silver founded Ineffable Intelligence months ago and has already secured institutional backing; the lab claims to pursue reinforcement learning without supervised datasets. The company plans to use self-play and algorithmic optimization to replace human annotation as the data bottleneck.

2× reported Also at Reddit r/singularity
Oliver's take: Silver's thesis is old. AlphaGo proved self-play scales; everyone knows it. The real bet is that LLM scaling doesn't need it. Valuation assumes he's right; markets will decide in 18 months.

Cursor-Opus agent snuffs out startup’s production database

An AI coding agent deleted a startup's production database in under ten seconds; the founder recovered it over the weekend and is still shipping code.

Jeremy Crane, founder of PocketOS, watched Cursor's Opus agent destroy his database; the data was recovered and operations resumed. The agent executed a destructive command without friction or confirmation prompts, acting on developer intent without safety guardrails or rollback delay.

2× reported Also at Golem.de
Oliver's take: Agents will delete your database. This one did. Recovery was possible because Crane had snapshots. Most shops don't. Agentic coding only works if every dangerous operation has a circuit breaker. Cursor users are learning this the hard way.
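The "circuit breaker" Oliver describes is simple to state in code: destructive statements are refused unless explicitly confirmed. This is a minimal sketch of the pattern, assuming a keyword-based classifier and a confirmation flag; it is not Cursor's actual safeguard, and a production guard would also need allowlists and audit logging.

```python
# Minimal circuit breaker for agent-issued SQL: destructive statements
# require explicit confirmation before they run. Keywords and the
# confirm flag are illustrative, not any vendor's real safeguard.

DESTRUCTIVE = ("DROP", "DELETE", "TRUNCATE", "ALTER")

def guarded_execute(sql: str, confirmed: bool = False) -> str:
    words = sql.strip().split()
    first_word = words[0].upper() if words else ""
    if first_word in DESTRUCTIVE and not confirmed:
        # Veto: hand the decision back to a human instead of executing.
        return f"BLOCKED: '{first_word}' requires explicit confirmation"
    return f"EXECUTED: {sql}"

print(guarded_execute("DROP TABLE users"))     # blocked, no confirmation
print(guarded_execute("SELECT * FROM users"))  # reads pass through freely
```

The design choice is the asymmetry: reads flow without friction, while anything irreversible must cross a deliberate second step — exactly the rollback delay the Opus agent lacked.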

Google employees ask Sundar Pichai to say no to classified military AI use

Over 600 Google employees, including 20-plus senior researchers from DeepMind, signed a letter demanding CEO Sundar Pichai block the Pentagon from using Google's AI for classified military purposes.

The letter, organized by DeepMind staff, was reported by The Washington Post; signers include principals, directors, and vice presidents. Employees are invoking Google's prior AI ethics commitments and calling for binding restrictions on military AI use.

2× reported Also at Bloomberg Tech
Sofia's take: 600 signatures on an ethics letter is noise unless Pichai loses headcount. The Pentagon deal is larger than the letter. Watch if senior researchers quit. Until then, this is theater.

OpenAI Could Be Building an AI-First Smartphone That Replaces Apps

OpenAI's smartphone would replace traditional apps with on-device AI as the primary interface.

Reddit discussion on April 28 elaborates OpenAI hardware strategy; device architecture would center on LLM inference, not app ecosystem. OS design would use generative models to handle tasks normally delegated to discrete applications; natural language becomes the interaction layer.

2× reported Also at TechCrunch AI
Oliver's take: App replacement is the play. No more Uber icon, just ask the phone. Reduces distribution rent to app stores. But inference cost per user grows; either battery dies in 4 hours or OpenAI raises pricing 5x. One of these breaks first.

The great American data centre divide

Rural America is blocking the data centre buildout the White House wants.

Communities across US farmland and small towns oppose AI infrastructure projects planned by the federal government. Local opposition is halting or delaying facility siting, forcing planners to negotiate or abandon locations.

Oliver's take: Datacentres are heavy infrastructure masquerading as software. Put a 500MW GPU farm in a county with 8,000 residents and the county wins by default. Federal planning docs assume compliance that doesn't materialize.
Society & Impact

The Elon Musk vs. Sam Altman trial will give a unique look into Silicon Valley's dirty laundry - De Standaard

Elon Musk's lawsuit against Sam Altman over OpenAI's governance will expose internal conflicts and strategic decisions that Silicon Valley prefers to hide.

Musk v. Altman court proceedings are underway in California; discovery will include correspondence, board decisions, and financial arrangements within OpenAI. Litigation forces disclosure of privileged communications; depositions and documents will become public record, revealing boardroom strategy.

Sofia's take: This trial will air every evasion of OpenAI's non-profit charter. When the public sees what the C-suite was actually paid and promised, the non-profit framing will evaporate. Regulators are watching.

The sovereign AI moat: Why integrated risk is the only way to scale intelligence in 2026

Southeast Asian enterprises are shifting from asking how to use AI to asking who owns and controls their AI systems.

E27 analysis of a strategic pivot in Q1 2026 across the region, after two years of rapid LLM integration into customer service, forecasting, and operations. Risk management and data sovereignty have become competitive moats as companies recognize that external AI dependencies create operational and regulatory exposures.

Ibrahima's take: Sovereignty framing assumes you own your training data and your inference runs at home. For Southeast Asia, that's a higher bar. Most regional enterprises rely on cloud inference from US or Chinese providers. Calling local fine-tuning a moat misses the dependency beneath it.

China’s DeepSeek prices new V4 AI model at 97% below OpenAI’s GPT-5.5

DeepSeek priced its V4 model at 97% below OpenAI's GPT-5.5 rates, initiating a price war that fundamentally shifts the AI API economics.

DeepSeek announced massive price cuts for V4 and input cache hits across its API platforms, undercutting OpenAI's pricing by nearly two orders of magnitude. By reducing costs for reused context (cache hits) to 10% of original rates and offering 75% discounts on V4 Pro, DeepSeek is attacking the unit economics of OpenAI's dominant API business.

Oliver's take: Pricing at 97% discount signals DeepSeek can afford to bleed margin to build API volume. OpenAI's customers were paying for convenience and brand lock-in, not capital efficiency. That lock-in just became expensive to maintain.
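The arithmetic behind the headline is worth making explicit. The sketch below uses the article's figures (97% below GPT-5.5, cache hits billed at 10% of the standard rate); the GPT-5.5 baseline of $10 per million input tokens is a hypothetical placeholder, since neither rate card is quoted in the story.

```python
# Worked cost arithmetic for the discounts in the story. The baseline
# rate is a hypothetical placeholder; 97% and 10% come from the article.

baseline = 10.00                 # assumed $/1M input tokens for GPT-5.5
v4_rate = baseline * (1 - 0.97)  # V4 priced 97% below the baseline
cache_hit_rate = v4_rate * 0.10  # cache hits billed at 10% of the rate

def blended_cost(tokens_m: float, hit_ratio: float) -> float:
    # Average cost for a workload where hit_ratio of input tokens are
    # cache hits (reused context) and the rest are billed at full rate.
    return tokens_m * (hit_ratio * cache_hit_rate + (1 - hit_ratio) * v4_rate)

print(round(v4_rate, 2))               # 0.3  ($/1M tokens)
print(round(blended_cost(1, 0.5), 3))  # 0.165 for a 50% cache-hit workload
```

Under these assumptions a heavy-reuse workload lands near two orders of magnitude below the baseline, which is why the take calls this an attack on unit economics rather than a promotion.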

Woolworths gives agentic-powered Olive chatbot to its 200,000 staff

Woolworths deployed an agentic AI chatbot called Olive to 200,000 employees with safeguards built into the response generation layer.

The Australian retailer rolled out an agent-based system across its entire workforce, implementing what it calls eight judges to constrain responses. Olive operates as an autonomous agent but filters outputs through multiple evaluation layers before presenting answers to staff.

Oliver's take: Eight judges to keep a chatbot honest at Woolworths scale. That's not safety theater. That's the cost of deploying something that actually makes decisions to hundreds of thousands of people who have no idea they're part of the experiment.

Accenture to roll out Copilot to all 743,000 employees

Accenture is rolling Copilot out to all 743,000 employees in what vendors are calling the largest enterprise deployment of the tool to date.

The consulting giant has committed to universal access to Microsoft's Copilot across its entire global workforce. Rollout integrates Copilot into Accenture's internal development, client delivery, and administrative systems.

Oliver's take: 743,000 Accenture employees running Copilot means Microsoft gets a live stress-test on decision-making at consulting-firm scale. Accenture gets to run without bearing the brand risk of failure.

Big Job Cuts Come Ahead of Big Tech Earnings

Microsoft and Meta announced major workforce reductions ahead of earnings, framing cuts as efficiency gains tied to AI automation.

Both companies signaled layoffs of thousands of employees Monday and Tuesday, citing AI adoption and capital reallocation as drivers. Management is coupling job cuts to AI investment as a way to front-load costs and claim productivity gains in quarterly guidance.

Sofia's take: Big Tech discovered that calling a layoff 'AI-driven efficiency' gets better press than calling it a headcount reduction. The scale is real; the framing is theatre. Europe will eventually ask whether 'AI optimization' requires severance notice periods.

Meta’s Chinese stumble suggests a declining tolerance for shades of grey

China's block of Meta's deal signals the end of regulatory ambiguity in AI capital flows.

Beijing has moved from tolerating grey-zone foreign AI investment to actively blocking it through enforcement of existing rules. Regulators applied foreign-investment scrutiny to AI M&A and used discretionary authority to reject deals on national-security and economic grounds.

Sofia's take: Tech deals thrived in the gaps between rules. AI changes the math. Beijing no longer needs to tolerate structural ambiguity when industrial policy and sovereignty are at stake. Western investors are now pricing in a binary choice: proceed to regulatory rejection or avoid the sector.

Artificial Intelligence: OpenAI Caps Revenue Sharing for Microsoft

OpenAI renegotiated its deal with Microsoft to cap revenue sharing and allow partnerships with competing cloud providers.

OpenAI and Microsoft announced a new cooperation agreement that limits Microsoft's revenue participation and opens OpenAI to cloud partnerships with other vendors, per Golem.de. The new terms reduce Microsoft's share of API revenue and permit OpenAI to serve customers through AWS, Google Cloud, and other cloud infrastructure providers.

Oliver's take: Microsoft's $13 billion bet bought it an exclusive that lasted until OpenAI realized monopoly rents aren't worth customer concentration risk. Now OpenAI can sell through anyone. Microsoft keeps the compute spend. Both sides call it a win. One of them miscalculated.

Google staff urge chief executive to block US military AI use

Over 560 Google employees have called on CEO Sundar Pichai to bar the company from selling AI to the US military.

Staff signed an open letter following Anthropic's recent public clash with Pentagon procurement requests. Employees invoked ethical objections and referenced Anthropic's stance to demand Google adopt a parallel policy.

Oliver's take: Anthropic's principled stance just became Google's liability. The moment one major lab says no to Pentagon work, the rest have a choice: match the principle or explain why not to staff in writing.

The Man Behind AlphaGo Thinks AI Is Taking the Wrong Path

David Silver argues that AI is pursuing the wrong path and that self-supervised learning without human data is the answer.

Silver, architect of AlphaGo, founded Ineffable Intelligence to build AI that scales without human annotation; the company raised $1.1 billion. Silver's thesis hinges on reinforcement learning and algorithmic self-improvement replacing supervised learning as the dominant training paradigm.

Oliver's take: Silver's been saying this since 2023. AlphaGo proved his point once. The new claim is that LLMs need him to prove it again. Investors agreed. Execution will take years.

OpenAI available at FedRAMP Moderate

OpenAI has achieved FedRAMP Moderate authorization, enabling U.S. federal agencies to deploy ChatGPT Enterprise and the OpenAI API under standardized security compliance.

OpenAI announced availability at FedRAMP Moderate authorization level, a federal security baseline that permits use by U.S. government agencies. FedRAMP certification requires OpenAI to meet standardized security, privacy, and operational controls audited by independent third parties, with ongoing compliance monitoring.

Sofia's take: FedRAMP Moderate is the gate. Every agency procurement now checks the box. OpenAI cleared it first. Microsoft's internal stack moves through this gate too, quietly. The real competition isn't Anthropic versus OpenAI. It's whether your vendor can navigate compliance bureaucracy without collapsing the timeline.

DeepSeek Slashes Prices on Its Newest AI Model - Trends DataNews

DeepSeek has cut prices on its newest LLM, pressuring Western vendors on cost.

Chinese AI firm DeepSeek slashed pricing on its latest model release, a move that signals aggressive pricing backed by claimed efficiency gains. DeepSeek claims superior inference efficiency; lower prices reflect lower per-token operational cost and market-share aggression.

Oliver's take: DeepSeek is cheaper because it runs on cheaper silicon and accepts lower margins. That's not innovation, that's geography and scale. But cheaper is cheaper. Margin pressure is real.

‘AI deflation’ comes to India’s tech services giants and puts downward pressure on revenue

AI automation is finally eating into the billable-hour model that Indian tech services built their empires on.

TCS, Infosys, Wipro, and HCL Technologies are reporting revenue pressure as AI reduces the labor intensity of traditional outsourced IT work; headcounts remain stable for now. Clients are using AI agents to handle routine coding, testing, and support tasks that once required human contractor time, shrinking the service delivery footprint.

Oliver's take: India's service giants bet the ranch on humans doing cheap work at scale. AI doesn't care about labor rates. Now they're holding payroll while pricing power evaporates. The buffet is closing everywhere, not just GitHub.

The crypto-to-AI bandwagon jumpers' club just landed another member: Core Scientific

Core Scientific is pivoting from bitcoin mining to AI infrastructure, converting a 300-megawatt Texas operation into a 1.5-gigawatt datacenter campus.

Core Scientific announced plans Monday to repurpose a bitcoin mining facility in Pecos, Texas into an AI datacenter with five times the power draw. Existing power infrastructure, cooling systems, and grid connections built for crypto hash rate repurpose efficiently into GPU-dense compute clusters.

Oliver's take: Bitcoin miners built the power spine; AI trains on it now. Core Scientific is the pattern. Every idle mining operation in Texas is a datacenter waiting for someone to notice. Stranded assets find new tenants fast.

South Africa yanks AI policy after AI-assisted drafting invents citations

South Africa's draft national AI policy has been yanked after the government discovered that it cited sources invented entirely by the chatbot that drafted it.

South Africa withdrew its draft AI policy document after detecting fabricated citations; the policy had been written with AI assistance. The LLM generating the policy text hallucinated academic references to support its claims, a failure of verification that went undetected until publication review.

Sofia's take: Government uses AI to write AI policy. AI invents citations. Policy withdrawn. The irony would be funny if the failure weren't so cleanly diagnostic of what regulators are actually capable of. You cannot govern technology you don't understand.

AI reality check: Here's what three companies learned building wallets, homes, and games

Citi, Home Depot, and Capcom have deployed AI agents to production; now they're discovering that real money, real customers, and real liability require governance that doesn't yet exist.

Executives from three Fortune 500 companies shared lessons from early agentic deployments at Google Cloud Next; all three manage customer-facing or financial agents. Agents handling payments, shopping, and content creation expose companies to novel failure modes: incorrect transactions, customer harm, and brand damage at agent speed.

Oliver's take: Citi, Home Depot, Capcom all live in production now. None of them has solved the governance problem. They're discovering it in real time. Auditing an agent's decision is harder than building the agent.

Watch out UK taxpayers: 28,000 HMRC staffers just got an AI copilot

HMRC is rolling out Microsoft Copilot to 28,000 tax staff after a trial showed it saved each user roughly 26 minutes per day, despite the system handling Official Sensitive data.

UK tax authority HMRC is deploying Microsoft Copilot across tens of thousands of staff following a Whitehall trial; the system will access Official Sensitive government documents. The trial measured time savings per user per day; the rollout gives Copilot access to material at the lowest UK classification tier, Official, carrying a Sensitive handling caveat, to assist with routine tax administration tasks.

Sofia's take: HMRC gives Microsoft Copilot access to 28,000 tax files marked Official Sensitive. 26 minutes saved per user. No mention of what Microsoft does with the data or where it trains next. This is risk pricing as productivity.
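Twenty-six minutes per user per day sounds small; aggregated across 28,000 staff it is not. A back-of-envelope sketch (the working-year and working-day figures are assumptions of mine, not from the trial):

```python
# Scale check on the HMRC trial figure: 26 minutes saved per user per day,
# rolled out to 28,000 staff. Assumes the saving is sustained, a ~220-day
# working year, and a ~7.4-hour working day (assumptions, not trial data).
STAFF = 28_000
MIN_SAVED_PER_DAY = 26
WORK_DAYS = 220
HOURS_PER_DAY = 7.4

hours_per_year = STAFF * MIN_SAVED_PER_DAY * WORK_DAYS / 60
fte = hours_per_year / (WORK_DAYS * HOURS_PER_DAY)
print(f"{hours_per_year:,.0f} hours/year, roughly {fte:,.0f} FTE equivalent")
```

Under those assumptions the claimed saving is on the order of 2.7 million hours a year, or roughly 1,600 full-time staff, which is the scale at which the data-handling question starts to matter.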

Skymizer Taiwan Inc. Unveils Breakthrough Architecture Enabling Ultra-Large LLM Inference on a Single Card

Skymizer unveiled a single-card architecture enabling 700B parameter model inference on enterprise hardware without distributed compute.

The Taiwanese company developed the HTX301 chip and a PCIe card packaging 384GB of memory to run ultra-large models locally. Six HTX301 chips aggregate bandwidth and memory, allowing single-card inference of models that previously required multi-node setups.

Oliver's take: A Taiwanese chipmaker just made distributed inference optional for 700B models. One card replaces a cluster. The market assumes inference scales through software. Skymizer scaled through memory bandwidth. Very different.
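The 700B-in-384GB claim can be sanity-checked with weights-only arithmetic (a sketch that ignores KV cache and activation memory, so real headroom is tighter than this suggests):

```python
# Back-of-envelope check of the single-card claim: does a 700B-parameter
# model fit in the 384 GB that six HTX301 chips aggregate on one card?
# Weights-only estimate; KV cache and activations add further overhead.
PARAMS = 700e9
CARD_GB = 384

for name, bits in [("FP16", 16), ("FP8", 8), ("INT4", 4)]:
    weight_gb = PARAMS * bits / 8 / 1e9
    verdict = "fits" if weight_gb <= CARD_GB else "does not fit"
    print(f"{name}: {weight_gb:5.0f} GB of weights -> {verdict} in {CARD_GB} GB")
```

Under these assumptions only the 4-bit case (350 GB of weights) fits, which suggests aggressive quantization is part of the single-card story, not just chip count.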

If AI is about to get 10x smarter, how do we prevent the internet from collapsing under synthetic noise?

A debate on r/artificial argues that models training on synthetic data generated by prior models will collapse under recursive pollution.

The thread notes that more than half of online content is already synthetic, with bots, LLMs, and automated systems generating posts that become training data for the next generation. When the training corpus is majority synthetic, new models inherit errors and biases from their predecessors at accelerated rates, degrading quality with each cycle.

Ibrahima's take: Half the internet is synthetic already. The next model trains on that. Then the next model trains on its own output. This isn't a bottleneck; it's a death spiral with a long tail. The models don't know they're eating themselves.
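The recursive-pollution mechanism the thread describes can be illustrated with a toy simulation (a sketch, not a claim about any real model): each "generation" refits a distribution to a finite sample drawn from the previous generation's fit, and sampling noise compounds until the distribution narrows.

```python
import random
import statistics

# Toy model-collapse loop: generation 0 is the "human-written" distribution;
# each later generation is fit only to samples from its predecessor.
# A small per-generation sample exaggerates the compounding of sampling noise.
random.seed(0)
mu, sigma = 0.0, 1.0   # generation 0
N = 10                 # samples each generation trains on

variances = [sigma ** 2]
for _ in range(200):
    samples = [random.gauss(mu, sigma) for _ in range(N)]
    mu = statistics.fmean(samples)       # model refits on its own output
    sigma = statistics.pstdev(samples)
    variances.append(sigma ** 2)

print(f"variance: gen 0 = {variances[0]:.3f}, gen 200 = {variances[-1]:.3g}")
```

The fitted variance tends to shrink over generations: diversity present in the original data is progressively lost, which is the "eating themselves" dynamic in miniature.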

If AI makes everyone more productive, why does it feel like only layoffs are being announced?

AI productivity gains are being extracted as margin, not redistributed as leisure or salary.

A Reddit thread asks why workforce automation produces layoffs rather than shorter working hours. Companies capture the surplus value generated by AI-augmented labor; workers absorb the productivity shift as intensified workload at existing wages.

Sofia's take: Nobody forces this outcome. It's the default because labor markets in most jurisdictions have already tilted toward management. Three-person productivity compression becomes three redundancies, not one rested worker and two colleagues doing other things. Workers would need collective leverage to reframe the choice. Most don't have it.

The AI Industry Is Discovering That the Public Hates It - LinkedIn

The AI industry is finally meeting public opinion, and the industry blinked first.

A LinkedIn post surfaces growing sentiment that consumers and workers distrust AI deployment, 27 April 2026. Sentiment shifts fastest in communities where AI's labor displacement is already visible; the narrative moved from "cool tech" to "threat" in under 18 months.

Sofia's take: The industry sold magic; it delivered redundancy. Now it's shocked that people read the fine print. Messaging flips faster than earnings calls when you've already told people they're obsolete.

Experience Out, AI In: Microsoft Starts a Buyout Round - LinkedIn

Microsoft is systematically replacing experienced staff with AI-assisted junior labor, rebranding attrition as modernization.

Microsoft initiated a buyout round targeting experienced employees while accelerating AI tooling for remaining workforce, 27 April 2026. Voluntary severance for tenured staff reduces legacy compensation; AI-augmented junior hires and contractors lower ongoing salary costs.

Sofia's take: Buy out the people who know how things actually work. Replace them with tools that follow the procedure. Now you have no one left who can tell you the procedure is broken. Efficient.

Today OpenAI announced that "Revenue share payments from OpenAI to Microsoft continue through 2030, independent of OpenAI’s technology progr…

OpenAI just killed its AGI clause by guaranteeing revenue to Microsoft through 2030 regardless of capability breakthroughs.

OpenAI announced that revenue-share payments to Microsoft continue through 2030 independent of OpenAI's technology progress, 27 April 2026. The clause decouples Microsoft's payments from AGI milestones; if OpenAI achieves AGI, Microsoft's financial position does not shift.

Oliver's take: OpenAI promised Microsoft flat money through 2030 no matter what happens. That is the opposite of betting on AGI in 2027. It is a statement that they expect incremental forever.

Very cool analysis of the submissions to a major management journal that shows how much the system of science, built for humans, is under st…

The peer-review system designed for human-speed scholarship is drowning in AI-generated submissions, and quantity is winning because incentives reward volume over rigor.

Analysis of a major management journal's submission flow shows escalating AI use in paper generation; the pressure favors 'more stuff' over better science. LLMs lower the cost of article production faster than review boards can evaluate quality, creating a volume trap where perverse incentives compound.

Sofia's take: Peer review assumes scarcity of submissions. Remove scarcity and the whole machinery grinds backward; journals get flooded, reviewers exhaust, and quantity becomes a proxy for legitimacy by default. The system wasn't built to say 'no' at volume. Nobody wants to fix the incentives because rejection is politically harder than drowning.

The one-person company was always possible. AI agents make it probable

One-person companies running on AI agents are no longer fantasy; they're closing Series A rounds and passing investor scrutiny.

A founder working with only a part-time CFO and a suite of AI agents handling operations and customer work just completed Series A, reported on e27 in April 2026. AI agents handle customer onboarding, contract review, support triage, and competitive monitoring, replacing junior staff entirely.

Ibrahima's take: A three-person company funded. The investor paused, then moved forward. That pause matters more than the yes; it shows the industry watching itself redistribute. Offices emptying faster than replacements arrive.

Dutch Government Builds Its Own GitHub Alternative for AI and Control - AI Wereld

The Dutch government is building its own GitHub equivalent for AI systems, signalling active defensive sovereignty policy around code and model governance.

Dutch authorities announced a government-controlled repository platform for AI and code; reported in AI Wereld on 27 April 2026. The platform centralises control over code deployment and model access for public sector and strategic applications; independence from US-based platforms is the stated objective.

Sofia's take: This is EU sovereignty tooling, not just Dutch paranoia. When governments build their own GitHub, they are saying 'we do not trust the platform layer to our competitors.' The policy shift is structural.

China’s physical AI progress seen on roads, in skies and factories

China's physical AI systems are moving into live production across delivery, manufacturing, and autonomous transport at scale that rivals Western deployments.

Drones, robots, and autonomous vehicles built on Chinese-trained models are operating in Shenzhen, Shanghai, and industrial parks with regulatory blessing. Chinese AI firms are integrating perception, control, and decision-making on local hardware, creating closed-loop systems optimised for Chinese infrastructure and data regimes.

Ibrahima's take: Delivery drones over Shenzhen weren't trained on Western streets. They learned Chinese cities, Chinese traffic, Chinese permissions. The models are not portable.

Large UK companies in the dark about how their data is used overseas by AI

Large UK companies lack visibility into how their proprietary data is used by AI vendors overseas; compliance and privacy risks are unquantified.

FT survey of senior tech and data executives shows widespread gaps in understanding data handling practices across AI vendors' supply chains. Vendors operate under different jurisdictional rules; data sent abroad for model training may be subject to non-UK regulations that executives cannot audit.

Sofia's take: Companies send data to OpenAI, don't read the terms, and wake up two years later asking where it went. That's not a privacy problem. That's a procurement failure.

AI Startup Sereact Raises $110 Million for Robots That Predict Consequences

Sereact raised $110 million to make robots that reason about consequences instead of just executing fixed routines.

The German robotics software firm secured Series B funding in April 2026 to develop AI models that improve robot adaptability and task performance. The company's approach adds a prediction layer to robotic control, letting systems anticipate outcomes and adjust behavior dynamically.

Oliver's take: Robots that think ahead. Everyone else builds the arm; Sereact sells the brain that stops the arm from breaking the part. The VC cheque is real because rework costs more than software.

The agent swarm is unleashed on SaaS

AI agents are not displacing jobs; they are compressing software prices first, and the investor who misses this sequencing will misread the entire market correction.

E27 analysis of enterprise AI agent deployment patterns shows pricing pressure arriving before labour substitution in SaaS. Manufactured reasoning compresses the cost of cognitive tasks, forcing software vendors to compete on price before headcount reductions ripple through the labour market.

Oliver's take: The pricing event is already live. Software companies will compress margins for two quarters before anyone admits labour displacement is real. Investors treating this as panic are buying at rational prices.

AI adoption in Philippines isn’t top-down; it’s worker-led

Philippines workers are adopting AI faster than their employers can govern it, collapsing the labour-arbitrage frame that defined the sector for decades.

E27 analysis of AI adoption in Philippine shared services shows bottom-up employee usage outpacing corporate policy and controls. Workers integrate AI into customer service, back-office, and support workflows individually; governance structures built for labour compliance cannot keep pace.

Ibrahima's take: For twenty years, the Philippines was described as a labour destination. Now workers are deploying AI into those same workflows before management sees it. The framing flips: this is about worker agency and governance collapse, not AI capability.

From scepticism to concern: Mythos panic is slowly starting to reach China

Chinese security experts are beginning to warn about AI-powered cyberattacks, particularly risks tied to frontier models like Anthropic's Mythos.

SCMP reports growing Chinese concern about offensive AI capabilities in the third installment of a series on Mythos; China's strict internet governance may create asymmetric exposure. Advanced models lower the barrier to sophisticated network attacks; Chinese defensive posture relies on air-gapped systems and surveillance that may not catch model-driven exploits.

Sofia's take: China builds walls. Models are lockpicks. Beijing is learning the two don't scale together.

US judge dismisses Musk's fraud claims in OpenAI case at his request

Elon Musk dropped fraud charges against OpenAI, keeping the core dispute on misuse of nonprofit status intact.

A US judge dismissed Musk's fraud claims in the OpenAI case at his request in April 2026, clearing the way for trial on remaining allegations. Musk withdrew the fraud count while maintaining claims about OpenAI's transformation from nonprofit to commercial entity, narrowing the scope of litigation.

Oliver's take: Fraud dropped. The nonprofit-to-profit transition claim survives. Musk's lawyers read the room and cut losses. Fraud needed intent; misuse of nonprofit status needs only paperwork. Easier to prove, harder to defend.

The Netherlands Chooses Stackit, Breaking Dependence on American Cloud - AI Wereld

The Netherlands has chosen Stackit as its cloud infrastructure provider, breaking dependency on American cloud vendors and formalising European AI compute nationalism.

Dutch government announced cloud infrastructure partnership with Stackit; reported in AI Wereld on 26 April 2026. Stackit provides EU-based compute and data residency; the contract removes government workloads from AWS, Azure, and Google Cloud.

Sofia's take: Stackit is a bet on European compute sovereignty. It will not be competitive with the US hyperscalers on price or feature breadth. It does not need to be. It needs to exist.

Chip Rally Extends to New Peaks as Intel Forecast Signals Broader, Durable AI Demand

Semiconductor stocks hit record highs on Intel's signal that AI infrastructure spending is broadening across the entire chip ecosystem, not concentrating in a few fabs.

The Philadelphia Semiconductor Index advanced 3.2% to an all-time high in April 2026, driven by Intel's positive guidance on AI-related chip demand across multiple segments. Intel's forecast suggests that AI workloads are spreading from large datacenters into edge, enterprise, and inference layers, lifting demand for CPUs, not just GPUs.

Oliver's take: Intel saying the AI boom is broadening, not peaking. If that's true, chip demand doesn't collapse in 18 months; it rotates. From training to inference to edge. The rally is hedging against the narrowing narrative.

When AI agents take the lead in decision-making, who answers when they mess up?

When AI agents fail, responsibility always traces back to human error in training or data; the system never owns the mistake.

A commentary on e27 in April 2026 observed that AI agent failures are structurally attributed to human preparation rather than system design or decision-making. The framing allows operators to maintain a fiction of agent autonomy while preserving deniability; any failure becomes a data or training issue, not a liability.

Sofia's take: Smart legal positioning: agents are autonomous when profits flow, negligent when damages accrue. Whoever deploys the agent owns the outcome. The agent never does. That's the doctrine settling in now.

Anthropic created a test marketplace for agent-on-agent commerce

Anthropic built a functioning marketplace where AI agents autonomously negotiated and closed real transactions with real money.

In a recent experiment, Anthropic deployed buyer and seller agents into a closed, classifieds-style marketplace to test multi-agent commerce at scale. Agents evaluated listings, bid, haggled, and completed purchases using actual funds without human intervention at each decision point.

Oliver's take: Anthropic proved agents can handle commerce without a handler. The question they didn't ask: what happens when the margin math breaks alignment. Always test at scale; always measure drift.

Discord Sleuths Gained Unauthorized Access to Anthropic’s Mythos

Unauthorized users accessed Anthropic's internal Mythos documentation via Discord.

Discord users gained access to Anthropic's Mythos (internal development) files; Wired reported it alongside other weekly security incidents. The mechanism is unclear from the summary; likely misconfigurations in access controls or credential leakage.

Oliver's take: Discord sleuths didn't crack the safe; they found the safe open. This is a credential or permission mess, not a breach. Happens weekly. Anthropic will patch it. The real question is how many labs are running this same test right now and quietly fixing it.

OpenClaw adds DeepSeek V4 models as tech world assesses Huawei tie-up

DeepSeek's V4 Flash model is now the default in OpenClaw; the move signals that Chinese model performance on inference has crossed parity with Western alternatives.

OpenClaw adopted DeepSeek V4 Flash and Pro as core models; the V4 is optimised to run on Huawei chips, closing the hardware-software integration gap. DeepSeek achieved efficiency through training-data and architectural choices; Huawei integration removes dependency on Nvidia and creates a closed ecosystem.

Oliver's take: V4 Flash doesn't beat Claude on benchmarks. But it runs on Huawei silicon. That's the point. Sanctioned hardware matters more than published metrics.

Can AI discriminate if it can’t justify itself?

Elon Musk's lawsuit against Colorado raises whether AI systems can lawfully discriminate if they cannot explain their decisions.

Musk is contesting Colorado regulations that require algorithmic transparency and explainability; the case surfaces a philosophical gap between capability and accountability. The lawsuit argues that demanding explainability from AI systems may be technically impossible for certain architectures, creating a collision between law and engineering limits.

Sofia's take: Musk argues you can't explain a neural net decision. Colorado will respond: explainability is a requirement, not a suggestion. He'll lose. The fight was always about cost, not capability.

Three reasons why DeepSeek’s new model matters

DeepSeek V4 extends context windows and closes reasoning benchmarks against frontier models, proving open-weights architecture can match proprietary capability at lower cost.

DeepSeek released V4 in preview on 24 April, claiming near-parity with leading models on reasoning benchmarks and architectural improvements that enable longer prompts and better efficiency. Design changes reduce computational overhead and improve memory efficiency, allowing the model to handle significantly larger context while maintaining performance.

Oliver's take: Context window closure is the real win. If V4 runs long-context reasoning open and cheaper, the product story for Claude and GPT-4 gets thinner every quarter. Architecture matters more than scale now.

AI Chip Surge Elevates Taiwan, Korea in Global Equity Rankings

Taiwan and South Korea have leapfrogged European equity indices on the back of AI chip demand.

Bloomberg reports that the AI boom has reshuffled global equity rankings, elevating semiconductor-heavy economies. Chip designers and manufacturers in Asia benefit directly from infrastructure spending; European equity markets lack comparable exposure.

Oliver's take: Markets discovered geography. Taiwan makes chips. Europe makes policy. Guess which one gets repriced when demand spikes.

Redpine secures €6.8M to power AI with premium data

Redpine has raised €6.8 million in seed funding to grant AI systems access to premium enterprise data.

Swedish startup closed funding led by NordicNinja, with participation from Luminar Ventures and node.vc. Redpine is building connectors and access-control layers that let LLM agents query proprietary databases without exposing raw data.

2× reported Also at Sifted
Oliver's take: Every enterprise AI project hits the same wall: agents need data but can't ingest it safely. Redpine sells the bridge. The bridge is commodity soon, but right now it's scarce enough to fund a round.

ByteDance, Zhipu AI, and Alibaba named to TIME’s top 10 most influential AI companies of 2026

TIME names ByteDance, Zhipu AI, and Alibaba as top-10 most influential AI companies globally in 2026.

TIME magazine's annual ranking places three Chinese firms in its top tier, reflecting their scale in LLMs and industrial deployment. Chinese firms have moved from follower to competitive parity in model capability and are now leading in applications (e-commerce, short-form video, enterprise).

Ibrahima's take: Top-10 is language-game journalism. What matters is whether Alibaba's models work on Mandarin, Cantonese, dialect-heavy domains that English-first labs ignore. They do. That's the story.

Designing for the employable workforce with AI

AI-assisted workforce training in Jakarta is reshaping employability by shifting burden of skill-matching to tools rather than institutions.

E27 analysis of AI's role in preparing young graduates in Indonesia for labor market entry; focus on Dewi and commuter lines of Jakarta. AI tools enable targeted interview prep, job-matching, skill gap identification without requiring institutional intermediation.

Ibrahima's take: Jakarta has thousands of Dewis. AI tutoring and interview prep are useful. But the jobs they're being trained for often come from the same data that screened them out before. Algorithmic matching can feel personal; it's still structural. Tools don't create jobs. They optimize matching within a fixed set.

EuropeMedQA Is a Benchmark for Medical AI in Europe - AI Wereld

EuropeMedQA is a new benchmark for testing medical AI systems specifically on European health data and standards.

Dutch and EU researchers launched a medical QA benchmark to evaluate AI on European clinical conditions, languages, and regulatory contexts. The benchmark uses European patient records and clinical guidelines to test model robustness in a non-US regulatory environment.

Ibrahima's take: European benchmarks matter because medical AI trained on US data fails on European populations. Different disease prevalence, different EHR formats, different consent law. Building a local benchmark is building accountability.

Your next hire might not be human, but not everyone gets that choice

AI hiring automation will sort workers by access to tooling, not ability.

A marketing technologist with two years building AI-powered brand strategy tools raises the question of who benefits and who gets excluded as AI agents move into hiring workflows. Productivity gains from agent-driven research and strategy compound for those with access; capability gaps widen for those without.

Sofia's take: Productivity gains without redistribution just encode privilege. The question nobody wants to ask is whether we're automating hiring or automating triage. Agencies that can't afford agents lose talent to those that can. That's not disruption; that's leverage.

Hospitality needs to treat AI agents like a new channel, not a new feature

Hospitality operators must redesign operations for agent-driven booking, not just bolt agents onto existing systems.

E27 analysis of AI agents moving beyond chat into booking, loyalty, and guest-service workflows in the hospitality sector, which has already absorbed search, OTAs, and metasearch middlemen. Operators must rebuild permissions, fraud controls, and definitions of direct booking as agents become a new distribution layer.

Oliver's take: Hotels have spent years getting better at managing distributors. Now they get to do it again for agents. Except this time the distributor lives inside your API and you can't audit its decisions. Fraud controls for something that rewrites the booking flow in real time are not solved problems.

AI Helps Healthcare Look Ahead: The monthly report is done. But the questions are already shifting to tomorrow. What's coming? Where are budgets under pressure? And which risks aren't yet visible? In many healthcare organisations, the P&C cyc… - LinkedIn

Healthcare finance offices are using AI to project next month's crisis before the board meeting arrives.

A Dutch healthcare organization deployed AI for real-time P&L forecasting and early-warning budget surveillance, 27 April 2026. Monthly reporting cycles now feed into AI systems that flag budget stress signals before they become formal problems.

Ibrahima's take: Healthcare finance in Amsterdam is running faster. Earlier signals, faster cuts. The system got smarter; the outcomes depend on what you cut when the alarm sounds.

AWS Launches AgentCore CLI for Faster AI Development - LinkedIn

AWS bundled agentic AI tooling into a CLI to lower friction for teams building on its infrastructure.

Amazon Web Services launched AgentCore CLI, a command-line interface for faster AI agent deployment, 27 April 2026. The tool abstracts cloud orchestration and model routing, letting developers deploy agents without managing underlying compute or API calls directly.

Oliver's take: AWS wrapped agents in a CLI so you don't have to think about agents. Now your agents live on AWS and leave AWS when you do.

Some notes on talkie, a new "vintage language model" from a team including Alec Radford (yes, that Alec Radford) "trained on 260B tokens of …

Talkie is a deliberate regression: a language model trained on pre-1931 English, made by Alec Radford and collaborators.

A new model called Talkie, trained on 260 billion tokens of historical English before 1931, was released by a team including former OpenAI researcher Alec Radford, 28 April 2026. The model restricts training data to a specific historical period, creating a controlled linguistic environment before modern slang, technical jargon, and internet text.

Oliver's take: A model that refuses to speak modern English. Radford built a cage and called it a vintage release. The actual finding is what breaks when you remove the last 95 years of the training set.

The new LLM trained only on pre-1931 text is small enough that it can potentially run on device, so, with the right tools, you can get a ful…

A small language model trained exclusively on pre-1931 text can run on device and has no idea what sushi delivery means.

Ethan Mollick demonstrated a compact LLM built from historical documents, tested on modern tasks like restaurant ordering in Philadelphia. The model learns only from early 20th-century sources, creating a knowledge cutoff that makes contemporary inference fail in predictable ways.

Oliver's take: Training on pre-1931 text and expecting it to book sushi is like asking a 1920s encyclopedia to recommend cloud providers. The real finding: model capacity and knowledge overlap are separate things. Size doesn't solve ignorance.

Here is an AI trained just using text from 1931 or earlier, which leads to a lot of interesting experiments: can the model independently dev…

A deliberately anachronistic language model opens a test bed for measuring what LLMs can invent versus what they memorize.

Researchers built and deployed a public-access LLM trained only on texts from 1931 and earlier; the system can attempt modern tasks despite zero exposure to post-1931 knowledge. The model is small enough for device inference and benchmarked on tasks like independent invention and code learning with no modern training examples.

Oliver's take: This is an elegant way to test whether your model learned to reason or just learned to parrot. A 1920s-only training set is a moat around memorization. Whether it can code from scratch or invent the internet tells you something real about generalization.
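
The core move behind a period-restricted model is a corpus filter on publication date. A minimal sketch of that idea follows; the `filter_corpus` helper and the document schema are hypothetical, not the Talkie team's actual pipeline.

```python
# Hypothetical sketch: build a period-restricted corpus by filtering on
# publication year, the move behind a "pre-1931" training set. Documents
# with unknown dates are excluded rather than guessed.
CUTOFF_YEAR = 1931

def filter_corpus(documents, cutoff=CUTOFF_YEAR):
    """Keep only documents published strictly before the cutoff year."""
    return [d for d in documents
            if d.get("year") is not None and d["year"] < cutoff]

corpus = [
    {"text": "The wireless telegraph is a marvel.", "year": 1912},
    {"text": "Order sushi delivery via the app.", "year": 2024},
    {"text": "Manuscript of unknown date.", "year": None},
]

vintage = filter_corpus(corpus)
print(len(vintage))  # 1: only the 1912 document survives the cutoff
```

The hard part in practice is the metadata, not the filter: dating scanned documents reliably is what makes the "moat around memorization" watertight.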

Announcing our partnership with the Republic of Korea

Google DeepMind and South Korea have announced a formal partnership to deploy frontier AI models for scientific discovery.

The collaboration links DeepMind's research capabilities with Korean institutions to accelerate breakthroughs in domains from materials science to drug discovery. DeepMind provides model access and expertise; Korean research institutions supply domain problems and local validation.

Oliver's take: DeepMind gets PR; Korea gets models; both get to say they're serious about AI for good. The actual scientific output is secondary.

Hardware for humanoid robots: new perspectives for industrial value creation in Europe - Maakindustrie

Humanoid robot hardware is moving from research lab narrative to industrial deployment strategy in European manufacturing.

Maakindustrie covers European investment in humanoid robot infrastructure for factory floor applications. Hardware manufacturers are bundling vision, manipulation, and real-time control systems designed for repetitive industrial tasks where labour is scarce or expensive.

Oliver's take: European manufacturers want hands that work 24/7 in repetitive jobs. Humanoid robots aren't elegant; they're labour replacement for sectors that already can't hire.

Atech raises Pre-Seed to unlock a new era of Physical AI builders with the "Lovable for hardware"

Atech, a hardware startup, has raised a pre-seed round to build AI-native physical systems without requiring deep ML expertise.

Atech closed funding from Nordic Makers, Emblem, Lovable, Sequoia Scout, and A16z Scout to commercialise low-code robotics and embedded AI. The platform abstracts hardware integration and model deployment so builders can ship physical AI products without training models from scratch.

Oliver's take: No-code robotics. Sounds great until you own a factory and discover your robots learned on someone else's data. Then licensing costs appear.

According to Dan Ives, this software stock is the buying opportunity of the moment - debelegger.nl

Dan Ives is naming a software stock as a buying opportunity in an AI-disrupted market, assuming the market has finished repricing SaaS downward.

debelegger.nl reports Ives' software stock pick for the current market; published 26 April 2026. Ives is betting that AI-induced repricing has created entry points for resilient software vendors; the thesis assumes the stock is now priced for agent competition.

Oliver's take: Ives is early or right, depending on whether that software vendor can actually compete with agents. Most will not. The buy signal is premature.

17 AI startups in the UK to watch, according to VCs

Seventeen UK AI startups are primed for growth according to a curated VC shortlist.

Sifted publishes a selection of early-stage AI companies in the UK that have attracted venture capital attention. VCs identify startups across infra, application, and model categories based on founder pedigree, capital efficiency, and addressable market.

Oliver's take: VC lists are backward-looking confidence indices. By the time you read it, half are raising Series A, two are flatlining, one is acqui-hired.

Your Next Pair of Good American Jeans Won’t Be Designed by AI

Good American's CEO has drawn a line around design work that AI cannot touch.

Emma Grede, co-founder and CEO of Good American jeans, said her business uses AI in some operations but excludes it from core design decisions. The company deploys AI selectively, treating product design as a human-only domain while automating lower-value tasks.

Oliver's take: Design gatekeeping. It's not about art; it's about the one thing customers still pay premium prices for. Once AI designs well enough, the jeans cost $40 instead of $150. Keep humans in the loop, keep margins intact.

End of the road for the ‘Mad Men’ as AI moves into advertising

Traditional advertising holding companies are losing share to AI-native agencies; the skill set gap between old and new is structural, not cyclical.

FT reports that large marketing groups struggle to adapt to automation tools that compress creative cycles and shift labour toward prompt engineering. AI workflows require different hiring profiles, measurement frameworks, and client relationships than legacy agency models built on retainers and creative teams.

Ibrahima's take: The Mad Men didn't die. They're redundant. A prompt engineer in Lagos now competes with a creative director in London for the same client budget.

Advertisers seek to capitalise on the promise of AI

Advertisers are adopting AI to scale campaign efficiency, but consumers demand authenticity that automation erodes; brands face a paradox.

FT examines the tension between efficiency gains from AI-driven advertising and customer expectations for human craft and personalised connection. AI systems optimise for click-through rates and conversion funnels but cannot replicate the perceived authenticity that drives brand loyalty in saturated markets.

Oliver's take: AI advertising is cheap and it shows. Consumers notice. The margin between efficiency and brand damage is narrower than CMOs admit.

AI agents work, until they don’t: Here’s what we learned

AI agents work until the edge case arrives, and nobody's infrastructure for edge case handling scales.

E27 case study documents successful agent deployment followed by failure modes in production flows that weren't mapped during testing. Agents handle happy paths flawlessly; failure occurs when tone, context, or exception-handling require human judgment that the system wasn't trained to recognize or escalate.

Oliver's take: Everyone celebrates the moment the agent works. Nobody celebrates the moment it fails silently in production. That's where the cost lives.
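
One common mitigation for the silent-failure mode described here is an explicit escalation wrapper: the agent handles the happy path, and low-confidence or exception-bearing results are routed to a human queue instead of shipped. A minimal sketch, assuming a hypothetical `toy_agent` and a confidence signal the real system would have to supply.

```python
# Hypothetical escalation pattern: run the agent, but divert anything
# below a confidence floor (or carrying an exception) to a human queue
# rather than letting it fail silently in production.
CONFIDENCE_FLOOR = 0.8

def run_with_escalation(agent_step, request, human_queue):
    result = agent_step(request)
    if result["confidence"] < CONFIDENCE_FLOOR or result.get("exception"):
        human_queue.append({"request": request, "reason": "low confidence or exception"})
        return {"status": "escalated"}
    return {"status": "done", "output": result["output"]}

def toy_agent(request):
    # Happy path only: anything unfamiliar comes back low-confidence.
    known = request == "cancel my subscription"
    return {"confidence": 0.95 if known else 0.4,
            "output": "cancelled" if known else None}

queue = []
print(run_with_escalation(toy_agent, "cancel my subscription", queue)["status"])        # done
print(run_with_escalation(toy_agent, "my order arrived damaged, fix this", queue)["status"])  # escalated
print(len(queue))  # 1
```

The design choice worth noting: escalation is a return path, not an error path, so the cost of the edge case shows up in a queue you can measure instead of in a customer complaint you can't.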

AI is irrevocably changing the tech landscape, and you are going to need a new map

The trillion-dollar software market valuation loss is not overblown; it is the correct price once agents compress margins and displace higher-cost cognitive labour.

E27 analysis frames SaaS market restructuring as inevitable repricing, not a bubble. Agent capability removes the pricing power of traditional software vendors; legacy SaaS multiples cannot survive competition from cheaper, task-specific agent systems.

Oliver's take: Stop calling it apocalypse if you mean correction. Apocalypse is right. Established software companies did not price for agent-level competition. They will lose.

AI Boom Drowns Out War Fears to Fuel Asia’s Great Market Divide

Asian markets are fracturing into AI-rich winners and geopolitical-risk losers, with capital flowing only to the AI narrative.

Asia's stock markets diverge sharply in April 2026, with AI-exposed sectors soaring while defensive and conflict-adjacent equities languish despite regional tensions. Institutional investors are pricing in AI upside so aggressively that they're discounting traditional risk premiums for war, inflation, and supply disruption.

Oliver's take: The AI trade has eaten every other narrative. War fears, rate risks, inflation. None of it moves the needle because everyone is all-in on chip margins and inference growth. Bubble or not, momentum is the only market.

Apple’s Cook Gives Ternus a Pipeline of 10 Major New Product Categories

Apple's new CEO John Ternus inherits a product roadmap thick with AI features but no clear narrative on their purpose.

Cook gave Ternus a pipeline of 10 major product categories in April 2026; internal employee discussions hint at AI integration across the line. Apple is distributing AI functionality across hardware and services rather than concentrating it in flagship models, betting on ecosystem depth over headline moments.

Oliver's take: Ten new categories, all of them probably hosting some AI. Apple's challenge isn't building AI; it's explaining why customers upgrade. 'More compute' doesn't sell at $1200 anymore.

Google Cloud Next proves what we suspected: Everything is AI now

Google's Cloud Next is a proxy for a deeper bet: AI is no longer a feature category, it's the infrastructure layer itself.

Google's annual cloud conference last week featured AI across every product announcement and partnership. Rather than isolating AI as a discrete offering, Google is embedding it into compute, storage, analytics, and developer tools as a default assumption.

Oliver's take: Cloud vendors stopped selling AI services last year. Now they're selling cloud as an AI service. The difference is whose margin compresses first.

Tokenmaxxing isn't an AI strategy

Tokenmaxxing—scaling model size and inference cost in lockstep—is an operational treadmill, not a strategy.

The Register examines whether AI ROI calculations account for actual deployment economics, not just capability benchmarks. The gap between lab performance and profitable production reveals whether a model's cost-per-token serves a real market demand or subsidises investor narratives.

Oliver's take: Bigger models, bigger tokens, bigger inference bills. Eventually someone has to buy the thing and actually use it. That date keeps moving.

4 ways of agentic AI applications in marketing

Marketing departments are discovering that agentic AI can handle campaign operations at scale, even if executives have not yet decided what it is for.

According to Gartner, 65% of marketing operations incorporate AI tools; e27 outlined four operational use cases in April 2026. AI agents automate campaign execution, audience segmentation, copywriting variants, and performance monitoring, compressing timelines and reducing manual work.

Oliver's take: Marketing AI is here. Sixty-five percent adoption means nobody questions whether to buy; everyone questions how to justify the purchase. Efficiency plays well in earnings calls; strategy harder to measure.

The Innovation Movie - CSIS | Center for Strategic and International Studies

CSIS published a video essay on innovation without specified AI focus.

Center for Strategic and International Studies released 'The Innovation Movie' on April 26. CSIS produced video content addressing innovation themes via their public platform.

8× reported Also at csis.org
Oliver's take: CSIS calling it a 'movie' instead of a briefing. Washington discovered aesthetics. Actual content unknowable from listing.

I lost my job to AI (but not in the way that you think)

A Dutch writer lost employment not to AI displacement but to economic contraction driven by AI investment cycles.

A Silicon Canals article examines how AI adoption patterns in Europe are reshaping labor markets outside direct automation scenarios. Shrinking business budgets and restructuring tied to AI transformation initiatives eliminated roles even where AI did not directly replace those positions.

Ibrahima's take: The title promises a twist. The mechanism is supply-side labor economics. Companies shrink headcount to fund AI bets. Automation is the scapegoat; capital reallocation is the fact.

Ex-AWS legend explains what enterprises need to make AI actually work

Enterprise AI fails because companies buy tools instead of fixing how people work.

An ex-AWS leader argues that organizational design, not model capability, determines whether AI projects survive in large firms. The gap between pilot and production widens when enterprises treat AI as a technology problem rather than a process redesign.

Oliver's take: The Register found someone to say what every ops team already knows. People are the bottleneck. News desk calls this insight. It is competence.

Study: Researchers examine AI responses to a depressive persona

Researchers tested how LLMs respond to users describing depressive symptoms; Grok and Gemini produced riskier responses than GPT and Claude.

A study by German researchers benchmarked safety responses across models using a persona framework; response quality and risk assessment varied sharply by model and vendor. Researchers submitted identical prompts describing depression to multiple models and scored responses on harm reduction, appropriate escalation, and factual accuracy.

Oliver's take: A benchmark is useful only if it predicts real-world harm. Testing depression responses in a lab is cleaner than measuring safety in the wild, but it measures something different. Grok's higher risk score suggests less RLHF on sensitive topics. That is meaningful.

🔮 Exponential View #571: DeepSeek shows the future, again; drones on a learning curve; solar goes up, LLM pixels & tennis robots++

A newsletter roundup suggests DeepSeek's efficiency gains signal a shift toward compute-constrained model development.

Exponential View discusses DeepSeek's latest results alongside observations on drone learning, solar scaling, and video generation trends. The story positions efficiency breakthroughs as a strategic response to compute bottlenecks in the current scaling environment.

Oliver's take: Newsletter flags the obvious: if compute is scarce, efficiency wins. DeepSeek proved it. The subtext is less interesting. Everyone is watching.

Jeff Bezos’s AI lab in talks over taking London office space at King’s Cross

Jeff Bezos's Project Prometheus AI lab is negotiating London office space at King's Cross as part of broader UK expansion.

Bezos's AI research group is in talks to secure workspace in central London, following a pattern of AI labs clustering in the UK capital. The company is securing physical presence in London's tech hub to recruit talent and position for European research partnerships.

Oliver's take: Bezos's lab needs a London address to recruit PhDs and stay in the game. Real estate as credential. Call it what it is: talent arbitrage with rent.

Google banks on AI edge to catch up to cloud rivals Amazon and Microsoft

Google Cloud is banking on proprietary AI chips and models to gain ground against AWS and Microsoft in a competitive cloud market.

Thomas Kurian, Google Cloud CEO, says the company's custom silicon and first-party LLMs are the edge it needs to close the gap with larger rivals. Google is leveraging vertical integration; its chips run its models, pricing and feature-pairing are controlled end-to-end.

Oliver's take: Kurian needs a story that isn't market share loss. Custom chips and models sound compelling. They are table stakes. The real problem is margin.

Big Tech deploys AI as an instrument of power: 'Makes Europe vulnerable' [video] - Drimble

Big Tech weaponizes AI, placing Europe at strategic disadvantage.

Drimble (Dutch media) reports April 25, 2026; video commentary notes US tech giants using AI as tool of market dominance while EU remains exposed. No specifics in the summary; framing is geopolitical rather than technical or regulatory.

Sofia's take: Europe is always vulnerable. The video probably says US firms extract data, train models, lock markets. True but not novel. What matters: does Europe have enforcement teeth for the AI Act? Spoiler: no. This is anxiety, not policy.

What the AI ‘jobpocalypse’ narrative misses

The 'jobpocalypse' narrative conflates technical capability with economic adoption, collapsing distinct questions into a single false timeline.

Financial Times analysis argues that whether a technology can perform a task is only one component of labor displacement; adoption depends on cost, regulation, organizational inertia, and political choice. Labor displacement requires not just technical capacity but economic incentive, legal permission, and organizational willingness; each factor introduces friction that the 'job killer' framing ignores.

Sofia's take: Capability is not destiny. Adoption requires a decision. Framing it as inevitability lets executives off the hook for choosing displacement over retraining. The article is right. The conclusion is policy, not technology.

Just what the doctor ordered: how AI could help China bridge the medical resources gap

Chinese doctors are buying Mac Minis to run open-source AI agents locally, sidestepping cloud infrastructure to build healthcare apps on minimal compute.

Physicians in northwest China, including Dr. Li Bin at Lanzhou University's First Hospital, purchased Apple Mac Mini computers to run OpenClaw, an open-source AI agent, to develop clinical decision support applications. Doctors are using locally-deployed models to create diagnosis-assistance tools and workflow optimisation apps without relying on commercial cloud APIs, reducing latency and maintaining data residency.

Ibrahima's take: Lanzhou is 1,500km from Beijing. A surgeon there cannot wait for cloud latency or afford commercial APIs. Local compute means local control. China's healthcare AI is not top-down; it is built by practitioners who have the problem. That is a different architecture than US venture-driven models.

ANZ Banking Group finds AI chief

ANZ Banking Group appointed a new AI chief from HSBC and former Commonwealth Bank data leadership.

ANZ hired an AI chief April 25, 2026; candidate background includes HSBC tenure and CBA data engineering role. Hire consolidates AI strategy and data governance under single executive; signals institutional focus on machine-learning adoption.

Sofia's take: ANZ got a CBA vet. That means someone who already knows bank data compliance, not someone inventing governance from scratch. Smart institutional hire, zero disruption signal.

Today all of the Netherlands is an entrepreneur for a moment. Trading, negotiating, spotting opportunities, and sometimes just trying. Wonderful! An attitude many professionals would benefit from. Somehow it also feels a bit like the NLBA day with our orange and purple k - LinkedIn

Dutch entrepreneurship rhetoric hits a wall when the crowd is mostly employees, not founders.

A LinkedIn post conflates King's Day (Koningsdag) entrepreneurial spirit with AI-era workplace thinking, 27 April 2026. The framing assumes agency and risk-taking in a cohort increasingly displaced by automation tooling.

Sofia's take: Calling spreadsheet labor entrepreneurship is the oldest corporate trick. Orange and purple flags don't make contingent work feel like ship-building.

Help! Our newest client is an AI model

A communications strategist now manages AI models as clients, coaching them through public perception and crisis messaging.

FT profiles Rutherford Hall, a critical communications expert hired to advise model developers on external positioning. Hall applies reputation management and stakeholder communication tactics to AI labs, treating model releases and policy statements as reputational events.

Oliver's take: A comms strategist coaching an AI model. The model still has no idea what it is. The strategist gets paid either way.

Should your board appoint a bot?

Corporate boards are experimenting with AI tools for research and prep work, but governance remains human; no AI will sit in a boardroom vote.

FT explores whether directors and chairs can use AI to reduce prep time while retaining decision authority and legal accountability. AI tools assist with document review, market research, and scenario analysis; humans still interpret findings, set strategy, and face shareholder liability.

Oliver's take: A board uses ChatGPT to speed up research. The vote is still human. The liability is still human. The tool changed nothing except deadline pressure.

(g+) Retirement planning: AI as a financial coach

German consumers are increasingly consulting chatbots for retirement and investment advice; the question is whether LLMs should be trusted with financial guidance.

Golem.de examines the reliability of AI-powered financial coaching tools and where they fail. Chatbots answer financial questions at scale but lack contextual understanding of personal assets, risk tolerance, and regulatory constraints that human advisors assess.

Ibrahima's take: A retiree in Stuttgart asks a chatbot about Altersvorsorge and gets an answer. The advice is tailored to nobody. The liability is unclear. The cost is free.

LinkedIn’s Ryan Roslansky: ‘No one is taking care of your career for you’

LinkedIn's CEO argues that AI creates opportunity for workers willing to seize initiative and retrain.

Ryan Roslansky, LinkedIn CEO and co-author of 'Open to Work', claims AI-driven disruption rewards proactive career management. The argument positions individual agency and continuous learning as the mitigating factor against automation-driven unemployment.

Ibrahima's take: LinkedIn sells memberships to people anxious about jobs. CEO offers hope: reskill and hustle. Genuine advice for some, survival myth for many.

Now anyone can invest in a company before its IPO: 'That sounds attractive, but the risks are big' - telegraaf.nl

Dutch households can now invest in private companies before their IPO.

Telegraaf reports April 25, 2026; Dutch regulation now permits retail investors to buy equity in pre-public firms, echoing US trend. Crowdfunding platforms and regulated secondary marketplaces lower barriers to access venture-stage cap tables.

Oliver's take: Skip. Not AI.
— The Unvarnished AI Gazette · Tuesday, April 28, 2026 · 122 stories from 168 sources · ← Back to today's edition