HN Brief: 2026-05-12

Today’s HN was pulled in two directions. One was a deep, practical anxiety about supply-chain security: the TanStack postmortem revealed how three well-known GitHub Actions vulnerabilities were chained into a single compromise, and the Cloudflare–Canonical thread argued over whether hosting a booter’s marketing page amounts to a protection racket. The other was a broader reckoning with AI’s real-world consequences: lawyers discovering AI note-takers waive privilege, students booing a commencement speaker who called AI the next industrial revolution, and a Hollywood writer describing how out-of-work TV creators are now doing AI-training gig work. A quieter throughline of hardware and interface nostalgia surfaced too, with a terminal emulator that renders 3D rats, a collection of vintage OS screenshots, and a new driver for the Griffin PowerMate knob.

Click into the TanStack postmortem to see exactly how an attacker chained `pull_request_target`, cache poisoning, and OIDC token extraction—and why a one-week minimum release age would have stopped it. The Ratty terminal thread is worth it for the TempleOS comparisons and the split on whether terminals should stay text-only or embrace inline 3D graphics. GitLab’s restructuring announcement draws you in for the drawn-out voluntary separation window and the debate about whether AI buzzwords are the new microwave fad. Sean Goedecke’s essay on software engineering becoming a short-career profession gets at the real tension: will AI atrophy your skills, or just replace the need to reason? And the Hollywood piece is a bitter, firsthand account of what happens when a decimated creative workforce ends up training the systems that replaced them.

Postmortem: TanStack NPM supply-chain compromise [article]

783 points · 294 comments · tanstack.com · 10h ago

TanStack published a postmortem on a supply-chain attack in which an attacker chained three GitHub Actions vulnerabilities—the `pull_request_target` "Pwn Request" pattern, cache poisoning, and runtime memory extraction of an OIDC token—to publish 84 malicious versions across 42 @tanstack/* npm packages. The HN thread quickly zoomed out to debate whether any package-manager ecosystem is fundamentally safer, with people defending Go modules and Java/Maven against the constant barrage of npm incidents, while others pointed out that xz-utils happened on Linux too—though that compromise required extraordinary effort, whereas this attack merely recombined well-known published research. The practical takeaway that got the most attention was enforcing minimum release age settings across package managers, with several people confirming they'd just set theirs to a week and that doing so would have blocked this entire attack. There was also serious concern about the payload's vindictive dead-man's switch, which wipes the home directory if the stolen token gets revoked, and the thread split between those arguing for stricter dependency pinning and lockfile hygiene and those noting that lockfiles already pin everything if you commit them and use `npm ci` correctly.
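The minimum-release-age mitigation amounts to a one-line package-manager setting. The sketch below assumes pnpm's `minimumReleaseAge` option (value in minutes) and its companion exclusion list; other package managers expose similar knobs under different names, so check your own tool's docs before relying on these field names.

```yaml
# pnpm-workspace.yaml — refuse to install any package version
# published less than one week ago (10080 minutes).
# A compromised release published today stays invisible to installs
# until maintainers have had a week to catch and yank it.
minimumReleaseAge: 10080

# Optionally exempt packages you control or need immediately
# (field name assumed from pnpm's settings reference).
minimumReleaseAgeExclude:
  - "@my-org/*"
```

Paired with a committed lockfile and `npm ci` (or `pnpm install --frozen-lockfile`), this closes both halves of the attack window: nothing new sneaks in via a fresh install, and nothing freshly published gets picked up at all.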

Ratty – A terminal emulator with inline 3D graphics [article]

638 points · 209 comments · ratty-term.org · 21h ago

The linked article wasn’t available to this summarizer; from the discussion, Ratty is a terminal emulator that adds inline 3D graphics rendering, most famously demonstrated with a spinning 3D rat replacing the text cursor. The thread immediately recognized the project as a spiritual successor to TempleOS, whose author Terry Davis pioneered this kind of in-terminal 3D years ago, and the accompanying blog post confirmed that direct inspiration. There’s a real split on utility: some see it as a fun but useless prank toy, while others argue it’s genuinely useful for previewing 3D models over SSH, game development, or bringing back old 3D file browsers — especially since the graphics work across tmux sessions and reconnections. The deeper conversation quickly turned into a debate about whether terminals should evolve into full graphical environments at all, with people pointing to Kitty’s existing graphics protocol, Ghostty, and even jokes about terminals becoming web browsers, while others pushed back hard that old-school sixel support in XTerm already solved image display without the GPU dependency or the 3D spectacle.

GitLab announces workforce reduction and end of their CREDIT values [article]

476 points · 476 comments · about.gitlab.com · 11h ago

GitLab CEO Bill Staples announced a workforce reduction and the retirement of the company’s long-standing “CREDIT” values (replacing them with new principles like “Speed with Quality” and “Ownership Mindset”), all framed as a strategic pivot into the “agentic era” where AI and smaller, empowered teams will define software engineering. HN was deeply skeptical of the execution: the decision to run the restructuring “transparently” with a voluntary separation window that doesn’t close until June 1st was widely panned as a drawn-out agony for employees, with several commenters arguing that such programs tend to lose high performers (who have options) while retaining the mediocrity that can’t afford to leave. The heavy reliance on AI buzzwords — especially the grand pronouncements about machine-scale platforms and “agentic engineering” — drew sharp pushback, including an extended analogy comparing current AI hype to the 1950s microwave fad that eventually settled into being a useful utility, not the centerpiece of the kitchen. A notable faction dismissed the entire “text autocomplete” revolution as overblown and predicted a bubble burst by decade’s end, while others pivoted to recommending Forgejo and Codeberg as self-hosted alternatives for anyone tired of corporate GitLab’s direction.

If AI writes your code, why use Python? [article]

432 points · 455 comments · medium.com · 11h ago

The linked article wasn’t available to this summarizer; from the discussion, it argued that if AI writes all your code, the choice of language becomes irrelevant—so why keep using Python? The thread immediately swerved into a full-throated fight over Medium’s paywalls and pop-ups, with half the comments complaining they couldn’t even read the piece. Those who got past that split sharply: one camp agreed that AI lets you reach for compiled languages like Rust or Go for performance without needing to know them, while the other camp fired back that Python’s massive library corpus, human-readability for debugging, and frictionless iteration still win—especially when the bottleneck is network latency, not CPU cycles. A smaller contingent pushed hard for strongly-typed languages (C#, Rust) as a way to catch AI hallucinations at compile time, but the broad consensus was that the “why use Python” question ignores the real constraint: being able to comprehend and fix what the LLM produces.

Software engineering may no longer be a lifetime career [article]

423 points · 663 comments · www.seangoedecke.com · 17h ago

Sean Goedecke’s piece argues that AI may force software engineers into short, high-earning careers like pro athletes—using AI atrophies your skills, but refusing means being outcompeted. HN pushed back hard on the cognitive-atrophy premise, pointing out that using a chat UI doesn’t make you dumber any more than talking to customers all day does, and that the real risk is replacing reasoning rather than augmenting it. A major split emerged over “AI slop”: one side says generated code will be thrown away and regenerated (single-use software, no fixing needed), while the other insists that once you have users and a codebase, someone has to maintain and evolve it—and that person will need real skill. Several people dismissed the whole “lifetime career” framing, arguing software engineering was never a guaranteed lifelong gig and that the smart move is to pair coding with a domain specialty (petroleum engineering, biology, etc.) rather than staying a generalist tool specialist.

CUDA-oxide: Nvidia's official Rust to CUDA compiler [article]

401 points · 113 comments · nvlabs.github.io · 16h ago

NVIDIA Labs released cuda-oxide, an experimental compiler that lets you write GPU kernels in standard Rust and compile them directly to PTX, no DSL or FFI required. The thread quickly sorted out that it's not a replacement for existing host-side Rust CUDA crates like cudarc—the project's author showed up to clarify they're complementary, with cuda-oxide focused on generating device code while cudarc handles the host-side CUDA API. Several people pushed back on the "near drop-in replacement" claim, noting cuda-oxide sits at a different stack level and the generated PTX could actually be used from cudarc or other launchers down the line. A separate thread took the inevitable "still closed-source CUDA" angle, but others corrected that cuda-oxide doesn't use nvcc at all—it routes through rustc's MIR into its own Pliron IR and then to LLVM's NVPTX backend, so the only remaining NVIDIA dependency is the driver/toolkit runtime. The conversation also veered into automatic differentiation in Rust and whether the language's memory model maps cleanly to GPU semantics, with the docs cited for a three-layer safety approach (safe, mostly safe, unsafe) that uses `DisjointSlice` to prevent aliased mutable writes.

UCLA discovers first stroke rehabilitation drug to repair brain damage (2025) [article]

317 points · 65 comments · stemcell.ucla.edu · 14h ago

A UCLA team has published a paper in *Nature Communications* identifying a drug, DDL-920, that restores gamma oscillations in parvalbumin neurons after stroke in mice, effectively reproducing the effects of physical rehab without the patient needing to actually do the exercises. Several people who've watched stroke survivors struggle with rehab intensity called its potential game-changing, but the "in male mice" caveat got immediate pushback — not against the science itself, but against the PR spin that buries how far off human trials really are. The thread quickly derailed into a sprawling debate over supplements and lifestyle hacks for neurogenesis (lion's mane, psilocybin, Noopept, nicotine), where the consensus was that no supplement competes with sleep, exercise, and clean living, and that unchecked neurogenesis isn't automatically good — one person bluntly said too much is just brain cancer. A more productive tangent asked whether you could bypass the drug entirely with implanted electrodes or transcranial alternating current stimulation to force gamma oscillations, citing a 2023 PLOS paper that paired robotic rehab with 40 Hz stimulation to get similar motor recovery. The deeper takeaway was a correction: stroke kills cells in the infarct core that aren't coming back, but this drug targets the *connections* in surviving but disconnected networks — a distinction that kept the discussion grounded even amid the hype.

Can someone please explain whether Cloudflare blackmailed Canonical? [article]

260 points · 148 comments · www.flyingpenguin.com · 13h ago

The article carefully dissects a DDoS attack on Canonical in which the attackers' booter service, Beamed, is itself hosted and fronted by Cloudflare, and Canonical ended up paying Cloudflare to protect the very repository endpoints that were under fire—leading to a claim that Cloudflare runs a digital protection racket. The HN thread split hard on this: a lot of people made the mafia analogy, arguing Cloudflare has a perverse incentive to keep attackers on its free tier so it can sell mitigation to victims, while others pushed back just as hard, saying hosting a booter's marketing page isn't the same as material support for the attack and that the real problem is the impossibility of KYC at internet scale. Some pointed out the author conflated "hosting the attacker's site" with "hosting the attack itself," and several called the framing hyperbolic, noting that if Cloudflare kicked Beamed off, the attackers would just move to another provider or Telegram, changing nothing. A recurring counter-argument was that this logic would also implicate AWS, Azure, or any ISP that rents to both criminals and victims—a line no one is willing to draw if the internet is to stay fully open.

A.I. note takers are making lawyers nervous [article]

242 points · 178 comments · www.nytimes.com · 21h ago

The linked article wasn't available to this summarizer; from the discussion, it's about AI note-taking apps capturing every offhand remark in meetings and potentially waiving attorney-client privilege. The HN crowd immediately seized on the privilege risk, pointing out that a New York court has already ruled that transcriptions from these services are discoverable and that the third-party provider can be subpoenaed — so lawyers who think this is just a productivity hack are walking into a trap. But the real energy in the thread went into the accuracy problem: people shared horror stories of AI transcribing "France" as "Russia" with high confidence, and the discussion quickly spiraled into a deep critique of LLMs' inability to say "I don't know" — they're trained to guess rather than flag uncertainty, which is a disaster for any record that might end up in court. A few pushed back, arguing that with decent headsets the transcription is surprisingly good, but the consensus was that the fundamental architecture rewards confident hallucination over honest uncertainty, and no amount of prompt engineering fixes that.

Interaction Models [article]

197 points · 23 comments · thinkingmachines.ai · 11h ago

Thinking Machines Lab released a research preview of "interaction models" that natively handle continuous audio, video, and text streams in real-time rather than relying on external scaffolding for turn-taking. The HN thread was overwhelmingly impressed by the demos—especially the model's ability to wait patiently through a long coffee sip and its full-duplex simultaneous speech—but split on whether this is actually a breakthrough or just catching up to what Gemini Live already does. A few people pushed back that the latency still isn't human-like and that local models like Gemma4 combined with TTS will close the gap quickly, while others argued the architecture detail (200ms micro-turns with interleaved input/output) is the real differentiator from what other frontier labs have shipped. There was also sustained skepticism about the business model: the company published enough architecture details that frontier labs could replicate it, and a 276B MoE model with 12B active parameters leaves plenty of room for larger players to outscale them on intelligence.

They Live (1988) inspired Adblocker [article]

184 points · 48 comments · github.com · 7h ago

The linked article wasn’t available to this summarizer; from the discussion, it’s a GitHub project that creates a browser adblocker replacing ads with “OBEY” and “CONSUME” banners straight out of the 1988 film *They Live*. The thread quickly pivoted into a sprawling debate about the movie itself—its message on consumerism and authority, how it’s been co-opted by far-right conspiracy circles, and whether its premise that basic human urges are sinister brainwashing is actually stupid. A strong split emerged over the irony of using AI to build a tool inspired by a film about resisting mind control; some argued natural language programming is more human, others said letting AI do the coding directly contradicts the movie’s ethos. People also wanted an Apple Vision Pro AR version and debated font weight accuracy (dark gray, not black, with League Spartan as the typeface).

Microsoft Israel chief leaves amid ethical controversy [article]

180 points · 126 comments · en.globes.co.il · 14h ago

Microsoft Israel’s country general manager departed after an internal probe found that the local office allowed the Ministry of Defense to misuse Azure services, potentially exposing the company to European legal liability because some surveillance workloads ran on EU-based servers. Several people in the thread zeroed in on the fact that Microsoft is actually the *least* Israel-friendly of the big three cloud providers—it never signed the Nimbus deal, and it already cut ties with IDF Unit 8200 over mass surveillance concerns—while Google and Amazon knowingly took contracts their own lawyers warned could enable human rights abuses. Others pushed back hard, arguing Microsoft’s move was purely about dodging EU regulation, not ethics, and pointed out that the company simultaneously used FBI surveillance against pro-Palestine employees. A few commenters said they’d shift workloads to Azure as a vote of confidence, but most dismissed that as naive, predicting Microsoft will quietly resume business as usual once the legal heat subsides.

Google says criminal hackers used AI to find a major software flaw [article]

170 points · 127 comments · www.nytimes.com · 18h ago

The NYT reports that Google’s Threat Intelligence Group caught criminal hackers using AI to find a vulnerability in a popular open-source system administration tool — not Google’s own software. The thread latched onto the fact that Anthropic’s restricted Mythos model was credited, leading many to accuse the reporter of parroting marketing; a few pushed back, citing the reporter’s deep cybersecurity reporting background, while others pointed out that OpenAI’s GPT-5.5-Cyber offers equivalent capability with similar access restrictions. Several commenters argued the real story isn’t the flaw but the looming regulatory wedge: security will be the excuse to lock down open-weight and local LLMs, just as it was used against cypherpunk tools. A separate vein dismissed the whole thing as overpaid security researchers hyping their relevance to justify AI legislation, and a handful of people just wanted Google to use its own AI to fix Gmail attachment bugs instead.

I let AI build a tool to help me figure out what was waking me up at night [article]

162 points · 164 comments · martin.sh · 10h ago

A developer spent a weekend using AI coding tools to build a system that records audio when he’s asleep, correlates it with his Garmin watch data and smart-home sensors, and lets him review exactly what noise woke him up at 3am—turns out it’s neighbors’ doors, dishes, and street traffic, not his imagination. The thread split hard: half the room argued he could have just left a microphone running overnight and eyeballed the waveform, while others insisted his high CO₂ levels (3,300+ ppm in the screenshot) were the real culprit and no amount of soundproofing would fix that. A vocal contingent pushed back that 3am waking is a textbook cortisol spike, not an environmental noise problem—the author countered that he can see the noise events line up with his wake-ups in the data. Several people also called out the irony of using an AI-generated hero image for a project about measuring reality, which he promptly deleted. Underneath the practical debate, a recurring thread was whether “I have a problem → let AI build me a tool” is a healthy pattern, with the author defending it as lowering the bar for personal projects that wouldn’t have been worth the effort before.

Students boo commencement speaker after she calls AI next industrial revolution [article]

157 points · 186 comments · www.404media.co · 16h ago

A commencement speaker at UCF told graduating humanities and communications students that AI is the “next industrial revolution” and was loudly booed for that line. The HN thread largely sided with the students, arguing their reaction is justified when the pitch from tech companies is that AI will replace knowledge workers while wealth concentrates upward. A core debate broke out over whether this time is different from past technological shifts: one camp pointed to the Industrial Revolution’s bloody path to modern prosperity and argued labor always resists labor-saving tech, while the other countered that past transitions created new jobs and this one plausibly might not, leaving former knowledge workers with nothing to pivot to. A parallel split emerged around the Luddites—some readers insisted they were protesting dangerous machines and enshittified work, not automation itself, while others said the Luddites were a labor movement crushed by capital, and the same dynamic is playing out now. The thread’s undercurrent was that AI proponents are losing the next generation of adults because the pitch feels like a threat, not a promise of shared abundance.

Interfaze: A new model architecture built for high accuracy at scale [article]

137 points · 34 comments · interfaze.ai · 15h ago

The linked article introduces Interfaze, a hybrid model architecture that combines task-specific DNN/CNN components with a transformer backbone, claiming superior performance vs. flash-tier models across OCR, structured output, and speech-to-text benchmarks. The HN crowd was genuinely skeptical but engaged: several people pushed back on the structured output claims, pointing out that smaller models like GPT-5.4-nano already handle it fine, while others tested the OCR on difficult real-world documents (distorted typewriter scans, dense magazine layouts) and confirmed it genuinely outperformed both pure LLMs and specialized OCR tools they'd tried. A major point of friction was that the API pricing, while competitive with Gemini Flash, still adds up fast—one user estimated $50 for a 200-page book project, and found that using the cheaper "run task" mode significantly degraded quality. The biggest open question from the crowd was whether this is a fundamentally new architectural insight or just a well-tuned ensemble, with the arXiv paper getting linked and one deep-dive clarifying it's DNNs feeding shared vector-space tokens to the transformer, not the other way around.

Library for fast mapping of Java records to native memory [article]

136 points · 29 comments · github.com · 12h ago

The linked article wasn't available to this summarizer; from the discussion, TypedMemory uses Java records as schemas to generate bytecode for off-heap memory access, avoiding reflection. The HN crowd immediately flagged the core tension: the library's getters/setters allocate record objects, which defeats the purpose of zero-allocation high-performance work, though the author argues escape analysis and future value classes could mitigate that. Several people compared it to C#'s `Span<T>` and SBE flyweights, noting the library's Java-type-first approach differs from SBE's explicit schema/codegen pipeline. A deeper debate broke out over whether Java's long march toward value classes and flattened arrays will ever arrive fast enough to matter for ML and high-performance computing, with one side calling the incremental JEP process maddening while the other points to a deliberate unification roadmap.
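The allocation tension the thread flagged is easiest to see with the JDK's own Foreign Function & Memory API (Java 22+), which this kind of library builds on. The sketch below is not TypedMemory's actual API — the record, layout, and accessor names are illustrative — it just shows a record used as a typed view over off-heap memory, and why every read necessarily allocates a record that escape analysis may or may not elide.

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

class OffHeapPoint {
    // The record is the "schema": two contiguous 64-bit doubles,
    // x at byte offset 0 and y at byte offset 8.
    record Point(double x, double y) {}

    static void write(MemorySegment seg, Point p) {
        // Writes go straight to native memory — no allocation here.
        seg.set(ValueLayout.JAVA_DOUBLE, 0, p.x());
        seg.set(ValueLayout.JAVA_DOUBLE, 8, p.y());
    }

    static Point read(MemorySegment seg) {
        // Every read materializes a new record — the exact overhead
        // the HN thread flagged for zero-allocation workloads.
        return new Point(
            seg.get(ValueLayout.JAVA_DOUBLE, 0),
            seg.get(ValueLayout.JAVA_DOUBLE, 8));
    }

    public static void main(String[] args) {
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment seg = arena.allocate(16); // 2 doubles off-heap
            write(seg, new Point(1.5, -2.0));
            System.out.println(read(seg));
        }
    }
}
```

A codegen library replaces the hand-written offsets with bytecode derived from the record's components; the value-classes discussion in the thread is about making that `read` return a flattened, non-allocating value instead of a heap object.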

590k buyers paid $59M for Trump's gold phone, but not one has shipped [article]

134 points · 99 comments · finance.yahoo.com · 11h ago

Nearly 600,000 people put down $100 deposits on Trump-branded gold phones that were promised as "Made in the USA" and never materialized, with Trump Mobile repeatedly pushing back delivery dates and then scrubbing the release date from its website. The thread largely treated this as a predictable grift rather than a failure of execution, with many arguing the buyers weren't victims but participants in a tithing-like transaction—paying for the feeling of supporting the cause, not for a phone. A recurring split emerged around whether this is unique to Trump's orbit or just an especially blatant example of a broader pattern: Tesla's FSD pre-orders, Bitcoin miner pre-sales, and telcos pocketing broadband subsidies all got name-checked as similar "take deposits, delay indefinitely" plays. Some pushed back on the "they deserve it" framing, pointing out that vulnerable elderly relatives get sucked into these things through social trust and that treating it as caveat emptor ignores how the grift exploits the same machinery as megachurch fundraising. A minor legal tangent emerged around whether parking the $59 million in a bull market and returning only the principal would still leave the grifters ahead—with one reply noting unjust enrichment law demands they hand over all profits, but the caveat "if they get caught" hung unaddressed.

Killed by Apple [article]

128 points · 122 comments · killedbyapple.theden.sh · 17h ago

The linked article wasn't available to this summarizer; from the discussion, it's a site cataloging products and features Apple has "killed" — hardware like the iPod touch, software like Aperture, and ports like Lightning — but the thread immediately split into two camps. One side argued the list is mostly just obsolete hardware and software that got renamed or absorbed (iTunes into Music, Dashboard into widgets), making the comparison to Killed by Google feel hollow, since Google actually shuts down services people rely on. The other side pushed back hard, accusing Apple of dragging its feet on RCS, killing the home button and headphone jack unnecessarily, and abandoning professional users like the Mac Pro crowd. A third faction dismissed the whole site as a vibe-coded LLM dump with no real research, while a fourth pointed out that Apple's track record on long-term hardware support (e.g., FireWire iPods still syncing) actually makes it a leader, not a villain — the title is just clever clickbait.

European Money Pours into Palantir [article]

124 points · 40 comments · english.elpais.com · 20h ago

A new investigation from Follow The Money and El País reveals that over 100 major European banks, asset managers, and pension funds have boosted their Palantir stakes by 60% in the past year, with total holdings now worth $27 billion — despite the company's ties to ICE, the Israeli military, and CEO Peter Thiel's open anti-EU stance. The HN thread immediately split: one camp argued this is just index-fund passivity, that everyone holding Meta or Apple is equally complicit and that you can't build a retirement portfolio without these stocks. The other side counter-punched hard, pointing out that the same activist pressure that forced divestment from cluster munitions, land mines, and fossil fuels should apply here — and that Palantir is a defense contractor, not a social media company, making its human rights record a different order of magnitude. A long, derailing subthread erupted over whether modern land mines with self-destruct timers are morally acceptable for Ukraine, with people who've actually worked in war zones pushing back hard. A few commenters veered into conspiracy territory, calling Palantir a CIA front and arguing that European governments are functionally vassals of U.S. intelligence, but the substantive debate stayed on whether passive index investing lets institutions launder their ethical obligations.

Claude Platform on AWS [article]

121 points · 55 comments · claude.com · 6h ago

Anthropic announced "Claude Platform on AWS," a new offering that lets AWS customers use all Claude API features—including managed agents, code execution, and MCP connectors—with their existing AWS IAM, billing, and CloudTrail, though data is processed outside AWS's boundary. The HN thread almost entirely focused on the real motivation: this is a procurement and billing hack. For large enterprises, adding a new vendor like Anthropic directly involves procurement lawyers and months of red tape, but funneling spend through an existing AWS account lets teams just click a button and hide the cost in their giant AWS bill, even using startup credits. Some commenters immediately called this a data play—paying with your data since Anthropic processes it—but the dominant take was that this solves an organizational, not technical, problem, and that Bedrock has historically lagged months behind on features and reliability. A few people were confused by the naming, since there are now two separate "Claude on AWS" paths with opposite data governance models, creating exactly the kind of confusion AWS is infamous for.

Show HN: TikTok but for scientific papers [article]

114 points · 55 comments · andreaturchet.github.io · 15h ago

The submission is a new app called Papel that wants to bring TikTok-style feeds, AI summaries and quizzes, and social gamification to the world of academic papers. The HN reaction was deeply split: a lot of people recoiled at the very idea, calling it an obnoxious mismatch of "the medium of TikTok and the serious work of science," while others pushed back that discoverability in the ever-growing paper landscape is a real problem and this is at least trying to solve it. The most substantive criticism came from someone who detailed how the ACM's own AI-generated summaries were often subtly but factually wrong, arguing that the model cannot "understand" novel conclusions and that the tool would just democratize misunderstanding. Several commenters also pointed out that the submission is just a landing page with an email signup, technically violating Show HN rules, and many warned the developer to drop the "TikTok" and "AI" branding because both terms are now polarizing enough to repel exactly the audience he needs—despite a handful of people saying the core recommendation idea has promise.

I work in Hollywood. Everyone who used to make TV is now training AI [article]

102 points · 79 comments · www.wired.com · 20h ago

A Hollywood writer and showrunner explains how she and other out-of-work TV writers are now doing gig work as AI trainers—evaluating chatbot responses, annotating videos, red-teaming safety issues—for platforms like Mercor and Outlier, in a system she describes as chaotic, demeaning, and cruelly unpredictable. Some readers pushed back hard on her tone, pointing out the wealth signifiers (paying $150 for a maid, a Yosemite vacation) and arguing she’s squeezing the experience for a story rather than grappling with real precarity. Others who’ve done this kind of work chimed in to say the article matches their own two-week nightmare, and that the real story is the collapse of below-the-line Hollywood careers into a “Hunger Games” of temporary tasks. A separate thread veered into whether Hollywood itself has been in decline since 2010—blaming CGI slop, streaming economics, and the exodus of women after MeToo for a hollowed-out industry that’s now training its own replacements. The article’s title claim that “everyone” in Hollywood is doing this got called out as exaggeration, but the underlying picture of a decimated creative workforce funneled into algorithmic piecework went largely uncontested.

ICE to Develop Own Smart Glasses to 'Supplement' Its Facial Recognition App [article]

99 points · 47 comments · www.404media.co · 17h ago

ICE is planning to develop its own smart glasses to feed facial recognition data into its Mobile Fortify app, part of the Trump administration's mass deportation push. The HN thread immediately pivoted to two main lines of attack: this is either a grift to funnel money to connected contractors (one commenter compared it to Trump's pool guy botching a $15 million reflecting pool job) or a deliberate tool for plausible deniability, where the glasses will be so crap that they'll just return false positive matches to maximize dragnet arrests. Several people pointed out that ICE agents are already resisting body cameras, so these glasses will likely bypass existing regulations, while others dug into a 2020 US Marshals shooting where the involved agencies were chosen specifically to avoid bodycam footage. The surveillance-state parallels were hammered hard—someone noted Orwell got the concept wrong because he needed humans to watch the feeds, not automated AI—and the thread split between those calling for ICE abolition and those dismissing that as naive posturing, with a few commenters even suggesting police should be privatized like healthcare (which got rightly roasted).

Red Hot Chili Peppers ink $300M deal with Warner Music to sell catalog [article]

83 points · 102 comments · www.hollywoodreporter.com · 12h ago

The Red Hot Chili Peppers sold their recorded catalog to Warner Music for over $300 million, a deal that had been rumored for a while. The main surprise in the discussion was how low that number seemed compared to Queen’s $1.27 billion sale—some argued the Peppers are far less globally recognizable outside the US, while others pointed out that the market and interest rate environment were different when Queen sold. A split emerged over whether AI-generated music will crater licensing revenue for older catalogs, with one side saying thirty-year-old hits have already peaked and AI slop will flood the market, and the other side insisting that real, proven songs by real people hold durable cultural value—Bach is still popular, wedding receptions still need familiar tracks, and live shows aren’t going anywhere. A few commenters also lamented that these private-equity-backed catalog acquisitions are why soundtracks in games and TV are getting worse, though someone deadpanned that RHCP being locked up in Warner is probably a favor.

Software Internals Book Club [article]

82 points · 13 comments · eatonphil.com · 5h ago

The linked article describes Phil Eaton's Software Internals Book Club, a large email-based group that works through dense technical books on databases and distributed systems, currently reading *Operating Systems: Three Easy Pieces*. HN was broadly supportive of the concept and reading list, but the comments quickly splintered into practical complaints: the requirement for a LinkedIn signup drew sharp pushback, with several people pointing out that a workaround exists and that the host is fine with it, while others questioned whether deliberately bypassing that gate could technically violate the Computer Fraud and Abuse Act. A separate criticism landed on the decision to host discussion purely via a Google Group, with people calling it a poor choice of platform for a "senior+ developer" audience. One person recommended a companion blog post on how the club is actually run, and others wished for similar clubs tackling math or an updated version of a classic networking book that's now outdated on HTTP/3.

Palantir to be granted "unlimited access" to UK NHS patient data [article]

81 points · 15 comments · www.digitalhealth.net · 13h ago

The article reports that NHS England plans to give Palantir and other external staff "admin" roles with effectively unlimited access to identifiable patient data on the federated data platform, abandoning the previous case-by-case approval process. The HN thread is almost uniformly hostile to the move, with many calling it a blatant erosion of privacy and a gift to a controversial US contractor, and several commenters pointing out that the stated justification—that applying for individual data access is "too inconvenient"—makes a mockery of security controls. There's a recurring split between those who see this as inevitable political corruption and those who argue the real scandal is that patient data sharing is opt-out rather than opt-in, with one person recounting how their own GP seemed to think requesting privacy was politically incorrect. A few commenters bring up GDPR compliance and the recent closure of NHS source code as further evidence of a pattern, while others dismiss the whole thing as yet another reason to give up hope on UK governance.

Show HN: OpenGravity – A zero-install, BYOK vanilla JS clone of Antigravity [article]

80 points · 24 comments · github.com · 11h ago

A high school student built a zero-install, vanilla JS clone of Google’s Antigravity IDE because he kept hitting usage limits and got fed up with “agent terminated” errors — it uses WebContainer API for a real Linux terminal in the browser and stores your API key only in localStorage. The thread immediately jumped on the name, with multiple people pushing for “ZeroGravity” instead, and the author actually seemed ready to rebrand. A solid split emerged: some commenters loved the idea of a lightweight, open-source alternative that bypasses Antigravity’s siloed $20/month subscription (you can just plug in a free Gemini API key and get ~250 requests a day), while others warned that using consumer API keys likely violates the ToS and could get your account revoked. Technical feedback focused on making the agent loop safer — concrete suggestions like adding a plan/checkpoint before file writes and a diff/revert view after each tool run, because WebContainer state gets fuzzy fast. A couple of skeptics questioned why anyone would bother cloning Antigravity at all, but the overall vibe was impressed that a GCSE student pulled this off in a few days.
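The checkpoint-before-write suggestion from the thread can be sketched in a few lines of plain JavaScript. Everything here is hypothetical—the function names and the in-memory file map are illustration only, not OpenGravity's actual code—but it shows the shape of the safety net commenters were asking for: snapshot state before a tool run, diff afterwards, revert if the agent went off the rails.

```javascript
// Snapshot the current file state so a later revert is possible.
function checkpoint(files) {
  return new Map(files);
}

// Compare two file maps and report what the agent changed.
function diff(before, after) {
  const changes = [];
  for (const [path, content] of after) {
    if (!before.has(path)) changes.push({ path, kind: "added" });
    else if (before.get(path) !== content) changes.push({ path, kind: "modified" });
  }
  for (const path of before.keys()) {
    if (!after.has(path)) changes.push({ path, kind: "deleted" });
  }
  return changes;
}

// Restore the snapshot wholesale if the user rejects the diff.
function revert(snapshot) {
  return new Map(snapshot);
}

// Usage: checkpoint, let the agent write, then inspect the diff.
const files = new Map([["app.js", "console.log(1)"]]);
const snap = checkpoint(files);
files.set("app.js", "console.log(2)"); // agent edits a file
files.set("new.js", "");               // agent creates a file
console.log(diff(snap, files));        // one "modified", one "added"
```

A real implementation would persist snapshots across WebContainer restarts, but the core idea is just immutable copies plus a structural diff shown to the user before changes are accepted.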

Screenshots of Old Desktop OSes [article]

77 points · 20 comments · www.typewritten.org · 2h ago

The linked article wasn't available to this summarizer; from the discussion, it's a personal collection of screenshots from vintage desktop operating systems spanning roughly the 80s through early 2000s. The thread quickly turned into a nostalgia-and-discovery session, with people dropping links to similar archives like guidebookgallery.org and toastytech.com, and pointing out omissions—GEOS got a mention, and one person wondered where the author found a pre-X-integration NeWS copy. The comments split between appreciating how much hasn't changed (CDE 1.0 looks nearly identical to the latest, and `df` barely budged since 1985) and lamenting what *didn't* stick, like pie menus that were apparently killed by bogus patents. A few people dove into arcana—whether underlined terms in an HP-UX manpage were actually hyperlinks—and the general vibe was that this is a lovingly obsessive rabbit hole for interface nerds.

Griffin PowerMate driver for modern macOS [article]

73 points · 25 comments · github.com · 10h ago

A GitHub repo shipping a modern driver for the old Griffin PowerMate knob stirred up a wave of nostalgia—half the thread is people realizing they still own one, if only they could find it in the attic. The conversation quickly pivoted to what to buy today, with recommendations ranging from the Microsoft Surface Dial and Elgato Stream Deck to the Loupedeck, which one person argues is the only current knob that matches the PowerMate's weighted, damped feel. Another voice countered that cheap Drok or AliExpress encoders work fine for volume or scroll but lack the satisfying tactile resistance that made the PowerMate a fidget toy. A few technical notes landed too: someone warned that pressing the button while the Mac slept used to cause a kernel panic, and a separate commenter casually mentioned they already wrote a new driver for the Bluetooth variant a few months ago.
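For a sense of why writing such a driver is tractable, here's a hedged sketch of decoding the knob's input report. The layout assumed below—byte 0 is button state, byte 1 is a signed rotation delta—matches common descriptions of the original USB PowerMate (it's how the Linux driver reads it), but may differ for the Bluetooth variant, so treat it as an assumption:

```javascript
// Decode a PowerMate-style HID input report (assumed layout:
// [button, rotationDelta, ...]) into a button state and a signed
// rotation step. HID bytes arrive unsigned, so the delta byte
// must be reinterpreted as a signed value in -128..127.
function decodeReport(bytes) {
  const raw = bytes[1];
  const delta = raw > 127 ? raw - 256 : raw;
  return { pressed: bytes[0] !== 0, delta };
}

// One tick counter-clockwise while the knob is pressed:
console.log(decodeReport([1, 0xff])); // { pressed: true, delta: -1 }
```

The rest of a driver is plumbing: subscribing to input reports from the OS HID layer and mapping deltas onto volume, scroll, or whatever the knob is bound to.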

30 threads · window 24h · articles fetched 20/30 (skipped 4, failed 6)
Generated 2026-05-12 08:03 UTC

Generated by Sauron from Hacker News discussions and linked articles.