
Of 40 items, 17 noteworthy stories were selected


  1. Anthropic Eyes Massive Round at Near-Trillion Valuation ⭐️ 9.0/10
  2. Triton 3.7.0 Adds Frontend and Backend Upgrades ⭐️ 8.0/10
  3. Google’s new reCAPTCHA breaks on de-Googled Android ⭐️ 8.0/10
  4. Why WebRTC May Be the Wrong Fit for OpenAI ⭐️ 8.0/10
  5. AI Is Breaking Vulnerability Disclosure Norms ⭐️ 8.0/10
  6. AWS us-east-1 outage disrupts services ⭐️ 8.0/10
  7. Meta Ends Instagram DM End-to-End Encryption ⭐️ 8.0/10
  8. Mojo Reaches 1.0 Beta ⭐️ 8.0/10
  9. HN reports an actual UUIDv4 collision ⭐️ 8.0/10
  10. Mozilla Uses Claude to Harden Firefox ⭐️ 8.0/10
  11. Anthropic Takes Colossus 1 Compute Capacity ⭐️ 8.0/10
  12. Canvas Attack Disrupts U.S. Schools During Finals Week ⭐️ 8.0/10
  13. Supreme Court Rejects Trump’s Global Tariffs ⭐️ 8.0/10
  14. Cloudflare to Cut 1,100+ Jobs Amid AI Restructuring ⭐️ 8.0/10
  15. US Suspects Nvidia Chips Smuggled to Alibaba via Thailand ⭐️ 8.0/10
  16. DeepSeek Reportedly Seeks First Major Funding at $45B Valuation ⭐️ 8.0/10
  17. Apple Reportedly Weighs Ending TSMC-Only Chip Production ⭐️ 8.0/10

Anthropic Eyes Massive Round at Near-Trillion Valuation ⭐️ 9.0/10

Anthropic is reportedly considering raising tens of billions of dollars this summer to fund a major expansion of its compute infrastructure. The move could lift its valuation close to $1 trillion and potentially put it ahead of OpenAI in private-market value. If completed, this would be one of the largest AI financing events ever and a sign that frontier model development is becoming even more capital-intensive. It would also highlight how quickly investor expectations around leading AI labs can shift based on enterprise traction and infrastructure needs. Anthropic reportedly already raised $30 billion in February at a $38 billion post-money valuation, and the new implied range cited in private secondary trading is roughly $1 trillion to $1.2 trillion. The article says the surge is being driven by rapid enterprise customer growth, while OpenAI’s comparable trading valuation is about $880 billion.

telegram · zaihuapd · May 8, 11:15

Background: Anthropic and OpenAI are two of the most closely watched companies in generative AI, where the cost of training and serving large models can be enormous. A company’s valuation in private markets often reflects investor expectations about future revenue, customer growth, and the compute capacity needed to stay competitive. Secondary trading platforms such as Forge Global can provide hints about how investors value shares between financing rounds.

Tags: #Anthropic, #OpenAI, #AI funding, #valuation, #generative AI


Triton 3.7.0 Adds Frontend and Backend Upgrades ⭐️ 8.0/10

Triton released v3.7.0, adding new frontend ops such as tl.squeeze and tl.unsqueeze, support for scaled batched matrix multiplication, and direct FP8 constant creation. The release also includes backend, profiling, testing, CI, build, and documentation improvements, along with various bug fixes and performance work. Triton is widely used for writing high-performance GPU kernels in a Python-like way, so even incremental releases can materially affect ML compiler and kernel development workflows. New frontend operators and backend fixes can reduce boilerplate, improve performance, and make it easier to target NVIDIA and AMD/HIP systems. The release highlights support for scaled BMM, FP8 constants, optional device arguments for preload, and plugin hooks for out-of-tree TTIR/TTGIR passes and Triton dialect plugins. It also mentions backend work around LLVM bumps, 2CTA/multicast/TMA support, and profiling/testing infrastructure updates, which suggests a broad maintenance-and-expansion cycle rather than a single flagship feature.

github · atalman · May 7, 22:19

Background: Triton is an open-source GPU programming language and compiler stack designed to help developers write efficient kernels without working directly in CUDA for every task. In Triton, the frontend defines tensor operations and the compiler lowers them to backend-specific GPU code for NVIDIA and AMD targets. Features like batched matrix multiplication and FP8 support are especially relevant for modern ML workloads, where memory bandwidth and numerical format efficiency matter a lot.
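The release notes don't spell out the new ops' semantics, but tl.squeeze and tl.unsqueeze presumably mirror their NumPy/PyTorch namesakes, removing or inserting a length-1 axis. A NumPy sketch of that assumed behavior:

```python
import numpy as np

# Assumed semantics: squeeze removes a length-1 axis, unsqueeze inserts one.
x = np.zeros((4, 1, 8))
squeezed = np.squeeze(x, axis=1)          # (4, 1, 8) -> (4, 8)
unsqueezed = np.expand_dims(squeezed, 0)  # (4, 8) -> (1, 4, 8)
print(squeezed.shape, unsqueezed.shape)   # (4, 8) (1, 4, 8)
```

In Triton itself these ops would appear inside a @triton.jit kernel operating on device tensors rather than on host arrays.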


Tags: #Triton, #GPU programming, #ML compiler, #CUDA, #AMD/HIP


Google’s new reCAPTCHA breaks on de-Googled Android ⭐️ 8.0/10

Google’s updated reCAPTCHA / fraud-defense approach appears to stop working on de-Googled Android devices, according to a Hacker News discussion. The story ties the change to Google Cloud Fraud Defense, which is being compared by commenters to remote attestation and the now-abandoned Web Environment Integrity idea. If web access starts depending on Google-backed device attestation, users who remove Google services for privacy or security reasons may be blocked from sites and apps. That would deepen the tension between anti-abuse systems and user autonomy across the Android and web ecosystems. The discussion suggests the mechanism is based on remote attestation, where a device proves something about its integrity to a server rather than just solving a visual CAPTCHA. Commenters also noted that this kind of approach is difficult to separate from device identity and can be controversial because it may create stronger tracking or compatibility problems.

hackernews · anonymousiam · May 8, 18:45

Background: reCAPTCHA is Google’s anti-bot system, originally designed to distinguish humans from automated traffic. Remote attestation is a security technique where a device uses trusted hardware or a protected environment to prove its state to a remote server. On Android, attestation has long been used in security features such as SafetyNet and key attestation, which are meant to help services decide whether a device appears trustworthy.


Discussion: Commenters were largely skeptical and concerned, with several arguing that the new system is effectively remote attestation dressed up as fraud defense. Others shared real-world friction from de-Googled or GrapheneOS-style setups, including banks and services that fail when Google dependencies are removed, while some criticized the trend as a privacy-hostile shift toward web-wide identity checks.

Tags: #reCAPTCHA, #Android privacy, #remote attestation, #anti-abuse systems, #Google


Why WebRTC May Be the Wrong Fit for OpenAI ⭐️ 8.0/10

A Hacker News discussion tied to the blog post “WebRTC is the problem” argues that WebRTC is a cumbersome fit for OpenAI-style real-time AI streaming. The thread explores alternatives such as WebTransport, WebCodecs, and persistent connection-based architectures for lower-friction browser-to-server media delivery. This matters because AI voice and streaming applications depend on fast, reliable, low-latency transport, and the choice of browser API can strongly affect user experience and implementation complexity. If newer APIs prove easier to use for these workloads, they could shift parts of the real-time web stack away from WebRTC. The discussion contrasts WebRTC with WebTransport, which is positioned as a more flexible option for data-driven real-time applications, and with WebCodecs, which provides low-level access to audio and video frames. Commenters also highlight that many practical systems instead use persistent connections, where the client keeps a long-lived connection open and streams data continuously rather than repeatedly reconnecting.

hackernews · atgctg · May 7, 17:11

Background: WebRTC is a browser technology designed for real-time audio, video, and data exchange, often used in video calls and interactive media apps. It is powerful, but it also brings signaling, ICE, STUN, TURN, and handshake complexity that developers must handle. WebTransport is a newer browser transport API aimed at flexible, low-latency communication, while WebCodecs exposes lower-level encode/decode primitives for media processing. These tools matter because AI assistants that stream speech or tokens in real time need fast bidirectional communication and tight control over latency.
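The persistent-connection pattern commenters favor can be sketched with nothing but Python's asyncio: the client holds one long-lived TCP connection open and the server pushes newline-framed "tokens" as they become available. This is an illustrative stand-in for a WebTransport stream or WebSocket, not any specific product's protocol:

```python
import asyncio

# Server side: stream tokens over one long-lived connection as they are
# produced, instead of the client polling or reconnecting per message.
async def handle(reader, writer):
    for token in ["Hello", ", ", "world", "!"]:
        writer.write(token.encode() + b"\n")  # flush each token immediately
        await writer.drain()
        await asyncio.sleep(0.01)             # stand-in for model latency
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    tokens = []
    while line := await reader.readline():    # empty bytes at EOF ends loop
        tokens.append(line.decode().rstrip("\n"))
    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return tokens

print(asyncio.run(main()))  # ['Hello', ', ', 'world', '!']
```

The point of the sketch is the shape of the exchange: one handshake up front, then continuous streaming, which is what WebRTC's signaling/ICE/TURN machinery makes comparatively heavyweight to reach.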


Discussion: The comments are split but generally technical and experience-driven. Some participants say WebRTC is painfully complex and overengineered for this use case, while others argue that it already works well at scale for voice agents and that its startup friction is worth it once a connection is established. Several commenters also push back on the idea that users will tolerate extra latency, saying instant responses are crucial to preserving the “magic” of the interaction.

Tags: #WebRTC, #WebTransport, #OpenAI, #real-time streaming, #browser APIs


AI Is Breaking Vulnerability Disclosure Norms ⭐️ 8.0/10

The piece argues that AI is making long-standing vulnerability disclosure practices much easier to exploit, especially when patches and source changes are made public. It says that the old balance between giving vendors time to fix bugs and keeping enough transparency for the community is breaking down. If public patches can be turned into working exploit guidance faster, then open-source development, source transparency, and coordinated disclosure all become riskier. That affects software vendors, open-source projects, security teams, and users who depend on rapid patching to stay safe. The discussion centers on public patch diffs, commit history, and other transparency mechanisms that attackers can inspect before defenders finish rolling out fixes. Commenters note that this is not entirely new, but AI lowers the skill and time needed to reverse patches into exploit ideas.

hackernews · speckx · May 8, 17:55

Background: Coordinated vulnerability disclosure is the common model where researchers privately report a bug, and the vendor gets time to patch before details are made public. Full disclosure is the more aggressive model where vulnerability details are published quickly, sometimes before a patch exists. Public source repositories and patch diffs have long made it possible for attackers to study fixes and infer the original flaw, and AI appears to be accelerating that process.
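The core mechanic is easy to demonstrate with the standard library: a toy "security patch" whose public diff points straight at the original flaw. The function and file names here are invented for illustration:

```python
import difflib

# A toy security patch: before/after differ only in a bounds check.
before = """\
def read_field(buf: bytes, offset: int, length: int) -> bytes:
    return buf[offset:offset + length]
"""
after = """\
def read_field(buf: bytes, offset: int, length: int) -> bytes:
    if offset < 0 or offset + length > len(buf):
        raise ValueError("out-of-bounds read")
    return buf[offset:offset + length]
"""
# The unified diff is exactly what a public commit exposes; the added
# lines advertise that the old code had no bounds check.
patch = "".join(difflib.unified_diff(
    before.splitlines(keepends=True),
    after.splitlines(keepends=True),
    fromfile="parser.py", tofile="parser.py",
))
print(patch)
```

Attackers have always read diffs this way; the article's claim is that AI shortens the step from "what the fix guards against" to a working exploit.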


Discussion: Commenters largely agreed with the article’s direction, but several argued that the underlying problem predates AI and was already visible in open-source diffs, kernel commits, and tools for reversing and decompiling software. Others pointed to Log4Shell as a concrete example of how public fixes can trigger attacks almost immediately, while some questioned whether shorter embargoes would meaningfully help defenders.

Tags: #cybersecurity, #vulnerability disclosure, #AI security, #open source, #software supply chain


AWS us-east-1 outage disrupts services ⭐️ 8.0/10

AWS’s us-east-1 region suffered an outage tied to a cooling-related thermal event at a North Virginia data center, and recovery was expected to take hours. The disruption affected EC2 and other downstream services, with AWS acknowledging that some impact could persist during restoration. us-east-1 is one of AWS’s most heavily used regions, so problems there can ripple across many apps and businesses that depend on AWS infrastructure. The incident renews concerns about cloud concentration risk and whether multi-region redundancy is sufficient in practice. AWS indicated that the affected area included one of its Availability Zones, use1-az4, and reports noted EC2 instances and EBS volumes on impacted hardware were impaired by the loss of power during the thermal event. Commentary in the coverage also pointed out that the AWS dashboard and status page were behaving unreliably during the incident.

hackernews · christhecaribou · May 8, 03:31

Background: AWS divides each cloud region into multiple Availability Zones so customers can build systems that survive the failure of a single data center or zone. EC2 provides virtual servers, and EBS provides block storage for those servers, so issues in the underlying infrastructure can affect both compute and storage at once. us-east-1 in North Virginia is one of AWS’s oldest and busiest regions, which is why outages there often attract extra attention. Repeated incidents in the region have made it a symbol of the operational challenges involved in running large-scale cloud services.


Discussion: Commenters largely viewed us-east-1 as AWS’s recurring weak point, arguing that repeated failures there undermine the company’s redundancy messaging. Others raised questions about cooling design and operational safeguards, while one comment highlighted the security and abuse risk if an insider could exploit such outages.

Tags: #AWS, #cloud outage, #us-east-1, #infrastructure resilience, #distributed systems


Meta Ends Instagram DM End-to-End Encryption ⭐️ 8.0/10

Meta is reportedly disabling end-to-end encryption for Instagram direct messages, reversing a privacy feature that had been available in the app. The change has sparked debate over whether the company is prioritizing safety and moderation over private messaging. End-to-end encryption is a core privacy control for messaging apps, so removing it can reduce users’ confidence that their chats stay private. The decision matters to Instagram’s huge user base and to the broader industry debate over how platforms balance privacy, safety, and regulatory pressure. End-to-end encryption means only the sender and intended recipient can read the messages, not the service provider. Reports indicate Instagram’s encrypted chats were introduced in 2023, so this is a reversal of a relatively recent privacy feature rather than a long-standing default.

hackernews · tcp_handshaker · May 8, 21:47

Background: End-to-end encryption, often abbreviated E2EE, is used in messaging apps to protect conversations from interception or provider access. It is common in privacy-focused services because it gives users stronger confidentiality, but it can also make abuse reporting and safety enforcement harder for platforms. Instagram DMs are Meta’s direct-message system inside Instagram, so changing their encryption policy affects how private those chats are by design.


Discussion: The comments are mostly critical of Meta, with several users arguing that the company is sacrificing privacy for control, safety theater, or growth incentives. Others point out the irony that Meta says few users opted in to encrypted DMs, while critics say the real issue is that privacy should have been the default in the first place.

Tags: #privacy, #end-to-end encryption, #Meta, #messaging, #platform policy


Mojo Reaches 1.0 Beta ⭐️ 8.0/10

Modular announced that Mojo has reached 1.0 beta, marking a more stable milestone for the language. The company says the beta is feature-complete for Mojo 1.0 and is intended to give developers a versioned target that should not break unexpectedly. Mojo is trying to combine Python-like usability with systems-level performance, so reaching beta makes it more credible for machine learning and performance-sensitive software. If its CPU/GPU programming model and SIMD support continue to mature, it could narrow the gap between high-level AI code and low-level optimized kernels. Modular cautions that polishing work remains before the final release. The release matters because Mojo has been positioned as a language for performance-critical code that emphasizes SIMD and cross-device CPU/GPU workflows rather than a simple wrapper around existing tooling.

hackernews · sbt567 · May 8, 02:49

Background: Mojo is a language from Modular, the company founded by Chris Lattner, who also played a major role in Swift and LLVM. Its goal is to bridge the ease of Python with the control and performance of systems languages. Modular has also positioned Mojo for work that spans CPUs and GPUs, which is why stability milestones like beta matter so much.


Discussion: HN commenters were broadly enthusiastic about Mojo’s performance story, especially its SIMD support, comptime features, and the idea of writing CPU and GPU code in one language. At the same time, some worried that the language may become less friendly to Python developers, and open-sourcing remains a recurring point of interest.

Tags: #programming languages, #Mojo, #systems programming, #machine learning, #language design


HN reports an actual UUIDv4 collision ⭐️ 8.0/10

A Hacker News post says a database flagged a duplicate UUIDv4, with the new record matching an older ID exactly: b6133fd6-70fe-4fe3-bed6-8ca8fc9386cd. The author says they are generating IDs with the npm uuid package using import { v4 as uuidv4 } from "uuid"; and are surprised because the collision seemed statistically impossible at their scale. UUIDv4 is widely treated as effectively collision-free, so a real duplicate immediately raises concerns about entropy quality, seeding, and implementation bugs. The discussion is relevant to backend engineers and distributed-system designers because it shows that “random” identifiers can fail when the underlying randomness source is weak or misused. Commenters point out that UUIDv4 depends on a high-quality entropy source, and that broken entropy, insufficient seeding, or frontend-generated IDs can all produce collisions in practice. The npm uuid package is the library being used, but the thread suggests the likely failure mode is not the UUID format itself but rather the randomness source behind it.

hackernews · mittermayr · May 8, 07:57

Background: A UUID is a 128-bit identifier used to label data in software systems, and version 4 UUIDs are supposed to be randomly generated. In theory, collisions are astronomically unlikely if the randomness source is truly strong, which is why developers often treat UUIDv4 as safe for unique IDs. However, the thread highlights a known caveat: the guarantee depends on the quality of the entropy source, not just on the UUID format. When the random source is weak or mis-seeded, collisions become possible even if the math looks safe on paper.
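A minimal Python sketch of that caveat: uuid.uuid4 draws from the operating system's CSPRNG, while a hypothetical ID generator built on a seeded user-space PRNG (the weak_uuid helper below is invented for illustration) collides deterministically the moment two processes share a seed:

```python
import random
import uuid

# uuid.uuid4() draws from os.urandom, the OS CSPRNG; collisions are
# astronomically unlikely while that entropy source is healthy.
a, b = uuid.uuid4(), uuid.uuid4()
assert a != b

# Hypothetical failure mode: a "UUID" built on a seeded, non-cryptographic
# PRNG. Two processes that start from the same seed emit identical IDs.
def weak_uuid(rng: random.Random) -> uuid.UUID:
    return uuid.UUID(int=rng.getrandbits(128), version=4)

proc1 = random.Random(42)  # imagine two servers seeded identically at boot
proc2 = random.Random(42)
print(weak_uuid(proc1) == weak_uuid(proc2))  # True: a guaranteed collision
```

The math on paper stays the same (122 random bits per v4 UUID); what changes is whether those bits are actually independent across generators.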


Discussion: The overall reaction is that this is rare but plausible, not impossible. Several commenters say they have seen similar cases and argue that weak entropy sources, bad PRNG seeding, or flawed architecture are the usual culprits, especially when IDs are generated in unreliable environments like the frontend.

Tags: #UUID, #entropy, #security, #JavaScript, #distributed-systems


Mozilla Uses Claude to Harden Firefox ⭐️ 8.0/10

Mozilla published a detailed account of using Claude Mythos Preview to find and fix hundreds of Firefox security vulnerabilities. The write-up says the model, combined with improved harnessing techniques, helped push monthly fixes from roughly 20-30 in 2025 to 423 in April 2026. This is a notable example of AI moving from noisy bug-report generation to practical security work that can materially improve a major browser. If the approach holds up, it could change how browser vendors and open-source maintainers triage vulnerabilities and scale defensive auditing. Mozilla says many attempted findings were stopped by Firefox’s existing defense-in-depth protections, which suggests the browser already blocked a large share of the model’s attack paths. The post also highlights old bugs, including a 20-year-old XSLT issue and a 15-year-old bug in the <legend> element, showing the tool was useful for surfacing long-lived issues rather than only fresh regressions.

rss · Simon Willison · May 7, 17:56

Background: Firefox is Mozilla’s open-source web browser, and security hardening means finding weaknesses before attackers do. Large language models have recently been used in vulnerability research, but open-source projects have often complained that AI-generated reports are difficult to trust because many are plausible but wrong. Claude Mythos Preview is an Anthropic model that was only available to a limited number of companies, and Mozilla used it as part of a broader workflow for scaling and filtering security findings.


Tags: #Firefox, #security, #vulnerability research, #AI-assisted development, #Mozilla


Anthropic Takes Colossus 1 Compute Capacity ⭐️ 8.0/10

Simon Willison reports that Anthropic has struck a deal with SpaceX/xAI to use all available capacity of Colossus 1, the data center behind xAI’s Memphis supercomputer. He also notes that the announcement immediately raised questions about environmental issues and whether xAI was giving up on its own models, which he says is not the case. This is a major AI infrastructure deal because it gives Anthropic access to a large pool of compute at a time when frontier model training and inference are heavily constrained by capacity. It also highlights how AI growth is increasingly colliding with environmental and regulatory scrutiny around data centers. Willison says the deal covers Colossus 1, while xAI is keeping the larger Colossus 2 data center for its own work. The post emphasizes that Colossus has a controversial environmental record, including gas turbines that initially operated without Clean Air Act permits or pollution controls by being labeled “temporary.”

rss · Simon Willison · May 7, 17:09

Background: Colossus is xAI’s Memphis-based AI supercomputer and data center, launched in 2024 to train Grok. In AI infrastructure, access to large-scale compute is often a bottleneck, so companies sometimes lease capacity or partner on facilities rather than build everything themselves. The environmental debate around data centers usually focuses on power use, local air quality, and permitting, which is why this deal drew attention beyond the AI industry.


Discussion: The discussion shown in the article is mostly critical. Andy Masley argues he would not run computing in this specific data center, and the quoted reaction to xAI’s abrupt model shutdowns reflects frustration with reliability and migration risk.

Tags: #AI infrastructure, #data centers, #Anthropic, #xAI, #environmental impact


Canvas Attack Disrupts U.S. Schools During Finals Week ⭐️ 8.0/10

Canvas, the learning management system from Instructure, was disrupted by a ransomware-related attack that left many U.S. schools unable to access grades, files, and quizzes during finals week. Instructure said Canvas had been restored for most users after entering maintenance mode while the incident was investigated. Canvas is a widely used LMS, so a disruption during finals week can directly affect exams, grading, and student access across many institutions at once. The incident also raises concern that sensitive student data may have been exposed, which could create longer-term security and privacy fallout for schools and families. ShinyHunters claimed responsibility for two incidents against Instructure in the same month, including a May 1 event in which usernames, email addresses, and student ID numbers were reportedly exposed. The outage also forced James Madison University to move an exam scheduled for Friday to Wednesday, and rumors circulating online claimed the impact could reach nearly 9,000 schools or organizations with more than 300 TB of data involved.

telegram · zaihuapd · May 8, 04:30

Background: Canvas is a web-based learning management system, or LMS, used by schools and universities to run classes online and manage assignments, grades, and assessments. Instructure is the company that develops and publishes Canvas, and the service is often central to daily academic operations. When an LMS goes down, students may lose access to coursework, while instructors may have trouble delivering exams or posting grades on time.


Tags: #cybersecurity, #ransomware, #edtech, #data breach, #school IT


Supreme Court Rejects Trump’s Global Tariffs ⭐️ 8.0/10

The U.S. Supreme Court ruled 6-3 on February 20 that the Trump administration’s global tariffs imposed under the International Emergency Economic Powers Act (IEEPA) were unconstitutional. In response, Trump signed a temporary 10% ad valorem tariff order under Section 122 of the Trade Act, set to take effect at 12:00 a.m. Eastern on February 24. The ruling sharply limits the president’s ability to unilaterally impose broad tariffs and reinforces Congress’s constitutional role over taxation and trade. Because tariffs can quickly affect prices, supply chains, and market expectations, the decision has immediate implications for U.S. trade policy and global commerce. The court said tariff authority belongs to Congress, not the president, when the executive acts beyond statutory limits. The new tariff order is temporary, covers global imports, and exempts critical minerals, energy products, fertilizers, drug ingredients, and some agricultural goods.

telegram · zaihuapd · May 8, 06:46

Background: The International Emergency Economic Powers Act, or IEEPA, is a U.S. law that gives the president special powers during national emergencies, but its use for sweeping tariffs is legally contested. Section 122 of the Trade Act is another trade tool that can be used for limited, temporary import measures. This case sits at the intersection of constitutional law, executive power, and tariff policy.

Tags: #trade policy, #Supreme Court, #tariffs, #US politics, #international economics


Cloudflare to Cut 1,100+ Jobs Amid AI Restructuring ⭐️ 8.0/10

Cloudflare said on May 7, 2026 that it will lay off more than 1,100 employees worldwide and publicly posted an internal letter from its co-founders explaining the move. The company said the layoffs are driven by rapid expansion of AI use inside Cloudflare over the past three months, with AI agents taking over many routine tasks across teams. Cloudflare is a major internet infrastructure provider, so a restructuring of this scale signals how aggressively AI is reshaping operating models in large tech companies. The move may influence how other firms think about headcount, automation, and the role of AI agents in knowledge-work workflows. The company said the layoffs will be completed in one round, not staggered over time, and employees will be notified directly by email rather than through managers. Severance includes pay through the end of 2026, U.S. health coverage through year-end, vesting extended to August 15, 2026, and pro-rated equity for employees who had not yet reached a one-year vesting cliff.

telegram · zaihuapd · May 8, 08:15

Background: Cloudflare provides internet infrastructure services such as security, performance, and network delivery tools used by websites and applications. In recent years, many companies have adopted AI agents and automation systems to handle repetitive internal work, which can change how teams are staffed and organized. A vesting cliff is the first point at which employees earn any equity, so waiving it can materially affect departing workers’ compensation.


Tags: #Cloudflare, #AI, #layoffs, #tech industry, #organizational restructuring


US Suspects Nvidia Chips Smuggled to Alibaba via Thailand ⭐️ 8.0/10

Bloomberg reported on May 8, 2026 that US authorities suspect Thailand-based OBON Corp. helped move about $2.5 billion of Super Micro servers containing advanced Nvidia chips into China, with Alibaba named as one of several end customers. Alibaba denies any business relationship with Super Micro or OBON, and Siam AI’s CEO says he has left OBON and the company was not involved. If true, the case would point to a major export-control evasion route for restricted AI hardware and show how third-country transshipment can complicate US efforts to limit advanced chip flows to China. It could also pressure Thailand’s emerging AI ecosystem and prompt Washington to intensify scrutiny of shipments and local partners in the region. Bloomberg says the alleged route involved OBON Corp., which had helped create Thailand’s sovereign AI cloud, Siam AI, and that project had received Nvidia partner status. The report says the hardware in question was Super Micro server systems rather than bare chips, highlighting how controlled AI compute can move through intermediaries.

telegram · zaihuapd · May 8, 13:23

Background: A sovereign AI cloud is an AI infrastructure stack designed to keep data, compute, and operational control within a country or institution instead of relying on foreign cloud providers. This topic matters here because US restrictions on advanced Nvidia chips have been tightening as Washington tries to curb high-end AI hardware reaching China, making transshipment through third countries a sensitive enforcement issue.


Tags: #Nvidia, #semiconductor supply chain, #export controls, #China-US tech conflict, #AI infrastructure


DeepSeek Reportedly Seeks First Major Funding at $45B Valuation ⭐️ 8.0/10

Bloomberg reports that DeepSeek may be raising its first large external funding round, with a valuation of about $45 billion. China’s state-backed National Integrated Circuit Industry Investment Fund is reportedly in talks to lead the round. If confirmed, this would mark a major capital event for one of China’s most prominent AI companies and could signal stronger state-backed support for core AI infrastructure. A deal of this scale would also underscore how strategic AI funding has become in China’s technology ecosystem. The report says this would be DeepSeek’s first major external financing, which makes it notable beyond the headline valuation. The reported involvement of a semiconductor state fund suggests the round may be tied not only to business growth but also to broader strategic priorities in China’s chip and AI supply chain.

telegram · zaihuapd · May 8, 14:59

Background: DeepSeek is an AI company that has become widely discussed in China’s technology sector. External funding rounds are when a company takes investment from outside backers, and a first major round is often watched closely because it can reshape expectations around growth, control, and strategic direction. In China, state-backed funds often play a role in sectors considered strategically important, including semiconductors and AI.

Tags: #AI, #DeepSeek, #funding, #China tech, #venture capital


Apple Reportedly Weighs Ending TSMC-Only Chip Production ⭐️ 8.0/10

The Wall Street Journal reports that Apple is considering moving some of its lower- and mid-range processors away from TSMC, ending a production strategy that has been in place since 2014. The report says Intel could begin making some Apple chips as early as 2027 using its 18A process, with manufacturing limited to foundry work rather than chip design. If true, this would mark a major supply-chain shift for Apple and reduce its dependence on a single foundry. It could also be a meaningful win for Intel’s foundry ambitions and a sign that chip buyers are increasingly diversifying manufacturing amid AI-driven capacity pressure. The reported move is said to apply only to some lower- and mid-tier chips, not necessarily Apple’s highest-end processors. The article also notes that TSMC’s production priorities are being pressured by AI customers such as Nvidia, which may be part of why Apple is looking for backup manufacturing options.

telegram · zaihuapd · May 8, 17:18

Background: TSMC has been Apple’s exclusive chip manufacturing partner since 2014, helping produce the custom silicon used across Macs, iPads, and iPhones. In a fabless semiconductor model, companies like Apple design chips but outsource physical production to foundries such as TSMC or Intel. Intel’s 18A is its advanced manufacturing process node, and using it for Apple would be notable because it would expand Apple’s manufacturing base beyond TSMC.

Tags: #Apple, #TSMC, #Intel, #semiconductors, #supply chain