Semiconductors, better known as “chips,” might sound abstract if you don’t work in hardware. But they power nearly everything in our daily lives: phones, laptops, cars, and increasingly, the infrastructure behind AI. The real challenge today isn’t just having enough data; it’s also having the computing power to process it. Chips are the bottleneck, and producing them is staggeringly complex, capital-intensive, and geopolitically sensitive.
Like many people in tech, I’ve heard the word “semiconductors” thrown around all the time but never really understood why they’re so central to everything. A few friends recommended Chip War by Chris Miller last year, and I finally finished it a few weeks ago. Loved it. Here are four ideas that stuck with me:
1. Moore’s Law becomes an industry growth roadmap
Moore’s Law is often described as an observation: the number of transistors on a chip doubles roughly every two years, driving exponential increases in computing power.
What I didn’t realize is that it became a self-fulfilling growth roadmap for the semiconductor industry—a shared goal that governments, investors, and chipmakers aligned around. Despite concerns about physical and economic limits, companies organized roadmaps around making this "law" true, treating it less like physics and more like a shared mission. It’s a powerful example of how a narrative turns a forecast into a coordination mechanism for a global industry.
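To make the compounding concrete, here is a quick back-of-the-envelope sketch of my own (not from the book): assuming a doubling every two years and starting from roughly 2,300 transistors on the Intel 4004 in 1971, the naive projection lands in the tens of billions by the 2020s, about where today’s largest chips actually sit.

```python
# Back-of-the-envelope sketch of Moore's Law as compounding growth
# (my own illustration, not taken from the book).
# Assumption: transistor counts double every two years, starting from
# ~2,300 transistors on the Intel 4004 in 1971.

def projected_transistors(start_count: int, start_year: int, year: int,
                          doubling_period_years: float = 2.0) -> float:
    """Project a transistor count under a simple doubling model."""
    return start_count * 2 ** ((year - start_year) / doubling_period_years)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{projected_transistors(2_300, 1971, year):,.0f}")

# 2021 comes out around 77 billion under this naive model, roughly the same
# order of magnitude as today's biggest chips, which is part of why the
# "law" worked so well as an industry roadmap.
```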
2. Early demand rarely points to the final use case
When transistors were first invented, few people knew what to do with them. Beyond replacing bulky vacuum tubes, their potential seemed limited, much as it’s hard for most of us today to envision how AI could fundamentally change our lives.
What changed everything was an unexpected early adopter: the U.S. military. Defense agencies and NASA needed compact, high-performance electronics for missiles and space exploration in the 1960s. That early niche demand gave semiconductors a launchpad to scale production. As costs dropped, chips moved into everyday consumer products: radios, calculators, and eventually, personal computers.
What struck me most was a surprising parallel with modern UX and product strategy: Fairchild Semiconductor didn’t just wait for demand to emerge. They actively imagined it, creating detailed blueprints of future consumer devices powered by chips before the market even existed. It was a way to reduce uncertainty and spark demand, much like today’s visionary product mockups or AI pitch decks that help people visualize what doesn’t exist yet.
3. Why did Intel fall behind in the AI race while Nvidia took the lead?
Intel led the personal computing revolution, driven by Bob Noyce’s bold bet on microprocessors in the 1970s and his belief in a future of personal computing that few shared at the time. But in the AI and graphics era, Intel struggled to keep up, especially in advanced chip manufacturing and AI infrastructure, where Nvidia and TSMC moved faster and captured the momentum.
Despite early investments in foundational technologies like EUV lithography, which underpins the advanced manufacturing processes today’s leading GPUs depend on, Intel was slow to pivot toward AI computing. Nvidia, on the other hand, recognized the opportunity early and bet aggressively on AI acceleration, developing CUDA and positioning its GPUs as the backbone of AI computing. What began as a graphics company transformed into a core infrastructure player for AI.
Beyond technical challenges and leadership strategy, company culture played a key role in this divergence. Intel’s structured, risk-averse environment prioritized predictability and incremental progress, a pattern consistent with the classic innovator’s dilemma, where incumbents hesitate to disrupt their own successful models even as new paradigms emerge. In contrast, Nvidia built a fast-moving, mission-driven culture with a flat hierarchy and tight feedback loops. Under Jensen Huang’s leadership, the company has been able to move quickly and shape the AI landscape. Building a timeless company isn’t about one single bold move; it’s about making the right bets at the right time, again and again.
4. How did Asia break into the high-value part of the supply chain?
When we think of semiconductors, we often picture Silicon Valley. But today, the center of gravity for advanced chip manufacturing lies in Asia.
Taiwan produces nearly 40% of the world’s new computing power each year. South Korea dominates memory chips, and Japan supplies critical materials like silicon wafers and specialty gases. Europe and the U.S. still lead in chip design and manufacturing equipment, like ASML’s EUV machines and ARM’s architectures, but the most complex and valuable fabrication steps are concentrated in Asia.
This shift wasn’t accidental. Asian governments took a proactive, hands-on approach, shaped by a Confucian-influenced philosophy of state-guided development. They funneled capital into the industry, pushed banks to fund strategic sectors, hired U.S.-trained engineers, kept their exchange rates undervalued, and secured technology transfer through partnerships. In Taiwan and South Korea, support from the U.S., motivated in part by geopolitical rivalry with Japan, also played a key role.
Today, power in chips isn’t just about who makes them, it’s also about who buys them. China, though behind in cutting-edge chips, controls massive demand for lower-end components. That market power gives it leverage, as it remains both the U.S.’s biggest customer and competitor. It’s a complex balance of dependency and rivalry, one shaped as much by market dynamics as it is by politics and culture.
It’s fascinating to learn how one of today’s most critical industries has been shaped not just by technology, but by the interplay of markets, culture, and geopolitics. As we explore emerging use cases for AI, the history of the chip industry offers a mirror: technological shifts are rarely just about engineering; they’re about timing, narrative, and the systems we build around them.