Technology is not just the future. It is already reshaping medicine, energy, work, and human capability in ways that would have seemed like science fiction a decade ago. The more useful question is: which technologies will matter most, how close are they to delivering on their promises, and what trade-offs come with them? The answer is more nuanced than the hype suggests.
AI in Medicine: Promising but Not Superior
Artificial intelligence is often pitched as the technology that will revolutionize healthcare, catching diseases earlier and more accurately than any human doctor. The reality, so far, is more modest. A meta-analysis of 83 studies published in npj Digital Medicine found that generative AI models achieved an overall diagnostic accuracy of 52.1%. That’s roughly comparable to non-expert physicians, with no statistically significant difference between the two groups. But when AI was measured against experienced specialists, the specialists outperformed it by about 16 percentage points.
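To see what “no statistically significant difference” means in practice, here is a minimal sketch of a two-proportion comparison. The accuracy figures come from the meta-analysis as quoted above; the per-group case counts are hypothetical placeholders, since the article doesn’t report them.

```python
from math import erf, sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Accuracy figures from the article; sample sizes are hypothetical.
ai, nonexpert, expert = 0.521, 0.521, 0.521 + 0.16
n = 500  # assumed number of cases per group

print("AI vs non-experts: z=%.2f, p=%.3f" % two_proportion_z(ai, n, nonexpert, n))
print("Experts vs AI:     z=%.2f, p=%.2g" % two_proportion_z(expert, n, ai, n))
# With equal accuracies the difference is z=0 (no significance); the
# 16-point expert gap is significant at almost any plausible sample size.
```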
This doesn’t mean AI in medicine is a dead end. It means the technology is currently best suited as a support tool, particularly useful for general practitioners or in settings where specialists aren’t available. The pattern across radiology, ophthalmology, dermatology, and emergency medicine was consistent: AI can match a generalist but not yet replace an expert. That gap will likely narrow, but for now, the most realistic role for AI in healthcare is assisting doctors rather than replacing them.
Gene Editing Has Already Arrived
CRISPR gene editing crossed a major threshold when the FDA approved the first CRISPR-based therapy, Casgevy, for sickle cell disease. This isn’t a future possibility. It’s a treatment patients are receiving now. In clinical trials, 28 patients with sickle cell disease received the therapy, and all treated patients in one trial were free of painful crises after infusion. Normal hemoglobin levels were maintained for up to 18 months of follow-up. A parallel trial for transfusion-dependent beta thalassemia, a severe form of inherited anemia, showed hemoglobin rising above the threshold at which patients no longer needed regular blood transfusions.
The limitations are real, though. There is currently no direct way to measure what percentage of a patient’s cells were successfully edited or to detect unintended genetic changes. Off-target edits and long-term immune reactions remain open questions that require years of monitoring. Researchers are also exploring CRISPR for cancer, HIV, and inherited blindness, but those applications are still in early trials. Gene editing is transformative, but it’s advancing one disease at a time, not all at once.
Quantum Computing: Still Waiting for Usefulness
Quantum computers exist, and chips with over 100 qubits are already available. Google’s quantum chip Willow demonstrated “quantum supremacy” late last year, solving a problem faster than any classical supercomputer could. That sounds like a breakthrough, and in a narrow technical sense it is. But the problem it solved was specifically designed to be easy for a quantum computer. It didn’t solve anything useful in the real world.
The core challenge is stability. Qubits are fragile and error-prone, so reliable computation requires bundling many physical qubits into error-corrected “logical qubits.” Developers have so far demonstrated only individual logical qubits. Scaling from one to the thousands required for practical applications like drug discovery, cryptography, or materials science remains an enormous engineering problem. Quantum computing will likely matter enormously in the long run, but its timeline for real-world impact is measured in decades, not years.
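To see why going from one logical qubit to thousands is so hard, here is a back-of-the-envelope sketch using a common rule-of-thumb model for the surface code. The formula and all its constants (threshold error rate, qubit overhead) are textbook approximations I’m assuming, not figures from the article.

```python
# Rule-of-thumb surface-code model (a textbook approximation, not from the
# article): logical error rate ~ 0.1 * (p / p_th) ** ((d + 1) / 2),
# with threshold p_th ~ 1%, and ~2*d^2 - 1 physical qubits per logical qubit.

def physical_qubits_needed(p_physical, target_logical_error, n_logical,
                           p_threshold=1e-2):
    d = 3
    while 0.1 * (p_physical / p_threshold) ** ((d + 1) / 2) > target_logical_error:
        d += 2  # surface-code distance is conventionally odd
    return d, (2 * d * d - 1) * n_logical

d, total = physical_qubits_needed(p_physical=1e-3,      # good hardware today
                                  target_logical_error=1e-12,
                                  n_logical=1_000)
print(f"code distance {d}, ~{total:,} physical qubits")
# Roughly a thousand physical qubits per logical qubit at these error rates:
# a thousand useful qubits means on the order of a million raw ones.
```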
Batteries and Energy Storage
The energy transition depends heavily on better batteries, and solid-state batteries represent the most significant leap on the horizon. Standard lithium-ion batteries top out around 250 to 270 watt-hours per kilogram. Solid-state designs using lithium metal can theoretically reach 280 to 410 watt-hours per kilogram, a jump that would translate directly into longer-range electric vehicles and more practical grid storage.
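The link between energy density and range is straightforward arithmetic. A minimal sketch, using the densities above plus an assumed cell mass and consumption figure (both placeholders, not from the article):

```python
# Energy densities from the text; cell mass and consumption are assumptions.
CELL_MASS_KG = 300             # assumed mass budget for cells in an EV pack
CONSUMPTION_KWH_PER_KM = 0.18  # assumed average consumption

def range_km(wh_per_kg):
    pack_kwh = wh_per_kg * CELL_MASS_KG / 1000
    return pack_kwh / CONSUMPTION_KWH_PER_KM

for label, density in [("lithium-ion (260 Wh/kg)", 260),
                       ("solid-state low (280 Wh/kg)", 280),
                       ("solid-state high (410 Wh/kg)", 410)]:
    print(f"{label:28s} -> {range_km(density):5.0f} km")
# At the same cell mass, 410 Wh/kg buys ~58% more range than 260 Wh/kg.
```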
The catch is manufacturing. Current thin-film solid-state batteries are difficult to produce at scale, and costs remain high. The headline energy-density figures also come from laboratory cells, where the amount of active material varies from study to study. Getting those numbers into affordable, mass-produced cells is the bottleneck. Several automakers and battery companies are targeting commercial production in the late 2020s, but the technology has been “a few years away” for a while now.
Robots Are Getting Better Hands
The human hand has roughly 20 to 25 degrees of freedom, a measure of how many independent ways its joints can move. Robotic hands are closing in on that number. The Shadow Dexterous Hand, one of the most advanced commercial models, has 24 degrees of freedom across five fingers, can carry 5 kilograms, and includes touch sensors. More compact designs like Unitree’s Dex5-1 pack 20 degrees of freedom into a hand weighing just one kilogram. At the other end, the XHAND1 sacrifices some dexterity (12 degrees of freedom) but can handle payloads up to 25 kilograms.
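For easier comparison, here is a minimal sketch that collects the specs quoted above into one structure; the figures the text doesn’t give (the Dex5-1’s payload, the other hands’ masses) are left empty rather than guessed.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RobotHand:
    name: str
    dof: int                     # degrees of freedom
    payload_kg: Optional[float]  # None where the text gives no figure
    mass_kg: Optional[float]

# Specs as quoted in the text.
hands = [
    RobotHand("Shadow Dexterous Hand", 24, 5.0, None),
    RobotHand("Unitree Dex5-1",        20, None, 1.0),
    RobotHand("XHAND1",                12, 25.0, None),
]

HUMAN_DOF = 25  # upper end of the 20-25 range for the human hand

for h in hands:
    print(f"{h.name:22s} {h.dof:2d} DOF (~{100 * h.dof / HUMAN_DOF:.0f}% of human)")
```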
Companies like Tesla, Figure AI, and 1X are developing proprietary hands for their humanoid robots, though these designs aren’t yet available for outside testing. The hardware is advancing quickly, but the bigger challenge is software: teaching robots to manipulate unfamiliar objects in unpredictable environments. Mechanical dexterity is necessary but not sufficient for the kind of general-purpose robots people imagine when they think about the future.
The Job Market Will Transform, Not Collapse
Roughly 30% of current U.S. jobs could be automated by 2030, and 60% will have their tasks significantly modified by AI. Globally, an estimated 300 million jobs could be displaced, representing about 9.1% of all employment worldwide. Those numbers sound alarming, but “automated” and “displaced” don’t always mean “eliminated.” Many jobs will change shape rather than disappear entirely.
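A quick consistency check on those global figures, using only the numbers in the text:

```python
# Figures from the text.
displaced_jobs = 300e6    # jobs potentially displaced worldwide
displaced_share = 0.091   # stated as 9.1% of all employment

implied_workforce = displaced_jobs / displaced_share
print(f"Implied global employment: {implied_workforce / 1e9:.1f} billion")
# ~3.3 billion, in line with standard estimates of the global workforce.
```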
Previous waves of automation, from mechanized farming to factory robotics, destroyed specific roles while creating new categories of work that didn’t previously exist. The pattern this time is different in one important way: AI affects knowledge work and creative tasks, not just manual labor. Writers, programmers, analysts, and customer service representatives are seeing their workflows change alongside warehouse workers and factory operators. The people most at risk are those in roles with highly repetitive cognitive tasks. The people best positioned are those who learn to use AI tools to amplify what they already do well.
Carbon Removal Is Expensive but Improving
Direct air capture, the technology that pulls carbon dioxide directly from the atmosphere, currently costs between $500 and $1,900 per ton of CO₂ removed. For context, global emissions are around 37 billion tons per year. At those prices, removing even a fraction of annual emissions would cost more than most national budgets. The fundamental problem is physics: CO₂ makes up only about 420 to 430 parts per million of the atmosphere, so enormous volumes of air must be processed to capture relatively small amounts of carbon.
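Both the cost claim and the physics claim can be made concrete with a little arithmetic. A sketch using the figures above, plus standard gas constants (molar mass of CO₂, molar volume at room temperature) that I’m supplying:

```python
# Figures from the text; gas constants are standard values I'm supplying.
COST_PER_TON_LOW = 500      # USD per ton of CO2, low end of the range
GLOBAL_EMISSIONS_T = 37e9   # tons of CO2 emitted per year
CO2_FRACTION = 420e-6       # ~420 ppm of the atmosphere by volume
MOLAR_MASS_CO2_G = 44.01    # g/mol
MOLAR_VOLUME_L = 24.45      # L/mol at 25 C and 1 atm

# Cost of removing just 10% of one year's emissions at the low-end price.
cost = 0.10 * GLOBAL_EMISSIONS_T * COST_PER_TON_LOW
print(f"10% of annual emissions at $500/t: ${cost / 1e12:.1f} trillion")

# Air volume that must be processed to capture one metric ton of CO2
# (assuming, optimistically, 100% capture efficiency).
moles = 1e6 / MOLAR_MASS_CO2_G                 # mol of CO2 in one ton
co2_volume_m3 = moles * MOLAR_VOLUME_L / 1000  # m^3 of pure CO2
air_volume_m3 = co2_volume_m3 / CO2_FRACTION
print(f"Air processed per ton captured: ~{air_volume_m3 / 1e6:.1f} million m^3")
```

Under these assumptions, each ton captured requires moving on the order of a million cubic meters of air through the plant, which is why fan energy and contactor size dominate the economics.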
Advances in capture materials and economies of scale could bring costs down to roughly $300 per ton by mid-century, according to the International Energy Agency. Some next-generation designs are targeting $100 per ton. If that target is reached, direct air capture becomes economically viable as one tool among many for addressing climate change. But at current costs, it’s a demonstration technology, not a solution at scale.
Brain-Computer Interfaces Work, Slowly
Normal human speech runs at 150 to 200 words per minute. A brain-computer interface developed at UC San Francisco restored communication to a man with paralysis at up to 18 words per minute with 93% accuracy. That’s roughly one-tenth the speed of natural speech, but for someone who cannot speak or type, it represents a profound change in quality of life.
The gap between 18 and 150 words per minute is where the technology needs to go before it becomes relevant for broader populations. Current systems require surgical implantation of electrode arrays and extensive calibration. They’re life-changing for people with severe paralysis and largely irrelevant for everyone else, at least for now.
2.2 Billion People Are Still Offline
About 6 billion people, three-quarters of the world’s population, are using the internet in 2025. That leaves 2.2 billion still offline, down only slightly from 2.3 billion the year before. The barriers aren’t just about building cell towers or laying cable. Affordability, connection quality, and digital skills are the primary obstacles preventing the remaining population from getting online meaningfully.
This matters because every technology discussed above, from AI-assisted medicine to gene therapy access to job retraining, depends on connectivity. If the benefits of technology concentrate among people who are already connected and educated, the gap between technological haves and have-nots widens rather than closes. Technology is clearly a defining force in the future, but whose future it shapes depends on whether access keeps pace with innovation.