Over the past year, tech companies invested hundreds of billions in the new data centers needed to power rapidly increasing demand for the technology. The investment is motivated in part by confidence that major AI labs such as those at OpenAI, Anthropic, and Google will continue to wring more intelligence out of their models. Indeed, fears have receded that the AI labs’ go-to strategy of supersizing models, training data, and computing power was no longer yielding large leaps in intelligence. Instead, the cadence of bigger and better models has accelerated, in part because AI coding tools are playing an increasing role in building new models.
That’s certainly true at Anthropic, which says that 70% to 90% of its new code is now written by its breakthrough coding agent, Claude Code. The tool, which generates and tests software code based on natural language prompts, was originally meant for internal use by Anthropic engineers, but the company decided to release it as a real product in May 2025. In just six months, Claude Code became a moneymaker, reaching a $1 billion revenue run rate.
Another reason for the acceleration in model releases was the arrival of Google at the front of the race. Its Gemini 3 family of models smoked competing LLMs on a number of industry benchmark tests, putting other AI labs on alert. The Gemini 3 models became the engine for many Google services, such as AI search and ads, and gave a boost to the company’s cloud business as well as to its Gemini chatbot.
Other AI companies are specializing, honing their models for narrower use cases and skill sets. Hume AI, for example, has focused on emotional intelligence; its newest models are surprisingly good at both listening for a wide range of emotions in the human voice (say, a customer support caller) and generating voices that convey a range of emotions. World Labs, cofounded by AI pioneer Fei-Fei Li, has focused on models that understand the world very differently from the way large language models do. The company has launched Marble, a “world model” capable of processing physical and spatial data in order to generate realistic world simulations that can be used to train self-driving cars or guide the movements of robots.
1. Google
For creating an LLM that’s suitable for powering agents
With the release of its Gemini 3 family of multimodal AI models, Google cemented its position as a dominant—and still rising—force in AI. The new models, which were developed by the company’s primary AI lab, Google DeepMind, and began deployment in November 2025, were meant to unify the multimodal, reasoning, and agentic properties introduced in the Gemini 1 and 2 models. They’re among the first to be trained from the ground up to process and understand images, video, audio, and code, not just text.
The Gemini 3 models also offer the reasoning, planning, and ability to use tools (such as web search) needed to power AI agents. Gemini 3 now provides the brain for a number of Google’s core consumer-facing products, including the Gemini chatbot app, which now has more than 750 million monthly active users, and the AI Overviews in Google Search, which Google says now reach more than 2 billion users monthly.
On the enterprise side, usage of Gemini 3 and other Google cloud models by independent developers and companies reportedly surged in 2025. Google says that Gemini Enterprise, a platform for enterprise search, AI assistants, and agents, has grown to 8 million paid seats. With a wealth of AI talent and a plethora of training data at its disposal, such as YouTube videos, Google is likely to seriously challenge OpenAI, Anthropic, and xAI for frontier model dominance well into the future.
Read more about Google, No. 1 on Fast Company’s list of the World’s 50 Most Innovative Companies of 2026.
2. Anthropic
For developing the smartest coding agent
Anthropic’s engineers originally built the popular Claude Code tool in late 2024 to test their models’ fluency with computer code. But when they saw that the tool, which generates code based on natural language prompts, dramatically sped up software development, they began using it for their own coding work. The company kept improving the tool and eventually released it as a commercial product.
“Since it became generally available last May, it’s changed how teams build and ship software,” says Anthropic chief product officer Mike Krieger, “and it’s now used by companies across industries.” Customers include Netflix, Spotify, Salesforce, KPMG, and many other major names, along with thousands of startups.
Claude Code improved with the November 2025 release of the Claude Opus 4.5 model, and saw an even bigger boost with Opus 4.6, announced in early February. Users say the tool is now more efficient and can handle complex coding tasks that require upfront reasoning and planning. It’s now a significant revenue generator for Anthropic, which reportedly expects to become profitable in 2028. “Surpassing $1 billion in six months tells us that this isn’t about experimentation, it’s just how developers work now,” Krieger says.
Read more about Anthropic, No. 4 on Fast Company’s list of the World’s 50 Most Innovative Companies of 2026.
3. Abridge
For relieving doctors of chart-work drudgery
Abridge is applying enterprise-grade AI to one of the biggest contributors to burnout among physicians and other caregivers—filling out patient charts. Caregivers can record patient visits using their phones. The Abridge platform then summarizes the information and completes the electronic patient record. Clinicians using the platform report spending 60% less time completing patient notes after hours and an 85% increase in work satisfaction, which the company says adds up to a 67% overall reduction in burnout.
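The workflow Abridge describes (record a visit, summarize it, draft the chart entry) can be sketched at a toy level. Every function below is an illustrative stand-in, not Abridge’s actual system:

```python
# Toy sketch of an ambient clinical documentation pipeline: transcribe a
# recorded visit, extract findings, and assemble a draft note for review.
# All functions are hypothetical stand-ins for the real models involved.

def transcribe(audio):
    """Stand-in for speech-to-text on the recorded visit."""
    return audio  # pretend the 'audio' is already a list of transcript lines

def summarize(transcript):
    """Stand-in for the summarization model: pull out tagged findings."""
    return [line.split(": ", 1)[1] for line in transcript if line.startswith("finding: ")]

def draft_note(findings):
    """Assemble a structured draft note; a clinician signs off before filing."""
    return {"assessment": findings, "status": "pending clinician sign-off"}

visit = [
    "patient: my knee hurts when I climb stairs",
    "finding: right knee pain on stair climbing",
    "finding: no swelling observed",
]
note = draft_note(summarize(transcribe(visit)))
```

The key design point the real product shares with this sketch is that the AI produces a draft, not a final record: the note stays pending until a human clinician reviews it.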
Abridge projects that its platform will support more than 80 million patient-clinician conversations at 250 of the largest U.S. health systems in 2026. In April 2025, the company introduced a new AI architecture, called Contextual Reasoning Engine, that uses more clinical context to turn visit data into compliant, billable notes in real time. It’s also released add-on modules that make the platform more performant in specific clinical contexts, including Abridge Inside for Emergency Medicine and Abridge Inside for Inpatient.
Abridge says that when scrutinizing patient visit summaries, its platform caught 97% of errors and unsupported statements, while an off-the-shelf model, OpenAI’s GPT-4o, caught just 82%. The company closed a $250 million funding round in 2025 and reached $38 million in annualized recurring revenue in Q3 2025, with a month-over-month retention rate above 95%.
Read more about Abridge, honored as No. 19 on Fast Company’s list of the World’s 50 Most Innovative Companies of 2026.
4. World Labs
For transforming text, photos, and videos into 3D worlds
Many AI practitioners believe that today’s AI models will need to grow beyond words and develop an understanding of the spatial and physical world. One of these people is Fei-Fei Li, the AI pioneer whose ImageNet training dataset laid the foundation for new computer vision systems in the early 2010s.
Li started World Labs with well-known AI researchers Justin Johnson, Christoph Lassner, and Ben Mildenhall. The startup is building a form of “world model” capable of processing sensory data and developing a physics-based understanding of the real world. It released its first world model, Marble, in 2025. Marble focuses on generating and maintaining highly realistic 3D environments that creatives can use to develop interactive games and visual effects.
Ultimately, the greatest beneficiaries of World Labs’ models might be robotics companies, which currently struggle to prepare robots for real-world utility. “You need a 3D environment that is interactable, that has collisions, has physics, has dynamics to train robots, to evaluate robots,” says Li. “This is the reason spatial intelligence is important for humans and it will be important for AI. The use cases are just abundant.”
Read more about World Labs, honored as No. 22 on Fast Company’s list of the World’s 50 Most Innovative Companies of 2026.
5. Cerebras Systems
For baking big chips for big AI
Cerebras is best known for making the market’s largest AI chip: it occupies most of an entire silicon wafer, which is about the size of a dinner plate, whereas other AI chips, like Nvidia’s GPUs, are cut from much smaller sections of a wafer. The large square chip packs enormous processing power and memory onto one piece of silicon, so almost no time is wasted routing data between separate chips. That makes it highly effective at serving commercial AI applications that require massive throughput and very fast response times. Cerebras says its chips can process 2,500 to 3,000 tokens per second, more than 70 times faster than the best GPUs.
For years, Cerebras sold its technology mainly to national labs and R&D organizations that needed supercomputing power for research. But over the past 18 months, the company has increasingly filled the growing demand for computing power for commercial AI apps such as chatbots and coding assistants. For example, OpenAI recently began using a large installation of Cerebras servers to process real-time user interactions with its Codex coding assistant.
In February 2025, Cerebras said that it planned to launch six new inference data centers. At least four of them—Dallas, Minneapolis, Oklahoma City, and Montreal—were online by the end of the year. Its customer list now includes IBM, Meta, Perplexity, Mayo Clinic, Notion, AbbVie, G42, Mistral, Bayer, GlaxoSmithKline, and AstraZeneca. In September 2025, the company raised a $1.1 billion funding round at an $8.1 billion valuation. Then in early 2026, it announced another $1 billion round at a post-money valuation of around $23 billion.
6. Alibaba Group
For bringing its Qwen AI models to the cloud
Qwen, from the Chinese conglomerate Alibaba Group, is a world-class family of large language models. Alibaba has developed a whole stack of infrastructure software around the models so that they can be more easily deployed within enterprises. It now open-sources Qwen, calling it the “operating system of the AI era.”
As Asia-Pacific’s largest cloud provider and the world’s fourth largest, Alibaba has released more than 300 AI models spanning text, image, video, and audio generation. The company says Qwen has been downloaded more than 600 million times, and has spawned over 170,000 derivative models. More than a million developers use Model Studio, Alibaba Cloud’s platform for building and deploying AI apps using Qwen.
In June 2025, Alibaba launched a strategic alliance with SAP, which now integrates Qwen into its SAP AI Core, a service layer for deploying and running AI workloads, in China and, soon, globally. A groundbreaking BMW partnership embeds Qwen into the carmaker’s 2026 Neue Klasse vehicles, the first time a global automaker has embedded an open-source LLM directly into in-car systems. Qwen is also having an impact in healthcare: the models have improved the diagnostic accuracy of PANDA cancer screening by 34.1% and cut the misdiagnosis rate of an acute aortic syndrome detection tool from 50% to 4.8%.
7. Darktrace
For turning LLMs into security workers
Darktrace may have been ahead of its time when it launched its Cyber AI Analyst in 2019. By 2025, the agentic security system had conducted 90 million investigations, triaging them down to fewer than 500,000 incidents it deemed critical. Now Darktrace is turning its focus to security threats in the cloud, where the majority of AI models and apps are hosted.
In September 2025, the company launched Forensic Acquisition & Investigation, which it says is the industry’s first fully automated forensic solution for cloud computing. The system is designed to instantly capture and preserve evidence of a security breach so that researchers can establish the root cause and timeline of a cyberattack and investigate it across different commercial clouds and on-premises computer systems. Darktrace also added a new custom large language model called DEMIST-2 that enables a deeper understanding of cybersecurity threats and orchestrates the use of agents in complex investigations.
Darktrace’s security technology protects about 10,000 organizations globally from sophisticated threats to cloud, email, network, and operational technology systems.
8. Mithril
For developing algorithms that keep servers working around the clock, more efficiently
One of the fundamental challenges in the AI industry is the extremely high cost of training and operating large models. Not only is there an undersupply of the silicon chips needed to do the work, but cloud providers sell access to compute power in rigid ways that can leave servers idle. Mithril’s idea is to aggregate computing power from cloud providers into a marketplace and sell it in flexible ways. For instance, if an AI lab needs cloud resources for a job that can run piecemeal, it might get a lower price than another workload that’s time-sensitive and must run uninterrupted until completion.
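That flexibility-based pricing idea can be illustrated with a toy quote function. The rates and discount below are invented for illustration; Mithril’s actual marketplace pricing is not public:

```python
# Illustrative sketch of flexibility-based compute pricing, in the spirit of
# Mithril's marketplace. The base rate and discount are hypothetical numbers.

BASE_RATE = 2.00  # assumed $/GPU-hour for guaranteed, uninterrupted capacity

def quote(gpu_hours, interruptible, deadline_hours=None):
    """Price a job: work that can run piecemeal on idle capacity is
    discounted, while time-sensitive jobs pay full rate for reserved capacity."""
    if deadline_hours is not None and deadline_hours < gpu_hours:
        raise ValueError("deadline is shorter than the work itself")
    rate = BASE_RATE * (0.5 if interruptible else 1.0)
    return round(gpu_hours * rate, 2)

# A 100 GPU-hour job that tolerates preemption costs half as much
# as the same job run against a deadline with reserved capacity.
flexible = quote(100, interruptible=True)
urgent = quote(100, interruptible=False, deadline_hours=200)
```

The economic intuition is the same one that drives spot-instance pricing at the big clouds: capacity that would otherwise sit idle can be sold cheaply to buyers who don’t mind interruptions.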
Mithril says usage of its platform has grown by more than 550% over the past year. Its customers include well-known AI companies such as Cursor, Poolside, and Pika. It also serves a growing number of enterprises such as LG AI Research and research institutions such as Arc Institute, Stanford, and Broad Institute.
Mithril, which was founded by former Google DeepMind research scientist Jared Quincy Davis, has raised $80 million from investors including Sequoia Capital, Lightspeed Venture Partners, Microsoft Ventures (M12), and NEA, among others.
9. Lila Sciences
For integrating generative AI with lab robotics
Generative AI has the potential to conceive of novel molecule combinations that form the basis for new and more effective drug therapies. But in drug discovery, the AI must extend from the digital realm to conduct physical experiments that validate the candidates.
Lila Sciences describes its AI Science Factory as the first “operating system for autonomous science” capable of driving open-ended scientific exploration. Its integration of hardware and software innovation creates a closed loop where AI designs hypotheses, executes experiments, and incorporates results into new cycles of discovery. The system autonomously runs thousands of experiments simultaneously.
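The closed loop Lila describes can be sketched as a simple propose, experiment, learn cycle. The hypothesis generator and “lab” below are toy stand-ins for Lila’s models and robotics:

```python
# Toy closed-loop discovery cycle: an AI proposes a candidate, automated
# experiments score it, and the result feeds the next proposal. The candidate
# space and scoring function are invented stand-ins.

def propose(history):
    """Toy hypothesis generator: pick the next untested candidate."""
    tested = {candidate for candidate, _ in history}
    for candidate in range(10):
        if candidate not in tested:
            return candidate
    return None  # search space exhausted

def run_experiment(candidate):
    """Toy 'lab': the hidden optimum is candidate 7 (score 0 is best)."""
    return -abs(candidate - 7)

def discovery_loop(cycles):
    history = []
    for _ in range(cycles):
        candidate = propose(history)
        if candidate is None:
            break
        result = run_experiment(candidate)   # robotics executes the experiment
        history.append((candidate, result))  # results feed the next cycle
    return max(history, key=lambda pair: pair[1])

best, score = discovery_loop(10)
```

The real system replaces each toy function with a model or a robot, but the control flow, where experimental results loop back into the next round of hypotheses, is the essence of autonomous science.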
In March 2025, Lila Sciences announced four breakthrough discoveries, all achieved through AI. They include optimal genetic medicine constructs outperforming commercial therapeutics, discovery of hundreds of novel antibodies and peptides, unique non-platinum catalysts for green hydrogen at a far lower cost, and world-class carbon capture materials. The company says it marked the first time in history that AI, not humans, was the driving force behind scientific milestones.
Lila Sciences launched in March 2025 with $200 million in seed capital from General Catalyst and others, then raised another $350 million from investors such as In-Q-Tel in October.
10. FieldAI
For giving robots brains for the real world
Unlike other robotics companies, FieldAI isn’t trying to retrofit large language models to serve as the brains of robots. Rather, its Field Foundation Models (FFMs) are grounded in physics.
In practice this means its models make robots keenly aware of the physical risks in their environment so that they can operate safely and effectively in “dull, dirty, and dangerous (DDD) environments,” as the company puts it, without requiring GPS, maps, or constant human oversight. The FFMs can be placed in all kinds of robots including quadrupeds, humanoids, wheeled robots, and passenger-scale platforms.
FieldAI CEO Ali Agha has said that his company already has more than 200 customer deployments across North America, Europe, the Middle East, Southeast Asia, and East Asia, including some of the largest construction firms in China and the U.S. In August 2025, FieldAI raised $405 million from top-tier investors including Bezos Expeditions, Gates Frontier, Intel Capital, Khosla Ventures, Nvidia, and Samsung. The company was founded in 2023 as a 30-person team with members from Google, Nvidia, Amazon, Tesla, SpaceX, Zoox, and Cruise. It’s grown to more than 100 people.
11. Runway
For pushing the envelope in production-ready video generation
Even as competition heats up from players like Google and OpenAI, Runway continues to set the pace for generative video. The company improved on its previous flagship models in 2025 with the release of Gen-4, which lets creators generate or edit video using text prompts and/or reference images, and then iterate and edit within a production-style workflow. The new models were designed to address a key limitation of earlier systems: difficulty maintaining the consistency of people, objects, and environments across multiple shots.
Runway is likely the generative video company that’s most deeply entrenched in the advertising and entertainment industries, thanks to partnerships with Lionsgate, EDGLRD, Fabula, and AMC Networks. Amazon reportedly used Runway tools in the production of House of David season 2, and they were also used to create visual effects for Madonna and Beyoncé. On the enterprise side, Runway has been working with Microsoft, Ubisoft, Dolce & Gabbana, Puma, Under Armour, Valentino, and others.
In April 2025, Runway raised $308 million in Series D funding at a $3.3 billion valuation, more than doubling its valuation from the previous round. And in February 2026, it raised another $315 million at a valuation of roughly $5.3 billion.
12. OpenEvidence
For giving doctors an AI consultant trained in peer-reviewed studies
OpenEvidence is a chatbot-style quick reference guide used by physicians and other clinicians. Caregivers can type a clinical question in natural language and get summarized answers that are grounded in peer-reviewed medical research. That’s because the information in the company’s model comes via content deals that give OpenEvidence access to the JAMA Network and The New England Journal of Medicine.
In 2025, the company launched OpenEvidence DeepConsult, a deep research mode for more complex clinical questions. The tool deploys a team of specialized “PhD-level AI agents” that can search through hundreds of research reports and then stitch together a coherent, actionable answer. The company also released OpenEvidence Visits, which lets physicians easily access medical evidence and form decisions during patient exams.
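The underlying pattern, retrieval-grounded answering over a vetted corpus, can be sketched in a few lines. The corpus, overlap scoring, and citation format here are invented for illustration and are not OpenEvidence’s implementation:

```python
# Minimal sketch of retrieval-grounded Q&A: answers are assembled only from a
# corpus of vetted studies, each claim cited by its study ID. The corpus and
# the naive word-overlap ranking are toy stand-ins for real retrieval.

corpus = {
    "nejm-001": "metformin lowers hba1c in type 2 diabetes",
    "jama-042": "statins reduce cardiovascular events",
    "nejm-007": "metformin is first-line therapy for type 2 diabetes",
}

def retrieve(question, corpus, k=2):
    """Rank studies by naive word overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(corpus, key=lambda sid: -len(words & set(corpus[sid].split())))
    return scored[:k]

def answer(question, corpus):
    hits = retrieve(question, corpus)
    # Every claim in the answer carries the ID of the study it came from.
    return "; ".join(f"{corpus[sid]} [{sid}]" for sid in hits)

response = answer("what is the role of metformin in type 2 diabetes", corpus)
```

Because the answer is stitched together only from retrieved passages, every sentence can be traced back to a source, which is what distinguishes this pattern from a model answering from its own parameters.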
OpenEvidence became a part of the workflow of many doctors during 2025. The company says 40% of U.S. doctors now log in daily. That popularity didn’t go unnoticed within venture capital circles. In July 2025, the company raised $210 million at a valuation of approximately $3.5 billion. It raised another round, led by Thrive Capital and DST Global, in January 2026, which pushed its valuation up to $12 billion and brought its funding total to nearly $700 million.
13. GC AI
For empowering in-house legal teams with truth-grounded AI
Many of the strongest startups are started by people who had a personal need for the company’s product. That’s the case with GC AI, whose name refers to AI for general counsels, the corporate world’s top in-house lawyers. GC AI was cofounded by Cecilia Ziniti, who served as general counsel at Amazon’s Alexa division, the coding assistant company Replit, and the autonomous driving company Cruise.
GC AI’s product focuses squarely on the main responsibilities of lawyers within enterprise settings. Users can enter a chatbot-style ask-and-answer session and get answers rooted in their company’s own policies, products, and practices. The software summarizes and analyzes documents (customers report a 50% reduction in document drafting and review time), and generates first drafts of legal correspondence such as emails, clauses, and memos.
Perhaps most importantly, GC AI establishes trust through a key 2025 innovation called the Exact Quote system, which ensures that every clause, citation, and contract reference comes verbatim from verified sources. More than 700 legal teams now use the platform, with notable customers including SurveyMonkey, Penguin Random House, and Vuori. GC AI raised $11.6 million in venture funding in May 2025, and another $60 million round in November, bringing its funding total to $73 million.
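The core idea of verbatim-quote verification can be sketched simply: flag any quoted span that does not appear character for character in a verified source. This illustrates the general technique, not GC AI’s actual Exact Quote implementation:

```python
# Sketch of verbatim-quote checking: a quote in a draft passes only if it
# appears character-for-character in at least one verified source document.
# The contract text and quotes below are invented examples.

def verify_quotes(draft_quotes, sources):
    """Return the quotes that do NOT appear verbatim in any source."""
    return [q for q in draft_quotes if not any(q in doc for doc in sources)]

contract = "Either party may terminate this agreement with 30 days' written notice."
quotes = [
    "terminate this agreement with 30 days' written notice",  # verbatim
    "terminate this agreement with 60 days' notice",          # hallucinated
]
unverified = verify_quotes(quotes, [contract])
```

A simple substring check like this is enough to catch outright hallucinated quotations; a production system would also need to normalize whitespace and handle quotes that span document boundaries.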
14. Factory
For imbuing software development agents with new flexibility
Factory’s AI platform is used by software developers to create and delegate tasks to teams of autonomous agents (“Droids”). The agents rely on a shared memory graph to plan, build, and ship software, and developers can use it within familiar interfaces such as the computer terminal and Slack. The platform is “model agnostic,” meaning that it can integrate major generalist models like ones from OpenAI or Anthropic, or smaller, task-focused models.
The secret sauce comes from the contextual intelligence and multi-agent reasoning built into Factory’s proprietary agentic architecture. In 2025, the company notched a big performance milestone, going to No. 1 on the Terminal-Bench benchmark by outperforming major competitors in multi-agent collaboration, debugging, and infrastructure tasks. Factory is still a young company—it was founded in 2023—but it showed up just in time to play a role in the agentic phase of generative AI that followed the chatbot craze.
The startup said in 2025 that it anticipated hitting $25 million in annual recurring revenue (ARR). Its customers include Bayer, EY, MongoDB, and Nvidia. It’s raised around $70 million from some pedigreed investors, including Lux Capital, Sequoia Capital, NEA, J.P. Morgan, and Nvidia, which suggests that the startup has established credibility as a platform that could help define the next era of human-AI collaboration in engineering.
15. Turing
For bringing human brains to AI training
Turing began life as a talent platform that vetted remote software engineers and matched them with tech company and enterprise clients. With the AI boom that followed the launch of ChatGPT, it quickly reimagined itself as a different, but complementary, kind of platform that serves expert-driven AI training data to major AI labs such as OpenAI, Google, Meta, and Anthropic.
In 2025, the company evolved further to become an “AI research accelerator” that helps AI labs identify model weaknesses and engineer custom training data solutions. One way it does this is through “data gyms,” which are something like flight simulators for AI agents. The gym can put AI agents through numerous use case scenarios and collect feedback data on their performance, which can be used to develop clean “this worked, this didn’t” signals for training and evaluation. Turing also launched a new model fine-tuning platform called ALAN (Always Learning, Always Nimble), which it says has revolutionized the way it captures expert knowledge and transforms it into training data.
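A data gym of the kind Turing describes boils down to running an agent through scripted scenarios and recording clean pass/fail signals. The scenario format and toy agent below are invented for illustration:

```python
# Toy "data gym": run an agent through scripted scenarios and record clean
# pass/fail signals for training and evaluation. The agent and checks are
# hypothetical stand-ins for real agent workloads.

def run_gym(agent, scenarios):
    """Each scenario is (task, check). Returns (task, output, passed) records."""
    records = []
    for task, check in scenarios:
        output = agent(task)
        records.append((task, output, check(output)))
    return records

# Toy agent: simply upper-cases its input.
agent = str.upper
scenarios = [
    ("refund order 123", lambda out: "123" in out),  # should preserve the ID
    ("cancel order", lambda out: out.islower()),     # deliberately failing check
]
records = run_gym(agent, scenarios)
signals = [passed for _, _, passed in records]  # "this worked, this didn't"
```

Those boolean signals are exactly the kind of clean reward data a lab can feed back into reinforcement learning or use to benchmark one agent against another.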
The Palo Alto-based company has been growing rapidly during the past two years as the race among AI labs to reach artificial general intelligence has picked up. It’s grown to 4,000 employees, says it hit $300 million in annual recurring revenue (ARR) during 2025, and is profitable. Turing picked up another $111 million in venture capital funding in March 2025 at a $2.2 billion valuation.
16. Cohere
For creating private and secure AI models for companies
The Canadian AI lab Cohere was cofounded by Aidan Gomez, who was one of the Google researchers who coauthored the seminal Transformers paper that touched off the generative AI boom. Cohere’s models don’t normally show up at the top of industry benchmark tests alongside those from OpenAI, Google, and Anthropic, but the company has made a very smart pivot toward “sovereign AI,” in which security- and privacy-conscious companies can host their data and AI models within their own private cloud or on servers located within their security perimeter. This is especially important to enterprises in regulated industries that must meet strict security and governance standards for customer data.
Cohere is also working hard to let enterprises do more with their protected data. In January 2025, it released North, an agentic AI platform that lets enterprises search company data and automate tasks using AI. North moved to general commercial availability in August, and RBC and LG are now reportedly running pilots with the platform.
In August, Cohere announced a $500 million raise, followed by an additional $100 million second close in September, bringing total funding to $1.6 billion at a $7 billion valuation.
17. Snorkel AI
For preparing AI models for the enterprise by harnessing specialized, research-backed datasets
Expectations for applying AI models to real-world tasks in the workplace are running high in 2026, and a lot of money is riding on it. AI labs can no longer rely on increases in the amount of training data or computing power to prepare their models for critical, and diverse, real-world use cases. So they’re increasingly training models on highly specialized data developed by domain experts to continue making progress. The market research firm IDC projects that AI labs will spend $150 billion a year on such data by 2027.
One of the companies addressing this market is Snorkel AI, which creates custom training datasets for many of the leading AI labs and AI app developers. The company creates its specialty domain data, which can be used to “challenge, teach, and evaluate” AI models during their training, with the help of a global network of more than 5,500 experts representing more than a thousand knowledge areas. In 2025, Snorkel released a new product called Expert Data-as-a-Service (DaaS), which quickly delivers customized datasets to match specific training needs as well as reinforcement learning environments for testing models on specific tasks and gathering feedback data.
In 2025, Snorkel raised a $100 million round at a $1.3 billion valuation from firms such as Addition, Prosperity 7 Ventures, and existing investors Greylock Partners and Lightspeed Venture Partners.
18. Hume AI
For infusing emotion and inflection into its voice model
As AI matures, it’s likely that more people will begin talking to AI apps rather than typing to them. But right now, AI models in general aren’t great at detecting emotion in a human user’s voice. Nor do they nail the emotion they should synthesize into their voice during a response. That’s why Hume AI has become an important company in the industry. It saw these conditions coming.
The New York-based startup has been developing models that generate emotionally correct voices and listen for emotion in human voices. In 2025, Hume released Octave 2, which, unlike traditional text-to-speech models, understands how the language in a script informs the tune, rhythm, and timbre of the voice that’s speaking it, inferring when to whisper secrets, shout triumphantly, or calmly explain a fact. Its model is trained to hear more than 200 emotions and 400 voice characteristics. The end result is that users can have a back-and-forth with an AI that sounds and feels more like a conversation with a warm-blooded human being.
Hume has so far held three funding rounds with investors including Union Square Ventures, EQT Ventures, Comcast Ventures, and LG Technology Ventures, raising nearly $80 million, according to PitchBook. But the biggest validation of Hume’s AI may be the fact that Google licensed the company’s models, and also recruited Hume CEO Alan Cowen and several other Hume researchers to work within its Google DeepMind AI group. Hume’s new CEO is Andrew Ettinger.
19. Decart
For turning raw video into AI-infused video in real time
Decart develops a full-stack AI video platform that can take in live video—from a Zoom call, perhaps—and transform, restyle, and regenerate it in real time. Its Mirage model might reskin the person in the frame as an animal or a cartoon character. Or its models might take in a webcam feed or stream and instantly change the environment into an anime or cyberpunk scene, with near-zero latency.
In 2025, Decart released Mirage, which it bills as the world’s first real-time autoregressive video-to-video model. It uses generative AI to let a user enter prompts to shift the style of the video in real time, while maintaining the original video’s structure, motion, and frame rate. Decart is now working with AWS to integrate its real-time generative video and world-model technology into Amazon Bedrock, a managed service that makes various AI models available to AWS customers through an API. Decart was given early access to Amazon’s Trainium 3 chips so that its models could run well on them.
The Israeli company, which was founded in 2023, says it’s already been generating “tens of millions” in annual revenue from a proprietary acceleration technology that lets customers run AI workloads faster and cheaper on GPUs. But it’s also licensing its Mirage model to gaming, real estate, and film companies to create live simulations. In July 2025, Decart raised $100 million at a valuation of $3.1 billion from investors including Sequoia Capital and Benchmark, bringing its total to $153 million.
20. Reflection AI
For open-sourcing the AI frontier
2025 was the year that AI coding assistants became good enough to take a major role, alongside human engineers, in building software. However, most of these systems are built on top of closed-source, general AI models that lack transparency and can’t be modified or built upon. Reflection AI is building an open-source alternative to those models, and it’s starting with models that specialize in computer code.
While popular coding tools such as Claude Code and Cursor are focused on quickly generating code, Reflection’s flagship model, called Asimov, focuses on the harder problem of understanding existing enterprise codebases—often millions of lines and hundreds of interconnected systems deep. It can also read emails, Slack messages, project updates, and code documentation to develop a broader contextual understanding of how a company thinks about developing software.
Some of Reflection’s team members worked on Google DeepMind’s famous AlphaGo model and helped train Google’s flagship Gemini model using reinforcement learning. Nvidia’s Jensen Huang said Reflection has “one of the best teams in the world,” describing the founders—Misha Laskin and Ioannis Antonoglou—as “god tier” researchers. The company came out of stealth in March 2025 with $130 million in funding at a $555 million valuation, and six months later it raised another $2 billion from Nvidia, DST, Lightspeed, and Sequoia at an $8 billion valuation—one of the largest Series B AI funding rounds ever.
Explore the full 2026 list of Fast Company’s Most Innovative Companies, 720 honorees that are reshaping industries and culture. We’ve selected the companies making the biggest impact across 59 categories, including advertising, applied AI, biotech, retail, sustainability, and more.