Investor Vinod Khosla: Our AI future is racing toward us faster than we think

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it yourself here.

Super Investor Vinod Khosla on our AI future

Veteran venture capitalist Vinod Khosla dropped some pearls of wisdom on an adoring crowd during a brief interview onstage at Fortune’s AI conference in downtown San Francisco Tuesday. Here’s a quick rundown of his major points.

  • On the risks of AI: “The doomer argument isn’t worthy of conversation,” he said, adding that AI has about the same chance of killing off the human race as a giant meteorite.
  • On AI election misinformation: He believes China and Russia will use AI to interfere with the U.S. election next year. “There could be a [misinformation-serving] bot for every voter in 2024,” he said.
  • On AI’s place in the future: “Whoever wins the AI race wins the economic race,” Khosla said. “Whoever wins the economic race wins influence over society.” 
  • On AI’s effect on work: AI will be able to perform 80% of the work functions that make up 80% of economically valuable jobs, he said. (This expands on a 2016 Khosla article arguing that AI could feasibly handle 80% of the tasks now performed by doctors.)
  • On AI in healthcare: Khosla said one way AI will impact healthcare is by “nudging” patients to complete preventative tasks (e.g., exercising or following aftercare instructions). Our healthcare system is good at providing initial treatment from doctors and other caregivers, he said, but far less good at follow-up. “AI would do the nudging under human guidance,” he said.
  • On AI drug discovery: Khosla said it’s actually quite strange that everybody takes the same medications, even when we all have unique genetic makeups, and therefore different reactions to the drugs. “We do crude medicine,” he said. But AI systems will be able to tailor drugs to each of our unique constitutions. “I don’t think we’re very far from having one medicine for one person.”
  • On AI for coding: “A billion people in the world will be programming in the next 10 years. You’ll be writing in the English language what you need. The AI would then ask clarifying questions.” In other words, software development will no longer be reserved for people who know programming languages. It’ll be written through conversations with AI, which will do the coding in the background.

OpenAI drama lingers on

Khosla was one of the original investors in OpenAI, the AI industry darling that nearly imploded after its board fired CEO Sam Altman last month. The real reasons behind Altman’s dismissal still have not come to light (the board said only that Altman had been less than transparent with it). The Washington Post reported that some employees had complained to the board about Altman’s “psychologically abusive” and “toxic” behavior. Khosla knows the real reasons but has pleaded the Fifth when asked by reporters. With $50 million invested in OpenAI, he probably doesn’t want to rock the boat just as it’s righting itself. When asked again Tuesday, Khosla said only that OpenAI had been hamstrung with “the wrong board.” He added that the company was “far better off than it was a month ago,” before the shake-up in which five members of the board lost their seats, including chief scientist and cofounder Ilya Sutskever, who voted for Altman’s firing.

Three weeks after the crisis, the best guess as to what happened is that Sutskever lost a power struggle with Altman. The two men have very different views on how quickly, safely, and transparently new AI discoveries should be productized and exposed to the public. Sutskever is a scientist and a humanist who believes in releasing AI slowly, thoughtfully, and for the benefit of the many, not the few (that idea is, in fact, written into OpenAI’s charter). Altman, as a responsible-AI expert confirmed to me on background, is the consummate entrepreneur, with a strong drive to build new things that fit market needs, commercialize them, and then rapidly scale the business. With Altman’s reinstatement, OpenAI’s vested employees and outside investors, including Khosla, have wrested more control of the company away from Sutskever and his internal supporters.

Now, reports have surfaced that Sutskever hasn’t been seen at the OpenAI offices. Responding to those reports, Elon Musk hinted that he’d like to hire Sutskever to work at his xAI research lab. Is OpenAI truly a stronger company than it was before last month’s shake-up, as Khosla says? That depends on your views about responsible AI, and on how much you’ve bet on OpenAI becoming a wildly profitable company.

The EU comes to an agreement on AI rules

The European Union has been busy hammering out the details of a comprehensive set of regulations for the development and application of AI. On Friday, the EU came to a political agreement on the contents of the legislation, called the AI Act. Now, the European Parliament will vote on the Act, but the political work has been done and the legislation is expected to pass. The law won’t take effect until 2025, and some of the details still need to be worked out.

The Act takes a pragmatic approach to regulation, focusing on tangible near-term risks posed by the emerging technology. For example, it puts guardrails around the use of AI by law enforcement (such as Clearview AI-type facial recognition systems), and by agencies operating critical infrastructure, such as water and power. It imposes transparency requirements on companies like Google and OpenAI that are developing foundation models, which have wide-ranging use cases, including some meant to do harm. Violating companies could face fines of up to 7% of their global annual revenue. And AI systems that could be used to generate media, or manipulate existing media, will be required to carry a warning label. Interestingly, the Act largely punts on open-source AI, such as Meta’s Llama and Stability AI’s Stable Diffusion.

U.S. lawmakers have been watching closely both the content of the EU law and the politicking around it. But while AI regulation has bipartisan interest and support in Washington, D.C., many in both policy and tech circles have doubts that Congress, with a rancorous and dysfunctional House of Representatives, will be able to pass a meaningful AI law. It’s more likely that states such as California will act first, just as they did on privacy legislation.

More AI coverage from Fast Company:

  • AI is going to transform Hollywood—but it won’t be a horror story
  • This startup uses AI to help companies comply with privacy rules
  • In the educational AI race, Merlyn Mind focuses on classrooms, not individual students
  • The new, sci-fi ways AI will radically redesign airports

From around the web: 

  • Sports Illustrated publisher fires CEO after AI debacle (CNN)
  • Microsoft releases Phi-2, a small language model AI that outperforms Llama (VentureBeat)
  • Limewire is back, but AI’s making the music this time (Rolling Stone)
  • Mistral, French AI startup, is valued at $2 billion in funding round (New York Times)
