As it tackles AI, Atlassian is asking big questions—and sharing them

Even if you’ve never touched any of Atlassian’s products, there’s a pretty good chance others in your organization—such as IT and customer service teams—use them every day. Best known for Jira, Confluence, and, my favorite, Trello, the Sydney-based company has a low profile but a big presence in collaborative enterprise software. Its products are instrumental to how many companies wrangle tasks, such as tracking support requests and managing projects. Over Atlassian’s 21-year history, all that has propelled it to more than 10,000 employees, 265,000-plus customers, and $3.5 billion in revenue.

Like just about everyone in the software business, Atlassian spent 2023 reimagining its product roadmap around generative AI. At its Team 23 conference in April, it unveiled a “virtual teammate” called Atlassian Intelligence, which—like much of this year’s new AI—is less one specific thing than a variety of features spread across multiple products. After a beta period in which about 10% of customers gave them a try, many of these tools have now reached general availability, including ones for summarizing work documents, performing tasks such as database queries in plain language, and providing automated responses to help requests.

Even more AI-powered functionality is on its way, including a still-in-beta glossary maker that automatically identifies and defines an organization’s internal terminology, a boon to newbies who haven’t yet decoded all the requisite buzzwords. “If you’ve been at Atlassian 10 years, you know what ‘Socrates’ means,” says Atlassian cofounder and co-CEO Mike Cannon-Brookes, by way of example. “Socrates, at Atlassian, is our data lake. If you’re new, you’re like, ‘Why the hell does this page talk about Socrates? Are we in ancient Greece?’ It’s easy to get confused.”

In a way, the sheer practicality of the business tools Atlassian creates raises the bar for any AI they adopt. The answers provided by a general-purpose bot, such as ChatGPT—whether factual or hallucinatory—are based on a surging sea of random training data that, until fairly recently, most experts didn’t think was sufficient to achieve useful results. Even now, the creators of such products don’t fully understand how they work.

“One of the worries with AI technology is, it’s magical,” explains Cannon-Brookes. “Large language models are amazing. They give us, as software creators, many more tools to paint with. We can deliver better customer value in a huge way. But sometimes, in the rush to deliver those amazing experiences, I don’t know that engineering has a diversity of thought that is wide enough to deliver products that are responsible.”

For core business processes, mysteriousness—even if it’s amazing—is a red flag. “Provenance is really important in an enterprise,” says Cannon-Brookes. “You have very strict governance rules. You want to know what’s happened.” From understanding security issues to avoiding the biases that can be baked into large language models, many organizations are treading carefully—and want the companies they buy software and cloud services from to do so as well.

Atlassian is far from the only purveyor of enterprise tech that feels a particular burden to get AI right. In August, for example, I wrote about Microsoft’s responsible AI initiative, which involves hundreds of people. But Atlassian is working especially hard to explain what it’s doing, reflecting one of its five Responsible Technology Principles: “Open communication, no bullshit.”

The team charged with assessing the impact of the company’s use of AI and other emerging tech “is a blend of human rights, HR, policy, compliance, legal, and engineering folk that are trying to make sure we’re building responsible technology at a broad level,” says Cannon-Brookes. “That has some very interesting implications when you get to AI features leaving the building and shipping to customers.”

There is indeed a broadness to this team’s work, as reflected in the generality of the Responsible Technology Principles. Mostly, they involve goals you’d hope every organization would honor (“Unleash potential, not inequity”). The more intriguing document is Atlassian’s Responsible Technology Review Template, which it recently made public. Presented in the form of a 26-slide presentation, it breaks the principles down into dozens of questions the company asks itself as it assesses AI and other tech it’s working on. For each, it rates its current state with one of three color-coded labels: “Feels good” (green), “Needs work” (yellow), or the damning “Not aligned” (red).

Again, many of the template’s questions smack more of common sense than unique insight, such as, “What is the worst-case scenario of misuse or failure?” and “Can we explain to our customers and people (including potential employees) how we thought through the risks of this tech?” Still, it’s relatively rare to see a company reveal so many details about its internal guardrails.

“Obviously, any of our features we’re building, we hope are in the green category for each of the five areas in the template,” says Cannon-Brookes. Even if much of this self-assessment is subjective, he adds, it prevents the company from slipping into a mode of “just engineers writing code and shipping it.”

It’s not just about Atlassian’s own engineers. Along with running its projects through the responsible tech gauntlet, the company applies it to other companies’ products it’s contemplating using. For example, an AI-powered recruiting platform under consideration looked problematic because of the biases that can creep into hiring-related AI. “We’re working with that vendor to try to make sure we can feel comfortable, which hopefully makes their software better,” says Cannon-Brookes.

The template’s greatest impact could come if other organizations adopt it, or at least are inspired to ask themselves similar questions as they wrestle with AI’s implications. According to Cannon-Brookes, that’s one of the reasons Atlassian decided to make it public.

“The template is, I guess you’d call it, a set of conversation-starters for any team building things,” he stresses. “It’s not a checklist as much as ‘Here are five big areas and a whole series of questions you should consider or know about or be able to understand when you ship or consume any feature.'”

Calling the current AI inflection point “a Cambrian explosion of technologies arriving,” Cannon-Brookes acknowledges that a document such as Atlassian’s template can get only so specific. Rather than delving into the minutiae of AI in its present form, much of it comes back to bedrock values that customers expect from a company such as Atlassian, including responsible stewardship of data.

“I don’t know if we’ll have any more ChatGPT-like moments,” he says, calling the bot’s arrival a year ago a “zero-to-one moment” akin to Apple’s launch of the first iPhone in 2007. But even if AI starts to feel more like workaday technology than magic, he adds that its cumulative impact will be transformative in the years to come. And that means continuing to confront the hard questions it presents will grow only more essential.
