What Do You Want to Use AI For?
September 12, 2024
I’m not the first person to observe that we’ve kind of gotten AI backward: The whole world seemed to embrace Generative AI almost two years ago, and only later thought to wonder: Well…what’s this really good for?
When IT leaders confront that question, it’s generally in the context of an enterprise that has entrenched business processes and legacy installed-base technology. So you don’t just have to figure out what Gen AI can do; you need to understand how prepared your enterprise is to adopt it.
We’ve got a great session scheduled at the upcoming Enterprise Connect AI event in Santa Clara, CA, next month that aims to help address this challenge. Veteran consultant Steve Leaden of Leaden Associates is presenting a session entitled, The AI Assessment: The Critical Path To Charting Your Organization's AI Delivery, in which he’ll lay out a step-by-step process for determining whether Gen AI is the right fit for a given situation, and—just as important—whether your enterprise is positioned to implement AI successfully.
Part of Steve’s process is to interview key stakeholders, and I want to highlight just a few of the questions he suggests including as you conduct your AI assessment:
- What do you want to use AI for? and What do you hope AI will improve on? To the point I started with, the first question to ask about AI is whether it’s really the best way to solve the business problem you’re facing.
- What are your current UC, collab, and contact center platforms, and what are your near-term plans for these technologies? Whatever AI-driven capabilities you deploy won’t exist in a vacuum. It’s important to understand AI’s role in the broader strategy for your collaboration, CX, or other technology estates.
- What are your current remote worker policies? This question struck me because it’s not an issue you hear as much about when it comes to AI. But so many AI applications will be used in the context of hybrid workforces, with the attendant security, compliance, and equity issues, that this becomes an important question.
There’s lots more in Steve’s presentation, but you get the idea. Doing an AI assessment means evaluating the AI, but also taking a close look at your organization.
This session is just one example of the approach we’re taking with Enterprise Connect AI. This program is not meant to advocate for AI, nor to debunk it. AI is a fact of life in enterprise technology; the question is how you as an IT leader will make the right decisions about it. Whether it’s the cybersecurity implications, data readiness, the all-important cost factors, or any of a host of other issues, Enterprise Connect AI is going to be two days of intense learning and sharing of ideas. I hope you can join us Oct. 1 – 2 at the Santa Clara, CA, Convention Center!
Making Sure AI Solves the Problem
September 5, 2024
AI may be moving quickly as a technology, leaving enterprise leaders feeling pressured to do something now. But the wisest strategy may be to approach AI like you’re in it for the long haul.
That was one of the key messages from an Enterprise Connect virtual event last week. Derek Top, senior analyst at Opus Research, said in a keynote that while it’s important to keep up with the state of play in AI, it’s a “fool’s errand” to try to adjust to every announcement that comes out about a new Large Language Model or other apparent AI breakthrough.
It’s still early days for AI, Top reminded the audience: “The technology is ahead of where most people are at.”
That assessment was echoed in a recent study from the think tank Rand. Citing reports that more than 80% of AI projects fail, Rand recommends that industry leaders proceed cautiously with their AI projects—not just out of the usual governance concerns, but also to improve the prospects for success: “AI projects require time and patience to complete,” according to the research institute. “Before they begin any AI project, leaders should be prepared to commit each product team to solving a specific problem for at least a year.”
Rand goes on to note that, “Misunderstandings and miscommunications about the intent and purpose of the project are the most common reasons for AI project failure.”
When ChatGPT arrived in late 2022, few people seemed to think they had lots of time to work with. And indeed, it would have been unwise to react by donning blinders and resolving to get around to Generative AI only once your laundry list of other projects was finished.
AI may have changed the playing field. But it hasn’t changed the rules, at least not all of them. The rules say that risk still matters, and also that organizations move at a certain pace. If AI is meant to solve problems, not just be a shiny object, the enterprise needs to know the scope and nature of the problem, as well as all of its stakeholders and dependencies.
At Enterprise Connect AI next month in Santa Clara, CA, we’ll address both elements of the AI conundrum: The urgency that enterprises naturally feel when a fast-moving, transformative technology bursts onto the scene; and the realization that AI doesn’t negate everything you thought you knew about IT projects.
That’s why I’m particularly excited about our two enterprise keynotes, featuring David Glick of Walmart and Darrius Jones of USAA. Both speakers represent companies that have moved decisively on Generative AI, to ensure they don’t sacrifice competitive advantage. They’ll share with our audience the steps they’ve taken to capture near-term success while working within the enterprise environment. I hope you can join us!
Finding Value in AI Assistants
August 29, 2024
On No Jitter this week, consultant Kevin Kieller has an in-depth look at Microsoft Copilot, offering a detailed review of the AI assistant’s capabilities, features, and risks. The article builds up to the question that matters most with Copilot or any other AI tool for collaboration: Is it worth the (considerable) cost?
I won’t steal Kevin’s thunder, except to say that his answer is nuanced, as it ought to be. Given Copilot’s $30/user/month cost, few enterprises have appetite (or budget) for a wide-scale rollout until the value is conclusively proven. And that proof remains a work in progress; as Kevin states, “We collectively have much to learn before cost-effectively leveraging AI assistants.”
It’s worth pointing out that there is another pricing model for AI assistants: Zoom and Cisco bundle their AI assistants into their collaboration platforms at no additional charge. This lowers the barrier to entry – but doesn’t eliminate it. You still have to ensure that proper AI governance is in place, manage the tools, and train users on how to employ them responsibly and productively.
As with any new and overhyped technology, AI advocates have a tendency to blur the line between what the innovation could do, and what it can do now. In the case of AI assistants, you can definitely imagine use cases that provide significant value—anything from helping write emails to summarizing meetings to mining your work data to help you be more effective in the meetings you do attend. AI assistants could do all those things. But can they? Can you depend on them to do these tasks accurately, completely, and efficiently?
Compare it to a previous technology generation: Desktop video was always a good idea, and always had potentially valuable use cases, but it was too expensive and not of high enough quality—until, eventually, it wasn’t. Then it saw widespread adoption (with the help of a global pandemic). We’re still assessing AI assistants’ maturity.
For a real-time pulse check on AI assistants, I encourage you to join us at Enterprise Connect AI Oct. 1 – 2 in Santa Clara, CA, where Kevin Kieller will join analyst Brent Kelly of Omdia to present a 90-minute deep-dive session, Gen AI-based Personal Assistants: Straight Talk on Value & Use Cases. Brent and Kevin will compare the leading vendors’ different approaches to AI assistants, and will help you understand where the value truly lies today – and what the prospects look like for the future. Like the rest of the EC AI program, the emphasis is on practical understanding of Generative AI, and realistic exploration of its prospects for the next 6 – 12 months. I hope you can join us in Santa Clara!
What Counts as a Win with Gen AI?
August 22, 2024
Back in early 2020, the tech world mourned the death of Larry Tesler, inventor of the cut, copy, and paste commands. Many social media accounts paid tribute to Tesler by simply copying and pasting the headline announcing his death, a clever way to honor someone—by using his innovation and thereby demonstrating just how significant it is. Cut, copy, and paste is so fundamental to how we work that we take it for granted. But it really is one of the great productivity tools of the computer age.
The occasion reminded me, though, of how old-school journalists used to feel about cut-and-paste. Believe it or not, they didn’t like it. I was in journalism school in the mid-1980s, and many of our teachers hated the idea that it would become easy to rearrange your copy. What if, they asked, you’re out on an assignment and you have to phone a story into the rewrite desk (drawing from the pocketful of quarters you were instructed to carry at all times, to feed into payphones)? You’d be dictating your story over the phone to the editors in the office, composing it as you talked. You can’t cut-and-paste text in that situation. Cut-and-paste would make reporters lazy thinkers and writers, they insisted.
All of this also made me think about our expectations around AI features’ ability to drive productivity. Cut-and-paste demonstrates that the most impactful productivity innovations are often the simplest. Everyone, in the course of putting their ideas into words, encounters the need to rearrange those ideas to make their point clearer. And of course copy-and-paste does even more: It lets you easily move chunks of text from one setting to another. Reuse and sharing are a major source of efficiency and collaboration in almost all work contexts.
So what similar functions can Generative AI solve for—where does it really produce efficiencies for a broad set of users? Writing code and boilerplate-level text seems to be a killer app, though in both cases, users have to invest more time than they likely bargained for at first, given the need to carefully vet the AI’s work before sending it into production. Imagine if that were the case with cut-and-paste—having to re-read every paragraph you pasted into place to make sure that all the words made it to their destination exactly as they left their previous location.
In the contact center, call summarization has been the leading use case—another pretty straightforward application buried deep within the business processes, far from the critical eye of the customer. That makes it applicable for most agent call scenarios and promises time savings that help with the ROI case.
The problem has been that un-cool applications like call summarization just don’t satisfy the hype-driven hunger that’s raged through the industry for almost 2 years now. With previous generations of technologies, IT folks looked for the quick wins, but with Gen AI, that approach seemed unambitious.
This tug-of-war between quick wins and transformative ambitions will be the focus of one of our breakout sessions at Enterprise Connect AI in Santa Clara, CA, Oct. 1 – 2. Mitch Lieberman of Fidelity Investments will lead a session entitled, How Do I Make Strategic AI Investments Today? in which he’ll address some of the key issues:
- Do I need to prepare to build an AI infrastructure and also buy applications?
- Is it even possible to change my mind in a year, once I’ve committed to an AI strategy?
- Investments are large, so what is the business case?
- Is hedging my bets the right choice, or do I need to go all in?
- How do I deal with the fact that technology buying decisions are cyclical, typically running 3-5 years, and don’t all happen at the same time? CRM, Sales, Service, Contact Center, Productivity - all will have AI and an AI cost.
I hope you can join us for 2 days of straight talk and realistic conversations about how to use AI in your enterprise. Sign up now; our best rates end next week. I’ll see you in Santa Clara!
Sorting Out AI Pricing
August 15, 2024
As enterprises continue exploring the potential benefits of Generative AI, the question of ROI has become paramount. Given how new and seemingly transformative Gen AI appeared at first, initial emphasis was on the “R” – what’s the return or benefit this technology provides? How much does it increase productivity? What processes can it shortcut or automate? What impact might this have on employees?
But the “I” side – investment – is looking to be just as complicated and difficult to pin down. Pricing models aren’t always clear, and the potential cost of implementing an AI application can be anything but predictable. That’s why it’s worth paying attention to a recent Forrester survey; as reported in CIO Dive, the research firm found IT leaders ascribing much of an expected rise in software spending to AI.
“More than 4 in 5 tech leaders anticipate their organization’s adoption of Generative AI features within SaaS products will increase software costs next year, according to the survey,” CIO Dive reported.
There’s no shortage of Gen AI-powered applications offered within SaaS products in the collaboration/CX space—from AI assistants that summarize meetings and offer enhanced productivity features, to CX-focused tools, mostly agent assist capabilities like call summarization. In some cases, the pricing is straightforward: Microsoft is charging $30 per user per month for Copilot, which at least lets the enterprise focus on the “return” portion of the equation. But since the benefit comes mostly from “soft” productivity gains like time savings, it’s still not simple to judge the payoff from that $30 investment.
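To make that judgment a little more concrete, here’s a minimal back-of-the-envelope sketch in Python. The $30/user/month figure comes from above; the loaded hourly cost is purely an assumption for illustration, and the real exercise is plugging in your own numbers:

```python
# Back-of-the-envelope break-even estimate for a per-user AI assistant license.
# Every number here is an illustrative assumption, not a vendor or survey figure.

license_cost_per_user_month = 30.00   # e.g., a $30/user/month assistant license
loaded_hourly_cost = 60.00            # assumed fully loaded cost of one employee hour

# Minutes of genuine time savings needed per user, per month, just to break even
break_even_minutes = license_cost_per_user_month / loaded_hourly_cost * 60

print(f"Break-even: {break_even_minutes:.0f} minutes saved per user per month")
```

At these rough assumptions, the license pays for itself only if about half an hour of genuine, verifiable time savings shows up for every user, every month—which is exactly the “soft” productivity-gain problem described above.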
Pricing models for some CCaaS applications are less clear, and if your enterprise decides to build its Gen AI capabilities rather than buying, you’re placing your bet on the complexities and evolution of the public LLM market.
At our upcoming Enterprise Connect AI event, we’re going to do our best to help you navigate the uncertain waters of Gen AI pricing, costs, and investment. On Wednesday, Oct. 2, Derek Top of Opus Research, a firm that’s led the way in scoping out AI pricing, will present a session on Tackling Gen AI’s Cost Conundrum. Derek will focus on cost drivers, risk, and other key elements of the pricing challenge that enterprises face.
We’ll also get a broader look at the ROI and pricing questions in our keynotes. David Glick of Walmart and Darrius Jones of USAA will both discuss their enterprises’ strategies around Gen AI, and how they’re thinking about the build-vs.-buy choices and other ROI-related issues.
Enterprise Connect AI takes place Oct. 1 – 2 at the Santa Clara, CA, Convention Center. The program focuses on the biggest real-world issues around AI—pricing, value, technical challenges, security, and much more. It’s not the place to go for AI hype; it’s all about how you can best use this revolutionary but challenging technology to gain competitive advantage and move your enterprise forward. I hope you can join us.
Making Users Productive with Gen AI
August 8, 2024
One of the fundamental questions about AI is: Can Generative AI deliver productivity improvements? And according to a recent survey, the short answer seems to be: No, AI can’t deliver productivity gains on its own. It may enable them, but for those gains to materialize at scale, enterprises will have to put in the effort to make it happen.
The survey, from hiring website Upwork, found a dramatic gap between enterprise leaders’ views of AI productivity gains, and the realities that end users reported. According to Upwork, 96% of C-suite leaders “expect the use of AI tools to increase their company’s overall productivity levels.”
The problem is that almost half of employees said “they have no idea how to achieve the productivity gains their employers expect,” Upwork reported. And that’s not the worst of it; 77% “say these tools have actually decreased their productivity and added to their workload.”
That shouldn’t come as a surprise, given that Gen AI use cases tend to be much more ambitious than previous generations of AI. When spell-check tells you a word is misspelled, you don’t rush to a dictionary to make sure it’s correct about that; but if you ask a Gen AI assistant or chatbot to write you a report or set of code, you’ll likely take the time to review the results carefully to guard against hallucinations; you may even go back and check the sources it references. So we can expect some users to find Gen AI actually adds to the time it takes to complete certain tasks.
The good news is that employees aren’t resisting AI; Upwork found that 65% believe it has the potential to increase their productivity. But enterprises need to leverage employees’ willingness to engage with the technology, and provide better training. And we’re not just talking about prompt engineering. End users need to understand why they’re using a particular AI-enabled tool in the first place, so they can approach each task with a better idea of whether AI is even the right tool for the problem at hand. Which means that business leaders – and IT – need that clarity before they roll the tool out widely.
These survey numbers show that many enterprises haven’t come to grips with the question of where Gen AI adds value, and where the technology may not quite be ready for prime time. This conundrum will be a major focus of our Enterprise Connect AI event, Oct. 1 – 2 at the Santa Clara, CA, Convention Center. We’ve focused our program on practical issues around enterprise AI adoption and value, and we’ve got a whole track dedicated to AI & Productivity. I hope you can join us for two days of in-depth, objective, no-hype exploration of how you can make AI work for your enterprise.
Dealing with AI's Tradeoffs
August 1, 2024
One of the few areas of consensus in these early days of enterprise Generative AI has been that the contact center application most likely to provide a quick win is call summarization for agents. It seems to check all the boxes: It’s a clear time-saver, shaving seconds or more off post-call work by writing the agent’s report for them; it keeps a human in the loop, since the AI-generated summary gets reviewed by the agent before it’s saved; and it’s internally-facing, so that even if errors or embarrassing phrases do crop up from the AI, the brand is spared customer and public blowback.
But even call summarization has its complexities and concerns, as Forrester senior analyst Christina McAllister points out in this conversation with No Jitter senior editor Matt Vartabedian. McAllister’s review of these challenges shows just how much human effort and judgment needs to go into the process of making Gen AI truly enterprise-grade.
The fundamental tradeoff in this and many other Gen AI applications is, as McAllister explains, “latency versus accuracy.” Basically, the quicker the AI produces its call summary, the less time it’s had to apply context from the model, which can make the summary less accurate. But if you allow the system to take more time to achieve greater accuracy, “every precious second eats into the projected time/cost savings of the Gen AI summary solution,” she explains.
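To see how quickly that tradeoff can erode the business case, here’s a rough sketch in Python. All of the numbers (manual summary time, generation latency, review time, call volume) are hypothetical assumptions, not figures from McAllister or Forrester:

```python
# Rough net-savings estimate for Gen AI call summarization in the contact center.
# Every number below is a hypothetical assumption, for illustration only.

manual_summary_seconds = 90       # assumed time an agent spends writing a summary by hand
generation_latency_seconds = 10   # assumed time the model takes to generate the summary
agent_review_seconds = 25         # assumed time the agent spends checking and correcting it
calls_per_agent_per_day = 40

net_saved_per_call = manual_summary_seconds - (generation_latency_seconds + agent_review_seconds)
net_saved_per_day_min = net_saved_per_call * calls_per_agent_per_day / 60

print(f"Net savings per call: {net_saved_per_call} seconds")
print(f"Net savings per agent per day: {net_saved_per_day_min:.0f} minutes")

# Tune for accuracy (a slower, more deliberate model) and generation_latency_seconds grows;
# tune for speed and agent_review_seconds tends to grow instead. Either way, the net shrinks.
```

Plug in your own numbers and the point becomes obvious: a few extra seconds of latency or review per call, multiplied across every call an agent handles, can quietly consume the savings the business case was built on.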
You should definitely read the whole article; it’s a great explanation of some of the fundamental tradeoffs that enterprises confront when they deploy Gen AI functions in the real world.
That real-world perspective is also what we’re bringing with our first-ever Enterprise Connect AI conference, Oct. 1 – 2 at the Santa Clara, CA, Convention Center. We’re well past the initial hype stage of Gen AI, and enterprises now must look at the kind of details and tradeoffs that McAllister discussed with No Jitter, and from this examination determine a realistic business case and ROI. Our program is built around sessions that offer this level of detail and analysis, while our keynotes offer a vision from companies that are enabling or building strategic, at-scale implementations of Gen AI, showing the possibilities of this revolutionary but complex technology.
On that note, I’m delighted that we’ve announced two additional keynotes: Our conference will open on October 1 with a fireside chat featuring Darrius Jones, SVP, chief digital & design officer at USAA; and our October 2 opening keynote will be a conversation among three leaders at Deloitte: Rohit Tandon, managing director, Lauren Littlefield, managing director, and Vinita Kumar, business development.
I hope you can join us in Santa Clara this October for two compelling days of learning and networking about AI in the enterprise.