The Next Conversation: Powering Customer Conversations With AI (Cloud Next ’19)


[MUSIC PLAYING] ADAM CHAMPY: We’re really
excited to talk with you again today about AI. And specifically we’re going
to talk about conversational AI, what’s new, how we integrate
with great experiences, and also how we deploy these
into amazing end applications. Today I’m joined by a few
of our leaders in the space. Shantanu from Google leads
our Dialogflow product. Kanan leads areas and
investments of the Service Cloud product from Salesforce. And we’re also joined
by Thomas Jefferson University, in particular Robert Neff,
who leads digital innovation in a way that, I think,
brings conversational AI into the patient and hospital
experience in a truly innovative way. We’re really excited to share
these applied applications with you this morning. All of this starts with
a fundamental vision of why are we doing this. I think that matters
a lot to what we do with the product, how
we craft it, how we deploy it, so we’ll spend a
bit of time on that. But specifically we’ll
also talk about what we’re doing in the products,
what capabilities, what areas of development that we’ve been
doing over the last six to nine months that are
new and innovative, and then specifically what
we’ve done to apply them. Again, with our partnership with
Salesforce and in applications, in particular, in
a hospital setting. Our vision is crafted
and framed very much around the customer experience. If you think about Google’s top
level vision, and this frames a lot of what Shantanu
and I do, it’s start with the customer and
everything else will follow. That’s a really good place
to start when you’re thinking about having conversations. In particular, if you’re
having a conversation about selling a
product, about helping a patient at the right
moment in their journey through a hospital experience,
or just helping somebody as they’re on the couch trying
to buy a product with a chat bot or seeing some
issue with a bill, the customer experience
really matters. And I think a lot of us have
experienced moments, especially with automation, where
there’s just massive failure. Like, you know pretty
quickly that something is going wrong with a bot. You don’t have to
watch John Oliver, or the clip I think many of you
probably have seen, of how bots can go wrong really fast. It’s kind of like an
umbrella that doesn’t work, you know really quickly
that you’re going to get wet and you start to hit zero fast. You’re like, I want
to talk to somebody. But the other side
of this is that there is an agent challenge. In particular, not everything
that we’re talking about here is about automation. We want to make sure that as
agents engage with customers with complex topics or
areas that they may not have been trained on or with
rapidly moving evolutions of products or rollouts
their companies have, that the agents are
in great positions to offer amazing customer care. And stay tuned, in particular,
to the Salesforce part of the discussion for that. We also want to empower
every enterprise to access this, whether
it’s the average developer, and Shantanu will take
you through the developer experience of Dialogflow,
or the ability to integrate all of this
into your own infrastructure. That’s critical to us. We don’t want you
to have to move platforms to be able to take
advantage of our technology. In particular, if
accessing our AI is not possible without a large
investment or a large shift of infrastructure,
fundamentally, our strategy and our vision
also means that we have failed. And this is kind of the POC hell
that I think a lot of you have maybe been through with a
lot of AI type products, which is I can test something, I
can try it on a single machine, but I cannot deploy it at scale. I can’t train. I can’t develop. I can’t essentially bring this
to more than one narrow use case. So whether it’s training
data, whether it’s applying it outside of one specific narrow
use case in your enterprise or with your clients, this
is really important to us. So that brings us
to what we’re doing and why we frame
our vision, but also how we apply our
products to that vision. And in particular, we’ll talk
about two major product areas. The first is Dialogflow. This is the world’s
leading NLU platform, enabling experiences
way beyond the chat bot. And in particular, it’s
enabled by the quality of its natural language
understanding and the ability to actually take action with
that language understanding. So once you reach a
quality threshold, the next challenge is to
be able to take action. The other product that
we’re going to talk about, and the other program, is our
Contact Center AI program. And in particular, what
Contact Center AI does is it deploys these same natural
language technical capabilities into the challenge of customer
service, whether by voice or by chat, and does
the work of integration to enable these
transformative experiences. And with that, I’ll
hand it over to Shantanu to talk about Dialogflow. SHANTANU MISRA: Thanks, Adam. With Dialogflow we have
had two amazing years. At Cloud Next 2017,
we announced it, and we had crossed
150,000 developers, which we thought was a great
number for an AI platform. But we’re excited to announce
that we have recently crossed 850,000 developers. We have basically doubled
the number of developers on the platform over
the last 12 months. Amazing growth. But I think what it also enables
is these 850,000 developers are spread across
different languages, different geographies,
different use cases, different industries, so
our NLU and speech have evolved over time to serve
these very varied use cases. So for a new customer
coming on Dialogflow, they get to benefit from a
really evolved machine learning stack that has faced
all the challenges that these developers
have thrown at it. And this growth has
been accelerated a lot by the enterprise adoption
that we have seen. So we launched our
V2 enterprise APIs. We GAed them last April,
so roughly 12 months ago. Today we have thousands
of enterprises on our platform using Dialogflow
for their different business use cases sending us billions
of queries every year. And this makes us really
happy because this goes to show the
business value of having an automated
conversational agent, be it a text bot, a voice
bot, or a phone bot. And these enterprises vary
from really large Fortune 20, Fortune 100 companies to
really cool IoT startups that are designing their entire IoT
experience around Dialogflow agents. And by the way, I know some of
you are probably still on V1, so please switch to V2
as soon as possible. We’re going to sunset V1 soon. We have almost all
the client libraries already available in V2,
mobile client libraries have been a gap. The iOS client libraries
were released last week. And Android libraries are
coming out soon as well. Now, why are
customers choosing us? I think the single
biggest reason why customers choose us is
the awesome NLU and speech capabilities that we provide. When you sign up to
Dialogflow you essentially use the same NLU
and speech stack that powers our
assistant devices and other bots of Google. But, of course, as a
GCP paying customer, we ensure that your
data remains your data, is not used for any
model training whatsoever or by any other bots
of Google, so you get to benefit from
the rest of Google, but it doesn’t go the
other way around. And we do this at scale. So a lot of bot
platforms out there serve a few thousand entities,
a few hundred intents, but any decently
large enterprise– any complicated
agent soon starts running into scale
issues with those limits. At Dialogflow, we today
support up to 30,000 entities and up to 2,000 intents. And we are committed to pushing
the envelope on that scale all the time, because we want
to ensure that you don’t have to think about scale when you
make choices about your bot platform. So as you are thinking about
which bot platform to go for, the first question you should
be asking is: can this serve NLU and
speech at the scale that my agent will soon require? And that scale also comes
with a lot of complexity. So Dialogflow is
currently in use in very jargon heavy
industries, like health care, and Thomas Jefferson is going to
talk more about it, education, financial services. And our speech has
gotten really good at understanding that
industry specific jargon. One of the video game
companies came to us and said that we
need to recognize words that pretty much don’t
exist in any dictionary whatsoever. And with the help of
speech contexts and phrase hints that Dialogflow’s
audio input provides, we were able to
handle that use case.
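As a rough illustration of how those phrase hints can be supplied, here is a minimal sketch using the Dialogflow V2 Python client; the project ID, session ID, audio file, and the invented game terms are placeholders, and exact field and import names vary a bit between client library versions.

```python
# Minimal sketch: biasing Dialogflow speech recognition with phrase hints.
# Project, session, audio file, and the game jargon below are placeholders.
import dialogflow_v2 as dialogflow

session_client = dialogflow.SessionsClient()
session = session_client.session_path("my-gcp-project", "session-123")

audio_config = dialogflow.types.InputAudioConfig(
    audio_encoding=dialogflow.enums.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    language_code="en-US",
    sample_rate_hertz=16000,
    # Out-of-dictionary terms the recognizer should be biased toward.
    phrase_hints=["vorpal blade", "manaforge", "zintharun"],
)
query_input = dialogflow.types.QueryInput(audio_config=audio_config)

with open("utterance.wav", "rb") as audio_file:
    input_audio = audio_file.read()

response = session_client.detect_intent(
    session=session, query_input=query_input, input_audio=input_audio
)
print("Recognized:", response.query_result.query_text)
print("Matched intent:", response.query_result.intent.display_name)
```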
And I know some of you attended the Discover session a couple of days ago. And there are multiple
other customers like Discover Card who have
done similar studies comparing the NLU quality and speech
quality of Dialogflow with other vendors and found
us to be way above the rest. This is an area of
strength for us. It’s an area of
continuous investment. And we continue to
improve on that. The other big strength is
the international coverage that we have. We support 20-plus languages. Internationalization
is a focus area, not just for
Dialogflow but for Google Cloud and Google more broadly. So we get to benefit from that. And we’ll be launching
many more languages soon. The third big component is
integration flexibility. So with our Contact Center AI
program and other programs, we make sure that you can serve
Dialogflow in whatever channels you want to serve it in. So if you want to create a Slack
bot, a Facebook Messenger bot, or a text bot on your website,
a voice bot on Google Home devices, or a call
center bot, you can do that with Dialogflow. And that flexibility is also
available in the backend, because we don’t force you
to migrate your backend over to a particular service or
a particular cloud provider; you just need a webhook URL. And with that webhook, you
can make calls to your backend and fulfill all the
requests that you’re getting through Dialogflow.
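As a concrete sketch of that fulfillment path, here is a minimal webhook written with Flask; the route, intent name, and ordering logic are invented for illustration, though the queryResult and fulfillmentText fields follow the V2 webhook format.

```python
# Minimal Dialogflow fulfillment webhook sketch (Flask); field names follow
# the V2 webhook format, but treat the endpoint and order logic as placeholders.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/dialogflow-webhook", methods=["POST"])
def fulfill():
    body = request.get_json(force=True)
    query_result = body.get("queryResult", {})
    intent = query_result.get("intent", {}).get("displayName", "")
    params = query_result.get("parameters", {})

    if intent == "order.add_item":            # hypothetical intent name
        item = params.get("item", "something")
        reply = f"Added {item} to your cart."  # call your own backend here
    else:
        reply = "Sorry, I didn't catch that."

    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```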
So what are the use cases that we see Dialogflow being used for? I mean, traditionally,
we have defined our use cases along the three categories
that you see at the top. But over the last year,
we have seen many new use cases emerging as well. So, of course, the
biggest use case is customer service
and B2C interactions. As I said, a lot of enterprises
are coming to us and saying, we need one platform to serve
our digital channels as well as our call centers. And that’s something
for you to think about as you make your
choices as well. With Dialogflow being the
flexible platform it is, you can use basically
the same stack across different channels. We’re also being used
in other B2C scenarios like really complicated
agents for e-commerce. And think of them like shopping
cart building scenarios, which are really complicated,
because you can have, I want a pizza and
a Coke– by the way, can you change my
Coke to Diet Coke? And going back and forth,
adding more stuff to your cart, and removing things from the cart. It requires really complicated
context management. We’re really happy that customers
are finding Dialogflow useful for those scenarios.
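One common way to handle that kind of back-and-forth is to keep the cart in an output context that Dialogflow carries between turns. The sketch below shows a fulfillment handler doing this with the V2 webhook JSON; the context name, intent names, and parameters are made up for illustration.

```python
# Illustrative sketch: keeping shopping-cart state in a Dialogflow output
# context so follow-up turns can amend it. Context and intent names are made up.
def build_cart_response(session: str, cart: list, reply: str) -> dict:
    return {
        "fulfillmentText": reply,
        "outputContexts": [
            {
                # Contexts are addressed under the current session.
                "name": f"{session}/contexts/cart",
                "lifespanCount": 10,          # keep the cart alive for ~10 turns
                "parameters": {"items": cart},
            }
        ],
    }

def handle_turn(body: dict) -> dict:
    session = body["session"]
    query_result = body["queryResult"]
    intent = query_result["intent"]["displayName"]
    params = query_result.get("parameters", {})

    # Recover the cart from the incoming context, if any.
    cart = []
    for ctx in query_result.get("outputContexts", []):
        if ctx["name"].endswith("/contexts/cart"):
            cart = ctx.get("parameters", {}).get("items", [])

    if intent == "order.add":                 # hypothetical intent names
        cart.append(params.get("item"))
        reply = f"Added {params.get('item')}. Anything else?"
    elif intent == "order.swap":              # e.g. "change my Coke to Diet Coke"
        cart = [params.get("new_item") if i == params.get("old_item") else i
                for i in cart]
        reply = f"Swapped {params.get('old_item')} for {params.get('new_item')}."
    else:
        reply = "Your cart has: " + ", ".join(str(i) for i in cart)

    return build_cart_response(session, cart, reply)
```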
The second big use case that we have is around IoT devices,
or voice commands. So today, Dialogflow is
powering robotics companies, gym equipment, different
vehicle manufacturers, as in-vehicle assistance systems,
home appliances, entertainment systems, et cetera. The third big use case is
internal enterprise assistants, and that’s the way
a lot of enterprises get started on Dialogflow. So creating HR, IT chat bots,
or organizational FAQ chat bots or voice bots, for their
team to ramp up on Dialogflow and then finally take
that to their customers. But last year we
saw a lot of new use cases emerging as
well, especially during the holiday season. The biggest one was probably
conversational marketing and conversational ad campaigns. So we had lots of products
being launched or marketed through a conversational
campaign on their website or through their search ads. And that’s a big use
case that is emerging. The second big use case
that came up last year was assistive, Dialogflow
is an assistive agent. So when a live agent
is in conversation, and it ties in very
well with our agent assist product, which Adam
is going to talk about later, but providing turn by turn
assistance to the live agent through the dialogue
flow bot is something that has come up a lot. And, of course, the RPA
vertical has taken off, and Dialogflow is being used
to power the NLU behind RPAs, so extracting the entities or
triggering off an RPA workflow using Dialogflow
intent detection. Also, I’m really excited
to announce that now we’re officially HIPAA compliant. This enables a lot of
health care use cases. A lot of our health
care customers have been requesting it. And we are really strong in
the health care vertical. And I’m really excited to
see that we’re doubling down on that, which will probably enable
a lot more health care usage. ADAM CHAMPY: And
just to pause here, I think it’s important to think of
jargon-specific or internal use cases, like HR use cases, where
a lot of those discussions are about health
care related topics. So whether it’s sensitivity
that’s at the HIPAA compliant level, or the ability to say to
an internal team or an auditing team that we use a HIPAA
compliant solution, it’s really critical
to think about, does your NLU scale to that? And does your system
overall approach this type of data security? SHANTANU MISRA: And,
of course, Dialogflow is the default platform for
creating Actions on Google. The Assistant family
and Actions on Google have seen tremendous
traction over the last year, and Dialogflow is
serving that traffic. So it’s a great
way for us to learn how people are interacting
with the Google Home devices. We have some really
exciting announcements around Actions on Google
coming up at I/O. So stay tuned for that. And with that, I want to
hand it over back to Adam. ADAM CHAMPY: I’ll take that. So we’ve talked a lot about
conversational technology. And as we think about the
Contact Center AI program, what this really means
is taking that technology and, again, applying it to
the vertical of customer care. And we say that there are three
core components for us. Again, the Dialogflow agent
that we just talked about and those agents in voice
and in chat, as well as two other products. In particular, assisting
agents in real-time during calls by doing real-time
natural language understanding and then surfacing
assistive intelligence to the agent in real-time. The other aspect is,
how do you understand all of the different
things that are going on in a contact center? It’s kind of crazy if
you think about it today, if you were deploying a mobile
app or a website and somebody said, I’m not going
to use any analytics, somebody would tell
you that’s just crazy, why would you do that? But when you think
about actually answering the question of what is going on
in my contact center right now, not just the number
of calls, not just the CSAT by survey, but what
topics are being discussed, what’s new, what’s different
today versus yesterday. That’s still
significantly dark data for a lot of our
customers and clients. And when you have the ability to
do hierarchical and clustering techniques where
the AI is actually able to understand the
relationships between topics and gather them, that’s
where we can start to present insights products. Now, the way we go to market in
this is that we, first of all, develop the product and
are announcing beta today. And that is a great
advancement that enables production level traffic
to run through our systems. But the other aspect
that’s critical to this is that we deploy this
technology via partners. We’ll talk about this
on the next slide. And starting today,
we’re announcing beta, but we’re going to be rolling
out more and more and more capabilities. As Shantanu had
mentioned, we’re starting to see Dialogflow
used in more ways to be kind of a coaching
or assistant-type capability, sort of a
hybridization of turn-by-turn activity with
the assistive technology. We’re also seeing more
and more applications for that analytics
type capability of– now that we have the ability
to cluster and group, how can that be used? How can that be
used in real-time? How can that be used in
longer views of conversation? Or how can that be used in
quality metrics and compliance? But we’re really excited
by who we’ve partnered with to take this to market. In particular, these
are the providers who you can get
Contact Center AI from. This is not a product that
you buy directly from us. And it’s a veritable kingdom
of the contact center world. In particular, we have
a few new names up here that are different than
what you saw about six or nine months ago now. So we’re excited to
announce partnerships with 8×8, Accenture, and Avaya. And we’ll also be talking
a lot about our partnership with Salesforce
in a few minutes. All of this, though,
when you think about conversational
technologies, is powered by a new approach
and a new thought process around speech. And what we’re really excited about
is that our speech products also enable, not just understanding,
but a different way of thinking about
tuning, building, and deploying speech models. And in particular, if you’re
interested in the speech session, I think we’re actually
doing a replay of yesterday’s speech discussion at, I think,
an AMC theater nearby at 11:40, so look for Dan
Aaron’s talk on this. So if you want to
dive into speech, we have another session on that. But in particular, we
think about speech, not from a tuning perspective
or a large professional services investment to create
speech models, but we do things in kind
of an AI centric way where we add context. And at every turn
of a conversation, we can condition that
model to understand what you’re talking about. So if you have a
bot or a model that can handle banking related
topics or IT related topics and your customers may
be calling in for help with the website or
help with their account, the speech model’s likely
a little bit different. And you need to be able to
tune it in that context. And our development and our
investments align with that. And, again, that’s
where you’re using the best of Google’s
technologies in that we have to deal with
that too on the consumer side of our business. The other aspect here
that’s really important is the quality of synthesis. And research shows
time and time again that if I speak like a robot,
there’s one aspect that– it actually sounds
pretty annoying. And if you think about
turn by turn and multi-turn conversations, users actually– they don’t understand–
they don’t communicate it specifically, but the quality
of interaction matters. And as we move forward
over the next year or two, it will be very
apparent to consumers when they’re talking
with a high quality bot versus when they’re
speaking with something that is very robotic. And the data actually indicates
that the conversations are shorter and go faster to
escalation when they just get tired of talking to the robot. The other side is that the
NLU benefits from synthesis. So if you speak much
more like a human, people will speak back
to you like a human. If you ask them for command
and control type behaviors, you’re going to get command
and control type responses. And the Dialogflow NLU benefits
from more rich engagement. We have, again,
a set of partners who could take you through
this veritable eye chart of how the system overall looks. And in particular,
what this is covering are three amazing phases. One, the integration with
our telephony partners. Two, the closed loop
of speech to text back through virtual agents and
cloud text to speech, and then the assistive function, which
if you start right to left, talks and starts about
with an agent desktop and putting the right articles
and the right assistance for those articles. And with that, I want
to actually bring up somebody who’s in the leadership
essentially of our industry on that assistive desktop. And Kanan from Salesforce. KANAN GARG: Awesome. Thanks, Adam. So hi, everyone. I am Kanan, product
manager at Salesforce, Service Cloud in particular. Super excited to be here. My first time at Next. So many of you might
have heard yesterday we made an announcement
at the keynote, we’re bringing Google’s Contact
Center AI, Service Cloud, and Einstein AI
together to make it very flexible for our customers
to provide an integrated and AI powered customer
success experience. But before we jump
into actually how we’re providing
that experience, I want to take a moment to level
set everyone on what Salesforce Service Cloud is. And I know– with a maybe
short poll of hands, does anyone know Service
Cloud Salesforce? Kind of– sort of. OK. So with Salesforce
for Service you can actually transform
your customer and employee experience and provide that
next level of customer success by providing a connected,
intelligent, proactive and personalized service. So connected, we provide
a unified agent desktop. So agents now have a 360
degree view of their customer. For intelligence we supercharge
your service experiences by embedding intelligence and
using our automation suites or workflows, next best
actions, Einstein bots, to personalize across channels. So we’ve seen, in
the recent past, the channels have exploded. So you’ve got Facebook
Messenger, SMS, chat, video, communities, a lot
of ways where consumers can interact with brands. We put the customer at the heart
of every service operation. And we provide a consistent
personalized service no matter where your customers
are asking you for help. And then lastly, if any of you
are in the mobile field service workspace, we’ve got our
field service solution to elevate our mobile
workers and provide that proactive service,
even integrated with IoT. And so because of those four
ways of providing service, we’re actually the number one
in the Gartner Magic Quadrant. We’re by far the
leaders in this space, and we have been for
the last 10 years. So that’s a big, big
achievement for us. And a lot of this is not
just because of our products, a lot of it is
because of the trust our customers have put
in us, our rich partner ecosystem, and
the trail blazers, so those are the
customers that take Salesforce to the next level. And with that little
quick overview, we can jump into the first
transformative experience. I’ll actually have Adam
play the video for you to see what we’re building with
Contact Center AI and Service Cloud. [VIDEO PLAYBACK] – Google Cloud
and Salesforce are partnering to deliver AI powered
customer service experiences, so that every company can
transform their customers’ experience. Let’s show you Hulu’s vision
for scaling engaging experiences for their more than 25 million
subscribers through the eyes of Lauren, a Hulu viewer. Hulu makes it easy
for their viewers to get help anytime, anywhere. Instead of customers having to
navigate a lengthy phone tree, a Hulu bot powered by
Google Cloud’s virtual agent technology can
immediately engage viewers in a human-like
conversational experience, help resolve issues 24/7, and
seamlessly hand off the call to a human agent
at the right time. – Hi, Lauren. Thanks for calling Hulu. How can I help you today? – Hi. I want to add the NBA
package to my plan. – Sure thing. Looks like you’ve been
watching on a Chromecast Ultra. Is this the device you’d
be watching the game on? – Yep. – Perfect. I’d recommend our Hulu
Plus Live TV plan. If you’d like, I
can transfer you to one of our viewer experience
advocates to get that set up. – That would be great. Thanks. – The human agent
now has everything they need in the Salesforce
Service Cloud console, including the full
call transcript to expedite case resolution. Meanwhile, Google’s Agent
Assist technology automatically presents machine learning
driven recommendations, making it easy for agents to
know what viewers need and get answers fast. [PHONE RINGING] – Hey, Lauren. Thanks for calling Hulu. It looks like you’d like
to add the Hulu Live TV plan to get access to all
of the basketball games. I’d be happy to
help you out there. – That would be great. – Perfect. I’ll update your
plan to our Live TV. – As the call
progresses, Agent Assist continues to surface
new recommendations based on the context
of the conversation. – Did you know that
you could also watch Hulu Live TV on mobile devices? Would you like me to
help you set that up? – Sure, I’d like that. – Great. I’ll send over an article
that will walk you through the steps for
future devices as well. – Agents can even add more
value to every conversation with the intent matching capability
from Google Cloud Dialogflow and Salesforce Einstein
to serve up the real-time recommendations and offers to
keep viewers engaged and happy. – What else have you
been watching recently besides basketball? – My husband and I have
actually been binge watching “Game of Thrones.” We want to get ready
for the final season. – Awesome. Did you know that if you add
HBO through your Hulu plan you can watch directly
in the Hulu app? If you’d like, I can add
HBO to your plan as well. – Yes, let’s do it. – Agents are guided step by step
through any business process. No memorization of what
to do next required. The combined power of Google
Cloud AI and Salesforce Einstein also makes call wrap
up easy with pre-populated case details, improving data
quality and saving agents time as they move on to
helping the next viewer. With Google Cloud
and Salesforce, Hulu can now integrate AI
into every service interaction and reimagine the
viewer experience. [END PLAYBACK] KANAN GARG: Cool. How amazing was that? I mean, any of you from
the contact center space would have realized
how amazing this is. We’ve used three technologies. We’ve used the
Virtual Agent Assist. The first time when
Lauren calls up, she’s not talking
to a human agent, she’s actually talking to a bot. That bot conversation
then gets transferred to a real human agent. And as the agent is talking
to Lauren, or the customer, Google is in real-time
transcribing the call and providing the next
action for the agent to take. So the agent’s not
searching manually for knowledge articles. They’re actually being provided
in real-time recommendations as they’re working
through a case. So this is pretty transformative
in the contact center space. The other integration that
we’re working with Google on is in terms of Einstein bots,
or the Dialogflow advanced NLU. So Salesforce has their own
version of Einstein bots. Bots allow you to scale
your service operations. It’s not possible to keep adding
human agents to your contact center. You do need to scale, and
bots are a good way to do that. We allow you to
declaratively create bots, more conversational bots. You can deploy them on any
channel, any digital channel. So you can create them on chat
and then deploy them on SMS, Facebook Messenger, et cetera. And one of the ways that we’re
integrating with Dialogflow is using Dialogflow’s
advanced NLU capabilities. So Shantanu actually spoke about
this in quite some detail earlier. Dialogflow is probably
the best product around for advanced NLU, where
if you’re a global customer and you have to
deploy your contact center in different
territories around the world, Dialogflow along with Einstein
bots will help you do that, because there’s a lot of
multi-language support with Dialogflow. Did you want to add? SHANTANU MISRA: No, I
just wanted to say it’s– I think it’s a great
complementary partnership. And any existing
Einstein customers who want to expand
to other languages or make use of other
advanced NLU features, like entity
extraction, et cetera, can now plug in Dialogflow
into their Einstein bots. It’s a great partnership for us. KANAN GARG: Awesome. And with that,
please visit our booth. It’s up on the same
level, I think. Come to the Salesforce booth. You can actually see the
makings of this video. It’s actually live. So come check out our booth or
take the Service Cloud trail. And with that, I will
hand it back to Adam. ADAM CHAMPY: Sure. So I’d like to introduce
our next speaker from Thomas Jefferson. And what’s really
important, as you think about the patient
experience within the hospital setting, is the
actual experience. And study after study talks
about the actual experience mattering, not just to
happiness but also to outcomes. And it’s really exciting
to see Dialogflow and our conversational
technologies deployed in such an innovative setting. So please join us. ROBERT NEFF: Great. Great. Thank you. So those of you who may not be
familiar with Thomas Jefferson University, we are a large
academic medical center based in the Philadelphia
and now spanning out of Pennsylvania,
New Jersey as well. Large academic medical center. We have about 14
acute care facilities. We’re expanding rapidly. We’re going to have
likely another four. In the next several
months we’ve been doing a number of acquisitions. Countless outpatient care
facilities, urgent cares, and an entire health sciences
and design university as well with almost
10,000 students. So we’re a large player in
the health care and education space. And what’s unique
about Jefferson is we have a group, which
is called the Dice Group. And I’m going to talk
about that real briefly, so you understand what makes
what we do at Jefferson essentially possible. The Dice Group was
started at Jefferson about 3 and 1/2 years ago. I joined Jefferson–
was brought over by our chief digital
officer, who is Neil Gomes. He actually spoke here on
Tuesday about some other bot work that we’re doing. But the group has about 150
people split into thirds. A third of them are developers
who are working in my group, another third of
them are designers working on research and analysis
to design workflows and design processes that are focused
very much on transforming the patient and
student experiences. And then we have another group
that is focused on design– I’m sorry, focused on support,
training, documentation and operations to make
all the technology that we build in this group
work throughout our health care system. And what technology
are we building? We are building and bringing
digital technologies that are very common in
other sectors, like what we see in retail, entertainment,
transportation and so forth, and trying to apply that
same type of technology, that level of digital experience
that patients and people find in every other
aspect of their lives and apply it to their
experience with health care, because that is something
that is very much lacking, not only in the US but throughout– oh, I just went back. Sorry about that. So how do we do this? And what are some of
the core areas we think about as we’re going
down this path? We think about access. We think of all the
different services that we provide to patients,
whether it be surgeries, outpatient appointments,
lab visit– or lab tests, imaging visits. And we say, how can
we use technology– how can we bring digital
technology to open up that access to make sure that
the most people are getting access to those services at
the times that they need them? We look at convenience. We look at, how do we
make sure that there is as little friction
as possible when customers, patients are
interacting with our health system? So that means
meeting the patient, meeting that consumer where
they are in whatever channel they’re most comfortable. So whether that’s through a
website, through a call center, through mobile technologies,
IoT to devices, wherever they happen
to be, making sure that we are meeting them and
making it as frictionless of a process as possible. And finally, we’re doing this
so they have the best experience they can at Jefferson. We want that,
because we want them to be engaged in
their health care and we want them to come
back and stay healthy. So to the extent that we can
provide the best experience, we enable them to do that. We bring digital to all the
different aspects of services that we provide. So whether it’s the care that
we provide to our patients, the learning we provide
to our students, the work that we’re doing from an
operations standpoint, even the multi millions
of dollars of research, grants and federal
funding that we have, and then even to how we handle
philanthropy and donations into our organization. So everything else
that consumers find in the rest of their
lives, we expect and we are trying to bring that
digital transformation to health care as well. So this is a sample of
a future hospital room. Folks who work in health care
maybe have seen stuff like this before. But as you can see, it’s a
very bright, sterile, clean environment. It looks kind of fun to be in. Looks a little bit futuristic. There’s a lot of
technology in this room. There are a number of different
screens with information projected, both for the use of
the provider on the left hand side of the screen
and then for the use of the patient on the right
hand side of the screen. You can see multiple
different ways that the patient may interact
with this technology, same with the provider. And there are a lot
of companies that are focused on bringing
this to reality right now as the future of a hospital
room in a hospital setting. However, we at Jefferson think
the future of the health care environment will look
something more like this. And this is what
we’re going for. We’re going for a simple,
comfortable, and personalized experience when you’re
getting your health care. We feel that the best
way to provide health care to our patients
is to do it in a way
intimidating, it’s not different from something
that they normally do, and it’s very easy
to interact with. And that’s why when we look at
the future environment for how we provide care several
years down the line, obviously, as we’re marching
towards this vision, it’s going to be something more
like this than the hospital room that you saw
on the slide before. One of the ways we
achieve this is not only through the technology working
with Google and Dialogflow that we’re going to
talk about, but looking at other technologies
as well and looking at the concept of health
care with no address and pushing the health
care out, and the services out to where patients
are when they need it. And that health care
with no address concept is a concept that
Dr. Klasko, who is the CEO and
president of Jefferson, speaks about quite often. And that’s a major
vision and a direction that we go in when
we’re focusing on building out technology. We’re trying to avoid
situations like this. This is not a comfortable
situation for someone who has just undergone a
surgery, feeling very sick, not sure that they exactly know
what’s wrong with them. They don’t want to wake up
in the middle of the night to a room with beeping
devices, lights on them, and just kind of an
intimidating environment. So we’re trying very
much to move away from this type of an experience
to move to a much more comfortable experience. All right. So smart speakers. I don’t think I have to
convince anyone in this room that smart speakers and
voice assistants are important and
growing, and these are things that people are using
at home very much today, and it’s only going
to continue to grow. I could put any number
of charts like this from different sources up there. They all show that there is
tremendous growth in this area. So what are some of
the things that make voice a great way to interact– to have patients
interacting with the system? And what are some of
the reasons for that? There is a huge barrier
to use, or a huge amount of friction with downloading
an app, getting out a device, signing up with an account to
engage in a mobile app to check your vital signs– I’m sorry, not your vital
signs, but your discharge summary, your nurses’
names and so forth. There’s a lot less friction
when it comes to using voice. Populations
who are much more
in technology, whether it be a mobile
app, again, or a website or a patient portal,
are much more willing to try the concept
of using voice, because voice is something that they’ve
been communicating with and getting data in
and out of their heads with for their entire life. So populations, again, that have
been resistant to using an app that we throw at
them are much more willing to try using a
voice based technology to interact with the system. So where are we today? We have a smart rooms
pilot up and live. We’ve had it live
for several weeks at one of the floors in
one of our hospitals. And that pilot is giving us very
interesting data on what types of skills and what types
of features patients like and all sorts of other things
that we didn’t necessarily think we were going to learn
at the beginning of the pilot. But we’ve seeded the pilot with
some very basic skills, which were very challenging
to build, but we were able to set up an environment where
we could collect a lot of data on what’s going to be used and
what’s not going to be used. So we have basic
environmental controls. We have controllability for
the TV, for the HVAC systems, lights in some cases. We have a lot of
information about what’s going on for that
patient for the day, when is their food coming,
the names of their doctors, information about their
doctors and so forth, and then the general information
about weather and so forth that you would
expect in a smart assistant. And we’re doing this, of course,
in a fully HIPAA compliant way. So what are some of
the things that you might expect a patient
to ask about when they’re in the environment? We surveyed nurses,
we surveyed doctors, and we surveyed patients, both
before we got into this pilot and now that we’ve
had the pilot ongoing. We actually have the analytics
of what patients are asking. And there’s something
that’s very interesting that’s kind of a
whole segment that’s missing from this list,
which is actually anything about their clinical care. Patients care so much
about their environment when they’re in a hospital for
a prolonged period of time. The majority of the things that
they’re often asking about are: when is their food
coming, what is for lunch, when are they
getting discharged, when can their family visit and
so forth, what’s on TV tonight, can I change the channel. These are the things that
patients are very interested about, and it goes
very much to how they evaluate their
overall experience of being in the hospital,
which by the way, is one thing that hospital–
or one of the many things that hospitals are graded
on by our patients. And we can get paid
very often based on what type of experience
that patient has had as an aggregate. So where are we going with
the concept of a smart room assistant? We have a pilot,
obviously, live today. We’re learning a lot from that. Again, not only about what
types of skills and features are important to the
patients, but other types of ancillary issues with
running a pilot like this and running a system like this
in the health care environment. We think that room control
and environmental control is one of the areas that
we see a lot of interest and a lot of usage, so
we’re going to continuously be adding more to that and
adding more capabilities. We are looking at more
automation and anticipation. So using a system
like this to start anticipating the
type of environment, the type of concerns
that a patient has and addressing
those preemptively. Moving on from
that, we’re trying to look at how do we make this
assistant, which we’re calling a cognitive patient concierge,
beyond just the experience in the hospital room
and start to ask some questions before they
show up, ask them questions and provide some information
after they leave to hopefully get to a point where this
cognitive concierge follows the patient from the point that
which they decide that they need to come to Jefferson
through their visit while they’re with
us, whether it’s one day or a multi-stay visit,
and then after they go home and make sure that
they stay healthy throughout the recovery. So question from a
technology standpoint, why might we not just
put a Google Home device in a patient room? Or from that matter,
an Amazon Alexa device. Well, anyone? So we noted earlier,
it’s very nice, HIPAA. Huge concern from a
patient privacy standpoint. Obviously, the announcement
today and the information about Google and Dialogflow
being HIPAA compliant is extremely exciting to people
in the health care environment. I’m sure it will be
all throughout all the different health
care IT news things, articles and blogs tomorrow. But there are other reasons
besides just HIPAA compliance that we decided to build,
essentially, our own system and device for doing this. A consumer grade device, such
as a Google Home or an Alexa or any of the other virtual– I’m sorry, smart speakers
that are out there, it’s not really suited for
a commercial environment. Everything from IT
Infrastructure, IT security, management, updates,
even down to, how would you actually control
IoT devices in the room? All of that technology
that works really well in a consumer’s home,
in that household residential environment, doesn’t
exactly translate one for one into a hospital environment. So we actually had to
build a lot of technology to get an IoT smart speaker
working in a health care environment. And a lot of that
technology obviously is based on microservices
and cloud services that we are borrowing– not
borrowing, but leveraging from Google. But we built out the JIoT
platform, the Jefferson IoT platform and framework,
which is specifically made for managing and
processing interactions between IoT devices
in the patient room, specifically the smart speaker
device that we’ve created. We’ve also had to create
a number of internal APIs to serve information back
and forth to this system. So you can’t change–
a great example of this is you can’t change
the channel on the TV if the system doesn’t
know the channel lineup. Well, if any of you have ever
been in a hospital before, you know that the channel
lineup is usually nonexistent. If it does exist, it’s a paper
photocopied 25 times that’s handed to you by the nurse. So just getting
some of this data and putting it
into an API format that we could call and maintain
and use to actually control some of these devices
was another huge hurdle that we had to clear as
part of this project. So once we got up and
running, the system– I keep going back. I’m sorry about that. Once we got it up
and running, we ended up with a system that
looks something like this. So you have the
Jefferson environment. The Jefferson environment’s
got the patient environment, of course. It’s got the software, the
Jefferson cognitive assistant suite of software
that we’ve built out. And then, of course,
we’ve got services on the Google Cloud Platform. And just to take you very
quickly through the flow, we start off with number
one where a patient might ask a question, whether
it’s when is their food being delivered or
can we change the channel on the TV, that gets passed
over to the Google Cloud Platform, where it is
processed into text and then processed
through Dialogflow, so we can figure
out the intents. We bring that back to our
request fulfillment library where we figure out
and match that intent to a specific
fulfillment library that we have to get answers
or potentially pass it down to the Jefferson
IoT platform to control a thing
in the patient room over in number six. Once we figure out that
we’ve completed the action to the satisfaction
of the patient, we generate some sort of
response, which we then pass back to the
Google Cloud Platform to put it back into
some verbalized response that we can then pass back
in step eight to the patient, let them know that
we successfully completed their task or we have
a follow up question for them. That’s the essential
flow that we take to lay out this experience.
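As a rough sketch of what that request-fulfillment step might look like, here is a small intent-to-handler dispatch; every name in it, including the JIoT client stub and the intent labels, is hypothetical and only illustrates the pattern described here, not Jefferson's actual code.

```python
# Hypothetical sketch of the intent-matching / fulfillment step described above.
# Handler names, intent labels, and the JIoT client are illustrative stand-ins.

class JIoTClient:
    """Stub standing in for the Jefferson IoT platform client."""
    def send_command(self, room, device, action, value=None):
        print(f"[JIoT] room={room} {device}.{action}({value})")

jiot_client = JIoTClient()

def lookup_next_meal(patient):
    # Stub: the real system would query a food-service or scheduling API.
    return {"name": "lunch", "time": "12:30 PM"}

def handle_meal_schedule(params, patient):
    meal = lookup_next_meal(patient)
    return f"Your {meal['name']} is scheduled for {meal['time']}."

def handle_change_channel(params, patient):
    channel = params.get("channel", "the next channel")
    jiot_client.send_command(room=patient["room"], device="tv",
                             action="set_channel", value=channel)
    return f"Changing the TV to {channel}."

HANDLERS = {
    "meal.schedule": handle_meal_schedule,
    "room.tv.change_channel": handle_change_channel,
}

def fulfill(intent_name, params, patient):
    """Map a detected intent to a fulfillment handler; the returned text
    would then be synthesized back to speech for the patient."""
    handler = HANDLERS.get(intent_name)
    if handler is None:
        return "I'm not sure how to help with that yet."
    return handler(params, patient)

# Example: the result of "can you change the channel to 12?"
print(fulfill("room.tv.change_channel", {"channel": "12"}, {"room": "502B"}))
```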
And with that, I’m going to turn it back to Adam. ADAM CHAMPY: Great. Thank you. The experiences you just saw,
both from Salesforce as well as Thomas Jefferson, are really
emblematic of our thesis, which is if you start
with the customer, everything else will
follow, whether you’re starting with a Hulu
customer who’s calling in to add something to
their plan or ask a question to the actual agent
who’s struggling to actually answer the thousands
of different questions that might get asked, or the
hospital experience where we’re trying to actually make that– what can be very scary
actually into something that’s very
comfortable, and what can be very challenging
for, not just the patient but the families
that are around them. And if you start
again with the user, you can usually pattern
back to them the challenges that Shantanu was describing. Let’s start with
understanding the user. An NLU has to be really good. The fact that right now we’re
actually talking about that is like, oh, that’s the easy
problem, that’s kind of crazy. But honestly, that’s
the type of investment that we’ve made in our
cloud platform to do this, and so that you can
focus on the other stuff. So you can focus on the Einstein
bot and the next best action and all of the other
automations that you want to drive efficiency
and great customer service and understand everything
about interaction and then take action
on behalf of that. Or if you want to actually
change the channel, because the person in the
hospital rooms experience will be a lot different if
they don’t have to watch what’s just on that TV. Or they’re cold– I mean, these things matter. And we’re talking
about conversation, understanding is critical. And then those connections
to every other system just make this all possible. So whether it’s the 850,000
developers or the connections we’re able to make into the
world’s leading platforms, we’re really working towards,
again, that customer experience and the integrations. And the end use
cases really start– if you’re thinking about
getting started or diving in, yes, learn how to
use Dialogflow, yes, dive in and engage with
the overall experience you want for your agent with
any one of our partners. But also, really, really think
about the end user experience that you want. And on the flip side,
actually measure it, actually do studies and
actually see how people talk and how they engage. So when we think about things
like our Contact Center AI program, we look at that
Dialogflow capability, that Agent Assist capability,
and the analytics that power it. Because people say
all sorts of stuff, and they tell you
what they want, whether it’s the agents or
the actual end customers. It’s really important
to have the analytics to be grounded in truth and then
to serve those end customer use cases. So with that, I
think we’re going to have a few minutes for
questions for any one of us, if that works well. Thank you to all our
speakers as well. [APPLAUSE] [MUSIC PLAYING]

