Next Live Show Composite Day 2


[MUSIC PLAYING] STEPHANIE WONG: Hello
from Google Cloud Next ’19 in beautiful San Francisco,
where tens of thousands are gathered to build
a cloud for everyone. I’m Stephanie Wong. RETO MEIER: And I’m Reto Meier. We’ll be with you throughout the
next two days of Google Cloud Next ’19, bringing you
keynote recaps, interviews with Google experts, and
some very cool product demos. If you miss anything
along the way, fear not. You can catch up anytime
at g.co/nextonair. STEPHANIE WONG: The Next OnAir livestream experience is crafted to dive deeper into the content that matters to you most. Beyond the Next Live Show, we’ve got five other channels streaming simultaneously, covering key areas of the cloud universe: build, run, analyze and learn, secure, and collaborate, all available to you on your preferred device at home. RETO MEIER: In
yesterday’s keynote, we heard all about
modernization with Google Cloud. Today’s keynote took
us a step further, focusing on infrastructure,
insight, and innovation. Director of Cloud AI Strategy
and Operations, Tracy Frey, is here with us to tell us more. Tracy, give us the highlights
from today’s keynote. TRACY FREY: So I think one
of the most exciting things is that we’re really starting
to see how customers are leveraging the cloud and
building cloud strategy, no matter where
they’re starting, whether that’s starting
from on-prem, hybrid, all of these things
are now available. And we as an overall
cloud organization have been working really
hard to make it incredibly easy for customers to start
where they are and leverage our products and
services to really update their infrastructure and drive
innovation for their companies. That was really
clear to me today, and it was really exciting
to see it all come together. STEPHANIE WONG: Now, AI is
embedded in everything we do. What are you most excited about for AI in Google Cloud and other parts of Google? TRACY FREY: You know, it’s a
little bit of a similar answer, to be honest. So over the last
year and a half, I’ve spoken with
hundreds of customers. And everybody is really
excited about AI. But a lot of organizations
don’t know what that means. And they don’t know
how AI can impact their business and
the business problems that they’re facing today. And we’ve been
working really hard to create a set of
products and solutions and services that
can help address some of those
mission-critical problems very easily for customers. And so I’m really excited to
see some of those examples really start to come to
light, things like Salesforce, working with Hulu on
our contact center, our natural language processing being part of the Einstein bot, you know, things like
what Baker Hughes GE is doing with our Cloud AI platform. It’s really exciting to see
these tactical examples of how AI can impact an enterprise. RETO MEIER: Absolutely. We saw a lot of great demos
showing Google Cloud products at work. How are these new products
helping our customers manage data reliably and
securely so they can modernize, scale, and really be successful? TRACY FREY: Well, if you look
at what we saw in the Smart Analytics demo, you could really
see the end-to-end experience for customers. And that’s a lot of
where the pain has been. So for example, in
Cloud AI, we own Kaggle, which is
the world’s largest community of data scientists. And data scientists there often joke that
90% of the pain in machine learning is cleaning the
data and 10% is complaining about cleaning the data. And that is an
experience that I think anybody who has worked
to extract insights from their data can
relate to, because it can be extraordinarily painful. And when you see all
of the products that are coming together
across our Smart Analytics platform, everything from
BigQuery and Data Fusion, all of those pieces,
AutoML Tables, you can start to
see how easy it can be for customers to extract
those insights at scale very quickly. STEPHANIE WONG: And speaking
of these features that help our Data
Analytics workflows, we saw some great demos
highlighting some new features like Cloud Data Fusion, BigQuery
BI Engine, and AutoML Tables. How can developers,
scientists, and analysts apply these to support
their hybrid workloads? TRACY FREY: So I can speak
specifically to AutoML Tables, because that’s the product
that our team works on. And one of the things
that I have really loved about the AutoML
suite is that it really doesn’t require machine
learning expertise to use it. And at the same time, it’s an
extraordinarily helpful tool for people who are machine
learning experts and builders. And it has the
ability to stretch across those
populations and make it really useful for anybody. We have heard over and over
that structured data is a real challenge for customers. And so being able to leverage something like AutoML Tables, where customers can start to piece together information from disparate data sets, bring it all together, and have machine learning models created for them through the AutoML tool without having to write any code, it’s just really exciting to see what that can do. RETO MEIER: Yeah, absolutely. And can you give us some
examples from today’s keynote of how our viewers
will be able to use machine learning and
Google AI, especially those in companies
just starting out who may not already have
a lot of ML experience? TRACY FREY: Yeah. Absolutely. So there are a couple
problems that we’ve heard from customers
that are really common across the landscape
of Enterprise customers. A top one is in contact
centers and customer service. And this is a challenge that
almost every organization I’ve spoken with faces
in some form or fashion. Our AI team has worked really hard to enhance the human agents who are working in contact centers, giving them everything they need at their fingertips in a really efficient and beautiful way. And what we’ve done
there is integrate our AI with the key players
in the contact center space so that it doesn’t require
customers to do a full rip and replace on hardware
they’ve already purchased. And so that’s a very
easy way to get started. We’ve also heard from
almost every customer that another area that
they really struggle with is dark data. And that exists
in documents, both physical and digital. The challenge: if you think about a paper that you read, or a PDF where you have graphs and tables and all kinds of information, the ability to actually extract those pieces, those components, using AI, make sense of them, and then pull insights out of that through our Document Understanding AI is something we really believe is going to help customers gain efficiency and information much faster. And that’s another
way that I think would be great for
customers to get started that we’ve seen today. STEPHANIE WONG:
Now, many people may know that we built AI into
a lot of our products, like G Suite, Google Maps. How can our customers
use those tools to build their innovative
features into their own tools and products? TRACY FREY: So we offer a
range of products, everything from pretrained APIs, to AutoML, where you can customize models to meet your specific needs, through to our Cloud AI Platform
where you can very easily build your own machine learning models
if you have that expertise. And so across that range,
there is an opportunity for customers to really
be able to leverage the power of Google AI in
almost every business problem they face. And that can be very
exciting, and it can be a little bit
daunting if you don’t know exactly where to get started. And so we are offering
a number of solutions through things like our
professional services organization for customers who
aren’t sure where to start. We can help them. And then we have
a series of demos, of tools, all kinds of
information that you can gather to really help you
figure out where AI can help make your business better. RETO MEIER: So as the
Director of AI Strategy, is there anything
you can tell us about what we can look
forward to in the near future? TRACY FREY: That’s
a great question. So I think what is top of
mind for me is all of the ways that we can make AI easier
for companies to use, how we can help them
scale their businesses. And so we are
spending a lot of time really building that ease of
use into everything we offer. Another area that you will
likely start to see from us is continuing our efforts
around explainability. So I really firmly
believe that in order for AI to be successful, it
really has to be trusted. And in order for
it to be trusted, it has to be built and deployed
and thought of responsibly. And part of that is
about understanding what is in those
machine learning models and how they’re
making decisions. And so we have a lot of exciting
efforts around explainability and interpretability. RETO MEIER: Yeah. I mean, trust is such
an important feature in all of the things that we do. TRACY FREY: That’s right. RETO MEIER: Tracy, thank
you so much for helping us understand the power of AI to improve our work. TRACY FREY: You’re so welcome. Glad to be here. RETO MEIER: Now, if you’ve
got a question or a comment on today’s keynote or
about our talk with Tracy, reach out to us on social
media using the hashtag #GoogleNext19. STEPHANIE WONG:
At Next, there are lots of opportunities to
get one-on-one learning and hands-on time with our
products in the DevZone. RETO MEIER: We
sent our reporter, Mark Mirchandani, over to
the DevZone to check it out. Mark, tell us what’s
happening over there. MARK MIRCHANDANI: Hey, everyone. We’re here at the Google
Cloud Next Developer Zone. This is full of cool little
showcases, hands-on labs, tutorials and, of
course, the DevZone Stage where people are able
to come here and learn all about the cool things, but
really get their hands on them and play around with them. Speaking of which, we’re at the
BQML station with Felipe Hoffa, who loves data and BigQuery. So Felipe, tell me
a little bit more about what we’re
looking at here. FELIPE HOFFA: Hi, Mark. So what we have here is a
Stack Overflow Predictor. How long will it take for
people to answer your question? MARK MIRCHANDANI: So what
does that mean specifically? So I can go into Stack
Overflow, I can ask a question, but someone’s not going
to respond immediately. FELIPE HOFFA: Yeah. So I don’t know if you’ve ever asked a question on Stack Overflow. It takes some time, and you never know how long it will take. MARK MIRCHANDANI: Right. FELIPE HOFFA: Well, with this tool, we can kind of predict how long it will take, looking at all the
past data that we have from Stack Overflow. MARK MIRCHANDANI: Now,
how do you do that? Because BigQuery, you can
do a bunch of data analysis. But you can’t build a machine
learning model in BigQuery, right? FELIPE HOFFA: Well, you can now. So I love BigQuery. I love Data Analytics. Thanks to the
public data program, we have a copy of Stack
Overflow, the dump that they publish every quarter. So we loaded that into BigQuery, and then you can start running your own SQL queries over it. Anyone can come and run any queries they want. Everyone gets a free terabyte of queries every month. So the first step with that analytics is just to look at all the questions, look at all the answers, and get an average of how long it took for people to answer questions on Stack Overflow.
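As a rough sketch of that first step (the public Stack Overflow dataset and its tables are real, though the demo’s exact query may have differed), here is how you could compute that average with the BigQuery Python client:

```python
# A minimal sketch, assuming credentials and a default project are configured.
# Tables and columns come from the public dataset `bigquery-public-data.stackoverflow`.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT
  AVG(TIMESTAMP_DIFF(a.creation_date, q.creation_date, MINUTE)) AS avg_minutes_to_answer
FROM `bigquery-public-data.stackoverflow.posts_questions` AS q
JOIN `bigquery-public-data.stackoverflow.posts_answers` AS a
  ON a.parent_id = q.id
"""

for row in client.query(sql).result():
    print(f"Average minutes to answer: {row.avg_minutes_to_answer:.1f}")
```

MARK MIRCHANDANI: So people can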
come here to the Next Showcase floor, play around
with this interface here, ask a question, and then, based on the machine learning model, they’ll see about how long
they’d have to wait for someone to go ahead and answer it. FELIPE HOFFA: Exactly. So how did we go from the
analytics of just averages and group-bys to a prediction? There are a lot of steps in between. Like, for example, to get an
answer for the Google BigQuery tag at 8:00 AM for a
medium-length question that doesn’t end with a question mark, that starts with a Y, for someone that
created an account in 2017, we’re predicting that
it will take 53 minutes. Maybe they will
not get an answer, but with an 85% probability,
they will get one. And there is about a 7% chance of being down-voted, for these statistics. Now, how did we build this? We created a linear regression using all of these features and the data that we had available.
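A hedged sketch of how a model like this can be trained with BigQuery ML follows; the linear regression model type is real BQML syntax, but the project, dataset, and feature columns below are illustrative, not the demo’s exact schema:

```python
# Training a linear regression in BigQuery ML from Python.
# Project, dataset, and column names here are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

create_model_sql = """
CREATE OR REPLACE MODEL `my_project.my_dataset.answer_time_model`
OPTIONS (model_type = 'linear_reg', input_label_cols = ['minutes_to_answer']) AS
SELECT
  minutes_to_answer,                                -- label: time until first answer
  tag,                                              -- e.g. 'google-bigquery'
  EXTRACT(HOUR FROM creation_date) AS hour_of_day,
  LENGTH(body) AS question_length,
  ENDS_WITH(title, '?') AS ends_with_question_mark,
  EXTRACT(YEAR FROM user_creation_date) AS account_year
FROM `my_project.my_dataset.questions_training_data`
"""

client.query(create_model_sql).result()  # the training job runs inside BigQuery
```

MARK MIRCHANDANI: So with all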
this different information, you can know when you ask
a question, approximately, hopefully pretty reliably,
when it’s going to be answered, but you could also
maybe even predict when’s the best time
to ask a question. FELIPE HOFFA: Exactly. If you want to go
the opposite way. So let’s say it takes
53 minutes on a Tuesday. On Sunday, it takes more like
an hour to get an answer. But there is a higher
chance of getting an answer. MARK MIRCHANDANI: So
lots of cool information to play around with here. But people can also
come here and actually learn BQML hands-on, right? FELIPE HOFFA: Exactly. So for people that are not here, to see how I made this, they can go to my blog post on Medium. Just search for Felipe Hoffa, "When will Stack Overflow reply?" And if you are here at Next,
we have code labs ready for you to run that you
can also find online. But there are so many
ways to learn and get your hands into these tools. MARK MIRCHANDANI: Perfect. So lots of cool stuff from BQML. Come down to the floor,
get started, or check out the resources online. NATALIE PIUCCO: Hi. I’m Natalie, and we are in
the Intelligence Neighborhood in the Sounds and Models demo. And I’m joined here by Yufeng. Yufeng, tell us a little
bit about what you do and this awesome demo
that you’ve created. YUFENG GUO: Here, we have
Station 1, gathering data. And we have this kind of
instrument, if you will. But it’s actually
five instruments. And so each string represents
a different instrument. And we can pluck our string
to make different sounds. And what we’re going
to do is use this as our kind of data source. The sounds are going to
then go into Dataflow and get converted
into spectrograms; they’re going to get moved into the frequency domain. And those images can then be used to train our model.
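A minimal sketch of that conversion step, assuming 16 kHz mono audio in a NumPy array (the demo did this inside Dataflow; the sample rate and parameters here are assumptions):

```python
# Turn a short audio clip into a log-power spectrogram image for training.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 16000                              # assumed sample rate
audio = np.random.randn(fs * 2)         # stand-in for 2 seconds of a plucked string

freqs, times, sxx = signal.spectrogram(audio, fs=fs, nperseg=512)

plt.pcolormesh(times, freqs, 10 * np.log10(sxx))  # frequency vs. time, in dB
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.savefig("spectrogram.png")          # each saved image becomes a training example
```

The hope is that these images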
will be distinct enough to be able to– NATALIE PIUCCO: Represent each. YUFENG GUO: Exactly. That we can tell
them apart, that we can distinguish between
instrument one, two, three, four, and five. NATALIE PIUCCO: Cool. OK. So we’ve got our training data. Now it’s time to start
training the model. YUFENG GUO: So
with AutoML, we can train models using
the data that we had, but then we don’t have
to do any extra work. So we can see here that
the data is coming in. And you can see those are
some of the spectrograms. Those are the real data that
attendees have generated over the course of today. NATALIE PIUCCO: So they’re
the images of the soundwaves. YUFENG GUO: That’s right. These are the spectrograms. And then when you train a model,
it just does the training. There’s not much kind
of work you have to do. And when it finishes
training, you can have some
evaluation data that shows you the metrics around
how well the model trained and how it did. NATALIE PIUCCO: Right. So if you’re new to ML,
that’s a great space to get started, AutoML. YUFENG GUO: That’s right. It’s an easy tool
to get going with. And if you have
some data that you want to throw at AutoML Vision,
it’ll take care of it for you. And you can go do something
else with your time. So we have this
board here, where you can move these
kind of sliders around to different positions. And they map to different
values on the screen, which change the values
of hyperparameters of the actual models, things
like kernel width and height, the layer size, the
number of strides, as well as the kind of
number of layers you have. And that will actually
change the amount of code on the screen. Once we’re happy with, you know, the set of settings that we have, we can go ahead
and push the button here and start a training job. And doing that, it
will assign us a name, something fun like spinto-cap. And shoot that off
to Kubeflow, where we have a pipeline to do training. And when it finishes
training, it will deploy it on the Cloud AI
Platform Prediction Service. NATALIE PIUCCO: Awesome. So time to move to stage
3 of training our model. YUFENG GUO: That’s right. Speaking of prediction. NATALIE PIUCCO: All right. So we’re at the final stage now. And that’s to predict the notes. YUFENG GUO: Yes. And so we’re going to
try to pull up the model that we’ve been
training, spinto-cap. It’s there for us. So you can turn the
crank as a selector and then press the red
button to select it. NATALIE PIUCCO: Select. YUFENG GUO: So this is kind of
like a customizable music box. And we can choose which notes
to play on which instrument. NATALIE PIUCCO: More xylophone. YUFENG GUO: You can
hit the button again. So in this countdown, then we’ll
be able to turn this crank now. And so these are the
notes that we set up. And we’re going to send those
notes off to the Cloud AI Platform Prediction Service. Now, how did we do, right? Was our model any good? We chose those parameters
kind of randomly. And so it might be good. It might not be so good. We’ll see, right? It got some of them. It seemed like it might have
actually missed a bunch. It did. And so this is part of the
machine learning workflow. In this particular case, a pretty abysmal outcome. NATALIE PIUCCO: Yeah. But that’s all
right, because that’s all part of machine
learning, right? YUFENG GUO: That’s right. That’s right. NATALIE PIUCCO: It’s
all part of the process. So I think this is a perfect
demonstration to show that. So Yufeng, thank you so
much for this demonstration. I think it was very clear
in kind of the three stages of what to expect
when I’m training machine learning models. So thank you so much. YUFENG GUO: My pleasure. Thanks so much, Natalie. NATALIE PIUCCO: And back to you. RETO MEIER: This year, we’re
bringing some of the showcase experiments home to you. Cloud Showcase Online
lets you explore and get hands-on creative code
experiences with Google Cloud, including the BQML
experiment [INAUDIBLE]. Check out
g.co/showcase/experiments to start riffing on our code. The next round of sessions
is coming up shortly. Join us back here
after that when we’ll be talking Kubernetes
and checking out more DevZone experiments and showcase
demos from Next ’19. [MUSIC PLAYING] STEPHANIE WONG: Welcome
back to the Next Live Show. I’m Stephanie Wong. Joining us now is Jennifer Lin,
Product Management Director. Thank you so much
for joining us. JENNIFER LIN: Thanks
for having me. STEPHANIE WONG: So
for those viewers who are just learning
about Kubernetes and Borg, can you explain what
each of those is and how we’re making
Kubernetes easier to consume, and the significance of how each has evolved at Google? JENNIFER LIN: Sure. Kubernetes is the externalization, the open sourcing, of our internal container orchestration tool called Borg. It’s something that we’ve
evolved over the last decade. And we have said
publicly that we launch four billion
containers a week across the Google environment. So managing that at
scale, we’ve learned a lot about just running large
global-scale systems and doing container
orchestration at scale in a resilient, reliable way. Kubernetes was open
sourced a few years ago, and we’ve had our own
managed version of Kubernetes with Google Kubernetes
Engine now over four years. So a lot has happened in a
very short period of time, but there’s lots of lessons
learned under the covers. STEPHANIE WONG: We continue
to talk about API services and open source. Why is Google differentiating
in this space? JENNIFER LIN: Yeah. I think this industry is
changing so fast, with developers writing new software. So it’s really important
that we get the interfaces and how to enable the lifecycle
management of services right as a system. And I think that’s why
Kubernetes has also gained a lot of traction very quickly. It sort of makes the notion of
a service a first-class citizen. And we’ve been very clear
about how do APIs interact, and how do we essentially
define services and do the lifecycle
management of services? And that’s all coming with
the maturity of things like Kubernetes and Istio. STEPHANIE WONG: Yeah. And this is all
enabling our developers. A lot of developers want to
approach development the way that Google does, and dive in and get experience with the way that Google develops software. How does open source
enable the community and empower the community? JENNIFER LIN: Yeah. I think open source has been
the reason why a lot of this has moved so quickly. I mean, it feels like
overnight Kubernetes has become sort of the container orchestration system of choice. Just a few years ago, there was a lot of fragmentation. A lot of folks understood
the value proposition of containerization with
Docker, but essentially managing that at scale
at the system level, there were various options. Through open source, I think
we’ve allowed a lot of people to just, number one,
understand what’s going on. Number two, contribute in a
way that’s meaningful for them and get a lot of the use
cases out there in the open so we can iterate on best
practices, et cetera. And you know, yes, a lot of
this has been very focused on developer agility,
but now as we move into sort of
the maturity of it, a lot of the practices for the operational administrators and SREs, making this essentially a production-level system, is
a lot of what the community is excited about as well. STEPHANIE WONG: That’s great. Yeah. Open source is really
focused on our users. And we also want to
build our user experience into our products. Can you talk a little
bit more about that? And given that our cloud
users are different, and they have
different needs, how do we build that into
our products as well? JENNIFER LIN: Yeah, I mean, on
the product side within Google, we do a lot with the
User Research Team to just make sure we understand
who are the personas, whether it’s developer or
a security administrator or a network administrator
or the IT professional or the user. So just making sure
we’re taking sort of a use-case-driven approach
and understanding sort of who is the user of the
product and essentially, what problem are
they trying to solve and how do we hide
the complexity and make it a lot
easier and make sure that they’re essentially
doing their job well? But there are a lot,
because this is a software stack with a lot of– we’ve talked a lot about
decoupling and separation of concerns. You can have a
unified stack, but you have to recognize that there’s
lots of folks that essentially are consumers of that stack. STEPHANIE WONG: So what’s
your advice for those who want to move to the
cloud with acceleration and not have to do a rip
and replace and manage their existing lifecycle? How do they bring
in Cloud Native? JENNIFER LIN: Yeah. I think Cloud Native
has really been about sort of openness and flexibility. What we talked about today with Anthos is really an architectural set of principles, which I think Kubernetes and things like Istio have kind of put out there in the open source community.
there’s an implementation. So number one, I think
the industry is really rallying around the fact
that container orchestration, there’s sort of a clear
de facto standard there. Obviously, we have
our managed version of that, because there’s a
lot beyond just the software bits in terms of
how do you actually do lifecycle management and keep
essentially automatic upgrades, patching, security updates, et
cetera, as part of this system, so that developers
can move quickly, but you can keep essentially
the security and stability of the system. STEPHANIE WONG: Right. And one big part
of the acceleration is GKE On-Prem, which was
announced last August. And this is the first
time we’ve actually brought our technology into
the data center at this scale. So why now and why not sooner? JENNIFER LIN: Yeah. I mean, what we
found is everybody understands the benefits of
cloud, but in many cases, there’s technical and
nontechnical reasons why they can’t move all their
workloads into cloud overnight. So this was a lot about bringing
the best of the Kubernetes stack and what we’re
doing with Google Cloud to the on-prem
environment and helping people move at their own
pace, while at the same time modernizing in place. So it’s not just about
renting compute cycles, it’s actually about
application modernization and developing new
services and adding more business value, et cetera. So we’re pretty
excited about that. Now, it’s not just
on our GCP resources, but we can essentially run
this on third-party servers in a private data center
for the enterprise. And as we talked about today,
other environments as well. STEPHANIE WONG: So
speaking of modernization, we loved your demo during
the day-one keynote. JENNIFER LIN: Thank you. STEPHANIE WONG: We heard a
lot about modernizing in place this year. Can you talk about
how Anthos enables you to modernize
your applications, no matter where they
are, as you mentioned, on-premise user and the cloud? JENNIFER LIN: Yeah. I think that’s the nice
thing about Kubernetes is with Anthos,
we’re really thinking about sort of open APIs
so that the interfaces between those environments,
number one, we can hide the complexity
but still keep a consistent management
environment for the developers. Independent of where
those workloads run, they can learn
one set of tools and essentially evolve
as the industry evolves without picking sort of exactly
where that’s going to run or what production environment
it’s going to run in. Because many
developers don’t know. They want to write
once, run anywhere. Their consumers may be
running in many clouds. And similarly for the
platform administrator, they don’t have to learn a
bunch of different vendor tools. This framework is here to stay. We believe it’s going to be
the standard for many years to come. So even if that’s not running
in Google’s cloud per se, that skill set that
many enterprises are trying to hire people who
understand Kubernetes, that’s very portable. So people are willing
to invest in it now, because it’s going to be
here to stay for many years. STEPHANIE WONG:
And on that theme, we’re hearing that
developers and operators want to be managing at higher
levels of the stack. But they still want visibility
and control over policy management at a service level. How is Anthos providing a
unified programming model and monitoring and policy across
on-premise and multiple clouds? JENNIFER LIN: Yeah. Great question. I mean, with
Kubernetes, obviously, we’ve really thought about
container orchestration and cluster
administration at scale. With Istio, we’re
thinking about– assuming most of
your microservices are now containerized,
how do you look out for the lifecycle
and health of those services and the interactions
between those services? How do you make sure that those
services are authenticated and that you can put essentially
policies around those services without having to manage the
underlying infrastructure complexity. And then we showed
how we’re doing config management and automating
essentially configurations at scale in a way that
essentially is declarative so you can define the policies
once and push them down to different environments. And you don’t have
to rewrite them with a lot of toil for
each cloud environment. And that’s really
important for customers that are thinking about,
let’s say, you know, PCI compliance or governance. Those rules around how
to have those security controls in place don’t
change cloud to cloud. But today, they’re
spending a lot of time just to make sure they can
ensure audit and compliance in each different environment. So we believe that’s
sort of extra overhead that doesn’t need to
be there, and that was a key push with Anthos. Many of our
Enterprise customers, they want to embrace
new technologies. But it’s not easy
to figure out, how do I ensure that I’m
still reducing my costs, keeping the efficiency, and
addressing what the auditors and compliance, regulatory
environments are looking for? STEPHANIE WONG: Right. And we actually know
that not everybody has the luxury of
moving to the cloud immediately for whatever reason. How can we help them get there? JENNIFER LIN: Yeah. I think a lot of it is– I mean, this has been
a great week so far. I think the partner
community is very engaged. I think a lot of the use cases
and the best practices that are being put out there
are based on folks trying things out,
sharing best practices. We’re trying to take more of
a use-case-driven approach. With Anthos, I think a lot
of what we’re trying to share is the operational domain
knowledge and the best practices of us having run
these systems at scale, but also tapping into our
hundreds of ecosystem partners who are getting excited about
it and contributing back. As this matures,
and I think this is happening very quickly
in the last couple of years, but it really is about
solution delivery and not just technology delivery. STEPHANIE WONG: With
Anthos, all these customers moving into production at
scale is very exciting. And obviously, as you said,
there are security concerns. How are we making that
easier for our customers? JENNIFER LIN: Yeah. I mean, security is a major
differentiator for us for GCP more broadly. And I think because
containerization is relatively a new
space, we’ve been doing a lot with both our
customers and partners to do, number one, education and
awareness around container security and secure
software supply chains and things like binary
authorization and CI/CD and how to make sure
that security is baked in way upstream so that when
a developer checks in code, essentially it starts the
governance process of security, and it’s not a bolt-on
afterthought after the fact. I think that is a lot of the way that Google developers work. When they check in
code, essentially they don’t have to be an
expert in security, but the platform takes care
of a lot of the complexity of security and governance. STEPHANIE WONG: Thank you so
much for your time, Jennifer. JENNIFER LIN: Thanks
for having me. STEPHANIE WONG: If you’ve got a question or comment on Anthos or Kubernetes, reach out to us on social media
with the hashtag #GoogleNext19. We’re going to stick with
the Anthos topic for a bit and head down to
the Showcase floor where our team is demoing a new
solution that allows customers to modernize to containers. MARK MIRCHANDANI: Hey, everyone. We’re here at the
Infrastructure and Migration Zone of Google Cloud Next. I’m Mark Mirchandani. I’m here with Srinath. Srinath, we heard a
lot today about Anthos. What is Anthos? Why is it so exciting? SRINATH PADMANABHAN: Anthos
is incredibly exciting for us at Google Cloud. We announced Anthos as
Cloud Services Platform just over eight months ago. And today, we are very
excited to announce general availability of Anthos for our customers. Anthos is our hybrid
and multi-cloud platform that gives our
customers the ability to build once and run anywhere. MARK MIRCHANDANI: Why is hybrid
and multi-cloud so important right now? SRINATH PADMANABHAN: That’s
a great question, Mark. So hybrid and multi-cloud
has been top of mind for a lot of our customers. They’re very, very excited
about the ability to use clouds. But oftentimes what happens is
when they’re running something on-prem but running
something else in the cloud, it becomes hard for them to manage these different deployments. That makes them segment their workforce into engineers who are working on different things instead of focusing on innovating for their customers.
multi-cloud is very important, and it’s very important to have
a uniform platform like Anthos that lets customers run in a
uniform way across the board. MARK MIRCHANDANI: So we’re
talking about standardizing, making sure that everyone’s
on the same level. For people who are
here at the showcase, they can come interact
with this booth. What does this tell people
who are asking, what does Anthos do for me? SRINATH PADMANABHAN: So
what we have in this demo is we’ve taken the analogy
of a city and how it works, and we’ve used that to show
how hybrid and multi-cloud is very important and
how Anthos helps in that particular analogy. So let’s get started. So we have these three
things that you can do over the course of this demo. First, let’s start out by
modernizing your services. There are multiple
reasons you would want to modernize your service. For instance, it gives you
a lot more agility and speed and lets you deploy faster. So in this demo,
there are two options for how you can modernize
your applications. You can either
containerize them manually or you can use
CSP Migrate, which is now called Anthos
Migrate, to be able to migrate these
applications automatically from VMs running on-prem
into containers running in the cloud. So for the sake of
this run-through, let’s select Modernizing
With Velostrata, which is Anthos Migrate. So Velostrata
automatically containerizes your VM-based workloads
and will deploy them to GKE running in the cloud. So just as easily
as that, you’ll have all of your VMs now moved
to the cloud as containers. And now let’s look
at the second part. We also help you standardize
config management using Anthos Config Management. What Anthos Config
Management lets you do is it lets you take a
single configuration and deploy it across the board,
declaratively and uniformly, without you having to go and do it one platform at a time. So no matter whether you’re running only on-prem, in Google Cloud, or maybe even with other clouds, you can do this all at once.
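To make the idea concrete, here is a hedged illustration of the kind of single policy Anthos Config Management can sync from a Git repo to every cluster; in practice this lives as a YAML file in the config repo, and all names below are hypothetical:

```python
# One Kubernetes policy, declared once, applied uniformly to every enrolled cluster.
resource_quota = {
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {"name": "team-quota", "namespace": "team-a"},  # hypothetical names
    "spec": {"hard": {"pods": "20", "requests.cpu": "8"}},
}
```

MARK MIRCHANDANI: Right. So now it’s all standardized. SRINATH PADMANABHAN: So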
now it’s all standardized. And so for the third
part of this demo, there’s something
very exciting for me, which is a service mesh. A service mesh is
basically a mechanism that lets you look deep into the
traffic that’s actually flowing between your applications,
whether they’re on-premises or in the cloud. In this case, what
we’re going to do is we’re going to use Anthos
to implement a service mesh between on-prem
world here and the two cloud worlds that we have. And by doing this, we’re
able to select a service and we’re able to say that
let’s upgrade the service. Now, when you
upgrade the service, you pick a new version that you
want to upgrade to, but instead of upgrading everything at
once, what you’re going to do is you’re going to select a
small percentage of your VMs or workloads and
upgrade them at once. So when you do this and you
upgrade these containers at that point in time, you
will upgrade just that 20%. So that you see
everything’s running fine, and you don’t have
anything broken. And once you are confident that
this service is working well, you can upgrade everything else. So this means now you’re
able to roll out applications with limited downtime to almost
0 downtime for your customers. MARK MIRCHANDANI:
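Under the hood, this kind of canary split is typically expressed with an Istio VirtualService that weights traffic between versions. A hedged sketch (the service and version names are hypothetical, and in practice this is written as YAML):

```python
# An 80/20 traffic split between the current and the new version of a service.
virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "my-service"},  # hypothetical service name
    "spec": {
        "hosts": ["my-service"],
        "http": [{
            "route": [
                {"destination": {"host": "my-service", "subset": "v1"}, "weight": 80},
                {"destination": {"host": "my-service", "subset": "v2"}, "weight": 20},
            ]
        }],
    },
}
```

MARK MIRCHANDANI: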
So you can see how Anthos uses open source
technologies but also this idea of standardization to
really bring it to everybody. SRINATH PADMANABHAN: Yes. MARK MIRCHANDANI:
What’s an example of a customer or concrete use
case that Anthos has right now? SRINATH PADMANABHAN:
Great question. So we have a lot
of customers who have been very
excited to use Anthos, and they span
multiple verticals. Like, for instance, HSBC from retail banking, and Siemens, a giant when it comes to manufacturing, who is looking at Anthos as the platform of choice for their IoT platform. So there are multiple
customers across verticals like retail and
health care who are very excited about
this platform and love that ability to standardize
and have a uniform platform. MARK MIRCHANDANI: Perfect. Well, thank you so much Srinath. I’m sure there’s going to be
tons of great content here about Anthos and lots of other
stuff at Google Cloud Next, so stay tuned. RETO MEIER: Thanks, team. Check out this
next Dev Bite where we explore how human bias can
sometimes creep into our work without us realizing it. JOHN BOHANNON: This
is Michelle Casbon. She’s an engineer at Google
on the Kubeflow team. MICHELLE CASBON: And
this is John Bohannon. He’s the Director of Science
at Primer, an AI startup. JOHN BOHANNON: My team at Primer
built a machine learning system that reads millions of
news articles and tells you what it learns. MICHELLE CASBON: And
I worked with them to deploy this
system on Kubeflow to make it portable
and scalable. JOHN BOHANNON: The goal
is to create a machine to help humans learn
about the world and overcome their biases. Humans are biased. MICHELLE CASBON:
Consider this riddle. JOHN BOHANNON: A boy and his
father are in a car crash. The father is killed instantly. The boy is rushed
to the hospital and prepped for surgery
with one of the best doctors in the country. The doctor arrives, sees
the boy on the table, and says I can’t
operate on this patient. He’s my son. MICHELLE CASBON:
Who’s the doctor? JOHN BOHANNON: It’s
a paradox, right? Father’s dead. So– MICHELLE CASBON: It’s
the boy’s mother. JOHN BOHANNON: For those
playing at home, don’t feel bad. 85% of people are
stumped by this question. It even worked on Michelle
when she first heard it. MICHELLE CASBON:
Bias is powerful. All humans have it. And we all have to work
against it constantly. JOHN BOHANNON: Every
time you meet someone, you can’t help but
make assumptions. MICHELLE CASBON: But
you’re probably biased. JOHN BOHANNON: Wait. That one’s actually mine. MICHELLE CASBON:
And that one’s mine. Oh. JOHN BOHANNON: You’re
totally biased. MICHELLE CASBON: Any system
built on human decisions will inherit human bias. JOHN BOHANNON:
Consider Wikipedia. It has grown into the world’s
most important general-purpose information resource. But there is no one in charge. Information is added
to Wikipedia day after day by millions
of volunteers. It just boggles the mind
that such a thing can exist and that it actually works. Wikipedia aims to be the
summary of all human knowledge. But it has biases, just like
the humans who create it. MICHELLE CASBON: If you think
about all the new things we learn about the world each day,
keeping Wikipedia up to date seems impossible. It’s not just that important
information is missing from most articles, but it’s
that many important articles haven’t been written at all. JOHN BOHANNON: Which shouldn’t
be that surprising, right? MICHELLE CASBON: Well,
so there are about five million Wikipedia articles. JOHN BOHANNON: In
English alone, yep. MICHELLE CASBON: And there are
some thousands of news articles published every day. JOHN BOHANNON: Hundreds of
thousands, just in English. MICHELLE CASBON: So you
just have to read all of it. You find the important
information that’s in one set and doesn’t
exist in the other. OK. Yeah, that’s hard. JOHN BOHANNON: Machine
learning to the rescue. I decided to start by
focusing on people. We want a system that just
reads all the news every day and learns new information
about everyone, not only people
described in Wikipedia, but anyone described
in the news. And if there’s enough
significant information about someone in the news,
who’s missing from Wikipedia, the system should just write you
the first draft of the article. The goal is to
make it as easy as possible for the human editors
of Wikipedia to do their job. And the first step is to shine
a really bright light so we can see where our bias is. MICHELLE CASBON: For example? JOHN BOHANNON: Well, for
example, only about 15% of Wikipedia’s biographical
articles are about women. And that gender gap is
especially bad for scientists. If you never hear about all
the amazing women of science and engineering, then
it reinforces your bias about those professions. MICHELLE CASBON: Exactly. And that’s why I wanted to
work with Primer on this in the first place. JOHN BOHANNON: Right. And here we are just
a few weeks later, and we’re actually
catching up on where we are with deployment. So where we left off, we had
this really cool ML system with some really classic
ML deployment problems. MICHELLE CASBON:
Kubeflow to the rescue. JOHN BOHANNON: Our first
problem was scaling. So the computing load is all
over the map on this one. On any given day,
we’re like processing hundreds of thousands
of documents– news articles, scientific
papers, and we’re updating information about
people in our knowledge base. But we also have users who
might want to add thousands of new people all at once. And that kicks off a
backfill of information about those people from millions
of historical documents. MICHELLE CASBON: This turned
out to be easy to solve, because your team had already
set up a Kubernetes cluster. Now, Kubeflow is
built on Kubernetes, so scalability is baked into the platform. The application backend was built on top of Postgres, which made for a straightforward migration into Cloud SQL. Once the data was migrated,
I transplanted the full app into Kubernetes Engine. I started with a
single node cluster and enabled
auto-provisioning, which scales the size of the
cluster up and down based on resource requests. So now we can deal with
unpredictable, bursty computing loads like yours. JOHN BOHANNON: OK. But on top of the
scaling problem, I have also got a
complexity problem. So our data pipeline has a
bunch of spaghetti junctions. And not only do
we have thousands of models to load
at inference time, but we have to
update those models as new information comes in. So at training,
it adds up to like millions of model features
getting updated every day. MICHELLE CASBON: I noticed that. Luckily, this is
exactly what Kubeflow is designed for, composability. The platform provides patterns
for reusing common ML tasks like updating
models incrementally in the background. I started by carving
out pieces of the app. For example, model
training was refactored into a separate service that
uses Kubeflow operators. The monolith is now separated
into smaller, distinct pieces. Now it’s a pipeline with three separate steps: train, serve, and webapp.
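A hedged sketch of what a three-step pipeline can look like with the Kubeflow Pipelines SDK of that era; the component images and names below are hypothetical, not Primer’s actual code:

```python
# Three chained steps: train, then serve, then the webapp refresh.
import kfp
from kfp import dsl

@dsl.pipeline(name="news-ml-pipeline")
def news_pipeline():
    train = dsl.ContainerOp(
        name="train", image="gcr.io/my-project/train:latest")   # hypothetical image
    serve = dsl.ContainerOp(
        name="serve", image="gcr.io/my-project/serve:latest").after(train)
    webapp = dsl.ContainerOp(
        name="webapp", image="gcr.io/my-project/webapp:latest").after(serve)

# Compile the pipeline definition so it can be uploaded and run on Kubeflow.
kfp.compiler.Compiler().compile(news_pipeline, "pipeline.tar.gz")
```

JOHN BOHANNON: Oh,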
that sounds much nicer. OK. So how’s it looking so far? MICHELLE CASBON: Well, the
refactoring immediately reduced the training runtime. So that translates
directly to cost savings. And now you don’t have to
run huge nodes all the time, because GKE takes care
of the scaling for you. Well, the other one is the
decrease in code complexity. That translates to less
maintenance overhead, less time searching for the
source of bugs, and less time onboarding new engineers and adding new features. JOHN BOHANNON: OK. But that creates a new problem. MICHELLE CASBON: What
haven’t I thought of? JOHN BOHANNON: So now that we have all this saved time, what am I going to
do with all my time? MICHELLE CASBON: You could
create some Wikipedia articles about women of science. JOHN BOHANNON: Hmm. Good call. Oh, and what about the
last thing on my wish list about vendor lock-in? We don’t want our system tied
to one cloud infrastructure. We want it to be able to
deploy anywhere, even on-prem. Oh, and bonus points
if the solution is free and open source. MICHELLE CASBON:
Check and check. As long as you’re running
standard open source Kubernetes, you
can run Kubeflow. JOHN BOHANNON: Wait. Kubeflow is open source? MICHELLE CASBON: Yep. JOHN BOHANNON: What? MICHELLE CASBON:
Thanks for listening. I hope that hearing
about this project lowers the bar for
trying it yourself. The Kubeflow community
has some great resources for getting started, which
you can find at kubeflow.org. JOHN BOHANNON: And, hey, got
a few minutes of free time? I’ve already used this
machine learning system to identify thousands of
notable women of science who are missing from Wikipedia. You can make a
difference right now. Just follow this link
and see what the machine has discovered for you. Go fill in a gap in
the world’s knowledge. RETO MEIER: Eliminating
human bias with Kubeflow can be useful in every industry. You can watch all the livestream
content again or share it with your friends and colleagues
at g.co/nextonair. Now, start working
up an appetite, because right after
the next sessions, we’re experimenting
with pizza and pie. Stay tuned. You’re watching the Next
Live Show from San Francisco. Welcome, everyone. We are here at the DevZone at
the Pizza Authenticator app. Terry, tell me, what
am I looking at here? I was promised pie. All I see is pizza. What’s going on? TERRY RYAN: It is a
form of pie, right? So we have an app here that
will authenticate pizza to a certain region
of the country, right? Like, we have New
York pizza, you have California
pizza, Chicago pizza. This app will
judge how authentic it is for what style
it’s supposed to be. RETO MEIER: OK. Interesting. So why didn’t you– actually, why don’t you
show us how it works. TERRY RYAN: So we
have our pizza here, and we have our tablet here. RETO MEIER: Fantastic. TERRY RYAN: Make sure
it’s in focus and bam. And now Pizza Authenticator
is judging whether or not this pizza is authentic. So it’s chewing on it. And we see that it is
pretty confident that this is New York style pizza. RETO MEIER: So definitely
pizza, probably from New York. TERRY RYAN: OK. RETO MEIER: OK. Let’s check out the [INAUDIBLE]. It’s a good-looking pie. TERRY RYAN: This
is more Midwest. So let’s see. Make sure it’s in focus. We’ll do it again. See, it’s taking a
bite out of that. Chicago-style pizza. RETO MEIER: Chicago style. TERRY RYAN: So that’s
good, all right? RETO MEIER: All right, nice. TERRY RYAN: Now,
we’ll finish up. RETO MEIER: Yeah,
let’s do the last one. TERRY RYAN: And we’ll
do this last one. RETO MEIER: I’m
curious to what style of pizza this thinks it is. TERRY RYAN: I think it knows. I think it knows what it is. RETO MEIER: I can’t
think of any pie that deserves to have
zucchini on it, but– TERRY RYAN: Yep. Well, here in the
wonderful Golden State, we make abominations. Yes. California. Yeah. So it knew it was
California pizza. RETO MEIER: Excellent. Well, I mean, I’m a huge
supporter of anything that involves this amount of pizza. TERRY RYAN: Yeah. RETO MEIER: But tell me,
what is the point of this? Like, why is this
in the DevZone? Why have we got this here? TERRY RYAN: So this uses
under-the-covers ML, right, machine learning. And ML is really
daunting, right? Like, a lot of people
come to it and they think they need to be like
a data scientist to do it. And your rank-and-file
developer sometimes, you know, can’t approach it. Well, we took kind of an
incredibly trivial use of ML and trained it to
determine whether or not pizza was authentic. RETO MEIER: OK. TERRY RYAN: And all of this we
did, like, we grabbed the data, we classified all the images. It was all done by me, who is an
app dev, not a data scientist. RETO MEIER: Right, right. TERRY RYAN: So it really
shows that anyone can do ML. If I can do it, like,
anyone can do it. RETO MEIER: So you can take
the same sort of technology and same approach
here and really apply it to much more
meaningful projects. TERRY RYAN: Right. If you wanted to take satellite pictures of forests and see how they change over time, you could identify the trees from the forest, the forest for the trees, right? Like, you could
do that with this. And that’s a much more
serious application of this. But it’s the same principle,
pizza and onto forests. RETO MEIER: Absolutely. No, and it’s a fun way to
approach machine learning, that sort of stuff. Fantastic. All right. Now, I believe the real
pie is over with Mark. So we’re going to dig
into some of this. So you pick your
favorite style of pizza. I’m going to go for the
authenticated New York style. TERRY RYAN: I’m going for
New York style as well. RETO MEIER: Yeah. It’s hard not to. So we’ll dig into
this, and we’re going to cross over to
you for dessert, Mark. MARK MIRCHANDANI:
Whether it’s pizza, pie, or anything else
round, you can’t calculate the circumference
without knowing about 3.14, also known as pi. But there are a few more digits than that. So let’s talk to Emma and see
how she broke the world record for calculating 31.4 trillion
digits using Google Cloud Platform. So Emma, how did you do it? EMMA HARUKA IWAO: So we ran
a program called y-cruncher for four months on
Google Cloud Platform. And we set up a
cluster of 25 machines, and we made sure that these
machines ran continuously without any shutdowns
or reboots for 121 days. MARK MIRCHANDANI: Yeah. I mean, that’s no easy feat, 121 days, all those machines running nonstop. That’s kind of something
magical, right? EMMA HARUKA IWAO: Yeah. But Google Cloud has all
the features and mechanisms to make sure these machines and
software are up and running. And we still have
[INAUDIBLE] systems and other techniques that we
use for our production software. MARK MIRCHANDANI: Because
normally speaking, it’s very difficult to keep one
process running over 121 days, let alone on 25 machines. EMMA HARUKA IWAO: Right. MARK MIRCHANDANI: So what
inspired you to kind of go looking for this many digits of pi? EMMA HARUKA IWAO: Humanity has collectively tried to calculate as many digits of pi as possible since, like, 2000 BC. And even the first
electronic computer, ENIAC, was used to calculate
2,000 digits of pi. And ever since then, pi
has been used as a benchmark to measure the performance
and reliability of computers. And now, we used Google Cloud. So that means the cloud is capable of doing this kind of heavy-duty work.
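y-cruncher computes pi with the Chudnovsky series. As a toy illustration of that same formula (fine for a few thousand digits in Python, nothing like the machinery behind a 31.4-trillion-digit run):

```python
# Chudnovsky series: each term contributes roughly 14 decimal digits of pi.
from decimal import Decimal, getcontext

def chudnovsky_pi(digits: int) -> Decimal:
    getcontext().prec = digits + 10          # extra guard digits
    c = 426880 * Decimal(10005).sqrt()
    m, l, x, k = 1, 13591409, 1, 6
    s = Decimal(l)
    for i in range(1, digits // 14 + 2):
        m = m * (k**3 - 16 * k) // i**3      # ratio of successive (6i)!/((3i)!(i!)^3)
        l += 545140134
        x *= -262537412640768000             # (-640320)^3, alternates the sign
        s += Decimal(m * l) / x
        k += 12
    return c / s

print(str(chudnovsky_pi(50))[:52])           # 3.14159265358979...
```

MARK MIRCHANDANI: That’s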
awesome to see. Congratulations on
breaking the world record. Of course, there’s more
than just numbers in pi. We can also use it to make art. Let’s talk to Mathias about
how he managed to do really, really cool generative art
using all of Emma’s work. Mathias. MATHIAS PAUMGARTEN: Hi. MARK MIRCHANDANI: Tell
me a little bit more about what you’ve done here. MATHIAS PAUMGARTEN: Right. So 31 trillion digits
is a huge number, right? MARK MIRCHANDANI: Absolutely. MATHIAS PAUMGARTEN:
Right, but not only is it a lot of numbers. What’s really interesting about pi is also that the sequence never repeats itself. So we thought, like, what
could we do with this? So what we came up with is
we made this algorithm that would turn these sequences of
pi into a visual representation, essentially. And this allows people
to find 31 trillion different visual representations
of pi somewhere within pi. MARK MIRCHANDANI:
Can we take a look? MATHIAS PAUMGARTEN: Absolutely. So we can jump to any digit
within pi that we want. Now, just type in
a random number. So from that point on, it
takes the next several thousand digits of pi, them being
unique within there, and then generate a piece of
art out of this, essentially. MARK MIRCHANDANI: So if anyone
else is here, they can come by, they can change up the numbers. And they’ll see
different artwork, because it’s unique
sets of pi patterns. MATHIAS PAUMGARTEN: Exactly. So someone could come by
and say, like, they type in their phone number or
like maybe their birthday, the birthday of their
spouse or whatnot. And type this in
and then essentially get a unique representation
of that number within pi. MARK MIRCHANDANI:
Very, very cool. This is just one of the
many cool experiments that we have here
at the DevZone. So hopefully, the
people here can come check it out and see all
the cool work that we’ve done. STEPHANIE WONG: Thanks
very much, Mark. If you are hungry to
try these experiments, head to
g.co/showcase/experiments. Now, we don’t have to tell you
how important mobile is to how we work, connect, and play. In this next Dev Bite,
we’re going under the hood to show you how ML Kit can be used for building mobile apps. IBRAHIM ULUKAYA: It seems machine learning is everywhere nowadays. And with so many offerings for you as a mobile developer, it can get really confusing as to where you should start. I’m Ibrahim Ulukaya, an
engineer on the Firebase team. And I’m here today
to introduce you to the options that
are available to you. If we could carry data
centers in our pocket with unlimited battery,
it would be much easier. But using ML for
mobile apps means thinking about the processing
power, battery consumption, and connectivity of your device. There are a number
of factors that will dictate the ML strategy
and tools you’ll want to use, like whether you want
to keep the data local, if you require low latency,
or if you need access to the processing
power of the cloud. Luckily, Google offers
a number of ML options for mobile developers
that can help you develop and deploy ML
models for the environment that fits your use case. To help you pick which ML
deployment option fits best, I’ll walk you through a
few key decision points. The first question you
should ask yourself is where you want to perform
ML inferences, in the cloud or on the device. You would use the
cloud if you’re already using an existing cloud
service for model inferences. Or you run inferences
infrequently and want to minimize impact
on the device battery life or need to support older
less performing devices or if you need high-accuracy
models that require the compute power of the cloud. On the other hand,
there are cases in which it might
be useful to do ML inference on the mobile device. For example, if you want to
keep data local to the device, you need low latency such as
when processing multiple frames in quick succession. You want inferences to work
even when the device is offline. Or your users may be
in an area of the world where high-speed
wireless connectivity is nonexistent or unreliable. So now that you understand
all the considerations, let me present the options available to you. First is to run inference in the cloud. And there is a wealth of
tools and APIs you can use. These include Cloud Vision
for image recognition and classification,
the Cloud Natural Language API for text parsing and analysis, and AutoML, which allows you to customize models that are served from the cloud. All are available via REST APIs. I won’t go into them all here, but you can learn more at cloud.google.com/products/ai/building-blocks.
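As one hedged example of calling these cloud APIs from code (the client library and method are real; the text is just a placeholder):

```python
# Sentiment analysis with the Cloud Natural Language API's Python client.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="ML Kit makes on-device machine learning approachable.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
print(f"score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")
```

Next is to run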
inference on the device. And you have two main options. You can either use
TF Lite directly, or you can use ML Kit for Firebase. ML Kit is a mobile SDK that
brings Google’s machine learning experience to Android
and iOS apps in a powerful, yet easy-to-use package. It comes with a set
of ready-to-use APIs, both for on device and
cloud-based inference. They support common
mobile use cases like image labeling, face
detection, text recognition, and barcode scanning,
to name a few. You simply pass in the
data to the ML Kit Library, and it will give you the
information you need. Whether you are new to or
experienced with machine learning, you can
implement the functionality you need all in a
few lines of code. And it includes both
Android and iOS libraries. The on-device APIs process data quickly and work even when there is no network connection. The cloud-based APIs
leverage the power of Google Cloud
Platform’s machine learning technology to give
a higher level of accuracy. If you have a use case that’s
different from the offered models, ML Kit also offers the ability
to deploy and experiment with your own custom
TensorFlow Lite models to run on-device inference. It works like this: after converting your model to the TF Lite format, you can host and serve it to your users.
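A minimal sketch of that conversion step, assuming a TensorFlow SavedModel on disk (the path is hypothetical):

```python
# Convert a SavedModel to the TF Lite format; the .tflite file is what gets hosted.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("./my_model")  # hypothetical path
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

ML Kit for Firebase can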
then be used to host and dynamically serve TF
Lite models to your users. And it provides
convenient APIs that help you use your custom TF
Lite models in your mobile apps. With ML Kit for
Firebase, you can reduce the initial install
size of your mobile app and only download the
model that’s needed. Dynamically swap
models on the fly without having to republish
your app to the App Store. Target different user
segments with models tailored specifically for them. Run A/B experiments
with multiple models to find the best performing one. If you’d like, you can
also use your TF Lite model on mobile directly
with TF Lite APIs. These offer blazingly
fast inference times, particularly on Android devices that support the Neural Networks API. With this variety of ways to host and deploy your model, mobile developers looking to implement machine learning in their apps have several directions to choose from, depending on how much flexibility you want or how specific your use case is. We covered some key
factors that help you decide, like whether you
do your inference on the device or in the cloud and
whether you want to use the pretrained
models Google provides or train and bring your own custom model. Check out our full session at
Next for a deep-dive analysis, live experiments, and the
latest ML offerings from Google. STEPHANIE WONG: If you want
to see more on ML options for mobile developers,
check out the full session on g.co/nextonair. Google Cloud is becoming
an integral part of how retail businesses can
quickly scale and connect with customers. Let’s head back to
the Showcase floor where Joanna is
checking out how IKEA uses Google Cloud to
improve customer service. JOANNA SMITH: Welcome back
to the Showroom floor, y’all. I’m really excited to be
here, because now I’m over in the Customer Magic Zone. And it’s really, really cool. But right now, I’m
standing with Claes, who is a senior
engineer for IKEA, and he’s going to show
us one of the most popular demos in the area. There are crowds of
people trying to fight their way in right now. Hey, back up. I’m looking at you. So before you actually
show me how it works, can you tell me a
little bit about how your team developed this? CLAES ADAMSSON: Yeah, sure. So we assembled a
pretty small team so that we could work in an iterative fashion, dividing the work but keeping close feedback loops together with Google Engineering. JOANNA SMITH: Oh, cool. CLAES ADAMSSON: Yeah, and we
used the Vision API product search. JOANNA SMITH: The
Cloud Vision API? CLAES ADAMSSON: Yes. JOANNA SMITH: I love that. CLAES ADAMSSON:
Yeah, it’s awesome. Together with object detection
that we’re trying out with Google to enhance the product offering from your side and also improve the customer experience on our side. So you’ll see in the demo. JOANNA SMITH: OK. And then you talked about
how your team really focused on working iteratively. Now, I heard a rumor that y’all
did this, the whole thing, in less than three weeks. Is that true? CLAES ADAMSSON:
That’s pretty amazing. Yeah. Yeah. We were fast-paced, very
short sprints that we did. Right? JOANNA SMITH: So you
got to work with– you mentioned some
Google engineers, which is really cool. But you also– you get to
work for a company that really encourages and gives you the
space to try these ideas so quickly like that. How does that feel,
like, as a developer? CLAES ADAMSSON:
That’s really good that we are empowered to do
these kind of exploration cases. JOANNA SMITH: It sounds
like a dream, right? CLAES ADAMSSON: Yeah. It really is. It really is. JOANNA SMITH: OK. So speaking of dreams,
will you show me how this works, maybe on
the horse right there? CLAES ADAMSSON: Yeah. Sure. Sure. So this is where we try
to detect the object. JOANNA SMITH: Oh, that’s cool. CLAES ADAMSSON: Yeah? And then it matches
to our range, right? And then you can
interact with it. And it shows you
our mobile web page where you can get
more information. JOANNA SMITH: And then could
you do, like, this stool? CLAES ADAMSSON: Yes. Sure. So let’s try it out. It will recognize it
and show it directly. It’s a Janinge stool. And then we can move over here,
and you can interact and– JOANNA SMITH: So the
idea is that instead of taking that little pencil,
you can walk around and build your list with your phone. CLAES ADAMSSON: Yeah,
point and click. JOANNA SMITH: Does it even
get, like, decorative items? CLAES ADAMSSON: Of course. Of course. So let’s try this one out. So you see here,
you got it directly. JOANNA SMITH: This is so cheap,
and it’s a plant I can’t kill. CLAES ADAMSSON: Yeah. JOANNA SMITH: Because it’s fake. CLAES ADAMSSON: It’s
even called Fejka. JOANNA SMITH: Really? CLAES ADAMSSON: Yeah. JOANNA SMITH: Is that like– CLAES ADAMSSON: Swedish. JOANNA SMITH: For fake? CLAES ADAMSSON: Yeah. JOANNA SMITH: Oh, my gosh. I want one so badly. This is a really cool demo. CLAES ADAMSSON: Thanks. JOANNA SMITH: I
understand now why there’s a crowd of people
waiting to get back in here. So I want to thank you so
much for talking to me. CLAES ADAMSSON: Thank you. JOANNA SMITH: And
for building this. CLAES ADAMSSON: Thanks
for having us here. JOANNA SMITH: I am so excited. I will be in the
store pretty soon. CLAES ADAMSSON: You’re welcome. JOANNA SMITH: Have a great day. CLAES ADAMSSON: Yeah, likewise. JOANNA SMITH: And then thanks
to all of y’all for watching, and we’ll show you some
more cool things later. STEPHANIE WONG: After the
next round of sessions, Reto will be checking
out the 5th Nine Lounge with Dave Rensin,
Director of CRE and Network Capacity. They’ll be chatting about how
CRE helps improve reliability. You won’t want to miss it. You’re watching the Next
Live Show in San Francisco. Welcome back to
the Next Live Show. Coming up later today, a Next
favorite, the Developer Keynote titled Get To The Fun Part. I’m definitely setting
my alarm for this one. But first, let’s head
to the 5th Nine Lounge where Reto and Dave Rensin, Director of CRE and Network Capacity, are hanging out. RETO MEIER: So
Dave, we meet again. It was just, what? Nine months ago that you
and I were hanging out here at the 5th Nine Lounge at Next. We were talking about
your new book, I think. So what have you been up
to since we last spoke? DAVE RENSIN: You
know, it’s cloud. It’s only one of Google’s
fastest-growing businesses. So not much. Really, you know, video
games, rock climbing, a lot of quinoa mostly. RETO MEIER: That
sounds about right. DAVE RENSIN: Pretty sedate. RETO MEIER: Anything
work related? DAVE RENSIN: Oh. Yeah, I mean, lots of stuff. Mostly working with customers
and doing everything we can to teach the world
what it means to be an SRE and how to do it well. RETO MEIER: So when
attendees come to Next, what can they expect
when they visit us here in the 5th Nine Lounge? DAVE RENSIN: Well,
really what we’re hoping for is to have those deep
conversations about how it is they go on their SRE journey. We’re still kind of
at the stage where we have to really prove to
people that, no, this is not just something that
only works at Google. It can work anywhere. RETO MEIER: OK. Let’s get down to business and
get to the nitty gritty of SRE. How do we help customers
move towards implementing SRE practices? DAVE RENSIN: Well, so there
are two things you really have to do. One, you have to make it
easy for them to learn. Right? So we wrote these two
bestselling books. So we have a Coursera course. So we do a lot of writing
and public speaking. And then you have to make it
easy for them to practice it in the tools. Right? So there is a wonderful
announcement today from the Stackdriver team about
SLO alerting and error budget monitoring. So those are really
the two ways you have to attack that problem. RETO MEIER: Fantastic. So what is Google doing
to help our customers meet their own SLOs? DAVE RENSIN: You know, I think
the best thing a customer can do to reliably meet SLOs,
besides building good products, is actually share their
monitoring with Google. Like, if you’re
using Stackdriver, you set up an SLO dashboard, you
can actually share that with us so that when you have difficult
moments, and you will, it becomes faster for
everyone to debug. RETO MEIER: Nice. And so what is Google– what is Google doing
with our products to help customers implement
those SLOs and error budgets themselves? DAVE RENSIN: My
personal goal at Google is to make sure that it’s
easy in every product to practice SRE principles, that
rollouts and rollbacks are item potent and easy when you’re
doing them on, I don’t know, GCE or GKE. You saw the announcement
this morning of our new Kubernetes
hybrid and on-prem solution. We’re going to make some
other announcements about SLO monitoring and burn monitoring
in those solutions too. So we’re really building at
every layer of the stack. RETO MEIER: Fantastic. Now, I’ve been hanging
out with a few SREs. I hear a lot about
accepting failure as normal. How do you recommend
customers go about moving from “every error is a problem” to the mindset of “failure is normal”? DAVE RENSIN: You
know, a big problem is for so long we’ve
all been living under the tyranny of perfection. We expect perfection. And I would say to
customers, tell me one system designed by
humans, software or hardware, economic, political,
anything you want, that’s ever been 100% reliable. And they can’t, and you
know why they can’t? Because nature has
never built one either. So once you accept that
it’s not possible to do, then the question becomes, well,
how much failure is acceptable? Because you’re
going to have some. And then we get into
the conversations about error budgets and SLOs. RETO MEIER: Yeah,
that’s a good segue. So how do you see
customers becoming effective at managing the
workloads of their SRE teams? DAVE RENSIN: OK. So, you know, there are some
good rule of thumb limits about the number of
incidents you can really have during an on-call period
or how much of your project time you can have. So I think a good rule
of thumb for people is if you see your teams are
spending anything less than 50% of their calendar
time, their clock time, on engineering projects,
they’re headed for toil. And they’re not
managing the load well. But if you come talk
to us, we can help you manage your way through that. RETO MEIER: Absolutely. And so speaking of that,
what are your recommendations to customers on how to take
that first step of their journey towards SRE? DAVE RENSIN: Well, there are
a bunch of things you can do. We’ve got two bestselling books. If you go to google.com/sre,
you can read both of the books for free. We have a course in
Coursera, specifically on SLOs, that’s
really very popular. If you’re here at the
show, come by the 5th Nine and talk to SREs, talk
to me or anyone else. If you’re a Google
Cloud customer, you can talk for free with
the CRE team, the Customer Reliability Engineering team. We are a group of
long-tenured SREs, and our job is to teach you
how to adopt SRE and make that easy and
comfortable for you. RETO MEIER: Fantastic. All right. Thanks, Dave. It is always a pleasure. DAVE RENSIN: Thank you, Reto. RETO MEIER: Thank you. Now, remember, if
you’re here at Next, come and join us at
the 5th Nine Lounge and talk to a bunch of SREs
all about your SRE questions. And now we’ll send it
over to you, Stephanie. STEPHANIE WONG:
Thanks, Reto and Dave. Throughout the Next
Live Show, we’re bringing you condensed
versions of a few Next sessions that we’re calling Dev Bites. Watch this next Dev Bite to
learn how to surface problems and how to fix them
using Stackdriver. BRYAN ZIMMERMAN: As
you move to the cloud and adopt the latest
technology such as Kubernetes, have you found it even harder
to find the root cause of issues and understand just how well
your application is performing? Stackdriver Application
Performance Management, AKA APM, shines a light
on your cloud native and multi-cloud
environments in order to help you solve
these problems better. I’m Bryan Zimmerman, Product
Manager for Trace and Debugger. MORGAN MCLEAN: And I’m Morgan
McLean, Product Manager for Stackdriver Profiler. Stackdriver is Google’s suite
of management and observability products. APM is focused on
deeper levels of debug-style information to help reduce mean time to resolution,
find performance optimizations, and build the best possible
experience for your users. APM features three products– Stackdriver Trace,
Stackdriver Profiler, and Stackdriver Debugger. BRYAN ZIMMERMAN: Trace is our
distributing tracing product. What is distributed
tracing, you ask? Well, tracing allows you to
visualize how a request flows through your environment. In microservices applications
or highly distributed environments, understanding
context and flow is essential to
resolving problems.
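As a sketch of what instrumenting a service for Trace can look like in Python, assuming the OpenCensus libraries with the Stackdriver exporter (the project ID and span names are placeholders):

```python
from opencensus.ext.stackdriver import trace_exporter
from opencensus.trace.samplers import AlwaysOnSampler
from opencensus.trace.tracer import Tracer

# Export spans to Stackdriver Trace for a given GCP project.
exporter = trace_exporter.StackdriverExporter(project_id="my-project")
tracer = Tracer(exporter=exporter, sampler=AlwaysOnSampler())

# Each span measures one unit of work; nesting spans captures
# how a request flows through the pieces of your service.
with tracer.span(name="handle_request"):
    with tracer.span(name="query_database"):
        pass  # placeholder for the real work
```

MORGAN MCLEAN: How do we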
use this in real life? For example, let’s
assume that we receive an alert that
shows a jump in latency for our application. Trace gives us the
detailed data we need to help us troubleshoot this issue. BRYAN ZIMMERMAN: With
Stackdriver Trace, you can easily see
the top requests, RPC calls and
associated latency, find an example of
the issue you’re having with simple
search and filter, and see a simple
view of the request as it processes through
your complex environment. This includes detailed custom
information and annotations. MORGAN MCLEAN: For more
detailed information, analysis reports are
generated automatically. You can view an existing
report or create your own to see how latency
has changed over time. You can group these
by percentile, and you can also drill
down into example traces to find out exactly what
happened to cause this issue. BRYAN ZIMMERMAN: As you can see,
Trace guides you from problem to service. Now we will show you how you
can get from service to method with Stackdriver Profiler. MORGAN MCLEAN: Profiler can
inspect the CPU and memory performance of all the
functions in your code. And it shows you the calling
relationships between them. Profiler can visualize your
code on a flame graph, which shows calling relationships
along the vertical axis and resource consumption
on the horizontal axis. You can use a number
of different filters, including weight
filtering, which only shows data captured during
periods of high consumption, and a focus filter, which
shows all the paths in and out of a selected function. If you need to visualize the
performance of functions that are called throughout
your code base, you can use the
Focus Table, which shows the aggregated costs of
all the functions in the graph. Finally, once you’ve
identified the area of code that you’re interested
in, it’s time to inspect the behavior of
this code to find the problem.
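Enabling Profiler is typically a one-time call at process startup; here is a minimal sketch with the google-cloud-profiler Python agent (the service name and version are placeholders):

```python
import googlecloudprofiler

# Start the profiling agent once, early in startup. CPU and memory
# samples are then collected continuously with low overhead.
try:
    googlecloudprofiler.start(
        service="frontend",        # hypothetical service name
        service_version="1.0.0",   # lets the UI compare versions
    )
except (ValueError, NotImplementedError) as exc:
    print(f"Profiler agent not started: {exc}")
```

BRYAN ZIMMERMAN: This is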
where Debugger comes in. Stackdriver Debugger
offers the ability to take log points and
snapshots of your running code in production, without impact,
redeploying, or restarting your services. Traditionally,
debugging involves analyzing logs and metrics,
taking a guess at the problem, deploying some new
code or a debug module, looking at the logs again to
confirm, waiting for the issue, analyzing logs again, perhaps
taking another crack at it. Stackdriver Debugger
allows you to take breakpoints and
snapshots in production without system impact,
redeploying, or restarting your application.
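A minimal sketch of enabling the debugger agent in a Python app looks like the following; once the agent is running, snapshots and logpoints are set from the console rather than in code (the module and version strings are placeholders):

```python
# Enable the debugger agent as early as possible in the program.
try:
    import googleclouddebugger
    googleclouddebugger.enable(
        module="my-app",  # hypothetical identifier shown in the console
        version="1.0",
    )
except ImportError:
    pass  # the agent is optional; the app still runs without it
```

This allows you to tighten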
your troubleshooting flow significantly, as you’re
not having to wait for lengthy deployments. In addition to the
efficiency gains, many customers report being
able to bring debugging further up the troubleshooting
stack to operators or SREs, further reducing MTTR and
customer impact. MORGAN MCLEAN: All of these APM
tools work with code and apps that are running on any cloud or
even on-premises infrastructure. So no matter where you
run your application, you now have a consistent
accessible APM tool kit to monitor and manage
the performance of your apps. BRYAN ZIMMERMAN: With
Stackdriver Trace, Profiler, and Debugger, APM enables
developers and operators to track and resolve issues
the way that Google Site Reliability Engineering does. You can inspect your running code without stopping it or affecting your users and get insights into how your code is running in production, no matter
what cloud you’re using. MORGAN MCLEAN: Whether your
application is just getting off the ground or if it’s already
live and in production, using APM to monitor
and tune its performance can be a game changer. To get started with
Stackdriver APM, simply link the
appropriate libraries for each tool to
your application and then start gathering
telemetry for analysis. BRYAN ZIMMERMAN: To learn more,
visit cloud.google.com/apm. MORGAN MCLEAN: And
thank you very much. STEPHANIE WONG: Later,
on the Next Live Show, it’s all about developers. We’ll give you a tour of our
featured demos in the DevZone. But first, more sessions on
all six livestream channels. So check them out. Stay tuned. Welcome back to
the Next Live Show. Thanks for joining us. I’m Stephanie Wong. RETO MEIER: And I’m Reto Meier. Stephanie and I have had a
blast here at Next ’19 so far. And it’s been great sharing
the experience with you. If you missed anything
across the last few days, head to g.co/nextonair
and watch all the content we’re livestreaming
across the six channels– Build, Run, Analyze and
Learn, Secure, Collaborate, and the Next Live Show. The Developer Keynote is
just around the corner. So to warm up for
the event, we’ve asked our colleagues,
Marissa Root and Dave Stanke, to take us on a grand tour
of the Developer Zone. Check this out. MARISSA ROOT: Hi, there. DAVE STANKE: Hey. MARISSA ROOT: I’m Marissa. DAVE STANKE: And I’m Dave. MARISSA ROOT: And we work on
the Google Developer Relations Team. We’re here at the DevZone, which
is the ecosystem for developers here at Google Cloud Next. DAVE STANKE: Right now
at DevZone Theater, we’re hosting a panel discussion
with the authors of Go. This is an interactive
session where people can learn
about the language and find out what’s coming next. MARISSA ROOT: Come
with us as we explore the rest of the DevZone. Let’s go. [MUSIC PLAYING] DAVE STANKE: Correct. MARISSA ROOT: Correct. No way. This is the coolest thing ever. DAVE STANKE: Hey. KERRY: Come on. Say banana. Excellent. Perfect. We got one. DAVE STANKE: How’s it look? KERRY: I think it’s looking
pretty good, actually. DAVE STANKE: Cool. I’m getting my snaps on here at
the Network Journey Experiment, where we’re showcasing the
speed and breadth of Google’s worldwide network. What’s going on here, Kerry? KERRY: Well, exactly. So we’re demonstrating here
how Google’s investment in infrastructure
allows our customers to move their data
around the world without having to access
the public internet. It’s fast and it’s secure. DAVE STANKE: And what
does taking selfies have to do with it? KERRY: Taking selfies. So just like when
you travel and you get stamps in your
passport, we’re actually taking the data that
makes up your picture, and as it passes through
these different regions and these data centers, we
stamp it with that location. So when it comes back
around to the destination where you took these
pictures, then we have actual proof that your data
moved through all these data centers. DAVE STANKE: That’s really cool. Has my picture made it
around the world yet? KERRY: Oh, let’s see. Indeed, it has. DAVE STANKE: All right. Looking good. KERRY: None the worse for wear. DAVE STANKE: Thanks. MARISSA ROOT: Hey, Fran. How’s it going? FRAN: Hi. I’m good. Thank you. MARISSA ROOT: Thanks so much
for talking with me today. Could you share a little bit
of what you and your team are up to here in the DevZone. FRAN: So I’m from the Developer
Relations Ecosystem Team. And we are informing people here
about the different community programs that we have
in the area of cloud. MARISSA ROOT: Awesome. So what type of programs
do you have that people can take advantage of? FRAN: So we basically
have three programs. So it’s the Google
Developer Experts. They are ambassadors in a
specific area of technology. We have the Google
Developer Group. So it’s local meetups that you
can go to all over the globe. And we also have the
Women Techmakers. That’s a program that
increases diversity. MARISSA ROOT: So
specifically, there are programs that
people can take advantage of in different
communities all over the globe. FRAN: Yes. You can basically– everywhere
you go, you can go to a meetup. You can meet Google Developer Experts and talk to them about
your area of interest. MARISSA ROOT: Awesome. Thank you so much for your time. FRAN: Thank you. RETO MEIER: It’s 3:14
here in the DevZone, and that means it’s pi time. We’re celebrating, because
a fellow Googler of mine, Emma Haruka Iwao, recently
broke the Guinness World Record for the number of digits of pi calculated, using cloud computing for the
first time to compute 31.4 trillion digits of pi. That’s a lot of pi. I’m going to get
my celebration on. Mm. That’s mathilicious. Hi, there. MARISSA ROOT: I’m
here with Paris at one of the Community Corners. Hey, Paris. How’s it going? PARIS PITTMAN: It’s going good. How are you, Marissa? MARISSA ROOT: I’m good. What do you do here? PARIS PITTMAN: I am a Google
Cloud Open Source Strategist. I work on upstream Kubernetes. I’m a co-chair of the
Special Interest Group for Contributor Experience. MARISSA ROOT: And what
makes Kubernetes unique, do you think? PARIS PITTMAN: The
community, definitely. I think we have
one of the largest contributing
communities, and I think it’s important to get customers
to participate in that. And that’s why I’m here today. MARISSA ROOT: Awesome. And what have you been
doing while you’ve been here at the DevZone? PARIS PITTMAN: Oh,
playing arcade games, like, it’s been wonderful. Like, watching awesome
speakers and meeting awesome contributors. It’s been great to have
all this in one spot. DAVE STANKE: We
showed you just a few of the cool
interactive experiences here at Google Cloud Next. But you don’t have to be here
in person to check this out. You can visit all of
these experiences and more at g.co/showcase. MARISSA ROOT: In addition, we’re
right in front of Hands-On Lab. You don’t need to
be here to do that. You can do it right
now with Qwiklabs. Get certified, learn all
about Google products, and get started today. DAVE STANKE: And be sure to
subscribe to the Google Cloud Platform YouTube channel. MARISSA ROOT: Thank you so
much for joining us here at the DevZone. We really appreciate it. Have a great day. DAVE STANKE: See ya. [MUSIC PLAYING] STEPHANIE WONG: Thanks for
the tour, Marissa and Dave. It’s cool to see a space
dedicated to developers. RETO MEIER: Joining us now are
developer advocates, Rachael Tatman and Sara Robinson,
to set the stage for what’s in store during the
Developer Keynote. Welcome. Rachel, please kick us
off by describing Kaggle. RACHAEL TATMAN: Sure. Kaggle is the home
of data science. So you might know us for our
supervised machine learning competitions. And we still do
that, but right now we have a lot more on the site. So we’ve got public data
sets that anyone can use. If you want to
analyze your own data, you can upload it privately,
and you know, not share it out. We also have a hosted Jupyter
Development Environment. So that includes
GPU acceleration, offered at no
charge, and we just upgraded from K80s to P100s. So if you’re looking for a
little bit of, like, zhuzh in your algorithm
and speed, it can be a nice little added bonus. STEPHANIE WONG: Great. And how are data
scientists and analysts utilizing Kaggle to support
their machine learning projects? RACHAEL TATMAN: Yeah. A lot of different ways. So the thing I think
that’s probably most unique about Kaggle is we
have a really big community, about 2.7 million
registered users, and we have a
super active forum. So it can be really
hard to find information about your specific
questions, especially in machine learning and AI. And things are
moving so quickly. So having a community
that you can go to and ask questions
and learn together is a really fantastic resource. STEPHANIE WONG: And Sara, have
you used Kaggle in your work and seen its benefits
in the ML community? SARA ROBINSON: Yes. I use Kaggle almost every day. I’m always looking for
interesting data sets for machine learning demos. And Kaggle is my go-to for that. You can find most
any type of data set you’re looking for–
images, text, structured data on Kaggle. So it’s been a
great tool for me. RETO MEIER: That’s fantastic. And what ML and AI launches are you most excited about? Tell me a little bit about why. SARA ROBINSON: There’s
some AutoML launches that I’m super excited
about, really making it easy for anyone
to use machine learning without having to have
machine learning expertise. So the first I’ll talk
about is AutoML Tables, which makes it really easy to
build custom machine learning models on structured data. So think about
anything you might be able to put in
a spreadsheet– categorical data, numerical
data– building models on that. With AutoML Tables,
that’s really easy. Upload your data to the
UI, press the Train button, and your model’s ready to go. Another AutoML launch
I’m super excited about is AutoML Vision
Object Detection. So AutoML Vision can
already do classification, but now what you’ll
be able to do is identify regions
in your image where a certain label exists. So it will return
bounding boxes. So I’m super excited about
those AutoML launches. RETO MEIER: Very cool. And we’re about to see
a showcase highlighting our work with the NCAA
using BigQuery ML. What can you tell us about
BigQuery ML specifically? SARA ROBINSON: BQML lets folks that have structured data stored in BigQuery (BigQuery is our cloud data analytics warehouse) create machine learning models on that data, just with a single SQL query. So you don’t have to move your data out of BigQuery to use it. It’s super simple to use. You just write a query, your model is trained, and then you can write another query to generate a prediction.
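As an illustrative sketch in Python with the google-cloud-bigquery client (the dataset, table, and column names are made up for the example):

```python
from google.cloud import bigquery

client = bigquery.Client()

# One SQL statement creates and trains the model inside BigQuery.
client.query("""
    CREATE OR REPLACE MODEL `my_dataset.points_model`
    OPTIONS (model_type = 'linear_reg', input_label_cols = ['points']) AS
    SELECT possessions, turnovers, points
    FROM `my_dataset.game_stats`
""").result()

# A second query generates predictions; the data never leaves BigQuery.
rows = client.query("""
    SELECT predicted_points, possessions, turnovers
    FROM ML.PREDICT(MODEL `my_dataset.points_model`,
        (SELECT possessions, turnovers FROM `my_dataset.upcoming_games`))
""").result()

for row in rows:
    print(row.predicted_points)
```

And I actually gave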
a session on BQML earlier today with
one of our customers, AAA, that’s using BQML
to predict call volumes across their call centers. STEPHANIE WONG: Great. And Rachael, have you also heard
about the BQML launch too? RACHAEL TATMAN:
Yeah, definitely. I checked out the NCAA Showcase. One thing that I
found really exciting was student developers
were working on developing new features for
analyzing basketball games. So one of them was
clutchness, like, how many points you’re going to
score in the last five minutes. And I thought that
was super interesting, because on Kaggle we’ve
hosted NCAA competitions where you predict the winning team. And I think it was a
couple of years ago, one of the very high
placers had just used linear regression, which
is available in BigQuery ML– let me say all the
words correctly– and also these custom features
that they had developed. So I’m seeing some
interesting synergy there. STEPHANIE WONG: This
is a perfect time to check in with our
colleague, Joanna Smith, who earlier had a chance to stop
by our NCAA Showcase demo. JOANNA SMITH: I’m
Joanna, and I’m here in the industry solutions
neighborhood of our Expo Floor looking at this great energy. All these people
are really excited. And I’m standing
with Alok, who’s going to show us the NCAA demo. You want to take it away? ALOK PATTANI: Yeah. So this is our second
year at Google Cloud being the official Cloud
provider of the NCAA. Our theme is know
what your data knows. So we’ve been working
with the actual NCAA data, basketball data– JOANNA SMITH: Oh, how fun. ALOK PATTANI: –to make a lot
of insights and predictions. So let’s look at one. We just had a
national title game. Virginia and Texas Tech, and
what we’re looking at here is the number of
possessions each team will have in regulation. So we predicted that the
number will be 63 or fewer and it was 59 in regulation. So it was right. JOANNA SMITH: Not bad. ALOK PATTANI: Well,
we’ve done a bunch more. So one of the other
things we did– JOANNA SMITH: Oh, nice. ALOK PATTANI: –was we looked
at a Data Studio Dashboard. So Data Studio, another
tool made by Google, allows you to
visualize your data. JOANNA SMITH: Yeah. ALOK PATTANI: We had developed
a bunch of different metrics for all these teams in NCAA. You can see them on
the bottom here– score control, clutchness,
discipline, et cetera. But we wanted to surface
that data in a way that people could
use, filter, and sort. So this has all 353 teams. But if we just want to
look at the champion, we can go here and
select Champion Only– JOANNA SMITH: Should
have been Tech. ALOK PATTANI: –and
boom, Virginia. And now we can see
where they rank in all the different categories. JOANNA SMITH: Oh, I like it. ALOK PATTANI: Yeah. JOANNA SMITH: Is that
good score for clutchness? ALOK PATTANI: Yeah. So the ranks are right there. So you can tell it’s
seventh out of 353 teams. JOANNA SMITH: Not bad. ALOK PATTANI:
That’s pretty good. They’re third in score control. They are a deserving champion. JOANNA SMITH: So
you said this is Data Studio, which we
know is for visualization, which is really cool. But you mentioned
other cloud tools. I think BigQuery played
a big part in this. ALOK PATTANI: Yeah. JOANNA SMITH: Can you
show me how that worked? ALOK PATTANI: Yeah. So the first thing we have
to do is make a pipeline to ingest the data. So this is a small
screen here, but I’ll show you this is on
our Google Cloud blog. We describe the entire pipeline
of how we built this data. And you can see there’s
a bunch of stuff, and then we get to BigQuery
right in the middle. And I’m a data scientist. I want stuff in BigQuery,
because then I can go and do all my stuff, right? So BigQuery. Let’s do it. And then, what we do from
there is we take the data. We do a lot of
manipulations, and to build that metric like
Score Control, we have to write a bunch of code. JOANNA SMITH: All right. So we got the cloud
blog, and then you also posted
on Medium, right? ALOK PATTANI: Yeah. JOANNA SMITH: Some
more fun insights? ALOK PATTANI: Yeah. So on Medium, what we did was
we wanted to talk more in depth about how we built this metric. So, one, we wanted
to describe in detail for the technical people but
also talk about the basketball context. So you have this thing, how
the final score can lie, score control, right? And in here, we go through
a couple of example games, where teams control
the score differently. But then, again, we
get back to BigQuery. JOANNA SMITH: I love it. ALOK PATTANI: So
here is a long query. And again, it looks fairly
complicated, and it kind of is. But when you think about how you would go about doing the calculation, BigQuery makes it pretty easy with these aggregation functions. And it runs really fast. JOANNA SMITH: And it may be
long, but it’s easy to read. ALOK PATTANI: And it
runs really fast, gets us our results for
all thousands of games in a matter of seconds. And then we can move
forward and do other stuff. JOANNA SMITH: I love
the scale power. Now, you had one more
thing in this blog, right? At the bottom, I
thought I was playing with your– what was it? An interactive scatter plot? ALOK PATTANI: Yes. So if we keep going here,
eventually we get to a point where we see this
scatter plot, which plots all 353 teams
in their school colors and shows their
difference between– JOANNA SMITH: School colors. ALOK PATTANI: –the score
control and a different metric. JOANNA SMITH: I like it. So if anyone wants to
kind of find it at home, they can search NCAA on our
Cloud blog or they can look– or Crosslink. This is the Cloud publication? ALOK PATTANI: Yeah.
g.co/marchmadness. JOANNA SMITH: Oh, nice. ALOK PATTANI: I know it’s
a little bit past March, but the tournament is
still fresh in my mind– JOANNA SMITH: No,
it goes on forever. ALOK PATTANI: –so it
should be fun to look at. JOANNA SMITH: Well, thank you
so much for talking to me. ALOK PATTANI: Yeah. STEPHANIE WONG: Wow,
amazing Showcase. How is BigQuery ML
changing the game for machine learning for
developers and practitioners? RACHAEL TATMAN:
Yeah, good question. Two things that come
to mind for me– first of all, being
able to work in SQL. So something we found in
our Kaggle Machine Learning Developer Survey is
that SQL is actually the third most commonly used
language for machine learning. And most of the people who
are outside of the machine learning and AI space or just
coming into it, many of them are already familiar with SQL. So being able to use tools that
you already know how to use is a really big time saver. And also, you don’t have
to download data locally. So if you’ve got sensitive data
that you can’t store locally or it’s just like really big,
and you can’t store it on disk, being able to do your regression
in BigQuery instead of having to download everything,
do your Python code, and then upload everything
once you’ve changed your model can be a big time saver. SARA ROBINSON: And
to add to that, I think BQML is a great entry
point for machine learning. So you don’t have to worry
about any sort of feature engineering. So if you have categorical
data, for example, if you’re building
a model yourself, you would have to one hot
encode that into arrays. BQML can recognize that
it’s a categorical column and transform your data for you. RETO MEIER: That’s excellent. Democratization of data has
been a big theme at Google. And we really believe that
if you want innovation, you have to democratize
access to data. Can you talk a little about
how BigQuery ML and AutoML are having an impact
towards us being able to achieve that goal? SARA ROBINSON: Yeah, definitely. So both BQML and AutoML make
it easy for anyone, regardless of your machine
learning expertise, to build your own custom
machine learning model. So you don’t need a lot of
machine learning expertise to get started. You just upload your data,
choose the type of model you want, and both BQML and
AutoML will handle the rest. RACHAEL TATMAN: And
AutoML, specifically, you don’t need as much
labeled data as well. So I know the Natural
Language API– sorry– the Natural Language
AutoML that just launched, you only need 100 labeled examples per category for each label in your classification task, which is very little data, especially when you’re looking at, like, BERT or ELMo or these really, really enormous GPT-2-scale language models. So being able to get started
with relatively little data, smaller time investment, smaller
money investment for labeling is a really big movement
towards democratization. STEPHANIE WONG: Great. Sara, you’ll actually be
talking about the path from Cloud AutoML to custom
model in your breakout session. Can you give us a preview
of the key factors when moving from AutoML
into custom ML models? SARA ROBINSON: Yeah, definitely. So the goal of our breakout
session– tomorrow, I’m going to be
speaking with Yufeng– is to teach people that may
not have built their own custom model before how to get started. Because at least
for me, the process of starting to learn how
to build a custom model is really intimidating. There’s lots of
resources out there. It’s hard to know
where to get started. So we’re going to break it
down and talk about feature transformations,
feature engineering that you need to do to go
from AutoML to custom model and walk through some code. And there’ll also be a
couple of live demos. RETO MEIER: Now,
Rachael, you’ll also be leading a session tomorrow
using Google’s data and AI technologies with Kaggle. What sort of things will you
cover and what part of it are you most excited about? RACHAEL TATMAN:
Oh, so we are going to give people a little tour of
Kaggle, because a lot of things have changed recently. Then we’re also going to talk
about several integrations that we’ve made recently. And honestly, the one that
I’m most excited about is the ability to
take a Kaggle data set and launch it as a
Sheets instance– a Google Sheets instance– with
a single click of a button. And one place where I
think this could actually be extremely useful for people
is for project management. So you’re writing a
kernel and a notebook, and you’re generating let’s
say your hyperparameter search space. You save that as a data set. You launch it as
a sheet, and then you can assign action items,
link to other kernels, use traps, use
color coding, which you wouldn’t want
to do in a CSV, but it could be really super
useful for project management. So I’m very excited
about that integration. RETO MEIER: Very cool. STEPHANIE WONG: Excellent. Thank you so much,
Rachael and Sara. RACHAEL TATMAN: Yeah. SARA ROBINSON:
Thanks for having us. STEPHANIE WONG: We’d love to
hear from you watching online. So send us your questions
and comments with the hashtag #GoogleNext19. RETO MEIER: That’s
a wrap for today. Be sure to join us tomorrow
starting at 9 AM Pacific time for the final day at
Google Cloud Next ’19. The day will start
with morning sessions. Then Stephanie and I will
chat with Adam Seligman, Vice President of
Developer Relations. STEPHANIE WONG: Be sure to check
out all the livestream content from the last two days on
demand at g.co/nextonair. The keynote is about to start. Let’s head over there now. [MUSIC PLAYING]

