
NetApp with Alim Karim and Dean Hildebrand: GCPPodcast 189


[MUSIC PLAYING] MARK MIRCHANDANI: Hi. And welcome to episode 189 of the
weekly “Google Cloud Platform Podcast.” I’m Mark. And I’m here with
my colleague, Jon. Hey, Jon. JON FOUST: How’s it going? MARK MIRCHANDANI: How
are you doing today? JON FOUST: I’m doing pretty
well– had a lot of fun playing video games
over the weekend. And I’m having a lot
of fun getting ready for a lot of conferences
I’ll be speaking at. MARK MIRCHANDANI: Oh, yeah? Which conferences
are you speaking at? JON FOUST: Yeah, I’ll be at PAX. And I’ll also be speaking
at another Game Summit somewhere at Google. [LAUGHS] MARK MIRCHANDANI: Oh, yeah, like
a little internal Game Summit, where we talk about
some of the plans that we’re working on
for gaming as a whole. JON FOUST: Yup. That’s exactly it. MARK MIRCHANDANI: Well,
that sounds super exciting. It sounds like you’re
keeping pretty busy. JON FOUST: Yeah,
I am hoping to get accepted to our internal talks. That is what I’m
preparing for, as well. [LAUGHS] MARK MIRCHANDANI:
Oh it’s exciting. Can you share a little bit about
what you want to talk about? JON FOUST: This one’s
going to be a little bit more cross-functional, trying
to appeal to teams to start working on the solutions that
will help improve the gaming space for the
community, as opposed to creating the tools that
are necessary to, you know, make games easier
for developers. So this one’s a little bit
more for the community. MARK MIRCHANDANI: Awesome. Well, I think we
have a little bit of gaming content coming up. Speaking of which, we do want
to get into our cool things of the week. But first, I’m super
excited for our episode today, because we’ll actually
be talking with Alim from NetApp and Dean from Google about how
file storage kind of exists in the cloud– you know, what that
means, why it’s important, and what NetApp is bringing
to Google Cloud for people who want to kind of take
their cloud environments and map up with that
enterprise file storage. JON FOUST: Right. And it’s a little bit more of a
shift from, like, on-prem stuff to being in the cloud. So they touch on
that quite a bit. MARK MIRCHANDANI: Absolutely. We also have our
question of the week– JON FOUST: Which is, how
do I authenticate my Google Kubernetes Engine cluster
in a CI/CD pipeline? So Mark, you’ll talk a
little bit about that– [LAUGHS] –I’m assuming. MARK MIRCHANDANI:
Yeah, don’t worry. I got this one. [LAUGHS] Now, before we get
into those, why don’t we talk about our
cool things of the week? [MUSIC PLAYING] JON FOUST: Definitely. So we’re going to start off
with something that has come out of a project that one of our
podcast’s hosts, Mark Mandel, has been working on, which is
Agones, which is an open source project that runs on
Kubernetes– essentially, a game server orchestrator. And the product that
has now come out that is a fully managed
offering of Agones is Google Cloud Game Servers,
which we’re real excited about. I’ve been hearing about
this since I joined Google. And I knew the roadmap for it. And it’s just exciting to
really see it come into, like, being available for our
developer communities. MARK MIRCHANDANI: Yeah,
I mean, for people who have been listening to
the podcast for a while, I’m sure you’ve probably heard
Mark talk about Agones 1, or 2, or 3,000– [LAUGHTER] –times. [LAUGHS] But it’s so awesome to see. You know, Mark has been working
on this for quite a while. And now, it’s a
Google Cloud product that’s going to be offered. So you can check out the
link for more information. But it’s just– it’s phenomenal
to think about all the work that he put into it. And you know, now,
it’s not just Mark. It’s a small group of
people, who are just working on building that out. It’s just so great to see. So definitely huge congrats to
Mark for getting it out there– and I think a lot of people are
super, super interested in it. JON FOUST: Yeah, like you
said, you can follow the link. And you can sign up for
an alpha right now. But if you really
just, you know, get your hands dirty
with it currently, you can always just take
a look at the Agones– open source project
at Agones.dev. You’ll get all the
information you need there, including all of the
links to the GitHub repo and any other information
to get you started. MARK MIRCHANDANI: Yeah, so
you can implement it yourself. Or you can use the
service that’s coming up. JON FOUST: Right. MARK MIRCHANDANI: Speaking
of the service that’s going up there, you know,
we’re talking a little bit with NetApp later. But we’ve also got a partnership
coming out with VMware. And VMware, you know, has a huge
stack of different technology that gets used when you’re kind
of in that VMware environment. So it’s really cool to see
a partnership between Google Cloud and VMware,
because now it’s going to be a little bit
easier to take your existing stacks or some of the older
stacks that you have around and run them on Google Cloud. So not only do we kind of get
the benefit of the partnership bringing new technologies
to the table, it also makes it a lot easier
to take some of the older technologies and host
them in a more kind of elastic and scalable way. JON FOUST: Right. I remember using VMware
for the first time as a college student. I will say that it’s kind of
interesting to see the shift now towards Google Cloud. And I’m really
going to take a look at that blog post that talks
more about this partnership. MARK MIRCHANDANI: Yeah,
it’s super cool to see. And I think before
the end of the year, they’ll have– on
the GCP Marketplace, they’ll have the VMware
Software-Defined Data Center. So you can actually
kind of spin up your cloud version of
your VMware environment and start putting
your tools on that. JON FOUST: And to
close things out for our cool things of the
week, it’s something from me. I’ve written a blog
post called “Using GCP NuGet Packages in Unity.” So a while back,
we had a question of the week– how do we get our GCP NuGet
packages working in Unity? And now, I’ve actually
written a blog post that, you know,
details all the steps and has pictures that move. So you can actually
see it happening. MARK MIRCHANDANI:
Pictures that move? That’s too fancy. JON FOUST: Yeah, I don’t
want to call it a GIF, because next thing you know,
I’m going to get comments like it’s called JIF. MARK MIRCHANDANI: Oh, come on. I think that we can, first of
all, all agree that it’s GIF– [CLINKING] –and that there is no
exception to that rule. COMMERCIAL VOICE: Choosy mothers
always choose Jif. MARK MIRCHANDANI: But
more importantly, I think it’s the content of the
blog post that matters, right? JON FOUST: Yeah,
that’s exactly it. And the cool thing
about it is that it is– I guess, like,
say, a predecessor for all of the blog posts I’ve
been working on, centered on using a lot of our
technologies to, you know, help developers do
really cool things. So this is where I
actually started off– you know, making
sure that I can use these packages without
having to, you know, do REST calls or
gRPC, which may be a little bit
overwhelming for people. So I’m just glad
to put this out. And the community, hopefully,
finds it very useful, and many, many, many blog posts
to come following this. MARK MIRCHANDANI:
Super cool, and thanks so much for sharing it, Jon. Well, I think we’ve got
plenty of cool things. So why don’t we step
into our main interview with Alim and Dean. [MUSIC PLAYING] JON FOUST: On
today’s episode, we are joined by Dean
Hildebrand and Alim Karim. I’ll let you guys
introduce yourselves. DEAN HILDEBRAND: Wow, thanks. Yeah, my name is
Dean Hildebrand. I’m a technical director
in the Office of the CTO here at Google Cloud. ALIM KARIM: Hi, everyone. Thanks for having us on. My name is Alim Karim. I’m a product manager at NetApp. And I cover our cloud volume
service for GCP. MARK MIRCHANDANI: OK. Well, to get things
kicked off, I’d love to hear a little bit
more about NetApp, right? This is a name that I
think a lot of people have stumbled across and
is obviously very prolific. But what exactly would
you say NetApp does? ALIM KARIM: So NetApp
has been a pioneer, I would say, in the data
management space for probably over 20 years now. And really our focus has been
providing enterprise file services to a variety of
different applications and industries. This has been in traditional
on-premise environments. And so we’ve had a
very rich history of actually providing
high-performance storage for specific
industries, like health care, like oil and gas,
like electronic design automation,
and to really help those specific customers
develop the tool sets that they have in their industries– so they can
get their applications to run. And this is where
stability, performance, and data management
are key tenets of how their applications work. And NetApp has provided that
to them for, like I said, the better part of 20 years. JON FOUST: Awesome. So we’re going to continue
talking about the future. Can you tell us a little bit
more about NetApp and Google and what that partnership
is going to look like? ALIM KARIM: Yeah,
so as I mentioned, all of these existing
traditional workloads that we’ve been running for
customers on-premise– we’re extending that with a
partnership with Google that is centered around delivering those
same capabilities consumable as a set of cloud
native services in GCP– so basically, first-class,
first-citizen file service offerings in GCP, using all of
the same intellectual property data management stacks
that customers are used to on-prem natively in GCP. And the partnership
really encompasses several different dimensions. There’s the product side,
which I just mentioned. That’s Cloud Volumes Service. That’s really the core
of the partnership. It’s solving the core
customer problems of bringing enterprise
workloads into GCP, as well as serving cloud
native new workloads in Google. There is the engineering aspect–
co-engineering this service together
between NetApp and GCP. We, NetApp, bring the data
plane intellectual property. And then, Google collaborates with us
on the as-a-service delivery. The infrastructure piece
is platform integrations across Google Console,
metrics, billing, and operating at cloud scale. There’s a go-to-market
element to the partnership, where we focus on sales
and marketing alignment between both organizations
and dedicated teams for specific accounts. And then finally, there’s
the support aspect– once the service is in GCP,
how customers get support– actually, Google Cloud
takes first line support for the product. And then, they escalate to
NetApp if the need arises. So it’s a complete offering
across product, engineering, go-to-market support– all natively available in
the Google Cloud Platform. MARK MIRCHANDANI: What makes
this file storage necessary? I mean, when I
think of, you know, working directly in the cloud,
you can have virtual machines. You can add on disks
to those for storage. And then anything beyond that
need, we have Google Cloud Storage, right? So you’ve been
able to, you know, create buckets and
dump files in there. That seems to be
pretty comprehensive. What is it that those
two aren’t covering? DEAN HILDEBRAND:
Yeah, so I joined– actually exactly two
years ago this week is my Google anniversary and– MARK MIRCHANDANI: Oh, congrats. [LAUGHS] DEAN HILDEBRAND: Oh, thanks. Yeah, it’s a nice coincidence. And ever since I joined,
the number one thing we heard from enterprises
was, you know, we want to be able to port our
applications into Google Cloud. What is the mechanism
you’re giving us to do so? And we heard this over
and over and over again. And so that was where, you know,
the real genesis of the NetApp partnership started. But based on that, you know,
what the customers also said was, don’t
give me, you know, a solution, where it looks
like two products that have been mushed
together– where you have a great platform
with all the services. And then, there’s this
thing off to the side. What they wanted to see was
native services, or at least services that look native and are
supported in a native way. And that they provide all
of the same, you know, great features and availability
and everything else that Google provides in its
native services in any of the partnerships
that we’re building. And so this is where we
really got the two together. And so why file stores, though? Why were they asking for it? And that’s because that’s the
dominant storage on-premise. And so as part of the cloud,
when we’re talking enterprises, they’re like, well,
this is what we have. We don’t have an object store. We have file storage on-prem. We have block storage. This is where our databases
are running, our ERP systems. All of the critical systems that
are running the enterprises, that’s what they’re running on. And so over time, they might
consider a modernization and think about how they can
build in Google Cloud Storage in one way or the other. But in what they want to do in
terms of leveraging the cloud and really making
the most use of it– they just want to
have, you know, the same high-quality storage
that NetApp is offering them on-prem. And they want that in Google. And so in the meantime, these
applications– they’ve all been built for that, you
know, trillion dollar on-premise enterprise market. And that entire
trillion dollar industry is not going to be rewritten in
the next, you know, few years. And so we really need
to build partnerships with leaders in the
space, like NetApp that can provide all of
the high-quality durability and availability of the storage
that they expect on-prem and now in the cloud. ALIM KARIM: And just to add
a few more comments to that, right, so really– and when
you think about the enterprise space, to Dean’s point, this is
about an application ecosystem that supports business
processes within an enterprise. And these are workloads
that have typically not been candidates for cloud, right? And now we are essentially
enabling customers to make that
transition so that they don’t have to do data
plane re-architecture. That’s point number 1. Point number 2
is, hey, if you’ve got a new cloud user, as you
said, you could spin up a VM. You can attach some disks to it. What do you typically do? You put a file system
on those disks, right? And then you read
and write data to it. And now, you’ve
got a VM and a PD that you’ve got
to manage, right? Our point is if you’re going
to end up just putting a file system on this, why not just
give you a file system that you can attach to your
VM fully managed? And you can take
it to any other VM without having to worry about
taking snapshots of that PD and doing a whole
bunch of other things that you would do to
maintain that file system. Just get the service
into the cloud. Get the service. DEAN HILDEBRAND: And there are
some practical things, too. It’s interesting when
people think about, well, we have Google Cloud Storage and
all the amazing applications that are using it. But then, they forget
that when you boot a VM, you see a file system. And you log into that VM. And you’re still
using a file system. And then the applications that
you run, such as, you know, building your
applications and the whole CI/CD pipeline that you
might implement across for your organization– all using file system. And so I think we
forget a lot of times that how pervasive file
systems are in our daily lives and how we really need to make
them a first-class citizen inside of Google Cloud. MARK MIRCHANDANI:
Yeah, I mean, I think it goes to show you that
all the layers of abstraction have really made you completely
re-evaluate where you actually think about what you’re
trying to do, right? And when you have all of
these virtual machines, which are themselves already
heavily abstracted with all these managed services, you’re
so far away from the file system that I think,
like you said, you just kind of
forget about them. So you mentioned lift and
shift a couple of times. That is a major
component, right, because these enterprise
companies or these larger architectures that have
been built based on existing file system
solutions– you know, they can move to the cloud
using some of these solutions. What about people who
aren’t lifting and shifting, but actually building new
applications in the cloud? Is it still relevant? DEAN HILDEBRAND: Definitely. And part of that reason is that
the basic fundamentals of how we build and deliver
services aren’t changing, even when they are developing
new apps in the cloud. So we’ve talked to
banks, for example, where they want to do new
products and new developments strictly inside of Google Cloud. When they’re doing
that process, they’re not lifting and shifting
existing applications. But they’re still going
through the same CI/CD pipeline that they would use
somewhere else, where they need file storage. They still need
databases and other ways of managing their
data, which require low latency and the
types of semantics that file systems
offer them in terms of managing those systems. And so even though the
applications aren’t necessarily literally taken from the
on-prem into the cloud, the new ones that they write– there’s fundamentally
no reason why they would try to avoid file
storage when, especially with NetApp, we can
make it so easy to use that you kind of, again, sort
of forget it’s even there. ALIM KARIM: One last thing
to add there [INAUDIBLE] and I think there’s also
a class of applications that we’re seeing, specifically
in the analytics space, where you’ve got n
number of machines trying to do machine learning or
something of that sort that use libraries that actually
interact with a file share. So another data point that
I usually like to point out is that Google has done a great
job of making GitHub metadata publicly available on BigQuery. So if you just look
at that data set and you look at sort of the
two big programming languages– Java and Python as an example– and you look at all
of the new code that’s been written over the
last two, three years and look at Java
and Python and see how many file calls those
applications are making, it’s over half. Like, over half of
the source code files are actually making
file I/O calls. And really, if you
look a little bit more, it’s mainly in this machine
learning analytics space, where they’re using
those libraries and interacting with data
that’s on a file system. So there you go. File comes back. DEAN HILDEBRAND:
And actually, I’ll add to that that TensorFlow
has two primary ways of accessing storage– through object storage and,
again, back through file. And so even though there’s
many applications that are lift and shifted,
as well, one of those is, in fact, TensorFlow
applications. And so if people have
developed their machine learning pipeline on-premise,
in many ways when they move it down into the cloud, the
shift over into object storage becomes a difficult move. And so then, they are,
again, in a situation where they want to build a hybrid
cloud with a common storage API across both on-premise and in
the cloud– you know, again, back to file storage– [LAUGHS] –again. JON FOUST: You kind of
caught me right off guard. I was really about to ask
you a question about on-prem. You got me there, Dean. [LAUGHTER] DEAN HILDEBRAND: Ask it again. I’ll reword it again. [LAUGHTER] JON FOUST: Alim, you
mentioned something that really caught my eye. And maybe this
will, you know, help us transition a little segue
about NetApp in the cloud. The file storage is
pretty much managed in the cloud. But what I’m curious
about is what is different about that
with NetApp than on-premise? And are there really
pros and cons to it? And maybe dean can also chime
in and answer another one of my questions ahead of time. [LAUGHS] ALIM KARIM: [INAUDIBLE] no,
that’s a great question. So if you think about, you
know, the environments Dean was mentioning
previously on-prem, there is a long amount of time
and a large amount of effort to actually stand up a service
that, you know, an application owner on-premise could use,
for example, file services specifically. What we’re doing with Cloud
and why this is so different is that that same
application owner that needs file storage
for his app can get that in the cloud
in GCP natively, you know, in a matter of
seconds, minutes, let’s say, if you really
click through the stuff. The time to consume is
really, really short, right? So that’s one key benefit. The second is this
is now hitting on the aspect of it being a
fully managed service, right? So if you contrast this with the
traditional NetApp customers– they would essentially acquire
software and systems that ran storage and then put
them in a data center, then connect them to a
network, then install application servers that
talk to that infrastructure, and then over a
number of years have to worry about
expansion of capacity, patching that infrastructure,
scaling that infrastructure, refreshing that infrastructure
when its Capex life ran out. All of those things essentially
go away from a managed service perspective, right? So in the managed service
context, a application owner, a developer– whoever is
using this in the cloud– just gets that final
endpoint without having to worry about any of
those infrastructure burdens behind the scenes. And not only that
is the entry point to using these becomes
much more tangible– instead of buying, you know,
maybe a petabyte or two of capacity, you can
just essentially use the service to provision
a terabyte or, you know, even smaller units of capacity
and just get going and just pay for that little
piece that you need without having
to worry about six months of installation
and lead time, right? So really minimizing
the time it takes to get actual access to your
storage and then second, abstracting away all
of the infrastructure or low-value pieces that go
into running the infrastructure is what the fully
managed service provides. DEAN HILDEBRAND:
I would just add to that, on-premise you
would ask the administrator, OK, what is the SLO
of your NetApp box or of your storage system? These are not typical questions
that either are asked, nor do they have the
ability to answer. There is a lot of
monitoring in place in the
cloud that just isn’t there on-prem in terms of
how they understand and how they think about
delivering storage services. And I think that’s
where really when we talk about the
benefits of Cloud and we talk about how,
you know, the availability and how we keep things
up and running and keep your business running in that
way, the real shift, I think, to a managed storage service
comes in the fact that now, you
about SLOs and SLIs and how we maintain the
uptime of these systems. And I think that
really sort of changes the conversation about
how we think about storage and how we use storage. ALIM KARIM: My view is that
to your basic point, operating infrastructure and actually
providing an application owner a guarantee is extremely
hard, if not impossible, to do on-premise, right? We are basically
solving that problem by bringing the
expertise of running a service between
NetApp and Google, providing the SRE
resources behind the scenes so that we can guarantee those
application owners an SLO and an SLA for their
particular application. And they, in turn, can pass
that into the business process that they’re supporting. MARK MIRCHANDANI: Is that
something that’s relevant for a lot of
non-enterprise customers? I mean, you mentioned
earlier some examples of, like, banks and other
large financial institutions. I’d love to hear
more about, you know, which enterprise customers
this is relevant for. But on top of that, what about
non-enterprise customers? ALIM KARIM: Now,
let’s take that, for example, for what we’re
seeing right now and then what it means for non-enterprise. So the adopters of
the service have been in health care, oil
and gas, electronic design
I mentioned, retail. And they’re really running
critical business systems in their end. And this is a core
component of what they need to actually run those
systems and get approvals to run those systems in the cloud. But ultimately,
back to the question of do you need it if
you’re not an enterprise, this comes down
to uptime, right? This comes down to
availability and durability. And I have yet to meet
an application owner or developer, who says that
those things are not important. DEAN HILDEBRAND: To build on
that, you know, I would add predictability, I think, in terms of
how they’re using and deploying cloud, right? And so what customers want is
a standard set of guarantees across the entire platform. And so when you
look at how we take the critical uptime of
our Google Cloud Storage or of BigQuery or any
of the other services, file storage fits into that
in the exact same way, right? It’s running
critical applications that all of these
organizations– whether they’re enterprises or they’re
not enterprises, they’re using to, you
know, run their business. JON FOUST: Right. I would imagine that your
non-enterprise and enterprise customers probably have
faced some type of challenges along the way. I’m curious to know exactly
what type of challenges they may have faced. And you know, what have
you learned along the way? Like, what kind of changes
did NetApp actually have to make to, you know,
suit the needs of both your enterprise and your
non-enterprise customers? ALIM KARIM: We mentioned this
notion of a managed service. And it’s super easy to use. And you don’t have to
change too many things if you’re an enterprise from
an application perspective. But it isn’t all
sunshine and lollipops. Hey. [LAUGHTER] When you talk about
enterprise use cases, you’re talking about highly
tuned infrastructure on-premise that is made for eking out every
second or every hour of a job completion, where
something like that is a key KPI for the business. You’re also talking about
high levels of security requirements for
encryption– key management, access controls, et cetera. So the biggest thing
I think that we’ve had to deal together with
between Google and NetApp, especially in the
enterprise space, is how do we then alleviate
some of these design concerns. And how do we make the
infrastructure optimized in the cloud not just
about the data changes– but how are the Compute
Engine instances– how are the networks– how is even GCS configured to
actually ultimately deliver the same or better
output and performance that that customer is used to
with that workload on-premise? And then, the same
goes for security across the board on
encryption, on key management, on securing the
infrastructure with firewalls, et cetera, et cetera. So really that
challenge is if you look outside of the storage,
right, and all of the things that are needed to
actually get together to be plugged in to
deliver that end ecosystem, that’s where really
the problem comes in. And it’s not an
insurmountable problem. But that’s where due
diligence and getting the right [INAUDIBLE]
involved to architect those environments comes in. DEAN HILDEBRAND: So one of
the things I would say broadly around the use of just
generally of file storage inside of Google
Cloud is, I really do believe that we’re bringing,
you know, this old technology to the next generation of users
and application developers. And so one of the questions
that I’ve seen come up is, well, what
are the semantics? And why did my file, you know,
get created or not created? And what is this
thing called NFS? You know, is this
a new protocol? I think some people
aren’t even aware of, you know, the 40-year history
that goes back with NFS. And that to me is
awesome, because this is the entire point of what
we’re trying to deliver here is to make NFS-based
storage, with SMB and all of the different file access
protocols that we’ll use. We’ll make them easy to use
and consume inside the cloud. And so then, you know,
this next generation of users that really isn’t that
familiar with the intricacies of those protocols,
you know, they will get to know them
over time, just like they got to know how Google
Cloud Storage works and how the semantics works there when
you’re accessing, you know, buckets from around the world. So I think it really is in many
ways a good problem to have that there are lots of
questions around how this works and operates in the cloud,
because I think that just shows that we’ve made it easy
enough to use and consume that the barrier to its
adoption is coming down. MARK MIRCHANDANI: And
it sounds like that’s– I mean, we were kind of
talking about that earlier. But that’s a really common
scenario for the cloud, right? Take away a lot of the– not
necessarily difficulties, but a lot of the details that
you generally don’t care about, because they’re not relevant
to what you’re trying to do. And just use that service for a
simple way of handling– look, I need files. It doesn’t matter
how they’re stored. It doesn’t matter
how they’re there. I want one point to
go connect to it. Give me the file. DEAN HILDEBRAND: Exactly. MARK MIRCHANDANI: So
with that being said, what is currently available
right now for people who are using GCP to
go use with NetApp? ALIM KARIM: So we just
announced beta availability of our service. And again, given all of the
integration work that we’ve done between NetApp
and Google, this is now discoverable in
the Cloud console itself. So if you look at the menu
section of the Cloud console, if you scroll all the
way to the bottom, you will see an icon that says
Cloud Volumes under Partner Solutions. You click on that. And basically, that’ll
start your journey to be able to use
file in Google Cloud. And as I said, a few
minutes, and you’ll have a cloud volume,
which is an NFS or SMB endpoint ready to go. MARK MIRCHANDANI:
And so what’s next? Right now, people
can go on there. They can beta it
up and try it out. What’s upcoming? ALIM KARIM: Yeah,
so key thing for us, obviously from a milestone–
product milestone perspective, is to get to general
availability. And we’re planning that
in the next few months. So we’re not that far away. Again, for those
familiar of, you know, building services in
the cloud, we just have to make the underlying
SLO, SLA monitoring and all of that solid,
make that all good and have
the SRE workflows and stuff all plugged in. Once we get to that
state, we’re good to go from a GA perspective. And then, past GA,
you know, we’re going to be adding
some key features that, again, are key to these
enterprise workloads, right? For example, if you’ve got a
cloud volume in one region, you’ve got a cloud volume
in another region– that’s your DR region– how do you efficiently
replicate data between the two so that you can have this for
not just data repurposing, but for DR? And so if you think
about applications like SAP and other
ERP-type things, they really need this
critical capability. So not only that–
we’ve got a healthy road map after the GA point. DEAN HILDEBRAND: So what
I’m really excited about is, as we take this
service forward, with the multi-cloud and
hybrid cloud capabilities that Google is delivering with
Anthos and other Kubernetes platforms, that really is
exciting to see how we can plug in NetApp to provide a
common data plane across all of a customer’s workloads
and really provide, again, that sort of one common
pane of glass for management across their on-premise
systems and Google Cloud and even in other
cloud environments. MARK MIRCHANDANI: Yeah,
I think it’s great, because it fits in very
well with this need to be able to run things
across environments. But it’s also really cool
to see that, you know, Google is partnering
with these companies to offer managed solutions
that, like you said, are integrated into the
Google environment, right? So Google has a history of
building a lot of things. But that’s also a lot of things. It’s a lot to manage. And so it’s really cool to see
a very concrete example of how Google can team up with partners
to still integrate solutions, but still kind of have that
in the Google experience. ALIM KARIM: Yeah,
and that was one of the key things we heard from
customers prior to building the service, right,
is they wanted to consume the data
management capabilities that we have to offer
from a NetApp perspective. But they wanted that service
experience and ecosystem integration to be fully native. They didn’t want, as Dean
pointed out early on, sort of
a sidecar, right, where here’s
your GCP environment. And then, here’s your
parallel NetApp environment. No, they wanted it
all together– not just look and feel, but also as
we mentioned, support, billing, operations, capacity management. All of that is, you
know, one pane of glass that Google and
NetApp manage together to deliver that
seamless experience. DEAN HILDEBRAND: And one thing I
don’t think we talked about yet is how close this partnership
is, in terms of what it takes to really make it a success.
that we could achieve just on paper and/or in, like, a
weekly meeting at some point, right? This is really, again, about
developers working together, about sales teams
working together, and really about just bringing
the two companies together to provide a solution for
customers that they’re really looking for. And you know, that comes with
all of the same team building and bonding exercises
that you would do even within a single
team– you know, we’ve been trying to do that across
companies to really, again, make it so a
customer is not dealing with two different companies–
they’re really dealing with one solution and, you
know, a team that’s working together to provide
support for that solution. JON FOUST: Awesome. And we’re running a
little short on time. So right before we let
you guys go, I’m wondering is there anything
that we may have missed that you
want to cover or add a little extra flair onto NetApp
right before we part ways? MARK MIRCHANDANI: Other than
the open invitation for people to hammer on that beta, right? [LAUGHTER] ALIM KARIM: Yeah, I
would say that there are some key
announcements that we’re going to make over the course
of the next quarter or so. So I would ask people to
stay tuned about those. And the items that Dean
mentioned around the hybrid cloud story and being able to
integrate with Anthos– that is something that we
are very excited about and working towards. So stay tuned for those. JON FOUST: Awesome. What’s next for NetApp? Are you going to be
at any upcoming events or making any appearances? I would assume that
you are, considering that you’re going to make a
couple announcements later on. ALIM KARIM: Yes, so we’ll
be obviously at the summits as they happen. We have a presence in the
Tokyo event that’s happening right now, Next Tokyo. And then, we will be at
Next London, as well. DEAN HILDEBRAND: And at
Next San Francisco, Alim, I believe your booth had a
challenge on how fast you could provision the cloud volume. Will we be seeing that again? ALIM KARIM: It was a very
popular contest, too. We’ll have to redo it
with some improvements. MARK MIRCHANDANI: Awesome. Well, thanks, everybody. And you know, thanks for
coming on and telling us a little bit more about
why NetApp is so relevant and some of the, hopefully,
cool things coming up in the next few months. ALIM KARIM: Thank you. DEAN HILDEBRAND: Yeah,
thanks for having us. JON FOUST: Well, thank
you to Alim and Dean for joining us in this
episode of the podcast. It was really
interesting to hear about file storage in the
cloud and why it’s important and you know, what
NetApp is really bringing for our developers
in the community. MARK MIRCHANDANI: You
know, it’s fascinating, because it’s just not
something people think about. Or at least a lot
of people don’t need to think about, like,
the specific connectors that you get into it. And I think Dean kind of brought
that up but then steered away from getting too technical. But you know, you don’t think
about the actual protocols that you’re connecting
to these files with. And a lot of that’s
been so abstracted away. So it is kind of cool to see. Like, you know, obviously that
layer of abstraction exists, because companies like NetApp
and other cloud services make it easy. But there’s clearly
a lot of work that still needs to be done
to make sure everything works and that developers can just
kind of ignore it and just move on with what they need,
which is I need files. Give me the files. JON FOUST: Right. And it’s great to know
that NetApp is actually working to solve this issue. MARK MIRCHANDANI: Yeah,
so it’s super cool. And I think that getting
access to that in the cloud will be hopefully
coming up soon. Or at least, the beta will be. JON FOUST: So Mark,
let’s just jump into our question of the week. [MUSIC PLAYING] How do I authenticate my Google
Kubernetes Engine cluster in a CI/CD pipeline? MARK MIRCHANDANI: Right. So I mean, this is a very common
part of hooking up Kubernetes to CI/CD, because usually
when you’re doing this, you’re using, like, a
headless environment, where you don’t necessarily have
the same set of tools. Or you’re running on a very
small box or any number of things that you don’t have
your full environment set up for. And you could spin up
the G Cloud environment and authenticate that way,
so the G Cloud OFF command, which I think is
what a lot of people do from their
local environments. But when you’re running
this, especially in a CI/CD case with your
Google Kubernetes Engine needing to authenticate every
time you push up a new build, you don’t really want
to have to configure your environment
every time or have a lot of requirements for it. So there’s this great blog
post from [? Ahmet ?] talking about the easiest
solution here, which is to create a
kubeconfig file that includes the authentication. And this is something
that you can basically check into your repo. It’s got the current
information for your master in there– or your cluster. And then you, as a secret,
provide the service account. So you create a service account. You get the secret key file. You treat that like a secret,
because it’s, well, a secret. [DING] [LAUGHS] And then in your
kubeconfig file, you actually have
the authentication against your cluster
that will actually use the environment variables
to pull that authentication. So this gives you a little
bit more flexibility in, for example, your CI/CD
pipeline, because now, wherever you’re actually
running that build, you don’t need to install gcloud. You don’t need to go
through any steps. You just need to have this
very, very short snippet for the kubeconfig file
and then your– obviously your secret config file for
the actual authentication. So check out the blog post. I think it’s really
quick, really simple. And you know, for anyone
who is figuring out how to build their
pipeline with containers, this will probably make that
process just a little quicker. JON FOUST: That was
really interesting. I’m probably going to
read this, because I’m going to take a
little bit from this and try to help me when
I’m trying to authenticate a lot of the services that
I’m using for the demos that I’m building for games. So I think there might
be a little overlap. So I might take a
good look at this. MARK MIRCHANDANI:
Yeah, take a look. I think it’s super cool. And you know, it just
shows you that there’s a lot of different ways to
solve different problems. But you do want to
think about, especially as you’re building out
a scalable pipeline, like what factors do I
not want to have, right? Like, I don’t want
to necessarily depend on having gcloud all the time. JON FOUST: Right. MARK MIRCHANDANI:
So very, very cool– so Jon, before we wrap up here,
where are you going to be? What are you doing? What’s good? JON FOUST: So I
will be at PAX Dev, going to be speaking with
one of our teammates, Dane Liergaard, who is a DPE,
who also works in games. And that’s going to be our
talk on universal communication and its benefits in games– really cool stuff. I’m going to be at the
internal Google Game Summit. Whether I’m speaker or
not, I will be there. [LAUGHTER] It’s just always fun and– [LAUGHS] –a great space to
collaborate with everyone. And then, I’m going to take a
little bit of personal travel. I’m going to go
into Montreal, going to be taking my brother
out for a fun trip. He’s getting married,
my twin brother. So I’m really excited
for that, too, because I’ve never
been to Canada. MARK MIRCHANDANI:
Oh, that’s awesome. Yeah, I was just up
there two months ago, not on the Montreal
side but not too far. And it was just gorgeous,
gorgeous summer. So I’m sure that will
be a lot of fun for you. JON FOUST: Right. So where are you
going to be, Mark? MARK MIRCHANDANI: Oh,
well, you know, I’m going to Austin this
week or next week. And then after that,
it’s really going to be coming back
to the Bay Area and sitting back
to the grindstone– [LAUGHS] –getting a few videos– [LAUGHS] –out the door. You know, I’m super excited,
because I think just after this airs, the next season of “Stack
Doctor” is going to come out. And that’s with our customer
engineer, Yuri Grinshteyn– we’ve been working on
this one for a little while. But he talks more about
some different ways to integrate custom
metrics and monitoring into Kubernetes clusters. So I think this will
be content that’s really helpful for people,
especially when it looks at how do you kind of take that
and add on custom monitoring and logging– so super excited to hear
that content coming out and glad that all the
work has gone into it. And then, it’ll be on
to the next season. JON FOUST: Sounds awesome. And congratulations
on another season. [LAUGHS] MARK MIRCHANDANI:
Yeah, absolutely. Thanks so much. JON FOUST: Well, we
would like to thank you all for joining us on this
episode of the podcast– [MUSIC PLAYING] –and hope to see
you all next week. See you, Mark. MARK MIRCHANDANI: Sounds good. See you, Jon. [MUSIC PLAYING] DEAN HILDEBRAND: OK. Sorry. Let me– I was going to
use the word synergy. And the last thing
I ever want to do is get recorded using
the word synergy. MARK MIRCHANDANI: So Dean,
tell me about the synergy that you have. [LAUGHTER] [CHEERING] What I think we need is a lot
of B to C to B synergy between– [LAUGHS] –our applications
and our workflows.

