You can listen to the podcast and read the show notes here.
Michaela: Welcome back to the show. Today, we're going to look at the Docker revolution for faster ColdFusion and easier devops with Bret Fisher. We're going to look at the revolution in how we build, ship, and run software using Docker containerization, and some of the key [inaudible] [00:18] benefits of choosing Docker to deploy your ColdFusion apps. Bret is going to be speaking at MuraCon in a few weeks, so in particular we'll focus on the Mura CMS, which is written in ColdFusion, but all of this applies to any ColdFusion app. We'll look at some of the downsides of Docker, we'll look at your dev-stage-to-production pipeline, how you might best set up your local dev environment with Docker, and some of the tools you can use.
We'll mention Docker Compose, and we'll talk about some of the basic daily Docker commands that are really good to know. So a lot more things coming up in this episode; we'll see what we can fit in here. Because if you don't know, Bret is a Docker Captain and he is the author of one of the most popular Udemy courses on Docker, and has some great YouTube videos on setting yourself up with Docker. So he's a real Docker expert, and he loves to give high fives to people. And he's speaking at about 12 conferences, including MuraCon, in the next three months. So welcome, Bret.
Bret: Thanks for having me. Yes, it should be called the Bret world tour at this point, I believe. So it's going to be a lot of fun.
Michaela: Excellent! So tell us a bit about the Docker revolution in building and deploying and running software?
Bret: You know, if you're in this industry long enough, you hopefully gain some insight on larger trends in the ecosystem. I guess that's the nice thing about getting old: once you have 20+ years of experience like I do, you realize you've lived through these huge shifts in IT technology and how we think about building and deploying applications. Over the '90s it was mainframe to PC; I was a part of that fun. In the 2000s we went to virtual machines. That was a big change, the shift from physical to virtual, and at that time people were saying there's no way virtual could be a good idea, that virtual machines were going to be super slow and painful. And there were also reasons not to do it, but today that's the standard; you would only do physical in specific cases instead of the other way around. Then we had the cloud shift; that was the big transformation, and it's still happening.
And about three or four years ago, right when Docker was getting off the ground, around the version one timeframe, I was using it in my own startup, and it sort of hit me like a ton of bricks one day: "This is the next big one. I'm not going to be watching from the sidelines and realize after it all happened that I'd missed being part of a really cool change in IT."
And I sort of sat up and thought, "What if I could be a part of this change? What if I could be a member of this community and help take this culture shift and technology shift to a lot of people?" And that's actually kind of how my journey started. And I think that's how a lot of people's journeys start. When they hear about it first, it sounds mythical, right, like the cloud: "We don't really know what the cloud is." And we start to play with some tutorials online, and at first, like every new technology, it's probably frustrating; the concepts are weird and they don't align with your current assessment of how things should be. And eventually you probably find a benefit and something sticks with you and you realize, "Oh, that's really cool, that's going to help me a lot," and I find that that's happening more and more.
Three years ago, it was a little bit harder for me to convince maybe the average developer to try out Docker, because there were a whole bunch of "yes, buts," a whole bunch of extra steps, problems, or edge cases. And I think each year, as we get more and more mature in the ecosystem, those are going away. So now, in 2018, it's sort of the default when I meet a new developer, or a new operator even, because operators are coming on board to the Docker ecosystem, that they're saying, "Yes, I've tried it and we're looking at ways to implement it," or, "We're using it already but we're looking to use it more." The statements we got three years ago were, "What is Docker?" And you know, "I keep hearing this word Docker, but I think it's a pair of pants or something." [Laughs]
So, yes, it's hard to describe anymore what it really does, because containers are now solving so many different problems throughout the lifecycle of software. It's no longer the same thing as if you got in early and maybe tried it three or four years ago, because we're now celebrating Docker's fifth birthday this month. Every year Docker has this big celebration and all the community meetups celebrate, and there are 300 of them now.
We all get together and have a birthday party, eat some cake, do some hacking. It's hard to believe it's been five years, but over those five years things have gotten so much easier to use that you're now able to describe the technology behind Docker in an elevator pitch. You're able to say, "Well, it pretty much benefits every part of the lifecycle: from creating software, to testing it, to deploying it, to running it, to patching it, to updating it, and then getting rid of it and replacing it with something else." That entire lifecycle now has some sort of Docker tool in play, at various stages of maturity obviously, because nothing's perfect. But yes, that's a really long way to answer that question. [Laughs]
Michaela: That's great. So, in a phrase, it's not your father's Docker.
Bret: There you go.
Michaela: [Inaudible] [06:05] car ad from the 1950s. [Laughs] So let's just drill down into some of the key benefits of using Docker with ColdFusion or Mura or any other software development system.
Bret: Yes, and it's different for you based on where your focus is. If you're a developer who spends most of your day on your local machine writing code and testing code locally, the first experience you're probably going to have with Docker is using Docker for Mac or Docker for Windows, which are Docker-branded products that make it easier for you to run Linux containers on that machine.
If you're on a Linux machine as your main OS, you've pretty much solved that problem; you can run Linux containers all day long, and it's pretty easy to do by just installing Docker. But on Mac and Windows, we don't have that ability, because a container requires a specific OS kernel to run; it's not like a VM. A VM has its own kernel built in. A container removes that layer of abstraction and says, "I'm going to run in a confined space, but on the kernel of the machine OS that I'm running on." So it runs a Linux container, which is what we've known up until recently; we'll talk about Windows containers later. But up until recently, we really just had the choice of Linux containers.
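[Show notes: You can see this kernel sharing for yourself. The following two commands are a minimal illustration, not from the episode; they assume you have Docker installed and can pull the public alpine image:

    docker run --rm alpine uname -r   # prints the kernel of the host (or of the hidden Linux VM on Mac/Windows)
    uname -r                          # on a Linux host, prints the same kernel version

The container reports the host's kernel, not its own, which is exactly the difference from a VM that Bret describes.]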
So if you're on a Mac or Windows, you had to get that all set up to work inside a Linux VM, and that was a little painful. So the Docker for Mac and Docker for Windows tools have come out in the last year and a half, and now they're in a sort of stable release cycle. They do this almost wizardry-type approach of creating a tiny little VM in the background that runs Linux, that you don't even know about, and running it transparently. I call it a mini VM because these things are super tiny, you know, five or ten megs of actual OS data. And those little machines even use your localhost, so when you're doing web-based programming and you want to bring up your app, it does a little bit of fancy routing in the background so that you feel like you're natively coding your app.
But what's happening is it's really running a little Linux VM in the background. So that's all abstracted for you in those tools, and that's been the main way in. I mean, I'm sure that Docker probably has, you know, I don't know, I'm guessing millions of downloads on that app; I don't have any statistics or anything, but everyone I've talked to that's using Docker, that's their first way of getting it, and it sets them up well when they're getting started, because the terminal gets set up and then Docker just works out of the box for them.
If you're more of a build person, if you're focused on build engineering in the middle, where CI and CD come in, then your part is probably going to be focused around the Docker Compose tool, which is also a huge benefit for local developers. Build engineers particularly care about it because it's a YAML file that allows you to describe your application, and in a world that's increasingly using microservices, that's important. Because when you have multiple applications that all have to run independently but be able to communicate, the tools we had before really kind of weren't great, right? Vagrant was okay, but you're starting a whole bunch of different VMs. I mean, what if you have twelve microservices? That's actually not a lot nowadays for teams that are going that route.
So tools like Docker Compose, which comes out of the box with Docker for Windows or Docker for Mac, give you a single file in which you can describe your applications: the ports, their names, all the environment variables they need, and any config files they might need. Basically all that stuff in a very elegant, terse way; most of my Docker Compose files are less than a hundred lines.
And that allows you to do a docker-compose up, which is a one-liner that starts from nothing and then gives you everything you need to develop or test that container. That is usually what a build engineer or a CI/CD person is going to use in their CI/CD platform. So maybe they're using [unclear] [10:24] or something to test code. In certain teams it's sometimes the CI people that start using Docker first, before the local developers that are coding, because the CI environment gets so complicated, and it gets so hard to build and test the code consistently and correctly, that I'm seeing that being a really strong angle for getting teams on board.
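[Show notes: As a rough sketch of what Bret is describing, a docker-compose.yml for a two-service app might look like the following. The service names, ports, and images here are illustrative assumptions, not taken from the episode:

    version: "3"
    services:
      web:
        build: .            # build the app image from the Dockerfile in this folder
        ports:
          - "8080:8080"     # expose the app on localhost:8080
        environment:
          - DB_HOST=db      # containers reach each other by service name via Docker's DNS
      db:
        image: mysql:5.7
        environment:
          - MYSQL_ROOT_PASSWORD=example

A single docker-compose up then starts the whole stack from nothing.]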
Michaela: So for folks who don't know what those acronyms are: CI is Continuous Integration, where you're continually testing or building code automatically using a tool like Jenkins or some of the other build tools.
Bret: Yes.
Michaela: And CD is Continuous Development; I'm assuming?
Bret: Yes, that's true, it can be continuous deployment.
Michaela: Deployment, okay.
Bret: Yes, that's more the ops side; that's the hat that I like to wear, the ops person. That usually happens after your CI, so once your tests are finished and they're good, most of those platforms can help you get your code where it needs to be, right? Because once you've tested it, where is it going to go next? And there's a new concept with Docker that you have to take into account. One of the big problems that Docker solves for those build engineers, the people that are focused on the testing and deployment of software, is that the ways we got software from point A to point B haven't changed a whole lot in the last couple of decades. I mean, since we were doing floppies and tape it's mostly been, you know, FTP or SCP or some sort of file copy; maybe we download code directly from Git or GitHub or something, but that hasn't changed a whole lot.
And so Docker creates this concept of an image, which is really a package format, and I like to call it sort of the package format of the future. That allows you to ship your tested and built code and all of its dependencies; that's one of the keys of the magic, all those dependencies in a single bundle. It's actually kind of a [inaudible] [12:36] or kind of a zip file in the background, but you don't really see it at that layer. You just see it moving around: it can move around the Internet, it can move around your enterprise and your data center. It uses something called a registry to store these packages.
And so you can easily move these around using push and pull concepts, similar to what your versioning system like Git might use, and that solves so many fundamental problems around updating software frequently. Because if we all want to get to that eventual goal of agile deployments and have this magic solution that gets software out, you know, weekly or daily, you're going to have to automate that stuff, and hand-typing different commands on every one of your servers was always very tedious, so people would write scripts. The goal with this part of Docker is to get rid of that scripting and make it more of a definition-based approach, where you're writing YAML files that describe things and they just happen automatically in the background. So yes, that's continuous deployment.
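[Show notes: a minimal sketch of that push/pull flow; the registry address and image name here are hypothetical placeholders:

    docker build -t registry.example.com/myteam/myapp:1.0 .   # build the image from your Dockerfile
    docker push registry.example.com/myteam/myapp:1.0         # push the tested image up to the registry
    docker pull registry.example.com/myteam/myapp:1.0         # later, on any server that can reach the registry

]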
Michaela: Great. And YAML is some kind of scripting language, yet another something, I don't know.
Bret: Yet another markup language. It's like the easier version of JSON, or a much easier version of XML, yes.
Michaela: But that basically defines, you know, what other containers your app can talk to. Can your ColdFusion talk to the database? Can it talk to a file store? You know, those kinds of things, and maybe some other configuration stuff. So perhaps we can think of these Docker packages as being sort of like semi-intelligent deployment objects. They sort of know what to do with themselves because they've got all this configuration stuff baked into them?
Bret: Exactly, yes, they're like artifacts. It's usually a one-to-one ratio, not always, where, let's say you have a web app, that's usually a single container image, and then if you have a database you need to run, that's usually a single container image. And you can move those images around, either in the public cloud, in the private cloud, or in your data center, depending on what image registry you use, and there are lots of them out there. The default one everybody knows about is Docker Hub; Docker created that first, but there are lots of others. It's a standard; they've sort of created the standard for how these images work and how you move them around.
And so all the other tools you see around the industry, like Kubernetes, or AWS ECS and Azure ACS, which are Amazon's Elastic Container Service and Azure's Container Service; you see all these terms around the industry about running containers, but the nice thing is that in the background they've all settled on at least a few things. One is that image format, the standard for how these packages look, how you move them around, and then how you run them. Now, once you get to higher levels of abstraction and we start talking about orchestration, that's where things aren't quite standardized, or at least agreed upon, in the industry, and I think as this industry moves forward we'll see more formalization and maturity in those standards.
It doesn't mean you shouldn't use those things; it just means that not every tool works with every other tool, and you can't just switch tools without some rework involved. But that's one of the goals around containers, and the reason that Docker was even created in the first place, was that we had all these proprietary ecosystems developing on the Internet. You know, Solomon Hykes, the founder of Docker and the actual creator of the original open source project, actually spun it out of a company that was competing with Heroku. They were fundamentally trying to create, in the open source world, an open source version of one of these platforms-as-a-service. And so they gave us all this idea of: what if, instead of having to choose an Azure or AWS and get locked into the proprietary implementation of those solutions, you could run your own, and it would work the same way on premises as it could on any cloud?
And that's kind of the vision of how this is rolling forward in that ecosystem: how can we be agnostic about where we're at, in a cloud or in a data center or wherever we are, and still do these things like create applications, deploy applications, test applications, and then scale them up to many, many different servers, all at the same time?
Michaela: So, by putting your code inside Docker containers, you kind of don't really care where it's deployed. It could be on your local development server. It could be on a test server. It could be out on the cloud. It could be replicated into thousands of copies to deal with scaling. You don't really care, because you've dealt with it in the scripting, or the container magic takes care of it?
Bret: Yes. I think that's the Utopia, right? I mean, there are fine points in the details of all that, of course. When you think about your images, they are these sorts of discrete units. I mean, really, what a Docker image is, is a build artifact of all of your app's code and all of its dependencies, including the app's dependencies from Composer or whatever your framework's package manager is; it's going to include all that stuff. And so this file may actually be kind of big; it might be a gig in size if you have a large application with a lot of dependencies.
And you're not necessarily storing your database's data in there; you're not storing logs and stuff. This is more about the application code, and the idea is that it runs the same way on your local machine as it will on the testing server, as it will in production. You're literally shipping a copy of that identical SHA-hashed, SHA1-hashed, image, and that's a way of sort of guaranteeing that it's identical in every location you're putting it. And so the systems are all going to behave that way, so that you can be certain. This is a fundamental problem of IT we've had ever since PCs have been used to run software: "How do I ensure the code on the server is exactly the same code, with the exact same dependencies and drivers, as on my machine?"
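[Show notes: Docker can show and pull images by their content hash, which is what makes that byte-for-byte guarantee checkable. A quick illustration; the digest shown is a placeholder, not a real value:

    docker images --digests              # list local images with their sha256 content digests
    docker pull myapp@sha256:<digest>    # pull exactly that image, byte for byte, by its digest

]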
And that's one of the core principles of Docker, at least until we create the new problem, which [inaudible] [19:16] are going to be: it's trying to solve this so we can truly say, "You know what, every single one of those files, and if you're on Windows it's a DLL, if you're on Linux it's just a binary, every single one of those files is the exact same bytes on these two systems, so they should in theory behave exactly the same way."
Michaela: So, when you go visit the emerald city of Docker Utopia, you don't ever get any of those bugs where it doesn't work in production but it did work in development? It always works exactly the same on all the different deployments?
Bret: It does, in the cases where those things are true. What I mean by that is you can't make the network in Amazon behave the same way as the network on your machine. Or, I mean, we all love to think that Linux is just Linux, but when you get down to it, you know, the Linux kernel is really what's running all this stuff, and now we have these things called Windows containers, which actually run on the Windows kernel. And Microsoft is creating huge compatibility there; they're really sort of winning me over by being a good citizen, not trying to create their own standard for the Microsoft platform but saying, "Hey, Docker did a good thing, let's just keep doing what they're doing." So, not to get too distracted on Microsoft, but now Windows,
like Server 2016, actually comes with Docker and a Docker support license built into the Windows operating system. And I don't know that Microsoft's ever done that before. I can't remember Microsoft ever shipping an OS and saying, "Yes, you know, this third-party product is actually coming with it, and we'll provide support for that third-party product through your technical support contract." I don't think they've ever actually done that, and it just shows the power and the weight of this revolution, and how much it caught Microsoft by surprise, that they had to sort of catch up by adopting the Docker way and adopting the Docker standard instead of creating their own.
Michaela: I guess if Bill Gates was still at Microsoft, there would have been a Docker memo like there was an Internet memo. [Laughs]
Bret: No Docker. It would be called Windows-something.
Michaela: It would be called "Docker," right? [Crosstalk]
Michaela: Yes. So, we've talked about the Docker Utopia here; what about the downsides of Docker? Is there a Docker dystopia as well?
Bret: Absolutely. And of course the Internet's full of it, like with anything; to read the best and the worst, always go on Twitter. So, obviously, I think the biggest issue now, and we're talking 2018, so you know, three years ago we would have said yes, there are lots of edge cases; now everything's wonderful. I think in 2018 the biggest barrier and problem is just where to get started, and how to adapt to and adopt these new fundamental concepts.
I have a course on Docker, as you mentioned earlier, and, gosh, I've had over 43,000 students now, and one of the big challenges I see across all of them is that it's a huge shift. It's the same mental shift that we all had to make when someone tried to explain a virtual machine to us and we were saying, "Wait, what are you saying? It's an operating system inside an operating system? I mean, does it have its own network card?" You know, back then we were all really kind of blown away by that concept. But I think the biggest first hurdle that any person or team comes up against is, "How do we educate ourselves so we all understand the same concepts and agree on them?"
But I think some of the negatives come once we get to those higher levels of abstraction, like we talked about: cluster orchestration for production servers. First off, it seems like the winner, if there is going to be a winner, would be Kubernetes. Kubernetes is a great open source project, but it wasn't designed to be run by a part-time end user, right? It's sort of a system to make systems with, and that's why you see a lot of the vendors, AWS and Azure and everyone else, creating tools and platforms to run Kubernetes for you. Because Kubernetes, at least to run it securely and fast and highly available in production, takes a lot of work.
So hopefully the industry is going to help solve that problem, because I think if we're all going to use containers, we all want to save time, not increase burden, right? We want to reduce tools, not increase tools. We want to speed up our path from code to production, not lengthen it, right? Those are all the sort of core principles. So I think that production part is going to be the last sort of area where a lot of the nuances are worked out. And you know, as a consultant I [inaudible] [24:35], where are the consultants all being hired to help teams?
And I think the two biggest areas that teams are always asking for help with are: how do we get started and ramp up our team on just our first project, because we don't even know what we need to know? And then: how do we get this working in production as reliably as the old infrastructure it's replacing? And I think those two ends of the spectrum are the big friction points for people.
Michaela: So, how many months or years before those issues have gone away, do you think?
Bret: Well, for the first one, I don't think it's a matter of if you're going to learn what containers are or how containers work. It's kind of like virtualization, or the cloud years before that: it's not a matter of if, it's a matter of when. Just like you eventually learned how to create your first server in the cloud, it's just a matter of when you're going to eventually learn how to run some code in a container.
And so I think that we will all naturally get there, even if you're in a laggard industry, or maybe you're in a government job, and I mean, I've consulted for governments, so I have sympathy for the struggles that they have; if you've got 50,000 users all inside an enterprise, you're probably not even thinking about containers yet. Once you've done that learning, I think we'll get past all of that, and you know, five years from now, ten years from now, we won't be having Docker 101 talks the same way we're having them today.
The second one, production; I don't know. I hope that we don't end up in the situation we have today, where we have all these tools, all these layers of abstraction that we've created, like orchestrators and scheduling and whatnot, but fundamentally it's no easier for the average person to run something in production than it was before. Like, we haven't solved that. And by the average person, I mean someone who codes and operates for a living and doesn't teach Docker.
I am definitely not that person. Someone who's not a single person running a thousand servers, right? Because I think we all go to conferences and hear the Google talks and the Netflix talks, and we hear these wondrous tales about things called Chaos Monkey that will go and wreak havoc to test our outage and failure rates, you know, all these wonderful tools. But there are a lot of us, I think, day to day, that are running applications on three servers, or on ten servers, and you know, we might have 500 apps like that, but they're not each running on a thousand servers.
And I think that's a different kind of problem. So at the end of the day, I'm a fan of Swarm. If you're going to look into Docker and you're considering orchestration, Swarm is one of those options; Docker created it and really focused a lot on the user experience. I find that it's a great teaching tool to help someone new, who needs to run something on a server and wants to run it in Docker, understand, "How can I take three servers and treat them like one?" Because that's really, at the end of the day, what we're saying when we talk about orchestration and production containers: I want multiple servers to act like one, so that I can administrate and schedule my work as if it were as easy as my local machine. And that's what all these tools are starting to do for us. I like Swarm because it's simple to use, it scales well, and it solves most problems for most people. It just doesn't happen to be the most popular one. But Kubernetes does solve a lot of those edge cases and other platform problems that people, especially in larger enterprises, might come up against.
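[Show notes: the "three servers acting like one" idea in Swarm boils down to a few commands. This is a minimal sketch; the join token and manager IP are placeholders:

    docker swarm init                                    # on the first node, which becomes a manager
    docker swarm join --token <token> <manager-ip>:2377  # run this on each additional node
    docker service create --replicas 3 --publish 80:80 nginx   # schedule 3 copies across the cluster

]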
Michaela: And you know, I think [inaudible] [28:23] is one of the other ones, right?
Bret: Yes, [inaudible] [28:27] is cool. It's one of those projects; we joke that in the container ecosystem there's a new product every day [laughs], so I always recommend you check the GitHub likes. It's not an indicator of quality, but it is at least an indicator of whether it's been around more than a week.
But [inaudible] [28:50] is a really great tool for managing a Swarm. It actually runs on top of Swarm and gives you a great GUI and extra features out of the box; basically a web administrative console for a multi-server Swarm cluster. There are other tools out there like Rancher that do similar things, and then of course all the cloud vendors are creating their own solutions for that.
There's actually a great diagram, if you're new to the container ecosystem and you're not even familiar with the names of these products and companies, because the new startup space for this is a little daunting to deal with. It's like we had with the startups in the late '90s, where it seemed like every day there was a new startup, a new company that existed to deliver pet food or something. And now the same thing is true of the container ecosystem.
So there's an organization called the CNCF, the Cloud Native Computing Foundation, and they have a document we can put in the show notes. It's actually a diagram that at least gives you logos and blocks them into groups so that you understand: what are the companies doing security stuff in containers, or what are the companies working on production orchestration, or what are the companies doing CI and CD with containers? Because once you start doing containers, you realize some of your other tools might have to change as well, and then you don't know where to look; you get this problem of Googling and finding a thousand options. So they have a nice diagram on there.
Michaela: Great, thanks for that. So, maybe you can just tell us what the ideal pipeline is going from dev stage to production, if you're living in Docker Utopia?
Bret: Yes, so ideally, on your local machine you're developing with Docker for Mac or Docker for Windows, or on Linux you're just using native Docker. And the [inaudible] [30:54] there is to always go to store.docker.com to download and get the correct install instructions for Docker, because it's complicated, especially if you're on Linux, and it also depends on what Windows version you're on. Mac is pretty cut and dried, but on Linux distributions it's different per distribution because of the different package managers.
And on Windows, it's different per version of Windows, because Microsoft and Docker are really making it best on Windows 10, and unfortunately only on certain editions of Windows 10, which are Pro and Enterprise. So depending on your version of Windows and edition of Windows, you might have different ways of installing it. I have some YouTube videos on the nuances of all that; that's sort of the 101 of getting installed. I can send you the YouTube video links. And so that's the first step.
Once you're doing that, you're probably going to want to use the Docker Compose command line every day, all day, as you're developing. My expectation for a development team is that once they adopt Docker, the average developer is using docker-compose up and docker-compose down and other docker-compose commands to replace all the other environment setups. You know, if you were traditionally running on Apache and you were starting an Apache server, or if you started your own MySQL server locally, or your own VirtualBox or your own Vagrant, all those different tools we used to have are all replaced by Docker and Docker Compose. And the docker-compose command line was designed for local developers, to make their workflow buttery smooth.
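[Show notes: the daily Docker Compose commands Bret mentions, in one place; the service name "web" is illustrative:

    docker-compose up -d           # create and start the whole environment in the background
    docker-compose logs -f         # follow the combined logs from every container
    docker-compose exec web bash   # get a shell inside the running "web" container (if its image has bash)
    docker-compose down            # stop and remove the containers and network

]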
So, they're doing that; then you put your code in your code-committing solution, whether it's GitHub or Bitbucket or Team Foundation Server or whatever place you store your code. Hopefully it's in a versioning system; it doesn't have to be, obviously, but in a team environment I'm assuming that it is in some sort of versioning system. So that system is going to talk to your CI solution, and the nice thing is Docker has been around long enough that at this point every CI solution has an answer for Docker. So the good news is whatever you're using today for continuous integration almost certainly works with Docker, because it's now the eight-hundred-pound gorilla, so everybody is trying to update their tooling to work with Docker, right? That's a nice thing. So now you're going to be testing those images, because the goal is that you're doing continuous testing, and images make that really easy.
So what your CI tool is going to do is pull down that code and do what we call building the image. It's going to actually install all the dependencies into that image, and then it's going to do the testing on the image itself. So whatever your prior unit testing was, or whatever you're doing, that's all going to occur the same way, but it's going to be in the container. Which is great, because when you do it in the container and it works and you get greens back, essentially, you've now accredited, sort of blessed, that image, and you can then push that image up to a registry and store it. Now, a registry is really just an HTTP web service that stores data. It's a very simple solution that uses HTTP calls to let you push and pull these images.
So once you put that image there, the world is your oyster, because you can decide that you now want those images to run on servers somewhere else. Maybe the servers are in the cloud, maybe they're in your data center, and as long as those servers can talk to that registry, they can pull down your images, or any old version of your images. That's a vital component of all this: it keeps all those versions over time, and it's nice and smart, so it's only going to store the differences.
So you're not going to have these huge bloated packages; it's got a nice differencing system built in, so that if you only change one file of code, then the registry is only going to store one file of change in its storage system, so your storage is going to be nice and efficient. And you'll pull all those down to your servers and run them. And if you need to go to an old version, it'll pull down the old version and run it. And there's a concept in Docker called "tags," and that's how we label these images. You might label one version 10, or version 28.03.22 or whatever; you might version via a commit ID, because that's how you version your software.
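[Show notes: a sketch of how tags map to versions; the registry name and tag values are illustrative, borrowing the version numbers Bret mentions:

    docker tag myapp:latest registry.example.com/myapp:28.03.22   # label a build with a version tag
    docker push registry.example.com/myapp:28.03.22               # store that version in the registry
    docker pull registry.example.com/myapp:10                     # pull an older tagged version
    docker run -d registry.example.com/myapp:10                   # and run it, for a rollback or a test

]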
So those things all become what we call tags, and you download the image with a specific tag and run it where you need it. Now it comes full circle: that image can now be downloaded by the developers locally and then tested or coded against on the machine where they're developing. So the nice thing is, if you're in a team environment and you have many different containers, maybe you're working on the API but someone else develops the web front end.
You can now download their image as part of your Docker Compose file and run their image while you're developing on your image of the API back end. It allows you to easily change out versions by just changing one line of the compose file, and what Docker will do is simply download the new version of your dependencies or the new version of someone else's image. And once you get that full lifecycle in there, it really becomes a nice story: everybody is using everyone else's images, everybody is pushing their code into images, and the image becomes that sort of key unit for running and testing containers.
Michaela: So what would you say to someone who's like, "I get how this helps the devops guys have an easier life, but I've got a developer team here, we've got like a dozen developers; how is this going to make our life easier? It sounds like a lot of extra work."
Bret: Yes.
Michaela: "Just let me develop on my local machine and be done with it," you know.
Bret: Right. You certainly can. One of the goals of Docker is that the code that you're writing operates the same in dev, in test, and in prod. And in order for us to get closer to that goal, you need to be able to run the code locally while you're developing it, and that code should be running on a similar OS and kernel to what you're going to be using in production. So you use something like Docker Compose. And I would say that the hard part about all this for a new developer, besides just conceptually getting past what a container is and what an image is and how they work together, is writing your first Dockerfile. That Dockerfile is your instructions for how to build your app.
You probably already have your build instructions, probably in a shell script or in some sort of, you know, package manager commands already, so you're really just going to be transferring those into the Dockerfile format, and then you're going to create a compose file to go along with it. The compose file is not required, but it does make starting multiple containers easier, so I always recommend it.
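[Show notes: a minimal Dockerfile sketch for a CFML app. The base image name, the /app path, and the port are assumptions for illustration (Ortus Solutions publishes CommandBox images on Docker Hub; check their docs for current names and conventions):

    FROM ortussolutions/commandbox   # assumed CFML engine base image; substitute your own
    COPY . /app                      # copy the application code into the image
    EXPOSE 8080                      # the port the engine is assumed to listen on

]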
And those two are very small files; I mean, for the average project it's usually less than a couple hundred lines in each file, and that information is information you already have today. You just have to learn, over the learning process, how it expects that information to be laid out: it's going to store your environment variables, it's going to store your passwords, it's going to store all that stuff. And once you've done that, what it does for you as a developer in a team is allow you to use those single docker-compose commands, like docker-compose up and docker-compose down, to literally tear down and create entirely new environments in one-liners.
And so, if someone else on your team creates a new project and they're testing a new version of ColdFusion, or it's got a new version of this Bower dependency for some front-end thing, you don't have to worry about any changes on your system to be able to run both at the same time. Because these are all isolated, discrete units, none of them has the chance of interfering with the others. So now I'm able to run one Docker Compose project while I've got, you know, MySQL version 8.0 and this version of Python running, and all that stuff in those images and running in my containers.
And at the same time, maybe I need to test two different things together, and I could be using some other code running in its own container, running an incompatible version of Python or a totally different version of MySQL. They all get their own IP addresses, they're all operating on the same network, and Docker takes care of the DNS for you. It does the private networking part; it sets all that up for free, out of the box, without you having to do anything. And so when I start showing that workflow to developers, I'm like, "Okay, take that twelve-page document you had for a new developer, that describes all the install steps to get their environment set up just to write a line of code, and throw it out. Now you're left with a paragraph."
And the paragraph is: git clone, you know, somehow get your code on the box with a git clone or whatever VCS you're using, and then docker-compose up. Or I guess maybe the first step would be to install Docker, which is, you know, a sort of GUI install, on Mac and Windows at least. So you install Docker, you clone down your code, and ideally, if you've set a standard in your team, each repository has its own Dockerfile, and then each solution, you know, if you have a web front end and an API and a database back end, that's a three-part, three-tier solution, so that solution would have one compose file.
And so ideally you have that one file in an easy place, probably in the root of one of the repositories, and they just type docker-compose up, it finds that file, and then you wait thirty seconds for everything to magically download and start up. What you're left with, by default, is it starts listing all the logs, combined from all the different containers and colorized for you in a nice pretty way, and all the ports are open, and you can just open up your app on localhost like you're used to, open up your developer environment, and start coding like you used to. But it's all actually running in this tiny VM in the background that you don't even realize is there.
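[Show notes: that whole onboarding paragraph as commands; the repository URL is hypothetical:

    git clone https://example.com/yourteam/yourapp.git
    cd yourapp
    docker-compose up    # wait ~30 seconds, then open the app on localhost

]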
And so, when I show that to a developer nowadays, it usually clicks for them. They usually go, "Wow, that's so much easier than all those Vagrant files and that twelve-page document we used to have for developer setup."
Michaela: It does sound a lot easier. Now, just one small thing here: you know, ColdFusion can run on Windows or Unix or Mac, but if you're using it in Docker, you're basically running the Unix flavor of ColdFusion. Is that right, or...?
Bret: That's correct.
Michaela: Yes.
Bret: On Windows, it gets a little more nuanced, because now we have Windows containers, when you're on Docker for Windows and you're on Windows 10 Pro or Enterprise, because that's their latest generation of tech, right; Microsoft is really doing great innovation there. So let's say you're writing ColdFusion for Windows servers in production: you can now do that development [laughs], you can now operate locally and run the, you know, the ColdFusion EXE, the Windows binary version, and have your code operate against that, because that's what you might be using in production, not a Linux server.
So it's not that Docker is a Linux-specific tool. It was at first; that was the first OS they tried it on. But nowadays we see it running on IBM mainframes; it's running on lots of different platforms. Mac is one of the few left where Apple doesn't really have an answer, so they're not trying to make the Mac kernel run Mac apps in a Docker container [laughs]. Maybe someday; I haven't heard anything, but as far as I know they're not.
So, Windows is kind of the best of both worlds, because on a Mac, unless I run a virtual machine with Windows on it, I can't run a Windows kernel. But on a Windows machine with Docker for Windows, there's a little toggle that gives you the option: do you want to run Linux containers or do you want to run Windows containers? And if you're running ColdFusion code, you can choose which one of those binaries it runs against. Then obviously, when you build your images, you specify the kernel that they come from.
So on Linux it would just be: do you want a Debian distribution or an Ubuntu distribution? Because sometimes on Linux you want that base layer of package manager so that you can install the proper tooling you need for your app, right. But on Windows you might need IIS and other tools to run, so that's what the Windows Dockerfile would use as its base; you would base it on .NET Core or whatever you might need.
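[Show notes: the kernel choice Bret describes is literally the FROM line of the Dockerfile. These base image names are 2018-era examples we've added for illustration, not from the episode:

    FROM debian:stretch                  # a Linux-kernel image with Debian's package manager
    FROM microsoft/windowsservercore     # a Windows-kernel image, for IIS and Windows binaries

(one or the other per Dockerfile, not both).]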
Michaela: And then when you deploy that out to the Amazon cloud, it's going to still be running the Windows kernel, right?
Bret: Right. And on Windows, the supported platform there is Windows Server 2016 or newer. And I think they just announced Windows Server 2019, and they're claiming it's going to be even better with containers. The story with Windows Server 2016 was that around 2014, 2015, Windows 10 had already launched, and Windows 10 didn't come natively, out of the box, with Docker support; it had to be bolted on through later packages or updates.
And so that was actually the first case I've ever seen of Microsoft, in the middle of an operating system release, taking the Windows kernel and adding completely new functionality to it to support Docker. With Windows Server 2016, they did that before they launched the OS, so that it would natively support Docker containers as Windows containers. So you can run a Windows container, and the latest version of Microsoft SQL Server runs natively in a container on a Windows server, without any sort of VM in the middle running Linux or anything. Yes, so it's pretty cool.
Michaela: So, you've mentioned a lot of tools. Can you tell us some of the top tools that we should know about for Docker, both for devs and devops?
Bret: Sure. So you have your Docker-specific tooling, as we've talked about: Docker for Windows, Docker for Mac, and then Docker Compose, which comes with those. It's kind of built in, but it is a separate, optional tool; you don't have to use it, but I highly recommend it.
The next tool there is Docker Machine, which is actually docker-machine on the command line. So if you have Docker for Windows or Docker for Mac installed, you'll also have that tool, and that one is for managing virtual machines. But it doesn't just manage local virtual machines; it allows you to create cloud machines. You can give it your DigitalOcean keys, your Azure or AWS keys, your API keys.
And with a one-line command like docker-machine create vm1, it will go and create those machines, based on the environment variables or options you set on the command line, and then automatically install Docker for you. Now, Docker is really not hard to install, especially on Linux, but this gives you a nice little command line experience so that you can have multiple machines. And one of the things you get natively, out of the box, with Docker is a command-line-and-server separation of powers. When you run the docker command you're actually using the CLI binary; you're not running the actual server. The server runs on whatever system you're pointing at. It's a subtle little nuance that most people don't pick up on when they start to use it, because it just works against your local machine by default. What's actually happening is Docker is running as a service, or daemon, in the background, and that's dockerd.
And so the cool thing about Docker Machine is that it allows you to very easily set up virtual machines, either locally or in the cloud or on other systems, and then, with another command called docker-machine env, you can change your local Docker CLI to speak to that server securely over TLS. What that means is that you can be local, using your docker commands and your docker-compose commands, but you're actually running them against, and pushing the code to, the server it's talking to, and that might be on the Internet or in your data center.
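[Show notes: a sketch of that docker-machine flow with the DigitalOcean driver; the token variable and machine name are placeholders:

    docker-machine create --driver digitalocean \
        --digitalocean-access-token $DO_TOKEN vm1   # create a cloud VM and install Docker on it
    eval $(docker-machine env vm1)                  # point the local Docker CLI at vm1 over TLS
    docker ps                                       # now talks to dockerd on the remote machine

]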
So that's a really neat tool to use. It's not necessarily a production-type tool, but it's really great for dev and test, when you just want to spin up quick little machines and run code on them in a Docker fashion.
Michaela: So as the developer, you've now got a super-fast workflow to spin up containers, you know, anywhere in the cloud that you've paid for?
Bret: Yes, and it's sort of a one-two-three-punch kind of thing. They're all command line driven. There is a GUI that sometimes comes bundled, and sometimes you can download it with Docker; I think it's called Kitematic. I haven't used it in a couple of years; it still gets updates, but it's not really been heavily adopted, so I'm not sure about it, especially for developers. It was really meant almost as an end-user tool: say an end user wanted to run a MySQL database locally for some reason, they could use this GUI to sort of point and click, and it would actually download the MySQL container with Docker and run it in the background.
So, it was a really interesting concept, but I think it really hasn't caught on yet. One thing with Docker is we don't really have an easy way to run GUI-based applications locally on a desktop machine or a laptop, so you can't run Chrome with the browser GUI in Docker without a whole lot of hacking. [Laughs]
So it's not for the faint of heart. But once you get past the Docker tools, I mentioned Swarm earlier, and that's built into Docker; so if you're interested in container orchestration, check out Swarm, it's built into every version of Docker. And then after that, you'll suddenly find that whatever your cloud vendor is, or whatever tooling you already have for your continuous integration, your continuous deployment, maybe your server manager, even VMware if you're involved at all with any of that stuff, all those companies have their own existing Docker solutions. So I usually recommend to teams that, before going and surveying the entire landscape of hundreds of different tools and getting lost in that sea of tools, look at the stuff you already have, the stuff you already know and have invested time in, and see if those vendors don't already have better tools to help solve some of your problems.
And most CI vendors have a lot of Docker documentation to make it easier for you to adopt building and testing Docker containers, so that's great, because none of us wants to go change our CI tooling just because there's a new tool in our pipeline.
There's another really great link that we can add to the show notes; it's called "Awesome Docker," and if you just Google Awesome Docker, it's a list on GitHub, a community-driven link list. It reminds me of Yahoo in the '90s [laughs], if you ever used Yahoo in the '90s, where it was a community-driven, aggregated list of links to other places on the Internet. That's essentially what this is, and it covers everything from training, to local testing tools, to GUIs for managing your servers, to just about anything you can think of in the container ecosystem; they have a link for it there.
And the nice thing is that they use emojis to indicate, this may be a buggy, like, new thing that you might not want to use, or this is a dead project, and they put a skull and crossbones on it so you know. Because on the Internet things live forever; you go to a website and you don't realize that the company folded, and the only way you know is that there's a blog post hidden somewhere in their blog that says, "We're not really doing anything anymore, but we still exist as a website." So that's a really great list I like to recommend to people.
Michaela: That's a great resource; we'll put it in the show notes on the TeraTech site. So, you mentioned cloud vendors have Docker tools. I want to ask, what's your favorite cloud vendor, based on how great their Docker tools are?
Bret: Oh, that's a good one. Well, I think it's early days for all the cloud vendors, so I have favorites for a few specific situations. [Laughs] I'd have to figure out how to pick a universal favorite. I love DigitalOcean for its simplicity. One thing DigitalOcean has related to Docker is that when you go to DigitalOcean, it has a, I can't remember what they call it, but basically when you're choosing what server you want to run, you can choose apps, so you can run, you know, Mura CMS or some other product, and one of the main options is Docker, so it will have Docker pre-installed. So that's great, right? Saves you ten minutes
and having to go look up the tutorial on how to install Docker. So that's nice for DigitalOcean, but I love DigitalOcean mostly because it works really well with Docker Machine out of the box: Docker Machine has a driver for DigitalOcean just like it has a driver for AWS. And for small projects, personal things that I'm doing, or community-driven stuff (I work with Code for America, where we basically take citizens from around our physical area, all get together, and hack on social projects locally, trying to benefit the community), I use DigitalOcean heavily, because everyone understands it, it's super easy to get started, you're not going to get overly overwhelmed, and Docker makes it really easy with the Docker Machine tools there.
So with AWS: AWS came out really early with something they called ECS, the Elastic Container Service. If you're someone who's a tried-and-true AWS fan, and you do CloudFormation templates, you have EC2 server instances, and you use their RDS databases; you know, they love their acronyms over there, even AWS itself is an acronym. So they love the acronyms at Amazon, and they now have three different options for how to deploy Docker, and the original one, which is probably still the easiest one to get started with, is called ECS.
And so if you're someone whose company requires that a tool be compatible with AWS in order to use it, and that's a real thing, right? Teams, especially ops teams that are pretty strict, might say, "You know, this has really got to run easily on Amazon or we're not going to do it." So, look at ECS, because that'll fit right inside the AWS tooling you already have; it works with the AWS command line tools, and instead of the Docker logs it uses the built-in logging of Amazon.
But Microsoft's innovating over there on their container platform too. They have the Azure Container Service, ACS I believe, but I'm not going to use that acronym; it's called Container Service. And the nice thing there is that they actually have a really cool command-line-in-the-cloud feature, which you probably know about if you're someone who's ever used Azure. It's not just for Docker, but you can use it with Docker, and it basically gives you a shell in a browser that you can use on your servers, even PowerShell if you're on Windows servers.
And because the Docker tooling is often a command-line-driven thing, that makes it easier for you to do stuff from anywhere; even if you're not on your machine, you can just log in somewhere else and have that command line in the browser. That's a pretty neat thing, and I'm pretty sure they're the only ones doing it, at least of the major cloud platforms. So I don't know, everyone's got their own little spin on it.
Michaela: Great. I also want to mention CommandBox, which is a ColdFusion tool, open source, that makes it very easy to create new ColdFusion instances in Docker.
Bret: Oh, cool!
Michaela: That's another one. You know, I was talking with the evangelist for CommandBox, Brad Wood, on the podcast a few weeks ago, and I'll link to his episode on that. You know, there are a lot of cool things happening with ColdFusion and Docker.
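[Show notes: as a hedged example of the CommandBox route, Ortus Solutions publishes a CommandBox image on Docker Hub; the exact image name and its /app webroot convention should be checked against their docs before relying on this:

    docker run -p 8080:8080 -v "$PWD":/app ortussolutions/commandbox   # serve the CFML code in the current folder

]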
Bret: Container technology, it's all so exciting. I think it's really pulling in a lot of the energy of, you know, people that are dedicating time to their open source work and focusing on community; you see a lot of that moving towards container tooling, because it's like the Wild Wild West nowadays.
Bret: With all the fun and exciting things happening in all the different ecosystems of programming, and obviously we've got things like drones and all sorts of IoT device stuff, the nice thing is that Docker works really well in those environments too. So it's one of those things where it's hard nowadays to find an ecosystem, or even a use case, where Docker is not going to benefit the use case. I fully expect my car and my refrigerator to be running some form of Docker in the future. DockerCon is an event we have every year; it's one of my favorite conferences of the year, and this year it's in June in San Francisco. In the EU it's in December this year, in Spain.
And one of the best demos I ever saw was when they brought out a military-grade quadcopter drone and flew it on stage, and they updated the operating system with Docker to a new version while it was flying, and then they showed all the performance stats on screen while they were doing it. So, you know, they were trying to make sure we believed it wasn't faked; the key with DockerCon is always that all demos have to be live, and the riskier the demo is, the more likely it's going to get on stage. So they were flying it, and I was kind of hoping they'd fly it over the crowd to up the ante a little bit, but they let it hover in the air and they updated and replaced the OS without even a hiccup.
And they talked about how they now have drones that are in the air so long that they need to be able to patch them, specifically for security, because now we're getting drone hack attempts. And they were talking about Docker allowing them to replace their code in real time while lowering the risk of having failures while the drones are in the middle of the clouds, literal clouds, not the virtual clouds. So, that's a really cool one.
Michaela: That is cool. Yes, I mean, you know, I don't think we have time to go into security, but when I was talking to Neil Creswell from [inaudible] [58:26], he was saying that there are crypto hackers that will try to take over your Docker Swarm and stick their own Bitcoin-generating code into your containers and use up your Amazon credits while they make off with a whole bunch of Bitcoin booty.
Bret: Yes, that's happening everywhere, and it's not just containers. I have a good friend that works at one of the major cloud CI companies, where you can put your code up there and they will process it, right. And if you think about it, those continuous integration solutions on the Internet are basically just big farms of servers doing work, and one of their biggest struggles, she says, is that the hackers aren't trying to hack through the service; they just want to mine.
Michaela: Yes.
Bret: Now they have to detect CPU spikes and other mining activity to block those accounts, because a lot of the services out there give you a free account for thirty days or something. With all these free accounts, thirty days of free mining is fantastic if you're a Bitcoin miner. So yes, I think that's happening all over the landscape.
Docker is probably just making it easier for them once they're able to get in. And I think the news on that was actually that Kubernetes by default doesn't secure its web dashboard, and I keep hearing news about that. Anything on the Internet should have a password. I don't know why we keep doing this, but evidently companies keep doing it, thinking that no one is going to find it.
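One common mitigation, sketched here as a general pattern rather than anything discussed in the episode, is to never publish the dashboard at all and instead tunnel to it through the Kubernetes API server with your own kubeconfig credentials. The exact URL path varies by dashboard version and namespace:

    # Tunnel to the cluster using your local credentials, no public exposure
    kubectl proxy
    # Then browse to the dashboard through the proxy, for example:
    # http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/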
Michaela: Yes, there is no anonymity on the Internet.
Bret: Yes, exactly.
Michaela: So you're a Docker Captain, and you've been using Docker for many years. In Docker years, being five years old is like several lifetimes of human years, I imagine.
Bret: Right.
Michaela: So tell us why you're proud to use Docker?
Bret: Well, it first solved my own problems. I had a startup that ended up failing as a startup, but our tech was awesome.
Which is a classic problem: focusing on tech, not customers. That's actually how I got my first start with Docker; I was using it to solve our own CI and CD problems, and those were rough days, nowhere near as smooth as it is now. And I'm proud now for many more reasons than just that the tech is good. The more I get involved in the community, and I mean the larger container community, not just the Docker community, although especially the Docker community, the more I find it's the most friendly, accepting, diverse group of people I've ever worked with. It has transformed my career as a freelancer with my own company.
It's transformed the way I run my company. The people I engage with and work with on a daily basis are now all container ecosystem people, and it's just great. It's to the point now where going to the conferences is almost like going to family reunions, and these are vendor conferences; I mean, DockerCon is technically a vendor conference. Normally you don't get that at a vendor conference.
Normally you go to community conferences for that sort of intimacy, that focus on the people and not the products, and I continue to get that feeling, whether it's at the Docker meetups I go to or the small one I run here, where I'm trying to create a place where it's easy to learn and it's okay to be new at it, because everyone's pretty new at Docker. We don't have anyone in the community who's some old stodgy person saying, "Oh, I can't believe you're here asking about this, everyone should know it by now."
We don't have any of that, and hopefully we never will. It's a community of people who are excited about the new horizon. I go to events for non-profits, and sometimes events like DockerCon and some of the other community events for containers feel almost like one of those.
Everyone is there with the same purpose: making their little chunk of the world better through this neat technology innovation, trying to make their work lives more enjoyable, meeting good people, and working together to solve problems. I think that now overshadows my excitement about the technology itself. So the reason I'm talking about Docker twelve times in the next three months is not only because I can't say no; it's because I love the community, and at every one of these events I want to see someone I know from a previous event, catch up with them, and see what they're doing. And it's as good an opportunity as any to travel around the world and talk to people.
Michaela: Great. So, it sounds like the Docker community is really alive. So what would it take to make Docker even more alive this year?
Bret: The stretch goal for 2018. Like every new community, especially in open source, it starts out as an open source project where everything's free and wonderful and we're all holding hands singing Kumbaya. Then, because the world is the world we've invented, we've all got to go make money on it, so over the last three or four years there's been this huge land grab: every company has its own spin, its own new startup, its own product. What I think we as a community have to do is not so much do better, because I don't think there's a problem yet, but be very careful not to turn into a vendor conference or vendor community where it's really just pitching a bunch of products and there are more marketing people than technical people.
The great thing about a Docker community conference, or DockerCon, or KubeCon, or a CNCF conference, or a cloud conference from a Linux summit, there are all these different conferences about containers, is that at all the ones I go to, engineers are everywhere. Engineers are the ones talking about the products they're creating. Engineers are the ones implementing the products they're creating. There might be some sales and marketing people there, and maybe some project managers, and not that those people are bad, but the community started as a developer community of people helping people. If we get away from that into "it's really just corporations selling to other corporations and the engineers are just a tool to get there," well, there are lots of conferences like that already.
We don't need more of those; we don't need another community like that. What I'm nervous about is that once San Francisco and the startup community get their hands on it, like they love to, we're going to end up in that world. So I hope a lot of us in the community, the community leaders, dig in and double down on our commitment to the community: leading with people, not product. And focusing on helping people help themselves in their own jobs and on their own teams.
Michaela: Great! So what are you looking forward to at Miracon in a few weeks?
Bret: So that's exciting. I have family in Sacramento, so I've been to Sacramento; I was actually there in the Navy in the '90s. It's a cool city, and I haven't been back since, so I'm glad to be invited. We're going to be covering everything, starting with a Docker 101 workshop for ColdFusion developers. During that workshop we're going to add a little segment at the end on how to specifically use the Mura CMS, helping people get started with a real-world example of an application on a local machine using Compose and Docker, regardless of what operating system they're running.
We're going to start there with the workshop, and then I want to finish up with a talk on production, about the journey most teams need to make to take their Docker containers and get them working on a server, so they get the full lifecycle benefit. We'll go through some example lessons learned, and I actually get technical and show example code. It's not just project talk about all the wonderful project plans; it's more of, "Hey, if you're going to make operating system decisions, this is what you need to consider. If you need to worry about what type of applications you're running, this is what you're going to consider."
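As a flavor of what that local Compose setup might look like, here is a hypothetical docker-compose.yml for a CFML app plus a database. The image names, port, and credentials are illustrative placeholders, not files from the workshop:

    # docker-compose.yml -- illustrative local dev stack, not from the workshop
    version: "3"
    services:
      web:
        image: ortussolutions/commandbox   # assumed CFML server image
        ports:
          - "8080:8080"
        volumes:
          - ./:/app                        # mount app source for live editing
        depends_on:
          - db
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: example     # placeholder credentials only
          MYSQL_DATABASE: myapp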
So if you can make it to Miracon, I'd love to see you there, especially if you listen to this podcast; definitely stop by my session or bug me in the hallway, I'll be hanging out both days. If you can't make it, the Miracon talk isn't going to be available, but I have similar versions of that talk and that workshop that I give elsewhere, because again, this is an open source community: all the talks I give are open source, we keep all the content online, and others share their content with me, so we all feed off each other's best practices and understanding. Reach out to me on Twitter or through the show notes, and I'll make sure you get links to any of that stuff if you can't make it to Miracon.
Michaela: Great. So if people want to find you online, what are the best ways to do that?
Bret: Twitter and my website. I'm Bret Fisher on Twitter, that's one T and no C, so it's bretfisher, and then bretfisher.com, spelled the same way. I have a bunch of resources there, like common problems with Docker. I actually have a Docker AMA there, where you can ask me questions on GitHub as a GitHub issue [laughs] and I will close your issue by hopefully giving a solid answer to your Docker question. I also help out on Stack Overflow and the Docker forums and the like.
Michaela: And you also have some YouTube videos, which we'll put in the show notes, and a Udemy course on Docker, which we'll put in there too.
Bret: Yes, I have a Docker Mastery course and a Docker Swarm course, and as I think I mentioned earlier, we've now got forty-three thousand students, all in less than a year, which I'm extremely excited about. So if you ever wonder whether Docker is at fever pitch: take me, a guy who didn't know how to make a course on the Internet, and a year later forty-three thousand people have paid for it. It's clear to me there's definitely a desire out there for people to try this technology. And if you go to bretfisher.com you'll get $10 coupons; each course would cost you $10 with the best coupon on that website. So thank you.
Michaela: Wow, cool! Well, thanks so much for coming on the CF Alive Podcast today.
Bret: Thank you so much, this was great. I'm glad to be here.