You can read the show notes and listen to the podcast here
Michael: I'm here with Geoff Bowers from Daemon. [If I'm saying that right. He gave me extensive lessons on how to pronounce it.] And we're going to be talking about secrets from the folks who make the official Lucee CFML Docker images. As well as that, we'll look at some ColdFusion DevOps tips, why he thinks serverless is so cool these days, and some of the neat apps he's building with Lucee CFML with his team down there in Australia. We'll also look at tricks they use in migrating legacy Adobe ColdFusion to Lucee, and we'll do a little bit of chat about webDU, and a few other interesting things. So welcome Geoff.
Geoff: Hi there.
Michael: And just in case you don't know, he's CEO of Daemon, which has been around for like 22 years, and he's been doing ColdFusion forever, or at least since version one.
Geoff: Seems like forever. Probably since version… I think it might have been version three and a half, I can't remember.
Michael: In '97, was it three and a half?
Geoff: Nearly three and a half. '97 is sort of the time frame.
Michael: Yeah, and he's also president of LAS, the Lucee Association Switzerland. So he's the… that's where the buck stops, as they say in America.
Geoff: It’s where the buck stops, yeah.
Michael: Yeah, so you guys there make all the official Lucee Docker images. Tell us about that, and why that matters to people listening.
Geoff: Well, I think the first thing to say is in terms of the Docker images, there's a need internally at Daemon, for example, to have a nice image that runs Lucee in the best possible way; the best possible way that we see. And so having built that, we decided to release it as a formal image that other people could download directly from Docker Hub. But there's no reason you have to use the official images.
It's just a good starting point, and there are other images out there. Some people have their own specific images; for example, Ortus have a CommandBox image. But we run with a very… I guess plain or vanilla Tomcat-Lucee style image. We have two images actually. We have one which is just pure Lucee, and another which is a compound image that includes the NGINX web server as well, which is probably the most common image that we use internally. But other people use a bit of a mix.
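To give a sense of how small that starting point is, a project image built on the official image can be little more than this (a minimal sketch; the tag and paths are illustrative rather than definitive):

    # Build on the official Lucee image (Tomcat + Lucee pre-configured);
    # pin whichever release tag you actually need
    FROM lucee/lucee:5.2
    # Copy the application code into the image's web root
    COPY ./app /usr/local/tomcat/webapps/ROOT/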
Michael: So does this help reduce problems in getting set up fast with Lucee?
Geoff: I think so. I mean, the biggest thing with respect to Docker is a bit of a mental shift in the way in which you build your applications. So in the case of Daemon, a brief history of the move towards Docker as a deployment pipeline: we have always been very focused on moving to an environment where we can very readily build servers programmatically rather than having to build them manually. So rather than having some kind of dragon slayer document which tells you all the individual things that you would have to do to get an image of your server back up and running, we would look to do that programmatically. So, write a program which will install the server: everything from the operating system all the way through to the application. And so, that discipline for us started quite a few years ago.
We used to run with Vagrant, and we moved all of our production servers, most of our production services I should say, to Linux. And so we started to standardize exactly how we would build servers. And that was part of an early desire to have what we call an ephemeral server, or a server that only exists for a short period of time, and to treat your servers, as the expression in the industry goes, like cattle rather than like pets. As I mentioned, in the old days in the '90s, you had that kind of classic server sitting underneath the… on the desk. No one really knew what was installed on it, and you just had to kind of groom it, and nurse it back to health on a regular basis, and patch it, etcetera, etcetera. So these days, when we look at operating a server, we literally destroy the server completely and build it from scratch.
So if you can get to that point, it gives you a great deal of confidence and kind of resilience in your deployment pipeline. If anything goes wrong, you're really quite confident to just throw the server away and rebuild it from scratch. In the old days (and when I say old days we're talking three years ago), you would have to use things like Chef, or Ansible, or Salt, or a bunch of these sorts of provisioning programming languages that effectively allow you to code server installations. And you'd use something like Vagrant on the desktop to be able to emulate the server environment in a local development sense. So really, it's a culmination of many different areas that have led to the ability to do this easily.
It used to be only the sort of purview of very large installations like Netflix, or GitHub, or someone of that nature who could spend the time, infrastructure, and manpower to actually produce these sorts of environments. Whereas nowadays, it's possible for anybody, even with nothing more than an enthusiastic blog site, to have quite a sophisticated DevOps pipeline. So the Docker thing, the reason why that plays into all of this, is that even with a scripting environment like Chef, which is a Ruby based environment and the one that we used to use most predominantly in Amazon, it takes about 20 minutes to spin up a new server. So you can imagine, to install everything, you have to wait for the script. [Even though it's automated]
You have to wait for that process to take its course from the very beginning. And if you make a mistake, and you find out about that 20 minutes later, you go, oh. And then you change the script, and you kick that process off again. So even though it's quite rewarding once you have it finished, it can be quite a laborious sort of process. Docker, and containers in general… so containers are a technology that has been around in the Linux community for many years. And Docker really is, I guess, initially a set of tools, and a standardized practice, if you like, for dealing with containers, which made it much more accessible to people who weren't Linux administration [inaudible] [06:56].
Before that, I mean, it really was the purview of just a very limited group of people, and Docker made it a much broader and more accessible environment. And over the last two years, it's been a bit of a gold rush in the Docker environment, where there are just tools, tools, tools, and lots of it, and everyone is jumping on board; Microsoft has likewise jumped on board with Windows containers. And so, there's been a massive amount of investment, and not just in terms of money, but in terms of just sheer will power, mind power, into making Docker a reality. And the reason for that is it makes that ephemeral server technology, that whole scripting-a-server environment, much, much easier. So it's an order of magnitude easier to do all of that in Docker than it used to be in, say, Chef, and Vagrant, and some kind of cloud based solution.
So, Docker makes that concept of a programmatic installation very straightforward, and also gives you the ability to effectively version control your server installation. We're used to the idea of having version control for our applications: branching, doing different things with respect to an application, rolling back if you didn't like the commits you made. That type of thing, while not as sophisticated in Docker, is certainly possible in Docker. So you can have your own images that make incremental changes to the way in which your server is installed. And that is exactly how the Lucee Docker images work. For example, the Lucee Docker images are all based on the official Tomcat image, and so the Tomcat community does all the work to make the best possible Lucee… Sorry, the best possible Linux based Tomcat image.
And then on top of that, we layer all of the installation pieces for Lucee. So we don't try to challenge Tomcat's expertise in that particular area. They choose the underlying Linux distribution, which I think, from memory, is a Debian based installation, and then OpenJDK, and then Tomcat on top of that. And there are several different options that you have in that Tomcat environment. So, eventually, we will start to offer the variations in the Lucee environment that are offered in the underlying Tomcat environment. And specifically, anybody who is into the Docker side will probably be looking at Alpine Linux based installations in the near future. Alpine Linux is a special type of Linux that's been developed for this sort of environment, in that it's extremely lean. It's about five megabytes or something in size.
Michael: Five megabytes!
Geoff: Yeah, very, very small, and then you add to that incrementally the packages and things that you need. So, it's designed for this type of distribution and installation.
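The layering Geoff describes can be pictured as a much-simplified Dockerfile (not the actual official build; the versions, URL and paths here are placeholders):

    # The Tomcat community's layer: Debian, OpenJDK, Tomcat
    FROM tomcat:8.5-jre8
    # The Lucee layer on top: drop the Lucee JAR into Tomcat's lib...
    ADD https://example.org/downloads/lucee.jar /usr/local/tomcat/lib/
    # ...and wire Lucee's servlet into Tomcat's configuration
    COPY web.xml /usr/local/tomcat/conf/web.xml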
Michael: The last time I installed an operating system from a company in Seattle, it was gigabytes; it just eats up stuff.
Geoff: Well, it is a very big difference. For example, if you wanted to do a Windows Docker container, which is sort of possible with Windows containers, it's a little bit behind the eight ball compared to Linux containers. I don't know what the comparable base OS options are. Effectively, Docker shares and utilizes the underlying operating system, so if it sees anything that it needs already, it doesn't bother to download it again; it just uses that layer. I don't know at what point the Windows container kind of kicks in, so Windows containers, I'm not sure what size they are. But certainly in the Linux environment, it is very interesting in terms of your ability to make these changes. If I want to do a Lucee upgrade, for example, it's a couple of text changes and then redeploy, and it's done. It can be done in seconds rather than minutes.
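In practice that upgrade is editing the image tag and rebuilding; something like this (the commands are standard Docker CLI, but the names and tags are placeholders):

    # In the Dockerfile, bump the base image, e.g.
    #   FROM lucee/lucee:5.2.1  ->  FROM lucee/lucee:5.2.2
    docker build -t registry.example.com/myapp:release-42 .
    docker push registry.example.com/myapp:release-42
    # On the Docker host: pull the new image and swap the container over
    docker pull registry.example.com/myapp:release-42
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:release-42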
Really, the time it takes is the time your server hosting Docker takes to download the new image and slot it in like a cartridge, and then spin up the container; it takes a few microseconds to get that running. So it's pretty cool. And I know this is a bit of a ramble, but the whole purpose of doing these things, the whole purpose of moving to an ephemeral server-like infrastructure, is back to this concept of resilience in your deployment process. You wanna have absolute confidence that when you're deploying something, it's not going to break. So we often think about that in terms of the code changes we make. Make a couple of code changes, add a new feature; you wanna test that and make sure that it's not gonna break anything when you deploy it.
But probably, I think, a greater risk for deployment problems and the like is when we make changes to the server environment that surrounds that code. So we're making configuration changes, or we need to add an extra module, or an extension, or something of that nature, which is harder to replicate, for a start, in your local development environment, and can have kind of far reaching implications in the server environment when you go to make that deployment and that change. And what's worse is that it's very difficult, typically, in an old sort of legacy infrastructure environment, to roll that back. I logged on remotely to my Windows box, and I clicked a few checkboxes with the mouse. Everything seemed to be working okay, but in the middle of the night, all the alarms are going off, the servers are dead.
And nobody really knows, or can even remember, what you actually did at that point. You know, is it this checkbox I clicked, or something else? What the hell happened? Did I forget to replicate it across the cluster, or whatever? Whereas when it's programmatically done, it is what it is, and if you roll back, you're rolling back to exactly the installation, right down to the nuts and bolts of the operating system, that you had previously. So that ability to roll forward and roll backwards gives you the confidence to move towards what is the true goal for Daemon at least: a sense of continuous delivery. In other words, the ability for a developer to have an idea, make a change, and have that move into the production environment in the shortest possible time. That doesn't always mean that you make changes and you immediately deploy them.
Not that it's instant, that you're going to do it that day. It's just that you have the option to do that, not only in terms of time frame, but in terms of confidence. Even as a new developer, I can make that change, it will go through the pipeline, and it'll end up in the production environment. And if I really did do something catastrophic, it can be very easily rolled back, back to the status quo of a moment earlier. Just having that knowledge in the background gives you a huge amount of confidence when you look to roll out new changes, or complex changes, or infrastructure changes. And that's the first kind of stepping stone towards an automated infrastructure, and why Docker is so popular.
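With an orchestrator in place, that rollback is itself a one-liner. In Docker's swarm mode, for instance (a sketch; the service name and tag are placeholders):

    # Redeploy the previous image tag explicitly...
    docker service update --image registry.example.com/myapp:release-41 myapp
    # ...or let swarm revert the service to its previous spec
    docker service update --rollback myapp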
The next step… and I think I talked a little bit about this at CF.Objective() in two talks. One was what we do at Daemon in terms of actually running a development environment, and the second was how we go about deploying that in the real world, out into a production environment. So often, people will play with Docker, have that running locally with a little Lucee server spun up, and it all seems like magic. But then when they go to deploy it into production, there's a whole other set of headaches, and they consider that they're perhaps doing it wrong. They're logging on to the box and pulling the Docker images manually. So the next step is this whole orchestration piece, where you can click a few buttons or do it programmatically.
And instantly, you get from one container to 16 containers, all running in a cluster, all spread across multiple data centers. The sort of stuff that you kind of dream of… Even five years ago this was just an impossibility. The thought of being able to deploy something, and then scale it across multiple data centers with just a couple of command changes, was really, again, only available to the very largest engineering teams. And now, it's literally available to anybody. Even with just a simple blog, you could easily do a clustered, load balanced solution, and have that scaling up and down. We do that with pretty much all of our clients now, even the very smallest client, because we have a base image in Docker which always offers that kind of standard installation.
And then, we have a standard set of libraries that sit on top of that for our applications, typically. We work with a framework in ColdFusion called FarCry, which has fallen from favor, I think, in recent times, compared with the Box style frameworks that are very popular these days. But nevertheless, that's the one that we enjoy using. And it means that when we go to deploy something, even though it might be, let's say, a simple content management environment, it's immediately clustered. It starts in a clustered state. Even our simplest clients we typically run as two containers, just because that's a standard for us, and those two containers will automatically find themselves going across a large cluster into different data centers, in a high availability mode. And again, this is only possible because of the advances that we've had in things like cloud infrastructure.
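That one-to-sixteen jump really can be a single command once the orchestration is in place; in Docker's swarm mode, for instance (service name illustrative):

    # Scale the service out; the scheduler spreads the replicas
    # across whatever nodes (and data centers) the cluster spans
    docker service scale myapp=16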
I mean, we're predominantly in Amazon at the moment. Let's say you have a database; typically, we use MySQL, or MS SQL. With a MySQL database, if you want to install that, manage it, then cluster it, and then have it split across two data centers, and handle all of that kind of infrastructure, if you like, that's a lot of work; that's a dedicated systems administrator just to plan and pull that stuff off. And it's a fairly unique specialty; a MySQL clustering specialty. Whereas in Amazon, I can go: give me a MySQL instance that's this size. Yes, I'd like to have MySQL [inaudible] [14:38] data centers… and it's done. Click the checkbox [and it is literally a checkbox]. Just click, and you go: I'd like some read replicas, because it's going to be the Olympics or something and it needs to scale.
So click read replica times X, and they just spring up magically in the [inaudible]; no idea how it happens, where it goes, it just works. And it's that type of infrastructure on demand that really makes these overall DevOps pipelines an absolute reality for small companies. Of course, we know a little bit more about MySQL than I've implied there, but you really don't need to know much more than that. And you can select the number of backups, how far those snapshots will go back in time. All of that administrative pain, the maintenance, the upgrades; it's all managed by somebody else, and at a price point that is frankly just ridiculous to someone like me who used to look at deploying physical servers in data centers ten years ago.
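Those console checkboxes have command line equivalents too; roughly along these lines (an abridged AWS CLI sketch; identifiers, sizes and credentials are placeholders):

    # A managed MySQL instance, replicated across data centers (Multi-AZ)
    aws rds create-db-instance --db-instance-identifier mydb \
        --engine mysql --db-instance-class db.m4.large \
        --allocated-storage 100 --multi-az \
        --master-username admin --master-user-password 'PLACEHOLDER'
    # "Read replicas times X" ahead of an Olympics-style burst
    aws rds create-db-instance-read-replica \
        --db-instance-identifier mydb-replica-1 \
        --source-db-instance-identifier mydb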
This is two minutes and you have something up and running. Whereas before, it was two weeks; you'd be lucky if Dell, or somebody, actually shipped the physical computers into the cages in that sort of time frame. And then you'd have a man on the ground wiring it all up. It's just completely different; thousands and thousands of dollars of difference in terms of price. So even if you're not going to Docker and you're kind of more on the server image side, like the way the guys are looking at a Lucee gold image for AWS, an AWS AMI, it's a very big change from managing your own infrastructure. And then you can take the next step, which is containerization. The reason that's a little bit different is that we don't have a server which is dedicated to a client. For smaller clients, we have a cluster we call… I think it's currently called Cloud Eight.
It's almost cloud nine, but not quite. So it's Cloud Eight. It's out there. It's got about four servers live across three data centers, and when we do a deployment, the orchestration tools just choose where those containers will fit into that cluster; whatever has the most space or is kind of the least used resource-wise. Again, these are all decisions we don't have to make anymore. It's automated to a large degree. And larger clients will have a dedicated cluster. It's cool, and then you think about it: you're paying for the servers by the minute. This is another thing that's also quite interesting. And so, we have staging environments which programmatically start up in the morning at 8:00 and then shut down at 8:00 p.m., and that's just automatic.
You can override that and say, no, I'd like the staging cluster to stay up a little bit longer. But we can just cut 50 percent of your bill by automating that process, because, again, you don't really care. It's only going to take ten minutes to start the servers and another couple of minutes to deploy all the containers across them. But that's happening while I'm walking to work, getting a cup of coffee; all the staging servers are coming up, and I don't even know about it. I get a couple of messages come through on the Slack channel saying everything went according to plan, or, on the rare occasion, things didn't quite go according to plan, and you can sort that out.
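That 8-to-8 schedule can be as simple as a pair of cron entries driving the cloud APIs (a sketch, not Daemon's actual tooling; the group name is a placeholder, and it assumes the group's minimum size is 0):

    # Bring the staging cluster up at 08:00 on weekdays...
    0 8 * * 1-5 aws autoscaling update-auto-scaling-group --auto-scaling-group-name staging --desired-capacity 2
    # ...and shut it down again at 20:00
    0 20 * * 1-5 aws autoscaling update-auto-scaling-group --auto-scaling-group-name staging --desired-capacity 0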
So yeah, it makes life a lot easier, especially for companies like Daemon which are running a lot of different applications, supporting a lot of different applications that are quite varied. Normally, we think about DevOps pipelines as being very intimate affairs. The application works in a certain way, and to update it, you have to sacrifice a chicken and FTP something into place. There's a whole kind of ritual that you often go through with larger scale applications to make a successful deployment happen. And what we've managed to do with the Docker environment is to effectively sandbox all of that kind of intimate stuff into a container which is standardized, and sits in a common universal format.
Whether it's on your development environment, on the staging servers, or whether it's sitting in production. And that makes a very, very big difference and an impact on the way in which we develop day to day. And I should say, it's not just Lucee; this goes for the other languages that we work with. They all work in the same way. We might have to, in some instances unfortunately, manage a PHP solution; there are quite a few of those that we look after. And we design DevOps pipelines for that. So we make those changes; obviously it's a slightly different environment for development, but the actual rollout of the application…
The way it's routed, managed, clustered is extremely similar. And that means that everyone on the development team gets a chance to start to get a handle on that sysops process that was otherwise a kind of mystical black art in the old days. That's why you had systems administrators, and those guys did really cool stuff. Effectively, this encapsulates a lot of the work that they used to have to do into something that can be programmed, and so leaves them to do more interesting things, like serverless development on top, which is something we could move on to at some point in this talk.
Michael: Yes, we will be talking about that in a few minutes. So this is amazing stuff, Geoff. Thanks for sharing all these secrets about ColdFusion DevOps using Docker containers. And we'll link to the episode I did with Patrick Quinn, who's the product manager for Lucee; we talked extensively about AWS in that episode.
Geoff: Yeah, they’re doing great work. They're doing great work, that’s nice.
Michael: They are. So, the other thing that occurs to me, and maybe it was implicit in something you said… I remember in the old days, you were always patching the server, the operating system, or putting a hot fix on ColdFusion. You don't have to do that anymore, because you just download… You just tell it to use a newer version, pull a newer version into the image.
Geoff: Exactly right. And in fact, one of the things… when you look at patching, it's not just patching the CFML engine. It's also patching Tomcat, or patching the underlying operating system. All of those things are quite arduous. I mean, if you're not used to patching the operating system, for example, that's something that can very easily get out of date, or Tomcat. Or look at the number of legacy apps that we see sort of thrashing around on something like Tomcat 7 because people are just too terrified to upgrade everything to the latest version of Tomcat. That all kind of vanishes in a Docker environment. You can test all of that locally and in dev and stage. You can have that running well in advance of having to go to deployment. So it's not like throwing a switch and it's out there.
But when you actually do the upgrade, you're right. It's really just a simple text change of about [murmuring] [24:55] half a dozen characters. It'll be from Lucee this version to Lucee the new version, and that will be it. And if you just use our own internal Lucee image, every time there's an official release from Lucee, we typically upgrade everything at that point. So there'll be an upgrade to the operating system, OpenJDK, Tomcat, and then Lucee on top. And sometimes, if there's a lengthy period of time between releases of Lucee, we'll actually release an interim image which has just the operating system and Tomcat upgrades.
Another cool thing about Linux environments in general, but also the Docker environment, is that all of the major Docker registries, like Docker Hub, or Quay, which is actually the registry that we use internally at the moment, have security scanning options which will scan your images and tell you all of the known vulnerabilities that exist in that particular image. It's a bit terrifying, because there are always vulnerabilities. You go, "What!" But it's literally those things that are known vulnerabilities for which there is no fix. That might just be, you know, if somebody had root level access they could do something, or if they had user access, sort of a command line onto that server, they could [inaudible] [26:10] their privileges or…
What I mean is, there are a lot of issues that might show themselves in a security scan like that which wouldn't necessarily be relevant to a web app. If somebody has got command line access to your application, then there are probably a few other things that have gone wrong with your web application before you're worrying about that particular exploit. But it is good, because at least you know what the potential problems are. Whereas currently, most people just have a kind of willful blindness to what issues might potentially exist in the whole stack beneath the little tiny, literally tiny, layer which represents the CFML engine on the top, and your own application, which is obviously a very large vector for attack. The underlying operating system is generally not looked at hard enough, perhaps.
Michael: And not just the patching but how it’s configured as well.
Geoff: Yes, so configuration is automated as well. You make all those changes, you decide the sorts of things that you would like to have implemented and updated, and then, when you roll it out, you've got that opportunity to say: ah, it's working perfectly. And if something goes wrong, you instantly roll back to the previous configuration you had [inaudible] [27:24]. So, I think that's important. When you see the Docker image, it's like an appliance that's got all of your pieces sealed into it. It's not like you roll it out and then FTP something into place. It's already built with all the full code in it and everything it needs to run from the start. So you're literally shifting from your dev environment to stage environment to production environment a kind of hermetically sealed version of your app at that moment in time, and you deploy it. And it works really well; love it.
But I have to admit there is a barrier to entry to get into Docker, and you have to have an application that doesn't write files locally. There are a few common issues with converting to Docker. Writing files locally is an important one, because imagine if you write files locally and then the image is destroyed; it's lost those files. So you need to be able to maintain the state of your application outside the container. If you do write files, it's to S3 or an image service or something; you know, a CDN or some kind of shared media resource; little things like that (there's a sketch of the idea below). Once you get over that sort of hurdle… Getting into a DevOps pipeline: we do this for people, let's say we're commissioned to help people establish a CFML DevOps pipeline [we do this for other languages too]. Essentially, we normally have an engagement of about five days to implement something.
So that gives you a base kind of core pipeline which you can work with straight away, and then you probably need a little bit longer to get the team into the kind of mental mindset to actually use it, because you have to develop in Docker, build in Docker, deploy in Docker. When you get past that, you know, the sort of advantages I've described are immediately available to you. But yeah, a bit of a plug [inaudible] [29:26]: that's something that we do, helping people move into that particular area.
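On the "don't write files locally" point: in Lucee specifically, one low-friction option is its S3 virtual filesystem, so a local write becomes a write to shared storage that survives container replacement (a sketch; the bucket and keys are placeholders, and it assumes Lucee's S3 extension is in place):

    // A file generated by the app
    reportData = "example report contents";
    // Instead of writing into the container's own filesystem:
    // fileWrite( expandPath( "./uploads/report.txt" ), reportData );
    // ...write to S3, which every container in the cluster can see
    fileWrite( "s3://MYACCESSKEY:MYSECRET@my-bucket/uploads/report.txt", reportData );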
Michael: Sounds like a really useful thing there. So, you mentioned serverless earlier on; tell us a little bit about that.
Geoff: So you have the… If you think of it in terms of… there was a beautiful image somebody (whose name escapes me now) put together, which had: in the '90s we had spaghetti code. And then we moved through into the noughties, and we had lasagna-like code, where everything was put in layers and [crosstalk] [29:58]
Michael: Oh, this was… this was a rad.
Geoff: No, no, someone… I have this in my presentation as well. I'm sure others do because it's a great image.
Michael: Yeah.
Geoff: There's some guy, he runs a startup [crosstalk] I can't remember. But anyway, the lasagna thing is the n-tier thing: the web tier, your app tier, your database tier; that type of thing. And then you move to a ravioli based (stay with me) server architecture, where everything's nice little packets of pasta with goodness inside, and these are the kind of microservices architectures, or Docker-like architectures. So everything is a nice dish; the dish is made up of lots of little packets of things working together, okay. So serverless is the next step beyond that, where rather than having to have a microservice that you run and deploy, you have something else that you simply invoke as a service. So in other words, this is strictly speaking not serverless development.
But we look at it in a similar way. If you went to Amazon and you said, "I would like to have…" I don't know, the classic example is what we're working on at the moment: I want to do video transcoding. So we have some very large installations. There's a big university client here where we effectively run a YouTube for the entire university. They had some problems with the Patriot Act, and a bunch of other things, which meant that things like YouTube, or other external services that run outside of Australia, were not possible for them. So they made a decision several years ago to build their own media services platform, and we were the company that put it together for them. And initially, we had a whole cluster of servers running FFmpeg and doing transcoding.
So we're running the servers, we're putting the cluster out there; you've got a message queue that tells the cluster to pick up a new video for transcoding on a transcode server. But you're running and maintaining a [inaudible] [31:53] structure. We're moving that to Amazon now, where Amazon has the transcoding service. You don't even know what it does. Pop the file somewhere in S3, it picks it up, transcodes it, and the new file appears somewhere else. It's a serverless process, effectively. You're not managing a server or doing anything; you're just calling upon the service that runs and processes that. And you can take that slightly further, where you can now write applications where it just performs a function, and that function might be, in the case of what we do…
It might be to see that a new video has arrived for transcoding; to update a, let's say, dashboard or content management interface, if you like, to say: yup, this video is here; register the video, pick up some metadata about the video, and then move the video file into an area to be transcoded. And then when the transcoding finishes, we have another function which just wakes up and goes, "Hello, the video is finished, everything seems to have gone okay. I will now update the database to say this video is available, this is where it is, you can continue." And so in that particular environment, there are several different components. One is a Lucee application (interestingly enough) that runs all of the web interfaces and provides the opportunity for a user to interact with all of that interesting infrastructure sitting behind the scenes.
And then, there are a series of what we call Lambda functions, which is Amazon's serverless technology. They'll shoot me if I get this wrong, but I think they're running Node at the moment; so one of the languages that we work in, JavaScript; Node. They're not like apps. They're just very specific functions. It just does that function when an event fires. Either we put something onto a message queue, or Amazon has its own event infrastructure, like file-created or something like that, and that immediately fires the function. And Amazon's responsible for spinning up that Node environment, making sure that that particular function is in place, executing it, and then shutting it down. And essentially, you're just charged by, effectively, the usage that you have rather than anything else.
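A function in that style is tiny. A minimal sketch of an S3-triggered Node handler (the event shape is AWS's standard S3 notification; everything else is a placeholder):

    // Fires when a file lands in the watched S3 bucket
    exports.handler = (event, context, callback) => {
        event.Records.forEach((record) => {
            const bucket = record.s3.bucket.name;
            const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
            // The real function would update the app's database or call back
            // into the Lucee app to register the file; here we just log it.
            console.log("New object: s3://" + bucket + "/" + key);
        });
        callback(null, "ok");
    };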
You're not charged for the server to sit there permanently. You're just charged for how many times it's actually activated. And so, we're moving all of that video transcoding from an FFmpeg farm all the way through to a combination of Docker based Lucee apps for managing everything, serverless functions, and Amazon's transcoding service for providing all of the transcoding. And they do quite a bit; I think they have about 16 terabytes of video data up there at the moment, and other things: images, files, audio. We manage all of their media assets in these installations.
Michael: Wow! That's a lot of data. Is it really expensive to use the Amazon transcoding?
Geoff: It's not, I mean, compared to running… Imagine you've got a server farm that you're running permanently. You can predict to a certain degree the amount of traffic, or volume, you're likely to experience. So during exam time, or during assessment time; many of the courses at the university now accept a video submission as an assessable kind of item, and so you have to have the ability to receive that video as part of that student's assessment criteria and then process it. And you know that everybody waits until the very last second to submit their particular assessment, so you end up with a kind of mad rush at midnight where everything has to be uploaded and processed. Whereas the rest of term time might be a very small trickle for that particular subject area, for example.
So to a certain degree, you can predict the amount of scalability that you might require. We tend to just have it all sit in a queue, and if the queue gets too big, we'll automatically scale the servers. But in the case of the Amazon infrastructure, you only pay for what you use. So if there's a huge burst, it'll suddenly do this kind of massive parallel transcoding of all the video, and then it'll just go back to nothing. And so you're relying upon, I guess, the overall kind of resource consumption of that Amazon data center, which is typically much larger than our own set of servers that we have in place. I mean, there can be some bottlenecks, but as I say, everything sits in queues and then gets processed. And we're in the middle of that rollout.
So it looks to us as though Amazon's much more efficient at managing the scaling than we are; it takes us time to respond to the queue size. So, we're really impressed. I mean, that's something that we've done for a couple of clients already. We did it for the Olympics; we do all the public facing infrastructure for the Australian Olympic team. We moved them out of Brightcove and pushed them into Amazon. We've got the university here, which is all around FFmpeg, whose massive installation we're currently moving. And we're looking at a couple of other clients in the pipeline to make that video conversion, because it is much less expensive if you only pay for what you use. Imagine having 16 terabytes of data; I just pulled that number out there as a kind of example. If you had to run that on your own network storage, that's expensive.
And if suddenly you need to add more disks to that right away, that becomes very expensive. Whereas S3 in Amazon is highly durable, very, very highly available, and comparatively inexpensive. If you've got all of your assets under five gigabytes, it's very inexpensive. It's compelling. I mean, we spent all these years trying to get away from vendor lock in: getting everything to Linux, and trying to containerize everything so that we wouldn't have to be dependent on any particular vendor. But Amazon offers such great options that it's difficult not to… just use this service, it's really nice. But if you want to move somewhere else, that service wouldn't exist; you'd have to replace it. So to a certain degree there's a bit of lock in, but it's very inexpensive compared to doing it on your own.
Michael: I know I was talking with Brian Klaas about all the AWS stuff you can do. And apparently, there are over 40 different services they offer now, and transcoding is only one of those. And he was also telling me about the Amazon conference. (And I'm forgetting the name of the conference.) But anyway, they have 1,000 sessions at this conference [crosstalk] [39:13]
Geoff: Even here in Australia, we're only tiny, and we have like… I think the last one had thousands of people attending this conference here in Sydney. I mean, it is huge. And the other providers, like Azure [inaudible] and DigitalOcean and others, are pretty good too, but they just don't have the breadth of services. I mean, when we look at it, we do a deployment, we pick up a bunch of EC2 servers for a Docker environment. We don't even do memcached anymore; we have an ElastiCache installation, an AWS service that spins up across multiple data centers. We can use Redis [inaudible] caching if we want to in that particular environment. We use the CloudSearch product now, which is a kind of… What is it? I think it is still Lucene based. I can't remember, but it's a great service.
We index everything through that service and have the search results come back. There are heaps of things that we do now that are just part of our standard deployments. It's so easy, and so we can suddenly give a client this extremely high end, high availability, absolutely top grade infrastructure solution. And it doesn't cost the huge amount of consulting effort that it used to in the old days to actually just get it running, let alone managing and supporting that infrastructure the moment it's deployed. So yeah, it has really revolutionized the way in which we can address applications, even the simplest applications. We've got a couple of startups we're involved with here in Australia. And you think about… you shouldn't be thinking about scalability initially.
It's hard to, because you're trying desperately on a kind of shoestring budget to get things done.
And so, trying to do all this extra work to make it hugely scalable is very often not part of people's thinking. It's like: when we get to scale, we'll have to reinvest, get rid of that technical debt, and scale. But for us, we don't even have to worry about that. Our standard deployment, even for a simple solution, is scalable. And so it gives us, again, this kind of confidence: okay, we can do that, we do this, do this, and it's automatically scalable. We'd have to go out of our way to make it not scalable. I mean, everything we're using is something that can be deployed in the cluster. And so we have to make a conscious decision to use something that won't be, and sometimes, you have to.
There are some services and things; for example, with Box of Books, a startup that we're involved with here in Australia selling e-books to secondary schools, we have an Adobe Content Server that runs all of the digital rights management, and so that's a single point of failure, if you like. But we don't mind; I mean, it's pretty robust, I could be wrong. It's not something we'll look to cluster until such time as we have a huge amount of volume that might require it. But everything else is fully distributed, high availability, all clustered. And we're in that kind of startup phase. So, [inaudible] is really, really good.
Michael: So, I am guessing you've migrated a lot of legacy Adobe ColdFusion apps to Lucee CFML over the years.
Geoff: Yes, that would be true. I mean, I think the vast majority of our stuff now is Lucee based. We do still have a number of Adobe ColdFusion clients. They tend to be in a situation where it's less expensive for them to just keep upgrading, or to not upgrade at all (which is even worse), than to make a migration. And making a migration is not just a case of getting the code to run on Lucee; that is actually probably the easier part, depending on the type of application you're running. There's a lot more involved. Whoever's involved in that infrastructure, the systems administrators, everybody in that environment, needs to be comfortable with the move to the newer version of Tomcat running Lucee, and with what might be the problems and complications.
Better the devil you know than the devil you don't; that type of issue. But the actual code changes tend to be relatively minor, depending on the sorts of things you do. So for example, if you're a very heavy PDF user; PDF forms and the like; Lucee doesn't have a huge amount of coverage in that particular area of CFML. And so, you might need to look at wiring in some sort of external service, or potentially rewriting those particular areas of your application. Likewise the CF reporting module, if that's a big part of what you do. Then there are all of the UI elements: CFWINDOW, the crazy crack-smoking CFMENU, things like that. If you are unfortunately in a situation where you're bound up in those sorts of tags, which we don't support…
As a principle, we don't support the UI elements in Lucee, so your migration might be a little bit harder than normal. But generally speaking, it's pretty straightforward. I mean, the coverage for CFML compatibility is very high. It's a core priority for us, and if we see incompatibilities reported, outside of things like CFWINDOW, etcetera, we treat those with a high degree of priority. And yeah, we have obviously most of our apps… that's not true. Let's say many of our apps have migrated from Adobe ColdFusion. But we are still building lots of new apps; lots of greenfield projects that are started with Lucee. We're not just in that kind of maintaining-legacy mode, reducing licenses. We actually use the product because we find it useful as a tool for building new stuff.
Michael: That's great to hear that you create lots of new ColdFusion apps. That's a good story.
Geoff: [Crosstalk] [45:29] known to many people. I mean, there's been a kind of… I hesitate to say it, a sort of stench attached to CFML in recent times; this old crack smoking language that no one would use. Whereas it is obviously a very modern language which has many, if not all, of the features of other languages, and also a kind of unique approach which I think is very well suited to web app development, as opposed to development in general. And maybe this is a bit of, you know, people coming off the high horse of language purism. Certainly, in terms of those paying for solutions, these days we tend not to get the question: what's the technology platform? If we're asked what the technology platform is, we literally say it's Linux running Tomcat with an application framework on top.
We don't even get to the point of mentioning the specific language, or framework, that is being deployed. From the client's point of view, it's more: okay, if we're maintaining our infrastructure, what would we be managing? And it's Linux, Tomcat; that's really what they see themselves managing. The application we're managing at our level. And they're more interested in the solution and the result than they are in the underlying technology. Whereas ten years ago, I guess at the height of the dot com boom, people were very, you know: we're going all .NET, we're going all Java. They were very focused on specific language decisions, and these days, that seems to be less important.
I mean, one of the things that always surprises me is that every year we go through an assessment at Daemon of whether or not we will continue with CFML as the preferred language platform for what we're building. It's a serious question: should we do that, what other tooling exists out there, what should we be considering? And we don't just work with CFML. We work in a range of different languages: PHP, Go, Python, JavaScript. Not all of our apps, but many of the peripheral things that we do involve those sorts of languages. And so for us, it's not really a decision about only sticking with one kind of language in a purist sense. It's: Lucee works well for us, it's a good utility, it handles web apps extremely well, and we have a layer of libraries within which we are very productive.
And while that continues to be the case, we will continue to enjoy using Lucee. But in those areas where it's not appropriate; so for example, writing a serverless function to process video transcoding off an Amazon queue; we will choose a language which is more specifically suited to that particular environment. And I know many shops would just build a CFC in ColdFusion and cobble something together on the side of that kind of monolithic server that they've got in place. But when you break that down and start to look at a more microservices architecture, you're really just saying: well, this piece here, put that in a little container or a little serverless function, and that will do its job really well. And we'll choose a language specifically for doing that job really well.
And that's generally the way we like to look at it, rather than seeing it as the one where, as they say, you've got a hammer and you use it to bang every… everything, or whatever it is. It's a tool that works extremely well for web app development. And certainly for us, we're comfortable and familiar with it, and it still continues to be very successful for us. I mean, interestingly enough, when we were looking again earlier this year: Ruby, what's happened to Ruby on Rails, for God's sake? It's got a stench like… other languages out there. Isn't it strange?
I mean, from being the darling of the Web development community to now being in dependency hell, and Ruby bundles are a disaster, and all of this. Every language has its own particular challenges, and I've seen beautiful things written in Ruby, and not so beautiful things. But it is just this kind of flavor of the month business, isn't it? I don't know what will be the next flavor of the month; probably it's Node, Node's flavor of the month. I wonder how long that will last. But again, you've got to use the tools that work for you. It's more about the solution than anything else, and the maintainability of it.
Michael: So, tell us about webDU. That was a conference down in Australia. Yes, a great conference.
Geoff: [Crosstalk] close to my heart for [inaudible] [50:14] So, webDU was something born in the dotcom crash, back in kind of the 2002, 2003 sort of time frame. In the boom, we used to send people overseas to conferences, and I was fortunate enough to speak at the very first Allaire conference in Boston. We were part of that whole sort of vibe around the community, and sharing your technology and your knowledge; that type of thing. And when the crash came, that became very difficult. It was literally a boom-bust sort of cycle, where people are driving a Ferrari one day and the next day they're on the street. Everything's gone; literally everything. And it was a much tougher time.
And so, one of the things we felt was that if we couldn't take the team to a conference overseas, we would run our own conference locally. And as it so happens, Sydney seems to be a fairly popular destination for people, and it was surprisingly easy for us to encourage really top notch speakers to come all the way out to Sydney on a kind of holiday into which they would incorporate the conference. And so we did everything in our power to treat speakers like the rock stars they are, and to kind of build a conference around that sort of notion of community. And it was never a kind of for profit thing. It was always… [sometimes it made quite a bit of money]. But we nearly always spent it all. So, we would have outrageous parties, and it was a bit ridiculous at times.
But we had a huge amount of fun, and I think certainly everybody that I talk to who reminisces about webDU… It was a different time, and it was a different sort of conference. It really was about having a fantastic exchange of knowledge, information, networking, and generally a kind of good time, in a period that was less wonderful in reality for people; you know, when you had to go back to the office and fight your way through that crash time.
So that was the kind of vision of it. It ran for ten years, and it didn't kind of peter out really; we had two young children at the time, one just newborn. We just couldn't continue. It's a huge amount of effort to run a conference like that; behind the scenes there's a lot of… If you've got 20, 30 speakers, you've got to chase them all up, provision them with everything they need, and then you've got all the sponsors. To do a conference well takes a lot of time.
Michael: Yeah.
Geoff: And so…
Michael: It’s a full time job.
Geoff: It's a full time job, and there were people running it full time behind the scenes. My wife used to do a lot of that kind of heavy lifting for us, plus a couple of people in my company. And we all had young kids, funnily enough, at the same sort of time. So now Isabella, my oldest, is just about to turn eight, James is six, and so five years have gone by, and we kind of reminisce: maybe we'll do another webDU sometime. I don't know. It certainly was a wonderful time, and I would love to see another conference spring up that has the same kind of ethos that webDU had. And the location; it's such a good… location, location, location, as they say. I mean, it's much easier to get people to travel to Sydney as a speaker than it is, say, to travel…
I don't know… to a destination that is less wonderful, or less kind of fantasyland. So anyway, that worked well for us. I still travel to conferences overseas on occasion, as recently as CF.Objective() in Washington; Washington D.C. I should say, which is obviously different from Washington State. We don't think of Washington State out here; Washington to us is just the capital. But that was good, that was interesting. For me, conferences are less about attending the sessions; though some sessions I do like to attend to see where we're at. You know, are we behind the eight ball, are we ahead of what people are doing? It's good to get a feel for the way things kind of sit, and to get a broader perspective on the different ideas that are out there.
But I like the meeting-people part of it. So for me, CF.Objective() was great, because I got to catch up with a lot of the Lucee Association Switzerland management team who were there, with [inaudible] [54:59], and Patrick. And the guys from Ortus: Brad, and Luis, and… It's good to be able to meet people face to face, even if it's only briefly, just to say hello and to continue that connection. Funnily enough, I also met up with a company in Washington that does FarCry development, so that was very interesting.
And I had a bunch of clients that I visited; I went up to Farmington, Connecticut and a few other kind of destinations, just catching up with the U.S. based clients that we have. So again, for me it's more about meeting people than it is necessarily the sort of technology that the people were discussing. But I do like to hear about the technology. I gave two talks on Docker, as I mentioned, which more and more people are getting into, so yeah.
Michael: The wave of the future.
Geoff: It's certainly a particular… it is a path to the future. I don't know if it's the only one, but it certainly works very well for us at the moment.
Michael: So, I've got a bit more of a personal question here, Geoff, if that's okay.
Geoff: Hm-mmm, by all means.
Michael: Your company's name, Damian… it's a somewhat unusual name. How did you come up with that?
Geoff: Well okay, it's really Daemon.
Michael: Daemon, okay.
Geoff: Like encyclopaedia, or haemoglobin; it's not spelt like that in America, but it is spelt like that in the U.K., which is where my sort of formal education was from. Originally, we tried to name the company [inaudible] [56:33] Limited; Bobbt, BOBBT, but that didn't work out. Then I wanted [inaudible] Limited, and that was taken as well. So both of those crazy names were taken. Then… I have a thing for Greek mythology and the like, and there was always the daemon of Socrates: no one man could have so much knowledge; he had to have some kind of divine intervention, some kind of assistance. So a daemon is less the kind of Christian devil-like thing, and more a deified hero, or someone from the underworld who might communicate with you.
And also, in the same twist, it's… as you're probably well aware, the name of a process that listens at a network port; an attendant. So you have a mail daemon, and an FTP daemon, a web daemon, etcetera. So it fit well with my own mythological, classical leanings and the technical side of things. And certainly, back in '95 when we started Daemon, this was less popular a name than you might imagine. It didn't come up in any conflict in terms of web search engines, or anything else. These days, yeah, it's probably a little bit more common than we'd like. Back in '95, interestingly enough, one of the biggest battles I had was trying to convince people that the Internet was the thing.
So I'd be out there competing against… because I was a stockbroker, or really a derivatives dealer, prior to setting up an Internet company. I'd always been into programming, but when I went to university there was no such thing as a computer science degree. I think the closest was Maths and Computation. But I did chemistry, and a lot of computing in trying to work out and do Nuclear Magnetic Resonance, X-ray crystallography; things like that are very computer heavy. So getting into the Internet side of things, it was all… it's the Microsoft Network, it's [inaudible] [58:38], you know, it's the Bloomberg network. I had a fascinating meeting with Mike Bloomberg, of all people, when I was a stockbroker. They had just installed Bloomberg terminals throughout the entire office, and Bloomberg themselves had bought a floor in the building that we were in.
And so, we were all invited upstairs for cocktails, and of course Mike Bloomberg was there, and I was… well, I must have been, I don't know, 20, 22, 23; something like that. And I was in this animated discussion with Mike Bloomberg about how the Internet was going to be a wonderful thing, and he was telling me it's dead on arrival, and it's all about commercial networks like the Bloomberg network, etcetera. It just goes to show even the greats get it wrong from time to time; an interesting little anecdote, that.
Michael: Wow! And I know on Twitter you're @modius. Is that another classics reference?
Geoff: Well, sort of. A modius is like a hat, or a measurement of wheat, from Roman times. But really, the reason for it is that when I first got my university account at Oxford, for access to the Internet, you had to have some bizarre six characters plus a number. And I think I was going to end up with something like PH15832Z, or something ridiculous. And I was like, for the love of God, I must be able to have something else.
They said if you can give us something now, we'll see how it goes. You know, things like Gandalf and Skywalker and every possible name you can imagine had already been taken. And so modius was just something that sprang to mind. It was short, it just fit the character limit of the smallest possible one, and that was my account. That was back in '89, '90; that sort of timeframe. And it's been my account forever more, so whenever there's a new service, I go and get my place in there; I'm modius before somebody else is. But so [crosstalk]
Michael: I was always curious. So, let's just wrap up with what I've been asking every guest on the podcast, which is: why are you proud to use CFML?
Geoff: Proud to use it… well, I would never put it in those terms. For us, it's a tool. It's a tool that we think works particularly well for what we need. And so, while that's the case, we'll continue to use it, and use it in anger, if you like. But I don't think we have that emotive kind of attachment to it. I just think it's a good tool, and while it remains a good tool, we'll continue to use it.
Michael: Cool. And then, we alluded earlier to the fact that some people have said ColdFusion has been dying for a very long time, yet it's still alive and doing well. And as you've been saying, you're doing new sites with it, you've got lots of clients, it's thriving. So, what would it take to make ColdFusion even more alive this year?
Geoff: I think… less negativity, maybe. I mean, it always surprises me that there are several members of our community that have threatened to leave over and over again, but still keep coming back just to put the shiv in, as it were; give it a bit of a twist just to see if they can truly make the whole environment dead. I would just say a little less negativity, and if people want to continue working with ColdFusion because they enjoy it, and they find it useful, well, power to them. They should be encouraged to use it, and the community should be more about helping each other rather than denigrating various aspects of it. I often find that difficult to rationalize these days, though I'm a little bit better at handling that sort of thing. But certainly, one of the things that people could do would be to support those endeavors that actually add value to their community.
So, one obvious example is Lucee. There are all sorts of ways that you can support Lucee, whether that's doing documentation, or just helping out on the forum, all the way through to actually making some kind of financial contribution; because of the type of organization that we are: Lucee Association Switzerland. It's worth having a quick thirty second thing about that. The [inaudible] [1:03:17] thing was a commercial entity, and, I think to be fair to them, the shareholders in that particular environment had a falling out, which led to a kind of paralysis of the code base. And that's why there was a need to kind of fork that code base and to make a new open source community.
From the very beginning, the focus for Lucee Association Switzerland… it is a specific structure called a Swiss Verein, which is like an association, or a union; it could be a chess club, could literally be a trade union, could be a little law firm. But all of these things sit in a not for profit kind of bubble, which means that we have to spend everything that we get on specific things that are relevant to the membership; which in this case is the development of Lucee. So we wanted to have a custodial entity that was absolutely not for profit, totally focused on ensuring the community was healthy and promoting the ongoing development of the underlying language, and we have that.
And at the end of the day, it's one of those situations where any financial contribution actually has a significant impact on our capacity to address issues, build new features, improve compatibility with Adobe ColdFusion, and just improve our community overall. And the standing members of LAS: there's nothing particularly special about them except for the fact that they contribute financially, and also contribute in many instances with resources and stuff, in terms of making the overall Lucee community a success. I should say there are several members now that actually just make a financial contribution. They said, "Look, we want the community to be healthy; we don't have the time to contribute in the sense of a true open source code contribution.
But we're happy to see that the money that we might otherwise have spent on an Adobe license can go towards improving the overall engine compatibility and health of that community." And we are at a time where any contribution at all has a significant impact, and people shouldn't underestimate how much their contribution could mean to the community as a whole. So, I'll just put that out there; obviously, with the president hat on and everything else.
It is something that we believed in strongly when it was first floated, and we contribute not only our financial kind of stipend to the membership, but also significantly in terms of resources: our representation on the management board, managing the official Lucee images, contributions to the community, and so on. I think it's healthy. And it makes the developers happy; you know, they like contributing to open source. We give them the time that they need, not just for Lucee, but for other endeavors. If you're going to use those products, giving back even in a small way is always good karma, I think.
Michael: Yeah, I would say that, and everyone can give back in some way. If you can't contribute to the code, you can do some of the docs, or you can help spread the word to other people.
Geoff: That's right; you can just say nice things. Just say: really enjoyed that, or thank you for that. That has [crosstalk] [1:06:39]
Michael: Or even what Brad Wood has been doing, which is: if there's some third party thing that says, "Hey, we work with PHP and Ruby," getting them to say, "Yeah, we work with ColdFusion too."
Geoff: Brad does amazing stuff. I have to say, Brad does amazing stuff. He's a tremendous advocate for CFML in general, and Lucee in particular. But I think the whole Ortus team are actually tremendous advocates in that area. So yeah, it's great that they make that contribution. It certainly takes a lot of time; a lot of time, a lot of effort.
Michael: Yeah, it does. But it's worthwhile.
Geoff: It's definitely worthwhile.
Michael: Great! Well, thanks so much for being on the podcast, Geoff. If people want to find you online, how can they do that?
Geoff: You can get me on… I think the only social media I'm on these days is Twitter, but I regularly check that. So that's [inaudible] [1:07:36] @modius. You can always check out what Daemon is up to on the Daemon website, and we have a kind of blog site called labs.daemon.com.au, on which I'm always writing. I'm also equally regular on the Lucee dev forum, dev.lucee.org. You can catch me online easily enough. And if you're keen to do something with Docker and you want to engage Daemon, always feel free to contact us through the website, or [email protected]. Thanks.
Michael: Fabulous!