You can listen to the podcast and read the show notes here
Michael Smith: Welcome back to the show. I'm here with Jon Clausen. I hope I'm pronouncing that right, Jon.
Jon Clausen: Clausen but close enough.
Michael Smith: Clausen, all right. And he's the president of Silo Webworks and part of the Ortus team, and he's going to be speaking at Into the Box, where we're going to talk about portable CFML with cloud deployments, microservices, and REST. He's actually giving two talks, and he's also giving a whole-day workshop, so we're going to talk about all that in this episode. We're going to start off by looking at Platform as a Service: how you can save DevOps headaches with smart cloud deployments, and the different options there are for container-based PaaS solutions. We'll also look at how you can use microservices to take a monolithic legacy app and cut it up into microservices so that it is more maintainable, portable, and scalable, along with some clever ways that Jon has for overcoming resource-intensive sections of code in an app so they can scale out separately from the rest of the app.
Then also, we're going to look at REST architecture, and why that works so well and why it has to be stateless. So welcome, Jon.
Jon Clausen: Thank you.
Michael Smith: So let's come back to the beginning of that. You're talking about PaaS, which stands for Platform as a Service. Why would someone want to use that, or what exactly is it?
Jon Clausen: Well, Platform as a Service is basically implemented in different ways across different providers. You've got your for-pay proprietary providers such as AWS cloud services, Google Cloud services, and Heroku. You also have open source options available to you as well. So you have, for example, Dokku, which is basically an open source version of Heroku. You have Kubernetes, which came out of Google, is continuing to evolve, and a lot of people are really adopting it. You've got front ends for Kubernetes as well that will allow you to do buildpacks; there's a product called Deis that works with Kubernetes that way. And you also have Docker Swarm, which is Docker's own native implementation. What a platform as a service does is abstract away the details of the container layers and allow for easy deployment, and sometimes scheduling, management, and orchestration of containers against one or more servers.
So what that effectively allows you to do is spend less time dealing with all of the ins and outs of writing Dockerfiles and the DevOps portions of how those are going to be scaled up on a machine-by-machine basis. In most cases it gives you a single entry point to push your updates, and then the platform as a service takes those and automatically deploys them across the scaled instances that you've specified in your configuration.
The platform as a service also helps because all of them allow you to inject secrets into your containers as they spool up: things like database passwords and all those other things that at one point we had to keep in the code, just because that was the only way we could reliably deploy them. Your platform as a service becomes the holder of those secrets, and then simply injects them as environment variables in the containers, so that the containers can be configured, individually or as a whole, to use those secrets to connect up to the things they need to connect to.
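As a concrete sketch of that pattern, an application can read those injected environment variables at startup instead of hard-coding credentials. This is a minimal, hypothetical Application.cfc; the variable names (DB_HOST, DB_USER, DB_PASSWORD) and the datasource struct are illustrative assumptions, not from the episode, and the exact datasource keys vary between CFML engines.

```cfml
// Application.cfc -- hedged sketch: reads the secrets the PaaS injected as
// environment variables rather than keeping them in the code.
component {
    this.name = "portableApp";

    // Illustrative helper; the env var names are assumptions for this example.
    function env( required string key, string defaultValue = "" ){
        var value = createObject( "java", "java.lang.System" ).getenv( arguments.key );
        return isNull( value ) ? arguments.defaultValue : value;
    }

    this.datasources[ "main" ] = {
        driver   : "MySQL",
        host     : env( "DB_HOST", "localhost" ),
        database : env( "DB_NAME", "myapp" ),
        username : env( "DB_USER" ),
        password : env( "DB_PASSWORD" )
    };
}
```

With this shape, the same image runs unchanged on a laptop, a staging swarm, or production; only the injected environment differs.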
So the platform as a service model is really valuable for developers, first of all because it simplifies the whole process of deployment and continuous integration but it also provides you a very easy way to extract the whole container infrastructure into something that is digestible and requires fewer touch points in your code, in your application, and from you as a developer in configuring things.
That's kind of an overview of what PaaS does, and my workshop is going to give you an overview of all of those services and what they do, and also give some real world examples of deployments on specific ones. We'll be using Kubernetes, Docker Swarm, and Dokku, which are three models that are different enough that we can show how they work and what the differences are between how they handle things. I'm looking forward to that; I think it will be an exciting workshop session.
Michael Smith: Yeah, it sounds powerful to be able to do that. So for people who aren't deploying ColdFusion in the cloud, and are not even using a container, what exactly is a container? Is it a virtual server, or is it something more than that?
Jon Clausen: Well, it's easy to think about them like virtual machines or virtual servers, but what containers effectively do is provide a self-contained and isolated entity that utilizes the host operating system's resources to run specific processes. So they are like virtual machines, but they're not, because they're sharing the resources of the host. Docker is certainly something that everyone is familiar with. Those Docker containers run within a Docker machine, and that Docker machine's environment could be the whole host, or it could be a virtual machine inside the host.
They're infinitely nestable, and they're connectable via configured virtual networks so that only specific containers are allowed to talk to each other. From a system resource usage standpoint they're very effective, because you can isolate the amount of resources available to the container machine and to the containers, but on the other hand you have the ability to connect those containers together: database servers, Memcached, Redis, whatever you're looking for. You can connect all those pieces together to deploy, very easily, one single application with all its dependencies.
What it does is it prevents you from having to configure those at the service layer, or excuse me, operating system layer and it makes everything portable from that point onward. Containers are becoming very popular mainly because they signify portability, but they also require a bit of a paradigm shift in the way that you think about your applications and how they're run and deployed, and how they're accessed because they're a Russian nesting doll and they can be a really deep Russian nesting doll that goes a long way down with a lot of little parts inside.
It is a paradigm shift for developers, especially the CFML developers that have been dealing with these big servers and trying to throw big hardware at application problems, because it forces you as a developer to think in terms of segments of your architecture as opposed to the big monolith of your application.
Michael Smith: So when you're using the word portable here, are you talking about having your deployment on your own laptop and then porting it out onto the cloud, or onto other servers, or to other team members?
Jon Clausen: Yeah, absolutely.
So let's say, just to give an example of workflow as a developer, I'll use Dokku, which is like an open source version of Heroku.
Dokku has an interface that works basically as a git repository. If I have a cluster of three servers that my application is deployed to, the process for deploying to all those servers automatically is as simple as git push dokku master, or whatever branch I'm working on.
From a continuous integration standpoint it's really easy, because I just write my pipeline file and I loop through all of the instances that I want to deploy to. Git adds them all as remotes connected to that Dokku install and just pushes to them all. So each push goes to each individual server, and all of those kick off their deployment process, they build, and they scale according to the settings.
One example: I've got one app that deploys across three different nodes in the cluster, behind a load balancer. When the continuous integration build, the pipeline job, kicks off after I make a push to the master branch of the repository, it deploys to all three servers automatically, because Dokku has abstracted away that whole container strategy. Then, because I've set it up so that there are three containers serving that application on each of those servers, it doesn't just deploy one instance; it deploys three instances on three servers, so I now have nine instances serving traffic.
I can throw big hardware at it and I can have nine, ten, eleven instances; Docker Swarm is tested regularly at a thousand to ten thousand instances running. I can scale at the instance level and I can scale at the server level, and it makes it very portable, because if I have a problem with one server, I spool up a new machine, add my configuration secrets, start deploying to it, and switch the load balancer over to the new server.
So I'm not dependent on the hardware. If I have a hardware problem, or a networking problem, or something that is going to take me more time than necessary to diagnose, I simply just move it, and I can do it very quickly; within a period of ten minutes I can have a new server up and serving traffic with a load balancer pointing at it. That makes the whole DevOps problem that people who have been doing CFML development for a while know well just go away.
Let's say, for example, I'm using the CommandBox Docker container on Docker Swarm. I can self-contain my configuration, I can use secrets at the instance level. I can do all sorts of things that automatically configure that container. Say I'm using ColdFusion 2016: when I pull down that service, it's already patched, and my configuration environment variables automatically inject all the settings I want in there. My data sources, my administrator passwords, caching, debugging, security, sandboxing, all that stuff is automatically injected in the process of building up that container, so I don't have to patch anything. If ColdFusion 2016 has been patched, I just restart the container; it pulls the newest version of the server files if they aren't up to date and applies the configuration I've specified to them.
From a portability standpoint that is what I'm talking about. I'm no longer bound to hardware, I am looking at my applications as very portable things that can be deployed on many, many different platforms without too much difference in configuration between them.
Michael Smith: So this not only speeds up deployment, but it also documents what has to be deployed and all the configuration, so you can replicate it out without having to bring in a SysAdmin, without having to mess around for a week …
Jon Clausen: Exactly.
Michael Smith: Getting a new server up.
Which sounds wonderful. What about scaling? If you've got an app that has extra demand at a certain time of year, or it just gets a popular viral post and suddenly it's seeing a lot more demand, does this help with scaling?
Jon Clausen: It can. Some of the PaaS platforms have a scheduler built in; with some of them you have to automate the scheduling yourself. Kubernetes has a great scheduler built into it. Docker Swarm has scheduling built in that you can configure. Those are examples of very powerful systems. Let's say, for example, I've got one client that runs a big chocolate shop, and they sell lots of chocolate, and it's Easter, their prime time. Everybody's buying their Easter chocolates. They need a lot of resources then, so from March 1 to, I don't know what day Easter falls on this year, but the middle of April, I'm going to want more resources dedicated to that, and I can schedule those to go up on a date range or on a time frame.
So if the peak times that my application is being used are from 9 a.m. to 5 p.m. in my time zone, because whatever my application serves is based on that time zone, I can spool up additional instances to handle traffic during that time frame, and at the end of it, at 5 p.m. or 5:10 or whatever I specify, those instances are brought back down to the baseline of maybe one per server.
Scaling is really something that you can decide based on what you want to do. You can take a look at it from there.
Michael Smith: What about tying that into monitoring? If you have some monitoring software like FusionReactor or something else that tells you how much load your server is getting, can it automatically start spinning up extra instances?
Jon Clausen: Yes, and as a matter of fact, one of the nice things you can do with FusionReactor is deploy it with CommandBox and connect different nodes up to it. So FusionReactor can know what's going on with the different nodes in your cluster. They can register themselves and connect up.
You can have very effective server monitoring with FusionReactor. You've also got the built-in monitoring that happens because all of those processes are being managed by the platform as a service, the PaaS: when containers go down or become unresponsive, they're simply restarted or brought back online. And you can set scaling rules, for example to scale up by one if resource usage hits a certain threshold, or to bring one down if all of my servers are under 20% usage, and that's specific to whatever type of resource you want to monitor. But yes, you can do that.
Michael Smith: That sounds really powerful, and it takes away a big headache people have with their apps that you'd otherwise have to plan out ahead of time. It doesn't take away the need to optimize the code; if something's running slow and it's spinning up tens of servers, you've still got a problem to deal with, but at least you didn't crash the site while you're dealing with it.
Jon Clausen: Exactly.
Michael Smith: So very exciting. Let's move on. You've got a second session that you're teaching called Bringing Legacy Apps Back to Life with ColdBox Microservices, which sounds intriguing and somewhat related to this. So maybe for people who aren't using microservices, what is a microservice?
Jon Clausen: Well, microservices architecture, at its most simplified definition, is simply a functional piece of software dedicated to a specific purpose.
A microservice in the sense of a CFML application would be simply a service, whether deployed with the application or, in many cases better yet, separately from it, that handles the specific functionality of one component of that application. Most CFML apps, especially legacy CFML apps, start with a business objective and develop over time. What ultimately happens with procedural apps, especially older ones, is that over time they develop into these huge monoliths that have tens of thousands of lines of code and many points of duplication. So as those evolve and mature, we're forced to throw resources at them to make sure that the little parts of the application that are resource intensive can run, and the rest of the application can run as well.
Creating a microservice basically means taking a component of that application, which from a procedural standpoint is usually a pain point, maybe a resource-intensive pain point, and breaking it out into a smaller component that is delegated just the responsibility of handling that particular task.
What you'll find when you develop microservices, and when you convert old legacy applications over to a microservices architecture, is that you use fewer resources cumulatively, because the amount of resources required to run the smaller service is much lower than the amount required to run it inside the monolithic app, simply because we no longer need the fail-safe buffer that keeps a resource-hungry piece from bringing the whole application down.
So you might need 256 MB for your microservice, but that same functionality, when deployed in your legacy monolithic app, might require 1 GB just to be safe. Take that gigabyte: I can deploy one microservice with my small component of functionality at 256 MB, and if I need more resources allocated to it, going back to containers again, I can spool up another container and now I've got 512 MB dedicated to it across two containers, versus the monolithic app that needs the full gigabyte. If that monolithic app has to be scaled too, the savings grow even more.
Legacy applications, and specifically CFML procedural applications, are ideal candidates to convert over to a microservices architecture, because so often there's so much duplication of code, and there are so many pain points where one little process, if you're not careful, can throttle the application when there's concurrency in those areas, and that's easy to have happen. Once you identify where those touchpoints for the functionality are in the code, you simply break them out, you develop your microservice, and then you change the endpoints in the code to talk to the microservice instead of doing it all internally.
When we're talking about legacy applications, this is a very viable pattern for eliminating some of those pain points. And as you develop and deploy each microservice over time, your development workflow doesn't have to change that much. A ColdBox module, for example, can work within a monolithic ColdBox application, but it can also be deployed separately as an individual module with its own endpoint and URL. So it gives you a lot of flexibility to deploy your application in a variety of different ways.
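To make the "change the endpoints over" step concrete, here is a minimal, hypothetical sketch of a monolith-side gateway that delegates a formerly internal call to the extracted microservice over HTTP. The service name, URL, route, and response shape are all illustrative assumptions, not anything from the episode.

```cfml
// PricingGateway.cfc -- hypothetical sketch: the monolith now calls the
// extracted pricing microservice instead of running that logic internally.
component singleton {

    // Illustrative endpoint; in practice this would come from configuration,
    // e.g. an environment variable injected by the PaaS.
    variables.serviceUrl = "http://pricing-service:8080/api/v1/quotes";

    function getQuote( required string sku ){
        cfhttp( url = "#variables.serviceUrl#/#arguments.sku#", method = "GET", result = "local.httpResult" );
        if ( local.httpResult.status_code != 200 ) {
            throw( type = "PricingServiceException", message = "Pricing service returned #local.httpResult.status_code#" );
        }
        // The microservice returns its marshaled JSON representation.
        return deserializeJSON( local.httpResult.fileContent );
    }
}
```

The rest of the monolith keeps calling getQuote() exactly as before; only this one gateway knows the work now happens in a separately scaled container.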
Michael Smith: It's almost like a metaphor for this would be the monolithic app is it's like you've got a whole football team, and when you want to scale the team you have to duplicate the team and get a whole new football field, whereas with these microservices you just duplicate the individual players that are used the most. So maybe you have one player that gets used a lot but other players aren't so you don't need to duplicate the whole football pitch, or the whole team.
Jon Clausen: Absolutely.
Michael Smith: You just scale up the individual resources that are used a lot.
Jon Clausen: That's a great analogy.
Michael Smith: So you mentioned using ColdBox for this. You can do microservices in many different architectures, and ColdBox is only one of the ways of doing that. Why ColdBox for you?
Jon Clausen: I think there are a couple of things. First of all, microservices aren't tied to a specific implementation; you can use whatever the best tool at hand is. What I have found, developing in a variety of different languages, is that ColdBox and CFML give me the most rapid time from concept to deployment, and they give me a variety of options, and more and more portability with regard to those options, because I have the power of being able to leverage all sorts of Java classes to do what I need to do; there are tons of those out there. Also, the modularity of ColdBox means that all I have to do is commit one module to my repository and a configuration file, and with a well-written box.json file I can deploy that application with one install command anywhere I want.
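For context, box.json is the package descriptor CommandBox reads for that one-command install. The following is a minimal illustrative sketch; the package name, versions, and script are made up for the example.

```json
{
    "name" : "pricing-service",
    "version" : "1.0.0",
    "dependencies" : {
        "coldbox" : "^6.0.0"
    },
    "scripts" : {
        "start" : "server start"
    }
}
```

With something like this committed alongside the module, a single install command can pull the package and its dependencies into any environment.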
It's tremendously powerful in the sense that the modularity of ColdBox and the universality of the code across a variety of platforms make it very easy for me to develop a small service, but also to build a service into an application and then, if necessary, break that service out at a later date. That's really the power of modularity. So often what we end up with in these monoliths is code that over time grows and grows and becomes super intimidating to even do anything with. You're afraid that if you touch that one line in the one piece of shared code, you're going to blow up a bunch of other things.
Modularity prevents that from happening, because you can build modules, and you can pin modules if you want to, so that your code depends on a specific version. ColdBox, because it's built from the ground up with that modularity in mind, makes development extremely rapid. It makes the whole process of routing URLs, handling routes, dealing with system configuration, all of that, super easy. So I don't need to spend any time worrying about that in development, and I don't have to spend a whole bunch of time building the architecture around it. I simply have the architecture, and I have the patterns and the conventions already there for me. I build within those, and it means that I can deploy things that are bug-free and scalable, much faster than someone working in a variety of other languages.
There are, like I said, a lot of people that have very justifiable beefs with what CFML was and what they see it having been throughout its history, but the reality is that it's never been more portable than it is now. It's never been easier to rapidly develop, scaffold, and test applications than it is today, and ColdBox is a big part of that. I was pretty much ready to get out of the CFML game a few years ago, more than a few now. Then I picked up an application that required me to use CFML, and I looked for the best candidate, the best framework, to convert that legacy application over, and ColdBox won. That renewed my excitement about CFML development and about being able to work in this language and build relevant applications for the future.
Michael Smith: I think you've answered my next question, Jon. You must have had a predictive cache running: I was going to ask why you're proud to use ColdFusion.
Jon Clausen: Yeah, and that really sums it up. When it comes down to it and I'm making an architectural decision about an application, CFML gives me the best opportunity to develop the most efficient, bug-free, and scalable application, with the caveat on efficiency being that it does run on top of the Java virtual machine and you've got the resources that that requires. But there is so much power in being able to leverage the underlying engine in addition to the expressiveness and the power of the language itself.
Having done development in other languages, you just don't have the same convenience of all of those built in methods and then you add ColdBox on top of it and you've got all these other conventions and super tight methods you can leverage. With other languages, you find yourself reinventing the wheel over and over and over again to solve problems that were already solved in CFML many many years ago.
For me it's: let's find a language that gives me the best toolkit to start out with, the biggest toolbox to work with. If I have all of those tools readily available and conveniently assembled, it makes it so much easier for me to quickly build and deploy powerful, bug-free applications. So that's why I'm excited about CFML and continue to be. I think a lot of times the CFML development community can be its own worst enemy, because we don't spend enough time asking, having done it this way for so many years, how can we do things differently? Or we're stuck in the Java development model and we want to do things the way Java does them, and we totally neglect the power of our loosely typed language and rapid application development, and how powerful it is for abstracting and scaffolding applications very quickly.
Those issues are out there, and there are certainly challenges as we go forward, but I think CFML has never been more portable, and it's never required fewer resources to run. For example, the ColdBox CMS, ContentBox: I can run a ContentBox instance very comfortably, day in and day out, on 256 MB of memory. When we were doing the monolithic stuff, when was the last time we had CFML deployments comfortable on that small of a heap size? It's become much more portable, and it's much more realistic to deploy smaller and smaller services on commodity hardware, which just wasn't really an option before.
Michael Smith: I was talking to Brad Wood the other day and he said he deployed a CFML app to a Raspberry Pi computer, and I don't think they have a lot of resources on them. They only cost a few bucks.
Jon Clausen: No, his application, his blog, runs on that Raspberry Pi, and he's actually tested it pretty hard and done a lot of load testing on it. It's very doable.
Michael Smith: He even had a separate Raspberry Pi computer wired up to some LEDs on a hat he was wearing at a conference.
Jon Clausen: Yes.
Michael Smith: So amazing what you can do there. I think ColdBox and PaaS and all of the other things help us be more modern in our ColdFusion, and they help bring ColdFusion back to its full aliveness that maybe it lost a few years ago. I just want to ask you, what would it take to have ColdFusion be even more alive this year?
Jon Clausen: I think that's a good question. I think what we have to do is talk it up. We have to stop being scared of bringing up CFML in the larger IT communities. Everything is going to have its haters, every language is going to have its haters, but at the end of the day, most developers, no matter what language they use, want to get their job done as easily and as pain-free as possible. Day in and day out, I will take CFML for that. That's been my preference, and like I said, I've forayed out into other languages and spent enough time with them to know that if I'm going to have the option to pick the language I want to develop in, then CFML is going to be right up there for ninety percent of the things. Now, with CommandBox and all of the container-based strategies, it's mind-boggling how much easier it is than it was even ten years ago.
I've been a CFML developer now for fifteen, sixteen years.
Michael Smith: Cool. I know also, as well as teaching two sessions you've got a whole one day workshop on REST. Let's talk a little bit about that before we wrap up here.
What exactly is REST for people who aren't doing it?
Jon Clausen: REST stands for Representational State Transfer, and it is, more and more, the most prevalent API architectural pattern that we use. Let's use the acronym to walk through what it entails. Representational effectively means that your returned objects represent other objects within your data model. In most cases, that means you're dealing with an active entity pattern in your REST application. When you return the object, you're returning the entity, or the representation of that object in the way you marshaled it as a data object, whether that be XML or JSON. We often think about it as JSON, but the data format and how you deliver the data isn't defined by REST; most often it's just JSON. So that's the representational part.
The stateless part means that every request requires authentication. It means that we don't set cookies and we don't keep session variables, and the benefit of that is that it makes your API portable: it can be deployed anywhere and isn't dependent on browsers, on users, or on the capabilities of the client. So that's where statelessness comes in. And the representation is a marshaled one; it's not the direct representation you would get if you were running a query, it's a marshaled representation of how you will produce and ultimately consume that object.
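As an illustration of that per-request authentication, a ColdBox-style preHandler can verify credentials on every call instead of relying on a session. This is a hedged sketch: the header name, the tokenService, and the response methods follow common ColdBox REST conventions, but treat the specifics as assumptions.

```cfml
// Inside a REST handler -- hypothetical sketch of stateless authentication:
// every request must prove itself; nothing is remembered between requests.
function preHandler( event, rc, prc, action, eventArguments ){
    var token = event.getHTTPHeader( "Authorization", "" );
    if ( !len( token ) || !tokenService.isValid( token ) ) {
        event.getResponse()
             .setError( true )
             .setStatusCode( 401 )
             .addMessage( "Unauthorized: a valid token must accompany every request" );
        event.noExecution();
    }
}
```

Because no cookie or session is consulted, any instance behind the load balancer can serve the next request, which is exactly what makes the API portable and horizontally scalable.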
The transfer pattern that is most often used is HTTP, but REST services can also be consumed in a variety of different ways. In many cases we're starting to see binary applications that have their own RESTful service built in, where internal calls are being made and those RESTful representations are being delivered. Most often, though, it's HTTP, and because of the way REST has developed, it makes full use of the HTTP protocol: the status codes, what they mean, and how we deliver responses to the client saying what we've done, or where the error was, or what kind of error it was. RESTful services are basically built around this very expressive, very verbose protocol that up until now we really haven't had much dialogue about taking advantage of. None of what REST does, as far as the HTTP protocol is concerned, is anything new; it's been around many, many years. We're just leveraging it to deliver that representational state to many, many clients who can consume it for different purposes.
That works just as well for internal as it does for external consumption.
Michael Smith: Is this another way to get duplication out of legacy code, by rewriting it with a REST architecture?
Jon Clausen: It absolutely is. RESTful design is one of the three pieces that I'm speaking on at Into the Box this year, and they're really interrelated: they're all addressing the criticism and concern of portability. And RESTful services are an ideal way to implement a microservice, for example, as we've already talked about.
It's also a way that you can implement a data model within your application that can be consumed by many different layers, including the browser and the client side, but also within the application itself, because if I develop a RESTful service, all I have to do within my application to retrieve that marshaled data is know what my RESTful service needs for parameters, and then simply run the event in my API that marshals those parameters into, in ColdBox language, my request context.
Yes, it's absolutely a way to develop microservices and to convert legacy applications over, because in many cases, like I said, you're spending all of this time marshaling data in duplicated ways. Think about all the variations in legacy applications of the same query with different columns being selected, or maybe a SQL function in there to marshal columns in a particular way or format. RESTful services are designed to handle that: they allow a consistent representation with very little duplication of code, because they are the single endpoint that we use to gather this marshaled data.
Michael Smith: So what about if you have different versions of the API?
Jon Clausen: That's a great question. One of the things RESTful services allow you to do is version them very easily. There are many different patterns for that, but your API evolves as your application evolves. One of the core concepts of REST design is never breaking backward compatibility, or if you do, being very clear about when that happens. Most of the time we're not breaking backward compatibility, so if I release version one of my API, once that version is released it is never going to change. I then move to version two and communicate the new endpoints; consumers can start talking to version two when I deploy and release it, but I'm not breaking backward compatibility for any of the consumers, because API version one is always going to be available.
ColdBox modularity makes that super easy to do, because I can very easily version modules. I can set my endpoints up, and almost always when I'm building a RESTful service I'm building the versioning into it right from the start, rather than going back and adding it later. So very rarely will I have a route that's just /api; it's going to be /api/v1/entity-name, as opposed to just having an /api route.
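As a rough sketch of that approach, versioned routes in a ColdBox `config/Router.cfc` might look like the following. This is illustrative only: the entity, handler names, and folder layout are hypothetical, and the DSL shown assumes ColdBox 5+.

```cfml
component {

    function configure() {
        // Version 1 is frozen once released; existing consumers keep working.
        route( "/api/v1/users/:id?" ).to( "api.v1.Users.index" );

        // New consumers start talking to version 2 when it ships;
        // v1 stays available alongside it.
        route( "/api/v2/users/:id?" ).to( "api.v2.Users.index" );

        // Conventions-based default route for everything else
        route( ":handler/:action?" ).end();
    }

}
```

Because each version lives at its own route (and, in Jon's setup, typically in its own module), releasing v2 never disturbs consumers still pointed at v1.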
Michael Smith: The other thing about REST is that you can easily connect it together with CommandBox and ColdBox. How are you using those together? I hear they fit well together.
Jon Clausen: They do. There's actually a RESTful application template out there, the ColdBox REST template. A lot of times a new API is going to start just by pulling that template as the skeleton for my ColdBox application, and then from there leveraging the pieces that are part of it. It also works extremely well because you have all sorts of interception points built into ColdBox that you can use to handle consistent data marshaling.
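For context, the REST template Jon mentions can be pulled down with CommandBox. Assuming CommandBox is installed, something like this scaffolds a new app from the REST skeleton and starts a local server (the app name here is hypothetical):

```shell
# Scaffold a new ColdBox app from the REST skeleton
box coldbox create app name=myapi skeleton=rest

# Start a local CFML server in the app directory
box server start
```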
So for example, if you go and check out the REST template, the REST skeleton, you'll see that there are a couple of conventions at play. One is the around handler method, which basically runs around whatever your execution event is. That around handler method in the RESTful template handles the whole process of marshaling data and handling errors, communicating those errors in a consistent format that's easily consumable, in addition to having some convenience methods for common responses, such as not found, not authorized, all those things that you can leverage right off the bat. It makes it very easy to develop those because you have so many hooks and interception points already built into the framework that allow you to quickly build them out. It's not unusual to have a fully functional CRUD REST entity in less than a hundred lines of code. With the tools that are available out there, there are a lot fewer lines that have to be written to deal with those procedures and conditions.
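As a minimal sketch of that around-handler convention (simplified from what the actual ColdBox REST template ships; the response structure and handler name here are illustrative, not the template's exact code):

```cfml
component extends="coldbox.system.EventHandler" {

    // Convention method: ColdBox runs this "around" every action in the handler
    function aroundHandler( event, rc, prc, targetAction, eventArguments ) {
        try {
            // Execute the real action (e.g. index, show, create)
            arguments.targetAction( argumentCollection = arguments );
            // Marshal the successful response in one consistent JSON shape
            event.renderData( type = "json", data = { "error" : false, "data" : prc.response ?: {} } );
        } catch ( any e ) {
            // Communicate errors in the same consistent, consumable format
            event.renderData(
                type       = "json",
                data       = { "error" : true, "messages" : [ e.message ] },
                statusCode = 500
            );
        }
    }

}
```

Because every action funnels through this one wrapper, individual actions only have to populate the response, which is how a full CRUD entity can stay under a hundred lines.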
Michael Smith: So folks coming to this one-day workshop on REST, are they going to be able to code this themselves, or are they just watching you create it?
Jon Clausen: No, absolutely. By the end of the day, the goal is that the user walks out with a fully functional API which they have developed themselves, which solves a common business problem. We'll have some options to choose from in there. When I develop something, it's developed according to my conventions and my knowledge, so for me to get up there and code an API and then say, "Here, this is how you do it," that's really not going to help any developer.
The idea behind this is that we're going to cover these RESTful principles, we're going to give you the ideas, a full understanding of what you need to know, or at least a pretty full understanding, and then we're going to walk through some patterns with regard to authentication, serving data, and modeling data. The last part of the day is going to be dedicated to turning the participants loose and saying: solve this business problem, develop an API around it. By the time you walk out of here, you're going to have a fully functional application that's been developed by you, using your coding conventions, that you can then use as a template to build future RESTful services for your specific business problems.
Michael Smith: That sounds really useful, Jon. Looking at the Into the Box conference as a whole, what are you looking forward to at it?
Jon Clausen: I think I'm certainly looking forward to being able to share knowledge. I'm a big believer in the phrase "when you teach, you learn." I always find that I walk away energized from teaching as well. At the same time, there are a lot of great sessions out there. Going to conferences like Into the Box and CF Objective, these are all conferences where you get a chance to connect with peers and walk away re-energized about your work. As developers know, there are a lot of days when you're just slogging through lines and lines of code, and some days it's not fun.
I find that the energy I get from those events keeps me energized through the months that follow, when I'm working on some of those things that aren't as fun.
Michael Smith: Cool, so if folks want to reach you, how would they find you?
Jon Clausen: They can find me as jclausen on Twitter, or jclausen on the CFML Slack channel. They can also reach out to me via my work email, which is email@example.com, or they can visit silowebworks.com and contact me there.
Michael Smith: Great, well thanks so much for being on the ColdFusion Alive Podcast, Jon.
Jon Clausen: My pleasure. Thank you, Michael.