Developing for the cloud: How developing in the cloud is different

The benefits of cloud computing have been widely touted – business agility, scalability, efficiency and cost savings chief among them – and companies are migrating and building mission-critical Java applications specifically for cloud environments at a growing rate. TheServerSide recently caught up with Bhaskar Sunkara, Director of Engineering at AppDynamics, an application performance management company focused on Java and cloud applications, to discuss the challenges of developing Java applications for the cloud and managing them once they're there.

What are some of the challenges in developing for the cloud?

More important than the programming language itself, one of the main challenges in developing for the cloud is understanding how application service dependencies are handled. Application service dependencies include databases, message servers or other services deployed in a distributed environment. Traditionally, these dependencies have been handled by mapping service references to physical IPs during the deployment process, but the cloud introduces a new variable: the IP addresses are not even known beforehand.
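To make the traditional approach concrete, here is a minimal sketch of deployment-time mapping: logical service names are resolved from a properties file produced when the application is deployed, so the code never hard-codes an address. The file name and keys are hypothetical.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ServiceEndpoints {

    private final Properties endpoints = new Properties();

    // Load the logical-name -> host:port mapping produced at deployment time.
    // The file name and its keys are illustrative only.
    public ServiceEndpoints(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            endpoints.load(in);
        }
    }

    // Resolve a logical service name such as "orders-db" to "10.0.1.15:5432".
    public String resolve(String logicalName) {
        String address = endpoints.getProperty(logicalName);
        if (address == null) {
            throw new IllegalStateException("No mapping for service: " + logicalName);
        }
        return address;
    }
}
```

In a static data center that file rarely changes; in the cloud the addresses behind it can change at any time, which is why resolution has to move to runtime.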

So how do you know where all these dependencies are and how to leverage them?

Fundamentally, when you have a service-oriented environment there will be a number of services ‘talking’ to each other and leveraging a variety of infrastructure elements. At any given point your service could be using any of these other services. In a traditional static environment you have a clear sense of where these elements reside and can code against them directly. The developer can identify the resources and knows which services the application will be using once it is deployed. In a cloud environment, however, you have no guarantee of which IP addresses the application will be using or how it will look up services or data.

The developer needs a solid understanding of how to consume services in a smart way. Finding resources effectively needs to be part of the application, and there need to be reusable patterns for doing so consistently. Using a discovery pattern to locate the services you want to use is one of the popular ways to solve this problem, as the sketch below illustrates. If you don’t build your application with these factors in mind, you risk creating an unmanageable scalability and refactoring problem.
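A minimal sketch of such a discovery pattern, assuming a registry abstraction; the ServiceRegistry interface and the service name are hypothetical stand-ins for whatever mechanism (a registry server, DNS, or a cloud provider API) the deployment actually uses.

```java
import java.net.URI;
import java.util.List;

// Hypothetical abstraction over whatever discovery mechanism is in use.
interface ServiceRegistry {
    // Return the currently known endpoints for a logical service name.
    List<URI> lookup(String serviceName);
}

public class OrderClient {

    private final ServiceRegistry registry;

    public OrderClient(ServiceRegistry registry) {
        this.registry = registry;
    }

    public URI pickEndpoint() {
        // Resolve the logical name at call time, not at deployment time,
        // so newly started or replaced instances are picked up automatically.
        List<URI> endpoints = registry.lookup("inventory-service");
        if (endpoints.isEmpty()) {
            throw new IllegalStateException("No instances of inventory-service available");
        }
        // Naive selection; a real client would add load balancing and retries.
        return endpoints.get(0);
    }
}
```

The key design choice is that resolution happens at call time rather than at deployment time, so instances that come and go are picked up without redeploying the application.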

As far as formal specifications go, J2EE has done a great job with resource dependencies through resource mapping at deployment time. For example, the application talks to a database, but in code you talk to a logical resource that hands you connections. When you deploy the application, you map that logical resource to a real database IP/URL, so there is no hard coding.
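For instance, the standard J2EE-style lookup of a logical data source; the JNDI name jdbc/OrdersDB is illustrative, and the binding to a real database URL lives in the deployment descriptor rather than in code.

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

public class OrderDao {

    public Connection openConnection() throws NamingException, SQLException {
        // Look up the logical resource name; the container maps it to the
        // real database host/URL at deployment time.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/OrdersDB");
        return ds.getConnection();
    }
}
```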

In the J2EE paradigm the developer gets a JNDI naming context that lets them look up the services they need. In the cloud, this needs to be wrapped by a cloud-aware naming context that abstracts away the IP dependencies. That way the cloud-aware naming context is the only element that contains the ‘Service Locator’ logic, which makes the application much more maintainable.
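One possible reading of such a cloud-aware naming context, sketched under the assumption that dynamically discovered endpoints are held in some client-side view (here a plain Map standing in for a live discovery client), with the container's JNDI bindings as the fallback.

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import java.net.URI;
import java.util.Map;

// Hypothetical wrapper: the only class in the application that knows how
// logical names are resolved, whether through discovery or plain JNDI.
public class CloudAwareContext {

    private final InitialContext jndi;
    private final Map<String, URI> discoveredEndpoints; // stand-in for a live discovery client

    public CloudAwareContext(InitialContext jndi, Map<String, URI> discoveredEndpoints) {
        this.jndi = jndi;
        this.discoveredEndpoints = discoveredEndpoints;
    }

    public Object lookup(String logicalName) throws NamingException {
        // Prefer dynamically discovered endpoints; fall back to the
        // container-managed JNDI bindings for everything else.
        URI endpoint = discoveredEndpoints.get(logicalName);
        if (endpoint != null) {
            return endpoint;
        }
        return jndi.lookup(logicalName);
    }
}
```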

Another issue with developing for the cloud is addressing horizontal scalability – what should Java developers know?

A fundamental premise of cloud computing is its ability to allow horizontal scaling, but not all apps were born to be horizontally scalable; they’ve never had to be. Statelessness needs to be enforced for any application in the cloud. With on-demand infrastructure, any affinity for keeping state locally breaks everything down. Applications have to be written so that any application tier that needs to scale is able to do so. We’ve started to see this in newer applications over the last few years, but it’s still an obstacle for many developers.
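As an illustration of enforcing statelessness, here is a sketch that keeps conversational state in an external shared store instead of in fields on the serving instance; the SharedStateStore interface is hypothetical, standing in for a distributed cache or data grid.

```java
import java.util.Optional;

// Hypothetical client for a distributed store (cache, data grid, etc.).
interface SharedStateStore {
    void put(String key, String value);
    Optional<String> get(String key);
}

public class CartService {

    private final SharedStateStore store;

    public CartService(SharedStateStore store) {
        this.store = store;
    }

    // No instance fields hold user state, so any node can serve any request
    // and nodes can be added or removed without losing carts.
    public void addItem(String userId, String itemId) {
        String key = "cart:" + userId;
        String cart = store.get(key).orElse("");
        store.put(key, cart.isEmpty() ? itemId : cart + "," + itemId);
    }
}
```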

If you had to suggest one best practice for Java developers programming for the cloud, what would it be?

Do not localize data storage! If you do, it’s almost like localizing data handling to a particular JVM instead of treating the environment like a cloud, and it can introduce dependencies that are tied to a single JVM. Data management should always be distributed. You have to assume servers fail, and fail often. Relational databases are no longer the norm in the cloud. With so many changes in the application ecosystem, the application has to be inherently stateless.
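A minimal sketch of the difference, assuming a hypothetical DistributedStore client in place of local file I/O; the point is simply that the data has to outlive any single JVM or server.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical client for a replicated, distributed storage service.
interface DistributedStore {
    void write(String key, byte[] data);
    byte[] read(String key);
}

public class ReportWriter {

    private final DistributedStore store;

    public ReportWriter(DistributedStore store) {
        this.store = store;
    }

    // Avoid writing to a path like "/tmp/report.csv": that file would exist
    // only on this instance's local disk and disappear with the instance.
    public void save(String reportId, String csv) {
        store.write("reports/" + reportId, csv.getBytes(StandardCharsets.UTF_8));
    }
}
```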

And of course, do not use physical IP or disk-based locators to look up the resources the application needs. Rely on a location pattern or service that abstracts out the physical IPs.

That’s a great point. Finally, what do developers need to know about testing applications before they deploy in the cloud?

When you want to test the application during your development cycle and see how it works in the cloud, you face the challenge of transitioning from your local development environment. It is difficult to go from building and trying things out locally to working in the cloud, because there is really no way to effectively mimic how the cloud environment will look and feel once the application is deployed. The maturity of IDEs that can handle cloud environments is still a work in progress as well. The more seamless the transition from local test environments to cloud-based environments, the more productive the development cycle will be. It used to be intimidating to deploy an application to the cloud; it is much easier now, but there is still a lot of scope for tools to make the whole process efficient.
