Cloud Migration: Four Key Benefits for Your Company

Storing your data in the cloud could save your company in more ways than one. The rise of cyber crime and the ever-growing volume of data to store leave companies and individuals susceptible to data theft, corruption, or loss. Many businesses back up data to a secondary external hard drive or a central server, but if that storage becomes vulnerable, it is imperative to have another plan. Moving to the cloud provides one, along with a more scalable environment that grows alongside your business needs.

Four benefits you can expect immediately after moving to the cloud:

  1. Cost Savings: When you run your own servers, you face up-front hardware costs; in the world of cloud computing, that initial investment is carried by the cloud provider. Moving to the cloud also lets you trim IT staffing and storage spending, and it frees you to redirect IT resources toward business growth rather than maintenance.
  2. Storage Capacity: The cloud grows in stride with your data, and you pay only for the space your data occupies at a given time. Whatever amount of space your company needs is available, and you never have to worry about hitting a fixed storage limit (there isn't one); a quick cost sketch follows this list.
  3. Improved Operations: You can access your data anytime and anywhere. Downtime for server updates and maintenance becomes a thing of the past, and your business benefits from the time you get back.
  4. Security: Cloud providers make it their mission to ensure the safety of your information. Data centers in remote locations with strict entry protocols provide physical protection, and if a breach does occur, your cloud provider should be able to supply 24×7 failover protection so your company keeps running in an emergency.
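
To make the pay-for-what-you-use storage model concrete, here is a tiny Python sketch; the per-gigabyte rate is a made-up placeholder, not any provider's actual pricing:

```python
# Illustrative only: the per-gigabyte rate below is a made-up placeholder,
# not any real provider's pricing.
PRICE_PER_GB_MONTH = 0.03  # hypothetical USD per GB stored per month

def monthly_storage_cost(stored_gb: float) -> float:
    """Pay only for the space the data actually occupies this month."""
    return stored_gb * PRICE_PER_GB_MONTH

# Usage can grow (or shrink) month to month with no fixed ceiling.
for gb in (500, 2_000, 10_000):
    print(f"{gb:>6} GB -> ${monthly_storage_cost(gb):,.2f} per month")
```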

Cloud computing could become one of the most important pieces in your business arsenal. On-premises hardware with limited storage capacity, maintained by your own IT team, belongs to the past: a single technological glitch or breach can put a hard stop on business development. Grow your business with the cloud and experience the limitless possibilities.

5 Questions to Ask Your Cloud Provider

With all the benefits of cloud computing, it has become much easier to justify the decision to migrate to the cloud. But what do you do now that you’ve decided to make the move? The answer is simple: hire a reputable cloud provider.

Hiring a cloud provider, i.e. a managed service provider, allows you to enjoy all the benefits of the cloud while removing the burden of maintenance and repair. Working with a provider gives you the high-performance IT infrastructure you need and want. Their job is to free up your time and manpower so that you can focus on running and expanding your business, with no IT stress holding your growth back.

With that said, there are several things to consider when choosing a cloud services provider. Before you begin your search, compile a list of questions to ask each provider so that you understand how they operate and can make a well-informed decision.

Of course, price and storage capacity are important concerns, but perhaps the most critical question is security: How safe is my data in the hands of these cloud providers?

To find this answer you will need to ask the providers these five questions:

  1. Does your organization have formal information security policies? These cover acceptable use, data classification, incident response, and the like. They amount to a plan that identifies the provider's critical assets, defines how those assets must be protected, and explains how staff are responsible for protecting information resources.
  2. Do you require any third-party services or agreements? A third party is another organization the provider engages to deliver services to its clients on its behalf. Licensing and agreements for products such as Microsoft Exchange, Oracle, or SQL Server may need to be updated if you migrate to the cloud.
  3. What are your change control processes? Change control is a systematic approach to managing all changes made to a product or system: it ensures that no unnecessary changes are made, that every change is documented, that services are not needlessly disrupted, and that resources are used efficiently. A systematic approach provides consistency and sets clear expectations for how your data is handled.
  4. Who has physical access to your data center and equipment? Asking this ensures the provider has controls in place to prevent unauthorized copying, sending, or emailing of your data.
  5. How do you separate my data from that of other customers? This matters because you have no way of knowing on your own whether your information has been copied, so you need to be able to trust your cloud provider's isolation controls.

Moving to the cloud is a huge decision for your company. Make sure you are informed about the policies of the cloud provider you choose. If you are already on the cloud, make sure your current provider can adequately answer all of these questions. If they can't, your data could be at risk.

Such a huge, business-altering decision requires research and diligence. While we hope you choose Neovera as your cloud provider, our main goal is to ensure you choose a provider that keeps your data safe and meets your business needs. Take these questions and use them to decide for yourself which provider is best for you.

Windows Server 2003 End of Life: What To Expect

The time often comes when old technology is replaced by newer technology. This also means the end of the support lifecycle for certain hardware or software. In this case, it’s nearing the end for Windows Server 2003. We’ll tell you what it all means and what you can expect as Windows Server 2003 is phased out this summer.

The most important thing to note is the date on which Microsoft will stop supporting Windows Server 2003. As of now that date is set for July 14, 2015, and it is quickly approaching.

It is estimated that over 10 million servers currently run Windows Server 2003. That is a hefty number of organizations that will need to migrate to new server platforms by July, and if you haven't begun doing so yet, we suggest putting it at the top of your agenda.
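
To put the urgency in numbers, here is a minimal Python sketch of the countdown; the only fact it uses is the July 14, 2015 date above, and the sample "today" is just an example:

```python
from datetime import date

# End-of-support date Microsoft announced for Windows Server 2003
EOS_DATE = date(2015, 7, 14)

def days_until_eos(today: date) -> int:
    """How many days remain before extended support ends."""
    return (EOS_DATE - today).days

# Example: checking at the start of 2015 leaves barely half a year to migrate
print(days_until_eos(date(2015, 1, 1)))  # -> 194
```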

So, what does “end of life” mean and what can you expect once Windows Server 2003 is no longer supported?

Updates

Updates are perhaps the most critical factor, or at least the engine that drives support for your server OS during its lifetime. There will no longer be any updates to Windows Server 2003; this includes security patches, bug fixes, and everything in between. Without consistent updates the OS becomes obsolete and especially vulnerable to attacks, data corruption, and data loss.

Compatibility Issues

We all know how frustrating it can be to use newer software with older hardware; in fact, it's often nearly impossible. The same holds true for Windows Server 2003. Continuing to run an older, so-called "legacy", server or OS will almost certainly mean you encounter compatibility problems.

Compliance Standards

We have talked extensively about compliance standards such as HIPAA and PCI, among others. All of the major security standards require that your server OS or platform be vendor-supported. Once Microsoft stops supporting Windows Server 2003, running it means you are no longer in compliance and could face hefty penalties or worse. To pass a compliance audit you absolutely must be on supported hardware and software.

These are the major points to think about with the Windows Server 2003 end of life creeping up on us. Of course, there is also the issue of increased maintenance costs of using legacy servers.

Because the server OS will no longer meet current security and compliance standards, it will become increasingly costly to acquire support or build security measures to protect a legacy server. In the grand scheme, the cost of migrating to a new platform will be measurably less over the long term.

What to do next? Well, you're not alone if you have not yet considered or begun migrating from Windows Server 2003 to something more current, such as Windows Server 2008 or 2012. Fortunately, there are plenty of great resources out there to help with the process.

Microsoft has a Windows Server 2003 EOS website with tons of resources to help you plan and execute a successful migration.

It's also worth noting that many organizations don't even know that EOS (End of Support) is approaching, so if this is your first time hearing of it, don't be overwhelmed or fearful. If you're in the IT department you have probably experienced something like this before, and we're all familiar with "last minute" plans.

The important thing is to have a plan and execute it the way you intended. On the Windows Server 2003 EOS website you will find Microsoft's keys to a successful migration, which should help you immensely. In the end you'll be doing yourself and your organization a great service by keeping everything secure, compliant, and current.

Is CoreOS Making Other Linux Vendors Worried?

Having celebrated its first anniversary about a week ago, CoreOS is beginning to make other Linux vendors a little nervous. Is CoreOS going to take over the Linux landscape? Maybe, maybe not. What we do know is that some people are taking notice while others are brushing it off. Is CoreOS just another "fly-by-night" solution, or is there more to the story?

CoreOS is described as, “An open source lightweight operating system based on the Linux kernel and designed for providing infrastructure to clustered deployments, while focusing on automation, ease of applications deployment, security, reliability and scalability. As an operating system, CoreOS provides only the minimal functionality required for deploying applications inside software containers, together with built-in mechanisms for service discovery and configuration sharing.”

In short, its lean design makes larger, more scalable deployments across numerous infrastructures easier to manage. It essentially allows for "warehouse-scale computing on top of a minimal, modern operating system".

The details of CoreOS may be familiar to the savviest technology users, but the product has only recently found its way into the spotlight. One of its main advantages is the ability to "build it up" to your desired needs, as opposed to RHEL (Red Hat Enterprise Linux), an "everything" product that many strip down to suit their needs. Its minimal design also consumes less RAM on boot than typical Linux installations, and updating is seamless: while many other products merely make updates "available" or optional, CoreOS updates itself automatically so you'll never miss a beat. Finally, while CoreOS runs smoothly on a single machine, it's designed to be clustered, so you can run application containers across multiple machines and connect them using service discovery.

What may be most fascinating about CoreOS is its use of Docker. CoreOS does not provide a package manager and instead requires all applications to run inside containers. Docker supplies the underlying Linux container technology, operating-system-level virtualization that allows multiple isolated Linux systems to run on a single control host: the Linux systems are the containers, and the control host is the CoreOS instance. Essentially, this makes it possible to limit, account for, and isolate the resource usage of groups of processes.
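
To make the container model concrete, here is a minimal sketch using the Docker SDK for Python; that choice of SDK, the container name, and the port mapping are our own illustrative assumptions (on CoreOS itself you would more typically drive Docker from a systemd unit or the command line):

```python
import docker  # Docker SDK for Python ("pip install docker"); a running Docker daemon is assumed

client = docker.from_env()

# Start an nginx web server as an isolated container; the application and its
# dependencies ship inside the image, not on the host OS itself.
web = client.containers.run(
    "nginx:latest",
    name="demo-web",           # hypothetical container name
    ports={"80/tcp": 8080},    # map container port 80 to host port 8080
    detach=True,
)

print(web.short_id, web.status)

# Clean up the demo container when finished
web.stop()
web.remove()
```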

While CoreOS is in its infancy compared to other minimalist operating systems, it has certainly gained the attention of some big players, in part due to its heavy reliance on the popular Docker. Recently, Red Hat released Project Atomic, which according to Red Hat "…integrates the tools and patterns of container-based application and service deployment with trusted operating system platforms to deliver an end-to-end hosting architecture that's modern, reliable, and secure." Interesting indeed, as container-based application deployment becomes increasingly popular. Furthermore, Linux-based products seem to be focusing more on the development community in addition to the operations side.

CoreOS has certainly stormed onto the scene and made itself known to the Linux community. Whether or not it survives in an increasingly crowded space remains to be seen. If it does, it may well transform Linux and application deployments. If it doesn’t, it will have certainly laid the groundwork for some exciting advancements in the field.

5 Common Myths About Virtualization DR

Today we live in the “Information Age” in which all sorts of knowledge is available at the drop of a hat – or shall we say the stroke of a few computer keys. The problem? A good portion of the information available may not be as accurate as you might expect. Our goal is to provide expert industry knowledge from a reputable source. With that goal in mind we’ll talk about some common myths about virtualization DR.

Virtualization Disaster Recovery can be described as taking a virtual copy or image and deploying it on another server as a backup. This process is generally used to decrease downtime and data loss. As this process becomes more and more popular among IT professionals it’s important to know what’s true about the process – and what isn’t necessarily accurate. Let’s dig in.

Myth #1: Storage Based Replication is Ideal for Availability

This myth refers to storage-based replication versus server-based replication. Storage-based replication has slower performance, often comes with hidden costs, and is limited to specific storage vendors' arrays. Server-based replication is much more flexible in terms of cross-platform compatibility and delivers better overall performance.

Myth #2: You Can Only Protect VMs With Agentless Protection

Agentless protection on its own can limit your flexibility. While agentless protection is certainly needed, it should work in tandem with agent-based protection to increase flexibility and overall coverage.

Myth #3: Virtualization Is a One-Way Avenue

Workloads need to be agile across multiple environments including physical, virtual, and cloud. However, many migration and protection solutions do not allow for a path back to the original source.

Myth #4: Agentless Protection is OK For All Applications

The truth is that applications at different levels and tiers need different types of protection. A good disaster recovery strategy will use real-time replication for the most critical applications, and that cannot be accomplished with agentless protection alone.

Myth #5: RPO and RTO Are The Same Thing

RPO (Recovery Point Objective) and RTO (Recovery Time Objective) are often confused or misunderstood. While both are part of business continuity and disaster recovery planning, they are not the same. The Recovery Point Objective is the maximum period of data that might be lost if a major incident occurs. The Recovery Time Objective is the amount of time a business can be without a service before incurring significant risks or losses. Both are imperative to a successful business continuity plan.
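
A short worked example may help keep the two straight. The Python sketch below checks a disaster-recovery setup against both objectives; every number in it is purely illustrative, not a recommendation:

```python
# Illustrative only: the objectives and measurements below are made-up numbers.
from datetime import timedelta

# Targets agreed with the business
rpo = timedelta(hours=1)   # at most one hour of data may be lost
rto = timedelta(hours=4)   # service must be back within four hours

# What the current disaster-recovery setup actually achieves
replication_interval = timedelta(minutes=30)  # worst-case data loss since the last replica
measured_restore_time = timedelta(hours=6)    # observed time to bring the service back

print("RPO met:", replication_interval <= rpo)   # True: 30 min of loss fits a 1 h RPO
print("RTO met:", measured_restore_time <= rto)  # False: a 6 h restore misses a 4 h RTO
```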

These are just a few of the common IT myths related to disaster recovery, and more specifically to virtualization disaster recovery. We'll continue to dig deeper into these topics to surface common myths as well as commonly accepted practices so you can better educate yourself on the finer points of information technology, the cloud, virtualization, and more. Of course, if you have questions, don't hesitate to contact Neovera to find out how we can help you achieve IT success.

Cloud Computing & Virtualization – Are They One and the Same?

It is commonly thought that cloud computing and virtualization are crutches for one another, in other words, that one is needed for the other to work. Is that actually the case? What are the differences between the two, and what makes each effective? We'll explore the worlds of cloud computing and virtualization and discuss the benefits and effectiveness of each.

What is virtualization? Virtualization refers to creating a "virtual" version of something, whether hardware or software. For example, you might create a virtual machine that behaves like a real computer: take a computer running Windows as its operating system, create a virtual machine on it, and run another operating system such as Linux inside that virtual machine. The same could be done on a computer running OS X on which you need to run Windows programs. Another approach is server virtualization, in which you run multiple virtual machines for different software in order to streamline your data center and make better use of server processing power.

As for cloud computing, you should be familiar with it by now, especially if you've been following the Neovera blog! In case you missed a few posts, cloud computing refers to providing computing power as a service; clients often share server space and provision computing power on an as-needed basis rather than keeping dedicated servers on-site.

Of course, there are similarities between the two, and if you’re not careful it is easy to confuse them. While they are not exactly the same, they both have great benefits and often work together to form a dynamic tandem. One, however, is not exactly a necessity for the other.

What happens when you virtualize without cloud computing? A majority of organizations today use some form of virtualization, often combining Network Functions Virtualization (NFV) with Software Defined Networking (SDN), which helps automate several virtual and physical network components. Taken a step further with Software Defined Storage (SDS), the combination yields a fully functioning Software Defined Data Center. That's a lot of jargon, but basically these systems let companies take full advantage of virtualization and get the most out of the resources they possess while automating many computing, storage, and security processes. Most organizations get to this point and go no further.

Cloud computing adds a great deal of potential to this type of environment by cutting down the time it takes to provision computing power as needed. For instance, many organizations have several steps to go through before they can create a new VM and make it available to users; cloud providers make this possible almost in real time.
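
As one concrete illustration of that near-real-time provisioning, here is a minimal sketch assuming an AWS environment and the boto3 SDK; the image ID and instance type are placeholders, and any real deployment would add networking, tagging, and security settings:

```python
import boto3  # AWS SDK for Python; assumes credentials and a default region are configured

ec2 = boto3.resource("ec2")

# Launch one small virtual server; the AMI ID below is a placeholder, not a real image.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)

print(instances[0].id)  # the new VM is provisioned within moments, with no hardware purchase
```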

Turning the tables a bit, how does cloud computing work without virtualization? It works reasonably well, but it is not as scalable. Cloud providers are limited by the number of physical servers they have, and customers have to fit their workloads into what the provider offers. It also takes more time to provision physical storage than a virtual machine.

When the two work in tandem, a computing powerhouse is created: providers can offer more, customers get more, and they get it much faster. As we've discussed before, organizations need the ability to provision computing power as needed, and to do it quickly in a fast-moving business world. Cloud computing and virtualization are part of what makes that boundless capability possible.