"The cloud is the new normal" said Andy Jassy, Amazon Web Services’ Senior Vice President during AWS annual Re:Invent conference in Las Vegas few years ago and this statement stands true even today.
Nowadays, as businesses grow, their reliance on costly hardware and infrastructure is shrinking, because enterprises have started placing their files and applications in the cloud.
For a business, it is essential to choose wisely from the pool of cloud storage providers available in the market today, based on which one offers the most low-cost storage and bandwidth while keeping your data safe.
With 44 zettabytes of data expected to be generated by 2020, I don't think we can underestimate the growth of data and its value to organizations.
Businesses are already storing, accessing and processing petabytes of data and beyond. Taken together, our PCs, mobile devices, connected homes and other devices quickly exceed terabytes of data to be stored, and from a business perspective things look even more drastic.
AWS cloud storage solutions address your storage needs efficiently and help you navigate the wealth of services on offer.
Amazon provides its customers with several AWS cloud storage options:
Amazon Cloud Storage: S3 (Simple Storage Service)
S3 is probably the best known and most used Amazon storage option. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web. One very common use for Amazon S3 is for storing and distributing static web content and media.
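Storing and serving a piece of static web content with S3 can be sketched in a few lines of boto3. The bucket name, key and region below are hypothetical, and running the upload requires AWS credentials; the URL helper just shows the virtual-hosted-style address where a public object becomes reachable.

```python
def build_object_url(bucket, key, region="us-east-1"):
    """Virtual-hosted-style URL at which a (public) S3 object is served."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

def upload_static_asset(bucket, key, body):
    """Upload one piece of static content to S3 (needs AWS credentials)."""
    import boto3  # imported here so the URL helper works without boto3 installed
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=body, ContentType="text/html")
    return build_object_url(bucket, key)

# Hypothetical usage:
#   upload_static_asset("my-site-bucket", "index.html", b"<h1>Hello</h1>")
```

Once uploaded, the same object can be fetched from anywhere on the web, which is exactly the static-content distribution use case described above.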
Amazon Cloud Storage: Elastic Block Store
Amazon Elastic Block Store (EBS) volumes provide durable block-level storage for use with Amazon EC2 instances (virtual machines).
Amazon Cloud Storage: Elastic File System
Scalable network-attached file storage for use with EC2 instances. Similar uses to EBS, including big data, web servers and content management systems, but designed for scalable, distributed applications that require access to a common file system.
Amazon Cloud Storage: Glacier Storage
Amazon Glacier is an extremely low-cost Amazon storage service that provides highly secure, durable, and flexible storage for data backup and archival.
Amazon Cloud Storage: AWS Storage Gateway
Amazon Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and the Amazon storage infrastructure.
Many of these storage options can appear expensive at first glance, but they take on much of the complexity of managing a modern distributed storage environment. Whatever service you're evaluating, it's important to put price into perspective and consider the total cost of your storage solution. On that measure, AWS cloud storage is definitely among the best. Share your views in the comment section below.
When looking for a cloud solutions provider, most enterprises opt for a famous name. However, that is not the best strategy for zeroing in on the right cloud solutions provider for your company.
Providing cloud solutions the right way has been in our DNA since inception. Born in the cloud, all our services are aimed at helping customers through the cloud transformation journey smoothly. Our pool of cloud solutions talent helps you build the solution on the cloud of your choice, then architects, migrates, secures and optimizes your workloads.
As a customer-obsessed company, we believe in working closely with our customers to fulfil their business requirements while keeping their lives simple.
Why choose RightCloud over other cloud solutions providers?
1. Cost Effective
We offer Cloud Solutions that cater to specific Customer or Industry Requirements at a reasonable cost.
2. Technical Expertise
We have a team of 140+ architects, engineers, developers and management consultants across the regions, including 30+ certified cloud solutions architects who can design an architecture to your requirements, help you migrate to the cloud and optimize cloud solutions for your business.
3. Our Service Model
We offer consulting, implementation, managed services, security and advanced workloads. In managed services we follow a proactive approach: we monitor and provide 24/7 support to our customers, identifying and resolving issues before they become problems. Our services are tailored to customer needs.
4. Strategic Partnerships
We are an Advanced Consulting Partner of Amazon Web Services, a Diamond Partner of Microsoft Azure, and have platform expertise across AWS, Azure and Google Cloud Platform.
5. Global Presence
We have presence in 7 different countries across the globe: Singapore, Indonesia, Philippines, India, Vietnam, Thailand and Australia.
6. Quality Driven
We follow a results-driven approach to improve quality and productivity. We are obsessed with customer results and work hard to deliver them.
To connect, please write to us at Contact@rightcloud.asia
The biggest challenges we run into as infrastructure operators, cloud engineers and managed service providers are server and security management; if we don't get these right, we face consequences. Most enterprises run a mix of cloud providers or server architectures, which means engineers keep switching between platforms, and that comes with a training and efficiency cost. User management is also far too complex. A simpler interface would solve both the management and the security problems. With organizations across the globe growing at a rapid pace, it is essential to have a proper server and security management tool in place to keep the process simple.
To address this business challenge, our research team at RightCloud developed a product called Infraguard.
It's a home-grown server and security management tool that automates standard processes, uses policy controls to regulate access, and delivers a single interface for managing your entire server infrastructure.
Infraguard is a relatively simple tool and can manage servers across platforms including AWS, Azure, Google Cloud Platform and on-premises environments.
The three essential features the tool provides are process automation, policy-based access control, and a single management interface.
Please write to us at Contact@rightcloud.asia to learn more about this tool and how it can be beneficial for your organization.
In the cloud world, AWS, Microsoft Azure and Google Cloud are clearly the top cloud computing vendors. These three dominate the public cloud market, and AWS has the upper hand over the other two and the rest of the field. While AWS might be leading, many organizations are moving towards Azure and Google, and they have their reasons for doing so. This post throws some light on each of the providers, comparing one with the other.
Amazon Web Services
Amazon's biggest strength is its dominance of the public cloud market. As per Gartner's report last year, AWS has been the market share leader in cloud IaaS for over 10 years, and one of the reasons is its huge range of services. Quoting Gartner: "It is the most mature, enterprise-ready provider, with the deepest capabilities for governing many users and resources." One of the challenges for enterprises using AWS is managing costs effectively while running a high volume of workloads on the service.
Microsoft Azure
Though it entered the cloud world late, the pace at which it is growing is commendable. The on-premises software Microsoft repurposed for the cloud (Windows Server, Office, SQL Server, SharePoint, Dynamics, Active Directory, .NET and others) gave it a major lift. Azure is tightly integrated with Microsoft's other applications, and enterprises already using Microsoft software find it easier to move to Azure. Besides, the discounts for existing enterprise customers are a plus any day. However, the vendor definitely has issues with technical support, documentation, training and the like.
Google Cloud Platform
Google has a strong offering in containers, since Google developed the Kubernetes standard that AWS and Azure now offer. Google Cloud Platform (GCP) excels in high-end compute offerings like big data, analytics and machine learning, and it also offers considerable scale and load balancing. The challenge with GCP is the smaller number of features and services available compared to its competitors AWS and Azure; while Google is expanding, the other two are already ahead in the race.
Elastic Compute Cloud: This one is Amazon's flagship compute service. It is a web service that provides secure, resizable compute capacity in the cloud. EC2 offers support for both Windows and Linux. AWS also offers a free tier for EC2 that includes 750 hours per month of t2.micro instances for up to twelve months.
Container services: Amazon's container services support Docker, Kubernetes, and its own Fargate service that automates server and cluster management when using containers. It also offers a virtual private server option known as Lightsail.
Virtual Machines: It is Microsoft's primary compute service and supports Linux, Windows Server, SQL Server, Oracle, IBM, and SAP. Like AWS, it has an extremely large catalog of available instances, including GPU and high-performance computing options, as well as instances optimized for artificial intelligence and machine learning. It also has a free tier with 750 hours per month of Windows or Linux B1S virtual machines for a year.
Azure Container Service is based on Kubernetes, and Container Services uses Docker Hub and Azure Container Registry for management. It has a Batch service, and Cloud Services for scalable Web applications is similar to AWS Elastic Beanstalk. It also has a unique offering called Service Fabric that is specifically designed for applications with microservices architecture.
Compute Engine: Google's range of compute services is somewhat smaller than the other two's. Its primary service is Compute Engine, which boasts both custom and predefined machine types, per-second billing, Linux and Windows support, automatic discounts, and carbon-neutral infrastructure that uses half the energy of typical data centers. It offers a free tier that includes one f1-micro instance per month for up to 12 months.
Focus on Kubernetes: Google also offers Kubernetes Engine for organizations interested in deploying containers. As Google has been hugely involved in the Kubernetes project, its service gains some added value from that experience.
AWS offers a long list of storage services, including Simple Storage Service (S3) for object storage, Elastic Block Storage (EBS) for persistent block storage for use with EC2, and Elastic File System (EFS) for file storage. A few unique AWS storage products are Storage Gateway, which enables a hybrid storage environment, and Snowball, a physical hardware device that organizations can use to transfer petabytes of data when Internet transfer isn't practical.
Amazon has a SQL-compatible database called Aurora, Relational Database Service (RDS), DynamoDB NoSQL database, ElastiCache in-memory data store, Redshift data warehouse, Neptune graph database and a Database Migration Service. Amazon doesn't offer a backup service but it does have Glacier, which is designed for long-term archival storage at very low rates. Also, Amazon’s Storage Gateway can be used to easily set up backup and archive processes.
Microsoft Azure's basic storage services include Blob Storage for REST-based object storage of unstructured data, Queue Storage for large-volume workloads, File Storage and Disk Storage. It also has a Data Lake Store, which is useful for big data applications.
Azure's database options are pretty extensive. It has three SQL-based options: SQL Database, Database for MySQL and Database for PostgreSQL. It also has a Data Warehouse service, as well as Cosmos DB and Table Storage for NoSQL. Redis Cache is its in-memory service and the Server Stretch Database is its hybrid storage service designed specifically for organizations that use Microsoft SQL Server in their own data centers. Also, Azure offers an actual Backup service, Site Recovery service and Archive Storage.
GCP has a smaller menu of storage services available. Cloud Storage is its unified object storage service, and it also has a Persistent Disk option.
When it comes to databases, GCP has the SQL-based Cloud SQL and a relational database called Cloud Spanner that is designed for critical workloads. It also has two NoSQL options: Cloud Bigtable and Cloud Datastore. It does not have backup and archive services.
Hope this gives you a fair idea. Stay tuned to this space for more updates around AWS, Azure and GCP.
The most common question that comes to mind when we think about shifting our career path to the cloud is: "What skills do I need to have that will get me hired today to start my journey on cloud?"
First, keep in mind that this is an emerging area, so what employers are looking for is constantly changing. Secondly, even if they do hire you for a specific skill, you’ll be asked to retrain and retool as the cloud technology matures.
Nonetheless, here are the three skills that should get you hired today:
AWS Certifications: Get AWS-certified. No matter which role you want, developer or architect, companies are looking for AWS-certified individuals. An AWS certification certainly gives you an edge over other cloud aspirants, as it validates that an individual has a degree of talent and basic knowledge of cloud solutions. AWS certifications are also very much in demand today, and there is a shortage of professionals with these skills, so it's the right time to learn and grow professionally. So, what are we waiting for? Log onto the Amazon Web Services website, go through the training modules, book an appointment and get certified.
Cloud-based IoT: Another good skill to have in the cloud domain is IoT. Touted to be the next big thing in the IT world, bringing innovation in science and technology, IoT will forever change our lives and the way we conduct business; more than 50 billion devices are estimated to be connected via the Internet by 2020. The good news is that basic computer knowledge is the only prerequisite to enrol for this course. AWS, Microsoft and Google all have strong offerings here.
Serverless Computing and Containers: There is a growing demand for people with this skill as well. The majority of enterprises are looking to move to serverless, containers, or both, but have not really done much about it yet. Organisations, however, are hiring now in anticipation.
Happy working, and good luck with your hunt!
Looking for a job in cloud? Send your resume to Contact@rightcloud.asia.
When we type an address like www.google.com in the browser address bar, the computer doesn't know where google.com points, so it asks a DNS server. The job of a DNS server is to translate a human-readable web address (like www.google.com) into a computer-readable number known as an IP address (184.108.40.206). Once your computer knows the IP address of a web domain name, it opens the website in your browser.
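The lookup step described above is exactly what the operating system's resolver does on your behalf, and it can be sketched with Python's standard library. This is an illustrative snippet, not anything DNS-server-side:

```python
import socket
import ipaddress

def resolve(hostname: str) -> str:
    """Ask the configured DNS resolver to translate a hostname to an IPv4 address."""
    return socket.gethostbyname(hostname)

def is_ip_address(text: str) -> bool:
    """True if the string is already a literal IP address (no DNS lookup needed)."""
    try:
        ipaddress.ip_address(text)
        return True
    except ValueError:
        return False

# Example: resolve("www.google.com") returns whichever IPv4 address
# your resolver chose for Google's servers.
```

If the browser were handed a literal IP address instead of a name, the DNS step could be skipped entirely, which is what the `is_ip_address` check illustrates.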
What is OpenDNS?
OpenDNS is a company and service that extends the Domain Name System (DNS) by adding features such as phishing protection and optional content filtering on top of DNS lookup, if its DNS servers are used. Cisco acquired OpenDNS on August 27, 2015 for US$635 million in an all-cash transaction, plus retention-based incentives for OpenDNS. Cisco said it intended to continue developing OpenDNS alongside its other cloud-based security products and to keep the existing services running.
OpenDNS business services were renamed as Cisco Umbrella; home products retained the OpenDNS name.
Advantages of using OpenDNS:
OpenDNS offers a DNS service that is faster and more reliable. With OpenDNS you reach your intended website more quickly and avoid the outages that can occur with the DNS services provided by an ISP. OpenDNS servers store the IP addresses of millions of websites in their cache, so it takes less time to resolve your requests; if you ask for the IP address of a website that has previously been requested by another OpenDNS user, you get the reply instantly.
OpenDNS also offers the easiest, most cost-efficient way to prevent access to inappropriate websites, block phishing sites, and prevent virus and malware infections. If you want your Internet to be productive and safe, you need OpenDNS.
The services provided by OpenDNS increase the speed of navigating websites and prevent unintended access to phishing and malware sites as well as to any Web content that you configure to be restricted.
Disadvantages of using OpenDNS:
Using a service such as OpenDNS routes all your DNS traffic through the OpenDNS network. Because it resolves all of your DNS hostnames, this gives OpenDNS full information about your Internet browsing history. Another disadvantage is that, because of the increased security OpenDNS provides, users may become less cautious when browsing the Internet and visit riskier sites.
Cisco maintained the free pricing for the OpenDNS Home and Family versions, but they may seek to increase revenue by changing this in the future.
For more such updates, stay tuned to the RightCloud Blog.
Enterprises that perform public cloud deployments from scratch very likely have Oracle Database running on premises. Moving those on-premises databases to the cloud has its benefits; however, an Oracle Database migration to AWS depends on various factors, and before undertaking one, IT teams must weigh a few considerations.
There are six key criteria to consider when moving an Oracle database to the AWS cloud:
An Oracle Database migration can be accomplished in one step, but this requires a complete shutdown of the local database to extract and migrate the data to the new database in AWS. The process can take anywhere from one to three days, making this the most disruptive migration strategy. Single-step migrations generally suit small businesses with limited database sizes that can tolerate prolonged downtime during the migration.
Two-step migration strategies are common. The first step produces a point-in-time copy of the existing database, which can be moved to AWS without imposing any downtime on the local database. The local database continues to run during this process, so the actual migration can take as long as necessary -- there is almost no tangible disruption.
After the initial Oracle Database migration, a second step will capture, migrate and synchronize any incremental changes to the database. Once completed, the local Oracle Database will need to be shut down while the final changes are captured and migrated. This incremental step is considerably less involved than the initial synchronization, so the downtime is shorter. Once the final synchronization is complete, the AWS deployment takes over and the local database is decommissioned.
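The two-step strategy above boils down to a bulk snapshot copy followed by a replay of the changes that accumulated while the copy ran. The dict-based "databases" and change log below are hypothetical stand-ins, just to make the sequence concrete:

```python
def snapshot_copy(source_db):
    """Step 1: point-in-time copy of the source, taken while it keeps serving traffic."""
    return dict(source_db)

def apply_incremental_changes(target_db, change_log):
    """Step 2: replay changes captured since the snapshot, then cut over."""
    for op, key, value in change_log:
        if op == "upsert":
            target_db[key] = value
        elif op == "delete":
            target_db.pop(key, None)
    return target_db

# Hypothetical walk-through of the two steps:
on_prem = {"order:1": "shipped", "order:2": "pending"}
cloud_copy = snapshot_copy(on_prem)                  # step 1: bulk copy, no downtime
change_log = [("upsert", "order:2", "shipped"),      # writes that landed on-prem
              ("upsert", "order:3", "pending")]      # while the copy was running
apply_incremental_changes(cloud_copy, change_log)    # step 2: brief catch-up, then cut over
```

The downtime window covers only the small catch-up step, not the large initial copy, which is why this strategy is so common.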
A third option promises zero downtime. This typically starts with an initial synchronization and then invokes a form of continuous data replication (CDR) to keep the local and AWS database versions fully synchronized. A variety of tools, including Oracle GoldenGate and third-party tools like Dbvisit Replicate and Attunity Replicate, can handle CDR. A business can make the switch to AWS once replication has synchronized the local and AWS databases, and the CDR tool will continue to keep the instances synchronized. This option is typically reserved for the largest or most active Oracle database users who cannot tolerate any downtime. However, there is an added cost to use CDR, and continuous replication can potentially affect database or network performance.
That covers the essentials of migrating an Oracle database to AWS. Hope this information was helpful.
For more such updates, stay tuned to the RightCloud Blog.
Handling multiple servers can be painful, especially in the short run. Multiple servers mean multiple developers working on the same code, making the code repository difficult to handle over time. One of the biggest long-run disadvantages is poor resiliency, which can turn the whole back end into a mess and eventually slow down or crash the website.
AWS Lambda is a compute service that lets us run code without provisioning or managing servers. It executes our code only when needed and scales automatically, from a few requests per day to thousands per second. It is a pay-as-you-go, fully managed service: it removes over-provisioning costs and eliminates the need for boot time, patching and load balancing. The best part is that we pay only for the compute time we consume; there is no charge when our code is not running. With AWS Lambda, we can run code for virtually any type of application or backend service, all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale it with high availability. We can set up our code to trigger automatically from other AWS services or call it directly from any web or mobile app.
Advantages of Lambda:
We can use AWS Lambda to execute code in response to triggers such as changes in data, shifts in system state, or actions by users. Lambda can be directly triggered by AWS services such as S3, DynamoDB, Kinesis, SNS, and CloudWatch, or it can be orchestrated into workflows by AWS Step Functions. This allows us to build a variety of real-time serverless data processing systems.
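As a concrete sketch of the trigger model, here is a minimal Python handler for an S3 "object created" notification. `lambda_handler(event, context)` is the standard Python entry point; the bucket and key values in the event are whatever S3 sends, and the processing here (just collecting the new objects' URIs) is a hypothetical stand-in for real work:

```python
import json
import urllib.parse

def lambda_handler(event, context):
    """Minimal handler for an S3 'object created' event.

    AWS invokes this only when the trigger fires; there is no server to manage.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        processed.append(f"s3://{bucket}/{key}")   # real work would go here
    return {"statusCode": 200, "body": json.dumps({"processed": processed})}
```

Uploading a file to the configured bucket would invoke this function automatically, which is the "changes in data" trigger mentioned above.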
By combining AWS Lambda with other AWS services, we can build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers – with zero administrative effort required for scalability, back-ups or multi-data center redundancy.
AWS Lambda is a high-performance computing environment that not only eases developers' workload but significantly reduces the cost of developing software or apps. It works seamlessly in varied situations but should not be considered a multipurpose service.
For more such updates, stay tuned to RightCloud Blog!
We all know that AWS is the leader in cloud computing services, having pioneered the IaaS industry since 2006, five years ahead of other popular cloud service providers. However, this leads to certain inconveniences and drawbacks that the competition can exploit: essentially, the sheer number of AWS services is overwhelming.
Google Cloud Platform rapidly adds new products. The important thing to note is that while AWS does offer a plethora of services, many of them are niche-oriented and only a few are essential for any given project. For these core features, we think Google Cloud is a worthy competitor, even a hands-down winner sometimes, though many essential features, like PostgreSQL support, are still in beta on GCP.
Google Cloud can compete with AWS in the following areas:
Cost-efficiency due to long-term discounts: For example, a 2 CPUs/8GB RAM instance will cost $69/month with AWS, compared to only $52/month with GCP (25% cheaper). As for cloud storage costs, GCP’s regional storage costs are only 2 cents/GB/month vs 2.3 cents/GB/month for AWS. Additionally, GCP offers a “multi-regional” cloud storage option, where the data is automatically replicated across several regions for very little added cost (total of 2.6 cents/GB/month).
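The savings quoted above can be checked with quick arithmetic. The prices are the article's figures, not current list prices, so treat the numbers as illustrative:

```python
def pct_cheaper(aws_price, gcp_price):
    """Percentage saved by choosing the cheaper GCP price over the AWS price."""
    return round((aws_price - gcp_price) / aws_price * 100)

# Instance pricing from the article (2 CPUs / 8 GB RAM, USD per month):
instance_savings = pct_cheaper(69, 52)    # -> 25 (about 25% cheaper)

# Regional storage pricing (cents per GB per month):
storage_savings = pct_cheaper(2.3, 2.0)   # -> 13 (about 13% cheaper)
```

The multi-regional option at 2.6 cents/GB/month then costs only 0.6 cents/GB/month more than GCP's regional tier for automatic cross-region replication.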
Big Data and Machine Learning products
Instance and payment configurability: GCP is a lot more flexible when it comes to instance configuration. Along with predefined instance types similar to AWS, GCP also allows you to customize how many CPUs and how much RAM to use. For example, instance type n1-standard-1 comes with 1 CPU and 3.75GB RAM, but you can choose to have an instance with 1 CPU and, say, 1.75GB of RAM. Or 4.25GB. Or 5GB. You get the idea. If your compute needs fit between the available machine types, a custom machine type can result in significant price reductions. Both AWS and GCP announced a pay-per-second billing model. Starting October 2nd 2017, AWS will implement a pay-per-second billing for Linux VMs. And starting September 26th 2017, GCP will offer pay-per-second billing for all VM types and OSes. Furthermore, GCP provides a better approach to discounted long-term usage: Instead of requiring users to reserve instances for long periods of time as AWS does, GCP will automatically provide discounts the longer you use the instance — no reservations required ahead of time.
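The automatic long-term discount works by billing each successive chunk of the month at a deeper discount. The sketch below assumes GCP's historical sustained-use tiers for standard machine types (100%, 80%, 60% and 40% of the base rate for each successive quarter of the month); treat the exact tiers as an assumption for illustration:

```python
def sustained_use_cost(base_monthly_price, fraction_of_month_used):
    """Approximate sustained-use billing: each successive quarter of the month
    is billed at a deeper discount (assumed tiers: 100%, 80%, 60%, 40%)."""
    rates = [1.0, 0.8, 0.6, 0.4]
    cost = 0.0
    remaining = fraction_of_month_used
    for rate in rates:
        portion = min(remaining, 0.25)
        cost += base_monthly_price * portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return cost

# A full month at a $100 base price comes out to $70, an automatic 30%
# discount with no up-front reservation, unlike AWS reserved instances.
full_month = sustained_use_cost(100, 1.0)   # -> 70.0
```

The key contrast with AWS reserved instances is visible in the function signature: the discount depends only on observed usage, not on a commitment made ahead of time.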
Here is a comparison of VMs that fall into similar categories across providers, such as high memory, high CPU, SSD storage, etc.
GCP is a serious contender to AWS. AWS leads in the number of customers and products, thanks to its five-year head start. At the same time, GCP already provides all the needed functionality and offers competitive pricing and configuration models, backed by serious traffic privacy and security measures. With time, as more and more businesses embrace an AI-first approach to doing business, GCP's immense power in big data analytics and Google Chrome's leading position among browsers will allow Google Cloud Platform to become an even more serious counterpart to AWS.
Hope this post was useful. For more such updates, stay tuned to RightCloud Blog!
Security is one of the most prominent concerns cloud engineers need to address. Organizations move their applications and data to the cloud to reap productivity benefits, despite significant concerns about compliance and security. Security in the cloud is not the same as security in the corporate data center: different rules and thinking apply when securing an infrastructure over which one has no real physical control.
When leveraging cloud services, enterprises need to evaluate several key factors.
Many security professionals are highly skeptical about how secure cloud-based services and infrastructure are. In this post, we will discuss some best practices and guidelines that can be used to secure your cloud environment.
End-to-end encryption of data in transit
All interaction with servers should happen over TLS (version 1.2 or later) to ensure the highest level of security, and the TLS connection should terminate only within the cloud service provider's network.
Encryption for data at rest
Encryption of sensitive data should be enabled at rest, not only when data is transmitted over a network. This is the only way you can confidently comply with privacy policies, regulatory requirements and contractual obligations for handling sensitive data. Data stored on disk in cloud storage should be encrypted using AES-256, and the encryption keys should themselves be encrypted with a regularly rotated set of master keys. Ideally, your cloud service provider should also provide field-level encryption, letting customers specify the fields they want to encrypt (e.g., credit card number, SSN, CPF, etc.).
Rigorous and Continuous Vulnerability testing
The cloud service provider should employ industry-leading vulnerability and incident response tools. Such tools enable fully automated security assessments that test for system weaknesses and dramatically shorten the time between critical security audits, from yearly or quarterly to monthly, weekly or even daily. You can decide how often a vulnerability assessment is required, varying from device to device and from network to network, and scans can be scheduled or performed on demand.
Defined, enforced data deletion policy
After a customer’s data retention period (as specified in a customer contract) has ended, that customer’s data should be programmatically deleted.
Protective layers for user-level data security
The cloud service should provide role-based access control (RBAC) features to allow customers to set user-specific access and editing permissions for their data. This system should allow for fine-grained, access control-based, enforced segregation of duties within an organization to maintain compliance with internal and external data security standards.
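The RBAC model described above reduces, at its core, to a mapping from roles to permitted actions plus a check at every access. The sketch below is a minimal illustration with hypothetical role names, not any particular provider's implementation:

```python
# Role -> set of permitted actions (hypothetical roles, for illustration only).
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete", "manage_users"},
}

def is_allowed(user_roles, action):
    """Grant the action if any of the user's roles permits it; deny otherwise."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)
```

Segregation of duties falls out of the same structure: an auditor role, for example, could hold "read" without "write", so no single role can both change data and approve the change.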
Rigorous compliance certification
The two most important certifications are:
Hope this helps. Stay tuned to RightCloud Blog for more such information.