
The Cloud (Security) Phobia

"You know, we are in the Wall Street area and financial companies like us can't think about the Cloud now. Some of the stuff I do is mission critical and Cloud can't handle that—whether it is about latency or security," said one of my friends who works in NYC, while trying to brush aside the 'Cloud' from our discussions.
I replied in agreement and asked him why he still thought nothing from their stable could be moved to the Cloud. He said it was a policy decision and that they were not comfortable with the Cloud yet. And our discussion ended there. I see this as a common trend among cloud naysayers: the easiest excuse for blocking any 'cloud thoughts' is 'security'.
I remember the quote, "cloud-blockers should be fired," from a blog I read a long time ago. I am not audacious enough to go to that extreme. That said, it is a bit disheartening to see so many people falling victim to 'cloud phobia' citing 'security concerns.' I meet many people at meetups who still think their IT security practices are smarter than those of the AWS, Google, and Azure folks!

While we ponder the rationale for their confidence, or their sense of being 'secure', let's step back a bit and look at where this is going.
To date, most of the infamous security breaches have happened to so-called on-premise systems, not to cloud-hosted applications (touch wood). So the assumption that the Cloud is less secure and on-premise is more secure is not entirely correct. Today's sophisticated attacks don't discriminate between in-house and cloud computing; we are all vulnerable, whichever we choose to adopt. In fact, for smaller IT environments, cloud hosting can be more secure than in-house hosting, because they get to leverage the datacenters and security practices of providers like AWS or Azure.

Now, coming back to the so-called elite vertical, BFSI: the Cloud is a traditional enemy there. They are usually hesitant even to discuss the Cloud—otherwise, "talk to my hand!" We all understand that there are regulations that must be followed, and that not everything is suitable for the Cloud; some systems are better maintained in-house. However, shutting the door on the Cloud would not be prudent in the long term. Less-critical or less-sensitive stacks can be hosted in the public cloud. (Personally, I don't believe in using the word 'Cloud' for private infrastructure. A 'private cloud' is not really a Cloud, but an agile, in-house IT infrastructure. Most organizations that invested in private cloud technologies are not that agile yet; most have a glorified virtual infrastructure with some dressing on top named private cloud. That's a topic for another blog!) Cloud adoption in enterprises is now real; it's no longer hype. The financial domain is already dipping its toes into the (cloudy) water: many large banking institutions in the EU and Australia are moving significant chunks of their application portfolios to the public cloud, while keeping regulated and critical data in-house. Still, this initial inhibition towards the public cloud is expected to continue until deployment patterns and best practices are tried, tested, and proven.
The relevant question is, "What options and opportunities are available now to secure public-cloud-hosted applications just like 'on-premise' applications?"

•    Why can't we tighten the nuts and bolts of our network segment in the Cloud so that we will be more comfortable?
•    How can we get more visibility on what is happening in our piece of the Cloud?
•    How can we see who is accessing it? 
•    How, when, and where are these apps accessed or used?
•    Are there any anomalies or patterns to be concerned about?
The general notion is that security teams will not be comfortable unless they can touch and feel the physical perimeter security gear. However, the landscape is changing, with network gear and networks themselves becoming virtual. In this era of software-defined networking (SDN) and the software-defined datacenter (SDDC), we can't expect the devices to be physical. In the Cloud, everything is virtual and software-defined, so it's a matter of picking the right technology and vendor to acquire software appliances and solutions for network and application security in the Cloud.
Some organizations take it for granted that when they move to the Cloud, everything becomes the cloud service provider's (CSP's) responsibility. Yet if they were hosting in a co-location facility or on-premise, their security team would be involved and a proper security design would be in place. While moving to the public cloud, they just lift and shift, then relax, thinking it's secure.
All CSPs say it aloud: "Security in the public cloud is a shared responsibility." Customer and cloud provider responsibilities shift as we go up the 'X-as-a-service' (XaaS) pyramid. Public IaaS CSPs provide physical security, instance isolation, and self-service tools to consume cloud services securely. It is our responsibility to secure the operating systems and application stacks. We need to tackle many attack vectors and take steps at the fundamental layers to prevent attacks; for example, securing the OS layer, port communications, and network segmentation is very important.
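To make "our side" of the shared responsibility concrete, here is a minimal sketch, in plain Python, of the network-segmentation idea: model the tiers of a stack as address ranges and check flows against an explicit allow-list. The CIDR blocks and tier names are hypothetical; in a real deployment, this policy would live in security groups or network ACLs, not in application code.

```python
import ipaddress

# Hypothetical three-tier layout; the CIDR blocks are illustrative only.
TIERS = {
    "web": ipaddress.ip_network("10.0.1.0/24"),
    "app": ipaddress.ip_network("10.0.2.0/24"),
    "db":  ipaddress.ip_network("10.0.3.0/24"),
}

# Which tiers are allowed to initiate connections to which.
ALLOWED = {("web", "app"), ("app", "db")}

def tier_of(ip):
    """Return the tier a host belongs to, or None if unknown."""
    addr = ipaddress.ip_address(ip)
    for name, net in TIERS.items():
        if addr in net:
            return name
    return None

def connection_allowed(src_ip, dst_ip):
    """Check a single flow against the segmentation policy."""
    return (tier_of(src_ip), tier_of(dst_ip)) in ALLOWED

print(connection_allowed("10.0.1.10", "10.0.2.5"))  # web -> app: True
print(connection_allowed("10.0.1.10", "10.0.3.5"))  # web -> db: False
```

The point is simply that segmentation is a deliberate design decision we still own in the Cloud; the provider gives us the tools, not the policy.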
A layered defense in depth is the best strategy to protect your applications in the Cloud. Cloud marketplaces offer plenty of appliance-based solutions for network intrusion detection systems (NIDS) and intrusion prevention systems (IPS); we can use host-based intrusion detection systems (HIDS) or NIDS, depending on the right fit. There are also web application firewall solutions from many vendors, encryption technologies to secure data both at rest and in transit, hardware security module (HSM) appliances for encryption and decryption, solutions to detect distributed denial of service (DDoS) attacks, and encryption key management services. For SaaS offerings from the public cloud, cloud access security brokers are an ideal choice to secure access to SaaS applications.
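As a toy illustration of the anomaly-detection idea behind NIDS and DDoS tooling, the sketch below flags source IPs whose request count in a time window is far above the rest. The event shape and threshold are made up for illustration; real products use far richer signals (payload inspection, protocol state, learned baselines).

```python
from collections import Counter

def flag_noisy_sources(events, threshold=100):
    """events: iterable of (timestamp, source_ip) seen in one window.
    Returns the set of source IPs whose request count exceeds threshold."""
    counts = Counter(ip for _, ip in events)
    return {ip for ip, n in counts.items() if n > threshold}

# One normal host and one abnormally chatty one in the same window.
window = [(t, "10.0.0.9") for t in range(500)] + \
         [(t, "10.0.0.7") for t in range(20)]
print(flag_noisy_sources(window))  # {'10.0.0.9'}
```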

So it's a matter of awareness of these cool technologies and solutions to make cloud-hosted apps more fortified. The funny part is that every organization does this basic security fortification when hosting internally, and misses it while moving to the Cloud!
Availability is another major psychological barrier in customers' minds, and sometimes they subtly use the term 'security' to express it. We all know that availability is one of the tenets of 'security'. There are situations where customers have a multi-region deployment to mitigate a geographical outage of a CSP, yet a service outage can be worldwide; in such cases nothing much can be done other than waiting for the services to come back up. Another possible situation is that some services are affected locally while other services critical to the stack suffer a global outage; again, a multi-geo deployment won't help. When an outage extends for hours, it is really frustrating and can have a huge business impact. I have seen and been through all these situations and agree that they are really nasty. Luckily they are very rare, and cloud providers are now more resilient and geographically spread. While designing disaster recovery (DR) plans in the Cloud, we should be smart enough to prepare for emergencies such as a complete CSP outage; taking regular backups on another cloud or an on-premise platform is a good option.
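One small, concrete piece of that DR hygiene is verifying that a backup copied to another platform still matches the original. A minimal sketch, assuming the backup can be fetched back as bytes; the file contents here are placeholders:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Digest used to verify a backup copy byte-for-byte."""
    return hashlib.sha256(data).hexdigest()

original = b"customer-ledger-2015-04"  # placeholder payload
restored = b"customer-ledger-2015-04"  # fetched back from the second platform

if sha256_of(original) == sha256_of(restored):
    print("backup verified")
```

Backups that are never restored and verified are only a hope, not a plan.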
Another common grouse is the 'lack of visibility' into who is accessing the cloud environment and what changes are being made. Cloud providers like AWS now record API access logs through CloudTrail, and these logs can easily be parsed with tools like Logstash. This capability can also satisfy the auditing requirements of some regulations.
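For a sense of what such visibility looks like, here is a small sketch that answers "who did what, when, and from where" from a CloudTrail-style log. The record below is heavily simplified; real CloudTrail events carry many more fields.

```python
import json

# A simplified CloudTrail-style payload (real events are much richer).
raw = json.dumps({"Records": [
    {"eventTime": "2015-04-01T10:22:33Z",
     "eventName": "TerminateInstances",
     "userIdentity": {"userName": "alice"},
     "sourceIPAddress": "203.0.113.7"},
]})

def summarize(trail_json):
    """Yield one (who, what, when, where) tuple per recorded API call."""
    for rec in json.loads(trail_json)["Records"]:
        yield (rec["userIdentity"].get("userName", "unknown"),
               rec["eventName"], rec["eventTime"], rec["sourceIPAddress"])

for who, what, when, where in summarize(raw):
    print(f"{who} called {what} at {when} from {where}")
```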
Skepticism about multi-tenancy in the Cloud is another concern (really?). Even if some hacking demos suggest the possibility of snooping on neighboring virtual machines (VMs), in reality there are no public incidents proving it. CSPs take this very seriously and keep their hypervisor layers patched with the latest updates to prevent such incidents. I am not claiming that multi-tenancy is foolproof, but exploiting it is very difficult and rare.
Another concern is CSPs going out of business. In theory, anyone with a publicly accessible application can be a 'SaaS' vendor, and anyone with a virtual environment can be an IaaS provider, so we need to do due diligence when picking 'cool and hot' 'XaaS' providers. For Microsoft, Google, AWS et al., we don't need a financial advisor to evaluate them.
Okay, now what happens when an insider attack wrecks a CSP, like a co-pilot crashing the plane? The answer is that an insider attack could happen in-house too. It's all about properly implemented delegation of power and privilege distribution models. CSPs have well-thought-out processes and implement best-in-class security practices, so the probability of an insider attack at a CSP is lower than at the customer organization. An example is the security disaster at Code Spaces, caused by the lack of multi-factor authentication and proper identity and access management (IAM).
To summarize, nothing is perfect, in the Cloud or on-premise, when dealing with security. As I mentioned earlier, not everything is a perfect fit for the Cloud either. However, with a bit of proper planning and the right technologies and partners, organizations can overcome cloud-phobia. And it's a beautiful place to be!
Now that I am done with my rants: if you have a stable app without load fluctuation running on recently refreshed hardware, I am with you. You don't need to go to the Cloud. Better to stay put, happily and sensibly, on-site!

Sreejith Gireesan
Cloud Infrastructure Architect 
Cloud, Cloud Computing, Cloud Security, BFSI, Cloud Service Provider

Kernels - A New Era

"There is nothing new in the world except the history you do not know."
Harry S Truman.
Virtualization is hardware abstraction for operating systems, popularized on x86/x86_64 through VMware, Hyper-V, Xen, and KVM. Container-based (OS-level) virtualization has gained traction recently with Docker and similar initiatives (Flockport, Spoon, Rocket). The underlying ideas are not new; they go back as far as CP/CMS in 1968.
A quick review of the CP/CMS history could be a buzzkill for fans of containers and Docker, but Docker's benefits are enduring nonetheless.
Given that hypervisors and containers are common knowledge, it is natural for the IT industry to explore improvements. Enter the unikernel, or library OS. Given that the majority of apps may not use all the features of a typical OS, why bother with VMs, which require full OS installs? What if we had a facility to treat an OS as a library of features from which an app selects only those it uses? What if an app were deployed directly on the hypervisor? There are many players in this segment; OSv (C/C++) and MirageOS (OCaml) are well known, and similar efforts are being made by Erlang on Xen.
ClickOS (C) extends the unikernel philosophy (with Xen), staying bound to the hardware while providing specialized functionality (network function virtualization, NFV).
Rump kernels start similarly, but aim to be more than unikernels running on hypervisors; they explore kernel-component portability.
While unikernels pursue closer app/OS integration, some are exploring the opposite: removing the OS completely from apps. Enter Arrakis: "In Arrakis, we ask the question whether we can remove the OS kernel entirely from normal application execution."
Some are exploring language runtimes directly on bare metal, such as Clive (Go). This is similar to Java/.NET, except that the runtime binds directly to the hardware, with no OS or hypervisor in between.
Thus the new kernels span a spectrum: portability (Rump), minimization (OSv, MirageOS, Erlang on Xen), specialization (ClickOS), and elimination (Arrakis, Clive).

For customers, apps on unikernels promise:
•    higher performance, thanks to less overhead: no full OS, and closer proximity to the hardware. For example, every modern OS/file system (Windows/OS X/Linux) has a search feature that runs regardless of its relevance to apps; a unikernel won't need or carry such bloatware features.
•    more security, thanks to a smaller OS footprint and heterogeneous OS layers. Malware is more easily exposed and removed, since there is less OS to hide in. And if a virus depends on or targets certain OS features, for example USB drives or specific firmware, it becomes ineffective.

This is a journey for the entire industry. Along with the componentization of apps, this is the componentization of the OS. We can think of it as the consumer Internet (enterprise OS) meeting the Industrial Internet (embedded OS), resulting in the evolution of the Internet of Things (multilateral OS). This will cause the enterprise OS to "disappear", with apps taking center stage, as is already happening in mobile.

Srinivasan Balram,
CTO, Marlabs

Virtualization, OS, Operating System, Kernel, Paravirtualization

Build Intelligent Spaces (iSpaces) and Systems with IoT and Wearables

The activity data from wearable devices and sensors can be used to study, analyze, and optimize the behavior of individuals and groups to facilitate Intelligent Spaces, enabling better customer experiences and smarter business models.

Tracking, monitoring, and building prediction models on top of this data, and combining it with the way (humans, organisms, things) interact with other (humans, organisms, things), will enable us to design self-optimizing intelligent spaces and systems.

Welcome to a whole new world of building solutions leveraging Computer Science, Architecture, Machine Learning, Spatial Intelligence, Sociology, and Design.

Why does it matter?

Let us discuss a few implementation scenarios from different domains as examples.

Digital Marketing and Behavioral Targeting - Understanding customers better by leveraging their physical activity data can enable a highly tailored user experience and recommendations. For example, think about a nutrition company app requesting health data to provide better vitamin recommendations to the user.

In-Store Navigation and Optimization using Behavior Analysis - Enabling smart retail stores by providing in-store navigation for users, and using that data to optimize the store layout, can profoundly improve the customer experience.

Better Solutions for Farming - Leveraging wearable technologies for identifying diseases of cattle from motion patterns can result in significant cost savings.

Enhanced Investment Banking Experience - Investment banks can detect priority customers when they walk in, and can alert in-house staff to enable better customer service.

Smart Enterprise and Better Supply Chain - Real-time supply chain tracking, better warehouse management systems, better asset tracking, and so on; the options here are endless. There are classic solutions for specific problems (like RFID-based tracking), but a lot of these scenarios can be re-imagined in a better way.

Enabling sustainable solutions leveraging IoT 
The above are a few scenarios that I've actually encountered over the last few years, where the user experience can be improved with a little bit of automation.

Integrating intelligent spaces and systems to your existing business model and having a tailor-made strategy may enable enhanced user experience and customer satisfaction, better brand value, reduced cost, and better ROI.  

Designing Intelligent Spaces (iSpaces) and Systems

So, how can you design intelligent spaces and systems? 

Traditional software systems tend to see the outside world as a disconnected entity. Though the model-driven approach tries to abstract the outside domain, capturing time-series behavior is often not possible, and the resulting models remain static.

Building intelligent spaces and systems demands:

  • enabling channels to integrate the behavior of the outside world back into the system in an implicit way;
  • designing the system to be smart/intelligent enough to adapt to the outside world and user needs without explicit user actions;
  • building models so that the system can predict or recommend the future course, report anomalies, and even suggest corrective actions;
  • working with stream data, building pipelines, and taking actions (e.g., the capability to process stream data and generate triggers in real time).
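The last requirement above can be sketched in a few lines of Python: a sliding window over sensor readings that fires a trigger when the moving average crosses a threshold. The window size, threshold, and readings are arbitrary illustration values; a production system would sit behind a proper stream-processing pipeline.

```python
from collections import deque

class StreamTrigger:
    """Fires when the moving average of recent readings exceeds a threshold."""

    def __init__(self, window_size=5, threshold=30.0):
        self.window = deque(maxlen=window_size)  # oldest readings drop off
        self.threshold = threshold

    def push(self, reading):
        """Feed one reading; return True if the trigger fires."""
        self.window.append(reading)
        avg = sum(self.window) / len(self.window)
        return len(self.window) == self.window.maxlen and avg > self.threshold

trigger = StreamTrigger()
readings = [20, 22, 25, 31, 35, 38, 41]  # e.g., temperature samples
fired_at = [r for r in readings if trigger.push(r)]
print(fired_at)  # [38, 41]
```

The same pattern (window, aggregate, compare, act) generalizes to occupancy counts, motion events, or any other sensor stream.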

From a technology standpoint, we'll soon see cloud ecosystems maturing to provide better PaaS services around IoT: to provision, virtualize, connect, and deploy devices at scale.

I may detail more domain specific solutions in my later posts.

But start thinking about how to enable a better customer experience and better ROI by having a tailor-made strategy to implement intelligent spaces and systems. Please add your thoughts in the comments section.

How can you disrupt, enhance, or optimize your own business model by enabling intelligent spaces and systems, before someone else does?

Anoop Madhusudanan
Practice Head - Microsoft and Mobile
Internet of Things, IoT, Business Intelligence, Wearable Devices