
Mission CMMI Level 5 - the Journey of Marlabs from L3 to L5

Marlabs has been appraised at Capability Maturity Model Integration (CMMI) Level 5 using the Standard CMMI Appraisal Method for Process Improvement (SCAMPI) prescribed by the CMMI Institute. Level 5, the optimizing level, is the highest CMMI maturity level. Appraisal at Level 5 recognizes our ability to continually improve process performance through both incremental and innovative technological changes.

In late 2011, Marlabs was appraised at CMMI L3, signifying that our processes were well defined and standardized across the company. In line with our strategy of achieving business objectives through more advanced, consistent, and well-defined processes, we strive for the highest levels of maturity in every aspect of our work. Appraisal at CMMI L5 was the next step in this journey of continuous improvement.

We kick-started the mission in mid-2012, aided by external consultants who analyzed and identified areas of improvement, gathered measures and metrics, and evaluated our performance to set improvement goals. Processes to meet these goals were optimized and deployed across projects through high-maturity process training and facilitation for project managers (PMs).

Our Quality Team shares their experiences and excitement 

“We appreciate the PMs for their interest in quantitative project management concepts such as ANOVA, hypothesis testing, control charts, Pareto analysis, Monte Carlo simulation, regression analysis, normality tests, and Capability Six Pack analysis, which form the core of CMMI L5 implementation. All of these new methodologies, statistical analyses, interpretations of outcomes, and statistical demonstrations of continuous improvement needed to be internalized by every practitioner. The Quality Team accomplished this over multiple iterations, and our Delivery Teams did the same with the greatest dedication and enthusiasm. The Quality Team’s determination and the support from PMs made this achievement possible in such a short span of time.”

Marlabs CMMI L5 Team

“Then came the next step: the final assessment! The Lead Assessor performed readiness checks and flagged off the final assessment on January 5, 2015, with document reviews followed by interviews. We appreciate the efforts of all the Appraisal Team Members (ATMs) in identifying the expectations of the CMMI L5 model, reviewing all the processes and project documents, and setting the stage for interviewing the PMs, the Software Engineering Process Group (SEPG), and other Functional Area Representative (FAR) groups. After several rounds of interviews, document reviews, and a few course corrections, the assessment went well.


D-day, the much-awaited day of the final presentation, arrived on January 16, 2015. The Appraisal Team categorized the findings into strengths and weaknesses for each process area across the maturity levels. The Lead Assessor started presenting from the Level 2 process areas, moved up through the higher levels, and declared the final result of our assessment: “Yes, Marlabs is at CMMI L5!” The moment was followed by thunderous applause. Thus our CMMI L5 mission was successfully accomplished, and Marlabs joined the elite group of companies appraised at L5! This achievement goes beyond the appraisal: it proves that the different teams at Marlabs are interwoven with team spirit and work together as one, and it demonstrates our confidence to take on more challenging assignments. Yes, this is indeed a proud moment for all Marlabians!”

Other teams share their experiences and challenges

I feel very proud to have been part of a successful CMMI assessment at Marlabs for the third time in my 8.5-year tenure. The Level 5 journey was as interesting as it was challenging. The support from the Software Quality Assurance (SQA) team was amazing, and the team coordination was excellent. The path to the Level 5 assessment was a step-by-step process involving information sharing and interviews by the assessing KPMG team. Beyond proficiency in statistics, quantitative management and how it can be applied to improve project management are critical for achieving L5. The sessions on high-maturity practices by the SQA team were quite informative. Evaluating our current status using control charts in Minitab was really interesting, and analyzing the PDPC reports generated from Minitab using historic data turned out to be the best part of the exercise. Appraisal at Level 5 is indeed a great milestone for Marlabs. I thank the management and my team members for their support and the knowledge they shared during our journey to L5.

- Indumathy (Project Manager)

I was really excited when I was asked to serve as an ATM for the upcoming CMMI L5 appraisal at Marlabs. What I experienced over the last two months, as we trained for the event and went through the whole appraisal exercise, was really wonderful.

Having been a PM at Marlabs for the past two years, I found that the journey towards L5 honed my project management skills. It was a huge learning opportunity. The busy schedule during the appraisal period was also a test of our mental and physical strength. I was prepared for it to be stressful, but the additional physical strain that came with it took me by surprise. After the first few days, however, we all learned to cope with that too in our own ways. Sitting in the board room with eight other members for two whole weeks, assessing your own organization as a neutral participant, was a unique experience. It was heartening to see the confidence of the FAR team members when they faced the interviews; after all, it is no easy task to face an eight-member interview panel and not appear nervous. In the end, it was heart-warming to see the joy on everyone’s face at being appraised at CMMI L5.

- Shireen Fathima (ATM)

Marlabs CMMI L5 Press Release

CMMI Institute

Tags: 
CMMI, CMMI Level 5, CMMI L5 model, SCAMPI, software quality, project management, ANOVA, Test of Hypothesis, Control Charts, Pareto Analysis, Monte Carlo Simulation, Regression Analysis, Normality Tests, Capability Six Pack Analysis

Big Data on Hadoop

Introduction

The world is moving towards cloud computing, a new technological era that has just begun. Have you ever wondered why the word “cloud” became a term in the field of Information Technology?

In cloud computing, the word “cloud” is a metaphor for the Internet. Cloud computing is a form of Internet-based computing in which a wide variety of services, such as storage, servers, and applications, are offered to enterprises and individuals over the Internet. Typically, cloud computing draws on multiple computing resources rather than relying on local servers or dedicated devices to handle complex applications. This mechanism is extremely beneficial because it harnesses unused or idle computers on the network to solve problems that are too intensive for any standalone computer.

Over the past few years, several designs, prototypes, and methodologies have been developed to tackle parallel computing problems, and specially designed servers were tailored to meet parallel computing requirements. The major problem was that these servers were too expensive to operate and yet did not produce the expected results. With the advent of multi-core processors and virtualization technology, these problems are diminishing, and effective, powerful tools have been built to achieve parallelization on commodity machines. One such tool is Hadoop.

Big Data

One of the key components of business analytics is data. Data is ubiquitous in every field, helping to forecast, vet, transact, and consolidate any given business analytics problem. It also serves as a failsafe by preserving the history of each event carried out during the course of development. In this competitive business world, the demand for data is increasing exponentially. Of late, the magnitude and variety of data available to enterprises, and the need to analyze that data in real time for maximum business benefit, have been growing rapidly. With the advent of social media and networking services like Facebook and Twitter, search engines like Google, MSN, and Yahoo, and numerous e-commerce and online banking services, data is proliferating at tremendous speed. Much of this data is unstructured or semi-structured. We call it Big Data.

Big Data is measured in terabytes, petabytes, exabytes, and sometimes even more. Processing, managing, and analyzing data at this magnitude is highly strenuous and has been a daunting task for business analysts over the years. Traditional data management and analytical tools struggle to process such large volumes under the unprecedented weight of Big Data. Hence, new approaches are emerging to cope with this problem and help enterprises gain maximum value from Big Data. One such approach is an open source framework called Hadoop.

What is Hadoop?

Hadoop is an open source software framework from the Apache Software Foundation, built to support data-intensive applications running on large clusters and grids and to offer scalable, reliable, distributed computing. The Apache Hadoop framework is predominantly designed for the distributed processing of large data sets residing on clusters of computers, using a simple programming paradigm. It can scale from a single server to tens of thousands of machines, where each machine is responsible for local computation and storage. The framework also identifies and handles node failures at the application layer, thereby offering high availability of the service.

Since using the Hadoop framework can involve dozens of machines, it is worth understanding what clusters and grids are; both work in a similar fashion, with a subtle difference in their setup.

Cluster

Typically, a cluster is a group of computers (nodes) with the same hardware configuration, connected to one another through a fast local area network, where each node performs a desired task. The results from all the nodes are aggregated to solve a problem that usually requires high availability and low latency.

Grid

A grid is similar to a cluster, with the subtle difference that its nodes are distributed across different geographical locations and connected to one another over the Internet. In addition, each node in a grid can run a different operating system on different hardware.


What makes Hadoop a paramount tool?

In a distributed environment, data must be meticulously arranged across several computers to avoid inconsistency and redundancy in the results for a particular problem, and it must be processed judiciously to achieve low latency. Several factors therefore influence the speed of a distributed computing system: how the data is stored, the storage algorithm that manages the distributed data, the parallel computing algorithm that processes it, the fault tolerance checks on each node, and so on.

In a nutshell, Apache Hadoop uses a programming model called MapReduce for processing and generating large data sets with parallel computing. The MapReduce programming model was originally introduced by Google to support distributed computing on large data sets across clusters of computers. Inspired by Google’s work, Apache developed an open source framework for distributed computing, with Hadoop as the end result. Hadoop is written in Java, which makes it platform independent and easy to install and use on any machine that supports Java.

The Apache Hadoop framework commonly consists of three major modules:


  •  Hadoop Kernel
  •  MapReduce
  •  Hadoop Distributed File System


Hadoop Kernel

Hadoop Kernel, also known as Hadoop Common, provides an efficient way to access the file systems supported by Hadoop. This common package consists of the Java Archive (JAR) files and scripts required to start Hadoop.

MapReduce

MapReduce is a programming model primarily implemented for processing large data sets. It was originally developed by Sanjay Ghemawat and Jeffrey Dean at Google. In a nutshell, the MapReduce model takes a big task and divides it into discrete tasks that can be executed in parallel. At its crux, MapReduce is a composite of two functions: map and reduce.
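
To make the model concrete, here is a minimal sketch of the canonical word-count job written against the Hadoop MapReduce Java API: the map function emits a (word, 1) pair for every word it encounters, and the reduce function sums the counts collected for each word. The class name and the command-line input and output paths are our own choices for illustration, not part of the article.

// A minimal word-count sketch using the standard Hadoop MapReduce Java API
// (org.apache.hadoop.mapreduce). Input and output paths come from args.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // map: for each word in an input line, emit the pair (word, 1)
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // reduce: sum all the 1s collected for a given word
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /input
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. /output
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The framework takes care of splitting the input across nodes, running mappers close to the data, shuffling the intermediate pairs, and handing them to the reducers, so the programmer only supplies the two functions.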

Hadoop Distributed File System (HDFS)

HDFS is a subproject of the Apache Hadoop project. Hadoop uses HDFS to achieve high-throughput data access. HDFS is built in Java and runs on top of the local file system. It was designed to read, write, and process large data files ranging in size from terabytes to petabytes; files are split into large blocks (64 MB by default in early Hadoop versions), so an ideal file size is a multiple of 64 MB. HDFS stores large files across multiple commodity machines. Using HDFS, you can access and store large data files split across multiple computers as if they were local files. High reliability is achieved by replicating the data across multiple nodes, so RAID storage on the nodes is not required. The default replication factor is 3, meaning each block is stored on three nodes.
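
To illustrate the “as if they were local files” idea, below is a minimal sketch that uses the HDFS Java client API (org.apache.hadoop.fs.FileSystem) to write a small file into HDFS and read it back. The NameNode address and the file path are assumptions made for the example.

// A minimal HDFS client sketch; the NameNode URI and the file path below
// are assumptions for illustration only.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode:9000"); // assumed NameNode address

    FileSystem fs = FileSystem.get(conf);

    // Write a file; HDFS splits it into blocks and replicates each block
    // (three copies by default) across the DataNodes.
    Path file = new Path("/user/demo/hello.txt");
    try (FSDataOutputStream out = fs.create(file, true)) {
      out.write("Hello, HDFS!\n".getBytes(StandardCharsets.UTF_8));
    }

    // Read it back exactly as if it were a local file.
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
      System.out.println(in.readLine());
    }

    fs.close();
  }
}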

Conclusion

The Apache Hadoop framework is booming in the world of cloud computing and has been embraced by many enterprises for its simplicity, scalability, and reliability in confronting Big Data problems. It demonstrates that even commodity desktop PCs can be used efficiently to compute over complex, massive data: formed into a cluster, they minimize CPU idle time and judiciously delegate tasks to the processors, making the approach cost effective.

No matter how big your data is or how fast it grows, users will always crave fast data retrieval, forgetting how complex the data’s arrangement is and how difficult it is to process. The bottom line is that the speed and accuracy of analysis should not degrade, regardless of data size. Cloud computing is well placed to meet all of these needs.

Article Contributed By: Abhishek Subramanya – Java COE, Mysore.


Tags: 
Big Data

A view into Hybrid Cloud and Hybrid IT

Infrastructure services and management are among the major functions in any IT organization. Over the years, IT services have evolved from physical to virtual to cloud. Lately, companies are either moving to the cloud or thinking about it, and most are still experimenting to find what works best for them. The major attraction of the cloud for many IT services is undoubtedly cost, along with the management model itself. Current market shifts include the increasing virtualization of technology, the acceptance of service-based management methodologies, and cloud computing as a new delivery model for IT functions and services.

So what are the types of clouds, and which is best suited for an organization?

There are mainly four types of clouds: public, private, community, and hybrid. An organization has to decide which best suits its requirements, which in turn depends on many factors. Let’s look at each of them briefly.

  • Private Cloud – Infrastructure operated and maintained solely for a single organization.
  • Public Cloud – Infrastructure made available to the general public or a large group, typically owned by a cloud provider. One major player is Amazon.
  • Community Cloud – Infrastructure shared by several organizations in support of a specific community with shared concerns.
  • Hybrid Cloud – A combination of a public cloud and a private cloud. A hybrid cloud can improve resilience and provide disaster recovery.

The majority of hybrid vendors provide IaaS services today; some of them are VMware, Rackspace, HP, and IBM.

Each type has its advantages and disadvantages, with cost and security being major factors; we will not go in-depth on them here.


The diagram below shows a hybrid cloud (note: the community cloud is not included).

Hybrid Cloud


According to a Gartner report, the majority of private and community cloud services will evolve into hybrid clouds by 2017.

Hybrid IT

Hybrid IT is the mission and the operational model for IT infrastructure and operations in a cloud computing world.

So what is Hybrid IT?

Hybrid IT is an approach to enterprise computing in which an organization provides and manages some information technology (IT) resources in-house while using cloud-based services for others. In short, it is a mixture of in-house IT resources and services from the cloud.

(Source: http://searchcloudcomputing.techtarget.com/definition/hybrid-IT)

Hybrid IT is transforming IT architectures and the role of IT itself, according to Gartner, Inc. Hybrid IT is the result of combining internal and external services, usually a mix of internal and public clouds, in support of a business outcome. It relies on new technologies to connect clouds, on sophisticated approaches to data classification and identity, and on service-oriented architecture, and it heralds significant change for IT practitioners. Workloads will move around in hybrid internal/external IT environments.

For critical applications and data, IT organizations have been slower to adopt public cloud computing. Many discover that public cloud service providers cannot meet their security requirements, integrate with enterprise management, or guarantee the availability necessary to host critical applications. Therefore, organizations continue to own and operate internal IT services that house critical applications and data.

But the cloud is a means to improved efficiency, not an end in itself, and companies should not overlook the need to connect it to business goals and priorities. IT now has an opportunity to capitalize on its hard work in domain management, service management, and virtualization to leverage cloud services for increased resiliency and efficiency.


There are many providers of Hybrid IT service management; one such offering is the HEAT Hybrid IT Service Management solution from FrontRange. With this solution, organizations can easily request a service or change; automatically approve and authorize the request; plan appropriate remediation measures; automatically deploy changes to end users; monitor compliance and service level agreements; and control their services portfolio on an ongoing basis to ensure enhanced service quality and customer satisfaction.


Without the need to deliver services rapidly and efficiently to the business, there would be no requirement for elastic and flexible computing environments. IT needs to determine which services will return the highest value to the business, and it needs to be able to demonstrate the relative value propositions of in-house versus outsourced, and cloud versus traditional, approaches.


Tags: 
Hybrid Cloud, Hybrid IT
