Friday, March 15, 2013

Google App Engine Tutorial- Part 2

Part 2 of the video series for Google App Engine…

Click for Part 1

Thursday, March 14, 2013

Virtualization- Basic Meaning

Hi everyone. In my previous post I talked about three basic requirements for Cloud Computing to take hold in the market. First among them was Virtualization. For a long time I have wanted to talk about Virtualization in detail but didn't get the time. Today I am free, so let's dig deeper into the basic meaning of virtualization and how it came into the picture.

Virtualization is not a 21st-century technology. IBM used it for its mainframe computers in the 1960s. A mainframe has multiple resources, and consolidating those different resources so that they act like a single resource requires virtualization. Cloud Computing uses this idea far more comprehensively, which makes virtualization the first requirement for implementing Cloud Computing. First, let's look at a few definitions of Virtualization.

According to Wikipedia, “Virtualization, in computing, is the creation of a virtual (rather than actual) version of something, such as a hardware platform, operating system, storage device or network device.”

Gartner states that “Virtualization is the abstraction of IT resources in a way that masks the physical nature and boundaries of those resources from resource users. An IT resource can be a server, client, storage, network, application or operating system.”

What we can pull out from these two definitions is that virtualization is the conversion of anything and everything from physical to logical, where the logical version knows nothing about the actual physical resource. Just remember: physical to logical. One more thing: if you simply search the web for definitions of virtualization, you may find some that say virtualization is running multiple operating systems on a single hardware resource. That is not the definition of virtualization; it is one type of virtualization, i.e. server virtualization. So don't confuse yourself when you come across definitions of that kind. Virtualization can be done for anything, e.g. network virtualization, storage virtualization, etc.

These are all formal definitions and explanations. I hardly ever understand a concept from a formal definition alone; I need examples and a detailed explanation to grasp a concept or technology. And if you say we started using virtualization with mainframe computers, that may be wrong. Actually, we have been using this idea in our day-to-day life in one way or another. For example:

Suppose I have a big shop. No one wants it on rent because it is very large and no one can afford that much rent. What do we do in our day-to-day life? We build a partition in the middle of the shop and turn it into two shops. The two shops are rented to two different shopkeepers, and each shopkeeper thinks he has rented a whole shop, even though the floor and ceiling are actually shared. What did the owner of the big shop really do? He VIRTUALIZED the big shop and created two small virtual shops. We use the same concept for IT resources: physical resources (server, storage, network) are shared among different virtual machines, yet each virtual machine is isolated from the others.

Now, to do server virtualization we need a piece of software known as a hypervisor. The most crucial piece of any virtual infrastructure is the hypervisor, which is what makes server virtualization possible. A hypervisor creates a virtual host that hosts virtual machines, and it is also responsible for creating the virtual hardware that the VMs will use. If you look up the term hypervisor, the definition will likely say that a hypervisor is an “abstraction layer.” That's because it abstracts the traditional server operating system (OS) from the server hardware. Another way of saying this is that the hypervisor decouples the OS from the hardware: your server OS no longer has to be tied to physical hardware, and the newly virtualized server becomes hardware-independent, encapsulated inside a virtual machine. There are two types of hypervisor, as shown in the figure below:

[Figure: Type 1 (bare-metal) vs. Type 2 (hosted) hypervisors]

A Type 1 hypervisor is installed directly on the physical server hardware, thus replacing the existing OS. This is the most efficient design, in that it offers the best performance as well as the most enterprise-level data center features. Examples are VMware vSphere and Microsoft Hyper-V.

A Type 2 hypervisor is installed on, and “hosted” by, an existing OS, and the virtual machines it runs are known as guest OSes. This is less efficient, but it lets you keep the applications already installed on the host OS. Examples are VMware Workstation, VMware Fusion and Windows Virtual PC.
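As a quick aside: Type 1 hypervisors (and KVM-style virtualization in general) generally rely on hardware virtualization support in the CPU (Intel VT-x or AMD-V). On a Linux machine you can check whether the processor exposes these extensions with a one-line grep against /proc/cpuinfo; a non-zero count means the flags are present (they may still need to be enabled in the BIOS).

egrep -c '(vmx|svm)' /proc/cpuinfo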

Hope you now understand the basic meaning of virtualization. If you give this concept some thought, you can easily see the many places where we use it in our lives.

See you all soon and till then happy Virtualizing.

Google App Engine Video Tutorial Series- Part 1

Thursday, March 7, 2013

Setup NFS (Network File System) on Ubuntu 10.04 LTS

NFS allows a system to share directories and files with others over a network. By using NFS, users and programs can access files on remote systems almost as if they were local files.

Some of the most notable benefits that NFS can provide are:

  • Local workstations use less disk space because commonly used data can be stored on a single machine and still remain accessible to others over the network.

  • There is no need for users to have separate home directories on every network machine. Home directories could be set up on the NFS server and made available throughout the network.

  • Storage devices such as floppy disks, CDROM drives, and USB Thumb drives can be used by other machines on the network. This may reduce the number of removable media drives throughout the network.

In this post I will show you how to set up NFS on Ubuntu 10.04.

Experimental Setup: To demonstrate NFS, my setup uses Oracle VirtualBox and an Ubuntu 10.04 ISO image, with Windows 7 as the host operating system. I installed two virtual Ubuntu 10.04 machines using Oracle VirtualBox, so I now have two Ubuntu machines running on VirtualBox, and both machines can reach each other over the network.

On Server Side:

Start the server Ubuntu machine.

1. First of all, install the NFS kernel server package. An Internet connection is required for this.

sudo apt-get install nfs-kernel-server

2. Create the directory you want to export. Always create a new directory; never export your operating system's base directories directly.

sudo mkdir -p /export/users

3. Change the permissions of these directories so that anyone can read and write them. Adjust the mode according to your needs; I am setting it to 777, which is the least secure option.

sudo chmod 777 /export
sudo chmod 777 /export/users
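You can double-check the resulting permissions with:

ls -ld /export /export/users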

4. Now, if you want to share an existing operating-system folder, you can bind-mount it onto the exported folder (server is the name of my machine, so my home directory is /home/server).

sudo mount --bind /home/server /export/users

5. The above binding will be lost when you restart your system. To make it permanent, open

sudo nano /etc/fstab

and add the following line:

/home/server /export/users none bind 0 0
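To apply the new fstab entry without rebooting (if the bind mount from step 4 is not already in place) and to confirm it is active, you can run:

sudo mount -a
mount | grep /export/users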

6. Now open

sudo nano /etc/default/nfs-kernel-server

and change (or add) the following line, with no spaces around the = sign:

NEED_SVCGSSD=no

7. Now open

sudo nano /etc/default/nfs-common

and make or change the following, again with no spaces around =:

NEED_IDMAPD=yes
NEED_GSSD=no

8. Make sure the following lines are present in /etc/idmapd.conf.

cat /etc/idmapd.conf

check:

Nobody-User = nobody
Nobody-Group = nogroup

9. The most important step: add the exported folders to the exports file. (192.168.80.136 is my server's IP; the /24 suffix used below allows any machine on that subnet, including the client, to mount the export.)

Open

sudo nano /etc/exports

add the following lines at the end:

/export         192.168.80.136/24(rw,fsid=0,insecure,no_subtree_check,async)
/export/users   192.168.80.136/24(rw,nohide,insecure,no_subtree_check,async)

10. Now restart the NFS kernel server.

sudo /etc/init.d/nfs-kernel-server restart
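After the restart, you can verify the list of exported directories; the same information can also be queried through showmount:

sudo exportfs -v
showmount -e localhost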

On Client Side:

1. Install nfs-common on the client side. This requires an Internet connection.


sudo apt-get install nfs-common
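Optionally, before mounting, you can check from the client which directories the server is exporting (192.168.80.136 is the server IP from the setup above):

showmount -e 192.168.80.136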


2. Open


sudo nano /etc/default/nfs-common


Set:


NEED_IDMAPD=yes
NEED_GSSD=no


3. Now mount the exported folder.


sudo mount -t nfs4 -o proto=tcp,port=2049 192.168.80.136:/ /mnt


/mnt is the folder on the client side where the exported files will appear.
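To confirm that the mount succeeded, you can check the mounted filesystems and list the contents of the mount point:

df -h /mnt
ls /mnt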


That's all; your NFS setup is complete.
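If you want the client to mount the share automatically at boot, you can also add the following line to the client's /etc/fstab (this assumes the same server IP and mount point used above):

192.168.80.136:/ /mnt nfs4 proto=tcp,port=2049 0 0

To detach the share manually at any time, run:

sudo umount /mnt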


Error:


If the following error appears:


mount.nfs4: No such device


you have to load the nfs kernel module with the following command.


sudo modprobe nfs
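This loads the module only for the current session; to have it loaded automatically at boot, you can append it to /etc/modules:

echo nfs | sudo tee -a /etc/modules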


Wednesday, March 6, 2013

Do you need the cloud? or Do you want the cloud?

Cloud is everywhere now. It is the most prominent area in research, in the IT industry and in academics as well. Every presentation or lecture I have seen in the past highlighted the benefits of cloud computing. Being a researcher in this field, I decided to find out where the cloud is not the best option, and to my surprise there are quite a few areas where it has had little impact. An organization saying “I want the cloud because everyone has it” is not the way to start. In this post I will discuss the areas where the cloud can be used and where it should not.

Before going any further, let's see why you might need the cloud:

  • The entire IT load is handled by professionals.
  • Capital expenditure is very small compared to a big upfront investment.
  • Time to market for any service is just “now”.
  • Flexibility (pay as you go).
  • Providers always make an offer you cannot refuse.

These are the benefits every cloud provider promises to deliver, but that is not the case all the time. Let's see why some IT organizations don't want the cloud:

  • Security: Every organization has a very large amount of data collected by spending millions of dollars. In the 21st century, data is the real wealth, so why would an organization hand its data to a third party?
  • Uptime: There is no such thing as 99.99% uptime. Many IT organizations complain that their cloud providers have been down for 3-4 days at a stretch. This is not always the provider's fault; sometimes the power grid fails or someone cuts a fiber line.
  • “We are working fine with our old methods; our IT can handle it.”
  • “Our IT will not be able to handle the cloud.”
  • Multiple vendors in the cloud, with no portability or interoperability.

After studying many papers and technical reports, I arrived at the following conclusion.

Business functions that suit cloud deployment tend to be low-priority business applications, for example business intelligence against very large databases, partner-facing project sites, and low-priority services. The cloud favours traditional web applications and interactive applications that combine two or more data sources and services with a short life span. Based on these facts, we can say that the cloud is suitable for applications that are modular and loosely coupled, with isolated workloads, and for applications that need significantly different levels of infrastructure throughout the month or that have seasonal demand, such as an increase in traffic during holiday shopping.

The cloud is not suitable for mission-critical applications that depend on sensitive data normally restricted to the organization (private clouds are nowadays used for this purpose to some extent), or for applications that run 24*7*365 with steady demand. The cloud also doesn't work well with applications that scale vertically on a single server.

Saturday, March 2, 2013

Virtualizing, Standardizing & Automating

In a cloud environment, people expect self-service, the ability to get started very quickly, self-provisioning or rapid provisioning, scalability, and better billing models. All of these features demand that your fundamentals are well in place. You cannot expect a cloud to deliver what a cloud is supposed to deliver if it is not virtualized, standardized and automated, because people expect technology that is easily scalable, portable, interoperable and self-running. That is what drives down cost and improves service. The three main constituents for achieving these requirements are discussed below:

  • Virtualization: Virtualization isn't a vague concept; you are probably already engaged in virtualization in one fashion or another, and the technology has been around for decades. So first, a simple definition: “Virtualization is an abstraction layer (hypervisor) that decouples the physical hardware (CPU, storage, networking) from the operating system to deliver greater IT resource utilization and flexibility.” I will go deeper into virtualization some other day; for now, just see how it helps in achieving the goals above. Using virtualization, one can easily allocate and deallocate resources for cloud users without any human interaction. This property provides easy and reliable scalability in the system.
  • Standardization: Adoption of cloud computing by MSBs and SSBs is mostly held back by the lack of standards in cloud computing. If an MSB uses the services of one cloud provider, it cannot easily move those services to another cloud. So going to the cloud requires collective acceptance and careful choice of a cloud provider, which is very difficult. If we can achieve standardization, uniform offerings will be readily available from different providers on a metered basis, and pricing will also decrease as the number of providers increases.
  • Automation: The cloud idea developed and became popular because of its self-service nature, which is the backbone of the acceptance of cloud computing across all fields of science. Automation requires self-service portals providing point-and-click access to all IT resources. Resources are provisioned on demand, helping to reduce IT resource setup and configuration cycle times; a small provisioning sketch follows below.
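To make the automation point concrete, here is a minimal sketch of scripted, on-demand provisioning using the VirtualBox command line (the same VirtualBox used in the NFS post above). The VM name "demo-vm", the memory size and the disk size are just illustrative values; a real self-service portal would run commands like these behind the scenes on whatever hypervisor it manages.

# Create and register a new VM definition ("demo-vm" is just an example name)
VBoxManage createvm --name "demo-vm" --ostype Ubuntu_64 --register
# Give it CPU, memory and a NAT network interface
VBoxManage modifyvm "demo-vm" --memory 1024 --cpus 1 --nic1 nat
# Create a 10 GB virtual disk and attach it through a SATA controller
VBoxManage createhd --filename demo-vm.vdi --size 10240
VBoxManage storagectl "demo-vm" --name "SATA" --add sata
VBoxManage storageattach "demo-vm" --storagectl "SATA" --port 0 --device 0 --type hdd --medium demo-vm.vdi
# Boot the VM without a GUI, the way an automated portal would
VBoxManage startvm "demo-vm" --type headless

Because every step is a plain command, the whole sequence can be wrapped in a script and triggered from a self-service portal, which is exactly the kind of automation described above.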