
Sunday, February 18, 2018

DBA as a Service - DBAaaS!

Wondering what will happen to the DBA community in 2020? Well, they will still be around! However, they will be doing cooler stuff than ever before! Want to know what and how? Please read on...

By 2020, 50%+ of all enterprise data will be managed autonomously, and 80%+ of application and infrastructure operations will be resolved autonomously. This is quite possible with AI becoming a reality and cementing its foothold in the industry. To move in this direction, we first need to automate all the DBA tasks, and then implement machine learning so that the database makes its own decisions based on what is going on inside it, with minimal or no involvement from DBAs. To automate all of those tasks, we need to develop a framework, which I call DBAaaS, define role-based access, and develop a DBAaaS mobile app / portal!
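To give a flavor of one such automated task, here is a minimal sketch, assuming an Oracle database and the sqlplus client; DB_USER, DB_PASS, and DB_TNS are placeholders that a DBAaaS framework would inject based on the caller's role:

#!/bin/bash
# Hypothetical DBAaaS task: tablespace usage report, to be exposed
# through the DBAaaS portal / mobile app as a self-service call.
sqlplus -s "$DB_USER/$DB_PASS@$DB_TNS" <<'SQL'
SET PAGESIZE 100 LINESIZE 200
SELECT tablespace_name,
       ROUND(used_percent, 1) AS used_pct
FROM   dba_tablespace_usage_metrics
ORDER  BY used_percent DESC;
SQL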



Here is a quick prototype of the "DBAaaS 1.0" mobile app, which I developed in less than 2 hours using a rapid prototyping tool!

URL : https://gonative.io/share/rzydnx
User name : testing
Password : test



Sunday, February 4, 2018

Pivotal Container Service (PKS), Kubernetes (K8s), Docker and Containers: in a Nutshell


In this blog, I will cover Pivotal Container Service (PKS), Kubernetes (K8s), Docker and containers. Before we touch PKS, let's understand what Docker and containers are!

Once upon a time, there was the physical server era, wherein we used to have a very large server, install an OS on it, and install various applications on top of that! Then the hypervisor architecture was born, wherein on the same server you just need to install a hypervisor, which enables you to create multiple virtual machines, and in each VM you can install an OS and the required app. Now there is a new container era!
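To make the container era concrete, here is a quick sketch assuming Docker is installed; unlike a VM, there is no guest OS to install:

docker run -d --name web -p 8080:80 nginx   # pull the nginx image and serve it on port 8080
docker ps                                   # the whole "machine" is just a process with its own filesystem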



There are advantages and disadvantages to running containers directly on a server. However, most companies are taking advantage of both hypervisor technology and container technology to build their next-generation platforms.

Now let's look at Kubernetes, generally called K8s. It's an orchestration tool for containers.




A K8s cluster consists of 2 major parts, the Master and the Nodes. Nodes are sometimes called Minions as well.
The Master has 4 major parts (a quick way to poke at the apiserver follows the list):
1) kube-apiserver: Front end to the control plane; exposes the API (REST) and consumes JSON
2) Cluster store: Persistent storage for cluster state and config; it uses etcd, the "source of truth" for the cluster, so have a backup plan for it!
3) kube-controller-manager: Controller of controllers; watches for changes and helps maintain the desired state
4) kube-scheduler: Watches the apiserver for new pods and assigns work to nodes
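Since the kube-apiserver is the REST front end to the control plane, you can talk to it directly. A minimal sketch, assuming kubectl is already configured against your cluster:

kubectl get nodes                         # node list via the apiserver
kubectl proxy --port=8001 &               # authenticated local proxy to the apiserver
curl http://localhost:8001/api/v1/nodes   # same node list, as raw JSON over REST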

Nodes have 3 major parts and run Pod(s) inside them.
1) kubelet: The main Kubernetes agent; registers the node with the cluster, watches the apiserver, instantiates pods, reports back to the master, and exposes an endpoint on :10255
2) Container engine: Does container management such as pulling images and starting/stopping containers. Generally Docker, but it can be rkt as well.
3) kube-proxy: Handles Kubernetes networking and pod IP addresses. All containers in a pod share a single IP. Load balances across all pods in a service.

You can run multiple pods on one node. It is not typically recommended to run a large number of containers in a pod; the best practice is to run a primary container along with a few additional containers that provide services to the primary container in a given pod.
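Here is a minimal sketch of that pattern: one primary web container plus a logging sidecar in the same pod. The names and images are placeholders, and it assumes kubectl access to a cluster:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web                  # primary container
    image: nginx
  - name: log-sidecar          # helper container serving the primary one
    image: busybox
    command: ["sh", "-c", "tail -f /dev/null"]
EOF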

And finally, let's see PKS!
PKS gives IT teams the flexibility to deploy and consume Kubernetes on-premises with vSphere, or in the public cloud. PKS 1.0 currently supports vSphere and GCE. PKS leverages a specific BOSH release for K8s, which has specific requirements.


Here are the major components of PKS:
1) PKS Controller: The control plane where you create, operate, and scale Kubernetes clusters from the command line and API (see the sketch after this list).
2) Built with open-source Kubernetes: Constant compatibility with GKE ensures access to the latest stable K8s releases.
3) BOSH: BOSH provides a reliable and consistent operational experience for your private cloud running on vSphere 6.5 or the GCE public cloud.
4) Harbor: Harbor is your container registry.
5) GCP Service Broker: The GCP Service Broker allows apps to transparently access Google Cloud APIs from anywhere, and to easily move workloads to/from Google Container Engine (GKE).
6) NSX-T: Network management and security out of the box with VMware NSX-T. Multi-cloud, multi-hypervisor.
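Here is a hedged sketch of driving the PKS Controller from the pks CLI; the API endpoint, cluster name, hostname, and plan are placeholders, and exact flags may differ by PKS version:

pks login -a https://pks-api.example.com -u admin -p <password>
pks create-cluster my-cluster --external-hostname my-cluster.example.com --plan small
pks cluster my-cluster           # watch provisioning status
pks get-credentials my-cluster   # wire kubectl to the new cluster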

Tuesday, November 28, 2017

Serverless and Codeless Cloud Native Applications!

Welcome to the future of the cloud. Welcome to serverless and codeless cloud-native applications!!!

In this blog I will provide an overview of serverless and codeless cloud-native applications, the available options, how to build a secure eCommerce website, and the advantages and challenges, along with a live example.

Serverless computing is a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity such as VMs or containers. It is a form of utility computing. Serverless computing still requires servers; the name "serverless computing" is used because the server management and capacity planning decisions are completely hidden from the developer or operator. Serverless code can be used in conjunction with code deployed in traditional styles, such as microservices. Alternatively, applications can be written to be purely serverless and use no provisioned servers at all. Key benefits of a serverless architecture include automatic scaling up and down in response to the current load, and a cost model that charges only for the milliseconds of compute time actually used.

There are several options available for serverless CNAs. The most popular and notable are:

OpenWhisk: OpenWhisk is a serverless, open-source cloud platform that executes functions (called actions) in response to events (called triggers), without developer concern for managing the lifecycle or operations of the containers that execute the code.
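A minimal sketch of the OpenWhisk flow, assuming the wsk CLI is installed and configured; the action name and file are placeholders:

echo 'function main(params) { return { msg: "hello " + (params.name || "world") }; }' > hello.js
wsk action create hello hello.js                       # register the function as an action
wsk action invoke hello --result --param name reader   # fire it and print the JSON result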



AWS Lambda: AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume; there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale it with high availability. You can set up your code to trigger automatically from other AWS services, or call it directly from any web or mobile app.
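A minimal sketch of the Lambda workflow using the AWS CLI; the function name, role ARN, and handler are placeholders, and function.zip is assumed to contain your code:

aws lambda create-function --function-name hello \
    --runtime python3.6 --handler lambda_function.lambda_handler \
    --role arn:aws:iam::123456789012:role/lambda-exec \
    --zip-file fileb://function.zip
aws lambda invoke --function-name hello out.json   # run it once and capture the response
cat out.json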

Build a secure eCommerce website for free!
So the question is: can we build a secure eCommerce website without any developer, QA, portal admin, DBA, sysadmin, or servers? Yes, this is possible! Here is all you need, and what you can use, to build your eCommerce site almost for free!

Website: Create your static website in Jekyll and deploy it on GitHub for free (see the sketch after this list)!
Access Management: Plug in a cloud-based identity management solution such as userapp.io / Firebase etc.
Cloud-based Database: Plug in the free Firebase database at the back end to store your data.
eCommerce functions: Plug SnipCart into your website for shopping cart and payment gateway functionality.
Digital Delivery: Plug in SendOwl and SendGrid for digital goods delivery and marketing, and you are done!
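As a hedged sketch of the website step, assuming Ruby and git are installed; the repo name and user are placeholders, and GitHub Pages must be enabled on the repo:

gem install bundler jekyll         # install the static-site generator
jekyll new mystore && cd mystore   # scaffold the site
git init && git add . && git commit -m "initial store"
git remote add origin https://github.com/<your-user>/mystore.git
git push -u origin master          # GitHub Pages serves it once enabled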

Advantages

  1. No need to own anything - run almost for free until your business picks up
  2. Work on your product features rather than building / managing your website
  3. No dependency on one vendor - multiple options for similar services in the market


Disadvantages

  1. Multiple plugins and interoperability challenges
  2. Gradually you may have to start paying as you exhaust the free limits, and then start looking at cost optimization through other means.


And here is one example: I built a mobile app/site for my son in 6 hours, without a developer, QA, PM, IDM, portal admin, DBA, sysadmin, or servers! Check it out!

URL : https://goo.gl/wb2bNK
User name : testing
Password : test

Enjoy, and welcome to the future of Cloud !

Thursday, October 26, 2017

Pivotal Cloud Foundry in a Nutshell, Laptop Lab, Best Practices

In this blog I will introduce you to Pivotal Cloud Foundry (PCF), provide instructions on how to set up a PCF laptop lab to play around with, and finally discuss best practices for enterprise PCF deployments. So, let's start with an introduction to PCF.
PCF in a Nutshell
PCF is the enterprise-grade distribution of Cloud Foundry, which is open-source software. As described in the following diagram, traditionally we used to manage the entire IT stack from top to bottom; as we evolved into the private cloud, we started offering IaaS and PaaS services. So PCF is essentially an enterprise-grade PaaS offering that runs on any IaaS - well, most of the leading IaaS platforms!

Let's look at a very high level at what's in it. In the following diagram, I have tried to explain PCF from 2 angles: Infrastructure and Operations.

The biggest advantage of PCF is rapid application deployment and scaling; we need just 2 commands to deploy and scale applications in PCF, as shown below!
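For example (the app name is a placeholder; the same commands appear in the lab later in this post):

cf push myapp        # deploy the application
cf scale myapp -i 4  # scale it out to 4 instances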



PCF Laptop Lab

Log in to the Pivotal site, create an account, and create an org.

Make sure that you have at least 40 GB free disk space and 5 GB free RAM

Download and install VirtualBox

Download and install JDK 1.8.X

Download the CF CLI tool

Check the installation using the following command:

cf help

Download and install PCF Dev: unzip the downloaded file and run the exe.

Start PCF Dev (this may take several minutes depending on your internet speed):

cf dev start


Once started, deploy the sample Spring Music application: download the spring-music sample app (a zip of the GitHub repo) and unzip it.

cd spring-music-master
.\gradlew.bat assemble

Deploy the Application 

cf login -a https://api.local.pcfdev.io --skip-ssl-validation
admin/admin
cf push --hostname spring-music

Log in to the admin console and verify the application is up and running



Check out the sample app



Here are a few other useful commands:

Viewing logs
cf logs spring-music --recent
cf logs spring-music

Create a database service and bind it to the app:
cf marketplace -s p-mysql
cf create-service p-mysql 512mb my-spring-db
cf bind-service spring-music my-spring-db

Restart the app (to pick up the service binding) and list services:
cf restart spring-music
cf services

Scale applications
cf scale spring-music -i 2
cf app spring-music
     state      since                  cpu     memory         disk             details
#0   running    2017-10-22T07:13:43Z   0.3%    380.9M of 1G   170.1M of 512M
#1   starting   2017-10-22T07:18:03Z   52.9%   249.1M of 1G   170M of 512M

Increase the app's resources (memory, then disk):
cf scale spring-music -m 1G
cf scale spring-music -k 512M



Best practices for Enterprise PCF deployments


  • Size your PCF using the PCF sizing tool
  • Store all the passwords in a password manager such as KeePass
  • Make sure that you set up at least 3 Availability Zones
  • Plan and design your orgs, spaces, apps, and application security well in advance, before you start the setup (see the sketch after this list)
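Here is a minimal sketch of such an org/space layout using the cf CLI; the org and space names are placeholders:

cf create-org retail              # one org per business unit
cf create-space dev -o retail     # separate spaces per environment
cf create-space prod -o retail
cf target -o retail -s dev        # point the CLI at an org/space before pushing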


... Will keep updating this section.

Tuesday, October 24, 2017

vSphere Optimization Assessment (vOA)

In this blog, I will provide more details on how to install and use the vSphere Optimization Assessment (vOA). As the name suggests, this utility is designed and developed to optimize vSphere cloud environments. It is a plug-in which needs to be installed in vRealize Operations (vROps).


Once the plug-in is installed, run the 3 reports, which will provide a very interesting dashboard and detailed reports that help you:

  1. Identify mis-configured clusters, hosts and VMs.
  2. Identify performance problems and their root causes.
  3. Reclaim underutilized CPU, memory and disk space. 

Check out sample report videos here.


Thursday, October 12, 2017

Kafka - Intro, Laptop Lab Setup and Best Practices

In this blog, I will summarize the best practices which should be used while implementing Kafka.
Before going to the best practices, let's understand what Kafka is. Kafka is publish-subscribe messaging rethought as a distributed commit log, and is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.
Here is the high-level conceptual diagram of Kafka, wherein you can see a Kafka cluster of size 4 (4 brokers), managed by Apache ZooKeeper, serving multiple producers and consumers. Messages are sent to topics. Each topic can have multiple partitions for scaling. For fault tolerance we have to set a replication factor, which ensures that each partition's messages are replicated to multiple brokers.


Kafka Laptop Lab Setup

To set up the Kafka laptop lab, install VMware Workstation, create an Ubuntu VM, then download and unzip Kafka:

wget http://apache.cs.utah.edu/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz
tar -xvf kafka_2.11-0.11.0.1.tgz

-- Set environment parameters
vi .bashrc
-- Add the following 2 lines at the end of the .bashrc file, then save and close the file
export KAFKA_HOME=/home/myadav/kafka_2.11-0.11.0.1
export PATH=$PATH:$KAFKA_HOME/bin
-- Exit and open a new terminal

-- Install JDK
sudo apt-get purge  openjdk-\*
sudo mkdir -p /usr/local/java
sudo apt-get install default-jre
which java
java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-2ubuntu1.17.04.3-b11)
OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)

--Start Zookeeper
cd $KAFKA_HOME/bin
zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties

--Start Kafka server
cd $KAFKA_HOME/bin
kafka-server-start.sh $KAFKA_HOME/config/server.properties

-- Create, list, and describe topics
cd $KAFKA_HOME/bin
kafka-topics.sh --create --topic mytopic --zookeeper localhost:2181 --replication-factor 1 --partitions 1
kafka-topics.sh --list --zookeeper localhost:2181
kafka-topics.sh --describe --zookeeper localhost:2181

-- Start Producer console
cd $KAFKA_HOME/bin
kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic

-- Start Consumer console
cd $KAFKA_HOME/bin
kafka-console-consumer.sh --zookeeper localhost:2181 --topic mytopic --from-beginning

In this screenshot, you can see I have started ZooKeeper and Kafka in the top 2 terminals; in the middle terminal I have created a topic; and the bottom 2 terminals have the producer and consumer consoles. You can see the same messages in the producer and the consumer.




Best Practices for Enterprise Implementation

Sharing best practices for an enterprise-level Kafka implementation:
  1. Make sure that ZooKeeper is on a different server than the Kafka brokers.
  2. There should be 3 to 5 ZooKeeper nodes in one ZooKeeper cluster (an odd number, to maintain quorum).
  3. Make sure that you are using the latest Java 1.8 with the G1 collector.
  4. There should be a minimum of 4-5 Kafka brokers in the Kafka cluster.
  5. Make sure that there is a sufficient / optimal number of partitions for each topic; the higher the number of partitions, the more parallel consumers can be added, resulting in higher throughput. However, more partitions can also increase latency.
  6. Each topic should have a replication factor of at least 2 for fault tolerance; again, a higher replication factor will have an impact on performance.
  7. Make sure that you install and configure monitoring tools such as Kafka Manager.
  8. If possible, implement Kafka MirrorMaker for replication across data centers for disaster recovery purposes.
  9. For delivery guarantees, set an appropriate value for the broker acknowledgement setting ("acks").
  10. For exceptions / broker-error responses, set proper values for the number of retries, retry.backoff.ms, and max.in.flight.requests.per.connection (see the sketch after this list).
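As a hedged example of points 9 and 10, the console producer from the lab above accepts these settings via --producer-property:

kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic \
  --producer-property acks=all \
  --producer-property retries=3 \
  --producer-property retry.backoff.ms=500 \
  --producer-property max.in.flight.requests.per.connection=1   # keep ordering on retries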

I will keep appending to this section on a regular basis.

Monday, October 9, 2017

vRA 7.3 Implementation Sample Project Plan

VMware vRealize Automation (vRA) is the IT Automation tool of the modern Software-Defined Data Center. vRA enables IT Automation through the creation and management of personalized infrastructure, application and custom IT services (XaaS). This IT Automation lets you deploy IT services rapidly across a multi-vendor, multi-cloud infrastructure.

In this blog, I am going to describe an overall vRA implementation project plan, which can be used as a sample for any vRA implementation.

We need a variety of skills for this implementation, such as cloud admin, OS admin, process expert, the monitoring tools team, project manager, technical manager, etc., and last but not least, a willingness from customer management to implement vRA!


Timelines mentioned in this sample project plan are indicative and may vary depending on complexity. For example, creating a handful of templates and 20-odd blueprints without any application or database may take less time; however, when we are considering provisioning apps and DBs using vRA, we need to allow more time, including testing time.

The information-gathering stage is very important; make sure that the customer understands the advantages, disadvantages, product features, and limitations which need to be considered while designing the vRA solution.