Thursday, July 26, 2018

Summary of IoT and AI Summit

Tech Talk
Presenter : Mukund Yadav
Topic : Summary of IoT and AI summit 2018
PPT : https://goo.gl/MJQUNi

Monday, March 12, 2018

Maximize the power of Open Source in your IT!

At some point during your yearly financial budget planning sessions, you must have thought of utilizing Open Source to cut down your IT spend, haven't you? Apart from being "free", there are other points you should consider while going down this route. Based on my experience, here are some of the advantages and disadvantages of Open Source.


So, go for the most popular open source software that has a large support community behind it, so you have somewhere to turn if you need advice.

Based on my experience, here are some of the most popular open source software packages with large community support.



And last but not least, according to The Tenth Annual Future of Open Source Survey, 93% of organizations use open source software and 78% run part or all of their operations on it!

So what are you waiting for? Start utilizing Open Source and maximize its power in your IT!

Monday, February 26, 2018

Artificial Intelligence, Machine Learning and Deep Learning for IT Operations

Wondering what's the difference between Artificial Intelligence, Machine Learning and Deep Learning, and how they can be used in IT Operations? Well, it's quite a hot topic in the industry, and you will see most of your friends talking about AI, ML and DL. Let's see what they are and how they're useful for IT folks.

So, AI is too generic and DL is too specific; ML is best suited for IT Operations. Here are a few use cases of Machine Learning for IT Operations.

And finally, let's see what ML tools you can use. There are hundreds of tools available; a few I want to list here are TensorFlow, H2O, KNIME, OpenAI, etc.

Sunday, February 18, 2018

DBA as a Service - DBAaaS!

Wondering what will happen to the DBA community in 2020? Well, they will still be around! However, they will be doing cooler stuff than ever before! Want to know what and how? Please read on...

By 2020, 50%+ of all Enterprise Data will be managed autonomously, and 80%+ of Application and Infrastructure Operations issues will be resolved autonomously. This is quite possible with AI becoming reality and cementing its foothold in the industry. To move in this direction, we first need to automate all the DBA tasks, and later implement Machine Learning so that the database can take its own decisions based on what is going on inside it, with minimal or no involvement of DBAs. To automate all of those tasks, we need to develop a framework, which I call DBAaaS, define role-based access, and develop a DBAaaS Mobile App / Portal!
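To give a flavor of the kind of task such a framework would automate, here is a minimal sketch of one building block (assuming an Oracle database with OS authentication as SYSDBA; the 85% threshold and any follow-up action would come from the DBAaaS policy layer): report any tablespace that is nearly full, so the framework can grow it or alert a DBA.

#!/bin/bash
# Hypothetical DBAaaS building block: flag tablespaces above 85% used
sqlplus -s / as sysdba <<'EOF'
SET PAGESIZE 100 LINESIZE 120
SELECT tablespace_name, ROUND(used_percent, 1) AS pct_used
FROM   dba_tablespace_usage_metrics
WHERE  used_percent > 85
ORDER  BY used_percent DESC;
EOF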



Here is a quick prototype of the "DBAaaS 1.0" Mobile App, developed by me in less than 2 hours using a rapid prototyping tool!

URL : https://gonative.io/share/rzydnx
User name : testing
Password : test



Monday, February 5, 2018

Pivotal Container Service (PKS), Kubernetes (K8s), Docker and Containers: in a Nutshell


In this blog, I will cover Pivotal Container Service (PKS), Kubernetes (K8s), Docker and Containers. Before we touch PKS, let's understand what Docker and Containers are!

Once upon a time, there was the Physical Server era, wherein we used to have a very large server, install an OS on it, and install various applications on top of that! Then the Hypervisor architecture was born, wherein on the same server you just install a Hypervisor, which enables you to create multiple Virtual Machines, and in each VM you can install an OS and the required apps. Now there is a new Container era!



There are advantages and disadvantages of running containers directly on a server. However, most companies are taking advantage of Hypervisor technology as well as Container technology to build their next-generation platforms.
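To get a feel for containers before we move on, here is a minimal Docker session (assuming Docker is installed; the image and container name are just examples):

# Pull a public nginx image and run it as a container, mapping host port 8080 to container port 80
docker pull nginx
docker run -d --name web -p 8080:80 nginx
docker ps          # list running containers
docker stop web    # stop the container
docker rm web      # remove it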

Now let's look at Kubernetes, generally called K8s. It's an orchestration tool for containers.




A K8s cluster consists of 2 major parts: the Master and the Nodes. Nodes are sometimes called Minions as well.
The Master has 4 major parts.
1) kube-apiserver: Front end to the control plane; exposes the API (REST) and consumes JSON
2) Cluster store: Persistent storage for cluster state and config. It uses etcd, the "source of truth" for the cluster, so have a backup plan for it!
3) kube-controller-manager: Controller of controllers; watches for changes and helps maintain the desired state
4) kube-scheduler: Watches the apiserver for new pods and assigns work to nodes

Each Node has 3 major parts and runs Pod(s):
1) Kubelet: The main Kubernetes agent; registers the node with the cluster, watches the apiserver, instantiates pods, reports back to the master, and exposes an endpoint on :10255
2) Container Engine: Does container management such as pulling images and starting/stopping containers. Generally Docker, but it can be rkt as well.
3) kube-proxy: Kubernetes networking and Pod IP addressing. All containers in a pod share a single IP. Load balances across all pods in a service.

You can run multiple Pods on one node. It is typically not recommended to run a large number of containers in a pod; the best practice is to run a primary container in a given pod along with additional containers that provide services to the primary container.
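To see the master and nodes in action, here is a minimal kubectl session (a sketch, assuming you have a working cluster and a configured kubeconfig; the deployment name and image are just examples). Note how you only declare the desired state, and the scheduler decides which nodes the pods land on:

kubectl get nodes                             # list the nodes in the cluster
kubectl create deployment web --image=nginx   # desired state: 1 nginx pod
kubectl scale deployment web --replicas=3     # controllers reconcile to 3 pods
kubectl get pods -o wide                      # see which node each pod landed on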

And finally, let's see PKS!
PKS gives IT teams the flexibility to deploy and consume Kubernetes on-premises with vSphere, or in the public cloud. PKS 1.0 currently supports vSphere and GCE. PKS leverages a specific BOSH release for K8s, which has specific requirements.


Here are the major components of PKS:
1) PKS Controller: The control plane from which you create, operate and scale Kubernetes clusters via the command line and API.
2) Built with open-source Kubernetes: Constant compatibility with GKE ensures access to the latest stable K8s releases.
3) BOSH: BOSH provides a reliable and consistent operational experience, for your private cloud running on vSphere 6.5 or on the GCE public cloud.
4) Harbor: Harbor is your container registry.
5) GCP Service Broker: The GCP Service Broker allows apps to transparently access Google Cloud APIs from anywhere, and to easily move workloads to/from Google Container Engine (GKE).
6) NSX-T: Network management and security out of the box with VMware NSX-T. Multi-cloud, multi-hypervisor.
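For illustration, here is roughly how you talk to the PKS Controller from the command line (a sketch, assuming the pks CLI is installed; the API endpoint, credentials and cluster name are placeholders):

pks login -a https://api.pks.example.com -u admin -p <password> -k   # -k skips SSL validation
pks create-cluster k8s-cluster-1 --external-hostname k8s-cluster-1.example.com --plan small
pks cluster k8s-cluster-1           # check provisioning status
pks get-credentials k8s-cluster-1   # populate kubeconfig, then use kubectl as usual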

Tuesday, November 28, 2017

Serverless and Codeless Cloud Native Applications!

Welcome to the future of cloud, Welcome to Serverless and Codeless Cloud Native Applications!!!

In this blog I will provide an overview of Serverless and Codeless Cloud Native Applications: the available options, how to build a secure eCommerce website, and the advantages & challenges, along with a live example.

Serverless computing is a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity such as VMs or containers. It is a form of utility computing. Serverless computing still requires servers; the name is used because the server management and capacity planning decisions are completely hidden from the developer or operator. Serverless code can be used in conjunction with code deployed in traditional styles, such as microservices. Alternatively, applications can be written to be purely serverless and use no provisioned servers at all. Key benefits of a serverless architecture include automatic scale up and down in response to the current load, and the associated cost model that charges only for the milliseconds of compute time used when running.

There are several options available for Serverless CNAs. The most popular and notable are:

OpenWhisk: OpenWhisk is a serverless, open source cloud platform that executes functions (called actions) in response to events (called triggers), without developer concern for managing the lifecycle or operations of the containers that execute the code.
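As a quick illustration, here is a minimal OpenWhisk session using the wsk CLI (a sketch, assuming the CLI is already configured with an API host and auth key; the action name and code are just examples):

# Create a tiny JavaScript action and invoke it
cat > hello.js <<'EOF'
function main(params) { return { greeting: "Hello, " + (params.name || "world") }; }
EOF
wsk action create hello hello.js
wsk action invoke hello --result --param name IoT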



AWS Lambda: AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume; there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale it with high availability. You can set up your code to be automatically triggered from other AWS services, or call it directly from any web or mobile app.
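For example, deploying and invoking a function with the AWS CLI looks roughly like this (a sketch; the function name, file names and role ARN are placeholders, and the handler/runtime must match your code):

zip function.zip lambda_function.py
aws lambda create-function --function-name hello \
    --runtime python3.6 \
    --handler lambda_function.lambda_handler \
    --role arn:aws:iam::<account-id>:role/<lambda-exec-role> \
    --zip-file fileb://function.zip
aws lambda invoke --function-name hello out.json && cat out.json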

Build a secure eCommerce website for free!
So the question is: can we build a secure eCommerce website without any Developer, QA, Portal Admin, DBA, Sysadmin or Servers? Yes, this is possible! Here is what you need, and what you can use, to build your eCommerce site almost for free!

Website: Create your static website in Jekyll and deploy it on GitHub for free!
Access Management: Plug in a cloud-based identity management solution such as userapp.io / Firebase, etc.
Cloud-based Database: Plug in the free Firebase database at the back end to store your data.
eCommerce functions: Plug in Snipcart into your website for shopping cart and payment gateway functionality.
Digital Delivery: Plug in SendOwl and SendGrid for digital goods delivery and marketing, and you are done!

Advantages 

  1. No need to own anything - run almost for free till your business picks up 
  2. Work on your product features rather than building / managing your website 
  3. No dependency on one vendor / multiple options for similar services in the market


Disadvantages

  1. Interoperability issues between multiple plugins 
  2. Gradually you may have to start paying as you exhaust the free limits, and then you will have to start looking at cost optimization using other means.


And here is an example: I built a mobile app/site for my son in 6 hours, without a Developer / QA / PM / IDM / Portal admin / DBA / Sysadmin or Servers! Check it out!

URL : https://goo.gl/wb2bNK
User name : testing
Password : test

Enjoy, and welcome to the future of Cloud!

Friday, October 27, 2017

Pivotal Cloud Foundry in a Nutshell, Laptop Lab, Best Practices

In this blog I will introduce you to Pivotal Cloud Foundry (PCF), provide instructions on how to set up a PCF Laptop Lab to play around with, and finally discuss best practices for Enterprise PCF deployments. So, let's start with an introduction to PCF.
PCF in a Nutshell 
PCF is the enterprise-grade distribution of Cloud Foundry, which is open source software. As described in the following diagram, traditionally we used to manage the entire IT stack from top to bottom; as we evolved into Private cloud, we started offering IaaS and PaaS services. So PCF is essentially a PaaS offering which is enterprise grade and runs on any IaaS (well, most of the leading IaaS platforms)!

Let's look at a very high level at what's in it. In the following diagram, I have tried to explain PCF from 2 angles: Infrastructure and Operations.

The biggest advantage of PCF is rapid application deployment and scaling; we need just 2 commands to deploy and scale applications in PCF!
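Those 2 commands (shown here for a hypothetical app name) are simply:

cf push my-app           # deploy the application
cf scale my-app -i 4     # scale it out to 4 instances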



PCF Laptop Lab

Login to the Pivotal site, create an account and create an org.

Make sure that you have at least 40 GB free disk space and 5 GB free RAM

Download and install virtual box

Download and install JDK 1.8.X

Download CF CLI tool

Check the installation using the following command:

cf help

Download and install pcfdev: unzip the downloaded file and run the exe.

Start the Dev PCF (this may take several minutes depending on your internet speed):

cf dev start


Once started, deploy the sample Spring Music application.

unzip the above file.
cd spring-music-master
.\gradlew.bat assemble

Deploy the Application 

cf login -a https://api.local.pcfdev.io --skip-ssl-validation
admin/admin
cf push --hostname spring-music

Login to Admin console and verify application is up and running



Check out the sample app



Here are a few other useful commands:

Viewing logs
cf logs spring-music --recent
cf logs spring-music

Create a database service and bind it to the app
cf marketplace -s p-mysql
cf create-service p-mysql 512mb my-spring-db
cf bind-service spring-music my-spring-db

Restart the app and list services
cf restart spring-music
cf services

Scale applications
cf scale spring-music -i 2
cf app spring-music
     state      since                  cpu     memory         disk             details
#0   running    2017-10-22T07:13:43Z   0.3%    380.9M of 1G   170.1M of 512M
#1   starting   2017-10-22T07:18:03Z   52.9%   249.1M of 1G   170M of 512M

Increase the resources of the app
cf scale spring-music -m 1G
cf scale spring-music -k 512M



Best practices for Enterprise PCF deployments


  • Size your PCF using the PCF sizing tool
  • Store all the passwords in KeePass
  • Make sure that you set up at least 3 Availability Zones
  • Plan and design your Orgs, Spaces, Apps and Application Security well in advance, before you start the setup


... Will keep updating this section.

Wednesday, October 25, 2017

vSphere Optimization Assessment (vOA)

In this blog, I will provide more details on how to install and use the vSphere Optimization Assessment (vOA). As the name suggests, this utility is designed and developed to optimize vSphere Cloud environments. It is a plug-in which needs to be installed in vRealize Operations (vROps).


Once the plug-in is installed, please run the 3 reports, which provide a very interesting dashboard and detailed reports that:

  1. Identify mis-configured clusters, hosts and VMs.
  2. Identify performance problems and their root causes.
  3. Reclaim underutilized CPU, memory and disk space. 

Check out sample report videos here.


Thursday, October 12, 2017

Kafka - Intro, Laptop Lab Setup and Best Practices

In this blog, I will summarize the best practices which should be used while implementing Kafka.
Before going into best practices, let's understand what Kafka is. Kafka is publish-subscribe messaging rethought as a distributed commit log, used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.
Here is the high-level conceptual diagram of Kafka, wherein you can see a Kafka cluster of size 4 (4 brokers) managed by Apache Zookeeper, serving multiple Producers and Consumers. Messages are sent to Topics. Each topic can have multiple partitions for scaling. For fault tolerance, we have to use a replication factor greater than 1, which ensures that each partition's messages are replicated to multiple brokers.


Kafka Laptop Lab Setup

To set up the Kafka Laptop Lab, install VMware Workstation, create an Ubuntu VM, then download and unzip Kafka:

wget http://apache.cs.utah.edu/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz
tar -xvf kafka_2.11-0.11.0.1.tgz

-- Set environment parameters
vi .bashrc
--Add following 2 lines at the end of .bashrc file, save and close the file.
export KAFKA_HOME=/home/myadav/kafka_2.11-0.11.0.1;
export PATH=$PATH:$KAFKA_HOME/bin;
Exit and open a new terminal.

-- Install JDK
sudo apt-get purge  openjdk-\*
sudo mkdir -p /usr/local/java
sudo apt-get install default-jre
which java
java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-2ubuntu1.17.04.3-b11)
OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)

--Start Zookeeper
cd $KAFKA_HOME/bin
zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties

--Start Kafka server
cd $KAFKA_HOME/bin
kafka-server-start.sh $KAFKA_HOME/config/server.properties

-- Create, list and describe topics
cd $KAFKA_HOME/bin
kafka-topics.sh --create --topic mytopic --zookeeper localhost:2181 --replication-factor 1 --partitions 1
kafka-topics.sh --list --zookeeper localhost:2181
kafka-topics.sh --describe --zookeeper localhost:2181

-- Start Producer console
cd $KAFKA_HOME/bin
kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic

-- Start Consumer console
cd $KAFKA_HOME/bin
kafka-console-consumer.sh --zookeeper localhost:2181 --topic mytopic --from-beginning

In this screenshot, you can see I have started Zookeeper and Kafka in the top 2 terminals; in the middle terminal I have created a topic; and the bottom 2 terminals have the Producer and Consumer consoles. You can see the same messages in the producer and the consumer.




Best Practices for Enterprise implementation

Sharing best practices for an Enterprise-level Kafka implementation:
  1. Make sure that Zookeeper is on a different server than the Kafka brokers.
  2. There should be a minimum of 3 to 5 Zookeeper nodes in one Zookeeper cluster.
  3. Make sure that you are using the latest Java 1.8 with the G1 collector.
  4. There should be a minimum of 4-5 Kafka brokers in the Kafka cluster.
  5. Make sure that there are sufficient / optimal partitions for each topic: the higher the number of partitions, the more parallel consumers can be added, resulting in higher throughput. However, more partitions can also increase latency.
  6. There should be a replication factor of at least 2 for each topic for fault tolerance; again, a higher replication factor will have an impact on performance.
  7. Make sure that you install and configure monitoring tools such as Kafka Manager.
  8. If possible, implement Kafka MirrorMaker for replication across data centers for Disaster Recovery purposes.
  9. For delivery guarantees, set an appropriate value for the broker acknowledgement setting ("acks"); see the example after this list.
  10. For exceptions / broker-responds-with-error scenarios, set proper values for the number of retries, retry.backoff.ms and max.in.flight.requests.per.connection.
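For example, the console producer from the lab above can be run with stronger delivery guarantees like this (a sketch; tune the values to your workload):

kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic \
  --producer-property acks=all \
  --producer-property retries=3 \
  --producer-property retry.backoff.ms=500 \
  --producer-property max.in.flight.requests.per.connection=1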

I will keep appending to this section on a regular basis.

Tuesday, October 10, 2017

vRA 7.3 Implementation Sample Project Plan

VMware vRealize Automation (vRA) is the IT Automation tool of the modern Software-Defined Data Center. vRA enables IT Automation through the creation and management of personalized infrastructure, application and custom IT services (XaaS). This IT Automation lets you deploy IT services rapidly across a multi-vendor, multi-cloud infrastructure.

In this blog, I am going to describe an overall vRA implementation project plan, which can be used as a sample for any vRA implementation.

We need a variety of skills for this implementation, such as Cloud admin, OS admin, Process expert, the Monitoring tools team, a Project Manager, a Technical Manager, etc., and last but not least, Customer Management!


Timelines mentioned in this sample project plan are indicative and may vary depending on complexity. For example, creating a handful of templates and 20-odd blueprints without any Application or Database may take less time; however, when we are considering provisioning Apps and DBs using vRA, we need to allow more time, including testing time.

The information gathering stage is very important; make sure that the customer understands the advantages, disadvantages, product features and limitations which need to be considered while designing the vRA solution.

Saturday, November 26, 2016

IT Operations Management

If you are in IT Operations, here are a few things you should focus on and set goals for, in order to have full control over your operations.


Sunday, September 18, 2016

Some of my innovations in past

In this blog, I want to describe a few of my innovations / innovative ideas for which I was rewarded in my current and past companies.

1) CIO Bottom Line award (2013) for automated health checks of the load test environment. Before we start any load test, we need to perform health checks on the entire environment and restart services in sequence on all 150 servers. This used to be a manual and labor-intensive activity that had to follow a strict sequence and needed too many handshakes between various admin teams. We automated all the health checks as well as the restart of all services, including Databases, Middleware, Portals, eBS, IDM, etc., with the sequencing built in. This resulted in savings of around $110K/year.

2) Question of the day (2009): We were running a 24x7 monitoring operation, and we hardly used to get time for classroom training for my team. So I came up with an idea: why can't they learn a small portion of the technology every day? For them, understanding OEM was a must, so I came up with a series of tasks and questions, sequenced them, and automated them to be sent to team members on a daily basis, so that they could try these activities and answer these practical questions during their spare time. For this innovative approach to training, I was also awarded.

3) Remote Monitoring System (2005): Remote Monitoring Service is a systems management solution which I designed around Enterprise Manager Grid Control technology, meaning the majority of the solution was in and around 10g Grid Control. It proactively monitors all components of the IT infrastructure: databases, listeners, application servers, storage, CPU, memory, load balancers, you name it. Nowadays, using plug-ins, even third-party software such as IBM and Microsoft databases can be monitored. It immediately sends alerts and notifications to the relevant registered mail IDs (DBAs, Unix Administrators, Helpdesk, or sometimes managers as well for very critical errors) with a short message. It has in-built intelligence through "Fixit Jobs": for example, if a database goes down, we can proactively give instructions to restart it. We also wrote scripts to fix routine DBA issues; for example, if a tablespace is out of space, it will automatically add a data file to that tablespace and inform the DBA of the action it has taken. It offers customizable warning and critical thresholds: different customers have different standards for warning and critical levels (one customer may say 85% is the warning and 95% the critical limit, while another customer's limits may differ), and this can be achieved by setting different limits per customer. And finally, it facilitates conformance to Service Level Agreements. This became an entry-level service and started generating huge revenue in the form of follow-on main services. For this innovative idea, I was awarded.

4) Clock as angular measuring equipment (1995): During the last year of my Engineering course, I was playing with my mechanical table clock and it accidentally fell down and got damaged. When we wound the key, it would unwind quickly and all the hands would move quickly as well. I was wondering what to do with this clock, and an idea struck me! One full turn of the key gets amplified into a detailed reading on the dial, so why not use it as angular measuring equipment? I opened the clock, calculated the gear ratios, and worked out the scale of the clock, i.e. how much angle 1 second corresponds to. Then I created a platform to mount the clock and another platform to hold the object whose angle we need to measure. I presented this project at SARCH in Solapur, where I got second prize.

5) 32-bit number system (1991): During 11th standard / PUC, we had computer science as a professional subject, and we were learning various number systems such as binary, octal, decimal and hexadecimal. I took it further and developed a 32-symbol (base-32) number system, and also provided methods to convert from it to the hexadecimal, octal, decimal and binary systems. I didn't know how to publish it and take it forward at that point in time; however, it became quite popular in my class, and my professors recognized the work I did.



Sunday, October 11, 2015

Are Bell Curves Needed?

Panel discussion on the Bell Curve at The New Age Manager Conclave at the Ritz-Carlton Bangalore, organised by the SAP ManaGeRight Team, on 23rd September, 2015.

The theme for this edition was “Leadership is all about a Brand of Trust”.

About SAP ManaGeRight: At SAP, manager development activities are run by a team of managers called ManaGeRight. ManaGeRight has been organizing several programs over the past few years, and we were in fact the first company to organize a Managers Day, bringing together all of our 400+ managers under one umbrella for a day.

The New Age Manager 2015 - A Novel Idea by SAP: Leveraging our expertise and learnings from various flagship programs, we are now taking the next step of organizing 'The New Age Manager' as a platform for the best managers across the industry to come together and learn from and share with each other. The objective of the conclave was to create a forum to enable the best managers in the industry:

  • To hear expert opinions and benefit from shared learning.
  • To network with peers.
  • To collaborate on best practices.

This Manager Conclave was the first of its kind arranged by SAP India.

I was requested to participate in the panel discussion on the topic “New Age Performance Management – Are Bell Curves Needed?”.



Friday, July 31, 2015

See you at VMworld 2015 SFO

My presentation / Group discussion at VMworld 2015 SFO on "Visualizing Business Critical Oracle Applications"

https://vmworld2015.lanyonevents.com/connect/sessionDetail.ww?SESSION_ID=6431&tclass=popup



Friday, March 20, 2015

SAP BASIS Admin - Important Transaction Codes


I strongly believe, "When you stop learning, you stop growing.."

Alvin Toffler once said, “The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.”

I have started learning SAP now!

In this blog, I have listed a few important SAP Transaction Codes that a SAP BASIS Admin must know.


Tuesday, February 3, 2015

What does it take to manage a Virtual Datacenter?

Today I just thought of putting together what it takes to build and manage a Virtual Data-center.
One thing I want to stress is that you still need to manage the Classic Data-center; however, there are several things you need to understand, build and manage on top of a Classic DC to make it a Virtual DC.

The following diagram gives the overall picture of what you have to manage in IT Operations in the case of a Classic DC vs. a Virtual DC, apart from the various apps and websites.



Also check out the fundamentals of Cloud Computing.

Wednesday, January 28, 2015

Cloud Computing Fundamentals

Writing a blog post after a long time... this time on Cloud Computing fundamentals.

Why Cloud Computing?

The IT challenges listed below have made organizations think about the Cloud Computing model to provide better service to their customers:

  1. Globalization: IT must meet the business needs to serve customers worldwide, round the clock - 24x7x365.
  2. Aging Data Centers: Migration and upgrading of technology to replace old technology.
  3. Storage Growth: Explosion of storage consumption and usage.
  4. Application Explosion: New applications need to be deployed and their usage may scale rapidly; current data center infrastructures are not planned to accommodate such rapid growth.
  5. Cost of Ownership: Due to increasing business demand, the cost of buying new equipment, power, cooling, support, licenses, etc. increases the Total Cost of Ownership (TCO).
  6. Acquisitions: When companies are acquired, the IT infrastructures of the acquired company and the acquiring company are often different. These differences demand significant effort to make them interoperable.

What is Cloud computing? (Definition): According to NIST, Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

What are the Essential Characteristics?
Cloud computing should have all of the following characteristics:
  1. On-Demand Self-Service
  2. Resource Pooling
  3. Rapid Elasticity 
  4. Measured Service
  5. Broad Network Access 
What are the building blocks of Cloud Computing?


What are the Service Models in Cloud Computing?
  1. Infrastructure as a service
  2. Platform as a service
  3. Software as a service 

What are the Deployment Models in Cloud Computing?
  1. Public Cloud: Infrastructure shared across multiple end users, which may include companies
  2. Private Cloud: Exclusive to one company; it can be on-premise or exclusively hosted at a cloud service provider 
  3. Hybrid Cloud: A combination of Public and Private cloud
  4. Community Cloud: A set of similar types of customers come together and share infrastructure; for example, multiple universities contributing to and using one cloud infrastructure.
What is the difference between public and private cloud?



Finally, what are the benefits and challenges?

Benefits
  • Cost 
  • Speed
Consumer Challenges
  • Security and Regulations
  • Quality of service
  • Network Latency
  • Supportability
  • Long term cost
  • Lock-in 
Providers Challenges
  • Service Warranty and service cost
  • A huge number of software components to manage
  • No standard cloud access interface


Tuesday, June 19, 2012

Starting with Fusion Applications

Developer > DBA > Apps 11i DBA > R12 DBA, and now you want to become a Fusion Apps DBA? Then you are on the correct page.

Here I shall try to provide some info you should know before you start hands-on.

OTN has all the Fusion Apps docs. The latest version as of today (while writing this blog) is 11g Release 1, Update 3 (11.1.4).

It involves a lot of Oracle technology, such as Database, Identity Management, WebLogic, SOA Suite, Oracle Data Integrator, ApplCore (ATG), WebCenter, Secure Enterprise Search, Enterprise Content Management, Oracle Forms Recognition & Business Intelligence.

Currently supported platforms are Linux x86-64 (64 bit), Oracle Solaris SPARC (64 bit), Oracle Solaris x86-64 (64 bit), IBM AIX on POWER Systems (64 bit) and Microsoft Windows x64 (64 bit).

There are 2 types of installation: a bare metal install, and OVM templates.

I think cloning and platform migration are currently not available.

So let's start with Fusion Apps!

Wednesday, June 13, 2012

Oracle Apps Security


Purpose: The purpose of this blog article is to cover the security aspects of Oracle Apps and how to handle them. We need to look at all the layers, from top to bottom: Applications, DB, OS, etc.

Changing database passwords (like APPS, SYSTEM, SYS, etc.)
Important Note: Please do not use special characters like @ / # / $ / % etc. in any database passwords.

Changing password of SYS, SYSTEM, DBSNMP


Login to the database server and issue the following commands:

sqlplus "/as sysdba"
alter user system identified by <new_password>;
alter user sys identified by <new_password>;
alter user dbsnmp identified by <new_password>;

Once the passwords are changed, they need to be changed in EM as well (if it's installed and used). For this, login to EM using the sysman account, then navigate to Preferences > Preferred Credentials > Database Instances, click on Set Credentials, and change the passwords against the appropriate database. Also change the password of the dbsnmp user in the DB configuration form.


Document all the steps to perform the password change of DB users.
General guidelines regarding schema passwords:
1)    The APPS password should be different from the other Applications base schema passwords (AP, GL, AR, etc.).
2)    A user called ROAPPS (Read Only APPS) should be created for those who need read-only access to the APPS schema.
3)    Base schemas (like AP, AR, GL) can have passwords following the same pattern, like AP/AP2008, GL/GL2008, or they can have different passwords. This depends on whether some schema passwords are shared with others.
4)    The password change procedure should be tested in a TEST instance first, documented, and only then executed on PROD.
5)    Please don't keep the same passwords in TEST and PROD.
6)    Use the relevant tools to change passwords, like FNDCPASS for APPS, GL, etc.; see the example below.
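For reference, FNDCPASS is invoked roughly like this (a sketch; all passwords are placeholders, and note that changing APPLSYS also changes APPS):

FNDCPASS apps/<apps_pwd> 0 Y system/<system_pwd> SYSTEM APPLSYS <new_applsys_pwd>   # APPLSYS / APPS
FNDCPASS apps/<apps_pwd> 0 Y system/<system_pwd> ORACLE GL <new_gl_pwd>             # a base schema
FNDCPASS apps/<apps_pwd> 0 Y system/<system_pwd> USER SYSADMIN <new_sysadmin_pwd>   # an application user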

Important: It is also recommended to implement the Oracle Applications Auditing feature, to track changes in important tables.




Changing OS (Operating System) passwords

Document all the steps to be followed for changing OS passwords.
For those who need access to check log files and the like, a user called "viewer" (in group "viewer", with password "viewer") should be created and given to the required users. We also need to change the vncserver password if it's started from root or a normal unix user. And lastly, it's recommended to have a separate username for each DBA, so that they first have to login to the server using their own username and then su - <application / database owner user>. In this case, direct access to root and to the application / database users should be restricted.

Procedure to change Applications user passwords (like SYSADMIN)

Document the steps to change the Applications password of the SYSADMIN user.
The SYSADMIN password should not be shared with any other user; it should be held only by the DBAs.

There are quite a few profile options available in Applications which can be used to tighten front-end security, such as:
a.    Signon Password Hard to Guess => Yes
The password contains at least one letter and at least one number.
The password does not contain the username.
The password does not contain repeating characters.

b.    Signon Password Length => 8 to 10
Signon Password Length sets the minimum length of an Applications signon password. If no value is entered the minimum length defaults to 5.

c.    Signon Password No Reuse  => 10000
This profile option specifies the number of days that a user must wait before being allowed to reuse a password.

d.    Signon Password Failure Limit =>3
The maximum number of login attempts before the user's account is disabled.

e.    ICX: Session Timeout => 20 min / 60 min
This will prevent the misuse of an unlocked desktop.
This profile option determines the length of time (in minutes) of inactivity in a user's session before the session is disabled. If the user does not perform any operation in Oracle Applications for longer than this value, the session is disabled. The user is given the opportunity to re-authenticate and re-enable a timed-out session. If re-authentication is successful, the session is re-enabled and no work is lost. Otherwise, Oracle Applications exits without saving pending work.

f.     Sign-On:Notification => Yes
Displays a message at login that indicates:
If any concurrent requests failed since your last session,
How many times someone tried to log on to Oracle Applications with your username but an incorrect password, and
When the default printer identified in your user profile is unregistered or not specified.

Apart from this, the customer should monitor the list of users who have powerful responsibilities, like GL Super User, System Administrator, etc., and reduce the number of such users as far as possible.
Lastly, inactive users should be locked in the system if they haven't logged in during the last 3-6 months.


Other guidelines for DBA’s:

  • Do Not Allow Shared Accounts
  • Do Not Use Generic Passwords
  • Treat All Non-Production Instances With The Same Security As Production
  • Restrict Network Access - Set Password on Database Listener
  • Minimize Passwords Contained In OS Files
  • Secure Default Database Accounts
  • Be Proactive!
  • Apply all prior, and plan in advance to apply any new Oracle Security Patches
  • Limit Access To Forms Allowing SQL Entry
  • Stop the isqlplus process on the server side (if started)
  • Restrict Network Access - Limit Direct Access To The Database
  • Change the passwords at least once in 3 months

Oracle Apps SYSADMIN Concurrent Requests


Purpose: The purpose of this blog entry is to list SYSADMIN-related concurrent requests. Generic instructions and parameters are provided here, which need to be reviewed and decided based on particular customer needs.

Oracle SYSADMIN related Concurrent Requests:

1) Gather Schema Statistics
 Schedule: Every weekend (preferably every Saturday 6:00 pm server time). If this is not sufficient (if performance is not good), it can be run 2-3 times a week,
OR at the weekend for ALL schemas and 2-3 times during the week for only the few important schemas with heavy insert / update / delete activity.

Parameters:
Schema name: ALL
Parallel Worker: Number of CPUs + 2
Estimate Percent: 30
Other parameters: Default

How to submit: Login to Applications using SYSADMIN account, navigate to System Administrator responsibility, then navigate to Requests > Run


2) Purge Concurrent Request and/or Manager Data
Schedule: Every day, nighttime, say 11:00 PM Server time

Parameters:
Age=30 (Purge concurrent request data older than 30 days)
Other parameters: Default

Note: Age parameter needs to be agreed with Business / Customer.

How to submit: Login to Applications using SYSADMIN account, navigate to System Administrator responsibility, then navigate to Requests > Run

3) Purge Concurrent Request and/or Manager Data
 Schedule: Every day, night time, say 11:30 PM server time

Parameters:
Count: 5
Program Application: Application Object Library
Program: Workflow Background Process
Other parameters: Default

Note: Count Parameter needs to be agreed with Business / Customer.

How to submit: Login to Applications using SYSADMIN account, navigate to System Administrator responsibility, then navigate to Requests > Run

4) Workflow Background Process
 Schedule: Every 10 min

Parameters:
Process Deferred: Yes
Process Timeout: No

How to submit: Login to Applications using the SYSADMIN account, navigate to the Oracle Applications Manager responsibility, then to Workflow Manager, then use the "Submit Request for" facility (top right-hand side) and select "Background Engines" from the drop-down

5) Workflow Background Process
 Schedule: Every 60 min

Parameters:
Process Deferred: Yes
Process Timeout: Yes

How to submit: Login to Applications using the SYSADMIN account, navigate to the Oracle Applications Manager responsibility, then to Workflow Manager, then use the "Submit Request for" facility (top right-hand side) and select "Background Engines" from the drop-down

6) Workflow Control Queue Cleanup
 Schedule: Every day, night time / early morning hours

Parameters: Default

How to submit: Login to Applications using the SYSADMIN account, navigate to the Oracle Applications Manager responsibility, then to Workflow Manager, then use the "Submit Request for" facility (top right-hand side) and select "Control Queue Cleanup" from the drop-down

7) Purge Obsolete Workflow Runtime Data
 Schedule: Every weekend

Parameters:
Age: 30
Other parameters: Default

Note: Age parameter needs to be agreed with Business / Customer.

How to submit: Login to Applications using the SYSADMIN account, navigate to the Oracle Applications Manager responsibility, then to Workflow Manager, then use the "Submit Request for" facility (top right-hand side) and select "Purge" from the drop-down

8) Synchronize WF LOCAL tables
 Schedule: Every day, night time / early morning hours

Parameters: Default

How to submit: Login to Applications using SYSADMIN account, navigate to System Administrator responsibility, then navigate to Requests > Run


Note: As per Metalink note 1158212.1, after E-Business Suite version 11.5.10 this request generally does not need to be run.

9) OAM Applications Dashboard Collection
 Schedule: Every 30 min

Parameters: Default

How to submit: Login to Applications using SYSADMIN account, navigate to System Administrator responsibility, then navigate to Requests > Run

10) Purge Signon Audit data
 Schedule: Cannot be scheduled; needs to be run manually, say every week on Monday / Friday morning.

Parameters: Date (By default system date)

Note: Most customers want this date to be 2 to 3 months in the past

How to submit: Login to Applications using SYSADMIN account, navigate to System Administrator responsibility, then navigate to Requests > Run

Note for DBAs: There will be a check box "Increment date parameter for each run" which must be checked.


11) Purge Debug Log
 Schedule: Every day / weekend.

Parameters: Date (By default system date)

Note: Most customers want this date to be 2 weeks to 3 months in the past

How to submit: Login to Applications using SYSADMIN account, navigate to System Administrator responsibility, then navigate to Requests > Run

Note for DBAs: There will be a check box "Increment date parameter for each run" which must be checked.

12) Resend Failed/Error Workflow Notifications
 Schedule: Run every 6 hours.

Parameters: ERROR / FAILED (One request for each parameter)

How to submit: Login to Applications using SYSADMIN account, navigate to System Administrator responsibility, then navigate to Requests > Run

Note for DBAs: As this request takes a date as a parameter, it cannot be scheduled easily. You may have to run it manually on one particular day of the week, say Monday morning.

DB Jobs

Purge old snapshots from the PERFSTAT schema


Other relevant documents:

732713.1 Purging Strategy for eBusiness Suite 11i
298550.1 Troubleshooting Workflow Data Growth Issues

Tuesday, June 12, 2012

Apps and Database review areas / points

Oracle Database Server Review Points / Areas

Initialization & listener parameter
AWR, Alert.log, listener log, OS watcher, RDA
Invalid Objects, Indexes and fragmentation
Tablespaces, Data files, log files and control files
Custom objects in SYSTEM tablespace & SYSTEM tablespace as default tablespace
Stats job schedule
Chained rows
Workload balancing/distribution in clustered environments
Database Patch level, de-support, and patching strategy (CPU, one off)
Server disk space for DB growth, Archive log, backup destination
Server-level prerequisites, errors, warnings & background jobs
Database Backup and Recovery
Database Monitoring and alerting system
Database Disaster Recovery solution
Debugging latch contention, hangs, crashes & locking issues

Oracle Applications Infrastructure Review (eBS) Points / Areas

Database review as per the points above
Application Technical Architecture
Application Backup and Recovery
Application Security, Audit, and security profile options
Standard Manager programs and their parameters
Application Monitoring and alerting system
Application Disaster Recovery solution
Application Patch level, de-support, and patching strategy
Network (Latency and Bandwidth)
JVM’s
JDBC connection parameters
Forms & Reports server
Standard Concurrent Manager
Recommendation on best practices for routine administrative tasks etc.

Monday, April 16, 2012

Quick DBA update on Oracle Applications R12 install, upgrade and admin scripts

Hi all,
In this blog I will discuss some of the main points which an Apps DBA should know about the R12 installer, the upgrade process and the admin scripts. So let's begin with the installer.

Main points about Installer:
1) config.txt is now configSID.txt; for adding a node, you can use configSID.txt or get the details directly from the database using the host.domain:port:sid format
2) Install types: Standard and Express
3) Shared APPL_TOP, COMN_TOP and tech stack as well, but not on Windows
4) Easy load balancing of CP and Web communications
5) Technology Stack Components: Oracle 10g R2 Database home, Oracle Developer 10i (Forms, Reports) and Oracle 10g Application Server 10.1.2 (HTTP server)
6) Java Development Kit (JDK) 5.0 is automatically installed by Rapid Install
7) Disk space: Applications node 28 GB, fresh DB 45 GB, Vision DB 133 GB, stage for a fresh install 33 GB, TEMP 500 MB
8) Create Stage: the CDs are in DVD format; run adautostg.pl to create the directory structure. It requires Perl 5.0053 in PATH and creates the subdirectories startCD, oraApps, oraDB, oraAS and oraAppDB under stage12
9) Want to install on a virtual hostname? Use -servername as a command line parameter with rapidwiz. There are 2 more command line parameters: -restart to restart any failed install, and -techstack to install only the technology stack (see the examples after this list)
10) In case of a multi-user installation, start the installer using the root account
11) For additional languages, you must use OAM (Oracle Applications Manager)
12) There is a new concept of INST_TOP, which mainly stores instance-specific files, including runtime files, log files and configuration files
13) In R12 there is a concept of Services instead of node types (forms/web/concurrent). Following is the list of services in R12:
* Root Service Group, which supports • Oracle Process Manager (OPMN)
* Web Entry Point Services, which support • HTTP Server
* Web Application Services, which support • OACORE OC4J • Forms OC4J • OAFM OC4J
Batch Processing Services, which support • Applications TNS Listener • Concurrent Managers • Fulfillment Server
Other Service Group, which supports • Oracle Forms Services • Oracle MWA Service
* : These services must be installed on the same / one machine (which is nothing but the Web node, in 11i terms)
14) Regardless of the type of services configured on a particular server, all files (forms, reports, JSPs) are stored in APPL_TOP (unified), basically to have a pure 3-tier architecture
15) The installer gives the option to configure OCM (Oracle Configuration Manager), which keeps track of key Oracle and OS stats. This collected data is sent to Oracle Support via HTTPS for better understanding of issues and quick resolution of any reported issues
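Here are the rapidwiz invocations corresponding to point 9 (illustrative; the virtual hostname is a placeholder):

rapidwiz -servername <virtual_hostname>   # install using a virtual hostname
rapidwiz -restart                         # restart a failed install
rapidwiz -techstack                       # install only the technology stack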

Main points about Upgrade:
1) You can only upgrade to R12 from 11i; if you are on an older version (like 10.7 or 11.0.3), you must first upgrade to 11i, and then upgrade to R12
2) High-level R12 upgrade process:
• Run Rapid Install the first time to lay out the new file structure and tech stack
• Migrate or upgrade the database to 10g R2
• Run AutoPatch to run the database driver to bring the DB to R12 level
• Run Rapid Install a second time to configure and start services

Admin Scripts:

adautocfg.sh - run autoconfig
adstpall.sh - stop all services
adstrtal.sh - start all services
adapcctl.sh - start/stop/status Apache only
adformsctl.sh - start/stop/status OC4J Forms
adformsrvctl.sh - start/stop/status Forms server in socket mode
adoacorectl.sh - start/stop/status OC4J oacore
adoafmctl.sh - start/stop/status OC4J oafm
adopmnctl.sh - start/stop/status opmn
adalnctl.sh - start/stop RPC listeners (FNDFS/FNDSM)
adcmctl.sh - start/stop Concurrent Manager
gsmstart.sh - start/stop FNDSM
jtffmctl.sh - start/stop Fulfillment Server
adpreclone.pl - Cloning preparation script
adexecsql.pl - Execute SQL scripts that update profiles during an AutoConfig run
java.sh - Call java executable with additional args, (used by opmn, Conc. Mgr)

Note: To understand this page, you should have prior knowledge or background of APPS 11i.