Posts

Showing posts from 2019

Cloud Migration Steps

Once the organisation has decided to migrate to the cloud, you can follow the framework / methodology below to proceed with the migration:

MRA (Migration Readiness Assessment) -> MRP (Migration Readiness & Planning) -> Migration and Operation

Apply the CAF (AWS Cloud Adoption Framework) across all of the above. The CAF captures AWS's learning from working with all of its customers to understand what has worked and what can be improved, and it gives a comprehensive approach. It identifies focus areas that are critical to cloud adoption.

MRA (helps you assess the organisation's capabilities, readiness and commitment to migrate to the cloud):
- Rapid Discovery
- Directional Business Use Case - a high-level calculation of the potential cost savings and increased business agility associated with the AWS migration

MRP (build the capabilities to migrate to and operate in the cloud); the output is a detailed migration plan:
- Discovery and planning - define application patterns and planning, build the backlog
- Landing Zone - foundation AWS infrastructure

Databricks - accessing Blob storage using a secret scope

container = "raw" storageAccount = "testarunacc" accountKey = "fs.azure.account.key.{}.blob.core.windows.net".format(storageAccount) accessKey = dbutils.secrets.get(scope = "arunscope", key = "key1") # Mount the drive for native python inputSource = "wasbs://{}@{}.blob.core.windows.net".format(container, storageAccount) mountPoint = "/mnt/" + container extraConfig = {accountKey: accessKey} print("Mounting: {}".format(mountPoint)) try:   dbutils.fs.mount(     source = inputSource,     mount_point = str(mountPoint),     extra_configs = extraConfig   )   print("=> Succeeded") except Exception as e:   if "Directory already mounted" in str(e):     print("=> Directory {} already mounted".format(mountPoint))   else:     raise(e) # Set the credentials to Spark configuration spark.conf.set(   accountKey,   accessKey) spark._jsc.hadoopConfiguration

Azure - Accessing Blob storage from a Databricks cluster using an account key

Azure - Accessing Blob storage from a Databricks cluster

container = "raw"
storageAccount = "arunacc"
accessKey = "<<>>"  # paste the storage account access key here
accountKey = "fs.azure.account.key.{}.blob.core.windows.net".format(storageAccount)

# Set the credentials to Spark configuration
spark.conf.set(accountKey, accessKey)
spark._jsc.hadoopConfiguration().set(accountKey, accessKey)

# Mount the drive for native python
inputSource = "wasbs://{}@{}.blob.core.windows.net".format(container, storageAccount)
mountPoint = "/mnt/" + container
extraConfig = {accountKey: accessKey}

print("Mounting: {}".format(mountPoint))
try:
  dbutils.fs.mount(
    source = inputSource,
    mount_point = str(mountPoint),
    extra_configs = extraConfig
  )
  print("=> Succeeded")
except Exception as e:
  if "Directory already mounted" in str(e):
    print("=> Directory {} already mounted".format(mountPoint))
  else:
    raise(e)
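Because the account key is also set in the Spark configuration, data can be read straight from the wasbs:// URL without mounting at all. A minimal sketch (the data/ folder and parquet format are assumptions for illustration):

# Read directly from Blob storage using the credentials configured above (data/ is a placeholder path)
directPath = "wasbs://{}@{}.blob.core.windows.net/data/".format(container, storageAccount)
df = spark.read.format("parquet").load(directPath)
print(df.count())

Note that this variant keeps the access key in plain text in the notebook; the secret-scope version above is preferable for anything beyond a quick test.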

Install Kafka Cluster on Ubuntu

1. Spin up an Ubuntu server on AWS.
2. Connect using PuTTY (port 22).
3. Check the Java version with java -version. If Java is not installed, do the following:
sudo apt-get update
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
# Managing Java
sudo update-alternatives --config java
Set the JAVA_HOME environment variable:
sudo vi /etc/environment
# insert JAVA_HOME="/usr/lib/jvm/java-8-oracle"
source /etc/environment
echo $JAVA_HOME
4. Download Kafka and untar it:
wget http://mirror.vorboss.net/apache/kafka/2.2.0/kafka_2.12-2.2.0.tgz
tar -xvf kafka_2.12-2.2.0.tgz
Run the command below to check that everything works:
bin/kafka-topics.sh
Now add the bin directory to PATH:
$ pwd
/home/ubuntu/kafka_2.12-2.2.0/bin
sudo vi ~/.bash_profile and add
export PATH="$PATH:/home/ubuntu/kafka_2.12-2.2.0/bin"
(or make the same change in the .bashrc file)
5. Make the changes for Zookeeper and start the services, as sketched below.
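A rough sketch of the remaining steps, assuming the default configuration files shipped with Kafka 2.2.0 (adjust hostnames and ports for your environment):

# Start Zookeeper, then the Kafka broker (each in its own terminal or as a background process)
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties

# Smoke test: create a topic, produce a message, then consume it
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning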

Web app for a new era of digitalization

Setting up Our Ionic 4 App

Choosing the Right Database - Polyglot Persistence

Data structure questions:
- Does your data have a natural structure? Is it unstructured?
- How is it connected to other data?
- How is it distributed?
- How much data are you dealing with?

Access pattern questions:
- What is your read/write ratio? Is it uniform or random?
- Is reading or writing more important to your application?

Organization need questions:
- Do we need authentication? What type?
- Do we need encryption?
- Do we need backups? Do we need a disaster site?
- What level of monitoring will be required?
- Which drivers are needed? Which languages will be used?
- Do we need plugins? Will we use third-party tools?

Cloud Architecture Notes

The architectural priorities and needs of every app are different, but the four pillars of architecture are an excellent guidepost you can use to make sure you have given enough attention to every aspect of your application:
- Security: safeguarding access and data integrity and meeting regulatory requirements
- Performance and scalability: efficiently meeting demand in every scenario
- Availability and recoverability: minimizing downtime and avoiding permanent data loss
- Efficiency and operations: maximizing maintainability and ensuring requirements are met with monitoring

A layered approach to security

Defense in depth is a strategy that employs a series of mechanisms to slow the advance of an attack aimed at acquiring unauthorized access to information. The common principles used to define a security posture are confidentiality, integrity, and availability, known collectively as CIA.
- Confidentiality - principle of least privilege: restricts access to information to only those who are explicitly granted access.