Tuesday, August 24, 2021

Download large file from Google Drive using wget on terminal



       To download a large file from Google Drive, use the following steps.


1] Share the file publicly and copy the share URL.

Example share URL - 

https://drive.google.com/file/d/1tcthANUPNgyho7X-5HPDuUAiEfTfw5/view?usp=sharing


2] Extract the file ID from the above share URL, as below.

https://drive.google.com/file/d/1tcthANUPNgyho7X-5HPDuUAiEfTfw5/view?usp=sharing

File ID is - 1tcthANUPNgyho7X-5HPDuUAiEfTfw5


3] Go to the terminal and paste the following command.

wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=FILEID" -O FILENAME && rm -rf /tmp/cookies.txt


Here, replace FILEID and FILENAME as per your file; a filled-in example follows.
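
For example, with the file ID from step 2 kept in a shell variable, the full command would look like this (just a sketch; output.zip is a placeholder name, use your own filename):

FILEID="1tcthANUPNgyho7X-5HPDuUAiEfTfw5"
FILENAME="output.zip"
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate "https://docs.google.com/uc?export=download&id=${FILEID}" -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=${FILEID}" -O "${FILENAME}" && rm -rf /tmp/cookies.txt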


Let me know how it goes.

Wednesday, March 31, 2021

Nginx Cookbook


1] Wildcard for Nginx location

I have multiple APIs running on a server, and to reach them through Nginx I have to add a separate location block for each, as below.

My goal is to use a single location block for all the APIs.

server {
    listen 80;
    server_name www.anup.co.in;

    location / { proxy_pass http://localhost:3000; }
    location /getHighscores { proxy_pass http://localhost:3000/getHighscores; }
    location /auth/google { proxy_pass http://localhost:3000/auth/google; }
    location /auth/google/redirect { proxy_pass http://localhost:3000/auth/google/redirect; }
    location /auth/login/success { proxy_pass http://localhost:3000/auth/login/success; }
    location /auth/login/failed { proxy_pass http://localhost:3000/auth/login/failed; }
    location /auth/logout { proxy_pass http://localhost:3000/auth/logout; }
}

Solution:

server {
    listen 80;
    server_name www.anup.co.in;

    location / { proxy_pass http://localhost:3000; }
    location ~ ^/(.*)$ { proxy_pass http://localhost:3000/$1; }
}
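
After reloading Nginx, a quick request against one of the routes confirms the wildcard block works (a sketch; /getHighscores is one of the paths from the first config):

sudo nginx -t && sudo systemctl reload nginx
curl -i http://www.anup.co.in/getHighscores

Note that because the proxied URI is built from the regex capture, query strings are not forwarded automatically; appending $is_args$args to the proxy_pass target is a common way to handle that if your APIs need it.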

Tuesday, August 11, 2020

Azure DevOps Pipeline Runtime parameter Task Condition



    This guide explains how to use an Azure DevOps pipeline to pass a runtime boolean parameter and run a task only if the condition is true; otherwise the task is skipped.


- Add the following lines at the beginning of your pipeline YAML file:


parameters:
- name: installNewRelic
  type: boolean
  default: false

trigger:
  branches:
    include:
    - qa
  paths:
    include:
    - '*'
    exclude:
    - 'docs/*'
    - '*.md'

pr:
  branches:
    include:
    - qa

variables:
  drupalroot: '/usr/share/nginx/html'
  docroot: '/usr/share/nginx/html/docroot'
  newrelic_cmd: 'docker run --entrypoint /bin/mv $(containerRegistry)/$(imageRepository):latest'

stages:
- stage: ReleaseToQA
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/qa'))
  displayName: Release to QA
  jobs:
  - job: Release
    displayName: Release
  - deployment: DeployToQA
    environment: $(webAppNameQA)
    strategy:
      runOnce:
        deploy:
          steps:
          # ... other tasks ...
          # The task below runs only when the condition is true. The parameter
          # defaults to false; when you click "Run pipeline" you are asked for
          # the installNewRelic value. If you select it, the condition becomes
          # true and the task runs; otherwise it is skipped.
          - task: Bash@3
            displayName: 'Place newrelic.ini from /usr/share/nginx/html/docroot/profiles/'
            condition: and(succeeded(), eq('${{ parameters.installNewRelic }}', true))
            inputs:
              targetType: 'inline'
              script: |
                $(newrelic_cmd) $(docroot)/profiles/corp-qa-newrelic.ini /etc/php/7.3/mods-available/newrelic.ini
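
If you trigger the pipeline from the command line instead of the web UI, the runtime parameter can be passed with the Azure DevOps CLI extension. A minimal sketch, assuming the azure-devops extension is installed, a default organization/project is configured, and the pipeline is named "qa-release" (the name is an assumption):

az pipelines run --name qa-release --branch qa --parameters installNewRelic=true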





Saturday, August 03, 2019

Create VPC Subnet Security Group EC2 ELB using Python Boto3

This script assumes basic knowledge of AWS, Boto3 & Python.
Prerequisites:
  • AWS Account
  • IAM Role with Access & Secret Key
  • Boto3 Installed & Configured

- Install the AWS CLI & the Boto3 library using pip, Python's package manager.
# pip install awscli boto3

- Create a user in the AWS console and get the Access Key ID & Secret Access Key to access AWS services programmatically, then configure the CLI.
# aws configure
- Run the script using the python command:
# python <script-name>.py

import boto3
import time

ec2 = boto3.resource('ec2')
client = boto3.client('ec2')

#Create VPC
response = client.create_vpc(CidrBlock='172.16.0.0/16',InstanceTenancy='default')

#Assign tags to VPC
client.create_tags(Resources=[response['Vpc']['VpcId']],Tags=[{'Key': 'Name','Value': 'my-drupal-vpc',}])

print('***** VPC Created with ID*********',response['Vpc']['VpcId'])
vpc_id = response['Vpc']['VpcId']

# Creating Internet Gateway for Drupal Web Instance subnets and attaching to VPC
ig = ec2.create_internet_gateway()
client.attach_internet_gateway(InternetGatewayId = ig.id, VpcId=vpc_id)

routetable1_response = client.create_route_table(VpcId=vpc_id)

def create_tag_for_route_table(route_table_number, route_table_name):
    tag = client.create_tags(Resources=[route_table_number['RouteTable']['RouteTableId']],Tags=[{'Key': 'Name','Value': route_table_name}])
    return tag

create_tag_for_route_table(routetable1_response,'drupal-rt1')
print('Route Table 1 Created - ',routetable1_response['RouteTable']['RouteTableId'])
route_table1 = ec2.RouteTable(routetable1_response['RouteTable']['RouteTableId'])

# Add a default route to the internet gateway in route table drupal-rt1 (used by web instances in subnets 1 and 3)
route_table1.create_route(DestinationCidrBlock='0.0.0.0/0', GatewayId=ig.id)


def create_subnet(cidr, vpc_id, azname):
    subnet_response = client.create_subnet(CidrBlock=cidr, VpcId=vpc_id, AvailabilityZone=azname)
    return subnet_response

def create_tag(subnet_number,subnet_name):
    client.create_tags(Resources=[subnet_number['Subnet']['SubnetId']], Tags=[{'Key': 'Name', 'Value': subnet_name}])

def modify_subnet_attribute(subnet_name):
    client.modify_subnet_attribute(MapPublicIpOnLaunch={'Value': True,}, SubnetId=subnet_name)

#Creating first subnet
subnet1 = create_subnet('172.16.1.0/24', vpc_id, 'us-east-1a')
ec2_subnet1 = subnet1['Subnet']['SubnetId']
create_tag(subnet1,'drupal-sb1-us-east-1a')
modify_subnet_attribute(ec2_subnet1)
print('Subnet 1 is Created with ID - ',ec2_subnet1)

#Associating Route Table 1 to Subnet 1
route_table1.associate_with_subnet(SubnetId=ec2_subnet1)
print('Route table 1 associated with Subnet 1 -',ec2_subnet1)

routetable2_response = client.create_route_table(VpcId=vpc_id)
create_tag_for_route_table(routetable2_response,'drupal-rt2')
print('Route Table 2 Created - ',routetable2_response['RouteTable']['RouteTableId'])
route_table2 = ec2.RouteTable(routetable2_response['RouteTable']['RouteTableId'])

# Creating second subnet
subnet2 = create_subnet('172.16.2.0/24', vpc_id, 'us-east-1a')
ec2_subnet2 = subnet2['Subnet']['SubnetId']
create_tag(subnet2,'drupal-sb2-us-east-1a')
print('Subnet 2 is Created with ID - ',ec2_subnet2)

#Associating Route Table 2 to Subnet 2
route_table2.associate_with_subnet(SubnetId=ec2_subnet2)
print('Route table 2 associated with Subnet 2 -',ec2_subnet2)

# Creating third subnet
subnet3 = create_subnet('172.16.3.0/24', vpc_id, 'us-east-1b')
ec2_subnet3 = subnet3['Subnet']['SubnetId']
create_tag(subnet3,'drupal-sb3-us-east-1b')
modify_subnet_attribute(ec2_subnet3)
print('Subnet 3 is Created with ID - ',ec2_subnet3)

#Associating Route Table 1 to Subnet 3
route_table1.associate_with_subnet(SubnetId=ec2_subnet3)
print('Route table 1 associated with Subnet 3 -',ec2_subnet3)

# Creating fourth subnet
subnet4 = create_subnet('172.16.4.0/24', vpc_id, 'us-east-1b')
ec2_subnet4 = subnet4['Subnet']['SubnetId']
create_tag(subnet4,'drupal-sb4-us-east-1b')
print('Subnet 4 is Created with ID - ',ec2_subnet4)

#Associating Route Table 2 to Subnet 4 
route_table2.associate_with_subnet(SubnetId=ec2_subnet4)
print('Route table 2 associated with Subnet 4 -',ec2_subnet4)

def create_security_group(descript, group_name):
    sg1_response = client.create_security_group(Description=descript,GroupName=group_name,VpcId=vpc_id)
    return sg1_response

def create_sg_tag(websg_or_elbsg,sg_group_name):
    sg_tag_response = client.create_tags(Resources=[websg_or_elbsg['GroupId']],Tags=[{'Key': 'Name','Value': sg_group_name}])
    return sg_tag_response

#Create Security Group for Drupal instances which will accept traffic from ALB
web_sg1 = create_security_group('Accept traffic from ALB', 'drupal-web-sg')
sgId = web_sg1['GroupId']
create_sg_tag(web_sg1,'drupal-web-sg')
print('Created Security Group for Web Instances -',sgId)

# Create Security Group for ALB which will accept traffic from the Internet
elb_sg1 = create_security_group('Accept traffic from Internet','drupal-elb-sg')
elbsgId = elb_sg1['GroupId']
create_sg_tag(elb_sg1,'drupal-elb-sg')
print('Created Security Group for ELB -',elbsgId)

elb1 = ec2.SecurityGroup(elbsgId)
elb1.authorize_ingress(GroupId=elbsgId,IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 80, 'ToPort': 80, 'IpRanges': [{'CidrIp': '0.0.0.0/0'}]}])

client.authorize_security_group_ingress(GroupId=sgId, IpPermissions=[{'IpProtocol': '-1','UserIdGroupPairs': [{'GroupId': elbsgId}]}])

#Creating SSH key pair for drupal instances
# call the boto ec2 function to create a key pair
key_pair = ec2.create_key_pair(KeyName='drupal-ec2-keypair')
# capture the private key and store it in a local .pem file
with open('drupal-ec2-keypair.pem', 'w') as outfile:
    outfile.write(str(key_pair.key_material))

# Creating instances for Drupal Infrastructure 
user_data_script = """#!/bin/bash
yum clean all
yum update -y
yum install httpd -y
echo "Hello this is drupal website" >> /var/www/html/index.html
systemctl start httpd
systemctl restart httpd
systemctl enable httpd"""

def create_instances(subnet_name, instance_name):
    web_instance = ec2.create_instances(ImageId='ami-011b3ccf1bd6db744',InstanceType='t2.micro',MinCount=1,MaxCount=1,KeyName='drupal-ec2-keypair',SubnetId=subnet_name,UserData=user_data_script,SecurityGroupIds=[sgId],TagSpecifications=[{'ResourceType': 'instance','Tags': [{'Key': 'Name','Value': instance_name}]}])
    return web_instance

web1_instance = create_instances(ec2_subnet1, 'drupal-web1')
# create_instances returns a list of Instance resources; take the one just launched
web1 = web1_instance[0]
# give the instance time to come up before it is registered with the target group
time.sleep(60)
print('Launching web1 instance - ',web1.id)

web2_instance = create_instances(ec2_subnet3, 'drupal-web2')
web2 = web2_instance[0]
time.sleep(60)
print('Launching web2 instance - ',web2.id)

# Application Load Balancer Code Starts here

lb = boto3.client('elbv2')
create_lb_response = lb.create_load_balancer(
    Name='drupal-web-elb',
    Subnets=[
        ec2_subnet1, ec2_subnet3,
    ],
    SecurityGroups=[
        elbsgId,
    ],
    Scheme='internet-facing',
    Tags=[
        {
            'Key': 'Name',
            'Value': 'drupal-web-elb'
        },
    ],
    Type='application',
    IpAddressType='ipv4'
)

lbId = create_lb_response['LoadBalancers'][0]['LoadBalancerArn']
print('Successfully created load balancer - ',lbId)

create_tg_response = lb.create_target_group(
    Name='drupal-web-tg',
    Protocol='HTTP',
    Port=80,
    TargetType='instance',
    HealthCheckPath='/index.html',
    VpcId=vpc_id
)
tgId = create_tg_response['TargetGroups'][0]['TargetGroupArn']
print('Successfully created target group - ',tgId)
#Create Listener for web elb
listenerId = lb.create_listener(
    LoadBalancerArn=lbId,
    Protocol='HTTP',
    Port=80,
    DefaultActions=[
        {
            'Type': 'forward',
            'TargetGroupArn': tgId
        },
    ]
)

# Register web instances with web-elb
regis_targets = lb.register_targets(TargetGroupArn=tgId,Targets=[{'Id': web1.id,},{'Id': web2.id}])
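
Once the script finishes, the load balancer's DNS name can be looked up and tested from the shell. A minimal sketch, assuming the same AWS CLI credentials and the 'drupal-web-elb' name used above; replace <elb-dns-name> with the value returned by the first command:

# aws elbv2 describe-load-balancers --names drupal-web-elb --query 'LoadBalancers[0].DNSName' --output text
# curl -i http://<elb-dns-name>/index.html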

Thursday, April 04, 2019

Migrating on Premise VM to AWS Cloud


Migrate an on-premises VM to AWS - AWS VM Import / Export
1)    Export the VM to .ovf or .vmdk format. Ex. myvm.vmdk
2)    Upload "myvm.vmdk" to an S3 bucket called "anupvmmigration"
3)    Go to IAM and create a role called "vmimport" (the role name must be vmimport) – copy the role JSON from the AWS docs link - https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
   
   Click on IAM -> Click on Roles -> Click on Create Role -> Select EC2 (Allows EC2 instances to call AWS services on your behalf.) -> Click on Next -> Click on Next -> Add tags & click on Next -> Give the Role Name – "vmimport" -> and finally click on Create Role

   OR Command line to create role
-       Create a file named trust-policy.json with the following policy:
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": { "Service": "vmie.amazonaws.com" },
         "Action": "sts:AssumeRole",
         "Condition": {
            "StringEquals":{
               "sts:Externalid": "vmimport"
            }
         }
      }
   ]
}
-       aws iam create-role --role-name vmimport --assume-role-policy-document "file://trust-policy.json"
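
To confirm the role was created, you can query it back (a quick check, assuming the default CLI profile):

aws iam get-role --role-name vmimport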

4)    Click on Roles in IAM, click on Role that you created i.e. “vmimport” -> Click on “Trust Relationship” tab -> Click on Edit Trust Relationship button -> paste following policy -> Finally click on Update Trust Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "vmie.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:Externalid": "vmimport"
        }
      }
    }
  ]
}

5)    Click on Policies in IAM -> Click on Create policy -> Click on the JSON tab -> paste the following policy from the AWS link - https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html -> Click on the Review Policy button -> Give the policy the name "vmimportpolicy" -> Finally click on the Create policy button.
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetBucketLocation",
            "s3:GetObject",
            "s3:ListBucket"
         ],
         "Resource":[
            "arn:aws:s3::: anupvmmigration",   ß update your bucket name here
            "arn:aws:s3::: anupvmmigration/*"  ß Update your bucket name here
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource":"*"
      }
   ]
}

            OR Command line to create policy

    Create a file named role-policy.json with the above policy, where anupvmmigration is the bucket where the disk images are stored: 
   aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document "file://role-policy.json"
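
You can verify that the inline policy was attached to the role (a quick check):

aws iam list-role-policies --role-name vmimport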

6)    Again, go to Roles in IAM -> Click on the role vmimport -> Under Permissions Policy click on Attach Policy -> search for the policy "vmimportpolicy" & select its check box -> Click on the Attach policy button.
7)    Go to Users in IAM -> Click on Add User -> Give the user name "anupvmuser" & give it programmatic access -> Click on Next -> Click on "Attach existing policies directly" -> Click on Create Policy & click on the JSON tab -> and paste the following code from the AWS doc link (update the bucket name to yours) –
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:DeleteObject",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": ["arn:aws:s3::: anupvmmigration","arn:aws:s3::: anupvmmigration/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole",
        "iam:PutRolePolicy"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CancelConversionTask",
        "ec2:CancelExportTask",
        "ec2:CreateImage",
        "ec2:CreateInstanceExportTask",
        "ec2:CreateTags",
        "ec2:DeleteTags",
        "ec2:DescribeConversionTasks",
        "ec2:DescribeExportTasks",
        "ec2:DescribeInstanceAttribute",
        "ec2:DescribeInstanceStatus",
        "ec2:DescribeInstances",
        "ec2:DescribeTags",
        "ec2:ImportInstance",
        "ec2:ImportVolume",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances",
        "ec2:ImportImage",
        "ec2:ImportSnapshot",
        "ec2:DescribeImportImageTasks",
        "ec2:DescribeImportSnapshotTasks",
        "ec2:CancelImportTask"
      ],
      "Resource": "*"
    }
  ]
}
  Click on the Review Policy button -> Give the policy the name "anupcustompolicy" -> Click on Create policy -> Now attach two policies to the user "anupvmuser" – "anupcustompolicy" & "AdministratorAccess" -> Click on Next -> On the review page click on Create user -> Finally download the .csv file.

8)    Create the /root/containers.json file and copy the following code from the AWS doc link - https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
[
  {
    "Description": "Redhat Linux 64 bit",
    "Format": "vmdk",
    "UserBucket": {
        "S3Bucket": "anupvmmigration",   ß Give your bucket name here
        "S3Key": "myvm.vmdk"   ß Give your vmdk file name
    }
}]

  OR, to import multiple VMs, use the following containers.json file format
[
  {
    "Description": "First disk",
    "Format": "vmdk",
    "UserBucket": {
        "S3Bucket": "my-import-bucket",
        "S3Key": "disksmy-windows-2008-vm-disk1.vmdk"
    }
  },         
  {
    "Description": "Second disk",
    "Format": "vmdk",
    "UserBucket": {
        "S3Bucket": "my-import-bucket",
        "S3Key": "disks/my-windows-2008-vm-disk2.vmdk"
    }
  }
]
9)    Go to a Linux or Windows machine and configure the AWS CLI (aws configure) using the Access Key ID & Secret Access Key of the user "anupvmuser".
   


10)    Use the following command to start the migration:

aws ec2 import-image --description "Redhat Linux 64 bit" --disk-containers file:///root/containers.json

11)    To check the status of the import task, use the following command with the ImportTaskId returned by the previous command (see the sketch below).
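
A minimal sketch of the status check, assuming the task id returned by import-image was import-ami-0abcd1234example (replace it with your own):

aws ec2 describe-import-image-tasks --import-task-ids import-ami-0abcd1234example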



  Troubleshooting
You might get the following errors.
1) Error -

For the above error, go to Roles, click on the role "vmimport", and check whether you have attached the policy to it.


     2)    Error -

For the above error, go to your VM's fstab file, check for any errors or wrong syntax and correct them; then export the VM from VMware again, upload it to S3 again, and restart the import.