bcdedit /set {bootmgr} path \EFI\ubuntu\grubx64.efi

 

HW, Kernel, OS, User, Apps

/usr Installed software, shared libraries, include files, and read-only program data.

Important subdirectories include:

/usr/bin: User commands.       /usr/sbin: System administration commands.

/usr/local: Locally customized software.

/etc Configuration files specific to this system.

/var Variable data specific to this system that should persist between boots.

/run Runtime data for processes started since the last boot.

/boot Files needed in order to start the boot process.

/dev Contains special device files that are used by the system to access hardware.

/tmp Temporary files. Files that have not been accessed, changed, or modified for 10 days are deleted from this directory automatically.

/var/tmp Files that have not been accessed, changed, or modified in more than 30 days are deleted from this directory automatically.

 

`rpm`: install, uninstall, verify, and query individual packages; CLI only.

`yum`: install, update, and remove packages, resolving dependencies automatically; CLI & GUI.

`rpm` is used to manage individual packages, while `yum` is used to manage software repositories and their associated packages.
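
For example, a minimal sketch of the difference (the nginx package and .rpm file name are illustrative):

rpm -ivh nginx-1.0.rpm    # installs just this .rpm; fails if dependencies are missing
rpm -q nginx              # query whether the package is installed
yum install nginx         # resolves and installs nginx plus all its dependencies from the repos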

Yellowdog Updater Modified (YUM), Advanced Packaging Tool (APT), Dandified YUM (DNF)

YAML Ain't Markup Language (YAML)

 

Kerberos: a network authentication protocol used to verify the identity of users or services in a client-server environment.

OpenLDAP: an open-source implementation of the Lightweight Directory Access Protocol (LDAP), used for centralized directory services.
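
A quick way to query an LDAP directory from the shell (the server URI, base DN, and uid are placeholders):

ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" "(uid=alice)"    # -x simple auth, -H server URI, -b search base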

 

  • Wireshark: network protocol analyzer. It captures and inspects network traffic in real time, letting you analyze individual packets, monitor network communication, and troubleshoot network issues. It provides detailed information about the protocols, traffic patterns, and data exchanged between devices on a network.
  • Nmap (Network Mapper): network scanning tool. It discovers hosts, scans for open ports, and identifies the services running on those ports. Nmap can map the network topology, detect vulnerabilities, and perform security assessments; it is often used for network reconnaissance and security auditing.

Wireshark is primarily used for deep packet analysis and troubleshooting network issues, while Nmap is focused on network scanning and security assessments.
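
Illustrative one-liners for each (interface, subnet, and packet count are placeholders; tshark is Wireshark's command-line companion):

nmap -sS 192.168.1.0/24    # TCP SYN scan of every host in the subnet
tshark -i eth0 -c 100      # capture and print 100 packets from interface eth0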

 

 

 

Ctrl+L: clear the screen

vim ~/.bashrc, add alias word='meaning', then source ~/.bashrc

cp /bin/cmd newword (copy a command binary under a new name)

htop

scp

; | && (command separators/chaining)

echo $SHELL (show the current shell)

which cmd

exa, ls ./dir, ls -altr, ls -l file1, ls -ld dir1

cmd1 argument | cmd2 argument

useradd name, userdel -r name, id username

sudo groupadd -r group02

mkdir -p a/b/c (tree), mkdir abc ; cd abc, touch f1 f2 f3 f4 f5

touch ad.txt; echo "Hello" > ad.txt; cat ad.txt (content); file ad.txt (file type)

cp/mv /path/file/folder /new/path

echo "new line" > f.txt (overwrite), echo "2nd line" >> f.txt (append)

head /etc/passwd

tail -n 3 /etc/passwd

head file.txt (first 10 lines), tail file.txt (last 10 lines)

history -c (clear)

man cmd / cmd --help

passwd -e name (expire the password; the user must set a new one at next login)

apt install zsh, zsh --version, zsh, bash, chsh -s /usr/bin/zsh

yum list, group list, search pkg, install pkg, group install groupname, update, remove pkg, history

top, ps aux, lscpu

wget, curl: download files from URLs

You can use "grep to get the lines of a file matching your desired criteria

hostname -i

rpm <-> deb conversion: alien pkgname.rpm (rpm to deb), alien -r pkgname.deb (deb to rpm)

ssh -i mylab.pem remoteuser@remotehost

nmap -p 80 hostname, nmap -sP 192.168.1.0/24 (ping scan)

diff is used to find differences between two files. On its own, it’s a bit hard to use; instead, use it with diff -u to find lines which differ in two files:

diff -u menu1.txt menu2.txt

ps: View information about running processes

top: Display real-time information about running processes

printf "\033c"    (resets/clears the terminal)

shutdown -r +5    (reboot in 5 minutes)

https://linuxopsys.com/topics/category/commands

  • parted -l: Lists partition information on storage devices.
  • du -h: Displays the disk usage of files and directories in human-readable format.
  • df -h: Shows disk space usage of file systems in human-readable format.
  • fsck /dev/sda: Checks and repairs a Linux file system on the /dev/sda device.
  • ps -ef: Displays a detailed list of running processes on the system.
  • uptime: Shows the system's uptime, current time, and number of logged-in users.
  • updatedb: Updates the file database used by the locate command to quickly find files by name.
  • grep pattern file: Searches for lines matching a pattern within a file.
  • find /path -type d -name name: Searches for directories with a specific name within a given path.
  • find /path -type f -name name: Searches for files with a specific name within a given path.
  • shred: Securely deletes files by overwriting their contents.
  • ln: Creates a link (either hard or symbolic) between files or directories.
  • finger: Displays information about user(s) on the system.
  • whatis: Provides a brief description of a command or program.
  • cmp: Compares two files byte by byte.
  • sort: Sorts lines of text files in alphabetical or numerical order.
  • awk: Processes and manipulates text files based on patterns.
  • resolvectl status: Shows the current DNS configuration and status.
  • netstat: Displays network connections, routing tables, and network interface statistics.
  • ss: Provides detailed information about socket connections.
  • iptables: Manages firewall rules and packet filtering in Linux.
  • ufw: Uncomplicated Firewall - provides a user-friendly interface to manage firewall rules.
  • neofetch: Displays system information and hardware details in a visually appealing way.
  • cal: Displays a calendar for the specified month or year.
  • free: Shows memory usage and available system memory.
  • df: Displays disk space usage of file systems.
  • htop: Interactive process viewer and system monitor.

 

 

 

 

anyvar="u1 u2 u3 u4 u5"   (or capture a command's output: anyvar=$(...))

for a in $anyvar; do useradd "$a"; done

 

BASH SCRIPT

#! Shebang (also known as a hashbang or a pound-bang)

 

(create custom variables like this: name=aman)

echo the date today is $(date), enjoy

echo Hi AMAN Sir, there are $(who | wc -l) users on this system

 

echo i am now ${v1}ing and he is ${v2}ing

 

#!/bin/bash

echo -n "PLease enter your name: "

read first last

echo Your first name is $first

echo Your last name is $last

echo HELLO, $first $last, BYE!

 

 

gedit script.sh    (bash scripts conventionally use .sh; try without any extension too)

 

#!/bin/bash

echo

whoami

echo (prints a blank line, putting a gap between each line of output)

pwd

hostname jet.com

ifconfig ens33 100.0.0.1 netmask 255.0.0.0

useradd sibimpv

passwd sibimpv

mkdir /rhel8

chmod 777 script.sh

 

 

#!/bin/bash

 

echo "Helllo Dude"

sleep 3

echo "huh uh..."

sleep 3

echo 'i see'

echo 'yea i can understand'

sleep 3

echo 'okay then, see ya later alligator!'

 

#./script.sh

# /path/to/bash.sh

# bash bash.sh

 

test -d readname.txt   (exit status 0 if readname.txt exists and is a directory; use -f for a regular file)

 

#!/bin/bash

echo "Do you like PUBG? (y/n)"

read PUBG

if [[ $PUBG == y ]]; then

        echo "That's great!!"

else

        echo "GTFOH!"

fi

 

Ansible is a tool that helps automate repetitive tasks like installing software or configuring servers. It uses a simple language called YAML to describe what tasks need to be done, and can be run on many servers at once. This saves time and reduces the chance of errors when managing a large number of servers.

1.     Install 2 or more Linux OSes in VMware: RHEL 9 as the Control Node; any flavour of Linux can be used for the Managed Nodes (any version of RHEL, Ubuntu, Fedora, CentOS, Kali, macOS, etc.). Note the IP addresses of all installed OSes.

 

2.     Configuring RHEL 9 as Control Node:

 

·       # cd /etc/ansible (default location)

·       # yum install epel-release

·       # sudo dnf install -y ansible-core (yum works too)   (installing Ansible)

·       # ansible-galaxy collection install ansible.posix (without it, modules like firewalld won't work)

·       # ansible --version (check the installed Ansible version)

 

3.     Configure SSH:

·       #ssh-keygen

·       Press 'Enter' at each prompt to accept the default values

·       #ssh-copy-id root@192.168.77.130 (Your Managed Node IP)

·       #ssh-copy-id -f root@192.168.77.130 (forcefully, if some error occurs)

·       #mkdir ~/automation  (Creating Ansible directory)

·       #cd ~/automation    (~ = tilde = the current user's home, so ~/automation = /home/<user>/automation)

·       #gedit ansible.cfg   (Creating Ansible Config file)

[defaults]

inventory = ./inventory

host_key_checking = false (or just uncomment it in the default-location cfg for the default values)

remote_user = root

ask_pass = false

[privilege_escalation]

become = true

become_method = sudo

become_user = root

become_ask_pass = false

4.     Set up inventory:

 

·       #gedit inventory (Creating Inventory file for storing details- IP, Names, DNS etc of all Managed Nodes)

 

[mydevices]

192.168.77.129  #control

192.168.77.130  #rhel

 

5.     Test Installation

·       #ansible hostgroup --list-hosts  (lists all hosts under specific group)

·       #ansible all -m ping (checking ssh connections, all / ip / hostgroup name)

·       After a successful "ping"/"pong" reply, follow the steps below on the Managed Node

·       #echo "root ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/root

·       #gedit /etc/ssh/sshd_config

Line 40 #PermitRootLogin prohibit-password

(Change it to PermitRootLogin yes)

PasswordAuthentication no

(Change it to PasswordAuthentication yes)

 

·       #systemctl restart sshd (or ssh)

·       #sudo service ssh restart (other linux OS)

·       Now we have successfully configured the Control & Managed Nodes.

 

6.     Write Playbooks:

 

·       #gedit nginx-deploy.yaml (Creating our first playbook)

·       Two spaces of indentation are the standard used in the Ansible community.

---

- name: Playbook to Install and Start Nginx

  hosts: dev

  tasks:

  - name: Install nginx

    package:

      name: nginx

      state: present   # absent to uninstall

 

  - name: Start nginx Service

    service:

      name: nginx

      state: started

·       You can use https://jsonformatter.org/yaml-formatter to check and format the errors in the layout.

 

7.     Execute playbooks

·       # ansible-playbook --syntax-check playbook.yml

·       #ansible-playbook nginx-deploy.yaml

 

 

·       If the PLAY RECAP shows your hosts with failed=0, your playbook execution was successful.

·       ANSIBLE_CONFIG=/home/user1/ansible/user1.cfg ansible-playbook playbook.yaml (run with a custom config file; ansible-playbook's -c flag selects the connection type, not a config file)

·       add custom hostnames and IP addresses:

#gedit /etc/hosts

#gedit /etc/hostname (to change hostname)

 

·       #ansible hostgroup --list-hosts  (lists all hosts under specific group)

·       #ansible 192.168.77.133 (or a group name) -m shell -a 'free -m'

·       #ansible 192.168.77.133 -m shell -a 'df'

·       #ansible 192.168.77.133 -m shell -a 'du'

·       #ansible 192.168.77.133 -m shell -a 'ps'

·       #ansible 192.168.77.133 -m shell -a 'ifconfig'

·       #ansible 192.168.77.133 -m shell -a 'hostname'

·       #ansible 192.168.77.133 -m shell -a 'uname'

·       #ansible 192.168.77.133 -m shell -a 'ls /etc'

·       #ansible 192.168.77.130 -m shell -a 'cat /etc/passwd' or group

·       ls, cd, pwd, mkdir, rm, cp, mv, touch, chmod, chown, ps, top, free, df, du, grep, find, curl, wget, ping, traceroute, ssh, scp, tar, unzip, zip

·       #ansible 192.168.77.130 -m copy -a 'src=file.txt dest=/home/folder owner=root mode=0755'

·       #ansible 192.168.77.130 -m package -a 'name=nginx state=present'

·       #ansible 192.168.77.130 -m package -a 'name=nginx state=absent'

·       #ansible 192.168.77.130 -m setup (detailed info about mngd node)

·       #ansible 192.168.77.130 -a "cat /etc/os-release"

·       curl 192.168.1.1 or curl https://www.google.com

 

 

 

·       MIME (Multipurpose Internet Mail Extensions) types are used to identify the file format of a file being transferred over the internet.
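
To check a file's MIME type from the shell (the file name is a placeholder):

file --mime-type report.pdf    # prints e.g. "report.pdf: application/pdf"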

 

RHEL/DEBIAN

vim /etc/ssh/sshd_config

PermitRootLogin prohibit-password to PermitRootLogin yes

PasswordAuthentication no to PasswordAuthentication yes

 

systemctl restart sshd   (or: service sshd restart)

 

 

ARCH LINUX

sudo pacman -Syu  (-Sy= syncs with repo/update, -Su= upgrade)

sudo pacman -S gedit

sudo pacman -R gedit (remove)

 

MACOS

sudo systemsetup -getremotelogin

sudo systemsetup -setremotelogin on

sudo launchctl stop com.openssh.sshd

sudo launchctl start com.openssh.sshd

 

 

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

 

          brew install ansible

ansible --version

brew install gedit

 

/etc/sudoers:

%wheel ALL=(ALL) ALL

/etc/sudoers.d/group01:

%group01 ALL=(ALL) ALL

/etc/sudoers.d/user01:

user01 ALL=(ALL) ALL

ansible ALL=(ALL) NOPASSWD:ALL
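
A sketch of validating a sudoers drop-in before relying on it (path as above):

visudo -cf /etc/sudoers.d/user01    # -c = check syntax only, -f = which file to check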

 

 

 

 

[my_network]

router1 ansible_host=<router1_ip>

switch1 ansible_host=<switch1_ip>

 

-Ensure SSH access is enabled on your Cisco devices:

 

#enable

#configure terminal

#crypto key generate rsa modulus 2048

#username <username> privilege 15 secret <password>

#line vty 0 4

#transport input ssh

#login local

#end

#write memory

 

Create an SSH key pair on the control machine if you don't have one already:

$ ssh-keygen -t rsa

 

Copy the public key (~/.ssh/id_rsa.pub) to the network devices you want to manage. You can use the ssh-copy-id command or manually copy and paste the key.
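
For example (the user name and device IP are placeholders matching the inventory above):

ssh-copy-id -i ~/.ssh/id_rsa.pub admin@<router1_ip>    # appends the public key to the device's authorized keys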

 

---

- name: Gather network facts

  hosts: my_network

  gather_facts: no

  tasks:

    - name: Get device facts

      ios_facts:

        gather_subset: all

      register: facts

 

    - name: Print device facts

      debug:

        var: facts

 

 

 

sudo apt install iputils-ping

 

 

                            https://aman5z.blogspot.com/

 

  1. grep command is used to search for a specific pattern in files or output, and it displays the matching lines.
  2. netstat command is used to check all the listening ports and services of the machine.
  3. free command is used to check the memory status.
  4. dmesg command is used to review boot messages.
  5. Advantage of Open Source: It promotes collaboration, transparency, and flexibility, allowing users to access, modify, and distribute software freely.
  6. Inode: A unique identifier for each file in a Linux filesystem. Process ID (PID): A unique identifier assigned to each running process in the system.
  7. tail command displays the last few lines of a file or a stream of data in real-time.
  8. Linus Torvalds was the principal force behind the development of the Linux operating system.
  9. Kernel is the core component of an operating system that manages hardware resources and enables communication between software and hardware.
  10. GNU shell is known as "bash" (Bourne Again SHell).
  11. To check the installed version of Red Hat, use the command cat /etc/redhat-release.
  12. By default, there are two types of users in Linux: regular users and the superuser (root).
  13. Shell is a command-line interpreter that allows users to interact with the operating system and execute commands.
  14. In Linux, "console" refers to the physical terminal where users can enter commands directly.
  15. To create a user named 'alice' with user ID 500, use the command useradd -u 500 alice.
  16. To remove the directory /tmp/abc/bcd, use the command rm -r /tmp/abc/bcd.
  17. To view the last 50 lines of the file named /var/log/dmesg, use the command tail -n 50 /var/log/dmesg.
  18. To reset a user password, use the command passwd username and follow the prompts.
  19. To delete a user with its home directory, use the command userdel -r username.
  20. To create two empty files named 'star' and 'ktm', use the command touch star ktm.
  21. The default file systems in Linux are ext4 and XFS.
  22. Important directories in the Linux root directory:
    • /bin: Contains essential binary files and executable programs.
    • /etc: Stores system configuration files.
    • /home: Home directories for regular users.
    • /var: Holds variable data such as logs and temporary files.
  23. To copy the entire content of /etc/passwd to /tmp/passwd, use the command cp /etc/passwd /tmp/passwd.
  24. The default location of the software documentation in Red Hat is /usr/share/doc.
  25. To find password-named directories or files from the root directory, use the command find / -type d -name "*password*" -o -type f -name "*password*".
  26. To find empty files from the root directory, use the command find / -type f -empty.
  27. To find all hidden directories from the root directory, use the command find / -type d -name ".*" -exec ls -ld {} \;.
  28. To find all files which are 50MB or larger in size from the /tmp directory, use the command find /tmp -type f -size +50M.
  29. To find all files which are 200MB or larger in size from any directory and remove them, use the command find / -type f -size +200M -delete.
  30. To find all files named 'passwd', save the normal output to /tmp/output while errors still appear in the terminal, use the command find / -name passwd > /tmp/output (only stdout is redirected; adding 2>&1 would send the errors into the file as well).
  31. To find all files which have a size of 1KB and copy the output to /tmp/output and errors to /tmp/error, use the command find / -type f -size 1k -exec cp {} /tmp/output \; 2> /tmp/error.
  32. All user info is stored in the file /etc/passwd.
  33. The root user is the superuser with administrative privileges and full access to the system.
  34. To disallow a user from logging in, edit the file /etc/passwd and change the user's shell to /sbin/nologin or /usr/sbin/nologin.
  35. The login.defs file stores various settings related to user login and authentication.
  36. To modify the user home directory of user "aby" to /tmp/aby, use the command usermod -d /tmp/aby aby.
  37. To change the primary group of user "Honda" to the group "Yamaha", use the command usermod -g Yamaha Honda.
  38. To add user "ktm" to the supplementary group "email", use the command usermod -aG email ktm.
  39. To delete a group, use the command groupdel groupname.
  40. A physical extent is a contiguous block of storage that LVM allocates on a physical volume; it is the smallest unit of allocation in a volume group.
  41. Process States In UNIX: Running, Sleeping, Stopped, Zombie.
  42. The "parted" command is used to create, delete, resize, and manage disk partitions.
  43. To remove the Swap File, use the command swapoff /path/to/swapfile to deactivate it, and then rm /path/to/swapfile to delete it.
  44. To identify the version of Red Hat installed, use the command cat /etc/redhat-release.
  45. BASH (Bourne Again SHell) is a widely used command-line interpreter and scripting language in Linux.
  46. The maximum length for a filename under Linux varies depending on the file system used, but it is typically 255 characters.
  47. A typical rule of thumb for swap partition size in Linux is twice the size of RAM, but it can be adjusted based on the system's requirements.
  48. File permissions in Linux determine the access rights for users (owner, group, others) to read, write, or execute files and directories.
  49. The file used to automatically mount file systems is /etc/fstab.
  50. LVM (Logical Volume Manager) is required to manage logical volumes, allowing for flexible resizing and management of storage devices.
  51. /proc is a virtual file system that provides an interface to kernel data structures and real-time system information.
  52. Daemons are background processes in Linux that run independently of user sessions, providing specific services to the system.
  53. The first process started by the kernel in Linux is the init process, with process ID 1 (PID 1).
  54. /etc/resolv.conf is used for DNS configuration, and /etc/hosts is used for hostname-to-IP address mapping.
  55. Default ports: DNS 53, SMTP 25, FTP 21, SSH 22, DHCP 67/68 (UDP), HTTP 80, HTTPS 443, RDP 3389.
  1. Soft link (symbolic link) is a reference to another file by name, while a hard link is a direct reference to the same data blocks on disk (see the example at the end of this list).
  2. SSH (Secure Shell) is a secure protocol used for secure remote access to servers. To connect to a remote server via SSH, use ssh user@hostname.
  3. The netstat command is used to display network statistics and information about network connections, ports, and routing tables.
  4. The ping command is used to check the connectivity and response time of a remote server or network device.
  5. The du (disk usage) command is used to check the size of a file or directory.
  6. The wc (word count) command is used to count the number of characters, words, and lines in a file.
  7. The lsof command in Linux lists open files and shows information about files opened by processes.
  8. To remove a file or directory, use the command rm for files and rm -r for directories.
  9. To exit from the vi editor, press Esc to enter command mode, then type :wq to save and quit or :q! to quit without saving.
  10. A "PIPE" in Linux is used to redirect the output of one command as input to another command using the | symbol.
  11. The ps command is used to list the running processes on a Linux system.
  12. To list the contents of a tar.gz file and extract only one file, use tar -tf file.tar.gz to list and tar -xf file.tar.gz filename to extract.
  13. To list running processes in Linux, use the command ps aux or top for dynamic updates.
  14. To deny a user from scheduling cron jobs, edit the file /etc/cron.deny and add the username.
  15. The steps for resetting the root password by booting into Single User Mode are:
    • Reboot the system.
    • At the GRUB menu, select the desired kernel and press e to edit.
    • Append init=/bin/bash to the end of the kernel line.
    • Press Ctrl + X to boot into single-user mode.
    • Remount the root filesystem with read-write access: mount -o remount,rw /
    • Change the root password: passwd
    • Reboot the system: reboot.
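
Example for item 1 above, soft vs hard links (file names are illustrative):

echo data > original.txt
ln original.txt hardcopy.txt      # hard link: same inode, same data blocks
ln -s original.txt softcopy.txt   # soft link: a new inode that points to the file name
ls -li original.txt hardcopy.txt softcopy.txt    # -i shows the inode numbers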

 

 

 

 

 

 

 




---

- name: All-in-One Server Setup

  hosts: all

  become: true

  ignore_unreachable: true

  vars:

    tomcat_version: 9.0.50

    java_package: java-11-openjdk-devel

    samba_share_path: /srv/samba/share

    nfs_exports:

      - path: /srv/nfs/share

        options: "*(rw,sync,no_subtree_check,no_root_squash)"

    db_user: myuser

    db_password: mypassword

    db_name: mydatabase

    users:

      - username: user1

        password: pass1

        groups: [group1]

      - username: user2

        password: pass2

        groups: [group2]

    user_groups:   # 'groups' is a reserved Ansible variable (the inventory groups), so use another name

      - name: group1

      - name: group2

 

  tasks:

    - name: Update package cache

      package:

        name: "*"

        state: latest

      become: true

 

    - name: Install required packages

      package:

        name: "{{ item }}"

        state: present

      become: true

      with_items:

        - httpd

        - nginx

        - samba

        - nfs-utils

        - mysql-server

        - vsftpd

        - php

        - parted

        - bind-utils

        - acl

        - cronie

 

    - name: Start and enable services

      service:

        name: "{{ item }}"

        state: started

        enabled: true

      with_items:

        - httpd

        - nginx

        - smb

        - nfs-server

        - mysqld

        - vsftpd

        - crond

 

    - name: Deploy Tomcat

      get_url:

        url: "https://downloads.apache.org/tomcat/tomcat-{{ tomcat_version.split('.')[0] }}/v{{ tomcat_version }}/bin/apache-tomcat-{{ tomcat_version }}.tar.gz"

        dest: /opt/

        mode: '0644'

      become: true

 

    - name: Extract Tomcat

      unarchive:

        src: "/opt/apache-tomcat-{{ tomcat_version }}.tar.gz"

        dest: /opt/

        remote_src: yes

        extra_opts: "--strip-components=1"

      become: true

 

    - name: Create Samba share directory

      file:

        path: "{{ samba_share_path }}"

        state: directory

        mode: '0777'

      become: true

 

    - name: Configure Samba share

      lineinfile:

        path: /etc/samba/smb.conf

        line: "path = {{ samba_share_path }}"

        state: present

      become: true

 

    - name: Configure NFS export

      lineinfile:

        path: /etc/exports

        line: "{{ item.path }} {{ item.options }}"

        state: present

      become: true

      with_items:

        - "{{ nfs_exports }}"

 

    - name: Mount NFS share

      mount:

        path: "{{ item.path }}"

        src: "{{ inventory_hostname }}:{{ item.path }}"

        fstype: nfs

        state: mounted

      become: true

      with_items:

        - "{{ nfs_exports }}"

 

    - name: Create MySQL database

      mysql_db:

        name: "{{ db_name }}"

        state: present

        login_user: root

        login_password: ""

 

    - name: Create MySQL user

      mysql_user:

        name: "{{ db_user }}"

        password: "{{ db_password }}"

        priv: "{{ db_name }}.*:ALL"

        host: localhost

        state: present

        login_user: root

        login_password: ""

 

    - name: Copy file to destination

      copy:

        src: /path/to/source/file

        dest: /path/to/destination/file

      become: true

 

    - name: Configure DNS resolver

      lineinfile:

        path: /etc/resolv.conf

        line: "nameserver 8.8.8.8"

        state: present

      become: true

 

    - name: Set ACL on directory

      acl:

        path: /path/to/directory

        entity: "{{ item.entity }}"

        etype: user

        permissions: "{{ item.permissions }}"

        state: present

        recursive: yes

      become: true

      with_items:

        - { entity: 'user1', permissions: 'rwx' }

        - { entity: 'user2', permissions: 'rx' }

 

    - name: Add cron job

      cron:

        name: "My Cron Job"

        job: "echo 'Hello, World!'"

        state: present

      become: true

 

    - name: Create groups

      group:

        name: "{{ item.name }}"

      become: true

      with_items:

        - "{{ groups }}"

 

    - name: Create users

      user:

        name: "{{ item.username }}"

        password: "{{ item.password | password_hash('sha512') }}"

        groups: "{{ item.groups }}"

      become: true

      with_items:

        - "{{ users }}"

  1. Describe SES. Answer: Simple Email Service.
  1. What is the maximum number of Elastic IPs anyone can have by default? Answer: 5 per region (a soft limit that can be raised on request).
  2. What is the difference between stopping and terminating an EC2 instance? Answer: Stopping preserves the instance's data and configuration, while terminating deletes the instance permanently.
  3. What are Key-Pairs in AWS? Answer: Key pairs consist of a public key and a private key used for secure access to EC2 instances.
  4. How can you recover/login to an EC2 instance for which you have lost the key? Answer: You need to create a new key pair or attach an existing key pair to the instance.
  5. What are some critical differences between AWS S3 and EBS? Answer: S3 is an object storage service, while EBS provides block-level storage for EC2 instances.
  6. How do you allow a user to gain access to a specific bucket? Answer: By configuring appropriate bucket policies or IAM policies.
  7. What are the Storage Classes available in Amazon S3? Answer: Standard, Intelligent-Tiering, Standard-IA (Infrequent Access), Glacier, and Glacier Deep Archive.
  8. What Is Amazon Virtual Private Cloud (VPC) and Why Is It Used? Answer: VPC enables you to create a virtual network in the AWS cloud, providing isolation and control over resources.
  9. What are the advantages of AWS IAM? Answer: IAM provides centralized control and management of AWS resources, allowing you to manage users, groups, roles, and their access levels.
  10. What are the different types of load balancers in AWS? Answer: Application Load Balancer (ALB), Network Load Balancer (NLB), Gateway Load Balancer (GWLB), and the legacy Classic Load Balancer (CLB).
  11. How does AWS IAM help your business? Answer: IAM helps in implementing security best practices, managing user access, and securing resources within an organization.
  12. What is the difference between a Domain and a Hosted Zone? Answer: A domain represents a unique name in the DNS system, while a hosted zone is a container that holds information about how to route traffic for a specific domain.
  13. Suppose you are a game designer and want to develop a game with single-digit millisecond latency, which of the following database services would you use? Answer: Amazon DynamoDB.
  14. If you need to perform real-time monitoring of AWS services and get actionable insights, which services would you use? Answer: Amazon CloudWatch.
  15. As a database administrator, you will employ a service that is used to set up and manage databases such as MySQL, MariaDB, and PostgreSQL. Which service are we referring to as we are considering cloud? Answer: Amazon RDS (Relational Database Service).
  16. What is AMI? Answer: Amazon Machine Image, a pre-configured template used to create EC2 instances.
  17. What is the edge location? Answer: Edge locations are endpoints of AWS CloudFront CDN (Content Delivery Network) where content is cached for faster delivery.
  18. What do you mean by EC2? Answer: Amazon Elastic Compute Cloud, a web service that provides resizable compute capacity in the cloud.
  19. What do you mean by EBS? Answer: Amazon Elastic Block Store, a block-level storage service used with EC2 instances.

 

  1. What is Load Balancing? Answer: Load balancing is the process of distributing incoming network traffic across multiple servers to improve performance and availability.
  2. What is Route 53? Answer: Amazon Route 53 is a scalable domain name system (DNS) web service for managing domain names and routing internet traffic.
  3. What is CloudTrail? Answer: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account.
  4. What is AWS VPN? Answer: AWS VPN is a managed VPN service that allows you to securely connect your on-premises networks or remote locations to the AWS cloud.
  5. What is an Internet Gateway? Answer: An Internet Gateway is a horizontally scalable and redundant AWS service that allows communication between instances in your VPC and the internet.
  6. What is NAT? Answer: Network Address Translation (NAT) allows instances in a private subnet to connect to the internet while hiding their private IP addresses.
  7. What are public and private subnets? Answer: Public subnets are connected to the internet, while private subnets are not accessible from the internet directly.
  8. What is Cloud Computing? Answer: Cloud computing is the delivery of computing resources over the internet on a pay-as-you-go basis.
  9. What is an instance? Answer: An instance refers to a virtual machine running on the AWS infrastructure.
  10. What is a Route Table? Answer: A route table contains a set of rules, called routes, that determine where network traffic is directed within a VPC.
  11. What is the relation between the Availability Zone and Region? Answer: An AWS region consists of multiple Availability Zones (data centers) that are isolated from each other.
  12. What is auto-scaling? Answer: Auto-scaling is a feature that automatically adjusts the number of EC2 instances in response to changes in demand.
  13. What is geo-targeting in CloudFront? Answer: Geo-targeting is the ability to deliver content based on the geographic location of the viewer.
  14. What are the steps involved in a CloudFormation Solution? Answer: Template creation, template validation, stack creation, stack update, stack deletion.
  15. How do you upgrade or downgrade a system with near-zero downtime? Answer: By using techniques like rolling deployments, blue-green deployments, or canary deployments.
  16. Is there any other alternative tool to log into the cloud environment other than console? Answer: Yes, AWS CLI (Command Line Interface) is an alternative tool for managing AWS resources.
  17. What services can be used to create a centralized logging solution? Answer: Amazon CloudWatch Logs, AWS Elasticsearch Service, and third-party tools like Splunk.
  18. Name some of the AWS services that are not region-specific. Answer: IAM (Identity and Access Management), Route 53, CloudFront, AWS CloudFormation.
  19. What is CloudWatch? Answer: Amazon CloudWatch is a monitoring and observability service for AWS resources and applications.
  20. What is the difference between a Spot Instance, an On-demand Instance, and a Reserved Instance? Answer: Spot Instances are instances launched at a lower price, On-demand Instances are instances with no upfront payment, and Reserved Instances are instances with a one-time upfront payment for discounted pricing.
  21. On an EC2 instance, an application of yours is active. Once the CPU usage on your instance hits 80%, you must reduce the load on it. What strategy do you use to complete the task? Answer: Auto Scaling or adding more instances to distribute the load.
  22. What is SQS? Answer: Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables decoupling of distributed systems.
  23. How many subnets can you have per VPC? Answer: You can have up to 200 subnets per VPC.
  24. How to connect an EBS volume to multiple instances? Answer: A standard EBS volume can only be attached to one EC2 instance at a time (io1/io2 volumes support Multi-Attach).
  25. List different types of cloud services. Answer: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
  26. Your organization has around 50 IAM users. Now, it wants to introduce a new policy that will affect the access permissions of an IAM user. How can it implement this without having to apply the policy at the individual user level? Answer: By creating IAM groups, assigning policies to groups, and adding users to groups.
  27. Can you change the private IP address of an EC2 instance while it is in a running or stopped state? Answer: No, you cannot change the private IP address of an EC2 instance.
  28. What are the benefits of AWS CloudFormation? Answer: AWS CloudFormation enables infrastructure as code, simplifies resource management, and provides automation and consistency in deploying infrastructure.
  29. How is AWS CloudFormation different from AWS Elastic Beanstalk? Answer: AWS CloudFormation is a service for infrastructure provisioning and management, while AWS Elastic Beanstalk is a service for deploying and managing applications.
  30. What do you mean by PEM? Answer: PEM (Privacy Enhanced Mail) is a file format commonly used for storing SSL/TLS certificates and private keys.

  

 


1.      Static Routing: Manually configured routing entries where network paths are predefined and don't change unless modified.

 

2.      Dynamic Routing: Automatically updates routing tables by exchanging routing information with neighboring routers.

 

3.      Default Routing: Sending network traffic to a default gateway when no specific route exists in the routing table.

 

4.      RIP (Routing Information Protocol): Distance-vector routing protocol that measures the number of hops to a destination to determine the best route.

 

5.      EIGRP (Enhanced Interior Gateway Routing Protocol): Advanced distance-vector protocol that considers factors like bandwidth and delay for routing decisions.

 

6.      OSPF (Open Shortest Path First): Link-state routing protocol that shares detailed network topology information to calculate the shortest path.

 

7.      BGP (Border Gateway Protocol): Path-vector protocol used between autonomous systems to determine the best path for data packets.

 

8.      IS-IS (Intermediate System to Intermediate System): Link-state protocol similar to OSPF, often used in larger networks and ISPs.

 

9.      What is the purpose of a default gateway? Answer: A default gateway is used to forward network traffic from a device to destinations outside of its local network.

 

10.   Explain the difference between a hub and a switch. Answer: A hub is a simple networking device that broadcasts incoming traffic to all connected devices, while a switch selectively forwards traffic to its intended destination based on MAC addresses.

 

11.   What is the purpose of VLANs (Virtual LANs)? Answer: VLANs are used to logically segment a network into separate broadcast domains to enhance security, performance, and manageability.

 

12.   What is the difference between a static IP address and a dynamic IP address? Answer: A static IP address is manually assigned to a device and remains fixed, while a dynamic IP address is assigned by a DHCP server and can change over time.

 

13.   How does ARP (Address Resolution Protocol) work? Answer: ARP is used to map an IP address to a MAC address on a local network, enabling devices to communicate with each other.

 

14.   What is the purpose of NAT (Network Address Translation)? Answer: NAT allows multiple devices on a private network to share a single public IP address when communicating with external networks.

 

15.   Explain the concept of subnetting. Answer: Subnetting involves dividing a network into smaller subnetworks to improve network efficiency and manageability.
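
A worked example (addresses are illustrative): splitting 192.168.1.0/24 into /26 subnets gives four subnets: 192.168.1.0, 192.168.1.64, 192.168.1.128, and 192.168.1.192. Each /26 holds 2^(32-26) = 64 addresses, of which 62 are usable hosts (the network and broadcast addresses are reserved); the subnet mask is 255.255.255.192.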

 

16.   What is the purpose of DNS (Domain Name System)? Answer: DNS translates domain names to IP addresses, allowing users to access websites using human-readable names instead of numerical IP addresses.

 

17.   What is the difference between TCP (Transmission Control Protocol) and UDP (User Datagram Protocol)? Answer: TCP provides reliable, connection-oriented communication with error checking and flow control, while UDP provides fast, connectionless communication without error checking or flow control.

 

18.   Explain the difference between symmetric encryption and asymmetric encryption. Answer: Symmetric encryption uses the same key for both encryption and decryption, while asymmetric encryption uses a public key for encryption and a private key for decryption.

 

19.   What is the purpose of a subnet mask? Answer: A subnet mask is used to determine the network and host portions of an IP address and facilitate routing within a network.

 

20.   What is the difference between TCP and UDP ports? Answer: TCP ports provide reliable, connection-oriented communication, while UDP ports offer connectionless, unreliable communication.

 

21.   Explain the concept of VLAN trunking. Answer: VLAN trunking allows multiple VLANs to be carried over a single physical link, enabling traffic from different VLANs to be efficiently transported between switches.

 

22.   What is STP (Spanning Tree Protocol) and why is it used? Answer: STP prevents loops in a network by dynamically selecting and blocking redundant paths, ensuring a loop-free topology.

 

23.   What is DHCP (Dynamic Host Configuration Protocol)? Answer: DHCP is a network protocol that automatically assigns IP addresses, subnet masks, and other network configuration parameters to devices on a network.

 

24.   What is the purpose of access control lists (ACLs) in network security? Answer: ACLs are used to filter network traffic based on specified criteria, such as source/destination IP addresses or port numbers, to control access and enhance network security.

 

25.   What is the purpose of a default route? Answer: A default route, also known as the gateway of last resort, is used when a device does not have a specific route to a destination network. It directs traffic to the next-hop router.

 

26.   Explain the concept of VLAN tagging. Answer: VLAN tagging is the process of adding a VLAN identifier (VLAN tag) to network frames, allowing switches to differentiate and handle traffic from multiple VLANs on a single physical link.

 

27.   What is the purpose of ICMP (Internet Control Message Protocol)? Answer: ICMP is used for diagnostic and error reporting purposes in IP networks, including ping, traceroute, and error messages such as "Destination Unreachable" or "Time Exceeded."

 

28.   What is the difference between a router and a switch? Answer: A router operates at the network layer (Layer 3) and forwards packets between networks, while a switch operates at the data link layer (Layer 2) and forwards frames within a network.

 

29.   What is the purpose of ARP (Address Resolution Protocol)? Answer: ARP is used to resolve an IP address to a MAC address in order to establish communication between devices on a local network.

 

30.   What is the difference between static routing and dynamic routing? Answer: Static routing requires manual configuration of network routes, while dynamic routing protocols automatically exchange routing information between routers to determine the best path for data transmission.

 

31.   What is NAT (Network Address Translation) and why is it used? Answer: NAT is used to translate private IP addresses to public IP addresses when communicating over the internet, allowing multiple devices to share a single public IP address.

 

32.   Explain the concept of OSPF (Open Shortest Path First) routing protocol. Answer: OSPF is a link-state routing protocol that calculates the shortest path to a destination using a cost metric derived from link bandwidth, ensuring efficient routing in large networks.

 

33.   What is VLAN (Virtual Local Area Network)? Answer: VLAN is a logical network created within a physical network, allowing devices to communicate as if they were connected to the same physical network segment, even if they are physically separate.

 

34.   What is the purpose of port forwarding? Answer: Port forwarding redirects incoming network traffic from one IP address and port combination to another, enabling access to services behind a router or firewall.

 

35.   Explain the concept of QoS (Quality of Service) in networking. Answer: QoS prioritizes network traffic based on predefined rules, ensuring that critical applications or services receive preferential treatment, such as higher bandwidth or lower latency.

 

36.   What is the purpose of SSH (Secure Shell) in network administration? Answer: SSH provides secure remote access to network devices and servers over an encrypted connection, replacing insecure protocols like Telnet.

 

37.   What is the difference between symmetric encryption and asymmetric encryption? Answer: Symmetric encryption uses the same key for both encryption and decryption, while asymmetric encryption uses a pair of keys (public and private) for encryption and decryption.

 

38.   Explain the concept of VLAN trunking protocol (VTP). Answer: VTP is a Cisco proprietary protocol that facilitates the automatic propagation of VLAN configuration information across switches in a network, simplifying VLAN management.

 

39.   Virtualization: Understand virtualization concepts, such as hypervisors (e.g., VMware, Hyper-V) and virtual machine management.

 

40.   Storage Technologies: Familiarize yourself with RAID levels, SAN (Storage Area Network), NAS (Network Attached Storage), and file system management.

 

41.   Backup and Recovery: Know backup strategies, disaster recovery planning, and tools like rsync, tar, and backup utilities provided by the operating system.

42.   Security: Demonstrate knowledge of security best practices, including user management, access control, encryption, firewalls, and vulnerability scanning.

43.   Troubleshooting: Develop problem-solving skills and understand methodologies like root cause analysis, log analysis, and using diagnostic tools.

44.   Scripting and Automation: Be comfortable with scripting languages like Bash, PowerShell, or Python, and understand automation tools like Ansible or Puppet.

45.   Monitoring and Performance Optimization: Understand monitoring tools (e.g., Nagios, Zabbix) and techniques for optimizing system performance.

46.   Documentation and Communication: Highlight your ability to create clear and comprehensive documentation, as well as effective communication skills for collaborating with team members and stakeholders.

47.   Networking Protocols: Have a good understanding of TCP/IP, DNS, DHCP, VLANs, routing, and subnetting.

48.   Operating System Administration: Be familiar with tasks like user management, file system management, package management, system updates, and kernel tuning.

49.   Service Management: Understand how to manage services and daemons, troubleshoot service-related issues, and configure services like Apache, Nginx, MySQL, or PostgreSQL.

50.   Security Hardening: Familiarize yourself with security hardening practices, such as disabling unnecessary services, applying patches, configuring firewalls, and implementing secure network protocols.

51.   Cloud Computing: Have basic knowledge of cloud computing concepts, such as virtual machines, containers, cloud providers (e.g., AWS, Azure, Google Cloud), and cloud management tools.

52.   Disaster Recovery and Business Continuity: Understand the importance of disaster recovery planning, backup strategies, data replication, and high availability solutions.

53.   Automation Tools: Be aware of automation tools like Ansible, Puppet, or Chef, and their use in automating system administration tasks.

54.   Log Management: Understand the importance of log management, log rotation, log analysis tools (e.g., ELK Stack), and troubleshooting issues using log files.

55.   Change Management: Be familiar with change management processes, version control systems (e.g., Git), and the importance of documentation and maintaining a change log.

56.   Soft Skills: Highlight your ability to work in a team, communicate effectively, prioritize tasks, and handle stressful situations with professionalism.

57.   Scripting and Automation: Familiarize yourself with scripting languages like Bash, Python, or PowerShell to automate repetitive tasks and perform system administration tasks more efficiently.

58.   Monitoring and Alerting: Understand the importance of monitoring system performance, network traffic, and application health. Be familiar with monitoring tools like Nagios, Zabbix, or Prometheus.

59.   Virtualization: Have a basic understanding of virtualization technologies like VMware or Hyper-V, including virtual machine management, resource allocation, and virtual networking.

60.   Troubleshooting Skills: Highlight your ability to diagnose and resolve system issues, troubleshoot network connectivity problems, analyze log files, and use debugging tools effectively.

61.   Backup and Recovery: Demonstrate knowledge of backup strategies, data recovery techniques, and disaster recovery planning. Familiarize yourself with backup tools like Bacula, Veeam, or Duplicati.

62.   Configuration Management: Be aware of configuration management tools like Puppet, Chef, or Ansible, and their role in managing and maintaining consistent system configurations.

63.   Security Best Practices: Understand common security vulnerabilities, encryption methods, access control mechanisms, and security compliance frameworks (e.g., PCI-DSS, HIPAA, GDPR).

64.   Documentation: Emphasize the importance of maintaining clear and comprehensive system documentation, including network diagrams, configuration files, and standard operating procedures (SOPs).

65.   Collaboration and Communication: Highlight your ability to collaborate with other teams or stakeholders, communicate technical information effectively, and provide timely updates on system status.

66.   Continuous Learning: Express your enthusiasm for continuous learning and staying updated with new technologies, industry best practices, and emerging trends in system administration.

  








DEV team

OPS team

-Dev>Build>Test>QA

-Deploy>Maintenance>Monitoring

 

DevOps - INTEGRATION

 

DevOps is a methodology or approach that combines software development and IT operations in order to improve the delivery and quality of software products. It involves collaboration and communication between teams to automate processes, streamline workflows, and ensure faster and more reliable software deployment. The ultimate goal of DevOps is to deliver high-quality software products to customers quickly and efficiently.

 

Git

-software tool

-installed on local system

-manages diff versions of source code

-cli

 

GitHub

-service/platform

-web hosted

-used to host a copy of the local repo code

-gui

-vcs & collab

 

GIT Benefits:

-work as a team.

-Availability

-git is the industry standard, improving team speed and productivity

-historical change tracking.

 

Git concepts:

-clone

-adding collabs

-pull from remote

-branch in git

-git release

 

Git Bash: app for the MS Windows environment which provides a layer for the git command-line experience

BASH: Bourne Again Shell

Shell: Terminal app/interface for command execution between user and OS via written commands

 

 

Remote Repo= GitHub

Local Repo= Our Repo

 

Staging Area- the middle ground between what you have done to your files (also known as the working directory) and what you had last committed (the HEAD commit).
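
A small sketch of moving a change through the staging area (the file name is illustrative):

echo hello > notes.txt
git add notes.txt           # working directory -> staging area
git diff --staged           # compare the staging area with the HEAD commit
git commit -m "add notes"   # staging area -> new HEAD commit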

 

 

 

Step 4 - Look for changes on the remote repo, aka GitHub:

* If there are any changes, pull the updates.

* If there are no changes, then

Step 5 - We'll push our updates to GitHub.

 

Git Branches: - Dividing a massive project (software/application) into multiple new branches to assign working on new modules(features) for the project

 

Git Rebasing: - is the process of moving or combining a sequence of commits(last modified project) to a new base commit.
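
A minimal sketch (branch names are illustrative):

git checkout feature
git rebase master    # replay feature's commits on top of master's latest commit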

 

Distributed Version Control System:

-Peer to Peer

-less time

-all have master privilege

-full control

 

Centralized Version Control System:

-Server-Client

-more time

-secure

-less control/privilege

-requests>ack>accepts>etc.

-if server down, work will be affected

 

 

$ git --version  (check version)

$ git config --help

$ mkdir new && cd new (folder for git repo)

$ git init          (creates /.git/ directory- git local repository)

$ touch demonew.txt

$ git add demonew.txt

$ git status

$ git commit -m "committing a text file"  ("anything")

$ git config --global user.name "yourusername"   (link GitHub profile; the key is user.name)

$ git config user.email "emailid"

create new repo on GitHub, test_demo (test=directory, demo=file)

copy its link and paste below:

$ git remote add origin https://github.com/yourusername/new_demonew.git  (local repo and GitHub repo are linked)

$ git push origin master  (pushes updates to GitHub)

(we are on the master branch = default branch; as you start making commits, master points to the last commit you made)

 

 

git commit -a

Stages files automatically

git log -p

Produces patch text

git show

Shows various objects

git diff

Is similar to the Linux `diff` command, and can show the differences in various commits

git diff --staged

An alias to --cached, this will show all staged files compared to the named commit

git add -p

Allows a user to interactively review patches to add to the current commit

git mv

Similar to the Linux `mv` command, this moves a file

git rm

Similar to the Linux `rm` command, this deletes, or removes a file

git branch

Used to manage branches

git branch <name>

Creates the branch

git branch -d <name>

Deletes the branch

git branch -D <name>

Forcibly deletes the branch

git checkout <branch>

Switches to a branch.

git checkout -b <branch>

Creates a new branch and switches to it.

git merge <branch>

Merge joins branches together.

git merge --abort

If there are merge conflicts (meaning files are incompatible), --abort can be used to abort the merge action.

git log --graph --oneline

This shows a summarized view of the commit history for a repo.

git clone URL

Git clone is used to clone a remote repository into a local workspace

git push

Git push is used to push commits from your local repo to a remote repo

git pull

Git pull is used to fetch the newest updates from a remote repository

git remote 

Lists remote repos

git remote -v

List remote repos verbosely

git remote show <name>

Describes a single remote repo

git remote update

Fetches the most up-to-date objects

git fetch

Downloads specific objects

git branch -r

Lists remote branches; can be combined with other branch arguments to manage remote branches

 

MAVEN (apache)

 

-Mainly used in Java-based projects

-To build, develop, manage and deploy any Java-based project

-helps in getting the correct jar file when there are different versions of packages

-mvnrepository.com - to download dependencies

-it is easily done by visiting [mvnrepository.com]

-it is an open-source build tool

-Maven itself is written in Java; it can also build projects written in C#, Ruby, Scala and other languages

 

Maven Processes:

 

1-Build (Script)

2-Dependencies

3-Report

4-Distribution

5-Mailing list

 

-Create an account on oracle

-Download Java JDK- https://www.oracle.com/in/java/technologies/javase/javase8-archive-downloads.html

-setup> next> next> etc (all defaults)

-Download Maven Binary Zip- https://dlcdn.apache.org/maven/maven-3/3.9.1/binaries/apache-maven-3.9.1-bin.zip

-Extract it to C Drive, rename to 'maven'

-This PC> Properties> Advanced System Settings> Environment Variables> System Variable> New> Name= MVN_HOME, Value= C:\maven> OK

-select "Path" from the list and Edit> New> C:\maven\bin\> OK

-[user environment variables = set for each user & system environment variables = set for everyone]

-Open Command Prompt (Win+R, Type CMD, Hit Enter)> C:\Users\aman>java -version, C:\Users\aman>mvn -version

 

C:\Users\aman>java -version

java version "1.8.0_202"

Java (TM) SE Runtime Environment (build 1.8.0_202-b08)

Java Hotspot (TM) 64-Bit Server VM (build 25.202-b08, mixed mode)

 

C:\Users\aman>mvn -version

Apache Maven 3.9.1 (2e178502fcdbffc201671fb2537d0cb4b4cc58f8)

Maven home: C:\maven

Java version: 1.8.0_202, vendor: Oracle Corporation, runtime: C:\Program Files\Java\jre1.8.0_202

Default locale: en_IN, platform encoding: Cp1252

OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"

 

Installation on Ubuntu:

-#sudo apt-get update

-#sudo apt install default-jdk (install open jdk)

-#java -version

-#sudo apt-get -y install maven

-default Maven installation directories:

/usr/share/maven (user-usable programs and data)

/etc/maven (configuration)

-#mvn -version

 

-Environment variables are dynamic named values that can affect the way a running process behaves on a computer.

they are the part of the environment in which a process runs.

-A variable is any entity that can take on different values.

-Anything that can vary can be considered a variable. For instance, age can be considered a variable because age can take different values for different people or for the same person at different times.

 

Maven architecture:

-pom.xml is an .xml file that contains the info about the project and config details used by Maven to build the project

-it contains default values for most projects

 

-reads pom.xml file

-download dependencies defined in pom.xml into local repo from a central repo

-create and generate a report according to the requirements

-lifecycle- collection of steps

 

3 Built-In Life Cycles

1)    Default- handles the process of project deployment

2)    Clean- it handles project cleaning (maintenance, testing) 

3)    Site- it handles the creation of the project's site documentation

 

 

Maven Build Lifecycle: (phases or stages):

-compile

-test compile

-test

-package

-integration test

-verify

-install

-deploy
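
Phases run cumulatively: invoking one phase runs every phase before it. A sketch (run inside a project directory containing pom.xml):

mvn compile    # compile the main sources
mvn test       # ...plus run the unit tests
mvn package    # ...plus build the jar/war into target/
mvn install    # ...plus copy the artifact into the local repo (~/.m2)
mvn deploy     # ...plus upload it to a remote repo (needs distributionManagement configured)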

 

 

Maven Advantages:

-It provides easy access to all the required info

-it makes it easy for developers to build a project in different environments w/o worrying about the dependencies, processes, etc.

-helps to manage all the processes: building, documentation, releasing and distribution in project mgmt.

-this tool simplifies the process of project building

-the task of downloading jar files and other dependencies is done automatically

 

Maven Project Demo On Ubuntu:

# mvn --version

# mkdir maven && cd maven (or temp)

# mvn archetype:generate (generating mvn architecture, some binary files will be downloaded, trying to generate new project)

 

Choose a number: 8:

Define value for property 'groupId': com.myproject

Define value for property 'artifactId': sample_project

Define value for property 'version' 1.0-SNAPSHOT: :

Define value for property 'package' com.myproject: :

Confirm properties configuration:

groupId: com.myproject

artifactId: sample_project

version: 1.0-SNAPSHOT

package: com.myproject

 

archetype number = 8; groupId & artifactId = according to the project; all other values left at defaults

[BUILD SUCCESS] means successful

 

# ls -alrt

# cd sample_project

# cat pom.xml (we can see attributes and values. Eg: gid)

# mvn clean install (it will consider as mvn project)

 

In the target directory we can see the jar files: a sample_project-1.0-SNAPSHOT.jar file has been created.

This is the way we can start a new project.

As we keep modifying the dependencies, we'll get the corresponding results.

This is the way project preparation goes, with the help of the mvn executable.

 

JAR: Java Archive

-file format based on the popular ZIP format, used for aggregating many files into one.

 

Build Tool:

-essential for the process of building

-it's used for:

-generating source code

-generating documentation from the source code

-compiling source code

-packaging compiled code into .jar files

-installing the packaged code into the local repo, a server or the central repo

 

Maven Repo:

-refers to directories of packaged JAR files that contain metadata

-the metadata is the POM file, which holds all the info about the project/artifact

 

1-Local repo: (repo on each dev's machine)

-i.e. on the developer's own machine, where all the project material is cached (default: ~/.m2/repository)

-contains all dependency JAR files

 

2-Remote repo: (repo on a server)

-a repo present on a server, used when Maven needs to download dependencies

 

3-Central repo: (mvn community, used when dep. not found in local)

-the Maven community repo that comes into action when dependencies cannot be found in the local repo

 

Super POM:

-the default POM of Maven, which every pom.xml implicitly inherits from

-lets the dev configure the pom.xml file with the least configuration

 

Gradle:

-build automation tool for creating apps

-the building process includes compiling, linking & packaging of code

-known for its flexibility in building software

-used with many programming langs & platforms: Java, Scala, Android, C/C++, Groovy

-provides building, testing & deploying of software on several platforms

-can build any type of software, including large projects

 

Benefits:

-resolves issues faced on other build tools

-it focuses on maintainability, performance & flexibility

-lots of plugins provided

 

 

Gradle vs Maven:

- Gradle: build scripts use a Groovy-based DSL | Maven: software project mgmt. system used for Java projects

- Gradle: a goal is to add functionality to a project | Maven: a goal is tied to a project phase

- Gradle: doesn't use an XML file for project config (build.gradle) | Maven: an XML file (pom.xml) declares the project & its dependencies

- Gradle: based on a graph of task dependencies | Maven: based on phases in a linear & fixed model

 

 

Dependencies describe relationships among activities & specify the particular order in which they need to be performed

A dependency represents work where one task (or team) depends on another

Types: goal dependencies, task dependencies, resource dependencies

 

Gradle Installation:

 

-install Java (OpenJDK or Oracle Java)

-download Gradle

https://gradle.org/next-steps/?version=6.7&format=bin

the executables are inside the binary distribution

-we have to extract it and set env vars; only then will the executables run

-set env vars and set the path

new system variable GRADLE_HOME, and add the Gradle \bin folder to Path

-install

-verify the Gradle installation

gradle -version
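The Windows env var steps can also be scripted; a sketch, assuming Gradle was extracted to C:\gradle\gradle-6.7 (path is illustrative):

:: persist GRADLE_HOME and extend PATH, then reopen the terminal
setx GRADLE_HOME "C:\gradle\gradle-6.7"
setx PATH "%PATH%;C:\gradle\gradle-6.7\bin"
gradle -version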

 

Build time:   2020-10-14 16:13:12 UTC

Revision:     312ba9e0f4f8a02d01854d1ed743b79ed996dfd3

 

Kotlin:       1.3.72

Groovy:       2.5.12

Ant:          Apache Ant(TM) version 1.10.8 compiled on May 10 2020

JVM:          1.8.0_202 (Oracle Corporation 25.202-b08)

OS:           Windows 10 10.0 amd64

 

Gradle Concepts:

 

1-    Project:

A thing to be done, like deploying an app to a staging env

A Gradle project requires a set of tasks to be executed

2-    Task:

A task is a single piece of work a build performs

Eg: creating JAR files, generating Javadoc, compiling different classes

3-    Build Script: handles-

-project files   -tasks

-it represents one or more projects

 

 

Features:

-multi-project build support

-familiar to Java developers

 

Build Java Project with Gradle:

-mkdir gradle_project


 

 

 

wget https://services.gradle.org/distributions/gradle-${VERSION}-bin.zip -P /tmp   (note: -P, not -p, sets the download directory)
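To finish a manual install on Linux after that download, a sketch (/opt/gradle is a conventional location, not a requirement):

VERSION=6.7
sudo mkdir -p /opt/gradle
sudo unzip -d /opt/gradle /tmp/gradle-${VERSION}-bin.zip
export PATH=$PATH:/opt/gradle/gradle-${VERSION}/bin
gradle -version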

 

 

Java project with Gradle: (GitBash-Windows)

-# mkdir gradle-project && cd gradle-project  (for the project files; && chains the commands)

-# vi build.gradle  (we'll put in 2 plugins; a fuller build.gradle sketch follows the task list below)

apply plugin: 'java'

apply plugin: 'application'

-# cat build.gradle (verify)

-# gradle tasks  (shows the Gradle tasks available for processing the build script; helps in understanding the different tasks that can be configured & worked on)

-# gradle clean (performs the clean activity; we get the status here, only 1 task given)

-# gradle clean build   (more tasks & more info)

-# gradle clean build --info  (lists all the steps related to the tasks & more info)

TASKS: clean

compileJava

processResources

classes

jar

distZip

assemble

test

check

build

BUILD SUCCESSFUL
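A slightly fuller build.gradle sketch for the project above (Gradle 6.x Groovy DSL; the main class name is hypothetical):

apply plugin: 'java'
apply plugin: 'application'

repositories {
    mavenCentral()   // where dependencies are resolved from
}

dependencies {
    testImplementation 'junit:junit:4.13.2'
}

// entry point used by the application plugin's run task
mainClassName = 'com.example.Main'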

 

 

Gradle using Eclipse:

-Download & install Eclipse

-Install the Gradle plugin:

-Help > Marketplace > search "gradle" (shows plugins related to Gradle)

Buildship Gradle Integration

Window > Preferences > Gradle > local installation directory > Browse > the Gradle 6.7 folder

[tick] build scan (additional option; all projects will be scanned)

Wrapper: downloads and sets up Gradle automatically

 

Project Creation:

File > New > Project > Gradle > Next x2 > name > Finish

Package Explorer > the new project appears

 

Selenium:

-created by Jason Huggins, who developed a JavaScript program to automate testing of a web app

-the program was called JavaScript Test Runner

-used for testing apps before deployment; consists of a set of software tools that enable testing

-open-source automated testing tool for web apps, across various browsers

-tests can be coded in many programming langs

-plays an imp. role in DevOps: helps to ensure the quality, stability and performance of apps throughout the development and deployment process.

 

Selenium Suite of tools:

1-    Selenium IDE (Integrated Dev Env):

used by test-case developers to develop Selenium test cases.

2-    Selenium WebDriver:

used to automate web-app testing to verify that the app works as expected.

supports many browsers.

3-    Selenium RC  (Remote Control):

a server written in Java that accepts commands for the browser via HTTP

4-    Selenium Grid:

a smart proxy server that makes it easy to run tests in parallel on multiple machines

 

Benefits:

1-    Speed of execution

2-    Accurate results

3-    Lesser investment in Human resources

4-    Time & Cost effective

5-    Supports re-testing

6-    Early time to market (launch)

 

Manual Testing:

-involves physical execution of test cases against various apps to detect bugs & errors (test-case eg: Name fields allow only alphabets, Phone only numerals, Mail alphanumeric characters)

-one of the most primitive methods of testing software

-doesn't require knowledge of a testing tool

-execution of test cases w/o using automation tools

 

Limitations of Manual testing:

-it requires a tester all the time

-time consuming

-high risk of error

 

On Windows:

-we need Java, Eclipse(already done), Selenium

-set the Path var and the Java home dir. Value: C:\Program Files\Java\jdk1.8.0_202\

-check that the Java install location matches that value

 

Selenium Versions:

1.     V1: IDE+RC+Grid

2.     V2: IDE + WebDriver + RC + Grid

3.     V3: IDE + WebDriver + Grid

4.     V4: Released

 

WebDriver:

-Reusability

-Debugging the script

-Improved locator functionality

 

WebDriver Limitations:

-        Cannot test mobile apps; requires an additional framework for this.

-        Only performs sequential testing, hence requires Grid for parallel testing

-        Limited image testing.
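A minimal WebDriver sketch in Java (assumes the selenium-java dependency is on the classpath and chromedriver is on PATH):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class FirstTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();   // talks to the browser directly
        try {
            driver.get("https://example.com");   // open the page under test
            System.out.println("Title: " + driver.getTitle());
        } finally {
            driver.quit();                       // always release the browser
        }
    }
}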

 

 

 

RC vs WebDriver:

- RC: complex architecture | WebDriver: simple architecture

- RC: slower execution | WebDriver: faster execution

- RC: requires the RC server to interact w/ the browser | WebDriver: interacts directly with the browser

 

Use Cases:

 

 

 

Docker:

-OS-level virtualization software platform that enables developers and IT admins to create, deploy and run apps in Docker containers with all their dependencies.

 

-a Docker container is a lightweight software pkg that includes all the dependencies required to execute an app.

 

 

Docker VS Virtual Machine

 

 

| APP 1    | APP 2    | APP 3    |
| Bins/Lib | Bins/Lib | Bins/Lib |   <- Docker Cont 1 / 2 / 3
|         Docker Engine          |
|            Host OS             |
|         Infrastructure         |

 

----------------------------------------------------------------------------------------------------------------

 

| APP 1    | APP 2    | APP 3    |
| Bins/Lib | Bins/Lib | Bins/Lib |
| Guest OS | Guest OS | Guest OS |   <- VM 1 / 2 / 3
|          Hypervisor            |
|            Host OS             |
|         Infrastructure         |

 

Hypervisor: a program used to run and manage one or more VMs on a PC

 

Docker on Windows

-Download & Install

-CMD: docker --version

-docker (enter) shows all the available commands and options

 

Docker on Ubuntu

- sudo apt-get remove docker docker-engine docker.io (remove any existing install)

-sudo snap install docker (modern way)

-sudo apt install docker.io

-sudo docker images (if this responds, Docker is working)

-sudo docker ps -a (lists containers: ID, etc.)
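A quick sanity check after install; this pulls a tiny test image from Docker Hub and runs it:

# prints a confirmation message if the daemon, pull and run all work
sudo docker run hello-world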

Docker Advantages:

-Rapid deployment

-portability

-better efficiency

-faster config

-security

-scalability

 

Process --> Tools:

Planning   --> Git

Building   --> Gradle

Testing    --> Selenium

Deployment --> Docker, Container

Monitoring --> Nagios

 

 

Docker Engine:-

Client-server application that builds and executes containers using Docker components.

 

REST API:

Representational State Transfer Application Programming Interface. The primary mode of communication between the Docker client and the Docker server.

 

Docker ToolBox:

Legacy option used on older Windows & macOS machines; provides a subset of Docker's features.

 

 

Docker Client (Client CLI)
          |
       REST API
          |
Docker Server (Docker daemon)

(Docker Toolbox bundles the client and server on legacy systems.)

 

 

Docker Components:

1.     Docker client & server

2.     Docker images

3.     Docker registry

4.     Docker container

 

1. Docker Client:

-contains the CLI, used to send cmds to the Docker daemon through scripts or direct CLI cmds.

-it uses the REST API to issue commands to the Docker daemon

Eg: when we use a docker cmd, the client sends the cmd to the daemon, which performs the operation by interacting with the other components.

 

Docker daemon:

-it's a server which interacts with the OS & performs all kinds of services

-it listens for REST API requests & performs the operations

-the (#dockerd) cmd is used to start the Docker daemon

 

Docker Image:

-a template of instructions which is used to create containers

-by default, a Docker image starts with a base layer

-it has multiple layers

-each layer depends on the layer below it

-image layers are created by executing each instruction in the Dockerfile & are read-only

 

 

DOCKERFILE --> IMAGE LAYERS:

IMAGE LAYER 1 – (CMD)              <- top layer, last instruction

IMAGE LAYER 2 – (RUN)

IMAGE LAYER 3 – (COPY/ADD)

BASE IMAGE LAYER (Ubuntu 20.04) – (FROM)

(for a better picture, see image-layer diagrams online too)

Eg: a Docker image of 4 layers:

-> FROM ubuntu:18.04           (creates a layer from the ubuntu:18.04 image)

-> COPY ./file                 (adds files from the build context; the real instruction is COPY/ADD, not PULL)

-> RUN make /file              (builds your application inside the image)

-> CMD python /file/file.py    (specifies the command to run when the container starts)

Whenever a user creates a container, a new writable layer is formed on top of the image layers, called the container layer
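To build and run an image from such a Dockerfile, a sketch (the tag name myapp:1.0 is arbitrary):

# build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# start a container from that image
docker run myapp:1.0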

 

Docker Registry (Hub)

-In a registry, push and pull (retrieve) cmds are used to interact with Docker imgs

 

Pull

-It pulls (retrieves) a Docker img from the Docker registry

Push

-It pushes (stores) Docker img in Docker registry
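A sketch of both registry operations (the Docker Hub username 'myuser' is hypothetical; push requires docker login first):

docker pull ubuntu:18.04                      # retrieve from the registry
docker tag ubuntu:18.04 myuser/myubuntu:v1    # re-tag under your namespace
docker push myuser/myubuntu:v1                # store in the registry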

 

Every cont. has a separate R/W container layer, and any modification in a cont. is reflected in that container layer alone.

When a cont. is deleted, this top layer also gets deleted.

 

(Q) What should be done when there's a change in an img layer?

-users can add a new layer on top of the base img

-but users can't modify any existing img layer

-image layers stay read-only

-Docker uses a Copy-on-Write (CoW) strategy with both Docker imgs & Docker conts.

-CoW is used to share and copy files for better efficiency.

-the CoW strategy makes Docker efficient by reducing disk-space usage & increasing cont. performance

 

Docker Registry

-svc to host & distribute Docker imgs among users

-a repo is a collection of Docker imgs

-in a registry, a user can distinguish b/w Docker imgs by their tag names

-Docker has its own cloud-based registry, Docker Hub, where users store & distribute cont. imgs

-a Docker registry has public & private repos

Docker Container (App + Deps = DC)

-an executable pkg of an application and its dependencies together

-it's lightweight and can be easily deployed on other OSes

-a DC runs apps and shares the OS kernel with other conts.

-data volumes can be shared and reused among multiple conts.

-it's built from a Docker img; the docker run cmd starts a cont. from an img

-if you don't have the Docker img locally, Docker pulls it from your registry and then creates the new cont.
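For example, running a web-server container (the nginx image is pulled automatically if not present locally):

# -d detached, --name container name, -p map host port 8080 to container port 80
docker run -d --name web -p 8080:80 nginx

docker ps        # list running containers
docker stop web  # stop the container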

 

Docker Compose

-used for running multiple conts as a single svc

-each cont runs in isolation, but they can interact with each other

-all Docker Compose files are written in YAML

Eg: if you have an app which requires an Apache server and MySQL, you can create one Docker Compose file which runs both conts as a svc w/o the need to start each individually

 

 

 

[Diagram: one Docker Compose file defining two containers, Apache and MySQL, run together]
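A minimal docker-compose.yml sketch for that Apache + MySQL example (image tags and the password value are illustrative):

version: "3"
services:
  web:
    image: httpd:2.4          # Apache
    ports:
      - "8080:80"
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example

# start both containers as one service:
# docker-compose up -d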

 

Docker Swarm

-svc for conts. which allows IT admins & devs to create & manage a cluster of Swarm nodes within the Docker platform

-each node of a Swarm is a Docker daemon & all the Docker daemons interact using the Docker API

-a Swarm consists of:

Manager node: controls the cluster & maintains cluster mgmt tasks

Worker node: receives and executes tasks from the manager node
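Forming a swarm, sketched (the IP and token placeholders are printed by the init command itself):

# on the manager node
docker swarm init --advertise-addr <MANAGER-IP>

# on each worker node, using the join command that init printed
docker swarm join --token <TOKEN> <MANAGER-IP>:2377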

Docker cmds

#yum install docker

#systemctl start docker

#docker rmi imageID (removes a Docker img)

#docker pull image_name (downloads an img from the registry)

#docker run imageID (runs a Docker img, i.e. starts a container from it)